📚Papers

ICML 2024

International Conference on Machine Learning

Conference website
335 / 3321 relevant papers
9
Intensive read · ICML 2024

Getting the most out of your tokenizer for pre-training and domain adaptation

The choice of tokenizer has a large impact on pre-training efficiency and downstream performance, but in domain adaptation settings the original tokenizer often compresses new-domain text poorly (token counts balloon), hurting both training and inference efficiency. This work systematically studies how to choose or adapt a tokenizer for pre-training and domain adaptation.

Gautier Dagan,Gabriel Synnaeve,Baptiste Rozière
Meta FAIR · tokenizer · pretrain · domain-adaptation · PMLR · DBLP
9
Intensive read · ICML 2024
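A tokenizer's fit to a domain can be measured as characters per token: a domain-adapted vocabulary emits fewer tokens for the same text, which directly cuts training and inference cost. A toy illustration (not the paper's method; the greedy tokenizer and the vocabularies below are made up for the example):

```python
# Toy sketch: compare "compression" of a general vs. a code-adapted vocabulary.
# Greedy longest-match tokenization; unknown characters fall back to single chars.

def tokenize(text, vocab):
    tokens, i = [], 0
    while i < len(text):
        match = next((w for w in sorted(vocab, key=len, reverse=True)
                      if text.startswith(w, i)), text[i])
        tokens.append(match)
        i += len(match)
    return tokens

def chars_per_token(text, vocab):
    # Higher = better compression = cheaper training/inference on this text.
    return len(text) / len(tokenize(text, vocab))

general_vocab = {"the ", "and ", "ing ", "a ", "of "}
code_vocab = general_vocab | {"def ", "return ", "self.", "import "}

code_text = "def f(x): return self.g(x)"
# The code-adapted vocabulary packs more characters into each token.
assert chars_per_token(code_text, code_vocab) > chars_per_token(code_text, general_vocab)
```

The same ratio, computed on held-out domain text, is a cheap proxy for how much a candidate tokenizer will inflate sequence lengths.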

Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

Transformers and SSMs (state-space models such as Mamba) look like two different sequence-modeling paradigms, but they are deeply connected mathematically. This work builds a unified theoretical framework for the two (Structured State Space Duality) and uses it to design Mamba-2, a more efficient SSM architecture.

Tri Dao,Albert Gu
Princeton University · Carnegie Mellon University · ssm · transformer · mamba · PMLR · DBLP
9
Intensive read · ICML 2024

Better & Faster Large Language Models via Multi-token Prediction

The paper's core claim: next-token prediction is not the only sensible training objective for large language models; predicting several future tokens at once yields both better model quality and faster inference. Standard AR training concentrates the supervision signal on the single next token, which is simple but information-sparse, and it inherently ties decoding speed to token-by-token generation. The authors aim to improve training-signal utilization and inference throughput without abandoning the AR backbone.

Fabian Gloeckle,Badr Youbi Idrissi,Baptiste Rozière,David Lopez-Paz,Gabriel Synnaeve
multi-token-prediction · llm-pretrain · training-objective · PMLR · DBLP
9
Intensive read · ICML 2024
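The objective above can be sketched as a sum of cross-entropy terms over the next k tokens instead of one. A minimal toy sketch (assumed setup, not the paper's architecture: k independent heads over a shared trunk, here represented just by their output distributions):

```python
# Toy multi-token prediction loss: average cross-entropy of k "heads",
# where head i predicts the (i+1)-th future token after position t.
import math

def multi_token_loss(head_probs, sequence, t, k):
    # head_probs[i][token] = probability head i assigns to `token`.
    loss = 0.0
    for i in range(k):
        target = sequence[t + 1 + i]
        loss += -math.log(head_probs[i][target])
    return loss / k

seq = ["a", "b", "c", "d"]
heads = [{"b": 0.9, "c": 0.1}, {"c": 0.8, "b": 0.2}, {"d": 0.7, "c": 0.3}]
loss = multi_token_loss(heads, seq, t=0, k=3)
assert loss > 0.0  # perfectly confident heads would drive this to zero
```

At inference, the extra heads can also serve as cheap draft proposals for self-speculative decoding, which is where the throughput gain comes from.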

Mechanistic Design and Scaling of Hybrid Architectures

This paper studies the design and scaling of hybrid architectures: how to combine different sequence-modeling modules and understand, mechanistically, when the result beats a pure Transformer. Hybrid architectures have mostly been assembled empirically from attention, SSM, convolution, or other state-space modules; they work, but lack clear design principles. The authors aim to make "why mix this way, and how it scales" systematic.

Michael Poli,Armin W. Thomas,Eric Nguyen,Pragaash Ponnusamy,Björn Deiseroth,Kristian Kersting ... 2 authors omitted ... ,Stefano Ermon,Christopher Ré,Ce Zhang,Stefano Massaroli
hybrid-architecture · ssm · attention · PMLR · DBLP
9
Intensive read · Spotlight · ICML 2024

QuRating: Selecting High-Quality Data for Training Language Models

Existing pre-training data selection relies on simple heuristics that cannot quantify the multiple dimensions of data quality humans perceive, leading to models with volatile performance and no unified, reusable quality-assessment framework.

Alexander Wettig,Aatmik Gupta,Saumya Malik,Danqi Chen
Princeton University · data-quality · data-selection · pretraining · PMLR · arXiv · DBLP
9
Intensive read · ICML 2024

UniAudio: Towards Universal Audio Generation with Large Language Models

This work tackles the long-standing fragmentation of audio generation into isolated sub-problems. Music, speech, environmental sound, and speech editing have typically been modeled separately, each with its own tokenizer and training objective, making capabilities hard to reuse and data hard to unify; the authors therefore attempt unified audio generation within a large-language-model framework.

Dongchao Yang,Jinchuan Tian,Xu Tan,Rongjie Huang,Songxiang Liu,Haohan Guo ... 3 authors omitted ... ,Jiang Bian,Zhou Zhao,Xixin Wu,Helen M. Meng
audio-lm · multimodal · tokenizer · PMLR · DBLP
8
Intensive read · ICML 2024

Libra: Building Decoupled Vision System on Large Language Models

In existing multimodal LLMs (MLLMs), visual modeling and cross-modal interaction are coupled in the same attention layers, leading to under-extracted intra-modal features, inefficient cross-modal alignment, and severe damage to the LLM's native language ability when vision is injected.

Yifan Xu,Xiaoshan Yang,Yaguang Song,Changsheng Xu
Institute of Automation, Chinese Academy of Sciences · multimodal-pretraining · autoregressive · vision-tokenizer · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation

Existing positional encodings (absolute, relative, RoPE, etc.) cannot simultaneously support short-context semantic modeling and long-context length extrapolation; one usually has to pick one or pay extra fine-tuning cost, with no unified solution.

Zhenyu He,Guhao Feng,Shengjie Luo,Kai Yang,Liwei Wang,Jingjing Xu,Zhi Zhang,Hongxia Yang,Di He
Peking University · positional-encoding · length-extrapolation · long-context · PMLR · arXiv · DBLP
8
Intensive read · ICML 2024

In-Context Language Learning: Architectures and Algorithms

Abstract missing; unclear whether the work's core ICL (in-context learning) question is "why it can learn" or "how to design architectures/algorithms that make ICL stronger, more stable, or more controllable".

Ekin Akyürek,Bailin Wang,Yoon Kim,Jacob Andreas
in-context-learning · architecture · icl · PMLR · DBLP
8
Intensive read · ICML 2024

A Dynamical Model of Neural Scaling Laws

Blake Bordelon,Alexander B. Atanasov,Cengiz Pehlevan
scaling-law · training-dynamics · neural-scaling · PMLR · DBLP
8
Intensive read · Oral · ICML 2024

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

Collin Burns,Pavel Izmailov,Jan Hendrik Kirchner,Bowen Baker,Leo Gao,Leopold Aschenbrenner ... 2 authors omitted ... ,Manas Joglekar,Jan Leike,Ilya Sutskever,Jeffrey Wu
weak-to-strong · alignment · scalable-oversight · PMLR · DBLP
8
Intensive read · ICML 2024

Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads

Tianle Cai,Yuhong Li,Zhengyang Geng,Hongwu Peng,Jason D. Lee,Deming Chen,Tri Dao
speculative-decoding · inference-acceleration · multi-head · PMLR · DBLP
8
Intensive read · ICML 2024

Data Engineering for Scaling Language Models to 128K Context

Yao Fu,Rameswar Panda,Xinyao Niu,Xiang Yue,Hannaneh Hajishirzi,Yoon Kim,Hao Peng
long-context · data-engineering · data-quality · PMLR · DBLP
8
Intensive read · ICML 2024

Accelerating Transformer Pre-training with 2:4 Sparsity

This paper addresses how to move the FFN computation of Transformer pre-training onto GPU-native 2:4 structured sparsity without a significant accuracy drop, achieving speedups close to the hardware limit. Sparse training has typically suffered from instability and frequently flipping sparsity patterns that hurt convergence, forcing a fallback to dense training.

Yuezhou Hu,Kang Zhao,Weiyu Huang,Jianfei Chen,Jun Zhu
sparsity · transformer-pretraining · training-efficiency · PMLR · arXiv · DBLP
8
Intensive read · ICML 2024
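The 2:4 pattern means every group of 4 consecutive weights keeps exactly 2 nonzeros, which Ampere-class GPUs can exploit in hardware. A toy sketch of magnitude-based mask selection (a common recipe; the paper's actual training scheme involves more than this):

```python
# Toy 2:4 structured sparsity mask: in each group of 4 weights,
# keep the 2 with the largest magnitude, zero out the rest.

def two_four_mask(weights):
    mask = []
    for g in range(0, len(weights), 4):
        group = weights[g:g + 4]
        keep = sorted(range(len(group)), key=lambda i: abs(group[i]), reverse=True)[:2]
        mask.extend(1 if i in keep else 0 for i in range(len(group)))
    return mask

w = [0.1, -0.8, 0.05, 0.6, -0.3, 0.2, 0.9, -0.01]
m = two_four_mask(w)
assert m == [0, 1, 0, 1, 1, 0, 1, 0]          # 2 survivors per group of 4
assert all(sum(m[i:i + 4]) == 2 for i in range(0, len(m), 4))
```

The fixed 2-of-4 budget is what makes the pattern hardware-friendly: the sparse GEMM can assume a constant per-group density instead of an irregular layout.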

Scaling Beyond the GPU Memory Limit for Large Mixture-of-Experts Model Training

This work targets the fact that MoE training is often bottlenecked not by compute but by per-GPU memory, especially when expert counts, routing caches, and optimizer states grow together and plain data or tensor parallelism breaks down. Common workarounds shrink the batch, cut experts, or demand bigger GPUs; these dodge the problem rather than breaking through the memory wall.

Yechan Kim,Hwijoon Lim,Dongsu Han
moe · training-system · memory · PMLR · DBLP
8
Intensive read · ICML 2024

Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities

This work addresses the fact that existing audio language models are good at single-turn audio understanding or specific tasks, but struggle to combine few-shot learning with multi-turn dialogue. Audio systems have typically modeled ASR, audio event classification, and spoken QA separately, or simply bolted audio onto a text LLM, leaving genuine audio-conditioned dialogue and few-shot generalization weak.

Zhifeng Kong,Arushi Goel,Rohan Badlani,Wei Ping,Rafael Valle,Bryan Catanzaro
audio-lm · multimodal · few-shot · PMLR · DBLP
8
Intensive read · ICML 2024

CLLMs: Consistency Large Language Models

This work asks whether the fast-generation idea behind consistency models can be brought into language models to bypass the linear latency of standard autoregressive token-by-token decoding. Non-AR language modeling has long been limited by two issues: training objectives that fit discrete text poorly, and parallel generation whose quality clearly trails AR LMs, so it has never truly challenged the next-token paradigm.

Siqi Kou,Lanxiang Hu,Zhezhi He,Zhijie Deng,Hao Zhang
consistency-model · parallel-decoding · inference-acceleration · PMLR · DBLP
7
Skim · ICML 2024

ReMax: A Simple, Effective, and Efficient Reinforcement Learning Method for Aligning Large Language Models

The mainstream RLHF algorithm PPO is a general-purpose RL method: overly complex, requiring an extra value network, hard to tune, and computationally expensive. It is not adapted to the characteristics of the LLM alignment setting, raising the barrier to deployment.

Ziniu Li,Tian Xu,Yushun Zhang,Zhihang Lin,Yang Yu,Ruoyu Sun,Zhi-Quan Luo
Nanjing University · rlhf · alignment · policy-optimization · PMLR · arXiv · DBLP
7
Skim · ICML 2024

MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases

Sub-billion-parameter LLMs perform poorly, and prior work attributed small-model gains mainly to more data, overlooking the role of architecture design; existing models fail to meet the performance and latency requirements of on-device deployment.

Zechun Liu,Changsheng Zhao,Forrest N. Iandola,Chen Lai,Yuandong Tian,Igor Fedorov ... 2 authors omitted ... ,Yangyang Shi,Raghuraman Krishnamoorthi,Liangzhen Lai,Vikas Chandra
Meta AI Research · small-lm · architecture · mobile · PMLR · arXiv · DBLP
8
Intensive read · Spotlight · ICML 2024

Nash Learning from Human Feedback

Rémi Munos,Michal Valko,Daniele Calandriello,Mohammad Gheshlaghi Azar,Mark Rowland,Daniel Guo ... 8 authors omitted ... ,Olivier Bachem,Daniel J. Mankowitz,Doina Precup,Bilal Piot
rlhf · nash-equilibrium · alignment · PMLR · DBLP
8
Intensive read · ICML 2024

Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention

Zhen Qin,Weigao Sun,Dong Li,Xuyang Shen,Weixuan Sun,Yiran Zhong
linear-attention · long-context · efficient-attention · PMLR · DBLP
7
Intensive read · Spotlight · ICML 2024

What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation

Existing mechanistic interpretability research has identified induction heads (IH) as the core component of in-context learning, but the diversity of induction heads, their emergence dynamics, and their dependencies remain unclear, leaving the loss phase transitions observed during Transformer training unexplained.

Aaditya K. Singh,Ted Moskovitz,Felix Hill,Stephanie C. Y. Chan,Andrew M. Saxe
DeepMind · in-context-learning · induction-heads · mechanistic-interpretability · PMLR · arXiv · DBLP
8
Intensive read · ICML 2024

Transforming and Combining Rewards for Aligning Large Language Models

Reward models learned from preference data in RLHF have two neglected issues: (1) monotone transformations of the reward preserve the preference ordering, yet different transformations yield very different alignment results, so which should be chosen? (2) How should multiple reward models be combined optimally? Both questions previously lacked principled answers.

Zihao Wang,Chirag Nagpal,Jonathan Berant,Jacob Eisenstein,Alexander Nicholas D'Amour,Sanmi Koyejo,Victor Veitch
Google DeepMind · Google · alignment · reward-model · rlhf · PMLR · arXiv · DBLP
8
Intensive read · ICML 2024

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining

Retrieval-augmented pre-training (the Retro approach) had only been validated at the 7.5B scale, and that limited scale hurt instruction tuning and zero-shot generalization. The question is whether Retro scales to larger models and keeps its advantage after instruction tuning.

Boxin Wang,Wei Ping,Lawrence McAfee,Peng Xu,Bo Li,Mohammad Shoeybi,Bryan Catanzaro
NVIDIA · rag · retrieval-augmented-pretraining · instruction-tuning · PMLR · arXiv · DBLP
8
Intensive read · ICML 2024

Gated Linear Attention Transformers with Hardware-Efficient Training

This work addresses the gap between linear attention's theoretical O(n) appeal for long sequences and its practice: in real training it is often neither stable nor fast enough. Many linear-attention variants are weaker in expressiveness than softmax attention, or require hardware-unfriendly tensor reshuffling and prefix scans during training, so they do not necessarily beat standard attention on hardware.

Songlin Yang,Bailin Wang,Yikang Shen,Rameswar Panda,Yoon Kim
linear-attention · transformer · hardware-efficiency · PMLR · DBLP
8
Intensive read · ICML 2024
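The O(n) property comes from replacing softmax with a running state that accumulates key-value outer products. A plain, ungated toy sketch (GLA additionally learns data-dependent gates on the state update, which is not shown here):

```python
# Toy linear attention: the state S accumulates outer(k, v); each output
# reads the state with the current query. No softmax, so the recurrent
# state has constant size regardless of sequence length.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def linear_attention(qs, ks, vs):
    d, dv = len(ks[0]), len(vs[0])
    S = [[0.0] * dv for _ in range(d)]
    outs = []
    for q, k, v in zip(qs, ks, vs):
        for i in range(d):                       # state update: S += outer(k, v)
            for j in range(dv):
                S[i][j] += k[i] * v[j]
        outs.append([dot(q, [S[i][j] for i in range(d)]) for j in range(dv)])
    return outs

# Matches unnormalized causal attention: out_t = sum_{s<=t} (q_t . k_s) v_s.
outs = linear_attention([[1.0, 0.0], [0.0, 1.0]],
                        [[1.0, 1.0], [2.0, 0.0]],
                        [[1.0], [3.0]])
assert outs == [[1.0], [1.0]]
```

Because S is updated in place, decoding cost per token is constant, which is the property the paper's hardware-efficient training algorithm preserves at scale.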

StableMask: Refining Causal Masking in Decoder-only Transformer

This work targets the rigidity of the fixed causal mask in decoder-only Transformers. The standard lower-triangular mask keeps autoregressive training and inference consistent, but it also treats all past tokens identically, which may restrict information flow and optimization efficiency; the authors therefore redesign masking without breaking the autoregressive constraint.

Qingyu Yin,Xuzheng He,Xiang Zhuang,Yu Zhao,Jianhua Yao,Xiaoyu Shen,Qiang Zhang
decoder-only · causal-mask · transformer · PMLR · DBLP
8
Intensive read · ICML 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

This work addresses the observation that linear-attention models are often less effective at autoregressive decoding than expected. Many linearized LLMs look superior in training complexity, but at actual AR decoding time new tensions emerge among quality, state-update scheme, and hardware efficiency; the authors re-examine linear attention specifically from the decoding perspective.

Haoran You,Yichao Fu,Zheng Wang,Amir Yazdanbakhsh,Yingyan Celine Lin
linear-attention · autoregressive · llm · PMLR · DBLP
8
Intensive read · ICML 2024

Self-Rewarding Language Models

Weizhe Yuan,Richard Yuanzhe Pang,Kyunghyun Cho,Xian Li,Sainbayar Sukhbaatar,Jing Xu,Jason Weston
self-rewarding · alignment · reward-model · PMLR · DBLP
8
Intensive read · ICML 2024

Look Ahead or Look Around? A Theoretical Comparison Between Autoregressive and Masked Pretraining

The trade-offs between the two core paradigms of generative self-supervised learning (AR pre-training and masked pre-training) lacked theoretical grounding and could only be probed by trial and error, leaving no principled basis for choosing a modeling paradigm per task.

Qi Zhang,Tianqi Du,Haotian Huang,Yifei Wang,Yisen Wang
Peking University · autoregressive · masked-pretraining · ssl-theory · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Q-Probe: A Lightweight Approach to Reward Maximization for Language Models

This work asks how to steer a language model toward higher task reward without heavyweight finetuning or RL. Previous practice forced a choice between few-shot prompting and full finetuning: the former light but weak, the latter strong but expensive. Q-Probe aims to fill the middle ground.

Kenneth Li,Samy Jelassi,Hugh Zhang,Sham M. Kakade,Martin Wattenberg,David Brandfonbrener
Google DeepMind · Google Research · reward-modeling · inference-time · alignment · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Magicoder: Empowering Code Generation with OSS-Instruct

This work tackles the clear gap between small open-source code models and top closed-source or much larger models, amid a scarcity of high-quality instruction data. Much synthetic instruction data has relied on models inventing their own problems, importing distribution shift and unrealistic task styles; Magicoder targets exactly this data bias.

Yuxiang Wei,Zhe Wang,Jiawei Liu,Yifeng Ding,Lingming Zhang
code-llm · synthetic-data · instruction-tuning · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences

Zicheng Liu,Siyuan Li,Li Wang,Zedong Wang,Yunfan Liu,Stan Z. Li
linear-attention · long-context · hardware-efficiency · PMLR · DBLP
7
Skim · ICML 2024

Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint

Existing RLHF algorithms (PPO, DPO, etc.) lack strategic exploration, are limited in offline settings, and come without rigorous theoretical guarantees, leading to unstable alignment and no support for iterative preference optimization.

Wei Xiong,Hanze Dong,Chenlu Ye,Ziqi Wang,Han Zhong,Heng Ji,Nan Jiang,Tong Zhang
rlhf · dpo · iterative-preference · PMLR · arXiv · DBLP
7
Skim · ICML 2024

FrameQuant: Flexible Low-Bit Quantization for Transformers

Abstract missing; unclear whether the low-bit quantization pain point FrameQuant targets is accuracy loss, cross-layer/cross-channel distribution drift, or inference bottlenecks from KV-cache/activation quantization.

Harshavardhan Adepu,Zhanpeng Zeng,Li Zhang,Vikas Singh
quantization · transformers · low-bit · PMLR · DBLP
7
Intensive read · ICML 2024

CHAI: Clustered Head Attention for Efficient LLM Inference

Abstract missing; unclear whether CHAI targets the compute/bandwidth waste of multi-head attention at inference time, or throughput collapse when KV access becomes the bottleneck at long context.

Saurabh Agarwal,Bilge Acun,Basil Hosmer,Mostafa Elhoushi,Yejin Lee,Shivaram Venkataraman,Dimitris Papailiopoulos,Carole-Jean Wu
attention · kv-cache · llm-inference · PMLR · DBLP
7
Skim · ICML 2024

LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions

Abstract missing; unclear what specific incompatibility LeaPformer resolves between linear Transformers on autoregressive generation versus simultaneous tasks, or the existing trade-offs involved.

Victor Agostinelli,Sanghyun Hong,Lizhong Chen
linear-transformer · autoregressive · architecture · PMLR · DBLP
7
Intensive read · ICML 2024

Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise

Abstract missing; unclear which Adam phenomena the work explains (convergence, generalization, hyperparameter sensitivity, or the relation to weight decay/regularization), and what practical confusion "Adam is FTRL" resolves.

Kwangjun Ahn,Zhiyu Zhang,Yunbum Kook,Yan Dai
adam · optimizer · ftrl · PMLR · DBLP
7
Intensive read · Oral · ICML 2024

Flextron: Many-in-One Flexible Large Language Model

Ruisi Cai,Saurav Muralidharan,Greg Heinrich,Hongxu Yin,Zhangyang Wang,Jan Kautz,Pavlo Molchanov
elastic-inference · model-compression · flexible-architecture · PMLR · DBLP
7
Intensive read · ICML 2024

LoCoCo: Dropping In Convolutions for Long Context Compression

Ruisi Cai,Yuandong Tian,Zhangyang Wang,Beidi Chen
long-context · context-compression · convolution · PMLR · DBLP
7
Intensive read · ICML 2024

Human Alignment of Large Language Models through Online Preference Optimisation

Daniele Calandriello,Zhaohan Daniel Guo,Rémi Munos,Mark Rowland,Yunhao Tang,Bernardo Ávila Pires ... 3 authors omitted ... ,Tianqi Liu,Rishabh Joshi,Zeyu Zheng,Bilal Piot
online-rlhf · preference-optimization · alignment · PMLR · DBLP
7
Skim · ICML 2024

MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts

Guanjie Chen,Xinyu Zhao,Tianlong Chen,Yu Cheng
moe · reliability · evaluation · PMLR · DBLP
7
Skim · ICML 2024

Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context

This paper asks what algorithm a Transformer actually learns in in-context learning, and in particular whether it can still be interpreted as an optimization process when facing non-linear functions. Earlier theory explained ICL as linear regression, kernel regression, or approximate gradient descent, but mostly for linear tasks; this work pushes the interpretation to the more general non-linear case.

Xiang Cheng,Yuxin Chen,Suvrit Sra
in-context-learning · transformer · optimization · PMLR · DBLP
7
Skim · ICML 2024

KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation

This paper tackles the serial, hard-to-parallelize generation of the KV cache in causal LLM inference. Standard autoregressive decoding must wait for the previous step before producing each token, so latency is bound by serial dependencies even with ample compute; prior optimizations mostly target attention kernels or speculative decoding rather than parallelizing KV generation directly.

Minsik Cho,Mohammad Rastegari,Devang Naik
kv-cache · inference · parallelism · PMLR · DBLP
7
Skim · ICML 2024

A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts

This paper addresses how to safely prune experts from an already fine-tuned sparse MoE model to cut memory and compute at deployment. MoE scales parameters at low activation cost during training, but inference and deployment are not always cheap; after downstream fine-tuning, which experts can be dropped, and how many without hurting accuracy, has lacked a method with theoretical guarantees.

Mohammed Nowaz Rabbani Chowdhury,Meng Wang,Kaoutar El Maghraoui,Naigang Wang,Pin-Yu Chen,Christopher D. Carothers
moe · pruning · fine-tuning · PMLR · arXiv · DBLP
7
Intensive read · ICML 2024

ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback

High-quality AI feedback data is crucial for LLM alignment, but existing feedback datasets are small and lack diversity. UltraFeedback builds a large-scale, multi-dimensional AI feedback dataset to improve RLHF/DPO training of LLMs.

Ganqu Cui,Lifan Yuan,Ning Ding,Guanming Yao,Bingxiang He,Wei Zhu ... 2 authors omitted ... ,Ruobing Xie,Yankai Lin,Zhiyuan Liu,Maosong Sun
Tsinghua University · ai-feedback · sft · rlhf · PMLR · DBLP
7
Intensive read · ICML 2024

Break the Sequential Dependency of LLM Inference Using Lookahead Decoding

Yichao Fu,Peter Bailis,Ion Stoica,Hao Zhang
decoding · inference · parallelism · PMLR · DBLP
7
Skim · ICML 2024

Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu,Tao Yang,Yuwang Wang,Yan Lu,Nanning Zheng
in-context-learning · transformer · learning-dynamics · PMLR · DBLP
7
Intensive read · ICML 2024

Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?

This paper tests a very specific mechanistic hypothesis: can looped Transformers actually implement multi-step gradient descent during in-context learning, rather than only an approximate one-step update? Much ICL theory casts the Transformer as an implicit learner, but most results stop at single-step or linear settings; the authors target the stronger claim that a weight-shared, iterated Transformer can execute a multi-step optimization procedure.

Khashayar Gatmiry,Nikunj Saunshi,Sashank J. Reddi,Stefanie Jegelka,Sanjiv Kumar
looped-transformer · in-context-learning · gradient-descent · PMLR · DBLP
7
Intensive read · ICML 2024

Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models

This paper attacks the fragmentation of hidden-representation inspection: probing and patching methods each see one slice, with no unified framework for systematically checking what a language model's intermediate representations encode, who can read them out, and how they affect downstream computation. Prior analyses often did only linear probes or only activation patching, so conclusions stayed confined to a single lens.

Asma Ghandeharioun,Avi Caciularu,Adam Pearce,Lucas Dixon,Mor Geva
interpretability · hidden-representations · probing · PMLR · DBLP
7
Intensive read · ICML 2024

Understanding Finetuning for Factual Knowledge Extraction

This paper addresses an often-conflated question: does finetuning teach a model to extract factual knowledge it already has, or does it write new knowledge into the model? Much factual probing and knowledge-editing work only looks at final QA accuracy, making it hard to tell whether the model has learned to better read knowledge already in its parameters or has genuinely changed its internal knowledge store.

Gaurav Rohit Ghosal,Tatsunori Hashimoto,Aditi Raghunathan
fine-tuning · factual-knowledge · llm · PMLR · DBLP
7
Intensive read · ICML 2024

A Closer Look at the Limitations of Instruction Tuning

The paper's central claim is likely that the benefits of instruction tuning have clear limits, and those limits surface earlier than the community assumes. Much prior work treats instruction tuning as a general-purpose booster, but it often conflates capability gains, format alignment, and data contamination; the authors dissect what it actually improves and what it sacrifices.

Sreyan Ghosh,Chandra Kiran Reddy Evuru,Sonal Kumar,Ramaneswaran S.,Deepali Aneja,Zeyu Jin,Ramani Duraiswami,Dinesh Manocha
instruction-tuning · limitations · sft · PMLR · DBLP
6
Skim · ICML 2024

Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems

When trained on multi-step tasks, Transformers exhibit "Eureka moments": long loss plateaus followed by abrupt improvements. Prior training-dynamics research focused on simple single-step tasks and assumed smooth loss descent, missing the optimization bottleneck specific to multi-step tasks.

David T. Hoffmann,Simon Schrodi,Jelena Bratulic,Nadine Behrmann,Volker Fischer,Thomas Brox
transformer · training-dynamics · softmax · PMLR · arXiv · DBLP
5
Skim · Oral · ICML 2024

PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs

On-device federated training of large language models suffers from insufficient device compute, heavy communication overhead, and painful deployment and debugging. Training on private distributed user data previously had to rely on local on-device training, and the privacy budget limited how efficiently large models could exploit private on-device data.

Charlie Hou,Akshat Shrivastava,Hongyuan Zhan,Rylan Conway,Trang Le,Adithya Sagar,Giulia Fanti,Daniel Lazar
federated-learning · privacy · data · PMLR · arXiv · DBLP
7
Intensive read · ICML 2024

In-context Convergence of Transformers

The core question: for a real Transformer with softmax attention (even a single layer) doing in-context learning (ICL), how does gradient-descent training converge, to what solution, and how does data structure (balanced vs. imbalanced sampling) shape this in-context fitting? Prior theory mostly covered linear Transformers and could not explain training dynamics under softmax attention.

Yu Huang,Yuan Cheng,Yingbin Liang
in-context-learning · transformer-theory · convergence · PMLR · arXiv · DBLP
7
Intensive read · Oral · ICML 2024

Position: The Platonic Representation Hypothesis

Abstract missing; the position paper's specific claims about the "Platonic Representation Hypothesis", its scope, and its falsifiable predictions cannot be summarized accurately.

Minyoung Huh,Brian Cheung,Tongzhou Wang,Phillip Isola
representation-learning · convergence · multimodal · PMLR · DBLP
7
Intensive read · ICML 2024

From Self-Attention to Markov Models: Unveiling the Dynamics of Generative Transformers

Abstract missing; cannot state precisely how the paper connects the generative dynamics of Transformers to Markov models, or which phenomena it aims to explain (e.g., long-range dependence, exposure bias, or sampling degeneration).

Muhammed Emrullah Ildiz,Yixiao Huang,Yingcong Li,Ankit Singh Rawat,Samet Oymak
self-attention · markov-model · generative-transformer · PMLR · DBLP
7
Intensive read · ICML 2024

Understanding the Learning Dynamics of Alignment with Human Feedback

Abstract missing; cannot reliably determine whether the "alignment with human feedback" discussed is RLHF, DPO/IPO, or preference learning more generally, or which learning-dynamics phenomena it explains.

Shawn Im,Yixuan Li
rlhf · alignment · training-dynamics · PMLR · DBLP
7
Skim · Oral · ICML 2024

Debating with More Persuasive LLMs Leads to More Truthful Answers

This paper addresses how to still obtain answers closer to the truth from stronger models when the "judge" is weaker than the "answerer", or even lacks key information. Traditional RLHF assumes humans or annotators can reliably judge correctness, an assumption that weakens in the era of strong models.

Akbir Khan,John Hughes,Dan Valentine,Laura Ruis,Kshitij Sachan,Ansh Radhakrishnan,Edward Grefenstette,Samuel R. Bowman,Tim Rocktäschel,Ethan Perez
alignment · debate · truthfulness · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model

This paper explains why stepwise-reasoning protocols such as scratchpads/CoT dramatically improve Transformers on multi-step problems, and what computational structure the model actually learns internally. Mechanistic analysis is hard on real tasks because data and reasoning steps are uncontrolled and evaluation is confounded.

Mikail Khona,Maya Okawa,Jan Hula,Rahul Ramesh,Kento Nishi,Robert P. Dick,Ekdeep Singh Lubana,Hidenori Tanaka
chain-of-thought · transformer · in-context-learning · PMLR · arXiv · DBLP
7
Skim · Oral · ICML 2024

VideoPoet: A Large Language Model for Zero-Shot Video Generation

This work asks whether zero-shot video generation can be done with unified token-sequence modeling, the way language models are trained, instead of task-specialized video diffusion pipelines. Video generation systems have typically been split into modules (text encoding, spatio-temporal diffusion, cascaded super-resolution) with different training objectives, making multi-task and cross-modal extension costly and unfriendly to unified pre-training.

Dan Kondratyuk,Lijun Yu,Xiuye Gu,José Lezama,Jonathan Huang,Grant Schindler ... 20 authors omitted ... ,Irfan Essa,Huisheng Wang,David A. Ross,Bryan Seybold
video-generation · llm · multimodal-token · PMLR · DBLP
7
Intensive read · ICML 2024

Towards Understanding Inductive Bias in Transformers: A View From Infinity

This work asks where Transformer inductive biases really come from: in an infinite-width or infinite-scale view, which biases are intrinsic to the architecture and which arise from training finite models? Many empirical phenomena (position sensitivity, how context gets composed, certain function families being easier to learn) have been attributed to Transformers "being good at sequences", a claim that usually lacks an analyzable limiting view.

Itay Lavie,Guy Gur-Ari,Zohar Ringel
inductive-bias · transformer · generalization · PMLR · DBLP
7
Intensive read · Oral · ICML 2024

A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity

Andrew Lee,Xiaoyan Bai,Itamar Pres,Martin Wattenberg,Jonathan K. Kummerfeld,Rada Mihalcea
dpo · alignment · mechanistic-interpretability · PMLR · DBLP
7
Intensive read · ICML 2024

EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty

Yuhui Li,Fangyun Wei,Chao Zhang,Hongyang Zhang
speculative-decoding · sampling · inference · PMLR · DBLP
7
Intensive read · ICML 2024

Dual Operating Modes of In-Context Learning

Ziqian Lin,Kangwook Lee
in-context-learning · mechanistic-analysis · generalization · PMLR · DBLP
6
Skim · ICML 2024

Can We Remove the Square-Root in Adaptive Gradient Methods? A Second-Order Perspective

The square-root in the diagonal preconditioning of adaptive optimizers like Adam fundamentally diverges from the theoretical motivation of second-order optimization, yet adaptive optimizers kept the square-root step by default, with no systematic analysis of the effects and trade-offs of removing it.

Wu Lin,Felix Dangel,Runa Eschenhagen,Juhan Bae,Richard E. Turner,Alireza Makhzani
optimizer · adam · adaptive-gradient · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Online Speculative Decoding

Xiaoxuan Liu,Lanxiang Hu,Peter Bailis,Alvin Cheung,Zhijie Deng,Ion Stoica,Hao Zhang
speculative-decoding · online-learning · inference-efficiency · PMLR · DBLP
7
Intensive read · Oral · ICML 2024

DoRA: Weight-Decomposed Low-Rank Adaptation

The core question: LoRA is cheap, but a steady performance gap to full finetuning remains, because LoRA only low-rank-restricts the weight-update space without matching full finetuning's joint adjustment of weight magnitude and direction. Much PEFT work patches rank, target modules, or initialization; this paper instead explains the source of the gap from the parameterization itself.

Shih-Yang Liu,Chien-Yi Wang,Hongxu Yin,Pavlo Molchanov,Yu-Chiang Frank Wang,Kwang-Ting Cheng,Min-Hung Chen
lora · peft · weight-decomposition · PMLR · arXiv · DBLP
7
Skim · ICML 2024
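DoRA's decomposition can be sketched in one dimension: split the pretrained weight into a magnitude m and a unit direction V/||V||, then apply the directional update (LoRA plays this role in the paper) while m is trained as its own parameter. A toy numeric sketch (illustrative values, not the paper's matrix formulation):

```python
# Toy DoRA-style decomposition on a 1-D "weight vector".
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

W = [3.0, 4.0]                        # pretrained weight, ||W|| = 5
m = l2(W)                             # magnitude parameter, initialized to ||W||
V = W[:]                              # direction carrier, initialized to W

delta = [0.1, -0.2]                   # directional update (stand-in for LoRA's B@A)
V_new = [v + d for v, d in zip(V, delta)]
W_new = [m * v / l2(V_new) for v in V_new]

# Renormalization means the update changes only direction; magnitude stays m
# unless m itself is trained.
assert abs(l2(W_new) - m) < 1e-9
```

Decoupling the two lets finetuning move magnitude and direction independently, which is the capacity LoRA's raw additive update lacks.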

Auto-Regressive Next-Token Predictors are Universal Learners

The core question: how powerful is the standard autoregressive next-token predictor as a learner, and is it theoretically expressive enough to cover a very broad class of learning processes? This matters because many new paradigms challenge AR, but if AR is already universal as a learner, the debate shifts from "can it learn" to whether sample efficiency, optimization efficiency, and inductive bias are better.

Eran Malach
autoregressive · theory · next-token · PMLR · DBLP
7
Intensive read · ICML 2024

Controlled Decoding from Language Models

Sidharth Mudgal,Jong Lee,Harish Ganapathy,YaGuang Li,Tao Wang,Yanping Huang ... 3 authors omitted ... ,Trevor Strohman,Jilin Chen,Alex Beutel,Ahmad Beirami
controlled-decoding · reward-model · alignment · PMLR · DBLP
7
Intensive read · ICML 2024

Learning to Route Among Specialized Experts for Zero-Shot Generalization

Mohammed Muqeeth,Haokun Liu,Yufan Liu,Colin Raffel
moe · routing · zero-shot · PMLR · DBLP
7
Intensive read · ICML 2024

Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference

Piotr Nawrot,Adrian Lancucki,Marcin Chochowski,David Tarjan,Edoardo M. Ponti
kv-cache · memory-compression · inference-acceleration · PMLR · DBLP
7
Skim · Spotlight · ICML 2024

Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation

Xianghe Pang,Shuo Tang,Rui Ye,Yuxin Xiong,Bolun Zhang,Yanfeng Wang,Siheng Chen
self-alignment · instruction-tuning · synthetic-data · PMLR · DBLP
7
Skim · ICML 2024

Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks

Jongho Park,Jaeseung Park,Zheyang Xiong,Nayoung Lee,Jaewoong Cho,Samet Oymak,Kangwook Lee,Dimitris Papailiopoulos
mamba · in-context-learning · sequence-modeling · PMLR · DBLP
7
Skim · ICML 2024

Modeling Language Tokens as Functionals of Semantic Fields

This paper rewrites how tokens are modeled: rather than treating words as discrete symbols, it views them as function values, or functionals, of an underlying semantic field. Conventional LMs model token sequences directly, which is expressive but coarse about continuous semantic structure; the authors introduce a more continuous semantic representation linking discrete token generation to a latent semantic field.

Zhengqi Pei,Anran Zhang,Shuhui Wang,Qingming Huang
language-modeling · tokenization · semantics · PMLR · DBLP
7
Intensive read · ICML 2024

Compute Better Spent: Replacing Dense Layers with Structured Matrices

Shikai Qiu,Andres Potapczynski,Marc Anton Finzi,Micah Goldblum,Andrew Gordon Wilson
structured-matrices · efficient-architecture · model-compression · PMLR · DBLP
7
Intensive read · ICML 2024

WARM: On the Benefits of Weight Averaged Reward Models

Alexandre Ramé,Nino Vieillard,Léonard Hussenot,Robert Dadashi,Geoffrey Cideron,Olivier Bachem,Johan Ferret
reward-model · weight-averaging · rlhf · PMLR · DBLP
7
Intensive read · ICML 2024

Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks

Rahul Ramesh,Ekdeep Singh Lubana,Mikail Khona,Robert P. Dick,Hidenori Tanaka
compositionality · autoregressive · transformer · PMLR · DBLP
7
Skim · ICML 2024

Language Generation with Strictly Proper Scoring Rules

Chenze Shao,Fandong Meng,Yijin Liu,Jie Zhou
language-modeling · training-objective · scoring-rules · PMLR · DBLP
7
Skim · Oral · ICML 2024

Position: Do pretrained Transformers Learn In-Context by Gradient Descent?

Lingfeng Shen,Aayush Mishra,Daniel Khashabi
in-context-learning · transformers · mechanistic-interpretability · PMLR · DBLP
7
Intensive read · ICML 2024

DéjàVu: KV-cache Streaming for Fast, Fault-tolerant Generative LLM Serving

This paper tackles a very practical problem in distributed LLM serving: the KV cache is not just a memory burden, it also drives pipeline bubbles, over-reserved GPU memory, and slow failure recovery. These issues were previously optimized separately, but the prompt and decode phases have completely different latency profiles, so utilization is always lost at the system level.

Foteini Strati,Sara McAllister,Amar Phanishayee,Jakub Tarnawski,Ana Klimovic
kv-cache · serving · pipeline-parallel · PMLR · arXiv · DBLP
7
Intensive read · ICML 2024

video-SALMONN: Speech-Enhanced Audio-Visual Large Language Models

This paper addresses an underrated gap in audio-visual LLMs: video understanding is not just watching frames and hearing ambient sound; it also requires speech understanding at high temporal resolution, which existing av-LLMs usually handle poorly. Common designs either hand speech to a separate ASR pipeline or compress everything into coarse tokens, losing speech detail and making it hard to model speech jointly with the video's other elements.

Guangzhi Sun,Wenyi Yu,Changli Tang,Xianzhao Chen,Tian Tan,Wei Li,Lu Lu,Zejun Ma,Yuxuan Wang,Chao Zhang
audio-visual-llm · speech-understanding · video-llm · PMLR · arXiv · DBLP
7
Intensive read · ICML 2024

MEMORYLLM: Towards Self-Updatable Large Language Models

An LLM's parametric knowledge is essentially frozen after pre-training and cannot be updated efficiently at inference time. Existing approaches either depend on external retrieval (RAG) or require re-finetuning; a lightweight mechanism for the model itself to keep absorbing new knowledge is missing.

Yu Wang,Yifan Gao,Xiusi Chen,Haoming Jiang,Shiyang Li,Jingfeng Yang ... 2 authors omitted ... ,Xian Li,Bing Yin,Jingbo Shang,Julian J. McAuley
continual-pretrain · memory · lifelong-learning · PMLR · DBLP
7
Intensive read · ICML 2024

Diffusion Language Models Are Versatile Protein Learners

Protein language models have mostly followed autoregressive or masked-language-modeling paradigms, limiting generation diversity and conditional-generation flexibility. Can discrete diffusion serve as a unified pre-training framework for protein sequences, delivering both strong generation and strong representation learning?

Xinyou Wang,Zaixiang Zheng,Fei Ye,Dongyu Xue,Shujian Huang,Quanquan Gu
Nanjing University · diffusion-lm · protein · discrete-diffusion · PMLR · arXiv · DBLP
7
Skim · Oral · ICML 2024

NExT-GPT: Any-to-Any Multimodal LLM

Existing multimodal LLMs only support multimodal understanding on the input side, not arbitrary-modality output; prior multimodal LLMs defaulted to input-side alignment and never achieved end-to-end any-to-any modality modeling.

Shengqiong Wu,Hao Fei,Leigang Qu,Wei Ji,Tat-Seng Chua
National University of Singapore · any-to-any · multimodal-llm · unified-pretraining · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Do Efficient Transformers Really Save Computation?

Kai Yang,Jan Ackermann,Zhenyu He,Guhao Feng,Bohang Zhang,Yunzhen Feng,Qiwei Ye,Di He,Liwei Wang
efficient-transformer · attention · training-cost · PMLR · DBLP
7
Skim · ICML 2024

Collage: Light-Weight Low-Precision Strategy for LLM Training

This work targets the high cost of LLM training, where full-precision training strains memory, bandwidth, and energy. Low-precision methods are fairly mature on the inference side, but on the training side they are still limited by numerical instability, amplified gradient noise, and optimizer-state overhead; the authors therefore propose a lightweight low-precision training strategy.

Tao Yu,Gaurav Gupta,Karthick Gopalswamy,Amith R. Mamidala,Hao Zhou,Jeffrey Huynh,Youngsuk Park,Ron Diamant,Anoop Deoras,Luke Huan
low-precision · mixed-precision · training-efficiency · PMLR · DBLP
7
Skim · ICML 2024

Token-level Direct Preference Optimization

Existing alignment methods such as DPO optimize at the whole-sequence level, mismatching the token-level sequential nature of autoregressive generation, which lowers alignment efficiency and generation diversity.

Yongcheng Zeng,Guoqing Liu,Weiyu Ma,Ning Yang,Haifeng Zhang,Jun Wang
dpo · token-level · alignment · PMLR · arXiv · DBLP
4
ICML 2024
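For reference, the sequence-level DPO loss that TDPO refines scores a preference pair by the policy's log-probability margin over a frozen reference model. A toy sketch with made-up numbers (the paper's token-level formulation adds per-token KL control, which is not shown):

```python
# Standard (sequence-level) DPO loss on one preference pair.
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # logp_* are summed log-probs of the chosen (w) and rejected (l)
    # responses under the policy and the frozen reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log sigmoid(margin)

# If the policy already prefers the chosen response more than the reference
# does, the margin is positive and the loss falls below log(2).
loss = dpo_loss(logp_w=-5.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-8.0)
assert 0.0 < loss < math.log(2)
```

Because all token log-probs are summed before the sigmoid, the gradient signal is spread uniformly over the sequence; the token-level view is what motivates redistributing it.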

IM-Unpack: Training and Inference with Arbitrarily Low Precision Integers

Low-bit integer GEMM has a rounding-error control problem: prior low-precision training and inference required complex error-control techniques, and floating point was assumed necessary for Transformer training and inference to match full-precision accuracy.

Zhanpeng Zeng,Karthikeyan Sankaralingam,Vikas Singh
low-precision · integer-arithmetic · training-efficiency · PMLR · arXiv · DBLP
7
Skim · ICML 2024

Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF

In RLHF, the reward model's performance degrades after one epoch of training, and over-optimizing against it invites reward hacking; prior reward models were trained only on hard labels, leaving reward overfitting and overoptimization unsolved.

Banghua Zhu,Michael I. Jordan,Jiantao Jiao
University of California, Berkeley · rlhf · reward-model · overoptimization · PMLR · arXiv · DBLP
6
Skim · ICML 2024

GiLOT: Interpreting Generative Language Models via Optimal Transport

This work seeks a tool with a more global structural view for interpreting generative language models, beyond attention, gradients, or local feature attribution. Existing interpretation methods excel at local correlation analysis but poorly characterize the overall geometry of how the generative distribution moves from context to output; the authors therefore use optimal transport to analyze the correspondence and transport between token distributions.

Xuhong Li,Jiamin Chen,Yekun Chai,Haoyi Xiong
interpretability · generative-lm · optimal-transport · PMLR · DBLP
6
Skim · Oral · ICML 2024

Chain of Code: Reasoning with a Language Model-Augmented Code Emulator

The core judgment: having a language model "write code then execute it" is not enough, because many reasoning steps involve fuzzy semantic functions a real interpreter cannot execute; the paper asks how to combine code's precise structure with the language model's semantic completion. Program-of-Thought and tool-use approaches usually assume functions are executable, which works for arithmetic but often stalls on tasks mixing semantic reasoning.

Chengshu Li,Jacky Liang,Andy Zeng,Xinyun Chen,Karol Hausman,Dorsa Sadigh,Sergey Levine,Li Fei-Fei,Fei Xia,Brian Ichter
Google Research · Google DeepMind · reasoning · tool-use · code · PMLR · arXiv · DBLP
6
Skim · Spotlight · ICML 2024

LIDAO: Towards Limited Interventions for Debiasing (Large) Language Models

This work asks how to debias large language models while changing them as little as possible, instead of large-scale retraining or heavy intervention. The problem persists because mainstream debiasing methods are either weak or visibly damage model capability; "limited interventions" signals the authors' search for a steadier balance between the two.

Tianci Liu,Haoyu Wang,Shiyang Wang,Yu Cheng,Jing Gao
debiasing · alignment · intervention · PMLR · DBLP
6
Skim · ICML 2024

Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

This work asks whether a language model's reasoning ability comes from the quality of individual reasoning paths or from the aggregate effect of many paths. Prior analyses fixated on per-sample CoT accuracy, assuming "can reason" means generating one good chain, yet phenomena like self-consistency already suggest the aggregation mechanism itself may contribute a great deal.

Xinyi Wang,Alfonso Amayuelas,Kexun Zhang,Liangming Pan,Wenhu Chen,William Yang Wang
reasoning · cot · aggregation · PMLR · DBLP
6
Skim · ICML 2024

MultiMax: Sparse and Multi-Modal Attention Learning

The conclusion: softmax in attention handles multi-modality and sparsity jointly rather poorly, smearing probability over many residual entries, which is neither interpretable nor noise-free. Sparsemax-style methods achieve sparsity but usually sacrifice multi-modal structure or require loss changes; the paper targets exactly this long-standing trade-off.

Yuxuan Zhou,Mario Fritz,Margret Keuper
attention · sparsity · multimodal · PMLR · arXiv · DBLP
6
Skim · ICML 2024

Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs

Lu Yin,Ajay Kumar Jaiswal,Shiwei Liu,Souvik Kundu,Zhangyang Wang
pruning · llm · downstream-transfer · PMLR · DBLP
4
ICML 2024

Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity

Existing LLM pruning defaults to pruning all layers uniformly to the same sparsity, conflicting with the finding in CV that non-uniform pruning works better; prior LLM pruning also ignored the effect of outlier parameters.

Lu Yin,You Wu,Zhenyu Zhang,Cheng-Yu Hsieh,Yaqing Wang,Yiling Jia ... 3 authors omitted ... ,Yi Liang,Michael Bendersky,Zhangyang Wang,Shiwei Liu
pruning · llm · sparsity · PMLR · arXiv · DBLP
6
Skim · ICML 2024

Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment

Multi-objective alignment of large models has relied on RLHF, which is costly to train and unstable; because human preferences conflict across dimensions, dynamically adjusting preferences required retraining the model, with no flexible adaptation scheme.

Rui Yang,Xiaoman Pan,Feng Luo,Shuang Qiu,Han Zhong,Dong Yu,Jianshu Chen
multi-objective-alignment · rlhf · reward-model · PMLR · arXiv · DBLP
6
Skim · ICML 2024

InferCept: Efficient Intercept Support for Augmented Large Language Model Inference

Abstract missing; unclear what its "intercept support" actually intercepts (KV-cache, operators, external tool calls, RAG, or intermediate activations) and where the boundary of the targeted inference efficiency/controllability problem lies.

Reyna Abhyankar,Zijian He,Vikranth Srivatsa,Hao Zhang,Yiying Zhang
llm-inference · serving · augmented-generation · PMLR · DBLP
6
Skim · ICML 2024

Distinguishing the Knowable from the Unknowable with Language Models

Abstract missing; unclear how the work formalizes "knowable vs. unknowable" (epistemic uncertainty, out-of-distribution data, or information-theoretic unidentifiability) and which concrete failure modes it targets (hallucination, overconfidence, abstention policy).

Gustaf Ahdritz,Tian Qin,Nikhil Vyas,Boaz Barak,Benjamin L. Edelman
uncertainty · language-model · knowable-unknowable · PMLR · DBLP
6
Skim · ICML 2024

Linguistic Calibration of Long-Form Generations

Abstract missing; unclear whether the "linguistic calibration of long-form generation" calibrates factuality, pragmatic consistency, or the match between the model's confidence and its linguistic choices (tone, expressions of uncertainty).

Neil Band,Xuechen Li,Tengyu Ma,Tatsunori Hashimoto
calibration · long-form · generation · PMLR · DBLP
6
Skim · ICML 2024

To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models

This paper asks how to make an LLM truly forget specific memorized text sequences, rather than merely lowering reproduction rates on surface tests. Existing unlearning fits classification-style deletion more naturally; deleting token-level memorization in a language model is harder because parameters store information in a distributed way, and simple finetuning often just changes the trigger without erasing the memory.

George-Octavian Barbulescu,Peter Triantafillou
unlearning · llm · memorization · PMLR · DBLP
6
Skim · ICML 2024

On Mechanistic Knowledge Localization in Text-to-Image Generative Models

This paper studies where the "knowledge" in text-to-image generative models actually lives and whether it can be localized mechanistically, rather than vaguely attributed via overall prompt behavior. Prior analysis of T2I models mostly stayed at the output level, e.g., whether a prompt can produce a concept, which cannot tell whether knowledge is stored diffusely or concentrated in specific modules, layers, or cross-attention pathways.

Samyadeep Basu,Keivan Rezaei,Priyatham Kattakinda,Vlad I. Morariu,Nanxuan Zhao,Ryan A. Rossi,Varun Manjunatha,Soheil Feizi
mechanistic-interpretability · text-to-image · knowledge · PMLR · DBLP
6
Skim · ICML 2024

Neural Networks Learn Statistics of Increasing Complexity

This paper asks a mechanistic question: do neural networks learn data regularities in order of statistical complexity, from shallow to deep? The folklore that networks learn simple patterns before complex ones rests largely on empirical anecdote, without a framework that defines "complexity" as a hierarchy of statistical structure and tests it systematically.

Nora Belrose,Quintin Pope,Lucia Quirke,Alex Mallen,Xiaoli Z. Fern
learning-dynamics · statistics · generalization · PMLR · DBLP
6
Skim · Spotlight · ICML 2024

By Tying Embeddings You Are Assuming the Distributional Hypothesis

This paper points out an overlooked premise: tying input and output embeddings is not merely a parameter-saving trick; it implicitly assumes the distributional hypothesis also holds in the model's output space. Tying is ubiquitous in LLMs and understood mainly as parameter saving plus mild regularization, with little scrutiny of what structural bias the constraint imposes on semantics, word frequency, and the generation distribution.

Francesco Bertolotti,Walter Cazzola
embedding-tying · tokenizer · language-modeling · PMLR · DBLP
6
Skim · ICML 2024

Improving fine-grained understanding in image-text pre-training

This paper addresses the fact that image-text pre-training learns global image-text alignment but is often weak at fine-grained understanding, particularly attributes, relations, and local regions. CLIP-style methods optimize only whole-image/whole-sentence matching, so models latch onto coarse co-occurrence signals and fall behind when precise grounding is needed.

Ioana Bica,Anastasija Ilic,Matthias Bauer,Goker Erdogan,Matko Bosnjak,Christos Kaplanis ... 1 author omitted ... ,Matthias Minderer,Charles Blundell,Razvan Pascanu,Jovana Mitrovic
image-text · pretraining · fine-grained · PMLR · DBLP
6
Skim · ICML 2024

ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections

This paper tackles an old problem in parameter-efficient fine-tuning: providing sufficiently expressive updates while keeping training costs low. LoRA, adapters, and similar methods save parameters but typically confine updates to a low-rank or narrow-bottleneck space, which becomes a clear representational ceiling on some tasks.

Massimo Bini,Karsten Roth,Zeynep Akata,Anna Khoreva
peftfinetuninglow-rankPMLRDBLP
6
泛读ICML 2024

Algorithm and Hardness for Dynamic Attention Maintenance in Large Language Models

Jan van den Brand,Zhao Song,Tianyi Zhou
attentiondynamic-maintenancealgorithmPMLRDBLP
6
泛读OralICML 2024

Genie: Generative Interactive Environments

Jake Bruce,Michael D. Dennis,Ashley Edwards,Jack Parker-Holder,Yuge Shi,Edward Hughes ... 15 authors omitted ... ,Jeff Clune,Nando de Freitas,Satinder Singh,Tim Rocktäschel
world-modelvideo-generationinteractive-environmentPMLRDBLP
6
泛读ICML 2024

CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay

Natasha Butt,Blazej Manczak,Auke J. Wiggers,Corrado Rainone,David W. Zhang,Michaël Defferrard,Taco Cohen
self-improvementcode-generationhindsight-replayPMLRDBLP
6
泛读ICML 2024

Generative Flows on Discrete State-Spaces: Enabling Multimodal Flows with Applications to Protein Co-Design

Andrew Campbell,Jason Yim,Regina Barzilay,Tom Rainforth,Tommi S. Jaakkola
discrete-flowgenerative-flowprotein-designPMLRDBLP
6
泛读OralICML 2024

Stealing part of a production language model

Nicholas Carlini,Daniel Paleka,Krishnamurthy Dj Dvijotham,Thomas Steinke,Jonathan Hayase,A. Feder Cooper ... 3 authors omitted ... ,Arthur Conmy,Eric Wallace,David Rolnick,Florian Tramèr
model-stealingllm-securityapi-attackPMLRDBLP
6
泛读ICML 2024

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Shiqi Chen,Miao Xiong,Junteng Liu,Zhengxuan Wu,Teng Xiao,Siyang Gao,Junxian He
hallucinationrepresentationin-context-learningPMLRDBLP
6
泛读ICML 2024

What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks

Xingwu Chen,Difan Zou
transformerdepthsequence-modelingPMLRDBLP
6
泛读SpotlightICML 2024

RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences

This paper addresses how preference-based reinforcement learning can stay stable and robust when preference labels are noisy. The problem is very practical for RLHF, since human comparisons, model-based scoring, and synthetic preferences all carry systematic mislabeling; standard pipelines assume preferences are reliable and thus tend to amplify noise into incorrect rewards.

Jie Cheng,Gang Xiong,Xingyuan Dai,Qinghai Miao,Yisheng Lv,Fei-Yue Wang
preference-learningreinforcement-learningnoise-robustnessPMLRDBLP
6
泛读ICML 2024

Can AI Assistants Know What They Don't Know?

This paper studies whether an AI assistant can recognize and express what it does not know, rather than continuing to answer fluently under uncertainty. The question has usually been handled indirectly via "improving factuality", which optimizes answer accuracy rather than calibrated abstention — reliably refusing exactly when a refusal is warranted.

Qinyuan Cheng,Tianxiang Sun,Xiangyang Liu,Wenwei Zhang,Zhangyue Yin,Shimin Li,Linyang Li,Zhengfu He,Kai Chen,Xipeng Qiu
uncertaintycalibrationllm-evaluationPMLRDBLP
6
泛读ICML 2024

Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference

This paper addresses the long-standing lack of open, dynamic, real-user preference signals in LLM evaluation. Traditional benchmarks are static, closed, and easy to overfit; human-annotated evaluation is expensive and slow and cannot keep up with model iteration. Chatbot Arena turns evaluation into a continuous online comparison of human preferences.
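Arena-style leaderboards reduce pairwise human votes to model strengths with a Bradley-Terry model. A tiny sketch on synthetic win counts between three hypothetical models, using the classical Zermelo/MM fixed-point iteration (the iteration choice is mine, not necessarily the paper's exact fitting procedure):

```python
import numpy as np

wins = np.array([[0., 8., 9.],
                 [2., 0., 7.],
                 [1., 3., 0.]])              # wins[i, j] = times model i beat j

p = np.ones(3)                               # Bradley-Terry strengths
for _ in range(200):                         # Zermelo/MM fixed-point updates
    for i in range(3):
        num = wins[i].sum()                  # total wins of model i
        den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                  for j in range(3) if j != i)
        p[i] = num / den
    p /= p.sum()                             # fix the scale of the strengths
assert p[0] > p[1] > p[2]                    # ranking follows the vote record
```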

Wei-Lin Chiang,Lianmin Zheng,Ying Sheng,Anastasios Nikolas Angelopoulos,Tianle Li,Dacheng Li ... 1 author omitted ... ,Hao Zhang,Michael I. Jordan,Joseph E. Gonzalez,Ion Stoica
llm-evaluationhuman-preferencebenchmarkPMLRDBLP
6
泛读ICML 2024

Listwise Reward Estimation for Offline Preference-based Reinforcement Learning

This paper studies how, in offline preference-based RL, rewards can be estimated more accurately from the overall ranking of multiple candidate responses instead of only pairwise comparisons. Traditional preference learning mostly splits data into pairs, which is simple but discards the relative structure within a list, and is more prone to sample-efficiency and estimation-bias issues in the offline setting.

Heewoong Choi,Sangwon Jung,Hongjoon Ahn,Taesup Moon
preference-learningoffline-rlreward-modelingPMLRDBLP
6
泛读ICML 2024

BWS: Best Window Selection Based on Sample Scores for Data Pruning across Broad Ranges

This paper addresses the instability of data pruning across retention ratios. Many pruning methods work at one particular compression ratio, but once you move from light to extreme pruning, the sample-scoring or selection strategy stops being reliable — suggesting they are locally optimal heuristics rather than data-selection principles that hold over a broad range.

Hoyong Choi,Nohyun Ki,Hye Won Chung
data-pruningdata-qualitysample-selectionPMLRDBLP
6
泛读ICML 2024

MusicRL: Aligning Music Generation to Human Preferences

This paper tackles aligning music generation with human preferences, rather than only optimizing likelihood or audio-reconstruction metrics. For open-ended generation like music, conventional training objectives and automatic metrics cannot capture subjective dimensions such as whether it sounds good, matches the intended style, and stays coherent — so an RLHF-style approach is needed, but musical preferences are harder to annotate and model than textual ones.

Geoffrey Cideron,Sertan Girgin,Mauro Verzetti,Damien Vincent,Matej Kastelic,Zalán Borsos ... 4 authors omitted ... ,Matthieu Geist,Léonard Hussenot,Neil Zeghidour,Andrea Agostinelli
music-generationpreference-learningalignmentPMLRDBLP
6
泛读ICML 2024

Scaling Laws for the Value of Individual Data Points in Machine Learning

How does the value of an individual data point change with dataset size? Prior data-valuation methods (e.g. Shapley values) are extremely expensive to compute and lack a scaling perspective; this work establishes a scaling law for the value of individual data points.

Ian Connick Covert,Wenlong Ji,Tatsunori Hashimoto,James Zou
Stanford Universityscaling-lawdata-valuationdata-qualityPMLRDBLP
6
泛读ICML 2024

A decoder-only foundation model for time-series forecasting

Time-series forecasting has traditionally relied on specialized models trained per dataset, with little cross-dataset generalization. This work builds a decoder-only time-series foundation model, pre-trained on a large corpus of heterogeneous time series, enabling zero-shot forecasting.

Abhimanyu Das,Weihao Kong,Rajat Sen,Yichen Zhou
Google Researchtime-seriesdecoder-onlyfoundation-modelPMLRDBLP
6
泛读ICML 2024

Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective

In-context learning (ICL) is often assumed to approximate Bayesian inference, but this has lacked rigorous verification. Starting from the martingale property — a basic requirement on Bayesian learning systems for exchangeable data — this work proposes actionable tests of whether LLM ICL is actually Bayesian.
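The martingale property being tested can be seen in a reference Bayesian system. A minimal sketch with a Beta-Bernoulli model (my illustrative choice, not the paper's test statistic): averaging the next-step predictive over the learner's own prediction must recover the current predictive exactly.

```python
import numpy as np

def predictive(a, b):
    # Beta(a, b) posterior predictive for the next Bernoulli outcome
    return a / (a + b)

a, b = 2.0, 3.0
p = predictive(a, b)
# Average the next-step predictive over the model's own prediction:
# outcome 1 (prob p) -> posterior Beta(a+1, b); outcome 0 -> Beta(a, b+1).
expected_next = p * predictive(a + 1, b) + (1 - p) * predictive(a, b + 1)
assert np.isclose(expected_next, p)   # martingale property holds exactly
```

An LLM whose in-context predictions systematically violate this identity cannot be implementing exact Bayesian updating on an exchangeable model.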

Fabian Falck,Ziyu Wang,Christopher C. Holmes
University of Oxfordiclbayesian-inferencetheoryPMLRarXivDBLP
6
泛读ICML 2024

Keypoint-based Progressive Chain-of-Thought Distillation for LLMs

Kaituo Feng,Changsheng Li,Xiaolu Zhang,Jun Zhou,Ye Yuan,Guoren Wang
cotdistillationllmPMLRDBLP
6
泛读ICML 2024

Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution

Chrisantha Fernando,Dylan Banarse,Henryk Michalewski,Simon Osindero,Tim Rocktäschel
promptingself-improvementoptimizationPMLRDBLP
6
泛读ICML 2024

Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback

Songyang Gao,Qiming Ge,Wei Shen,Shihan Dou,Junjie Ye,Xiao Wang ... 省略 2 位作者 ... ,Zhi Chen,Hang Yan,Qi Zhang,Dahua Lin
alignmentclosed-formpreference-optimizationPMLRDBLP
6
泛读OralICML 2024

Speech Self-Supervised Learning Using Diffusion Model Synthetic Data

Heting Gao,Kaizhi Qian,Junrui Ni,Chuang Gan,Mark A. Hasegawa-Johnson,Shiyu Chang,Yang Zhang
speech-ssldiffusionsynthetic-dataPMLRDBLP
6
泛读ICML 2024

Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models

This paper addresses a practical problem: how to fine-tune language models efficiently with zeroth-order methods when gradients are unavailable, expensive, or blocked by the interface. Conventional ZO fine-tuning suffers from poor sample efficiency, high variance, and slow convergence, and is usually treated as a last resort; the authors aim to push this line from "usable in principle" toward "actually practical".
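The baseline these methods improve on is the two-point zeroth-order gradient estimator. A minimal sketch on a toy quadratic (the variance-reduction machinery of the paper goes beyond this plain estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, mu=1e-3):
    # Two-point (SPSA-style) estimate: probe the loss along a random
    # direction z and use the finite difference as a directional gradient.
    z = rng.normal(size=x.shape)
    return (f(x + mu * z) - f(x - mu * z)) / (2 * mu) * z

f = lambda x: float(np.sum(x ** 2))   # toy loss, true gradient 2x
x = np.array([1.0, -2.0])
for _ in range(300):                  # ZO-SGD: plug the estimate into SGD
    x = x - 0.05 * zo_grad(f, x)
```

Only two loss evaluations per step are needed — no backward pass — which is exactly why the approach survives gradient-free interfaces, at the cost of the noisy updates the paper tries to tame.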

Tanmay Gautam,Youngsuk Park,Hao Zhou,Parameswaran Raman,Wooseok Ha
zeroth-orderfine-tuningvariance-reductionPMLRDBLP
6
泛读ICML 2024

Self-Correcting Self-Consuming Loops for Generative Model Training

This paper discusses an increasingly real problem in generative model training: when the training data contains more and more samples generated by the model itself, does performance keep degrading, or can collapse be avoided through a self-correction mechanism? Prior discussions of self-consuming loops emphasized distribution collapse and error accumulation; the authors argue the loop is not inherently harmful — the key is whether a corrective signal is present.

Nate Gillman,Michael Freeman,Daksh Aggarwal,Chia-Hong Hsu,Calvin Luo,Yonglong Tian,Chen Sun
synthetic-dataself-consumingmodel-collapsePMLRDBLP
6
泛读ICML 2024

Flora: Low-Rank Adapters Are Secretly Gradient Compressors

The core claim: low-rank adaptation methods like LoRA are not just parameter-efficient modules — from an optimization viewpoint they act as a form of gradient compression. LoRA is usually understood through parameterization, restricting updates to a low-rank subspace to reduce trainable parameters; the authors shift the perspective to gradient communication/representation, explaining why it so often works under resource constraints.

Yongchang Hao,Yanshuai Cao,Lili Mou
lorapeftgradient-compressionPMLRDBLP
6
泛读ICML 2024

Fine-grained Local Sensitivity Analysis of Standard Dot-Product Self-Attention

This paper studies the fine-grained local sensitivity of standard dot-product self-attention: under what conditions small input perturbations are amplified, suppressed, or cause structural changes in the attention output. Prior stability or Lipschitz results for attention are coarse, giving only global upper bounds, and cannot explain which token interactions in real models are most fragile.

Aaron J. Havens,Alexandre Araujo,Huan Zhang,Bin Hu
attentioninterpretabilitysensitivityPMLRDBLP
6
泛读ICML 2024

LoRA+: Efficient Low Rank Adaptation of Large Models

Soufiane Hayou,Nikhil Ghosh,Bin Yu
lorapeftfine-tuningPMLRDBLP
3
ICML 2024

Instruction Tuning for Secure Code Generation

Existing instruction-tuning pipelines for code LLMs entirely ignore the security of generated code, so current SOTA instruction-tuned code models frequently output insecure, vulnerable code — a significant deployment risk.

Jingxuan He,Mark Vero,Gabriela Krasnopolska,Martin T. Vechev
instruction-tuningcode-llmsafetyPMLRarXivDBLP
5
泛读ICML 2024

Do Large Code Models Understand Programming Concepts? Counterfactual Analysis for Code Predicates

Large code models excel at code generation, completion, and similar tasks, but there has been no way to verify whether this reflects genuine understanding of the underlying programming concepts or surface pattern matching; previous evaluations only measure end-to-end task performance and cannot disentangle the source of the capability.

Ashish Hooda,Mihai Christodorescu,Miltiadis Allamanis,Aaron Wilson,Kassem Fawaz,Somesh Jha
code-llmcounterfactualinterpretabilityPMLRarXivDBLP
6
泛读ICML 2024

Recovering the Pre-Fine-Tuning Weights of Generative Models

It was widely assumed that after safety-alignment fine-tuning, a generative model's original unaligned pre-training weights cannot be recovered, and that aligned models are therefore safe — but this assumption had never been rigorously tested.

Eliahu Horwitz,Jonathan Kahana,Yedid Hoshen
fine-tuningalignmentsafetyPMLRarXivDBLP
5
泛读OralICML 2024

Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling

LLM uncertainty decomposes into aleatoric uncertainty from inherent data randomness and epistemic uncertainty from missing training data. No prior scheme feasibly disentangled the two; existing methods only measure total uncertainty, hindering progress on LLM reliability and interpretability.

Bairu Hou,Yujian Liu,Kaizhi Qian,Jacob Andreas,Shiyu Chang,Yang Zhang
llmuncertaintyensemblePMLRarXivDBLP
6
泛读ICML 2024

Outlier-Efficient Hopfield Layers for Large Transformer-Based Models

When training large Transformer models, outliers in attention outputs cause severe accuracy drops after low-bit quantization. Existing attention mechanisms cannot simultaneously handle the outliers and preserve model quality, blocking low-bit training and deployment of large models.

Jerry Yao-Chieh Hu,Pei-Hsuan Chang,Haozheng Luo,Hong-Yu Chen,Weijian Li,Wei-Po Wang,Han Liu
transformerhopfieldtraining-stabilityPMLRarXivDBLP
6
泛读ICML 2024

Accelerated Speculative Sampling Based on Tree Monte Carlo

Zhengmian Hu,Heng Huang
speculative-decodingsamplinginferencePMLRDBLP
6
泛读ICML 2024

Case-Based or Rule-Based: How Do Transformers Do the Math?

The core question: when a Transformer does basic math (e.g. addition), is it learning composable rules (rule-based) or retrieving similar training examples (case-based)? Prior work discusses generalization indirectly via accuracy, which cannot distinguish "truly learned the rule" from "memorized many patterns and interpolates".

Yi Hu,Xiaojuan Tang,Haotong Yang,Muhan Zhang
arithmeticgeneralizationiclPMLRarXivDBLP
6
泛读ICML 2024

BiLLM: Pushing the Limit of Post-Training Quantization for LLMs

The abstract is missing, so the specific bottleneck its quantization method targets (weight quantization, KV quantization, activation quantization, or end-to-end deployment constraints) cannot be reliably determined.

Wei Huang,Yangdong Liu,Haotong Qin,Ying Li,Shiming Zhang,Xianglong Liu,Michele Magno,Xiaojuan Qi
quantizationllm-compressionpost-training-quantizationPMLRDBLP
6
泛读ICML 2024

Deep Networks Always Grok and Here is Why

The abstract is missing, so it is unclear under what setting, loss, and data distribution the paper's claim that grokking (sudden late-training generalization) "always happens" holds.

Ahmed Imtiaz Humayun,Randall Balestriero,Richard G. Baraniuk
grokkinggeneralizationtraining-dynamicsPMLRDBLP
6
泛读ICML 2024

A Unified Linear Programming Framework for Offline Reward Learning from Human Demonstrations and Feedback

This paper addresses how, when unifying offline IRL (learning rewards from demonstrations) and RLHF (learning rewards from preferences/feedback), to reduce dependence on specific preference- or decision-model assumptions and thereby improve robustness. Many reward-learning methods require strong assumptions (e.g. a noise model, a form of optimality); when those assumptions are wrong, the learned reward is biased.

Kihyun Kim,Jiawei Zhang,Asuman E. Ozdaglar,Pablo A. Parrilo
reward-learningrlhfinverse-rlPMLRarXivDBLP
6
泛读ICML 2024

SqueezeLLM: Dense-and-Sparse Quantization

This work tackles making quantization genuinely bandwidth-saving without noticeable accuracy loss when deploying LLMs on one or a few GPUs. Many prior quantization methods assume compute is the bottleneck, but for single-batch autoregressive generation the real throughput limiter is weight-loading bandwidth; existing low-bit schemes save memory yet, by quantizing all channels uniformly, incur visible accuracy loss on a few highly sensitive weights.
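The dense-and-sparse idea can be sketched numerically. Note the simplifications: here "sensitive" weights are picked by magnitude and the dense part uses a uniform grid, whereas the paper uses sensitivity-based, non-uniform quantization — this only illustrates why pulling outliers out of the dense grid helps.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W[3, 5] = 40.0                                   # one outlier weight

thresh = np.quantile(np.abs(W), 0.995)           # top 0.5% by magnitude
sparse = np.where(np.abs(W) >= thresh, W, 0.0)   # kept in full precision
dense = W - sparse

levels = np.linspace(dense.min(), dense.max(), 16)   # 4-bit uniform grid
q_dense = levels[np.abs(dense[..., None] - levels).argmin(-1)]
W_hat = q_dense + sparse

levels_all = np.linspace(W.min(), W.max(), 16)   # naive: quantize everything
q_all = levels_all[np.abs(W[..., None] - levels_all).argmin(-1)]

err_split = np.abs(W - W_hat).max()
err_naive = np.abs(W - q_all).max()
assert err_split < err_naive    # the outlier no longer stretches the grid
```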

Sehoon Kim,Coleman Hooper,Amir Gholami,Zhen Dong,Xiuyu Li,Sheng Shen,Michael W. Mahoney,Kurt Keutzer
quantizationllmcompressionPMLRarXivDBLP
6
泛读OralICML 2024

Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape

This work asks what kind of nonlinear features a Transformer learns during in-context learning, and how this ability emerges from attention dynamics. Much prior theory is confined to linear regression, kernel regression, or convex settings — it explains "fitting" but not why the model can form richer feature selection and nonlinear decision boundaries in context.

Juno Kim,Taiji Suzuki
transformerin-context-learningtraining-dynamicsPMLRDBLP
6
泛读ICML 2024

DistiLLM: Towards Streamlined Distillation for Large Language Models

The problem in focus: LLM distillation pipelines tend to be heavy and slow, with results highly dependent on complex teacher sampling and multi-stage training, making distillation an unreliable compression path. The usual framing treats distillation as copying the teacher's full distribution as closely as possible, but for large models that is both costly and not necessarily effective, since the teacher's outputs contain much information the student does not need — or that is outright noisy.

Jongwoo Ko,Sungnyun Kim,Tianyi Chen,Se-Young Yun
distillationllmcompressionPMLRDBLP
6
泛读ICML 2024

Implicit meta-learning may lead language models to trust more reliable sources

This work studies whether language models implicitly learn to "trust more reliable sources" during pre-training or meta-learning, and under what conditions this behavior emerges. Most truthfulness or calibration work relies on extra alignment, supervision, or inference-time retrieval; if models can learn source reliability from the training distribution itself, that would change our estimate of what the pre-training objective can achieve.

Dmitrii Krasheninnikov,Egor Krasheninnikov,Bruno Kacper Mlodozeniec,Tegan Maharaj,David Krueger
meta-learningin-context-learningsource-reliabilityPMLRDBLP
6
泛读ICML 2024

A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts

Kuang-Huei Lee,Xinyun Chen,Hiroki Furuta,John F. Canny,Ian Fischer
long-contextreading-agentgist-memoryPMLRDBLP
6
泛读ICML 2024

Evaluating Quantized Large Language Models

Shiyao Li,Xuefei Ning,Luning Wang,Tengxuan Liu,Xiangsheng Shi,Shengen Yan,Guohao Dai,Huazhong Yang,Yu Wang
quantizationllmevaluationPMLRDBLP
6
泛读ICML 2024

The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning

Nathaniel Li,Alexander Pan,Anjali Gopal,Summer Yue,Daniel Berrios,Alice Gatti ... 20 authors omitted ... ,Weiran Lin,Adam A. Hunt,Justin Tienken-Harder,Kevin Y. Shih
unlearningsafetybenchmarkPMLRDBLP
4
ICML 2024

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Existing structured compression methods for LLMs apply a uniform strategy to the MHA and FFN sub-layers of a Transformer, ignoring their structural differences and yielding a poor trade-off between compression ratio and model quality.

Guangyan Li,Yongqiang Tang,Wensheng Zhang
compressionloratransformerPMLRarXivDBLP
4
ICML 2024

VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling

Existing genomic pre-trained models use hand-designed tokenizers such as k-mers, which cannot encode the most discriminative latent patterns in genomic data, limiting downstream performance.

Siyuan Li,Zedong Wang,Zicheng Liu,Di Wu,Cheng Tan,Jiangbin Zheng,Yufei Huang,Stan Z. Li
tokenizervector-quantizationsequence-modelingPMLRarXivDBLP
6
泛读ICML 2024

Revisiting the Role of Language Priors in Vision-Language Models

Generative VLMs are usually applied only to generation tasks; discriminative tasks such as image-text retrieval have long relied on contrastively pre-trained CLIP-style models, incurring extra pre-training cost, while the discriminative potential of generative VLMs remains untapped.

Zhiqiu Lin,Xinyue Chen,Deepak Pathak,Pengchuan Zhang,Deva Ramanan
vision-languagelanguage-priorgenerative-vlmPMLRarXivDBLP
6
泛读ICML 2024

Structured Inverse-Free Natural Gradient Descent: Memory-Efficient & Numerically-Stable KFAC

Wu Lin,Felix Dangel,Runa Eschenhagen,Kirill Neklyudov,Agustinus Kristiadi,Richard E. Turner,Alireza Makhzani
natural-gradientoptimizermemory-efficiencyPMLRDBLP
6
泛读OralICML 2024

Learning to Model the World With Language

Jessy Lin,Yuqing Du,Olivia Watkins,Danijar Hafner,Pieter Abbeel,Dan Klein,Anca D. Dragan
world-modellanguage-modelreasoningPMLRDBLP
6
泛读ICML 2024

Selecting Large Language Model to Fine-tune via Rectified Scaling Law

The LLM ecosystem offers far too many candidate base models to fine-tune them all and pick the best under limited resources; existing pre-training scaling laws cannot predict post-fine-tuning performance, leaving model selection without a quantitative basis.

Haowei Lin,Baizhou Huang,Haotian Ye,Qinyu Chen,Zihao Wang,Sujian Li,Jianzhu Ma,Xiaojun Wan,James Zou,Yitao Liang
scaling-lawmodel-selectionfine-tuningPMLRarXivDBLP
3
ICML 2024

Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers

LLM downstream performance depends heavily on instruction quality, and manual instruction tuning is labor-intensive; existing Bayesian-optimization approaches to automatic instruction optimization are limited by the fitting capacity of Gaussian processes and cannot handle high-dimensional instruction-optimization settings.

Xiaoqiang Lin,Zhaoxuan Wu,Zhongxiang Dai,Wenyang Hu,Yao Shu,See-Kiong Ng,Patrick Jaillet,Bryan Kian Hsiang Low
instruction-optimizationbanditpromptingPMLRarXivDBLP
6
泛读SpotlightICML 2024

Decoding-time Realignment of Language Models

Tianlin Liu,Shangmin Guo,Leonardo Bianco,Daniele Calandriello,Quentin Berthet,Felipe Llinares-López,Jessica Hoffmann,Lucas Dixon,Michal Valko,Mathieu Blondel
decodingalignmentrealignmentPMLRDBLP
6
泛读ICML 2024

Unified Generation, Reconstruction, and Representation: Generalized Diffusion with Adaptive Latent Encoding-Decoding

The problem: generation, reconstruction, and representation learning are usually handled by different objectives and architectures, so one model rarely achieves good sampling quality, reconstruction, and representation generalization at once. For pre-training this matters: a well-designed unified objective can remove multi-stage assembly and may turn diffusion models from pure generators into more general representation learners.

Guangyi Liu,Yu Wang,Zeyu Feng,Qiyu Wu,Liping Tang,Yuan Gao ... 2 authors omitted ... ,Julian J. McAuley,Zichao Yang,Eric P. Xing,Zhiting Hu
diffusionunified-generationlatent-encodingPMLRDBLP
6
泛读ICML 2024

COPAL: Continual Pruning in Large Language Generative Models

The problem: how to cut the compute cost of large generative language models while continually adapting to new domains, avoiding both the overhead of repeated fine-tuning and catastrophic forgetting. Conventional practice picks one of continual adaptation or model pruning: either keep training for every new domain, or prune once and assume the distribution never changes — neither suits long-term evolving deployments.

Srikanth Malla,Joon Hee Choi,Chiho Choi
pruningcontinual-learningllmPMLRarXivDBLP
6
泛读ICML 2024

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

The problem: automated red teaming and robust refusal lack a unified, standardized, comparable evaluation framework, making fair comparison across methods difficult. Much prior safety evaluation relies on custom attack sets, manual scoring, or inconsistent definitions of jailbreak success, so conclusions are often unstable.

Mantas Mazeika,Long Phan,Xuwang Yin,Andy Zou,Zifan Wang,Norman Mu ... 2 authors omitted ... ,Steven Basart,Bo Li,David A. Forsyth,Dan Hendrycks
safetyevaluationred-teamingPMLRDBLP
6
泛读ICML 2024

Deep Fusion: Efficient Network Training via Pre-trained Initializations

The core question: can already-trained small networks serve as initialization to grow a model mid-training at low cost, instead of training the large target network from scratch? This is very practical for LLM training, since training large models is expensive, and while network-growing methods look compute-saving, there has been little mechanistic understanding of why they work and when they fail.

Hanna Mazzawi,Javier Gonzalvo,Michael Wunder,Sammy Jerome,Benoit Dherin
training-efficiencyinitializationnetwork-growingPMLRarXivDBLP
6
泛读ICML 2024

Copyright Traps for Large Language Models

The core question: can deliberately designed "copyright traps" identify or prove that an LLM used protected content, enabling more systematic study of copyright risk in training data? Evidence that a model memorized or used copyrighted material has relied on leaked samples, membership inference, or similarity retrieval — an often-weak chain of evidence; trap-based design tries to turn the question into a more verifiable experiment.

Matthieu Meeus,Igor Shilov,Manuel Faysse,Yves-Alexandre de Montjoye
llmcopyrightdata-qualityPMLRDBLP
6
泛读ICML 2024

The Illusion of State in State-Space Models

The core question: state-space models (SSMs) are promoted as handling long sequences efficiently through a compressed state, but does this "state" actually provide more computational and memory power than attention or bounded-context representations? The question is timely because Mamba-style models have sparked wide interest as Transformer replacements, while the community remains confused about their capability limits versus the marketing.

William Merrill,Jackson Petty,Ashish Sabharwal
state-space-modelsarchitecturetheoryPMLRDBLP
6
泛读ICML 2024

Efficient World Models with Context-Aware Tokenization

Vincent Micheli,Eloi Alonso,François Fleuret
world-modelstokenizersequence-modelingPMLRDBLP
6
泛读ICML 2024

Prodigy: An Expeditiously Adaptive Parameter-Free Learner

Konstantin Mishchenko,Aaron Defazio
optimizertraining-dynamicsadaptive-optimizationPMLRDBLP
6
泛读ICML 2024

Why Do You Grok? A Theoretical Analysis on Grokking Modular Addition

Mohamad Amin Mohamadi,Zhiyuan Li,Lei Wu,Danica J. Sutherland
grokkingtheorytraining-dynamicsPMLRDBLP
6
泛读ICML 2024

Language Models with Conformal Factuality Guarantees

Christopher Mohri,Tatsunori Hashimoto
llmfactualityconformalPMLRDBLP
6
泛读ICML 2024

Active Preference Learning for Large Language Models

William Muldrew,Peter Hayes,Mingtian Zhang,David Barber
preference-learningactive-learningrlhfPMLRDBLP
6
泛读ICML 2024

Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?

Huy Nguyen,Pedram Akbarian,Nhat Ho
moesoftmax-gatingtheoryPMLRDBLP
6
泛读ICML 2024

A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts

Huy Nguyen,Pedram Akbarian,TrungTin Nguyen,Nhat Ho
moesoftmax-gatingtheoryPMLRDBLP
6
泛读ICML 2024

Trainable Transformer in Transformer

Abhishek Panigrahi,Sadhika Malladi,Mengzhou Xia,Sanjeev Arora
transformerarchitecturetrainabilityPMLRDBLP
6
泛读OralICML 2024

Arrows of Time for Large Language Models

Vassilis Papadopoulos,Jérémie Wenger,Clément Hongler
language-modelingtemporal-reasoningsequence-orderPMLRDBLP
6
泛读ICML 2024

The Linear Representation Hypothesis and the Geometry of Large Language Models

Kiho Park,Yo Joong Choe,Victor Veitch
representationgeometrylinear-representationPMLRDBLP
6
泛读OralICML 2024

Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs

Yeonhong Park,Jake Hyun,SangLyul Cho,Bonggeun Sim,Jae W. Lee
quantizationdeploymentmodel-compressionPMLRDBLP
6
泛读ICML 2024

In-Context Unlearning: Language Models as Few-Shot Unlearners

The question: can a language model temporarily "forget" certain knowledge or behaviors purely through in-context prompting, without any parameter updates? Unlearning has relied on parameter editing, continued training, or data-deletion proofs — costly and prone to collateral damage to unrelated abilities; if in-context unlearning works, part of "forgetting" can be achieved by conditioning at inference rather than weight modification.

Martin Pawelczyk,Seth Neel,Himabindu Lakkaraju
unlearningin-context-learningpromptingPMLRDBLP
6
泛读ICML 2024

Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input

The focus: the reward-relevant features in human preference data are often not explicitly annotated, so models tend to learn surface preferences rather than the underlying decision criteria. Much preference learning assumes people can reliably compare whole outputs, but in practice people are better at pointing out local features, cues, or reasons; the authors turn this more "pragmatic" input into a usable reward signal.

Andi Peng,Yuying Sun,Tianmin Shu,David Abel
preference-learninghuman-feedbackreward-modelingPMLRDBLP
6
泛读ICML 2024

Prompting a Pretrained Transformer Can Be a Universal Approximator

A basic capability question: with a frozen pretrained Transformer, does conditioning via prompts alone already yield universal approximation? Empirical prompt-learning results abound, but theoretically prompting has often been dismissed as a heuristic; the authors give a stronger expressivity result, showing prompts can not only elicit existing abilities but in principle realize arbitrary function approximation.

Aleksandar Petrov,Philip Torr,Adel Bibi
promptingpretrained-transformerexpressivityPMLRDBLP
6
泛读OralICML 2024

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin,Xudong Ma,Xingyu Zheng,Xiaoyang Li,Yang Zhang,Shouda Liu,Jie Luo,Xianglong Liu,Michele Magno
quantizationlorallm-finetuningPMLRDBLP
5
泛读ICML 2024

In-Context Learning Agents Are Asymmetric Belief Updaters

The underlying mechanism of in-context learning (ICL) remains unexplained; existing studies analyze it mostly from statistical or mathematical angles, lacking empirical validation at the cognitive level, with no clear conclusion about the belief-updating logic during ICL.

Johannes A. Schubert,Akshay K. Jagadish,Marcel Binz,Eric Schulz
in-context-learningbehavior-analysisbelief-updatingPMLRarXivDBLP
6
泛读ICML 2024

Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF

Han Shen,Zhuoran Yang,Tianyi Chen
rlhfreinforcement-learningoptimizationPMLRDBLP
6
泛读ICML 2024

Why Larger Language Models Do In-context Learning Differently?

Zhenmei Shi,Junyi Wei,Zhuoyan Xu,Yingyu Liang
in-context-learningscalingllmPMLRDBLP
5
泛读OralICML 2024

Offline Actor-Critic Reinforcement Learning Scales to Large Models

It was widely believed that offline reinforcement learning (RL) is unstable and cannot scale to Transformer-sized models; the dominant training paradigm for multi-task continuous control was supervised behavior cloning, with offline RL underperforming it and showing no clear scaling law.

Jost Tobias Springenberg,Abbas Abdolmaleki,Jingwei Zhang,Oliver Groth,Michael Bloesch,Thomas Lampe ... 2 authors omitted ... ,Steven Kapturowski,Roland Hafner,Nicolas Heess,Martin A. Riedmiller
offline-rlscaling-lawactor-criticPMLRarXivDBLP
6
泛读ICML 2024

Dirichlet Flow Matching with Applications to DNA Sequence Design

In existing discrete flow-matching/diffusion models for sequence generation, naive linear flow matching on the simplex has a discontinuous training objective; methods either need slow multi-step inference or suffer severe quality loss with one-step generation, offering little advantage over autoregressive models.

Hannes Stärk,Bowen Jing,Chenyu Wang,Gabriele Corso,Bonnie Berger,Regina Barzilay,Tommi S. Jaakkola
flow-matchingdiscrete-diffusiondirichletPMLRarXivDBLP
6
泛读ICML 2024

From Vision to Audio and Beyond: A Unified Model for Audio-Visual Representation and Generation

The core problem: in the audio-visual domain, representation learning and generative modeling have long been split, so one model rarely both learns cross-modal representations and generates audio from vision. Prior work typically picks one: contrastive learning for representations, or conditional generation for sample quality — but their training objectives and representation spaces are often incompatible.

Kun Su,Xiulong Liu,Eli Shlizerman
audio-visualunified-modelrepresentation-learningPMLRarXivDBLP
6
泛读ICML 2024

A Minimaximalist Approach to Reinforcement Learning from Human Feedback

This paper addresses a long-standing RLHF problem usually engineered around: how to learn simply yet robustly when preferences are non-Markovian, intransitive, or noisy. Many RLHF pipelines depend on reward models and offline preference optimization, which accumulates distribution-shift error in sequential decision-making and assumes preferences are well captured by a scalar reward.

Gokul Swamy,Christoph Dann,Rahul Kidambi,Steven Wu,Alekh Agarwal
rlhfself-playpreference-optimizationPMLRarXivDBLP
6
泛读ICML 2024

Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data

The core argument: preference fine-tuning of LLMs should not rely only on high-quality offline preference data, but should actively exploit suboptimal data drawn from the current policy's distribution. The usual pursuit of "cleaner, higher-quality" preference samples creates a classic off-policy mismatch: what the model actually generates differs from the preference pairs seen during training.

Fahim Tajwar,Anikait Singh,Archit Sharma,Rafael Rafailov,Jeff Schneider,Tengyang Xie,Stefano Ermon,Chelsea Finn,Aviral Kumar
preference-tuningon-policydpoPMLRDBLP
6
泛读ICML 2024

Codebook Features: Sparse and Discrete Interpretability for Neural Networks

The problem: hidden states are too dense and continuous, so interpretability analysis is stuck at post-hoc linear probes or local visualization, with no actionable discrete concept units. Discretization was not untried — the worry was that adding a bottleneck would badly hurt performance, so interpretability constraints were rarely built into the model itself.

Alex Tamkin,Mohammad Taufeeque,Noah D. Goodman
interpretabilityvector-quantizationsparse-representationPMLRarXivDBLP
6
泛读ICML 2024

AlphaZero-Like Tree-Search can Guide Large Language Model Decoding and Training

The problem: existing tree-search-augmented LLM reasoning typically uses the pretrained model itself as the value function, which only suits shallow search where the model's knowledge already suffices. For long-horizon planning or weak model priors, these methods fail because the search never learns to evaluate intermediate states.

Ziyu Wan,Xidong Feng,Muning Wen,Stephen Marcus McAleer,Ying Wen,Weinan Zhang,Jun Wang
tree-searchreasoningrlPMLRarXivDBLP
6
泛读ICML 2024

In-context Learning on Function Classes Unveiled for Transformers

The in-context learning (ICL) ability of Transformers is not yet theoretically well understood, especially how ICL works across different function classes. Existing analyses are mostly restricted to linear function classes, with no rigorous characterization of more general ones.

Zhijie Wang,Bo Jiang,Shuai Li
in-context-learningtransformertheoryPMLRDBLP
6
泛读ICML 2024

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions

In the fine-tuning setting: how to attribute a model's prediction to each training sample (instance attribution) while keeping the attribution robust to data resampling. Leave-one-out methods are not robust; Shapley values are more robust but extremely expensive to compute.

Jingtan Wang,Xiaoqiang Lin,Rui Qiao,Chuan-Sheng Foo,Bryan Kian Hsiang Low
NUSinterpretabilitydata-attributionshapleyPMLRarXivDBLP
6
泛读ICML 2024

Rethinking Generative Large Language Model Evaluation for Semantic Comprehension

Multiple-choice QA (MCQA) dominates LLM evaluation, but it is inconsistent with real open-ended generation: MCQA performance does not faithfully reflect a model's semantic comprehension and generation ability, motivating evaluation closer to real usage.

Fangyun Wei,Xi Chen,Lin Luo
Microsoft Research Asiallm-evaluationmcqabenchmarkPMLRarXivDBLP
6
泛读ICML 2024

Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications

Is the safety of aligned LLMs brittle under lightweight operations such as pruning and low-rank modification? Prior safety-alignment evaluation focused on adversarial prompts, overlooking that simple parameter-level edits can already break the guardrails.

Boyi Wei,Kaixuan Huang,Yangsibo Huang,Tinghao Xie,Xiangyu Qi,Mengzhou Xia,Prateek Mittal,Mengdi Wang,Peter Henderson
Princeton Universitysafety-alignmentpruninglow-rankPMLRDBLP
6
泛读ICML 2024

Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models

There was previously no rigorous theoretical correspondence between time-continuous Markov jump processes on discrete state spaces and diffusion processes on continuous state spaces, so methods could not transfer between the two families, leaving discrete diffusion theoretically under-founded.

Ludwig Winkler,Lorenz Richter,Manfred Opper
discrete-diffusioncontinuous-timeehrenfestPMLRarXivDBLP
6
泛读ICML 2024

Fundamental Limitations of Alignment in Large Language Models

Yotam Wolf,Noam Wies,Oshri Avnery,Yoav Levine,Amnon Shashua
alignmentllmlimitationsPMLRDBLP
6
泛读ICML 2024

A Dense Reward View on Aligning Text-to-Image Diffusion with Preference

Shentao Yang,Tianqi Chen,Mingyuan Zhou
diffusionpreference-learningreward-modelPMLRDBLP
6
泛读ICML 2024

Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks

Wenhan Yang,Jingdong Gao,Baharan Mirzasoleiman
clippretrainingdata-poisoningPMLRDBLP
6
泛读ICML 2024

Reducing Fine-Tuning Memory Overhead by Approximate and Memory-Sharing Backpropagation

This work targets the excessive memory cost of backpropagation in full-parameter fine-tuning. Current practice trades off among LoRA, gradient checkpointing, recomputation, and CPU offload, which either restrict the trainable parameters or slow training substantially; the authors instead modify backpropagation itself to reduce the activations and gradient state that must be stored.

Yuchen Yang,Yingdong Shi,Cheems Wang,Xiantong Zhen,Yuxuan Shi,Jun Xu
fine-tuningmemorybackpropagationPMLRDBLP
6
泛读ICML 2024

Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension

This work aims to characterize the truthfulness of LLM generations rather than only grading final answers. Prior truthfulness research relies on external judges, QA-benchmark accuracy, or post-hoc verification — which tells you "whether it's wrong" but not what structural differences the internal representations show between truthful and untruthful generations.

Fan Yin,Jayanth Srinivasa,Kai-Wei Chang
llmtruthfulnessrepresentationPMLRDBLP
6
泛读ICML 2024

Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch

This work asks how a language model can absorb abilities from other homologous models without much extra training cost. Conventional routes — distillation, continued pre-training, multi-model ensembling — either cost a lot or add clear training and deployment complexity; the authors look for a near-"free lunch" form of capability transfer.

Le Yu,Bowen Yu,Haiyang Yu,Fei Huang,Yongbin Li
llmmodel-mergingknowledge-transferPMLRDBLP
6
泛读ICML 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration

This work addresses hidden attention sinks in LLMs that needlessly soak up attention mass and degrade generation quality. It was known that certain special tokens or positions act as "attention black holes", but mostly as a described phenomenon; the authors ask whether the problem can be fixed without training, via inference-time calibration alone.

Zhongzhi Yu,Zheng Wang,Yonggan Fu,Huihong Shi,Khalid Shaikh,Yingyan Celine Lin
attention-sinkattention-calibrationinferencePMLRDBLP
2
ICML 2024

Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations

Existing industrial deep learning recommendation models (DLRMs) do not scale performance with compute, and Transformer scaling successes in NLP/CV do not transfer directly to recommendation, where data is high-cardinality, non-stationary, and streaming.

Jiaqi Zhai,Lucy Liao,Xing Liu,Yueming Wang,Rui Li,Xuan Cao ... 2 authors omitted ... ,Fangda Gu,Jiayuan He,Yinghai Lu,Yu Shi
recommendationsequential-transducerscalingPMLRarXivDBLP
4
ICML 2024

LQER: Low-Rank Quantization Error Reconstruction for LLMs

W4A8 (4-bit weight, 8-bit activation) post-training quantization (PTQ) of LLMs generally loses accuracy; existing schemes either require extra training overhead such as distillation, grid search, or iterative gradient optimization, or depend on special scatter/gather memory operations, raising the deployment bar.
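The core mechanism suggested by the title can be sketched as follows (a minimal illustration with a crude uniform quantizer and a plain truncated SVD; LQER's actual error-approximation scheme may differ): approximate the quantization error `E = W - Q(W)` with a rank-k factor pair, so inference computes the cheap quantized matmul plus a thin low-rank correction, with no retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32))

step = 0.25
Wq = np.round(W / step) * step               # crude uniform weight quantizer
E = W - Wq                                   # quantization error

k = 8                                        # rank of the correction
U, S, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :k] * S[:k]                         # E ~= A @ B via truncated SVD
B = Vt[:k]

W_hat = Wq + A @ B                           # inference: x @ Wq + (x @ A) @ B
assert np.linalg.norm(W - W_hat) < np.linalg.norm(W - Wq)
```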

Cheng Zhang,Jianyi Cheng,George Anthony Constantinides,Yiren Zhao
quantizationlow-rankpost-training-quantizationPMLRarXivDBLP
4
ICML 2024

Self-Infilling Code Generation

Existing autoregressive code generation follows a strictly monotonic order and cannot fill in intermediate code based on later context during generation, limiting controllability and quality for complex code with forward/backward dependencies.

Lin Zheng,Jianbo Yuan,Zhi Zhang,Hongxia Yang,Lingpeng Kong
code-generationinfillingdecodingPMLRarXivDBLP
6
泛读ICML 2024

On the Emergence of Cross-Task Linearity in Pretraining-Finetuning Paradigm

Under the pretrain-finetune paradigm, the behavior of interpolating the weights of models fine-tuned on different tasks lacks systematic study; the feature-map regularities of fine-tuned models are unclear, leaving multi-task model merging without theoretical grounding.

Zhanpeng Zhou,Zijun Chen,Yilan Chen,Bo Zhang,Junchi Yan
pretrain-finetunelinearityrepresentationPMLRarXivDBLP
5
泛读ICML 2024

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters

Training one unified large model on heterogeneous multi-task/multi-domain data causes task conflicts; existing methods either incur large training overhead or require task identifiers at inference, so none fits both task-aware and task-agnostic inference.

Yuhang Zhou,Zihua Zhao,Siyuan Du,Haolin Li,Jiangchao Yao,Ya Zhang,Yanfeng Wang
loramixture-of-expertsheterogeneous-dataPMLRarXivDBLP
4
ICML 2024

ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL

Most existing RL training for LLM agents optimizes single-turn rewards, cannot assign credit across multi-turn interaction, and cannot teach the LLM to actively gather information over multiple turns, limiting multi-turn task performance.

Yifei Zhou,Andrea Zanette,Jiayi Pan,Sergey Levine,Aviral Kumar
rllanguage-agentmulti-turnPMLRarXivDBLP
5
泛读ICML 2024

Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models

Fine-tuning multimodal LLMs (MLLMs) on new tasks causes catastrophic forgetting; existing mitigations either store large amounts of old-task data or adjust many parameters, with heavy training and deployment costs.

Didi Zhu,Zhongyi Sun,Zexi Li,Tao Shen,Ke Yan,Shouhong Ding,Chao Wu,Kun Kuang
multimodal-llmcatastrophic-forgettingfine-tuningPMLRarXivDBLP
6
泛读ICML 2024

Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning

The cause of the training-loss spikes commonly seen when training neural networks with SGD, and their effect on generalization, lack a systematic explanation; prior work only observed the phenomenon under large-learning-rate GD, leaving the SGD mechanism and impact unclear.

Libin Zhu,Chaoyue Liu,Adityanarayanan Radhakrishnan,Mikhail Belkin
optimizationloss-spikessgdPMLRarXivDBLP
5
泛读ICML 2024

Asymmetry in Low-Rank Adapters of Foundation Models

Existing LoRA methods do not differentiate the roles of the two low-rank matrices A and B, treating them as equally important, so fine-tuning strategies lack targeting and carry unnecessary parameter overhead.
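The asymmetry can be illustrated on a toy regression (a sketch under my own assumptions, not the paper's experiment): freeze A at a random init so it acts as a fixed projection, and fit only B — here by closed-form least squares instead of SGD — which still recovers a useful LoRA-style update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 16, 4, 200
X = rng.normal(size=(n, d))
W_star = 0.1 * rng.normal(size=(d, d))       # target residual update
Y = X @ W_star

A = rng.normal(size=(r, d)) / np.sqrt(d)     # frozen random "feature" factor
Z = X @ A.T                                  # inputs as seen through A
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)    # train only B (closed form here)
pred = Z @ B                                 # equals X @ (A.T @ B), a LoRA-style update
assert np.linalg.norm(Y - pred) < np.linalg.norm(Y)
```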

Jiacheng Zhu,Kristjan H. Greenewald,Kimia Nadjahi,Haitz Sáez de Ocáriz Borde,Rickard Brüel Gabrielsson,Leshem Choshen,Marzyeh Ghassemi,Mikhail Yurochkin,Justin Solomon
lorapeftfine-tuningPMLRarXivDBLP
5
泛读ICML 2024

Language Models Represent Beliefs of Self and Others

Prior work only observed that LLMs show some theory-of-mind (ToM)-like behavior, without identifying the underlying mechanism, so it could not verify whether LLMs actually store internal representations of the beliefs of self and others.

Wentao Zhu,Zhining Zhang,Yizhou Wang
theory-of-mindrepresentationlinear-probingPMLRarXivDBLP
6
泛读ICML 2024

Emergence of In-Context Reinforcement Learning from Noise Distillation

The problem: how to do in-context RL without optimal-policy labels and without expensive RL-agent rollouts. Prior approaches typically require trajectories from strong policies or explicit action supervision — a high data bar that blocks scaling to broader environments and cheaper data sources.

Ilya Zisman,Vladislav Kurenkov,Alexander Nikulin,Viacheslav Sinii,Sergey Kolesnikov
in-context-learningrltransformerPMLRarXivDBLP
6
泛读ICML 2024

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models

The problem: after visual instruction tuning, VLLMs readily generate harmful content and are easily jailbroken by simple attacks, while existing practice treats safety alignment as an extra, expensive stage. The authors' diagnosis: much of the issue comes from harmful samples mixed into vision-language SFT data, and from multimodal fine-tuning washing out the safety boundaries the underlying LLM had already learned.

Yongshuo Zong,Ondrej Bohdal,Tingyang Yu,Yongxin Yang,Timothy M. Hospedales
vision-llmsafetyfine-tuningPMLRarXivDBLP
6
泛读ICML 2024

BiE: Bi-Exponent Block Floating-Point for Large Language Models Quantization

The problem in focus: designing low-bit numeric formats better suited to LLMs, so that quantization saves compute and bandwidth without wrecking accuracy. Fixed-point, floating-point, and single-exponent block floating-point formats trade dynamic range against quantization error too rigidly, and distort easily given the large cross-layer and cross-channel variation of LLM weights and activations.

Lancheng Zou,Wenqian Zhao,Shuo Yin,Chen Bai,Qi Sun,Bei Yu
quantizationllmlow-precisionPMLRDBLP
6
泛读ICML 2024

Fine-Tuning LLMs for Multi-Turn Dialogues: Optimizing Cross-Entropy Loss with KL Divergence for All Rounds of Responses

The problem: multi-turn dialogue SFT either underuses the full conversation or pays a long training time to use every round. Common practice either supervises only the final response, wasting intermediate supervision, or splits each round into a separate example and recomputes, which is costly and training-inefficient.
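The "all rounds in one pass" idea reduces to a loss mask (a sketch with a hypothetical token layout and stand-in per-token losses; the paper additionally adds a KL term on top of cross-entropy): encode the whole dialogue once and average the cross-entropy only over assistant tokens from every turn.

```python
import numpy as np

# roles marks which tokens belong to assistant turns (hypothetical layout)
roles = np.array([0, 0, 1, 1, 0, 0, 1, 1, 1])    # 0 = user, 1 = assistant
token_losses = np.arange(1.0, 10.0)              # stand-in per-token CE values

mask = (roles == 1).astype(float)
loss = (token_losses * mask).sum() / mask.sum()  # mean CE over all assistant turns
assert np.isclose(loss, (3 + 4 + 7 + 8 + 9) / 5)
```

One forward pass thus supervises every assistant turn, instead of re-encoding the dialogue prefix once per round.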

Zeyu Teng,Yong Song,Xiaozhou Ye,Ye Ouyang
sftmulti-turnkl-divergenceDOIDBLP
6
泛读ICML 2024

Distilling Multi-Step Reasoning Capabilities into Smaller Language Model

The focus: distilling the multi-step reasoning ability of large models into smaller ones, not just the final answers. Small-model distillation often stops at output imitation, so the student learns the answer distribution but not the intermediate reasoning structure, and degrades badly on compositional generalization and complex tasks.

Yauwai Yim,Zirui Wang
distillationreasoningcotDOIDBLP
6
泛读ICML 2024

Dual-Agent Multi-Hop Reasoning Based on Reward Shaping and Action Dropout

The problem: in multi-hop reasoning, a single agent easily drifts during search, and rewards are sparse, making training unstable and reasoning chains poor. Prior methods compensate within a single-agent framework via stronger search or more elaborate rewards, which is either expensive or still fails to prevent early error propagation.

Qiyue Sun,Yongli Wang,Dongmei Liu
multi-agentreasoningreward-shapingDOIDBLP
5
泛读ICML 2024

CaM: Cache Merging for Memory-efficient LLMs Inference

The problem: the KV cache dominates memory during LLM inference, and with long contexts and multi-turn usage GPU memory quickly becomes the bottleneck. Existing remedies — compression, pruning, paging — make a hard trade between accuracy and memory; the authors instead merge cache entries to remove redundancy while preserving generation quality as much as possible.
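A rough sketch of cache merging (the merge policy here — folding each evicted token's value into the retained entries weighted by key similarity — is my illustrative assumption, not necessarily CaM's exact rule): instead of dropping evicted KV entries outright, their values are absorbed into the kept ones.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4
K = rng.normal(size=(T, d))                  # cached keys
V = rng.normal(size=(T, d))                  # cached values

keep = np.array([0, 1, 2, 3])                # retained positions
evict = np.array([4, 5, 6, 7])               # positions to drop

# similarity of each evicted key to each kept key decides where it merges
sim = K[evict] @ K[keep].T
w = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)

V_merged = V[keep] + w.T @ V[evict]          # fold evicted values into kept ones
assert V_merged.shape == (len(keep), d)      # cache shrinks, context survives
```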

Yuxin Zhang,Yuxuan Du,Gen Luo,Yunshan Zhong,Zhenyu Zhang,Shiwei Liu,Rongrong Ji
kv-cacheinferencememoryPMLRDBLP
5
泛读ICML 2024

Diffusion Posterior Sampling is Computationally Intractable

The conclusion is blunt: diffusion posterior sampling is computationally intractable — at least in the paper's setting, no general efficient algorithm should be expected. This matters because much inverse-problem work assumes posterior sampling with diffusion models is both principled and feasible, while many methods actually rely on heuristic approximations whose theoretical limits have long been unclear.

Shivam Gupta,Ajil Jalal,Aditya Parulekar,Eric Price,Zhiyang Xun
diffusionsamplingtheoryPMLRDBLP
5
泛读ICML 2024

Variational Schrödinger Diffusion Models

这篇工作要解决的是:Schrödinger bridge 用在 diffusion model 里时,传输效率很好,但训练代价太高,因为前向 score 不可解,往往需要基于模拟轨迹的隐式损失。以前大家接受了这个成本,是因为 SB 的运输计划更优;这篇工作想把 SB 拉回到更可扩展的训练范式里。

Wei Deng,Weijian Luo,Yixin Tan,Marin Bilos,Yu Chen,Yuriy Nevmyvaka,Ricky T. Q. Chen
diffusionschrodinger-bridgevariational-inferencePMLRarXivDBLP
5
泛读ICML 2024

VideoPrism: A Foundational Visual Encoder for Video Understanding

这篇工作要解决的是:视频理解长期缺少一个足够强、可迁移的基础视觉编码器,很多方法仍依赖任务定制结构或从图像编码器勉强迁移。视频相比图像多了时间维和运动信息,单纯把 image encoder 套过去通常吃不满数据,也学不好时序抽象。

Long Zhao,Nitesh Bharadwaj Gundavarapu,Liangzhe Yuan,Hao Zhou,Shen Yan,Jennifer J. Sun ... 省略 9 位作者 ... ,Hartwig Adam,Mikhail Sirotenko,Ting Liu,Boqing Gong
video-encoderfoundation-modelpretrainingPMLRDBLP
5
泛读ICML 2024

SyCoCa: Symmetrizing Contrastive Captioners with Attentive Masking for Multimodal Alignment

这篇工作关注的是:多模态对齐里,captioning 和 contrastive learning 往往是不对称的,导致图文表征学到的对应关系不够稳定或不够细。很多方法用对比损失对齐全局,再靠 caption loss 学生成文本,但两者优化目标并不天然一致,作者试图把这两部分“对称化”。

Ziping Ma,Furong Xu,Jian Liu,Ming Yang,Qingpei Guo
multimodal-alignmentcontrastive-learningcaptioningPMLRDBLP
5
泛读ICML 2024

Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models

Jinhao Li,Haopeng Li,Sarah Monazam Erfani,Lei Feng,James Bailey,Feng Liu
vision-languagealignmentsimilarityPMLRDBLP
5
泛读ICML 2024

Accelerating Convergence of Score-Based Diffusion Models, Provably

Gen Li,Yu Huang,Timofey Efimov,Yuting Wei,Yuejie Chi,Yuxin Chen
diffusionconvergencetheoryPMLRDBLP
4
ICML 2024

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs

现有文生图扩散模型处理包含多物体、多属性、复杂关系的prompt时,组合生成能力差,容易出现物体缺失、属性错配,此前方案大多需要微调扩散模型,落地成本高。

Ling Yang,Zhaochen Yu,Chenlin Meng,Minkai Xu,Stefano Ermon,Bin Cui
Stanford Universitymultimodal-llmtext-to-imageplanningPMLRarXivDBLP
5
泛读ICML 2024

Parameter-Efficient Fine-Tuning with Controls

Chi Zhang,Jingpu Cheng,Yanyu Xu,Qianxiao Li
peftfine-tuningcontrolPMLRDBLP
6
泛读ICML 2024

When Will Gradient Regularization Be Harmful?

梯度正则化(GR)被广泛用于提升过参数化神经网络的泛化性,但实际使用中经常出现非预期的性能退化,此前研究未明确其失效场景和底层原因。

Yang Zhao,Hao Zhang,Xiuyuan Hu
gradient-regularizationoptimizerwarmupPMLRarXivDBLP
5
泛读OralICML 2024

Repoformer: Selective Retrieval for Repository-Level Code Completion

Repo 级代码补全里“默认总检索”的 RAG 做法既浪费算力,又容易被噪声上下文带偏,作者要解决的是:模型能否在需要时才检索,并且在检索结果不可靠时仍能稳健生成。

Di Wu,Wasi Uddin Ahmad,Dejiao Zhang,Murali Krishna Ramanathan,Xiaofei Ma
code-completionragselective-retrievalPMLRarXivDBLP
5
泛读ICML 2024

MLIP: Efficient Multi-Perspective Language-Image Pretraining with Exhaustive Data Utilization

缺少摘要信息,无法判断其具体要解决的语言-图像预训练瓶颈与现有方法的次优点。

Yu Zhang,Qi Zhang,Zixuan Gong,Yiwei Shi,Yepeng Liu,Duoqian Miao ... 省略 2 位作者 ... ,Kun Yi,Wei Fan,Liang Hu,Changwei Wang
cliplanguage-image-pretrainingdata-utilizationPMLRDBLP
5
泛读ICML 2024

AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers

缺少摘要信息,无法确认 AttnLRP 具体要修复 Transformer 可解释性里哪类已知失真(例如 attention≠explanation、梯度饱和、残差路径归因不守恒)。

Reduan Achtibat,Sayed Mohammad Vakilzadeh Hatefi,Maximilian Dreyer,Aakriti Jain,Thomas Wiegand,Sebastian Lapuschkin,Wojciech Samek
interpretabilityattentiontransformersPMLRDBLP
5
泛读ICML 2024

Characterizing Large Language Model Geometry Helps Solve Toxicity Detection and Generation

缺少摘要信息,无法确认其所谓“LLM geometry”具体指向哪类几何量(表示空间曲率、子空间结构、谱性质、或 logits/embedding 的分布形状),以及它如何对应毒性检测与生成的失败模式。

Randall Balestriero,Romain Cosentino,Sarath Shekkizhar
llmrepresentationtoxicityPMLRDBLP
5
泛读ICML 2024

Stochastic positional embeddings improve masked image modeling

这篇论文要解决的是:在 masked image modeling(MIM)里,固定位置编码会让模型过度依赖绝对位置,削弱对局部视觉统计和跨位置泛化的学习。以往 MIM 往往默认把 ViT 里的位置编码直接照搬过来,很少单独审视它是否和“遮挡后重建”这个目标匹配;但对预训练来说,位置先验过强会让表征学到捷径,而不是更稳健的内容建模。

Amir Bar,Florian Bordes,Assaf Shocher,Mido Assran,Pascal Vincent,Nicolas Ballas,Trevor Darrell,Amir Globerson,Yann LeCun
masked-modelingpositional-encodingvisionPMLRDBLP
5
泛读ICML 2024

Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies

这篇论文要回答的是:大模型的对抗鲁棒性究竟受什么限制,单纯扩大模型或做人类偏好对齐能不能持续解决问题。过去很多工作分别研究 scaling 或 alignment,但较少把两者放在同一个框架里看鲁棒性上限,因此很难判断鲁棒性提升是暂时的工程收益,还是存在更深的容量与目标错配约束。

Brian R. Bartoldson,James Diffenderfer,Konstantinos Parasyris,Bhavya Kailkhura
scaling-lawrobustnessalignmentPMLRDBLP
5
泛读ICML 2024

Neural Diffusion Models

这篇论文在解决一个更基础的问题:能否把 diffusion 的思想直接推广成更一般的 neural diffusion models,而不是局限在固定噪声过程或特定生成实现上。传统 diffusion 通常把前向加噪过程和反向去噪参数化写得比较死,这让理论统一性和建模灵活性都受限,尤其不利于把 diffusion 扩展到离散 token、结构化状态或更广泛的预训练场景。

Grigory Bartosh,Dmitry P. Vetrov,Christian A. Naesseth
diffusionarchitecturegenerative-modelingPMLRDBLP
5
泛读ICML 2024

Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation

这篇论文要解决的是:如何在不改模型参数、不显著拖慢解码的前提下,让 LLM 满足结构化约束生成。现有 constrained decoding 往往在‘约束强’和‘速度快’之间取舍明显:基于 rejection 或重采样的方法浪费 token 预算,基于大规模搜索或外部控制器的方法则侵入性强、延迟高。
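
“非侵入式”约束解码最常见的实现是 logits masking:不动模型参数,只在每步把语法不允许的 token 的 logit 置为 -inf。下面是贪心版本的极简示意(词表与允许集合均为编造):

```python
def constrained_greedy(logits, allowed):
    """把不在 allowed 集合中的 token logit 置为 -inf,
    再正常做贪心选择;与 rejection sampling 不同,不浪费任何已生成 token。"""
    masked = [l if i in allowed else float("-inf")
              for i, l in enumerate(logits)]
    return max(range(len(masked)), key=lambda i: masked[i])

logits = [2.0, 5.0, 1.0, 3.0]
tok = constrained_greedy(logits, allowed={0, 3})  # 全局最优 token 1 被约束排除,选 3
```

实际系统里 allowed 集合通常由 JSON schema 或文法状态机在每步增量计算得到,瓶颈因此落在“如何快速算出每步的合法 token 集”上。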

Luca Beurer-Kellner,Marc Fischer,Martin T. Vechev
constrained-decodinggenerationinferencePMLRDBLP
5
泛读ICML 2024

Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling

这篇论文关注的是连续值 sequence-to-sequence 建模中,如何同时处理长时依赖和多尺度结构。传统 Transformer 在连续控制、轨迹或传感器序列上往往算力开销大,离散 token 化又可能丢掉精细动态;而标准 state space model 虽擅长长序列,但对 seq2seq 的层级结构建模不一定充分。

Raunaq M. Bhirangi,Chenyu Wang,Venkatesh Pattabiraman,Carmel Majidi,Abhinav Gupta,Tess Lee Hellebrekers,Lerrel Pinto
state-space-modelsequence-modelinghierarchicalPMLRDBLP
5
泛读ICML 2024

Multi-Patch Prediction: Adapting Language Models for Time Series Representation Learning

这篇论文要解决的是:如何把语言模型里有效的预测式预训练迁移到时间序列表征学习,而不是继续依赖只适合局部平滑信号的重建目标。传统时间序列 SSL 常做 point-wise reconstruction 或对比学习,但这些目标容易过度关注局部细节,未必能学到跨时间尺度的语义结构;语言模型式的预测目标值得重新引入。

Yuxuan Bian,Xuan Ju,Jiangtong Li,Zhijian Xu,Dawei Cheng,Qiang Xu
time-serieslanguage-modelingrepresentation-learningPMLRDBLP
5
泛读ICML 2024

Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features

Simone Bombari,Marco Mondelli
attentionword-sensitivityrandom-featuresPMLRDBLP
5
泛读OralICML 2024

Scalable AI Safety via Doubly-Efficient Debate

Jonah Brown-Cohen,Geoffrey Irving,Georgios Piliouras
ai-safetydebatescalable-oversightPMLRDBLP
5
泛读ICML 2024

Semantically-correlated memories in a dense associative model

Thomas F. Burns
associative-memoryhopfield-networkdense-associative-modelPMLRDBLP
5
泛读ICML 2024

Learning Associative Memories with Gradient Descent

Vivien Cabannes,Berfin Simsek,Alberto Bietti
associative-memorygradient-descentlearning-theoryPMLRDBLP
5
泛读ICML 2024

Enhancing Cross-Modal Fine-Tuning with Gradually Intermediate Modality Generation

Lincan Cai,Shuang Li,Wenxuan Ma,Jingxuan Kang,Binhui Xie,Zixun Sun,Chengwei Zhu
cross-modalfine-tuningmodality-generationPMLRDBLP
5
泛读SpotlightICML 2024

Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions

Yongqiang Cai
universal-approximationvocabularycompositionalityPMLRDBLP
5
泛读ICML 2024

Successor Features for Efficient Multi-Subject Controlled Text Generation

Meng Cao,Mehdi Fatemi,Jackie C. K. Cheung,Samira Shabanian
controlled-generationsuccessor-featurestext-generationPMLRDBLP
5
泛读ICML 2024

How Smooth Is Attention?

Valérie Castin,Pierre Ablin,Gabriel Peyré
attentionsmoothnesstheoryPMLRDBLP
5
泛读ICML 2024

On the Implicit Bias of Adam

Matias D. Cattaneo,Jason M. Klusowski,Boris Shigida
adamimplicit-biasoptimizerPMLRDBLP
5
泛读ICML 2024

Diffusive Gibbs Sampling

Wenlin Chen,Mingtian Zhang,Brooks Paige,José Miguel Hernández-Lobato,David Barber
diffusionsamplinggibbs-samplingPMLRDBLP
5
泛读SpotlightICML 2024

Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations

这篇论文要回答的核心问题是:模型给出的自然语言解释,是否真的反映了其决策机制,而不只是事后编造一个听起来合理的理由。以往工作通常用“解释是否能说服人”或“解释是否和标签相关”来评估,但这些指标回避了一个更严格的问题:如果把解释作为干预变量去改动,模型行为会不会按解释所声称的方式发生变化。

Yanda Chen,Ruiqi Zhong,Narutatsu Ri,Chen Zhao,He He,Jacob Steinhardt,Zhou Yu,Kathleen R. McKeown
explanationssimulatabilityinterpretabilityPMLRDBLP
5
泛读ICML 2024

Layerwise Change of Knowledge in Neural Networks

这篇论文关注一个很基础但长期缺少定量答案的问题:神经网络的知识在各层是如何变化、转移和重写的。过去关于“某层存知识”的说法往往依赖 probing 或个别案例,很难区分是真正的知识更新,还是下游读出方式变了。

Xu Cheng,Lei Cheng,Zhaoran Peng,Yang Xu,Tian Han,Quanshi Zhang
knowledgelayerwiserepresentationPMLRDBLP
5
泛读ICML 2024

PICLe: Eliciting Diverse Behaviors from Large Language Models with Persona In-Context Learning

这篇论文研究的是如何在不改模型参数的前提下,更系统地诱导 LLM 产生多样行为。普通 prompt engineering 往往靠措辞微调去试探不同风格,但可控性弱、覆盖面窄;PICLe 试图用 persona in-context learning,把“角色设定”变成一种更稳定的行为控制接口。

Hyeong Kyu Choi,Yixuan Li
personain-context-learningpromptingPMLRDBLP
5
泛读ICML 2024

Studying K-FAC Heuristics by Viewing Adam through a Second-Order Lens

Adam 优化器在实践中效果好但理论理解不足,作者试图通过二阶优化(K-FAC)的视角重新审视 Adam,解释 Adam 中一些启发式设计为何有效,并探索 K-FAC 的哪些技巧可以反向迁移到 Adam。

Ross M. Clarke,José Miguel Hernández-Lobato
University of Cambridgeoptimizeradamk-facPMLRDBLP
5
泛读ICML 2024

Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers

高分辨率图像生成通常依赖 latent space diffusion(如 Stable Diffusion),但 latent 编码器会引入信息损失和额外复杂度。这篇工作探索直接在像素空间做高分辨率 diffusion,核心挑战是计算量随分辨率平方增长。

Katherine Crowson,Stefan Andreas Baumann,Alex Birch,Tanishq Mathew Abraham,Daniel Z. Kaplan,Enrico Shippole
Stability AIdiffusion-transformerimage-generationscalingPMLRDBLP
5
泛读ICML 2024

Learning Latent Space Hierarchical EBM Diffusion Models

在潜在空间中结合 EBM(能量模型)和 diffusion 模型进行生成建模。传统 EBM 训练不稳定、采样困难,而纯 diffusion 模型缺乏 EBM 的灵活能量函数建模能力。这篇工作尝试在层次化潜在空间中融合两者。

Jiali Cui,Tian Han
Stevens Institute of Technologyebmdiffusionlatent-spacePMLRDBLP
5
泛读ICML 2024

Larimar: Large Language Models with Episodic Memory Control

LLM 在部署后需要快速整合新知识(如事实更新),但传统方法要么需要微调(慢且可能遗忘),要么依赖检索增强(受限于检索质量)。Larimar 提出用外部 episodic memory 模块让 LLM 实现快速、可控的知识更新。

Payel Das,Subhajit Chaudhury,Elliot Nelson,Igor Melnyk,Sarathkrishna Swaminathan,Sihui Dai ... 省略 2 位作者 ... ,Vijil Chenthamarakshan,Jirí Navrátil,Soham Dan,Pin-Yu Chen
IBM Researchepisodic-memoryllmcontinual-learningPMLRDBLP
5
泛读ICML 2024

Multicalibration for Confidence Scoring in LLMs

LLM 输出的置信度分数(如 token 概率)通常校准不良,尤其是在不同子群体上表现不一致。Multicalibration 要求置信度在所有可识别的子群体上都校准良好,这篇工作将 multicalibration 框架应用于 LLM 的置信度评分。
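
multicalibration 与普通校准的差别可以这样示意:不只比较整体的平均置信度与准确率,而是对每个可识别子群体分别检查。数据为编造的玩具例子,非论文实验:

```python
def group_calibration_error(conf, correct, groups):
    """对每个子群体分别计算 |平均置信度 - 平均正确率|;
    multicalibration 要求所有群体上这个误差都小。"""
    errs = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        avg_conf = sum(conf[i] for i in idx) / len(idx)
        accuracy = sum(correct[i] for i in idx) / len(idx)
        errs[g] = abs(avg_conf - accuracy)
    return errs

conf    = [0.9, 0.9, 0.6, 0.6]
correct = [1,   1,   0,   1]
groups  = ["a", "a", "b", "b"]
errs = group_calibration_error(conf, correct, groups)
```

整体校准好并不蕴含每个群体都好,反之亦然;这正是该框架要在普通校准之外额外约束的部分。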

Gianluca Detommaso,Martin Bertran Lopez,Riccardo Fogliato,Aaron Roth
University of PennsylvaniaMicrosoft Researchcalibrationllmconfidence-scoringPMLRDBLP
5
泛读OralICML 2024

Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

深度 RL 中的 value function 通常用回归(MSE loss)训练,但在大规模环境中容易出现训练不稳定和表示退化。这篇工作提出用分类(cross-entropy loss)替代回归来训练 value function,显著改善了 scalability。
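
“用分类替代回归”的一种常见实现,是把标量回报编码成相邻两个 bin 上的软标签(two-hot),再用 cross-entropy 训练。下面的 bin 划分是假设的示例,非论文的具体配置:

```python
def two_hot(value, bin_edges):
    """把标量 value 按所在区间线性分配到相邻两个原子上,
    得到一个可用 cross-entropy 拟合的目标分布。"""
    probs = [0.0] * len(bin_edges)
    for i in range(len(bin_edges) - 1):
        lo, hi = bin_edges[i], bin_edges[i + 1]
        if lo <= value <= hi:
            w = (value - lo) / (hi - lo)   # 离左端点越远,右原子权重越大
            probs[i] = 1 - w
            probs[i + 1] = w
            break
    return probs

p = two_hot(2.5, bin_edges=[0, 1, 2, 3, 4])   # 2.5 平分在原子 2 和 3 上
```

网络输出各原子的 softmax 分布去拟合这个目标,预测值则由分布的期望还原,从而绕开 MSE 回归在大规模训练中的不稳定。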

Jesse Farebrother,Jordi Orbay,Quan Vuong,Adrien Ali Taïga,Yevgen Chebotar,Ted Xiao ... 省略 2 位作者 ... ,Pablo Samuel Castro,Aleksandra Faust,Aviral Kumar,Rishabh Agarwal
Google DeepMindreinforcement-learningvalue-learningclassificationPMLRDBLP
5
泛读ICML 2024

Privacy Backdoors: Stealing Data with Corrupted Pretrained Models

Shanglun Feng,Florian Tramèr
securitypretrained-modelsprivacyPMLRDBLP
5
泛读ICML 2024

Towards Theoretical Understandings of Self-Consuming Generative Models

Shi Fu,Sen Zhang,Yingjie Wang,Xinmei Tian,Dacheng Tao
data-qualitygenerative-modelsself-trainingPMLRDBLP
5
泛读OralICML 2024

A Touch, Vision, and Language Dataset for Multimodal Alignment

Letian Fu,Gaurav Datta,Huang Huang,William Chung-Ho Panitch,Jaimyn Drake,Joseph Ortiz,Mustafa Mukadam,Mike Lambeta,Roberto Calandra,Ken Goldberg
multimodalalignmentdatasetPMLRDBLP
5
泛读ICML 2024

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

Ziqi Gao,Qichao Wang,Aochuan Chen,Zijing Liu,Bingzhe Wu,Liang Chen,Jia Li
peftfine-tuningfourierPMLRDBLP
5
泛读ICML 2024

LLark: A Multimodal Instruction-Following Language Model for Music

Joshua Patrick Gardner,Simon Durand,Daniel Stoller,Rachel M. Bittner
music-lmmultimodalinstruction-followingPMLRDBLP
5
泛读SpotlightICML 2024

Memorization Through the Lens of Curvature of Loss Function Around Samples

这篇论文要回答的问题很直接:样本是否被模型记住,能不能从该样本附近的损失曲率看出来。以往对 memorization 的分析多依赖成员推断、重复生成或训练后行为统计,这些方法能发现“记住了没有”,但不太解释“为什么这个样本更容易被记住”;作者转而看局部几何,试图把记忆现象和优化景观联系起来。

Isha Garg,Deepak Ravikumar,Kaushik Roy
memorizationloss-curvaturegeneralizationPMLRDBLP
5
泛读ICML 2024

Simplicity Bias via Global Convergence of Sharpness Minimization

这篇论文讨论的是一个基础但长期难讲清的问题:sharpness minimization 为什么会带来 simplicity bias,也就是偏向更简单的解。过去大家普遍接受“平坦极小值更能泛化”的经验,但很多结论停留在局部分析或经验现象;作者想给出更强的全局收敛视角,说明这种偏好不是偶然副产物,而是优化目标本身诱导出来的。

Khashayar Gatmiry,Zhiyuan Li,Sashank J. Reddi,Stefanie Jegelka
simplicity-biassharpness-minimizationgeneralizationPMLRDBLP
5
泛读ICML 2024

Data-efficient Large Vision Models through Sequential Autoregression

这篇论文的核心问题是:视觉大模型能否通过 sequential autoregression 获得更好的数据效率,而不是继续依赖大规模监督或重型对比/掩码训练。视觉领域长期默认 AR 不如判别式或 masked 建模高效,但随着统一 token 建模和 multimodal LM 兴起,这个判断值得重新审视。

Zhiwei Hao,Jianyuan Guo,Chengcheng Wang,Yehui Tang,Han Wu,Han Hu,Kai Han,Chang Xu
visionautoregressivedata-efficiencyPMLRDBLP
5
泛读ICML 2024

GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements

Alexander Havrilla,Sharath Chandra Raparthy,Christoforos Nalmpantis,Jane Dwivedi-Yu,Maksym Zhuravinskyi,Eric Hambro,Roberta Raileanu
llmreasoningrefinementPMLRDBLP
5
泛读ICML 2024

Understanding Diffusion Models by Feynman's Path Integral

Yuji Hirono,Akinori Tanaka,Kenji Fukushima
diffusiontheoryinterpretabilityPMLRDBLP
3
ICML 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

现有LLM压缩方法仅关注保留正常任务性能,忽略压缩对模型安全、可信度的影响,没有系统的评估框架量化压缩后模型的可信风险。

Junyuan Hong,Jinhao Duan,Chenhui Zhang,Zhangheng Li,Chulin Xie,Kelsey Lieberman ... 省略 5 位作者 ... ,Dan Hendrycks,Dawn Song,Zhangyang Wang,Bo Li
llmcompressionquantizationPMLRarXivDBLP
5
泛读ICML 2024

InstructSpeech: Following Speech Editing Instructions via Large Language Models

缺少摘要信息,无法可靠确定其具体要解决的 pretrain/建模核心问题。

Rongjie Huang,Ruofan Hu,Yongqi Wang,Zehan Wang,Xize Cheng,Ziyue Jiang ... 省略 1 位作者 ... ,Dongchao Yang,Luping Liu,Peng Gao,Zhou Zhao
speech-editingllmaudio-generationPMLRDBLP
5
泛读ICML 2024

Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks

这篇工作要回答的是:weight decay 在深度网络里到底平衡了什么,为什么它几乎总是有效,但现有解释往往只覆盖线性模型或简单范数正则视角。训练大模型时,大家知道不加 weight decay 常常更不稳或泛化更差,但“它如何在网络不同部分分配学习力度”这个机制长期并不清楚。

Atli Kosson,Bettina Messmer,Martin Jaggi
weight-decayoptimizationtraining-dynamicsPMLRDBLP
5
泛读ICML 2024

Understanding the Effects of Iterative Prompting on Truthfulness

这篇工作要回答的是:iterative prompting 到底会让模型更真实,还是只是让它更擅长把错误答案包装得更完整。过去很多 work 观察到多轮提示、self-reflection、chain-of-thought 能提升某些推理任务表现,但它们对 truthfulness 的影响并不清楚,因为“答得更长、更像推理”不等于“更接近事实”。

Satyapriya Krishna,Chirag Agarwal,Himabindu Lakkaraju
promptingtruthfulnessllmPMLRDBLP
5
泛读ICML 2024

No Free Prune: Information-Theoretic Barriers to Pruning at Initialization

这篇工作要解决的是:在随机初始化时进行剪枝,是否存在不可绕过的信息论障碍。lottery ticket 之后,很多人希望在训练前就找到好子网络,以减少训练成本;但经验上这件事高度不稳定,往往需要额外先验、打分器或训练后信息,说明“免费剪枝”可能在根上就受限。

Tanishq Kumar,Kevin Luo,Mark Sellke
pruninginformation-theoryinitializationPMLRDBLP
5
泛读ICML 2024

Modeling Caption Diversity in Contrastive Vision-Language Pretraining

Samuel Lavoie,Polina Kirichenko,Mark Ibrahim,Mido Assran,Andrew Gordon Wilson,Aaron C. Courville,Nicolas Ballas
contrastive-learningclipcaption-diversityPMLRDBLP
5
泛读ICML 2024

Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks

Hojoon Lee,Hyeonseo Cho,Hyunseung Kim,Donghu Kim,Dugki Min,Jaegul Choo,Clare Lyle
plasticitycontinual-learningtraining-dynamicsPMLRDBLP
4
ICML 2024

DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning

现有具身机器人多模态预训练用多个独立目标分别优化局部/全局任务进展、时序一致性、语言grounding三个核心目标,容易出现次优解,无法同时满足三个需求。

Jianxiong Li,Jinliang Zheng,Yinan Zheng,Liyuan Mao,Xiao Hu,Sijie Cheng ... 省略 2 位作者 ... ,Yu Liu,Jingjing Liu,Ya-Qin Zhang,Xianyuan Zhan
multimodal-pretrainingembodiedpreference-learningPMLRarXivDBLP
2
OralICML 2024

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

此前没有大规模、低误差的方法估算真实语料中LLM生成或修改内容的占比,ChatGPT发布后AI会议审稿中LLM的实际使用情况未知。

Weixin Liang,Zachary Izzo,Yaohui Zhang,Haley Lepp,Hancheng Cao,Xuandong Zhao ... 省略 2 位作者 ... ,Sheng Liu,Zhi Huang,Daniel A. McFarland,James Y. Zou
llmdata-qualityai-generated-textPMLRarXivDBLP
3
ICML 2024

Graph-enhanced Large Language Models in Asynchronous Plan Reasoning

异步规划需要同时考虑串行和并行任务以优化时间成本,现有LLM在这类任务上表现很差,此前没有系统的基准和针对性优化方法。

Fangru Lin,Emanuele La Malfa,Valentin Hofmann,Elle Michelle Yang,Anthony G. Cohn,Janet B. Pierrehumbert
University of LeedsllmplanninggraphPMLRarXivDBLP
5
泛读ICML 2024

From Generalization Analysis to Optimization Designs for State Space Models

Fusheng Liu,Qianxiao Li
ssmstate-space-modelgeneralizationPMLRDBLP
5
泛读OralICML 2024

How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?

这篇论文关注的核心问题是:当“诚实”和“有帮助”发生冲突时,LLM 实际会优先遵循什么原则,以及这种取舍是否稳定可控。过去对齐研究更多把安全、真实性、帮助性分开测,默认它们可以同时优化;这篇工作指出,真实交互里冲突场景很常见,如果不单独分析,模型表面的高分可能掩盖了系统性的取舍偏差。

Ryan Liu,Theodore R. Sumers,Ishita Dasgupta,Thomas L. Griffiths
alignmenthonestyhelpfulnessPMLRDBLP
5
泛读ICML 2024

Position: Data-driven Discovery with Large Generative Models

这篇 Position paper 讨论的核心问题是:大生成模型能否成为数据驱动科学发现的新工具,而不是只做问答或内容生成。这个问题过去常被愿景化讨论,但缺少清晰的能力边界和方法论框架;现在值得重看,是因为 foundation model 已经开始接触科研自动化、假设生成和实验设计等高价值环节。

Bodhisattwa Prasad Majumder,Harshit Surana,Dhruv Agarwal,Sanchaita Hazra,Ashish Sabharwal,Peter Clark
generative-modelsscientific-discoveryfoundation-modelsPMLRDBLP
5
泛读ICML 2024

Large Language Models are Geographically Biased

这篇论文的核心问题是:LLM 的偏见不只体现在敏感属性问答上,还会系统性地体现在地理空间预测中,而且这种偏差可以用真实世界地理数据做较客观的检验。过去偏见评测常停留在模板化社会属性任务上,这篇工作换了一个更结构化的视角:看模型如何把文化、种族、语言、政治、宗教等因素投射到空间上。

Rohin Manvi,Samar Khanna,Marshall Burke,David B. Lobell,Stefano Ermon
llmbiasevaluationPMLRarXivDBLP
5
泛读ICML 2024

OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization

这篇论文要解决的是:如何对视觉和语言模型做一次性的结构化剪枝,同时尽量保住性能,而不依赖昂贵的反复搜索或重训练。过去很多 pruning 方法要么是非结构化、部署收益有限,要么需要多轮迭代和大量校准;在大模型上,这两类代价都很难接受。

Xiang Meng,Shibal Ibrahim,Kayhan Behdin,Hussein Hazimeh,Natalia Ponomareva,Rahul Mazumder
pruningvlmcompressionPMLRDBLP
5
泛读ICML 2024

Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models

Yifei Ming,Yixuan Li
vlmretrievaltask-adaptationPMLRDBLP
5
泛读ICML 2024

Provable Interactive Learning with Hindsight Instruction Feedback

Dipendra Misra,Aldo Pacchiano,Robert E. Schapire
interactive-learningfeedbackinstruction-followingPMLRDBLP
5
泛读ICML 2024

A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models

Tae Hong Moon,Moonseok Choi,EungGu Yun,Jongmin Yoon,Gayoung Lee,Jaewoong Cho,Juho Lee
diffusionearly-exitingsampling-accelerationPMLRDBLP
5
泛读SpotlightICML 2024

Position: Levels of AGI for Operationalizing Progress on the Path to AGI

Meredith Ringel Morris,Jascha Sohl-Dickstein,Noah Fiedel,Tris Warkentin,Allan Dafoe,Aleksandra Faust,Clément Farabet,Shane Legg
agievaluationposition-paperPMLRDBLP
5
泛读ICML 2024

Aligning Transformers with Weisfeiler-Leman

Luis Müller,Christopher Morris
transformergraph-isomorphismexpressivenessPMLRDBLP
5
泛读ICML 2024

Diffusion Rejection Sampling

Byeonghu Na,Yeongmin Kim,Minsang Park,DongHyeok Shin,Wanmo Kang,Il-Chul Moon
diffusionrejection-samplingscore-basedPMLRDBLP
5
泛读SpotlightICML 2024

Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture

Sangjun Park,JinYeong Bak
memorycontinual-learninglong-contextPMLRDBLP
5
泛读SpotlightICML 2024

Exploiting Code Symmetries for Learning Program Semantics

这篇论文要解决的是:程序语义学习为什么常被表层语法差异干扰,以及如何显式利用代码中的对称性来学到更稳健的语义表示。过去很多 code model 主要从 token 序列或 AST 里学统计共现,容易把变量重命名、语句交换等语法变体当成新模式,而不是同一语义的等价表达。

Kexin Pei,Weichen Li,Qirui Jin,Shuyang Liu,Scott Geng,Lorenzo Cavallaro,Junfeng Yang,Suman Jana
codeprogram-semanticssymmetryPMLRDBLP
5
泛读ICML 2024

BetterV: Controlled Verilog Generation with Discriminative Guidance

这篇论文解决的是 Verilog 生成里一个很实际的问题:通用代码生成模型往往能生成语法像样的 HDL,但功能正确率和可控性不足。过去做法常靠纯生成模型加后验筛选,命中率低、搜索成本高;作者尝试把判别式信号直接用于生成过程,引导模型更稳定地产生满足设计约束的 Verilog。

Zehua Pei,Hui-Ling Zhen,Mingxuan Yuan,Yu Huang,Bei Yu
code-generationguidanceverilogPMLRDBLP
5
泛读ICML 2024

CaPS: Collaborative and Private Synthetic Data Generation from Distributed Sources

这篇论文解决的是分布式场景下的合成数据生成:多个数据源都想参与,但又不能直接共享原始数据。过去常见做法是在 centralized data 上训练生成器,或者采用联邦学习但不专门优化合成数据质量与隐私;作者关注的是如何协作生成 synthetic data,同时保护各方私有数据。

Sikha Pentyala,Mayana Pereira,Martine De Cock
synthetic-dataprivacydata-generationPMLRDBLP
5
泛读ICML 2024

Interpreting and Improving Diffusion Models from an Optimization Perspective

这篇论文要解决的是:扩散模型通常从概率建模角度解释,但很多训练与采样现象更像优化过程,缺少统一视角。过去大家知道 diffusion 有效,却不总能说清楚哪些成功来自目标函数、哪些来自离散时间近似、哪些来自优化路径;作者试图用优化视角重新解释并改进 diffusion models。

Frank Permenter,Chenyang Yuan
diffusionoptimizationtraining-dynamicsPMLRDBLP
5
泛读ICML 2024

tinyBenchmarks: evaluating LLMs with fewer examples

这篇论文解决的是 LLM 评测成本过高、样本需求过大的问题。很多 benchmark 需要大量题目才能稳定区分模型,但在快速迭代、模型筛选和 ablation 阶段,这种评测太慢也太贵;作者想用更少的样本得到足够可靠的模型排序或性能估计。
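
最朴素的做法是随机抽一小部分题目估计整体准确率,tinyBenchmarks 则在此之上做更聪明的选题与校正。下面只示意“少样本估计”这一思路本身(数据为编造,非论文方法):

```python
import random

def estimate_accuracy(item_correct, sample_size, seed=0):
    """从整套 benchmark 中简单随机抽 sample_size 道题,
    用子集准确率估计全集准确率。"""
    rng = random.Random(seed)
    sample = rng.sample(range(len(item_correct)), sample_size)
    return sum(item_correct[i] for i in sample) / sample_size

full = [1] * 700 + [0] * 300       # 假设全集真实准确率为 0.7
est = estimate_accuracy(full, sample_size=100)
```

简单随机抽 100 题的标准误约为 sqrt(0.7 * 0.3 / 100) ≈ 0.046;论文的贡献在于用更少、选得更好的题目达到更低的估计误差。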

Felipe Maia Polo,Lucas Weber,Leshem Choshen,Yuekai Sun,Gongjun Xu,Mikhail Yurochkin
benchmarkllm-evaluationefficiencyPMLRDBLP
5
泛读ICML 2024

The Entropy Enigma: Success and Failure of Entropy Minimization

这篇论文讨论 entropy minimization 为什么有时非常有效、有时又明显失败。熵最小化常被用在半监督学习、测试时适应和自训练里,直觉上让模型更自信似乎能提升决策边界;但实践中它也经常放大错误、导致塌缩。作者要做的是解释这种成败分化,而不是再给一个经验技巧。
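
这种成败分化可以用熵本身的性质说明:降低预测熵既可以通过“变得正确且自信”实现,也可以通过塌缩到任意单一类别实现。一个小例子(分布为编造):

```python
import math

def entropy(probs):
    """离散分布的香农熵(自然对数)。"""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident_right = [0.9, 0.05, 0.05]   # 自信且(假设)正确
collapsed       = [1.0, 0.0, 0.0]     # 熵为 0,但选中的类别可以是错的
uniform         = [1 / 3, 1 / 3, 1 / 3]
```

uniform 的熵最大,collapsed 的熵恰好为 0;只看熵无法区分“变准”与“塌缩”,这正是该目标时灵时败的根源。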

Ori Press,Ravid Shwartz-Ziv,Yann LeCun,Matthias Bethge
entropy-minimizationgeneralizationself-trainingPMLRDBLP
5
泛读ICML 2024

Momentor: Advancing Video Large Language Model with Fine-Grained Temporal Reasoning

这篇论文要解决的是 Video LLM 在细粒度时间推理上的短板。现有视频大模型往往能抓住粗略场景和对象,但对事件顺序、持续时间、瞬时变化和跨片段因果关系理解不足,原因通常是视觉 token 压缩过强、时间监督稀疏、训练目标偏静态图文对齐。

Long Qian,Juncheng Li,Yu Wu,Yaobo Ye,Hao Fei,Tat-Seng Chua,Yueting Zhuang,Siliang Tang
video-llmtemporal-reasoningmultimodalPMLRDBLP
5
泛读ICML 2024

Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes

Zhen Qin,Daoyuan Chen,Bingchen Qian,Bolin Ding,Yaliang Li,Shuiguang Deng
federated-learningllm-finetuningcommunication-efficiencyPMLRDBLP
5
泛读ICML 2024

To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO

Zi-Hao Qiu,Siqi Guo,Mao Xu,Tuo Zhao,Lijun Zhang,Tianbao Yang
temperature-scalingfoundation-modeldroPMLRDBLP
5
泛读ICML 2024

Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models

Neta Shaul,Uriel Singer,Ricky T. Q. Chen,Matthew Le,Ali K. Thabet,Albert Pumarola,Yaron Lipman
diffusionsamplingflow-modelsPMLRDBLP
5
泛读ICML 2024

Thermometer: Towards Universal Calibration for Large Language Models

Maohao Shen,Subhro Das,Kristjan H. Greenewald,Prasanna Sattigeri,Gregory W. Wornell,Soumya Ghosh
llmcalibrationevaluationPMLRDBLP
5
泛读ICML 2024

Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts

Jiang-Xin Shi,Tong Wei,Zhi Zhou,Jie-Jing Shao,Xin-Yan Han,Yufeng Li
foundation-modelsfine-tuninglong-tailPMLRDBLP
5
泛读ICML 2024

OT-CLIP: Understanding and Generalizing CLIP via Optimal Transport

Liangliang Shi,Jack Fan,Junchi Yan
clipmultimodaloptimal-transportPMLRDBLP
5
泛读ICML 2024

Deletion-Anticipative Data Selection with a Limited Budget

Rachael Hwee Ling Sim,Jue Fan,Xiao Tian,Patrick Jaillet,Bryan Kian Hsiang Low
data-selectiondata-qualitydataset-curationPMLRDBLP
5
泛读ICML 2024

Representation Surgery: Theory and Practice of Affine Steering

现有表征steering方法用来修正LLM的不良行为(如毒性、偏见),但没有统一的理论框架指导steering函数的设计,不同方法的最优性未被证明。

Shashwat Singh,Shauli Ravfogel,Jonathan Herzig,Roee Aharoni,Ryan Cotterell,Ponnurangam Kumaraguru
ETH Zurichrepresentation-steeringalignmentcontrolPMLRarXivDBLP
5
泛读SpotlightICML 2024

Sparse is Enough in Fine-tuning Pre-trained Large Language Models

Weixi Song,Zuchao Li,Lefei Zhang,Hai Zhao,Bo Du
sparse-finetuningllmparameter-efficiencyPMLRDBLP
4
ICML 2024

SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

现有LLM结构化剪枝多针对参数或注意力头粒度,难以实现可观的端到端推理提速,且未利用Transformer固有的块级冗余特性。

Jiwon Song,Kyungseok Oh,Taesu Kim,Hyungjun Kim,Yulhwa Kim,Jae-Joon Kim
pruningllm-compressiontransformer-blocksPMLRarXivDBLP
5
泛读ICML 2024

Grokking Group Multiplication with Cosets

现有可解释性方法无法完全拆解grokking(泛化延迟涌现)的内部机制,此前算法类任务的可解释性研究未覆盖群运算这类结构复杂的离散符号任务。

Dashiell Stander,Qinan Yu,Honglu Fan,Stella Biderman
EleutherAIgrokkinginterpretabilityreverse-engineeringPMLRarXivDBLP
5
泛读ICML 2024

RLVF: Learning from Verbal Feedback without Overgeneralization

这篇论文要解决的问题是:如何让 LLM 根据自然语言反馈调整行为,同时不要把反馈错误地泛化到无关场景。以往最省事的做法是把反馈直接塞进 prompt,但这会把局部偏好当成全局规则,例如“给老板写邮件不要用 emoji”会外溢到所有写作任务;而标准 RLHF 又要求成对偏好或奖励标注,定制成本太高。

Moritz Stephan,Alexander Khazatsky,Eric Mitchell,Annie S. Chen,Sheryl Hsu,Archit Sharma,Chelsea Finn
verbal-feedbackalignmentllm-customizationPMLRarXivDBLP
5
泛读ICML 2024

Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models

这篇论文要解决的问题很明确:能不能不用重新训练或重对齐整个模型,只通过神经层面的干预就稳定降低 LLM 的毒性输出。以往常见方案是 RLHF、拒答微调或解码时过滤,这些方法要么成本高、要么牺牲能力、要么只能事后拦截,难以做到轻量且模型无关。

Xavier Suau,Pieter Delobelle,Katherine Metcalf,Armand Joulin,Nicholas Apostoloff,Luca Zappella,Pau Rodríguez
toxicityinterpretabilityneuron-interventionPMLRarXivDBLP
5
泛读ICML 2024

A Universal Class of Sharpness-Aware Minimization Algorithms

这篇论文要解决的问题是:SAM 一类 sharpness-aware 优化方法虽然有效,但现有 sharpness 定义太窄,也常常不尊重神经网络的参数重标定不变性。结果是你优化到的“平坦性”可能只是参数坐标系里的假象,而不是和泛化真正相关的几何性质。

Behrooz Tahmasebi,Ashkan Soleymani,Dara Bahri,Stefanie Jegelka,Patrick Jaillet
samsharpness-awareoptimizationPMLRarXivDBLP
5
泛读ICML 2024

Merging Multi-Task Models via Weight-Ensembling Mixture of Experts

这篇论文讨论的是多任务模型合并中的一个关键问题:如何在不重新联合训练所有任务的前提下,把多个专用模型合成为一个更通用的模型,同时尽量避免任务间权重冲突。传统权重平均或 task arithmetic 经常在任务相近时有效,但任务差异一大就容易互相干扰。

Anke Tang,Li Shen,Yong Luo,Nan Yin,Lefei Zhang,Dacheng Tao
model-mergingmoemulti-taskPMLRDBLP
5
泛读ICML 2024

Executable Code Actions Elicit Better LLM Agents

这篇论文解决的是 LLM agent 动作空间设计过于僵硬的问题:让模型输出 JSON 或固定格式文本,虽然便于解析,但动作种类受预定义工具限制,也很难组合、修改和复用。随着 agent 任务越来越复杂,这种“把动作当字符串槽位填空”的接口开始成为能力上限。

Xingyao Wang,Yangyi Chen,Lifan Yuan,Yizhe Zhang,Yunzhu Li,Hao Peng,Heng Ji
llm-agenttool-usecode-executionPMLRarXivDBLP
5
泛读ICML 2024

Diagnosing the Compositional Knowledge of Vision Language Models from a Game-Theoretic View

VLM 在组合推理(如关系、属性的组合理解)上表现差,但缺乏系统性的诊断工具来揭示这种缺陷的根源。以往工作多停留在'发现 VLM 组合推理弱',没有深入分析编码表示层面的原因。

Jin Wang,Shichao Dong,Yapeng Zhu,Kelu Yao,Weidong Zhao,Chao Li,Ping Luo
vlmcompositionalitydiagnosisPMLRarXivDBLP
5
泛读ICML 2024

SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models

现有 LLM 科学推理 benchmark 集中在高中水平、简单代数运算,无法评估 LLM 解决大学级复杂科学问题的能力。需要一个覆盖数学、化学、物理的大学级 benchmark 来暴露 LLM 的真实推理上限。

Xiaoxuan Wang,Ziniu Hu,Pan Lu,Yanqiao Zhu,Jieyu Zhang,Satyen Subramaniam,Arjun R. Loomba,Shichang Zhang,Yizhou Sun,Wei Wang
UCLAbenchmarkreasoningsciencePMLRarXivDBLP
5
泛读ICML 2024

RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback

RL 中的 reward 设计长期依赖人工试错,成本高且难以泛化。能否利用 VLM 的视觉-语言理解能力,仅凭任务文本描述和视觉观测自动生成 reward 函数?

Yufei Wang,Zhanyi Sun,Jesse Zhang,Zhou Xian,Erdem Biyik,David Held,Zackory Erickson
vlmreinforcement-learningreward-modelPMLRarXivDBLP
5
泛读ICML 2024

Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot

Transformer 能否可证明地学习稀疏 token 选择(即从输入序列中选择性地关注少数关键 token),而全连接网络做不到?这是理解 Transformer 注意力机制独特优势的理论问题。

Zixuan Wang,Stanley Wei,Daniel Hsu,Jason D. Lee
Princeton Universitytransformertoken-selectionsparsityPMLRDBLP
3
OralICML 2024

Unified Training of Universal Time Series Forecasting Transformers

传统时间序列预测采用单数据集单模型范式,无法利用预训练大模型的迁移能力,通用时序预训练面临跨频率学习、可变多变量适配、分布差异三大特有挑战。

Gerald Woo,Chenghao Liu,Akshat Kumar,Caiming Xiong,Silvio Savarese,Doyen Sahoo
Salesforce Researchtime-seriesfoundation-modelpretrainingPMLRarXivDBLP
4
ICML 2024

Optimizing Watermarks for Large Language Models

现有LLM水印方案多为手工调参,无法在可检测性、生成文本质量、鲁棒性之间找到最优权衡,缺乏系统化的优化框架。

Bram Wouters
watermarkingllmmulti-objectivePMLRarXivDBLP
5
泛读ICML 2024

Theoretical insights for diffusion guidance: A case study for Gaussian mixture models

现有扩散模型的引导机制缺乏理论解释,引导强度与生成对齐度、多样性的权衡关系仅靠经验调参,没有形式化证明支撑。

Yuchen Wu,Minshuo Chen,Zihao Li,Mengdi Wang,Yuting Wei
diffusionguidancetheoryPMLRarXivDBLP
4
ICML 2024

A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models

现有LLM水印会改变原始生成文本的token分布,容易被检测和去除,且大多需要访问模型API或原prompt才能检测,实用性受限。

Yihan Wu,Zhengmian Hu,Junfeng Guo,Hongyang Zhang,Heng Huang
watermarkingllmdistribution-preservingPMLRarXivDBLP
5
泛读ICML 2024

Representation Surgery for Multi-Task Model Merging

Enneng Yang,Li Shen,Zhenyi Wang,Guibing Guo,Xiaojun Chen,Xingwei Wang,Dacheng Tao
model-mergingmulti-taskrepresentationPMLRDBLP
5
泛读ICML 2024

What is Dataset Distillation Learning?

这篇工作要回答的不是如何做 dataset distillation,而是 dataset distillation 到底学到了什么。这个问题过去经常被新算法指标掩盖:大家关注少量合成样本能把测试精度提到多少,却较少追问这些 distilled data 是在逼近原数据分布、在编码训练轨迹,还是在过拟合某个初始化和优化器。

William Yang,Ye Zhu,Zhiwei Deng,Olga Russakovsky
dataset-distillationdata-qualitydata-selectionPMLRDBLP
5
泛读ICML 2024

Mobile Attention: Mobile-Friendly Linear-Attention for Vision Transformers

这篇工作要解决的是视觉 Transformer 在移动端部署时,标准注意力计算和内存访问都太重,而很多线性注意力替代方案又损失精度或不够端侧友好。过去常见做法是通过窗口化、低分辨率或卷积混合结构降成本,但这些方案往往在全局建模和移动硬件效率之间仍有明显折中。

Zhiyu Yao,Jian Wang,Haixu Wu,Jingdong Wang,Mingsheng Long
linear-attentionvision-transformermobilePMLRDBLP
5
泛读ICML 2024

MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI

这篇工作要解决的是现有多模态大模型评测过于碎片化、任务覆盖不全的问题。很多 benchmark 只测问答、OCR 或少数视觉推理能力,导致模型看起来很强,但其实只是对少数任务优化得好,离“多任务通用能力”还有明显距离。

Kaining Ying,Fanqing Meng,Jin Wang,Zhiqian Li,Han Lin,Yue Yang ... 省略 12 位作者 ... ,Yu Qiao,Ping Luo,Kaipeng Zhang,Wenqi Shao
vlmbenchmarkmultitaskPMLRDBLP
5
泛读ICML 2024

MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities

Weihao Yu,Zhengyuan Yang,Linjie Li,Jianfeng Wang,Kevin Lin,Zicheng Liu,Xinchao Wang,Lijuan Wang
benchmarkvlm-evaluationmultimodalPMLRDBLP
2
ICML 2024

NExT-Chat: An LMM for Chat, Detection and Segmentation

现有多模态大模型的区域理解能力依赖将坐标序列化作为文本输入输出,无法同时支持边界框、分割掩码等多种位置格式,灵活性差。

Ao Zhang,Yuan Yao,Wei Ji,Zhiyuan Liu,Tat-Seng Chua
Tsinghua UniversityNational University of SingaporevlmdetectionsegmentationPMLRarXivDBLP
6
泛读ICML 2024

Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases

扩散模型与人类偏好对齐时容易出现奖励过优化问题,即奖励模型得分很高但实际生成质量下降,现有方法未解释其底层机制。

Ziyi Zhang,Sen Zhang,Yibing Zhan,Yong Luo,Yonggang Wen,Dacheng Tao
reward-overoptimizationdiffusionalignmentPMLRarXivDBLP
4
ICML 2024

Random Scaling and Momentum for Non-smooth Non-convex Optimization

经典SGDM(带动量的随机梯度下降)的收敛分析仅适用于凸或光滑损失,但神经网络训练的损失通常是非凸非光滑的。此前的优化方法要么对SGDM改动过大,要么无法在该场景下获得最优收敛保证。

Qinzi Zhang,Ashok Cutkosky
optimizersgdmnon-smoothPMLRarXivDBLP
5
泛读ICML 2024

Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale

现有扩散模型的无分类器引导(CFG)采用线性组合条件与无条件模型输出的方式,大引导系数下会忽略显著的非线性效应,导致生成样本畸形、语义偏差。此前的改进方法要么需要重新训练模型,要么无法适配大引导系数场景。
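
作为对照,标准 CFG 的线性外推只有一行。示意如下(数值为编造;论文所做的正是对大引导系数下这条线性近似的非线性修正):

```python
def cfg(eps_uncond, eps_cond, w):
    """classifier-free guidance 的标准线性组合:
    eps = eps_u + w * (eps_c - eps_u),w 为引导系数。"""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

out = cfg([0.0, 1.0], [1.0, 1.0], w=7.5)   # 大 w 下第一维被外推到 7.5
```

w 越大,外推离两个模型输出的“可信区域”越远,线性假设的误差随之放大,这就是大引导系数下样本畸形的直观来源。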

Candi Zheng,Yuan Lan
diffusionguidancesamplingPMLRarXivDBLP
3
ICML 2024

ESM All-Atom: Multi-Scale Protein Language Model for Unified Molecular Modeling

现有蛋白质语言模型仅在残基尺度建模,无法输出原子级信息,限制了其在蛋白-小分子互作、高精度结构预测等任务上的应用。此前的方法要么仅能单独建模残基或原子尺度,要么需要额外3D结构监督,无法实现统一建模。

Kangjie Zheng,Siyu Long,Tianyu Lu,Junwei Yang,Xinyu Dai,Ming Zhang,Zaiqing Nie,Wei-Ying Ma,Hao Zhou
protein-lmtokenizermultiscalePMLRarXivDBLP
5
泛读ICML 2024

On Prompt-Driven Safeguarding for Large Language Models

前置安全提示是LLM对齐的常用手段,但其作用机制此前未被明确,导致无法自动优化安全提示,还会出现误拒无害查询的问题。此前的研究仅关注安全提示的效果,未深入表征层面的机制分析。

Chujie Zheng,Fan Yin,Hao Zhou,Fandong Meng,Jie Zhou,Kai-Wei Chang,Minlie Huang,Nanyun Peng
safetypromptingalignmentPMLRarXivDBLP
5
泛读ICML 2024

Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation

现有扩散模型蒸馏为单步生成器的方法要么需要真实训练数据,要么收敛速度慢,蒸馏后的模型FID比教师模型差。此前的蒸馏方法都依赖反向扩散过程生成的样本或真实训练数据。

Mingyuan Zhou,Huangjie Zheng,Zhendong Wang,Mingzhang Yin,Hai Huang
diffusiondistillationone-stepPMLRarXivDBLP
5
泛读ICML 2024

Switched Flow Matching: Eliminating Singularities via Switching ODEs

现有流匹配(Flow Matching, FM)方法使用统一ODE建模分布转换,源/目标分布的固有异质性会导致奇异性问题;训练后的神经ODE需要多步推理才能保证精度,采样速度慢。此前的方法要么无法解决奇异性问题,要么需要修改训练流程、引入额外开销。

Qunxi Zhu,Wei Lin
flow-matchingodesamplingPMLRarXivDBLP
5
泛读ICML 2024

Viewing Transformers Through the Lens of Long Convolutions Layers

这篇工作要解决的问题,是把 Transformer 重新表述为长卷积层之后,能否更清楚地理解其归纳偏置和计算行为。过去大家通常把注意力和卷积当成两类机制分别分析,这让很多关于感受野、频域特性和参数效率的讨论停留在类比层面,而不是统一框架下的可比分析。

Itamar Zimerman,Lior Wolf
transformerlong-convolutionarchitecturePMLRDBLP
5
泛读ICML 2024

Fool Your (Vision and) Language Model with Embarrassingly Simple Permutations

这篇工作指出一个很具体但实际影响不小的问题:LLM 和 VLM 在多选问答里对选项顺序高度敏感,简单重排答案就可能显著改变输出。这个问题以前常被 benchmark 设计掩盖,因为大多数评测默认固定选项顺序,模型看起来稳定,但部署时这种脆弱性会直接影响可靠性。
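
这类脆弱性很容易量化:把选项全排列后逐一重问,统计模型是否始终选中同一内容。下面用一个“总选第 2 个选项”的假想位置偏置模型示意(非论文评测代码):

```python
from itertools import permutations

def consistency(answer_fn, options):
    """对选项的所有排列重问一遍 answer_fn(返回选项下标),
    统计其选中内容与原始答案一致的比例。"""
    base = options[answer_fn(options)]
    perms = list(permutations(options))
    hits = sum(1 for p in perms if p[answer_fn(list(p))] == base)
    return hits / len(perms)

biased = lambda opts: 1           # 假想模型:永远选第 2 个选项
score = consistency(biased, ["A", "B", "C"])   # 只有 1/3 的排列碰巧一致
```

固定选项顺序的 benchmark 只看到一次提问,consistency 这类指标才会暴露“换个顺序答案就变”的脆弱性。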

Yongshuo Zong,Tingyang Yu,Ruchika Chavhan,Bingchen Zhao,Timothy M. Hospedales
robustnessllmvlmPMLRarXivDBLP
5
泛读ICML 2024

Tensor Train Low-rank Approximation (TT-LoRA): Democratizing AI with Accelerated LLMs

这篇工作要解决的问题,是 LoRA 这类参数高效微调方法在大模型上虽然省训练参数,但压缩性和扩展性并不总够好。随着层数和隐藏维度增长,低秩矩阵本身也会变大,导致参数、显存和通信开销重新上升,削弱了‘democratizing AI’这类轻量微调目标。

Afia Anjum,Maksim Ekin Eren,Ismael Boureima,Boian S. Alexandrov,Manish Bhattarai
loralow-rankaccelerationDOIarXivDBLP
5
泛读ICML 2024

Action Selection in Reinforcement Learning with Upgoing Policy Update

这篇工作要解决的问题,是如何在 PPO/GAE 框架里更好地选出高价值动作并放大其训练信号,同时避免 UPGO 带来的过估计。传统 GAE 在信用分配上较平滑但不够积极,UPGO 更强调好动作传播,但直接用在标准环境中容易把乐观估计放大成训练偏差。

Xiaoning Zhao,Toshiharu Sugawara
reinforcement-learningpolicy-optimizationupgoDOIDBLP
5
泛读ICML 2024

Motion Diffusion Model for Long Motion Generation

这篇工作要解决的问题,是长时长动作序列生成很难同时保证局部自然性和全局连贯性。已有 motion generation 方法在短片段上往往足够平滑,但一旦拉长序列,就容易出现漂移、重复或语义结构断裂,这和长文本生成中的 exposure bias 有相似之处。

Xiaoxu Han,Hegen Xu,Huilling Sun
diffusionmotion-generationlong-horizonDOIDBLP