📚Papers

ACL 2025

Annual Meeting of the Association for Computational Linguistics

Conference website
641 / 3380 relevant papers
9
Deep read · Long · ACL 2025

Revisiting Scaling Laws for Language Models: The Role of Data Quality and Training Strategies

Zhengyu Chen,Siqi Wang,Teng Xiao,Yudong Wang,Shiqi Chen,Xunliang Cai,Junxian He,Jingang Wang
scaling-law · data-quality · training-dynamics · Anthology · DBLP
9
Deep read · Long · ACL 2025

Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset

Existing Common Crawl-based pretraining datasets (e.g., FineWeb-Edu, DCLM) rely on strong-model filtering to raise quality, discarding over 90% of the raw data, and therefore cannot support long-horizon pretraining at the 10T+ token scale. Prior approaches all traded data scale for quality, with no viable path to balancing the two.

Dan Su,Kezhi Kong,Ying Lin,Joseph Jennings,Brandon Norick,Markus Kliegl,Mostofa Patwary,Mohammad Shoeybi,Bryan Catanzaro
data-quality · data-filtering · common-crawl · Anthology · arXiv · DBLP
9
Deep read · Long · ACL 2025

Pre-Training Curriculum for Multi-Token Prediction in Language Models

This paper looks at curriculum design for multi-token prediction (predicting several tokens at once) in pretraining. Multi-token prediction can improve training and inference efficiency, but imposing the objective directly often brings optimization instability, objective mismatch, or difficulty in early learning; the key question is therefore not whether to do it, but in what order the model should learn to do it.

Ansar Aynetdinov,Alan Akbik
multi-token-prediction · curriculum-learning · pretraining · Anthology · DBLP
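To make the objective concrete, here is a minimal NumPy sketch of a multi-token prediction loss, assuming a toy layout in which head i scores the token i+1 positions ahead; the layout, names, and shapes are illustrative and not taken from this paper's architecture or curriculum:

```python
import numpy as np

def multi_token_loss(head_logits, targets):
    """Average cross-entropy over k prediction heads.

    head_logits: list of k arrays of shape (seq_len, vocab); head i is
                 scored against the token i+1 positions ahead.
    targets:     array of seq_len + k token ids.
    """
    total, count = 0.0, 0
    for i, logits in enumerate(head_logits):
        # target span for head i: shifted i+1 positions into the future
        tgt = targets[i + 1 : i + 1 + logits.shape[0]]
        # numerically stable log-softmax over the vocabulary
        z = logits - logits.max(axis=-1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        total += -logp[np.arange(len(tgt)), tgt].sum()
        count += len(tgt)
    return total / count
```

With a single head this reduces to ordinary next-token cross-entropy, which is why a curriculum can interpolate between the two regimes, e.g. by adding heads or reweighting them over training.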
9
Deep read · Long · ACL 2025

Efficient Pretraining Data Selection for Language Models via Multi-Actor Collaboration

This paper addresses how to select data efficiently for large-scale pretraining, avoiding expensive, visibly biased filtering of a huge candidate pool with a single scorer. The problem matters more now because high-quality web data is increasingly scarce and training costs are high: data selection has shifted from "can we train at all" to "how to spend each token where it earns the most".

Tianyi Bai,Ling Yang,Zhen Hao Wong,Fupeng Sun,Xinlin Zhuang,Jiahui Peng ... 2 authors omitted ... ,Jiantao Qiu,Wentao Zhang,Binhang Yuan,Conghui He
data-selection · pretraining-data · data-quality · Anthology · DBLP
9
Deep read · Long · ACL 2025

Optimizing Pre-Training Data Mixtures with Mixtures of Data Expert Models

Lior Belenki,Alekh Agarwal,Tianze Shi,Kristina Toutanova
data-mixture · pretraining-data · data-quality · Anthology · DBLP
9
Deep read · Long · ACL 2025

Towards Effective and Efficient Continual Pre-training of Large Language Models

Jie Chen,Zhipeng Chen,Jiapeng Wang,Kun Zhou,Yutao Zhu,Jinhao Jiang ... 9 authors omitted ... ,Zhewei Wei,Di Hu,Wenbing Huang,Ji-Rong Wen
continual-pretrain · efficiency · training · Anthology · DBLP
9
Deep read · Findings · ACL 2025

Scaling Laws for Multilingual Language Models

The scaling laws of multilingual LLMs remain unclear: given a total compute budget, how should data be apportioned across languages? And does the optimal allocation between model size and data volume differ in the multilingual setting from the monolingual one?

Yifei He,Alon Benhaim,Barun Patra,Praneetha Vaddamanu,Sanchit Ahuja,Parul Chopra,Vishrav Chaudhary,Han Zhao,Xia Song
Microsoft Research · scaling-law · multilingual · data-mixture · Anthology · DBLP
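For context, the monolingual compute-optimal form that this question generalizes is usually written Chinchilla-style; the notation below is the standard one and is not taken from this paper:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad D = \sum_{i} D_i
```

Here N is the parameter count and D the total training tokens; the multilingual question then becomes how to choose the per-language split D_i under a fixed budget, and whether the fitted exponents themselves shift across language mixes.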
9
Deep read · Long · ACL 2025

YuLan-Mini: Pushing the Limits of Open Data-efficient Language Model

Yiwen Hu,Huatong Song,Jie Chen,Jia Deng,Jiapeng Wang,Kun Zhou ... 3 authors omitted ... ,Yang Lu,Xu Miao,Xin Zhao,Ji-Rong Wen
data-efficiency · open-data · scaling · Anthology · DBLP
9
Deep read · Long · ACL 2025

Understanding Silent Data Corruption in LLM Training

This work asks how silent data corruption during LLM training affects training stability and final model quality. Silent corruption is more dangerous than an outright crash: the system keeps running and the loss does not necessarily blow up right away, yet the model may be continuously fed erroneous gradients, making the damage hard to attribute and localize afterwards.

Jeffrey Jian Ma,Hengzhi Pei,Leonard Lausen,George Karypis
silent-data-corruption · training-stability · training-dynamics · Anthology · DBLP
9
Deep read · Long · ACL 2025

TESS 2: A Large-Scale Generalist Diffusion Language Model

The core question: can diffusion language models move from task-specific or small-scale validation to large-scale general-purpose language modeling? Diffusion LMs have long faced two doubts: whether generation quality and efficiency can reliably match autoregressive models, and whether results hold beyond small-to-medium scale, since generalist-level systematic validation has been missing, leaving it unclear whether diffusion is a replacement paradigm or a niche complement.

Jaesung Tae,Hamish Ivison,Sachin Kumar,Arman Cohan
diffusion-lm · non-autoregressive · scaling · Anthology · DBLP
9
Deep read · Long · ACL 2025

Unveiling the Potential of BERT-family: A New Recipe for Building Scalable, General and Competitive Large Language Models

Yisheng Xiao,Juntao Li,Wenpeng Hu,Zhunchen Luo,Min Zhang
masked-lm · bert · scaling · Anthology · DBLP
9
Deep read · Long · ACL 2025

How to Train Long-Context Language Models (Effectively)

Training and evaluation of long-context LLMs previously relied on perplexity or simple needle-in-a-haystack (NIAH) tests, which correlate poorly with real downstream performance after SFT, leaving core design choices such as training data mixtures and positional extrapolation without a reliable evaluation basis.

Tianyu Gao,Alexander Wettig,Howard Yen,Danqi Chen
Princeton University · long-context · continued-pretraining · sft · Anthology · arXiv · DBLP
7
Skim · Long · ACL 2025

LangSAMP: Language-Script Aware Multilingual Pretraining

Most existing multilingual pretrained models use no language embeddings, forcing token representations to encode all language-specific information; this harms the language neutrality of the representations and weakens cross-lingual transfer for low-resource languages.

Yihong Liu,Haotian Ye,Chunlan Ma,Mingyang Wang,Hinrich Schütze
LMU Munich · multilingual · pretraining · tokenizer · Anthology · arXiv · DBLP
8
Deep read · Long · ACL 2025

ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting

This work tackles the gap between theory and practice in bilevel optimization for data reweighting: in principle it suits LLMs well, but traditional methods depend on second-order information and simply cannot run at scale. Knowing which data deserves more weight was always valuable, but at the multi-billion-parameter scale, the computation graph, memory, and the coupling of inner and outer optimization keep most elegant formulations from ever being deployed.

Rui Pan,Dylan Zhang,Hanning Zhang,Xingyuan Pan,Minrui Xu,Jipeng Zhang,Renjie Pi,Xiaoyu Wang,Tong Zhang
data-reweighting · bilevel-optimization · data-quality · Anthology · arXiv · DBLP
8
Deep read · Long · ACL 2025

Analyzing and Mitigating Inconsistency in Discrete Speech Tokens for Neural Codec Language Models

This work addresses a problem of neural codecs: after speech is discretized, a perceptually equivalent utterance can map to many very different token sequences, which muddles speech LM training. Text tokens are essentially deterministic, whereas discrete speech tokens are affected by context, speaking style, and encoder details, yielding one-to-many representations; the authors name this problem Discrete Representation Inconsistency (DRI).

Wenrui Liu,Zhifang Guo,Jin Xu,Yuanjun Lv,Yunfei Chu,Zemin Liu,Junyang Lin
speech-lm · audio-tokenizer · discrete-token · Anthology · DBLP
8
Deep read · Long · ACL 2025

AutoMixer: Checkpoint Artifacts as Automatic Data Mixers

Ernie Chang,Yang Li,Patrick Huber,Vish Vogeti,David Kant,Yangyang Shi,Vikas Chandra
data-mixture · checkpoint · curriculum · Anthology · DBLP
8
Deep read · Long · ACL 2025

Automatic Expert Discovery in LLM Upcycling via Sparse Interpolated Mixture-of-Experts

Shengzhuang Chen,Ying Wei,Jonathan Richard Schwarz
moe · upcycling · expert-routing · Anthology · DBLP
8
Deep read · Long · ACL 2025

P² Law: Scaling Law for Post-Training After Model Pruning

This work quantifies a practical but rarely systematized question: after a model is pruned, how much post-training does it take to recover its capabilities, and does that amount follow a predictable scaling law in parameter count and pruning ratio? Pruning and post-training were usually discussed separately, without a unified law, making compute budgeting hard in engineering practice.

Xiaodong Chen,Yuxuan Hu,Xiaokang Zhang,Yanling Wang,Cuiping Li,Hong Chen,Jing Zhang
scaling-law · pruning · post-training · Anthology · DBLP
8
Deep read · Long · ACL 2025

HoPE: A Novel Positional Encoding Without Long-Term Decay for Enhanced Context Awareness and Extrapolation

The positional-encoding problem targeted here is clear-cut: many existing encodings exhibit long-term decay over long contexts, suppressing the relative-position signal of distant tokens, and extrapolation beyond the training length distorts easily. With long context now a baseline capability, this can no longer be patched with data alone.

Yuhan Chen,Ang Lv,Jian Luan,Bin Wang,Wei Liu
positional-encoding · long-context · extrapolation · Anthology · DBLP
8
Deep read · Long · ACL 2025

Inner Thinking Transformer: Leveraging Dynamic Depth Scaling to Foster Adaptive Internal Thinking

Yilong Chen,Junyuan Shang,Zhenyu Zhang,Yanxi Xie,Jiawei Sheng,Tingwen Liu,Shuohuan Wang,Yu Sun,Hua Wu,Haifeng Wang
dynamic-depth · adaptive-compute · architecture · Anthology · DBLP
8
Deep read · Short · ACL 2025

Inconsistent Tokenizations Cause Language Models to be Perplexed by Japanese Grammar

Andrew Gambardella,Takeshi Kojima,Yusuke Iwasawa,Yutaka Matsuo
tokenizer · japanese · data-quality · DOI · DBLP
8
Deep read · Findings · ACL 2025

Towards A Better Initial Policy Model For Scalable Long-CoT Reinforcement Learning

Bofei Gao,Yejie Wang,Yibo Miao,Ruoyu Wu,Feifan Song,Longhui Yu,Tianyu Liu,Baobao Chang
rl · reasoning · long-cot · Anthology · DBLP
8
Deep read · Long · ACL 2025

Splintering Nonconcatenative Languages for Better Tokenization

Bar Gazit,Shaltiel Shmidman,Avi Shmidman,Yuval Pinter
tokenizer · morphology · multilingual · Anthology · DBLP
8
Deep read · Long · ACL 2025

LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models

This work starts from the point that long-context generalization is not the same as simply stretching the training length. Even with positional-encoding extrapolation, many models trained on short contexts show retrieval degradation, attention drift, and unstable reasoning on longer sequences; the authors set out to give a more complete long-context training recipe.

Zhiyuan Hu,Yuliang Liu,Jinman Zhao,Suyuchen Wang,WangYan WangYan,Wei Shen ... 1 author omitted ... ,Anh Tuan Luu,See-Kiong Ng,Zhiwei Jiang,Bryan Hooi
long-context · continual-pretrain · data-recipe · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Multi-matrix Factorization Attention

This work revisits the old tension between the expressiveness and the cost of standard attention: full-matrix attention is expensive, while common low-rank or linearized approximations often lose accuracy because of their limited expressiveness. The authors use multi-matrix factorization to keep more expressive freedom while cutting cost.

Jingcheng Hu,Houyi Li,Yinmin Zhang,Zili Wang,Shuigeng Zhou,Xiangyu Zhang,Heung-Yeung Shum
attention · architecture · matrix-factorization · Anthology · DBLP
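As background for what factorization buys here, a minimal NumPy sketch of attention scores with a rank-r factorized query-key map; the single (u, v) factor pair below is the simplest special case, purely illustrative and not the paper's multi-matrix scheme:

```python
import numpy as np

def lowrank_attention_scores(x, u, v, scale=None):
    """Attention weights with a rank-r factorized bilinear form.

    x: (n, d) token representations; u, v: (d, r) factors with r << d,
    so q_i^T k_j is computed through r dimensions instead of d.
    """
    scale = scale or 1.0 / np.sqrt(u.shape[1])
    q = x @ u                     # (n, r) factorized queries
    k = x @ v                     # (n, r) factorized keys
    scores = q @ k.T * scale      # (n, n) scaled scores
    # softmax over keys, numerically stabilized
    z = scores - scores.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

Each token is projected into r dimensions rather than d, so projections and scores cost O(n·d·r + n²·r) instead of the O(n·d² + n²·d) of full query/key maps.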
8
Deep read · Long · ACL 2025

Language-Codec: Bridging Discrete Codec Representations and Speech Language Models

Shengpeng Ji,Minghui Fang,Jialong Zuo,Ziyue Jiang,Dingdong Wang,Hanting Wang,Hai Huang,Zhou Zhao
speech-lm · audio-tokenizer · codec · Anthology · DBLP
8
Deep read · Long · ACL 2025

RATIONALYST: Pre-training Process-Supervision for Improving Reasoning

The core question: if reasoning ability is only patched in post-training via CoT distillation or RL, it tends to be learned late and brittlely, with unstable transfer; the authors want to move process supervision, i.e. supervision over intermediate reasoning steps, forward into pre-training. The question is worth revisiting now because reasoning gains in large models increasingly depend on intermediate signals along long chains rather than final-answer supervision alone.

Dongwei Jiang,Guoxuan Wang,Yining Lu,Andrew Wang,Jingyu Zhang,Chuyu Liu,Benjamin Van Durme,Daniel Khashabi
process-supervision · pretraining · reasoning · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Training Long-Context LLMs Efficiently via Chunk-wise Optimization

Wenhao Li,Yuxin Zhang,Gen Luo,Daohai Yu,Rongrong Ji
long-context · training · optimization · Anthology · DBLP
8
Deep read · Long · ACL 2025

Exploring Forgetting in Large Language Model Pre-Training

This paper directly studies a question that is often overlooked yet crucial for continual pretraining: LLMs also forget during pre-training, not only during fine-tuning. Attention has mostly gone to the absorption capacity that scaling brings, with little systematic analysis of how knowledge or abilities acquired early are overwritten as the data stream, staged training, and distribution shift progress.

Chonghua Liao,Ruobing Xie,Xingwu Sun,Haowen Sun,Zhanhui Kang
forgetting · pretrain-dynamics · continual-pretrain · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Scaling up the State Size of RNN LLMs for Long-Context Scenarios

Kai Liu,Jianfei Gao,Kai Chen
rnn-lm · state-size · long-context · Anthology · DBLP
8
Deep read · Long · ACL 2025

Drop Dropout on Single Epoch Language Model Pretraining

Houjun Liu,John Bauer,Christopher D. Manning
dropout · pretrain-dynamics · single-epoch · Anthology · DBLP
8
Deep read · Long · ACL 2025

Beyond Text Compression: Evaluating Tokenizers Across Scales

Jonas F. Lotz,António Vilarinho Lopes,Stephan Peitz,Hendra Setiawan,Leonardo Emili
tokenizer · scaling · evaluation · Anthology · DBLP
8
Deep read · Long · ACL 2025

A Survey on Efficient Large Language Model Training: From Data-centric Perspectives

Junyu Luo,Bohan Wu,Xiao Luo,Zhiping Xiao,Yiqiao Jin,Rong-Cheng Tu ... 1 author omitted ... ,Yifan Wang,Jingyang Yuan,Wei Ju,Ming Zhang
survey · data-centric · training-efficiency · Anthology · DBLP
8
Deep read · Long · ACL 2025

Velocitune: A Velocity-based Dynamic Domain Reweighting Method for Continual Pre-training

This work asks how to dynamically reweight domain data in continual pre-training so the model neither learns new domains slowly nor forgets old ones quickly. Common practice uses a static mixture or loss-based reweighting, but loss is both a lagging signal and confounded by domain difficulty, so it does not necessarily reflect whether giving a domain more data right now still pays off.

Zheheng Luo,Xin Zhang,Xiao Liu,Haoling Li,Yeyun Gong,Qi Chen,Peng Cheng
continual-pretrain · domain-reweighting · data-mixture · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Slamming: Training a Speech Language Model on One GPU in a Day

Gallil Maimon,Avishai Elmakies,Yossi Adi
speech-lm · training-efficiency · single-gpu · Anthology · DBLP
8
Deep read · Short · ACL 2025

Diffusion Directed Acyclic Transformer for Non-Autoregressive Machine Translation

Quan Nguyen-Tri,Cong Dao Tran,Hoang Thanh-Tung
diffusion-lm · non-autoregressive · translation · DOI · DBLP
8
Deep read · Findings · ACL 2025

Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models?

Jacob Nielsen,Peter Schneider-Kamp,Lukas Galke
quantization-aware-pretraining · bitnet · 1-bit · Anthology · DBLP
8
Deep read · Short · ACL 2025

Decoder-Only LLMs can be Masked Auto-Encoders

Dan Qiao,Yuan Gao,Zheming Yang,Di Yang,Ziheng Wu,Pengcheng Lu,Minghui Qiu,Juntao Li,Min Zhang
masked-lm · decoder-only · objective · DOI · DBLP
8
Deep read · Long · ACL 2025

Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models

Zihan Qiu,Zeyu Huang,Bo Zheng,Kaiyue Wen,Zekun Wang,Rui Men,Ivan Titov,Dayiheng Liu,Jingren Zhou,Junyang Lin
moe · load-balancing · training-dynamics · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Large Vocabulary Size Improves Large Language Models

The paper's conclusion is stated plainly: a larger vocabulary can by itself improve an LLM, beyond mere tokenizer engineering details. Prior work mostly treated vocabulary size as a secondary hyperparameter trading compression ratio against sequence length, fixed once within an empirical range and never studied systematically; this paper re-asks whether the vocabulary has been chronically undervalued.

Sho Takase,Ryokan Ri,Shun Kiyono,Takuya Kato
tokenizer · vocabulary · pretraining · Anthology · DBLP
8
Deep read · Long · ACL 2025

Untie the Knots: An Efficient Data Augmentation Strategy for Long-Context Pre-Training in Language Models

Junfeng Tian,Da Zheng,Yang Chen,Rui Wang,Colin Zhang,Debing Zhang
long-context · data-augmentation · pretraining · Anthology · DBLP
8
Deep read · Long · ACL 2025

Scaling Laws and Efficient Inference for Ternary Language Models

Tejas Vaidhya,Ayush Kaushal,Vineet Jain,Francis Couture Harpin,Prashant Shishodia,Majid Behbahani,Yuriy Nevmyvaka,Irina Rish
quantization · ternary · scaling-law · Anthology · DBLP
8
Deep read · Long · ACL 2025

InSerter: Speech Instruction Following with Unsupervised Interleaved Pre-training

Dingdong Wang,Jin Xu,Ruihang Chu,Zhifang Guo,Xiong Wang,Jincenzi Wu,Dongchao Yang,Shengpeng Ji,Junyang Lin
speech-lm · multimodal · instruction-following · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Cautious Next Token Prediction

Yizhou Wang,Lingzhi Zhang,Yue Bai,Mang Tik Chiu,Zhengmian Hu,Mingyuan Zhang,Qihua Dong,Yu Yin,Sohrab Amirghodsi,Yun Fu
next-token-prediction · objective · uncertainty · Anthology · DBLP
8
Deep read · Industry · ACL 2025

Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond

Liang Wen,Yunke Cai,Fenrui Xiao,Xin He,Qi An,Zhenyu Duan ... 4 authors omitted ... ,Haosheng Zou,Yongchao Deng,Shousheng Jia,Xiangzheng Zhang
curriculum-learning · sft · dpo · DOI · DBLP
8
Deep read · Best Paper · Long · ACL 2025

Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

Jingyang Yuan,Huazuo Gao,Damai Dai,Junyu Luo,Liang Zhao,Zhengyan Zhang ... 5 authors omitted ... ,Chong Ruan,Ming Zhang,Wenfeng Liang,Wangding Zeng
sparse-attention · training · hardware · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Preference Curriculum: LLMs Should Always Be Pretrained on Their Preferred Data

Xuemiao Zhang,Liangyu Xu,Feiyu Duan,Yongwei Zhou,Sirui Wang,Rongxiang Weng,Jingang Wang,Xunliang Cai
data-mixture · curriculum-learning · pretraining · Anthology · DBLP
8
Deep read · Short · ACL 2025

Accelerating Dense LLMs via L0-regularized Mixture-of-Experts

Zhenyu Zhang,JiuDong Yang,Zhaowen Tao,Meng Chen
moe · l0-regularization · model-architecture · DOI · DBLP
8
Deep read · Long · ACL 2025

Segment-Level Diffusion: A Framework for Controllable Long-Form Generation with Diffusion Language Models

Xiaochen Zhu,Georgi Karadzhov,Chenxi Whitehouse,Andreas Vlachos
diffusion-lm · long-form · controllable-generation · Anthology · DBLP
8
Deep read · Long · ACL 2025

FoldMoE: Efficient Long Sequence MoE Training via Attention-MoE Pipelining

Guichao Zhu,Lintian Lei,Yuhao Qing,Yichao Fu,Fanxin Li,Dong Huang,Zekai Sun,Heming Cui
moe · long-context · training · Anthology · DBLP
8
Deep read · Long · ACL 2025

Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models

Xinlin Zhuang,Jiahui Peng,Ren Ma,Yinfan Wang,Tianyi Bai,Xingjian Wei,Jiantao Qiu,Chi Zhang,Ying Qian,Conghui He
data-selection · data-quality · pretraining-data · Anthology · DBLP
8
Deep read · Findings · ACL 2025

Contrastive Learning for Task-Independent SpeechLLM-Pretraining

Maike Züfle,Jan Niehues
speech-lm · audio · contrastive-learning · Anthology · DBLP
6
Skim · Industry · ACL 2025

DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models

Existing open-source small-parameter LLMs have weak instruction-following ability. Traditional distillation relies on fixed teacher-model outputs without adapting to the student model's capabilities, yielding a poor performance/cost trade-off for industrial deployment.

Chengyu Wang,Junbing Yan,Yuanhao Yue,Jun Huang
distillation · lightweight-llm · industrial-practice · DOI · arXiv · DBLP
7
Skim · Long · ACL 2025

AlignDistil: Token-Level Language Model Alignment as Adaptive Policy Distillation

Existing LLM alignment methods (RLHF, DPO) use sparse response-level rewards and ignore token-level reward signals, wrongly penalizing good tokens or encouraging bad ones, which leads to suboptimal performance and slow convergence.

Songming Zhang,Xue Zhang,Tong Zhang,Bojie Hu,Yufeng Chen,Jinan Xu
Hong Kong University of Science and Technology · alignment · distillation · token-level · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

STUN: Structured-Then-Unstructured Pruning for Scalable MoE Pruning

MoE models have many experts and are expensive to deploy. Existing pruning methods generally hold that unstructured pruning outperforms structured pruning, and no pruning pipeline had been designed around the modular structure of MoE.

Jaeseong Lee,Seung-won Hwang,Aurick Qiao,Daniel F. Campos,Zhewei Yao,Yuxiong He
moe · pruning · sparsity · Anthology · arXiv · DBLP
5
Skim · Findings · ACL 2025

Multi-Sense Embeddings for Language Models and Knowledge Distillation

Existing LLM contextual embeddings are continuous and do not explicitly model a token's finite set of senses, so a small model cannot effectively learn the large model's semantic representations during distillation, making distillation inefficient.

Qitong Wang,Mohammed J. Zaki,Georgios Kollias,Vasileios Kalantzis
embeddings · multi-sense · tokenizer · Anthology · arXiv · DBLP
7
Deep read · SRW · ACL 2025

Evaluating Tokenizer Adaptation Methods for Large Language Models on Low-Resource Programming Languages

Georgy Andryushchenko,Vladimir V. Ivanov
tokenizer · adaptation · code · DOI · DBLP
7
Skim · Findings · ACL 2025

Revisiting In-Context Learning with Long Context Language Models

This paper revisits in-context learning with long-context models, asking whether ICL's benefits, mechanisms, and failure modes fundamentally change once context length grows. Many ICL conclusions were drawn under short contexts, but long-context models have altered the retrieval range, attention allocation, and interference structure among demonstrations, so the old conclusions may no longer hold.

Jinheon Baek,Sun Jae Lee,Prakhar Gupta,Geunseob Oh,Siddharth Dalmia,Prateek Kolhar
in-context-learning · long-context · icl · Anthology · DBLP
7
Skim · Industry · ACL 2025

One Missing Piece for Open-Source Reasoning Models: A Dataset to Mitigate Cold-Starting Short CoT LLMs in RL

Hyungjoo Chae,Dongjin Kang,Jihyuk Kim,Beong-woo Kwak,Sunghyun Park,Haeju Park,Jinyoung Yeo,Moontae Lee,Kyungjae Lee
rl · reasoning · cold-start · DOI · DBLP
7
Skim · Long · ACL 2025

EpMAN: Episodic Memory AttentioN for Generalizing to Longer Contexts

Subhajit Chaudhury,Payel Das,Sarathkrishna Swaminathan,Georgios Kollias,Elliot Nelson,Khushbu Pahwa,Tejaswini Pedapati,Igor Melnyk,Matthew Riemer
long-context · memory · attention · Anthology · DBLP
7
Skim · Findings · ACL 2025

SLAM-Omni: Timbre-Controllable Voice Interaction System with Single-Stage Training

Wenxi Chen,Ziyang Ma,Ruiqi Yan,Yuzhe Liang,Xiquan Li,Ruiyang Xu ... 6 authors omitted ... ,Jinyu Li,Yan Lu,Shujie Liu,Xie Chen
speech-lm · voice-interaction · audio-token · Anthology · DBLP
7
Skim · Long · ACL 2025

DiffPO: Diffusion-styled Preference Optimization for Inference Time Alignment of Large Language Models

Ruizhe Chen,Wenhao Chai,Zhifei Yang,Xiaotian Zhang,Ziyang Wang,Tony Q. S. Quek,Joey Tianyi Zhou,Soujanya Poria,Zuozhu Liu
dpo · diffusion · alignment · Anthology · DBLP
7
Skim · Findings · ACL 2025

Better Process Supervision with Bi-directional Rewarding Signals

This work addresses the fact that process supervision usually relies on a unidirectional reward signal, which judges intermediate reasoning steps incompletely: it can wrongly reward a locally plausible process that ends in a wrong answer, and it can miss how an early mistake cascades through later reasoning. With reasoning-oriented post-training gaining importance, more accurate process supervision deserves a redesign.

Wenxiang Chen,Wei He,Zhiheng Xi,Honglin Guo,Boyang Hong,Jiazheng Zhang ... 1 author omitted ... ,Tao Gui,Yun Li,Qi Zhang,Xuanjing Huang
process-reward-model · reward-signal · rl · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Enhancing Cross-Tokenizer Knowledge Distillation with Contextual Dynamical Mapping

This work tackles the alignment difficulty in cross-tokenizer knowledge distillation: when teacher and student use different tokenizers, token-sequence boundaries disagree, so directly distilling logits or hidden states misaligns and corrupts the distillation signal. With cross-architecture distillation and tokenizer redesign becoming common, the problem is increasingly practical.

Yijie Chen,Yijin Liu,Fandong Meng,Yufeng Chen,Jinan Xu,Jie Zhou
knowledge-distillation · cross-tokenizer · tokenizer · Anthology · DBLP
7
Skim · Long · ACL 2025

F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching

Yushen Chen,Zhikang Niu,Ziyang Ma,Keqi Deng,Chunhui Wang,Jian Zhao,Kai Yu,Xie Chen
tts · flow-matching · speech-generation · Anthology · DBLP
7
Deep read · Long · ACL 2025

EAC-MoE: Expert-Selection Aware Compressor for Mixture-of-Experts Large Language Models

Yuanteng Chen,Yuantian Shao,Peisong Wang,Jian Cheng
moe · compression · expert-selection · Anthology · DBLP
7
精读LongACL 2025

LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs

Jianghao Chen,Junhong Wu,Yangyifan Xu,Jiajun Zhang
long-contextdata-selectionattention-dependencyAnthologyDBLP
7
泛读LongACL 2025

VoxEval: Benchmarking the Knowledge Understanding Capabilities of End-to-End Spoken Language Models

Wenqian Cui,Xiaoqi Jiao,Ziqiao Meng,Irwin King
speech-lmbenchmarkspoken-languageAnthologyDBLP
7
泛读LongACL 2025

Recent Advances in Speech Language Models: A Survey

Wenqian Cui,Dianzhi Yu,Xiaoqi Jiao,Ziqiao Meng,Guangyan Zhang,Qichao Wang,Steven Y. Guo,Irwin King
speech-lmsurveyaudioAnthologyDBLP
7
Skim · Long · ACL 2025

Pretraining Context Compressor for Large Language Models with Embedding-Based Memory

Yuhong Dai,Jianxun Lian,Yitian Huang,Wei Zhang,Mingyang Zhou,Mingqi Wu,Xing Xie,Hao Liao
long-context · context-compression · memory · Anthology · DBLP
7
Skim · Findings · ACL 2025

SimulS2S-LLM: Unlocking Simultaneous Inference of Speech LLMs for Speech-to-Speech Translation

Keqi Deng,Wenxi Chen,Xie Chen,Philip C. Woodland
speech-lm · speech-to-speech · simultaneous · Anthology · DBLP
7
Skim · Findings · ACL 2025

Maximum Score Routing For Mixture-of-Experts

This work addresses the compromise MoE routing makes between load balance and expert quality: either routing is not accurate enough, or extra training tricks and performance losses are introduced for balance's sake. Existing top-k gating often does not "pick the best-suited expert" so much as strike a bargain under trainability, communication, and balancing constraints.

Bowen Dong,Yilong Fan,Yutao Sun,Zhenyu Li,Tengyu Pan,Zhou Xun,Jianyong Wang
moe · routing · architecture · Anthology · DBLP
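The compromise described above is visible in the standard baseline itself; a minimal NumPy sketch of top-k gating plus the Switch-Transformer-style auxiliary load-balancing loss (this is the conventional router being contrasted, not the paper's maximum-score routing):

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def top_k_route(gate_logits, k=2):
    """Standard top-k gating: each token picks its k highest-scoring experts."""
    probs = softmax(gate_logits)                   # (tokens, experts)
    topk = np.argsort(-probs, axis=-1)[:, :k]      # expert indices per token
    return probs, topk

def load_balance_loss(probs, topk, n_experts):
    """Switch-style auxiliary loss: penalizes mismatch between the fraction
    of tokens dispatched to each expert (via its primary assignment) and
    the mean gate probability that expert receives."""
    n_tokens = probs.shape[0]
    frac_tokens = np.bincount(topk[:, 0], minlength=n_experts) / n_tokens
    mean_prob = probs.mean(axis=0)
    return n_experts * float(frac_tokens @ mean_prob)
```

The auxiliary term reaches its minimum of 1.0 exactly when dispatch is uniform across experts, which is precisely the pull away from always picking the single best-scoring expert.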
7
Skim · Long · ACL 2025

LongReD: Mitigating Short-Text Degradation of Long-Context Large Language Models via Restoration Distillation

This work handles a very practical issue: long-context LLMs often sacrifice short-text performance to accommodate ultra-long sequences, i.e. short-text degradation. The common fixes, continued training on long sequences or positional-encoding surgery to extend the context, shift the model's distribution and attention patterns on short inputs.

Zican Dong,Junyi Li,Jinhao Jiang,Mingyu Xu,Xin Zhao,Bingning Wang,Weipeng Chen
long-context · distillation · short-text-degradation · Anthology · DBLP
7
Deep read · Long · ACL 2025

Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers

Clément Dumas,Chris Wendler,Veniamin Veselovsky,Giovanni Monea,Robert West
activation-patching · multilingual · interpretability · Anthology · DBLP
7
Deep read · Long · ACL 2025

Emergent Abilities of Large Language Models under Continued Pre-training for Language Adaptation

Ahmed Elhady,Eneko Agirre,Mikel Artetxe
continual-pretrain · language-adaptation · emergent-abilities · Anthology · DBLP
7
Skim · Findings · ACL 2025

mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus

Matthieu Futeral,Armel Randy Zebaze,Pedro Ortiz Suarez,Julien Abadji,Rémi Lacroix,Cordelia Schmid,Rachel Bawden,Benoît Sagot
multimodal-corpus · multilingual · document-understanding · Anthology · DBLP
7
Skim · Findings · ACL 2025

Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models

Alessio Galatolo,Zhenbang Dai,Katie Winkle,Meriem Beloucif
preference-optimization · zeroth-order · reward-model · Anthology · DBLP
7
Skim · Long · ACL 2025

IPO: Your Language Model is Secretly a Preference Classifier

Shivank Garg,Ayush Singh,Shweta Singh,Paras Chopra
preference-optimization · ipo · preference-modeling · Anthology · DBLP
7
Skim · Outstanding · Long · ACL 2025

Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law

Qiming Ge,Shuhao Xing,Songyang Gao,Yunhua Zhou,Yicheng Zou,Songyang Zhang ... 1 author omitted ... ,Hang Yan,Qi Zhang,Qipeng Guo,Kai Chen
scaling-law · capability · loss · Anthology · DBLP
7
Deep read · Long · ACL 2025

Adversarial Tokenization

Renato Lui Geh,Zilei Shao,Guy Van den Broeck
tokenizer · adversarial · robustness · Anthology · DBLP
7
Deep read · Findings · ACL 2025

The Right Time Matters: Data Arrangement Affects Zero-Shot Generalization in Instruction Tuning

The arrangement of training data during instruction tuning affects a model's zero-shot generalization, yet this factor had rarely been studied systematically. The authors find that when the model sees which data has a significant effect on the final outcome.

Bingxiang He,Ning Ding,Cheng Qian,Jia Deng,Ganqu Cui,Lifan Yuan ... 3 authors omitted ... ,Hui Xue,Huimin Chen,Zhiyuan Liu,Maosong Sun
Tsinghua University · instruction-tuning · data-ordering · zero-shot · Anthology · DBLP
7
Deep read · Demo · ACL 2025

MERaLiON-AudioLLM: Advancing Speech and Language Understanding for Singapore

Yingxu He,Zhuohan Liu,Geyu Lin,Shuo Sun,Bin Wang,Wenyu Zhang,Xunlong Zou,Nancy F. Chen,AiTi Aw
audio-lm · speech · multilingual · DOI · DBLP
7
Deep read · Findings · ACL 2025

A²ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization

Junhui He,Junna Xing,Nan Wang,Rui Xu,Shangyu Wu,Peng Zhou,Qiang Liu,Chun Jason Xue,Qingan Li
kv-cache · vector-quantization · long-context · Anthology · DBLP
7
Deep read · Findings · ACL 2025

The Reasoning-Memorization Interplay in Language Models Is Mediated by a Single Direction

Yihuai Hong,Meng Cao,Dian Zhou,Lei Yu,Zhijing Jin
reasoning · memorization · interpretability · Anthology · DBLP
7
Skim · Long · ACL 2025

Squeezed Attention: Accelerating Long Context Length LLM Inference

This work targets the excessive attention compute and KV read/write cost of long-context inference, which makes both latency and memory overhead hard to accept. Existing long-context accelerations either sacrifice too much accuracy or require changing the model at training time; the authors aim for a more practical inference-time attention compression scheme.

Coleman Richard Charles Hooper,Sehoon Kim,Hiva Mohammadzadeh,Monishwaran Maheswaran,Sebastian Zhao,June Paik,Michael W. Mahoney,Kurt Keutzer,Amir Gholami
long-context · kv-cache · attention · Anthology · DBLP
7
Deep read · Long · ACL 2025

TreeRL: LLM Reinforcement Learning with On-Policy Tree Search

This work addresses the fact that standard LLM reinforcement learning treats generation as optimizing a single trajectory, with sparse credit assignment and inefficient exploration, which is especially acute on long reasoning tasks. Past remedies, rejection sampling, best-of-N, or MCTS-style search, were never truly integrated with on-policy RL.

Zhenyu Hou,Ziniu Hu,Yujiang Li,Rui Lu,Jie Tang,Yuxiao Dong
rl · tree-search · reasoning · Anthology · DBLP
7
Skim · Findings · ACL 2025

RaaS: Reasoning-Aware Attention Sparsity for Efficient LLM Reasoning

This work addresses the fact that reasoning tasks require long-chain generation, but full attention makes such samples too expensive, while naive sparsification easily loses accuracy at critical reasoning steps. Prior sparse attention mostly prunes generically by position or similarity, without explicitly considering which tokens are actually useful for the current reasoning step.

Junhao Hu,Wenrui Huang,Weidong Wang,Zhenwen Li,Tiancheng Hu,Zhixia Liu,Xusheng Chen,Tao Xie,Yizhou Shan
attention-sparsity · reasoning · inference-efficiency · Anthology · DBLP
7
Deep read · Outstanding · Long · ACL 2025

Between Circuits and Chomsky: Pre-pretraining on Formal Languages Imparts Linguistic Biases

This work addresses a more fundamental question: can "pre-pretraining" on formal languages, before ordinary pretraining, instill better linguistic inductive biases in a model? Such questions were mostly discussed in small-model cognitive experiments or theoretical analyses, weakly connected to large-scale LM training recipes; the authors want to make the relation among formal languages, neural circuits, and linguistic biases more testable.

Michael Y. Hu,Jackson Petty,Chuan Shi,William Merrill,Tal Linzen
pre-pretraining · formal-languages · linguistic-bias · Anthology · DBLP
7
Skim · Long · ACL 2025

Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process

Ermo Hua,Biqing Qi,Kaiyan Zhang,Kai Tian,Xingtai Lv,Ning Ding,Bowen Zhou
alignment · fine-tuning · sft-simplification · Anthology · DBLP
7
Deep read · Best Paper · Long · ACL 2025

Language Models Resist Alignment: Evidence From Data Compression

Jiaming Ji,Kaile Wang,Tianyi Alex Qiu,Boyuan Chen,Jiayi Zhou,Changye Li,Hantao Lou,Josef Dai,Yunhuai Liu,Yaodong Yang
alignment · compression · representation · Anthology · DBLP
7
Deep read · Long · ACL 2025

UniCodec: Unified Audio Codec with Single Domain-Adaptive Codebook

Yidi Jiang,Qian Chen,Shengpeng Ji,Yu Xi,Wen Wang,Chong Zhang,Xianghu Yue,Shiliang Zhang,Haizhou Li
audio-tokenizer · codec · speech-lm · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Instruction-Tuning Data Synthesis from Scratch via Web Reconstruction

This paper addresses the over-reliance of high-quality instruction-tuning data on human authoring or on generation by closed-source teacher models, which is costly and hard to sustain. The authors propose reconstructing instruction data from web content, aiming to free instruction-data synthesis from dependence on existing SFT corpora and strong teachers.

Yuxin Jiang,Yufei Wang,Chuhan Wu,Xinyi Dai,Yan Xu,Weinan Gan ... 1 author omitted ... ,Xin Jiang,Lifeng Shang,Ruiming Tang,Wei Wang
data-synthesis · instruction-tuning · web-data · Anthology · DBLP
7
Deep read · Long · ACL 2025

A Survey of Post-Training Scaling in Large Language Models

Hanyu Lai,Xiao Liu,Junjie Gao,Jiale Cheng,Zehan Qi,Yifan Xu ... 4 authors omitted ... ,Xin Lv,Minlie Huang,Yuxiao Dong,Jie Tang
post-training · scaling · alignment · Anthology · DBLP
7
Deep read · Long · ACL 2025

Geometric Signatures of Compositionality Across a Language Model's Lifetime

This work asks what geometric structure compositional ability leaves in representation space across the stages of a language model's training and alignment. Much compositionality research has stayed at behavioral metrics such as task accuracy or generalization curves, which say little about whether the ability emerges naturally during pretraining, is amplified in post-training, or is weakened at some stage.

Jin Hwa Lee,Thomas Jiralerspong,Lei Yu,Yoshua Bengio,Emily Cheng
compositionality · representation-geometry · training-dynamics · Anthology · DBLP
7
Deep read · Long · ACL 2025

EdiText: Controllable Coarse-to-Fine Text Editing with Diffusion Language Models

This work addresses how to make a diffusion language model perform controllable text editing that can rewrite heavily while preserving what should stay untouched. Conventional AR editing relies on token-by-token rewriting, which makes local controllability and global consistency hard to reconcile; diffusion LMs naturally support parallel refinement, but without staged control they tend either to over-edit or to miss the editing intent.

Che Hyun Lee,Heeseung Kim,Jiheum Yeom,Sungroh Yoon
diffusion-lm · text-editing · controllable-generation · Anthology · DBLP
7
Deep read · Long · ACL 2025

Quantification of Large Language Model Distillation

This work addresses the fact that everyone distills LLMs, yet there is little systematic quantification of what distillation transfers, what it loses, and which capabilities are inherited in proportion. Most papers report teacher-student gaps on a few benchmarks, which shows whether distillation helped but answers nothing about its capability boundary, nor when to distill, how small to go, or which capabilities to target.

Sunbowen Lee,Junting Zhou,Chang Ao,Kaige Li,Xeron Du,Sirui He ... 4 authors omitted ... ,Min Yang,Yitao Liang,Zhoufutu Wen,Shiwen Ni
distillation · quantification · knowledge-transfer · Anthology · DBLP
7
Deep read · Long · ACL 2025

Causal Estimation of Tokenisation Bias

Tokenizer choice systematically biases downstream evaluation, but a causal means of quantifying that bias was missing. Different tokenizers segment the same text at different granularities, so metrics such as perplexity and generation length cannot be compared directly across models; the problem has long been ignored or discussed only qualitatively.

Pietro Lesci,Clara Meister,Thomas Hofmann,Andreas Vlachos,Tiago Pimentel
ETH Zurich · University of Cambridge · tokenizer · tokenization-bias · causal-estimation · Anthology · DBLP
7
Skim · Long · ACL 2025

What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective

When an LLM is trained for fast thinking (answering directly) versus slow thinking (CoT reasoning), how do the learning dynamics differ across layers? Prior understanding of why CoT training works stayed at the output level, lacking a mechanistic analysis from the perspective of gradients and layers.

Ming Li,Yanhong Li,Tianyi Zhou
University of Maryland · reasoning · gradients · layer-analysis · Anthology · DBLP
7
Skim · Long · ACL 2025

Uncertainty-Aware Iterative Preference Optimization for Enhanced LLM Reasoning

Iterative preference optimization (e.g., multi-round DPO/IPO) for improving LLM reasoning is prone to reward hacking or performance degradation caused by noise in self-generated data. Identifying and handling high-uncertainty preference pairs during iteration is key to making iterative PO stable.

Lei Li,Hehuan Liu,Yaxin Zhou,ZhaoYang Gui,Xudong Weng,Yi Yuan,Zheng Wei,Zang Li
preference-optimization · reasoning · uncertainty · Anthology · DBLP
7
Skim · Findings · ACL 2025

Self-Improvement Towards Pareto Optimality: Mitigating Preference Conflicts in Multi-Objective Alignment

Moxin Li,Yuantao Zhang,Wenjie Wang,Wentao Shi,Zhuo Liu,Fuli Feng,Tat-Seng Chua
alignment · multi-objective · self-improvement · Anthology · DBLP
7
Deep read · Long · ACL 2025

TokAlign: Efficient Vocabulary Adaptation via Token Alignment

This paper addresses the efficiency loss caused by vocabulary mismatch when a model moves to a new language, domain, or tokenizer setting, where retraining the embeddings or running full continued pretraining is too expensive. The usual options are to forcibly extend the vocabulary or to rely entirely on subword splitting; the former is invasive, the latter token-inefficient.

Chong Li,Jiajun Zhang,Chengqing Zong
tokenizer · vocabulary-adaptation · token-alignment · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Align-SLM: Textless Spoken Language Models with Reinforcement Learning from AI Feedback

This paper treats the alignment problem for textless spoken language models: when a model generates directly over discrete speech tokens or acoustic units, how can preference optimization and behavior shaping be done as for text LLMs? Speech generation has mostly relied on reconstruction or imitation losses, so interaction quality, controllability, and safety are hard to correct through conventional supervision.

Guan-Ting Lin,Prashanth Gurunath Shivakumar,Aditya Gourav,Yile Gu,Ankur Gandhe,Hung-yi Lee,Ivan Bulyko
speech-lm · textless · rlhf · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Implicit Reasoning in Transformers is Reasoning through Shortcuts

Tianhe Lin,Jian Xie,Siyu Yuan,Deqing Yang
implicit-reasoning · shortcuts · transformers · Anthology · DBLP
7
Deep read · Demo · ACL 2025

GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture

Valentin Mamedov,Evgenii Kosarev,Gregory Leleytner,Ilya Shchuckin,Valeriy Berezovskiy,Daniil Smirnov ... 20 authors omitted ... ,Eldar Damirov,Vladimir Karlov,Ruslan Gaitukiev,Arkadiy Shatenov
moe · multilingual · russian · DOI · DBLP
7
Skim · Short · ACL 2025

Call for Rigor in Reporting Quality of Instruction Tuning Data

Hyeonseok Moon,Jaehyung Seo,Heuiseok Lim
instruction-tuning · data-quality · evaluation · DOI · DBLP
7
Skim · Findings · ACL 2025

Self-Training Elicits Concise Reasoning in Large Language Models

Tergel Munkhbat,Namgyu Ho,Seo Hyun Kim,Yongjin Yang,Yujin Kim,Se-Young Yun
self-training · reasoning · distillation · Anthology · DBLP
7
Deep read · Findings · ACL 2025

How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training

This work asks how LLMs actually acquire new knowledge during continual pre-training: is knowledge written locally, reorganized globally, or wired into existing "knowledge circuits"? Prior work mostly checked at the behavioral level whether continual pretraining learns new facts, without an actionable account of the internal mechanism, making it hard to tell which updates cause catastrophic forgetting and which are only shallow overwrites.

Yixin Ou,Yunzhi Yao,Ningyu Zhang,Hui Jin,Jiacheng Sun,Shumin Deng,Zhenguo Li,Huajun Chen
knowledge-circuits · continual-pretrain · mechanistic-interpretability · Anthology · DBLP
7
Deep read · Long · ACL 2025

Low-Bit Quantization Favors Undertrained LLMs

The conclusion is right in the title: low-bit quantization is friendlier to undertrained LLMs; it does not treat fully trained models the same. The conventional view regards quantization error as a deployment issue roughly independent of training state, but this paper argues that whether the model has converged, and the geometric state of its parameters, directly affect quantization robustness.

Xu Ouyang,Tao Ge,Thomas Hartvigsen,Zhisong Zhang,Haitao Mi,Dong Yu
quantization · undertrained-llm · low-bit · Anthology · DBLP
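The "deployment-only" view of quantization error that the paper pushes back on is easiest to state with the simplest quantizer: a per-tensor, symmetric round-to-nearest sketch (illustrative only, not the paper's experimental setup):

```python
import numpy as np

def quantize_rtn(w, bits=4):
    """Symmetric round-to-nearest weight quantization (per-tensor).

    Returns the dequantized weights and the relative quantization error.
    """
    levels = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    max_abs = np.abs(w).max()
    scale = max_abs / levels if max_abs > 0 else 1.0  # one scale for the tensor
    q = np.clip(np.round(w / scale), -levels, levels) # integer codes
    w_hat = q * scale                                 # dequantized weights
    err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
    return w_hat, err
```

The returned relative error shrinks as bits grow, but the paper's point is that the damage a given error does depends on the training state of the checkpoint being quantized, not on the error magnitude alone.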
7
Deep read · SRW · ACL 2025

HYPEROFA: Expanding LLM Vocabulary to New Languages via Hypernetwork-Based Embedding Initialization

The core question is direct: how to extend an existing LLM's vocabulary to new languages while damaging existing abilities as little as possible and spending as little on retraining as possible. The most common practice, randomly initializing the new token embeddings and continuing training, suffers from slow cold starts and unstable training, and is especially unfriendly to low-resource languages.

Enes Özeren,Yihong Liu,Hinrich Schütze
vocabulary-expansion · multilingual · hypernetwork · DOI · DBLP
7
Skim · Long · ACL 2025

LLäMmlein: Transparent, Compact and Competitive German-Only Language Models from Scratch

This work answers a question that grows more concrete every year: in an era of English-dominated large models, is it still worthwhile to train a German-only, reproducible, sufficiently strong compact model from scratch? Many non-English scenarios simply fine-tune multilingual or English models because that is cheap and readily available, usually at the cost of language specialization, data transparency, and training controllability.

Jan Pfister,Julia Wunderle,Andreas Hotho
small-lm · from-scratch · multilingual · Anthology · DBLP
7
Skim · Findings · ACL 2025

LongDPO: Unlock Better Long-form Generation Abilities for LLMs via Critique-augmented Stepwise Information

Bowen Ping,Jiali Zeng,Fandong Meng,Shuo Wang,Jie Zhou,Shanghang Zhang
dpo · long-form · reasoning · Anthology · DBLP
7
泛读ACL 2025

Inverse Reinforcement Learning Meets Large Language Model Alignment

Mihaela van der Schaar,Hao Sun
alignmentinverse-rlrlhfDOIDBLP
7
泛读FindingsACL 2025

Training Bilingual LMs with Data Constraints in the Targeted Language

这篇工作要解决的是:在目标语言数据受限的情况下,如何训练真正有用的双语语言模型。现实里很多双语或多语训练都默认目标语言有足够语料,或者直接把高资源语言数据堆进去,但这样很容易让模型表面上双语、实际被高资源语言主导,低资源语言能力上不来。

Skyler Seto,Maartje ter Hoeve,Richard He Bai,Natalie Schluter,David Grangier
bilingual-lmdata-mixturelow-resourceAnthologyDBLP
7
泛读LongACL 2025

Safety Alignment via Constrained Knowledge Unlearning

Zesheng Shi,Yucheng Zhou,Jing Li,Yuxin Jin,Yu Li,Daojing He,Fangming Liu,Saleh Alharbi,Jun Yu,Min Zhang
safety-alignment · unlearning · knowledge-editing · Anthology · DBLP
7
Skim · Long · ACL 2025

KV-Latent: Dimensional-level KV Cache Reduction with Frequency-aware Rotary Positional Embedding

Luohe Shi,Zuchao Li,Lefei Zhang,Baoyuan Qi,Guoming Liu,Hai Zhao
kv-cache · rope · inference-efficiency · Anthology · DBLP
7
Skim · Findings · ACL 2025

LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM

Sambal Shikhar,Mohammed Irfan Kurpath,Sahal Shaji Mullappilly,Jean Lahoud,Fahad Shahbaz Khan,Rao Muhammad Anwer,Salman H. Khan,Hisham Cholakkal
speech-lm · tts · autoregressive · Anthology · DBLP
7
Deep read · Findings · ACL 2025

EnerGIZAr: Leveraging GIZA++ for Effective Tokenizer Initialization

Pranaydeep Singh,Eneko Agirre,Gorka Azkune,Orphée De Clercq,Els Lefever
tokenizer · initialization · multilingual · Anthology · DBLP
7
Deep read · Long · ACL 2025

Information Locality as an Inductive Bias for Neural Language Models

Taiga Someya,Anej Svete,Brian DuSell,Timothy J. O'Donnell,Mario Giulianelli,Ryan Cotterell
inductive-bias · information-locality · language-modeling · Anthology · DBLP
7
Deep read · Long · ACL 2025

PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models

Mingyang Song,Zhaochen Su,Xiaoye Qu,Jiawei Zhou,Yu Cheng
process-reward-model · benchmark · reasoning · Anthology · DBLP
7
Deep read · Long · ACL 2025

Aligning Large Language Models with Implicit Preferences from User-Generated Content

This paper tackles aligning LLMs by mining implicit preferences from user-generated content, instead of relying on expensive explicit human preference annotation. RLHF/DPO-style methods depend heavily on pairwise preferences or high-quality labels, which limits scale, coverage, and distributional diversity; yet in the real world, abundant user behavior already exposes preference signals, just with more noise and harder attribution.

Zhaoxuan Tan,Zheng Li,Tianyi Liu,Haodong Wang,Hyokun Yun,Ming Zeng ... 3 authors omitted ... ,Ruijie Wang,Priyanka Nigam,Bing Yin,Meng Jiang
alignment · preference-learning · user-feedback · Anthology · DBLP
7
Deep read · Long · ACL 2025

Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation

Shuo Tang,Xianghe Pang,Zexi Liu,Bohan Tang,Rui Ye,Tian Jin,Xiaowen Dong,Yanfeng Wang,Siheng Chen
data-synthesis · post-training · multi-agent · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Adversarial Preference Learning for Robust LLM Alignment

The core problem: preference learning is not robust to adversarial perturbations, distribution shift, or malicious preference data, so LLM alignment can end up learning brittle or even manipulable behavior. Many preference-learning methods assume annotations are clean and represent the true objective, but real data carries noise, bias, and exploitable patterns, making robust alignment a practical concern.

Yuanfu Wang,Pengyu Wang,Chenyang Xi,Bo Tang,Junyi Zhu,Wenqiang Wei ... 6 authors omitted ... ,Zhiyu Li,Feiyu Xiong,Jie Hu,Mingchuan Yang
preference-learning · alignment · robustness · Anthology · DBLP
7
Deep read · Long · ACL 2025

HelpSteer3: Human-Annotated Feedback and Edit Data to Empower Inference-Time Scaling in Open-Ended General-Domain Tasks

Zhilin Wang,Jiaqi Zeng,Olivier Delalleau,Daniel Egert,Ellie Evans,Hoo-Chang Shin,Felipe Soares,Yi Dong,Oleksii Kuchaiev
feedback-data · inference-time-scaling · alignment · Anthology · DBLP
7
Skim · Long · ACL 2025

Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference

Benjamin Warner,Antoine Chaffin,Benjamin Clavié,Orion Weller,Oskar Hallström,Said Taghadouini ... 3 authors omitted ... ,Tom Aarsen,Griffin Thomas Adams,Jeremy Howard,Iacopo Poli
bidirectional-encoder · long-context · efficiency · Anthology · DBLP
7
Deep read · Long · ACL 2025

Tokenisation is NP-Complete

Philip Whittington,Gregor Bachmann,Tiago Pimentel
tokenizer · complexity-theory · np-complete · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Systematic Generalization in Language Models Scales with Information Entropy

Sondre Wold,Lucas Georges Gabriel Charpentier,Étienne Simon
systematic-generalization · information-entropy · scaling · Anthology · DBLP
7
Skim · Long · ACL 2025

Finding the Sweet Spot: Preference Data Construction for Scaling Preference Optimization

The effectiveness of preference optimization (DPO and variants) depends heavily on how preference data is constructed, yet there is no clear guidance for systematically choosing the quality gap, diversity, and scale of chosen/rejected pairs. Current practice is largely ad hoc, and the scaling behavior is unclear.

Yao Xiao,Hai Ye,Linyao Chen,Hwee Tou Ng,Lidong Bing,Xiaoli Li,Roy Ka-Wei Lee
Alibaba DAMO Academy · preference-optimization · preference-data · alignment · Anthology · DBLP
7
Skim · Findings · ACL 2025

Full-Step-DPO: Self-Supervised Preference Optimization with Step-wise Rewards for Mathematical Reasoning

Huimin Xu,Xin Mao,Feng-Lin Li,Xiaobao Wu,Wang Chen,Wei Zhang,Anh Tuan Luu
dpo · self-supervision · step-reward · Anthology · DBLP
7
Skim · Long · ACL 2025

Enhancing Character-Level Understanding in LLMs through Token Internal Structure Learning

The problem: subword- and word-level tokenizers make LLMs insensitive to the internal structure of characters (spelling, morphology, Chinese character components, etc.), so spelling robustness, character-level editing, and cross-script generalization have long been patched suboptimally via data augmentation or purely character-level models.

Zhu Xu,Zhiqiang Zhao,Zihan Zhang,Yuchi Liu,Quanwei Shen,Fei Liu,Yu Kuang,Jian He,Conglin Liu
tokenizer · character-level · token-structure · Anthology · DBLP
7
Skim · Long · ACL 2025

LESA: Learnable LLM Layer Scaling-Up

The problem: scaling up an LLM (making it deeper, wider, or adding layers) usually requires retraining or complex initialization and distillation; otherwise training becomes unstable or performance regresses, making continual scaling-up expensive.

Yifei Yang,Zouying Cao,Xinbei Ma,Yao Yao,Zhi Chen,Libo Qin,Hai Zhao
layer-scaling · architecture · model-growth · Anthology · DBLP
7
Deep read · Long · ACL 2025

Dynamic and Generalizable Process Reward Modeling

The problem: existing process reward models (PRMs) tend to overfit to specific tasks or formats and fail once the task or reasoning style changes, so using PRMs for reasoning enhancement generalizes poorly and is costly to maintain.

Zhangyue Yin,Qiushi Sun,Zhiyuan Zeng,Qinyuan Cheng,Xipeng Qiu,Xuanjing Huang
process-reward-model · rl · reasoning · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Revealing and Mitigating the Local Pattern Shortcuts of Mamba

Wangjie You,Zecheng Tang,Juntao Li,Lili Yao,Min Zhang
mamba · sequence-modeling · training-dynamics · Anthology · DBLP
7
Skim · Findings · ACL 2025

Position Paper: MeMo: Towards Language Models with Associative Memory Mechanisms

Fabio Massimo Zanzotto,Elena Sofia Ruzzetti,Giancarlo A. Xompero,Leonardo Ranaldi,Davide Venditti,Federico Ranaldi,Cristina Giannone,Andrea Favalli,Raniero Romagnoli
associative-memory · architecture · position-paper · Anthology · DBLP
7
Deep read · Findings · ACL 2025

Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts

Zeliang Zhang,Xiaodong Liu,Hao Cheng,Chenliang Xu,Jianfeng Gao
moe · pruning · task-agnostic · Anthology · DBLP
7
Deep read · Long · ACL 2025

ClusterAttn: KV Cache Compression under Intrinsic Attention Clustering

Minwei Zhang,Haifeng Sun,Jingyu Wang,Shaolong Li,Wanyi Ning,Qi Qi,Zirui Zhuang,Jianxin Liao
kv-cache · attention · compression · Anthology · DBLP
7
Skim · Findings · ACL 2025

Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching

Xiaoying Zhang,Baolin Peng,Ye Tian,Jingyan Zhou,Yipeng Zhang,Haitao Mi,Helen M. Meng
self-training · knowledge-editing · instruction-tuning · Anthology · DBLP
7
Skim · Long · ACL 2025

IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization

Xinghua Zhang,Haiyang Yu,Cheng Fu,Fei Huang,Yongbin Li
preference-optimization · instruction-following · dpo · Anthology · DBLP
7
Deep read · Findings · ACL 2025

The Lessons of Developing Process Reward Models in Mathematical Reasoning

Zhenru Zhang,Chujie Zheng,Yangzhen Wu,Beichen Zhang,Runji Lin,Bowen Yu,Dayiheng Liu,Jingren Zhou,Junyang Lin
process-reward-model · math-reasoning · rl · Anthology · DBLP
7
Deep read · Long · ACL 2025

Analyzing the Rapid Generalization of SFT via the Perspective of Attention Head Activation Patterns

Yang Zhao,Li Du,Xiao Ding,Kai Xiong,Ting Liu,Bing Qin
sft · attention-head · generalization · Anthology · DBLP
7
Deep read · Long · ACL 2025

Model Extrapolation Expedites Alignment

Chujie Zheng,Ziqi Wang,Heng Ji,Minlie Huang,Nanyun Peng
model-extrapolation · alignment · sft · Anthology · DBLP
7
Deep read · Long · ACL 2025

DAPE V2: Process Attention Score as Feature Map for Length Extrapolation

Chuanyang Zheng,Yihang Gao,Han Shi,Jing Xiong,Jiankai Sun,Jingyao Li ... 2 authors omitted ... ,Michael Ng,Xin Jiang,Zhenguo Li,Yu Li
length-extrapolation · positional-encoding · attention · Anthology · DBLP
7
Skim · Findings · ACL 2025

FANNO: Augmenting High-Quality Instruction Data with Open-Sourced LLMs Only

He Zhu,Yifan Ding,Yicheng Tao,Zhiwen Ruan,Yixia Li,Wenjia Zhang,Yun Chen,Guanhua Chen
instruction-data · data-synthesis · alignment · Anthology · DBLP
7
Skim · Findings · ACL 2025

Unsupervised Morphological Tree Tokenizer

Qingyang Zhu,Xiang Hu,Pengyu Ji,Wei Wu,Kewei Tu
tokenizer · morphology · subword · Anthology · DBLP
7
Skim · Long · ACL 2025

Second Language (Arabic) Acquisition of LLMs via Progressive Vocabulary Expansion

Jianqing Zhu,Huang Huang,Zhihang Lin,Juhao Liang,Zhengyang Tang,Khalid Almubarak ... 12 authors omitted ... ,Ruoyu Sun,Haizhou Li,Benyou Wang,Jinchao Xu
vocabulary · multilingual · continual-pretrain · Anthology · DBLP
7
Skim · Findings · ACL 2025

SGDPO: Self-Guided Direct Preference Optimization for Language Model Alignment

Wenqiao Zhu,Ji Liu,Lulu Wang,Jun Wu,Yulun Zhang
dpo · alignment · preference-optimization · Anthology · DBLP
7
Skim · Outstanding · Long · ACL 2025

From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding

Chiwei Zhu,Benfeng Xu,Xiaorui Wang,Zhendong Mao
instruction-data · synthetic-data · data-synthesis · DOI · DBLP
7
Skim · Findings · ACL 2025

TRANS-ZERO: Self-Play Incentivizes Large Language Models for Multilingual Translation Without Parallel Data

Wei Zou,Sen Yang,Yu Bao,Shujian Huang,Jiajun Chen,Shanbo Cheng
self-play · multilingual · translation · Anthology · DBLP
6
Skim · Findings · ACL 2025

Retrieval-Augmented Process Reward Model for Generalizable Mathematical Reasoning

Existing process reward models (PRMs) generalize poorly out of distribution and cannot handle two OOD scenarios: step-level OOD caused by differing reasoning patterns, and question-level OOD caused by dataset shift.

Jiachen Zhu,Congmin Zheng,Jianghao Lin,Kounianhua Du,Ying Wen,Yong Yu,Jun Wang,Weinan Zhang
process-reward-model · retrieval · reasoning · Anthology · arXiv · DBLP
5
Skim · Long · ACL 2025

SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence

Existing ways of injecting contextual evidence into LLMs (retrieval or user-provided) do not solve the model's difficulty in locating and using key evidence within noisy real-world contexts; prior work generally assumes the model can correctly identify valid evidence and does no inference-time optimization of the evidence-extraction step.

Zhining Liu,Rana Ali Amjad,Ravinarayana Adkathimar,Tianxin Wei,Hanghang Tong
retrieval · evidence-use · long-context · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

DoMIX: An Efficient Framework for Exploiting Domain Knowledge in Fine-Tuning

Existing continual domain-adaptive pre-training (continual DAP) methods have three flaws that have never been solved together: high training compute and memory overhead, sensitivity to the order of incremental data, and producing a single general-purpose model, which contradicts DAP's core goal of domain-specific optimization.

Dohoon Kim,Donghun Kang,Taesup Moon
domain-adaptive-pretraining · continual-pretraining · fine-tuning · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

Lost in Multilinguality: Dissecting Cross-lingual Factual Inconsistency in Transformer Language Models

The problem: multilingual models that have clearly learned the same fact across languages still often give inconsistent answers to semantically equivalent cross-lingual prompts. This cross-lingual inconsistency is known, but most studies stop at quantifying the phenomenon, without explaining whether the error arises in knowledge storage, cross-lingual mapping, or the output stage.

Mingyang Wang,Heike Adel,Lukas Lange,Yihong Liu,Ercong Nie,Jannik Strötgen,Hinrich Schütze
multilingual · factuality · interpretability · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

Unlocking General Long Chain-of-Thought Reasoning Capabilities of Large Language Models via Representation Engineering

Existing approaches to eliciting long chain-of-thought (long CoT) reasoning mostly rely on small-sample fine-tuning; they neither verify at the representation level whether long CoT is a general capability of LLMs nor pin down the boundary conditions of its cross-task transfer.

Xinyu Tang,Xiaolei Wang,Zhihao Lv,Yingqian Min,Xin Zhao,Binbin Hu,Ziqi Liu,Zhiqiang Zhang
long-cot · representation-engineering · reasoning · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities?

Research on o1-style test-time scaling assumes that longer reasoning chains and more compute at inference yield higher reasoning accuracy; the universality of this assumption has not been verified, and the boundary conditions of test-time scaling remain unclear.

Zhiyuan Zeng,Qinyuan Cheng,Zhangyue Yin,Yunhua Zhou,Xipeng Qiu
test-time-scaling · reasoning · o1 · Anthology · arXiv · DBLP
6
Skim · Long · ACL 2025

Accurate KV Cache Quantization with Outlier Tokens Tracing

Yi Su,Yuechi Zhou,Quantong Qiu,Juntao Li,Qingrong Xia,Ping Li,Xinyu Duan,Zhefeng Wang,Min Zhang
kv-cache · quantization · outlier-tokens · Anthology · DBLP
6
Skim · Long · ACL 2025

SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation

Jialong Wu,Zhenglin Wang,Linhai Zhang,Yilong Lai,Yulan He,Deyu Zhou
kv-cache · compression · long-context · Anthology · DBLP
6
Skim · Findings · ACL 2025

Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion

Anum Afzal,Florian Matthes,Gal Chechik,Yftah Ziser
cot · representations · reasoning · Anthology · DBLP
6
Skim · SRW · ACL 2025

Learning and Enforcing Context-Sensitive Control for LLMs

Mohammad Albinhassan,Pranava Madhyastha,Mark Law,Alessandra Russo
controllability · alignment · safety · DOI · DBLP
6
Skim · Long · ACL 2025

The Hidden Attention of Mamba Models

Ameen Ali,Itamar Zimerman,Lior Wolf
mamba · attention · interpretability · Anthology · DBLP
6
Skim · Long · ACL 2025

Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs

Anshumann,Mohd Abbas Zaidi,Akhil Kedia,Jinwoo Ahn,Taehwak Kwon,Kangwook Lee,Haejun Lee,Joohyung Lee
knowledge-distillation · sampling · efficiency · Anthology · DBLP
6
Skim · Long · ACL 2025

Language Models Grow Less Humanlike beyond Phase Transition

Tatsuya Aoyama,Ethan Wilcox
scaling · phase-transition · humanlike · Anthology · DBLP
6
Skim · Long · ACL 2025

On the Acquisition of Shared Grammatical Representations in Bilingual Language Models

This paper asks whether bilingual language models spontaneously learn shared grammatical representations, rather than merely aligning at the lexical level. Much cross-lingual work assumes shared parameters yield some cross-lingual abstraction, but whether that abstraction is genuine grammatical transfer or an artifact of statistical co-occurrence has never been properly teased apart.

Catherine Arnett,Tyler A. Chang,James A. Michaelov,Ben Bergen
bilingual · representations · grammar · Anthology · DBLP
6
Skim · Short · ACL 2025

A Little Human Data Goes A Long Way

The question: given that human annotation is expensive, can a small amount of high-quality human data significantly amplify model performance, or even substitute for large amounts of low-quality or synthetic data? The question is long-standing but has become sharper now that synthetic data dominates post-training: quantity is easy to pile up, and what remains unclear is exactly how much high-signal supervision is worth.

Dhananjay Ashok,Jonathan May
human-data · alignment · data-efficiency · DOI · DBLP
6
Skim · Findings · ACL 2025

REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space

This paper addresses how a language model can forget sensitive information through precise, localized knowledge deletion without large-scale retraining. Existing unlearning methods are typically costly, have broad side effects, or only mask answers at the behavioral level without actually removing knowledge that remains retrievable from the parameters.

Tomer Ashuach,Martin Tutek,Yonatan Belinkov
unlearning · model-editing · safety · Anthology · DBLP
6
Skim · Long · ACL 2025

LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks

This paper introduces LongBench v2, which aims to evaluate long-context models' deep understanding and reasoning in realistic multitask settings, rather than only shallow retrieval or local lookup. Older long-context benchmarks often reduce the problem to "can the model find the answer in a long document," which cannot distinguish genuine cross-passage integration, constraint tracking, and multi-step reasoning.

Yushi Bai,Shangqing Tu,Jiajie Zhang,Hao Peng,Xiaozhi Wang,Xin Lv ... 2 authors omitted ... ,Lei Hou,Yuxiao Dong,Jie Tang,Juanzi Li
long-context · benchmark · reasoning · Anthology · DBLP
6
Skim · Findings · ACL 2025

Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization

This paper targets a core flaw in preference alignment: existing methods treat "good answers" and "bad answers" as simple relative rankings within the same distribution, but bad apples and good oranges often come from mismatched sources with asymmetric difficulty, and comparing them directly distorts the optimization objective. This matters because preference optimization is now mainstream in post-training, and distribution mismatch in the data directly determines whether the model learns user preferences or annotation-set bias.

Hritik Bansal,Ashima Suvarna,Gantavya Bhatt,Nanyun Peng,Kai-Wei Chang,Aditya Grover
preference-optimization · alignment · dpo · Anthology · DBLP
6
Skim · Long · ACL 2025

Fixing Distribution Shifts of LLM Self-Critique via On-Policy Self-Play Training

Rong Bao,Donglei Yu,Kai Fan,Minpeng Liao
self-critique · distribution-shift · self-play · Anthology · DBLP
6
Skim · Long · ACL 2025

GRaMPa: Subword Regularisation by Skewing Uniform Segmentation Distributions with an Efficient Path-counting Markov Model

Thomas Bauwens,David Kaczér,Miryam de Lhoneux
tokenizer · subword-regularization · bpe · Anthology · DBLP
6
Skim · Findings · ACL 2025

Context-DPO: Aligning Language Models for Context-Faithfulness

Baolong Bi,Shaohan Huang,Yiwei Wang,Tianchi Yang,Zihan Zhang,Haizhen Huang ... 4 authors omitted ... ,Weiwei Deng,Feng Sun,Qi Zhang,Shenghua Liu
dpo · context-faithfulness · alignment · Anthology · DBLP
6
Skim · Findings · ACL 2025

Boosting Policy and Process Reward Models with Monte Carlo Tree Search in Open-Domain QA

Chi-Min Chan,Chunpu Xu,Junqi Zhu,Jiaming Ji,Donghai Hong,Pengcheng Wen ... 2 authors omitted ... ,Yaodong Yang,Wei Xue,Sirui Han,Yike Guo
reward-model · mcts · reasoning · Anthology · DBLP
6
Skim · Long · ACL 2025

JoPA: Explaining Large Language Model's Generation via Joint Prompt Attribution

Yurui Chang,Bochuan Cao,Yujia Wang,Jinghui Chen,Lu Lin
interpretability · attribution · generation · Anthology · DBLP
6
Skim · Findings · ACL 2025

Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling

Maximillian Chen,Ruoxi Sun,Sercan Ö. Arik
spoken-dialogue · multimodal · data-quality · Anthology · DBLP
6
Skim · Findings · ACL 2025

Advancing General Multimodal Capability of Vision-language Models with Pyramid-descent Visual Position Encoding

Zhanpeng Chen,Mingxiao Li,Ziyang Chen,Nan Du,Xiaolong Li,Yuexian Zou
vlm · position-encoding · visual-token · Anthology · DBLP
6
Skim · Long · ACL 2025

What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices

Zhi Chen,Qiguang Chen,Libo Qin,Qipeng Guo,Haijun Lv,Yicheng Zou,Hang Yan,Kai Chen,Dahua Lin
long-context · instruction-data · synthetic-data · Anthology · DBLP
6
Skim · Long · ACL 2025

Cracking Factual Knowledge: A Comprehensive Analysis of Degenerate Knowledge Neurons in Large Language Models

Yuheng Chen,Pengfei Cao,Yubo Chen,Yining Wang,Shengping Liu,Kang Liu,Jun Zhao
knowledge · neurons · interpretability · Anthology · DBLP
6
Skim · Long · ACL 2025

The Knowledge Microscope: Features as Better Analytical Lenses than Neurons

Yuheng Chen,Pengfei Cao,Kang Liu,Jun Zhao
interpretability · features · representation · Anthology · DBLP
6
Skim · Findings · ACL 2025

SPICA: Retrieving Scenarios for Pluralistic In-Context Alignment

The problem: when "alignment" has no single correct answer, how can a model, in an in-context setting, respond in ways that match the different values or preferences a specific scenario calls for, instead of being flattened toward one averaged preference? Many alignment methods assume a single global reward or norm, which often falls short in open-ended settings, so the authors recast the problem as "retrieve similar scenarios first, then align in context."

Quan Ze Chen,Kevin Feng,Chan Young Park,Amy X. Zhang
in-context-learning · alignment · retrieval · Anthology · DBLP
6
Skim · Long · ACL 2025

Retrospective Learning from Interactions

This work studies how a model, after interacting with users, can learn retrospectively from interaction history rather than only from immediate feedback, turning previously underused training signal into improvement. Many interactive learning systems use only one-step online feedback, which wastes samples and is easily dominated by short-term noise, making retrospective learning worthwhile.

Zizhao Chen,Mustafa Omer Gul,Yiwei Chen,Gloria Geng,Anne Wu,Yoav Artzi
interaction-data · online-learning · feedback · Anthology · DBLP
6
Skim · Long · ACL 2025

A Statistical and Multi-Perspective Revisiting of the Membership Inference Attack in Large Language Models

This work revisits how statistically reliable, and how stable, the membership inference attacks commonly used to assess LLM training-data leakage actually are. The risk has often been inflated by a single attack setting, a single threshold, or isolated success cases; without tight statistical controls, it is easy to overestimate the real privacy risk.

Bowen Chen,Namgi Han,Yusuke Miyao
privacy · membership-inference · memorization · Anthology · DBLP
6
Skim · Findings · ACL 2025

MIG: Automatic Data Selection for Instruction Tuning by Maximizing Information Gain in Semantic Space

The problem: instruction-tuning data keeps growing, but more is not always better; how can the most beneficial subset be selected automatically, rather than by heuristic deduplication, random sampling, or hand-crafted rules? This used to be handled crudely because estimating informational value is hard; with today's larger data pools, the training cost of low-quality and redundant samples is no longer negligible.

Yicheng Chen,Yining Li,Kai Hu,Zerun Ma,Haochen Ye,Kai Chen
data-selection · instruction-tuning · information-gain · Anthology · DBLP
6
Skim · Industry · ACL 2025

EdgeInfinite: A Memory-Efficient Infinite-Context Transformer for Edge Devices

Jiyu Chen,Shuang Peng,Daxiong Luo,Fan Yang,Renshou Wu,Fangyuan Li,Xiaoxin Chen
long-context · edge-device · memory-efficient · DOI · DBLP
6
Skim · Long · ACL 2025

CLaSp: In-Context Layer Skip for Self-Speculative Decoding

Longze Chen,Renke Shan,Huiming Wang,Lu Wang,Ziqiang Liu,Run Luo,Jiawei Wang,Hamid Alinejad-Rokny,Min Yang
speculative-decoding · layer-skip · in-context · Anthology · DBLP
6
Skim · Long · ACL 2025

EfficientQAT: Efficient Quantization-Aware Training for Large Language Models

Mengzhao Chen,Wenqi Shao,Peng Xu,Jiahao Wang,Peng Gao,Kaipeng Zhang,Ping Luo
quantization · qat · training-efficiency · Anthology · DBLP
6
Skim · Long · ACL 2025

Quantifying Semantic Emergence in Language Models

Hang Chen,Xinyu Yang,Jiaying Zhu,Wenya Wang
emergence · semantic-emergence · scaling · Anthology · DBLP
6
Skim · Findings · ACL 2025

LCFO: Long Context and Long Form Output Dataset and Benchmarking

Marta R. Costa-jussà,Pierre Andrews,Mariano Coria Meglioli,Joy Chen,Joe Chuang,David Dale ... 3 authors omitted ... ,Holger Schwenk,Tuan Tran,Arina Turkatenko,Carleigh Wood
long-context · benchmark · evaluation · Anthology · DBLP
6
Skim · Long · ACL 2025

Data Caricatures: On the Representation of African American Language in Pretraining Corpora

Nicholas Deas,Blake Vente,Amith Ananthram,Jessica Grieser,Desmond Upton Patton,Shana Kleiner,James R. Shepard III,Kathleen McKeown
data-quality · pretraining-data · dialect · Anthology · DBLP
6
Skim · Long · ACL 2025

Efficient Safety Alignment of Large Language Models via Preference Re-ranking and Representation-based Reward Modeling

Qiyuan Deng,Xuefeng Bai,Kehai Chen,Yaowei Wang,Liqiang Nie,Min Zhang
safety · alignment · preference-optimization · Anthology · DBLP
6
Skim · Long · ACL 2025

Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders

The question: do multilingual LLMs contain separable "language-specific features," and along which sparse directions are they encoded? Prior work used probing, representation similarity, or attention visualization to discuss language sharing versus separation, but those methods rarely yield feature-level conclusions that are intervenable, composable, and locally interpretable.

Boyi Deng,Yu Wan,Baosong Yang,Yidan Zhang,Fuli Feng
interpretability · sparse-autoencoder · multilingual · Anthology · DBLP
6
Skim · Long · ACL 2025

A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression

The question: can gist-token context compression actually substitute for full attention, or is it only a speed-for-quality compromise in some scenarios? Such methods often report positive results on a handful of tasks, with no systematic study of when they work and when they squeeze out critical information.

Chenlong Deng,Zhisong Zhang,Kelong Mao,Shuaiyi Li,Xinting Huang,Dong Yu,Zhicheng Dou
gist-token · context-compression · long-context · Anthology · DBLP
6
Skim · Long · ACL 2025

Unleashing LLM Reasoning Capability via Scalable Question Synthesis from Scratch

The problem: high-quality reasoning data is scarce, so improving LLM reasoning often relies on hand-built data, teacher-model distillation, or repeatedly overfitting to existing benchmarks. The issue is not whether questions exist, but how to generate, from scratch and at scale, a question distribution that genuinely trains reasoning ability.

Yuyang Ding,Xinyu Shi,Xiaobo Liang,Juntao Li,Zhaopeng Tu,Qiaoming Zhu,Min Zhang
reasoning · data-synthesis · instruction-tuning · Anthology · DBLP
6
Skim · Long · ACL 2025

Scalable Vision Language Model Training via High Quality Data Curation

The claim: VLM training is not necessarily bottlenecked by model architecture; often the real bottleneck is low-quality image-text data dragging down scalable training. It has been common to scale up data while tolerating noise and mispaired examples, but at high compute, poor data quality shows up directly as inefficient alignment, poor sample utilization, and flattening late-stage gains.

Hongyuan Dong,Zijian Kang,Weijie Yin,LiangXiao LiangXiao,ChaoFeng ChaoFeng,Ran Jiao
vlm · data-curation · data-quality · Anthology · DBLP
6
Skim · Long · ACL 2025

Lost in the Context: Insufficient and Distracted Attention to Contexts in Preference Modeling

This work points out that preference models, when ranking responses, often underuse the input context and are even distracted by irrelevant surface features, so preference modeling learns not "which answer better fits the context" but "which answer looks more like a high-scoring answer." This explains why many reward models are unstable on tasks with complex contexts.

Shihan Dou,Jiayi Chen,Chenhao Huang,Feng Chen,Wei Chengzhi,Huiyuan Zheng ... 5 authors omitted ... ,Zongzhang Zhang,Tao Gui,Qi Zhang,Xuanjing Huang
preference-modeling · attention-analysis · reward-model · Anthology · DBLP
6
Skim · Long · ACL 2025

Disentangling the Roles of Representation and Selection in Data Pruning

Yupei Du,Yingjin Song,Hugh Mee Wong,Daniil Ignatev,Albert Gatt,Dong Nguyen
data-pruning · data-selection · representation · Anthology · DBLP
6
Skim · Findings · ACL 2025

Variable Layerwise Quantization: A Simple and Effective Approach to Quantize LLMs

Razvan-Gabriel Dumitru,Vikas Yadav,Rishabh Maheshwary,Paul-Ioan Clotan,Sathwik Tejaswi Madhusudhan,Mihai Surdeanu
quantization · layerwise · compression · Anthology · DBLP
6
Skim · Industry · ACL 2025

Model Merging for Knowledge Editing

Zichuan Fu,Xian Wu,Guojing Li,Yingying Zhang,Yefeng Zheng,Tianshi Ming,Yejing Wang,Wanyu Wang,Xiangyu Zhao
model-merging · knowledge-editing · alignment · DOI · DBLP
6
Skim · Findings · ACL 2025

Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization

Yuhan Fu,Ruobing Xie,Xingwu Sun,Zhanhui Kang,Xirong Li
mllm · hallucination · dpo · Anthology · DBLP
6
Skim · Findings · ACL 2025

GUIDEX: Guided Synthetic Data Generation for Zero-Shot Information Extraction

Neil De La Fuente,Oscar Sainz,Iker García-Ferrero,Eneko Agirre
synthetic-data · data-generation · instruction-data · Anthology · DBLP
6
Skim · Findings · ACL 2025

JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning

Chang Gao,Wenxuan Zhang,Guizhen Chen,Wai Lam
instruction-tuning · structured-output · alignment · Anthology · DBLP
6
Skim · Findings · ACL 2025

Do Vision-Language Models Have Internal World Models? Towards an Atomic Evaluation

Qiyue Gao,Xinyu Pi,Kevin Liu,Junrong Chen,Ruolan Yang,Xinqi Huang ... 14 authors omitted ... ,Tianmin Shu,Ziqiao Ma,Lianhui Qin,Zhiting Hu
vlm · world-model · evaluation · Anthology · DBLP
6
Skim · Long · ACL 2025

A Strategic Coordination Framework of Small LMs Matches Large LMs in Data Synthesis

Xin Gao,Qizhi Pei,Zinan Tang,Yu Li,Honglin Lin,Jiang Wu,Lijun Wu,Conghui He
synthetic-data · small-models · data-generation · Anthology · DBLP
6
Skim · Long · ACL 2025

Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models

Kuofeng Gao,Shutao Xia,Ke Xu,Philip Torr,Jindong Gu
audio-language · benchmark · dialogue · Anthology · DBLP
6
Skim · Long · ACL 2025

Centurio: On Drivers of Multilingual Ability of Large Vision-Language Model

Gregor Geigle,Florian Schneider,Carolin Holtermann,Chris Biemann,Radu Timofte,Anne Lauscher,Goran Glavas
vlm · multilingual · pretraining · Anthology · DBLP
6
Skim · Findings · ACL 2025

ProcrustesGPT: Compressing LLMs with Structured Matrices and Orthogonal Transformations

In LLM compression, unstructured pruning preserves accuracy but is hardware-unfriendly, while structured pruning loses too much accuracy. The authors replace weight matrices with structured matrices (e.g., Kronecker, low-rank) to obtain both compression ratio and hardware efficiency, a direction previously underexplored systematically for LLMs.

Ekaterina Grishina,Mikhail Gorbunov,Maxim V. Rakhuba
compression · structured-pruning · orthogonal · Anthology · DBLP
6
Skim · Findings · ACL 2025

DReSD: Dense Retrieval for Speculative Decoding

Speculative decoding needs a draft model to generate candidate tokens quickly, but training and maintaining a draft model adds cost. The authors propose dense retrieval of candidate token sequences directly from a corpus to replace the draft model, reducing the deployment complexity of speculative decoding.

Milan Gritta,Huiyin Xue,Gerasimos Lampouras
speculative-decoding · retrieval · inference · Anthology · DBLP
6
Skim · Long · ACL 2025

Exploring the Impact of Instruction-Tuning on LLM's Susceptibility to Misinformation

Instruction tuning changes an LLM's susceptibility to misinformation, but whether it strengthens or weakens the model's resistance to misinformation had not been studied systematically.

Kyubeen Han,Junseo Jang,Hongjin Kim,Geunyeong Jeong,Harksoo Kim
instruction-tuning · misinformation · alignment · Anthology · DBLP
6
Skim · Long · ACL 2025

Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding

Can Transformers without positional encoding recognize and generate hierarchical language structures (e.g., nested brackets, context-free languages)? Prior theoretical analyses mostly assume positional encoding is present, leaving the expressivity limits of Transformers without it unclear.

Daichi Hayakawa,Issei Sato
theory · transformer · positional-encoding · Anthology · DBLP
6
Skim · Long · ACL 2025

Don't Half-listen: Capturing Key-part Information in Continual Instruction Tuning

Yongquan He,Wenyuan Zhang,Xuancheng Huang,Peng Zhang,Lingxun Meng,Xiang Zhou,Ke Zeng,Xunliang Cai
continual-tuning · instruction-tuning · forgetting · Anthology · DBLP
6
Skim · Findings · ACL 2025

Unlocking Speech Instruction Data Potential with Query Rewriting

Yonghua Hei,Yibo Yan,Shuliang Liu,Huiyu Zhou,Linfeng Zhang,Xuming Hu
speech · instruction-data · query-rewriting · Anthology · DBLP
6
Skim · Findings · ACL 2025

Fuzzy Speculative Decoding for a Tunable Accuracy-Runtime Tradeoff

Maximilian Holsman,Yukun Huang,Bhuwan Dhingra
speculative-decoding · inference-efficiency · accuracy-tradeoff · Anthology · DBLP
6
Skim · Long · ACL 2025

Mixtures of In-Context Learners

The problem: a single in-context learning (ICL) strategy is brittle to shifts in the task distribution, while existing practice assumes one fixed prompt or one fixed demonstration-selection rule suffices. That assumption breaks when tasks are heterogeneous, demonstrations are noisy, or one input admits multiple valid reasoning paths, so the authors turn to mixing multiple in-context learners rather than further optimizing a single one.

Giwon Hong,Emile van Krieken,Edoardo Maria Ponti,Nikolay Malkin,Pasquale Minervini
in-context-learning · mixture-models · icl-mechanism · Anthology · DBLP
6
Skim · Long · ACL 2025

StitchLLM: Serving LLMs, One Block at a Time

This work addresses the coarse compute granularity of LLM serving: existing serving systems usually schedule whole models as the smallest unit, limiting flexibility in resource utilization, latency control, and heterogeneous deployment. The larger the model, the worse this gets, since compute and bandwidth bottlenecks differ across layers.

Bodun Hu,Shuozhe Li,Saurabh Agarwal,Myungjin Lee,Akshay Jajoo,Jiamin Li ... 2 authors omitted ... ,Donghyun Kim,Hong Xu,Amy Zhang,Aditya Akella
model-serving · block-level · inference-efficiency · Anthology · DBLP
6
Skim · Findings · ACL 2025

Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models

Rui Hu,Delai Qiu,Shuyu Wei,Jiaming Zhang,Yining Wang,Shengping Liu,Jitao Sang
omnimodal · vision-audio · multimodal-llm · Anthology · DBLP
6
Skim · Long · ACL 2025

SAM Decoding: Speculative Decoding via Suffix Automaton

Yuxuan Hu,Ke Wang,Xiaokang Zhang,Fanjin Zhang,Cuiping Li,Hong Chen,Jing Zhang
speculative-decoding · suffix-automaton · inference-efficiency · Anthology · DBLP
6
Skim · Findings · ACL 2025

Task-Informed Anti-Curriculum by Masking Improves Downstream Performance on Text

Andrei Jarca,Florinel-Alin Croitoru,Radu Tudor Ionescu
curriculum-learning · masking · objective · DOI · DBLP
6
Skim · Long · ACL 2025

K/DA: Automated Data Generation Pipeline for Detoxifying Implicitly Offensive Language in Korean

Minkyeong Jeon,Hyemin Jeong,Yerang Kim,Jiyoung Kim,Jae Hyeon Cho,Byung-Jun Lee
data-synthesis · safety · detoxification · Anthology · DBLP
6
Skim · Findings · ACL 2025

Unlocking LLMs' Self-Improvement Capacity with Autonomous Learning for Domain Adaptation

Ke Ji,Junying Chen,Anningzhe Gao,Wenya Xie,Xiang Wan,Benyou Wang
self-improvement · domain-adaptation · synthetic-data · Anthology · DBLP
6
Skim · Long · ACL 2025

Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs

Tao Ji,Bin Guo,Yuanbin Wu,Qipeng Guo,Shenlixing Shenlixing,Chenzhan Chenzhan,Xipeng Qiu,Qi Zhang,Tao Gui
attention · latent-attention · inference · Anthology · DBLP
6
Skim · Long · ACL 2025

PKU-SafeRLHF: Towards Multi-Level Safety Alignment for LLMs with Human Preference

Jiaming Ji,Donghai Hong,Borong Zhang,Boyuan Chen,Josef Dai,Boren Zheng ... 3 authors omitted ... ,Boxun Li,Sirui Han,Yike Guo,Yaodong Yang
rlhf · safety · preference-data · Anthology · DBLP
6
Skim · Long · ACL 2025

Neuron-Level Sequential Editing for Large Language Models

Houcheng Jiang,Junfeng Fang,Tianyu Zhang,Baolong Bi,An Zhang,Ruipeng Wang,Tao Liang,Xiang Wang
model-editing · neurons · interpretability · Anthology · DBLP
6
Skim · Findings · ACL 2025

KVPR: Efficient LLM Inference with I/O-Aware KV Cache Partial Recomputation

This paper addresses the KV cache consuming too much memory and bandwidth in long-context inference, limiting throughput. Existing practice usually chooses between keeping the full KV cache (expensive) and aggressively compressing it (accuracy-harming); the authors use partial recomputation to fill the gap between the two.

Chaoyi Jiang,Lei Gao,Hossein Entezari Zarch,Murali Annavaram
kv-cache · inference · io-aware · Anthology · DBLP
6
Skim · Findings · ACL 2025

SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities

This paper targets an increasingly real risk: long chain-of-thought capability may simultaneously amplify a model's ability to plan harm, evade constraints, and execute step by step. Many safety evaluations focus on short instructions and single-turn refusals and do not adequately cover long-CoT settings, so they cannot tell whether "better at reasoning" also means "better at doing harm."

Fengqing Jiang,Zhangchen Xu,Yuetai Li,Luyao Niu,Zhen Xiang,Bo Li,Bill Yuchen Lin,Radha Poovendran
safety · cot · alignment · Anthology · DBLP
6
Skim · Long · ACL 2025

Disentangling Memory and Reasoning Ability in Large Language Models

The core is separating memory from reasoning in LLMs: when a model answers correctly, is it because it memorized a template, a fact, or a trace of the training set, or because it genuinely reasoned? The two have long been conflated, so many reported "reasoning gains" may actually be retrieval or pattern reproduction.

Mingyu Jin,Weidi Luo,Sitao Cheng,Xinyi Wang,Wenyue Hua,Ruixiang Tang,William Yang Wang,Yongfeng Zhang
memory · reasoning · interpretability · Anthology · DBLP
6
Skim · Industry · ACL 2025

TaDA: Training-free recipe for Decoding with Adaptive KV Cache Compression and Mean-centering

This paper addresses the cost of the KV cache at inference time: many compression methods either require training or lose accuracy as soon as they are applied. The authors pursue training-free adaptive KV cache compression combined with mean-centering, aiming to improve long-context decoding efficiency without changing model parameters.

Vinay Joshi,Pratik Prabhanjan Brahma,Zicheng Liu,Emad Barsoum
kv-cache · compression · decoding · DOI · DBLP
6
Skim · Long · ACL 2025

Binary Classifier Optimization for Large Language Model Alignment

The core problem: many alignment methods ultimately rest on a binary judgment — which answer is better, should it be preferred, is it safely acceptable — yet this binary classifier is often trained unstably and may not match the final alignment objective. The authors directly optimize this binary classifier, improving the quality of the alignment signal at its source.

Seungjae Jung,Gunsoo Han,Daniel Wontae Nam,Kyoung-Woon On
alignment · binary-classifier · reward-model · Anthology · DBLP
6
Skim · Long · ACL 2025

Diversity Explains Inference Scaling Laws: Through a Case Study of Minimum Bayes Risk Decoding

Hidetaka Kamigaito,Hiroyuki Deguchi,Yusuke Sakai,Katsuhiko Hayashi,Taro Watanabe
inference-scaling · mbr-decoding · diversity · Anthology · DBLP
6
Skim · Short · ACL 2025

State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models

Wonjun Kang,Kevin Galim,Yuchen Zeng,Minjae Lee,Hyung Il Koo,Nam Ik Cho
ssm · peft · fine-tuning · DOI · DBLP
6
Skim · Findings · ACL 2025

How Programming Concepts and Neurons Are Shared in Code Language Models

Amir Hossein Kargaran,Yihong Liu,François Yvon,Hinrich Schütze
code-lm · neuron-sharing · interpretability · Anthology · DBLP
6
Skim · Long · ACL 2025

SDPO: Segment-Level Direct Preference Optimization for Social Agents

Aobo Kong,Wentao Ma,Shiwan Zhao,Yongbin Li,Yuchuan Wu,Ke Wang,Xiaoqian Liu,Qicheng Li,Yong Qin,Fei Huang
dpo · preference-learning · alignment · Anthology · DBLP
6
Skim · Long · ACL 2025

CoT-ICL Lab: A Synthetic Framework for Studying Chain-of-Thought Learning from In-Context Demonstrations

Vignesh Kothapalli,Hamed Firooz,Maziar Sanjabi
cot · icl · synthetic-data · Anthology · DBLP
6
Skim · Industry · ACL 2025

Think Again! The Effect of Test-Time Compute on Preferences, Opinions, and Beliefs of Large Language Models

George Kour,Itay Nakash,Michal Shmueli-Scheuer,Ateret Anaby-Tavor
test-time-compute · preferences · reasoning · DOI · DBLP
6
Skim · Findings · ACL 2025

Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation

Tharindu Kumarage,Ninareh Mehrabi,Anil Ramakrishna,Xinyan Zhao,Richard S. Zemel,Kai-Wei Chang,Aram Galstyan,Rahul Gupta,Charith Peris
safety · cot · synthetic-data · Anthology · DBLP
6
Skim · Long · ACL 2025

Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity

Yuri Kuratov,Mikhail Arkhipov,Aydar Bulatov,Mikhail Burtsev
representation · compression · embedding · Anthology · DBLP
6
Skim · Long · ACL 2025

"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization

Eldar Kurtic,Alexandre Noll Marques,Shubhra Pandit,Mark Kurtz,Dan Alistarh
quantization · bf16 · inference · Anthology · DBLP
6
Skim · Demo · ACL 2025

Textagon: Boosting Language Models with Theory-guided Parallel Representations

John P. Lalor,Ruiyang Qin,David G. Dobolyi,Ahmed Abbasi
representationarchitectureparallelismDOIDBLP
6
泛读FindingsACL 2025

AVG-LLaVA: An Efficient Large Multimodal Model with Adaptive Visual Granularity

Zhibin Lan,Liqiang Niu,Fandong Meng,Wenbo Li,Jie Zhou,Jinsong Su
lmmefficientvision-encoderAnthologyDBLP
6
泛读FindingsACL 2025

Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis

Yicheng Lang,Kehan Guo,Yue Huang,Yujun Zhou,Haomin Zhuang,Tianyu Yang,Yao Su,Xiangliang Zhang
unlearningevaluationcognitive-diagnosisAnthologyDBLP
6
泛读FindingsACL 2025

Sparse Rewards Can Self-Train Dialogue Agents

Barrett Martin Lattimer,Varun Prashant Gangal,Ryan McDonald,Yi Yang
rlsparse-rewardself-trainingAnthologyDBLP
6
泛读LongACL 2025

SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation

Nguyen-Khang Le,Truong Dinh Do,Le-Minh Nguyen
speculative-decodinginferencellm-servingAnthologyDBLP
6
泛读FindingsACL 2025

AMXFP4: Taming Activation Outliers with Asymmetric Microscaling Floating-Point for 4-bit LLM Inference

This work addresses the problem that, in 4-bit LLM inference, activation outliers sharply degrade quantization accuracy, and existing FP4 or symmetric-scaling formats struggle to cover both range and resolution at once. This has typically been sidestepped with higher bit-widths, per-layer fallback, or more elaborate calibration, all of which eat into the bandwidth and throughput gains that motivated 4-bit inference in the first place.

Janghwan Lee,Jiwoong Park,Jinseok Kim,Yongjik Kim,Jungju Oh,Jinwook Oh,Jungwook Choi
quantizationfp4inferenceAnthologyDBLP
6
泛读LongACL 2025

SEAL: Scaling to Emphasize Attention for Long-Context Retrieval

This work tackles an old problem in long-context retrieval: even when the relevant information is visible to the model, it does not necessarily receive enough attention. Most existing long-context methods extend the window, modify positional encodings, or add retrieval augmentation, but if the attention distribution itself is diluted by irrelevant tokens, merely lengthening the context does not necessarily improve actual retrieval ability.

Changhun Lee,Minsang Seok,Jungyu Jin,Younghyun Cho,Eunhyeok Park
long-contextattention-scalingretrievalAnthologyDBLP
6
泛读LongACL 2025

Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region

This work explains a very concrete and important phenomenon: why safety-aligned LLMs can still be jailbroken, or fail after slight rephrasing. The core claim is already in the title: safety mechanisms tend to be anchored in the template region, i.e., alignment relies on specific refusal templates, surface phrasing, or localized signals rather than a robust refusal learned across the broader semantic space.

Chak Tou Leong,Qingyu Yin,Jian Wang,Wenjie Li
safetyalignmenttemplate-regionAnthologyDBLP
6
泛读LongACL 2025

Design Choices for Extending the Context Length of Visual Language Models

Vision-language models (VLMs) face both efficiency and effectiveness challenges when handling long contexts, yet there has been no systematic analysis of the design choices involved: the trade-offs across positional-encoding extrapolation, attention mechanisms, visual token compression, and other dimensions remain unclear.

Mukai Li,Lei Li,Shansan Gong,Qi Liu
vlmlong-contextdesign-choicesAnthologyDBLP
6
泛读LongACL 2025

Multi-Modality Expansion and Retention for LLMs through Parameter Merging and Decoupling

When extending an existing text LLM to new modalities (e.g., vision, audio), the common approach is joint fine-tuning, which degrades the original language abilities (catastrophic forgetting). Expanding modalities while retaining language capability is a core challenge for multimodal LLMs.

Junlin Li,Guodong Du,Jing Li,Sim Kuan Goh,Wenya Wang,Yequan Wang ... 1 author omitted ... ,Ho-Kin Tang,Saleh Alharbi,Daojing He,Min Zhang
multimodal-expansionparameter-mergingcontinual-pretrainAnthologyDBLP
6
泛读LongACL 2025

MadaKV: Adaptive Modality-Perception KV Cache Eviction for Efficient Multimodal Long-Context Inference

During long-context inference, multimodal LLMs spend substantial memory on the KV cache, and existing eviction policies do not distinguish the differing importance of modalities (text vs. visual tokens), yielding suboptimal eviction decisions.

Kunxi Li,Zhonghua Jiang,Zhouzhou Shen,Zhaode Wang,Chengfei Lv,Shengyu Zhang,Fan Wu,Fei Wu
kv-cachemultimodallong-contextAnthologyDBLP
6
泛读FindingsACL 2025

CARE-STaR: Constraint-aware Self-taught Reasoner

Self-improvement methods for LLMs work well on reasoning tasks but often ignore constraints: real-world reasoning must not only be correct but also satisfy specific constraints (format, safety, logical consistency, etc.). Existing self-taught reasoning methods do not handle constraints explicitly.

Zhiliang Li,Bo Tang,Yijun Niu,Beihong Jin,Qiwen Shi,Yuchen Feng,Zhiyu Li,Jie Hu,Mingchuan Yang,Feiyu Xiong
self-trainingreasoningconstraintsAnthologyDBLP
6
泛读LongACL 2025

Taming LLMs with Gradient Grouping

A gradient-update efficiency problem in LLM training: gradient characteristics differ widely across parameter groups, so uniform optimization strategies (a global learning rate, shared Adam hyperparameters) may be suboptimal. Layer-wise learning rates exist, but there has been no systematic framework for gradient-grouped optimization.

Siyuan Li,Juanxi Tian,Zedong Wang,Xin Jin,Zicheng Liu,Wentao Zhang,Dan Xu
fine-tuninggradientsalignmentAnthologyDBLP
6
泛读FindingsACL 2025

C²LEVA: Toward Comprehensive and Contamination-Free Language Model Evaluation

Yanyang Li,Tin Long Wong,Cheung To Hung,Jianqiao Zhao,Duo Zheng,Ka Wai Liu,Michael R. Lyu,Liwei Wang
evaluationbenchmarkcontaminationAnthologyDBLP
6
泛读LongACL 2025

Decoding Knowledge Attribution in Mixture-of-Experts: A Framework of Basic-Refinement Collaboration and Efficiency Analysis

Junzhuo Li,Bo Wang,Xiuze Zhou,Peijie Jiang,Jia Liu,Xuming Hu
moeknowledgeroutingAnthologyDBLP
6
泛读FindingsACL 2025

Small Models Struggle to Learn from Strong Reasoners

Yuetai Li,Xiang Yue,Zhangchen Xu,Fengqing Jiang,Luyao Niu,Bill Yuchen Lin,Bhaskar Ramasubramanian,Radha Poovendran
distillationsmall-language-modelreasoningAnthologyDBLP
6
泛读FindingsACL 2025

Large Language Models are Miscalibrated In-Context Learners

Chengzu Li,Han Zhou,Goran Glavas,Anna Korhonen,Ivan Vulic
in-context-learningcalibrationevaluationAnthologyDBLP
6
泛读FindingsACL 2025

RedundancyLens: Revealing and Exploiting Visual Token Processing Redundancy for Efficient Decoder-Only MLLMs

Hongliang Li,Jiaxin Zhang,Wenhui Liao,Dezhi Peng,Kai Ding,Lianwen Jin
mllmvisual-tokensredundancyAnthologyDBLP
6
泛读LongACL 2025

FocusLLM: Precise Understanding of Long Context by Dynamic Condensing

Zhenyu Li,Yike Zhang,Tengyu Pan,Yutao Sun,Zhichao Duan,Junjie Fang,Rong Han,Zixuan Wang,Jianyong Wang
long-contextcontext-compressioninferenceAnthologyDBLP
6
泛读LongACL 2025

Gradient-Adaptive Policy Optimization: Towards Multi-Objective Alignment of Large Language Models

This paper addresses the fact that existing alignment methods typically collapse multiple objectives into a single reward or fixed weights, making it hard for LLMs to stably balance helpfulness, safety, stylistic consistency, and other goals. The problem matters because multi-objective conflict is the norm in real deployments, not the exception.

Chengao Li,Hanyu Zhang,Yunkun Xu,Hongyan Xue,Xiang Ao,Qing He
rlhfmulti-objectivealignmentAnthologyDBLP
6
泛读LongACL 2025

Generative Reward Modeling via Synthetic Criteria Preference Learning

This paper addresses the scarcity and instability of training criteria for reward models: human preference data is expensive, the criteria are ambiguous, and a single overall preference label rarely tells the model why one response is better. Much prior reward modeling learns directly from chosen vs. rejected pairs, but that supervision is too compressed, limiting both generalization and controllability.

Xiaobo Liang,Haoke Zhang,Juntao Li,Kehai Chen,Qiaoming Zhu,Min Zhang
reward-modelsynthetic-datapreference-learningAnthologyDBLP
6
泛读FindingsACL 2025

Investigating Inference-time Scaling for Chain of Multi-modal Thought: A Preliminary Study

This paper studies inference-time scaling for multimodal chain-of-thought: without changing training, does adding test-time compute improve multimodal reasoning as reliably as it does for text? This was often assumed, but errors in multimodal CoT can stem either from insufficient reasoning depth or from visual perception noise, and the latter is not necessarily fixable by more samples or longer chains of thought.

Yujie Lin,Ante Wang,Moye Chen,Jingyao Liu,Hao Liu,Jinsong Su,Xinyan Xiao
inference-time-scalingmultimodal-cotreasoningAnthologyDBLP
6
泛读LongACL 2025

Look Both Ways and No Sink: Converting LLMs into Text Encoders without Training

This paper addresses the fact that decoder-only LLMs excel at generation but are not good text encoders by default; obtaining strong embeddings usually requires extra contrastive learning or dual-encoder training. That adds training cost and undermines the convenience of directly reusing an off-the-shelf LLM.

Ziyong Lin,Haoyi Wu,Shu Wang,Kewei Tu,Zilong Zheng,Zixia Jia
text-encoderattention-sinkbidirectionalAnthologyDBLP
6
泛读OutstandingLongACL 2025

Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory

Yexiang Liu,Zekun Li,Zhi Fang,Nan Xu,Ran He,Tieniu Tan
inference-time-scalingpromptingprobability-theoryAnthologyDBLP
6
泛读FindingsACL 2025

Beyond In-Context Learning: Aligning Long-form Generation of Large Language Models via Task-Inherent Attribute Guidelines

Do Xuan Long,Duong Ngoc Yen,Do Xuan Trong,Anh Tuan Luu,Kenji Kawaguchi,Shafiq Joty,Min-Yen Kan,Nancy F. Chen
alignmentlong-form-generationattribute-guidelinesAnthologyDBLP
6
泛读LongACL 2025

Learning to Generate Structured Output with Schema Reinforcement Learning

Yaxi Lu,Haolun Li,Xin Cong,Zhong Zhang,Yesai Wu,Yankai Lin,Zhiyuan Liu,Fangming Liu,Maosong Sun
reinforcement-learningstructured-outputschemaAnthologyDBLP
6
泛读LongACL 2025

SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings

Weikai Lu,Hao Peng,Huiping Zhuang,Cen Chen,Ziqian Zeng
mllmsafety-alignmentlow-resourceAnthologyDBLP
6
泛读LongACL 2025

Controlled Low-Rank Adaptation with Subspace Regularization for Continued Training on Large Language Models

Yuheng Lu,Bingshuo Qian,Caixia Yuan,Huixing Jiang,Xiaojie Wang
continued-trainingloraregularizationAnthologyDBLP
6
泛读DemoACL 2025

AutoAlign: Get Your LLM Aligned with Minimal Annotations

Xinyu Lu,Dong Xu,Chunkang Zhang,Xinyan Guan,Junxiang Wang,Qingyu Zhang ... 5 authors omitted ... ,Yaojie Lu,Hongyu Lin,Le Sun,Xianpei Han
alignmentminimal-supervisionannotation-efficiencyDOIDBLP
6
泛读FindingsACL 2025

A Bounding Box is Worth One Token - Interleaving Layout and Text in a Large Language Model for Document Understanding

Jinghui Lu,Haiyang Yu,Yanjie Wang,Yongjie Ye,Jingqun Tang,Ziwei Yang ... 2 authors omitted ... ,Hao Feng,Han Wang,Hao Liu,Can Huang
document-understandinglayoutinterleaved-tokensAnthologyDBLP
6
泛读FindingsACL 2025

BOSE: A Systematic Evaluation Method Optimized for Base Models

Hongzhi Luan,Changxin Tian,Zhaoxin Huan,Xiaolu Zhang,Kunlong Chen,Zhiqiang Zhang,Jun Zhou
base-modelevaluationbenchmarkAnthologyDBLP
6
泛读FindingsACL 2025

Let's Be Self-generated via Step by Step: A Curriculum Learning Approach to Automated Reasoning with Large Language Models

Kangyang Luo,Zichen Ding,Zhenmin Weng,Lingfeng Qiao,Meng Zhao,Xiang Li,Di Yin,Jinlong Shu
curriculum-learningself-generationreasoningAnthologyDBLP
6
泛读FindingsACL 2025

DiffSkip: Differential Layer Skipping in Large Language Models

This work asks how to make layer skipping during LLM inference finer-grained and more stable without retraining, rather than relying on crude fixed pruning or uniform skipping. Existing approaches statically prune by layer importance, which saves compute, but different tokens, positions, and generation stages depend on layers differently, so fixed policies often degrade quickly on long generations or hard examples.
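
The adaptive-depth idea can be sketched without any training. Below is a minimal numpy illustration, a coarse cousin of per-token layer skipping rather than the paper's DiffSkip method: a residual stack exits early for a token once the relative change its layers make falls below a threshold. The toy `layer` function, weight scales, and threshold `tau` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(h, W):
    """One toy residual 'layer': h + tanh(h @ W)."""
    return h + np.tanh(h @ W)

def forward_with_early_exit(h, weights, tau=0.2):
    """Run a residual stack; once a layer's relative update falls
    below `tau`, skip the remaining layers for this token."""
    executed = []
    for i, W in enumerate(weights):
        h_new = layer(h, W)
        rel_update = np.linalg.norm(h_new - h) / (np.linalg.norm(h) + 1e-9)
        h = h_new
        executed.append(i)
        if rel_update < tau:
            break  # representation has stabilized: skip deeper layers
    return h, executed

weights = [rng.normal(scale=0.02, size=(16, 16)) for _ in range(8)]
h0 = rng.normal(size=16)
h, executed = forward_with_early_exit(h0, weights)
print(len(executed), "of", len(weights), "layers executed")
```

A real dynamic policy would decide per layer (not just exit once) and use a learned gate rather than a residual-norm heuristic, but the compute saving comes from the same place: most tokens do not need the full depth.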

Xuan Luo,Weizhi Wang,Xifeng Yan
layer-skippinginference-efficiencydynamic-computationAnthologyDBLP
6
泛读FindingsACL 2025

MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct

This work addresses the lack of high-quality, sustainably scalable instruction data for multimodal LLMs, especially data covering complex visual understanding and interaction. Hand-written templates or small amounts of careful human annotation are high quality but do not scale; purely automatic synthesis tends to be monotonous in pattern and shallow in task depth.

Run Luo,Haonan Zhang,Longze Chen,Ting-En Lin,Xiong Liu,Yuchuan Wu ... 7 authors omitted ... ,Hamid Alinejad-Rokny,Xiaobo Xia,Jingkuan Song,Fei Huang
multimodal-instructiondata-synthesisevol-instructAnthologyDBLP
6
泛读LongACL 2025

S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning

This work aims to make LLMs not just produce answers but also self-verify after generation and self-correct when necessary. Existing approaches rely on test-time sampling, reflection prompting, or a separate verifier, which are either expensive at inference time or leave verification outside the model's own capabilities.

Ruotian Ma,Peisong Wang,Cheng Liu,Xingyan Liu,Jiaqi Chen,Bang Zhang,Xin Zhou,Nan Du,Jia Li
self-verifyself-correctreinforcement-learningAnthologyDBLP
6
泛读LongACL 2025

CoT-Valve: Length-Compressible Chain-of-Thought Tuning

This work addresses the fact that chain-of-thought improves reasoning but is long and costly, with length and quality usually coupled and hard to compress. Existing methods either distill short CoT directly, losing information, or allow unconstrained long reasoning, which works well but has uncontrollable inference cost.

Xinyin Ma,Guangnian Wan,Runpeng Yu,Gongfan Fang,Xinchao Wang
chain-of-thoughtlength-compressionreasoningAnthologyDBLP
6
泛读LongACL 2025

Dynamic Scaling of Unit Tests for Code Reward Modeling

This work addresses the fact that code reward modeling depends heavily on unit tests, but a fixed-size test suite can be either too weak, making the reward unreliable, or too expensive, making training throughput too low. Many existing methods assume a static test set, so the reward signal either under-covers or overruns the budget.
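
The cost/reliability trade-off suggests scaling the test suite with problem difficulty. A minimal sketch, assuming a toy "square the input" task and a hypothetical `make_tests` generator (none of this is the paper's pipeline): harder problems get more tests, and the reward is the pass rate over the sampled suite.

```python
import random

random.seed(0)

def make_tests(n):
    """Hypothetical test generator: n random (input, expected) pairs
    for a toy 'square the input' task."""
    return [(x, x * x) for x in (random.randint(-10, 10) for _ in range(n))]

def reward(candidate, difficulty, base=4, max_tests=64):
    """Scale the test-suite size with estimated difficulty in [0, 1]:
    easy problems get a few tests, hard ones get many; the reward is
    the pass rate over the sampled suite."""
    n = min(max_tests, base + int(difficulty * (max_tests - base)))
    tests = make_tests(n)
    passed = sum(candidate(x) == y for x, y in tests)
    return passed / n, n

good = lambda x: x * x
buggy = lambda x: x * x if x >= 0 else -x * x  # fails on negative inputs

r_good, n_hard = reward(good, difficulty=0.9)
r_buggy, _ = reward(buggy, difficulty=0.9)
_, n_easy = reward(good, difficulty=0.1)
print(r_good, r_buggy, n_easy, n_hard)
```

With only a handful of tests the buggy candidate might pass by luck; scaling the suite on hard problems makes the reward discriminative where it matters, without paying that cost everywhere.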

Zeyao Ma,Xiaokang Zhang,Jing Zhang,Jifan Yu,Sijia Luo,Jie Tang
reward-modelunit-testcode-generationAnthologyDBLP
6
泛读LongACL 2025

Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models

Philipp Mondorf,Sondre Wold,Barbara Plank
interpretabilitytransformercircuitsAnthologyDBLP
6
泛读FindingsACL 2025

Structured Pruning for Diverse Best-of-N Reasoning Optimization

Hieu Trung Nguyen,Bao Nguyen,Viet Anh Nguyen
best-of-nreasoningpruningAnthologyDBLP
6
泛读FindingsACL 2025

Beyond Semantic Entropy: Boosting LLM Uncertainty Quantification with Pairwise Semantic Similarity

Dang Nguyen,Ali Payani,Baharan Mirzasoleiman
uncertaintycalibrationsemantic-entropyAnthologyDBLP
6
泛读LongACL 2025

Multi-Attribute Steering of Language Models via Targeted Intervention

Duy Nguyen,Archiki Prasad,Elias Stengel-Eskin,Mohit Bansal
steeringinterventioncontrollabilityAnthologyDBLP
6
泛读LongACL 2025

Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception

Shiyu Ni,Keping Bi,Jiafeng Guo,Lulu Yu,Baolong Bi,Xueqi Cheng
internal-statesknowledge-boundaryuncertaintyAnthologyDBLP
6
泛读LongACL 2025

Prediction Hubs are Context-Informed Frequent Tokens in LLMs

Beatrix Miranda Ginn Nielsen,Iuri Macocco,Marco Baroni
prediction-hubstoken-frequencyinterpretabilityAnthologyDBLP
6
泛读OutstandingLongACL 2025

Llama See, Llama Do: A Mechanistic Perspective on Contextual Entrainment and Distraction in LLMs

Jingcheng Niu,Xingdi Yuan,Tong Wang,Hamidreza Saghir,Amir H. Abdi
mechanistic-interpretabilitycontextual-entrainmentin-context-learningAnthologyDBLP
6
泛读LongACL 2025

The Impact of Token Granularity on the Predictive Power of Language Model Surprisal

This work studies a basic but often overlooked issue: token granularity substantially changes both the values and the explanatory power of surprisal, so the predictive power of surprisal from different language models cannot be compared directly. Much cognitive modeling and linguistics work treats LM surprisal as a uniform signal, but if one model scores by word and another by subword or character, apparent "differences in predictive power" absorb large tokenization artifacts.
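
The tokenization confound has a simple arithmetic core: surprisal is additive under the chain rule, so a word's surprisal under a subword model is the sum over its pieces. A toy sketch with made-up probabilities (chosen so the subword probabilities factor the word probability exactly) shows why only word-level aggregates are comparable across tokenizers:

```python
import math

# Toy conditional probabilities from two hypothetical models of
# "unhappiness": a word-level model scores the whole word; a subword
# model scores the pieces. (0.05 * 0.04 * 0.5 == 0.001 by construction.)
p_word = {"unhappiness": 0.001}
p_subword = {"un": 0.05, "happi": 0.04, "ness": 0.5}

def surprisal(p):
    return -math.log2(p)

word_level = surprisal(p_word["unhappiness"])
# Chain rule: the word's surprisal under the subword model is the SUM
# of its pieces' surprisals, so per-token surprisals are not comparable
# across tokenizers -- only the word-level aggregate is.
subword_sum = sum(surprisal(p) for p in p_subword.values())
print(round(word_level, 2), round(subword_sum, 2))
```

Comparing the subword model's per-token surprisal (at most ~4.6 bits here) against the word model's ~10 bits would make the subword model look far more confident than it is; aggregating to words removes that artifact.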

Byung-Doh Oh,William Schuler
tokenizersurprisalpsycholinguisticsAnthologyDBLP
6
泛读FindingsACL 2025

The Inverse Scaling Effect of Pre-Trained Language Model Surprisal Is Not Due to Data Leakage

This work directly addresses a controversy: the inverse scaling of pre-trained language model surprisal cannot simply be attributed to data leakage. One intuitive explanation was that larger models saw more text resembling the test distribution during training, making the surprisal anomaly a leakage artifact; this paper tests and rejects that explanation.

Byung-Doh Oh,Hongao Zhu,William Schuler
scalingsurprisaldata-leakageAnthologyDBLP
6
泛读FindingsACL 2025

ClozeMath: Improving Mathematical Reasoning in Language Models by Learning to Fill Equations

Quang Hieu Pham,Thuy Duong Nguyen,Tung Pham,Anh Tuan Luu,Dat Quoc Nguyen
reasoningmathmasked-lmAnthologyDBLP
6
泛读OutstandingLongACL 2025

LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models

Hugo Pitorro,Marcos Vinícius Treviso
mambainterpretabilitysequence-modelingAnthologyDBLP
6
泛读LongACL 2025

Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs

Haritz Puerto,Tilek Chubakov,Xiaodan Zhu,Harish Tayyar Madabushi,Iryna Gurevych
cotreasoningfine-tuningAnthologyDBLP
6
泛读LongACL 2025

Boosting Long-Context Information Seeking via Query-Guided Activation Refilling

Hongjin Qian,Zheng Liu,Peitian Zhang,Zhicheng Dou,Defu Lian
long-contextactivationretrievalAnthologyDBLP
6
泛读LongACL 2025

Beyond Output Matching: Bidirectional Alignment for Enhanced In-Context Learning

Chengwei Qin,Wenhan Xia,Fangkai Jiao,Chen Chen,Yuchen Hu,Bosheng Ding,Ruirui Chen,Shafiq Joty
iclalignmentin-context-learningAnthologyDBLP
6
泛读ShortACL 2025

Learning Auxiliary Tasks Improves Reference-Free Hallucination Detection in Open-Domain Long-Form Generation

Chengwei Qin,Wenxuan Zhou,Karthik Abinav Sankararaman,Nanshu Wang,Tengyu Xu,Alexander Radovic ... 3 authors omitted ... ,Sinong Wang,Shafiq Joty,Han Fang,Hao Ma
hallucinationauxiliary-losslong-formDOIDBLP
6
泛读FindingsACL 2025

Eliciting In-context Retrieval and Reasoning for Long-context Large Language Models

Yifu Qiu,Varun R. Embar,Yizhe Zhang,Navdeep Jaitly,Shay B. Cohen,Benjamin Han
long-contextin-context-retrievalreasoningAnthologyDBLP
6
泛读FindingsACL 2025

Reward Generalization in RLHF: A Topological Perspective

Tianyi Alex Qiu,Fanzhi Zeng,Jiaming Ji,Dong Yan,Kaile Wang,Jiayi Zhou,Yang Han,Josef Dai,Xuehai Pan,Yaodong Yang
rlhfreward-modelgeneralizationAnthologyDBLP
6
泛读LongACL 2025

Cooperative or Competitive? Understanding the Interaction between Attention Heads From A Game Theory Perspective

Xiaoye Qu,Zengqi Yu,Dongrui Liu,Wei Wei,Daizong Liu,Jianfeng Dong,Yu Cheng
attention-headsgame-theoryinterpretabilityAnthologyDBLP
6
泛读LongACL 2025

PIC: Unlocking Long-Form Text Generation Capabilities of Large Language Models via Position ID Compression

Haoran Que,Wenge Rong
long-contextposition-encodingtext-generationAnthologyDBLP
6
泛读LongACL 2025

Balancing the Budget: Understanding Trade-offs Between Supervised and Preference-Based Finetuning

Mohit Raghavendra,Junmo Kang,Alan Ritter
sftpreference-trainingtrade-offAnthologyDBLP
6
泛读FindingsACL 2025

Fact Recall, Heuristics or Pure Guesswork? Precise Interpretations of Language Models for Fact Completion

Denitsa Saynova,Lovisa Hagström,Moa Johansson,Richard Johansson,Marco Kuhlmann
factualityinterpretabilityknowledgeAnthologyDBLP
6
泛读SRWACL 2025

Only for the Unseen Languages, Say the Llamas: On the Efficacy of Language Adapters for Cross-lingual Transfer in English-centric LLMs

Julian Schlenker,Jenny Kunz,Tatiana Anikina,Günter Neumann,Simon Ostermann
cross-linguallanguage-adaptermultilingualDOIDBLP
6
泛读LongACL 2025

SAKE: Steering Activations for Knowledge Editing

This work addresses the fact that knowledge editing usually relies on parameter updates, which are costly and prone to side effects, while much factual behavior may already be encoded in activations in some form; the question becomes whether knowledge can be rewritten at inference time by "nudging" activations directly. The mainstream approaches, weight-editing methods like ROME and MEMIT, have the advantage of persistence but suffer from difficult localization and noticeable interference; if activation steering works, it offers a much lighter-weight alternative.

Marco Scialanga,Thibault Laugel,Vincent Grari,Marcin Detyniecki
knowledge-editingactivation-steeringmodel-editingAnthologyDBLP
6
泛读ShortACL 2025

FEAT: A Preference Feedback Dataset through a Cost-Effective Auto-Generation and Labeling Framework for English AI Tutoring

This work addresses the high cost of preference data for English AI tutoring: the pipeline of manually writing candidate responses and collecting human preference labels is expensive and narrow in coverage, leaving preference optimization in educational settings chronically data-starved. Much prior work simply transfers generic preference data, but tutoring feedback imposes stronger requirements on correctness, scaffolding, and step-by-step explanation, so generic chat preferences are not equivalent.

Hyein Seo,Taewook Hwang,Yohan Lee,Sangkeun Jung
preference-dataalignmentsynthetic-dataDOIDBLP
6
泛读LongACL 2025

Efficient Long Context Language Model Retrieval with Compression

This work addresses the fact that long-context retrieval is increasingly important for language models, but retaining the full context representation incurs high memory and latency, and system cost quickly balloons once retrieval has to interact with generation. Common solutions either truncate crudely or offload to an external vector retriever, and both tend to lose access to token-level detail.
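
The compress-then-retrieve idea can be illustrated with mean-pooled segment vectors standing in for a learned compressor (an assumption for illustration, not the paper's model): each 64-token chunk is reduced to 4 slot vectors, and retrieval scores a query against the slots instead of all tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 32

def embed_tokens(n):
    """Stand-in for token embeddings of an n-token chunk."""
    return rng.normal(size=(n, EMB))

def compress(chunk_emb, keep=4):
    """Compress a chunk's token embeddings into `keep` slot vectors by
    pooling consecutive segments -- a crude stand-in for learned
    compression."""
    segs = np.array_split(chunk_emb, keep)
    return np.stack([s.mean(axis=0) for s in segs])

def score(query, compressed):
    """Max cosine similarity between the query and a chunk's slots."""
    q = query / np.linalg.norm(query)
    c = compressed / np.linalg.norm(compressed, axis=1, keepdims=True)
    return float((c @ q).max())

chunks = [embed_tokens(64) for _ in range(5)]
query = rng.normal(size=EMB)
chunks[3][10:20] += 5.0 * query  # plant the query's direction in chunk 3

compressed = [compress(c) for c in chunks]  # 16x fewer vectors per chunk
best = int(np.argmax([score(query, c) for c in compressed]))
print(best)
```

Pooling keeps retrieval cheap but blurs token-level detail within a segment, which is exactly the tension a learned compressor has to manage.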

Minju Seo,Jinheon Baek,Seongyun Lee,Sung Ju Hwang
long-contextretrievalcompressionAnthologyDBLP
6
泛读LongACL 2025

Are the Hidden States Hiding Something? Testing the Limits of Factuality-Encoding Capabilities in LLMs

This work asks how much factual information the hidden states of LLMs actually encode, whether that information is stable and linearly readable, and whether it can support factuality judgments. Much prior analysis assumes that "if a probe can read it out, the model genuinely has a factual representation," but that inference is shaky: the probe may be doing extra fitting of its own, or merely capturing prompt-template bias.

Giovanni Servedio,Alessandro De Bellis,Dario Di Palma,Vito Walter Anelli,Tommaso Di Noia
hidden-statesfactualityprobingAnthologyDBLP
6
泛读LongACL 2025

Acquisition and Application of Novel Knowledge in Large Language Models

This work addresses how LLMs acquire new knowledge versus how they actually apply it in reasoning and generation, which are not the same thing. Many models can be "exposed to" new information through fine-tuning or retrieval augmentation, but that does not mean they have stably absorbed the knowledge, can generalize it across prompts, or can reliably invoke it in multi-hop reasoning.

Ziyu Shang,Jianghan Liu,Zhizhao Luo,Peng Wang,Wenjun Ke,Jiajun Liu,Zijie Xu,Guozheng Li
knowledge-acquisitionknowledge-updategeneralizationAnthologyDBLP
6
泛读FindingsACL 2025

MiniKV: Pushing the Limits of 2-Bit KV Cache via Compression and System Co-Design for Efficient Long Context Inference

This work addresses the fact that KV-cache memory is one of the main bottlenecks of long-context inference, yet compressing KV to 2 bits usually hurts accuracy so badly that the theoretical memory savings are unusable in practice. KV-cache quantization is not new, but at 2 bits both error control and system throughput are typically poor, so the real difficulty is not "can it be quantized" but "can it stay stable at sufficiently low bit-widths."
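
The basic mechanics of low-bit KV quantization can be sketched independently of the paper's system co-design: per-group asymmetric 2-bit quantization stores one code in {0..3} per value plus a (scale, zero-point) pair per group. The group size and tensor shapes below are illustrative assumptions.

```python
import numpy as np

def quant2bit(x, group=32):
    """Per-group asymmetric 2-bit quantization: each run of `group`
    values shares a (scale, zero-point) pair; codes lie in {0,1,2,3}."""
    x = x.reshape(-1, group)
    lo = x.min(axis=1, keepdims=True)
    hi = x.max(axis=1, keepdims=True)
    scale = (hi - lo) / 3.0 + 1e-9  # 3 = number of quantization steps
    codes = np.clip(np.round((x - lo) / scale), 0, 3).astype(np.uint8)
    return codes, scale, lo

def dequant2bit(codes, scale, lo):
    return codes * scale + lo

rng = np.random.default_rng(0)
kv = rng.normal(size=(4, 128)).astype(np.float32)  # toy K (or V) tensor
codes, scale, lo = quant2bit(kv)
kv_hat = dequant2bit(codes, scale, lo).reshape(kv.shape)

# 2-bit codes cost 16x less than float32, ignoring per-group params.
err = float(np.abs(kv - kv_hat).mean())
print(codes.max(), round(err, 3))
```

The reconstruction error scales with each group's dynamic range divided by three steps, which is why outlier handling and grouping strategy (not the rounding itself) dominate quality at 2 bits.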

Akshat Sharma,Hangliang Ding,Jianping Li,Neel Dani,Minjia Zhang
kv-cachequantizationlong-contextAnthologyDBLP
6
泛读ACL 2025

Uncertainty Quantification for Large Language Models

Artem Shelmanov,Maxim Panov,Roman Vashurin,Artem Vazhentsev,Ekaterina Fadeeva,Timothy Baldwin
uncertaintycalibrationevaluationDOIDBLP
6
泛读LongACL 2025

LLM Braces: Straightening Out LLM Predictions with Relevant Sub-Updates

Ying Shen,Lifu Huang
prediction-correctionmodel-editingfactualityAnthologyDBLP
6
泛读LongACL 2025

How to Mitigate Overfitting in Weak-to-strong Generalization?

Junhao Shi,Qinyuan Cheng,Zhaoye Fei,Yining Zheng,Qipeng Guo,Xipeng Qiu
weak-to-stronggeneralizationoverfittingAnthologyDBLP
6
泛读LongACL 2025

Mutual-Taught for Co-adapting Policy and Reward Models

Tianyuan Shi,Canbin Huang,Fanqi Wan,Longguang Zhong,Ziyi Yang,Weizhou Shen,Xiaojun Quan,Ming Yan
reward-modelpolicy-optimizationco-trainingAnthologyDBLP
6
泛读LongACL 2025

Aligning Large Language Models to Follow Instructions and Hallucinate Less via Effective Data Filtering

Shuzheng Si,Haozhe Zhao,Gang Chen,Cheng Gao,Yuzhuo Bai,Zhitong Wang ... 2 authors omitted ... ,Chen Qian,Fanchao Qi,Baobao Chang,Maosong Sun
sftdata-filteringhallucinationAnthologyDBLP
6
泛读LongACL 2025

Linguistic Generalizability of Test-Time Scaling in Mathematical Reasoning

Guijin Son,Jiwoo Hong,Hyunwoo Ko,James Thorne
test-time-scalingmath-reasoningmultilingualAnthologyDBLP
6
泛读FindingsACL 2025

Achieving binary weight and activation for LLMs using Post-Training Quantization

Siqing Song,Chuang Wang,Rui-Qi Wang,Yi Yang,Xu-Yao Zhang
quantizationbinaryptqAnthologyDBLP
6
泛读FindingsACL 2025

Unveiling and Addressing Pseudo Forgetting in Large Language Models

This paper argues that a substantial share of what looks like "forgetting" in LLMs is not genuine parameter-level knowledge loss but "pseudo forgetting" at the retrieval or expression stage. Many forgetting evaluations implicitly equate failing to answer with the knowledge having been erased, thereby overestimating the side effects of unlearning, continual training, or alignment; this matters now because the community increasingly relies on behavioral tests to judge whether a model has truly forgotten something.

Huashan Sun,Yizhe Yang,Yinghao Li,Jiawei Li,Yang Gao
forgettingcontinual-learningtraining-dynamicsAnthologyDBLP
6
泛读FindingsACL 2025

Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation

This paper addresses the fact that most existing LLM safety alignment is reactive, defending only when directly prompted, and struggles to cover long-horizon, multi-step, indirectly accumulating risks. Safety training has typically modeled risk as single-turn input-output classification or refusal, which works for short instructions but is clearly insufficient for agentic use, plan generation, and multi-turn collaboration.

Chenkai Sun,Denghui Zhang,ChengXiang Zhai,Heng Ji
alignmentsafetysimulationAnthologyDBLP
6
泛读LongACL 2025

Enhancing Spoken Discourse Modeling in Language Models Using Gestural Cues

This paper addresses the fact that text-only or speech-only spoken discourse modeling misses gestures, a key class of nonverbal cues, leaving models with an impoverished picture of the structure and pragmatic intent of spoken interaction. Many language models approximate spoken discourse as "text with transcripts" or acoustics alone, which has inherent blind spots for everyday conversation, turn-taking, emphasis, reference, and stance.

Varsha Suresh,Muhammad Hamza Mughal,Christian Theobalt,Vera Demberg
spoken-dialoguegesturemultimodalDOIDBLP
6
泛读LongACL 2025

Neural Incompatibility: The Unbridgeable Gap of Cross-Scale Parametric Knowledge Transfer in Large Language Models

This paper's conclusion is a strong one: parametric knowledge transfer across LLM scales suffers from an incompatibility that is hard to bridge; one cannot simply expect the parametric knowledge a small model learned to be moved losslessly into a large model, or vice versa. Knowledge distillation, logit transfer, weight initialization, and editing transfer often assume that models at different scales share sufficiently similar representations and knowledge structure, a premise that may not hold.

Yuqiao Tan,Shizhu He,Kang Liu,Jun Zhao
knowledge-transferscalingdistillationAnthologyDBLP
6
泛读LongACL 2025

SpindleKV: A Novel KV Cache Reduction Method Balancing Both Shallow and Deep Layers

Zicong Tang,Luohe Shi,Zuchao Li,Baoyuan Qi,Guoming Liu,Lefei Zhang,Ping Wang
kv-cachelong-contextinferenceAnthologyDBLP
6
泛读LongACL 2025

L-CiteEval: A Suite for Evaluating Fidelity of Long-context Models

Zecheng Tang,Keyan Zhou,Juntao Li,Baibei Ji,Jianye Hou,Min Zhang
long-contextevaluationcitationAnthologyDBLP
6
泛读ShortACL 2025

Combining Domain and Alignment Vectors Provides Better Knowledge-Safety Trade-offs in LLMs

Megh Thakkar,Quentin Fournier,Matthew Riemer,Pin-Yu Chen,Amal Zouaq,Payel Das,Sarath Chandar
alignmentsafetymodel-editingDOIDBLP
6
泛读FindingsACL 2025

Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs

Runchu Tian,Yanghao Li,Yuepeng Fu,Siyang Deng,Qinyu Luo,Cheng Qian ... 3 authors omitted ... ,Yesai Wu,Yankai Lin,Huadong Wang,Xiaojiang Liu
long-contextbiasattentionAnthologyDBLP
6
泛读FindingsACL 2025

LLM as Effective Streaming Processor: Bridging Streaming-Batch Mismatches with Group Position Encoding

Junlong Tong,Jinlan Fu,Zixuan Lin,Yingqi Fan,Anhao Zhao,Hui Su,Xiaoyu Shen
streamingposition-encodinginferenceAnthologyDBLP
6
泛读FindingsACL 2025

Blessing of Multilinguality: A Systematic Analysis of Multilingual In-Context Learning

Yilei Tu,Andrew Xue,Freda Shi
multilingualin-context-learninganalysisAnthologyDBLP
6
泛读LongACL 2025

Logical forms complement probability in understanding language model (and human) performance

The core question this work answers: is the probability a model assigns to the correct answer enough to explain language model (and human) performance on language understanding tasks? The authors' answer is no; logical-form information captures structural differences the probability view misses. Many analyses assume high probability means genuine understanding, but on compositional generalization, ambiguity resolution, and longer reasoning chains, high-probability outputs are often correlational pattern matching rather than a grasp of the sentence's logical skeleton, so the question deserves re-examination.

Yixuan Wang,Freda Shi
logical-formsevaluationgeneralizationAnthologyDBLP
6
泛读LongACL 2025

Logic-Regularized Verifier Elicits Reasoning from LLMs

The core question of this work: how to make a verifier do more than score answers, and genuinely elicit or select more logically sound reasoning. Existing verifiers often learn only surface patterns correlated with outcomes; they can make preference judgments but impose little logical-consistency constraint on the reasoning chain, so they readily reward outputs that look like reasoning while being riddled with holes.

Xinyu Wang,Changzhi Sun,Lian Cheng,Yuanbin Wu,Dell Zhang,Xiaoling Wang,Xuelong Li
verifierreasoningalignmentAnthologyDBLP
6
泛读LongACL 2025

Tracing and Dissecting How LLMs Recall Factual Knowledge for Real World Questions

This work tackles a key question usually left in the black box: how do LLMs actually recall knowledge from their parameters when answering real-world factual questions? Final accuracy on factual QA shows only outcomes; it reveals neither which cues trigger the knowledge, nor which intermediate representational paths retrieve it, nor whether failures stem from retrieval misses, wrong retrieval, or correct evidence being rewritten during generation.

Yiqun Wang,Chaoqun Wan,Sile Hu,Yonggang Zhang,Xiang Tian,Yaowu Chen,Xu Shen,Jieping Ye
factual-recallknowledgeinterpretabilityAnthologyDBLP
6
泛读LongACL 2025

PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment

The core problem of this paper: the contrasting patterns in existing alignment data are too homogeneous, so models learn narrow preference boundaries rather than comprehensive behavioral constraints; PopAlign addresses this by diversifying contrasting patterns. Much preference alignment relies on fixed-template chosen/rejected pairs that cover limited error types, so models easily reduce "alignment" to a few surface refusals or safety phrasings.

Zekun Moore Wang,Shenzhi Wang,King Zhu,Jiaheng Liu,Ke Xu,Jie Fu,Wangchunshu Zhou,Wenhao Huang
alignmentpreference-learningdata-qualityAnthologyDBLP
6
泛读LongACL 2025

Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference

The core problem of this work: VLMs/MLLMs are expensive to train and run when visual tokens are numerous, and existing compression often relies on crude downsampling that discards key regions; the authors instead activate distributed visual-region representations inside the LLM and keep only the genuinely useful visual information. Vision-language models commonly feed all patches into the language side indiscriminately, which is computationally costly and noisy, making efficient visual token selection a persistent problem.

Siyuan Wang,Dianyi Wang,Chengxing Zhou,Zejun Li,Zhihao Fan,Xuanjing Huang,Zhongyu Wei
vlmvision-trainingefficiencyDOIDBLP
6
泛读FindingsACL 2025

Tag-Evol: Achieving Efficient Instruction Evolving via Tag Injection

Yixuan Wang,Shiqi Zhou,Chuanzhe Guo,Qingfu Zhu
instruction-datadata-synthesisalignmentAnthologyDBLP
6
泛读FindingsACL 2025

Tokenization is Sensitive to Language Variation

Anna Wegmann,Dong Nguyen,David Jurgens
tokenizerlanguage-variationrobustnessAnthologyDBLP
6
泛读LongACL 2025

Cheems: A Practical Guidance for Building and Evaluating Chinese Reward Models from Scratch

Xueru Wen,Jie Lou,Zichao Li,Yaojie Lu,XingYu,Yuqiu Ji ... 2 authors omitted ... ,Ben He,Xianpei Han,Le Sun,Debing Zhang
reward-modelchineserlhfAnthologyDBLP
6
泛读LongACL 2025

CORAL: Learning Consistent Representations across Multi-step Training with Lighter Speculative Drafter

Yepeng Weng,Dianwen Mei,Huishi Qiu,Xujie Chen,Li Liu,Jiang Tian,Zhongchao Shi
speculative-decodingrepresentation-alignmentmulti-step-trainingAnthologyDBLP
6
泛读FindingsACL 2025

TokenShapley: Token Level Context Attribution with Shapley Value

Yingtai Xiao,Yuqing Zhu,Sirat Samyoun,Wanrong Zhang,Jiachen T. Wang,Jian Du
interpretabilityattributioncontextAnthologyDBLP
6
泛读LongACL 2025

Efficient Many-Shot In-Context Learning with Dynamic Block-Sparse Attention

Emily Xiao,Chin-Jou Li,Yilin Zhang,Graham Neubig,Amanda Bertsch
in-context-learningblock-sparseattentionAnthologyDBLP
6
泛读ShortACL 2025

Mitigating Posterior Salience Attenuation in Long-Context LLMs with Positional Contrastive Decoding

During generation, long-context LLMs over-attend to information in later positions (posterior salience attenuation), so key information in earlier and middle positions gets ignored: the 'lost in the middle' problem as it manifests on the decoding side. Prior work mostly mitigates it through attention distributions or positional encodings, without correcting the bias directly at the decoding-strategy level.
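
A decoding-side correction of this kind can be sketched as a contrastive combination of two logit views, a full-context view and a recency-only view, amplifying what the full context adds beyond what recency alone predicts (an illustrative formulation, not necessarily the paper's exact one):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def positional_contrastive_decode(logits_full, logits_recent, alpha=1.0):
    """Boost tokens the full context supports beyond what a recency-only
    view already predicts; `alpha` controls the correction strength."""
    return logits_full + alpha * (logits_full - logits_recent)

# Toy vocab of 3 tokens. The answer token (index 0) is supported by
# early-context evidence; a distractor (index 2) dominates the recent view.
logits_full = np.array([2.0, 1.0, 2.2])    # distractor still slightly ahead
logits_recent = np.array([0.5, 1.0, 2.4])  # recency view loves the distractor

p_plain = softmax(logits_full)
p_contrast = softmax(positional_contrastive_decode(logits_full, logits_recent))
print(int(p_plain.argmax()), int(p_contrast.argmax()))  # -> 2 0
```

Plain decoding still picks the recency-favored distractor; subtracting the recency-only view flips the choice to the token that the early context, and only the early context, supports.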

Zikai Xiao,Ziyang Wang,Wen Ma,Yan Zhang,Wei Shen,WangYan WangYan,Luqi Gong,Zuozhu Liu
long-contextdecodingpositionDOIDBLP
6
泛读LongACL 2025

From Outcomes to Processes: Guiding PRM Learning from ORM for Inference-Time Alignment

Process reward models (PRMs) require expensive step-by-step annotation, while outcome reward models (ORMs) need only final-answer labels but provide no fine-grained process feedback. How can ORM signals guide PRM learning, cutting annotation cost while still obtaining process-level reasoning alignment at inference time?
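
One common way to derive process labels from outcome supervision is Monte Carlo: value each step prefix by the success rate of outcome-scored rollouts continued from it, and flag steps that drop that value. The sketch below uses a toy simulator with hand-set prefix qualities; it illustrates the signal itself, not the paper's method.

```python
import random

random.seed(0)

def rollout_success_rate(prefix_quality, n=1000):
    """Estimate a step prefix's value by completing it many times and
    scoring only the OUTCOME (final-answer correctness). The toy
    simulator succeeds with probability equal to the prefix quality."""
    return sum(random.random() < prefix_quality for _ in range(n)) / n

# A 3-step solution where step 2 introduces an error: the chance of
# reaching a correct final answer drops after that step.
prefix_quality = [0.9, 0.3, 0.3]  # ground truth of the toy simulator
values = [rollout_success_rate(q) for q in prefix_quality]

# Outcome-derived process labels: a step is "bad" if it lowers the
# estimated value relative to the previous prefix.
labels = ["ok"]
for prev, cur in zip(values, values[1:]):
    labels.append("bad" if cur < prev - 0.1 else "ok")
print(labels)
```

The per-step labels come entirely from outcome checks, which is the appeal: no human annotates steps, yet the erroneous step is localized by the value drop it causes.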

Bin Xie,Bingbing Xu,Yige Yuan,Shengmao Zhu,Huawei Shen
Institute of Computing Technology, Chinese Academy of Sciences
process-reward-modeloutcome-reward-modelalignmentAnthologyDBLP
6
泛读LongACL 2025

Revealing the Deceptiveness of Knowledge Editing: A Mechanistic Analysis of Superficial Editing

Knowledge editing methods (e.g., ROME, MEMIT) appear to successfully modify a model's factual answers, but the edit may be merely superficial: the model gives the correct answer under a specific prompt format, yet rephrasing the question reveals that the original knowledge was never truly replaced. This work analyzes that 'deceptiveness' at the mechanistic level.

Jiakuan Xie,Pengfei Cao,Yubo Chen,Kang Liu,Jun Zhao
Institute of Automation, Chinese Academy of Sciences
knowledge-editingmechanistic-interpretabilityrepresentationAnthologyDBLP
6
泛读FindingsACL 2025

Weak-to-Strong Honesty Alignment via Learning-to-Rank Supervision

Making LLMs honest (honesty alignment) faces the difficulty that a strong model's honest behavior is hard to guide with a weak model's supervision signal (the weak-to-strong setting). Traditional pointwise supervision is noisy for honesty annotation, and the weak supervisor's judgments are not reliable enough.

Yunfan Xie,Lixin Zou,Dan Luo,Min Tang,Chenliang Li
alignmenthonestylearning-to-rankAnthologyDBLP
6
泛读LongACL 2025

Genius: A Generalizable and Purely Unsupervised Self-Training Framework For Advanced Reasoning

Improving LLM reasoning usually relies on high-quality annotated data or distillation from stronger models; purely unsupervised self-training has had limited success on reasoning because models easily reinforce their own mistakes. How can a general self-training framework improve reasoning without any external annotation?

Fangzhi Xu,Hang Yan,Chang Ma,Haiteng Zhao,Qiushi Sun,Kanzhi Cheng,Junxian He,Jun Liu,Zhiyong Wu
self-trainingunsupervisedreasoningAnthologyDBLP
6
泛读LongACL 2025

RefreshKV: Updating Small KV Cache During Long-form Generation

Fangyuan Xu,Tanya Goyal,Eunsol Choi
kv-cachelong-contextgenerationAnthologyDBLP
6
泛读LongACL 2025

Extending LLM Context Window with Adaptive Grouped Positional Encoding: A Training-Free Method

Xinhao Xu,Jiaxin Li,Hui Chen,Zijia Lin,Jungong Han,Guiguang Ding
long-contextpositional-encodingtraining-freeAnthologyDBLP
6
泛读FindingsACL 2025

KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for Coding

Zhangchen Xu,Yang Liu,Yueqin Yin,Mingyuan Zhou,Radha Poovendran
synthetic-datacodingdata-qualityDOIDBLP
6
泛读FindingsACL 2025

SCOPE: Compress Mathematical Reasoning Steps for Efficient Automated Process Annotation

Huimin Xu,Xin Mao,Feng-Lin Li,Xiaobao Wu,Wang Chen,Wei Zhang,Anh Tuan Luu
process-supervisionreasoningannotationAnthologyDBLP
6
泛读LongACL 2025

Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models

Fangzhi Xu,Qiushi Sun,Kanzhi Cheng,Jun Liu,Yu Qiao,Zhiyong Wu
self-trainingneural-symbolicreasoningAnthologyDBLP
6
泛读LongACL 2025

Principled Understanding of Generalization for Generative Transformer Models in Arithmetic Reasoning Tasks

This work addresses the fact that the "generalization" of generative Transformers on arithmetic reasoning tasks is usually judged by empirical results alone, with no interpretable principles for distinguishing whether the model is learning an algorithm or memorizing templates, making it hard to iterate on training recipes and data design with any confidence.

Xingcheng Xu,Zibo Zhao,Haipeng Zhang,Yanqing Yang
generalizationtransformerarithmetic-reasoningAnthologyDBLP
6
泛读LongACL 2025

Anything Goes? A Crosslinguistic Study of (Im)possible Language Learning in LMs

This work addresses the possibility that the impression that language models "can learn any language" stems from evaluation bias: many benchmarks do not distinguish which linguistic phenomena are crosslinguistically learnable from those that are statistically near-unlearnable or require strong inductive biases.

Xiulin Yang,Tatsuya Aoyama,Yuekun Yao,Ethan Wilcox
language-learningcrosslinguallinguistic-universalsAnthologyDBLP
6
泛读FindingsACL 2025

100-LongBench: Are de facto Long-Context Benchmarks Literally Evaluating Long-Context Ability?

This work addresses the concern that many "long-context" benchmarks may not actually be testing long-context ability at all, but rather retrieval shortcuts, format memorization, or local cue matching, leading the community to overestimate or misjudge the gains from long-context training.

Wang Yang,Hongye Jin,Shaochen Zhong,Song Jiang,Qifan Wang,Vipin Chaudhary,Xiaotian Han
long-contextbenchmarkevaluationAnthologyDBLP
6
泛读FindingsACL 2025

Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?

This work addresses the concern that in multi-hop reasoning evaluations, models may "appear to reason" via dataset biases or retrieval shortcuts without genuinely integrating information over multiple steps in latent space.

Sohee Yang,Nora Kassner,Elena Gribovskaya,Sebastian Riedel,Mor Geva
multi-hop-reasoninglatent-reasoningshortcutsAnthologyDBLP
6
泛读LongACL 2025

Marco-o1 v2: Towards Widening The Distillation Bottleneck for Reasoning Models

This work addresses a bottleneck in distilling reasoning models: the student learns only the teacher's final answers or a small amount of CoT, so reasoning ability transfers incompletely and the student easily overfits to a particular expression style.

Huifeng Yin,Yu Zhao,Minghao Wu,Xuanfan Ni,Bo Zeng,Hao Wang ... 2 authors omitted ... ,Chenyang Lyu,Longyue Wang,Weihua Luo,Kaifu Zhang
distillationreasoningteacher-studentAnthologyDBLP
6
泛读FindingsACL 2025

Imagine to Hear: Auditory Knowledge Generation can be an Effective Assistant for Language Models

Suho Yoo,Hyunjong Ok,Jaeho Lee
audioknowledge-augmentationmultimodalAnthologyDBLP
6
泛读FindingsACL 2025

Code-Switching Curriculum Learning for Multilingual Transfer in LLMs

Haneul Yoo,Cheonbok Park,Sangdoo Yun,Alice Oh,Hwaran Lee
code-switchingcurriculum-learningmultilingualAnthologyDBLP
6
泛读LongACL 2025

Self-Error-Instruct: Generalizing from Errors for LLMs Mathematical Reasoning

Erxin Yu,Jing Li,Ming Liao,Qi Zhu,Boyang Xue,Minghui Xu,Baojun Wang,Lanqing Hong,Fei Mi,Lifeng Shang
instruction-synthesiserror-learningmath-reasoningAnthologyDBLP
6
泛读LongACL 2025

Reversal of Thought: Enhancing Large Language Models with Preference-Guided Reverse Reasoning Warm-up

Jiahao Yuan,Dehui Du,Hao Zhang,Zixiang Di,Usman Naseem
reasoningpreference-learningwarm-upAnthologyDBLP
6
泛读FindingsACL 2025

FastDraft: How to Train Your Draft

Ofir Zafrir,Igor Margulis,Dorin Shteyman,Shira Guskin,Guy Boudoukh
speculative-decodingdraft-modeldistillationAnthologyDBLP
6
泛读FindingsACL 2025

InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model

Yuhang Zang,Xiaoyi Dong,Pan Zhang,Yuhang Cao,Ziyu Liu,Shengyuan Ding ... 3 authors omitted ... ,Wenwei Zhang,Kai Chen,Dahua Lin,Jiaqi Wang
reward-modelmultimodalalignmentAnthologyDBLP
6
泛读FindingsACL 2025

COPR: Continual Human Preference Learning via Optimal Policy Regularization

Han Zhang,Lin Gui,Yu Lei,Yuanzhao Zhai,Yehong Zhang,Zhuo Zhang ... 2 authors omitted ... ,Yue Yu,Kam-Fai Wong,Bin Liang,Ruifeng Xu
continual-learningpreference-optimizationdpoAnthologyDBLP
6
泛读LongACL 2025

Enhancing Multimodal Continual Instruction Tuning with BranchLoRA

Duzhen Zhang,Yong Ren,Zhong-Zhi Li,Yahan Yu,Jiahua Dong,Chenxing Li,Zhilong Ji,Jinfeng Bai
multimodalcontinual-learninginstruction-tuningAnthologyDBLP
6
泛读LongACL 2025

ShifCon: Enhancing Non-Dominant Language Capabilities with a Shift-based Multilingual Contrastive Framework

Hengyuan Zhang,Chenming Shang,Sizhe Wang,Dongdong Zhang,Yiyao Yu,Feng Yao,Renliang Sun,Yujiu Yang,Furu Wei
multilingualcontrastive-learningdata-mixtureAnthologyDBLP
6
泛读FindingsACL 2025

Divide-Verify-Refine: Can LLMs Self-align with Complex Instructions?

Xianren Zhang,Xianfeng Tang,Hui Liu,Zongyu Wu,Qi He,Dongwon Lee,Suhang Wang
self-alignmentinstruction-followingverificationAnthologyDBLP
6
泛读FindingsACL 2025

Diversification Catalyzes Language Models' Instruction Generalization To Unseen Semantics

Dylan Zhang,Justin Wang,François Charton
instruction-tuningdata-diversitygeneralizationAnthologyDBLP
6
泛读LongACL 2025

Attention Entropy is a Key Factor: An Analysis of Parallel Context Encoding with Full-attention-based Pre-trained Language Models

Zhisong Zhang,Yan Wang,Xinting Huang,Tianqing Fang,Hongming Zhang,Chenlong Deng,Shuaiyi Li,Dong Yu
attentionlong-contextparallel-encodingAnthologyDBLP
6
泛读LongACL 2025

Understanding the Dark Side of LLMs' Intrinsic Self-Correction

Qingjie Zhang,Di Wang,Haoting Qian,Yiming Li,Tianwei Zhang,Minlie Huang,Ke Xu,Hewu Li,Liu Yan,Han Qiu
self-correctionfailure-analysisreasoningAnthologyDBLP
6
泛读LongACL 2025

Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection

Yang Zhao,Li Du,Xiao Ding,Yangou Ouyang,Hepeng Wang,Kai Xiong ... 省略 3 位作者 ... ,Qing Yang,Dongchen Li,Bing Qin,Ting Liu
data-selectioninstruction-tuninggradient-basedAnthologyDBLP
6
泛读LongACL 2025

Neuron Empirical Gradient: Discovering and Quantifying Neurons' Global Linear Controllability

Xin Zhao,Zehui Jiang,Naoki Yoshinaga
interpretabilityneuron-analysiscontrollabilityAnthologyDBLP
6
泛读LongACL 2025

FR-Spec: Accelerating Large-Vocabulary Language Models via Frequency-Ranked Speculative Sampling

Weilin Zhao,Tengyu Pan,Xu Han,Yudi Zhang,Sun Ao,Yuxiang Huang ... 省略 4 位作者 ... ,Hao Zhou,Jianyong Wang,Maosong Sun,Zhiyuan Liu
speculative-decodingfrequency-rankinginference-accelerationAnthologyDBLP
6
泛读LongACL 2025

A Systematic Study of Compositional Syntactic Transformer Language Models

Yida Zhao,Hao Xve,Xiang Hu,Kewei Tu
syntactic-transformercompositionalitylanguage-modelingAnthologyDBLP
6
泛读LongACL 2025

PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training Quantization Methods for Large Language Models

Jiaqi Zhao,Miao Zhang,Ming Wang,Yuzhang Shang,Kaihao Zhang,Weili Guan,Yaowei Wang,Min Zhang
quantizationptqextreme-low-bitAnthologyDBLP
6
泛读FindingsACL 2025

Tag-Instruct: Controlled Instruction Complexity Enhancement through Structure-based Augmentation

He Zhu,Zhiwen Ruan,Junyou Su,Xingwei He,Yun Chen,Wenjia Zhang,Guanhua Chen
instruction-datadata-augmentationcomplexityAnthologyDBLP
6
泛读LongACL 2025

Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis

Kejian Zhu,Shangqing Tu,Zhuoran Jin,Lei Hou,Juanzi Li,Jun Zhao
interpretabilityevaluationneuronsAnthologyDBLP
6
泛读LongACL 2025

Enhancing Cross-Lingual Transfer through Reversible Transliteration: A Huffman-Based Approach for Low-Resource Languages

Wenhao Zhuang,Yuan Sun,Xiaobing Zhao
multilingualtokenizertransliterationAnthologyDBLP
6
泛读LongACL 2025

SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?

Haomin Zhuang,Yihua Zhang,Kehan Guo,Jinghan Jia,Gaowen Liu,Sijia Liu,Xiangliang Zhang
moeunlearningsafetyAnthologyDBLP
6
泛读LongACL 2025

TestNUC: Enhancing Test-Time Computing Approaches and Scaling through Neighboring Unlabeled Data Consistency

Henry Peng Zou,Zhengyao Gu,Yue Zhou,Yankai Chen,Weizhi Zhang,Liancheng Fang,Yibo Wang,Yangning Li,Kay Liu,Philip S. Yu
test-time-computeconsistencyreasoningAnthologyDBLP
6
泛读LongACL 2025

On Many-Shot In-Context Learning for Long-Context Evaluation

Kaijian Zou,Muhammad Khalifa,Lu Wang
in-context-learninglong-contextevaluationAnthologyDBLP
5
泛读FindingsACL 2025

Revisiting Self-Consistency from Dynamic Distributional Alignment Perspective on Answer Aggregation

Existing self-consistency methods sample at a fixed temperature: high-temperature sampling needs many samples before the answer distribution stabilizes, while low-temperature sampling amplifies bias. Prior approaches did not adapt the sampling process dynamically to the model's confidence.

Yiwei Li,Ji Zhang,Shaoxiong Feng,Peiwen Yuan,Xinglin Wang,Jiayi Shi ... 省略 1 位作者 ... ,Chuyi Tan,Boyuan Pan,Yao Hu,Kan Li
self-consistencyreasoningsamplingAnthologyarXivDBLP
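For context on the fixed-temperature baseline this entry critiques, vanilla self-consistency is just sample-then-majority-vote. A minimal sketch, where `sample_fn` is a hypothetical stand-in for drawing one stochastic chain-of-thought sample and returning its final answer:

```python
from collections import Counter

def self_consistency(sample_fn, prompt, n_samples=8):
    """Vanilla self-consistency: draw several sampled answers and keep
    the majority answer, plus its empirical agreement rate."""
    answers = [sample_fn(prompt) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

# Toy stand-in for temperature sampling: a fixed cycle of answers.
_outputs = iter(["42", "41", "42", "42", "7", "42", "42", "41"])
ans, agreement = self_consistency(lambda p: next(_outputs), "2*21?", n_samples=8)
# ans == "42", agreement == 0.625
```

The paper's point is that `n_samples` and the sampling temperature are coupled: this static recipe either wastes samples at high temperature or inherits bias at low temperature.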
5
泛读LongACL 2025

Fusing Highly Specialized Language Models for Comprehensive Expertise

Ning Ding,Yulin Chen,Ganqu Cui,Xingtai Lv,Weilin Zhao,Kaiyan Zhang,Ruobing Xie,Bowen Zhou,Zhiyuan Liu,Maosong Sun
model-mergingspecializationfusionAnthologyDBLP
5
泛读LongACL 2025

Neural Parameter Search for Slimmer Fine-Tuned Models and Better Transfer

Existing fine-tuned models perform well in-domain but transfer poorly across domains and carry high parameter redundancy. Prior pruning schemes did not exploit subspace differences in task vectors for targeted pruning, and pruned models are prone to catastrophic forgetting.

Guodong Du,Zitao Fang,Jing Li,Junlin Li,Runhua Jiang,Shuyang Yu ... 省略 3 位作者 ... ,Ho-Kin Tang,Daojing He,Honghai Liu,Min Zhang
pruningfine-tuningtransferAnthologyarXivDBLP
5
泛读FindingsACL 2025

The Law of Knowledge Overshadowing: Towards Understanding, Predicting and Preventing LLM Hallucination

This work sets out to explain that LLM hallucination is not simply a matter of missing knowledge: knowledge the model already holds can overshadow other knowledge at generation time, letting incorrect knowledge suppress correct knowledge. Many prior accounts attribute hallucination to missing retrieval, insufficient parametric knowledge, or unstable decoding, but those explanations struggle to predict when a model will answer incorrectly despite actually knowing the answer; the authors aim for a more predictable, intervenable mechanistic view.

Yuji Zhang,Sha Li,Cheng Qian,Jiateng Liu,Pengfei Yu,Chi Han ... 省略 1 位作者 ... ,Kathleen McKeown,ChengXiang Zhai,Manling Li,Heng Ji
hallucinationknowledgepredictionAnthologyDBLP
5
泛读FindingsACL 2025

To Code or not to Code? Adaptive Tool Integration for Math Language Models via Expectation-Maximization

This work addresses when a math reasoning model should write code versus reason purely in natural language, instead of fixing the choice with external templates. Previous code+CoT frameworks typically hard-code tool invocation or rely on handcrafted prompts to decide whether to use code, which creates a mismatch during training: the model's abilities change, but the tool policy does not change with them.

Haozhe Wang,Long Li,Chao Qu,Weidi Xu,Fengming Zhu,Wei Chu,Fangzhen Lin
tool-usemath-reasoningcode-executionAnthologyarXivDBLP
5
泛读LongACL 2025

SeaKR: Self-aware Knowledge Retrieval for Adaptive Retrieval Augmented Generation

This work argues that RAG should neither retrieve for every query nor weight all retrieved passages equally; the key is whether the model knows when it is uncertain. Prior adaptive RAG typically triggers retrieval with surface heuristics such as confidence scores, rule-based thresholds, or external classifiers, but these signals do not always agree with the model's true internal uncertainty during generation.

Zijun Yao,Weijian Qi,Liangming Pan,Shulin Cao,Linmei Hu,Weichuan Liu,Lei Hou,Juanzi Li
raguncertaintyretrievalAnthologyarXivDBLP
5
泛读FindingsACL 2025

A Survey on Personalized Alignment - The Missing Piece for Large Language Models in Real-World Applications

This paper addresses the assumption behind generic alignment that a single unified objective can cover every user's preferences, which does not hold in real applications. Much prior work either focuses on safety alignment and ignores individual differences, or focuses on personalized recommendation without handling value boundaries; as a result, models are either overly uniform or, once personalized, prone to drifting away from shared constraints.

Jian Guan,Junfei Wu,Jia-Nan Li,Chuanqi Cheng,Wei Wu
personalizationalignmentsurveyAnthologyarXivDBLP
5
泛读LongACL 2025

Drift: Enhancing LLM Faithfulness in Rationale Generation via Dual-Reward Probabilistic Inference

This work addresses the gap between answering correctly and explaining faithfully: an LLM's rationales often neither reflect its actual decision process nor stay grounded in the input context. Existing methods typically take one of two routes: prompt-level calibration, which yields limited gains, or retraining the model, which is costly and may not preserve the base model's abilities.

Jiazheng Li,Hanqi Yan,Yulan He
faithfulnessreasoningreward-modelingAnthologyDBLP
5
泛读ACL 2025

Synthetic Data in the Era of Large Language Models

This paper aims to answer systematically what problems synthetic data solves in the LLM era and what new problems it introduces. Synthetic data used to be treated as a low-cost engineering trick for expanding corpora, but with teacher LMs, self-instruct, self-play, and synthetic reasoning data now widespread, it has become part of the training paradigm and deserves a dedicated survey.

Vijay Viswanathan,Xiang Yue,Alisa Liu,Yizhong Wang,Graham Neubig
synthetic-datasurveydata-generationDOIDBLP
5
泛读LongACL 2025

Unraveling LoRA Interference: Orthogonal Subspaces for Robust Model Merging

This work addresses the interference that arises when merging LoRA-finetuned models: performance drops the moment models are merged, and simply averaging the low-rank updates is not enough. Many merging methods assume per-task updates add linearly, but LoRA's low-rank subspaces are coupled to each task's data distribution, so one task's update can push another task's output distribution off course.

Haobo Zhang,Jiayu Zhou
loramodel-merginginterferenceAnthologyarXivDBLP
5
泛读FindingsACL 2025

FPE2M2: Approaching Lossless and Efficient Quantization with Native Floating Point

This work targets the hard trade-off existing quantization methods make among compression ratio, hardware friendliness, and accuracy loss; the authors use native floating-point formats to approach near-lossless yet efficient quantization. Many prior low-bit schemes rely on custom codebooks, non-native data formats, or complex dequantization paths: the theoretical bit-width is low, but the deployment pipeline and real throughput are not necessarily better.

Ke Yi,Jianwei Zhang,Zhiying Xu,Xinlong Yang,Yang Zhou,Minmin Sun,Zengke Liu,Tong Zhang,Junyang Lin,Jingren Zhou
quantizationfloating-pointcompressionAnthologyDBLP
5
泛读FindingsACL 2025

Self-Reasoning Language Models: Unfold Hidden Reasoning Chains with Few Reasoning Catalyst

This work addresses the difficulty of creating and annotating long-chain reasoning data, which leaves models capable of basic reasoning but lacking longer-horizon patterns such as reflection and decomposition. The common recipe relies on expensive human long-CoT annotation or distillation from a stronger teacher; the authors instead aim to show that a small number of "catalyst" examples lets a model unfold its hidden reasoning chains on its own and bootstrap improvement.

Hongru Wang,Deng Cai,Wanjun Zhong,Shijue Huang,Jeff Z. Pan,Zeming Liu,Kam-Fai Wong
reasoningdistillationcotAnthologyarXivDBLP
5
泛读FindingsACL 2025

Selecting Demonstrations for Many-Shot In-Context Learning via Gradient Matching

Existing demonstration selection for many-shot in-context learning is mostly random; traditional instance-level retrieval methods do not suit the many-shot setting and cannot maximize ICL performance.

Jianfei Zhang,Bei Li,Jun Bai,Rumei Li,Yanmeng Wang,Chenghua Lin,Wenge Rong
in-context-learningdemonstration-selectiongradient-matchingAnthologyarXivDBLP
7
泛读FindingsACL 2025

Self-Foveate: Enhancing Diversity and Difficulty of Synthesized Instructions from Unsupervised Text via Multi-Level Foveation

Existing methods that automatically synthesize instruction data from unlabeled text produce instructions lacking diversity and difficulty, falling short of the high-quality instruction data that LLM SFT requires; most prior approaches extract information from the text at only a single level.

Mingzhe Li,Xin Lu,Yanyan Zhao
instruction-synthesisdata-qualitysftAnthologyarXivDBLP
5
泛读LongACL 2025

One QuantLLM for ALL: Fine-tuning Quantized LLMs Once for Efficient Deployments

This work addresses the fact that deployment requirements for quantized LLMs are discrete and diverse, while current practice trains separately for each bit configuration or hardware constraint, which is costly and cannot cover the real deployment surface. Prior quantization work focuses on accuracy recovery at a single target bit-width, implicitly assuming one quantized model per deployment scenario; that is inefficient when servers, edge devices, and personal computers coexist, so the authors port the once-for-all supernet idea to quantized LLMs.

Ke Yi,Yuhui Xu,Heng Chang,Yuan Meng,Tong Zhang,Jia Li
quantizationdeploymentefficiencyAnthologyarXivDBLP
5
泛读LongACL 2025

LLM as a Broken Telephone: Iterative Generation Distorts Information

This work asks whether information degrades round after round, as in a game of broken telephone, when LLMs repeatedly process, rewrite, or translate text generated by themselves or other models. The question previously lived mostly at the level of intuition, or was treated as isolated hallucination cases; the authors recast it as a controlled iterative-generation problem and measure whether distortion accumulates, which factors affect it, and whether it can be mitigated.

Amr Mohamed,Mingmeng Geng,Michalis Vazirgiannis,Guokan Shang
model-collapseiterative-generationdata-qualityAnthologyarXivDBLP
5
泛读FindingsACL 2025

R³Mem: Bridging Memory Retention and Retrieval via Reversible Compression

Xiaoqiang Wang,Suyuchen Wang,Yun Zhu,Bang Liu
memorycompressionlong-contextAnthologyDBLP
5
泛读LongACL 2025

Attention Speaks Volumes: Localizing and Mitigating Bias in Language Models

Rishabh Adiga,Besmira Nushi,Varun Chandrasekaran
attentionbiasinterpretabilityAnthologyDBLP
5
泛读LongACL 2025

CONFETTI: Conversational Function-Calling Evaluation Through Turn-Level Interactions

Tamer Alkhouli,Katerina Margatina,James Gung,Raphael Shu,Claudia Zaghi,Monica Sunkara,Yi Zhang
function-callingevaluationtool-useAnthologyDBLP
5
泛读LongACL 2025

Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval

Hani Alomari,Anushka Sivakumar,Andrew Zhang,Chris Thomas
cross-modalretrievalrepresentationAnthologyDBLP
5
泛读LongACL 2025

Palm: A Culturally Inclusive and Linguistically Diverse Dataset for Arabic LLMs

Fakhraddin Alwajih,Abdellah El Mekki,Samar Mohamed Magdy,AbdelRahim A. Elmadany,Omer Nacar,El Moatez Billah Nagoudi ... 省略 20 位作者 ... ,Houdaifa Atou,Issam Ait Yahia,Abdelhak Bouayad,Mohammed Machrouh
datasetarabicdata-qualityAnthologyDBLP
5
泛读LongACL 2025

When the LM misunderstood the human chuckled: Analyzing garden path effects in humans and language models

Samuel Joseph Amouyal,Aya Meltzer-Asscher,Jonathan Berant
psycholinguisticsgeneralizationevaluationAnthologyDBLP
5
泛读OutstandingLongACL 2025

I0T: Embedding Standardization Method Towards Zero Modality Gap

Na Min An,Eunki Kim,James Thorne,Hyunjung Shim
multimodalembeddingalignmentAnthologyDBLP
5
泛读FindingsACL 2025

There's No Such Thing as Simple Reasoning for LLMs

This paper argues that "simple reasoning" in LLMs is not a stable, unitary, locally evaluable capability: existing practice splits reasoning tasks into "simple" versus "complex" by surface form, and that split can mask the model's real failure modes. The question deserves a second look now because reasoning evaluations increasingly guide post-training and model selection; if the "simple reasoning" premise itself fails, many downstream conclusions become unreliable.

Nurul Fajrin Ariyani,Zied Bouraoui,Richard Booth,Steven Schockaert
reasoningevaluationgeneralizationAnthologyDBLP
5
泛读FindingsACL 2025

Lifelong Model Editing with Graph-Based External Memory

This paper addresses what happens once model editing becomes a long-running, continual process: new edits interfere with old ones and conflicts accumulate inside the model, so editing needs a "lifelong learning" capability. Many editing methods assume a single or a handful of modifications, but real systems keep receiving new facts, at which point forgetting, conflicts, and catastrophic overwriting become the main problems.

Yash Kumar Atri,Ahmed M. Alaa,Thomas Hartvigsen
model-editingmemorycontinual-learningAnthologyDBLP
5
泛读FindingsACL 2025

FREE: Fast and Robust Vision Language Models with Early Exits

This paper tackles the high inference cost of VLMs, asking how early exits can improve both speed and robustness when visual inputs are complex but not every sample needs the full computation path. Early exit has been explored for text-only models, but porting it to vision-language models is not straightforward: cross-modal alignment usually stabilizes only in deeper layers, and exiting too early loses key visual evidence.

Divya Jyoti Bajpai,Manjesh Kumar Hanawal
vlmearly-exitinference-efficiencyAnthologyDBLP
5
泛读FindingsACL 2025

Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks

This paper studies the geometric properties of "truth directions" in LLMs, asking whether the model contains stable, generalizable truthfulness directions that remain consistent across logical transformations and QA tasks. Prior work has searched linear representation spaces for some true/false direction, but whether those directions are task-specific shortcuts or a more universal representational structure still lacks systematic verification.

Yuntai Bao,Xuhong Zhang,Tianyu Du,Xinkui Zhao,Zhengwen Feng,Hao Peng,Jianwei Yin
truth-directionprobinginterpretabilityAnthologyDBLP
5
泛读LongACL 2025

Decoding by Contrasting Knowledge: Enhancing Large Language Model Confidence on Edited Facts

Baolong Bi,Shenghua Liu,Lingrui Mei,Yiwei Wang,Junfeng Fang,Pengliang Ji,Xueqi Cheng
knowledge-editingdecodingconfidenceAnthologyDBLP
5
泛读LongACL 2025

Response Wide Shut? Surprising Observations in Basic Vision Language Model Capabilities

Shivam Chandhok,Wan-Cyuan Fan,Vered Shwartz,Vineeth N. Balasubramanian,Leonid Sigal
vlmcapabilityevaluationAnthologyDBLP
5
泛读FindingsACL 2025

Which Retain Set Matters for LLM Unlearning? A Case Study on Entity Unlearning

Hwan Chang,Hwanhee Lee
unlearningdata-selectionsafetyAnthologyDBLP
5
泛读FindingsACL 2025

War of Thoughts: Competition Stimulates Stronger Reasoning in Large Language Models

Yibin Chen,Jinyi Liu,Yan Zheng,Yifu Yuan,Jianye Hao
reasoningmulti-agentinference-timeAnthologyDBLP
5
泛读OutstandingLongACL 2025

Pre³: Enabling Deterministic Pushdown Automata for Faster Structured LLM Generation

Junyi Chen,Shihao Bai,Zaijun Wang,Siyu Wu,Chuheng Du,Hailong Yang,Ruihao Gong,Shengzhong Liu,Fan Wu,Guihai Chen
structured-generationdecodingautomataAnthologyDBLP
5
泛读FindingsACL 2025

SafeEraser: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning

This work addresses targeted forgetting for multimodal LLMs: when a deployed model turns out to have memorized harmful or sensitive visual-text knowledge it should not retain, how to remove it while minimizing damage to general capability. Existing machine unlearning mostly targets text-only LLMs or classifiers; for MLLMs the usual recourse is continued alignment or refusal training that merely suppresses outputs, which is not the same as actually deleting cross-modal memories, so the problem now merits dedicated study.

Junkai Chen,Zhijie Deng,Kening Zheng,Yibo Yan,Shuliang Liu,PeiJun Wu,Peijie Jiang,Jia Liu,Xuming Hu
unlearningmultimodalsafetyAnthologyDBLP
5
泛读LongACL 2025

WavRAG: Audio-Integrated Retrieval Augmented Generation for Spoken Dialogue Models

This work addresses RAG for spoken dialogue models: retrieval is usually based only on text transcripts, so the prosody, emotion, and speaking style in the audio are ignored and transcription errors creep in, losing key information. For systems genuinely built for voice interaction, retrieving over text alone is not enough.

Yifu Chen,Shengpeng Ji,Haoxiao Wang,Ziqing Wang,Siyu Chen,Jinzheng He,Jin Xu,Zhou Zhao
speech-lmragaudio-retrievalAnthologyDBLP
5
泛读FindingsACL 2025

ALPS: Attention Localization and Pruning Strategy for Efficient Adaptation of Large Language Models

This work targets a core source of waste in efficient adaptation: not all attention positions and heads matter equally during fine-tuning, yet common PEFT methods treat them uniformly by default, so compute and parameter budget are not concentrated on the most critical local structures. The problem is especially pronounced for long-context and resource-constrained adaptation.

Hao Chen,Haoze Li,Zhiqing Xiao,Lirong Gao,Qi Zhang,Xiaomeng Hu,Ningtao Wang,Xing Fu,Junbo Zhao
attention-pruningefficient-adaptationpeftAnthologyDBLP
5
泛读FindingsACL 2025

DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens

This work addresses a common problem in long-context compression: a fixed number or fixed placement of soft tokens cannot adapt to the information density of different input segments, so compression is either insufficient or throws away key information along with the rest. As contexts grow longer, the waste from static compression strategies becomes increasingly visible.

Shaoshen Chen,Yangning Li,Zishan Xu,Yongqin Zeng,Shunlong Wu,Xinshuo Hu ... 省略 1 位作者 ... ,Xin Su,Jiwei Tang,Yinghui Li,Hai-Tao Zheng
context-compressionsoft-tokenslong-contextAnthologyDBLP
5
泛读FindingsACL 2025

RQT: Hierarchical Residual Quantization for Multi-Model Compression

Tianqi Chen,Peisong Wang,Weixiang Xu,Zeyu Zhu,Jian Cheng
quantizationmulti-model-compressionresidual-quantizationAnthologyDBLP
5
泛读FindingsACL 2025

From Imitation to Introspection: Probing Self-Consciousness in Language Models

Sirui Chen,Shu Yu,Shengjie Zhao,Chaochao Lu
self-consciousnessprobingemergent-behaviorAnthologyDBLP
5
泛读LongACL 2025

DNASpeech: A Contextualized and Situated Text-to-Speech Dataset with Dialogues, Narratives and Actions

Chuanqi Cheng,Hongda Sun,Bo Du,Shuo Shang,Xinrong Hu,Rui Yan
ttsdatasetcontextualized-speechAnthologyDBLP
5
泛读FindingsACL 2025

2M-BELEBELE: Highly Multilingual Speech and American Sign Language Comprehension Dataset

Marta R. Costa-jussà,Bokai Yu,Pierre Andrews,Belen Alastruey,Necati Cihan Camgöz,Joe Chuang,Jean Maillard,Christophe Ropers,Arina Turkatenko,Carleigh Wood
speechsign-languagebenchmarkAnthologyDBLP
5
泛读FindingsACL 2025

Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models

Yingqian Cui,Pengfei He,Jingying Zeng,Hui Liu,Xianfeng Tang,Zhenwei Dai ... 省略 4 位作者 ... ,Suhang Wang,Yue Xing,Jiliang Tang,Qi He
reasoningcotperplexityAnthologyDBLP
5
泛读FindingsACL 2025

CRPO: Confidence-Reward Driven Preference Optimization for Machine Translation

Guofeng Cui,Pichao Wang,Yang Liu,Zemian Ke,Zhu Liu,Vimal Bhat
preference-optimizationreward-modelconfidenceAnthologyDBLP
5
泛读FindingsACL 2025

Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge

Xinyue Cui,Johnny Tian-Zheng Wei,Swabha Swayamdipta,Robin Jia
data-qualitywatermarkingmemorizationAnthologyDBLP
5
泛读LongACL 2025

Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack

Chenxi Dai,Lin Lu,Pan Zhou
privacytraining-dataactivation-inversionAnthologyDBLP
5
泛读LongACL 2025

Ranking Unraveled: Recipes for LLM Rankings in Head-to-Head AI Combat

Roland Daynauth,Christopher Clarke,Krisztián Flautner,Lingjia Tang,Jason Mars
evaluationrankingpreferenceAnthologyDBLP
5
泛读LongACL 2025

DRPruning: Efficient Large Language Model Pruning through Distributionally Robust Optimization

This work aims to prune LLMs smaller and faster while preserving generalization as much as possible, rather than merely deleting parameters that look unimportant on the training distribution. Most existing pruning methods decide from a single calibration set or local importance scores, and so tend to look fine on the calibration set but degrade fast under distribution shift, which is especially unstable for LLMs deployed on open-domain inputs.

Hexuan Deng,Wenxiang Jiao,Xuebo Liu,Jing Li,Min Zhang,Zhaopeng Tu
pruningcompressionefficiencyAnthologyDBLP
5
泛读LongACL 2025

LongDocURL: a Comprehensive Multimodal Long Document Benchmark Integrating Understanding, Reasoning, and Locating

This work addresses the gap that existing multimodal long-document benchmarks mostly test shallow content comprehension but rarely require understanding, cross-page reasoning, and precise localization at the same time, making it hard to tell whether a model can truly reason over long documents or only find keywords on individual pages. With the rise of natively multimodal long-context models, this gap has begun to limit model diagnosis.

Chao Deng,Jiale Yuan,Pi Bu,Peijie Wang,Zhong-Zhi Li,Jian Xu ... 省略 1 位作者 ... ,Yuan Gao,Jun Song,Bo Zheng,Cheng-Lin Liu
multimodallong-contextbenchmarkAnthologyDBLP
5
泛读FindingsACL 2025

MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs

This work targets the fact that existing dialogue evaluations are mostly single-turn, short-horizon, and templated, which masks how brittle frontier LLMs are in realistic multi-turn interaction. Scoring well on one-question-one-answer exchanges does not mean a model can complete tasks stably under multiple constraints, multiple goals, cross-turn memory, and shifting user strategies.

Kaustubh Deshpande,Ved Sirdeshmukh,Johannes Baptist Mols,Lifeng Jin,Ed-Yeremai Hernandez-Cardona,Dean Lee,Jeremy Kritz,Willow E. Primack,Summer Yue,Chen Xing
benchmarkmulti-turnconversationAnthologyDBLP
5
泛读LongACL 2025

Dynamic Parallel Tree Search for Efficient LLM Reasoning

This work addresses the inefficiency of improving LLM reasoning quality by simply adding sampled chains or running serial tree search, which keeps test-time compute from truly scaling. Existing reasoning enhancements often help accuracy but pay a high price in wall-clock time, parallel utilization, and search redundancy.

Yifu Ding,Wentao Jiang,Shunyu Liu,Yongcheng Jing,Jinyang Guo,Yingjie Wang ... 省略 2 位作者 ... ,Ziwei Liu,Bo Du,Xianglong Liu,Dacheng Tao
reasoningtree-searchinferenceAnthologyDBLP
5
泛读FindingsACL 2025

Improving Causal Interventions in Amnesic Probing with Mean Projection or LEACE

This work examines causal interventions in amnesic probing: the original "delete this class of information" operation may not be clean, so we believe an attribute was removed when in fact new representational distortion was introduced. In other words, the question is not whether a probe can read out the information, but whether the intervention truly removes only the target information without damaging everything else.

Alicja Dobrzeniecka,Antske Fokkens,Pia Sommerauer
probingcausal-interventioninterpretabilityAnthologyDBLP
5
泛读LongACL 2025

Making LLMs Better Many-to-Many Speech-to-Text Translators with Curriculum Learning

Yexing Du,Youcheng Pan,Ziyang Ma,Bo Yang,Yifan Yang,Keqi Deng,Xie Chen,Yang Xiang,Ming Liu,Bing Qin
speech-translationcurriculum-learningllmAnthologyDBLP
5
泛读FindingsACL 2025

EXECUTE: A Multilingual Benchmark for LLM Token Understanding

Lukas Edman,Helmut Schmid,Alexander Fraser
tokenizermultilingualbenchmarkAnthologyDBLP
5
泛读LongACL 2025

Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge?

Arduin Findeis,Floris Weers,Guoli Yin,Ke Ye,Ruoming Pang,Tom Gunter
llm-as-a-judgeevaluationreward-signalAnthologyDBLP
5
泛读LongACL 2025

Estimating Privacy Leakage of Augmented Contextual Knowledge in Language Models

James Flemings,Bo Jiang,Wanrong Zhang,Zafar Takhirov,Murali Annavaram
privacycontext-learningmemorizationAnthologyDBLP
5
泛读ShortACL 2025

Improving the Calibration of Confidence Scores in Text Generation Using the Output Distribution's Characteristics

Lorenzo Jaime Yu Flores,Ori Ernst,Jackie CK Cheung
calibrationtext-generationuncertaintyDOIDBLP
5
泛读LongACL 2025

Training-free LLM Merging for Multi-task Learning

Zichuan Fu,Xian Wu,Yejing Wang,Wanyu Wang,Shanshan Ye,Hongzhi Yin,Yi Chang,Yefeng Zheng,Xiangyu Zhao
model-mergingmultitasktraining-freeAnthologyDBLP
5
泛读LongACL 2025

UniICL: An Efficient ICL Framework Unifying Compression, Selection, and Generation

Jun Gao,Qi Lv,Zili Wang,Tianxiang Wu,Ziqiang Cao,Wenjie Li
iclcompressionretrievalAnthologyDBLP
5
泛读LongACL 2025

ExpeTrans: LLMs Are Experiential Transfer Learners

Jinglong Gao,Xiao Ding,Lingxiao Zou,Bibo Cai,Bing Qin,Ting Liu
transfer-learningreasoninggeneralizationAnthologyDBLP
5
泛读LongACL 2025

Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models

Lang Gao,Jiahui Geng,Xiangliang Zhang,Preslav Nakov,Xiuying Chen
safetyjailbreakalignmentAnthologyDBLP
5
泛读FindingsACL 2025

'No' Matters: Out-of-Distribution Detection in Multimodality Multi-Turn Interactive Dialogue

Rena Wei Gao,Xuetong Wu,Siwen Luo,Caren Han,Feng Liu
ood-detectionmultimodaldialogueAnthologyDBLP
5
泛读LongACL 2025

Can Graph Descriptive Order Affect Solving Graph Problems with LLMs?

Yuyao Ge,Shenghua Liu,Baolong Bi,Yiwei Wang,Lingrui Mei,Wenjie Feng,Lizhe Chen,Xueqi Cheng
reasoningorder-effectsgraphsAnthologyDBLP
5
泛读LongACL 2025

When Backdoors Speak: Understanding LLM Backdoor Attacks Through Model-Generated Explanations

Huaizhi Ge,Yiming Li,Qifan Wang,Yongfeng Zhang,Ruixiang Tang
backdoorsecurityinterpretabilityAnthologyDBLP
5
泛读LongACL 2025

Beyond Logits: Aligning Feature Dynamics for Effective Knowledge Distillation

Guoqiang Gong,Jiaxing Wang,Jin Xu,Deping Xiang,Zicheng Zhang,Leqi Shen ... 省略 2 位作者 ... ,Zhaolong Xing,Zhen Chen,Pengzhang Liu,Ke Zhang
knowledge-distillationfeature-alignmenttrainingAnthologyDBLP
5
泛读LongACL 2025

Meta-Learning Neural Mechanisms rather than Bayesian Priors

Michael Eric Goodale,Salvador Mascarenhas,Yair Lakretz
meta-learningiclbayesianAnthologyDBLP
5
泛读FindingsACL 2025

Eliciting Textual Descriptions from Representations of Continuous Prompts

Daniela Gottesman,Mor Geva,Dana Ramati
prompt-tuninginterpretabilitycontinuous-promptAnthologyDBLP
5
泛读LongACL 2025

Adapt Once, Thrive with Updates: Transferable Parameter-Efficient Fine-Tuning on Evolving Base Models

PEFT adapters such as LoRA are trained against a specific version of the base model; once the base model is updated, the adapter stops working and must be retrained. The authors want an adapter trained once to transfer and be reused across base-model versions.

Naibin Gu,Peng Fu,Xiyu Liu,Ke Ma,Zheng Lin,Weiping Wang
peftmodel-updatecontinual-learningAnthologyDBLP
5
泛读FindingsACL 2025

NeuronMerge: Merging Models via Functional Neuron Groups

Model merging combines the weights of several fine-tuned models into one, but existing methods mostly operate at the parameter level (e.g., averaging, TIES) and ignore the functional semantics of neurons. Functionally similar neurons may sit at different positions in different models, so merging directly by position creates conflicts.

Wangyun Gu,Qianghua Gao,Li-Xin Zhang,Xu Shen,Jieping Ye
model-mergingneuron-groupingcompressionAnthologyDBLP
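As a point of reference for this entry, the parameter-level baseline it criticizes (position-wise weight averaging) can be sketched as follows; the plain-list "tensors" are illustrative stand-ins for real weight tensors:

```python
def average_merge(state_dicts):
    """Naive parameter-level model merging: element-wise mean of weights.

    `state_dicts` is a list of {param_name: list_of_floats} mappings with
    identical keys and shapes. Averaging by position like this is exactly
    what neuron-functional methods criticize: it ignores which neurons
    perform which function in each model.
    """
    merged = {}
    for name in state_dicts[0]:
        columns = zip(*(sd[name] for sd in state_dicts))
        merged[name] = [sum(vals) / len(state_dicts) for vals in columns]
    return merged

m = average_merge([{"w": [1.0, 3.0]}, {"w": [3.0, 5.0]}])
# m == {"w": [2.0, 4.0]}
```

If functionally equivalent neurons sat at swapped positions in the two models, this mean would blend unrelated units, which is the conflict the paper's functional grouping is meant to avoid.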
5
泛读FindingsACL 2025

Token-Budget-Aware LLM Reasoning

Chain-of-thought reasoning in LLMs consumes many tokens, yet many reasoning steps are redundant. The authors want the model to allocate reasoning resources adaptively under a given token budget: detailed where detail is needed, concise where it is not.

Tingxu Han,Zhenting Wang,Chunrong Fang,Shiyu Zhao,Shiqing Ma,Zhenyu Chen
reasoninginference-budgettest-timeAnthologyDBLP
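A minimal sketch of the budget-in-the-prompt idea described in this entry, assuming a simple template (the paper's exact wording and budget-setting procedure may differ):

```python
def budget_prompt(question: str, budget: int) -> str:
    # State the token budget inside the prompt so the model can adapt how
    # verbose its chain-of-thought is (illustrative template only).
    return (
        f"{question}\n"
        f"Let's think step by step and use less than {budget} tokens."
    )

print(budget_prompt("What is 13 * 7?", 50))
```

The interesting part of the paper is choosing `budget` per question rather than fixing it globally; this sketch only shows where such a budget would enter the prompt.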
5
泛读FindingsACL 2025

All That Glitters is Not Gold: Improving Robust Retrieval-Augmented Language Models with Fact-Centric Preference Alignment

When a RAG system retrieves documents containing noise or misinformation, the LLM is easily misled into generating inaccurate answers. Existing methods either filter the retrieved results or train the model to ignore noise, but lack an alignment mechanism that makes the model actively prefer factual content.

Jia Hao,Chunhong Zhang,Jiarun Liu,Haiyu Zhao,Zhiqiang Zhan,Zheng Hu
ragpreference-alignmentrobustnessAnthologyDBLP
5
泛读LongACL 2025

Teaching an Old LLM Secure Coding: Localized Preference Optimization on Distilled Preferences

LLM-generated code frequently contains security vulnerabilities, and generic alignment training is not enough to make models systematically avoid insecure coding patterns. The authors aim to improve code security specifically through localized preference optimization.

Mohammad Saqib Hasan,Saikat Chakraborty,Santu Karmaker,Niranjan Balasubramanian
preference-optimizationdistillationsecure-codingAnthologyDBLP
5
泛读FindingsACL 2025

Self-Correction is More than Refinement: A Learning Framework for Visual and Language Reasoning Tasks

Self-correction in LLMs is usually understood as refinement of an initial output, but the authors argue this view is too narrow: self-correction should be a learnable capability in which the model learns to identify errors and fix them systematically, rather than merely regenerate.

Jiayi He,Hehai Lin,Qingyun Wang,Yi R. Fung,Heng Ji
UIUCself-correctionvlmreasoningAnthologyDBLP
5
泛读LongACL 2025

Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning?

Yancheng He,Shilong Li,Jiaheng Liu,Weixun Wang,Xingyuan Bu,Ge Zhang ... 省略 1 位作者 ... ,Zhaoxiang Zhang,Zhicheng Zheng,Wenbo Su,Bo Zheng
chain-of-thoughterror-detectionreasoningAnthologyDBLP
5
泛读FindingsACL 2025

TUBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with Instruction Tuning

Xuanli He,Jun Wang,Qiongkai Xu,Pasquale Minervini,Pontus Stenetorp,Benjamin I. P. Rubinstein,Trevor Cohn
backdoorinstruction-tuningsecurityAnthologyDBLP
5
泛读LongACL 2025

Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence

Jinghan He,Kuan Zhu,Haiyun Guo,Junfeng Fang,Zhenglin Hua,Yuheng Jia,Ming Tang,Tat-Seng Chua,Jinqiao Wang
lvlmhallucinationinterpretabilityAnthologyDBLP
5
泛读LongACL 2025

Distilling an End-to-End Voice Assistant Without Instruction Training Data

William Barr Held,Yanzhe Zhang,Weiyan Shi,Minzhi Li,Michael J. Ryan,Diyi Yang
voice-assistantdistillationspeechAnthologyDBLP
5
泛读FindingsACL 2025

Statistical inference on black-box generative models in the data kernel perspective space

Hayden S. Helm,Aranyak Acharyya,Youngser Park,Brandon Duderstadt,Carey E. Priebe
black-boxgenerative-modelsstatisticsAnthologyDBLP
5
泛读IndustryACL 2025

Domain Adaptation of Foundation LLMs for e-Commerce

Christian Herold,Michael Kozielski,Tala Bazazo,Pavel Petrushkov,Yannick Versley,Seyyed Hadi Hashemi,Patrycja Cieplicka,Dominika Basaj,Shahram Khadivi
domain-adaptationcontinued-pretraininginstruction-tuningDOIDBLP
5
泛读FindingsACL 2025

Decoupling Memories, Muting Neurons: Towards Practical Machine Unlearning for Large Language Models

This work addresses practical unlearning for LLMs: deleting the specified knowledge while not wrecking general capability in the process. Prior methods often apply blunt parameter-level updates, so either the knowledge is not cleanly removed or obvious side effects appear; the authors instead try to handle where memories are stored and which neurons carry function as two separate concerns.

Lishuai Hou,Zixiong Wang,Gaoyang Liu,Chen Wang,Wei Liu,Kai Peng
machine-unlearningknowledge-editingsafetyAnthologyDBLP
5
泛读FindingsACL 2025

Scaling LLMs' Social Reasoning: Sprinkle Cognitive "Aha Moment" into Fundamental Long-thought Logical Capabilities

This work addresses the fact that LLM social reasoning is usually stacked up from small amounts of high-quality data or intricate post-training tricks, while the "aha-moment" turns of social cognition and fundamental long-chain logical capability lack a systematic bridge. Prior work leans either toward social-benchmark engineering or toward generic CoT enhancement, without scaling the two together.

Guiyang Hou,Wenqi Zhang,Zhe Zheng,Yongliang Shen,Weiming Lu
reasoningsocial-reasoningrlAnthologyDBLP
5
泛读FindingsACL 2025

Let's Fuse Step by Step: A Generative Fusion Decoding Algorithm with LLMs for Robust and Instruction-Aware ASR and OCR

This work addresses the fact that ASR and OCR err easily under noise, ambiguity, and complex instruction constraints, while traditional pipelines separate recognition from language post-processing. The authors evidently consider a standalone recognizer followed by an LLM rewriter insufficient, since that makes it hard to exploit instructions and generative priors during decoding itself.

Chan-Jan Hsu,Yi-Chang Chen,Feng-Ting Liao,Pei-Chen Ho,Yu-Hsiang Wang,Po-Chun Hsu,Da-shan Shiu
asrocrfusion-decodingAnthologyDBLP
5
泛读LongACL 2025

mPLUG-DocOwl2: High-resolution Compressing for OCR-free Multi-page Document Understanding

This work tackles the twin bottlenecks of resolution and context length in multi-page document understanding: OCR-free methods consume page images directly, but high-resolution pages and multi-page inputs make the visual token count explode. Common workarounds either rely on OCR to turn the visual problem into a text problem, or downscale images at a visible cost to small fonts, tables, and layout information.

Anwen Hu,Haiyang Xu,Liang Zhang,Jiabo Ye,Ming Yan,Ji Zhang,Qin Jin,Fei Huang,Jingren Zhou
document-understandingvisual-compressionmultimodal-llmAnthologyDBLP
5
泛读LongACL 2025

TrimLLM: Progressive Layer Dropping for Domain-Specific LLMs

Lanxiang Hu,Tajana Rosing,Hao Zhang
layer-pruningcontinual-pretrainmodel-compressionAnthologyDBLP
5
泛读LongACL 2025

InductionBench: LLMs Fail in the Simplest Complexity Class

Wenyue Hua,Tyler Wong,Fei Sun,Liangming Pan,Adam Jardine,William Yang Wang
inductioncomplexityllm-evaluationAnthologyDBLP
5
泛读FindingsACL 2025

Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities

Wenyue Hua,Kaijie Zhu,Lingyao Li,Lizhou Fan,Mingyu Jin,Shuhang Lin,Haochen Xue,Zelong Li,Jindong Wang,Yongfeng Zhang
reasoningcontext-dependencedisentanglingAnthologyDBLP
5
泛读FindingsACL 2025

First-Step Advantage: Importance of Starting Right in Multi-Step Math Reasoning

Kushal Jain,Moritz Miller,Niket Tandon,Kumar Shridhar
reasoningcoterror-analysisAnthologyDBLP
5
泛读LongACL 2025

MEXMA: Token-level objectives improve sentence representations

João Maria Janeiro,Benjamin Piwowarski,Patrick Gallinari,Loïc Barrault
token-levelrepresentation-learningobjectiveAnthologyDBLP
5
泛读LongACL 2025

L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models

Hyesung Jeon,Yulhwa Kim,Jae-Joon Kim
quantizationpeftfine-tuningAnthologyDBLP
5
泛读ShortACL 2025

Unique Hard Attention: A Tale of Two Sides

Selim Jerad,Anej Svete,Jiaoda Li,Ryan Cotterell
attentionhard-attentionarchitectureDOIDBLP
5
泛读FindingsACL 2025

Large Language Models Still Exhibit Bias in Long Text

Wonje Jeung,Dongjae Jeon,Ashkan Yousefpour,Jonghyun Choi
biaslong-contextevaluationDOIDBLP
5
泛读LongACL 2025

ControlSpeech: Towards Simultaneous and Independent Zero-shot Speaker Cloning and Zero-shot Language Style Control

Shengpeng Ji,Qian Chen,Wen Wang,Jialong Zuo,Minghui Fang,Ziyue Jiang ... 省略 1 位作者 ... ,Zehan Wang,Xize Cheng,Siqi Zheng,Zhou Zhao
speech-generationspeaker-cloningstyle-controlAnthologyDBLP
5
泛读FindingsACL 2025

Evaluating the Long-Term Memory of Large Language Models

Zixi Jia,Qinghua Liu,Hexiao Li,Yuyan Chen,Jiqiang Liu
memoryevaluationlong-contextAnthologyDBLP
5
泛读FindingsACL 2025

HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models

Xinyan Jiang,Hang Ye,Yongxin Zhu,Xiaoying Zheng,Zikang Chen,Jun Gong
hallucinationcontrastive-decodingattentionAnthologyDBLP
5
泛读LongACL 2025

Synergistic Weak-Strong Collaboration by Aligning Preferences

This paper addresses a common problem in weak-to-strong alignment: weak models are cheap and broad in coverage, but their preference signals are noisy, and supervising a strong model with them directly amplifies the wrong preferences as well. The authors' angle is to use preference alignment to turn weak and strong models into complementary collaborators rather than a one-way distillation pipeline.

Yizhu Jiao,Xuchao Zhang,Zhaoyang Wang,Yubo Ma,Zhun Deng,Rujia Wang,Chetan Bansal,Saravan Rajmohan,Jiawei Han,Huaxiu Yao
weak-to-strongpreference-alignmentcollaborationAnthologyDBLP
5
泛读LongACL 2025

Internal Value Alignment in Large Language Models through Controlled Value Vector Activation

This paper addresses the fact that value alignment usually relies on external supervision or refusal training, asking whether controllable value representations already exist inside the model and whether activating them directly can improve alignment. The question matters because if value preferences partly live as intrinsic directions in representation space, alignment need not depend only on extra data and full fine-tuning.

Haoran Jin,Meng Li,Xiting Wang,Zhihao Xu,Minlie Huang,Yantao Jia,Defu Lian
alignmentvalue-vectorinterpretabilityAnthologyDBLP
5
泛读LongACL 2025

Multimodal Transformers are Hierarchical Modal-wise Heterogeneous Graphs

This paper tries to answer a structural question: how do multimodal Transformers internally organize information flow across modalities? Judging from the title, the authors characterize them as hierarchical, modality-wise heterogeneous graph structures rather than simple processors of a uniform token sequence.

Yijie Jin,Junjie Peng,Xuanchao Lin,Haochen Yuan,Lan Wang,Cangzhi Zheng
multimodal-transformergrapharchitecture-analysisAnthologyDBLP
5
泛读FindingsACL 2025

"Well, Keep Thinking": Enhancing LLM Reasoning with Adaptive Injection Decoding

This paper addresses the observation that many reasoning errors occur not because the model is incapable, but because decoding converges too early onto shallow paths and stops thinking. Existing test-time scaling compensates with multi-sampling, voting, or tree search at high compute cost; the authors instead use adaptive injection decoding to extend effective reasoning more cheaply within a single generation.

Hyunbin Jin,Je Won Yeom,Seunghyun Bae,Taesup Kim
reasoningdecodingtest-time-computeAnthologyDBLP
5
泛读ShortACL 2025

Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering

This paper revisits an old problem in selective QA: a model must not only answer questions but also know when not to answer. Traditional selective QA relies mainly on single-pass confidence estimation, but large models' calibration within a single generation is often unstable; the authors show that test-time scaling improves the quality of this answer-or-abstain decision.

William Jurayj,Jeffrey Cheng,Benjamin Van Durme
test-time-computeselective-qascalingDOIDBLP
5
泛读FindingsACL 2025

MEXA: Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment

Amir Hossein Kargaran,Ali Modarressi,Nafiseh Nikeghbal,Jana Diesner,François Yvon,Hinrich Schütze
multilingualcross-lingualevaluationAnthologyDBLP
5
泛读LongACL 2025

Predicting Through Generation: Why Generation Is Better for Prediction

Md. Kowsher,Nusrat Jahan Prottasha,Prakash Bhat,Chun-Nam Yu,Mojtaba Soltanalian,Ivan Garibay,Ozlem O. Garibay,Chen Chen,Niloofar Yousefi
generationpredictiontraining-objectiveAnthologyDBLP
5
泛读ShortACL 2025

Cross-Lingual Representation Alignment Through Contrastive Image-Caption Tuning

Nathaniel Krasner,Nicholas Lanuzo,Antonios Anastasopoulos
cross-lingualcontrastive-learningimage-textDOIDBLP
5
泛读LongACL 2025

SCULPT: Systematic Tuning of Long Prompts

Shanu Kumar,Akhila Yesantarao Venkata,Shubhanshu Khandelwal,Bishal Santra,Parag Agrawal,Manish Gupta
promptinglong-contextprompt-optimizationAnthologyDBLP
5
泛读FindingsACL 2025

Feature-Level Insights into Artificial Text Detection with Sparse Autoencoders

Kristian Kuznetsov,Laida Kushnareva,Anton Razzhigaev,Polina Druzhinina,Anastasia Voznyuk,Irina Piontkovskaya,Evgeny Burnaev,Serguei Barannikov
sparse-autoencoderinterpretabilitytext-detectionAnthologyDBLP
5
泛读LongACL 2025

Can LLMs Ground when they (Don't) Know: A Study on Direct and Loaded Political Questions

Clara Lachenmaier,Judith Sieker,Sina Zarrieß
groundinguncertaintypolitical-qaAnthologyDBLP
5
泛读ShortACL 2025

Leveraging Human Production-Interpretation Asymmetries to Test LLM Cognitive Plausibility

Suet-Ying Lam,Qingcheng Zeng,Jingyi Wu,Rob Voigt
cognitive-evaluationlanguage-productioninterpretationDOIDBLP
5
泛读LongACL 2025

How LLMs Comprehend Temporal Meaning in Narratives: A Case Study in Cognitive Evaluation of LLMs

Karin de Langis,Jong Inn Park,Andreas Schramm,Bin Hu,Khanh Chi Le,Dongyeop Kang
temporal-reasoningnarrativescognitive-evaluationAnthologyDBLP
5
泛读LongACL 2025

Data Quality Issues in Multilingual Speech Datasets: The Need for Sociolinguistic Awareness and Proactive Language Planning

Mingfei Lau,Qian Chen,Yeming Fang,Tingting Xu,Tongzhou Chen,Pavel Golik
speechdata-qualitymultilingualAnthologyDBLP
5
泛读LongACL 2025

Theorem Prover as a Judge for Synthetic Data Generation

This work addresses how to obtain a more reliable automatic verification signal for synthetic math-reasoning data than LLM self-evaluation. The common practice of scoring or filtering with another LLM is vulnerable to phrasing style, prompt templates, and model bias; in theorem-proving settings especially, such judges struggle to distinguish "right conclusion, wrong derivation" from "syntactically valid but semantically vacuous", so bringing a theorem prover in as the judge is a step worth taking.

Joshua Ong Jun Leang,Giwon Hong,Wenda Li,Shay B. Cohen
synthetic-datatheorem-provingdata-qualityAnthologyDBLP
5
泛读FindingsACL 2025

PROMTEC: Fast LLM Inference Decoding using Prompt Multi-Lookup with Template Database and Common Sequences

This work targets further reducing per-token decoding latency without changing model weights. Speculative decoding, KV optimizations, and prefix caching already remove part of the systems-level redundancy, but for heavily templated requests, common phrases, and high-frequency continuations the model still redoes computation it has "seen before"; PROMTEC tries to bypass that redundancy directly.

Alan Chi-Man Lee,Wing-Sun Cheng,Calvin Chun-Kit Chan
speculative-decodinginferencelookup-decodingAnthologyDBLP
5
泛读LongACL 2025

Cross-Lingual Optimization for Language Transfer in Large Language Models

This work addresses the fact that when multilingual LLMs do cross-lingual transfer, the optimization objective is rarely aligned with transferable cross-lingual ability itself, so high-resource languages get stronger while low-resource languages receive only surface coverage. Prior work patches this with data ratios, continued pretraining, or instruction tuning, but if the optimization process itself favors dominant languages, the transfer ceiling is locked in early.

Jungseob Lee,Seongtae Hong,Hyeonseok Moon,Heuiseok Lim
cross-lingualtransfer-learningmultilingualAnthologyDBLP
5
泛读FindingsACL 2025

Uncertainty-Aware Contrastive Decoding

This work addresses a weakness of contrastive decoding: it often treats disagreement as error, a judgment that is unreliable when the model itself is uncertain. Standard contrastive decoding uses the expert-amateur gap to suppress low-quality tokens, but once the expert is itself high-entropy or out of distribution, this fixed contrast over-penalizes reasonable candidates, hurting fluency, factuality, or diversity.
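For readers unfamiliar with the baseline, a minimal sketch of standard expert-amateur contrastive decoding, plus a hypothetical entropy gate that falls back to the expert distribution when the expert is itself uncertain (the gate and its threshold are illustrative assumptions, not the paper's method):

```python
import math

def contrastive_scores(expert_logp, amateur_logp, alpha=0.1, entropy_gate=2.0):
    """Score tokens by the expert-amateur log-prob gap, keeping only tokens
    whose expert prob is >= alpha * max prob (the plausibility cutoff).
    If the expert distribution is high-entropy, skip the contrast entirely
    (illustrative uncertainty gate, not the paper's mechanism)."""
    entropy = -sum(math.exp(lp) * lp for lp in expert_logp.values())
    if entropy > entropy_gate:
        return dict(expert_logp)  # fall back: trust the expert as-is
    cutoff = max(expert_logp.values()) + math.log(alpha)
    return {
        tok: lp - amateur_logp.get(tok, -1e9)
        for tok, lp in expert_logp.items() if lp >= cutoff
    }

expert = {"paris": math.log(0.7), "lyon": math.log(0.2), "the": math.log(0.1)}
amateur = {"paris": math.log(0.3), "lyon": math.log(0.1), "the": math.log(0.6)}
scores = contrastive_scores(expert, amateur)
best = max(scores, key=scores.get)
print(best)  # "the" is penalized because the amateur also likes it
```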

Hakyung Lee,Subeen Park,Joowang Kim,Sungjun Lim,Kyungwoo Song
decodinguncertaintycontrastive-decodingAnthologyDBLP
5
泛读SRWACL 2025

Controlling Language Confusion in Multilingual LLMs

This work addresses language confusion in multilingual LLMs: unrelated languages leaking into target-language generation, uncontrolled code-switching, and answer-language drift. This used to be treated as a side effect of prompt engineering or data imbalance, but as models cover more languages and shared vocabularies grow more crowded, it has become a systemic issue for unified multilingual modeling.

Nahyun Lee,Yeongseo Woo,Hyunwoo Ko,Guijin Son
multilinguallanguage-confusionllm-behaviorDOIDBLP
5
泛读FindingsACL 2025

TAMP: Token-Adaptive Layerwise Pruning in Multimodal Large Language Models

This work addresses the high inference cost of MLLMs: different tokens need different layers, so running every token through all layers is clearly wasteful. Prior pruning work mostly does static layer removal or sample-level early exit, but multimodal inputs show much larger difficulty variation across visual tokens, text tokens, and generation stages, so static policies easily lose the balance between compute savings and quality.

Jaewoo Lee,Keyang Xuan,Chanakya Ekbote,Sandeep Polisetty,Yi R. Fung,Paul Pu Liang
pruningmultimodal-llmtoken-adaptiveAnthologyDBLP
5
泛读FindingsACL 2025

Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning

During instruction tuning, LLMs treat all samples alike, ignoring how certain the model already is about each instruction. For samples the model is already confident on, continued fitting risks overfitting; for high-uncertainty samples, the standard training signal may be insufficient. Prior SFT methods rarely exploit the model's uncertainty explicitly.

Jiaqi Li,Yixuan Tang,Yi Yang
uncertaintyinstruction-tuningsftAnthologyDBLP
5
泛读FindingsACL 2025

BayesKD: Bayesian Knowledge Distillation for Compact LLMs in Constrained Fine-tuning Scenarios

How to distill a large model's knowledge into a small one under resource-constrained fine-tuning. Standard knowledge distillation is unstable on LLMs, especially when the teacher-student capacity gap is large and fine-tuning data is limited, which amplifies noise in the distillation signal.

Wei Li,Lujun Li,Mark G. Lee,Shengjie Sun,Lei Zhang,Wei Xue,Yike Guo
knowledge-distillationbayesiancompact-llmAnthologyDBLP
5
泛读LongACL 2025

Think&Cite: Improving Attributed Text Generation with Self-Guided Tree Search and Progress Reward Modeling

Attributed text generation requires the model to produce content together with accurate source citations, but existing methods either cite inaccurately or generate poorly. The core difficulty is that generation and retrieval/citation are two objectives that easily conflict.

Junyi Li,Hwee Tou Ng
National University of Singaporetree-searchreward-modelattributionAnthologyDBLP
5
泛读LongACL 2025

500xCompressor: Generalized Prompt Compression for Large Language Models

Long prompts make LLM inference expensive and run into context-window limits, calling for efficient prompt compression. Existing methods have limited compression ratios (typically 10-50x) and generalize poorly: a compressor trained on one task fails when moved to another.

Zongqian Li,Yixuan Su,Nigel Collier
University of Cambridgeprompt-compressioninferencelong-contextAnthologyDBLP
5
泛读LongACL 2025

Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes

Meng Li,Michael Vrazitulis,David Schlangen
representationsfactualityepistemicsAnthologyDBLP
5
泛读LongACL 2025

MobiLoRA: Accelerating LoRA-based LLM Inference on Mobile Devices via Context-aware KV Cache Optimization

Borui Li,Yitao Wang,Haoran Ma,Ligeng Chen,Jun Xiao,Shuai Wang
lorakv-cachemobileAnthologyDBLP
5
泛读FindingsACL 2025

Forget the Token and Pixel: Rethinking Gradient Ascent for Concept Unlearning in Multimodal Generative Models

Jiaqi Li,Chuanyi Zhang,Miaozeng Du,Hui Zhang,Yongrui Chen,Qianshan Wei,Junfeng Fang,Ruipeng Wang,Sheng Bi,Guilin Qi
unlearningmultimodal-generationgradientsAnthologyDBLP
5
泛读FindingsACL 2025

Investigating Context Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style

Yuepei Li,Kang Zhou,Qiao Qiao,Bach Nguyen,Qing Wang,Qi Li
faithfulnesscontextmemoryAnthologyDBLP
5
泛读LongACL 2025

Lost in Literalism: How Supervised Training Shapes Translationese in LLMs

The core issue this paper examines: supervised training systematically pushes LLM translations toward translationese, i.e. a more literal, source-language-driven style. This has long been masked by n-gram metrics like BLEU, since literal correspondence tends to score well, but for high-quality machine translation and cross-lingual generation it sacrifices target-language naturalness and stylistic autonomy.

Yafu Li,Ronghao Zhang,Zhilin Wang,Huajian Zhang,Leyang Cui,Yongjing Yin,Tong Xiao,Yue Zhang
supervised-finetuningtranslationdata-distributionAnthologyDBLP
5
泛读LongACL 2025

Knowledge Boundary of Large Language Models: A Survey

Rather than a single technical problem, this survey systematically maps the knowledge boundary of LLMs: what a model knows, what it does not, when it confabulates beyond the boundary, and how these boundaries are evaluated, modeled, and intervened on. The topic now merits its own survey because LLMs have moved from "how much knowledge is memorized" to "when to admit ignorance and when to call external knowledge".

Moxin Li,Yong Zhao,Wenxuan Zhang,Shuaiyi Li,Wenya Xie,See-Kiong Ng,Tat-Seng Chua,Yang Deng
knowledge-boundarysurveyllm-capabilitiesAnthologyDBLP
5
泛读FindingsACL 2025

ClusComp: A Simple Paradigm for Model Compression and Efficient Finetuning

This paper addresses the fact that model compression and efficient fine-tuning are usually two separate pipelines, one pursuing parameter/compute savings and the other low-cost adaptation, and the two often interfere. Many schemes are either hard to tune further after compression or sacrifice compression ratio to preserve tunability.

Baohao Liao,Christian Herold,Seyyed Hadi Hashemi,Stefan Vasilev,Shahram Khadivi,Christof Monz
compressionclusteringfine-tuningAnthologyDBLP
5
泛读IndustryACL 2025

NeKo: Cross-Modality Post-Recognition Error Correction with Tasks-Guided Mixture-of-Experts Language Model

This paper addresses post-recognition error correction in a cross-modal setting: once a recognition system errs, textual context alone is often insufficient, and audio, visual, or task cues are needed to repair the error. Traditional error correction is mostly unimodal text post-processing, which hits clear limits on homophones, proper names, and colloquial ellipsis.

Yen-Ting Lin,Zhehuai Chen,Piotr Zelasko,Zhen Wan,Xuesong Yang,Zih-Ching Chen ... 4 authors omitted ... ,Jagadeesh Balam,Boris Ginsburg,Yu-Chiang Frank Wang,Chao-Han Huck Yang
moeerror-correctioncross-modalityDOIDBLP
5
泛读LongACL 2025

TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition

This paper addresses a limitation of LoRA: despite its efficiency, a single low-rank adapter has limited expressiveness and tends to underfit complex tasks or multi-distribution data. Existing fixes either raise the rank or stack multiple adapters, but the former raises cost and the latter lacks an explicit collaboration mechanism.

Tianwei Lin,Jiang Liu,Wenqiao Zhang,Yang Dai,Haoyuan Li,Zhelun Yu ... 2 authors omitted ... ,Jiannan Guo,Hao Jiang,Siliang Tang,Yueting Zhuang
loraparameter-efficientcollaborationAnthologyDBLP
5
泛读LongACL 2025

Insight Over Sight: Exploring the Vision-Knowledge Conflicts in Multimodal LLMs

Xiaoyuan Liu,Wenxuan Wang,Youliang Yuan,Jen-tse Huang,Qiuzhi Liu,Pinjia He,Zhaopeng Tu
vision-knowledge-conflictvlmhallucinationAnthologyDBLP
5
泛读ShortACL 2025

Revisiting Epistemic Markers in Confidence Estimation: Can Markers Accurately Reflect Large Language Models' Uncertainty?

Jiayu Liu,Qing Zong,Weiqi Wang,Yangqiu Song
uncertaintycalibrationconfidenceDOIDBLP
5
泛读LongACL 2025

What Makes a Good Natural Language Prompt?

Do Xuan Long,Duy Dinh,Ngoc-Hai Nguyen,Kenji Kawaguchi,Nancy F. Chen,Shafiq Joty,Min-Yen Kan
promptingprompt-qualityevaluationAnthologyDBLP
5
泛读LongACL 2025

Language Model Probabilities are Not Calibrated in Numeric Contexts

Charles Lovering,Michael Krumdick,Viet Dac Lai,Varshini Reddy,Seth Ebner,Nilesh Kumar,Rik Koncel-Kedziorski,Chris Tanner
calibrationnumeric-reasoningprobabilitiesAnthologyDBLP
5
泛读LongACL 2025

CodeTool: Enhancing Programmatic Tool Invocation of LLMs via Process Supervision

Yifei Lu,Fanghua Ye,Jian Li,Qiang Gao,Cheng Liu,Haibo Luo,Nan Du,Xiaolong Li,Feiliang Ren
tool-useprocess-supervisioncodeAnthologyDBLP
5
泛读FindingsACL 2025

WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data

Xinyang Lu,Jingtan Wang,Zitong Zhao,Zhongxiang Dai,Chuan-Sheng Foo,See-Kiong Ng,Bryan Kian Hsiang Low
watermarkingdata-attributionsynthetic-dataAnthologyDBLP
5
泛读LongACL 2025

Demystifying Small Language Models for Edge Deployment

Zhenyan Lu,Xiang Li,Dongqi Cai,Rongjie Yi,Fangming Liu,Wei Liu,Jian Luan,Xiwen Zhang,Nicholas D. Lane,Mengwei Xu
small-lmedge-deploymentefficiencyAnthologyDBLP
5
泛读LongACL 2025

Global Eye: Breaking the "Fixed Thinking Pattern" during the Instruction Expansion Process

Wenxuan Lu,Wei Liu,Jian Luan,Bin Wang,Songhao Jiang,Tianning Zang
instruction-synthesisdata-generationdiversityAnthologyDBLP
5
泛读FindingsACL 2025

Adaptive Detoxification: Safeguarding General Capabilities of LLMs through Toxicity-Aware Knowledge Editing

Yifan Lu,Jing Li,Yigeng Zhou,Yihui Zhang,Wenya Wang,Xiucheng Li,Meishan Zhang,Fangming Liu,Jun Yu,Min Zhang
knowledge-editingsafetydetoxificationAnthologyDBLP
5
泛读LongACL 2025

Tree-of-Evolution: Tree-Structured Instruction Evolution for Code Generation in Large Language Models

Ziyang Luo,Kaixin Li,Hongzhan Lin,Yuchen Tian,Mohan S. Kankanhalli,Jing Ma
instruction-evolutioncode-generationsynthetic-dataAnthologyDBLP
5
泛读OutstandingLongACL 2025

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

This work asks how to reuse token computations that are normally thrown away during generation, thereby lowering LLM inference cost. In conventional decoding, candidate tokens, draft tokens, and intermediate results are simply discarded once they are not accepted; that is clean, but compute utilization is very low.
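The general recycling idea can be sketched as an adjacency table that caches successor tokens seen in earlier (even rejected) drafts and replays them as cheap draft continuations; all names here are illustrative, not the paper's implementation:

```python
from collections import defaultdict

class TokenRecycler:
    """Cache successors observed in any generated or rejected sequence,
    then chain them into a draft for a later decoding step (a loose
    sketch of token recycling, under simplifying assumptions)."""
    def __init__(self):
        self.successors = defaultdict(list)  # token -> recent next-tokens

    def observe(self, tokens):
        # record adjacent pairs, most recent first, keeping a few per token
        for cur, nxt in zip(tokens, tokens[1:]):
            self.successors[cur].insert(0, nxt)
            del self.successors[cur][4:]

    def draft(self, last_token, length=3):
        # greedily chain each token's most recent cached successor
        out, tok = [], last_token
        for _ in range(length):
            if not self.successors[tok]:
                break
            tok = self.successors[tok][0]
            out.append(tok)
        return out

r = TokenRecycler()
r.observe(["def", "main", "(", ")", ":"])  # e.g. a previously rejected draft
print(r.draft("def"))                      # replays the cached chain
```

The drafted tokens would then be verified in one forward pass, as in standard speculative decoding.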

Xianzhen Luo,Yixuan Wang,Qingfu Zhu,Zhiming Zhang,Xuanyu Zhang,Qing Yang,Dongliang Xu
speculative-decodingtoken-recyclinginference-accelerationAnthologyDBLP
5
泛读FindingsACL 2025

Rethinking Diverse Human Preference Learning through Principal Component Analysis

This work starts from the observation that human preferences do not lie along a single "average direction", and asks how to model that diversity explicitly in preference learning rather than compressing different populations and criteria into one reward. Common preference modeling uses a single scalar reward or a single win/lose signal, which trains stably but erases interpretable disagreement, yielding overly compromised alignment.
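The decomposition idea can be illustrated with plain PCA over per-annotator preference vectors: each principal component is a candidate "preference direction" instead of one averaged reward. The data and names below are made up for illustration:

```python
import numpy as np

def preference_components(pref_matrix, k=2):
    """pref_matrix: rows = annotators, cols = their scores on shared items.
    Returns the top-k principal directions of preference variation."""
    X = pref_matrix - pref_matrix.mean(axis=0)   # center each column
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]

# toy data: 4 annotators x 3 responses; two annotators favor the first two
# responses, two favor the third -> one strong non-average component
prefs = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.0],
                  [0.1, 0.0, 1.0],
                  [0.0, 0.1, 0.9]])
components = preference_components(prefs, k=1)
print(components.shape)  # (1, 3)
```

The leading component loads with opposite signs on the disputed responses, which is exactly the disagreement a single averaged reward would erase.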

Feng Luo,Rui Yang,Hao Sun,Chunyuan Deng,Jiarui Yao,Jingyan Shen,Huan Zhang,Hanjie Chen
preference-learningrlhfpcaAnthologyDBLP
5
泛读FindingsACL 2025

Beyond Decoder-only: Large Language Models Can be Good Encoders for Machine Translation

This work asks whether, beyond decoder-only LLMs, large language models can serve as strong encoders for machine translation, reassessing the value of the encoder-decoder route. Much prior work assumes generation tasks should be decoder-only, but translation is inherently conditional generation, where source-encoding quality and cross-lingual alignment often matter more than pure autoregressive continuation.

Yingfeng Luo,Tong Zheng,Yongyu Mu,Bei Li,Qinghong Zhang,Yongqi Gao ... 1 author omitted ... ,Peinan Feng,Xiaoqian Liu,Tong Xiao,JingBo Zhu
encoderdecoder-onlymachine-translationAnthologyDBLP
5
泛读FindingsACL 2025

Whether LLMs Know If They Know: Identifying Knowledge Boundaries via Debiased Historical In-Context Learning

This work asks whether LLMs can reliably know their own knowledge boundaries, i.e. distinguish "I know" from "I don't know". Conventional evaluation looks directly at QA accuracy or confidence, but examples in the conversational history bias the model, making it appear to "know" when it is merely being induced by the in-context pattern.

Bo Lv,Nayu Liu,Yang Shen,Xin Liu,Ping Luo,Yue Yu
knowledge-boundarycalibrationin-context-learningAnthologyDBLP
5
泛读LongACL 2025

SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning

This work observes that resolving a complex issue consists of multiple subtasks; monolithic RL or SFT easily suffers sparse rewards and blurry credit assignment, teaching the model surface patching rather than step-by-step problem solving. The common practice of end-to-end fine-tuning over whole trajectories is simple, but not robust enough for multi-stage issue-resolving tasks.

Zexiong Ma,Chao Peng,Pengfei Gao,Xiangxin Meng,Yanzhen Zou,Bing Xie
reinforcement-learningcode-generationissue-resolvingAnthologyDBLP
5
泛读LongACL 2025

Unravelling the Logic: Investigating the Generalisation of Transformers in Numerical Satisfiability Problems

Tharindu Madusanka,Marco Valentino,Iqra Zahid,Ian Pratt-Hartmann,Riza Batista-Navarro
generalizationtransformersreasoningAnthologyDBLP
5
泛读IndustryACL 2025

Genetic Instruct: Scaling up Synthetic Generation of Coding Instructions for Large Language Models

Somshubra Majumdar,Vahid Noroozi,Mehrzad Samadi,Sean Narenthiran,Aleksander Ficek,Wasi Uddin Ahmad,Jocelyn Huang,Jagadeesh Balam,Boris Ginsburg
synthetic-datacode-instructiondata-synthesisDOIDBLP
5
泛读LongACL 2025

Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation

Jonibek Mansurov,Akhmed Sakip,Alham Fikri Aji
data-contaminationbenchmarkknowledge-distillationAnthologyDBLP
5
泛读LongACL 2025

Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement

Maosongcao Maosongcao,Taolin Zhang,Mo Li,Chuyu Zhang,Yunxin Liu,Conghui He,Haodong Duan,Songyang Zhang,Kai Chen
data-synthesisalignmentknowledge-drivenAnthologyDBLP
5
泛读FindingsACL 2025

How do Transformer Embeddings Represent Compositions? A Functional Analysis

Aishik Nagar,Ishaan Singh Rawal,Mansi Dhanania,Cheston Tan
embeddingscompositionrepresentationAnthologyDBLP
5
泛读FindingsACL 2025

TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking

Shahriar Kabir Nahin,Rabindra Nath Nandi,Sagor Sarker,Quazi Sarwar Muhtaseem,Md. Kowsher,Apu Chandraw Shill,Md Ibrahim,Mehadi Hasan Menon,Tareq Al Muntasir,Firoj Alam
llmmultilinguallow-resourceDOIDBLP
5
泛读OutstandingLongACL 2025

Typology-Guided Adaptation in Multilingual Models

Ndapa Nakashole
multilingualadaptationtypologyAnthologyDBLP
5
泛读LongACL 2025

Frictional Agent Alignment Framework: Slow Down and Don't Break Things

Abhijnan Nath,Carine Graff,Andrei Bachinin,Nikhil Krishnaswamy
agent-alignmentsafetyalignmentAnthologyDBLP
5
泛读FindingsACL 2025

Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation

Vera Neplenbroek,Arianna Bisazza,Raquel Fernández
debiasingdetoxificationmultilingualAnthologyDBLP
5
泛读LongACL 2025

OZSpeech: One-step Zero-shot Speech Synthesis with Learned-Prior-Conditioned Flow Matching

Nghia-Huynh Nguyen-Hieu,Ngoc Son Nguyen,Huynh Nguyen Dang,Thieu Vo,Truong-Son Hy,Van Nguyen
speech-generationflow-matchingzero-shotAnthologyDBLP
5
泛读LongACL 2025

A Text is Worth Several Tokens: Text Embedding from LLMs Secretly Aligns Well with The Key Tokens

Zhijie Nie,Richong Zhang,Zhanyu Wu
text-embeddingtoken-alignmentrepresentationAnthologyDBLP
5
泛读LongACL 2025

Library-Like Behavior In Language Models is Enhanced by Self-Referencing Causal Cycles

This work asks whether language models can exhibit more stable "library-like" retrieval and back-referencing behavior through explicit self-referencing causal cycles. Standard next-token training stores knowledge implicitly in the parameters but places no direct constraint on behaviors like citing one's own prior statements or staying self-consistent, so models often remember facts yet fail to reliably invoke their own internal records.

Munachiso Nwadike,Zangir Iklassov,Toluwani Aremu,Tatsuya Hiraoka,Benjamin Heinzerling,Velibor Bojkovic,Hilal AlQuabeh,Martin Takác,Kentaro Inui
causal-cyclesself-referencingmemorizationAnthologyDBLP
5
泛读LongACL 2025

Multilingual Arbitration: Optimizing Data Pools to Accelerate Multilingual Progress

This work tackles a very practical question: in multilingual training, how should a limited token budget be allocated across language-specific data pools to advance overall multilingual ability fastest? Common practice allocates by corpus size, temperature sampling, or heuristic reweighting, but these strategies rarely optimize "overall multilingual progress per unit of training cost" directly.

Ayomide Odumakinde,Daniel D'souza,Pat Verga,Beyza Ermis,Sara Hooker
multilingualdata-selectiondata-mixtureAnthologyDBLP
5
泛读LongACL 2025

Incorporating Domain Knowledge into Materials Tokenization

This work addresses the fact that generic tokenizers segment materials-science text and material representations poorly, destroying key structural information right at the pretraining entry point. Many domain models reuse generic BPE or SentencePiece because it is cheap and compatible, but this splits chemical formulas, crystal compositions, and property tokens into over-fragmented or ambiguous pieces; however large the model, it can only learn from lossy input.

Yerim Oh,Jun-Hyung Park,Junho Kim,SungHo Kim,SangKeun Lee
tokenizerdomain-specificmaterials-scienceAnthologyDBLP
5
泛读SRWACL 2025

Neuron-Level Language Tag Injection Improves Zero-Shot Translation Performance

This work addresses a failure mode of zero-shot translation: the multilingual mapping ability exists, but the model cannot be reliably switched into the correct language mode. The usual fix is adding language tags or instructions around the input, but these control signals attenuate as they propagate through the layers and are especially unreliable for low-resource or poorly aligned language pairs.

Jay Orten,Ammon Shurtz,Nancy Fulda,Stephen D. Richardson
multilingualzero-shot-translationneuron-levelDOIDBLP
5
泛读LongACL 2025

Towards Reward Fairness in RLHF: From a Resource Allocation Perspective

This work examines reward fairness in RLHF: reward models and preference optimization often allocate training resources unevenly across sample types or groups, so some groups keep benefiting while others keep being ignored. Much RLHF work assumes that a rising average reward is good news, but this masks unfair reward allocation and the alignment bias it induces.

Sheng Ouyang,Yulan Hu,Ge Chen,Qingyang Li,Fuzheng Zhang,Yong Liu
rlhfreward-fairnessresource-allocationAnthologyDBLP
5
泛读OutstandingLongACL 2025

Mapping 1,000+ Language Models via the Log-Likelihood Vector

This work asks: with language models now numbering in the thousands, how can we compare their capability structure with a unified, computable representation rather than scattered benchmark scores? Traditional leaderboards compress models into a few task averages, making it hard to tell whether two models are alike, complementary, or merely coincidentally close on the test set.

Momose Oyama,Hiroaki Yamagiwa,Yusuke Takase,Hidetoshi Shimodaira
model-comparisonlog-likelihoodmodel-mappingAnthologyDBLP
5
泛读DemoACL 2025

SlimLM: An Efficient Small Language Model for On-Device Document Assistance

Thang M. Pham,Phat T. Nguyen,Seunghyun Yoon,Viet Dac Lai,Franck Dernoncourt,Trung Bui
small-lmon-devicecompressionDOIDBLP
5
泛读FindingsACL 2025

Evaluating LLMs' Assessment of Mixed-Context Hallucination Through the Lens of Summarization

Siya Qi,Rui Cao,Yulan He,Zheng Yuan
hallucinationevaluationsummarizationAnthologyDBLP
5
泛读FindingsACL 2025

Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?

Chengwei Qin,Wenhan Xia,Tan Wang,Fangkai Jiao,Yuchen Hu,Bosheng Ding,Ruirui Chen,Shafiq Joty
analogical-reasoningevaluationreasoningAnthologyDBLP
5
泛读LongACL 2025

Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models

Muhammad Reza Qorib,Junyi Li,Hwee Tou Ng
multilingualparallel-datallm-capabilityAnthologyDBLP
5
泛读FindingsACL 2025

RASD: Retrieval-Augmented Speculative Decoding

Guofeng Quan,Wenfeng Feng,Chuzhan Hao,Guochao Jiang,Yuewei Zhang,Hao Henry Wang
speculative-decodingretrieval-augmentedinferenceDOIDBLP
5
泛读FindingsACL 2025

On the Generalization vs Fidelity Paradox in Knowledge Distillation

Suhas Kamasetty Ramesh,Ayan Sengupta,Tanmoy Chakraborty
knowledge-distillationgeneralizationfidelityAnthologyDBLP
5
泛读FindingsACL 2025

APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training

Jun Rao,Zepeng Lin,Xuebo Liu,Xiaopeng Ke,Lian Lian,Dong Jin,Shengjun Cheng,Jun Yu,Min Zhang
preference-trainingiterativespecialist-llmAnthologyDBLP
5
泛读FindingsACL 2025

MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching

This work addresses the narrow coverage of cross-modal topical matching in existing vision-language evaluation, in particular the lack of a large-scale, massively multilingual, fine-grained benchmark; models appear able to do image-text alignment but may only work in English or on surface semantics. Such questions were previously proxied by scattered image-text retrieval or classification datasets, which cover few languages, use coarse topic granularity, and cannot distinguish genuine semantic/topical alignment from exploiting dataset bias.

Fabian David Schmidt,Florian Schneider,Chris Biemann,Goran Glavas
vlmbenchmarkmultilingualAnthologyDBLP
5
泛读FindingsACL 2025

MALAMUTE: A Multilingual, Highly-granular, Template-free, Education-based Probing Dataset

This work addresses the heavy templating, coarse granularity, and limited language coverage of existing probing datasets, which means we mostly measure a model's adaptation to prompt formats rather than its real knowledge and reasoning structure. Education-based data, if fine-grained and template-free, can better distinguish whether a model actually grasps concepts, hierarchies, and dependencies rather than memorizing question types.

Sagi Shaier,George Arthur Baker,Chiranthan Sridhar,Lawrence Hunter,Katharina von der Wense
probingmultilingualbenchmarkAnthologyDBLP
5
泛读FindingsACL 2025

TACO-RL: Task Aware Prompt Compression Optimization with Reinforcement Learning

This work addresses the fact that prompt compression often just chases shorter prompts, while different tasks tolerate information loss differently; a uniform compression policy easily saves tokens on some tasks while dropping critical signal on others. Prior approaches rely on heuristic summarization, keyword selection, or static compressors, which are typically not optimized directly for downstream task reward.

Shivam Shandilya,Menglin Xia,Supriyo Ghosh,Huiqiang Jiang,Jue Zhang,Qianhui Wu,Victor Rühle,Saravan Rajmohan
prompt-compressionreinforcement-learninginferenceAnthologyDBLP
5
泛读DemoACL 2025

Token Level Routing Inference System for Edge Devices

This work addresses LLM inference on edge devices, constrained by compute, bandwidth, and energy: routing every token uniformly through the same heavy path is usually wasteful. Common optimizations are layer pruning, early exit, or static small-model substitution, but these underuse per-token difficulty differences and tend to sacrifice too much quality for the compute they save.
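Per-token routing of this kind can be sketched as a confidence gate on a small on-device model, escalating only low-confidence tokens to a larger model. The two "models" below are stand-in functions and the threshold is an assumption, not the system's actual design:

```python
def route_tokens(prefixes, small_model, large_model, threshold=0.7):
    """At each decoding step, accept the small model's token when its
    confidence clears the threshold; otherwise escalate that single
    token to the large model (e.g. running on a server)."""
    out = []
    for prefix in prefixes:
        token, conf = small_model(prefix)
        if conf >= threshold:
            out.append((token, "small"))
        else:
            out.append((large_model(prefix), "large"))
    return out

# stand-in models: the small model is only confident on common words
small = lambda p: ("the", 0.9) if p.endswith("of") else ("??", 0.3)
large = lambda p: "Rembrandt"
steps = route_tokens(["a painting of", "a painting by"], small, large)
print(steps)  # the easy token stays local, the hard token escalates
```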

Jianshu She,Wenhao Zheng,Zhengzhong Liu,Hongyi Wang,Eric P. Xing,Huaxiu Yao,Qirong Ho
token-routingedge-inferencesystemDOIDBLP
5
泛读FindingsACL 2025

Is It JUST Semantics? A Case Study of Discourse Particle Understanding in LLMs

This work asks whether LLMs' understanding of discourse particles is genuinely grounded in semantics or mostly driven by surface co-occurrence and discourse templates. The question has been masked by general semantic-understanding benchmarks: discourse particles are short, frequent, and strongly context-dependent, so nearby embeddings or syntactic correctness alone do not show that a model truly understands their pragmatic function.

William Berkeley Sheffield,Kanishka Misra,Valentina Pyatkin,Ashwini Deo,Kyle Mahowald,Junyi Jessy Li
discoursesemanticsevaluationAnthologyDBLP
5
泛读LongACL 2025

MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection

Yixian Shen,Qi Bi,Jia-Hong Huang,Hongyi Zhu,Andy D. Pimentel,Anuj Pathania
parameter-efficientadaptationfine-tuningAnthologyDBLP
5
泛读FindingsACL 2025

Transparentize the Internal and External Knowledge Utilization in LLMs with Trustworthy Citation

Jiajun Shen,Tong Zhou,Yubo Chen,Delai Qiu,Shengping Liu,Kang Liu,Jun Zhao
citationknowledge-utilizationfactualityAnthologyDBLP
5
泛读LongACL 2025

Dynamic Chunking and Selection for Reading Comprehension of Ultra-Long Context in Large Language Models

Boheng Sheng,Jiacheng Yao,Meicong Zhang,Guoxiu He
long-contextchunkingselectionAnthologyDBLP
5
泛读FindingsACL 2025

Making RALM Robust to Irrelevant Contexts via Layer Knowledge Guided Attention

Weijie Shi,Hao Chen,Jiaming Li,Yao Zhao,Yazhong Zhang,Qijin Chen ... 1 author omitted ... ,Ruiyuan Zhang,Jia Zhu,Jiajie Xu,Xiaofang Zhou
ralmrobustnessattentionAnthologyDBLP
5
泛读LongACL 2025

Steering off Course: Reliability Challenges in Steering Language Models

Patrick Queiroz Da Silva,Hari Sethuraman,Dheeraj Rajagopal,Hannaneh Hajishirzi,Sachin Kumar
steering-vectorsreliabilityinterpretabilityDOIDBLP
5
泛读FindingsACL 2025

Can VLMs Actually See and Read? A Survey on Modality Collapse in Vision-Language Models

Mong Yuan Sim,Wei Emma Zhang,Xiang Dai,Biaoyan Fang
vlmmodality-collapsesurveyAnthologyDBLP
5
泛读LongACL 2025

Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation

Shivalika Singh,Angelika Romanou,Clémentine Fourrier,David Ifeoluwa Adelani,Jian Gang Ngui,Daniel Vila-Suero ... 13 authors omitted ... ,Enzo Ferrante,Marzieh Fadaee,Beyza Ermis,Sara Hooker
benchmarkmultilingualevaluationAnthologyDBLP
5
泛读FindingsACL 2025

COSMIC: Generalized Refusal Direction Identification in LLM Activations

Vincent Siu,Nicholas Crispino,Zihao Yu,Sam Pan,Zhun Wang,Yang Liu,Dawn Song,Chenguang Wang
refusalactivation-analysissafetyAnthologyDBLP
5
泛读Best PaperLongACL 2025

A Theory of Response Sampling in LLMs: Part Descriptive and Part Prescriptive

Sarath Sivaprasad,Pramod Kaushik,Sahar Abdelnabi,Mario Fritz
samplingdecodingtheoryAnthologyDBLP
5
泛读LongACL 2025

Do Language Models Have Semantics? On the Five Standard Positions

Anders Søgaard
semanticsphilosophylanguage-modelAnthologyDBLP
5
泛读LongACL 2025

Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG

Xin Sun,Jianan Xie,Zhongqi Chen,Qiang Liu,Shu Wu,Yuehe Chen,Bowen Song,Zilei Wang,Weiqiang Wang,Liang Wang
ragalignmenthallucinationAnthologyDBLP
5
泛读LongACL 2025

Steering into New Embedding Spaces: Analyzing Cross-Lingual Alignment Induced by Model Interventions in Multilingual Language Models

This paper asks whether intervening on a multilingual model pushes its internal representation space into a new cross-lingual alignment state, and whether that alignment is a stable improvement or a local distortion. Cross-lingual alignment has usually been explained via pretraining data or embedding geometry; how inference-time or parameter-level interventions reshape the embedding space has received little systematic study.

Anirudh Sundar,Sinead Williamson,Katherine Metcalf,Barry-John Theobald,Skyler Seto,Masha Fedzechkina
multilingualrepresentationsteeringAnthologyDBLP
5
泛读LongACL 2025

GRACE: A Granular Benchmark for Evaluating Model Calibration against Human Calibration

This paper addresses the coarseness of existing calibration evaluation, which checks whether model confidence matches accuracy but rarely makes a fine-grained comparison against how humans calibrate. As a result, a model may look fine on aggregate metrics like ECE while its confidence distribution diverges sharply from humans' across question types, uncertainty sources, and knowledge states.
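For reference, the aggregate ECE metric argued here to be too coarse can be computed in a few lines (standard equal-width binning; the bin count is a free choice):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Aggregate ECE: average |accuracy - confidence| over equal-width
    confidence bins, weighted by bin size. A single number like this
    can hide miscalibration within specific question categories."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += len(idx) / n * abs(acc - conf)
    return ece

# toy over-confident model: high stated confidence, mediocre accuracy
conf = [0.95, 0.9, 0.92, 0.55]
hit = [1, 0, 0, 1]
print(round(expected_calibration_error(conf, hit), 3))
```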

Yoo Yeon Sung,Eve Fleisig,Yu Hou,Ishan Upadhyay,Jordan Lee Boyd-Graber
calibrationevaluationuncertaintyAnthologyDBLP
5
泛读FindingsACL 2025

Mechanistic Interpretability of Emotion Inference in Large Language Models

This paper asks which internal mechanisms LLMs actually rely on when inferring emotions, rather than only whether the final classification is correct. Prior emotion-understanding research mostly stops at behavioral evaluation or probing, which tells you whether the model can judge but not through which circuits, attention patterns, or intermediate features it reaches the judgment.

Ala N. Tak,Amin Banayeeanzade,Anahita Bolourani,Mina Kian,Robin Jia,Jonathan Gratch
mechanistic-interpretabilityemotionrepresentationAnthologyDBLP
5
泛读FindingsACL 2025

Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

This paper addresses the fact that so-called in-context knowledge unlearning in LLMs is often just "pretending to forget": the model does not truly stop using the relevant knowledge. Prior work uses prompts, system messages, or contextual constraints to make the model withhold certain facts within the current conversation, but whether the model has really forgotten has lacked stronger counterfactual verification.

Shota Takashiro,Takeshi Kojima,Andrew Gambardella,Qi Cao,Yusuke Iwasawa,Yutaka Matsuo
unlearningin-context-learningknowledgeAnthologyDBLP
5
泛读FindingsACL 2025

UAQFact: Evaluating Factual Knowledge Utilization of LLMs on Unanswerable Questions

This paper asks whether LLMs facing unanswerable questions truly know they cannot answer, or wrongly invoke related facts and hallucinate. Factuality evaluation has mostly measured accuracy on answerable questions, but the harder real-world case is whether, when knowledge is insufficient, a premise is false, or the question has no answer, the model can use what it knows to judge the question unanswerable instead of forcing an answer.

Chuanyuan Tan,Wenbiao Shao,Hao Xiong,Tong Zhu,Zhenhua Liu,Kai Shi,Wenliang Chen
factualityevaluationknowledgeAnthologyDBLP
5
泛读LongACL 2025

CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models

Xiaqiang Tang,Jian Li,Keyu Hu,Nan Du,Xiaolong Li,Xi Zhang,Weigao Sun,Sihong Xie
faithfulnessreasoningevaluationAnthologyDBLP
5
泛读LongACL 2025

Top-nσ: Eliminating Noise in Logit Space for Robust Token Sampling of LLM

Chenxia Tang,Jianchun Liu,Hongli Xu,Liusheng Huang
samplingdecodinglogitsAnthologyDBLP
5
泛读FindingsACL 2025

EpiCoDe: Boosting Model Performance Beyond Training with Extrapolation and Contrastive Decoding

Mingxu Tao,Jie Hu,Mingchuan Yang,Yunhuai Liu,Dongyan Zhao,Yansong Feng
contrastive-decodingdecodingextrapolationAnthologyDBLP
5
泛读LongACL 2025

MoQAE: Mixed-Precision Quantization for Long-Context LLM Inference via Mixture of Quantization-Aware Experts

Wei Tao,Haocheng Lu,Xiaoyang Qu,Bin Zhang,Kai Lu,Jiguang Wan,Jianzong Wang
quantizationlong-contextmixed-precisionAnthologyDBLP
5
泛读FindingsACL 2025

Confidence Improves Self-Consistency in LLMs

Amir Taubenfeld,Tom Sheffer,Eran Ofek,Amir Feder,Ariel Goldstein,Zorik Gekhman,Gal Yona
self-consistencyconfidencereasoningAnthologyDBLP
5
泛读FindingsACL 2025

LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs

Omkar Thawakar,Dinura Dissanayake,Ketan Pravin More,Ritesh Thawkar,Ahmed Heakl,Noor Ahsan ... 5 authors omitted ... ,Ivan Laptev,Mubarak Shah,Fahad Shahbaz Khan,Salman H. Khan
multimodalvisual-reasoningchain-of-thoughtAnthologyDBLP
5
泛读FindingsACL 2025

Behavioral Analysis of Information Salience in Large Language Models

Jan Trienes,Jörg Schlötterer,Junyi Jessy Li,Christin Seifert
information-saliencebehavioral-analysisinterpretabilityAnthologyDBLP
5
泛读FindingsACL 2025

Redundancy, Isotropy, and Intrinsic Dimensionality of Prompt-based Text Embeddings

Hayato Tsukagoshi,Ryohei Sasano
isotropyembeddingsintrinsic-dimensionalityAnthologyDBLP
5
泛读LongACL 2025

Behavioural vs. Representational Systematicity in End-to-End Models: An Opinionated Survey

Ivan Vegner,Sydelle de Souza,Valentin Forch,Martha Lewis,Leonidas A. A. Doumas
systematicitycompositionalityrepresentationAnthologyDBLP
5
泛读FindingsACL 2025

Language Models Lack Temporal Generalization and Bigger is Not Better

Stella Verkijk,Piek Vossen,Pia Sommerauer
temporal-generalizationscalingevaluationAnthologyDBLP
5
泛读ShortACL 2025

Zero-Shot Text-to-Speech for Vietnamese

Thi Vu,Linh The Nguyen,Dat Quoc Nguyen
ttszero-shotspeech-lmDOIDBLP
5
泛读FindingsACL 2025

Who Taught You That? Tracing Teachers in Model Distillation

Somin Wadhwa,Chantal Shaib,Silvio Amir,Byron C. Wallace
distillationattributiontraining-dataDOIDBLP
5
泛读FindingsACL 2025

Unveiling Confirmation Bias in Chain-of-Thought Reasoning

Yue Wan,Xiaowei Jia,Xiang Lorraine Li
confirmation-biaschain-of-thoughtreasoningAnthologyDBLP
5
泛读FindingsACL 2025

Decoupling Reasoning and Knowledge Injection for In-Context Knowledge Editing

This work targets the core bottleneck of in-context knowledge editing: when new knowledge is injected temporarily via the context, "can the model reason" and "did it receive the new fact" get conflated, making edit results unstable. Many methods simply stuff facts into the prompt and test the answer; on failure it is hard to tell whether the injection failed or the reasoning chain itself broke, so the authors attempt to decouple the two.

Changyue Wang,Weihang Su,Qingyao Ai,Yujia Zhou,Yiqun Liu
knowledge-editingin-context-learningreasoningAnthologyDBLP
5
泛读FindingsACL 2025

Uncertainty Unveiled: Can Exposure to More In-context Examples Mitigate Uncertainty for Large Language Models?

The core question: does giving an LLM more in-context examples systematically reduce uncertainty? The authors are clearly not satisfied with the coarse conclusion that "more shots usually help", and instead examine under which conditions uncertainty genuinely drops and under which it is actually amplified. Few-shot gains are usually measured only in accuracy, with little analysis of confidence, distribution shift, or the effect of noisy demonstrations.

Yifei Wang,Yu Sheng,Linjing Li,Daniel Dajun Zeng
in-context-learninguncertaintyfew-shotAnthologyDBLP
5
泛读LongACL 2025

Towards Objective Fine-tuning: How LLMs' Prior Knowledge Causes Potential Poor Calibration?

This paper asks why LLM calibration can still degrade after fine-tuning, sometimes with confidence shifts opposite to the training objective; the authors attribute the cause to conflicts between the model's existing prior knowledge and the new supervision target. Prior work blamed insufficient training data or mismatched objectives, with too little concrete discussion of how pretrained priors interfere with subsequent objectives.

Ziming Wang,Zeyu Shi,Haoyi Zhou,Shiqi Gao,Qingyun Sun,Jianxin Li
fine-tuningcalibrationknowledgeAnthologyDBLP
5
泛读OutstandingLongACL 2025

Bridging the Language Gaps in Large Language Models with Inference-Time Cross-Lingual Intervention

The core question: when a multilingual LLM's abilities are unbalanced across languages, can inference-time cross-lingual intervention alone close the gap for low-resource or weaker languages, without retraining the model? The usual options are multilingual continued pretraining or translate-then-answer; the former is costly and the latter introduces translation errors and loses source-language context, making inference-time intervention a very practical direction.

Weixuan Wang,Minghao Wu,Barry Haddow,Alexandra Birch
multilingualinference-timeinterventionAnthologyDBLP
5
泛读LongACL 2025

Direct Prompt Optimization with Continuous Representations

The core question: can prompt optimization escape the inefficiency and instability of discrete token search by optimizing directly in a continuous representation space? Automatic prompt design has relied on discrete edits, reinforcement learning, or gradient approximations, which face large search spaces, poor transferability, and a tendency to get stuck on surface lexical tricks, so optimizing prompts directly with continuous representations is a natural direction.

Yangkun Wang,Zihan Wang,Jingbo Shang
prompt-optimizationcontinuous-promptalignmentAnthologyDBLP
5
泛读LongACL 2025

SConU: Selective Conformal Uncertainty in Large Language Models

This work argues that LLM uncertainty estimation needs not only coverage guarantees but also selective output, i.e. knowing when to answer and when to abstain; SConU appears to add a selective mechanism on top of conformal uncertainty. Conformal prediction offers distribution-free coverage, but applied naively to generative models the prediction sets are often too wide to support decisions, motivating a selective design better suited to LLMs.
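The coverage machinery this builds on can be sketched with split conformal prediction: calibrate a score threshold on held-out examples, then abstain when a new sample's nonconformity exceeds it. The scores and threshold rule below are the generic recipe, not the paper's specific design:

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal: the ceil((n+1)(1-alpha))-th smallest calibration
    nonconformity score, giving ~(1-alpha) coverage on exchangeable data."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def answer_or_abstain(score, threshold):
    # low nonconformity -> answer; otherwise selectively abstain
    return "answer" if score <= threshold else "abstain"

# toy nonconformity scores (e.g. 1 - sequence probability) on a calibration set
cal = [0.1, 0.2, 0.15, 0.3, 0.25, 0.05, 0.4, 0.35, 0.12, 0.22]
tau = conformal_threshold(cal, alpha=0.2)
print(tau, answer_or_abstain(0.18, tau), answer_or_abstain(0.9, tau))
```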

Zhiyuan Wang,Qingni Wang,Yue Zhang,Tianlong Chen,Xiaofeng Zhu,Xiaoshuang Shi,Kaidi Xu
uncertaintyconformalcalibrationAnthologyDBLP
5
泛读LongACL 2025

Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms

Mengru Wang,Ziwen Xu,Shengyu Mao,Shumin Deng,Zhaopeng Tu,Huajun Chen,Ningyu Zhang
steeringbehavior-controlalignmentAnthologyDBLP
5
泛读FindingsACL 2025

Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives

Zhihu Wang,Shiwan Zhao,Yu Wang,Heyuan Huang,Sitao Xie,Yubo Zhang,Jiaxin Shi,Zhixing Wang,Hongyan Li,Junchi Yan
capabilityknowledgeskillsAnthologyDBLP
5
泛读LongACL 2025

Bitnet.cpp: Efficient Edge Inference for Ternary LLMs

Jinheng Wang,Hansong Zhou,Ting Song,Shijie Cao,Yan Xia,Ting Cao,Jianyu Wei,Shuming Ma,Hongyu Wang,Furu Wei
bitnetquantizationedge-inferenceAnthologyDBLP
5
泛读LongACL 2025

Flexora: Flexible Low-Rank Adaptation for Large Language Models

Chenxing Wei,Yao Shu,Ying Tiffany He,Fei Yu
loralow-rank-adaptationfine-tuningAnthologyDBLP
5
泛读FindingsACL 2025

Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?

Zichen Wen,Yifeng Gao,Weijia Li,Conghui He,Linfeng Zhang
token-pruningmultimodal-llmefficiencyAnthologyDBLP
5
泛读FindingsACL 2025

On-Policy Self-Alignment with Fine-grained Knowledge Feedback for Hallucination Mitigation

Xueru Wen,Jie Lou,Xinyu Lu,Yuqiu Ji,Xinyan Guan,Yaojie Lu ... 1 author omitted ... ,Ben He,Xianpei Han,Debing Zhang,Le Sun
self-alignmenthallucinationon-policyAnthologyDBLP
5
Skim · Findings · ACL 2025

Edit Once, Update Everywhere: A Simple Framework for Cross-Lingual Knowledge Synchronization in LLMs

Yuchen Wu,Liang Ding,Li Shen,Dacheng Tao
knowledge-editing · cross-lingual · synchronization · Anthology · DBLP
5
Skim · Long · ACL 2025

Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models

Junjie Wu,Gefei Gu,Yanan Zheng,Dit-Yan Yeung,Arman Cohan
long-context · benchmark · referencing · Anthology · DBLP
5
Skim · Long · ACL 2025

Analyzing LLMs' Knowledge Boundary Cognition Across Languages Through the Lens of Internal Representations

Chenghao Xiao,Hou Pong Chan,Hao Zhang,Mahani Aljunied,Lidong Bing,Noura Al Moubayed,Yu Rong
multilingual · knowledge · representation · Anthology · DBLP
5
Skim · Long · ACL 2025

SCOP: Evaluating the Comprehension Process of Large Language Models from a Cognitive View

Yongjie Xiao,Hongru Liang,Peixin Qin,Yao Zhang,Wenqiang Lei
evaluation · comprehension · cognition · Anthology · DBLP
5
Skim · Findings · ACL 2025

SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution

Automated GitHub issue resolution (SWE-bench-style tasks) currently relies mostly on closed-source large models; open-source LLMs perform poorly on this kind of complex code reasoning. How can open-source LLMs be trained to resolve issues both effectively and efficiently?

Chengxing Xie,Bowen Li,Chang Gao,He Du,Wai Lam,Difan Zou,Kai Chen
The Chinese University of Hong Kong · coding · swe-bench · training-data · Anthology · DBLP
5
Skim · Findings · ACL 2025

Automated Fine-Grained Mixture-of-Experts Quantization

Quantizing MoE models is harder than quantizing dense models: activation distributions differ widely across experts, so a uniform quantization strategy causes severe precision loss for some experts. Existing methods either ignore inter-expert differences or only configure quantization at a coarse, per-expert granularity.
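As a toy illustration of why granularity matters, and not the paper's method (`quantize_per_expert` is a made-up helper), per-expert symmetric quantization gives each expert its own scale instead of sharing one:

```python
def quantize_per_expert(expert_weights, num_bits=8):
    """Symmetric quantization with one scale per expert: an expert whose
    weights are small in magnitude is not crushed by a scale chosen to
    fit the largest expert."""
    qmax = 2 ** (num_bits - 1) - 1
    out = []
    for w in expert_weights:                            # one weight list per expert
        scale = (max(abs(x) for x in w) / qmax) or 1.0  # avoid a zero scale
        q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in w]
        out.append((q, scale))
    return out

def dequantize(q, scale):
    return [x * scale for x in q]

# Two experts with very different weight ranges get distinct scales.
experts = [[0.5, -0.25], [100.0, -50.0]]
quantized = quantize_per_expert(experts)
print([scale for _, scale in quantized])
```

With a single shared int8 scale (100/127 here), the first expert's weights 0.5 and -0.25 would round to the integers 1 and 0; per-expert scales keep its relative error small.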

Zhanhao Xie,Yuexiao Ma,Xiawu Zheng,Fei Chao,Wanchen Sui,Yong Li,Shen Li,Rongrong Ji
moe · quantization · compression · Anthology · DBLP
5
Skim · Long · ACL 2025

Improving Model Factuality with Fine-grained Critique-based Evaluator

Evaluating and improving the factuality of LLM-generated content remains coarse-grained: existing methods either give only a holistic judgment, or their fine-grained assessments are not accurate enough. What is needed is a factuality evaluator that provides fine-grained, actionable feedback to guide model improvement.

Yiqing Xie,Wenxuan Zhou,Pradyot Prakash,Di Jin,Yuning Mao,Quintin Fettes ... 2 authors omitted ... ,Han Fang,Carolyn P. Rosé,Daniel Fried,Hejia Zhang
Amazon · factuality · evaluator · critique · Anthology · DBLP
5
Skim · Long · ACL 2025

Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model

LLM reasoning lacks structured planning: in CoT the model tends to generate greedily step by step, rather than first building a structured representation of the problem and then planning a solution path, as humans do. Existing planning methods either rely on external search or lack explicit modeling of problem structure.

Siheng Xiong,Ali Payani,Yuan Yang,Faramarz Fekri
reasoning · planning · world-model · Anthology · DBLP
5
Skim · Long · ACL 2025

φ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation

Inference-time sampling faces an exploration-exploitation dilemma: greedy decoding (exploitation) easily gets stuck in local optima, while purely random sampling (exploration) is inefficient. Existing methods (e.g., temperature scaling, top-k/top-p) lack any foresight into the quality of future tokens.
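For context, the foresight-free baselines mentioned above look roughly like this: a hedged sketch of standard temperature plus nucleus (top-p) sampling, where `sample_next_token` is an illustrative name, not the paper's φ-Decoding.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=random):
    """Baseline inference-time sampling: temperature trades off exploration
    vs. exploitation, nucleus truncation drops the unreliable tail. Neither
    looks ahead at the quality of future tokens."""
    # Temperature-scaled softmax (subtract the max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Nucleus truncation: smallest set of tokens covering top_p probability mass.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Sample from the renormalized kept set.
    total = sum(probs[i] for i in kept)
    r, acc = rng.random() * total, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

random.seed(0)
print(sample_next_token([2.0, 0.5, 1.0], temperature=0.7, top_p=0.9))
```

Every choice here depends only on the current step's distribution; the entry's point is that adaptive foresight instead estimates how promising each candidate's future continuation is before committing.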

Fangzhi Xu,Hang Yan,Chang Ma,Haiteng Zhao,Jun Liu,Qika Lin,Zhiyong Wu
decoding · sampling · inference-time · Anthology · DBLP
5
Skim · Long · ACL 2025

ReLearn: Unlearning via Learning for Large Language Models

Existing LLM unlearning methods typically "erase" target knowledge via gradient ascent or adversarial training, but these approaches easily damage the model's general capabilities. Can unlearning instead be achieved through "learning" rather than "forgetting"?

Haoming Xu,Ningyuan Zhao,Liming Yang,Sendong Zhao,Shumin Deng,Mengru Wang,Bryan Hooi,Nay Oo,Huajun Chen,Ningyu Zhang
unlearning · fine-tuning · forgetting · Anthology · DBLP
5
Skim · Findings · ACL 2025

SeqPO-SiMT: Sequential Policy Optimization for Simultaneous Machine Translation

Ting Xu,Zhichao Huang,Jiankai Sun,Shanbo Cheng,Wai Lam
policy-optimization · translation · rl · Anthology · DBLP
5
Skim · Long · ACL 2025

Memorizing is Not Enough: Deep Knowledge Injection Through Reasoning

Ruoxi Xu,Yunjie Ji,Boxi Cao,Yaojie Lu,Hongyu Lin,Xianpei Han,Ben He,Yingfei Sun,Xiangang Li,Le Sun
knowledge-injection · reasoning · fine-tuning · Anthology · DBLP
5
Skim · Long · ACL 2025

Defining and Evaluating Visual Language Models' Basic Spatial Abilities: A Perspective from Psychometrics

Wenrui Xu,Dalin Lyu,Weihang Wang,Jie Feng,Chen Gao,Yong Li
vlm · spatial-reasoning · evaluation · Anthology · DBLP
5
Skim · Long · ACL 2025

Knowledge Decoupling via Orthogonal Projection for Lifelong Editing of Large Language Models

Haoyu Xu,Pengxiang Lan,Enneng Yang,Guibing Guo,Jianzhe Zhao,Linying Jiang,Xingwei Wang
knowledge-editing · lifelong-learning · representation · Anthology · DBLP
5
Skim · Long · ACL 2025

Subtle Errors in Reasoning: Preference Learning via Error-injected Self-editing

Kaishuai Xu,Tiezheng Yu,Wenjun Hou,Yi Cheng,Chak Tou Leong,Liangyou Li,Xin Jiang,Lifeng Shang,Qun Liu,Wenjie Li
preference-learning · self-editing · reasoning · Anthology · DBLP
5
Skim · Findings · ACL 2025

MWPO: Enhancing LLMs Performance through Multi-Weight Preference Strength and Length Optimization

Shiyue Xu,Fu Zhang,Jingwei Cheng,Linfeng Zhou
preference-optimization · dpo · length-bias · Anthology · DBLP
5
Skim · Findings · ACL 2025

Reason from Future: Reverse Thought Chain Enhances LLM Reasoning

This work tackles the problem that in multi-step reasoning LLMs often go astray early and cannot recover later: standard forward CoT simply keeps expanding along the wrong path, with no mechanism for using the goal to constrain intermediate steps.

Yinlong Xu,Yanzhao Zheng,Shuoshuo Sun,Shuaihan Huang,Baohua Dong,Hangcheng Zhu,Ruohui Huang,Gang Yu,Hongxia Xu,Jian Wu
reasoning · chain-of-thought · reverse-reasoning · Anthology · DBLP
5
Skim · Long · ACL 2025

UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models

This work tackles the problem that factuality alignment often conflates "is the model uncertain" with "is the model fabricating", so the alignment signal either over-penalizes reasonable uncertain answers or lets high-confidence hallucinations through.

Boyang Xue,Fei Mi,Qi Zhu,Hongru Wang,Rui Wang,Sheng Wang,Erxin Yu,Xuming Hu,Kam-Fai Wong
alignment · factuality · uncertainty · Anthology · DBLP
5
Skim · Findings · ACL 2025

Improve Language Model and Brain Alignment via Associative Memory

This work tackles the problem that aligning LM representations with the brain usually stops at correlational fitting, lacking an interpretable mechanism that improves language modeling and neural alignment at the same time.

Congchi Yin,Yongpeng Zhang,Xuyun Wen,Piji Li
associative-memory · representation · brain-alignment · Anthology · DBLP
5
Skim · Long · ACL 2025

Disentangling Language and Culture for Evaluating Multilingual Large Language Models

This work tackles the problem that multilingual LLM evaluation often conflates "insufficient language ability" with "missing cultural background", leading to misattribution: a model that seems not to understand a language may actually lack cultural common sense, and vice versa.

Jiahao Ying,Wei Tang,Yiran Zhao,Yixin Cao,Yu Rong,Wenxuan Zhang
multilingual · culture · evaluation · Anthology · DBLP
5
Skim · Long · ACL 2025

RoToR: Towards More Reliable Responses for Order-Invariant Inputs

Soyoung Yoon,Dongha Ahn,Youngwon Lee,Minkyu Jung,HyungJoo Jang,Seung-won Hwang
order-invariance · robustness · reasoning · Anthology · DBLP
5
Skim · Long · ACL 2025

If Attention Serves as a Cognitive Model of Human Memory Retrieval, What is the Plausible Memory Representation?

Ryo Yoshida,Shinnosuke Isono,Kohei Kajikawa,Taiga Someya,Yushi Sugimoto,Yohei Oseki
attention · memory · representation · Anthology · DBLP
5
Skim · Long · ACL 2025

Representation Bending for Large Language Model Safety

Ashkan Yousefpour,Taeheon Kim,Ryan Sungmo Kwon,Seungbeen Lee,Wonje Jeung,Seungju Han,Alvin Wan,Harrison Ngan,Youngjae Yu,Jonghyun Choi
safety · representation · alignment · Anthology · DBLP
5
Skim · Findings · ACL 2025

Mitigate Position Bias in LLMs via Scaling a Single Hidden States Channel

Yijiong Yu,Huiqiang Jiang,Xufang Luo,Qianhui Wu,Chin-Yew Lin,Dongsheng Li,Yuqing Yang,Yongfeng Huang,Lili Qiu
position-bias · hidden-states · reasoning · Anthology · DBLP
5
Skim · Findings · ACL 2025

Progressive LoRA for Multimodal Continual Instruction Tuning

Yahan Yu,Duzhen Zhang,Yong Ren,Xuanle Zhao,Xiuyi Chen,Chenhui Chu
lora · continual-learning · multimodal · Anthology · DBLP
5
Skim · Long · ACL 2025

Chain-of-Reasoning: Towards Unified Mathematical Reasoning in Large Language Models via a Multi-Paradigm Perspective

Yiyao Yu,Yuxiang Zhang,Dongdong Zhang,Xiao Liang,Hengyuan Zhang,Xingxing Zhang ... 1 author omitted ... ,Hany Hassan Awadalla,Junjie Wang,Yujiu Yang,Furu Wei
reasoning · multi-paradigm · cot · Anthology · DBLP
5
Skim · Long · ACL 2025

Exploiting Contextual Knowledge in LLMs through V-usable Information based Layer Enhancement

Xiaowei Yuan,Zhao Yang,Ziyang Huang,Yequan Wang,Siqi Fan,Yiming Ju,Jun Zhao,Kang Liu
layer-enhancement · context · representation · Anthology · DBLP
5
Skim · Long · ACL 2025

Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training

Youliang Yuan,Wenxiang Jiao,Wenxuan Wang,Jen-tse Huang,Jiahao Xu,Tian Liang,Pinjia He,Zhaopeng Tu
safety · refusal · alignment · Anthology · DBLP
5
Skim · Findings · ACL 2025

LegoMT2: Selective Asynchronous Sharded Data Parallel Training for Massive Neural Machine Translation

Fei Yuan,Yinquan Lu,Lei Li,Jingjing Xu
distributed-training · data-parallel · sharding · Anthology · DBLP
5
Skim · Long · ACL 2025

Merge Hijacking: Backdoor Attacks to Model Merging of Large Language Models

Zenghui Yuan,Yangming Xu,Jiawen Shi,Pan Zhou,Lichao Sun
model-merging · backdoor · security · Anthology · DBLP
5
Skim · Long · ACL 2025

MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark

Xiang Yue,Tianyu Zheng,Yuansheng Ni,Yubo Wang,Kai Zhang,Shengbang Tong ... 3 authors omitted ... ,Huan Sun,Yu Su,Wenhu Chen,Graham Neubig
benchmark · multimodal-understanding · vlm · Anthology · DBLP
5
Skim · Long · ACL 2025

Towards Context-Robust LLMs: A Gated Representation Fine-tuning Approach

Shenglai Zeng,Pengfei He,Kai Guo,Tianqi Zheng,Hanqing Lu,Yue Xing,Hui Liu
context-robustness · fine-tuning · gated-representation · Anthology · DBLP
5
Skim · Long · ACL 2025

ACECODER: Acing Coder RL via Automated Test-Case Synthesis

Huaye Zeng,Dongfu Jiang,Haozhe Wang,Ping Nie,Xiaotong Chen,Wenhu Chen
code-generation · rl · test-case-synthesis · Anthology · DBLP
5
Skim · Long · ACL 2025

From Lists to Emojis: How Format Bias Affects Model Alignment

Xuanchang Zhang,Wei Xiong,Lichang Chen,Tianyi Zhou,Heng Huang,Tong Zhang
format-bias · alignment · sft · Anthology · DBLP
5
Skim · Long · ACL 2025

CodeDPO: Aligning Code Models with Self Generated and Verified Source Code

Kechi Zhang,Ge Li,Yihong Dong,Jingjing Xu,Jun Zhang,Jing Su,Yongfei Liu,Zhi Jin
code-generation · dpo · self-verification · Anthology · DBLP
5
Skim · Findings · ACL 2025

daDPO: Distribution-Aware DPO for Distilling Conversational Abilities

Zhengze Zhang,Shiqi Wang,Yiqun Shen,Simin Guo,Dahua Lin,Xiaoliang Wang,Cam-Tu Nguyen,Fei Tan
dpo · distillation · conversational · Anthology · DBLP
5
Skim · Long · ACL 2025

ICR Probe: Tracking Hidden State Dynamics for Reliable Hallucination Detection in LLMs

Zhenliang Zhang,Xinyu Hu,Huixuan Zhang,Junzhe Zhang,Xiaojun Wan
hallucination · hidden-states · probing · Anthology · DBLP
5
Skim · Findings · ACL 2025

LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-Context QA

Jiajie Zhang,Yushi Bai,Xin Lv,Wanjun Gu,Danqing Liu,Minhao Zou ... 1 author omitted ... ,Lei Hou,Yuxiao Dong,Ling Feng,Juanzi Li
long-context · citation · qa · Anthology · DBLP
5
Skim · Findings · ACL 2025

NegVQA: Can Vision Language Models Understand Negation?

Yuhui Zhang,Yuchang Su,Yiming Liu,Serena Yeung-Levy
vlm · benchmark · negation · Anthology · DBLP
5
Skim · Long · ACL 2025

Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge

Qiyuan Zhang,Yufei Wang,Yuxin Jiang,Liangyou Li,Chuhan Wu,Yasheng Wang ... 1 author omitted ... ,Lifeng Shang,Ruiming Tang,Fuyuan Lyu,Chen Ma
llm-as-a-judge · evaluation · comparative-ranking · Anthology · DBLP
5
Skim · Long · ACL 2025

Growing Through Experience: Scaling Episodic Grounding in Language Models

Chunhui Zhang,Sirui Wang,Zhongyu Ouyang,Xiangchi Yuan,Soroush Vosoughi
grounding · memory · experience · Anthology · DBLP
5
Skim · Long · ACL 2025

MEraser: An Effective Fingerprint Erasure Approach for Large Language Models

Jingxuan Zhang,Zhenhua Xu,Rui Hu,Wenpeng Xing,Xuhong Zhang,Meng Han
unlearning · model-fingerprint · safety · Anthology · DBLP
5
Skim · Long · ACL 2025

D.Va: Validate Your Demonstration First Before You Use It

Qi Zhang,Zhiqing Xiao,Ruixuan Xiao,Lirong Gao,Junbo Zhao
in-context-learning · demonstration-selection · data-quality · Anthology · DBLP
5
Skim · Findings · ACL 2025

Supervised Optimism Correction: Be Confident When LLMs Are Sure

Junjie Zhang,Rushuai Yang,Shunyu Liu,Ting-En Lin,Fei Huang,Yi Chen,Yongbin Li,Dacheng Tao
calibration · uncertainty · confidence · Anthology · DBLP
5
Skim · Findings · ACL 2025

ReasonerRank: Redefining Language Model Evaluation with Ground-Truth-Free Ranking Frameworks

Jiamu Zhang,Jiayi Yuan,Andrew Wen,Hoang Anh Duy Le,Yu-Neng Chuang,Soo-Hyun Choi,Rui Chen,Xia Hu
evaluation · ranking · llm-as-a-judge · Anthology · DBLP
5
Skim · Long · ACL 2025

Prompt-Guided Internal States for Hallucination Detection of Large Language Models

Fujie Zhang,Peiqi Yu,Biao Yi,Baolei Zhang,Tong Li,Zheli Liu
hallucination · internal-states · interpretability · Anthology · DBLP
5
Skim · Findings · ACL 2025

CoT-UQ: Improving Response-wise Uncertainty Quantification in LLMs with Chain-of-Thought

Boxuan Zhang,Ruqi Zhang
uncertainty · chain-of-thought · calibration · Anthology · DBLP
5
Skim · Long · ACL 2025

Sharper and Faster mean Better: Towards More Efficient Vision-Language Model for Hour-scale Long Video Understanding

Daoze Zhang,Yuze Zhao,Jintao Huang,Yingda Chen
vlm · long-video · efficiency · Anthology · DBLP
5
Skim · Long · ACL 2025

Improve Vision Language Model Chain-of-thought Reasoning

Ruohong Zhang,Bowen Zhang,Yanghao Li,Haotian Zhang,Zhiqing Sun,Zhe Gan,Yinfei Yang,Ruoming Pang,Yiming Yang
vlm · chain-of-thought · reasoning · DOI · DBLP
5
Skim · Long · ACL 2025

The Invisible Hand: Unveiling Provider Bias in Large Language Models for Code Generation

Xiaoyu Zhang,Juan Zhai,Shiqing Ma,Qingshuang Bao,Weipeng Jiang,Qian Wang,Chao Shen,Yang Liu
code-generation · bias · evaluation · Anthology · DBLP
5
Skim · Long · ACL 2025

CFBench: A Comprehensive Constraints-Following Benchmark for LLMs

Tao Zhang,Chenglin Zhu,Yanjun Shen,Wenjing Luo,Yan Zhang,Hao Liang ... 3 authors omitted ... ,Weipeng Chen,Bin Cui,Wentao Zhang,Zenan Zhou
benchmark · instruction-following · constraints · Anthology · DBLP
5
Skim · Long · ACL 2025

GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models

Tao Zhang,Ziqian Zeng,Yuxiang Xiao,Huiping Zhuang,Cen Chen,James R. Foulds,Shimei Pan
bias · alignment · dataset · Anthology · DBLP
5
Skim · Findings · ACL 2025

Pruning General Large Language Models into Customized Expert Models

Yiran Zhao,Guizhen Chen,Kenji Kawaguchi,Lidong Bing,Wenxuan Zhang
pruning · model-compression · expert-model · Anthology · DBLP
5
Skim · Long · ACL 2025

OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference

Xiangyu Zhao,Shengyuan Ding,Zicheng Zhang,Haian Huang,Maosong Cao,Jiaqi Wang ... 3 authors omitted ... ,Guangtao Zhai,Hua Yang,Haodong Duan,Kai Chen
mllm-alignment · human-preference · multimodal · Anthology · DBLP
5
Skim · Long · ACL 2025

MMLU-CF: A Contamination-free Multi-task Language Understanding Benchmark

Qihao Zhao,Yangyu Huang,Tengchao Lv,Lei Cui,Qinzheng Sun,Shaoguang Mao ... 1 author omitted ... ,Ying Xin,Qiufeng Yin,Scarlett Li,Furu Wei
benchmark · contamination · evaluation · Anthology · DBLP
5
Skim · Findings · ACL 2025

How Does Response Length Affect Long-Form Factuality

James Xu Zhao,Jimmy Z. J. Liu,Bryan Hooi,See-Kiong Ng
factuality · response-length · long-form-generation · Anthology · DBLP
5
Skim · Long · ACL 2025

IAM: Efficient Inference through Attention Mapping between Different-scale LLMs

Yi Zhao,Zuchao Li,Hai Zhao
attention-mapping · inference-efficiency · model-distillation · Anthology · DBLP
5
Skim · Long · ACL 2025

EvolveBench: A Comprehensive Benchmark for Assessing Temporal Awareness in LLMs on Evolving Knowledge

Zhiyuan Zhu,Yusheng Liao,Zhe Chen,Yuhao Wang,Yunfeng Guan,Yanfeng Wang,Yu Wang
benchmark · temporal-awareness · evaluation · DOI · DBLP
5
Skim · Findings · ACL 2025

Rationales Are Not Silver Bullets: Measuring the Impact of Rationales on Model Performance and Reliability

Chiwei Zhu,Benfeng Xu,An Yang,Junyang Lin,Quan Wang,Chang Zhou,Zhendong Mao
rationale · reliability · evaluation · Anthology · DBLP
5
Skim · Long · ACL 2025

Conformity in Large Language Models

Xiaochen Zhu,Caiqi Zhang,Tom Stafford,Nigel Collier,Andreas Vlachos
social-bias · behavior · evaluation · Anthology · DBLP
5
Skim · Long · ACL 2025

Self-Taught Agentic Long Context Understanding

Yufan Zhuang,Xiaodong Yu,Jialian Wu,Ximeng Sun,Ze Wang,Jiang Liu,Yusheng Su,Jingbo Shang,Zicheng Liu,Emad Barsoum
long-context · self-training · agent · Anthology · DBLP
5
Skim · Findings · ACL 2025

ComparisonQA: Evaluating Factuality Robustness of LLMs Through Knowledge Frequency Control and Uncertainty

Qing Zong,Zhaowei Wang,Tianshi Zheng,Xiyu Ren,Yangqiu Song
factuality · benchmark · uncertainty · Anthology · DBLP