📚Papers

EMNLP 2023

Conference on Empirical Methods in Natural Language Processing

472 / 2310 relevant papers
8
Close read · EMNLP 2023

GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints

Multi-query attention (MQA) substantially speeds up decoder inference but degrades generation quality, and training a separate MQA model wastes compute; prior approaches either sacrificed quality or bore the cost of duplicate training.

Joshua Ainslie,James Lee-Thorp,Michiel de Jong,Yury Zemlyanskiy,Federico Lebrón,Sumit Sanghai
Google Research · gqa, mqa, uptraining · DOI / arXiv / DBLP
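A minimal sketch of the checkpoint-conversion step (pure Python; the nested-list "matrices" and the function name are illustrative, not the paper's code): GQA reportedly builds each grouped K/V head by mean-pooling the original K/V projection heads in its group, then uptrains briefly to recover quality.

```python
def mean_pool_kv_heads(kv_heads, num_groups):
    """kv_heads: list of H per-head projection matrices (nested lists).
    Returns num_groups matrices, each the element-wise mean of its group."""
    h = len(kv_heads)
    assert h % num_groups == 0, "heads must divide evenly into groups"
    group_size = h // num_groups
    pooled = []
    for g in range(num_groups):
        group = kv_heads[g * group_size:(g + 1) * group_size]
        rows, cols = len(group[0]), len(group[0][0])
        mean = [[sum(m[i][j] for m in group) / group_size
                 for j in range(cols)] for i in range(rows)]
        pooled.append(mean)
    return pooled

# 4 heads -> 2 groups: each grouped head is the mean of 2 original heads.
heads = [[[1.0]], [[3.0]], [[5.0]], [[7.0]]]
print(mean_pool_kv_heads(heads, 2))  # [[[2.0]], [[6.0]]]
```

Setting `num_groups = 1` recovers MQA, and `num_groups = H` recovers the original multi-head layout.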
9
Close read · EMNLP 2023

Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model

Zeyu Liu,Tim Dettmers,Xi Lin,Veselin Stoyanov,Xian Li
moe, sparse-ffn, pretrain · DOI / DBLP
9
Close read · Findings, EMNLP 2023

RWKV: Reinventing RNNs for the Transformer Era

Bo Peng,Eric Alcaide,Quentin Anthony,Alon Albalak,Samuel Arcadinho,Stella Biderman ... 20 authors omitted ... ,Johan S. Wind,Stanislaw Wozniak,Zhenyuan Zhang,Qinghua Zhou
rwkv, rnn, architecture · DOI / DBLP
8
Close read · EMNLP 2023

Self-Influence Guided Data Reweighting for Language Model Pre-training

By default, pretraining gives every sample equal weight, ignoring differences in data quality and relevance; prior data-reweighting work targeted only supervised fine-tuning and downstream tasks, never large-scale unlabeled pretraining.

Megh Thakkar,Tolga Bolukbasi,Sriram Ganapathy,Shikhar Vashishth,Sarath Chandar,Partha Talukdar
Mila - Quebec AI Institute, Google Research, AWS AI Labs · data-quality, data-reweighting, pretraining · DOI / arXiv / DBLP
9
Close read · EMNLP 2023

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study

This paper asks a more upstream question: should retrieval be hooked in only at inference time, or brought into the training loop during autoregressive LM pretraining? Mainstream practice treats retrieval as an inference-time plugin, but if the model never learns to use retrieval during pretraining, bolting it on downstream often produces interface mismatches and unstable gains.

Boxin Wang,Wei Ping,Peng Xu,Lawrence McAfee,Zihan Liu,Mohammad Shoeybi ... 2 authors omitted ... ,Bo Li,Chaowei Xiao,Anima Anandkumar,Bryan Catanzaro
retrieval, pretrain, scaling · DOI / DBLP
9
Close read · Findings, EMNLP 2023

SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities

The goal is to give the LLM native support for the perceive-understand-generate loop over speech, rather than chaining ASR + LLM + TTS. The three-stage pipeline adds latency, loses semantic information during transcription, and cannot be trained end to end.

Dong Zhang,Shimin Li,Xin Zhang,Jun Zhan,Pengyu Wang,Yaqian Zhou,Xipeng Qiu
Fudan University · speech-lm, multimodal, tokenizer · DOI / DBLP
8
Close read · EMNLP 2023

Large Language Models Can Self-Improve

This paper asks whether improving an LLM's reasoning must depend on human-annotated supervision, or whether the model can generate high-quality solutions to unlabeled questions and then train on them. Reasoning fine-tuning has typically relied on human-written solutions or answers, which is costly and slow to scale; if a model can learn from its own high-confidence outputs, post-training can extend to large amounts of unlabeled data.

Jiaxin Huang,Shixiang Gu,Le Hou,Yuexin Wu,Xuezhi Wang,Hongkun Yu,Jiawei Han
Google Research · self-improve, reasoning, self-training · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

CoLT5: Faster Long-Range Transformers with Conditional Computation

Training and inference with long-document Transformers are expensive: beyond attention's quadratic complexity, the feed-forward and projection layers apply full computation to every token. Prior long-range Transformers optimized only attention complexity, leaving the feed-forward waste unaddressed.

Joshua Ainslie,Tao Lei,Michiel de Jong,Santiago Ontañón,Siddhartha Brahma,Yury Zemlyanskiy ... 2 authors omitted ... ,James Lee-Thorp,Yi Tay,Yun-Hsuan Sung,Sumit Sanghai
Google Research · long-context, conditional-computation, efficient-attention · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Influence Scores at Scale for Efficient Language Data Sampling

Existing influence scores have mostly been validated in computer vision; their applicability to NLP classification tasks and the pretrain-finetune pipeline has not been systematically tested, so it is unclear which influence score best suits NLP data sampling.

Nikhil Anand,Joshua Tan,Maria Minakova
Meta AI · data-selection, influence-functions, data-sampling · DOI / arXiv / DBLP
8
Close read · EMNLP 2023

A Cheaper and Better Diffusion Language Model with Soft-Masked Noise

The problem: diffusion language models are too expensive to train and sample on discrete tokens to genuinely compete with autoregressive LMs. Existing discrete diffusion LMs rely on fixed discrete noise processes such as random substitution or [MASK] corruption; these objectives fit text structure poorly, leading to many denoising steps, sparse training signal, and high generation cost.

Jiaao Chen,Aston Zhang,Mu Li,Alex Smola,Diyi Yang
diffusion-lm, masked-lm, non-ar · DOI / DBLP
8
Close read · Findings, EMNLP 2023

Approximating Two-Layer Feedforward Networks for Efficient Transformers

The core question: how to cut the compute and memory cost of Transformer FFNs without a clear performance loss, using a comparison that is fairer for language models. Much prior MoE work compares sparse and dense models under compute-equal conditions, but for LMs parameter count, capacity, and training budget are tightly coupled, so compute-equal comparisons tend to be over-optimistic.

Róbert Csordás,Kazuki Irie,Jürgen Schmidhuber
moe, architecture, efficient · DOI / arXiv / DBLP
8
Close read · Industry, EMNLP 2023

Multi-word Tokenization for Sequence Compression

Existing subword tokenizers compress sequence length only modestly, keeping LLM training and inference costs high; prior compression schemes either lost semantics or required architectural changes, raising migration cost.

Leonidas Gee,Leonardo Rigutini,Marco Ernandes,Andrea Zugarini
Google Research · tokenizer, sequence-compression, efficiency · DOI / arXiv / DBLP
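The idea of merging frequent multi-word units into single vocabulary tokens can be roughly sketched as follows (the function names and the greedy left-to-right merge policy are assumptions, not the paper's exact procedure):

```python
from collections import Counter

def top_multiword_tokens(corpus_tokens, k):
    """Pick the k most frequent adjacent word pairs as merged tokens."""
    pairs = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return [a + "_" + b for (a, b), _ in pairs.most_common(k)]

def tokenize_with_merges(words, merges):
    """Greedy left-to-right: emit a merged token whenever a known pair matches."""
    merged = set(merges)
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i] + "_" + words[i + 1] in merged:
            out.append(words[i] + "_" + words[i + 1])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

corpus = "new york is big and new york is busy".split()
merges = top_multiword_tokens(corpus, 1)
print(tokenize_with_merges(corpus, merges))
```

The point of the sketch: every merge fired shortens the sequence by one token, which is where the training/inference savings come from.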
8
Close read · EMNLP 2023

Adaptive Gating in Mixture-of-Experts based Language Models

The problem: gating in MoE language models is often static and brittle, producing unbalanced expert utilization and noisy routing, which wastes parameter capacity and hurts training stability. Existing work focuses on adding experts and load-balancing losses, with little attention to how the gate itself should adapt to the input and the training stage.

Jiamin Li,Qiang Su,Yitao Yang,Yimin Jiang,Cong Wang,Hong Xu
moe, routing, training-dynamics · DOI / DBLP
8
Close read · EMNLP 2023

XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models

Davis Liang,Hila Gonen,Yuning Mao,Rui Hou,Naman Goyal,Marjan Ghazvininejad,Luke Zettlemoyer,Madian Khabsa
tokenizer, multilingual, vocabulary · DOI / DBLP
8
Close read · Findings, EMNLP 2023

Emergent Inabilities? Inverse Scaling Over the Course of Pretraining

The question: does inverse scaling appear not only as models get larger, but also progressively during a single model's pretraining? Inverse scaling has usually been framed cross-sectionally ("bigger models do worse"); the authors recast it as a longitudinal training-dynamics analysis: while general LM loss keeps improving, do some specific capabilities degrade?

James A. Michaelov,Ben Bergen
University of California, San Diego · training-dynamics, scaling-law, inverse-scaling · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Better Quality Pre-training Data and T5 Models for African Languages

Pretraining data for low-resource African languages is of poor quality: the African-language portions of multilingual corpora such as mC4 contain heavy noise and errors, leaving multilingual models with very low performance on these languages.

Akintunde Oladipo,Mofetoluwa Adeyemi,Orevaoghene Ahia,Abraham Toluwase Owodunni,Odunayo Ogundepo,David Ifeoluwa Adelani,Jimmy Lin
Hugging Face, African NLP Collective · data-quality, multilingual, pretraining-data · DOI / DBLP
8
Close read · Findings, EMNLP 2023

INGENIOUS: Using Informative Data Subsets for Efficient Pre-Training of Language Models

This work addresses the problem that more pretraining data is not always better, while high-quality filtering is often expensive. Large-scale LM pretraining has mostly followed "grab as much data as possible, then filter lightly," because hand-defining what counts as informative is hard; this spends much of the training budget on samples with low marginal value.

H. S. V. N. S. Kowndinya Renduchintala,Krishnateja Killamsetty,Sumit Bhatia,Milan Aggarwal,Ganesh Ramakrishnan,Rishabh K. Iyer,Balaji Krishnamurthy
data-selection, efficient-pretrain, data-quality · DOI / DBLP
8
Close read · Findings, EMNLP 2023

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark

This work tackles the systematic inflation of LLM evaluation by data contamination, stressing that contamination cannot be declared once globally: it must be measured per benchmark. Many papers merely note that "the training data may contain the test set" without quantifying benchmark-level contamination rates, which makes scores across leaderboards incomparable.

Oscar Sainz,Jon Ander Campos,Iker García-Ferrero,Julen Etxaniz,Oier Lopez de Lacalle,Eneko Agirre
data-contamination, evaluation, benchmark · DOI / DBLP
8
Close read · EMNLP 2023

CodeFusion: A Pre-trained Diffusion Model for Code Generation

Autoregressive code models cannot go back and revise tokens they have already emitted, a hard limitation for code that must be globally consistent (variable names, bracket matching, formatting rules). The authors use diffusion's iterative denoising to support generation that repeatedly rewrites the whole program.

Mukul Singh,José Cambronero,Sumit Gulwani,Vu Le,Carina Negreanu,Gust Verbruggen
Microsoft · diffusion-lm, code-generation, non-ar-lm · DOI / arXiv / DBLP
8
Close read · Findings, EMNLP 2023

Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?

Prior scaling-law studies assumed all Transformer variants scale alike and never systematically compared how architectural inductive biases affect scaling, so the best architecture at a given compute scale was unknown.

Yi Tay,Mostafa Dehghani,Samira Abnar,Hyung Won Chung,William Fedus,Jinfeng Rao,Sharan Narang,Vinh Q. Tran,Dani Yogatama,Donald Metzler
Google Research · scaling-law, architecture, inductive-bias · DOI / arXiv / DBLP
8
Close read · Findings, EMNLP 2023

Learn Your Tokens: Word-Pooled Tokenization for Language Modeling

Avijit Thawani,Saurabh Ghanekar,Xiaoyuan Zhu,Jay Pujara
tokenizer, language-modeling, subword · DOI / DBLP
8
Close read · EMNLP 2023

Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning

This paper sets out to explain how in-context learning actually works: not to describe the phenomenon, but to locate how information flows across layers. Prior discussion of ICL mechanisms often stops at analogies such as "implicit gradient updates" or "pattern matching," lacking an actionable explanation that can guide prompt design and compression.

Lean Wang,Lei Li,Damai Dai,Deli Chen,Hao Zhou,Fandong Meng,Jie Zhou,Xu Sun
icl, attention-analysis, information-flow · DOI / arXiv / DBLP
8
Close read · Findings, EMNLP 2023

Pretraining Without Attention

Junxiong Wang,Jing Nathan Yan,Albert Gu,Alexander M. Rush
architecture, ssm, attention-free · DOI / DBLP
8
Close read · EMNLP 2023

Symbol tuning improves in-context learning in language models

Jerry W. Wei,Le Hou,Andrew K. Lampinen,Xiangning Chen,Da Huang,Yi Tay ... 1 author omitted ... ,Yifeng Lu,Denny Zhou,Tengyu Ma,Quoc V. Le
icl, symbol-tuning, instruction-tuning · DOI / DBLP
8
Close read · EMNLP 2023

Inverse Scaling Can Become U-Shaped

Jason Wei,Najoung Kim,Yi Tay,Quoc V. Le
scaling-law, inverse-scaling, emergence · DOI / DBLP
7
Close read · Findings, EMNLP 2023

Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation

Autoregressive decoding is slow, and earlier draft-then-verify schemes reached only 1.4-2x speedups, short of what industrial deployment needs.

Heming Xia,Tao Ge,Peiyi Wang,Si-Qing Chen,Furu Wei,Zhifang Sui
Microsoft Research Asia · speculative-decoding, inference-acceleration, autoregressive · DOI / arXiv / DBLP
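The draft-then-verify paradigm the summary refers to can be sketched with toy deterministic "models" (all names are illustrative; the paper's actual method differs in how drafts are produced and verified):

```python
def speculative_decode(target_next, draft_next, prompt, k, steps):
    """Cheap draft model proposes k tokens per step; the target model checks
    them in order, keeping the verified prefix plus one guaranteed token."""
    seq = list(prompt)
    for _ in range(steps):
        draft = []
        for _ in range(k):                      # draft k greedy proposals
            draft.append(draft_next(seq + draft))
        accepted = []
        for tok in draft:                       # target verifies position by position
            if target_next(seq + accepted) == tok:
                accepted.append(tok)
            else:
                break
        seq += accepted + [target_next(seq + accepted)]
    return seq

# Toy models over integers: the target counts up; the draft diverges after 5.
target = lambda s: s[-1] + 1
draft = lambda s: 99 if s[-1] == 5 else s[-1] + 1
print(speculative_decode(target, draft, [0], k=3, steps=2))  # [0, 1, 2, 3, 4, 5, 6]
```

When the draft agrees with the target, one target pass yields k + 1 tokens; when it diverges, the output still matches what the target alone would have produced.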
7
Skim · Findings, EMNLP 2023

Towards Being Parameter-Efficient: A Stratified Sparsely Activated Transformer with Dynamic Capacity

Existing sparse MoE models give every expert the same capacity; parameter efficiency drops as expert count grows and gains diminish. Prior designs never adapted to the differing computational demands of different tokens and tasks.

Haoran Xu,Maha Elbayad,Kenton Murray,Jean Maillard,Vedanuj Goswami
Meta AI, Johns Hopkins University · moe, sparse-activation, routing · DOI / arXiv / DBLP
8
Close read · Findings, EMNLP 2023

Dynamic Stashing Quantization for Efficient Transformer Training

LLM training is memory-bound; existing quantized-training methods focus on reducing compute but do little for memory traffic. The question is how to design a quantization strategy that primarily cuts memory operations while staying compute-efficient.

Guo Yang,Daniel Lo,Robert Mullins,Yiren Zhao
University of Cambridge · quantization, training-efficiency, memory-optimization · DOI / arXiv / DBLP
8
Close read · EMNLP 2023

Data Similarity is Not Enough to Explain Language Model Performance

It challenges the common assumption that the more similar pretraining data is to a downstream task, the better the model performs, showing that data similarity alone can neither explain nor predict downstream LM performance.

Gregory Yauney,Emily Reif,David Mimno
Cornell University · data-quality, pretraining-data, scaling-law · DOI / arXiv / DBLP
8
Close read · EMNLP 2023

Characterizing Mechanisms for Factual Recall in Language Models

This paper studies which internal mechanisms a language model relies on for factual recall, not just whether it answers correctly. This matters because factual recall is often discussed as a black-box parametric-memory phenomenon; without knowing which representations or circuits do the work, editing, unlearning, and freshness control are hard to approach.

Qinan Yu,Jack Merullo,Ellie Pavlick
factual-recall, interpretability, knowledge · DOI / DBLP
8
Close read · EMNLP 2023

Pre-training Language Models for Comparative Reasoning

The problem: language models are rarely pretrained explicitly for comparative reasoning, even though that ability pervades ranking, comparison, difference judgment, and relation inference. Such abilities were usually expected to emerge from generic language modeling or be patched in downstream, which can mean poor sample efficiency and unstable generalization.

Mengxia Yu,Zhihan Zhang,Wenhao Yu,Meng Jiang
pretraining, reasoning, objective · DOI / DBLP
8
Close read · Findings, EMNLP 2023

Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer

Much of full attention's O(L²) cost on long sequences is wasted: most layers do not need that long a view. But earlier "sparsify half the layers" schemes replaced layers uniformly, ignoring functional differences between layers.

Qingru Zhang,Dhananjay Ram,Cole Hawkins,Sheng Zha,Tuo Zhao
Amazon AWS AI, Georgia Institute of Technology · long-context, attention, efficient-transformer · DOI / DBLP
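The design space here — full attention at a few layers, windowed attention elsewhere — can be illustrated with explicit masks (a hedged sketch; the layer schedule and window size below are invented, not the paper's):

```python
def layer_masks(num_layers, seq_len, full_layers, window):
    """Causal attention masks: 'full' at selected layers, sliding-window elsewhere.
    mask[i][j] == 1 means position i may attend to position j (j <= i)."""
    masks = []
    for layer in range(num_layers):
        full = layer in full_layers
        mask = [[1 if j <= i and (full or i - j < window) else 0
                 for j in range(seq_len)] for i in range(seq_len)]
        masks.append(mask)
    return masks

masks = layer_masks(num_layers=4, seq_len=6, full_layers={0, 3}, window=2)
# Layer 1 is windowed: position 5 sees only positions 4 and 5.
print(masks[1][5])  # [0, 0, 0, 0, 1, 1]
# Layer 0 is full causal: position 5 sees everything up to itself.
print(masks[0][5])  # [1, 1, 1, 1, 1, 1]
```

The choice of which layers get `full_layers` membership is exactly the per-layer allocation question the paper studies.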
7
Skim · EMNLP 2023

Rigorously Assessing Natural Language Explanations of Neurons

It addresses the difficulty of evaluating the use of large models (e.g., GPT-4) to auto-generate natural-language explanations of neuron function, arguing that there is no rigorous way yet to verify whether those explanations are faithful to the neuron's real behavior.

Jing Huang,Atticus Geiger,Karel D'Oosterlinck,Zhengxuan Wu,Christopher Potts
Stanford University · interpretability, neurons, explanation · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Emergent Linear Representations in World Models of Self-Supervised Sequence Models

It corrects the earlier conclusion that Othello-GPT's internal world model is "nonlinear," probing the true geometry of the sequence model's underlying state representations.

Neel Nanda,Andrew Lee,Martin Wattenberg
DeepMind, Harvard University · world-models, linear-representation, probing · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Causal Abstraction for Chain-of-Thought Reasoning in Arithmetic Word Problems

The core question: does CoT improve arithmetic accuracy because the model actually computes along the intermediate reasoning, or because CoT is merely a useful surface format? Much prior work only observed that answers are more accurate with CoT, never checking whether the intermediate reasoning is itself correct or whether those intermediate tokens causally affect the final answer, so the question stayed hidden behind the empirical effect.

Juanhe (TJ) Tan
cot, causal-abstraction, reasoning · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Multilingual Lottery Tickets to Pretrain Language Models

This paper addresses the multilinguality curse: in a capacity-limited mPLM, languages interfere with each other, and low-resource languages get crowded out by high-resource ones. The usual fix is to scale up capacity, which raises training and inference costs; the authors instead aim to reduce negative cross-lingual transfer without permanently paying for a larger model.

Jaeseong Lee,Seung-won Hwang
multilingual, lottery-ticket, pretrain-efficiency · DOI / DBLP
6
Skim · EMNLP 2023

Weakly-Supervised Learning of Visual Relations in Multimodal Pretraining

Existing multimodal pretraining relies only on object-detection-level supervision and lacks fine-grained alignment of relations between entities; prior options either required fully annotated visual-relation data at prohibitive cost or were fully unsupervised and failed to learn fine-grained associations.

Emanuele Bugliarello,Aida Nematzadeh,Lisa Anne Hendricks
DeepMind, UC Berkeley · multimodal-pretraining, visual-relations, weak-supervision · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4

The training data of closed-source LLMs is opaque: one cannot detect whether they memorized copyrighted content or prevent downstream test sets from being contaminated, and there was no low-cost way to infer a closed model's training-data composition from API calls alone.

Kent K. Chang,Mackenzie Cramer,Sandeep Soni,David Bamman
UC Berkeley · memorization, data-quality, membership-inference · DOI / arXiv / DBLP
7
Skim · Findings, EMNLP 2023

The Locality and Symmetry of Positional Encodings

Different positional encodings (e.g., RoPE, ALiBi) behave very differently in long-context extrapolation, but there has been no unified mathematical framework explaining the underlying mechanism.

Lihu Chen,Gaël Varoquaux,Fabian M. Suchanek
positional-encoding, architecture · DOI / DBLP
7
Skim · EMNLP 2023

Unlearn What You Want to Forget: Efficient Unlearning for LLMs

How to efficiently erase specific knowledge (toxic content, private data, copyrighted text) from an LLM without repeating expensive pretraining and without triggering catastrophic forgetting.

Jiaao Chen,Diyi Yang
unlearning, llm · DOI / DBLP
7
Skim · EMNLP 2023

The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining

The core claim: the distributional hypothesis only partially explains the benefits of masked language model pretraining, and in particular cannot explain its generalization. Many analyses assumed MLM's advantage comes mainly from words appearing in similar contexts, yielding learned semantic proximity; the authors test whether that explanation actually suffices.

Ting-Rui Chiang,Dani Yogatama
mlm, pretraining-objective, analysis · DOI / arXiv / DBLP
7
Skim · Findings, EMNLP 2023

On the Impact of Cross-Domain Data on German Language Models

For a non-English language like German, should pretraining data prioritize quality or cross-domain diversity? Previous German models trained either on CommonCrawl or on a single vertical domain, with no systematic comparison.

Amin Dada,Aokun Chen,Cheng Peng,Kaleb E. Smith,Ahmad Idrissi-Yaghir,Constantin Seibold ... 4 authors omitted ... ,Jan Egger,Jiang Bian,Jens Kleesiek,Yonghui Wu
data-mixture, multilingual, data-quality · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations

The gap between open-source chat models and ChatGPT is largely a gap in instruction data. The authors build a large-scale instruction-dialogue dataset that needs no human-written queries and is broad-coverage, multi-turn, and quality-assured.

Ning Ding,Yulin Chen,Bokai Xu,Yujia Qin,Shengding Hu,Zhiyuan Liu,Maosong Sun,Bowen Zhou
Tsinghua University (THUNLP), OpenBMB · sft, instruction-data, data-quality · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

MoE routers get weak signal early in training and easily collapse onto a few experts, while the router's own parameter growth eats into efficiency. HyperRouter targets this tension between router training instability and router parameter cost.

Truong Do,Le Khiem,Quang Pham,TrungTin Nguyen,Thanh-Nam Doan,Binh Nguyen,Chenghao Liu,Savitha Ramasamy,Xiaoli Li,Steven C. H. Hoi
moe, sparse-routing, training-efficiency · DOI / DBLP
6
Skim · EMNLP 2023

Dissecting Recall of Factual Associations in Auto-Regressive Language Models

Prior work established only that factual knowledge is stored in Transformer parameters, not how the model internally recalls these factual associations at inference time; earlier intervention experiments located storage but never traced the full information flow.

Mor Geva,Jasmijn Bastings,Katja Filippova,Amir Globerson
Google DeepMind, Tel Aviv University · interpretability, factual-knowledge, information-flow · DOI / arXiv / DBLP
6
Skim · Findings, EMNLP 2023

Contrastive Deterministic Autoencoders For Language Modeling

Text VAEs are prone to posterior collapse, degrading representation quality. Deterministic autoencoders from the image domain avoid posterior collapse but do not transfer directly to text, and no deterministic-autoencoder language-modeling scheme adapted to text existed.

Amur Ghose,Pascal Poupart
University of Waterloo · language-modeling, autoencoder, non-ar-lm · DOI / DBLP
7
Skim · Findings, EMNLP 2023

DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models

Text diffusion models that model discrete text in a continuous space are slow to train and slow to sample. DiffuSeq-v2 aims to substantially accelerate training convergence and inference while preserving generation quality.

Shansan Gong,Mukai Li,Jiangtao Feng,Zhiyong Wu,Lingpeng Kong
The University of Hong Kong, ByteDance · diffusion-lm, seq2seq, discrete-token · DOI / arXiv / DBLP
7
Close read · EMNLP 2023

trlX: A Framework for Large Scale Reinforcement Learning from Human Feedback

Alexander Havrilla,Maksym Zhuravinskyi,Duy Phung,Aman Tiwari,Jonathan Tow,Stella Biderman,Quentin Anthony,Louis Castricato
rlhf, reinforcement-learning, training-framework · DOI / DBLP
7
Skim · EMNLP 2023

Merging Experts into One: Improving Computational Efficiency of Mixture of Experts

Shwai He,Run-Ze Fan,Liang Ding,Li Shen,Tianyi Zhou,Dacheng Tao
moe, model-compression, efficiency · DOI / DBLP
7
Close read · Findings, EMNLP 2023

In-Context Learning Creates Task Vectors

Roee Hendel,Mor Geva,Amir Globerson
icl, task-vector, representation · DOI / DBLP
7
Skim · EMNLP 2023

Effects of sub-word segmentation on performance of transformer language models

Jue Hou,Anisia Katinskaia,Anh-Duc Vu,Roman Yangarber
tokenizer, subword, lm-training · DOI / DBLP
7
Skim · EMNLP 2023

Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models

Yifan Hou,Jiaoda Li,Yu Fei,Alessandro Stolfo,Wangchunshu Zhou,Guangtao Zeng,Antoine Bosselut,Mrinmaya Sachan
interpretability, reasoning, circuits · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Long-Range Language Modeling with Selective Cache

The problem: long-range language modeling needs historical context, but caching representations of every past token is too expensive, while naive truncation drops key long-range information. Common practice trades off among fixed windows, segment recurrence, and memory tokens, none of which actually selects which past content is worth remembering.

Xinting Huang,Nora Hollenstein
long-context, cache, lm · DOI / DBLP
7
Skim · EMNLP 2023

Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions

The conclusion: the computational power of linear Transformers under finite-precision, real-time assumptions can be rigorously characterized, and both their limits and extension paths are clearer than engineering folklore suggests. Linear attention is usually treated as "faster but coarser attention," rarely analyzed from the standpoint of formal languages and state capacity to determine what it can and cannot compute.

Kazuki Irie,Róbert Csordás,Jürgen Schmidhuber
linear-attention, fast-weight, expressivity · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

A Frustratingly Easy Post-Training Quantization Scheme for LLMs

Yongkweon Jeon,Chungman Lee,Kyungphil Park,Ho-Young Kim
quantization, ptq, llm · DOI / DBLP
5
Skim · EMNLP 2023

LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models

In ICL, CoT, and similar settings, prompts can reach tens of thousands of tokens, making LLM inference slow and costly; earlier prompt-compression methods either lost too much semantics or compressed too little, and could not adapt to distribution differences across models.

Huiqiang Jiang,Qianhui Wu,Chin-Yew Lin,Yuqing Yang,Lili Qiu
Microsoft Research Asia · prompt-compression, inference · DOI / arXiv / DBLP
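A common core idea in this line of work — score tokens with a small LM and drop the least informative — can be sketched as follows (the probability stub and the keep-ratio policy are assumptions; LLMLingua's actual coarse-to-fine algorithm is more involved):

```python
import math

def prune_low_information(tokens, token_prob, keep_ratio):
    """Keep the most 'surprising' tokens; self-information = -log p(token).
    token_prob is a callable mapping a token to its LM probability."""
    info = [-math.log(token_prob(t)) for t in tokens]
    k = max(1, int(len(tokens) * keep_ratio))
    cutoff = sorted(info, reverse=True)[k - 1]
    kept = [t for t, s in zip(tokens, info) if s >= cutoff]
    return kept[:k]  # ties could overshoot k; truncate, preserving order

# Stub "small LM": common function words are high-probability (low information).
probs = {"the": 0.9, "of": 0.8, "report": 0.1, "quarterly": 0.05, "revenue": 0.05}
tokens = ["the", "quarterly", "report", "of", "revenue"]
print(prune_low_information(tokens, probs.get, keep_ratio=0.6))
```

The content words survive while the function words are dropped, shortening the prompt while keeping most of its information.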
5
Skim · EMNLP 2023

Active Retrieval Augmented Generation

Existing RAG methods retrieve only once before generation; when generating long text, they cannot fetch dynamically updated relevant information and are prone to hallucination. Prior schemes ignored the need for retrieval during generation.

Zhengbao Jiang,Frank F. Xu,Luyu Gao,Zhiqing Sun,Qian Liu,Jane Dwivedi-Yu,Yiming Yang,Jamie Callan,Graham Neubig
Carnegie Mellon University, Google Brain · rag, retrieval, hallucination · DOI / arXiv / DBLP
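The generate-then-retrieve-on-uncertainty loop can be sketched like this (the generator and retriever stubs are invented; the paper's actual confidence criteria and query formation differ):

```python
def active_rag(generate, retrieve, query, num_sentences, threshold):
    """Draft each sentence; if its least-confident token falls below the
    threshold, retrieve with the draft as query and regenerate."""
    context, output = "", []
    for _ in range(num_sentences):
        sentence, min_conf = generate(query, context)
        if min_conf < threshold:              # uncertain draft -> fetch evidence
            context += " " + retrieve(sentence)
            sentence, _ = generate(query, context)
        output.append(sentence)
    return " ".join(output)

# Stub generator/retriever for illustration only.
def generate(query, context):
    if "1930" in context:
        return "It was first held in 1930.", 0.9
    return "It was first held in <year>.", 0.3

def retrieve(draft_sentence):
    return "The first World Cup was held in 1930."

print(active_rag(generate, retrieve, "When was the first World Cup?", 1, 0.5))
```

The key contrast with retrieve-once RAG: retrieval is triggered mid-generation, exactly when the draft shows the model is unsure.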
7
Skim · EMNLP 2023

Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing

Transformers apply the same amount of computation (all layers) to every token at inference, creating severe redundancy on easy tokens.

Yeachan Kim,Junho Kim,Jun-Hyung Park,Mingyu Lee,SangKeun Lee
dynamic-routing, efficient-inference, transformer-architecture · DOI / DBLP
7
Skim · EMNLP 2023

Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks

Instruction-tuning (IT) datasets are huge, so full training is costly, while randomly sampling tasks yields suboptimal cross-task generalization.

Po-Nien Kung,Fan Yin,Di Wu,Kai-Wei Chang,Nanyun Peng
UCLA · instruction-tuning, data-selection, generalization · DOI / arXiv / DBLP
7
Skim · Demo, EMNLP 2023

Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback

The goal: extend RLHF-style instruction alignment from English to multilingual open-source LLMs, rather than staying with "do English first and hope other languages transfer." Existing open instruction models mostly cover English and a few high-resource languages, leaving many languages without usable assistant models and without evidence on whether multilingual alignment transfers reliably.

Viet Dac Lai,Chien Van Nguyen,Nghia Trung Ngo,Thuat Nguyen,Franck Dernoncourt,Ryan A. Rossi,Thien Huu Nguyen
rlhf, multilingual, instruction-tuning · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization

The paper asks how to cut LLM compute and deployment costs through weight and activation quantization while preserving as much accuracy as possible. The problem is old but sharper for LLMs, where parameter counts, KV caches, and activation bandwidth all drive inference cost up.

Janghwan Lee,Minsoo Kim,Seungcheol Baek,Seok Joong Hwang,Wonyong Sung,Jungwook Choi
quantization, efficient-inference, ptq · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Ensemble-Instruct: Instruction Tuning Data Generation with a Heterogeneous Mixture of LMs

The problem: instruction-tuning data generation over-relies on a single teacher LM, making style, coverage, and error modes monotonous. Single-model synthesis is cheap, but it copies one teacher's blind spots verbatim into the whole instruction dataset.

Young-Suk Lee,Md. Arafat Sultan,Yousef El-Kurdi,Tahira Naseem,Asim Munawar,Radu Florian,Salim Roukos,Ramón Fernandez Astudillo
instruction-tuning, synthetic-data, ensemble · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Tuna: Instruction Tuning using Feedback from Large Language Models

The problem: high-quality instruction-tuning data is expensive and hard to scale, and human feedback is slow to collect, so models with strong pretraining often fail to convert it into reliable dialogue and instruction-following ability. The authors use feedback produced by large models themselves to replace or amplify human supervision, reducing dependence on human preference annotation.

Haoran Li,Yiran Liu,Xingxing Zhang,Wei Lu,Furu Wei
instruction-tuning, synthetic-data, feedback · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Rethinking the Construction of Effective Metrics for Understanding the Mechanisms of Pretrained Language Models

You Li,Jinhui Yin,Yuming Lin
interpretability, metrics, pretrained-models · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Enhancing Scalability of Pre-trained Language Models via Efficient Parameter Sharing

Peiyu Liu,Ze-Feng Gao,Yushuo Chen,Xin Zhao,Ji-Rong Wen
parameter-sharing, efficient, architecture · DOI / DBLP
7
Skim · EMNLP 2023

G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment

Yang Liu,Dan Iter,Yichong Xu,Shuohang Wang,Ruochen Xu,Chenguang Zhu
evaluation, gpt-4, nlg-eval · DOI / DBLP
7
Skim · EMNLP 2023

LLM-FP4: 4-Bit Floating-Point Quantized Transformers

Shih-Yang Liu,Zechun Liu,Xijie Huang,Pingcheng Dong,Kwang-Ting Cheng
quantization, fp4, inference · DOI / DBLP
7
Skim · Demo, EMNLP 2023

CoLLiE: Collaborative Training of Large Language Models in an Efficient Way

Kai Lv,Shuo Zhang,Tianle Gu,Shuhao Xing,Jiawei Hong,Keyu Chen ... 4 authors omitted ... ,Yu Sun,Qipeng Guo,Hang Yan,Xipeng Qiu
distributed-training, efficient-training, llm-infrastructure · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training

It is unclear how different kinds of linguistic information (syntax, semantics, reasoning) emerge, shift, and interact during LM pretraining; earlier probing methods could only compare task performance, not differences between representational subspaces.

Max Müller-Eberstein,Rob van der Goot,Barbara Plank,Ivan Titov
University of Amsterdam, LMU Munich, DeepMind · training-dynamics, interpretability, emergence · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Pushdown Layers: Encoding Recursive Structure in Transformer Language Models

Transformer self-attention has no explicit mechanism for tracking recursive state, so it models the recursive structure of human language poorly, capturing long-tail recursive constructions badly and generalizing syntactically with low sample efficiency. Most prior remedies depended on external syntactic annotation; none embedded unsupervised recursion tracking natively in Transformer layers.

Shikhar Murty,Pratyusha Sharma,Jacob Andreas,Christopher D. Manning
Stanford University · architecture, self-attention, recursive-structure · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models

LLM reasoning suffers diverse failures: hallucination, wrong intermediate steps, and arithmetic errors. Existing single-source feedback methods cannot cover these varied error types, limiting how much reasoning can be improved.

Deepak Nathani,David Wang,Liangming Pan,William Yang Wang
University of California, Santa Barbara · feedback, reasoning, self-improvement · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations

Qizhi Pei,Wei Zhang,Jinhua Zhu,Kehan Wu,Kaiyuan Gao,Lijun Wu,Yingce Xia,Rui Yan
multimodal-pretraining, t5, biology · DOI / DBLP
7
Skim · Findings, EMNLP 2023

mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations

Jonas Pfeiffer,Francesco Piccinno,Massimo Nicosia,Xinyi Wang,Machel Reid,Sebastian Ruder
multilingual, t5, modular · DOI / DBLP
7
Skim · Findings, EMNLP 2023

Measuring and Narrowing the Compositionality Gap in Language Models

LLMs show a "compositionality gap" on problems requiring multi-step compositional reasoning: the model can answer each sub-question correctly yet fails to combine the sub-answers into the final answer. The paper quantifies this gap and proposes self-ask to narrow it.

Ofir Press,Muru Zhang,Sewon Min,Ludwig Schmidt,Noah A. Smith,Mike Lewis
University of Washington, Allen Institute for AI · compositionality, reasoning, cot · DOI / DBLP
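The self-ask pattern can be sketched with a stubbed LLM (the prompt strings and the stub's canned behaviour are invented for illustration):

```python
def self_ask(question, llm, max_follow_ups=4):
    """The model decomposes a question into follow-ups, answers them, and
    composes the final answer once it has enough intermediate facts."""
    context = "Question: " + question
    for _ in range(max_follow_ups):
        step = llm(context)
        if step.startswith("Final answer:"):
            return step[len("Final answer:"):].strip()
        # step is a follow-up question: answer it, fold it into the context
        context += "\nFollow up: " + step + "\nIntermediate: " + llm(step)
    return llm(context)

def stub_llm(prompt):
    if prompt == "When was the first World Cup held?":
        return "1930"
    if "Intermediate: 1930" in prompt:
        return "Final answer: Uruguay"
    return "When was the first World Cup held?"

print(self_ask("Who won the first World Cup?", stub_llm))  # Uruguay
```

The scaffold makes the intermediate sub-answers explicit in the prompt, which is precisely the structure the compositionality-gap measurement exploits.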
7
Skim · EMNLP 2023

Context Compression for Auto-regressive Transformers with Sentinel Tokens

This work compresses redundant history out of long contexts without changing the autoregressive generation paradigm, cutting Transformer compute and memory. Long contexts are usually handled via sparse attention, external memory, or truncation; these either modify the model heavily or directly lose the fine-grained conditioning information generation needs.

Siyu Ren,Qi Jia,Kenny Q. Zhu
context-compression, long-context, kv-cache · DOI / DBLP
7
Close read · Findings, EMNLP 2023

Scaling Vision-Language Models with Sparse Mixture of Experts

It brings MoE into VLMs to test whether sparse experts can deliver better multimodal performance at the same FLOPs, as they do for text-only LMs. MoE had seen little use in VLMs, mainly over concerns that experts would not specialize cleanly across modalities.

Sheng Shen,Zhewei Yao,Chunyuan Li,Trevor Darrell,Kurt Keutzer,Yuxiong He
UC Berkeley, Microsoft · moe, vision-language, scaling · DOI / DBLP
6
Skim · EMNLP 2023

Sparse Universal Transformer

Universal Transformers (UT) achieve better compositional generalization and parameter efficiency than vanilla Transformers (VT) via cross-layer parameter sharing, but UT compute and memory costs rise sharply as parameters scale, far above VT, limiting large-scale use.

Shawn Tan,Yikang Shen,Zhenfang Chen,Aaron C. Courville,Chuang Gan
MIT-IBM Watson AI Lab, Université de Montréal · transformer, parameter-sharing, sparsity · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Transcending Scaling Laws with 0.1% Extra Compute

Yi Tay,Jason Wei,Hyung Won Chung,Vinh Q. Tran,David R. So,Siamak Shakeri ... 6 authors omitted ... ,Slav Petrov,Neil Houlsby,Quoc V. Le,Mostafa Dehghani
scaling-law, compute-optimal, training-dynamics · DOI / DBLP
7
Skim · Findings, EMNLP 2023

mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences

Existing multilingual models such as mT5 handle long inputs (long-document summarization, cross-lingual long-document QA) poorly, constrained in both efficiency and quality by the quadratic cost of standard self-attention.

David C. Uthus,Santiago Ontañón,Joshua Ainslie,Mandy Guo
Google Research · long-context, multilingual, t5 · DOI / arXiv / DBLP
7
Skim · Demo, EMNLP 2023

Koala: An Index for Quantifying Overlaps with Pre-training Corpora

As pretraining corpora explode in size, downstream evaluation sets are easily included in training data (contamination), inflating results, and fast detection has been missing.

Thuy-Trang Vu,Xuanli He,Gholamreza Haffari,Ehsan Shareghi
Monash University · pretrain-data, contamination, data-quality · DOI / DBLP
7
Skim · EMNLP 2023

CodeT5+: Open Code Large Language Models for Code Understanding and Generation

The paper targets the dual rigidity of code LLMs in architecture and pretraining objectives: many models are encoder-only or decoder-only, the few unified encoder-decoder designs compromise visibly on some tasks, and the objectives are usually too few, misaligning training signal with certain code tasks.

Yue Wang,Hung Le,Akhilesh Gotmare,Nghi D. Q. Bui,Junnan Li,Steven C. H. Hoi
code-llm, pretrain, encoder-decoder · DOI / arXiv / DBLP
7
Skim · Findings, EMNLP 2023

InfoDiffusion: Information Entropy Aware Diffusion Process for Non-Autoregressive Text Generation

This paper addresses the mismatch between the generation order of text diffusion and the human writing process. Many non-autoregressive text diffusion models recover tokens easy-first, whereas humans pin down key information first and fill in details later; this ordering mismatch costs the model both key-information fidelity and generation efficiency.

Renzhi Wang,Jing Li,Piji Li
diffusion-lm, non-ar, text-generation · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Outlier Suppression+: Accurate quantization of large language models by equivalent and effective shifting and scaling

Xiuying Wei,Yunchen Zhang,Yuhang Li,Xiangguo Zhang,Ruihao Gong,Jinyang Guo,Xianglong Liu
quantization, llm, outlier · DOI / DBLP
7
Close read · Findings, EMNLP 2023

Difference-Masking: Choosing What to Mask in Continued Pretraining

Alex Wilf,Syeda Nahida Akter,Leena Mathur,Paul Pu Liang,Sheryl Mathew,Mengrou Shou,Eric Nyberg,Louis-Philippe Morency
continued-pretraining, masking-strategy, data-selection · DOI / DBLP
7
Skim · EMNLP 2023

TLM: Token-Level Masking for Transformers

Yangjun Wu,Kebin Fang,Dongxiang Zhang,Han Wang,Hao Zhang,Gang Chen
token-masking, transformer, training-objective · DOI / DBLP
8
Close read · Findings, EMNLP 2023

Adapting Pretrained Text-to-Text Models for Long Text Sequences

Most pretrained text-to-text models are designed for short sequences and degrade markedly when adapted directly to long ones; prior adaptations either pretrain a long-context model from scratch or modify the architecture at the cost of generality, with no mature low-cost recipe for transferring long-context ability from a short-context model.

Wenhan Xiong,Anchit Gupta,Shubham Toshniwal,Yashar Mehdad,Scott Yih
Meta AI · long-context, attention-modification, continual-pretraining · DOI / arXiv / DBLP
7
Skim · Findings, EMNLP 2023

Pit One Against Many: Leveraging Attention-head Embeddings for Parameter-efficient Multi-head Attention

Multi-Head Attention's parameter count grows linearly with head count, since every head has its own Q/K/V projection matrices, a significant memory cost in large models. The question is whether far fewer parameters can come close to MHA's quality.

Huiyin Xue,Nikolaos Aletras
University of Sheffield · attention-optimization, multi-head-attention, parameter-efficiency · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Bootstrapping Small & High Performance Language Models with Unmasking-Removal Training Policy

How to train small yet high-performing language models: the paper proposes an unmasking-removal training policy to bootstrap small-model capability.

Yahan Yang,Elior Sulem,Insup Lee,Dan Roth
University of Pennsylvania · masked-lm, training-dynamics, small-model · DOI / DBLP
7
Skim · Findings, EMNLP 2023

How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench

It asks whether LLM capabilities are predictable: can performance on a given task be forecast accurately from metadata alone (model family, parameter count, task type, few-shot count), without actually running the evaluation?

Qinyuan Ye,Harvey Yiyun Fu,Xiang Ren,Robin Jia
USC · scaling-law, capability-prediction, big-bench · DOI / arXiv / DBLP
7
Skim · Findings, EMNLP 2023

TRAMS: Training-free Memory Selection for Long-range Language Modeling

This paper targets memory selection in long-range language modeling: when external memory or long history is too large, how to pick the most useful context without retraining the model. It matters because the bottleneck of many long-context schemes is not whether the history fits, but what is most worth showing the model.

Haofei Yu,Cunxiang Wang,Yue Zhang,Wei Bi
long-context, memory, inference · DOI / DBLP
7
Skim · EMNLP 2023

Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?

It systematically maps the design space of block-based quantization in the sub-8-bit regime, asking which factors actually matter for LLM inference quantization. Prior work picked block sizes, scalings, and number formats independently, making results incomparable and leaving unclear which combinations work below 6 bits.

Cheng Zhang,Jianyi Cheng,Ilia Shumailov,George A. Constantinides,Yiren Zhao
Imperial College London, University of Cambridge · quantization, inference, low-bit · DOI / DBLP
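Block-based quantization itself is easy to state: each block of values gets its own scale, so an outlier only degrades its own block (a minimal symmetric-quantization sketch; the block size and bit width here are arbitrary, not the paper's recommendations):

```python
def quantize_blocks(x, block_size, bits=8):
    """Symmetric per-block quantization: each block gets its own scale."""
    qmax = 2 ** (bits - 1) - 1
    q, scales = [], []
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        scale = max(abs(v) for v in block) / qmax or 1.0  # guard all-zero blocks
        scales.append(scale)
        q.append([round(v / scale) for v in block])
    return q, scales

def dequantize_blocks(q, scales):
    return [v * s for block, s in zip(q, scales) for v in block]

x = [0.1, -0.2, 8.0, 0.05]
q, s = quantize_blocks(x, block_size=2, bits=8)
# The outlier 8.0 inflates only the scale of its own block; the first block
# keeps fine resolution for its small values.
print([round(v, 2) for v in dequantize_blocks(q, s)])
```

Shrinking `block_size` improves outlier isolation at the cost of storing more scales; that trade-off is one axis of the design space the paper maps.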
7
Skim · Findings, EMNLP 2023

Scaling Law for Document Neural Machine Translation

It builds a scaling law for document-level neural machine translation (Doc-NMT): how model size, data volume, and context length jointly determine loss. Sentence-level NMT scaling has been verified repeatedly, but whether the added long-range dependencies of cross-sentence context follow the same regularities had no clear answer.

Zhuocheng Zhang,Shuhao Gu,Min Zhang,Yang Feng
Institute of Computing Technology, CAS; University of Chinese Academy of Sciences · scaling-law, nmt, document · DOI / DBLP
7
Skim · Demo, EMNLP 2023

Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

It extends LLMs into a dialogue model that jointly understands video frames and audio and follows instructions. Most prior video-LLMs attached only the visual modality, either discarding audio or routing it through separate ASR, with no end-to-end joint audio-visual-language modeling.

Hang Zhang,Xin Li,Lidong Bing
DAMO Academy, Alibaba Group · video-llm, audio-visual, instruction-tuning · DOI / DBLP
7
Skim · EMNLP 2023

Do Transformers Parse while Predicting the Masked Word?

Syntax trees can be probed from BERT/RoBERTa embeddings, but it has long been debated whether the model is really parsing or merely picking up a weakly correlated statistical signal; the authors aim to answer this more rigorously.

Haoyu Zhao,Abhishek Panigrahi,Rong Ge,Sanjeev Arora
Princeton University, Duke University · mlm, interpretability, syntax · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models

Social bias in MLMs is widely reported, but which factor drives it: model size, data volume, training objective, tokenizer, corpus domain, or language? Prior work examined only one or two factors at a time and could not disentangle them.

Yi Zhou,José Camacho-Collados,Danushka Bollegala
Cardiff University, University of Liverpool · mlm, bias, pretraining-factors · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

Data Factors for Better Compositional Generalization

Transformers trained from scratch fail badly on compositional-generalization benchmarks (SCAN/COGS), yet SOTA models trained on large data do fine. How is this contradiction explained?

Xiang Zhou,Yichen Jiang,Mohit Bansal
UNC Chapel Hill · compositional-generalization, data-scaling, pretraining · DOI / arXiv / DBLP
7
Skim · EMNLP 2023

DiffS2UT: A Semantic Preserving Diffusion Model for Textless Direct Speech-to-Speech Translation

In textless direct speech-to-speech translation (S2ST), discrete speech-unit sequences are far longer than the corresponding text, bottlenecking autoregressive models; meanwhile, discrete diffusion directly on speech units ignores continuous-space structure and degrades generation quality. The root causes are speech's low information density and the continuous semantic relations lost in discretization.

Yongxin Zhu,Zhujin Gao,Xinyuan Zhou,Zhongyi Ye,Linli Xu
University of Science and Technology of China · diffusion, speech-to-speech, discrete-units · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Self-Consistency of Large Language Models under Ambiguity

It addresses the difficulty of evaluating output consistency when LLMs face ambiguous, under-specified questions with multiple correct answers. Prior work focused on deterministic tasks, overlooking the stability of the model's intrinsic preferences under ambiguity.

Henning Bartsch,Ole Jorgensen,Domenic Rosati,Jason Hoelscher-Obermaier,Jacob Pfau
consistency, llm-eval, ambiguity · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model

How to precisely locate and remove the specific components inside an LLM responsible for gender bias, without harming general language-modeling ability.

Abhijith Chintam,Rahel Beloch,Willem H. Zuidema,Michael Hanna,Oskar van der Wal
University of Amsterdam · bias, causal-mediation, interpretability · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling

It corrects a previous claim that LLM comprehension of quantifiers (few/most) exhibits inverse scaling, showing the effect was an artifact of flawed evaluation methodology.

Akshat Gupta
inverse-scaling, probing, scaling · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models

It addresses LLM failures at multi-hop reasoning caused by an inability to integrate dispersed information, proposing an inference-time intervention on attention heads that requires no fine-tuning.

Mansi Sakarvadia,Aswathy Ajith,Arham Khan,Daniel Grzenda,Nathaniel Hudson,André Bauer,Kyle Chard,Ian T. Foster
University of Chicago, Argonne National Laboratory · multi-hop, interpretability, attention · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Explaining Data Patterns in Natural Language with Language Models

How to use LLMs to automatically discover latent patterns in a dataset and explain them in natural language, without access to model gradients.

Chandan Singh,John X. Morris,Jyoti Aneja,Alexander M. Rush,Jianfeng Gao
Microsoft Research, Cornell University · explanation, prompting, interpretability · DOI / DBLP
6
Skim · EMNLP 2023

Character-Level Chinese Backpack Language Models

It tests whether the Backpack architecture (decomposing predictions into weighted sums of token sense components) works for character-level Chinese, which lacks natural word boundaries and leans heavily on compositional semantics.

Hao Sun,John Hewitt
Stanford University · architecture, tokenizer, chinese · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Compressing Context to Enhance Inference Efficiency of Large Language Models

This paper tackles a very practical long-context inference problem: inputs often contain many redundant tokens, so memory and latency climb steeply with context length even though the model needs only part of the information. Mainstream fixes either widen the context window (costly) or retrieve/summarize (which rewrites content and introduces generation error), so a compression route that only deletes redundancy without rewriting semantics deserves its own treatment.

Yucheng Li,Bo Dong,Frank Guerin,Chenghua Lin
context-compression, long-context, inference · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond

This paper addresses an overlooked but practical problem: a pretrained model's generic tokenizer may not suit a specific long-form generation task. In mental-health QA, where frequent terminology, fixed expressions, and long sentences coexist, inefficient segmentation lengthens sequences, dilutes semantic units, and amplifies generation difficulty. Tokenizers are usually left untouched while adaptation happens in the model or data, so task mismatch at the tokenization layer is routinely ignored.

Siyang Liu,Naihao Deng,Sahand Sabour,Yilin Jia,Minlie Huang,Rada Mihalcea
tokenizer, task-adaptive, long-form-generation · DOI / arXiv / DBLP
4
Findings, EMNLP 2023

Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation

The Gaussian prior assumed by existing variational dialogue models is incompatible with the output distribution of pretrained LMs, causing low response diversity and posterior collapse; earlier fixes either limited diversity or required extra regularization, with limited effect.

Tianyu Yang,Thy Thy Tran,Iryna Gurevych
Technische Universität Darmstadt · diffusion-prior, vae, dialogue-generation · DOI / arXiv / DBLP
6
Skim · EMNLP 2023

CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs

Open-source LLMs lag ChatGPT on complex multi-turn dialogue instructions; this has usually been blamed on model capability, but the core bottleneck is the lack of large-scale complex instruction demonstrations: existing dialogue instruction datasets are small and low-complexity.

Taha Aksu,Devamanyu Hazarika,Shikib Mehri,Seokhwan Kim,Dilek Hakkani-Tur,Yang Liu,Mahdi Namazifar
Amazon Alexa AIinstruction-tuningdata-synthesismulti-turn-dialogueDOIarXivDBLP
6
泛读FindingsEMNLP 2023

The Internal State of an LLM Knows When It's Lying

Amos Azaria,Tom M. Mitchell
interpretabilitytruthfulnesshallucinationDOIDBLP
5
泛读EMNLP 2023

Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding

Existing early-exit frameworks for autoregressive LMs suffer performance drops and sensitivity to confidence thresholds; prior designs based on state copying or multiple exit paths introduce errors, yielding a poor speedup/quality trade-off.

Sangmin Bae,Jongwoo Ko,Hwanjun Song,Se-Young Yun
Korea Advanced Institute of Science and Technologyearly-exitinginference-efficiencyparallel-decodingDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model

Retrieval-augmented LMs reason poorly, and prior work attributed this to either the retriever or the LM alone, without systematically separating their contributions, making targeted fixes impossible.

Parishad BehnamGhader,Santiago Miret,Siva Reddy
McGill UniversityMilaragreasoningretrieval-augmented-generationDOIarXivDBLP
2
FindingsEMNLP 2023

Non-Autoregressive Sentence Ordering

Existing sentence-ordering methods all use autoregressive pointer networks, which exploit only one-sided dependencies at decoding time and cannot fully mine inter-sentence semantics for optimal ordering; prior work assumed autoregression was the only viable paradigm for structured ordering and never explored non-autoregressive alternatives for this task.

Yi Bin,Wenhao Shi,Bin Ji,Jipeng Zhang,Yujuan Ding,Yang Yang
non-autoregressivedecodingorderingDOIarXivDBLP
5
泛读EMNLP 2023

Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation

The root cause of LLM hallucination lacks a clear definition; existing mitigations are generation-stage patches (retrieval augmentation, rule constraints) that do not address the modeling source, and prior work assumed statistical language modeling inherently cannot distinguish true from false content.

Adam Bouyamourn
hallucinationfaithfulnessobjectiveDOIarXivDBLP
6
泛读EMNLP 2023

Understanding the Inner-workings of Language Models Through Representation Dissimilarity

Davis Brown,Charles Godfrey,Nicholas Konz,Jonathan H. Tu,Henry Kvinge
interpretabilityrepresentationsanalysisDOIDBLP
4
EMNLP 2023

A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding

Existing webpage-understanding datasets keep only part of a page (image-text pairs, long text, or raw HTML) rather than storing all associated images, text, and structured data together, leaving multimodal webpage understanding under-studied and structured image-text data under-used.

Andrea Burns,Krishna Srinivasan,Joshua Ainslie,Geoff Brown,Bryan A. Plummer,Kate Saenko,Jianmo Ni,Mandy Guo
Google ResearchBoston Universitymultimodalweb-databenchmarkDOIarXivDBLP
3
FindingsEMNLP 2023

A Framework for Bidirectional Decoding: Case Study in Morphological Inflection

Sequence-to-sequence tasks default to left-to-right autoregressive decoding, and existing bidirectional decoding methods lack a unified training framework; for morphological inflection, whose prefix/suffix dependencies are symmetric, left-to-right decoding is inefficient and less accurate.

Marc E. Canby,Julia Hockenmaier
University of Illinois Urbana-Champaignbidirectional-decodingnon-autoregressivesequence-modelingDOIarXivDBLP
5
泛读EMNLP 2023

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

The robustness mechanisms of LLMs under input noise are unclear; existing robustness tests focus on word- or sentence-level noise, with no systematic test of character-level scrambled input.

Qi Cao,Takeshi Kojima,Yutaka Matsuo,Yusuke Iwasawa
The University of Tokyorobustnesscharacter-levelevaluationDOIarXivDBLP
6
泛读FindingsEMNLP 2023

How Many Demonstrations Do You Need for In-context Learning?

In in-context learning (ICL), the scaling relation between the number of demonstrations and model performance is unclear, as is the point at which longer contexts hit diminishing returns.

Jiuhai Chen,Lichang Chen,Chen Zhu,Tianyi Zhou
icldemonstrationsDOIDBLP
6
泛读EMNLP 2023

Personalized Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation

Conventional knowledge distillation for code generation uses a static dataset, ignoring the student model's specific weaknesses at different learning stages, which caps distillation efficiency and quality.

Hailin Chen,Amrita Saha,Steven Chu-Hong Hoi,Shafiq Joty
distillationcodesftDOIDBLP
6
泛读FindingsEMNLP 2023

MCC-KD: Multi-CoT Consistent Knowledge Distillation

Conventional Chain-of-Thought (CoT) distillation tends to teach the student only the surface format of reasoning (format imitation), without internal logical consistency or robustness.

Hongzhan Chen,Siyue Wu,Xiaojun Quan,Rui Wang,Ming Yan,Ji Zhang
distillationcotreasoningDOIDBLP
6
泛读FindingsEMNLP 2023

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

At inference time, memory footprint (especially weight loading) and memory-bandwidth bottlenecks severely limit serving efficiency and throughput for large models.

Weize Chen,Xiaoyue Xu,Xu Han,Yankai Lin,Ruobing Xie,Zhiyuan Liu,Maosong Sun,Jie Zhou
Tsinghua UniversityTencentparameter-sharinginferenceplmDOIDBLP
6
泛读EMNLP 2023

TheoremQA: A Theorem-driven Question Answering Dataset

Existing math benchmarks (e.g., GSM8K) skew toward elementary algebra and cannot effectively evaluate LLMs' deep logical reasoning (latent reasoning) over complex scientific theorems.

Wenhu Chen,Ming Yin,Max Ku,Pan Lu,Yixin Wan,Xueguang Ma,Jianyu Xu,Xinyi Wang,Tony Xia
benchmarkreasoningmathDOIDBLP
6
泛读FindingsEMNLP 2023

On the Relation between Sensitivity and Accuracy in In-Context Learning

This paper asks what the relation is between a model's sensitivity to demonstration perturbations in in-context learning and its final accuracy. Much prior work treated prompt sensitivity as a symptom of ICL instability, but rarely asked whether it is purely a defect or structurally linked to high performance.

Yanda Chen,Chen Zhao,Zhou Yu,Kathleen R. McKeown,He He
iclsensitivityanalysisDOIDBLP
6
泛读EMNLP 2023

Bridging Information-Theoretic and Geometric Compression in Language Models

The core finding: "compression" in language models can be understood both geometrically and as information-theoretic code length, and the two are highly correlated. These views were usually discussed separately, one via the intrinsic dimension of the representation space, the other via the negative log-likelihood the model assigns to data; the authors ask whether they measure the same underlying capability.

Emily Cheng,Corentin Kervadec,Marco Baroni
representationcompressiongeometryDOIarXivDBLP
6
泛读EMNLP 2023

Adapting Language Models to Compress Contexts

This paper addresses the Transformer's limited context window and the high cost of long-text processing; rather than settling for truncation or retrieval concatenation, it compresses long history into internal states the model can keep using. Prior long-context work either widened the window directly, at linear-to-quadratic compute and memory cost, or bolted on a summarizer, whose text loses information and whose training objective is mismatched.

Alexis Chevalier,Alexander Wettig,Anirudh Ajith,Danqi Chen
context-compressionlong-contextDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation

The core question: standard Transformers are widely held unable to strictly model regular languages or extrapolate stably in length. Prior workarounds added recurrence or external state machines, or accepted collapse beyond training lengths; the authors argue that with a suitable "working memory" structure, Transformers may well manage it.

Ta-Chung Chi,Ting-Han Fan,Alexander Rudnicky,Peter J. Ramadge
architecturelength-extrapolationattentionDOIarXivDBLP
6
泛读EMNLP 2023

Where to start? Analyzing the potential value of intermediate models

The core finding: when choosing an intermediate model as the starting point for a new task, the gain largely decomposes into how much the target dataset can benefit and how generally useful the candidate starting model is, rather than depending mainly on source-target task alignment. Intertraining was usually understood as "fine-tune on a nearby task first, then transfer to the target"; the authors systematically test whether that intuition holds.

Leshem Choshen,Elad Venezian,Shachar Don-Yehiya,Noam Slonim,Yoav Katz
intertrainingtransferfinetuningDOIarXivDBLP
6
泛读FindingsEMNLP 2023

RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling

Existing retrieval-augmented LMs have two long-standing problems: the retrieval query looks only at the current source, ignoring the future target, and retrieved raw text is limited by context length and noise. The authors address both at once in latent space.

Jingcheng Deng,Liang Pang,Huawei Shen,Xueqi Cheng
Institute of Computing Technology, Chinese Academy of Sciencesretrieval-augmentedvaelanguage-modelDOIarXivDBLP
6
泛读FindingsEMNLP 2023

SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF

RLHF has a complex training pipeline, its reward model is usually a single scalar score, and users cannot control specific behavioral dimensions (e.g., more humorous, more concise) at inference. Can pure SFT deliver controllability plus alignment?

Yi Dong,Zhilin Wang,Makesh Narsimhan Sreedhar,Xianchao Wu,Oleksii Kuchaiev
NVIDIAalignmentsftrlhf-alternativeDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention

How exactly do scaling and instruction tuning change an LM's attention distribution when reading text? Do they make the model read more like humans, or less? This is a concrete window into how aligned LMs' cognitive behavior changes.

Changjiang Gao,Shujian Huang,Jixing Li,Jiajun Chen
Nanjing Universityscalinginstruction-tuningattention-mechanismDOIDBLP
6
泛读EMNLP 2023

Enabling Large Language Models to Generate Text with Citations

Tianyu Gao,Howard Yen,Jiatong Yu,Danqi Chen
ragcitation-generationattributionDOIDBLP
5
泛读DemoEMNLP 2023

Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs

Labeled training data for NLP tasks is costly to obtain, and existing approaches that use LLMs to generate labeled data lack a unified open-source toolchain, forcing repeated development of task-specific generation logic.

Jonas Golde,Patrick Haller,Felix Hamborg,Julian Risch,Alan Akbik
Technical University of Berlinsynthetic-datadata-generationteacher-llmDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Demystifying Prompts in Language Models via Perplexity Estimation

Prompt choice has a huge effect on LLM performance but is poorly understood. This work uses perplexity to explain and predict differences between prompts, aiming to open the black box of prompt engineering.

Hila Gonen,Srini Iyer,Terra Blevins,Noah A. Smith,Luke Zettlemoyer
University of WashingtonMeta FAIRpromptingperplexityinterpretabilityDOIDBLP
6
泛读FindingsEMNLP 2023

Improving Input-label Mapping with Demonstration Replay for In-context Learning

In ICL, a causal LM's unidirectional attention limits how fully the input-label mappings in demonstrations are captured: each token sees only preceding tokens and cannot revisit complete input-label pairs.

Zhuocheng Gong,Jiahao Liu,Qifan Wang,Jingang Wang,Xunliang Cai,Dongyan Zhao,Rui Yan
MeituanMeta AIicldemonstration-selectionpromptingDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis

RL fine-tuning for code generation needs unit tests as the reward signal, but high-quality unit tests cost far more to obtain than the code data itself. This work tackles automatic generation of unit-test data, combined with actor-critic RL, to improve code synthesis quality.

Philip John Gorinski,Matthieu Zimmer,Gerasimos Lampouras,Derrick-Goh-Xin Deik,Ignacio Iacobacci
code-generationreinforcement-learningactor-criticDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Knowledge is a Region in Weight Space for Fine-tuned Language Models

Can the knowledge in fine-tuned language models be located as a region in weight space? Prior work on knowledge editing and model merging hinted that weight space has structure, but this lacked systematic verification.

Almog Gueta,Elad Venezian,Colin Raffel,Noam Slonim,Yoav Katz,Leshem Choshen
IBM ResearchUNC Chapel Hillweight-spacefine-tuningknowledge-editingDOIDBLP
6
泛读FindingsEMNLP 2023

Coverage-based Example Selection for In-Context Learning

ICL example selection usually ranks examples by similarity independently, producing redundant sets that miss important information. This work selects a complementary set of examples that jointly cover the key aspects of the test input.
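
The complementary-selection idea is essentially greedy set cover. Below is a minimal lexical sketch: coverage is measured by token overlap, whereas the paper uses a learned, BERTScore-style similarity; `select_covering_examples` is an illustrative name, not the paper's API.

```python
def select_covering_examples(test_input, pool, k=2):
    """Greedily pick examples whose tokens jointly cover the test input,
    instead of ranking each example by similarity independently."""
    target = set(test_input.split())
    covered, chosen = set(), []
    for _ in range(k):
        # Marginal gain = newly covered target tokens, the set-cover criterion.
        best = max(pool, key=lambda ex: len((set(ex.split()) & target) - covered))
        gain = (set(best.split()) & target) - covered
        if not gain:
            break
        chosen.append(best)
        covered |= gain
        pool = [ex for ex in pool if ex != best]
    return chosen

pool = ["convert celsius to fahrenheit",
        "convert miles to km",
        "celsius and kelvin scales"]
print(select_covering_examples("convert celsius to kelvin", pool, k=2))
```

Note that the second pick is the "kelvin" example, not the second-most-similar one: pure similarity ranking would pick two "convert ... to ..." examples and never cover "kelvin".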

Shivanshu Gupta,Matt Gardner,Sameer Singh
UC Irvineicldemonstration-selectionpromptingDOIarXivDBLP
6
泛读EMNLP 2023

Editing Common Sense in Transformers

Existing model-editing methods (e.g., MEMIT) have only been validated on encyclopedic knowledge with a single correct answer; commonsense knowledge with multiple correct answers (apples can be green or red, but not transparent) had never been studied. This work explores whether commonsense judgments are also causally tied to localizable, editable parameters in Transformers.

Anshita Gupta,Debanjan Mondal,Akshay Krishna Sheshadri,Wenlong Zhao,Xiang Li,Sarah Wiegreffe,Niket Tandon
model-editingcommonsensetransformerDOIarXivDBLP
6
泛读EMNLP 2023

The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models

LLMs give inconsistent factual answers to semantically equivalent questions (e.g., saying someone died in Edinburgh and also in London). This work systematically studies how scaling and retrieval augmentation (RAG) affect factual consistency, and analyzes where the inconsistency comes from.

Lovisa Hagström,Denitsa Saynova,Tobias Norlund,Moa Johansson,Richard Johansson
scalingretrievalfactualityDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Improving Sequential Model Editing with Fact Retrieval

In sequential model editing, model performance degrades after many consecutive fact edits. This work introduces fact retrieval to stabilize sequential editing.

Xiaoqi Han,Ru Li,Hongye Tan,Yuanlong Wang,Qinghua Chai,Jeff Z. Pan
model-editingfact-retrievalknowledge-editingDOIDBLP
6
泛读EMNLP 2023

Reasoning with Language Model is Planning with World Model

Existing LLM reasoning methods (e.g., Chain-of-Thought) do not model world state and cannot handle reasoning that requires long-horizon planning and environment interaction; prior work assumed reasoning ability would emerge spontaneously from text-only pretraining.

Shibo Hao,Yi Gu,Haodi Ma,Joshua Jiahua Hong,Zhen Wang,Daisy Zhe Wang,Zhiting Hu
reasoningplanningworld-modelDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number

Transformer representations are treated as an uninterpretable black box; existing interpretability work had not clearly verified how linguistic features are encoded, in particular whether syntactic features are linearly decodable.

Sophie Hao,Tal Linzen
New York Universityinterpretabilitytransformerlinear-probeDOIarXivDBLP
6
泛读EMNLP 2023

Prompting is not a substitute for probability measurements in large language models

Jennifer Hu,Roger Levy
probingpromptinglm-analysisDOIDBLP
6
泛读EMNLP 2023

Meta-Learning Online Adaptation of Language Models

Nathan Hu,Eric Mitchell,Christopher D. Manning,Chelsea Finn
meta-learningonline-adaptationcontinualDOIDBLP
6
泛读EMNLP 2023

DecipherPref: Analyzing Influential Factors in Human Preference Judgments via GPT-4

The problem: why do human preference annotations favor certain answers? Preference learning usually takes winner/loser labels directly as supervision without unpacking which observable factors drive the judgment. This is worth revisiting now because RLHF and preference models increasingly rely on large-scale human or AI preference data; if preferences are systematically contaminated by surface factors such as length, tone, and formatting, downstream reward models and alignment inherit the bias.

Yebowen Hu,Kaiqiang Song,Sangwoo Cho,Xiaoyang Wang,Hassan Foroosh,Fei Liu
preferencerlhfevaluationDOIDBLP
6
泛读EMNLP 2023

LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models

The problem: existing PEFT methods each have weaknesses on LLMs. LoRA, prefix/prompt tuning, and adapter-style methods often cover only one insertion site or update form, making it hard to balance adaptation capacity, stability, and engineering convenience. This matters because LLM training costs have made "how to modify models efficiently" a default premise rather than an optional trick.

Zhiqiang Hu,Lei Wang,Yihuai Lan,Wanyu Xu,Ee-Peng Lim,Lidong Bing,Xing Xu,Soujanya Poria,Roy Ka-Wei Lee
peftadapterloraDOIDBLP
6
泛读FindingsEMNLP 2023

Dynamic Low-rank Estimation for Transformer-based Language Models

The problem: low-rank structure in Transformer LMs is not static, so fixed-rank approximation forces a rigid trade-off between compression rate and performance loss. Many compression methods preset a rank and apply one uniform configuration to all layers, but the effective rank differs across layers and training stages, so they either waste parameters or hurt performance.

Ting Hua,Xiao Li,Shangqian Gao,Yen-Chang Hsu,Yilin Shen,Hongxia Jin
low-rankcompressionlmDOIDBLP
6
泛读FindingsEMNLP 2023

DiffusionSL: Sequence Labeling via Tag Diffusion Process

The problem: sequence labeling is usually cast as per-token local classification, which models inter-label dependencies weakly and tends toward local optima on tasks needing global consistency. Common alternatives are CRFs or autoregressive label generation; the former has limited expressiveness, the latter decodes serially and over-linguifies the label-generation problem.

Ziyang Huang,Pengfei Cao,Jun Zhao,Kang Liu
diffusion-lmsequence-labelingDOIDBLP
6
泛读EMNLP 2023

Privacy Implications of Retrieval-Based Language Models

The question: does a retrieval-based language model, while gaining knowledge and freshness from external corpora, open a new privacy leakage surface distinct from purely parametric LMs? Privacy discussion had focused on parameter memorization and training-data extraction, but RLMs wire private documents directly into the inference path, clearly changing the attack surface.

Yangsibo Huang,Samyak Gupta,Zexuan Zhong,Kai Li,Danqi Chen
ragprivacymemorizationDOIDBLP
6
泛读FindingsEMNLP 2023

Diffusion Language Model with Query-Document Relevance for Query-Focused Summarization

The problem: query-focused summarization requires summaries that are not just fluent but tightly aligned with query intent; conventional autoregressive summarizers easily produce fluent but off-topic content. Prior mitigations concatenated the query in the encoder or added relevance features, but the generation objective itself never directly enforced "step toward a more relevant summary".

Shaoyao Huang,Luozheng Qin,Ziqiang Cao
diffusion-lmsummarizationDOIDBLP
6
泛读FindingsEMNLP 2023

Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting

The problem: LLMs' multilingual abilities are markedly uneven; high-resource languages often trigger stronger reasoning, while equivalent questions in low-resource languages fare worse. Common workarounds either ask directly in the target language, limited by its internal representation capacity, or translate and then answer, which splits the cross-lingual reasoning chain into an external pipeline.

Haoyang Huang,Tianyi Tang,Dongdong Zhang,Xin Zhao,Ting Song,Yan Xia,Furu Wei
multilingualpromptingllmDOIDBLP
6
泛读EMNLP 2023

Learning Preference Model for LLMs via Automatic Preference Data Generation

The problem: preference-model training for LLMs is limited by scarce, expensive high-quality human preference data, and reward modeling for large models from few human comparisons easily overfits annotation style. AI feedback had been tried as a substitute for humans, but the quality and controllability of automatically generated preference data remained the core bottleneck.

Shijia Huang,Jianqiao Zhao,Yanyang Li,Liwei Wang
preference-modelrlhfdata-synthesisDOIDBLP
6
泛读FindingsEMNLP 2023

Towards Mitigating LLM Hallucination via Self Reflection

Ziwei Ji,Tiezheng Yu,Yan Xu,Nayeon Lee,Etsuko Ishii,Pascale Fung
hallucinationself-reflectionDOIDBLP
6
泛读EMNLP 2023

Lion: Adversarial Distillation of Proprietary Large Language Models

Yuxin Jiang,Chunkit Chan,Mingyang Chen,Wei Wang
distillationadversarialllmDOIDBLP
6
泛读FindingsEMNLP 2023

Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts

Gangwei Jiang,Caigao Jiang,Siqiao Xue,James Zhang,Jun Zhou,Defu Lian,Ying Wei
continual-pretrainhypernetworkpromptDOIDBLP
6
泛读FindingsEMNLP 2023

Generative Calibration for In-context Learning

ICL performance is highly sensitive to prompt configuration, such as demonstration choice and ordering. Prior work mostly optimized locally from a prompt-engineering angle without tackling the root cause, distribution shift.

Zhongtao Jiang,Yuanzhe Zhang,Cao Liu,Jun Zhao,Kang Liu
in-context-learningcalibrationDOIarXivDBLP
4
EMNLP 2023

Parameter-efficient Tuning for Large Language Model without Calculating Its Gradients

Existing parameter-efficient fine-tuning (PEFT) methods such as LoRA and Adapters still require computing gradients through the large model and backpropagating, saving only about 30% of training memory, so they cannot serve extremely resource-constrained settings.

Feihu Jin,Jiajun Zhang,Chengqing Zong
peftgradient-freetuningDOIDBLP
5
泛读EMNLP 2023

Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction

For complex structured-output tasks LLMs cannot solve directly (e.g., closed-domain information extraction), prior synthetic-data methods relied on LLMs generating input-output pairs in the forward direction, yielding low quality and small scale, insufficient for training.

Martin Josifoski,Marija Sakota,Maxime Peyrard,Robert West
synthetic-datainformation-extractionDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Impact of Co-occurrence on Factual Knowledge of Large Language Models

Cheongwoong Kang,Jaesik Choi
factual-knowledgeco-occurrencememorizationDOIDBLP
6
泛读EMNLP 2023

Preserving Privacy Through Dememorization: An Unlearning Technique For Mitigating Memorization Risks In Language Models

Aly M. Kassem,Omar Mahmoud,Sherif Saad
memorizationunlearningprivacyDOIDBLP
6
泛读FindingsEMNLP 2023

Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models

Amirhossein Kazemnejad,Mehdi Rezagholizadeh,Prasanna Parthasarathi,Sarath Chandar
knowledge-acquisitionknowledge-utilizationpretrained-lmDOIDBLP
6
泛读EMNLP 2023

Aligning Large Language Models through Synthetic Feedback

Reinforcement learning from human feedback (RLHF) is costly and hard to scale, limiting the iteration speed of LLM alignment.

Sungdong Kim,Sanghwan Bae,Jamin Shin,Soyoung Kang,Donghyun Kwak,Kang Min Yoo,Minjoon Seo
alignmentsynthetic-feedbackllmDOIDBLP
6
泛读EMNLP 2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

Small and mid-sized language models (<10B) lack zero-shot CoT reasoning ability, and fine-tuning directly on ordinary tasks does not elicit their step-by-step reasoning potential.

Seungone Kim,Se June Joo,Doyoung Kim,Joel Jang,Seonghyeon Ye,Jamin Shin,Minjoon Seo
cotinstruction-tuningsftDOIDBLP
6
泛读EMNLP 2023

The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values

RLHF is widely used, but for subjective human preferences the collection and integration of feedback lack standards, easily introducing bias and inefficiency.

Hannah Kirk,Andrew M. Bean,Bertie Vidgen,Paul Röttger,Scott Hale
Oxford Internet InstituteAlan Turing Instituterlhfhuman-feedbacksurveyDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Co-training and Co-distillation for Quality Improvement and Compression of Language Models

The problem: knowledge distillation can compress language models, but the small model's performance usually remains clearly below the large one, a hard speed-for-quality trade-off. Most distillation is one-way (the large model teaches the small one), and the authors question whether that directionality itself limits the training gains.

Hayeon Lee,Rui Hou,Jongpil Kim,Davis Liang,Hongbo Zhang,Sung Ju Hwang,Alexander Min
knowledge-distillationmodel-compressionco-trainingDOIarXivDBLP
6
泛读EMNLP 2023

A Framework for Vision-Language Warm-up Tasks in Multimodal Dialogue Models

The problem: before formal task training, how should vision-language warm-up tasks be designed so that a multimodal dialogue model first learns cross-modal alignment and interaction, rather than learning them the hard way from dialogue supervision? Data scale often masks this issue, but it is especially critical for small- to mid-scale training.

Jaewook Lee,Seongsik Park,Seong-Heum Park,Hongjin Kim,Harksoo Kim
vlmpretrainingdialogueDOIDBLP
6
泛读EMNLP 2023

Making Large Language Models Better Data Creators

The focus: making LLMs better at generating training data, not just treating them as end-task solvers. This matters now because more and more post-training, distillation, and bootstrapping pipelines rely on model-synthesized data, and "can answer" is not the same as "can reliably produce high-quality, trainable data".

Dong-Ho Lee,Jay Pujara,Mohit Sewak,Ryen White,Sujay Kumar Jauhar
synthetic-datadata-generationDOIDBLP
6
泛读EMNLP 2023

HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models

The problem: LLM hallucination lacked a large-scale, systematic, actionable benchmark, so different papers evaluated in their own ways and results were hard to compare. Prior evaluations were small, single-task, or covered only part of the hallucination spectrum in factual QA.

Junyi Li,Xiaoxue Cheng,Xin Zhao,Jian-Yun Nie,Ji-Rong Wen
hallucinationbenchmarkevaluationDOIDBLP
6
泛读EMNLP 2023

Evaluating Object Hallucination in Large Vision-Language Models

The problem: how to systematically evaluate object hallucination in large vision-language models, i.e., describing objects absent from the image. As LVLMs' generation improves, such errors become more common and more subtle, while traditional image-text metrics are often insensitive to them.

Yifan Li,Yifan Du,Kun Zhou,Jinpeng Wang,Wayne Xin Zhao,Ji-Rong Wen
vlmhallucinationevaluationDOIDBLP
6
泛读EMNLP 2023

Pragmatic Reasoning Unlocks Quantifier Semantics for Foundation Models

The problem: foundation models are unstable on quantifier semantics; for judgments involving all / some / most, which depend on context and speaker intent, surface matching is far from enough. Many evaluations blamed the models' lack of logical rules, but the authors argue they also lack pragmatic reasoning, that is, interpreting quantifiers in light of communicative goals.

Yiyuan Li,Rakesh R. Menon,Sayan Ghosh,Shashank Srivastava
reasoningsemanticsfoundation-modelsDOIDBLP
6
泛读EMNLP 2023

MoT: Memory-of-Thought Enables ChatGPT to Self-Improve

The problem: ChatGPT's one-shot chains of thought are unstable and cannot naturally accumulate experience across problems, so self-improvement often stalls at single-sample reflection. The authors give the model a reusable "memory of thought" so that past high-quality reasoning traces keep paying off on new problems.

Xiaonan Li,Xipeng Qiu
Fudan Universityself-improvementmemoryreasoningDOIDBLP
6
泛读FindingsEMNLP 2023

Finding Support Examples for In-Context Learning

The problem: in-context learning depends heavily on example choice, but which support examples actually help was decided mostly by heuristics or similarity retrieval, unreliably. The authors seek an operational criterion for support examples instead of blindly stacking more of them.

Xiaonan Li,Xipeng Qiu
Fudan Universityin-context-learningretrievalexample-selectionDOIDBLP
6
泛读EMNLP 2023

MMNMT: Modularizing Multilingual Neural Machine Translation with Flexibly Assembled MoE and Dense Blocks

Shangjie Li,Xiangpeng Wei,Shaolin Zhu,Jun Xie,Baosong Yang,Deyi Xiong
moemachine-translationmodularizationDOIDBLP
6
泛读EMNLP 2023

Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning

Chong Li,Shaonan Wang,Yunhao Zhang,Jiajun Zhang,Chengqing Zong
attentioninterpretabilitymulti-task-learningDOIDBLP
6
泛读EMNLP 2023

Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations

Zhuoyan Li,Hangxiao Zhu,Zhuoran Lu,Ming Yin
synthetic-datadata-qualitytext-classificationDOIDBLP
6
泛读DemoEMNLP 2023

CLEVA: Chinese Language Models EVAluation Platform

Yanyang Li,Jianqiao Zhao,Duo Zheng,Zi-Yuan Hu,Zhi Chen,Xiaohui Su ... 1 author omitted ... ,Shijia Huang,Dahua Lin,Michael R. Lyu,Liwei Wang
benchmarkchineseevaluationDOIDBLP
6
泛读EMNLP 2023

Disentangling Transformer Language Models as Superposed Topic Models

Jia Peng Lim,Hady W. Lauw
interpretabilitytopic-modelsuperpositionDOIDBLP
6
泛读EMNLP 2023

Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?

Kevin Liu,Stephen Casper,Dylan Hadfield-Menell,Jacob Andreas
truthfulnessinterpretabilityhallucinationDOIDBLP
6
泛读EMNLP 2023

Composable Text Controls in Latent Space with ODEs

Guangyi Liu,Zeyu Feng,Yuan Gao,Zichao Yang,Xiaodan Liang,Junwei Bao,Xiaodong He,Shuguang Cui,Zhen Li,Zhiting Hu
latent-spacediffusioncontrollable-generationDOIDBLP
6
泛读FindingsEMNLP 2023

Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression

Conventional knowledge distillation (KD) for extreme-scale LLMs (100B+) requires computing teacher logits over the full pretraining corpus, which is prohibitively expensive and hard to deploy.

Jiduan Liu,Jiahao Liu,Qifan Wang,Jingang Wang,Xunliang Cai,Dongyan Zhao,Ran Wang,Rui Yan
Meta AITencentcompressiondistillationretrievalDOIDBLP
6
泛读FindingsEMNLP 2023

LogiCoT: Logical Chain-of-Thought Instruction Tuning

Standard CoT fine-tuning relies on the model's spontaneously emergent reasoning, which is prone to logical leaps and hallucination, lacking strict logical constraints.

Hanmeng Liu,Zhiyang Teng,Leyang Cui,Chaoli Zhang,Qiji Zhou,Yue Zhang
Tencent AI LabWestlake Universitycotinstruction-tuningreasoningDOIDBLP
6
泛读FindingsEMNLP 2023

Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model

The Transformer's global attention faces quadratic blow-up in memory and compute on long-sequence seq2seq tasks.

Yinghan Long,Sayeed Shafayet Chowdhury,Kaushik Roy
Purdue Universitylong-contextrecurrenttransformerDOIDBLP
6
泛读EMNLP 2023

Text Rendering Strategies for Pixel Language Models

Pixel-based language models (e.g., PIXEL) render text as images for input, but how visual rendering strategies (font, layout) affect learning efficiency was unclear.

Jonas F. Lotz,Elizabeth Salesky,Phillip Rust,Desmond Elliott
University of Copenhagentokenizerpixel-lmDOIDBLP
6
泛读EMNLP 2023

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

SFT or RLHF alignment of extreme-scale models with hundreds of billions of parameters (e.g., GPT-3 175B) is extremely costly and prone to an alignment tax.

Ximing Lu,Faeze Brahman,Peter West,Jaehun Jung,Khyathi Raghavi Chandu,Abhilasha Ravichander ... 7 authors omitted ... ,Lianhui Qin,Xiang Ren,Sean Welleck,Yejin Choi
University of WashingtonAllen Institute for AI (AI2)inference-timepolicy-adapteralignmentDOIDBLP
6
泛读FindingsEMNLP 2023

Measuring Pointwise 𝒱-Usable Information In-Context-ly

Sheng Lu,Shan Chen,Yingya Li,Danielle S. Bitterman,Guergana Savova,Iryna Gurevych
iclinformation-theoryanalysisDOIDBLP
6
泛读FindingsEMNLP 2023

TRIP: Accelerating Document-level Multilingual Pre-training via Triangular Document-level Pre-training on Parallel Data Triplets

Hongyuan Lu,Haoyang Huang,Shuming Ma,Dongdong Zhang,Wai Lam,Zhaochuan Gao,Anthony Aue,Arul Menezes,Furu Wei
multilingualpretraindocument-levelDOIDBLP
6
泛读FindingsEMNLP 2023

Take a Closer Look at Multilinguality! Improve Multilingual Pre-Training Using Monolingual Corpora Only

Jinliang Lu,Yu Lu,Jiajun Zhang
multilingualpretraindataDOIDBLP
6
泛读EMNLP 2023

Efficient Classification of Long Documents via State-Space Models

Peng Lu,Suyuchen Wang,Mehdi Rezagholizadeh,Bang Liu,Ivan Kobyzev
ssmlong-contextarchitectureDOIDBLP
6
泛读EMNLP 2023

Focus Your Attention (with Adaptive IIR Filters)

Shahar Lutati,Itamar Zimerman,Lior Wolf
attentioniir-filterarchitectureDOIDBLP
6
泛读EMNLP 2023

FinGPT: Large Generative Models for a Small Language

Risto Luukkonen,Ville Komulainen,Jouni Luoma,Anni Eskelinen,Jenna Kanerva,Hanna-Mari Kupari ... 11 authors omitted ... ,Jyrki Heinonen,Aija Vahtola,Samuel Antao,Sampo Pyysalo
small-languagefinnishpretrainingDOIDBLP
6
泛读FindingsEMNLP 2023

What Makes Chain-of-Thought Prompting Effective? A Counterfactual Study

Aman Madaan,Katherine Hermann,Amir Yazdanbakhsh
cotpromptingcounterfactualDOIDBLP
6
泛读FindingsEMNLP 2023

Sources of Hallucination by Large Language Models on Inference Tasks

The problem: when LLMs "hallucinate" on inference tasks, is the error due to weak reasoning or to biases learned in pretraining? Much prior work lumped errors in NLI, QA, and summarization under "reasoning failure"; using controlled behavioral experiments, the authors show that, at least in some settings, model errors can be explained by two more basic pretraining sources: sentence-level memorization and term co-occurrence patterns.

Nick McKenna,Tianyi Li,Liang Cheng,Mohammad Javad Hosseini,Mark Johnson,Mark Steedman
hallucinationnlipretraining-biasDOIarXivDBLP
6
泛读EMNLP 2023

CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models

The problem: how poorly do language models handle compound words, especially phenomena requiring decompounding, and how can this be improved systematically? Compounds entangle morphology, tokenization, and semantic composition; if the tokenizer and pretraining never learn these compositional regularities, the model can suffer in generation, understanding, and transfer alike.

Benjamin Minixhofer,Jonas Pfeiffer,Ivan Vulic
tokenizersubwordevaluationDOIDBLP
6
泛读FindingsEMNLP 2023

Balaur: Language Model Pretraining with Lexical Semantic Relations

The problem: pretrained LMs are strong at surface language modeling yet remain unreliable on lexical semantic relations such as hypernymy, suggesting the hidden representations do not encode these relations well. Such gaps were usually patched downstream or at probing time; the authors instead constrain hidden states directly during pretraining, aiming to reshape lexical reasoning at the representation level.

Andrei Mircea,Jackie C. K. Cheung
McGill Universitypretraining-objectivesemantic-relationsDOIDBLP
6
泛读FindingsEMNLP 2023

MUX-PLMs: Data Multiplexing for High-throughput Language Models

Vishvak Murahari,Ameet Deshpande,Carlos E. Jimenez,Izhak Shafran,Mingqiu Wang,Yuan Cao,Karthik Narasimhan
efficiencythroughputmultiplexingDOIDBLP
6
泛读FindingsEMNLP 2023

The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models

Satya Sai Srinath Namburi,Makesh Sreedhar,Srinath Srinivasan,Frederic Sala
compressionparametric-knowledgeefficiencyDOIDBLP
6
泛读EMNLP 2023

On the Representational Capacity of Recurrent Neural Language Models

Franz Nowak,Anej Svete,Li Du,Ryan Cotterell
rnnlanguage-modelingarchitectureDOIDBLP
5
泛读FindingsEMNLP 2023

Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens

Prior psycholinguistic studies reached conflicting conclusions about how language-model quality correlates with surprisal's ability to predict human reading times; those studies varied too much in parameter count and training data, without controlled comparison.

Byung-Doh Oh,William Schuler
scalingsurprisaltraining-dataDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning

Liangming Pan,Alon Albalak,Xinyi Wang,William Yang Wang
reasoningtool-usesymbolicDOIDBLP
6
泛读EMNLP 2023

MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations

Arkil Patel,Satwik Bhattamishra,Siva Reddy,Dzmitry Bahdanau
in-context-learninggeneralizationevaluationDOIDBLP
6
泛读EMNLP 2023

Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance

Molly R. Petersen,Lonneke van der Plas
training-objectivereasoninganalogical-reasoningDOIDBLP
6
泛读EMNLP 2023

Learning From Free-Text Human Feedback - Collect New Datasets Or Extend Existing Ones?

Dominic Petrak,Nafise Sadat Moosavi,Ye Tian,Nikolai Rozanov,Iryna Gurevych
human-feedbackalignmentdatasetDOIDBLP
6
泛读EMNLP 2023

Investigating Efficiently Extending Transformers for Long Input Summarization

Jason Phang,Yao Zhao,Peter J. Liu
long-contextefficient-attentionsummarizationDOIDBLP
6
泛读EMNLP 2023

How Does Generative Retrieval Scale to Millions of Passages?

Generative retrieval uses a seq2seq model to generate document identifiers directly, and works well on small corpora, but whether it can scale to millions of passages remained an open question. This work systematically evaluates that scaling bottleneck.

Ronak Pradeep,Kai Hui,Jai Gupta,Ádám D. Lelkes,Honglei Zhuang,Jimmy Lin,Donald Metzler,Vinh Q. Tran
University of WaterlooGoogle Researchgenerative-retrievalscalingDOIDBLP
6
泛读EMNLP 2023

ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness

Existing evaluation of LLM reasoning chains (chain-of-thought) mostly checks whether the final answer is correct, ignoring the quality of the reasoning process itself. ReCEval evaluates each step of a chain along two axes: correctness and informativeness.

Archiki Prasad,Swarnadeep Saha,Xiang Zhou,Mohit Bansal
UNC Chapel HillreasoningcotevaluationDOIDBLP
6
泛读EMNLP 2023

EpiK-Eval: Evaluation for Language Models as Epistemic Models

When LLMs are used as knowledge bases, the consistency of their internal knowledge (epistemic consistency) lacked systematic evaluation. EpiK-Eval builds an evaluation framework testing whether a model answers the same fact consistently under different phrasings.

Gabriele Prato,Jerry Huang,Prasanna Parthasarathi,Shagun Sodhani,Sarath Chandar
MilaPolytechnique MontréalknowledgeevaluationhallucinationDOIDBLP
6
泛读EMNLP 2023

Automatic Prompt Optimization with "Gradient Descent" and Beam Search

Hand-writing LLM prompts is slow and unstable, and most automatic prompt-optimization methods need gradients or lots of labeled data. This work optimizes prompts with natural-language "gradients" (textual feedback) and beam search, without access to model parameters.
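
The loop can be sketched as below. All three callables (`score_fn`, `feedback_fn`, `edit_fn`) stand in for LLM calls and dev-set evaluation in the real method; here they are deterministic stubs, so this is only a shape-of-the-algorithm sketch, not the paper's implementation.

```python
def expand(prompt, feedback_fn, edit_fn, width=2):
    """One APO-style step: get natural-language 'gradients' (critiques)
    for a prompt, then apply each critique to produce an edited candidate."""
    critiques = feedback_fn(prompt)[:width]
    return [edit_fn(prompt, c) for c in critiques]

def beam_search_prompts(seed, score_fn, feedback_fn, edit_fn, beam=2, steps=2):
    frontier = [seed]
    for _ in range(steps):
        candidates = list(frontier)
        for p in frontier:
            candidates += expand(p, feedback_fn, edit_fn)
        # Keep the top-`beam` prompts by task score (dev-set accuracy in the paper).
        frontier = sorted(set(candidates), key=score_fn, reverse=True)[:beam]
    return frontier[0]

# Stub "LLM" calls: the toy score simply favors longer, more specific prompts.
score = len
feedback = lambda p: ["be specific", "add examples"]
edit = lambda p, c: p + " (" + c + ")"
print(beam_search_prompts("Classify sentiment.", score, feedback, edit))
```

The beam keeps several promising prompts alive at once, so one bad critique cannot kill the search the way greedy hill-climbing can.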

Reid Pryzant,Dan Iter,Jerry Li,Yin Tat Lee,Chenguang Zhu,Michael Zeng
Microsoft Researchprompt-optimizationapoDOIDBLP
6
泛读FindingsEMNLP 2023

Romanization-based Large-scale Adaptation of Multilingual Language Models

Multilingual LMs under-cover low-resource languages, especially those written in non-Latin scripts, because both tokenizers and pretraining data skew toward high-resource languages. This work explores romanization (transliterating non-Latin scripts into Latin letters) to bridge the gap.

Sukannya Purkayastha,Sebastian Ruder,Jonas Pfeiffer,Iryna Gurevych,Ivan Vulic
Google DeepMindTU DarmstadtUniversity of CambridgemultilingualtokenizerromanizationDOIDBLP
6
泛读EMNLP 2023

Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models

Do multilingual LMs answer the same fact consistently across languages? Prior work focused mainly on monolingual factual accuracy; cross-lingual factual consistency lacked systematic study.

Jirui Qi,Raquel Fernández,Arianna Bisazza
University of GroningenmultilingualknowledgeconsistencyDOIDBLP
6
泛读EMNLP 2023

Is ChatGPT a General-Purpose Natural Language Processing Task Solver?

Pins down the true capability of RLHF-trained frontier models (e.g., ChatGPT) on traditional core NLP tasks, testing whether they can already fully replace small models fine-tuned for specific tasks.

Chengwei Qin,Aston Zhang,Zhuosheng Zhang,Jiaao Chen,Michihiro Yasunaga,Diyi Yang
AmazonStanford UniversitySJTUchatgptbenchmarkevaluationDOIDBLP
6
泛读EMNLP 2023

Accelerating Toeplitz Neural Network with Constant-time Inference Complexity

Addresses the problem that non-self-attention architectures such as the Toeplitz Neural Network (TNN) cannot reach RNN-style O(1) per-step complexity in autoregressive inference, causing high generation latency.

Zhen Qin,Yiran Zhong
SenseTimeOpenGVLabefficient-attentiontoeplitzinferenceDOIDBLP
6
泛读EMNLP 2023

Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition

Addresses end-to-end speech recognition (e.g., Whisper) producing spelling or grammatical errors on domain-specific or long-tail vocabulary, where retraining the acoustic model is prohibitively expensive.

Srijith Radhakrishnan,Chao-Han Huck Yang,Sumeer Ahmad Khan,Rohit Kumar,Narsis A. Kiani,David Gomez-Cabrero,Jesper Tegnér
speechllamacross-modalDOIDBLP
6
泛读FindingsEMNLP 2023

Dissecting In-Context Learning of Translations in GPT-3

Investigates the in-context learning (ICL) mechanism of large models in machine translation, testing whether it relies merely on format recognition (as found for classification tasks) or genuinely exploits the mappings in the demonstrations.

Vikas Raunak,Arul Menezes,Hany Hassan Awadalla
Microsoftin-context-learningtranslationgpt-3DOIDBLP
6
泛读EMNLP 2023

Axiomatic Preference Modeling for Longform Question Answering

The problem: preference modeling for long-form QA is unstable. The same answer can be good or bad along several dimensions at once (factuality, coverage, structure, citation practice), and simple pairwise preference learning conflates these criteria. RLHF and rankers usually learn a single black-box score from overall preference labels, which in long-form QA is hard to interpret and hard to steer.

Corby Rosset,Guoqing Zheng,Victor Dibia,Ahmed Awadallah,Paul N. Bennett
preference-modelingreward-modellong-formDOIDBLP
6
泛读EMNLP 2023

Outlier Dimensions Encode Task Specific Knowledge

The question: are outlier dimensions in representations mere numerical anomalies, or do they carry functional task knowledge? Discussion of activation outliers had centered on quantization, numerical stability, and representation geometry, treating them as engineering noise or byproducts rather than interpretable knowledge carriers.

William Rudman,Catherine Chen,Carsten Eickhoff
outlier-dimensionsrepresentationsinterpretabilityDOIDBLP
6
泛读FindingsEMNLP 2023

Plausibility Processing in Transformer Language Models: Focusing on the Role of Attention Heads in GPT

The goal: how Transformer LMs process plausibility, i.e., whether a sentence fits commonsense or semantic reasonableness, and specifically which attention heads in GPT carry this processing. Much prior work showed that models distinguish plausible from implausible sentences overall, but rarely pushed the responsibility down to head-level mechanisms.

Soo Ryu
attention-headsgptinterpretabilityDOIDBLP
6
泛读FindingsEMNLP 2023

SHARCS: Efficient Transformers Through Routing with Dynamic Width Sub-networks

The problem: Transformer training and inference are expensive, while static width reduction sacrifices capacity uniformly across all inputs. Common fixes are distillation, pruning, or early exit, but these typically fix the compute budget and cannot allocate width by sample difficulty.

Mohammadreza Salehi,Sachin Mehta,Aditya Kusupati,Ali Farhadi,Hannaneh Hajishirzi
routingdynamic-widthefficient-transformerDOIDBLP
6
泛读EMNLP 2023

Multilingual Pixel Representations for Translation and Effective Cross-lingual Transfer

The problem: cross-lingual text modeling over-relies on discrete subword vocabularies, making transfer unstable for languages with heavy spelling variation, scarce resources, or distinct scripts. Multilingual NMT and LMs usually transfer via a shared subword vocabulary, but with fragmented scripts or rich morphology this route brings high OOV rates and unfair vocabulary allocation.

Elizabeth Salesky,Neha Verma,Philipp Koehn,Matt Post
tokenizerpixelmultilingualDOIDBLP
6
泛读FindingsEMNLP 2023

Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback

Under RLHF, reward models systematically score longer responses higher, so PPO-trained models grow ever more verbose. The authors aim to remove the length bias without sacrificing preference-alignment quality.
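
As one generic illustration of length debiasing (not necessarily the paper's mechanism), the component of reward that is linearly predictable from length alone can be fitted and subtracted, keeping the residual as the training signal. The function name and the single-feature least-squares fit are assumptions for the sketch.

```python
def debias_rewards(rewards, lengths):
    """Subtract the linear component of reward explainable by length alone
    (ordinary least squares on one feature), keeping the residual as reward."""
    n = len(rewards)
    mean_r = sum(rewards) / n
    mean_l = sum(lengths) / n
    cov = sum((l - mean_l) * (r - mean_r) for l, r in zip(lengths, rewards))
    var = sum((l - mean_l) ** 2 for l in lengths)
    slope = cov / var  # how much reward rises per extra token, on average
    return [r - slope * (l - mean_l) for r, l in zip(rewards, lengths)]

# Longer answers get systematically higher raw reward in this toy batch.
lengths = [10, 20, 30, 40]
rewards = [1.0, 2.0, 3.0, 4.5]
print(debias_rewards(rewards, lengths))
```

After debiasing, the reward spread across the batch shrinks to what length cannot explain, so PPO no longer gets a free gradient toward verbosity.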

Wei Shen,Rui Zheng,WenYu Zhan,Jun Zhao,Shihan Dou,Tao Gui,Qi Zhang,Xuanjing Huang
Fudan Universityrlhflength-biasreward-hackingDOIDBLP
6
泛读FindingsEMNLP 2023

Getting MoRE out of Mixture of Language Model Reasoning Experts

A single large model performs unevenly across different types of reasoning tasks, and one catch-all model is neither economical nor reliably effective. The authors combine multiple reasoning models with different specialties into an MoE-style routed system.

Chenglei Si,Weijia Shi,Chen Zhao,Luke Zettlemoyer,Jordan L. Boyd-Graber
University of MarylandUniversity of Washingtonmoereasoningexpert-modelsDOIDBLP
6
泛读FindingsEMNLP 2023

An Empirical Study of Instruction-tuning Large Language Models in Chinese

Systematically answers how much each factor contributes in Chinese instruction tuning: base LLM, parameter scale, data volume, and data recipe. Most prior work just piled data onto one base or casually swapped models, lacking controlled experiments that isolate the variables.

Qingyi Si,Tong Wang,Zheng Lin,Xu Zhang,Yanan Cao,Weiping Wang
instruction-tuningempirical-studysftDOIDBLP
6
泛读EMNLP 2023

The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models

LLMs facing unanswerable questions often confidently fabricate answers (hallucinated answerability). The authors ask whether the model internally "knows" the question is unanswerable even when the output layer fails to show it.

Aviv Slobodkin,Omer Goldman,Avi Caciularu,Ido Dagan,Shauli Ravfogel
Bar-Ilan Universityhallucinationhidden-statesinterpretabilityDOIDBLP
6
泛读EMNLP 2023

A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis

LLMs can do arithmetic such as two-digit addition and subtraction, but which components perform the computation, and how they divide the work, is unclear. The authors use causal mediation analysis to localize the specific circuits behind arithmetic reasoning.

Alessandro Stolfo,Yonatan Belinkov,Mrinmaya Sachan
ETH ZurichTechnionmechanistic-interpretabilityarithmetic-reasoningcausal-mediationDOIDBLP
6
泛读EMNLP 2023

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning

PET methods (Adapter / LoRA / Prefix, etc.) trade wins on small models, so why do the gaps shrink on large ones? The authors hypothesize that model scale washes out the design differences, and set out to verify this experimentally.

Yusheng Su,Chi-Min Chan,Jiali Cheng,Yujia Qin,Yankai Lin,Shengding Hu ... 省略 2 位作者 ... ,Xingzhi Sun,Guotong Xie,Zhiyuan Liu,Maosong Sun
Tsinghua Universityparameter-efficient-tuningscalingpeftDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks

When generative models do extractive tasks (e.g., extractive QA), the same span may be tokenized differently in the input and in the output (e.g., with or without a leading space), forcing the model to "translate" rather than "extract" and causing performance drops and hallucination. Most engineering code silently ignores this pitfall.
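The leading-space mismatch can be sketched with a toy greedy tokenizer over a made-up vocabulary (hypothetical ids, not the paper's code): the span "Paris" gets one id inside the context and a different id when it is the standalone target.

```python
# Toy vocab: some tokens carry a leading space, as in BPE-style tokenizers.
VOCAB = {"The": 0, " capital": 1, " is": 2, " Paris": 3, "Paris": 4, ".": 5}

def tokenize(text):
    """Greedy longest-match tokenization over the toy vocab."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"untokenizable at position {i}")
    return tokens

context_ids = tokenize("The capital is Paris.")  # span appears as " Paris" -> id 3
target_ids = tokenize("Paris")                   # same span alone -> id 4
inconsistent = VOCAB[" Paris"] != VOCAB["Paris"]
```

With inconsistent ids, "copying" the span from input to output is no longer an identity operation at the token level, which is the failure mode the paper studies.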

Kaiser Sun,Peng Qi,Yuhao Zhang,Lan Liu,William Yang Wang,Zhiheng Huang
AWS AI LabsUC Santa Barbaratokenizertokenization-consistencygenerative-modelsDOIarXivDBLP
6
泛读IndustryEMNLP 2023

Self-Criticism: Aligning Large Language Models with their Understanding of Helpfulness, Honesty, and Harmlessness

Xiaoyu Tan,Shaojie Shi,Xihe Qiu,Chao Qu,Zhenting Qi,Yinghui Xu,Yuan Qi
alignmentself-trainingrlhfDOIDBLP
6
泛读EMNLP 2023

Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback

RLHF fine-tuning is widely believed to destroy the good conditional-probability calibration of the pretrained model, so the output logits no longer reflect the model's true confidence.

Katherine Tian,Eric Mitchell,Allan Zhou,Archit Sharma,Rafael Rafailov,Huaxiu Yao,Chelsea Finn,Christopher D. Manning
Stanford UniversitycalibrationrlhfconfidenceDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Efficient Multilingual Language Model Compression through Vocabulary Trimming

Multilingual pretrained models (e.g., mBERT, XLM-R) carry huge vocabularies that waste substantial memory and compute when deployed for only one or a few languages downstream.

Asahi Ushio,Yi Zhou,José Camacho-Collados
Cardiff UniversitycompressionvocabularymultilingualDOIDBLP
6
泛读EMNLP 2023

Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration

Existing instruction-data synthesis methods (e.g., Self-Instruct), when applied to vertical domains, tend to generate narrow, low-diversity data that fails to cover complex domain-knowledge boundaries.

Fanqi Wan,Xinting Huang,Tao Yang,Xiaojun Quan,Wei Bi,Shuming Shi
Tencent AI Labinstruction-tuningdata-synthesisDOIDBLP
6
泛读EMNLP 2023

GradSim: Gradient-Based Language Grouping for Effective Multilingual Training

In multilingual pretraining, mixing all languages causes resource competition and negative transfer. Traditional grouping by linguistic family often fails to reflect the true dynamics of model optimization.

Mingyang Wang,Heike Adel,Lukas Lange,Jannik Strötgen,Hinrich Schütze
LMU MunichBoschmultilingualgradient-analysislanguage-groupingDOIDBLP
6
泛读FindingsEMNLP 2023

Orthogonal Subspace Learning for Language Model Continual Learning

Language models undergoing continual learning (e.g., continually injecting new knowledge or new tasks) severely forget old knowledge (catastrophic forgetting).

Xiao Wang,Tianze Chen,Qiming Ge,Han Xia,Rong Bao,Rui Zheng,Qi Zhang,Tao Gui,Xuanjing Huang
BAAI (Beijing Academy of Artificial Intelligence)continual-learningsubspaceDOIDBLP
6
泛读EMNLP 2023

Democratizing Reasoning Ability: Tailored Learning from Large Language Model

This paper asks how to transfer the reasoning ability — not merely the instruction-following ability — of closed-source large models to open-source small models with limited compute and parameters. Prior distillation work mostly used the large model as a data annotator to improve dialogue and instruction following, but reasoning depends more on process-level signals, for which single-round annotation is usually insufficient.

Zhaoyang Wang,Shaohan Huang,Yuxuan Liu,Jiahai Wang,Minghui Song,Zihan Zhang ... 省略 1 位作者 ... ,Furu Wei,Weiwei Deng,Feng Sun,Qi Zhang
distillationreasoningDOIarXivDBLP
6
泛读EMNLP 2023

kNN-LM Does Not Improve Open-ended Text Generation

This paper answers a specific but crucial question: does kNN-LM-style external-memory augmentation actually improve open-ended text generation? kNN-LM often gains on perplexity and in some constrained settings, but whether those gains translate into open-ended generation quality had not been verified with the same rigor.
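The interpolation kNN-LM performs at each step can be sketched as follows (made-up numbers and a stand-in datastore; not the paper's implementation): the base LM distribution is mixed with a distribution formed by a softmax over negative distances to retrieved neighbors.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def knn_lm_prob(lm_probs, neighbor_dists, neighbor_tokens, vocab_size,
                lam=0.25, temp=1.0):
    """Interpolate the base LM distribution with a kNN distribution built
    from retrieved datastore neighbors, kNN-LM style."""
    weights = softmax([-d / temp for d in neighbor_dists])
    knn = [0.0] * vocab_size
    for w, tok in zip(weights, neighbor_tokens):
        knn[tok] += w  # mass concentrates on neighbors' next tokens
    return [lam * k + (1 - lam) * p for k, p in zip(knn, lm_probs)]

lm = [0.5, 0.3, 0.2]
# Two close neighbors vote for token 1; one distant neighbor for token 0.
mixed = knn_lm_prob(lm, neighbor_dists=[0.1, 0.1, 5.0],
                    neighbor_tokens=[1, 1, 0], vocab_size=3)
```

The paper's question is precisely whether this reshaped distribution, which helps perplexity, also helps when the model free-runs in open-ended generation.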

Shufan Wang,Yixiao Song,Andrew Drozdov,Aparna Garimella,Varun Manjunatha,Mohit Iyyer
knn-lmretrievalgenerationDOIDBLP
6
泛读FindingsEMNLP 2023

Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models

Ruida Wang,Wangchunshu Zhou,Mrinmaya Sachan
data-synthesisdistillationDOIDBLP
6
泛读FindingsEMNLP 2023

Are Language Models Worse than Humans at Following Prompts? It's Complicated

Albert Webson,Alyssa Marie Loo,Qinan Yu,Ellie Pavlick
promptingevaluationDOIDBLP
5
泛读EMNLP 2023

Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies

The causes of performance drops in cross-lingual transfer of pretrained LMs are confounded; prior studies could not disentangle the individual effects of syntactic differences, lexical differences, embedding initialization, and other factors.

Zhengxuan Wu,Alex Tamkin,Isabel Papadimitriou
transfer-learningcross-lingualcontrolled-studyDOIarXivDBLP
7
泛读FindingsEMNLP 2023

Extrapolating Multilingual Understanding Models as Multilingual Generators

Encoder-style multilingual understanding models such as mBERT and XLM-R generate poorly. Prior fixes either retrain a unified encoder-decoder model at great compute cost, or generate far below autoregressive (AR) quality, failing to combine understanding and generation.

Bohong Wu,Fei Yuan,Hai Zhao,Lei Li,Jingjing Xu
multilingualnon-autoregressiveencoder-to-generatorDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization

LLMs generalize unevenly across languages, with low-resource languages faring worst. The core questions: are the spaces of structural concepts (e.g., syntactic structure) across languages alignable? And if so, can explicit alignment strengthen cross-lingual transfer?

Ningyu Xu,Qi Zhang,Jingting Ye,Menghan Zhang,Xuanjing Huang
Fudan Universitycross-lingual-generalizationinterpretabilitytransformersDOIarXivDBLP
6
泛读EMNLP 2023

Absolute Position Embedding Learns Sinusoid-like Waves for Attention Based on Relative Position

Some attention heads in Transformers concentrate on neighboring tokens, forming local attention patterns. What mechanism underlies this? Specifically, how do learned absolute position embeddings (APE) realize attention based on relative position?

Yuji Yamamoto,Takuya Matsuzaki
position-embeddingsinterpretabilityattention-mechanismDOIDBLP
6
泛读EMNLP 2023

Once Upon a Time in Graph: Relative-Time Pretraining for Complex Temporal Reasoning

Pretrained LMs struggle to understand and reason about temporal context in text. Existing methods directly associate text with timestamps, but such associations cannot support downstream tasks that require reasoning over temporal dependencies across pieces of knowledge.

Sen Yang,Xin Li,Lidong Bing,Wai Lam
Alibaba DAMO AcademyChinese University of Hong Kongtemporal-reasoningpretraining-objectiveknowledge-graphDOIarXivDBLP
6
泛读EMNLP 2023

Representative Demonstration Selection for In-Context Learning with Two-Stage Determinantal Point Process

Selecting demonstrations separately for each test instance in ICL is slow and impractical. The question is how to choose one representative subset that can effectively prompt different test instances of the same task.

Zhao Yang,Yuanzhe Zhang,Dianbo Sui,Cao Liu,Jun Zhao,Kang Liu
Institute of Automation, Chinese Academy of Sciencesicldemonstration-selectiondppDOIDBLP
6
泛读EMNLP 2023

Improving Summarization with Human Edits

Explores how to use human edit trajectories, rather than pure preference scores, to improve LLM summarization, while addressing the high cost of collecting quality edit data.

Zonghai Yao,Benjamin J. Schloss,Sai P. Selvaraj
human-feedbacksummarizationpreference-learningDOIarXivDBLP
6
泛读EMNLP 2023

Knowledge Rumination for Pre-trained Language Models

Addresses pretrained models' failure, on knowledge-intensive tasks, to activate and exploit the rich knowledge already encoded in their own parameters, which leads to over-reliance on external retrieval.

Yunzhi Yao,Peng Wang,Shengyu Mao,Chuanqi Tan,Fei Huang,Huajun Chen,Ningyu Zhang
Alibabaknowledgepretrained-lmrepresentationDOIarXivDBLP
6
泛读EMNLP 2023

Editing Large Language Models: Problems, Methods, and Opportunities

Systematically surveys and evaluates model-editing techniques for LLMs: how to modify domain-specific knowledge efficiently and precisely without triggering catastrophic forgetting.

Yunzhi Yao,Peng Wang,Bozhong Tian,Siyuan Cheng,Zhoubo Li,Shumin Deng,Huajun Chen,Ningyu Zhang
model-editingknowledgesurveyDOIarXivDBLP
6
泛读EMNLP 2023

Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting

Addresses the problem that in Chain-of-Thought (CoT) prompting, explanations not optimized for the task (e.g., off-the-shelf ones written by non-experts) yield mediocre downstream performance with high variance.

Xi Ye,Greg Durrett
UT Austincotpromptingunlabeled-dataDOIarXivDBLP
6
泛读FindingsEMNLP 2023

UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model

(Note: abstract missing; inferred from the title.) Aims to free multimodal large language models from dependence on external OCR systems when handling visually-situated language.

Jiabo Ye,Anwen Hu,Haiyang Xu,Qinghao Ye,Ming Yan,Guohai Xu ... 省略 4 位作者 ... ,Qin Jin,Liang He,Xin Lin,Fei Huang
Alibabamultimodalocr-freedocument-understandingDOIDBLP
6
泛读EMNLP 2023

Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation

This paper asks how to construct high-quality instruction-tuning data cheaply and scalably, instead of relying on pure human annotation or having GPT invent instructions from scratch. Prior mainstream approaches were expensive, stylistically monotonous, or failed to exploit the field semantics of existing supervised datasets; the authors' bet is that the metadata and field structure of existing annotated data are themselves a reusable source of instruction templates.

Da Yin,Xiao Liu,Fan Yin,Ming Zhong,Hritik Bansal,Jiawei Han,Kai-Wei Chang
instruction-tuningdata-curationsynthetic-dataDOIarXivDBLP
6
泛读EMNLP 2023

Answering Questions by Meta-Reasoning over Multiple Chains of Thought

This paper addresses the waste in sampling multiple CoTs and then voting only on final answers: intermediate reasoning is discarded and no unified explanation emerges. Self-consistency-style methods assume it suffices that "chains are independent; just aggregate answers at the end", but the authors argue the intermediate steps across chains contain complementary evidence that can be reasoned over again.

Ori Yoran,Tomer Wolfson,Ben Bogin,Uri Katz,Daniel Deutch,Jonathan Berant
cotreasoningself-consistencyDOIarXivDBLP
6
泛读IndustryEMNLP 2023

DocumentNet: Bridging the Data Gap in Document Pre-training

This paper addresses the lack of large-scale public data for document pretraining, especially document entity retrieval; existing data is hard to transfer because of privacy constraints and non-overlapping entity spaces. Many document tasks have been blocked by data scarcity — the issue is not weak architectures, but the absence of a unified pretraining corpus spanning document types and entity sets.

Lijun Yu,Jin Miao,Xiaoyu Sun,Jiayi Chen,Alexander G. Hauptmann,Hanjun Dai,Wei Wei
document-pretrainingdata-qualitymultimodalDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Frequency Balanced Datasets Lead to Better Language Models

This paper asks whether frequency-distribution bias in training corpora systematically hurts language models, and whether a more balanced frequency distribution yields better representations. The question has been downplayed because large-scale LM training accepts the Zipfian distribution of natural corpora by default — which does not make it optimal for the learning objective.

Rodolfo Zevallos,Mireia Farrús,Núria Bel
data-qualityfrequencypretrain-dataDOIDBLP
6
泛读FindingsEMNLP 2023

Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks

This paper asks whether one can build a truly general foundation model covering language, vision, and vision-language understanding tasks, instead of training unimodal models separately and stitching them together with bridging modules. The question matters because multimodal systems have long suffered from fragmented representation spaces, inconsistent training objectives, and awkward transfer.

Xinsong Zhang,Yan Zeng,Jipeng Zhang,Hang Li
foundation-modelvision-languageunifiedDOIDBLP
6
泛读EMNLP 2023

How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances

A survey of how LLMs cope with the fact that world knowledge changes. Pretraining freezes knowledge once, but facts update and go stale; the remedies (retraining, continual pretraining, RAG, knowledge editing, external parameter modules) differ in cost and limits, and a unified frame of reference has been missing.

Zihan Zhang,Meng Fang,Ling Chen,Mohammad-Reza Namazi-Rad,Jun Wang
University of Wollongongsurveyknowledge-updatecontinual-pretrainDOIDBLP
6
泛读EMNLP 2023

Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus

Token-level uncertainty (entropy, probability) can detect LLM hallucinations, but naively averaging uncertainty over all tokens has two problems: abundant function words dilute the signal, and the model's overconfidence is itself a systematic bias.
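The dilution problem can be illustrated with a minimal sketch (toy weighting, not the paper's method): down-weighting non-keyword tokens lets the uncertainty of a dubious entity dominate the score instead of being averaged away by confident function words.

```python
import math

def token_entropy(probs):
    """Shannon entropy of one token's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def hallucination_score(token_probs, keyword_mask,
                        keyword_weight=1.0, other_weight=0.1):
    """Weighted average of per-token entropy; keywords count more."""
    weights = [keyword_weight if kw else other_weight for kw in keyword_mask]
    ents = [token_entropy(p) for p in token_probs]
    return sum(w * e for w, e in zip(weights, ents)) / sum(weights)

# Two tokens: a confident function word and an uncertain entity keyword.
probs = [[0.98, 0.01, 0.01], [0.4, 0.3, 0.3]]
plain = hallucination_score(probs, [True, True],
                            keyword_weight=1.0, other_weight=1.0)  # flat mean
focused = hallucination_score(probs, [False, True])  # keyword-focused
```

The focused score exceeds the flat mean on this example, making the uncertain keyword easier to flag.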

Tianhang Zhang,Lin Qiu,Qipeng Guo,Cheng Deng,Yue Zhang,Zheng Zhang,Chenghu Zhou,Xinbing Wang,Luoyi Fu
Shanghai Jiao Tong UniversityWestlake UniversityhallucinationuncertaintyDOIDBLP
6
泛读EMNLP 2023

Can We Edit Factual Knowledge by In-Context Learning?

Conventional knowledge-editing methods (ROME/MEMIT, etc.) modify parameters — costly for large models and impossible for black-box API models. The authors ask: can factual knowledge be edited purely through in-context learning?

Ce Zheng,Lei Li,Qingxiu Dong,Yuxuan Fan,Zhiyong Wu,Jingjing Xu,Baobao Chang
Peking UniversityShanghai AI Labknowledge-editingin-context-learningDOIarXivDBLP
6
泛读EMNLP 2023

MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions

Existing knowledge-editing evaluations only check whether the edited fact can be restated, not whether the edit propagates to related beliefs. After changing the UK prime minister to Sunak, the answer to "who is the UK prime minister's wife" should change too — that is what a real edit looks like.

Zexuan Zhong,Zhengxuan Wu,Christopher D. Manning,Christopher Potts,Danqi Chen
Stanford UniversityPrinceton Universityknowledge-editingmulti-hopbenchmarkDOIarXivDBLP
6
泛读FindingsEMNLP 2023

On the Calibration of Large Language Models and Alignment

What happens to an LLM's confidence calibration after alignment training (RLHF/SFT)? Prior calibration studies mostly examined a single stage or task, lacking a systematic analysis of the full pretrain → alignment pipeline.

Chiwei Zhu,Benfeng Xu,Quan Wang,Yongdong Zhang,Zhendong Mao
University of Science and Technology of ChinacalibrationalignmentreliabilityDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Pretraining Language Models with Text-Attributed Heterogeneous Graphs

Existing LM pretraining uses only each entity's text and ignores the topological relations and heterogeneous type information in text-attributed heterogeneous graphs (TAHGs). In settings such as academic networks and social platforms, entities carry both text and multiple relation types; modeling text alone discards the structural signal.

Tao Zou,Le Yu,Yifei Huang,Leilei Sun,Bowen Du
Beihang Universitypretraininggraphheterogeneous-graphDOIarXivDBLP
5
泛读EMNLP 2023

Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks

Investigates how multilingual Transformer models isolate or share representations of different languages at the parameter level, particularly in the feed-forward network (FFN) layers.

Sunit Bhattacharya,Ondrej Bojar
multilingualffninterpretabilityDOIDBLP
5
泛读EMNLP 2023

Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised

Evaluates whether autoregressive language models truly understand how discourse connectives (e.g., concession or contrast) reverse semantic expectations for what follows, rather than merely fitting local word-frequency statistics.

Yan Cong,Emmanuele Chersoni,Yu-Yin Hsu,Philippe Blache
surprisaldiscourseprobingDOIDBLP
5
泛读EMNLP 2023

Why Bother with Geometry? On the Relevance of Linear Decompositions of Transformer Embeddings

Investigates whether linear decompositions of Transformer embeddings (splitting a representation into a weighted sum over inputs or components) are empirically meaningful, or merely a mathematical coincidence.

Timothee Mickus,Raúl Vázquez
University of HelsinkiinterpretabilityembeddingstransformerDOIarXivDBLP
5
泛读EMNLP 2023

Layered Bias: Interpreting Bias in Pretrained Large Language Models

Investigates how social bias in pretrained language models accumulates and distributes across Transformer layers, and how effectively debiasing methods actually intervene at each layer.

Nirmalendu Prakash,Roy Ka-Wei Lee
Singapore University of Technology and DesignbiasinterpretabilitydebiasingDOIDBLP
5
泛读EMNLP 2023

When Language Models Fall in Love: Animacy Processing in Transformer Language Models

This paper tackles a question text-only pretraining usually sidesteps: without direct access to real-world perceptual signals, do Transformer LMs truly learn animacy, a semantic property that depends on commonsense and selectional restrictions? Prior work mostly verified human-like behavior on typical, easy-to-judge nouns, which does not establish that models master finer-grained, non-explicit animacy.

Michael Hanna,Yonatan Belinkov,Sandro Pezzelle
probinglinguisticsanalysisDOIarXivDBLP
5
泛读EMNLP 2023

Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning across Languages

This paper addresses the instability of zero-shot CoT prompts outside English, especially for low-resource languages or those far from English in distribution. Zero-shot prompting has largely assumed "the same prompt works within one language" and never systematically separated cross-lingual alignment from the solving stage, so multilingual CoT has lacked a simple, transferable method.

Libo Qin,Qiguang Chen,Fuxuan Wei,Shijue Huang,Wanxiang Che
cotmultilingualpromptingDOIarXivDBLP
5
泛读FindingsEMNLP 2023

The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation

This paper asks why end-to-end SLU, especially cross-lingual SLU, remains hard despite large-scale text and speech pretraining. The authors argue that existing speech representations stay at low-level acoustic patterns, while SLU needs higher-level semantic abstraction and cross-lingual alignment; the traditional pipeline learns ASR, MT, and NLU separately, with a long information chain and compounding errors, hence the need for a speech pretraining objective closer to semantics.

Mutian He,Philip N. Garner
speechslutranslationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

KEPLET: Knowledge-Enhanced Pretrained Language Model with Topic Entity Awareness

This paper addresses a gap in knowledge-enhanced PLMs for entity modeling on Wikipedia: they exploit mention-entity interactions but ignore a strong structural prior — each page is organized around one topic entity. Prior methods treated in-page entities as local mentions without explicitly encoding "who the whole page is about", leaving entity-relation modeling incomplete and, in particular, missing cross-sentence and document-level topical constraints.

Yichuan Li,Jialong Han,Kyumin Lee,Chengyuan Ma,Benjamin Z. Yao,Xiaohu Liu
knowledge-enhancedplmentityDOIarXivDBLP
5
泛读FindingsEMNLP 2023

ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding

This paper addresses the lack of a truly zero-shot benchmark for long-text understanding. Earlier long-text benchmarks came with training sets, letting models gain from task-specific adaptation; that conflates "long-text understanding from pretraining itself" with "adaptation from extra supervision". Moreover, key abilities such as cross-passage aggregation and information fusion were not adequately covered.

Uri Shaham,Maor Ivgi,Avia Efrat,Jonathan Berant,Omer Levy
long-contextbenchmarkzero-shotDOIarXivDBLP
5
泛读EMNLP 2023

Unraveling Feature Extraction Mechanisms in Neural Networks

This paper asks by what mechanism neural networks extract statistical features during training and integrate them into final decisions. Prior analyses of internal mechanisms largely stayed at empirical probing or case-based explanations, lacking a unified theoretical tool connecting training dynamics to feature formation; the authors therefore turn to the NTK framework, aiming to pin down "what is learned, and when" in an analyzable setting.

Xiaobing Sun,Jiaxi Li,Wei Lu
ntkfeature-extractionlearning-dynamicsDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning

This paper addresses the high variance of few-shot ICL and points to an easily overlooked source: even with demonstration count, order, and format fixed, swapping in a different random set of examples shifts performance significantly. Prior work optimized templates, ordering, or label balance, but rarely asked directly which samples are intrinsically more informative, leaving example selection heuristic.

Hongfu Liu,Ye Wang
in-context-learningprompt-selectioninformation-theoryDOIarXivDBLP
4
FindingsEMNLP 2023

A Joint Matrix Factorization Analysis of Multilingual Representations

Existing analyses of multilingual representations (e.g., probes) handle one language and one feature at a time; they cannot jointly contrast the hidden representations of multiple languages and models, precluding systematic study of how morphosyntactic features are encoded.

Zheng Zhao,Yftah Ziser,Bonnie Webber,Shay B. Cohen
multilingualrepresentation-analysismatrix-factorizationDOIarXivDBLP
5
泛读EMNLP 2023

Explicit Planning Helps Language Models in Logical Reasoning

LLMs do poorly at multi-step logical reasoning; approaches such as chain-of-thought (CoT) reason implicitly, cannot plan the reasoning path in advance, are easily misled by spurious features, and leave the reasoning process uncontrollable.

Hongyu Zhao,Kangrui Wang,Mo Yu,Hongyuan Mei
logical-reasoningplanningchain-of-thoughtDOIarXivDBLP
4
EMNLP 2023

Can We Edit Multimodal Large Language Models?

Editing multimodal LLMs (MLLMs) lacks a unified benchmark and evaluation protocol; existing editing methods all target unimodal LLMs and ignore cross-modal alignment interference, making edits unstable and uncontrollable.

Siyuan Cheng,Bozhong Tian,Qingbin Liu,Xi Chen,Yongheng Wang,Huajun Chen,Ningyu Zhang
knowledge-editingmultimodal-llmbenchmarkDOIarXivDBLP
5
泛读EMNLP 2023

Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning and Coding with LLMs

Pranjal Aggarwal,Aman Madaan,Yiming Yang,Mausam
adaptive-consistencyreasoninginference-efficiencyDOIDBLP
5
泛读EMNLP 2023

MEGA: Multilingual Evaluation of Generative AI

Multilingual evaluation of generative models previously covered only a few high-resource Indo-European languages, lacking a unified benchmark spanning language families and resource levels, so models' real performance gaps across language types could not be measured.

Kabir Ahuja,Harshita Diddee,Rishav Hada,Millicent Ochieng,Krithika Ramesh,Prachi Jain ... 省略 2 位作者 ... ,Sameer Segal,Mohamed Ahmed,Kalika Bali,Sunayana Sitaram
multilingualevaluationbenchmarkDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering

How context quality during training affects retrieval-augmented generation (RAG) had not been studied systematically; existing Fusion-in-Decoder (FiD) models are trained on high-quality retrieved contexts by default, so performance collapses when inference-time contexts are poor.

Kosuke Akimoto,Kunihiro Takeoka,Masafumi Oyamada
ragfiddata-qualityDOIarXivDBLP
6
泛读EMNLP 2023

Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?

How much pretrained language models (PLMs) depend on the character information of input tokens had not been quantified; prior work assumed full character information is needed to learn adequate representations and reach usable downstream performance.

Ahmed Alajrami,Katerina Margatina,Nikolaos Aletras
tokenizerinput-representationrobustnessDOIarXivDBLP
4
EMNLP 2023

Skill-Based Few-Shot Selection for In-Context Learning

Existing few-shot selection methods for in-context learning (ICL) rely on surface semantic similarity from pretrained embeddings, are distracted by task-irrelevant surface features, and mostly require extra fine-tuning, so they cannot adapt to dynamically growing example pools.

Shengnan An,Bo Zhou,Zeqi Lin,Qiang Fu,Bei Chen,Nanning Zheng,Weizhu Chen,Jian-Guang Lou
iclfew-shot-selectionin-context-learningDOIarXivDBLP
4
EMNLP 2023

Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models

Existing LLM reasoning benchmarks are too easy and nearly saturated by the latest models; a hard benchmark demanding deep domain knowledge plus long-chain reasoning is missing, so the boundary of LLMs' complex problem-solving ability cannot be measured accurately.

Daman Arora,Himanshu Gaurav Singh,Mausam
benchmarkreasoningmath-reasoningDOIarXivDBLP
5
泛读EMNLP 2023

Make Every Example Count: On the Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets

Irina Bejan,Artem Sokolov,Katja Filippova
data-qualityinfluence-functionsnoisy-dataDOIDBLP
5
泛读EMNLP 2023

Optimizing Retrieval-augmented Reader Models via Token Elimination

Fusion-in-Decoder (FiD) decoding speed is bottlenecked by the many retrieved context tokens, causing high inference latency; existing optimizations mostly pay a significant performance cost and cannot balance speed and quality.

Moshe Berchansky,Peter Izsak,Avi Caciularu,Ido Dagan,Moshe Wasserblat
ragtoken-eliminationinference-efficiencyDOIarXivDBLP
5
泛读EMNLP 2023

A Video Is Worth 4096 Tokens: Verbalize Story Videos To Understand Them In Zero Shot

Aanisha Bhattacharyya,Yaman Singla,Balaji Krishnamurthy,Rajiv Ratn Shah,Changyou Chen
videocaptioningzero-shotDOIDBLP
5
泛读EMNLP 2023

q2d: Turning Questions into Dialogs to Teach Models How to Search

Training a dialogue model to generate search queries requires large amounts of human-annotated dialogues with search actions — expensive and slow to collect, and unable to cover new search domains quickly.

Yonatan Bitton,Shlomi Cohen-Ganor,Ido Hakimi,Yoad Lewenberg,Roee Aharoni,Enav Weinreb
searchdata-synthesistool-useDOIarXivDBLP
2
EMNLP 2023

ScanDL: A Diffusion Model for Generating Synthetic Scanpaths on Texts

Reading eye-tracking data is scarce and unavailable at deployment time, limiting research directions such as LM interpretability and pretraining augmentation; rule-based synthetic scanpaths from cognitive models have low fidelity, and existing purely data-driven synthesis has clear limitations.

Lena S. Bolliger,David R. Reich,Patrick Haller,Deborah N. Jakobi,Paul Prasse,Lena A. Jäger
diffusionsynthetic-datainterpretabilityDOIarXivDBLP
4
IndustryEMNLP 2023

Efficient Transformer Knowledge Distillation: A Performance Review

Prior work explored efficient attention mechanisms and distillation-based compression separately, never validating the effectiveness and cost-benefit of combining them, so long-context model compression lacks a unified reference point.

Nathan Brown,Ashton Williamson,Tahj Anderson,Logan Lawrence
distillationefficient-attentioncompressionDOIarXivDBLP
3
EMNLP 2023

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

Multimodal pretrained models amplify gender bias in their training data, but prior work did not separate bias amplification in pretraining from that in fine-tuning, so bias sources could not be localized and mitigation lacked focus.

Laura Cabello,Emanuele Bugliarello,Stephanie Brandl,Desmond Elliott
vlmbiasevaluationDOIarXivDBLP
3
DemoEMNLP 2023

H2O Open Ecosystem for State-of-the-art Large Language Models

Closed-source LLMs carry bias, privacy, copyright, and safety risks, while the open-source LLM toolchain for development, fine-tuning, and deployment is fragmented and hard to use, keeping small and mid-sized teams from shipping quickly.

Arno Candel,Jon McKinney,Philipp Singer,Pascal Pfeiffer,Maximilian Jeblick,Chun Ming Lee,Marcos V. Conde
H2O.aiopen-sourcellmecosystemDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions

Retrieval-augmented chain-of-thought reasoning has two core flaws: wrong retrieval results mislead the reasoning, and local errors in a single chain propagate along it with no way to backtrack and correct, yielding low accuracy on knowledge-intensive complex QA.

Shulin Cao,Jiajie Zhang,Jiaxin Shi,Xin Lv,Zijun Yao,Qi Tian,Lei Hou,Juanzi Li
tree-of-thoughtragreasoningDOIarXivDBLP
3
EMNLP 2023

ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness

Crowdsourcing paraphrase data for intent classification is costly and slow, with limited diversity; whether LLM-generated paraphrases can match crowdsourced data for training had not been verified.

Ján Cegin,Jakub Simko,Peter Brusilovsky
data-synthesisparaphraseintentDOIarXivDBLP
5
泛读EMNLP 2023

Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents

Hyungjoo Chae,Yongho Song,Kai Tzu-iunn Ong,Taeyoon Kwon,Minjin Kim,Youngjae Yu,Dongha Lee,Dongyeop Kang,Jinyoung Yeo
distillationcotdialogDOIDBLP
5
泛读FindingsEMNLP 2023

Selective Demonstrations for Cross-domain Text-to-SQL

Cross-domain text-to-SQL lacks labeled target-domain data, so in-context learning (ICL) cannot use high-quality in-domain demonstrations, limiting performance.

Shuaichen Chang,Eric Fosler-Lussier
icldemonstration-selectiontext-to-sqlDOIarXivDBLP
5
泛读EMNLP 2023

Cabbage Sweeter than Cake? Analysing the Potential of Large Language Models for Learning Conceptual Spaces

Can text-only pretrained LLMs, without any multimodal perceptual input, learn conceptual spaces of physical/perceptual properties (e.g., color, taste, size) that match human cognition?

Usashi Chatterjee,Amit Gajbhiye,Steven Schockaert
concept-representationsemanticsanalysisDOIarXivDBLP
5
泛读EMNLP 2023

Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization

The pretraining-data cutoff causes LLMs to make factual errors and degrade when processing text with future timestamps (e.g., news summarization).

Chi Seng Cheang,Hou Pong Chan,Derek F. Wong,Xuebo Liu,Zhaocong Li,Yanming Sun,Shudong Liu,Lidia S. Chao
generalizationdistribution-shiftevaluationDOIDBLP
5
泛读EMNLP 2023

Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation

Text-generation decoding faces an inherent trade-off between maintaining diversity (avoiding repetition) and staying faithful to the source (avoiding hallucination).

Wei-Lin Chen,Cheng-Kuang Wu,Hsin-Hsi Chen,Chung-Chi Chen
decodinggenerationcontrastive-searchDOIDBLP
5
泛读EMNLP 2023

Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations

Zero-shot settings lack externally annotated demonstrations, so the strong in-context learning (ICL) ability of LLMs cannot be activated.

Wei-Lin Chen,Cheng-Kuang Wu,Yun-Nung Chen,Hsin-Hsi Chen
iclzero-shotself-generationDOIDBLP
5
泛读EMNLP 2023

STAIR: Learning Sparse Text and Image Representation in Grounded Tokens

This paper addresses a dilemma in joint image-text representation: feeding all image patches/tokens into the model is computationally heavy and redundant, while coarse-grained alignment discards fine-grained grounding. The authors set out to learn a sparse representation grounded in tokens, so that text and images interact only through a few key units.

Chen Chen,Bowen Zhang,Liangliang Cao,Jiguang Shen,Tom Gunter,Albin Madappally Jose ... 省略 1 位作者 ... ,Yantao Zheng,Jonathon Shlens,Ruoming Pang,Yinfei Yang
sparse-representationvision-languagetokensDOIDBLP
5
泛读EMNLP 2023

How do languages influence each other? Studying cross-lingual data sharing during LM fine-tuning

This paper studies how languages actually share data, help, or interfere with one another during multilingual fine-tuning. The question is usually glossed as "high-resource helps low-resource" or "related languages help more", but the real sharing mechanisms and their limits are unclear.

Rochelle Choenni,Dan Garrette,Ekaterina Shutova
multilingualcross-lingualfinetuningDOIDBLP
5
泛读IndustryEMNLP 2023

EELBERT: Tiny Models through Dynamic Embeddings

This paper targets the outsized share of parameters consumed by the embedding layer in small Transformer models. In models like BERT-tiny, the input vocabulary embeddings are often not the most useful parameters yet swallow much of the storage; conventional compression focuses on depth, width, or distillation and treats this large embedding block too gently.

Gabrielle Cohn,Rishika Agarwal,Deepanshu Gupta,Siddharth Patwardhan
compressionembeddingsbertDOIarXivDBLP
5
泛读EMNLP 2023

Using Artificial French Data to Understand the Emergence of Gender Bias in Transformer Language Models

This paper asks how gender bias in Transformer LMs grows out of the training-data distribution, and under what conditions models truly learn grammatical gender rules rather than merely biased co-occurrences. Such studies usually diagnose real corpora, where too many variables make it hard to separate "learning linguistic regularities" from "amplifying social bias".

Lina Conti,Guillaume Wisniewski
biassynthetic-dataanalysisDOIarXivDBLP
5
泛读EMNLP 2023

Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model

Controlled generation without touching the LM's parameters: steer attributes (toxicity, sentiment) while slowing decoding as little as possible. Prior FUDGE-style methods rescore every step with a bidirectional classifier, which is expensive.
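The reward-augmented step can be sketched as rescoring only the top-k candidate tokens with a reward model before picking greedily (a minimal sketch with a stand-in `reward_fn`; not the paper's system or its unidirectional reward model):

```python
def reward_augmented_step(logprobs, reward_fn, prefix, top_k=2, beta=1.0):
    """One decoding step: rescore the top-k candidates with a reward on the
    would-be continuation, then pick the best combined score."""
    candidates = sorted(range(len(logprobs)),
                        key=lambda t: logprobs[t], reverse=True)[:top_k]
    scored = [(logprobs[t] + beta * reward_fn(prefix + [t]), t)
              for t in candidates]
    return max(scored)[1]

# Stand-in reward: penalize token 0 (imagine it is a "toxic" word).
def reward_fn(seq):
    return -2.0 if seq[-1] == 0 else 0.0

next_token = reward_augmented_step([0.9, 0.5, 0.1], reward_fn, prefix=[])
```

Restricting reward calls to the top-k candidates, rather than the whole vocabulary at every step, is what keeps this family of methods cheap relative to full per-token rescoring.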

Haikang Deng,Colin Raffel
UNC Chapel Hilldecodingreward-modelcontrolled-generationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Attack Prompt Generation for Red Teaming and Defending Large Language Models

How can adversarial prompts be generated systematically for red-teaming LLMs, and then used to train more robust defenses? Past red-team data came from humans or simple templates, with narrow coverage.

Boyi Deng,Wenjie Wang,Fuli Feng,Yang Deng,Qifan Wang,Xiangnan He
USTCMeta AIred-teamingsafetyadversarialDOIDBLP
5
泛读EMNLP 2023

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding

CoT has been treated as exclusive to generative LLMs; MLMs (BERT-style models), being non-autoregressive, seem inherently unable to reason step by step. The authors test whether MLMs can also gain from CoT on NLU tasks.

Caoyun Fan,Jidong Tian,Yitian Li,Wenqing Chen,Hao He,Yaohui Jin
chain-of-thoughtmasked-lmnluDOIDBLP
5
泛读EMNLP 2023

Byte Pair Encoding for Symbolic Music

Common tokenizations for symbolic music (MIDI, etc.) either use event sequences — small vocabulary but very long sequences — or hand-crafted merges, lacking an automated compression scheme. The authors ask whether BPE can simply be applied to symbolic music.
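The core move — running standard BPE merges over a stream of music-event ids — can be sketched in a few lines (toy event ids; real symbolic-music BPE operates on event vocabularies from tokenizations such as REMI or TSD):

```python
from collections import Counter

def bpe_train(seq, num_merges, first_new_id):
    """Learn BPE merges over a symbolic-music token stream: repeatedly
    replace the most frequent adjacent pair with a new token id."""
    merges = {}
    next_id = first_new_id
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges[(a, b)] = next_id
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(next_id)  # merged super-token
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_id += 1
    return seq, merges

# A repetitive note-event pattern: (pitch=60, duration=8) occurs often.
stream = [60, 8, 60, 8, 62, 4, 60, 8]
compressed, merges = bpe_train(stream, num_merges=1, first_new_id=100)
```

Frequent pitch-duration pairs collapse into single super-tokens, shortening sequences at the cost of a larger vocabulary — the same trade-off BPE makes for text.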

Nathan Fradet,Nicolas Gutowski,Fabien Chhel,Jean-Pierre Briot
tokenizerbpesymbolic-musicDOIDBLP
5
泛读FindingsEMNLP 2023

Estimating Large Language Model Capabilities without Labeled Test Data

How can an LLM's ability on a downstream task be estimated without labeled test data? A real pain point in serving and model selection: annotation is expensive and new tasks keep appearing.

Harvey Yiyun Fu,Qinyuan Ye,Albert Xu,Xiang Ren,Robin Jia
USCevaluationllm-capabilitiesDOIDBLP
5
泛读FindingsEMNLP 2023

Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation

ChatGPT often looks strong on surface-level language tasks, but how good is it at genuine causal reasoning? The authors build a systematic causal-reasoning evaluation.

Jinglong Gao,Xiao Ding,Bing Qin,Ting Liu
Harbin Institute of Technology (SCIR)causal-reasoningevaluationchatgptDOIDBLP
2
EMNLP 2023

From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation

For explanation generation in visual reasoning with pretrained multimodal models, annotated data is scarce, single-pass explanations have low accuracy, and errors cannot be corrected, undermining the explanations' credibility.

Jiaxin Ge,Sanjay Subramanian,Trevor Darrell,Boyi Li
vlmvisual-reasoningexplanation-generationDOIarXivDBLP
5
泛读EMNLP 2023

Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories

Zero-shot dense retrievers generalize poorly across domains, and existing augmentation schemes cannot flexibly plug in new domain memories at inference time: the model must be retrained, making adaptation expensive.

Suyu Ge,Chenyan Xiong,Corby Rosset,Arnold Overwijk,Jiawei Han,Paul Bennett
retrieval-augmentationzero-shot-generalizationmemory-mechanismDOIarXivDBLP
4
EMNLP 2023

TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models

Existing factual-consistency evaluation has clear flaws: NLI-based small models generalize poorly to generation tasks such as summarization, while evaluating directly with large models is too expensive to deploy. Earlier synthetic training data for NLI models was built by perturbing human-written summaries, mismatching the error distribution of real model-generated summaries and limiting the resulting evaluators.

Zorik Gekhman,Jonathan Herzig,Roee Aharoni,Chen Elkind,Idan Szpektor
factual-consistencysynthetic-dataevaluationDOIarXivDBLP
5
泛读EMNLP 2023

Grammar-Constrained Decoding for Structured NLP Tasks without Finetuning

LLMs generating complex structured outputs zero-shot easily violate format requirements; existing grammar-constrained decoding targets specific tasks such as parsing and code generation rather than general structured NLP, and each new task needs separate model fine-tuning or custom decoding logic, raising deployment costs.
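The basic mechanism of grammar-constrained decoding — mask out whatever the grammar's automaton disallows at each step, then decode normally — can be sketched with a toy DFA for the language `a+b+` (hypothetical grammar and scores, not the paper's machinery):

```python
# Toy DFA accepting a+b+ : one or more 'a', then one or more 'b'.
DFA = {
    ("start", "a"): "as",
    ("as", "a"): "as",
    ("as", "b"): "bs",
    ("bs", "b"): "bs",
}
ACCEPTING = {"bs"}
VOCAB = ["a", "b", "<eos>"]

def constrained_greedy_decode(logits_per_step):
    """Greedy decoding restricted to grammar-legal tokens at every step."""
    state, out = "start", []
    for logits in logits_per_step:
        scored = []
        for tok, score in zip(VOCAB, logits):
            if tok == "<eos>":
                ok = state in ACCEPTING  # may only stop on a valid string
            else:
                ok = (state, tok) in DFA
            if ok:
                scored.append((score, tok))
        _, tok = max(scored)
        if tok == "<eos>":
            break
        state = DFA[(state, tok)]
        out.append(tok)
    return "".join(out)

# Unconstrained argmax would emit "b" first; the grammar forbids it.
steps = [[0.1, 0.9, 0.8], [0.2, 0.7, 0.1], [0.3, 0.2, 0.9]]
decoded = constrained_greedy_decode(steps)
```

The output is guaranteed to be in the grammar's language regardless of the raw scores, which is exactly the property the paper generalizes to task-agnostic structured NLP.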

Saibo Geng,Martin Josifoski,Maxime Peyrard,Robert West
constrained-decodingstructured-generationinferenceDOIarXivDBLP
1
FindingsEMNLP 2023

KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model

Existing biomedical pretrained models are mostly monolingual; non-English biomedical corpora and cross-lingual parallel corpora are extremely scarce, failing the fast-growing needs of cross-lingual biomedical NLP applications.

Lei Geng,Xu Yan,Ziqiang Cao,Juntao Li,Wenjie Li,Sujian Li,Xinjie Zhou,Yang Yang,Jun Zhang
domain-specific-pretrainingmultilingualbiomedicalDOIarXivDBLP
4
EMNLP 2023

What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability

Existing uncertainty evaluation of text-generation models is not aligned with the inherent variability of human production, so it cannot separate a model's epistemic uncertainty from the data's intrinsic randomness, tends to overestimate model error, and provides no accurate signal for decoding-strategy design.

Mario Giulianelli,Joris Baan,Wilker Aziz,Raquel Fernández,Barbara Plank
uncertaintynlgevaluationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Decoding Stumpers: Large Language Models vs. Human Problem-Solvers

Existing evaluations of LLM problem-solving have not been contrasted with humans' intuitive reasoning, leaving the capability boundary between LLMs and humans on single-step intuitive problems unclear — unhelpful for designing hybrid reasoning systems.

Alon Goldstein,Miriam Havin,Roi Reichart,Ariel Goldstein
reasoningevaluationproblem-solvingDOIarXivDBLP
5
泛读EMNLP 2023

Understanding the Effect of Model Compression on Social Bias in Large Language Models

How does model compression (quantization, knowledge distillation) affect the social biases already present in LLMs? Bias mitigation and compression each have extensive literatures, but their interaction has barely been studied systematically, leaving bias risk uncontrolled when deploying compressed models.

Gustavo Gonçalves,Emma Strubell
CMUAllen AImodel-compressionbiasllm-evaluationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Robustness of Named-Entity Replacements for In-Context Learning

Studies how named-entity replacement affects model robustness in ICL: when the named entities in the demonstrations are replaced, does the LLM's ICL performance stay stable?

Saeed Goodarzi,Nikhil Kagita,Dennis Minn,Shufan Wang,Roberto Dessì,Shubham Toshniwal,Adina Williams,Jack Lanchantin,Koustuv Sinha
iclrobustnessnamed-entitiesDOIDBLP
5
泛读FindingsEMNLP 2023

STEER: Unified Style Transfer with Expert Reinforcement

Text style transfer usually assumes the source style is known, but in practice it often is not. This work tackles arbitrary-to-target style transfer, without parallel data.

Skyler Hallinan,Faeze Brahman,Ximing Lu,Jaehun Jung,Sean Welleck,Yejin Choi
University of WashingtonAllen AIreinforcement-learningstyle-transferalignmentDOIarXivDBLP
5
泛读IndustryEMNLP 2023

On Sample-Efficient Code Generation

Hojae Han,Yu Jin Kim,Byoungjip Kim,Youngwon Lee,Kyungjae Lee,Kyungmin Lee,Moontae Lee,Kyunghoon Bae,Seung-won Hwang
code-generationsample-efficiencydata-efficiencyDOIDBLP
5
泛读FindingsEMNLP 2023

Complex Event Schema Induction with Knowledge-Enriched Diffusion Model

Existing complex-event schema induction mostly adopts an autoregressive generation paradigm, which suffers from cumulative error propagation; combined with poor training-data quality, the generated schemas have low accuracy and fail to cover the multi-dimensional relations of complex events.

Yupu Hao,Pengfei Cao,Yubo Chen,Kang Liu,Jiexin Xu,Huaijun Li,Xiaojian Jiang,Jun Zhao
diffusiondiscrete-tokenknowledge-graphDOIDBLP
5
泛读EMNLP 2023

Incorporating Structured Representations into Pretrained Vision & Language Models Using Scene Graphs

Roei Herzig,Alon Mendelson,Leonid Karlinsky,Assaf Arbelle,Rogério Feris,Trevor Darrell,Amir Globerson
vision-languagescene-graphstructured-representationDOIDBLP
5
泛读FindingsEMNLP 2023

Harnessing Dataset Cartography for Improved Compositional Generalization in Transformers

This work addresses Transformers' frequent failures at compositional generalization: training samples are not equally valuable, yet current practice rarely uses sample difficulty and learning dynamics to actively re-allocate training data. Dataset cartography had mostly been used to diagnose easy / hard / ambiguous samples in a dataset, not as a direct training signal for improving compositional generalization.

Osman Batur Ince,Tanin Zeraati,Semih Yagcioglu,Yadollah Yaghoobzadeh,Erkut Erdem,Aykut Erdem
dataset-cartographygeneralizationtransformerDOIDBLP
5
泛读FindingsEMNLP 2023

In-Context Demonstration Selection with Cross Entropy Difference

This work's conclusion is that in-context demonstration selection can be made more targeted via cross-entropy difference, rather than embedding similarity or heuristic retrieval. The hard part of few-shot ICL is not having examples but choosing, for a given test sample, the ones that help most; similarity-based methods assume similar means helpful, but similarity does not guarantee the greatest improvement to conditional modeling of that sample.

Dan Iter,Reid Pryzant,Ruochen Xu,Shuohang Wang,Yang Liu,Yichong Xu,Chenguang Zhu
in-context-learningdemonstration-selectionDOIarXivDBLP
6
泛读FindingsEMNLP 2023

Incorporating Syntactic Knowledge into Pre-trained Language Model using Optimization for Overcoming Catastrophic Forgetting

General pretrained language models lack sufficient syntactic knowledge and underperform on downstream tasks involving long, complex sentences that require syntactic understanding; injecting syntactic knowledge incrementally easily triggers catastrophic forgetting of the model's general semantic ability.

Ran Iwamoto,Issei Yoshida,Hiroshi Kanayama,Takuya Ohko,Masayasu Muraoka
continual-pretrainsyntaxcatastrophic-forgettingDOIDBLP
5
泛读FindingsEMNLP 2023

Code-Switching with Word Senses for Pretraining in Neural Machine Translation

Existing code-switching pretraining for neural machine translation ignores word-sense ambiguity: replacing words indiscriminately during noise construction biases the pretraining data by sense, and the model inherits that bias, translating polysemous words poorly.

Vivek Iyer,Edoardo Barba,Alexandra Birch,Jeff Z. Pan,Roberto Navigli
pretrainnmtcode-switchingDOIarXivDBLP
5
泛读FindingsEMNLP 2023

A Comprehensive Evaluation of Tool-Assisted Generation Strategies

Few-shot tool-assisted generation strategies lack systematic, fair head-to-head comparison, including against strong tool-free baselines; prior work assumed tool assistance is superior without verifying that assumption.

Alon Jacovi,Avi Caciularu,Jonathan Herzig,Roee Aharoni,Bernd Bohnet,Mor Geva
Bar-Ilan UniversityGoogle Researchtool-useevaluationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Test-Time Self-Adaptive Small Language Models for Question Answering

Large models adapt poorly to domain-specific QA tasks, full-parameter fine-tuning requires costly annotation, and small models have limited knowledge capacity; whether small models can self-adapt using only unlabeled test data had not been verified.

Soyeong Jeong,Jinheon Baek,Sukmin Cho,Sung Ju Hwang,Jong Park
KAISTPOSTECHtest-timeadaptationqaDOIarXivDBLP
5
泛读EMNLP 2023

StructGPT: A General Framework for Large Language Model to Reason over Structured Data

Large models reason poorly zero-shot over structured data (tables, knowledge bases, graphs); existing methods each target a single structure type, lack a unified adaptation framework, and usually require extra fine-tuning.

Jinhao Jiang,Kun Zhou,Zican Dong,Keming Ye,Xin Zhao,Ji-Rong Wen
Renmin University of Chinastructured-datareasoningtool-useDOIarXivDBLP
4
EMNLP 2023

Instruct and Extract: Instruction Tuning for On-Demand Information Extraction

Existing information-extraction systems are mostly task-specific and cannot serve non-expert users' long-tail custom extraction needs, while general instruction-tuned models are not optimized for structured extraction outputs and fall short of requirements.

Yizhu Jiao,Ming Zhong,Sha Li,Ruining Zhao,Siru Ouyang,Heng Ji,Jiawei Han
University of Illinois Urbana-ChampaignByteDanceinstruction-tuninginformation-extractionDOIarXivDBLP
5
泛读FindingsEMNLP 2023

ParroT: Translating during Chat using Large Language Models tuned with Human Translation and Feedback

Wenxiang Jiao,Jen-tse Huang,Wenxuan Wang,Zhiwei He,Tian Liang,Xing Wang,Shuming Shi,Zhaopeng Tu
translationinstruction-tuningfeedbackDOIDBLP
5
泛读FindingsEMNLP 2023

Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution

Jaap Jumelet,Willem H. Zuidema
interpretabilitylanguage-modelevaluationDOIDBLP
5
泛读EMNLP 2023

Small Language Models Fine-tuned to Coordinate Larger Language Models improve Complex Reasoning

Gurusha Juneja,Subhabrata Dutta,Soumen Chakrabarti,Sunny Manchanda,Tanmoy Chakraborty
multi-agentreasoningcoordinationDOIDBLP
5
泛读EMNLP 2023

Text encoders bottleneck compositionality in contrastive vision-language models

Amita Kamath,Jack Hessel,Kai-Wei Chang
compositionalitycliptext-encoderDOIDBLP
5
泛读EMNLP 2023

Language Models with Rationality

Nora Kassner,Oyvind Tafjord,Ashish Sabharwal,Kyle Richardson,Hinrich Schütze,Peter Clark
rationalityreasoninglanguage-modelDOIDBLP
5
泛读FindingsEMNLP 2023

VISIT: Visualizing and Interpreting the Semantic Information Flow of Transformers

Shahar Katz,Yonatan Belinkov
interpretabilityinformation-flowtransformerDOIDBLP
5
泛读FindingsEMNLP 2023

Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks

In continual learning over mixed tasks, traditional hard-masking prevents catastrophic forgetting but blocks knowledge sharing across tasks (forward transfer).
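A minimal numeric sketch of the hard- vs soft-masking contrast described above. The importance scores, the 0.5 hard threshold, and `masked_update` itself are all illustrative assumptions, not the paper's algorithm:

```python
def masked_update(params, grads, importance, lr=0.1, hard=False):
    """One SGD step where each unit's gradient is gated by a mask.

    hard=True : binary mask; units important to old tasks are frozen outright,
                which prevents forgetting but also blocks forward transfer.
    hard=False: soft mask; gradients are scaled by (1 - importance), so
                important units are protected yet still adjustable.
    """
    new = []
    for p, g, imp in zip(params, grads, importance):
        gate = (0.0 if imp > 0.5 else 1.0) if hard else (1.0 - imp)
        new.append(p - lr * gate * g)
    return new

params = [1.0, 1.0]
grads = [1.0, 1.0]
importance = [0.9, 0.1]  # unit 0 matters to previously learned tasks
hard_new = masked_update(params, grads, importance, hard=True)
soft_new = masked_update(params, grads, importance, hard=False)
```

Under the hard mask, unit 0 never moves; under the soft mask it still receives a small, importance-damped update.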

Zixuan Ke,Bing Liu,Wenhan Xiong,Asli Celikyilmaz,Haoran Li
continual-learningsoft-maskingsub-networkDOIDBLP
5
泛读FindingsEMNLP 2023

Unnatural language processing: How do language models handle machine-generated prompts?

LLMs are pretrained mainly on natural human language, yet in practice they are often fed machine-generated, unnatural prompts; how models handle such out-of-distribution (OOD) inputs remains unclear.

Corentin Kervadec,Francesca Franzon,Marco Baroni
prompt-sensitivitymachine-generated-promptsrobustnessDOIDBLP
5
泛读FindingsEMNLP 2023

GRACE: Discriminator-Guided Chain-of-Thought Reasoning

During autoregressive decoding, chain-of-thought (CoT) reasoning is prone to logical errors and hallucinations at intermediate steps, and standard decoding cannot assess step correctness in advance.
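The idea of scoring intermediate steps before committing to them can be sketched as below. The candidate generator is elided and `toy_scorer` is a stand-in for a learned discriminator, not the paper's model:

```python
def guided_decode(candidates_per_step, score_step):
    """Greedy discriminator-guided step selection (a sketch).

    candidates_per_step: list of lists of candidate reasoning-step strings.
    score_step: fn(prefix_steps, candidate) -> float, the discriminator.
    """
    chosen = []
    for candidates in candidates_per_step:
        best = max(candidates, key=lambda c: score_step(chosen, c))
        chosen.append(best)
    return chosen

def toy_scorer(prefix, cand):
    """Toy discriminator for 'lhs=rhs' steps: reward arithmetically
    correct ones. (eval is fine here only because inputs are our own toys.)"""
    try:
        lhs, rhs = cand.split("=")
        return 1.0 if eval(lhs) == int(rhs) else 0.0
    except Exception:
        return 0.0

steps = guided_decode([["2+3=5", "2+3=6"], ["5*2=10", "5*2=12"]], toy_scorer)
```

With a real discriminator trained to spot invalid derivations, the same loop prunes faulty intermediate steps that plain sampling would keep.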

Muhammad Khalifa,Lajanugen Logeswaran,Moontae Lee,Honglak Lee,Lu Wang
University of Michiganchain-of-thoughtdiscriminatorreasoningDOIDBLP
5
泛读FindingsEMNLP 2023

Approximating CKY with Transformers

Can Transformers natively implement a CKY-style dynamic-programming algorithm internally to handle context-free grammars (CFGs)? A gap remains between their theoretical expressiveness and what they actually learn.

Ghazal Khalighinejad,Ollie Liu,Sam Wiseman
Duke Universitycky-parsingtransformerapproximationDOIDBLP
5
泛读FindingsEMNLP 2023

FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4

Does the inductive bias introduced by instruction tuning strengthen the base model's compositional reasoning, or merely teach it a particular output format?

Jeonghwan Kim,Giwon Hong,Sung-Hyon Myaeng,Joyce Jiyoung Whang
compositionalitygpt4finetuningDOIDBLP
5
泛读EMNLP 2023

ATHENA: Mathematical Reasoning with Thought Expansion

In mathematical reasoning, single-path CoT easily hits dead ends and lacks the ability to explore and backtrack across alternative solution directions.

JB. Kim,Hazel Kim,Joonghyuk Hahn,Yo-Sub Han
math-reasoningcotpromptingDOIDBLP
5
泛读FindingsEMNLP 2023

NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models

Structured pruning of encoder-decoder models is understudied; prior methods treat the two components identically, without disentangling their respective effects on inference speed and generation quality.

Jongwoo Ko,Seungjoon Park,Yujin Kim,Sumyeong Ahn,Du-Seong Chang,Euijai Ahn,Se-Young Yun
structured-pruningmodel-compressionencoder-decoderDOIarXivDBLP
5
泛读EMNLP 2023

Critic-Driven Decoding for Mitigating Hallucinations in Data-to-text Generation

This paper targets reducing hallucination in data-to-text generation without modifying the generator architecture or requiring extra annotated data. Many prior methods rely on retraining, coverage mechanisms, or additional supervision, which are costly to port to existing models, so the authors pursue a decoding-only route.

Mateusz Lango,Ondrej Dusek
decodinghallucinationgenerationDOIarXivDBLP
5
泛读EMNLP 2023

Self-Detoxifying Language Models via Toxification Reversal

This paper asks how a language model can detoxify itself, rather than relying on external filters or large amounts of extra safety data. Traditional detoxification works through refusals, vocabulary blocking, or separate classifiers, which often sacrifice general capability or operate only at the surface lexical level.

Chak Tou Leong,Yi Cheng,Jiashuo Wang,Jian Wang,Wenjie Li
toxicityalignmentdecodingDOIDBLP
5
泛读EMNLP 2023

Comparing Biases and the Impact of Multilingual Training across Multiple Languages

This paper asks how multilingual training changes bias across languages, and whether those biases are comparable across languages. Much prior work measures only English bias or assumes multilingual training naturally dilutes bias, an assumption that does not hold up.

Sharon Levy,Neha Anna John,Ling Liu,Yogarshi Vyas,Jie Ma,Yoshinari Fujinuma,Miguel Ballesteros,Vittorio Castelli,Dan Roth
multilingualbiaspretrainingDOIDBLP
5
泛读EMNLP 2023

Theory of Mind for Multi-Agent Collaboration via Large Language Models

This paper asks whether LLMs can explicitly exploit theory of mind in multi-agent collaboration, i.e., model other agents' beliefs, intentions, and information states rather than plan solipsistically. Much prior LLM-agent work treats other agents' states as just more environment text, which is often insufficient for collaborative tasks.

Huao Li,Yu Quan Chong,Simon Stepputtis,Joseph Campbell,Dana Hughes,Charles Lewis,Katia P. Sycara
theory-of-mindmulti-agentcollaborationDOIDBLP
5
泛读FindingsEMNLP 2023

Multi-step Jailbreaking Privacy Attacks on ChatGPT

This paper addresses the fact that existing dialogue-model safeguards mostly block single-turn, explicit violations, but lack systematic analysis of multi-turn, step-by-step privacy-extraction attacks. The focus is not generic jailbreaking, but how an attacker decomposes sensitive-information extraction into seemingly benign sub-steps that bypass ChatGPT's refusal policy.

Haoran Li,Dadi Guo,Wei Fan,Mingshi Xu,Jie Huang,Fanpu Meng,Yangqiu Song
jailbreakprivacyalignmentDOIDBLP
5
泛读FindingsEMNLP 2023

Watermarking LLMs with Weight Quantization

This paper addresses how to embed a verifiable ownership mark into an LLM with minimal impact on capability and without relying on output-level text watermarks. Text watermarks are easily destroyed by paraphrasing, sampling perturbations, or post-processing; parameter-level marks are closer to proof of model ownership, but must balance detectability against performance loss.

Linyang Li,Botian Jiang,Pengyu Wang,Ke Ren,Hang Yan,Xipeng Qiu
watermarkingquantizationllm-securityDOIDBLP
5
泛读EMNLP 2023

Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models

This paper addresses the brittleness of pruned language models, which easily lose key knowledge under distribution shift or task transfer. Traditional pruning optimizes parameter importance or sparsity without explicitly controlling which knowledge is retained, so compressed models often look fine on surface metrics while robustness degrades first.

Jianwei Li,Qi Lei,Wei Cheng,Dongkuan Xu
pruningcompressionknowledge-retentionDOIDBLP
5
泛读FindingsEMNLP 2023

A Zero-Shot Language Agent for Computer Control with Structured Reflection

This paper asks how a language model can reliably complete long-horizon interactive tasks like computer control without task-specific fine-tuning. Prior approaches depend on task-specific data collection and behavior cloning, with poor coverage and transfer; the authors instead use a zero-shot agent with structured reflection to turn general model capability directly into GUI/computer-operation capability.

Tao Li,Gang Li,Zhiwei Deng,Bryan Wang,Yang Li
agentcomputer-usereflectionDOIDBLP
5
泛读FindingsEMNLP 2023

Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models

This paper addresses the lack of a controllable, modular internal structure in pretrained language models for handling different linguistic phenomena, which makes them both hard to improve further and hard to interpret. Traditional adapters are designed per task or per layer rather than by linguistic function, limiting both performance and interpretability.

Raymond Li,Gabriel Murray,Giuseppe Carenini
adaptersmoeinterpretabilityDOIDBLP
5
泛读FindingsEMNLP 2023

PerturbScore: Connecting Discrete and Continuous Perturbations in NLP

This paper addresses the fact that discrete perturbations (character or word substitutions) and continuous perturbations (embedding-space attacks, smoothing regularizers) are studied separately in NLP, with no unified view connecting them. As a result, robustness evaluation and training proceed in silos that are hard to compare and hard to transfer between.

Linyang Li,Ke Ren,Yunfan Shao,Pengyu Wang,Xipeng Qiu
Fudan UniversityrobustnessperturbationevaluationDOIDBLP
5
泛读EMNLP 2023

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

This paper addresses a practical annotation problem: fully manual labeling is too expensive, while fully automatic LLM labeling lacks reliable quality and consistency. The authors study the collaboration itself: using instance uncertainty to route easy instances to the LLM and high-risk ones to humans, maximizing annotation quality under a fixed budget.
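One natural uncertainty signal is the entropy of an LLM's repeated label votes on the same instance. The sketch below, with a hypothetical 0.5-bit routing threshold, illustrates the allocation idea only; it is not the paper's exact procedure:

```python
import math
from collections import Counter

def vote_entropy(votes):
    """Shannon entropy (bits) of repeated LLM label votes for one instance."""
    counts = Counter(votes)
    n = len(votes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def allocate(instances, threshold=0.5):
    """Route low-entropy (confident) instances to the LLM, the rest to humans.

    instances: list of (text, votes) pairs, votes from repeated LLM queries.
    """
    to_llm, to_human = [], []
    for text, votes in instances:
        (to_llm if vote_entropy(votes) <= threshold else to_human).append(text)
    return to_llm, to_human
```

An instance the LLM labels identically every time has entropy 0 and stays automated; a split vote exceeds the threshold and is escalated to a human annotator.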

Minzhi Li,Taiwei Shi,Caleb Ziems,Min-Yen Kan,Nancy F. Chen,Zhengyuan Liu,Diyi Yang
A*STARdata-annotationhuman-in-the-loopllmDOIDBLP
5
泛读EMNLP 2023

Robust Prompt Optimization for Large Language Models Against Distribution Shifts

Moxin Li,Wenjie Wang,Fuli Feng,Yixin Cao,Jizhi Zhang,Tat-Seng Chua
promptingdistribution-shiftrobustnessDOIDBLP
5
泛读EMNLP 2023

MoPe: Model Perturbation based Privacy Attacks on Language Models

Marvin Li,Jason Wang,Jeffrey G. Wang,Seth Neel
privacyattackslanguage-modelsDOIDBLP
5
泛读EMNLP 2023

Can Language Models Understand Physical Concepts?

Lei Li,Jingjing Xu,Qingxiu Dong,Ce Zheng,Xu Sun,Lingpeng Kong,Qi Liu
llm-evaluationphysical-reasoningbenchmarkDOIDBLP
5
泛读EMNLP 2023

API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs

Minghao Li,Yingxiu Zhao,Bowen Yu,Feifan Song,Hangyu Li,Haiyang Yu,Zhoujun Li,Fei Huang,Yongbin Li
tool-usebenchmarkapi-callingDOIDBLP
5
泛读EMNLP 2023

Hi-ArG: Exploring the Integration of Hierarchical Argumentation Graphs in Language Pretraining

Jingcong Liang,Rong Ye,Meng Han,Qi Zhang,Ruofei Lai,Xinyu Zhang,Zhao Cao,Xuanjing Huang,Zhongyu Wei
graphpretrainargumentationDOIDBLP
5
泛读EMNLP 2023

Self-Improvement of Non-autoregressive Model via Sequence-Level Distillation

Yusheng Liao,Shuyang Jiang,Yiqi Li,Yu Wang,Yanfeng Wang
non-autoregressivedistillationDOIDBLP
5
泛读FindingsEMNLP 2023

Learning to Compose Representations of Different Encoder Layers towards Improving Compositional Generalization

Lei Lin,Shuangtao Li,Yafang Zheng,Biao Fu,Shan Liu,Yidong Chen,Xiaodong Shi
compositional-generalizationrepresentationDOIDBLP
5
泛读FindingsEMNLP 2023

ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation

Zi Lin,Zihan Wang,Yongqi Tong,Yangkun Wang,Yuxin Guo,Yujia Wang,Jingbo Shang
safetytoxicitybenchmarkDOIDBLP
5
泛读EMNLP 2023

Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts

Tengxiao Liu,Qipeng Guo,Yuqing Yang,Xiangkun Hu,Yue Zhang,Xipeng Qiu,Zheng Zhang
reasoningcotplanningDOIDBLP
5
泛读EMNLP 2023

ViT-TTS: Visual Text-to-Speech with Scalable Diffusion Transformer

Huadai Liu,Rongjie Huang,Xuan Lin,Wenqiang Xu,Maozong Zheng,Hong Chen,Jinzheng He,Zhou Zhao
ttsdiffusionvisualDOIDBLP
5
泛读EMNLP 2023

A Picture is Worth a Thousand Words: Language Models Plan from Pixels

Anthony Z. Liu,Lajanugen Logeswaran,Sungryull Sohn,Honglak Lee
vlmplanningpixelDOIDBLP
5
泛读EMNLP 2023

We're Afraid Language Models Aren't Modeling Ambiguity

Natural language is inherently ambiguous, but the next-token prediction objective forces models to collapse the probability distribution onto the single most likely interpretation, leaving them unable to recognize or preserve multiple readings.

Alisa Liu,Zhaofeng Wu,Julian Michael,Alane Suhr,Peter West,Alexander Koller,Swabha Swayamdipta,Noah A. Smith,Yejin Choi
University of WashingtonAllen Institute for AI (AI2)ambiguityevaluationDOIDBLP
5
泛读FindingsEMNLP 2023

Knowledge-Selective Pretraining for Attribute Value Extraction

In vertical-domain (e.g., e-commerce) attribute value extraction (AVE), generic random-mask pretraining fails to build structured associations between entities and attributes.

Hui Liu,Qingyu Yin,Zhengyang Wang,Chenwei Zhang,Haoming Jiang,Yifan Gao ... 2 authors omitted ... ,Chao Zhang,Bing Yin,William Wang,Xiaodan Zhu
AmazonRutgers Universitypretraindata-selectionattribute-extractionDOIDBLP
5
泛读FindingsEMNLP 2023

Hierarchical Prompting Assists Large Language Model on Web Navigation

In long-horizon web-navigation tasks, HTML text is extremely long and noisy, and a monolithic LLM easily loses track of the high-level goal in long context (lost in the middle).

Robert Lo,Abishek Sridhar,Frank F. Xu,Hao Zhu,Shuyan Zhou
Carnegie Mellon University (CMU)web-agentpromptingDOIDBLP
5
泛读FindingsEMNLP 2023

On Surgical Fine-tuning for Language Encoders

Full fine-tuning of pretrained language encoders tends to destroy general low-level features, causing catastrophic forgetting and degraded OOD generalization.

Abhilasha Lodha,Gayatri Belapurkar,Saloni Chalkapurkar,Yuanming Tao,Reshmi Ghosh,Samyadeep Basu,Dmitrii Petrov,Soundararajan Srinivasan
Amazonfine-tuninglayerwiseDOIDBLP
5
泛读EMNLP 2023

Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning

Domain adaptation usually requires expensive continued pretraining (DAPT), while purely zero-shot ICL underperforms in vertical domains.

Quanyu Long,Wenya Wang,Sinno Jialin Pan
Nanyang Technological University (NTU)icldomain-adaptationretrievalDOIDBLP
5
泛读FindingsEMNLP 2023

Exploring the Sensitivity of LLMs' Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters

The community increasingly uses LLMs to simulate human decision-making (e.g., behavioral-economics games), but those decisions may be artifacts of prompt format or sampling parameters rather than genuine reasoning.

Manikanta Loya,Divya Sinha,Richard Futrell
UC Irvineprompt-sensitivityllm-evaluationDOIDBLP
5
泛读FindingsEMNLP 2023

Perceptual Structure in the absence of grounding: the impact of abstractedness and subjectivity in color language for LLMs

Can a text-only pretrained model, without visual grounding, learn the continuous perceptual structure of the real world (such as color space) from textual co-occurrence alone?

Pablo Loyola,Edison Marrese-Taylor,Andrés Hoyos Idrobo
National Institute of Informatics (Japan)University of TokyogroundingcolorrepresentationDOIDBLP
5
泛读FindingsEMNLP 2023

Improving End-to-End Speech Processing by Efficient Text Data Utilization with Latent Synthesis

Jianqiao Lu,Wenyong Huang,Nianzu Zheng,Xingshan Zeng,Yu Ting Yeung,Xiao Chen
speechtext-speechlatentDOIDBLP
5
泛读FindingsEMNLP 2023

Systematic Assessment of Factual Knowledge in Large Language Models

Linhao Luo,Thuy-Trang Vu,Dinh Q. Phung,Gholamreza Haffari
factual-knowledgellm-evaluationprobingDOIDBLP
5
泛读FindingsEMNLP 2023

Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models

Existing theory-of-mind (ToM) benchmarks for LLMs are fragmented, cover only some dimensions of ToM, and are prone to shortcuts and data leakage; prior studies reach wildly different conclusions about whether LLMs possess ToM, with no unified evaluation framework.

Ziqiao Ma,Jacob Sansom,Run Peng,Joyce Chai
University of Michigantheory-of-mindllm-evaluationbenchmarkDOIarXivDBLP
4
EMNLP 2023

Non-autoregressive Streaming Transformer for Simultaneous Translation

To balance latency and quality, existing autoregressive simultaneous-translation models tend to over-predict, inflating translation error rates, while existing non-autoregressive models cannot handle the dynamic read/write demands of streaming.

Zhengrui Ma,Shaolei Zhang,Shoutao Guo,Chenze Shao,Min Zhang,Yang Feng
Beijing Language and Culture UniversityTencent AI Labnon-autoregressivesimultaneous-translationstreamingDOIarXivDBLP
5
泛读EMNLP 2023

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models

This paper addresses detecting factual hallucinations in generative LLMs under black-box conditions, with no access to model logits and no external knowledge base. Prior mainstream approaches rely either on white-box probability information or on retrieval and knowledge-base pipelines; neither is practical for closed-source APIs like ChatGPT, making "consistency of the model's own outputs" a direction worth studying in its own right.
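The consistency intuition, i.e. resample the model several times and check whether a claimed sentence is supported by its own other samples, can be sketched with a crude unigram-overlap proxy. The paper studies several stronger scoring variants; the functions below are an illustrative assumption, not its implementation:

```python
def overlap(a, b):
    """Crude unigram-overlap proxy for how well sample b supports sentence a."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta), 1)

def inconsistency_score(sentence, samples):
    """Higher score = the sentence is less supported by the model's own
    stochastic resamples, i.e. more likely hallucinated."""
    return 1.0 - max(overlap(sentence, s) for s in samples)

samples = [
    "paris is the capital of france",
    "the capital of france is paris",
]
s_true = inconsistency_score("paris is the capital of france", samples)
s_fake = inconsistency_score("lyon is the capital of france", samples)
```

A factual sentence is echoed by the resamples (score near 0), while an unsupported one diverges from all of them and scores higher, all without logits or retrieval.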

Potsawee Manakul,Adian Liusie,Mark J. F. Gales
University of Cambridgehallucinationdetectionblack-boxDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Active Learning Principles for In-Context Learning with Large Language Models

This paper addresses how to choose the few demonstrations in ICL, beyond random sampling or similarity heuristics. Few-shot prompting research has focused on prompt templates and model scale, treating demonstration selection as a minor engineering detail; the authors frame it as a standard pool-based active-learning problem, since with a tiny budget the informativeness of the examples bounds what ICL can achieve.

Katerina Margatina,Timo Schick,Nikolaos Aletras,Jane Dwivedi-Yu
University of LiverpoolThe University of Manchesterin-context-learningactive-learningexample-selectionDOIarXivDBLP
5
泛读EMNLP 2023

UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning

This paper addresses the fact that existing chart-understanding models mostly borrow generic OCR or image-text pretraining without explicitly modeling charts' structural semantics, limiting generalization on tasks like chart QA and summarization. Charts are not ordinary images: the key information lies in the correspondence between data fields, axes, legends, and visual encodings; unless pretraining incorporates this structure into the representation, models struggle with robust cross-chart reasoning.

Ahmed Masry,Parsa Kavehzadeh,Do Xuan Long,Enamul Hoque,Shafiq Joty
chart-understandingvision-languagepretrainingDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Blackbird language matrices (BLM), a new task for rule-like generalization in neural networks: Can Large Language Models pass the test?

This paper asks how to evaluate whether LLMs have truly learned abstract rules, using a test closer to rule-like generalization than to memorization. Common benchmarks mix in linguistic priors, data leakage, and heuristic shortcuts, making it hard to isolate whether a model can make rule-like inferences on novel combinations.

Paola Merlo
generalizationrule-learningllm-evaluationDOIDBLP
5
泛读EMNLP 2023

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models

This paper asks how abstract the grammatical knowledge inside multilingual language models really is, beyond lexical memorization and monolingual templates. Much prior evidence for grammatical ability could be explained by same-language surface overlap; the authors use structural priming, especially crosslingual priming, to test whether models share more abstract syntactic representations.

James A. Michaelov,Catherine Arnett,Tyler A. Chang,Ben Bergen
University of California, San DiegointerpretabilitygrammarmultilingualDOIarXivDBLP
5
泛读EMNLP 2023

FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

This paper argues that the factuality of long-form generation can no longer be evaluated with a whole-passage true/false label, because a single answer typically mixes supported and unsupported atomic facts. Prior evaluations scored long texts holistically, masking fine-grained differences between models and providing little actionable feedback for training and system improvement.
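The atomic-fact precision idea reduces to: split a generation into atomic facts, judge each against evidence, and report the supported fraction. In the sketch below, the fact splitter is elided and the evidence set plus membership "judge" are hypothetical stand-ins for the paper's retrieval-and-verification step:

```python
def fact_precision(atomic_facts, is_supported):
    """Share of a generation's atomic facts judged supported by evidence."""
    if not atomic_facts:
        return 0.0
    return sum(1 for f in atomic_facts if is_supported(f)) / len(atomic_facts)

# Hypothetical evidence store and trivial membership judge, for illustration.
evidence = {
    "einstein was born in 1879",
    "einstein received the 1921 nobel prize in physics",
}
facts = [
    "einstein was born in 1879",
    "einstein received the 1921 nobel prize in physics",
    "einstein was born in zurich",  # unsupported atomic fact
]
score = fact_precision(facts, lambda f: f in evidence)  # 2 of 3 supported
```

A holistic true/false label would call this passage simply "wrong"; the per-fact score of 2/3 preserves the fine-grained difference the paper cares about.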

Sewon Min,Kalpesh Krishna,Xinxi Lyu,Mike Lewis,Wen-tau Yih,Pang Wei Koh,Mohit Iyyer,Luke Zettlemoyer,Hannaneh Hajishirzi
evaluationfactualityhallucinationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

NarrativeXL: a Large-scale Dataset for Long-Term Memory Models

This paper addresses the lack of datasets large and long enough to reliably evaluate memory retention and retrieval in long-term memory models. Existing long-context evaluations tend to be short or single-task, or conflate memory with surface-level local matching, making it hard to compare the real gains of different memory architectures.

Arsenii Moskvichev,Ky-Vinh Mai
long-contextdatasetevaluationDOIDBLP
5
泛读FindingsEMNLP 2023

Towards Agile Text Classifiers for Everyone

This paper asks how to rapidly build high-quality classifiers for new policies, given that safety text classifiers face shifting policies, fragmented requirements, and small annotation budgets. Traditional pipelines need sizable labeled sets and full fine-tuning, so iteration is too slow; in content moderation and dialogue safety, the policy itself keeps changing, making agile classification matter more than topping a static benchmark.

Maximilian Mozes,Jessica Hoffmann,Katrin Tomanek,Muhamed Kouate,Nithum Thain,Ann Yuan,Tolga Bolukbasi,Lucas Dixon
Google Researchsafetyclassificationdata-filteringDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Pseudointelligence: A Unifying Lens on Language Model Evaluation

Shikhar Murty,Orr Paradise,Pratyusha Sharma
evaluationmethodologyDOIDBLP
6
泛读FindingsEMNLP 2023

Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?

Psycholinguistic studies with large models typically compute surprisal over BPE subword segmentations, without verifying that this segmentation fits human reading behavior better than morpheme-based or orthographic segmentations; prior work assumed BPE-based surprisal is optimal without a rigorous comparison.
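Whatever the segmentation, word surprisal aggregates the same way: the word's probability is the product of its units' probabilities, so its surprisal is the sum of unit surprisals. The sketch below uses hypothetical log-probabilities and segmentations purely to show the arithmetic, not results from any model:

```python
import math

def word_surprisal(unit_logprobs):
    """Surprisal of a word in bits: sum of -log2 p over its segmentation units,
    since p(word) is the product of the unit probabilities under the LM.
    The formula is identical for BPE pieces, morphemes, or whole words."""
    return sum(-lp / math.log(2) for lp in unit_logprobs)

# Hypothetical natural-log probabilities for one word under two segmentations.
bpe = [math.log(0.2), math.log(0.5)]    # e.g. two BPE pieces
morph = [math.log(0.4), math.log(0.25)] # e.g. stem + suffix
```

The two segmentations assign the same total word probability (0.2 × 0.5 = 0.4 × 0.25 = 0.1), so they yield the same word surprisal; the paper's question is precisely whether real models' unit probabilities, which differ across segmentations, predict reading times differently.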

Sathvik Nair,Philip Resnik
University of Marylandtokenizersurprisalcognitive-modelingDOIarXivDBLP
5
泛读EMNLP 2023

LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers

Theo Olausson,Alex Gu,Benjamin Lipkin,Cedegao E. Zhang,Armando Solar-Lezama,Joshua B. Tenenbaum,Roger Levy
reasoningtool-usesymbolicDOIDBLP
4
EMNLP 2023

CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models

High-quality large models are mostly exposed as black-box APIs whose weights cannot be fine-tuned, and full-parameter fine-tuning would be prohibitively expensive anyway; existing adaptation methods all require white-box access to weights or intermediate activations and therefore do not fit the black-box setting.

Aitor Ormazabal,Mikel Artetxe,Eneko Agirre
University of the Basque Countryadaptationblack-boxfine-tuningDOIarXivDBLP
5
泛读FindingsEMNLP 2023

On the Risk of Misinformation Pollution with Large Language Models

Yikang Pan,Liangming Pan,Wenhu Chen,Preslav Nakov,Min-Yen Kan,William Yang Wang
misinformationdata-qualitysafetyDOIDBLP
5
泛读EMNLP 2023

Guideline Learning for In-Context Information Extraction

Chaoxu Pang,Yixuan Cao,Qiang Ding,Ping Luo
in-context-learninginformation-extractionpromptingDOIDBLP
5
泛读FindingsEMNLP 2023

Injecting structural hints: Using language models to study inductive biases in language learning

Isabel Papadimitriou,Dan Jurafsky
inductive-biaslanguage-learningstructural-biasDOIDBLP
5
泛读EMNLP 2023

Structural generalization in COGS: Supertagging is (almost) all you need

Alban Petit,Caio F. Corro,François Yvon
structural-generalizationsyntaxinductive-biasDOIDBLP
5
泛读FindingsEMNLP 2023

Emptying the Ocean with a Spoon: Should We Edit Models?

Is the feasibility and reliability of model editing as a means of fixing LLM factual errors overestimated? Existing editing methods often trigger unpredictable side effects after local modifications; the authors question the fundamental premise of the research direction.

Yuval Pinter,Michael Elhadad
Ben-Gurion University of the Negevmodel-editingposition-paperDOIDBLP
5
泛读DemoEMNLP 2023

Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning

Parameter-efficient fine-tuning (PEFT) methods abound (Adapter, LoRA, Prefix Tuning, etc.), but they lack a unified implementation framework, making fair comparison and composition difficult for researchers. The Adapters library aims to provide a single unified interface.

Clifton Poth,Hannah Sterz,Indraneil Paul,Sukannya Purkayastha,Leon Engländer,Timo Imhof,Ivan Vulic,Sebastian Ruder,Iryna Gurevych,Jonas Pfeiffer
TU DarmstadtadapterspeftDOIDBLP
5
泛读EMNLP 2023

An Investigation of LLMs' Inefficacy in Understanding Converse Relations

LLMs show a systematic deficit in understanding converse relations: for example, knowing "A is B's father" yet failing to infer "B is A's child". This paper systematically investigates the scope and causes of the phenomenon.

Chengwen Qi,Bowen Li,Binyuan Hui,Bailin Wang,Jinyang Li,Jinwang Wu,Yuanjun Laili
evaluationrelationsllmDOIDBLP
5
泛读EMNLP 2023

The Art of SOCRATIC QUESTIONING: Recursive Thinking with Large Language Models

LLM performance on complex reasoning is limited by the depth of single-pass prompting. This paper proposes Socratic Questioning, which recursively has the model pose and answer its own sub-questions to decompose complex problems and deepen reasoning.

Jingyuan Qi,Zhiyang Xu,Ying Shen,Minqian Liu,Di Jin,Qifan Wang,Lifu Huang
reasoningcotpromptingDOIDBLP
5
泛读FindingsEMNLP 2023

CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models

When using external tools, LLMs are usually restricted to a predefined tool set and lack flexibility on novel problems. CREATOR lets the LLM create its own tools (i.e., write reusable code functions), decoupling abstract reasoning from concrete execution.

Cheng Qian,Chi Han,Yi Ren Fung,Yujia Qin,Zhiyuan Liu,Heng Ji
Tsinghua Universitytool-usereasoningDOIDBLP
5
泛读FindingsEMNLP 2023

DiffusionRet: Diffusion-Enhanced Generative Retriever using Constrained Decoding

Addresses the problem that autoregressive models in generative retrieval readily generate invalid document IDs (hallucinations), and explores the viability of non-autoregressive generation in strongly constrained settings.

Shanbao Qiao,Xuebing Liu,Seung-Hoon Na
diffusionretrievalconstrained-decodingDOIDBLP
5
泛读EMNLP 2023

Lifelong Sequence Generation with Dynamic Module Expansion and Adaptation

Addresses catastrophic forgetting when Seq2Seq models continually learn new task sequences (lifelong learning), seeking a parameter-update strategy more flexible than full fine-tuning or fixed-capacity adapters.

Chengwei Qin,Chen Chen,Shafiq Joty
continual-learningmodularDOIDBLP
5
泛读EMNLP 2023

Detecting and Mitigating Hallucinations in Multilingual Summarisation

Addresses the severe hallucination in multilingual summarization caused by weak cross-lingual alignment and uneven corpus quality.

Yifu Qiu,Yftah Ziser,Anna Korhonen,Edoardo Maria Ponti,Shay B. Cohen
University of EdinburghUniversity of CambridgehallucinationmultilingualsummarizationDOIDBLP
5
泛读IndustryEMNLP 2023

AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications

Addresses the high cost, low coverage, and poor customizability of manual red-teaming in LLM safety alignment, especially the difficulty of quickly tailoring tests to new application scenarios.

Bhaktipriya Radharapu,Kevin Robinson,Lora Aroyo,Preethi Lahoti
Googlered-teamingsafetydata-generationDOIDBLP
5
泛读FindingsEMNLP 2023

CoEdIT: Text Editing by Task-Specific Instruction Tuning

Addresses the fragmentation of text-editing tasks (grammar correction, simplification, paraphrasing) by building a unified instruction-tuned text-editing base model.

Vipul Raheja,Dhruv Kumar,Ryan Koo,Dongyeop Kang
Grammarlyinstruction-tuningtext-editingDOIDBLP
5
泛读FindingsEMNLP 2023

INVITE: a Testbed of Automatically Generated Invalid Questions to Evaluate Large Language Models for Hallucinations

Addresses the inability of existing hallucination benchmarks to distinguish whether a model truly understands a question or merely defers to the user's presuppositions.

Anil Ramakrishna,Rahul Gupta,Jens Lehmann,Morteza Ziyadi
AmazonhallucinationbenchmarkevaluationDOIDBLP
5
泛读EMNLP 2023

The Troubling Emergence of Hallucination in Large Language Models - An Extensive Definition, Quantification, and Prescriptive Remediations

Systematically defines and quantifies LLM hallucination and proposes a comprehensive mitigation framework, addressing the field's vague definitions and inconsistent evaluation standards.

Vipula Rawte,Swagata Chakraborty,Agnibh Pathak,Anubhav Sarkar,S. M. Towhidul Islam Tonmoy,Aman Chadha,Amit P. Sheth,Amitava Das
hallucinationevaluationDOIDBLP
5
泛读FindingsEMNLP 2023

Representation Projection Invariance Mitigates Representation Collapse

Mitigates representation collapse, common in contrastive learning and fine-tuning, where the model tends to map all inputs into a narrow, low-rank subspace.

Anastasia Razdaibiedina,Ashish Khetan,Zohar S. Karnin,Daniel Khashabi,Vivek Madan
Amazonrepresentationfine-tuningcollapseDOIDBLP
5
泛读FindingsEMNLP 2023

TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding

This work tackles token explosion in long-form video-language understanding: as videos get longer, the number of temporal frames and spatial patches grows rapidly, and standard video Transformers cannot retain enough context at acceptable compute. Common remedies such as uniform sampling or fixed downsampling discard exactly the time spans and regions relevant to the text.

Shuhuai Ren,Sishuo Chen,Shicheng Li,Xu Sun,Lu Hou
videotoken-aggregationlong-contextDOIDBLP
5
泛读FindingsEMNLP 2023

XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages

This work addresses how evaluation for low-resource, under-represented languages has long been detached from real user needs: existing multilingual benchmarks cover few languages, skew academic in task choice, and are too small to reflect actual pain points. Cross-lingual evaluation has typically assumed parallel resources, standardized training sets, and unified label schemas, none of which hold for under-represented languages.

Sebastian Ruder,Jonathan H. Clark,Alexander Gutkin,Mihir Kale,Min Ma,Massimo Nicosia ... 17 authors omitted ... ,R. Reeve Ingle,Melvin Johnson,Dmitry Panteleev,Partha Talukdar
multilingualbenchmarklow-resourceDOIDBLP
5
泛读EMNLP 2023

A State-Vector Framework for Dataset Effects

This work addresses the lack of a unified, comparable characterization of dataset effects: the same model varies widely across datasets, yet we usually attribute the differences post hoc to domain gaps, annotation style, or difficulty, with no stable representation to quantify the influence. Prior analyses of dataset effects are mostly empirical within-task comparisons that lack a transferable abstraction.

Esmat Sahak,Zining Zhu,Frank Rudzicz
dataset-effectsanalysisDOIDBLP
5
泛读EMNLP 2023

PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation

This work addresses the fact that when distilling an LLM into a small model, the student often fails to learn the teacher's discriminative behavior near class boundaries, hurting generalization. Distillation has mostly relied on soft labels over the original samples; if those samples cover the boundary region poorly, the student's decision surface ends up coarser than the teacher's.

Gaurav Sahu,Olga Vechtomova,Dzmitry Bahdanau,Issam H. Laradji
data-augmentationdistillationsynthetic-dataDOIDBLP
5
泛读FindingsEMNLP 2023

On General Language Understanding

Discusses what "general language understanding" actually means and how current evaluation practice relates to that goal. Since GLUE/SuperGLUE, the community has equated "general" with "high multi-task average score", which does not match human intuitions about language understanding.

David Schlangen
University of Potsdamlanguage-understandingevaluationtheoryDOIDBLP
5
泛读EMNLP 2023

Polyglot or Not? Measuring Multilingual Encyclopedic Knowledge in Foundation Models

Systematically measures foundation models' encyclopedic knowledge across languages, answering how many cross-lingual facts are really stored in the parameters. Earlier factuality probes (LAMA and the like) were mainly English-only and could not distinguish genuine knowledge scarcity from English-only competence.

Tim Schott,Daniel Furman,Shreshta Bhat
UC Berkeleymultilingualknowledge-probingfoundation-modelsDOIDBLP
5
泛读EMNLP 2023

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition

Systematically characterizes how LLM prompts can be hijacked under realistic adversarial pressure. Prompt-injection research had consisted mostly of small case studies and lacked a large-scale, well-taxonomized attack corpus.

Sander Schulhoff,Jeremy Pinto,Anaum Khan,Louis-François Bouchard,Chenglei Si,Svetlina Anati,Valen Tagliabue,Anson Liu Kost,Christopher Carnahan,Jordan L. Boyd-Graber
University of Marylandprompt-injectionsafetyadversarialDOIDBLP
5
泛读FindingsEMNLP 2023

Manifold-Preserving Transformers are Effective for Short-Long Range Encoding

On tasks requiring both short- and long-range dependency modeling, vanilla Transformer attention pulls token representations off their original geometric manifold, degrading information. The authors add a geometric constraint so that encoder representations preserve manifold structure across layers.

Ayan Sengupta,Md. Shad Akhtar,Tanmoy Chakraborty
IIIT DelhiIIT Delhitransformer-architecturelong-rangemanifoldDOIDBLP
5
泛读FindingsEMNLP 2023

Towards Concept-Aware Large Language Models

LLMs model subword tokens and lack an internal "concept" level of abstraction; the authors argue for explicitly concept-aware models to improve compositional generalization and abstract reasoning.

Chen Shani,Jilles Vreeken,Dafna Shahaf
Stanford UniversityCISPAHebrew Universityconcept-awarenessllm-knowledgerepresentationDOIDBLP
5
泛读FindingsEMNLP 2023

Flatness-Aware Prompt Selection Improves Accuracy and Sample Efficiency

Under in-context learning, different prompt templates vary enormously in effectiveness, yet there is no reliable label-free way to pick a good one. Previous methods either require a validation set or rely on perplexity, and are unstable.

Lingfeng Shen,Weiting Tan,Boyuan Zheng,Daniel Khashabi
Johns Hopkins Universityprompt-selectionflatnessin-context-learningDOIDBLP
5
泛读FindingsEMNLP 2023

Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?

The vectors learned by soft prompt tuning usually correspond to no natural language, making them neither readable nor transferable. The authors aim to learn readable prompts that sound human while remaining strong.

Weijia Shi,Xiaochuang Han,Hila Gonen,Ari Holtzman,Yulia Tsvetkov,Luke Zettlemoyer
University of WashingtonCMUprompt-tuninginterpretabilitysoft-promptDOIDBLP
5
泛读EMNLP 2023

Specialist or Generalist? Instruction Tuning for Specific NLP Tasks

For a specific task, should one use a task-specific fine-tuned model or an instruction-tuned generalist? The community assumes instruction tuning trades single-task optimality for generality, but a systematic comparison has been missing.

Chufan Shi,Yixuan Su,Cheng Yang,Yujiu Yang,Deng Cai
Tsinghua Universityinstruction-tuningspecialist-vs-generalistsftDOIDBLP
5
泛读FindingsEMNLP 2023

POSQA: Probe the World Models of LLMs with Size Comparisons

Uses the simple task of object size comparison as a probe for whether LLMs hold a consistent internal world model (physical common sense). Earlier commonsense evaluations were mostly multiple-choice and could be solved via surface correlations.

Chang Shu,Jiuzhou Han,Fangyu Liu,Ehsan Shareghi,Nigel Collier
University of CambridgeMonash Universityprobingworld-modelinterpretabilityDOIDBLP
5
泛读EMNLP 2023

Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality

Contrastively trained VLMs (the CLIP family) are poor at compositional reasoning, failing to distinguish "red cube on blue ball" from "blue cube on red ball". The authors use scene graphs as structured supervision to teach the model object-attribute-relation understanding.

Harman Singh,Pengchuan Zhang,Qifan Wang,Mengjiao Wang,Wenhan Xiong,Jingfei Du,Yu Chen
Metacontrastive-learningvision-languagecompositionalityDOIarXivDBLP
5
泛读EMNLP 2023

Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks

LLM generation-quality evaluation is currently a mess: BLEU/ROUGE-style automatic metrics diverge badly from human judgment, while using GPT-4 as a judge lacks systematic validation. The authors set out to establish which families of metrics are actually trustworthy on seq2seq tasks.

Andrea Sottana,Bin Liang,Kai Zou,Zheng Yuan
evaluationmetricsseq2seqDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Probing LLMs for Joint Encoding of Linguistic Categories

Prior probing work tells us that lower LLM layers lean syntactic and higher layers lean semantic, but whether different linguistic categories share the same neurons/representations remains unresolved. The authors tackle this joint-encoding question, e.g., whether part of speech and dependency relations use the same circuit.

Giulio Starace,Konstantinos Papakostas,Rochelle Choenni,Apostolos Panagiotopoulos,Matteo Rosati,Alina Leidinger,Ekaterina Shutova
probinglinguistic-hierarchyinterpretabilityDOIarXivDBLP
5
泛读EMNLP 2023

Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?

A popular view holds that knowledge distillation ≈ label smoothing, since both soften the target distribution. The authors test whether this actually holds for real NLP models, in particular whether the two are interchangeable in their effects on generalization and confidence calibration.
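The analogy, and where it breaks, is visible directly in the target distributions each method trains against. A minimal sketch with hypothetical numbers (the teacher distribution and ε are made up for illustration):

```python
import math

def cross_entropy(target, pred):
    """H(target, pred) in nats; the training loss both methods minimize,
    differing only in the target distribution."""
    return -sum(t * math.log(p) for t, p in zip(target, pred) if t > 0)

def smoothed_target(onehot, eps):
    """Label smoothing: mix the one-hot target with the uniform distribution."""
    k = len(onehot)
    return [(1 - eps) * t + eps / k for t in onehot]

# KD trains against the teacher's (temperature-softened) distribution;
# label smoothing trains against a uniform mixture. They coincide only
# when the teacher is uniform over the non-target classes.
teacher = [0.7, 0.2, 0.1]                     # hypothetical informative teacher
smooth = smoothed_target([1, 0, 0], eps=0.3)  # -> [0.8, 0.1, 0.1]
```

Here the smoothed target spreads mass evenly over the wrong classes, while the teacher ranks them (0.2 vs 0.1); that class-similarity signal is exactly what the "KD = label smoothing" equation throws away, which is what the paper probes empirically.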

Md. Sultan
IBM Researchknowledge-distillationlabel-smoothingtraining-dynamicsDOIDBLP
5
泛读EMNLP 2023

Evaluating Large Language Models on Controlled Generation Tasks

How well do LLMs actually handle controlled generation, especially fine-grained hard constraints (a required word, a required structure, an exact length)? Prior work mostly examined coarse style/topic control, with no systematic comparison between LLMs and fine-tuned small models.

Jiao Sun,Yufei Tian,Wangchunshu Zhou,Nan Xu,Qian Hu,Rahul Gupta,John Frederick Wieting,Nanyun Peng,Xuezhe Ma
controlled-generationevaluationllmDOIarXivDBLP
5
泛读FindingsEMNLP 2023

An Empirical Study of Multimodal Model Merging

Existing model-merging work almost always interpolates between models that share an initialization and task family. The authors ask a harder question: can Transformers trained separately on different modalities (vision, language, cross-modal) be merged into a single modality-agnostic, parameter-efficient model?

Yi-Lin Sung,Linjie Li,Kevin Lin,Zhe Gan,Mohit Bansal,Lijuan Wang
UNC Chapel HillMicrosoftmodel-mergingmultimodaltransferDOIarXivDBLP
5
泛读EMNLP 2023

Conceptual structure coheres in human cognition but not in large language models

Siddharth Suresh,Kushin Mukherjee,Xizheng Yu,Wei-Chun Huang,Lisa Padua,Timothy T. Rogers
cognitionconceptsllmDOIDBLP
5
泛读EMNLP 2023

Recurrent Neural Language Models as Probabilistic Finite-state Automata

Anej Svete,Ryan Cotterell
rnnsformal-languagestheoryDOIDBLP
6
泛读FindingsEMNLP 2023

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks

This paper asks whether a summarization model exposed only through a black-box API leaks the membership of its training samples. The question has mostly been studied for generative LMs or classifiers, while summarization is often assumed safer because the output is a compressed summary that need not reproduce the source; the authors show this intuition is unreliable, as summarizers also expose training traces through similarity patterns and their stability under input perturbations.

Ruixiang Tang,Gord Lueck,Rodolfo Quispe,Huseyin A. Inan,Janardhan Kulkarni,Xia Hu
privacymembership-inferencelanguage-modelsDOIarXivDBLP
6
泛读EMNLP 2023

EasyQuant: An Efficient Data-free Quantization Algorithm for LLMs

This paper asks whether LLM weight quantization can work well enough with no calibration data at all. Most PTQ methods assume a small set of training or calibration samples to estimate activation distributions, which both creates a data dependency and risks overfitting the quantization to the chosen task and samples; the authors aim to remove that dependency.

Hanlin Tang,Yifu Sun,Decheng Wu,Kai Liu,Jianchen Zhu,Zhanhui Kang
quantizationllmdata-freeDOIarXivDBLP
2
FindingsEMNLP 2023

RSVP: Customer Intent Detection via Agent Response Contrastive and Generative Pre-Training

This paper addresses the over-reliance of customer intent detection on large-scale adaptive pretraining data, while a cheaper, highly relevant signal in the dialogue goes unused: the agent's responses. Prior methods focus on user utterances for discrimination or continued pretraining, yet in customer service the agent's reply often already encodes a judgment of the user's intent, and that supervision is wasted.

Yu-Chien Tang,Wei-Yao Wang,An-Zi Yen,Wen-Chih Peng
contrastive-learninggenerative-pretrainingdialogueDOIarXivDBLP
5
泛读EMNLP 2023

When are Lemons Purple? The Concept Association Bias of Vision-Language Models

Yingtian Tang,Yutaro Yamada,Yoyo Zhang,Ilker Yildirim
vlmbiasconceptsDOIDBLP
5
泛读FindingsEMNLP 2023

A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing

Attempts to explain the cognitive load of human sentence processing via Transformer self-attention rely on unrealistic assumptions (such as hundreds of parallel retrievals and hand-picked attention heads).

William Timkey,Tal Linzen
New York University (NYU)memorysentence-processinglanguage-modelingDOIarXivDBLP
5
泛读IndustryEMNLP 2023

A Comparative Analysis of Task-Agnostic Distillation Methods for Compressing Transformer Language Models

Task-agnostic (general-purpose) knowledge-distillation methods for Transformer language models abound, but lack a systematic comparison and selection guidance under a unified framework.

Takuma Udagawa,Aashka Trivedi,Michele Merler,Bishwaranjan Bhattacharjee
IBM ResearchdistillationcompressiontransformerDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Evaluating the Knowledge Base Completion Potential of GPT

LLMs contain rich world knowledge; can an LLM directly replace traditional knowledge-graph-embedding (KGE) models for rigorous knowledge base completion (KBC)?

Blerta Veseli,Simon Razniewski,Jan-Christoph Kalo,Gerhard Weikum
Max Planck Instituteknowledge-probinggptDOIDBLP
5
泛读DemoEMNLP 2023

Prompt2Model: Generating Deployable Models from Natural Language Instructions

Deploying a hundred-billion-parameter LLM for a specific task is extremely costly, while manually collecting data to train a small model is slow and labor-intensive.

Vijay Viswanathan,Chenyang Zhao,Amanda Bertsch,Tongshuang Wu,Graham Neubig
Carnegie Mellon University (CMU)data-synthesisdistillationDOIDBLP
5
泛读EMNLP 2023

Universal Self-Adaptive Prompting

Different tasks, and even different input samples, often require different prompt strategies (zero-shot, few-shot, CoT) to elicit an LLM's best performance; manually searching for the optimal prompt per task is prohibitively expensive.

Xingchen Wan,Ruoxi Sun,Hootan Nakhost,Hanjun Dai,Julian Eisenschlos,Sercan Ö. Arik,Tomas Pfister
Google DeepMindpromptingself-adaptiveDOIDBLP
5
泛读FindingsEMNLP 2023

NEWTON: Are Large Language Models Capable of Physical Reasoning?

The core contribution is filling the evaluation gap on whether LLMs possess physical common sense and physical reasoning. Prior work extensively tests grammar, semantics, commonsense, and math, but lacks a systematic benchmark for everyday object properties, affordances, and interactions; model ability is therefore judged from scattered cases, which is both unstable and unhelpful for targeted improvement.

Yi Ru Wang,Jiafei Duan,Dieter Fox,Siddhartha S. Srinivasa
benchmarkphysical-reasoningDOIarXivDBLP
5
泛读FindingsEMNLP 2023

DocSplit: Simple Contrastive Pretraining for Large Document Embeddings

This paper targets long-document representation learning: obtaining embeddings effective for large-document retrieval and matching without complex architectures or expensive supervision signals. Previous long-document embeddings relied on hierarchical encoders, task-specific supervision, or elaborate pretraining, which are costly to train and not necessarily stable in transfer.

Yujie Wang,Mike Izbicki
contrastivedocument-embeddingDOIDBLP
5
泛读EMNLP 2023

Learning from Mistakes via Cooperative Study Assistant for Large Language Models

This paper addresses the limits of "learning from mistakes" via self-feedback: although an LLM can revise answers based on its own feedback, a single model's self-reflection is often inaccurate. Many prior methods assign correction to the very model that erred, so error diagnosis and error repair can drift along the same bias.

Danqing Wang,Lei Li
self-improvementfeedbackDOIarXivDBLP
5
泛读EMNLP 2023

Document-Level Machine Translation with Large Language Models

The core question is whether LLMs like ChatGPT truly possess the discourse-modeling ability that document-level machine translation requires, rather than mere sentence-level fluency. Discussions of LLM translation usually focus on single-sentence quality, but document-level translation hinges on coreference resolution, discourse cohesion, and cross-sentence consistency, which expose the model's context-modeling limits more precisely.

Longyue Wang,Chenyang Lyu,Tianbo Ji,Zhirui Zhang,Dian Yu,Shuming Shi,Zhaopeng Tu
long-contextmtevaluationDOIarXivDBLP
5
泛读FindingsEMNLP 2023

Self-Knowledge Guided Retrieval Augmentation for Large Language Models

This paper studies a basic question in retrieval augmentation: when does an LLM actually need external retrieval, and when does it already know the answer? Many RAG systems default to "retrieve whenever possible", which adds needless overhead and can inject external noise that hurts generation stability.

Yile Wang,Peng Li,Maosong Sun,Yang Liu
ragself-knowledgeDOIDBLP
5
泛读FindingsEMNLP 2023

Can ChatGPT Defend its Belief in Truth? Evaluating LLM Reasoning via Debate

The core question is whether LLMs hold stable, defensible true beliefs, rather than merely producing seemingly correct answers in single-turn QA. Final-answer-only evaluation cannot distinguish a model that maintains the truth through consistent reasoning from one that is easily swayed by framing and adversarial rhetoric.

Boshi Wang,Xiang Yue,Huan Sun
reasoningevaluationllmDOIDBLP
5
泛读EMNLP 2023

Hallucination Detection for Generative Large Language Models by Bayesian Sequential Estimation

Xiaohua Wang,Yuliang Yan,Longtao Huang,Xiaoqing Zheng,Xuanjing Huang
hallucinationevaluationDOIDBLP
5
泛读FindingsEMNLP 2023

On the Dimensionality of Sentence Embeddings

Hongwei Wang,Hongming Zhang,Dong Yu
embeddings · dimensionality · DOI · DBLP
5
Skim · EMNLP 2023

Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings

Andrea W. Wen-Yi,David Mimno
interpretability · multilingual · embeddings · DOI · DBLP
5
Skim · EMNLP 2023

Re³Dial: Retrieve, Reorganize and Rescale Conversations for Long-Turn Open-Domain Dialogue Pre-training

Jiaxin Wen,Hao Zhou,Jian Guan,Jie Zhou,Minlie Huang
dialogue-pretraining · long-turn · data-construction · DOI · DBLP
5
Skim · Findings · EMNLP 2023

NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation

Peter West,Ronan Le Bras,Taylor Sorensen,Bill Yuchen Lin,Liwei Jiang,Ximing Lu ... 1 author omitted ... ,Jack Hessel,Ashutosh Baheti,Chandra Bhagavatula,Yejin Choi
knowledge-distillation · commonsense · foundation-model · DOI · DBLP
5
Skim · EMNLP 2023

Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy

Sarah Wiegreffe,Matthew Finlayson,Oyvind Tafjord,Peter Clark,Ashish Sabharwal
calibration · multiple-choice · llm-evaluation · DOI · DBLP
5
Skim · EMNLP 2023

Language Model Quality Correlates with Psychometric Predictive Power in Multiple Languages

Ethan Wilcox,Clara Meister,Ryan Cotterell,Tiago Pimentel
psychometrics · language-model · multilingual · DOI · DBLP
5
Skim · EMNLP 2023

DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models

Xinwei Wu,Junzhuo Li,Minghui Xu,Weilong Dong,Shuangzhi Wu,Chao Bian,Deyi Xiong
privacy · neuron-editing · interpretability · DOI · DBLP
5
Skim · EMNLP 2023

Query-as-context Pre-training for Dense Passage Retrieval

Xing Wu,Guangyuan Ma,Wanhui Qian,Zijia Lin,Songlin Hu
dense-retrieval · pretraining · query-as-context · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models

The problem this paper tackles: interpretability for LLMs has long relied on post-hoc analysis rather than readable structure built into the model itself. Existing explanation methods mostly run attribution or probing after inference and stay at the token or neuron level, making explanations in terms of higher-level semantic units hard; the authors instead write interpretability directly into the fine-tuning process.

Sean Xie,Soroush Vosoughi,Saeed Hassanpour
interpretability · prototypical-networks · architecture · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data

Canwen Xu,Daya Guo,Nan Duan,Julian J. McAuley
synthetic-data · sft · self-chat · DOI · DBLP
5
Skim · EMNLP 2023

Condensing Multilingual Knowledge with Lightweight Language-Specific Modules

In multilingual translation, adding an independent language-specific (LS) module per language improves performance, but once the language count scales to the hundreds, the parameter cost of full-rank matrices becomes unmanageable. The question is how to keep the benefit of LS modules while compressing their parameters substantially.
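
The parameter pressure is easy to quantify. Assuming a low-rank factorization W ≈ A @ B (a LoRA-style sketch; the paper's exact module design may differ), per-language parameters drop from d² to 2dr:

```python
# Back-of-the-envelope arithmetic for language-specific (LS) module sizes,
# using a rank-r factorization W ~ A @ B as an illustration.

def ls_params(d, num_langs, rank=None):
    """Total LS parameters: full-rank d*d per language, or 2*d*rank
    per language with a rank-`rank` factorization."""
    per_lang = d * d if rank is None else 2 * d * rank
    return num_langs * per_lang

d, langs, r = 1024, 200, 8
full = ls_params(d, langs)           # 209,715,200 parameters
low = ls_params(d, langs, rank=r)    #   3,276,800 parameters (64x smaller)
print(f"full-rank: {full:,}  rank-{r}: {low:,}  ratio: {full // low}x")
```

At 200 languages the full-rank variant is already in the hundreds of millions of extra parameters, which is the blow-up the summary refers to.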

Haoran Xu,Weiting Tan,Shuyue Stella Li,Yunmo Chen,Benjamin Van Durme,Philipp Koehn,Kenton Murray
Johns Hopkins University · multilingual · machine-translation · moe · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Look-back Decoding for Open-Ended Text Generation

In open-ended text generation, autoregressive decoding is prone to repetition and degeneration. Look-back Decoding tries to mitigate this degeneration at decoding time by looking back at the content generated so far.
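
A generic look-back check can be sketched as blocking candidate tokens that would recreate an already-generated n-gram. This repetition criterion is illustrative only, not the paper's exact decoding rule:

```python
# Illustrative look-back at decode time: reject a candidate token if
# appending it would repeat an n-gram that already occurred.

def creates_repeat(generated, candidate, n=3):
    """True if appending `candidate` would repeat an n-gram seen earlier."""
    if len(generated) < n - 1:
        return False
    new_ngram = tuple(generated[-(n - 1):]) + (candidate,)
    seen = {tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)}
    return new_ngram in seen

history = ["the", "cat", "sat", "on", "the", "cat"]
print(creates_repeat(history, "sat"))  # True: "the cat sat" already occurred
print(creates_repeat(history, "ran"))  # False: a fresh continuation
```

A decoder would apply such a check (or a softer penalty) to the candidate set at every step, steering sampling away from degenerate loops.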

Nan Xu,Chunting Zhou,Asli Celikyilmaz,Xuezhe Ma
Meta FAIR · USC · decoding-strategy · text-generation · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning

In In-Context Learning (ICL), all demonstration examples are treated with equal weight, yet their quality varies widely in practice. The questions are how to assign approximately optimal weights to demonstrations and how to apply those weights during ICL.
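
One way to picture demonstration reweighting (a hypothetical interface, not the paper's algorithm) is a weighted combination of the label distributions obtained with each demonstration:

```python
# Conceptual sketch: combine per-demonstration predictions with learned
# weights instead of treating every demonstration equally.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def reweighted_prediction(per_demo_probs, demo_scores):
    """per_demo_probs[i][c]: P(class c | demonstration i);
    demo_scores: raw weights, normalized via softmax."""
    w = softmax(demo_scores)
    n_classes = len(per_demo_probs[0])
    return [sum(w[i] * per_demo_probs[i][c] for i in range(len(w)))
            for c in range(n_classes)]

# Demo 0 is noisy, demo 1 is reliable; upweighting demo 1 flips the prediction.
probs = [[0.8, 0.2], [0.3, 0.7]]
print(reweighted_prediction(probs, [0.0, 0.0]))  # equal weights: ~[0.55, 0.45]
print(reweighted_prediction(probs, [0.0, 2.0]))  # favors demo 1's distribution
```

The open problem the paper studies is precisely how to obtain good `demo_scores` and how to realize the weighting inside the ICL forward pass rather than outside it.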

Zhe Yang,Damai Dai,Peiyi Wang,Zhifang Sui
Peking University · Tsinghua University · in-context-learning · demonstration-selection · prompting · DOI · arXiv · DBLP
5
Skim · Findings · EMNLP 2023

A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection

LLMs readily hallucinate (factual errors and unverified claims), and existing zero-resource hallucination detection methods mostly operate at the sentence level. The questions are how to do zero-resource hallucination detection at the passage level and how to build a reliable benchmark for it.

Shiping Yang,Renliang Sun,Xiaojun Wan
Peking University · hallucination · benchmark · evaluation · DOI · arXiv · DBLP
5
Skim · Findings · EMNLP 2023

RefGPT: Dialogue Generation of GPT, by GPT, and for GPT

How to use GPT itself to generate high-quality dialogue data for training and improving GPT-style models.

Dongjie Yang,Ruifeng Yuan,Yuantao Fan,Yifei Yang,Zili Wang,Shusen Wang,Hai Zhao
dialogue · synthetic-data · gpt · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt

(Note: abstract missing; inferred from the title) Addresses the weak zero-shot performance of instruction-tuned models while also pursuing computational efficiency.

Seonghyeon Ye,Joel Jang,Doyoung Kim,Yongrae Jo,Minjoon Seo
KAIST · instruction-tuning · retrieval · soft-prompt · DOI · DBLP
5
Skim · EMNLP 2023

Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism

Investigates whether LLMs performing Chain-of-Thought (CoT) reasoning have genuinely mastered logical deduction or are merely doing brittle pattern matching, with a focus on robustness to lexical negation.

Mengyu Ye,Tatsuki Kuribayashi,Jun Suzuki,Goro Kobayashi,Hiroaki Funayama
Tohoku University · cot · reasoning · negation · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Generating Data for Symbolic Language with Large Language Models

Addresses the difficulty, when using LLMs as data generators, of producing symbolic-language data with complex structural constraints (e.g. semantic parsing, code).

Jiacheng Ye,Chengzu Li,Lingpeng Kong,Tao Yu
HKU · synthetic-data · llm · symbolic · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Conceptor-Aided Debiasing of Large Language Models

Addresses the trade-off in existing LLM debiasing methods, which often remove social bias at the cost of severely degrading the model's general language ability (accuracy).

Yifei Li,Lyle H. Ungar,João Sedoc
UPenn · debiasing · representation · alignment · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

ALCUNA: Large Language Models Meet New Knowledge

Addresses the inability of existing benchmarks to evaluate how LLMs handle genuinely "new" knowledge, since real-world entities typically already appear in the pretraining corpus (data contamination).

Xunjian Yin,Baizhou Huang,Xiaojun Wan
Peking University · benchmark · knowledge · evaluation · DOI · arXiv · DBLP
5
Skim · Findings · EMNLP 2023

IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models

The problem this paper tackles: existing VLMs remain unreliable at zero-shot multi-step reasoning, while prior divide-and-conquer pipelines over-rely on specialized sub-question decomposers and are still forced to output a final answer even when information is insufficient. The authors want a more general, iterative vision-language reasoning framework that can keep asking follow-up questions when information is missing.

Haoxuan You,Rui Sun,Zhecan Wang,Long Chen,Gengyu Wang,Hammad A. Ayyubi,Kai-Wei Chang,Shih-Fu Chang
vlm · reasoning · decomposition · DOI · arXiv · DBLP
5
Skim · Findings · EMNLP 2023

Licon: A Diverse, Controllable and Challenging Linguistic Concept Learning Benchmark

The problem this paper tackles: existing linguistic concept learning evaluations are often insufficiently diverse, controllable, or challenging, making it hard to tell whether a model has learned the concept or is merely exploiting surface patterns. The problem is worth revisiting because, as large models improve, old benchmarks saturate quickly and lose their diagnostic value.

Shenglong Yu,Ying Zhang,Wenya Guo,Zhengkun Zhang,Ru Zhou,Xiaojie Yuan
benchmark · concept-learning · generalization · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction

The problem this paper tackles: when LLMs draw scientific analogies, they tend to stop at surface similarity rather than structural similarity. The authors test whether performing structure abduction first yields more reasonable scientific analogies. This matters because analogical reasoning fundamentally depends on relational structure, not keyword matching.

Siyu Yuan,Jiangjie Chen,Xuyang Ge,Yanghua Xiao,Deqing Yang
reasoning · analogy · structure · DOI · DBLP
5
Skim · EMNLP 2023

Mitigating Temporal Misalignment by Discarding Outdated Facts

The problem this paper tackles: temporal misalignment in language models is not only due to missing new facts; stale facts keep interfering, so actively discarding outdated facts deserves study in its own right. Prior work focuses on injecting or updating new knowledge and rarely treats "forgetting old knowledge" as an independent control objective.

Michael J. Q. Zhang,Eunsol Choi
temporal · knowledge · data-curation · DOI · DBLP
5
Skim · EMNLP 2023

RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation

Repository-level code completion: how to feed an LLM the context scattered across multiple files. A single file's context is insufficient, while naive concatenation is limited by the context window and noisy.
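
RepoCoder's iterate-retrieve-generate idea can be sketched with hypothetical `retrieve` and `generate` helpers: the draft completion itself becomes part of the next retrieval query, so later rounds find repository code that resembles what is being written:

```python
# Sketch of iterative retrieval-augmented completion (hypothetical helpers,
# not RepoCoder's actual implementation).

def iterative_completion(prompt, retrieve, generate, rounds=2):
    """retrieve(query) -> code snippets; generate(prompt, snippets) -> draft."""
    query, draft = prompt, ""
    for _ in range(rounds):
        snippets = retrieve(query)          # search the repo with current query
        draft = generate(prompt, snippets)  # regenerate with fresh context
        query = prompt + "\n" + draft       # the draft enriches the next query
    return draft

# Toy repo and stubs standing in for a real retriever and LLM:
repo = {"read_config": "def read_config(path): ..."}
retrieve = lambda query: [v for name, v in repo.items() if name in query]
generate = lambda prompt, snippets: ("config = read_config(path)"
                                     if snippets else "config = ???")
print(iterative_completion("def load(path): # uses read_config", retrieve, generate))
```

The first round retrieves with only the incomplete prompt; later rounds exploit the draft completion, which typically resembles the target code more closely than the prompt does.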

Fengji Zhang,Bei Chen,Yue Zhang,Jacky Keung,Jin Liu,Daoguang Zan,Yi Mao,Jian-Guang Lou,Weizhu Chen
Microsoft · City University of Hong Kong · code-completion · repo-level · retrieval · DOI · DBLP
5
Skim · Findings · EMNLP 2023

CITB: A Benchmark for Continual Instruction Tuning

There is no dedicated benchmark for continual instruction tuning: as a model takes on a stream of new instruction tasks, it must neither forget old abilities nor lose generalization to new tasks. Existing continual learning benchmarks are mostly classification tasks and do not cover the heterogeneity of instruction formats.

Zihan Zhang,Meng Fang,Ling Chen,Mohammad-Reza Namazi-Rad
University of Wollongong · Eindhoven University of Technology · instruction-tuning · continual-learning · benchmark · DOI · DBLP
5
Skim · Findings · EMNLP 2023

SAC³: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency

How to detect hallucinations in a black-box LLM (API access only). Self-consistency methods (e.g. SelfCheckGPT) sample the same question multiple times and check agreement, but they are helpless when the model is systematically and consistently wrong across paraphrases of the question.
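
The cross-check idea can be sketched as sampling answers for the original question and for paraphrases of it, then scoring agreement. `ask` and `paraphrase` are hypothetical stand-ins, and the scoring is deliberately simplified relative to SAC³:

```python
# Simplified cross-question consistency check: low agreement across the
# original question and its paraphrases is a hallucination signal.
from collections import Counter

def cross_check_score(question, ask, paraphrase, n_samples=5):
    """Fraction of all sampled answers agreeing with the majority answer."""
    answers = [ask(question) for _ in range(n_samples)]
    for q in paraphrase(question):
        answers += [ask(q) for _ in range(n_samples)]
    top = Counter(answers).most_common(1)[0][1]
    return top / len(answers)   # low score -> inconsistent -> suspect

# A model that answers the original question consistently but flips on a
# paraphrase would fool same-question self-consistency, yet scores low here.
ask = lambda q: "1912" if "Titanic" in q else "1921"
paras = lambda q: ["In which year did the ship sink?"]
print(cross_check_score("When did the Titanic sink?", ask, paras))  # 0.5
```

This is exactly the failure mode the summary mentions: same-question sampling alone cannot see the paraphrase inconsistency.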

Jiaxin Zhang,Zhuohang Li,Kamalika Das,Bradley A. Malin,Kumar Sricharan
Intuit AI Research · Vanderbilt University · hallucination · consistency · DOI · DBLP
5
Skim · EMNLP 2023

Contrastive Learning of Sentence Embeddings from Scratch

Contrastive learning of sentence embeddings has long been limited by the scarcity of high-quality positive pairs: NLI annotation is expensive, and the unsupervised SimCSE route (passing the same sentence through dropout twice) has a clearly lower ceiling than supervised training. Can supervised-level quality be reached without any labeled data?
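
The contrastive objective behind this line of work is InfoNCE over positive pairs with in-batch negatives. A pure-Python sketch, with toy 2-d vectors standing in for encoder outputs:

```python
# Minimal InfoNCE: pull each anchor toward its positive view (e.g. the same
# sentence under a second dropout pass), push it from other in-batch items.
import math

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchors, positives, temperature=0.05):
    """Mean -log softmax of each anchor's similarity to its own positive."""
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [cos(a, p) / temperature for p in positives]
        log_z = math.log(sum(math.exp(s) for s in sims))
        loss += log_z - sims[i]        # -log P(correct positive | anchor)
    return loss / len(anchors)

# Correctly paired views give a lower loss than mismatched ones:
anchors = [[1.0, 0.0], [0.0, 1.0]]
print(info_nce(anchors, [[0.9, 0.1], [0.1, 0.9]]) <
      info_nce(anchors, [[0.1, 0.9], [0.9, 0.1]]))  # True
```

What differs across methods is where the positive pairs come from (NLI labels, dropout views, or synthetic data); the loss itself stays essentially this.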

Junlei Zhang,Zhenzhong Lan,Junxian He
Westlake University · Hong Kong University of Science and Technology · sentence-embedding · contrastive · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models

Prompt engineering for black-box LLMs is heavily manual, and different phrasings of the same task can differ by more than 10 points. Can the model generate and select instructions itself, removing the need for manual prompt tuning?

Zhihan Zhang,Shuohang Wang,Wenhao Yu,Yichong Xu,Dan Iter,Qingkai Zeng,Yang Liu,Chenguang Zhu,Meng Jiang
University of Notre Dame · Microsoft · instruction · prompting · data-synthesis · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment

Cross-lingual transfer of vision-language models usually requires either retraining the encoder or expensive multilingual alignment pretraining, which is costly and covers few languages. The authors want a lighter-weight way to transfer an English VLM to other languages.

Zhen Zhang,Jialu Wang,Xin Eric Wang
University of California, Santa Cruz · vlm · cross-lingual · peft · DOI · DBLP
5
Skim · EMNLP 2023

Generative Table Pre-training Empowers Models for Tabular Prediction

Tabular prediction (classification/regression) has long been dominated by GBDTs (XGBoost/LightGBM), and deep methods lack a cross-table pretraining paradigm. The authors ask: can LM-style generative pretraining be brought to tabular data, so that one pretrained model transfers to diverse downstream table tasks?

Tianping Zhang,Shaowen Wang,Shuicheng Yan,Li Jian,Qian Liu
Sea AI Lab · Tsinghua University · tabular · pretraining · generative · DOI · DBLP
5
Skim · Demo · EMNLP 2023

ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large Language Models

Chinese LLMs lack systematic, multi-dimensional evaluation. The authors build a Chinese benchmark covering knowledge, reasoning, generation, values, and other dimensions.

Baoli Zhang,Haining Xie,Pengfan Du,Junhao Chen,Pengfei Cao,Yubo Chen,Shengping Liu,Kang Liu,Jun Zhao
Institute of Automation, Chinese Academy of Sciences · benchmark · chinese · evaluation · DOI · DBLP
5
Skim · Findings · EMNLP 2023

Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach

Existing LLM evaluations mostly report exam-style total scores, which hide differences in a model's cognitive structure across knowledge dimensions. The authors apply diagnostic assessment ideas from education to characterize an LLM's knowledge landscape.

Zheyuan Zhang,Jifan Yu,Juanzi Li,Lei Hou
Tsinghua University · knowledge-probing · evaluation · DOI · arXiv · DBLP
5
Skim · Findings · EMNLP 2023

Retrieving Multimodal Information for Augmented Generation: A Survey

The literature on multimodal retrieval-augmented generation (MM-RAG) is scattered across modalities and tasks, lacking a unified view of at what stage, and in what form, multimodal knowledge is best injected into a generative model.

Ruochen Zhao,Hailin Chen,Weishi Wang,Fangkai Jiao,Do Xuan Long,Chengwei Qin ... 1 author omitted ... ,Xiaobao Guo,Minzhi Li,Xingxuan Li,Shafiq Joty
Nanyang Technological University · rag · multimodal · survey · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis

Few-shot prompting works well for multi-hop reasoning, but its robustness has not been studied systematically. The authors ask: which perturbations of the exemplars make an LLM fall apart, and conversely, can perturbations be used to improve robustness?

Hongyi Zheng,Abulhair Saparov
New York University · few-shot · robustness · reasoning · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models

LMs are used as sources of facts, but do expressions of uncertainty or confidence in the user's wording ("I think", "Wikipedia says") cause the model to change its answers? Is the model's epistemology stable?

Kaitlyn Zhou,Dan Jurafsky,Tatsunori Hashimoto
Stanford University · epistemics · uncertainty · llm-behavior · DOI · arXiv · DBLP
5
Skim · EMNLP 2023

Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models

In zero-shot quantization of pretrained language models (quantization without calibrating on the original training data), existing methods ignore the flatness of the post-quantization loss surface, leaving the quantized model with poor generalization.
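
A toy illustration (not the paper's method) of why loss-surface flatness matters for quantization: quantization snaps weights to a coarse grid, and a flat minimum absorbs that perturbation far better than a sharp one:

```python
# Toy model of quantization as a weight perturbation. The loss functions and
# grid are invented for illustration.

def quantize(w, bits=4, w_max=1.0):
    """Symmetric uniform quantization onto a signed (bits)-bit grid."""
    levels = 2 ** (bits - 1) - 1
    scale = w_max / levels
    return round(w / scale) * scale

flat_loss = lambda w: 0.1 * (w - 0.5) ** 2    # flat minimum at w = 0.5
sharp_loss = lambda w: 10.0 * (w - 0.5) ** 2  # sharp minimum at w = 0.5

w = 0.5                     # both models sit at their optimum before quantizing
wq = quantize(w, bits=3)    # only 3 positive levels, so the grid is coarse
print(round(wq, 3), round(flat_loss(wq), 4), round(sharp_loss(wq), 4))
```

Both minima sit at the same weight, but after snapping to the grid the sharp model's loss rises about 100x more, which is the intuition behind optimizing for flatness before or during quantization.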

Miaoxi Zhu,Qihuang Zhong,Li Shen,Liang Ding,Juhua Liu,Bo Du,Dacheng Tao
quantization · zero-shot · sharpness-aware · DOI · DBLP