Yekun Chai

Contact:
chaiyekun (at) gmail.com
I am a staff researcher at Baidu NLP, on the ERNIE team.
I work on pre-training, post-training, and reasoning research. My research revolves around natural language processing and beyond, specifically (1) scaling transformers across languages, modalities, and tasks, and (2) efficient alignment, reasoning, and inference at scale.
I have contributed to Baidu’s large language model series, including ERNIE 5.0, 4.0, and 3.5, and ERNIE-Code, as well as their generative AI products, e.g., ERNIE-Bot (文心一言, 2023) and Baidu Comate (文心快码, 2022). Before that, I honed my skills in RL and NLP at the Institute of Automation, Chinese Academy of Sciences. I pursued my academic studies in NLP at the University of Edinburgh, under the supervision of Adam Lopez and Naomi Saphra.
news
Jan 23, 2025 | One paper on MA-RLHF has been accepted to ICLR 2025!
Sep 21, 2024 | Our papers on PixelGPT, GPTfluence, and TKEval have been accepted to EMNLP 2024 & Findings.
May 02, 2024 | One paper on GiLOT, an XAI approach for LLMs, has been accepted to ICML 2024.
Feb 20, 2024 | One paper on HumanEval-XL, a multilingual code generation benchmark, has been accepted to LREC-COLING 2024. We’ve released the code and data.
Jan 16, 2024 | One paper on tool-augmented reward models has been accepted to ICLR 2024 (spotlight).