Yekun Chai

Baidu ERNIE


Contact:
chaiyekun (at) gmail.com

I am a Staff Researcher at Baidu NLP on the ERNIE team.

My research focuses on foundation models, spanning pre-training, post-training, and reasoning. I am particularly interested in efficient scaling paradigms, the training dynamics of large-scale models, and reinforcement learning–based methods.

I welcome collaboration and research discussions.

news

Aug 21, 2025 Four papers have been accepted to EMNLP 2025 & Findings.
May 16, 2025 One paper on curiosity-driven RLHF has been accepted to ACL 2025.
Jan 23, 2025 One paper on macro action RLHF has been accepted to ICLR 2025. Dive into our research and code now!
Sep 21, 2024 Our papers on PixelGPT, GPTfluence, and TKEval have been accepted to EMNLP 2024 & Findings.
May 02, 2024 One paper on GiLOT, an XAI approach for LLMs, has been accepted to ICML 2024.

selected publications

  1. ICLR
    MA-RLHF: Reinforcement Learning from Human Feedback with Macro Actions
    Yekun Chai*†, Haoran Sun*, Huang Fang, Shuohuan Wang, and 2 more authors
    In The Thirteenth International Conference on Learning Representations, 2025
  2. ICLR (Spotlight)
    Tool-Augmented Reward Modeling
    Lei Li*, Yekun Chai*†, Shuohuan Wang, Yu Sun, and 3 more authors
    In The Twelfth International Conference on Learning Representations (top 5%), 2024
  3. ACL-Findings
    ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages
    Yekun Chai, Shuohuan Wang, Chao Pang, Yu Sun, and 2 more authors
    In Findings of the Association for Computational Linguistics: ACL 2023, Jul 2023

academic services

Conference PC/Reviewer: ACL, EMNLP, ICLR, NeurIPS, NAACL, COLING, EACL, COLM, ICASSP, LREC
Journal Reviewer: TASLP