Yekun Chai
Contact:
chaiyekun (at) gmail.com
I am a staff research engineer working on large language models at Baidu NLP. Before that, I was affiliated with the Institute of Automation, Chinese Academy of Sciences (CASIA). I graduated from the University of Edinburgh, where I was supervised by Adam Lopez and Naomi Saphra.
My research centers on transformers and generative AI, with particular emphasis on:
- Pre-training large-scale foundation models across languages, modalities, and tasks.
- AI alignment, reasoning, and scalable oversight.
- Multimodal deep generative models.
news
- Sep 21, 2024: Our papers on pixel-based pre-training, training data influence, and LLM tokenization have been accepted to EMNLP 2024 (main conference and Findings).
- May 02, 2024: One paper on GiLOT, an explainable AI (XAI) approach for LLMs, has been accepted to ICML 2024.
- Feb 20, 2024: One paper on HumanEval-XL, a multilingual code generation benchmark, has been accepted to LREC-COLING 2024. We’ve released the code and data.
- Jan 16, 2024: One paper on reward models with tool-augmented feedback has been accepted to ICLR 2024 (spotlight). Dive into our research and code now!
- Sep 23, 2023: One paper on XAI has been accepted to the NeurIPS 2023 Datasets and Benchmarks track. Code is available.