Yekun Chai
Contact:
chaiyekun (at) gmail.com
I am a staff engineer at Baidu NLP, where I have contributed to Baidu’s LLM series, including ERNIE 4.0, ERNIE 3.5, and ERNIE-Code, and to their industry-driven generative AI products such as ERNIE-Bot (文心一言) and Comate (文心快码). Before that, I honed my skills in RL and NLP at the Institute of Automation, Chinese Academy of Sciences. I pursued my academic studies in NLP at the University of Edinburgh, where I was fortunate to be supervised by Dr. Adam Lopez and Dr. Naomi Saphra.
My research endeavors revolve around transformers and generative AI, with a particular emphasis on:
- Pre-training large-scale foundation models across languages, modalities, and tasks.
- Efficient alignment, reasoning, and inference at scale.
- Multimodal deep generative models.
news
Sep 21, 2024 | Our papers on pixel-based pre-training, training data attribution, and tokenization robustness have been accepted to EMNLP 2024 (main conference and Findings).
May 02, 2024 | One paper on GiLOT, an XAI approach for LLMs, has been accepted to ICML 2024.
Feb 20, 2024 | One paper on HumanEval-XL, a multilingual code generation benchmark, has been accepted to LREC-COLING 2024. We’ve released the code and data.
Jan 16, 2024 | One paper on reward models with tool-augmented feedback has been accepted to ICLR 2024 (spotlight). Dive into our research and code now!
Sep 23, 2023 | One paper on XAI has been accepted to the NeurIPS 2023 Datasets and Benchmarks track. Code is available here.