A review of multimodal tokenization approaches based on vector quantization[1].
Inductive Positions in Transformers
Diffusion Models: A Mathematical Note from Scratch
A diffusion probabilistic model is a parameterized Markov chain trained to reverse a predefined forward process, and it is closely related to both likelihood-based optimization and score matching. The forward diffusion process is a stochastic process constructed to gradually corrupt the original data into random noise.
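To make the forward process concrete, a minimal sketch in the standard DDPM-style notation; the variance schedule $\beta_1,\dots,\beta_T$ and the symbols below are assumed for illustration rather than taken from the note itself:

```latex
% Forward (noising) process with an assumed variance schedule \beta_1,\dots,\beta_T:
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right),
\qquad
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}).

% Closed-form marginal, with \alpha_t = 1-\beta_t and \bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s,
% showing how x_t drifts toward pure Gaussian noise as t grows:
q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\,\mathbf{I}\right).
```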
Large Language Models for Programming Languages
A note on code pre-trained language models (PLMs).
Efficient Large-Scale Distributed Training
A note on distributed training methods for large neural models.
Mask Denoising Strategy for Pre-trained Language Models
Mask modeling plays a crucial role in pre-training language models. This note provides a short summary.
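As a concrete instance, a minimal sketch of BERT-style token masking; the 15% masking rate, the 80/10/10 replacement rule, and the token ids below are assumptions for illustration, not details stated in this note:

```python
import random

# Hypothetical special-token id and vocabulary size; actual values depend on the tokenizer.
MASK_ID = 103
VOCAB_SIZE = 30522
IGNORE_INDEX = -100  # positions excluded from the masked-LM loss

def mask_tokens(token_ids, mask_prob=0.15):
    """BERT-style masking sketch: pick ~15% of positions as prediction targets,
    then replace 80% of them with [MASK], 10% with a random token, and keep 10% unchanged."""
    inputs, labels = list(token_ids), [IGNORE_INDEX] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok  # the model must reconstruct the original token here
            r = random.random()
            if r < 0.8:
                inputs[i] = MASK_ID
            elif r < 0.9:
                inputs[i] = random.randrange(VOCAB_SIZE)
            # else: leave the original token in place
    return inputs, labels
```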
Subword Tokenization in Natural Language Processing
A summary of subword tokenization in natural language processing.
Scaling Up Large Language Models: A Summary
A summary of large language models (LLMs) at large scale (beyond 10B parameters).
Sequence GANs in a Nutshell
Background: Conventional maximum likelihood training for sequence generation with teacher forcing is inherently prone to exposure bias at inference time, owing to the training-testing discrepancy: the generator produces a sequence iteratively, conditioned on its own previous predictions, which may never have been observed during training, so the mismatch accumulates as the generated sequence grows. In other words, the model is trained only on demonstrated behaviors (real data samples), never in free-running mode.
Generative Adversarial Networks (GANs) hold promise for mitigating such issues in discrete sequence generation tasks such as language modeling and speech/music generation.
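To illustrate the training-testing discrepancy above, a minimal sketch contrasting teacher-forced training with free-running sampling; the `model(prefix)` interface returning next-token logits is a hypothetical assumption, not an API from this note:

```python
import torch

def teacher_forcing_loss(model, target_ids):
    """Training: every step is conditioned on the *ground-truth* prefix."""
    loss = 0.0
    for t in range(1, len(target_ids)):
        logits = model(target_ids[:t])  # gold prefix from the real sample
        loss = loss + torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_ids[t]]))
    return loss / (len(target_ids) - 1)

def free_running_sample(model, bos_id, max_len=20):
    """Inference: every step is conditioned on the model's *own* previous samples,
    so early mistakes feed back into later predictions (exposure bias)."""
    generated = [bos_id]
    for _ in range(max_len):
        logits = model(generated)  # self-generated prefix, possibly never seen in training
        next_id = torch.distributions.Categorical(logits=logits).sample().item()
        generated.append(next_id)
    return generated
```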
Automatic Evaluation Metrics for Language Generation
A summary of automatic evaluation metrics for natural language generation (NLG) applications.
Human evaluation considers aspects such as adequacy, fidelity, and fluency, but it is quite expensive.
- Adequacy: Does the output convey the same meaning as the input sentence? Is part of the message lost, added, or distorted?
- Fluency: Is the output fluent English? This involves both grammatical correctness and idiomatic word choices.
Thus, useful automatic evaluation metrics hold promise for NLG applications such as machine translation, text summarization, image captioning, dialogue generation, and poetry/story generation.
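As one simple illustration of what such an automatic metric can look like, a minimal sketch of clipped n-gram precision between a candidate and a reference; this is a toy overlap score for illustration, not any specific published metric:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Fraction of candidate n-grams that also appear in the reference (with clipped counts)."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    cand_counts = Counter(cand)
    matched = sum(min(c, ref[g]) for g, c in cand_counts.items())
    return matched / len(cand)

# Example: a candidate output scored against a reference sentence.
print(ngram_precision("the cat sat on the mat".split(),
                      "the cat is on the mat".split(), n=1))  # -> 0.833...
```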