We summarize the positional encoding approaches in transformers.
Summary
| PE | Relative | Trainable | Each Layer | Extrapolation |
|---|---|---|---|---|
| Sinusoidal | ✘ | ✘ | ✘ | ✘ |
| T5 bias | ✔ | ✔ | ✔ | ✔ |
| RoPE | ✔ | ✔ | ✔ | ✘ |
| ALiBi | ✔ | ✘ | ✔ | ✔ |
| KERPLE | ✔ | ✔ | ✔ | ✔ |
| Sandwich | ✔ | ✘ | ✔ | ✔ |
| xPos | ✔ | ✘ | ✔ | ✔ |
Position Encoding
Sinusoidal Position Embeddings
Sinusoidal position embeddings[1] are constant (non-learned) vectors added to the token embeddings at the input of the first transformer layer:

$$PE_{(\text{pos},\,2i)} = \sin\!\big(\text{pos}/10000^{2i/d_{\text{model}}}\big), \qquad PE_{(\text{pos},\,2i+1)} = \cos\!\big(\text{pos}/10000^{2i/d_{\text{model}}}\big)$$

where $\text{pos}$ is the position in the sentence and $i$ indexes the embedding dimension. The authors hypothesize that this allows the model to learn to attend by relative positions, since for any fixed offset $k$, $PE_{\text{pos}+k}$ can be represented as a linear function of $PE_{\text{pos}}$.
Positional encoding layer in PyTorch:
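A minimal sketch (the module name and `max_len` are illustrative, not from the original snippet):

```python
import math
import torch

class SinusoidalPositionalEncoding(torch.nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).float().unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); the encodings are constant, not learned
        return x + self.pe[:, : x.size(1)]
```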
- Refer to Attention in a Nutshell.
- Relative position in Transformer-XL: refer to Transformer Variants: A Peek
Rotary Position Embedding (RoPE)
Rotary Position Embedding (RoPE)[3][4] uses complex numbers as the base field of the encoding space: instead of working in $\mathbb{R}^d$, it treats consecutive pairs of elements of the query and key vectors as single complex numbers, i.e., it works in $\mathbb{C}^{d/2}$.
Specifically, instead of viewing $\mathbf{q} = (q_1, q_2, \dots, q_d)$ as a $d$-dimensional real vector, RoPE views it as $(q_1 + i q_2,\; q_3 + i q_4,\; \dots,\; q_{d-1} + i q_d) \in \mathbb{C}^{d/2}$. If $d$ is odd, RoPE pads the vector with a dummy coordinate so the pairing lines up; alternatively, one simply increases $d$ by one.
Derivation
The complex-number form of RoPE is:

$$f_q(\mathbf{x}_m, m) = (\mathbf{W}_q \mathbf{x}_m)\, e^{i m\theta}, \qquad f_k(\mathbf{x}_n, n) = (\mathbf{W}_k \mathbf{x}_n)\, e^{i n\theta},$$

so that the query-key product depends only on the relative position: $\langle f_q(\mathbf{x}_m, m), f_k(\mathbf{x}_n, n)\rangle = \mathrm{Re}\big[(\mathbf{W}_q\mathbf{x}_m)(\mathbf{W}_k\mathbf{x}_n)^{*}\, e^{i(m-n)\theta}\big]$.
It is convenient to convert this into a matrix equation:

$$f_q(\mathbf{X}_m, m) = \mathbf{\Theta}_m \mathbf{W}_q \mathbf{X}_m, \qquad \mathbf{\Theta}_m = \begin{pmatrix} \cos m\theta_1 & -\sin m\theta_1 & & & \\ \sin m\theta_1 & \cos m\theta_1 & & & \\ & & \ddots & & \\ & & & \cos m\theta_{d/2} & -\sin m\theta_{d/2} \\ & & & \sin m\theta_{d/2} & \cos m\theta_{d/2} \end{pmatrix}$$

where $\mathbf{\Theta}_m$ is the block-diagonal rotation matrix, $\mathbf{W}_q$ is the learned query weight matrix, and $\mathbf{X}_m$ is the embedding of the $m$-th token.
Because multiplying by this sparse matrix directly is computationally wasteful, it is implemented as:

$$\mathbf{\Theta}_m \mathbf{q} = \begin{pmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \\ \vdots \end{pmatrix} \odot \begin{pmatrix} \cos m\theta_1 \\ \cos m\theta_1 \\ \cos m\theta_2 \\ \cos m\theta_2 \\ \vdots \end{pmatrix} + \begin{pmatrix} -q_2 \\ q_1 \\ -q_4 \\ q_3 \\ \vdots \end{pmatrix} \odot \begin{pmatrix} \sin m\theta_1 \\ \sin m\theta_1 \\ \sin m\theta_2 \\ \sin m\theta_2 \\ \vdots \end{pmatrix}$$

where $\odot$ denotes the element-wise (Hadamard) product.
- Extension to Multiple Dimensions
Differences from the sinusoidal embedding[4]
- Sinusoidal embeddings apply to each coordinate individually, while RoPE mixes pairs of coordinates.
- Sinusoidal embeddings add a $\cos(m\theta)$ or $\sin(m\theta)$ term, while RoPE uses a multiplicative factor.
Implementation
GPT-NeoX (PyTorch) implementation (note that it rotates the two halves of each vector rather than interleaved pairs, which is an equivalent rearrangement of RoPE's dimensions):

import torch
class Rotary(torch.nn.Module):
def __init__(self, dim, base=10000):
super().__init__()
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
self.register_buffer("inv_freq", inv_freq)
self.seq_len_cached = None
self.cos_cached = None
self.sin_cached = None
def forward(self, x, seq_dim=1):
seq_len = x.shape[seq_dim]
if seq_len != self.seq_len_cached:
self.seq_len_cached = seq_len
t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq)
freqs = torch.einsum("i,j->ij", t, self.inv_freq)
emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
self.cos_cached = emb.cos()[:, None, None, :]
self.sin_cached = emb.sin()[:, None, None, :]
return self.cos_cached, self.sin_cached
# rotary pos emb helpers:
def rotate_half(x):
x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]
return torch.cat(
(-x2, x1), dim=x1.ndim - 1
) # dim=-1 triggers a bug in torch < 1.8.0
def apply_rotary_pos_emb(q, k, cos, sin):
return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin)
RoPE with Bias
[6] finds that adding bias terms to RoPE can improve length-extrapolation capability: the rotated queries and keys (rotation matrices $\boldsymbol{\mathcal{R}}_m$ and $\boldsymbol{\mathcal{R}}_n$, as in RoPE) are combined with learnable bias vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ in the attention score.
NB: plain softmax self-attention gives equivalent results with or without the bias term, since it can be cancelled by the softmax normalization.
With RoPE, however, removing the bias term does not yield the same results.[6]
T5 Bias
T5[7] adds no position encoding to the word embeddings. Instead, it adds a learned, shared bias to each query-key self-attention score that depends only on the distance between the query and the key. Multiple different distances share the same learned bias, which might be beneficial for length extrapolation. Specifically, a fixed number of embeddings are learned, each corresponding to a range of possible query-key offsets.
T5 uses a bucket of 32 learnable parameters per head and assigns the relative-position bias with a log-binning strategy (sketched below).
[8] finds that T5 bias enables length extrapolation.
T5 uses 32 embeddings for all models with ranges that increase in size logarithmically up to an offset of 128 beyond which it assigns all relative positions to the same embedding. All position embeddings are shared across all layers in T5, though within a given layer each attention head uses a different learned position embedding.
Implementation reference: `class T5Attention(nn.Module)`.
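A minimal sketch of the log-binning bucket assignment for the causal (unidirectional) case; the function name and the clamping details are illustrative, assuming 32 buckets and a maximum distance of 128 as in T5:

```python
import math
import torch

def relative_position_bucket(relative_position, num_buckets=32, max_distance=128):
    # relative_position = key_pos - query_pos (non-positive under a causal mask)
    n = (-relative_position).clamp(min=0)      # distance to the past
    max_exact = num_buckets // 2               # first half: one bucket per exact offset
    is_small = n < max_exact
    # second half: logarithmically growing ranges up to max_distance;
    # anything farther than max_distance shares the last bucket
    val_if_large = max_exact + (
        torch.log(n.float().clamp(min=1) / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    ).long()
    val_if_large = val_if_large.clamp(max=num_buckets - 1)
    return torch.where(is_small, n, val_if_large)

# each bucket indexes a learned per-head bias that is added to the attention logits
```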
ALiBi (Attention with Linear Biases)
Length extrapolation allows transformers to be trained on short sequences and tested on substantially longer ones, by means of relative positional encoding.
ALiBi[8] adds a static, non-learnable bias to the query-key dot product. As with the T5 bias and RoPE, it injects position information at every layer rather than only at the input embeddings:

$$\text{softmax}\big(\mathbf{q}_m \mathbf{k}_n^\top - \alpha\,(m-n)\big), \qquad n \le m,$$

where $\alpha$ is a fixed, head-specific slope. For the $i$-th of $h$ heads, the slope takes the value $\alpha_i = 2^{-8i/h}$, i.e., a geometric sequence starting at $2^{-8/h}$.
The ALiBi bias is not multiplied by the $\sqrt{d_k}$ scaling factor used in the original transformer.
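A minimal sketch of the bias construction (function names are illustrative; the slope formula assumes the standard geometric sequence):

```python
import torch

def alibi_slopes(n_heads):
    # geometric sequence starting at 2^(-8/n_heads), with the same value as ratio
    start = 2 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads, seq_len):
    # static bias added to the causal attention logits:
    # for query position m and key position n <= m, the bias is -slope * (m - n)
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)     # (L, L)
    return -alibi_slopes(n_heads)[:, None, None] * distance   # (n_heads, L, L)
```

The bias is simply added to the attention scores before the causal mask and softmax; no parameters are learned.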
It is observed that ALiBi and the T5 bias show length-extrapolation ability[8], while RoPE and sinusoidal embeddings do not.
KERPLE
KERPLE (Kernelized Relative Positional Embedding for Length Extrapolation)[9] generalizes relative-position biases to conditionally positive definite kernels, with two main variants:

$$b_{m,n} = -r_1 \log\!\big(1 + r_2\,|m-n|\big) \quad \text{(logarithmic)}, \qquad b_{m,n} = -r_1\,|m-n|^{\,r_2} \quad \text{(power)},$$

where $r_1, r_2 > 0$ are learnable parameters (per head).
Triangle kernel: $-r_1\,|m-n|$, i.e., the power variant with $r_2 = 1$; with a fixed head-specific slope it reduces to ALiBi.
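For illustration, a sketch of the logarithmic variant's bias matrix for one head (scalar learnable $r_1, r_2$; names are illustrative):

```python
import torch

def kerple_log_bias(r1, r2, seq_len):
    # logarithmic KERPLE kernel: -r1 * log(1 + r2 * |m - n|), with r1, r2 > 0 learnable
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).abs().float()
    return -r1 * torch.log1p(r2 * dist)
```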
xPos
Extrapolatable Position Embedding (xPos)[10] augments RoPE with a per-dimension exponential decay:

$$\tilde{\mathbf{q}}_m = (\mathbf{W}_q \mathbf{x}_m)\, e^{i m\theta}\, \xi^{m}, \qquad \tilde{\mathbf{k}}_n = (\mathbf{W}_k \mathbf{x}_n)\, e^{i n\theta}\, \xi^{-n}, \qquad 0 < \xi \le 1,$$

so the query-key product carries a factor $\xi^{\,m-n}$ that decays as the (causal) relative distance grows, which stabilizes attention over long ranges.
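A minimal sketch of the decay factor that accompanies the rotary rotation; the per-pair rate $\hat{\xi}_i = \frac{i/(d/2)+\gamma}{1+\gamma}$ is assumed here, with $\gamma$ and `scale_base` treated as hyperparameters (names are illustrative):

```python
import torch

def xpos_decay(dim, seq_len, scale_base=512, gamma=0.4):
    # per-pair decay rates in (0, 1]; lower-frequency dimensions decay more slowly
    pair_idx = torch.arange(dim // 2).float()
    xi = (pair_idx / (dim / 2) + gamma) / (1 + gamma)            # (dim/2,)
    xi = torch.cat((xi, xi), dim=-1)                              # match the rotate_half layout
    power = torch.arange(seq_len).float()[:, None] / scale_base   # (seq_len, 1)
    return xi[None, :] ** power                                   # (seq_len, dim)

# after applying apply_rotary_pos_emb (see the RoPE code above), queries are
# multiplied by this decay and keys by its reciprocal, so q_m . k_n picks up
# a factor that shrinks with the offset m - n
```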
Sandwich
Sandwich[11] reuses the sinusoidal embeddings, but as an additive bias on the attention logits rather than as part of the token representation. The self-attention score is calculated as:

$$A_{m,n} = \mathbf{q}_m^\top \mathbf{k}_n + \lambda\, b_{m,n}$$

The temporal bias term is the inner product of the sinusoidal position embeddings of the two positions,

$$b_{m,n} = \mathbf{p}_m^\top \mathbf{p}_n = \sum_{j} \cos\big((m-n)\,\theta_j\big),$$

which depends only on the offset $m-n$ and shrinks as the distance grows.
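A NumPy sketch of the bias matrix (the scaling $\lambda$ and the choice of bias dimension are omitted):

```python
import numpy as np

def sandwich_bias(seq_len, dim, base=10000.0):
    # temporal bias = inner product of the sinusoidal embeddings of the query
    # and key positions; it depends only on the offset m - n and decays with distance
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    angles = np.outer(np.arange(seq_len), inv_freq)                  # (L, dim/2)
    pe = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)   # (L, dim)
    return pe @ pe.T                                                 # (L, L)
```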
Randomized Position
[13] introduces randomized positional encoding, which simulates the positions of longer sequences and randomly selects an ordered subset to fit the sequence's length.
Source code: https://github.com/deepmind/randomized_positional_encodings/blob/main/models/positional_encodings.py#L160
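A minimal sketch of the position-sampling step (`max_len`, the maximum simulated length, is an assumed hyperparameter; see the linked source for the full set of encodings):

```python
import torch

def sample_random_positions(seq_len, max_len=2048):
    # simulate a longer sequence: draw seq_len distinct positions from [0, max_len)
    # and keep them in increasing order; these replace the usual 0..seq_len-1
    # indices fed to the positional encoding during training
    assert seq_len <= max_len
    positions = torch.randperm(max_len)[:seq_len]
    return positions.sort().values
```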
It allows transformers to generalize to sequences of unseen length (increasing test accuracy by 12.0% on average) across 15 algorithmic reasoning tasks.
No Position
[12] observes that LMs without any explicit position encoding (NoPos) are still competitive with standard transformers across datasets, model sizes, and sequence lengths. This suggests that causal LMs can derive positional awareness not only from an explicit positioning mechanism but also from the effects of the causal mask.
Causal transformer LMs achieve results competitive with standard LMs, while bidirectional masked LMs fail to converge. This may be because causal LMs can infer positions from their autoregressive (left-to-right) nature, whereas masked LMs are order-invariant without positional information.
NB: LMs without explicit positional encodings (NoPos) are consistently slightly worse, suggesting that an explicit positional inductive bias still helps.
Position Interpolation
Instead of extrapolating, [15][16][17][18] present position interpolation (PI), which directly down-scales the position indices (to non-integer values, in RoPE-based models) so that the maximum position index matches the context-window limit from pre-training.
It simply adds two lines of code.[18]

class ScaledRotaryEmbedding(torch.nn.Module):
def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
super().__init__()
inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
self.register_buffer("inv_freq", inv_freq)
max_position_embeddings = 8192
# Build here to make `torch.jit.trace` work.
self.max_seq_len_cached = max_position_embeddings
t = torch.arange(
self.max_seq_len_cached,
device=self.inv_freq.device,
dtype=self.inv_freq.dtype,
)
# These two lines:
self.scale = 1 / 4
t *= self.scale
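        # ... (the remainder of the standard rotary cos/sin cache construction is unchanged)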
SuperHOT-13B[17] was uptrained with a scaling factor of 0.25 and compared against base LLaMA-13B as well as a test LoRA trained on 6K sequence length with no scaling.
Note that PI requires further fine-tuning to take effect for length extrapolation.
NTK-Aware Scaled RoPE
Background
- “Simply interpolating the RoPE’s fourier space “linearly” is very sub-optimal, as it prevents the network to distinguish the order and positions of tokens that are very close by.”[19]
- “Scaling down the fourier features too much will eventually even prevent succesful finetunes (this is corroborated by the recent paper by Meta[15] that suggests an upper bound of ~600x)”[19]
NTK-Aware Scaled RoPE[19][20] designs a nonlinear interpolation scheme based on Neural Tangent Kernel (NTK) theory. It changes the base of RoPE rather than the scale, which intuitively changes the “spinning” speed at which each of RoPE’s dimensions rotates from one position to the next.
Implementation[21]:

import transformers
old_init = transformers.models.llama.modeling_llama.LlamaRotaryEmbedding.__init__
def ntk_scaled_init(self, dim, max_position_embeddings=2048, base=10000, device=None):
#The method is just these three lines
max_position_embeddings = 16384
a = 8 # Alpha value
base = base * a ** (dim / (dim-2)) #Base change formula
old_init(self, dim, max_position_embeddings, base, device)
# apply the NTK-scaled init patch
transformers.models.llama.modeling_llama.LlamaRotaryEmbedding.__init__ = ntk_scaled_init
(Figure: average perplexity of LLaMA-7B on a set of 40 very long prompts, 12k+ context size.)
Dynamic Linear RoPE
Dynamic linear RoPE sets the scale to max_seq_len / current_sequence_length once the sequence grows past the training length, so the interpolation strength increases gradually with the sequence length instead of being fixed.
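A minimal sketch of dynamic linear scaling (class and argument names are illustrative):

```python
import torch

class DynamicLinearRotary(torch.nn.Module):
    # linear position interpolation whose scale is recomputed from the current
    # sequence length instead of being fixed ahead of time
    def __init__(self, dim, max_trained_len=2048, base=10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        self.max_trained_len = max_trained_len

    def forward(self, seq_len):
        t = torch.arange(seq_len, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        if seq_len > self.max_trained_len:
            # compress positions back into the trained range:
            # scale = max_trained_len / current sequence length
            t = t * (self.max_trained_len / seq_len)
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos(), emb.sin()
```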
Dynamic NTK-Aware Scaled RoPE
Cons of the static schemes:
- Compared to dynamic linear scaling, static NTK-aware scaling has higher perplexity on shorter sequences, but better perplexity at the tail end of the sequence lengths.
- Like regular RoPE and static linear scaling, static NTK-aware RoPE eventually suffers a catastrophic perplexity blow-up.
[22] introduces dynamic NTK-aware scaling: instead of fixing $\alpha$, it is recomputed from the current sequence length relative to the training length.
This dynamically grows $\alpha$ as the sequence length increases, so short sequences are left nearly untouched while long sequences get progressively stronger scaling.
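A sketch of recomputing the RoPE base from the current length; the exact update rule in [22] may differ, and `alpha` / `max_trained_len` here are assumed parameters:

```python
import torch

def dynamic_ntk_inv_freq(dim, seq_len, max_trained_len=2048, base=10000.0, alpha=2.0):
    # grow the RoPE base once the sequence exceeds the training length;
    # the effective alpha increases with seq_len, so no single fixed value is needed
    if seq_len > max_trained_len:
        dyn_alpha = alpha * seq_len / max_trained_len - (alpha - 1)
        base = base * dyn_alpha ** (dim / (dim - 2))   # NTK base-change formula
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
```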
Partial NTK Scaled RoPE
Combines vanilla RoPE, linear interpolation, and NTK-aware scaling, applying a different treatment to different RoPE frequency bands.[23]
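A sketch of the idea, blending per dimension according to how many rotations that frequency completes over the trained context; the thresholds and scale below are assumed values, not from [23]:

```python
import math
import torch

def partial_ntk_inv_freq(dim, max_trained_len=2048, base=10000.0,
                         scale=4.0, low=1.0, high=32.0):
    # - high-frequency dims (many full rotations over the trained context): keep plain RoPE
    # - low-frequency dims (few rotations): linear interpolation (divide the frequency by scale)
    # - in between: blend the two
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    rotations = max_trained_len * inv_freq / (2 * math.pi)
    ramp = ((rotations - low) / (high - low)).clamp(0.0, 1.0)  # 0 = interpolate, 1 = keep
    return inv_freq * ramp + (inv_freq / scale) * (1.0 - ramp)
```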
$\beta$-ary RoPE
ReRoPE
References
- 1.Vaswani, Ashish, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. “Attention is All you Need.” NeurIPS (2017). ↩
- 2.Press, Ofir, Noah A. Smith and Mike Lewis. “Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation.” ICLR 2022. ↩
- 3.Su, Jianlin, Yu Lu, Shengfeng Pan, Bo Wen and Yunfeng Liu. “RoFormer: Enhanced Transformer with Rotary Position Embedding.” ArXiv abs/2104.09864 (2021). ↩
- 4.Rotary Embeddings: A Relative Revolution ↩
- 5.RoFormer blog (Chinese)- Transformer升级之路:2、博采众长的旋转式位置编码 ↩
- 6.Blog- Bias项的神奇作用:RoPE + Bias = 更好的长度外推性 ↩
- 7.Raffel, C., Shazeer, N.M., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P.J. (2020). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR. ↩
- 8.Press, O., Smith, N.A., & Lewis, M. (2022). Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR. ↩
- 9.Chi, T., Fan, T., Ramadge, P.J., & Rudnicky, A.I. (2022). KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation. NeurIPS. ↩
- 10.Sun, Y., Dong, L., Patra, B., Ma, S., Huang, S., Benhaim, A., Chaudhary, V., Song, X., & Wei, F. (2022). A Length-Extrapolatable Transformer. ArXiv, abs/2212.10554. ↩
- 11.Chi, T., Fan, T., Rudnicky, A., & Ramadge, P.J. (2022). Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis. ↩
- 12.Haviv, Adi et al. “Transformer Language Models without Positional Encodings Still Learn Positional Information.” Conference on Empirical Methods in Natural Language Processing (2022). ↩
- 13.Ruoss, A., Delétang, G., Genewein, T., Grau-Moya, J., Csordás, R., Abbana Bennani, M., Legg, S., & Veness, J. (2023). Randomized Positional Encodings Boost Length Generalization of Transformers. ACL. ↩
- 14.Blog: Transformer升级之路:8、长度外推性与位置鲁棒性 ↩
- 15.Chen, Shouyuan, Sherman Wong, Liangjian Chen and Yuandong Tian. Extending Context Window of Large Language Models via Positional Interpolation. ArXiv abs/2306.15595 (2023). ↩
- 16.Github discussion: Position Interpolation ↩
- 17.Reddit: A simple way to "Extending Context to 8K" ↩
- 18.Things I’m Learning While Training SuperHOT ↩
- 19.NTK-Aware Scaled RoPE ↩
- 20.RoPE is a β-ary encoding (Chinese) ↩
- 21.NTK-aware RoPE colab ↩
- 22.Dynamic NTK-aware RoPE ↩
- 23.GitHub: Dynamic RoPE. ↩
- 24.β-ary RoPE (in Chinese) ↩
- 25.ReRoPE (in Chinese) ↩