Conference paper, 2021

Relative Positional Encoding for Transformers with Linear Complexity

Abstract

Recent advances in Transformer models allow for unprecedented sequence lengths thanks to linear space and time complexity. Meanwhile, relative positional encoding (RPE) was proposed as beneficial for classical Transformers; it consists of exploiting lags instead of absolute positions for inference. Still, RPE is not available for the recent linear variants of the Transformer, because it requires explicitly computing the attention matrix, which is precisely what such methods avoid. In this paper, we bridge this gap and present Stochastic Positional Encoding (SPE), a way to generate positional encodings that can be used as a replacement for the classical additive (sinusoidal) PE and that provably behaves like RPE. The main theoretical contribution is a connection between positional encoding and the cross-covariance structure of correlated Gaussian processes. We illustrate the performance of our approach on the Long-Range Arena benchmark and on music generation.
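To make the cross-covariance idea mentioned in the abstract more concrete, here is a minimal NumPy sketch. It is not the paper's implementation: the exponentially decaying lag kernel, the variable names (qbar, kbar), and the Monte Carlo check are illustrative assumptions. The point it demonstrates is that filtering shared white noise with a target kernel on the "query" side while leaving it unfiltered on the "key" side yields two correlated Gaussian sequences whose expected product depends only on the lag m - n, which is the property that lets a positional encoding behave like an RPE without ever forming the attention matrix.

```python
import numpy as np

# Minimal sketch (toy assumptions, not the authors' code): build correlated
# Gaussian position codes whose expected dot product depends only on the lag.

rng = np.random.default_rng(0)
length = 128              # sequence length
num_realizations = 4096   # Monte Carlo draws used to check the expectation
max_lag = 63

# Toy target RPE kernel: an attention bias that decays with |m - n|.
lags = np.arange(-max_lag, max_lag + 1)
target = np.exp(-np.abs(lags) / 16.0)

# Shared white noise; filter it with `target` for the "query" codes and keep
# it unfiltered (a delta filter) for the "key" codes, so their
# cross-covariance equals `target` evaluated at the lag m - n.
noise = rng.standard_normal((num_realizations, length + 2 * max_lag))
qbar = np.stack([np.convolve(z, target, mode="valid") for z in noise])
kbar = noise[:, max_lag:max_lag + length]

# Empirical cross-covariance: entry (m, n) should approach
# target[(m - n) + max_lag] whenever |m - n| <= max_lag.
empirical = qbar.T @ kbar / num_realizations
m, n = 100, 90
print(empirical[m, n], target[(m - n) + max_lag])  # close up to Monte Carlo error
```

In this toy setting the printed values agree up to Monte Carlo error, illustrating how stochastic position codes can reproduce a lag-dependent bias in expectation while each attention step only ever touches the per-position codes.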
Main file: spe.pdf (7.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03256451, version 1 (10-06-2021)


Cite

Antoine Liutkus, Ondřej Cífka, Shih-Lun Wu, Umut Şimşekli, Yi-Hsuan Yang, et al. Relative Positional Encoding for Transformers with Linear Complexity. ICML 2021 - 38th International Conference on Machine Learning, Jul 2021, Virtual Only, United States. pp. 7067-7079. ⟨hal-03256451⟩