Conference paper, 2022

Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning

Abstract

Contrastive learning enables learning useful audio and speech representations without ground-truth labels by maximizing the similarity between latent representations of similar signal segments. In this framework, various data augmentation techniques are usually exploited to help enforce desired invariances within the learned representations, improving performance on a range of audio tasks thanks to more robust embeddings. Selecting the most relevant augmentations has proven crucial for better downstream performance. Thus, this work introduces a conditional independence-based method that automatically selects a suitable distribution over a set of predefined augmentations and their parametrization for contrastive self-supervised pre-training. This selection is performed with respect to a downstream task of interest, hence avoiding a costly hyper-parameter search. Experiments on two different downstream tasks validate the proposed approach, showing better results than pre-training without augmentation or with baseline augmentations. We furthermore conduct a qualitative analysis of the automatically selected augmentations and how they vary with the considered final downstream dataset.
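The paper's selection criterion rests on conditional independence estimation between augmented data and downstream labels; the full procedure is in the PDF. As a rough, hypothetical sketch only (not the authors' exact method), the snippet below scores each candidate augmentation with a kernel-based HSIC dependence estimate between features of augmented signals and one-hot downstream labels, then turns the scores into a sampling distribution over augmentations. The augmentation_score helper, the RBF bandwidth, the softmax weighting, and the sign convention (whether higher or lower dependence is preferable, and the exact conditioning) are illustrative assumptions.

import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF (Gaussian) kernel Gram matrix for the rows of X (shape: n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased HSIC estimator computed from two n x n Gram matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def augmentation_score(aug_features, labels_onehot):
    """Kernel dependence between features of augmented signals and
    downstream labels. Used here as a proxy for how task-relevant
    the invariances induced by an augmentation are (assumption)."""
    return hsic(rbf_gram(aug_features), rbf_gram(labels_onehot))

# Hypothetical usage: score each candidate augmentation on a small
# labelled downstream sample, then softmax the scores into a sampling
# distribution for contrastive pre-training (random features stand in
# for real embeddings of augmented speech).
rng = np.random.default_rng(0)
candidates = {name: rng.normal(size=(64, 16)) for name in ("noise", "pitch", "reverb")}
labels = np.eye(4)[rng.integers(0, 4, size=64)]  # one-hot downstream labels
scores = np.array([augmentation_score(f, labels) for f in candidates.values()])
probs = np.exp(scores) / np.exp(scores).sum()

In practice, the features above would come from a frozen feature extractor applied to augmented downstream samples, and the resulting distribution would drive which augmentation (and parametrization) is drawn at each pre-training step.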

Dates and versions

hal-03817736, version 1 (18-10-2022)


Cite

Salah Zaiem, Titouan Parcollet, Slim Essid. Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning. Interspeech 2022, Sep 2022, Incheon, South Korea. pp.669-673, ⟨10.21437/interspeech.2022-10191⟩. ⟨hal-03817736⟩