Conference paper, 2021

Learning Visual Voice Activity Detection with an Automatically Annotated Dataset

Abstract

Visual voice activity detection (V-VAD) uses visual features to predict whether a person is speaking or not. V-VAD is useful whenever audio VAD (A-VAD) is inefficient, either because the acoustic signal is difficult to analyze or because it is simply missing. We propose two deep architectures for V-VAD, one based on facial landmarks and one based on optical flow. Moreover, the datasets currently available for training and testing V-VAD lack content variability. We introduce a novel methodology to automatically create and annotate very large in-the-wild datasets by combining A-VAD with face detection. A thorough empirical evaluation shows the advantage of training the proposed deep V-VAD models on such a dataset.
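The automatic annotation idea summarized in the abstract can be sketched in a few lines: run an off-the-shelf A-VAD on the audio track, detect faces in the co-occurring video frames, and label a frame as "speaking" only when both cues agree. The sketch below is illustrative, not the authors' pipeline: it uses webrtcvad and OpenCV's Haar cascade as stand-in detectors, and the single-face assumption, file names, and thresholds are all assumptions made for the example.

```python
# Illustrative sketch of A-VAD + face detection for automatic labeling.
# NOT the paper's actual pipeline; detectors and thresholds are stand-ins.
import wave
import cv2
import webrtcvad

def audio_speech_mask(wav_path, frame_ms=30, aggressiveness=3):
    """Return one boolean per `frame_ms` audio frame, True where the
    A-VAD detects speech. Assumes 16-bit mono PCM at 8/16/32/48 kHz."""
    vad = webrtcvad.Vad(aggressiveness)
    with wave.open(wav_path, "rb") as wf:
        rate = wf.getframerate()
        n_samples = int(rate * frame_ms / 1000)   # samples per VAD frame
        mask = []
        while True:
            buf = wf.readframes(n_samples)        # 2 bytes per sample
            if len(buf) < n_samples * 2:
                break
            mask.append(vad.is_speech(buf, rate))
    return mask, frame_ms

def label_video(video_path, speech_mask, frame_ms):
    """Label each video frame 'speaking' when exactly one face is
    visible and the co-occurring audio frame contains speech."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fallback if FPS unknown
    labels, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Map the video frame's timestamp onto the audio VAD grid.
        t_ms = 1000.0 * idx / fps
        a_idx = min(int(t_ms // frame_ms), len(speech_mask) - 1)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        speaking = len(faces) == 1 and speech_mask[a_idx]
        labels.append("speaking" if speaking else "not-speaking")
        idx += 1
    cap.release()
    return labels

# Hypothetical file names, for illustration only.
mask, frame_ms = audio_speech_mask("clip.wav")
print(label_video("clip.mp4", mask, frame_ms))
```

Requiring exactly one visible face to co-occur with detected speech trades recall for label precision, which is presumably the point of combining the two cues: frames kept by both detectors can be labeled with high confidence and no manual annotation.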
Main file: GUY_ICPR2020_sub.pdf (6.38 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02882229, version 1 (26-06-2020)
hal-02882229, version 2 (23-09-2020)
hal-02882229, version 3 (16-10-2020)
hal-02882229, version 4 (16-10-2020)

Identifiers

  • HAL Id: hal-02882229, version 2

Cite

Sylvain Guy, Stéphane Lathuilière, Pablo Mesejo, Radu Horaud. Learning Visual Voice Activity Detection with an Automatically Annotated Dataset. International Conference on Pattern Recognition, Jan 2021, Milano, Italy. ⟨hal-02882229v2⟩