Show simple item record

dc.contributor.author: Guy, Sylvain
dc.contributor.author: Lathuilière, Stéphane
dc.contributor.author: Mesejo Santiago, Pablo
dc.contributor.author: Horaud, Radu
dc.identifier.citation: Published version: S. Guy... [et al.]. "Learning Visual Voice Activity Detection with an Automatically Annotated Dataset," 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 4851-4856, doi: 10.1109/ICPR48806.2021.9412884
dc.description: This work has been funded by the EU H2020 project #871245 SPRING and by the Multidisciplinary Institute in Artificial Intelligence (MIAI) #ANR-19-P3IA-0003.
dc.description.abstract: Visual voice activity detection (V-VAD) uses visual features to predict whether a person is speaking or not. V-VAD is useful whenever audio VAD (A-VAD) is inefficient, either because the acoustic signal is difficult to analyze or because it is simply missing. We propose two deep architectures for V-VAD, one based on facial landmarks and one based on optical flow. Moreover, available datasets, used for learning and for testing V-VAD, lack content variability. We introduce a novel methodology to automatically create and annotate very large datasets in-the-wild – WildVVAD – based on combining A-VAD with face detection and tracking. A thorough empirical evaluation shows the advantage of training the proposed deep V-VAD models with this dataset.
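The abstract describes an automatic annotation methodology that combines audio VAD with face detection and tracking. A minimal sketch of that idea, assuming hypothetical inputs (face tracks as time intervals with a face count, and A-VAD output as voiced intervals) and an illustrative overlap threshold — not the paper's actual pipeline:

```python
def overlap(a, b):
    """Length of the temporal overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def annotate_tracks(face_tracks, avad_segments, min_ratio=0.8):
    """Hypothetical auto-labeling rule: a face track is labeled 'speaking'
    if it is the only face on screen and audio VAD is active for at least
    `min_ratio` of its duration, 'silent' if VAD is inactive throughout;
    ambiguous tracks are discarded. All names and thresholds are
    illustrative assumptions, not the published method."""
    labels = {}
    for track_id, (start, end, n_faces) in face_tracks.items():
        duration = end - start
        voiced = sum(overlap((start, end), seg) for seg in avad_segments)
        if n_faces == 1 and voiced / duration >= min_ratio:
            labels[track_id] = "speaking"
        elif voiced == 0.0:
            labels[track_id] = "silent"
    return labels

# Example: one track fully covered by a voiced segment, one with no speech.
tracks = {"a": (0.0, 2.0, 1), "b": (5.0, 7.0, 1)}
segments = [(0.0, 2.0)]
print(annotate_tracks(tracks, segments))  # {'a': 'speaking', 'b': 'silent'}
```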
dc.description.sponsorship: European Commission 871245 SPRING
dc.description.sponsorship: Multidisciplinary Institute in Artificial Intelligence (MIAI) ANR-19-P3IA-0003
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Spain
dc.title: Learning Visual Voice Activity Detection with an Automatically Annotated Dataset


Attribution-NonCommercial-NoDerivs 3.0 Spain
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Spain