End-to-End Active Speaker Detection

Publication: Contributions to collected editions › Conference proceedings article › Research › peer-reviewed


  • Juan León Alcázar
  • Moritz Cordes
  • Chen Zhao
  • Bernard Ghanem

Recent advances in the Active Speaker Detection (ASD) problem build upon a two-stage process: feature extraction and spatio-temporal context aggregation. In this paper, we propose an end-to-end ASD workflow where feature learning and contextual predictions are jointly learned. Our end-to-end trainable network simultaneously learns multi-modal embeddings and aggregates spatio-temporal context. This results in more suitable feature representations and improved performance in the ASD task. We also introduce interleaved graph neural network (iGNN) blocks, which split the message passing according to the main sources of context in the ASD problem. Experiments show that the aggregated features from the iGNN blocks are more suitable for ASD, resulting in state-of-the-art performance. Finally, we design a weakly-supervised strategy, which demonstrates that the ASD problem can also be approached by utilizing audiovisual data but relying exclusively on audio annotations. We achieve this by modelling the direct relationship between the audio signal and the possible sound sources (speakers), as well as introducing a contrastive loss.
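The "interleaved" message passing described in the abstract — splitting aggregation between temporal context (the same speaker across time) and cross-speaker context (all visible speakers at the same timestep) — can be illustrated with a toy sketch. This is a minimal illustrative example in NumPy, not the paper's actual architecture: the shapes, the mean-aggregation rule, and the function names (`temporal_pass`, `cross_speaker_pass`) are all assumptions made for illustration only.

```python
import numpy as np

# Toy sketch of interleaved message passing over per-speaker,
# per-timestep embeddings of shape (speakers, timesteps, feat_dim).
# Illustrative only; the real iGNN blocks use learned GNN layers.
S, T, D = 2, 3, 4  # speakers, timesteps, feature dimension
rng = np.random.default_rng(0)
x = rng.normal(size=(S, T, D))

def temporal_pass(x):
    # Average each node with its temporal neighbours
    # (previous and next timestep of the same speaker).
    out = x.copy()
    out[:, 1:] += x[:, :-1]   # add previous timestep
    out[:, :-1] += x[:, 1:]   # add next timestep
    counts = np.full(x.shape[1], 3.0)
    counts[0] = counts[-1] = 2.0  # boundary nodes have one neighbour
    return out / counts[None, :, None]

def cross_speaker_pass(x):
    # Average each node with all speakers at the same timestep.
    return 0.5 * (x + x.mean(axis=0, keepdims=True))

# Interleave the two context sources instead of mixing them in one step.
h = cross_speaker_pass(temporal_pass(x))
```

The interleaving keeps the two context sources separate: a temporal step only propagates information along each speaker's own track, and a cross-speaker step only mixes information within a single timestep, rather than collapsing both into one undifferentiated neighbourhood.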

Title: Computer Vision – ECCV 2022 - 17th European Conference, Proceedings
Editors: Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, Tal Hassner
Number of pages: 18
Publisher: Springer Science and Business Media Deutschland GmbH
ISBN (print): 978-3-031-19835-9
ISBN (electronic): 978-3-031-19836-6
Publication status: Published - 2022
Event: Conference - 17th European Conference on Computer Vision - ECCV 2022 - Expo Tel Aviv / David Intercontinental Hotel, Tel Aviv, Israel
Duration: 23.10.2022 – 27.10.2022
Conference number: 17

Bibliographical note

Funding Information:
Acknowledgements. This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding.

Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.