End-to-End Active Speaker Detection

Publication: Contributions to edited volumes › Articles in conference proceedings › Research › peer-reviewed

Standard

End-to-End Active Speaker Detection. / Alcázar, Juan León; Cordes, Moritz; Zhao, Chen et al.

Computer Vision – ECCV 2022 - 17th European Conference, Proceedings. Ed. / Shai Avidan; Gabriel Brostow; Moustapha Cissé; Giovanni Maria Farinella; Tal Hassner. Springer Science and Business Media Deutschland GmbH, 2022. pp. 126-143 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13697 LNCS).


Harvard

Alcázar, JL, Cordes, M, Zhao, C & Ghanem, B 2022, End-to-End Active Speaker Detection. in S Avidan, G Brostow, M Cissé, GM Farinella & T Hassner (eds), Computer Vision – ECCV 2022 - 17th European Conference, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13697 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 126-143, Conference - 17th European Conference on Computer Vision - ECCV 2022, Tel Aviv, Israel, 23.10.22. https://doi.org/10.48550/arXiv.2203.14250, https://doi.org/10.1007/978-3-031-19836-6_8

APA

Alcázar, J. L., Cordes, M., Zhao, C., & Ghanem, B. (2022). End-to-End Active Speaker Detection. In S. Avidan, G. Brostow, M. Cissé, G. M. Farinella, & T. Hassner (Eds.), Computer Vision – ECCV 2022 - 17th European Conference, Proceedings (pp. 126-143). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13697 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.48550/arXiv.2203.14250, https://doi.org/10.1007/978-3-031-19836-6_8

Vancouver

Alcázar JL, Cordes M, Zhao C, Ghanem B. End-to-End Active Speaker Detection. In: Avidan S, Brostow G, Cissé M, Farinella GM, Hassner T, editors. Computer Vision – ECCV 2022 - 17th European Conference, Proceedings. Springer Science and Business Media Deutschland GmbH. 2022. p. 126-143. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.48550/arXiv.2203.14250, 10.1007/978-3-031-19836-6_8

Bibtex

@inbook{f53b33da89fc48f8bf13f3676febd593,
title = "End-to-End Active Speaker Detection",
abstract = "Recent advances in the Active Speaker Detection (ASD) problem build upon a two-stage process: feature extraction and spatio-temporal context aggregation. In this paper, we propose an end-to-end ASD workflow where feature learning and contextual predictions are jointly learned. Our end-to-end trainable network simultaneously learns multi-modal embeddings and aggregates spatio-temporal context. This results in more suitable feature representations and improved performance in the ASD task. We also introduce interleaved graph neural network (iGNN) blocks, which split the message passing according to the main sources of context in the ASD problem. Experiments show that the aggregated features from the iGNN blocks are more suitable for ASD, resulting in state-of-the-art performance. Finally, we design a weakly-supervised strategy, which demonstrates that the ASD problem can also be approached by utilizing audiovisual data but relying exclusively on audio annotations. We achieve this by modelling the direct relationship between the audio signal and the possible sound sources (speakers), as well as introducing a contrastive loss.",
keywords = "Informatics, Business informatics",
author = "Alc{\'a}zar, {Juan Le{\'o}n} and Moritz Cordes and Chen Zhao and Bernard Ghanem",
note = "Publisher Copyright: {\textcopyright} 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.; Conference - 17th European Conference on Computer Vision - ECCV 2022, ECCV 2022 ; Conference date: 23-10-2022 Through 27-10-2022",
year = "2022",
doi = "10.48550/arXiv.2203.14250",
language = "English",
isbn = "978-3-031-19835-9",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "126--143",
editor = "Shai Avidan and Gabriel Brostow and Moustapha Ciss{\'e} and Farinella, {Giovanni Maria} and Tal Hassner",
booktitle = "Computer Vision – ECCV 2022 - 17th European Conference, Proceedings",
address = "Germany",
url = "https://eccv2022.ecva.net/",

}

RIS

TY - CHAP

T1 - End-to-End Active Speaker Detection

AU - Alcázar, Juan León

AU - Cordes, Moritz

AU - Zhao, Chen

AU - Ghanem, Bernard

N1 - Conference code: 17

PY - 2022

Y1 - 2022

N2 - Recent advances in the Active Speaker Detection (ASD) problem build upon a two-stage process: feature extraction and spatio-temporal context aggregation. In this paper, we propose an end-to-end ASD workflow where feature learning and contextual predictions are jointly learned. Our end-to-end trainable network simultaneously learns multi-modal embeddings and aggregates spatio-temporal context. This results in more suitable feature representations and improved performance in the ASD task. We also introduce interleaved graph neural network (iGNN) blocks, which split the message passing according to the main sources of context in the ASD problem. Experiments show that the aggregated features from the iGNN blocks are more suitable for ASD, resulting in state-of-the-art performance. Finally, we design a weakly-supervised strategy, which demonstrates that the ASD problem can also be approached by utilizing audiovisual data but relying exclusively on audio annotations. We achieve this by modelling the direct relationship between the audio signal and the possible sound sources (speakers), as well as introducing a contrastive loss.

AB - Recent advances in the Active Speaker Detection (ASD) problem build upon a two-stage process: feature extraction and spatio-temporal context aggregation. In this paper, we propose an end-to-end ASD workflow where feature learning and contextual predictions are jointly learned. Our end-to-end trainable network simultaneously learns multi-modal embeddings and aggregates spatio-temporal context. This results in more suitable feature representations and improved performance in the ASD task. We also introduce interleaved graph neural network (iGNN) blocks, which split the message passing according to the main sources of context in the ASD problem. Experiments show that the aggregated features from the iGNN blocks are more suitable for ASD, resulting in state-of-the-art performance. Finally, we design a weakly-supervised strategy, which demonstrates that the ASD problem can also be approached by utilizing audiovisual data but relying exclusively on audio annotations. We achieve this by modelling the direct relationship between the audio signal and the possible sound sources (speakers), as well as introducing a contrastive loss.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85142706504&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/e6496c8f-b57c-3961-9c1b-0128427ddd58/

U2 - 10.48550/arXiv.2203.14250

DO - 10.48550/arXiv.2203.14250

M3 - Conference contribution

AN - SCOPUS:85142706504

SN - 978-3-031-19835-9

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 126

EP - 143

BT - Computer Vision – ECCV 2022 - 17th European Conference, Proceedings

A2 - Avidan, Shai

A2 - Brostow, Gabriel

A2 - Cissé, Moustapha

A2 - Farinella, Giovanni Maria

A2 - Hassner, Tal

PB - Springer Science and Business Media Deutschland GmbH

T2 - Conference - 17th European Conference on Computer Vision - ECCV 2022

Y2 - 23 October 2022 through 27 October 2022

ER -
