Hands in Focus: Sign Language Recognition Via Top-Down Attention

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Authors

  • Noha Sarhan
  • Christian Wilms
  • Vanessa Closius
  • Ulf Brefeld
  • Simone Frintrop

In this paper, we propose a novel Sign Language Recognition (SLR) model that leverages task-specific knowledge to incorporate Top-Down (TD) attention, focusing the network's processing on the most relevant parts of the input video sequence. For SLR, this includes information about the hands' shape, orientation, and position, as well as their motion trajectory. Our model consists of three streams that process RGB, optical flow, and TD attention data. For the TD attention, we generate pixel-precise attention maps focusing on both hands, thereby retaining valuable hand information while eliminating distracting background information. Our proposed method outperforms the state of the art on a challenging large-scale dataset by over 2%, and achieves strong results with a much simpler architecture than other systems on the newly released AUTSL dataset [1].
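To make the three-stream idea concrete, the sketch below shows one way such a design could be wired up in PyTorch. This is an illustration only, not the authors' code: the tiny 3D-CNN backbone, fusion by feature concatenation, and masking the RGB input with the hand-attention map to form the TD stream are all assumptions for the example (AUTSL does have 226 sign classes, used here as the output size).

```python
import torch
import torch.nn as nn


class ThreeStreamSLR(nn.Module):
    """Illustrative three-stream model: RGB, optical flow, and a TD
    attention stream fed with attention-masked RGB frames."""

    def __init__(self, num_classes: int = 226):  # 226 sign classes in AUTSL
        super().__init__()

        def backbone(in_ch: int) -> nn.Sequential:
            # Tiny stand-in for a real video backbone (e.g., a 3D CNN).
            return nn.Sequential(
                nn.Conv3d(in_ch, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),
            )

        self.rgb_stream = backbone(3)   # raw RGB frames
        self.flow_stream = backbone(2)  # optical flow (x/y components)
        self.td_stream = backbone(3)    # attention-masked RGB frames
        self.classifier = nn.Linear(3 * 32, num_classes)

    def forward(self, rgb, flow, attention_map):
        # attention_map: (B, 1, T, H, W) in [0, 1], highlighting both hands;
        # masking suppresses distracting background before the TD stream.
        td_input = rgb * attention_map
        feats = torch.cat(
            [self.rgb_stream(rgb), self.flow_stream(flow), self.td_stream(td_input)],
            dim=1,
        )
        return self.classifier(feats)


# Usage: a batch of 2 clips, 16 frames each, at 112x112 resolution.
rgb = torch.randn(2, 3, 16, 112, 112)
flow = torch.randn(2, 2, 16, 112, 112)
attn = torch.rand(2, 1, 16, 112, 112)
logits = ThreeStreamSLR()(rgb, flow, attn)
print(logits.shape)  # torch.Size([2, 226])
```

How the attention maps are generated, and how the streams are fused and weighted, is specific to the paper; the sketch only conveys the overall structure described in the abstract.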

Original language: English
Title of host publication: 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
Number of pages: 5
Place of publication: Piscataway
Publisher: IEEE
Publication date: 08.10.2023
Pages: 2555-2559
ISBN (print): 978-1-7281-9836-1
ISBN (electronic): 978-1-7281-9835-4
Publication status: Published - 08.10.2023
Event: 2023 IEEE International Conference on Image Processing - Kuala Lumpur Convention Centre, Kuala Lumpur, Malaysia
Duration: 08.10.2023 - 11.10.2023
Conference number: 30
https://2023.ieeeicip.org/

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Research areas

  • Informatics - sign language recognition, top-down attention, deep learning