Surveying the FAIRness of Annotation Tools: Difficult to find, difficult to reuse

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Authors

  • Ekaterina Borisova
  • Raia Abu Ahmad
  • Leyla Jael Garcia-Castro
  • Ricardo Usbeck
  • Georg Rehm

Abstract

In the realm of Machine Learning and Deep Learning, high-quality annotated data is needed to train and evaluate supervised models. A large number of annotation tools have been developed to facilitate the data labelling process. However, finding the right tool is a demanding task involving thorough searching and testing. Hence, to effectively navigate the multitude of tools, it becomes essential to ensure their findability, accessibility, interoperability, and reusability (FAIR). This survey addresses the FAIRness of existing annotation software by evaluating 50 different tools against the FAIR principles for research software (FAIR4RS). The study indicates that, while annotation tools are generally accessible and interoperable, they are difficult to find and reuse. In addition, there is a need to establish community standards for annotation software development, documentation, and distribution.
Original language: English
Title of host publication: LAW 2024 - 18th Linguistic Annotation Workshop, Co-located with EACL 2024 - Proceedings of the Workshop
Editors: Sophie Henning, Manfred Stede
Number of pages: 17
Place of publication: Stroudsburg
Publisher: Association for Computational Linguistics (ACL)
Publication date: 01.03.2024
Pages: 29-45
ISBN (electronic): 979-8-89176-073-8
Publication status: Published - 01.03.2024
Event18th Linguistic Annotation Workshop - St. Julians, Malta
Duration: 21.03.2024 - 22.03.2024
Conference number: 18
https://www.aclweb.org/portal/content/first-call-papers-18th-linguistic-annotation-workshop

Bibliographical note

Publisher Copyright:
© 2024 Association for Computational Linguistics.