GERBIL - General entity annotator benchmarking framework
Publication: Contributions to edited volumes › Conference proceedings papers › Research › peer-reviewed
Standard
WWW 2015 - Proceedings of the 24th International Conference on World Wide Web. Ed. / Aldo Gangemi; Stefano Leonardi; Alessandro Panconesi. Association for Computing Machinery, Inc, 2015. pp. 1133-1143 (WWW 2015 - Proceedings of the 24th International Conference on World Wide Web).
RIS
TY - CHAP
T1 - GERBIL - General entity annotator benchmarking framework
AU - Usbeck, Ricardo
AU - Röder, Michael
AU - Ngomo, Axel Cyrille Ngonga
AU - Baron, Ciro
AU - Both, Andreas
AU - Brümmer, Martin
AU - Ceccarelli, Diego
AU - Cornolti, Marco
AU - Cherix, Didier
AU - Eickmann, Bernd
AU - Ferragina, Paolo
AU - Lemke, Christiane
AU - Moro, Andrea
AU - Navigli, Roberto
AU - Piccinno, Francesco
AU - Rizzo, Giuseppe
AU - Sack, Harald
AU - Speck, René
AU - Troncy, Raphaël
AU - Waitelonis, Jörg
AU - Wesemann, Lars
PY - 2015/5/18
Y1 - 2015/5/18
N2 - The need to bridge between the unstructured data on the Document Web and the structured data on the Web of Data has led to the development of a considerable number of annotation tools. However, these tools are currently still hard to compare since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. We present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights pertaining to the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers so as to allow them to easily discover the strengths and weaknesses of their implementations with respect to the state of the art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in machine-processable format, allowing for the efficient querying and post-processing of evaluation results. Finally, the tool diagnostics provided by GERBIL allow deriving insights pertaining to the areas in which tools should be further refined, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. GERBIL aims to become a focal point for the state of the art, driving the research agenda of the community by presenting comparable objective evaluation results.
AB - The need to bridge between the unstructured data on the Document Web and the structured data on the Web of Data has led to the development of a considerable number of annotation tools. However, these tools are currently still hard to compare since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. We present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights pertaining to the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers so as to allow them to easily discover the strengths and weaknesses of their implementations with respect to the state of the art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in machine-processable format, allowing for the efficient querying and post-processing of evaluation results. Finally, the tool diagnostics provided by GERBIL allow deriving insights pertaining to the areas in which tools should be further refined, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. GERBIL aims to become a focal point for the state of the art, driving the research agenda of the community by presenting comparable objective evaluation results.
KW - Archivability
KW - Benchmarking Framework
KW - Reusability
KW - Semantic Entity Annotation System
KW - Informatics
KW - Business informatics
UR - http://www.scopus.com/inward/record.url?scp=85018193235&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/d391ff27-3f0a-38ee-8fef-67b407a1719a/
U2 - 10.1145/2736277.2741626
DO - 10.1145/2736277.2741626
M3 - Article in conference proceedings
AN - SCOPUS:85018193235
SN - 978-1-4503-3469-3
T3 - WWW 2015 - Proceedings of the 24th International Conference on World Wide Web
SP - 1133
EP - 1143
BT - WWW 2015 - Proceedings of the 24th International Conference on World Wide Web
A2 - Gangemi, Aldo
A2 - Leonardi, Stefano
A2 - Panconesi, Alessandro
PB - Association for Computing Machinery, Inc
T2 - 24th International Conference on World Wide Web, WWW 2015
Y2 - 18 May 2015 through 22 May 2015
ER -