Proxy Indicators for the Quality of Open-domain Dialogues

Publication: Contributions to edited volumes › Conference proceedings paper › Research › peer-reviewed


The automatic evaluation of open-domain dialogues remains a largely unsolved challenge. Despite the abundance of work in the field, dialogue quality still has to be assessed by human judges, which makes evaluation at scale expensive. This work investigates using deep-learning models trained on the General Language Understanding Evaluation (GLUE) benchmark as quality indicators for open-domain dialogues. The aim is to use the various GLUE tasks as different perspectives on judging the quality of a conversation, thereby reducing the need for additional training data or reference responses. By design, the method can infer various quality metrics and derive a component-based overall score. We achieve statistically significant correlation coefficients of up to 0.7.
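A component-based overall score of the kind described above can be sketched as a weighted combination of per-task proxy scores. The task names, score values, and equal default weights below are purely illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical sketch: combining per-GLUE-task proxy scores into one
# overall dialogue-quality score. Task names and weights are
# illustrative assumptions, not the paper's actual setup.

def overall_score(task_scores, weights=None):
    """Weighted average of per-task scores, each assumed to lie in [0, 1]."""
    if weights is None:
        # Default: weight every task component equally.
        weights = {task: 1.0 for task in task_scores}
    total = sum(weights[t] for t in task_scores)
    return sum(task_scores[t] * weights[t] for t in task_scores) / total

# Example: scores a GLUE-trained model might assign to one dialogue turn.
scores = {
    "cola": 0.9,  # grammatical acceptability of the response
    "mnli": 0.6,  # consistency/entailment with the dialogue context
    "qqp": 0.4,   # semantic similarity to the preceding utterance
}
print(round(overall_score(scores), 3))  # equal weights -> 0.633
```

Each GLUE task thus contributes one interpretable component, and the weighting can be tuned to emphasize, say, coherence over fluency.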

Original language: English
Title: EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Number of pages: 22
Publisher: Association for Computational Linguistics (ACL)
Publication date: 01.01.2021
Pages: 7834-7855
ISBN (electronic): 9781955917094
Publication status: Published - 01.01.2021
Published externally: Yes
Event: 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021 - Online, Punta Cana, Dominican Republic
Duration: 07.11.2021 - 11.11.2021
https://2021.emnlp.org

Bibliographic note

Publisher Copyright:
© 2021 Association for Computational Linguistics
