Treating dialogue quality evaluation as an anomaly detection problem
Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed
Dialogue systems for human interaction have been enjoying increasing popularity in both research and industry. To this day, the most reliable way to estimate their quality is human evaluation rather than automated approaches, despite the abundance of work in the field. In this paper, we investigate the effectiveness of framing dialogue evaluation as an anomaly detection task. We examine four dialogue modeling approaches and how well their objective functions correlate with human annotation scores. At a high level, the results are negative; a more in-depth look, however, reveals limited potential for using anomaly detection to evaluate dialogues.
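As a rough illustration of the idea in the abstract, the sketch below uses a language model's training objective (mean negative log-likelihood of a response given its context) as an anomaly score and measures its rank correlation with human quality annotations. The model choice (`gpt2`), the example data, and the use of Spearman correlation are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: treat a dialogue model's objective as an anomaly score
# and correlate it with human annotations. All data below is hypothetical.
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def anomaly_score(context: str, response: str) -> float:
    """Mean NLL of the response tokens given the context; higher = more anomalous."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Shift so each position predicts the next token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    labels = full_ids[:, 1:]
    token_nll = -log_probs.gather(2, labels.unsqueeze(-1)).squeeze(-1)
    # Keep only the losses on response tokens (positions after the context).
    resp_nll = token_nll[:, ctx_ids.shape[1] - 1 :]
    return resp_nll.mean().item()

# Hypothetical (context, response, human score) triples.
data = [
    ("How are you today?", "I'm doing well, thanks for asking!", 5),
    ("How are you today?", "Banana river the when purple.", 1),
    ("What's your favourite film?", "I really like science fiction movies.", 4),
]
scores = [anomaly_score(c, r) for c, r, _ in data]
human = [h for _, _, h in data]
rho, p = spearmanr(scores, human)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```

A negative correlation here would indicate that responses the model finds more anomalous (higher NLL) tend to receive lower human scores, which is the relationship such an evaluation framing would need to hold.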
| Original language | English |
|---|---|
| Title | LREC 2020 - 12th International Conference on Language Resources and Evaluation, Conference Proceedings |
| Editors | Nicoletta Calzolari, Frederic Bechet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis |
| Number of pages | 5 |
| Publisher | European Language Resources Association (ELRA) |
| Publication date | 2020 |
| Pages | 508-512 |
| ISBN (electronic) | 9791095546344 |
| Publication status | Published - 2020 |
| Published externally | Yes |
| Event | 12th International Conference on Language Resources and Evaluation, LREC 2020 - Le Palais du Pharao, Marseille, France. Duration: 11.05.2020 → 16.05.2020. https://lrec2020.lrec-conf.org/en/about/organizers/index.html |
Bibliographic note
Publisher Copyright:
© European Language Resources Association (ELRA), licensed under CC-BY-NC
- Computer science
- Business informatics