Language Model Transformers as Evaluators for Open-domain Dialogues

Publication: Contributions to collected editions › Article in conference proceedings › Research › peer-reviewed

Standard

Language Model Transformers as Evaluators for Open-domain Dialogues. / Nedelchev, Rostislav; Lehmann, Jens; Usbeck, Ricardo.
COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. ed. / Donia Scott; Nuria Bel; Chengqing Zong. Association for Computational Linguistics (ACL), 2020. pp. 6797-6808 (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference).


Harvard

Nedelchev, R, Lehmann, J & Usbeck, R 2020, Language Model Transformers as Evaluators for Open-domain Dialogues. in D Scott, N Bel & C Zong (eds), COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference, Association for Computational Linguistics (ACL), pp. 6797-6808, 28th International Conference on Computational Linguistics, COLING 2020, Virtual, Online, Spain, 08.12.20. https://doi.org/10.18653/v1/2020.coling-main.599

APA

Nedelchev, R., Lehmann, J., & Usbeck, R. (2020). Language Model Transformers as Evaluators for Open-domain Dialogues. In D. Scott, N. Bel, & C. Zong (Eds.), COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference (pp. 6797-6808). (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.599

Vancouver

Nedelchev R, Lehmann J, Usbeck R. Language Model Transformers as Evaluators for Open-domain Dialogues. In Scott D, Bel N, Zong C, editors, COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. Association for Computational Linguistics (ACL). 2020. p. 6797-6808. (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference). doi: 10.18653/v1/2020.coling-main.599

Bibtex

@inbook{060baa868fe74263b7f5495df8027644,
title = "Language Model Transformers as Evaluators for Open-domain Dialogues",
abstract = "Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is to use resource-intensive manual labor instead of automated means. In this work, we investigate whether language models (LM) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with the scores assigned by human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.",
keywords = "Informatics, Business informatics",
author = "Rostislav Nedelchev and Jens Lehmann and Ricardo Usbeck",
note = "We acknowledge the support of the EU projects Cleopatra (GA 812997) and TAILOR (GA 952215), the Federal Ministry for Economic Affairs and Energy (BMWi) project SPEAKER (FKZ 01MK20011A), the German Federal Ministry of Education and Research (BMBF) projects and excellence clusters ML2R (FKZ 01 15 18038 A/B/C), MLwin (01S18050 D/F), ScaDS.AI (01/S18026A) as well as the Fraunhofer Zukunftsstiftung project JOSEPH. Publisher Copyright: {\textcopyright} 2020 COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference. All rights reserved.; 28th International Conference on Computational Linguistics, COLING 2020 ; Conference date: 08-12-2020 Through 13-12-2020",
year = "2020",
month = jan,
day = "1",
doi = "10.18653/v1/2020.coling-main.599",
language = "English",
series = "COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference",
publisher = "Association for Computational Linguistics (ACL)",
pages = "6797--6808",
editor = "Donia Scott and Nuria Bel and Chengqing Zong",
booktitle = "COLING 2020 - 28th International Conference on Computational Linguistics",
address = "United States",
url = "https://coling2020.org, https://coling2020.org/COLING2020programme.pdf",

}

RIS

TY - CHAP

T1 - Language Model Transformers as Evaluators for Open-domain Dialogues

AU - Nedelchev, Rostislav

AU - Lehmann, Jens

AU - Usbeck, Ricardo

N1 - We acknowledge the support of the EU projects Cleopatra (GA 812997) and TAILOR (GA 952215), the Federal Ministry for Economic Affairs and Energy (BMWi) project SPEAKER (FKZ 01MK20011A), the German Federal Ministry of Education and Research (BMBF) projects and excellence clusters ML2R (FKZ 01 15 18038 A/B/C), MLwin (01S18050 D/F), ScaDS.AI (01/S18026A) as well as the Fraunhofer Zukunftsstiftung project JOSEPH. Publisher Copyright: © 2020 COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference. All rights reserved.

PY - 2020/1/1

Y1 - 2020/1/1

N2 - Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is to use resource-intensive manual labor instead of automated means. In this work, we investigate whether language models (LM) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with the scores assigned by human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.

AB - Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is to use resource-intensive manual labor instead of automated means. In this work, we investigate whether language models (LM) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with the scores assigned by human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85108285068&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/0f9694bb-370d-3c37-bb25-8347d9aac64a/

U2 - 10.18653/v1/2020.coling-main.599

DO - 10.18653/v1/2020.coling-main.599

M3 - Article in conference proceedings

AN - SCOPUS:85108285068

T3 - COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference

SP - 6797

EP - 6808

BT - COLING 2020 - 28th International Conference on Computational Linguistics

A2 - Scott, Donia

A2 - Bel, Nuria

A2 - Zong, Chengqing

PB - Association for Computational Linguistics (ACL)

T2 - 28th International Conference on Computational Linguistics, COLING 2020

Y2 - 8 December 2020 through 13 December 2020

ER -
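
Note on the approach described in the abstract: the paper studies whether the next-word predictions of transformer language models can serve as an unsupervised signal for dialogue quality. The record itself does not spell out a scoring procedure, so the following minimal sketch, assuming a pretrained GPT-2 from the Hugging Face transformers library and a simple average token log-probability as the score, only illustrates the general idea of such an LM-based indicator, not the paper's exact method.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: plain GPT-2 and average log-likelihood as the quality proxy;
# the paper's actual models and scoring scheme may differ.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def response_score(context: str, response: str) -> float:
    """Average log-probability of the response tokens given the dialogue context."""
    context_ids = tokenizer.encode(context)
    response_ids = tokenizer.encode(response)
    input_ids = torch.tensor([context_ids + response_ids])
    with torch.no_grad():
        logits = model(input_ids).logits                        # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)       # predictions for tokens 1..end
    targets = input_ids[0, 1:]
    token_log_probs = log_probs[torch.arange(len(targets)), targets]
    # Keep only the positions that predict response tokens (i.e. condition on the context).
    return token_log_probs[len(context_ids) - 1:].mean().item()

# Higher (less negative) scores indicate responses the language model finds more plausible.
print(response_score("How was your weekend?", " It was great, I went hiking."))

In practice the paper examines several transformer language models and correlates their outputs with human ratings; this snippet only shows the basic mechanism of turning LM probabilities into a scalar quality score.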

