Language Model Transformers as Evaluators for Open-domain Dialogues

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Standard

Language Model Transformers as Evaluators for Open-domain Dialogues. / Nedelchev, Rostislav; Lehmann, Jens; Usbeck, Ricardo.
COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. ed. / Donia Scott; Nuria Bel; Chengqing Zong. Association for Computational Linguistics (ACL), 2020. p. 6797-6808 (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference).


Harvard

Nedelchev, R, Lehmann, J & Usbeck, R 2020, Language Model Transformers as Evaluators for Open-domain Dialogues. in D Scott, N Bel & C Zong (eds), COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference, Association for Computational Linguistics (ACL), pp. 6797-6808, 28th International Conference on Computational Linguistics, COLING 2020, Virtual, Online, Spain, 08.12.20. https://doi.org/10.18653/v1/2020.coling-main.599

APA

Nedelchev, R., Lehmann, J., & Usbeck, R. (2020). Language Model Transformers as Evaluators for Open-domain Dialogues. In D. Scott, N. Bel, & C. Zong (Eds.), COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference (pp. 6797-6808). (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.599

Vancouver

Nedelchev R, Lehmann J, Usbeck R. Language Model Transformers as Evaluators for Open-domain Dialogues. In Scott D, Bel N, Zong C, editors, COLING 2020 - 28th International Conference on Computational Linguistics: Proceedings of the Conference. Association for Computational Linguistics (ACL). 2020. p. 6797-6808. (COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference). doi: 10.18653/v1/2020.coling-main.599

Bibtex

@inbook{060baa868fe74263b7f5495df8027644,
title = "Language Model Transformers as Evaluators for Open-domain Dialogues",
abstract = "Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with human evaluation scores. We also provide some insights into their behavior and inner workings in a conversational context.",
keywords = "Informatics, Business informatics",
author = "Rostislav Nedelchev and Jens Lehmann and Ricardo Usbeck",
note = "We acknowledge the support of the EU projects Cleopatra (GA 812997) and TAILOR (GA 952215), the Federal Ministry for Economic Affairs and Energy (BMWi) project SPEAKER (FKZ 01MK20011A), the German Federal Ministry of Education and Research (BMBF) projects and excellence clusters ML2R (FKZ 01 15 18038 A/B/C), MLwin (01S18050 D/F), ScaDS.AI (01/S18026A) as well as the Fraunhofer Zukunftsstiftung project JOSEPH. Publisher Copyright: {\textcopyright} 2020 COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference. All rights reserved.; 28th International Conference on Computational Linguistics, COLING 2020 ; Conference date: 08-12-2020 Through 13-12-2020",
year = "2020",
month = jan,
day = "1",
doi = "10.18653/v1/2020.coling-main.599",
language = "English",
series = "COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference",
publisher = "Association for Computational Linguistics (ACL)",
pages = "6797--6808",
editor = "Donia Scott and Nuria Bel and Chengqing Zong",
booktitle = "COLING 2020 - 28th International Conference on Computational Linguistics",
address = "United States",
url = "https://coling2020.org, https://coling2020.org/COLING2020programme.pdf",

}

RIS

TY - CHAP

T1 - Language Model Transformers as Evaluators for Open-domain Dialogues

AU - Nedelchev, Rostislav

AU - Lehmann, Jens

AU - Usbeck, Ricardo

N1 - We acknowledge the support of the EU projects Cleopatra (GA 812997) and TAILOR (GA 952215), the Federal Ministry for Economic Affairs and Energy (BMWi) project SPEAKER (FKZ 01MK20011A), the German Federal Ministry of Education and Research (BMBF) projects and excellence clusters ML2R (FKZ 01 15 18038 A/B/C), MLwin (01S18050 D/F), ScaDS.AI (01/S18026A) as well as the Fraunhofer Zukunftsstiftung project JOSEPH. Publisher Copyright: © 2020 COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference. All rights reserved.

PY - 2020/1/1

Y1 - 2020/1/1

N2 - Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with human evaluation scores. We also provide some insights into their behavior and inner workings in a conversational context.

AB - Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate that the output of the language models correlates positively with human evaluation scores. We also provide some insights into their behavior and inner workings in a conversational context.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85108285068&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/0f9694bb-370d-3c37-bb25-8347d9aac64a/

U2 - 10.18653/v1/2020.coling-main.599

DO - 10.18653/v1/2020.coling-main.599

M3 - Article in conference proceedings

AN - SCOPUS:85108285068

T3 - COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference

SP - 6797

EP - 6808

BT - COLING 2020 - 28th International Conference on Computational Linguistics

A2 - Scott, Donia

A2 - Bel, Nuria

A2 - Zong, Chengqing

PB - Association for Computational Linguistics (ACL)

T2 - 28th International Conference on Computational Linguistics, COLING 2020

Y2 - 8 December 2020 through 13 December 2020

ER -
