Proxy Indicators for the Quality of Open-domain Dialogues

Publication: Contributions to collected editions/works › Article in conference proceedings › Research › peer-reviewed

Standard

Proxy Indicators for the Quality of Open-domain Dialogues. / Nedelchev, Rostislav; Lehmann, Jens; Usbeck, Ricardo.
EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings. ed. / Marie-Francine Moens; Xuanjing Huang; Lucia Specia; Scott Wen-tau Yih. Association for Computational Linguistics (ACL), 2021. pp. 7834-7855 (EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings).

Harvard

Nedelchev, R, Lehmann, J & Usbeck, R 2021, Proxy Indicators for the Quality of Open-domain Dialogues. in M-F Moens, X Huang, L Specia & S Wen-tau Yih (eds), EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings. EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings, Association for Computational Linguistics (ACL), pp. 7834-7855, 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Punta Cana, Dominican Republic, 07.11.21. https://doi.org/10.18653/v1/2021.emnlp-main.618

APA

Nedelchev, R., Lehmann, J., & Usbeck, R. (2021). Proxy Indicators for the Quality of Open-domain Dialogues. In M.-F. Moens, X. Huang, L. Specia, & S. Wen-tau Yih (Eds.), EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 7834-7855). (EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.618

Vancouver

Nedelchev R, Lehmann J, Usbeck R. Proxy Indicators for the Quality of Open-domain Dialogues. In: Moens MF, Huang X, Specia L, Wen-tau Yih S, editors. EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings. Association for Computational Linguistics (ACL). 2021. p. 7834-7855. (EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings). doi: 10.18653/v1/2021.emnlp-main.618

Bibtex

@inbook{2acc49ff1cb64885973f6c6f152d46bb,
title = "Proxy Indicators for the Quality of Open-domain Dialogues",
abstract = "The automatic evaluation of open-domain dialogues remains a largely unsolved challenge. Thus, despite the abundance of work done in the field, human judges have to evaluate dialogues' quality. As a consequence, performing such evaluations at scale is usually expensive. This work investigates using a deep-learning model trained on the General Language Understanding Evaluation (GLUE) benchmark to serve as a quality indication of open-domain dialogues. The aim is to use the various GLUE tasks as different perspectives on judging the quality of conversation, thus reducing the need for additional training data or responses that serve as quality references. Due to this nature, the method can infer various quality metrics and derive a component-based overall score. We achieve statistically significant correlation coefficients of up to 0.7.",
keywords = "Informatics, Business informatics",
author = "Rostislav Nedelchev and Jens Lehmann and Ricardo Usbeck",
note = "Funding Information: Turning to Maintains Context, we see the inverse perspective. The pair-wise sentence proxy indicators applied to the dialogue context, and target response demonstrate the best ability, while the single sentence is the worst. Furthermore, the observation is partially supported by the pair-wise tasks applied to the dialogue facts. Publisher Copyright: {\textcopyright} 2021 Association for Computational Linguistics; 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021 ; Conference date: 07-11-2021 Through 11-11-2021",
year = "2021",
month = jan,
day = "1",
doi = "10.18653/v1/2021.emnlp-main.618",
language = "English",
series = "EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings",
publisher = "Association for Computational Linguistics (ACL)",
pages = "7834--7855",
editor = "Marie-Francine Moens and Xuanjing Huang and Lucia Specia and {Wen-tau Yih}, Scott",
booktitle = "EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings",
address = "United States",
url = "https://2021.emnlp.org",

}

RIS

TY - CHAP

T1 - Proxy Indicators for the Quality of Open-domain Dialogues

AU - Nedelchev, Rostislav

AU - Lehmann, Jens

AU - Usbeck, Ricardo

N1 - Funding Information: Turning to Maintains Context, we see the inverse perspective. The pair-wise sentence proxy indicators applied to the dialogue context, and target response demonstrate the best ability, while the single sentence is the worst. Furthermore, the observation is partially supported by the pair-wise tasks applied to the dialogue facts. Publisher Copyright: © 2021 Association for Computational Linguistics

PY - 2021/1/1

Y1 - 2021/1/1

N2 - The automatic evaluation of open-domain dialogues remains a largely unsolved challenge. Thus, despite the abundance of work done in the field, human judges have to evaluate dialogues' quality. As a consequence, performing such evaluations at scale is usually expensive. This work investigates using a deep-learning model trained on the General Language Understanding Evaluation (GLUE) benchmark to serve as a quality indication of open-domain dialogues. The aim is to use the various GLUE tasks as different perspectives on judging the quality of conversation, thus reducing the need for additional training data or responses that serve as quality references. Due to this nature, the method can infer various quality metrics and derive a component-based overall score. We achieve statistically significant correlation coefficients of up to 0.7.

AB - The automatic evaluation of open-domain dialogues remains a largely unsolved challenge. Thus, despite the abundance of work done in the field, human judges have to evaluate dialogues' quality. As a consequence, performing such evaluations at scale is usually expensive. This work investigates using a deep-learning model trained on the General Language Understanding Evaluation (GLUE) benchmark to serve as a quality indication of open-domain dialogues. The aim is to use the various GLUE tasks as different perspectives on judging the quality of conversation, thus reducing the need for additional training data or responses that serve as quality references. Due to this nature, the method can infer various quality metrics and derive a component-based overall score. We achieve statistically significant correlation coefficients of up to 0.7.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85127432288&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/87ca9f87-497d-31ee-9b3d-c0876c35cb07/

U2 - 10.18653/v1/2021.emnlp-main.618

DO - 10.18653/v1/2021.emnlp-main.618

M3 - Article in conference proceedings

AN - SCOPUS:85127432288

T3 - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings

SP - 7834

EP - 7855

BT - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings

A2 - Moens, Marie-Francine

A2 - Huang, Xuanjing

A2 - Specia, Lucia

A2 - Wen-tau Yih, Scott

PB - Association for Computational Linguistics (ACL)

T2 - 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021

Y2 - 7 November 2021 through 11 November 2021

ER -
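As a quick illustration of the method summarized in the abstract: the paper uses models fine-tuned on GLUE tasks as reference-free proxy indicators of dialogue response quality and combines them into a component-based overall score. The following is a minimal Python sketch of that idea, not the authors' released code; it assumes the publicly available roberta-large-mnli checkpoint as one pair-wise proxy, with further GLUE-task proxies (CoLA, QQP, STS-B, ...) to be scored analogously and averaged.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Pair-wise GLUE-style proxy: MNLI entailment between dialogue context and response.
# "roberta-large-mnli" is a public checkpoint used here purely for illustration.
NLI_MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)
model.eval()

def entailment_proxy(context: str, response: str) -> float:
    """Probability that the response is entailed by the dialogue context."""
    enc = tokenizer(context, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[2].item()

def overall_score(proxy_scores: list[float]) -> float:
    """Naive component-based overall score: unweighted mean of the proxy indicators."""
    return sum(proxy_scores) / len(proxy_scores)

if __name__ == "__main__":
    context = "I just got back from a week of hiking in the Alps."
    response = "That sounds wonderful! Which routes did you take?"
    scores = [entailment_proxy(context, response)]  # add CoLA, QQP, STS-B, ... proxies here
    print(f"proxy scores: {scores}, overall: {overall_score(scores):.3f}")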
