RoMe: A Robust Metric for Evaluating Natural Language Generation

Publication: Contributions to collected editions › Articles in conference proceedings › Research › peer-reviewed

Standard

RoMe: A Robust Metric for Evaluating Natural Language Generation. / Al Hasan Rony, Md Rashad; Kovriguina, Liubov; Chaudhuri, Debanjan et al.

ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). Ed. / Smaranda Muresan; Preslav Nakov; Aline Villavicencio. Association for Computational Linguistics (ACL), 2022. pp. 5645-5657 (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1).


Harvard

Al Hasan Rony, MR, Kovriguina, L, Chaudhuri, D, Usbeck, R & Lehmann, J 2022, RoMe: A Robust Metric for Evaluating Natural Language Generation. in S Muresan, P Nakov & A Villavicencio (eds), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, Association for Computational Linguistics (ACL), pp. 5645-5657, 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022, Dublin, Ireland, 22.05.22. <https://aclanthology.org/2022.acl-long.387.pdf>

APA

Al Hasan Rony, M. R., Kovriguina, L., Chaudhuri, D., Usbeck, R., & Lehmann, J. (2022). RoMe: A Robust Metric for Evaluating Natural Language Generation. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (pp. 5645-5657). (Proceedings of the Annual Meeting of the Association for Computational Linguistics; Vol. 1). Association for Computational Linguistics (ACL). https://aclanthology.org/2022.acl-long.387.pdf

Vancouver

Al Hasan Rony MR, Kovriguina L, Chaudhuri D, Usbeck R, Lehmann J. RoMe: A Robust Metric for Evaluating Natural Language Generation. In: Muresan S, Nakov P, Villavicencio A, editors. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). Association for Computational Linguistics (ACL). 2022. p. 5645-5657. (Proceedings of the Annual Meeting of the Association for Computational Linguistics).

Bibtex

@inproceedings{e7084c0d6ba0470ebbe19f81ba2d3273,
title = "RoMe: A Robust Metric for Evaluating Natural Language Generation",
abstract = "Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks.",
keywords = "Informatics, Business informatics",
author = "{Al Hasan Rony}, {Md Rashad} and Liubov Kovriguina and Debanjan Chaudhuri and Ricardo Usbeck and Jens Lehmann",
note = "We acknowledge the support of the following projects: SPEAKER (BMWi FKZ 01MK20011A), JOSEPH (Fraunhofer Zukunftsstiftung), OpenGPT-X (BMWK FKZ 68GX21007A), the excellence clusters ML2R (BMBF FKZ 01 15 18038 A/B/C), ScaDS.AI (IS18026A-F) and TAILOR (EU GA 952215). Publisher Copyright: {\textcopyright} 2022 Association for Computational Linguistics.; 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022, ACL 2022 ; Conference date: 22-05-2022 Through 27-05-2022",
year = "2022",
language = "English",
series = "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
publisher = "Association for Computational Linguistics (ACL)",
pages = "5645--5657",
editor = "Smaranda Muresan and Preslav Nakov and Aline Villavicencio",
booktitle = "ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)",
address = "United States",
url = "https://www.2022.aclweb.org/",

}

RIS

TY - CPAPER

T1 - RoMe: A Robust Metric for Evaluating Natural Language Generation

T2 - 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022

AU - Al Hasan Rony, Md Rashad

AU - Kovriguina, Liubov

AU - Chaudhuri, Debanjan

AU - Usbeck, Ricardo

AU - Lehmann, Jens

N1 - Conference code: 60

PY - 2022

Y1 - 2022

N2 - Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks.

AB - Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks.

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85138904185&partnerID=8YFLogxK

M3 - Article in conference proceedings

AN - SCOPUS:85138904185

T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics

SP - 5645

EP - 5657

BT - ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)

A2 - Muresan, Smaranda

A2 - Nakov, Preslav

A2 - Villavicencio, Aline

PB - Association for Computational Linguistics (ACL)

Y2 - 22 May 2022 through 27 May 2022

ER -
