RoMe: A Robust Metric for Evaluating Natural Language Generation

Research output: Contribution to collected editions/works › Article in conference proceedings › Research › peer-review

Authors

  • Md Rashad Al Hasan Rony
  • Liubov Kovriguina
  • Debanjan Chaudhuri
  • Ricardo Usbeck
  • Jens Lehmann

Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe correlates more strongly with human judgment than state-of-the-art metrics when evaluating system-generated sentences across several NLG tasks.
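To make the feature-combination idea described in the abstract concrete, here is a minimal, hedged sketch of how semantic similarity, an edit-distance signal, and a grammaticality score might be folded into a single quality value. This is not the authors' RoMe implementation: the real metric uses sentence embeddings, dependency-tree edit distance, a trained acceptability classifier, and a self-supervised network to learn the combination; every function name, proxy feature, and weight below is an assumption made purely for illustration.

```python
# Illustrative sketch only -- NOT the RoMe implementation.
# RoMe combines embedding-based semantic similarity, dependency-tree
# edit distance, and grammatical acceptability via a learned network;
# each feature here is a crude, clearly simplified stand-in.
import difflib


def semantic_similarity(hypothesis: str, reference: str) -> float:
    """Stand-in for embedding similarity: token-overlap (Jaccard)."""
    hyp, ref = set(hypothesis.lower().split()), set(reference.lower().split())
    return len(hyp & ref) / len(hyp | ref) if hyp | ref else 1.0


def edit_similarity(hypothesis: str, reference: str) -> float:
    """Stand-in for tree edit distance: normalized sequence similarity
    over tokens instead of dependency-tree nodes."""
    return difflib.SequenceMatcher(
        None, hypothesis.lower().split(), reference.lower().split()
    ).ratio()


def grammaticality(hypothesis: str) -> float:
    """Hypothetical placeholder; a real system would use a CoLA-style
    acceptability classifier. Here we only penalize empty output."""
    return 1.0 if hypothesis.strip() else 0.0


def rome_like_score(hypothesis: str, reference: str,
                    weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted combination of the three features (weights arbitrary);
    RoMe itself learns this combination with a neural network."""
    feats = (
        semantic_similarity(hypothesis, reference),
        edit_similarity(hypothesis, reference),
        grammaticality(hypothesis),
    )
    return sum(w * f for w, f in zip(weights, feats))


if __name__ == "__main__":
    ref = "The film was released in 1999 and won three awards."
    hyp = "The movie came out in 1999 and received three awards."
    print(f"score = {rome_like_score(hyp, ref):.3f}")
```

The point of the sketch is only the overall shape: several complementary signals (meaning, structure, fluency) are computed per hypothesis-reference pair and then aggregated, which is what lets such a metric tolerate surface-form variation better than a single n-gram overlap score.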

Original language: English
Title of host publication: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Number of pages: 13
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2022
Pages: 5645-5657
ISBN (Electronic): 9781955917216
Publication status: Published - 2022
Externally published: Yes
Event: 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022, Convention Centre Dublin & Online, Dublin, Ireland
Duration: 22.05.2022 - 27.05.2022
Conference number: 60
https://www.2022.aclweb.org/

Bibliographical note

We acknowledge the support of the following projects: SPEAKER (BMWi FKZ 01MK20011A), JOSEPH (Fraunhofer Zukunftsstiftung), OpenGPT-X (BMWK FKZ 68GX21007A), the excellence clusters ML2R (BMBF FKZ 01 15 18038 A/B/C), ScaDS.AI (IS18026A-F) and TAILOR (EU GA 952215).

Publisher Copyright:
© 2022 Association for Computational Linguistics.