Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research?

Publication: Contributions to edited volumes › Articles in conference proceedings › Research › peer-reviewed

Standard

Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research? / Jiang, Longquan; Usbeck, Ricardo.
SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. ed. / Enrique Amigo; Pablo Castells; Julio Gonzalo. New York: Association for Computing Machinery, Inc, 2022. pp. 3209-3218 (Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval; Vol. 2022).


Harvard

Jiang, L & Usbeck, R 2022, Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research? in E Amigo, P Castells & J Gonzalo (eds), SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, vol. 2022, Association for Computing Machinery, Inc, New York, pp. 3209-3218, 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR 2022, Madrid, Spain, 11/07/22. https://doi.org/10.1145/3477495.3531751, https://doi.org/10.48550/arxiv.2205.06573

APA

Jiang, L., & Usbeck, R. (2022). Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research? In E. Amigo, P. Castells, & J. Gonzalo (Eds.), SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 3209-3218). (Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval; Vol. 2022). Association for Computing Machinery, Inc. https://doi.org/10.1145/3477495.3531751, https://doi.org/10.48550/arxiv.2205.06573

Vancouver

Jiang L, Usbeck R. Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research? In Amigo E, Castells P, Gonzalo J, editors, SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: Association for Computing Machinery, Inc. 2022. p. 3209-3218. (Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval). doi: 10.1145/3477495.3531751, 10.48550/arxiv.2205.06573

BibTeX

@inbook{1747a699866c4ae3adc29801090597ba,
title = "Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research?",
abstract = "Existing approaches to Question Answering over Knowledge Graphs (KGQA) have weak generalizability. That is often due to the standard i.i.d. assumption on the underlying dataset. Recently, three levels of generalization for KGQA were defined, namely i.i.d., compositional, and zero-shot. We analyze 25 well-known KGQA datasets for 5 different Knowledge Graphs (KGs). We show that, according to this definition, many existing and openly available KGQA datasets are either not suited to train a generalizable KGQA system or are based on discontinued and outdated KGs. Generating new datasets is a costly process and, thus, is not an alternative for smaller research groups and companies. In this work, we propose a mitigation method for re-splitting available KGQA datasets to enable their applicability to evaluate generalization, without any cost or manual effort. We test our hypothesis on three KGQA datasets, i.e., LC-QuAD, LC-QuAD 2.0 and QALD-9. Experiments on the re-split KGQA datasets demonstrate the method's effectiveness towards generalizability. The code and a unified way to access 18 available datasets are online at https://github.com/semantic-systems/KGQA-datasets as well as https://github.com/semantic-systems/KGQA-datasets-generalization.",
keywords = "benchmark, evaluation, generalizability, generalization, kgqa, question answering, Informatics, Business informatics",
author = "Longquan Jiang and Ricardo Usbeck",
note = "Publisher Copyright: {\textcopyright} 2022 ACM.; 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR 2022, ACM SIGIR 2022 ; Conference date: 11-07-2022 Through 15-07-2022",
year = "2022",
month = jul,
day = "6",
doi = "10.1145/3477495.3531751",
language = "English",
series = "Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval",
publisher = "Association for Computing Machinery, Inc",
pages = "3209--3218",
editor = "Enrique Amigo and Pablo Castells and Julio Gonzalo",
booktitle = "SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval",
address = "United States",
url = "https://sigir.org/sigir2022/",
}

RIS

TY - CHAP

T1 - Knowledge Graph Question Answering Datasets and Their Generalizability: Are They Enough for Future Research?

T2 - 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR 2022

AU - Jiang, Longquan

AU - Usbeck, Ricardo

N1 - Conference code: 45

PY - 2022/7/6

Y1 - 2022/7/6

N2 - Existing approaches to Question Answering over Knowledge Graphs (KGQA) have weak generalizability. That is often due to the standard i.i.d. assumption on the underlying dataset. Recently, three levels of generalization for KGQA were defined, namely i.i.d., compositional, and zero-shot. We analyze 25 well-known KGQA datasets for 5 different Knowledge Graphs (KGs). We show that, according to this definition, many existing and openly available KGQA datasets are either not suited to train a generalizable KGQA system or are based on discontinued and outdated KGs. Generating new datasets is a costly process and, thus, is not an alternative for smaller research groups and companies. In this work, we propose a mitigation method for re-splitting available KGQA datasets to enable their applicability to evaluate generalization, without any cost or manual effort. We test our hypothesis on three KGQA datasets, i.e., LC-QuAD, LC-QuAD 2.0 and QALD-9. Experiments on the re-split KGQA datasets demonstrate the method's effectiveness towards generalizability. The code and a unified way to access 18 available datasets are online at https://github.com/semantic-systems/KGQA-datasets as well as https://github.com/semantic-systems/KGQA-datasets-generalization.

AB - Existing approaches to Question Answering over Knowledge Graphs (KGQA) have weak generalizability. That is often due to the standard i.i.d. assumption on the underlying dataset. Recently, three levels of generalization for KGQA were defined, namely i.i.d., compositional, and zero-shot. We analyze 25 well-known KGQA datasets for 5 different Knowledge Graphs (KGs). We show that, according to this definition, many existing and openly available KGQA datasets are either not suited to train a generalizable KGQA system or are based on discontinued and outdated KGs. Generating new datasets is a costly process and, thus, is not an alternative for smaller research groups and companies. In this work, we propose a mitigation method for re-splitting available KGQA datasets to enable their applicability to evaluate generalization, without any cost or manual effort. We test our hypothesis on three KGQA datasets, i.e., LC-QuAD, LC-QuAD 2.0 and QALD-9. Experiments on the re-split KGQA datasets demonstrate the method's effectiveness towards generalizability. The code and a unified way to access 18 available datasets are online at https://github.com/semantic-systems/KGQA-datasets as well as https://github.com/semantic-systems/KGQA-datasets-generalization.

KW - benchmark

KW - evaluation

KW - generalizability

KW - generalization

KW - kgqa

KW - question answering

KW - Informatics

KW - Business informatics

UR - http://www.scopus.com/inward/record.url?scp=85135050347&partnerID=8YFLogxK

UR - https://www.mendeley.com/catalogue/b6989f93-815e-39f3-8194-cc48d3490cd6/

U2 - 10.1145/3477495.3531751

DO - 10.1145/3477495.3531751

M3 - Article in conference proceedings

AN - SCOPUS:85135050347

T3 - Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval

SP - 3209

EP - 3218

BT - SIGIR 2022 - Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval

A2 - Amigo, Enrique

A2 - Castells, Pablo

A2 - Gonzalo, Julio

PB - Association for Computing Machinery, Inc

CY - New York

Y2 - 11 July 2022 through 15 July 2022

ER -
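
The abstract only sketches the proposed re-splitting at a high level. Below is a minimal, hypothetical Python sketch of one way a zero-shot-style re-split could be produced for a question/SPARQL dataset: a subset of relations is held out so that every test question uses at least one relation never seen during training. The file name, JSON layout, and relation-URI pattern are illustrative assumptions only; this is not the paper's actual procedure and does not reflect the APIs of the linked repositories.

# Minimal sketch (assumptions noted above): zero-shot-style re-split of a KGQA dataset.
import json
import random
import re

# Assumed relation-URI pattern; real datasets may use other namespaces or prefixed names.
RELATION_RE = re.compile(r"<(http://dbpedia\.org/ontology/[^>]+)>")

def relations_of(sparql: str) -> frozenset:
    """Extract the set of schema relations mentioned in a SPARQL query."""
    return frozenset(RELATION_RE.findall(sparql))

def zero_shot_split(items, test_ratio=0.2, seed=42):
    """Hold out a subset of relations; any question using one of them goes to the test set."""
    rng = random.Random(seed)
    all_relations = sorted({r for it in items for r in relations_of(it["sparql"])})
    rng.shuffle(all_relations)
    held_out = set(all_relations[: max(1, int(len(all_relations) * test_ratio))])
    train, test = [], []
    for it in items:
        (test if relations_of(it["sparql"]) & held_out else train).append(it)
    return train, test

if __name__ == "__main__":
    # Assumed input format: [{"question": "...", "sparql": "..."}, ...]
    with open("lcquad_train.json") as f:
        items = json.load(f)
    train, test = zero_shot_split(items)
    print(f"train: {len(train)}  test (unseen relations): {len(test)}")

A compositional split could be derived analogously by holding out unseen combinations of relations rather than individual relations.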
