DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL

Publication: Contributions to collected editions › Conference proceedings papers › Research › peer-reviewed

Standard

DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL. / Taffa, Tilahun; Neises, Patrick; Ollinger, Stefan et al.
K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025. 2025.


Harvard

Taffa, T, Neises, P, Ollinger, S, Westphal, P, Ackermann, MR, Banerjee, D & Usbeck, R 2025, DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL. in K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025. https://doi.org/10.1145/3731443.3771376

APA

Taffa, T., Neises, P., Ollinger, S., Westphal, P., Ackermann, M. R., Banerjee, D., & Usbeck, R. (2025). DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL. In K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025. https://doi.org/10.1145/3731443.3771376

Vancouver

Taffa T, Neises P, Ollinger S, Westphal P, Ackermann MR, Banerjee D et al. DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL. In: K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025. 2025. doi: 10.1145/3731443.3771376

Bibtex

@inproceedings{8cc3fe105ae1493cbebd4afc297ff647,
title = "DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL",
abstract = "We present DBLP-QuAD 2.0, designed to evaluate Scholarly Knowledge Graph Question Answering (KGQA) over DBLP. Recent updates in the underlying DBLP KG, including new entities and relationships such as venues, research streams, and citation links, have necessitated a corresponding update to existing KGQA benchmarking resources. While the DBLP-QuAD dataset focused on author- and publication-centered queries, DBLP-QuAD 2.0 broadens the coverage to reflect the enriched structure of the updated KG. Specifically, the questions in our dataset are formulated from SPARQL query logs that cover a wide range of entities involving authors, publications, venues, research streams, and citation relationships. DBLP-QuAD 2.0 thus provides a more comprehensive benchmark for evaluating KGQA systems, together with a baseline.",
author = "Tilahun Taffa and Patrick Neises and Stefan Ollinger and Patrick Westphal and Ackermann, {Marcel R.} and Debayan Banerjee and Ricardo Usbeck",
year = "2025",
month = dec,
day = "9",
doi = "10.1145/3731443.3771376",
language = "English",
booktitle = "K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025",

}

RIS

TY - CONF

T1 - DBLP QuAD 2.0: Scholarly Natural Questions from SPARQL

AU - Taffa, Tilahun

AU - Neises, Patrick

AU - Ollinger, Stefan

AU - Westphal, Patrick

AU - Ackermann, Marcel R.

AU - Banerjee, Debayan

AU - Usbeck, Ricardo

PY - 2025/12/9

Y1 - 2025/12/9

N2 - We present DBLP-QuAD 2.0, designed to evaluate Scholarly Knowledge Graph Question Answering (KGQA) over DBLP. Recent updates in the underlying DBLP KG, including new entities and relationships such as venues, research streams, and citation links, have necessitated a corresponding update to existing KGQA benchmarking resources. While the DBLP-QuAD dataset focused on author- and publication-centered queries, DBLP-QuAD 2.0 broadens the coverage to reflect the enriched structure of the updated KG. Specifically, the questions in our dataset are formulated from SPARQL query logs that cover a wide range of entities involving authors, publications, venues, research streams, and citation relationships. DBLP-QuAD 2.0 thus provides a more comprehensive benchmark for evaluating KGQA systems, together with a baseline.

AB - We present DBLP-QuAD 2.0, designed to evaluate Scholarly Knowledge Graph Question Answering (KGQA) over DBLP. Recent updates in the underlying DBLP KG, including new entities and relationships such as venues, research streams, and citation links, have necessitated a corresponding update to existing KGQA benchmarking resources. While the DBLP-QuAD dataset focused on author- and publication-centered queries, DBLP-QuAD 2.0 broadens the coverage to reflect the enriched structure of the updated KG. Specifically, the questions in our dataset are formulated from SPARQL query logs that cover a wide range of entities involving authors, publications, venues, research streams, and citation relationships. DBLP-QuAD 2.0 thus provides a more comprehensive benchmark for evaluating KGQA systems, together with a baseline.

U2 - 10.1145/3731443.3771376

DO - 10.1145/3731443.3771376

M3 - Article in conference proceedings

BT - K-CAP '25: Proceedings of the 13th Knowledge Capture Conference 2025

ER -
