Automating SPARQL Query Translations between DBpedia and Wikidata

Research output: Contributions to collected editions/works › Article in conference proceedings › Research

Standard

Automating SPARQL Query Translations between DBpedia and Wikidata. / Bartels, Malte Christian; Banerjee, Debayan; Usbeck, Ricardo.
Linking Meaning: Semantic Technologies Shaping the Future of AI: Proceedings of the 21st International Conference on Semantic Systems, 3-5 September 2025, Vienna, Austria. ed. / Blerina Spahiu; Sahar Vahdati; Angelo Salatino; Tassilo Pellegrini; Giray Havur. IOS Press BV, 2025. p. 176-193 (Studies on the Semantic Web; Vol. 62).

Harvard

Bartels, MC, Banerjee, D & Usbeck, R 2025, Automating SPARQL Query Translations between DBpedia and Wikidata. in B Spahiu, S Vahdati, A Salatino, T Pellegrini & G Havur (eds), Linking Meaning: Semantic Technologies Shaping the Future of AI: Proceedings of the 21st International Conference on Semantic Systems, 3-5 September 2025, Vienna, Austria. Studies on the Semantic Web, vol. 62, IOS Press BV, pp. 176-193. https://doi.org/10.3233/SSW250019

APA

Bartels, M. C., Banerjee, D., & Usbeck, R. (2025). Automating SPARQL Query Translations between DBpedia and Wikidata. In B. Spahiu, S. Vahdati, A. Salatino, T. Pellegrini, & G. Havur (Eds.), Linking Meaning: Semantic Technologies Shaping the Future of AI: Proceedings of the 21st International Conference on Semantic Systems, 3-5 September 2025, Vienna, Austria (pp. 176-193). (Studies on the Semantic Web; Vol. 62). IOS Press BV. https://doi.org/10.3233/SSW250019

Vancouver

Bartels MC, Banerjee D, Usbeck R. Automating SPARQL Query Translations between DBpedia and Wikidata. In Spahiu B, Vahdati S, Salatino A, Pellegrini T, Havur G, editors, Linking Meaning: Semantic Technologies Shaping the Future of AI: Proceedings of the 21st International Conference on Semantic Systems, 3-5 September 2025, Vienna, Austria. IOS Press BV. 2025. p. 176-193. (Studies on the Semantic Web). doi: 10.3233/SSW250019

Bibtex

@inbook{dd61b91aa9d446f78a1a41a959479bd2,
title = "Automating SPARQL Query Translations between DBpedia and Wikidata",
abstract = "Purpose: This paper investigates whether state-of-the-art Large Language Models (LLMs) can automatically translate SPARQL between popular Knowledge Graph (KG) schemas. We focus on translations between the DBpedia and Wikidata KGs, and later on the DBLP and OpenAlex KGs. This study addresses a notable gap in KG interoperability research by evaluating LLM performance on SPARQL-to-SPARQL translation. Methodology: Two benchmarks are assembled: the first aligns 100 DBpedia–Wikidata queries from the QALD-9-Plus dataset; the second contains 100 DBLP queries aligned to OpenAlex, testing generalizability beyond encyclopaedic KGs. Three open LLMs (Llama-3-8B, DeepSeek-R1-Distill-Llama-70B, and Mistral-Large-Instruct-2407) are selected based on their sizes and architectures and tested with zero-shot, few-shot, and two chain-of-thought variants. Outputs were compared with gold-standard answers, and the resulting errors were systematically categorized. Findings: We find that performance varies markedly across models and prompting strategies, and that translations from Wikidata to DBpedia work far better than translations from DBpedia to Wikidata. The largest model, Mistral-Large-Instruct-2407, achieved the highest accuracy, reaching 86% on the Wikidata → DBpedia task using a Chain-of-Thought approach. This performance was replicated in the DBLP → OpenAlex generalization task, which achieved similar results with a few-shot setup, underscoring the critical role of in-context examples. Value: This study demonstrates a viable and scalable pathway toward KG interoperability by using LLMs with structured prompting and explicit schema-mapping tables to translate queries across heterogeneous KGs. The method{\textquoteright}s strong performance on both general-purpose KGs and a specialized scholarly domain suggests it is a promising approach to reducing the manual effort required for cross-KG data integration and analysis.",
keywords = "cs.AI, cs.CL, Informatics",
author = "Bartels, {Malte Christian} and Debayan Banerjee and Ricardo Usbeck",
note = "18 pages, 2 figures. Paper accepted at the SEMANTiCS 2025 conference, taking place in September 2025",
year = "2025",
month = jul,
day = "14",
doi = "10.3233/SSW250019",
language = "English",
series = "Studies on the Semantic Web",
publisher = "IOS Press BV",
pages = "176--193",
editor = "Blerina Spahiu and Sahar Vahdati and Angelo Salatino and Tassilo Pellegrini and Giray Havur",
booktitle = "Linking Meaning: Semantic Technologies Shaping the Future of AI",
address = "Netherlands",

}
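
The abstract above describes prompting LLMs with in-context examples and explicit schema-mapping tables to translate SPARQL queries across KG schemas. As a minimal illustrative sketch (not the authors' code, and not drawn from the paper's benchmark), the Python snippet below shows how such a few-shot prompt with a small mapping table might be assembled; the DBpedia/Wikidata mappings and queries are standard examples chosen for clarity, and the actual LLM call is omitted.

# Minimal sketch (assumption, not the authors' implementation) of the prompting
# setup described in the abstract: an LLM is asked to translate a DBpedia SPARQL
# query into an equivalent Wikidata query, given an explicit schema-mapping
# table and one in-context (few-shot) example.

# Illustrative mapping entries; a real mapping table would be much larger.
SCHEMA_MAPPING = {
    "dbo:capital": "wdt:P36",      # capital
    "dbo:birthPlace": "wdt:P19",   # place of birth
    "dbr:Germany": "wd:Q183",      # Germany
}

# One few-shot example pair: the same question against both schemas.
FEW_SHOT_EXAMPLE = (
    "DBpedia:  SELECT ?c WHERE { dbr:Germany dbo:capital ?c . }\n"
    "Wikidata: SELECT ?c WHERE { wd:Q183 wdt:P36 ?c . }"
)

def build_prompt(dbpedia_query: str) -> str:
    """Assemble a few-shot translation prompt with a schema-mapping table."""
    mapping_table = "\n".join(f"{k} -> {v}" for k, v in SCHEMA_MAPPING.items())
    return (
        "Translate the DBpedia SPARQL query into an equivalent Wikidata query.\n\n"
        f"Schema mappings:\n{mapping_table}\n\n"
        f"Example:\n{FEW_SHOT_EXAMPLE}\n\n"
        f"DBpedia query:\n{dbpedia_query}\n\nWikidata query:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to an LLM such as
    # Mistral-Large-Instruct-2407; the model call itself is omitted here.
    print(build_prompt("SELECT ?p WHERE { ?p dbo:birthPlace dbr:Germany . }"))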

RIS

TY - CHAP

T1 - Automating SPARQL Query Translations between DBpedia and Wikidata

AU - Bartels, Malte Christian

AU - Banerjee, Debayan

AU - Usbeck, Ricardo

N1 - 18 pages, 2 figures. Paper accepted at the SEMANTiCS 2025 conference, taking place in September 2025

PY - 2025/7/14

Y1 - 2025/7/14

N2 - Purpose: This paper investigates whether state-of-the-art Large Language Models (LLMs) can automatically translate SPARQL between popular Knowledge Graph (KG) schemas. We focus on translations between the DBpedia and Wikidata KGs, and later on the DBLP and OpenAlex KGs. This study addresses a notable gap in KG interoperability research by evaluating LLM performance on SPARQL-to-SPARQL translation. Methodology: Two benchmarks are assembled: the first aligns 100 DBpedia–Wikidata queries from the QALD-9-Plus dataset; the second contains 100 DBLP queries aligned to OpenAlex, testing generalizability beyond encyclopaedic KGs. Three open LLMs (Llama-3-8B, DeepSeek-R1-Distill-Llama-70B, and Mistral-Large-Instruct-2407) are selected based on their sizes and architectures and tested with zero-shot, few-shot, and two chain-of-thought variants. Outputs were compared with gold-standard answers, and the resulting errors were systematically categorized. Findings: We find that performance varies markedly across models and prompting strategies, and that translations from Wikidata to DBpedia work far better than translations from DBpedia to Wikidata. The largest model, Mistral-Large-Instruct-2407, achieved the highest accuracy, reaching 86% on the Wikidata → DBpedia task using a Chain-of-Thought approach. This performance was replicated in the DBLP → OpenAlex generalization task, which achieved similar results with a few-shot setup, underscoring the critical role of in-context examples. Value: This study demonstrates a viable and scalable pathway toward KG interoperability by using LLMs with structured prompting and explicit schema-mapping tables to translate queries across heterogeneous KGs. The method’s strong performance on both general-purpose KGs and a specialized scholarly domain suggests it is a promising approach to reducing the manual effort required for cross-KG data integration and analysis.

AB - Purpose: This paper investigates whether state-of-the-art Large Language Models (LLMs) can automatically translate SPARQL between popular Knowledge Graph (KG) schemas. We focus on translations between the DBpedia and Wikidata KGs, and later on the DBLP and OpenAlex KGs. This study addresses a notable gap in KG interoperability research by evaluating LLM performance on SPARQL-to-SPARQL translation. Methodology: Two benchmarks are assembled: the first aligns 100 DBpedia–Wikidata queries from the QALD-9-Plus dataset; the second contains 100 DBLP queries aligned to OpenAlex, testing generalizability beyond encyclopaedic KGs. Three open LLMs (Llama-3-8B, DeepSeek-R1-Distill-Llama-70B, and Mistral-Large-Instruct-2407) are selected based on their sizes and architectures and tested with zero-shot, few-shot, and two chain-of-thought variants. Outputs were compared with gold-standard answers, and the resulting errors were systematically categorized. Findings: We find that performance varies markedly across models and prompting strategies, and that translations from Wikidata to DBpedia work far better than translations from DBpedia to Wikidata. The largest model, Mistral-Large-Instruct-2407, achieved the highest accuracy, reaching 86% on the Wikidata → DBpedia task using a Chain-of-Thought approach. This performance was replicated in the DBLP → OpenAlex generalization task, which achieved similar results with a few-shot setup, underscoring the critical role of in-context examples. Value: This study demonstrates a viable and scalable pathway toward KG interoperability by using LLMs with structured prompting and explicit schema-mapping tables to translate queries across heterogeneous KGs. The method’s strong performance on both general-purpose KGs and a specialized scholarly domain suggests it is a promising approach to reducing the manual effort required for cross-KG data integration and analysis.

KW - cs.AI

KW - cs.CL

KW - Informatics

U2 - 10.3233/SSW250019

DO - 10.3233/SSW250019

M3 - Article in conference proceedings

T3 - Studies on the Semantic Web

SP - 176

EP - 193

BT - Linking Meaning: Semantic Technologies Shaping the Future of AI

A2 - Spahiu, Blerina

A2 - Vahdati, Sahar

A2 - Salatino, Angelo

A2 - Pellegrini, Tassilo

A2 - Havur, Giray

PB - IOS Press BV

ER -
