Automating SPARQL Query Translations between DBpedia and Wikidata
Research output: Contributions to collected editions/works › Article in conference proceedings › Research
Purpose:
This paper investigates whether state-of-the-art Large Language Models (LLMs) can automatically translate SPARQL queries between popular Knowledge Graph (KG) schemas. We focus on translations between the DBpedia and Wikidata KGs, and subsequently between the DBLP and OpenAlex KGs. The study addresses a notable gap in KG interoperability research by evaluating LLM performance on SPARQL-to-SPARQL translation.
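To make the task concrete, the following is a minimal sketch (our own illustrative example, not drawn from the paper's benchmarks) of how the same natural-language question is expressed against the two schemas:

```python
# Illustrative only: "What is the capital of Germany?" in each vocabulary.
# The question and queries are examples chosen here, not benchmark items.

# DBpedia formulation (dbr:Germany, dbo:capital):
DBPEDIA_QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?capital WHERE { dbr:Germany dbo:capital ?capital }
"""

# Wikidata formulation (Q183 = Germany, P36 = capital):
WIKIDATA_QUERY = """
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?capital WHERE { wd:Q183 wdt:P36 ?capital }
"""
```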
Methodology:
Two benchmarks are assembled: the first aligns 100 DBpedia–Wikidata queries from the QALD-9-Plus dataset; the second contains 100 DBLP queries aligned to OpenAlex, testing generalizability beyond encyclopaedic KGs. Three open LLMs (Llama-3-8B, DeepSeek-R1-Distill-Llama-70B, and Mistral-Large-Instruct-2407) are selected based on their sizes and architectures and tested with zero-shot, few-shot, and two chain-of-thought prompting variants. Outputs are compared with gold-standard answers, and the resulting errors are systematically categorized.
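A minimal sketch of how such prompting strategies could be assembled is shown below; the prompt wording, the example pool, and the function names are assumptions for illustration, not the paper's actual prompts:

```python
# Hypothetical few-shot pool: (DBpedia query, Wikidata query) pairs.
# In practice these would come from a held-out portion of the aligned benchmark.
FEW_SHOT_EXAMPLES = [
    ("SELECT ?c WHERE { dbr:Berlin dbo:country ?c }",
     "SELECT ?c WHERE { wd:Q64 wdt:P17 ?c }"),  # Q64 = Berlin, P17 = country
]

def build_prompt(source_query: str, strategy: str = "few-shot") -> str:
    """Assemble a zero-shot, few-shot, or chain-of-thought translation prompt."""
    parts = ["Translate the following SPARQL query from DBpedia to Wikidata."]
    if strategy in ("few-shot", "cot"):
        for src, tgt in FEW_SHOT_EXAMPLES:
            parts.append(f"Source: {src}\nTarget: {tgt}")
    if strategy == "cot":
        parts.append("Think step by step: map each entity and property to its "
                     "counterpart, then rewrite the triple patterns before answering.")
    parts.append(f"Source: {source_query}\nTarget:")
    return "\n\n".join(parts)
```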
Findings:
We find that performance varies markedly across models and prompting strategies, and that translations from Wikidata to DBpedia work far better than translations from DBpedia to Wikidata. The largest model, Mistral-Large-Instruct-2407, achieved the highest accuracy, reaching 86% on the Wikidata → DBpedia task with a chain-of-thought approach. This performance was replicated in the DBLP → OpenAlex generalization task, which achieved similar results with a few-shot setup, underscoring the critical role of in-context examples.
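These accuracy figures rest on comparing the answers returned by a translated query with the gold-standard answers (as described in the Methodology). A hedged sketch of such an execution-based check follows; the endpoint URL, result handling, and equality criterion are assumptions, not the paper's evaluation harness:

```python
# Sketch of execution-based checking: run the translated and gold queries against
# the target endpoint and compare their result sets.
from SPARQLWrapper import SPARQLWrapper, JSON

def result_set(endpoint: str, query: str) -> set:
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    # Flatten each result row to a tuple of values so rows can be compared as a set
    return {tuple(v["value"] for v in row.values()) for row in bindings}

def is_correct(translated_query: str, gold_query: str,
               endpoint: str = "https://query.wikidata.org/sparql") -> bool:
    return result_set(endpoint, translated_query) == result_set(endpoint, gold_query)
```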
Value:
This study demonstrates a viable and scalable pathway toward KG interoperability by using LLMs with structured prompting and explicit schema-mapping tables to translate queries across heterogeneous KGs. The method's strong performance on both general-purpose KGs and a specialized scholarly domain suggests it is a promising approach to reducing the manual effort required for cross-KG data integration and analysis.
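As a sketch of what an explicit schema-mapping table could look like when serialized into a prompt, the snippet below lists a few well-known DBpedia/Wikidata correspondences; the entries and the rendering function are illustrative assumptions, not the paper's actual mapping tables:

```python
# Illustrative DBpedia -> Wikidata correspondences.
SCHEMA_MAPPING = {
    "dbo:capital":    "wdt:P36",   # capital
    "dbo:country":    "wdt:P17",   # country
    "dbo:birthPlace": "wdt:P19",   # place of birth
    "dbr:Germany":    "wd:Q183",   # Germany
}

def mapping_as_prompt_table(mapping: dict) -> str:
    """Render the mapping as a plain-text table appended to the translation prompt."""
    lines = ["DBpedia term -> Wikidata term"]
    lines += [f"{src} -> {tgt}" for src, tgt in mapping.items()]
    return "\n".join(lines)
```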
Original language | English |
---|---|
Title of host publication | Linking Meaning: Semantic Technologies Shaping the Future of AI: Proceedings of the 21st International Conference on Semantic Systems, 3-5 September 2025, Vienna, Austria |
Editors | Blerina Spahiu, Sahar Vahdati, Angelo Salatino, Tassilo Pellegrini, Giray Havur |
Number of pages | 18 |
Publisher | IOS Press BV |
Publication date | 14.07.2025 |
Pages | 176-193 |
ISBN (electronic) | 978-1-64368-616-5 |
DOIs | |
Publication status | Published - 14.07.2025 |
- cs.AI, cs.CL
- Informatics