Leveraging LLMs in Scholarly Knowledge Graph Question Answering

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review


This paper presents a scholarly Knowledge Graph Question Answering (KGQA) system that answers bibliographic natural language questions by leveraging a large language model (LLM) in a few-shot manner. The system first identifies the top-n training questions most similar to a given test question using a BERT-based sentence encoder and retrieves their corresponding SPARQL queries. It then builds a prompt from the top-n similar question-SPARQL pairs as examples together with the test question, passes the prompt to the LLM, and generates a SPARQL query. Finally, it runs the generated SPARQL query against the underlying KG, the ORKG (Open Research Knowledge Graph) endpoint, and returns an answer. Our system achieves an F1 score of 99.0% on SciQA, one of the Scholarly-QALD-23 challenge benchmarks.
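
The abstract outlines a retrieve-then-generate pipeline: embed the test question, retrieve the most similar training question-SPARQL pairs, prompt the LLM with them, and execute the generated query against ORKG. The Python sketch below illustrates that flow; the encoder checkpoint, prompt wording, LLM interface, and ORKG endpoint URL are assumptions for illustration, not the paper's exact implementation.

# Illustrative sketch of the few-shot KGQA pipeline summarised in the abstract.
# Encoder checkpoint, prompt format, LLM call, and endpoint URL are assumed.
from sentence_transformers import SentenceTransformer, util
from SPARQLWrapper import SPARQLWrapper, JSON

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed BERT-based sentence encoder

# Placeholder SciQA training data: parallel lists of questions and gold SPARQL queries.
train_questions = ["..."]
train_sparqls = ["..."]
train_embeddings = encoder.encode(train_questions, convert_to_tensor=True)

def build_prompt(test_question: str, n: int = 5) -> str:
    """Select the top-n most similar training questions and assemble a
    few-shot prompt of question-SPARQL pairs followed by the test question."""
    query_embedding = encoder.encode(test_question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, train_embeddings, top_k=n)[0]
    examples = "\n\n".join(
        f"Question: {train_questions[h['corpus_id']]}\nSPARQL: {train_sparqls[h['corpus_id']]}"
        for h in hits
    )
    return f"{examples}\n\nQuestion: {test_question}\nSPARQL:"

def answer(test_question: str, generate_sparql) -> dict:
    """Generate a SPARQL query with the LLM (passed in as `generate_sparql`,
    a callable mapping a prompt string to a query string) and run it on ORKG."""
    sparql_query = generate_sparql(build_prompt(test_question))
    endpoint = SPARQLWrapper("https://orkg.org/triplestore")  # assumed ORKG SPARQL endpoint
    endpoint.setQuery(sparql_query)
    endpoint.setReturnFormat(JSON)
    return endpoint.query().convert()
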
Original language: English
Title of host publication: Joint Proceedings of Scholarly QALD 2023 and SemREC 2023 co-located with 22nd International Semantic Web Conference ISWC 2023, Athens, Greece, November 6-10, 2023
Editors: Debayan Banerjee, Ricardo Usbeck, Nandana Mihindukulasooriya, Gunjan Singh, Raghava Mutharaju, Pavan Kapanipathi
Number of pages: 10
Volume: 3592
Publisher: CEUR-WS.org
Publication date: 2023
Publication status: Published - 2023
Event: Scholarly QALD 2023 - Athens, Greece
Duration: 06.11.2023 - 10.11.2023
Conference number: 1
https://ceur-ws.org/Vol-3592/

Bibliographical note

Funding Information:
This work has been partially supported by grants for the DFG project NFDI4DataScience (DFG project no. 460234259) and by the Federal Ministry for Economics and Climate Action in the project CoyPu (project number 01MK21007G).

Publisher Copyright:
© 2023 CEUR-WS. All rights reserved.

    Research areas

  • Informatics - Knowledge Graph Question Answering (KGQA), Open Research Knowledge Graph, Large Language Model, Scholarly KGQA, Scholarly-QALD, ORKG, SciQA