Leveraging LLMs in Scholarly Knowledge Graph Question Answering
Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review
Standard
Joint Proceedings of Scholarly QALD 2023 and SemREC 2023 co-located with 22nd International Semantic Web Conference ISWC 2023, Athens, Greece, November 6-10, 2023. ed. / Debayan Banerjee; Ricardo Usbeck; Nandana Mihindukulasooriya; Gunjan Singh; Raghava Mutharaju; Pavan Kapanipathi. Vol. 3592 CEUR-WS.org, 2023. (CEUR Workshop Proceedings; Vol. 3592).
RIS
TY - CHAP
T1 - Leveraging LLMs in Scholarly Knowledge Graph Question Answering
AU - Taffa, Tilahun Abedissa
AU - Usbeck, Ricardo
N1 - Conference code: 1
PY - 2023
Y1 - 2023
N2 - This paper presents a scholarly Knowledge Graph Question Answering (KGQA) system that answers bibliographic natural language questions by leveraging a large language model (LLM) in a few-shot manner. The system first identifies the top-n training questions most similar to a given test question via a BERT-based sentence encoder and retrieves their corresponding SPARQL queries. It then builds a prompt from the top-n similar question-SPARQL pairs as examples together with the test question, passes the prompt to the LLM, and generates a SPARQL query. Finally, it runs the generated SPARQL query against the underlying KG - the ORKG (Open Research Knowledge Graph) endpoint - and returns an answer. Our system achieves an F1 score of 99.0% on SciQA, one of the Scholarly-QALD-23 challenge benchmarks.
AB - This paper presents a scholarly Knowledge Graph Question Answering (KGQA) system that answers bibliographic natural language questions by leveraging a large language model (LLM) in a few-shot manner. The system first identifies the top-n training questions most similar to a given test question via a BERT-based sentence encoder and retrieves their corresponding SPARQL queries. It then builds a prompt from the top-n similar question-SPARQL pairs as examples together with the test question, passes the prompt to the LLM, and generates a SPARQL query. Finally, it runs the generated SPARQL query against the underlying KG - the ORKG (Open Research Knowledge Graph) endpoint - and returns an answer. Our system achieves an F1 score of 99.0% on SciQA, one of the Scholarly-QALD-23 challenge benchmarks.
KW - Informatics
KW - Knowledge Graph Question Answering (KGQA)
KW - Open Research Knowledge Graph
KW - Large Language Model
KW - Scholarly KGQA
KW - Scholarly-QALD
KW - ORKG
KW - SciQA
UR - http://www.scopus.com/inward/record.url?scp=85180546080&partnerID=8YFLogxK
U2 - 10.48550/ARXIV.2311.09841
DO - 10.48550/ARXIV.2311.09841
M3 - Article in conference proceedings
VL - 3592
T3 - CEUR Workshop Proceedings
BT - Joint Proceedings of Scholarly QALD 2023 and SemREC 2023 co-located with 22nd International Semantic Web Conference ISWC 2023, Athens, Greece, November 6-10, 2023
A2 - Banerjee, Debayan
A2 - Usbeck, Ricardo
A2 - Mihindukulasooriya, Nandana
A2 - Singh, Gunjan
A2 - Mutharaju, Raghava
A2 - Kapanipathi, Pavan
PB - CEUR-WS.org
T2 - Scholarly QALD 2023
Y2 - 6 November 2023 through 10 November 2023
ER -
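
The abstract above outlines a retrieval-augmented few-shot pipeline: encode the test question with a BERT-based sentence encoder, retrieve the top-n most similar training question-SPARQL pairs, build a few-shot prompt, let an LLM generate a SPARQL query, and execute it against the ORKG endpoint. The Python sketch below illustrates that flow under assumptions: the encoder model name, the ORKG endpoint URL, the placeholder training pairs, and all helper names (top_n_examples, build_prompt, run_on_orkg, call_your_llm) are illustrative, not the authors' implementation.

```python
# Minimal sketch of the few-shot KGQA pipeline described in the abstract.
# Model choice, endpoint URL, and training data are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util
from SPARQLWrapper import SPARQLWrapper, JSON

# BERT-based sentence encoder for question similarity (assumed model choice).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder (question, SPARQL) pairs standing in for the SciQA training split.
train_pairs = [
    ("Which papers evaluate on the SciQA benchmark?", "SELECT ?paper WHERE { ... }"),
    # ... more (question, SPARQL) pairs from the training data
]

def top_n_examples(test_question: str, n: int = 5):
    """Return the n training pairs whose questions are most similar to the test question."""
    q_emb = encoder.encode(test_question, convert_to_tensor=True)
    t_embs = encoder.encode([q for q, _ in train_pairs], convert_to_tensor=True)
    scores = util.cos_sim(q_emb, t_embs)[0]
    top_idx = scores.argsort(descending=True)[:n]
    return [train_pairs[i] for i in top_idx.tolist()]

def build_prompt(test_question: str, examples) -> str:
    """Assemble a few-shot prompt from similar question-SPARQL pairs plus the test question."""
    shots = "\n\n".join(f"Question: {q}\nSPARQL: {s}" for q, s in examples)
    return f"{shots}\n\nQuestion: {test_question}\nSPARQL:"

def run_on_orkg(sparql_query: str):
    """Execute a SPARQL query against the ORKG endpoint and return JSON bindings."""
    endpoint = SPARQLWrapper("https://orkg.org/triplestore")  # assumed endpoint URL
    endpoint.setQuery(sparql_query)
    endpoint.setReturnFormat(JSON)
    return endpoint.query().convert()

# Usage (call_your_llm is a stand-in for whichever LLM API is used):
#   prompt = build_prompt(question, top_n_examples(question))
#   generated_sparql = call_your_llm(prompt)
#   answer = run_on_orkg(generated_sparql)
```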