Biomedical Entity Linking with Triple-aware Pre-Training

Publication: Contributions to collected editions › Conference paper › Research

Authors

The large-scale analysis of scientific and technical documents is crucial for extracting structured knowledge from unstructured text. A key challenge in this process is linking biomedical entities, as these entities are sparsely distributed and often underrepresented in the training data of large language models (LLMs). At the same time, these LLMs are not aware of the high-level semantic connections between different biomedical entities, which are useful for identifying similar concepts in different textual contexts. To cope with the aforementioned problems, some recent works have focused on injecting knowledge graph information into LLMs. However, these methods either ignore the relational knowledge of the entities or lead to catastrophic forgetting. We therefore propose a novel framework that pre-trains a powerful generative LLM on a corpus synthesized from a knowledge graph (KG). In our evaluations, we are unable to confirm the benefit of including synonym, description, or relational information. This work-in-progress highlights key challenges and invites further discussion on leveraging semantic information for LLM performance and for scientific document processing.
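The corpus-synthesis idea in the abstract can be illustrated with a minimal sketch: KG triples are verbalized into natural-language sentences, optionally enriched with synonym and description information (whose benefit the evaluations could not confirm). The entity names, templates, and function names below are hypothetical and do not reproduce the paper's actual pipeline.

```python
# Illustrative sketch (not the authors' code): synthesizing pre-training
# text from knowledge-graph triples via simple templates.

def verbalize_triple(head, relation, tail):
    """Turn a (head, relation, tail) triple into a sentence."""
    return f"{head} {relation.replace('_', ' ')} {tail}."

def synthesize_corpus(triples, synonyms=None, descriptions=None):
    """Build pre-training sentences from triples, optionally adding
    synonym and description sentences for the head entity."""
    synonyms = synonyms or {}
    descriptions = descriptions or {}
    corpus = []
    for head, rel, tail in triples:
        corpus.append(verbalize_triple(head, rel, tail))
        for syn in synonyms.get(head, []):
            corpus.append(f"{head} is also known as {syn}.")
        if head in descriptions:
            corpus.append(f"{head}: {descriptions[head]}")
    return corpus

# Hypothetical biomedical triples, for illustration only.
triples = [("aspirin", "treats", "headache"),
           ("ibuprofen", "is_a", "NSAID")]
corpus = synthesize_corpus(
    triples,
    synonyms={"aspirin": ["acetylsalicylic acid"]},
)
```

The resulting sentences would then serve as additional pre-training data for the generative LLM.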
Original language: English
Title: Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data 2025
Editors: Rima Dessi, Joy Jeenu, Danilo Dessi, Francesco Osborne, Hidir Aras
Number of pages: 8
Place of publication: Aachen
Publisher: CEUR-WS
Publication date: 16.06.2025
Publication status: Published - 16.06.2025
Event: Third International Workshop on Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data - SemTech4STLD 2025 - Portoroz, Slovenia
Duration: 01.06.2025 - 01.06.2025
Conference number: 3
