Biomedical Entity Linking with Triple-aware Pre-Training
Publication: Contributions to collected editions › Articles in conference proceedings › Research
Authors
The large-scale analysis of scientific and technical documents is crucial for extracting structured knowledge from unstructured text. A key challenge in this process is linking biomedical entities, as these entities are sparsely distributed and often underrepresented in the training data of large language models (LLMs). At the same time, these LLMs are unaware of the high-level semantic connections between different biomedical entities, which are useful for identifying similar concepts in different textual contexts. To cope with the aforementioned problems, some recent works have focused on injecting knowledge graph (KG) information into LLMs. However, these methods either ignore the relational knowledge of the entities or lead to catastrophic forgetting. We therefore propose a novel framework that pre-trains a powerful generative LLM on a corpus synthesized from a KG. In our evaluations, we are unable to confirm the benefit of including synonym, description, or relational information. This work in progress highlights key challenges and invites further discussion on leveraging semantic information for LLM performance and on scientific document processing.
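The abstract mentions synthesizing a pre-training corpus from a KG. One common way to do this (a generic illustration only, not necessarily the authors' actual pipeline; all entity and relation names below are hypothetical) is to verbalize each (head, relation, tail) triple into a natural-language sentence via relation-specific templates:

```python
# Illustrative sketch: verbalizing KG triples into pre-training sentences.
# This is a generic triple-verbalization approach, not the paper's method;
# the relations and templates here are made-up examples.

def verbalize_triple(head: str, relation: str, tail: str) -> str:
    """Render a (head, relation, tail) triple as a plain sentence."""
    templates = {
        "treats": "{h} is a drug used to treat {t}.",
        "synonym_of": "{h} is also known as {t}.",
        "has_description": "{h}: {t}",
    }
    # Fall back to a generic pattern for relations without a template.
    template = templates.get(relation, "{h} is related to {t} via " + relation + ".")
    return template.format(h=head, t=tail)

# Hypothetical biomedical triples.
triples = [
    ("Metformin", "treats", "type 2 diabetes"),
    ("Acetylsalicylic acid", "synonym_of", "aspirin"),
]

# The resulting sentences would form (part of) the synthetic corpus.
corpus = [verbalize_triple(*t) for t in triples]
for sentence in corpus:
    print(sentence)
```

Template-based verbalization like this keeps relational knowledge (which the abstract notes is often ignored) in a textual form an off-the-shelf generative LLM can consume directly.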
Original language | English |
---|---|
Title | Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data 2025 |
Editors | Rima Dessi, Joy Jeenu, Danilo Dessi, Francesco Osborne, Hidir Aras |
Number of pages | 8 |
Place of publication | Aachen |
Publisher | CEUR-WS |
Publication date | 16.06.2025 |
DOIs | |
Publication status | Published - 16.06.2025 |
Event | Third International Workshop on Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data - SemTech4STLD 2025 - Portoroz, Slovenia. Duration: 01.06.2025 → 01.06.2025. Conference number: 3 |
- Computer Science