Biomedical Entity Linking with Triple-aware Pre-Training

Research output: Contributions to collected editions/works › Article in conference proceedings › Research

Authors

The large-scale analysis of scientific and technical documents is crucial for extracting structured knowledge from unstructured text. A key challenge in this process is linking biomedical entities, as these entities are sparsely distributed and often underrepresented in the training data of large language models (LLMs). At the same time, these LLMs are not aware of the high-level semantic connections between different biomedical entities, which are useful for identifying similar concepts in different textual contexts. To cope with the aforementioned problems, some recent works have focused on injecting knowledge graph (KG) information into LLMs. However, these methods either ignore the relational knowledge of the entities or lead to catastrophic forgetting. We therefore propose a novel framework that pre-trains a powerful generative LLM on a corpus synthesized from a KG. In our evaluations, we are unable to confirm the benefit of including synonym, description, or relational information. This work in progress highlights key challenges and invites further discussion on leveraging semantic information for LLM performance and on scientific document processing.
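As an illustration of the kind of triple-to-text synthesis the abstract describes, the following is a minimal sketch in Python. The triple schema, the verbalization templates, and the helper names (Triple, verbalize_triple, synthesize_corpus) are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: synthesizing a pre-training corpus from KG triples.
# The triple schema and templates are illustrative assumptions,
# not the method from the paper.

from dataclasses import dataclass


@dataclass
class Triple:
    head: str      # entity mention, e.g. "aspirin"
    relation: str  # relation label, e.g. "treats"
    tail: str      # entity mention, e.g. "headache"


# Hypothetical templates mapping relation labels to natural-language
# patterns; a real KG would define its own relation vocabulary.
TEMPLATES = {
    "treats": "{head} is used to treat {tail}.",
    "synonym_of": "{head} is also known as {tail}.",
    "has_description": "{head}: {tail}",
}


def verbalize_triple(t: Triple) -> str:
    """Turn one KG triple into a natural-language sentence."""
    template = TEMPLATES.get(t.relation, "{head} {relation} {tail}.")
    return template.format(head=t.head, relation=t.relation, tail=t.tail)


def synthesize_corpus(triples: list[Triple]) -> list[str]:
    """Map a set of triples to text lines suitable for LM pre-training."""
    return [verbalize_triple(t) for t in triples]


if __name__ == "__main__":
    kg = [
        Triple("aspirin", "treats", "headache"),
        Triple("acetylsalicylic acid", "synonym_of", "aspirin"),
    ]
    for line in synthesize_corpus(kg):
        print(line)
```

The resulting sentences could then be fed to a standard language-modeling objective; whether synonym, description, or relational templates actually help is exactly the question the paper's evaluation leaves open.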
Original language: English
Title of host publication: Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data 2025
Editors: Rima Dessi, Joy Jeenu, Danilo Dessi, Francesco Osborne, Hidir Aras
Number of pages: 8
Place of publication: Aachen
Publisher: CEUR-WS
Publication date: 16.06.2025
DOIs
Publication status: Published - 16.06.2025
Event: Third International Workshop on Semantic Technologies and Deep Learning Models for Scientific, Technical and Legal Data - SemTech4STLD 2025 - Portoroz, Slovenia
Duration: 01.06.2025 – 01.06.2025
Conference number: 3

Research areas

  • Entity Linking, Scientific data, Deep Learning, Semantic information
  • Informatics