Construct relation extraction from scientific papers: Is it automatable yet?

Research output: Contributions to collected editions/works › Published abstract in conference proceedings › Research › peer-review

Standard

Construct relation extraction from scientific papers: Is it automatable yet? / Funk, Burkhardt; Scharfenberger, Jonas.
Proceedings of the 58th Hawaii International Conference on System Sciences 2025. 2025. p. 4675.

Research output: Contributions to collected editions/works › Published abstract in conference proceedings › Research › peer-review

Harvard

Funk, B & Scharfenberger, J 2025, Construct relation extraction from scientific papers: Is it automatable yet? in Proceedings of the 58th Hawaii International Conference on System Sciences 2025. p. 4675, 58th Hawaii International Conference on System Sciences - HICSS 2025, Waikoloa, Hawaii, United States, 07.01.25.

APA

Funk, B., & Scharfenberger, J. (2025). Construct relation extraction from scientific papers: Is it automatable yet? In Proceedings of the 58th Hawaii International Conference on System Sciences 2025 (p. 4675).

Vancouver

Funk B, Scharfenberger J. Construct relation extraction from scientific papers: Is it automatable yet? In Proceedings of the 58th Hawaii International Conference on System Sciences 2025. 2025. p. 4675.

Bibtex

@inbook{00e7824d6ef749868a60373247aec3f4,
title = "Construct relation extraction from scientific papers: Is it automatable yet?",
abstract = "The process of identifying relevant prior researcharticles is crucial for theoretical advancements, butoften requires significant human effort. This studyexamines the feasibility of using large languagemodels (LLMs) to support this task by extractingtested hypotheses, which consist of related constructs,moderators or mediators, path coefficients, andp-values, from empirical studies using structuralequation modeling (SEM). We combine state-of-the-artLLMs with a variety of post-processing measuresto improve the relation extraction quality. Anextensive evaluation yields recall scores of up to79.2% in construct entity extraction, 58.4% inconstruct-mediator/moderator-construct extraction,and 39.3% in extracting the full tested hypotheses.We provide a manually annotated dataset of 72 SEMarticles and 749 construct relations to facilitate futureresearch. Our findings offer critical insights andsuggest promising directions for advancing the field ofautomated construct relation extraction from scholarlydocuments.",
author = "Burkhardt Funk and Jonas Scharfenberger",
year = "2025",
language = "English",
pages = "4675",
booktitle = "Proceedings of the 58th Hawaii International Conference on System Sciences 2025",
note = "58th Hawaii International Conference on System Sciences - HICSS 2025, HICSS 2025 ; Conference date: 07-01-2025 Through 10-01-2025",

}

RIS

TY - CHAP

T1 - Construct relation extraction from scientific papers: Is it automatable yet?

AU - Funk, Burkhardt

AU - Scharfenberger, Jonas

N1 - Conference code: 58

PY - 2025

Y1 - 2025

N2 - The process of identifying relevant prior research articles is crucial for theoretical advancements, but often requires significant human effort. This study examines the feasibility of using large language models (LLMs) to support this task by extracting tested hypotheses, which consist of related constructs, moderators or mediators, path coefficients, and p-values, from empirical studies using structural equation modeling (SEM). We combine state-of-the-art LLMs with a variety of post-processing measures to improve the relation extraction quality. An extensive evaluation yields recall scores of up to 79.2% in construct entity extraction, 58.4% in construct-mediator/moderator-construct extraction, and 39.3% in extracting the full tested hypotheses. We provide a manually annotated dataset of 72 SEM articles and 749 construct relations to facilitate future research. Our findings offer critical insights and suggest promising directions for advancing the field of automated construct relation extraction from scholarly documents.

AB - The process of identifying relevant prior research articles is crucial for theoretical advancements, but often requires significant human effort. This study examines the feasibility of using large language models (LLMs) to support this task by extracting tested hypotheses, which consist of related constructs, moderators or mediators, path coefficients, and p-values, from empirical studies using structural equation modeling (SEM). We combine state-of-the-art LLMs with a variety of post-processing measures to improve the relation extraction quality. An extensive evaluation yields recall scores of up to 79.2% in construct entity extraction, 58.4% in construct-mediator/moderator-construct extraction, and 39.3% in extracting the full tested hypotheses. We provide a manually annotated dataset of 72 SEM articles and 749 construct relations to facilitate future research. Our findings offer critical insights and suggest promising directions for advancing the field of automated construct relation extraction from scholarly documents.

M3 - Published abstract in conference proceedings

SP - 4675

BT - Proceedings of the 58th Hawaii International Conference on System Sciences 2025

T2 - 58th Hawaii International Conference on System Sciences - HICSS 2025

Y2 - 7 January 2025 through 10 January 2025

ER -