Avoiding algorithm errors in textual analysis: A guide to selecting software, and a research agenda toward generative artificial intelligence

Publication: Contributions to journals › Journal articles › Research › peer-reviewed

Authors

The use of textual analysis is expanding in organizational research, yet software packages vary in their compatibility with complex constructs. This study helps researchers select suitable tools by focusing on phrase-based dictionary methods. We empirically evaluate four software packages—LIWC, DICTION, CAT Scanner, and a custom Python tool—using the complex construct of value-based management as a test case. The analysis shows that software from the same methodological family produces highly consistent results, while popular but mismatched tools yield significant errors such as miscounted phrases. Based on these findings, we develop a structured selection guideline that links construct features with software capabilities. The framework enhances construct validity, supports methodological transparency, and is applicable across disciplines. Finally, we position the approach as a bridge to AI-enabled textual analysis, including prompt-based workflows, reinforcing the continued need for theory-grounded construct design.
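As an illustrative sketch of what phrase-based dictionary counting involves (this is not the paper's custom Python tool; the dictionary entries and sample text below are hypothetical), a minimal counter that matches whole multi-word phrases rather than individual tokens could look like this:

```python
import re
from collections import Counter

# Hypothetical multi-word dictionary entries for a "value-based management" category.
PHRASES = [
    "shareholder value",
    "economic value added",
    "value driver",
    "cost of capital",
]

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so phrase matching is consistent."""
    return re.sub(r"\s+", " ", text.lower())

def count_phrases(text: str, phrases: list[str]) -> Counter:
    """Count whole-phrase occurrences with word-boundary anchoring.

    Word-level tools tokenize first and can split multi-word entries,
    which is the kind of phrase miscount the abstract refers to.
    """
    clean = normalize(text)
    counts = Counter()
    for phrase in phrases:
        pattern = r"\b" + re.escape(phrase.lower()) + r"\b"
        counts[phrase] = len(re.findall(pattern, clean))
    return counts

if __name__ == "__main__":
    sample = (
        "The report links shareholder value to each value driver "
        "and benchmarks economic value added against the cost of capital."
    )
    for phrase, n in count_phrases(sample, PHRASES).items():
        print(f"{phrase}: {n}")
```

In this hypothetical sample, a purely word-level tool would count "value" three times but never register "economic value added" as a single dictionary entry, illustrating why construct features and software capabilities need to be matched.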

Original language: English
Article number: 115571
Journal: Journal of Business Research
Volume: 199
Number of pages: 9
ISSN: 0148-2963
DOIs
Publication status: Published - 10.2025

Bibliographic note

Publisher Copyright:
© 2025 The Authors
