Avoiding algorithm errors in textual analysis: A guide to selecting software, and a research agenda toward generative artificial intelligence
The use of textual analysis is expanding in organizational research, yet software packages vary in their compatibility with complex constructs. This study helps researchers select suitable tools by focusing on phrase-based dictionary methods. We empirically evaluate four software packages (LIWC, DICTION, CAT Scanner, and a custom Python tool) using the complex construct of value-based management as a test case. The analysis shows that software from the same methodological family produces highly consistent results, while popular but mismatched tools yield substantial errors such as miscounted phrases. Building on these findings, we develop a structured selection framework that links construct characteristics to software capabilities. The framework enhances construct validity, supports methodological transparency, and is applicable across disciplines. Finally, we position the approach as a bridge to AI-enabled textual analysis, including prompt-based workflows, and reinforce the continued need for theory-grounded construct design.
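The miscounting risk the abstract describes is easiest to see in code. Below is a minimal, illustrative Python sketch of phrase-based dictionary counting, the method family the study evaluates; the dictionary entries, sample text, and function names are hypothetical stand-ins, not the paper's actual value-based management dictionary or its custom tool.

```python
import re

# Hypothetical dictionary excerpt (illustrative only; not the study's
# actual value-based management dictionary).
PHRASES = [
    "shareholder value",
    "value based management",
    "economic value added",
]


def normalize(text: str) -> str:
    """Lowercase and strip punctuation so hyphenated or punctuated
    variants ("value-based management") match the dictionary form."""
    cleaned = re.sub(r"[^a-z0-9]+", " ", text.lower())
    return " ".join(cleaned.split())


def count_phrases(text: str, phrases: list[str]) -> dict[str, int]:
    """Count whole-phrase, word-boundary-anchored matches.

    Matching complete phrases rather than individual words avoids the
    miscounting that word-level tools produce on multi-word constructs,
    e.g. crediting every stray "value" to the construct."""
    clean = normalize(text)
    return {
        phrase: len(re.findall(r"\b" + re.escape(phrase) + r"\b", clean))
        for phrase in phrases
    }


if __name__ == "__main__":
    sample = ("Our value-based management program links pay to "
              "Economic Value Added and long-run shareholder value.")
    print(count_phrases(sample, PHRASES))
    # -> {'shareholder value': 1, 'value based management': 1,
    #     'economic value added': 1}
```

For contrast, a word-level tool that tallies the token "value" alone would register three hits in the same sentence, which is exactly the kind of algorithm error a construct-aware software choice is meant to prevent.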
Original language | English |
---|---|
Article number | 115571 |
Journal | Journal of Business Research |
Volume | 199 |
Number of pages | 9 |
ISSN | 0148-2963 |
Publication status | Published - October 2025 |
Bibliographical note
Publisher Copyright:
© 2025 The Authors
Keywords
- Algorithm error, Generative AI, Large language models, Reliability, Software selection, Textual analysis, Validity, Value-based management

Research areas
- Management studies
- Marketing