Joint Item Response Models for Manual and Automatic Scores on Open-Ended Test Items
Research output: Journal contributions › Journal articles › Research › peer-review
Test items with open-ended response formats can increase an instrument’s construct validity. Traditionally, however, their use in educational testing requires human coders to score the responses. Manual scoring not only increases operational costs but also prevents evidence from open-ended items from informing routing decisions in adaptive designs. Using machine learning and natural language processing, automatic scoring provides classifiers that can instantly assign scores to text responses. Although optimized for agreement with manual scores, automatic scoring is not perfectly accurate; it introduces an additional source of error into the response process and thereby misspecifies the measurement model used with the manual scores. We propose two joint models for the manual and automatic scores of automatically scored open-ended items. Our models extend a given Item Response Theory model for the manual scores with a component for the automatic scores that accounts for classification errors. The models were evaluated using data from the Programme for International Student Assessment (PISA 2012) and simulated data, demonstrating their capacity to mitigate the impact of classification errors on ability estimation compared with a baseline that disregards them.
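The abstract describes the joint models only at a conceptual level. As a minimal illustrative sketch of the general idea, and not the specification used in the paper, one could couple a standard two-parameter logistic (2PL) IRT model for the manual score with an item-specific misclassification component for the automatic score. All symbols below (X_i, Y_i, theta, a_i, b_i, pi_{i,xy}) are notation introduced here for illustration only:

```latex
% Hedged sketch of the general idea, not the authors' exact specification.
% For a dichotomous item i: X_i is the manual score, Y_i the automatic score,
% \theta the latent ability, a_i and b_i are 2PL item parameters, and
% \pi_{i,xy} = P(Y_i = y \mid X_i = x) are item-specific misclassification
% probabilities of the automatic classifier.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
  P(X_i = 1 \mid \theta)
    &= \frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}}, \\
  P(X_i = x,\, Y_i = y \mid \theta)
    &= P(X_i = x \mid \theta)\,\pi_{i,xy}, \\
  P(Y_i = y \mid \theta)
    &= \sum_{x \in \{0,1\}} P(X_i = x \mid \theta)\,\pi_{i,xy}.
\end{align}
\end{document}
```

In such a sketch, the automatic score informs the ability estimate only through the misclassification probabilities, which is one way a joint model can account for classification errors when the manual score is not yet available, for example at routing time in an adaptive design.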
| Original language | English |
| --- | --- |
| Journal | Psychometrika |
| ISSN | 0033-3123 |
| DOIs | |
| Publication status | Accepted/In press - 2025 |
Bibliographical note
Publisher Copyright:
© 2025 Cambridge University Press. All rights reserved.
Keywords
- automatic scoring, item response modeling, large-scale assessment

Research areas
- Informatics
- General Psychology
- Applied Mathematics
- Psychology (all)