Automated scoring in the era of artificial intelligence: An empirical study with Turkish essays
Publication: Journal contributions › Journal articles › Research › peer-reviewed
Automated scoring (AS) has gained significant attention as a tool to enhance the efficiency and reliability of assessment processes, yet its application to under-represented languages such as Turkish remains limited. This study addresses this gap by empirically evaluating AS for Turkish using a zero-shot, rubric-based approach powered by OpenAI's GPT-4o. A dataset of 590 essays written by learners of Turkish as a second language was scored by professional human raters and by an artificial intelligence (AI) model integrated via a custom-built interface. The scoring rubric, grounded in the Common European Framework of Reference for Languages, assessed six dimensions of writing quality. Results revealed strong alignment between human and AI scores, with a Quadratic Weighted Kappa of 0.72, a Pearson correlation of 0.73, and an overlap measure of 83.5%. Analysis of rater effects showed minimal influence on score discrepancies, although factors such as rater experience and gender exhibited modest effects. These findings demonstrate the potential of AI-driven scoring for Turkish and offer insights for broader implementation in under-represented languages, including possible sources of disagreement between human and AI scores. Because the conclusions are drawn from a specific writing task scored by a single human rater, future research should explore more diverse inputs and multiple raters.
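For readers unfamiliar with the agreement statistics reported in the abstract, the following is a minimal sketch (not the authors' code; the score arrays are invented and the paper's exact definition of "overlap" may differ) of how Quadratic Weighted Kappa, Pearson correlation, and an exact-score overlap could be computed for paired human and AI rubric scores in Python:

```python
# Minimal sketch of the agreement metrics named in the abstract,
# computed on hypothetical paired human and AI rubric scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical integer rubric scores for the same essays (illustrative only).
human = np.array([3, 4, 2, 5, 4, 3, 1, 4])
ai = np.array([3, 4, 3, 5, 3, 3, 2, 4])

# Quadratic Weighted Kappa: chance-corrected agreement with squared penalties
# for larger score discrepancies.
qwk = cohen_kappa_score(human, ai, weights="quadratic")

# Pearson correlation between the two score series.
r, _ = pearsonr(human, ai)

# Exact-match overlap: share of essays receiving identical scores.
overlap = np.mean(human == ai)

print(f"QWK = {qwk:.2f}, Pearson r = {r:.2f}, overlap = {overlap:.1%}")
```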
Original language | English |
---|---|
Article number | 103784 |
Journal | System |
Volume | 133 |
Number of pages | 12 |
ISSN | 0346-251X |
DOIs | |
Publication status | Published - 10.2025 |
Bibliographic note
Publisher Copyright:
© 2025 The Authors
Subject areas
- Educational sciences
- Language and Linguistics
- Education
- Linguistics and Language