Effects and interaction of linguistic and subject-specific requirements in school-related assessment tasks

Project: Research

Project participants

Description

In the school subjects of mathematics and science, assessment tasks are frequently used to diagnose and measure students' subject-specific competences and to provide them with feedback on their learning progress. Because these tasks are usually presented in written form, linguistic competences are a necessary prerequisite for demonstrating subject-specific cognitive competences. Yet even though linguistic competences play a vital role in mental model building and the activation of prior knowledge, it remains largely unclear on empirical grounds to what extent the linguistic requirements of assessment tasks influence their difficulty and how linguistic and subject-specific cognitive item features interact.

The aim of this study is therefore to investigate, in an experimental design, how the subject-specific cognitive requirements (independent variable UV1) and the linguistic requirements (UV2) of assessment items in the domains of mathematics and physics influence empirical task difficulty. In addition to the main effects (UV1, UV2), the interaction effect (UV1*UV2) is examined. A further goal is to investigate whether, and if so which, individual learner variables (e.g. subject-specific and linguistic competences) influence empirical task difficulty.

The treatment design of the experimental study is as follows: students solve assessment tasks that differ systematically in their subject-specific cognitive and linguistic requirements, while the situational context and the subject-specific domain in which the tasks are situated are kept constant. Each assessment task is varied systematically across three subject-specific cognitive and three linguistic requirement levels (low, intermediate, high), resulting in a 3x3 variation matrix for every task. Fifteen tasks are created for each of the two subjects and varied within this 3x3 matrix design. The task variations are presented in rotated order in nine test booklets, and students are randomly assigned to the booklets. Power analyses have identified an optimal sample size of 1350 students; the target group is 9th graders from three German federal states.

Item response theory (IRT) methods are used for multidimensional scaling of the obtained data. This allows the item parameters of all test tasks to be determined and the effects of the subject-specific cognitive and the linguistic task requirements to be analysed. Structural equation models and multilevel analyses, which account for the nested data structure, are used for the moderator analyses; these show which individual student variables influence empirical task difficulty and to what extent domain-specific effects can be identified. The results will serve as the basis for practice-oriented recommendations for designing tasks for objective, reliable and valid diagnostic and achievement tests in subject-specific contexts.
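The project description does not specify the exact rotation scheme for the booklets. Purely as an illustration, the following Python sketch (all names and the cyclic scheme are assumptions, not the project's instrument) shows one way the nine variants of each task (3 cognitive x 3 linguistic requirement levels) could be distributed across nine test booklets so that each booklet covers all nine requirement combinations and each variant of a given task appears in exactly one booklet.

    from itertools import product

    COG_LEVELS = ["low", "intermediate", "high"]   # subject-specific cognitive requirement (UV1)
    LING_LEVELS = ["low", "intermediate", "high"]  # linguistic requirement (UV2)
    VARIANTS = list(product(COG_LEVELS, LING_LEVELS))  # 3x3 = 9 variants per task

    N_TASKS = 15    # tasks per subject
    N_BOOKLETS = 9  # number of test booklets

    def build_booklets():
        """Assign one variant of every task to each booklet via cyclic rotation.

        Booklet b receives variant (t + b) mod 9 of task t, so across the 15
        tasks every booklet contains all nine requirement combinations, and
        each variant of a given task appears in exactly one booklet.
        """
        return {
            b: [(t, VARIANTS[(t + b) % len(VARIANTS)]) for t in range(N_TASKS)]
            for b in range(N_BOOKLETS)
        }

    if __name__ == "__main__":
        booklets = build_booklets()
        for t, (cog, ling) in booklets[0][:3]:
            print(f"Booklet 0, task {t}: cognitive={cog}, linguistic={ling}")

Random assignment of students to booklets, as described above, then balances the requirement conditions across the sample.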
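The description also does not name the specific IRT model. A common way to formalize such a design is a linear logistic test model (LLTM)-style decomposition of item difficulty into contributions of the cognitive level, the linguistic level, and their interaction; the following equations are therefore only an illustrative sketch, not the project's actual specification:

    P(X_{pi} = 1 \mid \theta_p) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)},
    \qquad
    \beta_i = \beta^{\mathrm{cog}}_{c(i)} + \beta^{\mathrm{ling}}_{l(i)} + \beta^{\mathrm{int}}_{c(i),\, l(i)},

where \theta_p is the ability of student p, \beta_i the difficulty of item i, and c(i), l(i) \in \{low, intermediate, high\} denote the item's cognitive and linguistic requirement levels (UV1, UV2); the first two terms capture the main effects and the third term the interaction UV1*UV2. In a multidimensional scaling as mentioned above, \theta_p would be replaced by a vector of subject-specific and linguistic ability dimensions.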
Status: Finished
Period: 01.02.19 – 31.08.23
