Observer ratings of instructional quality: Do they fulfill what they promise?
Research output: Journal contributions › Journal articles › Research › peer-review
In: Learning and Instruction, Vol. 22, No. 6, 12.2012, p. 387-400.
RIS
TY - JOUR
T1 - Observer ratings of instructional quality
T2 - Do they fulfill what they promise?
AU - Praetorius, Anna Katharina
AU - Lenske, Gerlinde
AU - Helmke, Andreas
PY - 2012/12
Y1 - 2012/12
N2 - Despite considerable interest in the topic of instructional quality in research as well as practice, little is known about the quality of its assessment. Using generalizability analysis as well as content analysis, the present study investigates how reliably and validly instructional quality is measured by observer ratings. Twelve trained raters judged 57 videotaped lesson sequences with regard to aspects of domain-independent instructional quality. Additionally, 3 of these sequences were judged by 390 untrained raters (i.e., student teachers and teachers). Depending on scale level and dimension, 16-44% of the variance in ratings could be attributed to instructional quality, whereas rater bias accounted for 12-40% of the variance. Although the trained raters referred more often to aspects considered essential for instructional quality, this was not reflected in the reliability of their ratings. The results indicate that observer ratings should be treated in a more differentiated manner in the future.
AB - Despite considerable interest in the topic of instructional quality in research as well as practice, little is known about the quality of its assessment. Using generalizability analysis as well as content analysis, the present study investigates how reliably and validly instructional quality is measured by observer ratings. Twelve trained raters judged 57 videotaped lesson sequences with regard to aspects of domain-independent instructional quality. Additionally, 3 of these sequences were judged by 390 untrained raters (i.e., student teachers and teachers). Depending on scale level and dimension, 16-44% of the variance in ratings could be attributed to instructional quality, whereas rater bias accounted for 12-40% of the variance. Although the trained raters referred more often to aspects considered essential for instructional quality, this was not reflected in the reliability of their ratings. The results indicate that observer ratings should be treated in a more differentiated manner in the future.
KW - Generalizability theory
KW - Instructional quality
KW - Observer ratings
KW - Reliability
KW - Validity
KW - Educational science
UR - http://www.scopus.com/inward/record.url?scp=84865576473&partnerID=8YFLogxK
U2 - 10.1016/j.learninstruc.2012.03.002
DO - 10.1016/j.learninstruc.2012.03.002
M3 - Journal articles
AN - SCOPUS:84865576473
VL - 22
SP - 387
EP - 400
JO - Learning and Instruction
JF - Learning and Instruction
SN - 0959-4752
IS - 6
ER -