Observer ratings of instructional quality: Do they fulfill what they promise?

Research output: Journal contributions › Journal articles › Research › peer-review

Authors

Despite considerable interest in the topic of instructional quality in research as well as practice, little is known about the quality of its assessment. Using generalizability analysis as well as content analysis, the present study investigates how reliably and validly instructional quality is measured by observer ratings. Twelve trained raters judged 57 videotaped lesson sequences with regard to aspects of domain-independent instructional quality. Additionally, 3 of these sequences were judged by 390 untrained raters (i.e., student teachers and teachers). Depending on scale level and dimension, 16-44% of the variance in ratings could be attributed to instructional quality, whereas rater bias accounted for 12-40% of the variance. Although the trained raters referred more often to aspects considered essential for instructional quality, this was not reflected in the reliability of their ratings. The results indicate that observer ratings should be treated in a more differentiated manner in the future.
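The variance attribution reported above follows the logic of a generalizability (G) study: observed ratings from a crossed lessons × raters design are decomposed into variance components for the object of measurement (lesson quality), the rater facet (rater bias), and residual error. As a rough illustration of that decomposition, the sketch below estimates variance components for a two-way crossed design via expected mean squares. The data are synthetic and purely illustrative; the function name, the number of lessons and raters, and the simulated effect sizes are all assumptions, not values from the study.

```python
# Minimal sketch of a generalizability-style variance decomposition for a
# crossed lessons x raters design (two-way ANOVA without replication).
# All data below are synthetic; nothing here reproduces the study's results.
import numpy as np

def variance_components(scores: np.ndarray) -> dict:
    """Estimate variance components for a lessons x raters score matrix.

    Rows = lessons (the object of measurement), columns = raters.
    Returns expected-mean-square estimates for the lesson component,
    the rater component, and the residual (interaction confounded
    with error), each truncated at zero.
    """
    n_lessons, n_raters = scores.shape
    grand = scores.mean()
    lesson_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ms_lesson = n_raters * np.sum((lesson_means - grand) ** 2) / (n_lessons - 1)
    ms_rater = n_lessons * np.sum((rater_means - grand) ** 2) / (n_raters - 1)
    resid = scores - lesson_means[:, None] - rater_means[None, :] + grand
    ms_resid = np.sum(resid ** 2) / ((n_lessons - 1) * (n_raters - 1))

    return {
        "lesson": max((ms_lesson - ms_resid) / n_raters, 0.0),
        "rater": max((ms_rater - ms_resid) / n_lessons, 0.0),
        "residual": ms_resid,
    }

rng = np.random.default_rng(0)
# Synthetic ratings: true lesson quality + rater severity bias + noise.
true_quality = rng.normal(0.0, 1.0, size=(20, 1))
rater_bias = rng.normal(0.0, 0.7, size=(1, 6))
scores = true_quality + rater_bias + rng.normal(0.0, 0.5, size=(20, 6))

vc = variance_components(scores)
total = sum(vc.values())
for facet, v in vc.items():
    print(f"{facet:>8s}: {v:.3f} ({100 * v / total:.0f}% of total)")
```

Dividing each component by the total yields the kind of percentage breakdown quoted in the abstract (e.g., share of variance due to lessons versus raters); the study's own analysis is more elaborate, with multiple dimensions and scale levels.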

Original language: English
Journal: Learning and Instruction
Volume: 22
Issue number: 6
Pages (from-to): 387-400
Number of pages: 14
ISSN: 0959-4752
DOIs
Publication status: Published - 12.2012
Externally published: Yes

Research areas

  • Generalizability theory, Instructional quality, Observer ratings, Reliability, Validity
  • Educational science