The Cronbach’s Alpha of Domain-Specific Knowledge Tests Before and After Learning: A Meta-Analysis of Published Studies
Abstract
Knowledge is an important predictor and outcome of learning and development. Its measurement is challenged by the fact that knowledge can be integrated and homogeneous, or fragmented and heterogeneous, a structure that can change through learning. These characteristics of knowledge are at odds with current standards for test development, which demand high internal consistency (e.g., Cronbach's Alphas greater than .70). To provide an initial empirical base for this debate, we conducted a meta-analysis of the Cronbach's Alphas of knowledge tests derived from an available data set. Based on 285 effect sizes from 55 samples, the estimated typical Alpha of domain-specific knowledge tests in publications was α = .85, 90% CI [.82, .87]. Alpha was this high despite a low mean inter-item correlation of .22 because the tests were relatively long on average and because bias in the test construction or publication process led to an underrepresentation of low Alphas. Alpha was higher in tests with more items, in tests with open-answer formats, and in younger samples; it increased after interventions and throughout development, and it was higher for knowledge in languages and mathematics than in science and social sciences/humanities. Generally, Alphas varied strongly between different knowledge tests and populations with different characteristics, reflected in a 90% prediction interval of [.35, .96]. We suggest this range as a guideline for the Alphas researchers can expect for knowledge tests with 20 items and provide corresponding guidelines for shorter and longer tests. We discuss implications for our understanding of domain-specific knowledge and how fixed cut-off values for the internal consistency of knowledge tests bias research findings.
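As an illustrative sketch (not necessarily the authors' exact computation), the standardized Alpha implied by the Spearman–Brown relation shows how a modest mean inter-item correlation can still yield a high Alpha when tests are long. Assuming a test of k = 20 items with the reported mean inter-item correlation of r̄ = .22:

\[
\alpha_{\text{standardized}} = \frac{k\,\bar{r}}{1 + (k - 1)\,\bar{r}} = \frac{20 \times .22}{1 + 19 \times .22} = \frac{4.40}{5.18} \approx .85
\]

This matches the estimated typical Alpha of .85 and the 20-item benchmark referenced in the abstract; shorter tests with the same mean inter-item correlation would yield correspondingly lower Alphas.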