Advances in Methods and Practices in Psychological Science, Volume 8, Issue 1, January-March 2025.
Cronbach’s α is the most widely reported metric of the reliability of psychological measures. Decisions about an observed α’s adequacy are often made using rule-of-thumb thresholds, such as α ≥ .70. Such thresholds can put pressure on researchers to make their measures meet these criteria, similar to the pressure to meet the significance threshold with p values. We examined whether α values reported in the psychology literature are inflated at the rule-of-thumb thresholds (αs = .70, .80, .90) because of, for example, overfitting to in-sample data (α-hacking) or publication bias. We extracted reported α values from three very large data sets covering the general psychology literature (> 30,000 α values taken from > 74,000 published articles in American Psychological Association [APA] journals), the industrial and organizational (I/O) psychology literature (> 89,000 α values taken from > 14,000 published articles in I/O journals), and the APA’s PsycTests database, which aims to cover all psychological measures published since 1894 (> 67,000 α values taken from > 60,000 measures). The distributions of these values show robust evidence of excesses at the α = .70 rule-of-thumb threshold that cannot be explained by justifiable measurement practices. We discuss the scope, causes, and consequences of α-hacking and how increased transparency, preregistration of measurement strategy, and standardized protocols could mitigate this problem.
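For reference, Cronbach's α for a k-item scale is the standard internal-consistency coefficient, computed from the item variances and the variance of the total score:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{i}}{\sigma^{2}_{X}}\right)
\]

where \sigma^{2}_{i} is the variance of item i and \sigma^{2}_{X} is the variance of the sum of the k items. Because α can be raised in-sample, for instance by dropping weakly correlated items or otherwise tuning a scale on the data at hand, this flexibility creates room for the overfitting (α-hacking) examined in the article.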
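Detecting an excess of reported values at a threshold amounts to comparing the density of the distribution just below and just above the cutoff. Below is a minimal sketch of one such check, a caliper-style test; it is illustrative only, not necessarily the authors' procedure, and the function name, window width, and simulated data are our own assumptions:

    import numpy as np
    from scipy.stats import binomtest

    def caliper_test(alphas, threshold=0.70, width=0.01):
        """Compare counts of reported alphas just above vs. just below a threshold.

        Under a smooth underlying distribution, two equally narrow windows on
        either side of the cutoff should contain roughly equal counts; a
        surplus at or just above the threshold is consistent with values
        being nudged over the bar.
        """
        alphas = np.asarray(alphas)
        below = int(np.sum((alphas >= threshold - width) & (alphas < threshold)))
        above = int(np.sum((alphas >= threshold) & (alphas < threshold + width)))
        # Two-sided binomial test of the above/below split against p = 0.5.
        result = binomtest(above, above + below, p=0.5)
        return above, below, result.pvalue

    # Toy usage: a smooth right-skewed distribution (roughly like reported
    # alphas) plus a small artificial spike at exactly .70.
    rng = np.random.default_rng(0)
    smooth = rng.beta(8, 2, size=5_000)
    spiked = np.concatenate([smooth, np.full(60, 0.70)])
    print(caliper_test(spiked))  # surplus in the upper window -> small p value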