Psychological Review, Vol. 130(4), Jul 2023, 853–872; doi:10.1037/rev0000421
Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations, and we develop a new measure. KL-delta uses Kullback–Leibler divergence to compare the prior predictive distributions of models to a data prior that formalizes knowledge about the plausibility of different experimental outcomes. Using introductory conceptual examples and applications with existing models and experiments, we show that KL-delta challenges widely held scientific intuitions about model complexity and falsifiability. In a psychophysics application, we show that hierarchical models with more parameters are often more falsifiable than the original nonhierarchical model. This counters the intuition that adding parameters always makes a model more complex. In a decision-making application, we show that a choice model incorporating response determinism can be harder to falsify than its special case of probability matching. This counters the intuition that if one model is a special case of another, the special case must be less complex. In a memory recall application, we show that using informative data priors based on the serial position curve allows KL-delta to distinguish models that would otherwise be indistinguishable. This shows the value, in model evaluation, of extending the notion of possible falsifiability, in which all data are considered equally likely, to the more general notion of plausible falsifiability, in which some data are more likely than others.
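The abstract does not give KL-delta's formal definition, but the core computation it describes, comparing a model's prior predictive distribution to a data prior via the discrete Kullback–Leibler divergence D_KL(π ∥ p) = Σ_y π(y) log[π(y)/p(y)], can be sketched in a few lines. The following is a minimal illustration under stated assumptions: the binomial outcome space, the particular data prior and model priors, and the direction of the divergence are all illustrative choices, not the paper's actual models or definition.

```python
import numpy as np
from scipy import stats

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D_KL(p || q) over a shared outcome space."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    # eps guards against log(0); terms with p = 0 contribute nothing.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy outcome space: k successes in n = 10 Bernoulli trials.
n = 10
outcomes = np.arange(n + 1)

# Hypothetical data prior: outcomes near chance performance are judged
# most plausible a priori (an assumption for illustration).
data_prior = stats.binom.pmf(outcomes, n, 0.5)

# Prior predictive of a vague model, theta ~ Uniform(0, 1): the resulting
# beta-binomial distribution is uniform over the n + 1 outcomes.
vague_model = np.full(n + 1, 1.0 / (n + 1))

# Prior predictive of a constrained model, theta ~ Beta(20, 20),
# approximated by averaging binomial likelihoods over a theta grid.
theta = np.linspace(0.0005, 0.9995, 1000)
w = stats.beta.pdf(theta, 20, 20)
w /= w.sum()
constrained_model = np.array(
    [np.sum(stats.binom.pmf(k, n, theta) * w) for k in outcomes]
)

# Divergence of each model's prior predictive from the data prior.
print("vague model:      ", kl_divergence(data_prior, vague_model))
print("constrained model:", kl_divergence(data_prior, constrained_model))
```

In this toy setup the constrained model concentrates its prior predictive mass near the chance-level outcomes that the data prior favors, so the two divergences differ; how such divergences are mapped onto a falsifiability assessment follows KL-delta's definition in the paper itself, which the abstract does not state.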