Advances in Methods and Practices in Psychological Science, Volume 7, Issue 3, July-September 2024.
Recent studies in psychology have documented how analytic flexibility can lead to different results from the same data set. Here, we demonstrate DeclareDesign, a package for the R programming language that uses simulated data to diagnose the ways in which different analytic designs can give different outcomes. To illustrate features of the package, we contrast two analyses of a randomized controlled trial (RCT) of GraphoGame, an intervention to help children learn to read. The initial analysis found no evidence that the intervention was effective, but a subsequent reanalysis concluded that GraphoGame significantly improved children’s reading. With DeclareDesign, we can simulate data in which the truth is known and thus identify which analysis is optimal for estimating the intervention effect using “diagnosands,” including bias, precision, and power. The simulations showed that the original analysis accurately estimated intervention effects, whereas selection of a subset of data in the reanalysis introduced substantial bias, overestimating the effect sizes. This problem was exacerbated by the inclusion of multiple outcome measures in the reanalysis. Much has been written about the dangers of performing reanalyses of data from RCTs that violate the random assignment of participants to conditions; simulated data make this message clear and quantify the extent to which such practices introduce bias. The simulations confirm the original conclusion that the intervention has no benefit over “business as usual.” In this tutorial, we demonstrate several features of DeclareDesign, which can simulate observational and experimental research designs, allowing researchers to make principled decisions about which analysis to prefer.
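To preview the workflow described above, the following minimal sketch (not the article's or the trial's actual code or data) shows how a DeclareDesign declaration pairs a known simulated truth with competing analyses and then computes diagnosands such as bias and power with diagnose_design(). All names (reading_score, pretest), effect sizes, and the post-hoc selection rule are assumptions for illustration only; purely for illustration, the simulated intervention effect is +0.3 SD for children with low pretest scores and -0.3 SD for the rest, so the true average effect is zero.

library(DeclareDesign)

design <-
  # Model: 200 children with a pretest score; the intervention effect is
  # heterogeneous by construction and averages zero (illustrative numbers)
  declare_model(
    N = 200,
    pretest = rnorm(N),
    U = rnorm(N),
    potential_outcomes(
      reading_score ~ 0.3 * ifelse(pretest < 0, 1, -1) * Z + 0.5 * pretest + U
    )
  ) +
  # Inquiry: the true average treatment effect, known by construction
  declare_inquiry(ATE = mean(reading_score_Z_1 - reading_score_Z_0)) +
  # Random assignment to intervention (Z = 1) or business as usual (Z = 0)
  declare_assignment(Z = complete_ra(N)) +
  declare_measurement(reading_score = reveal_outcomes(reading_score ~ Z)) +
  # Analysis 1: regression on the full randomized sample, adjusting for pretest
  declare_estimator(reading_score ~ Z + pretest,
                    inquiry = "ATE", label = "full_sample") +
  # Analysis 2: the same regression run only on a post-hoc subset of children
  # (an illustrative selection rule, not the one used in the real reanalysis)
  declare_estimator(
    handler = label_estimator(function(data) {
      fit <- lm_robust(reading_score ~ Z + pretest,
                       data = data[data$pretest < 0, ])
      td  <- tidy(fit)
      td[td$term == "Z", ]
    }),
    inquiry = "ATE", label = "post_hoc_subset"
  )

# Simulate the design repeatedly and compare the two analyses on diagnosands
# such as bias, SD of the estimates (precision), and power
diagnose_design(design, sims = 500)

Under these assumed parameters, the full-sample analysis is approximately unbiased for the declared ATE, whereas the post-hoc subset analysis targets a different quantity and so shows up in the diagnosis as biased relative to the ATE, illustrating how diagnosands quantify the consequences of analytic choices.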