Abstract
This study applies equivalence testing methods to a representative sample of published empirical structural equation modeling studies in the psychological sciences and assesses the extent to which fit conclusions reached via traditional methods hold up under equivalence testing. Fit results for 382 models drawn from 242 articles published in five top-ranked journals in the general field of Developmental Psychology were evaluated. The results indicated that a sizeable number of models designated in the original studies as ‘good’ could not be corroborated when examined through an equivalence testing lens. These questionable models exhibited substantial discrepancies with the data and should not have qualified for further consideration because they were simply not plausible models. Implications of the results and suggestions for best modeling practices within the psychological sciences are discussed.
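
The contrast at issue is between conventional descriptive fit cutoffs and an equivalence test of close fit. As a rough illustration only, the sketch below shows an RMSEA-based equivalence test in the noncentral chi-square formulation common in this literature; the function name, the bound epsilon0 = 0.05, and the example numbers are illustrative assumptions, not taken from the article.

```python
# Minimal sketch (not the authors' code) of an RMSEA-based equivalence
# test for SEM fit. All names, the 0.05 bound, and the example values
# are illustrative assumptions.
from scipy.stats import ncx2

def rmsea_equivalence_test(T, df, n, epsilon0=0.05, alpha=0.05):
    """Test H0: RMSEA >= epsilon0 against H1: RMSEA < epsilon0.

    T        : model chi-square statistic from the fitted SEM
    df       : model degrees of freedom
    n        : sample size
    epsilon0 : equivalence bound on RMSEA (0.05 assumed here)
    alpha    : significance level
    """
    # Noncentrality implied by RMSEA exactly equal to epsilon0.
    ncp = (n - 1) * df * epsilon0**2
    # Declare acceptably close fit only when T falls below the alpha
    # quantile of the noncentral chi-square with that noncentrality.
    critical = ncx2.ppf(alpha, df, ncp)
    return T < critical, critical

# Hypothetical example: T = 85.3, df = 40, n = 300
ok, cutoff = rmsea_equivalence_test(85.3, df=40, n=300)
print(f"critical value = {cutoff:.2f}, equivalent fit supported: {ok}")
```

Unlike the traditional approach, which accepts a model whenever an index clears a descriptive cutoff, this test places the burden of proof on the model: fit is declared acceptable only when the data provide positive evidence that misfit is below the chosen bound.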