Psychological Methods, Vol 28(6), Dec 2023, 1223-1241; doi:10.1037/met0000457
When multiple hypothesis tests are conducted, the familywise Type I error probability correspondingly increases. Various multiple test procedures (MTPs) have been developed, which generally aim to control the familywise Type I error rate at the desired level. However, although multiplicity is frequently discussed in the ANOVA literature and MTPs are correspondingly employed, the issue has received considerably less attention in the regression literature, and it is rare to see MTPs employed empirically. The present aims are three-fold. First, within the eclectic uses of multiple regression, specific situations are delineated wherein adjusting for multiplicity may be most relevant. Second, the performance of ten MTPs amenable to regression is investigated in terms of familywise Type I error control, statistical power, and, where appropriate, false discovery rate, simultaneous confidence interval coverage, and interval width. Although methodologists may anticipate general patterns, the focus is on the magnitude of error inflation and the size of the differences among methods under plausible scenarios. Third, perspectives from across the scientific literature are discussed, which shed light on contextual factors to consider when evaluating whether multiplicity adjustment is advantageous. Results indicated that multiple testing can be problematic, even in nonextreme situations where multiplicity consequences may not be immediately expected. Results pointed toward several effective, balanced MTPs, particularly those that accommodate correlated parameters. Importantly, the goal is not to universally recommend MTPs for all regression models, but rather to identify a set of circumstances wherein multiplicity is most relevant, evaluate MTPs, and integrate diverse perspectives that suggest multiplicity adjustment or alternative solutions. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
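To make the inflation concrete (an illustrative sketch, not a result reported in the abstract; the values of m and alpha below are assumptions for illustration): if m hypothesis tests are each conducted at level \(\alpha\) and the test statistics are independent, the familywise Type I error rate is

\[
  \mathrm{FWER} \;=\; 1 - (1 - \alpha)^{m},
  \qquad \text{e.g., } \alpha = .05,\; m = 10:\;
  1 - (0.95)^{10} \approx .40 .
\]

A classical Bonferroni-style adjustment, one of the simplest MTPs, conducts each test at level \(\alpha^{*} = \alpha / m\) (here, .005 per test), which guarantees \(\mathrm{FWER} \le \alpha\) even without independence; correlation among the regression parameters typically makes such an adjustment conservative, which is one motivation for MTPs that accommodate correlated parameters.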