Abstract
This study was designed to develop validity cutoffs within the Finger Tapping Test (FTT) using demographically adjusted T-scores, and to compare their classification accuracy to existing cutoffs based on raw scores. Given that FTT performance is known to vary with age, sex, and level of education, failure to correct for these demographic variables poses the risk of elevated false positive rates in examinees who, at the level of raw scores, have inherently lower FTT performance (women, older, and less educated individuals). Data were collected from an archival sample of 100 adult outpatients (mean age = 38.8 years, mean education = 13.7 years, 56% men) consecutively referred for neuropsychological assessment at an academic medical center in the Midwestern USA after sustaining a traumatic brain injury (TBI). Performance validity was psychometrically defined using the Word Memory Test and two validity composites based on five embedded performance validity indicators. Previously published raw score-based validity cutoffs disproportionately sacrificed sensitivity (.13–.33) for specificity (.98–1.00). Worse yet, they were confounded by sex and education. Newly introduced demographically adjusted cutoffs (T ≤ 33 for the dominant hand, T ≤ 37 for both hands) produced high levels of specificity (.89–.98) and acceptable sensitivity (.36–.55) across criterion measures. Equally importantly, they were robust to injury severity and demographic variables. The present findings provide empirical support for a growing trend of demographically adjusted performance validity cutoffs. They provide a practical and epistemologically superior alternative to raw score cutoffs, while also reducing the potential bias against examinees inherently vulnerable to lower raw score level FTT performance.
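For readers unfamiliar with how classification accuracy statistics are derived for a validity cutoff of this kind, the following sketch illustrates the arithmetic behind the sensitivity and specificity values reported above. The T-scores and criterion labels are hypothetical illustrations, not data from the study; the function and variable names are likewise assumptions introduced for this example.

```python
def classification_accuracy(t_scores, invalid_by_criterion, cutoff):
    """Return (sensitivity, specificity) for the rule 'fail if T <= cutoff'.

    t_scores: demographically adjusted FTT T-scores (one per examinee)
    invalid_by_criterion: True if the criterion measure (e.g., the
        Word Memory Test) classified that examinee's performance as invalid
    """
    fails = [t <= cutoff for t in t_scores]
    # Cross-tabulate cutoff failures against the criterion classification
    tp = sum(f and inv for f, inv in zip(fails, invalid_by_criterion))
    fn = sum((not f) and inv for f, inv in zip(fails, invalid_by_criterion))
    tn = sum((not f) and (not inv) for f, inv in zip(fails, invalid_by_criterion))
    fp = sum(f and (not inv) for f, inv in zip(fails, invalid_by_criterion))
    sensitivity = tp / (tp + fn)  # proportion of invalid cases detected
    specificity = tn / (tn + fp)  # proportion of valid cases correctly passed
    return sensitivity, specificity

# Hypothetical examinees: (T-score, invalid per criterion measure)
sample = [(30, True), (35, True), (45, True), (50, False),
          (41, False), (36, False), (55, False), (32, True)]
t, inv = zip(*sample)
sens, spec = classification_accuracy(t, inv, cutoff=37)
# With these illustrative data: sensitivity = 0.75, specificity = 0.75
```

In practice, a cutoff is chosen to keep specificity high (here, .89–.98), since falsely labeling a genuinely impaired examinee's performance as invalid is considered the more costly error.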