Abstract
Following standardized administration practices, the provision of neuropsychological services has predominantly taken place in a face-to-face setting. Interpretation of psychometric findings in this context depends on normative comparison, and deviating from the standardization used to develop a measure’s norms raises doubts about the reliability and validity of its findings. In recent years, remote (i.e., virtual) neuropsychological assessment has garnered increasing attention. While various neuropsychological measures have been investigated in this context, we sought to examine the impact of administration format (in-person or remote) on performance validity tests (PVTs) and embedded validity indicators used in neuropsychological assessment. The current study employed a battery of PVTs and compared test scores obtained via face-to-face paper-and-pencil administration with those obtained via remote computerized administration. No differences were found with respect to mode of administration on the Test of Memory Malingering (TOMM; p = 1.00), 21 Item Test (p = .499), or Reliable Digit Span (RDS-Total; p = .218). Conversely, significant differences were observed between modalities on the Dot Counting Test (DCT; p = .001) and Judgment of Line Orientation Test (JLO; p = .003). The present findings, while preliminary, have important clinical implications for the reliable administration of test measures in a remote setting. Additional research utilizing larger sample sizes and clinical populations will be necessary to generalize these findings further. In circumstances where in-person testing is impeded by distancing protocols, accessibility, or geographic limitations, clinicians can reliably assess the credibility of performance through performance and embedded validity measures administered remotely.