Measuring cognition in an aging population is a public health priority. A move towards survey measurement via the web (as opposed to by phone or in person) is cost-effective but challenging, as it may induce bias in cognitive measures. We examine this possibility using an experiment embedded in the 2018 wave of data collection for the US Health and Retirement Study (HRS).
We use techniques from multiple-group item response theory to assess the effect of survey mode on performance on the HRS cognitive measure. We also address the problem of attrition, both by attempting to predict dropout and by applying approaches designed to minimize the bias that attrition introduces into subsequent inferences.
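As a rough illustration of the kind of mode comparison involved, the sketch below screens each item for mode-related differential functioning using logistic regression, a simplified stand-in for the multiple-group IRT models described here, not the authors' actual code. The item labels, variable names, and simulated data are all hypothetical assumptions for this example.

```python
"""Illustrative sketch: per-item screen for mode-related differential
item functioning (DIF), as a simplified proxy for a multiple-group IRT
analysis. Data, item names, and effect sizes are simulated/assumed."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate long-format item responses for web vs. phone respondents.
n, items = 2000, ["serial7_1", "serial7_2", "count_back", "recall"]
ability = rng.normal(size=n)
mode = rng.choice(["phone", "web"], size=n)            # randomized mode
rows = []
for j, item in enumerate(items):
    difficulty = -0.5 + 0.3 * j
    dif = 0.4 if item.startswith("serial7") else 0.1   # assumed web advantage
    logit = ability - difficulty + dif * (mode == "web")
    correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    rows.append(pd.DataFrame({"item": item, "mode": mode,
                              "ability_proxy": ability, "correct": correct}))
df = pd.concat(rows, ignore_index=True)

# For each item, test whether mode shifts the response probability after
# conditioning on an ability proxy (here the simulated latent trait; in
# practice a rest score or an estimated IRT theta would be used).
for item, d in df.groupby("item"):
    fit = smf.logit("correct ~ ability_proxy + C(mode)", data=d).fit(disp=0)
    coef = fit.params["C(mode)[T.web]"]
    pval = fit.pvalues["C(mode)[T.web]"]
    print(f"{item:10s}  web effect = {coef:+.2f}  (p = {pval:.3f})")
```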
We find evidence of an increase in scores for HRS respondents randomly assigned to the web-based mode of data collection in 2018. Web-based respondents score higher in 2018 than phone-based respondents, show much larger gains relative to their 2016 performance, and show correspondingly larger declines in 2020. The bias in favor of web-based responding is observed across all items but is most pronounced for the serial 7s task and the numeracy items. Given the relative ease of the web-based mode, we suggest using a cutscore of 12, rather than 11, to indicate CIND (cognitively impaired but not demented) status on the web-based version.
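A minimal sketch of how the mode-specific cutscore might be applied downstream is given below. The thresholds (11 for phone, 12 for web) come from the recommendation above; the lower boundary separating CIND from dementia, the inclusive treatment of the cutscore, and the function and dictionary names are assumptions for this illustration.

```python
# Illustrative application of mode-specific CIND cutscores: total scores
# at or below 11 (phone/in-person) or 12 (web) flag possible CIND status.
# The dementia boundary and inclusive cutscore handling are assumed here.
CIND_UPPER = {"phone": 11, "in_person": 11, "web": 12}
DEMENTIA_UPPER = 6  # assumed lower boundary of the CIND range

def classify(total_score: int, mode: str) -> str:
    """Classify a total cognition score under a mode-specific cutscore."""
    if total_score <= DEMENTIA_UPPER:
        return "demented"
    if total_score <= CIND_UPPER[mode]:
        return "CIND"
    return "normal"

print(classify(12, "phone"))  # 'normal' (above the phone cutscore of 11)
print(classify(12, "web"))    # 'CIND'   (at the suggested web cutscore of 12)
```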
The difference across modes may be non-ignorable for many uses of the HRS cognitive measure. In particular, it may require reconsideration of some cutscore-based approaches to identifying impairment.