Translational Issues in Psychological Science, Vol 10(3), Sep 2024, 276-289; doi:10.1037/tps0000437
Psychological research relies heavily on multi-item self-report scales. Traditionally, each participant is administered the same set of items, under the assumption that these items comprehensively represent the intended construct. An inherent challenge, however, is content sampling error: the disparity between the subset of items actually used and the complete universe of possible items. In this study, we explore whether randomly selecting items per respondent from a validated item pool can counter content sampling error at the aggregate level, comparing random item selection with the traditional survey approach. Using the construct of “pro-environmental behavior” (PEB) as an example, respondents were randomly assigned to either the traditional or the randomized approach. In the randomized approach, one item was randomly selected for each respondent in each of the 10 PEB domains from a pool of 10 possible items per domain; four such scales, separated by a filler task, were administered. In the traditional approach, two fixed scales were created with one item per domain and administered twice, again separated by a filler task. Correlating the outcomes allows us to assess convergent validity. The traditional approach shows higher correlations between scales, but variance decomposition reveals that the randomized condition captures a broader range of content: the lower correlations in the randomized condition are attributable to higher item and residual variance. The potential benefits of using validated item pools with random item sampling are discussed, with a focus on both psychometric improvements and researcher involvement in the measurement process.
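As a minimal sketch of the randomized approach described above (item labels, pool structure, and function names are hypothetical illustrations, not the study's materials), one item per domain can be drawn independently for each respondent and each scale:

```python
import random

# Hypothetical item pool: 10 PEB domains, each with 10 candidate items.
# Labels are placeholders, not the study's actual items.
N_DOMAINS = 10
POOL_SIZE = 10
item_pool = {
    d: [f"domain{d}_item{i}" for i in range(1, POOL_SIZE + 1)]
    for d in range(1, N_DOMAINS + 1)
}

def draw_randomized_scale(pool, rng=random):
    """Randomly select one item per domain, yielding a 10-item scale."""
    return {domain: rng.choice(items) for domain, items in pool.items()}

# Each respondent in the randomized condition receives four such scales
# (separated in the study by a filler task); draws are independent,
# so the items differ across scales and across respondents.
respondent_scales = [draw_randomized_scale(item_pool) for _ in range(4)]
```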
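The role of item and residual variance can be made concrete with a generic person-by-item decomposition (a textbook generalizability-theory sketch under standard random-effects assumptions; the article's exact model is not reproduced here):

```latex
% Observed score for person p on item i, with variance components:
X_{pi} = \mu + \pi_p + \beta_i + \varepsilon_{pi},
\qquad
\sigma^2_X = \sigma^2_p + \sigma^2_i + \sigma^2_\varepsilon

% Expected correlation between two independently drawn k-item scales:
% item variance acts as error because the items change across scales.
r \approx \frac{\sigma^2_p}{\sigma^2_p + (\sigma^2_i + \sigma^2_\varepsilon)/k}
```

Under this sketch, fixed scales escape the $\sigma^2_i/k$ term because the same items recur across administrations, which is consistent with the higher correlations observed in the traditional condition.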