Pretesting survey questions via cognitive interviewing rests on two assumptions: that the problems the method identifies actually occur in the later survey, and that question revisions based on cognitive interviewing findings yield higher-quality data than the original questions. In this study, we empirically tested these assumptions in a web survey experiment (n = 2,200). Respondents received one of two versions of a question on self-reported financial knowledge: either the original draft version, which was pretested in ten cognitive interviews, or a revised version, which was modified based on the results of those interviews. We examined whether the cognitive interviewing findings predicted problems encountered in the web survey and whether the revised question version was associated with higher content-related and criterion-related validity than the draft version. The results show that cognitive interviewing is effective at identifying real question problems, but not necessarily at fixing survey questions and improving data quality. Overall, our findings underscore the importance of iterative pretesting designs, that is, carrying out multiple rounds of cognitive interviews and also testing the revisions to ensure that they are indeed of higher quality than the draft questions.