Abstract
Numerous online surveys are affected by bot-generated and fraudulent data, and the techniques employed by fraudsters are becoming increasingly sophisticated. However, many researchers are unaware of this growing challenge and may not be taking adequate measures to identify and remove fake data, which contaminates their findings. This paper describes the measures taken for two different surveys, one of which received a significantly higher rate of bot and fraudulent activity than the other despite both having similar distribution pathways. One survey (n = 628 responses) contained 26 red flags automatically identified via Qualtrics™ and 10 manually identified red flags across the responses; the other survey (n = 690 responses) contained 263 automatically identified and 407 manually identified red flags. The paper stresses the need for both a “macro” or “bird’s-eye” view to explore patterns and clusters across the dataset and a “micro” view to examine the content of individual responses, looking for either single strong red flags of fraud or an accumulation of smaller flags of suspicion. The time required to do this illustrates the increased burden on researchers when an online survey receives high volumes of fake respondents. With bots and baddies increasing the chances of Type I and Type II errors in quantitative research, and infiltrating qualitative research with data that do not reflect the authentic voices, perspectives, and lived experiences of participants, it is imperative to raise awareness across the research community and take steps to safeguard the integrity of online survey research.