Online Survey Bot Attack: Dealing with Fraudulent Research Responses

Exposition

In July 2021, we launched an online user study on digital COVID-19 immunity certificates in the Republic of Ireland. The study design included an initial online screening survey with a random prize draw (a 20 euro Amazon voucher) among those who completed it, which would allow us to choose a diverse group of participants for follow-up online interviews (everything is (was) online during the pandemic). Interview participants would be compensated with 20 euro Amazon vouchers.

Our research objective was to learn what residents of the Republic of Ireland think about the introduction of digital COVID-19 certificates. At the time of the study, more than 88% of adults (18+) had been vaccinated against COVID-19, and the Irish COVID Tracker (contact-tracing) app already included a feature for uploading a vaccination certificate. However, our goal was to explore general opinions about such technology, including its fairness, implementation expectations, and the interaction design preferences of potential users.

Rise

Recruitment is an important and often challenging task in academic research. Aware of a common pitfall – the lack of diversity in HCI studies (read this paper – “How WEIRD is CHI?”) – we aimed to reach diverse participants by advertising the screening survey through all available channels (among those approved by the ethics committee). Our goal was to recruit at least 90 participants before closing the screening survey and to invite 30 people for the interviews.

Climax

I frequently checked the progress of the screening survey, and on the morning of the second day, something strange happened – we had received more than 1,000 responses overnight.

Return

Most of the respondents fell into the same demographic group. Since we collected phone numbers and/or email addresses to invite participants for the interviews, I could see that those also looked randomly generated. We realized that the prize draw had attracted bots (also known as automatic survey-takers or fraudsters), and a massive number of fraudulent survey responses now threatened the integrity of our screening survey.

What do you do in this case?

We surely had some genuine responses as well, so we had to find a way to “separate the wheat from the chaff” and reach the desired number of genuine responses without altering the survey that the ethics committee had approved. As a first step, we paused the survey, identified when the mass responses had started, and deleted all responses after that timestamp. The next step was to add a CAPTCHA, which we should have thought of from the beginning. We then restarted the survey, but the sketchy responses kept coming (fewer, though!). Someone really wanted that 20 euro Amazon voucher, but by then it was a matter of principle for me.
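For anyone doing a similar cleanup: below is a minimal sketch of how such a burst can be spotted and cut off, assuming your survey tool exports responses as a CSV with a submission timestamp (the file and column names here are hypothetical, not from our survey platform):

```python
import pandas as pd

# Load the survey export; "submitted_at" is a hypothetical column name --
# adjust it to whatever your survey platform actually exports.
responses = pd.read_csv("screening_responses.csv", parse_dates=["submitted_at"])

# Counting responses per hour makes an overnight burst easy to spot.
per_hour = responses.set_index("submitted_at").resample("1h").size()
print(per_hour.sort_values(ascending=False).head())

# Once the start of the burst is identified, keep only earlier responses.
attack_start = pd.Timestamp("2021-07-14 23:00")  # illustrative timestamp
clean = responses[responses["submitted_at"] < attack_start]
clean.to_csv("screening_responses_clean.csv", index=False)
```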

Denouement

I was sure other researchers had dealt with this problem before, so I did a small literature review to find efficient strategies for preventing fraudulent responses. Here are the lessons learned, which will perhaps be useful to those who want to avoid such situations in their own studies:

  1. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). Including it did not completely stop the fraudulent responses, as bots can sometimes overcome it. There are algorithms that can decipher distorted CAPTCHA text puzzles with over 99% accuracy, which motivated Google to develop more advanced versions of reCAPTCHA. Still, it is helpful and can be added at the beginning of the survey (a server-side verification sketch follows this list).

  2. Attention-check questions or “trap questions,” which can be structured either as directed questions asking for a specific response (e.g., “Choose answer 3” in a multiple-choice question) or as bogus questions with an obvious answer/universal truth (e.g., agree with “I was born on planet Earth” or disagree with “I can teleport across time and space”). These should be included at the study design stage.

  3. If you collect IP addresses, use an IP eligibility strategy. Fraudulent responses usually come from IP addresses located in the same region, which might not be the region of your recruitment. In our case, we recruited residents of the Republic of Ireland, and most of the fraudulent responses came from the USA.

  4. Survey completion duration. Since you know the approximate time it takes to fill out your survey, you can spot fraudulent responses, which usually take much longer or much less time. In our case, the screening survey took 3-4 minutes to complete, while the fraudulent responses took 10-15 minutes.

  5. Survey completion time. Spam responses often start at the same time, or within a very short interval of each other.

  6. Check that answers are eligible according to your exclusion/inclusion criteria. For instance, our screening survey recruited residents of the Republic of Ireland who were over 18 and provided valid contact details. (Points 2-6 are pulled together in the screening sketch after this list.)
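On point 1: if your survey front end is self-hosted, verifying a reCAPTCHA token on the server is a single POST to Google’s documented siteverify endpoint. A minimal sketch (the function name and surrounding code are mine, not from any survey platform):

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_probably_human(recaptcha_token: str, secret_key: str) -> bool:
    """Verify the reCAPTCHA token submitted with the survey form."""
    resp = requests.post(
        VERIFY_URL,
        data={"secret": secret_key, "response": recaptcha_token},
        timeout=10,
    )
    # Google's JSON response carries a boolean "success" field.
    return resp.json().get("success", False)
```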
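And for points 2-6, here is a combined screening pass over the exported responses. All column names, thresholds, and the 5-second burst window are illustrative assumptions rather than values from our study, so adapt them to your own survey:

```python
import pandas as pd

responses = pd.read_csv("screening_responses.csv", parse_dates=["submitted_at"])

# 2. Attention check: the trap question asked respondents to choose answer 3.
passed_trap = responses["trap_question"] == "3"

# 3. IP eligibility: "ip_country" assumes a geolocation lookup was run first.
in_region = responses["ip_country"] == "IE"

# 4. Completion duration: flag anything far from the expected 3-4 minutes.
duration_ok = responses["duration_seconds"].between(120, 600)

# 5. Completion time: flag bursts of submissions seconds apart.
gaps = responses.sort_values("submitted_at")["submitted_at"].diff()
not_in_burst = gaps.isna() | (gaps > pd.Timedelta(seconds=5))

# 6. Inclusion criteria: adult residents of the Republic of Ireland
#    who provided a plausibly formatted email address.
eligible = (
    (responses["age"] >= 18)
    & (responses["country_of_residence"] == "Ireland")
    & responses["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)
)

genuine = responses[passed_trap & in_region & duration_ok & not_in_burst & eligible]
print(f"{len(genuine)} of {len(responses)} responses pass all checks")
```

No single check is decisive on its own, so it is safer to combine them and manually review the borderline cases before excluding anyone.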

I also recommend reading the paper “Got Bots? Practical Recommendations to Protect Online Survey Data from Bot Attacks,” which I found very helpful.
