Remote data collection and cognitive task performance in the age of internet bad actors

Abstract

Bad actors and bots threaten data integrity in remote research designs, compromising estimates of cognitive task performance. Here, we share our experiences and recommendations for identifying bad actors and bots and preventing them from enrolling in remote studies of cognitive task performance. As part of a large-scale study, 3,488 participants were recruited through various platforms, including social media and MTurk, and were compensated for their participation. Participants were categorized by bad actor status as determined by (1) the number and risk level of red flags raised during initial screening and by response patterns, and (2) a researcher review process to identify aberrant, patterned, or inconsistent responding. Participants were automatically labeled as bad actors (n=2,829), labeled as bad actors after researcher review (n=242), or labeled as good actors after researcher review (n=417). Cognitive task performance was assessed with Adaptive Cognitive Evaluation Explorer (ACE-X), a mobile, gamified battery of executive function tasks that includes measures of working memory, attention, and cognitive flexibility. We compared good and bad actors on mean response times, variability in response times, accuracy, and number of trials attempted. On 6 of the 7 ACE-X tasks evaluated, bad actors responded more slowly than good actors on average (β = 34.10 to 150.15, P < .001 to .047). Additional differences in task performance suggested that, overall, bad actors also tended to be more variable in their response times (β = -0.32 to 1.01, P < .001), less accurate (β = -2.15 to 0.00, P < .001 to 1.000), and to respond to fewer trials (β = -1.30 to 0.00, P < .001 to 1.000). This study indicates that bad actors show distinct patterns of performance on cognitive assessments. The findings suggest a need for careful attention to study design and protocols to ensure data integrity when evaluating cognitive performance in remote research settings.
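
To make the two-stage categorization concrete, the sketch below is a minimal, hypothetical Python illustration of a red-flag triage step: flag names, risk weights, and score cutoffs are assumptions for illustration only, since the abstract does not specify the study's actual screening criteria.

```python
from dataclasses import dataclass, field

# Hypothetical red flags with assumed risk weights; the study's actual
# screening criteria and thresholds are not given in the abstract.
RISK_WEIGHTS = {
    "duplicate_ip": 3,
    "implausible_completion_time": 2,
    "inconsistent_demographics": 2,
    "straight_line_responding": 1,
}

AUTO_BAD_THRESHOLD = 4   # assumed cutoff: automatically label as bad actor
REVIEW_THRESHOLD = 1     # assumed cutoff: route to researcher review

@dataclass
class Participant:
    pid: str
    flags: list[str] = field(default_factory=list)

def triage(p: Participant) -> str:
    """Stage 1: score red flags, then auto-label or route to manual review."""
    score = sum(RISK_WEIGHTS.get(flag, 0) for flag in p.flags)
    if score >= AUTO_BAD_THRESHOLD:
        return "bad_actor_auto"
    if score >= REVIEW_THRESHOLD:
        # Stage 2: a researcher inspects response patterns by hand
        return "needs_researcher_review"
    return "good_actor_provisional"

if __name__ == "__main__":
    sample = Participant("p001", ["duplicate_ip", "straight_line_responding"])
    print(triage(sample))  # -> needs_researcher_review
```

In this sketch, low-scoring participants still pass through researcher review before being confirmed as good actors, mirroring the study's three outcome groups (automatic bad actor, bad actor after review, good actor after review).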
