Anticipating Crime and Security Risks Enabled by Social Robots: A Research and Policy Agenda
Abstract
Artificial intelligence is rapidly advancing the capabilities of social robots. Positioned as one key solution to the challenges of ageing societies, these socially interactive, physically mobile robots are likely to be used in many public and private contexts. Emerging technologies such as these are prone to criminal exploitation before regulatory and governance frameworks can adapt. To support anticipatory governance, this study uses expert elicitation to identify and assess plausible future crime and security risks associated with large-scale social robot deployment, alongside potential countermeasures. Over two days, 21 expert stakeholders identified and prioritised 21 distinct crime threats and 17 countermeasures according to anticipated risk severity and implementation potential. The highest-risk threats concerned the exploitation of social robots' social features for fraud and social engineering, their use to spread hate, extremism, or disinformation, and their facilitation of harassment, stalking, or coercive control. The most promising countermeasures included robot registration or identity markers, anticipatory risk assessment practices, and cybersecurity measures. The findings contribute to futures-oriented debates on human–robot relations and inform research and policy agendas aimed at pre-empting harmful social robot–enabled outcomes.