AI and Mental Health – A Policy Gap?
Abstract
The clinical potential of AI to assist mental health, in particular to help diagnose and treat mental disorders, is increasingly well researched. However, the wider, long-term implications of AI for mental health are less well researched and raise important policy questions. These include how best to address the potentially harmful effects of social media algorithms, doomscrolling and technostress, and how to ensure that the impact of AI on employment benefits mental health rather than harms it. The role of social media algorithms in encouraging eating disorders, self-harm and suicide has already been well publicised. The role of AI in increasing anxiety, for instance through doomscrolling and technostress, is also now being researched. There is good evidence that job loss is harmful to mental health, and evidence that fear of job loss may be harmful too. The pace and nature of the change generated by AI can also be an issue, as can ethical and data privacy concerns. The need for policy to address risks arising from AI has been recognised internationally, as illustrated by the EU’s 2024 AI Act. However, progress elsewhere has been erratic and, as yet, the implications of AI for mental health do not appear to have featured significantly in policy making. This is an important gap that needs to be filled. In the UK there is currently a policy deficit in this area, only partially addressed by the Online Safety Act. In addition, recent government policy decisions affecting employers have the potential unintended consequence of incentivising some to use AI to replace staff, with knock-on mental health implications. There is also currently no powerful national public health agency in the UK, leaving a public health policy vacuum regarding the implications of AI for the nation’s health. It will be important to identify those most at risk and to develop strategies and policies to prepare for and cope with the changes ahead, so that the positive effects of AI can be realised while risks are minimised. Re-creating a powerful national public health body and requiring a mental health impact assessment for all proposed AI-related policies would be two useful first steps. Helping to create a coalition of willing countries and international bodies to ensure that the pursuit of AI is not at the expense of mental health would be another.