Ethical Decision-Making Guidelines for Mental Health Clinicians in the Artificial Intelligence (AI) Era

Abstract

The meteoric rise of generative AI has created both opportunities and ethical challenges in the mental health disciplines, namely clinical mental health counseling, psychology, psychiatry, and social work. While these disciplines are grounded in well-established ethical principles such as autonomy, beneficence, justice, fidelity, and confidentiality, the exponential growth of AI's ubiquity in society over the past three years has left mental health professionals unsure how to navigate ethical decision-making in the AI era. The author proposes a preliminary ethical framework that synthesizes the codes of ethics of the American Counseling Association, the American Psychological Association, the American Medical Association, and the National Association of Social Workers, organized around five pillars: autonomy and informed consent; beneficence and nonmaleficence; confidentiality, privacy, and transparency; justice, fairness, and inclusiveness; and fidelity, professional integrity, and accountability. These pillars are juxtaposed with AI ethical guidelines developed by international organizations, governments, and technology corporations. The resulting integrated ethical framework provides a practical, cogent structure that mental health professionals can use when navigating this uncharted terrain. Limitations of the framework and implications for future research are addressed.
