Critical Artificial Intelligence Literacy for Psychologists
Abstract
Psychologists — from computational modellers to social and personality researchers to cognitive neuroscientists, and from experimentalists to methodologists to theoreticians — can fall prey to exaggerated claims about artificial intelligence (AI). In social psychology, as in psychology generally, we see arguments taken at face value for: a) the displacement of experimental participants by opaque AI products; the outsourcing of b) programming, c) writing, and even d) scientific theorising to such models; and the notion that e) human-technology interactions could be placed on the same footing as human-human (e.g., client-therapist, student-teacher, patient-doctor, friendship, or romantic) relationships. But if our colleagues are, accidentally or otherwise, promoting such ideas in exchange for salary, grants, or citations, how are we as academic psychologists meant to react? Formal models, from statistics and computational methods broadly, have a potential obfuscatory power that is weaponisable, laying serious traps for uncritical adopters — with even the term 'AI' having murky referents. Herein, we concretise the term AI and counter the five related proposals above, from the clearly insidious to those whose ethical neutrality is skin-deep and whose functionality is a mirage. Ultimately, contemporary AI is research misconduct.