Philosophical Significance of Artificial Intuitions


Abstract

This paper explores the ways in which large language models (LLMs) can be used in experimental philosophy. LLMs can serve this field in several distinct ways and for several distinct purposes (Section 2). Among these, we focus on what we call “non-predictive” research, in which LLMs’ “intuitive” responses to thought-experiment cases are not expected to be predictive of, or analogous to, the intuitive responses of human subjects (Section 3). In effect, non-predictive research treats LLMs’ responses as superior to human subjects’ responses, at least for the purposes of experimental philosophy. We argue that this non-predictive use of LLMs is well motivated. First, human subjects’ intuitive responses tend to be vulnerable to irrelevant factors and conceptual misunderstandings, which raises a serious worry about their reliability. Second, LLMs may in the near future be able to overcome these problematic factors to which human subjects are prone. As a case study, we conducted an experiment examining LLMs’ “intuitions” about free will and determinism (Section 4). Our mixed results suggest that while current LLMs outperform human participants in comprehension, especially of deterministic scenarios, they remain highly sensitive to irrelevant factors such as framing and question order. These findings indicate that although LLMs show promising potential, they are not yet reliable sources of stable and unbiased philosophical intuitions. We conclude that substantial technical improvements are required before artificial intuitions can serve as superior alternatives to human intuitions in philosophical research.
