Decomposing the neurocomputational mechanisms of deontological moral preferences
Abstract
Research on the neurocomputational mechanisms of moral judgment has typically focused on contrasting utilitarian preferences, which impartially maximize aggregate welfare, with deontological preferences, which judge the morality of actions based on rules. However, little work has decomposed the cognitive subcomponents of deontological preferences. Here, we investigated the neurocomputational mechanisms underlying two types of deontological preferences (Rawlsian and Kantian) and their contrast with utilitarian preferences in an incentivized moral dilemma task. Participants repeatedly decided how to allocate harm between a single individual (“the one”) and a group of 3-4 individuals (“the group”). The task distinguished preferences for Rawlsian, Kantian, and utilitarian strategies by quantifying trade-offs among active harm, concern for the worst-off individual, and overall utility. Behaviorally, participants favored the Rawlsian strategy, preferring to impose more harm overall rather than disproportionately harm the one individual. Computational modeling revealed two dissociable dimensions of individual variability in Rawlsian preferences: i) a preference for minimizing the maximum amount of harm delivered to a single person and ii) a subjective threshold for the amount of harm considered acceptable to impose on one person. A combination of univariate and multivariate fMRI analyses revealed the engagement of distinct brain regions in these two dimensions of Rawlsian preferences, which respectively mapped onto activity in the mentalizing and valuation networks. Our results reveal the neurocomputational mechanisms guiding trade-offs between the welfare of a single individual and that of a larger group, and highlight distinct roles for the mentalizing and valuation networks in shaping Rawlsian moral preferences.
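To make the modeled trade-offs concrete, below is a minimal Python sketch of how a subjective-value function along these lines might combine a utilitarian (total-harm) term with the two Rawlsian dimensions described above: a weight on harm to the worst-off individual (dimension i) and a subjective threshold on the harm acceptable for any one person (dimension ii). The functional form, parameter names, and numbers are illustrative assumptions, not the fitted model reported in the paper.

```python
import numpy as np

def subjective_value(harms, w_maximin=0.0, threshold=8.0, w_threshold=2.0):
    """Illustrative value of a harm allocation (one entry per person).

    - total harm:   utilitarian component
    - w_maximin:    weight on harm to the worst-off person (dimension i)
    - threshold:    subjective limit on harm acceptable for one person;
                    w_threshold scales the extra penalty beyond it (dimension ii)
    """
    harms = np.asarray(harms, dtype=float)
    total = harms.sum()                      # aggregate harm
    worst = harms.max()                      # harm to the worst-off individual
    excess = max(0.0, worst - threshold)     # harm beyond the acceptable threshold
    return -(total + w_maximin * worst + w_threshold * excess)

# Hypothetical dilemma: concentrate 6 units of harm on "the one",
# or spread 8 units of harm across the group of four (2 units each).
harm_the_one   = [6.0, 0.0, 0.0, 0.0, 0.0]
harm_the_group = [0.0, 2.0, 2.0, 2.0, 2.0]

# Dimension i: increasing the maximin weight flips the choice
for w in (0.0, 2.0):
    v_one = subjective_value(harm_the_one, w_maximin=w)
    v_group = subjective_value(harm_the_group, w_maximin=w)
    choice = "spread harm over the group" if v_group > v_one else "harm the one"
    print(f"w_maximin={w}: {choice} (V_one={v_one:.1f}, V_group={v_group:.1f})")

# Dimension ii: lowering the acceptability threshold flips the choice
# even without any maximin weight
v_one = subjective_value(harm_the_one, threshold=4.0)
v_group = subjective_value(harm_the_group, threshold=4.0)
choice = "spread harm over the group" if v_group > v_one else "harm the one"
print(f"threshold=4: {choice} (V_one={v_one:.1f}, V_group={v_group:.1f})")
```

With a purely utilitarian weighting the option that harms only "the one" is chosen (less total harm); either raising the maximin weight or lowering the per-person harm threshold flips the choice toward spreading more harm across the group, mirroring the Rawlsian pattern described in the abstract.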