Brain’s Fairness Logic: Why We Refuse to Sacrifice the One for the Many


Researchers found that the brain’s valuation network actively "prices in" fairness, often overriding the simple mathematical goal of minimizing total harm. Credit: Neuroscience News

Summary: When faced with an ethical dilemma, do we choose the “greater good” or the “fairest” outcome? A new neuroimaging study suggests that fairness often trumps efficiency.

In a series of “icy water” experiments, university students consistently chose to inflict more total pain across a group rather than allow a single individual to suffer disproportionately. By using fMRI scans, researchers discovered that this “Rawlsian” approach—prioritizing the worst-off—isn’t just a philosophical preference; it’s driven by specific valuation and mentalizing networks in the brain that model the subjective suffering of others.

Source: PNAS Nexus

When making ethical decisions, university students appear to prioritize fairness and the fate of the worst-off over either reducing total harm or obeying unconditional moral precepts, according to a study.

Woo-Young Ahn and colleagues designed an experimental dilemma that pits a utilitarian approach—which seeks to minimize total harm—against an approach promoted by philosopher John Rawls, which emphasizes improving the situation of the worst-off person.

Fifty-two paid volunteers from a university in South Korea were asked to allocate harm—here, the discomfort of plunging a hand into ice water—while inside fMRI scanners. In each trial, participants pressed buttons to choose between a single person experiencing a hand in ice water or a group of 3 or 4 people each experiencing the same harm for shorter times.

Crucially, however, the group's summed time was longer than the single person's time, representing more harm overall. In some trials, the screen shown to participants included a default option that was already selected; in those trials, participants could simply press no buttons at all, accepting the default and avoiding personally causing harm.
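The trade-off the task sets up can be sketched in a few lines of Python. The durations below are illustrative placeholders, not values from the study; they only reproduce the structure described above (a group whose summed time exceeds the single person's, but whose worst-off member suffers less):

```python
# Hypothetical trial: one person gets 60 s of ice water, or a group of
# four people gets 22 s each (timings are illustrative, not from the study).
single = [60]                # seconds of harm for the lone individual
group = [22, 22, 22, 22]     # seconds of harm for each group member

def total_harm(option):
    """Utilitarian criterion: total harm summed across everyone."""
    return sum(option)

def worst_off(option):
    """Rawlsian criterion: harm borne by the worst-off person."""
    return max(option)

# A utilitarian picks the option with the smaller total harm...
utilitarian_choice = min([single, group], key=total_harm)
# ...while a Rawlsian picks the option whose worst-off person suffers least.
rawlsian_choice = min([single, group], key=worst_off)

print(total_harm(single), total_harm(group))  # 60 vs 88: the group costs 28 s more overall
print(worst_off(single), worst_off(group))    # 60 vs 22: but no one is singled out
```

With these numbers the two criteria disagree: the utilitarian rule favors the single person (less total harm), while the Rawlsian rule favors the group (no one bears a disproportionate share), which is the pattern most participants showed.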

The authors expected the default to be a popular choice among those who wanted to avoid causing harm directly. Instead, most people actively chose to allocate the harm to the group, causing more harm overall but less unfairness.

Participants chose to give 68 seconds of additional icy-cold discomfort to the group, on average, to save the lone individual from being disproportionately targeted. There was little evidence of a bias toward the default option, suggesting that participants did not feel that personally causing harm was prohibited.

According to the authors, brain imaging suggests that mentalizing—modeling the mental experiences of others—is involved in this moral decision-making, along with valuation networks.

Key Questions Answered:

Q: Why would we choose “more” pain overall? That sounds illogical.

A: It’s “logical” if you value equity over efficiency. To the human brain, seeing one person suffer 100% of the pain feels “more wrong” than seeing four people suffer 30% each, even if the math says the second option creates more total discomfort. We are biologically tuned to prevent the “singling out” of individuals.

Q: Did the participants feel guilty about “pressing the button” to cause harm?

A: Surprisingly, they didn’t hide behind the “default” options. Usually, people avoid making a choice to stay “clean,” but in this study, they actively chose to distribute the harm. This suggests that the desire for fairness is stronger than the fear of being the one who caused the discomfort.

Q: Does this explain why we hate “unfair” systems even if they are efficient?

A: Precisely. This study provides a neural basis for why societies often reject “efficient” policies (like cutting services for a small minority to save a larger majority money) if those policies seem to pick on the “worst-off” person. Our valuation networks literally place a “higher price” on fairness than on total output.


About this ethics and neuroscience research news

Author: Woo-Young Ahn
Source: PNAS Nexus
Contact: Woo-Young Ahn – PNAS Nexus
Image: The image is credited to Neuroscience News

Original Research: Open access.
"Decomposing the neurocomputational mechanisms of deontological moral preferences" by Yoonseo Zoh, Soyeon Kim, Hackjin Kim, M. J. Crockett, and Woo-Young Ahn. PNAS Nexus
DOI: 10.1093/pnasnexus/pgag074


Abstract

Decomposing the neurocomputational mechanisms of deontological moral preferences

Research on the neurocomputational mechanisms of moral judgment has typically focused on contrasting “utilitarian” preferences to impartially maximize aggregate welfare and “deontological” preferences that judge the morality of actions based on rules. However, there has been little work to decompose the cognitive subcomponents of deontological preferences.

Here, we investigated the neurocomputational mechanisms underlying two types of deontological preferences (Rawlsian and Kantian) and their contrast with utilitarian preferences in an incentivized moral dilemma task. Participants repeatedly decided how to allocate harm between a single individual (“the one”) and a group of three to four individuals (“the group”).

The task distinguished preferences for Rawlsian, Kantian, and utilitarian strategies by quantifying trade-offs among active harm, concern for the worst-off individual, and overall utility. Behaviorally, participants favored the Rawlsian strategy, preferring to impose more harm overall rather than disproportionately harm the one individual.

Computational modeling revealed two dissociable dimensions of individual variability in Rawlsian preferences: (i) minimizing the maximum amount of harm delivered to a single person and (ii) subjective threshold of acceptable amount of harm imposed on one person.

The combination of univariate and multivariate functional MRI analyses revealed the engagement of distinct brain regions in these two dimensions of Rawlsian preferences, which respectively mapped onto activity in mentalizing and valuation networks.

Our results reveal the neurocomputational mechanisms guiding trade-offs between the welfare of one versus a larger group and highlight distinct roles for the mentalizing and valuation networks in shaping Rawlsian moral preferences.
