Moral Behavior Pays Off

Summary: Coupling two games from game theory sheds light on how moral norms and selfless behaviors evolved.

Source: Max Planck Institute

Selfless behavior and cooperation cannot be taken for granted. Mohammad Salahshour of the Max Planck Institute for Mathematics in the Sciences (now at the Max Planck Institute of Animal Behavior) has used a game theory-based approach to show why it can be worthwhile for individuals to set their self-interest aside.

One of the most fundamental questions facing humanity is: why do we behave morally? It is by no means self-evident that under certain circumstances we set our self-interest aside and put ourselves in the service of a group – sometimes to the point of self-sacrifice. Many theories have been developed to get to the bottom of this moral conundrum.

There are two well-known proposed solutions: that individuals help their relatives so that shared genes survive (kin selection), and that the principle of “you scratch my back and I’ll scratch yours” applies. If people help each other, everyone benefits in the end (the principle of reciprocity).

Prisoner’s dilemma combined with a coordination game

Mathematician Mohammad Salahshour of the Max Planck Institute for Mathematics in the Sciences in Leipzig, Germany, has used the tools of game theory – the study of how people make rational decisions in conflict situations – to explain the emergence of moral norms.

For Salahshour, the question at the outset was: why do moral norms exist in the first place? And why do we have different, or even contrasting moral norms?

For example, while some norms, such as “help others”, promote self-sacrificing behaviour, others, such as dress codes, appear to have little to do with curbing selfishness.

To answer these questions, Salahshour coupled two games. The first is the classic prisoner’s dilemma, in which two players must each decide whether to cooperate for a small reward or betray the other for a much larger one. This game is a typical example of a social dilemma, in which the success of the group as a whole requires individuals to behave selflessly.

In this game everybody loses out if too many members of a group behave selfishly, compared to a scenario in which everybody acts altruistically. However, if only a few individuals behave selfishly, they can receive a better outcome than their altruistic team members.
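
To make the dilemma concrete, here is a minimal sketch with illustrative payoff values; the specific numbers are assumptions chosen for exposition, not values from the study.

```python
# Prisoner's dilemma with assumed payoffs (for illustration only).
# Payoff to the row player for (row action, column action); C = cooperate, D = defect.
PD = {
    ("C", "C"): 3,  # mutual cooperation: both do fairly well
    ("C", "D"): 0,  # a lone cooperator is exploited
    ("D", "C"): 5,  # a lone defector receives the largest payoff
    ("D", "D"): 1,  # mutual defection: both do poorly
}

# Whatever the opponent does, defecting earns the individual more...
assert PD[("D", "C")] > PD[("C", "C")] and PD[("D", "D")] > PD[("C", "D")]
# ...yet mutual cooperation beats mutual defection, which is the dilemma.
assert PD[("C", "C")] > PD[("D", "D")]
```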

The second is a game that captures typical decisions within groups, such as a coordination task, the distribution of resources, the choice of a leader, or conflict resolution. Many of these problems can ultimately be categorized as coordination or anti-coordination problems.
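
As an equally stripped-down stand-in for this second game, one can take a two-option coordination game, again with assumed payoffs, in which players gain only if their choices match.

```python
# A minimal coordination game with assumed payoffs: players are rewarded only
# when they choose the same action (e.g. agreeing on a convention or meeting point).
COORD = {
    ("A", "A"): 4,
    ("A", "B"): 0,
    ("B", "A"): 0,
    ("B", "B"): 4,
}

# Matching beats mismatching for both players, so the challenge is to coordinate.
assert COORD[("A", "A")] > COORD[("A", "B")]
```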

Without coupling the two games, it is clear that cooperation in the prisoner’s dilemma does not pay off: self-interested behaviour is the best choice from the individual’s perspective, especially when enough others act selflessly and can be exploited. But individuals who act selfishly are not able to solve coordination problems efficiently and lose many resources by failing to coordinate their activity.

The situation can be completely different when the results of the two games are considered as a whole and moral norms that favour cooperation are at work: cooperation in the prisoner’s dilemma can suddenly pay off, because the gain in the second game more than compensates for the loss in the first.
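
One hypothetical way to see this compensation effect, using the illustrative matrices above: suppose a norm couples the two games so that successful coordination happens only between players who cooperated in the prisoner’s dilemma. This is a deliberate simplification of the paper’s model, in which strategies in the second game are conditioned on the opponent’s behaviour in the first.

```python
def total_payoff(me: str, other: str) -> int:
    """Combined payoff of the coupled game, under the simplifying assumption
    that only two first-game cooperators manage to coordinate."""
    pd = PD[(me, other)]
    coord = COORD[("A", "A")] if me == "C" and other == "C" else 0
    return pd + coord

# Against a cooperator, cooperating (3 + 4 = 7) now beats defecting (5 + 0 = 5):
assert total_payoff("C", "C") > total_payoff("D", "C")
```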

From self-interest to coordination and cooperation

As a result of this process, not only cooperative behaviour emerges, but also a social order. All individuals benefit from it – and for this reason, moral behaviour pays off for them.

“In my evolutionary model, there were no selfless behaviours at the beginning, but more and more moral norms emerged as a result of the coupling of the two games,” Salahshour reports.

“Then I observed a sudden transition to a system where there is a lot of cooperation.”
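
The flavour of that transition can be reproduced with a toy replicator dynamic built on the sketches above. This is not the paper’s evolutionary model, which includes the evolution of the norms themselves; it merely illustrates the resulting bistability: below a critical share of cooperators, cooperation collapses, while above it, cooperation takes over the population.

```python
def run(p0: float, steps: int = 200) -> float:
    """Replicator dynamics on the coupled game; returns the final cooperator share."""
    p = p0
    for _ in range(steps):
        # Average payoffs of cooperators and defectors against the current mix.
        f_c = p * total_payoff("C", "C") + (1 - p) * total_payoff("C", "D")
        f_d = p * total_payoff("D", "C") + (1 - p) * total_payoff("D", "D")
        mean = p * f_c + (1 - p) * f_d
        if mean > 0:
            p = p * f_c / mean  # strategies earning above average gain followers
    return p

for p0 in (0.2, 0.3, 0.4, 0.6):
    print(f"initial cooperators {p0:.0%} -> final {run(p0):.0%}")
# With these assumed payoffs, the tipping point sits at one third cooperators.
```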

In this “moral state”, a set of coordination norms evolves that helps individuals to better coordinate their activity – and it is precisely through this that social norms and moral standards can emerge.

At the same time, these coordination norms favour cooperation: cooperation turns out to be a rewarding behaviour for the individual as well.

Mohammad Salahshour: “A moral system behaves like a Trojan horse: once established out of the individuals’ self-interest to promote order and organization, it also brings self-sacrificing cooperation.”

Through his work, Salahshour hopes to better understand social systems. “This can help improve people’s lives in the future,” he explains.

“But you can also use my game-theoretic approach to explain the emergence of social norms in social media. There, people exchange information and make strategic decisions at the same time – for example, who to support or what cause to support.”

Again, he said, two dynamics are at work at once: the exchange of information and the emergence of cooperative strategies. Their interplay is not yet well understood – but perhaps game theory will soon shed new light on this topical issue as well.

About this morality and neuroscience research news

Author: Carla Avolio
Source: Max Planck Institute
Contact: Carla Avolio – Max Planck Institute

Original Research: Open access.
“Interaction between games give rise to the evolution of moral norms of cooperation” by Mohammad Salahshour. PLOS Computational Biology.


Abstract

Interaction between games give rise to the evolution of moral norms of cooperation

In many biological populations, such as human groups, individuals face a complex strategic setting, where they need to make strategic decisions over a diverse set of issues and their behavior in one strategic context can affect their decisions in another.

This raises the question of how the interaction between different strategic contexts affects individuals’ strategic choices and social norms.

To address this question, I introduce a framework where individuals play two games with different structures and decide upon their strategy in a second game based on their knowledge of their opponent’s strategy in the first game.

I consider both multistage games, where the same opponents play the two games consecutively, and a reputation-based model, where individuals play their two games with different opponents but receive information about their opponent’s strategy.

By considering a case where the first game is a social dilemma, I show that when the second game is a coordination or anti-coordination game, the Nash equilibria of the coupled game can be decomposed into two classes: a defective equilibrium, which is composed of two simple equilibria of the two games, and a cooperative equilibrium, in which coupling between the two games emerges and sustains cooperation in the social dilemma.

For the cooperative equilibrium to exist, the cost of cooperation should be smaller than a value determined by the structure of the second game. Investigation of the evolutionary dynamics shows that, in a mixed population, a cooperative fixed point exists when the second game belongs to the coordination or anti-coordination class.

However, the basin of attraction of the cooperative fixed point is much smaller for the coordination class, and this fixed point disappears in a structured population.

When the second game belongs to the anti-coordination class, the system possesses a spontaneous symmetry-breaking phase transition above which the symmetry between cooperation and defection breaks.

A set of cooperation-supporting moral norms emerges, according to which cooperation stands out as a valuable trait. Notably, the moral system also brings a more efficient allocation of resources in the second game.

This observation suggests a moral system has two different roles: promotion of cooperation, which is against individuals’ self-interest but beneficial for the population, and promotion of organization and order, which is in both the population’s and the individual’s self-interest. Interestingly, the latter acts like a Trojan horse: once established out of individuals’ self-interest, it brings the former with it.

Importantly, the fact that the evolution of moral norms depends only on the cost of cooperation and is independent of the benefit of cooperation implies that moral norms can be harmful and incur a pure collective cost, yet they are just as effective in promoting order and organization.

Finally, the model predicts that recognition noise can have a surprisingly positive effect on the evolution of moral norms and facilitates cooperation in the Snowdrift game in structured populations.
