AI in Tetris Game Alters Player Perception and Performance

Summary: In a new study, players of a modified two-person Tetris game perceived their partners as less likable when they had fewer turns, irrespective of whether a human or algorithm determined the turn allocation.

The researchers aimed to explore how AI-driven decisions can influence human relationships and interactions. The study found that equal turn allocation didn’t necessarily lead to improved gameplay.

Instead, a more unequal distribution, favoring the stronger player, led to better team performance overall.

Key Facts:

  1. The study revealed that players who received fewer turns in a game of Tetris found their partners less likable, regardless of whether the turn allocation was managed by a human or an algorithm.
  2. The researchers coined the term “machine allocation behavior” to describe unique behavior arising from an AI system determining resource allocation.
  3. Contrary to expectations, equal turn allocation didn’t result in better gameplay performance. In fact, the team performed better when the stronger player received a larger share of the turns.

Source: Cornell University

A Cornell University-led experiment in which two people play a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.

Most studies on algorithmic fairness focus on the algorithm or the decision itself, but these researchers sought to explore the relationships among the people affected by those decisions.

“We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people,” said Malte Jung, associate professor of information science, whose group conducted the study.

“We want to understand how that influences the way people perceive one another and behave towards each other. We see more and more evidence that machines mess with the way we interact with each other.”

In an earlier study, the researchers had a robot choose which person to give a block to, then studied how each individual reacted to the machine’s allocation decisions.

“We noticed that every time the robot seemed to prefer one person, the other one got upset,” said Jung. “We wanted to study this further, because we thought that, as machines making decisions becomes more a part of the world – whether it be a robot or an algorithm – how does that make a person feel?”

Using open-source software, Houston Claure – the study’s first author and postdoctoral researcher at Yale University – developed a two-player version of Tetris, in which players manipulate falling geometric blocks in order to stack them without leaving gaps before the blocks pile to the top of the screen. Claure’s version, Co-Tetris, allows two people (one at a time) to work together to complete each round.

An “allocator” – presented to the players as either a human or an AI – determines which player takes each turn. Jung and Claure devised their experiment so that players would have either 90% of the turns (the “more” condition), 10% (“less”) or 50% (“equal”).
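The study does not spell out how the allocator schedules turns in practice; as a rough sketch only, a rule that realizes the three conditions over a round could look like the following Python snippet (all names here are hypothetical and not taken from Co-Tetris):

```python
import random

# Target share of turns for one player under each experimental condition,
# mirroring the study's "more" (90%), "less" (10%) and "equal" (50%) splits.
# Names and structure here are illustrative, not taken from Co-Tetris.
TURN_SHARE = {"more": 0.9, "less": 0.1, "equal": 0.5}

def build_turn_schedule(condition: str, total_turns: int, seed: int = 0) -> list[str]:
    """Return a shuffled list of players ("A" or "B") whose composition
    matches the target share for the given condition as closely as possible."""
    a_turns = round(TURN_SHARE[condition] * total_turns)
    schedule = ["A"] * a_turns + ["B"] * (total_turns - a_turns)
    random.Random(seed).shuffle(schedule)  # fixed seed keeps the order reproducible
    return schedule

# Example: a 20-turn round in the "more" condition gives player A 18 of the turns.
print(build_turn_schedule("more", total_turns=20))
```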

The researchers found, predictably, that those who received fewer turns were acutely aware that their partner got significantly more. But they were surprised to find that feelings about it were largely the same regardless of whether a human or an AI was doing the allocating.

The effect of these decisions is what the researchers have termed “machine allocation behavior” – analogous to the established phenomenon of “resource allocation behavior,” the observable behavior people exhibit in response to allocation decisions.

Jung said machine allocation behavior is “the concept that there is this unique behavior that results from a machine making a decision about how something gets allocated.”

The researchers also found that fairness didn’t automatically lead to better game play and performance. In fact, equal allocation of turns led, on average, to a worse score than unequal allocation.

“If a strong player receives most of the blocks,” Claure said, “the team is going to do better. And if one person gets 90%, eventually they’ll get better at it than if two average players split the blocks.”

About this AI research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“The social consequences of Machine Allocation Behavior: Fairness, interpersonal perceptions and performance” by Malte Jung et al. Computers in Human Behavior.


Abstract

The social consequences of Machine Allocation Behavior: Fairness, interpersonal perceptions and performance

Machines increasingly decide how resources or tasks are allocated among people, resulting in what we call Machine Allocation Behavior. People respond strongly to how other people or machines allocate resources.

However, the implications for human relationships of algorithmic allocations of, for example, tasks among crowd workers, annual bonuses among employees, or a robot’s gaze among members of a group entering a store remain unclear.

We leverage a novel research paradigm to study the impact of machine allocation behavior on fairness perceptions, interpersonal perceptions, and individual performance.

In a 2 × 3 between-subject design that manipulates how the allocation agent is presented (human vs. artificial intelligence [AI] system) and the allocation type (receiving less vs. equal vs. more resources), we find that group members who receive more resources perceive their counterpart as less dominant when the allocation originates from an AI as opposed to a human.

Our findings have implications for our understanding of the impact of machine allocation behavior on interpersonal dynamics and for the way in which we understand human responses towards this type of machine behavior.
