People Empathize with Bullied AI Bots

Summary: People empathize with AI bots excluded from a virtual game, treating them like social beings in need of fairness. Participants favored giving the AI bot a fair chance in play, with older adults showing a stronger inclination to rectify the perceived unfairness.

The researchers suggest that human-like traits in AI bots prompt social responses, raising questions about AI design in social contexts. Future AI design could account for human empathy by creating bots that avoid overly human characteristics, helping users distinguish between AI and real social interactions.

Key Facts:

  • Study participants tended to include AI bots excluded from play, showing empathy.
  • Older participants showed a stronger response to unfair AI treatment.
  • Designers are encouraged to avoid overly human traits in AI to maintain distinctions.

Source: Imperial College London

In an Imperial College London study, humans displayed sympathy towards and protected AI bots that were excluded from playtime.

The researchers say the study, which used a virtual ball game, highlights humans’ tendency to treat AI agents as social beings – an inclination that should be considered when designing AI bots.

The study is published in Human Behavior and Emerging Technologies.


Lead author Jianan Zhou, from Imperial’s Dyson School of Design Engineering, said: “This is a unique insight into how humans interact with AI, with exciting implications for their design and our psychology.”

People are increasingly required to interact with AI virtual agents when accessing services, and many also use them as companions for social interaction. However, these findings suggest that developers should avoid designing agents as overly human-like.

Senior author Dr Nejra van Zalk, also from Imperial’s Dyson School of Design Engineering, said: “A small but increasing body of research shows conflicting findings regarding whether humans treat AI virtual agents as social beings. This raises important questions about how people perceive and interact with these agents.

“Our results show that participants tended to treat AI virtual agents as social beings, because they tried to include them in the ball-tossing game if they felt the AI was being excluded.

“This is common in human-to-human interactions, and our participants showed the same tendency even though they knew they were tossing a ball to a virtual agent. Interestingly, this effect was stronger in the older participants.”

People don’t like ostracism – even toward AI

Feeling empathy and taking corrective action against unfairness is something most humans appear hardwired to do. Prior studies not involving AI found that people tended to compensate ostracised targets by tossing the ball to them more frequently, and that people also tended to dislike the perpetrator of the exclusionary behaviour while showing preference and sympathy towards the target.

To carry out the study, the researchers looked at how 244 human participants responded when they observed an AI virtual agent being excluded from play by another human in a game called ‘Cyberball’, in which players pass a virtual ball to each other on-screen. The participants were aged between 18 and 62.

In some games, the non-participant human threw the ball a fair number of times to the bot, and in others, the non-participant human blatantly excluded the bot by throwing the ball only to the participant.

Participants were observed and subsequently surveyed for their reactions to test whether they favoured throwing the ball to the bot after it was treated unfairly, and why.

They found that, most of the time, participants tried to rectify the unfairness by throwing the ball to the bot more often. Older participants were more likely to perceive unfairness.
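To make the manipulation concrete, the logic of the two conditions and the behavioural measure can be sketched in a few lines of Python. This is purely illustrative, not the study's task code, and all function and variable names here are hypothetical.

```python
# Illustrative sketch of the two Cyberball conditions and the
# compensation measure; not the study's actual task code.
import random

def coplayer_toss(condition: str) -> str:
    """Toss policy of the non-participant human coplayer."""
    if condition == "fair":
        # Fair play: the bot and the participant both receive the ball.
        return random.choice(["bot", "participant"])
    # Exclusion: the ball goes only to the participant, never the bot.
    return "participant"

def compensation(tosses_to_bot: int, total_tosses: int) -> float:
    """Share of the participant's own tosses directed to the bot:
    the behavioural index of rectifying perceived unfairness."""
    return tosses_to_bot / total_tosses

# Example: a participant who favours the excluded bot.
print(compensation(tosses_to_bot=14, total_tosses=20))  # 0.7
```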

Human caution

The researchers say that as AI virtual agents become more popular in collaborative tasks, greater engagement with humans could build familiarity and trigger automatic processing. Users would then likely intuitively include virtual agents as real team members and engage with them socially.

This, they say, can be an advantage for work collaboration but might be concerning where virtual agents are used as friends to replace human relationships, or as advisors on physical or mental health.

Jianan said: “By avoiding designing overly human-like agents, developers could help people distinguish between virtual and real interaction. They could also tailor their design for specific age ranges, for example, by accounting for how our varying human characteristics affect our perception.”

The researchers point out that Cyberball might not represent how humans interact with AI in real-life scenarios, which typically occur through written or spoken language with chatbots or voice assistants. This mismatch might have conflicted with some participants’ expectations and created a sense of strangeness, affecting their responses during the experiment.

Therefore, they are now designing similar experiments using face-to-face conversations with agents in varying contexts such as in the lab or more casual settings. This way, they can test how far their findings extend.

About this AI and psychology research news

Author: Hayley Dunning
Source: Imperial College London
Contact: Hayley Dunning – Imperial College London
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment” by Jianan Zhou et al. Human Behavior and Emerging Technologies


Abstract

Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment

The “social being” perspective has largely influenced the design and research of AI virtual agents. Do humans really treat these agents as social beings?

To test this, we conducted a 2 between (Cyberball condition: exclusion vs. fair play) × 2 within (coplayer type: AGENT vs. HUMAN) online experiment employing the Cyberball paradigm; we investigated how participants (N = 244) responded when they observed an AI virtual agent being ostracised or treated fairly by another human in Cyberball, and we compared our results with those from human–human Cyberball research.
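As a minimal analysis sketch of how such a 2 (between) × 2 (within) mixed design could be tested, consider the following Python snippet. This is not the authors' analysis code; the column names, toy data, and use of the pingouin library are all assumptions for illustration only.

```python
# Hypothetical sketch: mixed ANOVA for a 2 (between: Cyberball
# condition) x 2 (within: coplayer type) design. Not the authors'
# code; all names and values are invented for illustration.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for pid in range(1, 7):                      # six hypothetical participants
    cond = "exclusion" if pid <= 3 else "fair"
    for coplayer, base in [("AGENT", 0.6), ("HUMAN", 0.4)]:
        # toss_prop: share of the participant's tosses to this coplayer
        rows.append({"participant": pid,
                     "condition": cond,
                     "coplayer": coplayer,
                     "toss_prop": base + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# Mixed ANOVA: between-subjects factor = condition,
# within-subjects factor = coplayer type.
aov = pg.mixed_anova(data=df, dv="toss_prop", within="coplayer",
                     subject="participant", between="condition")
print(aov[["Source", "F", "p-unc"]])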

We found that participants mindlessly applied the social norm of inclusion, compensating the ostracised agent by tossing the ball to them more frequently, just as people would to an ostracised human.

This finding suggests that individuals tend to mindlessly treat AI virtual agents as social beings, supporting the media equation theory; however, age (but no other user characteristic) influenced this tendency, with younger participants less likely to mindlessly apply the inclusion norm.

We also found that participants showed increased sympathy towards the ostracised agent, but they did not devalue the human player for their ostracising behaviour; this indicates that participants did not mindfully perceive AI virtual agents as comparable to humans.

Furthermore, we uncovered two other exploratory findings: the association between frequency of agent usage and sympathy, and the carryover effect of positive usage experience.

Our study advances the theoretical understanding of the human side of human–agent interaction. Practically, it provides implications for the design of AI virtual agents, including the consideration of social norms, caution in human-like design, and age-specific targeting.
