Who’s to Blame? How We Perceive Responsibility in Human-AI Collaborations

Summary: Even when people view AI-based assistants merely as tools, they still assign these systems partial responsibility for the decisions that result.

The research investigated how participants perceived responsibility when a human driver used an AI-powered assistant versus a non-AI navigation instrument. Participants found AI assistants partly responsible for both successes and failures, even while affirming that these systems are merely tools.

The study sheds new light on how people apply different moral standards for praise and blame when AI is involved in decision-making.

Key Facts

  1. Participants in the study viewed AI assistants as partly responsible for outcomes, unlike non-AI-powered instruments.
  2. The study found that AI assistants were more often credited for positive outcomes than blamed for negative ones.
  3. The mode of interaction, whether verbal or tactile, did not significantly affect people’s perception of the AI assistant’s responsibility.

Source: LMU

Even when humans see AI-based assistants purely as tools, they ascribe partial responsibility for decisions to them, as a new study shows.

Future AI-based systems may navigate autonomous vehicles through traffic with no human input. Research has shown that people judge such futuristic AI systems to be just as responsible as humans when they make autonomous traffic decisions. However, real-life AI assistants are far removed from this kind of autonomy.


They provide human users with supportive information such as navigation and driving aids. So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant?

A team led by Louis Longin from the Chair of Philosophy of Mind has now investigated how people assess responsibility in these cases.

“We all have smart assistants in our pockets,” says Longin.

“Yet a lot of the experimental evidence we have on responsibility gaps focuses on robots or autonomous vehicles where AI is literally in the driver’s seat, deciding for us. Investigating cases where we are still the ones making the final decision, but use AI more like a sophisticated instrument, is essential.”

A philosopher specializing in the interaction between humans and AI, Longin, working in collaboration with his colleague Dr. Bahador Bahrami and Prof. Ophelia Deroy, Chair of Philosophy of Mind, investigated how 940 participants judged a human driver using either a smart AI-powered verbal assistant, a smart AI-powered tactile assistant, or a non-AI navigation instrument. Participants also indicated whether they saw the navigation aid as responsible and to what degree they considered it a tool.

Ambivalent status of smart assistants

The results reveal an ambivalence: Participants strongly asserted that smart assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them. No such division of responsibility occurred for the non-AI powered instrument.

No less surprising for the authors was that the smart assistants were also considered more responsible for positive than for negative outcomes.

“People might apply different moral standards for praise and blame. When a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems,” suggests Dr. Bahrami, who is an expert on collective responsibility.

Role of language is not relevant

In the study, the authors found no difference between smart assistants that used language and those that alerted their users with a tactile vibration of the wheel.

“The two provided the same information in this case, ‘Hey, careful, something ahead,’ but of course, ChatGPT in practice gives much more information,” says Ophelia Deroy, whose research examines our conflicting attitudes toward artificial intelligence as a form of animist beliefs.

In relation to the additional information provided by novel language-based AI systems like ChatGPT, Deroy adds: “The richer the interaction, the easier it is to anthropomorphize.”

“In sum, our findings support the idea that AI assistants are seen as something more than mere recommendation tools but remain nonetheless far from human standards,” says Longin.

The authors believe that the findings of the new study will have a far-reaching impact on the design and social discourse around AI assistants: “Organizations that develop and release smart assistants should think about how social and moral norms are affected,” Longin concludes.

About this AI and psychology research news

Author: Constanze Drewlo
Source: LMU
Contact: Constanze Drewlo – LMU
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Intelligence brings responsibility – Even smart AI assistants are held responsible” by Louis Longin et al. Cell


Abstract

Intelligence brings responsibility – Even smart AI assistants are held responsible

Highlights

  • Basic AI-assistants are seen as sharing responsibility with their human user
  • Active AI-assistants receive more credit than blame
  • But AI-assistants are strongly perceived as tools
  • Results are the same for verbal and tactile assistants

Summary

People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible when they act or merely advise a human agent.

Does this mean that as soon as AI is involved responsibility follows?

To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured their responsibility rating alongside their human users.

Our findings show that responsibility is shared when the warning system is powered by AI but not by a purely mechanical system, even though people consider both systems as mere tools.

Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI receives more credit than blame depending on what the human manages or fails to do.
