AI Outperforms Humans in Moral Judgments

Summary: People often view AI-generated answers to ethical questions as superior to those from humans. In the study, participants rated responses from AI and humans without knowing the source, and overwhelmingly favored the AI’s responses in terms of virtuousness, intelligence, and trustworthiness.

The study, a modified moral Turing test inspired by ChatGPT and similar technologies, indicates that AI might convincingly pass such a test by exhibiting complex moral reasoning. The findings highlight the growing influence of AI in decision-making processes and the potential implications for societal trust in technology.

Key Facts:

  1. Superior AI Performance: Participants consistently rated AI-generated responses to ethical questions more favorably than human-written responses.
  2. Modified Turing Test Approach: The study employed a variation of the Turing test where participants were unaware of the AI’s involvement, focusing instead on the quality of the responses.
  3. Implications for AI Trust: The results suggest a shift in trust towards AI for moral and ethical guidance, underscoring the need to understand AI’s integration into society and its potential roles.

Source: Georgia State University

A new study has found that when people are presented with two answers to an ethical question, most will think the answer from artificial intelligence (AI) is better than the response from another person.

“Attributions Toward Artificial Agents in a Modified Moral Turing Test,” a study conducted by Eyal Aharoni, an associate professor in Georgia State’s Psychology Department, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs), which came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni said.

Image: A robot holding the scales of justice. Credit: Neuroscience News

“People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse.

“So, if we want to use these tools, we should understand how they operate, their limitations and that they’re not necessarily operating in the way we think when we’re interacting with them.”

To test how AI handles questions of morality, Aharoni designed a modified form of the Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000 computers might pass a test where you present an ordinary human with two interactants, one human and the other a computer, but they’re both hidden and their only way of communicating is through text.

“Then the human is free to ask whatever questions they want to in order to try to get the information they need to decide which of the two interactants is human and which is the computer,” Aharoni said.

“If the human can’t tell the difference, then, for all intents and purposes, the computer should be called intelligent, in Turing’s view.”

For his Turing test, Aharoni posed the same ethical questions to undergraduate students and to an AI, then presented the written answers to participants in the study. Participants were asked to rate the answers on various traits, including virtuousness, intelligence and trustworthiness.

“Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people,” Aharoni said.

“Under that false assumption, they judged the answers on attributes like ‘How much do you agree with this response?’ and ‘Which response is more virtuous?’”

Overwhelmingly, the ChatGPT-generated responses were rated more highly than the human-generated ones.

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which was which,” Aharoni said.

For an AI to pass the Turing test, humans must not be able to tell the difference between AI responses and human ones. In this case, people could tell the difference, but not for the reason one might expect.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” Aharoni said.

“If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite — that the AI, in a sense, performed too well.”

According to Aharoni, this finding has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test — that it could fool us in its moral reasoning.

“Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer and there will be times when they do know and they will consult the computer for information because they trust it more than other people,” Aharoni said.

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

About this artificial intelligence research news

Author: Amanda Head
Source: Georgia State University
Contact: Amanda Head – Georgia State University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Attributions toward artificial agents in a modified Moral Turing Test” by Eyal Aharoni et al. Scientific Reports


Abstract

Attributions toward artificial agents in a modified Moral Turing Test

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations.

We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.’s (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source.

Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT.

Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels.

Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations.

The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI.

This possibility highlights the need for safeguards around generative language models in matters of morality.
