AI Increases Lie Accusations, Changing How We Trust and Detect Deception

Summary: New research shows people are more likely to accuse others of lying when AI makes the accusation first. This insight highlights the potential social impact of AI in lie detection and suggests caution for policymakers. The study found AI’s presence increased accusation rates and influenced behavior despite people’s general reluctance to use AI lie-detection tools.

Key Facts:

  1. AI predictions led to higher rates of lie accusations compared to human judgment alone.
  2. Participants were more likely to accuse statements of being false when the AI indicated they were.
  3. Despite AI’s higher accuracy, only a third of participants chose to use it for lie detection.

Source: Cell Press

Although people lie a lot, they typically refrain from accusing others of lying because of social norms around making false accusations and being polite. But artificial intelligence (AI) could soon shake up the rules.

In a study published June 27 in the journal iScience, researchers demonstrate that people are much more likely to accuse others of lying when an AI makes an accusation.

The finding provides insight into the social implications of using AI systems for lie detection and could inform policymakers considering similar technologies.

“Our society has strong, well-established norms about accusations of lying,” says senior author Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany.


“It would take a lot of courage and evidence for one to openly accuse others of lying. But our study shows that AI could become an excuse for people to conveniently hide behind, so that they can avoid being held responsible for the consequences of accusations.”

Human society has long operated on a truth default: according to truth-default theory, people generally assume that what they hear is true. Because of this tendency to trust others, humans are terrible at detecting lies; previous research has shown that people perform no better than chance when trying to spot them.

Köbis and his team wanted to know whether the presence of AI would change these established social norms and behaviors around making accusations.

To investigate, the team asked 986 people to write one true and one false description of what they planned to do the following weekend. The team then trained an algorithm on these statements, producing an AI model that correctly identified true and false statements 66% of the time, an accuracy significantly higher than what the average person achieves.

Next, the team recruited more than 2,000 people to act as judges, each reading a statement and deciding whether it was true or false. The researchers divided the participants into four groups: “baseline,” “forced,” “blocked,” and “choice.”

In the baseline group, participants answered true or false without help from the AI. In the forced group, participants always received an AI prediction before making their own judgment. In the blocked and choice groups, participants had the option of requesting an AI-generated prediction, but only those in the choice group actually received one; requests in the blocked group went unfulfilled.

The research team found that participants in the baseline group identified statements as true or false with 46% accuracy. Only 19% of them accused the statements they read of being false, even though they knew that half of the statements were false. This confirms that people tend to refrain from accusing others of lying.

In the forced group, where participants were given an AI prediction regardless of whether they wanted one, more than a third accused the statements of being false, a rate significantly higher than in both the baseline and blocked groups, which received no AI predictions.

When the AI predicted a statement was true, only 13% of participants said it was false. But when the AI predicted a statement was false, more than 40% of participants accused it of being false.

Moreover, among the participants who requested and received an AI prediction, an overwhelming 84% adopted it, accusing the statement of being false whenever the AI said it was.

“It shows that once people have such an algorithm on hand, they would rely on it and maybe change their behaviors. If the algorithm calls something a lie, people are willing to jump on that. This is quite alarming, and it shows we should be really careful with this technology,” Köbis says.

Interestingly, people seemed to be reluctant to use AI as a lie-detection tool. In the blocked and choice groups, only a third of participants requested the AI prediction.

The result was surprising to the team, because the researchers had told the participants in advance that the algorithm could detect lies better than humans. “It might be because of this very robust effect we’ve seen in various studies that people are overconfident in their lie detection abilities, even though humans are really bad at it,” Köbis says.

AI is known for making frequent mistakes and reinforcing biases. Given the findings, Köbis suggests that policymakers should reconsider using the technology for important and sensitive matters such as granting asylum at the border.

“There’s such a big hype around AI, and many people believe these algorithms are really, really potent and even objective. I’m really worried that this would make people over-rely on it, even when it doesn’t work that well,” Köbis says.

About this AI research news

Author: Kristopher Benke
Source: Cell Press
Contact: Kristopher Benke – Cell Press
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Lie detection algorithms disrupt the social dynamics of accusation behavior” by Nils Köbis et al. iScience


Abstract

Lie detection algorithms disrupt the social dynamics of accusation behavior

Highlights

  • Supervised learning algorithm surpasses human accuracy in text-based lie detection
  • Without algorithmic support, people are reluctant to accuse others of lying
  • Availability of a lie-detection algorithm increases people’s lying accusations
  • 31% of participants request algorithmic advice, and among those, most follow it

Summary

Humans, aware of the social costs associated with false accusations, are generally hesitant to accuse others of lying. Our study shows how lie detection algorithms disrupt this social dynamic.

We develop a supervised machine-learning classifier that surpasses human accuracy and conduct a large-scale incentivized experiment manipulating the availability of this lie-detection algorithm.

In the absence of algorithmic support, people are reluctant to accuse others of lying, but when the algorithm becomes available, a minority actively seeks its prediction and consistently relies on it for accusations.

Although those who request machine predictions are not inherently more prone to accuse, they more willingly follow predictions that suggest accusation than those who receive such predictions without actively seeking them.
