
A new way to know liars’ intent

Summary: The reasoning patterns deceptive people use may serve as indicators of deceptive intent, according to a new AI algorithm. Researchers say reasoning intent is a more reliable cue than verbal changes and personal differences when trying to detect deception.

Source: Thayer School of Engineering at Dartmouth

Dartmouth engineering researchers have developed a new approach for detecting a speaker’s intent to mislead. The framework, which could be developed further to extract opinion from “fake news,” among other uses, was recently published in the Journal of Experimental & Theoretical Artificial Intelligence.

Although previous studies have examined deception, this is possibly the first study to look at a speaker’s intent. The researchers posit that while a true story can be manipulated into various deceiving forms, the intent, rather than the content of the communication, determines whether the communication is deceptive or not. For example, the speaker could be misinformed or make a wrong assumption, meaning the speaker made an unintentional error but did not attempt to deceive.
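The distinction the researchers draw can be made concrete with a minimal sketch. The function below, with its boolean inputs, is a hypothetical illustration only (the paper’s model infers intent from reasoning patterns rather than taking it as a given): a false statement is only a lie if the speaker does not believe it.

```python
# Hypothetical sketch of the truth/misinformation/deception taxonomy.
# The actual model infers intent; here intent is an explicit input.
def classify(statement_is_false: bool, speaker_believes_it: bool) -> str:
    """Intent, not content, separates a lie from an honest error."""
    if not statement_is_false:
        return "truth"
    # The statement is false: did the speaker believe it themselves?
    return "misinformation" if speaker_believes_it else "deception"

# A misinformed speaker asserts a falsehood they believe: an honest error.
print(classify(statement_is_false=True, speaker_believes_it=True))   # misinformation
# A deceiver asserts a falsehood they know to be false.
print(classify(statement_is_false=True, speaker_believes_it=False))  # deception
```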

“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes,” said Eugene Santos Jr., co-author and professor of engineering at Thayer School of Engineering at Dartmouth. “To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts.”

The researchers developed a unique approach and resulting algorithm that can tell deception apart from all benign communications by retrieving the universal features of deceptive reasoning. However, the framework is currently limited by the amount of data needed to measure a speaker’s deviation from their past arguments; the study used data from a 2009 survey of 100 participants on their opinions on controversial topics, as well as a 2011 dataset of 800 real and 400 fictitious reviews of the same 20 hotels.
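The article does not publish the algorithm itself, but the idea of measuring a speaker’s deviation from their past arguments can be sketched in a few lines. Everything below (the SpeakerProfile class, the stance-vector representation, the cosine-distance score) is an assumed illustration, not the authors’ method.

```python
# Assumed illustration: score how far a new statement's stance deviates
# from a speaker's own historical stance on the same topic.
from dataclasses import dataclass, field
from math import sqrt

@dataclass
class SpeakerProfile:
    # topic -> list of stance vectors from the speaker's past statements
    history: dict = field(default_factory=dict)

    def record(self, topic: str, stance: list) -> None:
        self.history.setdefault(topic, []).append(stance)

    def deviation(self, topic: str, stance: list) -> float:
        """Return 1 minus cosine similarity to the mean past stance.

        A high score is only a hint of possible deceptive intent; by
        itself it cannot distinguish a lie from a changed mind.
        """
        past = self.history.get(topic)
        if not past:
            return 0.0  # no baseline to deviate from
        mean = [sum(col) / len(past) for col in zip(*past)]
        dot = sum(a * b for a, b in zip(mean, stance))
        norm = sqrt(sum(a * a for a in mean)) * sqrt(sum(b * b for b in stance))
        return 1.0 - dot / norm if norm else 0.0

profile = SpeakerProfile()
profile.record("hotel_review", [0.9, 0.1])            # past: strongly positive
print(profile.deviation("hotel_review", [0.1, 0.9]))  # reversed stance -> ~0.78
```

A real system would also need substantial history per speaker, which is exactly the data limitation the researchers note above.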

Santos believes the framework could be developed further to help readers scrutinize the intent behind “fake news,” allowing them to determine whether a piece rests on a reasonable, logical argument or whether opinion plays a strong role. In future studies, Santos hopes to examine the ripple effect of misinformation and its impacts.

In the study, the researchers use the popular 2001 film Ocean’s Eleven to illustrate how the framework can examine a deceiver’s arguments, which in reality may go against his true beliefs, producing a falsified final expectation. In the movie, a group of thieves breaks into a casino vault while simultaneously revealing to the owner that he is being robbed, in order to negotiate. The thieves supply the owner with false information: they claim they will take only half the money if the owner doesn’t call the police. The thieves, however, expect the owner to call the police, which he does, and they then disguise themselves as police officers to steal the entire contents of the vault.


Because Ocean’s Eleven is a scripted film, viewers can be sure of the thieves’ intent (to steal all of the money) and how it conflicts with what they tell the owner (that they will take only half). The thieves were able to deceive the owner and anticipate his actions because they and the owner had different information and therefore perceived the scene differently.

“People expect things to work in a certain way,” said Santos, “just like the thieves knew that the owner would call police when he found out he was being robbed. So, in this scenario, the thieves used that knowledge to convince the owner to come to a certain conclusion and follow the standard path of expectations. They forced their deception intent so the owner would reach the conclusions the thieves desired.”

In popular culture, verbal and non-verbal behaviors such as facial expressions are often used to determine if someone is lying, but the co-authors note that those cues are not always reliable.

“We have found that models based on reasoning intent are more reliable than verbal changes and personal differences, and thus are better at distinguishing intentional lies from other types of information distortion,” said co-author Deqing Li, who worked on the paper as part of her PhD thesis at Thayer.

About this neuroscience research article

Source:
Thayer School of Engineering at Dartmouth
Media Contacts:
Julie Bonette – Thayer School of Engineering at Dartmouth

Original Research: Closed access
“Discriminating deception from truth and misinformation: an intent-level approach,” Deqing Li and Eugene Santos Jr., Journal of Experimental & Theoretical Artificial Intelligence. doi:10.1080/0952813X.2019.1652354

Abstract

Discriminating deception from truth and misinformation: an intent-level approach

Deception detection has been studied for hundreds of years. A particularly challenging problem is to not only identify truth from deception, but also discriminate misinformation, i.e. errors, from deception. Misinformation has generally been ignored in the study of deception detection, but through analysing the foundations of deception, it may be possible to pinpoint a fundamental difference between deception and all other benign communications – namely, the intent of the speaker. We present a detection model that captures a speaker’s intent by measuring his patterns of reasoning. The reasoning patterns of deceivers may serve as indicators of intentional deception. Through empirical studies, these intent-driven reasoning patterns can identify as well as explain deceptive communications.
