AI Mimics Human Decision-Making for Better Accuracy

Summary: Researchers developed a neural network that mimics human decision-making by incorporating elements of uncertainty and evidence accumulation. This model, trained on handwritten digits, produces more human-like decisions compared to traditional neural networks.

It shows similar accuracy, response time, and confidence patterns to humans. This advancement could lead to more reliable AI systems and reduce the cognitive load of daily decision-making.

Key Facts:

  1. Human-Like Decisions: The neural network mimics human uncertainty and evidence accumulation in decision-making.
  2. Performance Comparison: The model shows similar accuracy and confidence patterns to humans when tested on a noisy dataset.
  3. Future Potential: This approach could improve AI reliability and help offload cognitive burdens from daily decisions.

Source: Georgia Institute of Technology

Humans make nearly 35,000 decisions every day, from whether it’s safe to cross the road to what to have for lunch. Every decision involves weighing the options, remembering similar past scenarios, and feeling reasonably confident about the right choice. What may seem like a snap decision actually comes from gathering evidence from the surrounding environment. And often the same person makes different decisions in the same scenarios at different times.

Neural networks do the opposite, making the same decisions each time. Now, Georgia Tech researchers in Associate Professor Dobromir Rahnev’s lab are training them to make decisions more like humans.


The science of human decision-making is only just beginning to be applied to machine learning, but developing a neural network that more closely resembles the actual human brain may make it more reliable, according to the researchers.

In a paper in Nature Human Behaviour, “The Neural Network RTNet Exhibits the Signatures of Human Perceptual Decision-Making,” a team from the School of Psychology reveals a new neural network trained to make decisions similar to humans.

Decoding Decisions

“Neural networks make a decision without telling you whether or not they are confident about their decision,” said Farshad Rafiei, who earned his Ph.D. in psychology at Georgia Tech. “This is one of the essential differences from how people make decisions.” 

Large language models (LLMs), for example, are prone to hallucinations. When an LLM is asked a question it doesn’t know the answer to, it will make something up without acknowledging the fabrication. By contrast, most humans in the same situation will admit they don’t know the answer. Building a more human-like neural network can prevent this duplicity and lead to more accurate answers.

Making the Model

The team trained their neural network on handwritten digits from a famous computer science dataset called MNIST and asked it to decipher each number. To determine the model’s accuracy, they ran it with the original dataset and then added noise to the digits to make them harder for humans to discern.
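The article doesn’t describe the exact noise procedure the researchers used, but a common way to make grayscale digits harder to discern is additive Gaussian pixel noise. A minimal sketch (the noise level `sigma` is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def add_pixel_noise(img, sigma, rng=None):
    """Corrupt a grayscale image (pixel values in [0, 1]) with additive
    Gaussian noise, then clip back into the valid range."""
    rng = rng or np.random.default_rng()
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

# Illustration on a blank 28x28 "digit" rather than real MNIST data.
clean = np.zeros((28, 28))
noisy = add_pixel_noise(clean, sigma=0.5, rng=np.random.default_rng(0))
```

The same transform can be applied at test time only, matching the setup described below, where models are trained on clean digits but evaluated on noisy ones.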

To compare the model performance against humans, they trained their model (as well as three other models: CNet, BLNet, and MSDNet) on the original MNIST dataset without noise, but tested them on the noisy version used in the experiments and compared results from the two datasets. 

The researchers’ model relied on two key components: a Bayesian neural network (BNN), which uses probability to make decisions, and an evidence accumulation process that keeps track of the evidence for each choice. The BNN produces responses that are slightly different each time.

As it gathers more evidence, the accumulation process can sometimes favor one choice and sometimes another. Once there is enough evidence to decide, RTNet stops the accumulation process and makes a decision.
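The mechanism described above — repeated stochastic readouts whose evidence accumulates until one choice crosses a threshold — can be sketched as follows. This is a toy model, not the paper’s implementation: `noisy_readout` stands in for a Bayesian network’s stochastic forward pass, and the logits and threshold are illustrative assumptions.

```python
import numpy as np

def accumulate_to_decision(sample_fn, n_classes, threshold, max_steps=1000, rng=None):
    """Add one stochastic evidence vector per step; stop when any class's
    accumulated evidence reaches the threshold. Returns the chosen class,
    the number of steps taken (a response-time proxy), and the winner's
    share of total evidence (a confidence proxy)."""
    rng = rng or np.random.default_rng()
    evidence = np.zeros(n_classes)
    for step in range(1, max_steps + 1):
        evidence += sample_fn(rng)
        if evidence.max() >= threshold:
            break
    choice = int(evidence.argmax())
    confidence = evidence[choice] / evidence.sum()
    return choice, step, confidence

def noisy_readout(rng):
    """Toy stochastic readout: softmax over jittered logits, favoring class 2."""
    logits = np.array([0.0, 0.0, 3.0]) + rng.normal(0.0, 1.0, size=3)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

choice, steps, confidence = accumulate_to_decision(
    noisy_readout, n_classes=3, threshold=5.0, rng=np.random.default_rng(0))
```

Because each softmax sample sums to one, reaching a threshold of 5 takes at least five steps; harder (noisier) inputs spread evidence across classes and so take longer, which is how a response-time distribution emerges.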

The researchers also timed the model’s decision-making speed to see whether it follows a psychological phenomenon called the “speed-accuracy trade-off,” which holds that people are less accurate when they must make decisions quickly.
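In an evidence-accumulation model, the speed-accuracy trade-off falls out of the decision threshold: a low threshold means fast but hasty choices, a high one means slow but careful ones. A toy race-model simulation makes this concrete (the per-glance accuracy `p_true` and the two thresholds are illustrative assumptions, not values from the paper):

```python
import numpy as np

def trial(threshold, rng, p_true=0.65, n_classes=3):
    """One simulated decision: each 'step' is a noisy glance whose unit of
    evidence lands on the true class (index 0) with probability p_true,
    otherwise on a random wrong class. Accumulate until any class hits
    the threshold; return (was_correct, steps_taken)."""
    evidence = np.zeros(n_classes)
    steps = 0
    while evidence.max() < threshold:
        steps += 1
        if rng.random() < p_true:
            evidence[0] += 1
        else:
            evidence[rng.integers(1, n_classes)] += 1
    return int(evidence.argmax()) == 0, steps

rng = np.random.default_rng(1)
stats = {}
for threshold in (2, 8):  # low threshold = hasty, high threshold = careful
    outcomes = [trial(threshold, rng) for _ in range(2000)]
    accuracy = float(np.mean([ok for ok, _ in outcomes]))
    mean_rt = float(np.mean([s for _, s in outcomes]))
    stats[threshold] = (accuracy, mean_rt)
    print(f"threshold={threshold}: accuracy={accuracy:.2f}, mean steps={mean_rt:.1f}")
```

Raising the threshold reliably increases both accuracy and mean response time, reproducing the trade-off the researchers tested for.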

Once they had the model’s results, they compared them to humans’ results. Sixty Georgia Tech students viewed the same dataset and shared their confidence in their decisions, and the researchers found the accuracy rate, response time, and confidence patterns were similar between the humans and the neural network.

“Generally speaking, we don’t have enough human data in existing computer science literature, so we don’t know how people will behave when they are exposed to these images. This limitation hinders the development of models that accurately replicate human decision-making,” Rafiei said.

“This work provides one of the biggest datasets of humans responding to MNIST.” 

Not only did the team’s model outperform all rival deterministic models, it was also more accurate in higher-speed scenarios. And it reproduced another fundamental element of human psychology: people feel more confident when they make correct decisions. Without being trained specifically to favor confidence, the model showed the same pattern, Rafiei noted.

“If we try to make our models closer to the human brain, it will show in the behavior itself without fine-tuning,” he said.

The research team hopes to train the neural network on more varied datasets to test its potential. They also expect to apply this BNN model to other neural networks to enable them to reason more like humans.

Eventually, algorithms won’t just be able to emulate our decision-making abilities, but could even help offload some of the cognitive burden of those 35,000 decisions we make daily.

About this artificial intelligence research news

Author: Tess Malone
Source: Georgia Institute of Technology
Contact: Tess Malone – Georgia Institute of Technology
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“The neural network RTNet exhibits the signatures of human perceptual decision-making” by Dobromir Rahnev et al. Nature Human Behaviour


Abstract

The neural network RTNet exhibits the signatures of human perceptual decision-making

Convolutional neural networks show promise as models of biological vision. However, their decision behaviour, including the facts that they are deterministic and use equal numbers of computations for easy and difficult stimuli, differs markedly from human decision-making, thus limiting their applicability as models of human perceptual behaviour.

Here we develop a new neural network, RTNet, that generates stochastic decisions and human-like response time (RT) distributions. We further performed comprehensive tests that showed RTNet reproduces all foundational features of human accuracy, RT and confidence and does so better than all current alternatives.

To test RTNet’s ability to predict human behaviour on novel images, we collected accuracy, RT and confidence data from 60 human participants performing a digit discrimination task. We found that the accuracy, RT and confidence produced by RTNet for individual novel images correlated with the same quantities produced by human participants.

Critically, human participants who were more similar to the average human performance were also found to be closer to RTNet’s predictions, suggesting that RTNet successfully captured average human behaviour.

Overall, RTNet is a promising model of human RTs that exhibits the critical signatures of perceptual decision-making.
