Learning Bias, Not Distrust: The Unseen Hurdle in Embracing AI Algorithms

Summary: Researchers shed new light on the phenomenon often referred to as ‘algorithm aversion’.

The study suggests that humans don’t always mistrust machines, but instead may struggle to learn how to effectively use them due to a bias in their learning process. When humans do not follow an algorithm’s recommendations, they miss out on opportunities to observe its accuracy, leading to an incomplete understanding of its decision-making capabilities.

The findings underline the importance of continuous, rather than selective, learning from machines for effective human-machine collaboration.

Key Facts:

  1. This research brings a novel perspective to algorithm aversion, shifting focus from an inherent mistrust of machines to the inability of humans to learn from machines due to the decision-making context.
  2. According to the study, a human decision-maker doesn’t always get to observe whether a machine’s recommendation was correct, especially when they decide not to take follow-up actions based on the machine’s suggestions. This absence of feedback creates a bias in learning, hindering the effective use of machines.
  3. Researchers underscore that trust isn’t the sole issue influencing the use of algorithmic decision-making. They highlight the importance of continuous, not selective, learning from machine intelligence for more effective collaboration between humans and machines.

Source: ESMT Berlin

Machines can make better decisions than humans, but humans often struggle to know when the machine’s decision-making is actually more accurate and end up overriding the algorithm’s decisions for the worse, according to new research from ESMT Berlin.

This phenomenon is known as algorithm aversion and is often attributed to an inherent mistrust of machines. However, systematically overriding an algorithm may not necessarily stem from algorithm aversion.

This new research shows that the very context in which a human decision-maker works can also prevent the decision-maker from learning whether a machine produces better decisions.  

These findings come from research by Francis de Véricourt and Huseyin Gurkan, both professors of management science at ESMT Berlin.

The researchers wanted to determine under which conditions a human decision-maker, supervising a machine making critical decisions, could properly assess whether the machine produces better recommendations.

To do so, the researchers set up an analytical model where a human decision-maker supervised a machine tasked with important decisions, such as whether to perform a biopsy on a patient.

The human decision-maker then made the best choice based on the information they received from the machine for each task. 

The researchers found that if a human decision-maker heeded the machine’s recommendation and it proved correct, the human would trust the machine more.

But the human sometimes did not observe whether the machine’s recommendation was correct – this happened, for instance, when the human decision-maker decided not to take follow-up actions. In this case, there was no change in trust and no lessons learned for the human decision-maker.

This interaction between the human’s decisions and the human’s assessment of the machine creates biased learning. Hence, over time, the decision-maker may never learn how to use the machine effectively. 
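To make this dynamic concrete, below is a minimal simulation sketch in Python of the selective-feedback loop described above. It is illustrative only: the accuracy values, the 50/50 prior, the follow-or-override threshold, and the helper names (bayes_update, simulate) are assumptions made for this example, not parameters or code from the study. The decision-maker updates a Bayesian belief about the machine only on tasks where they end up acting, because only those tasks reveal whether the machine’s recommendation was correct.

import random

# Illustrative parameters (assumed for this sketch, not taken from the paper).
GOOD_ACCURACY = 0.9      # accuracy if the machine really is the better decision-maker
POOR_ACCURACY = 0.6      # accuracy if it is not
FOLLOW_THRESHOLD = 0.5   # the human follows the machine only above this belief

def bayes_update(belief, machine_correct):
    """Update P(machine is good) after observing whether its advice was correct."""
    p_obs_if_good = GOOD_ACCURACY if machine_correct else 1 - GOOD_ACCURACY
    p_obs_if_poor = POOR_ACCURACY if machine_correct else 1 - POOR_ACCURACY
    return belief * p_obs_if_good / (
        belief * p_obs_if_good + (1 - belief) * p_obs_if_poor
    )

def simulate(machine_is_good, tasks=200, seed=0):
    rng = random.Random(seed)
    accuracy = GOOD_ACCURACY if machine_is_good else POOR_ACCURACY
    belief = 0.5          # prior: 50/50 that the machine is the better judge
    observations = 0
    for _ in range(tasks):
        recommends_action = rng.random() < 0.5      # e.g., "perform the biopsy"
        machine_correct = rng.random() < accuracy
        # The human follows the machine when trust is high enough, otherwise overrides.
        follows = belief >= FOLLOW_THRESHOLD
        human_acts = recommends_action if follows else not recommends_action
        if human_acts:
            # Acting reveals the outcome, so the machine's correctness can be verified.
            belief = bayes_update(belief, machine_correct)
            observations += 1
        # If the human does not act, there is no feedback and no belief update:
        # this is the gap in learning highlighted by the research.
    return belief, observations

print(simulate(machine_is_good=True))    # belief should drift toward 1 on verified tasks
print(simulate(machine_is_good=False))   # belief should drift toward 0 on verified tasks

In this toy version, tasks on which the human does not act produce no feedback at all, so the decision-maker’s picture of the machine is built only from the subset of recommendations that happen to be verified, and which tasks get verified depends on the current level of trust. That is the self-reinforcing loop the article describes.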

These findings show that it is not always an inherent mistrust of machines that leads humans to override algorithmic decisions. Over time, this biased learning can be reinforced by consistent overriding, which may result in machines being used incorrectly and ineffectively in decision-making. 

“Often, we see a tendency for humans to override algorithms, which can be typically attributed to an intrinsic mistrust of machine-based predictions,” says Prof. de Véricourt.

“This bias, however, may not be the sole reason for inappropriately and systematically overriding an algorithm. It may also be the case that we are simply not learning how to effectively use machines correctly when our learning is based solely on the correctness of the machine’s predictions.” 

These findings show that trust in a machine’s decision-making ability is key to ensuring that we learn how to use it effectively and that the accuracy of its use improves over time. 

“Our research shows that there is clearly a lack of opportunities for human decision-makers to learn from a machine’s intelligence unless they account for its advice continually,” says Prof. Gurkan.

“We need to adopt ways of complete learning with the machines constantly, not just selectively.” 

The researchers say that these findings shed light on the importance of collaboration between humans and machines and guide us on when (and when not) to trust machines. By studying such situations, we can learn when it is best to listen to the machine and when it is better to make our own decisions.

The framework set out by the researchers can help humans to better leverage machines in decision-making. 

About this artificial intelligence research news

Author: Martha Ihlbrock
Source: ESMT Berlin
Contact: Martha Ihlbrock – ESMT Berlin

Original Research: Open access.
“Is Your Machine Better Than You? You May Never Know” by Francis de Véricourt and Huseyin Gurkan in Management Science


Abstract

Is Your Machine Better Than You? You May Never Know

Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions.

This paper explores the extent to which a decision maker (DM) supervising a machine to make high-stakes decisions can properly assess whether the machine produces better recommendations.

To that end, we study a setup in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM’s supervision.

Because stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine’s prescriptions across tasks, the DM updates the DM’s belief about the machine.

However, the DM is subject to a so-called verification bias such that the DM verifies the machine’s correctness and updates the DM’s belief accordingly only if the DM ultimately decides to act on the task.

In this setup, we characterize the evolution of the DM’s belief and overruling decisions over time. We identify situations under which the DM hesitates forever whether the machine is better; that is, the DM never fully ignores but regularly overrules it.

Moreover, the DM sometimes wrongly believes with positive probability that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them.

These findings provide a novel explanation for human–machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.
