Summary: People stop gathering information earlier when it supports the conclusion they wish were true than when it supports an undesirable conclusion.
A new study suggests that people stop gathering evidence earlier when the data support their desired conclusion than when the data support a conclusion they wish were false, Filip Gesiarz, Donal Cahill and Tali Sharot of University College London, U.K., report in PLOS Computational Biology.
Previous studies had already provided some clues that people gather less information before reaching desirable beliefs. For example, people are more likely to seek a second medical opinion when the first diagnosis is grave. However, design limitations of those studies prevented a definitive conclusion, and the reasons behind the bias remained unknown. By fitting people’s behavior to a mathematical model, Gesiarz and colleagues were able to identify the reasons for this bias.
“Our research suggests that people start with an assumption that their favored conclusion is more likely true and weight each piece of evidence supporting it more than evidence opposing it. Because of that, people will find no need to gather additional information that could have revealed their conclusion to be false. They will stop the investigation as soon as the jury tilts in their favor,” said Gesiarz.
In this new study, 84 volunteers played an online categorization game in which they could gather as much evidence as they wanted to help them make judgements, and they were paid according to how accurate they were. In addition, if the evidence pointed to one category they would gain bonus points, and if it pointed to another category they would lose points. So while volunteers had a reason to wish the evidence pointed to a specific judgement, the only way to maximize rewards was to give accurate responses. Despite this, the researchers found that volunteers stopped gathering data earlier when it supported the conclusion they wished were true than when it supported the undesirable conclusion.
“Today, a limitless amount of information is available at the click of a mouse,” Sharot says. “However, because people are likely to conduct less thorough searches when the first few hits provide desirable information, this wealth of data will not necessarily translate to more accurate beliefs.”
Next, the authors hope to determine what factors make certain individuals more likely to have a bias in how they gather information than others. For instance, they are curious whether children might show the same bias revealed in this study, or whether people with depression, which is associated with motivation problems, have different data-gathering patterns.
Funding: Funded by a Wellcome Trust Fellowship 214268/Z/18/Z to TS. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Tali Sharot – PLOS
Original Research: Open access
“Evidence accumulation is biased by motivation: A computational account” by Filip Gesiarz, Donal Cahill and Tali Sharot.
To make good judgments people gather information. An important problem an agent needs to solve is when to continue sampling data and when to stop gathering evidence. We examine whether and how the desire to hold a certain belief influences the amount of information participants require to form that belief. Participants completed a sequential sampling task in which they were incentivized to accurately judge whether they were in a desirable state, which was associated with greater rewards than losses, or an undesirable state, which was associated with greater losses than rewards. While one state was better than the other, participants had no control over which they were in, and to maximize rewards they had to maximize accuracy. Results show that participants’ judgments were biased towards believing they were in the desirable state. They required a smaller proportion of supporting evidence to reach that conclusion and ceased gathering samples earlier when reaching the desirable conclusion. The findings were replicated in an additional sample of participants. To examine how this behavior was generated we modeled the data using a drift-diffusion model. This enabled us to assess two potential mechanisms which could be underlying the behavior: (i) a valence-dependent response bias and/or (ii) a valence-dependent process bias. We found that a valence-dependent model, with both a response bias and a process bias, fit the data better than a range of other alternatives, including valence-independent models and models with only a response or process bias. Moreover, the valence-dependent model provided better out-of-sample prediction accuracy than the valence-independent model. Our results provide an account for how the motivation to hold a certain belief decreases the need for supporting evidence. The findings also highlight the advantage of incorporating valence into evidence accumulation models to better explain and predict behavior.
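The two mechanisms the abstract identifies can be illustrated with a toy simulation. In a drift-diffusion model, evidence accumulates step by step until it crosses one of two decision boundaries; a valence-dependent response bias shifts the starting point toward the desirable boundary, and a valence-dependent process bias adds a constant push (extra weight on desirable evidence) to each step. The sketch below is a minimal illustration of this idea, not the authors' fitted model; all parameter values are arbitrary choices for demonstration.

```python
import random

def simulate_trial(drift=0.01, start=0.2, threshold=1.0, noise=0.15,
                   max_steps=100_000, rng=None):
    """One drift-diffusion trial.

    The accumulator starts at `start` (a response bias toward the
    desirable boundary at +threshold), and each evidence sample adds
    `drift` (a process bias: evidence for the desirable conclusion is
    weighted more) plus Gaussian noise. Returns (decision, n_steps),
    where decision is +1 (desirable) or -1 (undesirable).
    """
    if rng is None:
        rng = random.Random()
    x = start
    for step in range(1, max_steps + 1):
        x += drift + rng.gauss(0.0, noise)
        if x >= threshold:
            return +1, step   # concluded "desirable state"
        if x <= -threshold:
            return -1, step   # concluded "undesirable state"
    return 0, max_steps       # no decision reached

rng = random.Random(42)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
desirable = [n for d, n in trials if d == +1]
undesirable = [n for d, n in trials if d == -1]
mean_des = sum(desirable) / len(desirable)
mean_undes = sum(undesirable) / len(undesirable)
print(f"mean samples before a desirable conclusion:   {mean_des:.1f}")
print(f"mean samples before an undesirable conclusion: {mean_undes:.1f}")
```

With both biases in place, the simulated agent reproduces the paper's qualitative finding: desirable conclusions are reached more often and after fewer evidence samples than undesirable ones, even though the noise itself is unbiased.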