Summary: The belief in AI enhancement can lead to increased risk-taking behavior. In the study, participants were informed that an AI application would enhance their cognitive abilities during a virtual card game.
Despite the absence of actual AI assistance, participants displayed higher risk-taking, indicating a potential placebo effect in technological applications.
The findings highlight the importance of assessing actual AI benefits and considering user expectations in the development process.
- The study involved a virtual card game where participants were told they’d be assisted by an AI application, leading to increased risk-taking behavior even without actual AI enhancement.
- The results suggest the existence of a placebo effect in the use of technological applications, similar to the placebo effect observed with medications.
- The researchers stress the importance of evaluating the genuine benefits of AI applications before their release, and taking user expectations into consideration during development.
Human augmentation technologies are technological aids that enhance human abilities. They range from exoskeletons to augmented reality headsets.
A study at the Chair of Human-Centered Ubiquitous Media at LMU has now shown that users have high expectations of the effects of these technologies.
As soon as they believe that AI is enhancing their cognitive abilities, they take more risks — regardless of whether the AI is actually assisting them.
“The hype around AI applications affects the expectations of users. This can lead to riskier behavior,” says Steeven Villa, doctoral researcher at the Chair of Human-Centered Ubiquitous Media and lead author of the study.
Ruling out placebo effects
In the study, participants were informed they would be assisted by an AI application that augments their cognitive abilities during a virtual card game.
In reality, there was no such AI enhancement. Nevertheless, the participants exhibited higher risk-taking as soon as they believed they were benefiting from AI.
The study points to the possible existence of a placebo effect in technical applications of this nature, akin to the well-established placebo effect for medication.
“At a time when people are increasingly interacting with intelligent systems, it’s important to understand a possible placebo effect so that we can build systems that offer genuine support,” says Albrecht Schmidt, Professor of Computer Science at LMU.
The researchers recommend assessing the actual benefit of AI applications before releasing them, taking possible placebo effects into account. In addition, they advise tech companies to involve users and their expectations to a greater extent in the development process.
About this AI and psychology research news
Author: Constanze Drewlo
Contact: Constanze Drewlo – LMU
Image: The image is credited to Neuroscience News
Original Research: Open access.
“The placebo effect of human augmentation: Anticipating cognitive augmentation increases risk-taking behavior” by Steeven Villa et al. Computers in Human Behavior
The placebo effect of human augmentation: Anticipating cognitive augmentation increases risk-taking behavior
Human Augmentation Technologies improve human capabilities using technology. In this study, we investigate the placebo effect of Augmentation Technologies.
Thirty naïve participants were told they would either be augmented with a cognitive augmentation technology or use no augmentation system while completing a Columbia Card Task. In this risk-taking measure, participants flip win and loss cards.
The sham augmentation system consisted of a brain–computer interface allegedly coordinated to play non-audible sounds that increase cognitive functions. In reality, no sounds were played in any condition.
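To illustrate the risk-taking measure described above, here is a minimal sketch of one Columbia Card Task round. The deck size, number of loss cards, and payoff values are hypothetical placeholders, not the parameters used in the study; risk-taking corresponds to how many cards a participant chooses to flip.

```python
import random

def columbia_card_round(n_cards=32, n_loss=3, gain=10, loss=250,
                        n_flips=5, rng=None):
    """Simulate one round: flip n_flips cards from a shuffled deck.

    Each win card adds `gain` points; flipping a loss card ends the
    round immediately with a `loss` penalty. All parameter values are
    illustrative, not taken from the study.
    """
    rng = rng or random.Random()
    deck = [True] * n_loss + [False] * (n_cards - n_loss)  # True = loss card
    rng.shuffle(deck)
    score = 0
    for is_loss in deck[:n_flips]:
        if is_loss:              # hitting a loss card ends the round
            return score - loss
        score += gain            # each win card adds points
    return score
```

A risk-seeking participant flips many cards, raising both the potential gain and the chance of hitting a loss card before the round ends.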
Using Bayesian statistical modeling, we show a placebo effect in human augmentation: a sustained belief of improvement remains after using the sham system, and risk-taking increases conditional on heightened expectancy.
Furthermore, we identify differences in event-related potentials in the electroencephalogram that occur during the sham condition when flipping loss cards.
Finally, we integrate our findings into theories of human augmentation and discuss implications for the future assessment of augmentation technologies.