Summary: A new study finds that people are more concerned about the immediate risks of artificial intelligence, like job loss, bias, and disinformation, than they are about hypothetical future threats to humanity. Researchers exposed over 10,000 participants to different AI narratives and found that, while future catastrophes raise concern, real-world present dangers resonate more strongly.
This challenges the idea that dramatic “doomsday” messaging distracts from urgent issues. The findings suggest the public is capable of holding nuanced views and supports a balanced conversation about both current and long-term AI risks.
Key Facts:
- Present > Future: Respondents prioritized concerns like bias and misinformation over existential AI threats.
- No Trade-Off: Awareness of future risks did not reduce concern for today’s real-world AI harms.
- Public Dialogue Needed: People want thoughtful discourse on both immediate and long-term AI challenges.
Source: University of Zurich
Most people are more concerned about the immediate risks of artificial intelligence than about a theoretical future in which AI threatens humanity.
A new study by the University of Zurich reveals that respondents draw clear distinctions between abstract scenarios and specific, tangible problems, and take the latter particularly seriously.
There is a broad consensus that artificial intelligence is associated with risks, but there are differences in how those risks are understood and prioritized.
One widespread perception emphasizes theoretical long-term risks, such as AI potentially threatening the survival of humanity.
Another common viewpoint focuses on immediate concerns such as how AI systems amplify social prejudices or contribute to disinformation.
Some fear that emphasizing dramatic “existential risks” may distract attention from the more urgent problems AI is already causing today.
Present and future AI risks
To examine those views, a team of political scientists at the University of Zurich conducted three large-scale online experiments involving more than 10,000 participants in the USA and the UK.
Some participants were shown a variety of headlines that portrayed AI as a catastrophic risk. Others read about present threats such as discrimination or misinformation, and still others about the potential benefits of AI.
The objective was to examine whether warnings about a far-off, AI-caused catastrophe diminish alertness to present problems.
Greater concern about present problems
“Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,” says Professor Fabrizio Gilardi from the Department of Political Science at UZH.
Even when texts about existential threats amplified fears of such scenarios, respondents remained far more concerned about present problems, such as systematic bias in AI decisions and job losses caused by AI.
However, the study also shows that people are capable of distinguishing between theoretical dangers and specific, tangible problems, and that they take both seriously.
Conduct broad dialogue on AI risks
The study thus fills a significant gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts attention from pressing present problems.
The study is the first to deliver systematic data showing that awareness of actual present threats persists even when people are confronted with apocalyptic warnings.
“Our study shows that the discussion about long-term risks does not automatically come at the expense of alertness to present problems,” says co-author Emma Hoes.
Gilardi adds that “the public discourse shouldn’t be ‘either-or.’ A concurrent understanding and appreciation of both the immediate and potential future challenges is needed.”
About this AI and psychology research news
Author: Nathalie Huber
Source: University of Zurich
Contact: Nathalie Huber – University of Zurich
Original Research: Open access.
“Existential Risk Narratives About Artificial Intelligence Do Not Distract From Its Immediate Harms” by Fabrizio Gilardi et al. in PNAS.
Abstract
Existential Risk Narratives About Artificial Intelligence Do Not Distract From Its Immediate Harms
There is broad consensus that AI presents risks, but considerable disagreement about the nature of those risks.
These differing viewpoints can be understood as distinct narratives, each offering a specific interpretation of AI’s potential dangers.
One narrative focuses on doomsday predictions of AI posing long-term existential risks for humanity. Another narrative prioritizes immediate concerns that AI brings to society today, such as the reproduction of biases embedded into AI systems.
A significant point of contention is that the “existential risk” narrative, which is largely speculative, may distract from the less dramatic but real and present dangers of AI.
We address this “distraction hypothesis” by examining whether a focus on existential threats diverts attention from the immediate risks AI poses today.
In three preregistered, online survey experiments (N = 10,800), participants were exposed to news headlines that either depicted AI as a catastrophic risk, highlighted its immediate societal impacts, or emphasized its potential benefits.
Results show that i) respondents are much more concerned with the immediate, rather than existential, risks of AI, and ii) existential risk narratives increase concerns for catastrophic risks without diminishing the significant worries respondents express for immediate harms.
These findings provide important empirical evidence to inform ongoing scientific and political debates on the societal implications of AI.