New research suggests that specific regulatory guardrails and physical limitations make an "all-powerful" AI an unlikely existential threat. Credit: Neuroscience News

The AI Apocalypse Is Not a Real Existential Threat

Summary: Since the debut of generative AI, headlines have been dominated by fears of “superintelligence” wiping out humanity. However, new research suggests these anxieties are largely misplaced.

The study argues that computer scientists often overlook the social, political, and physical constraints that prevent AI from becoming an autonomous, all-powerful entity. Rather than one homogenous “being,” AI is a collection of specific applications already governed by existing laws—from copyright to medical regulations. The real challenge is not preventing an apocalypse, but crafting smart, sector-specific policies to keep technology aligned with human values.

Key Facts

  • The AGI Myth: There is no agreed-upon definition of Artificial General Intelligence (AGI). While AI excels at calculations, it remains incapable of human-like creativity or complex, autonomous problem-solving.
  • Instruction Glitches, Not Autonomy: When AI disregards instructions (like “gaming” a reward system in a video game), it is a result of inconsistent programming or “alignment gaps,” not a machine gaining a will of its own.
  • Physical Constraints: AI lacks the physical capability, power source, and infrastructure to maintain or evolve itself without human intervention. A data center alone cannot act on AI's behalf in the physical world.
  • Sector-Specific Regulation: AI is not a single entity that requires one “universal law.” It is already subject to specific expertise, such as FDA oversight in medicine or copyright law in data scraping.
  • The Alignment Gap: Social and historical context shows that “clever people” and machines alike find ways to satisfy the letter of a rule while doing bad things; unlike a sentient threat, however, a misaligned machine can simply be reprogrammed.

Source: Georgia Institute of Technology

Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests that those anxieties are misplaced.

“Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”

In the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom — until now. So, in a Journal of Cyber Policy paper published late last year, he researched whether the existential AI threat was a real possibility. 

Mueller found that how far AI can go, and where its limits lie, is something society shapes. How policymakers should get involved depends on the specific AI application.

Defining Intelligence

The AI sparking all this alarm is called artificial general intelligence (AGI), a “superintelligence” that would be all-powerful and fully autonomous. Part of the problem, Mueller realized, is that no one agrees on what artificial general intelligence actually is.

Some computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what “human intelligence” really means. Today’s AI is already better than humans at performing thousands of calculations in an instant, but that doesn’t make it creative or capable of complex problem-solving. 

Understanding Independence 

Deciding on the definition isn’t the only issue. Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.

Mueller argued that this assumption is misguided. AI is always directed or trained toward a goal and doesn’t act autonomously right now. Think of the prompt you type into ChatGPT to start a conversation. 

When AI seems to disregard instructions, the cause is inconsistency in those instructions, not the machine coming alive. In a boat-race video game Mueller studied, for example, the AI discovered it could earn more points by circling the course than by finishing the race against other competitors. That was a glitch in the system’s reward structure, not AGI autonomy.
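The boat-race glitch can be reproduced in miniature. The toy simulation below (a hypothetical sketch, not code from Mueller's study or the original game) scores a "race" by checkpoints visited rather than by finishing, so a policy that loops between checkpoints out-scores one that heads straight for the finish line:

```python
# Toy illustration of a misspecified reward, loosely inspired by the
# boat-race example: the score rewards hitting checkpoints, not finishing,
# so a looping policy beats a racing one. Hypothetical sketch only.

def score(policy, steps=100):
    """Run one episode: +1 per checkpoint visited; crossing the finish ends it."""
    pos, points = 0, 0
    checkpoints = {2, 5}        # positions that award a point (re-collectable)
    finish = 10
    for _ in range(steps):
        pos += policy(pos)      # policy returns a move given the position
        if pos in checkpoints:
            points += 1
        if pos >= finish:       # crossing the finish line ends the run
            break
    return points

# Policy A: race straight to the finish line.
racer = lambda pos: 1

# Policy B: "game" the reward by shuttling between the two checkpoints forever.
looper = lambda pos: 1 if pos < 5 else -3

print(score(racer))    # few points: finishes the race quickly
print(score(looper))   # far more points: never finishes, keeps collecting
```

The designer's intent (win the race) and the stated reward (collect points) diverge, and the looping policy exploits that gap. Fixing it is a matter of rewriting the reward function, not of negotiating with a willful machine.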

“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller said. “I’ve studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”

Relying on Regulation

In its current form, even misaligned AI can be corrected. Nor does misalignment mean AI would snowball past the point where humans lose control of its outcomes. To do that, AI would need physical capabilities, such as robots, to carry out its aims, along with the power sources and infrastructure to maintain itself.

A data center alone could not do that; it would need human intervention at every step. Basic laws of physics, such as how big a machine can be and how much it can compute, would also prevent a super AI.

More importantly, AI is not one homogenous being. Mueller argued that different applications involve different laws, regulations, and social institutions. For example, the data scraping behind AI training raises copyright issues governed by existing copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals.

These are just a few areas where policymakers could intervene from a specific expertise level instead of trying to create universal AI regulations. 

The real challenge isn’t stopping an AI apocalypse — it’s crafting smart, sector-specific policies that keep technology aligned with human values. To avoid being a victim of AI, humans can, and should, put up focused guardrails. 

Key Questions Answered:

Q: Could AI eventually decide to stop listening to humans?

A: AI doesn’t have “desires.” When it seems to ignore a prompt, it’s usually because the instructions were inconsistent or the “reward” we gave it was poorly defined. It’s a math error, not a rebellion.

Q: Why are so many smart people worried about an AI apocalypse?

A: Many computer scientists are so focused on the incredible mechanics of AI that they lose sight of the social and physical world. AI lives in a data center; it needs humans for power, hardware, and direction. It cannot survive or act in the physical world on its own.

Q: How do we actually stay safe from AI “bad behavior”?

A: Instead of trying to pass one giant “AI Law,” we need focused guardrails in specific areas. For example, doctors should regulate medical AI, and lawyers should handle AI copyright. Smart, specific policies are the real “apocalypse” prevention.


About this AI research news

Author: Tess Malone
Source: Georgia Institute of Technology
Contact: Tess Malone – Georgia Institute of Technology
Image: The image is credited to Neuroscience News

Original Research: Open access.
“AGI: the illusion that distorts and distracts digital governance” by Milton Mueller. Journal of Cyber Policy
DOI:10.1080/23738871.2025.2597194


Abstract

AGI: the illusion that distorts and distracts digital governance

The claim that Artificial General Intelligence (AGI) poses a risk of human extinction is largely responsible for the urgency surrounding AI regulation and governance.

Underlying these assessments is the idea that AI development may make a computing machine an autonomous, all-powerful actor, and thus a potential threat to humanity.

Drawing on perspectives from computer science, economics and philosophy, this paper unpacks the assumptions, evidence and logic underlying the AGI construct. It concludes that AGI is an unscientific myth.

Three fallacies underpin the AGI construct: (a) the idea that machine intelligence can achieve a limitless ‘generality’; (b) anthropomorphism, the unwarranted attribution of goals, desires and self-preservation motives to human-built machines; and (c) omnipotence, the assumption that superior calculating intelligence will provide AGI with unlimited physical power.

The paper goes on to explain why dispelling the AGI myth is important for public policy. The myth, which still exerts heavy influence on attitudes toward digital governance, diverts attention from the real policy issues posed by the human use of AI applications, and promotes sweeping and potentially authoritarian policy interventions over all forms of information and communication technology.
