Summary: A new study argues that some generative AI agents meet all three philosophical criteria for free will: agency, choice, and control. Drawing from theories by Dennett and List, researchers examined AI agents like Minecraft’s Voyager and fictional autonomous drones, concluding that they exhibit functional free will.
As AI takes on increasingly autonomous roles, from chatbots to self-driving cars, questions of moral responsibility may shift from developers to the AI itself. Study author Frank Martela warns that if AI is to make adult-like decisions, it must be given a moral compass from the outset, and developers must be equipped to program ethical reasoning into it.
Key Facts:
- Free Will in AI: Some generative AI agents fulfill philosophical criteria for free will.
- Moral Responsibility Shift: Autonomy may shift moral accountability from developers to AI agents.
- Urgent Ethical Need: Developers must be equipped to embed complex moral reasoning into AI.
Source: Aalto University
AI is advancing at such speed that speculative moral questions, once the province of science fiction, are suddenly real and pressing, says Finnish philosopher and psychology researcher Frank Martela.
Martela’s latest study finds that generative AI meets all three philosophical conditions of free will: goal-directed agency, the capacity to make genuine choices, and control over its actions.
It will be published in the journal AI and Ethics on Tuesday.
Drawing on the concept of functional free will as explained in the theories of philosophers Daniel Dennett and Christian List, the study examined two generative AI agents powered by large language models (LLMs): the Voyager agent in Minecraft and fictional ‘Spitenik’ killer drones with the cognitive function of today’s unmanned aerial vehicles.
‘Both seem to meet all three conditions of free will — for the latest generation of AI agents we need to assume they have free will if we want to understand how they work and be able to predict their behaviour,’ says Martela.
He adds that these case studies are broadly applicable to currently available generative agents using LLMs.
This development brings us to a critical point in human history, as we give AI more power and freedom, potentially in life or death situations. Whether it is a self-help bot, a self-driving car or a killer drone — moral responsibility may move from the AI developer to the AI agent itself.
‘We are entering new territory. The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions,’ he adds. It follows that issues around how we ‘parent’ our AI technology have become both real and pressing.
‘AI has no moral compass unless it is programmed to have one. But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices,’ Martela says.
The recent withdrawal of the latest ChatGPT update due to potentially harmful sycophantic tendencies is a red flag that deeper ethical questions must be addressed. We have moved beyond teaching the simplistic morality of a child.
‘AI is getting closer and closer to being an adult — and it increasingly has to make decisions in the complex moral problems of the adult world. By instructing AI to behave in a certain way, developers are also passing on their own moral convictions to the AI.
‘We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,’ says Martela.
About this AI and free will research news
Author: Sarah Hudson
Source: Aalto University
Contact: Sarah Hudson – Aalto University
Original Research: Open access.
“Artificial intelligence and free will: generative agents utilizing large language models have functional free will” by Frank Martela. AI and Ethics.
Abstract
Artificial intelligence and free will: generative agents utilizing large language models have functional free will
Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback.
Do such generative LLM agents possess free will?
Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions.
Building on Dennett’s intentional stance and List’s theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior.
Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining the behavior of both involves postulating that they have goals, face alternatives, and that their intentions guide their behavior.
While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.
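The agent architecture the abstract describes, an LLM wrapped with memory, planning, and execution components that sets its own goals, breaks them into plans, acts, and refines its tactics from feedback, can be pictured with a minimal sketch. The code below is a hypothetical, simplified illustration of that loop, not the Voyager agent's actual implementation; the names llm, execute, Memory, and agent_loop are placeholders introduced here purely for illustration.

```python
# Minimal, hypothetical sketch of a generative LLM-agent loop
# (goal -> plan -> execute -> feedback -> refine), illustrating the
# architecture described in the abstract. Not the Voyager agent's code.

from dataclasses import dataclass, field


@dataclass
class Memory:
    """Stores past plans and outcomes the agent can draw on."""
    episodes: list = field(default_factory=list)

    def remember(self, entry: dict) -> None:
        self.episodes.append(entry)


def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"response to: {prompt}"


def execute(step: str) -> str:
    """Placeholder for acting in the environment and returning feedback."""
    return f"feedback from: {step}"


def agent_loop(task: str, max_iterations: int = 3) -> None:
    memory = Memory()
    # 1. The agent formulates its own goal from the open-ended task.
    goal = llm(f"Given the task '{task}', propose a concrete goal.")
    for _ in range(max_iterations):
        # 2. Break the goal into a concrete next step, informed by memory.
        plan = llm(f"Goal: {goal}. Past episodes: {memory.episodes}. "
                   "Produce the next step of a plan.")
        # 3. Execute the step and observe sensory feedback.
        feedback = execute(plan)
        memory.remember({"plan": plan, "feedback": feedback})
        # 4. Refine the goal or tactics based on the feedback.
        goal = llm(f"Goal: {goal}. Feedback: {feedback}. "
                   "Revise the goal or tactics if needed.")


if __name__ == "__main__":
    agent_loop("explore and gather resources")
```

In this sketch, the choice points on which the paper's argument turns are the calls where the agent proposes its own goal and revises its tactics in light of feedback, rather than following a fixed script.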