Consciousness May Require a New Kind of Computation

Summary: A new theoretical framework argues that the long-standing split between computational functionalism and biological naturalism misses how real brains actually compute.

The authors propose “biological computationalism,” the idea that neural computation is inseparable from the brain’s physical, hybrid, and energy-constrained dynamics rather than an abstract algorithm running on hardware. In this view, discrete neural events and continuous physical processes form a tightly coupled system that cannot be reduced to symbolic information processing.

The theory suggests that digital AI, despite its capabilities, may not recreate the essential computational style that gives rise to conscious experience. Instead, truly mind-like cognition may require building systems whose computation emerges from physical dynamics similar to those found in biological brains.

Key Facts:

  • Hybrid Dynamics: Brain computation arises from discrete spikes embedded within continuous chemical and electrical fields.
  • Multi-Scale Coupling: Neural processes remain deeply intertwined across levels, meaning algorithms cannot be separated from physical implementation.
  • Energetic Constraints: Metabolic limits shape neural computation, influencing learning, stability, and information flow.

Source: Estonian Research Council

Right now, the debate about consciousness often feels frozen between two entrenched positions.

On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness.

On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is.

Image: a brain. Credit: Neuroscience News

Each camp captures something important, but the stalemate suggests that something is missing from the picture.

In our new paper, we argue for a third path: biological computationalism. The idea is deliberately provocative but, we think, clarifying. Our core claim is that the traditional computational paradigm is broken, or at least badly mismatched with how real brains operate.

For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations.

If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.

Biological computation, as we describe it, has three defining properties.

First, it is hybrid: it combines discrete events with continuous dynamics. Neurons spike, synapses release neurotransmitters, and networks exhibit event-like transitions, yet all of this is embedded in evolving fields of voltage, chemical gradients, ionic diffusion, and time-varying conductances.

The brain is not purely digital, and it is not merely an analog machine either. It is a layered system where continuous processes shape discrete happenings, and discrete happenings reshape continuous landscapes, in a constant feedback loop.
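
To make that flavor concrete, here is a minimal sketch (our toy illustration, not a model from the paper) of a leaky integrate-and-fire neuron: the membrane potential evolves continuously, a threshold crossing produces a discrete spike, and the spike in turn resets the continuous state. All parameter values are illustrative.

```python
import numpy as np

# Toy illustration of "hybrid" computation (not the authors' model):
# continuous membrane dynamics generate discrete spike events, and each
# spike reshapes the continuous state. Parameter values are arbitrary.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.075, resistance=1e8):
    """Euler integration of dV/dt = (-(V - v_rest) + R*I) / tau."""
    v = v_rest
    spikes, trace = [], []
    for step, current in enumerate(input_current):
        # Continuous dynamics: the membrane potential evolves smoothly.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        # Discrete event: crossing threshold emits a spike...
        if v >= v_thresh:
            spikes.append(step * dt)
            # ...and the spike resets the continuous landscape.
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Usage: a constant 0.3 nA input for 200 ms.
trace, spike_times = simulate_lif(np.full(2000, 0.3e-9))
print(f"{len(spike_times)} spikes; first at {spike_times[0]*1000:.1f} ms"
      if spike_times else "no spikes")
```

Even this toy captures the loop we have in mind: the continuous trajectory determines when discrete events happen, and each event redraws the continuous trajectory.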

Second, it is scale-inseparable. In conventional computing, we can draw a clean line between software and hardware, or between a “functional level” and an “implementation level.” In brains, that separation is not clean at all.

There is no tidy boundary where we can say: here is the algorithm, and over there is the physical stuff that happens to realize it. The causal story runs through multiple scales at once, from ion channels to dendrites to circuits to whole-brain dynamics, and the levels do not behave like modular layers in a stack.

Changing the “implementation” changes the “computation,” because in biological systems, those are deeply entangled.

Third, biological computation is metabolically grounded. The brain is an energy-limited organ, and its organization reflects that constraint everywhere. Importantly, this is not just an engineering footnote; it shapes what the brain can represent, how it learns, which dynamics are stable, and how information flows are orchestrated.

In this view, tight coupling across levels is not accidental complexity. It is an energy optimization strategy: a way to produce robust, adaptive intelligence under severe metabolic limits.
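
As a rough illustration of that point (ours, not the paper's formalism), consider a linear readout trained with an added "metabolic" penalty on its output activity: the hypothetical energy term changes which solution learning converges to, not just how quickly it converges.

```python
import numpy as np

# Toy sketch of energy-constrained learning (our illustration, not the
# paper's formalism): gradient descent on squared error plus a penalty on
# mean output activity, standing in for a metabolic cost on signalling.
# All names and values here are invented for illustration.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                              # input patterns
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)    # target signal

def train(energy_cost, lr=0.01, steps=500):
    w = np.zeros(10)
    for _ in range(steps):
        pred = X @ w
        # Gradient of 0.5*mean((pred - y)^2) + energy_cost*mean(pred^2)
        grad = X.T @ ((pred - y) + 2 * energy_cost * pred) / len(y)
        w -= lr * grad
    return w, np.mean((X @ w) ** 2)   # learned weights, mean "energy" used

for cost in (0.0, 0.5, 2.0):
    w, energy = train(cost)
    err = np.mean((X @ w - y) ** 2)
    print(f"energy_cost={cost:.1f}  error={err:.3f}  activity_energy={energy:.3f}")
```

Raising the made-up energy_cost trades accuracy for lower activity, a crude stand-in for the claim that metabolic limits constrain what can be represented and learned, rather than being a mere engineering footnote.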

These three properties lead to a conclusion that can feel uncomfortable if we are used to thinking in classical computational terms: computation in the brain is not abstract symbol manipulation. It is not simply a matter of shuffling representations according to formal rules, with the physical medium relegated to “mere implementation.”

Instead, in biological computation, the algorithm is the substrate. The physical organization does not just support the computation; it constitutes it. Brains don’t merely run a program. They are a particular kind of physical process that performs computation by unfolding in time.

This also highlights a key limitation in how we often talk about contemporary AI. Current systems, for all their power, largely simulate functions. They approximate mappings from inputs to outputs, often with impressive generalization, but the computation is still fundamentally a digital procedure executed on hardware designed for a very different computational style.

Brains, by contrast, instantiate computation in physical time. Continuous fields, ion flows, dendritic integration, local oscillatory coupling, and emergent electromagnetic interactions are not just biological “details” we might safely ignore while extracting an abstract algorithm.

In our view, these are the computational primitives of the system. They are the mechanism by which the brain achieves real-time integration, resilience, and adaptive control.

This does not mean we think consciousness is magically exclusive to carbon-based life. We are not making a “biology or nothing” argument.

What we are claiming is more specific: if consciousness (or mind-like cognition) depends on this kind of computation, then it may require biological-style computational organization, even if it is implemented in new substrates.

In other words, the crucial question is not whether the substrate is literally biological, but whether the system instantiates the right class of hybrid, scale-inseparable, metabolically (or more generally energetically) grounded computation.

That shift changes the target for anyone interested in synthetic minds. If the brain’s computation is inseparable from the way it is physically realized, then scaling digital AI alone may not be sufficient. Not because digital systems can’t become more capable, but because capability is only part of the story.

The deeper challenge is that we might be optimizing the wrong thing: improving algorithms while leaving the underlying computational ontology untouched.

Biological computationalism suggests that to engineer genuinely mind-like systems, we may need to build new kinds of physical systems: machines whose computing is not layered neatly into software on hardware, but distributed across levels, dynamically coupled, and grounded in the constraints of real-time physics and energy.

So, if we want something like synthetic consciousness, the problem may not be, “What algorithm should we run?” The problem may be, “What kind of physical system must exist for that algorithm to be inseparable from its own dynamics?”

What are the necessary features—hybrid event–field interactions, multi-scale coupling without clean interfaces, energetic constraints that shape inference and learning—such that computation is not an abstract description laid on top, but an intrinsic property of the system itself?

That is the shift biological computationalism demands: moving from a search for the right program to a search for the right kind of computing matter.

Key Questions Answered:

Q: What problem does the new framework aim to solve?

A: It addresses the stalemate between theories that view consciousness as pure information processing and those that ground it exclusively in biology, proposing a model that integrates computation with physical dynamics.

Q: Why can’t brain computation be treated like conventional digital computation?

A: Biological computation depends on continuous physical processes, energy constraints, and multi-scale interactions that fundamentally change how information is represented and transformed.

Q: What does this imply for creating synthetic consciousness?

A: If consciousness depends on biological-style computation, then future artificial systems may need new physical architectures—not just scaled-up digital algorithms—to replicate mind-like properties.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this consciousness and AI research news

Author: Merilin Reede
Source: Estonian Research Council
Contact: Merilin Reede – Estonian Research Council
Image: The image is credited to Neuroscience News

Original Research: Open access.
“On biological and artificial consciousness: A case for biological computationalism” by Jaan Aru et al. Neuroscience and Biobehavioral Reviews


Abstract

On biological and artificial consciousness: A case for biological computationalism

The rapid advances in the capabilities of Large Language Models (LLMs) have galvanised public and scientific debates over whether artificial systems might one day be conscious. Prevailing optimism is often grounded in computational functionalism: the assumption that consciousness is determined solely by the right pattern of information processing, independent of the physical substrate.

Opposing this, biological naturalism insists that conscious experience is fundamentally dependent on the concrete physical processes of living systems. Despite the centrality of these positions to the artificial consciousness debate, there is currently no coherent framework that explains how biological computation differs from digital computation, and why this difference might matter for consciousness.

Here, we argue that the absence of consciousness in artificial systems is not merely due to missing functional organisation but reflects a deeper divide between digital and biological modes of computation and the dynamico-structural dependencies of living organisms.

Specifically, we propose that biological systems support conscious processing because they (i) instantiate scale-inseparable, substrate-dependent multiscale processing as a metabolic optimisation strategy, and (ii) alongside discrete computations, they perform continuous-valued computations due to the very nature of the fluidic substrate from which they are composed.

These features – scale inseparability and hybrid computations – are not peripheral, but essential to the brain’s mode of computation.

In light of these differences, we outline the foundational principles of a biological theory of computation and explain why current artificial intelligence systems are unlikely to replicate conscious processing as it arises in biology.
