Summary: The human brain processes spoken language in a step-by-step sequence that closely matches how large language models transform text. Using electrocorticography recordings from people listening to a podcast, researchers found that early brain responses aligned with early AI layers, while deeper layers corresponded to later neural activity in regions such as Broca’s area.
The findings challenge traditional theories of language that rely on fixed rules, instead highlighting dynamic, context-driven computation. The team also released a rich dataset linking neural signals with linguistic features, offering a powerful resource for future neuroscience research.
Key Facts
- Layered Alignment: Early brain responses tracked early AI model layers, while deeper layers aligned with later neural activity.
- Context Over Rules: AI-derived contextual embeddings predicted brain activity better than classical linguistic units.
- New Resource: Researchers released a large neural–linguistic dataset to accelerate language neuroscience.
Source: Hebrew University of Jerusalem
In a study published in Nature Communications, researchers led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, in collaboration with Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University, uncovered a surprising connection between the way our brains make sense of spoken language and the way advanced AI models analyze text.
Using electrocorticography (ECoG) recordings from participants listening to a thirty-minute podcast, the team showed that the brain processes language in a structured sequence that mirrors the layered architecture of large language models such as GPT-2 and Llama 2.
What the Study Found
When we listen to someone speak, our brain transforms each incoming word through a cascade of neural computations. Goldstein’s team discovered that these transformations unfold over time in a pattern that parallels the tiered layers of AI language models.
Early AI layers track simple features of words, while deeper layers integrate context, tone, and meaning. The study found that human brain activity follows a similar progression: early neural responses aligned with early model layers, and later neural responses aligned with deeper layers.
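To make that layered picture concrete, here is a minimal sketch of how per-layer contextual embeddings can be pulled from a GPT-2 model with the Hugging Face transformers library. It uses the small gpt2 checkpoint and a placeholder input string for brevity (the study used GPT-2 XL and Llama 2); this is an illustration of the general technique, not the authors' actual pipeline.

```python
# Minimal sketch: extract per-layer contextual embeddings for text.
# Illustrative only; the study used GPT-2 XL and Llama 2.
import torch
from transformers import GPT2TokenizerFast, GPT2Model

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "the podcast transcript would go here"  # placeholder input
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: the embedding layer plus one tensor per
# transformer layer, each shaped (batch, tokens, hidden_size). Early
# entries track surface word features; deeper entries integrate context.
for depth, layer in enumerate(outputs.hidden_states):
    print(f"layer {depth}: {tuple(layer.shape)}")
```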
This alignment was especially clear in high-level language regions such as Broca’s area, where the peak brain response occurred later in time for deeper AI layers.
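The kind of lag analysis this implies can be sketched as follows: for each model layer, fit a linear encoding model at a range of temporal lags between word onset and the neural signal, then record the lag at which prediction peaks. Every array name, shape, and value below is an assumed placeholder, not the released dataset's actual format.

```python
# Sketch of a layer-by-lag encoding analysis (assumed shapes, synthetic
# data): for each layer, find the lag at which a linear model best
# predicts one electrode's activity from the word embeddings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_words, dim, n_layers, n_lags = 500, 768, 13, 21
rng = np.random.default_rng(0)

# Placeholder stand-ins for real data: per-layer word embeddings and
# neural activity sampled at each lag relative to word onset.
embeddings = rng.standard_normal((n_layers, n_words, dim))
neural = rng.standard_normal((n_lags, n_words))  # one electrode

peak_lags = []
for layer in range(n_layers):
    scores = [
        cross_val_score(Ridge(alpha=1.0), embeddings[layer],
                        neural[lag], cv=5).mean()
        for lag in range(n_lags)
    ]
    peak_lags.append(int(np.argmax(scores)))

# With real data the study found that peak lag grows with layer depth;
# with these random placeholders the printed correlation is just noise.
print(np.corrcoef(np.arange(n_layers), peak_lags)[0, 1])
```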
According to Dr. Goldstein, “What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.”
Why It Matters
The findings suggest that artificial intelligence is not just a tool for generating text. It may also offer a new window into understanding how the human brain processes meaning. For decades, scientists believed that language comprehension relied on symbolic rules and rigid linguistic hierarchies.
This study challenges that view. Instead, it supports a more dynamic and statistical approach to language, in which meaning emerges gradually through layers of contextual processing.
The researchers also found that classical linguistic features such as phonemes and morphemes did not predict the brain’s real-time activity as well as AI-derived contextual embeddings. This strengthens the idea that the brain integrates meaning in a more fluid and context-driven way than previously believed.
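As a rough illustration of that comparison, the sketch below pits a classical feature set (here, hypothetical one-hot symbolic codes such as part-of-speech tags) against contextual embeddings in the same cross-validated encoding framework. The features, dimensions, and data are synthetic placeholders, not the study's materials.

```python
# Sketch comparing encoding performance of classical linguistic features
# versus contextual embeddings (all data here is synthetic placeholder).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_words = 500
rng = np.random.default_rng(1)

# Hypothetical classical features: one-hot codes over a small symbolic
# inventory (e.g., part-of-speech tags).
classical = np.eye(12)[rng.integers(0, 12, n_words)]

# Contextual embeddings from a language-model layer (placeholder values).
contextual = rng.standard_normal((n_words, 768))

# Simulated electrode response constructed so that, as in the study, it
# is better explained by the contextual representation.
neural = contextual @ rng.standard_normal(768) + rng.standard_normal(n_words)

for name, X in [("classical", classical), ("contextual", contextual)]:
    score = cross_val_score(Ridge(alpha=1.0), X, neural, cv=5).mean()
    print(f"{name:10s} cross-validated R^2: {score:.3f}")
```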
A New Benchmark for Neuroscience
To advance the field, the team publicly released the full dataset of neural recordings paired with linguistic features. This new resource enables scientists worldwide to test competing theories of how the brain understands natural language, paving the way for computational models that more closely resemble human cognition.
Key Questions Answered:
Q: What did the study find about how the brain processes spoken language?
A: The brain transforms spoken language through a sequence of computations that align with progressively deeper layers of large language models.
Q: Why do the findings matter for theories of language?
A: They challenge rule-based theories of language, suggesting instead that meaning emerges through dynamic, context-driven processing similar to modern AI systems.
Q: What resource did the researchers release?
A: A publicly available dataset pairing electrocorticography recordings with linguistic features, enabling new tests of competing language theories.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this language and AI research news
Author: Yarden Mills
Source: Hebrew University of Jerusalem
Contact: Yarden Mills – Hebrew University of Jerusalem
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Temporal structure of natural language processing in the human brain corresponds to layered hierarchy of large language models” by Uri Hasson et al. Nature Communications.
Abstract
Temporal structure of natural language processing in the human brain corresponds to layered hierarchy of large language models
Large Language Models (LLMs) offer a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context through layered numerical embeddings.
Here, we demonstrate that LLMs’ layer hierarchy aligns with the temporal dynamics of language comprehension in the brain.
Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions.
We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses across time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension.
We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models in capturing brain dynamics.
We release our aligned neural and linguistic dataset as a public benchmark to test competing theories of language processing.

