Summary: Being able to put a name to an object involves changing network interactions between different brain regions.
Source: Baylor College of Medicine
You see an object, you think of its name and then you say it. This apparently simple activity engages a set of brain regions that must interact with each other to produce the behavior quickly and accurately. A report published in eNeuro shows that a reliable sequence of neural interactions occurs in the human brain, corresponding to the visual processing stage, the language stage when we think of the name, and finally the articulation stage when we say the name. The study reveals that this neural processing involves not just a sequence of different brain regions but a sequence of changing interactions between those regions.
“In this study, we worked with patients with epilepsy whose brain activity was being recorded with electrodes to find where their seizures started. While the electrodes were in place, we showed the patients pictures and asked them to name them while we recorded their brain activity,” said co-corresponding author Dr. Xaq Pitkow, assistant professor of neuroscience and McNair Scholar at Baylor College of Medicine and assistant professor of electrical and computer engineering at Rice University.
“We then analyzed the data we recorded and derived a new level of understanding of how the brain network comes up with the right word and enables us to say that word,” said Dr. Nitin Tandon, professor in the Vivian L. Smith Department of Neurosurgery at McGovern Medical School at The University of Texas Health Science Center at Houston.

The researchers’ findings support the view that when a person names a picture, the different behavioral stages – looking at the image, thinking of the name and saying it – consistently correspond to dynamic interactions within neural networks.
“Before our findings, the typical view was that separate brain areas would be activated in sequence,” Pitkow said. “But we used more complex statistical methods and fast measurement methods, and found more interesting brain dynamics.”
“This methodological advance provides a template by which to assess other complex neural processes, as well as to explain disorders of language production,” Tandon said.
Funding: Financial support for this study was provided by the National Institute on Deafness and Other Communication Disorders (R01DC014589), the National Institute of Neurological Disorders and Stroke (U01NS098981), National Science Foundation awards 1533664 and IOS-1552868, and the McNair Foundation.
Aram Giahi Saravani of Baylor College of Medicine and Kiefer J. Forseth of UTHealth are also authors of this work.
Media Contacts:
Graciela Gutierrez – Baylor College of Medicine
Original Research: Closed access
“Dynamic brain interactions during picture naming” by Aram Giahi Saravani, Kiefer J. Forseth, Nitin Tandon, and Xaq Pitkow.
eNeuro. doi:10.1523/ENEURO.0472-18.2019
Abstract
Dynamic brain interactions during picture naming
Brain computations involve multiple processes by which sensory information is encoded and transformed to drive behavior. These computations are thought to be mediated by dynamic interactions between populations of neurons. Here we demonstrate that human brains exhibit a reliable sequence of neural interactions during speech production. We use an autoregressive hidden Markov model to identify dynamical network states exhibited by electrocorticographic signals recorded from human neurosurgical patients. Our method resolves dynamic latent network states on a trial-by-trial basis. We characterize individual network states according to the patterns of directional information flow between cortical regions of interest. These network states occur consistently and in a specific, interpretable sequence across trials and subjects: the data support the hypothesis of a fixed-length visual processing state, followed by a variable-length language state, and then by a terminal articulation state. This empirical evidence validates classical psycholinguistic theories that have posited such intermediate states during speaking. It further reveals that these state dynamics are not localized to one brain area or one sequence of areas, but are instead a network phenomenon.
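To make the abstract's method more concrete, the sketch below is a minimal, self-contained illustration of the autoregressive hidden Markov model (ARHMM) idea on synthetic data; it is not the authors' implementation, and every dimension, parameter value, and variable name is an illustrative assumption. The key structure is that each hidden network state has its own linear dynamics coupling the channels, and a Viterbi pass recovers the most likely state sequence from the signal.

```python
# Minimal ARHMM sketch (illustrative assumptions throughout, not the
# paper's code): each hidden state k has first-order AR dynamics
#   y_t = A_k @ y_{t-1} + noise,
# where A_k stands in for directed interactions between regions.
import numpy as np

rng = np.random.default_rng(0)
K, D, T = 3, 4, 300   # hidden states, channels (regions), time steps

# Per-state dynamics matrices: scaled orthogonal matrices keep the
# simulated signal stable.
A = [0.95 * np.linalg.qr(rng.standard_normal((D, D)))[0] for _ in range(K)]
sigma = 0.1           # shared observation noise scale

# "Sticky" transition matrix: states persist, then switch.
P = np.full((K, K), 0.01 / (K - 1))
np.fill_diagonal(P, 0.99)

# --- simulate one trial --------------------------------------------------
z = np.zeros(T, dtype=int)
y = np.zeros((T, D))
y[0] = rng.standard_normal(D)
for t in range(1, T):
    z[t] = rng.choice(K, p=P[z[t - 1]])
    y[t] = A[z[t]] @ y[t - 1] + sigma * rng.standard_normal(D)

# --- decode states with Viterbi, assuming known parameters ---------------
def log_lik(t):
    """Log-likelihood of y[t] under each state's AR dynamics
    (constant normalizer dropped; it is shared across states)."""
    resid = np.stack([y[t] - A[k] @ y[t - 1] for k in range(K)])
    return -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2

logP = np.log(P)
score = np.zeros((T, K))
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    cand = score[t - 1][:, None] + logP      # cand[i, j]: from i to j
    back[t] = np.argmax(cand, axis=0)        # best predecessor per state
    score[t] = cand[back[t], np.arange(K)] + log_lik(t)

z_hat = np.zeros(T, dtype=int)
z_hat[-1] = np.argmax(score[-1])
for t in range(T - 2, -1, -1):
    z_hat[t] = back[t + 1, z_hat[t + 1]]

print("state agreement:", np.mean(z_hat == z))
```

In the paper's actual setting the parameters are not known in advance and must be learned (e.g., by expectation-maximization) from many trials; the point of the sketch is the two-level structure, with discrete network states switching over a continuous multichannel signal.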
Significance
Cued speech production engages a distributed set of brain regions that must interact with each other to perform this behavior rapidly and precisely. To characterize the spatio-temporal properties of the networks engaged in picture naming, we recorded from electrodes placed directly on the brain surfaces of patients with epilepsy being evaluated for surgical resection. We used a flexible statistical model applied to broadband gamma activity to characterize changing brain interactions. Unlike conventional models, ours can identify changes on individual trials that correlate with behavior. Our results reveal that interactions between brain regions are consistent across trials. This flexible statistical model provides a useful platform for quantifying brain dynamics during cognitive processes.
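For readers unfamiliar with the "broadband gamma" signal the model is applied to, the following is a short sketch of the standard envelope-extraction step for one electrode channel. The 70-150 Hz band, sampling rate, filter order, and stand-in signal are assumptions for illustration; the paper's exact preprocessing settings are not given in this release.

```python
# Sketch of broadband gamma envelope extraction from one ECoG channel
# (band, sampling rate, and filter order are illustrative assumptions).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate in Hz, assumed
t = np.arange(0, 2.0, 1 / fs)
ecog = np.random.default_rng(1).standard_normal(t.size)  # stand-in signal

# Band-pass to the broadband gamma range (assumed 70-150 Hz).
b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, ecog)

# The analytic-signal amplitude gives the instantaneous envelope that a
# state-space model like the one above would take as input.
envelope = np.abs(hilbert(gamma))
```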