Output of Single Neurons Can Predict Behavior on Perceptual Tests

By analyzing the signals of individual neurons in animals undergoing behavioral tests, neuroscientists at Rice University, Baylor College of Medicine, the University of Geneva and the University of Rochester have deciphered the code the brain uses to make the most of its inherently “noisy” neuronal circuits.

The human brain contains about 100 billion neurons, and each of these sends signals to thousands of other neurons each second. Understanding how neurons work, both individually and collectively, is key to understanding how humans think, and to treating neurological and psychiatric disorders such as Alzheimer’s disease, Parkinson’s disease, autism, epilepsy, schizophrenia, depression, traumatic brain injury and paralysis.

“If the brain could always count on receiving the same sensory response to the same stimulus, it would have an easier time,” said neuroscientist Xaq Pitkow, lead author of a new study this week in Neuron. “But noise is always there in the brain: studies have repeatedly shown that neurons give a variety of responses to the same stimulus.”

Pitkow, assistant professor of neuroscience at Baylor and assistant professor of electrical and computer engineering at Rice, said “noise” can be described as anything that changes neural activity in a way that doesn’t depend on the task the brain wants to accomplish.

Not only are neural responses noisy, but each neuron’s noise is correlated with the noise in thousands of other neurons. That means that something that affects the output of one neuron may be amplified to affect many more. Because of these correlations, it is extraordinarily difficult for scientists to accurately model how small groups of neurons will affect the way a person or animal reacts to a given stimulus.

Given both these correlated responses and the inherently noisy nature of neuronal signals, scientists have struggled to explain a seeming paradox that was first observed in experiments more than 25 years ago.

“When neuroscientists first analyzed the output of individual neurons, they were surprised to find that the activity of just a single neuron sometimes predicted behavior in certain tasks,” Pitkow said.

This perplexing finding has turned up in numerous experiments, but neuroscientists have yet to explain it.

“A lot of people have studied this and offered up different kinds of models that make all sorts of assumptions,” Pitkow said. “By integrating all of those ideas and applying some analytical techniques, we found there were two different ways this could happen.”

He said one possibility is that many neurons share the same information, process it independently and arrive at the same answer. The other possibility is that each neuron uses different information and casts its vote for a slightly different answer, but the brain does a poor job of combining those votes into a consensus.

“The first model is a bit like trying to find a needle in a haystack, and the second is like trying to find a needle on a clean floor while looking backward through a pair of binoculars,” Pitkow said. “Each piece of straw looks like a needle, which makes the haystack test very difficult. On the other hand, a needle should really stand out on a clean floor, but it will be hard to find with a bad searching method.”

In each case, the neurons are correlated with one another, “but in the first instance the noise correlations can never be removed, and in the second they could and should be removed but they’re not,” Pitkow said. “And each of these scenarios has very different consequences for the brain’s code, how it represents information. In terms of information theory, if the brain has a lot of information and it is not doing a good job of using it, there are very different implications than if all the neurons are correlated and they’re all informative in the same way.”
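
A toy simulation makes the distinction concrete. The sketch below is my own illustration, not a method from the study: the population sizes, noise levels and sign-of-the-average “choice” rules are invented. It builds one population whose neurons all share the same noisy signal (so extra neurons add nothing new) and one whose neurons carry independent information but are read out by a crude consensus; in both cases, the neurons the readout relies on end up correlated with the final choice.

```python
# Minimal sketch of the two scenarios, with made-up parameters and deliberately
# simple "choice" rules (sign of an average); none of the numbers come from the study.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 5000

# Ambiguous stimulus (no net motion), so any left/right choice is driven by noise.
# Scenario 1: every neuron sees the same signal plus a shared noise term
# ("information-limiting" correlations); extra neurons add nothing new.
shared = rng.normal(0.0, 1.0, size=(n_trials, 1))
resp_shared = shared + rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
choice_shared = np.sign(resp_shared.mean(axis=1))        # best possible readout: average everyone

# Scenario 2: neurons carry independent information (the population knows a lot),
# but the "consensus" listens to only a handful of them.
resp_indep = rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
choice_indep = np.sign(resp_indep[:, :5].mean(axis=1))   # poor readout: first five neurons only

def choice_correlation(resp, choice):
    """Correlation between each neuron's trial-by-trial response and the binary choice."""
    return np.array([np.corrcoef(resp[:, i], choice)[0, 1] for i in range(resp.shape[1])])

print("shared-information population, mean choice correlation:",
      round(choice_correlation(resp_shared, choice_shared).mean(), 2))
print("independent population, choice correlation of the 5 neurons the readout uses:",
      round(choice_correlation(resp_indep, choice_indep)[:5].mean(), 2))
print("independent population, choice correlation of the ignored neurons:",
      round(choice_correlation(resp_indep, choice_indep)[5:].mean(), 2))
```

In the first population no readout can do better, because the shared noise never averages away; in the second, the information is there but the readout wastes it, just as the two scenarios above describe.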

By analyzing the signals of individual neurons, neuroscientists have deciphered the code the brain uses to make the most of its inherently ‘noisy’ neuronal circuits. These green and purple hills represent the average activity for many neurons in two different brain regions. These neuronal activity patterns will differ from time to time, even in response to exactly the same sensory stimulus, and those differences set the limit for how well the brain can sense things. Image credit: Xaq Pitkow/Rice University.

To determine which of these scenarios is at play in the brain, Pitkow and colleagues developed two mathematical models, one for each scenario. The models described how information and noise would flow through the network in the two opposing cases.

The team tested each model against the activity of single neurons in monkeys that were undergoing perceptual tests to measure how accurately they could perceive slight movements to the left or right. The experimenters found that some neurons predicted the animals’ guesses about whether they were moving left or right.
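
What “predicting the animal’s guess” means in practice can be illustrated with the classic choice-probability analysis, an ROC comparison of a neuron’s spike counts on trials ending in one choice versus the other. The sketch below uses invented spike counts and an invented function name; the paper itself works with the closely related choice correlation, but the idea is the same.

```python
# Illustrative only: fake spike counts and invented parameters, showing a
# choice-probability (ROC) analysis that asks whether one neuron's trial-by-trial
# activity predicts the animal's left/right guess on identical, ambiguous trials.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 400
choices = rng.choice([-1, 1], size=n_trials)               # -1 = "left" guess, +1 = "right" guess
spikes = rng.poisson(lam=np.where(choices == 1, 22, 20))   # slightly more spikes before "right" guesses

def choice_probability(spike_counts, choices):
    """Area under the ROC curve separating right-choice from left-choice spike counts.
    0.5 means the neuron carries no choice signal; values near 1 mean strong prediction."""
    right = spike_counts[choices == 1]
    left = spike_counts[choices == -1]
    # Probability that a random right-choice count exceeds a random left-choice count,
    # counting ties as half; this equals the ROC area.
    greater = (right[:, None] > left[None, :]).mean()
    ties = (right[:, None] == left[None, :]).mean()
    return greater + 0.5 * ties

print(f"choice probability of this simulated neuron: {choice_probability(spikes, choices):.2f}")
```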

“When we examined the output, we found that the monkeys’ brains were not throwing away information,” Pitkow said. “They were using each neuron’s information very effectively. And we also saw that even though there were many neurons involved, the guess of any individual neuron was only slightly worse than the animal’s actual guess during the test. These two pieces of evidence together indicate the neurons mostly share the same information.”
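
A back-of-the-envelope version of that comparison, with invented numbers, shows why the small gap matters: when most of the limiting noise is shared across the population, a single neuron’s “guess” (the sign of its response) is only slightly less accurate than the animal’s, because averaging over many neurons cannot remove the shared part.

```python
# Toy comparison (all numbers invented) of one neuron's accuracy versus the
# animal's accuracy when the limiting noise is shared across the population.
import numpy as np

rng = np.random.default_rng(2)
headings = [-4, -2, -1, 1, 2, 4]   # degrees of leftward (-) or rightward (+) motion
n_rep = 2000

for h in headings:
    shared = 0.9 * h + rng.normal(0.0, 2.0, n_rep)    # signal plus noise every neuron shares
    neuron = shared + rng.normal(0.0, 1.0, n_rep)     # one neuron: shared part plus its own private noise
    animal = shared + rng.normal(0.0, 0.3, n_rep)     # animal: averages many neurons, little private noise left
    neuron_acc = np.mean(np.sign(neuron) == np.sign(h))
    animal_acc = np.mean(np.sign(animal) == np.sign(h))
    print(f"heading {h:+d} deg: neuron guesses {neuron_acc:.0%} correct, animal {animal_acc:.0%} correct")
```

If the noise were mostly private to each neuron, pooling across the population would let the animal far outperform any single cell; the small gap is what points to shared information.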

But if every neuron is doing the same processing, why have so many? It’s an obvious question, Pitkow said, but it’s beyond the scope of what he and his colleagues could address in the current study.

“We didn’t explore the value of redundancy in this study, but we are very interested in that question,” Pitkow said. He pointed out that the vestibular sensors, the part of the inner ear dedicated to the sense of balance, contain only about 6,000 of the brain’s 100 billion neurons. Even those few thousand might be redundant, which would mean that the rest of the neurons they contact also are redundant.

“One intriguing possibility that we are looking into is that redundancy allows the brain to reformat information and approach complex problems from many different angles,” he said.

About this neurology research

Funding: The research was supported by the National Institutes of Health, the McNair Foundation, the McDonnell Foundation and the Swiss National Science Foundation.

Study co-authors include Sheng Liu of Baylor, Dora Angelaki of both Baylor and Rice, Gregory DeAngelis of the University of Rochester and Alexandre Pouget of the University of Geneva.

Source: Rice University
Image Credit: The image is credited to Xaq Pitkow/Rice University
Original Research: Abstract for “How Can Single Sensory Neurons Predict Behavior?” by Xaq Pitkow, Sheng Liu, Dora E. Angelaki, Gregory C. DeAngelis, and Alexandre Pouget in Neuron. Published online July 16, 2015. doi:10.1016/j.neuron.2015.06.033


Abstract

How Can Single Sensory Neurons Predict Behavior?

Highlights
• Responses of single neurons correlate with heading percepts
• This can be explained by optimally decoding populations with limited information…
• …or by suboptimally decoding populations with extensive information
• Electrophysiological data support the model with limited information

Summary
Single sensory neurons can be surprisingly predictive of behavior in discrimination tasks. We propose this is possible because sensory information extracted from neural populations is severely restricted, either by near-optimal decoding of a population with information-limiting correlations or by suboptimal decoding that is blind to correlations. These have different consequences for choice correlations, the correlations between neural responses and behavioral choices. In the vestibular and cerebellar nuclei and the dorsal medial superior temporal area, we found that choice correlations during heading discrimination are consistent with near-optimal decoding of neuronal responses corrupted by information-limiting correlations. In the ventral intraparietal area, the choice correlations are also consistent with the presence of information-limiting correlations, but this area does not appear to influence behavior, although the choice correlations are particularly large. These findings demonstrate how choice correlations can be used to assess the efficiency of the downstream readout and detect the presence of information-limiting correlations.

