
Simple Tasks Don’t Test Brain’s True Complexity

Summary: The brain’s ability to perform approximate probabilistic inference cannot be studied with simple tasks, which are ill-suited to expose the inferential computations that make the brain special, researchers say.

Source: Rice University.

The human brain naturally makes its best guess when making a decision, and studying those guesses can be very revealing about the brain’s inner workings. But neuroscientists at Rice University and Baylor College of Medicine said a full understanding of the complexity of the human brain will require new research strategies that better simulate real-world conditions.

Xaq Pitkow and Dora Angelaki, both faculty members in Baylor’s Department of Neuroscience and Rice’s Department of Electrical and Computer Engineering, said the brain’s ability to perform “approximate probabilistic inference” cannot be truly studied with simple tasks that are “ill-suited to expose the inferential computations that make the brain special.”

A new article by the researchers suggests the brain uses nonlinear message-passing between connected, redundant populations of neurons that draw upon a probabilistic model of the world. That model, coarsely passed down via evolution and refined through learning, simplifies decision-making based on general concepts and its particular biases.
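Message-passing of this kind is easiest to see in a toy graphical model. The sketch below is an illustrative sum-product (belief propagation) computation on a three-variable chain, not the authors’ neural algorithm; the variables and compatibility tables are hypothetical numbers chosen for the example.

```python
# Minimal sum-product message-passing sketch on a chain A - B - C of
# binary variables. Illustrative only; potentials are hypothetical.
import numpy as np

# Pairwise compatibility tables: rows index the first variable, columns the second.
psi_AB = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
psi_BC = np.array([[0.7, 0.3],
                   [0.4, 0.6]])
prior_A = np.array([0.6, 0.4])
prior_C = np.array([0.5, 0.5])

# Message from A to B: sum over states of A of prior(A) * psi(A, B).
m_A_to_B = prior_A @ psi_AB
# Message from C to B: sum over states of C of psi(B, C) * prior(C).
m_C_to_B = psi_BC @ prior_C

# The marginal over B is the normalized product of the incoming messages --
# each message summarizes everything one side of the chain "knows" about B.
marginal_B = m_A_to_B * m_C_to_B
marginal_B /= marginal_B.sum()
```

The point of the example is that B’s marginal is computed without ever enumerating the joint distribution over all three variables; each neighbor passes a compact summary instead, which is the property the article proposes neural populations exploit.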

The article, which lays out a broad research agenda for neuroscience, is featured this month in a special edition of Neuron. The edition presents ideas that first appeared as part of a workshop at the University of Copenhagen last September titled “How Does the Brain Work?”

“Evolution has given us what we call a good model bias,” Pitkow said. “It’s been known for a couple of decades that very simple neural networks can compute any function, but those universal networks can be enormous, requiring extraordinary time and resources.

“In contrast, if you have the right kind of model — not a completely general model that could learn anything, but a more limited model that can learn specific things, especially the kind of things that often happen in the real world — then you have a model that’s biased. In this sense, bias can be a positive trait. We use it to be sensitive to the right things in the world that we inhabit. Of course, the flip side is that when our brain’s bias is not matched to reality, it can lead to severe problems.”

The researchers said simple tests of brain processes, like those in which subjects choose between two options, provide only simple results. “Before we had access to large amounts of data, neuroscience made huge strides from using simple tasks, and they’ll remain very useful,” Pitkow said. “But for computations that we think are most important about the brain, there are things you just can’t reveal with some of those tasks.” Pitkow and Angelaki wrote that tasks should incorporate more diversity — like nuisance variables and uncertainty — to better simulate real-world conditions that the brain evolved to handle.


Rice University and Baylor College of Medicine researchers are taking a deep look at the models by which the brain infers correct decisions. The graphic outlines, from left, interrelated variables in a simple statistical model, a neural network model with populations of neurons that capture the same structure, and a variant of the neural network collapsed into a more realistic overlapping configuration. All three images represent populations of neurons that hold specific models of the world. The researchers are working to untangle these networks to determine how the brain infers solutions to problems without being overwhelmed by data. NeuroscienceNews.com image is credited to Xaq Pitkow and Dora Angelaki.

They suggested that the brain infers solutions based on statistical crosstalk between redundant population codes. Population codes are responses by collections of neurons that are sensitive to certain inputs, like the shape or movement of an object. Pitkow and Angelaki think that to better understand the brain, it can be more useful to describe what these populations compute, rather than precisely how each individual neuron computes it. Pitkow said this means thinking “at the representational level” rather than the “mechanistic level,” as described by the influential vision scientist David Marr.
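A standard way to make "a population encodes a distribution" concrete is the probabilistic population code. The sketch below is a hedged illustration under textbook assumptions (independent Poisson neurons with Gaussian tuning curves), not the specific model of the paper; all parameters are hypothetical.

```python
# Probabilistic population code sketch: independent Poisson neurons with
# Gaussian tuning curves. Given spike counts r, the log-posterior over a
# stimulus s (flat prior) is
#     log p(s | r) = sum_i r_i * log f_i(s) - sum_i f_i(s) + const,
# so the population jointly encodes a full distribution, not just a point.
import numpy as np

rng = np.random.default_rng(0)
stim_grid = np.linspace(-10, 10, 201)   # candidate stimulus values
centers = np.linspace(-8, 8, 17)        # preferred stimuli of 17 neurons

def tuning(s):
    """Mean firing rates f_i(s): Gaussian bumps (hypothetical parameters)."""
    return 5.0 * np.exp(-0.5 * ((s - centers) / 2.0) ** 2) + 0.1

true_s = 1.5
r = rng.poisson(tuning(true_s))         # one observed vector of spike counts

# Evaluate the log-posterior on the grid, then normalize.
rates = np.stack([tuning(s) for s in stim_grid])      # shape (grid, neurons)
log_post = r @ np.log(rates).T - rates.sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()
s_hat = stim_grid[np.argmax(post)]      # maximum-a-posteriori estimate
```

Note that `post` carries uncertainty as well as a best guess: fewer spikes yield a broader posterior. Describing what such a population computes, rather than what each neuron does, is the representational-level stance the researchers advocate.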

The research has implications for artificial intelligence, another interest of both researchers.

“A lot of artificial intelligence has done impressive work lately, but it still fails in some spectacular ways,” Pitkow said. “Machines can play the ancient game of Go and beat the best human player in the world, as done recently by DeepMind’s AlphaGo about a decade before anybody expected. But AlphaGo doesn’t know how to pick up the Go pieces. Even the best algorithms are extremely specialized. Their ability to generalize is often still pretty poor. Our brains have a much better model of the world; we can learn more from less data. Neuroscience theories suggest ways to translate experiments into smarter algorithms that could lead to a greater understanding of general intelligence.”

Pitkow is an assistant professor in the Department of Neuroscience and co-director of the Center for Neuroscience and Artificial Intelligence at Baylor and is an assistant professor of electrical and computer engineering at Rice. Angelaki is the Wilhelmina Robertson Professor of Neurosurgery at Baylor and an adjunct professor of electrical and computer engineering and of psychology at Rice.

About this neuroscience research article

Funding: The research was supported by the McNair Foundation, the National Science Foundation, Britton Sanderford, the Intelligence Advanced Research Projects Activity via the Department of the Interior/Interior Business Center, the Simons Collaboration on the Global Brain and the National Institutes of Health.

Source: Rice University
Image Source: NeuroscienceNews.com image is credited to Xaq Pitkow and Dora Angelaki.
Original Research: Abstract for “Inference in the Brain: Statistics Flowing in Redundant Population Codes” by Xaq Pitkow and Dora E. Angelaki in Neuron. Published online July 7, 2017. doi:10.1016/j.neuron.2017.05.028

Cite This NeuroscienceNews.com Article
Rice University. “Simple Tasks Don’t Test Brain’s True Complexity.” NeuroscienceNews, 8 June 2017.
<http://neurosciencenews.com/brain-complexity-tests-6867/>.
Rice University. (2017, June 8). Simple Tasks Don’t Test Brain’s True Complexity. NeuroscienceNews. Retrieved June 8, 2017 from http://neurosciencenews.com/brain-complexity-tests-6867/
Rice University. “Simple Tasks Don’t Test Brain’s True Complexity.” http://neurosciencenews.com/brain-complexity-tests-6867/ (accessed June 8, 2017).

Abstract

Inference in the Brain: Statistics Flowing in Redundant Population Codes

It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.

