Summary: According to researchers, the primate brain anticipates new situations by creating a special kind of neural network that is pre-adapted to face any eventuality.
Scientists have shown how the brain anticipates the new situations it may encounter in a lifetime by creating a special kind of neural network that is “pre-adapted” to face any eventuality, according to a new neuroscience study published in PLOS Computational Biology.
Enel et al. at INSERM in France investigate one of the most noteworthy properties of primate behavior: its diversity and adaptability. Human and non-human primates can learn an astonishing variety of novel behaviors that could not have been directly anticipated by evolution. The study suggests that this ability to cope with new situations stems from the “pre-adapted” nature of the primate brain.
This study shows that this seemingly miraculous pre-adaptation arises from connections between neurons that form recurrent loops, in which inputs can rebound and mix in the network like waves in a pond, an approach known as “reservoir” computing. This mixing of inputs yields a potentially universal representation of input combinations that can then be used to learn the right behavior for a new situation.
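The reservoir idea described above can be sketched in a few lines of numpy: a fixed random recurrent network mixes present and past inputs, and only a linear readout is trained. The network sizes, parameters, and toy task below (a one-step-delayed XOR, chosen because it requires both memory and nonlinear mixing) are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 2, 100                      # sizes chosen for illustration
W_in = rng.uniform(-1, 1, (n_res, n_in))  # fixed random input weights
W = rng.normal(0, 1, (n_res, n_res))      # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the reservoir and collect one state vector per time step."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        # Recurrent loop: the present input mixes with echoes of past inputs
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: report the XOR of the two inputs seen one step earlier.
# Solving it needs both memory of the past and a nonlinear combination,
# i.e. the kind of mixed selectivity the reservoir provides "for free".
T = 500
U = rng.integers(0, 2, (T, n_in)).astype(float)
y = np.roll((U[:, 0] != U[:, 1]).astype(float), 1)

X = run_reservoir(U)
# Only the linear readout is learned (ridge regression); the recurrent
# weights stay fixed, as in the reservoir computing framework.
W_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(n_res), X.T @ y)
accuracy = (((X @ W_out) > 0.5) == (y > 0.5))[10:].mean()
```

Scaling the recurrent weights to a spectral radius below one keeps the “waves in the pond” fading rather than exploding, so the state reflects a decaying mixture of recent inputs.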
The authors demonstrated this by training a reservoir network to perform a novel problem-solving task. They then compared the activity of neurons in the model with the activity of neurons in the prefrontal cortex of a research primate trained to perform the same task. Remarkably, there were striking similarities in the activation of neurons in both the reservoir model and the primate.
This work marks a big step toward understanding how the local recurrent connectivity of the brain prepares primates to face an essentially unlimited range of situations. The research shows that because the brain's recurrent networks allow essentially unlimited combinations of internal representations, a suitable one is always on hand for the situation encountered.
Funding: The present work was funded by European research projects IST-231267 (Organic), FP7 270490 (EFAA), FP7 612139 (WYSIWYD), and CRCNS NSF-ANR ANR-14-NEUC-0005-1 (Spaquence). EP is funded by the Agence Nationale de la Recherche ANR-06-JCJC-0048 and ANR-11-BSV4-0006, and by the labex CORTEX ANR-11-LABX-0042. The authors declare no competing financial interests.
Source: Peter Ford Dominey – PLOS
Image Source: This NeuroscienceNews.com image is credited to Craig ONeal.
Original Research: Full open access research for “Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex” by Pierre Enel, Emmanuel Procyk, René Quilodran, and Peter Ford Dominey in PLOS Computational Biology. Published online June 10 2016 doi:10.1371/journal.pcbi.1004967
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics.
We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function.
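The abstract's second model amplifies the pre-coded context representation by training a feedback neuron that explicitly carries the context back into the reservoir. A rough numpy sketch of that idea, under assumed parameters (not the paper's implementation): a “context neuron” readout is trained by ridge regression while its target value is teacher-forced onto the feedback connections.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 200                               # illustrative size

W = rng.normal(0, 1, (n_res, n_res))      # fixed random recurrent weights
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
W_in = rng.uniform(-1, 1, (n_res, 2))     # two context-setting pulse channels
W_fb = rng.uniform(-1, 1, n_res)          # feedback from the context neuron

# Toy task: a brief pulse on channel 0 or 1 sets the context, which must
# then be held internally until the next pulse arrives.
T = 1000
U = np.zeros((T, 2))
ctx = np.zeros(T)
c = 0.0
for t in range(T):
    if rng.random() < 0.02:               # occasional context switch
        c = float(rng.integers(0, 2))
        U[t, int(c)] = 1.0
    ctx[t] = c

# Teacher forcing: during training the feedback carries the true context,
# so the reservoir state is shaped by an explicit context signal.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    fb = ctx[t - 1] if t > 0 else 0.0
    x = np.tanh(W_in @ U[t] + W @ x + W_fb * fb)
    states[t] = x

# Learn the context neuron's readout weights by ridge regression.
W_out = np.linalg.solve(states.T @ states + 1e-4 * np.eye(n_res),
                        states.T @ ctx)
train_acc = ((states @ W_out > 0.5) == (ctx > 0.5))[20:].mean()
```

At run time the neuron's own output `states[t] @ W_out` would replace the teacher signal, closing the loop; that feedback is what creates the input-driven attracting dynamics the abstract describes.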