The Brain Applies Data Compression for Decision-Making

Summary: The brain maximizes performance while minimizing cost by using data compression to help optimize decision-making.

Source: Champalimaud Centre for the Unknown

If you were a kid in the 80s, or are a fan of retro video games, then you probably know Frogger. The game can be quite a challenge. To win, you must first survive a stream of heavy traffic and then narrowly escape oblivion by zig-zagging across speeding wooden logs.

How does the brain know what to focus on amid all this mess?

A study published today in the scientific journal Nature Neuroscience provides a possible solution: data compression.

“Compressing the representations of the external world is akin to eliminating all irrelevant information and adopting temporary ‘tunnel vision’ of the situation”, said one of the study’s senior authors, Christian Machens, head of the Theoretical Neuroscience lab at the Champalimaud Foundation in Portugal.

“The idea that the brain maximises performance while minimising cost by using data compression is pervasive in studies of sensory processing. However, it hasn’t really been examined in cognitive functions,” said senior author Joe Paton, Director of the Champalimaud Neuroscience Research Programme.

“Using a combination of experimental and computational techniques, we demonstrated that this same principle extends across a much broader range of functions than previously appreciated.” 

In their experiments, the researchers used a timing paradigm. In each trial, mice had to determine whether two tones were separated by an interval longer or shorter than 1.5 seconds. At the same time, the researchers recorded the activity of dopamine neurons in each animal’s brain as it performed the task.
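
To make the paradigm concrete, here is a minimal sketch in Python of how such a temporal bisection trial might be generated and scored. The candidate intervals and function names are illustrative assumptions; only the 1.5-second boundary comes from the article.

```python
import random

BOUNDARY_S = 1.5  # "short" vs "long" decision boundary (from the study)

def make_trial():
    """Draw a tone-to-tone interval and the correct choice for one trial.

    The set of candidate intervals is an assumption for illustration;
    the paper used its own stimulus set.
    """
    interval = random.choice([0.6, 1.05, 1.26, 1.74, 1.95, 2.4])  # seconds
    correct = "long" if interval > BOUNDARY_S else "short"
    return interval, correct

def reward(choice, correct):
    """1 for a correct judgement, 0 otherwise."""
    return 1.0 if choice == correct else 0.0

interval, correct = make_trial()
print(f"interval = {interval:.2f} s -> correct answer: {correct}")
```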

“It is well known that dopamine neurons play a key role in learning the value of actions”, Machens explained. “So if the animal wrongly estimated the duration of the interval on a given trial, then the activity of these neurons would produce a ‘prediction error’ that should help improve performance on future trials.”
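
For readers unfamiliar with the jargon: in reinforcement learning, the “prediction error” is the gap between the reward the agent expected and what it actually received, and it is what drives value updates. Below is a minimal temporal-difference sketch of that idea, not the study’s actual model; the state count, learning rate, and discount are assumed values.

```python
import numpy as np

N_STATES = 20             # coarse discretization of a trial (assumed)
ALPHA, GAMMA = 0.1, 0.95  # learning rate and discount (illustrative)
V = np.zeros(N_STATES)    # learned value of each state

def td_update(s, s_next, r):
    """One temporal-difference update.

    `delta` plays the role of the dopaminergic prediction error: positive
    when the outcome is better than predicted, negative when worse.
    """
    delta = r + GAMMA * V[s_next] - V[s]
    V[s] += ALPHA * delta
    return delta
```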

Asma Motiwala, the first author of the study, built a variety of computational reinforcement learning models and tested which was best at capturing both the activity of the neurons and the behaviour of the animals. The models shared some common principles, but differed in how they represented the information that might be relevant for performing the task. 

The team discovered that only models with a compressed task representation could account for the data. “The brain seems to eliminate all irrelevant information. Curiously, it also apparently gets rid of some relevant information, but not enough to take a real hit on how much reward the animal collects overall. It clearly knows how to succeed in this game”, Machens said. 
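
One way to picture a “compressed task representation” is a coarse state code that keeps only a few features of elapsed time instead of a fine-grained clock. The sketch below contrasts the two; the bin sizes and edges are illustrative assumptions, not the representations actually fit in the paper.

```python
import numpy as np

def full_clock(t, dt=0.05, t_max=3.0):
    """Fine-grained representation: one-hot over many small time bins."""
    n = round(t_max / dt)
    x = np.zeros(n)
    x[min(int(t / dt), n - 1)] = 1.0
    return x

def compressed_clock(t, edges=(0.0, 1.0, 1.5, 2.0, 3.0)):
    """Compressed representation: a handful of coarse bins around the
    1.5 s boundary. Fine temporal detail is thrown away, but the
    information needed for the short/long choice survives."""
    x = np.zeros(len(edges) - 1)
    i = min(np.searchsorted(edges, t, side="right") - 1, len(x) - 1)
    x[i] = 1.0
    return x

print(full_clock(1.6).size)   # 60 features
print(compressed_clock(1.6))  # 4 features: [0. 0. 1. 0.]
```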

Interestingly, the information represented was not limited to the variables of the task itself; it also captured the animal’s own actions.


“Previous research has focused on the features of the environment independently of the individual’s behaviour. But we found that only compressed representations that depended on the animal’s actions fully explained the data.

“Indeed, our study is the first to show that the way representations of the external world are learnt, especially taxing ones such as in this task, may interact in unusual ways with how animals choose to act”, Motiwala explained. 
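
As a loose illustration of what an action-dependent representation could look like, the state an agent learns over might concatenate the compressed time code with the animal’s most recent action. This is a schematic assumption for intuition, not the model the authors fit; the action labels are invented.

```python
import numpy as np

TIME_EDGES = (0.0, 1.0, 1.5, 2.0, 3.0)             # coarse bins, as in the earlier sketch
ACTIONS = ("none", "report_short", "report_long")  # assumed action encoding

def action_dependent_state(t, last_action):
    """Compressed time code + one-hot of the agent's own last action,
    so that learned values can depend on what the animal just did."""
    x = np.zeros(len(TIME_EDGES) - 1)
    x[min(np.searchsorted(TIME_EDGES, t, side="right") - 1, len(x) - 1)] = 1.0
    a = np.zeros(len(ACTIONS))
    a[ACTIONS.index(last_action)] = 1.0
    return np.concatenate([x, a])

print(action_dependent_state(1.6, "report_long"))  # [0. 0. 1. 0. 0. 0. 1.]
```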

According to the authors, this finding has broad implications for neuroscience as well as for artificial intelligence. “While the brain has clearly evolved to process information efficiently, AI algorithms often solve problems by brute force: using lots of data and lots of parameters. Our work provides a set of principles to guide future studies on how internal representations of the world may support intelligent behavior in the context of biology and AI”, Paton concluded.

About this neuroscience research news

Author: Liad Hollender
Source: Champalimaud Centre for the Unknown
Contact: Liad Hollender – Champalimaud Centre for the Unknown
Image: The image is in the public domain

Original Research: Closed access.
“Efficient coding of cognitive variables underlies dopamine responses and choice behavior” by Asma Motiwala et al. Nature Neuroscience


Abstract

Efficient coding of cognitive variables underlies dopamine responses and choice behavior

Reward expectations based on internal knowledge of the external environment are a core component of adaptive behavior. However, internal knowledge may be inaccurate or incomplete due to errors in sensory measurements. Some features of the environment may also be encoded inaccurately to minimize representational costs associated with their processing.

In this study, we investigated how reward expectations are affected by features of internal representations by studying behavior and dopaminergic activity while mice make time-based decisions.

We show that several possible representations allow a reinforcement learning agent to model animals’ overall performance during the task.

However, only a small subset of highly compressed representations simultaneously reproduced the co-variability in animals’ choice behavior and dopaminergic activity. Strikingly, these representations predicted an unusual distribution of response times that closely matches animals’ behavior.

These results inform how constraints of representational efficiency may be expressed in encoding representations of dynamic cognitive variables used for reward-based computations.
