Learning on the Fly

Summary: Fruit flies may use dopamine to learn in similar ways to humans.

Source: University of Sussex

Even the humble fruit fly craves a dose of the happy hormone, according to a new study from the University of Sussex, which shows how the insects may use dopamine to learn in a similar manner to humans.

Informatics experts at the University of Sussex have developed a new computational model that demonstrates a long-sought link between insect and mammalian learning, as detailed in a new paper published today in Nature Communications.

Incorporating anatomical and functional data from recent experiments, Dr James Bennett and colleagues modelled how the anatomy and physiology of the fruit fly’s brain can support learning according to the reward prediction error (RPE) hypothesis.

The computational model indicates how dopamine neurons in an area of a fruit fly’s brain, known as the mushroom body, can produce similar signals to dopamine neurons in mammals, and how these dopamine signals can reliably instruct learning.

The academics believe that establishing whether flies also use prediction errors to learn could lead to more humane animal research, allowing researchers to replace mammals with simpler insect species in future studies of the mechanisms of learning.

By opening up new opportunities to study the neural mechanisms of learning, the researchers hope the model could also deepen understanding of mental health issues such as depression and addiction, whose study rests heavily on the RPE hypothesis.

Dr Bennett, research fellow in the University of Sussex’s School of Engineering and Informatics, said: “Using our computational model, we were able to show that data from insect experiments did not necessarily conflict with predictions from the RPE hypothesis, as had been thought previously.

“Establishing a bridge between insect and mammal studies on learning may open up the possibility to exploit the powerful genetic tools available for performing experiments in insects, and the smaller scale of their brains, to make sense of brain function and disease in mammals, including humans.”

Understanding of how mammals learn has come a long way thanks to the RPE hypothesis, which proposes that associative memories are updated in proportion to how inaccurate their reward predictions are.
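In its simplest textbook form (a sketch of the general idea, not the specific formulation used in the paper), the hypothesis can be written as a delta rule: the prediction error is the difference between the reward received and the reward currently predicted, and the memory is adjusted in proportion to that error.

```latex
\delta = r - V, \qquad V \leftarrow V + \alpha\,\delta
```

Here \(r\) is the reward actually received, \(V\) is the reward predicted from the learned association, and \(\alpha\) is a learning rate; once the prediction is accurate, \(\delta\) falls to zero and learning stops.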

The hypothesis has had considerable success explaining experimental data about learning in mammals, and has been extensively applied to decision-making and mental health illnesses such as addiction and depression. But scientists have encountered difficulties when applying the hypothesis to learning in insects due to conflicting results from different experiments.

The University of Sussex research team created a computational model to show how the major features of mushroom body anatomy and physiology can implement learning according to the RPE hypothesis.

The model simulates a simplification of the mushroom body, including different neuron types and the connections between them, and shows how the activity of those neurons promotes learning and influences the decisions a fly makes when certain choices are rewarded.
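To make the scheme concrete, below is a minimal, hypothetical Python sketch of prediction-error learning in such a simplified circuit. It assumes a sparse population of Kenyon cells encoding an odour, a single output neuron whose input weights store the reward prediction, and a dopamine signal carrying the difference between reward and prediction; the population size, learning rate, and single-output simplification are illustrative assumptions, not the authors' published model.

```python
import numpy as np

# Hypothetical sketch: reward-prediction-error learning in a toy mushroom body.
# Kenyon cells (KCs) encode an odour, a mushroom body output neuron (MBON)
# reads out a reward prediction from the KC->MBON weights, and a dopamine
# neuron (DAN) broadcasts the prediction error that gates plasticity.

rng = np.random.default_rng(0)

n_kc = 100                                      # number of Kenyon cells (assumed)
w = np.zeros(n_kc)                              # KC -> MBON weights: the associative memory
alpha = 0.05                                    # learning rate (assumed)

odour = (rng.random(n_kc) < 0.1).astype(float)  # sparse KC activity for one odour
reward = 1.0                                    # e.g. sugar delivered with this odour

for trial in range(50):
    prediction = w @ odour                      # MBON output: predicted reward
    dan = reward - prediction                   # DAN activity as a prediction error
    w += alpha * dan * odour                    # dopamine-gated plasticity at active synapses

print(round(float(w @ odour), 3))               # prediction has converged towards 1.0
```

Over repeated pairings the prediction approaches the delivered reward and the dopamine (prediction error) signal fades, which is the behaviour the RPE hypothesis predicts.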

To further understanding of learning in fly brains, the research team used their model to make five novel predictions about the influence different neurons in the mushroom body have on learning and decision-making, in the hope that these will guide future experimental work.

Dr Bennett said: “While other models of the mushroom body have been created, to the best of our knowledge no other model until now has included connections between dopamine neurons and another set of neurons that predict and drive behaviour towards rewards.


“For example, when the reward is the sugar content of food, these connections would allow the predicted sugar availability to be compared with the actual sugar ingested, allowing more accurate predictions and appropriate sugar-seeking behaviours to be learned.

“The model can explain a large array of behaviours exhibited by fruit flies when the activity of particular neurons in their brains are either silenced or activated artificially in experiments. We also propose connections between dopamine neurons and other neurons in the mushroom body, which have not yet been reported in experiments, but would help to explain even more experimental data.”

Thomas Nowotny, Professor of Informatics at the University of Sussex, said: “The model brings together learning theory and experimental knowledge in a way that allows us to think systematically how fly brains actually work. The results show how learning in simple flies might be more similar to how we learn than previously thought.”

About this neuroscience and learning research news

Source: University of Sussex
Contact: Neil Vowles – University of Sussex
Image: The image is credited to University of Sussex

Original Research: Open access.
“Learning with reinforcement prediction errors in a model of the Drosophila mushroom body” by James E. M. Bennett, Andrew Philippides & Thomas Nowotny. Nature Communications


Abstract

Learning with reinforcement prediction errors in a model of the Drosophila mushroom body

Effective decision making in a changing environment demands that accurate predictions are learned about decision outcomes. In Drosophila, such learning is orchestrated in part by the mushroom body, where dopamine neurons signal reinforcing stimuli to modulate plasticity presynaptic to mushroom body output neurons.

Building on previous mushroom body models, in which dopamine neurons signal absolute reinforcement, we propose instead that dopamine neurons signal reinforcement prediction errors by utilising feedback reinforcement predictions from output neurons.

We formulate plasticity rules that minimise prediction errors, verify that output neurons learn accurate reinforcement predictions in simulations, and postulate connectivity that explains more physiological observations than an experimentally constrained model.

The constrained and augmented models reproduce a broad range of conditioning and blocking experiments, and we demonstrate that the absence of blocking does not imply the absence of prediction error dependent learning.

Our results provide five predictions that can be tested using established experimental methods.
