How Artificial Neural Networks Help Us Understand Neural Networks in the Human Brain

Summary: Researchers propose a novel computational framework that uses artificial intelligence technology to disentangle the relationship between perception and memory in the human brain.

Source: Stanford

Neuroscience is a relatively young discipline, especially when compared with the physical sciences. While we understand a great deal about how, for example, physical properties emerge from atomic and subatomic forces, comparatively little is known about how intelligent behavior emerges from neural function.

To gain traction on this problem, neuroscientists often rely on intuitive concepts like “perception” and “memory,” which enable them to relate the brain to behavior. In this way, the field has begun to characterize neural function in broad strokes.

For example, in primates we know that the ventral visual stream (VVS) supports visual perception, while the medial temporal lobe (MTL) enables memory-related behaviors.

But using these concepts to describe and categorize neural processing does not mean we understand the neural functions that support these behaviors, at least not in the way physicists understand electrons. Illustrating this point, the field’s reliance on these concepts has led to enduring neuroscientific debates: Where does perception end and memory begin? Does the brain draw the same distinctions that we draw in the language we use to describe it?

This question is not mere semantics. By understanding how the brain functions in neurotypical cases (i.e., an idealized, but fictional “normal” brain), it might be possible to better support individuals experiencing pathological memory-related brain states, such as post-traumatic stress disorder.

Unfortunately, even after decades of research, characterizing the relationship between these “perceptual” and “mnemonic” systems has produced a seemingly intractable debate, frustrating attempts to translate our knowledge of the brain into clinical and other applied settings.

Neuroscientists on either side of this debate would look at identical experimental data and interpret them in radically different ways: One group of scientists claims that the MTL is involved in both memory and perception, while the other claims that the MTL is responsible only for memory-related behaviors.

To better understand how the MTL supports these behaviors, Tyler Bonnen, a Stanford doctoral candidate in psychology, has been working with Daniel Yamins, an assistant professor of psychology and of computer science and member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), as well as Anthony Wagner, a professor of psychology and director of The Memory Lab at Stanford.

Their recent work, published in the journal Neuron, proposes a novel computational framework for addressing this problem: using state-of-the-art computational tools from artificial intelligence to disentangle the relationship between perception and memory within the human brain.

“The concepts of perception and memory have been valuable in psychology in that they have allowed us to learn a great deal about neural function — but only to a point,” Bonnen says. “These terms eventually fall short of fully explaining how the brain supports these behaviors. We can see this quite clearly in the historical debate over the perceptual functions of the MTL; because experimentalists were forced to rely on their intuitions for what counted as perception and memory, they had different interpretations of the data. Data that, according to our results, are in fact consistent with a single, unified model.”

A Fresh Solution

The research team’s solution was to leverage recent advances in a field of artificial intelligence known as computer vision. This field is among the most highly developed areas of AI. More specifically, the research team used computational models that are able to predict neural responses in the primate visual system: task-optimized convolutional neural networks (CNNs).

“These models are not just ‘good’ at predicting visual behavior,” Bonnen says. “These models do a better job of predicting neural responses in the primate visual system than any of the models neuroscientists had developed explicitly for this purpose. For our project this is useful because it enables us to use these models as a proxy for the human visual system.”
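To make the approach concrete, here is a minimal sketch of how a task-optimized CNN can serve as a proxy for the visual system: extract features from a late layer of a pretrained network and treat them as a stand-in for high-level visual cortex. The specific model (an ImageNet-trained ResNet-50 from torchvision) and layer choice are illustrative assumptions, not the exact setup reported in the paper.

```python
# Sketch: use a pretrained, task-optimized CNN as a proxy for the
# ventral visual stream (VVS). Model and layer choices are
# illustrative, not necessarily those used in the published study.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
cnn.eval()

# Drop the final classification layer; treat the penultimate
# (pooled) features as a stand-in for high-level visual cortex.
feature_extractor = torch.nn.Sequential(*list(cnn.children())[:-1])

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def vvs_features(image_path: str) -> torch.Tensor:
    """Return a feature vector for a single stimulus image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return feature_extractor(img).flatten()
```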

Leveraging these tools enabled Bonnen to rerun historical experiments, which have been used as evidence to support both sides of the debate over MTL involvement in perception.

First, they collected stimuli and behavioral data from 30 previously published experiments. Then, using the exact same stimuli as in the original experiments (the same images, the same compositions, the same order of presentation, and so on), they determined how well the model performed these tasks. Finally, Bonnen compared the model’s performance directly with the behavior of the experimental participants.
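As an illustration of that comparison, the sketch below scores the model on a visual oddity task of the kind common in this literature: shown several images, pick the one that differs from the rest. The trial format and the similarity-based decision rule are simplifying assumptions for illustration (not the authors’ published pipeline), and `vvs_features` is the hypothetical helper from the sketch above.

```python
# Sketch: run the model through the same trials shown to human
# participants and compare accuracies. Trial structure is an
# illustrative assumption.
import numpy as np

def model_oddity_choice(image_paths: list[str]) -> int:
    """Pick the 'odd one out': the image whose features are least
    similar, on average, to the other images in the trial."""
    feats = np.stack([vvs_features(p).numpy() for p in image_paths])
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T          # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)       # ignore self-similarity
    return int(np.argmin(sim.mean(axis=1)))

def model_accuracy(trials: list[tuple[list[str], int]]) -> float:
    """trials: (image_paths, index_of_odd_item) pairs drawn from an
    original experiment's stimulus set."""
    hits = [model_oddity_choice(paths) == odd for paths, odd in trials]
    return float(np.mean(hits))

# The resulting score can then be compared with the published
# accuracies of MTL-lesioned and MTL-intact participants.
```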

“Our results were striking. Across experiments in this literature, our modeling framework was able to predict the behavior of MTL-lesioned subjects (i.e., subjects lacking an MTL because of neural injury). However, MTL-intact subjects were able to outperform our computational model,” Bonnen says. “These results clearly implicate MTL in what have long been described as perceptual behaviors, resolving decades of apparent inconsistencies.” 

But Bonnen hesitates when asked whether the MTL is involved in perception. “While that interpretation is entirely consistent with our findings, we’re not concerned with which words people should use to describe these MTL-dependent abilities. We’re more interested in using this modeling approach to understand how the MTL supports such enchanting — indeed, at times, indescribable — behaviors.”

“The critical difference between our work and what has come before us,” Bonnen stresses, “is not any new theoretical advance, it’s our method: We challenge the AI system to solve the same problems that confront humans, generating intelligent behaviors directly from experimental inputs — e.g., pixels.”

Settling Old Scores, Opening New Ones

The research team’s work provides a case study on the limitations of contemporary neuroscientific approaches, as well as a promising path forward: using novel tools from AI to formalize our understanding of neural function.

“Demonstrating the utility of this approach in the context of a seemingly intractable neuroscientific debate,” Bonnen offers, “we have provided a powerful proof-of-principle: These biologically plausible computational methods can help us understand neural systems beyond canonical visual cortices.” For the MTL, this holds potential not only for understanding memory-related behaviors but also for developing novel ways of helping people who suffer from memory-related pathologies, such as post-traumatic stress disorder.

Bonnen cautions that the algorithms needed to understand these affective and memory-related behaviors are far less mature than the computer vision models he deployed in the current study; indeed, many don’t yet exist and would need to be developed, ideally in ways that reflect the same biological systems that support these behaviors. Nonetheless, artificial intelligence has already offered powerful tools to formalize our intuitions about animal behavior, greatly improving our understanding of the brain.

About this artificial intelligence research news

Source: Stanford
Contact: Shana Lynch – Stanford
Image: The image is in the public domain

Original Research: Closed access.
“When the ventral visual stream is not enough: A deep learning account of medial temporal lobe involvement in perception” by Tyler Bonnen et al. Neuron


Abstract

When the ventral visual stream is not enough: A deep learning account of medial temporal lobe involvement in perception

The medial temporal lobe (MTL) supports a constellation of memory-related behaviors. Its involvement in perceptual processing, however, has been subject to enduring debate. This debate centers on perirhinal cortex (PRC), an MTL structure at the apex of the ventral visual stream (VVS).

Here we leverage a deep learning framework that approximates visual behaviors supported by the VVS (i.e., lacking PRC).

We first apply this approach retroactively, modeling 30 published visual discrimination experiments: excluding non-diagnostic stimulus sets, there is a striking correspondence between VVS-modeled and PRC-lesioned behavior, while each is outperformed by PRC-intact participants.

We corroborate and extend these results with a novel experiment, directly comparing PRC-intact human performance to electrophysiological recordings from the macaque VVS: PRC-intact participants outperform a linear readout of high-level visual cortex.

By situating lesion, electrophysiological, and behavioral results within a shared computational framework, this work resolves decades of seemingly inconsistent findings surrounding PRC involvement in perception.
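For readers unfamiliar with the term, a “linear readout” means fitting a linear classifier to population responses (here, recordings from high-level visual cortex) and measuring how well it predicts the correct answer on held-out trials. Below is a minimal, self-contained sketch with placeholder data; the array shapes and analysis choices are assumptions for illustration, not the study’s actual pipeline.

```python
# Sketch of a "linear readout": a linear classifier trained on
# neural population responses, evaluated by cross-validation.
# The data here are random placeholders, not real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 100))  # (trials, neurons) firing rates
labels = rng.integers(0, 2, size=200)    # correct choice on each trial

readout = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(readout, responses, labels, cv=5).mean()
print(f"linear readout accuracy: {accuracy:.2f}")

# The study's finding: PRC-intact participants outperform this kind
# of readout, implicating processing beyond high-level visual cortex.
```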
