AI-Enhanced Imaging: Probing the Brain’s Visual Processing

Summary: Researchers used AI to select and generate images for studying the brain’s visual processing. Functional MRI (fMRI) recordings showed heightened brain activity in response to these images compared with control images.

The approach enabled tuning visual models to individual responses, enhancing the study of the brain’s reaction to visual stimuli. This method, offering an unbiased, systematic view of visual processing, could revolutionize neuroscience and therapeutic approaches.

Key Facts:

  1. AI-selected and generated images were used to systematically study the brain’s visual processing, yielding significantly greater activation in targeted areas compared to control images.
  2. Personalized AI models were effective in enhancing the brain’s response to visual stimuli, showing potential for individualized neuroscience studies.
  3. The research opens avenues for studying other sensory systems and exploring therapeutic applications, like modifying brain connectivity for mental health treatment.

Source: Weill Cornell University

Researchers at Weill Cornell Medicine, Cornell Tech and Cornell’s Ithaca campus have demonstrated the use of AI-selected natural images and AI-generated synthetic images as neuroscientific tools for probing the visual processing areas of the brain.

The goal is to apply a data-driven approach to understand how vision is organized while potentially removing biases that may arise when looking at responses to a more limited set of researcher-selected images.

In the study, published Oct. 23 in Communications Biology, the researchers had volunteers look at images that had been selected or generated based on an AI model of the human visual system.

Image generated by an AI algorithm called BigGAN-deep that was designed to activate one specific part of the brain that is known to respond to images of faces. Image generated on Jan. 28, 2021. Credit: Weill Cornell Medicine

The images were predicted to maximally activate several visual processing areas. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that the images did activate the target areas significantly better than control images.

The researchers also showed that they could use this image-response data to tune their vision model for individual volunteers, so that images generated to be maximally activating for a particular individual worked better than images generated based on a general model.

“We think this is a promising new approach to study the neuroscience of vision,” said study senior author Dr. Amy Kuceyeski, a professor of mathematics in radiology and of mathematics in neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.

The study was a collaboration with the laboratory of Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine. The study’s first author was Dr. Zijin Gu, who was a doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski at the time of the study.

Making an accurate model of the human visual system, in part by mapping brain responses to specific images, is one of the more ambitious goals of modern neuroscience. Researchers have found, for example, that one visual processing region may activate strongly in response to an image of a face whereas another may respond to a landscape.

Scientists must rely mainly on non-invasive methods in pursuit of this goal, given the risk and difficulty of recording brain activity directly with implanted electrodes.

The preferred non-invasive method is fMRI, which essentially records changes in blood flow in small vessels of the brain—an indirect measure of brain activity—as subjects are exposed to sensory stimuli or otherwise perform cognitive or physical tasks. An fMRI machine can read out these tiny changes in three dimensions across the brain, at a resolution on the order of cubic millimeters.

For their own studies, Dr. Kuceyeski and Dr. Sabuncu and their teams used an existing dataset comprising tens of thousands of natural images, with corresponding fMRI responses from human subjects, to train an AI-type system called an artificial neural network (ANN) to model the human brain’s visual processing system.
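The training step described above can be sketched in miniature. The snippet below is a toy stand-in, not the study’s actual pipeline: it swaps the deep ANN for a simple linear ridge-regression encoding model, and all images, features, and fMRI responses are synthetic random data invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real data: feature vectors for each image
# and recorded fMRI responses in a set of voxels of a target region.
n_images, n_features, n_voxels = 500, 64, 10
X = rng.standard_normal((n_images, n_features))       # image features
W_true = rng.standard_normal((n_features, n_voxels))  # unknown brain mapping
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))  # fMRI responses

# Fit the encoding model (closed-form ridge regression)
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Score the model on held-out images: predicted vs. true responses
X_test = rng.standard_normal((100, n_features))
Y_pred = X_test @ W_hat
corr = np.corrcoef((X_test @ W_true).ravel(), Y_pred.ravel())[0, 1]
print(f"held-out prediction correlation: {corr:.2f}")
```

The key idea carried over from the study is the direction of the mapping: an encoding model predicts brain responses *from* images, which is what makes it possible to search image space for maximally activating stimuli.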

They then used this model to predict which images, across the dataset, should maximally activate several targeted vision areas of the brain. They also coupled the model with an AI-based image generator to generate synthetic images to accomplish the same task.
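In the same toy setting, “predicting which images should maximally activate” a region reduces to scoring every image in the dataset with the fitted encoding model and taking the top of the ranking. The features and weights below are synthetic stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_features = 10_000, 64

# Stand-ins: precomputed features for every dataset image, and fitted
# encoding weights for one target visual region.
dataset_features = rng.standard_normal((n_images, n_features))
w_region = rng.standard_normal(n_features)

# Predicted activation of the target region for each image
scores = dataset_features @ w_region

# Indices of the predicted maximal activators, best first
top_k = np.argsort(scores)[::-1][:8]
print(top_k)
```

Generating synthetic maximal activators works the same way in principle, except that instead of ranking a fixed dataset, the score is optimized over the latent input of an image generator.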

“Our general idea here has been to map and model the visual system in a systematic, unbiased way, in principle even using images that a person normally wouldn’t encounter,” Dr. Kuceyeski said.

The researchers enrolled six volunteers and recorded their fMRI responses to these images, focusing on the activity in several visual processing areas.

The results showed that, for both the natural images and the synthetic images, the predicted maximal activator images, on average across the subjects, did activate the targeted brain regions significantly more than a set of images that were selected or generated to be only average activators.

This supports the general validity of the team’s ANN-based model and suggests that even synthetic images may be useful as probes for testing and improving such models.

In a follow-on experiment, the team used the image and fMRI-response data from the first session to create separate ANN-based visual system models for each of the six subjects. They then used these individualized models to select or generate predicted maximal-activator images for each subject.
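A minimal sketch of that personalization step, again with a linear model and synthetic data standing in for the study’s ANN and fMRI recordings: the subject-specific model starts from the group-level weights and is fine-tuned on that subject’s session-one responses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features = 32

# Stand-ins: group-level encoding weights, and one subject's true
# tuning, which deviates from the group average.
w_group = rng.standard_normal(n_features)
w_subject_true = w_group + 0.5 * rng.standard_normal(n_features)

# Session-1 data for this subject: images shown and measured responses
X = rng.standard_normal((200, n_features))
y = X @ w_subject_true + 0.1 * rng.standard_normal(200)

# Personalize: gradient-descent fine-tuning starting from group weights
w = w_group.copy()
lr = 0.1
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad

err_group = np.mean((X @ w_group - y) ** 2)
err_personal = np.mean((X @ w - y) ** 2)
print(f"group MSE: {err_group:.2f}, personalized MSE: {err_personal:.2f}")
```

The personalized weights fit this subject’s responses better than the group weights do, which mirrors the article’s finding that images optimized for an individual’s model outperformed images from the general model.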

The fMRI responses to these images showed that, at least for the synthetic images, there was greater activation of the targeted visual region, a face-processing region called FFA1, compared to the responses to images based on the group model.

This result suggests that AI and fMRI can be useful for individualized visual-system modeling, for example to study differences in visual system organization across populations.

The researchers are now running similar experiments using a more advanced version of the image generator, called Stable Diffusion.

The same general approach could be useful in studying other senses such as hearing, they noted.

Dr. Kuceyeski also hopes ultimately to study the therapeutic potential of this approach.

“In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety,” she said.

About this AI and visual neuroscience research news

Author: Barbara Prempeh
Source: Weill Cornell University
Contact: Barbara Prempeh – Weill Cornell University
Image: The image is credited to Weill Cornell Medicine

Original Research: Open access.
“Human brain responses are modulated when exposed to optimized natural images or synthetically generated images” by Amy Kuceyeski et al. Communications Biology


Human brain responses are modulated when exposed to optimized natural images or synthetically generated images

Understanding how human brains interpret and process information is important. Here, we investigated the selectivity and inter-individual differences in human brain responses to images via functional MRI.

In our first experiment, we found that images predicted to achieve maximal activations using a group level encoding model evoke higher responses than images predicted to achieve average activations, and the activation gain is positively associated with the encoding model accuracy.

Furthermore, anterior temporal lobe face area (aTLfaces) and fusiform body area 1 had higher activation in response to maximal synthetic images compared to maximal natural images.

In our second experiment, we found that synthetic images derived using a personalized encoding model elicited higher responses compared to synthetic images from group-level or other subjects’ encoding models. The finding of aTLfaces favoring synthetic images over natural images was also replicated.

Our results indicate the possibility of using data-driven and generative approaches to modulate macro-scale brain region responses and probe inter-individual differences in, and functional specialization of, the human visual system.
