Summary: A new study explores the neural basis for facial recognition and identification.
Source: Carnegie Mellon University.
At a glance, you can recognize a friend’s face whether they are happy or sad, even if you haven’t seen them in a decade. How does the brain recognize familiar faces with such efficiency and ease, despite extensive variation in how they appear?
Researchers at Carnegie Mellon University are closer than ever before to understanding the neural basis of facial identification. In a study published in the Dec. 26, 2016 issue of the Proceedings of the National Academy of Sciences (PNAS), they used highly sophisticated brain imaging tools and computational methods to measure the real-time brain processes that convert the appearance of a face into the recognition of an individual. The research team is hopeful that the findings might be used in the near future to locate the exact point at which the visual perception system breaks down in different disorders and injuries, ranging from developmental dyslexia to prosopagnosia, or face blindness.
“Our results provide a step toward understanding the stages of information processing that begin when an image of a face first enters a person’s eye and unfold over the next few hundred milliseconds, until the person is able to recognize the identity of the face,” said Mark D. Vida, a postdoctoral research fellow in the Dietrich College of Humanities and Social Sciences’ Department of Psychology and Center for the Neural Basis of Cognition (CNBC).
To determine how the brain rapidly distinguishes faces, the researchers scanned the brains of four people using magnetoencephalography (MEG). MEG allowed them to measure ongoing brain activity throughout the brain on a millisecond-by-millisecond basis while the participants viewed images of 91 different individuals with two facial expressions each: happy and neutral. The participants indicated when they recognized that the same individual’s face was repeated, regardless of expression.
The MEG scans allowed the researchers to map out, for each of many points in time, which parts of the brain encode appearance-based information and which encode identity-based information. The team also compared the neural data to human observers’ behavioral judgments of the face images, which were based mainly on identity-based information. They then validated the results by comparing the neural data to the information present in different parts of a computational simulation of an artificial neural network trained to recognize individuals from the same face images.
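The comparison described above resembles representational similarity analysis: at each time point, pairwise dissimilarities between neural response patterns are correlated with a model’s predicted dissimilarity structure. The sketch below is illustrative only, not the authors’ actual pipeline; the synthetic data, the 10-identity-by-2-expression design, and all function names are assumptions made for the example.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between every pair of stimulus patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle of a square matrix (diagonal excluded)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def model_fit_timecourse(neural, model_rdm):
    """Correlate the neural RDM with a model RDM at each time point.

    neural: array of shape (n_stimuli, n_sensors, n_times), e.g. evoked
    MEG responses to each face image.  Returns one fit value per time point.
    """
    mv = upper(model_rdm)
    fits = []
    for t in range(neural.shape[2]):
        nv = upper(rdm(neural[:, :, t]))
        fits.append(np.corrcoef(nv, mv)[0, 1])
    return np.array(fits)

# Toy demo: 10 hypothetical identities x 2 expressions = 20 stimuli,
# 20 sensors, 30 time points of synthetic data.
rng = np.random.default_rng(0)
neural = rng.standard_normal((20, 20, 30))
ids = np.repeat(np.arange(10), 2)                       # identity label per stimulus
identity_model = (ids[:, None] != ids[None, :]).astype(float)  # 0 = same identity
fits = model_fit_timecourse(neural, identity_model)
print(fits.shape)  # (30,)
```

In the study, an image-based model RDM (raw stimulus similarity) and an identity-based model RDM (same vs. different person) would each be compared against the neural timecourse in this fashion, revealing when each kind of information dominates.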
“Combining the detailed timing information from MEG imaging with computational models of how the visual system works has the potential to provide insight into the real-time brain processes underlying many other abilities beyond face recognition,” said David C. Plaut, professor of psychology and a member of the CNBC.
In addition to Vida and Plaut, CMU’s Marlene Behrmann and University of Toronto Scarborough’s Adrian Nestor participated in the study.
Funding: This research was funded by the Natural Sciences and Engineering Research Council, Pennsylvania Department of Health’s Commonwealth Universal Research Enhancement Program and the National Science Foundation.
Source: Shilo Rea – Carnegie Mellon University
Image Source: NeuroscienceNews.com image is adapted from the CMU press release.
Original Research: Abstract for “Spatiotemporal dynamics of similarity-based neural representations of facial identity” by Mark D. Vida, Adrian Nestor, David C. Plaut, and Marlene Behrmann in PNAS. Published online December 27, 2016. doi:10.1073/pnas.1614763114
Spatiotemporal dynamics of similarity-based neural representations of facial identity
Humans’ remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level “image-based” and higher level “identity-based” model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.