How Blind People Recognize Faces via Sound

Summary: A new study reveals that people who are blind can recognize faces using auditory patterns processed by the fusiform face area, a brain region crucial for face processing in sighted individuals.

The study employed a sensory substitution device to translate images into sound, demonstrating that face recognition in the brain isn’t solely dependent on visual experience. Functional MRI scans of blind and sighted participants showed that the fusiform face area encodes the concept of a face irrespective of the sensory input.

This discovery challenges the understanding of how facial recognition develops and functions in the brain.

Key Facts:

  1. The study shows that the fusiform face area in the brain can process the concept of a face through auditory patterns, not just visually.
  2. Functional MRI scans revealed that this area is active in both blind and sighted individuals during face recognition tasks.
  3. The research utilized a specialized device to translate visual information into sound, enabling blind participants to recognize basic facial configurations.

Source: Georgetown University Medical Center

Using a specialized device that translates images into sound, Georgetown University Medical Center neuroscientists and colleagues showed that people who are blind recognized basic faces using the part of the brain known as the fusiform face area, a region that is crucial for the processing of faces in sighted people.

The findings appeared in PLOS ONE on November 22, 2023.


“It’s been known for some time that people who are blind can compensate for their loss of vision, to a certain extent, by using their other senses,” says Josef Rauschecker, Ph.D., D.Sc., professor in the Department of Neuroscience at Georgetown University and senior author of this study.

“Our study tested the extent to which this plasticity, or compensation, between seeing and hearing exists by encoding basic visual patterns into auditory patterns with the aid of a technical device we refer to as a sensory substitution device. With the use of functional magnetic resonance imaging (fMRI), we can determine where in the brain this compensatory plasticity is taking place.”
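The article doesn’t specify how the device encodes images, but many sensory substitution devices use a pitch-and-time scheme (popularized by systems such as The vOICe): the image is scanned column by column over time, vertical position maps to pitch, and pixel brightness maps to loudness. The following is a minimal sketch of that general scheme, purely for illustration; the function name and parameters are hypothetical, and the study’s actual device may encode images differently.

```python
# Hypothetical sketch of a common image-to-sound encoding scheme.
# Columns -> time slices, rows -> sine-tone pitch, brightness -> loudness.
import numpy as np

def sonify(image, duration=1.0, rate=44100, f_lo=500.0, f_hi=5000.0):
    """Convert a 2D grayscale array (values in [0, 1]) into a mono waveform."""
    rows, cols = image.shape
    samples_per_col = int(duration * rate / cols)
    # One sine frequency per row: high pitch at the top of the image,
    # low pitch at the bottom, spaced logarithmically like musical pitch.
    freqs = np.geomspace(f_lo, f_hi, rows)[::-1]
    chunks = []
    for c in range(cols):
        t = np.arange(samples_per_col) / rate
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        # Each pixel's brightness weights the loudness of its row's tone.
        chunk = (image[:, c:c + 1] * tones).sum(axis=0)
        chunks.append(chunk)
    wave = np.concatenate(chunks)
    return wave / (np.abs(wave).max() + 1e-9)  # normalize to [-1, 1]
```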

Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. Due to their importance for social behavior, many researchers believe that the neural mechanisms for face recognition are innate in primates or depend on early visual experience with faces.

“Our results from people who are blind imply that fusiform face area development does not depend on experience with actual visual faces but on exposure to the geometry of facial configurations, which can be conveyed by other sensory modalities,” Rauschecker adds.

Paula Plaza, Ph.D., one of the lead authors of the study, who is now at Universidad Andres Bello, Chile, says, “Our study demonstrates that the fusiform face area encodes the ‘concept’ of a face regardless of the input channel or of visual experience, which is an important discovery.”

Six people who are blind and 10 sighted people, who served as control subjects, underwent three rounds of functional MRI scans to see which parts of the brain were activated as images were translated into sound.

The scientists found that sound-evoked brain activation in people who are blind occurred primarily in the left fusiform face area, whereas face processing in sighted people occurred mostly in the right fusiform face area.

“We believe the left/right difference between people who are and aren’t blind may have to do with how the left and right sides of the fusiform area process faces – either as connected patterns or as separate parts, which may be an important clue in helping us refine our sensory substitution device,” says Rauschecker, who is also co-director of the Center for Neuroengineering at Georgetown University.

Currently, with their device, people who are blind can recognize a basic ‘cartoon’ face (such as an emoji happy face) when it is transcribed into sound patterns. Recognizing faces via sounds was a time-intensive process that took many practice sessions.

Each session started with participants learning to recognize simple geometric shapes, such as horizontal and vertical lines. The complexity of the stimuli was then gradually increased, so that the lines formed shapes such as houses or faces, which in turn became more complex (tall versus wide houses, happy versus sad faces).
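As a toy illustration of that progression, training stimuli like these could be generated as small grayscale grids and fed to a sonifier such as the sketch above. The shapes and their layout are invented here for illustration and are not the study’s actual stimuli.

```python
# Illustrative stimuli of increasing complexity (not the study's materials).
import numpy as np

def horizontal_line(size=16):
    """Simplest stimulus: a single horizontal line."""
    img = np.zeros((size, size))
    img[size // 2, :] = 1.0
    return img

def cartoon_face(size=16, happy=True):
    """A schematic face: two eyes and a mouth whose corners turn up or down."""
    img = np.zeros((size, size))
    img[3, 5] = img[3, 10] = 1.0                     # eyes
    img[size - 4, 6:10] = 1.0                        # mouth center
    corner_row = size - 5 if happy else size - 3     # corners above or below
    img[corner_row, 5] = img[corner_row, 10] = 1.0   # happy turns up, sad down
    return img

# Example: turn a happy cartoon face into sound with the earlier sketch.
# wave = sonify(cartoon_face(happy=True))
```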

Ultimately, the scientists would like to use pictures of real faces and houses with their device, but they note that they would first have to greatly increase its resolution.

“We would love to be able to find out whether it is possible for people who are blind to learn to recognize individuals from their pictures. This may need a lot more practice with our device, but now that we’ve pinpointed the region of the brain where the translation is taking place, we may have a better handle on how to fine-tune our processes,” Rauschecker concludes.

In addition to Rauschecker, the other authors at Georgetown University are Laurent Renier and Stephanie Rosemann. Anne G. De Volder, who passed away while this manuscript was in preparation, was at the Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium.   

Funding: This work was supported by a grant from the National Eye Institute (#R01 EY018923).

The authors declare no personal financial interests related to the study.

About this visual and auditory neuroscience research news

Author: Karen Teber
Source: Georgetown University Medical Center
Contact: Karen Teber – Georgetown University Medical Center
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Sound-encoded faces activate the left fusiform face area in the early blind” by Josef Rauschecker et al. PLOS ONE


Abstract

Sound-encoded faces activate the left fusiform face area in the early blind

Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues.

Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality.

Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces.

These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
