Visual Diet Shapes Development in Infants

Summary: Researchers uncovered a critical aspect of infant vision, revealing that very young babies experience a unique visual diet consisting of simple, high-contrast patterns and edges found in everyday environments. This “diet” significantly influences their developmental trajectory.

Their study used head-mounted cameras on infants to directly observe and analyze the visual stimuli in their daily surroundings and to compare these views with those of adults. These findings not only advance our understanding of human visual development but also offer insights for improving AI visual systems through similarly staged learning processes.

Key Facts:

  1. Unique Visual Input: Infants are naturally drawn to and surrounded by high-contrast patterns in their daily environments, which are crucial for their visual development.
  2. Impact on AI Learning: The study’s methodology and findings are being applied to enhance artificial intelligence visual systems, showing that AI trained on sequences of images mimicking infant visual experiences performs better.
  3. Broader Implications: This research provides a deeper understanding of how early visual experiences are optimized for developmental progress and could lead to better early intervention strategies for visual abnormalities.

Source: Indiana University

What do infants see?  What do they look at?  

The answers to these questions are very different for the youngest babies than they are for older infants, children and adults. Characterized by a few high-contrast edges in simple patterns, these early scenes also contain the very materials needed to build a strong foundation for human vision. 


That is the finding of a new study, “An edge-simplicity bias in the visual input to young infants,” published on May 10 in Science Advances by IU researchers Erin Anderson, Rowan Candy, Jason Gold and Linda Smith. 

“The starting assumption for everybody who thinks about the role of experience in visual development has always been that at the scale of everyday experience, visual input is pretty much the same for everyone,” explains principal investigator Linda Smith, a professor in the Department of Psychological and Brain Sciences.

“Yet, this study says, no, visual input changes with development. It’s not the same for everybody. The daily life input for very young infants appears to be unique to that age.”  

Prior studies in the laboratory and clinic had shown that young infants prefer to look at simple, high-contrast scenes of big black stripes and checkerboards. The current study is the first to ask to what extent these preferences make up their daily-life input.

To see what young babies see and look at, Anderson, a former postdoctoral researcher in Smith’s Cognitive Development Lab, and her colleagues put head-mounted cameras on infants to wear in the home during daily life activities.

“You can buy ‘baby flash cards’ for newborns that show these simple, high-contrast images,” she explains.

“What the head-camera videos show, what this work shows, is that young infants find these types of images all around them in their daily life, just by looking at things like lights and ceiling corners.” 

“What we found is a very special, early ‘diet’ for visual development,” adds Smith. “As with food, young infants do not start with rich, complex meals or pizza, but rather with simple, developmentally specific nourishment.” 

Previous work has recognized the critical nature of this early period to the future development of human vision. For example, infants born with visual abnormalities such as cataracts or those in orphanages with limited visual experiences have been shown to have lifelong visual deficiencies.

The current study offers some preliminary data for addressing these deficiencies. It also has important implications for the design of AI visual systems, which likewise acquire stronger visual skills when training begins with the same simple, high-contrast visual content.

The massive scale of daily-life input

To identify the properties of visual input to infants approximately three to 13 months old, the researchers placed head-mounted video cameras on 10 infants and 10 of their adult caregivers, collecting and analyzing 70 hours of video of at-home daily life.

Clear differences emerged between the contents of the infants’ and adults’ images, with a higher concentration of simple patterns and high-contrast edges in the infants’ views than in the adults’.
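The quantity being compared here is, in essence, how sparse each scene’s edge content is. As a rough illustration only, and not the authors’ actual analysis pipeline, the sketch below computes two per-frame statistics with OpenCV and NumPy: edge density (how many edges) and the spread of edge orientations (how many directions). The function name, thresholds, and bin count are illustrative choices.

```python
# Illustrative sketch (not the study's pipeline): per-frame "edge simplicity"
# measures for head-camera video frames, assuming OpenCV and NumPy.
import cv2
import numpy as np

def edge_statistics(frame_bgr, canny_lo=100, canny_hi=200, n_bins=16):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Edge density: fraction of pixels that lie on a Canny edge.
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    density = float(np.count_nonzero(edges)) / edges.size

    # Orientation spread: entropy of gradient orientations at edge pixels.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    theta = np.arctan2(gy, gx)[edges > 0]            # orientations at edge pixels
    hist, _ = np.histogram(theta, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # low entropy = few orientations

    return density, entropy
```

Frames with low edge density and low orientation entropy correspond to the “few edges, few orientations” scenes the paper reports as dominating the youngest infants’ views.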

Smith infers that the reason for these views is not only that infants turn their heads to look at the features of the world they can see, but also that parents or caregivers are likely to put them in places where they like to look at things.

“You have to think why they are where they are. There is probably some natural knowledge implicit on the part of parents to leave infants where they like to look at things. Mom’s not gonna bother you if you’re not fussing,” she observes.

Yet is this small group of participants from Bloomington, Indiana, representative of infants more broadly around the world? To answer this question, Smith’s lab conducted the same experiment with a collaborator in a small, crowded fishing village in Chennai, India, where electricity is minimal and much of daily life occurs outdoors.

And while images from the head cameras of the 6-month-olds and 12-month-olds looked very different from those of their Bloomington counterparts, the youngest infants in both Chennai and Bloomington shared a common “diet” of high-contrast edges and simple patterns.

Bigger pictures, past and future

Smith and her collaborators have also shown that the same sequence of images improves the training of AI visual systems. In a follow-up to the current study, published in the 2023 Neural Information Processing Systems Conference Proceedings, they found that if you train an AI system by first feeding it images characteristic of early infancy, it has greater success learning to identify visual images than if you feed it images in a random developmental order or simply provide images typical of an adult’s daily life. The more precise developmental sequence produced the best results.
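The follow-up result is, in effect, a curriculum-learning finding: the same images help more when presented in developmental order. The sketch below illustrates that idea in PyTorch under broad assumptions; the stage datasets, the helper functions, and the training settings are hypothetical placeholders, not the authors’ NeurIPS implementation.

```python
# Minimal curriculum-ordering sketch (placeholders, not the authors' setup):
# train on image sets ordered by developmental stage, versus a shuffled baseline.
import torch
from torch.utils.data import DataLoader, ConcatDataset

def train_stage(model, dataset, epochs=1, lr=1e-3, batch_size=64):
    # One pass of supervised training on a single developmental stage.
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

def developmental_curriculum(model, stages):
    # stages: datasets ordered youngest-to-oldest visual experience,
    # e.g. [sparse_edge_scenes, intermediate_scenes, adult_like_scenes]
    for dataset in stages:
        train_stage(model, dataset)

def shuffled_baseline(model, stages):
    # Same images, but with the developmental ordering destroyed.
    train_stage(model, ConcatDataset(stages))
```

Comparing a model trained with the ordered curriculum against the shuffled baseline mirrors the ordered-versus-random comparison described above.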

Their work opens up new avenues for evolutionary speculation. As Smith explains, “One of the things I always used to ask as a grad student – and maybe we’re getting a chance to answer it – is why do human babies have such slow motor development.

“They spend about three months just listening and looking and another six months with a little bit of posture and head control. Why are they so slow? Horses come out and run races.”

This research suggests that “over evolutionary time these slow, incremental and optimized biases work to build up a very smart visual and auditory system,” she says. “That’s a story that could be told.” 

In the meantime, their work raises new questions on the visual content of early infancy and its role in the developing visual system, whether human or AI.

Other researchers include IU Bloomington professors Rowan Candy in the School of Optometry and Jason Gold in the Department of Psychological and Brain Sciences.

About this vision and neurodevelopment research news

Author: Liz Rosdeitcher
Source: Indiana University
Contact: Liz Rosdeitcher – Indiana University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“An edge-simplicity bias in the visual input to young infants” by Erin Anderson et al. Science Advances


Abstract

An edge-simplicity bias in the visual input to young infants

The development of sparse edge coding in the mammalian visual cortex depends on early visual experience. In humans, there are multiple indicators that the statistics of early visual experiences have unique properties that may support these developments.

However, there are no direct measures of the edge statistics of infant daily-life experience.

Using head-mounted cameras to capture egocentric images of young infants and adults in the home, we found infant images to have distinct edge statistics relative to adults. For infants, scenes with sparse edge patterns—few edges and few orientations—dominate.

The findings implicate biased early input at the scale of daily life that is likely specific to the early months after birth and provide insights into the quality, amount, and timing of the visual experiences during the foundational developmental period for human vision.
