Researchers Are Using Machine Learning to Screen for Autism in Children

Summary: With the help of data collected from an online app, researchers have developed machine learning algorithms that are almost 90% accurate for some subsets of behaviors associated with ASD.

Source: Duke University

For more than five years, researchers from Duke Engineering and the Duke University School of Medicine have been working toward creating an app that can help screen for autism in young children. With results from the first pilot study rolling in just last year, their work is leading to new insights about autism spectrum disorder (ASD) and has the potential to transform how children’s development is screened and monitored.

“Babies who go on to develop autism typically don’t pay attention to social cues,” said Geraldine Dawson, director of the Duke Center for Autism and Brain Development, in a recent article published in Wired. “They’re more interested in non-social things, like toys or objects. They’re also less emotionally expressive. They smile less, particularly in response to positive social events.”

The app first administers caregiver consent forms and survey questions. It then uses the phone’s front-facing ‘selfie’ camera to record videos of young children’s reactions as they watch short movies on the device’s screen, movies designed to elicit autism risk behaviors such as characteristic patterns of emotion and attention.

The videos of the child’s reactions are sent to the study’s servers, where automatic behavioral coding software tracks the movement of landmarks on the child’s face and quantifies the child’s emotions and attention. For example, in response to a short movie of bubbles floating across the screen, the video coding algorithm looks for movements of the face that would indicate joy.
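To make the video coding step concrete, here is a minimal sketch of per-frame facial landmark tracking with a crude smile score, using the open-source OpenCV and MediaPipe libraries. The study’s actual coding software is not public, so the landmark indices and the mouth-width heuristic below are illustrative assumptions, not Duke’s method.

```python
# Minimal sketch: track facial landmarks per frame and compute a crude
# smile proxy. Illustrative only; not the study's behavioral coding software.
import cv2
import mediapipe as mp

def smile_scores(video_path: str) -> list:
    cap = cv2.VideoCapture(video_path)
    scores = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV decodes frames as BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = results.multi_face_landmarks[0].landmark
            # Face Mesh indices: 61/291 = mouth corners, 33/263 = outer eye corners.
            mouth_width = abs(lm[291].x - lm[61].x)
            eye_span = abs(lm[263].x - lm[33].x)
            # Mouth width relative to face scale: a crude proxy for smiling.
            scores.append(mouth_width / eye_span)
    cap.release()
    return scores
```

A real coding pipeline would quantify many more behaviors (head turns toward a name call, gaze toward the screen, a range of facial expressions) and would validate each automated measure against human raters.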

The initial study, from informed consent to data collection and preliminary analysis, was conducted with an app available for free from the Apple App Store and built on Apple’s open-source ResearchKit development framework.


Guillermo Sapiro, professor of electrical and computer engineering, is using Amazon Web Services and the machine learning frameworks TensorFlow and PyTorch to build algorithms that connect children’s facial expressions and eye movements to potential signs of ASD. His group is also using these cloud computing tools to develop new machine-learning-based privacy filters for the images and videos they collect.
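As one hypothetical illustration of the first task, a model might summarize a clip’s per-frame behavioral features (for example, the expression and gaze measurements described above) into a single screening score. The sketch below uses PyTorch, which the article names, but the architecture, feature count, and sequence length are assumptions; Sapiro’s actual models are not described in the source.

```python
# Hedged sketch: map a sequence of per-frame behavioral features to one
# screening probability. Architecture and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class BehaviorScreener(nn.Module):
    def __init__(self, n_features: int = 16, hidden: int = 64):
        super().__init__()
        # A GRU summarizes a variable-length sequence of per-frame features.
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one logit per clip

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, n_features)
        _, h = self.rnn(x)
        return self.head(h[-1]).squeeze(-1)

model = BehaviorScreener()
clips = torch.randn(2, 300, 16)        # e.g., two 10-second clips at 30 fps
probs = torch.sigmoid(model(clips))    # screening probabilities in [0, 1]
```

For the privacy-filter work, a simple baseline is to detect and blur faces before video enters long-term storage. The snippet below uses OpenCV’s bundled Haar cascade purely as an illustration; the group’s actual learned filters are not described in the source, and a study like this one still needs the child’s face for analysis, so a deployed filter would more plausibly target bystanders or strip identifying detail after features are extracted.

```python
# Hedged sketch of a baseline privacy filter: blur any detected faces.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavily blurred copy.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame
```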

Through the app, the Duke team was able to collect behavior data from about 1,700 children—far more than the 50 to 100 typically found in an ASD study. With that amount of data in hand, the researchers have so far found the app to be almost 90 percent accurate for some subsets of behaviors.



“The more algorithms, the more people, the more resources we put toward this data, the better the potential outcomes for patients,” Sapiro said in the same Wired article. “I wish every child in the world could meet with an ASD specialist, but that’s unrealistic. If we could provide ASD screening at a large scale, that would be a tremendous contribution.”

About this neuroscience research article

Source:
Duke University
Media Contacts:
Guillermo Sapiro – Duke University
Image Source:
The image is in the public domain.
