Artificial Intelligence Identifies Infants at Risk of Blinding Disease

Summary: Researchers have developed a deep learning AI tool that can automate the diagnosis of retinopathy of prematurity (ROP), a leading cause of childhood blindness.

The tool was found to be as effective as senior pediatric ophthalmologists in discriminating normal retinal images from those with ROP that could lead to blindness, and the researchers hope it will improve access to care in underserved areas and prevent blindness in thousands of newborns worldwide.

Key Facts:

  1. Researchers have created a deep learning AI tool that can diagnose retinopathy of prematurity (ROP), which causes childhood blindness.
  2. The AI tool was trained on over 7,400 images of newborns’ eyes and was as effective as senior pediatric ophthalmologists in identifying ROP.
  3. The AI tool could help prevent blindness in premature babies, as ROP is becoming more common and the proper infrastructure for care is lacking in some areas.

Source: UCL

A team of researchers from UCL and Moorfields Eye Hospital has developed a deep learning AI model that can identify which at-risk infants have ROP that may lead to blindness if left untreated, and they hope their technique could improve access to screening in the many areas with limited neonatal services and few trained ophthalmologists.

The study, by an international team of scientists and clinicians in the UK, Brazil, Egypt and the US, supported by the National Institute for Health and Care Research (NIHR) Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, is published in The Lancet Digital Health.

Lead author Dr. Konstantinos Balaskas (Director, Moorfields Ophthalmic Reading Centre & Clinical AI Lab, Moorfields Eye Hospital and Associate Professor, UCL Institute of Ophthalmology) said, “Retinopathy of prematurity is becoming increasingly common as survival rates of premature babies improve across the globe, and it is now the leading cause of childhood blindness in middle-income countries and in the US.”

“As many as 30% of newborns in sub-Saharan Africa have some degree of ROP and, while treatments are now readily available, it can cause blindness if not detected and treated quickly. This is often due to a lack of eye care specialists—but, given it is detectable and treatable, no child should be going blind from ROP.”

“As it becomes more common, many areas do not have enough trained ophthalmologists to screen all at-risk children; we hope that our technique to automate diagnostics of ROP will improve access to care in underserved areas and prevent blindness in thousands of newborns worldwide.”

ROP is a condition primarily affecting premature babies, where abnormal blood vessels grow in the retina, the thin layer of nerve cells at the back of the eye that converts light into signals the brain can recognize. These blood vessels can leak or bleed, damaging the retina, and possibly leading to retinal detachment.

While milder forms of ROP require only monitoring rather than treatment, more severe cases require prompt treatment. An estimated 50,000 children globally are blind because of the condition.

Symptoms of ROP cannot be seen by the naked eye, meaning the only way to identify the condition is by monitoring infants at risk with eye exams. Without the proper infrastructure for comprehensive antenatal and postnatal care, the narrow window for screening and treatment could be missed, leading to preventable blindness.

The UCL-Moorfields team developed a deep learning AI model to screen for ROP, training it on 7,414 images of the eyes of 1,370 newborns who had been admitted to Homerton Hospital, London, and assessed for ROP by ophthalmologists.

The hospital serves an ethnically and socioeconomically diverse community, which is important as ROP can vary between ethnic groups; the tool was therefore trained to work safely across different ethnic groups, ensuring that anyone can benefit.

The tool’s performance was then assessed on a further 200 images and compared with the assessments of senior ophthalmologists.

The researchers further validated their tool by employing it on datasets sourced from the US, Brazil and Egypt.
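
For readers curious what building such a screening model typically involves, the sketch below shows one common approach: fine-tuning an ImageNet-pretrained network to classify fundus images into healthy, pre-plus and plus disease. It is a minimal, illustrative sketch in PyTorch, not the study's actual pipeline; the folder layout, backbone choice and hyperparameters are assumptions.

```python
# Minimal sketch of fine-tuning a pretrained CNN for ROP screening.
# Illustrative only -- not the study's pipeline; paths, labels and
# hyperparameters are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for fundus photographs (assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/<healthy|pre_plus|plus>/*.png
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier
# head with three outputs: healthy, pre-plus disease, plus disease.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # number of epochs is arbitrary here
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The study's bespoke model and the code-free platform it describes will differ in detail; the sketch only conveys the general transfer-learning pattern.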


The AI tool was found to be as effective as senior pediatric ophthalmologists in discriminating normal retinal images from those with ROP that could lead to blindness.

While the tool was optimized for a UK population, the researchers say it is promising that it remained effective on other continents, and they add that it could be further optimized for other environments. The tool has been developed as a code-free deep learning platform, which means it could be adapted to new settings by people without prior coding experience.

First author Dr. Siegfried Wagner (UCL Institute of Ophthalmology and Moorfields Eye Hospital) said, “Our findings justify the continued investigation of AI tools to screen for ROP. We are now further validating our tool in multiple hospitals in the UK and are seeking to learn how people interact with the AI’s outputs, to understand how we could incorporate the tool into real world clinical settings.”

“We hope that the tool will enable a trained nurse to take images that could be assessed by the AI tool, in order for a referral for treatment to be made without the need for an ophthalmologist to manually review the scans.”

“AI tools are particularly useful in ophthalmology, a field which is heavily reliant on the manual interpretation and analysis of scans for detection and monitoring—here we have found further evidence that AI can be a game-changer for the field and open up access to sight-saving treatments.”

About this artificial intelligence research news

Author: Press Office
Source: UCL
Contact: Press Office – UCL
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Development and international validation of custom-engineered and code-free deep-learning models for detection of plus disease in retinopathy of prematurity: a retrospective study” by Siegfried K Wagner et al. Lancet Digital Health


Abstract

Development and international validation of custom-engineered and code-free deep-learning models for detection of plus disease in retinopathy of prematurity: a retrospective study

Background

Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed through interval screening by paediatric ophthalmologists. However, improved survival of premature neonates coupled with a scarcity of available experts has raised concerns about the sustainability of this approach. We aimed to develop bespoke and code-free deep learning-based classifiers for plus disease, a hallmark of ROP, in an ethnically diverse population in London, UK, and externally validate them in ethnically, geographically, and socioeconomically diverse populations in four countries and three continents. Code-free deep learning is not reliant on the availability of expertly trained data scientists and is therefore of particular potential benefit for low-resource health-care settings.

Methods

This retrospective cohort study used retinal images from 1370 neonates admitted to a neonatal unit at Homerton University Hospital NHS Foundation Trust, London, UK, between 2008 and 2018. Images were acquired using a Retcam Version 2 device (Natus Medical, Pleasanton, CA, USA) on all babies who were either born at less than 32 weeks gestational age or had a birthweight of less than 1501 g. Each image was graded by two junior ophthalmologists, with disagreements adjudicated by a senior paediatric ophthalmologist.
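
The grading protocol described above (two junior grades per image, with a senior adjudicating disagreements) amounts to a simple decision rule. The snippet below is an illustrative sketch of that logic only; the function name and label values are hypothetical, not taken from the study.

```python
# Illustrative sketch of the grading rule: each image receives two junior
# grades, and a senior paediatric ophthalmologist adjudicates disagreements.
# Label values ("healthy", "pre-plus", "plus") are assumed.
def final_grade(junior_1, junior_2, senior_adjudication=None):
    if junior_1 == junior_2:
        return junior_1                     # juniors agree: their grade stands
    if senior_adjudication is None:
        raise ValueError("disagreement requires senior adjudication")
    return senior_adjudication

# Example: the juniors disagree, so the senior's grade decides.
print(final_grade("pre-plus", "plus", senior_adjudication="plus"))  # -> "plus"
```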

Bespoke and code-free deep learning (CFDL) models were developed for the discrimination of healthy, pre-plus disease, and plus disease. Performance was assessed internally on 200 images, with the majority vote of three senior paediatric ophthalmologists as the reference standard. External validation was performed on 338 retinal images from four separate datasets from the USA, Brazil, and Egypt, with images derived from the Retcam and the 3nethra neo device (Forus Health, Bengaluru, India).
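
As a rough illustration of this kind of evaluation, the sketch below derives a majority-vote reference standard from three senior graders and scores a model's "healthy versus pre-plus or plus" discrimination with an ROC AUC. It is a hedged sketch using scikit-learn, not the study's analysis code; the arrays, encodings and scores are made up for illustration.

```python
# Illustrative evaluation sketch: majority-vote reference standard plus
# ROC AUC for "healthy vs pre-plus or plus disease". Not the study's code.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical grades from three senior ophthalmologists for each test image,
# encoded as 0 = healthy, 1 = pre-plus, 2 = plus.
grades = np.array([
    [0, 0, 1],
    [2, 2, 2],
    [0, 0, 0],
    [1, 2, 1],
])

# Majority vote per image gives the reference standard.
reference = np.array([np.bincount(row, minlength=3).argmax() for row in grades])

# Binary target: does the reference show any pre-plus or plus disease?
y_true = (reference > 0).astype(int)

# Hypothetical model scores for the "pre-plus or plus" class
# (e.g. the summed softmax probability of those two classes).
y_score = np.array([0.35, 0.97, 0.04, 0.88])

print("AUC:", roc_auc_score(y_true, y_score))
```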

Findings

Of the 7414 retinal images in the original dataset, 6141 images were used in the final development dataset. For the discrimination of healthy versus pre-plus or plus disease, the bespoke model had an area under the curve (AUC) of 0·986 (95% CI 0·973–0·996) and the CFDL model had an AUC of 0·989 (0·979–0·997) on the internal test set. Both models generalised well to external validation test sets acquired using the Retcam for discriminating healthy from pre-plus or plus disease (bespoke AUC range 0·975–1·000; CFDL AUC range 0·969–0·995). The CFDL model was inferior to the bespoke model for discriminating pre-plus disease from healthy or plus disease in the USA dataset (CFDL 0·808 [95% CI 0·671–0·909] vs bespoke 0·942 [0·892–0·982]; p=0·0070). Performance was also reduced when testing on images from the 3nethra neo imaging device (CFDL 0·865 [0·742–0·965]; bespoke 0·891 [0·783–0·977]).
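
The 95% confidence intervals quoted above are typically obtained by resampling the test set. The snippet below shows one common, non-parametric bootstrap approach to an AUC confidence interval; it is a generic sketch under that assumption, not the authors' statistical code, and the toy labels and scores are invented.

```python
# Generic sketch of a non-parametric bootstrap 95% CI for an ROC AUC,
# the kind of interval reported in the findings. Not the study's code.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample images with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes for an AUC
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), lo, hi

# Toy example with made-up labels and scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])
print(bootstrap_auc_ci(y_true, y_score))
```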

Interpretation

Both bespoke and CFDL models conferred similar performance to senior paediatric ophthalmologists for discriminating healthy retinal images from ones with features of pre-plus or plus disease; however, CFDL models might generalise less well when considering minority classes. Care should be taken when testing on data acquired using an imaging device other than the one used for the development dataset. Our study justifies further validation of plus disease classifiers in ROP screening and supports a potential role for code-free approaches to help prevent blindness in vulnerable neonates.

Funding

National Institute for Health Research Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and the University College London Institute of Ophthalmology.
