Summary: Researchers have developed an AI-based diagnostic tool that uses smartphone video and cloud computing to detect nystagmus—a key symptom of balance and neurological disorders. Unlike traditional methods such as videonystagmography, which are expensive and cumbersome, this deep learning system provides a low-cost, patient-friendly alternative for remote diagnosis.
It maps 468 facial landmarks in real time, analyzes slow-phase velocity, and generates clinician-ready reports. Early testing shows that the tool delivers results comparable to gold-standard equipment, with the potential to expand access to care through telemedicine.
Key Facts:
- Smartphone-Based Diagnosis: Patients can upload eye movement videos from home for remote AI analysis.
- Clinically Reliable: Pilot results showed strong agreement with traditional diagnostic tools.
- Telehealth Integration: Designed for remote consultations, reducing costs and increasing accessibility in underserved areas.
Source: FAU
Artificial intelligence is playing an increasingly vital role in modern medicine, particularly in interpreting medical images to help clinicians assess disease severity, guide treatment decisions and monitor disease progression.
Despite these advancements, most current AI models are based on static datasets, limiting their adaptability and real-time diagnostic potential.
To address this gap, researchers from Florida Atlantic University and collaborators have developed a novel proof-of-concept deep learning model that leverages real-time data to assist in diagnosing nystagmus – a condition characterized by involuntary, rhythmic eye movements often linked to vestibular or neurological disorders.

Gold-standard diagnostic tools such as videonystagmography (VNG) and electronystagmography have been used to detect nystagmus. However, these methods come with notable drawbacks: high costs (with VNG equipment often exceeding $100,000), bulky setups, and inconvenience for patients during testing.
FAU’s AI-driven system offers a cost-effective, patient-friendly alternative for quick, reliable screening of balance disorders and abnormal eye movements.
The platform allows patients to record their eye movements using a smartphone, securely upload the video to a cloud-based system, and receive remote diagnostic analysis from vestibular and balance experts – all without leaving their home.
At the heart of this innovation is a deep learning framework that uses real-time facial landmark tracking to analyze eye movements. The AI system automatically maps 468 facial landmarks and evaluates slow-phase velocity – a key metric for identifying nystagmus intensity, duration and direction. It then generates intuitive graphs and reports that can easily be interpreted by audiologists and other clinicians during virtual consultations.
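The article does not name the tracking library, but the 468-landmark figure matches Google’s open-source MediaPipe Face Mesh model. Below is a minimal sketch of how per-frame eye position might be extracted and slow-phase velocity (SPV) estimated under that assumption; the iris refinement (which adds 10 landmarks beyond the base 468), the gaze-calibration factor, and the fast-phase velocity threshold are all illustrative choices, not details from the published study.

```python
# Hypothetical sketch: track eye position with MediaPipe Face Mesh and
# estimate slow-phase velocity. Calibration and threshold values are
# illustrative assumptions, not figures from the FAU study.
import cv2
import mediapipe as mp
import numpy as np

DEG_PER_UNIT = 30.0      # assumed gaze-angle calibration (illustrative)
FAST_PHASE_DEG_S = 40.0  # assumed cutoff separating fast-phase beats

def eye_positions(video_path):
    """Return per-frame horizontal gaze proxies and the video frame rate."""
    xs, cap = [], cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    with mp.solutions.face_mesh.FaceMesh(
            max_num_faces=1, refine_landmarks=True) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.multi_face_landmarks:
                lm = res.multi_face_landmarks[0].landmark
                # Commonly cited indices: 33/133 are one eye's corners,
                # 468 its iris center. Normalizing the iris against the
                # corners cancels head motion, leaving a gaze proxy.
                iris, outer, inner = lm[468].x, lm[33].x, lm[133].x
                xs.append((iris - outer) / (inner - outer))
    cap.release()
    return np.array(xs), fps

def slow_phase_velocity(xs, fps):
    """Mean absolute velocity over samples below the fast-phase cutoff."""
    deg = xs * DEG_PER_UNIT
    vel = np.gradient(deg) * fps           # deg/s at each frame
    slow = np.abs(vel) < FAST_PHASE_DEG_S  # drop fast-phase beats
    return float(np.mean(np.abs(vel[slow])))
```

In this framing, SPV falls out of the per-frame velocity trace once the quick corrective beats are excluded, which is why landmark tracking alone can stand in for dedicated VNG goggles.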
Results of the pilot study involving 20 participants, published in Cureus (part of Springer Nature), demonstrated that the AI system’s assessments closely mirrored those obtained through traditional medical devices. This early success underscores the model’s accuracy and potential for clinical reliability, even in its initial stages.
“Our AI model offers a promising tool that can partially supplement – or, in some cases, replace – conventional diagnostic methods, especially in telehealth environments where access to specialized care is limited,” said Ali Danesh, Ph.D., principal investigator of the study, senior author, a professor in the Department of Communication Sciences and Disorders within FAU’s College of Education and a professor of biomedical science within FAU’s Charles E. Schmidt College of Medicine.
“By integrating deep learning, cloud computing and telemedicine, we’re making diagnosis more flexible, affordable and accessible – particularly for low-income rural and remote communities.”
The team trained their algorithm on more than 15,000 video frames, using a structured 70:20:10 split for training, testing and validation. This rigorous approach ensured the model’s robustness and adaptability across varied patient populations. The AI also employs intelligent filtering to eliminate artifacts such as eye blinks, ensuring accurate and consistent readings.
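The paper does not detail the artifact-filtering step; one common heuristic is the eye aspect ratio (EAR), which collapses toward zero while the eye is closed. Here is a minimal sketch of the reported 70:20:10 frame split alongside an EAR-style blink filter, with the threshold value and the landmark geometry as illustrative assumptions:

```python
# Hypothetical sketch of the 70:20:10 split and blink filtering.
# The EAR threshold and sample structure are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

def split_70_20_10(frames):
    """Shuffle frames, then split 70% / 20% / 10%, mirroring the
    training/testing/validation ratios reported in the study."""
    idx = rng.permutation(len(frames))
    n_train, n_test = int(0.7 * len(frames)), int(0.2 * len(frames))
    train = [frames[i] for i in idx[:n_train]]
    test = [frames[i] for i in idx[n_train:n_train + n_test]]
    val = [frames[i] for i in idx[n_train + n_test:]]
    return train, test, val

BLINK_EAR = 0.2  # assumed threshold; tuned per dataset in practice

def eye_aspect_ratio(upper_y, lower_y, outer_x, inner_x):
    """Lid-to-lid distance over eye width; small when the eye is closed."""
    return abs(upper_y - lower_y) / (abs(outer_x - inner_x) + 1e-9)

def drop_blinks(samples):
    """Keep only samples whose EAR suggests an open eye.
    Each sample is assumed to carry its landmark coordinates."""
    return [s for s in samples
            if eye_aspect_ratio(s["upper_y"], s["lower_y"],
                                s["outer_x"], s["inner_x"]) > BLINK_EAR]
```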
Beyond diagnostics, the system is designed to streamline clinical workflows. Physicians and audiologists can access AI-generated reports via telehealth platforms, compare them with patients’ electronic health records, and develop personalized treatment plans.
Patients, in turn, benefit from reduced travel, lower costs and the convenience of conducting follow-up assessments by simply uploading new videos from home – enabling clinicians to track disorder progression over time.
In parallel, FAU researchers are also experimenting with a wearable headset equipped with deep learning capabilities to detect nystagmus in real time. Early tests in controlled environments have shown promise, though improvements are still needed to address challenges such as sensor noise and variability among individual users.
“While still in its early stages, our technology holds the potential to transform care for patients with vestibular and neurological disorders,” said Harshal Sanghvi, Ph.D., first author, an FAU electrical engineering and computer science graduate, and a postdoctoral fellow at FAU’s College of Medicine and College of Business.
“With its ability to provide non-invasive, real-time analysis, our platform could be deployed widely – in clinics, emergency rooms, audiology centers and even at home.”
Sanghvi worked closely with his mentors and co-authors on this project, including Abhijit S. Pandya, Ph.D., FAU Department of Electrical Engineering and Computer Science and FAU Department of Biomedical Engineering, and B. Sue Graves, Ed.D., Department of Exercise Science and Health Promotion, FAU Charles E. Schmidt College of Science.
This interdisciplinary initiative includes collaborators from FAU’s College of Business, College of Medicine, College of Engineering and Computer Science, College of Science, and partners from Advanced Research, Marcus Neuroscience Institute – part of Baptist Health – at Boca Raton Regional Hospital, Loma Linda University Medical Center, and Broward Health North.
Together, they are working to enhance the model’s accuracy, expand testing across diverse patient populations, and move toward FDA approval for broader clinical adoption.
“As telemedicine becomes an increasingly integral part of health care delivery, AI-powered diagnostic tools like this one are poised to improve early detection, streamline specialist referrals, and reduce the burden on health care providers,” said Danesh. “Ultimately, this innovation promises better outcomes for patients – regardless of where they live.”
Along with Pandya and Graves, study co-authors are Jilene Moxam, Advanced Research LLC; Sandeep K. Reddy, Ph.D., FAU College of Engineering and Computer Science; Gurnoor S. Gill, FAU College of Medicine; Sajeel A. Chowdhary, M.D., Marcus Neuroscience Institute – part of Baptist Health – at Boca Raton Regional Hospital; Kakarla Chalam, M.D., Ph.D., Loma Linda University; and Shailesh Gupta, M.D., Broward Health North.
About this AI and eye movement research news
Author: Gisele Galoustian
Source: FAU
Contact: Gisele Galoustian – FAU
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Artificial Intelligence-Driven Telehealth Framework for Detecting Nystagmus” by Ali Danesh et al. Cureus
Abstract
Artificial Intelligence-Driven Telehealth Framework for Detecting Nystagmus
Purpose: This study reports the implementation of a proof-of-concept, artificial intelligence (AI)-driven clinical decision support system for detecting nystagmus. The system collects and analyzes real-time clinical data to assist in diagnosis, demonstrating its potential for integration into telemedicine platforms.
Patients may benefit from the system’s convenience, reduced need for travel and associated costs, and increased flexibility through both in-person and virtual applications.
Methods: A bedside clinical test revealed vertigo during rightward body movement, and the patient was referred for videonystagmography (VNG). The VNG showed normal central ocular findings. During the right Dix-Hallpike maneuver, the patient demonstrated rotatory nystagmus accompanied by vertigo.
Caloric tests revealed symmetric responses, with no evidence of unilateral or bilateral weakness. A cloud-based deep learning framework was developed and trained to track eye movements and detect 468 distinct facial landmarks in real time. Ten subjects participated in this study.
Results: The slow-phase velocity (SPV) value was verified for statistical significance using both VNG machine-generated graphs and clinician assessment. The average SPV was compared to the value generated by the VNG machine. The calculated statistical values were as follows: p < 0.05, a mean squared error of 0.00459, and a correction error of ±4.8%.
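The abstract does not name the statistical test behind the reported p-value; for device-agreement studies, a Pearson correlation alongside mean squared error is a common choice. A minimal sketch under that assumption, given paired arrays of AI-derived and VNG-derived SPV values:

```python
# Hypothetical agreement analysis between AI and VNG SPV estimates.
# The choice of Pearson correlation and the percent-error metric are
# assumptions; the paper does not specify its exact test.
import numpy as np
from scipy import stats

def agreement(ai_spv, vng_spv):
    """Compare AI-estimated SPV against VNG machine values (deg/s)."""
    ai, vng = np.asarray(ai_spv), np.asarray(vng_spv)
    mse = float(np.mean((ai - vng) ** 2))  # study reports MSE = 0.00459
    r, p = stats.pearsonr(ai, vng)         # significance tested at p < 0.05
    pct = 100.0 * np.mean(np.abs(ai - vng) / np.abs(vng))  # illustrative
    return {"mse": mse, "r": float(r), "p": float(p), "pct_error": float(pct)}
```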
Conclusion: This deep learning model demonstrates the potential to provide diagnostic consultation to individuals in remote locations. To some extent, it may supplement or partially replace traditional methods such as VNG. Ongoing advancements in machine learning within medicine will enhance the ability to diagnose patients, facilitate appropriate specialist referrals, and support physicians in post-treatment follow-up. As this was a proof-of-concept pilot study, further research with a larger sample size is warranted.