Summary: Combining artificial intelligence technology with speech analysis, researchers report that while AI can be used to assess speech patterns for signs of Alzheimer’s, the specific task assigned to the person being tested plays a critical role in the accuracy of diagnosis.
Source: St. George’s University of London
A new study, by researchers from the Neurosciences Research Centre at St George’s, has identified the strengths and limitations of different tasks used to detect the early signs of Alzheimer’s Disease through speech analysis and machine-learning.
Published in the journal Frontiers in Computer Science, the study demonstrates that while machine-learning can be used to assess speech patterns for signs of disease, the specific task assigned to the person being tested plays a critical role in test accuracy.
Previous research by the group has shown that Alzheimer’s Disease affects language very early on in the disease and, therefore, language assessments can be used to detect the disease at an earlier stage. The earlier it is picked up, the sooner interventions can be considered to help the patient.
This latest study adds to the evidence by assessing which measures and tasks can be used to test for Alzheimer’s. The research team recorded the audio from tasks performed by participants, then employed a machine-learning programme, developed at St George’s, to assess the recordings for signs of disease.
The tasks used in the study represent a range of methods used in healthcare scenarios. One of the most common approaches used by clinicians is to ask patients to describe a scene known as the ‘Cookie Theft’ picture.
Other approaches include asking the patient to narrate a learned story, such as the well-known fairy tale Cinderella – a complex task that requires them to integrate a series of characters and events into a timeline that they can recall.
For this study, the researchers used the above assessments, as well as procedural recall (recounting how to make a cup of tea), novel narrative retelling (describing a story from pictures presented in a wordless children’s story book), and conversational speech (giving instructions to another person, describing a route through landmarks on a map), to detect signs of Alzheimer’s through speech analysis.
After assessing the results of 50 trial participants (25 with mild Alzheimer’s Disease or Mild Cognitive Impairment and 25 healthy controls), the team found that narrating an overlearned story, such as Cinderella, gave the most accurate results.
The machine-learning system was able to identify whether a participant had Alzheimer’s or Mild Cognitive Impairment with 78% accuracy; the ‘Cookie Theft’ task was close behind at 76% – results comparable to existing tests for the disease. The other tasks assessed gave accuracies ranging from 62% (novel narrative retelling) to 74% (procedural recall).
“Our results show that by altering the tasks used to assess Alzheimer’s, we have the potential to detect disease with higher accuracy through speech analysis,” says study author and final year PhD student at St George’s, Natasha Clarke.
Noting that larger studies are needed to refine the assessments further, Clarke adds, “In the long-term, we hope that this technology could be used remotely, such as through smartphone apps, reducing anxiety around testing for disease. If we can make testing easier, then hopefully we can identify disease earlier and start treating people sooner.”
Following the results of this study, the team are now looking to follow up study participants one year later to assess changes over time and learn more about disease progression.
About this AI and Alzheimer’s disease research news
A Comparison of Connected Speech Tasks for Detecting Early Alzheimer’s Disease and Mild Cognitive Impairment Using Natural Language Processing and Machine Learning
Alzheimer’s disease (AD) has a long pre-clinical period, and so there is a crucial need for early detection, including of Mild Cognitive Impairment (MCI).
Computational analysis of connected speech using Natural Language Processing and machine learning has been found to indicate disease and could be utilized as a rapid, scalable test for early diagnosis. However, there has been a focus on the Cookie Theft picture description task, which has been criticized.
Fifty participants were recruited – 25 healthy controls (HC) and 25 with mild AD or MCI (AD+MCI) – and all completed five connected speech tasks: picture description, a conversational map reading task, recall of an overlearned narrative, procedural recall and narration of a wordless picture book. A high-dimensional set of linguistic features was automatically extracted from each transcript and used to train Support Vector Machines to classify groups.
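The pipeline this paragraph describes (automatic extraction of linguistic features from transcripts, then a Support Vector Machine classifier separating the groups) can be sketched in miniature. This is an illustration only, assuming scikit-learn is available: the three toy lexical features and the invented mini-corpus below are placeholders, not the study’s actual high-dimensional feature set or participant data.

```python
# Sketch of a transcript-classification pipeline: extract simple lexical
# features from each transcript, then cross-validate a linear SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def extract_features(transcript: str) -> list[float]:
    """Toy lexical features: token count, mean word length, type-token ratio."""
    tokens = transcript.lower().split()
    n = len(tokens)
    return [
        float(n),
        sum(len(t) for t in tokens) / n,   # mean word length
        len(set(tokens)) / n,              # lexical diversity
    ]

# Hypothetical mini-corpus: label 1 = AD+MCI group, 0 = healthy control.
transcripts = [
    ("the boy is on the stool and the the jar is up there", 1),
    ("a boy climbs a stool to reach the cookie jar while it tips over", 0),
    ("she she is washing and the water the water runs", 1),
    ("the mother washes dishes as water overflows from the sink", 0),
] * 5  # repeat so cross-validation folds have enough samples

X = np.array([extract_features(text) for text, _ in transcripts])
y = np.array([label for _, label in transcripts])

# Scale features, then fit a linear SVM; report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With real data, the toy feature extractor would be replaced by the kind of high-dimensional linguistic feature set the paper describes, but the extract-features-then-cross-validate structure stays the same.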
Performance varied, with accuracy for HC vs. AD+MCI classification ranging from 62% using picture book narration to 78% using overlearned narrative features. This study shows that, importantly, the conditions of the speech task have an impact on the discourse produced, which influences accuracy in detection of AD beyond the length of the sample.
Further, we report the features important for classification using different tasks, showing that a focus on the Cookie Theft picture description task may narrow the understanding of how early AD pathology impacts speech.