Researchers have identified the dysfunctional brain networks that lead to impaired sentence production and word-finding difficulties in primary progressive aphasia (PPA). PPA can occur in those with neurodegenerative diseases, such as frontotemporal dementia and Alzheimer's disease. Mapping these networks allows clinicians to target non-invasive brain stimulation, potentially improving speech in those with PPA.
Using electrocorticography (ECoG) and machine learning, researchers decoded spoken words and phrases in real time from the brain signals that control speech. The technology could eventually be used to help those who have lost vocal control to regain their voice.
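The summary does not describe the researchers' actual decoding pipeline. Purely as an illustration of the general idea, here is a minimal sketch of a nearest-centroid decoder that classifies simulated multichannel neural activity into words; every name, channel count, and parameter below is a hypothetical stand-in, not the study's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each spoken word evokes a characteristic pattern
# across 16 simulated ECoG channels; trials are noisy copies of a
# per-word template.
words = ["hello", "yes", "no"]
templates = {w: rng.normal(size=16) for w in words}

def make_trials(n_per_word, noise=0.3):
    """Simulate labeled trials: template pattern plus channel noise."""
    X, y = [], []
    for w in words:
        for _ in range(n_per_word):
            X.append(templates[w] + rng.normal(scale=noise, size=16))
            y.append(w)
    return np.array(X), y

# "Training" = storing the mean pattern per word (nearest-centroid decoder).
X_train, y_train = make_trials(50)
centroids = {
    w: X_train[[i for i, lbl in enumerate(y_train) if lbl == w]].mean(axis=0)
    for w in words
}

def decode(trial):
    # Classify a trial as the word whose stored centroid is nearest.
    return min(words, key=lambda w: np.linalg.norm(trial - centroids[w]))

# Evaluate on fresh simulated trials.
X_test, y_test = make_trials(20)
accuracy = np.mean([decode(x) == lbl for x, lbl in zip(X_test, y_test)])
```

Real ECoG decoders are far more elaborate (temporal features, deep networks, language models), but the train-on-labeled-trials, classify-new-activity loop is the same shape.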
A study of macaque monkeys reveals speech and music may have shaped the human brain's auditory networks. Researchers found specific areas of the human brain have a stronger preference for pitch than the corresponding areas in macaques, raising the possibility that certain sounds embedded in music and speech may have shaped the organization of our brains.
Children at higher risk of ASD are less able to distinguish differences in speech patterns. The findings suggest the biological mechanism of language development is less acquisitive in high-risk infants who are diagnosed with autism during toddlerhood.
An auditory-based machine learning algorithm was able to identify children diagnosed with depression and anxiety with 80% accuracy after analyzing recordings of their speech. The algorithm identified eight audio features that signify a higher risk of depression. Of these, a lower pitch of voice, repeatable speech inflections, and a higher-pitched response to surprise stimuli were most indicative of depression. Researchers hope to develop a smartphone app that records and analyzes speech immediately, helping to better detect children at risk of internalizing disorders.
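The study's feature set is not detailed here, but one of the named markers, voice pitch, is straightforward to estimate from a recording. As an illustrative sketch only (the function name and parameters are hypothetical, not the researchers' code), an autocorrelation-based pitch estimator in plain NumPy:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches for the autocorrelation peak between the lags that
    correspond to fmax (shortest period) and fmin (longest period).
    """
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)  # shortest period of interest
    lag_max = int(sample_rate / fmin)  # longest period of interest
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthetic 160 Hz "voice" tone: 0.25 s sampled at 8 kHz.
sr = 8000
t = np.arange(int(0.25 * sr)) / sr
tone = np.sin(2 * np.pi * 160 * t)
pitch = estimate_pitch(tone, sr)
```

A classifier like the one described would compute statistics of features such as this over many utterances and feed them to a learned model; pitch extraction is only the first step.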
The common myth that Anne Boleyn tried to speak after her untimely demise at the hands of her executioner may have scientific evidence behind it. Researchers explore how consciousness may remain intact for a period of time after death, and what that may mean for medical sciences.
A neural decoder is able to use sound representations encoded in human cortical activity to synthesize audible speech. The technology could be used to help those who have difficulty speaking to communicate freely.
A minimally invasive brain implant is to be tested on humans for the first time. The device, named Stentrode, will be placed in a blood vessel adjacent to the motor cortex, and researchers believe it will help improve movement and speech for those with a range of neurological disorders.
Using functional near-infrared spectroscopy (fNIRS), researchers discovered babies are able to pick out words from continuous speech as early as three days after birth.