More Neurotech News

Browse all of our neurotechnology articles from over the years. Remember, you can click on the tags or use the search to find specific articles.

This shows a person and Venn diagrams.
A new study introduces a multilingualism calculator that quantifies how multilingual a person truly is, offering a clearer alternative to vague labels like “bilingual.” By combining age of acquisition with self-rated listening, speaking, reading, and writing skills across languages, the tool generates both a multilingualism score and a language-dominance profile.
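The study's exact formula is not given here, but the idea of combining age of acquisition with self-rated skills can be sketched as follows. All weights, scales, and function names below are illustrative assumptions, not the published calculator.

```python
# Hypothetical multilingualism score: earlier acquisition and higher
# self-ratings both raise a language's contribution. The weighting
# scheme is an assumption for illustration only.

def language_score(age_of_acquisition, ratings, max_age=18):
    """Score one language on a 0-1 scale.

    age_of_acquisition: age in years when the language was acquired
    ratings: self-rated listening, speaking, reading, writing (each 0-10)
    """
    proficiency = sum(ratings) / (10 * len(ratings))          # 0-1
    earliness = max(0.0, 1 - age_of_acquisition / max_age)    # 0-1
    return proficiency * (0.5 + 0.5 * earliness)

def multilingualism_profile(languages):
    """Return an overall score plus a language-dominance profile."""
    scores = {name: language_score(age, r)
              for name, (age, r) in languages.items()}
    total = sum(scores.values())
    dominance = {name: s / total for name, s in scores.items()} if total else {}
    return total, dominance

total, dominance = multilingualism_profile({
    "English": (0, [10, 10, 10, 10]),   # native, fully proficient
    "Spanish": (12, [7, 6, 8, 5]),      # acquired later, mixed ratings
})
```

The dominance profile sums to 1, so it shows each language's relative weight rather than absolute ability.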
This shows a neuron.
Researchers have engineered a next-generation glutamate sensor, iGluSnFR4, capable of detecting the faintest incoming synaptic signals between neurons—signals that, until now, have been nearly impossible to record in living brain tissue. By capturing these whisper-quiet inputs, scientists can finally observe how neurons weigh thousands of glutamate messages and transform them into an electrical output, the core computation behind memory, learning, and emotion.
This shows AI representations of brain scans.
A new study finds that fMRI signals don’t always match the brain’s true activity levels, overturning a core assumption used in tens of thousands of studies. In about 40% of cases, an increased fMRI signal appeared in regions where neural activity was actually reduced, while decreased signals sometimes showed up in areas with heightened activity.
This shows a brain.
New research shows that deep learning can use EEG signals to distinguish Alzheimer’s disease from frontotemporal dementia with high accuracy. By analyzing both the timing and frequency of brain activity, the model uncovered distinct patterns: broader disruption across multiple regions in Alzheimer’s and more localized frontal and temporal changes in frontotemporal dementia.
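A model that weighs both timing and frequency typically starts from time-frequency features such as per-channel band powers. The sketch below shows that feature-extraction step only; the band definitions and array shapes are common EEG conventions, not the study's architecture.

```python
# Hypothetical feature step: power in standard EEG frequency bands,
# computed per channel with an FFT. A classifier (not shown) would be
# trained on these features; this is not the published model.
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """eeg: (channels, samples) array; returns (channels, n_bands) powers."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1 / fs)
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(spectrum[:, mask].mean(axis=1))
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 2 * 256))   # 19 channels, 2 s at 256 Hz
features = band_powers(eeg, fs=256)        # one row of band powers per channel
```

Broad disruption (as reported for Alzheimer’s) would show up across many channels’ features, while localized frontotemporal changes would concentrate in a few.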
This shows a bionic hand.
A new study shows that integrating artificial intelligence with advanced proximity and pressure sensors allows a commercial bionic hand to grasp objects in a natural, intuitive way—reducing cognitive effort for amputees. By training an artificial neural network on grasping postures, each finger could independently “see” objects and automatically move into the correct position, improving grip security and precision.
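The per-finger idea can be sketched as a small network mapping proximity and pressure readings to a joint position. The layer sizes, random weights, and activations below are illustrative assumptions, not the study's trained controller.

```python
# Hypothetical per-finger controller: sensor readings in, a normalized
# joint position (0 = open, 1 = closed) out. Weights are random here;
# the study trained its network on real grasping postures.
import numpy as np

def finger_controller(sensors, w1, b1, w2, b2):
    """sensors: (n_sensors,) proximity + pressure values -> position in [0, 1]."""
    hidden = np.tanh(sensors @ w1 + b1)             # small hidden layer
    return 1 / (1 + np.exp(-(hidden @ w2 + b2)))    # sigmoid squashes to [0, 1]

rng = np.random.default_rng(1)
w1, b1 = rng.standard_normal((4, 8)) * 0.5, np.zeros(8)   # 4 sensors -> 8 hidden
w2, b2 = rng.standard_normal(8) * 0.5, 0.0                # 8 hidden -> 1 output
position = finger_controller(np.array([0.2, 0.8, 0.1, 0.0]), w1, b1, w2, b2)
```

Running one such network per finger is what would let each digit conform to an object independently, rather than all fingers closing in lockstep.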