
More Neurotech News

Browse all of our neurotechnology articles from over the years. Remember, you can click on the tags or search for specific articles.

A revolutionary microscopy method called LICONN enables scientists to reconstruct brain tissue and map synaptic connections using standard light microscopes. By embedding brain tissue in hydrogel, expanding it, and imaging at nanoscale resolution, researchers achieve a detailed view of neuronal architecture previously only possible with electron microscopy.
A pioneering clinical study found that pairing vagus nerve stimulation (VNS) with traditional therapy eliminated PTSD diagnoses in all participants up to six months post-treatment. The trial combined prolonged exposure therapy with brief bursts of VNS via an implanted device, enhancing neuroplasticity and sustaining remission.
In a breakthrough study, researchers enabled brain-computer interface (BCI) users with tetraplegia to create personalized tactile sensations, marking a step toward restoring realistic touch. Unlike previous attempts where artificial touch felt generic, participants could adjust stimulation parameters to make digital objects—like a cat, apple, or key—feel distinct.
Scientists have created a technology called Oz that stimulates individual photoreceptor cells in the human eye to create an entirely new, ultra-saturated color never seen in nature—dubbed olo. Using microdoses of laser light, Oz activates specific combinations of cone cells to generate this vivid blue-green hue, which vanishes the moment the precision targeting is disrupted.
Researchers have developed RHyME, an AI-powered system that enables robots to learn complex tasks by watching a single human demonstration video. Traditional robots struggle with unpredictable scenarios and require extensive training data, but RHyME allows robots to adapt by drawing on previous video knowledge.