Brain-AI System Translates Thoughts Into Movement

Summary: Researchers have created a noninvasive brain-computer interface enhanced with artificial intelligence, enabling users to control a robotic arm or cursor with greater accuracy and speed. The system translates brain signals from EEG recordings into movement commands, while an AI camera interprets user intent in real time.

In tests, participants — including one paralyzed individual — completed tasks significantly faster with AI assistance, even accomplishing actions otherwise impossible without it. Researchers say this breakthrough could pave the way for safer, more accessible assistive technologies for people with paralysis or motor impairments.

Key Facts:

  • Noninvasive Breakthrough: Combines EEG-based brain signal decoding with AI vision for shared autonomy.
  • Faster Task Completion: All participants finished tasks significantly faster with AI assistance, and the paralyzed participant completed a robotic arm task that was impossible for him without it.
  • Accessible Alternative: Offers safer, lower-risk solutions compared to invasive surgical implants.

Source: UCLA

UCLA engineers have developed a wearable, noninvasive brain-computer interface system that utilizes artificial intelligence as a co-pilot to help infer user intent and complete tasks by moving a robotic arm or a computer cursor.

Published in Nature Machine Intelligence, the study shows that the interface achieves a new level of performance for noninvasive brain-computer interface, or BCI, systems.

Notably, the paralyzed participant completed the robotic arm task in about six-and-a-half minutes with AI assistance, whereas without it, he was unable to complete the task. Credit: Neuroscience News

This could lead to a range of technologies to help people with limited physical capabilities, such as those with paralysis or neurological conditions, handle and move objects more easily and precisely.

The team developed custom algorithms to decode electroencephalography, or EEG — a method of recording the brain’s electrical activity — and extract signals that reflect movement intentions.

They paired the decoded signals with a camera-based artificial intelligence platform that interprets user direction and intent in real time. The system allows individuals to complete tasks significantly faster than without AI assistance.
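The article does not spell out the control law that combines the two signal streams, but shared autonomy of this kind is often implemented by blending the user's decoded velocity with an assistive velocity aimed at the AI's inferred goal. Below is a minimal sketch of such a blend; the function name, the blending weight and the velocity convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def copilot_blend(decoded_velocity, cursor_pos, inferred_goal, alpha=0.5):
    """Blend an EEG-decoded velocity with an assistive velocity toward the
    AI-inferred goal. Illustrative shared-autonomy sketch, not the paper's
    exact control law."""
    decoded_velocity = np.asarray(decoded_velocity, dtype=float)
    to_goal = np.asarray(inferred_goal, dtype=float) - np.asarray(cursor_pos, dtype=float)
    dist = np.linalg.norm(to_goal)
    if dist == 0:
        return decoded_velocity  # already at the goal; nothing to assist
    # Assistive velocity points at the goal with the same speed the user commanded.
    assist_velocity = to_goal / dist * np.linalg.norm(decoded_velocity)
    # alpha = 0 gives pure user control, alpha = 1 gives pure AI assistance.
    return (1 - alpha) * decoded_velocity + alpha * assist_velocity

# Example: the decoded signal drifts right while the inferred goal is up and to the right.
blended = copilot_blend([1.0, 0.1], cursor_pos=[0.0, 0.0], inferred_goal=[3.0, 4.0])
```

A scheme like this keeps the user in the loop at all times: the assistance only nudges the commanded movement toward the inferred goal rather than replacing it.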

“By using artificial intelligence to complement brain-computer interface systems, we’re aiming for much less risky and invasive avenues,” said study leader Jonathan Kao, an associate professor of electrical and computer engineering at the UCLA Samueli School of Engineering.

“Ultimately, we want to develop AI-BCI systems that offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for everyday tasks.”

State-of-the-art, surgically implanted BCI devices can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs associated with neurosurgery to implant them.

More than two decades after they were first demonstrated, such devices are still limited to small pilot clinical trials. Meanwhile, wearable and other external BCIs have demonstrated a lower level of performance in detecting brain signals reliably. 

To address these limitations, the researchers tested their new noninvasive AI-assisted BCI with four participants — three without motor impairments and a fourth who was paralyzed from the waist down.

Participants wore a head cap to record EEG, and the researchers used custom decoder algorithms to translate these brain signals into movements of a computer cursor and robotic arm. Simultaneously, an AI system with a built-in camera observed the decoded movements and helped participants complete two tasks.

In the first task, they were instructed to move a cursor on a computer screen to hit eight targets, holding the cursor in place at each for at least half a second. In the second challenge, participants were asked to activate a robotic arm to move four blocks on a table from their original spots to designated positions. 
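The article gives only the rules of the cursor task (eight targets, a hold of at least half a second), not how hits were scored. A dwell check like the sketch below is one straightforward way to implement that rule; the target radius and units are assumptions for illustration.

```python
import numpy as np

def target_hit(cursor_trace, timestamps, target_pos, target_radius=0.05, hold_s=0.5):
    """Return True if the cursor stays within target_radius of target_pos for at
    least hold_s consecutive seconds. Illustrative scoring logic only."""
    inside = np.linalg.norm(np.asarray(cursor_trace) - np.asarray(target_pos), axis=1) <= target_radius
    hold_start = None
    for t, in_target in zip(timestamps, inside):
        if not in_target:
            hold_start = None
            continue
        if hold_start is None:
            hold_start = t
        if t - hold_start >= hold_s:
            return True
    return False
```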

All participants completed both tasks significantly faster with AI assistance. Notably, the paralyzed participant completed the robotic arm task in about six-and-a-half minutes with AI assistance, whereas without it, he was unable to complete the task.

The BCI deciphered electrical brain signals that encoded the participants’ intended actions. Using a computer vision system, the custom-built AI inferred the users’ intent — not their eye movements — to guide the cursor and position the blocks.
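The article says the copilot used the camera scene, rather than eye tracking, to work out which object or target the user was heading for, but does not describe the inference method. One simple stand-in is to score each camera-detected candidate by how well its direction agrees with the recent decoded movement, as in this sketch; the cosine-similarity rule is an assumption, not the paper's approach.

```python
import numpy as np

def infer_goal(candidate_positions, cursor_pos, recent_velocity):
    """Pick the camera-detected candidate most aligned with the user's recent
    decoded movement direction (cosine similarity). Illustrative only."""
    v = np.asarray(recent_velocity, dtype=float)
    if np.linalg.norm(v) == 0:
        return None  # no movement evidence yet
    best, best_score = None, -np.inf
    for pos in candidate_positions:
        d = np.asarray(pos, dtype=float) - np.asarray(cursor_pos, dtype=float)
        if np.linalg.norm(d) == 0:
            continue
        score = float(v @ d) / (np.linalg.norm(v) * np.linalg.norm(d))
        if score > best_score:
            best, best_score = pos, score
    return best
```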

“Next steps for AI-BCI systems could include the development of more advanced co-pilots that move robotic arms with more speed and precision, and offer a deft touch that adapts to the object the user wants to grasp,” said co-lead author Johannes Lee, a UCLA electrical and computer engineering doctoral candidate advised by Kao.

“And adding in larger-scale training data could also help the AI collaborate on more complex tasks, as well as improve EEG decoding itself.”

The paper’s authors are all members of Kao’s Neural Engineering and Computation Lab, including Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu and Chang Xie. A member of the UCLA Brain Research Institute, Kao also holds faculty appointments in the Computer Science Department and the Interdepartmental Ph.D. Program in Neuroscience.

Funding: The research was funded by the National Institutes of Health and the Science Hub for Humanity and Artificial Intelligence, which is a collaboration between UCLA and Amazon. The UCLA Technology Development Group has applied for a patent related to the AI-BCI technology. 

About this neurotech and AI research news

Author: Christine Wei-li Lee
Source: UCLA
Contact: Christine Wei-li Lee – UCLA
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Brain–computer interface control with artificial intelligence copilots” by Jonathan Kao et al. Nature Machine Intelligence


Abstract

Brain–computer interface control with artificial intelligence copilots

Motor brain–computer interfaces (BCIs) decode neural signals to help people with paralysis move and communicate. Even with important advances in the past two decades, BCIs face a key obstacle to clinical viability: BCI performance should strongly outweigh costs and risks. To significantly increase BCI performance, we use shared autonomy, where artificial intelligence (AI) copilots collaborate with BCI users to achieve task goals. We demonstrate this AI-BCI in a non-invasive BCI system decoding electroencephalography signals. We first contribute a hybrid adaptive decoding approach using a convolutional neural network and ReFIT-like Kalman filter, enabling healthy users and a participant with paralysis to control computer cursors and robotic arms via decoded electroencephalography signals. We then design two AI copilots to aid BCI users in a cursor control task and a robotic arm pick-and-place task. We demonstrate AI-BCIs that enable a participant with paralysis to achieve 3.9-times-higher performance in target hit rate during cursor control and control a robotic arm to sequentially move random blocks to random locations, a task they could not do without an AI copilot. As AI copilots improve, BCIs designed with shared autonomy may achieve higher performance.
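The abstract names a hybrid decoder built from a convolutional neural network and a ReFIT-like Kalman filter but gives no further detail. As a rough sketch of the Kalman-filter half, the class below runs a standard predict/update cycle that maps a feature vector (for example, CNN outputs derived from EEG) to a 2D velocity; all matrices and dimensions are placeholders, not the published model.

```python
import numpy as np

class VelocityKalmanDecoder:
    """Minimal linear Kalman filter mapping a feature vector (e.g., CNN outputs
    computed from EEG) to a 2D velocity. Placeholder sketch; the published
    decoder is a convolutional network paired with a ReFIT-like Kalman filter."""

    def __init__(self, A, W, C, Q):
        self.A, self.W = A, W            # state transition model and its noise
        self.C, self.Q = C, Q            # observation model and its noise
        self.x = np.zeros(A.shape[0])    # velocity state estimate
        self.P = np.eye(A.shape[0])      # state covariance

    def step(self, features):
        # Predict the next velocity state.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Correct the prediction with the observed feature vector.
        S = self.C @ P_pred @ self.C.T + self.Q
        K = P_pred @ self.C.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (features - self.C @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.C) @ P_pred
        return self.x  # decoded 2D velocity
```

In ReFIT-style recalibration, a decoder trained on an initial session is typically refit after reorienting the decoded velocities toward the known targets; whether the authors follow that recipe exactly is not stated in this article.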
