AI Watches Pianists and Reconstructs Their Muscle Signals

Summary: Researchers developed an AI system that can reconstruct fine hand muscle activity using only standard video footage. Traditionally, this type of measurement required intrusive electrodes attached to the skin, but the new method eliminates that need entirely.

The system was trained on high-precision recordings of expert hand movements, allowing it to infer hidden muscle signals with remarkable accuracy. This breakthrough opens the door to affordable, remote analysis of fine motor control for healthcare, rehabilitation, and performance training.

Key Facts:

  • Sensor-Free Muscle Tracking: The AI estimates hidden hand muscle activity using video alone, without EMG electrodes.
  • High Accuracy Across Tasks: The system reliably predicts both timing and strength of muscle activation, even for unseen performers.
  • Broad Real-World Applications: Potential uses include rehabilitation monitoring, sports science, robotics, and gesture-based interfaces.

Source: Institute of Science Tokyo

Hand movements during piano performance depend on precise coordination between small muscles hidden beneath the skin.

Tracking these signals has traditionally required electromyography (EMG) sensors, which are expensive, intrusive, and technically complex.

By combining cues from hand poses and keystrokes, the model reconstructs muscle signal timing and strength. Credit: Neuroscience News

A research team led by Professor Hideki Koike from the Department of Computer Science, School of Computing, Institute of Science Tokyo (Science Tokyo), Japan, and Dr. Shinichi Furuya from Sony Computer Science Laboratories, Japan, has now addressed this challenge using artificial intelligence.

Their new framework, Piano Keystroke-Pose-Muscle Network (PianoKPM Net), estimates miniature hand muscle activity using only video recordings. Their findings were published online on September 19, 2025, and will be presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025), held in San Diego, USA, on December 2, 2025.

The system is built on a new dataset, PianoKPM, which captures how expert pianists move, press, and control their hands with exceptional precision. It includes 12.6 hours of synchronized data from 20 professional pianists performing seven distinct musical tasks.

Each performance was recorded with multi-view videos at 60 frames per second, 3D hand poses, 1 kHz keystroke data, audio, and 2 kHz EMG signals from six small hand muscles.

The dataset contains more than five million pose frames and 28 million EMG samples, creating the first detailed map linking visible motion with internal muscle activity.

“Leveraging this dataset, we propose PianoKPM Net to infer high-frequency EMG from pose data,” says Koike.

Using this foundation, PianoKPM Net learns to infer muscle behavior from video data. By combining cues from hand poses and keystrokes, the model reconstructs muscle signal timing and strength.
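The article does not detail PianoKPM Net's architecture, but the fusion idea it describes can be illustrated with a toy sketch: concatenate per-frame pose features with a keystroke vector and map the result to per-muscle activation envelopes. Everything below (the feature dimensions, the two-layer network, the random untrained weights) is an assumption for illustration only, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 120 frames; 21 hand joints x 3 coords = 63 pose
# features; 88 piano keys as a binary keystroke vector; 6 muscle channels
# out, matching the six hand muscles recorded in the dataset.
T, POSE_DIM, KEY_DIM, MUSCLES, HIDDEN = 120, 63, 88, 6, 128

pose = rng.standard_normal((T, POSE_DIM))
keys = (rng.random((T, KEY_DIM)) < 0.05).astype(float)  # sparse key presses

# Fuse the two streams by concatenation, then apply a small two-layer MLP.
x = np.concatenate([pose, keys], axis=1)              # (T, 151)
W1 = rng.standard_normal((POSE_DIM + KEY_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, MUSCLES)) * 0.1
hidden = np.maximum(x @ W1, 0.0)                      # ReLU
emg_pred = np.maximum(hidden @ W2, 0.0)               # nonnegative envelopes
print(emg_pred.shape)  # (120, 6)
```

A trained version of such a model would be fit so that `emg_pred` matches the measured, frame-aligned EMG envelopes; the real network is presumably far more sophisticated.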

In comparative tests against advanced baselines, such as NeuroPose and CodeTalker, PianoKPM Net achieved higher accuracy in predicting both the amplitude and timing of muscle activation. Even for pianists and musical pieces not included in training, the model maintained strong performance, demonstrating its ability to generalize.
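The article does not state which metrics the comparison used, but two common ways to score the amplitude and timing of a predicted activation envelope are root-mean-square error and cross-correlation lag. A minimal sketch with synthetic activation bursts (the function names and test signals are illustrative):

```python
import numpy as np

def amplitude_rmse(pred, true):
    """Root-mean-square error between predicted and measured envelopes."""
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def timing_lag(pred, true):
    """Lag (in samples) at which the cross-correlation peaks: 0 means the
    predicted activation is perfectly aligned with the measurement."""
    pred = pred - pred.mean()
    true = true - true.mean()
    corr = np.correlate(pred, true, mode="full")
    return int(np.argmax(corr) - (len(true) - 1))

t = np.linspace(0, 1, 200)
true = np.exp(-((t - 0.5) ** 2) / 0.01)   # burst centered at t = 0.5
pred = np.exp(-((t - 0.55) ** 2) / 0.01)  # same burst, delayed ~10 samples
print(amplitude_rmse(pred, true), timing_lag(pred, true))
```

A positive lag here means the prediction fires late relative to the measurement, so a good model should drive both numbers toward zero.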

This approach transforms a simple camera into a non-invasive tool for studying muscle coordination, revealing how skilled pianists modulate subtle muscle activity to achieve speed and precision. Detailed physiological assessment becomes possible without attaching any sensors to the body, reducing both cost and discomfort.

The technology has potential far beyond the piano. In sports science, it could track muscle exertion to enhance training accuracy and prevent overuse injuries. In rehabilitation, it could monitor recovery progress, providing clinicians with continuous feedback without physical attachments. It could also improve human–machine interaction systems, where understanding a user’s muscle effort helps refine robotic assistance and gesture-based interfaces.

“Together, the PianoKPM Net and PianoKPM dataset create a foundation for affordable access to internal physiological and muscle activity signals, supporting progress in human augmentation and advanced human–machine interaction,” explains Koike.

The research team plans to make both the dataset and model publicly available. This open release will allow scientists and developers to advance studies in motor learning, embodied intelligence, and assistive robotics. Widespread access will help standardize benchmarks for motion and muscle activity estimation, accelerating development in multiple fields.

By linking vision and physiology, PianoKPM Net offers a new method to study fine motor control. It replaces complex EMG setups with accessible video-based analysis, creating opportunities for performance research, clinical evaluation, and human–technology design.

The system marks a clear step toward affordable, AI-driven analysis of skilled movement, where invisible muscle patterns can finally be observed and understood through vision alone.

In the future, this technology could support remote skill education over low-latency communication networks, even in environments without access to expensive biological measurement equipment.

Key Questions Answered:

Q: What did the AI system achieve?

A: It accurately estimated tiny hand muscle activity using only ordinary video recordings, without physical sensors.

Q: How accurate was the system?

A: The model outperformed existing deep-learning methods in predicting both timing and strength of muscle activation.

Q: Why is this important?

A: It replaces expensive, intrusive muscle sensors with a low-cost, non-invasive alternative.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and neurotech research news

Author: Miki Yamaoka
Source: Institute of Science Tokyo
Contact: Miki Yamaoka – Institute of Science Tokyo
Image: The image is credited to Neuroscience News

Original Research: The findings will be presented at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025)
