Grasping an Object: Model Describes Complete Movement Planning in the Brain

Summary: Researchers have developed a new model that represents the planning of movement from seeing an object to grasping it.

Source: DPZ

Every day we effortlessly make countless grasping movements. We take a key in our hand, open the front door by operating the door handle, then pull it closed from the outside and lock it with the key. What is a natural matter for us is based on a complex interaction of our eyes, different regions of the brain and ultimately our muscles in the arm and hand.

Neuroscientists at the German Primate Center (DPZ) – Leibniz Institute for Primate Research in Göttingen have succeeded for the first time in developing a model that can seamlessly represent the entire planning of movement from seeing an object to grasping it.

Comprehensive neural and motor data from grasping experiments with two rhesus monkeys provided the decisive basis for the model: an artificial neural network that, when fed images of particular objects, simulates how the brain processes and integrates this information. The dynamics of the artificial network were able to explain the complex biological data from the animal experiments, confirming the validity of the functional model.

This could be used in the long term for the development of better neuroprostheses, for example, to bridge the damaged nerve connection between brain and extremities in paraplegia and thus restore the transmission of movement commands from the brain to arms and legs.

Rhesus monkeys, like humans, have a highly developed nervous and visual system as well as dexterous hand motor control. For this reason, they are particularly well suited for research into grasping movements. From previous studies in rhesus monkeys it is known that the interaction of three brain areas is responsible for grasping a targeted object. Until now, however, there has been no detailed model at the neural level to represent the entire process from the processing of visual information to the control of arm and hand muscles for grasping that object.

In order to develop such a model, two male rhesus monkeys were trained to grasp 42 objects of different shapes and sizes, presented to them in random order. The monkeys wore a data glove that continuously recorded the movements of arm, hand and fingers. In each trial, the object to be grasped was briefly illuminated while the monkeys fixated a red dot below it; after a blinking signal, they performed the grasping movement with a short delay.

This design revealed when the different brain areas become active in order to generate the grasping movement and the associated muscle activations from the visual signals.

In the next step, images of the 42 objects, taken from the perspective of the monkeys, were fed into an artificial neural network whose functionality mimicked the biological processes in the brain. The network model consisted of three interconnected stages, corresponding to the three cortical brain areas of the monkeys, and provided meaningful insights into the dynamics of the brain networks.
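To make the architecture concrete, the three-stage design can be sketched as a chain of recurrent modules: visual features enter the first module, activity flows through the chain, and a linear readout produces muscle activations. This is only an illustrative sketch in NumPy with made-up sizes and random untrained weights, not the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_module(n_in, n_rec, scale=0.1):
    """One recurrent module: input weights, recurrent weights, hidden state."""
    return {
        "W_in": rng.normal(0, scale, (n_rec, n_in)),
        "W_rec": rng.normal(0, scale / np.sqrt(n_rec), (n_rec, n_rec)),
        "x": np.zeros(n_rec),
    }

def step(module, inp):
    """Advance one module by a single time step (tanh rate units)."""
    module["x"] = np.tanh(module["W_in"] @ inp + module["W_rec"] @ module["x"])
    return module["x"]

# Three serially connected modules, standing in for the three cortical stages.
n_visual, n_rec, n_muscles = 64, 100, 8   # hypothetical sizes
stage1 = make_module(n_visual, n_rec)     # visual/object stage
stage2 = make_module(n_rec, n_rec)        # grasp-planning stage
stage3 = make_module(n_rec, n_rec)        # motor-output stage
W_out = rng.normal(0, 0.1, (n_muscles, n_rec))  # linear muscle readout

def forward(visual_features, n_steps=50):
    """Run the chain for n_steps and return muscle activity over time."""
    muscles = []
    for _ in range(n_steps):
        h1 = step(stage1, visual_features)  # object features in
        h2 = step(stage2, h1)
        h3 = step(stage3, h2)
        muscles.append(W_out @ h3)          # muscle activations out
    return np.array(muscles)

muscle_traj = forward(rng.normal(0, 1, n_visual))
print(muscle_traj.shape)  # (50, 8): time steps x muscle channels
```

In the study the network weights were optimized against the recorded behavior; here the weights are random, so only the data flow (vision in, muscle dynamics out) is illustrated.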

After appropriate training with the behavioral data of the monkeys, the network was able to precisely reflect the grasping movements of the rhesus monkeys: it processed images of recognizable objects and accurately reproduced the muscle dynamics required to grasp them.

A rhesus macaque (Macaca mulatta) wearing a data glove for detailed hand and arm tracking. Photo: Ricarda Lbik

The results obtained using the artificial network model were then compared with the biological data from the monkey experiment. It turned out that the neural dynamics of the model were highly consistent with the neural dynamics of the cortical brain areas of the monkeys.

“This artificial model describes for the first time in a biologically realistic way the neuronal processing from seeing an object for object recognition, to action planning and hand muscle control during grasping”, says Hansjörg Scherberger, head of the Neurobiology Laboratory at the DPZ, and he adds: “This model contributes to a better understanding of the neuronal processes in the brain and in the long term could be useful for the development of more efficient neuroprostheses.”

About this movement research news

Source: DPZ
Contact: Anika Appelles – DPZ
Image: The image is credited to Ricarda Lbik

Original Research: Closed access.
“A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping” by Hansjörg Scherberger et al. PNAS


A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping

One of the primary ways we interact with the world is using our hands. In macaques, the circuit spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary motor cortex is necessary for transforming visual information into grasping movements. However, no comprehensive model exists that links all steps of processing from vision to action. We hypothesized that a recurrent neural network mimicking the modular structure of the anatomical circuit and trained to use visual features of objects to generate the required muscle dynamics used by primates to grasp objects would give insight into the computations of the grasping circuit. Internal activity of modular networks trained with these constraints strongly resembled neural activity recorded from the grasping circuit during grasping and paralleled the similarities between brain regions. Network activity during the different phases of the task could be explained by linear dynamics for maintaining a distributed movement plan across the network in the absence of visual stimulus and then generating the required muscle kinematics based on these initial conditions in a module-specific way. These modular models also outperformed alternative models at explaining neural data, despite the absence of neural data during training, suggesting that the inputs, outputs, and architectural constraints imposed were sufficient for recapitulating processing in the grasping circuit. Finally, targeted lesioning of modules produced deficits similar to those observed in lesion studies of the grasping circuit, providing a potential model for how brain regions may coordinate during the visually guided grasping of objects.
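The abstract's final point, that silencing individual modules reproduces deficits seen in lesion studies, can be illustrated with a toy chain in which one module's output is zeroed. This is a purely hypothetical sketch, not the paper's lesioning procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal stand-in for a three-module chain: each module is one weight matrix.
W = [rng.normal(0, 0.3, (20, 20)) for _ in range(3)]  # three cortical stages
readout = rng.normal(0, 0.3, (5, 20))                 # muscle readout

def run(x, lesioned=None):
    """Propagate activity through the chain, optionally silencing one module."""
    for i, w in enumerate(W):
        x = np.tanh(w @ x)
        if i == lesioned:
            x = np.zeros_like(x)  # 'lesion': this module's output is silenced
    return readout @ x

x0 = rng.normal(0, 1, 20)
intact = run(x0)
impaired = run(x0, lesioned=1)  # silence the middle (planning-like) module

# With the middle module silenced, nothing propagates to the output stage.
print(np.allclose(impaired, 0))  # True
```

Because the modules are connected in series, silencing an upstream stage abolishes the downstream muscle output entirely; in the real model and in biology, the effect is a graded, module-specific deficit rather than a total loss.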
