Summary: Combining brain imaging data with machine learning, researchers make new discoveries about how the brain controls the hand. The findings could lead to the development of more advanced neuroprosthetics.
Source: University of East Anglia
Researchers at the University of East Anglia have made an astonishing discovery about how our brains control our hands.
They used MRI data to study which parts of the brain are used when we handle tools, such as knives.
They read out signals from specific brain regions and tried to distinguish whether participants were handling tools appropriately for use.
Humans have used tools for millions of years, but this research is the first to show that actions such as grasping a knife by its handle for cutting are represented by brain areas that also represent images of human hands, our primary ‘tool’ for interacting with the world.
The research could pave the way for the development of next-generation neuroprosthetics – prosthetic limbs that tap into the brain’s control centre – and could help rehabilitate people who have lost function in their limbs due to brain injury.
The study was led by UEA and carried out at the Norfolk and Norwich University Hospital.
Lead researcher Dr Stephanie Rossit, from UEA’s School of Psychology, said: “The emergence of handheld tools marks the beginning of a major discontinuity between humans and our closest primate relatives and is considered a defining feature of our species. Our findings could shed light on the regions of the brain that specifically evolved in humans.”
“We knew that seeing images of tools activates a different region of the brain from seeing other kinds of objects, for example a chair.
“Until now it was assumed that the brain segregates visual information in this way to optimise processing of actions associated with tools. But how the human brain controls our hands to correctly grasp 3D objects such as tools was not well understood.
“We wanted to test whether the human brain automatically processes 3D objects in terms of how we grasp them for use. And we particularly wanted to find out whether we could use signals from specific parts of the brain to distinguish whether people were handling tools correctly – for example grasping a knife by the handle rather than the blade.”
The team used an MRI scanner to collect brain imaging data while participants interacted with 3D objects.
Dr Rossit said: “This was really challenging because the space inside the scanner is really small and the participants need to stay really still.
“So we used a one-of-a-kind ‘real action’ set-up for presenting 3D tools and other objects.
“Our participants lay in the dark, on a custom-built bed with a revolving table mounted above their waist, so that we could show them 3D objects and they could grasp them.
“We designed and 3D-printed everyday kitchen tools from non-magnetic materials so that they would be safe in the MRI – such as a plastic knife, a pizza cutter and a spoon – as well as a set of 3D-printed bars to represent items that were not tools, which we used as control objects.”
Dr Ethan Knights, who was a PhD student with Dr Rossit, coordinated the data collection and scanned the brains of 20 volunteers at the Norfolk and Norwich University Hospital. In the first session, participants were asked to grasp the 3D tools and 3D bars correctly or incorrectly using the bespoke ‘real action’ set-up.
The same participants returned to the scanner for a second session in which they simply looked at pictures of tools and hands.
Dr Fraser Smith, also from UEA’s School of Psychology, said: “We studied brain activity while participants viewed pictures of tools and hands to identify the parts of the brain where images of hands are represented.
“We then used state-of-the-art machine learning to see if we could predict whether people actually grasped a tool by its handle or not. This is really important because knowing not to grasp an object, like a knife, by its blade is critical to successful tool-use.”
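For readers curious what this kind of decoding analysis looks like in practice, the sketch below illustrates a generic multi-voxel pattern analysis in the spirit of the one described: a linear classifier trained on voxel patterns from a hand-selective region and evaluated with leave-one-run-out cross-validation. The data, variable names and region definition are illustrative assumptions only, not the authors’ actual pipeline or results.

```python
# Illustrative sketch only: generic MVPA-style decoding of grasp typicality
# (handle vs blade grasp) from a hand-selective region of interest.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical inputs (random stand-ins for real fMRI data):
#   X    - trial-by-voxel activity patterns from a hand-selective ROI,
#          e.g. voxels identified in a separate picture-viewing localizer session
#   y    - 1 if the tool was grasped by its handle (typical grasp), 0 otherwise
#   runs - scanner-run labels, so each test fold comes from a held-out run
n_trials, n_voxels, n_runs = 120, 300, 6
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)

# Linear classifier with voxel-wise scaling, scored with leave-one-run-out
# cross-validation; above-chance accuracy would indicate that the ROI carries
# information about whether the grasp was appropriate for use.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(decoder, X, y, groups=runs, cv=LeaveOneGroupOut())

print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```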
Dr Rossit said: “In contrast to what most scientists thought, we were able to predict whether a tool was grasped correctly from the signals of brain areas that respond to the sight of pictures of hands and not from visual areas that respond to the sight of pictures of tools.

“Importantly the signals from the visual hand areas could only be used to predict hand actions with tools but could not predict matched actions with the control 3D bar objects.
“This suggests that the visual hand areas are specially tuned for actions with tools.
“Our discovery changes our fundamental understanding of how the brain controls our hands and could have important implications for health and society.
“For example, it could help develop better devices or rehabilitation for people who have lost function in their limbs due to brain injury. And it could even allow people without limbs to control prosthetics with their minds.
“The potential for brain-driven interfaces and prosthetics is very exciting,” she added.
‘Hand-selective visual regions represent how to grasp 3D tools: brain decoding during real actions’ is published in the Journal of Neuroscience on May 10, 2021.
Funding: The study was funded by the BIAL Foundation.
About this neuroscience research news
Source: University of East Anglia
Contact: Lisa Horton – University of East Anglia
Image: The image is credited to University of East Anglia
Original Research: Open access.
“Hand-selective visual regions represent how to grasp 3D tools: brain decoding during real actions” by Ethan Knights, Courtney Mansfield, Diana Tonin, Janak Saada, Fraser W. Smith and Stéphanie Rossit. Journal of Neuroscience
Abstract
Hand-selective visual regions represent how to grasp 3D tools: brain decoding during real actions
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed.
These studies discovered selective visual responses in occipito-temporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N=20; 9 females).
Using real action fMRI and multi-voxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipito-temporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control non-tools.
In addition, grasp typicality decoding was significantly higher in hand than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naïve to object category (tool vs non-tools).
Finding a specificity for typical tool grasping in hand-, rather than tool-, selective regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation.
Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialised for representing the human hand, the brain’s primary tool for interacting with the world.
SIGNIFICANCE STATEMENT
The unique ability of humans to manufacture and use tools is unsurpassed across the animal kingdom, with tool use considered a defining feature of our species.
Most neuroscientific studies that investigate the brain mechanisms that support tool use record brain activity while people simply view images of tools or hands, not while people perform actual hand movements with tools.
Here we show that specific areas of the human visual system that preferentially process hands automatically encode how to appropriately grasp 3D tools, even when no actual tool use is required.
These findings suggest that visual areas optimized for processing hands represent fundamental aspects of tool grasping in humans, such as the side by which a tool should be grasped for correct manipulation.