Robot Teaches Itself How to Dress People

Summary: Researchers have developed a new robotic system that can help people get dressed. The system relies on the forces it feels to efficiently guide a hospital gown onto a person’s arm.

Source: Georgia Tech.

More than 1 million Americans require daily physical assistance to get dressed because of injury, disease and advanced age. Robots could potentially help, but cloth and the human body are complex.

To help address this need, a robot at the Georgia Institute of Technology is successfully sliding hospital gowns on people’s arms. The machine doesn’t use its eyes as it pulls the cloth. Instead, it relies on the forces it feels as it guides the garment onto a person’s hand, around the elbow and onto the shoulder.

The machine, a PR2, taught itself in one day by analyzing nearly 11,000 simulated examples of a robot putting a gown onto a human arm. Some of those attempts were flawless. Others were spectacular failures: the simulated robot applied dangerous forces to the arm when the cloth caught on the person’s hand or elbow.

From these examples, the PR2’s neural network learned to estimate the forces applied to the human. In a sense, the simulations allowed the robot to learn what it feels like to be the human receiving assistance.
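
The paper describes this as a deep recurrent model that maps haptic and kinematic observations from the robot’s end effector to the forces the garment applies to the body. The sketch below illustrates that idea in PyTorch; the feature layout, layer sizes, and training step are assumptions made for illustration, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    """Recurrent model: end-effector observations -> predicted forces on the arm.

    Assumed input features per timestep: end-effector position/velocity (6)
    plus sensed force/torque (6). Output: predicted force magnitudes at a few
    points on the arm (e.g., hand, elbow, shoulder).
    """

    def __init__(self, obs_dim=12, hidden_dim=64, n_force_points=3):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_force_points)

    def forward(self, obs_seq):
        h, _ = self.lstm(obs_seq)        # (batch, time, hidden_dim)
        return self.head(h)              # (batch, time, n_force_points)

# Training on simulated dressing trials: each trial provides the observation
# sequence plus the forces the simulator recorded on the arm (dummy data here).
model = ForceEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

obs = torch.randn(32, 50, 12)            # 32 simulated trials, 50 timesteps each
true_forces = torch.randn(32, 50, 3)

optimizer.zero_grad()
loss = loss_fn(model(obs), true_forces)
loss.backward()
optimizer.step()
```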

“People learn new skills using trial and error. We gave the PR2 the same opportunity,” said Zackory Erickson, the lead Georgia Tech Ph.D. student on the research team. “Doing thousands of trials on a human would have been dangerous, let alone impossibly tedious. But in just one day, using simulations, the robot learned what a person may physically feel while getting dressed.”

The robot also learned to predict the consequences of moving the gown in different ways. Some motions made the gown taut, pulling hard against the person’s body. Other movements slid the gown smoothly along the person’s arm. The robot uses these predictions to select motions that comfortably dress the arm.
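
In control terms this is model predictive control: before each small motion, the robot evaluates candidate end-effector movements with its learned force model over a short horizon (0.2 s in the paper) and executes the one whose predicted cost, dominated by a penalty on high force, is lowest. Below is a rough sketch of that selection loop with a stand-in force predictor; the candidate motions, control rate, and cost weights are assumptions made for illustration.

```python
import numpy as np

def predict_forces(state, action, horizon_steps):
    """Stand-in for the learned force model: given the current end-effector
    state and a candidate per-step displacement, return predicted force
    magnitudes on the arm over the horizon (fake physics for illustration)."""
    arm_axis = np.array([1.0, 0.0, 0.0])            # direction along the arm
    misalignment = 1.0 - np.dot(action, arm_axis) / (np.linalg.norm(action) + 1e-9)
    return np.full(horizon_steps, 2.0 * misalignment)

def choose_action(state, candidate_actions, horizon_steps=5):
    """Pick the candidate whose predicted cost is lowest: penalize peak force,
    with a small reward for forward progress along the arm."""
    best_action, best_cost = None, np.inf
    for action in candidate_actions:
        forces = predict_forces(state, action, horizon_steps)
        cost = forces.max() - 0.1 * action[0]
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action

# A 0.2 s horizon at an assumed 25 Hz control rate gives 5 lookahead steps.
candidates = [np.array([0.01, dy, dz])              # 1 cm forward, small sideways options
              for dy in (-0.01, 0.0, 0.01) for dz in (-0.01, 0.0, 0.01)]
print(choose_action(state=np.zeros(3), candidate_actions=candidates))
```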

After success in simulation, the PR2 attempted to dress people. Participants sat in front of the robot and watched as it held a gown and slid it onto their arms. Rather than vision, the robot used its sense of touch to perform the task based on what it learned about forces during the simulations.

“The key is that the robot is always thinking ahead,” said Charlie Kemp, an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and the lead faculty member. “It asks itself, ‘if I pull the gown this way, will it cause more or less force on the person’s arm? What would happen if I go that way instead?’”

The researchers varied the robot’s timing, allowing it to think as much as a fifth of a second into the future while strategizing its next move. Shorter planning horizons caused the robot to fail more often.

“The more robots can understand about us, the more they’ll be able to help us,” Kemp said. “By predicting the physical implications of their actions, robots can provide assistance that is safer, more comfortable and more effective.”

The robot currently puts the gown on only one arm, and the process takes about 10 seconds. The team says fully dressing a person is still many steps away from this work.

Ph.D. student Henry Clever and Professors Karen Liu and Greg Turk also contributed to the research. Their paper, “Deep Haptic Model Predictive Control for Robot-Assisted Dressing,” will be presented May 21-25 in Australia during the International Conference on Robotics and Automation (ICRA). The work is part of a larger effort on robot-assisted dressing funded by the National Science Foundation (NSF) and led by Liu.

About this neuroscience research article

Funding: This work was supported in part by NSF award IIS-1514258, AWS Cloud Credits for Research, and the NSF NRT Traineeship DGE-1545287. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

Kemp is a cofounder, a board member, an equity holder, and the CTO of Hello Robot Inc., which is developing products related to this research. This research could affect his personal financial status. The terms of this arrangement have been reviewed and approved by Georgia Tech in accordance with its conflict of interest policies.

Source: Georgia Tech
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is adapted from Georgia Tech video.
Video Source: Video credited to Georgia Tech.
Original Research: Abstract for “Deep Haptic Model Predictive Control for Robot-Assisted Dressing” by Zackory Erickson, Henry M. Clever, Greg Turk, C. Karen Liu, and Charles C. Kemp. The findings will be presented May 21-25 in Australia during the International Conference on Robotics and Automation (ICRA).

Cite This NeuroscienceNews.com Article

MLA: Georgia Tech. “Robot Teaches Itself How to Dress People.” NeuroscienceNews, 15 May 2018. <https://neurosciencenews.com/dressing-robot-9060/>.
APA: Georgia Tech. (2018, May 15). Robot Teaches Itself How to Dress People. NeuroscienceNews. Retrieved May 15, 2018 from https://neurosciencenews.com/dressing-robot-9060/
Chicago: Georgia Tech. “Robot Teaches Itself How to Dress People.” https://neurosciencenews.com/dressing-robot-9060/ (accessed May 15, 2018).


Abstract

Deep Haptic Model Predictive Control for Robot-Assisted Dressing

Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person’s body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person’s body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot’s end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person’s fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants’ fists and elbows, demonstrating the value of our model’s predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.
