Teach Your Robot Well

Within a decade, personal robots could become as common in U.S. homes as any other major appliance, and many of these machines will be able to perform tasks their manufacturers never explicitly imagined. That opens up a wider world of personal robotics, in which machines do whatever their owners can teach them to do, without the owners having to be programmers.

Laying some helpful groundwork for this world is a new study by researchers in Georgia Tech’s Center for Robotics & Intelligent Machines (RIM), who have identified the types of questions a robot can ask during a learning interaction that are most likely to lead to a smooth and productive human-robot relationship. Those questions concern specific features of a task, rather than labels of task components or real-time demonstrations of the task itself, and the researchers identified them not by studying robots but by studying the everyday (read: non-programmer) people who will one day be their masters. The findings are detailed in the paper “Designing Robot Learners that Ask Good Questions,” presented this week in Boston at the 7th ACM/IEEE Conference on Human-Robot Interaction (HRI).

“People are not so good at teaching robots because they don’t understand the robots’ learning mechanism,” said lead author Maya Cakmak, a Ph.D. student in the School of Interactive Computing. “It’s like when you try to train a dog, and it’s difficult because dogs do not learn like humans do. We wanted to find out the best kinds of questions a robot could ask to make the human-robot relationship as ‘human’ as it can be.”

Cakmak’s study attempted to discover the role “active learning” concepts play in human-robot interaction. In a nutshell, active learning refers to giving machine learners more control over the information they receive. Simon, a humanoid robot created in the lab of Andrea Thomaz (assistant professor in Georgia Tech’s School of Interactive Computing and co-author), is well acquainted with active learning; Thomaz and Cakmak are programming him to learn new tasks by asking questions.
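For readers unfamiliar with the idea, the following is a minimal sketch of pool-based active learning in Python. It is illustrative only and is not code from the Georgia Tech study; the model, the unlabeled pool, and the teacher interface are assumptions. The point is simply that the learner chooses what to ask about next, rather than passively receiving examples.

```python
# Minimal active-learning sketch (illustrative; not from the study).
# The learner asks the human teacher about the example it is least sure of.
import numpy as np

def most_uncertain_index(probabilities):
    """Given an (n_examples, n_classes) array of predicted class
    probabilities, return the index of the highest-entropy example."""
    entropy = -(probabilities * np.log(probabilities + 1e-12)).sum(axis=1)
    return int(np.argmax(entropy))

# Teaching loop, in outline (model, ask_teacher and retrain are hypothetical):
#   while queries_remaining > 0:
#       probs = model.predict_proba(unlabeled_pool)
#       i = most_uncertain_index(probs)
#       label = ask_teacher(unlabeled_pool[i])
#       model = retrain(model, unlabeled_pool[i], label)
```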

Cakmak designed two separate experiments: first, she asked human volunteers to assume the role of an inquisitive robot attempting to learn a simple task by asking questions of a human instructor. Having identified the three main question types (feature, label and demonstration), Cakmak tagged each of the participants’ questions as one of the three. The overwhelming majority (about 82 percent) of questions were feature queries, showing a clear cognitive preference in human learning for this query type.

http://www.youtube.com/watch?v=6FKaEOSVczM

Designing Robot Learners that Ask Good Questions by Maya Cakmak and Andrea L. Thomaz. Programming new skills on a robot should take minimal time and effort. One approach to achieve this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently caught a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether or not there is a general preference of one type of question over another. Based on our findings from both experiments we provide guidelines for designing question asking behaviors on a robot learner. Video from YouTube.com user SimonTheSocialRobot

Type of question      Example
Label query           “Can I pour salt like this?”
Demonstration query   “Can you show me how to pour salt from here?”
Feature query         “Can I pour salt from any height?”
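As a rough illustration of how a robot learner might represent and choose among these query types, here is a short Python sketch built from the examples in the table above. The class names and structure are hypothetical and are not the authors’ implementation.

```python
# Illustrative representation of the paper's three query types.
# Names and structure are hypothetical, not the authors' API.
from dataclasses import dataclass
from enum import Enum

class QueryType(Enum):
    LABEL = "label"                  # ask whether a shown execution is correct
    DEMONSTRATION = "demonstration"  # ask the teacher to show the skill
    FEATURE = "feature"              # ask whether a task feature may vary

@dataclass
class Query:
    kind: QueryType
    text: str

EXAMPLES = [
    Query(QueryType.LABEL, "Can I pour salt like this?"),
    Query(QueryType.DEMONSTRATION, "Can you show me how to pour salt from here?"),
    Query(QueryType.FEATURE, "Can I pour salt from any height?"),
]

if __name__ == "__main__":
    for q in EXAMPLES:
        print(f"{q.kind.value:>13}: {q.text}")
```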

Next, Cakmak recruited humans to teach Simon new tasks by answering the robot’s questions and then rating those questions on how “smart” they thought they were. Feature queries were once again the preferred question type, with 72 percent of participants rating them the smartest questions.

“These findings are important because they help give us the ability to teach robots the kinds of questions that humans would ask,” Cakmak said. “This in turn will help manufacturers produce the kinds of robots that are most likely to integrate quickly into a household or other environment and better serve the needs we’ll have for them.”

Notes about this robotics research article

Georgia Tech is fielding five of the 38 papers accepted for HRI’s technical program, making it the largest academic contributor to the conference.

All five papers describe research geared toward the realization of in-home robots assisting humans with everyday activities. Ph.D. student Baris Akgun’s paper, for example, assumes the same real-life application scenario as Cakmak’s—a robot learning new tasks from a non-programmer—and examines whether robots learn more quickly from continuous, real-time demonstrations of a physical task, or from isolated key frames in the motion sequence. The research is nominated for Best Paper at HRI 2012.

“Georgia Tech is certainly a leader in the field of human-robot interaction; we have more than 10 faculty across campus for whom HRI is a primary research area,” Thomaz said. “Additionally, the realization of ‘personal robots’ is a shared vision of the whole robotics faculty—and a mission of the RIM research center.”

Contact: Michael Terrazas, Assistant Director of Communications – College of Computing at Georgia Tech
Source: College of Computing at Georgia Tech press release
Video Source: Neuroscience video from Youtube.com user SimonTheSocialRobot
Image Source: Neuroscience image adapted from press release image above

Image: Simon the Robot, created in the lab of Andrea Thomaz (School of Interactive Computing), learns a new task from a study participant, pouring from a box into a bowl on a table. The study sought to determine the best questions a robot learner can ask to facilitate smooth human-robot interaction. Image adapted from the press release listed above.