Robots Don’t Need to be Human Look Alikes

Summary: A new paper looks at human-robot interactions and concludes that a robot does not need to be a true ‘humanoid’ to be accepted by people, so long as its signals are designed correctly.

Source: University of Twente.

R2-D2, the robot from Star Wars, doesn’t communicate in human language but is nevertheless capable of showing its intentions. For human-robot interaction, the robot does not have to be a true ‘humanoid’, provided that its signals are designed in the right way, UT researcher Daphne Karreman says.

The common assumption is that people will only be able to communicate with a robot if it has many human characteristics. But mimicking natural movements and expressions is complicated, and some of our nonverbal communication, wide arm gestures, for example, is not really suitable for robots. Humans prove capable of responding in a social way even to machines that look like machines: we have a natural tendency to translate machine movements and signals into human terms. Two simple lenses on a machine can be enough to make people wave at it.

BEYOND R2-D2

Given that, the challenge is to design intuitive signals. In her research, Daphne Karreman focused on a robot functioning as a guide in a museum or a zoo. If the robot doesn’t have arms, can it still point to something the visitors should look at? Using speech, written language, a screen, images projected on a wall and specific movements, the robot has quite a number of ‘modalities’ that humans don’t have. Add playing with light and colour, and even a ‘low-anthropomorphic’ robot can be equipped with strong communication skills. That goes well beyond R2-D2, which communicates using beeps that first need to be translated. Karreman’s PhD thesis is therefore entitled ‘Beyond R2-D2’.

IN THE WILD

Karreman analysed a large amount of video data to see how humans respond to a robot. Until now, this type of research was mainly done in controlled lab settings, without other people present, or after the test subjects had been told what was going to happen. In this case, the robot was introduced ‘in the wild’ and in an unstructured way: people could simply come across it in the Real Alcázar palace in Seville, for example, and decide for themselves whether they wanted to be guided by a robot. What makes them keep their distance? Do people recognize what the robot is capable of?

Image shows a man interacting with the FROG robot. NeuroscienceNews.com image is adapted from the University of Twente press release.

VIDEO TOOL

To analyse the video data, Karreman developed a tool called the Data Reduction Event Analysis Method (DREAM). The robot, called Fun Robotic Outdoor Guide (FROG), has a screen, communicates using spoken language and light signals, and has a small pointer on its ‘head’. All by itself, FROG recognizes whether people are interested in interaction and guidance. With DREAM, it is possible for the first time to analyse and classify human-robot interaction quickly and reliably. Unlike other methods, DREAM does not interpret all signals immediately; instead, it compares the work of several ‘coders’ to reach a reliable and reproducible result.
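Comparing several coders for reliability is the idea behind inter-coder agreement measures. The article does not say which statistic DREAM uses, but a standard, chance-corrected choice for two annotators is Cohen’s kappa; the sketch below, with hypothetical behaviour labels, shows how such agreement could be computed.

```python
# Illustrative sketch only: Cohen's kappa as one standard measure of
# inter-coder agreement. DREAM's actual statistic is not specified in
# the article, and the labels below are hypothetical.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of segments on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if both coders labelled at random
    # with their own label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two hypothetical coders labelling the same video segments.
a = ["approach", "pass_by", "approach", "interact", "pass_by", "interact"]
b = ["approach", "pass_by", "interact", "interact", "pass_by", "interact"]
print(round(cohens_kappa(a, b), 3))  # → 0.75
```

A kappa near 1 indicates the coders agree far beyond chance; values near 0 suggest the coding scheme is ambiguous and needs refinement before the annotations can be trusted.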

How many people show interest? Do they join the robot for the entire tour? Do they respond as expected? This could be evaluated with questionnaires, but that would place the robot in a special position: people primarily come to visit the exhibition or zoo, not to meet a robot. With the DREAM tool, spontaneous interaction becomes more visible, and robot behaviour can therefore be optimized.

About this robotics research article

Daphne Karreman did her PhD work in UT’s Human Media Interaction group, led by Prof Vanessa Evers. Her research was part of the European FP7 project FROG. Karreman’s PhD thesis is entitled ‘Beyond R2-D2. The Design of nonverbal interaction behavior optimized for robot-specific morphologies.’

Source: Wiebe van der Veen – University of Twente
Image Source: NeuroscienceNews.com image is adapted from the University of Twente video.
Video Source: The video is credited to the University of Twente.

