Robots that admit mistakes foster better conversation among humans

Summary: Robots that express vulnerability influence conversational dynamics between humans.

Source: Yale

Three people and a robot form a team playing a game. The robot makes a mistake, costing the team a round. Like any good teammate, it acknowledges the error.

“Sorry, guys, I made the mistake this round,” it says. “I know it may be hard to believe, but robots make mistakes too.”

This scenario occurred multiple times during a Yale-led study of robots’ effects on human-to-human interactions.

The study, to be published March 9 in the Proceedings of the National Academy of Sciences, showed that humans on teams that included a robot expressing vulnerability communicated more with each other and later reported a more positive group experience than people teamed with silent robots or with robots that made only neutral statements, such as reciting the game’s score.

“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author. “Our study shows that robots can affect human-to-human interactions.”

Social robots are becoming increasingly prevalent in human society, she said, and people encounter them in stores, hospitals, and other everyday places, making it important to understand how they shape human behavior.

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

The researchers conducted an experiment in which 153 people were divided into 51 groups, each composed of three humans and a robot. Each group played a tablet-based game in which members worked together to build the most efficient railroad routes over 30 rounds. Groups were assigned to one of three conditions defined by the robot’s behavior. At the end of each round, the robot either remained silent, uttered a neutral, task-related statement (such as the score or the number of rounds completed), or expressed vulnerability through a joke, a personal story, or an acknowledgment of a mistake; all of the robots occasionally lost a round.
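
To make the round structure concrete, here is a minimal Python sketch of the robot’s end-of-round behavior under the three conditions. The function, the utterance list, and the scoring are hypothetical illustrations of the design described above, not the code used in the study.

    import random
    from typing import Optional

    # Illustrative sketch of the three between-group conditions described
    # above; names, utterances, and scoring are assumptions, not the
    # authors' actual experiment code.

    NUM_ROUNDS = 30  # each group played 30 rounds of the railroad game

    VULNERABLE_LINES = [
        "Sorry, guys, I made the mistake this round. "
        "I know it may be hard to believe, but robots make mistakes too.",
        # ...jokes and personal stories would also appear in this condition
    ]

    def robot_end_of_round(condition: str, rounds_done: int, score: int) -> Optional[str]:
        """Return what the robot says at the end of a round, by condition."""
        if condition == "silent":
            return None  # the robot never speaks
        if condition == "neutral":
            # neutral, task-related statement: the score or rounds completed
            return f"The team's score is {score} after {rounds_done} rounds."
        if condition == "vulnerable":
            # a joke, a personal story, or an acknowledgment of a mistake
            return random.choice(VULNERABLE_LINES)
        raise ValueError(f"unknown condition: {condition!r}")

    for rnd in range(1, NUM_ROUNDS + 1):
        utterance = robot_end_of_round("vulnerable", rnd, score=rnd * 10)
        if utterance:
            print(f"Round {rnd}: {utterance}")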

People teamed with robots that made vulnerable statements spent about twice as much time talking to each other during the game, and they reported enjoying the experience more than people in the other two kinds of groups, the study found.

Conversation among the humans increased more over the course of the game when the robot made vulnerable statements than when it made neutral ones, and the conversation was more evenly distributed when the robot was vulnerable rather than silent.

The experiment also showed more equal verbal participation among team members in groups with the vulnerable and neutral robots than among members in groups with silent robots, suggesting that the presence of a speaking robot encourages people to talk to each other in a more even-handed way.
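
One simple way to quantify how evenly conversation is distributed is a Gini coefficient over each teammate’s total speaking time, where 0 means perfectly equal participation and values near 1 mean one person dominates. The sketch below is illustrative only; assuming it matches the exact measure used in the study would go beyond what this article reports.

    # Gini coefficient of speaking time: one illustrative way to measure
    # how evenly conversation is distributed among teammates. Whether this
    # matches the study's exact metric is an assumption.

    def gini(speaking_times):
        """0.0 = perfectly equal; values near 1.0 = one speaker dominates."""
        xs = sorted(speaking_times)
        n, total = len(xs), sum(xs)
        if n == 0 or total == 0:
            return 0.0
        # Standard closed form over the sorted values
        weighted = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * weighted) / (n * total) - (n + 1) / n

    # Seconds spoken by the three human teammates in two hypothetical groups
    print(gini([120.0, 110.0, 130.0]))  # ~0.04: fairly even participation
    print(gini([300.0, 20.0, 10.0]))    # ~0.59: one person dominates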


“We are interested in how society will change as we add forms of artificial intelligence to our midst,” said Nicholas A. Christakis, Sterling Professor of Social and Natural Science. “As we create hybrid social systems of humans and machines, we need to evaluate how to program the robotic agents so that they do not corrode how we treat each other.”

Understanding the social influence of robots in human spaces is important even when the robots do not serve an intentionally social function, said Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science and a co-author of the study.

“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” she said. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task. Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”

Funding: The research was supported by grants from the Robert Wood Johnson Foundation and the National Science Foundation.

Other co-authors on the study are Yale’s Brian Scassellati, professor of computer science, cognitive science, and mechanical engineering; and Cornell’s Malte Jung, assistant professor in information science.

About this social robots research article

Source:
Yale
Media Contacts:
Bess Connolly – Yale

Original Research: Open access
“Vulnerable robots positively shape human conversational dynamics in a human–robot team”. Margaret L. Traeger, Sarah Strohkorb Sebo, Malte Jung, Brian Scassellati, and Nicholas A. Christakis.
PNAS doi:10.1073/pnas.1910402117.

Abstract

Vulnerable robots positively shape human conversational dynamics in a human–robot team

Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect for modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
