Summary: When artificial intelligence (AI) agents disclose personal information, users can feel greater empathy towards them, which may increase acceptance of AI technologies. The study involved online experiments in which participants engaged in text-based chats with an AI agent.
Researchers found that an agent’s disclosure of work-relevant personal information was associated with greater empathy from participants, whereas a lack of disclosure suppressed it.
The visual representation of the AI agent, whether human-like or robotic, did not significantly influence empathy levels.
- The study found that when AI agents disclose personal information, especially work-relevant information, users’ empathy towards them can increase, thereby improving acceptance of these AI technologies.
- The visual representation of the AI agent, be it human-like or robotic, had no significant impact on the level of empathy participants felt towards the AI.
- The study involved online text-based chats where AI agents and participants engaged in a scenario as if they were colleagues on a lunch break, indicating that context and relational dynamics can play a role in human-AI interaction outcomes.
In a new study, participants showed more empathy for an online anthropomorphic artificial intelligence (A.I.) agent when it seemed to disclose personal information about itself while chatting with participants.
Takahiro Tsumura of The Graduate University for Advanced Studies, SOKENDAI in Tokyo, Japan, and Seiji Yamada of the National Institute of Informatics, also in Tokyo, present these findings in the open-access journal PLOS ONE on May 10, 2023.
The use of A.I. in daily life is increasing, raising interest in factors that might contribute to the level of trust and acceptance people feel towards A.I. agents.
Prior research has suggested that people are more likely to accept artificial objects if the objects elicit empathy. For instance, people may empathize with cleaning robots, robots that mimic pets, and anthropomorphic chat tools that provide assistance on websites.
Earlier research has also highlighted the importance of disclosing personal information in building human relationships.
Building on those findings, Tsumura and Yamada hypothesized that self-disclosure by an anthropomorphic A.I. agent might boost people’s empathy toward such agents.
To test this idea, the researchers conducted online experiments in which participants had a text-based chat with an online A.I. agent that was visually represented by either a human-like illustration or an illustration of an anthropomorphic robot.
The chat involved a scenario in which the participant and agent were colleagues on a lunch break at the agent’s workplace. In each conversation, the agent seemed to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information.
The final analysis included data from 918 participants whose empathy for the A.I. agent was evaluated using a standard empathy questionnaire.
The researchers found that, compared to less-relevant self-disclosure, highly work-relevant self-disclosure from the A.I. agent was associated with greater empathy from participants. A lack of self-disclosure was associated with suppressed empathy.
The agent’s appearance as either a human or anthropomorphic robot did not have a significant association with empathy levels.
These findings suggest that self-disclosure by A.I. agents may, indeed, elicit empathy from humans, which could help inform future development of A.I. tools.
The authors add: “This study investigates whether self-disclosure by anthropomorphic agents affects human empathy. Our research will change the negative image of artifacts used in society and contribute to future social relationships between humans and anthropomorphic agents.”
Funding: This work was partially supported by JST, CREST (JPMJCR21D4), Japan. This work was also supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2136. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
About this artificial intelligence research news
Author: Hanna Abdallah
Contact: Hanna Abdallah – PLOS
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Influence of agent’s self-disclosure on human empathy” by Takahiro Tsumura et al. PLOS ONE
Influence of agent’s self-disclosure on human empathy
As AI technologies progress, social acceptance of AI agents, including intelligent virtual agents and robots, is becoming even more important for more applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents.
By empathizing, humans act more positively and kindly toward agents, which makes the agents easier to accept. In this study, we focus on self-disclosure from agents to humans as a way to increase the empathy humans feel toward anthropomorphic agents.
We experimentally investigate the possibility that self-disclosure from an agent facilitates human empathy. We formulate hypotheses and experimentally analyze and discuss the conditions under which humans feel more empathy toward agents.
The experiments used a three-way mixed design; the factors were the agent’s appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy measured before and after a video stimulus. An analysis of variance (ANOVA) was performed on data from 918 participants.
We found no main effect of the appearance factor, while self-disclosure that was highly relevant to the scenario facilitated significantly more human empathy. We also found that the absence of self-disclosure suppressed empathy.
These results support our hypotheses. This study reveals that self-disclosure is an important characteristic of anthropomorphic agents that helps humans accept them.