This shows a robotic face.
The authors argue that being unable to fully trust a conversational partner's intentions and identity may result in excessive suspicion even when there is no reason for it. Credit: Neuroscience News

AI’s Human-Like Features Impact Trust in Conversations

Summary: A new study delves into how advanced AI systems affect our trust in the individuals we interact with. The research finds that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can become problematic when it is unclear whether you are communicating with a computer or a human.

The study examined three types of conversations, along with audience reactions and comments. Uncertainty about whether one is talking to a human or a computer affects relationship-building and joint meaning-making in communication. This has the potential to impact human connection, particularly in therapy, and raises important ethical questions about AI development.

Key Facts:

  1. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.
  2. The study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.
  3. The researchers propose creating AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.

Source: University of Gothenburg

As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories.

Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes a long time for people to realize they are interacting with a technical system.

Together with Jonas Ivarsson, Professor of Informatics, he has written an article titled "Suspicious Minds: The Problem of Trust and Conversational Agents," exploring how individuals interpret and relate to situations where one of the parties might be an AI agent.

The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.

Ivarsson provides an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.

Their study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.

The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with.

Ivarsson questions whether AI should have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.


In the case of the would-be fraudster calling the “older man,” the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age.

Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to identify that we are interacting with a computer.

The researchers propose creating AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.

Communication with others involves not only deception but also relationship-building and joint meaning-making. The uncertainty of whether one is talking to a human or a computer affects this aspect of communication.

While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively impacted.

Study Information
Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations and audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.

About this artificial intelligence research news

Author: Thomas Melin
Source: University of Gothenburg
Contact: Thomas Melin – University of Gothenburg
Image: The image is credited to Neuroscience News

Original Research: Open access.
"Suspicious Minds: The Problem of Trust and Conversational Agents" by Jonas Ivarsson et al. Computer Supported Cooperative Work (CSCW)


Abstract

Suspicious Minds: The Problem of Trust and Conversational Agents

In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services.

The quality of the voice and interactivity are sometimes so good that the artificial can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces.

Consequently, the ‘Turing test’ has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry.

When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human–computer interactions? Based on empirical observations, this study shows how we need two parallel conceptions of understanding to address these questions.

By departing from ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for established relations.

Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is roused, shared understanding is disrupted.

Therefore, this study concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human–human interactions.
