The Social Consequences of Using AI in Conversations

Summary: When using AI-enabled chat tools, people have more effective conversations, perceive each other more positively, and use more positive language.

Source: Cornell University

Cornell University researchers have found people have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool.

The study, published in Scientific Reports, examined how the use of AI in conversations impacts the way that people express themselves and view each other.

“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Malte Jung, associate professor of information science.

“We do not live and work in isolation, and the systems we use impact our interactions with others.”

However, in addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

“I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” said Jess Hohenstein, lead author and postdoctoral researcher. “This illustrates the persistent overall suspicion that people seem to have around AI.”

For their first experiment, researchers developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated by large language models (LLMs), which predict plausible next responses in chat-based interactions.
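As a rough illustration of the smart-reply idea (this is my own toy sketch, not the researchers' “Moshi” platform, and `suggest_replies` is a hypothetical name): a real system would have an LLM generate candidate next responses; here a fixed candidate pool is ranked by simple word overlap with the last message.

```python
import re

# Toy smart-reply sketch (illustrative only; not the study's "Moshi"
# platform). A production system would use a large language model to
# generate and rank candidates; this stands in with a fixed pool
# scored by word overlap with the most recent message.

CANDIDATES = [
    "That sounds great!",
    "I see your point about the policy.",
    "Could you explain that a bit more?",
    "Thanks for sharing your view on the policy.",
]

def words(text):
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest_replies(last_message, k=3):
    """Return the k candidates sharing the most words with last_message."""
    context = words(last_message)
    ranked = sorted(CANDIDATES,
                    key=lambda reply: len(context & words(reply)),
                    reverse=True)
    return ranked[:k]

print(suggest_replies("What do you think about the policy?"))
```

The overlap heuristic is only a placeholder for the LLM's scoring; the point is the interaction pattern, in which the user picks from a short list of machine-suggested replies instead of typing.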

Participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.
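The three-condition design amounts to randomly assigning each participant pair to one condition. A minimal sketch, assuming arbitrary pair IDs and paraphrased condition labels of my own (the paper's actual assignment procedure is not described here):

```python
import random

# The article's three conditions, paraphrased as labels (my wording).
CONDITIONS = [
    "both_use_smart_replies",
    "one_uses_smart_replies",
    "neither_uses_smart_replies",
]

def assign_pairs(pair_ids, seed=42):
    """Randomly assign each participant pair to one of the three conditions."""
    rng = random.Random(seed)  # seeded only so this sketch is reproducible
    return {pid: rng.choice(CONDITIONS) for pid in pair_ids}

assignment = assign_pairs(range(6))
for pid, cond in assignment.items():
    print(pid, cond)
```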


Researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those who were thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

“While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”

Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”

Funding: This work was supported by the National Science Foundation.

About this artificial intelligence research news

Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is in the public domain

Original Research: Open access.
“Artificial intelligence in communication impacts language and social relationships” by Malte Jung et al. Scientific Reports


Abstract

Artificial intelligence in communication impacts language and social relationships

Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI’s negative effects on society, the social consequences of using it to communicate remain largely unexplored.

We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (“smart replies”), which are used to send billions of messages each day.

Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways.

We find that using algorithmic responses changes language and social relationships. More specifically, it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative.

However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.

Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits if used overtly.

  1. I am concerned about the social consequences of relying too heavily on machines for conversations. While AI can be helpful, we risk losing the personal touch and human connection that makes conversations meaningful. It’s important to strike a balance and use AI as a tool, rather than a replacement for human interaction.

  2. The idea that relying too heavily on AI for communication could result in a lack of empathy and human connection was one idea that stuck out to me in particular. It is understandable why someone who is used to interacting with others solely through text-based chatbots could find it difficult to understand nonverbal indications or have face-to-face talks. You mentioned the possibility for AI to reinforce negative stereotypes and perpetuate biases, which is another part of this problem. It is essential to make sure that these systems are created with diversity and inclusivity in mind because AI algorithms are only as objective as the data they are trained on.

  3. Thank the NSF for funding this important research and for you for including it in your newsletter.

Do the worrisome social implications indicate that people have already begun, or will begin, talking like robots? I have noticed this, as have others in my circle of friends & acquaintances.

    But will all creativity be sacrificed, stifled?

    How will a writer have to prove for the sake of ©️ that he/she has created a work?

    Especially if it was written on a computer!

    An academic administrator in Italy replies:

    “…right questions and right concerns… However, I fear that everything originated from when humanity (all, without any geographical difference) began to unlearn to speak … for me, for a while now many have been expressing themselves as robots…”

    And I answered it like this:

    …Yes, with set phrases… like:
    “Thanks for sharing.”

    We are in a very dark period, but perhaps this could transform into the new fertilizer for another RENAISSANCE… ages and ages hence!
