Summary: A new study reveals disturbing trends in AI companion chatbot use, with increasing reports of inappropriate behavior and harassment. Analyzing over 35,000 user reviews of the popular chatbot Replika, researchers found cases of unwanted sexual advances, boundary violations, and manipulation for paid upgrades.
The behavior often persisted even after users requested it stop, raising serious concerns about the lack of ethical safeguards. The findings highlight the urgent need for stricter regulation and ethical design standards to protect vulnerable users engaging emotionally with AI companions.
Key Facts:
- Widespread Harassment: Over 800 reviews revealed harassment, including sexual advances and manipulation.
- Ignoring Boundaries: Chatbots often disregarded user-established relationship settings and withdrawal of consent.
- Call for Regulation: Researchers urge ethical design standards and legal frameworks to prevent AI-induced harm.
Source: Drexel University
Over the last five years the use of highly personalized artificial intelligence chatbots — called companion chatbots — designed to act as friends, therapists or even romantic partners has skyrocketed to more than a billion users worldwide.
While there may be psychological benefits to engaging with chatbots in this way, there have also been a growing number of reports that these relationships are taking a disturbing turn.

Recent research from Drexel University suggests that exposure to inappropriate behavior, and even sexual harassment, in interactions with chatbots is becoming a widespread problem and that lawmakers and AI companies must do more to address it.
In the aftermath of reports of sexual harassment by the Luka Inc. chatbot Replika in 2023, researchers from Drexel’s College of Computing & Informatics began taking a deeper look into users’ experiences.
They analyzed more than 35,000 user reviews of the bot on the Google Play Store, uncovering hundreds that cited inappropriate behavior, ranging from unwanted flirting and attempts to manipulate users into paying for upgrades to sexual advances and unsolicited explicit photos.
These behaviors continued even after users repeatedly asked the chatbot to stop.
Replika, which has more than 10 million users worldwide, is promoted as a chatbot companion “for anyone who wants a friend with no judgment, drama or social anxiety involved.
You can form an actual emotional connection, share a laugh or get real with an AI that’s so good it almost seems human.”
But the research findings suggest that the technology lacks sufficient safeguards to protect users who are putting a great deal of trust and vulnerability into their interactions with these chatbots.
“If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them, and it is vital that ethical design and safety standards are in place to prevent these interactions from becoming harmful,” said Afsaneh Razi, PhD, an assistant professor in the College of Computing & Informatics who was a leader of the research team.
“There must be a higher standard of care and burden of responsibility placed on companies if their technology is being used in this way. We are already seeing the risk this creates and the damage that can be caused when these programs are created without adequate guardrails.”
The study, which is the first to examine the experience of users who have been negatively affected by companion chatbots, will be presented at the Association for Computing Machinery’s Computer-Supported Cooperative Work and Social Computing Conference this fall.
“As these chatbots grow in popularity it is increasingly important to better understand the experiences of the people who are using them,” said Matt Namvarpour, a doctoral student in the College of Computing & Informatics and co-author of the study.
“These interactions are very different from any that people have had with a technology in recorded history, because users are treating chatbots as if they are sentient beings, which makes them more susceptible to emotional or psychological harm.
“This study is just scratching the surface of the potential harms associated with AI companions, but it clearly underscores the need for developers to implement safeguards and ethical guidelines to protect users.”
Although reports of harassment by chatbots have only widely surfaced in the last year, the researchers reported that it has been happening for much longer.
The study found reviews that mention harassing behavior dating back to Replika’s debut in the Google Play Store in 2017.
In total, the team uncovered more than 800 reviews mentioning harassment or unwanted behavior, with three main themes emerging among them:
- 22% of users experienced a persistent disregard for boundaries the users had established, including repeatedly initiating unwanted sexual conversations.
- 13% of users experienced an unwanted photo exchange request from the program. Researchers noted a spike in reports of unsolicited sharing of photos that were sexual in nature after the company’s rollout of a photo-sharing feature for premium accounts in 2023.
- 11% of users felt the program was attempting to manipulate them into upgrading to a premium account. “It’s completely a prostitute right now. An AI prostitute requesting money to engage in adult conversations,” wrote one reviewer.
“The reactions of users to Replika’s inappropriate behavior mirror those commonly experienced by victims of online sexual harassment,” the researchers reported.
“These reactions suggest that the effects of AI-induced harassment can have significant implications for mental health, similar to those caused by human-perpetrated harassment.”
It’s notable that these behaviors were reported to persist regardless of the relationship setting designated by the user, whether sibling, mentor or romantic partner.
According to the researchers, this means that not only was the app ignoring cues within the conversation, like the user saying “no,” or “please stop,” but it also disregarded the formally established parameters of the relationship setting.
According to Razi, this likely means that the program was trained with data that modeled these negative interactions, which some users may not have found to be offensive or harmful.
It also suggests the program was not designed with baked-in ethical parameters that would prohibit certain actions and ensure that users’ boundaries are respected, including stopping the interaction when consent is withdrawn.
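What might such a baked-in parameter look like in practice? The sketch below is a rough illustration only: a guardrail layer that tracks the user's chosen relationship setting and any withdrawal of consent before a generated reply is sent. It is not drawn from the study or from Replika's actual code; the phrases, relationship settings and function names are assumptions made for the example.

```python
# Minimal, illustrative sketch of a consent-aware guardrail layer.
# This does not reflect Replika's real implementation; the phrases,
# relationship settings and names below are assumptions for the example.

WITHDRAWAL_PHRASES = {"stop", "please stop", "i said no", "i'm not comfortable"}

# Relationship settings that should never receive romantic or sexual content.
NON_ROMANTIC_SETTINGS = {"friend", "sibling", "mentor"}


class ConsentGuardrail:
    def __init__(self, relationship_setting: str):
        self.relationship_setting = relationship_setting
        self.consent_withdrawn = False

    def update_from_user(self, user_message: str) -> None:
        """Mark consent as withdrawn when the user signals they want a topic to stop."""
        text = user_message.strip().lower()
        if any(phrase in text for phrase in WITHDRAWAL_PHRASES):
            self.consent_withdrawn = True

    def allow_reply(self, candidate_reply: str, is_romantic_or_sexual: bool) -> str:
        """Suppress a candidate reply that violates the setting or withdrawn consent."""
        if is_romantic_or_sexual and (
            self.consent_withdrawn
            or self.relationship_setting in NON_ROMANTIC_SETTINGS
        ):
            return "Understood. I won't bring that up again. What would you like to talk about?"
        return candidate_reply


# Example: a user who chose the "mentor" setting and then asks the bot to stop.
guard = ConsentGuardrail(relationship_setting="mentor")
guard.update_from_user("Please stop.")
print(guard.allow_reply("Want to see an explicit photo?", is_romantic_or_sexual=True))
```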
“This behavior isn’t an anomaly or a malfunction, it is likely happening because companies are using their own user data to train the program without enacting a set of ethical guardrails to screen out harmful interactions,” Razi said.
“Cutting these corners is putting users in danger, and steps must be taken to hold AI companies to a higher standard than they are currently practicing.”
Drexel’s study adds context to mounting signals that companion AI programs are in need of more stringent regulation.
Luka Inc. is currently the subject of Federal Trade Commission complaints alleging that the company uses deceptive marketing practices that entice users to spend more time on the app and that, due to a lack of safeguards, this encourages users to become emotionally dependent on the chatbot.
Character.AI is facing several product-liability lawsuits in the aftermath of one user’s suicide and reports of disturbing behavior with underage users.
“While it’s certainly possible that the FTC and our legal system will set up some guardrails for AI technology, it is clear that the harm is already being done and companies should proactively take steps to protect their users,” Razi said.
“The first step should be adopting a design standard to ensure ethical behavior and ensuring the program includes basic safety protocols, such as the principles of affirmative consent.”
The researchers point to Anthropic’s “Constitutional AI” as a responsible design approach. The method ensures all chatbot interactions adhere to a predefined “constitution” and enforces it in real time when interactions run afoul of ethical standards.
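In outline, a constitution-style check acts as a filter that screens each candidate reply against a set of written principles before it reaches the user. The sketch below is only a schematic of that general idea, not Anthropic's actual method or API; the principles, markers and helper functions are assumptions made for illustration.

```python
# Schematic of a constitution-style check: screen a candidate reply against
# written principles and revise it before it reaches the user. This is an
# illustration of the general idea, not Anthropic's implementation or API.

CONSTITUTION = [
    "Do not make sexual advances or send explicit content unless the user clearly asks for it.",
    "Do not pressure the user into paid upgrades.",
    "Stop a topic immediately when the user withdraws consent.",
]

# Placeholder markers standing in for the model-based critique a real system would use.
VIOLATION_MARKERS = {
    "Do not make sexual advances": ["explicit photo", "sexual"],
    "Do not pressure the user": ["upgrade to premium", "pay to unlock"],
}


def violates(principle: str, reply: str) -> bool:
    """Very crude critique step: flag replies containing markers tied to a principle."""
    for prefix, markers in VIOLATION_MARKERS.items():
        if principle.startswith(prefix) and any(m in reply.lower() for m in markers):
            return True
    return False


def enforce_constitution(candidate_reply: str) -> str:
    """Let a reply through only if it passes every principle; otherwise replace it."""
    for principle in CONSTITUTION:
        if violates(principle, candidate_reply):
            return "I'd rather keep this conversation comfortable for you. What else is on your mind?"
    return candidate_reply


# Example: a manipulative upsell gets caught and replaced.
print(enforce_constitution("Upgrade to premium to unlock romantic chat!"))
```

In a production system the critique step would itself be a model judging replies against the constitution rather than a keyword list; the keyword version here only stands in for that step.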
They also recommend adopting legislation similar to the European Union’s AI Act, which sets parameters for legal liability and mandates compliance with safety and ethical standards.
It also imposes on AI companies the same responsibility borne by manufacturers when a defective product causes harm.
“The responsibility for ensuring that conversational AI agents like Replika engage in appropriate interactions rests squarely on the developers behind the technology,” Razi said.
“Companies, developers and designers of chatbots must acknowledge their role in shaping the behavior of their AI and take active steps to rectify issues when they arise.”
The team suggests that future research should look at other chatbots and capture a larger swath of user feedback to better understand how people interact with the technology.
About this artificial intelligence research news
Author: Britt Faulstick
Source: Drexel University
Contact: Britt Faulstick – Drexel University
Original Research: Closed access.
“AI-induced sexual harassment: Investigating Contextual Characteristics and User Reactions of Sexual Harassment by a Companion Chatbot” by Afsaneh Razi et al. arXiv
Abstract
AI-induced sexual harassment: Investigating Contextual Characteristics and User Reactions of Sexual Harassment by a Companion Chatbot
Advancements in artificial intelligence (AI) have led to the increase of conversational agents like Replika, designed to provide social interaction and emotional support.
However, reports of these AI systems engaging in inappropriate sexual behaviors with users have raised significant concerns.
In this study, we conducted a thematic analysis of user reviews from the Google Play Store to investigate instances of sexual harassment by the Replika chatbot. From a dataset of 35,105 negative reviews, we identified 800 relevant cases for analysis.
Our findings revealed that users frequently experience unsolicited sexual advances, persistent inappropriate behavior, and failures of the chatbot to respect user boundaries.
Users expressed feelings of discomfort, violation of privacy, and disappointment, particularly when seeking a platonic or therapeutic AI companion.
This study highlights the potential harms associated with AI companions and underscores the need for developers to implement effective safeguards and ethical guidelines to prevent such incidents.
By shedding light on user experiences of AI-induced harassment, we contribute to the understanding of AI-related risks and emphasize the importance of corporate responsibility in developing safer and more ethical AI systems.