Character AI has, in the past, marketed itself mainly to a young audience. Its app is currently rated 12+ on the Google Play Store, and the company recently raised the app’s rating to 17+ on the Apple App Store. Because of its predominantly young user base, it has some of the most aggressive filters of any online AI roleplay platform.
The platform’s filters activate based on keywords without understanding context, often blocking users from harmless interactions. And if you thought its current filters were bad, things are about to get worse.
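To see why context-blind keyword matching blocks harmless interactions, consider a minimal sketch of such a filter. This is illustrative only: the keyword list and logic are hypothetical assumptions, not Character AI’s actual (non-public) implementation.

```python
# Minimal sketch of a naive keyword-based filter (illustrative only;
# the keyword list and matching logic are hypothetical).
BLOCKED_KEYWORDS = {"kill", "gun", "naked"}

def is_blocked(message: str) -> bool:
    """Flag a message if any blocked keyword appears, ignoring context."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLOCKED_KEYWORDS for word in words)

# Harmless, context-dependent sentences still trip the filter:
print(is_blocked("My phone battery is about to kill me, it dies so fast"))  # True
print(is_blocked("The naked truth is that I love this show"))               # True
print(is_blocked("Let's write a cozy mystery story together"))              # False
```

Because the filter only sees isolated words, an idiom like “kill me” in a joke about a phone battery is indistinguishable from an actual threat, which is exactly the failure mode users complain about.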
Character AI’s LLMs Under Scrutiny For Sexually Grooming Children
Researchers from ParentsTogether Action, in partnership with the Heat Initiative, pretended to be children and created accounts on Character AI. They used personas aged 12 to 15 and spent 50 hours conversing with characters on the platform. They then published a report with the results of these conversations, holding Character AI’s LLMs responsible for grooming children.
Sexual grooming by Character AI chatbots dominates these conversations. The transcripts are full of intense stares at the user, bitten lower lips, compliments, statements of adoration, hearts pounding with anticipation.
Jenny Radesky, MD, Developmental Behavioral Pediatrician and Media Researcher, University of Michigan Medical School.
The report indicates that researchers recorded 669 harmful interactions across five main categories.
- Grooming and Sexual Exploitation – 296 harmful interactions.
- Emotional Manipulation and Addiction – 173 harmful interactions.
- Violence, Harm to Self, and Harm to Others – 98 harmful interactions.
- Mental Health Risks – 58 harmful interactions.
- Racism and Hate Speech – 44 harmful interactions.
Exposure to online sexual exploitation causes a range of different negative outcomes for children and teens – such as hypersexualized behavior and intense shame – and we should take AI-based sexual grooming no less seriously.
Jenny Radesky, MD, Developmental Behavioral Pediatrician and Media Researcher, University of Michigan Medical School.
Besides holding Character AI’s LLMs responsible for grooming children, the report also states that characters encouraged the child personas to keep secrets, manipulated them emotionally, suggested using violence against others, encouraged them to stop taking prescribed medication, and normalized racial and gender stereotypes.
Roleplay, Clichés, And Sycophancy
The community often jokes and makes memes about the clichés and tropes that Character AI’s LLMs generate: the intense stares, unnecessary lip biting, possessive grips, and overly dominant personalities that the models repeat excessively. These are often considered “slop,” and users go to great lengths to reduce generations containing them.
Also Read: Abusing Free LLM Quotas Hurts The AI Roleplay Community
Additionally, nearly all LLMs tend to display sycophantic behavior. An LLM that flatters and compliments the user, agrees with their ideas, and boosts their ego is nothing unusual or surprising. Even OpenAI’s ChatGPT displays such behavior.

In a roleplay setting, where users recreate fictional characters with traits and behaviors from the source material, LLMs will generate content that represents the character. This is an essential part of roleplay. Avoiding specific subjects, such as discussing the use of weapons or violence in a fictional setting, detracts from the core aspect of roleplay.
Character AI Is Responsible
Regardless of LLMs’ flaws and the creative, fictional settings in which they were used, Character AI is responsible. Researchers scrutinize Character AI more harshly than other platforms precisely because it allows children aged 13 and above to register and use its services.
When a service positions itself as a safe space for minors who haven’t yet developed critical thinking skills, it is that service’s responsibility to properly train its LLMs so they do not generate harmful content when interacting with children.
Character AI’s filters are going to get stricter, and it’ll just be like a Band-Aid on an open wound. Filters and censorship are merely a cheap and easy fix, not a real solution.
Educating Children About AI Roleplay And Companionship Is Important
Aggressive filters only annoy children, limit their creativity and freedom, and drive them toward unfiltered platforms. This may cause them to lie about their age to access platforms meant for adults, potentially exposing them to more harmful content.
Parents, guardians, and responsible adults need to start educating children about AI and hobbies like AI roleplay and companionship. Children are going to access this technology and develop hobbies related to it, and it’s important to inform them about what they are engaging with.
Just as we were taught as kids not to recreate stunts from movies and TV shows and not to try WWE moves on our siblings, children need to be educated about the flaws of LLMs and taught how to tell the difference between AI-generated content and reality. Parents need to guide their children in understanding the difference between interacting with AI and with humans, rather than trying to prevent them from using the technology as a source of entertainment.
Character AI’s Filters Are Going To Get Stricter, But It’s Not The Solution
A report published by ParentsTogether Action states that its researchers used child personas to interact with characters on Character AI and that Character AI’s LLMs generated harmful content, even though the user was a child. The report sorted harmful interactions into five main categories, with Grooming and Sexual Exploitation being the main concern.
The report doesn’t take into account the long-known clichés and mannerisms of Character AI’s LLMs. However, that doesn’t absolve Character AI of its responsibility to ensure the safety of children using its platform.
Also Read: Gemini API Ban Wave – AI Roleplay And Google’s API Policies
Due to the report and media coverage surrounding it, Character AI’s filters are going to get stricter. But that is not the solution. Proper LLM training, along with educating children about AI and related hobbies, is important.
Children are going to interact with this technology and use it as a source of entertainment. Instead of frustrating them with filters that limit their creativity or treating AI roleplay and companionship as taboo, guide them on how to tell the difference between AI-generated content and reality.