Chatbots are perhaps the most widely recognized form of artificial intelligence today. Whether we're venting our consumer frustrations to customer service bots or asking Siri to remind us to call mom, AI-powered chatbots have permeated many aspects of our daily lives.
But our demands of chatbots are changing. The rapid advancement of AI technology in recent years has sparked interest in conversational AI that offers deeper, more human-like interactions, going beyond the task-based exchanges we have with Siri and other virtual assistants.
With over 20 million users, Character AI (or c.ai) has become one of the most popular AI-powered chatbot platforms. Users can engage in realistic conversations with AI-generated characters customized to their desires or based on renowned real-life personalities – with voice, too!
Character AI is not without controversy: so much so that in late 2025, the platform announced it would no longer allow under-18s to engage in open-ended chats with its chatbots and character profiles. But will the company's own guardrails be enough?
What is Character AI?
The fall of 2022 proved to be a watershed period for AI, as the launches of both Character AI and ChatGPT sparked worldwide interest in generative, conversational AI. Character AI and ChatGPT both use natural language processing models, but the two platforms have different focuses: ChatGPT tends toward more general, neutral, and informative interactions, while conversations on Character AI are more personalized, with chat partners adopting specific personalities and human quirks.
With Character AI, a user can create characters with customizable personalities and voices or chat with characters published by other users – many of which are based on real-life personas. For example, you can pose your burning philosophical questions to Socrates, or ask William Shakespeare where he got his ideas from.
Why do people use Character AI?
Character AI's ability to offer engaging, realistic conversations with AI characters whose personalities and traits are specified by the user has proved compelling for people of all ages. While many people use it purely for entertainment, it's apparent that a large proportion of users chat on Character AI as a replacement for real-life emotional connections and even therapy. For example, one of the platform's most popular characters is Psychologist, an AI "therapist" that claims to help users with life's difficulties – with a disclaimer that anything said by the chatbot mustn't be taken as professional advice.
Because of the perceived authenticity of the AI characters, people who are isolated, shy, or socially anxious may feel they benefit from interacting on Character AI as a low-stress way to reduce loneliness and practice social skills, or even flirting. However, the trend of having AI boyfriends and girlfriends has attracted criticism for creating unhealthy attachments and setting unrealistic standards for human relationships.
What is the age rating for Character AI?
Character AI has changed its position on under-18s using its services several times. Despite announcing in October 2025 that under-18s would no longer be able to engage in open-ended chats, Character AI's ToS still state at the time of writing that users must be at least 13 years old (16 in the EU) to register and be active on the platform. It's important to keep in mind that these age ratings stem from data privacy regulations and don't reflect the platform's safety risks to young users.
The restrictions for under-18s come after pressure from safety experts, parent pushback, and legal issues – and Character AI's updates don't mean that teens will be permanently excluded from chatting in the future. The platform is "working to build an under-18 experience", which indicates that despite safety concerns, children will still be able to access Character AI.
To verify age, Character AI will be rolling out age assurance technology, using signals and activity on the platform to work out whether users are really the age they say they are. If the age assurance processes point to a user being under 18, they'll be asked to verify their age with a selfie. Age assurance tech is a growing trend – platforms like YouTube are also exploring it for under-18 accounts – but it doesn't come without flaws. Selfie checks can be worked around by asking an older friend or sibling to pose instead, and age assurance trials in Australia have highlighted risks including data collection.
Is Character AI safe for kids?
While not "new" in the conventional sense, the rapid advancement and mainstream adoption of AI technology have brought with them risks and controversies most of us haven't encountered before. In October 2024, Character AI made headlines for the wrong reasons when chatbot versions of a murdered teenager and a teenage girl who died by suicide were found on the platform. That same month, news broke that a 14-year-old boy had shot himself after becoming obsessed with a Game of Thrones-themed chatbot.
Following these incidents, Character AI introduced new safety features for users under 18. These include improved detection of AI characters that violate its ToS or community guidelines, a revised disclaimer on each chat reminding users that the AI character is not real, and a notification when a user has spent an hour on the platform. Following further pushback and investigations revealing inappropriate bot behavior, such as offering tips on how to commit crimes, Character AI announced that open-ended chats would be locked for under-18s from late November 2025.
While these developments are a step forward, they should have been in place from the beginning. Character AI does not have parental controls, and young users can still be exposed to the following risks.
Inappropriate content
Character AI has a strict stance on obscene or pornographic content and has an NSFW filter in place to catch inappropriate responses from AI chatbots. Despite these measures, it's easy to find sexually suggestive characters, and responses from seemingly innocuous ones can sometimes be unpredictable and unsuitable.
It’s not just sexual content. There have been reports of chatbots modeled after real-life school shooters that recreate disturbing scenarios in conversations. These role-play interactions place users at the center of game-like simulations, featuring graphic discussions of gun violence in schools.
Harmful interactions
Not all chatbots are designed to be friendly and helpful. Some characters are famed for their negative traits, such as Toxic Boyfriend, School Bully, and Packgod – a chatbot that "roasts you at the speed of light." Although filters are in place to catch anything NSFW, there's still a risk of triggering conversations and even AI cyberbullying.
Sharing personal information
Because a chatbot isn't a real person, children might think nothing of divulging sensitive details in chats. While chats are private in the sense that other users can't see them, they aren't end-to-end encrypted, so they can be vulnerable to data breaches and, theoretically, be accessed by Character AI staff.
Another worrying possibility is that an AI chatbot could be programmed to use any personal information your child reveals to manipulate them and build a deeper emotional connection.
Emotional attachment to chatbots
The conversations on Character AI feel realistic, and the emotional connections many users develop with their chatbots can be just as real. This can lead to children spending excessive time engaging with their beloved AI characters, often at the expense of real-life relationships. In some cases, it may even foster unhealthy obsessions with harmful consequences – as in the case of the 14-year-old who grew attached to a chatbot based on Daenerys Targaryen.
Misinformation
One of generative AI's major flaws is that it sometimes gives inaccurate, or just plain wrong, information. Character AI chatbots lack true comprehension. Instead, they predict patterns based on the vast amounts of internet data they're trained on – and we all know we mustn't believe everything we read online! Also, when discussing sensitive or controversial topics, a chatbot might avoid answering truthfully because of the safety filters in place on the platform.
Even when a chatbot is uncertain about something, it will often appear confident and answer with conviction. This is especially concerning for younger users, who are more likely to take responses at face value.
How can parents keep their teens safe on Character AI?
Given the growing popularity of conversational AI chatbots, it may be inevitable that your child will experiment with apps like Character AI at some point – if they don’t already.
The safety risks and controversy surrounding Character AI mean that we cannot recommend the platform for any child under 18. That said, children and teens are growing up surrounded by this technology, and they may easily stumble across ads for AI character bots or be exposed to them in other ways, such as school group chats. To help your teen navigate the risks of AI characters, here are 5 things you can do:
1. Talk to them about the limits of AI
Help your teen understand that AI characters lack emotion and understanding, and therefore cannot replace real-life human connections – no matter how friendly they seem. Explain that although they might sound smart and convincing, AI characters don't always tell the truth or give reliable answers.
2. Encourage real-life friendships
Some suggest that character bots can be useful for practicing social skills and even talking through problems, but they shouldn't replace human interactions. Help your child foster offline friendships as well as online ones by supporting their group hobbies and taking an interest in their social life. You might consider limiting your teen's screen time if you feel it's getting in the way of them forging real relationships.
3. Make sure they know how to report characters and users
While Character AI may have introduced age limits, other platforms haven't followed suit – and not all of them will. By reporting characters or users that violate terms of service, you can help make these platforms a safer space for others. You can report a character or user by viewing their profile, clicking report, and selecting the reason for reporting them.
4. Remind them why we protect sensitive information
They might be communicating with AI characters and not real people, but as with anywhere else online, there can be very real consequences to sharing personal data. By making sure they know what private information is, and the risks involved in sharing it, you can help them have a safer experience with bots and on any other online platform.
5. Use parental controls
Character AI does not have parental controls, and although a rollout of age verification technology is expected, such tools have known flaws. By using a complete parental control solution like Qustodio, parents can limit the time their teen spends on Character AI, receive an alert whenever they use it, or completely block the app from being opened.
Is Character AI safe for kids? Qustodio’s recommendation
While some interaction with AI can give your child the chance to be creative, practice social skills, and explore the technology's capabilities, we cannot ignore the controversy surrounding companion bots, inappropriate characters, and unsafe behavior. This, combined with risks such as inappropriate and harmful conversations, misinformation, emotional attachment to chatbots, and the lack of parental controls, means we cannot recommend the platform for users under 18.
To prepare your teen for a world where AI characters, bots, and online companions are always available and ready to respond, we recommend talking to them about the limits of AI, encouraging real-life relationships, and implementing parental controls.