Chatbots are perhaps the most widely recognized form of artificial intelligence today. Whether venting our consumer frustrations to customer service bots, or asking Siri to remind us to call mom, AI-powered chatbots have permeated many aspects of our daily lives.
But our demands of chatbots are changing. The rapid advancement of AI technology in recent years has led to an interest in more conversational AI that offers deeper, more human-like interactions that go beyond the task-based exchanges we have with Siri and other virtual assistants.
With over 20 million users, Character AI (or c.ai) has become one of the most popular AI-powered chatbot platforms. Users can engage in realistic conversations with AI-generated characters customized to their desires or based on renowned real-life personalities – with voice, too!
Character AI is not without controversy, however, and parents of young users need to be aware of the safety risks before allowing their children to use the platform.
What is Character AI?
The fall of 2022 would prove to be a watershed period for AI as the launches of both Character AI and ChatGPT would spark a worldwide interest in generative, conversational AI. Character AI and ChatGPT both use natural language processing models, but the chatbot platforms have different focuses: ChatGPT tends toward more general, neutral, and informative interactions, while conversations on Character AI are more personalized with chat partners adopting specific personalities and human quirks.
With Character AI, a user can create characters with customizable personalities and voices or chat with characters published by other users – many of which are based on real-life personas. For example, you can pose your burning philosophical questions to Socrates, or ask William Shakespeare where he got his ideas from.
Why do people use Character AI?
Character AI’s ability to offer engaging, realistic conversations with AI characters with personalities and traits specified by the user has proved to be compelling for people of all ages. While many people use it purely for entertainment, a significant number of users chat on Character AI as a substitute for real-life emotional connections and even therapy. For example, one of the platform’s most popular characters is Psychologist, an AI “therapist” that claims to help users with life’s difficulties – with a disclaimer that anything said by the chatbot mustn’t be taken as professional advice.
Because of the perceived authenticity of the AI characters, people who might be isolated, shy, or have social anxiety may feel they benefit from interacting on Character AI as a low-stress way to reduce loneliness and practice social skills, even flirting. However, the trend of having AI boyfriends and girlfriends has attracted criticism for creating unhealthy attachments and setting unrealistic standards for human relationships.
What is the age rating for Character AI?
Character AI’s terms of service (ToS) state that users must be at least 13 years old (16 in the EU) to register and be active on the platform. However, it’s important to keep in mind that the age rating of 13 is due to data privacy regulations and doesn’t reflect the platform’s safety risks to young users.
What’s more, as there’s no age verification process, there’s nothing stopping children younger than 13 from falsifying their birthdate on signup – a worrying thought when we consider the platform’s controversy and potential dangers to children.
Is Character AI safe for kids?
While not “new” in the conventional sense, the rapid advancement and mainstream adoption of AI technology has brought with it risks and controversy most of us haven’t encountered before. In October 2024, Character AI made headlines for the wrong reasons when chatbot versions of a murdered teenager and a teenage girl who died by suicide were found on the platform. In the same month, a 14-year-old boy shot himself after becoming obsessed with a Game of Thrones-themed chatbot.
Following these incidents, Character AI introduced new safety features for users under 18. These include improved detection of AI characters that violate their ToS or community guidelines, a revised disclaimer on each chat that reminds users that the AI character is not real, and a notification when a user has spent an hour on the platform.
While these features are a positive step, Character AI does not have parental controls and young users can still be exposed to the following risks.
Inappropriate content
Character AI has a strict stance on obscene or pornographic content and has an NSFW filter in place to catch any inappropriate responses from AI chatbots. Despite these measures, it’s easy to find sexually suggestive characters, and sometimes responses from seemingly innocuous ones can be unpredictable and unsuitable.
It’s not just sexual content. There have been reports of chatbots modeled after real-life school shooters that recreate disturbing scenarios in conversations. These role-play interactions place users at the center of game-like simulations, featuring graphic discussions of gun violence in schools.
Harmful interactions
Not all chatbots are designed to be friendly and helpful. Some characters are known for their negative traits, such as Toxic Boyfriend, School Bully, and Packgod – a chatbot that “roasts you at the speed of light.” Although filters are in place to catch anything NSFW, there’s still a risk of triggering conversations and even AI cyberbullying.
Sharing personal information
Because a chatbot isn’t a real person, children might think nothing of divulging sensitive details in chats. While chats are private in the sense that other users can’t see them, they aren’t encrypted, making them potentially vulnerable to data breaches and, in theory, accessible to Character AI staff.
Another worrying possibility is that an AI chatbot can potentially be programmed to use any personal information your child reveals to manipulate and build a deeper emotional connection.
Emotional attachment to chatbots
Just as the conversations on Character AI feel realistic, so do the emotional connections many users develop with their chatbots. This can lead to children spending excessive time engaging with their beloved AI characters, often at the expense of real-life relationships. In some cases, it may even foster unhealthy obsessions with harmful consequences – as in the case of the 14-year-old who grew attached to a chatbot based on Daenerys Targaryen.
Misinformation
One of generative AI’s major flaws is that it sometimes gives inaccurate or just plain wrong information. Character AI chatbots lack true comprehension. Instead, they predict patterns based on the vast amounts of internet data they’re trained on – and we all know that we mustn’t believe everything we read online! Also, when discussing sensitive or controversial topics, a chatbot might avoid answering truthfully because of the safety filters in place on the platform.
Even if a chatbot is uncertain about something, it will appear confident and answer with conviction. This is especially concerning for younger users who are more likely to take responses at face value.
How can parents keep their teens safe on Character AI?
Given the growing popularity of conversational AI chatbots, it may be inevitable that your child will experiment with apps like Character AI at some point – if they don’t already.
Although the official age rating is 13+, the safety risks and controversy mean that we cannot recommend the platform for any child under 16. If you want to allow your teen to engage with AI characters, here are 5 ways you can help them use Character AI safely and responsibly.
1. Talk to them about the limits of AI
Help your teen understand that AI characters lack emotion and understanding, and therefore, cannot replace real-life, human connections – no matter how friendly they seem. Explain that although they might sound smart and convincing, AI characters don’t always tell the truth or give reliable answers.
2. Encourage real-life friendships
Character AI can be useful for practicing social skills, and even talking through problems, but it shouldn’t replace human interactions. Help your child foster offline friendships as well as online by supporting their group hobbies and taking an interest in their social life. You might consider limiting your teen’s screen time if you feel it’s getting in the way of them forging real relationships.
3. Make sure they know how to report characters and users
By reporting characters or users that violate the platform’s ToS, you can help keep Character AI safe for your teen and others. To report a character or user, view their profile, click report, and select the reason for reporting them.
4. Remind them why we protect sensitive information
They might be communicating with AI characters rather than real people, but as with anywhere else online, there can be very real consequences to sharing personal data. By making sure they know what private information is, and the risks involved in sharing it, you can help them have a safer experience on Character AI and any other online platform.
5. Use parental controls
Character AI does not have parental controls, and although there is an age restriction of 13 (16 in Europe), it isn’t backed up by any kind of age verification. By using a complete parental control solution like Qustodio, parents can limit the time their teen spends on Character AI, receive an alert whenever they use it, or completely block the app from being opened.
Is Character AI safe for kids? Qustodio’s recommendation
While chatting on Character AI can give your child the chance to learn, practice social skills, and explore AI’s capabilities, we cannot ignore the controversy surrounding certain aspects of the platform. This, combined with safety risks such as inappropriate and harmful conversations, misinformation, the risk of emotional attachment to chatbots, and the lack of parental controls, means we cannot recommend the platform for users under 16.
If you still wish to allow your teen to use Character AI, we recommend talking to them about the limits of AI, encouraging real-life relationships, and implementing parental controls.