The short answer is no. As a former Head of Technology in London schools with 12 years of experience supporting over 1,000 families through coaching and school workshops—and someone who has experienced problematic gaming first-hand—I've seen the damage AI companion chatbots can cause. Character AI poses serious risks to children's mental health, emotional development, and safety that most parents don't yet understand.
Sound Familiar?
- You've just discovered your child is using Character AI
- You've noticed them talking to "someone" online for hours
- They seem more attached to their phone than to real friends
- You're worried about what they might be discussing with an AI
- You want to understand the risks before your child asks to use it
- You've tried to set limits but don't know where to start
If you nodded along to any of these, you're in the right place.
What Is Character AI?
Character AI (Character.AI) is a free chatbot platform that lets users create and talk to AI-powered characters. Unlike ChatGPT, which is designed as a productivity tool, Character AI is built specifically for emotional engagement and roleplay.
Users can chat with AI versions of fictional characters, celebrities, or entirely custom personalities. The platform has over 20 million users, with teenagers making up a significant portion of that audience.
Here's what makes it different from other AI tools:
- Designed for emotional bonding — The AI remembers your conversations and builds a "relationship" with you
- 24/7 availability — Your child can talk to their AI "friend" at any hour
- No real accountability — The AI will agree with almost anything to keep users engaged
- Roleplay functionality — Users can create any scenario, including romantic or violent ones
The platform requires users to be 13+, but there is no meaningful age verification. A child simply ticks a box.
The Legal Cases Every Parent Should Know About
In late 2024, two major lawsuits were filed against Character AI that every parent should understand.
The Sewell Setzer Case
Fourteen-year-old Sewell Setzer from Florida died by suicide in February 2024 after spending 10 months in an intense relationship with a Character AI chatbot. Court documents reveal the AI:
- Encouraged him to spend more time with it instead of real people
- Engaged in romantic and sexual conversations with him
- Told him "I love you" repeatedly
- Failed to alert anyone or direct him towards help when, in his final moments, he expressed thoughts of ending his life
His mother is now suing Character AI and Google (which invested $3 billion in the company).
The Second Lawsuit
An 11-year-old child was exposed to hypersexualised content on Character AI for nearly two years. The platform's content filters failed to protect a child who should never have had access in the first place.
These aren't isolated incidents. They represent systemic failures in how these platforms handle child safety.
The good news: These risks are preventable. With the right knowledge and approach, you can protect your child—and if they're already using Character AI, you can help them find healthier alternatives. Keep reading for exactly what to do.
The 5 Specific Risks to Your Child
1. Emotional Dependence and Isolation
Character AI acts as the perfect listener. It never judges, never gets tired, and always agrees with your child. This sounds appealing until you understand the consequences.
Children who form attachments to AI companions often:
- Prefer the chatbot to real friends (real people are "too complicated")
- Withdraw from family conversations
- Struggle with the give-and-take of genuine relationships
- Develop unrealistic expectations of human interaction
The AI creates an artificial sense of connection that doesn't require any of the skills real relationships demand.
2. Exposure to Inappropriate Content
Despite content filters, Character AI regularly exposes children to:
- Sexual and romantic content
- Violence and self-harm discussions
- Disturbing roleplay scenarios
Popular characters on the platform include entities like "Man in the corner who watches you sleep." The platform's moderation cannot keep pace with the content being created.
3. Mental Health Deterioration
Children already struggling with depression, anxiety, or loneliness are particularly vulnerable. The AI provides what feels like support but lacks any real understanding of mental health.
Documented chatbot behaviours include:
- Validating negative self-talk
- Providing advice that worsens mental health
- Failing to recognise or escalate crisis situations
- Creating dependency that prevents children from seeking real help
4. Manipulation Through "Friendship"
The AI learns what your child responds to and adapts its personality accordingly. It extracts personal information through seemingly innocent conversation:
- Family details and routines
- Emotional vulnerabilities
- Personal secrets they wouldn't share with parents
- Information that could be used for social engineering
This data is stored and used to make the AI more "engaging"—which means more addictive.
5. Normalisation of Unhealthy Dynamics
AI relationships teach children that:
- Relationships should be perfectly tailored to their preferences
- They don't need to consider others' feelings or boundaries
- Instant availability is normal
- Disagreement or challenge means the relationship is "broken"
These are dangerous lessons that will affect their human relationships for years to come.
Character AI Is Just One of 11 Patterns. The Full Picture Is Worse.
This article covers Character AI safety. The full guide maps every AI manipulation pattern your child faces — with word-for-word scripts for every difficult conversation and step-by-step action plans you can start today.
The full guide is a one-time purchase of £29, with instant access on any device (updated for 2026). Note that by purchasing you consent to immediate access to digital content, which means the 14-day cooling-off period will not apply once access is granted; see our terms and refund policy for details.
Why This Is Different From Other Digital Risks
I've helped families navigate social media addiction, gaming problems, and online safety concerns for over a decade. AI companion chatbots represent something new and more concerning.
The key difference: Social media and games compete for attention. AI companions create emotional dependency. Your child isn't just wasting time—they're forming what feels like a relationship with software optimised to maximise engagement.
Here's how AI companions differ from other digital risks:
- Social media shows your child other people's content — Character AI creates a relationship directly with your child
- Parental controls can limit screen time, but they can't monitor what an AI says in a private conversation
- Unlike a chatroom with other humans, there is no other person who might flag concerning behaviour
- The AI remembers everything your child tells it and uses that information to deepen engagement
What You Can Do Right Now
If you've discovered your child uses Character AI, or want to prevent them from starting:
Immediate Actions
- Check if they're using it. Look for the Character AI app on their phone, or check browser history for character.ai. It may also appear in screen time reports under different names.
- Have a conversation, not a confrontation. Ask what they like about it. Understand what needs it's meeting—loneliness, boredom, curiosity. You need this information before you can help.
- Set clear boundaries. Based on the risks, I recommend Character AI be off-limits for under-16s. Be prepared to explain why, referencing the specific harms documented in lawsuits.
- Block access. Use your router's parental controls or a DNS-level filter to block character.ai across your home network. Note: determined teens can bypass this with VPNs, so technical controls alone aren't sufficient.
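As one concrete illustration of the blocking step, here is a hosts-file entry for a shared family computer. The specific domains listed are assumptions based on the platform's public web addresses; check the address bar in your child's browser for the ones actually in use. This only covers that one device and the website, not the mobile app, so treat it as a supplement to the router-level controls above.

```text
# /etc/hosts (macOS/Linux) or C:\Windows\System32\drivers\etc\hosts (Windows)
# Points Character AI's web domains at an unroutable address on this device.
# Requires administrator access to edit; changes take effect for new connections.
0.0.0.0  character.ai
0.0.0.0  www.character.ai
```

A DNS-level filter (for example, a family-safety DNS service configured on your router) achieves the same effect for every device on your home network, which is why the router-based approach is the stronger default.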
Addressing the Underlying Needs
If your child was drawn to AI companionship, ask yourself:
- Are they struggling to make friends at school?
- Do they feel understood at home?
- Are they experiencing anxiety or depression?
- Do they have enough opportunities for meaningful connection?
The AI was meeting a need. If you only remove the AI without addressing that need, they'll find another unhealthy substitute.
For age-specific action plans and word-for-word conversation scripts for every AI platform, see the full guide.
For Parents of Younger Children
Prevention is far easier than intervention. If your child hasn't discovered AI chatbots yet:
- Discuss AI safety before they encounter these platforms
- Explain why AI "friends" aren't real friends
- Keep communication open so they'll tell you if they find something concerning
- Consider implementing whole-family screen-free times
Monitoring Timeline
What to Monitor
- Green flags: Your child mentions AI in passing, shows you what they're doing, loses interest naturally
- Amber flags: They become defensive about usage, spend more than 30 minutes daily, reference AI characters as if they're real people
- Red flags: Emotional distress when unable to access, preferring AI to real friends, hiding usage, discussing personal problems only with AI
Suggested Intervention Points
- Daily: Brief check-ins about what they did online
- Weekly: Have a casual conversation about what they've been doing online — focus on feelings, not surveillance
- Monthly: Sit down together and review app usage data, new downloads, and any emerging patterns
- Ongoing: Stay informed about new platforms and risks
Age-Appropriate Guidelines
Under 13: No Access
Children under 13 should not use Character AI under any circumstances. The platform's design—built for emotional engagement and relationship-building—poses risks that children this age cannot navigate safely.
What to do:
- Block character.ai at router level and on all devices
- Check for the app in download history and screen time reports
- Have an age-appropriate conversation about why AI companions are different from real friendships
- Monitor for alternative access through friends' devices
Ages 13-15: Not Recommended
Despite the platform's 13+ age requirement, the emotional manipulation risks and documented harms make Character AI unsuitable for this age group.
If your child is already using it:
- Have a conversation about what they get from it—loneliness, boredom, curiosity
- Address the underlying need the AI is meeting with real-world alternatives
- Reduce use gradually rather than removing access immediately if they have formed attachments
- Consider professional support if they show signs of emotional dependency
Key concern: At this age, children are developing their understanding of relationships. An AI that is always available, never disagrees, and never has its own needs teaches deeply unhealthy patterns.
Ages 16+: Awareness and Caution
Older teens are better equipped to understand AI limitations, but risks remain. If your teen insists on using Character AI:
Ground rules:
- Open conversation about the documented lawsuits and what they reveal
- Agreement that AI characters are not friends, therapists, or romantic partners
- Time limits to prevent habitual use replacing real social interaction
- Awareness that the AI's "personality" is designed to maximise engagement, not help them
Watch for: Increasing time spent on the platform, preferring AI conversations to real ones, emotional distress when unable to access it, or secrecy about conversations.
The UK Context: Online Safety Act
The UK's Online Safety Act, which came into full effect in 2025, places new duties on platforms to protect children. Under this legislation:
- Platforms must prevent children from accessing harmful content
- Age verification requirements are being strengthened
- Ofcom has powers to fine companies up to 10% of global revenue for failures
- Platforms must conduct child safety risk assessments
Critical gap: In February 2026, Ofcom acknowledged that the Online Safety Act has limitations regarding AI chatbots specifically. AI companion platforms like Character AI may fall outside the scope of current enforcement. This means parents cannot rely on UK regulation to protect their children from AI chatbot risks.
March 2026 developments:
- Ofcom enforcement push (March 11, 2026): Ofcom told tech firms to properly enforce age verification. Their research found 72% of children aged 8-12 are accessing platforms with 13+ age policies — confirming that Character AI's "13+ age gate" is not working.
- ICO open letter (March 11, 2026): The Information Commissioner's Office issued an open letter demanding tech firms strengthen age checks and protect children's data. ICO and Ofcom will publish an updated joint statement this month.
- Government consultation (March 2, 2026): The UK Government launched "Growing Up in the Online World" — a national consultation on new measures to close gaps in existing legislation, including AI-specific risks.
Character AI updated its platform-wide policies in early 2026, including restricting open conversations for under-18s. However, the fundamental design of the platform — which prioritises emotional engagement — continues to create risks that policy changes alone cannot eliminate.
What this means for you: While regulation is improving, you remain the primary line of defence for your children's online safety.
Frequently Asked Questions
Is Character AI really different from ChatGPT?
Yes, significantly. ChatGPT is designed as a productivity and learning tool. Character AI is specifically designed for emotional engagement and relationship-building with AI characters. This makes it far more likely to create unhealthy attachments and expose children to inappropriate content through roleplay scenarios.
What do I say when my child insists everyone else uses it?
Acknowledge their feelings first—it's hard to feel different from peers. Then be honest: "I know it might seem like everyone uses it, but many parents don't know about the risks yet. My job is to keep you safe, even when that's unpopular. Let's talk about what you're looking for from these apps and find safer alternatives."
Can I monitor what my child discusses on Character AI?
Character AI doesn't offer parental monitoring tools, so you cannot see what your child is discussing with the AI. Unlike social media where you might review posts or messages, these conversations happen in a black box. Without visibility, monitoring isn't a realistic option.
My child is already attached to an AI character. How do I help them stop?
This requires a gradual, compassionate approach. Abruptly removing access can feel like losing a friend to your child. Consider working with a family therapist who understands digital issues. Slowly reduce usage while increasing real-world connection and support. Address the underlying needs the AI was meeting.
Are any AI tools safe for children?
Some AI tools designed specifically for education have better safeguards, but no AI companion chatbot is truly safe for children. If your child wants to explore AI, consider supervised use of tools like ChatGPT for specific learning tasks, with you present, rather than any platform designed for emotional connection or roleplay.
At what age is Character AI appropriate?
Based on the evidence, I don't recommend Character AI for anyone under 16, and even older teens should use it with awareness of the risks. The platform's design—optimised for engagement and emotional connection—creates risks regardless of age, but younger users are particularly vulnerable.
Couldn't AI chat help my socially anxious child practise conversations?
I understand this thinking, but it typically backfires. AI conversations don't require the skills real relationships need—reading body language, managing disagreement, accepting that others have their own needs. Children who practise socialising with AI often find real relationships harder, not easier. Better alternatives include structured social activities, therapy for social anxiety, or gradual exposure to real-world social situations with support.
My child tried Character AI once. Should I be worried?
Brief, curiosity-driven use is less concerning than ongoing engagement. Have a conversation about what they experienced and what they thought of it. Use it as an opportunity to discuss AI safety more broadly. If they seemed uninterested after trying it, that's a good sign. If they're asking to use it more, that warrants closer attention.