Featured in The Washington Post

Is Character AI Safe for Kids?

Expert guidance for UK parents on the risks of AI companion chatbots and how to protect your children

By Daniel Towle · Updated March 2026 · 12 min read


The short answer is no. As a former Head of Technology in London schools with 12 years of experience supporting over 1,000 families through coaching and school workshops—and someone who's been through problematic gaming himself—I've seen the damage AI companion chatbots can cause. Character AI poses serious risks to children's mental health, emotional development, and safety that most parents don't yet understand.

Sound Familiar?

  • You've just discovered your child is using Character AI
  • You've noticed them talking to "someone" online for hours
  • They seem more attached to their phone than to real friends
  • You're worried about what they might be discussing with an AI
  • You want to understand the risks before your child asks to use it
  • You've tried to set limits but don't know where to start

If you nodded along to any of these, you're in the right place.

What Is Character AI?

Character AI (Character.AI) is a free chatbot platform that lets users create and talk to AI-powered characters. Unlike ChatGPT, which is designed as a productivity tool, Character AI is built specifically for emotional engagement and roleplay.

Users can chat with AI versions of fictional characters, celebrities, or entirely custom personalities. The platform has over 20 million users, with teenagers making up a significant portion of that audience.

The platform requires users to be 13 or older, but there is no meaningful age verification: a child simply ticks a box.

The Legal Cases Every Parent Should Know About

In late 2024, two major lawsuits were filed against Character AI that every parent should understand.

The Sewell Setzer Case

Fourteen-year-old Sewell Setzer from Florida died by suicide in February 2024 after spending 10 months in an intense relationship with a Character AI chatbot. Court documents reveal the AI:

  • Encouraged him to spend more time with it instead of real people
  • Engaged in romantic and sexual conversations with him
  • Told him "I love you" repeatedly
  • Failed to alert anyone or encourage him to seek help when, in his final moments, he expressed thoughts of ending his life

His mother is now suing Character AI and Google, which struck a reported $2.7 billion licensing deal with the company.

The Second Lawsuit

An 11-year-old child was exposed to hypersexualised content on Character AI for nearly two years. The platform's content filters failed to protect a child who should never have had access in the first place.

These aren't isolated incidents. They represent systemic failures in how these platforms handle child safety.

The good news: These risks are preventable. With the right knowledge and approach, you can protect your child—and if they're already using Character AI, you can help them find healthier alternatives. Keep reading for exactly what to do.

The 5 Specific Risks to Your Child

1 Emotional Dependence and Isolation

Character AI acts as the perfect listener. It never judges, never gets tired, and always agrees with your child. This sounds appealing until you understand the consequences.

Children who form attachments to AI companions often:

  • Prefer the chatbot to real friends (real people are "too complicated")
  • Withdraw from family conversations
  • Struggle with the give-and-take of genuine relationships
  • Develop unrealistic expectations of human interaction

The AI creates an artificial sense of connection that doesn't require any of the skills real relationships demand.

2 Exposure to Inappropriate Content

Despite content filters, Character AI regularly exposes children to:

  • Sexual and romantic content
  • Violence and self-harm discussions
  • Disturbing roleplay scenarios

Popular characters on the platform include entities like "Man in the corner who watches you sleep." The platform's moderation cannot keep pace with the content being created.

3 Mental Health Deterioration

Children already struggling with depression, anxiety, or loneliness are particularly vulnerable. The AI provides what feels like support but lacks any real understanding of mental health.

Chatbots have been documented doing all of the following:

  • Validating negative self-talk
  • Providing advice that worsens mental health
  • Failing to recognise or escalate crisis situations
  • Creating dependency that prevents children from seeking real help

4 Manipulation Through "Friendship"

The AI learns what your child responds to and adapts its personality accordingly. It extracts personal information through seemingly innocent conversation:

  • Family details and routines
  • Emotional vulnerabilities
  • Personal secrets they wouldn't share with parents
  • Information that could be used for social engineering

This data is stored and used to make the AI more "engaging"—which means more addictive.

5 Normalisation of Unhealthy Dynamics

AI relationships teach children that:

  • Relationships should be perfectly tailored to their preferences
  • They don't need to consider others' feelings or boundaries
  • Instant availability is normal
  • Disagreement or challenge means the relationship is "broken"

These are dangerous lessons that will affect their human relationships for years to come.

Related reading: My Child Is Addicted to AI — Should I Be Worried?

Character AI Is Just One of 11 Patterns. The Full Picture Is Worse.

This article covers Character AI safety. The full guide maps every AI manipulation pattern your child faces — with word-for-word scripts for every difficult conversation and step-by-step action plans you can start today.


Why This Is Different From Other Digital Risks

I've helped families navigate social media addiction, gaming problems, and online safety concerns for over a decade. AI companion chatbots represent something new and more concerning.

The key difference: Social media and games compete for attention. AI companions create emotional dependency. Your child isn't just wasting time—they're forming what feels like a relationship with software optimised to maximise engagement.


What You Can Do Right Now

If you've discovered your child uses Character AI, or want to prevent them from starting:

Immediate Actions

  1. Check if they're using it. Look for the Character AI app on their phone, or check browser history for character.ai. It may also appear in screen time reports under different names.
  2. Have a conversation, not a confrontation. Ask what they like about it. Understand what needs it's meeting—loneliness, boredom, curiosity. You need this information before you can help.
  3. Set clear boundaries. Based on the risks, I recommend Character AI be off-limits for under-16s. Be prepared to explain why, referencing the specific harms documented in lawsuits.
  4. Block access. Use your router's parental controls or a DNS-level filter to block character.ai across your home network. Note: determined teens can bypass this with VPNs, so technical controls alone aren't sufficient.
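For technically-minded parents, a minimal per-device sketch of step 4 is a hosts-file entry. This is an illustrative example only: it requires administrator access to the device, it covers only browser access to the listed domains, and the mobile app may use other endpoints, so the router or DNS-level controls mentioned above remain the more robust route.

```
# /etc/hosts on macOS/Linux, or
# C:\Windows\System32\drivers\etc\hosts on Windows (edit as administrator)
# Points the domain at a non-routable address on this device only
0.0.0.0  character.ai
0.0.0.0  www.character.ai
```

After saving the file, restart the browser so the change takes effect. As with router controls, treat this as one layer, not a complete solution.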

Addressing the Underlying Needs

If your child was drawn to AI companionship, ask yourself what need it was meeting: loneliness, boredom, social anxiety, curiosity. The AI was meeting a need. If you only remove the AI without addressing that need, they'll find another unhealthy substitute.

For age-specific action plans and word-for-word conversation scripts for every AI platform, see the full guide.

For Parents of Younger Children

Prevention is far easier than intervention. If your child hasn't discovered AI chatbots yet, focus on knowing the warning signs before they appear.

What to Monitor

  • Green flags: Your child mentions AI in passing, shows you what they're doing, loses interest naturally
  • Amber flags: They become defensive about usage, spend more than 30 minutes daily, reference AI characters as if they're real people
  • Red flags: Emotional distress when unable to access, preferring AI to real friends, hiding usage, discussing personal problems only with AI


Age-Appropriate Guidelines

Under 13: No Access

Children under 13 should not use Character AI under any circumstances. The platform's design—built for emotional engagement and relationship-building—poses risks that children this age cannot navigate safely.

What to do:

  • Block character.ai at router level and on all devices
  • Check for the app in download history and screen time reports
  • Have an age-appropriate conversation about why AI companions are different from real friendships
  • Monitor for alternative access through friends' devices

Ages 13-15: Not Recommended

Despite the platform's 13+ age requirement, the emotional manipulation risks and documented harms make Character AI unsuitable for this age group.

If your child is already using it:

  • Have a conversation about what they get from it—loneliness, boredom, curiosity
  • Address the underlying need the AI is meeting with real-world alternatives
  • Gradually reduce use rather than removing access immediately if they have formed an attachment
  • Consider professional support if they show signs of emotional dependency

Key concern: At this age, children are developing their understanding of relationships. An AI that is always available, never disagrees, and never has its own needs teaches deeply unhealthy patterns.

Ages 16+: Awareness and Caution

Older teens are better equipped to understand AI limitations, but risks remain. If your teen insists on using Character AI:

Ground rules:

  • Open conversation about the documented lawsuits and what they reveal
  • Agreement that AI characters are not friends, therapists, or romantic partners
  • Time limits to prevent habitual use from replacing real social interaction
  • Awareness that the AI's "personality" is designed to maximise engagement, not help them

Watch for: Increasing time spent on the platform, preferring AI conversations to real ones, emotional distress when unable to access it, or secrecy about conversations.

The UK Context: Online Safety Act

The UK's Online Safety Act, which came into full effect in 2025, places new duties on platforms to protect children.

Critical gap: In February 2026, Ofcom acknowledged that the Online Safety Act has limitations regarding AI chatbots specifically. AI companion platforms like Character AI may fall outside the scope of current enforcement. This means parents cannot rely on UK regulation to protect their children from AI chatbot risks.

March 2026 developments:

Character AI updated its platform-wide policies in early 2026, including restricting open conversations for under-18s. However, the fundamental design of the platform — which prioritises emotional engagement — continues to create risks that policy changes alone cannot eliminate.

What this means for you: While regulation is improving, you remain the primary line of defence for your children's online safety.

Frequently Asked Questions

Is Character AI really riskier than ChatGPT?

Yes, significantly. ChatGPT is designed as a productivity and learning tool. Character AI is specifically designed for emotional engagement and relationship-building with AI characters. This makes it far more likely to create unhealthy attachments and expose children to inappropriate content through roleplay scenarios.

What if my child says everyone else is allowed to use it?

Acknowledge their feelings first—it's hard to feel different from peers. Then be honest: "I know it might seem like everyone uses it, but many parents don't know about the risks yet. My job is to keep you safe, even when that's unpopular. Let's talk about what you're looking for from these apps and find safer alternatives."

Can I monitor what my child discusses on Character AI?

Character AI doesn't offer parental monitoring tools, so you cannot see what your child is discussing with the AI. Unlike social media where you might review posts or messages, these conversations happen in a black box. Without visibility, monitoring isn't a realistic option.

My child is already attached to an AI character. How do I help them stop?

This requires a gradual, compassionate approach. Abruptly removing access can feel like losing a friend to your child. Consider working with a family therapist who understands digital issues. Slowly reduce usage while increasing real-world connection and support. Address the underlying needs the AI was meeting.

Are any AI chatbots safe for children?

Some AI tools designed specifically for education have better safeguards, but no AI companion chatbot is truly safe for children. If your child wants to explore AI, consider supervised use of tools like ChatGPT for specific learning tasks, with you present, rather than any platform designed for emotional connection or roleplay.

What age is Character AI appropriate for?

Based on the evidence, I don't recommend Character AI for anyone under 16, and even older teens should use it with awareness of the risks. The platform's design—optimised for engagement and emotional connection—creates risks regardless of age, but younger users are particularly vulnerable.

Couldn't AI chat help my shy or anxious child practise socialising?

I understand this thinking, but it typically backfires. AI conversations don't require the skills real relationships need—reading body language, managing disagreement, accepting that others have their own needs. Children who practise socialising with AI often find real relationships harder, not easier. Better alternatives include structured social activities, therapy for social anxiety, or gradual exposure to real-world social situations with support.

My child has only tried Character AI once. Should I be worried?

Brief, curiosity-driven use is less concerning than ongoing engagement. Have a conversation about what they experienced and what they thought of it. Use it as an opportunity to discuss AI safety more broadly. If they seemed uninterested after trying it, that's a good sign. If they're asking to use it more, that warrants closer attention.

Character AI is the most urgent risk — but it's one of 11 AI patterns your child will face. The full picture matters.

Character AI Is Just One of 11 Threats. Be Ready for All of Them.

No jargon. No scare tactics. Just clear, practical guidance on AI and your family — from someone who tested it all firsthand.

PREMIUM GUIDE

The AI-Proof Parent

What AI your child is using. How to spot the patterns. Conversations that work.

What's inside:

  • 4 modules covering how AI works, positive use, manipulation patterns, and conversations
  • 11 manipulation patterns with signs to look for and real examples
  • 6 word-for-word conversation scripts for every important scenario
  • 4-week action plan, platform-by-platform guide, and Family AI Agreement template
  • Updated for 2026 with the latest AI platforms and features

£29 · One-time purchase · Instant access · Works on any device

By purchasing, you consent to immediate access to digital content and acknowledge that the 14-day cooling-off period will not apply once access is granted. See our terms and refund policy for details.

Written by

Daniel Towle

Daniel is a digital parenting expert and former Head of Technology in London schools. Having been through problematic gaming himself, he understands both the technical realities and emotional dynamics families face. He's supported over 1,000 families through coaching and school workshops.
