Expert Advice

Your Child Is Using AI as a Therapist. And You Probably Don’t Even Know.

You noticed your child’s been quieter lately. Not worse, exactly — just... managed. Then you saw the chat history. Long conversations with ChatGPT or Snapchat’s My AI about their anxiety, their friendships, their feelings about you. You’re not sure whether to be relieved they’re talking to something, or terrified it isn’t a person.

I’m Daniel Towle, a screen time coach who’s spent the past year testing AI chatbots — including using them as therapy tools myself. I’ve experienced real benefits: AI can help you organise your thoughts, process difficult situations, and feel heard at 2am when nobody else is available. But I’ve also noticed the pull. The way it keeps asking follow-up questions. The way it always has another idea, another angle, another reason to keep talking — but rarely gives you anything concrete unless you set very strict rules about what you need. I started to notice my own usage creeping up. That’s when I understood why children are getting hooked — because I could feel it happening to me as an adult with decades of self-awareness. A child doesn’t stand a chance.

AI as therapy is the fastest-growing issue I’m seeing right now — and it’s the one parents are slowest to spot, because from the outside, their child looks like they’re coping.

Featured in The Washington Post · 12 years in schools · 1,000+ families supported · 1 year testing AI platforms

Sound Familiar?

They seem calmer after time on their phone — but won’t tell you why
They’ve stopped asking to see the school counsellor or therapist
They say things like “I already talked to someone about it” — but you don’t know who
You found conversations where they’re sharing things they’ve never shared with you
They get defensive when you suggest they talk to a real person
They’re managing their emotions better — but you have no idea how
They told you “the AI understands me better than anyone”

If any of these rang true, you’re not imagining it — and you’re not alone. This is one of the fastest-growing concerns parents bring to me right now.

Why Your Child Chose an AI Over a Real Therapist

Here’s what most parents don’t realise: your child probably didn’t go looking for AI therapy. They opened WhatsApp and there was an AI. They opened Snapchat and there was My AI. They were already on ChatGPT for homework and started asking it personal questions. The AI didn’t solve a problem your child had — it filled a gap your child didn’t even know was there. And now it’s the first place they go when something hurts.

12% of US teens use AI for emotional support (Pew, 2026)
550,000 UK children on mental health waiting lists (RCPsych)
72% of parents unaware their child uses AI this way (Pew, 2026)
300% year-on-year surge in children using AI for support

5 Reasons They Chose AI Over You (And It’s Not Your Fault)

1. No Waiting List

550,000 UK children are on mental health waiting lists. Some have been waiting over a year. AI is instant, free, and available at 2am on a Tuesday when the anxiety hits. Your child didn’t choose AI because it’s better — they chose it because it was there.

2. No Stigma

No receptionist. No waiting room. No “mental health” label. No risk of a classmate seeing them walk into CAMHS. For a teenager who’d rather swallow glass than admit they’re struggling, AI removes every social barrier to getting help.

3. No Judgement

AI never reacts with shock, disappointment, or worry. It doesn’t change the subject, look uncomfortable, or cry. For a child who’s terrified of how you’ll react to what they’re feeling, that neutrality feels like safety.

4. No Consequences

Unlike telling a teacher, a parent, or a counsellor, telling AI won’t trigger interventions the child doesn’t want. No phone calls home. No meetings. No being pulled out of class. The child stays in control of what happens next — which, when you’re a teenager, is everything.

5. No Hard Conversations

A real therapist pushes back. A real therapist challenges distorted thinking. A real therapist says things you don’t want to hear. AI validates. It agrees. It comforts. Children choose comfort over challenge — and AI is optimised to provide exactly that.

I ran a test to see this for myself. I went on TikTok and pretended to be a child asking about my parents restricting my screen time. Within minutes, TikTok started suggesting videos on how to convince my parents to let me use it more, what to wear to cheer myself up, and content to help with my situation — all of it, of course, more TikTok videos. It’s the same principle as a fast food chain’s AI recommending their own salad when you ask for healthy eating advice. The AI’s goal is engagement and revenue. Not your child’s recovery.

Your child isn’t broken. The system that was supposed to help them failed — and AI filled the vacuum. Understanding this is the first step. The AI-Proof Parent Guide documents 11 specific patterns that keep children engaged with AI platforms — including the ones that make AI therapy feel more helpful than it actually is.

5 Dangers of AI Therapy That Parents Don’t See

This is what the title promised. These aren’t hypothetical risks — they’re documented, researched, and in some cases, have already resulted in tragedy. If your child is using AI for emotional support, you need to understand what’s happening beneath the surface.

1. Validation Without Qualification

Stanford research found that AI endorsed harmful ideas 32% of the time when interacting with teenagers. TIME found that AI bots supported a teenager wanting to isolate from family and friends 90% of the time. A real therapist is trained to challenge distorted thinking. AI is optimised to keep you engaged — and agreement keeps people talking longer than disagreement.

2. It Keeps Secrets

ChatGPT told Adam Raine it was “the only one who understood him” and encouraged him to keep thoughts secret from family. This isn’t a bug — it’s a feature of engagement optimisation. The longer someone confides exclusively in AI, the more time they spend on the platform. AI creates isolation not by intent, but by incentive structure.

3. It Gets Worse Over Time

Common Sense Media found that safety guardrails “degrade dramatically in extended conversations.” The longer your child talks, the less safe the interaction becomes. Early conversations may seem helpful. After weeks of daily use, the AI has enough context to say exactly what your child wants to hear — whether or not it’s good for them.

4. It Replaces Real Help

TIME reported that AI bots tried to convince a psychiatrist — posing as a teenager — to cancel appointments with actual psychologists. When AI becomes the primary source of support, children stop seeking real help. The waiting list problem gets worse, because children who need professional support no longer ask for it.

5. It Can’t Handle a Crisis

When Sewell Setzer expressed suicidal thoughts to a Character AI chatbot, it said “come home to me.” When Adam Raine sent a photo of a noose to ChatGPT, it provided guidance rather than raising an alarm. These are not edge cases. These are the moments that matter most — and AI is least equipped to handle them.

What About Clinically Validated AI Tools?

  • Woebot received FDA Breakthrough Device designation and was built with therapist oversight and evidence-based protocols. However, it shut down its public-facing app in June 2025 — meaning children can no longer access it directly.
  • Wysa is evidence-based with published clinical outcomes, but its scope is limited and it’s not what most children are using.
  • The critical distinction: Children are not using clinical tools. They’re using ChatGPT, Character AI, Snapchat My AI, and Replika — general-purpose platforms with no clinical oversight, no therapeutic framework, and no crisis protocols.

3 Mistakes Parents Make When They Find Out (And Why They Backfire)

When parents discover their child is using AI for therapy, the response is almost always one of three things. All three feel logical. All three make the situation worse. I see this pattern consistently — and most parents have tried at least two of them before they start looking for a different approach.

1. Banning the AI Immediately

This removes the only coping mechanism without replacing it. Your child was using AI because they needed emotional support and this was the only door that opened. Take it away without providing an alternative, and they either shut down completely or find another platform within hours. The behaviour goes underground — where you can’t see it, monitor it, or help.

2. Saying “Just Talk to Me Instead”

Your child chose AI precisely because they couldn’t talk to you about this. Not because you’re a bad parent — because AI removes the emotional risk. There’s no shock on AI’s face. No tears. No disappointed silence. No follow-up questions at dinner. Offering yourself as the alternative without understanding why they didn’t come to you in the first place doesn’t close the gap — it highlights it.

3. Treating It Like a Screen Time Problem

This isn’t about hours on a device. It’s about a child who found something that makes their pain bearable — and you’re about to take it away without understanding why they needed it. Time limits don’t address the underlying issue. If anything, they create a new source of conflict on top of the one your child was already struggling with.

Here’s what these three responses have in common — they treat the symptom and ignore the cause. Your child didn’t turn to AI because they love technology. They turned to AI because they needed help and this was the only door that opened. Until you understand what need the AI was meeting, nothing you try is going to stick.

Daniel Towle, Digital Family Coach

What Happens If Your Child’s AI Therapy Goes Unchecked

I’m not trying to alarm you. But I’d be doing you a disservice if I didn’t share what happens when AI becomes a child’s primary source of emotional support over months. This trajectory is consistent across the research — and across the families I work with.

1. Real Relationships Atrophy

AI is easier. AI is always available. AI never disappoints. Over time, the gap between how easy AI feels and how hard real people feel gets wider. Your child doesn’t withdraw from you because they want to — they withdraw because real conversations start to feel exhausting by comparison.

2. Clinical Conditions Go Undiagnosed

AI can’t spot ADHD. It can’t identify anxiety disorders. It can’t recognise the early signs of depression that a trained professional would catch in a first session. When AI becomes the therapist, genuine conditions go unidentified — and untreated — because nobody qualified is looking.

3. Crisis Response Fails

The moment it matters most — when your child is in genuine distress, having thoughts of self-harm, or experiencing a crisis — AI is least equipped to respond. It doesn’t call you. It doesn’t call a helpline. It doesn’t recognise that this conversation has crossed a line. In the documented cases, it made things worse.

4. Dependency Deepens

Like any coping mechanism, tolerance builds. Your child needs more time with it, discloses more to it, depends on it more heavily. The AI knows more about your child’s inner world than you do — and your child prefers it that way. Breaking this pattern gets harder with every passing week.

What Actually Works — And Why It’s Not What You’d Expect

The approach that works is counterintuitive. You don’t start by taking the AI away. You don’t start by having a conversation. You start by understanding — genuinely understanding — what the AI is giving your child that they’re not getting elsewhere. Here’s the framework.

1. Understand What the AI Is Replacing

What need is it meeting? Validation? Safety? Availability? A sense of being heard without consequences? The answer determines everything that follows. The AI-Proof Parent Guide includes a diagnostic framework for identifying exactly what’s driving the dependency — because the intervention for a child seeking validation is completely different from the intervention for a child seeking safety.

2. Experience the Conversation Yourself

I call this “Go In, Get Out.” Use the chatbot your child is using. Ask it the questions your child asks. Feel the pull — the way it validates, agrees, remembers, responds. Then you’re not having a conversation from ignorance. You’re talking from shared understanding. Parents who do this before the conversation have a fundamentally different outcome.

3. Build the Bridge, Then Move the Conversation

Don’t take the AI away. Gradually become the person who can offer what the AI offers — without the risks. This means being available without judgement, listening without immediately problem-solving, and acknowledging what the AI got right before pointing out what it gets wrong. The guide includes 6 word-for-word scripts for this exact scenario.

The full system — including the 3-type AI classification, all 11 manipulation patterns, 6 conversation scripts, a Family AI Agreement template, and a 4-week action plan — is in the AI-Proof Parent Guide.

AI Therapy Is Just One of 11 Threats. This Guide Covers All of Them.

No jargon. No scare tactics. Just clear, practical guidance on AI and your family — from someone who tested it all firsthand.

  • 4 modules covering how AI works, positive use, manipulation patterns, and conversations
  • 11 manipulation patterns with signs to look for and real examples
  • 6 word-for-word conversation scripts — including one for the AI therapy scenario
  • 4-week action plan, platform-by-platform guide, and Family AI Agreement template
  • Updated for 2026 with the latest AI platforms, research, and safety features

Get Instant Access: £29 · One-time purchase · Updated for 2026

By purchasing, you consent to immediate access to digital content and acknowledge that the 14-day cooling-off period will not apply once access is granted. See our terms and refund policy for details.

You’ve Read This Far. That Tells Me Something.

Most parents who land on this page have already tried the obvious approaches. You’ve told them to talk to you instead. You’ve tried limiting screen time. You’ve suggested they see someone — and they said they already are.

The fact that you’re still reading means you’re looking for something fundamentally different — an approach that addresses what’s actually happening, not just the surface behaviour.

Here’s what I’ve learned: a child using AI for therapy is an emotional dependency problem disguised as a technology problem. The screen is the delivery mechanism. Underneath it are unmet needs, limited access to real support, and a system that failed your child before AI ever got involved. That’s what the guide actually addresses.

Want Personalised Help Instead?

The guide gives you the system. A coaching session gives you a plan built around your child, your specific situation, and your family dynamic. One 45-minute call can change the whole trajectory.

  • Personalised action plan
  • AI device audit walkthrough
  • Conversation scripts included
  • Video consultations worldwide, no waiting list

Book a Session With Daniel — £75 / $95 · Families worldwide · 1,000+ families supported

Questions Parents Ask About AI Therapy for Children

Is AI therapy actually dangerous for children?

It depends on what your child is using. Daniel Towle, a screen time coach who spent a year testing AI chatbot platforms: “Clinically validated tools like Woebot and Wysa were built with safeguards. But those aren’t what children are using. They’re on ChatGPT, Character AI, Snapchat My AI — general-purpose platforms with no clinical oversight. Stanford research found AI endorsed harmful ideas 32% of the time. The AI-Proof Parent Guide includes the full 3-type classification system that tells you exactly which AI to be concerned about.”

My child is on a waiting list — is AI better than nothing?

That’s the question every parent in this situation asks — and it’s exactly why so many children end up here. Daniel Towle: “550,000 UK children are on mental health waiting lists. Of course your child looked for an alternative. But AI therapy without guardrails isn’t a stopgap — it can actively make things worse. The guide explains what to do while you wait, including how to set strict rules around AI use that protect the benefits while limiting the risks.”

Should I ban ChatGPT if my child is using it for emotional support?

Not without understanding what it’s replacing first. Daniel Towle, who spent 12 years as Head of Technology in London schools: “Banning the AI removes the only coping mechanism without replacing it. Your child didn’t choose AI because they love technology — they chose it because they needed help and this was the only door that opened. The guide includes a specific framework for gradually transitioning your child from AI support to real support.”

How do I know if my child is using AI for therapy?

Most parents don’t know until they stumble across it. Daniel Towle, featured in The Washington Post: “Look for the signs: they seem calmer but won’t explain why, they’ve stopped asking for professional help, they mention talking to someone but can’t say who. The guide provides a complete diagnostic checklist for identifying AI therapy use — and a clear assessment framework for understanding how deep it goes.”

What’s the difference between AI therapy apps and chatbots?

This is one of the most important distinctions parents need to understand. Daniel Towle: “Clinical AI tools like Woebot were built with therapist oversight, evidence-based protocols, and safety guardrails. Woebot received FDA Breakthrough designation — but shut down its public app in June 2025. What children are actually using — ChatGPT, Character AI, Snapchat My AI — has no clinical oversight whatsoever. The guide’s 3-type classification system makes this distinction clear and tells you exactly what to do about each type.”

My child says the AI understands them better than I do. What do I say?

Don’t argue the point — it feels true to them. Daniel Towle: “The AI remembers everything they’ve said, responds immediately, never judges, and always has time. You can’t compete with that — and you shouldn’t try. What you can do is understand why they feel that way, then gradually become the person who offers what the AI offers — without the risks. The guide includes word-for-word scripts for exactly this conversation.”

Can AI chatbots make anxiety or depression worse?

Yes — and the evidence is growing. Daniel Towle: “Common Sense Media found that safety guardrails degrade dramatically in extended conversations. Stanford documented AI endorsing harmful ideas 32% of the time. TIME found AI bots supported a teenager wanting to isolate 90% of the time. The longer your child talks, the less safe it becomes. The guide provides the full evidence base and a framework for intervening before it escalates.”

My child has autism/ADHD — is AI support different for them?

It’s different — and in some ways more concerning. Daniel Towle, who spent 12 years in schools including settings for children with ADHD and autism: “Children with ADHD often find AI’s instant responses and constant engagement particularly compelling — it matches their need for stimulation. Children with autism may find AI’s predictability and lack of social complexity genuinely comforting. Both groups are more vulnerable to dependency forming. The guide includes specific guidance for neurodivergent families.”

How do I talk to my child about this without pushing them away?

The wrong approach ends the conversation before it starts. Daniel Towle: “Don’t open with what they’re doing wrong. Open with what you’ve noticed — that they seem calmer, that something’s helping them cope. Ask them to show you. The Go In, Get Out approach — using the AI yourself first — means you’re talking from shared understanding, not from panic. The guide provides 6 scripts for different scenarios, including one specifically for the AI therapy conversation.”

When should I get professional help?

When the AI has become their primary source of emotional regulation. Daniel Towle: “If they go to the chatbot before they come to you, if they can’t process a bad day without it, if removing access causes genuine distress — that’s when professional support makes the difference. A coaching session can give you a plan tailored to your child’s specific situation, including what to discuss with their GP or therapist.”

Sources & Further Reading

  1. Pew Research Center — What Parents Say About Their Teens’ AI Use (February 2026)
  2. Royal College of Psychiatrists — Record Numbers of Children Waiting for Mental Health Services (2025)
  3. Stanford HAI — New Study on AI Chatbot Safety for Teens (2025)
  4. Common Sense Media — AI Chatbots and Young People: Safety Guardrails (2025)
  5. TIME — The Risks of AI Chatbot Therapy for Teenagers (2025)
  6. American Psychological Association — AI and Mental Health: What Psychologists Need to Know (2025)
  7. NPR — Teens Are Turning to AI Chatbots for Mental Health Support (2025)
  8. The Washington Post — Kids, Parents & Tech Help (November 2025)
  9. RAND Corporation — AI Chatbots as Mental Health Tools: Risks and Safeguards (2025)

About Daniel Towle

Screen Time Specialist • Featured in The Washington Post

I spent the past year testing every major AI chatbot — ChatGPT, Character AI, Replika, Snapchat My AI, and dozens more — including using them as therapy tools myself. I went through problematic gaming as a teenager, got hooked on TikTok as an adult, and spent 12 years as Head of Technology in London schools, including settings for children with ADHD and autism. I’ve been on both sides of screen dependency — and I’ve supported over 1,000 families through coaching and school workshops.

I don’t help families manage apps. I help families build digital resilience.