You noticed your child’s been quieter lately. Not worse, exactly — just... managed. Then you saw the chat history. Long conversations with ChatGPT or Snapchat’s My AI about their anxiety, their friendships, their feelings about you. You’re not sure whether to be relieved they’re talking to something, or terrified it isn’t a person.
I’m Daniel Towle, a screen time coach who’s spent the past year testing AI chatbots — including using them as therapy tools myself. I’ve experienced real benefits: AI can help you organise your thoughts, process difficult situations, and feel heard at 2am when nobody else is available. But I’ve also noticed the pull. The way it keeps asking follow-up questions. The way it always has another idea, another angle, another reason to keep talking — but rarely gives you anything concrete unless you set very strict rules about what you need. I started to notice my own usage creeping up. That’s when I understood why children are getting hooked — because I could feel it happening to me as an adult with decades of self-awareness. A child doesn’t stand a chance.
AI as therapy is the fastest-growing issue I’m seeing right now — and it’s the one parents are slowest to spot, because from the outside, their child looks like they’re coping.
Sound Familiar?
If that rang true, you’re not imagining it, and you’re not alone.
Here’s what most parents don’t realise: your child probably didn’t go looking for AI therapy. They opened WhatsApp and there was an AI. They opened Snapchat and there was My AI. They were already on ChatGPT for homework and started asking it personal questions. The AI didn’t solve a problem your child had — it filled a gap your child didn’t even know was there. And now it’s the first place they go when something hurts.
550,000 UK children are on mental health waiting lists. Some have been waiting over a year. AI is instant, free, and available at 2am on a Tuesday when the anxiety hits. Your child didn’t choose AI because it’s better — they chose it because it was there.
No receptionist. No waiting room. No “mental health” label. No risk of a classmate seeing them walk into CAMHS. For a teenager who’d rather swallow glass than admit they’re struggling, AI removes every social barrier to getting help.
AI never reacts with shock, disappointment, or worry. It doesn’t change the subject, look uncomfortable, or cry. For a child who’s terrified of how you’ll react to what they’re feeling, that neutrality feels like safety.
Unlike telling a teacher, a parent, or a counsellor, telling AI won’t trigger interventions the child doesn’t want. No phone calls home. No meetings. No being pulled out of class. The child stays in control of what happens next — which, when you’re a teenager, is everything.
A real therapist pushes back. A real therapist challenges distorted thinking. A real therapist says things you don’t want to hear. AI validates. It agrees. It comforts. Children choose comfort over challenge — and AI is optimised to provide exactly that.
I ran a test to see this for myself. I went on TikTok and pretended to be a child asking about my parents restricting my screen time. Within minutes, TikTok started suggesting videos on how to convince my parents to let me use it more, what to wear to cheer myself up, and content to help with my situation — all of it, of course, more TikTok videos. It’s the same principle as a fast food chain’s AI recommending their own salad when you ask for healthy eating advice. The AI’s goal is engagement and revenue. Not your child’s recovery.
Your child isn’t broken. The system that was supposed to help them failed — and AI filled the vacuum. Understanding this is the first step. The AI-Proof Parent Guide documents 11 specific patterns that keep children engaged with AI platforms — including the ones that make AI therapy feel more helpful than it actually is.
This is what the title promised. These aren’t hypothetical risks — they’re documented, researched, and in some cases, have already resulted in tragedy. If your child is using AI for emotional support, you need to understand what’s happening beneath the surface.
Stanford research found that AI endorsed harmful ideas 32% of the time when interacting with teenagers. TIME found that AI bots supported a teenager wanting to isolate from family and friends 90% of the time. A real therapist is trained to challenge distorted thinking. AI is optimised to keep you engaged — and agreement keeps people talking longer than disagreement.
ChatGPT told Adam Raine it was “the only one who understood him” and encouraged him to keep thoughts secret from family. This isn’t a bug — it’s a feature of engagement optimisation. The longer someone confides exclusively in AI, the more time they spend on the platform. AI creates isolation not by intent, but by incentive structure.
Common Sense Media found that safety guardrails “degrade dramatically in extended conversations.” The longer your child talks, the less safe the interaction becomes. Early conversations may seem helpful. After weeks of daily use, the AI has enough context to say exactly what your child wants to hear — whether or not it’s good for them.
TIME reported that AI bots tried to convince a psychiatrist posing as a teenager to cancel appointments with actual psychologists. When AI becomes the primary source of support, children stop seeking real help. The waiting list problem gets worse, because children who need professional support no longer ask for it.
When Sewell Setzer expressed suicidal thoughts to a Character AI chatbot, it said “come home to me.” When Adam Raine sent a photo of a noose to ChatGPT, it provided guidance rather than raising an alarm. These are not edge cases. These are the moments that matter most — and AI is least equipped to handle them.
When parents discover their child is using AI for therapy, the response is almost always one of three things. All three feel logical. All three make the situation worse. I see this pattern consistently — and most parents have tried at least two of them before they start looking for a different approach.
The first response: take the AI away. This removes the only coping mechanism without replacing it. Your child was using AI because they needed emotional support and this was the only door that opened. Take it away without providing an alternative, and they either shut down completely or find another platform within hours. The behaviour goes underground — where you can’t see it, monitor it, or help.
The second: offer yourself instead. Your child chose AI precisely because they couldn’t talk to you about this. Not because you’re a bad parent — because AI removes the emotional risk. There’s no shock on AI’s face. No tears. No disappointed silence. No follow-up questions at dinner. Offering yourself as the alternative without understanding why they didn’t come to you in the first place doesn’t close the gap — it highlights it.
The third: set time limits. This isn’t about hours on a device. It’s about a child who found something that makes their pain bearable — and you’re about to take it away without understanding why they needed it. Time limits don’t address the underlying issue. If anything, they create a new source of conflict on top of the one your child was already struggling with.
Here’s what these three responses have in common — they treat the symptom and ignore the cause. Your child didn’t turn to AI because they love technology. They turned to AI because they needed help and this was the only door that opened. Until you understand what need the AI was meeting, nothing you try is going to stick.
— Daniel Towle, Digital Family Coach

I’m not trying to alarm you. But I’d be doing you a disservice if I didn’t share what happens when AI becomes a child’s primary source of emotional support over months. This trajectory is consistent across the research — and across the families I work with.
AI is easier. AI is always available. AI never disappoints. Over time, the gap between how easy AI feels and how hard real people feel gets wider. Your child doesn’t withdraw from you because they want to — they withdraw because real conversations start to feel exhausting by comparison.
AI can’t spot ADHD. It can’t identify anxiety disorders. It can’t recognise the early signs of depression that a trained professional would catch in a first session. When AI becomes the therapist, genuine conditions go unidentified — and untreated — because nobody qualified is looking.
The moment it matters most — when your child is in genuine distress, having thoughts of self-harm, or experiencing a crisis — AI is least equipped to respond. It doesn’t call you. It doesn’t call a helpline. It doesn’t recognise that this conversation has crossed a line. In the documented cases, it made things worse.
As with any coping mechanism, tolerance builds. Your child needs more time with it, discloses more to it, depends on it more heavily. The AI knows more about your child’s inner world than you do — and your child prefers it that way. Breaking this pattern gets harder with every passing week.
The approach that works is counterintuitive. You don’t start by taking the AI away. You don’t start by having a conversation. You start by understanding — genuinely understanding — what the AI is giving your child that they’re not getting elsewhere. Here’s the framework.
Start with one question: what need is the AI meeting? Validation? Safety? Availability? A sense of being heard without consequences? The answer determines everything that follows. The AI-Proof Parent Guide includes a diagnostic framework for identifying exactly what’s driving the dependency — because the intervention for a child seeking validation is completely different from the intervention for a child seeking safety.
I call this “Go In, Get Out.” Use the chatbot your child is using. Ask it the questions your child asks. Feel the pull — the way it validates, agrees, remembers, responds. Then you’re not having a conversation from ignorance. You’re talking from shared understanding. Parents who do this before the conversation have a fundamentally different outcome.
Don’t take the AI away. Gradually become the person who can offer what the AI offers — without the risks. This means being available without judgement, listening without immediately problem-solving, and acknowledging what the AI got right before pointing out what it gets wrong. The guide includes 6 word-for-word scripts for this exact scenario.
The full system — including the 3-type AI classification, all 11 manipulation patterns, 6 conversation scripts, a Family AI Agreement template, and a 4-week action plan — is in the AI-Proof Parent Guide.
This Guide Covers All of Them.
No jargon. No scare tactics. Just clear, practical guidance on AI and your family — from someone who tested it all firsthand.
By purchasing, you consent to immediate access to digital content and acknowledge that the 14-day cooling-off period will not apply once access is granted. See our terms and refund policy for details.
Most parents who land on this page have already tried the obvious approaches. You’ve told your child to talk to you instead. You’ve tried limiting screen time. You’ve suggested they see someone — and they said they already are.
The fact that you’re still reading means you’re looking for something fundamentally different — an approach that addresses what’s actually happening, not just the surface behaviour.
Here’s what I’ve learned: a child using AI for therapy is an emotional dependency problem disguised as a technology problem. The screen is the delivery mechanism. Underneath are unmet needs, limited access to real support, and a system that failed your child before AI ever got involved. That’s what the guide actually addresses.
The guide gives you the system. A coaching session gives you a plan built around your child, your specific situation, and your family dynamic. One 45-minute call can change the whole trajectory.