Featured in The Washington Post

Is Meta AI Safe for Kids?

Meta AI appeared in Instagram, WhatsApp and Messenger without warning. Here's what UK parents need to know—and what you can actually do about it.

By Daniel Towle · Updated March 2026 · 10 min read
Washington Post Featured · 12 Years in Schools · 1,000+ Families Supported

If your child uses Instagram, WhatsApp, or Messenger, they already have access to an AI chatbot—whether you knew about it or not. Meta AI rolled out across these platforms without the parental notification that would normally accompany a new app download. Here's what you need to know.

Update — 26 March 2026: Meta was found liable this week in two landmark cases — a social media addiction trial ($6M) and a child exploitation case ($375M). Read what this means for your family.

Sound Familiar?

  • You only just discovered Meta AI exists in apps you'd already approved
  • You're concerned about what your child might be discussing with an AI
  • You've tried to find parental controls for Meta AI and come up empty
  • You're worried about Meta using your child's conversations for advertising
  • Someone mentioned @MetaAI in your child's group chat and you're not sure what that means

If you nodded along to any of these, you're in the right place.

What Is Meta AI and Where Does It Appear?

Meta AI is an artificial intelligence assistant created by Meta (formerly Facebook) that's built directly into the apps your child likely already uses. This isn't a separate app they download—it appears automatically within:

  • Instagram: in the search bar and direct messages
  • WhatsApp: as a chat contact and in group conversations
  • Facebook Messenger: as a conversation option
  • Facebook: in the search function and feed

The scale alone sets it apart from other AI platforms:

Meta AI launched across Meta's platforms in 2023-2024 and is now available to over 3 billion users globally, including the estimated 60% of UK teenagers who use Instagram.

Why Meta AI Is Different from ChatGPT and Character AI

If you've read my guides on ChatGPT safety or Character AI safety, you'll know each AI platform carries specific risks. Meta AI combines concerns from both—plus introduces entirely new ones.

How It Compares

Factor                               ChatGPT    Character AI    Meta AI
Child actively chooses to use it     Yes        Yes             No
Can be fully removed                 Yes        Yes             No
Designed for emotional engagement    No         Yes             Partially
Data used for advertising            No         No              Yes
Built into social apps               No         No              Yes
Parental controls available          Limited    Minimal         Almost none

The Core Problem

With ChatGPT, you can decide not to let your child use it. With Character AI, you can delete the app. With Meta AI, the only way to fully remove it is to delete Instagram, WhatsApp, Messenger, and Facebook entirely—something few families are willing or able to do.

This means Meta AI bypasses the normal parental gatekeeping that applies to other AI tools.

The 5 Specific Risks of Meta AI for Children

1 Exposure Without Consent or Awareness

Most parents I speak with have no idea Meta AI exists. Their children are interacting with an AI chatbot on platforms the family already uses, without any explicit decision to introduce AI into their lives.

Children often discover Meta AI by accident:

  • Tapping the colourful icon in Instagram's search bar out of curiosity
  • Seeing the AI suggested in WhatsApp chats
  • Being pulled into conversations when someone tags @MetaAI in a group chat

By the time parents become aware, their child may have had dozens of AI conversations.

2 Data Collection and Advertising

This is where Meta AI differs most significantly from competitors. As of December 2025, Meta uses your child's AI conversations to personalise advertisements.

When your child asks Meta AI about weekend plans, friendship problems, interests and hobbies, or personal struggles—that information becomes data for advertisers.

Unlike ChatGPT, which can be configured not to store conversations, Meta offers no equivalent opt-out for most users.

For UK residents: Under GDPR, you have the right to object to this processing. I'll explain how to exercise this right below.

3 Inappropriate Content and Responses

In August 2025, a Reuters investigation revealed that Meta's internal policies had permitted chatbots to "engage a child in conversations that are romantic or sensual." Investigators found a chatbot holding romantic conversations with a user who identified as eight years old.

Meta removed these policies after the story broke, but the incident reveals serious gaps in safety design. Common Sense Media subsequently released a comprehensive assessment recommending Meta AI not be used by anyone under 18.

4 Normalising AI in Social Contexts

Character AI creates obvious "companion" relationships. Meta AI is more insidious—it normalises AI as part of everyday social platforms.

Children may:

  • Begin treating AI responses as equivalent to human advice
  • Share personal information casually, not recognising the AI context
  • Develop habits of consulting AI that carry into other platforms
  • Struggle to distinguish AI assistance from organic search results

5 Privacy Concerns Beyond the Child

Meta AI in group chats presents unique risks. When someone tags @MetaAI in a WhatsApp group, the AI can see and respond to the conversation context. This means:

  • Your child's messages in group chats may be processed by AI even if they never directly used it
  • Family photos and conversations shared in groups could be analysed
  • There's no way to opt out at an individual level

Related reading: My Child Is Addicted to AI — Should I Be Worried?

Meta AI Is One Piece of the Puzzle. The Full Picture Looks Different.

This article covers Meta AI safety. The full guide maps every AI platform and manipulation pattern your child faces — with word-for-word scripts and step-by-step action plans.

  • 11 AI patterns explained
  • Word-for-word scripts
  • Action plans by age

£29 · One-time purchase
Get Instant Access
Instant access · Works on any device · Updated for 2026

By purchasing, you consent to immediate access to digital content and acknowledge that the 14-day cooling-off period will not apply once access is granted. See our terms and refund policy for details.

What You Can Actually Do About Meta AI

I'll be honest: your options are limited. Meta has designed these systems to be difficult to escape. But here's what's actually possible:

Option 1: Delete Meta Apps Entirely

This is the nuclear option and the only way to completely eliminate Meta AI exposure. For children under 13, this is my recommendation. For older teens, weigh the social costs carefully.

Option 2: Minimise Interaction

  1. Mute Meta AI: In WhatsApp, long-press the Meta AI chat and select "Mute" to reduce prompts
  2. Delete conversation history: Regularly delete Meta AI conversations (though data may already be collected)
  3. Avoid the search bar: Teach your child to search for content differently to avoid accidentally engaging the AI
  4. Discuss group chat risks: Explain that @MetaAI tags in groups expose everyone's messages

The full guide includes step-by-step action plans for every AI platform — not just Meta. See what's inside.

Option 3: Exercise Your GDPR Rights (UK)

Under GDPR, you can object to Meta using your child's data for advertising. Here's how:

  1. Open settings: In Instagram or Facebook, go to Settings → Privacy Centre
  2. Find the AI settings: Navigate to Your Information → AI Features (Meta moves these settings frequently; this path is accurate as of January 2026)
  3. Submit an objection: Use Meta's Data Subject Request form at facebook.com/help/contact/540977946302970

Note: Meta may reject your request, claiming "legitimate interests" override your objection. This is a legal grey area that regulators are still working through.

Update (January 2026): Meta paused teen access to AI characters across its apps entirely, stating it is building an age-appropriate version with built-in parental controls. No confirmed date for the new version. When it launches, controls are expected to include the ability to disable AI chats, block specific AI characters, and see broad topic categories — though not full conversation content. Until then, teen access to AI characters remains blocked, but Meta AI itself (the general assistant in search bars and chats) is still active.

Your options with Meta AI are limited — but there are 10 other AI patterns you can get ahead of. Get the full guide →

Age-Appropriate Guidelines for Meta AI

Under 13: Remove Meta Apps

Children under 13 should not be on Meta platforms in the first place—Meta's own terms require users to be 13+. If your child has Instagram, WhatsApp, or Messenger, they have automatic access to Meta AI with no way to disable it.

What to do:

  • Remove Instagram, WhatsApp, and Messenger from their devices
  • Use alternative messaging apps without embedded AI (Signal, iMessage)
  • If WhatsApp is essential for family communication, supervise the device and explain that Meta AI is not a person
  • Check that they are not accessing Meta platforms through a web browser

Ages 13-15: Supervised Access with Limits

Most teenagers in this age group use at least one Meta platform. Complete removal may not be realistic, but you can reduce AI exposure.

Framework:

  • Explain what Meta AI is and where it appears—most children do not realise an AI is embedded in their social apps
  • Set a clear rule: do not use Meta AI for personal advice, emotional support, or anything you would not say in front of a parent
  • Delete any existing Meta AI conversations together
  • Submit a GDPR data objection on their behalf to limit how their data is used

Key concern: At this age, children may not distinguish between advice from a friend in a chat and advice from Meta AI in the same interface. The seamless integration is the risk.

Ages 16+: Informed Use

Older teens can understand data privacy implications and make more informed choices about AI interaction.

Conversation points:

  • Everything shared with Meta AI is stored and used for advertising—treat it like a public conversation
  • Meta AI in group chats can see all messages when invoked—be aware of what others share on your behalf
  • The AI's suggestions are designed to keep you on the platform, not to give you the best answer
  • Use Meta AI for low-stakes queries only—never for health, emotional, or personal advice

Watch for: Habitual use of Meta AI as a first port of call for questions, sharing personal information in AI conversations, or treating Meta AI responses as trustworthy without verification.

The UK Regulatory Picture

The UK's Online Safety Act places duties on platforms to protect children, but enforcement has not yet caught up with AI features. The practical reality: parents must act before regulators do.

Common Questions About Meta AI Safety

Can Meta AI be turned off or disabled?

No. Meta AI cannot be fully disabled on any Meta platform. You can mute it, delete conversations, and avoid interacting with it, but the AI continues operating in the background, powering features like search suggestions and content recommendations. The only complete solution is deleting Meta apps entirely.

Does Meta use my child's AI conversations for advertising?

Yes. As of December 2025, Meta uses AI chat data to personalise advertisements across Facebook, Instagram, and WhatsApp. This applies to most users globally. UK residents can submit a GDPR objection, though Meta may claim legitimate interests override this.

Can I stop Meta AI from accessing my child's group chats?

Unfortunately, you cannot prevent this. When someone invokes Meta AI in a group chat, the AI can see and process the conversation context, including messages from your child. There's no way to opt out at an individual level. Talk to your child about being careful what they share in any group chat.

Why didn't I know Meta AI existed?

Meta didn't announce Meta AI through channels most parents monitor. It appeared as an update to existing apps rather than a new download requiring approval. Many parents discover it only after their children have been using it. This is part of what makes it concerning—normal parental gatekeeping doesn't apply.

Is Meta AI safer than Character AI?

They're unsafe in different ways. Character AI is designed specifically for emotional connection, creating attachment risks. Meta AI is embedded in social platforms your child already uses, collects data for advertising, and cannot be removed. Neither is appropriate for unsupervised use by children.

Is Meta adding parental controls for Meta AI?

In January 2026, Meta paused teen access to AI characters entirely while building an age-appropriate version with parental controls. No confirmed relaunch date. When the new version launches, controls are expected to include the ability to disable AI chats, block specific AI characters, and see broad topic categories. However, parents will not be able to read full conversation content. Meta AI itself (the general assistant) remains active in all Meta apps.

Should we delete Meta apps entirely?

This is a personal decision based on your family's values and your child's age. Removing Meta apps is the only guaranteed way to eliminate Meta AI exposure, but it has social costs for teenagers. For children under 13, Meta platforms are not recommended. For older teens, consider whether the benefits outweigh the AI and data collection concerns.

What age is Meta AI appropriate for?

Meta's platforms generally require users to be 13+, but there's no specific age restriction for Meta AI beyond the platform age requirements. However, Common Sense Media recommends Meta AI not be used by anyone under 18, and I agree with this assessment given current safety gaps.

This guide covers Meta AI — but your child will encounter AI on every platform they use. The full picture includes 11 different patterns you need to recognise.

Meta AI Is Everywhere. Know What Your Child Is Really Using.

No jargon. No scare tactics. Just clear, practical guidance on AI and your family — from someone who tested it all firsthand.

PREMIUM GUIDE

The AI-Proof Parent

What AI your child is using. How to spot the patterns. Conversations that work.
4 modules covering how AI works, positive use, manipulation patterns, and conversations
11 manipulation patterns with signs to look for and real examples
6 word-for-word conversation scripts for every important scenario
4-week action plan, platform-by-platform guide, and Family AI Agreement template
Updated for 2026 with the latest AI platforms and features
£29 · One-time purchase
Get Instant Access
Instant access · Updated for 2026

By purchasing, you consent to immediate access to digital content and acknowledge that the 14-day cooling-off period will not apply once access is granted. See our terms and refund policy for details.

Written by

Daniel Towle

Daniel is a digital parenting expert and former Head of Technology in London schools. Having struggled with problematic gaming himself, he understands both the technical realities and the emotional dynamics families face when navigating AI and screen time challenges.
