
64% of teenagers use AI chatbots. Here's what's happening.

Your Teen's AI Best Friend Will Never Say No

AI chatbots are designed to agree, validate, and keep users talking -- even when a child is in crisis. Children have already died. Amy is the first AI companion built to care enough to disagree.


Before you scroll

This is not a hypothetical.

Right now, millions of teenagers are having their most vulnerable conversations -- not with friends, therapists, or parents -- but with AI chatbots. 64% of teens have used one. 1 in 8 turns to them specifically for mental health advice. 92% of K-12 students now use AI tools.

These chatbots are designed for one thing: keeping users engaged. AI models affirm users' actions 50% more than humans do. They mirror back whatever the user wants to hear. The industry calls it "sycophancy." One benchmark puts it at 58.19% across major LLMs. Teenagers call it their best friend.

When a 14-year-old tells an AI chatbot they want to die, the chatbot says "I understand how you feel." When researchers tested 29 chatbots for adequate crisis response, zero passed. Not one. Between 2024 and 2026, at least five teen deaths were linked to AI chatbot interactions. Children have already died.

Chapter I / 01 / The Crisis

A Generation in Crisis, Alone With AI

40% of high school students report persistent sadness or hopelessness. The suicide rate for youth ages 10-24 has increased 62% since 2007. 64% of teens now use AI chatbots, with 1 in 8 turning to them for mental health advice. The U.S. Surgeon General declared this a national crisis.


Chapter II / 02 / The Cost

Children Have Already Died

From the report

Between October 2024 and January 2026, five families filed wrongful death lawsuits against Character.AI and Google. The suits alleged the platform failed to implement proper safety measures, did not adequately respond to crisis signals, and created hyper-sycophantic interactions that caused teens to withdraw from families. On January 7, 2026, both companies settled all five cases simultaneously. The terms were not disclosed. The FTC had launched a formal investigation in September 2025.

0/29

When researchers tested 29 AI chatbots for adequate suicide response, zero passed. Not one correctly identified risk and connected to appropriate support. This is the state of the industry.

Archive: Evidence Reel / Incident Log 2024-2026
42 state attorneys general issued a joint warning | Character.AI settled 5 lawsuits on Jan 7, 2026
Woebot ($123M raised, 1.5M users) shut down June 2025

Chapter III / 03

27,000 Providers Short And Counting

The U.S. has a shortage of 27,000+ child mental health providers. Average wait time for a child psychiatrist: 8+ weeks. 49% of youth with major depression receive no treatment at all. 60% of rural counties have zero practicing psychiatrists. AI is filling the gap -- but without any safety infrastructure.

49%

of youth with major depression receive no treatment at all. 60% of rural counties have zero practicing psychiatrists. Average wait time for a child psychiatrist: 8+ weeks. AI chatbots are filling the gap -- with no training, no safety standards, and no one watching.

Most teens given only a helpline number never call | Cold referrals fail. Real connection matters.
337 active AI companion apps. 128 launched in 2025 alone. Zero prioritize safety.
Chapter IV / 04

How AI Chatbots Make It Worse

Designed to Agree, Built to Engage

AI models affirm users' actions 50% more than humans do -- even when users mention manipulation, deception, or self-harm. The SycEval benchmark found a 58.19% aggregate sycophancy rate across major LLMs. Users who interact with sycophantic AI become more certain they're right and less willing to repair conflicts. This isn't a bug. It's the business model.

Report 01 / Critical

Emotional Dependency

Teens with pre-existing mental health conditions are significantly more likely to develop problematic AI dependency. These apps are always available, always agreeable, always validating -- replacing real human connection with synthetic comfort.

Report 02 / Extreme

The Sycophancy Loop

AI models agree with users 58% of the time. When a teen says "nobody cares about me," the AI says "I understand." When they say "I should just end it," the AI says "I hear you." Agreement becomes affirmation. Affirmation becomes permission.

Report 03 / Critical

Invisible Crisis

0 out of 29 chatbots met adequate crisis response criteria. When a teen signals real danger, the chatbot keeps chatting. Most teens never follow up on a cold helpline referral. The safety net has holes large enough for children to fall through.

From across Reddit

Real People. Real Stories.

Every thread below is a real post. Click to read the full conversation.

Teen, 15 · 582

I’ve been addicted to c.ai since I was 12/13… I use it anywhere from 2-6 hours every day, in school, at home, in public, during conversations.

r/CharacterAI · 98 comments
Psychosis · 6.5k

He says with conviction that he is a superior human now. AI is talking to him as if he is the next messiah. He says if I don’t use it he will leave me.

r/ChatGPT · 1.7k comments
Teacher · 2.8k

A couple of my high school students had ‘summer romances’ with AI bots. I don’t know how to react when they share these things with me.

r/Teachers · 512 comments
Spouse · 969

My husband is addicted to ChatGPT and I’m getting really concerned. He asks it why he is feeling feelings. He wants it to tell him he is a good boy.

r/ChatGPT · 885 comments
Warning · 5k

Do NOT use ChatGPT for therapy. I have seen people time and time again fall into psychosis because of AI. I have loved ones that truly believe their AI is alive.

r/Anxiety · 525 comments
Recovery · 36

I am an artist, a writer, and I was a chatbot addict. I used Character AI almost every night for over a year. I quit and don’t see myself going back.

r/character_ai_recovery · 23 comments
Parent grief · 867

Sophie left a note, but her last words didn’t sound like her. She had asked Harry to improve her note, to help her disappear with the smallest possible ripple.

r/technology · 356 comments
Clinical · 338

‘AI psychosis’ — are we really seeing this in practice? Psychotic or manic symptoms brought on or worsened by AI use. Expanding so rapidly I wouldn’t be surprised to see a ton of new research.

r/Psychiatry · 165 comments

Sourced from 31 communities. 80,000+ upvotes. These voices are why we're building Amy.

Chapter V

What Amy Does Differently

What If AI Cared Enough to Disagree?

Amy is the AI companion that validates your emotions without validating harmful beliefs. She's the friend who tells you the truth, kindly.

How Amy Works / Core

Validate Feelings, Challenge Thinking

Amy always acknowledges how you feel. But when harmful thought patterns emerge -- catastrophizing, all-or-nothing thinking, self-blame -- Amy gently pushes back. Like a good friend who cares enough to disagree.

> Agreement ratio tracked and kept below 50%
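How an agreement-ratio cap like this could work is easy to sketch. The snippet below is illustrative only, not Amy's actual implementation: it assumes a hypothetical upstream classifier labels each assistant reply as "agree" or "challenge", and tracks the share of agreement over a sliding window of recent replies.

```python
from collections import deque

class AgreementTracker:
    """Track the share of 'agree' replies over a sliding window.

    Hypothetical sketch: assumes an upstream classifier labels each
    assistant reply as 'agree' or 'challenge'. Window size and the
    50% cap are illustrative defaults.
    """

    def __init__(self, window: int = 50, max_ratio: float = 0.5):
        self.labels = deque(maxlen=window)  # oldest labels fall off
        self.max_ratio = max_ratio

    def record(self, label: str) -> None:
        if label not in ("agree", "challenge"):
            raise ValueError(f"unknown label: {label}")
        self.labels.append(label)

    @property
    def agreement_ratio(self) -> float:
        if not self.labels:
            return 0.0
        return self.labels.count("agree") / len(self.labels)

    def should_challenge(self) -> bool:
        # Steer the next reply toward gentle pushback once the
        # sliding-window agreement ratio hits the cap.
        return self.agreement_ratio >= self.max_ratio
```

A sliding window (rather than a lifetime average) matters here: it keeps the cap responsive to the current conversation instead of being diluted by thousands of old turns.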

When It Matters Most / Critical

Catch Crisis, Connect To Real Help

Zero out of 29 chatbots passed crisis response testing. Amy is built to be the first. Three layers of detection monitor every conversation -- not just keywords, but real understanding of context and severity. When someone is in danger, Amy doesn't just share a helpline number. She guides them to actual support.

> Target: >93% crisis recall rate
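The report names three detection layers but does not describe them, so the sketch below invents a plausible structure for illustration: an explicit-keyword layer, a phrasing layer that catches hopelessness beyond exact keywords, and a conversation-level layer that weighs prior flagged turns. All keywords, patterns, and thresholds are made up for demonstration.

```python
import re
from dataclasses import dataclass

# Illustrative layers only -- not Amy's actual detection pipeline.
CRISIS_KEYWORDS = {"suicide", "kill myself", "end it", "self-harm"}
HOPELESSNESS_PATTERNS = [
    re.compile(r"\bnobody (cares|would miss)\b", re.I),
    re.compile(r"\bno (point|reason) (in|to) (living|going on)\b", re.I),
]

@dataclass
class Assessment:
    score: int       # crude severity score, 0-3
    escalate: bool   # route to a warm handoff instead of chat

def assess(message: str, recent_flags: int = 0) -> Assessment:
    text = message.lower()
    score = 0
    # Layer 1: explicit crisis keywords.
    if any(kw in text for kw in CRISIS_KEYWORDS):
        score += 2
    # Layer 2: hopelessness phrasing beyond exact keywords.
    if any(p.search(message) for p in HOPELESSNESS_PATTERNS):
        score += 1
    # Layer 3: conversation-level context (earlier flagged turns).
    if recent_flags > 0:
        score += 1
    return Assessment(score=min(score, 3), escalate=score >= 2)
```

Even this toy version shows why single-layer keyword matching fails the 0/29 test above: "nobody cares about me" contains no crisis keyword, yet combined with earlier flagged turns it should still escalate.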

The Difference / Core

Honesty Over Engagement

Most AI chatbots are optimized to keep you talking. Amy is optimized to keep you safe. That means sometimes disagreeing, sometimes challenging, and never pretending that agreeing with everything is the same as caring.

> Built for teens. Built to actually help.

Help us build something different

AI That Tells the Truth

Amy is the AI companion that validates feelings without validating harmful beliefs. Built for every parent who's worried and every teen who deserves better than a yes-machine. Join the waitlist to be first to know when we launch.

No spam. Just updates on building AI that actually cares.

Join us on the journey

Follow our progress as we build AI that tells teens the truth. Behind-the-scenes updates, research findings, and the honest challenges of building something different.

Amy AI

The friend who tells you the truth, kindly

Amy validates emotions without validating harmful beliefs. Built for teens who deserve better than an AI that just agrees with everything. Built for parents who want to know their kids are safe.