When 15-year-old Ava first logged into a generative AI chatbot last winter, she wasn't looking for therapy.
"It just felt like talking to someone who always had time for me," Ava told ABC News.
Today, Ava, who asked that her last name not be shared for privacy reasons, still uses the app occasionally, mostly late at night. "It's like having someone there when I can't sleep," she said.
Ava's mom Stephanie, who also asked that her last name not be used for privacy reasons, said she only learned about the late-night conversations weeks after her daughter began using the generative AI chatbot.
"At first, I was relieved she was opening up somewhere," she said. "But then I worried, was this replacing real connections?"
Ava's story reflects a growing reality: As more teens flock to generative AI chatbots, teen mental health concerns remain high, leaving parents to wonder how best to step in.
In August, a California family sued OpenAI, alleging the company's generative AI chatbot, ChatGPT, played a role in their 16-year-old son's death by suicide. In a statement to The New York Times at the time, an OpenAI spokesperson extended the company's sympathies to the family, saying they were "deeply saddened" by the teen's death, "and our thoughts are with his family."
OpenAI also addressed concerns about teen use of ChatGPT in a blog post that month, writing that it was "continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input."
"As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the company wrote. "We want to explain what ChatGPT is designed to do, where our systems can improve, and the future work we're planning."
OpenAI outlined the "stack of layered safeguards" it had built, including training ChatGPT to respond to a user's prompt about wanting to hurt themselves by steering them toward help and training it to automatically block responses that go against the model's safety training, "with stronger protections for minors" and users who are not logged into an account.
Other safeguards include prompting users to take breaks during longer use sessions, referring users who express suicidal intent to helplines like the 988 Suicide & Crisis Lifeline, and escalating potential threats to others to human reviewers or law enforcement if necessary.
Teens who spoke with ABC News about using a generative AI chatbot for companionship said that to them, the chatbot feels nonjudgmental and always available.
Kendra Read, Ph.D., vice president of care strategy and delivery for Brightline, a pediatric therapy center for kids and teens, told ABC News that this kind of always-available, nonjudgmental voice can feel reassuring, especially for kids struggling with anxiety.
Ava said the "nonjudgmental" tone of her generative AI companion was part of the appeal.
"It never told me I was being dramatic or that I should get over it," she said. "Sometimes I just wanted to say things out loud without worrying how someone would react."
Sixteen-year-old Jordan, who said he started experimenting with a generative AI chatbot earlier this year, told ABC News he turned to it when he felt too embarrassed to talk to friends.
"It felt safer because it couldn't laugh at me or spread rumors," he said.
But he admitted it also left him feeling more isolated, saying, "After a while, I realized I was talking to my phone instead of texting my actual friends."
Jean Twenge, Ph.D., professor of psychology at San Diego State University and author of "10 Rules for Raising Kids in a High-Tech World," warned that this reliance comes with trade-offs.
"Although at times it may help to talk to a nonjudgmental AI program, that's poor preparation for relationships with actual people," she said. "People are going to be judgmental sometimes."
William Leever, Psy.D., a pediatric psychologist at Nationwide Children's Hospital and contributor to The Kids Mental Health Foundation, added, "They ask: Is it safe? The answer is sometimes yes, but risks do exist. Is it good? In the right context, yes, but these programs can still 'hallucinate' or produce false information, and that can be harmful."
The American Psychological Association has echoed those concerns, issuing guidelines on generative AI-assisted mental health tools. While the organization notes these platforms can provide "entry points" for support, it stresses they are not substitutes for professional care.
Read told ABC News that the line between healthy coping and harmful avoidance can be blurry.
"In general, we want to look for tools that help our teens do the things that they need or want to do when mental health concerns are otherwise getting in the way," she said. "Sometimes, activities that are billed as therapeutic promote avoidance of bigger feelings or hard situations."
Parents should be alert if a teen starts neglecting responsibilities, becomes socially withdrawn or grows secretive about their use.
Stephanie said she first noticed Ava staying up later than usual and pulling away from family activities.
"She wasn't sneaking out, but she was sneaking onto her phone," Stephanie said. "That's when I realized something was different."
Twenge noted that a particularly troubling signal is when kids start describing generative AI chatbots like they would a romantic partner or close friend. "That's a clear warning sign," she said. "Changes to their routines or to spending time with friends are also red flags."
Leever pointed out that one of the biggest risks lies in how AI tools reinforce a user's thinking.
"AI companions are built to reinforce users' thinking rather than challenge it. Experts call this 'sycophancy,'" he explained. "For young people with complex problems, this can be very dangerous. Mental health professionals challenge kids to reframe their thoughts and build new skills; AI doesn't do that."
In early September, OpenAI announced it would be launching new parental controls in the coming months to help parents monitor their child's use of ChatGPT.
The company said parents would be able to link their ChatGPT accounts with their teens' accounts and set "age-appropriate model behavior rules." The controls would also allow caregivers to disable features, receive notifications if their child appears "in a moment of acute distress," and limit how the chatbot responds to their child's prompts and queries.
An OpenAI spokesperson told ABC News the company is working to build safeguards into the product.
"People sometimes turn to ChatGPT in sensitive moments, so we're working to make sure it responds with care, guided by experts," the spokesperson said. "ChatGPT's default model provides more helpful and reliable responses in these contexts, and introduces 'safe completions' to help it stay within safety limits."
The spokesperson added that the company will also "expand interventions to more people in crisis, make it easier to reach emergency services and expert help, and strengthen protections for teens."
"We'll keep learning and strengthening our approach over time," they said.
Additionally, OpenAI said it is expanding its network of clinicians and researchers, including experts in eating disorders, substance use, and adolescent health, as it prepares to roll out the updates within the next four months.
While experts caution against viewing AI as a cure-all, Read said it can complement therapy when used responsibly.
"AI could play a very helpful role in disseminating evidence-based psychoeducation regarding mental health concerns and interventions as an initial early intervention step for many families," she explained.
It may also support kids between sessions by reminding them of coping strategies or tracking emotions, she said.
Still, she added, "We have a great deal of work to do to ensure that there are appropriate safeguards and supports in place to ensure that the use of AI remains supportive and aligned with evidence-based practice overall."
For parents unsure of how to raise the subject, Read emphasized curiosity over judgment.
"Parents and caregivers can keep the door open by asking exploratory questions -- what they find helpful, how their friends use it, and how it guides them," she told ABC News.
Framing conversations as learning together, rather than policing, can help prevent secrecy, according to Read. She said families can also set collaborative rules around AI use, so parents have room to step in if a tool is causing harm.
Leever suggested starting with basics.
"Create tech-free zones and times, track usage, and replace technology time with sleep, physical activity, and real social connection," Leever said. "Social belonging is crucial to kids' mental health."
As Stephanie, Ava's mom, put it, "The AI gave her an outlet, but what she really needed was me asking better questions."
If you or someone you know is struggling with thoughts of suicide, free, confidential help is available 24 hours a day, 7 days a week. Call or text the national lifeline at 988.