November 7, 2025

What parents need to know about Sora, the generative AI video app blurring the line between real and fake

A new generative artificial intelligence app called Sora is quickly becoming one of the most talked-about and debated tech launches of the year.

Created by OpenAI, the company behind ChatGPT, Sora allows users to turn written prompts into realistic AI-generated videos in seconds.

But experts warn that this innovation comes with potentially significant child-safety risks, from misinformation to the misuse of kids' likenesses.

In an October safety report, Common Sense Media, a nonprofit that monitors children's digital well-being, gave Sora an "Unacceptable Risk" rating for use by kids and teens, citing its "relative lack of safety features" and the potential for misuse of AI-generated video.

"The biggest difference is that Sora is simply better than its competitors," said Michael Dobuski, a technology reporter for ABC News Audio. "Its videos look passably real, not uncanny or obviously computer-generated. That's a double-edged sword, because when something looks real, it's easier to spread misinformation or create harmful videos."

Titania Jordan, Chief Parent Officer at Bark Technologies, a parental-control and online-safety app, told ABC News that Sora's capabilities go far beyond a photo filter or animation tool.

"It can create mind-blowingly realistic videos that can fool even the most tech-savvy among us," she explained. "The most important thing for parents to understand is that it can create scenes that look 100% real, but are completely fake. It blurs the line between reality and fiction in a way we've never seen before."

How Sora works and why there are concerns

Sora is what's known as a text-to-video platform. Users type a prompt -- for example, "a man riding a bike through the park at sunset" -- and the app generates a lifelike AI video that appears to have been filmed in the real world.

The app is currently available to the public through OpenAI's platform and iOS app, with initial access included for ChatGPT Plus and Pro subscribers. There's also a free introductory tier, which allows users to create a limited number of short, lower-resolution AI-generated videos each month.

OpenAI's terms of use require users to be at least 13 years old, with anyone under 18 needing parental permission. However, experts say that the app's teen protections are minimal and that videos created on Sora can easily be shared on other platforms like TikTok or YouTube, where kids of any age can view them.

Users can even add themselves or their friends into these clips using a feature called Cameos, which lets people upload their face or voice and have it animated into new scenes.

OpenAI says Cameos are "consent-based," meaning users decide who can access their likeness and can revoke that permission at any time.

The company also says it blocks depictions of public figures and applies "extra safety guardrails" to any video featuring a Cameo. Users can see every video that includes their image and delete or report it directly from the app.

Still, Jordan cautions that these protections may not be enough. "Once your likeness is out there, you lose control over how it's used," she said. "Someone could take your child's face or voice to create a fake video about them. That can lead to bullying, humiliation, or worse. And when kids see so many hyper-realistic videos online, it becomes harder for them to tell what's true, which can really affect self-esteem and trust."

Dobuski said that while OpenAI has included several safeguards, enforcement has been spotty. "When Sora first launched, people were making videos of copyrighted characters like SpongeBob and Pikachu, even fake clips of OpenAI's CEO doing illegal things," he said. "So, it appears a lot of those restrictions are easy to get around."

OpenAI's safety commitments and its limits

When reached for comment about Sora's safety features, OpenAI pointed ABC News to its website, where it emphasized that Sora 2 and the Sora app were "built with safety from the start."

Every AI-generated Sora video includes both visible and invisible provenance signals: a visible watermark and C2PA metadata, an industry-standard digital signature that identifies AI-generated content, the company said.

The company also maintains internal reverse-image and audio search tools designed to trace videos back to Sora with high accuracy.

OpenAI says it has implemented teen-specific safeguards, including limits on mature content, restrictions on messaging between adults and teens, and feed filters designed to make content appropriate for younger users. Parents can also manage whether teens can send or receive direct messages and limit continuous scrolling.

While those measures sound promising, Jordan says parents should be cautious.

"Even with filters and watermarks, Sora can generate disturbing or inappropriate content," she said. "And because kids can unlink their accounts from their parents' supervision at any time, those safeguards aren't foolproof."

Dobuski agrees that the larger issue isn't just content moderation; it's speed. "Given how easy these videos are to make," he said, "they can spread online before any platform is able to meaningfully crack down."

For now, there's no federal law governing AI-generated video content. Some states, including California, have proposed laws requiring AI videos to include clear labeling and banning the creation of "non-consensual intimate imagery" or child sexual-abuse material. But enforcement varies, and many experts say it's not enough.

"The Silicon Valley ethos of 'move fast and break things' is still alive," Dobuski said. "Companies are racing to dominate the AI market before regulations are in place, and that strategy isn't without risk, especially when it comes to kids."

What parents can do right now

Jordan recommends parents take a hands-on approach. She suggests starting by explaining what Sora is and why it matters.

"Tell your kids, 'What you see online might be fake -- always question it,'" she said. "Teach them not to upload their face or voice anywhere and to come to you if they see something that makes them uncomfortable."

Families should review new apps together, establish rules about sharing personal media, and keep devices in shared spaces, Jordan said, recommending parents also discuss with their kids the wider influence of AI-generated media.

"Even if your child doesn't use Sora directly, they're going to see its content," she said. "That means you have to talk not just about what they make, but what they're consuming."