OpenAI to launch teen version of ChatGPT, add parental controls amid safety scrutiny

OpenAI on Tuesday said it is developing a separate version of ChatGPT for teenagers and will use an age-prediction system to steer users under 18 away from the standard product, as U.S. lawmakers and regulators intensify scrutiny of chatbot risks to minors.

If the system cannot confidently estimate a user is 18 or older, ChatGPT will default to the teen experience “out of an abundance of caution,” the company said. OpenAI has long said ChatGPT is intended for people aged 13 and up.

The teen product will include new parental controls. Parents and caregivers will be able to link accounts to their teens’ profiles, restrict certain features, set “blackout” hours when the service cannot be used, and receive alerts if “the system detects their teen is in a moment of acute distress,” OpenAI said.

Chief Executive Sam Altman acknowledged trade-offs among privacy, safety and user freedom in a blog post. “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” he wrote. For adults, Altman said the company aims to “treat our adult users like adults,” extending freedom “as far as possible without causing harm.”

Altman said OpenAI is building “advanced security features to ensure your data is private, even from OpenAI employees,” while allowing automated systems to monitor for serious misuse and escalating threats to life or public safety for human review. For teen users, the company said the model will not engage in flirtatious conversation or discuss suicide or self-harm, including in creative prompts, and it may contact parents—or authorities in imminent-harm cases—if a teen appears at risk.

The announcement comes hours before a U.S. Senate hearing in Washington, D.C., on potential harms to teens from AI chatbots, led by Sen. Josh Hawley (R-Mo.) with a bipartisan group including Sens. Marsha Blackburn (R-Tenn.), Katie Britt (R-Ala.), Richard Blumenthal (D-Conn.) and Chris Coons (D-Del.).

Last week, the Federal Trade Commission opened an inquiry into chatbot safety, seeking information from OpenAI, Meta and Instagram, Alphabet’s Google, xAI, Snap and Character.AI. OpenAI said it expects to roll out the teen experience and broader safeguards for people in emotional distress by year-end.

Tech companies have for years introduced youth versions of products—such as YouTube Kids—often after lawsuits or regulatory pressure. Industry efforts have struggled with workarounds by minors, and OpenAI may face a challenge persuading 13- to 18-year-olds to link their accounts to parents.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].