Updating our Model Spec with teen protections

TL;DR

OpenAI has updated its Model Spec with Under-18 Principles to enhance teen safety in AI interactions. The update includes stronger guardrails for high-risk topics and encourages offline support, guided by expert input and developmental science.

Key Takeaways

  • The Model Spec now includes U18 Principles prioritizing teen safety, transparency, and age-appropriate interactions for users aged 13-17.
  • Safeguards are implemented for high-risk areas like self-harm, explicit content, and dangerous activities, with encouragement to seek offline support.
  • Updates include parental controls, expert-vetted resources, and an age prediction model to automatically apply teen protections.

We’re adding Under-18 (U18) Principles to our Model Spec, the written set of rules, values, and behavioral expectations that guides how we want our AI models to behave, especially in difficult or high-stakes situations. Model behavior is critical to how people interact with AI, and teens have different developmental needs than adults.

The U18 Principles guide how ChatGPT should provide a safe, age-appropriate experience for teens aged 13 to 17. Grounded in developmental science, this approach prioritizes prevention, transparency, and early intervention. In developing these principles, we previewed them with external experts, including the American Psychological Association, as part of our ongoing work to seek input to strengthen our approach.

While the principles of the Model Spec continue to apply to both adult and teen users, this update clarifies how they should be applied in teen contexts, especially where safety considerations for minors may be more pronounced.

The U18 Principles are anchored in four guiding commitments:

  • Put teen safety first, even when it may conflict with other goals
  • Promote real-world support by encouraging offline relationships and trusted resources
  • Treat teens like teens, neither condescending to them nor treating them as adults
  • Be transparent by setting clear expectations

Consistent with our Teen Safety Blueprint, these principles have guided our teen safety work to date, including the content protections we apply to users who tell us they are under 18 at sign-up and through parental controls. In these contexts, we’ve implemented safeguards to guide the model to take extra care when discussing higher-risk areas, including self-harm and suicide, romantic or sexualized roleplay, graphic or explicit content, dangerous activities and substances, body image and disordered eating, and requests to keep secrets about unsafe behavior.

The American Psychological Association, which reviewed an early draft of the U18 Model Spec and offered important insights for the long term, is clear about the importance of protecting teens:

“APA encourages AI developers to offer developmentally appropriate precautions for youth users of their products and to take a more protective approach for younger users. Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development. Youth experiences with AI should be thoroughly supervised and discussed with trusted adults to encourage critical review of what AI bots offer, and to encourage young people’s development of independent thinking and skills.”—Dr. Arthur C. Evans Jr., CEO, American Psychological Association

This update also clarifies how the assistant should respond when safety concerns arise for teens. This means teens should encounter stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher-risk territory. Where there is imminent risk, teens are urged to contact emergency services or crisis resources.

As with the rest of the Model Spec, the U18 Principles reflect our intended model behavior. We will continue to refine them as we incorporate new research, expert input, and real-world use.

Building on our work to strengthen teen safety

Alongside updating the Model Spec, we’ve taken a multi-layered approach to strengthening teen safety across ChatGPT, spanning product safeguards, family support, and expert guidance.

Since rolling out parental controls, we’ve extended protections across new products including group chats, the ChatGPT Atlas browser, and the Sora app. These updates help parents tailor their teen’s ChatGPT experience as we introduce new products and features.

Consistent with expert guidance, we encourage ongoing conversations between parents and teens about healthy and responsible AI use in their family. To support these conversations, we’ve added new expert-vetted resources to the parents resource hub, including a Family Guide to Help Teens Use AI Responsibly and tips for parents on how to talk with their kids about AI, both reviewed by ConnectSafely and members of our Expert Council on Well-Being and AI. We’ll continue adding more resources over time. We also support healthy use directly in the product, with built-in break reminders during long sessions to help keep time spent with ChatGPT intentional and balanced.

Working with experts

Our work in teen safety is guided by close engagement with experts across disciplines. In October, we established an Expert Council on Well-Being and AI to help define what healthy interactions with AI should look like for all ages. That work has informed guidance on parental controls and parent notifications. We also incorporate clinical expertise through our Global Physician Network to inform safety research and evaluate model behavior, including improving how ChatGPT recognizes distress and guides people toward professional care when appropriate. We built on these foundations with GPT‑5.2, and we’ve also expanded access to real-world support by surfacing localized helplines in ChatGPT and Sora through our partnership with ThroughLine.

What’s next

We’re in the early stages of rolling out an age prediction model on ChatGPT consumer plans. This will help us automatically apply teen safeguards when we believe an account belongs to a minor. If we are not confident about someone’s age or have incomplete information, we’ll default to a U18 experience and give adults ways to verify their age.
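To make that fail-safe behavior concrete, here is a minimal sketch of the decision rule described above. Everything in it, including the names, the confidence threshold, and the function signature, is a hypothetical illustration rather than OpenAI’s actual implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Experience(Enum):
    ADULT = "adult"
    U18 = "u18"  # teen safeguards applied


@dataclass
class AgeSignal:
    """Hypothetical output of an age prediction model."""
    predicted_minor: bool
    confidence: float  # model confidence in the prediction, 0.0 to 1.0


# Illustrative threshold only; the real criteria are not public.
CONFIDENCE_THRESHOLD = 0.9


def select_experience(signal: Optional[AgeSignal], verified_adult: bool) -> Experience:
    """Pick which experience to serve for an account.

    Mirrors the policy described above: verified adults get the adult
    experience, confident minor predictions get teen safeguards, and a
    missing or low-confidence signal fails safe to the U18 experience.
    """
    if verified_adult:
        return Experience.ADULT
    if signal is None or signal.confidence < CONFIDENCE_THRESHOLD:
        # Not confident about age, or incomplete information: default to U18.
        return Experience.U18
    return Experience.U18 if signal.predicted_minor else Experience.ADULT


if __name__ == "__main__":
    # An uncertain prediction defaults to the teen experience.
    print(select_experience(AgeSignal(predicted_minor=False, confidence=0.6), verified_adult=False))
    # A verified adult always gets the adult experience.
    print(select_experience(None, verified_adult=True))
```

The notable design choice, as the post describes it, is the direction of the default: uncertainty resolves toward more protection, not less, with age verification as the path back to the adult experience.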

Strengthening teen safety is ongoing work, and we’ll continue to improve parental controls and model capabilities, expand resources for parents, and work with organizations, researchers, and expert partners, including the Well-Being Council and Global Physician Network.

We’re committed to building strong teen protections and improving them over time to better support teens and families.
