An update on our mental health-related work
TL;DR
OpenAI updates its mental health safeguards for ChatGPT, including a new trusted contact feature and improved emotional distress detection. The company also addresses ongoing litigation related to mental health cases, emphasizing careful handling and transparency.
Key Takeaways
- OpenAI is introducing a trusted contact feature for adult users to designate someone for support notifications.
- Parental controls, launched in September 2025, are being expanded to enhance safety for teen users.
- The company is advancing AI models to better detect and respond to signs of emotional distress through new evaluation methods.
- OpenAI is handling consolidated mental health-related litigation in California with principles of transparency and respect.
- Ongoing collaboration with mental health experts and clinicians aims to improve ChatGPT's responses in sensitive situations.
Each week, more than 900 million people use ChatGPT to improve their daily lives, for everything from learning new skills to navigating complex healthcare systems. Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.
Since introducing parental controls in September 2025, we’ve seen encouraging engagement from families and will continue building on these protections. Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support. As a reminder, parents also receive safety notifications about their teens’ use of ChatGPT through parental controls. We’ll share more as these updates roll out in ChatGPT.
We are also continuing to advance how our models detect and respond to signs of emotional distress. This includes new evaluation methods that simulate extended mental health-related conversations, helping us better identify potential risks and improve how ChatGPT responds in sensitive moments. We’ll share more about this work in the coming weeks as we continue strengthening ChatGPT’s safeguards.
Litigation updates
Separately, the Court recently coordinated a number of mental health-related cases involving ChatGPT into a single proceeding in California. In the coming days, the Court will assign the coordination judge for this proceeding. As part of this consolidation process, plaintiffs’ attorneys involved in these proceedings have informed the Court that they intend to file a number of new cases*.
As with the earlier-filed mental health-related litigation, OpenAI will continue to handle any additional cases with care, transparency, and respect for the people involved, in line with the following principles:
- We start with the facts and put genuine effort into understanding them.
- We will respectfully make our case in a way that is cognizant of the complexity and nuances of situations involving real people and real lives.
- We recognize that these cases inherently involve certain types of private information that require sensitivity in a public setting like a court.
- And independent of any litigation, we’ll remain focused on improving our technology in line with our mission.
We recognize that court processes can be lengthy and, at times, opaque due to strict legal rules. It can also take time to collect the relevant facts, understand them, and present them to the court in line with its evidence procedures. We work to understand the details in good faith, and we seek information through the court process only when it is relevant to the case and the specific allegations that have been made.
It’s important to reserve judgment and allow the facts to appropriately emerge through the court process, as these are complex and nuanced cases with many factors and circumstances that are often not reflected in the initial filings.
Our thoughts are with all those impacted by these incredibly heartbreaking situations. We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts.
More information about our safety work can be found here.