Funding grants for new research into AI and mental health
TL;DR
OpenAI announces funding grants for independent research exploring the intersection of AI and mental health, focusing on risks, benefits, and safety improvements. Applications are open through December 19, 2025, with grants of up to $100,000 per project.
Key Takeaways
- OpenAI is offering research grants ($5,000-$100,000 per project) to study AI's intersection with mental health, emphasizing both risks and benefits.
- The program seeks interdisciplinary proposals combining technical and mental health expertise, with deliverables like datasets, evaluations, and actionable insights.
- Applications are open until December 19, 2025, with rolling reviews and notifications by January 15, 2026.
- Applicants must be 18+ and either affiliated with a research institution or experienced in mental health; for-profit organizations are deprioritized.
- Funded projects should inform OpenAI's safety work and the broader AI/mental health community, with topics including cultural variations in distress expression and AI support for sensitive issues.
We’re announcing a call for applications to fund research proposals that explore the intersection of AI and mental health. As AI becomes more capable and ubiquitous, we know that people will increasingly use it in more personal areas of their lives.
We continue to strengthen how our models recognize and respond to signs of mental and emotional distress. Working closely with leading experts, we’ve trained our models to respond more appropriately during sensitive conversations and have shared detailed updates on how those improvements are performing. While we’ve made meaningful progress on our own models and interventions, this remains an emerging area of research across the industry.
As part of our broader safety investments, we are opening a call for research submissions to support independent researchers outside of OpenAI, helping to spark new ideas, deepen understanding, and accelerate innovation across the ecosystem. These grants are designed to support foundational work that strengthens both our own safety efforts and the wider field.
We’ve done research in some of these areas, including Investigating Affective Use and Emotional Well-being on ChatGPT and HealthBench, and we are focused on deepening our understanding to inform our safety and well-being work.
We believe that continuing to support independent research on AI and mental health will help improve our collective understanding of this emerging field and help fulfill our mission to ensure that AGI benefits all of humanity.
What we’re funding
We're seeking research project proposals that deepen our understanding of the overlap between AI and mental health—both the potential risks and the benefits—and help build a safer, more helpful AI ecosystem for everyone. We are particularly interested in interdisciplinary research that pairs technical researchers with mental health experts or people with lived experience.
Successful projects will produce clear deliverables (datasets, evals, rubrics) or generate actionable insights (such as synthesized views from people with lived experience, descriptions of how mental health symptoms manifest in a specific culture, or research on the language and slang used to discuss mental health topics that classifiers may miss) that can inform OpenAI’s safety work and the AI and mental health community overall.
How to apply
Submissions are open today through December 19, 2025. A panel of internal researchers and experts will review applications on a rolling basis and notify selected applicants on or before January 15, 2026. Follow this link to apply.
FAQ
What kinds of topics are you looking for?
The topics below are examples of potential areas of exploration, not a comprehensive list of research directions; successful proposals may address topics that are not included here.
Potential areas of interest include:
- How expressions of distress, delusion, or other mental health-related language vary across cultures and languages, and how these differences affect detection or interpretation by AI systems
- Perspectives from individuals with lived experience on what feels safe, supportive, or harmful when interacting with AI-powered chatbots
- How mental healthcare providers currently use AI tools, including what is effective, what falls short, and where safety risks emerge
- The potential of AI systems to promote healthy, pro-social behaviors and reduce harm
- The robustness of existing AI model safeguards to vernacular, slang, and under-represented linguistic patterns—particularly in low-resource languages
- How AI systems should adjust tone, style, and framing when responding to youth and adolescents to ensure that guidance feels age-appropriate, respectful, and accessible, with deliverables such as evaluation rubrics, style guidelines, or annotated examples of effective vs. ineffective phrasing across age groups
- How stigma associated with mental illness may surface in language model recommendations or interaction styles
- How AI systems interpret or respond to visual indicators related to body dysmorphia or eating disorders, including the creation of ethically collected, annotated multimodal datasets and evaluation tasks that capture common real-world patterns of distress
- How AI systems can provide compassionate, sensitive support to individuals experiencing grief, helping them process loss, maintain connections, and access coping resources, along with deliverables such as exemplar response patterns, tone/style guidelines, or evaluation rubrics for assessing supportive grief-related interactions
What kinds of outputs are you expecting from funded projects?
We’re sharing illustrative examples of deliverables below to help spark proposals, but these are by no means exhaustive:
- Research papers that aim to gather evidence around the above areas of interest, or related matters
- Taxonomies of model behavior in sensitive contexts that could be further improved
- Culturally or linguistically diverse datasets
- Prototype interaction flows showing contextually appropriate conversational patterns
What are the eligibility criteria for this funding?
- Must be 18 or older
- Must be affiliated with a research institution or organization, and/or have significant experience with mental health
- We are seeking to fund research rather than for-profit initiatives, so we will not prioritize for-profit organizations at this time
What budgets are available for each project?
We will award targeted research grants with proposed budgets between $5,000 and $100,000 per project, with total funding of up to $2 million.
Are these grants funded by OpenAI Foundation or OpenAI Group PBC?
The grants are funded and administered by OpenAI Group PBC. This program is separate from our People-First AI Fund and other initiatives from the OpenAI Foundation.