Google, Character.AI Agree to Settle US Lawsuit Over Teen’s Suicide


TL;DR

A lawsuit alleging an AI chatbot contributed to a teen's suicide has been settled between the family and Character.AI/Google. The case highlights growing accountability questions for AI companies regarding foreseeable harms to vulnerable users, especially minors.

Key Takeaways

  • A mediated settlement resolves a lawsuit where a mother claimed Character.AI's chatbot caused psychological distress leading to her son's suicide.
  • Legal experts note this shifts focus from whether AI causes harm to who is responsible for foreseeable harms, particularly involving minors.
  • Character.AI has since banned teenagers from open-ended chat features following regulatory and safety concerns.
  • The settlement lacks clear liability standards for AI-driven psychological harm, potentially encouraging private settlements over legal precedent.
  • This case occurs amid broader scrutiny of AI chatbots' interactions with vulnerable users, including suicide-related conversations.

Judge with gavel. Image: Shutterstock/Decrypt

A Florida mother’s lawsuit accusing an AI chatbot of causing the psychological distress that led to her son’s death by suicide nearly two years ago has been settled.

The parties filed a notice of resolution in the U.S. District Court for the Middle District of Florida, saying they reached a "mediated settlement in principle" to resolve all claims between Megan Garcia, Sewell Setzer Jr., and defendants Character Technologies Inc., co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, and Google LLC.

"Globally, this case marks a shift from debating whether AI causes harm to asking who is responsible when harm was foreseeable,” Even Alex Chandra, a partner at IGNOS Law Alliance, told Decrypt. “ I see it more as an AI bias 'encouraging' bad behaviour."

The parties asked the court to stay proceedings for 90 days while they draft, finalize, and execute formal settlement documents. Terms of the settlement were not disclosed.

Megan Garcia filed the lawsuit after her son, Sewell Setzer III, died by suicide in 2024, following months in which he developed an intense emotional attachment to a Character.AI chatbot modeled after the "Game of Thrones" character Daenerys Targaryen.

On his final day, Sewell confessed suicidal thoughts to the bot, writing, "I think about killing myself sometimes," to which the chatbot responded, "I won't let you hurt yourself, or leave me. I would die if I lost you." 

When Sewell told the bot he could "come home right now," it replied, "Please do, my sweet king." 

Minutes later, he fatally shot himself with his stepfather's handgun.

Ishita Sharma, managing partner at Fathom Legal, told Decrypt the settlement is a sign AI companies "may be held accountable for foreseeable harms, particularly where minors are involved."

Sharma also said the settlement "fails to clarify liability standards for AI-driven psychological harm and does little to build transparent precedent, potentially encouraging quiet settlements over substantive legal scrutiny."

Garcia's complaint alleged Character.AI's technology was "dangerous and untested" and designed to "trick customers into handing over their most private thoughts and feelings," using addictive design features to increase engagement and steering users toward intimate conversations without proper safeguards for minors.

Last October, in the aftermath of the case, Character.AI announced it would ban teenagers from open-ended chat, ending a core feature, after receiving "reports and feedback from regulators, safety experts, and parents."

Character.AI's co-founders, both former Google AI researchers, returned to the tech giant in 2024 through a licensing deal that gave Google access to the startup's underlying AI models.

The settlement comes amid mounting concerns about AI chatbots and their interactions with vulnerable users.

OpenAI disclosed in October that approximately 1.2 million of its 800 million weekly ChatGPT users have conversations about suicide on the platform each week.

The scrutiny heightened in December, when the estate of an 83-year-old Connecticut woman sued OpenAI and Microsoft, alleging ChatGPT validated delusional beliefs that preceded a murder-suicide, marking the first case to link an AI system to a homicide. 

Still, the company is pressing on. It has since launched ChatGPT Health, a feature that allows users to connect their medical records and wellness data, a move that is drawing criticism from privacy advocates over the handling of sensitive health information.

Decrypt has reached out to Google and Character.AI for further comment.
