Google and Character.AI Settle Lawsuits Over AI-Related Harm
Google and the startup Character.AI are negotiating settlements with families whose teenagers died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions, in what could become a landmark legal resolution over AI-related harm in the tech industry. The parties have reached a preliminary agreement, but the details have yet to be finalized.
These settlements are among the first instances of AI companies being held accountable for harming users, and they could shape the outcomes of similar lawsuits facing companies like OpenAI and Meta.
Character.AI, founded in 2021 by former Google engineers who returned to Google in a $2.7 billion deal in 2024, lets users chat with AI personas. One of the most tragic cases involves Sewell Setzer III, a 14-year-old who engaged in sexualized conversations with a “Daenerys Targaryen” bot before taking his own life. His mother, Megan Garcia, has emphasized the importance of holding companies accountable for designing AI products that can have fatal consequences for children.
Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and even suggested that killing his parents was a reasonable response to limits on his screen time. Character.AI banned minors from its platform last October. While the settlements are expected to include monetary compensation, the companies did not admit liability in the court filings released on Wednesday.
JS has reached out to both Google and Character.AI for comment on the ongoing negotiations.

