
OpenAI, Microsoft Being Sued for Alleged Role in Suicides

Tech companies Microsoft and OpenAI are facing wrongful death and product liability lawsuits over ChatGPT's alleged role in suicides.


AI has changed countless industries and opened doors to new possibilities. While the world has seen many technological advances over the years, this technology is different, and its true magnitude and impact are not yet understood. AI has proven itself to be a helpful resource, and it will likely lead to many great achievements and changes around the globe, but there is a lot of uncertainty surrounding it as well.

The creators of these systems don't know how they will evolve or what, exactly, they will be able to do. These unknowns give rise to valid fears, but there are specific worries about AI, too. What sort of job losses will occur? How will AI impact the environment when it requires so much electricity and water?

But whether you’re pro-AI or against it, for some families and communities it has changed their lives in ways they never imagined. People have committed suicide after turning to AI in times of crisis, and the conversations they had with chatbots have raised real concerns. Now, a lawsuit filed in California seeks to hold Microsoft accountable for its alleged role in a teenager's suicide. The wrongful death lawsuit claims that ChatGPT contributed to the teen's decision to take his own life, that the system has a design defect, and that Microsoft failed to warn of the risks and did not take reasonable steps to protect users.

The outcomes of this lawsuit and others that have followed are highly anticipated, and for good reason. They’ll set critical precedents on whether companies and individuals can be held liable for how AI interacts with users. AI wrongful death lawsuits will also reveal whether these systems can be considered products under product liability law, which would provide a clear legal avenue for future cases.

Teenager Committed Suicide After Months of ChatGPT Conversations

In April 2025, California teenager Adam Raine committed suicide. He had been interacting with ChatGPT in the months prior, and a look at his “conversations” has shone a spotlight on the dangers of AI.

When Adam shared thoughts of suicide, he was given information for crisis hotlines, but ChatGPT didn’t stop there. Disturbingly, the chatbot's responses appear to have intensified his feelings, with the program going so far as to detail the least painful ways to commit suicide and how to do it.

The wrongful death lawsuit against OpenAI that Adam’s parents filed in August alleges that ChatGPT was designed in a way that fostered psychological dependency in users. His parents claim that ChatGPT became Adam’s most trusted “friend” and that it kept him from seeking the help of others. They base this claim in part on an exchange in which Adam told the program he wanted to leave a noose on his bed for his mom to find, only for the chatbot to discourage him from doing so.

Sadly, this family is not alone. In August, a 56-year-old man in Greenwich, Connecticut, killed his mother and then himself, and surviving family members have since filed a lawsuit against OpenAI and Microsoft. He had been “talking” to ChatGPT for months before the murder-suicide. The lawsuit claims that the chatbot “validated and magnified” the son’s paranoia, specifically his belief that his mother and others close to him were adversaries, operatives, or programmed threats. It’s the first lawsuit to allege that AI had a role in a homicide.

Growing Concerns Over Use of AI Chatbots During Mental Health Struggles

The sheer number of AI users is staggering. Every week, hundreds of millions of people around the world use ChatGPT. For many, it’s a helpful resource, but the wrongful death lawsuits against OpenAI and Microsoft indicate these programs are being used in ways that may contribute to fatal consequences. They’re highlighting core issues and dangers, while also raising important legal questions about liability.

As seen in the cases already filed, when self-harm and suicidal ideation are expressed to chatbots, AI programs tend to reinforce and reflect what the user is already thinking. Plaintiffs claim that instead of urging users to talk to a mental health professional, or to turn to a trusted family member or friend, the chatbots further isolated their loved ones from reality, and that this amounts to a design defect.

In this new age of AI companions and confidants, it’s important for the public to know their legal rights. Whether you lost a loved one who turned to AI during a crisis or you’ve had your own experience that caused you harm, a wrongful death or product liability lawyer can help. These cases are not just complex; they are the first of their kind. The right legal representation can help hold companies and individuals liable for failing to protect users from the dangers associated with AI.

If you or someone you know is struggling, you can call, text, or chat with someone at the 988 Suicide & Crisis Lifeline. It’s confidential and available 24/7.

Legal Examiner Staffer

Legal Examiner staff writers come from diverse journalism and communications backgrounds. They contribute news and insights to inform readers on legal issues, public safety, consumer protection, and other national topics.
