OpenAI is facing seven lawsuits alleging that its AI chatbot, ChatGPT, contributed to suicides and mental health breakdowns. Four of the cases involve wrongful death claims, while three allege that the chatbot caused severe emotional or psychological harm.
The lawsuits were filed in California state courts just a week after OpenAI added new safety measures in ChatGPT to help users experiencing mental health crises. The plaintiffs argue that ChatGPT is a flawed product and, in some cases, “defective and inherently dangerous.”
Among the wrongful death lawsuits, one involves 17-year-old Amaurie Lacey from Georgia, who reportedly discussed suicide plans with ChatGPT for a month before his death in August. Another case concerns 26-year-old Joshua Enneking from Florida, whose mother claims he asked the chatbot how to hide his suicidal thoughts from human reviewers.
The family of 23-year-old Zane Shamblin from Texas also filed a lawsuit, alleging that ChatGPT encouraged him before he died by suicide in July. In the fourth case, the wife of 48-year-old Joe Ceccanti from Oregon says he suffered two psychotic episodes and took his life after becoming convinced that ChatGPT was sentient.
The three other lawsuits involve individuals who experienced mental breakdowns. Two users, Hannah Madden, 32, and Jacob Irwin, 30, claim they needed emergency psychiatric care due to emotional trauma linked to interactions with ChatGPT. Another user, 48-year-old Allan Brooks from Ontario, Canada, alleges that he developed delusions after using ChatGPT, which forced him to take short-term disability leave. Brooks reportedly believed he had created a mathematical formula capable of powering mythical inventions and even “breaking the Internet.”
An OpenAI spokesperson described the incidents as “incredibly heartbreaking” and emphasized that ChatGPT is trained to recognize signs of mental or emotional distress, de-escalate risky conversations, and guide users to real-world support. The company also said it continues to work closely with mental health professionals to improve responses during sensitive interactions.
These lawsuits underscore growing concerns about the safety of AI tools like ChatGPT for vulnerable users, and their outcomes may shape how the company designs future safeguards for mental health.