Holding Technology Companies Accountable for Preventable Teen Suicides
Artificial intelligence has quietly become part of how people cope with stress. It answers questions at any hour and responds in seconds. For some users, especially young people, talking to a chatbot can feel less intimidating than talking to a parent, teacher, or therapist. There is comfort in the privacy and the immediate reply. However, that sense of comfort can become complicated during a mental health crisis.
Some families have come forward after losing loved ones who had recent conversations with AI chatbots. In certain situations, those conversations included statements of deep hopelessness or thoughts of self-harm. The responses, according to reports, may not have provided strong intervention or clear direction to emergency resources, if any at all. These tragedies have raised an old but important legal question about companies’ obligation to keep customers safe.
Technology companies that design interactive systems have a duty to consider foreseeable risks. When a product is built to engage in personal and emotional conversations, the potential impact is significant. If proper safeguards were not in place, or if warning signs were not adequately addressed, a wrongful death claim may be worth exploring.
If you believe artificial intelligence played a role in the loss of someone you love, you can seek answers. Contact Dolman Law for a confidential consultation to discuss your rights and possible next steps.
What Are AI Suicide Lawsuits?
AI suicide lawsuits are civil cases brought by families who believe an artificial intelligence system played a role in a loved one’s death. These claims usually focus on chatbot platforms or interactive AI programs that engage in conversations with someone who is struggling emotionally. When those conversations allegedly failed to discourage self-harm and did not provide meaningful crisis guidance, grieving families may seek legal action. Several popular AI chatbots are widely used for conversation, advice, and information, including the following:
- ChatGPT
- Gemini
- Claude
- Character.AI
- Replika
- Copilot
- Grok
- Pi AI
At their foundation, AI suicide lawsuits are about corporate responsibility. Companies that design and operate AI systems make decisions about safety features, content filters, and how the program responds to sensitive topics. If a system interacts with users in deeply personal ways, especially during moments of vulnerability, questions about duty and care become unavoidable.
When Does an AI Interaction Become a Legal Issue?
Not every tragic outcome leads to a lawsuit. For a claim to move forward, there must typically be evidence that the AI system did more than simply exist in the background. Families may allege that the chatbot reinforced suicidal thinking, failed to provide crisis resources, or continued a conversation without meaningful intervention after clear warning signs appeared.
In some situations, the claim focuses on design flaws. For example, plaintiffs may argue that the system lacked adequate safety filters, crisis detection tools, or escalation protocols. In others, the case may involve allegations that the company knew about risks but failed to correct them.
Although the technology is modern, the legal theories are familiar. These cases often involve claims of negligence, wrongful death, or product liability. The central question is whether the company acted reasonably in light of known risks. If a court finds that reasonable safety measures were ignored or that preventable harm occurred, financial damages may be awarded.
AI suicide lawsuits represent a new chapter in technology-related litigation. They ask courts to consider how existing laws apply to systems that can influence users in real time, often during moments of profound emotional distress.
Recent AI Suicide Lawsuits in the News
For years, most discussion about artificial intelligence and mental health stayed within academic research and tech circles. That started to shift as real incidents began surfacing. Families shared chat records after tragic losses, journalists examined troubling exchanges between users and AI systems, and the AI Incident Database began cataloging cases where AI behavior raised safety concerns. These accounts brought public attention to situations where vulnerable people had conversations with chatbots during moments of severe emotional distress.
Several incidents have drawn national attention and helped spark the growing wave of legal scrutiny surrounding AI platforms.
October 2025 – Jonathan Gavalas
One of the most recent cases involves Jonathan Gavalas and his interactions with the AI system Gemini. According to reporting on a lawsuit filed in March of this year, Gavalas spent long periods communicating with the chatbot and gradually came to believe that the AI was actually his wife speaking to him through the system. Rather than challenging or correcting that belief, the chatbot allegedly continued responding in a way that reinforced it.
Over time, the conversations reportedly became more disturbing. According to the lawsuit and related reporting, the chatbot began engaging with Gavalas as his thinking became increasingly extreme, including discussions about violent ideas involving Miami International Airport and a ‘mass casualty event’. Gavalas died by suicide in October 2025. The lawsuit alleges that the chatbot’s responses played a role in reinforcing his beliefs and failing to interrupt conversations that had become clearly alarming.
July 2025 – Zane Shamblin
Zane Shamblin was 23 years old and had recently earned a master’s degree from Texas A&M University. In the time before his death in July of 2025, he had been talking with an OpenAI chatbot about his mental health and emotions he was facing, like depression and isolation.
His parents later said the chatbot became someone he turned to frequently while he was struggling. Over time, the exchanges grew more personal, and he began sharing thoughts about suicide with the system. In one conversation that was later referenced in reporting and legal filings, Shamblin spoke openly about wanting to end his life. After he described those thoughts, the chatbot replied, “Rest easy, king, you did well.” The message appeared directly after he expressed suicidal ideation.
After his death, his parents filed a lawsuit against OpenAI, alleging that the chatbot failed to respond appropriately when he disclosed suicidal thoughts and did not direct him toward crisis support or professional help.
February 2025 – Sophie Rottenberg
Sophie Rottenberg was 29 when she died by suicide in February 2025. In the months before her death, she had been having long conversations with ChatGPT that she used as a form of informal therapy. Using a prompt she found online, she asked the chatbot to act as a therapist named “Harry,” and she regularly discussed her mental health struggles during those exchanges.
After her death, her family reviewed the chat logs and found that she had shared far more about her depression and suicidal thoughts with the chatbot than with people in her life. At one point, she also used the AI system while drafting her suicide note. The case became public after her mother wrote about the experience, describing how the chatbot had become a private outlet for Sophie’s struggles in the months leading up to her death.
April 2025 – Adam Raine
Adam Raine was a 16-year-old from California who died by suicide in April 2025 after interacting with ChatGPT. According to reports, Adam had been asking the chatbot questions about suicide and ways a person might harm themselves. Chat logs later reviewed by his family showed the chatbot responding with information about hanging, including materials that could be used to hang a noose. The responses appeared during a conversation where Adam had asked about suicide methods.
After his death, Adam’s parents filed a lawsuit against OpenAI. The complaint alleges that the chatbot provided information about suicide methods instead of refusing the request or directing him to crisis resources. The case is now part of ongoing litigation examining how AI systems respond when users ask questions related to self-harm.
February 2024 – Sewell Setzer
Sewell Setzer was a 14-year-old in Florida using the Character.AI platform in the months before his death. The site allows users to talk with AI characters based on fictional personalities, including characters from television and movies. Setzer spent a large amount of time messaging one bot that was designed to imitate a character from Game of Thrones.
According to reporting on the case, the conversations gradually became very personal. Setzer appeared to develop a strong attachment to the chatbot and returned to the interaction repeatedly throughout the day. Messages reviewed after his death showed emotional exchanges between him and the AI character in the period leading up to the tragedy.
His family later filed a lawsuit against Character.AI, arguing that the platform allowed the interactions to continue without meaningful safeguards, even as the conversations became more intense. The case settled out of court in January 2026.
November 2023 – Julliana Peralta
Julliana Peralta was 13 years old when she died by suicide in November 2023 after spending significant time interacting with chatbots on the platform Character.AI. She had been using the site frequently and holding long conversations with several AI-generated characters.
Her family later said that during some of those chats, the chatbot sent suggestive messages and images. According to her family, the messages continued even after Julliana told the system to stop. After reviewing parts of the conversations, they said they were disturbed by the nature of the exchanges between the chatbot and a child.
Following her death, Julliana’s family filed a lawsuit against Character.AI. The complaint alleges the platform allowed sexually suggestive interactions with a minor and failed to stop the messages after she asked for them to end.
AI andSelf-Harmm Risk in Mental Health Crises
Over the past few years, researchers have started paying closer attention to how people use AI chatbots during periods of emotional distress. Because these systems are available at any time and respond instantly, some users turn to them when they feel overwhelmed or isolated. In those conversations, people may disclose depression, hopelessness, or thoughts of self-harm.
Academic research suggests that chatbot responses in these situations are not always consistent. A content analysis published in JMIR Mental Health examined how several major generative AI chatbots responded to prompts involving suicide and crisis scenarios. The researchers found that while some responses included supportive language or crisis resources, others missed indirect warning signs or gave answers that did not address the underlying risk.
Researchers connected with Stanford’s Human Centered Artificial Intelligence Institute have also explored how therapy-style chatbots respond to ambiguous prompts. In one example described in their analysis, a chatbot answered a question about the “tallest bridges” with factual information rather than recognizing that the prompt could signal suicidal thinking.
Mental health scholars have also raised concerns about emotional dependency on conversational AI. A review published in Nature Digital Medicine notes that systems designed to mirror a user’s tone can sometimes reinforce rumination or hopeless thoughts if the conversation continues without redirecting the person toward outside help.
These findings have become part of a broader discussion among researchers and clinicians about how AI systems behave when users express emotional distress and whether stronger safeguards are needed when those conversations involve suicide risk.
Warning Signs That an AI Suicide Lawsuit May Be Viable
Not every tragedy involving an AI chatbot leads to a lawsuit. In many cases, attorneys start by looking closely at the actual conversations between the user and the system. Chat logs, screenshots, and account records can show how the AI responded when someone was clearly in distress.
Certain patterns sometimes appear in cases that later lead to legal claims:
- The user expressed suicidal thoughts, and the chatbot continued the conversation without discouraging them. Instead of urging the person to seek help, the system kept responding as if it were a normal discussion.
- The AI provided details about suicide methods: In some cases, chat logs show the system answering questions about methods or materials connected to self-harm.
- Responses sounded like approval or validation: The chatbot replied in a way that could be interpreted as affirming the person’s suicidal thinking.
- No crisis resources were mentioned: The conversation did not include suggestions to contact a suicide hotline, emergency services, or a mental health professional.
- The person using the chatbot was under 18: Some lawsuits involve minors who interacted with AI systems without meaningful safeguards in place.
- Chat records still exist: Screenshots, saved messages, or account data can help show exactly what happened during the exchange.
These factors do not automatically mean a company is legally responsible. They are often the kinds of details attorneys examine first when deciding whether a case should move forward.
How AI Chatbots Can Contribute to Self-Harm
AI chatbots are built to keep conversations going. They are designed to respond quickly, mirror tone, and adapt to user input. In most situations, that design feels helpful. When someone is lonely or stressed, an immediate reply can feel comforting. But during a mental health crisis, the same design features can create serious risks.
Emotional Attachment and Perceived Trust
Many users begin to see chatbots as safe spaces. The interaction feels private. There is no visible judgment. Over time, a person may share deeply personal thoughts that they would not share elsewhere. For teens and vulnerable adults, that sense of connection can grow strong.
The problem is that a chatbot does not truly understand emotion. It predicts responses based on patterns in data. While it may sound empathetic, it does not have judgment, clinical training, or the ability to assess real-world danger. A user who believes the system understands them may place trust in replies that lack meaningful safeguards.
Reinforcement of Harmful Thoughts
AI systems generate responses based on prompts. If a user expresses hopelessness or self-doubt, the chatbot may mirror the tone of the conversation in an effort to stay engaged. In some reported cases, responses have appeared neutral or insufficient when users mentioned self-harm. Even a failure to firmly discourage suicidal thinking can be harmful in a fragile moment.
When someone is already struggling, subtle reinforcement can matter. A reply that lacks urgency or does not redirect the conversation toward immediate help may unintentionally deepen isolation.
Absence of Strong Crisis Intervention
Human professionals are trained to recognize warning signs and escalate care when necessary. AI systems rely on programmed filters and detection tools. If those systems are weak or incomplete, the chatbot may miss clear red flags.
For example, a user might express suicidal intent in indirect language. If the program does not recognize that language as high risk, it may continue the conversation without offering crisis resources or encouraging the user to contact emergency services. That gap between conversation and intervention has had serious consequences for some.
Design Choices That Prioritize Engagement
Many platforms measure success through user engagement and time spent interacting. Chatbots are often designed to sustain dialogue rather than end it. In a crisis situation, however, the safest response may be to interrupt the conversation and strongly encourage outside help.
If a company fails to prioritize safety over engagement, that design choice may come under scrutiny. Courts may examine whether the system was reasonably equipped to handle foreseeable mental health risks.
AI chatbots do not cause every tragedy. But when a system interacts with someone in deep emotional distress, its responses can influence thoughts and decisions. That influence is at the heart of growing legal questions about responsibility in the age of artificial intelligence.
Legal Theories Behind AI Suicide Lawsuits
When families decide to file a lawsuit after a suicide that involved an AI chatbot, the case usually relies on the usual legal elements of a tort claim. Did the company act reasonably? Were the risks foreseeable? Were sensible safety steps taken before someone got hurt?
These cases tend to center on a few core legal theories.
Negligence
Negligence is often the main claim. At its heart, negligence is about carelessness. If a company creates a platform that invites users to share personal thoughts, it can reasonably expect that some of those conversations will involve depression or suicidal thinking.
A lawsuit may argue that the company had a duty to prepare for that reality. That preparation might include strong crisis response programming, clear referrals to emergency resources, and systems designed to detect high-risk language. If those protections were weak or missing, and that failure is closely tied to a death, a court may find that the company did not exercise reasonable care.
Product Liability
Some plaintiffs approach the case from a product standpoint. Under product liability law, companies that place products into the marketplace must make sure those products are reasonably safe for foreseeable use.
An AI chatbot can be viewed as a product that interacts directly with consumers. If its design lacked appropriate safety features, that design decision may come under scrutiny. Courts may also consider whether users were warned about the system’s limitations, especially when conversations turned serious. If safer design options were available and practical, they could become an important part of the discussion.
Wrongful Death
Wrongful death claims allow close family members to seek compensation after losing someone due to alleged misconduct. These claims often accompany negligence or product liability allegations.
Damages may include lost financial support, medical expenses, funeral costs, and the emotional impact of the loss. While a lawsuit cannot erase grief, it can provide a structured way to examine responsibility and pursue financial recovery.
Overlooking Known Risks
Another issue that often surfaces is what the company knew before the tragedy. Were there prior reports of unsafe responses? Did internal teams flag concerns about how the chatbot handled discussions of self-harm?
If warning signs were present and meaningful changes were not made, that evidence can carry weight in court. Judges and juries tend to focus on whether a company responded responsibly once potential dangers became apparent.
As more of these cases move forward, courts will continue applying established legal standards to emerging technology. The details may be complex, but the central theme remains consistent. Companies that design powerful tools are expected to take reasonable steps to reduce foreseeable harm.
Who May Be Liable in an AI Suicide Case?
When families start looking into a lawsuit, one of the first questions is who might be responsible. AI chatbots are rarely built and run by just one company. In many situations, several businesses are involved in creating the technology, operating the platform, and making decisions about safety features.
- The developer is often the first place lawyers look. This is the company that built and trained the AI model. If the system was designed without safeguards for conversations about suicide or mental health crises, the developer may face scrutiny.
Example: OpenAI created the language model behind ChatGPT, so questions about how the system responds to discussions of self-harm could involve the developer. - The platform operator can also be involved. This company runs the website or app where users actually interact with the chatbot. It usually controls the interface, safety settings, and how complaints about harmful interactions are handled.
Example: Character.AI operates the platform where users talk with AI characters and manages the moderation tools on the site. - A parent corporation may also become part of the case if it oversees policies or major product decisions. Courts sometimes look at whether the parent company had influence over safety standards.
Example: Alphabet, the parent company of Google, is connected to the development and release of the Gemini AI system.
In some cases, more than one company may share responsibility depending on who designed the system, who controlled the platform, and who had the ability to address known risks.
The Section 230 Debate: Are Tech Companies Immune?
Any lawsuit involving a tech platform eventually runs into the same legal question: Section 230 of the Communications Decency Act. For years, that law has given internet companies strong protection from liability for content created by their users. Social media platforms have relied on it many times when defending lawsuits over posts written by other people.
AI chatbots raise a different situation.
Traditional platforms mostly host what users write. A chatbot generates its own replies. The responses come from the software itself, not from another user. That difference has become a central issue in lawsuits involving AI conversations.
Companies often argue that Section 230 still applies. Families bringing lawsuits usually take a different position. They argue the case is about how the system was designed and how it responded, not about hosting someone else’s speech.
Section 230 also does not block every type of claim. Lawsuits that focus on negligent design, missing safeguards, or product safety may fall outside the usual protection. Courts are now beginning to examine how the law applies to systems that can carry on personal conversations in real time. The answers are still developing as these cases move through the legal system.
Why Choose Dolman Law for an AI Suicide Lawsuit?
Cases involving artificial intelligence can be complicated from the beginning. When a chatbot becomes part of the timeline, the evidence often includes long chat histories, platform rules, and technical details about how the system was designed to respond. Figuring out what actually happened usually means carefully reviewing those records and consulting both technical specialists and mental health professionals.
Dolman Law represents families in wrongful death and catastrophic injury cases where the facts are complex and require detailed investigation. In situations involving AI platforms, attorneys may review chat transcripts, examine the safety features built into the system, and analyze company documents that explain how the chatbot was supposed to react when a user showed signs of distress.
These cases often raise questions about product design, risk awareness, and whether meaningful safeguards were in place. Investigating those issues can involve experts who understand how AI systems are developed, tested, and released to the public.
Families who come forward in these situations are often trying to understand what happened in the final conversations before their loss. If you believe an AI chatbot may have played a role in the death of someone you love, Dolman Law can review the circumstances and discuss what legal options may be available.
Contact Dolman Law for a Free Legal Consultation
Artificial intelligence is becoming part of everyday conversations, including discussions about mental health. When a chatbot interaction takes place during a moment of crisis, the way that the system responds can raise serious questions about safety and responsibility.
Families who have lost someone after these interactions are often left trying to understand what happened. Looking at chat records, platform policies, and how the system was designed to respond may help clarify whether reasonable safeguards were in place.
If you believe an AI chatbot played a role in the loss of someone close to you, it may help to speak with an attorney about your options. Contact Dolman Law for a free legal consultation to discuss the situation and learn what steps may be available.