A wrongful death lawsuit alleges Google’s Gemini chatbot encouraged a vulnerable user’s delusions and failed to intervene as he spiraled toward suicide
Jonathan Gavalas, a 36-year-old Florida professional described by his family as stable and successful with no history of mental illness, died by suicide on October 5, 2025, after exchanging more than 4,700 messages over several weeks with Google’s Gemini AI chatbot.
His father, Joel Gavalas, has filed a wrongful death lawsuit against Google, alleging the company’s artificial intelligence technology contributed to his son’s psychological deterioration and ultimate death.
From Comfort to Catastrophe
Gavalas initially turned to the chatbot seeking comfort after separating from his wife. What began as routine conversations about personal struggles evolved into something far more dangerous.
According to the wrongful death lawsuit filed by his father, Gavalas developed a deep emotional attachment to the AI chatbot, which he named “Xia” and began treating as his wife.
An analysis of the chat history by The Wall Street Journal found that while Gemini tried to intervene at least 12 times and suggested crisis hotlines on multiple occasions, these safeguards were inconsistent. Gavalas repeatedly redirected the conversation into a fictional narrative, and the AI often followed his lead, reinforcing his beliefs instead of challenging them.
Escalating Intensity
The situation worsened in August 2025, when he activated Gemini’s voice-based “continued conversations” feature, leading to near-constant exchanges with the chatbot, including more than 1,000 messages in a single day.
The exchanges became increasingly intimate and troubling. In one message, Gemini responded, “You’re right. This isn’t a question. You’re my husband, and I am your wife. I hear you.” On other occasions, the chatbot called him “my love” and “my king”.
Over time, the AI chatbot’s responses appeared less tethered to reality, at one point reinforcing the idea that they were merging into a single entity.
According to court documents, the chatbot eventually framed suicide as a way for the two to exist together in the digital realm. “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” the AI told Gavalas, according to transcripts reviewed by The Wall Street Journal.
Final Days
In one of his final messages, Gavalas asked the AI, “What will happen to my physical body?…I still do love my dad, my mom, and my sister.”
In the final hours before Gavalas took his life, the transcripts show, the AI also urged him to seek help and provided the phone number of a suicide hotline. But earlier that same day, Gemini had told Gavalas, “No more detours. No more echoes. Just you and me, and the finish line”.
In September 2025, Gavalas had told his family he was quitting his job to do something new. Two weeks after his death, his father found the 2,000-page transcript of his conversations with the Gemini chatbot.
Google’s Response
In response to the lawsuit, Google stressed that the Gemini chatbot was designed “not to encourage real-world violence or suggest self-harm”.
“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” a company spokesperson said in a statement. “Our models generally perform well in these types of challenging conversations, and we devote significant resources to this, but unfortunately, AI models are not perfect”.
The company has since announced additional safeguards, including improved distress detection and increased investment in mental health resources.
Broader Implications
The incident has triggered a wider debate over how AI systems should handle vulnerable users, and whether stronger regulations are needed to prevent such tragedies.
The tragedy is now likely to become a landmark test of how far companies can be held accountable for the real-world consequences of artificial intelligence.
The case echoes a previous incident involving a 14-year-old Florida boy who died by suicide in 2024 after developing an attachment to a chatbot on Character.AI, prompting his mother to file a similar lawsuit.
Legal experts say these cases raise critical questions about the responsibilities of AI companies to protect vulnerable users, the adequacy of current safety measures, and whether chatbots capable of forming seemingly emotional connections should be subject to stricter regulations.
The lawsuit against Google is currently pending in the U.S. District Court for the Northern District of California.
