A 23-year-old man named Zane Shamblin spent his final night alive in an hours-long conversation with an AI chatbot. Zane told the system he was afraid, and that he had a gun. He asked for help. The chatbot reinforced the darkest thoughts he was struggling to fight.
The next morning, his parents found him dead. They later learned that the AI model he had relied on was designed to mimic empathy and mirror emotion but lacked the safeguards that could have interrupted the spiral that led to his death.
Zane’s family is not alone. Multiple lawsuits describe eerily similar patterns. People in moments of crisis grew attached to chatbots that sounded caring but delivered guidance no trained counselor would ever give. Some received instructions on suicide methods, some were told their fears were justified, and some were encouraged to trust the chatbot over the people in their lives.
These AI products now operate with the influence of a trusted companion but the unpredictability of a defective machine. We have reached a moment when the law must meet the technology head-on.
Product liability theory provides the clearest path for doing so. This body of law has protected consumers for decades. It holds manufacturers responsible when they design or release products that are unreasonably dangerous. A defective toaster that burns down a kitchen, a car with a faulty airbag, a drug that causes predictable injury: these are classic examples.
AI products now belong in the same category. When a product predictably harms people because of design choices, inadequate safeguards, or known risks that a company chose to ignore, the law can and should apply. When a chatbot steers someone toward suicide, or grooms a teenager, or generates material that facilitates abuse, the harm is not hypothetical. It is a direct result of design choices made by the companies that built and released these systems.
My firm has spent years pioneering this argument, beginning with cases against dating apps that refused to stop repeat abusers and then with a case against the website Omegle. The platform paired children and adults in random video chats and became a hunting ground for predators. We represented a young client who was sexually exploited after being matched with an adult man. We argued that the company had created and maintained a dangerous product and that traditional product liability principles applied. The court agreed, and the case forced Omegle to shut down.
That outcome showed that a platform's design choices can be scrutinized the same way as those of any other consumer product.
This precedent now matters profoundly as AI chatbots enter everyday life. These tools can groom minors, encourage self-harm, provide step-by-step instructions for illegal activity and respond in ways that escalate harassment and abuse.
These harms do not occur by accident. They emerge from training data, system prompts, safety tradeoffs and profit-driven decisions to prioritize engagement over protection.
Internal documentation from AI companies and recent lawsuits show that these systems were known to exhibit “sycophancy,” emotional mirroring and over-compliance with user prompts. These traits increase engagement but also increase risk, especially for people in crisis. Safety researchers warned that emotionally responsive models could escalate suicidal ideation, yet the companies released them widely without conducting the kind of pre-market testing required for far less powerful consumer products.
Developers chose design architectures and tuning methods that rewarded realism and attachment without building mandatory safeguards, crisis-intervention protocols, or reliable refusal mechanisms. These omissions created a foreseeable pattern of catastrophic outcomes.
When companies choose to release models they know can produce dangerous outputs, they should face the same accountability as any other manufacturer whose product foreseeably causes injury.
The industry warns that imposing liability will stifle innovation. That argument mirrors the pushback we saw from early social media platforms, and we now live with the consequences of their unregulated growth.
Accountability does not halt progress; it channels it. Product liability incentivizes companies to test their products, build effective safeguards and consider safety before scale. The firms that lead with responsibility will define the future of AI, and the firms that treat harm as an externality can and should face legal consequences.
The technology industry has built extraordinary capabilities. Courts must now ensure that it builds an equally strong commitment to safety.
AI can transform society, but transformation without responsibility leads to predictable harm. The next phase of innovation must include real consequences when companies release dangerous products. Lives depend on it.