News broke last month that settlements may soon arrive in several high-profile lawsuits, including Garcia v. Character Technologies and Montoya v. Character Technologies, that blame the companies and individuals behind or allegedly associated with Character.AI chatbots for causing minors to kill themselves. The defendants likely settling are Character Technologies (the company operating Character.AI), two Character.AI co-founders (Noam Shazeer and Daniel De Freitas Adiwarsana), Google, and Google’s parent company, Alphabet.
Character.AI and the Social Media Victims Law Center (SMVLC), a law firm representing the plaintiffs, issued a joint statement asserting that “Character.AI has taken innovative and decisive steps with regard to AI safety and teens, and will continue to champion these efforts and push others across the industry to adopt similar safety standards.”
That’s good news, but don’t expect lawsuits blaming conversational and companionship chatbots for allegedly provoking tragic actions that have harmed or killed both minors and adults to suddenly disappear. SMVLC filed seven lawsuits in California in November against OpenAI (the company behind ChatGPT) and its CEO, Sam Altman. Also in the mix are recent cases brought against those same defendants by the Edelson firm, including First County Bank v. OpenAI Foundation and Raine v. OpenAI.
Raine is brought by the parents of 16-year-old Adam Raine, who killed himself after ChatGPT allegedly “became Adam’s closest confidant,” validated “his most harmful and self-destructive thoughts,” and “actively worked to displace Adam’s connections with family and loved ones.” OpenAI’s answer disputes what caused Adam’s death, cites the minor’s preexisting problems, emphasizes the extensive safety measures the company takes, and points out that “ChatGPT’s guardrails caused it to repeatedly refuse to respond to many of Adam Raine’s queries regarding self-harm” and “repeatedly (more than 100 times) directed him to reach out to loved ones, trusted persons, crisis hotlines and other crisis resources in connection with his queries about self-harm.”
One reason the cases will keep coming is that numerous law firms are carving out practice areas for AI suicide lawsuits. As one firm puts it, “AI chatbots like Character.AI and ChatGPT allegedly engaged in predatory behavior and ignored suicide threats. If you or a loved one was harmed, contact us today.” The Tech Justice Law Project joined SMVLC in November in filing the seven lawsuits against OpenAI and Altman. Meetali Jain, the Project’s executive director, contends that “ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost.” States are also suing Character Technologies.
A Federal Trade Commission inquiry launched in September will keep pressure and press attention on conversational chatbots’ alleged dangers; the inquiry seeks “to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions” and “to limit the products’ use by and potential negative effects on children.” The American Psychological Association’s backing of the FTC’s inquiry may fuel research into so-called “AI psychosis” that, if ever officially recognized as a psychiatric disorder, could bolster plaintiffs’ claims. OpenAI notes that “mental health conversations [on ChatGPT] that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare. Because they are so uncommon, even small differences in how we measure them can have a significant impact on the numbers we report.”
Given these forces, the odds of more litigation are high. Makers of conversational chatbots should adopt numerous risk-mitigation measures, precisely as some companies already have. Attorney Andrew McArthur has explained:
Companies utilizing AI chatbots, non-playable characters, virtual assistants, or other similar products or services should carefully review their quality assurance programs, safety standards, data collection practices, and intellectual property policies to consider whether they have adequate safeguards in place to mitigate potential harm and ensure compliance with legal and regulatory obligations.
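The “adequate safeguards” McArthur describes can include technical controls placed in front of the model itself. Purely as an illustration, and not as a description of how Character.AI, OpenAI, or any other company actually builds its safety systems, the sketch below shows a hypothetical pre-model guardrail that screens a user’s message for self-harm signals and, when one is found, returns crisis resources instead of a generated reply. The phrase list, names, and routing are assumptions made for the example; production systems rely on trained classifiers and human review rather than keyword matching.

```python
# Illustrative sketch only: a hypothetical pre-model "guardrail" that screens a
# user's message for self-harm signals and substitutes a crisis-resource reply.
# The phrase list and routing are assumptions for the example, not any company's
# actual safety system, which would use trained classifiers, not keywords.

from dataclasses import dataclass

# Hypothetical indicator phrases for the example.
SELF_HARM_INDICATORS = (
    "kill myself",
    "end my life",
    "want to die",
    "hurt myself",
)

CRISIS_MESSAGE = (
    "I can't help with that, but you don't have to go through this alone. "
    "Please consider reaching out to a trusted person or a crisis line such as "
    "988 (the Suicide & Crisis Lifeline in the US)."
)

@dataclass
class GuardrailResult:
    flagged: bool          # True if the message tripped the safety check
    response: str | None   # Canned crisis response when flagged, else None

def screen_message(user_message: str) -> GuardrailResult:
    """Flag messages containing self-harm indicators before they reach the model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in SELF_HARM_INDICATORS):
        return GuardrailResult(flagged=True, response=CRISIS_MESSAGE)
    return GuardrailResult(flagged=False, response=None)

if __name__ == "__main__":
    result = screen_message("Lately I feel like I want to die.")
    # When flagged, the application would show CRISIS_MESSAGE rather than a model reply.
    print(result.flagged, result.response)
```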
Cases targeting conversational chatbots raise critical questions that settlements won’t resolve, including some issues also playing out in the social media addiction cases I’ve addressed. For example, First County Bank and Raine include causes of action based on strict product liability theories. There are fundamental disputes about whether chatbots constitute “products,” at least as that term is used within such theories, or whether they are “services” delivering speech to users. A second question, which must be answered case by case, involves causation: because many factors influence suicides, it can be difficult to attribute a particular death or harm to a chatbot.
The ultimate issue may be whether the First Amendment protects speech generated by chatbots. If it does, that would likely thwart most tort causes of action, including negligence claims, unless plaintiffs can satisfy the Supreme Court’s demanding test for unlawful incitement to violence. I’ve argued that the First Amendment safeguards chatbot output, as have others. However, US District Judge Anne Conway ruled in Garcia that—at the early, motion-to-dismiss stage—she was “not prepared to hold that Character A.I.’s output is speech.”