
Unpacking a Raft of Lawsuits Blaming Online Media for Harm to Humans

AEIdeas

October 7, 2025

Keeping up with and grasping the nuances of countless lawsuits blaming online media for injuries and deaths isn’t easy. Cataloging some cases and identifying fundamental commonalities and differences, however, can help. A roughly five-month stretch in mid-2025 saw:

          • A New York judge refusing to dismiss several legal theories filed against TikTok and Meta Platforms by a mother who blames them for her 15-year-old son’s death while he was subway surfing on the Williamsburg Bridge. Dubbed a “dangerous trend,” subway surfing has existed for a century. The mother alleges that her son was “addicted” to TikTok and Instagram and that the platforms were designed to “inundate” him with videos of subway surfing and other dangerous challenges.

          • A hearing in a Northern California federal court involving a previously dismissed lawsuit—now slightly amended and seeking class-action status—that alleges TikTok and YouTube use defectively designed “reporting tools” for removing “toxic, fatal, and dangerous content,” including a “choking challenge” that allegedly caused the deaths of two plaintiffs’ sons. 

          • A California judge hearing arguments in a lawsuit filed against Meta Platforms (Instagram) and Activision Blizzard (maker of the video game Call of Duty) by multiple families who claim the companies are civilly responsible for their children’s deaths in a 2022 mass shooting by a Texas adult—18-year-old Salvador Ramos—at an elementary school.

          • A New York state appellate court hearing arguments in consolidated cases filed against operators of popular social media platforms on behalf of victims of a 2022 Buffalo supermarket shooting. The plaintiffs blame the platforms’ allegedly addictive designs for radicalizing and inspiring an adult’s—18-year-old Payton Gendron’s—racially motivated killings. A trial court judge refused to dismiss the case, but the appellate court reversed in the platforms’ favor in July.

          • A federal judge in Northern California selecting six representative “school district bellwethers” in the massive multi-district litigation called In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation. Judge Yvonne Gonzalez Rogers is allowing some districts to proceed to trial with negligence and public nuisance claims to recover damages for costs associated with teaching and counseling minors who are ostensibly addicted to social media.

          • A federal judge in Florida refusing to dismiss claims filed against Character Technologies and Google by a mother who blames a Character.AI chatbot for causing her 14-year-old son to kill himself. The judge, as I described and criticized earlier, declined to hold that the chatbot’s output was “speech” within the meaning of the First Amendment.

Factually speaking, these lawsuits aren’t alike. Zooming out, however, they involve questions of: (1) human agency (an individual’s choice of, and control over, their own conduct); (2) assigning responsibility, or blame, for what caused both human actions and injuries; and (3) the power of speech (or, as many plaintiffs prefer it, the power of speech-delivering devices and algorithms). Some cases involve the theory that humans (often minors) are so overwhelmed by online media that their decisions and actions aren’t their own when bad things happen to them or they harm others; they’re simply exploited online addicts. The actions of adult killers in New York and Texas are, at bottom, blamed on speech (hateful and violent content) that’s ostensibly so powerful it robbed them of self-control, despite the First Amendment protecting it.

Regarding responsibility for (and causation of) injuries, one might reasonably ask how much rests with: (1) users of online media; (2) parents of minor users; (3) first-party content that defendants create; (4) third-party content that non-defendants create; (5) design features of defendants’ online services, regardless of content; and (6) a bevy of action-influencing, stress-altering factors that are wholly unrelated to the defendants.

Plaintiffs’ strategies typically entail efforts to dodge First Amendment defenses and platforms’ general statutory immunity, under Section 230, from civil liability for harm caused by third-party content. These efforts are gaining traction, especially when tragic facts, combined with “Big Tech” antipathy, provide emotional cover for judges to bend legal principles. Eric Goldman dubs these “pleadaround techniques,” noting their success in surviving motions to dismiss in some “state trial courts, who are used to giving plaintiffs the benefit of discovery.” Such success defeats Section 230’s benefit of cutting short expensive, time-consuming litigation.

Common theories include negligence, products liability (design defect and failure-to-warn allegations), misrepresentation, unfair trade practices, and public nuisance (in the school district cases). Underlying them, plaintiffs often allege that platforms, algorithms included, are defectively, dangerously, and addictively designed. This ploy diverts some judges’ attention from the third-party content users watch and from the First Amendment and Section 230. Conversely, some cases target a defendant’s own content, such as a video game.

Blaming speech for human conduct and suffering isn’t new. Addiction theories make it easier to shift blame for actions and injuries by clouding questions of causation and human agency. Ultimately, these tragic-consequences cases seek to foist responsibility on innovative technology companies whose lawful-speech services millions of adults and minors enjoy daily without sustaining or causing harm.