Regulating Social Media Safety: Not Just Complicated but Complex

By Bronwyn Howell

February 27, 2026

Over the past hundred years, product safety regulation has progressed from fragmented, reactive rules toward coordinated, risk-based regimes grounded in scientific assessment and consumer protection principles.

Early statutes focused on misrepresentation and gross harms rather than systematic ex ante safety requirements, leaving large domains (including toys, appliances, and chemicals) either unregulated or governed by common-law negligence and warranty doctrines.

From the 1950s onward, however, rising consumer activism galvanized the adoption of risk-based regulatory strategies. These included imposing no-fault liability on producers for damage caused by defects, reinforcing incentives to internalize safety costs, and encouraging compliance with predefined, scientifically determined standards.

The assumption was that primary responsibility for safety lay with manufacturers, who had a duty both to engineer their products so they performed within predefined safety margins (that is, risks were managed) and to comply with information and transparency expectations. Consumers were told what the risks were, how to use the products within the predetermined safety margins, and which uses clearly fell outside those bounds. If harm occurred, the manufacturer was presumed liable unless it could demonstrate that the harm had occurred despite its taking all regulated (or common law–determined) and reasonable steps to prevent it.

Toy safety exemplifies this concept: a toy manufacturer is liable if its toy causes harm unless it can prove it took all reasonable steps in the toy’s design and manufacture to prevent the harms within its control and warned about the predictable harms it could not control, such as a customer giving a toy with small removable parts to a child under the specified age, to whom those parts could, and did, cause harm.

The path dependence of this regulatory journey is evident in the approaches being pursued in the name of regulating social media safety (and especially in protecting vulnerable children from perceived harms). One widely held view is that legislation should hold social media platform providers liable by default for harms caused by their “product” (the platform). This, it is argued, would give platform operators incentives to design their platforms with safety in mind and to warn users, or in some cases even ban them, to prevent harms from occurring.

However, social media platform operators are resisting such regulation because, unlike with toys, the nexus of design, operation, and harm in these digital services does not follow the simple physical chain of causality that underpins safety regulation for physical goods. This is not obfuscation or an attempt to dodge accountability for financial reasons; it is because the nexus of social media platform existence, usage, and harm is complex, whereas the world of toy safety is merely complicated.

A helpful way of thinking about the dilemma is the Cynefin framework, which distinguishes between complicated and complex systems. In this framework, a complicated environment is one in which, although many different parts may be involved, the outcomes of their interactions are reliable, predictable, known, and repeatable. Loosely speaking, this world can be reliably modeled using engineering-type mathematical and physiological models. The harm caused by ingesting a small part dislodged from a toy would be approximately the same no matter which child (under a given age) ingested it. Warnings against giving the toy to children under a certain age apply equally to all children.

In Cynefin’s complex world, however, cause-and-effect relationships are not so easily discerned or so equally applicable. The many moving parts in these environments make outcomes far less predictable. It cannot be assumed that otherwise-identical encounters with a platform will play out identically, because many of the factors leading to harm are not inherent in the platform itself; they arise from idiosyncratic circumstances that may not be known to either the platform operator or the users themselves before the encounter occurs.

These factors are also frequently psychological rather than physical. While physiology is broadly similar across all humans, psychology is highly variable. Some individuals may be more susceptible to harm from specific media content than others (e.g., food content may trigger a reaction in people with eating disorders while having no effect on those without such a predisposition). The platform operator cannot reasonably know ex ante who is susceptible. The harm is coproduced by the user and the platform rather than being inherent in the platform itself. Blanket bans are unhelpful in these cases because banning content to avert harm to one group denies its benefits to the groups that are not susceptible.

Rules and processes designed for a complicated world cannot be expected to succeed in a complex one. Product safety regimes that assign all risk to producers are unsuited to a service world where coproduction prevails. We need a new paradigm for social media safety, one built on shared, not unilateral, responsibility.