An Arkansas federal court recently issued a preliminary injunction barring authorities in the Natural State from enforcing a speech-restrictive statute called Act 901. Among other things, it prohibits social media platforms from using algorithms they know or reasonably should know will cause “a user to: (1) purchase a controlled substance; (2) develop an eating disorder; (3) commit or attempt to commit suicide; or (4) develop or sustain an addiction to the social media platform[s].”
Chief US District Judge Timothy L. Brooks’ ruling in NetChoice v. Griffin marks yet another victory for NetChoice in its seemingly ceaseless battle against state laws that curb the First Amendment speech rights of two groups—users (to express and receive lawful content) and platforms (to exercise editorial discretion and moderate content without government interference). Brooks’ decision also offers several constitutional lessons for lawmakers about such measures; two are addressed below.
Targeting Harmful Results Doesn’t Make a Law Content Neutral. Under numerous Supreme Court rulings, judges apply the rigorous strict scrutiny test to laws that restrict lawful content (speech not falling into an unprotected category of expression such as obscenity or true threats) about some subjects but not others. Such content-based laws target “particular speech because of the topic discussed or the idea or message expressed,” such as a law that regulates violent video games but not other media.
Strict scrutiny requires the government to prove that it has a compelling interest (one of the highest order) to justify a content-based law and that the law is so narrowly tailored (so precisely drafted and confined in scope) that it restricts no more speech than is absolutely necessary to serve that interest. It’s a difficult two-part benchmark to clear; the Supreme Court recently observed it has “held only once that a law triggered but satisfied strict scrutiny.”
Lawmakers therefore would much rather draft content-neutral laws—ones “agnostic as to content”—because they would be analyzed under the more deferential, government-friendly intermediate scrutiny test. Here, the government only needs to demonstrate a substantial, important, or significant regulatory interest (one not as high as compelling). Additionally, a content-neutral statute doesn’t need to be as narrowly confined; it will pass intermediate scrutiny if it doesn’t “burden substantially more speech than is necessary.”
Arkansas argued its statute is content neutral because “it imposes liability on platforms based on the result [emphasis added] caused by a design, algorithm, or feature without regard to the particular content that design, algorithm, or feature causes to be displayed.” As described above, Act 901 bars platforms from using algorithms that would result in—would cause—a user’s controlled-substance purchase, eating disorder development, suicide attempt/commission, or addiction to social media.
Judge Brooks correctly concluded the law is content based—at least regarding the first three prohibited results—because “certain types of content . . . are known to be associated with certain results.” For example, he noted that YouTube knows that displaying certain videos in response to user searches for “how to tie a noose” creates a risk of “causing the user to commit or attempt to commit suicide.” Act 901 thus would prohibit YouTube and other platforms from using algorithms that push any noose-tying content to users, but it wouldn’t ban algorithms serving benign content. (YouTube, in fact, already voluntarily exercises its First Amendment-protected editorial freedom by self-policing its response to the noose-tying query.) In sum, statutorily targeting behavioral results won’t always sidestep strict scrutiny.
Protecting Some People from Harm Doesn’t Justify Restricting Access for All to Lawful Speech. Judge Brooks assumed Arkansas has a compelling interest in preventing illegal drug use, eating disorders, and suicide. Nonetheless, the statute’s language targeting those three harms fails strict scrutiny because it isn’t narrowly tailored in scope; indeed, it’s “substantially overinclusive.” Brooks explained that the law
imposes liability on platforms for disseminating content in a way that the platform “should have known” would cause any [emphasis in original] Arkansas user to purchase a controlled substance, develop an eating disorder, or attempt suicide—even if the vast majority [emphasis added] of Arkansas users exposed to that content or dissemination method would not respond in those manners. By imposing liability on a platform any time the protected speech by or on that platform results in a specified “harm” to a single Arkansas viewer that the platform should have anticipated, [the statute] impermissibly limits the online posting and promotion of protected speech that is not harmful to most viewers.
In short, Arkansas’s statute is disproportionate. It amounts to legislative overkill, barring the algorithmic delivery to all users of First Amendment-protected content that’s “not harmful to most”—say, certain weight-loss information or “thinspo” imagery—to safeguard a “subset of particularly susceptible users.” The Supreme Court has explained that such overinclusive regulatory trade-offs and free-speech compromises “burn the house to roast the pig.”