Article

Statutory Underinclusivity and Social Media Platforms: First Amendment Lessons for Lawmakers

By Clay Calvert

February 25, 2026

The theme song from H.R. Pufnstuf, a “fantastical” 1969 children’s television program, features the cryptic lyric “can’t do a little cause he can’t do enough.” Two recent federal court opinions—NetChoice v. Griffin and NetChoice v. Murrill—blocking enforcement of laws restricting access to First Amendment-protected speech on social media platforms offer constitutional lessons riffing on that lighthearted line.

Namely, lawmakers who do too little—who don’t do enough—to address a purported speech-caused problem may find their statutory handiwork deemed unconstitutionally underinclusive. I’ve previously addressed underinclusivity, but the December 2025 rulings in Griffin and Murrill shed new light on the doctrine.

Underinclusivity. Content-based statutes—ones regulating lawful speech about some ideas or subjects but not others—must be carefully drafted to pass the strict scrutiny standard of judicial review. While some statutes don’t survive strict scrutiny because they regulate too much speech (a less speech-restrictive alternative would’ve sufficed), others are fatally underinclusive.

The Supreme Court holds that “underinclusivity creates a First Amendment concern when the State regulates one aspect of a problem while declining to regulate a different aspect of the problem that affects its stated interest in a comparable way.” (emphasis in original). For instance, the Court has ruled that a sign code that placed “strict limits on temporary directional signs” but not on other signs was “hopelessly underinclusive” in serving a town’s asserted interest in aesthetics. The Court reasoned that:

Temporary directional signs are “no greater an eyesore” . . . than ideological or political ones. Yet the code allows unlimited proliferation of larger ideological signs while strictly limiting the number, size, and duration of smaller directional ones. The town cannot claim that placing strict limits on temporary directional signs is necessary to beautify the town while at the same time allowing unlimited numbers of other types of signs that create the same problem.

Similarly, the Court concluded in 2011 in Brown v. Entertainment Merchants Association that a California statute restricting minors’ access to violent video games was “wildly underinclusive” in serving California’s goal of preventing “harm to minors.” Justice Antonin Scalia explained that the statute was underinclusive for two reasons.

First, the effects of violent video games “on children’s feelings of aggression” are “indistinguishable from effects produced by other [unregulated] media.” This indicates impermissible discrimination against the video game industry: “California has singled out the purveyors of video games for disfavored treatment—at least when compared to booksellers, cartoonists, and movie producers—and has given no persuasive reason why.”

Second, California’s statute allowed minors to obtain supposedly harmful violent video games if they received parental or guardian permission. This carveout from the general ban on minors purchasing the games also made the law “seriously underinclusive.” Scalia reasoned that:

The California Legislature is perfectly willing to leave this dangerous, mind-altering material in the hands of children so long as one parent (or even an aunt or uncle) says it’s OK. . . . That is not how one addresses a serious social problem.

As I wrote elsewhere, underinclusive laws “fail to materially cure or solve whatever problem supposedly exists.” Underinclusivity often “plagues laws that single out one medium . . . for conveying content . . . but that leave unregulated and unlegislated other varieties of media . . . to transmit the same content.”

Griffin and Murrill. In NetChoice v. Griffin, Chief US District Judge Timothy Brooks concluded that part of an Arkansas law banning social media platforms from using algorithms to deliver content that might lead to substance abuse, eating disorders, or suicide was unconstitutionally underinclusive. He reasoned that:

13 Reasons Why, a TV show which graphically depicts a suicide, can remain on Netflix, but YouTube could be liable for showing a user clips of the same. Similarly, Netflix can glorify thin bodies and cast exclusively thin actresses, but Instagram may be liable if it promotes those actresses’ posts thereby causing a user to develop an eating disorder.

In short, Arkansas did not restrict the same types of speech—speech that supposedly might cause the same types of harm—when conveyed by vehicles other than the regulated social media platforms.

In NetChoice v. Murrill, US District Judge John deGravelles examined a Louisiana law imposing age-verification and parental-consent requirements before minors could access First Amendment-protected speech on social media platforms with at least five million account holders whose function is predominantly or exclusively socially interactive. He deemed the law underinclusive largely because a minor can freely access the same “potentially harmful content on a social media platform with 4,999,999 account holders or [on] any website—regardless of user-ship—where social interaction is . . . not the predominant function.” (emphasis in original). Furthermore, minors can access supposedly harmful content on regulated social media platforms with parental permission. To paraphrase Justice Scalia, such underinclusivity fails to remedy a supposedly serious problem.