A federal court recently blocked Colorado from enforcing part of a new law that compels social media platforms “to provide non-commercial disclosures to minors about the alleged health impacts of using their platforms.” In issuing a preliminary injunction in NetChoice v. Weiser, Senior US District Judge William Martinez concluded the measure would likely fail the rigorous strict scrutiny standard of review because it wasn’t narrowly tailored to serve the state’s interest in protecting minors. He determined that NetChoice was “substantially likely to succeed on the merits” of its complaint.
NetChoice’s triumph pivots on the First Amendment right not to speak—a right not to be forced by the government to voice contested, controversial, or disagreeable views or information. The US Supreme Court has observed that “some of this Court’s leading First Amendment precedents have established the principle that freedom of speech prohibits the government from telling people what they must say.” It’s an autonomy principle that protects not only people, but also entities including crisis pregnancy centers, print newspapers, and social media platforms.
Martinez’s opinion holds at least three lessons for lawmakers regarding forcing platforms to warn minors about alleged dangers of their services. First, such content-based, compelled-speech measures that commandeer platforms’ communication channels to deliver government-desired statements that the platforms dispute will be closely scrutinized rather than deferentially reviewed. Second, the Supreme Court’s 2025 ruling in Free Speech Coalition v. Paxton doesn’t automatically exempt a law from strict scrutiny simply because the law supposedly protects minors online. Third, states that want to educate or warn minors about social media usage should do it themselves or provide meaningful carrots for platforms to do it rather than wield compelled-speech sticks against them.
The Contested Section. As encapsulated by Martinez, Section 4 of the statute requires covered platforms—Facebook, Reddit, Snapchat, and X, among others—to “convey to minor users Colorado’s belief that excessive use of social media may be risky to their health and well-being.” This can be done in different ways, but all require platforms to draw on “data” for such warnings found in “peer-reviewed scholarly articles” or in a state-created “resource bank.” As Martinez wrote, platforms must “provide minors with data-based information about the impacts of social media use on their mental and physical health.”
Standard of Review. Martinez’s selection of strict scrutiny to evaluate the law’s constitutionality is—by itself—a First Amendment victory. Platforms now confront a wave of compelled-speech legislation that forces them to convey government-mandated messages. These measures often require platforms to (1) provide warnings directly to users, or (2) explain to the government—as a New York law does—whether and how they moderate varieties of lawful content such as hate speech.
For instance, California Governor Gavin Newsom recently signed into law Assembly Bill 56. It requires platforms to display a “black box warning” about “significant mental health harms” ostensibly associated with social media.
Strict scrutiny makes it hard for such laws to pass First Amendment muster. The government must prove (1) that it has a compelling interest (one of the highest order) justifying the law, and (2) that there are no alternative means of serving that interest that impinge less on platforms’ expressive rights.
Thus, even if Colorado has a compelling interest in alerting minors to the supposed “risks of excessive social media use” (as Colorado asserted and Martinez assumed arguendo), the law likely fails strict scrutiny because there are alternative methods of educating minors that don’t impinge on platforms’ expressive rights. As Martinez noted, “Colorado could have incentivized social media companies to voluntarily provide these disclosures to their minor users, or it could have elected to provide minors with these disclosures itself.”
Martinez chose strict scrutiny over the more government-friendly intermediate scrutiny test and the even more relaxed, easy-to-satisfy variation of rational basis review the Supreme Court adopted in Zauderer v. Office of Disciplinary Counsel. Zauderer lets the government compel businesses to disclose purely factual, uncontroversial information in advertisements to prevent consumer deception. Colorado Attorney General Philip Weiser argued the law fit Zauderer’s framework; NetChoice contended strict scrutiny applied.
Martinez rejected both the intermediate scrutiny and Zauderer tests largely because the speech Colorado compels isn’t “commercial.” It isn’t an advertisement, it doesn’t propose a business transaction, and platforms have “no economic motivation” to convey it. Furthermore, the compelled information isn’t purely factual and uncontroversial. Instead, it constitutes “opinions” that NetChoice says are “the subject of ongoing and vigorous debate.”
Martinez also leaned on the Supreme Court’s 2024 decision in Moody v. NetChoice for the proposition that platforms’ “content moderation is expressive speech in and of itself.” Citing Moody, Martinez observed that “like traditional media, a social media platform is entitled to heightened First Amendment protection where it is engaged in expressive activity.”
In December, Colorado appealed Martinez’s decision to the Tenth Circuit. Martinez’s ruling, however, is sound and shouldn’t be disturbed.