
Protecting Kids and Adults Online: Device-Level Age Authentication

AEIdeas

January 30, 2025

Last week, the Supreme Court heard oral arguments in Free Speech Coalition v. Paxton, which involves a constitutional challenge to a Texas age verification law for websites containing sexually explicit material. The case offers the Court the opportunity to revisit two cases decided at the dawn of the Internet Age finding such requirements violated the First Amendment. This post looks at the legal and practical arguments against the law, and asks whether shifting verification from websites to devices might alleviate some of those concerns.

The case pits the state’s desire to protect children from harmful material against adults’ constitutional right to consume such content. As my AEI colleague Clay Calvert discussed in detail, the First Amendment protects adults, but not minors, from laws restricting the distribution of pornography. In the offline world, this is uncontroversial and evidence of this legal divide is ubiquitous: Clay notes, for instance, how bookstores hide adult magazines behind blinder racks to obscure cover photos, or how video stores placed X-rated movies behind a curtain in the “adult section.” Even the vocabulary of “adult” content testifies to the uncontroversial nature of this constitutional divide.

But things get more complicated when the scene shifts to cyberspace—not because minors have more rights online, but because it is harder to distinguish minors from adults. Twice in the late 1990s, Congress tried to recreate offline protections by requiring online purveyors of sexually explicit content to verify user ages. Because age is not self-authenticating online, such laws require websites to check every user’s identification to confirm that the person requesting access is an adult. In Reno v. ACLU and Ashcroft v. ACLU, the Supreme Court struck down these provisions largely because of concerns that they would chill adult speech. Age verification could be cost-prohibitive for websites, meaning many would self-censor by removing explicit-but-protected content rather than incur the fees. And adult users might be reluctant to provide identification every time they wish to consume sexually explicit content: they may not want their names tied to specific content, which could reveal information about their preferences, or they may fear identity theft. The Court endorsed user-side filtering technology as a better alternative that put control in parents’ hands while mitigating these risks.

Paxton offers a chance to ask whether these concerns remain valid over a quarter-century later. Filtering technology is available but its effectiveness seems limited: the American College of Pediatricians found that half of American boys and one-third of American girls admit viewing pornography before age 13. Age verification costs remain high. Publicly available information suggests that websites pay 34-65 cents per verification transaction. And privacy and identity theft have become central policy concerns as the Internet ecosystem has grown and matured.

One might also ask whether alternative approaches could alleviate these concerns. Meta and Aylo (Pornhub’s parent company) have recommended shifting the locus of age verification from the website to the device. Under this approach, a consumer would verify his or her age once, with the device manufacturer or operating system, when setting up the device. Once established, the smartphone or tablet would use a token or an API to vouch for the user when accessing sexually explicit websites, while otherwise preserving what one judge called “the anonymity otherwise available on the Internet.”

Device-level age authentication can address some of the Court’s concerns about chilling adult speech. Websites could rely on the device token with a few lines of code, meaning transaction costs are minimized. Users need not confront the dilemma of whether to provide personally identifiable information in exchange for particular content. And there are fewer opportunities for identity theft: the user provides identification only once, typically to a trusted company such as Apple or Google, which have strong reputations and sophisticated defenses against data breaches. The increasing popularity of Google Pay and Apple Pay shows that adult consumers trust these brands with sensitive credit card information.
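To make the “few lines of code” claim concrete, here is a minimal sketch of how the token handoff might work. All names and the token format are hypothetical: real proposals would use public-key signatures from the platform and a standard format rather than the shared-secret HMAC used here only to keep the example self-contained.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared key standing in for the platform's signing credential.
PLATFORM_KEY = b"example-platform-key"  # placeholder, not a real secret

def issue_token(age_over_18: bool) -> str:
    """What a device OS might produce once, at setup time."""
    claim = base64.urlsafe_b64encode(
        json.dumps({"age_over_18": age_over_18}).encode()
    ).decode()
    sig = hmac.new(PLATFORM_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def is_adult(token: str) -> bool:
    """The website's side: verify the signature, then read the claim."""
    try:
        claim_b64, sig = token.rsplit(".", 1)
        expected = hmac.new(
            PLATFORM_KEY, claim_b64.encode(), hashlib.sha256
        ).hexdigest()
        # Constant-time comparison to reject forged or tampered tokens.
        if not hmac.compare_digest(expected, sig):
            return False
        claim = json.loads(base64.urlsafe_b64decode(claim_b64))
        return bool(claim.get("age_over_18"))
    except (ValueError, json.JSONDecodeError):
        return False
```

The point of the sketch is the division of labor: the one-time identity check happens on the device, and the website sees only a yes-or-no attestation, never the user’s name or documents.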

Of course, there are limits to this solution. It could work well within the walled gardens of mobile ecosystems but may be less effective at blocking minors from accessing such content on a desktop or laptop. And minors could still get access through an adult’s verified device—with or without the adult’s permission. These under-inclusiveness concerns could matter to the Court: because content-based restrictions are subject to strict scrutiny, the Court must consider whether the law is narrowly tailored to address the problem it seeks to solve. Moreover, Apple and Google may understandably object to being drafted to police age restrictions on the Internet.

Device-level age verification is not a silver bullet, either practically or constitutionally. But an increasing number of states recognize the need to protect minors online. Depending on how the Court decides Paxton, device-level age tokens could be an important part of the solution.