Is Social Media Age-Gating Being Regulated Proportionately?

By Bronwyn Howell

March 27, 2026

Regulating social media platforms is complex, not least because the “social” part deals with people and their characteristics, preferences, wants, and wishes. Any intervention must be cognizant of these differences. Regulating to protect one subset of people, such as filtering out content relating to eating disorders to protect individuals susceptible to being triggered by it, imposes compromises on other subsets: the vast majority of the population, for whom that content causes no harm, can no longer view it, even though they may have derived benefit from doing so. Nor can the provider of the blocked content realize the benefits of supplying it.

When policymakers place legislative restrictions on social media platforms, it is incumbent on them to consider the trade-offs imposed by their choices. This is especially important when the regulations restrict the fundamental rights of the people concerned. The content moderation example above invokes the fundamental right to freedom of expression, balanced against society’s obligation to protect vulnerable citizens from harm. In the United States, there has been much discussion about the extent to which, and under what circumstances, it is justified or even legal for legislatures to impose content moderation rules.

But what guidance is there for policymakers when it comes to these trade-offs?

European Union law provides some insights, due to the legally binding obligation on member institutions to comply with the EU Charter of Fundamental Rights in their actions and legislation. This obligation has given rise to the twin principles of necessity and proportionality. The necessity principle asks whether there is a less restrictive but similarly effective alternative to the proposed legislation that would achieve the desired objective. The proportionality principle asks whether a proposed legislative obligation pursues a legitimate aim (e.g., protecting children), is suitable (would it actually reduce harm?), is necessary (per the necessity principle), and is strictly proportionate, or balanced (would the benefits of the law outweigh any harms it may cause, including to other rights, and would the overall burden on the regulated and the wider population not be excessive relative to the goal?).

The current worldwide rush to regulate access to social media by children under a certain age—such as the “Canberra contagion” of age-gating regulation—would appear to be an excellent candidate for reasoned application of the proportionality principle (and the necessity principle it incorporates). While few would doubt the legitimacy of the goal of reducing harm to children, it’s not clear that all such rules are actually effective (Australian Prime Minister Anthony Albanese has admitted that his country’s laws will not stop many instances of under-16 online access or harm from occurring). Regarding necessity: Is Australia’s ban on individuals under age 16 holding social media accounts the best (or least intrusive) way of achieving the harm reduction objective? For each social media platform, were other alternatives evaluated? And against which criteria?

The most important test would appear to be that of strict proportionality, in particular the trade-off between the rights of the different groups affected by the intervention. Restricting social media access by age weighs the goal of keeping children safe against the freedom of expression rights of children and against the costs imposed on adults, who must prove they are not children. Depending on the tools selected for age verification, the anticipated benefits must also be balanced against harms to the privacy of those subjected to the checks.

Theoretically, such trade-off exercises are very appealing and should lead to a more considered rulemaking process. In practice, however, how does one assess effects on principles that cannot be measured and that humans are reluctant to weigh against each other? Often both principles are valued, yet promoting one inevitably diminishes the other. Furthermore, a principled process would counter the temptation to regulate social media platforms in response to political pressure regardless of the consequences, or for political rather than safety purposes, instead of basing such decisions on reasoned policy analysis. For example, in its recent consideration of Australia-like age-gating laws, the New Zealand parliamentary select committee failed to quantify the financial impact of regulation on the parties that would be affected or to take into account how their rights would be traded off. The committee’s response to the proportionality brief in its report states, “The majority of us consider that this intervention is proportionate to the serious nature of the harm it would mitigate.”

It’s not easy to undertake a principled proportionality assessment of social media age-gating (and other social media regulations), but difficulty is no justification for forgoing one altogether. Evidence-based policymaking demands it. Tools do exist to assist with assessing the trade-offs between complex principles like privacy and protection, and they have been applied in other policy contexts.

We should be expecting nothing less from our policymakers in this important area.