Just over three months ago, Australia’s world-leading regulations attempting to ban social media use by under-16s came into force. The relevant regulator, the eSafety Commissioner, has released its first compliance report on the effectiveness of the Online Safety Amendment (Social Media Minimum Age) Act 2024. The report makes interesting reading, given the number of countries apparently considering whether to emulate the Australian endeavors.
Somewhat unsurprisingly, the eSafety Commissioner finds “progress” to be remarkably modest. Based on a survey of 898 parents and caregivers of children aged 8–15, conducted between January 19 and February 2, 2026, the commissioner reports that just under half of respondents said their children had held their own account on at least one of the banned platforms before the law came into force on December 10, 2025; by the survey period, that proportion had fallen only to 31.3 percent. If these survey findings are generalizable, the law has failed to remove access for around 70 percent of the targeted underage social media–using population. This is despite the commission reporting the closure of over 4.7 million accounts in the lead-up to the law taking effect: a striking figure, given that the Australian Bureau of Statistics puts the 10–14 age group at fewer than 900,000 people as of June 2024.
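The ~70 percent figure can be sanity-checked with simple arithmetic. The pre-ban share below is an assumption (the report's exact figure is described only as “just under half”):

```python
# Back-of-envelope check of the survey figures.
# Assumption: "just under half" is taken here as ~45%; the report's
# exact pre-ban percentage is not quoted in the article.
pre_ban_share = 0.45    # share of surveyed children with an account before Dec 10, 2025
post_ban_share = 0.313  # share still holding an account in the Jan-Feb 2026 survey

# Fraction of previously active underage users who retained access:
retained = post_ban_share / pre_ban_share
print(f"{retained:.0%}")  # prints "70%" under this assumption
```

A pre-ban share anywhere in the 43–48 percent range yields a retention figure between roughly 65 and 73 percent, consistent with the article's “around 70 percent.”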
Furthermore, the compliance report notes: “We have not observed a notable change in the number of cyberbullying and image-based abuse complaints involving age-restricted accounts across the platforms in January and February 2026 when compared to the same period in 2025.” While monthly figures are not available, the commissioner’s 2024–25 annual report states that in that year the agency “dealt with more than 840 complaints of cyberbullying against children and successfully intervened in more than 90% of cases.” While not all instances of child cyberbullying will be escalated into complaints, presumably the commissioner’s caseload captures those cases serious enough to prompt demonstrable parental concern. Yet even before the legislation came into force, this number seems small as a proportion of the population being protected.
The commissioner attributes the disappointingly modest evidence of success to insufficient compliance by the regulated social media firms with their context-determined (and commission-agreed) checks for ascertaining account holder age. The agency is pursuing further investigations into the activities of Facebook, Instagram, TikTok, YouTube, and Snapchat (five of the ten banned platforms, and coincidentally the only ones in the survey yielding enough responses for confident reporting), which face fines of up to A$49.5 million if found to be in breach. “Whether a platform has taken reasonable steps will include an assessment of the totality of the steps taken by a platform to comply with the [social media minimum age] obligation,” the commission says in its compliance report. “Steps taken cannot be evaluated in isolation. This is about systems and processes, not individual accounts.”
The commissioner considers that some platforms have not done enough to prevent children under 16 from having accounts. Platforms are required to use multiple methods of age verification, including age-banding, photo evaluation, and behavioral assessment, rather than relying solely on self-declared age at sign-up. However, all of these are either circumventable or prone to false positives and negatives, especially for individuals close to the 16-year threshold. The risk is that a platform may deny an account to someone who looks or behaves younger than 16, even though they are legally entitled to hold one, while at the same time enrolling mature-looking and -acting 14- and 15-year-olds. Hence some platforms have been reluctant to use these more intrusive verification methods. Apple’s and similar age-banding APIs rely on parents having set the appropriate controls and are impotent in the face of deceitful circumvention.
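The near-threshold problem can be made concrete with a toy sketch. This is not any platform's actual method; the function and the error magnitudes below are invented purely for illustration:

```python
# Illustrative sketch: an age estimator with error around the 16-year
# cutoff produces both false accepts and false rejects near the threshold.
# The error values are assumptions, not figures from the report.

def passes_check(true_age: float, estimation_error: float,
                 threshold: float = 16.0) -> bool:
    """Return True if the *estimated* age (true age plus error) clears the threshold."""
    return true_age + estimation_error >= threshold

# A mature-looking 15-year-old over-estimated by 1.5 years slips through:
assert passes_check(15.0, +1.5)      # false accept
# A young-looking 17-year-old under-estimated by 1.5 years is blocked:
assert not passes_check(17.0, -1.5)  # false reject
```

Any estimator with nonzero error must trade one failure mode against the other near the cutoff, which is why the precision demanded of platforms is hardest to deliver exactly where it matters most.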
More prosaically, the commissioner also appears to be homing in on some specific platform behaviors. For example, some platforms are apparently allowing multiple account-creation attempts in a single day or on consecutive days, and repeated attempts at the same age verification method until it succeeds. Other platforms have apparently carried over accounts belonging to age-unverified under-16s without yet asking them for age verification (even though they may have no way to ascertain ex ante whether the owner is under 16). There is also evidence that some platforms are messaging children under 16, encouraging them to attempt age verification even when their declared age was under 16 prior to December 10, 2025. Though it is hard to see why the messaging itself is the problem when the accounts in question should already have been shut down.
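The retry concern is easy to quantify in the abstract. Assuming, purely for illustration, that a single verification attempt wrongly passes an under-16 with probability p, then allowing n independent retries raises the overall pass probability to 1 - (1 - p)^n:

```python
# Illustrative only: the per-attempt false-accept rate p is an assumed
# number, not a figure from the compliance report.
p = 0.10  # assumed chance a single check wrongly passes an under-16

def pass_probability(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts passes."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10):
    print(n, round(pass_probability(p, n), 3))
```

Under this assumption, ten retries lift a 10 percent per-attempt leak rate to roughly a 65 percent overall pass rate, which is why capping attempts matters as much as the accuracy of any single check.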
There’s little in the commissioner’s report to recommend the Australian approach to other jurisdictions. It’s neither effective nor proportionate, and is apparently succeeding only in making more work for the regulator.