I’ve written about this before, but it bears repeating: Should responsibility for kids’ online safety lie with operating systems and app stores, or with the applications themselves? At first glance, the operating system (OS) approach seems appealing. Verify a child’s age at the operating-system level, block downloads deemed inappropriate by regulation, and call it a day. It sounds clean, efficient, and centralized. But it is also insufficient.
The core problem is that operating systems are distribution channels, while applications are experiences. When it comes to children’s online safety, the risk lies in the experience, not in access. An app store can only gate initial access; it does not control design. Once a child downloads an application, the operating system no longer shapes what happens next. It does not determine whether an app uses algorithmically amplified feeds. It does not control whether push notifications are engineered to maximize engagement. It does not design in-app messaging features or decide how user data is collected and monetized. Developers make those decisions.
Children do not experience the operating system; they use individual apps. Consider how kids use devices today. As I have noted previously, on the same phone, a teenager might have a messaging app, a short-form video platform with algorithmic recommendations, a fitness tracker, and a restaurant-ordering app. All four apps are initially delivered through the same app store, yet their risk profiles differ dramatically.
A video platform optimized for engagement, driven by recommendation engines and social amplification, raises entirely different concerns than an app used to order takeout. Treating these services as interchangeable because they share a distribution channel overlooks the fundamental distinction between device-level access and application-level design.
Broad mandates at the operating system level risk creating a false sense of security among parents while saddling low-risk applications with compliance burdens. That outcome would neither enhance safety nor promote innovation. Plus, the modern digital environment is inherently multi-device. Even if an OS-level age gate worked perfectly on a smartphone, it would not follow a child across devices. The same application may be available on a gaming console, a laptop, a tablet, or a smart TV.
More importantly, a one-time verification at download does not account for how applications evolve. Children’s online experiences are dynamic, and apps are not static products; they update frequently as features are introduced, tested, modified, and occasionally retired. Algorithms change, and monetization strategies often shift as the app’s use evolves.
If Congress wants durable protections, it must recognize that safety obligations should track with the product’s ongoing design choices, not with a one-time transaction at the distribution layer. Application-level responsibility aligns capability with accountability because developers control the app’s content moderation systems, data collection practices, recommendation algorithms, and engagement mechanisms. They are uniquely positioned to implement age-appropriate safeguards because they build and continuously refine the features that may pose risks.
When responsibility sits with developers, incentives align. If a company introduces a feature that increases risk for minors, it must address that risk directly. If a service poses minimal risk, such as a restaurant app or a basic productivity tool, it is not swept into compliance regimes designed for algorithmically driven social platforms.
Application-level safeguards also empower parents more effectively. Such an approach places responsibility where it belongs: with those who design the user experience. When protections are built into specific services, families can make informed decisions about individual products. Parents can evaluate what an app does, what behaviors it encourages, what data it collects, and what risks it poses. That level of transparency is impossible when everything is filtered through a single operating system checkpoint.
None of this argues against congressional action. On the contrary, Congress should act to protect children online. If Congress gets this right, it can align responsibility with capability, preserve innovation in low-risk applications, ensure high-risk features are appropriately managed, and create a framework flexible enough to adapt to evolving technology.
To help protect children online, lawmakers should resist the lure of centralized symbolism and instead place responsibility where it can make a difference—at the application level, where design decisions shape behavior and outcomes every day.