Kids’ Online Safety Requires Precision, Not Centralization

By Shane Tews

January 13, 2026

Parents have valid concerns about how online environments shape their children’s behavior. However, as the House Energy and Commerce Committee advances a comprehensive package of children’s online safety bills, the crucial question isn’t whether lawmakers should act, but how responsibility should be shared. How Congress assigns that responsibility will determine whether these reforms truly protect children or simply impose rules that fail to address underlying risks.

Much of the current debate is framed as a choice between giving parents more power and holding platforms accountable. However, in practice, many proposals move toward centralization by shifting age verification and parental consent from individual services to operating systems or app stores. While this might seem administratively easier, it misplaces the risks and where they can be most effectively managed.

Children do not see the internet as a single, unified environment. They interact with individual services—each with its own design, purpose, and business model. A messaging app, short-form video platform, fitness tracker, and restaurant delivery app might all be used on the same device, yet they pose very different risks. Treating them as interchangeable because they share an operating system overlooks the differences policymakers need to understand. It also ignores that a child can reach the same services through other interfaces, such as a gaming console or a web browser on a laptop.

Operating systems and app stores primarily serve as distribution channels. They do not create in-app features, influence recommendation algorithms, or control how user engagement is encouraged or monetized. Asking them to be responsible for age verification or parental controls conflates access with the overall experience; it also creates a false sense of security. Verifying a user at download doesn’t address how an app evolves through updates, how its features work together, or how its design choices affect minors over time.

More importantly, centralized age-gating risks sweeping in a wide range of applications that do not pose the social harms Congress intends to address. Retailers, restaurants, educational platforms, and other consumer-facing businesses increasingly rely on apps for essential tasks such as payments, reservations, and customer support. These services lack algorithm-driven feeds or social amplification tools, yet broad legislative definitions could subject them to the same compliance rules as social media platforms. These businesses are increasingly concerned that vague statutory language could turn everyday commercial apps into regulatory targets, with little benefit to child safety.

The Parents Over Platforms Act offers a more practical approach: it recognizes that operating systems and app stores primarily serve as distribution channels, not the source of the design choices that put minors at risk.

Assigning responsibility at the application level offers a more sustainable and principled approach. Developers are best positioned to understand how their products affect minors because they control the features that pose risk. They design the interfaces, select engagement mechanics, and determine how content is displayed. When responsibility aligns with these choices, incentives better match the outcomes. Developers who add risky features must address them directly, while those with products that pose minimal risk are not overwhelmed by compliance requirements meant for entirely different services.

This approach better supports parents. Application-level age verification and parental controls enable families to make informed choices about specific products rather than relying on a one-size-fits-all gate at the operating system level. Parents can assess whether a particular app is suitable for their child based on its function and design, not just its category in an app store. That is empowerment rooted in context, not abstraction.

Critically, application-level responsibility does not preclude congressional action. It simply requires greater precision. Lawmakers can define the harms they seek to address, identify the services causing them, and mandate that those services implement age-appropriate safeguards. What should be avoided is the temptation to solve a complex issue by shifting responsibility to the easiest technical layer.

Regulatory history shows that overly broad mandates rarely achieve their intended goals. They are often impossible to enforce or serve only to protect large incumbents that can absorb compliance costs. Neither outcome makes children safer. In contrast, precision enables regulation to focus on genuine risks while preserving innovation and choice in the digital economy.

Protecting kids online is not about shifting responsibility upward or centralizing control. It is about placing responsibility where it rightly belongs: with the applications that shape user experience and behavior. Congress can meet the moment by resisting administrative shortcuts and embracing a more disciplined approach. When responsibility follows design, both children and parents are better served.