Who Is Ultimately Responsible for Children’s Internet Safety?

AEIdeas

February 20, 2024

Last week, Project Liberty released the results of a survey of 14,000 adults in seven countries—Brazil, China, France, India, South Africa, the United Kingdom, and the United States—that found “people are deeply worried that social media opens the door to a range of harms for young people.” According to the study, “65% say they are ‘very concerned’ that kids might be subjected to cyberbullying or harassment. A clear majority of those surveyed also worry about children being exposed to inappropriate sexual or violent content (64% and 63%, respectively).” Project Liberty’s founder claims “the harms are now obvious, it’s time to fix the internet.” 

In Project Liberty’s view, “something must be done to create a safer internet for our children and society at large.” But while Project Liberty stops short of saying who must do this, many people think this is squarely the responsibility of governments, device manufacturers, and internet platform operators. One response is the Kids Online Safety Act, which was first introduced in 2022 and had more than 60 backers in the Senate as of last Thursday. This act would create a

duty for social media platforms to prevent and mitigate harms to minors, such as content promoting self-harm, suicide, eating disorders, substance abuse, and sexual exploitation. It also requires social media platforms to perform an annual independent audit assessing risks to minors, their compliance with this Act, and whether the platform is taking meaningful steps to prevent those harms.


That is, platforms are required not just to manage the content that minors view on them but also to take an active role in preventing self-harm, suicide, eating disorders, substance abuse, and sexual exploitation. In other words, they must become active agents of social policy delivery.

Much has already been written about the platforms’ technological capabilities to manage who receives their content. The algorithms used are already quite good, but they are far from perfect, especially at detecting exactly who is viewing the content on the end device. The perennial problem is that regulators and content distributors, sitting at arm’s length (at least) from the act of content consumption, find it nearly impossible to stop a person who is determined to access the content from getting it, however good their intentions. A great deal of effort will go into algorithms and other tools that keep the content away from people who would never be harmed by it, yet those most vulnerable to the associated harms will likely be ingenious enough to get around the obstacles put in their way.

A case in point is the regulation restricting the broadcasting of adult content on television to after certain hours, on the assumption that by then young people will be in bed and therefore unable to watch or be harmed by it. As the parent of any 10-year-old determined to watch an R-16 blood-and-gore horror movie knows, the law in this case is futile. No matter how compliant the broadcaster is, or how much information it disseminates about the harms of content that is perfectly legal to broadcast to a large portion of society at any time of the day, the real power to prevent harm from content viewing lies with the parents and caregivers on the spot.

In this light, it is apposite to consider the extent to which parents use the many tools available to control access to devices and applications. A Kaspersky survey in 2021 found that only 50 percent of parents used parental control apps, and “44% of parents report their kids use digital devices under the supervision of either a parent (36%) or a family member (8%).” More recent (2023) results from a MacBook Journal survey of Apple device users are no more reassuring: It found that overall, only 51 percent used app controls, although the proportion using them for children under 5 years old was 86 percent. However, use declined rapidly as children entered adolescence. Only 38 percent of parents had blocked access to specific apps or websites on their child’s device, and just 33 percent reported using download-management tools to restrict the types of software that could be installed.

Even though surveys such as Project Liberty’s report high levels of societal concern about online harms to children, it seems that when the rubber meets the road, only about half of parents use the tools available from app and device operators. Raising adolescents is hard, but their safety is ultimately the responsibility of parents, not legislators or platform operators.

See also: Keeping Kids Safe Online Requires Shared Responsibility | Of Meta and Minors, Filters and Filings: An Uncertain Path Forward | Cheap Fakes and Dirty Politics: Examining Manipulated Media and Platform Accountability | Getting Hooked on Social Media Addiction Lawsuits: What to Know and Some Points to Consider