Digital spinach: What Australia can learn from China’s youth screen-time restrictions
19 Sep 2024

As Australia debates the right cut-off age for social media use, let's not forget there already is a cut-off age—13. That's the age most platforms set in their terms of service in compliance with the United States' Children's Online Privacy Protection Act (COPPA).

But there's a slight problem—whatever they're doing to keep younger kids out, it's not working. A key part of the solution might be found in a place few would expect: China. While the idea of borrowing tactics from a surveillance state might seem unappealing to Australians, there are valuable lessons we could learn from Beijing when it comes to enforcing age restrictions and protecting young users from the harms of social media.

Research conducted by the Office of the eSafety Commissioner reveals that nearly a quarter of children aged eight to 10 report using social media weekly or more often, and almost half of those aged 11 to 13 are doing so. Which raises the question: what are these companies actually doing to keep underage users off their platforms?

That’s precisely what eSafety Commissioner Julie Inman Grant asked the major social media platforms earlier this month. They have 30 days to respond, but the answer is already clear. With kids getting phones at younger ages, many are downloading apps intended for those 13 and older, and are lying about their age in the process. The safeguards in place are far from robust.

Even the major platforms concede that keeping underage users out is a losing battle. When Instagram paused its ‘Instagram Kids’ project in 2021—a version of the app specifically designed for users under 13—Adam Mosseri, the Head of Instagram, admitted that relying solely on age verification was not enough, advocating instead for a safer, controlled version of the app for younger users.

So, if the current measures are ineffective, what’s the solution? The federal government’s $6.5 million trial of ‘age assurance’ technologies is exploring a range of options to enforce age restrictions more effectively, from a digital ID to AI profiling and biometric analysis.

But in a draft open letter to the government, some Australian tech leaders criticised the trial as a ‘fundamental strategic error’, arguing that tech giants should be responsible for developing and enforcing age verification systems themselves. These companies, they said, should face severe penalties if they fail to comply—penalties that would compel them to figure out a solution.

The crux of the issue is how severe those penalties should be. The US Federal Trade Commission (FTC) regularly fines major platforms for violating COPPA, but with little impact. For example, TikTok and ByteDance have been accused of flagrantly violating COPPA by collecting data from children under 13 without parental consent. However, the $5.7 million fine imposed—a record at the time—was insignificant for a company with $16 billion in US revenue last year. The risk-reward balance remains skewed towards non-compliance.

Even if fines increase, platforms still face a fundamental challenge: verifying the age of children too young to have an ID. Platforms argue that others are better placed to solve the problem. Snap, for instance, has suggested that device manufacturers should handle age verification since they control the registration process when a new phone is activated. Meanwhile, Meta advocates for legislation requiring app stores to implement age verification tools, allowing parents to block children under 16 from downloading social media apps.

So, the social media platforms blame either the app stores or the device makers, who then point right back at the platforms. Perhaps it’s time for everyone to take responsibility?

China, unexpectedly, provides a model for how this could be done. Last year, Beijing mandated a coordinated effort across app developers, app stores, and device manufacturers to create a unified 'minors' mode'. This framework enforces strict rules such as age-specific screen-time limits, mandatory breaks, and a curfew banning use between 10 p.m. and 6 a.m. These measures are designed to close the loopholes kids have exploited, such as using their grandparents' accounts to dodge restrictions and indulge in late-night gaming.

This being communist China, the approach extends beyond mere access restrictions. It segments children into age groups, prescribing the type of content they can access. Children under eight are limited to 40 minutes of screen time per day, with content strictly educational. Once they turn eight, their allowance increases to one hour, introducing 'entertainment content with positive guidance'. It's a grand piece of social engineering, rooted in a blend of paternalistic, Confucian, and Leninist principles, that appears designed to ensure the next generation grows up patriotic, productive, and in line with the party-state's vision for the future.
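To see just how prescriptive the framework is, here is a minimal sketch in Python of how its tiering might be encoded. Only the under-eight and eight-plus allowances and the 10 p.m. to 6 a.m. curfew come from the reported rules; everything else, including the function and field names, is a hypothetical illustration, not the actual implementation.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical encoding of the 'minors' mode' rules described above.
# The under-8 and 8+ allowances and the curfew reflect the reported
# framework; tier structure and all names are illustrative only.

CURFEW_START = time(22, 0)  # 10 p.m.
CURFEW_END = time(6, 0)     # 6 a.m.

@dataclass
class Tier:
    max_minutes_per_day: int
    allowed_content: str

TIERS = [
    # (minimum age, rules for that band), ordered by minimum age
    (0, Tier(max_minutes_per_day=40, allowed_content="educational")),
    (8, Tier(max_minutes_per_day=60,
             allowed_content="entertainment with positive guidance")),
]

def rules_for(age: int) -> Tier:
    """Return the highest tier whose minimum age the child meets."""
    applicable = [t for min_age, t in TIERS if age >= min_age]
    return applicable[-1]

def in_curfew(now: time) -> bool:
    """The curfew spans midnight, so check both sides of it."""
    return now >= CURFEW_START or now < CURFEW_END

def may_use(age: int, minutes_used_today: int, now: time) -> bool:
    """Allow use only outside curfew and under the daily allowance."""
    if in_curfew(now):
        return False
    return minutes_used_today < rules_for(age).max_minutes_per_day
```

The sketch also makes the coordination problem plain: rules like these hold only if apps, app stores, and handsets all enforce the same table, which is exactly what Beijing's mandate requires of each layer.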

Some Western critics have argued that while Beijing ensures a healthy digital diet for its own youth, it simultaneously exports platforms like TikTok to weaken the youth of other nations. As former Congressman Mike Gallagher starkly put it, ‘ByteDance and the CCP [Chinese Communist Party] have decided that China’s children get spinach, and America’s get digital fentanyl.’

The truth is, TikTok is just a more efficient delivery device for the same content available on any platform, regardless of ownership. While the CCP can influence TikTok’s algorithm, they’re not force-feeding us digital fentanyl; the real issue is our own failure to implement safeguards that ensure a healthier digital experience for our kids.

We don't need the state to socially engineer our children onto a strict 'spinach' digital diet, but we can certainly take a page from Beijing's playbook and compel all the stakeholders—from app developers to app stores and device manufacturers—to co-operate, so we can build a digital ecosystem that keeps young children off social media until they're ready.