Australia Probes Meta, TikTok Over Failed Under-16 Ban—70% of Kids Kept Accounts

A landmark social media ban for children in Australia appears to be largely failing, as regulators accuse the world's biggest tech companies of non-compliance and launch a formal investigation into their porous age-gating systems.

SignalEdge · March 31, 2026 · 4 min read
A teenager uses a smartphone in the dark, illustrating the difficulty of enforcing social media age restrictions for children

Key Takeaways

  • Australia is investigating Meta, TikTok, and Google for allegedly failing to enforce its ban on social media use for those under 16.
  • A survey found nearly 70% of underage users on Instagram, Snapchat, or TikTok maintained access after the ban was implemented.
  • Australia's eSafety regulator has formally expressed concerns about weak enforcement by all major platforms, including YouTube and Snapchat.
  • Another survey of parents showed that while underage use dropped from 49% to 31% post-ban, a significant number of children remain on the platforms.

Australia's government has launched an investigation into Meta, TikTok, and Google, accusing the tech giants of failing to enforce the country's ban on social media use for children under 16. The move comes after a survey, cited by The Guardian, found that nearly 70% of underage users with existing accounts on platforms like Instagram, Snapchat, or TikTok had maintained access despite the new rules. This isn't a minor compliance issue; it's a direct challenge to the idea that age restrictions can even work on the modern internet.

A Ban With No Teeth

The data paints a clear picture of the ban's limited impact. While one survey of 900 parents found the number of children with accounts dropped from 49% to 31% post-ban, the majority of kids who were already on these platforms appear to have stayed there. That 70% figure reported by The Guardian is damning. It suggests that whatever age verification systems are in place are easily bypassed or simply not applied retroactively to existing accounts. The systems function more as a legal shield than an effective barrier.

This pattern indicates that the ban likely only deterred the least committed underage users. Anyone with a passing familiarity with how the internet works knows how to enter a false birthdate. The platforms' reliance on self-reporting has proven to be an unreliable enforcement mechanism. The core issue is that robust, truly effective age verification would introduce friction, something these companies have spent billions of dollars to eliminate from the user experience. Until now, there has been little incentive to build a gate that actually keeps people out.

Regulators Turn Up the Heat

The government's investigation is not happening in a vacuum. It follows formal warnings from Australia's own online safety watchdog. According to the BBC, the eSafety regulator has voiced concerns about how Facebook, Instagram, Snapchat, TikTok, and YouTube are complying with the ban. Both sources point to the same conclusion: the platforms are failing, and Australian officials are losing patience.

Together, these reports show a clear escalation. What began as regulatory concern has now become a full-blown government probe. This puts Meta, Google, and TikTok in a difficult position. They can no longer point to their terms of service as proof of compliance. Australian authorities are now demanding proof that their enforcement technology actually works, and the initial data suggests it does not.

This isn't just an Australian problem. It's a test case for every government around the world grappling with how to protect children online. If a country with a dedicated eSafety commission cannot enforce a straightforward age ban, it suggests the problem is not with the law itself, but with the technical architecture of the platforms that dominate global communication. The outcome of this investigation will set a precedent for whether tech companies will be forced to fundamentally change how they verify who is using their products.

SignalEdge Insight

  • What this means: Tech platforms' reliance on simple, self-reported age verification is being officially challenged as insufficient by a national government.
  • Who benefits: Child safety advocates and regulators, who now have a strong case to demand more effective enforcement technologies.
  • Who loses: Meta, Google, and TikTok, which face a formal investigation, potential penalties, and pressure to implement costly new systems.
  • What to watch: Whether this probe forces companies to adopt stricter verification methods, such as document scans or AI-based age estimation, setting a new global standard.
