Tech giant Meta has begun removing users under the age of 16 from Instagram, Facebook, and Threads in Australia, as the country prepares to implement a world-first ban on social media access for young teenagers.
Under the new law, which takes effect on December 10, major digital platforms — including Meta, TikTok and YouTube — must block under-16 users or face penalties of up to A$49.5 million (US$32 million) for failing to take “reasonable steps” to comply.
A Meta spokesperson confirmed on Thursday that the company had started offboarding affected users, describing the rollout as the start of a “multi-layered” compliance process.
“We are working hard to remove all users who we understand to be under the age of 16 by 10 December,” the spokesperson said.
“Before you turn 16, we will notify you that you will soon be allowed to regain access to these platforms, and your content will be restored exactly as you left it.”
The company added that young users will be able to download and save their account history before removal.
The new rules are expected to affect hundreds of thousands of Australian adolescents. Instagram alone reports approximately 350,000 users aged 13 to 15.
Some popular platforms — including Roblox, Pinterest and WhatsApp — are exempt from the ban, though the exemption list is still under review.
Meta Calls for App Store Responsibility
While Meta maintains it will comply with the law, the company argued that responsibility for age verification should shift to app stores such as Google Play and Apple’s App Store.
“The government should require app stores to verify age and obtain parental approval whenever teens under 16 download apps,” the company said, adding that this approach would prevent teens from repeatedly verifying their ages across multiple platforms.
Meta said social media companies could then rely on verified age information from app stores to ensure age-appropriate experiences.
YouTube voiced its own concerns this week, claiming the ban could make children “less safe”, since under-16s could still access its website anonymously, without the safety protections tied to logged-in accounts.
Australia’s Communications Minister Anika Wells dismissed the argument as “weird”.
“If YouTube is reminding us that there is content not appropriate for age-restricted users on their website, that’s something YouTube needs to fix,” she said.
Wells noted that harmful algorithmic content had eroded the self-esteem of some Australian teenagers and contributed to their deaths.
“This law will not fix every harm occurring on the internet, but it will make it easier for kids to chase a better version of themselves,” she added.
Legal Challenge and Global Impact
The Digital Freedom Project, an internet rights organisation, has launched a High Court challenge against the legislation, describing it as an “unfair” restriction on freedom of expression.
Authorities acknowledge that determined teens may attempt to bypass the restrictions using fake IDs or AI-modified photos. Platforms have been instructed to design their own safeguards, though Australia’s online safety regulator admits that “no solution is likely to be 100 percent effective”.
The enforcement of Australia’s sweeping restrictions is being closely watched internationally. Malaysia has indicated plans to block under-16s from joining social media next year, while New Zealand is set to introduce a similar ban.
The success or failure of Australia’s approach could shape global efforts to regulate the risks social media poses to younger users.