02 Dec 2025
Age Assurance Under the Online Safety Act: The Online Industry’s Final Countdown
Authored by Allison Lawrence, Senior Legal Writer, Technology & Innovation.
Australia’s online safety framework is about to reach a major turning point. On 10 December 2025, the new age-assurance obligations introduced by the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Cth) will take effect.
From that date, any service designated as an “age-restricted social media service” under the Online Safety Act 2021 (Cth) must take reasonable steps to ensure Australians under 16 cannot create or maintain an account.
For the online industry, this is no longer a distant regulatory change. It is an immediate operational imperative requiring finalisation of risk assessments, onboarding redesign, privacy compliance adjustments, and governance oversight before 10 December.
In this blog, Allison examines the new age-assurance obligations commencing on 10 December 2025 under the Online Safety Act 2021 (Cth). She explains which services are captured, what “reasonable steps” require, how privacy law interacts with age verification, and what the online industry must do to ensure compliance. The blog also outlines key takeaways from eSafety’s recent regulatory guidance and technology review.
Age-Restricted Social Media Services Under the Online Safety Act
The 2024 Amendment Act inserted a new minimum-age regime into the Online Safety Act 2021 (Cth).
Under s 63C, the Minister for Communications may specify a service or class of services as an age-restricted social media service, having first sought and had regard to advice from the eSafety Commissioner.
The services affected are those whose sole or significant purpose is to enable users to interact socially, for example by posting, messaging, commenting, sharing, or connecting with other users.
Once designated, the service must comply with the s 63D “reasonable steps” obligation from the moment designation takes effect.
Meeting the s 63D “Reasonable Steps” Obligation for Age Assurance
The Act requires designated providers to take reasonable steps to prevent under-16 users from creating or maintaining accounts.
While the Act itself does not prescribe specific technologies, the eSafety Commissioner’s Regulatory Guidance (September 2025) makes clear that compliance requires appropriate age-assurance measures aligned with four key principles: accuracy, fairness, reliability, and privacy protection.
To meet the s 63D standard, age-restricted social media services must undertake three core actions:
1. Understand Their Service’s Risks
A structured, platform-specific risk assessment is essential, covering:
- user pathways and onboarding flows;
- potential under-16 access points;
- relevant service features (messaging, posting, connecting);
- likely circumvention behaviours (multi-account creation, shared devices, social engineering).
A well-documented assessment is central to demonstrating that a provider has acted reasonably.
2. Adopt Verification / Age-Assurance Measures Proportionate to Those Risks
Providers must implement measures that are accurate, fair, reliable, and privacy protective. Options include:
- digital identity verification;
- document verification;
- facial age-estimation technologies;
- telecommunications-based checks;
- layered or hybrid “waterfall” models.
Self-declaration, tick-box confirmations, or basic date-of-birth fields cannot, on their own, meet the “reasonable steps” standard.
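To make the layered or “waterfall” approach more concrete, the sketch below steps through progressively stronger checks, for instance facial age estimation followed by document verification, until one produces a sufficiently confident result. It is a conceptual illustration only: the check implementations, confidence threshold, and fail-closed behaviour are hypothetical assumptions, not settings drawn from the Act or eSafety’s guidance.

```typescript
// Conceptual sketch of a layered ("waterfall") age-assurance flow.
// The checks and thresholds below are hypothetical stand-ins, not any
// vendor's real API or a configuration endorsed by eSafety.

type AgeCheckResult = {
  estimatedAge: number; // best estimate of the user's age in years
  confidence: number;   // 0..1 confidence score reported by the check
  method: string;       // which assurance method produced the result
};

type AgeCheck = (sessionId: string) => Promise<AgeCheckResult>;

const MINIMUM_AGE = 16;
const CONFIDENCE_THRESHOLD = 0.9; // illustrative only; a provider must justify its own settings

// Escalate through progressively stronger checks until one is confident
// enough to rely on, then apply the minimum-age rule. If no check is
// sufficiently confident, fail closed rather than allow account creation.
async function assureMinimumAge(sessionId: string, checks: AgeCheck[]): Promise<boolean> {
  for (const check of checks) {
    const result = await check(sessionId);
    if (result.confidence >= CONFIDENCE_THRESHOLD) {
      return result.estimatedAge >= MINIMUM_AGE;
    }
    // Low-confidence result: fall through to the next, stronger method.
  }
  return false; // no confident result: do not permit the account
}

// Illustrative usage with stubbed checks; a real service would call its
// chosen facial age-estimation and document-verification providers.
const facialEstimate: AgeCheck = async () =>
  ({ estimatedAge: 17, confidence: 0.6, method: "facial-age-estimation" });
const documentCheck: AgeCheck = async () =>
  ({ estimatedAge: 17, confidence: 0.98, method: "document-verification" });

assureMinimumAge("session-123", [facialEstimate, documentCheck])
  .then((allowed) => console.log(allowed ? "account permitted" : "account blocked"));
```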
Where age assurance involves handling personal information, providers should also complete a Privacy Impact Assessment (PIA).
3. Document Why Their Approach Is Reasonable
Providers must:
- document their risk assessment;
- record how their age-assurance method was selected;
- define internal approval pathways and accountability;
- retain evidence demonstrating compliance with s 63D.
Age assurance is not a “set and forget” obligation. Providers must conduct ongoing review, test system performance, monitor error rates, and update verification processes as technologies and risks evolve.
Privacy Law
Age assurance is not only a safety requirement: it also engages Australia’s privacy law framework wherever an age-assurance activity involves handling personal information.
Where it does, providers must comply with the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs).
This includes obligations to collect only what is reasonably necessary, be transparent, secure personal information, limit secondary use and disclosure, and destroy or de-identify information once it is no longer required.
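As a simple illustration of how those principles can translate into system design, the sketch below retains only the outcome of an age check (an over-16 flag, the method used, and a scheduled destruction date) rather than the underlying documents, images, or full date of birth. The record shape and retention period are hypothetical assumptions for illustration, not requirements prescribed by the Privacy Act or the APPs.

```typescript
// Conceptual sketch of data minimisation for age-assurance records.
// The record shape and retention period are hypothetical examples,
// not requirements drawn from the Privacy Act 1988 (Cth) or the APPs.

type AgeAssuranceRecord = {
  userId: string;
  over16: boolean;   // outcome only; no date of birth or documents retained
  method: string;    // e.g. "document-verification"
  checkedAt: Date;
  deleteAfter: Date; // scheduled destruction of the record itself
};

const RETENTION_DAYS = 365; // illustrative retention period only

// Record the outcome of a check and discard the raw inputs (documents,
// images, full date of birth) as soon as the outcome is captured.
function recordOutcome(userId: string, over16: boolean, method: string): AgeAssuranceRecord {
  const checkedAt = new Date();
  const deleteAfter = new Date(checkedAt.getTime() + RETENTION_DAYS * 24 * 60 * 60 * 1000);
  return { userId, over16, method, checkedAt, deleteAfter };
}

// Periodic job: destroy or de-identify records once no longer required.
function purgeExpired(records: AgeAssuranceRecord[], now = new Date()): AgeAssuranceRecord[] {
  return records.filter((record) => record.deleteAfter > now);
}

const store = [recordOutcome("user-42", true, "document-verification")];
console.log(purgeExpired(store).length); // 1 until the retention period lapses
```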
Related: Privacy Law Bulletin 2025 Special Edition
What eSafety’s Review Shows About Age-Assurance Methods
To support the new framework, eSafety reviewed age-assurance methods used domestically and internationally. This work, including the Age Assurance Technology Trial, informed its Regulatory Guidance.
Higher-Assurance Methods (Context-Dependent):
- digital identity and document verification
- facial age estimation (with privacy, bias, and accessibility safeguards)
- layered or “waterfall” approaches, incorporating account-holder data or telecommunications checks
These methods can offer strong accuracy but must be balanced against privacy impacts, demographic fairness, and accessibility.
Lower-Assurance Methods (Unlikely to Be Sufficient on Their Own):
- self-declaration or tick-box confirmations
- date-of-birth fields without verification
- email-based checks or other easily circumvented mechanisms
eSafety emphasises that no single method suits every service. Providers must select and justify methods appropriate to their risk environment.
The Dynamic Designation Process: Why Online Services Must Stay Alert
Under s 63C, the Minister may update the list of designated services at any time.
Some online providers have already experienced late notice, raising concerns about compressed compliance timelines.
Age assurance should therefore be treated as a standing regulatory exposure. Services not currently listed may still fall within scope through future rule updates.
A service designated after 10 December must comply with s 63D from the moment it is designated, although eSafety will assess “reasonable steps” in light of timing.
Conclusion: From Preparation to Execution
The new social media minimum-age obligations commence in less than a fortnight. For designated online-industry services, this is the final opportunity to confirm risk assessments, finalise verification methods, update onboarding and privacy materials, and ensure governance structures are in place.
The introduction of age assurance marks a major evolution in Australia’s digital-regulatory landscape, integrating safety, privacy, and technological compliance in unprecedented ways.
To help practitioners manage these ongoing compliance obligations, we will be launching a LexisNexis® Practical Guidance: Technology & Innovation module in March 2026, designed to support the continuous compliance demands that sit at the heart of the “reasonable steps” obligation.
Subscribe here for early access updates and release notifications.
