“It has been estimated that by the time a child turns 13, around 72 million pieces of data will have been collected about them, making them vulnerable to harms from data breaches, discrimination, algorithmic bias and targeted advertising of harmful products, amongst other risks,” Australian Privacy Commissioner Carly Kind said when releasing the exposure draft of the Children’s Online Privacy Code on 31 March 2026.
The draft Code, released by the Office of the Australian Information Commissioner (OAIC) under the Privacy and Other Legislation Amendment Act 2024, must be registered by 10 December 2026. A breach of the Code will be a breach of the Privacy Act, with civil penalties for serious or repeated interferences with privacy of up to AUD 50 million. To be clear, that figure is the existing Privacy Act penalty ceiling now extended to Code breaches, not a new Code-specific number. The Code is framed as a children’s safety measure, but its design implications for AI training, profiling and personalisation reach far beyond apps that target kids.
What the Code actually requires
The Code sets out how entities covered by the Privacy Act must comply with the Australian Privacy Principles (APPs) when their services are likely to be accessed by children. It applies to social media services, relevant electronic services, and designated internet services, including apps, games, streaming platforms, and educational tools.
At the centre of the Code is a “best interests of the child” standard. Entities must consider children’s interests before collecting, using or disclosing their personal information, and they must collect only what is strictly necessary to provide the service. The draft also introduces consent requirements for using children’s personal information for targeted advertising, an explicit right for children to request deletion of their personal information, age assurance obligations, and time limits for responding to privacy complaints.
Kind framed the Code’s purpose carefully: “The code will not restrict or limit children and young people’s participation in online spaces. Instead, it raises the standard for privacy protections in Australia and puts the onus on online services to do better when handling children’s personal information online.” The framing matters because the Code complements, rather than overlaps with, Australia’s Social Media Minimum Age scheme that came into effect in December 2025. The minimum age law removes under-16s from age-restricted platforms; the Code sets the privacy rules for everywhere children still are.
The evidence base: what the GPEN sweep found
Days before the OAIC released the Code’s exposure draft, the Global Privacy Enforcement Network (GPEN) published the results of its 2026 sweep, in which 27 data protection and privacy authorities from around the world examined nearly 900 websites and mobile apps used by children. The findings were stark.
Of the sites and apps examined, 59% required an email address to access full functionality, half required usernames, and 46% demanded geolocation. Fully 71% provided no information at all about their protective controls or privacy practices for children. Among sites with high-risk data processing for children, only 35% used popups to prompt for parental permission, and 72% failed to stop sweep participants from circumventing age assurance. The OAIC is not legislating in response to a hypothetical problem. It is legislating against a documented baseline of non-compliance with even basic privacy hygiene for child users.
For workplaces: why this matters even if you do not build apps for kids
Before turning to the design implications, there is the question most SAW readers will ask: does this Code apply to my workplace? It touches more workplaces than the headline suggests. Three patterns put workplace tools in scope. First, staff routinely use enterprise SaaS tools at home, where children share devices and screens; a productivity tool used on the family laptop is, in the OAIC’s framing, a service likely to be accessed by children. Second, edtech and education-sector clients pull general-purpose enterprise tools (collaboration platforms, video conferencing, AI assistants) into school environments where the user base shifts toward minors. Third, AI models trained on mixed adult and child interaction data sit inside services that cannot cleanly disclaim child use. The Code’s scope is determined by who uses the service in practice, not by who the marketing was aimed at.
Why the AI implications are bigger than the headline
The Code is being read as a child safety instrument, but the technical obligations it imposes apply to almost every AI design pattern in modern consumer services.
Targeted advertising requires consent. That cuts directly across the recommendation engines and behavioural profiling models that drive most consumer AI products. If consent cannot be shown to be voluntary, informed, specific, current and unambiguous, the underlying training and inference cannot lawfully proceed for users covered by the Code.
Data minimisation requires collecting only what is strictly necessary. AI systems are typically built on the opposite assumption: collect everything, decide later what is useful. The Code reverses that default for any service likely to be accessed by children.
Right to deletion. Models trained on children’s personal information may need to support meaningful data removal, not just account deactivation. For machine learning systems, that is technically difficult and operationally expensive. The Code does not give organisations a free pass on technical complexity.
Best interests of the child is a substantive test, not a procedural one. It will require documented assessments of how each AI feature affects children, comparable to a privacy impact assessment but focused specifically on developmental and welfare outcomes.
How this looks in practice: the messaging service example
The OAIC has provided a clean illustration. Imagine a child signs up for a messaging app. Under the Code, the provider can collect limited personal information from the child to enable the core service: directing messages to the right recipient. That data collection is strictly necessary, so it is permitted. What the provider cannot do is collect data from that child to power a personalised content recommendation engine, drive targeted advertising, or feed a behavioural profiling model. Those uses are not strictly necessary to deliver the service the child signed up for. Without specific consent meeting the Code’s standard, they are not lawful. If the child later disables those features, the data collected for them must be destroyed. Any AI feature in any consumer-facing service likely to be accessed by children will need to be re-examined against the same test.
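As a rough illustration of how that test could translate into engineering, consider a purpose-gated data store that refuses collection by default. This is a minimal sketch under stated assumptions: the purpose labels, the store structure and the consent flag are all illustrative, not categories or mechanisms defined by the draft Code.

```python
from dataclasses import dataclass, field

# Purposes strictly necessary to deliver the core messaging service.
# Anything outside this set needs Code-standard consent. (Illustrative
# labels, not terms from the draft Code.)
STRICTLY_NECESSARY = {"message_routing", "account_security"}

@dataclass
class ChildDataStore:
    records: dict = field(default_factory=dict)

    def collect(self, purpose: str, value: str, consented: bool = False) -> bool:
        """Refuse collection unless the purpose is strictly necessary
        or backed by Code-standard consent."""
        if purpose not in STRICTLY_NECESSARY and not consented:
            return False  # e.g. ad targeting, behavioural profiling
        self.records.setdefault(purpose, []).append(value)
        return True

    def disable_feature(self, purpose: str) -> None:
        """When the child turns a feature off, destroy the data
        collected for that purpose."""
        self.records.pop(purpose, None)

store = ChildDataStore()
store.collect("message_routing", "recipient:alice")  # True: core service
store.collect("ad_targeting", "viewed:sports")       # False: no consent recorded
store.disable_feature("message_routing")             # data destroyed
```

On this model, routing a message succeeds, feeding a profiling model is refused unless a Code-standard consent is recorded, and disabling a feature destroys the data collected for it, which is the behaviour the OAIC’s example describes.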
The scoping problem: what counts as “likely to be accessed by children”?
The Code applies to services “likely to be accessed by children,” but the OAIC has not yet provided a clear test for what that means. Gilbert + Tobin described the exposure draft as foreshadowing “more prescriptive, higher-standard obligations for online services handling children’s personal information, signalling a major shift in privacy compliance and potential future reforms to the Privacy Act.” But there is no threshold (such as a percentage of users under 18) and no firm guidance on how organisations should self-assess.
In practice, the safer assumption is that any general-purpose service with a meaningful youth user base is in scope. As the workplace examples above show, vendors that cannot demonstrate clean separation between adult and child use will need to either treat the entire service as in-scope or restrict its deployment.
Industry will push back, and the consultation is genuinely open
Not every voice on the Code is enthusiastic. The Association for Data-driven Marketing and Advertising (ADMA) has flagged concerns about regulatory overreach in earlier submissions, arguing that the “likely to be accessed by children” test should be narrowed to refer to “the part of the service which clearly is appealing to children, directed at or focused for children.” ADMA’s position is that broad scoping risks capturing routine adult-focused services that incidentally have some youth users.
Child safety advocates take the opposite view. Their argument: a narrow scoping test would let providers off the hook for the most common pattern of harm, which is children using “adult” services by default. Most of the privacy and AI risks young people face come not from kid-targeted apps but from general-purpose platforms that did not design with children in mind. On that view, broad scope is necessary precisely because children rarely use the services that were built for them. The 60-day consultation window from 31 March to 5 June 2026 is the time for both arguments to be tested. Issues such as the scoping test, the practical operation of deletion rights at the model level, and the interaction with the OAIC’s separate March 2026 age assurance guidance are still in play.
The dual-deadline problem
The Code lands in the same year as the more general Privacy Act automated decision-making (ADM) transparency requirement. From 10 December 2026, the Privacy and Other Legislation Amendment Act requires all Australian entities using automated decision-making affecting individuals’ rights or interests to disclose this in their privacy policy. The two obligations overlap substantially for any AI system that processes data about children. The Code adds specific child-focused requirements (best interests, consent for advertising, deletion rights) on top of the general ADM transparency baseline. The OAIC also issued separate age assurance guidance in March 2026, reinforcing the necessity-and-proportionality test for collecting age-related personal information. Organisations building governance for one of these without the others will need to retrofit later.
What businesses should do now
Map services likely to be accessed by children. Conduct an internal scoping exercise across all consumer-facing products, education sector deployments, and any tools repurposed for youth use. Document the assessment, even if the conclusion is that no services are in scope. Without documentation, you cannot demonstrate compliance with the best-interests test.
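One way to make that documentation concrete is a machine-readable scoping register. The sketch below is illustrative only: the field names, file name and example entry are hypothetical, and the draft Code does not prescribe any record format.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# One documented scoping assessment per service. Hypothetical schema.
@dataclass
class ScopingAssessment:
    service: str
    audience_evidence: str        # analytics, app-store category, client sector
    likely_accessed_by_children: bool
    rationale: str
    assessed_on: date
    reviewer: str

assessments = [
    ScopingAssessment(
        service="TeamChat (enterprise)",
        audience_evidence="Sold to schools via education resellers since 2024",
        likely_accessed_by_children=True,
        rationale="Deployed in K-12 environments; user base includes minors",
        assessed_on=date(2026, 4, 15),
        reviewer="privacy@example.com",
    ),
]

# Persist the register so the assessment itself is demonstrable, even
# where the conclusion is that a service is out of scope.
with open("copc_scoping_register.json", "w") as f:
    json.dump([asdict(a) for a in assessments], f, default=str, indent=2)
```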
Audit AI training and analytics pipelines for youth data, and update DPIAs. Identify any model trained on or fine-tuned with data from users under 18. Assess whether deletion requests can be honoured at the model level, not just the account level. Where they cannot, document the limitation and prepare to rebuild affected models. Existing privacy impact assessments will need to expand to cover child-specific welfare considerations as a substantive analytical requirement, not a procedural one.
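A first-pass audit can be as simple as counting training records tied to a known under-18 user, treating unknown ages as potentially in scope, and recording whether each downstream model can honour deletion below the account level. The sketch below assumes a row-level age field and hypothetical model names; real pipelines would rely on age assurance signals rather than self-declared fields.

```python
from dataclasses import dataclass

@dataclass
class ModelAudit:
    model: str                           # hypothetical model name
    trained_on_minor_data: bool
    supports_model_level_deletion: bool  # retrain/unlearn, not just account delete

def audit_rows(rows: list) -> tuple:
    """Count records from under-18 users, and records with unknown age.

    Unknown ages are counted separately: without age assurance they
    cannot safely be assumed to be adults for Code purposes."""
    minors = sum(1 for r in rows if isinstance(r.get("age"), int) and r["age"] < 18)
    unknown = sum(1 for r in rows if not isinstance(r.get("age"), int))
    return minors, unknown

# Hypothetical training-log rows; the third carries no age signal at all.
rows = [{"age": 14, "text": "..."}, {"age": 31, "text": "..."}, {"text": "..."}]
minors, unknown = audit_rows(rows)

audits = [ModelAudit("recsys-v3", trained_on_minor_data=minors > 0,
                     supports_model_level_deletion=False)]
for a in audits:
    if a.trained_on_minor_data and not a.supports_model_level_deletion:
        # Document the gap and plan a rebuild, per the action item above.
        print(f"{a.model}: deletion gap ({minors} minor records, {unknown} unknown age)")
```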
Tighten consent mechanisms for targeted advertising and personalisation. If your AI-driven personalisation cannot operate without behavioural profiling, the consent regime under the Code may make it unviable for child users. Plan now for what a non-personalised version of the service looks like.
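The consent standard quoted earlier (voluntary, informed, specific, current and unambiguous) can at least be modelled as a per-purpose record that fails closed. The sketch below is one possible encoding, not the Code’s mechanism: the twelve-month freshness window and the field names are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed freshness window for "current"; the draft Code does not set one.
CONSENT_MAX_AGE = timedelta(days=365)

@dataclass
class ConsentRecord:
    purpose: str            # specific: one purpose per record, no bundling
    notice_shown: bool      # informed: plain-language notice displayed
    freely_given: bool      # voluntary: not a condition of the core service
    affirmative_act: bool   # unambiguous: opt-in action, never pre-ticked
    given_at: datetime

def consent_is_valid(c: ConsentRecord, purpose: str, now: datetime) -> bool:
    """Fail closed: every attribute must hold, for this exact purpose."""
    return (c.purpose == purpose
            and c.notice_shown
            and c.freely_given
            and c.affirmative_act
            and now - c.given_at <= CONSENT_MAX_AGE)  # current

record = ConsentRecord("ad_targeting", True, True, True, datetime(2026, 4, 1))
assert consent_is_valid(record, "ad_targeting", datetime(2026, 6, 1))
assert not consent_is_valid(record, "behavioural_profiling", datetime(2026, 6, 1))
```

The design point is the fail-closed default: a consent record that is valid for advertising says nothing about profiling, which is what the Code’s specificity requirement appears to demand.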
Submit feedback before the consultation closes on 5 June 2026. Industry submissions will shape the final form of the Code, and, as noted above, the scoping test, age assurance requirements and the practical operation of deletion rights are all still in play.
Child safety today, AI design baseline tomorrow
The Code is narrowly targeted at children, but the principles it establishes (data minimisation, purpose limitation, meaningful consent, deletion rights, documented welfare assessments) align closely with what international AI governance frameworks already expect for high-risk systems generally. Once Australian organisations have built the compliance infrastructure for the Code, most of the work also satisfies the December 2026 ADM transparency requirement, the EU AI Act for organisations exporting into Europe, and the emerging expectations of Australian sector regulators. Organisations that treat this as a child safety project will be doing the work twice. The ones that treat it as a design baseline for AI handling personal information will be ahead of the next regulatory cycle.
Sources
- OAIC, “OAIC releases Exposure Draft of the Children’s Online Privacy Code,” 31 March 2026. oaic.gov.au
- OAIC, Children’s Online Privacy Code consultation page. oaic.gov.au
- OAIC, “Privacy Commissioner publishes new guidance to ensure proportionate age assurance,” March 2026. oaic.gov.au
- Gilbert + Tobin, “Exposure Draft of Children’s Online Privacy Code released: a major shift for online service providers,” April 2026. gtlaw.com.au
- Information Age (ACS), “Age checks loom for most online services,” April 2026 (citing GPEN sweep findings and Carly Kind statements). ia.acs.org.au
- Australian Cyber Security Magazine, “OAIC issues draft Children’s Online Privacy Code for public consultation,” 31 March 2026. australiancybersecuritymagazine.com.au
- Cyber Daily, “Kids’ stuff: OAIC releases exposure draft of Children’s Online Privacy Code,” 31 March 2026. cyberdaily.au
- The Lawyer Magazine Australasia, “OAIC publishes Children’s Online Privacy Code draft,” April 2026. thelawyermag.com
- ADMA, “Understanding the impending Children’s Online Privacy Code,” December 2025. adma.com.au