Ontario’s Information and Privacy Commissioner (IPC) and the Ontario Human Rights Commission (OHRC) published joint AI principles on 21 January 2026. The principles are technically non-binding, but both agencies have stated they will ground their assessment of organisations adopting AI systems, and law firms including DLA Piper, Lerners, and Torys advise treating them as effective requirements.

Why these principles have teeth without being law

Ontario’s Enhancing Digital Security and Trust Act, passed in 2024, created a statutory framework for AI governance in the province. Regulations under that Act have not yet been published. Until they are, the IPC-OHRC principles fill the gap.

The IPC and OHRC are not merely advisory bodies. The Information and Privacy Commissioner has statutory powers to investigate, audit, and issue binding orders in relation to privacy matters. The Human Rights Commission has authority to investigate systemic discrimination. When both agencies state that AI adoption will be assessed against these six principles, that statement carries enforcement weight.

DLA Piper’s Canadian team noted in January 2026 that while the principles are not formal regulations, “failure to comply could be used as evidence of non-compliance with existing privacy and human rights obligations.” Torys described them as “de facto requirements for organisations operating in Ontario’s public sector, with strong knock-on effects for the private sector.”

Ontario’s Working for Workers Four Act took effect 1 January 2026, requiring employers with 25 or more employees to disclose AI use in job postings. That statutory requirement sits alongside the principles in the same regulatory environment, signalling a consistent direction of travel.

The six principles: what they actually require

The principles apply across the full AI lifecycle, from system design through to decommissioning. This is not a point-in-time compliance exercise.

Valid and reliable. AI systems must produce outputs that are accurate and consistent for their intended purpose, supported by objective evidence and testing before deployment and throughout operation. Organisations must document their validation methodology, not simply assert that the system works.

Safe. Systems must prevent harm to life, health, economic security, and the environment. This includes cybersecurity requirements, mechanisms to detect unexpected or anomalous outputs, and processes to act on those detections. The safety principle extends to third-party systems integrated into or dependent on the AI.

Privacy-protective. Privacy by design is the baseline. Systems must operate on lawful authority, minimise data collection, and support individuals’ rights to access, correction, and opting out of automated decision-making where possible.

Human-rights affirming. Organisations must take active steps to prevent and remedy discrimination on grounds protected by the Ontario Human Rights Code. This requires examining training data for bias, testing outputs across protected groups, and establishing a process for remediation when disparate impacts are identified. A passive “we did not intend to discriminate” position is not sufficient.

Transparent. Individuals must understand that AI is being used in decisions affecting them, have a general understanding of how the system works, and know how to seek redress. This does not require disclosing proprietary model architecture, but it does require meaningful explanation accessible to the people affected.

Accountable. Organisations must have a documented governance structure for each AI system in operation. There must be a named person or body with authority to intervene, modify, or shut down the system. Human-in-the-loop oversight must be meaningful, not nominal.

Reach beyond the public sector

The principles were issued in the context of public sector AI adoption, but their reach extends further. Private organisations that contract with or partner with Ontario public bodies increasingly see the principles passed through as contractual requirements. A healthcare provider supplying services to a publicly funded health authority, or a technology vendor serving a provincial ministry, faces the same operational requirements through contract even if the principles do not formally apply to it directly.

The principles also align closely with the federal Office of the Privacy Commissioner’s nine principles for generative AI, published in December 2023. That alignment creates a coherent Canadian regulatory picture: federal expectations flow through the OPC, provincial expectations through the IPC and OHRC, together covering most AI deployments across the country.

Internationally, the six principles map well against the EU AI Act’s requirements for high-risk systems and the OECD AI Principles. Organisations already building to EU standards will find significant overlap, though the human rights affirmation principle adds specificity around Ontario Human Rights Code grounds that requires separate attention.

Mapping to existing programmes

For many organisations, the six principles will not require building new governance infrastructure from scratch. Most mature privacy programmes already cover data minimisation, lawful authority, and access rights. The work is mapping existing controls to the AI-specific context and identifying the gaps.

A practical starting point is an AI inventory covering every system in active use, under development, or under evaluation. Against each system, the six principles provide a structured assessment framework. The AI governance framework guide covers how to structure that inventory and the governance controls around each AI system.
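As an illustrative sketch only (the principles prescribe no data format, and every field name below is an assumption), an inventory entry assessed against the six principles might be structured like this:

```python
from dataclasses import dataclass, field

# The six IPC-OHRC principles, used here as assessment dimensions.
PRINCIPLES = [
    "valid_and_reliable",
    "safe",
    "privacy_protective",
    "human_rights_affirming",
    "transparent",
    "accountable",
]

@dataclass
class AISystemRecord:
    """One row in a hypothetical AI inventory (field names are illustrative)."""
    name: str
    lifecycle_stage: str      # e.g. "in use", "under development", "under evaluation"
    accountable_owner: str    # named person or body with authority to intervene
    # Status per principle: "met", "gap", or "not assessed".
    assessments: dict = field(
        default_factory=lambda: {p: "not assessed" for p in PRINCIPLES}
    )

    def gaps(self):
        """Return the principles where a gap has been identified."""
        return [p for p, status in self.assessments.items() if status == "gap"]

# Example: a deployed system with one identified gap.
record = AISystemRecord(
    name="resume-screening-tool",
    lifecycle_stage="in use",
    accountable_owner="HR Director",
)
record.assessments["privacy_protective"] = "met"
record.assessments["human_rights_affirming"] = "gap"
print(record.gaps())  # -> ['human_rights_affirming']
```

The point of a structure like this is not the code itself but the discipline it enforces: every system gets a named accountable owner and an explicit status against each principle, so unassessed systems and open gaps are visible at a glance.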

Federal obligations under the Treasury Board Directive on Automated Decision-Making apply to federal institutions and provide a useful reference model for the accountability and transparency principles. Organisations in both federal and provincial jurisdictions can harmonise their AI governance documentation across both frameworks to reduce duplication.

For Australian and international businesses with Canadian operations, the EU AI Act guide covers the parallel international compliance timeline, much of which overlaps with Ontario’s direction of travel.

What other provinces are watching

Ontario’s IPC-OHRC principles are already being read closely by privacy commissioners in British Columbia, Quebec, and Alberta. The joint IPC-OHRC model, in which a privacy regulator and a human rights body act together, is a template others are likely to follow.

Quebec’s Law 25, in force since September 2023, already contains provisions on automated decision-making. British Columbia’s Privacy Commissioner has flagged AI governance as a 2026 priority. Alberta’s Office of the Information and Privacy Commissioner published AI guidance in late 2025.

The direction across Canadian provinces is consistent. Organisations treating Ontario’s principles as an outlier are likely to find themselves navigating the same requirements province by province over the next 18 months.


Related reading: AI Compliance Deadlines in 2026: What Every Business Needs to Know | What Is an AI Governance Framework?


Stay across AI governance developments in Australia and globally. Subscribe to the Shadow AI Watch newsletter.


Sources