Canada has no comprehensive federal AI law and will not have one in the near term. What it does have is a set of real expectations: regulators across the country now expect AI inventories, impact assessments, and clear accountability structures, even without legislation that mandates them explicitly. Ontario’s six-principle framework, published in January 2026, is the most developed template available and has immediate practical relevance well beyond the public sector.


Why There Is No Federal AI Law

Canada’s Artificial Intelligence and Data Act (AIDA) did not proceed: Parliament was prorogued in January 2025 and AIDA lapsed without royal assent. Its core concepts, including risk-based classification, human oversight, and accountability obligations, remain influential but are not law.

In May 2025, Prime Minister Mark Carney appointed Evan Solomon as Canada’s first federal minister responsible for artificial intelligence and digital innovation, and an AI Strategy Task Force is consulting on the next national AI strategy (MLT Aikins, March 2026). Federal legislation will likely re-emerge from that process, but its form and timing are uncertain.

Three layers of existing obligation fill part of the gap.

Privacy law. PIPEDA at the federal level and its provincial equivalents in Quebec (Law 25), British Columbia, and Alberta apply fully to AI systems that process personal information.

The federal Directive on Automated Decision-Making. The Treasury Board's directive requires algorithmic impact assessments and transparency measures for federal government AI systems, and its assessment tool is a useful reference for any organisation.

Provincial initiatives. British Columbia, Alberta, Saskatchewan, and Manitoba are each moving at different speeds, with Alberta's Privacy Commissioner recommending a dedicated provincial AI law in August 2025 (MLT Aikins, March 2026).


Ontario’s Framework: The Practical Baseline

Ontario has moved furthest on AI-specific governance. Its Enhancing Digital Security and Trust Act (EDSTA), passed in late 2024, sets accountability requirements for public sector AI use, with implementing regulations still pending.

The January 2026 joint statement from Ontario’s Information and Privacy Commissioner and Ontario Human Rights Commission established six principles for responsible AI use. Law firms including DLA Piper, Lerners, and Torys have described these as establishing de facto requirements for organisations operating in the province, because enforcement of existing privacy and human rights legislation will be guided by them.

The six principles are: validity and reliability; safety; privacy protection; human rights affirmation; transparency; and accountability. The framework covers the full AI lifecycle from design through to decommissioning. It applies to any organisation using AI that affects Ontario residents, not just public sector entities.

For a detailed breakdown of what each principle requires in practice, see the Ontario IPC-OHRC principles analysis.


The Governance Gap Is Structural

The gap between what regulators expect and what organisations have in place is substantial. Globally, 62% of organisations do not have a documented AI governance or management plan, and 58% lack confidence in their AI inventory (OCEG GRC Strategies for Effective AI Governance, October 2025). Canadian organisations are not meaningfully ahead of that baseline.

Privacy commissioners in multiple provinces are actively investigating AI-related complaints. Human rights tribunals are hearing cases involving algorithmic employment decisions. An organisation that cannot produce an AI inventory, explain how a specific system reaches its outputs, or demonstrate operative human oversight faces real legal exposure in these proceedings.

Organisations with mature AI governance programmes report 45% fewer AI-related security incidents and resolve breaches 70 days faster than those without formal oversight (Practical DevSecOps, March 2026). The governance infrastructure that reduces incident risk is the same infrastructure that provides the documentation trail regulators and tribunals will examine.


A Practical Framework for Canadian CIOs

MLT Aikins identifies five practical steps that translate directly into a governance programme (March 2026).

AI inventory. Identify every AI system in use or under development, including third-party tools. Classify each by risk level: systems that make or materially influence consequential decisions for individuals are the priority.
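As an illustration, a single inventory record might capture the fields below. This is a minimal sketch in Python: the schema, field names, and risk tiers are illustrative assumptions, not drawn from any regulator's template.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Illustrative tiers: systems that make or materially influence
    # consequential decisions about individuals sit at the top.
    CONSEQUENTIAL_DECISIONS = "consequential_decisions"
    DECISION_SUPPORT = "decision_support"
    LOW_IMPACT = "low_impact"


@dataclass
class AISystemRecord:
    """One entry in an organisation's AI inventory (hypothetical schema)."""
    name: str
    owner: str                       # accountable business owner, not just IT
    vendor: str | None               # third-party tools belong in the inventory too
    purpose: str
    risk_tier: RiskTier
    processes_personal_info: bool    # triggers PIPEDA / Law 25 obligations
    affects_ontario_residents: bool  # brings the IPC-OHRC principles into scope
    status: str = "in_use"           # e.g. "in_use", "in_development", "decommissioned"


# Example: a third-party resume-screening tool would be a priority entry.
resume_screener = AISystemRecord(
    name="CandidateRank",
    owner="VP People Operations",
    vendor="ExampleVendor Inc.",
    purpose="Shortlists job applicants for recruiter review",
    risk_tier=RiskTier.CONSEQUENTIAL_DECISIONS,
    processes_personal_info=True,
    affects_ontario_residents=True,
)
```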

Algorithmic impact assessments. For AI systems affecting individuals’ rights or access to services, conduct a structured impact assessment before deployment and annually thereafter. The federal Directive on Automated Decision-Making provides a methodology well-regarded by Canadian regulators, and the assessment tool is publicly available.
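The cadence itself is straightforward to track against the inventory above. The helper below is a hypothetical sketch of the "before deployment and annually thereafter" rule; it is not part of the federal assessment tool, and the 12-month interval is an assumption taken from the paragraph above.

```python
from datetime import date, timedelta

ASSESSMENT_INTERVAL = timedelta(days=365)


def assessment_due(deployed_on: date | None, last_assessed_on: date | None,
                   today: date | None = None) -> bool:
    """Return True if an algorithmic impact assessment is (re)due.

    Hypothetical rule of thumb: an assessment is required before a system
    is deployed, and again at least every 12 months after the last one.
    """
    today = today or date.today()
    if last_assessed_on is None:
        return True        # never assessed: an assessment is due before deployment
    if deployed_on is None:
        return False       # assessed but not yet deployed: nothing due yet
    return today - last_assessed_on >= ASSESSMENT_INTERVAL


# Example: last assessed 14 months ago, so a reassessment is overdue.
print(assessment_due(deployed_on=date(2025, 1, 15),
                     last_assessed_on=date(2025, 1, 10),
                     today=date(2026, 3, 20)))   # True
```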

Governance structures and policies. Assign explicit accountability for each AI system. Establish policies on acceptable use, what decisions AI can make autonomously, what requires human review, and what cannot be delegated to AI at all.
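Making those policy lines explicit for each system can be as simple as a decision-authority map. The structure below is a hypothetical illustration using the same resume-screening example; the authority tiers and role names are assumptions, not a prescribed format.

```python
from enum import Enum


class DecisionAuthority(Enum):
    AUTONOMOUS = "ai_may_decide"           # AI output can stand without review
    HUMAN_REVIEW = "human_must_review"     # AI recommends; a named person decides
    PROHIBITED = "must_not_be_delegated"   # AI must not be used for this decision


# Hypothetical policy for the resume-screening example: each decision type
# is assigned an authority level and an accountable role.
candidate_rank_policy = {
    "rank_applications_for_recruiter": (DecisionAuthority.AUTONOMOUS, "Talent Acquisition Lead"),
    "reject_applicant":                (DecisionAuthority.HUMAN_REVIEW, "Hiring Manager"),
    "final_hiring_decision":           (DecisionAuthority.PROHIBITED, "Hiring Manager"),
}

for decision, (authority, accountable_role) in candidate_rank_policy.items():
    print(f"{decision}: {authority.value} (accountable: {accountable_role})")
```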

Transparency practices. Individuals affected by AI-assisted decisions should be able to learn that AI was involved and, in material cases, how it influenced the outcome. Ontario’s six principles frame this as an ongoing obligation, not a one-time disclosure at initial deployment.
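Keeping a disclosure record for each AI-assisted decision is one way to support that ongoing obligation. The sketch below is entirely illustrative; no Canadian regulator prescribes this format, and the field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIDecisionDisclosure:
    """Hypothetical record backing a notice to an affected individual."""
    system_name: str
    decision: str
    decision_date: date
    ai_role: str               # how the AI influenced the outcome, in plain language
    human_reviewer: str        # who exercised oversight, if review was required
    contact_for_questions: str


disclosure = AIDecisionDisclosure(
    system_name="CandidateRank",
    decision="Application not shortlisted for interview",
    decision_date=date(2026, 3, 2),
    ai_role="Ranked the application below the shortlisting threshold; a recruiter reviewed the ranking",
    human_reviewer="Talent Acquisition Lead",
    contact_for_questions="people-ops@example.com",
)
```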

Framework alignment. The NIST AI Risk Management Framework, adopted by more than 70% of US federal agencies since 2023, is structurally compatible with Ontario’s six principles and the federal Directive. ISO 42001, the international AI management system standard, provides a certification-ready framework. Adopting either gives an organisation a structured approach that can be mapped onto new requirements as legislation emerges.


What the Federal Future Looks Like

AIDA’s core concepts will not disappear from Canadian regulatory thinking. The AI Strategy Task Force consultation is likely to produce recommendations for new federal legislation incorporating risk-based classification, human oversight requirements, and impact assessment obligations.

Organisations that build governance programmes now, grounded in privacy law obligations, Ontario’s six principles, and NIST AI RMF or ISO 42001, will be able to map into new federal requirements with substantially less disruption than those starting from scratch. The infrastructure being built under current soft law is the foundation for whatever comes next.


Related reading: Ontario’s IPC-OHRC AI Principles: A Governance Baseline Businesses Cannot Ignore | What Is an AI Governance Framework? | AI Compliance Deadlines in 2026


Stay across AI governance and compliance developments. Subscribe to the Shadow AI Watch newsletter.


Sources