The Committee of Sponsoring Organizations of the Treadway Commission published its first GenAI-specific internal control guidance in February 2026. Achieving Effective Internal Control Over Generative AI translates the five components of the COSO Internal Control–Integrated Framework into audit-ready practices for governing AI across the data-to-decision lifecycle. The argument that AI governance is too new for established control frameworks no longer holds.
Why COSO Matters for AI Governance
COSO’s Internal Control–Integrated Framework is the most widely adopted internal control standard in the world. It underpins Sarbanes-Oxley compliance, internal audit programmes, and board-level risk reporting across every regulated industry. When COSO publishes guidance saying its five components apply to generative AI, it carries immediate weight with audit committees, external auditors, and regulators.
The publication, released 23 February 2026, was authored by researchers from Arizona State University, the University of Duisburg-Essen, and Brigham Young University, alongside practitioners from EY and Meta. It builds on COSO’s 2021 report, Realize the Full Potential of Artificial Intelligence (co-authored with Deloitte), but moves from general principles to operational specifics. The guidance is 30 pages, freely available, and written for compliance, audit, and governance professionals (COSO/Journal of Accountancy, February 2026).
The timing matters. IBM’s 2025 CEO Study, surveying 2,000 CEOs across 33 countries, found that only 25 per cent of AI initiatives delivered the expected return on investment over the previous three years. Just 16 per cent managed to scale across the entire organisation (IBM/Oxford Economics, May 2025). Half of CEOs acknowledged that the pace of investment had left their organisation with disconnected technology stacks.
A separate IBM Institute for Business Value report published in December 2025, Go Further, Faster with AI, examined the relationship between governance maturity and AI outcomes. The report found that organisations with mature AI governance programmes were scaling AI faster and achieving higher operating profit from AI investments than those without. Its central conclusion was that governance functions as an accelerator rather than a constraint on AI deployment (IBM IBV, December 2025).
COSO’s guidance lands months before the EU AI Act’s August 2026 high-risk enforcement deadline, at a point where boards and audit committees can no longer defer the question of how AI fits into existing control environments.
Eight Capability Types, Not Vendor Names
The most useful structural decision in the guidance is organising AI governance around capability types rather than specific tools or vendors. AI exists across too many platforms for a product-by-product governance approach to scale. COSO groups generative AI into eight capability types across the data-to-decision lifecycle:
1. Data ingestion and extraction (e.g. analysing customer complaints from incoming emails)
2. Transformation and enrichment (e.g. standardising data across formats)
3. Posting and record creation (e.g. auto-generating journal entries)
4. Workflow orchestration and autonomous task execution (e.g. automated invoice routing)
5. Judgment, forecasting, and insight generation (e.g. predicting customer demand)
6. Monitoring and continuous review (e.g. real-time equipment failure detection)
7. Regulatory intelligence and compliance (e.g. scanning regulatory updates for relevance)
8. Human-AI interaction and collaboration (e.g. copilot assistants in workflows)
Each capability type has its own risk profile and requires tailored controls. A data ingestion system that extracts information from customer emails carries different risks from an autonomous workflow agent that routes invoices and creates records. The capability taxonomy gives internal control teams a way to assess AI risk that does not depend on knowing the underlying model architecture (COSO, February 2026).
Radical Compliance’s Matt Kelly, reviewing the guidance, described this approach as smart because it forces governance professionals to think in terms of what AI does rather than which specific system it is. That distinction is critical for organisations running AI across multiple vendors and platforms simultaneously (Radical Compliance, 23 February 2026).
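COSO does not prescribe tooling, but the capability taxonomy lends itself to a simple machine-readable register. A minimal sketch in Python follows; the two-tier oversight split is an illustrative assumption drawn from the SME advice later in this article, not a ranking COSO provides.

```python
from enum import Enum

class CapabilityType(Enum):
    """COSO's eight GenAI capability types across the data-to-decision lifecycle."""
    DATA_INGESTION = "data ingestion and extraction"
    TRANSFORMATION = "transformation and enrichment"
    POSTING = "posting and record creation"
    ORCHESTRATION = "workflow orchestration and autonomous task execution"
    JUDGMENT = "judgment, forecasting, and insight generation"
    MONITORING = "monitoring and continuous review"
    REGULATORY = "regulatory intelligence and compliance"
    INTERACTION = "human-AI interaction and collaboration"

# Illustrative tiering only -- COSO does not rank the capability types.
# Capabilities that post records, act autonomously, or inform judgment
# warrant closer human oversight than ingestion or monitoring.
HIGH_OVERSIGHT = {
    CapabilityType.POSTING,
    CapabilityType.ORCHESTRATION,
    CapabilityType.JUDGMENT,
}

def risk_tier(capability: CapabilityType) -> str:
    """Return an indicative control tier for a capability type."""
    return "human-oversight" if capability in HIGH_OVERSIGHT else "standard"
```

Classifying use cases this way keeps the control question anchored to what the system does, whichever vendor or model sits underneath.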
How the Five COSO Components Apply to AI
Rather than proposing a new framework, the guidance maps generative AI risks onto the five components of COSO’s existing Internal Control–Integrated Framework, published in 2013, with GenAI-specific applications for each.
Control environment. Organisations should assign clear owners for each AI capability type, with defined authority, escalation paths, and documented scope of use. Prompts, system prompts, retrieval connectors, and transformation rules should be treated as governed configurations with version history, approval workflows, and rollback plans. The board must have visibility into GenAI use and its risks. COSO frames this as embedding AI governance into the broader control culture rather than treating it as a standalone compliance exercise.
Risk assessment. GenAI introduces risks that traditional control models were not designed for: hallucination, prompt manipulation, model drift, opaque reasoning, vendor configuration changes, and shadow AI deployment. COSO identifies these as requiring specific risk identification and assessment processes. The guidance notes that GenAI can be “confidently wrong, easily manipulated, or deployed outside formal oversight channels,” a description that maps directly to shadow AI risk (COSO, February 2026).
Control activities. Some capability types require especially robust controls. For data ingestion, COSO recommends confidence thresholds and mandatory human review for low-confidence extractions. For workflow orchestration, it recommends simulating and testing routing changes before production deployment, with documented routing logic so deviations are detectable. For judgment and forecasting, the guidance recommends citations for all material outputs, capture of contrary information when reviewers disagree, and hindsight analysis comparing forecasts to actual results (Accounting Today, February 2026).
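As an illustration of the data ingestion control, the sketch below gates each extraction on a confidence score, routes low-confidence items to human review, and logs every decision. The 0.85 threshold and the field names are assumptions made for the example; the guidance leaves specific thresholds to each organisation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative threshold -- set per use case; not a figure from the COSO guidance.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Extraction:
    source_id: str     # e.g. the inbound email or document identifier
    field: str         # the data point the model extracted
    value: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

def route_extraction(extraction: Extraction, evidence_log: list) -> str:
    """Auto-accept high-confidence extractions; queue the rest for human review.

    Every decision is appended to an evidence log, so the control produces
    audit-testable records as a by-product of operating.
    """
    decision = (
        "auto-accepted"
        if extraction.confidence >= CONFIDENCE_THRESHOLD
        else "queued-for-human-review"
    )
    evidence_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_id": extraction.source_id,
        "field": extraction.field,
        "confidence": extraction.confidence,
        "decision": decision,
    })
    return decision
```

The same pattern (a defined threshold, a mandatory review path, and a record of every decision) generates audit evidence simply by operating the control.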
Information and communication. The guidance addresses how AI-generated information flows through the organisation and reaches decision-makers. Where AI outputs feed into financial reporting or regulatory submissions, the information pathway from model output to final report must be documented and traceable.
Monitoring activities. COSO recommends combining real-time dashboards with scheduled deep-dive reviews. Human-in-the-loop quality reviews should use rubrics specific to each use case covering accuracy, completeness, and tone. Explicit triggers should be defined for retraining, reconfiguration, or rollback based on monitored metrics. A remediation log should record each issue, its root cause, corrective action, and follow-up testing. The guidance adds a pointed observation: monitoring systems themselves need monitoring to ensure detection logic remains accurate and relevant (Accounting Today, February 2026).
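One way those monitoring elements might fit together is a scheduled trigger check feeding a remediation log. The metric names and thresholds below are illustrative assumptions, not values from the guidance.

```python
from datetime import date

# Illustrative trigger thresholds -- each organisation defines its own KRIs.
TRIGGERS = {
    "hallucination_rate": 0.02,   # share of reviewed outputs failing the accuracy rubric
    "low_confidence_rate": 0.15,  # share of outputs routed to human review
}

def breached_triggers(metrics: dict[str, float]) -> list[str]:
    """Return the monitored metrics that have crossed their defined trigger."""
    return [name for name, limit in TRIGGERS.items() if metrics.get(name, 0.0) > limit]

def open_remediation(issue: str, root_cause: str, corrective_action: str) -> dict:
    """Open a remediation log entry; follow-up testing is recorded when complete."""
    return {
        "opened": date.today().isoformat(),
        "issue": issue,
        "root_cause": root_cause,
        "corrective_action": corrective_action,
        "follow_up_testing": None,
    }

# Example: weekly metrics from the dashboard feed the trigger check.
remediation_log = []
weekly = {"hallucination_rate": 0.035, "low_confidence_rate": 0.11}
for metric in breached_triggers(weekly):
    remediation_log.append(open_remediation(
        issue=f"{metric} exceeded its defined trigger",
        root_cause="to be determined in review",
        corrective_action="pause automated output pending reconfiguration",
    ))
```

COSO’s closing point, that monitoring systems themselves need monitoring, amounts to periodically retesting whether these triggers and rubrics still detect what they were designed to detect.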
The Shadow AI Connection
COSO explicitly identifies shadow AI as a risk within its framework. Doeren Mayhew’s analysis for financial institutions translated COSO’s control environment requirements to include a formal Acceptable Use Policy prohibiting unauthorised entry of customer or member nonpublic information into unsecured tools (Doeren Mayhew, February 2026).
IBM’s 2025 Cost of a Data Breach report found that 63 per cent of breached organisations either had no AI governance policy or were still developing one. Only 34 per cent of organisations with policies performed regular audits for unsanctioned AI. Among AI-related breaches specifically, 97 per cent lacked proper access controls on AI systems (IBM, July 2025). Nudge Security’s March 2026 review of the IBM report underscored that shadow AI incidents are adding hundreds of thousands of dollars to average breach costs.
The connection between COSO’s control environment component and shadow AI governance is direct. An organisation that cannot produce an inventory of its AI use cases, classified by capability type with named business owners, has not met the baseline control environment requirement. COSO’s guidance makes this an audit-testable standard rather than an aspiration.
The Six-Step Implementation Roadmap
COSO outlines a six-step implementation cycle. Centri Consulting’s February 2026 analysis and Doeren Mayhew’s financial services translation both provide practical commentary on operationalising each step.
Step 1: Establish an AI governance structure. Define who owns each AI use case, how decisions are made, and how risks are escalated. For SMEs, this requires a named person with authority to approve, restrict, or shut down AI use cases and a documented escalation path, not a standing committee.
Step 2: Inventory GenAI use cases. Identify all active and planned AI use cases, including purpose, data sources, owners, and dependencies; a minimal sketch of such an inventory record follows the roadmap below. This creates visibility into where GenAI is operating and surfaces shadow usage. Doeren Mayhew notes that if leadership cannot articulate where AI touches data, influences decisions, or executes tasks autonomously, governance has not kept pace with deployment.
Step 3: Evaluate use cases against COSO components. Focus on GenAI-specific risks: drift, hallucinations, bias, data exposure, and prompt manipulation. Determine risk criticality for each use case based on its capability type and the sensitivity of the data and decisions involved.
Step 4: Design and map controls. Develop controls that match the risk level for each use case, including human-in-the-loop checkpoints, validation routines, and governed configurations. COSO provides illustrative metrics for each capability type to support both operational monitoring and audit evidence collection.
Step 5: Implement and communicate. Deploy controls and train users on responsible AI interaction, output interpretation, and issue escalation. Communication extends beyond policy distribution to ensuring operational staff understand what the controls require of them in practice.
Step 6: Monitor and adapt. Track performance through defined KPIs and KRIs. Review changes to models, prompts, or data sources on a regular schedule. COSO emphasises that GenAI systems change faster than traditional IT systems, and monitoring cadences must reflect that pace.
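The roadmap is easier to operationalise when Steps 2, 4, and 6 share a single record per use case: the inventory entry carries its control mapping and monitoring thresholds with it. A minimal sketch follows; every field value is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseRecord:
    """One row of the GenAI use-case inventory (Step 2), carrying its control
    mapping (Step 4) and monitoring thresholds (Step 6) alongside it."""
    name: str
    capability_type: str              # one of COSO's eight capability types
    purpose: str
    data_sources: list[str]
    owner: str                        # named business owner with authority to act
    controls: list[str] = field(default_factory=list)
    kri_thresholds: dict[str, float] = field(default_factory=dict)
    review_cadence: str = "quarterly"

# Hypothetical entry -- the tool, owner, and thresholds are illustrative.
invoice_agent = UseCaseRecord(
    name="Invoice routing agent",
    capability_type="workflow orchestration and autonomous task execution",
    purpose="Route supplier invoices to approvers and create draft postings",
    data_sources=["AP inbox", "supplier master data"],
    owner="Finance Operations Manager",
    controls=[
        "pre-production simulation of routing changes",
        "documented routing logic",
        "exception report to the AP lead",
    ],
    kri_thresholds={"misrouting_rate": 0.01},
    review_cadence="monthly",
)
```

Whether this lives in a spreadsheet, a GRC platform, or a script matters less than that every use case has exactly one such record with a named owner.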
What This Means for Internal Audit
The publication gives internal audit functions an explicit mandate and a structured methodology for auditing AI controls. Each of the eight capability types includes minimum control expectations aligned to all five COSO components and illustrative metrics to support audit evidence collection.
For organisations subject to SOX or equivalent financial reporting controls, the implications are immediate. Where AI outputs feed into internal control over financial reporting, the COSO guidance applies directly. An AI system that generates journal entries, reconciles accounts, or produces forecasts used in management assertions is now subject to the same control documentation, testing, and monitoring requirements as any other ICFR process.
Centri Consulting’s analysis framed the practical consequence: SOX compliance, AI governance, COSO alignment, risk and control design, and third-party oversight all need to evolve to keep pace with GenAI adoption. Organisations that treat AI governance as separate from their internal control programme will find the audit gap widening as AI use expands (Centri Consulting, February 2026).
According to Practical DevSecOps’s AI Security Statistics 2026 compilation, only 24 per cent of enterprises had a dedicated AI security governance team in 2024, and just 9 per cent operated real-time AI model risk dashboards, though 67 per cent planned to have them by 2026. These are aggregated figures from multiple sources rather than a single primary study. The COSO guidance converts that planned investment from a discretionary initiative to an audit-readiness requirement.
What SMEs Should Do
COSO’s guidance was written with larger, regulated entities in mind. Its principles apply to organisations of any size, but implementation must be proportionate.
Start with the inventory. If leadership cannot list which AI tools are in use, what data they process, and who is responsible for each, no governance programme can be effective. The inventory does not need to be a formal register on day one. A spreadsheet listing every AI tool, its capability type, its data inputs, and its business owner is a workable starting point.
Classify by risk. Use COSO’s capability taxonomy to sort AI use cases into those that require human oversight (judgment, posting, orchestration) and those that carry lower risk (ingestion, monitoring). Apply controls proportional to the risk tier.
Treat prompts as governed configurations. For any AI use case that touches financial data, customer records, or regulatory reporting, the prompts and system instructions should be documented, version-controlled, and subject to change management. This is one of COSO’s most practical recommendations and requires no specialised tooling.
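A governed prompt configuration can be as modest as a versioned record kept under ordinary change management. The sketch below assumes a simple approve-and-rollback workflow; the class and field names are illustrative, not anything the guidance specifies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVersion:
    """One immutable version of a governed prompt configuration."""
    prompt_id: str
    version: int
    system_prompt: str
    approved_by: str   # reviewer sign-off required before activation
    change_note: str   # why the prompt changed, for the audit trail

class PromptRegister:
    """Version history with an explicit active pointer, so a rollback never
    deletes anything from the audit trail."""
    def __init__(self):
        self._history: dict[str, list[PromptVersion]] = {}
        self._active: dict[str, int] = {}

    def approve(self, entry: PromptVersion) -> None:
        versions = self._history.setdefault(entry.prompt_id, [])
        versions.append(entry)
        self._active[entry.prompt_id] = len(versions) - 1

    def active(self, prompt_id: str) -> PromptVersion:
        return self._history[prompt_id][self._active[prompt_id]]

    def roll_back(self, prompt_id: str) -> PromptVersion:
        """Point back to the previous approved version; nothing is removed."""
        if self._active[prompt_id] > 0:
            self._active[prompt_id] -= 1
        return self.active(prompt_id)

# Hypothetical usage: the prompt text, approver, and identifiers are examples.
register = PromptRegister()
register.approve(PromptVersion(
    prompt_id="invoice-extraction", version=1,
    system_prompt="Extract supplier name, amount, and due date from the invoice.",
    approved_by="AP Controller", change_note="Initial release",
))
```

A Git repository or a controlled spreadsheet achieves the same effect; the point is that no prompt change reaches production without a recorded approval and a route back to the previous version.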
Build audit evidence from the start. Every control implemented should generate evidence that an auditor could test: logs, review sign-offs, exception reports, or remediation records. Organisations that bolt governance onto AI deployments after the fact consistently find the evidence gap harder to close than the control gap.
Internal audit functions that learn to audit AI against COSO now will become essential advisors on AI risk. Those that wait will find the audit committee asking questions they cannot yet answer. The framework is available, free, and 30 pages. The window for treating AI governance as someone else’s problem has closed.
Related reading: ASIC v Bekier: first Australian judicial guidance on directors and AI | What is an AI governance framework? | AI compliance deadlines 2026 | AI Usage Policy Template (free download)
Sources
- Journal of Accountancy: COSO creates audit-ready guidance for governing generative AI (25 February 2026)
- COSO press release: Achieving Effective Internal Control Over Generative AI (23 February 2026)
- Accounting Today: COSO releases guidance on applying internal controls to AI (24 February 2026)
- Doeren Mayhew: COSO Releases Roadmap for Governing Generative AI (February 2026)
- Centri Consulting: From Guidance to Governance (26 February 2026)
- Radical Compliance: COSO Guidance on Generative AI Risks (23 February 2026)
- IBM Institute for Business Value: Go Further, Faster with AI (9 December 2025)
- IBM: Cost of a Data Breach 2025 (29 July 2025)
- Nudge Security: Shadow AI in IBM’s 2025 Cost of a Data Breach Report (14 March 2026)
- Practical DevSecOps: AI Security Statistics 2026 (8 March 2026)
- IBM Institute for Business Value: 2025 CEO Study (2,000 CEOs, February-April 2025)