A March 2026 update to an EU AI Act compliance tool offers a concrete look at how monitoring platforms are encoding the EU AI Office’s emerging expectations into product features: two-layer transparency signals, automated Article 14(4) safety checks, mandatory change-log tracking, and client-side handling of sensitive Annex IV documentation. For compliance teams building requirements lists now, these features point toward what auditors will expect to see from high-risk AI systems by August 2026.


The EU AI Act’s Documentation Demands

Annex IV specifies the technical documentation that must be created and maintained before deployment of a high-risk AI system. It covers the system’s general description and intended purpose, design specifications, training data and validation methodology, monitoring and logging capabilities, and the risk management system applied throughout development.

This is a substantive technical record available for inspection by national market surveillance authorities, not a policy checklist. The documentation must also be versioned: Annex IV Section II requires records of hardware and algorithmic modifications. A system that evolves after initial deployment must maintain a traceable history of those changes.

Article 14 requires human oversight measures to be designed into high-risk AI systems. Article 14(4) specifically addresses protection of human overseers against automation bias, the tendency to over-rely on AI outputs and discount conflicting human judgement.

Only 24% of enterprises had a dedicated AI security governance team in 2024, and just 9% operated real-time AI model risk dashboards, though 67% planned to have them by 2026 (Practical DevSecOps, AI Security Statistics 2026, March 2026). The tools now being built are responding directly to that planned investment.


What v2.1.0 Actually Added

The AI-Report Tool (aireporttool.eu), focused on automated Annex IV report generation, published version 2.1.0 on 7 March 2026. The update log explicitly references alignment with the EU AI Office’s Code of Practice, second draft, dated 5 March 2026.

Four additions are noteworthy for compliance teams, not as an endorsement of this vendor but as reference patterns for what operationalised compliance looks like.

Two-layer transparency framework. The update implements L1 (machine-readable metadata) and L2 (human-visible cues) as distinct transparency outputs. Machine-readable metadata serves automated compliance checking and downstream operators. Human-visible transparency cues serve end users, human overseers, and individuals affected by AI-assisted decisions. A tool that produces only a PDF summary satisfies neither requirement cleanly.
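The pattern is easier to see in code. Below is a minimal Python sketch of deriving both layers from a single system record, so the machine-readable and human-visible outputs cannot drift apart. The field names, schema, and notice wording are illustrative assumptions, not the tool's actual output format or any official Annex IV vocabulary.

```python
import json
from datetime import date

# Illustrative system record; field names are assumptions, not the
# tool's actual schema or an official Annex IV vocabulary.
system = {
    "system_name": "loan-triage-model",
    "provider": "ExampleCorp",
    "risk_class": "high",
    "intended_purpose": "Prioritise loan applications for human review",
    "last_substantial_modification": date(2026, 3, 1).isoformat(),
}

# L1: machine-readable metadata for automated compliance checks
# and downstream operators.
l1_metadata = json.dumps(system, indent=2)

# L2: human-visible cue for end users and affected individuals,
# derived from the same record so the two layers stay consistent.
l2_notice = (
    f"This decision was assisted by the AI system "
    f"'{system['system_name']}' (provider: {system['provider']}). "
    f"A human reviewer makes the final decision. "
    f"Purpose: {system['intended_purpose']}."
)

print(l1_metadata)
print(l2_notice)
```

Deriving both layers from one record is the design point: a PDF generated separately from the metadata can silently diverge from it; a shared source of truth cannot.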

Article 14(4) safety checks. Version 2.1.0 added automated checks specifically addressing organisational protections for human overseers against automation bias. It prompts for and assesses whether specific controls prevent overseers from defaulting uncritically to AI recommendations. Human oversight is an architectural requirement, not a checkbox, and its adequacy depends on organisational design as much as technical features.
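As a rough illustration of what such automated checks might look like in practice, the sketch below models a control checklist and flags gaps where a control is unimplemented or lacks supporting evidence. The questions and data structure are assumptions paraphrasing the kinds of prompts described above, not the tool's actual check set.

```python
from dataclasses import dataclass

# Illustrative Article 14(4) control checklist; questions are
# assumptions, not the tool's actual prompts.
@dataclass
class OversightControl:
    question: str
    implemented: bool
    evidence: str = ""

controls = [
    OversightControl(
        "Are overseers trained to recognise automation bias?", True,
        "Annual training module, completion logs retained"),
    OversightControl(
        "Can overseers override the AI recommendation without escalation?",
        False),
    OversightControl(
        "Are override rates monitored for signs of rubber-stamping?", False),
]

def assess_article_14_4(controls):
    """Flag controls lacking implementation or supporting evidence."""
    return [c for c in controls if not (c.implemented and c.evidence)]

for gap in assess_article_14_4(controls):
    print(f"GAP: {gap.question}")
```

Note that the check demands evidence, not just a yes/no answer; a control asserted without documentation is treated as a gap.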

Enhanced Annex IV versioning. The update reinforced mandatory versioning and change-log tracking for hardware and algorithmic modifications. Organisations using third-party AI systems need to ask suppliers for this documentation. It should not be assumed to exist.
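A minimal, append-only change-log record of the kind Annex IV versioning implies might look like the following Python sketch. The specific fields are an assumption about what a traceable modification history needs (date, author, change type, description), not a prescribed format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Minimal append-only change log for Annex IV-style versioning;
# the record fields are illustrative assumptions.
@dataclass
class ChangeRecord:
    version: str
    change_type: str          # e.g. "algorithmic" or "hardware"
    description: str
    author: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[ChangeRecord] = []

def record_change(version, change_type, description, author):
    """Append a dated, attributable entry; entries are never edited."""
    entry = ChangeRecord(version, change_type, description, author)
    log.append(entry)
    return entry

record_change("2.1.0", "algorithmic",
              "Retrained ranking model on Q1 2026 data", "ml-team")
print(json.dumps([asdict(e) for e in log], indent=2))
```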

ISO/IEC 23894:2023 alignment. In the absence of harmonised EU standards, the tool references ISO/IEC 23894:2023 for AI risk management. The AI Act grants a presumption of conformity to high-risk systems that follow harmonised standards; until those standards are published, ISO/IEC 23894 is the most defensible reference framework available.


Zero-Storage Design as a Privacy Pattern

Version 2.0.0, released in February 2026, introduced client-side processing: all audit logic runs locally, meaning sensitive AI technical documentation does not pass through or rest on the tool provider’s servers.

Annex IV documentation contains detailed records of training data, model architecture, and system capabilities. That information is commercially sensitive and, where it relates to personal data used in training or inference, potentially within scope of GDPR data processing obligations. Client-side design eliminates the secondary processing relationship. For organisations evaluating compliance tools, this is a procurement question worth asking: where does sensitive documentation go, and what is the supplier’s GDPR legal basis for processing it?
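To make the zero-storage idea concrete, the sketch below runs an audit check entirely against a local file and emits only findings plus a content hash, so no documentation text needs to leave the machine. The file name, keyword checks, and hash-based attestation are illustrative stand-ins, not the tool's actual audit logic.

```python
import hashlib
import json
from pathlib import Path

# Stand-in documentation file so the demo is self-contained.
Path("annex_iv_documentation.txt").write_text(
    "Intended purpose: ...\nRisk management: ...", encoding="utf-8")

def audit_locally(doc_path: Path) -> dict:
    """Run the audit on the local file; only findings and a hash
    are produced, never the documentation content itself."""
    text = doc_path.read_text(encoding="utf-8")
    return {
        "has_intended_purpose": "intended purpose" in text.lower(),
        "has_risk_management": "risk management" in text.lower(),
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }

report = audit_locally(Path("annex_iv_documentation.txt"))
print(json.dumps(report, indent=2))  # stays on the local machine
```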


What This Means for Compliance Teams

Tools implementing these features are not setting universal regulatory requirements. The EU AI Office’s Code of Practice is still in draft and specific patterns will evolve before August 2026. Treat these features as reference patterns for the direction of travel, not mandated specifications.

Build toward two-layer transparency now. Plan for both machine-readable compliance outputs and human-readable disclosures for any high-risk AI system. A single document format will not satisfy both the technical documentation requirements and the transparency obligations to affected individuals.

Design human oversight with automation bias controls. Article 14(4) is specific. Oversight mechanisms need to be designed so human reviewers exercise genuine independent judgement. That requires training, interface design, and accountability structures, not just a sign-off workflow.

Implement version control for all AI system changes. Treat AI system modifications the way regulated financial systems treat software changes: documented, dated, and auditable. Any system without a change log cannot demonstrate conformity with Annex IV Section II.
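One way to make a change log auditable rather than merely present is to chain entries cryptographically, git-style, so any after-the-fact tampering is detectable. The sketch below is one possible construction under that assumption, not a format the Act mandates.

```python
import hashlib
import json

# Tamper-evident change log: each entry commits to the hash of the
# previous one, so rewriting history breaks verification.

def append_entry(chain, entry: dict) -> dict:
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    payload = json.dumps({**entry, "prev_hash": prev_hash}, sort_keys=True)
    entry = {**entry, "prev_hash": prev_hash,
             "entry_hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(entry)
    return entry

def verify(chain) -> bool:
    """Recompute every hash; any edited or reordered entry fails."""
    prev = "0" * 64
    for e in chain:
        payload = json.dumps(
            {k: v for k, v in e.items() if k != "entry_hash"},
            sort_keys=True)
        if (e["prev_hash"] != prev or
                e["entry_hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = e["entry_hash"]
    return True

chain = []
append_entry(chain, {"version": "2.1.0", "change": "model retrain"})
assert verify(chain)
```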

Audit your AI documentation supply chain. Where third-party AI systems are in use for high-risk applications, the deploying organisation is accountable for the Annex IV documentation. If the supplier cannot provide it, the deployer faces an audit gap regardless of their own internal governance quality.

For the broader August 2026 compliance picture, the EU AI Act guide for Australian businesses covers the full range of high-risk obligations. The AI compliance deadlines guide covers the complete 2026 regulatory calendar.


Regulators Expect Evidence-Ready Systems

As EU AI Act enforcement approaches, the standard for compliance documentation is converging on the model used in regulated financial reporting: evidence-ready, metadata-rich, and capable of producing a complete audit trail on demand.

Fines under the EU AI Act reach €35 million or 7% of global annual turnover for prohibited practices, and €15 million or 3% for non-compliance with high-risk obligations (EU AI Act, 2024). Gartner forecasts that more than 50% of large enterprises will face mandatory AI compliance audits by 2026. Organisations that wait until mid-2026 to begin building structured, versioned, two-layer documentation programmes will find themselves behind the audit curve.


Related reading: EU AI Act: What Australian Businesses Need to Know | AI Compliance Deadlines in 2026 | What Is an AI Governance Framework?


Stay across AI governance and compliance developments. Subscribe to the Shadow AI Watch newsletter.


Sources