New York has passed a law requiring generative AI systems to warn users that outputs may be inaccurate. Utah has enacted provenance standards for AI-generated content. The EU is designing a potential standard icon for AI-generated material. These are no longer ethics guidelines. They are hard UI requirements, consent obligations, and penalty regimes that apply to any business whose AI features reach users in those jurisdictions.

Three Jurisdictions, One Direction

In the first two weeks of March 2026, three separate jurisdictions moved AI transparency from voluntary best practice to statutory obligation. New York’s legislature passed a generative AI warning bill on 9 March. Utah’s Digital Content Provenance Standards Act was enrolled and takes effect on 6 May 2026. The European Commission published a second draft of its Code of Practice on AI-generated content under Article 50 of the EU AI Act on 5 March, proposing a potential standardised EU icon for AI-generated material.

Each addresses a different facet of AI transparency. New York targets output accuracy warnings. Utah targets content provenance and digital authenticity. The EU targets systematic labelling across the single market. Taken together, they signal that businesses building or embedding AI features will need to ship notices, labels, and provenance metadata as standard, not as optional extras.

New York: Generative AI Must Warn Users About Inaccuracy

Assembly Bill A3411B, passed by the New York legislature on 9 March 2026, requires the owner, licensee, or operator of a generative AI system to “clearly and conspicuously display a notice on the system’s user interface that the outputs of the generative artificial intelligence system may be inaccurate” (New York State Legislature). The bill defines generative AI broadly: any class of AI models that is self-supervised and emulates the structure of input data to generate synthetic content, including images, video, audio, text, and other digital content.

The bill awaits Governor Hochul’s signature. If signed, it takes effect 90 days from enactment. Troutman Pepper noted that an earlier draft required warnings that outputs may be “inaccurate and/or inappropriate,” but the final version dropped the “inappropriate” element. The bill does not specify what format satisfies “clear and conspicuous,” leaving that to future interpretation (Troutman Pepper, March 2026).

Penalties under the final text run up to $1,000 per violation; each user who does not receive the warning constitutes a separate violation, counted per instance. For a generative AI tool with thousands of users, the exposure scales quickly. The bill is short at 30 lines, but its scope is broad: any generative AI system accessible to New York users falls within reach, regardless of where the operator is based.
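To make that scaling concrete, a back-of-envelope calculation of theoretical maximum exposure under the per-violation cap (the user and session counts below are illustrative examples, not figures from the bill):

```python
# Illustrative exposure estimate under A3411B's $1,000-per-violation cap.
# User and session counts are hypothetical, not figures from the bill.
PENALTY_PER_VIOLATION = 1_000  # statutory maximum, USD

def max_exposure(users_without_notice: int, sessions_per_user: int) -> int:
    """Each user who never sees the warning is a separate violation,
    counted per instance (approximated here as per session)."""
    return users_without_notice * sessions_per_user * PENALTY_PER_VIOLATION

# A tool with 5,000 New York users averaging 10 sessions each:
print(max_exposure(5_000, 10))  # 50000000 -> a $50M theoretical maximum
```

Actual awards would depend on enforcement discretion and how courts read “per instance,” but the arithmetic shows why even a small user base makes the notice worth shipping.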

Utah: Digital Content Provenance Standards

Utah’s HB 276, the Artificial Intelligence Modifications bill, enacts the Digital Content Provenance Standards Act. The enrolled bill takes effect on 6 May 2026 (Utah State Legislature). The provenance provisions address how AI-generated content is labelled and tracked, requiring standards for authenticity and origin of digital content produced by AI systems.

Utah has been a consistent early mover on AI legislation. In 2025, Governor Cox signed the Artificial Intelligence Policy Act (HB 149), which clarified that using an AI system is not a defence for violating state consumer protection laws and required regulated professionals to disclose when a person is interacting with generative AI. HB 276 builds on that foundation by shifting from disclosure at the point of interaction to provenance at the point of creation.

For businesses generating AI content that reaches Utah users, this means content provenance is moving from a technical nicety to a compliance requirement. Provenance standards typically involve embedding metadata or markers that indicate when, how, and by what system content was generated. The C2PA (Coalition for Content Provenance and Authenticity) standard, backed by Adobe, Microsoft, and others, is the leading technical framework in this space and is referenced in multiple legislative discussions internationally.
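To illustrate the kind of facts a provenance standard records, here is a deliberately simplified sketch. Real C2PA manifests are cryptographically signed binary structures embedded in the asset itself; the JSON stub below is not the C2PA format, and the field names (borrowing C2PA's "claim generator" concept) are illustrative only:

```python
import json
from datetime import datetime, timezone

def build_provenance_manifest(generator: str, model: str, prompt_hash: str) -> str:
    """Simplified, unsigned sketch of a provenance record.
    Illustrates the categories of information provenance standards capture:
    who generated the content, with what system, and when."""
    manifest = {
        "claim_generator": generator,   # tool that produced the content
        "ai_model": model,              # model identifier
        "prompt_digest": prompt_hash,   # hash of the prompt, not the prompt itself
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "assertion": "content is AI-generated",
    }
    return json.dumps(manifest, indent=2)

print(build_provenance_manifest("example-app/1.0", "example-model", "sha256:abc123"))
```

Teams adopting C2PA in earnest would use the official tooling and signing infrastructure rather than hand-rolled JSON, but the record-keeping discipline starts with capturing these facts at generation time.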

The EU: A Potential Standard Icon and Two-Layer Marking

Article 50 of the EU AI Act requires providers and deployers to ensure that AI-generated content (including synthetic audio, images, video, and text) is marked in a machine-readable format and is detectable as artificially generated or manipulated. The Commission is operationalising this through a Code of Practice, the second draft of which was published on 5 March 2026.

Simmons & Simmons reported that the second draft proposes a two-layered marking approach: secured metadata embedded in content files combined with visible watermarking. The draft also proposes a potential EU icon to indicate AI-generated content, designed for consistent recognition across the single market. Finalisation is expected by June 2026 (Simmons & Simmons, 17 March 2026).

For businesses selling into the EU, the Code of Practice sits alongside the AI Act’s binding obligations. While codes of practice are soft instruments, compliance with them is treated as a strong indicator of meeting the Act’s requirements. The AI Office, which has sole authority over general-purpose AI (GPAI) model compliance, has the power to request adherence and to use non-compliance as grounds for further investigation.

The Convergence Problem for Businesses

These three regimes are evolving independently. New York’s bill says nothing about provenance metadata. Utah’s provenance standards say nothing about output accuracy warnings. The EU’s Code of Practice addresses both marking and labelling but through a different technical framework than either US state. Businesses operating across jurisdictions face a layering problem: meeting one set of requirements does not automatically satisfy the others.

The state-level AI legislative tracker maintained by Troutman Pepper identified movement on AI bills in dozens of US states during the first 10 weeks of 2026 alone, including chatbot regulation, employment AI disclosure, content provenance, and health care AI rules. Washington passed provenance and chatbot bills during the same period. Oregon advanced SB 1546, regulating AI companions with a private right of action and statutory damages. The pace of state-level activity means the compliance surface is expanding faster than most SME legal teams can track.

What Businesses Should Do

The practical response to a messy multi-jurisdictional landscape is to standardise on a conservative baseline rather than customise per jurisdiction.

Add output accuracy notices wherever generative AI produces content. New York’s requirement is specific, but the principle extends logically to any jurisdiction where consumer protection law applies to AI outputs. A clear, persistent notice that AI-generated content may be inaccurate is low-cost to implement and high-value as a default.
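One low-cost way to make the notice a default rather than an afterthought is to attach it at the point where AI output leaves the system. A minimal sketch, with wording that is illustrative only (what counts as “clear and conspicuous” under A3411B is not yet settled):

```python
# Illustrative default notice; A3411B does not prescribe exact wording.
ACCURACY_NOTICE = (
    "AI-generated content: outputs of this generative AI system "
    "may be inaccurate."
)

def with_accuracy_notice(ai_output: str) -> str:
    """Prepend a persistent accuracy notice to any generative AI output,
    so no code path can return AI content without the warning attached."""
    return f"{ACCURACY_NOTICE}\n\n{ai_output}"

print(with_accuracy_notice("Here is a summary of your document..."))
```

In a real product the notice would live in the UI layer rather than the text stream, but routing all AI output through one wrapper makes the obligation auditable.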

Implement content provenance metadata for AI-generated material. Utah and the EU are converging on the principle that AI-generated content should carry embedded provenance data. The C2PA standard provides a ready-made technical framework. Businesses generating marketing content, product descriptions, or customer-facing material with AI should be embedding provenance now.

Audit where AI features reach users in regulated jurisdictions. A SaaS product with a chatbot feature accessible to New York users is in scope for A3411B even if the company has no physical presence in the state. Businesses should map which AI features are user-facing, which jurisdictions those users sit in, and which transparency obligations apply.
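The mapping exercise can start as a simple inventory. The sketch below uses hypothetical feature names and a deliberately coarse obligation table; a real audit would be maintained by counsel, not code, but structuring it this way makes gaps visible:

```python
# Coarse, illustrative obligation table keyed by jurisdiction reached.
OBLIGATIONS = {
    "NY": ["output accuracy notice (A3411B)"],
    "UT": ["content provenance metadata (HB 276)"],
    "EU": ["machine-readable marking + visible label (AI Act Art. 50)"],
}

# Hypothetical inventory of user-facing AI features.
features = [
    {"name": "support chatbot", "jurisdictions": ["NY", "EU"]},
    {"name": "AI product descriptions", "jurisdictions": ["UT", "EU"]},
]

def audit(features):
    """Return the transparency obligations each feature attracts,
    based on the jurisdictions its users sit in."""
    report = {}
    for f in features:
        report[f["name"]] = sorted(
            ob for j in f["jurisdictions"] for ob in OBLIGATIONS.get(j, [])
        )
    return report

for name, obligations in audit(features).items():
    print(name, "->", obligations)
```

Even this toy version surfaces the layering problem: a single chatbot feature reaching both New York and EU users already carries two distinct obligation sets.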

Watch for EU iconography standards. If the Code of Practice is finalised by June 2026 with a standard AI icon, businesses selling into the EU will need to integrate it into their content labelling. Early adoption of the proposed standard reduces rework later.

Businesses that build a simple, conservative transparency model now, combining clear AI labelling, in-product accuracy notices, and basic provenance metadata, will be better positioned across this fragmented landscape than those trying to track and satisfy each jurisdiction separately. That conservative baseline also reduces shadow AI risk: when AI use is labelled and visible, it is governed. When it is unlabelled and invisible, it is shadow AI by definition.

Related reading: AI Compliance Deadlines 2026 | Federal AI Preemption Moves From Executive Order to Legislative Blueprint | EU AI Act Enforcement Is Behind Schedule

Sources