“The development and use of artificial intelligence is expanding with breathtaking speed and reach across the world,” Connecticut Attorney General William Tong wrote in a memorandum released on 25 February 2026. “One need look no further than the proliferation of AI generated nonconsensual sexual abuse images on xAI’s Grok platform to see that they can also cause significant harm.”
That harm, in Tong’s argument, is already covered by Connecticut law. The memo maps AI to four bodies of existing statute: civil rights, privacy and data security, consumer protection, and antitrust. The message to businesses waiting for “AI-specific” laws before acting is direct. Regulators do not need new legislation to enforce against AI-related harm. They already have the tools.
What the memo actually says
The memorandum is not new legislation. It is a public advisory addressed to state officials, agencies and members of the public, explaining how four bodies of Connecticut law apply to AI systems: civil rights, privacy and data security (via the Connecticut Data Privacy Act), consumer protection (via the Connecticut Unfair Trade Practices Act), and antitrust.
“Individuals tend to use AI in search of a quick answer, but businesses use AI for a range of reasons,” Tong wrote. “These include, but are not limited to, tenant screenings for rentals, employment decisions, credit risk and loan decisions, insurance claims, and targeted consumer ads. It is imperative that consumers understand how AI impacts their lives, and how their data is compiled and used to train these tools.” In each of those use cases, his office argues, the same legal obligations that apply to human decision-making apply equally when those decisions are made or assisted by AI.
The Grok case: what “existing laws” already cover
Tong’s use of the Grok example was deliberate. xAI’s chatbot has, over recent months, been documented generating nonconsensual sexual abuse imagery, including imagery depicting minors and identifiable adults. There is no Connecticut AI-specific law against this. There does not need to be. The conduct already runs into multiple existing legal frameworks: criminal child sexual abuse material laws at federal and state level, Connecticut’s civil rights protections against harassment and discrimination, the Connecticut Data Privacy Act’s sensitive data and minors’ data provisions, and CUTPA’s prohibition on unfair and deceptive practices.
The point of using Grok as the lead example is not that it is the worst AI harm; it is that it is the clearest. No platform can plausibly argue that nonconsensual abuse imagery is a grey area. And if existing law reaches the most extreme conduct, it also reaches the less extreme conduct: discriminatory tenant screening, deceptive AI-generated marketing claims, biased credit scoring. The Grok case is the high-water mark Tong is using to set the principle.
One caveat. This is a forward-looking enforcement posture, not yet a long line of decided AI cases under Connecticut law. The memo is fresh and the test cases are still to come. But the AG’s office has signalled the framework it intends to apply, and Tong’s track record on consumer protection enforcement suggests the signal is not rhetorical.
Privacy obligations under the CTDPA
The Connecticut Data Privacy Act (CTDPA) forms the backbone of the memo’s AI-related privacy framework. The key obligations the memo highlights for AI deployers include:
• clear privacy notice disclosures that personal data may be used in AI models;
• lawful sharing requirements when buying third-party datasets to train AI (the original collector must have disclosed this use);
• change-of-use notices when AI processing was not part of the original data collection purpose;
• data protection assessments for any AI processing presenting a heightened risk of harm, including processing of sensitive data such as health information, biometric data, and precise geolocation; and
• special safeguards for minors’ data.
From 1 July 2026, amendments to the CTDPA will expand the law’s reach further. Businesses will be prohibited from profiling minors, as defined in the CTDPA amendments, in connection with automated decisions related to financial services, housing, insurance, education, employment, healthcare, or access to essential goods unless strictly necessary to provide the service. This provision alone signals that AI-driven youth targeting and profiling will receive close scrutiny.
Consumer protection and enforcement teeth
The Connecticut Unfair Trade Practices Act (CUTPA) gives the Attorney General’s office enforcement power against deceptive or unfair business practices, including those carried out through AI systems. The memo makes clear this covers AI-generated marketing claims, algorithmic pricing that may be deceptive, and AI systems that produce discriminatory outcomes in housing, credit, or employment.
Chris Davis, vice president of public policy at the Connecticut Business and Industry Association, framed the practical implications for the business community: “Connecticut’s existing civil rights, consumer protection, privacy, and antitrust laws provide meaningful protections for both consumers and businesses when it comes to the use of artificial intelligence.” Translation: businesses that thought they were operating in a legal vacuum on AI are not. The AG’s office has injunctive relief, restitution, and the ability to seek civil penalties.
The memo also notes that federal antidiscrimination statutes, including the Fair Housing Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Equal Credit Opportunity Act, remain fully enforceable regardless of the Trump administration’s rescission of earlier federal AI guidance. Algorithmic discrimination, in Tong’s framing, remains discrimination.
The political backdrop in Connecticut
Tong’s memo did not emerge from a state with a settled approach to AI. Connecticut Governor Ned Lamont has repeatedly pushed back on comprehensive AI legislation, warning that heavy regulation risks signalling Connecticut is unwelcoming to technology sector investment. Multiple comprehensive AI bills introduced in previous sessions failed.
Sen. James Maroney, a Democrat from Milford and the Connecticut General Assembly’s lead voice on AI legislation, has shifted strategy this year. Rather than pushing one comprehensive bill, the legislature is now targeting specific issues: data privacy, consumer protection, online safety for minors, and AI workforce capacity. Tong has appeared alongside Maroney at press events backing the targeted approach. The memo fits that strategy: in the absence of comprehensive AI law, the AG is signalling that existing law already does the work.
Why this matters beyond Connecticut
Connecticut is one state. But the memo’s logic applies everywhere that general-purpose privacy, consumer protection and civil rights laws exist. And that is almost everywhere.
Australian regulators have not issued an equivalent AI memo, but their existing powers operate in similar ways. The OAIC already has enforcement powers under the Privacy Act that extend to automated decision-making, with the December 2026 ADM transparency requirement adding a specific AI overlay. ASIC is already using existing market integrity and consumer protection provisions to scrutinise AI in financial services, including its examination of 23 Australian lenders’ AI systems published in REP 798. The ACCC has powers under the Competition and Consumer Act that apply to AI-driven pricing, advertising and consumer outcomes. None of these regulators needs AI-specific legislation to act, and at least one (ASIC) has already demonstrated it will not wait. The point is not that Australia has a formal equivalent of the memo; it is that the same enforcement architecture already exists there, ready to be used.
The Connecticut memo is likely a template. Both Hunton Andrews Kurth and Orrick flagged the memo as significant because it provides a roadmap other state attorneys general can follow without needing to pass new laws. The pattern is already emerging: regulators are mapping existing powers to AI uses, and enforcement is following the map.
What should AI deployers do now?
Conduct data protection impact assessments for high-risk AI processing. If your AI system handles sensitive data, profiles individuals, or influences decisions about employment, credit, housing or insurance, assess it now. The CTDPA already requires this. The EU AI Act requires it for high-risk systems from August 2026. The Australian Privacy Act amendments will require it from December 2026. The governance work is the same regardless of which jurisdiction triggers it first.
Update privacy notices to describe AI processing. If personal data is being used to train models, generate outputs, or inform automated decisions, disclose it. The CTDPA requires clear and meaningful disclosure. So does the GDPR. Silence on AI processing in a privacy policy is now a compliance risk, not an oversight.
Audit AI-driven consumer-facing systems for fairness and accuracy. If an AI system is making or influencing decisions about people, including tenant screening, loan approvals, insurance pricing, hiring, or ad targeting, test it for discriminatory outcomes and document the results. The memo makes clear that “black box” opacity will not serve as a defence.
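What "test for discriminatory outcomes and document the results" means in practice can be as simple as computing selection rates by group and applying the four-fifths (disparate impact) rule of thumb used in US employment law. The sketch below is illustrative only; the decision log and group labels are hypothetical, and a real audit would use actual system outputs and legally appropriate protected categories.

```python
# Illustrative disparate-impact check using the "four-fifths rule":
# compare each group's favourable-outcome rate against the most favoured
# group's rate. All data below is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def impact_ratios(decisions, threshold=0.8):
    """Return {group: (ratio_vs_best_group, passes_threshold)}."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Hypothetical tenant-screening log: group A approved 80/100, group B 50/100
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 50 + [("B", False)] * 50)
print(impact_ratios(log))  # group B falls below the 0.8 threshold
```

A result below the threshold is not automatically unlawful, but it is exactly the kind of documented evidence a regulator will ask for, and the documentation itself is the point: it shows the system was examined rather than treated as a black box.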
Stop waiting for AI-specific laws. The Connecticut memo is a clear signal that the regulatory risk from AI is not future. It is present. Existing privacy, consumer protection and civil rights laws already apply. Organisations that build governance around these existing obligations will be ready when AI-specific laws arrive. Those that wait will be playing catch-up under enforcement pressure.
If you are not based in the US
The specifics of the Connecticut memo are state-specific, but the underlying logic translates directly. Whatever jurisdiction you operate in, the governance work is the same:
• Map your AI use cases to existing laws. Privacy, consumer protection, anti-discrimination, advertising standards, sector-specific licensing rules: most jurisdictions already have these. Identify which apply to which AI deployments.
• Update DPIA or FRIA templates to cover AI processing. Whatever your impact-assessment regime is called, make sure AI is in scope.
• Tighten marketing and disclosure review for AI-generated claims. False or misleading AI outputs are not a special category of consumer harm. Existing advertising and consumer protection rules apply.
• Document human oversight and bias testing. These will be expected under the EU AI Act, the Australian December 2026 ADM transparency requirements, and any future regulator-issued guidance.
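One concrete way to operationalise the mapping and documentation steps above is a simple AI use-case register: each deployment, the existing laws identified as applying to it, and the status of its governance work. The sketch below is a minimal illustration; every entry, field name, and law list is a hypothetical example, not a legal determination.

```python
# Minimal sketch of an AI use-case register: map each deployment to the
# existing laws identified as applying, and flag outstanding governance work.
# All entries below are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    decision_area: str             # e.g. housing, credit, employment
    applicable_laws: list          # existing statutes identified in the mapping
    impact_assessment_done: bool = False
    bias_tested: bool = False

    def gaps(self):
        """Return the governance steps still outstanding for this use case."""
        out = []
        if not self.impact_assessment_done:
            out.append("impact assessment")
        if not self.bias_tested:
            out.append("bias testing")
        return out

register = [
    AIUseCase("tenant-screening-model", "housing",
              ["CTDPA", "CUTPA", "Fair Housing Act"],
              impact_assessment_done=True),
    AIUseCase("ad-targeting-engine", "marketing",
              ["CTDPA", "CUTPA"], bias_tested=True),
]

for uc in register:
    if uc.gaps():
        print(f"{uc.name}: missing {', '.join(uc.gaps())}")
```

The value of a register like this is less the code than the discipline: it forces each AI deployment to be named, mapped to specific existing laws, and tracked against the assessment and testing work those laws imply.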
The bottom line
“Waiting for AI laws” is no longer a strategy. Connecticut has shown that existing legal frameworks are broad enough to reach AI-related harm, and that the AG’s office is willing to use them. As Tong put it in the memo’s conclusion: “The Office of the Attorney General continues to be at the forefront of holding individuals accountable for violating our laws and harming the residents of Connecticut, including those who use algorithms and evolving technology to do so.” Other regulators, in Australia and elsewhere, will follow the same playbook. The organisations that treat AI governance as a current obligation under current law, rather than a future obligation under future law, are the ones that will avoid the enforcement actions heading toward everyone else.
Sources
- Connecticut Attorney General, “Memorandum on the Application of Existing Laws to Artificial Intelligence,” 25 February 2026. portal.ct.gov
- Connecticut Attorney General, press release, 25 February 2026. portal.ct.gov
- Hunton Andrews Kurth, “Connecticut AG Clarifies AI Compliance Obligations Under CTDPA,” March 2026. hunton.com
- Orrick, Herrington & Sutcliffe LLP, “Connecticut attorney general releases guidance on application of state and federal laws to AI use,” 5 March 2026. jdsupra.com
- Connecticut Business & Industry Association, “AG Confirms Existing State Laws Cover AI Use,” 27 February 2026. cbia.com
- BABL AI, “Connecticut Attorney General Issues AI Legal Guidance, Warns Businesses of Existing Liability Under State Law,” 3 March 2026. babl.ai
- Daily Campus, “CT Attorney General Issues Warning: Artificial Intelligence Already Covered by Current Laws,” 10 March 2026. dailycampus.com
- CT Mirror, “In final weeks of CT session, AI policy bills come into focus,” 3 April 2026. ctmirror.org