“This Department of Justice will not tolerate discriminating against U.S. workers, no matter who, or what, drafts a job advertisement, or whether it is an employee, a recruiter, or an AI tool.” That was Assistant Attorney General Harmeet K. Dhillon of the Civil Rights Division, announcing a settlement on 25 February 2026 with Virginia IT firm Elegant Enterprise-Wide Solutions over job advertisements that an AI tool had drafted with unlawful citizenship restrictions.
Six weeks later, on 6 April 2026, the same Civil Rights Division announced a USD 313,420 settlement with Compunnel Software Group, a New Jersey IT staffing firm, over more than 50 job advertisements that excluded US citizens and permanent residents in favour of H-1B and other visa holders. The Compunnel ads were not all AI-generated, but the pattern is now clear: the DOJ is policing AI-assisted recruitment using laws that have been on the books since 1986. The legal exposure exists today, and so does the compliance gap most employers have left open.
What the two settlements actually say
The Elegant case is the cleaner AI story. According to the settlement agreement, dated 23 February 2026, the company posted job ads generated by an AI tool that restricted applicants to H-1B, OPT, or H-4 visa holders. All three are temporary work authorisations, and none is a category an employer is permitted to require absent a specific legal exception. Elegant agreed to pay USD 9,460 in civil penalties to the US Treasury, conduct training within 60 days for all staff involved in job ads, recruitment, hiring or employment eligibility verification, and submit to a three-year monitoring period during which the DOJ can inspect premises, interview staff, and demand compliance reports.
The Compunnel case is larger in scale and money but covers similar conduct without the explicit AI label. A DOJ investigation found at least ten Compunnel recruiters had posted more than 50 discriminatory job advertisements. The company agreed to pay USD 58,000 in back pay to a US citizen who was excluded from a Python developer role on citizenship grounds, plus USD 255,420 in civil penalties to the Treasury. The agreement runs for two years and requires mandatory training, monitoring reports, and revised internal policies. Compunnel is the ninth settlement under the DOJ's Protecting U.S. Workers Initiative since the programme was relaunched in 2025.
Both cases were brought by the Immigrant and Employee Rights Section under Section 1324b of the Immigration and Nationality Act, which prohibits citizenship-status discrimination in recruiting and hiring. The law applies to employers with four or more employees. It has applied to citizenship-status restrictions in job ads since 1986, when the Immigration Reform and Control Act added Section 1324b. The novelty is not the legal theory. It is the enforcement focus and the willingness to name AI explicitly as part of the violation.
Why the AI angle matters
The Elegant case settled the question employers were still asking each other in private. If a third-party AI tool drafts a job ad with discriminatory content, who is liable? The DOJ’s answer is the employer, not the vendor or the model provider.
Berkshire Associates, a recruitment compliance consultancy, summarised the implication directly: “When enforcing federal anti-discrimination laws, the government will not distinguish between whether the potential violation was carried out by an employee of the company or an AI tool.” The employer owns the output regardless of who or what generated it.
This shifts the practical compliance question. Employers using AI to draft, screen, or rank candidates have been treating model output as a productivity problem (does it produce good ads quickly?) when the regulator is treating it as a liability problem (does it produce lawful ads at all?). The two settlements together establish that “the AI did it” is not a defence. They also establish that procurement and compliance teams need to look beyond a single posting. As VisaVerge noted in its analysis of the Compunnel case, “templates, draft libraries, client intake forms and AI prompts can all shape the final language used to reach applicants.” A discriminatory prompt template, replicated across dozens of postings, is a pattern rather than a single mistake.
The enforcement trend, not just the cases
The DOJ has now settled nine cases under the Protecting U.S. Workers Initiative since the 2025 relaunch. The Civil Rights Division’s settlements page shows multiple 2025 and 2026 cases involving technology firms, staffing businesses, and recruiting practices that allegedly favoured specific visa categories. Natsoft (January 2026, USD 18,440 civil penalty), Nitya Software Solutions (January 2026, USD 40,000 civil penalty), Intellicept (January 2026), and now Elegant and Compunnel: the pattern is consistent and the cadence is increasing.
Ogletree Deakins, a US labour and employment firm, observed on 13 April 2026 that the settlements reflect a strategic enforcement focus aligned with the Trump administration’s “anti-American bias” priorities, and that staffing firms in particular are sitting in the regulatory crosshairs. The combination of high-volume hiring, reusable job templates, and increasing reliance on AI drafting tools makes staffing firms a natural target. The same factors apply to any large employer with a centralised recruitment function, whether or not staffing is the core business.
The two-sided problem: employer versus vendor liability
The settlements expose a gap in how AI hiring tools are typically procured. Vendor contracts for AI recruitment software often include disclaimers about output accuracy, reliance, and fitness for purpose. Those disclaimers may protect the vendor commercially. They do not protect the employer from a regulator. The DOJ has now stated explicitly, twice, that the employer is liable for what the AI tool produces.
For employers, this means three things. First, vendor contracts should be reviewed for indemnification and warranty provisions specific to discrimination outcomes, not just general performance warranties. Second, internal review processes need to assume the AI will produce non-compliant output some of the time, and human review needs to catch it before publication. Third, the discrimination risk does not stop at the job ad. Screening tools, ranking algorithms, and automated outreach all carry the same legal exposure under existing federal anti-discrimination law.
For vendors, the dynamic is different but equally direct. Vendors that supply AI recruitment tools to US employers are now selling products that have been named in federal enforcement actions. Customer due diligence questionnaires will start asking specific questions about training data, prompt design, and output review processes. Vendors that cannot answer those questions will lose deals to vendors that can, creating a competitive advantage that did not exist six months ago.
What this means for Australian employers
The DOJ enforcement cases are US-specific. The legal architecture is not.
Australia’s Fair Work Act 2009 prohibits discrimination in recruitment on grounds including race, national origin, and other protected attributes. The Australian Human Rights Commission Act gives the AHRC power to investigate complaints. State-level anti-discrimination Acts cover the same conduct. The OAIC is preparing to enforce the December 2026 automated decision-making transparency requirement under the Privacy and Other Legislation Amendment Act 2024, which applies to AI involvement in recruitment decisions affecting individuals. The NSW Work Health and Safety Amendment (Digital Work Systems) Act 2026 treats AI-driven workplace systems as a WHS hazard, which extends to recruitment platforms used in NSW.
Australian employers using AI recruitment tools should not assume the DOJ approach is foreign. The Fair Work Commission has already shown, in its draft guidance on AI-generated submissions, that Australian regulators are willing to treat AI involvement as material to the legal analysis. The Connecticut Attorney General has shown, in a February 2026 memorandum on existing laws, that AI-specific legislation is not required to enforce existing legal protections against AI-driven discrimination. The DOJ has now operationalised that principle with money on the table.
What HR, legal, and IT should do this quarter
Audit current AI use in recruitment. Identify every tool that drafts, screens, ranks, or scores job applicants. Include vendor-provided AI features inside applicant tracking systems, large language model integrations in HR platforms, and any AI tool a recruiter has adopted without IT approval. The audit needs to cover both sanctioned and shadow tools.
Review job ad templates and AI prompts. Citizenship status, visa category, age, gender, and other protected characteristic filters should not appear in templates, prompt libraries, or vendor-provided defaults. The DOJ has now twice cited “templates” as a multiplier of discriminatory ads. A single bad template can produce dozens of violations.
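A template and prompt review of this kind can be partially automated. The sketch below is a minimal, illustrative first pass in Python: it scans text for a list of flagged citizenship- and visa-related terms and reports the matching lines for human review. The term list, file names, and function names are assumptions for illustration; any real screen would need a legally vetted term list and is no substitute for compliance review.

```python
# Minimal sketch: flag job ad templates or AI prompts that mention
# citizenship or visa-status restrictions. The term list is illustrative,
# not exhaustive, and does not replace legal review of the output.
import re

FLAGGED_TERMS = [
    r"\bH-?1B\b",
    r"\bH-?4\b",
    r"\bOPT\b",
    r"\bgreen\s*card\b",
    r"\bcitizens?\s+only\b",
    r"\bvisa\s+holders?\s+only\b",
    r"\bno\s+sponsorship\b",  # context-dependent; flag for a human to assess
]
PATTERN = re.compile("|".join(FLAGGED_TERMS), re.IGNORECASE)

def scan_text(source: str, text: str) -> list[tuple[str, int, str]]:
    """Return (source, line number, matched line) for each flagged line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if PATTERN.search(line):
            hits.append((source, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    sample = "Senior Python developer.\nOnly H1B and OPT visa holders may apply."
    for source, lineno, line in scan_text("template-042.txt", sample):
        print(f"{source}:{lineno}: {line}")
```

A keyword scan of this kind produces false positives by design; the point is to surface every candidate line for a trained reviewer, not to make the compliance call automatically.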
Build human review into the workflow. Every AI-drafted job ad should be reviewed by a person trained on Section 1324b (in the US) or the relevant Australian, UK, or EU anti-discrimination framework before it is posted. The review needs to be substantive, not a tick-box exercise. The Compunnel case involved at least ten recruiters posting bad ads, which suggests the internal training and review processes were not catching the problem.
Update vendor contracts. Existing vendor agreements likely do not allocate discrimination liability. Procurement and legal need to identify which vendors supply AI recruitment tools, and what those contracts say about indemnification for discriminatory outputs. Where the contracts are silent or vendor-favourable, renegotiation is appropriate.
Treat AI hiring tools as in-scope for bias and compliance testing. AI governance frameworks that focus on customer-facing AI (chatbots, recommendation engines) often exclude internal HR tools. The DOJ enforcement actions confirm that recruitment AI is high-risk and needs the same level of governance scrutiny as any other consequential AI deployment.
The pattern is set
Two settlements in six weeks, nine settlements under the Protecting U.S. Workers Initiative since the 2025 relaunch, a DOJ that has now twice stated in writing that AI involvement does not insulate employers from liability, and a staffing industry under specific enforcement focus. The conditions are in place for the pattern to continue, and the next case is unlikely to be the last.
Employers that are still treating AI hiring tools as a tech problem rather than a compliance problem are operating on assumptions the regulator has now publicly rejected. The cost of getting this wrong is documented in two recent press releases. The cost of getting it right is a procurement review, a template audit, and a human in the loop.
Sources
- US Department of Justice, Office of Public Affairs, “Civil Rights Division Obtains Settlement with Company that Discouraged U.S. Workers from Applying for Jobs,” 6 April 2026 (Compunnel settlement; Dhillon quote). justice.gov
- US Department of Justice, Office of Public Affairs, “Civil Rights Division Obtains Settlement with a Company that Used AI-Generated Advertisements that Excluded U.S. Workers from Jobs,” 25 February 2026 (Elegant settlement; “no matter who, or what” quote). justice.gov
- US Department of Justice, Civil Rights Division, “Settlements and Lawsuits” page (full list of 2025-26 IER settlements). justice.gov
- Berkshire Associates, “Department of Justice Obtains Settlement Based on AI-Generated Job Advertisements,” March 2026. berkshireassociates.com
- Ogletree Deakins, “Justice Department Targets Visa Preference in Job Ads,” 13 April 2026. ogletree.com
- VisaVerge, “DOJ Hits Compunnel with $313,420 Settlement Over H-1B-Only Job Ads,” April 2026 (templates and prompt analysis). visaverge.com
- Virginia Lawyers Weekly, “Justice Department settles with Va. company accused of excluding U.S. workers from jobs,” February 2026 (settlement terms detail). valawyersweekly.com
- JD Supra (Berkshire Associates), “Department of Justice Obtains Settlement Based on AI-Generated Job Advertisements,” March 2026 (Section 1324b explainer). jdsupra.com
- ICLG, “DoJ settlement over visa-restricted job adverts highlights corporate immigration risk,” February 2026. iclg.com