By Mike Reeves | ComplianceJournal.news | Updated April 2026
Texas TRAIGA has been in effect since January 1, 2026. Colorado's AI Act takes effect June 30, 2026 — roughly 65 days from now. More than 20 other states have introduced or passed AI governance legislation. Congress has twice rejected federal preemption. The businesses that are not building AI compliance records today are accumulating exposure with every hiring decision, every tenant screening, every AI-assisted credit decision that touches a resident of a covered state.
What Is AI Governance Law?
State AI governance laws regulate how businesses use artificial intelligence in decisions that significantly affect people. Unlike most technology regulation, these laws do not target the companies that build AI — they primarily target the companies that use it. If your business uses a hiring platform, background check service, tenant screening tool, scheduling system, or CRM with AI features, you are almost certainly a covered deployer under one or more of these laws.
The core insight these laws share is that AI systems trained on historical data reproduce historical patterns — including historical discrimination. A hiring AI trained on decades of employment records may have learned that certain names, ZIP codes, or employment gaps correlate with not being hired. A tenant screening AI may have learned that certain demographic proxies correlate with missed rent. These outcomes are not intentional, but they are real, and they fall hardest on people who were already disadvantaged by the historical patterns the AI learned from.
Texas TRAIGA — In Effect Since January 1, 2026
The Texas Responsible AI Governance Act was signed by Governor Greg Abbott on June 22, 2025 and took effect January 1, 2026. It is enforced exclusively by the Texas Attorney General — there is no private right of action. The law applies to any business that deploys a high-risk AI system in a consequential decision affecting a Texas resident.
Who Is a Deployer Under TRAIGA?
A deployer is any business that uses an AI system — not just the company that built it. If you use Indeed to source job candidates, Checkr to run background checks, TransUnion SmartMove to screen tenants, 7shifts to schedule employees, or Salesforce Einstein to prioritize customer contacts, you are a deployer. You do not have to write a single line of code to be a deployer under TRAIGA.
What TRAIGA Prohibits
TRAIGA prohibits the intentional deployment of AI systems to manipulate human behavior, infringe constitutional rights, or engage in unlawful discrimination based on protected characteristics. The intent requirement is TRAIGA's most distinctive feature — it is an intent-based law, not an impact-based law. The AG must demonstrate that prohibited AI use was intentional, not merely careless.
What TRAIGA Requires of Deployers
TRAIGA requires deployers to take reasonable steps to understand the AI systems they use, document their AI vendor relationships, implement human oversight of AI-assisted consequential decisions, and maintain a certified compliance record. The reasonable care standard is calibrated to the deployer's size and resources: a small business is not held to the same standard as a national enterprise, but both must demonstrate a documented, good-faith effort.
TRAIGA Penalties
Civil penalties range from $10,000 to $12,000 per curable violation and $80,000 to $200,000 per uncurable violation. Continuing violations carry $2,000 to $40,000 per day. The per-violation structure means a single non-compliant AI deployment affecting thousands of people creates penalties calculated per affected person — not as a single aggregate fine. The Texas AG's complaint portal is required to be operational by September 1, 2026.
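To make the per-violation math concrete, here is a rough, illustrative calculation. The statutory ranges are the ones described above; the deployment figures (number of affected people, days uncured) are hypothetical:

```python
# Illustrative TRAIGA exposure arithmetic. The statutory ranges are from the
# law as described above; the deployment figures are hypothetical.
CURABLE = (10_000, 12_000)       # per curable violation
UNCURABLE = (80_000, 200_000)    # per uncurable violation
DAILY = (2_000, 40_000)          # per day for continuing violations

affected_people = 1_500          # hypothetical: applicants screened by one tool
days_uncured = 90                # hypothetical: days before the violation is cured

best_case = affected_people * CURABLE[0] + days_uncured * DAILY[0]
worst_case = affected_people * UNCURABLE[1] + days_uncured * DAILY[1]
print(f"Best case (all curable):    ${best_case:,}")
print(f"Worst case (all uncurable): ${worst_case:,}")
# Best case (all curable):    $15,180,000
# Worst case (all uncurable): $303,600,000
```

Even the best-case scenario runs to eight figures once thousands of people are affected, which is why the per-person structure matters more than the headline penalty numbers.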
The NIST AI RMF Safe Harbor
TRAIGA explicitly provides that substantial compliance with the NIST AI Risk Management Framework serves as an affirmative defense against enforcement actions. The NIST AI RMF organizes AI risk management into four functions — Govern, Map, Measure, and Manage — and documenting alignment with these functions creates the strongest available legal protection under TRAIGA.
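One practical way to organize a safe-harbor file is to index each compliance artifact to the RMF function it evidences. The sketch below is a minimal example of that approach; the four function names come from the NIST AI RMF, but the artifact names are hypothetical, not a prescribed filing structure:

```python
# Hypothetical mapping of compliance artifacts to the four NIST AI RMF
# functions. The function names are from the framework; the file names
# are illustrative examples only.
nist_rmf_file = {
    "Govern":  ["ai_use_policy.pdf", "vendor_oversight_roles.pdf"],
    "Map":     ["ai_vendor_inventory.csv", "consequential_decision_map.pdf"],
    "Measure": ["vendor_bias_testing_responses/", "impact_assessments/"],
    "Manage":  ["human_review_log.csv", "incident_response_plan.pdf"],
}

# A quick self-audit: every function should have at least one artifact.
for function, artifacts in nist_rmf_file.items():
    status = "OK" if artifacts else "GAP"
    print(f"{function}: {len(artifacts)} artifact(s) on file [{status}]")
```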
Colorado AI Act (SB 24-205) — Effective June 30, 2026
Colorado's Artificial Intelligence Act is the first comprehensive state AI governance law in the United States. The law was signed in May 2024; its effective date was pushed back from February 1, 2026 to June 30, 2026 following a special legislative session. The core compliance obligations are unchanged from the original law.
Colorado's approach is fundamentally different from TRAIGA. Where TRAIGA is intent-based, Colorado's law is impact-based — it focuses on whether AI could cause algorithmic discrimination regardless of intent. This is a broader and more demanding standard for deployers.
Colorado's Specific Requirements
Written risk management policy. A documented policy identifying each high-risk AI system, the risks it creates, and how those risks are managed. Must specifically address algorithmic discrimination risks.
Impact assessments. A separate documented analysis for each high-risk AI system covering the system's purpose, data inputs, known discrimination risks, and mitigation measures. Must be updated annually and whenever the system changes significantly. A business using five AI-powered platforms needs five separate impact assessments (a minimal record structure is sketched after this list).
Consumer disclosures. When a high-risk AI system is used in a consequential decision about a Colorado resident, the person must be notified before or at the time of the decision that AI was used.
Appeal process and human review. Colorado residents affected by adverse AI-assisted decisions must be given a meaningful opportunity to appeal and request human review. This requirement has no equivalent in TRAIGA and is unique among current state AI laws.
Annual updates. Impact assessments must be reviewed and updated at least annually and when systems change. This creates a recurring compliance calendar that does not exist under TRAIGA.
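One simple way to stay ahead of the annual-update requirement is to keep each impact assessment as a structured record that tracks the fields Colorado's law calls out and flags when a review is overdue. A minimal sketch, assuming hypothetical field names rather than statutory language:

```python
# A minimal impact-assessment record tracking the elements Colorado's law
# requires (purpose, data inputs, discrimination risks, mitigation) and
# flagging when the annual review is due. Field names are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    data_inputs: list
    known_discrimination_risks: list
    mitigation_measures: list
    last_reviewed: date

    def review_due(self, today: date) -> bool:
        """True if more than a year has passed since the last review."""
        return today - self.last_reviewed > timedelta(days=365)

smartmove = ImpactAssessment(
    system_name="TransUnion SmartMove",
    purpose="Tenant screening risk score",
    data_inputs=["credit history", "eviction records", "criminal records"],
    known_discrimination_risks=["proxies correlated with protected classes"],
    mitigation_measures=["human review of every adverse decision"],
    last_reviewed=date(2025, 6, 1),
)
if smartmove.review_due(date(2026, 6, 30)):
    print(f"{smartmove.system_name}: annual impact assessment update overdue")
```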
Who Is Exempt Under Colorado's AI Act?
There is a limited exemption for deployers with fewer than 50 full-time employees who do not train the AI system using their own data. Most small businesses using third-party AI platforms — hiring sites, background check services, scheduling software — do not train those platforms with their own data and are therefore covered regardless of size.
The Software You Already Use Is Almost Certainly Covered
The most common misconception about AI governance laws is that they apply to technology companies. They apply primarily to deployers — businesses using AI. The major platforms that Colorado and Texas small businesses already use have all embedded AI into their core functions.
Indeed uses AI to rank candidates. Checkr uses AI in its screening workflow. TransUnion SmartMove uses AI to generate tenant risk scores. Workday uses AI across talent acquisition, performance management, and compensation. 7shifts uses AI to optimize scheduling. LinkedIn Recruiter uses AI to surface and rank candidates. Each of these creates deployer obligations under TRAIGA and potentially under Colorado's AI Act.
What 20+ Other States Are Doing
Virginia's HB 2094 passed both chambers but was vetoed by Governor Youngkin in March 2025. Illinois HB 3773 took effect January 1, 2026 and requires notice to employees when AI is used in employment decisions. New York City's Local Law 144 requires annual independent bias audits for AI hiring tools used in NYC and has been in effect since July 2023. Connecticut, Minnesota, Washington, Massachusetts, New Jersey, Oregon, and Maryland all have comprehensive AI governance bills in active legislative consideration.
Congress, meanwhile, has repeatedly declined to preempt state AI laws (detailed below). State AI legislation is accelerating, not slowing. Businesses operating across state lines face a compliance landscape that changes month by month.
Building Your AI Compliance Record
A defensible AI compliance record has five components that must be built before an enforcement notice arrives — not after.
AI vendor inventory. List every platform your business uses that makes or influences decisions about employees, applicants, tenants, or customers. For each platform, document what AI it uses and what consequential decisions it influences (a minimal logging sketch follows this list).
Vendor documentation requests. Send formal letters to each vendor's legal or compliance department citing the applicable state laws by name. Request AI system documentation, bias testing results, and compliance posture. Set a 30-day response deadline. Send by email and certified mail. Log every response and non-response.
Impact assessments. For each AI system, document the system's purpose, data inputs, known discrimination risks, and your mitigation measures. Required annually under Colorado's law; strongly recommended under TRAIGA.
Human oversight. Before acting on any AI-generated output in a consequential decision, a named human must review it. Log every review — who, what, when, what decision was made.
Compliance file. Organize all of the above in a single, dated, accessible file. The documentation you build now is the documentation that protects you when the AG investigation arrives.
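A lightweight way to keep these components dated and auditable is a pair of structured logs: one for the vendor inventory (including the 30-day response deadline on each documentation letter) and one for human reviews. A minimal sketch, assuming hypothetical file and field names; CSV is used here only because it is portable and easy to produce on demand:

```python
# Minimal compliance-record helpers: a vendor inventory and a human-review
# log written as dated CSV rows. File names and fields are illustrative.
import csv
from datetime import date, timedelta

def log_vendor(path, platform, ai_function, decisions_influenced, letter_sent):
    """Append one AI vendor to the inventory, with the 30-day response deadline."""
    deadline = letter_sent + timedelta(days=30)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [platform, ai_function, decisions_influenced,
             letter_sent.isoformat(), deadline.isoformat()]
        )

def log_human_review(path, reviewer, system, ai_output, decision):
    """Append one human-oversight entry: who reviewed what, when, and the outcome."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), reviewer, system, ai_output, decision]
        )

log_vendor("ai_vendor_inventory.csv", "Checkr", "AI-assisted screening",
           "hiring background checks", letter_sent=date(2026, 4, 20))
log_human_review("human_review_log.csv", "J. Alvarez", "Checkr",
                 "flagged record", "offer extended after manual review")
```

The format matters far less than the habit: dated entries, named reviewers, and logged deadlines are exactly the evidence an Attorney General's office will ask for first.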
Federal Preemption — The Accurate Picture
Congress has twice rejected federal preemption of state AI laws. The Senate voted 99 to 1 to strip a 10-year moratorium from the One Big Beautiful Bill Act. Congress declined to include preemption in the 2025 National Defense Authorization Act. A December 2025 executive order directing the DOJ to challenge state AI laws cannot preempt state law without Congressional action. The Colorado AI Act takes effect June 30, 2026 as scheduled. TRAIGA is already in effect. Waiting for federal action is not a compliance strategy.
This guide is for informational purposes only and does not constitute legal advice. For legal advice specific to your situation, consult a licensed attorney. ComplianceJournal.news is an independent publication. Content current as of April 2026.