Insurance carriers have been using algorithmic and model-based underwriting for decades. What is new is the speed and opacity of the AI systems now being deployed — systems that can process vastly more variables than traditional actuarial models, that are trained on historical data that may embed historical discrimination, and that produce outputs that underwriters often cannot explain to regulators or courts.
State insurance departments are responding. The National Association of Insurance Commissioners (NAIC) has run an active working group on AI in insurance since 2019, and model bulletin language on AI use in underwriting has been drafted and debated. Several states have now issued their own guidance or proposed regulations specifically addressing AI underwriting tools. The regulatory framework is still fragmentary, but it is taking shape.
The Fair Discrimination Problem
Insurance regulation has long distinguished between fair and unfair discrimination. Actuarially justified risk differentiation, charging higher premiums to higher-risk policyholders, is the foundation of the insurance business model. Unfair discrimination based on protected characteristics such as race, national origin, religion, or sex is prohibited under every state insurance code.
AI underwriting models create a new version of an old problem. A model trained on historical policy and claims data will learn the patterns in that data — including patterns that reflect historical discriminatory underwriting practices. A model that has learned that certain ZIP codes correlate with higher claims may be learning that those ZIP codes also correlate with the demographics of their residents. The model may never see race as an input variable and still produce racially disparate underwriting outcomes through proxy variables.
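The proxy mechanism described above can be made concrete with a toy simulation. In this sketch (all names, rates, and group mixes are invented for illustration), the underwriting rule sees only a ZIP-level historical claim rate and never sees group membership, yet approval rates still diverge by group because group composition differs across ZIP codes:

```python
import random

random.seed(42)

# Illustrative inputs: a claim rate the model "learned" per ZIP, and the
# (hypothetical) share of each ZIP's applicants belonging to group_a.
ZIP_CLAIM_RATE = {"zip_1": 0.05, "zip_2": 0.12}
ZIP_GROUP_MIX = {"zip_1": 0.8, "zip_2": 0.2}   # P(applicant is group_a)

def approve(zip_code, max_claim_rate=0.10):
    # The underwriting rule consults only the ZIP-level claim rate;
    # group membership is never an input variable.
    return ZIP_CLAIM_RATE[zip_code] <= max_claim_rate

approvals = {"group_a": [0, 0], "group_b": [0, 0]}  # [approved, total]
for _ in range(10_000):
    zip_code = random.choice(sorted(ZIP_CLAIM_RATE))
    group = "group_a" if random.random() < ZIP_GROUP_MIX[zip_code] else "group_b"
    approvals[group][1] += 1
    if approve(zip_code):
        approvals[group][0] += 1

for group, (approved, total) in approvals.items():
    print(group, round(approved / total, 2))
```

With these invented numbers, roughly 80% of group_a applicants are approved against roughly 20% of group_b applicants, even though the rule is facially neutral. This is the disparate-outcome-through-proxy pattern that examiners are probing for.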
State insurance departments examining AI underwriting tools are increasingly asking carriers to demonstrate that their models are tested for disparate impact on protected classes, not just for actuarial accuracy. That testing is not yet required in most states by explicit rule. But it is increasingly expected in examination, and carriers that cannot produce disparate-impact documentation for their models are finding themselves in extended examination cycles.
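One common starting point for the disparate-impact testing described above is an adverse impact ratio check, borrowed from the "four-fifths rule" heuristic in US employment-selection guidance. The sketch below is a minimal illustration, not a regulator-prescribed methodology; the group labels and counts are hypothetical:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (approved, total)."""
    return {g: approved / total for g, (approved, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's approval rate to the highest group rate.

    Under the four-fifths heuristic, a ratio below 0.8 is a
    conventional red flag warranting further analysis.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical underwriting decisions bucketed by group
decisions = {
    "group_a": (840, 1000),  # 84% approved
    "group_b": (630, 1000),  # 63% approved
}

ratios = adverse_impact_ratios(decisions)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b ratio is 0.63 / 0.84 = 0.75, below 0.8
print(flagged)  # {'group_b'}
```

A ratio-based screen like this is only a first pass; carriers typically supplement it with statistical significance testing and an actuarial justification analysis for any flagged variable.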
The State AI Law Overlay
State AI governance laws add another layer to insurance AI compliance. Both Texas TRAIGA and Colorado's AI Act apply to insurance as a consequential decision category. An insurer using AI to determine policy eligibility or pricing is a deployer under both laws, with the documentation obligations those laws impose.
Colorado's impact assessment requirement is particularly significant for insurance AI. An insurer using AI-assisted underwriting in Colorado must complete a documented impact assessment for that system — covering data inputs, known discrimination risks, and mitigation measures — and update it annually. The assessment obligation exists independently of whether the state insurance department has specifically regulated AI underwriting.
Carriers with multi-state operations face a compliance stack that includes state insurance regulation, state AI governance law, and federal fair lending law where applicable. Managing that stack requires coordination between actuarial, legal, compliance, and technology teams that many carriers have not yet built.
This article is for informational purposes and does not constitute legal advice.