Tracking the Evolution of AI Insurance Regulation

By: Heidi Lawson, Faye Wang, Sarah Hopkins

Artificial intelligence continues to transform the insurance industry, affecting underwriting, pricing, claims processing and customer engagement.

A major development in 2025 was the rise of agentic AI, autonomous systems capable of performing insurance tasks without human input. Insurers and vendors reported early pilot projects in the first months of 2025, and by mid-2025 industry publications and consulting firms had documented real-world use in claims processing, fraud detection and underwriting.

As these advances grow, regulators, led by the National Association of Insurance Commissioners, or NAIC, are increasing oversight to ensure that innovation does not outpace consumer protections.

NAIC Surveys and Findings, Model Bulletin and State Adoption

The NAIC Big Data and Artificial Intelligence (H) Working Group completed its survey of health insurers' AI use in 2025, building on previous surveys of auto, life and home insurers. AI adoption rates are high: 92% of health insurers, 88% of auto insurers, 70% of home insurers and 58% of life insurers surveyed reported current or planned AI use.

Despite this widespread adoption, the survey revealed that nearly one-third of health insurers still do not regularly test their models for bias or discrimination, even though the NAIC's December 2023 "Model Bulletin: Use of Artificial Intelligence Systems by Insurers" recommends such practices. Consequently, regulators are concerned that inadequately tested AI systems could lead to unfair discrimination or other harms, such as unfair claims practices or misleading information provided to customers.

By late 2025, 23 states and Washington, D.C., had adopted the NAIC's AI Model Bulletin, with some variations. The bulletin is principles-based, requiring insurers to establish governance, documentation and audit procedures but not prescribing specific standards. Enforcement relies on existing laws, such as unfair trade practice statutes, unfair claims settlement practices statutes and consumer protection laws.

This scenario prompts a critical question: Are current statutes sufficient to address the risks posed by rapidly evolving AI technologies, or is a new model law necessary?

Debate Over a Comprehensive AI Model Law

In response to this question, the Big Data Working Group held six discussions in 2025 to consider whether the NAIC should develop a comprehensive AI model law.

Advocates for uniform standards, including consumer groups, argue that a model law is essential to prevent harm from AI-driven coverage or claims decisions made without human oversight. Industry representatives counter that existing legal frameworks may be adequate and warn against premature regulation absent clear evidence of regulatory gaps.

To explore these positions further, the NAIC issued a request for information in May 2025, inviting stakeholder input and highlighting the ongoing divide.

At the same time, some insurers' use of AI is already being tested in court.

For example, in October, several homeowners filed a lawsuit against State Farm in the U.S. District Court for the Middle District of Alabama, captioned Kelly v. State Farm Fire & Casualty Co., alleging that State Farm used "cheat and defeat AI algorithms" as discriminatory tools in claims-processing methods that "disproportionately impact[ed] Black and non-white policyholders."

In March, the U.S. District Court for the Eastern District of California allowed a lawsuit, Kisting-Leung v. Cigna Corp., to proceed against the health insurer. The main allegation of the suit is that Cigna used AI to deny medical claims without proper review.

These lawsuits raise the very issues that concern regulators.

AI Systems Evaluation Tool

To assist regulators, the Big Data Working Group introduced a draft AI Systems Evaluation Tool in July. The tool is a framework that regulators can use to assess how insurers deploy artificial intelligence, focusing on risks to consumers and financial stability. It supplements existing regulatory examinations and helps ensure transparency, accountability and fairness in AI use.

The tool, consisting of questionnaires and checklists, aims to standardize assessments of insurers' AI governance, risk management and use. Although adoption is voluntary at the state level, insurers will be subject to examinations under the tool wherever a regulator chooses to apply it. Edits to and feedback on the tool were discussed extensively during the NAIC's fall 2025 meeting, and pilot programs are expected in early 2026. The results of those pilots will also help determine whether a model law is needed and define its scope.

Increased Oversight of Third-Party Data and Models

With the rise of AI vendors in the insurance industry, regulators are also increasingly concerned about insurers' reliance on these third-party providers. The vendors are transforming insurance operations by lowering costs, reducing fraud and enhancing the customer experience. Leading vendors include Simplifai for workflow automation, Shift for fraud detection and Tractable for claims assessment.

Currently, these vendors are not directly regulated by insurance departments. In response, the NAIC formed the Third-Party Data and Models (H) Working Group.

In 2025, the working group adopted a broad definition of "third party," encompassing any nongovernmental entity providing data, models or outputs for insurance activities. A model law on third-party oversight is anticipated in 2026, potentially including licensing requirements for vendors. Insurers should prepare for stricter diligence measures, such as contractual controls, documentation of model origins and standards for explainability.

State-Specific Initiatives

While some states are waiting for uniform rules from the NAIC, others have moved ahead with requirements of their own.

For instance, Colorado's Artificial Intelligence Act, passed in May 2024, will require insurers to follow governance and testing procedures to prevent unfair discrimination. Other states are considering similar steps, creating a patchwork of requirements that will make compliance more complex for insurers operating in multiple states. The variation underscores the need for uniform NAIC guidance and standards that can be adopted nationwide.

Emerging Themes: Disclosure and Transparency

A key question gaining attention among regulators and stakeholders is whether consumers should be informed when AI use significantly affects coverage or claims decisions. The question is central to current debates in insurance regulation and ethics: when AI influences decisions that affect people's finances and well-being, consumers deserve to know how those decisions are made.

The NAIC's draft AI Systems Evaluation Tool, discussed above, includes specific questions on transparency, and state regulators are already exploring disclosure requirements. In Europe, the EU AI Act will require transparency when AI is used in high-risk areas, such as insurance. Although formal disclosure requirements are not yet in place in the U.S., they are being actively discussed and might be included in future model law proposals.

Outlook for 2026

In summary, several developments are anticipated in 2026: Regulators are likely to start using the AI Systems Evaluation Tool during examinations; a draft model law on third-party data and models will probably be introduced; and state-level initiatives could increase, making compliance more complex.

To prepare for these changes, insurers should proactively audit their AI systems, maintain detailed inventories of AI models and document testing processes to demonstrate compliance. Additionally, as agentic AI advances to enable autonomous execution of complex workflows, it will offer efficiency gains but will continue to raise regulatory concerns about transparency, bias and accountability, requiring insurers to demonstrate ongoing human oversight to protect consumer interests.

This article was published in Law360.