Best Practices for Utilizing AI/ML Tools as NAIC Adopts the Model Bulletin on Use of AI

What You Need To Know

  • In December 2023, the National Association of Insurance Commissioners (NAIC) adopted the Model Bulletin on the use of Artificial Intelligence Systems by Insurers.
  • The Model Bulletin urges insurers to mitigate AI-related risks, such as discrimination, data vulnerability and lack of transparency. This guidance aims to ensure responsible AI usage and prevent adverse consumer outcomes in the industry.
  • While its primary focus is on insurance companies, insurance intermediaries should also pay attention to potential regulations.
  • Insurance entities should develop internal policies and document AI use to mitigate legal exposure in an evolving regulatory landscape. Best practices for utilizing AI/ML tools in the insurance industry include disclosure, explanation, compliance and oversight, and documentation.

At the recent 2023 national meeting of the National Association of Insurance Commissioners (NAIC) in Orlando (November 30 – December 4), the Innovation, Cybersecurity and Technology Committee of the NAIC (the “H Committee”) and the NAIC Executive Committee officially adopted the NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (the “Model Bulletin”). The first draft of the Model Bulletin was published in July 2023. After four months of in-depth discussion and collaboration among state insurance regulators, industry leaders and interested parties, the Model Bulletin went through two rounds of revisions, and the final draft was adopted by the NAIC on December 4, 2023. The Model Bulletin, as adopted, is substantively similar to the preceding drafts; however, changes were made to the Model Bulletin’s defined terms, including revisions to the definitions of Big Data and Generative AI and the addition of a definition for Adverse Consumer Outcome. Indeed, the Model Bulletin now requires “[Insurers] to develop, implement and maintain a written program (an “AIS Program”) for the responsible use of AI Systems that make or support decisions related to regulated insurance practices [and] the AIS Program should be designed to mitigate the risk of Adverse Consumer Outcomes, including, at a minimum, the statutory provisions set forth in Section 1 of [the Model] [B]ulletin.”

The NAIC has been focused on AI because insurance carriers were some of the earliest adopters of AI tools, and in many cases carriers are already heavily reliant on them. Unsurprisingly, the proliferation of AI tools among carriers has also led to litigation in this area. For example, a case is pending in California regarding Cigna’s use of an internally developed AI tool dubbed “PXDX,” which identifies discrepancies between diagnoses and the tests and procedures that may have been ordered by providers. After PXDX rejects a claim, doctors perform a second level of review and sign off on the program’s decisions. The plaintiffs allege that in practice, doctors were signing off on denials without opening the individual patient files, citing data showing that, on average, reviewers spent just 1.2 seconds on each file.

Similar to the Cigna suit, a proposed class action filed in the U.S. District Court for the District of Minnesota on November 14, 2023, alleges that UnitedHealth Group Inc. (“United”) relies on an AI tool that systematically denies elderly patients’ claims for extended care stays under their Medicare Advantage Plans. The AI model at issue, “nH Predict,” predicts the amount of care an elderly patient should require, which, according to the complaint, often overrides “real” doctors’ determinations as to the amount of care the patient should receive. Plaintiffs allege that United not only relied on the nH Predict tool to override doctors’ determinations but also lowered its costs by removing the physician and/or medical professional review required to make these calculations manually. Plaintiffs further allege that this conduct breaches United’s fiduciary duties by elevating United’s economic self-interest above the interests of its insureds.

In both cases, the consumers challenged adverse outcomes resulting from determinations generated by the AI/ML tools utilized by the insurers. Unsurprisingly, consumer protection is a key consideration and driving force behind the NAIC’s newly adopted Model Bulletin. The Model Bulletin provides: “AI, including AI Systems, can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability. Insurers should take actions to minimize these risks.” At the recent NAIC meeting, several state insurance commissioners indicated that their states would likely adopt the Model Bulletin in the near future. While the Model Bulletin is primarily targeted at insurance companies rather than insurance producers, insurance intermediaries should keep in mind that states could use the same principles and guidance to regulate insurance producers as well.

In addition to the NAIC’s recent focus on AI and adoption of the Model Bulletin, individual states have also started to consider insurers’ use of AI. To date, the states’ focus has centered on a level of oversight that at least ensures compliance with current state regulations and laws, including applicable data privacy laws. For example, on July 6, 2021, Colorado enacted “Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models” (Colo. Rev. Stat. Sec. 10-3-1104.9), requiring insurers to show the Colorado Department of Insurance (“CDOI”) that the AI tools and data they rely on do not result in unfair discrimination. Pursuant to this new law, on November 14, 2023, the CDOI adopted a new regulation, “Regulation 10-1-1 Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models” (3 CCR 702-10). This regulation imposes governance and risk management requirements on life insurers’ use of AI in the underwriting process. The CDOI has also published a related draft regulation proposing a quantitative testing regime. The principles covered in the new Colorado framework, together with the Model Bulletin and the October 30, 2023, Executive Order on AI, are likely indicative of how other state regulators will generally handle the use of AI in the insurance industry. The Executive Order is examined in greater detail by our colleagues in their article, Key Provisions and Impacts of Biden’s Executive Order on AI Regulation and Development.
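
To make the idea of quantitative testing concrete, the sketch below shows one traditional screen for disparate impact: the adverse impact ratio, i.e., the approval rate of each group divided by the approval rate of a reference group, with ratios below 0.8 (the “four-fifths rule”) flagged for review. This is a generic illustration only; Colorado’s regime prescribes its own methodology, and all function and variable names here are hypothetical.

```python
# Minimal, generic sketch of one common quantitative fairness screen:
# the adverse impact ratio (approval rate of each group divided by the
# approval rate of a reference group). A ratio below 0.8 (the
# "four-fifths rule") is a traditional red flag for disparate impact.
# Illustrative only; Colorado's testing regime defines its own methodology.

from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok  # True counts as 1
    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()
            if g != reference_group}

# Hypothetical underwriting approvals broken out by group.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
for group, ratio in adverse_impact_ratios(sample, "A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: adverse impact ratio {ratio:.2f} [{flag}]")
```

In the sample data, group B’s ratio of roughly 0.69 would be flagged, which is precisely the kind of result an insurer would want to catch, investigate, and document before a regulator does.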

After examining the Colorado statute and regulations, as well as the key principles in other states’ guidance on the use of AI in the insurance industry, including New York’s 2019 Circular Letter No. 1 and California’s Bulletin 2022-5, we have summarized the following best practices for utilizing AI/ML tools:

  1. Disclosure: Insurance entities should disclose the specific usage of AI in underwriting and marketing processes, including highlighting how data is aggregated, the sources of proprietary or public data, and any provisions for auditing or validating the data.
  2. Explanation: Insurance entities must be able to explain the decision-making process related to the AI models they use as well as the outputs from these models. This includes the criteria and methodology used to develop the AI model and the weight given to each factor used in any decision-making processes.
  3. Compliance and Oversight: AI models should adhere to all applicable laws, including those related to unfair discrimination, unfair trade practices, and data privacy. Insurance entities must be confident that AI programs used are unlikely to result in discriminatory outcomes. Insurers are generally responsible for ensuring that third-party providers of AI models and applications comply with applicable laws.
  4. Documentation: Insurance entities must maintain documentation about an AI model’s development, data sources, and compliance processes, which can be used internally to continually assess compliance within the evolving regulatory landscape and can also be used to justify decisions to state Departments of Insurance should an investigation into the company’s business practices be initiated (see the illustrative sketch following this list).
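
As a simplified illustration of the documentation principle (item 4 above), the sketch below shows one way an insurer might record each AI-assisted decision, capturing the model version, data sources, inputs, output, and the human reviewer’s sign-off so the decision can later be explained internally or to a state Department of Insurance. All field names and values are hypothetical and not drawn from any regulation.

```python
# Hypothetical per-decision audit record supporting the documentation
# principle: pin the model version, the data sources relied on, the
# inputs and output, and the human reviewer's sign-off. Field names
# are illustrative only, not drawn from any statute or regulation.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    model_name: str          # internal identifier for the AI tool
    model_version: str       # exact version that produced the output
    data_sources: list       # proprietary/public sources relied on
    inputs: dict             # the features the model actually saw
    output: str              # the model's recommendation
    human_reviewer: str      # who signed off (oversight requirement)
    review_notes: str        # why the reviewer agreed or overrode
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionRecord(
    model_name="claims-triage",
    model_version="2.3.1",
    data_sources=["policy_admin_db", "provider_claim_feed"],
    inputs={"diagnosis_code": "J45.909", "procedure_code": "94640"},
    output="refer_for_manual_review",
    human_reviewer="jdoe",
    review_notes="Confirmed mismatch; requested provider records.",
)
print(json.dumps(asdict(record), indent=2))  # append to an audit log
```

A record like this serves both documentation and explanation: it preserves the criteria and inputs behind each output and ties every automated recommendation to a named human reviewer.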

Having attended recent NAIC meetings and discussions on the use of AI, we observe that insurance regulators have a mixed understanding of AI and are still grappling with how to regulate it. It is clear, however, that insurance entities need to be developing internal policies and documenting the scope and nature of the AI tools they use, as outlined above, to mitigate legal exposure as the regulatory landscape continues to evolve.

It is helpful to remember that lack of intent is not an excuse for noncompliance with insurance regulations or for biased outputs from predictive models. Reliance on AI without proper oversight (as described above) could lead to legal exposure. That said, there is no one-size-fits-all approach to manual intervention in the use of AI for consumer-facing insurance products. Regulation of AI in the insurance industry will continue to evolve, but for now the growing consensus appears to be that the function of the AI program itself will likely influence the level of manual intervention required.
