NAIC Considers Use of and Reliance on Third-Party AI Systems

By: Heidi Lawson, Sarah Hopkins, Faye Wang

At the recent National Association of Insurance Commissioners (NAIC) meeting in Seattle (August 12–16), the newly unveiled exposure draft of the proposed NAIC Model Bulletin “Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers” was the primary topic of conversation at the meeting of the Innovation, Cybersecurity, and Technology Committee (Committee).

The current draft of the Bulletin is principles-based rather than prescriptive, and the Committee made clear that, at this time, it is not yet looking at the adoption of a model rule. The draft Bulletin “encourages” insurers to adopt a written AIS program. As part of an AIS program, insurers are asked to address their standards for the acquisition of, use of, or reliance on AI systems developed or deployed by third parties. Insurers should ideally include terms in contracts with third parties that: “(i) require third-party data and model vendors and AI system developers to have and maintain an AIS program commensurate with the standards expected of the insurer, (ii) entitle the insurer to audit the third-party vendor for compliance, (iii) entitle the insurer to receive audit reports by qualified auditing entities confirming the third party’s compliance with standards, and (iv) require the third party to cooperate with regulatory inquiries and investigation related to the insurer’s use of the third party’s product or services and require the third party to cooperate with the insurer’s regulators as part of the investigation or examination of the insurer.”

Further, the draft Bulletin provides that “[i]nvestigations and examinations of an insurer may include requests for the following kinds of information and documentation related to data, models and AI systems developed by third parties that are relied on or used by or on behalf of an insurer, directly or by an agent or representative: (i) due diligence conducted on third parties and their data, model or AI systems, (ii) contracts with third-party AI systems, model or data vendors, including terms relating to representations, warranties, data security and privacy, data sourcing, data use, intellectual property rights, confidentiality and disclosure, and cooperation with regulators, and (iii) audits and confirmation processes performed with respect to third-party compliance with contractual and, where applicable, regulatory obligations.”

The wording of the above sections concerning third-party vendors received much attention during the Committee meeting. Many industry groups, especially those representing small- and mid-size insurers, were concerned that they have little leverage when contracting with third parties to change contract language to include the terms specified by the NAIC draft Bulletin. The Committee seemed particularly interested in hearing from insurers about how much control they have, when purchasing products, to include language requiring coordination with insurance departments that may have questions for third-party vendors.

The Committee signaled that a question it has been evaluating is whether third-party AI service providers can be regulated through the insurers they do business with or need to be licensed separately by insurance departments. This last point was not elaborated on but is worth paying attention to. With the rise of such recent cases as Kisting-Leung et al. v. Cigna Corp. et al., 2:23-cv-01477-DAD-KJN (E.D. Cal. Aug. 8, 2023), which alleges violation of, inter alia, certain insurance regulations, including Cal. Ins. Code § 790.03(h) (Unfair Practices) and Cal. Code Regs. tit. 10, § 2695.7(b)(1), (d) and (e) (Standards for Prompt, Fair and Equitable Settlements), regulators are starting to take an even closer look at both AI solutions developed and used in-house by insurers and those developed by third-party AI service providers.
