NAIC AI Model Bulletin Guides Ethical AI Use in Insurance Industry
Radost Roumenova Wenman | July 09, 2024
The insurance industry is rapidly evolving, with artificial intelligence (AI) playing a significant role in the industry's digital transformation. Insurance companies, as well as the firms and organizations supporting the industry, need clear guidance to ensure that technological advances comply with regulatory standards and ethical principles and safeguard consumer interests. To address that need, the National Association of Insurance Commissioners (NAIC) adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers at its 2023 Fall National Meeting.
The bulletin was developed by the NAIC Innovation, Cybersecurity, and Technology Committee, led by Chair Kathleen A. Birrane, Maryland insurance commissioner, with co-vice chairs Michael Conway, commissioner of the Colorado Division of Insurance, and Doug Ommen, commissioner of the Iowa Insurance Division. The NAIC AI Model Bulletin assists insurers in navigating technological transition while emphasizing the importance of ethical AI innovation and deployment, regulatory compliance, risk management, and consumer trust.
The bulletin should not be viewed as strict regulation but rather as a guide setting forth regulators' principles for how insurers are expected to operate in the complex landscape of AI utilization. The document presents a consumer-centric approach, advocating for transparency, accountability, and fairness in all insurer–policyholder interactions that involve sophisticated analytical and computational technologies.
The NAIC defines AI systems as "machine-based systems that can generate outputs such as predictions, recommendations, content (such as text, images, videos or sounds), or other output influencing decisions made in real or virtual environments." This definition encompasses predictive modeling, which enables computer systems to process historical data, recognize patterns within it, forecast future events, and inform decisions. Predictive modeling is a key component of insurers' operations and decision-making processes, and the bulletin underscores its significance and relevance at its intersection with AI.
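To make that overlap concrete, the minimal sketch below trains a simple claim-probability model on synthetic historical data. It is a hypothetical illustration of a "machine-based system" generating predictions that could influence decisions, not a depiction of any insurer's actual model; all feature names, data, and parameters are assumptions made for the example.

```python
# A minimal sketch of a predictive model in the sense of the NAIC definition:
# a machine-based system trained on historical data that outputs predictions
# influencing decisions. All data and feature names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Hypothetical historical policy data: driver age and prior claim count.
n = 5_000
age = rng.uniform(18, 80, n)
prior_claims = rng.poisson(0.3, n)

# Synthetic ground truth: younger drivers with prior claims file more claims.
logit = -2.0 + 0.03 * (45 - age) + 0.8 * prior_claims
had_claim = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, prior_claims])
X_train, X_test, y_train, y_test = train_test_split(X, had_claim, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The model's output -- a predicted claim probability -- is exactly the kind
# of "prediction ... influencing decisions" the NAIC definition describes.
print("Predicted claim probability, first 5 test policies:",
      model.predict_proba(X_test)[:5, 1].round(3))
```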
The NAIC bulletin suggests that AI could impact various aspects of the insurance sector, such as marketing, customer service, underwriting, claims processing, fraud detection, and others. But while promoting innovation through AI, the bulletin also warns that AI systems could present unique risks, such as inaccuracies, unfair discrimination, data vulnerability, and a lack of transparency for consumers.
The overarching message of the bulletin is that, going forward, regulators will expect insurers to take appropriate measures to control and mitigate those risks. A key point for insurers is the NAIC's caution that an insurer's AI practices with potential impact on consumers could be "subject to the department's examination to determine that the reliance on AI Systems are compliant with all applicable existing legal standards governing the conduct of the insurer."
The bulletin is divided into four sections, each covering core aspects of AI implementation in the insurance industry and underscoring the importance of careful governance, risk management strategies, and protocols to ensure fair and accurate outcomes for consumers. These sections address 1) the laws and regulations the bulletin relies on; 2) various definitions related to AI and fairness; 3) regulatory guidance and expectations, including general guidelines, governance, and third-party AI systems and data; and 4) regulatory oversight and examination considerations. The bulletin also references the "Principles on Artificial Intelligence" that the NAIC adopted in 2020 as an additional source of insurer guidance around fairness, accountability, compliance, transparency, and security.
The bulletin emphasizes that AI-supported decisions by insurers must adhere to relevant insurance laws and regulations covering aspects such as unfair trade practices, claims settlements, governance reporting, and rate fairness in property-casualty insurance and workers compensation. Insurers' AI systems are expected to "…ensure that the use of AI Systems does not result in…" unfair trade practices or unfair claims settlement practices, and that "rates, rating rules, and rating plans developed using AI techniques and predictive models that rely on data and machine learning do not result in excessive, inadequate, or unfairly discriminatory insurance rates with respect to all forms of casualty insurance."
AI systems should prioritize minimizing the risk of adverse consumer outcomes. This involves establishing governance structures, robust risk management, and internal audit functions. Responsibility for AI program development, implementation, monitoring, and oversight, along with setting an AI strategy, should be "vested in senior management accountable to the board" or a relevant committee.
It's crucial to identify and address the use of AI systems across the insurance lifecycle, from product development to claims management to fraud detection. Additionally, processes must ensure that consumers are informed about AI usage and provided with appropriate access to information throughout the insurance process. Regarding predictive models specifically, insurers must outline the techniques employed "to detect and address errors, performance issues, outliers, or unfair discrimination in the insurance practices" stemming from the predictive model's application.
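The bulletin does not prescribe particular techniques for that monitoring. As one hypothetical way such checks might be operationalized, the sketch below computes a population stability index (PSI) for score drift and a simple outlier share; the metric choices, thresholds, and data are illustrative assumptions, not regulatory requirements.

```python
# Illustrative model-monitoring checks only; the NAIC bulletin does not
# mandate these specific metrics. Thresholds below are arbitrary examples.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index: flags drift between the score
    distribution at model development time and in production."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

def outlier_share(scores: np.ndarray, z: float = 3.0) -> float:
    """Share of scores more than z standard deviations from the mean."""
    return float(np.mean(np.abs(scores - scores.mean()) > z * scores.std()))

# Hypothetical model scores from development time vs. current production.
rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, 10_000)
prod_scores = rng.beta(2.5, 5, 10_000)  # mild, deliberate drift

drift = psi(dev_scores, prod_scores)
print(f"PSI = {drift:.3f}  ->", "investigate" if drift > 0.10 else "stable")
print(f"Outlier share = {outlier_share(prod_scores):.2%}")
```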
In the brief time since the NAIC adopted the bulletin, 12 states—Alaska, Connecticut, Illinois, Kentucky, Maryland, Nebraska, Nevada, New Hampshire, Pennsylvania, Rhode Island, Vermont, and Washington—plus the District of Columbia have adopted it, and additional states are anticipated to follow. The adoption of the NAIC AI Model Bulletin by those jurisdictions underscores a collective commitment to fostering responsible AI integration in the insurance sector. By embracing ethical principles, regulatory guidance, and industry collaboration, states are paving the way for a future in which AI transforms insurance operations while prioritizing consumer protection and ethical considerations.
Three states—California, Colorado, and New York—have taken different approaches to overseeing insurers' implementation of AI in their business operations.
In June 2022, the California Department of Insurance issued Bulletin 2022-5, imposing constraints on the insurance industry's utilization of AI and alternative data sets. The bulletin highlighted recent allegations of racial discrimination in various insurance practices and emphasized the responsibility of insurance companies to treat all individuals equally. It also cited examples of ongoing investigations into potential unfair discrimination, such as subjecting claims from specific urban ZIP codes to additional scrutiny, employing facial recognition in claims assessment, and collecting irrelevant personal information during underwriting.
In July 2021, Colorado enacted Senate Bill 21-169, administered by the Colorado Division of Insurance (CDI), which mandates that life insurance providers assess their "external consumer data, information sources, algorithms, and predictive models" to prevent biased treatment of consumers based on protected characteristics.
In September 2023, the CDI also published a draft regulation on quantitative testing for unfairly discriminatory outcomes from algorithms and predictive models used in life insurance underwriting. The draft regulation requires insurers to use the inferred race of their life insurance applicants to test data and models that rely on external consumer information and to take appropriate measures to address any unfairly discriminatory results. Following its life insurance precedent, the CDI has recently initiated discussions on personal auto and health insurance. The first personal auto stakeholder meeting occurred in April 2023, and the first health stakeholder meeting took place this past February.
In January, the New York State Department of Financial Services released a proposed Insurance Circular Letter regarding "the use of artificial intelligence systems and external consumer data and information sources in insurance underwriting and pricing." The letter is intended to address issues only in processes that are part of underwriting and pricing. However, it is broader in scope than the CDI regulation because it encompasses all types of insurance, not just life insurance. Insurers are responsible for demonstrating compliance with existing laws by ensuring that their use of external data and AI systems is not unfairly discriminatory, aligns with actuarial standards, is based on reasonable expectations, and does not serve as a proxy for any protected class.
Unlike the CDI draft regulation on quantitative testing, the New York letter is broadly applicable to any group of "similarly situated insureds," rather than solely the insureds of a protected class. Another difference between the CDI draft regulation and the letter is that the letter does not prescribe a specific approach to quantitative testing but instead provides flexible guidelines and several examples of recommended statistical techniques for testing for disproportionate adverse effects.
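Neither document mandates a single statistic, but one widely used measure of disproportionate adverse effect is the adverse impact ratio (AIR): each group's rate of favorable outcomes divided by the most favored group's rate. The sketch below is a hypothetical illustration of that measure only; the group labels, decision data, and the four-fifths (0.8) flag threshold—a convention borrowed from employment law—are assumptions, not requirements of either the New York letter or the CDI draft.

```python
# A hedged sketch of one common disproportionate-impact measure, the
# adverse impact ratio (AIR): the rate of favorable outcomes in each
# group divided by the rate in the most favored group. Neither the NY
# circular letter nor the CDI draft mandates this exact statistic.
import numpy as np

def adverse_impact_ratios(favorable: np.ndarray, group: np.ndarray) -> dict:
    rates = {g: favorable[group == g].mean() for g in np.unique(group)}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical underwriting decisions (True = offered standard rates).
rng = np.random.default_rng(2)
group = rng.choice(["A", "B", "C"], size=2_000)          # illustrative groups
favorable = rng.random(2_000) < np.where(group == "B", 0.55, 0.70)

for g, air in sorted(adverse_impact_ratios(favorable, group).items()):
    # The 0.8 ("four-fifths") threshold is a convention from employment
    # law, used here purely as an illustrative flag level.
    print(f"group {g}: AIR = {air:.2f}", "(review)" if air < 0.8 else "")
```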
As AI becomes increasingly integrated into all facets of the insurance sector, effective collaboration among regulators, insurers, and other stakeholders will be vital to ensure the responsible implementation of AI. The adoption of the NAIC bulletin represents a critical change in how regulators view AI integration within insurance. The divergent approaches emerging in some states point to possible future challenges for insurers facing nonuniform regulatory expectations and requirements. But regardless of which way the "winds of change" blow, the bulletin will continue to provide a robust set of standards and guiding principles, helping insurers navigate the regulatory landscape while (hopefully) also encouraging innovation and achieving efficiencies that benefit both insurers and the insured.
The bulletin is a call to action for insurers to demonstrate their commitment to using AI responsibly. Insurers can foster trust with consumers and regulators by creating a documented program outlining their responsible use of AI, developing AI models that are as interpretable as possible, ensuring the fairness and accuracy of the data used to train those models, and regularly evaluating the models for bias and implementing measures to mitigate it.
For a full list of references for this article, please see the original on the Pinnacle website.