
5 Issues for Policyholders To Consider When Pursuing Recovery Litigation in the Emerging Age of Artificial Intelligence

May 06, 2024

The insurance business is not immune from the wave of innovation and automation generically referred to as artificial intelligence. Insurers are increasingly reliant on various AI applications to underwrite policies, process claims and automate other functions, including fraud detection and customer service.1 So far, most media coverage of this subject has touted the benefits to insurers and policyholders, expressed in terms of cost savings and reduced premiums.2 However, relying on AI applications to automate insurance processes is not without risk for both policyholders and insurers. As the opportunities and perils for policyholders continue to take shape, here are five issues that insureds should consider when pursuing insurance recovery in the emerging age of AI.

  1. Policy Applications & Misrepresentation Claims. Many corporate liability policies, particularly D&O and cyber insurance, require detailed applications for both initial placements and renewals. Some policies may incorporate and define the policy “application” to include past submissions and a range of extrinsic documents, including financial statements and related filings with regulators. In most cases, defining the policy application broadly benefits the insurer and puts the insured at risk if there are errors or misstatements somewhere in the information submitted. Some state statutes and even some policy terms protect the policyholder against an insurer’s attempt to avoid coverage based on a “misrepresentation” in a policy application by requiring the alleged misrepresentation to “contribute” or be “material” to the risk that is the subject of a given claim.3 Where, in the past, the underwriting function was performed subjectively, at least in part, by a human, policyholders have faced a certain level of risk that an alleged misrepresentation could be deemed material to the underwriters’ decision to quote and place the requested coverage. However, if the subject policy has been underwritten using a specific algorithm and/or an extensive, but discrete, dataset from an AI application, the question of materiality and causation arguably becomes a much different and far more objective one. Policyholders facing a “misrepresentation” claim from an insurer should carefully explore the extent to which the insurer’s underwriting function is automated and what impact this may have on the “misrepresentation” defense to coverage.
  2. Rules Of Construction For AI-Drafted Policies. At some point, the use of AI applications in underwriting may move insurers from standardized policy forms to “customized” policies for individual policyholders.4 While a particularized policy may result in efficiencies for both the policyholder and insurer, “personalized” policies create a challenge for state insurance regulators, who have historically approved some property and casualty policy forms. Computer-generated policy forms also prompt a basic question of interpretation if the policy terms are ambiguous. Historically, rules of contractual interpretation and construction resolve ambiguity against the insurer and in favor of the insured, based on the premise that the drafting insurer is in the best position to avoid ambiguity in the first place. In other words, if a specific policy provision is unclear, the policyholder, who had no part in drafting the language, should be given the benefit of the doubt. If a generative AI program is the nominal author of a customized policy, will the traditional rule of contra proferentem—construing language “against the offeror”—hold? Whether an insurer’s automated drafting function has been performed internally or delegated to an AI contractor, the case for resolving ambiguity in favor of the non-drafting insured is arguably even more compelling when the insurer has elected to undertake the risk (and reward) of using an automated process to underwrite and draft an insurance policy. In either case, however, policyholders should consider how traditional rules of construction may be applied to forms that originate from generative AI.
  3. Third-Party Defendants & Jurisdictional Considerations. Federal court dockets throughout the country are littered with opinions addressing the joinder of various non-diverse third-party defendants. As a general proposition, policyholders may favor the joinder of a non-diverse claims handler or adjuster in order to avoid federal diversity jurisdiction, while insurers may oppose joinder of non-diverse defendants of any kind in order to preserve it. The advent of outsourced AI functions, whether in the realm of underwriting or claims handling, has the potential to raise additional questions over joinder and diversity jurisdiction when there is litigation over insurance coverage. Policyholders contemplating litigation against an insurer should carefully determine (1) whether an insurer’s AI processes, if any, have been performed in-house or through a third-party contractor; and (2) whether, under applicable law, the third-party AI contractor is a necessary and appropriate defendant in a dispute over coverage.
  4. AI Claims Handling & Bias. Of all the potential problems that may arise from delegating insurance processes to AI, the notion of a computer deciding whether to accept or deny coverage for a first-party or third-party insurance claim is perhaps the most viscerally compelling. Corporate policyholders want assurance that the nuances and unique circumstances of their claim have been accounted for in deciding issues of coverage. No one wants their claim to be treated as routine or regarded as only one among a million other generic claims processed by a computer in the blink of an eye. This sentiment has particular resonance when it is the very nature of current AI technology to identify patterns within large datasets and “match” the characteristics of the claim under consideration with “similar” claims. An AI claims handling application will only be as good and accurate as the underlying data used for the analysis, and policyholders have no independent means to validate the datasets used for coverage evaluations. Moreover, any inherent defect or implicit bias in a dataset will only become more pronounced over time if AI claim decisions are recycled into the dataset and the data “drift” in the direction of the original bias. Put another way, a dataset riddled with erroneous claims decisions will only perpetuate more erroneous claims decisions; the illustrative sketch following this list shows how quickly such a feedback loop can harden an initial skew. Insureds have already challenged some automated claims handling practices. In July 2023, Cigna policyholders filed a putative class action lawsuit alleging that the insurer relied on an improper AI algorithm to deny healthcare claims,5 and additional litigation over the basis for AI-based claims decisions will inevitably follow.
    In December 2023, the NAIC’s Innovation, Cybersecurity, and Technology Committee adopted a Model Bulletin governing the Use of Artificial Intelligence Systems by Insurers (the “Bulletin”).6 Among other things, the Bulletin requires insurers to “develop, implement, and maintain a written program (an ‘AIS Program’) for the responsible use of AI Systems that make, or support decisions related to regulated insurance practices.”7 The Bulletin also “encourages the development and use of verification and testing methods to identify errors and bias in Predictive Models and AI Systems, as well as the potential for unfair discrimination in the decisions and outcomes resulting from the use of Predictive Models and AI Systems.”8 Some states have already adopted this model bulletin. Policyholders engaged in coverage litigation with their insurers should determine whether a coverage decision was made using automated processes, and if so, pursue discovery of the insurer’s AIS Program where appropriate, as well as other details necessary to evaluate the sufficiency of the insurer’s dataset and the propriety of the insurer’s algorithm for handling claims decisions.
  5. AI Applications & Data Privacy. As alluded to above, AI applications, whether used for underwriting, claims handling or other insurance processes, depend on massive, appropriately curated sets of data. Where do the data used for AI insurance applications come from? If an insurer relies exclusively or even primarily on unvalidated data obtained from public websites, including social media, there may be questions about the integrity and reliability of such information as a basis for underwriting coverage or evaluating claims. Alternatively, if an insurer relies on private information obtained from policyholders’ underwriting and claims files, a different set of concerns may arise. Policyholders should carefully examine the data privacy policies of the insurers to whom policy applications and other underwriting information have been provided. In many cases, publicly available insurer privacy policies were drafted years ago and do not specifically address the use of policyholder information in AI applications. Corporate policyholders who are opposed to supporting insurers’ AI applications with their own data should raise these concerns with the insurance brokers and underwriters with whom they have ongoing business relationships to ensure that corporate information is used only for approved purposes. Likewise, in settling and resolving insurance claims, whether disputed or otherwise, corporate policyholders (and insurers) should carefully consider what language to include in confidentiality provisions to address the use of claims information in the insurer’s existing or future AI applications.
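To make the data-drift concern in item 4 concrete, the following is a minimal, hypothetical simulation in Python. The groups, denial rates, and decision threshold are invented for illustration and do not reflect any actual insurer’s data or system; the point is only to show the mechanism by which a pattern-matching rule, trained on a slightly skewed claims history and allowed to recycle its own decisions back into that history, can harden a modest initial disparity into a dramatic one.

```python
# Minimal, hypothetical simulation of the claims-decision feedback loop
# described in item 4. The groups, rates, and threshold below are invented
# for illustration and do not reflect any actual insurer's data or system.
import random

random.seed(0)

def deny_rate(history, group):
    """Observed denial rate for a group within the accumulated claims history."""
    rows = [claim for claim in history if claim["group"] == group]
    return sum(claim["denied"] for claim in rows) / len(rows)

# Seed history: claims from group A were denied slightly more often (an
# initial skew of roughly ten percentage points) than claims from group B.
history = [{"group": "A", "denied": random.random() < 0.30} for _ in range(1000)]
history += [{"group": "B", "denied": random.random() < 0.20} for _ in range(1000)]

def decide(group):
    # Toy "AI" rule: match a new claim to similar past claims and deny it
    # whenever the historical denial rate crosses a fixed threshold.
    return deny_rate(history, group) > 0.25

for generation in range(4):
    print(f"generation {generation}: "
          f"A denied {deny_rate(history, 'A'):.0%}, "
          f"B denied {deny_rate(history, 'B'):.0%}")
    # The rule's own decisions are appended to the history it consults --
    # the recycling step that lets the data "drift" toward the original bias.
    history += [{"group": g, "denied": decide(g)}
                for g in ("A", "B") for _ in range(1000)]
```

In this toy model, the seed history denies group A’s claims only about ten percentage points more often than group B’s, yet because the rule’s threshold decisions are fed back into the very history it consults, the gap widens with each generation until group A’s claims are denied almost categorically. That dynamic is one reason why discovery into an insurer’s dataset, retraining practices and decision thresholds, as discussed above, can matter in a coverage dispute.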

Conclusion. At some point in the future, as technology matures, regulatory guidance develops, and standards of practice evolve, the automation of basic insurance functions, including claims handling and underwriting, may become commonplace and routine, and the associated risks to policyholders may be mitigated to some degree. In the interim, however, there are a host of issues for policyholders to contemplate, particularly when involved in litigation over disputed first-party and third-party claims. The above-referenced issues are among the more obvious considerations that should be addressed by corporate insureds engaged in insurance recovery litigation, but other issues may also merit review and advice from coverage counsel based on individual facts and circumstances. If you have any questions about artificial intelligence and insurance or about insurance recovery in general, please contact one of Haynes Boone’s Insurance Recovery or AI and Deep Learning Practice Group partners listed below.


1 See, e.g., Adam Uzialko, Artificial Insurance? How Machine Learning Is Transforming Underwriting, BUSINESS NEWS DAILY (Apr. 12, 2024), available at https://www.businessnewsdaily.com/10203-artificial-intelligence-insurance-industry.html.
2 See, e.g., Bain & Company, $50 billion opportunity emerges for insurers worldwide from generative AI’s potential to boost revenues and take out costs, PR NEWSWIRE (Apr. 1, 2024), available at https://www.prnewswire.com/news-releases/50-billion-opportunity-emerges-for-insurers-worldwide-from-generative-ais-potential-to-boost-revenues-and-take-out-costs-302103876.html.
3 See, e.g., TEX. INS. CODE ANN. § 705.004. 
4 Adam Uzialko, Artificial Insurance? How Machine Learning Is Transforming Underwriting, BUSINESS NEWS DAILY (Apr. 12, 2024), available at https://www.businessnewsdaily.com/10203-artificial-intelligence-insurance-industry.html (“Traditionally, [the industry has offered] ‘lowest common denominator’ products: a standard liability policy,” Pogreb said. “What you end up with is a very undifferentiated product, where a bakery and a laundromat have the same policy. That’s not the right way to go for the customer. Being able to consume more data automatically, we will see more customization, and customers will benefit by paying for coverage they truly need.”).
5 Jeffrey Bendix, Cigna using AI to reject claims, lawsuit charges, MEDICAL ECONOMICS (Aug. 7, 2023), available at https://www.medicaleconomics.com/view/cigna-using-ai-to-reject-claims-lawsuit-charges.
6 NAIC, Model Bulletin: Use of Artificial Intelligence Systems by Insurers (adopted Dec. 4, 2023), available at https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf.
7 Id. at 4 (§ 3).
8 Id.