How Insurance Policies Are Adapting To AI Risk, Law360

July 2, 2025
Publication

Artificial intelligence risk has captured the attention of the business world, as evidenced by some 72% of S&P 500 companies discussing AI and its related risks in their annual securities filings.1

As the world increasingly acclimates to the reality of AI, thinking about the associated risks early and often is more important than ever. While many AI risks align with traditional commercial insurance coverages, the unique characteristics of AI are causing insurers to change their approach to assessing and covering AI risk.

These changes are manifest through the proliferation of AI-specific policy exclusions and affirmative AI-specific coverages.

Legacy Coverage Lines: The “Silent” Protection for AI Risks

Many of the risks posed by AI are not new. Risks like privacy violations, securities litigation, property damage, personal injury, product liability and workplace liability have been around, in some cases, for centuries. They are familiar exposures now manifesting through, and amplified by, new technology.2

It is reasonable to expect, therefore, that existing lines of insurance like commercial general liability, directors and officers liability or cyber insurance should respond to AI-enhanced risks even if the policies do not explicitly mention AI.

As one example, if an AI-powered machine malfunctions and causes bodily injury or property damage, a CGL policy may provide coverage, whether or not AI caused the malfunction. As another example, if a business faces a securities lawsuit based on allegedly false or misleading statements, a D&O policy may provide coverage, whether or not the supposed misstatement is about AI.

These so-called silent coverages provide pathways for traditional insurance policies to cover AI risks. But as insurers react to the integration of AI into all facets of the economy, this baseline is changing in many ways. Two of the most prominent examples are the rise of AI-specific policy exclusions and the offering of affirmative AI-specific insurance coverages.

The Growing Threat of AI Exclusions

Perhaps in recognition of the broad applicability of legacy coverage lines, insurers are beginning to introduce explicit AI exclusions. Some are focused and narrow in scope; others purport to be absolute bars to coverage for AI risks. Narrow or broad, the takeaway is the same: Coverage for AI risks can no longer be assumed.

Three examples show the range of potential AI-related activities that might be excluded.

A Philadelphia Indemnity Insurance Co. exclusion states that coverage is available for offenses "committed by [the insured] anywhere including the internet, electronic data, and printed material, except in [the insured's] advertisement or in content created or posted for any third party [the insured] created using generative artificial intelligence in performance of [the insured's] services."3

A Hamilton Select Insurance Inc. exclusion states that coverage is excluded for "any 'claim', 'wrongful act', 'damages', or 'defense costs' based upon, arising out of, or in any way involving any actual or alleged use of 'generative artificial intelligence' by the 'insured.'"4

Most recently, a Berkley Insurance Co. absolute AI exclusion form became public. Berkley's new exclusion, intended for use in the company's D&O, errors and omissions, and fiduciary liability insurance products, purports to broadly exclude coverage for "any actual or alleged use, deployment, or development of Artificial Intelligence."5

These exclusions are illustrative, with new endorsements and forms being developed and introduced in real time as AI risks continue to emerge. The rise of AI insurance policy exclusions like these signals a shift in insurer risk appetite and a tightening of policy language that could leave many AI-related claims uninsured unless policyholders actively negotiate coverage or seek alternative solutions.

The Emergence of Affirmative AI Insurance Products

Alongside the rise of AI-specific exclusions, the market has also begun offering affirmative insurance policies specifically tailored to protect companies and their officers and directors from AI risks. These new products may provide clarity and coverage where legacy policies could fall short.

Munich Re's aiSure was a trailblazer in AI insurance when it launched in 2018, providing performance guarantee coverage for AI technologies. Then, as market demand for comprehensive AI insurance solutions grew, new insurers like Armilla AI and Testudo emerged, offering innovative products designed to cover losses resulting from the unique risks associated with AI.

A key development came in April, when Armilla Insurance Services introduced an AI liability insurance product underwritten by Lloyd's of London syndicates, including Chaucer Group.6 Armilla's offering explicitly addresses AI-specific perils such as hallucinations (erroneous AI outputs), degrading model performance, and mechanical or algorithmic failures.

New products like this represent a significant effort to recognize and insure the unique exposures of AI technologies, potentially providing policyholders with a clearer and more reliable safety net.

Earlier in 2025, Google took its own significant step into AI-specific risk mitigation by announcing a partnership with insurers Beazley Group, The Chubb Corp. and Munich Re. The collaboration introduces a tailored cyber insurance solution designed to provide affirmative AI coverage that Google Cloud customers can purchase directly from those partner insurers.

In sum, insurers are beginning to address perceived coverage gaps that traditional policies may overlook. As momentum builds, the years ahead are likely to bring a continued rollout of AI-specific coverages tailored to this evolving landscape.

Defining Artificial Intelligence 

As policyholders and insurers continue to think about how AI can be insured or excluded, a threshold inquiry remains: What is artificial intelligence? The term encompasses a vast array of technologies — from basic automation to complex neural networks powering autonomous vehicles and conversational agents. Varying interpretations of whether a given technology or event involves AI are inevitable, increasing the risk of disputes over coverage applicability.

The issue is further complicated by the lack of transparency into how AI systems operate. The process is so opaque that many refer to it as a black box, where little is understood about how particular inputs yield particular outputs.

Understanding this process will allow underwriters to add needed clarity and precision to the definitions of AI used in their insurance provisions. The significance of this clarity cannot be overstated, since a lack of clarity in insurance contracts can lead to ambiguity, misaligned expectations and claim disputes.

Practice Pointers for Policyholders Thinking About AI Risk

For in-house counsel, risk managers, C-suite executives and other business leaders, the evolving AI insurance environment demands proactive engagement. Legacy policies may offer a foundation of coverage, but emerging exclusions threaten to narrow protections just as AI-related liabilities increase in frequency and complexity. Given the potential for AI-related disclosure lawsuits, businesses may wish to treat insurance as a strategic risk mitigation tool. Specifically, they may wish to consider the following.

Audit Business-Specific AI Risk 

AI risks are unique to each business, influenced by how AI is integrated and the jurisdictions in which a business operates. Companies may wish to conduct thorough audits to identify these risks, especially as they navigate a patchwork of state and federal regulations.

It is also essential to assess these risks in the context of specific policy language to ensure liability coverage is not compromised. For instance, some exclusions, like those in the Hamilton policy concerning generative AI, could effectively eliminate coverage for businesses that incorporate generative AI into their products or services.

By understanding these exclusions, companies can strategically negotiate policy terms or explore alternative coverage options to safeguard against uninsured liabilities.

Involve Relevant Stakeholders

Effective risk assessments should involve relevant stakeholders, including various business units, third-party vendors and AI providers. This comprehensive approach helps ensure that all facets of a company's AI risk profile are thoroughly evaluated and addressed.

Consider AI Training and Educational Initiatives 

Given the rapidly developing nature of AI and its corresponding risks, businesses may wish to consider education and training initiatives for employees, officers and board members alike. Familiarity with AI technologies is key to developing effective strategies for mitigating AI risks.

Consider Establishing a Chief AI Officer Position

The chief AI officer would coordinate AI use and manage associated risks across all departments. This strategic role would mirror the function of a chief information security officer, ensuring a comprehensive understanding of AI technologies, compliance and risk management. By centralizing AI oversight, the chief AI officer can enhance the transparency and accountability of AI-related decisions.

Evaluate Insurance Needs Holistically 

Following business-specific AI audits, companies may wish to meticulously review their insurance programs to identify potential coverage gaps that could lead to uninsured liabilities.

Consider AI-Specific Policy Language

As insurers adapt to the evolving AI landscape, companies should be vigilant in reviewing their policies for AI exclusions and limitations. When traditional insurance products fall short, businesses might consider AI-specific policies or endorsements to secure comprehensive coverage aligned with their specific risk profiles.

As AI continues to transform the business landscape, its implications for the insurance industry are profound. While many AI-related risks may still fit under existing commercial insurance policies, the rise of broad AI exclusions — and the definitional uncertainties surrounding what qualifies as AI — signal a shift toward a more fragmented and complex coverage environment.

Policyholders should think about these issues before they materialize into uninsured risks.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

1. https://www.thecaq.org/aia-sp500-10k-climate-ai.

2. https://www.hunton.com/media/publication/200595_MealeysAILITReport-Levine-Pappas-Zullo-3-11-25.pdf.

3. https://www.hunton.com/assets/htmldocuments/noindex/Philadelphia-ARTIFICIAL-INTELLIGENCE-EXCLUSION.pdf.

4. https://www.hunton.com/assets/htmldocuments/noindex/Hamilton-EXCLUSION-GENERATIVE-ARTIFICIAL-INTELLIGENCE.pdf.

5. https://www.hunton.com/assets/htmldocuments/noindex/PC-51380-00-06-24-Artificial-Intelligence-Exclusion-Absolute.pdf.

6. https://www.armilla.ai/resources/chaucer-and-armilla-launch-new-ai-liability-insurance-product.
