Major insurers are moving to ring-fence their exposure to artificial intelligence failures, after a run of costly and highly public incidents pushed concerns about systemic, correlated losses to the top of the industry’s risk models. According to the Financial Times, AIG, WR Berkley, and Great American have each sought regulatory clearance for new policy exclusions that would allow them to deny claims tied to the use or integration of AI systems, including chatbots and agents.
The requests arrive at a time when companies across virtually all sectors have accelerated adoption of generative tools. That shift has already produced expensive errors. Google is facing a $110 million defamation suit after its AI Overview feature incorrectly claimed a solar company was being sued by a state attorney general. Meanwhile, Air Canada was ordered to honor a discount invented by its customer-service chatbot, and UK engineering firm Arup lost £20 million after staff were duped by a digitally cloned executive during a video-call scam.
Those incidents have made it harder for insurers to quantify where liability begins and ends. Mosaic Insurance told the FT that outputs from large language models remain too unpredictable for traditional underwriting, describing them as “a black box.” Even Mosaic, which markets specialist cover for AI-enhanced software, has declined to underwrite risks from LLMs like ChatGPT.
As a workaround, a potential WR Berkley exclusion would bar claims tied to “any actual or alleged use” of AI, even if the technology forms only a minor part of a product or workflow. AIG told regulators it had “no plans to implement” its proposed exclusions immediately, but wants the option available as the frequency and scale of claims increase.
At issue is not only the severity of individual losses but the threat of widespread, simultaneous damage triggered by a single underlying model or vendor. Kevin Kalinich, Aon's head of cyber, told the paper that the industry could absorb a $400 million or $500 million hit from a misfiring agent used by one company. What it cannot absorb, he said, is an upstream failure that produces a thousand losses at once, which he described as a "systemic, correlated, aggregated risk."
Some carriers have moved toward partial clarity through policy endorsements. QBE introduced one extending limited coverage for fines under the EU AI Act, capped at 2.5% of the insured limit. Chubb has agreed to cover certain AI-related incidents while excluding any event capable of affecting many policyholders simultaneously in a "widespread" fashion. Brokers say these endorsements must be read closely, as some reduce protection while appearing to offer new guarantees.
As regulators and insurers reshape their positions, businesses may find that the risk of deploying AI now sits more heavily on their own balance sheets than they expected.