A new startup founded by an early Anthropic hire has raised $15 million to solve one of the most pressing challenges facing enterprises today: how to deploy artificial intelligence systems without risking catastrophic failures that could damage their businesses.
The Artificial Intelligence Underwriting Company (AIUC), which launches publicly today, combines insurance coverage with rigorous safety standards and independent audits to give companies confidence in deploying AI agents — autonomous software systems that can perform complex tasks like customer service, coding, and data analysis.
The seed funding round was led by Nat Friedman, former GitHub CEO, through his firm NFDG, with participation from Emergence Capital, Terrain, and several notable angel investors including Ben Mann, co-founder of Anthropic, and former chief information security officers at Google Cloud and MongoDB.
“Enterprises are walking a tightrope,” said Rune Kvist, AIUC’s co-founder and CEO, in an interview. “On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you’re trying to recruit.”
The company’s approach tackles a fundamental trust gap that has emerged as AI capabilities rapidly advance. While AI systems can now perform tasks that rival human undergraduate-level reasoning, many enterprises remain hesitant to deploy them due to concerns about unpredictable failures, liability issues, and reputational risks.
Creating security standards that move at AI speed
AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” — a comprehensive security and risk framework designed specifically for artificial intelligence systems. SOC 2 is the widely adopted cybersecurity standard that enterprises typically require vendors to meet before sharing sensitive data with them.
“SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements,” Kvist explained. “But it doesn’t say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?”