Fortune | FORTUNE 07月23日 21:15
Exclusive: Who covers the damage when an AI agent goes rogue? This startup has an insurance policy for that
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

人工智能保险公司(AIUC)近日宣布获得1500万美元种子轮融资,旨在为AI代理构建必要的保险、审计和认证基础设施,以安全地将AI引入企业。AIUC推出的AIUC-1风险与安全框架,整合了现有标准并增加了代理特有的安全措施,通过保险机制激励企业降低AI风险。该框架旨在成为AI代理采用的行业标准,如同SOC-2认证在网络安全领域的地位。AIUC提供标准制定、独立审计和责任保险,以期在2030年建立一个价值5000亿美元的AI代理保险市场,解决企业对AI代理信任度和法律合规性的担忧。

💡AIUC推出AIUC-1框架,整合NIST、欧盟AI法案和MITRE ATLAS等标准,并加入可审计的代理特有安全措施,为AI代理的应用提供统一的信任信号,解决企业在AI采用过程中面临的法律模糊性问题。

💰保险的核心在于创造降低风险的经济激励。AIUC通过追踪AI代理的错误和问题,并对采取安全措施的AI系统提供更优惠的保险条款,从而推动AI供应商改进实践,加速企业对AI的采纳。

🏛️借鉴汽车保险和电子产品安全测试的先例,AIUC致力于建立一个独立的第三方生态系统,提供AI代理的标准、审计和责任保险。独立的审计将通过尝试使AI代理失败来测试其在现实世界中的表现,确保其安全性和可靠性。

🤝AIUC提供了一种市场驱动的负责任AI发展途径,不同于纯粹的政府监管或企业自愿承诺。通过类似SOC-2的网络安全认证标准,AIUC-1旨在成为AI代理安全性的行业基准,推动AI责任保险的普及。

📈AIUC预测,到2030年,AI代理保险市场将达到5000亿美元,超越网络保险。这一预测基于AI代理日益增长的自主决策能力和潜在的责任风险,例如AI销售代理泄露客户信息或AI金融助手提供错误信息,这些都将是AIUC保险业务的重点覆盖领域。

Today, the Artificial Intelligence Underwriting Company (AIUC) is emerging from stealth with a $15 million seed round led by Nat Friedman at NFDG, with participation from Emergence, Terrain, and notable angels including Anthropic cofounder Ben Mann and former CISOs from Google Cloud and MongoDB. The company’s goal? Build the insurance, audit, and certification infrastructure needed to bring AI agents safely into the enterprise world.

That’s right: Insurance policies for AI agents. AIUC cofounder and CEO Rune Kvist says that insurance for agents—that is, autonomous AI systems capable of making decisions and taking action without constant human oversight—is about to be big business. Previously the first product and go-to-market hire at Anthropic in 2022, Kvist’s founding team also includes CTO Brandon Wang, a Thiel Fellow who previously founded a consumer underwriting business, and Rajiv Dattani a former McKinsey partner who led work in the global insurance sector, and was COO of METR, a research non-profit that evaluated OpenAI and Anthropic’s models before deployment.

Creating financial incentives to reduce risk of AI agent adoption

At the heart of AIUC’s approach is a new risk and safety framework called AIUC-1, designed specifically for AI agents. It pulls together existing standards like the NIST AI Risk Management Framework, the EU AI Act, and MITRE’s ATLAS threat model—then layers on auditable, agent-specific safeguards. The idea is simple: make it easy for enterprises to adopt AI agents with the same kind of trust signals they expect in cloud security or data privacy.

“The important thing about insurance is that it creates financial incentives to reduce the risk,” Kvist told Fortune. “That means that we’re going to be tracking, where does it go wrong, what are the problems you’re solving. And insurers can often enforce that you do take certain steps in order to get certified.” 

While there other startups also currently working on AI insurance products, Kvist said none are building the kind of agent standard that prevents risks like AIUC-1. “Insurance & standards go hand-in-hand to create confidence around AI adoption,” he said. 

“AIUC-1 creates a standard for AI adoption,” said John Bautista, partner at law firm Orrick and who helped create the standard. “As businesses enter a brave new world of AI, there’s a ton of legal ambiguities that hold up adoption. With new laws and frameworks constantly emerging, companies need one clear standard that pulls it all together and makes adoption massively simple,” he said.

A need for independent vendors

The story of American progress, he added, is also a story of insurance. Benjamin Franklin founded the country’s first mutual fire insurance company in response to devastating house fires. In the 20th century, specialized players like UL Labs emerged from the insurance industry to test the safety of electric appliances. Car insurers built crash-test standards that gave birth to the modern auto industry.

AIUC is betting that history is about to repeat. “It’s not Toyota that does the car crash testing, it’s independent bodies.” Kvist pointed out. “I think there’s a need for an independent ecosystem of companies that are answering [the question], can we trust these AI agents?” 

To make that happen, AIUC will offer a trifecta: standards, audits, and liability coverage. The AIUC-1 framework creates a technical and operational baseline. Independent audits test real-world performance—by trying to get agents to fail, hallucinate, leak data, or act dangerously. And insurance policies cover customers and vendors in the event an agent causes harm, with pricing that reflects how safe the system is.

If an AI sales agent accidentally exposes customer personally identifiable information, for example, or if an AI assistant in finance fabricates a policy or misquotes tax information, this type of insurance policy could cover the fallout. The financial incentive, Kvist explained, is the point. Just like consumers get a better car insurance rate for having airbags and anti-lock brakes, AI systems that pass the AIUC-1 audit could get better terms on insurance, in Kvist’s view. That pushes AI vendors toward better practices, faster—and gives enterprises a concrete reason to adopt sooner, before their competitors do.

Using insurance to align incentives

AIUC’s view is that the market, not just government, can drive responsible development. Top-down regulation is “hard to get right,” said Kvist. But leaving it all to companies like OpenAI, Anthropic and Google doesn’t work either—voluntary safety commitments are already being walked back.  Insurance creates a third way to align incentives and evolves with the technology, he explained. 

Kvist likens AIUC-1 to SOC-2, the security certification standard that gave startups a way to signal trust to enterprise buyers. He imagines a world in which AI agent liability insurance becomes as common—and necessary—as cyber insurance is today, predicting a $500 billion market by 2030, eclipsing even cyber insurance. 

AIUC is already working with several enterprise customers and insurance partners (AIUC said it could disclose the names yet), and is moving quickly to become the industry benchmark for AI agent safety. 

Investors like Nat Friedman agree. As the former CEO of GitHub, Friedman saw the trust issues firsthand when launching GitHub Copilot. “All his customers were wary of adopting it,” Kvist recalls. “There were all these IP risks.” As a result, Friedman had been looking for an AI insurance startup for a couple of years. After a 90-minute pitch meeting, he said he wanted to invest—which he did, in a seed round in June, before Friedman moved to join Alexandr Wang at Mark Zuckerberg’s new Meta Superintelligence Labs. 

In a few years, said Kvist, insuring AI agents will be mainstream. “These agents are making a much bigger promise, which is ‘we’re going to do the work for you,’” he said. “We think the liability becomes much bigger, and therefore the interest is much bigger.” 

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

AI保险 AI代理 风险管理 企业AI AIUC
相关文章