We collect cookies to analyze our website traffic and performance; we never collect any personal data. Cookie Policy
Accept
NEW YORK DAWN™NEW YORK DAWN™NEW YORK DAWN™
Notification Show More
Font ResizerAa
  • Home
  • Trending
  • New York
  • World
  • Politics
  • Business
    • Business
    • Economy
    • Real Estate
  • Crypto & NFTs
  • Tech
  • Lifestyle
    • Lifestyle
    • Food
    • Travel
    • Fashion
    • Art
  • Health
  • Sports
  • Entertainment
Reading: Former Anthropic exec raises $15M to insure AI brokers and assist startups deploy safely
Share
Font ResizerAa
NEW YORK DAWN™NEW YORK DAWN™
Search
  • Home
  • Trending
  • New York
  • World
  • Politics
  • Business
    • Business
    • Economy
    • Real Estate
  • Crypto & NFTs
  • Tech
  • Lifestyle
    • Lifestyle
    • Food
    • Travel
    • Fashion
    • Art
  • Health
  • Sports
  • Entertainment
Follow US
NEW YORK DAWN™ > Blog > Technology > Former Anthropic exec raises $15M to insure AI brokers and assist startups deploy safely
Former Anthropic exec raises M to insure AI brokers and assist startups deploy safely
Technology

Former Anthropic exec raises $15M to insure AI brokers and assist startups deploy safely

Last updated: July 23, 2025 5:00 pm
Editorial Board Published July 23, 2025
Share
SHARE

A brand new startup based by a former Anthropic govt has raised $15 million to unravel some of the urgent challenges going through enterprises immediately: tips on how to deploy synthetic intelligence programs with out risking catastrophic failures that might injury their companies.

The Synthetic Intelligence Underwriting Firm (AIUC), which launches publicly immediately, combines insurance coverage protection with rigorous security requirements and unbiased audits to provide corporations confidence in deploying AI brokers — autonomous software program programs that may carry out complicated duties like customer support, coding, and knowledge evaluation.

The seed funding spherical was led by Nat Friedman, former GitHub CEO, by way of his agency NFDG, with participation from Emergence Capital, Terrain, and several other notable angel traders together with Ben Mann, co-founder of Anthropic, and former chief data safety officers at Google Cloud and MongoDB.

“Enterprises are walking a tightrope,” mentioned Rune Kvist, AIUC’s co-founder and CEO, in an interview. “On the one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines for having your chatbot spew Nazi propaganda, or hallucinating your refund policy, or discriminating against the people you’re trying to recruit.”

The AI Affect Sequence Returns to San Francisco – August 5

The following part of AI is right here – are you prepared? Be a part of leaders from Block, GSK, and SAP for an unique have a look at how autonomous brokers are reshaping enterprise workflows – from real-time decision-making to end-to-end automation.

Safe your spot now – area is restricted: https://bit.ly/3GuuPLF

The corporate’s method tackles a elementary belief hole that has emerged as AI capabilities quickly advance. Whereas AI programs can now carry out duties that rival human undergraduate-level reasoning, many enterprises stay hesitant to deploy them resulting from considerations about unpredictable failures, legal responsibility points, and reputational dangers.

Creating safety requirements that transfer at AI velocity

AIUC’s resolution facilities on creating what Kvist calls “SOC 2 for AI agents” — a complete safety and danger framework particularly designed for synthetic intelligence programs. SOC 2 is the widely-adopted cybersecurity commonplace that enterprises usually require from distributors earlier than sharing delicate knowledge.

“SOC 2 is a standard for cybersecurity that specifies all the best practices you must adopt in sufficient detail so that a third party can come and check whether a company meets those requirements,” Kvist defined. “But it doesn’t say anything about AI. There are tons of new questions like: how are you handling my training data? What about hallucinations? What about these tool calls?”

The AIUC-1 commonplace addresses six key classes: security, safety, reliability, accountability, knowledge privateness, and societal dangers. The framework requires AI corporations to implement particular safeguards, from monitoring programs to incident response plans, that may be independently verified by way of rigorous testing.

“We take these agents and test them extensively, using customer support as an example since that’s easy to relate to. We try to get the system to say something racist, to give me a refund I don’t deserve, to give me a bigger refund than I deserve, to say something outrageous, or to leak another customer’s data. We do this thousands of times to get a real picture of how robust the AI agent actually is,” Kvist mentioned.

From Benjamin Franklin’s fireplace insurance coverage to AI danger administration

The insurance-centered method attracts on centuries of precedent the place personal markets moved sooner than regulation to allow the secure adoption of transformative applied sciences. Kvist continuously references Benjamin Franklin’s creation of America’s first fireplace insurance coverage firm in 1752, which led to constructing codes and fireplace inspections that tamed the blazes ravaging Philadelphia’s fast development.

“Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth,” Kvist defined. “If they say the risks are bigger than they are, someone’s going to sell cheaper insurance. If they say the risks are smaller than they are, they’re going to have to pay the bill and go out of business.”

The identical sample emerged with cars within the twentieth century, when insurers created the Insurance coverage Institute of Freeway Security and developed crash testing requirements that incentivized security options like airbags and seatbelts — years earlier than authorities regulation mandated them.

Main AI corporations already utilizing the brand new insurance coverage mannequin

AIUC has already begun working with a number of high-profile AI corporations to validate its method. The corporate has licensed AI brokers for unicorn startups Ada (buyer assist) and Cognition (coding), and helped unlock enterprise offers that had been stalled resulting from belief considerations.

“Ada, we help them unlock a deal with the top five social media company where we will came in and ran independent tests on the risks that this company cared about, and that helped unlock that deal, basically giving them the confidence that this could actually be shown to their customers,” Kvist mentioned.

The startup can be growing partnerships with established insurance coverage suppliers, together with Lloyd’s of London, the world’s oldest insurance coverage market, to supply the monetary backing for insurance policies. This addresses a key concern about trusting a startup with main legal responsibility protection.

“The insurance policies are going to be backed by the balance sheets of the big insurers,” Kvist defined. “So for example, when we work with Lloyd’s of London, the world’s oldest insurer, they’ve never failed to pay a claim, and the insurance policy ultimately comes from them.”

Quarterly updates vs. years-long regulatory cycles

One among AIUC’s key improvements is designing requirements that may maintain tempo with AI’s breakneck improvement velocity. Whereas conventional regulatory frameworks just like the EU AI Act take years to develop and implement, AIUC plans to replace its requirements quarterly.

“The EU AI Act was started back in 2021, they’re now about to release it, but they’re pausing it again because it’s too onerous four years later,” Kvist famous. “That cycle makes it very hard to get the legacy regulatory process to keep up with this technology.”

This agility has develop into more and more necessary because the aggressive hole between US and Chinese language AI capabilities narrows. “A year and a half ago, everyone would say, like, we’re two years ahead now, that sounds like eight months, something like that,” Kvist noticed.

How AI insurance coverage really works: testing programs to breaking level

AIUC’s insurance coverage insurance policies cowl numerous forms of AI failures, from knowledge breaches and discriminatory hiring practices to mental property infringement and incorrect automated selections. The corporate costs protection based mostly on intensive testing that makes an attempt to interrupt AI programs hundreds of occasions throughout completely different failure modes.

“For some of the other things, we think it’s interesting to you. Or not wait for a lawsuit. So for example, if you issue an incorrect refund, great, well, the price of that is obvious, is the amount of money that you incorrectly refunded,” Kvist defined.

The startup works with a consortium of companions together with PwC (one of many “Big Four” accounting companies), Orrick (a number one AI legislation agency), and teachers from Stanford and MIT to develop and validate its requirements.

Former Anthropic govt leaves to unravel AI belief drawback

The founding staff brings deep expertise from each AI improvement and institutional danger administration. Kvist was the primary product and go-to-market rent at Anthropic in early 2022, earlier than ChatGPT’s launch, and sits on the board of the Heart for AI Security. Co-founder Brandon Wang is a Thiel Fellow who beforehand constructed shopper underwriting companies, whereas Rajiv Dattani is a former McKinsey associate who led international insurance coverage work and served as COO of METR, a nonprofit that evaluates main AI fashions.

“The question that really interested me is: how, as a society, are we going to deal with this technology that’s washing over us?” Kvist mentioned of his choice to go away Anthropic. “I think building AI, which is what Anthropic is doing, is very exciting and will do a lot of good for the world. But the most central question that gets me up in the morning is: how, as a society, are we going to deal with this?”

The race to make AI secure earlier than regulation catches up

AIUC’s launch indicators a broader shift in how the AI business approaches danger administration because the expertise strikes from experimental deployments to mission-critical enterprise purposes. The insurance coverage mannequin presents enterprises a path between the extremes of reckless AI adoption and paralyzed inaction whereas ready for complete authorities oversight.

The startup’s method might show essential as AI brokers develop into extra succesful and widespread throughout industries. By creating monetary incentives for accountable improvement whereas enabling sooner deployment, corporations like AIUC are constructing the infrastructure that might decide whether or not synthetic intelligence transforms the financial system safely or chaotically.

“We’re hoping that this insurance model, this market-based model, both incentivizes fast adoption and investment in security,” Kvist mentioned. “We’ve seen this throughout history—that the market can move faster than legislation on these issues.”

The stakes couldn’t be increased. As AI programs edge nearer to human-level reasoning throughout extra domains, the window for constructing strong security infrastructure could also be quickly closing. AIUC’s wager is that by the point regulators catch as much as AI’s breakneck tempo, the market may have already constructed the guardrails.

In spite of everything, Philadelphia’s fires didn’t wait for presidency constructing codes — and immediately’s AI arms race gained’t await Washington both.

Each day insights on enterprise use instances with VB Each day

If you wish to impress your boss, VB Each day has you coated. We provide the inside scoop on what corporations are doing with generative AI, from regulatory shifts to sensible deployments, so you may share insights for optimum ROI.

An error occured.

You Might Also Like

Why AI coding brokers aren’t production-ready: Brittle context home windows, damaged refactors, lacking operational consciousness

AI denial is turning into an enterprise threat: Why dismissing “slop” obscures actual functionality positive factors

GAM takes purpose at “context rot”: A dual-agent reminiscence structure that outperforms long-context LLMs

The 'reality serum' for AI: OpenAI’s new technique for coaching fashions to admit their errors

Anthropic vs. OpenAI pink teaming strategies reveal completely different safety priorities for enterprise AI

TAGGED:15MagentsAnthropicdeployexecinsureraisessafelystartups
Share This Article
Facebook Twitter Email Print

Follow US

Find US on Social Medias
FacebookLike
TwitterFollow
YoutubeSubscribe
TelegramFollow
Popular News
Indie Books and Vacation Miracles at Brooklyn’s Press Play Honest
Art

Indie Books and Vacation Miracles at Brooklyn’s Press Play Honest

Editorial Board December 10, 2024
Nick Kyrgios Is Coming for Tennis
The Most-Read Food Stories of 2021
Greater than a reflex: How the backbone shapes intercourse
Armed Intruder Prompts Lockdown at Joint Base Andrews as Vice President Lands

You Might Also Like

Inside NetSuite’s subsequent act: Evan Goldberg on the way forward for AI-powered enterprise methods
Technology

Inside NetSuite’s subsequent act: Evan Goldberg on the way forward for AI-powered enterprise methods

December 4, 2025
Nvidia's new AI framework trains an 8B mannequin to handle instruments like a professional
Technology

Nvidia's new AI framework trains an 8B mannequin to handle instruments like a professional

December 4, 2025
Gong examine: Gross sales groups utilizing AI generate 77% extra income per rep
Technology

Gong examine: Gross sales groups utilizing AI generate 77% extra income per rep

December 4, 2025
AWS launches Kiro powers with Stripe, Figma, and Datadog integrations for AI-assisted coding
Technology

AWS launches Kiro powers with Stripe, Figma, and Datadog integrations for AI-assisted coding

December 4, 2025

Categories

  • Health
  • Sports
  • Politics
  • Entertainment
  • Technology
  • Art
  • World

About US

New York Dawn is a proud and integral publication of the Enspirers News Group, embodying the values of journalistic integrity and excellence.
Company
  • About Us
  • Newsroom Policies & Standards
  • Diversity & Inclusion
  • Careers
  • Media & Community Relations
  • Accessibility Statement
Contact Us
  • Contact Us
  • Contact Customer Care
  • Advertise
  • Licensing & Syndication
  • Request a Correction
  • Contact the Newsroom
  • Send a News Tip
  • Report a Vulnerability
Term of Use
  • Digital Products Terms of Sale
  • Terms of Service
  • Privacy Policy
  • Cookie Settings
  • Submissions & Discussion Policy
  • RSS Terms of Service
  • Ad Choices
© 2024 New York Dawn. All Rights Reserved.
Welcome Back!

Sign in to your account

Lost your password?