Do you want smarter insights in your inbox? Sign up for our weekly newsletters to get what is crucial for AI leaders, data and security. Subscribe now
Fresh startup founded by Early anthropic employment He collected $ 15 million to solve one of the most burning challenges that enterprises are facing: how to implement artificial intelligence systems without the risk of catastrophic failures that could harm their companies.
. Artificial intelligence insurance company (AIUC)which starts publicly today, combines insurance insurance with strict security standards and independent audits to provide companies with confidence in the implementation of AI agents – autonomous software systems that can perform elaborate tasks, such as customer service, coding and data analysis.
The seed financing round was run by Nat Friedmanformer general director of GitHub, through his company NFDGwith the participation of Rising capitalIN Areaand several well -known angels investors, including LegsAnthropic co -founder and were the main information security officers at Google Cloud and Montodb.
“Enterprises walk on the line,” he said Rune TwigIn an interview and general director of AIUC. “On the one hand, you can stay in the margins and watch your competitors make you irrelevant, or you can bend over and risk that you have chatbot headers throwing Nazi propaganda or hallucination of the refund policy or discrimination of people you are trying to recruit.”
The AI Impact series returns to San Francisco – August 5
The next AI phase is here – are you ready? Join the leaders from Block, GSK and SAP to see the exclusive look at how autonomous agents transform the flows of the work of the company-decision-making in real time for comprehensive automation.
Secure your place now – the space is circumscribed: https://bit.ly/3guplf
The company’s approach deals with a fundamental trust gap, which appeared as the possibilities of AI are developing quickly. While AI systems can now perform tasks that compete with human reasoning at the bachelor level, many enterprises are implemented by implementing due to concerns about unpredictable failures, problems related to responsibility and reputational risk.
Creating security standards that move at the speed of artificial intelligence
The AIUC solution focuses on creating what Kvist calls “SOC 2 for AI agents” – a comprehensive safety and risk framework designed specifically for artificial intelligence systems. SOC 2 It is a commonly dependent standard of cyber security, whose enterprises usually require from suppliers before providing confidential data.
“SOC 2 is a standard of cyber security, which defines all the best practices that you need to take in enough details so that the third page can come and check if the company meets these requirements,” Kvist explained. “But that doesn’t say anything about artificial intelligence. There are a lot of new questions, such as: how do you deal with my training data? What about hallucinations? What about these tool calls?”
. AIUC-1 standard Addresses six key categories: security, security, reliability, responsibility, data privacy and social risk. The framework requires AI to implement specific security, from monitoring systems to reacting plans to incidents, which can be independently verified through strict tests.
“We do these agents and test them widely, using customer service as an example, because it is uncomplicated to reference. We try to make the system tell something racist to give me a refund that I do not deserve to give me a greater refund than I deserve, say something outrageous or leaking data of another client.
Benjamin Franklin’s fire insurance after risk management AI
The insurance focused on insurance uses precedent ages in which private markets have moved faster than regulation to allow protected acceptance of transformation technologies. Kvist often refers to Benjamin Franklin at the first American fire insurance company in 1752, which led to the building code and fire inspection, which tamed the rapid development of Blazes, which destroyed Philadelphia.
“In the whole history, insurance was the right model, and the reason is that insurers are motivated to tell the truth,” Kvist explained. “If they say that the risk is greater than them, someone will sell cheaper insurance. If they say that the risk is smaller than they will, they will have to pay the bill and get out of the business.”
The same pattern appeared with cars in the 20th century, when insurers created Institute Institute of Highway Safety and developed standards of failure testing, which encouraged safety features, such as airbags and seat belts – many years before the obligation of their governmental regulations.
The main companies of AI are already using the modern insurance model
Aiuc He has already started working with several deafening AI companies to confirm her approach. The company cooperates with the startups of the Unicorn Ada (customer service) i Learning (coding) to support unlock the implementation of an enterprise that have been detained due to trust concerns.
“Ada, we help them unlock the contract with the five best companies in social media, in which we entered and conducted independent risk tests that this company cared for, and which helped unlock this contract, basically giving them confidence that you can actually show their clients,” said Kvist.
The startup also develops partnerships with recognized insurance providers in order to provide financial support for policies. This applies to key concern for trust in the startup with high liability insurance. “Insurance policies will be supported by the balances of large insurers,” Kvist explained.
Quarterly updates vs. regulatory cycles with long -term length
One of the key innovations of AIUC is the design of standards that can keep up with AI development speed. While established regulatory frames such as I have an act The development and implementation of time takes years, Aiuc It plans to update his standards quarterly.
“The EU AI Act began in 2021, they spend it immediately, but they stop it again because it is too burdensome four years later,” Kvist noted. “This cycle makes it very difficult to get an older regulatory process to keep up with this technology.”
This agility is becoming more and more crucial, because the competitive difference between us and Chinese artificial intelligence abilities narrows. “A year and a half ago, everyone would say that we are two years old now, it sounds like eight months, something like that,” noted Kvist.
How AI insurance actually works: Test systems to break the point
AIUC insurance policies include various types of artificial intelligence failures from data violations and discriminatory employment practices after violation of intellectual property and incorrect automated decisions. Price company Covering based on extensive tests that try to break AI thousands of times in different failure modes.
“For some other things, we think it is compelling for you. Or don’t wait for the process.
Startup works with a consortium of partners, including Pwc (One of the “Big Four” accounting companies), Orrick (leading AI office) and scientists from Stanford AND WITH develop and confirm its standards.
Former anthropic executive director to solve the problem with trust AI
The founding team brings deep experience from both AI development and institutional risk management. Kvist was the first product and renting on the market in Anthropic at the beginning of 2022, before starting ChatgPT, and sits on the board AI Safety Center. Co -founder Brandon Wang He is a member of Thiel who previously built consumer insurance companies, and Rajiv Dattani He is a former partner of McKinsey who managed the global insurance work and served as an operational director of the meter, a non -profit organization that assesses the leading AI models.
“The question that really interested me is: how do we intend to deal with the technology that washes us as a society?” Kvist said about his decision to leave Anthropik. “I think that building artificial intelligence, which Anthropic does, is very exciting and will do a lot of good for the world. But the most important question that touches me in the morning is: how, as a society, will we deal with it?”
A race on the safety of artificial intelligence before regulating is upset
The launch of AIUC signals a wider change in the way the AI industry approaches risk management when technology goes from experimental implementation into critical business applications. The insurance model offers enterprises a path between the extremes of reckless AI adoption and paralyzed inactivity while waiting for comprehensive government supervision.
The startup approach can be crucial because AI agents are becoming more talented and in the industries. By creating financial incentives for responsible development, while enabling faster implementation, companies such as Aiuc They build infrastructure that can determine whether artificial intelligence safely transforms the economy or chaotically.
“We hope that this insurance model, this market model, both encourages you to quickly accept and invest in security,” said Kvist. “We saw it throughout the history – that the market can move faster than the rules on these issues.”
The rate cannot be higher. Because AI systems are approaching reason at a larger number of domains, a window to build a solid safety infrastructure can quickly close. The AIUC plant consists in the fact that before the regulatory bodies make up at AI, the market will already build a handrail.
After all, Philadelphia fires are not waiting for government construction codes – and today’s AI AI arms race is not waiting for Washington either.
