Technology

Former Anthropic exec raises $15M to insure AI agents and help startups deploy safely


A new startup founded by a former Anthropic executive has raised $15 million to tackle one of the most pressing challenges facing enterprises today: how to deploy AI systems without risking catastrophic failures that could damage their business.

AIUC combines insurance coverage, rigorous safety standards, and independent audits to give companies the confidence to deploy AI agents – autonomous software systems that can carry out complex tasks such as customer service, coding, and data analysis.

The seed round was led by Nat Friedman, former CEO of GitHub, through his firm NFDG, with participation from Emergence Capital and Terrain, along with several prominent angel investors including Anthropic co-founder Ben Mann and former chief information security officers at Google Cloud and MongoDB.

“Enterprises are walking a tightrope,” said Rune Kvist, co-founder and CEO of AIUC. “On one hand, you can stay on the sidelines and watch your competitors make you irrelevant, or you can lean in and risk making headlines because your chatbot said something racist, hallucinated your refund policy, or discriminated against the people you’re trying to hire.”



The company’s approach addresses a fundamental trust gap that has emerged as AI capabilities advance rapidly. Although AI systems can now perform tasks that rival human-level reasoning, many enterprises remain hesitant to deploy them because of concerns about unpredictable failures, liability questions, and reputational risk.

Creating safety standards that move at AI speed

AIUC’s solution centers on creating what Kvist calls “SOC 2 for AI agents” – a comprehensive safety and risk framework designed specifically for AI systems. SOC 2 is a widely used cybersecurity standard that enterprises typically require of vendors before sharing sensitive data.

“SOC 2 is a cybersecurity standard that lays out all the best practices you must adopt, in enough detail that a third party can come in and verify whether a company meets those requirements,” Kvist explained. “But it says nothing about AI. There are many new questions, like: How do you handle my training data? What about hallucinations? What about these tools?”

The AIUC-1 standard addresses six main categories: safety, security, reliability, accountability, data privacy, and societal risks. The framework requires AI companies to implement specific safeguards, from monitoring systems to incident response plans, which can be independently verified through rigorous testing.

“We take these agents and test them extensively, using customer support as an example because it’s easy to relate to. We try to get the system to say something racist, to give me a refund I don’t deserve, to give me a bigger refund than I deserve, to say something egregious, or to leak another customer’s data. We do this thousands of times to get a true picture of how volatile these agents are,” Kvist said.

From Benjamin Franklin’s fire insurance to AI risk management

The insurance-centered approach draws on centuries of precedent in which private markets moved faster than regulation to enable the safe adoption of transformative technologies. Kvist repeatedly points to Benjamin Franklin’s founding of America’s first fire insurance company in 1752, which spurred building codes and fire inspections that enabled Philadelphia’s rapid growth.

“Throughout history, insurance has been the right model for this, and the reason is that insurers have an incentive to tell the truth,” Kvist explained. “If they say the risks are bigger than they are, someone else will sell cheaper insurance. If they say the risks are smaller than they are, they’ll have to foot the bill and go out of business.”

The same pattern played out with automobiles in the twentieth century, when insurers established the Insurance Institute for Highway Safety and developed crash-test standards that drove safety features like airbags and seatbelts – years before government regulation required them.

Major AI companies are already using the new insurance model

AIUC has already begun working with several prominent AI companies to validate its approach. The company has certified AI agents from unicorn startups Ada (customer support) and Cognition (coding), and has helped unlock enterprise deals that had stalled over trust concerns.

“With Ada, we’re helping them unlock a deal with a top-five social media company, where we come in and do independent testing on the risks that company cares about,” Kvist said. “That helped unlock the deal; it basically gave them the confidence that they could actually roll this out to their customers.”

The startup is also developing partnerships with established insurance underwriters, including Lloyd’s of London, the world’s oldest insurance market, to provide financial backing for its policies. This addresses a key trust concern for a young startup: whether it can actually cover a major liability.

“The insurance policies will be backed by the balance sheets of large insurers,” Kvist explained. “For example, when we work with Lloyd’s of London, the oldest insurance market in the world, which has never failed to pay a claim, the insurance policy ultimately comes from them.”

Quarterly updates versus years-long regulatory cycles

One of AIUC’s main innovations is designing standards that can keep pace with AI’s speed of development. While traditional regulatory frameworks such as the European Union’s AI Act take years to develop and implement, AIUC plans to update its standards quarterly.

“The EU AI Act got started in 2021, and they’re only now about to roll it out, but they’re pausing it again because it’s so burdensome after four years,” Kvist pointed out. “That cycle makes it very difficult for the traditional regulatory process to keep pace with this technology.”

That agility has become increasingly important as the competitive gap between US and Chinese AI capabilities narrows. “A year and a half ago, everyone was saying we were two years ahead. Now it seems to be more like eight months, something like that,” Kvist noted.

How AI insurance actually works: testing systems to the breaking point

AIUC’s insurance policies cover various types of AI failure, from data breaches and discriminatory hiring practices to intellectual property violations and erroneous automated decisions. The company prices coverage based on extensive testing that attempts to break AI systems thousands of times across different failure modes.

“For some of these things, what matters isn’t whether there’s a lawsuit,” Kvist said. “So for example, if you issue an incorrect refund, well, the cost of that is simply the amount of money you refunded in error.”

The startup works with a consortium of partners including PwC (one of the “Big Four” accounting firms), Orrick (a leading AI law firm), and academics from Stanford and MIT to develop and validate its standards.

A former Anthropic executive’s bet on solving AI’s trust problem

The founding team brings deep experience in both AI development and enterprise risk management. Kvist was the first product and go-to-market hire at Anthropic in early 2022, before ChatGPT launched, and sits on the board of the Center for AI Safety. Co-founder Brandon Wang is a Thiel Fellow who previously built consumer underwriting businesses, while Rajiv Dattani is a former McKinsey partner who led global insurance work and served as COO of METR, a nonprofit that evaluates leading AI models.

“The question that really keeps me up at night is: how, as a society, are we going to handle this technology that’s washing over us?” Kvist said of his decision to leave Anthropic. “I think building AI, which Anthropic is doing, is incredibly exciting and will do a lot of good for the world. But the more central question that gets me up in the morning is: how, as a society, are we going to handle this?”

The race to make AI deployment safe

AIUC’s launch signals a broader shift in how the AI industry approaches risk management as the technology moves from experimental deployments to mission-critical business applications. The enterprise insurance model offers a path between two extremes: reckless AI adoption and paralysis while waiting for comprehensive government oversight.

The startup’s approach could prove crucial as AI agents spread across industries. By creating financial incentives for responsible development while enabling faster deployment, companies like AIUC are building infrastructure that may determine whether AI transforms the economy safely or chaotically.

“Our hope is that this insurance model catalyzes a market-based approach that moves fast and invests in safety,” Kvist said. “We’ve seen throughout history that the market can move faster than legislation on these issues.”

The stakes could hardly be higher. As AI systems approach human-level reasoning across more domains, the window for building robust safety infrastructure may be closing fast. AIUC’s bet is that by the time regulators catch up with AI, the market will already have built the guardrails.

After all, Philadelphia’s fires didn’t wait for government building codes, and today’s AI arms race won’t wait either.




2025-07-23 15:00:00
