The Trump administration may think regulation is paralyzing the artificial intelligence industry, but one of the industry's biggest players disagrees.
At Thursday's WIRED Big Interview event, Anthropic president and co-founder Daniela Amodei told WIRED editor Steven Levy that while Trump's AI and cryptocurrency czar David Sacks may have tweeted that her company "employs a sophisticated fear-mongering regulatory capture strategy," she believes her company's commitment to flagging the potential dangers of AI strengthens the industry.
“We have been vocal from the very beginning that we feel there is incredible potential,” Amodei said. “We really want the whole world to realize the potential, the positive benefits and the advantages that can come from AI, but to achieve that we need to get the difficult issues right. We need to make the risks manageable. And that’s why we talk about it so much.”
More than 300,000 startups, developers, and companies use some version of Anthropic's Claude model, and Amodei said that through the company's interactions with these customers, it has learned that while they want their AI to be able to do great things, they also want it to be reliable and secure.
"No one is saying, 'We want a less safe product,'" Amodei said, comparing Anthropic's reports on its model's limitations and jailbreak attempts to a car company publishing crash-test studies to show how it has addressed safety problems. Watching a crash-test dummy fly through a car window in a video may seem shocking, but learning that an automaker has updated its vehicle's safety features as a result of the test can make you want to buy the car. Amodei said the same applies to companies using Anthropic's AI products, creating a market that is somewhat self-regulating.
"We set what could almost be considered minimum safety standards simply based on what we put into the economy," she said. Companies are now "building a lot of workflows and everyday tool tasks around AI and saying, 'Well, we know this product doesn't hallucinate like that, doesn't produce harmful content, and doesn't do all these bad things.' Why would you partner with a competitor who scores lower on this?"
Photo: Annie Noelker
