OpenAI CEO Sam Altman is still reeling this week after his company signed a deal with the U.S. military. OpenAI employees criticized the move, which came after Anthropic terminated its roughly $200 million contract with the Pentagon, and asked Altman to disclose more information about the deal. Altman admitted it looked “sloppy” in a social media post.
While this incident has become headline news, it may be just the latest and most public example of OpenAI setting unclear rules about how the U.S. military can access its artificial intelligence.
In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. Even so, some OpenAI employees discovered that the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI’s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been working with the Department of Defense for decades. Microsoft was also OpenAI’s largest investor and held extensive licenses to commercialize the startup’s technology.
Sources say that later that year, OpenAI employees saw Pentagon officials walking through the company’s offices in San Francisco. The sources spoke on condition of anonymity because they were not authorized to discuss the companies’ private affairs.
Some OpenAI employees were wary of dealing with the Pentagon, while others were simply confused about what OpenAI’s terms of use meant. Did the policy apply to Microsoft? While sources tell WIRED that this was unclear to most employees at the time, OpenAI and Microsoft spokespeople say Azure OpenAI products are not, and were not, covered by OpenAI’s policies.
“Microsoft has a product called Azure OpenAI Service that became available to the U.S. government in 2023 and is subject to Microsoft’s terms of service,” spokesman Frank Shaw said in a statement to WIRED. Microsoft declined to say when it made Azure OpenAI available to the Pentagon, but notes that the service was not approved for “top secret” government workloads until 2025.
“Artificial intelligence already plays a significant role in national security, and we believe it is important to have a seat at the table to ensure it is deployed safely and responsibly,” OpenAI spokeswoman Liz Bourgeois said in a statement. “In approaching this work, we have been transparent with our employees, providing regular updates and dedicated channels where teams can ask questions and connect directly with our national security team.”
The Department of Defense did not respond to WIRED’s request for comment.
In January 2024, OpenAI updated its policies to remove the blanket ban on military applications. Sources say several OpenAI employees learned of the policy update from an article in The Intercept. Company leaders later discussed the change at an all-hands meeting, explaining how the company would carefully approach this area going forward.
In December 2024, OpenAI announced a partnership with Anduril to develop and deploy artificial intelligence systems for “national security missions.” Before the announcement, OpenAI told employees that the scope of the partnership was narrow and would apply only to unclassified workloads, the same sources said. This contrasts with the deal Anthropic signed with Palantir, under which Anthropic’s artificial intelligence would be used for classified military work.
Palantir reached out to OpenAI in fall 2024 to discuss participating in the “FedStart” program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately rejected the proposal and told employees it would be too risky, two sources familiar with the matter tell WIRED. However, OpenAI is now working with Palantir in other ways.
Around the time the Anduril deal was announced, several dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company’s military partnerships, sources said and a spokesperson confirmed. Some felt the company’s models were too unreliable to handle users’ credit card information, let alone support Americans on the battlefield.
