At the beginning of 2024, Anthropic, Google, Meta and OpenAI all stood against the military use of their artificial intelligence tools. Over the following 12 months, however, something changed.
In January, OpenAI quietly dropped its ban on the use of its artificial intelligence for “military and warfare” purposes, and shortly thereafter it was reported to be working with the Pentagon on “numerous projects.” In November, the same week Donald Trump was re-elected as US president, Meta announced that the United States and selected allies would be able to use its Llama models for defense purposes. A few days later, Anthropic announced that it too would allow its models to be used for military purposes and entered into a partnership with the defense technology company Palantir. At the end of the year, OpenAI announced its own cooperation with the defense startup Anduril. Finally, in February 2025, Google revised its AI policies to allow the development and use of weapons and technologies that could harm humans. In a single year, concerns about existential threats from AGI virtually disappeared, and the use of artificial intelligence for military purposes was normalized.
Part of this change has to do with the enormous costs of building these models. Research on general-purpose technologies (the other GPT) has often emphasized the importance of the defense sector in overcoming adoption problems. “GPTs grow faster when there is a large, demanding and revenue-generating application sector,” economist David J. Teece wrote in 2018, “such as the U.S. Department of Defense’s purchases of early transistors and microprocessors.” The soft budget constraints and long-term nature of defense contracts, combined with often unclear measures of success, make the military a highly desirable customer for new technologies. Given that artificial intelligence startups in particular need to secure huge and patient investments, a shift toward military financing was perhaps inevitable. But that doesn’t explain the speed of the change, or the fact that all of America’s leading AI research labs moved in the same direction.
Over the past few years, the landscape of capitalist competition has changed dramatically, from one driven by neoliberal free-market ideals to one steeped in geopolitical concerns. To understand this transition from neoliberalism to geopolitics, we need to look at the relationship between states and their largest technology companies. Such state–capital relations were central to earlier formations of imperialism – Lenin famously characterized the imperialism of his era as a combination of monopoly capital and great powers – and remained influential throughout the 20th century. In recent decades, they have taken the form of a broad consensus between the technological and political elites on the role of digital technology in innovation, economic growth and state power.
The Silicon Valley consensus
Until approximately the mid-2010s, what can be called the Silicon Valley Consensus held sway. Under it, political and technological elites reached broad agreement about the role of technology in the world, what was required for technology to advance, the American values it purported to embody, and the requirements for capital accumulation in the technology sector. For the technological elite and the political establishment alike, globalized communications, capital, data and technology served their interests.
The Silicon Valley consensus appealed to both tech and political elites because it expressed confidence in technology’s ability to create a borderless world of trade and data under American leadership. While the tech sector may (initially) have had more utopian impulses than the state’s hard-nosed geopolitical realism, both could see that their joint projects would be realized by the same means.
In practice, this meant carte blanche for the tech sector, where regulation was either conspicuously absent or strikingly accommodating. Deregulation was, of course, a key element of the broader neoliberal period, but it particularly benefited technology companies, which were able to blur existing regulatory categories and “disrupt” existing rules. The lack of any significant federal privacy laws, or of action on the status of workers in the gig economy, highlights this widespread willingness to let digital companies operate as they see fit. Under President Bill Clinton, the Framework for Global Electronic Commerce set out principles that, according to international studies professor Henry Farrell, managed to “discourage policymakers from trying to tax or regulate” the digital economy, turning instead to voluntary, industry-led regulation. The underlying belief – one that persists today – was that any regulation would simply hinder innovation and the development of American technology and power.
