2025 will see a course correction in artificial intelligence and geopolitics as world leaders increasingly understand that their national interests are best served by the promise of a more positive and collaborative future.
The years since ChatGPT's launch have seen an AI discourse caught between gold rush and moral panic. In 2023, even as record investment in artificial intelligence was recorded, technology figures including Elon Musk and Steve Wozniak signed an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, while others compared the risks of AI to those of "nuclear war" and "pandemics".
This has understandably clouded the judgment of political leaders, pushing the geopolitical discussion on artificial intelligence into troubling places. At the AI & Geopolitics Project, my research organization at the University of Cambridge, our analysis clearly shows a growing trend towards AI nationalism.
For example, in 2017 President Xi Jinping announced China's ambition to become an AI superpower by 2030. The country's New Generation Artificial Intelligence Development Plan set the goals of reaching a "world-leading level" of AI by 2025 and becoming a major center of AI innovation by 2030.
The CHIPS and Science Act of 2022, together with US export controls on advanced semiconductors, was a direct response to this situation, intended to boost domestic US AI capabilities while constraining China's. In 2024, under an executive order signed by President Biden, the US Department of the Treasury also published draft regulations prohibiting or restricting US investments in Chinese artificial intelligence.
AI nationalism portrays AI as a battle to be won rather than an opportunity to be seized. Those who favor this approach, however, would do well to draw deeper lessons from the Cold War that go beyond the notion of an arms race. At the time, the United States, eager to become the most technologically advanced nation, used politics, diplomacy, and statecraft to create a positive and aspirational vision for space exploration. Successive US administrations also won support at the United Nations for a treaty keeping nuclear weapons out of outer space, specifying that no nation can claim sovereignty over the Moon, and declaring outer space the "province of all mankind."
AI has so far lacked the same political leadership. In 2025, however, we will start to see a shift towards cooperation and diplomacy.
The AI summit France will host in 2025 will be part of this change. President Macron is already changing the format of the event, moving away from a narrow focus on AI "safety" toward what he calls more pragmatic "solutions and standards." In a speech delivered virtually to the summit in Seoul, the French president made clear that he intends to address a much broader range of policy issues, including how to actually deliver the benefits of artificial intelligence to society.
The UN, recognizing that many countries have been excluded from the AI debate, also published its own plan in 2024 for a more collaborative global approach.
Even the US and China have begun tentative diplomacy, establishing a bilateral consultation channel on artificial intelligence in 2024. While the impact of these initiatives remains uncertain, they clearly indicate that in 2025 the world's AI superpowers will likely pursue diplomacy over nationalism.