Monday, April 21, 2025

Looking ahead to the AI Seoul Summit


How summits in Seoul, France, and beyond can galvanize international cooperation on frontier AI safety

Last year, the UK Government hosted the first major global summit on frontier AI safety at Bletchley Park. It focused the world’s attention on rapid progress at the frontier of AI development and delivered concrete international action in response to potential future risks, including the Bletchley Declaration, new AI Safety Institutes, and the International Scientific Report on the Safety of Advanced AI.

Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week’s AI Seoul Summit. Below, we share some thoughts on how the summit – and those to come – can drive progress toward a common, global approach to frontier AI safety.

AI capabilities are advancing at a rapid pace

Since Bletchley, there has been strong innovation and progress across the field, including at Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all of life’s molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We’ve also been working to improve how our models perceive, reason, and interact, and recently shared our progress in building the future of AI assistants with Project Astra.

These advances in AI capabilities promise to improve the lives of many people, but they also raise new questions that need to be tackled collaboratively in a number of key safety areas. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we’ve shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cybersecurity, self-proliferation, and reasoning. We also released an in-depth exploration of how future advanced AI assistants can align with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.

At the heart of this work is our belief that we must innovate as quickly in safety and governance as we do in capabilities, and that the two must be done in tandem, continuously informing and strengthening each other.

Building international consensus on frontier AI risks

Maximizing the benefits of advanced AI systems requires building international consensus on key frontier safety issues, including anticipating and preparing for emerging risks beyond those posed by current models. However, given the high degree of uncertainty about these potential future risks, policymakers need an independent, scientifically grounded view.

That’s why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit – and we look forward to submitting evidence from our research later this year. Over time, this kind of effort could become a central input to the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.

We believe these AI summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach to governance. Keeping a distinct focus on frontier safety will also ensure these convenings complement, rather than duplicate, other international governance efforts.

Establishing best practices in evaluations and a coherent governance framework

Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior, and impact of an AI system, and are an important input for risk assessments and designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.

This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with the AI Safety Institutes in the US and UK, as well as other stakeholders, on best practices for evaluating frontier models. The AI summits could help scale this work internationally and avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It’s critical that we avoid fragmentation that could inadvertently harm safety or innovation.

The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We believe there is an opportunity over time to build on this toward a common, global approach. An initial priority for the AI Seoul Summit could be agreeing a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.

It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm, and for putting mechanisms in place to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia, and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.

Toward a global approach to frontier AI safety

Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we welcome the opportunity to strengthen global cooperation on frontier AI safety. We hope these summits will provide a dedicated forum for progress toward a common, global approach. Getting this right is a critical step toward unlocking the tremendous benefits AI can bring to society.
