A recent white paper examines the models and functions of international institutions that could help manage the opportunities and mitigate the risks associated with advanced AI.
Growing awareness of the global impact of advanced artificial intelligence (AI) has inspired public discussion about the need for international governance structures to help manage opportunities and mitigate the associated risks.
Much of this discussion has drawn on analogies with ICAO (International Civil Aviation Organization) in civil aviation, CERN (European Organization for Nuclear Research) in particle physics, IAEA (International Atomic Energy Agency) in nuclear technology, and intergovernmental and multi-stakeholder organizations in many other domains. Yet, while analogies can be a useful start, the technologies emerging from AI will be unlike aviation, particle physics, or nuclear technology.
To govern AI successfully, we need a better understanding of:
- What specific benefits and risks need to be managed internationally?
- What governance functions do those benefits and risks require?
- Which organizations are best placed to provide those functions?
In our latest paper, written with colleagues at the University of Oxford, Université de Montréal, University of Toronto, Columbia University, Harvard University, Stanford University and OpenAI, we address these questions and explore how international institutions could help manage the global impact of frontier AI development and ensure that AI's benefits reach all communities.
The critical role of international and multilateral institutions
Access to certain AI technologies could greatly boost prosperity and stability, but the benefits of these technologies may not be evenly distributed or focused on the greatest needs of underrepresented communities or the developing world. Inadequate access to internet services, computing power, or machine learning training and expertise may also prevent certain groups from fully benefiting from advances in AI.
International collaboration could help address these issues by encouraging organizations to develop systems and applications that serve the needs of underserved communities, and by ameliorating the educational, infrastructural, and economic barriers to such communities' full use of AI technologies.
Additionally, international efforts may be necessary to manage the risks posed by powerful AI capabilities. Without adequate safeguards, some of these capabilities – such as automated software development, chemistry and synthetic biology research, and text and video generation – could be misused to cause harm. Advanced AI systems may also fail in ways that are difficult to anticipate, creating the risk of accidents with potentially international consequences if the technology is not deployed responsibly.
International and multi-stakeholder institutions could help develop AI development and deployment protocols that minimize such risks. For instance, they could facilitate global consensus on the threats that different AI capabilities pose to society, and set international standards for identifying and treating models with dangerous capabilities. International collaboration on safety research would also further our ability to build systems that are reliable and resilient to misuse.
Finally, in situations where countries have incentives (e.g. from economic competition) to undercut each other's regulatory commitments, international institutions could help support and incentivize best practices and even monitor compliance with standards.
Four potential institutional models
We explore four complementary institutional models to support global coordination and governance functions:
- An intergovernmental Commission on Frontier AI could build international consensus on the opportunities and risks of advanced AI and how they may be managed. This would increase public awareness and understanding of AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and serve as a source of expertise for policymakers.
- An intergovernmental or multi-stakeholder Advanced AI Governance Organization could help internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It could also perform compliance monitoring functions for any international governance regime.
- A Frontier AI Collaborative could promote access to advanced AI through international public-private partnerships. In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives.
- An AI Safety Project could bring together leading researchers and engineers and provide them with access to computational resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resources, and coordination.
Operational challenges
Many important open questions remain about the viability of these institutional models. For example, a Commission on Frontier AI would face significant scientific challenges, given the extreme uncertainty about AI trajectories and capabilities and the limited research to date on advanced AI issues.
The rapid pace of AI progress and limited public-sector capacity on frontier AI issues could also make it difficult for an Advanced AI Governance Organization to set standards that keep pace with the risk landscape. The many difficulties of international coordination raise further questions about how countries would be incentivized to adopt its standards or accept its monitoring.
Likewise, the many obstacles that prevent societies from fully capturing the benefits of advanced AI systems (and other technologies) may keep a Frontier AI Collaborative from optimizing its impact. There may also be a difficult tension to manage between sharing the benefits of AI and preventing the proliferation of dangerous systems.
For an AI Safety Project, it will be important to carefully consider which elements of safety research are best conducted collaboratively rather than through individual company efforts. Moreover, securing adequate access to the most capable models from all relevant developers could be a challenge for the Project.
Given the immense global opportunities and challenges presented by AI systems on the horizon, greater discussion is needed among governments and other stakeholders about the role of international institutions and how their functions can further AI governance and coordination.
We hope this research spurs further conversation within the international community about ways of ensuring advanced AI is developed for the benefit of humanity.