Thursday, May 8, 2025

Singapore's Vision for AI Safety Bridges the US-China Divide


The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

"Singapore is one of the few countries on the planet that gets along well with both East and West," says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. "They know that they're not going to build [artificial general intelligence] themselves; it will be done to them. So it is very much in their interest to have the countries that are going to build it talking to each other."

The countries believed most likely to build AGI are, of course, the US and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after the Chinese startup DeepSeek released a state-of-the-art model, President Trump called it "a wake-up call for our industries" and said the US needed to be "laser-focused on competing to win."

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event hosted in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta took part in the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

"In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future," said Xue Lan, a dean at Tsinghua University.

The development of increasingly capable AI models, some of which display surprising abilities, has led researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes called "AI doomers," worry that models may deceive and manipulate humans in order to pursue their own goals.

AI's potential has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is seen in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.
