Three days after the Trump administration published its long-awaited AI Action Plan, the Chinese government issued its own AI policy blueprint. Was the timing a coincidence? I doubt it.
China’s “Global AI Governance Action Plan” was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.
Zhou Bowen, leader of the Shanghai AI Lab, one of China’s top AI research institutions, touted his team’s work on AI safety at WAIC. He also suggested that the government could play a role in monitoring commercial AI models for security vulnerabilities.
In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country’s leading voices on AI, said that he hopes AI safety organizations from around the world will find ways to collaborate. “It would be best if the UK, US, China, Singapore, and other institutes came together,” he said.
The conference also featured closed-door meetings on AI safety policy issues. Speaking after attending one such confab, Paul Triolo, a partner at the consulting firm DGA-Albright Stonebridge Group, told WIRED that the discussions were productive, despite the notable absence of American leadership. With the US stepping back, a “coalition of major AI safety players, led by China, Singapore, the UK, and the EU, will now attempt to build guardrails around frontier AI model development,” Triolo told WIRED. He added that it wasn’t just the US government that was missing: of all the major American AI labs, only Elon Musk’s xAI sent employees to attend the WAIC forum.
Many Western attendees were surprised to learn how much of the conversation about AI in China revolves around safety regulations. “You could literally attend AI safety events nonstop over the past seven days. That wasn’t the case at some of the other global AI summits,” Brian Tse, founder of the AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a full-day safety forum in Shanghai featuring prominent AI researchers such as Stuart Russell and Yoshua Bengio.
Switching Positions
Comparing China’s AI blueprint with Trump’s action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers assumed they would be held back by government-imposed censorship requirements. Now, US leaders say they want to ensure that homegrown AI models “pursue objective truth,” an undertaking that, as my colleague Steven Levy wrote in last week’s Backchannel newsletter, is “a blatant exercise in top-down ideological bias.” China’s AI action plan, by contrast, reads like a globalist manifesto: it recommends that the United Nations help lead international AI efforts and suggests that governments should play an important role in regulating the technology.
Although their governments take very different approaches to AI safety, people in China and the United States worry about many of the same things: model hallucinations, discrimination, existential risks, and cybersecurity vulnerabilities. Because American and Chinese frontier AI models are “trained on the same architecture and use the same methods of scaling laws, the societal impacts and the risks they pose are very, very similar,” says Tse. That also means academic research on AI safety is converging in the two countries, including in areas such as scalable oversight (how humans can monitor AI models with other AI models) and the development of interoperable safety testing standards.
