Thursday, March 12, 2026

“Global Call for AI Red Lines” sounds the alarm over the lack of international AI policy

On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others agreed on one thing: there should be an international agreement on “red lines” that AI should never cross, such as not allowing AI to impersonate a human being or to self-replicate.

Together with more than 70 organizations that work on AI, they all signed the Global Call for AI Red Lines initiative, which calls on governments to reach an “international political agreement on ‘red lines’ for AI by the end of 2026.” Signatories include British-Canadian computer scientist Geoffrey Hinton, OpenAI co-founder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others.

“The goal is not to react after a major incident occurs … but to prevent large-scale, potentially irreversible risks before they happen,” said Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), during a Monday briefing with reporters.

He added: “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI should never do.”

The announcement comes ahead of the 80th UN General Assembly in New York, and the initiative was led by CeSIA, The Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence.

Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly, calling for efforts to “end Big Tech impunity through global accountability.”

Some regional AI red lines already exist. For example, the European Union’s AI Act bans certain uses of AI deemed “unacceptable” within the EU. There is also an agreement between the US and China that nuclear weapons should remain under human, not AI, control. But there is no global consensus yet.

In the long run, more is needed than “voluntary pledges,” said Niki Iliadis, director for global governance of AI at The Future Society, on Monday. Responsible scaling policies made inside AI companies “fall short for real enforcement.” Ultimately, an independent global institution “with teeth” is needed to define, monitor, and enforce the red lines, she said.

“They can comply by not building AGI until they know how to make it safe,” said Stuart Russell, a professor of computer science at UC Berkeley and a leading AI researcher. “Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it.”

Russell also said that red lines do not hinder economic development or innovation, as some critics of AI regulation argue. “You can have AI for economic development without having AGI that we don’t know how to control,” he said. “This alleged dichotomy, that if you want medical diagnosis you have to accept world-destroying AGI, I just think it’s nonsense.”
