Monday, December 23, 2024

OpenAI co-founder Ilya Sutskever says the way artificial intelligence is built will change soon


OpenAI co-founder and former chief scientist Ilya Sutskever made headlines this year after he left to start his own AI lab, Safe Superintelligence Inc. He has stayed out of the spotlight since his departure, but on Friday he made a rare public appearance in Vancouver at the Neural Information Processing Systems (NeurIPS) conference.

“Pre-training as we know it will unquestionably end,” Sutskever said on stage. This refers to the first phase of AI model development, in which a large language model learns patterns from huge amounts of unlabeled data, typically text from the internet, books, and other sources.
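For readers less familiar with the term, the sketch below illustrates the next-token-prediction objective that underlies pre-training. It is a toy PyTorch example written for this article, not something shown in Sutskever's talk; the model and its sizes are deliberately minimal.

```python
# Illustrative sketch of the pre-training objective: predict the next token
# in a stream of unlabeled text. Sizes and data here are toy placeholders.
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64          # arbitrary toy dimensions
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),      # outputs a score for every possible next token
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# A "document" is just a sequence of token ids taken from raw, unlabeled text.
tokens = torch.randint(0, vocab_size, (128,))
inputs, targets = tokens[:-1], tokens[1:]  # each position learns to predict its successor

logits = model(inputs)
loss = loss_fn(logits, targets)            # how surprised the model is by the true next token
loss.backward()
optimizer.step()
```

Real systems use far larger transformer models and trillions of tokens, but the objective, predicting the next token of existing human-written text, is the part that depends on a finite supply of data.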

“We’ve achieved peak data and there’ll be no more.”

During his NeurIPS talk, Sutskever said that while existing data can still drive AI development further, the industry is running out of new data to train on. This dynamic, he said, will eventually force a shift away from the way models are trained today. He compared the situation to fossil fuels: just as oil is a finite resource, the internet contains a finite amount of human-generated content.

“We’ve achieved peak data and there’ll be no more,” Sutskever said. “We have to deal with the data that we have. There’s only one internet.”

Image: Ilya Sutskever/NeurIPS

He predicted that next-generation models “will be agentic in a real way.” Agents have become a genuine buzzword in the AI field. Although Sutskever did not define them in his talk, they are widely understood to be autonomous AI systems that perform tasks, make decisions, and interact with software on their own.
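Sutskever did not spell out what such a system looks like in practice. The self-contained Python sketch below is one common way the observe-decide-act loop behind the term is illustrated; the policy, tool names, and goal are invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal, self-contained sketch of an "agent" loop: a model (here a stand-in
# policy) repeatedly decides on an action, executes it with a tool, and
# observes the result until it declares the task finished.

@dataclass
class Action:
    name: str
    argument: str

def run_agent(policy: Callable[[list[str]], Action],
              tools: dict[str, Callable[[str], str]],
              goal: str, max_steps: int = 10) -> str:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = policy(history)                            # the model chooses the next step
        if action.name == "finish":
            return action.argument                          # the agent decides it is done
        observation = tools[action.name](action.argument)   # act: search, run code, call an API
        history.append(f"{action.name}({action.argument}) -> {observation}")
    return "stopped after max_steps"

# Toy usage: a hard-coded policy that searches once, then finishes.
def toy_policy(history: list[str]) -> Action:
    return Action("search", "NeurIPS 2024") if len(history) == 1 else Action("finish", history[-1])

print(run_agent(toy_policy, {"search": lambda q: f"results for {q!r}"}, "look something up"))
```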

He said future systems would not only be “agentic” but also be able to reason. Unlike today’s AI, which mainly matches patterns based on what the model has seen before, future AI systems will be able to work things out step by step in a way that is more comparable to thinking.

The more a system reasons, “the more unpredictable it becomes,” Sutskever said. He compared the unpredictability of “true reasoning systems” to the way advanced chess-playing AI “is unpredictable to the best chess players in the world.”

“They will understand things from limited data,” he said. “They will not get confused.”

On stage, he compared the scaling of artificial intelligence systems to evolutionary biology, citing research on the relationship between brain and body mass across species. He noted that while most mammals follow a single scaling pattern, hominids (human ancestors) show a markedly different slope in the brain-to-body-mass relationship on a logarithmic scale.

He suggested that just as evolution found a new scaling pattern for hominid brains, artificial intelligence may similarly discover new approaches to scaling beyond how pre-training works today.
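For context, the brain-body relationship Sutskever referenced is an allometric power law, which plots as a straight line on log-log axes. The formulation below is a standard textbook sketch of that idea; the symbols are illustrative and not figures from his slides.

```latex
% Allometric (power-law) brain-body scaling, linear on log-log axes.
% The constants c and \alpha are illustrative, not values from the talk.
\[
  m_{\text{brain}} = c \, m_{\text{body}}^{\alpha}
  \quad\Longrightarrow\quad
  \log m_{\text{brain}} = \log c + \alpha \log m_{\text{body}}
\]
% A different scaling regime, such as the one attributed to hominids,
% shows up as a different slope \alpha (and intercept \log c) on the same plot.
```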

Image: Ilya Sutskever/NeurIPS

After Sutskever’s talk ended, an audience member asked him how researchers could create the right incentives for humanity to build artificial intelligence in a way that gives it “the freedoms that we have as Homo sapiens.”

“I feel like in some ways these are questions that people should be reflecting on more,” Sutskever replied. He paused for a moment, then said he “didn’t feel confident answering these types of questions” because it would require a “top-down government structure.” An audience member suggested cryptocurrency, prompting chuckles from others in the room.

“I don’t feel like I’m the right person to comment on cryptocurrency, but there’s a chance that what you are describing will happen,” Sutskever said. “You know, in a way it’s not a bad end result if you have AIs and all they want is to coexist with us and just have rights. Maybe everything will be fine… I think things are incredibly unpredictable. I hesitate to comment on this, but I encourage the speculation.”
