How is the field of artificial intelligence evolving, and what does it mean for the future of work, education and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman covered all this and more in a wide-ranging conversation on the MIT campus on May 2.
The success of the large language models behind OpenAI’s ChatGPT has helped spur a wave of investment and innovation in AI. ChatGPT became the fastest-growing consumer app in history after its release in late 2022, and hundreds of millions of people now use the tool. Since then, OpenAI has also demonstrated AI-based image, audio and video generation products and collaborated with Microsoft.
The event, held in a packed Kresge Auditorium, captured the excitement around AI with an eye toward what’s next.
“I think most of us remember the first time we saw ChatGPT and thought, ‘Oh my God, this is great!’” Kornbluth said. “Now we’re trying to figure out what the next generation of all this will be.”
For his part, Altman welcomes the high expectations for his company and the field of artificial intelligence more broadly.
“I think it’s amazing that for two weeks everyone was freaking out about GPT-4, and then by the third week everyone was like, ‘Come on, where’s GPT-5?’” Altman said. “I think that says something really great about human expectations and aspirations, and why we all need to [keep working to] make things better.”
AI problems
At the beginning of the discussion, Kornbluth and Altman discussed the many ethical dilemmas created by artificial intelligence.
“I think we’ve made surprisingly good progress in terms of aligning the system with a set of values,” Altman said. “Although people like to say, ‘You can’t use this stuff because it’s spewing toxic waste all the time,’ GPT-4 behaves the way you want it to, and we’re able to get it to follow a given set of values, not perfectly, but better than I expected at this stage.”
Altman also pointed out that people disagree on how exactly an AI system should behave in many situations, complicating efforts to create a universal code of conduct.
“How do we decide what values a system should have?” Altman asked. “How do we decide what the system should do? To what extent does society define boundaries versus trusting the user with these tools? Not everyone will use them the way we’d like, but that’s true of any tool. I think it’s important to give people a lot of control… but there are some things the system just shouldn’t do, and we’re going to have to negotiate together what those are.”
Kornbluth agreed that challenges like eliminating bias in AI systems will be difficult to solve.
“It’s interesting to think about whether we can make the models less biased than we as human beings are,” she said.
Kornbluth also raised privacy concerns about the enormous amount of data needed to train today’s large language models. Altman said society has grappled with these concerns since the early days of the internet, but artificial intelligence makes such considerations more complex and higher-stakes. He also sees completely new questions raised by the prospect of powerful artificial intelligence systems.
“How are we going to strike a trade-off between privacy, usability and security?” Altman asked. “The trade-offs that we each decide on individually, and the benefits that become possible if you allow a system to be trained over your lifetime, are a new thing for society. I don’t know what the answers will be.”
Altman said he believes advances in future versions of AI models will be helpful, both in terms of privacy and energy consumption.
“We want GPT-5 or 6 or whatever to be the best inference engine possible,” Altman said. “The truth is that currently we can only achieve this by training it on tons of data. In the process, it learns something about very, very limited reasoning or cognition or whatever you want to call it. But the fact that it can memorize data, or that it’s storing it all in parameter space — I think we’ll look back and say, ‘That was kind of a weird waste of resources.’ I assume at some point we’ll figure out how to separate the inference engine from the need to collect tons of data or store the data in [the model], and be able to treat them as separate things.”
Kornbluth also asked how artificial intelligence could lead to job displacement.
“One of the things that irritates me most about people working on artificial intelligence is when they say with a straight face, ‘This will never eliminate any jobs. It’s purely additive. Everything will be great,’” Altman said. “This is going to eliminate a lot of current jobs and change the way a lot of current jobs function, and it’s going to create entirely new jobs. That always happens with technology.”
The promise of artificial intelligence
Altman believes that advances in artificial intelligence will make tackling all of the field’s current problems worthwhile.
“If we devoted 1 percent of global electricity demand to training a powerful artificial intelligence that could help us find a way to produce non-carbon energy or improve deep carbon capture, that would be a huge victory,” Altman said.
He also said that he is most interested in the application of artificial intelligence in scientific discoveries.
“I believe [scientific discovery] is the main engine of human progress and the only way to achieve sustainable economic growth,” Altman said. “People aren’t content with GPT-4. They want it to be better. Everyone wants more and better and faster, and with science we can make that happen.”
Kornbluth also asked Altman for advice for students considering their careers. He appealed to students not to limit themselves.
“The most important lesson to learn early in your career is that you can understand anything, and no one has all the answers at the beginning,” Altman said. “You just stumble around, you have a high iteration speed and you try to drift towards the most interesting problems, you are around the most impressive people, and you have the confidence that you will successfully iterate to the right thing. … You can do more than you think, faster than you think.”
This advice was part of Altman’s broader message of remaining optimistic and working to create a better future.
“The way we teach our young people that the world is completely screwed up, that trying to solve problems is hopeless, and that all we can do is sit in the dark in our bedrooms and think about how terrible we are, is a deeply unproductive streak,” Altman said. “I hope MIT will be different from many other college campuses. I assume so. But you all should consider fighting this part of your mission in life. Prosperity, abundance, a better life next year, a better life for our children. That is the only way forward. That is the only way to have a functioning society… and I hope you all will fight against the anti-progress, anti-‘people deserve a great life’ tendency.”