Philosopher Nick Bostrom recently published a paper in which he argued that the small chance of AI wiping out all of humanity might be worth the risk, because advanced AI could free humanity from a “universal death sentence”. This sunnier risk calculus is a big shift from his earlier, darker thinking on artificial intelligence, which made him the godfather of doom. His 2014 book Superintelligence was an early examination of the existential risks of AI. One unforgettable thought experiment: an AI tasked with making paper clips destroys humanity, because all those resource-hungry humans are an obstacle to paper clip production. His newer book, Deep Utopia, reflects a change in his focus. Bostrom, who headed the Future of Humanity Institute at Oxford, muses on the “solved world” that awaits if we get artificial intelligence right.
STEVEN LEVY: Deep Utopia is more sanguine than your previous book. What changed for you?
NICK BOSTROM: I call myself a restless optimist. I am very excited about the potential to radically improve human life and unlock possibilities for our civilization. This is consistent with the real possibility of something going wrong.
You wrote a paper with a striking argument: since we’re all going to die anyway, the worst that can happen with artificial intelligence is that we die sooner. But if artificial intelligence succeeds, it could extend our lives, perhaps indefinitely.
The paper obviously addresses only one consideration. No single paper can cover life, the universe, and the meaning of everything. So let’s carve out this one small problem and try to work through it.
That’s not a small problem.
I guess I’ve been annoyed by some of the arguments put forward by the doomsayers, along the lines of: if you build AI, you’ll kill me and my children, how dare you. Like the recent book If Anyone Builds It, Everyone Dies. Well, it’s even more the case that if nobody builds it, everyone dies! That has been true for the past 100,000 years.
But in a doomsday scenario, everyone dies and no more people are born. A significant difference.
Of course, that’s something I’ve worried about a great deal. But in this paper I address a different question: what would be best for the people alive today, like you, me, our families, and the people of Bangladesh? It looks like our life expectancy would increase if we develop artificial intelligence, even though it’s quite risky.
In Deep Utopia you speculate that artificial intelligence could create such incredible abundance that humanity might have a huge problem finding purpose. I live in the United States. We are a very wealthy country, but our government, supposedly with the support of its citizens, pursues policies that deny services to the poor and shower rewards on the wealthy. I suspect that even if artificial intelligence could provide abundance for everyone, we wouldn’t distribute it to everyone.
You may be right. Deep Utopia starts from the assumption that things go well. If we do a reasonably good job of governance, everyone gets a share. Then there’s quite a deep philosophical question about what a good human life would look like under those ideal circumstances.
The meaning of life is something you hear about in Woody Allen movies and maybe in the philosophy community. I’m more concerned about having the means to support myself and getting a share of that abundance.
The book isn’t only about meaning. That’s one of several values it considers. Abundance could be a wonderful emancipation from the drudgery people have been subjected to. If, as an adult, you have to give up, say, half your waking hours doing a job you don’t enjoy and don’t believe in just to make ends meet, that’s a sad situation. Society is so used to this that we’ve built all kinds of rationalizations around it. It’s like a partial form of slavery.
