If you reached a point where progress outpaced your ability to keep these systems secure, would you stop?
I don’t think today’s systems pose any existential risk, so it’s still theoretical. The geopolitical questions may actually end up being trickier. But given enough time and enough care and thoughtfulness, and using the scientific method –
If the time frames are as tight as you say, we don’t have much time for care and thoughtfulness.
We don’t have a lot of time. We’re increasingly putting resources into safety and things like cyber, and also into research on, you know, controllability and understanding of these systems, sometimes called mechanistic interpretability. At the same time, we also need to have societal debates about institution building. How do we want governance to work? How are we going to get international agreement, at least on some basic principles around how these systems are used, deployed, and built?
How much do you think AI is going to change or eliminate people’s jobs?
What generally tends to happen is that new jobs are created that use the new tools or technologies, and those jobs are actually better. We’ll see if it’s different this time, but for the next few years we’ll have these incredible tools that supercharge our productivity and almost make us a little superhuman.
If AGI can do everything humans can do, then it would seem it could do the new jobs too.
There are many things we won’t want to do with a machine. An AI tool could assist your doctor, and you might even have an AI doctor. But you wouldn’t want a robot nurse; there’s something about the human empathy aspect of that care that’s distinctly human.
Tell me what you envision when you look at our future in 20 years and, according to your forecast, AGI is everywhere?
If everything goes well, we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world: curing terrible diseases, giving us much healthier and longer lives, finding new energy sources. If that all happens, it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.
I’m skeptical. We have incredible abundance in the Western world, but we don’t distribute it fairly. When it comes to solving big problems, it’s not answers we lack so much as the will to act. We don’t need AGI to tell us how to fix climate change – we already know how. But we don’t do it.
I agree with that. As a species, as a society, we’re not good at cooperation. Our natural habitats are being destroyed, partly because it would require people to make sacrifices, and people don’t want to. But this radical abundance from AI will make things feel like a non-zero-sum game.
AGI would change human behavior?
Yes. I’ll give you a very simple example. Access to water is going to be a huge problem, but we have a solution. With energy from fusion [because AI came up with it], you suddenly solve the water access problem. Suddenly it’s no longer a zero-sum game.