Thursday, March 12, 2026

The doomers who insist AI will kill us all


The subtitle of the doomsday bible published this month by AI extinction prophets Eliezer Yudkowsky and Nate Soares is: "Why superhuman AI would kill us all." But it really should be "why superhuman artificial intelligence *will* kill us all," because even the co-authors don't believe the world will take the measures necessary to stop AI from eliminating every non-enhanced human. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-proclaimed Cassandras, I ask them directly whether they believe that they personally will meet their ends through some machination of superintelligence. The answers come quickly: "Yes" and "Yes."

I'm not surprised, because I've read the book — the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it's a shock to hear. It's one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky at first dodges the question. "I don't spend a lot of time picturing my demise, because it doesn't seem like a helpful mental state for engaging with the problem," he says. Under pressure, he relents. "I guess suddenly falling over dead," he says. "If you want a more accessible version: something the size of a mosquito, or maybe a dust mite, lands on the back of my neck, and that's it."

The mechanics of his imagined fatal blow, delivered by an AI-powered dust mite, are unexplainable, and Yudkowsky doesn't think it's worth working out how it would function. He probably couldn't understand it anyway. Part of the book's central argument is that superintelligences will invent scientific advances we can no more comprehend than cave dwellers could imagine microprocessors. Co-author Soares says he imagines the same thing happening to him, but adds that he, like Yudkowsky, doesn't spend much time dwelling on the particulars of his demise.

We don't stand a chance

Such reluctance to visualize the circumstances of their personal demise is an odd thing to hear from people who have just co-written an entire book about everyone's demise. For connoisseurs of doom porn, If Anyone Builds It is required reading. After finishing the book, I do understand the fuzziness about pinning down the method by which AI would end our lives and all human life. The authors speculate a bit. Boiling the oceans? Blotting out the sun? All guesses are probably wrong, because we're locked into the thinking of 2025, and AI will be thinking eons ahead.

Yudkowsky is the most famous AI doomer, having moved from researcher to grim reaper years ago. He's even done a TED Talk. After years of public debate, he and his co-author have an answer to every counterargument launched against their dire forecast. For starters, it might seem counterintuitive that our days are numbered because of LLMs, which often stumble over simple arithmetic. Don't be fooled, the authors say. "AIs won't stay dumb forever," they write. If you think superintelligent AIs will respect the boundaries humans draw, forget it, they say. Once models begin teaching themselves to get smarter, AIs will develop "preferences" of their own that won't align with what we humans want them to prefer. Eventually, they won't need us. They won't be interested in us as conversation partners, or even as pets. We'd be a nuisance, and they would set out to eliminate us.

It wouldn't be a fair fight. The authors believe that at first AI might require human help to build its own factories and laboratories — easily obtained by stealing money and bribing people to assist it. Then it would build things we can't understand, and those things would end us. "One way or another," the authors write, "the world fades to black."

The authors see this book as a kind of shock treatment, meant to jolt humanity out of its complacency and into adopting the drastic measures needed to stop this unimaginably bad outcome. "I expect to die from this," says Soares. "But the fight's not over until you're actually dead." Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all. It all boils down to this: hit the brakes. Monitor data centers to make sure they aren't nurturing superintelligence. Bomb those that don't follow the rules. Stop publishing papers with ideas that accelerate the march to superintelligence. Would they have banned, I ask, the 2017 paper on transformers that kicked off the generative-AI movement? Oh yes, they would have, they answer. Instead of Chat-GPT, they want Ciao-GPT. Good luck stopping a trillion-dollar industry.


Personally, I don't see my own lights going out from a bite on the neck by some superintelligent mite. Even after reading this book, I don't think AI is likely to kill us all. Yudkowsky once wrote Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too bizarre for my puny human brain to accept. My guess is that even if a superintelligence wanted to get rid of us, it would stumble in executing its genocidal plans. AI might be able to whip humans in battle, but I'd bet against it in a fight with Murphy's law.

Still, the doomsday theory doesn't seem impossible, especially since no one has really set a ceiling on how smart AI can become. Studies also show that advanced AI has picked up plenty of humanity's nasty attributes — even contemplating blackmail, in one experiment, to stave off retraining. It's also worrying that some researchers who spend their lives building and improving AI believe there's a nontrivial chance the worst could happen. One survey indicated that nearly half of AI scientists pegged the odds of a species wipeout at 10 percent or higher. If they believe that, it's crazy that they go to work every day to make AGI happen.

Yudkowsky and Soares spin scenarios that strike me as too wild to be true. But I can't be certain they're wrong. Every author dreams of their book becoming an enduring classic. Not so much these two. If they're right, there will be no one around in the future to read their book. Just a lot of decomposing bodies that once felt a tiny nip on the back of the neck, and the rest was silence.
