Two members of the Extropians community, internet entrepreneurs Brian and Sabine Atkins (who had met on the Extropians mailing list in 1998 and married soon after), were so taken with this message that in 2000 they funded a nonprofit institute to advance it. At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message of benevolent superintelligence. "I thought very smart things would automatically be good," he said. Within eight months, however, he began to realize that he was wrong. Badly wrong. AI, he decided, could be a catastrophe.
"I was taking someone else's money, and I feel a fairly deep sense of duty to those who help me," Yudkowsky explained. "At some point, instead of thinking in the abstract, I started thinking, 'Well, the Atkinses would probably prefer not to be killed by a superintelligence.'" He figured the Atkinses might want a "fallback plan," but when he sat down and tried to work one out, he realized with horror that it was impossible. "That got me actually engaging with the underlying problems, and then I realized I had been completely wrong about everything."
The Atkinses were understanding, and the institute's mission pivoted from artificial intelligence to friendly artificial intelligence. "The part where we needed to solve the friendly-AI problem did put an obstacle in the path of charging straight ahead to hiring AI researchers, but we certainly didn't have the funding for that anyway," Yudkowsky said. Instead, he developed a new intellectual framework he called "rationalism." (While on its face rationalism is the belief that humankind has the power to use reason to arrive at better answers, over time it came to describe a movement that, in the words of the writer Ozy Brennan, includes "reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism.")
In a 2004 paper, "Coherent Extrapolated Volition," Yudkowsky argued that friendly AI should be developed based not only on what we think we want now, but on what would actually be in our best interest. "The engineering goal is to ask what humankind 'wants,' or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.," he wrote. In the paper he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: if your AI is programmed to produce paper clips, then unless you are very careful, it may end up filling the solar system with paper clips.
In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to advance nanotechnology. (Many of its original members came from the L5 Society, which was dedicated to creating a space colony hovering just beyond the moon, and which successfully lobbied to keep the United States from signing the 1979 United Nations Moon Agreement over provisions it feared would foreclose space colonization.) Peter Thiel was there, and Yudkowsky approached him after the meal.
