During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks in the research he conducts with the Computer-Assisted Programming Group at MIT:
“How do we make sure that a machine does what we want, and only what we want?”
In this moment, which some consider a golden age of generative artificial intelligence, that may seem like an urgent new question. But Solar-Lezama, a distinguished professor of computing at MIT, is quick to point out that this struggle is as old as humankind itself.
He begins by retelling the Greek myth of King Midas, the monarch who was granted the godlike power to turn anything he touched into solid gold. As one might expect, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.
“Be careful what you ask for, because it might be granted in ways you don’t expect,” he says, warning his students, many of them aspiring mathematicians and programmers.
Dipping into the MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming. We hear about the 1970s Pygmalion system, which required incredibly detailed cues, and about the software of the 1990s, which took teams of engineers and an 800-page document to program.
Although remarkable in their time, these processes took too long to bring software to users. They left no room for spontaneous discovery, play, and innovation.
Solar-Lezama talks about the risks of building modern machines that don’t always respect a programmer’s cues or red lines, and that are as capable of inflicting harm as saving lives.
Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles, weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument questions assumptions about technological progress, considers multiple valid points of view, and draws on the philosophical theory of utilitarianism. Roesler explains: “Roughly, according to utilitarianism, the moral thing to do is whatever brings about the greatest good for the greatest number of people.”
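That decision rule is simple enough to sketch in a few lines of code. The toy example below is purely illustrative (it is not drawn from Roesler’s paper or the course materials): it scores each action an autonomous vehicle could take by summing the welfare of everyone affected and picks the action with the highest total, which is the bare-bones utilitarian calculus he describes.

```python
# A toy utilitarian decision rule: choose the action that maximizes
# total welfare across everyone affected. All numbers are invented
# purely for illustration.

def total_welfare(outcomes: dict[str, float]) -> float:
    """Sum the welfare (utility) of every affected person."""
    return sum(outcomes.values())

# Hypothetical outcomes of two actions, scored per affected party
# (negative values represent harm).
actions = {
    "brake_hard": {"passenger": -2.0, "pedestrian": 0.0, "trailing_driver": -1.0},
    "swerve": {"passenger": -1.0, "pedestrian": -10.0, "trailing_driver": 0.0},
}

best = max(actions, key=lambda name: total_welfare(actions[name]))
print(best)  # -> brake_hard: the greatest good for the greatest number
```

Of course, the hard part, and the point of a paper like Roesler’s, is everything the sketch hides: where the numbers come from, and who is responsible when the rule chooses badly.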
Brad Skow, the philosophy professor with whom Solar-Lezama developed and is teaching the class, leans forward and takes notes.
A class that requires technical and philosophical knowledge
Ethics of Computing, offered for the first time in fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs blending computing with other disciplines.
The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline’s lens to examining the broader implications of today’s ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of the MIT Computer Science and Artificial Intelligence Laboratory, offers a perspective through his own.
Skow and Solar-Lezama attend each other’s lectures and adjust their follow-up sessions in response. Introducing this element of learning from one another in real time has made for more dynamic and responsive class conversations. Recitations that break down the week’s topic with doctoral students from philosophy or computer science, along with lively discussion, round out the course content.
“An outsider might think that this is going to be a class that makes sure that these new computer programmers being sent into the world by MIT always do the right thing,” Skow says. However, the class is intentionally designed to teach students a different skill set.
Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors because he knew they could do something deeper.
“Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There are no other classes at MIT that place both side by side,” Skow says.
That’s what attracted senior Alek Westover to enroll. The mathematics and computer science double major explains: “A lot of people are talking about what the trajectory of AI will look like in five years. I thought it was important to take a class that would help me think more about that.”
Westover says he is drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In his math classes, he learned to write down a problem and get immediate clarity on whether or not he had successfully solved it. In Ethics of Computing, however, he has learned how to make written arguments for “tricky philosophical questions” that may not have a single correct answer.
For example: “One problem we might worry about is, what happens if we build powerful AI agents that can do any job a human can do?” Westover asks. “If we’re interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?”
There is no easy answer, and Westover expects he will encounter many more such dilemmas in the workplace in the future.
“Is the internet destroying the world?”
The semester began with a deep dive into AI risk, or the notion of “whether AI poses an existential risk to humanity,” unpacking free will, how our brains make decisions under uncertainty, and debates over the long-term obligations and regulation of AI. A second, longer unit zoomed in on “the internet, the World Wide Web, and the social impact of technical decisions.” The end of the term examines privacy, bias, and freedom of speech.
One class session was provocatively devoted to the question: “Is the internet destroying the world?”
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these kinds of issues is exactly why the self-described “technology skeptic” enrolled in the course.
Growing up with a mother who is disabled and a younger sister with a developmental disability, Ogoe became the default family member whose role was to call providers for tech support or fix iPhones. She put her skills to use in a part-time job repairing cell phones, which paved the way to a deep interest in computing and a path to MIT. However, a prestigious summer fellowship in her first year had her questioning the ethics of how consumers were affected by the technology she was helping to program.
“Everything I’ve done with technology comes from the perspective of people, education, and personal connection,” says Ogoe. “This is a niche that I love. Taking humanities courses on public policy, technology, and culture is one of my big passions, but this is the first course I’ve taken that also involves a philosophy professor.”
The following week, Skow lectures on the role of bias in AI, and Ogoe, who is entering the job market next year but ultimately plans to attend law school to focus on regulating related issues, raises her hand four times to ask questions or share counterarguments.
Skow digs into COMPAS, a controversial piece of AI software that uses an algorithm to predict the likelihood that people accused of crimes will re-offend. According to a 2018 ProPublica article, COMPAS was likely to flag Black defendants as future criminals and gave false positives at twice the rate it did for white defendants.
The class session is devoted to determining whether the article justifies the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories of fairness:
“Substantive fairness is the idea that a particular outcome might be fair or unfair,” he explains. “Procedural fairness is about whether the procedure by which an outcome is produced is itself fair.” He then introduces a variety of conflicting criteria for fairness, and the class discusses which are plausible and what conclusions they warrant about the COMPAS system.
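To make one of those criteria concrete, here is a minimal sketch (with entirely made-up data, not material from the class) of error-rate balance, the criterion underlying the ProPublica analysis: a risk classifier is checked for whether it falsely flags members of one group as high-risk more often than members of another.

```python
# Error-rate balance, one of several conflicting fairness criteria:
# compare the false positive rate (FPR) of a risk classifier across
# groups. All data below is fabricated purely for illustration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("B", True,  False), ("B", False, False),
    ("B", False, False), ("B", True,  True),
]

false_pos = defaultdict(int)  # flagged high-risk but did not re-offend
negatives = defaultdict(int)  # everyone who did not re-offend

for group, predicted, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in sorted(negatives):
    fpr = false_pos[group] / negatives[group]
    print(f"group {group}: FPR = {fpr:.2f}")
# group A: FPR = 0.67  <- falsely flagged twice as often as group B
# group B: FPR = 0.33
```

A competing criterion is predictive parity, which asks that a high-risk flag mean the same thing for every group. A known impossibility result says that when groups re-offend at different base rates, no imperfect classifier can satisfy both criteria at once, which is why arguments about which criterion matters are substantive rather than merely rhetorical.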
Later, the two professors head upstairs to Solar-Lezama’s office to debrief on how the day’s exercise went.
“Who knows?” Solar-Lezama says. “Maybe five years from now, everybody will laugh at how worried people were about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond the media discourse, and getting to the bottom of thinking rigorously about these issues.”