Google announced on Tuesday that it is revising the principles governing how it uses artificial intelligence and other advanced technologies. The company has removed language promising that it would not pursue "technologies that cause or are likely to cause overall harm," "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," "technologies that gather or use information for surveillance violating internationally accepted norms," and "technologies whose purpose contravenes widely accepted principles of international law and human rights."
The changes were disclosed in a note appended to the top of a 2018 blog post laying out the guidelines. "We've made updates to our AI Principles. Visit AI.google for the latest," the note reads.
In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of artificial intelligence, evolving standards, and geopolitical battles over AI as the "backdrop" for why Google's principles needed an overhaul.
Google first published the principles in 2018, as it moved to quell internal protests over the company's decision to work on a US military drone program. In response, it declined to renew the government contract and announced a set of principles to guide future uses of advanced technologies such as artificial intelligence. Among other things, the principles stated that Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But with Tuesday's announcement, Google has walked back those commitments. The new webpage no longer lists a set of prohibited uses for Google's AI initiatives. Instead, the revised document gives Google more room to pursue potentially sensitive use cases. It states that Google will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." Google also says it will work to "mitigate unintended or harmful outcomes."
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company's AI research lab. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
They added that Google will continue to focus on AI projects "that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights."
Got a Tip?
Are you a current or former employee at Google? We'd like to hear from you. Using a nonwork phone or computer, contact Paresh Dave on Signal/WhatsApp/Telegram at +1-415-565-1302 or paresh_dave@wired.com, or Caroline Haskins on Signal at +1 785-813-1084 or via email at carolinehaskins@gmail.com.
The return of US President Donald Trump to the White House last month has spurred many companies to revise policies promoting diversity and other liberal ideals. Google spokesperson Alex Krasov says the changes to the AI principles have been in the works for much longer.
Google now lists its goals as pursuing bold, responsible, and collaborative AI initiatives. Phrases such as "be socially beneficial" and maintaining "scientific excellence" are gone. Added is a mention of "respecting intellectual property rights."