The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members, and that introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”
The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely significant because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”
“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.
The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher says.
“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”
Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident in which one of Google’s models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, an extremely unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.
A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more often shown right-leaning perspectives on the platform.
Since January, Musk’s so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration’s aims. Some government departments, such as the Department of Education, have archived and deleted documents that mention DEI. In recent weeks, DOGE has also targeted NIST, the parent organization of AISI. Dozens of employees have been dismissed.