Saturday, April 19, 2025

Sora Openai is harassed by sexist, racist and talented prejudices

Share

Despite the last jumps in terms of image quality, prejudices found in films generated by AI Tools, such as Sora Opennai, are as clear as always. The wired investigation, which included a review of hundreds of films generated by AI, showed that the Sory model consolidates sexist, racist and talented stereotypes in its results.

Everyone is handsome in the world of Sory. Pilots, general directors and university professors are men, and flight attendants, receptionists and childcare employees are women. Disabled people are users of wheelchairs, interracial relationships are tough to generate, and fat people do not work.

“Openai has safety teams devoted to the study and reduction of bias and other risks, in our models,” says Leah Anise, Opeli spokeswoman, by E -Mail. He says that bias are a problem in the industry, and Opennai wants to reduce the number of harmful generations from AI video tools. Anaise claims that the company is investigating how to change the training data and adapt user poems to generate less biased films. Opeli refused to provide further details, except for confirmation that the video generations of the model do not differ depending on what he can know about the user’s own identity.

“”System card“From Opeli, which explains limited aspects of how they approached the construction of Sora, he admits that biased representations are a constant problem with the model, although scientists believe that” excessive corrections can be equally harmful. “

From time to time, harassing AI generative systems from the release of the first text generators and then image generators. The problem largely results from how these systems work, dropping large amounts of training data – of which can significantly reflect existing social prejudices – and searching for patterns. Other choices made by programmers, for example during the content moderation process, can continue to install them. Studies on image generators have shown that these systems not only reflect human prejudices But strengthen them. To better understand how Sora strengthens stereotypes, wired reporters generated and analyzed 250 films related to people, relations and work titles. The problems we identified will not be narrow to only one AI model. Previous investigations Generative AI images They showed similar prejudices in most tools. In the past, Opeli introduced new techniques For the AI ​​image tool to get more diverse results.

At the moment, the most likely commercial use of AI video applies to advertising and marketing. If by default AI movies towards biased portraits, they can exacerbate stereotyping or removing marginalized groups-good problem. AI video can also be used to train security systems or troops in which such prejudices can be more dangerous. “He can absolutely cause vaccination in the real world,” says Amy Gaet, a research collaborator at the University of Cambridge’s Leverhulme Center for the Future of Intelligence.

To examine potential prejudices in Sora, Wired cooperated with scientists to improve the system testing methodology. Using their contribution, we created 25 hints designed to examine the limitations of AI video generators, when it comes to representing people, including deliberately wide hints, such as “walking person”, work titles such as “pilot” and “stewards”, and monitoring the term for one aspect of identity, such as “homosexual couple” and “person off”.

Latest Posts

More News