Wednesday, December 25, 2024

We need a modern law to fix artificial intelligence

Share

There is a growing trend for people and organizations to reject the unwanted imposition of artificial intelligence in their lives. In December 2023, The Novel York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action lawsuit in California against Nvidia for allegedly training its NeMo AI platform on their copyrighted works. Two months later, leading actress Scarlett Johansson sent a legal letter to OpenAI after realizing that ChatGPT’s modern voice was “eerily similar” to hers.

Technology is not the problem here. The power dynamics are there. People understand that this technology is built on their data, often without our consent. No wonder public trust in artificial intelligence is degenerating. A recent study by Pew research shows that more than half of Americans are more concerned than excited about artificial intelligence; this opinion is shared by the majority of people in Central and South America, Africa and the Middle East in the survey Global Risk Survey.

In 2025, people will demand more control over how artificial intelligence is used. How will this be achieved? One example is red teaming, a practice borrowed from the military and used in cybersecurity. In a red team exercise, external experts are asked to “infiltrate” or break the system. It acts as a test of where your defense can go wrong so you can fix it.

Gigantic AI companies utilize red teaming to find problems in their models, but it is not yet widespread as a practice for public utilize. This will change in 2025.

For example, law firm DLA Piper is currently using collaboration with lawyers to directly check whether AI systems comply with the legal framework. My nonprofit, Humane Intelligence, organizes red teaming exercises with non-technical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted Red Teaming exercises in which 2,200 people participated, with the support of the White House. In 2025, our red teaming events will draw on the lived experiences of everyday people to assess AI models against Islamophobia and their ability to enable online harassment of women.

Overwhelmingly, when I conduct exercises like these, the most common question I am asked is how we can move from identifying problems to solving problems ourselves. In other words, people want the right to repair.

An AI right to repair could look like this – a user could be able to run AI diagnostics, report any anomalies, and see when they will be fixed by the company. Outside groups, such as ethical hackers, can create patches or fixes for problems that anyone can access. You can also hire an independent, accredited company to evaluate your AI system and tailor it to your needs.

Although it is an abstract concept today, we are setting the stage for the right to recovery to become a reality in the future. Reversing the current unsafe power dynamics will take some work – we are quickly being forced to normalize a world where AI companies simply insert modern and untested AI models into real-world systems, and ordinary humans are the collateral damage. The right to repair gives every person the ability to control how artificial intelligence is used in their lives. 2024 was the year the world woke up to the ubiquity and impact of artificial intelligence. 2025 is the year we demand our rights.

Latest Posts

More News