Monday, December 23, 2024

Human misuse will make AI more perilous


OpenAI CEO Sam Altman expects AGI, or artificial general intelligence – AI that outperforms humans at most tasks – around 2027 or 2028. Elon Musk’s forecast is 2025 or 2026, and he has claimed he was “losing sleep over the threat of artificial intelligence.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building larger and more powerful chatbots will not lead to AGI.

However, in 2025, AI will still pose enormous risks: not from artificial superintelligence, but from human misuse.

These could be unintentional misuses, such as lawyers’ over-reliance on artificial intelligence. After the release of ChatGPT, for example, a number of lawyers were sanctioned for using AI to generate erroneous court briefs, apparently unaware of chatbots’ tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel’s fees after including fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabil was suspended for a year for using fictitious court cases generated with ChatGPT and blaming a “legal intern” for the errors. The list is growing rapidly.

Other misuses are intentional. In January 2024, social media platforms were flooded with sexually explicit deepfakes of Taylor Swift. The images were created using Microsoft’s “Designer” AI tool. Although the company had guardrails against generating images of real people, misspelling Swift’s name was enough to bypass them. Microsoft has since fixed this flaw. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are spreading widely – in part because open-source tools for creating them are publicly available. Legislation around the world aims to combat deepfakes in hopes of limiting the damage. Whether it will be effective remains to be seen.

In 2025, it will become even harder to distinguish what is real from what is fabricated. The fidelity of AI-generated audio, text and images is extraordinary, and video will be next. This can lead to a “liar’s dividend”: those in power dismiss evidence of their misconduct by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk might have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla’s Autopilot, which led to an accident. An Indian politician claimed that audio recordings of him admitting to corruption in his political party were fabricated (the audio in at least one of the clips was verified as authentic by the press). Two defendants involved in the January 6 riots claimed that the videos they appeared in were deepfakes. Both were found guilty.

Meanwhile, companies are taking advantage of public confusion to sell fundamentally questionable products by labeling them “artificial intelligence.” This can end badly when such tools are used to classify people and make consequential decisions about them. The company Retorio, for example, claims that its AI predicts job candidates’ suitability from video interviews, but a study found that the system can be fooled simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in healthcare, education, finance, criminal justice and insurance where AI is already being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify child benefit fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.

We expect that in 2025, AI risks will arise not from what AI does on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); where it works well and is misused (non-consensual deepfakes and the liar’s dividend); and where it is simply not fit for purpose (denying people their rights). Mitigating these risks is an enormous task for companies, governments and society. It will be hard enough without the distraction of sci-fi worries.
