Wednesday, December 25, 2024

OpenAI Shuts Down Election Influence Operation That Used ChatGPT

OpenAI banned a group of ChatGPT accounts linked to an Iranian influence operation that was generating content related to the US presidential election, the company said in a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it does not appear to have reached a meaningful audience.

This isn’t the first time OpenAI has banned accounts linked to state-affiliated actors maliciously using ChatGPT. In May, the company disrupted five campaigns that were using ChatGPT to manipulate public opinion.

These episodes are reminiscent of state actors using social media platforms like Facebook and Twitter to try to influence previous election cycles. Now, similar groups (or perhaps the same ones) are using generative AI to flood social media feeds with disinformation. Like social media companies, OpenAI appears to be taking a whack-a-mole approach, banning accounts associated with these activities as they arise.

OpenAI says its investigation into this group of accounts benefited from a Microsoft Threat Intelligence report published last week that identified the group (dubbed Storm-2035) as part of a broader campaign to influence the U.S. election that has been ongoing since 2020.

Microsoft said Storm-2035 is an Iranian network with multiple sites imitating news outlets and “actively engaging American voter groups on opposite ends of the political spectrum with polarizing messages on topics such as U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as has been shown in other operations, is not necessarily about promoting one policy or another, but about sowing dissent and conflict.

OpenAI identified five front sites for Storm-2035, which presented themselves as both progressive and conservative news outlets with convincing domain names like “evenpolitics.com.” The group used ChatGPT to write several long-form articles, including one alleging that “X is censoring Trump’s tweets,” something Elon Musk’s platform certainly hasn’t done (if anything, Musk is encouraging former President Donald Trump to get more involved with X).

Example of a phony news site that publishes content generated by ChatGPT.
Image credit: OpenAI

On social media, OpenAI identified more than a dozen X accounts and one Instagram account controlled by the operation. The company says ChatGPT was used to produce political commentary that was then posted on those platforms. One of those tweets falsely claimed that Kamala Harris attributed the “rising cost of immigration” to climate change, followed by “#DumpKamala.”

OpenAI says it has seen no evidence that Storm-2035’s articles were widely shared, and noted that most of its social media posts received few to no likes, shares, or comments. This is often the case for these operations, which can be quickly and cheaply launched using AI tools like ChatGPT. Expect to see many more such notifications as the election approaches and partisan bickering intensifies online.
