Friday, March 6, 2026

AI-powered disinformation swarms are coming for democracy

“We are entering a new phase of information warfare on social media platforms, where technological progress has made the classic approach to bots obsolete,” says Jonas Kunst, professor of communication at BI Norwegian Business School and one of the report’s co-authors.

For experts who have spent years tracking and combating disinformation campaigns, the document paints a terrifying future.

“What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to feign grassroots support where none exists? That’s the future this paper imagines – Russian troll farms on steroids,” says Nina Jankowicz, former disinformation chief in the Biden administration and now CEO of the American Sunlight Project.

Researchers say it is unclear whether this tactic is already being used because current systems for tracking and identifying coordinated inauthentic behavior are unable to detect it.

“Due to their elusive, human-like characteristics, it is very difficult to detect them and assess the extent to which they are present,” Kunst says. “We don’t have access to most [social media] platforms as they become more and more restrictive, so it is difficult to gain insight into these issues. Technically speaking, it’s definitely possible. We’re pretty sure it’s being tested.”

Kunst added that these systems will likely still be subject to some human oversight as they are developed, and predicts that while they may not have a huge impact on the November 2026 U.S. midterm elections, they will most likely be used to disrupt the 2028 presidential election.

Accounts that are indistinguishable from people on social media platforms are just one problem. The researchers further say that the ability to map social networks at scale will enable those coordinating disinformation campaigns to target agents within specific communities, ensuring the greatest impact.

“Equipped with such capabilities, swarms can position themselves for maximum impact and tailor messages to each community’s beliefs and cultural cues, enabling more precise targeting than previous botnets,” they write.

Such systems could essentially self-improve, using replies to posts as feedback to improve reasoning and better communicate messages. “With sufficient signals, they can run millions of micro A/B tests, propagate winning variants at machine speed, and iterate much faster than humans,” the researchers write.

To combat the threat posed by AI swarms, researchers suggest creating an “AI Impact Observatory” that would include people from academic groups and non-governmental organizations working to “standardize evidence, improve situational awareness, and enable faster collective response, rather than imposing top-down reputational penalties.”

One group not included is the executives of the social media platforms themselves, largely because the researchers believe their companies prize engagement above all else and therefore have little incentive to identify such swarms.

“Let’s say AI swarms become so frequent that no one can be trusted and people leave the platform,” Kunst says. “Of course, that threatens the business model. But if the swarms simply increase engagement, it is better for the platform not to disclose them: engagement appears higher, more ads are viewed, and that positively affects the company’s valuation.”

Beyond the lack of action by platforms, experts believe there is little incentive for governments to get involved. “The current geopolitical landscape may not be friendly to ‘Observatories,’ which essentially monitor online discussions,” says Olejnik. Jankowicz agrees: “The scariest thing about this future is that there is very little political will to address the damage caused by AI, which means [AI swarms] may soon become a reality.”
