Much AI-generated content has been used to express support or fandom for specific candidates. For example, an AI-generated video of Donald Trump and Elon Musk dancing to the Bee Gees’ song “Stayin’ Alive” was shared millions of times on social media, including by Senator Mike Lee, a Republican from Utah.
“It’s about social signaling. These are all reasons why people share these things. This is not artificial intelligence. We’re seeing the effects of a polarized electorate,” says Bruce Schneier, a public interest technologist and lecturer at the Harvard Kennedy School. “It’s not like we’ve had perfect elections in our history and now all of a sudden artificial intelligence shows up and it’s all disinformation.”
But make no mistake: misleading false information did spread during this election cycle. For example, in the days before the election in Bangladesh, deepfakes circulated online encouraging supporters of one of the country’s political parties to boycott the vote. Sam Gregory, program director of the nonprofit Witness, which helps people use technology to advance human rights and runs a rapid response program for civil society organizations and journalists, says his team has seen an increase in deepfake cases this year.
“In the context of multiple elections,” he says, “there were examples of both genuinely deceptive and misleading use of synthetic media in audio, video and image formats that stumped journalists or that they could not fully verify or dispute.” In his opinion, this shows that the tools and systems currently used to detect AI-generated media still cannot keep up with the pace at which the technology is developing. In places outside the United States and Western Europe, these detection tools are even less reliable.
“Fortunately, AI has not been used fraudulently in most elections on a large scale or in a crucial way, but it is very clear that there is a gap in detection tools and access to them for those who need it most,” says Gregory. “This is not the time for complacency.”
He argues that the very existence of synthetic media means that politicians can claim that real media is fake, a phenomenon known as the “liar’s dividend.” In August, Donald Trump claimed that images showing large crowds of people arriving at rallies for Vice President Kamala Harris were generated by artificial intelligence. (They weren’t.) Gregory says Witness’s analysis of all the reports submitted to its deepfake rapid response force shows that in about a third of the cases, politicians used AI to deny evidence of a real event, many of them involving leaked conversations.