Artificial intelligence is often considered a threat to democracy and a boon to dictators. In 2025, algorithms will likely continue to undermine democratic discussion by spreading outrage, fake news and conspiracy theories. Algorithms will also continue to accelerate the creation of total surveillance systems, in which the entire population is watched 24 hours a day.
Most importantly, AI makes it easier to concentrate all information and power in a single hub. In the 20th century, distributed information networks like the United States outperformed centralized information networks like the USSR because the human apparatchiks at the center simply couldn't analyze all the information effectively. Replacing apparatchiks with artificial intelligence could make Soviet-style centralized networks superior.
Nevertheless, artificial intelligence is not all good news for dictators. First, there is the well-known problem of control. Dictatorial control rests on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a “war” or mentions war crimes committed by Russian troops, how can the regime punish that chatbot? The government could block it and try to punish its human creators, but that is much harder than disciplining human users. Moreover, even regime-approved bots may develop dissenting views on their own simply by detecting patterns in the Russian information sphere. This is the alignment problem, Russian style. Russian engineers may go to great lengths to create an AI that is completely loyal to the regime, but given AI's ability to learn and change on its own, how can they ensure that an AI that earned the regime's approval in 2024 won't venture into forbidden territory in 2025?
The Russian constitution makes grand promises: “everyone is guaranteed freedom of thought and speech” (Article 29(1)), and “censorship is prohibited” (Article 29(5)). Few Russian citizens are naive enough to take these promises seriously. But bots don't understand doublespeak. A chatbot instructed to follow Russian law and values could read this constitution, conclude that freedom of speech is a core Russian value, and criticize Putin's regime for violating that value. How could Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, the chatbot should not actually believe the constitution, nor should it ever mention the gap between theory and reality?
In the longer term, authoritarian regimes will likely face an even greater danger: instead of criticizing them, artificial intelligence may take control of them. Throughout history, the greatest threat to autocrats has usually come from their own subordinates. No Roman emperor or Soviet premier was overthrown by a democratic revolution, but each was always in danger of being overthrown or turned into a puppet by his own subordinates. A dictator who grants AI too much power in 2025 may become its puppet in the years ahead.
Dictatorships are far more susceptible to such algorithmic takeovers than democracies. Even a super-Machiavellian AI would have a hard time amassing power in a decentralized democratic system like the United States. Even if it learned to manipulate the US president, it would face opposition from Congress, the Supreme Court, state governors, the media, major corporations and various non-governmental organizations. How would an algorithm deal with, say, a Senate filibuster? Seizing power in a highly centralized system is much easier. To hack an authoritarian network, the AI needs to manipulate only a single paranoid individual.