In 2010, the FBI sent Wikipedia a letter that would be intimidating for any organization to receive.
The letter demanded that the free online encyclopedia remove the FBI logo from its entry on the agency, claiming that reproducing the emblem was illegal and punishable by fines, imprisonment, or both. Instead of backing down, a lawyer for the Wikimedia Foundation, which hosts Wikipedia, sharply rejected the demand, arguing that the FBI's interpretation of the relevant statute was incorrect and stating that Wikipedia was "prepared to argue our view in court." It worked: the FBI dropped the matter.
But that dispute presumed a society governed by the rule of law, in which a government agency would listen to legal arguments in good faith rather than simply steamroll them. Fast forward to today and the situation is very different. Elon Musk has branded the site "Wokepedia" and claimed it is controlled by far-left activists. Last fall, Tucker Carlson devoted an entire 90-minute podcast episode to attacking Wikipedia as "totally dishonest and completely controlled on important issues." And when Republican representatives James Comer and Nancy Mace accused Wikipedia of "information manipulation" as part of a congressional investigation, the foundation responded with a respectful explainer about how Wikipedia works, taking a conciliatory approach rather than objecting to government overreach. That pragmatic shift reflects a world in which the Trump administration picks winners and losers based on policy preferences.
The world's most celebrated free online encyclopedia turns 25 today, and it faces many challenges. Right-wing forces have attacked Wikipedia for its alleged liberal bias, with the conservative Heritage Foundation going so far as to propose to "identify and target" the site's volunteer editors. AI bots relentlessly scrape information from Wikipedia, straining the website's servers. These problems are compounded by the struggle to replenish the community of volunteers who sustain the project, a trend sometimes described as the "graying" of Wikipedia.
Beneath these threats lies a foreboding sense that the culture has drifted away from the ideals of Wikipedia's founders. Striving for neutrality, evaluating sources, volunteering for the public good, maintaining a noncommercial internet project: these notions seem old-fashioned at best and useless at worst in today's blatantly biased, lawless, "greed is good" phase of the internet.
Still, it's possible that Wikipedia's most influential days lie ahead, assuming it can weather the current moment.
Bernadette Meehan, the current CEO of the Wikimedia Foundation, whose résumé includes stints as a foreign service officer and ambassador, is well prepared to face these attacks, according to communications director Anusha Alikhan. "Diplomatic and negotiation skills are things that I think will work well in the current environment," she told WIRED. But even the best diplomat would struggle with the current list of challenges: Britain has proposed age-gating Wikipedia under its Online Safety Act. In Saudi Arabia, Wikipedia editors have been imprisoned after documenting human rights abuses in the country on the platform. And the Great Firewall still blocks every version of the site in mainland China.
Perhaps more telling is that even within the Wikipedia community, longtime contributors worry about its declining relevance. In a widely read essay, veteran editor Christopher Henner expressed concern that Wikipedia could increasingly become a "temple" filled with aging volunteers tending to work that no one looks at anymore.
In addition to ongoing censorship battles, Wikipedia is also trying to explain why human work still matters in the age of artificial intelligence. While almost every major AI system uses openly licensed Wikipedia content, the message from the technology industry since 2022 has been that AI has made human-driven knowledge creation irrelevant. Except it's not true. While we're still in the early days of the AI revolution, it appears for now that AI applications perform better when trained on human-written and human-reviewed information, that is, information produced by human-centric editorial processes like Wikipedia's. When an AI system trains recursively on its own synthetic, AI-generated data, it is likely to suffer from model collapse.
