Current Threats to Wikipedia Feel Bigger Than Ever

In 2010, the FBI sent Wikipedia a letter that would have rattled almost any organization on the receiving end.
The agency demanded that the free online encyclopedia remove the FBI seal from its article about the bureau, claiming that reproducing the seal was illegal and punishable by fines, imprisonment, “or both.” Instead of backing down, a lawyer for the Wikimedia Foundation, which operates Wikipedia, responded with a firm refusal, explaining that the FBI’s interpretation of the relevant law was incorrect and that Wikipedia “is ready to challenge our opinion in court.” It worked: the FBI dropped the matter.
But that standoff played out in a society governed by the rule of law, where a federal agency would engage with a legal argument in good faith rather than simply force the issue. Fast forward to today, and things are very different. Elon Musk has called the site “Wokepedia” and accused it of being controlled by far-left activists. Last fall, Tucker Carlson devoted an entire 90-minute podcast to deriding Wikipedia as “completely unreliable and completely controlled on important questions.” And after Republican members of Congress James Comer and Nancy Mace accused Wikipedia of “information fraud” in a congressional investigation, the foundation responded with a respectful explanation of how Wikipedia works, taking a conciliatory rather than confrontational approach to government pressure. The pragmatic shift reflects a world in which the Trump administration picks winners and losers based on political preference.
As the world’s most popular online encyclopedia turns 25 today, it faces challenges on many fronts. Right-wing figures have attacked Wikipedia for alleged bias, and the conservative Heritage Foundation has even said it would “identify and target” the site’s volunteer editors. AI bots have scraped Wikipedia’s content at a scale that overwhelms the site’s servers. Compounding these issues is the struggle to replenish the project’s volunteer ranks, the so-called graying of Wikipedia.
Beneath these threats lies an unsettling sense that the culture has drifted away from Wikipedia’s founding ideals. Striving for neutrality, checking sources, volunteering for the good of a community, sustaining an internet project that can never be sold: these ideals can seem quaint, even useless, in today’s lawless, antisocial, “greed is good” phase of the internet.
Still, there’s a chance that Wikipedia’s most influential days lie ahead, provided it can rise to meet the moment.
Bernadette Meehan, the Wikimedia Foundation’s new CEO, whose résumé includes posts as a foreign service officer and an ambassador, is ready to face the onslaught, according to communications officer Anusha Alikhan. “Diplomacy and negotiation skills are things that I think will lend themselves well to this environment,” she told WIRED. But even the best strategist would be tested by the current landscape: The UK has moved to regulate Wikipedia under its Online Safety Act. In Saudi Arabia, Wikipedia editors have been jailed for documenting human rights abuses on the platform. And China’s Great Firewall continues to block every language version of the site.
Most troubling, even within the Wikipedia community, longtime contributors worry about the project’s diminishing relevance. In a widely shared essay, veteran editor Christopher Henner said he feared Wikipedia could become a “temple” tended by aging volunteers, content to maintain work that no one cares about anymore.
Aside from these ongoing oversight battles, Wikipedia is also struggling to make the case that human work still matters in the age of artificial intelligence. Although nearly every major AI model trains on Wikipedia’s freely licensed content, the message from the tech industry since 2022 has been that human-powered knowledge production has been rendered obsolete by AI. If anything, the opposite is true. While we’re still in the early days of the AI revolution, it currently appears that AI systems perform best when trained on human-authored and human-reviewed information, exactly the kind produced by human-centered editing processes like Wikipedia’s. An AI system trained repeatedly on its own AI-generated synthetic data can suffer from what researchers call model collapse.
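To see why, consider a minimal sketch of the dynamic. Everything below is a toy illustration under assumed conditions, not anything Wikipedia or an AI lab actually runs: a one-dimensional Gaussian stands in for a trained model, and small samples stand in for training data. Each “generation” fits the model to data sampled from the previous generation’s model, and the diversity of the original human-authored data steadily drains away.

```python
# Toy sketch of model collapse: fit a simple model (a Gaussian) to a
# small sample, draw the next generation's "training data" from that
# fit, and repeat. Finite sampling nudges the fitted spread downward
# each round, so the variety in the original data gradually vanishes.
import numpy as np

rng = np.random.default_rng(seed=0)
SAMPLE_SIZE = 50    # small samples make the effect visible quickly
GENERATIONS = 500

# Generation 0: stands in for diverse, human-authored data.
data = rng.normal(loc=0.0, scale=1.0, size=SAMPLE_SIZE)

for gen in range(GENERATIONS + 1):
    mu, sigma = data.mean(), data.std()
    if gen % 100 == 0:
        print(f"generation {gen:3d}: mean={mu:+.4f}  std={sigma:.4f}")
    # Each new generation trains only on the previous model's output.
    data = rng.normal(loc=mu, scale=sigma, size=SAMPLE_SIZE)
```

Run it and the printed standard deviation shrinks from about 1.0 toward nearly zero. Published experiments on language models report the same pattern: rare facts and unusual phrasings, the tails of the distribution, disappear first, and that long tail is precisely the kind of knowledge Wikipedia’s volunteers are good at preserving.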


