Humans Only: Why Wikipedia Banned All AI-Generated Text From Its Articles

For 25 years, one website has held a quiet, extraordinary promise. Anyone, anywhere, could contribute to the sum of all human knowledge, as long as what they wrote was true, verifiable, and fair. Nearly 260,000 volunteer editors kept that promise alive, building a digital library of over 7.1 million English-language articles, one careful sentence at a time.

But something started to change. A new kind of contributor began showing up, one that never slept, never checked its facts, and never cared whether its words meant anything at all. And by the time editors noticed, the damage had already started spreading. What happened next could reshape how every platform on the internet thinks about artificial intelligence.

Wikipedia Just Drew a Line in the Sand

On March 20, 2026, volunteer editors of the English-language Wikipedia cast a formal vote. By a margin of 40 to 2, they approved a policy that left no room for interpretation: large language models like ChatGPT and Google Gemini can no longer generate or rewrite article content on Wikipedia. Period.

For a platform built on openness and community trust, a ban of any kind carries weight. Wikipedia has always operated on the belief that good-faith human editors, armed with reliable sources, could build something worth trusting. AI-generated text threatened that belief at its foundation, and the community decided to act before things got worse.

Months of internal debate preceded the vote. Editors wrestled with how far AI should reach into the editorial process. Some wanted room for AI-assisted summaries or drafts. Others saw any AI involvement as a slippery slope. In the end, the overwhelming majority chose to protect what made Wikipedia different from every other corner of the internet.

What Got Banned and What Survived

Every editor on Wikipedia now operates under a clear rule. You cannot use a large language model to write, rewrite, or generate article content. Two narrow exceptions remain.

First, editors can use AI to translate articles from other language Wikipedias into English, but only if the editor doing the translation is fluent in both languages and follows all of Wikipedia’s existing editorial policies. Second, editors may use AI to suggest basic copy edits on their own writing, such as fixing typos or adjusting formatting. Even then, a human reviewer must confirm that no new information crept in during the process.

Wikipedia’s policy warns editors to stay cautious even with permitted uses. AI tools tend to go beyond what you ask of them. A simple grammar fix can quietly alter the meaning of a sentence, leaving it unsupported by the source. In a system built on verifiability, that kind of drift can erode trust fast.

Why AI Writing Kept Breaking Wikipedia’s Core Rules

Wikipedia runs on a set of non-negotiable editorial standards. Every claim needs a verifiable source. Every article must present information with neutrality. Every citation must lead somewhere real.

AI-generated text broke these rules over and over again. Chatbots routinely produced hallucinations, a term for made-up facts dressed in confident language. Citations led to articles that didn’t exist. Links pointed nowhere. Sources were fabricated entirely. And because LLMs write with such fluid authority, spotting the fiction required editors to check every claim by hand.

Jimmy Wales, Wikipedia’s co-founder, did not mince words about the state of AI writing. Speaking to the BBC last October, he acknowledged that AI might someday help with certain parts of Wikipedia, but made his position clear for now. “I wouldn’t say absolutely never, but at least not in the short run,” Wales said. He added that current models are still, from a Wikipedian standpoint, nowhere near good enough. For a platform where accuracy is everything, not good enough is reason enough to shut the door.

An Editor in France Saw It Coming

Ilyas Lebleu edits Wikipedia under the username Chaotic Enby. Based in France, Lebleu is an AI research student and a founding member of WikiProject AI Cleanup, a volunteer group dedicated to hunting down and removing AI-generated content from the encyclopedia.

Lebleu also authored the original proposal that became the ban. Speaking to NPR last September, Lebleu described the moment the problem became impossible to ignore. “We started to notice a lot of articles which were written in a style that didn’t match the style we usually saw on Wikipedia,” Lebleu said.

At first, editors approached AI with cautious optimism. Maybe it could help. Maybe it could speed things up. But as AI-generated articles kept multiplying, the mood shifted. Optimism gave way to worry. Lebleu told 404 Media that the growing volume of AI content had become unmanageable, and the community reached a breaking point where waiting was no longer an option.

How Editors Learned to Spot the Bots

Before the formal ban, Wikipedia developed its own bot-detection guidelines to help editors identify AI-written content. Editors trained themselves to look for common tells that gave AI writing away.

Fake or inaccurate citations topped the list. AI models often generate references that look real but lead to nonexistent articles or broken links. Overused phrases and cliches were another red flag, along with unnecessarily wordy explanations and abrupt shifts in writing style within a single article.
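The guidelines themselves are written as prose for human reviewers, but the logic behind two of those tells is simple enough to sketch in code. Below is a minimal Python illustration, assuming a screener that checks whether cited URLs still resolve and whether common stock phrases appear; the phrase list and the checks are hypothetical examples for this article, not Wikipedia’s actual detection tooling.

```python
import re
import urllib.request

# Illustrative stock phrases often cited as AI "tells"; a hypothetical
# sample list, not Wikipedia's official guidance.
STOCK_PHRASES = [
    "in today's fast-paced world",
    "plays a vital role",
    "rich cultural heritage",
    "it is important to note",
    "stands as a testament",
]

def dead_citations(wikitext: str, timeout: float = 5.0) -> list[str]:
    """Return cited URLs that fail to resolve. A broken link is a hint
    worth investigating, not proof of AI authorship."""
    urls = re.findall(r"https?://[^\s|\]}<]+", wikitext)
    dead = []
    for url in urls:
        try:
            # HEAD request: we only care whether the page exists.
            req = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(req, timeout=timeout)
        except Exception:
            dead.append(url)
    return dead

def stock_phrase_hits(wikitext: str) -> list[str]:
    """Return any overused phrases found in the article text."""
    lower = wikitext.lower()
    return [p for p in STOCK_PHRASES if p in lower]

def looks_suspicious(wikitext: str) -> bool:
    # Any single signal only warrants a closer human look,
    # never an automatic sanction.
    return bool(dead_citations(wikitext) or stock_phrase_hits(wikitext))
```

Even a screener like this could only surface candidates for the peer review described next; stylistic signals on their own prove nothing.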

When an editor suspected AI involvement, the article would go through peer review. Other editors could challenge the content, revise it, or remove it entirely. But detection remained imperfect. Some human editors write in styles that resemble AI output, making enforcement tricky. Wikipedia’s new policy acknowledges this gap, noting that stylistic similarity alone is not enough to justify sanctions against an editor.

ChatGPT Is Now Bigger Than Wikipedia

While Wikipedia debated how to handle AI, AI was already pulling readers away. Survey data from UK-based market research firm GWI showed that ChatGPT overtook Wikipedia in monthly reach sometime in late 2024. Between Q4 2023 and Q4 2024, ChatGPT saw a 36 percent rise in users, while most other platforms barely moved.

GWI senior data journalist Chris Beer noted the speed of adoption, calling it unlike almost anything seen before in internet history. University students adopted ChatGPT at staggering rates, with 49 percent using the tool, a number approaching Amazon’s 53 percent usage among the same group.

Similarweb data backs up the trend. Wikipedia currently ranks eighth globally in web traffic. ChatGPT ranks sixth, pulling in over 4.5 billion monthly visits. Wikipedia’s human page views dropped 8 percent in late 2025 compared to the previous year.

Part of that decline predates ChatGPT. Search engines have spent years adding zero-click answers that keep users on the search page instead of sending them to Wikipedia. Google’s AI features, combined with ChatGPT’s rise, have only accelerated a trend that was already underway.

Still, the Wikimedia Foundation pushed back on the narrative of decline. A spokesperson told Futurism that Wikipedia’s pageviews remain around 15 billion per month, a range that has held steady since late 2020.

An Irony Too Big to Ignore

Here is where the story takes its most uncomfortable turn. Wikipedia, a free and open repository of human knowledge, almost certainly helped train the very AI models now competing with it for attention.

OpenAI and other AI companies built their large language models on massive datasets scraped from the internet. Wikipedia, with its millions of well-sourced, clearly written articles, was a goldmine. Now those same companies profit from tools that pull readers away from the source they studied.

Last year, the Wikimedia Foundation asked AI companies to stop scraping Wikipedia’s data directly and instead use its Enterprise API. Doing so would reduce the strain on Wikipedia’s servers while giving the nonprofit a sustainable path forward. Whether AI companies will comply remains an open question.

Where Jimmy Wales Sees AI Helping

Wales hasn’t closed the door on AI entirely. He has experimented with tools that analyze short Wikipedia entries against their cited sources, flagging missing information or unsupported claims. He found the results promising for editorial maintenance work.

The Wikimedia Foundation has also built a dedicated Machine Learning Team to develop AI tools that help editors rather than replace them. Wales sees potential in using AI to detect bias across Wikipedia’s entries. Data has shown that only 20 percent of Wikipedia biographies cover women, and those entries tend to be shorter and more focused on personal relationships than professional achievements. AI could scan the encyclopedia at scale and flag those imbalances for human editors to address. But drafting articles? Not yet. And maybe not for a long time.

A Warning Shot for Every Platform Online

Wikipedia’s ban may feel like one platform’s internal decision, but Lebleu believes the implications run much deeper. Lebleu predicted a ripple effect across the internet. “As anxiety over the AI bubble grows, I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome on their own terms,” Lebleu said.

Every online community now faces a version of the same question Wikipedia just answered. Do you let machines generate your content, knowing they fabricate facts and erode trust? Or do you draw a line and insist that human judgment still matters? Wikipedia chose humans. And in doing so, it reminded the rest of the internet what it might be losing.