Saturday, March 7, 2026

A Wikipedia group created a guide to detecting AI writing. Now a plugin uses it to “humanize” chatbots


On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model.

The plugin, called Humanizer, feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have flagged as chatbot tells. Chen published the plugin on GitHub, where it had gained more than 1,600 stars as of Monday.

“It’s really useful that Wikipedia has compiled a detailed list of ‘signs of AI writing,’” Chen wrote on X. “So much so that you can just tell your LLM to… not do it.”

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting down AI-generated articles since late 2023. The project was founded by French Wikipedia editor Ilyas Lebleu. Volunteers have flagged over 500 articles for review, and in August 2025 they published a formal list of the patterns they kept noticing.

Chen’s tool is a “skill file” for Claude Code, Anthropic’s terminal-based coding assistant. A skill is a Markdown file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model powering the assistant. Unlike a normal system prompt, skill instructions are formatted in a standardized way, and Claude models are fine-tuned to interpret them with greater precision than a regular system prompt. (Custom skills require a paid Claude subscription with code execution enabled.)
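To make the mechanism concrete, here is a minimal sketch of what such a skill file can look like, assuming the standard Claude Code skill layout (a `SKILL.md` file with YAML frontmatter followed by plain Markdown instructions). The name, description, and rules below are illustrative, not Chen’s actual file, which lives in his GitHub repo:

```markdown
---
name: humanizer
description: Rewrite prose to avoid common signs of AI writing.
---

# Humanizer

When writing or editing prose, avoid patterns that Wikipedia
editors have flagged as signs of AI writing. For example:

- Do not inflate importance with phrases like "marks a pivotal
  moment" or "stands as a testament."
- Replace promotional adjectives ("breathtaking," "nestled")
  with plain description.
- State facts directly instead of appending "-ing" clauses that
  editorialize ("...symbolizing the region's commitment to
  innovation").
```

When the skill is active, Claude Code injects these instructions into the model’s context alongside the user’s request, so every drafting and editing task inherits the style rules.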

But as with all AI prompts, language models don’t always follow skill files perfectly, so does Humanizer actually work? In our limited testing, Chen’s skill file made the AI agent’s messages sound less precise and more casual, but this may come with drawbacks: it didn’t improve factual accuracy and may have hurt coding ability.

In particular, some of the Humanizer instructions may lead the model astray, depending on the task. For example, the Humanizer skill includes the following line: “Have opinions. Don’t just report facts – react to them. ‘I really don’t know what to think’ is more human than neutrally listing pros and cons.” While being imperfect seems human, this kind of advice probably wouldn’t do you any good if you were using Claude to write technical documentation.

Despite its flaws, it’s ironic that one of the Internet’s most frequently referenced rule sets for detecting AI-assisted writing may now help some people evade that very detection.

Spotting patterns

So what does AI writing look like? The Wikipedia guide is detailed and includes many examples, but for brevity we will only touch on a few here.

According to the guide, some chatbots love to puff up importance with phrases like “marks a pivotal moment” or “stands as a testament.” They write like tourist brochures, calling views “breathtaking” and describing cities as “nestled” in scenic regions. They tack “-ing” phrases onto the ends of sentences to sound analytical: “symbolizing the region’s commitment to innovation.”

To counter these patterns, the Humanizer skill tells Claude to replace inflated language with plain facts, and it offers sample transformations like this one:

Before: “The Statistical Institute of Catalonia was officially created in 1989, representing a key moment in the evolution of regional statistics in Spain.”

After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.”

Claude will read this and, as a pattern-matching machine, do its best to produce output that fits the context of the conversation or task at hand.

Why AI writing detection fails

Even with a definitive set of rules like the one the Wikipedia editors developed, we have previously written about why AI writing detectors don’t work reliably: there is nothing inherently unique about human writing that reliably distinguishes it from LLM output.

One reason is that while most AI language models tend to gravitate toward certain kinds of language, they can also be steered away from them, as the Humanizer skill does. (Although sometimes this is very difficult, as OpenAI discovered in its long struggle with the em dash.)
