On Saturday, tech entrepreneur Siqi Chen released an open source plug-in for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model.
Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plug-in on GitHub, where it has picked up more than 1,600 stars as of Monday.
“It’s really handy that Wikipedia went and collated a detailed list of ‘signs of AI writing,’” Chen wrote on X. “So much so that you can just tell your LLM to … not do that.”
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
Chen’s tool is a “skill file” for Claude Code, Anthropic’s terminal-based coding assistant. A skill is a Markdown-formatted file whose written instructions (you can see them here) get appended to the prompt fed into the large language model that powers the assistant. Unlike a plain system prompt, the skill information follows a standardized format that Claude models are fine-tuned to interpret with more precision. (Custom skills require a paid Claude subscription with code execution turned on.)
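For a rough sense of what that looks like, here is a minimal sketch of a skill file, assuming Anthropic's documented SKILL.md layout of a short YAML frontmatter followed by Markdown instructions. The name and the example rules are illustrative paraphrases of the Wikipedia-derived patterns, not Chen's actual file.

```markdown
---
name: humanizer
description: Rewrite prose to avoid common signs of AI-generated writing.
---

When writing or editing prose, avoid patterns that Wikipedia editors flag as
chatbot giveaways, for example:

- Stock framing such as "stands as a testament to" or "plays a vital role in"
- Rule-of-three constructions ("adjective, adjective, and adjective")
- Excessive bolding and bulleted lists where flowing prose would read better
```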
But as with any prompt, language models don't always follow skill files to the letter, so does Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it has some potential drawbacks: It won't improve factuality and might harm coding ability.
In particular, some of Humanizer's instructions might lead you astray, depending on the task. For example, the Humanizer skill includes this line: "Have opinions. Don't just report facts—react to them. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." Voicing uncertainty like that may sound human, but this kind of advice would probably not do you any favors if you were using Claude to write technical documentation.
Drawbacks aside, there's some irony in one of the web's most referenced rule sets for detecting AI-assisted writing now helping some people subvert it.
Spotting the Patterns
So what does AI writing look like? The Wikipedia guide is specific with many examples, but we’ll give you just one here for brevity’s sake.