ZDNET's key takeaways
Torvalds and the Linux maintainers are taking a pragmatic approach to using AI in the kernel.
AI or no AI, it's people, not LLMs, who are responsible for Linux's code.
If you try to mess around with Linux code using AI, bad things will happen.
After months of heated debate, Linus Torvalds and the Linux kernel maintainers have officially codified the project's first formal policy on AI-assisted code contributions. The new policy reflects Torvalds' pragmatic approach, balancing the embrace of modern AI development tools with the kernel's rigorous quality standards.
The new guidelines establish three core principles:
AI agents cannot add Signed-off-by tags: Only humans can legally certify the Linux kernel's Developer Certificate of Origin (DCO), the legal mechanism that ensures code licensing compliance. In other words, even if you turn in a patch written entirely by AI, you, and not the AI or its creator, are solely responsible for the contribution.

Mandatory Assisted-by attribution: Any contribution made with AI tools must include an Assisted-by tag identifying the model, agent, and auxiliary tools used. For example: "Assisted-by: Claude:claude-3-opus coccinelle sparse."

Full human liability: Put it all together, and you, the human submitter, bear full responsibility for reviewing the AI-generated code, ensuring license compliance, and answering for any bugs or security flaws that arise. Do not try to sneak bad code into the kernel, as a pair of University of Minnesota students tried back in 2021, or you can kiss your chances of ever becoming a Linux kernel developer, or a programmer in any other respectable open-source project, goodbye.
The Assisted-by tag serves as both a transparency mechanism and a review flag. It enables maintainers to give AI-assisted patches the extra scrutiny they may require without stigmatizing the practice itself.