After months of fierce debate among Linus Torvalds and other kernel maintainers, the Linux kernel project has published official two-page guidelines on AI-generated code in contributions. The policy stakes out a pragmatic middle ground: AI coding assistants are permitted, but humans bear absolute responsibility for every line, bug, and security flaw that reaches the codebase.
Key Takeaways
- Linux kernel requires human accountability for all AI-generated code, including bugs and security flaws.
- AI agents explicitly banned from adding Signed-off-by tags; only humans can certify Developer Certificate of Origin compliance.
- New Assisted-by attribution tag required: AGENT_NAME:MODEL_VERSION with optional tool names.
- All code must comply with GPL-2.0-only licensing and use appropriate SPDX identifiers.
- Guidelines treat AI as a tool like any other, not a banned technology or automatic solution.
How Linux AI-generated code policy shifts responsibility to humans
The core principle is unambiguous: humans who submit code generated by AI tools own every consequence. They must review all AI-generated code, ensure it complies with GPL-2.0-only licensing, add their Signed-off-by tag certifying the Developer Certificate of Origin, and take full legal and technical responsibility for the work. This approach differs sharply from organizations that use AI code without disclosure or accountability.
AI agents are explicitly forbidden from adding Signed-off-by tags; only humans can legally certify DCO compliance. This distinction matters because the Signed-off-by tag represents a legal assertion of authorship and accountability. By contrast, the new Assisted-by tag documents which AI tool contributed to the code, in the format AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2], where tool names such as coccinelle, sparse, smatch, or clang-tidy are optional.
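As a hypothetical illustration of that convention (the agent name, model version, and developer identity below are invented, not taken from the kernel documentation), a patch's trailer lines might look like:

```
Assisted-by: ExampleAgent:model-v1 [coccinelle] [sparse]
Signed-off-by: Jane Developer <jane@example.com>
```

The Assisted-by line records the tool for transparency, while the Signed-off-by line remains a human-only assertion of DCO compliance.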
Linus Torvalds saw no point in writing policy against "AI slop," arguing that developers who submit garbage AI code won't disclose it anyway. The guidelines target good actors willing to follow rules, not bad actors hiding their use of AI tools.
Why Linux maintainers are warming to AI tools
The shift in tone reflects a real change in AI tool quality. Greg Kroah-Hartman, a senior Linux kernel maintainer, noted that AI has suddenly become useful for developers and that the kernel team can't ignore a technology improving this rapidly. A month before the guidelines were published, Sashiko, an AI system used by Linux kernel maintainers for code review and security reports, began producing real, actionable reports instead of the low-quality output that had plagued earlier attempts.
This practical improvement shifted the conversation from blanket rejection to managed integration. The kernel team recognized that AI coding assistants like Copilot, when used responsibly and with full human review, can accelerate development without compromising code quality or legal compliance. The policy acknowledges this reality while erecting clear guardrails.
Licensing compliance and the GPL-2.0-only requirement
All AI-generated code contributions must comply with GPL-2.0-only licensing and include appropriate SPDX identifiers. This requirement protects the kernel from a specific risk: AI models trained on code from many sources may reproduce copyrighted material verbatim, creating legal liability. Unlike incidental resemblances between human-written programs, which may qualify as fair use, verbatim reproduction of copyrighted code in AI output is a foreseeable legal hazard for which organizations can be held liable.
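For reference, an SPDX identifier in a kernel C source file is a single machine-readable comment on the first line; the file name and description below are illustrative only, not from any real kernel file:

```
// SPDX-License-Identifier: GPL-2.0-only
/*
 * example_driver.c - hypothetical file header for illustration
 * The SPDX line above declares the file's license as GPL-2.0-only.
 */
```

Tooling can scan these identifiers across the tree, which is why the policy requires them on AI-assisted contributions as well.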
The Linux Foundation’s broader policy on generative AI in open source projects allows AI-generated code if the tool’s terms align with open source licenses and no unpermitted third-party copyrighted material is included. Linux kernel guidelines enforce this principle by requiring human review and GPL compliance—the submitter must ensure licensing is correct before adding their Signed-off-by tag.
What the policy doesn’t do
The guidelines are tool-agnostic. They do not endorse Copilot specifically or ban any particular AI assistant. The policy treats AI like any other tool—a compiler, a linter, or a code generator—and focuses on process and accountability, not on judging AI output quality. This pragmatism contrasts with earlier months of debate when some maintainers pushed for outright rejection of AI-assisted code.
Linux Foundation projects may enforce stricter rules, and contributors must also comply with their employer’s AI policies. The kernel guidelines set a floor, not a ceiling, for responsible AI use in open source.
Is AI-generated code allowed in Linux kernel contributions?
Yes, AI-generated code is allowed if the human submitter reviews it, ensures GPL-2.0-only compliance, adds the Assisted-by attribution tag, and signs off with their own Signed-off-by tag taking full responsibility. The code must also not infringe on third-party copyrights.
Can AI agents add Signed-off-by tags to Linux kernel patches?
No. AI agents are explicitly banned from adding Signed-off-by tags. Only humans can legally certify Developer Certificate of Origin compliance. This preserves the legal meaning of the Signed-off-by tag as a human assertion of authorship and accountability.
What is the Assisted-by tag in Linux AI-generated code policy?
The Assisted-by tag documents which AI tool contributed to the code, formatted as AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]. Tool names are optional and might include coccinelle, sparse, smatch, or clang-tidy. This tag provides transparency without burdening the submitter with excessive documentation.
The Linux kernel’s AI-generated code policy succeeds because it avoids two extremes: blind rejection of AI tools and uncritical adoption without oversight. Instead, it demands that humans remain in control, take responsibility, and disclose their use of AI. As AI tools improve—as Sashiko’s recent shift to useful security reports demonstrates—this framework will likely influence how other open source projects approach the same challenge. The precedent matters: Linux is setting the standard for responsible AI integration in critical infrastructure software.
This article was written with AI assistance and editorially reviewed.
Source: Tom's Hardware