Hey folks, I got a lot of feedback from various meetings on the proposed LLVM AI contribution policy, and I've made some significant changes based on it. The current draft focuses on requiring a human in the loop who understands their contribution well enough to answer questions about it during review. The idea is that contributors may not offload the work of validating LLM tool output onto maintainers. I've mostly removed the Fedora policy language, in an effort to move from the vague notion of "owning the contribution" to a more explicit requirement that contributors review their contributions and be prepared to answer questions about them. Contributors should never find themselves in the position of saying "I don't know, an LLM did it". I felt the change was significant enough to deserve a new thread.
From an informal show of hands at the round table at the US LLVM developer meeting, most contributors (or at least the subset with the resources and interest to attend in person) are interested in using LLM assistance to increase their productivity. I really do want to enable them to do so, while also making sure maintainers have a useful policy tool for pushing back against unwanted contributions.
I’ve updated the PR, and I’ve pasted the markdown below as well, but you can also view it on GitHub.
LLVM AI Tool Use Policy
Policy
LLVM’s policy is that contributors can use whatever tools they would like to
craft their contributions, but there must be a human in the loop.
Contributors must read and review all LLM-generated code or text before they
ask other project members to review it. The contributor is always the author
and is fully accountable for their contributions. Contributors should be