Zig's anti-AI policy turned Bun's speedup into a fork-only gain
Open source projects are starting to make an ugly trade visible: take help from AI coding tools and risk drowning maintainers, or block that help and leave useful code sitting in forks. Zig, the programming language Bun is largely written in, is now one of the clearest examples. A public Bun fork claims major compile-time gains, but Bun says it does not plan to upstream the work because Zig bans contributions authored by large language models, or LLMs.
What makes this more than another open source policy fight is Zig's reason for the ban. In a recent post, Loris Cro, a core Zig contributor, argued that maintainers are not just reviewing patches. They are deciding which contributors are worth spending more review time on later. Cro calls that process "contributor poker." In that frame, AI-written patches are not just a code-quality problem. They are a trust problem for projects that survive on a small group of humans learning who they can rely on.
The policy itself is blunt. Zig's code of conduct says: "No LLMs for issues. No LLMs for pull requests. No LLMs for comments on the bug tracker, including translation." Cro's follow-up explains why the project landed there. He wrote that incoming pull requests already exceed the core team's processing capacity, and that many AI-assisted contributions arrived as noise: hallucinated fixes, oversized first-time submissions, and extra review burden for maintainers who were already overloaded.
That argument would be easier to dismiss if there were no visible cost attached. But Simon Willison reported that Bun's Zig fork recently delivered a 4x speedup for compiling Bun. The fork adds parallel semantic analysis, which lets the compiler analyze different parts of a program at the same time, and multiple code generation units in Zig's LLVM backend, the layer that turns Zig's internal representation into machine code. The same GitHub compare view shows a smaller benchmark in which parallel semantic analysis cut one compile path from about 18 seconds to about 2.2 seconds at 16-way parallelism.
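Those benchmark numbers imply the workload is almost entirely parallelizable. As a rough back-of-the-envelope illustration (not Bun's or Zig's actual analysis), Amdahl's law relates the speedup on n workers to the fraction p of work that parallelizes, and we can invert it to see what parallel fraction the reported 18s-to-2.2s result would imply:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n workers when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Reported benchmark: ~18 s down to ~2.2 s at 16-way parallelism.
observed = 18.0 / 2.2  # roughly 8.2x

# Invert Amdahl's law for p at n = 16:
#   S = 1 / ((1 - p) + p/n)  =>  p = (1 - 1/S) / (1 - 1/n)
n = 16
p = (1.0 - 1.0 / observed) / (1.0 - 1.0 / n)
print(f"observed ~{observed:.1f}x speedup implies ~{p:.0%} of the work parallelized")
# prints: observed ~8.2x speedup implies ~94% of the work parallelized
```

In other words, under this idealized model, roughly 94 percent of that compile path would have to run in parallel to match the reported numbers, which is consistent with semantic analysis dominating the path.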
Willison also quoted Bun saying it does not plan to upstream that work because Zig has a strict ban on LLM-authored contributions. That statement is still secondhand through Willison rather than a direct Bun engineering post about the fork. But the tension itself is direct and hard to miss. The code exists. The policy exists. Cro's reasoning exists. Together they show what Zig is optimizing for: not maximum code intake, but a contribution process its maintainers think they can still govern.
That makes contributor poker the useful idea here. Most arguments about AI coding focus on whether generated code is good or bad. Zig is making a different claim. For a maintainer-heavy project, the scarce resource is not raw code output. It is reviewer attention and long-term confidence in the humans on the other end of a pull request. If that is the bottleneck, then even a good patch can be too expensive when it arrives through a channel the maintainers think destroys signal.
The counterargument is straightforward. A blanket ban can trap useful work in private forks and punish contributors who use AI as a drafting tool rather than a substitute for understanding. Other open source projects may decide disclosure rules are enough. Zig chose the harder line.
That is why this matters beyond one language community. If Bun's fork is an early example, AI coding tools may speed up downstream products while making upstream collaboration more brittle. What to watch next is whether other maintainer-constrained projects start treating AI help as a review-budget problem too, even when the code itself looks valuable.