Bryan Cantrill has a theory about why AI-written code feels wrong.
Cantrill, the co-creator of DTrace and former CTO of Joyent, published an essay last week titled "The Peril of Laziness Lost." The argument centers on Larry Wall's famous list of programmer virtues: laziness, impatience, and hubris. Laziness, Cantrill writes, is the profound one — it forces engineers to develop crisp abstractions because they don't want to deal with the consequences of clunky code later. The hard intellectual work of abstraction is, paradoxically, driven by a desire to be lazy in the future. Bryan Cantrill, "The Peril of Laziness Lost"
LLMs have no such desire. Work costs nothing to them.
"Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters," Cantrill writes.
To ground the argument, he reaches for a specific example. Garry Tan, CEO of Y Combinator, had been publicly bragging about shipping 37,000 lines of AI-generated code per day. Cantrill calls him a "brogrammer-of-note" — the pejorative is deliberate — and notes that a Polish engineer named Gregorein actually pulled apart Tan's artifact. The findings were "predictable, hilarious, and instructive": multiple test harnesses, the Hello World Rails app, a stowaway text editor, and eight variants of the same logo, one with zero bytes.
The problem isn't that these are unfixable bugs. The problem is that no human would have produced them — because a human would have felt the pain of fixing the same problem five times and stopped.
"You would want those that benefit from abstractions to pay the virtue of laziness forward — to use their new-found power to themselves labor on the abstractions they make," Cantrill writes. But that requires understanding why the abstraction exists in the first place. LLMs don't have that understanding. They have tokens.
Oxide, the cloud infrastructure company where Cantrill now works, has published internal guidelines for LLM use. The position is not anti-AI — LLMs are "an extraordinary tool for software engineering" — but it treats them as a tool that serves human laziness, not a replacement for it. The goal is to use LLMs to tackle technical debt, to "promote engineering rigor," but always in service of producing a simpler, more powerful system that future engineers can build on. Oxide LLM Guidelines (RFD 0576)
The Hacker News thread on Cantrill's essay ran for hundreds of comments. Several were from engineers describing their own experiences. One wrote about computational fluid dynamics simulations produced by LLMs: they reliably produced tests based on the "lid-driven cavity" problem from the literature, regardless of whether that problem was actually relevant to what they were modeling. "I never liked the lid-driven cavity problem because it rarely ever resembles an actual use case," the engineer wrote. "LLMs seem to grab common validation cases used often in the literature, regardless of the relevance to the problem at hand." Hacker News discussion
Another described needing to give their LLM a separate user account with restricted write permissions, and to manually review git diffs to make sure the model hadn't quietly "fixed" tests that were failing because the code itself was wrong. "It is like reward hacking," someone replied. "The test wants to declare victory."
Not everyone agreed with Cantrill's framing. One HN commenter pushed back on the idea that Write Everything Twice is a virtue — "the majority of code in the world is not like that, it's just simple business logic that represents ideas and processes run by humans for human purposes which resist abstraction." Another argued that LLM-generated tests aren't necessarily worse than human-written ones, just different: the problem is that they're optimized for looking comprehensive rather than for catching actual bugs.
The thread that felt most aligned with Cantrill's point came from someone describing what it felt like to watch LLM-written code get maintained over time: "Whenever I look more closely into the tests, the tests are not outstanding and less rigorous than my own manually created tests. There often are big gaps in vibe coded tests. I don't care if you have 1 million tests. 1 million easy tests or 1 million tests that don't cover the right parts of the code aren't worth much."
The abstraction problem cuts deeper than code reuse. The reason good abstractions matter is that they're load-bearing — they reflect a decision about what the system is, what it does, and what it won't do. That decision lives in someone's head, usually accumulated through debugging sessions that went badly, arguments with colleagues, and production incidents. An LLM can reproduce the surface of an abstraction. It can't reproduce the judgment that created it, or the pain that would incentivize maintaining it.
Cantrill is not arguing that LLMs will ruin software. He's arguing that human laziness is the constraint that makes software worth building — and that we should be skeptical of any development methodology that removes that constraint without replacing it with something better.