How LLMs Became the Programmer’s Pattern Engine

The fastest programmer I ever worked with did not type quickly. He was, if anything, slower at the keyboard than the rest of us. What he did better than anyone was see the pattern — the repetition hiding inside a two-thousand-line file, the shape of a change that looked tedious from the outside but was actually one operation applied many times. He would record a vim macro, hit @@ a few times, and walk away from work that would have taken the rest of us an hour.

That instinct — find the pattern, encode it, replay at scale — has always been the dividing line between good engineers and great ones. The tools have changed across decades. The instinct has not.

What’s happening with LLMs is the next tier of that same instinct. Not a replacement for craft. A new substrate for it.


A short lineage of leverage

Every era of serious programming has had a “force multiplier” tool whose mastery separated the leveraged from the labour-bound.

In the seventies it was sed and awk — text-stream surgery from the command line. In the eighties and nineties it was regular expressions and Emacs keyboard macros. By the time I was learning, vim macros and well-crafted shell pipelines were the secret handshake of senior engineers. Refactor a hundred imports? Don’t do it by hand — record the change once, replay it everywhere.
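
For anyone who never picked up the habit, the whole trick fits in a handful of keystrokes. Here is a toy vim macro, recorded once and replayed down a hundred lines:

    qa          start recording into register a
    A,<Esc>     append a comma at the end of the line, return to normal mode
    j           move down to the next line
    q           stop recording
    99@a        replay the recording on the next ninety-nine lines

The @@ from the story above simply repeats the most recent replay, one press at a time.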

The pattern was always the same. You looked at a problem, found the structural repetition inside it, and described that repetition in a language the machine understood. Regex was a pattern language for text. Vim macros were a pattern language for keystrokes. Codemods, when they arrived, were a pattern language for ASTs.
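
To make the last of those concrete, here is a minimal codemod sketch using Python’s standard ast module (Python 3.9+ for ast.unparse). The names get_user_id and get_user_token are invented for illustration:

    import ast

    class RenameCall(ast.NodeTransformer):
        """Rewrite calls to get_user_id(...) into calls to get_user_token(...)."""

        def visit_Call(self, node):
            self.generic_visit(node)  # rewrite any nested calls first
            if isinstance(node.func, ast.Name) and node.func.id == "get_user_id":
                node.func = ast.Name(id="get_user_token", ctx=ast.Load())
            return node

    source = "uid = get_user_id(request)\nname = 'get_user_id'\n"
    print(ast.unparse(RenameCall().visit(ast.parse(source))))
    # uid = get_user_token(request)
    # name = 'get_user_id'   <- the string survives: the match is structural, not textual

Production codemod tools (libcst in Python, jscodeshift in JavaScript) work on a concrete syntax tree instead, so comments and formatting survive the rewrite, but the idea is the same: the pattern is expressed over structure, not characters.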

Each of these is a step up the ladder of abstraction. Each requires more of you when you’re describing the pattern, but each rewards you with more leverage when you replay it.

LLMs are the next rung on that ladder. The pattern language is now natural language plus context. And the patterns the model can match are no longer just syntactic — they’re semantic.


Why the old patterns hit a ceiling

The trouble is that codebases outgrew the abstractions we were using to wrangle them.

A vim macro is excellent inside a buffer. It is useless across three services, a config repository, a Terraform module, and a fixture file in the test suite — all of which need to change together for the refactor to be coherent. A regex can match user_id, but it can’t tell you that this particular user_id is the one that needs to migrate to an opaque token, while the other one over there is a database primary key that should stay numeric.

The signal that mattered was always meaning, not surface form. We just didn’t have tools that could see meaning. So we approximated it — through naming conventions, file organisation, careful imports, tests that would scream when we got it wrong. Those were workarounds for the fact that our pattern engines were syntactic and our problems were semantic.

The newer generation of programmers — the ones I see doing the most leveraged work right now — aren’t faster typists than their predecessors. They’ve simply moved their pattern-matching up a level. Instead of “find every line matching this regex,” they’re saying things like “find every place we assume the legacy auth shape and migrate it to the new middleware, leaving the legacy callers backward-compatible until next quarter.”

That sentence is a pattern. It’s just a pattern at the wrong altitude for sed and the right altitude for an agent.


What actually changed

It’s tempting to describe LLMs in code as autocomplete-on-steroids. That framing badly misses the point. The shift isn’t about generating tokens faster. It’s about what kind of pattern you can now describe and execute.

Three things become possible that weren’t before:

Cross-file reasoning. An agent with the right tooling can hold the shape of a multi-service refactor in working memory long enough to make consistent changes across files that have nothing in common syntactically but everything in common semantically. The vim macro never could.

Pattern matching at the level of intent. “Wherever we’re parsing dates from user input without timezone normalisation, fix it” used to be a hand-grepped audit. Now it’s a single instruction with verification at the end; a sketch of one such fix follows this list.

Restructure as a single operation. Architectural changes that used to take a week of careful, anxious work — splitting a god-module, lifting a concern up into middleware, threading a new field through a type hierarchy — collapse into a planning conversation followed by a supervised execution. The work that’s left for you is the part that genuinely required judgement.
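
To ground the second of those: assuming a Python codebase, the fixed call site might look something like this. The function name and the decision that naive input means UTC are illustrative, not prescriptive:

    from datetime import datetime, timezone

    def parse_user_timestamp(raw: str) -> datetime:
        """Parse an ISO-8601 timestamp from user input, normalised to UTC."""
        dt = datetime.fromisoformat(raw)
        if dt.tzinfo is None:
            # The original bug: naive datetimes slipped through unlabelled.
            # Policy chosen here: treat naive input as UTC.
            dt = dt.replace(tzinfo=timezone.utc)
        return dt.astimezone(timezone.utc)

    print(parse_user_timestamp("2024-03-01T09:30:00+05:00"))  # 2024-03-01 04:30:00+00:00
    print(parse_user_timestamp("2024-03-01T09:30:00"))        # 2024-03-01 09:30:00+00:00

Finding every call site that needs this is the agent’s half of the work. Deciding that naive-means-UTC is the right policy is yours.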

None of this removes the engineer. All of it changes what the engineer’s leverage looks like.


The new craft

What separates the engineers getting real leverage out of LLMs from the ones generating impressive-looking slop?

It is not, in my experience, a question of who has access to which model. The frontier models are close enough to one another that the choice between them rarely decides the outcome. The gap is in how the engineer thinks about the work.

The good ones think in patterns first. Before opening a chat window, they’ve already articulated to themselves what shape of change they’re trying to make — what’s the same across all the call sites, what’s different, what’s the invariant that has to hold after the refactor. This is the same skill that made someone good at regex twenty years ago. The vocabulary changed; the discipline didn’t.

The good ones manage context deliberately. An LLM is only as good as the slice of the codebase you give it. Throwing the whole repo at the model is the new equivalent of cat *.py | grep — it works for trivial things and falls apart for anything subtle. The skill is selecting the right context: the failing test, the type definitions, the three call sites that exhibit the pattern, the convention document. Treating the context window as a precious resource is the modern equivalent of the senior engineer who knew exactly which three files mattered and ignored the other ninety.
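
In practice that can be as mundane as choosing which files get pasted in. The file names below are invented, but the shape is the point:

    # hand-picked context, not the whole repo: the failing test,
    # the type definitions, a call site that exhibits the pattern,
    # and the conventions the change has to respect
    cat tests/test_sessions.py \
        src/auth/types.py \
        src/api/login.py \
        docs/conventions.md > context.txt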

The good ones verify. Tests, not vibes. The output of an agent is a hypothesis about what the change should be, not the change itself. The same engineers who would never trust a vim macro without diff-checking the result aren’t trusting agent output blindly either. They run the tests. They read the diff. They keep the model honest.

And — this is the part that feels heretical — the good ones know when not to use the LLM. There are problems where grep and a small sed script are still the right answer. There are refactors where the model’s overhead and verification cost exceed the benefit. The instinct for matching tool to problem is older than any of the tools, and it still applies.
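
If the change is purely lexical, a literal rename with no semantic ambiguity, the old pipeline still wins on every axis. The class names here are invented:

    # GNU sed shown; BSD/macOS sed wants -i ''
    grep -rl 'LegacyClient' src/ | xargs sed -i 's/LegacyClient/HttpClient/g'
    git diff    # the verification discipline is the same either way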


What this doesn’t replace

It is worth being clear about what hasn’t changed.

The model cannot tell you whether the refactor is the right refactor. It can execute the change beautifully and you can still have shipped the wrong abstraction. Taste, judgement, and a feel for what the codebase wants to become — those remain entirely human, and arguably matter more now that execution has gotten cheaper. When the cost of doing a refactor falls, the cost of doing the wrong refactor falls just as much, which means the value of choosing well goes up.

The model also cannot replace the thing that vim macros couldn’t replace either: knowing your codebase. The engineers I see getting the most out of agentic tooling are the ones who have, in their heads, a working map of how their system actually behaves. They use the LLM to apply leverage to that map. They don’t use it as a substitute for having one.


The throughline

Every generation of programmers has been shaped by the abstraction at which they could describe patterns. The greats from the macro era and the greats from the agent era have more in common than either group has with the merely competent in their own time. They share an instinct: see the structure, describe it once, let the machine do the repetition.

What’s new is the altitude of the patterns we can now describe. What’s old is the discipline of describing them well.

The macro hasn’t disappeared. It just got bigger.