When the creator of the world's most advanced coding agent speaks, Silicon Valley doesn't just pay attention; it takes notes.
For the past week, the engineering community has been dissecting a thread on X from Boris Cherny, the creator and head of Claude Code at Anthropic. What started as an informal sharing of his personal terminal setup has snowballed into a viral manifesto on the future of software development, with industry insiders calling it a watershed moment for the startup.
"If you're not reading the Claude Code best practices straight from its creator, you're behind as a programmer," wrote Jeff Tang, a outstanding voice within the developer neighborhood. Kyle McNease, one other business observer, went additional, declaring that with Cherny's "game-changing updates," Anthropic is "on fire," probably going through "their ChatGPT moment."
The buzz stems from a paradox: Cherny's workflow is surprisingly simple, yet it allows a single human to operate with the output capacity of a small engineering team. As one user noted on X after implementing Cherny's setup, the experience "feels more like Starcraft" than traditional coding: a shift from typing syntax to commanding autonomous units.
Here is an analysis of the workflow that is reshaping how software gets built, straight from the architect himself.
How running 5 AI agents at once turns coding into a real-time strategy game
The most striking revelation from Cherny's thread is that he doesn't code in a linear fashion. In the traditional "inner loop" of development, a programmer writes a function, tests it, and moves on to the next. Cherny, however, acts as a fleet commander.
"I run 5 Claudes in parallel in my terminal," Cherny wrote. "I number my tabs 1-5, and use system notifications to know when a Claude needs input."
Using iTerm2 system notifications, Cherny effectively manages five simultaneous work streams: while one agent runs a test suite, another refactors a legacy module and a third drafts documentation. He also runs "5-10 Claudes on claude.ai" in his browser, using a "teleport" command to hand off sessions between the web and his local machine.
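Cherny's setup is specific to iTerm2, but the supervision pattern itself is simple. Below is a minimal Python sketch of the same idea, with threads standing in for agent sessions and a list standing in for system notifications; every name here is illustrative, not part of Cherny's actual tooling.

```python
# Illustrative sketch of the "fleet commander" pattern: five jobs run in
# parallel, and the supervisor reacts to whichever one finishes first.
import concurrent.futures
import time

def agent_task(tab: int) -> str:
    """Stand-in for a long-running agent session in terminal tab `tab`."""
    time.sleep(0.01 * tab)  # simulate work of varying length
    return f"tab {tab}: needs input"

ready = []
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(agent_task, tab) for tab in range(1, 6)]
    for done in concurrent.futures.as_completed(futures):
        # In Cherny's setup, this is the moment a system notification fires.
        ready.append(done.result())

print(ready)
```

The key design point is that the human never blocks on any single agent: attention goes wherever a session signals it needs input.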
This validates the "do more with less" strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure build-outs, Anthropic is proving that superior orchestration of existing models can yield exponential productivity gains.
The counterintuitive case for choosing the slowest, smartest model
In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic's heaviest, slowest model: Opus 4.5.
"I use Opus 4.5 with thinking for everything," Cherny defined. "It's the best coding model I've ever used, and even though it's bigger & slower than Sonnet, since you have to steer it less and it's better at tool use, it is almost always faster than using a smaller model in the end."
For enterprise technology leaders, this is a critical insight. The bottleneck in modern AI development isn't token generation speed; it's the human time spent correcting the AI's mistakes. Cherny's workflow suggests that paying the "compute tax" for a smarter model upfront eliminates the "correction tax" later.
One shared file turns every AI mistake into a permanent lesson
Cherny also detailed how his team solves the problem of AI amnesia. Standard large language models don't "remember" a company's specific coding style or architectural decisions from one session to the next.
To address this, Cherny's team maintains a single file named CLAUDE.md in their git repository. "Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time," he wrote.
This practice transforms the codebase into a self-correcting organism. When a human developer reviews a pull request and spots an error, they don't just fix the code; they tag the AI to update its own instructions. "Every mistake becomes a rule," noted Aakash Gupta, a product leader analyzing the thread. The longer the team works together, the smarter the agent becomes.
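Cherny's thread doesn't show the file's contents, but a CLAUDE.md of this kind is typically a plain markdown list of project rules that the agent reads at the start of every session. A hypothetical example of what such accumulated lessons might look like:

```markdown
# CLAUDE.md (hypothetical example of accumulated rules)

## Code style
- Use the project's logger, never `console.log`, in shipped code.
- Prefer early returns over nested conditionals.

## Lessons from past mistakes
- Do not edit generated files under `build/`; change the templates instead.
- Run the full test suite before proposing a commit message.
```

Because the file lives in git, every rule is reviewed like code and shared by every developer's agent sessions.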
Slash commands and subagents automate the most tedious parts of development
The "vanilla" workflow one observer praised is powered by rigorous automation of repetitive duties. Cherny makes use of slash instructions — customized shortcuts checked into the undertaking's repository — to deal with advanced operations with a single keystroke.
He highlighted a command called /commit-push-pr, which he invokes dozens of times a day. Instead of manually typing git commands, writing a commit message, and opening a pull request, the agent handles the paperwork of version control autonomously.
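In Claude Code, custom slash commands are markdown files checked into the repository, conventionally under `.claude/commands/`, where the filename becomes the command name and the body becomes the prompt. Cherny didn't publish his version, but a hypothetical reconstruction might look like:

```markdown
<!-- .claude/commands/commit-push-pr.md (hypothetical reconstruction) -->
Review the staged and unstaged changes, then:
1. Stage the relevant files and write a concise, conventional commit message.
2. Commit and push the current branch to origin.
3. Open a pull request summarizing the change and how it was tested.
```

Typing `/commit-push-pr` in a session would then expand into this prompt, turning a multi-step chore into one keystroke.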
Cherny also deploys subagents, specialized AI personas, to handle specific phases of the development lifecycle. He uses a code-simplifier to clean up architecture after the main work is done and a verify-app agent to run end-to-end tests before anything ships.
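Subagents in Claude Code are typically defined as markdown files with YAML frontmatter, conventionally under `.claude/agents/`. A sketch of what a code-simplifier definition might contain; the description, tool list, and instructions here are invented for illustration:

```markdown
---
name: code-simplifier
description: Cleans up and simplifies code after the main implementation
  work is complete. Use proactively after large edits.
tools: Read, Edit, Grep, Glob
---
You are a refactoring specialist. Reduce duplication, flatten needless
abstraction, and improve naming without changing observable behavior.
Run the existing tests after every change you make.
```

The frontmatter scopes what the persona is for and which tools it may touch; the body sets its standing instructions.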
Why verification loops are the real unlock for AI-generated code
If there is a single reason Claude Code has reportedly hit $1 billion in annual recurring revenue so quickly, it is likely the verification loop. The AI is not just a text generator; it is a tester.
"Claude tests every single change I land to claude.ai/code using the Claude Chrome extension," Cherny wrote. "It opens a browser, tests the UI, and iterates until the code works and the UX feels good."
He argues that giving the AI a way to verify its own work, whether through browser automation, running bash commands, or executing test suites, improves the quality of the final result by "2-3x." The agent doesn't just write code; it proves the code works.
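The loop Cherny describes can be sketched abstractly: propose a change, run a check the agent can observe, and feed failures back until the check passes. A minimal Python sketch of that shape follows; the "agent" and the check command are trivial stand-ins, not Anthropic's tooling.

```python
# Conceptual sketch of an agent verification loop: output is only accepted
# once an externally observable check (tests, a browser run, a bash command)
# passes, and failures are fed back as context for the next attempt.
import subprocess
import sys

def run_check(code: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess; stderr is the feedback signal."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def fake_agent(attempt: int) -> str:
    # Stand-in for a model call: the first attempt is buggy, the second fixed.
    return "assert 1 + 1 == 3" if attempt == 0 else "assert 1 + 1 == 2"

def verify_loop(max_rounds: int = 3) -> int:
    for attempt in range(max_rounds):
        ok, feedback = run_check(fake_agent(attempt))
        if ok:
            return attempt + 1  # rounds needed before the check passed
        # In the real workflow, `feedback` goes back into the model's context.
    raise RuntimeError("check never passed")

rounds = verify_loop()
print(f"check passed after {rounds} round(s)")
```

The point of the pattern is that correctness is judged by an external signal the agent can iterate against, not by the model's own confidence.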
What Cherny's workflow signals about the future of software engineering
The response to Cherny's thread suggests a pivotal shift in how developers think about their craft. For years, "AI coding" meant an autocomplete function in a text editor, a faster way to type. Cherny has demonstrated that it can now function as an operating system for labor itself.
"Read this if you're already an engineer… and want more power," Jeff Tang summarized on X.
The tools to multiply human output by a factor of five are already here. They require only a willingness to stop thinking of AI as an assistant and start treating it as a workforce. The programmers who make that mental leap first won't just be more productive. They will be playing an entirely different game while everyone else is still typing.

