Underneath it's just a system prompt, or more likely a prompt layered on top: "You are a frontend engineer, competent in React, Next.js, and Tailwind CSS". The stack details, project layout, and other key information are already in CLAUDE.md. For anything more, the model is going to call file-read tools etc.
I think it's more theatre than utility.
What I have taken to doing is having a parent folder and then frontend/, backend/, infra/, etc. as children.
parent/CLAUDE.md frontend/CLAUDE.md backend/CLAUDE.md
The parent/CLAUDE.md provides a high-level view of the stack ("FastAPI backend with Postgres, Next.js frontend with Tailwind", etc.). The parent/CLAUDE.md also points to the children's CLAUDE.md files, which have more granular information.
I then just spawn a Claude in the parent folder, set up plan mode, go back and forth on a design, have it dump the design out as markdown to RFC/, and after that go to work. I find it does really well then, as all the changes it makes are made with the context of the other services.
My CLAUDE.md or AGENTS.md is usually just a bulleted list of reminders with high level information. If the agent needs more steering, I add more reminders. I try not to give it _too_ broad of a task without prior planning or it'll just go off the rails.
Something I haven't really experimented with is having claude generate ADRs [1] like your RFC/ idea. I'll probably try that and see how it goes.
Kind of like telling it to generate Ghibli pics. These things are best at imitation.
Subagents do not work well for coding at all
Subagents can work very well, especially for larger projects. Based on this statement, I think you're experiencing how I felt in my early experience with them, and that your mental model for how to use them effectively is still embryonic.
I've found that the primary benefit for subagents is context/focus management. For example, I'm doing auth using Stytch. What I absolutely don't want to do is load https://stytch.com/docs/llms.txt and instructions for leveraging it in my CLAUDE.md. But it's perfect for my auth agent, and the quality of the output for auth-related tasks is far higher as a result.
A recommended read: https://jxnl.co/writing/2025/08/29/context-engineering-slash...
P.S. I know they added 1M context to their API, with a price increase, but AFAIK the subscription still uses the 200k context.
So far, I find it much more important to define task scope and boundaries. If I want to implement a non-trivial feature, I'll have one role for analyzing the problem and coming up with a high-level plan, and then another role for breaking that plan down into very small atomic steps. I'll then pass each step to an implementation role and give it both the high-level plan and the whole list of individual steps as context, while making it clear that the scope is only to implement that one specific step.
I've had very good results with this so far, and once the two main documents are done, I can automate it with a small orchestration script (no LLM involved, completely deterministic) that goes through the list and passes each item to an implementation agent sequentially, even letting the agent write a commit message after every step so I can trace its work afterwards. I've had very clean long-running tasks this way, with minimal need for fixing things afterwards. I can launch it in the evening, go to bed, and wake up to a long list of commits.
With the new $6 subscription from Z.ai, which includes 120 prompts (around 2000 requests) every 5 hours, I can pretty much let this run without having to worry about exceeding my limits.
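The deterministic orchestration loop described above can be sketched in a few lines of Python. `claude -p` is Claude Code's real headless flag, but the file names (`PLAN.md`, the step list) and the prompt wording are illustrative assumptions:

```python
import subprocess
from pathlib import Path

def run_plan(steps_file, agent_cmd=("claude", "-p"), cwd="."):
    """Feed each atomic step to an implementation agent, one at a time,
    committing after every step so the run is traceable in git history."""
    steps = [s.strip() for s in Path(steps_file).read_text().splitlines() if s.strip()]
    for i, step in enumerate(steps, 1):
        prompt = (
            f"Implement ONLY step {i} below. The high-level plan is in PLAN.md "
            f"and the full step list is in {steps_file}; do not touch other steps.\n"
            f"Step {i}: {step}"
        )
        # The agent command is injectable, so any CLI agent works here.
        subprocess.run([*agent_cmd, prompt], cwd=cwd, check=True)
        subprocess.run(["git", "add", "-A"], cwd=cwd, check=True)
        subprocess.run(["git", "commit", "-m", f"step {i}: {step}"], cwd=cwd, check=True)
```

Because `agent_cmd` is a parameter, the same loop can drive Claude Code, a Z.ai-backed CLI, or anything else that takes a prompt as an argument.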
CLAUDE.md is kept somewhat lean, with pointers to individual files in ./docs/ and .claude/commands is a symlink to .agents/commands.
After starting Claude, I use /commands to load a role and context, which pulls in only the necessary docs and avoids, say, loading UI design or test architecture docs, when adding a backend feature.
I don't want to have to do any of this, but it helps me try and keep the agents on the rails and minimize context rot.
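For what it's worth, a role-loading slash command in Claude Code is just a markdown prompt file under .claude/commands/ (which here is symlinked from .agents/commands). This is a hypothetical sketch; the file name and doc paths are made up, while `$ARGUMENTS` is Claude Code's real placeholder for the text typed after the command:

```
<!-- .claude/commands/backend.md — invoked as /backend <task description> -->
Act as the backend implementer for this task: $ARGUMENTS

Before starting, read only these docs:
- ./docs/backend-architecture.md
- ./docs/api-conventions.md

Do NOT load UI design or test architecture docs for this role.
```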
I advise people to only use subagents for stuff that is very compartmentalized because they're hard to monitor and prone to failure with complex codebases where agents live and die by project knowledge curated in files like CLAUDE.md. If your main Claude instance doesn't give a good handoff to a subagent, or a subagent doesn't give a good handback to the main Claude, shit will go sideways fast.
Also, don't lean on agents for refactoring. Their ability to refactor a codebase goes in the toilet pretty quickly.
Very much this. I tried to get Claude to move some code from one file to another. Some of the code went missing. Some of it was modified along the way.
Humans have strategies for refactoring, e.g. "I'm going to start from the top of the file, Cut the code that needs to be moved, and Paste it in the new location". LLMs don't have a clipboard (yet!), so they can't do this.
Claude can only reliably do this refactoring if it can keep both the start and end files in context. This was a large file, so it got lost. Even then, it needs direct supervision.
For my own agent I have `move_file` and `copy_file` tools, with two args each, that at least GPT-OSS seems able to use whenever it suits, like for moving stuff around. I've seen it use them as part of refactoring as well: moving a file to one location, copying that to another, then trimming both of them, but with different trims. That seems to have worked OK.
If the agent has access to `exec_shell` or similar, I'm sure you could add `Use mv and cp if you need to move or copy files` to the system prompt to get it to use that instead, probably would work in Claude Code as well.
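A minimal sketch of what such tools might look like, assuming a Python-based agent where each tool is a plain function whose return string is fed back to the model (the names mirror the comment above; everything else is an assumption):

```python
import shutil
from pathlib import Path

def copy_file(src: str, dest: str) -> str:
    """Tool: copy src to dest, creating parent directories as needed."""
    Path(dest).parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # copy2 preserves metadata like mtime
    return f"copied {src} -> {dest}"

def move_file(src: str, dest: str) -> str:
    """Tool: move src to dest, creating parent directories as needed."""
    Path(dest).parent.mkdir(parents=True, exist_ok=True)
    shutil.move(src, dest)
    return f"moved {src} -> {dest}"
```

The point is that the model never touches file contents during a move, so nothing can go missing or get silently modified along the way.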
On the one hand, it’s kind of irritating when it goes great-great-great-fail.
On the other hand, it really enforces the best practices of small classes, small files, separation of concerns. If each unit is small enough it does great.
Unfortunately, it’s also fairly verbose and not great at recognizing that it is writing the same code over and over again, so I often find some basic file has exploded to 3000 lines, and a simple “identify repeated logic and move it to functions” prompt shrinks it to 500 lines.
Like "evaluate the test coverage" or "check if the project follows the style guide".
This way the "main" context only gets the report and doesn't waste space on massive test outputs or reading multiple files.
Chat completion sends the full prompt history on every call.
I am working on my own coding agent and seeing massive improvements by rewriting history using either a smaller model or a freestanding call to the main one.
It really mitigates context poisoning.
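A rough sketch of that history rewriting, assuming an OpenAI-style message list; `summarize` stands in for the call to the smaller model (or a freestanding call to the main one):

```python
def compact_history(messages, summarize, keep_last=6):
    """Collapse everything between the system prompt and the last few turns
    into one summary message, so stale tool output can't poison new turns."""
    if len(messages) <= keep_last + 1:  # +1 for the system prompt
        return messages
    system, old, recent = messages[0], messages[1:-keep_last], messages[-keep_last:]
    summary = summarize("\n".join(m["content"] for m in old))
    return [
        system,
        {"role": "user", "content": f"Summary of earlier work:\n{summary}"},
        *recent,
    ]
```

Since chat completion resends the full history every call anyway, shrinking it like this cuts both cost and the surface area for context poisoning.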
I’ve been using subagents since they were introduced and it has been a great way to manage context size / pollution.
I was on the subagent hype train myself for a while but as my codebases have scaled (I have a couple of codebases up to almost 400k now) subagents have become a lot more error prone and now I cringe when I see them for anything challenging and immediately escape out. They seem to work great with more greenfield projects though.
Then I moved some parts into rules and some into slash commands, and then I got much better results.
Subagents are like freelance contractors (I know, I have been one very recently): good when they need little handoff (real-time handoff isn't possible), need little overseeing, and when their results are good advice rather than an action. They don't know what you are doing, and they don't care what you do with the info they produce. They just do the work for you while you do something else, or you wait for them to produce independent results. They come and go with little knowledge of existing functionality, but are good on their own.
Here are 3 agents I still keep and one I am working on.
1: Scaffolding: I create (and sometimes destroy) a lot of new projects, so I use a scaffolding agent when I'm trying something new. It starts with a fresh one-line instruction on what to scaffold (e.g. a new Docker container with Hono and a Postgres connection, or a new Cloudflare Worker that will connect to R2, D1, and AI Gateway, or an AWS serverless API Gateway with SQS that does this, that, and the other) and where to deploy. At the end of the day it sets up the project structure, creates a GitHub repo, and commits it for me. I take it forward from there.
2: Triage: When I face an issue that isn't obvious from reading the code alone, I give the agent the place and some logs, and it will use whatever is available (including the DB data) to make a best guess at why the issue happens. I've often found they work best when they are not biased by recent work.
3: Pre-release QA check: This QA agent tests the entire system (essentially running the full integration and end-to-end test suites) to make sure the product doesn't break anything existing. I'm now adding functionality to let it see the original business requirement and check whether the code satisfies it. I want this agent to be an advisor that helps me decide whether something goes into the release pipeline or not.
4: Web search (experimental): Sometimes a search is too costly in tokens, and we only need the end result, not the search trail and the 10 pages it found along the way...
I spent a few hours trying stuff like this and the results were pretty bad compared to just using CC with no agent specific instructions.
Maybe I needed to push through and find a combination that works but I don't find this article convincing as the author basically says "it works" without showing examples or comparing doing the same project with and without subagents.
Anyone got anything more convincing to suggest it's worth me putting more time into building out flows like this instead of just using a generic agent for everything?
A backend developer subagent is going to do the job ok, but then the supervisor agent will be missing useful context about what’s been done and will go off the rails.
The ideal sub agent is one that can take a simple question, use up massive amounts of tokens answering it, and then return a simple answer, dropping all those intermediate tokens as unnecessary.
Documentation search is a good one - does X library have a Y function? The subagent can search the web, read doc MCPs, and then return a simple answer without the supervisor being polluted with all that context
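In Claude Code, a subagent like that is just a markdown file with YAML frontmatter under .claude/agents/. The sketch below is a hedged example, not a canonical definition; the name, tool list, and wording are assumptions to adjust for your setup:

```
---
name: doc-search
description: Answers "does library X have function Y" style questions. Use for any documentation lookup.
tools: WebSearch, WebFetch
---
Answer the question by consulting official documentation. Return ONLY a short
answer (function name, signature, one-line usage note). Do not return raw
search results or page contents to the caller.
```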
Make agents for tasks, not roles.
I've seen this with coding agents using spec-driven development, for example. You can try to divide agents into lots of different roles that roughly correspond to human job positions, as BMad does for example, or you can simply make each agent do a task and give it a template for that task: make an implementation plan using an implementation-plan template, or make a task list using a task-list template. In general, I've gotten much better results with agents that have a specific task to do than with agents given a role and a job-like description.
For code review, I don't use a code reviewer agent, instead I've defined a dozen code reviewing tasks, that each runs as separate agents (though I group some related tasks together).
Subagents open all the new metaphorical tabs to get to some answer, then close those tabs so the main agent can proceed with the main task.
Excellent article on this pattern: https://jxnl.co/writing/2025/08/29/context-engineering-slash...
At some point you gotta stop and wonder if you’re doing way too much work managing claude rather than your business problem.
I see lots of people saying you should be doing it, but not actually doing it themselves.
Or at least, not showing full examples of exactly how to handle it when it starts to fail or scale, because obviously when you dont have anything, having a bunch of agents doing any random shit works fine.
Frustrating.
Last week I asked Claude Code to set up a Next.js project with internationalization. It tried to install a third-party library instead of using the internationalization method recommended for the latest version of Next.js (using Next's middleware) and could not produce a functional version of the boilerplate site.
There are some specific cases where agentic AI does help me but I can't picture an agent running unchecked effectively in its current state.
Not very agentic but it works a lot better.
However the complexity is in knowing what to do and when. Actually typing the code/running commands doesn't take that much time and energy. I feel like any time gained by overusing an LLM will be offset by having to debug its code when it messes things up.
I've also seen it choke when tasked with adding a simple result count on a search.
The short answer is, it's cheap to let it try.
And this is just the tip of the tip of the iceberg of what even a medium sized startup spends. This is not cheap in any way.
With all due respect to the .agents/ markdown files, Claude Code, like other LLMs, often gets fixed on a certain narrative, and no matter what the instructions say, it repeats that wrong choice over and over and over again, while “apologizing”…
Anything beyond a close and intimate review of its implementation is doomed to fail.
What made things a bit better recently was setting up Gemini CLI and Claude Code to take turns designing, reviewing, implementing, and testing each other's work.
My gut feeling from past experience is that we have git, but not git-flow yet: a standardized approach that is simple to learn and implement across teams.
Once (if?) someone just "gets it right", with a reliable way to break this down to the point that engineers can efficiently review specs and code against expectations, that will be the moment when being a coder means something different, at large.
So far, all the projects I've seen end up building "frameworks" that match each person's internal workflow. That's great and can be very effective for a single person (it is for me), but unless it can be shared across teams, throughput will still be limited (compared to that of a team of engineers with the same tools).
Also, refactoring a project to fully leverage AI workflows might be inefficient compared to rebuilding it from scratch with that in mind from day one, since the context docs that should have been built in pair with development can't be backported: that knowledge is likely already lost in time and accrued as technical debt.
If code is a liability and the best part is no part, what about leveraging Markdown files only?
The last programs I created were just CLI agents with Markdown files and MCP servers(some code here but very little).
The feedback loop is much faster, allowing me to understand what I want after experiencing it, and self-correction is super fast. Plus, you don't get lost in the implementation noise.
It's no different to inheriting a legacy application, though. And from the perspective of a product owner, it's not a new risk.
I don't trust Claude to write reams of code that I can't maintain except when that code is embarrassingly testable, i.e it has an external source of truth.
https://www.youtube.com/watch?v=wL22URoMZjo
Have a great day =3
Fast decision-making is terrible for software development. You can't make good decisions unless you have a complete understanding of all reasonable alternatives. There's no way that someone who is juggling 4 LLMs at the same time has the capacity to consider all reasonable alternatives when they make technical decisions.
IMO, considering all reasonable alternatives (and especially identifying the optimal approach) is a creative process, not a calculation. Creative processes cannot be rushed. People who rush into technical decisions tend to go for naive solutions; they don't give themselves the space to have real lightbulb moments.
Deep focus is good but great ideas arise out of synthesis. When I feel like I finally understand a problem deeply, I like to sleep on it.
One of my greatest pleasures is going to bed with a problem running through my head and then waking up with a simple, creative solution which saves you a ton of work.
I hate work. Work sucks. I try to minimize the amount of time I spend working; the best way to achieve that is by staring into space.
I've solved complex problems in a few days with a couple of thousand lines of code which took some other developers, more intelligent than myself, months and 20K+ lines of code to solve.
I was working on a large-ish R analysis. In R, people generally start by loading entire libraries, like
library(a)
library(b)
etc., leading to namespace clashes. It's better practice to qualify every call with its package namespace, i.e., it's better to do
a::function_a()
b::function_b()
than to load both libraries and blindly trust that function_a() and function_b() come from a and b.
I asked Claude Code to take a >1000 LOC R script and replace all function calls with their namespace-qualified form. It ran one subagent to look for function calls, identified >40 packages, and then started one subagent per package, for >40 subagents. Cost-wise (and speed-wise!) it was mayhem, as every subagent re-read the script. It was far faster and cheaper (though a bit harder to judge) to just copy-paste the R script into regular Claude and ask it to carry out the same action. The lesson is that subagents are often costly overkill.
I see people who have never coded in their life signing up for Lovable or some other code agent and trying their luck.
What cements this thought pattern in your post is this: "If the agents get it wrong, I don’t really care—I’ll just fire off another run"
Initially, being in the loop is necessary; once you find yourself "just approving", you can relax and step back. Or, more likely: initially you need fine-grained tasks, and as reliability grows, tasks can become more complex.
"parallelizing" allows single (sub)agents with ad-hoc responsibilities to rely on separate "institutionalized" context/rules, .ie: architecture-agent and coder-agent can talk to each others and solve a decision-conflict based on wether one is making the decision based on concrete rules you have added, or hallucinating decisions
I have seen a friend build a rule-based system and have been impressed at how well LLMs work within that context.
Most subagent examples are vague or simplistic.
> "Managing Cost and Usage Limits: Chaining agents, especially in a loop, will increase your token usage significantly. This means you’ll hit the usage caps on plans like Claude Pro/Max much faster. You need to be cognizant of this and decide if the trade-off—dramatically increased output and velocity at the cost of higher usage—is worth it."
https://github.com/pchalasani/claude-code-tools/tree/main?ta...
If the first CLI-agent just needs a review or suggestions of approaches, I find it helps to have the first agent ask the other CLI-agent to dump its analysis into a markdown file which it can then look at.
Why not? I'm assuming we're not talking about "vibe coding" as it's not a serious workflow, it was suggested as a joke basically, and we're talking about working together with LLMs. Why would correctness be any harder to achieve than programming without them?
Using a coding agent can make your entire work day turn into doing nothing but code reviews. I.e. the least fun part: constant review of a junior dev who's on the brink of failing their probation period, with random strokes of genius.
The idea was to encapsulate the context for a subagent to work on in a single GitHub issue/document. I’m yet to see how the development/QA subagents will fare in real-world scenarios by relying on the context in the GitHub issue.
Like many others here, I believe subagents will starve for context. Claude Code Agent is context-rich, while claude subagents are context-poor.
Ideally I would like to spin off multiple agents to solve multiple bugs or features. The agents would have to use the CI in GitHub to get feedback on tests. And I would like to view it in an IDE, because I like being able to understand code by jumping through definitions.
Support for multiple branches at once - I should be able to spin off multiple agents that work on multiple branches simultaneously.
Am I the only one convinced that all of the hype around coding agents like codex and claude is 85% BS ?