So the model won’t inherently “understand” that you have a skill and use it. Generating the text that triggers skill usage is trained via Reinforcement Learning on human-generated examples and usage traces.
So why doesn’t the model use skills all the time? Because it’s a new thing; there aren’t enough training samples displaying that behavior.
They also cannot enforce that via RL because skills use human language, which is ambiguous and not formal. Force the model to always use skills via an RL policy and you’ll make it dumber.
So, right now, we are generating usage traces that will be used to train future models to get a better grasp of when to use skills and when not to. Just give it time.
AGENTS.md, on the other hand, is context. Models have been trained to follow context since the dawn of the thing.
The skills frontmatter ends up in context as well.
If AGENTS.md outperforms skills in a given agent, it comes down specifically to how the skills frontmatter is extracted and injected into the context, because that is the only difference between the two approaches.
EDIT: I haven't tried to check this so this is pure speculation, but I suppose there is the possibility that some agents might use a smaller model to selectively decide what skills frontmatter to include in context for a bigger model. E.g. you could imagine Claude passing the prompt + skills frontmatter to Haiku to selectively decide what to include before passing to Sonnet or Opus. In that case, depending on approach, putting it directly in AGENTS.md might simply be a question of what information is prioritised in the output passed to the full model. (Again: this is pure speculation of a possible approach; though it is one I'd test if I were to pick up writing my own coding agent again)
But really the overall point is that AGENTS.md vs. skills here still is entirely a question of what ends up in the "raw" context/prompt that gets passed to the full model, so this is just nuance to my original answer with respect to possible ways that raw prompt could be composed.
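For reference, the part of a skill that gets injected is just the YAML frontmatter at the top of SKILL.md; something like this (the name and description here are hypothetical):

```markdown
---
name: nextjs-docs
description: Look up Next.js documentation for routing, data fetching,
  caching, and deployment questions.
---

# Next.js docs skill

The full instructions down here are only loaded if the skill is invoked.
```

Only that name/description pair is in context by default, which is why how it gets extracted and presented matters so much.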
Hence the submission's conclusion:
> Our working theory [for why this performs better] comes down to three factors.
> No decision point. With AGENTS.md, there's no moment where the agent must decide "should I look this up?" The information is already present.
> Consistent availability. Skills load asynchronously and only when invoked. AGENTS.md content is in the system prompt for every turn.
> No ordering issues. Skills create sequencing decisions (read docs first vs. explore project first). Passive context avoids this entirely.
How do you know? What if AGI can be implemented as a reasonably small set of logic rules, which implement what we call "epistemology" and "informal reasoning"? And this set of rules is just being run in a loop, producing better and better models of reality. It might even include RL, for all we know.
And what if LLMs already know all these rules? So they are AGI-complete without us knowing.
To borrow from Dennett, we understand LLMs from the physical stance (they are neural networks) and the design stance (they predict the next token of language), but do we understand them from the intentional stance, i.e. what rules they employ when they're running chain-of-thought, for example?
They're very useful, but as we all know - they're far from infallible.
We're probably plateauing on improvements to the core GPT technology. For these models and APIs to improve, it's things like Skills that need to be worked on, to reduce the mistakes they make and produce better output.
So it's pretty disappointing to see that the 'Skills' feature set as implemented, as great of a concept as it is, is pretty bogus compared to just front-loading the AGENTS.md file. This is not obvious, and it's valuable to know.
This makes the assumption that AGI is not autocomplete on steroids, which even before LLMs was a very plausible suggested mechanism for what intelligence is.
The agent passes the Turing test...
2) Bots replying to those posts,
3) Bots asking whether the bots in #2 even read TFA, and finally
4) Bots posting the HN guideline where it says you shouldn’t ask people whether they have read TFA.
…And amid the smouldering ruins of civilization, the last human, dang, will be there, posting links to all the times this particular thing has been posted to HN before.
But seriously, this is my main answer to people telling me AI is not reliable: "guess what, most humans are not either, but at least I can tell AI to correct course and its ego won't get in the way of fixing the problem".
In fact, while AI is not nearly as good as a senior dev for non-trivial tasks yet, it is definitely more reliable than most junior devs at following instructions.
Whereas a junior might be reluctant at first, if they are smart they will learn and get better.
So maybe LLMs are better than not-so-smart people, but you usually try to avoid hiring those people in the first place.
Humans are reliably unreliable. Some are lazy, some sloppy, some obtuse, some all at once. As a tech lead you can learn their strengths and weaknesses. LLMs vacillate wildly while maintaining sycophancy and arrogance.
Human egos make them unlikely to admit error, sometimes, but that fragile ego also gives them shame and a vision of glory. An egotistical programmer won’t deliver flat garbage for fear of being exposed as inferior, and can be cajoled towards reasonable output with reward structures and clear political rails. LLMs fail hilariously and shamelessly in indiscriminate fashion. They don’t care, and will happily argue both sides of anything.
Also, there’s the thing where LLMs don’t actually learn. You can threaten to chop their fingers off if they do something again… they don’t have fingers, they don’t recall, and can’t actually tell if they did the thing. “I’m not lying, oops I am, no I’m not, oops I am… lemme delete the home directory and see if that helps…”
If we’re going to make an analogy to a human, LLMs reliably act like absolute psychopaths in constant dissociation. They lie, lie about lying, and lie about following instructions.
I agree LLMs are better than your average junior at following directives on the first pass. I’m far less convinced about that story over time, as ongoing dialog develops juniors into more effective engineers.
It's barely readable to humans, but directly and efficiently relevant to LLMs (direct reference -> referent, without natural-language verbiage).
This suggests some (compressed) index format that is always loaded into context will replace heuristics around agents.md/claude.md/skills.md.
So I would bet this year we get some normalization of both the indexes and the referenced documentation (esp. matching terms).
Possibly also a side issue: APIs could repurpose their test suites as validation to compare LLM performance on code tasks.
LLMs create huge adoption waves. Libraries/APIs will have to learn to surf them or be limited to usage by humans.
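As a sketch of what such a compressed index might look like inside AGENTS.md (paths and terms invented for illustration): terse for humans, but a direct reference -> referent map for the model:

```markdown
## Docs index
routing | app-router | layouts      -> docs/02-app/01-routing.md
data-fetching | cache | revalidate  -> docs/02-app/02-data-fetching.md
deploy | env-vars | edge            -> docs/03-deploy/01-vercel.md
```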
> "Explore project first, then invoke skill" [produces better results than] "You MUST invoke the skill".
I recently tried to get Antigravity to consistently adhere to my AGENTS.md (Antigravity uses GEMINI.md). The agent consistently ignored instructions in GEMINI.md like:
- "You must follow the rules in [..]/AGENTS.md"
- "Always refer to your instructions in [..]/AGENTS.md"
Yet, this works every time: "Check for the presence of AGENTS.md files in the project workspace."
This behavior is mysterious. It's like how, in earlier days, "let's think, step by step" invoked chain-of-thought behavior but analogous prompts did not.
Perhaps what they've learned from training data is that "must" often occurs in cases of bullshit red tape or other regulations. "You must read the terms and conditions before using this stuff," or something like that, which is actually best ignored.
Obviously directly including context in something like a system prompt will put it in context 100% of the time. You could just as easily take all of an agent's skills, feed it to the agent (in a system prompt, or similar) and it will follow the instructions more reliably.
However, at a certain point you have to use skills, because including everything in the context every time is wasteful, or not possible. This is the same reason Anthropic is doing advanced tool use (ref: https://www.anthropic.com/engineering/advanced-tool-use): there's not enough context to straight up include everything.
It's all a context/price trade-off: obviously, if you have the context budget, just include what you can directly (in this case, compressed into an AGENTS.md).
How do you suppose skills get announced to the model? It's all in the context in some way. The interesting part here is: Just (relatively naively) compressing stuff in the AGENTS.md seems to work better than however skills are implemented.
If your agent isn’t being used, it’s not as simple as “agents aren’t getting called”. You have to figure out how to get the agent invoked.
Having an agent manage its own context ends up being extraordinarily useful, on par with the leap from non-reasoning to reasoning chats. There are still issues with memory and integration, and other LLM weaknesses, but agents are probably going to get extremely useful this year.
And how do you guarantee that said relevant things actually get put into the context?
OP is about the same problem: relevant skills being ignored.
I think Vercel mixes skills and context configuration up. So the whole evaluation is totally misleading because it tests for two completely different use cases.
To sum it up: Vercel should use both files, AGENTS.md in combination with skills. The two serve totally different purposes.
1. You absolutely want to force certain context in, no questions or non-determinism asked (index and sparknotes). This can be done conditionally, but still rule-based, keyed on the files accessed and other "context"
2. You want to keep it clean and only provide useful context as necessary (skills, search, MCP; and really an explore/query/compress mechanism around all of this; Ralph Wiggum is one example)
Which makes sense.
& some numbers that prove that.
Instead it’s a problem when you’re part of a team and you’re using skills for standards like code style or architectural patterns. You can’t ask everyone to constantly update their system prompt.
Claude skill adherence is very low.
The article also doesn't mention that they don't know how the compressed index affects output quality. That's always a concern with this kind of compression. Skills are just another, different kind of compression: one with a much higher compression rate and presumably less likely to negatively influence quality. The cost being that it doesn't always get invoked.
In Claude Code you can invoke an agent when you want as a developer, and it copies the file content as context into the prompt.
I expect the benefit is from better Skill design, specifically, minimizing the number of steps and decisions between the AI’s starting state and the correct information. Fewer transitions -> fewer chances for error to compound.
1. Those I force into the system prompt using rule-based systems and "context"
2. Those I let the agent look up or discover
I also limit what gets into message parts, moving some of the larger token consumers to the system prompt so they only show once, most notably read/write_file
TFA says they added an index to Agents.md that told the agent where to find all documentation and that was a big improvement.
The part I don't understand is that this is exactly how I thought skills work. The short descriptions are given to the model up-front and then it can request the full documentation as it wants. With skills this is called "Progressive disclosure".
Maybe they used more effective short descriptions in the AGENTS.md than they did in their skills?
This is a recurring problem with LLM benchmarking — small sample sizes presented with high confidence. The underlying finding (always-in-context > lazy-loaded) is probably directionally correct, but the specific numbers don't really support the strength of the claims in the article.
What they're comparing is: (A) a skill with a short description in the frontmatter, which the agent may or may not decide to invoke, vs. (B) a massive compressed index of documentation paths dumped directly into AGENTS.md, which is always in context.
This isn't really "AGENTS.md vs skills." It's "always-in-context with high token count vs. lazy-loaded with a decision point." Of course the always-in-context version wins — you're giving the model way more information upfront. The agent literally can't miss it. That's not a surprising finding, it's almost tautological.
The more interesting question they don't address: what did their skill descriptions actually look like? In my experience, the quality of the frontmatter description is the single biggest factor in whether a skill gets invoked. A vague "Documentation lookup skill" will get ignored. A specific "Use this when the user asks about API endpoints, authentication, rate limits, or SDK usage for the Vercel platform" will get picked up reliably.
If you wrote equally detailed compressed pointers in AGENTS.md and equally detailed descriptions in skill frontmatter, the gap would likely be much smaller. The real takeaway isn't "skills are worse" — it's "if you don't invest effort in writing good skill descriptions, the agent won't know when to use them."
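To make it concrete, the difference can be a single frontmatter field (both descriptions are made up):

```yaml
# vague: tends to get ignored
description: Documentation lookup skill.
---
# specific: tends to get invoked reliably
description: Use this when the user asks about API endpoints, authentication,
  rate limits, or SDK usage for the Vercel platform.
```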
Create a folder called .context and symlink anything in there that is relevant to the project. For example READMEs and important docs from dependencies you're using. Then configure your tool to always read .context into context, just like it does for AGENTS.md.
This ensures the LLM has all the information it needs right in context from the get-go. Much better performance, cheaper, and fewer mistakes.
Cheaper because it has the right context from the start instead of faffing about trying to find it, which uses tokens and ironically bloats context.
It doesn't have to be every bit of documentation, but putting the most salient bits in context makes LLMs perform much more efficiently and accurately in my experience. You can also use the trick of asking an LLM to extract the most useful parts from the documentation into a file, which you then re-use across projects.
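A minimal sketch of the layout (file names here are just examples):

```
.context/
  next-readme.md -> ../node_modules/next/README.md   (symlinked dependency README)
  api-conventions.md                                 (LLM-extracted summary, reused across projects)
```

Then one line in AGENTS.md (or your tool's equivalent) telling the agent to always read everything under .context/.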
Their approach is still agentic in the sense that the LLM must make a tool call to load the particular doc in. The most efficient approach would be to know ahead of time which parts of the docs will be needed, and then give the LLM a compressed version of those docs specifically. That doesn't require an agentic tool call.
Of course, it's a tradeoff.
What's actually useful is to put the source code of your dependencies in the project.
I have a `_vendor` dir at the root, and inside it I put multiple git subtrees for the major dependencies, downloading the source code at the tag the project is using.
That way the LLM has access to the source code and the tests, which is way more valuable than docs because the LLM can figure out how stuff works exactly by digging into it.
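The layout is something like this (package names are illustrative), with each subtree pinned to the release tag the project actually depends on:

```
_vendor/
  next/    (git subtree of vercel/next.js at the pinned tag)
  zod/     (source plus tests, so the agent can dig into real behavior)
```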
You don’t want to be burning tokens and large files will give diminishing returns as is mentioned in the Claude Code blog.
1. Start from the Claude Code extracted instructions, they have many things like this in there. Their knowledge sharing in docs and blog posts on this aspect is second to none
2. Use AGENTS.md as a table of contents and sparknotes, put them everywhere, load them automatically
3. Have topical markdown files / skills
4. Make great tools, this is still opaque in my mind to explain, lots of overlap with MCP and skills, conceptually they are the same to me
5. Iterate, experiment, do weird things, and have fun!
I changed read/write_file to put file contents in the state and present them in the system prompt, same for the AGENTS.md. Now I'm working on evals to show how much better this is, because anecdotally, it kicks ass.
Can you detail this a bit more? Do you put the actual contents of the file in the system prompt? Forever?
> If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill. IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
While this may result in overzealous activation of skills, I've found that if I have a skill related, I _want_ to use it. It has worked well for me.
works pretty well
It’s really silly to waste big-model tokens on throat-clearing steps
Basically use a small model up front to efficiently trigger the big model. Sub agents are at best small models deployed by the bigger model (still largely manually triggered in most workflows today)
What a wonderful world that would be.
If your goal is to always give a permanent knowledge base to your agent that's exactly what AGENTS.md is for...
Regularly the skills were not being loaded and thus not utilised. The outputs themselves were fine. This suggested that, at some stage through the improvements of the models, the baseline AGENTS.md had become redundant.
I ran their tool with an otherwise empty CLAUDE.md and ran `claude /context`, which showed 3.1k tokens used by this approach (1.6% of the Opus context window, a bit more than the default system prompt; system tools take 8.3%).
Otherwise it's an interesting finding. The nudge seems like the real winner here, but potential further lines of inquiry would be really illuminating:
1. How do these approaches scale with model size?
2. How are they impacted by multiple such clauses/blocks? I.e. maybe 10 `IMPORTANT` rules dilute their efficacy.
3. Can we get the best of both worlds with specialist agents / how effective are hierarchical routing approaches really? (idk if it'd make sense for Vercel specifically to focus on this though)
The first thing that's surprising to me is how much the default tuning leans toward laudatory stances: the user is always absolutely right, and whatever was done supposedly solves everything expected. But actually no, not a single actual check was done, a ton of code was produced, the goal is not at all achieved, and of course many regressions now lurk in the code base, when it doesn't outright break everything (which is at least less insidious).
The other thing that surprises me is that it can easily churn out thousands of lines of tests, and can then be forced to loop over those tests until it succeeds. In my experiments it still churns out far too much noise code, but at least the burden of checking whether it looks like it makes any sense is drastically reduced.
And I have been trying to improve the framework and abstractions/types to reduce the lines of code required for LLMs to create features in my web app.
Did the LLM really need to spit out 1k lines for this feature? Could I create abstractions to make it feasible in under 300 lines?
Of course there's cost and diminishing returns to abstractions so there are tradeoffs.
These things are non-deterministic across multiple axes.
> Before writing code, first explore the project structure, then invoke the nextjs-doc skill for documentation.
I have a SKILL.md for marimo notebooks with instructions in the frontmatter to always read it before working with marimo files. But half the time Claude Code still doesn't invoke it even with me mentioning marimo in the first conversation turn.
I've resorted to typing "read marimo skill" manually and that works fine. Technically you can use skills with slash commands but that automatically sends off the message too which just wastes time.
But the actual concept of instructions to load in certain scenarios is very good and has been worth the time to write up the skill.
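The shape of the file, for anyone curious (wording approximated):

```markdown
---
name: marimo
# hypothetical wording, the real description differs
description: Always read this skill before working with marimo notebook files.
---
```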
Skills are new. Models haven't been trained on them yet. Give it 2 months.
It's a difference of "choose whether or not to make use of a skill that would THEN attempt to find what you need in the docs" vs. "here's a list of everything in the docs that you might need."
Skills are still very much relevant on big and diverse projects.
I have a skill in a project named "determine-feature-directory" with a short description explaining that it is meant to determine the feature directory of a current branch. The initial prompt I provide will tell it to determine the feature directory and do other work. Claude will even state "I need to determine the feature directory..."
Then, about 5-10% of the time, it will not use the skill. It does use the skill most of the time, but the low failure rate is frustrating because it makes it tough to tell whether or not a prompt change actually improved anything. Of course I could be doing something wrong, but it does work most of the time. I miss deterministic bugs.
Recently, I stopped Claude after it skipped using a skill and just said "Aren't you forgetting something?". It then remembered to use the skill. I found that amusing.
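For reference, the skill's frontmatter is essentially this shape (paraphrased):

```markdown
---
name: determine-feature-directory
description: Determine the feature directory of the current branch.
---
```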
*You are the Super Duper Database Master Administrator of the Galaxy*
does not improve the model's ability to reason about databases?
If I remind it, it will go, "oh yes, ok, sure," then do it, but the whole point is that I want to optimize my time with the agent.
I need to evaluate how different project scaffolding impacts the results of Claude Code/Opencode (either with Anthropic models or third-party ones) for agentic purposes.
But I am unsure how I should be testing, and it's not very clear how Vercel proceeded here.
But switching over to using coding agents we never did the same. Feels like building an eval set will be an important part of what engg orgs do going forward.
There is a lot of language floating around what are effectively groups of text files put together in different configurations, or selected reliably.
Just create an MCP server that does embedding retrieval or agentic retrieval with a sub agent on your framework docs.
Finally add an instruction to AGENT.md to look up stuff using that MCP.
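That instruction can be a single line; something like this, where `docs-search` is whatever you named your server:

```markdown
When you need framework documentation, query the `docs-search` MCP server
instead of answering from memory.
```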
Does the model even understand what this line means?
Which is why I use a skill that is a command, that routes requests to agents and skills.
------> Captain Obvious Strikes Again! <----------
See the rest of the comments for examples of pedantic discussions about terms that are ultimately somewhat arbitrary and, if anything, suggest the singularity will be runaway technobabble, not technological progress.
They used Prisma to handle their database interactions. They preached tRPC and screamed TYPE SAFETY!!!
You really think these guys will ever again touch the keyboard to program? They despise programming.
I guess you need to make sure your file paths are self-explanatory and fairly unique, otherwise the agent might bring extra documentation into the context trying to find which file had what it needed?