The article seems a few months too late. Claude (and others) are already doing this: I've been instructing Claude Code to generate code following certain best practices provided through URLs, or asking it to compare approaches from different URLs. Claude Skills use file "URLs" for progressive disclosure: detailed text is only pulled into the context if needed. This reduces context size and improves cacheability.
Heh, the problem with having a half-drafted post on your machine for a few weeks is that the industry moves fast!
I had the post pretty much done, went on vacation for a week, and Claude Skills came out in the interim.
That being said, Skills are indeed an implementation of the patterns possible with linking, but they are narrower in scope than what would be possible even with MCP Resources if those were properly made available to agents (e.g. dynamic construction of context based on the environment, and/or fetching from remote sources).
The problem with MCP resources is that someone needs to stand up a server. That's enough overhead and infrastructure that it slows down creating these kinds of resources and links. Do any of the popular forge sites offer something like GitHub Pages, but for MCP? I think that would lower the hurdle for standing up such tooling enough to make it much more appealing to actually do.
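To be fair, the server side doesn't have to be much: a static resource server is roughly a JSON-RPC loop over stdio. The method names ("resources/list", "resources/read") come from the MCP spec, but everything else here (the URI, the content, the omitted initialization handshake) is an illustrative sketch, not a compliant implementation:

```python
# Minimal sketch of a static MCP resource server speaking JSON-RPC over stdio.
# Method names are from the MCP spec; the handshake and most required fields
# are omitted, so this shows the shape of the thing, not a working server.
import json
import sys

# A forge could generate this mapping from files in a repo (hypothetical URI).
RESOURCES = {
    "docs://style-guide": "# Style guide\nPrefer small functions.",
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a canned response."""
    method = request.get("method")
    if method == "resources/list":
        result = {"resources": [{"uri": uri, "name": uri} for uri in RESOURCES]}
    elif method == "resources/read":
        uri = request["params"]["uri"]
        result = {"contents": [{"uri": uri, "text": RESOURCES[uri]}]}
    else:
        result = {}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    # One JSON-RPC message per line on stdin, responses on stdout.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

A forge could plausibly generate something like this from static files in a repo, the same way GitHub Pages serves HTML.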
Codex run locally doesn't have access to any "web search" tool, but that doesn't stop it from trying to browse the web via cURL from time to time. Place hyperlinks in the documents in the repository and it'll try to reach them as best it can. It doesn't seem overly eager about doing so, though, and only does it when absolutely needed and it can't find the information elsewhere. This has been my experience with gpt-5-high, at least.
Probably a security feature. If it can access the internet, it can send your private data to the internet. Of course, if you allow it to run arbitrary commands it can do the same.
That article appears to be very confused about how skills work.
It seems to think they are a vendor lock-in play by Anthropic, running as an opaque black box.
To rebut their four complaints:
1. "Migrating away from Anthropic in the future wouldn't just mean swapping an API client; it would mean manually reconstructing your entire Skills system elsewhere." - that's just not true. Any LLM tool that can access a filesystem can use your skills, you just need to tell it to do so! The author advocates for creating your own hierarchy of READMEs, but that's identical to how skills work already.
2. "There's no record of the selection process, making the system inherently un-trustworthy." - you can see exactly when a skill was selected by looking in the tool logs for a Read(path/to/SKILL.md) call.
3. "This documentation is no longer readily accessible to humans or other AI agents outside the Claude API." - it's markdown files on disk! Hard to imagine how it could be more accessible to humans and other AI agents.
4. "You cannot optimize the prompt that selects Skills. You are entirely at the mercy of Anthropic's hidden, proprietary logic." - skills are selected by prompting, driven by the system prompt. Your CLAUDE.md file is injected into that same system prompt. You can influence that as much as you like.
The closing section of that article reveals where the author got confused. They said: "Theoretically, we're losing potentially "free" server-side skill selection that Anthropic provides."
Skills are selected by Claude Code running on the client. They seem to think it's a model feature that's proprietary to Anthropic - it's not, it's just another simple prompting hack.
That's why I like skills! They're a pattern that works with any AI agent already. Anthropic merely gave a name to the exact same pattern that this author calls "Agent-Agnostic Documentation" and advocates for instead.
This isn't an accurate take either. You can load client skills, sure, but the whole selling point from Anthropic is that skills are like memory, managed "transparently" at the API layer, and that those skills are cross-project. Even if you forced Claude to only use skills from the current project for the sake of transparency, it would still be a black box that makes failures harder to debug, and it would still be agent- and vendor-specific rather than human-friendly and vendor-agnostic.
I'm talking about skills as they are used in Claude Code running on my laptop. Are you talking about skills as they are used by the https://claude.ai consumer app?
Skills are just lazy-loaded prompts. The system prompt includes the summary of every skill, and if the LLM decides it wants to use a specific skill, the details of that skill will be added to the system prompt. It's just markdown documents all the way down.