- My own "execute bash command" tool, adding output pagination, forcing the agent to choose a working directory, and working around some Cursor bugs on Windows. This avoids context explosion when a command unexpectedly returns a huge amount of text, and avoids a common agent failure mode where it misunderstands what directory it is currently in.
- SQL command execution. This can be done perfectly fine with "execute bash command", but the agent struggles to correctly encode multi-line SQL queries on the command line. You can force it to write a file first, but then that's two MCP tool calls (write file, execute command), which increases the chances that it goofs something up. Instead, I accept an unencoded, multi-line SQL query directly via the MCP tool and encode it myself. This, again, is simply avoiding a common failure mode in the built-in tools. (Both tools are sketched below.)
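For the curious, the rough shape of these two tools looks something like the sketch below, using the official MCP TypeScript SDK. The tool names, parameters, page size, and sqlite3 client are all illustrative stand-ins, not my actual implementation:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const run = promisify(execFile);
const PAGE_SIZE = 4_000; // characters of output returned per tool call

const server = new McpServer({ name: "bash-and-sql-sketch", version: "0.1.0" });

// Bash tool: the agent must name a working directory, and long output is paged
// instead of being dumped into the context window all at once.
server.tool(
  "run_bash",
  {
    command: z.string().describe("Bash command to execute"),
    cwd: z.string().describe("Working directory (required, no default)"),
    page: z.number().int().min(1).default(1).describe("Page of output to return"),
  },
  async ({ command, cwd, page }) => {
    let output: string;
    try {
      const { stdout, stderr } = await run("bash", ["-c", command], {
        cwd,
        maxBuffer: 64 * 1024 * 1024,
      });
      output = stdout + (stderr ? `\n[stderr]\n${stderr}` : "");
    } catch (err: any) {
      // promisified execFile rejects on a nonzero exit code; keep whatever output we got
      output = `exit code ${err.code}\n${err.stdout ?? ""}${err.stderr ?? ""}`;
    }
    const pages = Math.max(1, Math.ceil(output.length / PAGE_SIZE));
    const slice = output.slice((page - 1) * PAGE_SIZE, page * PAGE_SIZE);
    return { content: [{ type: "text", text: `page ${page} of ${pages}\n\n${slice}` }] };
  }
);

// SQL tool: the query arrives as a plain string in the MCP call, so the agent
// never has to shell-escape it. sqlite3 is purely for illustration; my real
// project goes through a separate .NET component for the database side.
server.tool(
  "run_sql",
  { query: z.string().describe("Multi-line SQL, passed verbatim") },
  async ({ query }) => {
    // execFile passes argv directly (no shell), so newlines and quotes are safe
    const { stdout } = await run("sqlite3", ["app.db", query]);
    return { content: [{ type: "text", text: stdout }] };
  }
);

await server.connect(new StdioServerTransport());
```

The specifics don't matter; the point is that the tool signatures force a working directory, page long output, and take SQL as data rather than as something the agent has to quote correctly.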
I haven't needed a third tool, and if the built-in tools were better I wouldn't have needed these two, either. Everything else I've ever needed has been a bash script that both the agent and I can run, explained in the agent's global rules. It's really unclear to me what other use case I might encounter that would be better as MCP.
In theory I can see that an MCP server only launches once and is persistent across many requests, whereas bash scripts are one-and-done. Perhaps some use case requires a lot of up-front loading that would need to be redone for every tool call if it were a bash script. Or perhaps there are complex interactions across multiple tool calls where state must be kept in memory and writing to disk is not an option. But I have not yet encountered anything like this.
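If that ever does come up, I imagine it would look something like module-level state that survives across tool calls because the server process stays up. A toy sketch (entirely hypothetical, not something I actually run):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Lives as long as the server process, i.e. across many tool calls.
const cache = new Map<string, string>();

// Stand-in for slow up-front work: loading a model, building an index, etc.
async function expensiveLoad(key: string): Promise<string> {
  return `loaded:${key}`;
}

const server = new McpServer({ name: "stateful-sketch", version: "0.1.0" });

server.tool("lookup", { key: z.string() }, async ({ key }) => {
  if (!cache.has(key)) {
    cache.set(key, await expensiveLoad(key)); // paid once per server lifetime
  }
  return { content: [{ type: "text", text: cache.get(key)! }] };
});

await server.connect(new StdioServerTransport());
```

A bash script would pay the expensiveLoad cost on every single invocation; the persistent server pays it once.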
Context7, I assume? I wanted to like Context7 but I constantly need documentation that is either private or not in a format that Context7 supports. Instead, I scrape the docs to Markdown, stick them into a "context" folder[1], and use Cursor's vector codebase indexing. This allows the agent to literally ask questions like "how do I do ABC with library XYZ?" and the vector database delivers a chunked answer from all available documentation. This is, IMO, much better than how Context7 works: Context7 just returns whole pages of documentation, polluting the context window with info that isn't relevant to what the agent wanted to know.
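The scraping step itself is nothing fancy. Roughly this, with the URLs and the turndown library as stand-ins for whatever you actually use:

```typescript
import { mkdir, writeFile } from "node:fs/promises";
import TurndownService from "turndown"; // HTML -> Markdown converter

const turndown = new TurndownService();

// Hypothetical doc pages; in practice this list comes from a crawl or a sitemap.
const pages = [
  "https://example.com/docs/getting-started",
  "https://example.com/docs/api",
];

await mkdir("context", { recursive: true });
for (const url of pages) {
  const html = await (await fetch(url)).text();
  const name = url.split("/").pop() + ".md";
  // Cursor's codebase indexing chunks and embeds these Markdown files like any other source file.
  await writeFile(`context/${name}`, turndown.turndown(html), "utf8");
}
```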
I have done the same thing with entire textbooks. Find a PDF and get GPT-5 to transcribe it page by page to Markdown. Costs a couple bucks and turns the agent into a wizard on that subject.
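The pipeline is simple: render each page to an image, have the model transcribe it, concatenate the results. A sketch of what that can look like (prompt, resolution, and paths are illustrative; pdftoppm comes from poppler):

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";
import { mkdtemp, readdir, readFile, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";
import OpenAI from "openai";

const run = promisify(execFile);
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function transcribePdf(pdfPath: string, outPath: string): Promise<void> {
  // Render pages to PNG. pdftoppm zero-pads page numbers, so a plain sort keeps them ordered.
  const dir = await mkdtemp(join(tmpdir(), "pages-"));
  await run("pdftoppm", ["-png", "-r", "150", pdfPath, join(dir, "page")]);
  const pages = (await readdir(dir)).filter((f) => f.endsWith(".png")).sort();

  const chunks: string[] = [];
  for (const page of pages) {
    const b64 = (await readFile(join(dir, page))).toString("base64");
    const res = await client.chat.completions.create({
      model: "gpt-5", // any vision-capable model works here
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "Transcribe this textbook page to clean Markdown. Preserve headings, lists, tables, and math.",
            },
            { type: "image_url", image_url: { url: `data:image/png;base64,${b64}` } },
          ],
        },
      ],
    });
    chunks.push(res.choices[0].message.content ?? "");
  }

  await writeFile(outPath, chunks.join("\n\n"), "utf8");
}

await transcribePdf("textbook.pdf", "context/textbook.md");
```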
Context7, too, could easily have been a command line tool that both you and the agent can use. Even now, I don't see what MCP--specifically--brings to the table.
[1] One trick for Cursor users: put "/context/" in .gitignore and "!/context/" in .cursorignore. This will keep it out of git but still index it.
It's at https://github.com/brianluft/arcadia and the actual MCP server is at https://github.com/brianluft/arcadia/tree/main/server/src. If not suitable as-is, you can probably get Claude to repackage or tweak the code for your needs. The project has a .NET component for the SQL tool that isn't used at all for bash execution; only the Node.js server is needed for the bash tool.
- My own "execute bash command" tool, adding output pagination, forcing the agent to choose a working directory, and working around some Cursor bugs on Windows. This avoids context explosion when a command unexpectedly returns a huge amount of text, and avoids a common agent failure mode where it misunderstands what directory it is currently in.
- SQL command execution. This can be done perfectly fine with "execute bash command" but the agent struggles to correctly encode multi-line SQL queries on the command line. You can force it to write a file, but then that's two MCP tool calls (write file, execute command) which increases the chances that it goofs it up. I simply accept an unencoded, multi-line SQL query directly via the MCP tool and encode it myself. This, again, is simply avoiding a common failure mode in the built-in tools.
I haven't needed a third tool, and if the built-in tools were better I wouldn't have needed these two, either. Everything else I've ever needed has been a bash script that both the agent and I can run, explained in the agent's global rules. It's really unclear to me what other use case I might encounter that would be better as MCP.
In theory I can see that an MCP server only launches once and is persistent across many requests, whereas bash scripts are one-and-done. Perhaps some use case requires a lot of up-front loading that would need to be redone for every tool call if it were a bash script. Or perhaps there are complex interactions across multiple tool calls where state must be kept in memory and writing to disk is not an option. But I have not yet encountered anything like this.