The main alternative for a plug-and-play (just configure a single URL) non-MCP REST API would be to use OpenAPI definitions and ingest them accordingly.
However, as someone who has tried to use OpenAPI for that in the past (both via OpenAI's "Custom GPTs" and by auto-converting OpenAPI specifications to a list of tools), in my experience almost every existing OpenAPI spec out there is insufficient as a basis for tool calling in one way or another:
- Largely insufficient documentation on the endpoints themselves
- REST is too open to interpretation, and without operationIds (which almost nobody in the wild defines), there is usually context missing on what "action" is being triggered by POST/PUT/DELETE endpoints (e.g. many APIs do a delete of a resource via a POST or PUT, and some APIs use DELETE to archive resources)
- baseUrls are often wrong/broken and assumed to be replaced by the API client
- underdocumented AuthZ/AuthN mechanisms (usually only present in the general description comment on the API, and missing on the individual endpoints)
In practice you often have to remedy that by patching the officially distributed OpenAPI specs to make them a good enough basis for tool calling, which makes the whole thing not very plug-and-play.
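To illustrate the kind of patching I mean, here's a rough sketch (the file name, endpoint, and fixes are all made up) of cleaning up a vendor spec and then deriving tool definitions from it:

```python
import json

# Hypothetical example: patch a vendor-provided OpenAPI spec so it can serve
# as a basis for tool calling. File name, endpoint, and fixes are invented.
with open("vendor-spec.json") as f:
    spec = json.load(f)

# 1. Fix the base URL the vendor left as a placeholder.
spec["servers"] = [{"url": "https://api.example.com/v2"}]

# 2. Add the operationId and description the spec is missing; the model
#    needs them to understand which "action" this POST actually triggers.
op = spec["paths"]["/projects/{id}"]["post"]
op["operationId"] = "archive_project"
op["description"] = "Archive (not delete) the project with the given id."

# 3. Derive one tool definition per operation.
tools = []
for path, methods in spec["paths"].items():
    for method, operation in methods.items():
        tools.append({
            "name": operation.get("operationId", f"{method}_{path}"),
            "description": operation.get("description", ""),
            # Real code would also merge path/query parameters and the
            # requestBody into a single JSON schema here.
            "input_schema": operation.get("requestBody", {}),
        })

print(json.dumps(tools, indent=2))
```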
I think the biggest upside that MCP brings (all "content"/"functionality" being equal) is that using it instead of plain REST acts as a badge that says "we had AI usage in mind when building this".
On top of that, MCP also standardizes mechanisms like elicitation, which with traditional REST APIs are completely up to the client to implement.
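For those unfamiliar: elicitation lets the server ask the user for additional input mid-request. As a rough sketch, based on my reading of the spec (treat the exact field names as approximate), the wire message is just a JSON-RPC request from server to client:

```python
# Rough sketch of an MCP elicitation request (server -> client), based on my
# reading of the spec; field names are approximate.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "elicitation/create",
    "params": {
        "message": "Which project should the report cover?",
        "requestedSchema": {
            "type": "object",
            "properties": {"project_name": {"type": "string"}},
            "required": ["project_name"],
        },
    },
}
# With a plain REST API there is no equivalent: every client has to invent
# its own convention for "the server needs more information before it can act".
```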
I can’t help but notice that so many of the things mentioned are not at all due to flaws in the protocol, but to developers specifying protocol endpoints incorrectly. We’re one step away from developers exposing everything as a tool call, which would put us in the same situation with MCP that we’re in with OpenAPI. You can get that badge with a literal badge; for a protocol, I’d hope for something at least novel over HATEOAS.
Agreed. Without standards, we wouldn’t have the rich web-based ecosystem we have now.
As an example, anyone who’s coded email templates will tell you: it’s hard. While the major browsers adopted the W3C specs, email clients (i.e. email renderers) never adopted them, or such a W3C email HTML spec never existed in the first place. So something that renders correctly in Gmail looks broken in Yahoo Mail, in Safari on iOS, etc.
Standards are very important, especially extensible ones where proposals are adopted when they make sense - this means companies can still innovate but users get the benefit of everything just working.
But browsers/the web ecosystem are still a bad example, as we had decades of browsers supporting their own particular features/extensions. This has only converged somewhat because pretty much everything now uses Chrome underneath (bar Safari and Firefox).
But even so...if I write an extension while using Firefox, why can't I install that extension in Chrome? And vice-versa? Even bookmarks are stored in slightly different formats.
It is a massive pain to align technology like this, but the benefits are huge. Like boxing developers in with a good library (to stop them from doing arbitrary custom per-project BS), I think all software needs to be boxed into standards with provisions for extension/innovation, rather than this pick-and-choose BS because muh lock-in.
You have to write code for an MCP server, and code to consume it too. It's just that the LLM vendors decided to build the consuming side in, which people question, since they could just as well have done the same for OpenAPI, gRPC, and whatnot instead of a completely new thing.
The analogy that was used a lot is that it's essentially USB-C for connecting your data to LLMs. You don't need to fight 4,532,529 standards - there is one (yes, I am familiar with the XKCD comic). As long as your client is MCP-compatible, it can work with _any_ MCP server.
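For a sense of scale on the server side, here's roughly what a minimal server looks like with the official Python SDK's FastMCP helper (a sketch; the exact API may have shifted since I last looked):

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper;
# treat the details as illustrative rather than authoritative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Speaks MCP over stdio; any MCP-compatible client (Claude Desktop, an
    # IDE, etc.) can discover and call the `add` tool without custom glue.
    mcp.run()
```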
The whole USB-C comparison they make is eyeroll-inducing, imo. All they needed to state was that it's a specification for function calling.
My gripe is that they had the opportunity to spec out tool use in models and they did not. The client->LLM implementation is up to the implementor, and many models differ, using different tags like <|python_call|> etc.
Clearly they need to try explaining it in easier terms. The number of people who cannot or will not understand the protocol is mind-boggling.
I'm with you on the need for an Agent -> LLM industry-standard spec. The APIs are all over the place and it's frustrating. If there were a spec for that, agent development would become focused purely on the business logic, and the LLM and the Tools/Resources would just be standardized components you plug together like Lego. I've basically done that for our internal agent development: I have a Universal LLM API that everything uses. It's helped a lot.
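Roughly, the idea looks like this (a hypothetical sketch, not our actual code; all names are made up): one small interface the agent code targets, with one adapter per provider behind it.

```python
# Hypothetical sketch of a "universal LLM API": agent code targets one small
# interface, and provider differences live in adapters behind it.
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class ChatResult:
    text: str
    tool_calls: list[dict] = field(default_factory=list)  # normalized {"name": ..., "arguments": ...}

class LLMClient(Protocol):
    def chat(self, messages: list[dict], tools: list[dict]) -> ChatResult: ...

class OpenAIClient:
    def chat(self, messages: list[dict], tools: list[dict]) -> ChatResult:
        # Translate to/from the OpenAI wire format here.
        raise NotImplementedError

class AnthropicClient:
    def chat(self, messages: list[dict], tools: list[dict]) -> ChatResult:
        # Translate to/from the Anthropic wire format here.
        raise NotImplementedError

def run_agent(llm: LLMClient, tools: list[dict]) -> str:
    # Business logic only ever sees the normalized interface.
    result = llm.chat([{"role": "user", "content": "Summarize open tickets"}], tools)
    return result.text
```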
The comparison to USB C is valid, given the variety of unmarked support from cable to cable and port to port.
It has the physical plug, but what can it actually do?
It would be nice to see a standard aiming for better UX than USB-C. (Imho they should have used colored micro dots on the device and cable connectors to physically declare capabilities.)
Certainly valid in that, just like various USB-C cables supporting slightly different data rates or power capacities, MCP doesn't deal with my aforementioned issue of the glue between the MCP client and the model you've chosen; that exercise is still left up to us.
My gripe with USB-C isn't really with its nature, but with the UX and modality of capability discovery.
If I am looking at a device/cable, with my eyes, in the physical world, and ask the question "What does this support?", there's no way to tell.
I have to consult documentation and specifications, which may not exist anymore.
So in the case of standards like MCP, I think it's important to come up with answers to discovery questions, lest we all just accept that nothing can be done and that the clusterfuck 10+ years from now was inevitable.
A good analogy might be imagining how the web would have evolved if we'd had TCP but no HTTP.
100% agree but with private enterprise this is a problem that can never be solved; everyone wants their lock-in and slice of the cake.
I would say for all the technology we have in 2025, this has certainly been one of the core issues for decades & decades. Nothing talks to each other properly, nothing works with another thing properly. Immense effort has to be expended for each thing to talk to or work with the other thing.
I got a MacBook Air for light dev as a personal laptop. It can't access an Android phone's filesystem when the phone is plugged in. Windows can do it. I know Apple's corporate reasoning, but it's just an example of purposeful incompatibility.
As you say, all these companies use standards like TCP/HTTP/Wifi/Bluetooth/USB/etc and they would be nowhere without them - but literally every chance they get they try to shaft us on it. Perhaps AI will assist in the future - tell it you want x to work with y and the model will hack on it until the fucking thing works.