In relation to the client (the AI agent), the MCP server serves resources such as tools; but in relation to your platform, which hosts the API those tools call, it is itself a client.
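For illustration, here's a minimal sketch of that dual role using the MCP TypeScript SDK: the server exposes a tool to the agent, and inside the tool handler it turns around and acts as an ordinary HTTP client toward the platform's API. The tool name, endpoint URL, and token variable are hypothetical.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "bookings", version: "1.0.0" });

// Server role: expose a tool to the AI agent.
server.tool(
  "list_appointments",
  { date: z.string() },
  async ({ date }) => {
    // Client role: the handler calls the platform's (hypothetical) HTTP API.
    const res = await fetch(
      `https://api.example.com/appointments?date=${encodeURIComponent(date)}`,
      { headers: { Authorization: `Bearer ${process.env.PLATFORM_TOKEN}` } }
    );
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Speak MCP to the agent over stdio.
await server.connect(new StdioServerTransport());
```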
I share your frustration with services that won't let you automate them, but to me that's precisely what generative AI will let you do. You don't need an API at the family doctor's to have AI automate it for you. It just rings them up and sorts it out at your command. AI is like obtaining an API to anything.
This is a large-numbers problem that is not yet visible. You can't have everybody walking around talking. There would be too much noise (think crickets, cicadas, toads, etc.).
I'm not even going into cycling or running and talking; that's just not how things work when you need to breathe.
Driving and talking to a phone, only to have it recite back ten minutes of details you could take in at a glance, if glancing weren't dangerous?
If I'm relaxing on a couch, I'm using a device. And please don't come back with "play me a chill song" as a fancy use case.
What I'm trying to say is that voice is not it, and the only other kind of interaction I'm looking forward to seeing evolve is Neuralink-style, in the sense that it needs to be wireless and non-invasive for mass adoption. That's it.