The Future of MCP: How Agents Will Connect to Everything
AI agents are moving from flashy demos to real production work. To get there, they need one thing more than anything else: reliable connectivity to the tools, apps, and data people actually use. That’s where MCP (Model Context Protocol) comes in.
Today, MCP is evolving from a simple way to expose tools to something much bigger: a standard way for agents to ship full applications, connect to SaaS, handle enterprise auth, and coordinate with other agents. Here’s what that future looks like—and how to build for it.
From Simple Tools to Full MCP Applications
MCP started as a way for models to call tools in a structured, standard way. But it’s quickly turning into something more powerful: a way to ship entire applications over a protocol.
An MCP server can now:
• Expose tools for the model to call
• Ship a full UI as an "MCP application" for humans to use
• Run in many environments—cloud, desktop, IDEs like VS Code or Cursor—without being hardcoded into a specific product
Because the client and server share a protocol and semantics, both sides understand what’s being sent: tools, UI elements, tasks, resources, and more. That means the same MCP server can power:
• A human-facing interface
• An AI agent calling tools behind the scenes
• Different clients across different platforms, all speaking the same language
This dual nature—human UI plus machine tools—is one of MCP's most distinctive traits and a big part of its future.
MCP’s Rapid Growth and Why It Matters
In just over a year, MCP has gone from a small spec and a few local SDKs to a widely adopted standard. It now sees on the order of 110 million monthly downloads across the ecosystem.
Those downloads don’t just come from one company’s products. MCP is being pulled in by:
• OpenAI’s agent SDK
• Google’s SDKs
• Frameworks like LangChain
• Thousands of smaller tools and libraries that depend on a common protocol
The result: a shared standard that lets agents and tools talk to each other, regardless of which platform or vendor you’re using. If you’re building agents today, MCP is quickly becoming part of the default connectivity stack—especially if you care about interoperability.
For a deeper dive into how MCP works in practice with coding workflows, check out this in‑depth guide to Claude Code, MCP, skills, and sub‑agents.
Why 2026 Will Be About Connectivity, Not Just Coding Agents
Recent years have been dominated by coding agents—LLMs that write and run code, usually on a local machine. That’s a great fit for agents because:
• It’s local and sandboxed
• Results are verifiable (you can compile or run tests)
• A developer is usually in the loop to fix issues
• The UI is straightforward
But most knowledge workers aren’t trying to compile code all day. They need agents that can:
• Pull data from multiple SaaS apps
• Work with shared drives and documents
• Respect enterprise permissions and policies
• Orchestrate workflows across tools like Slack, Notion, Linear, CRMs, and more
For these use cases, connectivity is the main challenge—not raw reasoning. And there’s no single connectivity solution that fits everything. Instead, you’ll likely combine:
• Skills – domain knowledge and usage patterns packaged in a simple format
• CLIs or computer use – great for local, sandboxed coding agents and for tools the model already knows well from its pretraining data (like Git)
• MCP – ideal when you need rich semantics, UIs, long-running tasks, resources, platform independence, governance, and enterprise features
The best agents in 2026 won’t choose one of these—they’ll use all of them, depending on the job.
How to Build Better Agent Clients: Progressive Discovery & Code Mode
Most of the current pain with agents isn’t the protocol—it’s how clients use it. Two key patterns can dramatically improve performance and usability: progressive discovery and programmatic tool calling.
Progressive Discovery: Don’t Load Every Tool Upfront
A common anti-pattern is to dump every available tool into the model’s context window and hope for the best. This quickly explodes token usage and doesn’t scale as your toolset grows.
Instead, use progressive discovery:
• Start with a lightweight "tool search" or "tool loader" tool
• Let the model decide when it needs a tool
• When it does, fetch only the relevant tools on demand
• Then expose just those tools for the current task
This pattern is already supported in some APIs (like Anthropic’s) but can also be implemented manually in any agent harness. The result is a huge reduction in context usage and a more focused toolset per task.
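As a concrete illustration of the pattern, here is a minimal sketch in plain Python. Nothing here is a real MCP SDK API—the registry, tool names, and scoring are all invented—but it shows the shape: the model only ever sees one lightweight search tool, and full tool specs are fetched on demand.

```python
# Illustrative sketch of progressive discovery. The model's context starts
# with just "search_tools"; matching tools are loaded only when needed.

TOOL_REGISTRY = {
    "slack_post_message": {"description": "Post a message to a Slack channel"},
    "notion_create_page": {"description": "Create a page in a Notion database"},
    "linear_create_issue": {"description": "Create an issue in Linear"},
    "crm_lookup_contact": {"description": "Look up a contact in the CRM"},
}

def search_tools(query: str, limit: int = 2) -> list[dict]:
    """Meta-tool exposed to the model: rank registered tools against a
    free-text query and return only the top matches."""
    query_words = set(query.lower().split())
    scored = []
    for name, spec in TOOL_REGISTRY.items():
        haystack = f"{name} {spec['description']}".lower().replace("_", " ")
        score = len(query_words & set(haystack.split()))
        if score:
            scored.append((score, name, spec["description"]))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [{"name": n, "description": d} for _, n, d in scored[:limit]]

# Only the matched tools get added to the model's context for this task:
hits = search_tools("create an issue in linear")
print([h["name"] for h in hits])
```

A real implementation would likely use embeddings or the provider's built-in tool-search feature instead of keyword overlap, but the context-saving structure is the same.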
Programmatic Tool Calling (Code Mode)
Another common issue is letting the model orchestrate tools step by step:
1) Call tool A
2) Read result
3) Call tool B
4) Repeat
Each step is another inference call, which adds latency and cost and makes orchestration brittle.
A better approach is programmatic tool calling, often called "code mode":
• Give the model access to a small execution environment (e.g., a JS engine, Python, Lua, or similar)
• Let it write a short script that calls multiple tools, processes results, and returns a final output
• Execute that script in one go, instead of many separate tool calls
MCP helps here with structured output. Tools can declare the shape of their return values, so the model knows the types and can compose them reliably in code. If a tool doesn’t provide structured output, you can still ask a cheaper model to normalize responses into a known schema before composing them.
The end result: fewer round trips, lower latency, and more powerful compositions across MCP tools, CLIs, and APIs.
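To make the round-trip savings concrete, here is a toy sketch of code mode. The two tool functions are stand-ins for real MCP tools with structured output, and the sandboxing is deliberately naive (a production harness would use a real isolated runtime); the point is that the model emits one script instead of driving each call through a separate inference step.

```python
# Illustrative "code mode" sketch: the model writes a short script that
# composes several tools; the harness executes it once.

def get_open_issues(project: str) -> list[dict]:
    # Stand-in for an MCP tool whose declared output schema the model knows.
    return [{"id": 1, "title": "Login bug", "assignee": None},
            {"id": 2, "title": "Slow search", "assignee": "dana"}]

def post_message(channel: str, text: str) -> dict:
    # Stand-in for a second MCP tool.
    return {"ok": True, "channel": channel, "text": text}

# The kind of script a model might emit: multiple tool calls plus local
# logic, all resolved in a single execution with zero extra round trips.
model_script = """
unassigned = [i for i in get_open_issues("web") if i["assignee"] is None]
result = post_message("#triage", f"{len(unassigned)} unassigned issue(s)")
"""

scope = {"get_open_issues": get_open_issues, "post_message": post_message}
exec(model_script, scope)  # one execution instead of N inference round trips
print(scope["result"]["text"])  # -> 1 unassigned issue(s)
```

Because both tools return known shapes, the generated script can filter and format their results directly, which is exactly where structured output earns its keep.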
Designing MCP Servers for Agents (Not Just Wrapping REST)
On the server side, many teams make another mistake: taking an existing REST API and auto-converting it into an MCP server one-to-one. That usually produces a clumsy, agent-unfriendly interface.
Instead, design your MCP server with agents and humans in mind:
• Think about the tasks, not the endpoints—what workflows should an agent or user actually perform?
• Group operations into higher-level tools that match how a human would naturally interact with the system
• Use programmatic tool calling on the server too, not just the client—some MCP servers (like Cloudflare’s) expose execution environments instead of just raw tools
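The "tasks, not endpoints" point can be shown in a few lines. Every function and endpoint name below is made up for illustration: rather than exposing three 1:1 REST wrappers, the server exposes one tool that matches the task a person would actually describe.

```python
# Illustrative sketch: one agent-facing tool = one human-level task,
# composed from several low-level API calls the agent never sees.

def _api_create_record(data: dict) -> dict:
    # Stand-in for POST /records
    return {"id": 42, **data}

def _api_send_notification(user: str, text: str) -> dict:
    # Stand-in for POST /notifications
    return {"delivered": True, "user": user, "text": text}

def file_customer_complaint(customer: str, summary: str) -> dict:
    """Task-level MCP tool: 'file a complaint', not 'create a record
    then separately send a notification'."""
    record = _api_create_record({"type": "complaint",
                                 "customer": customer,
                                 "summary": summary})
    note = _api_send_notification(
        "support-lead", f"New complaint #{record['id']} from {customer}")
    return {"ticket_id": record["id"], "notified": note["delivered"]}

print(file_customer_complaint("Acme", "Dashboard is down"))
```

The agent gets one well-named tool with an obvious purpose, instead of having to rediscover the endpoint choreography on every run.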
Most importantly, take advantage of MCP’s richer semantics:
• MCP applications – ship UI alongside tools so humans can interact directly
• Tasks and long-running operations – handle jobs that take time and need status updates
• Resources – expose files, documents, and other data in a structured way
• Elicitations – prompt users for missing information in a structured, protocol-aware way
When you design for agents first, you get cleaner tools, better compositions, and a much easier time moving from prototype to production. If you’re experimenting with local models plus MCP, you may also find this guide to turning local LLMs into MCP-powered agents helpful.
What’s Coming Next for MCP
MCP is already widely used, but the spec and SDKs are still evolving quickly. Several important upgrades are on the way.
1. A More Scalable Core Transport
The current streamable HTTP transport can be hard to scale at hyperscaler levels. A new stateless transport protocol, contributed by Google, is in the works. It will let MCP servers behave more like stateless REST services that are easy to run on platforms like Cloud Run or Kubernetes.
This change is expected to land in the spec around June, with SDK support following.
2. Better Asynchronous Tasks and Agent-to-Agent Communication
MCP already has an experimental async task primitive, but support is limited. The goal is to mature this into robust agent-to-agent communication, so agents can:
• Kick off long-running tasks
• Poll or subscribe to updates
• Coordinate with other agents via MCP, not just tools
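Since the async-task semantics are still experimental, the following is only a generic kick-off-and-poll pattern with invented names, not the spec's final shape. It shows the two halves a long-running MCP task needs: one tool that starts a job and returns a handle, and one that reports status until a result is ready.

```python
# Illustrative sketch of a long-running task: start, then poll for status.

import itertools

_task_counter = itertools.count(1)
_tasks: dict[int, dict] = {}

def start_export(dataset: str) -> dict:
    """Kick off a long-running job; return a task handle immediately."""
    task_id = next(_task_counter)
    # A real server would run the job in the background; here we just
    # pre-compute the phases the poller will observe.
    _tasks[task_id] = {"phases": ["running", "running", "done"],
                      "result": f"s3://exports/{dataset}.csv"}
    return {"task_id": task_id}

def get_task_status(task_id: int) -> dict:
    """Poll a task; include the result once the job reaches 'done'."""
    task = _tasks[task_id]
    status = task["phases"].pop(0) if task["phases"] else "done"
    update = {"status": status}
    if status == "done":
        update["result"] = task["result"]
    return update

task = start_export("q3-metrics")
while True:
    update = get_task_status(task["task_id"])
    if update["status"] == "done":
        break
print(update["result"])  # -> s3://exports/q3-metrics.csv
```

The matured primitive should also support subscriptions instead of busy polling, which is what makes it suitable for agent-to-agent coordination rather than just slow tools.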
3. New SDK Versions and Better Developer Experience
Based on a year of real-world usage, new versions of the official SDKs are coming:
• TypeScript SDK v2
• Python SDK v2 (inspired by community projects like FastMCP, which have improved on the original design)
The aim is to make it easier and faster to build high-quality MCP servers and clients without fighting low-level details.
4. Enterprise-Ready Integration: Cross-App Access & Discovery
For enterprises, two upcoming features are especially important:
• Cross-app access – integrate with identity providers like Google or Okta so that once a user logs in with their company identity, they can access multiple MCP servers without re-authenticating each time.
• Server discovery – define how MCP servers can be discovered via well-known URLs. Agents, browsers, or crawlers can visit a domain and automatically detect if there’s an MCP server available, not just a website to scrape.
Both of these are expected to land in the next spec iteration and will make MCP much smoother to use at scale.
5. Extensions: MCP Apps and Skills Over MCP
MCP also has an extension mechanism so different clients can support different capabilities:
• Web-based clients can support MCP applications that render HTML UIs
• CLI clients may skip UI-heavy extensions but still use tools and tasks
One especially exciting extension is skills over MCP. The idea is simple: if your MCP server exposes many tools, you should also be able to ship the domain knowledge and usage patterns alongside them.
That means:
• Servers can publish skills that explain how to use their tools effectively
• Skills can be updated continuously without relying on plugin registries
• Agents can load skills dynamically, just like tools
Even before the semantics are finalized, you can approximate this today by exposing a "load skills" tool that returns usage guidance for your tools.
The Road Ahead: Fully Connected Agents
MCP is in a strong position: widely adopted, rapidly improving, and increasingly central to how agents connect to the world. Over the next couple of years, we’ll likely see:
• Agents that combine computer use, CLIs, MCP, and skills seamlessly
• Enterprise-ready setups with single sign-on, discovery, and governance
• Rich MCP applications that feel like native apps but are powered entirely by a protocol
• More agent-to-agent coordination over MCP, not just agent-to-tool
If you’re building agents today, the key mindset shift is this: don’t treat MCP as just another way to wrap an API. Treat it as the connective tissue that lets your agents talk to tools, apps, data, other agents, and humans—using the right method for each job.
Design for agents, use progressive discovery and code mode, and lean into MCP’s richer semantics. That’s how we get from demos to durable, production-grade AI agents that can actually work across the real systems people use every day.