On February 15, CNBC reported that Peter Steinberger, the creator of OpenClaw — the open-source AI agent framework that had recently surged in popularity — was joining OpenAI. Sam Altman announced it personally. The hire was strategic in a way that transcends a single person's talents. Steinberger built the agent framework that most developers were using to connect AI models to real-world tools. The protocol those connections run on is called MCP. And MCP is the most important piece of AI infrastructure that almost nobody outside the developer community has heard of.
The Model Context Protocol started as a technical specification released by Anthropic in late 2024. It defined a standard way for AI models to communicate with external tools — databases, APIs, file systems, web services. Before MCP, every combination of AI model and external tool required custom integration code. If you wanted Claude to query your Salesforce data and GPT to read your PostgreSQL database, you wrote two completely different integrations. Nothing was portable. Nothing was reusable.
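To make "a standard way to communicate" concrete: MCP is built on JSON-RPC 2.0, and a client discovers a server's tools and invokes them by name through methods like `tools/list` and `tools/call`. Here is a minimal sketch of that exchange; the method names follow the published spec, while the `query_database` tool and its arguments are hypothetical.

```python
import json

# MCP runs over JSON-RPC 2.0. A client first discovers tools, then calls one.
# The "query_database" tool and its arguments are invented for illustration.

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",   # ask the server which tools it exposes
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",   # invoke a tool by name, with structured arguments
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Any MCP-speaking model can emit these messages against any MCP server;
# the integration no longer depends on which model sits on the other end.
print(json.dumps(call_request, indent=2))
```

Because the envelope is the same for every model and every tool, the model-specific glue code disappears; only the tool names and argument schemas vary.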
MCP changed that by providing a universal protocol layer. A tool that speaks MCP works with any model that speaks MCP. The integration is written once and used everywhere: connecting M models to N tools takes M + N implementations instead of M × N bespoke integrations. This week, at the India AI Impact Summit, NIST announced it would facilitate the development of interoperable and secure standards for agentic AI — and the initiative builds explicitly on MCP as its foundation. When a U.S. government standards body adopts your protocol as baseline infrastructure, the adoption question is settled.
Standards are boring until you realize the entire digital economy runs on them
The reason MCP matters is the same reason USB, HTTP, TCP/IP, and SQL matter. Standards create interoperability. Interoperability creates markets. Markets create ecosystems. Ecosystems create lock-in that persists for decades after the original standard is no longer technically optimal.
Consider USB. When it was proposed in 1994, every peripheral device had its own proprietary connector. Printers used parallel ports. Modems used serial ports. Mice and keyboards used PS/2. The proliferation of connectors imposed a cost on every hardware manufacturer, every software developer, and every consumer who had to figure out which cable went where. USB didn't win because it was technically superior to every alternative. It won because it was good enough and universal. The universality created a network effect that made every subsequent connector compete not against USB's technical specs but against USB's installed base.
MCP is playing the same game in AI. Before MCP, the tool integration landscape looked like the pre-USB peripheral market: fragmented, proprietary, and expensive to navigate. Every AI company had its own approach to connecting models with external systems. Function calling protocols varied between providers. Tool schemas were incompatible. An integration built for one model couldn't be reused with another.
MCP collapsed that complexity into a single protocol. The analogy is precise: MCP is USB for AI. The protocol is simple enough to implement quickly, flexible enough to handle diverse use cases, and open enough that no single company's competitive interests are threatened by adoption.
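"Simple enough to implement quickly" is easy to verify: the core of a server is a dispatcher over a handful of JSON-RPC methods. Below is a toy, in-memory sketch under stated assumptions — one hypothetical `get_time` tool, two methods, no transport. Real servers use an official SDK plus a stdio or HTTP transport and handle capability negotiation; this shows only the shape of the protocol.

```python
import json
from datetime import datetime, timezone

# Toy MCP-style server: one hypothetical tool, two JSON-RPC methods.
# Real servers add transports, initialization handshakes, and error handling.

TOOLS = [{
    "name": "get_time",
    "description": "Return the current UTC time as an ISO 8601 string.",
    "inputSchema": {"type": "object", "properties": {}},  # takes no arguments
}]

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a response payload."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call" and request["params"]["name"] == "get_time":
        now = datetime.now(timezone.utc).isoformat()
        # Tool results come back as a list of typed content blocks.
        result = {"content": [{"type": "text", "text": now}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

response = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(json.dumps(response["result"]))
```

A dispatcher this small is the whole barrier to entry, which is a large part of why the ecosystem grew as fast as it did.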
Anthropic's Linux Foundation play was the decisive strategic move
If Anthropic had kept MCP proprietary, it would have died. OpenAI would never adopt a standard owned by a direct competitor. Google would build its own alternative. The industry would fragment into competing protocol ecosystems, and the integration tax would persist.
Instead, Anthropic donated MCP to the Linux Foundation — the same organization that hosts Linux, Kubernetes, Node.js, and dozens of other foundational open-source projects. This was strategically brilliant for reasons that compound over time.
First, it neutralized the competitive objection. A standard governed by the Linux Foundation is neutral ground. OpenAI, Google, Microsoft, and every other player can adopt it without conceding strategic advantage to Anthropic. The governance structure ensures that no single company can modify the protocol to favor its own products.
Second, it accelerated ecosystem growth. Under Linux Foundation governance, developers can contribute extensions, implementations, and improvements without asking Anthropic's permission. The community around MCP grew faster than any single company could have driven it because the contributors had ownership in the standard's evolution.
Third — and this is the move that matters most — it created switching costs that benefit the entire ecosystem equally. Once thousands of tool integrations exist in MCP format, switching to a different protocol means rewriting every integration. The cost is prohibitive not just for Anthropic's customers, but for everyone. The lock-in is symmetric, and symmetric lock-in is the only kind competitors will accept.
This is the same playbook that created the modern technology industry. IBM made the PC architecture open. Google made Android open. Facebook made React open. In each case, the company that created the standard profited from the ecosystem it enabled rather than from the standard itself. Anthropic may not directly monetize MCP. But by establishing the protocol layer for every AI agent, they've ensured that the agentic AI ecosystem is built on infrastructure they understand more deeply than anyone else.
The adoption list reads like a roster of the entire AI industry
OpenAI uses MCP. OpenClaw — the framework popular enough to get its creator hired by OpenAI — uses MCP as its primary tool integration layer. GitHub Copilot's agent features communicate through MCP. Alibaba designed Qwen 3.5 with explicit MCP compatibility, which CNBC noted as a signal that China's AI labs see agentic capabilities as the next competitive frontier. Marc Einstein of Counterpoint Research told CNBC that Chinese AI companies are preparing for the possibility that AI agents could "upend traditional Internet business models."
NIST's announcement at the India AI Impact Summit formalized what the market had already decided. The AI Agent Standards Initiative will facilitate interoperable and secure standards for agentic AI. The initiative's foundation is MCP. When the U.S. Commerce Department's standards body builds on your protocol, adoption becomes a compliance consideration rather than a technical choice.
The speed of this consensus is unusual. Most protocol standards take years of committee work, competing proposals, and political negotiation before reaching adoption. MCP went from a late-2024 Anthropic release to de facto industry standard in about a year. The speed reflects both the quality of the specification and the intensity of demand. Developers needed a universal tool integration layer. MCP was available, free, and good enough. Nobody had time to wait for a better alternative to emerge through committee.
The counterargument: protocol standards can be disrupted, and MCP has limitations
The strongest objection to MCP's permanence is historical. Protocol standards that seemed unshakeable have been displaced before. SOAP and XML-RPC were the dominant API standards in the early 2000s. REST displaced them within a decade. RSS was the standard for web syndication until social media feeds rendered it niche. Standards persist until the paradigm they serve is itself disrupted.
MCP also has genuine technical limitations. The current specification is optimized for request-response patterns between a model and a tool. More complex agent architectures — multi-agent systems, agent-to-agent communication, long-running autonomous workflows — may require protocol extensions that don't yet exist. If those extensions prove incompatible with MCP's core design assumptions, a successor protocol could emerge.
The practical limitation is simpler. MCP standardizes the connection between models and tools. It does not standardize what happens inside the model or inside the tool. Two agents using MCP can communicate with the same database, but they may interpret the results differently, take conflicting actions, or fail in incompatible ways. Protocol interoperability is necessary for the agentic AI ecosystem. It is not sufficient.
The rebuttal sits in the installed base, not the specification
Technical objections to MCP are valid at the specification level and irrelevant at the ecosystem level. By the time a technically superior alternative emerges, the installed base of MCP integrations will be so large that migration costs dominate the technical comparison.
This is the dynamic that sustained x86 for three decades despite ARM being more power-efficient for most of that period. It's what kept QWERTY as the keyboard standard despite Dvorak demonstrating measurable typing speed improvements. It's why HTML — a specification that web developers have been complaining about since 1995 — still powers every website you've ever visited. Installed base beats specification quality once adoption crosses a critical threshold. MCP crossed that threshold when OpenAI, Google, and Microsoft all adopted it within the same 12-month window.
The agentic AI era is not coming. It's here. OpenClaw's surge in popularity, Anthropic's agent tools, Alibaba's agentic Qwen 3.5, and the scramble among every AI company to ship agent capabilities all point to the same reality: AI systems that take autonomous actions in the real world are shipping to production. Those systems need to connect to real-world tools. The connections run on MCP.
What this means for builders and what it means for everyone else
If you build AI products, the action is unambiguous. Implement MCP for your tool integrations now. Not because it's the best possible protocol — it may not be — but because it's the protocol your users' other tools already speak. The integration tax for non-MCP tools will become a competitive disadvantage as the ecosystem matures.
If you build tools that AI agents might use — APIs, databases, SaaS products, developer platforms — publish an MCP server specification. The tools that are easily discoverable and connectable through MCP will be the ones that agents use. The tools that require custom integration will be the ones that agents route around.
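In practice, "publishing an MCP server specification" mostly means declaring your capabilities as tool definitions that agents discover via `tools/list`. The sketch below shows what such a definition looks like for a hypothetical SaaS invoicing API; the `name`/`description`/`inputSchema` structure follows MCP's tool schema, while the specific tool and fields are invented for illustration.

```python
import json

# Hypothetical tool definition for a SaaS invoicing API exposed over MCP.
# Structure (name, description, inputSchema) follows the MCP tool schema;
# the tool itself and its fields are invented for illustration.
invoice_tool = {
    "name": "create_invoice",
    "description": "Create a draft invoice for a customer.",
    "inputSchema": {                      # standard JSON Schema
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 1},
            "currency": {"type": "string", "enum": ["USD", "EUR", "INR"]},
        },
        "required": ["customer_id", "amount_cents", "currency"],
    },
}

# An agent that has never seen this API can read the schema and construct a
# valid call on its own -- that discoverability is the point of publishing it.
print(json.dumps(invoice_tool, indent=2))
```

The description and schema do double duty: they are both machine-readable validation rules and the documentation the model reads when deciding whether and how to use the tool.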
If you're watching this from outside the AI industry entirely, the significance is this: a standard protocol for AI agents to interact with the digital world just achieved consensus adoption in about a year. The protocol that defines how AI systems interact with your bank, your email, your calendar, your medical records, and your employer's databases is now settled. The protocol wars are over. The question that remains is who builds what on top of it, and whether the governance structure that the Linux Foundation provides is sufficient to keep the standard open as the stakes grow.
The answer to that question will matter more than any model benchmark published this year.