From: aidotengineer
The Model Context Protocol (MCP) space is new and rapidly evolving, and it is crucial to the future of AI agents [00:00:02]. While Large Language Models (LLMs) have achieved impressive intelligence, that intelligence is "stuck in a box" without practical application [00:01:52]. To make AI agents practically useful, it is essential to consider their context and capabilities: their inputs and outputs [00:01:59].
Origin of the MCP Concept
The concept of MCP emerged from the challenge of enabling LLMs to solve complex problems. Henry, founder and CEO of Smithery, initially focused on the ARC-AGI challenge, an IQ-style test designed for LLMs [00:00:41]. The challenge involves predicting missing patterns from examples, a task easy for humans (around 80% accuracy) but historically difficult for LLMs [00:00:59]. With the release of OpenAI's o3, however, human-level performance was achieved, raising questions about the immediate deployment of autonomous agents [00:01:15].
This led to what is termed "Claude's paradox": despite significant advances in intelligence from frontier labs, that intelligence remains confined [00:01:45]. Recognizing this, Anthropic released the Model Context Protocol (MCP) in November 2024: an open standard that, combined with ever-smarter models, promises to standardize how LLMs connect to external services [00:02:08] [00:02:27].
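To make the standard concrete, the sketch below builds the three core JSON-RPC 2.0 messages an MCP client exchanges with a server, following the method names in the public MCP specification. The `search_issues` tool and its arguments are hypothetical, used only for illustration.

```python
import json

# An MCP session starts with an "initialize" handshake in which client
# and server exchange versions and capabilities.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# The client can then discover which tools the server exposes...
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# ...and invoke one by name with JSON arguments (tool name is hypothetical).
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "search_issues", "arguments": {"repo": "example/repo"}},
}

for message in (initialize, list_tools, call_tool):
    print(json.dumps(message))
```

Because every service speaks this same small vocabulary of methods, a model that learns to emit one `tools/call` can, in principle, drive any MCP server.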
Current Challenges in the MCP Ecosystem
Despite the initial excitement and a vibrant developer community [00:02:23], the MCP ecosystem faces several challenges that hinder widespread adoption of AI agents:
User-Side Problems
- Fragmentation: The growing number of MCP servers makes it difficult to find high-quality ones [00:02:50]. The MCP committee is working on an official registry to address this, but how to assign reputation to high-quality MCPs remains an open question [00:02:56].
- High Friction Install: Many MCPs require a complex multi-step installation process, making them difficult to set up [00:03:14].
- Insecurity: Users risk installing insecure MCPs [00:03:25].
- Lack of AI-Native Economy: There is no clear plan for agentic payments or managing subscriptions from numerous services [00:03:30].
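The install friction mentioned above typically means hand-editing a client configuration file per server. As one illustration (assuming Claude Desktop's `mcpServers` config format; the server name, package, and environment variable here are placeholders):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": { "EXAMPLE_API_KEY": "..." }
    }
  }
}
```

Each new server means another entry, another locally installed runtime, and another credential to manage by hand, which is exactly the multi-step friction the talk describes.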
Developer-Side Problems
Developers building MCPs also encounter significant issues:
- Hosting Problems: Despite improvements in HTTP transport, developers still grapple with stateful sessions, resumability, and other hosting complexities [00:03:53], which directly affect infrastructure requirements.
- Lacking Developer Tooling: Basic MCP inspectors exist, but there’s a lack of tools to help developers design optimal MCPs, ensure tools are called correctly, and create the best agent experience [00:04:06].
- Distribution: Developers face challenges in getting their MCPs discovered [00:04:30].
- Observability: Improving deployed MCPs after launch is difficult due to a lack of proper observability [00:04:36].
- Monetization: A clear path for developers to monetize their MCPs is still undefined [00:04:43].
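To illustrate the hosting burden in the first bullet, the sketch below shows the minimal session bookkeeping a stateful MCP host needs: issuing a session id (the streamable HTTP transport carries one in an `Mcp-Session-Id` header) and logging events so a disconnected client can resume. The class and method names are illustrative, and real resumption logic is considerably more involved.

```python
import uuid

class SessionStore:
    """Minimal per-session state for a stateful MCP host (illustrative)."""

    def __init__(self):
        self._sessions = {}

    def create(self) -> str:
        """Issue a new session id at initialization time."""
        session_id = uuid.uuid4().hex
        self._sessions[session_id] = {"events": []}
        return session_id

    def record(self, session_id: str, event: dict) -> None:
        """Log an event so a dropped client can later replay it."""
        self._sessions[session_id]["events"].append(event)

    def resume(self, session_id: str, last_seen: int) -> list:
        """Replay events after the client's last acknowledged index."""
        return self._sessions[session_id]["events"][last_seen:]

store = SessionStore()
sid = store.create()
store.record(sid, {"type": "progress", "value": 1})
store.record(sid, {"type": "progress", "value": 2})
print(store.resume(sid, 1))  # events after index 1
```

Multiply this by expiry, persistence across restarts, and horizontal scaling, and the hosting complexity the talk describes becomes clear.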
Smithery’s Vision and Demonstration
Smithery, founded in December 2024, aims to address these challenges by becoming the “AI gateway” that grows and orchestrates the new era of AI-native services for AI agents [00:04:57].
A demo showcases the potential when these problems are solved:
The Smithery Playground demonstrates an AI agent with access to thousands of curated MCPs [00:05:22]. For example, an agent was prompted to "Find the most pressing issue on my GitHub repository called smithery-cli and create a new ticket on Linear" [00:05:33]. The agent successfully:
- Thought about the issue [00:05:48].
- Called a search services function to find and connect to the best servers [00:05:51].
- Used the GitHub MCP to find the highest priority bug on the repository [00:06:06].
- Created a detailed ticket on Linear, including a link to the original issue [00:06:16]. This end-to-end task was solved by an AI agent connected to two different MCPs, demonstrating the potential for developing AI agents for productivity [00:06:40].
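The steps above can be sketched as an orchestration loop. In this sketch both MCP servers and the registry search are stubbed out, and every function, tool, and field name is illustrative rather than Smithery's actual API.

```python
def search_servers(query: str) -> list:
    """Stand-in for a registry search over available MCP servers."""
    return ["github-mcp", "linear-mcp"]

def call_tool(server: str, tool: str, args: dict):
    """Stand-in for an MCP tools/call round trip to a connected server."""
    if (server, tool) == ("github-mcp", "list_issues"):
        return [{"title": "Crash on startup", "priority": "high",
                 "url": "https://example.com/issue/1"}]
    if (server, tool) == ("linear-mcp", "create_issue"):
        return {"created": True, "title": args["title"]}
    raise ValueError("unknown tool")

# 1. Discover servers relevant to the task.
servers = search_servers("github linear")

# 2. Find the highest-priority issue via the GitHub server.
issues = call_tool("github-mcp", "list_issues", {"repo": "example/repo"})
top = max(issues, key=lambda issue: issue["priority"] == "high")

# 3. File a Linear ticket that links back to the original issue.
ticket = call_tool("linear-mcp", "create_issue",
                   {"title": top["title"], "description": top["url"]})
print(ticket)
```

The point of the demo is that the agent composes two independently built servers through one uniform protocol, with no GitHub- or Linear-specific glue code on the client side.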
The Future of AI Agent Ecosystems
The enthusiasm from developers for deploying servers and making tool calls is evident [00:07:01]. It is increasingly clear that the future of the internet will be dominated by “tool calls” rather than “clicks” [00:07:09]. In this new paradigm, the “agent experience” will matter more than the traditional user experience [00:07:15]. This shift will require a collaborative effort from the entire community, not just a few companies, to build this future AI agent ecosystem [00:07:21].