Meet MarsDevs at Gitex AI Asia 2026 · Marina Bay Sands, Singapore · 9 to 10 April 2026 · Booth HC-Q035
TL;DR: The Model Context Protocol (MCP) is an open standard created by Anthropic that gives AI models and agents a universal way to connect to external tools, data sources, and services. Think of MCP as USB-C for AI: instead of building a custom integration for every model-tool combination, you build one MCP server and it works with any MCP-compatible AI application. As of 2026, MCP has crossed 97 million monthly SDK downloads, supports 10,000+ public servers, and is governed by the Linux Foundation's Agentic AI Foundation (AAIF) with backing from Anthropic, OpenAI, Google, Microsoft, and AWS.
By the MarsDevs Engineering Team. Based on production AI agent systems with MCP integration deployed for clients across 12 countries.
Every AI agent needs to talk to the outside world. It needs to query databases, pull files from GitHub, send Slack messages, read documents from Google Drive, or call internal APIs. Before MCP, every one of those connections required custom glue code.
Build an agent on Claude? Write a Slack integration. Move to GPT-4? Rewrite it. Add a new tool? Write another connector. The result was an N-by-M problem: N models times M tools, each needing its own bespoke integration.
The Model Context Protocol (MCP) is an open standard that defines a single, universal interface between AI applications and external systems. Anthropic created MCP and open-sourced it in November 2024. Two Anthropic engineers, David Soria Parra and Justin Spahr-Summers, built the initial protocol internally in July 2024, then the team open-sourced it after seeing strong results at an internal hackathon.
MarsDevs is a product engineering company that builds AI-powered applications, SaaS platforms, and MVPs for startup founders. We integrate MCP into agentic AI systems for clients who need their agents to interact with real business tools in production. On our last three agent projects, MCP cut weeks of custom integration work down to days.
Here is the simplest way to think about it. Before MCP, connecting AI to tools was like every phone having a different charger. MCP is USB-C. One standard. Every device works. You build an MCP server once for your tool, and any AI application that speaks MCP can use it.
Key facts about MCP:

- Open standard created by Anthropic and open-sourced in November 2024
- Uses JSON-RPC 2.0 over stdio (local) or Streamable HTTP (remote) transports
- 10,000+ public servers and 97+ million monthly SDK downloads as of 2026
- Governed by the Agentic AI Foundation (AAIF) under the Linux Foundation
- Backed by Anthropic, OpenAI, Google, Microsoft, and AWS
MCP follows a client-server architecture with three distinct participants. You need to understand these roles before you build anything.
| Component | Role | Example |
|---|---|---|
| MCP Host | The AI application that runs the LLM and coordinates connections | Claude Desktop, VS Code, Cursor, your custom app |
| MCP Client | A connector inside the host that maintains a dedicated connection to one server | One client per server connection |
| MCP Server | A program that exposes tools, data, or prompts to clients | A GitHub server, a PostgreSQL server, your custom API server |
An MCP host is the AI application that the user interacts with, such as Claude Desktop, Cursor, or VS Code. When an MCP host starts up, it creates one MCP client for each configured MCP server. Each client maintains its own dedicated connection. A single host can connect to dozens of servers at once, giving the AI access to a wide range of tools and data sources through one protocol.
An MCP client is the connector component inside the host that manages communication with a single MCP server. Each client handles one server connection, including capability negotiation, tool discovery, and request routing.
An MCP server is a lightweight program that exposes specific capabilities (tools, resources, prompts) to clients through the standardized MCP interface. Servers are the integration layer between the AI system and the external tool or data source.
MCP servers expose capabilities through three core primitives. These are the building blocks that define what an AI can access.
Tools are executable functions the AI can call to perform actions. Query a database. Create a GitHub issue. Send an email. Tools are the "hands" of the AI agent. When the LLM decides it needs to take an action, it invokes a tool through the MCP client.
Resources are data sources that provide context. File contents, database schemas, API responses, configuration data. Resources are read-only and give the AI the information it needs to make better decisions. Think of them as GET endpoints in a REST API.
Prompts are reusable templates that structure interactions with the LLM. System prompts, few-shot examples, specialized instruction sets that help the AI use tools and resources effectively. Prompts are the least-discussed primitive, but they are critical for production quality.
Summary of MCP primitives:

| Primitive | What it is | Analogy |
|---|---|---|
| Tools | Executable functions the AI calls to take actions | The agent's "hands" |
| Resources | Read-only data sources that provide context | GET endpoints in a REST API |
| Prompts | Reusable templates that structure LLM interactions | Instruction templates |
MCP uses JSON-RPC 2.0 as its wire protocol. JSON-RPC 2.0 is a stateless, lightweight remote procedure call protocol encoded in JSON. Every message between client and server follows this standard format. The transport layer handles how those messages actually travel between processes.
Stdio transport uses standard input/output streams for local communication. The MCP server runs as a local process on the same machine as the host. Fast, zero network overhead, and the default for most developer tool integrations.
Streamable HTTP transport uses HTTP POST for client-to-server messages with optional Server-Sent Events (SSE) for streaming responses. This is the standard for remote servers, supporting OAuth authentication, multi-client connections, and cloud deployment. The MCP specification introduced Streamable HTTP in March 2025, replacing the earlier SSE-only transport.
| Transport | Best For | Authentication | Multi-Client | Network |
|---|---|---|---|---|
| Stdio | Local dev tools, IDE plugins | N/A (local process) | No | None (same machine) |
| Streamable HTTP | Remote servers, cloud deployment, production | OAuth 2.0 | Yes | HTTP/HTTPS |
Here is what happens when you ask an AI agent to "check the weather in San Francisco" and it has a weather MCP server connected:
1. The user asks the question; the host forwards it to the LLM along with the available tool schemas.
2. The LLM decides to call the weather_current tool.
3. The MCP client sends a tools/call request to the weather server.
4. The server executes the weather lookup and returns the result.
5. The LLM uses the result to answer the user.

This entire flow uses the same JSON-RPC protocol regardless of whether the server runs locally via stdio or remotely via Streamable HTTP. That abstraction is what makes MCP powerful.
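To make the wire format concrete, here is a sketch of what the tools/call exchange might look like as JSON-RPC 2.0 messages. The tool name and arguments are illustrative; the envelope fields follow the JSON-RPC 2.0 specification, and the result shape follows MCP's text content type.

```python
import json

# A tools/call request as the MCP client would send it (JSON-RPC 2.0
# envelope; tool name and arguments are illustrative).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "weather_current",
        "arguments": {"city": "San Francisco"},
    },
}

# A matching response from the server. MCP tool results carry a
# content array; "text" is the simplest content type.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "18°C, partly cloudy"}]
    },
}

# This serialized form is what actually travels over stdio or HTTP.
wire = json.dumps(request)
print(wire)
```

The transport only decides how `wire` moves between processes; the message itself is identical in both cases.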
If you are building AI agents, you will run into two protocols in 2026: MCP and A2A. They solve different problems. They work together, not against each other.
MCP handles vertical integration: agent to tools. The Model Context Protocol connects an AI agent to external tools, databases, and services. One agent, many tools. MCP is the standard interface between an AI "brain" and its "hands."
A2A handles horizontal collaboration: agent to agent. Google created the Agent-to-Agent (A2A) protocol in April 2025 to standardize how AI agents discover, communicate, and collaborate with each other. A2A is the protocol for multi-agent communication where one agent coordinates with or delegates tasks to other agents.
| Feature | MCP | A2A |
|---|---|---|
| Created by | Anthropic (Nov 2024) | Google (Apr 2025) |
| Purpose | Connect agents to tools and data | Connect agents to other agents |
| Direction | Vertical (agent to tool) | Horizontal (agent to agent) |
| Governance | Linux Foundation (AAIF) | Linux Foundation (AAIF) |
| Transport | stdio, Streamable HTTP | HTTP, gRPC (v1.0) |
| Discovery | tools/list, resources/list | Agent Cards |
| Maturity (2026) | 10,000+ servers, 97M+ monthly SDK downloads | v1.0 shipped, growing ecosystem |
| Use when | Your agent needs to call tools, read data, or take actions | Your agents need to delegate tasks to other agents |
Both protocols now live under the Agentic AI Foundation (AAIF) within the Linux Foundation, co-founded by Anthropic, OpenAI, Google, Microsoft, AWS, and Block in December 2025. Neutral governance means neither Anthropic nor Google controls the specs alone.
The short answer: if your agent needs to call a database, use MCP. If your agent needs to ask another agent to handle a subtask, use A2A. Most production systems will use both.
Building an MCP server is where developers spend most of their time. The good news: the official SDKs make it straightforward in Python and TypeScript.
If you are a non-technical founder evaluating whether your team should adopt MCP, here is the key takeaway: an experienced engineer can go from zero to a working MCP server in under an hour with the Python SDK. That is not a sales pitch. It is how the tooling works.
FastMCP is a high-level Python API included in the official MCP SDK that uses decorators and type hints to auto-generate tool schemas. It is the fastest path from zero to a working MCP server.
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-analytics-server")

@mcp.tool()
def query_revenue(start_date: str, end_date: str) -> str:
    """Query total revenue between two dates.

    Args:
        start_date: Start date in YYYY-MM-DD format
        end_date: End date in YYYY-MM-DD format
    """
    # Your database query logic here ("db" is your own data-access layer)
    result = db.execute(
        "SELECT SUM(amount) FROM transactions WHERE date BETWEEN ? AND ?",
        [start_date, end_date],
    )
    return f"Total revenue: ${result:,.2f}"

@mcp.resource("schema://database")
def get_schema() -> str:
    """Return the database schema for context."""
    return db.get_schema_description()

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```
FastMCP reads the type hints and docstrings, then automatically generates the JSON schema that MCP clients need for tool discovery. No manual schema definition required.
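For illustration, the tool definition FastMCP derives for query_revenue above would look roughly like this. This is a hand-written sketch of the shape, not the SDK's exact output:

```python
# Roughly the tool definition FastMCP derives from the type hints and
# docstring on query_revenue (a hand-written sketch, not exact output).
query_revenue_schema = {
    "name": "query_revenue",
    "description": "Query total revenue between two dates.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string"},
            "end_date": {"type": "string"},
        },
        "required": ["start_date", "end_date"],
    },
}

# Clients receive this in the tools/list response and use it to build
# valid tools/call arguments.
print(query_revenue_schema["inputSchema"]["required"])
```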
The TypeScript SDK uses a handler-based pattern where you define JSON schemas explicitly with Zod validation. More verbose, but gives you precise control.
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-analytics-server",
  version: "1.0.0"
});

server.tool(
  "query_revenue",
  "Query total revenue between two dates",
  {
    start_date: z.string().describe("Start date (YYYY-MM-DD)"),
    end_date: z.string().describe("End date (YYYY-MM-DD)")
  },
  async ({ start_date, end_date }) => {
    // "db" is your own data-access layer
    const result = await db.query(
      `SELECT SUM(amount) FROM transactions WHERE date BETWEEN $1 AND $2`,
      [start_date, end_date]
    );
    return {
      content: [{ type: "text", text: `Total revenue: $${result}` }]
    };
  }
);
```
| Language | SDK | Notes |
|---|---|---|
| Python | mcp (official) | Includes FastMCP for rapid development |
| TypeScript | @modelcontextprotocol/sdk | Handler-based, explicit schemas |
| Java/Kotlin | Official SDK | Spring AI integration available |
| C# | Official SDK | .NET integration |
| Go | Community SDK | Growing ecosystem |
| Rust | Community SDK | Performance-critical use cases |
The MCP Inspector is a browser-based debugging tool that connects to your server and lets you test tools, resources, and prompts interactively. Run it locally at localhost:6274 during development. You can also test by connecting your server to Claude Desktop via its configuration file, or by adding it to Claude Code's .mcp.json.
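A Claude Desktop entry for the Python server above might look like this in claude_desktop_config.json. The command and path are placeholders for your own setup:

```json
{
  "mcpServers": {
    "my-analytics-server": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```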
For production, add the MCP server to your CI/CD pipeline. Write integration tests that verify tool discovery (tools/list) returns the expected schemas and that tool execution (tools/call) returns valid responses. Every MCP project we ship at MarsDevs has CI/CD from day one. No exceptions.
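A minimal sketch of such a discovery check, assuming you already have the server's tools/list result as a parsed dict (in a real pipeline you would obtain it through an MCP client session; the expected tool names here are illustrative):

```python
# Verify that a tools/list result exposes the tools we expect, with the
# required parameters intact. "EXPECTED_TOOLS" is illustrative; replace
# it with your server's actual contract.
EXPECTED_TOOLS = {"query_revenue": {"start_date", "end_date"}}

def check_tool_discovery(result: dict) -> None:
    tools = {t["name"]: t for t in result["tools"]}
    for name, required in EXPECTED_TOOLS.items():
        assert name in tools, f"missing tool: {name}"
        schema = tools[name]["inputSchema"]
        assert set(schema.get("required", [])) == required, (
            f"{name}: required params changed"
        )

# Example payload, shaped like an MCP tools/list result.
sample = {
    "tools": [
        {
            "name": "query_revenue",
            "inputSchema": {
                "type": "object",
                "required": ["start_date", "end_date"],
            },
        }
    ]
}
check_tool_discovery(sample)  # raises AssertionError on schema drift
```

A check like this catches the most common regression: a refactor silently renaming a tool or a parameter that downstream agents depend on.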
MCP server testing checklist:

- Tool discovery (tools/list) returns expected schemas
- Tool execution (tools/call) returns valid responses

MCP is no longer experimental. Production deployments span engineering, sales, support, and operations. Here are the patterns we see most often in our own work and across the industry.
An AI coding assistant (Cursor, VS Code, Claude Code) connects to MCP servers for GitHub, Sentry, and your internal documentation. A developer asks the agent to "investigate the spike in 500 errors from the payments service." The agent pulls error logs from Sentry, checks recent commits on GitHub, reads the relevant service docs, and returns a root cause analysis with suggested fixes.
Block (formerly Square) reported that thousands of employees use MCP integrations connecting Snowflake, Jira, Slack, Google Drive, and internal APIs, cutting up to 75% of time spent on daily engineering tasks.
Connect an MCP server to your PostgreSQL, Snowflake, or Supabase instance. The AI agent discovers the database schema (via resources), runs read queries (via tools), and generates reports. A founder asks "what was our MRR growth last quarter?" and gets a data-backed answer in seconds, not after a Slack thread with the data team.
If you are a non-technical founder, this is the part that matters: your AI assistant can answer business questions directly from your database, without you writing SQL or waiting on an engineer.
An agentic AI system connects to MCP servers for your CRM (Salesforce, HubSpot), ticketing system (Zendesk, Intercom), and knowledge base. When a customer submits a ticket, the agent pulls their account history, checks order status, reviews relevant help articles, and either resolves the issue automatically or routes it with full context to a human agent.
Connect MCP servers to Google Drive, Notion, and your contract management system. An AI agent reads incoming contracts, extracts key terms, cross-references them against your standard terms, flags discrepancies, and files the document in the right location. A global fintech reported 60% faster integration time after connecting CRM, analytics, and onboarding tools through MCP.
At MarsDevs, we integrate MCP into AI agent systems that need to interact with real business infrastructure. A typical engagement looks like this: a SaaS founder needs an AI assistant that can query their database, update their CRM, and generate reports. Instead of building three custom integrations, we deploy three MCP servers and connect them to one agent framework. The agent gets tool access in days, not weeks.
We have seen founders burn months trying to wire up custom integrations that break every time they update their AI model. MCP fixes that. One integration per tool, every AI model can use it.
Founded in 2019, MarsDevs has shipped 80+ products across 12 countries for startups and scale-ups. MCP is now part of our standard stack for any project involving agent-to-tool integration.
The MCP ecosystem has matured fast. Here is where things stand.
Every major AI provider now supports MCP:

- Anthropic (Claude, Claude Desktop, Claude Code)
- OpenAI (ChatGPT)
- Google (Gemini)
- Microsoft (VS Code)
- AWS
That is not a wishlist. Every name on this list has shipped MCP support in production.
The MCP registry lists over 10,000 public servers in 2026, covering:

- Developer tools (GitHub, Sentry)
- Databases and warehouses (PostgreSQL, Snowflake, Supabase)
- Communication (Slack, Intercom)
- CRM and support (Salesforce, HubSpot, Zendesk)
- Documents and knowledge (Google Drive, Notion)
Whatever tool your startup runs on, there is probably an MCP server for it already.
Anthropic donated MCP to the Agentic AI Foundation (AAIF) in December 2025. The AAIF operates under the Linux Foundation with six co-founders: Anthropic, OpenAI, Google, Microsoft, AWS, and Block. The latest protocol specification version is 2025-11-25, with active development on authorization improvements, remote server security, and the experimental Tasks primitive for long-running operations.
The 2026 MCP roadmap identifies four priority areas that will shape the protocol's evolution.
Transport scalability. The shift from stdio (local) to Streamable HTTP (remote) transport means MCP servers are moving to the cloud. But running Streamable HTTP at scale has surfaced gaps: stateful sessions fighting with load balancers, horizontal scaling requiring workarounds, and no standard way for registries to discover what a server offers without connecting to it. Expect specification updates addressing these issues.
Agent communication. The Tasks primitive shipped as an experimental feature for long-running operations. Early production use has surfaced lifecycle gaps including retry semantics for transient failures and expiry policies for result retention.
Governance maturation. As adoption scales, the AAIF is refining how protocol changes are proposed, reviewed, and ratified. Clearer contributor pathways and decision-making structures are in development.
Enterprise readiness. Production deployments increasingly need audit trails, SSO-integrated authentication, gateway behavior standardization, and configuration portability. Gateway solutions (like the open-source ContextForge project) provide fine-grained access control, audit logging, and policy enforcement. If you are deploying MCP in a regulated industry (fintech, healthcare), this gateway layer is non-negotiable.
Enterprise adoption. CData and other enterprise integration platforms now offer MCP connectivity out of the box. This brings MCP to organizations that would never build custom servers. If you are a startup selling to enterprise customers, MCP compatibility in your product is quickly becoming a checkbox requirement.
MCP (Model Context Protocol) is an open standard that defines how AI models and agents connect to external tools, databases, and services through a single universal interface. It matters because it kills the need for custom integrations between every AI model and every tool. Build one MCP server for your tool, and it works with Claude, ChatGPT, Gemini, and any other MCP-compatible application. Anthropic, OpenAI, Google, Microsoft, and AWS all support it.
Anthropic created MCP. Two Anthropic engineers, David Soria Parra and Justin Spahr-Summers, started building the protocol internally in July 2024. Anthropic open-sourced MCP in November 2024. In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation, where a neutral body with backing from six major AI companies now governs it.
MCP provides a standardized discovery and execution protocol on top of API calls. With a regular API, the AI needs custom code to understand the API's schema, authenticate, and handle responses. MCP wraps that API behind a standard interface that any AI application can discover and use automatically. The AI calls tools/list to find available tools, reads their schemas, and calls tools/call to execute them. No custom integration code needed on the AI side.
MCP has official SDKs in Python, TypeScript, Java/Kotlin, and C#. Community SDKs exist for Go and Rust. The Python SDK includes FastMCP, a decorator-based API that auto-generates tool schemas from type hints, making it the fastest option for prototyping. The TypeScript SDK offers explicit schema control via Zod validation. Both official SDKs have crossed 97 million combined monthly downloads as of early 2026.
Yes. MCP runs in production at scale today. Anthropic uses it in Claude Desktop and Claude Code. Cursor and Windsurf use it in their IDEs. Block (Square) runs MCP integrations for thousands of employees across Snowflake, Jira, Slack, and internal APIs. The protocol specification is on version 2025-11-25, with Streamable HTTP transport supporting remote deployment, OAuth authentication, and multi-client connections. Enterprise adoption is accelerating as gateway solutions provide the security and observability layers that regulated industries require.
A2A (Agent-to-Agent) and MCP are complementary protocols under the same governance body (AAIF). MCP connects agents to tools (vertical integration). A2A connects agents to other agents (horizontal collaboration). Use MCP when your agent needs to call a database or send a Slack message. Use A2A when your agent needs to delegate a task to another specialized agent. Most production multi-agent systems use both protocols.
Yes, and that is the primary use case. Any developer can build a custom MCP server that exposes their application's functionality as tools and resources. The Python SDK (FastMCP) lets you go from zero to a working server in under an hour. Define your tools with decorated functions, add type hints, and the SDK generates the MCP-compatible schemas automatically. Test with the MCP Inspector, then deploy locally (stdio) or remotely (Streamable HTTP) depending on your architecture.
Use MCP when the primary consumer of your API is an AI system. MCP is purpose-built for AI context: it includes tool discovery, schema negotiation, and structured result formats that LLMs can interpret directly. REST and GraphQL are designed for traditional web applications where a human developer writes the integration code. If your API serves both AI and non-AI consumers, expose it through both: a standard REST/GraphQL API for your web app, and an MCP server that wraps the same logic for AI agents.
Secure production MCP servers by deploying them via Streamable HTTP with OAuth 2.0 authentication. Add a gateway layer for fine-grained access control, audit logging, and rate limiting. Use tool allow-lists to restrict which tools each client can access. Log every tool invocation for compliance. For regulated industries like fintech or healthcare, gateway projects such as ContextForge provide policy enforcement and SSO integration. Never expose stdio-based MCP servers to untrusted networks.
MCP itself is free and open-source. The cost comes from engineering time to build MCP servers for your specific tools and the infrastructure to run them. A basic MCP server wrapping an existing API takes an experienced developer a few hours to build with FastMCP. Complex servers with multiple tools, resource endpoints, and custom authentication take 1 to 3 days. If you are using pre-built public MCP servers (GitHub, PostgreSQL, Slack), the integration cost is near zero: install the server and configure the connection.
MCP is the standard plumbing for AI agent infrastructure. If you are building AI agents that need to interact with real business systems, it is no longer optional. It is the default.
The founders who move fastest pick a proven protocol, connect to the tools that matter, and ship. They do not spend months building custom integrations that break every time they switch models. We have seen both paths. The MCP path is faster every time.
Want to build an AI agent system that connects to your existing tools through MCP? Talk to the MarsDevs engineering team about your project. We ship production AI systems in weeks, not quarters. We take on 4 new projects per month. Claim an engagement slot.
Vishvajit
Co-Founder, MarsDevs
Vishvajit started MarsDevs in 2019 to help founders turn ideas into production-grade software. With deep expertise in AI, cloud architecture, and product engineering, he has led the delivery of 80+ software products for clients in 12+ countries.