Model Context Protocol (MCP) Explained: What SaaS Founders Need to Know in 2026

#Model Context Protocol
Sandor Farkas - Founder & Lead Developer at Wolf-Tech


A procurement questionnaire we reviewed last week from a Hamburg-based logistics platform had a section we hadn't seen six months ago: seven questions about Model Context Protocol. Does the product expose an MCP server? Which resources and tools does it publish? Is the server gated behind OAuth or static keys? What is the rate-limiting model per consumer? Is the schema versioned? The vendor we were advising had not heard of MCP until we mentioned it. Their product is excellent. They lost the deal three weeks later — not because of MCP specifically, but because the buyer's AI engineering team had decided MCP support would be a hard requirement going into 2027 and the vendor's roadmap could not promise it.

This is the shape MCP has taken in B2B SaaS in 2026. A protocol almost nobody had heard of in late 2024 has become a column in vendor evaluation spreadsheets at every enterprise we work with that has an internal AI team. The pattern is familiar — public APIs first, then webhooks, then GraphQL and OpenAPI — but the tempo this time is faster, the buyers are louder, and the question of whether to ship an MCP server is showing up on roadmaps where it does not yet have a clear answer. This post explains what MCP actually is, what shipping a server actually costs, and how a SaaS founder should decide right now between build, wait, and skip.

What Model Context Protocol Actually Is in 2026

Model Context Protocol is a JSON-RPC-based standard for letting an LLM-powered application discover and call into the resources, tools, and prompts that another system exposes. Anthropic introduced it in late 2024; by mid-2026 it is supported natively in Claude, ChatGPT, every major IDE assistant, and the agent platforms that matter. The reason the standard caught on quickly is that it solves a problem every team building AI features had been solving worse on its own: how do you let a language model reach into a real system — a CRM, a ticketing tool, a database, a code repository — without writing a one-off integration glue layer for every model and every system pair?

Three concepts carry the weight of the protocol. Resources are read-only context the model can pull in — a customer record, a recent support thread, a document. Tools are operations the model can invoke — create a ticket, run a query, send a message. Prompts are reusable templated workflows the server exposes that the host application can offer to the user. The MCP server (run by the SaaS vendor) advertises what it offers; the MCP client (run inside Claude, ChatGPT, the IDE, or the customer's agent platform) discovers, displays, and invokes them on the user's behalf. Authentication in any real deployment is OAuth, with scopes that map cleanly to the tools the model is allowed to invoke.

That definition matters because it tells you what the engineering work looks like. An MCP server is not a new product. It is a thin, well-specified protocol layer in front of capabilities your platform already exposes through its REST or GraphQL API, plus the security and observability scaffolding to make calls from someone else's LLM safe.

Why Enterprise Buyers Are Asking About MCP

Three things happened in 2025 that explain the pressure.

The big AI vendors standardised on MCP as the integration story they support natively. If you run an internal AI platform, MCP is now the path of least resistance for connecting your model to the dozens of SaaS tools your business uses. The alternative — bespoke integrations, brittle screen-scraping, or Zapier-shaped middlemen — is what enterprises are actively trying to retire.

Internal AI teams found out the hard way that shipping useful AI features without good context is mostly impossible. A model that cannot pull a customer's recent activity, look at their open tickets, or see the relevant document is reduced to chat. With MCP, that context comes from the systems that already own it, on demand, with the user's permissions intact.

Procurement teams, especially in regulated industries, like that MCP makes the integration shape legible. Instead of an opaque "we use AI" pitch, the vendor publishes an MCP server, the buyer's security team reviews the resources and tools it exposes, and the audit conversation gets concrete. That predictability is exactly what buyers are starting to demand in their RFPs.

The result is a fairly stark gradient: B2B SaaS vendors selling into AI-forward buyers — fintech, dev tools, support platforms, internal-ops tools — are getting the question now. Vendors selling into more conservative buyers will get it within twelve months.

What an MCP Server Implementation Actually Looks Like

The good news is that an MCP server is small. The hard parts are not the protocol itself; they are authentication, authorisation, and the contracts between your server and the systems it fronts.

A minimal Symfony-based server exposing one resource and one tool, gated by OAuth and scoped per user, looks roughly like this (the McpServer interface and its value objects are illustrative rather than taken from a specific SDK):

// src/Mcp/CrmServer.php
namespace App\Mcp;

use Psr\Log\LoggerInterface;

final class CrmServer implements McpServer
{
    public function __construct(
        private readonly CustomerRepository $customers,
        private readonly TicketService $tickets,
        private readonly Authorization $authz,
        private readonly LoggerInterface $log,
    ) {}

    public function listResources(McpContext $ctx): array
    {
        return [
            new Resource(
                uri: 'crm://customer/{id}',
                name: 'Customer record',
                mimeType: 'application/json',
                description: 'Read a customer profile and recent activity.',
            ),
        ];
    }

    public function readResource(string $uri, McpContext $ctx): ResourceContent
    {
        $id = $this->extractId($uri);
        $this->authz->assertCanRead($ctx->userId, 'customer', $id);
        $this->log->info('mcp.resource.read', ['user' => $ctx->userId, 'uri' => $uri]);
        return ResourceContent::json($this->customers->snapshot($id));
    }

    public function listTools(McpContext $ctx): array
    {
        return [
            new Tool(
                name: 'create_ticket',
                description: 'Open a support ticket on behalf of the user.',
                inputSchema: TicketSchema::INPUT,
            ),
        ];
    }

    public function callTool(string $name, array $args, McpContext $ctx): ToolResult
    {
        if ($name !== 'create_ticket') {
            throw new UnknownToolException($name);
        }
        $this->authz->assertCanInvoke($ctx->userId, 'create_ticket', $args);
        $ticket = $this->tickets->open($ctx->userId, $args);
        return ToolResult::ok(['ticket_id' => $ticket->id]);
    }
    /**
     * Pull the {id} segment out of a crm://customer/{id} resource URI.
     * Rejecting anything else keeps malformed URIs away from the repository.
     */
    private function extractId(string $uri): string
    {
        if (preg_match('#^crm://customer/([\w-]+)$#', $uri, $matches) !== 1) {
            throw new UnknownResourceException($uri);
        }

        return $matches[1];
    }
}

The transport is JSON-RPC 2.0 over stdio for local clients or streamable HTTP (with SSE for server-to-client messages) for remote deployments, depending on how your edge is configured. None of that is interesting. What is interesting — and what drives ninety percent of the implementation cost — is the authorisation layer.

Every tool your MCP server exposes is, from the buyer's security team's perspective, a button on the model's keyboard. If the button can do something destructive, the auth model has to constrain it explicitly: per-user scopes, per-tool quotas, write-action confirmation steps, and an audit trail that ties every call back to a real human user via OAuth. Treat the registry of exposed tools as the security boundary, the same way you treat a public REST API. Anything else is how prompt injection turns your MCP server into a confused deputy. We covered the broader shape of this scaffolding in our agentic features decision framework; the MCP-specific lesson is to assume the LLM calling your server is, for security purposes, hostile and easily tricked.

The deployment shape is genuinely simple. A single MCP server in front of an existing Symfony application, deployed alongside it, talking to the same database — there is no new microservice to introduce. Most teams we work with ship the first version in two engineer-weeks once the design is settled.

Build, Wait, or Skip: A Decision Framework

The same triage we apply to the agentic features decision works here, with different inputs.

Build now if your buyers are AI-forward, your product has clear high-value read resources and low-stakes write tools, and your existing API and OAuth layers are in good shape. The buyers already asking the question cluster in fintech, dev tools, support platforms, internal-ops tools, marketing analytics, and revenue ops. If a representative customer would get value from pulling your data into their internal AI assistant, MCP is the shortest path to delivering that value.

Wait if your auth layer is weak, your destructive endpoints lack confirmation flows, or your backend is a legacy PHP application that has not had a code quality audit in years. The week you ship an MCP server is the worst possible time to discover how fragile your auth layer is. Spend the next quarter hardening the underlying API, then ship MCP on top.

Skip for now if your buyers do not run their own AI platforms, your write endpoints carry irreversible consequences (regulated finance, healthcare records, identity systems), or your differentiator is the UX of your own product. Some categories will eventually need MCP support, but in 2026 the buyer pressure is not yet there and the security work is much heavier. Track the inbound questions and revisit quarterly.

A useful gut check: count the number of questions about MCP that came up in the last ten enterprise sales conversations. Three or more, build now. One or two, plan to ship in the next two quarters. Zero, keep watching.

Common Pitfalls When Shipping MCP

The mistakes we see most often when reviewing early MCP implementations cluster in three places.

Authentication shortcuts — most often static API keys or shared service accounts — eliminate the user identity that the audit story depends on. Once a buyer's security team realises the MCP server cannot answer "which human user invoked this tool?", the conversation ends. OAuth with per-user scopes is the only model that survives procurement.

Tool sprawl — exposing every API endpoint as an MCP tool — creates a security review burden that is impossible to maintain. Curate. Three high-value resources and four well-scoped tools beat a registry of forty thinly described ones, every time. The catalogue is a product surface; treat it like one.

Missing observability — no per-call audit log, no rate-limit metrics, no error budget for the MCP transport itself — turns the server into a black box the moment something goes wrong. Treat it like any other production surface: traces, structured logs, dashboards, alerts. The protocol is new; the operational discipline is not.

A fourth pitfall worth naming: skipping schema versioning. MCP clients cache tool definitions; if you change input shapes without versioning, you will break every long-lived agent that has memorised your schema. Bump the tool name (create_ticket_v2), keep the old one alive for a deprecation window, and announce the change in the description field of the deprecated tool — which is the closest thing the protocol has to an API changelog.

Closing

MCP in 2026 is in the same place public APIs were in 2010 and webhooks were in 2015 — a standard your buyers will eventually expect, currently being adopted at speed by the early movers, and unevenly understood by everyone else. Shipping a thoughtful MCP server costs less than most founders assume; shipping a careless one is how a confused-deputy incident lands in your security postmortem. The build, wait, or skip framework above is meant to make that decision concrete now, before the question shows up in next quarter's RFP.

Wolf-Tech helps European B2B SaaS teams design and ship MCP servers on PHP/Symfony and Next.js stacks — auth design, tool curation, observability, and the boring scaffolding that turns an integration story into a custom software development deliverable that survives enterprise security review. Contact us at hello@wolf-tech.io or visit wolf-tech.io for a free consultation.