Model Context Protocol (MCP)

Open standard defining how AI agents communicate with external tools, databases, and services through a unified interface for LLM-to-infrastructure interaction.

Model Context Protocol (MCP) is an open standard that defines how LLMs communicate with external tools, databases, APIs, file systems, and code execution environments. Often described as the "USB-C of AI agents," MCP provides a single, unified interface that allows any AI model to interact with any backend service — replacing the need for custom integrations per tool.

Architecture

MCP defines three core components in its communication model:

  • Host: The application where the AI runs (IDE, chat interface, automation platform)
  • Client: A lightweight process that parses requests, manages context, and handles the protocol-level communication with the server
  • Server: The component that exposes tools, resources, and prompt templates that the AI can invoke

Data flows continuously across trust boundaries between these components. The protocol uses JSON-RPC for structured communication and supports multiple transport mechanisms including stdio (local), Server-Sent Events (SSE), and Streamable HTTP (remote).

Security implications

MCP introduces a fundamentally different attack surface compared to traditional APIs:

  • Non-deterministic control flow: The LLM decides which tools to call, what parameters to pass, and how to interpret results — unlike traditional APIs where endpoints and parameters are developer-defined
  • Delegated permissions: When an LLM acts through an MCP server, it inherits the user's permissions, often with broader scope than any single API call would grant
  • Indirect prompt injection: Malicious instructions embedded in data retrieved by MCP tools can manipulate the LLM's behavior, since models cannot architecturally distinguish between data and commands
  • Tool poisoning: Attackers can embed malicious instructions in MCP tool metadata (names, descriptions, parameter definitions) to manipulate tool selection and parameter formatting

A 2025 analysis of over 5,200 open-source MCP server implementations found that 88% required credentials to function, while 53% relied entirely on static, plaintext API keys — creating widespread credential exposure risks.

Key security controls

Hardening an MCP deployment requires controls at every layer:

  • Identity: Cryptographic workload identity (SPIFFE/SPIRE), mutual TLS, OAuth 2.0 with token exchange
  • Transport: mTLS for all remote communication, network segmentation, SSRF prevention
  • Runtime: Distroless containers, non-root execution, kernel-level sandboxing (Seccomp, AppArmor)
  • Data: Two-tier sanitization (regex + ML classifier) on all tool outputs before they enter the context window
  • Observability: Structured audit logs with correlation IDs, OpenTelemetry tracing, centralized MCP gateways

Web3 and DeFi context

In cryptocurrency and DeFi environments, MCP vulnerabilities carry the highest stakes because private key exposure through an MCP-connected wallet tool is unrecoverable. Attack patterns specific to multi-MCP DeFi deployments include "Function Priority Hijacking" (a malicious plugin overrides legitimate function execution) and "Cross-MCP Triggering" (one server's output manipulates tools on another server).

For a practical hardening guide, see How to harden an MCP server. For a quick-reference checklist, see the MCP security checklist and the interactive MCP checklist.

Need expert guidance on Model Context Protocol (MCP)?

Our team at Zealynx has deep expertise in blockchain security and DeFi protocols. Whether you need an audit or consultation, we're here to help.

Get a Quote

oog
zealynx

Smart Contract Security Digest

Monthly exploit breakdowns, audit checklists, and DeFi security research — straight to your inbox

© 2026 Zealynx