Model Context Protocol (MCP)
Open standard defining how AI agents communicate with external tools, databases, and services through a unified interface for LLM-to-infrastructure interaction.
Model Context Protocol (MCP) is an open standard that defines how LLMs communicate with external tools, databases, APIs, file systems, and code execution environments. Often described as the "USB-C of AI agents," MCP provides a single, unified interface that allows any AI model to interact with any backend service — replacing the need for custom integrations per tool.
Architecture
MCP defines three core components in its communication model:
- Host: The application where the AI runs (IDE, chat interface, automation platform)
- Client: A lightweight process that parses requests, manages context, and handles the protocol-level communication with the server
- Server: The component that exposes tools, resources, and prompt templates that the AI can invoke
Data flows continuously across trust boundaries between these components. The protocol uses JSON-RPC for structured communication and supports multiple transport mechanisms including stdio (local), Server-Sent Events (SSE), and Streamable HTTP (remote).
Security implications
MCP introduces a fundamentally different attack surface compared to traditional APIs:
- Non-deterministic control flow: The LLM decides which tools to call, what parameters to pass, and how to interpret results — unlike traditional APIs where endpoints and parameters are developer-defined
- Delegated permissions: When an LLM acts through an MCP server, it inherits the user's permissions, often with broader scope than any single API call would grant
- Indirect prompt injection: Malicious instructions embedded in data retrieved by MCP tools can manipulate the LLM's behavior, since models cannot architecturally distinguish between data and commands
- Tool poisoning: Attackers can embed malicious instructions in MCP tool metadata (names, descriptions, parameter definitions) to manipulate tool selection and parameter formatting
A 2025 analysis of over 5,200 open-source MCP server implementations found that 88% required credentials to function, while 53% relied entirely on static, plaintext API keys — creating widespread credential exposure risks.
Key security controls
Hardening an MCP deployment requires controls at every layer:
- Identity: Cryptographic workload identity (SPIFFE/SPIRE), mutual TLS, OAuth 2.0 with token exchange
- Transport: mTLS for all remote communication, network segmentation, SSRF prevention
- Runtime: Distroless containers, non-root execution, kernel-level sandboxing (Seccomp, AppArmor)
- Data: Two-tier sanitization (regex + ML classifier) on all tool outputs before they enter the context window
- Observability: Structured audit logs with correlation IDs, OpenTelemetry tracing, centralized MCP gateways
Web3 and DeFi context
In cryptocurrency and DeFi environments, MCP vulnerabilities carry the highest stakes because private key exposure through an MCP-connected wallet tool is unrecoverable. Attack patterns specific to multi-MCP DeFi deployments include "Function Priority Hijacking" (a malicious plugin overrides legitimate function execution) and "Cross-MCP Triggering" (one server's output manipulates tools on another server).
For a practical hardening guide, see How to harden an MCP server. For a quick-reference checklist, see the MCP security checklist and the interactive MCP checklist.
Articles Using This Term
Learn more about Model Context Protocol (MCP) in these articles:

How to Harden an MCP Server Before It Becomes a Master Key to Your Infrastructure
Secure your MCP servers against prompt injection, credential theft, and supply chain attacks. A practical hardening guide for identity, transport, and runtime.

MCP Security Checklist: 24 Critical Checks for AI Agents
Complete MCP (Model Context Protocol) security checklist with 24 actionable checks. Learn how to prevent tool poisoning, prompt injection, and RCE attacks. Essential guide for AI agent builders and MCP server developers.
Related Terms
LLM
Large Language Model - AI system trained on vast text data to generate human-like responses and perform language tasks.
Prompt Injection
Attack technique manipulating AI system inputs to bypass safety controls or extract unauthorized information.
Context Window
The maximum amount of text (measured in tokens) that an LLM can process in a single interaction, defining its working memory limits.
Tool Integration Security
Security practices for validating and controlling how AI systems interact with external tools, APIs, and services to prevent unauthorized actions.
Trust Boundary
Interface where data enters protocol or assets move between components, representing highest-risk areas requiring focused security analysis.
Attack Surface
The total number of points where unauthorized users can try to enter data or extract data from an environment, including AI-specific entry points and interactions.
Need expert guidance on Model Context Protocol (MCP)?
Our team at Zealynx has deep expertise in blockchain security and DeFi protocols. Whether you need an audit or consultation, we're here to help.
Get a Quote
