Architectural Decomposition
Security methodology that isolates a system into distinct components (vault, logic, oracle, governance) to verify local invariants before compositional analysis.
Architectural Decomposition is the security engineering methodology that systematically isolates a complex smart contract system into distinct functional components—Vault (assets), Logic (computation), Oracle (external data), and Governance (administration)—to verify each component's local invariants in isolation before analyzing cross-component interactions and trust boundaries. The article establishes this as foundational: "This is the engineering practice of dismantling a system into isolated components, verifying their local invariants, and reassembling them to stress-test the seams."
The methodology addresses fundamental audit limitations rooted in cognitive science. The article explains: "Research into Cognitive Load suggests a hard limit on the number of active variables a reviewer can track simultaneously. A complex DeFi protocol involves hundreds of state variables and interactions." Linear code review forces reviewers to: track entire protocol state simultaneously, maintain mental model of all interactions, and identify bugs across scattered codebase. Decomposition reduces cognitive load by: limiting scope to single component, defining clear interfaces (trust boundaries), and enabling focused deep analysis.
Component Categories
Vault/Asset Cluster isolates where capital resides. The article defines: "The Vault/Asset Cluster: Where is the capital? (ERC20s, ETH balances)." This component includes: token storage contracts, balance tracking state, deposit/withdrawal functions, and custody mechanisms. Security focus: can funds be extracted without authorization? Are balances accurately tracked? Is value preserved across operations? Isolating the Vault enables: focused asset security analysis, clear authorization requirements, and contained blast radius if exploited.
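As an illustrative sketch (not code from the article), a Vault component reduced to its essentials is just custody plus balance accounting, and its local invariant can be stated directly in code. All names here are hypothetical:

```python
class Vault:
    """Minimal Vault/Asset component: custody and balance tracking only."""

    def __init__(self):
        self.balances = {}  # per-user accounting
        self.total = 0      # total assets held in custody

    def deposit(self, user, amount):
        assert amount > 0, "zero deposit"
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user, amount):
        # authorization: a user may only withdraw up to their own balance
        assert self.balances.get(user, 0) >= amount, "insufficient balance"
        self.balances[user] -= amount
        self.total -= amount

    def check_invariant(self):
        # local invariant: internal accounting matches total custody
        assert sum(self.balances.values()) == self.total
```

In an audit, `check_invariant` would be asserted after every deposit/withdraw sequence; the point is that the Vault's security question is answerable without reference to any other component.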
Logic Cluster isolates computational components. The article defines: "The Logic Cluster: The 'brains.' (Pricing, Swapping, Rate calculation)." This includes: mathematical formulas (AMM invariants, interest rates), business logic (swap execution, liquidation decisions), and state transitions (position updates, share calculations). Security focus: are calculations correct? Do invariants hold? Are edge cases handled? Isolating Logic enables: mathematical verification, formula correctness proofs, and focused algorithm review.
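A minimal sketch of a Logic-cluster calculation, assuming Uniswap-V2-style constant-product math with a 0.3% fee in integer arithmetic. The function name mirrors the periphery's `getAmountOut`, but this is an illustration, not production code:

```python
def get_amount_out(amount_in, reserve_in, reserve_out, fee_bps=30):
    """Output amount for a constant-product swap with a 0.3% fee."""
    assert amount_in > 0 and reserve_in > 0 and reserve_out > 0
    amount_in_with_fee = amount_in * (10_000 - fee_bps)
    numerator = amount_in_with_fee * reserve_out
    denominator = reserve_in * 10_000 + amount_in_with_fee
    return numerator // denominator  # integer division rounds against the trader

def swap(reserve_in, reserve_out, amount_in):
    out = get_amount_out(amount_in, reserve_in, reserve_out)
    new_in, new_out = reserve_in + amount_in, reserve_out - out
    # Logic-cluster invariant: the constant product never decreases
    assert new_in * new_out >= reserve_in * reserve_out
    return new_in, new_out
```

Verifying this function against the invariant requires no knowledge of custody, oracles, or governance, which is exactly what isolation buys.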
Oracle/Data Cluster isolates external truth sources. The article defines: "The Data/Oracle Cluster: External truth sources." This includes: price feed integrations, randomness sources, cross-chain data bridges, and external state readers. Security focus: can data be manipulated? Is staleness detected? What if oracle fails? Isolating Oracle enables: focused manipulation analysis, fallback design review, and clear data flow tracking.
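A hedged sketch of the staleness check mentioned above; the one-hour window and all method names are illustrative assumptions, not from the article:

```python
class PriceFeed:
    """Oracle/Data component wrapper with staleness and sanity checks."""

    MAX_AGE = 3600  # illustrative: reject data older than one hour

    def __init__(self):
        self.price = 0
        self.updated_at = 0

    def push(self, price, now):
        assert price > 0, "non-positive price rejected"
        self.price, self.updated_at = price, now

    def read(self, now):
        # staleness check: refuse to serve data older than MAX_AGE
        assert now - self.updated_at <= self.MAX_AGE, "stale oracle data"
        return self.price
```

The Oracle cluster's audit questions (manipulation, staleness, failure modes) all live behind this one interface.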
Governance Cluster isolates administrative functions. The article defines: "The Governance Cluster: Admin powers and time-locks." This includes: parameter update functions, upgrade mechanisms, pause/emergency controls, and access control systems. Security focus: what can admin change? Are changes time-locked? Can governance attack protocol? Isolating Governance enables: privilege analysis, upgrade security review, and administrative risk assessment.
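The time-lock pattern can be sketched as follows; the two-day delay and method names are illustrative assumptions:

```python
class Timelock:
    """Governance component sketch: parameter changes must be queued,
    then executed only after a fixed delay has elapsed."""

    DELAY = 2 * 24 * 3600  # illustrative: 2 days

    def __init__(self):
        self.queued = {}  # param -> (new_value, eta)
        self.params = {}

    def queue(self, param, value, now):
        self.queued[param] = (value, now + self.DELAY)

    def execute(self, param, now):
        value, eta = self.queued[param]
        # time-lock: the change only takes effect after the delay
        assert now >= eta, "timelock not expired"
        del self.queued[param]
        self.params[param] = value
```

The governance audit questions (what can change, how fast, by whom) reduce to checking that every privileged mutation routes through a structure like this.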
Decomposition Methodology
Mapping the architecture before analyzing code is the first step. The article advises: "Before analyzing syntax, we map the territory. Effective risk management requires a clear understanding of state transitions across interacting contracts." This mapping: identifies all contracts and their roles, categorizes them into component clusters, documents interfaces between components, and visualizes data and value flows. Mapping reveals: system structure, trust relationships, and high-risk boundaries.
Identify trust boundaries between components. Where components interact, data crosses from a trusted to an untrusted domain, the assumptions of one component meet the reality of another, and most vulnerabilities occur. The article emphasizes: "The goal is always to identify the Trust Boundaries—where does data enter, and where does money leave?" Trust boundary identification focuses analysis on the highest-risk interfaces.
Define local invariants for each component. Within each isolated component: what mathematical truths must hold? What properties must be preserved? What constraints exist on state? For Uniswap's Logic component: $x \cdot y \geq k$ after swaps. For a Vault component: each user's withdrawals ≤ that user's deposits. These local invariants become: test requirements, formal specifications, and security assertions.
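Local invariants translate directly into executable assertions, which is what makes them usable as test requirements. A minimal sketch, with hypothetical names:

```python
def check_amm_invariant(x, y, k_before):
    # Logic-cluster local invariant: the constant product may not decrease
    assert x * y >= k_before, "constant product violated"

def check_vault_invariant(deposits, withdrawals):
    # Vault-cluster local invariant: withdrawals <= deposits per user
    for user, withdrawn in withdrawals.items():
        assert withdrawn <= deposits.get(user, 0), f"over-withdrawal: {user}"
```

The same assertion functions can be reused as fuzzing properties, formal-verification rules, and on-chain sanity checks.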
Verify components in isolation before composition. Analyze each component independently: does implementation satisfy invariants? Are edge cases handled? Are inputs validated? The article explains: "By verifying components in isolation, we ensure that the security guarantee is built on a foundation of proven local truths." Isolated verification: reduces complexity (one component at a time), enables deep analysis (focused attention), and builds confidence (known-secure components).
Practical Decomposition Example
Uniswap V2 decomposition as case study. The article demonstrates: "Mapping Uniswap V2 reveals two primary distinct components: The Factory (Governance/Admin cluster—deploys pairs, manages feeTo) and The Pair (Logic and Vault cluster—the 'God Class')." This decomposition reveals: Factory is administrative (lower priority), Pair holds assets AND logic (highest priority), and clear boundary exists between Factory and Pair.
Prioritization from decomposition focuses audit effort. The article explains: "We deprioritize the Factory, as it is primarily administrative. The critical solvency risk lies in the Pair contract." Decomposition enables: risk-based prioritization (audit high-risk components thoroughly), efficient resource allocation (don't waste time on low-risk admin code), and clear scope definition (what requires deep analysis vs quick review).
Isolation strategy for focused analysis. The article describes: "We isolate UniswapV2Pair.sol, treating it as a black box with strict inputs and outputs." This isolation: defines clear interface (what goes in, what comes out), abstracts away Factory details, and enables focused Pair analysis. Treating component as black box: simplifies analysis, reveals interface assumptions, and catches boundary violations.
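The black-box treatment can be made concrete by writing the component's interface down explicitly and encoding boundary assumptions as checks. A Python sketch assuming a Uniswap-V2-style Pair interface (method names mirror the real Pair, but the harness itself is illustrative):

```python
from typing import Protocol

class PairLike(Protocol):
    """Black-box view of a Uniswap-V2-style Pair: strict inputs and outputs.
    Everything behind this interface is out of scope during isolated analysis."""
    def getReserves(self) -> tuple[int, int]: ...
    def swap(self, amount0_out: int, amount1_out: int, to: str) -> None: ...

def check_boundary_assumptions(pair: PairLike, amount0_out: int, amount1_out: int):
    # interface assumption made explicit: requested output must fit in reserves
    r0, r1 = pair.getReserves()
    assert amount0_out < r0 and amount1_out < r1, "insufficient liquidity"
```

Writing the interface out this way forces every implicit assumption ("output fits in reserves") to become an explicit, checkable contract.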
Component Interaction Analysis
Reassembly phase verifies cross-component security. After isolated verification: analyze how components interact, verify assumptions hold across boundaries, and test compositional properties. The article labels this "Phase 4: The Reassembly (Interaction Risks)." Components that are individually secure may: have incompatible assumptions, create emergent vulnerabilities when combined, or expose security gaps at interfaces.
Reentrancy across components creates compositional risk. The article demonstrates: during a flash swap callback, the Pair's reserves are stale while its balances have changed. External contracts (other components, integrators) reading Pair state see: inconsistent data across different queries, potential manipulation vectors, and read-only reentrancy vulnerabilities. Compositional analysis catches these cross-component issues.
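A toy model (not the article's code) makes the stale-reserves window visible: during the callback, the cached reserves and the live balances disagree, so any integrator mixing the two reads an inconsistent snapshot:

```python
class Pair:
    """Toy model: cached reserves vs live balances during a flash swap."""

    def __init__(self, r0, r1):
        self.reserve0, self.reserve1 = r0, r1  # cached; synced at end of swap
        self.balance0, self.balance1 = r0, r1  # live token balances

    def flash_swap(self, amount0_out, callback):
        self.balance0 -= amount0_out  # optimistically transfer tokens out
        callback(self)                # reserves are STALE during this call
        # solvency check: the caller must have returned enough value
        assert self.balance0 * self.balance1 >= self.reserve0 * self.reserve1
        self.reserve0, self.reserve1 = self.balance0, self.balance1  # sync

def integrator_price_views(pair):
    # BUG pattern: mixing cached getReserves() data with a live balanceOf()
    mixed = pair.reserve1 / pair.balance0       # inconsistent mid-swap view
    consistent = pair.reserve1 / pair.reserve0  # consistent cached snapshot
    return mixed, consistent
```

Each class is correct in isolation; the vulnerability only exists in the composition, which is why the reassembly phase must probe exactly this window.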
Interface contract verification ensures safe composition. At each component boundary: verify caller validates inputs (don't trust upstream), verify callee handles all edge cases (defensive design), and verify state consistency (no mid-operation reads). The article warns: "integrating protocols are vulnerable if they calculate price based on a mix of getReserves() and balanceOf() during a flash swap."
Decomposition Benefits for Auditing
Reduced cognitive load improves audit quality. The article explains: "Attempting a linear audit forces the engineer to load every variable into their mental stack, leading to cognitive overflow and, inevitably, overlooked vulnerabilities." Decomposition reduces: variables tracked simultaneously (only current component), context switches (focused analysis sessions), and mental fatigue (manageable chunks).
Clear responsibility assignment enables parallel auditing. With well-defined components: different auditors can focus on different components, expertise can be matched (crypto expert on math, security expert on access control), and findings can be aggregated systematically. Decomposition enables: team-based auditing, specialist contribution, and comprehensive coverage.
Systematic vulnerability discovery through structured analysis. Rather than hoping to spot bugs: define invariants for each component, test invariants systematically, and analyze boundaries explicitly. The article's approach: decompose → define invariants → verify locally → verify composition. This systematic methodology: catches more bugs than linear review, provides coverage confidence, and enables formal verification.
Decomposition Patterns by Protocol Type
DEX/AMM decomposition separates pools from routing. Components: Pool contracts (Vault + Logic for individual markets), Router contracts (Logic for multi-hop trades), Factory contracts (Governance for deployment), and Oracle integration (Data for price feeds). Core-periphery architecture formalizes this separation. Analyze: Pool invariants, Router input validation, Factory permissions, and Oracle reliability independently.
Lending protocol decomposition separates markets from control. Components: Market contracts (Vault + Logic for individual assets), Comptroller/Controller (Logic for cross-market rules), Oracle integration (Data for collateral valuation), and Governance (admin functions). Analyze: market solvency, controller correctness, oracle manipulation resistance, and governance risk independently.
Bridge decomposition separates custody from messaging. Components: Custody contracts (Vault on each chain), Messaging layer (Data for cross-chain proofs), Validator set (Governance for proof validation), and Minting/burning logic (Logic for token representation). Analyze: custody security, message verification, validator trust assumptions, and mint/burn invariants independently.
Integration with Verification Tools
Invariant testing per component using Foundry/Echidna. The article demonstrates: "We do not write unit tests for trivial arithmetic. We write fuzz tests that attempt to break the protocol's laws." For each component: define handler functions (all component operations), define invariant assertions (component's laws), and run extensive fuzzing. Component-focused testing: enables deep exploration of single component's state space, catches component-specific bugs, and builds confidence before composition.
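The fuzzing loop the article describes can be sketched in miniature: random operation sequences against handler functions, with the component's law asserted after every step. This is a plain-Python illustration of the Echidna/Foundry pattern, with hypothetical handlers:

```python
import random

def fuzz_vault(steps=1_000, seed=42):
    """Miniature stateful invariant fuzz: random handler calls, with the
    component's laws checked after every single operation."""
    rng = random.Random(seed)
    balances, total = {}, 0
    users = ["alice", "bob", "carol"]
    for _ in range(steps):
        user = rng.choice(users)
        if rng.random() < 0.6:  # handler: deposit
            amount = rng.randint(1, 1_000)
            balances[user] = balances.get(user, 0) + amount
            total += amount
        else:                   # handler: withdraw (bounded by balance)
            amount = rng.randint(0, balances.get(user, 0))
            balances[user] = balances.get(user, 0) - amount
            total -= amount
        # the protocol's laws, not unit checks of trivial arithmetic
        assert total == sum(balances.values()), "accounting drift"
        assert all(b >= 0 for b in balances.values()), "negative balance"
    return total
```

Real tools add coverage guidance and shrinking, but the structure is the same: handlers define the reachable state space, invariants define the laws, and the fuzzer searches for a sequence that breaks them.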
Formal verification per component with Certora/Halmos. The article notes: "Tools like Halmos, Certora, or Hevm cannot 'solve' an entire protocol at once—the state space is too vast. They require defined invariants for specific components." Decomposition enables formal verification: manageable state spaces (single component), clear specifications (component invariants), and tractable proofs (limited scope).
Compositional verification across components. After component verification: verify interface contracts hold, prove composition preserves properties, and test integrated system. This layered approach: builds on component guarantees, focuses composition analysis on boundaries, and provides comprehensive assurance.
Decomposition Documentation
Architecture diagrams visualize component structure. Document: each component and its role, interfaces between components, data and value flows, and trust assumptions at boundaries. The article shows "Uniswap V2 External Calls Map" as example. These diagrams: communicate system structure, guide audit prioritization, and document security model.
Component specifications define local requirements. For each component: state variables and their meaning, functions and their invariants, inputs and validation requirements, and outputs and guarantees. These specifications: guide implementation review, define test requirements, and serve as formal verification specs.
Boundary contracts document interface assumptions. At each trust boundary: what component A assumes about component B, what validation B provides, and what happens if assumptions violated. These contracts: clarify responsibilities, identify assumption gaps, and guide integration testing.
Applying Decomposition Methodology
Step 1: Documentation review before code. Read: protocol documentation, architecture diagrams, deployment scripts, and developer comments. Build initial understanding of: intended component structure, expected data flows, and claimed security properties.
Step 2: Static architecture mapping from code. Verify documentation against code: identify all contracts, categorize into component clusters, map function call graphs, and identify storage locations. Update architecture understanding based on code reality.
Step 3: Dynamic analysis of interactions. Trace: typical user flows (deposit, swap, withdraw), edge case flows (liquidation, emergency), and attack flows (common vulnerability patterns). Identify: unexpected interactions, boundary crossings, and potential vulnerabilities.
Step 4: Component-by-component audit in priority order. For each component (high-risk first): define invariants, review implementation, fuzz test, and document findings. Build: component security assessments, verified invariants, and identified issues.
Step 5: Composition verification reassembling components. After component audits: analyze cross-component interactions, verify boundary assumptions, and test integrated system. Catch: compositional vulnerabilities, assumption mismatches, and integration bugs.
Understanding architectural decomposition is essential for auditing complex DeFi systems effectively. The article positions decomposition as the solution to cognitive limitations: "Divide and Conquer is a necessity for correctness in high-value systems." The methodology transforms auditing from overwhelming linear review into structured component analysis—decompose into Vault/Logic/Oracle/Governance clusters, define local invariants, verify components in isolation, and analyze composition at trust boundaries.

This approach: reduces cognitive load (manageable chunks), enables systematic coverage (clear component boundaries), supports formal methods (tractable verification scopes), and catches compositional bugs (explicit boundary analysis).

For auditors, decomposition methodology provides: a clear starting point (map architecture), a systematic process (component by component), and comprehensive coverage (local verification + composition analysis). For developers, understanding decomposition helps: design audit-friendly architectures, define clear component boundaries, and document security assumptions explicitly.
Related Terms
Trust Boundary
Interface where data enters protocol or assets move between components, representing highest-risk areas requiring focused security analysis.
Protocol Solvency
Mathematical guarantee that protocol maintains sufficient reserves to honor all obligations, verified through invariant testing and formal methods.
Invariant Testing
Property-based testing approach verifying that critical protocol conditions remain true across all possible execution paths.

