Defense-in-Depth Smart Contract Workflow: Beyond Static Checklists
Audit · Web3 Security · Security Checklist · Testing · Tools · Solidity


TL;DR — Why checklists alone won't save your protocol

In Web3, "move fast and break things" is not a development methodology; it is a financial death sentence. A single line of erroneous code can result in the irreversible loss of nine-figure sums.
Most developers attempt to mitigate this risk with checklists. However, "checklist fatigue" is real. A static PDF list of 100+ items often becomes a bureaucratic hurdle rather than an engineering tool. Developers tick boxes without understanding the attack vectors, leading to a false sense of security.
This article outlines a methodology to transform static security checklists—based on standards like the SCSVS (Smart Contract Security Verification Standard)—into a dynamic, defense-in-depth engineering workflow.

1. The hierarchy of verification

A monolithic checklist is inefficient. It forces a developer to check for gas optimizations while trying to architect the economic game theory. To engineer security effectively, we must stratify verification into three levels of abstraction, aligning them with the SDLC (Software Development Life Cycle).
[Figure: The three levels of verification hierarchy: architecture, implementation, and operational]

Level 1: Architecture and design (the blueprint)

This layer addresses the fundamental economic logic and trust assumptions. It must be applied before Solidity is written.
  • Trust assumptions: Who holds the admin keys? Is the protocol resistant to centralization vectors?
  • Economic incentives: Are there mechanisms that incentivize malicious behavior (e.g., cheap griefing attacks)?
  • Composability: How does the protocol handle oracle failure or dependent protocol pauses?
Note: Data from audit firms like Zealynx suggests that a significant percentage of GameFi and DeFi exploits stem specifically from logic errors and tokenomics flaws—issues that exist in the design phase, invisible to code scanners.

Level 2: Implementation (the code)

This is the granular layer, focusing on Solidity patterns. This layer is the primary domain of automated tooling.

Level 3: Operational (the live environment)

A secure contract deployed with insecure parameters is a vulnerability.
  • Key management: Multisig quorum configurations.
  • Timelocks: Verification of delay parameters against governance proposals.
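These operational checks can themselves be automated rather than ticked off by hand. Below is a minimal Python sketch of a release-script guardrail; the function name, thresholds, and ratio are illustrative assumptions, not industry standards:

```python
# Illustrative pre-deployment parameter check (names and thresholds are
# hypothetical policy choices). Level 3 items like multisig quorum and
# timelock delay become assertions in a release script.

MIN_QUORUM_RATIO = 0.5          # more than half of signers must approve
MIN_TIMELOCK_DELAY = 2 * 86400  # two days, in seconds

def check_operational_params(signers: int, quorum: int, timelock_delay: int) -> list[str]:
    """Return a list of violations; an empty list means the config passes."""
    violations = []
    if quorum <= signers * MIN_QUORUM_RATIO:
        violations.append(f"quorum {quorum}/{signers} does not exceed half of signers")
    if timelock_delay < MIN_TIMELOCK_DELAY:
        violations.append(f"timelock delay {timelock_delay}s is below {MIN_TIMELOCK_DELAY}s")
    return violations
```

A check like this runs in seconds on every deployment, which is exactly where hand-maintained checklists tend to fail.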

2. Shift left: Threat modeling and invariants

The most effective security practice is "shifting left"—integrating security constraints into the design phase.
[Figure: Shift-left security: moving verification earlier in the development lifecycle]

STRIDE for smart contracts

We adapt the Microsoft STRIDE model to blockchain-specific vectors:
  • Spoofing: Is authentication based on msg.sender rather than the phishable tx.origin?
  • Tampering: Can on-chain oracle data be manipulated via flash loans?
  • Repudiation: Do critical state changes emit events?
  • Information disclosure: Is private data (e.g., commit-reveal salts) exposed?
  • Denial of service: Are there unbounded loops or gas griefing vectors?
[Figure: STRIDE threat model mapped to smart contract attack vectors]
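Keeping the threat model as machine-readable data, rather than prose, lets a review script emit a concrete checklist per contract. This is a sketch: the categories come from STRIDE, but the specific questions are examples, not an exhaustive catalogue.

```python
# STRIDE categories mapped to example smart contract review questions.
# The questions are illustrative; extend them per protocol.

STRIDE_WEB3 = {
    "Spoofing": "Is authorization based on msg.sender, never tx.origin?",
    "Tampering": "Can oracle inputs be moved within one transaction (flash loan)?",
    "Repudiation": "Does every critical state change emit an event?",
    "Information disclosure": "Are commit-reveal salts or secrets derivable on-chain?",
    "Denial of service": "Are there unbounded loops or gas-griefing sinks?",
    "Elevation of privilege": "Can a non-admin reach admin-only state transitions?",
}

def checklist_for(contract: str) -> list[str]:
    """Render the threat model as concrete review items for one contract."""
    return [f"[{contract}] {category}: {question}"
            for category, question in STRIDE_WEB3.items()]
```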

Defining system invariants

An invariant is a fundamental truth about the system that must always hold, regardless of market conditions. These should be defined in the checklist and then translated into code.
Example invariants:
  1. Solvency: TotalAssets >= TotalLiabilities
  2. Conservation: Supply_t1 == Supply_t0 + Minted - Burned
[Figure: Flow from abstract invariant to checklist item to fuzz test]
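The two example invariants above translate directly into executable predicates. In practice these become Foundry invariant tests; here they are shown as plain Python checks over a snapshot of protocol state to make the translation step concrete:

```python
# Hedged sketch: the example invariants as executable checks over a
# state snapshot. Function and parameter names are illustrative.

def solvency_holds(total_assets: int, total_liabilities: int) -> bool:
    # Invariant 1: TotalAssets >= TotalLiabilities
    return total_assets >= total_liabilities

def conservation_holds(supply_t0: int, supply_t1: int, minted: int, burned: int) -> bool:
    # Invariant 2: Supply_t1 == Supply_t0 + Minted - Burned
    return supply_t1 == supply_t0 + minted - burned
```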

3. Automating compliance: The static layer

Manual review should not be wasted on syntax. We operationalize Level 2 of our hierarchy using automated static analysis.

Slither configuration

Slither is the industry workhorse for static analysis. However, running it with default settings often produces too much noise. To integrate it effectively into a CI/CD pipeline, you must tune it to focus on high-value detectors.
Create a slither.config.json to filter noise and focus on critical detectors (like reentrancy or uninitialized state variables):
```json
{
  "detectors_to_exclude": [
    "naming-convention",
    "pragma",
    "solc-version"
  ],
  "filter_paths": [
    "node_modules",
    "test"
  ]
}
```
By adding this to your repository, you ensure that every Pull Request is automatically checked against the "syntax check" layer of your security framework.
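To actually fail the build on findings, the pipeline needs a gate over Slither's report (produced with `slither . --json report.json`). The sketch below assumes the report layout used by recent Slither versions, where findings live under `results.detectors` with an `impact` field; verify the field names against your installed version:

```python
# CI gate sketch over Slither's JSON report. The schema assumption
# (results.detectors[].impact / .description) should be verified against
# the installed slither-analyzer version.

BLOCKING_IMPACTS = {"High", "Medium"}

def blocking_findings(report: dict) -> list[str]:
    """Return descriptions of findings severe enough to fail the build."""
    detectors = report.get("results", {}).get("detectors", [])
    return [d["description"] for d in detectors
            if d.get("impact") in BLOCKING_IMPACTS]
```

A wrapper script loads the JSON, prints each blocking finding, and exits non-zero if the list is non-empty, which fails the pipeline step.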

Custom detectors

For project-specific logic, standard tools fail. If your protocol requires that "every asset in the lending pool must have a non-zero collateral factor," you can write a custom Slither Python script to enforce this. This turns a manual checklist item into an automated guardrail.
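As a concrete illustration of that guardrail, the rule can first be expressed as a plain check over a deployment config; a production version would subclass Slither's Python detector API and inspect the contracts themselves, but the stdlib sketch below shows the same rule as a CI check (the config shape is a hypothetical example):

```python
# Checklist rule "every asset in the lending pool must have a non-zero
# collateral factor" as an automated guardrail over a deployment config.
# The config shape (symbol -> collateral factor) is illustrative.

def assets_missing_collateral_factor(pool_config: dict[str, int]) -> list[str]:
    """Return the asset symbols whose collateral factor is zero."""
    return sorted(asset for asset, cf in pool_config.items() if cf == 0)
```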

4. Dynamic verification: Property-based testing

Static analysis catches patterns; dynamic analysis (fuzzing) catches edge cases. This is where we mathematically verify the invariants defined in phase 1.

Foundry and invariant tests

Foundry (Forge) allows developers to write stateful fuzz tests in Solidity. Unlike unit tests, which check one input, fuzzers generate thousands of random transaction sequences to try and break your invariants.
Checklist item: "User balances must never exceed total supply."
Code implementation (Foundry):
```solidity
// Defined in your test file. The fuzzer will call random functions
// (mint, burn, transfer) and check this assertion after every step.
function invariant_totalSupply() public view {
    assert(token.totalSupply() >= token.balanceOf(msg.sender));
}
```
If the fuzzer finds a sequence of calls that causes totalSupply to be less than a user's balance, the test fails. This automates the verification of complex economic logic that is impossible to verify via simple unit testing.
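To make the mechanism tangible outside of Foundry, here is a toy stateful fuzzer over a simplified token: it applies a random operation sequence and re-checks the invariant after every step. This is a teaching sketch of the idea, not a substitute for Foundry's coverage-guided fuzzing:

```python
# Toy stateful fuzzer: random mint/burn sequences against a simplified
# token, with the invariant checked after every step.
import random

def fuzz_token_invariant(steps: int = 1000, seed: int = 0) -> bool:
    rng = random.Random(seed)
    total_supply = 0
    balances: dict[str, int] = {}
    for _ in range(steps):
        user = rng.choice(["alice", "bob"])
        op = rng.choice(["mint", "burn"])
        amount = rng.randrange(100)
        if op == "mint":
            balances[user] = balances.get(user, 0) + amount
            total_supply += amount
        else:  # burn, capped at the user's balance
            burn = min(amount, balances.get(user, 0))
            balances[user] = balances.get(user, 0) - burn
            total_supply -= burn
        # Invariant: no single balance may exceed total supply.
        if any(b > total_supply for b in balances.values()):
            return False
    return True
```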

5. The human element: The "Swiss cheese" model

Automation cannot understand business intent. The manual audit workflow uses the checklist to catch what tools miss.

The "Swiss cheese" discovery model

In complex systems, catastrophic failure rarely happens due to a single bug. It happens when multiple minor weaknesses align.
  1. Hole 1: A gas optimization removes a check (Low Severity).
  2. Hole 2: A lax access control modifier (Medium Severity).
  3. Result: An attacker combines them to drain the vault.
[Figure: The Swiss cheese model applied to smart contract security, showing how layers of defense block attack vectors]

Vulnerability scoring

When manual review identifies a checklist violation, it must be scored based on Impact vs. Likelihood.
  • Critical: Easy to exploit, total loss of funds.
  • High: Difficult to exploit with total loss of funds; or easy to exploit with partial loss, such as yield.
  • Medium: Limited impact.
This prioritization allows teams to focus engineering resources on "fund-safety" issues rather than style nits.
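Encoding the scoring rubric as a lookup keeps triage consistent across reviewers. The cell values below mirror the definitions above; treat them as one illustrative policy rather than an industry standard:

```python
# Impact/likelihood severity matrix as a lookup table. Cell values are
# an illustrative policy matching the definitions in the text.

SEVERITY = {
    ("high", "high"): "Critical",   # easy to exploit, total loss of funds
    ("high", "low"): "High",        # hard to exploit, total loss of funds
    ("medium", "high"): "High",     # easy to exploit, partial/yield loss
    ("medium", "low"): "Medium",
    ("low", "high"): "Medium",
    ("low", "low"): "Low",
}

def score(impact: str, likelihood: str) -> str:
    """Look up severity from an impact/likelihood pair (case-insensitive)."""
    return SEVERITY[(impact.lower(), likelihood.lower())]
```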

Conclusion: Security as a process

A checklist is not a certificate of safety; it is a map for exploration.
By structuring requirements hierarchically, automating the syntax checks with tools like Slither, and mathematically verifying economic truths with Foundry, we transform the checklist from a passive document into an active defense system.
Security is not a final gate before deployment. It is a continuous loop of threat modeling, automated analysis, and invariant testing. As the DeFi ecosystem evolves, so too must your invariants, transforming every past exploit into a future defense.

Get in touch

Building a DeFi protocol and want to move beyond static checklists? At Zealynx, we engineer defense-in-depth security workflows that combine automated analysis with expert manual review.
Ready to secure your protocol? Get a quote or reach out directly to discuss your project.

FAQ: Defense-in-depth workflow

1. What is defense-in-depth for smart contracts?
Defense-in-depth is a layered security strategy that combines multiple independent protections—architecture review, automated static analysis, dynamic fuzzing, and manual audits—rather than relying on a single checklist or tool to secure your protocol.
2. Why do security checklists fail on their own?
Static checklists suffer from "checklist fatigue." Developers tick boxes without understanding the underlying attack vectors, leading to false confidence. Checklists also can't catch emergent vulnerabilities that arise from the interaction of multiple contract components.
3. What is the STRIDE model and how does it apply to Web3?
STRIDE is a threat modeling framework originally developed by Microsoft. It stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Adapted for smart contracts, it maps to vectors like tx.origin spoofing, oracle manipulation, missing event emissions, and gas griefing.
4. How do invariant tests differ from unit tests?
Unit tests verify specific input-output scenarios the developer anticipates. Invariant tests define fundamental properties that must always hold (e.g., "total supply equals sum of all balances") and use fuzzers to generate thousands of random transaction sequences, catching edge cases no developer would think to test manually.
5. When should I start threat modeling for my protocol?
Threat modeling should begin before any Solidity is written—during the architecture and design phase. Defining trust assumptions, economic incentives, and system invariants upfront is far cheaper and more effective than discovering design-level flaws during a post-implementation audit.

Glossary

Threat Modeling: Structured process of identifying, evaluating, and prioritizing potential security threats to a system during the design phase.
STRIDE: Microsoft-developed threat classification framework covering Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
Shift Left: Security practice of integrating testing and verification earlier in the software development lifecycle rather than post-implementation.
Defense in Depth: Layered security strategy combining multiple independent protections rather than relying on single security measures.
Property-Based Testing: Testing approach that verifies general properties or invariants hold across randomly generated inputs rather than checking specific examples.
Swiss Cheese Model: Risk analysis model showing how failures align across multiple defense layers to cause catastrophic breaches.



© 2024 Zealynx