Shadow Audits: How to Learn Web3 Security by Breaking Real Protocol Forks
Web3 Security · Audit · DeFi · Tutorial


15 min read
The security mindset you want takes years to build. Shadow audits compress that into weeks.

TL;DR — Quick Summary

  • A shadow audit is a training exercise: you review a real protocol that has already been publicly audited, inside a time-boxed window, and your findings are scored against the known answer key.
  • Shadow audits compress the learning curve because you get instant feedback and graded ground truth — something real audits never offer.
  • The Zealynx Academy Shadow Arena has five live targets: Basin (a protocol that ran a $40k public contest), ElasticSwap, Velodrome, Flux Finance, and Canto v2. Together they total 10,163 lines of real Solidity with 46 documented bugs.
  • Shadow audits are not just for aspiring auditors. Builders about to fork a protocol benefit the most — they see what the teams before them broke, class by class.
  • You submit findings during the window. True positives earn Lynx. False positives cost Lynx after the first three. When the window closes, you get the full review.

How Top Auditors Actually Built Their Skills

If you ask the most respected security researchers in Web3 how they got good at auditing, the answer is not "I took a course." The answer is almost always some variation of: "I picked a protocol that had just finished a contest, spent a week reviewing it, then compared my findings to the official report."
This is shadow auditing. You do not wait for a paying client. You do not wait for a contest you are qualified for. You pick something already-graded, do the work, check your answers. The contests published by Code4rena, Sherlock, Cantina, Immunefi, and CodeHawks are open for anyone to review after they close — thousands of hours of real protocol code, with the ground truth already published.
The discipline of reviewing and then comparing your findings to the known answers is where the security mindset actually develops. You start to see the same classes of bugs appearing across protocols. You start to recognize the patterns: "This contract is calling a pool's balance mid-swap, that looks like a reentrancy risk." Then you check the report and confirm — yes, that was the critical finding. Your intuition was right. Next time, you will trust it faster.
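To make that kind of red flag concrete, here is a minimal, invented sketch of the shape the quote describes: a value priced off a pool's live balances, which an attacker can skew within a single transaction. None of this code comes from an Arena target.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

interface IERC20 {
    function balanceOf(address account) external view returns (uint256);
}

// Hypothetical example: anything that mints, burns, or liquidates against
// this "price" can be manipulated with a flash loan, a sandwiching swap,
// or a reentrant callback that moves the pool's balances mid-transaction.
contract NaiveSpotOracle {
    IERC20 public immutable token0;
    IERC20 public immutable token1;
    address public immutable pool;

    constructor(IERC20 _token0, IERC20 _token1, address _pool) {
        token0 = _token0;
        token1 = _token1;
        pool = _pool;
    }

    // The red flag: a price derived from live balanceOf() reads.
    // A time-weighted or checkpointed oracle avoids this.
    function spotPrice() external view returns (uint256) {
        return (token1.balanceOf(pool) * 1e18) / token0.balanceOf(pool);
    }
}
```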
This approach works. The problem has always been the setup — finding targets, getting the code, building the test environment, tracking your findings, comparing to the ground truth — which adds enough friction that most people never start. Zealynx Academy's Shadow Arena removes that friction.

What the Shadow Arena Actually Is

The Shadow Arena is five live shadow-audit targets, each one a real protocol that went through a real public security contest. The contest findings are already documented. Your job is to rediscover them inside a time-boxed window.

The Five Current Targets

| Target | Category | SLOC | Documented Findings | Difficulty | Time Window |
|---|---|---|---|---|---|
| Basin | DEX / AMM | 1,145 | 14 | Intermediate | 7 days |
| ElasticSwap | DEX / AMM | 739 | 3 | Beginner | 2 days |
| Velodrome | DEX / AMM | 1,914 | 13 | Advanced | 4 days |
| Flux Finance | Lending | 4,365 | 6 | Beginner | 2 days |
| Canto v2 | Lending | 2,000 | 10 | Intermediate | 4 days |
That is 10,163 lines of real Solidity. 46 documented bugs, most of them originally found during paid competitive audits and bug bounty programs. Basin specifically ran a $40,000 public security contest — the findings you are looking for were worth real money to the original finders.

The Session Flow

  1. Pick a target. The Shadow Arena landing page shows all 5 with difficulty ratings and time windows.
  2. Start the timer. The session begins when you open the audit. You have the stated window (2, 4, or 7 days) before results are revealed.
  3. Work the code. Read contracts. Trace token flows. Check invariants. Submit each finding with severity, location, description, and impact.
  4. Submit findings. Each submission is evaluated immediately against the known answer key. True positives earn Lynx based on severity. False positives cost Lynx after your first three (the allowance).
  5. Window closes. You see the full review: what you found, what you missed, and a walkthrough of each documented bug from the original contest.
That last step is where most of the learning happens. Getting a "you missed this" on a Critical finding, with an explanation of why it matters, is how pattern recognition compounds.

Why Builders Benefit Most (Not Just Auditors)

Most people assume shadow audits are only useful for someone who wants to become a professional auditor. That is wrong. The builders who benefit most from shadow audits are the ones about to fork a protocol.
Consider: you are planning to fork Compound V2 to launch a lending protocol. You have two options.
Option A: Fork the code, read the whitepaper, maybe skim a few audit reports. Deploy. Hope for the best.
Option B: Run a shadow audit on Flux Finance and Canto v2 — two real Compound V2 forks — inside the Academy. You see exactly what those teams changed, what they broke, and which bugs came back around when they modified the original. By the time you fork for real, you know which modifications are dangerous, which patterns to preserve, and which corner cases your own tests need to cover.
This is the pattern recognition a paid auditor builds over years, compressed into a weekend of focused work. Not because you replace the auditor — the auditor still catches things you would miss — but because you ship with a meaningfully safer codebase before the audit even begins.
Our own experience: across 30+ smart contract audits at Zealynx Security, the teams that had clearly done this kind of review before engaging us came out of the audit with an order of magnitude fewer High/Critical findings. Not because they were better coders. Because they had internalized the failure modes that actually kill forks.

What 46 Documented Bugs Teach You

Across the five Shadow Arena targets, 46 bugs span the most common vulnerability classes in DeFi:
Reentrancy variants — not just the classic ones, but read-only reentrancy, cross-function reentrancy, and callback reentrancy via hooks. Canto v2's findings include several of these in the borrow/repay flow.
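To show the flavor, here is a minimal, invented sketch of read-only reentrancy. The names are made up and this is not the Canto v2 code, but the ordering bug has the same shape:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical pool: ETH is sent to the caller *before* the internal
// reserve accounting is updated, so a contract receiving that ETH can
// re-enter a different protocol that trusts getVirtualPrice() and read
// a temporarily inflated value.
contract ToyPool {
    uint256 public reserve;        // pooled ETH, per internal accounting
    uint256 public totalShares;
    mapping(address => uint256) public shares;

    function deposit() external payable {
        shares[msg.sender] += msg.value;
        totalShares += msg.value;
        reserve += msg.value;
    }

    function withdraw(uint256 amount) external {
        shares[msg.sender] -= amount;
        totalShares -= amount;
        // BUG: the external call happens before `reserve` is decremented.
        // During the callback, reserve still includes `amount`, so the
        // virtual price below is too high for any protocol reading it.
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        reserve -= amount;
    }

    // A lending market pricing collateral with this view can be tricked
    // during the withdraw() callback, even though it never re-enters
    // this pool in the classic sense.
    function getVirtualPrice() external view returns (uint256) {
        if (totalShares == 0) return 1e18;
        return (reserve * 1e18) / totalShares;
    }
}
```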
Oracle manipulation — Basin's pump (on-chain oracle) mechanism has a specific update-frequency assumption that breaks if updates get throttled. Several of Basin's findings are oracle-related.
Accounting and rounding errors — Velodrome's gauge system has precise math about reward distribution. Several findings relate to rounding in the wrong direction, accumulating LP losses.
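A minimal, invented sketch of how truncation leaks value in a Synthetix-style reward accumulator (not the actual Velodrome gauge code, just the same class of rounding mistake):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical gauge accumulator with a rounding-direction problem.
contract ToyGauge {
    uint256 public rewardRate;            // reward tokens streamed per second
    uint256 public totalSupply;           // staked LP tokens
    uint256 public rewardPerTokenStored;  // accumulator, scaled by 1e18
    uint256 public lastUpdate;

    function updateReward() public {
        uint256 elapsed = block.timestamp - lastUpdate;
        lastUpdate = block.timestamp;
        if (totalSupply == 0 || elapsed == 0) return;
        // Truncating division: every update discards up to just under one
        // unit of the scaled accumulator, which is up to ~totalSupply / 1e18
        // reward tokens that no staker can ever claim. Frequent updates and
        // a large totalSupply turn that dust into a steady LP loss.
        rewardPerTokenStored += (elapsed * rewardRate * 1e18) / totalSupply;
    }
}
```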
Fork-specific regressions — Flux Finance and Canto v2 are Compound V2 forks with modifications. Several of their findings are about the modifications themselves: KYC checks that can be bypassed, custom interest rate models that have edge cases, and cNote-specific logic that introduces new invariants.
Access control drift — modifications to admin functions in forks often lose a subtle check from the original. Several findings are admin-function regressions.
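A made-up before/after showing what that drift looks like (not a specific Arena finding, just the pattern):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical original: parameter changes are gated and bounded.
contract Original {
    address public admin;
    uint256 public reserveFactor;

    constructor() { admin = msg.sender; }

    function setReserveFactor(uint256 newFactor) external {
        require(msg.sender == admin, "only admin");
        require(newFactor <= 0.5e18, "factor too high");
        reserveFactor = newFactor;
    }
}

// Hypothetical fork: a "convenience" setter added during the fork drops
// both the admin check and the upper bound -- anyone can now set the
// reserve factor to anything.
contract Fork {
    address public admin;
    uint256 public reserveFactor;

    constructor() { admin = msg.sender; }

    function setReserveFactor(uint256 newFactor) external {
        reserveFactor = newFactor;
    }
}
```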
DoS vectors — a few findings are denial-of-service through gas griefing or forced revert patterns.
After working through all 46, you will not just recognize these classes — you will see their shape before reading the code closely. "This function loops over every depositor, that's going to DoS once someone seeds it with thousands of dust deposits" is the kind of instinct that starts forming.
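Here is a minimal, invented example of that unbounded-loop shape (not code from any Arena target):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical distributor that pays everyone out in a single loop.
contract ToyDistributor {
    address[] public depositors;
    mapping(address => uint256) public deposits;

    function deposit() external payable {
        require(msg.value > 0, "zero deposit");
        if (deposits[msg.sender] == 0) depositors.push(msg.sender);
        deposits[msg.sender] += msg.value;
    }

    // BUG 1: unbounded loop -- thousands of dust deposits from throwaway
    // addresses push this past the block gas limit and it can never finish.
    // BUG 2: transfer() to a depositor contract that reverts bricks the
    // whole distribution for everyone (forced-revert DoS).
    function distribute() external {
        uint256 total = address(this).balance;
        uint256 sum = totalDeposits();
        for (uint256 i = 0; i < depositors.length; i++) {
            address user = depositors[i];
            payable(user).transfer((total * deposits[user]) / sum);
        }
    }

    function totalDeposits() public view returns (uint256 sum) {
        for (uint256 i = 0; i < depositors.length; i++) {
            sum += deposits[depositors[i]];
        }
    }
}
```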

How Scoring Works

The Shadow Arena's scoring system is designed to penalize careless submissions and reward careful work — the same economics a real competitive audit enforces.

Point Values

| Severity | True Positive Reward | False Positive Cost (after first 3) |
|---|---|---|
| Critical | +75 Lynx | -25 Lynx |
| High | +50 Lynx | -15 Lynx |
| Medium | +25 Lynx | -10 Lynx |
| Low | +10 Lynx | -5 Lynx |
| Informational | +3 Lynx | 0 (no cost) |

The First Three Rule

Your first three false positives are free. After that, every false positive costs Lynx at the rate above. This matches the incentive structure of real contests: submit everything you see early, refine your judgment over time, and be more selective once you have built intuition.
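To make the economics concrete, here is a quick hypothetical session scored with the table above: you submit one Critical, one High, and two Mediums that match the answer key (+75 +50 +25 +25 = +175 Lynx), plus five findings that do not. The first three false positives are free; if the remaining two were submitted as Medium, they cost 10 Lynx each, leaving you at 155 Lynx for the session.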

Severity Mismatch

If you submit a finding as Critical and the original report graded it as High, you get partial credit. The scoring recognizes valid findings even when severity is off, but rewards accurate severity calibration.

The Leaderboard

Your cumulative Lynx across all targets appears on the public leaderboard. This is not just a vanity metric — Web3 hiring increasingly looks at verifiable on-platform track records. A sustained leaderboard position is real social proof.

A Sample Session: Rediscovering a Basin Bug

To make this concrete, here is what a shadow audit session actually looks like in practice. The setup: you are auditing Basin, a protocol that modularizes the AMM into Wells (liquidity pools), Well Functions (pricing invariants), Pumps (oracles), and Aquifers (factories).
You spend the first day reading the architecture — the Well.sol core, the ConstantProduct2 pricing function, the MultiFlowPump oracle. By the middle of day two you have mapped the token flows and understand how a swap composes through these modules.
Day 3: you focus on the Pump. The pump has an update path that records reserve data, and its design assumes it runs at least once every updateFrequency blocks. You notice that the read functions external integrations rely on only behave correctly if the pump has been updated recently.
You trace it: if nothing touches the Well for long enough, the pump's stored data goes stale and the read path reverts. Any protocol consuming Basin's oracle goes down with it.
You write this up as a High severity finding: "After updateFrequency blocks of inactivity, the pump's read functions revert. This breaks any external integration that reads prices from Basin (UI dashboards, analytics, other protocols) until someone triggers a fresh update."
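The sketch below is a made-up miniature of that failure shape, with invented names rather than the actual Basin pump code. It shows why a quiet period takes every downstream reader offline:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical sketch of a staleness revert (not the real Basin code).
contract ToyPump {
    uint256 public lastUpdateBlock;
    uint256 public storedPrice;
    uint256 public constant UPDATE_FREQUENCY = 100; // blocks

    // In the real protocol this would be driven by Well interactions;
    // here anyone can push a price to keep the data fresh.
    function update(uint256 newPrice) external {
        storedPrice = newPrice;
        lastUpdateBlock = block.number;
    }

    // Integrators calling this after UPDATE_FREQUENCY blocks of inactivity
    // get a revert instead of a price, so a quiet period takes every
    // downstream consumer offline until someone updates again.
    function readPrice() external view returns (uint256) {
        require(block.number - lastUpdateBlock <= UPDATE_FREQUENCY, "stale price");
        return storedPrice;
    }
}
```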


You submit. The system evaluates against the known answer key. You get +50 Lynx.
Now imagine the alternate: you miss this finding. When the window closes, you see it. You think: "that was in plain sight, I walked right past it." That feeling — that specific flavor of "I should have caught this" — is the moment pattern recognition installs itself. You will catch the next one.

Why the Zealynx Academy Version Works

There are many ways to learn security. A short list of why the Shadow Arena specifically accelerates learning:
Time pressure is real. Real audits happen on deadlines. Practicing under time pressure builds the muscle of prioritization — looking for the biggest classes of bugs first, not chasing every micro-optimization. The 2/4/7-day windows force this.
Scoring is immediate. You do not submit to a Discord and wait for a reviewer. The Lynx value of each finding is calculated the moment you submit. This tightens the feedback loop dramatically — you stop doubting a category of finding after you see one succeed, and you stop repeating a category that failed.
The answer key exists. This sounds obvious but is rare. Most audit training material uses synthetic bugs in crafted scenarios. Shadow audits use real production code with real exploitation paths, and the "answer key" is what the original contest auditors actually found.
Post-window review is structured. When the window closes, you do not just see a score. You see each original finding with explanation, severity justification, and impact analysis. This is where the mental model gets upgraded.
Variety of targets. AMMs, lending protocols, concentrated liquidity, fork variations. You stop thinking about Web3 security as one big topic and start recognizing that different protocol categories have different failure modes.

How to Start

  1. Do the Uniswap V2 build first. The Build module gives you the foundation for understanding the AMM targets in the Arena. Without that background, you are reading forks without knowing what the original was supposed to do.
  2. Pick a target matched to your current level. ElasticSwap (2 days, beginner) or Flux Finance (2 days, beginner) are good first targets. Save Velodrome (4 days, advanced) for later.
  3. Commit to the full window. The timer starts when you open the audit. Do not half-commit. The learning compounds when you engage seriously.
  4. Submit speculatively early. Use your allowance of three free false positives. It is there to reward boldness.
  5. Review the answer key carefully when the window closes. Not just "did I get this right" — spend time understanding why the answer is what it is. The severity justifications matter almost as much as the bug itself.
Everything runs in-browser at academy.zealynx.io/shadow-arena. No Foundry setup. No forking the repo. The platform handles it.

Shadow Audits Are a Public Good

A quick note: the concept of shadow auditing depends on public contests existing. Code4rena, Sherlock, Cantina, CodeHawks — these organizations do extraordinary work making audit results transparent and available. The Shadow Arena builds on top of that foundation and directs learners back to the original ecosystem.
If you appreciate what this kind of public-goods infrastructure enables, consider supporting the Giveth Ethereum Security QF round backed by TheDAO Security Fund's 500 ETH matching pool. The round runs April 21 – May 12, 2026 and funds Ethereum security work, including education platforms like Zealynx Academy. Because matching is quadratic, a $5 donation from a new supporter can unlock significantly more of the pool than a much larger amount concentrated among a few donors. Details and donation guide.

Conclusion

The security mindset is not something you can read your way into. It develops through repetition — reviewing real code, finding real bugs, comparing your findings to the ground truth, and correcting your instincts over time.
The Shadow Arena compresses that process. Five targets, 46 documented bugs, immediate scoring, structured post-window review. Builders about to fork a protocol benefit most because they see exactly what the teams before them broke — and can avoid those same mistakes.
Announcement and full platform: Zealynx Academy Is Public
Support the Ethereum Security QF round: giveth.io/project/zealynx-academy

FAQ

1. What's the difference between a shadow audit and a competitive audit?
A competitive audit (on Code4rena, Sherlock, or similar) is a live contest on an unreleased or recently-released codebase. Auditors compete for a prize pool based on who finds the most valid bugs. A shadow audit is a training exercise on a past competitive audit — the findings are already known and documented. You compete against the already-published answer key to build skills.
2. Do I need audit experience to start?
No. Shadow audits are a way to build audit experience. The beginner-level targets (ElasticSwap, Flux Finance) are specifically designed to be approachable. You will miss findings early on — that is expected and the post-window review is where you learn from the misses.
3. How long should I spend on a target?
The full stated window. 2 days on the beginner targets, 4 or 7 days on the intermediate and advanced ones. Spending less than the window means you submit fewer findings and learn less. Spending more than the window is impossible — the session closes automatically.
4. What if I submit a finding the answer key doesn't have?
That's a false positive in scoring terms. The first three are free; subsequent ones cost Lynx. In real life, sometimes the original contest missed a finding — but the Shadow Arena scores against the published answer key, so even legitimate "novel" findings you identify count as false positives in this context.
5. Can I audit a target more than once?
Each target has a single attempt per user. Once the window closes, you see the full review and the session is complete. If you want more practice, pick another target — or wait for new ones to roll out.
6. Does this help if I don't want to be a professional auditor?
Yes — arguably more than if you do. Builders about to fork a protocol are the audience that benefits most from seeing what the teams before them broke. The pattern recognition transfers directly to writing your own code more defensively.
7. What comes after finishing all five targets?
More targets roll out over time. Upcoming targets include lending protocols (Aave, Morpho-style forks), bridges, and stablecoin designs. You can also take the skills developed in the Arena into live competitive audits on Code4rena or Sherlock — that is the natural next step for anyone pursuing security work as a career.

Glossary

| Term | Definition |
|---|---|
| Shadow Audit | A training exercise where you review a real protocol from a past public security contest inside a time-boxed window, scored against the published contest results. |
| Competitive Audit | A live security audit contest where multiple auditors review the same codebase and split a prize pool based on which valid bugs they report. |
| Audit Scope | The specific files, functions, and interactions an auditor is contracted to review. Defines what counts as in-scope vs out-of-scope during the engagement. |
| Reentrancy Attack | An attack where a malicious contract calls back into the target contract during an external call, exploiting state that has not yet been updated. |
| Bug Bounty | An ongoing rewards program where protocols pay security researchers for reporting vulnerabilities. Different from a competitive audit, which is time-boxed. |
