Shadow Audit

A training exercise in which you audit the codebase from a real past security contest, inside a time-boxed window, with your findings scored against the actual contest results.

A shadow audit is a training exercise where a security learner audits a protocol that has already been publicly audited, inside a time-boxed window, with findings scored against the known answer key from the original contest. The learner does not see the actual findings until the window closes.

Why "Shadow"

The name comes from the idea of working alongside (in the shadow of) a real audit team that has already graded the target. The learner experiences realistic time pressure, realistic code quality, realistic ambiguity — but with a fixed, published ground truth for scoring.

This is how many of the most respected security researchers in Web3 sharpened their skills early in their careers: review a public audit contest after it closed, work the code end to end, compare your findings against the official report, iterate. The shadow audit format systematizes the exercise.

How It Differs from a Real Audit

Dimension | Real Audit | Shadow Audit
Ground truth | Unknown at audit time | Known, published, graded
Stakes | Client treasury, reputation | Learning points, leaderboard
Pressure | Deadline, client expectations | Time box, scoring window
Code provenance | Production, live money | Forked snapshot from a past contest
Resolution | You negotiate, triage severity | Score compared to actual contest outcome

Shadow audits are educationally rich precisely because the ground truth is fixed. The learner can calibrate "I missed this bug" against "here is the exact class of bug I keep missing," which builds pattern recognition faster than any lecture.

Shadow Audits in Zealynx Academy

Zealynx Academy's Shadow Arena is the implementation of this concept. The five live targets are Basin (a protocol that ran a $40k public contest), ElasticSwap, Velodrome, Flux Finance, and Canto v2, totaling 10,163 lines of real Solidity with 46 documented bugs. Each target has a time box (2, 4, or 7 days depending on complexity), and learners submit findings during the window. True positives earn Lynx. False positives cost Lynx after the first three. When the window closes, the learner sees the full review.
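The scoring rule above (true positives earn Lynx, false positives cost Lynx after the first three) can be sketched as a small function. The point values `lynx_per_tp` and `fp_penalty` are illustrative assumptions; the article does not specify the actual Lynx amounts.

```python
def score_submission(findings, answer_key, lynx_per_tp=10, fp_penalty=5, free_fps=3):
    """Score a shadow-audit submission against the contest answer key.

    A minimal sketch of the rule described in the text: each true
    positive earns Lynx, and false positives beyond the first three
    each cost Lynx. Point values here are hypothetical.
    """
    true_positives = [f for f in findings if f in answer_key]
    false_positives = [f for f in findings if f not in answer_key]
    # The first `free_fps` false positives carry no penalty.
    penalized_fps = max(0, len(false_positives) - free_fps)
    return len(true_positives) * lynx_per_tp - penalized_fps * fp_penalty


# Example: two valid findings, four invalid ones (one past the free limit)
score = score_submission(
    ["H-01", "H-02", "wrong-1", "wrong-2", "wrong-3", "wrong-4"],
    answer_key={"H-01", "H-02", "H-03"},
)
print(score)  # 2 * 10 - 1 * 5 = 15
```

The free-false-positive allowance rewards calibrated guessing: a learner can take a few speculative shots without penalty, but spray-and-pray submissions lose Lynx.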

Why Shadow Audits Matter for Builders

Shadow audits are not just for aspiring auditors. They are particularly valuable for builders about to fork a protocol. By reviewing past contests on similar protocols, a builder sees:

  • The classes of bugs that appear in the protocol family they are forking
  • The mistakes other teams have shipped in similar code
  • The attack surface they are inheriting by copying a fork

This translates directly to better design decisions before deployment. A builder who has shadow-audited three Compound V2 forks has internalized the classic Compound V2 failure modes. When they fork Compound V2 themselves, those patterns are top of mind.

Related Approaches

  • Competitive audit — live contests where multiple auditors compete on the same codebase for real prize money. Shadow audits replay past competitive audits as training.
  • Bug bounty programs — live protocols paying for vulnerabilities. Higher stakes but less structured feedback.
  • Self-paced CTF challenges — synthetic bugs in crafted scenarios. Good for specific skill drills but less realistic than shadow audits on real past contests.

Need expert guidance on Shadow Audit?

Our team at Zealynx has deep expertise in blockchain security and DeFi protocols. Whether you need an audit or consultation, we're here to help.

Get a Quote



© 2026 Zealynx