Audit methodology

How Zealynx audits.

Every Zealynx engagement applies a multi-layer methodology that combines line-by-line manual review with multiple automated verification techniques, scaled to the protocol’s risk surface and time budget. Each layer catches a different class of bug, and running them in concert provides depth that no single technique can match.

01 · Approach

Multiple independent layers, calibrated against seeded bugs

Smart contract audits fail when a single technique, usually manual review, is asked to carry the entire engagement. Manual review is irreplaceable, but it has blind spots: long state sequences, unintuitive boundary values, and the assumptions a reader carries over from prior protocols. The Zealynx methodology adds independent verification layers that are sensitive to exactly those blind spots, and we require each layer to demonstrate sensitivity to a deliberately introduced bug before its passing result is trusted.

The output we publish in every report (severity, status, fix, and the six-axis rubric) is the product of every layer converging on the same conclusion.

02 · Manual review

Reading every line

  • Line-by-line code reading of every contract in scope, with the lead auditor signing off on every file.
  • End-to-end scenario walkthroughs of every documented user flow, traced from on-chain initiation through finalization.
  • Adversarial deep-dives on each high-risk attack surface: signatures, math and boundary conditions, state machines, capital accounting, and upgrade paths.
  • Application of external audit heuristic checklists and pattern-matching against historical findings from comparable protocols and prior public audits.

03 · Automated & formal verification

Six independent layers below manual review

  • Static analysis. Slither plus custom rules integrated into CI. Every rule is calibrated against a known positive before its results are trusted.
  • Krait. Our in-house AI-first auditor runs a pre-audit pass to surface low-hanging issues, freeing manual review time for the architectural layer. The Krait output is cross-checked by the human team; nothing reaches the report without independent verification.
  • Custom Foundry invariant fuzzing. Each invariant is hand-written for the protocol under review and calibrated against a deliberately seeded bug before its passing result is trusted (see the sketch after this list).
  • Mutation testing. Trail of Bits’ mewt is run against the existing test suite to verify that the suite actually detects behavioral changes in the contracts rather than merely confirming they compile. We report the high-severity catch rate and triage every surviving mutant.
  • Stateful protocol-level fuzzing. Medusa campaigns explore millions of method invocations, surfacing invariant violations that only appear after long, specific call orderings.
  • Symbolic execution. Kontrol formally proves core mathematical identities for all inputs (not a sample, the entire input space). Used on math-heavy paths where fuzzing hits diminishing returns.
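
As an illustration of how the invariant layer is calibrated, the sketch below shows the shape of a hand-written Foundry invariant plus the seeded bug used to confirm it can actually fail. The `Vault`, `Handler`, and invariant names are hypothetical and do not come from any specific engagement; this is a minimal sketch of the technique, not a production harness.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

/// Hypothetical vault used only to illustrate the calibration step.
contract Vault {
    mapping(address => uint256) public balanceOf;
    uint256 public totalDeposits;

    function deposit() external payable {
        balanceOf[msg.sender] += msg.value;
        totalDeposits += msg.value;
    }

    function withdraw(uint256 amount) external {
        balanceOf[msg.sender] -= amount;
        // Calibration: temporarily delete the next line (the seeded bug);
        // invariant_solvent below must then fail before a passing run
        // against the real code is trusted.
        totalDeposits -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "withdraw failed");
    }
}

/// Handler bounds inputs so the fuzzer spends its call budget on meaningful state.
contract Handler is Test {
    Vault public vault;

    constructor(Vault _vault) {
        vault = _vault;
    }

    function deposit(uint256 amount) external {
        amount = bound(amount, 0, 100 ether);
        vm.deal(address(this), amount);
        vault.deposit{value: amount}();
    }

    function withdraw(uint256 amount) external {
        amount = bound(amount, 0, vault.balanceOf(address(this)));
        vault.withdraw(amount);
    }

    receive() external payable {}
}

contract VaultInvariants is Test {
    Vault internal vault;
    Handler internal handler;

    function setUp() public {
        vault = new Vault();
        handler = new Handler(vault);
        // Point the fuzzer at the handler rather than at raw contract calls.
        targetContract(address(handler));
    }

    /// Internal accounting must always match the ETH the vault actually holds.
    function invariant_solvent() public {
        assertEq(vault.totalDeposits(), address(vault).balance);
    }
}
```

Running the campaign with the seeded line removed must report a broken invariant; only after that does a green run on the unmodified code count as evidence.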

04 · Four pillars

What we’re actually evaluating

Across every layer above, the work is shaped by four explicit priorities. They’re also the lens we use to write each finding so reviewers know what was checked and what was deferred.

01 · Code quality

Diligent evaluation of the code itself

Identifying potential vulnerabilities, weaknesses, and code smells before they become exploits.

02 · Best practices

Adherence to industry-accepted guidelines

CEI ordering, custom errors, SafeERC20, OpenZeppelin v5 upgradeable patterns, namespaced storage, and reentrancy guards on every state-mutating entry point.
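
As a minimal sketch of what a few of these items look like in code, the hypothetical contract below covers CEI ordering, custom errors, SafeERC20, and a reentrancy guard; upgradeable patterns and namespaced storage are omitted for brevity, and all names are ours for this example only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol";

/// Hypothetical staking pool used only to illustrate the checklist items.
contract StakingPool is ReentrancyGuard {
    using SafeERC20 for IERC20;

    error InsufficientStake(uint256 requested, uint256 available);

    IERC20 public immutable token;
    mapping(address => uint256) public staked;

    constructor(IERC20 _token) {
        token = _token;
    }

    function stake(uint256 amount) external nonReentrant {
        staked[msg.sender] += amount;
        token.safeTransferFrom(msg.sender, address(this), amount);
    }

    /// Checks-Effects-Interactions: validate, update storage, then call out.
    function unstake(uint256 amount) external nonReentrant {
        uint256 balance = staked[msg.sender];

        // Checks: a custom error instead of a revert string.
        if (amount > balance) revert InsufficientStake(amount, balance);

        // Effects: state is updated before any external call.
        staked[msg.sender] = balance - amount;

        // Interactions: SafeERC20 handles tokens with non-standard return values.
        token.safeTransfer(msg.sender, amount);
    }
}
```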

03 · Documentation

Code that says what it does

Comments and NatSpec are reviewed alongside the implementation to confirm they reflect the actual logic and expected behavior; misleading docs are reported as findings.
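
For example, here is the kind of mismatch this pillar is meant to catch: NatSpec that promises behavior the implementation does not deliver. The contract and names are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Hypothetical distributor used only to show a docs/implementation mismatch.
contract RewardDistributor {
    mapping(address => uint256) public accrued;

    /// @notice Claims all rewards accrued by the caller and resets the accrual to zero.
    /// @return amount The amount of ETH transferred to the caller.
    function claim() external returns (uint256 amount) {
        amount = accrued[msg.sender];
        // The NatSpec promises a reset, but `accrued[msg.sender]` is never
        // cleared, so the caller can claim repeatedly. During review this is
        // reported as a finding rather than silently reconciled.
        payable(msg.sender).transfer(amount);
    }
}
```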

04 · Defense in depth

Independent layers, each calibrated

Beyond a single review pass, we apply independent verification layers and require each to demonstrate sensitivity to a seeded bug before trusting its passing results.

05 · Severity classification

Impact × Likelihood

Every finding is classified using a 3×3 Impact × Likelihood matrix. The published severity is the joint outcome, not the worst-case impact on its own and not the worst-case likelihood on its own.

                      Impact: High    Impact: Medium    Impact: Low
Likelihood: High      Critical        High              Medium
Likelihood: Medium    High            Medium            Low
Likelihood: Low       Medium          Low               Low

Findings with no exploitable Impact at any Likelihood are filed as Informational.
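
Read as a lookup, the matrix is small enough to state exactly. The sketch below is an illustrative encoding of the table above, not a tool we ship; the names are ours for this example only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// Illustrative encoding of the 3x3 severity matrix above.
library SeverityMatrix {
    enum Level { Low, Medium, High }
    // Informational is assigned outside the matrix, when a finding has no
    // exploitable impact at any likelihood.
    enum Severity { Informational, Low, Medium, High, Critical }

    function classify(Level impact, Level likelihood) internal pure returns (Severity) {
        if (impact == Level.High) {
            if (likelihood == Level.High) return Severity.Critical;
            if (likelihood == Level.Medium) return Severity.High;
            return Severity.Medium; // High impact, Low likelihood
        }
        if (impact == Level.Medium) {
            if (likelihood == Level.High) return Severity.High;
            if (likelihood == Level.Medium) return Severity.Medium;
            return Severity.Low; // Medium impact, Low likelihood
        }
        // Low impact row
        if (likelihood == Level.High) return Severity.Medium;
        return Severity.Low; // Low impact, Medium or Low likelihood
    }
}
```

A High-impact bug that only surfaces in a rare state therefore publishes as Medium, not High.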

06 · The finding rubric

Six axes published on every finding

A single severity tag flattens too much information. Every finding we publish carries six axes. The severity is the aggregate, but each axis is independently inspectable so a reader can understand why the severity landed where it did.

Severity
Critical · High · Medium · Low · Informational

Joint outcome of impact and likelihood per the matrix above.

Impact
High · Medium · Low

What is lost if the bug is exploited: capital, control, availability, or correctness.

Likelihood
High · Medium · Low

Probability of the conditions arising in normal operation, excluding adversarial setup.

Method
Manual · Static · Fuzzing · Formal

Which layer surfaced the finding. Cross-method confirmations are noted explicitly.

Complexity
Low · Medium · High

Setup an attacker needs: trivial preconditions, specific ordering, or long-running state.

Exploitability
Low · Medium · High

How readily the conditions can actually be reached on-chain in the reviewed protocol’s deployment.

07 · Deliverables

What you receive

  • Final PDF report. The canonical artifact: executive summary, scope, methodology, severity classification, every finding with description, impact, recommendation, and resolution.
  • Public HTML report. Each engagement we are cleared to publish lands on the audits catalogue with a permalinkable page per finding, citable individually and navigable by severity, chain, language, or protocol type.
  • Per-finding pages. Stable URLs of the form /audits/[client]/[report]/findings/[id] so a specific finding can be linked from a tweet, a Slack message, an LLM prompt, or another audit’s remediation note.
  • Retest pass. Every finding above Informational is re-verified after the client’s fix lands; the Status field on the finding is the result of that retest, not a self-report.
  • Fuzzing and formal verification artifacts. The invariant suite, Medusa configuration, and Kontrol proofs are handed back to the client so the same harness can run in their CI long after the engagement closes (a sketch of the kind of property these proofs cover appears below).
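
As an example of the last item, here is a minimal sketch of the kind of property a handed-back Kontrol proof typically covers: an ordinary Foundry test over unconstrained inputs, which Kontrol then explores symbolically rather than by sampling. The contract and test names are illustrative, not taken from a real engagement.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";

/// Written as a normal Foundry test, then proven for the entire input space
/// with `kontrol prove` instead of being sampled by the fuzzer.
contract MathProofs is Test {
    function testDivModIdentity(uint256 x, uint256 y) public {
        vm.assume(y != 0);
        // Euclidean identity: holds for every uint256 pair with y != 0,
        // and (x / y) * y never overflows because it is at most x.
        assertEq((x / y) * y + (x % y), x);
    }
}
```
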
Want to see this methodology applied? Every published audit on the catalogue follows the process above: findings, fixes, and the artifacts that came out of each layer.
Browse audits →
