Every Zealynx engagement applies a multi-layer methodology that combines line-by-line manual review with multiple automated verification techniques, scaled to the protocol’s risk surface and time budget. Each layer catches a different class of bug, and running them in concert provides depth that no single technique can match.
Smart contract audits fail when a single technique, usually manual review, is asked to carry the entire engagement. Manual review is irreplaceable, but it has blind spots: long state sequences, unintuitive boundary values, and assumptions a reader carries over from prior protocols. The Zealynx methodology adds independent verification layers that are sensitive to exactly those blind spots, and we require each layer to demonstrate sensitivity to a deliberately introduced bug before its passing result is trusted.
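The seeded bug is often as small as a flipped comparison. A minimal sketch of the idea, using a hypothetical vault rather than any client code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical fragment used to seed a bug before trusting a layer's
// passing result; names are illustrative, not from any client codebase.
contract SeededVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // Original check: balances[msg.sender] >= amount.
        // Seeded bug: >= flipped to >, which silently rejects
        // exact-balance withdrawals. A verification layer we trust must
        // flag this change; if its output is identical before and after
        // the seed, its "pass" carries no information.
        require(balances[msg.sender] > amount, "insufficient balance");
        balances[msg.sender] -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```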
The output we publish in every report (severity, status, fix, and the six-axis rubric) is the artifact of every layer landing on the same conclusion.
Mutation testing is run against the existing test suite to verify the suite actually catches behavioral changes in the contracts, not merely that the contracts compile. We report the high-severity catch rate and triage every surviving mutant.

Across every layer above, the work is shaped by four explicit priorities. They are also the lens we use to write each finding, so reviewers know what was checked and what was deferred.
- Identifying potential vulnerabilities, weaknesses, and code smells before they become exploits.
- Verifying established Solidity patterns: CEI ordering, custom errors, SafeERC20, OpenZeppelin v5 upgradeable patterns, namespaced storage, and reentrancy guards on every state-mutating entry point (condensed into the sketch after this list).
- Reviewing comments and NatSpec alongside the implementation to confirm they reflect the actual logic and expected behavior; misleading docs are reported as findings.
- Applying independent verification layers beyond a single review pass, and requiring each to demonstrate sensitivity to a seeded bug before trusting its passing results.
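What the second priority checks for, condensed into one contract. A minimal sketch assuming OpenZeppelin v5 import paths, with an illustrative vault rather than any audited code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol";

contract Vault is ReentrancyGuard {
    using SafeERC20 for IERC20;

    error InsufficientBalance(); // custom error instead of a revert string

    IERC20 public immutable token;
    mapping(address => uint256) public balances;

    constructor(IERC20 token_) {
        token = token_;
    }

    function deposit(uint256 amount) external nonReentrant {
        balances[msg.sender] += amount;
        token.safeTransferFrom(msg.sender, address(this), amount);
    }

    // Checks-effects-interactions, with a guard on the entry point.
    function withdraw(uint256 amount) external nonReentrant {
        // Checks: validate before touching state.
        if (balances[msg.sender] < amount) revert InsufficientBalance();
        // Effects: update storage before any external call.
        balances[msg.sender] -= amount;
        // Interactions: the external token call comes last, via SafeERC20.
        token.safeTransfer(msg.sender, amount);
    }
}
```

The ordering is itself a defense: even if the nonReentrant guard were removed, a reentrant call into withdraw would observe the already-reduced balance. The guard and the ordering are independent protections, and we check for both.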
Every finding is classified using a 3×3 Impact × Likelihood matrix. The published severity is the joint outcome, not the worst-case impact on its own and not the worst-case likelihood on its own.
| | Impact: High | Impact: Medium | Impact: Low |
|---|---|---|---|
| Likelihood: High | Critical | High | Medium |
| Likelihood: Medium | High | Medium | Low |
| Likelihood: Low | Medium | Low | Low |
Findings with no exploitable Impact at any Likelihood are filed as Informational.
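The same lookup expressed as code, which makes the symmetry explicit. A sketch whose enum and function names are ours, not a published interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// The 3x3 matrix as a pure function. Findings with no exploitable
// impact at any likelihood are filed as Informational before this
// lookup is ever reached.
enum Level { Low, Medium, High }
enum Severity { Informational, Low, Medium, High, Critical }

function jointSeverity(Level impact, Level likelihood) pure returns (Severity) {
    // Encode Low = 0, Medium = 1, High = 2. In this matrix, cells with
    // the same impact + likelihood sum share a severity, so the sum
    // alone picks the cell.
    uint256 score = uint256(impact) + uint256(likelihood);
    if (score == 4) return Severity.Critical; // High x High
    if (score == 3) return Severity.High;     // High x Medium
    if (score == 2) return Severity.Medium;   // High x Low, Medium x Medium
    return Severity.Low;                      // Medium x Low, Low x Low
}
```

Because the matrix is symmetric, swapping impact and likelihood never changes the published severity.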
A single severity tag flattens too much information. Every finding we publish carries six axes (sketched as a record type after the list below). The severity is the aggregate, but each axis is independently inspectable, so a reader can understand why the severity landed where it did. The six axes:
- Severity: joint outcome of impact and likelihood per the matrix above.
- Impact: what is lost if the bug is exploited: capital, control, availability, or correctness.
- Likelihood: probability of the conditions arising in normal operation, excluding adversarial setup.
- Found by: which layer surfaced the finding. Cross-method confirmations are noted explicitly.
- Complexity: the setup an attacker needs: trivial preconditions, specific ordering, or long-running state.
- Reachability: how readily the conditions can actually be reached on-chain in the reviewed protocol's deployment.
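Taken together, the axes amount to a small record type. A hypothetical sketch of how they sit side by side, with field names of our own choosing rather than a published schema:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical record showing the six axes side by side; the actual
// reports publish these as prose labels, not as on-chain data.
enum Level { Low, Medium, High }
enum Severity { Informational, Low, Medium, High, Critical }

struct Finding {
    Severity severity;    // joint outcome of the Impact x Likelihood matrix
    Level impact;         // what is lost: capital, control, availability, correctness
    Level likelihood;     // probability under normal operation, no adversarial setup
    string foundBy;       // which layer surfaced it; cross-method confirmations noted
    string complexity;    // attacker setup: trivial, specific ordering, long-running state
    string reachability;  // how readily the conditions are reachable in the deployment
}
```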
Each finding is published at a stable path, /audits/[client]/[report]/findings/[id], so a specific finding can be linked from a tweet, a Slack message, an LLM prompt, or another audit's remediation note.
