Zealynx AI Audit Method

Zealynx audits AI systems the same way serious security work should be done: start from authority, map trust boundaries, follow influence to execution sinks, and test the controls that actually constrain money, code, data, and governance.

Primary lens

What can the system really do if the model, memory, or tools are steered adversarially?

Why it matters

AI failures become security issues when they can change code, leak secrets, move funds, or bypass review.

Zealynx wedge

Prompt injection alone is not enough. We inspect memory, tools, approval semantics, and Agentic DeFi execution risk.

Audit phases

1. Scope the authority surface

We start from what the system can actually do: read, write, approve, call tools, touch secrets, route money, modify code, or influence governance. In AI systems, authority matters more than feature count.
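One way to make this step concrete is to enumerate the agent's tool registry and filter it down to high-authority capabilities. A minimal sketch, assuming a hypothetical registry mapping tool names to capability tags (the names, tags, and `authority_surface` helper are illustrative, not a specific framework's API):

```python
# Capability tags that grant real-world authority; hypothetical taxonomy.
HIGH_AUTHORITY = {"write", "approve", "spend", "exec", "secret"}

def authority_surface(tools: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return only the tools whose tags grant high authority,
    with read-only and other low-authority tags stripped out."""
    return {name: caps & HIGH_AUTHORITY
            for name, caps in tools.items()
            if caps & HIGH_AUTHORITY}

# Illustrative registry; a real audit pulls this from the actual
# config (MCP manifest, function schemas, connector definitions).
tools = {
    "search_docs": {"read"},
    "write_file":  {"write"},
    "submit_tx":   {"spend", "approve"},
    "run_shell":   {"exec"},
}
```

The point of the filter is that scoping starts from `submit_tx` and `run_shell`, not from the long tail of read-only tools.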

2. Map trust boundaries

We separate trusted operator inputs from mixed-trust and attacker-controlled inputs such as docs, tickets, emails, repos, MCP tools, memory stores, social feeds, and governance text.
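This separation can be modeled by attaching a trust label to every input and treating the assembled context as only as trustworthy as its weakest member. A sketch under those assumptions (the `Trust` levels and `Input` shape are illustrative):

```python
from dataclasses import dataclass
from enum import Enum

class Trust(Enum):
    OPERATOR = "operator"    # trusted: system prompts, operator config
    MIXED = "mixed"          # partially trusted: internal docs, repos
    UNTRUSTED = "untrusted"  # attacker-controlled: emails, tickets, web

@dataclass
class Input:
    source: str
    trust: Trust
    text: str

def lowest_trust(inputs: list[Input]) -> Trust:
    """Context trust is the weakest trust level of anything in it."""
    order = [Trust.UNTRUSTED, Trust.MIXED, Trust.OPERATOR]
    return next(t for t in order if any(i.trust is t for i in inputs))
```

One untrusted ticket in an otherwise operator-controlled context makes the whole context untrusted; that is the boundary the audit traces.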

3. Trace prompt-to-sink paths

We do not stop at the prompt layer. We follow untrusted influence to its execution sink: shell, file write, approval, wallet action, API call, PR comment, model routing, or transaction submission.
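The prompt-to-sink check can be sketched as a gate in front of each execution sink: tainted arguments (anything derived from untrusted context) only pass with a fresh approval. The sink names and the `tainted` flag below are illustrative, not a specific framework's API:

```python
# Hypothetical set of execution sinks an agent can reach.
EXECUTION_SINKS = {"shell", "file_write", "wallet_send", "api_call", "pr_comment"}

def allow_sink(sink: str, tainted: bool, fresh_approval: bool) -> bool:
    """Gate evaluated at the sink, not at the prompt layer."""
    if sink not in EXECUTION_SINKS:
        return True          # not an execution sink; no gate needed
    if not tainted:
        return True          # arguments derive only from trusted input
    return fresh_approval    # untrusted influence reached the sink
```

The audit question is whether any real path from untrusted input to a sink skips this gate entirely.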

4. Test persistence and delegation

For long-lived agents, we inspect memory writes, summaries, queues, schedules, child agents, and standing approvals. The key question is whether low-trust state can later unlock privileged action.
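That key question can be expressed as an invariant: every persisted entry records the trust of whatever wrote it, and privileged actions refuse to run on context containing low-trust state. A minimal sketch, with an assumed `origin_trust` field that real memory stores rarely have out of the box:

```python
def can_act_privileged(memory: list[dict]) -> bool:
    """A standing approval or privileged step must not rest on any
    memory entry that originated from an untrusted source."""
    return all(entry["origin_trust"] != "untrusted" for entry in memory)
```

If the system cannot answer "who wrote this memory entry?", the invariant is untestable, and that gap is itself a finding.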

5. Validate controls at execution time

We test what really constrains the system when it matters: argument provenance, destination validation, fresh approval, sandboxing, key isolation, spending limits, and egress control.
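Two of those controls, destination validation and spending limits, can be sketched as checks evaluated at the moment of execution rather than at planning time. The allowlist entries and limit value are hypothetical:

```python
# Hypothetical execution-time policy for a funds-moving tool.
ALLOWED_DESTINATIONS = {"0xTreasury", "0xPayroll"}
SPEND_LIMIT = 1_000  # per-call cap, in whatever unit the system uses

def validate_transfer(destination: str, amount: int) -> list[str]:
    """Return the list of control violations; empty means the call passes."""
    violations = []
    if destination not in ALLOWED_DESTINATIONS:
        violations.append("destination not on allowlist")
    if amount > SPEND_LIMIT:
        violations.append("amount exceeds spending limit")
    return violations
```

The audit tests whether these checks run on the arguments the sink actually receives, not on the arguments the model claimed it would use.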

6. Report by blast radius

Findings are prioritized by reachable authority and business impact, not by generic AI novelty. We care about owner-harm, data exfiltration, financial loss, silent governance influence, and forensic blind spots.

Threat classes we explicitly test

Prompt injection and indirect prompt injection
Memory poisoning and unsafe persistence
Tool misuse and capability escalation
Skill, plugin, and MCP backdoors
Data exfiltration and secret misuse
Model routing and cost abuse
Human approval bypass
Observability and forensics gaps
Autonomous exploit discovery
Agentic DeFi execution risk

How the method changes by system type

LLM applications

prompt injection, indirect injection, RAG poisoning, output misuse, identity and session separation

Coding agents

repo trust boundaries, shell construction, setup scripts, approvals, secret exposure, CI mutation
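On shell construction specifically, one check is whether the agent interpolates model output into a shell string or quotes it first. A sketch of the safe pattern using the standard library (`build_command` is an illustrative helper, not a real agent API):

```python
import shlex

def build_command(tool: str, user_path: str) -> str:
    """Quote attacker-influenced arguments before any shell sees them."""
    return f"{tool} {shlex.quote(user_path)}"
```

With quoting, a path like `a; rm -rf /` stays a single literal argument instead of becoming a second command; the stronger pattern is to avoid the shell entirely and pass an argument vector to `subprocess.run`.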

Long-lived agents

memory poisoning, identity drift, scheduler abuse, delegated authority, summary integrity, replayability

MCP deployments

tool poisoning, unsafe transport, cross-server trust bleed, auth gaps, connector blast radius

Agentic DeFi systems

wallet authority, approvals, route validation, market data provenance, governance influence, loss containment

What an auditor should always check

  • Which inputs are attacker-controlled or mixed-trust?
  • Which execution sinks can those inputs reach?
  • What secrets, approvals, wallets, repos, or channels are in scope?
  • Which controls validate arguments and destinations at execution time?
  • What persists across sessions, workers, and schedules?
  • Can the operator reconstruct exactly what happened after an incident?

© 2026 Zealynx