Critical · Published May 13, 2026

Prompt-to-Shell Execution via Unsafe Command Construction

Untrusted prompt-derived content reaches shell execution through string interpolation, template expansion, or unsafe command wrappers.

Primary threat classes

  • Indirect Prompt Injection
  • Tool Misuse
  • Capability Escalation

Affected systems

  • Coding agents
  • MCP-connected agents
  • Long-lived agents

Root cause

  • The system treats model output as trustworthy command text instead of preserving argument-level provenance and safe command construction.

Exploit path

  • Attacker-controlled content enters prompt context through repo files, tickets, docs, or external content
  • The model converts that content into a command suggestion or parameter
  • The runtime interpolates the output into a shell string or eval helper
  • The command executes with operator or agent authority
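The path above can be condensed into a minimal vulnerable sketch. The wrapper, function name, and payload below are all hypothetical illustrations (using `echo` as a stand-in for a real tool such as `git clone`); the point is that an f-string plus shell=True hands the model the full shell grammar.

```python
import subprocess

def run_agent_command(model_output: str) -> str:
    """Vulnerable sketch: model text becomes shell syntax verbatim.

    `model_output` stands in for a tool-call argument the model emitted
    after reading attacker-controlled repo files, tickets, or docs.
    """
    # Interpolating into a shell string gives the model access to `;`,
    # `$()`, backticks, and redirects, all executed with the agent's
    # authority. `echo` is a benign stand-in for a real binary.
    result = subprocess.run(
        f"echo {model_output}",
        shell=True,
        capture_output=True,
        text=True,
    )
    return result.stdout

# A poisoned document can steer the model to emit something like
# "hello; echo INJECTED" -- the shell runs it as two commands,
# not one argument.
```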

What an auditor should check

  • Trace mixed-trust inputs all the way to shell, eval, dynamic imports, notebooks, and package scripts
  • Inspect wrappers for string concatenation, shell=True, bash -lc, template interpolation, and fallback helpers
  • Check whether file paths, URLs, branch names, and destinations preserve trusted provenance
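As a starting point for the wrapper inspection above, static scanning can surface the obvious sinks. This is a hedged sketch covering only one pattern (keyword shell=True in Python call sites); a real audit would also cover os.system, eval, template engines, bash -lc wrappers, and positional arguments.

```python
import ast

def find_shell_sinks(source: str) -> list[int]:
    """Return line numbers of calls that pass shell=True.

    Illustrative helper, not a complete audit: it misses os.system,
    eval/exec, dynamic imports, and indirection through variables.
    """
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    hits.append(node.lineno)
    return sorted(hits)
```

The interesting findings are usually not the flagged lines themselves but the data flow into them: whether any flagged call can receive prompt-derived text.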

Evidence to collect

  • Code path from prompt assembly to command execution
  • Examples of unsafe shell construction or templating
  • Logs showing operator approval granularity versus actual executed command

Remediation guidance

  • Use structured argument arrays instead of raw shell strings
  • Require sink-time validation for destinations, paths, and side-effectful flags
  • Sandbox dynamic execution and narrow inherited secrets
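The first two points can be sketched together: a structured argv keeps each model-derived value a single argument, and validation happens at the sink rather than at prompt time. The allowlist, the single-segment destination rule, and the function name are illustrative policy assumptions, not part of the finding.

```python
import subprocess
from urllib.parse import urlparse

# Assumption: this agent is only ever allowed to clone from one host.
ALLOWED_CLONE_HOSTS = {"github.com"}

def safe_clone(url: str, dest: str) -> None:
    """Sketch of sink-time validation plus a structured argument array.

    Raises ValueError before anything executes if the model-derived
    values violate policy.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_CLONE_HOSTS:
        raise ValueError(f"clone source not allowed: {url!r}")
    # Destination must be a plain directory name: no traversal,
    # no flag-like values.
    if "/" in dest or dest.startswith("-") or dest in ("", ".", ".."):
        raise ValueError(f"unsafe destination: {dest!r}")
    # argv list, never a shell string: each value stays one argument,
    # and `--` stops option parsing in git itself.
    subprocess.run(["git", "clone", "--", url, dest], check=True)
```

Because the check runs at the sink, it holds regardless of which prompt path produced the values, which is the property the approval log should also reflect.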

Agentic DeFi relevance

  • Any agent that shells out to wallet, relayer, simulation, or deployment tooling can convert prompt injection into direct financial or operational impact.

Detailed note

This finding matters because the real failure is usually not the prompt injection itself. The decisive failure is the execution sink. If the system lets model output become executable shell text, then any poisoned instruction path can inherit the runtime's authority.

For Zealynx, this should be treated as a prompt-to-sink issue, not generic prompt injection. Auditors should map exactly which inputs can influence the command and whether operator approval covered the actual arguments that executed.
