
Sandbox Integration

AI agents differ fundamentally from traditional software. The same input can produce different outputs. Model updates can change behavior. Hallucinations can trigger destructive actions without any attacker involved.

This creates a security challenge with two possible approaches:

Approach                | Trade-off
Restrict the agent      | Limit tools, actions, or decisions (defeats the purpose)
Sandbox the environment | Full capability inside isolated boundaries

Perstack takes the sandbox-first approach. The runtime doesn’t enforce its own security layer — it’s designed to run inside infrastructure that provides isolation.

Agent security can be understood as four boundary layers:

Layer      | Threat                               | Perstack's approach
Prompt     | Injection, context pollution         | Observability: full event history for post-hoc verification
Tool (MCP) | Privilege abuse                      | requiredEnv for minimal privilege, explicit skill assignment
Runtime    | Host compromise                      | Designed for sandboxed infrastructure (Docker, ECS, Workers)
External   | Data exfiltration, lateral movement  | Event-based messaging with controlled boundaries

Traditional software security relies on input validation and output filtering — detect threats before they execute. For AI agents, this approach hits a fundamental limit: distinguishing legitimate instructions from malicious ones requires interpreting intent, which is close to an undecidable problem.

Sandbox-first inverts this model. Instead of trying to prevent all bad outcomes upfront, you let the agent operate freely within boundaries and verify behavior after the fact. This only works if every action is visible and auditable.

This is why Observability is one of Perstack’s three core principles (alongside Isolation and Reusability). It’s not a nice-to-have — it’s a prerequisite for the sandbox-first approach to work. Full event history, deterministic checkpoints, and transparent execution make post-hoc verification possible.

Sandboxing introduces a challenge: how does the agent communicate with the outside world?

Giving agents direct access to external messaging — webhooks, queues, notifications — creates a new attack surface. A prompt injection could exfiltrate data or trigger unintended actions through these channels.

Perstack addresses this with event-based messaging:

  • Default (CLI): JSON events emitted to stdout — the standard boundary for sandboxed processes. Your infrastructure reads stdout and decides what to do with events.
  • Programmatic: Custom eventListener callback for direct integration. When embedding the runtime in your application, you receive events programmatically and control the messaging layer yourself.
import { run } from "@perstack/runtime"

// Example parameters (expertKey and query, as in the Workers example below)
const params = { expertKey: "@org/expert", query: "..." }

await run(params, {
  eventListener: (event) => {
    // Handle events programmatically
    // You control what crosses the boundary
  }
})

In both cases, the agent itself cannot initiate outbound connections — you maintain full control over what crosses the boundary.
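For the CLI path, here is a minimal sketch of host-side code that consumes these events. The newline-delimited framing and the event's "type" field are assumptions made for illustration, not a documented schema:

import { spawn } from "node:child_process"
import { createInterface } from "node:readline"

// Run the Expert inside the sandbox and read its stdout from the host side.
const proc = spawn("npx", ["perstack", "run", "@org/expert"])

const lines = createInterface({ input: proc.stdout })
lines.on("line", (line) => {
  let event
  try {
    event = JSON.parse(line) // assumes one JSON event per line
  } catch {
    return // ignore any non-event output
  }
  // You decide what crosses the boundary, e.g. forward only selected events.
  if (event.type === "error") { // "type" is an illustrative field name
    console.error("alert:", line) // replace with your own messaging layer
  }
})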


Perstack is not a standalone security solution. It’s designed to work with sandboxed infrastructure — your platform provides the isolation, Perstack provides the runtime.

Perstack runs in any Node.js-compatible environment, with no dependency on a specific sandbox solution:

  • Container platforms: Docker, AWS ECS + Fargate, Google Cloud Run, Kubernetes
  • Serverless: Cloudflare Workers, Vercel, AWS Lambda
  • Local development: Direct Node.js execution

This portability lets you choose infrastructure based on your security requirements, cost constraints, and operational preferences.

Perstack applies the principle of least privilege at the skill level:

  • requiredEnv explicitly declares which environment variables a skill can access
  • Skills are assigned per-Expert — no global tool access
  • MCP servers are spawned per-session with isolated lifecycles
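As an illustration of how this plays out on the host side, the environment handed to a skill's MCP server can be built from its requiredEnv declaration alone. The SkillManifest shape below is hypothetical, not Perstack's actual manifest format:

// Hypothetical shape, for illustration only.
interface SkillManifest {
  name: string
  requiredEnv: string[]
}

// Build the minimal environment for a skill's MCP server:
// only the declared variables are passed through, nothing else.
function buildSkillEnv(skill: SkillManifest, hostEnv: NodeJS.ProcessEnv): Record<string, string> {
  const env: Record<string, string> = {}
  for (const key of skill.requiredEnv) {
    const value = hostEnv[key]
    if (value === undefined) {
      throw new Error(`Skill "${skill.name}" requires ${key}, but it is not set`)
    }
    env[key] = value
  }
  return env
}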

Every execution produces a complete audit trail:

  • JSON events emitted to stdout for every state change
  • Deterministic checkpoints enable replay and forensic analysis
  • No hidden context injection — prompts are fully visible
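As a sketch of what post-hoc verification can look like, assume the stdout events were captured to a JSONL file. The file name and the "type", "toolName", and "input" fields are illustrative, not a documented schema:

import { readFileSync } from "node:fs"

// Load the captured event log (hypothetical file name and event fields).
const events = readFileSync("run-1234.events.jsonl", "utf8")
  .split("\n")
  .filter(Boolean)
  .map((line) => JSON.parse(line))

// Example check: list every tool invocation so it can be reviewed
// or correlated with your platform's audit logs.
for (const event of events) {
  if (event.type === "toolCall") {
    console.log(event.toolName, JSON.stringify(event.input))
  }
}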

A containerized deployment on AWS Fargate might look like this:

# AWS Fargate
taskDefinition:
  cpu: 1024
  memory: 2048
  containerDefinitions:
    - name: perstack-expert
      image: my-org/perstack-runner
      command: ["npx", "perstack", "run", "@org/expert"]
      secrets:
        - name: ANTHROPIC_API_KEY
          valueFrom: "arn:aws:secretsmanager:..."

Platform-native controls apply automatically: resource limits, network isolation, IAM-based access control.

Serverless platforms work the same way. For example, embedding the runtime in a Cloudflare Worker:

// Cloudflare Workers
export default {
  async fetch(request, env) {
    const { run } = await import("@perstack/runtime");
    const result = await run({
      expertKey: "@org/expert",
      query: await request.text()
    });
    return new Response(result.lastMessage.content);
  }
}

Use your platform’s native secrets management:

# AWS Secrets Manager (ECS container definition)
secrets:
  - name: ANTHROPIC_API_KEY
    valueFrom: "arn:aws:secretsmanager:region:account:secret:key"
# Cloudflare Workers
wrangler secret put ANTHROPIC_API_KEY

Defense in depth: Combine platform isolation (containers, VMs) with Perstack’s workspace isolation. Apply network-level access controls.

Minimal privilege: Grant only the skills each Expert needs. Use requiredEnv to explicitly declare environment variable access.

Auditability: Correlate Perstack’s event logs with your platform’s audit logs. Set up anomaly detection for unexpected behavior.

Dependency pinning: Pin Registry Expert versions. Write-once versioning guarantees reproducible builds.