Run.
Engineered for reliable agent execution.
Every Expert runs in isolation. Every step is recorded. Works on any infrastructure.
Open source isn't optional.
When an agent can read your files, call APIs, and make decisions — you need to know exactly what it's doing. No black boxes.
Fully auditable
Every line of runtime code is public. Review it, fork it, run it on your own infrastructure.
No vendor lock-in
Apache 2.0 license. Your Experts, your data, your infrastructure. Move freely.
Open standards
Docker-based isolation. No proprietary sandbox. Industry-proven security you can audit.
Deterministic state, probabilistic reasoning
LLMs are probabilistic: the same input can produce different outputs. Perstack draws a clear boundary: the "thinking" is probabilistic; the "doing" and "recording" are deterministic.
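The boundary can be sketched in a few lines. This is an illustration, not Perstack's actual API: a nondeterministic "thinking" step (a stand-in for an LLM call) is recorded to an event log exactly once, and any replay reads the recorded output instead of re-running the model, so the "doing" side stays deterministic.

```python
import random

event_log = []  # deterministic record of what actually happened

def think(prompt):
    # Probabilistic: different invocations may return different outputs.
    return f"{prompt}:{random.randint(0, 9)}"

def run_step(prompt, recorded=None):
    if recorded is not None:
        # Deterministic: replay uses the recorded output verbatim.
        return recorded
    output = think(prompt)
    event_log.append({"type": "llm_output", "data": output})
    return output

first = run_step("plan")
replayed = run_step("plan", recorded=event_log[0]["data"])
# first == replayed, even though think() itself is nondeterministic
```

The design choice this illustrates: nondeterminism is confined to the moment of generation; once an output is logged, every downstream consumer (and every replay) sees the identical value.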
Job → Run → Checkpoint
Three-level hierarchy. Every step produces a checkpoint snapshot.
Replay from any point
Event Sourcing + Checkpoint/Restore. Resume with identical state.
Verify after the fact
Full event history for post-hoc verification. Observability enables sandbox-first security.
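A minimal sketch of the Event Sourcing + Checkpoint/Restore idea described above (function names here are hypothetical, not Perstack's API): state is rebuilt by folding the event history, so resuming from any checkpoint index reproduces identical state.

```python
def apply(state, event):
    # Each event deterministically transforms the run state.
    return state + [event["step"]]

def restore(events, checkpoint):
    # Replay the event history up to a checkpoint to recover state.
    state = []
    for event in events[:checkpoint]:
        state = apply(state, event)
    return state

events = [{"step": n} for n in (1, 2, 3, 4)]
full = restore(events, len(events))      # [1, 2, 3, 4]
resumed = restore(events, 2)             # [1, 2] — state as of checkpoint 2
```

Because state is a pure fold over recorded events, "resume with identical state" and "verify after the fact" are the same mechanism read at different points in the log.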
Job (jobId)
├── Run 1 (Coordinator Expert)
│ └── Checkpoints: step 1 → 2 → delegates
│ ↓
├── Run 2 (Delegated Expert)
│ └── Checkpoints: step 3 → 4 → completes
│ ↓
└── Run 1 continues: step 5 → 6 → 7 → done

Use the tools you already have
Perstack runtime is optional. Experts also run on Claude Code, Cursor, Gemini CLI — any MCP-compatible environment.