Multi-Runtime Support

Perstack supports running Experts through third-party coding agent runtimes. Instead of using the default runtime, you can leverage Cursor, Claude Code, or Gemini CLI as the execution engine.

This feature is experimental. Some capabilities may be limited depending on the runtime.

In the agent-first era, Expert definitions are the single source of truth — not the runtime, not the app, not the vendor platform. Your carefully crafted instructions, delegation patterns, and skill configurations represent accumulated domain knowledge. They should be:

  • Portable — run on any compatible runtime
  • Comparable — test the same definition across different runtimes to measure cost vs. performance
  • Shareable — publish to the registry and let others run your Experts on their preferred runtime

Agent definitions should not be trapped in vendor silos. With multi-runtime support:

| Traditional approach | Perstack approach |
| --- | --- |
| Agent locked to one platform | Expert runs on any runtime |
| Switching requires a rewrite | Switching requires one flag |
| Vendor controls your agent | You control your Expert |
In practice, this unlocks several benefits:

| Benefit | Description |
| --- | --- |
| Cost/performance comparison | Run the same Expert on Cursor, Claude Code, and Gemini to compare results and costs |
| Runtime-specific strengths | Leverage Cursor's codebase indexing, Claude's reasoning, Gemini's speed |
| Registry interoperability | Instantly try any published Expert on your preferred runtime |
| Subscription leverage | Use existing subscriptions (Cursor Pro, Claude Max) instead of API credits |

The following runtimes are supported:

| Runtime | Model support | Domain | Skill definition |
| --- | --- | --- | --- |
| perstack | Multi-vendor | General purpose | Via perstack.toml |
| docker | Multi-vendor | Isolated execution | Via perstack.toml |
| cursor | Multi-vendor | Coding-focused | Via Cursor settings |
| claude-code | Claude only | Coding-focused | Via claude mcp |
| gemini | Gemini only | General purpose | Via Gemini config |

Skill definition in perstack.toml only works with the default Perstack runtime. Other runtimes have their own tool/MCP configurations — you must set them up separately in each runtime.

You can specify the runtime in two ways. Per run, pass the --runtime option on the CLI:

Terminal window
npx perstack run my-expert "query" --runtime cursor
npx perstack run my-expert "query" --runtime claude-code
npx perstack run my-expert "query" --runtime gemini

Set the default runtime in perstack.toml:

perstack.toml
runtime = "cursor" # All Experts use Cursor by default
model = "claude-sonnet-4-5"
[provider]
providerName = "anthropic"
[experts."my-expert"]
# ...

Which runtime is used is resolved as follows:

| Scenario | Runtime used |
| --- | --- |
| --runtime cursor specified | cursor |
| No --runtime, config has runtime = "cursor" | cursor |
| No --runtime, no runtime in config | perstack (default) |

The --runtime CLI option always takes precedence over the config file setting.

A complete example:

perstack.toml
runtime = "cursor"
model = "claude-sonnet-4-5"
[provider]
providerName = "anthropic"
[experts."code-reviewer"]
version = "1.0.0"
description = "Reviews code for quality, security, and best practices"
instruction = """
You are a senior code reviewer. Analyze the codebase and provide feedback on:
- Code quality and maintainability
- Security vulnerabilities
- Performance issues
- Best practices violations
Write your review to `review.md`.
"""

Run the Expert:

Terminal window
# Uses Cursor (from config)
npx perstack run code-reviewer "Review the src/ directory"
# Override to use Claude Code instead
npx perstack run code-reviewer "Review the src/ directory" --runtime claude-code

When you specify a non-default runtime, Perstack:

  1. Converts the Expert definition into the runtime’s native format
  2. Executes the runtime CLI in headless mode
  3. Captures the output and converts events to Perstack format
  4. Stores checkpoints in the standard perstack/jobs/ directory

perstack run --runtime <runtime>
            │
            ▼
┌─────────────────────────┐
│     Runtime Adapter     │
│   (converts Expert to   │
│     CLI arguments)      │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│       Runtime CLI       │
│     (headless mode)     │
│                         │
│  cursor-agent --print   │
│  claude -p "..."        │
│  gemini -p "..."        │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│   Event Normalization   │
│   → Perstack format     │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│     perstack/jobs/      │
│  (Job/Run/Checkpoint)   │
└─────────────────────────┘

Docker Runtime

Prerequisites:

  • Docker installed and daemon running
  • Docker Compose available

How it works: The docker runtime provides containerized execution with security isolation:

  1. Dockerfile generation: Creates a container with required runtimes (Node.js, Python)
  2. MCP server installation: Installs skill packages inside the container
  3. Network isolation: Squid proxy enforces domain allowlist
  4. Environment isolation: Only required environment variables are passed
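
For example, to run the code-reviewer Expert defined earlier inside an isolated container:

Terminal window
npx perstack run code-reviewer "Review the src/ directory" --runtime docker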

Network configuration:

[experts."secure-expert"]
instruction = "..."
[experts."secure-expert".skills."web-search"]
type = "mcpStdioSkill"
command = "npx"
packageName = "exa-mcp-server"
requiredEnv = ["EXA_API_KEY"]
allowedDomains = ["api.exa.ai"]

The final allowlist merges:

  • Skill-level allowedDomains from all skills
  • Provider API domains (auto-included based on provider)

Provider API domains (e.g., api.anthropic.com for Anthropic) are automatically included based on your provider.providerName setting.
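
For the secure-expert above, assuming providerName = "anthropic", the effective allowlist would conceptually be:

api.exa.ai        # from the skill's allowedDomains
api.anthropic.com # provider API domain, auto-included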

Passing environment variables:

Use --env to pass specific environment variables to the Docker container at runtime:

Terminal window
# Pass NPM_TOKEN for private npm packages
perstack run my-expert "query" --runtime docker --env NPM_TOKEN
# Pass multiple environment variables
perstack run my-expert "query" --runtime docker --env NPM_TOKEN --env MY_API_KEY

This is useful for:

  • Private npm packages (skills using npx with private registries)
  • Custom API keys needed by skills at runtime
  • Any credentials that shouldn’t be baked into the container image

Cursor Runtime

Prerequisites:

  • Cursor CLI installed (curl https://cursor.com/install -fsS | bash)
  • CURSOR_API_KEY environment variable set

How Expert definitions are mapped:

  • instruction → Passed via cursor-agent --print "..." prompt argument
  • skills → Not supported (headless mode has no MCP)
  • delegates → Included in prompt as context

Cursor headless CLI (cursor-agent --print) does not support MCP tools. Only built-in capabilities (file read/write, shell commands via --force) are available.
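
As a rough sketch, assuming the code-reviewer Expert from earlier, the headless invocation looks something like this (simplified; the exact arguments the adapter assembles may differ):

Terminal window
cursor-agent --print "You are a senior code reviewer. Analyze the codebase and provide feedback on code quality, security, performance, and best practices. Write your review to review.md. Task: Review the src/ directory"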

Claude Code Runtime

Prerequisites:

  • Claude Code CLI installed (npm install -g @anthropic-ai/claude-code)
  • Authenticated via claude command

How Expert definitions are mapped:

  • instruction → Passed via --append-system-prompt flag
  • skills → Not injectable (runtime uses its own MCP config)
  • delegates → Included in system prompt as context

Claude Code has its own MCP configuration (claude mcp), but Perstack cannot inject skills into it. Configure MCP servers separately.
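
As a rough sketch for the same Expert (simplified; the exact arguments the adapter assembles may differ):

Terminal window
claude -p "Review the src/ directory" --append-system-prompt "You are a senior code reviewer. Analyze the codebase and provide feedback on code quality, security, performance, and best practices. Write your review to review.md."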

Gemini Runtime

Prerequisites:

  • Gemini CLI installed
  • GEMINI_API_KEY environment variable set

How Expert definitions are mapped:

  • instruction → Passed via gemini -p "..." prompt argument
  • skills → Not supported (MCP unavailable)
  • delegates → Included in prompt as context

Gemini CLI does not support MCP. Use Gemini’s built-in file/shell capabilities instead.
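
As a rough sketch for the same Expert (simplified; the exact arguments the adapter assembles may differ):

Terminal window
gemini -p "You are a senior code reviewer. Analyze the codebase and provide feedback on code quality, security, performance, and best practices. Write your review to review.md. Task: Review the src/ directory"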

Delegation

Non-default runtimes do not natively support Expert-to-Expert delegation. Delegation behavior depends on the runtime:

| Runtime | Delegation handling |
| --- | --- |
| perstack | Native support |
| cursor | Instruction-based (LLM decides) |
| claude-code | Instruction-based (LLM decides) |
| gemini | Instruction-based (LLM decides) |

With instruction-based delegation, the delegate Expert’s description is included in the system prompt, and the LLM is instructed to “think as” the delegate when appropriate.
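
As a rough illustration (the delegate name and wording below are hypothetical; the exact text Perstack injects may differ), the delegate context added to the prompt might read:

You can also act as the Expert "security-auditor": audits dependencies and configuration for known vulnerabilities. When a task matches this description, think as that Expert.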

Interactive tools (interactiveSkill) are handled differently:

| Runtime | Interactive tools |
| --- | --- |
| perstack | Native support with --continue -i |
| cursor | Mapped to Cursor's confirmation prompts |
| claude-code | Mapped to Claude's permission system |
| gemini | Not supported in headless mode |

Checkpoints created with non-default runtimes use a normalized format. You can:

  • View checkpoints with perstack start --continue-job (example below)
  • Query job history

Resuming may have limitations, because runtime-specific state is not preserved.
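
For example, to browse a previous job's checkpoints and continue from one:

Terminal window
npx perstack start --continue-job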

Best Practices

  1. Start with the default runtime during development for full skill control
  2. Design skill-free Experts when targeting non-default runtimes
  3. Configure tools in each runtime — set up MCP servers via claude mcp, Cursor settings, etc.
  4. Leverage built-in capabilities — non-default runtimes have their own file/shell tools
  5. Set runtime in config for consistent team workflows