
CLI Reference

perstack start

Interactive workbench for developing and testing Experts.

perstack start [expertKey] [query] [options]

Arguments:

  • [expertKey]: Expert key (optional — prompts if not provided)
  • [query]: Input query (optional — prompts if not provided)

Opens a text-based UI for iterating on Expert definitions. See Running Experts.

perstack run

Headless execution for production and automation.

perstack run <expertKey> <query> [options]

Arguments:

  • <expertKey>: Expert key (required)
    • Examples: my-expert, @org/my-expert, @org/my-expert@1.0.0
  • <query>: Input query (required)

Outputs JSON events to stdout.
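If each event arrives as one JSON object per line (an NDJSON-style framing; confirm against your Perstack version), automation can consume the stream with a sketch like the one below. `parse_events` and the event shapes shown are illustrative, not part of the CLI:

```python
import json

def parse_events(stream_lines):
    """Parse a stream of JSON events, one object per line (assumed framing)."""
    events = []
    for line in stream_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines defensively
        events.append(json.loads(line))
    return events

# Sample lines standing in for `perstack run` stdout (shapes are illustrative):
sample = [
    '{"type": "startRun", "stepNumber": 1}',
    '{"type": "completeRun", "stepNumber": 12}',
]
events = parse_events(sample)
print([e["type"] for e in events])  # ['startRun', 'completeRun']
```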

Both start and run accept the same options:

Option                 Description   Default
--provider <provider>  LLM provider  anthropic
--model <model>        Model name    claude-sonnet-4-5

Providers: anthropic, google, openai, ollama, azure-openai, amazon-bedrock, google-vertex

Option             Description                                   Default
--max-steps <n>    Maximum total steps across all Runs in a Job  unlimited
--max-retries <n>  Max retry attempts per generation             5
--timeout <ms>     Timeout per generation (ms)                   60000

Option               Description                             Default
--runtime <runtime>  Execution runtime                       From config or docker
--workspace <path>   Workspace directory for Docker runtime  ./workspace
--env <name...>      Env vars to pass to Docker runtime      -

Available runtimes:

  • docker — Containerized runtime with network isolation (default)
  • local — Built-in runtime without isolation
  • cursor — Cursor CLI (experimental)
  • claude-code — Claude Code CLI (experimental)
  • gemini — Gemini CLI (experimental)

If --runtime is not specified, the runtime is determined by the runtime field in perstack.toml. If neither is set, docker is used.
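This precedence reduces to a simple fallback chain; `resolve_runtime` below is an illustrative sketch, not a real Perstack API:

```python
def resolve_runtime(cli_flag=None, config_runtime=None):
    """Resolve the runtime with the documented precedence:
    --runtime flag, then the runtime field in perstack.toml, then docker."""
    return cli_flag or config_runtime or "docker"

print(resolve_runtime())                        # docker
print(resolve_runtime(config_runtime="local"))  # local
print(resolve_runtime("cursor", "local"))       # cursor (flag wins)
```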

Passing environment variables to Docker:

Use --env to pass specific environment variables to the Docker container at runtime. This is useful for:

  • Private npm packages: --env NPM_TOKEN
  • Custom API keys needed by skills: --env MY_API_KEY
# Pass NPM_TOKEN for private npm packages
perstack run my-expert "query" --runtime docker --env NPM_TOKEN
# Pass multiple environment variables
perstack run my-expert "query" --env NPM_TOKEN --env MY_API_KEY
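Note that --env takes variable names, not values. A plausible model is that values are read from the host environment and forwarded as `docker run -e` flags; `docker_env_args` below is a hypothetical sketch of that idea, not Perstack's actual implementation:

```python
import os

def docker_env_args(names):
    """Build `docker run` -e flags for the named variables, taking values
    from the host environment; names with no value set are skipped."""
    args = []
    for name in names:
        value = os.environ.get(name)
        if value is not None:
            args += ["-e", f"{name}={value}"]
    return args

os.environ["NPM_TOKEN"] = "secret"  # stand-in value for the example
print(docker_env_args(["NPM_TOKEN", "UNSET_VAR"]))  # ['-e', 'NPM_TOKEN=secret']
```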

See Multi-Runtime Support for setup and limitations.

Option                Description             Default
--config <path>       Path to perstack.toml   Auto-discover from cwd
--env-path <path...>  Environment file paths  .env, .env.local

Option               Description
--job-id <id>        Custom Job ID for new Job (default: auto-generated)
--continue           Continue latest Job with new Run
--continue-job <id>  Continue specific Job with new Run
--resume-from <id>   Resume from specific checkpoint (requires --continue-job)

Combining options:

# Continue latest Job from its latest checkpoint
--continue
# Continue specific Job from its latest checkpoint
--continue-job <jobId>
# Continue specific Job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>

Note: --resume-from requires --continue-job (Job ID must be specified). You can only resume from the Coordinator Expert’s checkpoints.
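This constraint can be expressed as a small validation sketch (hypothetical helper, not part of the CLI):

```python
def validate_continuation(continue_job=None, resume_from=None):
    """Enforce the documented rule: --resume-from requires --continue-job."""
    if resume_from and not continue_job:
        raise ValueError("--resume-from requires --continue-job")
    return True

# Valid: both a Job ID and a checkpoint are given.
print(validate_continuation(continue_job="job_abc123", resume_from="checkpoint_xyz"))  # True
```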

Option                              Description
-i, --interactive-tool-call-result  Treat query as interactive tool call result

Use with --continue to respond to interactive tool calls from the Coordinator Expert.

Option     Description
--verbose  Enable verbose logging (see Verbose Mode)

The --verbose flag enables detailed logging for debugging purposes. The behavior varies by runtime:

With non-Docker runtimes, it shows additional runtime information in the output.

With the Docker runtime, it enables comprehensive debugging output:

Docker Build Progress:

  • Image layer pulling progress
  • Build step execution
  • Dependency installation status

Container Lifecycle:

  • Container startup status
  • Health check results
  • Container exit information

Proxy Monitoring (when network isolation is enabled):

  • Real-time allow/block events for network requests
  • Domain and port information for each request
  • Clear indication of blocked requests with reasons

Example output in TUI:

Docker Build [runtime] Building Installing dependencies...
Docker Build [runtime] Complete Docker build completed
Docker [proxy] Starting Starting proxy container...
Docker [proxy] Healthy Proxy container ready
Docker [runtime] Starting Starting runtime container...
Docker [runtime] Running Runtime container started
Proxy ✓ api.anthropic.com:443
Proxy ✗ blocked-domain.com:443 Domain not in allowlist

Use cases:

  • Debugging network connectivity issues
  • Verifying proxy allowlist configuration
  • Monitoring which domains are being accessed
  • Troubleshooting container startup failures
Examples:

# Basic execution (creates new Job)
npx perstack run my-expert "Review this code"
# With model options
npx perstack run my-expert "query" \
  --provider google \
  --model gemini-2.5-pro \
  --max-steps 100
# Continue Job with follow-up
npx perstack run my-expert "initial query"
npx perstack run my-expert "follow-up" --continue
# Continue specific Job from latest checkpoint
npx perstack run my-expert "continue" --continue-job job_abc123
# Continue specific Job from specific checkpoint
npx perstack run my-expert "retry with different approach" \
  --continue-job job_abc123 \
  --resume-from checkpoint_xyz
# Custom Job ID for new Job
npx perstack run my-expert "query" --job-id my-custom-job
# Respond to interactive tool call
npx perstack run my-expert "user response" --continue -i
# Custom config
npx perstack run my-expert "query" \
  --config ./configs/production.toml \
  --env-path .env.production
# Registry Experts
npx perstack run tic-tac-toe "Let's play!"
npx perstack run @org/expert@1.0.0 "query"
# Non-default runtimes
npx perstack run my-expert "query" --runtime local
npx perstack run my-expert "query" --runtime cursor
npx perstack run my-expert "query" --runtime claude-code
npx perstack run my-expert "query" --runtime gemini

perstack publish

Publish an Expert to the registry.

perstack publish [expertName] [options]

Arguments:

  • [expertName]: Expert name from perstack.toml (prompts if not provided)

Options:

Option           Description
--config <path>  Path to perstack.toml
--dry-run        Validate without publishing

Example:

perstack publish my-expert
perstack publish my-expert --dry-run

Requires PERSTACK_API_KEY environment variable.

Note: Published Experts must use npx or uvx as skill commands. Arbitrary commands are not allowed for security reasons. See Publishing.

perstack unpublish

Remove an Expert version from the registry.

perstack unpublish [expertKey] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)

Options:

Option           Description
--config <path>  Path to perstack.toml
--force          Skip confirmation (required for non-interactive)

Example:

perstack unpublish # Interactive mode
perstack unpublish my-expert@1.0.0 --force # Non-interactive

perstack tag

Add or update tags on an Expert version.

perstack tag [expertKey] [tags...] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)
  • [tags...]: Tags to set (e.g., stable, beta)

Options:

Option           Description
--config <path>  Path to perstack.toml

Example:

perstack tag # Interactive mode
perstack tag my-expert@1.0.0 stable beta # Set tags directly

perstack status

Change the status of an Expert version.

perstack status [expertKey] [status] [options]

Arguments:

  • [expertKey]: Expert key with version (e.g., my-expert@1.0.0)
  • [status]: New status (available, deprecated, disabled)

Options:

Option           Description
--config <path>  Path to perstack.toml

Example:

perstack status # Interactive mode
perstack status my-expert@1.0.0 deprecated

Status      Meaning
available   Normal, visible in registry
deprecated  Still usable but discouraged
disabled    Cannot be executed

perstack log

View execution history and events for debugging.

perstack log [options]

Purpose:

Inspect job/run execution history and events for debugging. This command is designed for both human inspection and AI agent usage, making it easy to diagnose issues in Expert runs.

Default Behavior:

When called without options, shows a summary of the latest job with:

  • “(showing latest job)” indicator when no --job specified
  • “Storage: ” showing where data is stored
  • Maximum 100 events (use --take 0 for all)

Options:

Option                 Description
--job <jobId>          Show events for a specific job
--run <runId>          Show events for a specific run
--checkpoint <id>      Show checkpoint details
--step <step>          Filter by step number (e.g., 5, >5, 1-10)
--type <type>          Filter by event type
--errors               Show only error-related events
--tools                Show only tool call events
--delegations          Show only delegation events
--filter <expression>  Simple filter expression
--json                 Output as JSON (machine-readable)
--pretty               Pretty-print JSON output
--verbose              Show full event details
--take <n>             Number of events to display (default: 100, 0 for all)
--offset <n>           Number of events to skip (default: 0)
--context <n>          Include N events before/after matches
--messages             Show message history for checkpoint
--summary              Show summarized view
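The --take/--offset semantics amount to skipping then slicing, with take=0 meaning no limit. A minimal sketch (illustrative, not Perstack's code):

```python
def page_events(events, take=100, offset=0):
    """Apply --offset, then --take; take=0 disables the limit, per the docs."""
    window = events[offset:]
    return window if take == 0 else window[:take]

events = list(range(250))
print(len(page_events(events)))                 # 100 (default limit)
print(len(page_events(events, take=0)))         # 250 (all events)
print(page_events(events, take=3, offset=10))   # [10, 11, 12]
```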

Event Types:

Event Type          Description
startRun            Run started
callTools           Tool calls made
resolveToolResults  Tool results received
callDelegate        Delegation to another expert
stopRunByError      Error occurred
retry               Generation retry
completeRun         Run completed
continueToNextStep  Step transition

Filter Expression Syntax:

Simple conditions are supported:

# Exact match
--filter '.type == "completeRun"'
# Numeric comparison
--filter '.stepNumber > 5'
--filter '.stepNumber >= 5'
--filter '.stepNumber < 10'
# Array element matching
--filter '.toolCalls[].skillName == "base"'
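A rough model of how these simple conditions could be evaluated against a single event (illustrative sketch only; the real perstack log matcher may differ):

```python
import re

def matches(event, expr):
    """Evaluate a simple filter like '.type == "completeRun"' or
    '.toolCalls[].skillName == "base"' against one event dict."""
    m = re.match(r'^\.(\w+)(\[\])?(?:\.(\w+))?\s*(==|>=|<=|>|<)\s*(.+)$', expr.strip())
    if not m:
        raise ValueError(f"unsupported filter: {expr}")
    field, is_array, subfield, op, raw = m.groups()
    # String literals are quoted; anything else is treated as a number.
    target = raw.strip('"') if raw.startswith('"') else float(raw)
    values = ([item.get(subfield) for item in event.get(field, [])]
              if is_array else [event.get(field)])
    ops = {"==": lambda a, b: a == b, ">": lambda a, b: a > b,
           "<": lambda a, b: a < b, ">=": lambda a, b: a >= b,
           "<=": lambda a, b: a <= b}
    return any(v is not None and ops[op](v, target) for v in values)

print(matches({"type": "completeRun"}, '.type == "completeRun"'))  # True
print(matches({"stepNumber": 7}, '.stepNumber > 5'))               # True
print(matches({"toolCalls": [{"skillName": "base"}]},
              '.toolCalls[].skillName == "base"'))                 # True
```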

Step Range Syntax:

--step 5 # Exact step number
--step ">5" # Greater than 5
--step ">=5" # Greater than or equal to 5
--step "1-10" # Range (inclusive)
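The range grammar maps to a small matcher; this sketch assumes only the documented forms (exact, >, >=, and inclusive range):

```python
def step_matches(step, spec):
    """Match a step number against the --step syntax: '5', '>5', '>=5', '1-10'."""
    if spec.startswith(">="):
        return step >= int(spec[2:])
    if spec.startswith(">"):
        return step > int(spec[1:])
    if "-" in spec:
        lo, hi = spec.split("-", 1)
        return int(lo) <= step <= int(hi)  # range is inclusive on both ends
    return step == int(spec)

print([s for s in range(1, 12) if step_matches(s, "5-10")])  # [5, 6, 7, 8, 9, 10]
print(step_matches(6, ">5"))   # True
print(step_matches(5, ">=5"))  # True
```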

Examples:

# Show latest job summary
perstack log
# Show all events for a specific job
perstack log --job abc123
# Show events for a specific run
perstack log --run xyz789
# Show checkpoint details with messages
perstack log --checkpoint cp123 --messages
# Show only errors
perstack log --errors
# Show tool calls for steps 5-10
perstack log --tools --step "5-10"
# Filter by event type
perstack log --job abc123 --type callTools
# JSON output for automation
perstack log --job abc123 --json
# Error diagnosis with context
perstack log --errors --context 5
# Filter with expression
perstack log --filter '.toolCalls[].skillName == "base"'
# Summary view
perstack log --summary

Output Format:

Terminal output (default) shows human-readable format with colors:

Job: abc123 (completed)
Expert: my-expert@1.0.0
Started: 2024-12-23 10:30:15
Steps: 12

Events:
─────────────────────────────────────────────
[Step 1] startRun            10:30:15
  Expert: my-expert@1.0.0
  Query: "Analyze this code..."
[Step 2] callTools           10:30:18
  Tools: read_file, write_file
[Step 3] resolveToolResults  10:30:22
  ✓ read_file: Success
  ✗ write_file: Permission denied
─────────────────────────────────────────────

JSON output (--json) for machine parsing:

{
  "job": { "id": "abc123", "status": "completed" },
  "events": [
    { "type": "startRun", "stepNumber": 1 }
  ],
  "summary": {
    "totalEvents": 15,
    "errorCount": 0
  }
}

perstack install

Pre-collect tool definitions to enable instant LLM inference.

perstack install [options]

Purpose:

By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s startup latency per skill. perstack install solves this by:

  1. Initializing all skills once and collecting their tool schemas
  2. Caching the schemas in a perstack.lock file
  3. Enabling the runtime to start LLM inference immediately using cached schemas
  4. Deferring actual MCP connections until tools are called
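The defer-until-called behavior in steps 3-4 can be sketched with a lazily connected skill wrapper. `LazySkill` is hypothetical, not Perstack's actual code:

```python
class LazySkill:
    """Sketch of the lockfile idea: serve a cached tool schema immediately,
    and only connect to the MCP server when a tool is actually called."""
    def __init__(self, name, cached_schema, connect):
        self.name = name
        self.cached_schema = cached_schema  # as read from perstack.lock
        self._connect = connect             # slow MCP initialization, deferred
        self._client = None

    def tool_definitions(self):
        return self.cached_schema           # instant: no MCP round-trip

    def call(self, tool, **kwargs):
        if self._client is None:            # connect on first real tool call
            self._client = self._connect()
        return self._client(tool, kwargs)

# Toy "connection" standing in for an MCP client:
skill = LazySkill("files", [{"name": "read_file"}], lambda: (lambda t, a: f"ran {t}"))
print(skill.tool_definitions())           # cached schema, no connection made
print(skill.call("read_file", path="x"))  # connects, then runs the tool
```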

Options:

Option                Description             Default
--config <path>       Path to perstack.toml   Auto-discover from cwd
--env-path <path...>  Environment file paths  .env, .env.local

Example:

# Generate lockfile for current project
perstack install
# Generate lockfile for specific config
perstack install --config ./configs/production.toml
# Re-generate after adding new skills
perstack install

Output:

Creates perstack.lock in the same directory as perstack.toml. This file contains:

  • All expert definitions (including resolved delegates from registry)
  • All tool definitions for each expert’s skills

When to run:

  • After adding or modifying skills in perstack.toml
  • After updating MCP server dependencies
  • Before deploying to production for faster startup

Note: The lockfile is optional. If not present, skills are initialized at runtime as usual.

create-expert

Interactive wizard to create Perstack Experts.

npx create-expert # New project setup
npx create-expert my-expert "Add X" # Improve existing Expert

New Project Mode:

  • Detects available LLMs (Anthropic, OpenAI, Google)
  • Detects available runtimes (Cursor, Claude Code, Gemini)
  • Creates .env, AGENTS.md, perstack.toml
  • Runs Expert creation flow

Improvement Mode: When called with an Expert name, the wizard skips project setup and improves the existing Expert.