# CLI Reference

## Running Experts

### perstack start

Interactive workbench for developing and testing Experts.
```sh
perstack start [expertKey] [query] [options]
```

Arguments:

- `[expertKey]`: Expert key (optional; prompts if not provided)
- `[query]`: Input query (optional; prompts if not provided)

Opens a text-based UI for iterating on Expert definitions. See Running Experts.
### perstack run

Headless execution for production and automation.

```sh
perstack run <expertKey> <query> [options]
```

Arguments:

- `<expertKey>`: Expert key (required). Examples: `my-expert`, `@org/my-expert`, `@org/my-expert@1.0.0`
- `<query>`: Input query (required)
Outputs JSON events to stdout.
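Because the output is machine-readable, downstream tooling can consume a headless run programmatically. A minimal sketch in Python, assuming the events arrive as one JSON object per line and carry the `type`/`stepNumber` fields shown later in the `perstack log` section:

```python
import json

def iter_events(lines):
    """Yield parsed events from a stream of JSON lines.

    Assumes one JSON object per line; blank lines are skipped.
    """
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical usage: watch a headless run for errors.
# proc = subprocess.Popen(["npx", "perstack", "run", "my-expert", "query"],
#                         stdout=subprocess.PIPE, text=True)
# for event in iter_events(proc.stdout):
#     if event.get("type") == "stopRunByError":
#         print("Run failed at step", event.get("stepNumber"))
```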
## Shared Options

Both `start` and `run` accept the same options:

### Model and Provider

| Option | Description | Default |
|---|---|---|
| `--provider <provider>` | LLM provider | `anthropic` |
| `--model <model>` | Model name | `claude-sonnet-4-5` |

Providers: `anthropic`, `google`, `openai`, `ollama`, `azure-openai`, `amazon-bedrock`, `google-vertex`
### Execution Control

| Option | Description | Default |
|---|---|---|
| `--max-steps <n>` | Maximum total steps across all Runs in a Job | unlimited |
| `--max-retries <n>` | Max retry attempts per generation | 5 |
| `--timeout <ms>` | Timeout per generation (ms) | 60000 |
### Runtime

| Option | Description | Default |
|---|---|---|
| `--runtime <runtime>` | Execution runtime | From config or `docker` |
| `--workspace <path>` | Workspace directory for Docker runtime | `./workspace` |
| `--env <name...>` | Env vars to pass to Docker runtime | - |

Available runtimes:

- `docker`: Containerized runtime with network isolation (default)
- `local`: Built-in runtime without isolation
- `cursor`: Cursor CLI (experimental)
- `claude-code`: Claude Code CLI (experimental)
- `gemini`: Gemini CLI (experimental)

If `--runtime` is not specified, the runtime is determined by the `runtime` field in `perstack.toml`. If neither is set, `docker` is used.
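This resolution order (CLI flag, then config field, then built-in default) is a simple fallback chain. A sketch of the documented precedence; `resolve_runtime` is an illustrative name, not Perstack's actual internals:

```python
def resolve_runtime(cli_flag=None, config_value=None):
    """Pick the runtime per the documented precedence:
    --runtime flag > `runtime` field in perstack.toml > "docker".
    """
    return cli_flag or config_value or "docker"
```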
Passing environment variables to Docker:

Use `--env` to pass specific environment variables to the Docker container at runtime. This is useful for:

- Private npm packages: `--env NPM_TOKEN`
- Custom API keys needed by skills: `--env MY_API_KEY`
```sh
# Pass NPM_TOKEN for private npm packages
perstack run my-expert "query" --runtime docker --env NPM_TOKEN

# Pass multiple environment variables
perstack run my-expert "query" --env NPM_TOKEN --env MY_API_KEY
```

See Multi-Runtime Support for setup and limitations.
### Configuration

| Option | Description | Default |
|---|---|---|
| `--config <path>` | Path to `perstack.toml` | Auto-discover from cwd |
| `--env-path <path...>` | Environment file paths | `.env`, `.env.local` |
### Job and Run Management

| Option | Description |
|---|---|
| `--job-id <id>` | Custom Job ID for new Job (default: auto-generated) |
| `--continue` | Continue latest Job with new Run |
| `--continue-job <id>` | Continue specific Job with new Run |
| `--resume-from <id>` | Resume from specific checkpoint (requires `--continue-job`) |
Combining options:

```sh
# Continue latest Job from its latest checkpoint
--continue

# Continue specific Job from its latest checkpoint
--continue-job <jobId>

# Continue specific Job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>
```

Note: `--resume-from` requires `--continue-job` (the Job ID must be specified). You can only resume from the Coordinator Expert's checkpoints.
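The constraint in the note amounts to a small validation rule. A sketch; `validate_resume_flags` is a hypothetical helper for scripts that wrap the CLI, not part of Perstack itself:

```python
def validate_resume_flags(continue_job=None, resume_from=None):
    """Enforce the documented rule: --resume-from requires --continue-job."""
    if resume_from and not continue_job:
        raise ValueError("--resume-from requires --continue-job")
    return True
```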
### Interactive

| Option | Description |
|---|---|
| `-i, --interactive-tool-call-result` | Treat query as interactive tool call result |

Use with `--continue` to respond to interactive tool calls from the Coordinator Expert.
### Logging

| Option | Description |
|---|---|
| `--verbose` | Enable verbose logging (see Verbose Mode) |
## Verbose Mode

The `--verbose` flag enables detailed logging for debugging purposes. The behavior varies by runtime:

### Default Runtime (perstack)

Shows additional runtime information in the output.

### Docker Runtime (`--runtime docker`)

Enables comprehensive debugging output:
Docker Build Progress:
- Image layer pulling progress
- Build step execution
- Dependency installation status
Container Lifecycle:
- Container startup status
- Health check results
- Container exit information
Proxy Monitoring (when network isolation is enabled):
- Real-time allow/block events for network requests
- Domain and port information for each request
- Clear indication of blocked requests with reasons
Example output in TUI:

```
Docker Build [runtime] Building   Installing dependencies...
Docker Build [runtime] Complete   Docker build completed
Docker [proxy] Starting           Starting proxy container...
Docker [proxy] Healthy            Proxy container ready
Docker [runtime] Starting         Starting runtime container...
Docker [runtime] Running          Runtime container started
Proxy ✓ api.anthropic.com:443
Proxy ✗ blocked-domain.com:443    Domain not in allowlist
```

Use cases:
- Debugging network connectivity issues
- Verifying proxy allowlist configuration
- Monitoring which domains are being accessed
- Troubleshooting container startup failures
## Examples

```sh
# Basic execution (creates new Job)
npx perstack run my-expert "Review this code"

# With model options
npx perstack run my-expert "query" \
  --provider google \
  --model gemini-2.5-pro \
  --max-steps 100

# Continue Job with follow-up
npx perstack run my-expert "initial query"
npx perstack run my-expert "follow-up" --continue

# Continue specific Job from latest checkpoint
npx perstack run my-expert "continue" --continue-job job_abc123

# Continue specific Job from specific checkpoint
npx perstack run my-expert "retry with different approach" \
  --continue-job job_abc123 \
  --resume-from checkpoint_xyz

# Custom Job ID for new Job
npx perstack run my-expert "query" --job-id my-custom-job

# Respond to interactive tool call
npx perstack run my-expert "user response" --continue -i

# Custom config
npx perstack run my-expert "query" \
  --config ./configs/production.toml \
  --env-path .env.production

# Registry Experts
npx perstack run tic-tac-toe "Let's play!"
npx perstack run @org/expert@1.0.0 "query"

# Non-default runtimes
npx perstack run my-expert "query" --runtime local
npx perstack run my-expert "query" --runtime cursor
npx perstack run my-expert "query" --runtime claude-code
npx perstack run my-expert "query" --runtime gemini
```

## Registry Management
### perstack publish

Publish an Expert to the registry.

```sh
perstack publish [expertName] [options]
```

Arguments:

- `[expertName]`: Expert name from `perstack.toml` (prompts if not provided)
Options:

| Option | Description |
|---|---|
| `--config <path>` | Path to `perstack.toml` |
| `--dry-run` | Validate without publishing |
Example:

```sh
perstack publish my-expert
perstack publish my-expert --dry-run
```

Requires the `PERSTACK_API_KEY` environment variable.

Note: Published Experts must use `npx` or `uvx` as skill commands. Arbitrary commands are not allowed for security reasons. See Publishing.
### perstack unpublish

Remove an Expert version from the registry.

```sh
perstack unpublish [expertKey] [options]
```

Arguments:

- `[expertKey]`: Expert key with version (e.g., `my-expert@1.0.0`)
Options:

| Option | Description |
|---|---|
| `--config <path>` | Path to `perstack.toml` |
| `--force` | Skip confirmation (required for non-interactive use) |
Example:

```sh
perstack unpublish                          # Interactive mode
perstack unpublish my-expert@1.0.0 --force  # Non-interactive
```

### perstack tag

Add or update tags on an Expert version.

```sh
perstack tag [expertKey] [tags...] [options]
```

Arguments:

- `[expertKey]`: Expert key with version (e.g., `my-expert@1.0.0`)
- `[tags...]`: Tags to set (e.g., `stable`, `beta`)
Options:

| Option | Description |
|---|---|
| `--config <path>` | Path to `perstack.toml` |
Example:

```sh
perstack tag                              # Interactive mode
perstack tag my-expert@1.0.0 stable beta  # Set tags directly
```

### perstack status

Change the status of an Expert version.

```sh
perstack status [expertKey] [status] [options]
```

Arguments:

- `[expertKey]`: Expert key with version (e.g., `my-expert@1.0.0`)
- `[status]`: New status (`available`, `deprecated`, `disabled`)
Options:

| Option | Description |
|---|---|
| `--config <path>` | Path to `perstack.toml` |
Example:

```sh
perstack status                             # Interactive mode
perstack status my-expert@1.0.0 deprecated
```

| Status | Meaning |
|---|---|
| `available` | Normal, visible in registry |
| `deprecated` | Still usable but discouraged |
| `disabled` | Cannot be executed |
## Debugging and Inspection

### perstack log

View execution history and events for debugging.

```sh
perstack log [options]
```

Purpose:

Inspect Job/Run execution history and events for debugging. This command is designed for both human inspection and AI agent usage, making it easy to diagnose issues in Expert runs.
Default Behavior:

When called without options, shows a summary of the latest job with:

- A "(showing latest job)" indicator when no `--job` is specified
- A "Storage:" line showing where data is stored
- A maximum of 100 events (use `--take 0` for all)
Options:

| Option | Description |
|---|---|
| `--job <jobId>` | Show events for a specific job |
| `--run <runId>` | Show events for a specific run |
| `--checkpoint <id>` | Show checkpoint details |
| `--step <step>` | Filter by step number (e.g., `5`, `>5`, `1-10`) |
| `--type <type>` | Filter by event type |
| `--errors` | Show only error-related events |
| `--tools` | Show only tool call events |
| `--delegations` | Show only delegation events |
| `--filter <expression>` | Simple filter expression |
| `--json` | Output as JSON (machine-readable) |
| `--pretty` | Pretty-print JSON output |
| `--verbose` | Show full event details |
| `--take <n>` | Number of events to display (default: 100, `0` for all) |
| `--offset <n>` | Number of events to skip (default: 0) |
| `--context <n>` | Include N events before/after matches |
| `--messages` | Show message history for checkpoint |
| `--summary` | Show summarized view |
Event Types:

| Event Type | Description |
|---|---|
| `startRun` | Run started |
| `callTools` | Tool calls made |
| `resolveToolResults` | Tool results received |
| `callDelegate` | Delegation to another expert |
| `stopRunByError` | Error occurred |
| `retry` | Generation retry |
| `completeRun` | Run completed |
| `continueToNextStep` | Step transition |
Filter Expression Syntax:

Simple conditions are supported:

```sh
# Exact match
--filter '.type == "completeRun"'

# Numeric comparison
--filter '.stepNumber > 5'
--filter '.stepNumber >= 5'
--filter '.stepNumber < 10'

# Array element matching
--filter '.toolCalls[].skillName == "base"'
```

Step Range Syntax:

```sh
--step 5      # Exact step number
--step ">5"   # Greater than 5
--step ">=5"  # Greater than or equal to 5
--step "1-10" # Range (inclusive)
```
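Scripts that post-process `--json` output can apply the same step matching client-side. A sketch covering the four documented forms; `step_matches` is illustrative, not a Perstack API:

```python
def step_matches(expr, step):
    """Return True if `step` satisfies a --step expression:
    "5" (exact), ">5", ">=5", or "1-10" (inclusive range).
    """
    if expr.startswith(">="):
        return step >= int(expr[2:])
    if expr.startswith(">"):
        return step > int(expr[1:])
    if "-" in expr:
        lo, hi = expr.split("-", 1)
        return int(lo) <= step <= int(hi)
    return step == int(expr)
```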
Examples:

```sh
# Show latest job summary
perstack log

# Show all events for a specific job
perstack log --job abc123

# Show events for a specific run
perstack log --run xyz789

# Show checkpoint details with messages
perstack log --checkpoint cp123 --messages

# Show only errors
perstack log --errors

# Show tool calls for steps 5-10
perstack log --tools --step "5-10"

# Filter by event type
perstack log --job abc123 --type callTools

# JSON output for automation
perstack log --job abc123 --json

# Error diagnosis with context
perstack log --errors --context 5

# Filter with expression
perstack log --filter '.toolCalls[].skillName == "base"'

# Summary view
perstack log --summary
```

Output Format:
Terminal output (default) shows a human-readable format with colors:

```
Job: abc123 (completed)
Expert: my-expert@1.0.0
Started: 2024-12-23 10:30:15
Steps: 12

Events:
─────────────────────────────────────────────
[Step 1] startRun  10:30:15
  Expert: my-expert@1.0.0
  Query: "Analyze this code..."

[Step 2] callTools  10:30:18
  Tools: read_file, write_file

[Step 3] resolveToolResults  10:30:22
  ✓ read_file: Success
  ✗ write_file: Permission denied
─────────────────────────────────────────────
```

JSON output (`--json`) for machine parsing:

```json
{
  "job": { "id": "abc123", "status": "completed" },
  "events": [
    { "type": "startRun", "stepNumber": 1 }
  ],
  "summary": { "totalEvents": 15, "errorCount": 0 }
}
```

## Performance Optimization
### perstack install

Pre-collect tool definitions to enable instant LLM inference.

```sh
perstack install [options]
```

Purpose:
By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s of startup latency per skill. `perstack install` solves this by:

- Initializing all skills once and collecting their tool schemas
- Caching the schemas in a `perstack.lock` file
- Enabling the runtime to start LLM inference immediately using cached schemas
- Deferring actual MCP connections until tools are called
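This caching behavior is a read-through fallback: use the lockfile when present, otherwise initialize skills at runtime. A sketch of that logic only; the lockfile name is documented, but its internal JSON layout is an assumption here:

```python
import json
from pathlib import Path

def load_cached_schemas(project_dir):
    """Return cached tool schemas from perstack.lock, or None to
    signal that skills must be initialized at runtime instead.
    """
    lock = Path(project_dir) / "perstack.lock"
    if lock.exists():
        return json.loads(lock.read_text())
    return None
```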
Options:

| Option | Description | Default |
|---|---|---|
| `--config <path>` | Path to `perstack.toml` | Auto-discover from cwd |
| `--env-path <path...>` | Environment file paths | `.env`, `.env.local` |
Example:

```sh
# Generate lockfile for current project
perstack install

# Generate lockfile for specific config
perstack install --config ./configs/production.toml

# Re-generate after adding new skills
perstack install
```

Output:

Creates `perstack.lock` in the same directory as `perstack.toml`. This file contains:

- All expert definitions (including resolved delegates from registry)
- All tool definitions for each expert's skills
When to run:

- After adding or modifying skills in `perstack.toml`
- After updating MCP server dependencies
- Before deploying to production for faster startup

Note: The lockfile is optional. If not present, skills are initialized at runtime as usual.
## Project Setup

### npx create-expert

Interactive wizard to create Perstack Experts.

```sh
npx create-expert                     # New project setup
npx create-expert my-expert "Add X"   # Improve existing Expert
```

New Project Mode:

- Detects available LLMs (Anthropic, OpenAI, Google)
- Detects available runtimes (Cursor, Claude Code, Gemini)
- Creates `.env`, `AGENTS.md`, and `perstack.toml`
- Runs the Expert creation flow

Improvement Mode: When called with an Expert name, skips setup and improves the existing Expert.