Experts
Experts are the core building block of Perstack — modular micro-agents designed for reuse.
The term “Expert” is familiar in AI (e.g., Mixture of Experts), but here it means something specific: a specialist component with a single, well-defined role.
Why Experts?
Traditional agent development produces monolithic agents optimized for specific use cases. They work, but they don’t transfer. You can’t take a “research agent” from one project and reuse it in another without significant rework.
Experts solve this by inverting the design:
| Traditional Agent | Expert |
|---|---|
| Represents a user | Serves an application |
| Does many things | Does one thing well |
| Application-specific | Purpose-specific, context-independent |
| Hard to reuse | Designed for reuse |
An agent represents a user — it acts on their behalf across many tasks. An Expert is a specialist component — it helps an application achieve a specific goal.
This distinction matters. When you build an Expert, you’re not building an application. You’re building a reusable capability that any application can leverage.
What is an Expert?
An Expert is defined by three things:
1. Purpose (description)
A clear statement of what the Expert does. Unlike instruction (which is private to the Expert), description is exposed to other Experts as a tool description when delegating.
When Expert A can delegate to Expert B, the runtime presents Expert B as a callable tool to Expert A — with description as the tool’s description. This is how Expert A decides:
- Which delegate to call
- What query to write
A good description tells potential callers what this Expert can do, when to use it, and what to include in the query.
[experts."code-reviewer"]description = """Reviews TypeScript code for type safety, error handling, and security issues.Provide the file path to review. Returns actionable feedback with code examples."""2. Domain knowledge (instruction)
The knowledge that transforms a general-purpose LLM into a specialist. This includes:
- What the Expert is expected to achieve
- Domain-specific concepts, rules, and constraints
- Completion criteria and priority tradeoffs
- Guidelines for using assigned skills
instruction = """You are a TypeScript code reviewer for production systems.
Review criteria:- Type safety: No `any` types, all types explicitly defined- Error handling: All errors must use codes from `error-codes.ts`- Security: Flag even minor risks
Provide actionable feedback with code examples."""3. Capabilities (skills, delegates)
What the Expert can do:
- Skills: Tools available through MCP (file access, web search, APIs)
- Delegates: Other Experts this Expert can call
delegates = ["security-analyst"]
[experts."code-reviewer".skills."static-analysis"]type = "mcpStdioSkill"command = "npx"packageName = "@eslint/mcp"How Experts work
Execution model
When you run an Expert:
- The runtime creates a Job and starts the first Run with your Expert (the Coordinator)
- The instruction becomes the system prompt (with runtime meta-instructions)
- Your query becomes the user message
- The LLM reasons and calls tools (skills) as needed
- Each step produces a checkpoint — a complete snapshot of the Run’s state
The runtime manages the execution loop. The Expert definition declares what to achieve; the runtime handles how.
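As a concrete sketch, here is a complete Expert definition assembled from the snippets above (the values simply reuse the earlier code-reviewer example, and the exact layout of the fields is illustrative). Running this Expert starts a Job with it as the Coordinator:

```toml
# A sketch of a full Expert definition, combining the description, instruction,
# and capabilities shown above. Running it starts a Job with this Expert as the
# Coordinator; its instruction becomes the system prompt.
[experts."code-reviewer"]
description = """Reviews TypeScript code for type safety, error handling, and security issues.
Provide the file path to review. Returns actionable feedback with code examples."""
instruction = """You are a TypeScript code reviewer for production systems.
Provide actionable feedback with code examples."""
delegates = ["security-analyst"]

[experts."code-reviewer".skills."static-analysis"]
type = "mcpStdioSkill"
command = "npx"
packageName = "@eslint/mcp"
```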
Delegation
Experts collaborate through delegation, not shared context. Each delegation creates a new Run within the same Job.
```
Job
│
├── Run 1: Expert A (Coordinator)
│   │
│   ├─ sees delegates as tools
│   │  (description → tool description)
│   │
│   ├─ calls delegate ─────────────────────┐
│   │  (writes query)                      │
│   │                                      │
│   │  [Run 1 paused]                      │
│   │                                      ▼
│   │                        ┌── Run 2: Expert B ──┐
│   │                        │ starts fresh        │
│   │                        │ (empty history)     │
│   │                        │ (own instruction)   │
│   │                        │                     │
│   │                        ├─ executes           │
│   │                        │                     │
│   │                        ├─ completes          │
│   │                        │                     │
│   │                        └───────┬─────────────┘
│   │                                │
│   ├─ resumes ◄─────────────────────┘
│   │  (receives run result only)
│   ▼
```
Context is never shared between Experts. The delegate receives only the query — no message history, no parent context. This is a security boundary, not a limitation. See Why context isolation matters.
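For example, the delegate behind `delegates = ["security-analyst"]` could be defined along these lines (a sketch with illustrative field values): the Coordinator is shown only its description, never its instruction.

```toml
# Illustrative sketch of the delegate's own definition.
# The Coordinator sees only the description (presented as a tool description);
# the instruction below stays private to this Expert.
[experts."security-analyst"]
description = """Analyzes code for security risks.
Provide the file path and the specific concern to investigate."""
instruction = """You are a security analyst for production systems.
Flag even minor risks and explain their impact."""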
Parallel delegation
When the LLM calls multiple delegate tools in a single response, the runtime executes them in parallel:
```
Job
│
├── Run 1: Expert A (Coordinator)
│   │
│   ├─ calls Expert B and Expert C ────────┬─────────────┐
│   │  (in single response)                │             │
│   │                                      │             │
│   │  [Run 1 paused]                      ▼             ▼
│   │                                ┌── Run 2 ──┐ ┌── Run 3 ──┐
│   │                                │ Expert B  │ │ Expert C  │
│   │                                │           │ │           │
│   │                                │ executes  │ │ executes  │
│   │                                │ in        │ │ in        │
│   │                                │ parallel  │ │ parallel  │
│   │                                │           │ │           │
│   │                                └─────┬─────┘ └─────┬─────┘
│   │                                      │             │
│   ├─ resumes ◄───────────────────────────┴─────────────┘
│   │  (receives all results)
│   ▼
```
Benefits:
- Performance: Independent tasks run concurrently
- Natural: LLM decides when to parallelize based on task requirements
- No configuration: Automatic when multiple delegates called together
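There is nothing to configure beyond listing the delegates. A minimal sketch, assuming a second, hypothetical performance-analyst Expert:

```toml
# Illustrative sketch: "performance-analyst" is a hypothetical Expert name.
# When the LLM calls both delegates in a single response, the runtime runs
# them as parallel Runs and resumes the Coordinator with both results.
[experts."code-reviewer"]
delegates = ["security-analyst", "performance-analyst"]
```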
Note the asymmetry: Expert A sees Expert B’s description (public interface), but never its instruction (private implementation). This is what makes Experts composable — the caller only needs to know what a delegate does, not how it does it.
Key design decisions:
| Aspect | Design | Rationale |
|---|---|---|
| Message history | Not shared | Each Expert has a single responsibility; mixing contexts breaks focus |
| Communication | Natural language | No schema versioning, maximum flexibility, humans and Experts use the same interface |
| State exchange | Workspace files | Persistent, inspectable, works across restarts |
| Interactive tools | Coordinator only | See below |
This is intentional. See Why context isolation matters for the security rationale.
Why no interactive tools for delegates?
Delegated Experts run without interactive tool access. If a delegate needs clarification:
- It should return what it knows (via attemptCompletion)
- The Coordinator receives the result
- The Coordinator can ask the user for clarification
- The Coordinator can re-delegate with better information
This keeps the user interface at the Coordinator level and prevents deep call chains from blocking on user input.
Delegation failure handling
When a Delegated Expert fails (unrecoverable error), the Job continues:
- The failed Run is marked as stoppedByError
- The error is returned to the Coordinator as the delegation result
- The Coordinator decides how to handle it (retry, try different Expert, give up)
```
Job (continues running)
│
├── Run 1: Coordinator
│   │
│   ├─ delegates to Expert B ───────────┐
│   │                                   │
│   │                          Run 2: Expert B
│   │                                   │
│   │                               ❌ FAILS
│   │                                   │
│   ├─ receives error ◄─────────────────┘
│   │  "Delegation failed: [error message]"
│   │
│   ├─ decides: retry? different Expert? give up?
│   ▼
```
This design provides resilience — a single delegate failure doesn’t crash the entire Job. The Coordinator has full control over error handling.
Workspace
The workspace is a shared filesystem where Experts read and write files. Unlike message history, the workspace persists across Expert boundaries — this is how Experts exchange complex data.
How you organize workspace files is up to you. The runtime reserves perstack/ for execution history — see Runtime for details.
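As a sketch, an instruction can tell an Expert where to put its output so a delegate can pick it up. The file path and wording below are hypothetical; only perstack/ is reserved by the runtime:

```toml
# Illustrative sketch: exchanging state through workspace files.
# The path reports/review.md is a hypothetical example.
[experts."code-reviewer"]
instruction = """You are a TypeScript code reviewer for production systems.
Write your full findings to reports/review.md in the workspace, then delegate
to security-analyst and tell it which file to read."""
```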
What’s next
Ready to build Experts? See the Making Experts guide:
- Making Experts — defining Experts in perstack.toml
- Best Practices — design guidelines for effective Experts
- Skills — adding MCP tools to your Experts