Break down complex goals into tasks, route them to specialized agents, and iterate until done — all driven by an LLM planner with a real-time web dashboard.
```
Orchestrator (LLM planner, no tools)
├── researcher [web-search, browser] — finds information
├── coder [bash, file-ops] — writes and tests code
├── analyst [no tools, reasoning only] — compares and recommends
└── main [general] — fallback for everything else
```

Each agent runs in its own sandbox with only the tools it needs. No single agent has the keys to the kingdom.
A researcher has web search but can't execute code. A coder has a shell but can't access your database. Permissions are scoped by design.
When a coding task goes to an agent that can only write code, the blast radius of a prompt injection or hallucination is contained.
Focused agents with targeted system prompts and specific tool access outperform a single general-purpose agent on domain tasks.
The orchestrator decides what to do and who should do it. Agents don't need to know about each other — they just execute.
The LLM decides what to do next based on accumulated results — not a rigid pre-planned DAG.
Tasks are assigned to the best agent by name or capability: researcher, coder, analyst, or any custom agent.
Agent metadata loaded from each agent's SOUL.md on the gateway. No static configuration needed.
Browser-based UI with SSE streaming, step visualization, task output inspection, and run history.
OpenClaw gateway agents, HTTP endpoints, or plain async functions — mix and match.
Handles markdown-wrapped JSON, prose prefixes, and truncated gateway responses gracefully.
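That lenient parsing might look roughly like the following sketch (a hypothetical `extractJson` helper under assumed response shapes, not the library's actual implementation):

```typescript
// Hypothetical sketch of lenient plan parsing: strip a markdown fence and
// any prose prefix, then repair a truncated object by closing open brackets.
function extractJson(raw: string): unknown {
  // Prefer the body of a fenced json code block if one exists.
  const fence = raw.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  let text = (fence ? fence[1] : raw).trim();

  // Skip any prose before the first { or [.
  const start = text.search(/[{[]/);
  if (start === -1) throw new Error("no JSON found in planner output");
  text = text.slice(start);

  // Scan brackets (ignoring those inside strings) to find where the first
  // complete value ends, or which brackets are still open if truncated.
  const stack: string[] = [];
  let inString = false;
  let end = -1;
  for (let i = 0; i < text.length && end === -1; i++) {
    const c = text[i];
    if (inString) {
      if (c === "\\") i++; // skip the escaped character
      else if (c === '"') inString = false;
    } else if (c === '"') inString = true;
    else if (c === "{") stack.push("}");
    else if (c === "[") stack.push("]");
    else if (c === "}" || c === "]") {
      stack.pop();
      if (stack.length === 0) end = i + 1;
    }
  }

  if (end !== -1) return JSON.parse(text.slice(0, end)); // drop prose suffix
  // Truncated: trim a dangling comma, close an open string and any brackets.
  const repaired =
    text.replace(/,\s*$/, "") + (inString ? '"' : "") + stack.reverse().join("");
  return JSON.parse(repaired);
}
```

This is deliberately crude: it handles a fenced block, a prose prefix or suffix, and a response cut off mid-object, which covers the common failure modes of LLM-emitted JSON.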
Submit goals, watch tasks execute with live status updates, and inspect results — all from your browser.
The orchestrator runs an adaptive loop — the LLM decides what to do, agents execute, and results feed back in.
The orchestrator sends the goal and all accumulated results to the LLM planner. It responds with a batch of tasks to execute, or a final answer.
Tasks are dispatched to agents based on name or capability. Tasks in the same step run concurrently for maximum throughput.
Results feed back into the next think step. The LLM sees what succeeded, what failed, and decides what to do next — or synthesizes the final answer.
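As a rough sketch, that loop can be expressed like this (stand-in `plan` and agent functions with assumed shapes; the real orchestrator wires in an LLM planner and adapter-backed agents):

```typescript
// Simplified think → act → observe loop with stand-in types. Tasks in a
// batch run concurrently; results accumulate and feed the next plan call.
type Task = { agent: string; input: string };
type Plan = { tasks?: Task[]; finalAnswer?: string };
type Result = { agent: string; input: string; output: string };

async function runLoop(
  goal: string,
  plan: (goal: string, results: Result[]) => Promise<Plan>,
  agents: Record<string, (input: string) => Promise<string>>,
  maxSteps = 5,
): Promise<string> {
  const results: Result[] = [];
  for (let step = 0; step < maxSteps; step++) {
    // Think: the planner sees the goal plus everything gathered so far.
    const next = await plan(goal, results);
    if (next.finalAnswer !== undefined) return next.finalAnswer;

    // Act: all tasks in this step execute concurrently.
    const batch = await Promise.all(
      (next.tasks ?? []).map(async (t) => ({
        agent: t.agent,
        input: t.input,
        output: await agents[t.agent](t.input),
      })),
    );

    // Observe: results feed back into the next think step.
    results.push(...batch);
  }
  return "max steps reached without a final answer";
}

// Example wiring with a scripted planner and one mock agent:
async function demo(): Promise<string> {
  return runLoop(
    "compare X and Y",
    async (goal, results) =>
      results.length === 0
        ? { tasks: [{ agent: "researcher", input: goal }] }
        : { finalAnswer: `Based on: ${results[0].output}` },
    { researcher: async (input) => `notes on ${input}` },
  );
}
```

Because the planner sees all accumulated results on each iteration, it can branch, retry, or stop early without any pre-planned DAG.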
```bash
# Install
npm install openclaw-orchestrator

# Start the dashboard (connects to your OpenClaw gateway)
openclaw-orchestrator serve -g ws://your-gateway:port/ -t YOUR_TOKEN

# Or run a goal directly from the CLI
openclaw-orchestrator run "Compare React and Svelte for dashboards" \
  -g ws://your-gateway:port/ -t YOUR_TOKEN
```
Open http://localhost:3000 to see the dashboard.
All commands accept --debug for verbose logging.
```typescript
import { Orchestrator, FunctionAdapter } from "openclaw-orchestrator";

const orch = new Orchestrator();

orch.addAgent(new FunctionAdapter({
  name: "researcher",
  description: "Finds information on the web",
  capabilities: ["research", "web-search"],
  fn: async (task) => {
    return `Results for: ${task}`;
  },
}));

const result = await orch.run("Build a URL shortener", {
  maxSteps: 5,
}, {
  onStepStart: (step, ids) => console.log(`Step ${step}`),
  onTaskEnd: (step, id, res) => console.log(`  ${id}: ${res.status}`),
  onFinish: (answer) => console.log(answer),
});
```
Mix and match OpenClaw gateway agents, HTTP endpoints, and in-process functions in the same orchestration.
Discover agents from your OpenClaw gateway. Metadata loaded automatically from SOUL.md files.
```typescript
const orch = new Orchestrator();

orch.addGateway({
  name: "main",
  url: "ws://host:port/",
  token: "...",
});
```
Call any HTTP API as an agent. POST a task, get a result back.
```typescript
orch.addAgent(new HttpAdapter({
  name: "summarizer",
  url: "https://api.co/summarize",
  capabilities: ["summarization"],
}));
```
Wrap any async function as an agent. Great for testing and local tools.
```typescript
orch.addAgent(new FunctionAdapter({
  name: "calc",
  capabilities: ["math"],
  fn: async (task) => eval(task), // demo only: never eval untrusted input
}));
```