Multi-Agent Coordinator

A multi-agent coordinator that dispatches tasks to three specialized sub-agents: a demo-planner (breaks the task into research queries, with an @decision step for strategy selection), a demo-researcher (researches each query individually; spawned three times, once per query), and a demo-executor (synthesizes the findings into a final answer). All sub-agents are decorated with @waxell.observe for automatic parent-child trace correlation via WaxellContext lineage. A --policy-triggers mode intentionally exceeds max_steps to demonstrate governance halting.

Environment variables

This example runs in dry-run mode by default (no API key needed). For live mode, set OPENAI_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL.
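For live mode, the three variables can be exported before the run (placeholder values shown; the actual WAXELL_API_URL depends on your deployment):

```shell
# Hypothetical live-mode setup; substitute your real keys and endpoint
export OPENAI_API_KEY=sk-...
export WAXELL_API_KEY=...
export WAXELL_API_URL=...
python -m app.demos.multi_agent
```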

Architecture

Coordinator → Planner → Researcher ×3 (one per query) → Executor → final answer

Key Code

Coordinator with delegation steps

The coordinator records each delegation as a step and dispatches sub-agents sequentially.

@waxell.observe(agent_name="demo-coordinator", workflow_name="multi-agent-task")
async def run_coordinator(task, *, dry_run=False, policy_triggers=False, waxell_ctx=None):
    waxell.tag("demo", "multi-agent")
    waxell.tag("num_agents", "3")
    client = get_openai_client(dry_run=dry_run)

    delegate_to_planner()
    plan_result = await plan_task(task, client=client)

    delegate_to_researchers()
    findings = []
    for i, query in enumerate(plan_result["queries"]):
        finding = await research_query(query, query_index=i, client=client)
        findings.append(finding)

    delegate_to_executor()
    final_answer = await synthesize_findings(findings, task, client=client)
    return final_answer
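The researcher sub-agent itself is not shown above. A minimal sketch of what research_query might look like, with a dry-run stand-in client so the snippet runs without the real waxell or OpenAI libraries (the decorator and tag calls from the demo are shown as comments):

```python
import asyncio

class DryRunClient:
    """Stand-in for get_openai_client(dry_run=True): returns canned text."""
    async def complete(self, prompt: str) -> str:
        return f"[dry-run finding for: {prompt}]"

# In the demo this would carry
# @waxell.observe(agent_name="demo-researcher", workflow_name="research-query")
async def research_query(query, *, query_index=0, client=None):
    # waxell.tag("query_index", str(query_index))  # per-researcher tag for filtering
    client = client or DryRunClient()
    text = await client.complete(query)
    return {"query": query, "query_index": query_index, "finding": text}

async def demo():
    # The coordinator spawns one researcher per planned query
    queries = ["topic A", "topic B", "topic C"]
    return [await research_query(q, query_index=i) for i, q in enumerate(queries)]

findings = asyncio.run(demo())
```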

Planner with strategy decision

The planner breaks the task into queries, then decides the execution strategy.

@waxell.observe(agent_name="demo-planner", workflow_name="plan-task")
async def plan_task(task_description, *, client=None, waxell_ctx=None):
    analyze_task(task_description)

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": "Break this into 3 research queries"}, ...],
    )
    queries = parse_queries(response.choices[0].message.content)

    await choose_strategy(queries)  # @decision: parallel/sequential/focused
    generate_plan(queries)
    return {"queries": queries, "strategy": "parallel-research"}
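The decision hook referenced in the comment is implemented elsewhere in the demo. A hypothetical stand-in for choose_strategy, with illustrative branching (the real selection criteria are not shown in the source):

```python
import asyncio

# In the demo this is decorated with @waxell.decision so the chosen option
# is recorded on the trace. The branching below is illustrative only.
async def choose_strategy(queries):
    if len(queries) >= 3:
        return "parallel-research"   # independent queries: fan out researchers
    if len(queries) == 1:
        return "focused-deep-dive"   # single query: go deep instead of wide
    return "sequential-research"     # few dependent queries: run in order

fan_out = asyncio.run(choose_strategy(["q1", "q2", "q3"]))
deep_dive = asyncio.run(choose_strategy(["q1"]))
```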

What this demonstrates

  • @waxell.observe -- 6 agent contexts (coordinator + planner + 3 researchers + executor) with automatic parent-child lineage.
  • @waxell.step_dec -- delegation steps, task analysis, search, findings compilation, evaluation, and output production.
  • @waxell.decision -- research strategy selection (parallel-research/sequential-research/focused-deep-dive).
  • waxell.tag() -- per-researcher query_index tags for trace filtering.
  • waxell.metadata() -- num_findings on the executor agent.
  • Auto-instrumented LLM calls -- 5 OpenAI calls across 3 agent types.
  • PolicyViolationError handling -- catches governance policy violations and halts gracefully.
  • --policy-triggers mode -- intentionally exceeds max_steps to demonstrate governance enforcement.
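The governance halt in the last two bullets amounts to a try/except around the coordinator's step loop. A self-contained sketch, where PolicyViolationError is a local stand-in for the waxell SDK exception and the loop simulates delegation steps:

```python
class PolicyViolationError(Exception):
    """Stand-in for waxell's governance exception (e.g. max_steps exceeded)."""

def run_with_governance(step_count, max_steps=5):
    # Simulates the coordinator: each delegation records a step, and
    # governance halts the run once the policy limit is exceeded.
    completed = []
    try:
        for step in range(step_count):
            if step >= max_steps:
                raise PolicyViolationError(f"max_steps={max_steps} exceeded")
            completed.append(step)
    except PolicyViolationError as exc:
        # Halt gracefully: report partial progress instead of crashing
        return {"halted": True, "reason": str(exc), "steps_completed": len(completed)}
    return {"halted": False, "steps_completed": len(completed)}

normal = run_with_governance(3)     # within policy
triggered = run_with_governance(8)  # --policy-triggers behavior
```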

Run it

# Dry-run (no API key needed)
python -m app.demos.multi_agent --dry-run

# Live mode with OpenAI
OPENAI_API_KEY=sk-... python -m app.demos.multi_agent

# Policy trigger mode (exceeds step limits)
python -m app.demos.multi_agent --dry-run --policy-triggers

Source

dev/waxell-dev/app/demos/multi_agent.py