
Multi-Agent Swarm Agent

A multi-agent collaboration demo that compares three swarm frameworks -- Agency Swarm, SuperAGI, and CAMEL -- side by side. An agency-swarm-runner exercises Agency.run, Agent.run, and BaseTool.run; a superagi-runner exercises Agent.execute, execute_next_step, and ToolStepHandler.execute; a camel-runner exercises ChatAgent.step and RolePlaying.step. The orchestrator compares the results, selects the best framework via @waxell.decision, and synthesizes findings with an LLM call.

Environment variables

This example runs in dry-run mode by default (no API key needed). For live mode, set OPENAI_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL.

Architecture

Key Code

Framework-specific tool wrappers

Each framework's API methods are wrapped with @waxell.tool(tool_type="multi_agent") for consistent trace attribution.

@waxell.tool(tool_type="multi_agent")
def agency_run(agency, message: str) -> dict:
    result = agency.run(message)
    return {"result": result, "agents_invoked": len(agency.agents)}

@waxell.tool(tool_type="multi_agent")
def superagi_execute(agent) -> dict:
    result = agent.execute()
    return {"agent_name": agent.agent_name, "status": result["status"],
            "steps_taken": result["steps_taken"]}

@waxell.tool(tool_type="multi_agent")
def camel_chat_step(agent, message: str) -> dict:
    resp = agent.step(message)
    return {"agent_role": agent.role_name, "message_count": len(resp.msgs)}

@waxell.tool(tool_type="multi_agent")
def camel_roleplay_step(session) -> dict:
    assistant_resp, user_resp = session.step()
    return {"assistant_resp": assistant_resp, "user_resp": user_resp}
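In dry-run mode no real framework objects are needed, so the wrapper logic can be exercised against stand-ins. A minimal self-contained sketch -- the stub Agency class and the pass-through decorator are assumptions for illustration, not part of the waxell SDK or Agency Swarm:

```python
def tool(tool_type=None):
    """Pass-through stand-in for @waxell.tool (assumption, for dry runs)."""
    def decorate(fn):
        return fn
    return decorate

class StubAgency:
    """Stand-in for an Agency Swarm agency with a list of child agents."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, message):
        # Echo instead of routing through real agents.
        return f"handled: {message}"

@tool(tool_type="multi_agent")
def agency_run(agency, message: str) -> dict:
    result = agency.run(message)
    return {"result": result, "agents_invoked": len(agency.agents)}

out = agency_run(StubAgency(["planner", "writer"]), "summarize the brief")
print(out)  # {'result': 'handled: summarize the brief', 'agents_invoked': 2}
```

The same stub pattern applies to superagi_execute and the two CAMEL wrappers: each stand-in only needs to expose the attributes the wrapper reads.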

Framework comparison and selection

The orchestrator compares the results from all three frameworks and selects the best one for the task.

@waxell.step_dec(name="compare_frameworks")
def compare_frameworks(swarm_result, superagi_result, camel_msgs) -> dict:
    return {"frameworks_tested": 3, "agency_swarm": swarm_result[:100],
            "superagi_status": superagi_result["status"],
            "camel_messages": camel_msgs}

@waxell.decision(name="choose_framework", options=["agency_swarm", "superagi", "camel"])
async def choose_framework(comparison) -> dict:
    return {"chosen": "agency_swarm",
            "reasoning": "Best multi-agent orchestration with clear flows"}
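Note that choose_framework is async, so invoking the pair in isolation needs an event loop. A runnable sketch with pass-through stand-ins for the two decorators (the stand-ins and the sample inputs are assumptions, not the real waxell API):

```python
import asyncio

def step_dec(name=None):
    """Pass-through stand-in for @waxell.step_dec."""
    def decorate(fn):
        return fn
    return decorate

def decision(name=None, options=None):
    """Pass-through stand-in for @waxell.decision."""
    def decorate(fn):
        return fn
    return decorate

@step_dec(name="compare_frameworks")
def compare_frameworks(swarm_result, superagi_result, camel_msgs) -> dict:
    return {"frameworks_tested": 3, "agency_swarm": swarm_result[:100],
            "superagi_status": superagi_result["status"],
            "camel_messages": camel_msgs}

@decision(name="choose_framework", options=["agency_swarm", "superagi", "camel"])
async def choose_framework(comparison) -> dict:
    return {"chosen": "agency_swarm",
            "reasoning": "Best multi-agent orchestration with clear flows"}

# Sample dry-run inputs (hypothetical values).
comparison = compare_frameworks("swarm output...", {"status": "COMPLETE"}, 4)
choice = asyncio.run(choose_framework(comparison))
print(choice["chosen"])  # agency_swarm
```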

What this demonstrates

  • @waxell.tool(tool_type="multi_agent") -- 8 tool calls across 3 swarm frameworks covering all wrapt-target methods.
  • @waxell.step_dec -- agent setup and framework comparison steps.
  • @waxell.decision -- framework selection (agency_swarm/superagi/camel) based on comparison results.
  • waxell.score() -- framework_coverage score.
  • waxell.tag() -- per-child-agent framework tags for trace filtering.
  • Auto-instrumented LLM calls -- OpenAI synthesis call.
  • Nested @waxell.observe -- orchestrator + 3 child agents (agency-swarm-runner, superagi-runner, camel-runner).
  • 3 swarm frameworks compared -- Agency Swarm (multi-agent agency), SuperAGI (autonomous execution), CAMEL (role-playing chat).
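The waxell.score() and waxell.tag() calls from the list above might look roughly like the following inside the orchestrator. The argument names are assumptions inferred from the call names, not a documented waxell signature, and the recording stub exists only so the sketch runs stand-alone:

```python
class StubWaxell:
    """Records score/tag calls in place of the real SDK (assumption)."""
    def __init__(self):
        self.scores = {}
        self.tags = {}

    def score(self, name, value):
        self.scores[name] = value

    def tag(self, key, value):
        self.tags[key] = value

waxell = StubWaxell()

frameworks = ["agency_swarm", "superagi", "camel"]
succeeded = ["agency_swarm", "superagi", "camel"]  # all three ran in dry-run

# framework_coverage: fraction of frameworks exercised end to end
waxell.score("framework_coverage", len(succeeded) / len(frameworks))

# Per-child-agent framework tags for trace filtering.
for runner, fw in [("agency-swarm-runner", "agency_swarm"),
                   ("superagi-runner", "superagi"),
                   ("camel-runner", "camel")]:
    waxell.tag(runner, fw)

print(waxell.scores)  # {'framework_coverage': 1.0}
```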

Run it

# Dry-run (no API key needed)
python -m app.demos.multi_agent_swarm_agent --dry-run

# Live mode with OpenAI
OPENAI_API_KEY=sk-... python -m app.demos.multi_agent_swarm_agent

Source

dev/waxell-dev/app/demos/multi_agent_swarm_agent.py