# Voice Platforms Agent

A comparison of two managed voice AI platforms, Vapi and Retell. Shows how Waxell Observe traces call lifecycle management, turn-by-turn conversation handling, and platform-specific voice agent configuration across two production voice AI services.
## Environment variables

This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`. Use `--dry-run` to skip real API calls.
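A small guard can fail fast when required variables are unset. This is a sketch, not code from the demo itself; the `missing_vars` helper is hypothetical:

```python
import os
import sys

REQUIRED = ("OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")


def missing_vars(env, dry_run: bool) -> list:
    """Return the required variables that are unset; dry-run mode needs none."""
    if dry_run:
        return []
    return [name for name in REQUIRED if not env.get(name)]


# Example: check before the demo starts any live calls
missing = missing_vars(os.environ, dry_run="--dry-run" in sys.argv)
if missing:
    print(f"Missing environment variables: {', '.join(missing)} (or pass --dry-run)")
```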
## Architecture

## Key Code
### Vapi Call Lifecycle

Three `@tool`-decorated functions exercise the full Vapi call lifecycle: start, send messages, stop.
```python
@waxell.tool(tool_type="voice")
def vapi_start_call(vapi, assistant_config: dict) -> dict:
    """Start a Vapi call with inline assistant configuration."""
    result = vapi.start(assistant=assistant_config)
    return {"call_id": result.id, "status": result.status}


@waxell.tool(tool_type="voice")
def vapi_send_message(vapi, message: dict) -> dict:
    """Send a message or tool-result through the Vapi call."""
    vapi.send(message=message)
    return {"message_type": message.get("type", "unknown"), "status": "delivered"}


@waxell.tool(tool_type="voice")
def vapi_stop_call(vapi) -> dict:
    """Stop the active Vapi call and get transcript + cost."""
    result = vapi.stop()
    return {
        "transcript_preview": result["transcript"][:100],
        "cost": result["cost"],
        "duration_seconds": result["duration"],
    }
```
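In dry-run mode, a stub can stand in for the real Vapi client. This sketch is illustrative only — the `DryRunVapi` class is hypothetical and not part of the Vapi SDK; it just mirrors the `start`/`send`/`stop` surface the tools above call:

```python
from types import SimpleNamespace


class DryRunVapi:
    """Hypothetical stand-in for the Vapi client, for --dry-run mode."""

    _active = False

    def start(self, assistant):
        # Pretend a call was queued; return an object with .id and .status
        self._active = True
        return SimpleNamespace(id="call_dryrun_001", status="queued")

    def send(self, message):
        # A real client would stream this into the live call
        assert self._active, "send() called before start()"

    def stop(self):
        # Return the dict shape vapi_stop_call expects
        self._active = False
        return {"transcript": "AI: Hello! How can I help?", "cost": 0.0, "duration": 42}


# Exercise the same lifecycle the tools above wrap
vapi = DryRunVapi()
started = vapi.start(assistant={"model": "gpt-4o", "voice": "alloy"})
vapi.send(message={"type": "add-message", "message": {"role": "user", "content": "hi"}})
ended = vapi.stop()
print(started.status, ended["cost"])  # → queued 0.0
```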
### Retell Agent and Call Management

The Retell tools exercise agent creation, call creation (phone and web), and call retrieval.
```python
@waxell.tool(tool_type="voice")
def retell_create_agent(retell, config: dict) -> dict:
    """Create a Retell agent from the given configuration."""
    result = retell.agent.create(**config)
    return {"agent_id": result.agent_id, "voice_id": result.voice_id}


@waxell.tool(tool_type="voice")
def retell_create_call(retell, agent_id: str, config: dict) -> dict:
    """Create a Retell call for an existing agent."""
    result = retell.call.create(agent_id=agent_id, **config)
    return {"call_id": result.call_id, "call_status": result.call_status}


@waxell.tool(tool_type="voice")
def retell_retrieve_call(retell, call_id: str) -> dict:
    """Retrieve a completed call's duration and end-to-end latency."""
    result = retell.call.retrieve(call_id=call_id)
    return {"duration_ms": result.duration_ms, "e2e_latency_ms": result.e2e_latency}
```
## What this demonstrates

- **Vapi instrumentor** -- `Vapi.start`, `Vapi.stop`, and `Vapi.send` traced with `tool_type="voice"`.
- **Retell instrumentor** -- `CallResource.create/retrieve/create_web_call/create_phone_call` and `AgentResource.create/retrieve` traced.
- **Full call lifecycle** -- Vapi: start, send user message, send tool result, stop with transcript and cost.
- **Platform comparison** -- `@step` compares metrics, `@decision` uses an LLM to recommend the best platform.
- **Cost and latency tracking** -- Vapi call cost and Retell end-to-end latency captured in tool results.
## Run it

```bash
# Dry-run mode (no API key needed)
cd dev/waxell-dev
python -m app.demos.voice_platform_agent --dry-run

# Live mode
export OPENAI_API_KEY="sk-..."
python -m app.demos.voice_platform_agent
```