Multi-Provider Agent
A single `waxell.init()` call automatically traces all of your LLM providers -- no manual instrumentation code needed.
Environment variables
This example requires OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL.
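Before running, it can help to verify these variables are set. The helper below is a hypothetical pre-flight check (not part of `waxell_observe`) that reports which required variables are missing or empty:

```python
import os

# Hypothetical helper (not part of waxell_observe): fail fast with a clear
# message when any required environment variable is missing or empty.
REQUIRED_VARS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GROQ_API_KEY",
    "WAXELL_API_KEY",
    "WAXELL_API_URL",
]

def missing_env_vars(env=os.environ):
    """Return the required variables that are absent or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: with only OPENAI_API_KEY set, the other four are reported.
print(missing_env_vars({"OPENAI_API_KEY": "sk-test"}))
```

Calling `missing_env_vars()` with no argument checks the real environment, so you can assert it returns an empty list before starting the agent.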
```python
import asyncio

import waxell_observe as waxell
from waxell_observe import generate_session_id

# waxell.init() auto-instruments all three providers -- zero manual code needed
waxell.init()

from openai import OpenAI
import anthropic
from groq import Groq

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()
groq_client = Groq()


@waxell.observe(agent_name="multi-provider")
async def multi_provider_analysis(topic: str) -> dict:
    session = generate_session_id()
    waxell.metadata("session_id", session)
    waxell.metadata("providers_used", ["openai", "anthropic", "groq"])
    waxell.tag("task", "multi-perspective-analysis")

    # OpenAI -- technical perspective
    waxell.tag("provider", "openai")
    openai_resp = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Technical perspective on: {topic}"}],
    )
    waxell.score("response_quality", 0.9, comment="openai technical")

    # Anthropic -- creative perspective
    waxell.tag("provider", "anthropic")
    anthropic_resp = anthropic_client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=300,
        messages=[{"role": "user", "content": f"Creative perspective on: {topic}"}],
    )
    waxell.score("response_quality", 0.85, comment="anthropic creative")

    # Groq -- fast inference summary
    waxell.tag("provider", "groq")
    groq_resp = groq_client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": f"Concise summary of: {topic}"}],
    )
    waxell.score("response_quality", 0.88, comment="groq summary")

    return {
        "technical": openai_resp.choices[0].message.content,
        "creative": anthropic_resp.content[0].text,
        "summary": groq_resp.choices[0].message.content,
    }


asyncio.run(multi_provider_analysis("The future of AI agents"))
```
What this demonstrates
- Zero-effort auto-instrumentation -- `waxell.init()` patches all three providers before they are imported; every LLM call is captured automatically.
- `@observe` decorator -- creates a named trace with automatic lifecycle management.
- Top-level convenience functions -- `waxell.tag()`, `waxell.score()`, and `waxell.metadata()` enrich the trace without needing a context object.
- Groq integration -- Groq uses the OpenAI-compatible SDK, so auto-instrumentation works out of the box.
- Unified observability -- one dashboard view across all providers with per-call model attribution.
- Session correlation -- `generate_session_id()` ties related runs together for analysis.
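The auto-instrumentation point above can be illustrated with a minimal sketch of the underlying monkey-patching pattern. This is NOT `waxell_observe`'s actual implementation; `TRACE_LOG`, `patch_method`, and `FakeClient` are illustrative stand-ins showing how an `init()`-style call can wrap a client method once so that every later call is recorded:

```python
import functools

# Illustrative sketch only -- NOT waxell_observe's real implementation.
TRACE_LOG = []  # stand-in for a real trace exporter

def patch_method(cls, method_name, provider):
    """Wrap cls.method_name so each call appends a span-like record."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        TRACE_LOG.append({"provider": provider, "method": method_name})
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

# Toy client standing in for a provider SDK class.
class FakeClient:
    def create(self, **kwargs):
        return {"model": kwargs.get("model")}

patch_method(FakeClient, "create", provider="fake")
FakeClient().create(model="demo")
print(TRACE_LOG)  # one record captured per call, no caller changes needed
```

Because the patch replaces the method on the class, every client instance created afterwards is traced transparently, which is why `waxell.init()` must run before the provider clients are used.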
Run it
```shell
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GROQ_API_KEY="gsk_..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"
python multi_provider.py
```