Anthropic Agent

Three patterns for integrating waxell-observe with Anthropic Claude, from simplest to most flexible.

Environment variables

This example requires ANTHROPIC_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL.
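
If any of these are unset, the SDK calls may fail later with less obvious errors. A short preflight check, using only the standard library, fails fast instead:

import os
import sys

REQUIRED = ("ANTHROPIC_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")
missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")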

Pattern 1: Auto-instrumentation

Call init() before importing the Anthropic SDK; every subsequent Anthropic call is then captured automatically, with no other code changes.

import waxell_observe as waxell

waxell.init()  # must come before importing anthropic

import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=300,
    messages=[{"role": "user", "content": "Explain quantum computing in one paragraph."}],
)
print(response.content[0].text)
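
The same pattern should carry over to the Anthropic SDK's async client, assuming waxell-observe's auto-instrumentation also patches AsyncAnthropic (worth confirming against the waxell-observe docs). A sketch of the async variant:

import asyncio
import waxell_observe as waxell

waxell.init()  # still before importing anthropic

import anthropic

client = anthropic.AsyncAnthropic()


async def main() -> None:
    response = await client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=300,
        messages=[{"role": "user", "content": "Explain quantum computing in one paragraph."}],
    )
    print(response.content[0].text)


asyncio.run(main())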

Pattern 2: @observe decorator

Wrap a function to create a named trace. Use top-level convenience functions to enrich the trace without touching the context object directly.

import asyncio
import waxell_observe as waxell

waxell.init()

import anthropic

client = anthropic.Anthropic()


@waxell.step_dec(name="classify")
def classify_text(text: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=100,
        messages=[{"role": "user", "content": f"Classify as news/opinion/technical/creative: {text}"}],
    )
    return response.content[0].text


@waxell.observe(agent_name="content-analyzer")
async def analyze_content(text: str) -> dict:
    waxell.tag("pipeline", "content-analysis")
    category = classify_text(text)
    waxell.tag("category", category)

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=200,
        messages=[{"role": "user", "content": f"Summarize in 2 sentences: {text}"}],
    )
    summary = response.content[0].text
    waxell.step("summarize", output={"length": len(summary)})
    waxell.score("completeness", 0.9, comment="auto-scored")
    return {"category": category, "summary": summary}


asyncio.run(analyze_content("Your text here..."))
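
Because analyze_content is a coroutine, several documents can be pushed through the pipeline in a single event loop, each invocation presumably producing its own trace (an assumption about the decorator's behavior worth verifying). Note that this example uses the blocking Anthropic client, so asyncio.gather will not actually overlap the API calls themselves:

async def main() -> None:
    texts = ["First article...", "Second article...", "Third article..."]
    results = await asyncio.gather(*(analyze_content(text) for text in texts))
    for result in results:
        print(result["category"], "->", result["summary"])


asyncio.run(main())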

Pattern 3: WaxellContext

Full control over the trace lifecycle, including session IDs, tags, and structured results.

import asyncio
import waxell_observe as waxell
from waxell_observe import WaxellContext

waxell.init()

import anthropic

client = anthropic.Anthropic()


async def analyze_content(text: str):
    async with WaxellContext(agent_name="content-analyzer") as ctx:
        classify_response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=100,
            messages=[{"role": "user", "content": f"Classify as news/opinion/technical/creative: {text}"}],
        )
        category = classify_response.content[0].text
        ctx.record_step("classify", output={"category": category})

        summary_response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=200,
            messages=[{"role": "user", "content": f"Summarize in 2 sentences: {text}"}],
        )
        summary = summary_response.content[0].text
        ctx.record_step("summarize", output={"summary": summary})

        ctx.set_tag("category", category)
        ctx.set_result({"category": category, "summary": summary})
        return {"category": category, "summary": summary}


asyncio.run(analyze_content("Your text here..."))
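
WaxellContext is also the place where session grouping would live. A minimal sketch of tying several traces to one session follows; the session_id parameter name is an assumption, so confirm the actual keyword against the WaxellContext signature in your installed version:

import uuid


async def analyze_session(texts: list[str]):
    session = str(uuid.uuid4())  # one ID shared by every trace in the session
    for text in texts:
        # session_id is assumed, not confirmed -- check WaxellContext's signature
        async with WaxellContext(agent_name="content-analyzer", session_id=session) as ctx:
            response = client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=200,
                messages=[{"role": "user", "content": f"Summarize in 2 sentences: {text}"}],
            )
            ctx.record_step("summarize", output={"summary": response.content[0].text})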

What this demonstrates

  • Auto-instrumentation -- zero-code capture of all Anthropic calls after waxell.init().
  • @observe decorator -- named agent traces with automatic lifecycle tracking.
  • @step_dec decorator -- wraps helper functions to record steps automatically.
  • Top-level convenience functions -- waxell.tag(), waxell.step(), waxell.score() enrich traces without needing the context object.
  • WaxellContext -- explicit trace lifecycle with async with for maximum control.
  • Multi-step pipeline -- sequential LLM calls within a single trace, each recorded as a distinct step.

Run it

export ANTHROPIC_API_KEY="sk-ant-..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"

python anthropic_agent.py