# Anthropic

A multi-agent content analysis pipeline using Anthropic's Claude models. The orchestrator dispatches three child agents sequentially: a classifier for content categorization, an entity extractor for structured entity identification, and a summarizer that assesses content complexity before generating a final summary. All child agents use `claude-sonnet-4-5` via auto-instrumented Anthropic `messages.create` calls.
This example requires `ANTHROPIC_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`. Use `--dry-run` to run without any API keys.
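A minimal sketch of how a dry-run client could avoid real API calls. The real `get_anthropic_client` lives in the demo code and is not shown here; the `_StubMessages` / `_StubAnthropicClient` classes, the `get_stub_client` helper, and the canned response text below are illustrative assumptions, shaped like an Anthropic `Message` (`.content` is a list of blocks, each with a `.text` attribute):

```python
import asyncio
from types import SimpleNamespace


class _StubMessages:
    """Illustrative stand-in for client.messages when --dry-run is set."""

    async def create(self, **kwargs):
        # Return a canned object shaped like an Anthropic Message:
        # .content is a list of blocks, each exposing .text.
        return SimpleNamespace(
            content=[SimpleNamespace(text="[dry-run] canned model output")]
        )


class _StubAnthropicClient:
    def __init__(self):
        self.messages = _StubMessages()


def get_stub_client():
    # Hypothetical counterpart of get_anthropic_client(dry_run=True).
    return _StubAnthropicClient()
```

Because the stub mirrors the response shape of the real client, the child agents can read `response.content[0].text` unchanged in either mode.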
## Architecture
## Key Code

### Three-stage content analysis pipeline
The orchestrator dispatches three child agents in sequence, passing results from earlier stages to later ones.
```python
@waxell.observe(agent_name="content-analyzer", workflow_name="content-analysis", capture_io=True)
async def run_agent(query: str, *, dry_run: bool = False, waxell_ctx=None) -> dict:
    waxell.tag("demo", "anthropic")
    waxell.tag("provider", "anthropic")
    waxell.metadata("pipeline", "classify -> extract -> summarize")

    client = get_anthropic_client(dry_run=dry_run)
    query_info = preprocess_query(query)
    depth = choose_analysis_depth(query_info)

    # Step 1: Classify
    class_result = await run_classifier(query, client, dry_run=dry_run)

    # Step 2: Extract entities
    entity_result = await run_entity_extractor(query, client, dry_run=dry_run)

    # Step 3: Summarize with complexity assessment
    summary_result = await run_summarizer(
        query, class_result["classification"], entity_result["entities"],
        client, dry_run=dry_run,
    )

    waxell.score("pipeline_quality", 0.87)
    return {
        "classification": class_result["classification"],
        "entities": entity_result["entities"],
        "summary": summary_result["summary"],
    }
```
### Anthropic Messages API with auto-instrumentation

Each child agent uses `client.messages.create`, which is auto-instrumented. The summarizer runs a complexity assessment before generation.
```python
@waxell.observe(agent_name="summarizer", workflow_name="content-summarization", capture_io=True)
async def run_summarizer(query: str, classification: str, entities: str, client,
                         *, dry_run=False, waxell_ctx=None) -> dict:
    waxell.tag("task", "summarization")

    # Assess complexity before summarizing
    complexity = assess_content_complexity(classification, entities)

    response = await client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                f"Provide a concise summary of: {query}\n\n"
                f"Classification: {classification}\nKey entities: {entities}"
            ),
        }],
    )
    summary = response.content[0].text
    waxell.score("summary_quality", 0.90)
    return {"summary": summary, "complexity": complexity}
```
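The demo's `assess_content_complexity` (the `@waxell.reasoning_dec` step) is not shown. A plausible heuristic sketch, assuming a comma-separated entity string and made-up thresholds, might look like this:

```python
def assess_content_complexity(classification: str, entities: str) -> str:
    """Hypothetical heuristic: grade complexity from the entity count.

    The comma-separated entity format, the thresholds, and the
    low/medium/high labels are all assumptions, not the demo's code.
    """
    entity_count = len([e for e in entities.split(",") if e.strip()])
    if entity_count <= 2:
        return "low"
    if entity_count <= 5:
        return "medium"
    return "high"
```

Whatever the real heuristic is, running it before the `messages.create` call means the decision is captured in the trace alongside the summary it influenced.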
## What this demonstrates
- `@waxell.observe` -- parent orchestrator with 3 child agents
- `@waxell.step_dec` -- query preprocessing
- `@waxell.decision` -- analysis depth selection (shallow/deep/comprehensive)
- `@waxell.reasoning_dec` -- content complexity assessment
- `waxell.tag()` -- task-specific tagging per child agent
- `waxell.score()` -- quality scores at each pipeline stage
- `waxell.metadata()` -- SDK and pipeline metadata
- Auto-instrumented Anthropic calls -- `messages.create` traced automatically
- Three-stage pipeline -- classify, extract entities, summarize
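The `@waxell.decision` depth selector (`choose_analysis_depth`) is also not shown in the excerpt. A minimal sketch, assuming `preprocess_query` returns a dict with a `word_count` key and using invented thresholds:

```python
def choose_analysis_depth(query_info: dict) -> str:
    """Hypothetical depth selector returning the three depths named
    in the demo (shallow/deep/comprehensive). The query_info shape
    ({"word_count": ...}) and the cutoffs are assumptions."""
    words = query_info.get("word_count", 0)
    if words < 20:
        return "shallow"
    if words < 100:
        return "deep"
    return "comprehensive"
```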
## Run it

```shell
cd dev/waxell-dev
python -m app.demos.anthropic_agent --dry-run
```