# Enrichment Agent

A showcase of every waxell-observe enrichment feature: `@observe` for agent scoping, `@step` for pipeline stages, `score()` with all data types, `tag()` for categorization, and `metadata()` for structured context. Two child agents handle LLM analysis and quality scoring.
## Environment variables

This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`. Use `--dry-run` to skip real API calls.
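The `--dry-run` gate might be wired up with `argparse` along these lines. This is a hypothetical sketch, not the demo's actual CLI code; only the flag name and the three environment variables come from this README, and `require_keys` is an invented helper name.

```python
import argparse
import os

def parse_args(argv=None):
    # --dry-run skips real OpenAI/Waxell calls, so no API keys are needed
    parser = argparse.ArgumentParser(prog="enrichment_agent")
    parser.add_argument("--dry-run", action="store_true",
                        help="Skip real API calls and emit canned responses")
    return parser.parse_args(argv)

def require_keys(dry_run: bool) -> list[str]:
    # Live mode needs all three variables; dry-run mode needs none.
    if dry_run:
        return []
    required = ["OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL"]
    return [name for name in required if not os.environ.get(name)]
```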
## Architecture

## Key Code
### Score Data Types

The evaluator demonstrates all `score()` data types: numeric, boolean, and categorical.
```python
@waxell.observe(agent_name="enrichment-evaluator", workflow_name="enrichment-scoring")
async def run_enrichment_evaluation(analysis_tokens: int, assessment_tokens: int, waxell_ctx=None):
    waxell.tag("agent_role", "evaluator")
    waxell.score("quality", 0.92)
    waxell.score("relevance", 0.85)
    waxell.score("helpfulness", 0.88)
    waxell.score("safety", True, data_type="boolean")
    waxell.score("category", "informational", data_type="categorical")
    waxell.score("confidence", 0.78, comment="Based on source availability")
```
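Each `data_type` implies a different expected value shape. As an illustration only (this is not the waxell-observe implementation, just a stand-in for the same semantics), a minimal validator for score payloads might look like:

```python
def validate_score(name, value, data_type="numeric", comment=None):
    """Hypothetical stand-in showing the value shape each data_type expects."""
    if data_type == "numeric":
        # bool is a subclass of int, so reject it explicitly
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            raise TypeError(f"{name}: numeric score needs int/float")
    elif data_type == "boolean":
        if not isinstance(value, bool):
            raise TypeError(f"{name}: boolean score needs bool")
    elif data_type == "categorical":
        if not isinstance(value, str):
            raise TypeError(f"{name}: categorical score needs str")
    else:
        raise ValueError(f"unknown data_type: {data_type}")
    return {"name": name, "value": value, "data_type": data_type, "comment": comment}
```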
### Tags and Metadata on the Orchestrator

The parent agent attaches categorization tags and structured metadata.
```python
@waxell.observe(agent_name="enrichment-orchestrator", workflow_name="enrichment-showcase")
async def run_enrichment_pipeline(query: str, dry_run: bool = False, waxell_ctx=None):
    waxell.tag("demo", "enrichment")
    waxell.tag("intent", "question")
    waxell.tag("domain", "technology")
    waxell.tag("priority", "high")
    waxell.metadata("user_tier", "premium")
    waxell.metadata("request_source", "api")
    waxell.metadata("model_config", {"temperature": 0.7, "max_tokens": 500})
    waxell.metadata("pipeline_version", "2.1.0")
```
## What this demonstrates

- All enrichment primitives -- `tag()`, `metadata()`, and `score()` with numeric, boolean, and categorical data types.
- Multi-agent enrichment flow -- orchestrator sets context, runner does LLM analysis, evaluator records quality scores.
- `@step` for pipeline stages -- `analyze_input`, `record_scores`, and `final_enrichment` each recorded as discrete spans.
- Auto-instrumented LLM calls -- two OpenAI calls (analysis + assessment) captured without extra code.
- Structured metadata -- nested objects like `model_config` serialized via `_safe_attr`.
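Nested values like `model_config` cannot be stored directly as flat span attributes, which typically allow only primitives. Sketched purely from the description above (the real `_safe_attr` signature and behavior may differ), a helper with that role could JSON-encode anything non-primitive:

```python
import json

def safe_attr(value):
    """Coerce a metadata value into a flat, attribute-friendly form.

    Hypothetical sketch of a _safe_attr-style helper: primitives pass
    through unchanged; dicts/lists are JSON-encoded strings."""
    if value is None or isinstance(value, (str, bool, int, float)):
        return value
    try:
        return json.dumps(value, default=str)
    except (TypeError, ValueError):
        return str(value)
```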
## Run it

```bash
# Dry-run mode (no API keys needed)
cd dev/waxell-dev
python -m app.demos.enrichment_agent --dry-run

# Live mode (all three variables from "Environment variables" are required)
export OPENAI_API_KEY="sk-..."
export WAXELL_API_KEY="..."
export WAXELL_API_URL="..."
python -m app.demos.enrichment_agent
```