Gemini
A multi-agent topic analysis pipeline using Google's Gemini API. A parent orchestrator preprocesses the query, makes a decision about analysis focus (themes vs trends vs implications), dispatches two child agents -- a topic analyzer and an insight synthesizer -- both backed by gemini-2.0-flash, and records scores, tags, and metadata throughout.
This example requires the GOOGLE_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL environment variables. Use --dry-run to run without any API keys.
Architecture
Key Code
Parent orchestrator with child dispatch
The orchestrator preprocesses the query, decides the analysis focus, then calls two child agents sequentially. Each child auto-links to the parent via WaxellContext lineage.
@waxell.observe(agent_name="gemini-orchestrator", workflow_name="gemini-pipeline", capture_io=True)
async def run_agent(query: str, *, dry_run: bool = False, waxell_ctx=None) -> dict:
    waxell.tag("demo", "gemini")
    waxell.tag("provider", "google_gemini")
    waxell.metadata("sdk", "google-generativeai")

    model = get_gemini_model(dry_run=dry_run)

    # Preprocess
    query_info = preprocess_query(query)

    # Decide focus
    focus = choose_analysis_focus(query_info)

    # Step 1: Analyze
    analysis_result = await run_topic_analyzer(query, model, dry_run=dry_run)

    # Step 2: Synthesize
    synthesis_result = await run_insight_synthesizer(
        analysis_result["analysis"], model, dry_run=dry_run,
    )

    waxell.score("pipeline_quality", 0.87, comment="Overall Gemini pipeline quality")
    return {
        "analysis": analysis_result["analysis"][:200],
        "insights": synthesis_result["insights"][:200],
        "focus": focus["chosen"],
    }
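The preprocessing helper is not shown above; a minimal sketch of what it might look like, inferred from the fields the orchestrator and the focus decision read from its result (the field names `cleaned_query` and `word_count`, and the body itself, are assumptions):

```python
def preprocess_query(query: str) -> dict:
    """Normalize the raw query before the focus decision.

    In the demo this step would carry the @waxell.step_dec decorator so it
    is recorded in the trace; the decorator is omitted in this sketch.
    """
    # Collapse runs of whitespace and trim the ends
    cleaned = " ".join(query.split()).strip()
    return {
        "cleaned_query": cleaned,
        "word_count": len(cleaned.split()),
    }
```

The only hard requirement is that the result expose `cleaned_query`, since `choose_analysis_focus` reads it via `query_info.get("cleaned_query", "")`.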
Decision and reasoning decorators
The @decision decorator records a structured choice with options, reasoning, and confidence. The @reasoning_dec decorator captures a multi-factor quality evaluation.
@waxell.decision(name="choose_analysis_focus", options=["themes", "trends", "implications", "all"])
def choose_analysis_focus(query_info: dict) -> dict:
    query = query_info.get("cleaned_query", "").lower()
    if "trend" in query or "future" in query or "transform" in query:
        chosen = "trends"
        reasoning = "Query focuses on change/evolution -- prioritize trend analysis"
    elif "impact" in query or "effect" in query:
        chosen = "implications"
        reasoning = "Query asks about consequences -- focus on implications"
    else:
        chosen = "themes"
        reasoning = "General query -- identify key themes first"
    return {"chosen": chosen, "reasoning": reasoning, "confidence": 0.82}
@waxell.reasoning_dec(step="evaluate_analysis_depth")
def evaluate_analysis_depth(analysis: str) -> dict:
    word_count = len(analysis.split())
    has_specifics = any(w in analysis.lower() for w in ["specifically", "for example", "such as"])
    has_nuance = any(w in analysis.lower() for w in ["however", "although", "conversely"])
    depth_score = 0.5
    if word_count > 100:
        depth_score += 0.2
    if has_specifics:
        depth_score += 0.15
    if has_nuance:
        depth_score += 0.15
    return {"depth_score": round(min(depth_score, 1.0), 2), "reasoning": f"Analysis has {word_count} words"}
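Stripped of the decorators, both helpers are plain functions, which makes their branching and scoring easy to check directly. A sketch (decorators and the waxell SDK omitted; function names here are illustrative, not the demo's):

```python
def choose_focus(cleaned_query: str) -> str:
    # Same branching as choose_analysis_focus, minus the decorator
    q = cleaned_query.lower()
    if "trend" in q or "future" in q or "transform" in q:
        return "trends"
    if "impact" in q or "effect" in q:
        return "implications"
    return "themes"

def depth_score(analysis: str) -> float:
    # Same scoring as evaluate_analysis_depth, minus the decorator
    score = 0.5
    if len(analysis.split()) > 100:
        score += 0.2  # long enough to be substantive
    if any(w in analysis.lower() for w in ("specifically", "for example", "such as")):
        score += 0.15  # contains concrete specifics
    if any(w in analysis.lower() for w in ("however", "although", "conversely")):
        score += 0.15  # acknowledges counterpoints
    return round(min(score, 1.0), 2)
```

For example, a query mentioning "future" routes to "trends", one mentioning "impact" routes to "implications", and anything else falls through to "themes"; a short analysis with no specifics or nuance scores the 0.5 baseline.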
What this demonstrates
- @waxell.observe -- parent-child agent hierarchy (orchestrator + 2 child agents)
- @waxell.step_dec -- preprocessing step recorded in the trace
- @waxell.decision -- structured choice with options, reasoning, and confidence
- @waxell.reasoning_dec -- multi-factor quality evaluation
- waxell.tag() -- provider and demo tagging for filtering
- waxell.score() -- numeric quality scores at each stage
- waxell.metadata() -- model and SDK metadata for attribution
- Auto-instrumented Gemini LLM calls -- generate_content_async traced automatically
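The --dry-run path presumably swaps the real Gemini model for a stub so the pipeline runs with no API keys. One way such a stub could be written (the class names here are assumptions; only the `generate_content_async` method and its `.text` response attribute mirror the google-generativeai async API):

```python
import asyncio

class _CannedResponse:
    """Mimics the .text attribute of a Gemini response object."""
    def __init__(self, text: str):
        self.text = text

class DryRunModel:
    """Stands in for a genai.GenerativeModel when no API key is set."""
    async def generate_content_async(self, prompt: str) -> _CannedResponse:
        # Return a deterministic canned answer instead of calling the API
        return _CannedResponse(f"[dry-run] canned analysis for: {prompt[:60]}")

# Usage: awaitable exactly like the real model
resp = asyncio.run(DryRunModel().generate_content_async("How will AI transform healthcare?"))
print(resp.text)
```

Because the stub has the same call shape as the real model, the orchestrator and both child agents can be handed either object without branching on `dry_run` at every call site.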
Run it
cd dev/waxell-dev
python -m app.demos.gemini_agent --dry-run