# LangChain
A LangChain-powered multi-agent analysis pipeline with a parent orchestrator coordinating two child agents -- a runner and an evaluator -- using chain invocations (`prompt | llm | parser`), context retrieval, and decorator-based observability. The runner executes a LangChain analysis chain with prompt formatting and output parsing, while the evaluator generates a summary chain and scores quality.
This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`. Use `--dry-run` to run without any API keys (uses `FakeListChatModel`).
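In dry-run mode the pipeline swaps the real model for LangChain's `FakeListChatModel`, which replays a fixed list of canned responses. A minimal pure-Python stand-in sketches the behavior (the `CannedChatModel` name is hypothetical, not the LangChain class itself):

```python
from itertools import cycle

class CannedChatModel:
    """Toy stand-in for FakeListChatModel: replays canned responses in order."""

    def __init__(self, responses: list):
        self._responses = cycle(responses)

    def invoke(self, prompt: str) -> str:
        # The prompt is ignored; the next canned response is returned.
        return next(self._responses)

llm = CannedChatModel(["Mock analysis.", "Mock summary."])
print(llm.invoke("Analyze X"))  # Mock analysis.
print(llm.invoke("Summarize"))  # Mock summary.
print(llm.invoke("Again"))      # Mock analysis. (wraps around)
```

Because every call is deterministic and free, the full pipeline -- chains, retrieval, decisions, scores -- can be exercised without any API keys.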
## Architecture
## Key Code
### Orchestrator with `@observe`, `@decision`, and manual `decide()`
The parent agent wraps the full pipeline. Each nested `@observe` call creates a child run with automatic parent-child lineage.
```python
@waxell.observe(agent_name="langchain-orchestrator", workflow_name="langchain-pipeline")
async def run_pipeline(query: str, dry_run: bool = False, waxell_ctx=None):
    waxell.tag("demo", "langchain")
    waxell.metadata("framework", "langchain")
    waxell.metadata("num_agents", 3)

    # @step -- preprocess the query
    preprocessed = await preprocess_query(query)

    # @decision -- classify the topic via OpenAI
    classification = await classify_topic(query=query, openai_client=openai_client)

    # Manual decide() -- routing decision
    waxell.decide(
        "analysis_strategy",
        chosen=strategy,
        options=["deep_analysis", "balanced_analysis", "framework_analysis"],
        reasoning=f"Topic classified as '{chosen}' -- {strategy} optimal",
        confidence=0.87,
    )

    # @retrieval -- retrieve context documents
    context_docs = retrieve_context(query=query, documents=_MOCK_CONTEXT_DOCS)

    # Child agents auto-link via WaxellContext
    analysis_result = await run_langchain_analysis(query=query, context_docs=context_docs)
    eval_result = await run_langchain_evaluation(query=query, analysis=analysis_result["analysis"])
```
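The automatic parent-child lineage can be pictured with a context variable holding the current run: each nested call records the enclosing run as its parent. A simplified sketch under that assumption (the real `WaxellContext` is assumed to do considerably more):

```python
import asyncio
import contextvars
import functools
import itertools

_current_run = contextvars.ContextVar("current_run", default=None)
_ids = itertools.count(1)
runs = []  # captured run records, in creation order

def observe(agent_name):
    """Toy version of @waxell.observe: links each run to the enclosing one."""
    def wrap(fn):
        @functools.wraps(fn)
        async def inner(*args, **kwargs):
            run = {"id": next(_ids), "agent": agent_name, "parent": _current_run.get()}
            runs.append(run)
            token = _current_run.set(run["id"])
            try:
                return await fn(*args, **kwargs)
            finally:
                _current_run.reset(token)
        return inner
    return wrap

@observe("runner")
async def child():
    return "ok"

@observe("orchestrator")
async def parent():
    return await child()

asyncio.run(parent())
# runs[0] is the orchestrator (parent=None); runs[1] is the runner,
# whose parent is the orchestrator's run id.
```

Because `contextvars` is async-aware, the lineage stays correct even when several child agents run concurrently.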
### Runner with LangChain chain invocation and `@tool` decorators
The runner child agent uses `@tool` to record prompt formatting and output parsing, then executes a LangChain `prompt | llm | parser` chain.
```python
@waxell.tool(tool_type="prompt_formatter")
def format_prompt(template: str, variables: dict) -> str:
    """Format a prompt template with variables."""
    result = template
    for key, value in variables.items():
        result = result.replace(f"{{{key}}}", str(value))
    return result

@waxell.tool(tool_type="output_parser")
def parse_output(raw_output: str, max_length: int = 500) -> dict:
    """Parse and truncate LangChain output."""
    cleaned = raw_output.strip()
    return {"content": cleaned[:max_length], "length": len(cleaned)}

# LangChain chain invocation (auto-instrumented LLM call)
analysis_chain = analysis_prompt | llm | StrOutputParser()
analysis = analysis_chain.invoke({"topic": query, "context": context_text})
```
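The `prompt | llm | parser` composition works because LCEL runnables overload the `|` operator to chain left-to-right. A pared-down illustration of that pattern using plain callables (the `Step` class and lambdas are assumptions for illustration, not LangChain classes):

```python
class Step:
    """Minimal runnable: wraps a callable and supports `|` chaining."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # Compose left-to-right: the output of self feeds other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda vars: f"Analyze {vars['topic']} using {vars['context']}")
llm = Step(lambda text: f"LLM({text})")
parser = Step(lambda out: out.strip())

chain = prompt | llm | parser
result = chain.invoke({"topic": "AI", "context": "docs"})
# result == "LLM(Analyze AI using docs)"
```

In the real demo, the LangChain `llm` stage is where auto-instrumentation captures the model call; the surrounding stages are just deterministic transforms.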
### Evaluator with `@reasoning`, `@decision`, and `score()`
The evaluator child agent assesses quality, selects output format, and attaches scores.
```python
@waxell.reasoning_dec(step="quality_evaluation")
async def evaluate_quality(analysis: str, summary: str, context_docs: list) -> dict:
    return {
        "thought": f"Analysis references {analysis_refs}/{len(context_docs)} docs.",
        "evidence": [f"Source: {t}" for t in doc_titles],
        "conclusion": "Both outputs adequately grounded",
    }

@waxell.decision(name="output_format", options=["brief", "detailed", "bullet_points"])
def choose_output_format(num_docs: int, context: str) -> dict:
    return {"chosen": "detailed", "reasoning": "...", "confidence": 0.85}

waxell.score("analysis_quality", 0.88, comment="context coverage")
waxell.score("summary_coherence", 0.91, comment="analysis-summary alignment")
```
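A decision record like the one `choose_output_format` returns can be sanity-checked before it is emitted: the chosen option must be one of the declared options and the confidence must lie in [0, 1]. A hypothetical validator sketching that check (`validate_decision` is not part of the waxell SDK):

```python
def validate_decision(decision: dict, options: list) -> dict:
    """Check a {chosen, reasoning, confidence} dict against the declared options."""
    if decision["chosen"] not in options:
        raise ValueError(f"chosen {decision['chosen']!r} not in {options}")
    if not 0.0 <= decision["confidence"] <= 1.0:
        raise ValueError("confidence must be within [0, 1]")
    return decision

d = validate_decision(
    {"chosen": "detailed", "reasoning": "...", "confidence": 0.85},
    options=["brief", "detailed", "bullet_points"],
)
```

Validating at the boundary keeps malformed decisions out of the trace, where they would be much harder to debug.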
## What this demonstrates
- `@waxell.observe` -- parent-child agent hierarchy with automatic lineage via `WaxellContext`
- `@waxell.step_dec` -- query preprocessing recorded as an execution step
- `@waxell.tool` -- prompt formatting and output parsing recorded with custom `tool_type`
- `@waxell.retrieval` -- context document retrieval recorded with `source="langchain_context"`
- `@waxell.decision` -- topic classification and output format selection
- `waxell.decide()` -- manual analysis strategy decision with options and confidence
- `@waxell.reasoning_dec` -- chain-of-thought quality evaluation
- `waxell.score()` -- analysis quality and summary coherence scores
- `waxell.tag()` / `waxell.metadata()` -- framework, agent role, and pipeline metadata
- Auto-instrumented LLM calls -- two LangChain chain invocations captured automatically
- LangChain LCEL pattern -- `prompt | llm | StrOutputParser()` chain with auto-captured LLM calls
## Run it
```bash
# Dry-run (no API keys needed, uses FakeListChatModel)
cd dev/waxell-dev
python -m app.demos.langchain_agent --dry-run

# Live (real OpenAI via LangChain)
export OPENAI_API_KEY="sk-..."
python -m app.demos.langchain_agent

# Custom query
python -m app.demos.langchain_agent --query "Analyze the future of quantum computing"
```