# LangChain Agent

This example uses `WaxellLangChainHandler` to capture LLM calls made within LangChain chains.
## Additional dependencies

```shell
pip install "waxell-observe[langchain]" langchain-openai
```
## Environment variables

This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL` to be set.
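Before making any API calls, it can help to fail fast when one of these variables is unset. A minimal sketch (the `check_env` helper is illustrative, not part of waxell-observe):

```python
import os

# The variables this example depends on
REQUIRED_VARS = ("OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")

def check_env(env=os.environ):
    """Raise early with a clear message if any required variable is missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

Calling `check_env()` at the top of the script turns a confusing mid-run authentication failure into an immediate, explicit error.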
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from waxell_observe.integrations.langchain import WaxellLangChainHandler

handler = WaxellLangChainHandler(agent_name="langchain-qa")
llm = ChatOpenAI(model="gpt-4o")

# Build prompts
analyze_prompt = ChatPromptTemplate.from_template(
    "Analyze the key themes in: {topic}"
)
summarize_prompt = ChatPromptTemplate.from_template(
    "Summarize this analysis in 2 sentences: {analysis}"
)

# Step 1: Analyze
analysis = (analyze_prompt | llm).invoke(
    {"topic": "The impact of large language models on software engineering"},
    config={"callbacks": [handler]},
)

# Step 2: Summarize
summary = (summarize_prompt | llm).invoke(
    {"analysis": analysis.content},
    config={"callbacks": [handler]},
)

# Flush telemetry
handler.flush_sync(result={"summary": summary.content})
print(f"Run ID: {handler.run_id}")
```
## What this demonstrates

- **LangChain callback handler** -- `WaxellLangChainHandler` plugs into LangChain's callback system to capture every LLM call.
- **Chain composition** -- prompt-template-to-LLM chains connected via `|` (LCEL) are traced automatically.
- **Automatic LLM call capture** -- input prompts, model parameters, and completions are recorded without extra code.
- **`flush_sync`** -- ensures all telemetry is sent before the process exits, and attaches a final result to the trace.
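The callback pattern the handler relies on can be sketched in plain Python. Everything below (`RecordingHandler`, `FakeLLM`, the hook names) is a simplified stand-in for illustration, not the LangChain or waxell-observe API:

```python
class RecordingHandler:
    """Hypothetical handler: records an event for each LLM start/end hook."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_end(self, completion):
        self.events.append(("end", completion))


class FakeLLM:
    """Toy model that notifies registered callbacks around each call."""

    def invoke(self, prompt, callbacks=()):
        for cb in callbacks:
            cb.on_llm_start(prompt)
        completion = f"echo: {prompt}"  # stand-in for a real completion
        for cb in callbacks:
            cb.on_llm_end(completion)
        return completion


handler = RecordingHandler()
llm = FakeLLM()
llm.invoke("Analyze the key themes in: LLMs", callbacks=[handler])
llm.invoke("Summarize this analysis in 2 sentences", callbacks=[handler])
# handler.events now holds a start/end pair per call
```

Because the handler is invoked at well-defined hook points rather than wrapping the model itself, the same handler instance can observe every step of a chain without any change to the chain's code, which is why passing `callbacks=[handler]` once per `invoke` is sufficient in the example above.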
## Run it

```shell
export OPENAI_API_KEY="sk-..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"
python langchain_agent.py
```