# LangChain Integration

The `WaxellLangChainHandler` is a LangChain callback handler that automatically captures LLM calls, chain executions, and tool usage, sending everything to the Waxell control plane for tracking and governance.
## Installation

Install with the LangChain extra:

```bash
pip install "waxell-observe[langchain]"
```

This ensures `langchain-core` is available.
## Basic Usage

```python
from waxell_observe.integrations.langchain import WaxellLangChainHandler

# Create the handler
handler = WaxellLangChainHandler(agent_name="my-langchain-agent")

# Pass it as a callback to any LangChain component
result = chain.invoke(input_data, config={"callbacks": [handler]})

# Flush buffered telemetry when done
handler.flush_sync(result={"output": result})
```
## What Gets Captured Automatically

The handler intercepts six LangChain callback events:

| Callback | What is recorded |
|---|---|
| `on_llm_start` | Model name, prompt preview (first 500 chars) |
| `on_llm_end` | Token counts (prompt + completion), cost estimate, response preview (first 500 chars) |
| `on_chain_start` | Chain name as an execution step |
| `on_chain_end` | Chain output attached to the step |
| `on_tool_start` | Tool name as a step (prefixed with `tool:`) |
| `on_tool_end` | Tool output (first 1000 chars) |

Cost is automatically estimated using built-in pricing data for 20+ models. See Cost Management for details.
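The buffer-then-flush pattern behind those events can be sketched with a toy stand-in. This is an illustration only, not the `WaxellLangChainHandler` implementation; the class and attribute names below are invented for the example, and only two of the six callbacks are shown.

```python
# Toy illustration of the buffer-then-flush pattern described above.
# NOT the real WaxellLangChainHandler: names and bodies are simplified
# stand-ins so the pattern is runnable without the library.

class ToyBufferingHandler:
    PROMPT_PREVIEW = 500   # chars kept from each prompt (per the table above)

    def __init__(self, agent_name: str):
        self.agent_name = agent_name
        self._llm_calls = []   # buffered in memory until flush
        self._steps = []

    def on_llm_start(self, model: str, prompt: str) -> None:
        # Record the model and a truncated prompt preview
        self._llm_calls.append(
            {"model": model, "prompt": prompt[: self.PROMPT_PREVIEW]}
        )

    def on_tool_start(self, tool_name: str) -> None:
        # Tool steps are recorded with a "tool:" prefix
        self._steps.append({"name": f"tool:{tool_name}"})

    def flush_sync(self, result=None):
        # Package everything buffered so far, then clear the buffers
        payload = {
            "agent": self.agent_name,
            "llm_calls": self._llm_calls,
            "steps": self._steps,
            "result": result,
        }
        self._llm_calls, self._steps = [], []
        return payload

handler = ToyBufferingHandler(agent_name="demo")
handler.on_llm_start("gpt-4o", "x" * 600)   # prompt truncated to 500 chars
handler.on_tool_start("search")
payload = handler.flush_sync(result={"output": "ok"})
print(len(payload["llm_calls"][0]["prompt"]))   # 500
print(payload["steps"][0]["name"])              # tool:search
```

Nothing leaves the process until `flush_sync` is called, which is why forgetting to flush (see Flushing below) loses the buffered data.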
## Full Working Example

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

from waxell_observe import WaxellObserveClient
from waxell_observe.integrations.langchain import WaxellLangChainHandler

# Configure once at startup
WaxellObserveClient.configure(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)

# Build a LangChain chain
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in 2 sentences:\n\n{text}"
)
chain = prompt | llm | StrOutputParser()

# Create the Waxell handler
handler = WaxellLangChainHandler(
    agent_name="summarizer",
    workflow_name="summarize-text",
)

# Run the chain with the handler
text = "Waxell is a Python framework for building governed AI agents..."
result = chain.invoke(
    {"text": text},
    config={"callbacks": [handler]},
)

# Flush all buffered data to the control plane
handler.flush_sync(result={"summary": result})

print(f"Run ID: {handler.run_id}")
print(f"Summary: {result}")
```
## Parameters

The `WaxellLangChainHandler` factory function accepts:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `agent_name` | `str` | (required) | Name for this agent in the Waxell control plane |
| `workflow_name` | `str` | `"default"` | Workflow name for grouping runs |
| `client` | `WaxellObserveClient \| None` | `None` | Pre-configured client instance. If `None`, a new one is created using the current configuration |
| `enforce_policy` | `bool` | `True` | Check policies before execution starts |
| `auto_start_run` | `bool` | `True` | Automatically start a run on the first callback event |
## Policy Enforcement

When `enforce_policy=True` (the default), the handler checks policies before the first LLM call. If the policy result is block or throttle, a `PolicyViolationError` is raised:

```python
from waxell_observe.errors import PolicyViolationError
from waxell_observe.integrations.langchain import WaxellLangChainHandler

handler = WaxellLangChainHandler(
    agent_name="my-agent",
    enforce_policy=True,
)

try:
    result = chain.invoke(input_data, config={"callbacks": [handler]})
    handler.flush_sync(result={"output": result})
except PolicyViolationError as e:
    print(f"Blocked by policy: {e}")
    print(f"Policy result: {e.policy_result}")
```

To disable policy checks (for example, in development):

```python
handler = WaxellLangChainHandler(
    agent_name="my-agent",
    enforce_policy=False,
)
```
## Flushing

The handler buffers all LLM calls and steps in memory during execution. You must call `flush()` or `flush_sync()` when execution completes to send the data to the control plane and close the run.

### Synchronous flush

```python
handler.flush_sync(result={"output": "the result"})
```

### Async flush

```python
await handler.flush(result={"output": "the result"})
```

### Flush parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `result` | `dict \| None` | `None` | Result data to include in the completed run |
| `status` | `str` | `"success"` | Run status: `"success"` or `"error"` |
| `error` | `str` | `""` | Error message if the run failed |

If you forget to flush, the run remains open on the control plane and no LLM call or step data is recorded.
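One way to honor that requirement is to flush on both the success and the failure path, using the `status` and `error` parameters from the table above. In the sketch below, `FakeHandler` and `run_and_flush` are hypothetical stand-ins (not part of waxell-observe) so the pattern is runnable without a control plane.

```python
# Sketch of a flush discipline that never leaves a run open: flush with
# status="error" on failure, status="success" otherwise. FakeHandler is
# a stand-in that just records what was flushed.

class FakeHandler:
    def __init__(self):
        self.flushed = None

    def flush_sync(self, result=None, status="success", error=""):
        self.flushed = {"result": result, "status": status, "error": error}

def run_and_flush(handler, fn, payload):
    try:
        out = fn(payload)
    except Exception as exc:
        # Close the run with an error status, then re-raise
        handler.flush_sync(result=None, status="error", error=str(exc))
        raise
    handler.flush_sync(result={"output": out}, status="success")
    return out

def failing_chain(_payload):
    raise ValueError("boom")

handler = FakeHandler()
try:
    run_and_flush(handler, failing_chain, {})
except ValueError:
    pass
print(handler.flushed["status"])  # error
print(handler.flushed["error"])   # boom
```

With the real handler you would pass `chain.invoke` (wrapped with the callback config) in place of `failing_chain`.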
## Properties

| Property | Type | Description |
|---|---|---|
| `run_id` | `str` | The run ID from the control plane, or an empty string if no run has started |
## Run Lifecycle

1. The first callback event triggers `_ensure_run_started()`, which starts a run on the control plane (if `auto_start_run=True`)
2. During execution, LLM calls, chain steps, and tool steps are buffered in memory
3. On flush, buffered data is sent to the control plane and the run is completed

If `auto_start_run=False`, no run is started automatically. You would need to manage the run lifecycle manually via the client.
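A manual lifecycle might look like the sketch below. Note that `start_run` and `complete_run` are hypothetical method names used purely for illustration (backed here by a stub class); consult the `WaxellObserveClient` API for the actual calls.

```python
# HYPOTHETICAL sketch of managing the run lifecycle yourself when
# auto_start_run=False. StubClient and its method names are invented
# for this example; they are not the real WaxellObserveClient API.

class StubClient:
    def __init__(self):
        self._next_id = 0
        self.open_runs = set()

    def start_run(self, agent_name, workflow_name="default"):
        # Open a run and hand back its ID
        self._next_id += 1
        run_id = f"run-{self._next_id}"
        self.open_runs.add(run_id)
        return run_id

    def complete_run(self, run_id, status="success"):
        # Close the run so it does not stay open on the control plane
        self.open_runs.discard(run_id)
        return status

client = StubClient()
run_id = client.start_run("my-agent")           # start explicitly...
# ... execute the chain with the handler here ...
client.complete_run(run_id, status="success")   # ...and close explicitly
print(run_id, len(client.open_runs))  # run-1 0
```

The point is the shape, not the names: with `auto_start_run=False`, both the start and the completion of the run become your responsibility.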
## Next Steps

- Decorator Pattern: alternative integration using `@waxell_agent`
- LLM Call Tracking: details on what data is captured
- Cost Management: how cost estimation works