LangChain Integration

The WaxellLangChainHandler is a LangChain callback handler that automatically captures LLM calls, chain executions, and tool usage, sending everything to the Waxell control plane for tracking and governance.

Installation

Install with the LangChain extra:

pip install waxell-observe[langchain]

This ensures langchain-core is available.

Basic Usage

from waxell_observe.integrations.langchain import WaxellLangChainHandler

# Create the handler
handler = WaxellLangChainHandler(agent_name="my-langchain-agent")

# Pass it as a callback to any LangChain component
result = chain.invoke(input_data, config={"callbacks": [handler]})

# Flush buffered telemetry when done
handler.flush_sync(result={"output": result})

What Gets Captured Automatically

The handler intercepts six LangChain callback events:

| Callback | What is recorded |
| --- | --- |
| on_llm_start | Model name, prompt preview (first 500 chars) |
| on_llm_end | Token counts (prompt + completion), cost estimate, response preview (first 500 chars) |
| on_chain_start | Chain name as an execution step |
| on_chain_end | Chain output attached to the step |
| on_tool_start | Tool name as a step (prefixed with tool:) |
| on_tool_end | Tool output (first 1000 chars) |
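To make the buffering and preview limits concrete, here is a minimal stand-alone sketch in plain Python. It is illustrative only, not the actual WaxellLangChainHandler: it mimics the documented behavior of buffering events in memory, truncating previews to 500 chars (prompts) or 1000 chars (tool output), and prefixing tool steps with tool:.

```python
# Illustrative sketch only -- not the real WaxellLangChainHandler.
class PreviewBufferingHandler:
    PROMPT_PREVIEW = 500   # documented prompt/response preview limit
    TOOL_PREVIEW = 1000    # documented tool-output preview limit

    def __init__(self):
        self.events = []   # buffered in memory until flush

    def on_llm_start(self, model, prompt):
        self.events.append(("llm_start", model, prompt[: self.PROMPT_PREVIEW]))

    def on_tool_start(self, tool_name):
        # Tool steps are recorded with a "tool:" prefix
        self.events.append(("step", f"tool:{tool_name}"))

    def on_tool_end(self, output):
        self.events.append(("tool_end", output[: self.TOOL_PREVIEW]))


handler = PreviewBufferingHandler()
handler.on_llm_start("gpt-4o", "x" * 2000)
handler.on_tool_start("search")
print(len(handler.events[0][2]))  # → 500
print(handler.events[1][1])       # → tool:search
```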

Cost is automatically estimated using built-in pricing data for 20+ models. See Cost Management for details.
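The pricing table itself ships inside waxell-observe, but the arithmetic behind a per-token cost estimate is simple. A hedged sketch (the model names and per-million-token prices below are example values, not Waxell's built-in pricing data):

```python
# Example prices in USD per 1M tokens -- illustrative, not Waxell's data.
PRICING = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
    "gpt-4o-mini": {"prompt": 0.15, "completion": 0.60},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated USD cost, or 0.0 for models without pricing data."""
    prices = PRICING.get(model)
    if prices is None:
        return 0.0
    return (
        prompt_tokens * prices["prompt"]
        + completion_tokens * prices["completion"]
    ) / 1_000_000

print(estimate_cost("gpt-4o", 1000, 500))  # → 0.0075
```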

Full Working Example

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from waxell_observe import WaxellObserveClient
from waxell_observe.integrations.langchain import WaxellLangChainHandler

# Configure once at startup
WaxellObserveClient.configure(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)

# Build a LangChain chain
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in 2 sentences:\n\n{text}"
)
chain = prompt | llm | StrOutputParser()

# Create the Waxell handler
handler = WaxellLangChainHandler(
    agent_name="summarizer",
    workflow_name="summarize-text",
)

# Run the chain with the handler
text = "Waxell is a Python framework for building governed AI agents..."
result = chain.invoke(
    {"text": text},
    config={"callbacks": [handler]},
)

# Flush all buffered data to the control plane
handler.flush_sync(result={"summary": result})

print(f"Run ID: {handler.run_id}")
print(f"Summary: {result}")

Parameters

The WaxellLangChainHandler factory function accepts:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| agent_name | str | (required) | Name for this agent in the Waxell control plane |
| workflow_name | str | "default" | Workflow name for grouping runs |
| client | WaxellObserveClient \| None | None | Pre-configured client instance. If None, creates a new one using current configuration |
| enforce_policy | bool | True | Check policies before execution starts |
| auto_start_run | bool | True | Automatically start a run on the first callback event |

Policy Enforcement

When enforce_policy=True (the default), the handler checks policies before the first LLM call. If the policy result is block or throttle, a PolicyViolationError is raised:

from waxell_observe.errors import PolicyViolationError
from waxell_observe.integrations.langchain import WaxellLangChainHandler

handler = WaxellLangChainHandler(
    agent_name="my-agent",
    enforce_policy=True,
)

try:
    result = chain.invoke(input_data, config={"callbacks": [handler]})
    handler.flush_sync(result={"output": result})
except PolicyViolationError as e:
    print(f"Blocked by policy: {e}")
    print(f"Policy result: {e.policy_result}")

To disable policy checks (for example, in development):

handler = WaxellLangChainHandler(
    agent_name="my-agent",
    enforce_policy=False,
)

Flushing

The handler buffers all LLM calls and steps in memory during execution. You must call flush() or flush_sync() when execution completes to send the data to the control plane and close the run.

Synchronous flush

handler.flush_sync(result={"output": "the result"})

Async flush

await handler.flush(result={"output": "the result"})

Flush parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| result | dict \| None | None | Result data to include in the completed run |
| status | str | "success" | Run status: "success" or "error" |
| error | str | "" | Error message if the run failed |
Warning: if you forget to flush, the run will remain open on the control plane and no LLM call or step data will be recorded.
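One way to guarantee the run is always closed is to flush on both the success and failure paths, using the status and error parameters above. A self-contained sketch of the pattern (a stub class stands in for the real handler so the snippet runs on its own; run_chain is a hypothetical placeholder for your chain invocation):

```python
# Pattern sketch: always close the run, reporting failures via status="error".
# StubHandler mimics only the documented flush_sync signature.
class StubHandler:
    def __init__(self):
        self.flushed = None

    def flush_sync(self, result=None, status="success", error=""):
        self.flushed = {"result": result, "status": status, "error": error}


def run_chain():
    raise RuntimeError("model timeout")  # simulate a failing chain.invoke


handler = StubHandler()
try:
    output = run_chain()
    handler.flush_sync(result={"output": output})
except RuntimeError as exc:
    # Close the run with an error status instead of leaving it open
    handler.flush_sync(status="error", error=str(exc))

print(handler.flushed["status"])  # → error
```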

Properties

| Property | Type | Description |
| --- | --- | --- |
| run_id | str | The run ID from the control plane, or an empty string if no run has started |

Run Lifecycle

  1. First callback triggers _ensure_run_started(), which starts a run on the control plane (if auto_start_run=True)
  2. During execution, LLM calls, chain steps, and tool steps are buffered in memory
  3. On flush, buffered data is sent to the control plane and the run is completed

If auto_start_run=False, no run is started automatically. You would need to manage the run lifecycle manually via the client.
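The three lifecycle steps above can be sketched in plain Python. This is a conceptual simulation, not the real handler: the run ID is a stand-in for what the control plane would return, and the internal method names are only what the docs describe.

```python
# Lifecycle sketch only -- not the real handler. Shows lazy run start on
# the first event, in-memory buffering, and completion on flush.
class LifecycleSketch:
    def __init__(self, auto_start_run=True):
        self.auto_start_run = auto_start_run
        self.run_id = ""       # empty string until a run starts (matches run_id docs)
        self.buffer = []
        self.completed = False

    def _ensure_run_started(self):
        if self.auto_start_run and not self.run_id:
            self.run_id = "run-123"  # stand-in for a control-plane call

    def on_event(self, event):
        self._ensure_run_started()   # step 1: first callback starts the run
        self.buffer.append(event)    # step 2: buffer in memory

    def flush_sync(self, result=None):
        # step 3: send buffered data and complete the run
        sent, self.buffer = self.buffer, []
        self.completed = True
        return sent


h = LifecycleSketch()
h.on_event("llm_start")
print(h.run_id)        # → run-123
print(h.flush_sync())  # → ['llm_start']
```

With auto_start_run=False, _ensure_run_started would be a no-op here and run_id would stay empty, which is why the run must then be managed manually.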

Next Steps