# LangChain vs Waxell
This page compares three approaches to building the same agent: LangChain alone, LangChain enhanced with Waxell Observe, and a fully native Waxell implementation. The use case is a customer support classifier that categorizes incoming tickets and generates responses.
## A) LangChain Alone
A standard LangChain agent using ChatOpenAI with a prompt template and tools. This works, but you are on your own for observability, cost tracking, and governance.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool as langchain_tool

# Define the tools
@langchain_tool
def lookup_customer(customer_id: str) -> str:
    """Look up customer details by ID."""
    # In production, this would query your database
    return f"Customer {customer_id}: Premium tier, active since 2023"

@langchain_tool
def search_knowledge_base(query: str) -> str:
    """Search the support knowledge base."""
    return f"KB result for '{query}': Reset via Settings > Account > Password"

# Build the chain
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "You are a customer support classifier. "
        "Classify the ticket into: billing, technical, or general. "
        "Then generate a helpful response."
    )),
    ("human", "Customer {customer_id} says: {message}"),
])
chain = prompt | llm.bind_tools([lookup_customer, search_knowledge_base])

# Run it
result = chain.invoke({
    "customer_id": "C-1234",
    "message": "I can't reset my password",
})
print(result.content)
```
What is missing:
- No visibility into how many tokens were consumed or what they cost
- No audit trail of which agent ran, when, or what it decided
- No policy enforcement (any agent can run at any time with no limits)
- No budget controls (a runaway loop can burn through your API credits)
- No durable execution (if the process crashes mid-chain, you start over)
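To make the first two gaps concrete, here is a minimal sketch of the bookkeeping you would otherwise have to write and maintain yourself. The `CostTracker` class and the per-million-token prices are hypothetical illustrations, not part of LangChain or Waxell, and real model pricing varies:

```python
# Hypothetical per-million-token prices for illustration only.
PRICES_PER_M = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

class CostTracker:
    """Accumulates token usage per model and estimates spend."""

    def __init__(self):
        self.usage = {}  # model -> {"input": n, "output": n}

    def record(self, model: str, input_tokens: int, output_tokens: int):
        u = self.usage.setdefault(model, {"input": 0, "output": 0})
        u["input"] += input_tokens
        u["output"] += output_tokens

    def total_cost(self) -> float:
        cost = 0.0
        for model, u in self.usage.items():
            p = PRICES_PER_M[model]
            cost += u["input"] / 1_000_000 * p["input"]
            cost += u["output"] / 1_000_000 * p["output"]
        return cost

tracker = CostTracker()
tracker.record("gpt-4o", input_tokens=1_200, output_tokens=350)
tracker.record("gpt-4o", input_tokens=900, output_tokens=280)
print(f"Estimated spend: ${tracker.total_cost():.4f}")
```

Even this toy version has to be threaded through every call site by hand, which is exactly the work the next two approaches remove.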
## B) LangChain + Waxell Observe
The same LangChain agent with three extra lines that add full observability. The tools, prompt, and chain are unchanged; only the invocation gains a callback handler.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool as langchain_tool
from waxell_observe.integrations.langchain import WaxellLangChainHandler

@langchain_tool
def lookup_customer(customer_id: str) -> str:
    """Look up customer details by ID."""
    return f"Customer {customer_id}: Premium tier, active since 2023"

@langchain_tool
def search_knowledge_base(query: str) -> str:
    """Search the support knowledge base."""
    return f"KB result for '{query}': Reset via Settings > Account > Password"

llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "You are a customer support classifier. "
        "Classify the ticket into: billing, technical, or general. "
        "Then generate a helpful response."
    )),
    ("human", "Customer {customer_id} says: {message}"),
])
chain = prompt | llm.bind_tools([lookup_customer, search_knowledge_base])

# Three lines to add full observability
handler = WaxellLangChainHandler(agent_name="support-classifier")
result = chain.invoke(
    {"customer_id": "C-1234", "message": "I can't reset my password"},
    config={"callbacks": [handler]},
)
handler.flush_sync(result={"output": result.content})
```
What you now get -- with no changes to your agent logic:
- Every LLM call tracked with model name, token counts, and cost estimates
- Chain and tool execution steps recorded automatically
- Pre-execution policy checks (budget limits, rate limiting, content filtering)
- Full run lifecycle visible in the Waxell dashboard
- Audit trail of inputs, outputs, and execution status
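Pre-execution policy checks can be pictured as a gate that runs before the chain is ever invoked, so a rejected run spends no tokens at all. This is a hypothetical sketch of the idea; `check_policies` and `PolicyError` are illustrative names, not Waxell's actual policy engine:

```python
class PolicyError(Exception):
    """Raised when a run is rejected before execution."""

def check_policies(agent_name: str, spent_usd: float, budget_usd: float,
                   runs_this_minute: int, rate_limit: int):
    """Reject a run up front if it would violate a budget or rate policy."""
    if spent_usd >= budget_usd:
        raise PolicyError(f"{agent_name}: budget of ${budget_usd:.2f} exhausted")
    if runs_this_minute >= rate_limit:
        raise PolicyError(f"{agent_name}: rate limit of {rate_limit}/min reached")

# Allowed: under budget and under the rate limit
check_policies("support-classifier", spent_usd=4.20, budget_usd=10.0,
               runs_this_minute=3, rate_limit=60)

# Rejected: budget exhausted, so the LLM call never happens
try:
    check_policies("support-classifier", spent_usd=10.0, budget_usd=10.0,
                   runs_this_minute=3, rate_limit=60)
except PolicyError as e:
    print(e)
```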
The `WaxellLangChainHandler` hooks into LangChain's callback system. It intercepts `on_llm_start`, `on_llm_end`, `on_chain_start`, `on_chain_end`, `on_tool_start`, and `on_tool_end` events automatically. You do not need to change any agent code.
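The mechanism can be illustrated without either library. Below is a framework-agnostic sketch of the callback pattern, not Waxell's actual implementation: the runner fires lifecycle events as it executes, and the handler records them as a trace without the agent code knowing it is being observed.

```python
class TraceHandler:
    """Records lifecycle events as a flat trace, in the style of a callback handler."""

    def __init__(self, agent_name: str):
        self.agent_name = agent_name
        self.events = []

    def on_llm_start(self, model: str):
        self.events.append(("llm_start", model))

    def on_llm_end(self, tokens: int):
        self.events.append(("llm_end", tokens))

    def on_tool_start(self, name: str):
        self.events.append(("tool_start", name))

    def on_tool_end(self, name: str):
        self.events.append(("tool_end", name))

def run_chain(handler: TraceHandler):
    # Stand-in for the real chain: the runner invokes the handler at each
    # step, which is all the callback integration relies on.
    handler.on_llm_start("gpt-4o")
    handler.on_llm_end(tokens=420)
    handler.on_tool_start("search_knowledge_base")
    handler.on_tool_end("search_knowledge_base")

handler = TraceHandler(agent_name="support-classifier")
run_chain(handler)
print(handler.events)
```

Because the events flow one way, from runner to handler, the observability layer can be added or removed without touching the agent logic, which is what makes the three-line integration possible.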
## C) Waxell Native
The same customer support classifier built natively with the Waxell SDK. Here, observability, governance, and durability are built into the framework itself -- there is nothing to instrument.
```python
from waxell_sdk import agent, workflow, router, tool, WorkflowContext, RouterContext

@agent(
    name="support-classifier",
    description="Classifies customer support tickets and generates responses",
    signals=["support_ticket"],
    domains=["customers", "knowledge_base"],
)
class SupportClassifier:
    @tool
    async def lookup_customer(self, ctx: WorkflowContext, customer_id: str) -> dict:
        """Look up customer details by ID."""
        return await ctx.domain(
            "customers", "get_details", customer_id=customer_id
        )

    @tool
    async def search_knowledge_base(self, ctx: WorkflowContext, query: str) -> dict:
        """Search the support knowledge base."""
        return await ctx.domain(
            "knowledge_base", "search", query=query
        )

    @router("classify_and_respond", decision="classify_ticket")
    async def classify_and_respond(self, ctx: RouterContext) -> dict:
        """Classify the ticket and route to the appropriate handler."""
        # Add customer context for the LLM decision
        customer = await ctx.tool(
            "lookup_customer",
            customer_id=ctx.inputs.get("customer_id"),
        )
        ctx.add_signal("customer_tier", customer.get("tier", "standard"))
        return await ctx.route()

    @workflow("handle_billing")
    async def handle_billing(self, ctx: WorkflowContext, message: str) -> dict:
        """Handle billing-related tickets."""
        response = await ctx.llm.generate(
            prompt=f"Generate a billing support response for: {message}",
            output_format="json",
            task="billing_response",
        )
        return response

    @workflow("handle_technical")
    async def handle_technical(self, ctx: WorkflowContext, message: str) -> dict:
        """Handle technical support tickets."""
        kb_result = await ctx.tool("search_knowledge_base", query=message)
        response = await ctx.llm.generate(
            prompt=f"Using this KB article: {kb_result}\n\nRespond to: {message}",
            output_format="json",
            task="technical_response",
        )
        return response
```
What you gain with native Waxell:
- Declarative agent definition: the `@agent` decorator registers your agent with the control plane automatically
- Durable workflows: if the process crashes mid-workflow, execution resumes from the last checkpoint
- LLM routing: `ctx.llm.generate()` uses the configured LLM router with model selection, fallbacks, and rate limiting
- Domain integration: `ctx.domain()` calls route through governed, audited domain endpoints
- Zero instrumentation: every LLM call, tool invocation, and workflow step is tracked automatically
- Signal-driven execution: agents are triggered by signals (webhooks), enabling centralized orchestration
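The durability claim is worth unpacking. The sketch below shows the general checkpoint/resume idea with an in-memory store and illustrative names (`CheckpointStore`, `run_workflow`); Waxell's actual `WorkflowEnvelope` format and persistence layer are not shown here:

```python
class CheckpointStore:
    """In-memory stand-in for a durable checkpoint store."""

    def __init__(self):
        self.completed = {}  # step name -> checkpointed result

def run_workflow(store: CheckpointStore, steps):
    """Run steps in order, skipping any step whose result is already checkpointed."""
    results = {}
    for name, fn in steps:
        if name in store.completed:
            results[name] = store.completed[name]  # resume: reuse prior result
            continue
        results[name] = fn()
        store.completed[name] = results[name]  # checkpoint after each step
    return results

calls = []
steps = [
    ("lookup_customer", lambda: calls.append("lookup") or "premium"),
    ("classify", lambda: calls.append("classify") or "technical"),
]

store = CheckpointStore()
run_workflow(store, steps)  # first run executes both steps
run_workflow(store, steps)  # "crash and restart": both steps are skipped
print(calls)                # each step body ran exactly once
```

With a durable store in place of the dictionary, a restarted process replays the workflow but re-executes only the steps that never checkpointed, which is why a mid-workflow crash does not mean starting over.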
## Comparison Table
| Capability | LangChain | LangChain + Observe | Waxell Native |
|---|---|---|---|
| Agent definition | Imperative (chains, agents) | Unchanged | Declarative (@agent, @workflow, @tool) |
| Observability | Manual (LangSmith or custom) | Automatic via callback handler | Built-in, zero instrumentation |
| LLM cost tracking | Not included | Automatic for 20+ models | Built-in with tenant-level overrides |
| Policy enforcement | Not included | Pre-execution checks | Full lifecycle governance |
| Budget limits | Not included | Supported via policies | Built-in with tenant/agent scoping |
| Durable workflows | Not included | Not included | Checkpoint/resume with WorkflowEnvelope |
| Approval workflows | Not included | Not included | Built-in with pause/resume |
| Multi-tenancy | Not included | Tenant-scoped via control plane | Native tenant isolation |
| Audit logging | Not included | Run-level audit trail | Full execution trace with agent_trace |
## Which Approach Should You Choose?
If you already have LangChain agents in production, start with LangChain + Observe. You get immediate value (visibility, cost tracking, policy enforcement) with zero changes to your agent code. You can always migrate to native Waxell later if you need durable workflows or full governance.
See the Progressive Migration guide for a phased approach to adopting Waxell.