# Phase 4: Go Fully Native
**You are here if:** you want the full power of the Waxell platform -- durable workflows, declarative agent definitions, built-in governance, and production infrastructure. This is a manual migration guide for converting existing agents to native Waxell.

**What you will have after this phase:** agents defined with `@agent`, `@workflow`, `@tool`, and `@router` decorators running on Waxell's durable runtime with checkpoint/resume, full governance, and production backends.
## The Mapping
Every pattern in your existing agent code has a native Waxell equivalent:
| Your existing code | Waxell SDK equivalent |
|---|---|
| LLM API calls (`openai.chat.completions.create(...)`) | `await ctx.llm.generate(prompt=..., task=...)` |
| Orchestration logic (function sequences, if/else, loops) | `@workflow` decorator with `ctx.log_step()` |
| External API calls (`requests.get(...)`, SDK clients) | `await ctx.domain("service", "action", ...)` |
| Tool functions | `@tool` decorator |
| Agent class or entry point | `@agent` decorator |
| LLM-based classification / routing | `@router` decorator with decision spec |
| Manual observability (`print`, `logging`, custom dashboards) | Remove it -- built-in |
| Manual cost tracking | Remove it -- built-in |
| Manual retry logic | Remove it -- built-in via durable workflows |
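To make the first rows concrete, here is a minimal sketch in plain Python (not the Waxell SDK) of the indirection a domain call buys you: callers name a capability, and a registry -- standing in here for the control plane -- resolves the endpoint. The registry contents and the `domain()` helper are illustrative assumptions.

```python
# Illustration only: a toy registry standing in for the control plane's
# domain routing. The real ctx.domain() resolves endpoints and credentials
# server-side; here we just look them up in a dict.
DOMAIN_REGISTRY = {
    ("customers", "get_details"): "https://api.myapp.com/customers/{customer_id}",
    ("tickets", "create"): "https://api.myapp.com/tickets",
}

def domain(service: str, action: str, **params) -> str:
    """Resolve a (service, action) pair to its configured endpoint."""
    template = DOMAIN_REGISTRY[(service, action)]
    return template.format(**params)

# Callers name the capability, never the URL or the auth header:
url = domain("customers", "get_details", customer_id="C-1234")
print(url)  # https://api.myapp.com/customers/C-1234
```

The point of the indirection is that endpoint changes and credential rotation happen in one place (the registry, or in Waxell's case the control plane) rather than at every call site.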
## Step-by-Step: Converting a LangChain Agent
Here is a complete walkthrough converting a LangChain customer support agent to native Waxell.
### Before: LangChain

```python
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool as langchain_tool
from langchain.agents import create_tool_calling_agent, AgentExecutor


@langchain_tool
def lookup_customer(customer_id: str) -> str:
    """Look up customer details by ID."""
    import requests

    resp = requests.get(
        f"https://api.myapp.com/customers/{customer_id}",
        headers={"Authorization": f"Bearer {os.environ['APP_API_KEY']}"},
    )
    return resp.json()


@langchain_tool
def search_kb(query: str) -> str:
    """Search the knowledge base for relevant articles."""
    import requests

    resp = requests.get(
        "https://api.myapp.com/kb/search",
        params={"q": query},
        headers={"Authorization": f"Bearer {os.environ['APP_API_KEY']}"},
    )
    return resp.json()


@langchain_tool
def create_ticket(customer_id: str, category: str, summary: str) -> str:
    """Create a support ticket."""
    import requests

    resp = requests.post(
        "https://api.myapp.com/tickets",
        json={
            "customer_id": customer_id,
            "category": category,
            "summary": summary,
        },
        headers={"Authorization": f"Bearer {os.environ['APP_API_KEY']}"},
    )
    return resp.json()


llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", (
        "You are a customer support agent. Look up the customer, "
        "search the knowledge base, classify the issue, and create a ticket. "
        "Respond with a helpful message."
    )),
    ("human", "Customer {customer_id}: {message}"),
    ("placeholder", "{agent_scratchpad}"),
])

tools = [lookup_customer, search_kb, create_ticket]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run it
result = executor.invoke({
    "customer_id": "C-1234",
    "message": "I was charged twice for my subscription",
})
print(result["output"])
```
### After: Waxell Native

```python
from waxell_sdk import agent, workflow, tool, router, WorkflowContext, RouterContext


@agent(
    name="support-agent",
    description="Handles customer support tickets with classification and response",
    signals=["support_ticket"],
    domains=["customers", "knowledge_base", "tickets"],
)
class SupportAgent:
    # ----------------------------------------------------------------
    # Tools: External integrations become domain calls
    # ----------------------------------------------------------------
    @tool
    async def lookup_customer(self, ctx: WorkflowContext, customer_id: str) -> dict:
        """Look up customer details by ID."""
        return await ctx.domain("customers", "get_details", customer_id=customer_id)

    @tool
    async def search_kb(self, ctx: WorkflowContext, query: str) -> dict:
        """Search the knowledge base for relevant articles."""
        return await ctx.domain("knowledge_base", "search", query=query)

    @tool
    async def create_ticket(
        self,
        ctx: WorkflowContext,
        customer_id: str,
        category: str,
        summary: str,
    ) -> dict:
        """Create a support ticket."""
        return await ctx.domain(
            "tickets", "create",
            customer_id=customer_id,
            category=category,
            summary=summary,
        )

    # ----------------------------------------------------------------
    # Router: LLM-based classification replaces AgentExecutor
    # ----------------------------------------------------------------
    @router("classify_and_handle", decision="classify_ticket")
    async def classify_and_handle(self, ctx: RouterContext) -> dict:
        """Classify the ticket and route to the appropriate handler."""
        # Gather context for the LLM decision
        customer = await ctx.tool(
            "lookup_customer",
            customer_id=ctx.inputs.get("customer_id"),
        )
        ctx.add_signal("customer_info", customer)

        kb_results = await ctx.tool(
            "search_kb",
            query=ctx.inputs.get("message"),
        )
        ctx.add_signal("kb_results", kb_results)

        return await ctx.route()

    # ----------------------------------------------------------------
    # Workflows: Each category gets a dedicated handler
    # ----------------------------------------------------------------
    @workflow("handle_billing")
    async def handle_billing(
        self,
        ctx: WorkflowContext,
        customer_id: str,
        message: str,
    ) -> dict:
        """Handle billing issues."""
        # Generate a response using the LLM
        response = await ctx.llm.generate(
            prompt=(
                f"Generate a helpful billing support response.\n"
                f"Customer issue: {message}"
            ),
            output_format="json",
            task="billing_response",
        )

        # Create a ticket
        ticket = await ctx.tool(
            "create_ticket",
            customer_id=customer_id,
            category="billing",
            summary=message,
        )
        ctx.log_step("ticket_created", {"ticket_id": ticket.get("id")})

        return {"response": response, "ticket": ticket}

    @workflow("handle_technical")
    async def handle_technical(
        self,
        ctx: WorkflowContext,
        customer_id: str,
        message: str,
    ) -> dict:
        """Handle technical issues."""
        kb_result = await ctx.tool("search_kb", query=message)
        ctx.log_step("kb_searched", {"results_count": len(kb_result)})

        response = await ctx.llm.generate(
            prompt=(
                f"Using this knowledge base article: {kb_result}\n\n"
                f"Generate a helpful technical support response for: {message}"
            ),
            output_format="json",
            task="technical_response",
        )

        ticket = await ctx.tool(
            "create_ticket",
            customer_id=customer_id,
            category="technical",
            summary=message,
        )
        ctx.log_step("ticket_created", {"ticket_id": ticket.get("id")})

        return {"response": response, "ticket": ticket}
```
## What Changed and Why

### Direct API calls became domain calls

**Before:** `requests.get(f"https://api.myapp.com/customers/{customer_id}")` with hardcoded URLs and manual auth headers.

**After:** `await ctx.domain("customers", "get_details", customer_id=customer_id)` -- the control plane routes this to your configured domain endpoint with proper authentication, audit logging, and error handling.
### AgentExecutor became a router

**Before:** LangChain's `AgentExecutor` lets the LLM decide which tools to call in a loop. The classification happens implicitly as part of the agent's reasoning.

**After:** The `@router` decorator makes classification explicit. The LLM sees the available capabilities and selects one. The decision is logged, auditable, and governed by policies.
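The difference can be sketched in plain Python (the `classify_ticket` stand-in and `HANDLERS` table below are illustrative, not Waxell APIs): the routing decision is a single named value that can be logged and policy-checked before any handler runs, instead of being buried inside a tool-calling loop.

```python
# Illustration: an explicit routing step. The classifier produces one named
# decision; dispatch is a lookup, so every decision is visible and auditable.
def classify_ticket(message: str) -> str:
    """Stand-in for the LLM decision; returns a category name."""
    return "billing" if "charged" in message.lower() else "technical"

HANDLERS = {
    "billing": lambda msg: {"category": "billing", "message": msg},
    "technical": lambda msg: {"category": "technical", "message": msg},
}

def route(message: str) -> dict:
    decision = classify_ticket(message)      # explicit, loggable decision
    # ...a policy check on `decision` could veto execution here...
    return HANDLERS[decision](message)

result = route("I was charged twice for my subscription")
```

Because the decision is reified as a value, governance can inspect it between classification and dispatch -- the property the `@router` decorator is described as providing.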
### Manual tool definitions became @tool

**Before:** `@langchain_tool` decorators with string return types and inline HTTP calls.

**After:** `@tool` decorators with typed parameters that delegate to domain calls. The tool is registered with the control plane and tracked in every execution.
### Orchestration logic became workflows

**Before:** The `AgentExecutor` handles all orchestration in a single opaque loop.

**After:** Each handling path (`handle_billing`, `handle_technical`) is an explicit workflow with named steps. Each step is a checkpoint -- if the process crashes after `kb_searched`, it resumes from there.
### Observability code was removed

**Before:** `verbose=True` for console output, plus any custom logging or metrics you added.

**After:** Nothing. Every LLM call, tool invocation, domain call, and workflow step is automatically tracked by the runtime. The control plane dashboard shows everything.
## What You Gain at Phase 4

### Durable Workflows with Checkpoint/Resume

Every workflow step is a durable checkpoint. If your process crashes, the `WorkflowEnvelope` resumes execution from the last completed step. No data is lost, and no completed work is repeated.
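The resume behavior can be illustrated with a toy checkpoint store in plain Python (the real mechanism lives in the runtime's WorkflowEnvelope; the `step()` helper and `checkpoints` dict below are assumptions made for illustration):

```python
# Illustration: checkpoint/resume in miniature. Each named step's result is
# persisted; on resume, completed steps return the stored result instead of
# re-executing, so no work is repeated.
checkpoints: dict = {}   # stands in for durable storage

def step(name, fn):
    """Run fn once; on later runs, return the stored result."""
    if name in checkpoints:
        return checkpoints[name]          # already done -- skip
    result = fn()
    checkpoints[name] = result            # durable checkpoint
    return result

calls = []

def run_workflow():
    step("kb_searched", lambda: calls.append("search") or "articles")
    step("ticket_created", lambda: calls.append("create") or "T-42")

run_workflow()   # first run: both steps execute
run_workflow()   # simulated resume: both steps are skipped
assert calls == ["search", "create"]   # each side effect happened exactly once
```

A crash between the two steps would leave `kb_searched` in the store, so the resumed run would execute only `ticket_created` -- the behavior described above for the technical-support workflow.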
### Full Governance Lifecycle

The `DynamicPolicyManager` evaluates policies at every hook point:

- **Pre-execution:** Can this agent run right now?
- **Pre-step:** Can this workflow step proceed?
- **Post-step:** Should execution continue based on this step's output?
- **Post-execution:** Record the final outcome for audit.
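A toy version of hook-point evaluation, in plain Python (the `policies` registry and `evaluate()` function are illustrative stand-ins for the DynamicPolicyManager, not its actual API):

```python
# Illustration: policies are predicates registered per hook point; execution
# proceeds only if every policy registered for that hook passes.
from typing import Callable, Dict, List

policies: Dict[str, List[Callable[[dict], bool]]] = {
    "pre_execution": [lambda ctx: ctx.get("agent_enabled", True)],
    "pre_step": [lambda ctx: ctx.get("step") != "forbidden_step"],
}

def evaluate(hook: str, ctx: dict) -> bool:
    """Return True if every policy registered for this hook passes."""
    return all(policy(ctx) for policy in policies.get(hook, []))

# Pre-execution: the agent is enabled, so it may run.
assert evaluate("pre_execution", {"agent_enabled": True})
# Pre-step: this particular step is vetoed by policy.
assert not evaluate("pre_step", {"step": "forbidden_step"})
```

The same pattern extends to post-step and post-execution hooks, where the context dict would carry the step's output or the final outcome.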
### Infrastructure Package

Native Waxell agents run on production-grade infrastructure:

| Component | Default (local dev) | Production |
|---|---|---|
| State backend | `InMemoryBackend` | `DjangoRuntimeBackend` (PostgreSQL) |
| Token semaphore | `InMemoryTokenSemaphore` | `RedisTokenSemaphore` |
| Task execution | In-process | `CeleryTaskStatusProvider` |
| Governance hooks | None | `PolicyGovernanceHook` |
Switching from local dev to production happens automatically when the `waxell_infra` Django app loads -- no code changes needed.
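The pattern behind this switch can be sketched as plain dependency selection (the class and function names below are illustrative, not the waxell_infra internals; a boolean flag stands in for detecting the installed Django app):

```python
# Illustration: backend selection happens once, at startup, based on the
# environment -- agent code never names a backend class directly.
class InMemoryBackend:           # default for local dev
    name = "in-memory"

class PostgresBackend:           # stands in for DjangoRuntimeBackend
    name = "postgres"

def select_backend(production: bool):
    """Pick the state backend; call sites are unchanged either way."""
    return PostgresBackend() if production else InMemoryBackend()

backend = select_backend(production=False)
print(backend.name)  # in-memory
```

Because the selection is centralized, promoting an agent to production is a deployment change, not a code change.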
### Generation Layer
For agents that produce content (emails, reports, summaries), the generation package provides:
- RAG (Retrieval-Augmented Generation) pipelines
- Prompt versioning and A/B testing
- Content synthesis with quality controls
- Model routing per task type
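Model routing per task type is what the `task=` argument of `ctx.llm.generate()` keys into. A minimal sketch of such a routing table (the model names and the `model_for()` helper are invented for illustration):

```python
# Illustration: each task type maps to a configured model, with a fallback.
# The real mapping would live in the control plane, not in code.
MODEL_ROUTES = {
    "billing_response": "small-fast-model",        # hypothetical model names
    "technical_response": "large-reasoning-model",
}
DEFAULT_MODEL = "general-model"

def model_for(task: str) -> str:
    """Resolve a task type to its configured model."""
    return MODEL_ROUTES.get(task, DEFAULT_MODEL)

print(model_for("billing_response"))  # small-fast-model
```

This is why the workflows above tag each generate call with a distinct `task` value: it lets cheap, fast models serve routine responses while harder tasks route to stronger models, per configuration.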
### Multi-Tenancy
Native agents support full tenant isolation:
- Per-tenant policies and model configurations
- Per-tenant billing and cost tracking
- Per-tenant audit trails
- Database-level data isolation
### Management Commands

Operate your agents via Django management commands:

```shell
# List all registered agents
python manage.py agent_list

# Run an agent directly
python manage.py agent_run support-agent

# Interactive agent shell
python manage.py agent_shell support-agent

# Run agent tests
python manage.py agent_test support-agent

# View execution trace
python manage.py agent_trace <execution_id>
```
## Migration Checklist

Use this checklist when converting an existing agent:

- Identify all LLM calls and map them to `ctx.llm.generate()` with appropriate `task` values
- Identify all external API calls and map them to domain calls via `ctx.domain()`
- Identify tool functions and convert them to `@tool` decorators
- Identify the main orchestration flow and convert it to a `@workflow` or `@router`
- Define the `@agent` decorator with name, description, signals, and domains
- Remove all manual observability code (logging wrappers, metrics, cost tracking)
- Remove all manual retry logic (the runtime handles this)
- Configure domain endpoints in the control plane
- Register signal types in the control plane
- Test with `python manage.py agent_test <agent_name>`
- Verify execution traces with `python manage.py agent_trace <execution_id>`
## Next Steps

- SDK Overview -- full SDK documentation
- Workflow Spec -- workflow definition reference
- Runtime Overview -- how the execution engine works
- Execution Context -- `ExecutionContext` API reference
- Progressive Migration Overview -- see all migration phases