Phase 3: Agent Builder
Agent Builder is currently in development. The feature described on this page is not yet available. Sign up below to be notified when it launches.
You are here if: you have existing agents running with Waxell Observe (Phase 1) and optionally signals (Phase 2), and you want to migrate to native Waxell without manually rewriting everything.
What you will have after this phase: your existing agent logic expressed as native Waxell SDK constructs (@agent, @workflow, @tool, @router), with durable workflows and full governance.
What Agent Builder Will Do
Agent Builder is an AI-assisted tool that automates the conversion of existing agent code into native Waxell SDK definitions. It works in four steps:
1. Analyze Your Existing Code
Point Agent Builder at your agent source files. It reads your code and identifies:
- LLM calls (OpenAI, Anthropic, or other provider SDKs)
- Tool definitions and external API integrations
- Orchestration logic (sequencing, branching, loops)
- State management patterns
- Error handling and retry logic
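For concreteness, a hand-rolled agent exhibiting these patterns might look like the sketch below. Every name here is hypothetical and the LLM call is stubbed; the point is only to show the kinds of patterns Agent Builder looks for:

```python
# Hypothetical hand-rolled agent code of the kind Agent Builder analyzes.
# All names are illustrative; the LLM call is stubbed for brevity.

def call_llm(prompt: str) -> str:
    """Stand-in for an OpenAI/Anthropic SDK call (the LLM-call pattern)."""
    return f"ANSWER based on: {prompt[:40]}"

def search_docs(query: str) -> str:
    """Tool definition wrapping an external API (the tool pattern)."""
    return f"docs for '{query}'"  # real code would hit an HTTP endpoint

def run_agent(question: str, state: dict) -> str:
    """Orchestration: sequencing, state management, and retry logic."""
    state["last_question"] = question               # state management pattern
    for attempt in range(3):                        # retry pattern
        try:
            docs = search_docs(question)            # step 1: tool call
            return call_llm(f"{docs}\n{question}")  # step 2: LLM call
        except ConnectionError:
            if attempt == 2:
                raise
    return ""
```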
2. Map Patterns to Waxell Constructs
Agent Builder maps what it finds to the corresponding Waxell SDK primitives:
| Your Code | Waxell Equivalent |
|---|---|
| LLM API calls | @router with decision or ctx.llm.generate() |
| Tool functions | @tool decorators |
| Orchestration logic | @workflow decorators |
| Agent class or entry point | @agent decorator |
| External API integrations | Domain calls via ctx.domain() |
| Manual observability code | Removed (built-in) |
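The tool mapping in particular depends on reading your existing function signatures (see "typed inputs extracted from your existing function signatures" below). A minimal sketch of that extraction idea, using only the Python standard library; this illustrates the concept, not the actual Agent Builder implementation:

```python
import inspect

def extract_tool_schema(fn):
    """Derive a typed-input schema from an ordinary Python function,
    the way a signature-driven tool mapping might."""
    sig = inspect.signature(fn)
    inputs = {
        name: getattr(p.annotation, "__name__", str(p.annotation))
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputs": inputs,
        "output": getattr(sig.return_annotation, "__name__",
                          str(sig.return_annotation)),
    }

def search_docs(query: str) -> str:
    """Search documentation."""
    return query

schema = extract_tool_schema(search_docs)
# schema["inputs"] == {"query": "str"}
```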
3. Generate Native Waxell Code
Agent Builder produces complete, working Waxell agent definitions. The generated code follows Waxell best practices:
- Declarative @agent definitions with signals, domains, and capabilities
- Workflows that express orchestration as durable step sequences
- Tools with typed inputs extracted from your existing function signatures
- LLM calls routed through ctx.llm.generate() with task annotations
4. Review and Customize
Agent Builder does not commit anything automatically. It presents the generated code for your review in a side-by-side view:
- Left: Your original agent code
- Right: The generated Waxell equivalent
- Annotations: Explanations of each mapping decision
You can accept, modify, or reject each generated component individually.
Example: What It Looks Like
Given a LangChain agent like this:
```python
import requests
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import AgentExecutor, create_tool_calling_agent

@tool
def search_docs(query: str) -> str:
    """Search documentation."""
    return requests.get(f"https://api.docs.com/search?q={query}").text

prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), ("placeholder", "{agent_scratchpad}")]
)

llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, [search_docs], prompt)
executor = AgentExecutor(agent=agent, tools=[search_docs])
result = executor.invoke({"input": "How do I configure SSL?"})
```
Agent Builder would generate:
```python
from waxell_sdk import agent, workflow, tool, WorkflowContext

@agent(
    name="docs-assistant",
    description="Searches documentation and answers questions",
    signals=["docs_query"],
    domains=["documentation"],
)
class DocsAssistant:
    @tool
    async def search_docs(self, ctx: WorkflowContext, query: str) -> dict:
        """Search documentation."""
        return await ctx.domain("documentation", "search", query=query)

    @workflow("answer_question")
    async def answer_question(self, ctx: WorkflowContext, question: str) -> dict:
        """Search docs and generate an answer."""
        docs = await ctx.tool("search_docs", query=question)
        answer = await ctx.llm.generate(
            prompt=f"Using these docs: {docs}\n\nAnswer: {question}",
            output_format="text",
            task="docs_answer",
        )
        return {"answer": answer}
```
Who Is It For?
Agent Builder is most useful when you have:
- Multiple agents that would be tedious to convert manually
- Complex orchestration with many steps, branches, or tool calls
- LangChain or CrewAI agents that follow common patterns Agent Builder recognizes
- Teams where not everyone is familiar with the Waxell SDK yet
If you have a single, simple agent, manual migration may be faster.
Early Access
Agent Builder is under active development. To be notified when it launches:
Sign up for early access at waxell.ai
Next Steps
- Phase 4: Go Fully Native -- Manual migration guide (available now)
- Phase 2: Add Signals -- Add centralized triggering while you wait
- Progressive Migration Overview -- See all migration phases