Migrate from Langfuse
If you are using Langfuse for LLM observability, switching to Waxell Observe is straightforward. This tutorial maps Langfuse concepts to their Waxell equivalents, shows side-by-side code migration, and highlights what Waxell adds on top.
Prerequisites
- An existing Langfuse integration you want to migrate
- `waxell-observe` installed (`pip install waxell-observe`)
- A Waxell API key (get one from your Waxell control plane dashboard)
What You'll Learn
- How Langfuse concepts map to Waxell equivalents
- Step-by-step code migration with before/after examples
- How to migrate prompts from Langfuse to Waxell
- What Waxell provides beyond Langfuse's feature set
Step 1: Concept Mapping
| Langfuse | Waxell Observe | Notes |
|---|---|---|
| Trace | AgentExecutionRun | Top-level execution unit. Created by @waxell_agent or WaxellContext. |
| Generation | LlmCallRecord | Individual LLM call with model, tokens, cost. Recorded via ctx.record_llm_call(). |
| Span | Step | Sub-operation within a run. Recorded via ctx.record_step(). |
| Session | Session (session_id) | Group of related runs. Pass session_id to decorator or context. |
| Score | Score | Quality metrics (numeric, categorical, boolean). Via ctx.record_score() or API. |
| Prompt | Prompt + PromptVersion | Versioned prompt management with labels. Fetched via client.get_prompt(). |
| Tag | Tag | Searchable key-value labels. Via ctx.set_tag() or waxell.tag(). |
| User | User (user_id) | End-user tracking. Pass user_id to context. |
| Metadata | Metadata + Tags | Tags are searchable key-value pairs; metadata supports complex values. |
The core concepts are nearly 1:1, which makes migration mechanical. The main difference is that Waxell adds governance, multi-tenancy, and agent lifecycle management on top of observability.
Step 2: Code Migration -- Basic Tracing
Before (Langfuse)
```python
from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
)

# Create a trace
trace = langfuse.trace(
    name="my-agent",
    session_id="session-123",
    user_id="user-42",
    metadata={"environment": "production"},
)

# Record a generation (LLM call)
generation = trace.generation(
    name="chat",
    model="gpt-4o",
    input=[{"role": "user", "content": "Hello"}],
    output={"role": "assistant", "content": "Hi there!"},
    usage={"input": 10, "output": 5},
)

# Add a score
trace.score(
    name="quality",
    value=0.9,
)

# Flush
langfuse.flush()
```
After (Waxell Observe)
```python
from waxell_observe import WaxellObserveClient
from waxell_observe.context import WaxellContext

# Configure once at startup
WaxellObserveClient.configure(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)

# Use context manager (equivalent to langfuse.trace)
async with WaxellContext(
    agent_name="my-agent",
    session_id="session-123",
    user_id="user-42",
    metadata={"environment": "production"},
) as ctx:
    # Record an LLM call (equivalent to trace.generation)
    ctx.record_llm_call(
        model="gpt-4o",
        tokens_in=10,
        tokens_out=5,
        task="chat",
        prompt_preview="Hello",
        response_preview="Hi there!",
    )

    # Add a score (equivalent to trace.score)
    ctx.record_score(
        name="quality",
        value=0.9,
        data_type="numeric",
    )

    ctx.set_result({"output": "Hi there!"})

# No explicit flush needed -- the context manager handles it on exit
```
Step 3: Code Migration -- Decorator Pattern
Langfuse's @observe decorator maps to Waxell's @waxell_agent decorator.
Before (Langfuse)
```python
from langfuse.decorators import observe

@observe()
def my_agent(query: str) -> str:
    result = call_llm(query)
    return result
```
After (Waxell Observe)
```python
from waxell_observe import waxell_agent

@waxell_agent(agent_name="my-agent")
def my_agent(query: str, waxell_ctx=None) -> str:
    result = call_llm(query)
    if waxell_ctx:
        waxell_ctx.record_llm_call(
            model="gpt-4o",
            tokens_in=100,
            tokens_out=50,
            task="query",
        )
    return result
```
Key differences:
- Waxell requires `agent_name` (Langfuse uses the function name by default)
- Waxell injects `waxell_ctx` for manual LLM call recording
- Waxell automatically captures function inputs and outputs (set `capture_io=False` to disable)
Step 4: Code Migration -- LangChain Integration
Both platforms provide LangChain callback handlers.
Before (Langfuse)
```python
from langfuse.callback import CallbackHandler

handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
)

chain = prompt | llm
result = chain.invoke(
    {"question": "What is Waxell?"},
    config={"callbacks": [handler]},
)
```
After (Waxell Observe)
```python
from waxell_observe.integrations.langchain import WaxellLangChainHandler

handler = WaxellLangChainHandler(agent_name="langchain-bot")

chain = prompt | llm
result = chain.invoke(
    {"question": "What is Waxell?"},
    config={"callbacks": [handler]},
)

# Flush when done
handler.flush_sync(result={"output": result.content})
```
The Waxell LangChain handler automatically captures:
- Every LLM call with model name, token counts, and cost estimates
- Prompt and response previews
- Chain and tool spans for the full execution trace
Step 5: Code Migration -- Sessions and Users
Before (Langfuse)
```python
trace = langfuse.trace(
    name="my-agent",
    session_id="session-123",
    user_id="user-42",
)
```
After (Waxell Observe)
```python
# With decorator
@waxell_agent(agent_name="my-agent", session_id="session-123")
def my_agent(query: str, waxell_ctx=None) -> str:
    return call_llm(query)

# With context manager
async with WaxellContext(
    agent_name="my-agent",
    session_id="session-123",
    user_id="user-42",
) as ctx:
    result = call_llm(query)
    ctx.set_result({"output": result})
```
Session and user data is available in the Waxell dashboard under Observability > Sessions and Observability > Users.
Step 6: Code Migration -- Scores
Before (Langfuse)
```python
# Numeric score
trace.score(name="accuracy", value=0.95)

# Categorical score
trace.score(name="category", value="relevant")

# Boolean score (via numeric)
trace.score(name="thumbs_up", value=1)
```
After (Waxell Observe)
```python
# Numeric score
ctx.record_score(name="accuracy", value=0.95, data_type="numeric")

# Categorical score
ctx.record_score(name="category", value="relevant", data_type="categorical")

# Boolean score
ctx.record_score(name="thumbs_up", value=True, data_type="boolean")
```
Waxell's explicit `data_type` field (numeric, categorical, boolean) enables type-appropriate analytics: averages for numeric scores, value distributions for categorical scores, and pass/fail rates for boolean scores.
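To make the distinction concrete, here is an illustrative sketch in plain Python (not the Waxell API) of how each `data_type` supports a different aggregation:

```python
from collections import Counter
from statistics import mean

# A handful of recorded scores, shaped like the record_score() calls above
scores = [
    {"name": "accuracy", "data_type": "numeric", "value": 0.95},
    {"name": "accuracy", "data_type": "numeric", "value": 0.85},
    {"name": "category", "data_type": "categorical", "value": "relevant"},
    {"name": "category", "data_type": "categorical", "value": "off-topic"},
    {"name": "thumbs_up", "data_type": "boolean", "value": True},
    {"name": "thumbs_up", "data_type": "boolean", "value": False},
]

def aggregate(all_scores, name):
    values = [s["value"] for s in all_scores if s["name"] == name]
    data_type = next(s["data_type"] for s in all_scores if s["name"] == name)
    if data_type == "numeric":
        return mean(values)              # average
    if data_type == "categorical":
        return dict(Counter(values))     # value distribution
    return sum(values) / len(values)     # pass rate

print(aggregate(scores, "accuracy"))
print(aggregate(scores, "category"))
print(aggregate(scores, "thumbs_up"))
```

Without an explicit type, a dashboard has to guess whether `1` means a count, a rating, or a thumbs-up; with it, each score gets the right aggregation automatically.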
Step 7: Migrate Prompts
If you manage prompts in Langfuse, you can migrate them to Waxell's prompt management system.
Before (Langfuse)
```python
prompt = langfuse.get_prompt("support-agent")
compiled = prompt.compile(customer_name="Alice")
```
After (Waxell Observe)
```python
from waxell_observe import WaxellObserveClient

client = WaxellObserveClient()

# Fetch latest version
prompt = await client.get_prompt("support-agent")
compiled = prompt.compile(customer_name="Alice")

# Fetch specific version
prompt_v2 = await client.get_prompt("support-agent", version=2)

# Fetch by label
production_prompt = await client.get_prompt("support-agent", label="production")
```

Synchronous version:

```python
prompt = client.get_prompt_sync(name="support-agent")
compiled = prompt.compile(customer_name="Alice")
```
To migrate your existing prompts:
- Export your prompts from Langfuse (Settings > Prompts)
- Create them in Waxell via the API or dashboard
- Update your code to fetch from Waxell instead of Langfuse
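When recreating prompts by hand, it helps to verify that the compiled output matches what Langfuse produced. The following stand-in is illustrative only -- it is not the Waxell API, and the `{{variable}}` template syntax is an assumption made for this sketch:

```python
# Hypothetical stand-in for a migrated prompt, for verification purposes
class MigratedPrompt:
    def __init__(self, name: str, template: str, version: int = 1):
        self.name = name
        self.template = template
        self.version = version

    def compile(self, **variables) -> str:
        """Substitute each {{key}} placeholder with its value."""
        result = self.template
        for key, value in variables.items():
            result = result.replace("{{" + key + "}}", str(value))
        return result

prompt = MigratedPrompt(
    name="support-agent",
    template="You are a support agent. Greet {{customer_name}} politely.",
)
print(prompt.compile(customer_name="Alice"))
```

Compiling the same template with the same variables on both sides is a cheap sanity check before you cut over.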
Step 8: Migration Checklist
Use this checklist to track your migration:
- Install `waxell-observe` (`pip install waxell-observe`)
- Set environment variables (`WAXELL_API_URL`, `WAXELL_API_KEY`)
- Replace `Langfuse()` initialization with `WaxellObserveClient.configure()`
- Replace `langfuse.trace()` with `WaxellContext` or `@waxell_agent`
- Replace `trace.generation()` with `ctx.record_llm_call()`
- Replace `trace.span()` with `ctx.record_step()`
- Replace `trace.score()` with `ctx.record_score()`
- Replace `langfuse.get_prompt()` with `client.get_prompt()`
- Replace LangChain `CallbackHandler` with `WaxellLangChainHandler`
- Remove `langfuse.flush()` calls -- no explicit flush is needed with the context manager
- Remove `langfuse` from dependencies
- Verify data appears in the Waxell dashboard
Step 9: What Waxell Adds Beyond Langfuse
Waxell Observe is more than a Langfuse replacement. It is part of a full agent governance platform.
Governance and Policy Enforcement
Waxell can enforce policies before, during, and after agent execution:
```python
# Automatic policy enforcement with the decorator
@waxell_agent(agent_name="support-bot", enforce_policy=True)
async def handle_query(query: str, waxell_ctx=None) -> str:
    # If a budget, safety, or operations policy blocks this agent,
    # a PolicyViolationError is raised before execution begins.
    return await call_llm(query)
```
Policy types include:
- Budget policies -- Block or warn when cost or token limits are exceeded
- Safety policies -- Enforce tool call limits and step count maximums
- Operations policies -- Monitor execution duration and latency
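The block-or-warn behavior of a budget policy can be sketched in plain Python. This is a hypothetical illustration, not Waxell internals; `PolicyViolationError` matches the name used in the decorator example above, but the policy fields (`max_cost_usd`, `action`) are assumptions made for this sketch:

```python
# Hypothetical sketch of a budget-policy check -- not Waxell internals
class PolicyViolationError(Exception):
    pass

def check_budget_policy(policy: dict, run_stats: dict) -> None:
    """Block (raise) or warn when the run's cost exceeds the policy limit."""
    if run_stats["cost_usd"] > policy["max_cost_usd"]:
        if policy["action"] == "block":
            raise PolicyViolationError(
                f"cost ${run_stats['cost_usd']:.2f} exceeds "
                f"limit ${policy['max_cost_usd']:.2f}"
            )
        print(f"warning: budget exceeded ({run_stats['cost_usd']})")

policy = {"max_cost_usd": 5.00, "action": "block"}
check_budget_policy(policy, {"cost_usd": 2.50})  # within budget: returns quietly
```

The important property is that a blocking policy fails the run before the LLM call is made, so an out-of-budget agent spends nothing further.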
Multi-Tenancy
Full tenant isolation out of the box. Each tenant gets:
- Separate data schemas (no data leakage between tenants)
- Independent model cost overrides
- Per-tenant policies and governance rules
- Isolated API keys and authentication
Agent Lifecycle Management
Beyond observability, Waxell manages the full agent lifecycle:
- Deploy and configure agents via the control plane
- Start/stop/pause agents with management commands
- Agent registry for discovering and managing all agents
- Workflow orchestration with pause/resume support
OpenTelemetry Native
Waxell generates standard OpenTelemetry traces alongside its HTTP data path:
- Works with any OTel-compatible backend (Jaeger, Grafana Tempo, Datadog)
- Distributed tracing across services
- Correlate agent execution with infrastructure metrics
- No vendor lock-in on the tracing layer
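Because the spans are standard OpenTelemetry, exporting them is ordinary OTel SDK configuration. A minimal sketch using the official `opentelemetry-sdk` packages, assuming Waxell emits through the globally registered tracer provider; the endpoint and service name are placeholders:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Point the OTLP exporter at your collector (Jaeger, Tempo, Datadog agent, ...)
provider = TracerProvider(resource=Resource.create({"service.name": "my-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
```

Swapping backends is then a one-line change to the exporter endpoint, which is what "no vendor lock-in on the tracing layer" buys you in practice.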
Enterprise Features
- SSO -- SAML/OIDC single sign-on
- RBAC -- Role-based access control with fine-grained permissions
- Audit logs -- Track who did what and when
- Self-hosted -- Run entirely on your infrastructure
Next Steps
- Quickstart -- Get started with Waxell Observe in 5 minutes
- Installation & Configuration -- All configuration options
- Instrument OpenAI Directly -- Deep dive into instrumentation approaches
- Cost Optimization -- Take advantage of Waxell's cost management
- Policy & Governance -- Set up governance policies