
Migrate from Langfuse

If you are using Langfuse for LLM observability, switching to Waxell Observe is straightforward. This tutorial maps Langfuse concepts to their Waxell equivalents, shows side-by-side code migration, and highlights what Waxell adds on top.

Prerequisites

  • An existing Langfuse integration you want to migrate
  • waxell-observe installed (pip install waxell-observe)
  • A Waxell API key (get one from your Waxell control plane dashboard)

What You'll Learn

  • How Langfuse concepts map to Waxell equivalents
  • Step-by-step code migration with before/after examples
  • How to migrate prompts from Langfuse to Waxell
  • What Waxell provides beyond Langfuse's feature set

Step 1: Concept Mapping

| Langfuse | Waxell Observe | Notes |
| --- | --- | --- |
| Trace | AgentExecutionRun | Top-level execution unit. Created by @waxell_agent or WaxellContext. |
| Generation | LlmCallRecord | Individual LLM call with model, tokens, cost. Recorded via ctx.record_llm_call(). |
| Span | Step | Sub-operation within a run. Recorded via ctx.record_step(). |
| Session | Session (session_id) | Group of related runs. Pass session_id to decorator or context. |
| Score | Score | Quality metrics (numeric, categorical, boolean). Via ctx.record_score() or API. |
| Prompt | Prompt + PromptVersion | Versioned prompt management with labels. Fetched via client.get_prompt(). |
| Tag | Tag | Searchable key-value labels. Via ctx.set_tag() or waxell.tag(). |
| User | User (user_id) | End-user tracking. Pass user_id to context. |
| Metadata | Metadata + Tags | Tags are searchable key-value pairs; metadata supports complex values. |

The core concepts are nearly 1:1, which makes migration mechanical. The main difference is that Waxell adds governance, multi-tenancy, and agent lifecycle management on top of observability.

Step 2: Code Migration -- Basic Tracing

Before (Langfuse)

from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
)

# Create a trace
trace = langfuse.trace(
    name="my-agent",
    session_id="session-123",
    user_id="user-42",
    metadata={"environment": "production"},
)

# Record a generation (LLM call)
generation = trace.generation(
    name="chat",
    model="gpt-4o",
    input=[{"role": "user", "content": "Hello"}],
    output={"role": "assistant", "content": "Hi there!"},
    usage={"input": 10, "output": 5},
)

# Add a score
trace.score(
    name="quality",
    value=0.9,
)

# Flush
langfuse.flush()

After (Waxell Observe)

from waxell_observe import WaxellObserveClient
from waxell_observe.context import WaxellContext

# Configure once at startup
WaxellObserveClient.configure(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)

# Use context manager (equivalent to langfuse.trace)
async with WaxellContext(
    agent_name="my-agent",
    session_id="session-123",
    user_id="user-42",
    metadata={"environment": "production"},
) as ctx:
    # Record an LLM call (equivalent to trace.generation)
    ctx.record_llm_call(
        model="gpt-4o",
        tokens_in=10,
        tokens_out=5,
        task="chat",
        prompt_preview="Hello",
        response_preview="Hi there!",
    )

    # Add a score (equivalent to trace.score)
    ctx.record_score(
        name="quality",
        value=0.9,
        data_type="numeric",
    )

    ctx.set_result({"output": "Hi there!"})

# No explicit flush needed -- context manager handles it on exit

Step 3: Code Migration -- Decorator Pattern

Langfuse's @observe decorator maps to Waxell's @waxell_agent decorator.

Before (Langfuse)

from langfuse.decorators import observe

@observe()
def my_agent(query: str) -> str:
    result = call_llm(query)
    return result

After (Waxell Observe)

from waxell_observe import waxell_agent

@waxell_agent(agent_name="my-agent")
def my_agent(query: str, waxell_ctx=None) -> str:
    result = call_llm(query)

    if waxell_ctx:
        waxell_ctx.record_llm_call(
            model="gpt-4o",
            tokens_in=100,
            tokens_out=50,
            task="query",
        )

    return result

Key differences:

  • Waxell requires agent_name (Langfuse uses the function name by default)
  • Waxell injects waxell_ctx for manual LLM call recording
  • Waxell automatically captures function inputs and outputs (set capture_io=False to disable)
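The context-injection behavior can be illustrated in plain Python. The sketch below is a conceptual stand-in showing how a decorator can pass an optional context keyword argument into the wrapped function; it is not Waxell's actual implementation, and the `_DemoContext` class and `demo_agent` decorator are hypothetical names:

```python
import functools

class _DemoContext:
    """Illustrative stand-in for an execution context."""
    def __init__(self, agent_name):
        self.agent_name = agent_name
        self.llm_calls = []

    def record_llm_call(self, **kwargs):
        self.llm_calls.append(kwargs)

def demo_agent(agent_name):
    """Minimal sketch of a decorator that injects a context kwarg."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            ctx = _DemoContext(agent_name)
            wrapper.last_ctx = ctx  # exposed for inspection in this demo
            # Inject the context so the function can record calls manually.
            return fn(*args, waxell_ctx=ctx, **kwargs)
        return wrapper
    return decorator

@demo_agent(agent_name="my-agent")
def my_agent(query, waxell_ctx=None):
    if waxell_ctx:
        waxell_ctx.record_llm_call(model="gpt-4o", tokens_in=100, tokens_out=50)
    return f"answered: {query}"
```

Because the context arrives as a keyword argument with a `None` default, the same function remains callable in tests without any observability wiring.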

Step 4: Code Migration -- LangChain Integration

Both platforms provide LangChain callback handlers.

Before (Langfuse)

from langfuse.callback import CallbackHandler

handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
)

chain = prompt | llm
result = chain.invoke(
    {"question": "What is Waxell?"},
    config={"callbacks": [handler]},
)

After (Waxell Observe)

from waxell_observe.integrations.langchain import WaxellLangChainHandler

handler = WaxellLangChainHandler(agent_name="langchain-bot")

chain = prompt | llm
result = chain.invoke(
    {"question": "What is Waxell?"},
    config={"callbacks": [handler]},
)

# Flush when done
handler.flush_sync(result={"output": result.content})

The Waxell LangChain handler automatically captures:

  • Every LLM call with model name, token counts, and cost estimates
  • Prompt and response previews
  • Chain and tool spans for the full execution trace
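Cost estimates like these are typically derived from token counts and per-model rates. A minimal sketch of the idea follows; the rates below are placeholders for illustration, not Waxell's actual pricing table:

```python
# Illustrative per-1K-token rates in USD; real rates vary by model and provider.
RATES = {
    "gpt-4o": {"input": 0.0025, "output": 0.01},
}

def estimate_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Estimate the cost of a single LLM call from its token counts."""
    rate = RATES[model]
    return (tokens_in / 1000) * rate["input"] + (tokens_out / 1000) * rate["output"]
```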

Step 5: Code Migration -- Sessions and Users

Before (Langfuse)

trace = langfuse.trace(
    name="my-agent",
    session_id="session-123",
    user_id="user-42",
)

After (Waxell Observe)

# With decorator
@waxell_agent(agent_name="my-agent", session_id="session-123")
def my_agent(query: str, waxell_ctx=None) -> str:
    return call_llm(query)

# With context manager
async with WaxellContext(
    agent_name="my-agent",
    session_id="session-123",
    user_id="user-42",
) as ctx:
    result = call_llm(query)
    ctx.set_result({"output": result})

Session and user data is available in the Waxell dashboard under Observability > Sessions and Observability > Users.

Step 6: Code Migration -- Scores

Before (Langfuse)

# Numeric score
trace.score(name="accuracy", value=0.95)

# Categorical score
trace.score(name="category", value="relevant")

# Boolean score (via numeric)
trace.score(name="thumbs_up", value=1)

After (Waxell Observe)

# Numeric score
ctx.record_score(name="accuracy", value=0.95, data_type="numeric")

# Categorical score
ctx.record_score(name="category", value="relevant", data_type="categorical")

# Boolean score
ctx.record_score(name="thumbs_up", value=True, data_type="boolean")

Waxell supports an explicit data_type (numeric, categorical, or boolean), which enables type-appropriate analytics: averages for numeric scores, value distributions for categorical scores, and pass/fail rates for boolean scores.
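If you are porting score calls mechanically, the data_type can usually be inferred from the Python type of the Langfuse value. The helper below is an illustrative sketch (the inference rules are an assumption; verify them against your own scores):

```python
def infer_data_type(value) -> str:
    """Map a Langfuse score value to a Waxell data_type string."""
    # Check bool before int/float: bool is a subclass of int in Python.
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, (int, float)):
        return "numeric"
    return "categorical"
```

Note that Langfuse boolean scores recorded as 0/1 will infer as numeric; convert those by score name if you want Waxell's boolean type.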

Step 7: Migrate Prompts

If you manage prompts in Langfuse, you can migrate them to Waxell's prompt management system.

Before (Langfuse)

prompt = langfuse.get_prompt("support-agent")
compiled = prompt.compile(customer_name="Alice")

After (Waxell Observe)

from waxell_observe import WaxellObserveClient

client = WaxellObserveClient()

# Fetch latest version
prompt = await client.get_prompt("support-agent")
compiled = prompt.compile(customer_name="Alice")

# Fetch specific version
prompt_v2 = await client.get_prompt("support-agent", version=2)

# Fetch by label
production_prompt = await client.get_prompt("support-agent", label="production")

Synchronous version:

prompt = client.get_prompt_sync(name="support-agent")
compiled = prompt.compile(customer_name="Alice")

To migrate your existing prompts:

  1. Export your prompts from Langfuse (Settings > Prompts)
  2. Create them in Waxell via the API or dashboard
  3. Update your code to fetch from Waxell instead of Langfuse
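The export/create steps can be scripted for bulk migration. The sketch below converts a Langfuse prompt export (a dict) into a payload you could create in Waxell; the Waxell payload shape here is an assumption, so check your control plane's API reference before using it:

```python
def to_waxell_prompt(langfuse_prompt: dict) -> dict:
    """Convert a Langfuse prompt export into an assumed Waxell create-prompt payload."""
    return {
        "name": langfuse_prompt["name"],
        "content": langfuse_prompt["prompt"],
        "labels": langfuse_prompt.get("labels", []),
        "metadata": {
            "migrated_from": "langfuse",
            "source_version": langfuse_prompt.get("version"),
        },
    }
```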

Step 8: Migration Checklist

Use this checklist to track your migration:

  • Install waxell-observe (pip install waxell-observe)
  • Set environment variables (WAXELL_API_URL, WAXELL_API_KEY)
  • Replace Langfuse() initialization with WaxellObserveClient.configure()
  • Replace langfuse.trace() with WaxellContext or @waxell_agent
  • Replace trace.generation() with ctx.record_llm_call()
  • Replace trace.span() with ctx.record_step()
  • Replace trace.score() with ctx.record_score()
  • Replace langfuse.get_prompt() with client.get_prompt()
  • Replace LangChain CallbackHandler with WaxellLangChainHandler
  • Replace langfuse.flush() -- no explicit flush needed with context manager
  • Remove langfuse from dependencies
  • Verify data appears in Waxell dashboard
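The environment variables from the checklist can be set like this (the values are placeholders; use your own control plane URL and key):

```shell
# Point the SDK at your Waxell control plane.
export WAXELL_API_URL="https://acme.waxell.dev"
export WAXELL_API_KEY="wax_sk_..."
```

A quick search for "langfuse" across your codebase is a simple way to confirm no call sites were missed before removing the dependency.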

Step 9: What Waxell Adds Beyond Langfuse

Waxell Observe is more than a Langfuse replacement. It is part of a full agent governance platform.

Governance and Policy Enforcement

Waxell can enforce policies before, during, and after agent execution:

# Automatic policy enforcement with the decorator
@waxell_agent(agent_name="support-bot", enforce_policy=True)
async def handle_query(query: str, waxell_ctx=None) -> str:
    # If a budget, safety, or operations policy blocks this agent,
    # a PolicyViolationError is raised before execution begins.
    return await call_llm(query)

Policy types include:

  • Budget policies -- Block or warn when cost or token limits are exceeded
  • Safety policies -- Enforce tool call limits and step count maximums
  • Operations policies -- Monitor execution duration and latency
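Conceptually, a budget policy compares a run's accumulated cost against a configured limit and either blocks or warns when it is exceeded. The sketch below illustrates that idea only; it is not Waxell's enforcement code, and the `PolicyViolation` class and `check_budget_policy` function are hypothetical names:

```python
class PolicyViolation(Exception):
    """Raised when a policy blocks execution (illustrative stand-in)."""

def check_budget_policy(accumulated_cost_usd: float, limit_usd: float, mode: str = "block") -> bool:
    """Return True when within budget; block or warn when the limit is exceeded."""
    if accumulated_cost_usd <= limit_usd:
        return True
    if mode == "block":
        raise PolicyViolation(
            f"budget exceeded: ${accumulated_cost_usd:.2f} > ${limit_usd:.2f}"
        )
    print(f"warning: budget exceeded (${accumulated_cost_usd:.2f} > ${limit_usd:.2f})")
    return False
```

The block/warn distinction mirrors the bullet above: a "block" mode stops the run before it spends more, while a "warn" mode lets it continue but surfaces the overage.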

Multi-Tenancy

Full tenant isolation out of the box. Each tenant gets:

  • Separate data schemas (no data leakage between tenants)
  • Independent model cost overrides
  • Per-tenant policies and governance rules
  • Isolated API keys and authentication

Agent Lifecycle Management

Beyond observability, Waxell manages the full agent lifecycle:

  • Deploy and configure agents via the control plane
  • Start/stop/pause agents with management commands
  • Agent registry for discovering and managing all agents
  • Workflow orchestration with pause/resume support

OpenTelemetry Native

Waxell generates standard OpenTelemetry traces alongside its HTTP data path:

  • Compatible with any OTel-compatible backend (Jaeger, Grafana Tempo, Datadog)
  • Distributed tracing across services
  • Correlate agent execution with infrastructure metrics
  • No vendor lock-in on the tracing layer

Enterprise Features

  • SSO -- SAML/OIDC single sign-on
  • RBAC -- Role-based access control with fine-grained permissions
  • Audit logs -- Track who did what and when
  • Self-hosted -- Run entirely on your infrastructure

Next Steps