# OpenAI Agent
Three patterns for integrating waxell-observe with OpenAI, from simplest to most flexible.
## Environment variables

This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`.
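Before running any of the patterns below, it can help to fail fast when a variable is missing. A minimal preflight sketch using only the standard library (the variable names come from this example; the helper itself is illustrative, not part of waxell-observe):

```python
import os

REQUIRED_VARS = ("OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")

def missing_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: fail fast before constructing any clients.
# missing = missing_env_vars()
# if missing:
#     raise SystemExit("Missing environment variables: " + ", ".join(missing))
```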
## Pattern 1: Auto-instrumentation

The simplest approach. Call `init()` before importing the OpenAI SDK, and every call is captured automatically.
```python
import waxell_observe as waxell

waxell.init()  # must come before importing openai

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing in one paragraph."}],
)
print(response.choices[0].message.content)
```
## Pattern 2: `@observe` decorator

Wrap a function to create a named trace. Use the top-level convenience functions and the `@step_dec` decorator to enrich the trace without touching the context object directly.
```python
import asyncio

import waxell_observe as waxell

waxell.init()

from openai import OpenAI

client = OpenAI()


@waxell.step_dec(name="generate_answer")
def call_openai(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


@waxell.observe(agent_name="qa-bot")
async def answer_question(question: str) -> str:
    waxell.tag("domain", "science")
    answer = call_openai(question)
    waxell.score("answer_length", len(answer))
    waxell.metadata("model", "gpt-4o")
    return answer


asyncio.run(answer_question("What is quantum computing?"))
```
## Pattern 3: `WaxellContext` (advanced)

For cases requiring explicit control over the trace lifecycle, including session IDs, tags, and structured results.
```python
import asyncio

import waxell_observe as waxell
from waxell_observe import WaxellContext

waxell.init()

from openai import OpenAI

client = OpenAI()


async def main():
    async with WaxellContext(
        agent_name="research-bot",
        session_id="session-001",
    ) as ctx:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Summarize the latest in AI safety."}],
        )
        ctx.record_step("summarize", output={"model": "gpt-4o"})
        ctx.set_tag("topic", "ai-safety")
        ctx.set_result({"summary": response.choices[0].message.content})


asyncio.run(main())
```
## What this demonstrates
- **Auto-instrumentation** -- the recommended starting point. Zero-code capture of all OpenAI calls after `waxell.init()`.
- **`@observe` decorator** -- named agent traces with automatic lifecycle tracking. Add this when you want structure and enrichment.
- **`@step_dec` decorator** -- wraps helper functions to record steps automatically.
- **Top-level convenience functions** -- `waxell.score()`, `waxell.tag()`, and `waxell.metadata()` enrich traces without needing the context object.
- **`WaxellContext`** -- advanced pattern for explicit trace lifecycle control with session IDs and user tracking.
- **Session tracking** -- correlating multiple runs under one session ID.
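The session-tracking idea can be sketched without the SDK: each run records its trace under a shared session ID, and the backend groups traces by that ID. A stand-in using only the standard library (`new_session_id` and `group_by_session` are illustrative helpers, not part of waxell-observe):

```python
import uuid
from collections import defaultdict

def new_session_id() -> str:
    """One ID shared by every run in a user session."""
    return f"session-{uuid.uuid4().hex[:8]}"

def group_by_session(traces):
    """Group trace records by session_id, as an observability backend would."""
    grouped = defaultdict(list)
    for trace in traces:
        grouped[trace["session_id"]].append(trace["agent_name"])
    return dict(grouped)

traces = [
    {"agent_name": "research-bot", "session_id": "session-001"},
    {"agent_name": "qa-bot", "session_id": "session-001"},
    {"agent_name": "research-bot", "session_id": "session-002"},
]
# Both session-001 runs land under the same key when grouped.
```

In practice you would generate the ID once per user session and pass it as `session_id` to each `WaxellContext` (or decorated agent) that participates in it.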
## Run it
```bash
export OPENAI_API_KEY="sk-..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"

python openai_agent.py
```