Prompt Management

Fetch versioned prompts from Waxell, render them with variables, and use get_context() for nested access.

Environment variables

This example requires OPENAI_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL.
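To fail fast with a clear message instead of an opaque authentication error mid-run, you can verify these up front. A minimal sketch (the helper name is illustrative, not part of the SDK):

```python
import os

# Required by this example; names taken from the list above.
REQUIRED_VARS = ("OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")


def check_env() -> None:
    """Raise early if any required environment variable is unset or empty."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```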

Prompt retrieval and rendering

import asyncio

import waxell_observe
from waxell_observe import WaxellObserveClient, WaxellContext

waxell_observe.init()

from openai import OpenAI

openai_client = OpenAI()


async def use_managed_prompts():
    client = WaxellObserveClient()

    async with WaxellContext(agent_name="prompt-demo") as ctx:
        # Fetch a text prompt by label
        prompt = await client.get_prompt("summarizer", label="production")
        print(f"Prompt: {prompt.name} v{prompt.version} ({prompt.prompt_type})")
        print(f"Labels: {prompt.labels}")
        print(f"Config: {prompt.config}")

        # Compile with variables (replaces {{variable}} placeholders)
        rendered = prompt.compile(topic="AI safety", length="brief")
        print(f"Rendered: {rendered}")

        # Use the prompt with OpenAI
        response = openai_client.chat.completions.create(
            model=prompt.config.get("model", "gpt-4o"),
            messages=[{"role": "user", "content": rendered}],
            temperature=prompt.config.get("temperature", 0.7),
        )
        ctx.record_step("generate", output={"prompt_version": prompt.version})

        # Fetch a chat prompt
        chat_prompt = await client.get_prompt("support-assistant", label="production")
        messages = chat_prompt.compile(user_query="How do I reset my password?")
        # messages is now a list of {role, content} dicts ready for the API

        chat_response = openai_client.chat.completions.create(
            model=chat_prompt.config.get("model", "gpt-4o"),
            messages=messages,
        )
        ctx.record_step("chat", output={"prompt_version": chat_prompt.version})

        ctx.set_result({"summary": response.choices[0].message.content})


asyncio.run(use_managed_prompts())

Accessing context with get_context()

Use get_context() to access the current WaxellContext from anywhere in the call stack -- no need to pass it as a parameter.

import asyncio

import waxell_observe
from waxell_observe import observe


@observe(agent_name="context-demo")
async def my_agent(query: str):
    # Get the current context from anywhere in the call stack
    ctx = waxell_observe.get_context()
    if ctx:
        print(f"Run ID: {ctx.run_id}")
        ctx.set_tag("source", "api")

    # Works from nested functions too
    await nested_function()


async def nested_function():
    ctx = waxell_observe.get_context()
    if ctx:
        ctx.record_step("nested_work")


asyncio.run(my_agent("What is the meaning of life?"))
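The reason a nested coroutine can see the active context without it being passed down is the standard contextvars pattern: a ContextVar set when the run starts is visible to every await in the same task. The sketch below shows that pattern in isolation; it assumes nothing about waxell_observe's internals and is not its actual implementation:

```python
import asyncio
import contextvars

# A ContextVar holds the "current run" for the active call stack.
_current_ctx: contextvars.ContextVar = contextvars.ContextVar("ctx", default=None)


def get_context():
    # Returns whatever the enclosing run set, or None outside a run.
    return _current_ctx.get()


async def nested():
    # Visible here even though nothing was passed as a parameter.
    return get_context()


async def run():
    token = _current_ctx.set({"run_id": "abc123"})
    try:
        return await nested()
    finally:
        # Restore the previous value so contexts don't leak between runs.
        _current_ctx.reset(token)
```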

What this demonstrates

  • get_prompt() -- fetch versioned prompts by name and label from Waxell.
  • PromptInfo.compile() -- render {{variable}} placeholders into final prompt text or chat messages.
  • Text vs chat prompts -- text prompts compile to a string, chat prompts compile to a list of {role, content} dicts.
  • get_context() -- access the active WaxellContext from nested functions without threading it through parameters.
  • Prompt config -- use prompt.config to drive model selection, temperature, and other API parameters.
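The {{variable}} substitution that compile() performs can be approximated in a few lines of plain Python. This is a sketch of the rendering behavior described above for both prompt types, not the SDK's implementation (function names are illustrative):

```python
import re


def render_template(template: str, **variables) -> str:
    # Replace each {{name}} placeholder (optional inner whitespace) with its
    # value; placeholders with no matching variable are left untouched.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )


def render_chat(messages: list[dict], **variables) -> list[dict]:
    # Chat prompts render each message's content while preserving roles.
    return [
        {"role": m["role"], "content": render_template(m["content"], **variables)}
        for m in messages
    ]
```

For example, `render_template("Summarize {{topic}} in a {{length}} style", topic="AI safety", length="brief")` produces a fully rendered string, while `render_chat` maps the same substitution over a list of {role, content} dicts.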

Run it

export OPENAI_API_KEY="sk-..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"

python prompt_management.py