Prompt Management Agent

Demonstrates get_prompt() and PromptInfo.compile(), the background collector for LLM calls made outside any WaxellContext, the get_context() API, and capture_content mode. A parent orchestrator coordinates a prompt-runner (compiles and uses prompts) and a prompt-evaluator (tests context APIs and background collection).

Environment variables

This example requires OPENAI_API_KEY, WAXELL_API_KEY, and WAXELL_API_URL. Use --dry-run to skip real API calls.

Architecture

Key Code

PromptInfo Compilation

Text and chat prompts are compiled with variable substitution via PromptInfo.compile().

@waxell.tool(tool_type="prompt_management")
def compile_text_prompt(template_content: str, variables: dict) -> dict:
    prompt = PromptInfo(
        name="demo-greeting", version=1, prompt_type="text",
        content=template_content,
        config={"temperature": 0.7, "max_tokens": 500},
        labels=["demo", "local"],
    )
    compiled = prompt.compile(**variables)
    return {"compiled": str(compiled), "length": len(str(compiled))}

@waxell.tool(tool_type="prompt_management")
def compile_chat_prompt(role: str, topic: str, question: str) -> dict:
    chat_prompt = PromptInfo(
        name="demo-chat", version=1, prompt_type="chat",
        content=[
            {"role": "system", "content": "You are {{role}}. Help the user with {{topic}}."},
            {"role": "user", "content": "{{question}}"},
        ],
    )
    compiled = chat_prompt.compile(role=role, topic=topic, question=question)
    return {"messages": compiled, "message_count": len(compiled)}
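To make the substitution step concrete: the double-brace `{{variable}}` syntax used in the chat prompt above can be sketched with a few lines of standard-library Python. This is an illustrative stand-in, not the actual PromptInfo.compile() implementation; `compile_template` and the regex are assumptions for the sketch.

```python
import re

def compile_template(template: str, **variables) -> str:
    """Replace {{name}} placeholders with the given variables,
    leaving any placeholder without a matching variable intact."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

# Apply to each message of a chat-style prompt, as compile() does per message
messages = [
    {"role": "system", "content": "You are {{role}}. Help the user with {{topic}}."},
    {"role": "user", "content": "{{question}}"},
]
compiled = [
    {**m, "content": compile_template(
        m["content"], role="a tutor", topic="algebra", question="What is x?")}
    for m in messages
]
```

Leaving unknown placeholders untouched (rather than raising) makes partial compilation possible, which is a common choice for prompt templating.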

Background Collector and Context API

The evaluator tests get_context() and exercises the background collector.

@waxell.tool(tool_type="testing")
def test_get_context() -> dict:
    ctx = waxell.get_context()
    return {"context_found": ctx is not None, "context_type": type(ctx).__name__}

@waxell.tool(tool_type="collector")
def run_background_collector() -> dict:
    from waxell_observe.instrumentors._collector import _collector
    calls = [
        {"model": "gpt-4o-mini", "tokens_in": 120, "tokens_out": 65, "cost": 0.00012},
        {"model": "gpt-4o-mini", "tokens_in": 200, "tokens_out": 110, "cost": 0.00022},
    ]
    for call in calls:
        _collector.record_call(call)
    _collector.flush()
    return {"calls_buffered": len(calls), "flushed": True}
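The buffer-and-flush pattern the collector relies on can be sketched in plain Python. This is a minimal illustration of the record_call()/flush() contract exercised above, not the internals of `waxell_observe`'s `_collector`; the `BackgroundCollector` class and its attributes are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class BackgroundCollector:
    """Minimal sketch: record_call() appends to an in-memory buffer;
    flush() drains the buffer and hands it off as one batch."""
    _buffer: list = field(default_factory=list)
    flushed_batches: list = field(default_factory=list)

    def record_call(self, call: dict) -> None:
        self._buffer.append(call)

    def flush(self) -> list:
        # Swap the buffer out before handing it off, so new calls
        # recorded during a flush land in a fresh buffer.
        batch, self._buffer = self._buffer, []
        self.flushed_batches.append(batch)
        return batch

collector = BackgroundCollector()
collector.record_call({"model": "gpt-4o-mini", "tokens_in": 120, "tokens_out": 65})
collector.record_call({"model": "gpt-4o-mini", "tokens_in": 200, "tokens_out": 110})
batch = collector.flush()
```

A real collector would send each flushed batch to the backend (and typically flush on a timer or at shutdown); here the batch is just retained so the hand-off is visible.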

What this demonstrates

  • PromptInfo.compile() -- text and chat prompt templates with variable substitution, config, and labels.
  • get_prompt() remote fetch -- attempts to retrieve prompts from the control plane via the observe client.
  • get_context() API -- verifies that the active WaxellContext is accessible inside tool functions.
  • Background collector -- buffers and flushes LLM calls made outside any WaxellContext.
  • capture_content=True -- full prompt/response text included in traces.
  • waxell.decide() -- manual routing decision for content capture mode.

Run it

# Dry-run mode (no API key needed)
cd dev/waxell-dev
python -m app.demos.prompt_management_agent --dry-run

# Live mode
export OPENAI_API_KEY="sk-..."
python -m app.demos.prompt_management_agent

Source

dev/waxell-dev/app/demos/prompt_management_agent.py