Governance
Policy enforcement with pre-execution checks, mid-execution governance, and retry feedback.
Environment variables
This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`.
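A quick stdlib-only preflight check can confirm the variables are set before running any of the examples below (this helper is not part of the SDK, just a convenience sketch):

```python
import os

# The three variables this page's examples rely on
required = ("OPENAI_API_KEY", "WAXELL_API_KEY", "WAXELL_API_URL")
missing = [name for name in required if not os.environ.get(name)]
print("missing:", missing or "none")
```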
Policy enforcement with @waxell.observe
```python
import asyncio

import waxell_observe as waxell
from waxell_observe.errors import PolicyViolationError

waxell.init()

from openai import OpenAI

client = OpenAI()


@waxell.observe(
    agent_name="governed-agent",
    enforce_policy=True,
    mid_execution_governance=True,
)
async def governed_agent(query: str, waxell_ctx=None):
    try:
        # Step 1: Analyze query
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": query}],
        )
        waxell.step("analyze", output={"model": "gpt-4o"})

        # Mid-execution policy check
        policy = await waxell_ctx.check_policy()
        if policy.blocked:
            waxell.tag("policy_outcome", "blocked")
            return None
        if policy.action == "warn":
            waxell.tag("policy_warning", policy.reason)

        # Step 2: Generate response
        final = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Provide a helpful response."},
                {"role": "user", "content": query},
            ],
        )
        result = final.choices[0].message.content
        waxell.step("respond", output={"length": len(result)})
        waxell.score("response_quality", 0.9, comment="auto-scored")
        return result
    except PolicyViolationError as e:
        print(f"Execution blocked by policy: {e}")
        print(f"Action: {e.policy_result.action}")
        print(f"Reason: {e.policy_result.reason}")
        return None


asyncio.run(governed_agent("How do I optimize my cloud costs?"))
```
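The mid-execution branching above can be sketched standalone, independent of the SDK. `PolicyResult` here is a hypothetical stand-in for the object `check_policy()` returns (the real object may carry more fields); the branching logic mirrors the agent:

```python
from dataclasses import dataclass


# Hypothetical stand-in for the result of check_policy();
# the SDK's actual result object may differ.
@dataclass
class PolicyResult:
    blocked: bool
    action: str  # e.g. "allow", "warn", "block"
    reason: str = ""


def handle_policy(policy: PolicyResult) -> str:
    """Mirror the blocked/warn/allow branching used in governed_agent."""
    if policy.blocked:
        return "blocked"  # stop before the second LLM call
    if policy.action == "warn":
        return f"warn: {policy.reason}"  # continue, but tag the warning
    return "allowed"


print(handle_policy(PolicyResult(blocked=False, action="warn", reason="PII detected")))
# warn: PII detected
```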
Sync governance with WaxellObserveClient
For cases where you need to check policy outside an async context, use the synchronous client directly.
```python
import os

from waxell_observe import WaxellObserveClient

WaxellObserveClient.configure(
    api_url=os.environ["WAXELL_API_URL"],
    api_key=os.environ["WAXELL_API_KEY"],
)

client = WaxellObserveClient()

# Check policy before running
result = client.check_policy_sync(agent_name="my-agent", workflow_name="default")
if result.allowed:
    print("Agent is allowed to run")
else:
    print(f"Blocked: {result.reason}")
```
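The allow/deny gate above is a general pattern. As a SDK-independent sketch (with `CheckResult` as a hypothetical stand-in for the object `check_policy_sync` returns), the run can be wrapped behind the check:

```python
from typing import Callable, NamedTuple


# Hypothetical stand-in for the sync policy-check result
class CheckResult(NamedTuple):
    allowed: bool
    reason: str = ""


def run_if_allowed(check: Callable[[], CheckResult], run: Callable[[], str]) -> str:
    """Gate `run` behind a synchronous policy check, as in the snippet above."""
    result = check()
    if not result.allowed:
        raise PermissionError(f"Blocked: {result.reason}")
    return run()


# Simulated check that denies the run
denied = lambda: CheckResult(allowed=False, reason="quota exceeded")
try:
    run_if_allowed(denied, lambda: "ran")
except PermissionError as e:
    print(e)  # Blocked: quota exceeded
```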
Approval workflow with human-in-the-loop
Handle policy blocks with interactive approval instead of failing. The approval prompt, response, and timing are auto-captured in the trace.
```python
import asyncio

import waxell_observe as waxell
from waxell_observe.errors import PolicyViolationError

waxell.init()


@waxell.tool(tool_type="database")
def delete_records(table: str) -> dict:
    """Simulated deletion — triggers approval policy."""
    return {"table": table, "deleted": 500, "status": "completed"}


@waxell.observe(
    agent_name="data-manager",
    workflow_name="delete",
    enforce_policy=True,
    on_policy_block=waxell.prompt_approval,  # terminal Y/N prompt
)
async def guarded_delete(table: str):
    result = delete_records(table=table)
    waxell.tag("outcome", "success")
    return result


async def main():
    # waxell.input() captures the user's command in the trace
    command = waxell.input("> ")
    try:
        result = await guarded_delete(table="users")
        print(f"Done: {result}")
    except PolicyViolationError as e:
        print(f"Blocked or denied: {e}")


asyncio.run(main())
```
For testing, swap `prompt_approval` for `auto_approve` or `auto_deny`:
```python
on_policy_block=waxell.auto_approve  # always approve (tests)
on_policy_block=waxell.auto_deny     # always deny (tests)
```
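Conceptually, these handlers are interchangeable callables that decide whether a blocked run proceeds. The shapes below are illustrative stand-ins only, not the SDK's actual implementations or signatures:

```python
# Hypothetical approval-handler shape, for illustration: a callable that
# receives context about the blocked run and returns True (approve) or
# False (deny). The real SDK handlers may have a different signature.
def auto_approve(context: dict) -> bool:
    return True


def auto_deny(context: dict) -> bool:
    return False


def prompt_approval(context: dict) -> bool:
    answer = input(f"Approve {context.get('tool', 'action')}? [y/N] ")
    return answer.strip().lower() == "y"


def dispatch(handler, context: dict) -> str:
    return "approved" if handler(context) else "denied"


print(dispatch(auto_approve, {"tool": "delete_records"}))  # approved
print(dispatch(auto_deny, {"tool": "delete_records"}))     # denied
```

Because the handler is just a parameter, tests can substitute a deterministic decision without any terminal interaction.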
What this demonstrates
- `@waxell.observe(enforce_policy=True)` -- pre-execution policy checks that block the run before any LLM calls.
- `mid_execution_governance=True` -- enables runtime policy evaluation between steps via `waxell_ctx.check_policy()`.
- `waxell.step()` -- record execution milestones using the top-level convenience function.
- `waxell.tag()` -- attach policy outcomes and warnings as searchable tags.
- `waxell.score()` -- record quality metrics alongside governance data.
- `check_policy()` result handling -- branching on `blocked`, `action`, and `reason`.
- `PolicyViolationError` -- structured exception with `policy_result.action` and `policy_result.reason`.
- Sync policy checks -- `check_policy_sync()` for non-async code paths.
- `on_policy_block=waxell.prompt_approval` -- interactive approval handler that prompts in the terminal.
- `waxell.input()` -- drop-in `input()` replacement that captures human interactions in the trace.
- `waxell.auto_approve` / `waxell.auto_deny` -- test helpers for approval flows.
Run it
```shell
export OPENAI_API_KEY="sk-..."
export WAXELL_API_KEY="your-waxell-api-key"
export WAXELL_API_URL="https://api.waxell.ai"
python governance.py
```