Phase 2: Add Signals
You are here if: you have completed Phase 1 (your agents are observed) and you want centralized control over when and how your agents are triggered.
What you will have after this phase: webhook-driven agent execution with retry logic, deduplication, audit trails, and a single control point for all your agents.
What Are Signals?
A signal is a typed HTTP webhook that triggers an agent run through the Waxell control plane. Instead of calling your agent function directly, you POST a signal payload to Waxell, and the control plane routes it to the right agent with full governance applied.
Your App ──POST──> Waxell Control Plane ──triggers──> Your Agent
                             │
                             ├── Policy check
                             ├── Rate limiting
                             ├── Audit logging
                             └── Retry on failure
Signals decouple your application code from your agent execution. Your app does not need to know where the agent runs, how it is configured, or what policies apply -- it just emits a signal.
How Signals Work
1. Define a Signal in Your Agent
If you are already using Waxell Observe, your agents are tracked but triggered directly. To add signal-driven execution, define the signal your agent listens for:
# In your Waxell control plane configuration, register a signal:
# Signal name: "support_ticket"
# Payload schema: { customer_id: string, message: string }
# Routes to: support-classifier agent
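The schema registration above is declarative; the control plane rejects payloads that do not match it. To make the contract concrete, here is a minimal client-side sketch of the equivalent check for the support_ticket schema. The validation helper is illustrative only, not part of the Waxell SDK (server-side validation is what actually enforces the schema):

```python
# Illustrative stand-in for the server-side schema check on "support_ticket".
SUPPORT_TICKET_SCHEMA = {"customer_id": str, "message": str}

def validate_payload(payload: dict, schema: dict) -> list[str]:
    """Return a list of schema violations (an empty list means valid)."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field} must be {expected_type.__name__}")
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

print(validate_payload(
    {"customer_id": "C-1234", "message": "I can't log in"},
    SUPPORT_TICKET_SCHEMA,
))  # → []
```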
2. Emit Signals from Your Application
Replace direct agent calls with signal emissions using the Waxell SDK client:
Before (direct call):
# Your application calls the agent directly
result = await handle_support_ticket(customer_id="C-1234", message="I can't log in")
After (signal-driven):
from waxell_sdk import WaxellClient
client = WaxellClient()
result = await client.emit_signal(
    signal_name="support_ticket",
    payload={
        "customer_id": "C-1234",
        "message": "I can't log in",
    },
    idempotency_key="ticket-5678",  # Prevents duplicate processing
)
if result.success:
    print(f"Signal emitted: {result.signal_id}")
else:
    print(f"Error: {result.error}")
3. Configure the Client
The WaxellClient reads configuration from multiple sources (highest priority first):
- Explicit constructor arguments
- Global configuration via WaxellClient.configure()
- CLI config file (~/.waxell/config)
- Environment variables (WAX_API_URL, WAX_API_KEY)
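The precedence above amounts to a first-configured-value lookup across the four sources. The resolver below is a simplified illustration of that ordering, not the SDK's actual implementation:

```python
def resolve_setting(explicit, global_cfg, config_file, env_var):
    """Return the first configured value, checking sources in priority order."""
    for value in (explicit, global_cfg, config_file, env_var):
        if value is not None:
            return value
    return None

# An explicit constructor argument wins over everything else:
print(resolve_setting("https://a.example", None, "https://c.example", "https://d.example"))  # → https://a.example

# With no explicit argument, the global configuration is consulted next:
print(resolve_setting(None, "https://b.example", "https://c.example", "https://d.example"))  # → https://b.example
```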
# Option A: Environment variables (recommended for production)
# export WAX_API_URL="https://acme.waxell.dev"
# export WAX_API_KEY="wax_sk_..."
client = WaxellClient()
# Option B: Global configuration at app startup
WaxellClient.configure(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)
client = WaxellClient()
# Option C: Per-instance (overrides everything)
client = WaxellClient(
    api_url="https://acme.waxell.dev",
    api_key="wax_sk_...",
)
For synchronous contexts (e.g., Django views without async), use the sync variant:
result = client.emit_signal_sync(
    signal_name="support_ticket",
    payload={"customer_id": "C-1234", "message": "I can't log in"},
)
What You Gain
Centralized Orchestration
All agent triggering flows through the Waxell control plane. You can see every signal that was emitted, which agent it triggered, and what happened.
Retry Logic
If the agent fails, the control plane can retry the execution based on your configured retry policy. Your application does not need to implement retry logic.
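Because the control plane owns the retry schedule, none of the following is needed in your application code. Purely as an illustration, the delays a typical exponential-backoff retry policy would produce can be sketched as (the parameter names here are assumptions, not Waxell configuration keys):

```python
def backoff_schedule(max_attempts: int, base_delay: float = 1.0,
                     factor: float = 2.0, cap: float = 60.0) -> list[float]:
    """Delay in seconds before each retry: 1s, 2s, 4s, ... capped at `cap`."""
    return [min(base_delay * factor ** attempt, cap)
            for attempt in range(max_attempts)]

print(backoff_schedule(5))  # → [1.0, 2.0, 4.0, 8.0, 16.0]
```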
Deduplication
The idempotency_key parameter prevents duplicate processing. If the same signal is emitted twice (e.g., due to a network retry on the caller side), only one agent execution occurs.
Audit Trail
Every signal emission is logged with:
- Who emitted it (API key / tenant)
- When it was emitted
- The full payload
- Which agent run it triggered
- The execution result
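Concretely, an audit entry for the emission shown earlier might look like the record below. All field names and ID formats here are illustrative assumptions; consult the control plane's audit API for the actual shape.

```python
# Hypothetical audit record shape (field names are assumptions, not the real API).
audit_record = {
    "signal_name": "support_ticket",
    "emitted_by": "tenant: acme",                # API key / tenant
    "emitted_at": "2025-01-15T12:00:00Z",        # when it was emitted
    "payload": {"customer_id": "C-1234", "message": "I can't log in"},
    "agent_run_id": "run-...",                   # which agent run it triggered
    "result": "success",                         # the execution result
}

print(sorted(audit_record))
```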
Scheduling
Signals can be scheduled for future execution. Emit a signal with a delay or cron expression to trigger agents on a schedule, all managed through the control plane.
Integration Pattern: Existing Observed Agents
If your agents are already wrapped with @waxell_agent or WaxellContext (from Phase 1), signals integrate cleanly. The control plane calls your observed agent, and both the signal audit trail and the observe telemetry are linked.
from waxell_observe import waxell_agent
# This agent is both observed AND signal-triggered
@waxell_agent(agent_name="support-classifier")
async def handle_support_ticket(
    customer_id: str,
    message: str,
    waxell_ctx=None,
) -> str:
    result = await classify_and_respond(customer_id, message)
    if waxell_ctx:
        waxell_ctx.record_llm_call(
            model="gpt-4o",
            tokens_in=200,
            tokens_out=150,
            task="classify_ticket",
        )
    return result
The signal payload fields (customer_id, message) are passed as arguments to your agent function. The waxell_ctx is injected by the decorator as usual.
What You Do Not Get (Yet)
Phase 2 adds centralized triggering, but your agents still run as standard Python functions. The following capabilities require further migration:
- Durable workflows: Checkpoint/resume requires native Waxell workflows (Phase 4)
- Approval workflows: Pause/resume with human-in-the-loop requires native Waxell (Phase 4)
- Domain abstraction: Governed external system calls require native Waxell domains (Phase 4)
- LLM routing: Automatic model selection and fallbacks require native Waxell (Phase 4)
Next Steps
- Phase 3: Agent Builder -- AI-assisted migration to native Waxell
- Phase 4: Go Fully Native -- Manual migration guide
- Progressive Migration Overview -- See all migration phases