# Conversation Tracking
Waxell automatically tracks conversation flow in interactive agents — chatbots, REPLs, copilots, and any agent that exchanges messages with users through an LLM.
## What's Auto-Captured
When you use any auto-instrumented provider, waxell extracts conversation data from every LLM call:
| Data | Source | How |
|---|---|---|
| User messages | messages array (role=user) | Last user message, deduplicated |
| Agent responses | LLM response content | Only final responses (finish_reason=stop), not tool calls |
| Message count | messages array length | Total messages in context window |
| Turn count | role=user count in messages | Number of user turns |
| Context utilization | prompt_tokens / model limit | Percentage of context window used |
| System prompt hash | role=system or system param | Detects system prompt changes |
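As a rough illustration of what this extraction involves (a sketch only — the function and field names below are assumptions for this example, not waxell internals), most of these metrics can be derived directly from a standard chat `messages` array:

```python
import hashlib

def extract_conversation_metrics(messages):
    """Illustrative sketch: derive conversation metrics from a chat messages array.

    Not waxell's actual implementation — just the kind of bookkeeping the
    table above describes.
    """
    user_messages = [m["content"] for m in messages if m["role"] == "user"]
    system_prompts = [m["content"] for m in messages if m["role"] == "system"]
    return {
        # Total messages currently in the context window
        "message_count": len(messages),
        # One user turn per role=user message
        "user_turns": len(user_messages),
        # The most recent user message is what gets recorded (deduplicated)
        "last_user_message": user_messages[-1] if user_messages else None,
        # A short hash makes system prompt changes detectable without storing the prompt
        "system_prompt_hash": (
            hashlib.sha256(system_prompts[0].encode()).hexdigest()[:12]
            if system_prompts
            else None
        ),
    }
```

The hashing scheme shown (truncated SHA-256) is an assumption; the point is only that a stable digest lets successive calls be compared for system prompt drift.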
## Supported Providers
Auto-capture works with every auto-instrumented provider, including OpenAI, Anthropic, Groq, Mistral, Gemini, Cohere, Bedrock, Ollama, Together, Azure AI, and AI21.
## Context Window Monitoring
Each LLM call span includes context state attributes:
- `waxell.context.message_count` — messages in the context window
- `waxell.context.user_turns` — user turn count
- `waxell.context.tokens_used` — tokens consumed in context
- `waxell.context.utilization_pct` — context utilization percentage
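To make the utilization attribute concrete, here is how a percentage like `waxell.context.utilization_pct` can be computed from prompt tokens and the model's context limit. This is a sketch: the limit table below uses example model names and values, not waxell's actual lookup.

```python
# Example context limits — illustrative values, not waxell's internal table.
MODEL_CONTEXT_LIMITS = {
    "example-large-model": 128_000,
    "example-xl-model": 200_000,
}

def utilization_pct(prompt_tokens: int, model: str) -> float:
    """Percentage of the model's context window consumed by the prompt."""
    limit = MODEL_CONTEXT_LIMITS[model]
    return round(100 * prompt_tokens / limit, 1)
```

A prompt of 4,200 tokens against a 128k-token window works out to about 3.3% utilization, matching the sample data later in this page.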
Access these programmatically via WaxellContext properties:
```python
async with WaxellContext(agent_name="my-agent") as ctx:
    # After LLM calls, these are auto-populated:
    print(ctx.conversation_turns)   # e.g. 5
    print(ctx.context_utilization)  # e.g. 42.3
    print(ctx.message_count)        # e.g. 23
```
## Manual Recording
For custom LLM clients or non-instrumented providers:
```python
import waxell

# Record user input
waxell.user_message("Clean up inactive users")

# Record agent output
waxell.agent_response("I've cleaned up 14,872 inactive users.")
```
Or on the context directly:
```python
ctx.record_user_message("What's the weather?")
ctx.record_agent_response("It's sunny in Paris today.")
```
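Conceptually, manual recording just appends turns to the run's conversation state and keeps the counters in sync. A minimal stand-in (hypothetical — not waxell's implementation, just an illustration of the bookkeeping) might look like:

```python
class ConversationRecorder:
    """Hypothetical stand-in for waxell's manual recording, for illustration only."""

    def __init__(self):
        self.events = []  # ordered (role, text) pairs

    def record_user_message(self, text: str) -> None:
        self.events.append(("user", text))

    def record_agent_response(self, text: str) -> None:
        self.events.append(("agent", text))

    @property
    def user_turns(self) -> int:
        return sum(1 for role, _ in self.events if role == "user")
```

Because the manual API feeds the same conversation state as auto-capture, metrics like the user turn count stay consistent whichever path recorded the message.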
## Viewing Conversation Data
### Context Tab
The execution detail page includes a Context tab with:
- Conversation metrics — user turns, message count, context utilization
- Conversation timeline — visual thread of user messages, LLM calls, tool calls, and agent responses
- Context window gauge — utilization growth across LLM calls
### Raw Data
Conversation state is stored in the run's `context.conversation` field:
```json
{
  "user_turns": 5,
  "message_count": 23,
  "assistant_turns": 5,
  "tool_results": 8,
  "tokens_in_context": 4200,
  "context_utilization_pct": 3.3,
  "system_prompt_hash": "a1b2c3d4e5f6"
}
```
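A short sketch of consuming this field, assuming you have already fetched the run as JSON (the retrieval mechanism itself is not shown here):

```python
import json

# Sample payload matching the shape of context.conversation above.
raw = """{
  "user_turns": 5,
  "message_count": 23,
  "assistant_turns": 5,
  "tool_results": 8,
  "tokens_in_context": 4200,
  "context_utilization_pct": 3.3,
  "system_prompt_hash": "a1b2c3d4e5f6"
}"""

conversation = json.loads(raw)

# Derived view: messages that are neither user nor assistant turns
# (tool results, system prompt, and similar).
other_messages = conversation["message_count"] - (
    conversation["user_turns"] + conversation["assistant_turns"]
)
```

Keeping the derived counts separate from the stored ones makes it easy to spot runs dominated by tool traffic rather than dialogue.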
## Governance
Use the context management policy to set limits on conversation length, context window usage, and session duration.