User Tracking

User tracking lets you attribute agent runs, LLM costs, and token usage to individual end users of your application. This enables per-user cost analysis, abuse detection, and usage pattern insights.

What Gets Tracked

When runs include a user_id, Waxell Observe aggregates per-user metrics:

Metric           Description
run_count        Total runs by this user
first_seen       Timestamp of the user's first run
last_seen        Timestamp of the user's most recent run
total_duration   Combined execution time across all user runs (seconds)
total_cost       Sum of LLM costs for this user (USD)
total_tokens     Sum of tokens consumed by this user
agents           List of distinct agents this user has interacted with
cost_by_model    Cost and token breakdown per LLM model

Setting a User ID

Context Manager

Pass user_id when creating a WaxellContext:

from waxell_observe import WaxellContext

async with WaxellContext(
    agent_name="support-agent",
    user_id="user-456",
) as ctx:
    response = await handle_support_request(ticket)
    ctx.record_llm_call(
        model="gpt-4o",
        tokens_in=response.usage.prompt_tokens,
        tokens_out=response.usage.completion_tokens,
    )
    ctx.set_result({"output": response.text})

Decorator

Pass user_id directly to the @observe decorator:

import waxell_observe as waxell

waxell.init()

@waxell.observe(agent_name="support-agent", user_id="user-456")
async def handle_ticket(ticket_id: str) -> str:
    response = await process_ticket(ticket_id)
    return response

For dynamic user IDs, use the context manager pattern instead (see below).

Combined Session and User Tracking

In most applications, you set both session_id and user_id to get the full picture:

from waxell_observe import WaxellContext

async with WaxellContext(
    agent_name="chat-agent",
    session_id=f"conv-{conversation.id}",
    user_id=f"user-{request.user.id}",
) as ctx:
    response = await generate_reply(message)
    ctx.record_llm_call(
        model="gpt-4o",
        tokens_in=response.usage.prompt_tokens,
        tokens_out=response.usage.completion_tokens,
    )
    ctx.set_result({"output": response.text})
Warning

Privacy best practice: Use opaque internal identifiers (database IDs, UUIDs) as user_id values. Do not pass email addresses, names, or other personally identifiable information. The user_id field is stored in plain text and is visible in the Waxell UI and API responses.
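If only an email or username is available at the call site, one option (a sketch, not part of Waxell Observe itself) is to derive a stable, opaque identifier with a keyed hash before passing it as user_id. The SECRET value and the user- prefix below are illustrative:

```python
import hashlib
import hmac

# Illustrative secret: load from your secret store in practice. The key
# keeps the raw-identifier -> user_id mapping non-reversible for anyone
# without it.
SECRET = b"load-me-from-a-secret-store"

def opaque_user_id(raw_identifier: str) -> str:
    """Derive a stable, opaque user_id from a raw identifier (e.g. an email)."""
    digest = hmac.new(SECRET, raw_identifier.encode("utf-8"), hashlib.sha256)
    return f"user-{digest.hexdigest()[:16]}"

# The same input always yields the same opaque ID:
assert opaque_user_id("alice@example.com") == opaque_user_id("alice@example.com")
```

When a database primary key or UUID already exists for the user, prefer passing that directly; the keyed hash is only useful when no internal identifier is at hand.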

REST API

List Users

GET /api/v1/observability/users/

Authentication: Session (UI)

Query Parameters:

Parameter   Type      Default      Description
search      string    (none)       Filter by user_id (substring match)
agent       string    (none)       Filter by agent name
start       ISO8601   (none)       Only users with runs after this time
end         ISO8601   (none)       Only users with runs before this time
sort        string    -last_seen   Sort field. Options: last_seen, -last_seen, first_seen, -first_seen, run_count, -run_count
limit       int       25           Page size (max 100)
offset      int       0            Pagination offset

Example:

curl -s "https://acme.waxell.dev/api/v1/observability/users/?sort=-run_count&limit=10" \
  -H "Cookie: sessionid=..."

Response:

{
  "results": [
    {
      "user_id": "user-456",
      "run_count": 87,
      "first_seen": "2026-01-15T08:30:00Z",
      "last_seen": "2026-02-07T14:22:00Z",
      "total_duration": 245.8,
      "total_cost": 1.234567,
      "total_tokens": 189420,
      "agents": ["chat-agent", "search-agent", "code-agent"]
    }
  ],
  "count": 156,
  "next": "?offset=10&limit=10",
  "previous": null
}
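The count field makes it straightforward to walk the full user list page by page. A minimal paging sketch, with the HTTP call abstracted behind a fetch_page callable so it works with whichever client and auth setup you use:

```python
from typing import Callable, Iterator

def iter_users(fetch_page: Callable[[int, int], dict], limit: int = 25) -> Iterator[dict]:
    """Yield every user record from the users list endpoint, one page at a time.

    `fetch_page(offset, limit)` is assumed to return the decoded JSON
    response shown above: {"results": [...], "count": N, ...}.
    """
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        yield from page["results"]
        offset += limit
        if offset >= page["count"]:
            break
```

In practice fetch_page would wrap something like a requests.get call against /api/v1/observability/users/ with the offset and limit query parameters and the session cookie from the examples above.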

Get User Detail

GET /api/v1/observability/users/{user_id}/

Authentication: Session (UI)

Returns detailed metrics for a specific user, including a per-model cost breakdown and recent runs.

Example:

curl -s "https://acme.waxell.dev/api/v1/observability/users/user-456/" \
  -H "Cookie: sessionid=..."

Response:

{
  "user_id": "user-456",
  "aggregates": {
    "run_count": 87,
    "total_duration": 245.8,
    "total_cost": 1.234567,
    "total_tokens": 189420,
    "agents": ["chat-agent", "search-agent"],
    "first_seen": "2026-01-15T08:30:00Z",
    "last_seen": "2026-02-07T14:22:00Z"
  },
  "cost_by_model": [
    {
      "model": "gpt-4o",
      "total_cost": 0.987654,
      "total_tokens": 142000,
      "call_count": 64
    },
    {
      "model": "gpt-4o-mini",
      "total_cost": 0.246913,
      "total_tokens": 47420,
      "call_count": 23
    }
  ],
  "runs": [
    {
      "id": 1042,
      "agent_name": "chat-agent",
      "workflow_name": "default",
      "started_at": "2026-02-07T14:22:00Z",
      "completed_at": "2026-02-07T14:22:03Z",
      "duration": 2.8,
      "status": "success",
      "cost": 0.0089,
      "tokens": 1250
    }
  ]
}

UI Walkthrough

Users List

The users list view shows all tracked users with sortable columns:

  • User ID -- click to open user detail
  • Runs -- total number of agent executions
  • First Seen / Last Seen -- user activity time range
  • Duration -- total execution time
  • Cost -- total LLM spend
  • Tokens -- total token usage
  • Agents -- which agents this user interacted with

User Detail

The user detail page shows:

  1. Summary cards with run count, total cost, total tokens, and active time range
  2. Cost by model breakdown -- a table showing which models drove the user's costs
  3. Recent runs -- the last 50 runs for this user with per-run cost, tokens, duration, and status

Use Cases

Cost Attribution

Identify your highest-cost users to understand whether spend is proportional to value:

curl -s "https://acme.waxell.dev/api/v1/observability/users/?sort=-total_cost&limit=5" \
  -H "Cookie: sessionid=..."

Abuse Detection

Flag users with unusually high run counts or token consumption. The cost_by_model breakdown on the detail endpoint reveals whether a user is making disproportionately expensive model calls.
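As a sketch, a simple screen over the aggregates returned by the list endpoint might look like the following; the thresholds are illustrative, not Waxell defaults, and should be tuned against your own baseline:

```python
def flag_heavy_users(users: list[dict],
                     max_runs: int = 500,
                     max_tokens_per_run: float = 10_000) -> list[str]:
    """Return user_ids whose aggregate usage looks anomalous.

    `users` is the `results` list from GET /api/v1/observability/users/.
    Flags a user when their total run count or their average tokens per
    run exceeds the (illustrative) thresholds.
    """
    flagged = []
    for u in users:
        runs = u["run_count"]
        avg_tokens = u["total_tokens"] / runs if runs else 0
        if runs > max_runs or avg_tokens > max_tokens_per_run:
            flagged.append(u["user_id"])
    return flagged
```

Flagged users can then be inspected individually via the detail endpoint to see which models are driving their usage.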

Usage Patterns

Track first_seen and last_seen to understand user retention and engagement patterns. The agents list shows which product features each user exercises.
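For example, a user's active span can be computed directly from those two timestamps (a minimal sketch over the fields returned by the list endpoint):

```python
from datetime import datetime

def active_days(user: dict) -> float:
    """Days between a user's first and last run, from the API's
    ISO 8601 first_seen/last_seen fields."""
    def parse(ts: str) -> datetime:
        # fromisoformat() on Pythons before 3.11 does not accept a trailing "Z"
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))
    span = parse(user["last_seen"]) - parse(user["first_seen"])
    return span.total_seconds() / 86_400
```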

Next Steps

  • Sessions -- Group runs by conversation for multi-turn analysis
  • Scoring -- Capture user satisfaction alongside usage data
  • Cost Management -- Set budget limits and alerts based on user spend