# Mem0 Memory Agent

A multi-agent memory pipeline that exercises Mem0 persistent memory operations through two child agents -- a `mem0-storer` (adds 3 user memories via `@tool`) and a `mem0-retriever` (searches, retrieves, assesses relevance with `@reasoning`, decides a synthesis strategy with `@decision`, and synthesizes a personalized answer with an auto-instrumented OpenAI call). Produces `memory_coverage` and `personalization` scores.
## Environment variables

This example runs in dry-run mode by default (no API key needed). For live mode, set `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`.
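The dry-run default can be reproduced with a small toggle. A minimal sketch -- the `is_dry_run` helper and its exact flag handling are assumptions for illustration, not the demo's actual code:

```python
import os
import sys

# Hypothetical toggle (an assumption, not the demo's actual code):
# run live only when an OpenAI key is present and --dry-run was not passed.
def is_dry_run(argv: list, env: dict) -> bool:
    return "--dry-run" in argv or not env.get("OPENAI_API_KEY")

print(is_dry_run(["--dry-run"], {"OPENAI_API_KEY": "sk-test"}))  # True: flag wins
print(is_dry_run([], {}))                                        # True: no key set
print(is_dry_run([], {"OPENAI_API_KEY": "sk-test"}))             # False: live mode
```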
## Architecture
## Key Code
### Mem0 memory tool operations

Each Mem0 operation is wrapped with `@waxell.tool(tool_type="memory")` for automatic trace attribution.
```python
@waxell.tool(tool_type="memory")
def mem0_add(data: str, user_id: str, memory_id: str) -> dict:
    return {"id": memory_id, "data": data, "user_id": user_id, "status": "stored"}


@waxell.tool(tool_type="memory")
def mem0_search(query: str, user_id: str) -> dict:
    return {
        "results": 3,
        "top_score": 0.95,
        "memories": [
            {"id": "mem_001", "text": "Likes pizza", "score": 0.95},
            {"id": "mem_002", "text": "Prefers thin crust", "score": 0.88},
            {"id": "mem_003", "text": "Dislikes anchovies", "score": 0.72},
        ],
    }


@waxell.tool(tool_type="memory")
def mem0_get_all(user_id: str) -> dict:
    return {"count": 3, "memories": [...]}
```
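In dry-run mode these tools return canned payloads, so calling them directly shows the shapes the retriever consumes. A runnable sketch -- the no-op `waxell` stand-in is an assumption so the example works without the SDK:

```python
# Hypothetical no-op stand-in for the waxell SDK (an assumption, so this runs standalone).
class _Waxell:
    def tool(self, tool_type):
        def wrap(fn):
            return fn
        return wrap

waxell = _Waxell()

@waxell.tool(tool_type="memory")
def mem0_search(query: str, user_id: str) -> dict:
    # Same canned dry-run payload as the demo's stub above.
    return {
        "results": 3,
        "top_score": 0.95,
        "memories": [
            {"id": "mem_001", "text": "Likes pizza", "score": 0.95},
            {"id": "mem_002", "text": "Prefers thin crust", "score": 0.88},
            {"id": "mem_003", "text": "Dislikes anchovies", "score": 0.72},
        ],
    }

hits = mem0_search("what should I order?", user_id="user_123")
print([m["text"] for m in hits["memories"]])
# ['Likes pizza', 'Prefers thin crust', 'Dislikes anchovies']
```

In live mode these stubs would presumably delegate to a real Mem0 client instead of returning fixed payloads.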
### Relevance reasoning and synthesis decision

The retriever assesses memory relevance, decides the synthesis strategy, then calls OpenAI.
```python
@waxell.reasoning_dec(step="memory_relevance_assessment")
async def assess_memory_relevance(memories: list, query: str) -> dict:
    relevant = [m for m in memories if m.get("score", 0) > 0.7]
    return {
        "thought": f"Retrieved {len(memories)} memories. {len(relevant)} have score > 0.7.",
        "evidence": [f"{m['text']} (score: {m['score']})" for m in memories],
        "conclusion": "All 3 memories are relevant and form a coherent preference profile",
    }


@waxell.decision(name="synthesis_strategy", options=["direct", "contextual", "conversational"])
async def decide_synthesis_strategy(memories: list) -> dict:
    return {
        "chosen": "contextual",
        "reasoning": f"Found {len(memories)} relevant memories -- contextual provides best personalization",
        "confidence": 0.90,
    }
```
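Chained together, the retriever's flow is roughly search, assess, decide, then synthesize. A runnable sketch of the assess/decide chain with the decorators dropped so it runs without the SDK -- the flow and thresholds come from this demo, but the undecorated wrapper functions are assumptions:

```python
import asyncio

async def assess(memories: list, query: str) -> dict:
    # Same relevance rule as assess_memory_relevance: keep scores above 0.7.
    relevant = [m for m in memories if m.get("score", 0) > 0.7]
    return {"relevant": relevant, "count": len(relevant)}

async def decide(memories: list) -> dict:
    # Same fixed dry-run choice as decide_synthesis_strategy.
    return {"chosen": "contextual", "confidence": 0.90}

async def main() -> dict:
    memories = [
        {"text": "Likes pizza", "score": 0.95},
        {"text": "Prefers thin crust", "score": 0.88},
        {"text": "Dislikes anchovies", "score": 0.72},
    ]
    assessment = await assess(memories, "what should I order?")
    strategy = await decide(assessment["relevant"])
    return {"relevant": assessment["count"], "strategy": strategy["chosen"]}

print(asyncio.run(main()))  # {'relevant': 3, 'strategy': 'contextual'}
```

In the demo itself the final step would pass the chosen strategy and the relevant memories into the auto-instrumented OpenAI call.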
## What this demonstrates
- `@waxell.tool(tool_type="memory")` -- 5 Mem0 operations (3 adds, 1 search, 1 get_all).
- `@waxell.retrieval(source="mem0")` -- ranked memory retrieval.
- `@waxell.reasoning_dec` -- memory relevance assessment with thought/evidence/conclusion.
- `@waxell.decision` -- synthesis strategy selection (direct/contextual/conversational).
- `@waxell.step_dec` -- query preprocessing step.
- `waxell.score()` -- memory_coverage and personalization scores.
- `waxell.tag()` / `waxell.metadata()` -- per-agent role tags and memory counts.
- Auto-instrumented LLM calls -- OpenAI synthesis with memory context.
- Nested `@waxell.observe` -- orchestrator + 2 child agents (mem0-storer, mem0-retriever).
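The two scores reported via `waxell.score()` could plausibly be derived from the retrieval results. A minimal sketch -- the exact formulas below (coverage as the above-threshold fraction, personalization as the mean relevance score) are assumptions for illustration, not the demo's actual metrics:

```python
def memory_scores(memories: list, threshold: float = 0.7) -> dict:
    # Hypothetical formulas (assumptions): coverage = fraction of memories above
    # the relevance threshold; personalization = mean relevance score.
    if not memories:
        return {"memory_coverage": 0.0, "personalization": 0.0}
    relevant = [m for m in memories if m["score"] > threshold]
    coverage = len(relevant) / len(memories)
    personalization = sum(m["score"] for m in memories) / len(memories)
    return {"memory_coverage": coverage, "personalization": round(personalization, 2)}

memories = [
    {"text": "Likes pizza", "score": 0.95},
    {"text": "Prefers thin crust", "score": 0.88},
    {"text": "Dislikes anchovies", "score": 0.72},
]
print(memory_scores(memories))  # {'memory_coverage': 1.0, 'personalization': 0.85}
```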
## Run it

```bash
# Dry-run (no API key needed)
python -m app.demos.mem0_agent --dry-run

# Live mode with OpenAI
OPENAI_API_KEY=sk-... python -m app.demos.mem0_agent
```