OpenAI Embeddings
A multi-agent OpenAI embeddings pipeline in which a parent orchestrator coordinates two child agents -- a generator and an analyzer. The generator produces single and batch embeddings via `text-embedding-3-small`, while the analyzer computes similarity rankings, assesses embedding quality, and synthesizes insights with an auto-instrumented LLM call.
Environment variables
This example requires `OPENAI_API_KEY`, `WAXELL_API_KEY`, and `WAXELL_API_URL`. Use `--dry-run` to run without any API keys.
Architecture
Key Code
Generator with @tool(embedding) for single and batch embeddings
Each embedding call is recorded as a tool call with tool_type="embedding".
```python
@waxell.tool(tool_type="embedding")
async def generate_single_embedding(client, text: str, model: str = "text-embedding-3-small"):
    """Generate a single text embedding via OpenAI."""
    response = await client.embeddings.create(model=model, input=text)
    embedding = response.data[0].embedding
    tokens_used = response.usage.total_tokens if response.usage else 0
    return {"dimensions": len(embedding), "tokens": tokens_used, "model": model}
```
```python
@waxell.tool(tool_type="embedding")
async def generate_batch_embeddings(client, texts: list[str], model: str = "text-embedding-3-small"):
    """Generate embeddings for a batch of texts via OpenAI."""
    response = await client.embeddings.create(model=model, input=texts)
    return {"count": len(texts), "dimensions": len(response.data[0].embedding),
            "tokens": response.usage.total_tokens, "model": model}
```
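With `--dry-run`, no real embeddings are produced, so the calls need a deterministic stand-in. A minimal sketch of one way to stub them (the hashing scheme here is illustrative, not the demo's actual dry-run implementation):

```python
import hashlib
import math

def fake_embedding(text: str, dimensions: int = 1536) -> list[float]:
    """Illustrative dry-run stub: derive a deterministic unit vector from the text.

    Not the demo's actual stub -- just one reasonable scheme: repeat the 32
    bytes of the SHA-256 digest to fill the requested dimensionality, center
    each byte around zero, then normalize to unit length.
    """
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    raw = [digest[i % len(digest)] / 255.0 - 0.5 for i in range(dimensions)]
    norm = math.sqrt(sum(x * x for x in raw)) or 1.0
    return [x / norm for x in raw]
```

The same input always yields the same vector, so similarity rankings stay reproducible across dry runs.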
Analyzer with @retrieval, @reasoning_dec, and LLM synthesis
The analyzer computes similarity, assesses quality, and synthesizes insights.
```python
@waxell.retrieval(source="openai_embeddings")
def find_similar_texts(query: str, texts: list[str], top_k: int = 3) -> list[dict]:
    """Rank texts by simulated cosine similarity to the query embedding."""
    return [{"rank": i + 1, "text": text, "score": round(0.95 - i * 0.08, 4)}
            for i, text in enumerate(texts[:top_k])]
```
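`find_similar_texts` simulates its scores; against real vectors from the generator, the ranking would be driven by cosine similarity, which can be sketched as:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Identical directions score 1.0, orthogonal vectors 0.0; ranking by this value descending reproduces the `rank`/`score` shape above with real numbers.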
```python
@waxell.reasoning_dec(step="embedding_quality_assessment")
async def analyze_embedding_quality(single_result, batch_result, similarity_results) -> dict:
    total_embeddings = 1 + batch_result.get("count", 0)
    top_score = similarity_results[0]["score"] if similarity_results else 0
    return {
        "thought": f"Generated {total_embeddings} embeddings. Top similarity: {top_score}.",
        "evidence": [f"Single: {single_result.get('dimensions', 0)}d",
                     f"Batch: {batch_result.get('count', 0)} texts"],
        "conclusion": "Dimensionally consistent with good semantic separation"
                      if top_score > 0.8 else "Consider model upgrade",
    }
```
What this demonstrates
- `@waxell.observe` -- parent-child agent hierarchy with automatic lineage
- `@waxell.step_dec` -- text preprocessing recorded as execution step
- `@waxell.tool` -- embedding generation with `tool_type="embedding"`
- `@waxell.retrieval` -- similarity search with `source="openai_embeddings"`
- `@waxell.decision` -- embedding strategy selection (single_pass, batch, chunked)
- `@waxell.reasoning_dec` -- embedding quality assessment chain-of-thought
- `waxell.score()` -- token efficiency and similarity quality scores
- Auto-instrumented LLM calls -- OpenAI similarity analysis call captured
- Embedding pipeline pattern -- single + batch generation, similarity search, LLM synthesis
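The token efficiency and similarity quality values reported via `waxell.score()` can be computed as plain metrics first; a minimal sketch (these formulas are assumptions for illustration, not the demo's exact scoring):

```python
def token_efficiency(total_tokens: int, embedding_count: int) -> float:
    """Average tokens consumed per embedding (lower is cheaper).

    Illustrative metric -- the demo's actual formula may differ.
    """
    return total_tokens / max(embedding_count, 1)

def similarity_quality(scores: list[float], threshold: float = 0.8) -> float:
    """Fraction of retrieved results whose score clears the threshold.

    The 0.8 threshold mirrors the analyzer's quality cutoff above.
    """
    if not scores:
        return 0.0
    return sum(s >= threshold for s in scores) / len(scores)
```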
Run it
```bash
# Dry-run (no API keys needed)
cd dev/waxell-dev
python -m app.demos.openai_embeddings_agent --dry-run

# Live (real OpenAI)
export OPENAI_API_KEY="sk-..."
python -m app.demos.openai_embeddings_agent
```