# Feature Comparison Matrix

A comprehensive comparison of what you get with each framework and integration approach. The columns progress from standalone frameworks on the left to fully Waxell-native on the right.
## Legend
| Symbol | Meaning |
|---|---|
| Yes | Fully supported out of the box |
| Partial | Partially supported or requires significant custom work |
| No | Not available |
## Core Agent Capabilities
| Feature | LangChain | LangChain + Observe | CrewAI | CrewAI + Observe | Custom Python | Custom + Observe | Waxell Native |
|---|---|---|---|---|---|---|---|
| Declarative agent definitions | No | No | Partial | Partial | No | No | Yes |
| Tool abstraction | Yes | Yes | Yes | Yes | Manual | Manual | Yes |
| Workflow orchestration | Partial | Partial | Yes | Yes | Manual | Manual | Yes |
| Multi-agent coordination | Partial | Partial | Yes | Yes | Manual | Manual | Yes |
| LLM model routing | Manual | Manual | Manual | Manual | Manual | Manual | Yes |
### Notes on partial support
- LangChain declarative: LangChain Expression Language (LCEL) provides composable chains, but agent definitions remain imperative.
- CrewAI declarative: Agent/Task/Crew classes are semi-declarative, but orchestration logic is still code-level.
- LangChain workflow orchestration: LangGraph adds graph-based workflows, but lacks durability and governance.
## Observability
| Feature | LangChain | LangChain + Observe | CrewAI | CrewAI + Observe | Custom Python | Custom + Observe | Waxell Native |
|---|---|---|---|---|---|---|---|
| Execution run tracking | No | Yes | No | Yes | No | Yes | Yes |
| LLM call tracking | No | Yes | No | Yes | No | Yes | Yes |
| Automatic token counting | No | Yes | No | Partial | No | Partial | Yes |
| Step-by-step execution trail | No | Yes | No | Yes | No | Yes | Yes |
| Input/output capture | No | Yes | No | Yes | No | Yes | Yes |
| Dashboard UI | No | Yes | No | Yes | No | Yes | Yes |
### Notes on automatic token counting

- LangChain + Observe: The `WaxellLangChainHandler` callback extracts token usage from LangChain's `LLMResult` automatically.
- CrewAI + Observe / Custom + Observe: Token counts must be passed to `ctx.record_llm_call()` manually or extracted from your LLM client's response.
- Waxell Native: The LLM router tracks all token usage automatically.
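For the manual integrations, recording looks roughly like the sketch below. Only the method name `record_llm_call()` appears on this page; the `ObserveContext` stand-in, its parameter names, and the model name are assumptions made for illustration:

```python
# Illustrative stand-in for the Observe run context. Only the method
# name record_llm_call comes from this page; the class shape and
# parameter names are assumptions for the sketch.
class ObserveContext:
    def __init__(self):
        self.llm_calls = []

    def record_llm_call(self, model, prompt_tokens, completion_tokens):
        # Append one LLM call record to the run's call log.
        self.llm_calls.append({
            "model": model,
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
        })

# With CrewAI + Observe or Custom + Observe, the counts come from your
# LLM client's response object and are recorded by hand:
ctx = ObserveContext()
usage = {"prompt_tokens": 420, "completion_tokens": 96}  # e.g. response.usage
ctx.record_llm_call(
    model="model-a",  # placeholder model name
    prompt_tokens=usage["prompt_tokens"],
    completion_tokens=usage["completion_tokens"],
)
```

With LangChain + Observe this bookkeeping disappears: the `WaxellLangChainHandler` callback performs the equivalent recording for you.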
## Cost Management
| Feature | LangChain | LangChain + Observe | CrewAI | CrewAI + Observe | Custom Python | Custom + Observe | Waxell Native |
|---|---|---|---|---|---|---|---|
| LLM cost estimation | No | Yes | No | Yes | No | Yes | Yes |
| Per-model pricing (20+ models) | No | Yes | No | Yes | No | Yes | Yes |
| Tenant-level cost overrides | No | Yes | No | Yes | No | Yes | Yes |
| Budget enforcement | No | Yes | No | Yes | No | Yes | Yes |
| Cost-per-run visibility | No | Yes | No | Yes | No | Yes | Yes |
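Cost-per-run visibility reduces to multiplying recorded token counts by per-model rates. A minimal sketch of that arithmetic, with placeholder model names and made-up rates (the real per-model pricing table, and any tenant-level overrides, live in the control plane):

```python
# Placeholder rates in USD per 1,000 tokens: (input rate, output rate).
# These numbers are invented for illustration, not real pricing.
PRICE_PER_1K = {
    "model-a": (0.0010, 0.0020),
    "model-b": (0.0005, 0.0015),
}

def estimate_run_cost(llm_calls):
    """Sum the estimated cost of every LLM call recorded for one run."""
    total = 0.0
    for call in llm_calls:
        in_rate, out_rate = PRICE_PER_1K[call["model"]]
        total += call["prompt_tokens"] / 1000 * in_rate
        total += call["completion_tokens"] / 1000 * out_rate
    return total

run = [
    {"model": "model-a", "prompt_tokens": 2000, "completion_tokens": 500},
    {"model": "model-b", "prompt_tokens": 1000, "completion_tokens": 200},
]
cost = estimate_run_cost(run)
```

Budget enforcement then becomes a comparison of this running total against a tenant's limit before each new run is admitted.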
## Governance and Policy
| Feature | LangChain | LangChain + Observe | CrewAI | CrewAI + Observe | Custom Python | Custom + Observe | Waxell Native |
|---|---|---|---|---|---|---|---|
| Pre-execution policy checks | No | Yes | No | Yes | No | Yes | Yes |
| Budget limit policies | No | Yes | No | Yes | No | Yes | Yes |
| Rate limiting / throttling | No | Yes | No | Yes | No | Yes | Yes |
| Content filtering | No | Partial | No | Partial | No | Partial | Yes |
| Approval workflows | No | No | No | No | No | No | Yes |
| Dynamic policy management | No | No | No | No | No | No | Yes |
| Full governance lifecycle | No | No | No | No | No | No | Yes |
### Notes on content filtering

- +Observe: Policy checks can block execution based on agent name, workflow, or budget. Content-level filtering (inspecting prompts/responses) requires custom policy rules in the control plane.
- Waxell Native: The `DynamicPolicyManager` evaluates policies at every governance hook point, including pre-execution, mid-workflow, and post-completion.
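The shape of a pre-execution check can be sketched as below. The rule formats and function names here are assumptions; this page only states that +Observe policies can block on agent name, workflow, or budget:

```python
# Illustrative pre-execution policy check; not Waxell's actual API.
def check_policies(run_request, policies, spend_so_far):
    """Return (allowed, reason) before a run is started."""
    for policy in policies:
        if policy["type"] == "deny_agent" and run_request["agent"] == policy["agent"]:
            return False, f"agent '{run_request['agent']}' is denied"
        if policy["type"] == "budget_limit" and spend_so_far >= policy["limit_usd"]:
            return False, f"budget limit ${policy['limit_usd']} reached"
    return True, "ok"

policies = [
    {"type": "deny_agent", "agent": "experimental-agent"},
    {"type": "budget_limit", "limit_usd": 50.0},
]
allowed, reason = check_policies(
    {"agent": "support-agent"}, policies, spend_so_far=12.5
)
# This run is admitted; pushing spend_so_far past 50.0 would block it.
```

With +Observe, checks of this kind run only before execution; Waxell Native additionally evaluates them at mid-workflow and post-completion hook points.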
## Durability and Infrastructure
| Feature | LangChain | LangChain + Observe | CrewAI | CrewAI + Observe | Custom Python | Custom + Observe | Waxell Native |
|---|---|---|---|---|---|---|---|
| Durable workflows | No | No | No | No | No | No | Yes |
| Checkpoint / resume | No | No | No | No | No | No | Yes |
| Pause / resume (human-in-the-loop) | No | No | No | No | No | No | Yes |
| Multi-tenancy | No | Partial | No | Partial | No | Partial | Yes |
| Signal-driven execution (webhooks) | No | No | No | No | No | No | Yes |
| Production backends (Redis, Celery) | No | No | No | No | No | No | Yes |
| Audit trail with `agent_trace` | No | No | No | No | No | No | Yes |
| Generation layer (RAG, prompt versioning) | No | No | No | No | No | No | Yes |
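Durable workflows and checkpoint/resume are worth unpacking, since no framework in the other columns offers them. The idea, reduced to a sketch (this is not Waxell's API, just the underlying pattern): persist each completed step, so a crashed or paused run resumes where it left off instead of re-executing finished work.

```python
# Illustrative checkpoint/resume pattern, not Waxell's actual API.
import json
import os
import tempfile

def run_workflow(steps, checkpoint_path):
    """Run (name, fn) steps in order, skipping any already checkpointed."""
    done = {}
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)  # results persisted by an earlier attempt
    for name, fn in steps:
        if name in done:
            continue             # completed before the crash/pause: skip
        done[name] = fn()        # execute, then checkpoint the result
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)
    return done

path = os.path.join(tempfile.mkdtemp(), "run.json")
first = run_workflow(
    [("fetch", lambda: "data"), ("summarize", lambda: "summary")], path
)

# A second invocation finds both steps checkpointed and re-runs neither:
executed = []
second = run_workflow(
    [("fetch", lambda: executed.append("fetch")),
     ("summarize", lambda: executed.append("summarize"))],
    path,
)
```

Pause/resume for human-in-the-loop and signal-driven execution build on the same persisted state: a run suspends at a step and a later webhook or approval picks it up from the checkpoint.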
### Notes on multi-tenancy
- +Observe: Runs are scoped to a tenant via the control plane API key. Data isolation is at the API level.
- Waxell Native: Full tenant isolation at the database level, with per-tenant policies, model configurations, and billing.
## Summary
The progression from left to right represents increasing levels of governance and infrastructure:
- Standalone frameworks (LangChain, CrewAI, custom) give you agent capabilities but no governance.
- + Waxell Observe adds observability, cost tracking, and basic policy enforcement with minimal code changes.
- Waxell Native provides the full stack: declarative definitions, durable workflows, signal-driven execution, full governance lifecycle, and production infrastructure.
Each level delivers standalone value. See the Progressive Migration guide for how to move between them at your own pace.