Server data from the Official MCP Registry
MCP server for AI agent token-cost telemetry across providers: per-agent + anomaly + routing.
This MCP server implements cost tracking and analysis for LLM API usage with reasonable security posture. The codebase is clean with proper input validation, no hardcoded credentials, and permissions appropriate to its stated purpose (file I/O, network access for logs). Minor code quality issues—broad exception handling, some unvalidated assumptions in cost calculations—do not introduce security vulnerabilities. The server requires no external authentication, which is acceptable for an operator-facing analytics tool reading local cost logs. Supply chain analysis found 4 known vulnerabilities in dependencies (0 critical, 3 high severity). Package verification found 1 issue.
5 files analyzed · 10 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
```json
{
  "mcpServers": {
    "io-github-temurkhan13-openclaw-cost-tracker-mcp": {
      "command": "uvx",
      "args": [
        "openclaw-cost-tracker-mcp"
      ]
    }
  }
}
```

From the project's GitHub README.
MCP server for AI agent token-cost telemetry across Anthropic, OpenAI, Gemini, Ollama, AWS Bedrock — per-agent attribution, per-provider breakdowns, spend-spike anomaly detection (per-agent median × threshold), cheaper-model routing recommendations with 30-day savings estimates, monthly forecast. Reads any cost-log JSONL with the standard `{request_id, timestamp, provider, model, agent_id, prompt_tokens, completion_tokens, cost_usd}` schema. OpenClaw operators get native `~/.openclaw/cost-logs/` parsing; other deployments wrap their provider calls with the included logging shim or commission a Custom MCP Build adapter. Keywords: Claude API cost, OpenAI cost tracking, AI agent FinOps, LLM cost attribution, token spend monitoring.
Production AI deployments rack up token costs across Anthropic + OpenAI + Gemini + Ollama, dozens of agents, and hundreds of tool/skill calls. The bill arrives at month-end — and by then, the answer to "where did our money go?" is buried in 100k log lines. Provider dashboards show "$X spent on Anthropic" but not "which of our six agents drove 78% of that." This MCP server surfaces per-agent + per-provider cost attribution live, queryable in plain English from inside Claude or any MCP-aware client:
```
> claude: where did our LLM spend go this week?

[MCP tools: cost_overview + top_cost_drivers]

Total spend last 7 days: $42.18

By provider:
  Anthropic  $30.40  (72%) — claude-sonnet-4 dominates
  OpenAI      $9.20  (22%) — chat-bot agent
  Gemini      $1.86   (4%) — cron-summarizer (cheap-route working)
  Ollama      $0.00  (local, free)

Top 3 cost drivers:
  data-extraction-agent  $28.50  (68%)
  chat-bot                $9.20  (22%)
  cron-summarizer         $1.86   (4%)

1 anomaly flagged — see find_cost_anomalies for details.
```

```
> claude: any cheaper-routing opportunities?

[MCP tool: model_routing_recommendations]

Recommendation: data-extraction-agent currently runs claude-sonnet-4
with avg 400-token completions — extraction-style work that
gemini-2.5-flash usually handles at ~95% quality for ~5% the cost.
Estimated savings: $27.10/30d if migrated. Test on a 10% slice first.
```
openclaw-cost-tracker-mcp

Three things existing tools (provider billing dashboards, generic FinOps tools, custom scripts) don't do:
1. Per-agent attribution, not just per-provider totals. Provider dashboards show "$X spent on Anthropic" — they can't tell you which of your six agents drove 78% of that. The cost tracker reads OpenClaw's per-request cost-log JSONL and aggregates with the `agent_id` intact.
2. Cost-spike anomaly detection per agent. A single 120k-token paste into chat costs more than a week of normal traffic. The default 3×-median-per-agent threshold flags those before they show up in the month-end bill (a sketch of the rule follows this list).
3. Routing recommendations grounded in actual usage. Generic "use cheaper models" advice is useless. This server identifies specific agents whose volume + completion-length pattern suggests a cheaper provider would deliver the same outcome, with concrete 30-day savings estimates.
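A minimal sketch of that threshold rule, assuming log entries have already been parsed into dicts — the function name and `threshold` parameter are illustrative, not the server's actual internals:

```python
from collections import defaultdict
from statistics import median

def find_cost_anomalies(entries, threshold=3.0):
    """Flag requests costing more than `threshold` x their agent's median.

    `entries` is a list of parsed cost-log records, each with at least
    `agent_id` and `cost_usd`. Illustrative sketch, not the server's API.
    """
    by_agent = defaultdict(list)
    for e in entries:
        by_agent[e["agent_id"]].append(e["cost_usd"])

    # Per-agent median cost; an all-zero agent (e.g. local Ollama) is skipped
    # below so free traffic never gets flagged.
    medians = {agent: median(costs) for agent, costs in by_agent.items()}

    return [
        e for e in entries
        if medians[e["agent_id"]] > 0
        and e["cost_usd"] > threshold * medians[e["agent_id"]]
    ]
```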
Built for the operator running OpenClaw in production with real spend that matters.
| Tool | What it returns |
|---|---|
| `cost_overview` | Total spend + by-provider + top agents + top models + anomaly count for a window |
| `costs_by_agent` | Per-agent breakdown with avg cost per request + share of total |
| `costs_by_provider` | Per-provider breakdown with token counts |
| `find_cost_anomalies` | Requests flagged as 3×+ above their agent's median cost |
| `top_cost_drivers` | Top N spending agents + models, no other noise |
| `model_routing_recommendations` | Specific cheaper-model suggestions with 30-day savings estimates |
| `forecast_monthly_cost` | Projected 30-day total + per-provider, with a confidence note |
Resources:

- `cost://overview` — 7-day snapshot
- `cost://forecast` — 30-day projection
- `cost://anomalies` — recent flagged anomalies

Prompts:

- `diagnose-cost-spike` — walk a recent spike to its root cause + corrective action
- `weekly-cost-digest` — 200-word weekly cost digest

```
pip install openclaw-cost-tracker-mcp
```
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "openclaw-cost": {
      "command": "python",
      "args": ["-m", "openclaw_cost_tracker_mcp"],
      "env": {
        "OPENCLAW_COST_BACKEND": "mock"
      }
    }
  }
}
```
| Backend | Status | Description |
|---|---|---|
| `mock` | ✅ v1.0 | Sample data with deliberate anomalies + routing opportunities for protocol verification |
| `openclaw-jsonl` | ✅ v1.0 | Parses OpenClaw's native cost-log JSONL files (default `~/.openclaw/cost-logs/`, configurable via `OPENCLAW_COST_LOGS`) |
| `provider-direct` | ⏳ v1.1 | Reads Anthropic + OpenAI billing APIs directly (no log shim required) |
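For real data, the quickstart config above can point at the `openclaw-jsonl` backend — a sketch, with an illustrative log path (omit `OPENCLAW_COST_LOGS` to use the default `~/.openclaw/cost-logs/`):

```json
{
  "mcpServers": {
    "openclaw-cost": {
      "command": "python",
      "args": ["-m", "openclaw_cost_tracker_mcp"],
      "env": {
        "OPENCLAW_COST_BACKEND": "openclaw-jsonl",
        "OPENCLAW_COST_LOGS": "/var/log/openclaw/cost-logs"
      }
    }
  }
}
```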
Each line is one JSON record:
{"request_id":"req-abc123","timestamp":"2026-05-04T12:34:56Z","provider":"anthropic","model":"claude-sonnet-4","agent_id":"data-extraction-agent","skill_id":"extract-structured-data","prompt_tokens":8500,"completion_tokens":600,"total_tokens":9100,"cost_usd":0.0345,"duration_ms":4823}
If your OpenClaw deployment doesn't emit this format, wrap your provider calls with a small logging shim — sample shim in examples/.
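For orientation, a shim can be as small as the sketch below — illustrative, not the exact shim shipped in `examples/`; it assumes you compute `cost_usd` yourself from your provider's price sheet:

```python
import json
import time
import uuid
from pathlib import Path

# Default OpenClaw log location; point elsewhere if your deployment differs.
LOG_PATH = Path.home() / ".openclaw" / "cost-logs" / "costs.jsonl"

def log_request(provider, model, agent_id, prompt_tokens,
                completion_tokens, cost_usd, skill_id=None, duration_ms=None):
    """Append one cost-log record in the JSONL schema above."""
    record = {
        "request_id": f"req-{uuid.uuid4().hex[:8]}",
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "provider": provider,
        "model": model,
        "agent_id": agent_id,
        "skill_id": skill_id,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "total_tokens": prompt_tokens + completion_tokens,
        "cost_usd": cost_usd,
        "duration_ms": duration_ms,
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")
```

Call `log_request(...)` after each provider response, filling token counts and cost from the response metadata.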
| Version | Scope | Status |
|---|---|---|
| v1.0 | mock + openclaw-jsonl backends, 7 tools / 3 resources / 2 prompts, anomaly detection + routing + forecast, GitHub Actions CI matrix, PyPI Trusted Publishing | ✅ |
| v1.1 | provider-direct backend (Anthropic + OpenAI billing API integrations) | ⏳ |
| v1.2 | Backend federation; budget alerts + threshold breach detection | ⏳ |
| v1.x | Per-channel cost attribution; webhook emitter for budget alerts | ⏳ |
If your AI deployment doesn't use OpenClaw's cost-log format — different agent harness, custom logging, AWS Bedrock metering, vendor billing API — and you want the same attribution + anomaly + routing visibility, that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Single backend adapter for your existing cost-data source | $8,000–$10,000 | 1–2 weeks |
| Standard | Custom backend + custom anomaly rules + integration with your alerting | $15,000–$20,000 | 2–4 weeks |
| Complex | Multi-backend federation + budget enforcement + custom routing logic | $25,000–$35,000 | 4–8 weeks |
To engage: email temur@pixelette.tech with subject Custom MCP Build inquiry.

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection) and openclaw-health-mcp (deployment health). Install all three for full operational visibility.
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (cost overruns being one of the most common), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
PRs welcome. Backends are pluggable — see src/openclaw_cost_tracker_mcp/backends/ for the contract.
To add a new backend:

1. Implement `CostBackend` in `backends/<your_backend>.py` (a sketch follows below)
2. Implement `get_entries()`
3. Register it in `backends/__init__.py`
4. Add `tests/test_backend_<your_backend>.py`

Bug reports + feature requests: open a GitHub issue.
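For orientation, a new backend might look like the sketch below — the exact `CostBackend` base class and `get_entries()` signature are defined in `backends/`, so treat the import path and method shape here as assumptions:

```python
import json
from pathlib import Path

# Assumed import path -- the real contract lives in
# src/openclaw_cost_tracker_mcp/backends/; check it before copying this.
from openclaw_cost_tracker_mcp.backends import CostBackend

class BillingExportBackend(CostBackend):
    """Illustrative backend that reads a JSONL billing export.

    Assumes get_entries() yields dicts in the standard cost-log schema
    ({request_id, timestamp, provider, model, agent_id, ...}).
    """

    def __init__(self, path: str):
        self.path = Path(path)

    def get_entries(self):
        # One JSON record per line, skipping blanks.
        with self.path.open() as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)
```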
MIT — see LICENSE.
Use code LAUNCH50 for the first 30 days.

Built by Temur Khan — independent practitioner on production AI systems. Contact: temur@pixelette.tech