Server data from the Official MCP Registry
Analyze Convai sessions, latency, reliability, usage, and provider telemetry via API key.
Valid MCP server (1 strong, 1 medium validity signal). No known CVEs in dependencies. ⚠️ Package registry links to a different repository than the scanned source. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
8 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Set these up before or after installing:
Environment variable: CONVAI_API_KEY
Environment variable: CONVAI_ANALYTICS_BASE_URL
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-conv-ai-convai-analytics-mcp": {
      "env": {
        "CONVAI_API_KEY": "your-convai-api-key-here",
        "CONVAI_ANALYTICS_BASE_URL": "your-convai-analytics-base-url-here"
      },
      "args": [
        "-y",
        "@convai/analytics-mcp"
      ],
      "command": "npx"
    }
  }
}

From the project's GitHub README.
Agentic analytics for Convai applications.
This repository lets MCP-capable agents, coding agents, and developers answer questions about Convai session telemetry using only a Convai API key. The recommended path is the published local MCP server, @convai/analytics-mcp, which plugs into Claude Desktop, Cursor, Codex-compatible MCP clients, and other stdio MCP hosts so agents can call typed analytics tools instead of writing custom scripts.
For developers who want direct programmatic access, the repo also includes TypeScript, Python, and CLI clients that use the same public Analytics API.
Ask questions like:
How many interactions did my characters have in the last 7 days?
How many unique end users used each character this month?
Show aggregate P50/P95/P99 latency over time for production readiness sign-off.
Explain why this interaction was slow and generate a waterfall chart.
Which processor, provider, model, or character is driving latency or errors?
The SDK calls Convai's hosted analytics API at https://analytics-api.convai.com/v1/analytics. The API resolves your API key server-side, scopes every query to your account, enforces plan and quota limits, and returns agent-friendly JSON that can be summarized, charted, or used in scripts.
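For orientation, here is a minimal sketch of the kind of request the SDK issues, assuming a range query parameter; the CONVAI-API-KEY header and the /summary path are documented in the auth and SDK-reference sections below:

// Sketch of a raw call to the hosted analytics API (Node 18+, ESM).
// The "range" query parameter is an assumption based on the SDK examples.
const res = await fetch(
  "https://analytics-api.convai.com/v1/analytics/summary?range=last_7d",
  { headers: { "CONVAI-API-KEY": process.env.CONVAI_API_KEY ?? "" } },
);
if (!res.ok) throw new Error(`Analytics API error: ${res.status}`);
console.log(await res.json()); // account-scoped, agent-friendly JSON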
Status: v0.2. The full v1 endpoint surface is wired: summary, timeseries, breakdown, sessions.list, sessions.get, interactions.get, metrics/catalog, regression-detection, and query. Convenience facades (latency, providers, errors, usage) work end to end on top of those primitives.
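As an illustration only, a hedged sketch of calling one of those facades from the TypeScript SDK; the facade names match the SDK reference below, but the option names here are assumptions:

import { ConvaiAnalytics } from "@convai/analytics";

const client = new ConvaiAnalytics({ apiKey: process.env.CONVAI_API_KEY });

// Facades sit on top of the timeseries/breakdown primitives.
// The { range } option is illustrative; check the SDK types for the real shape.
const latencyByComponent = await client.latency.byComponent({ range: "last_7d" });
console.log(latencyByComponent);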
In this repo:
- packages/mcp: a local MCP server that exposes the main analytics questions as typed agent tools.
- packages/typescript: the @convai/analytics TypeScript SDK.
- packages/python: the convai-analytics Python SDK.
- cli: the convai-analytics command-line interface.
- recipes/prompts: templates that tell an AI agent exactly which calls to make for common analytics questions.
- recipes/charts: scripts that generate Vega-Lite specs or Plotly timelines for latency, usage, reliability, and concurrency analysis.

convai-analytics/
├── docs/ Concepts, auth, metrics reference, prompt/chart indexes
├── openapi/ Snapshot of the analytics API contract
├── packages/
│ ├── typescript/ @convai/analytics SDK
│ ├── mcp/ @convai/analytics-mcp local stdio server
│ └── python/ convai-analytics SDK
├── cli/ convai-analytics command line interface
├── recipes/
│ ├── prompts/ Natural-language prompt templates for AI agents
│ └── charts/ Runnable chart-generation scripts
└── examples/ Runnable end-to-end examples
You will need uv for the Python SDK examples.

Get your key from the same Convai account you use for the rest of the Convai API, then export it in your shell:
export CONVAI_API_KEY="ck_live_your_key_here"
The SDK reads CONVAI_API_KEY automatically. You normally do not need to set a base URL; production is the default.
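If you do need to point the client elsewhere (for example at staging), here is a minimal sketch; the baseUrl option name is an assumption, while CONVAI_ANALYTICS_BASE_URL is the documented environment variable:

import { ConvaiAnalytics } from "@convai/analytics";

// Default: the SDK picks up CONVAI_API_KEY from the environment.
const client = new ConvaiAnalytics();

// Explicit configuration; "baseUrl" is an assumed option name.
const staging = new ConvaiAnalytics({
  apiKey: process.env.CONVAI_API_KEY,
  baseUrl: process.env.CONVAI_ANALYTICS_BASE_URL,
});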
For agent workflows, start here:
export CONVAI_API_KEY="ck_live_your_key_here"
npx -y @convai/analytics-mcp@latest
The server is also listed in the Official MCP Registry as io.github.Conv-AI/convai-analytics-mcp. If your MCP client supports registry discovery, use that registry entry; otherwise configure the npx command above. Once connected, ask normal analytics questions:
After adding or changing MCP config, restart Claude Desktop, Claude Code, Cursor, Codex, or your MCP host so it reloads the server command and CONVAI_API_KEY.
Show aggregate P50/P95/P99 latency for the last 30 days and generate a chart.
Which component is driving p95 latency, and which sessions should I inspect?
Show usage trends, unique end users, active-session concurrency, and provider/model latency charts.
See packages/mcp/README.md for client setup snippets.
git clone https://github.com/Conv-AI/convai-analytics.git
cd convai-analytics
export CONVAI_API_KEY="ck_live_your_key_here"
# Install TypeScript SDK + CLI dependencies.
make ts-install
# Optional but useful: verify the SDK and CLI compile.
make ts-build
make ts-test
Run the smallest working example:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
examples/node/account-summary.ts last_7d
Expected result: a short account summary with sessions, end users, interactions, errors, and latency percentiles for your own account.
If your agent does not support MCP, open this repo in Codex, Claude Code, Cursor, or a similar coding agent. Give the agent this instruction:
Use this repository and my CONVAI_API_KEY environment variable to answer analytics questions.
First read README.md, docs/concepts.md, docs/metrics-reference.md, and recipes/prompts/README.md.
Prefer MCP when available; otherwise use the SDK or recipes. Do not ask for direct data-store access.
Keep the API key private and do not print it.
Then ask normal product questions:
How many interactions and unique end users did I have in the last 30 days?
Break it down by character and generate a usage trend chart.
Show aggregate P50/P95/P99 voice.user_to_bot_latency for the last 7 days.
Add a 3000 ms p95 threshold line and summarize the worst buckets.
List recent slow sessions, choose one with traceable interaction spans,
explain the bottleneck, and generate a waterfall chart.
The agent should use recipes/prompts for the call sequence and recipes/charts for chart generation.
The fastest path for MCP-capable agents is @convai/analytics-mcp. It is a published local stdio MCP server that wraps only the public TypeScript SDK. It reads CONVAI_API_KEY, optionally reads CONVAI_ANALYTICS_BASE_URL, and never accepts account overrides, service credentials, database URLs, or other internal access paths.
Run it directly:
export CONVAI_API_KEY="ck_live_your_key_here"
npx -y @convai/analytics-mcp@latest
Official MCP Registry name: io.github.Conv-AI/convai-analytics-mcp
Claude Desktop example:
{
"mcpServers": {
"convai-analytics": {
"command": "npx",
"args": ["-y", "@convai/analytics-mcp@latest"],
"env": {
"CONVAI_API_KEY": "ck_live_your_key_here"
}
}
}
}
After saving this config, quit and reopen Claude Desktop so it starts the new MCP server. Claude Code, Cursor, Codex-compatible clients, and other stdio MCP hosts can use the same command/env shape; restart the client or start a new session after adding or changing MCP config. Once connected, ask questions like:
Show aggregate P50/P95/P99 latency for the last 30 days and generate a chart.
Which component is driving p95 latency, and which sessions should I inspect?
Show usage trends, unique end users, active-session concurrency, and provider/model latency charts.
The MCP server returns structured JSON for data tools and Vega-Lite JSON specs for chart tools. It does not write files; your MCP client can decide whether to render, save, or summarize the returned artifacts.
Local development:
make mcp-install
make mcp-lint
make mcp-test
make mcp-build
make mcp-smoke
See packages/mcp/README.md for the full tool list, prompt list, resources, and live smoke command. See docs/publishing.md and docs/mcp-distribution.md for release, registry, and marketplace publishing notes.
When using the repository directly:
make ts-install
packages/typescript/node_modules/.bin/tsx --tsconfig tsconfig.recipes.json examples/node/account-summary.ts last_24h
When using the package from your own Node project:
npm install @convai/analytics
import { ConvaiAnalytics } from "@convai/analytics";
const client = new ConvaiAnalytics({
apiKey: process.env.CONVAI_API_KEY,
});
const summary = await client.summary({ range: "last_7d" });
console.log({
sessions: summary.sessions,
interactions: summary.interactions,
uniqueEndUsers: summary.uniqueEndUsers,
p95EndToEndMs: summary.p95EndToEndMs,
});
Find sessions to inspect:
const sessions = await client.sessions.list({
range: "last_7d",
sort: "slowest",
limit: 10,
});
console.table(
sessions.sessions.map((s) => ({
sessionId: s.sessionId,
interactions: s.interactionCount,
p95EndToEndMs: s.p95EndToEndMs,
startTime: s.startTime,
})),
);
Inspect one interaction:
const trace = await client.interactions.get("interaction_id_here");
for (const span of trace.spans) {
console.log(
`${span.processor ?? "unknown"} ${span.durationMs ?? 0}ms` +
(span.provider ? ` provider=${span.provider}` : ""),
);
}
From this repository:
cd packages/python
uv sync
uv run python - <<'PY'
from convai_analytics import ConvaiAnalytics
client = ConvaiAnalytics()
summary = client.summary(range="last_7d")
print({
"sessions": summary.sessions,
"interactions": summary.interactions,
"unique_end_users": summary.unique_end_users,
"p95_end_to_end_ms": summary.p95_end_to_end_ms,
})
PY
From your own Python project:
# Use this once the package is available in your Python package registry.
pip install convai-analytics
from convai_analytics import ConvaiAnalytics
client = ConvaiAnalytics() # reads CONVAI_API_KEY
sessions = client.sessions.list(range="last_7d", sort="slowest", limit=5)
for session in sessions.sessions:
print(session.session_id, session.interaction_count, session.p95_end_to_end_ms)
Build the local CLI:
make ts-install
make ts-build
Run common queries:
npx -y @convai/analytics-cli summary --range last_7d --pretty
node cli/dist/index.js summary --range last_7d --pretty
node cli/dist/index.js timeseries \
--measure p95Value \
--metric-name voice.user_to_bot_latency \
--granularity day \
--range last_30d \
--pretty
node cli/dist/index.js sessions \
--range last_7d \
--sort slowest \
--limit 5 \
--pretty
node cli/dist/index.js interaction interaction_id_here --pretty
Use --api-key only for local one-off testing. Prefer CONVAI_API_KEY so the key does not end up in shell history:
CONVAI_API_KEY="ck_live_your_key_here" node cli/dist/index.js summary --range last_24h
All commands below assume:
export CONVAI_API_KEY="ck_live_your_key_here"
make ts-install
make ts-build
Question:
How many sessions, interactions, unique end users, and errors did I have in the last 7 days?
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
examples/node/account-summary.ts last_7d
Question:
Show aggregate P50/P95/P99 latency over time for production-readiness sign-off.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/latency_percentile_band.ts \
--range last_7d \
--granularity hour \
--p95-threshold-ms 3000 \
> latency-percentiles.vl.json
Question:
Show p95 end-to-end latency over time, optionally filtered to one character.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/p95_over_time.ts \
--range last_7d \
> p95-over-time.vl.json
Add --character-id your_character_id to focus on one character.
Question:
Which processor contributes most to p95 latency?
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/component_breakdown.ts \
--range last_7d \
> component-bottlenecks.vl.json
First find a session:
node cli/dist/index.js sessions --range last_7d --sort slowest --limit 5 --pretty
Then diagnose it:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
examples/node/why-was-this-session-slow.ts session_id_here
Question:
Show the latency breakdown of this specific interaction id.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/latency_waterfall.ts \
--interaction interaction_id_here \
> interaction-waterfall.vl.json
Question:
Show errors and dropped error-persistence events over time.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/reliability_trends.ts \
--range last_7d \
--granularity day \
> reliability-trends.vl.json
Question:
Show sessions, interactions, and unique end users over time.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/usage_trends.ts \
--range last_30d \
--granularity day \
> usage-trends.vl.json
Question:
Estimate concurrent active sessions over time.
Command:
packages/typescript/node_modules/.bin/tsx \
--tsconfig tsconfig.recipes.json \
recipes/charts/concurrency_estimate.ts \
--range last_7d \
--bucket-minutes 5 \
> active-session-concurrency.vl.json
This is a proxy derived from session start/end windows until first-class stream concurrency metrics are exposed.
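To make the proxy concrete, here is a minimal sketch of the idea (not the recipe's actual implementation): bucket the time range into fixed windows and count how many session start/end intervals overlap each bucket.

// Sketch of a concurrency proxy from session start/end windows.
// Simplified for illustration; not the code in concurrency_estimate.ts.
interface SessionWindow {
  startMs: number; // session start, epoch milliseconds
  endMs: number;   // session end, epoch milliseconds
}

function estimateConcurrency(
  sessions: SessionWindow[],
  rangeStartMs: number,
  rangeEndMs: number,
  bucketMinutes = 5,
): number[] {
  const bucketMs = bucketMinutes * 60_000;
  const buckets = new Array(Math.ceil((rangeEndMs - rangeStartMs) / bucketMs)).fill(0);
  for (const s of sessions) {
    // A session counts toward every bucket its [startMs, endMs] window overlaps.
    const first = Math.max(0, Math.floor((s.startMs - rangeStartMs) / bucketMs));
    const last = Math.min(buckets.length - 1, Math.floor((s.endMs - rangeStartMs) / bucketMs));
    for (let b = first; b <= last; b++) buckets[b] += 1;
  }
  return buckets; // estimated concurrent sessions per bucket
}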
Question:
Compare LLM or TTS provider latency and reliability.
Agent prompt:
Read recipes/prompts/provider-comparison.md and compare LLM and TTS providers for the last 7 days.
Return a table with provider/model, p95 latency, sample count, and any reliability caveats.
Question:
Did latency regress compared with the previous window?
Business-tier and above:
const regressions = await client.regressionDetection({
baselineRange: "last_7d",
currentRange: "last_24h",
measure: "p95Value",
groupBy: "processor",
});
Lower plans should expect a typed PlanInsufficientError for this endpoint.
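A hedged sketch of guarding for that case, assuming PlanInsufficientError is exported by @convai/analytics (check the SDK for the exact export):

import { ConvaiAnalytics, PlanInsufficientError } from "@convai/analytics";

const client = new ConvaiAnalytics({ apiKey: process.env.CONVAI_API_KEY });

try {
  const regressions = await client.regressionDetection({
    baselineRange: "last_7d",
    currentRange: "last_24h",
    measure: "p95Value",
    groupBy: "processor",
  });
  console.log(regressions);
} catch (err) {
  // Regression detection is business-tier and above.
  if (err instanceof PlanInsufficientError) {
    console.warn("regression-detection requires the business plan or higher.");
  } else {
    throw err;
  }
}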
The prompt recipes are the best starting point for agents:
| Recipe | Use it when you want to know |
|---|---|
| why-was-this-session-slow.md | why a session felt slow and which interaction/component caused it |
| aggregate-latency-distribution.md | P50/P95/P99 latency bands for systemic trend analysis |
| p95-latency-trend.md | p95 latency over time for an account or character |
| component-bottlenecks.md | which processor contributes most to p95 |
| trace-explanation.md | what happened in one interaction id |
| error-rate-trends.md | errors over time and by component/provider |
| provider-comparison.md | provider/model latency comparisons |
| usage-summary.md | sessions, interactions, and unique users by character or experience |
Most chart recipes print a Vega-Lite JSON spec to stdout. You can save that spec, render it in a notebook, hand it to an agent, or convert it to PNG with a Vega-Lite renderer.
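For example, a minimal Node sketch that turns a saved spec into an SVG with the vega and vega-lite packages; the renderer choice is an assumption, since the recipes themselves only emit the spec:

import { readFile, writeFile } from "node:fs/promises";
import * as vega from "vega";
import { compile } from "vega-lite";

// Compile a Vega-Lite spec printed by a chart recipe, then render it headlessly.
const vlSpec = JSON.parse(await readFile("latency-percentiles.vl.json", "utf8"));
const { spec } = compile(vlSpec); // Vega-Lite -> Vega
const view = new vega.View(vega.parse(spec), { renderer: "none" });
await writeFile("latency-percentiles.svg", await view.toSVG());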
| Script | Output |
|---|---|
| latency_percentile_band.ts | P50/P95/P99 line chart, optional p95 threshold |
| p95_over_time.ts | p95 latency trend |
| component_breakdown.ts | p95 latency by processor |
| latency_waterfall.ts | per-interaction waterfall |
| reliability_trends.ts | errors and dropped persistence events |
| usage_trends.ts | sessions, interactions, unique end users |
| concurrency_estimate.ts | active-session concurrency proxy |
| session_timeline.py | session event timeline |
Every request sends a CONVAI-API-KEY header to the hosted analytics API. The API key resolves server-side to one Convai account; the client cannot choose or override account scope.
| Plan | API access | Notes |
|---|---|---|
| free / starter | yes, limited monthly quota | basic endpoints |
| scale | yes, higher quota | basic endpoints |
| business | yes | adds regression-detection and restricted query |
| enterprise | yes | higher quotas / retention according to contract |
Common auth and plan errors:
| Status | Meaning |
|---|---|
| 401 | missing or invalid API key |
| 402 | plan required for analytics API access |
| 403 | endpoint requires a higher plan |
| 429 | rate limit or monthly analytics quota exceeded |
See docs/authentication.md for details.
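If you hit the REST API directly, a hedged sketch of mapping those statuses to actionable errors (only the status codes in the table above are documented; the messages are paraphrases):

// Map the documented auth/plan status codes from a raw analytics API response.
function checkAnalyticsResponse(res: Response): Response {
  switch (res.status) {
    case 401: throw new Error("Missing or invalid API key.");
    case 402: throw new Error("Your plan does not include analytics API access.");
    case 403: throw new Error("This endpoint requires a higher plan.");
    case 429: throw new Error("Rate limit or monthly analytics quota exceeded.");
    default:
      if (!res.ok) throw new Error(`Unexpected analytics API error: ${res.status}`);
      return res;
  }
}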
| SDK call | REST endpoint | Notes |
|---|---|---|
| client.summary(...) | GET /v1/analytics/summary | headline account KPIs |
| client.timeseries(...) | GET /v1/analytics/timeseries | metric over time |
| client.breakdown(...) | GET /v1/analytics/breakdown | grouped breakdowns |
| client.sessions.list(...) | GET /v1/analytics/sessions | session inventory |
| client.sessions.get(id) | GET /v1/analytics/sessions/{id} | session timeline |
| client.interactions.get(id) | GET /v1/analytics/interactions/{id} | trace / component spans |
| client.catalog() | GET /v1/analytics/metrics/catalog | queryable metric definitions |
| client.regressionDetection(...) | GET /v1/analytics/regression-detection | business+ |
| client.query(cubeQuery) | POST /v1/analytics/query | restricted advanced query, business+ |
Convenience facades:
- client.latency.byComponent(...)
- client.latency.overTime(...)
- client.providers.compare(...)
- client.errors.summary(...)
- client.errors.overTime(...)
- client.usage.summary(...)
- client.usage.interactions(...)

Repo make targets:

make ts-install # install TypeScript SDK + CLI deps
make ts-build # build TypeScript SDK + CLI
make ts-test # run TypeScript unit tests
make ts-lint # ESLint + TypeScript typecheck
make ts-typecheck-recipes # typecheck examples and chart recipes
make py-install # install Python SDK dev env
make py-test # run Python tests
make py-lint # ruff + mypy
make check-openapi # compare committed OpenAPI snapshot with live API
make gen-types # regenerate generated SDK types from OpenAPI snapshot
Live E2E test, if you have an API key with data:
CONVAI_ANALYTICS_E2E_RANGE=last_30d \
CONVAI_ANALYTICS_E2E_DELAY_MS=6500 \
make e2e-stg
As the name suggests, e2e-stg runs against staging by default. To run against production, override the base URL:
CONVAI_ANALYTICS_BASE_URL=https://analytics-api.convai.com/v1/analytics \
CONVAI_ANALYTICS_E2E_RANGE=last_30d \
CONVAI_ANALYTICS_E2E_DELAY_MS=6500 \
make e2e-stg