Server data from the Official MCP Registry
Read-only MCP server for ClickHouse metadata, parameterized queries, and plan analysis.
Valid MCP server (1 strong, 4 medium validity signals). No known CVEs in dependencies. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
6 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: MCP_CLICKHOUSE_DSN
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-alyiox-mcp-clickhousex": {
"env": {
"MCP_CLICKHOUSE_DSN": "your-mcp-clickhouse-dsn-here"
},
"args": [
"mcp-clickhousex"
],
"command": "uvx"
}
}
}

From the project's GitHub README.
A read-only Model Context Protocol (MCP) server for ClickHouse that supports metadata discovery, resources, parameterized SELECT queries, SHOW introspection, query analysis, and snapshot mode for large result sets, with profile-based configuration and strict no-DML/DDL enforcement.
Requirements: Python 3.13+, a running ClickHouse instance, and connection details via environment variables or a config file.
Set a DSN and run the server with MCP Inspector:
# Option 1: Run directly with uvx (no clone needed)
export MCP_CLICKHOUSE_DSN="http://default:@localhost:8123/default"
npx -y @modelcontextprotocol/inspector uvx mcp-clickhousex
# Option 2: Run from source (clone repo, then)
export MCP_CLICKHOUSE_DSN="http://default:@localhost:8123/default"
npx -y @modelcontextprotocol/inspector uv run main.py
All settings use the MCP_CLICKHOUSE prefix. Flat environment variables (e.g. MCP_CLICKHOUSE_DSN) are the straightforward way to configure the default profile when you have a single connection. For multiple profiles, the user-scoped config.json file is recommended.
Single connection: Configure via environment variables.
# Connection DSN (required).
export MCP_CLICKHOUSE_DSN="http://user:password@host:8123/database"
# Optional description for the default profile (tooling/AI discovery).
export MCP_CLICKHOUSE_DESCRIPTION="Primary cluster"
# Optional max rows per interactive query (default 500; hard ceiling 1000).
export MCP_CLICKHOUSE_QUERY_MAX_ROWS="500"
# Optional interactive query timeout in seconds (default 30; hard ceiling 300).
export MCP_CLICKHOUSE_QUERY_COMMAND_TIMEOUT_SECONDS="30"
# Optional max rows for snapshot queries (default 10000; hard ceiling 50000).
export MCP_CLICKHOUSE_SNAPSHOT_MAX_ROWS="10000"
# Optional snapshot query timeout in seconds (default 120; hard ceiling 300).
export MCP_CLICKHOUSE_SNAPSHOT_COMMAND_TIMEOUT_SECONDS="120"
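Each of these limits has a default and a hard ceiling: a configured value is used as-is up to the ceiling and capped beyond it. A minimal sketch of that clamping behavior (an illustration of the documented numbers, not the server's actual implementation):

```python
import os

# Documented (default, hard ceiling) pairs from the settings above.
LIMITS = {
    "MCP_CLICKHOUSE_QUERY_MAX_ROWS": (500, 1000),
    "MCP_CLICKHOUSE_QUERY_COMMAND_TIMEOUT_SECONDS": (30, 300),
    "MCP_CLICKHOUSE_SNAPSHOT_MAX_ROWS": (10000, 50000),
    "MCP_CLICKHOUSE_SNAPSHOT_COMMAND_TIMEOUT_SECONDS": (120, 300),
}

def effective_limit(name: str) -> int:
    """Return the env-configured value, falling back to the default,
    never exceeding the documented hard ceiling."""
    default, ceiling = LIMITS[name]
    raw = os.environ.get(name)
    value = int(raw) if raw else default
    return min(value, ceiling)
```

So setting `MCP_CLICKHOUSE_QUERY_MAX_ROWS=2000` would still yield an effective cap of 1000.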
Multiple connections: Use the user-scoped config.json file (recommended). Env vars also work via the MCP_CLICKHOUSE_PROFILES_<NAME>_ prefix (e.g. MCP_CLICKHOUSE_PROFILES_WAREHOUSE_DSN).
Config file location: ~/.config/mcp-clickhousex/config.json (Linux/macOS) or %USERPROFILE%\.config\mcp-clickhousex\config.json (Windows).

Example (config.json):
{
"profiles": {
"default": {
"dsn": "http://default:@localhost:8123/default",
"description": "Primary",
"query_max_rows": 500,
"query_command_timeout_seconds": 60,
"snapshot_max_rows": 10000,
"snapshot_command_timeout_seconds": 120
},
"warehouse": {
"dsn": "http://user:pass@warehouse:8123/analytics",
"description": "Warehouse"
}
}
}
Special characters in credentials: If the username or password contains URL-reserved characters, percent-encode them in the DSN:
| Character | Encoding |
|---|---|
# | %23 |
? | %3F |
/ | %2F |
@ | %40 |
% | %25 |
For example, username admin@org and password p#ss? become admin%40org:p%23ss%3F in the DSN: http://admin%40org:p%23ss%3F@host:8123/database.
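Rather than encoding by hand from the table, the same result can be produced with Python's standard `urllib.parse.quote` (a convenience sketch; the server simply parses whatever DSN it receives):

```python
from urllib.parse import quote

def build_dsn(user: str, password: str, host: str, port: int, database: str) -> str:
    """Percent-encode credentials so URL-reserved characters
    (#, ?, /, @, %) survive DSN parsing."""
    u = quote(user, safe="")      # e.g. @ -> %40
    p = quote(password, safe="")  # e.g. # -> %23, ? -> %3F
    return f"http://{u}:{p}@{host}:{port}/{database}"

print(build_dsn("admin@org", "p#ss?", "host", 8123, "database"))
# -> http://admin%40org:p%23ss%3F@host:8123/database
```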
Tool descriptions match the tool docstrings in server.py, except that the [ClickHouse] prefix is omitted here (it remains in the MCP-exposed metadata). Parameter text matches each Field(description=…) on the same tool.
| Tool | Description | Key params |
|---|---|---|
list_profiles | List configured profiles. Each entry includes name and optional description. | — |
get_cluster_properties | Get cluster properties and execution limits. Returns ClickHouse server version plus enforced limits (max rows, timeouts) for the profile. | profile — Profile name; uses default profile when omitted. Src: profiles. |
run_query | Execute read-only SELECT or WITH … SELECT. One statement; DML, DDL, SET, SYSTEM, and similar are rejected. Returns {data, row_count} where data is an RFC 4180 CSV string. Pass snapshot=true to persist the result to disk and receive {snapshot_uri, row_count} instead; fetch the CSV via the snapshot resource URI. Max-rows cap; overflow sets truncated and row_limit. Same SQL validation as analyze_query. | sql (required) — Read-only SELECT or WITH … SELECT. One statement; use qualified db.table or database. Driver placeholder syntax for parameters. parameters — Named parameters for driver placeholders (e.g. %(name)s or {name:Type}). database — Session default database for unqualified names. Src: databases. profile — Profile name; uses default profile when omitted. Src: profiles. snapshot — When true, persist the full result as a CSV file and return a resource URI (chx://snapshots/{id}) instead of inline data. Use for queries that may exceed the interactive row limit (1 000). Snapshot limits apply (default 10 000 rows, hard ceiling 50 000). Entries expire after 7 days. |
run_show | Execute SHOW introspection statement. One statement per call; INTO OUTFILE rejected. Interactive row limits apply (default 500, hard ceiling 1 000). Same timeout as run_query. | sql (required) — Single SHOW statement (e.g. SHOW DATABASES, SHOW CREATE TABLE). No INTO OUTFILE. parameters — Named parameters for driver placeholders (e.g. %(name)s or {name:Type}). database — Session default database for unqualified names. Src: databases. profile — Profile name; uses default profile when omitted. Src: profiles. |
analyze_query | Explain read-only SELECT or WITH … SELECT. Returns plan, pipeline, and/or syntax text. Default types plan and pipeline. Uses query timeout and optional database; no max-rows cap unlike run_query. | sql (required) — Read-only SELECT or WITH … SELECT for EXPLAIN. One statement; same validation as run_query. parameters — Named parameters for driver placeholders (e.g. %(name)s or {name:Type}). database — Session default database for unqualified names. Src: databases. profile — Profile name; uses default profile when omitted. Src: profiles. types — EXPLAIN variants: plan (indexes), pipeline, syntax. Default plan and pipeline if omitted. |
list_databases | List databases. Rows from system.databases visible to the connection. | profile — Profile name; uses default profile when omitted. Src: profiles. |
list_tables | List tables and views in a database. Rows from system.tables: name, engine, primary_key, sorting_key, partition_key, total_rows, total_bytes for query planning. | database — Database to list; client default when omitted. Src: databases. profile — Profile name; uses default profile when omitted. Src: profiles. |
list_columns | List columns for a table or view. Rows from system.columns for the resolved database and table. | table (required) — Table or view name, or database.table. Src: tables. database — Database when table is unqualified; ignored if table contains a dot. Client default when omitted. Src: databases. profile — Profile name; uses default profile when omitted. Src: profiles. |
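As the run_query row above describes, parameters are passed by name and referenced in the SQL with driver placeholder syntax. A hypothetical arguments payload for a run_query tool call (table and column names are illustrative, not from the project):

```python
# Arguments for a run_query call using %(name)s placeholders.
# "events" / "event" / "analytics" are made-up names for illustration.
arguments = {
    "sql": (
        "SELECT event, count() AS n FROM events "
        "WHERE event = %(event)s GROUP BY event"
    ),
    "parameters": {"event": "page_view"},  # bound to %(event)s by the driver
    "database": "analytics",               # session default for unqualified names
    "snapshot": False,                     # inline CSV result, interactive row cap applies
}
```

Passing values through `parameters` instead of interpolating them into the SQL string avoids quoting and injection pitfalls.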
The server exposes the same discovery and metadata as the tools above via URI-addressable resources (profile-first hierarchy). Each resource’s description matches the corresponding tool (list_profiles, get_cluster_properties, list_databases, list_tables, list_columns), plus Src: tags for URI path parameters. All resource content is JSON (application/json) except snapshots which return CSV (text/csv). Use path segment default for the default profile or database.
Resource descriptions match description=… on @mcp.resource in server.py (same prefix omission as above).
| URI | Description |
|---|---|
chx://profiles | List configured profiles. Each entry includes name and optional description. |
chx://profiles/{profile}/cluster-properties | Get cluster properties and execution limits. Returns ClickHouse server version plus enforced limits (max rows, timeouts) for the profile. Src: profiles. |
chx://profiles/{profile}/databases | List databases. Rows from system.databases visible to the connection. Src: profiles. |
chx://profiles/{profile}/databases/{database}/tables | List tables and views in a database. Rows from system.tables: name, engine, primary_key, sorting_key, partition_key, total_rows, total_bytes for query planning. Src: profiles, dbs. |
chx://profiles/{profile}/databases/{database}/tables/{table}/columns | List columns for a table or view. Rows from system.columns for the resolved database and table. Src: profiles, dbs, tables. |
chx://snapshots/{id} | Fetch a query result snapshot by ID. Returns the full result as a CSV string (header row + data rows). Entries expire after 7 days. Src: run_query with snapshot=true. |
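A snapshot round trip, sketched: run_query with snapshot=true returns a chx://snapshots/{id} URI, which the client then fetches as a resource to obtain the CSV. Extracting the ID from such a URI (an illustrative client-side helper, not part of the server):

```python
from urllib.parse import urlparse

def snapshot_id(uri: str) -> str:
    """Pull the {id} out of a chx://snapshots/{id} resource URI."""
    parsed = urlparse(uri)
    if parsed.scheme != "chx" or parsed.netloc != "snapshots":
        raise ValueError(f"not a snapshot URI: {uri}")
    return parsed.path.lstrip("/")
```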
Read-only SQL only: run_query allows SELECT / WITH … SELECT; run_show allows a single SHOW statement per call. INTO OUTFILE is not allowed on run_show. Interactive queries enforce a tight row cap (default 500, hard ceiling 1 000); for larger extracts use snapshot=true (default 10 000, hard ceiling 50 000). Parameterized queries are supported where the driver allows (%(name)s or {name:Type} syntax). Use environment variables for connection credentials — never commit secrets.
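The read-only gate described above can be approximated with a simple statement check. This is a toy sketch of the idea only; the server's actual validation is stricter (it must also handle comments, string literals, and other edge cases):

```python
import re

# Accept statements that begin with SELECT or WITH ... SELECT.
READ_ONLY = re.compile(r"^\s*(SELECT|WITH)\b", re.IGNORECASE)
# Reject DML/DDL and session/admin keywords anywhere in the statement.
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|ALTER|DROP|CREATE|TRUNCATE|SET|SYSTEM)\b",
    re.IGNORECASE,
)

def is_read_only_select(sql: str) -> bool:
    """Toy gate: one SELECT / WITH ... SELECT statement, no DML/DDL."""
    if ";" in sql.rstrip().rstrip(";"):  # interior semicolon => multiple statements
        return False
    return bool(READ_ONLY.match(sql)) and not FORBIDDEN.search(sql)
```

A real validator would also have to tolerate forbidden keywords inside string literals and comments, which is why the sketch errs on the side of rejecting.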
Snippets for common MCP clients using uvx mcp-clickhousex (no clone required; ensure uv is on your PATH). Replace connection details as needed.
{
"mcpServers": {
"clickhouse": {
"command": "uvx",
"args": ["mcp-clickhousex"],
"env": {
"MCP_CLICKHOUSE_DSN": "http://default:@localhost:8123/default"
}
}
}
}
[mcp_servers.clickhouse]
command = "uvx"
args = ["mcp-clickhousex"]
[mcp_servers.clickhouse.env]
MCP_CLICKHOUSE_DSN = "http://default:@localhost:8123/default"
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"clickhouse": {
"type": "local",
"enabled": true,
"command": ["uvx", "mcp-clickhousex"],
"environment": {
"MCP_CLICKHOUSE_DSN": "http://default:@localhost:8123/default"
}
}
}
}
{
"mcpServers": {
"clickhouse": {
"command": "uvx",
"args": ["mcp-clickhousex"],
"env": {
"MCP_CLICKHOUSE_DSN": "http://default:@localhost:8123/default"
}
}
}
}
{
"inputs": [],
"servers": {
"clickhouse": {
"type": "stdio",
"command": "uvx",
"args": ["mcp-clickhousex"],
"env": {
"MCP_CLICKHOUSE_DSN": "http://default:@localhost:8123/default"
}
}
}
}
Config file locations vary by client: Cursor uses .cursor/mcp.json; for Codex, Copilot, and OpenCode, see your client's MCP documentation.
Tests require a running ClickHouse instance. The test suite creates a sample table in the default database, seeds it, and drops it after.
# Run all tests (unit + functional + e2e)
uv run pytest tests/ -v
The test harness uses MCP_TEST_CLICKHOUSE_DSN to locate the ClickHouse instance. If unset, it falls back to http://admin:password123@localhost:8123/default. Set the variable to point tests at a different server without affecting your production MCP_CLICKHOUSE_DSN:
export MCP_TEST_CLICKHOUSE_DSN="http://user:pass@testhost:8123/default"
uv run pytest tests/ -v
No planned features at this time. Open an issue to suggest improvements.
Open issues or PRs; follow existing style and add tests where appropriate.
MIT. See LICENSE.