Server data from the Official MCP Registry
dbt MCP — manifest/run_results/sources/catalog parsing + DQ result tables (BigQuery/Postgres)
Valid MCP server (3 strong, 3 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry.
9 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: DBT_PROJECT_DIR
Environment variable: DBT_TARGET_DIR
Environment variable: DBT_RUN_HISTORY_DIR
Environment variable: DQ_BACKEND
Environment variable: DQ_RESULTS_TABLE
Environment variable: DQ_SCORE_TABLE
Environment variable: GOOGLE_APPLICATION_CREDENTIALS
Environment variable: BQ_PROJECT_ID
Environment variable: PG_CONNECTION_STRING
Environment variable: DBT_ALLOW_WRITE
Environment variable: DBT_TOOLS
Environment variable: DBT_DISABLE
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-us-all-dbt": {
      "command": "npx",
      "args": [
        "-y",
        "@us-all/dbt-mcp"
      ],
      "env": {
        "DBT_TOOLS": "your-dbt-tools-here",
        "DQ_BACKEND": "your-dq-backend-here",
        "DBT_DISABLE": "your-dbt-disable-here",
        "BQ_PROJECT_ID": "your-bq-project-id-here",
        "DBT_TARGET_DIR": "your-dbt-target-dir-here",
        "DQ_SCORE_TABLE": "your-dq-score-table-here",
        "DBT_ALLOW_WRITE": "your-dbt-allow-write-here",
        "DBT_PROJECT_DIR": "your-dbt-project-dir-here",
        "DQ_RESULTS_TABLE": "your-dq-results-table-here",
        "DBT_RUN_HISTORY_DIR": "your-dbt-run-history-dir-here",
        "PG_CONNECTION_STRING": "your-pg-connection-string-here",
        "GOOGLE_APPLICATION_CREDENTIALS": "your-google-application-credentials-here"
      }
    }
  }
}

From the project's GitHub README.
dbt MCP server — manifest.json, run_results.json, sources.json, catalog.json, plus DQ result tables (BigQuery / Postgres) behind one stdio MCP. Built on @us-all/mcp-toolkit.
A read-only window into your dbt project for LLM clients. No dbt run triggering — just deep introspection, run-history analysis, source freshness, per-column test coverage, lineage walks, and (if you have a custom DQ result table) historical check trends and Tier SLA status.
For DAG triggering / run history / log tails, install the companion @us-all/airflow-mcp alongside.
# 1. add the MCP server
pnpm add -D @us-all/dbt-mcp
# 2. add the DQ backend you actually use (only if you query custom DQ tables):
pnpm add -D @google-cloud/bigquery # OR
pnpm add -D pg
DBT_PROJECT_DIR=/path/to/dbt-project \
DQ_RESULTS_TABLE=my-project.data_ops.quality_checks \
npx @us-all/dbt-mcp
The server speaks MCP over stdio; wire it into Claude Desktop, Cursor, or any MCP client. Set MCP_TRANSPORT=http to opt in to the Streamable HTTP transport (Bearer auth, /health endpoint).
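For instance, the HTTP opt-in composes with the quick-start invocation like this (a sketch: the transport flag and /health endpoint come from the docs above, but the port and any token variable are undocumented here, so treat them as placeholders):

```shell
# Opt in to Streamable HTTP transport instead of stdio.
# The server then expects Bearer auth and exposes a /health endpoint
# (bind address/port are not documented here — check your deployment).
MCP_TRANSPORT=http \
DBT_PROJECT_DIR=/path/to/dbt-project \
npx @us-all/dbt-mcp
```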
| Category | Tools | Purpose |
|---|---|---|
| dbt | 15 + 2 aggregations | Parse manifest.json / run_results.json / sources.json / catalog.json |
| quality | 5 + 2 aggregations | Query quality_checks and quality_score_daily (BQ or PG) |
| meta | 1 (always on) | search-tools for natural-language tool discovery |
Toggle with DBT_TOOLS=dbt (allowlist) or DBT_DISABLE=quality (denylist).
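Concretely, the toggles compose with the quick-start invocation like so (a sketch; whether the values accept comma-separated lists of categories is an assumption not confirmed above):

```shell
# Allowlist: expose only the dbt category (meta's search-tools is always on).
DBT_TOOLS=dbt DBT_PROJECT_DIR=/path/to/dbt-project npx @us-all/dbt-mcp

# Denylist: expose everything except the quality category.
DBT_DISABLE=quality DBT_PROJECT_DIR=/path/to/dbt-project npx @us-all/dbt-mcp
```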
dbt (15 + 2): dbt-list-models, dbt-get-model, dbt-list-tests, dbt-get-test, dbt-list-sources, dbt-get-source, dbt-list-exposures, dbt-list-macros, dbt-get-macro, dbt-list-runs, dbt-get-run-results, dbt-failed-tests, dbt-slow-models, dbt-coverage, dbt-graph, freshness-status, incident-context

quality (5 + 2): dq-list-checks, dq-get-check-history, dq-failed-checks-by-dataset, dq-score-trend, dq-tier-status, failed-tests-summary, dq-score-snapshot
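As an illustration, an MCP client invokes any of these tools with a standard MCP tools/call request over stdio; the envelope below follows the MCP spec, but the search-tools argument shape is an assumption, not taken from this server's schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search-tools",
    "arguments": { "query": "which sources are stale?" }
  }
}
```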
| Prompt | Use when |
|---|---|
| investigate-failed-tests | "What's broken in the last 24h?" |
| freshness-degradation-triage | "Are any sources stale?" (Tier 1 focus optional) |
| dq-trend-report | "Give me a stakeholder-friendly DQ trend report" |
| incident-triage | "Triage <model \| source>" — bundles all signals |
| Env | Required | Notes |
|---|---|---|
| DBT_PROJECT_DIR | yes | dbt project root (where dbt_project.yml lives) |
| DBT_TARGET_DIR | no | Defaults to $DBT_PROJECT_DIR/target |
| DBT_RUN_HISTORY_DIR | no | Optional dir for archived run_results.json history |
| DQ_BACKEND | no | bigquery (default) or postgres |
| DQ_RESULTS_TABLE | no | FQN of the checks table (without it, the quality category errors at call time) |
| DQ_SCORE_TABLE | no | FQN of the score-daily table |
| GOOGLE_APPLICATION_CREDENTIALS | no | For the BigQuery backend (ADC fallback supported) |
| BQ_PROJECT_ID | no | Explicit BQ project (otherwise inferred from ADC) |
| PG_CONNECTION_STRING | no | When DQ_BACKEND=postgres (secret) |
| DBT_ALLOW_WRITE | no | Reserved for future write tools (none in v0.1) |
| DBT_TOOLS / DBT_DISABLE | no | Category toggles |
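DBT_RUN_HISTORY_DIR implies you snapshot run_results.json yourself after each dbt invocation. One way to do that is sketched below; the layout (timestamped copies in a flat directory) is an assumption, not a documented contract, and the temp dir plus stub artifact exist only to keep the sketch self-contained:

```shell
# Stand-in project dir and artifact, for illustration only.
DBT_PROJECT_DIR=$(mktemp -d)
mkdir -p "$DBT_PROJECT_DIR/target"
printf '{"results": []}\n' > "$DBT_PROJECT_DIR/target/run_results.json"

# Archive the latest run_results.json under a UTC timestamp so that
# run-history tools like dbt-list-runs have something to read.
DBT_RUN_HISTORY_DIR="$DBT_PROJECT_DIR/run_history"
mkdir -p "$DBT_RUN_HISTORY_DIR"
ts=$(date -u +%Y%m%dT%H%M%SZ)
cp "$DBT_PROJECT_DIR/target/run_results.json" \
   "$DBT_RUN_HISTORY_DIR/run_results_$ts.json"
ls "$DBT_RUN_HISTORY_DIR"
```

Run this after each `dbt build`/`dbt test`, pointing DBT_PROJECT_DIR at your real project instead of the temp dir.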
The quality category assumes columns check_name, check_type, dataset, table_name, status, severity, failure_count, run_at, message on DQ_RESULTS_TABLE, and score_date, scope, tier, completeness_pct, freshness_pct, validity_pct, anomaly_free_pct, overall_score on DQ_SCORE_TABLE. v0.2 will add a configurable column-mapping layer (if your columns differ, the caveats line will flag them).

For Airflow DAG operations (list, runs, task instances, log tail, trigger, clear), install @us-all/airflow-mcp alongside this server.
pnpm install
pnpm run build # tsc → dist/
pnpm test # vitest
pnpm run smoke # spawns dist/index.js, calls initialize + tools/list (set env first)
MIT — see LICENSE.