Scout your PostgreSQL databases with AI - safety features, monitoring, and data quality
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-bluwork-postgres-scout-mcp": {
      "command": "npx",
      "args": ["-y", "postgres-scout-mcp"]
    }
  }
}
From the project's GitHub README.
Scout your PostgreSQL databases with AI - A production-ready Model Context Protocol server with built-in safety features, monitoring, and data quality tools.
You ask:
"How healthy is my production database? Any urgent issues?"
Postgres Scout returns:
Component Breakdown
| Component | Score | Status |
|---|---|---|
| Cache Performance | 94/100 | Healthy |
| Index Efficiency | 82/100 | Good |
| Table Bloat | 61/100 | Needs Attention |
| Connection Usage | 75/100 | Fair |
Issues Found
- orders has 34% bloat (2.1 GB wasted). VACUUM FULL recommended.
- Unused indexes on sessions are consuming 890 MB.
- Cache hit ratio on analytics_events is 71% (target: >90%).
Recommendations
- Run VACUUM FULL orders during a maintenance window.
- Drop the unused indexes idx_sessions_legacy, idx_sessions_old_token, idx_sessions_temp.
- For analytics_events, consider adding to shared_buffers or partitioning by date.
That's getHealthScore — one of 38 tools covering exploration, diagnostics, optimization, monitoring, data quality, and safe writes.
claude mcp add postgres-scout -- npx -y postgres-scout-mcp postgresql://localhost:5432/mydb
Then ask: "Show me the largest tables and whether they have any bloat issues."
Add to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"postgres-scout": {
"command": "npx",
"args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"],
"type": "stdio"
}
}
}
Add to your MCP settings:
{
"postgres-scout": {
"command": "npx",
"args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"]
}
}
The server runs in read-only mode by default. For write operations, run a separate instance:
{
"mcpServers": {
"postgres-scout-readonly": {
"command": "npx",
"args": ["-y", "postgres-scout-mcp", "--read-only", "postgresql://localhost:5432/production"],
"type": "stdio"
},
"postgres-scout-readwrite": {
"command": "npx",
"args": ["-y", "postgres-scout-mcp", "--read-write", "postgresql://localhost:5432/development"],
"type": "stdio"
}
}
}
- listDatabases — databases the user has access to
- getDatabaseStats — size, cache hit ratio, connection info
- listSchemas — all schemas in the current database
- listTables — tables with size and row statistics
- describeTable — columns, constraints, indexes, and more
- executeQuery — run SELECT queries (or writes in read-write mode)
- explainQuery — EXPLAIN plans for performance analysis
- optimizeQuery — optimization recommendations for a specific query
- getHealthScore — overall health score with component breakdown
- detectAnomalies — anomalies in performance, connections, and data
- analyzeTableBloat — bloat analysis for VACUUM planning
- getSlowQueries — slow query analysis (requires pg_stat_statements)
- suggestVacuum — VACUUM recommendations based on dead tuples and bloat
- suggestIndexes — missing index recommendations from query patterns
- suggestPartitioning — partitioning strategies for large tables
- getIndexUsage — identify unused or underused indexes
- getCurrentActivity — active queries and connections
- analyzeLocks — lock contention and blocking queries
- getLiveMetrics — real-time metrics over a time window
- getHottestTables — tables with highest activity
- getTableMetrics — comprehensive per-table I/O and scan stats
- findDuplicates — duplicate rows by column combination
- findMissingValues — NULL analysis across columns
- findOrphans — orphaned records with invalid foreign keys
- checkConstraintViolations — test constraints before adding them
- analyzeTypeConsistency — type inconsistencies in text columns
- exploreRelationships — multi-hop foreign key traversal
- analyzeForeignKeys — foreign key health and performance
- findRecent — rows within a time window
- analyzeTimeSeries — window functions and anomaly detection
- detectSeasonality — seasonal pattern detection
- exportTable — CSV, JSON, JSONL, or SQL
- generateInsertStatements — INSERT statements for migration
- previewUpdate / previewDelete — see what would change before committing
- safeUpdate — UPDATE with dry-run, row limits, empty WHERE protection
- safeDelete — DELETE with dry-run, row limits, empty WHERE protection
- safeInsert — INSERT with validation, batching, ON CONFLICT support
"What are the largest tables and do they have bloat?"
listTables({ schema: "public" })
analyzeTableBloat({ schema: "public", minSizeMb: 100 })
"Find duplicate emails in the users table."
findDuplicates({ table: "users", columns: ["email"] })
"Which queries are slowest and how can I speed them up?"
getSlowQueries({ minDurationMs: 100, limit: 10 })
suggestIndexes({ schema: "public" })
"Show me what's happening on the database right now."
getCurrentActivity()
getLiveMetrics({ metrics: ["queries", "connections", "cache"], duration: 30000, interval: 1000 })
getHottestTables({ limit: 5, orderBy: "seq_scan" })
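The getLiveMetrics durations above are in milliseconds; a quick arithmetic check of what that call collects (plain math, not part of the API):

```typescript
// getLiveMetrics({ duration: 30000, interval: 1000 }):
// a 30 s window sampled every 1 s yields 30 data points per metric
const durationMs = 30000;
const intervalMs = 1000;
const samples = durationMs / intervalMs;
console.log(samples); // 30
```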
"Find orphaned orders that reference deleted customers."
findOrphans({ table: "orders", foreignKey: "customer_id", referenceTable: "customers", referenceColumn: "id" })
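The README doesn't show the SQL behind findOrphans, but under the usual definition of an orphan (a non-NULL foreign key with no matching parent row) the check reduces to an anti-join. A sketch — the function name and shape are illustrative, not the tool's actual internals:

```typescript
// Build the anti-join a findOrphans-style check boils down to.
// A LEFT JOIN with an IS NULL filter keeps only child rows whose
// foreign key points at no existing parent row.
function orphanQuery(
  table: string,
  foreignKey: string,
  referenceTable: string,
  referenceColumn: string
): string {
  return (
    `SELECT t.* FROM ${table} t ` +
    `LEFT JOIN ${referenceTable} r ON t.${foreignKey} = r.${referenceColumn} ` +
    `WHERE t.${foreignKey} IS NOT NULL AND r.${referenceColumn} IS NULL`
  );
}

console.log(orphanQuery("orders", "customer_id", "customers", "id"));
```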
| Variable | Default | Description |
|---|---|---|
| QUERY_TIMEOUT | 30000 | Query timeout in milliseconds |
| MAX_RESULT_ROWS | 10000 | Maximum rows returned per query |
| ENABLE_RATE_LIMIT | true | Enable rate limiting |
| RATE_LIMIT_MAX_REQUESTS | 100 | Requests per window |
| RATE_LIMIT_WINDOW_MS | 60000 | Rate limit window (ms) |
| PGMAXPOOLSIZE | 10 | Connection pool max size |
| PGMINPOOLSIZE | 2 | Connection pool min size |
| PGIDLETIMEOUT | 10000 | Idle connection timeout (ms) |
| ENABLE_LOGGING | false | Enable file logging |
| LOG_DIR | ./logs | Log file directory |
| LOG_LEVEL | info | Log verbosity: debug, info, warn, error |
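These variables can be supplied through the env block of an MCP server entry (Claude Desktop supports this key); the values below are illustrative, only the variable names come from the table above:

```json
{
  "mcpServers": {
    "postgres-scout": {
      "command": "npx",
      "args": ["-y", "postgres-scout-mcp", "postgresql://localhost:5432/mydb"],
      "env": {
        "QUERY_TIMEOUT": "60000",
        "MAX_RESULT_ROWS": "5000",
        "ENABLE_LOGGING": "true",
        "LOG_LEVEL": "debug"
      }
    }
  }
}
```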
CLI flags: --read-only (default), --read-write, --mode <mode>
File logging is disabled by default. Set ENABLE_LOGGING=true to enable it; two log files are created in LOG_DIR.
Connection strings are automatically redacted in all output.
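The README doesn't describe how that redaction works; a minimal sketch of one plausible approach — masking the password component of a postgres:// URL — with the regex and function being hypothetical, not the server's actual code:

```typescript
// Hypothetical redaction sketch: mask the password in a postgres:// or
// postgresql:// connection string. Simplified — assumes a user:password@
// form; strings without a password pass through unchanged.
function redact(connStr: string): string {
  return connStr.replace(/(postgres(?:ql)?:\/\/[^:\/\s]+):[^@\s]+@/i, "$1:***@");
}

console.log(redact("postgresql://admin:s3cret@db.example.com:5432/mydb"));
// postgresql://admin:***@db.example.com:5432/mydb
```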
git clone https://github.com/bluwork/postgres-scout-mcp.git
cd postgres-scout-mcp
pnpm install
pnpm build
pnpm test
Apache-2.0