Server data from the Official MCP Registry
Historical chart-pattern intelligence for AI agents. 9 tools, 25M+ embeddings, 10y equity history.
This is a well-structured MCP server for a financial data service with proper authentication (API key via environment variable), appropriate permissions (network HTTP calls only), and clean code architecture. Minor code quality issues (broad exception handling, no input validation on some fields) are present but do not indicate security vulnerabilities. Permissions align with the server's stated purpose of querying a remote stock pattern database. Supply chain analysis found 7 known vulnerabilities in dependencies (0 critical, 3 high severity). Package verification found 1 issue.
4 files analyzed · 14 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: CHART_LIBRARY_API_KEY
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-grahammccain-chart-library": {
      "command": "uvx",
      "args": [
        "chartlibrary-mcp"
      ],
      "env": {
        "CHART_LIBRARY_API_KEY": "your-chart-library-api-key-here"
      }
    }
  }
}
From the project's GitHub README.
Works with: Claude Desktop | Claude Code | ChatGPT | GitHub Copilot | Cursor | VS Code | Any MCP client
Cohort intelligence engine for stock chart patterns — give your AI agent the cohort of historical analogs, the full forward-return distribution, and the features that separated winners from losers. Calibrated, methodology-honest, no overstated confidence.
📖 What is cohort intelligence? · 🛠️ Full MCP setup guide · 🤖 Build an AI trading agent with Claude
25M+ pattern embeddings. 10 years of history. 19K+ stocks. One tool call.
> "What does NVDA's chart on 2024-08-05 1h look like historically?"
NVDA · 2024-08-05 · 1h — cohort of 500 historical analogs
(485 with realized 5-day returns)
Distribution at 5 days forward:
median: −1.3%
p10 ·· p90: −11.3% ·· +6.8% (80% empirical band)
win rate: 44%
cohort_score: 0.31 (modest)
Features that separated winners from losers:
+ credit_spread_state = tight
+ macro_state = bullish
+ pct_off_52w_low (further off)
− vol_regime = low
Summary: NVDA's 1-hour pattern on 2024-08-05 has 500 historical
analogs. The cohort's 5-day distribution is bearish-leaning
(median −1.3%, win rate 44%) — the historical record does NOT
show this pattern typically resolving bullish. Conditioning on
tight credit spreads and a bullish macro state would have
separated the outperformers within the cohort.
A retrieval, not a forecast. No hallucinated predictions. No cherry-picking. Just the empirical record your agent can cite.
pip install chartlibrary-mcp
Download the chart-library-1.1.1.mcpb extension file and open it with Claude Desktop for automatic installation.
claude mcp add chart-library -- chartlibrary-mcp
Add to claude_desktop_config.json:
{
  "mcpServers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}
Add to .cursor/mcp.json or VS Code MCP settings:
{
  "servers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}
Add to .vscode/mcp.json in your project (this file is already included in the chart-library repos):
{
  "servers": {
    "chart-library": {
      "command": "chartlibrary-mcp",
      "env": {
        "CHART_LIBRARY_API_KEY": "cl_your_key"
      }
    }
  }
}
Copilot Chat will auto-detect the MCP server when you open the project. Use @mcp in Copilot Chat to invoke tools.
ChatGPT connects to MCP servers via remote HTTP endpoints. To set up:
https://chartlibrary.io/mcp
Note: The remote endpoint at https://chartlibrary.io/mcp uses Streamable HTTP transport. If you need SSE fallback, use https://chartlibrary.io/mcp/sse.
For any MCP client that supports remote HTTP connections:
https://chartlibrary.io/mcp
This endpoint supports both Streamable HTTP and SSE transports; no local installation is required.
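For clients that accept a remote server URL directly, the configuration typically reduces to the endpoint itself. The `"url"` key below is an assumption for illustration; the exact field name varies by client, so check your client's documentation:

```json
{
  "mcpServers": {
    "chart-library": {
      "url": "https://chartlibrary.io/mcp"
    }
  }
}
```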
Free tier: 200 calls/day, no credit card required. Get an API key at chartlibrary.io/developers or use basic search without one.
> get_exit_signal("TSLA")
Signal: HOLD (confidence: 72%)
Similar patterns that exited early: 3/10 would have avoided a drawdown
Similar patterns that held: 7/10 gained an additional +2.1% over 5 days
Recommendation: Pattern suggests continuation. No exit signal triggered.
> get_sector_rotation()
Leaders (30-day relative strength):
1. XLK Technology +4.2%
2. XLY Cons. Disc. +3.1%
3. XLC Communication +2.8%
Laggards:
9. XLU Utilities -1.4%
10. XLP Cons. Staples -2.1%
11. XLRE Real Estate -3.3%
Regime: Risk-On (growth > defensives)
> run_scenario("AMD", spy_change=-3.0)
When SPY fell ~3%, AMD historically:
Median move: -5.2%
Best case: +1.1%
Worst case: -11.4%
Positive: 18% of the time
AMD shows 1.7x beta to SPY downside moves.
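The scenario numbers above are ordinary conditional statistics. As a rough sketch of what such a query computes — using made-up paired returns, not Chart Library's actual data or implementation — you can filter days where SPY moved near the scenario level and summarize the stock's moves on those days:

```python
import statistics

def scenario_stats(pairs, spy_change=-3.0, tol=0.5):
    """Conditional stats for stock returns on days when SPY moved ~spy_change%.

    pairs: list of (spy_return_pct, stock_return_pct). All data hypothetical.
    """
    hits = [stock for spy, stock in pairs if abs(spy - spy_change) <= tol]
    if not hits:
        return None
    return {
        "median": statistics.median(hits),
        "best": max(hits),
        "worst": min(hits),
        "pct_positive": 100 * sum(r > 0 for r in hits) / len(hits),
        # Downside beta approximated as median stock move over the SPY move
        "downside_beta": statistics.median(hits) / spy_change,
    }

# Hypothetical sample: (SPY %, stock %) on selected days
sample = [(-3.1, -5.0), (-2.8, -4.6), (-3.0, -6.0), (-3.3, 1.1), (-0.5, 0.2)]
print(scenario_stats(sample))
```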
Chart Library v5 ships a clean 8-tool surface. Chain them via cohort_id handles for sub-second refinement without re-running kNN.
| Tool | What it does |
|---|---|
| search | Entry point. Find similar historical patterns for an anchor; returns a cohort_id you can chain. mode= supports text (default), live_bars (raw OHLCV), similar (cohort-level neighbors). |
| cohort | The core primitive. Conditional distribution analysis. depth="basic" returns kNN + outcome distribution; depth="full" adds Layer 3 feature importance + regime stratification + risk profile; depth="compare" pits two anchors side-by-side. Filters across regime / sector / liquidity / event. |
| discover | What's interesting today. mode="picks" (cohort-ranked top picks), mode="daily_setups" (pre-enriched briefs in one call), mode="risk_adjusted" (Sharpe-ranked). |
| analyze | Analytic metrics. metric= accepts anomaly, volume_profile, crowding, correlation_shift, earnings_reaction, pattern_degradation, regime_accuracy, decompose (slice winners vs losers), clusters (cohort-internal grouping). |
| context | Situational data. target= accepts "market", a ticker symbol ("NVDA"), {"symbol": ..., "date": ...} for lightweight anchor metadata, or "system" for DB coverage. |
| narrative | News intelligence. mode="pulse" (single-symbol narrative-change score + FinBERT sentiment) or mode="alerts" (market-wide divergence anomalies). |
| explain | Narrative + rankings derived from a cohort. style= accepts filter_ranking (which filter shifts the distribution most), prose (plain-English summary), position_guidance (exit signals), risk_ranking. |
| portfolio | Multi-holding analysis OR per-symbol track record. mode="basic" (multi-holding weighted cohort) or mode="symbol_intel" (per-symbol Layer 5 memory). |
Plus report_feedback for filing errors / suggestions back to the project.
These tools replace hallucinated "on average this pattern returns X%" with real conditional base rates. The full picture — what each tool does and how to read its responses — is documented at /concepts/cohort-intelligence and /concepts/reading-a-cohort-response.
1. search(query="NVDA 2024-06-18") → cohort_id
2. cohort(symbol="NVDA", date="2024-06-18", depth="full",
filters={"vol_regime": ["high"]})
→ Layer 3 distribution + features
3. explain(cohort_id=..., style="filter_ranking") → which filter matters most
4. cohort(symbol=..., date=..., depth="full",
filters={...refined...}) → re-conditioned distribution
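The refinement loop above can be sketched as plain tool-call payloads. The tool and parameter names follow the table above, but the request envelope (`"tool"` / `"arguments"` keys) and the cohort_id value are assumptions for illustration — inspect the server's tool schemas at runtime for the real shapes:

```python
# Step 1: search returns a cohort handle (value below is hypothetical)
step1 = {"tool": "search", "arguments": {"query": "NVDA 2024-06-18"}}
cohort_id = "coh_abc123"  # pretend the server answered with this

# Step 2: full-depth cohort analysis, conditioned on a volatility regime
step2 = {
    "tool": "cohort",
    "arguments": {
        "symbol": "NVDA",
        "date": "2024-06-18",
        "depth": "full",
        "filters": {"vol_regime": ["high"]},
    },
}

# Step 3: chain the handle instead of re-running kNN
step3 = {
    "tool": "explain",
    "arguments": {"cohort_id": cohort_id, "style": "filter_ranking"},
}
print(step1, step2, step3)
```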
v5 reduces the surface from 19 active tools to 8 composite tools. Twelve previously active tools (cohort_analyze, cohort_compare, decompose, clusters, live_search, similar_cohorts, symbol_intelligence, anchor_fetch, narrative_pulse, narrative_alerts, discover_picks, get_daily_setups) are retained as DEPRECATED wrappers that forward to the canonical tools, so v4 callers keep working unchanged. New agents should reach for the 8 canonical tools.
The v3-era tools (search_charts, get_cohort_distribution, etc.) have been removed in v5. If your code still calls them, pin chartlibrary-mcp<5.0.0 until you migrate to the canonical surface. The mapping:
| Legacy (removed in v5) | Replacement |
|---|---|
| search_charts, search_batch, get_discover_picks | search / discover |
| get_cohort_distribution, refine_cohort_with_filters, run_scenario, get_regime_win_rates, compare_to_peers | cohort |
| detect_anomaly, get_volume_profile, get_crowding, get_earnings_reaction, get_correlation_shift, get_pattern_degradation, get_regime_accuracy | analyze (metric=) |
| get_sector_rotation, get_status, get_market_context | context |
| get_pattern_summary, explain_cohort_filters, get_exit_signal, get_risk_adjusted_picks | explain (style=) |
| get_portfolio_health | portfolio |
| analyze_pattern, get_follow_through, check_ticker | search + cohort (+ optional explain) |
| Previously active in v4 (now DEPRECATED in v5) | Replacement |
|---|---|
| cohort_analyze | cohort(depth="full") |
| cohort_compare | cohort(depth="compare", compare_with={...}) |
| decompose, clusters | analyze(metric="decompose") / analyze(metric="clusters") |
| live_search, similar_cohorts | search(mode="live_bars") / search(mode="similar") |
| symbol_intelligence | portfolio(mode="symbol_intel") |
| anchor_fetch | context(target={"symbol": ..., "date": ...}) |
| narrative_pulse, narrative_alerts | narrative(mode="pulse") / narrative(mode="alerts") |
| discover_picks, get_daily_setups | discover(mode="picks") / discover(mode="daily_setups") |
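The deprecated-to-canonical mapping above is mechanical enough to express as a small shim. This sketch follows the table; argument handling is illustrative and the real wrappers on the server may carry extra fields:

```python
# Map a deprecated v4 tool call onto its canonical v5 (tool, extra-args) form.
V4_TO_V5 = {
    "cohort_analyze":      ("cohort",    {"depth": "full"}),
    "cohort_compare":      ("cohort",    {"depth": "compare"}),
    "decompose":           ("analyze",   {"metric": "decompose"}),
    "clusters":            ("analyze",   {"metric": "clusters"}),
    "live_search":         ("search",    {"mode": "live_bars"}),
    "similar_cohorts":     ("search",    {"mode": "similar"}),
    "symbol_intelligence": ("portfolio", {"mode": "symbol_intel"}),
    "narrative_pulse":     ("narrative", {"mode": "pulse"}),
    "narrative_alerts":    ("narrative", {"mode": "alerts"}),
    "discover_picks":      ("discover",  {"mode": "picks"}),
    "get_daily_setups":    ("discover",  {"mode": "daily_setups"}),
}

def translate(tool, arguments):
    """Rewrite a legacy call as (canonical_tool, merged_arguments)."""
    canonical, extra = V4_TO_V5.get(tool, (tool, {}))
    return canonical, {**arguments, **extra}

print(translate("decompose", {"symbol": "NVDA"}))
```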
Chart Library indexes a large library of historical chart patterns and exposes them behind a conditional-distribution API. Every query returns sample sizes, percentiles, and calibrated forward-return bands — never a point forecast.
When your agent calls analyze_pattern("NVDA") (or search + cohort on the v5 surface), the server retrieves the cohort of similar historical patterns and computes the empirical forward-return distribution across them.
The result: factual, citation-ready statements like "out of N similar historical patterns, the median 5-day return was X% (80% band [p10, p90])" that your agent can present without hallucinating or hedging.
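The headline numbers in such a statement (median, the p10/p90 band, win rate) are plain empirical quantities. Assuming you had the cohort's realized forward returns as a list, they could be reproduced like this — the returns below are made up, not Chart Library data:

```python
import statistics

def cohort_stats(returns_pct):
    """Empirical summary of a cohort's forward returns (in percent)."""
    deciles = statistics.quantiles(returns_pct, n=10)  # 9 decile cut points
    return {
        "n": len(returns_pct),
        "median": statistics.median(returns_pct),
        "p10": deciles[0],   # lower edge of the 80% empirical band
        "p90": deciles[-1],  # upper edge of the 80% empirical band
        "win_rate": 100 * sum(r > 0 for r in returns_pct) / len(returns_pct),
    }

# Hypothetical realized 5-day returns for a small cohort
returns = [-11.0, -6.2, -3.1, -1.5, -0.8, 0.4, 1.9, 3.5, 5.0, 7.2]
print(cohort_stats(returns))
```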
| Tier | Calls/day | Price |
|---|---|---|
| Sandbox | 200 | Free |
| Builder | 5,000 | $29/mo |
| Scale | 50,000 | $99/mo |
Get your key at chartlibrary.io/developers.
export CHART_LIBRARY_API_KEY=cl_your_key
Chart Library provides historical pattern data for informational purposes. Not financial advice.