Server data from the Official MCP Registry
MCP server for searching, exploring, and aggregating Disney Lorcana cards.
Valid MCP server (0 strong, 3 medium validity signals). 4 known CVEs in dependencies (1 critical, 1 high severity). Imported from the Official MCP Registry.
6 files analyzed · 4 issues found
Add this to your MCP configuration file:
```json
{
  "mcpServers": {
    "io-github-danielenricocahall-lorcana-mcp": {
      "args": [
        "lorcana-mcp"
      ],
      "command": "uvx"
    }
  }
}
```

From the project's GitHub README:
An MCP server for searching and aggregating Disney Lorcana cards.
On startup, the server fetches a JSON list of cards from https://danielenricocahall.github.io/lorcana-mcp/allCards.json. The snapshot is refreshed daily by data_pipeline/fetch_cards.py, which pulls from the Lorcast API, normalizes each card into our internal schema, and publishes the list to the gh-pages branch. The middle layer insulates running containers from Lorcast's availability and rate limits — the runtime never calls Lorcast directly.
Cards are kept in memory as a Python list for fast filtering. With ~2,270 unique cards (each carrying a printings array for its alternate sets/rarities), this is lightweight and requires no external database. A local JSON file cache (`LORCANA_CACHE_PATH`, default `cards.json`) lets the server skip the network fetch on subsequent startups.
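The cache-first load path described above can be sketched as follows. This is an illustrative sketch, not the server's actual code: the function name `load_cards` and the exact fallback behavior are assumptions.

```python
# Sketch of the startup load path: prefer the on-disk cache, fall back
# to the published JSON snapshot. Illustrative only; names are assumed.
import json
import os
import urllib.request

LORCANA_API = os.environ.get(
    "LORCANA_API",
    "https://danielenricocahall.github.io/lorcana-mcp/allCards.json",
)
CACHE_PATH = os.environ.get("LORCANA_CACHE_PATH", "cards.json")


def load_cards() -> list[dict]:
    """Return the card list, skipping the network if a populated cache exists."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, encoding="utf-8") as f:
            cards = json.load(f)
        if cards:  # non-empty cache: no network fetch needed
            return cards
    # Cache missing or empty: fetch the daily snapshot and persist it.
    with urllib.request.urlopen(LORCANA_API, timeout=60) as resp:
        cards = json.load(resp)
    with open(CACHE_PATH, "w", encoding="utf-8") as f:
        json.dump(cards, f)
    return cards
```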
Startup data loading is controlled by:

- `LORCANA_REFRESH_ON_STARTUP`:
  - `true`: always fetch from the API and repopulate storage
  - `false`: use the existing cache if available
- `LORCANA_SKIP_IF_DB_EXISTS`:
  - `true` (default): skip the API fetch if the cache file already contains cards
  - `false`: fetch and repopulate

The server is published to GHCR and the MCP Registry. Pull and run it directly:
```shell
docker pull ghcr.io/danielenricocahall/lorcana-mcp:latest
docker run --rm -i ghcr.io/danielenricocahall/lorcana-mcp:latest
```
To persist the card cache across container restarts, mount a volume:
```shell
docker run --rm -i \
  -e LORCANA_CACHE_PATH=/data/cards.json \
  -e LORCANA_SKIP_IF_DB_EXISTS=true \
  -v lorcana_mcp_data:/data \
  ghcr.io/danielenricocahall/lorcana-mcp:latest
```
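Taken together, `LORCANA_REFRESH_ON_STARTUP` and `LORCANA_SKIP_IF_DB_EXISTS` control a decision along these lines (an illustrative sketch, not the server's actual code; `should_fetch` and `_flag` are hypothetical names):

```python
# Sketch of the startup fetch decision implied by the two flags.
import os


def _flag(name: str, default: bool) -> bool:
    """Read a boolean environment flag, falling back to the documented default."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")


def should_fetch(cache_has_cards: bool) -> bool:
    """Decide whether to hit the API on startup."""
    if _flag("LORCANA_REFRESH_ON_STARTUP", False):
        return True  # forced refresh always wins
    if _flag("LORCANA_SKIP_IF_DB_EXISTS", True) and cache_has_cards:
        return False  # populated cache: skip the network
    return True  # no usable cache: must fetch
```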
To run from a local checkout:

```shell
uv run python main.py
```

To build and run the image yourself:

```shell
docker build -t lorcana-mcp:latest .
docker run --rm -i lorcana-mcp:latest
```

Or with Docker Compose:

```shell
docker compose build
docker compose run --rm -T lorcana-mcp
```
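The compose commands above assume a `docker-compose.yml` along these lines. This is a sketch consistent with the documented flags and volume mount, not the repository's actual file:

```yaml
# Hypothetical compose file; service and volume names are assumptions.
services:
  lorcana-mcp:
    build: .
    image: lorcana-mcp:latest
    stdin_open: true  # MCP communicates over stdio
    environment:
      LORCANA_CACHE_PATH: /data/cards.json
      LORCANA_SKIP_IF_DB_EXISTS: "true"
    volumes:
      - lorcana_mcp_data:/data

volumes:
  lorcana_mcp_data:
```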
Notes:
- `LORCANA_API` (default: `https://danielenricocahall.github.io/lorcana-mcp/allCards.json`)
- `LORCANA_CACHE_PATH` (default: `cards.json`) — local file for caching fetched cards
- `LORCANA_HTTP_TIMEOUT_SECONDS` (default: `60`)
- `LORCANA_REFRESH_ON_STARTUP` (default: `false`)
- `LORCANA_SKIP_IF_DB_EXISTS` (default: `true`)

Running from a local checkout:

```json
{
  "mcpServers": {
    "lorcana": {
      "command": "uv",
      "args": ["run", "python", "/absolute/path/to/lorcana-mcp/main.py"]
    }
  }
}
```
Using the GHCR image:

```json
{
  "mcpServers": {
    "lorcana": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "ghcr.io/danielenricocahall/lorcana-mcp:latest"
      ]
    }
  }
}
```
Using a locally built image:

```json
{
  "mcpServers": {
    "lorcana": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "lorcana-mcp:latest"
      ]
    }
  }
}
```
Using Docker Compose:

```json
{
  "mcpServers": {
    "lorcana": {
      "command": "docker",
      "args": ["compose", "run", "--rm", "-T", "lorcana-mcp"]
    }
  }
}
```
To register with Claude Code:

```shell
claude mcp add --scope user \
  -- lorcana docker run --rm -i \
  ghcr.io/danielenricocahall/lorcana-mcp:latest
```

Or, for a locally built image:

```shell
claude mcp add --scope user \
  -- lorcana docker run --rm -i lorcana-mcp:latest
```
Once connected to an MCP client, you can ask natural-language questions covering:

- Card lookup
- Deck building
- Keyword & ability search
- Stats & aggregations
- Cross-filter queries
Note: For plain keyword queries (Evasive, Bodyguard, Shift, etc.) use the `keyword` parameter — it filters against the structured ability list and is more reliable than substring search. For value-specific queries like `Singer 5` or `Resist +2`, use `body_text` (keyword values live in the card's full text, not the ability list).
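The distinction can be illustrated on a toy card record. The field names `keywords` and `full_text` here are assumptions for illustration, not necessarily the server's internal schema:

```python
# Toy card record showing why keyword vs body_text matters.
card = {
    "name": "Cinderella - Ballroom Sensation",
    "keywords": ["Singer"],  # structured ability list (no values)
    "full_text": "Singer 3 (This character counts as cost 3 to sing songs.)",
}


def match_keyword(card: dict, kw: str) -> bool:
    # Exact match against the structured list: what `keyword` filters on.
    return kw.lower() in (k.lower() for k in card["keywords"])


def match_body_text(card: dict, text: str) -> bool:
    # Substring search over the full card text: what `body_text` filters on.
    return text.lower() in card["full_text"].lower()


match_keyword(card, "Singer")      # True: plain keyword query
match_keyword(card, "Singer 3")    # False: the value is not in the ability list
match_body_text(card, "Singer 3")  # True: values live in the full text
```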
- `search_cards` — filter and retrieve card objects (supports `response_format="toon"` for ~10% fewer tokens)
- `count_cards` — count cards matching a filter without returning full objects
- `aggregate_cards` — card counts grouped by cost (ink curve), rarity, color, set_code, or type
- `resolve_card` — fuzzy-match an informal/partial/misspelled card name to the closest cards (returns full card data)
- `top_traits` — most common traits across all cards
- `export_deck` — render a deck as a Dreamborn/Pixelborn-compatible text deck list
- `import_deck` — parse a Dreamborn/Pixelborn-style deck list, returning resolved cards plus any unresolved lines with fuzzy candidates
- `validate_deck` — check a deck against the format rules (≥60 cards, max 4 copies, ≤2 inks); returns `{legal, total_cards, inks, violations}`
- `deck_stats` — compute ink curve, color split, inkable count, and type breakdown for a deck
- `server_status` — startup metadata (card count, config)
- `build_deck(colors, playstyle="balanced")` — guides the model through assembling a legal Lorcana deck (60-card minimum, ≤2 inks, max 4 copies of any card) for the requested color(s) and playstyle (aggressive / control / lore-race / balanced). Uses the search/aggregate tools above plus the rules embedded in the server instructions.

`search_cards` accepts a `response_format` argument:
"json" (default) — list of card objects, unchanged from prior versions."toon" — a TOON string with one column header line and one row per card, encoded by the toons Rust-backed library (the official community reference implementation).Example (search_cards(name="elsa", limit=2, response_format="toon")):
```
cards[2]:
- id: crd_01c4835a62df4960bb973aeff81f2bb2
  name: Elsa
  version: Ice Maker
  full_name: Elsa - Ice Maker
  cost: 7
  ...
  printings[3]{set_code,set_name,number,rarity}:
    "7",Archazia's Island,69,Super Rare
    C2,Lorcana Challenge Year 3,2,Promo
    C2,Lorcana Challenge Year 3,6,Promo
- id: crd_04bca46a8e2d4e9ba0fbdbfc6c99e51e
  name: Elsa
  ...
```
The outer cards[2]: falls back to YAML-style per-card blocks (rather than a single tabular table) because card shapes vary — Actions and Items don't carry strength/willpower/lore, for example. The inner printings[N]{...}: block is fully tabular since every printing has the same four fields.
Measured with `benchmarks/bench_toon.py` against the live ~2,270-card dataset (post-consolidation), tokenizing with tiktoken `cl100k_base` (used as a proxy for Claude's tokenizer):
| query | rows | JSON tokens | TOON tokens | Δ |
|---|---|---|---|---|
| `color="amber", limit=200` | 200 | 43,672 | 39,282 | −10.1% |
| `color="ruby", limit=50` | 50 | 10,446 | 9,464 | −9.4% |
| `card_type="action", limit=50` (sparse cols) | 50 | 10,150 | 9,265 | −8.7% |
| `body_text="when", limit=50` (long full_text) | 50 | 11,574 | 10,380 | −10.3% |
| `name="elsa", limit=20` | 14 | 3,456 | 2,925 | −15.4% |
| **total** | 364 | 79,298 | 71,316 | −10.1% |
Note: TOON's relative savings are smaller here than they were before the printings consolidation (pre-PR-#29 the same queries showed ~50% reductions). That gap is structural to the nested printings array — TOON's columnar encoding wins on the top-level fields but falls back to JSON-style encoding inside the per-printing entries, so the array dilutes the relative gain. Absolute token counts are still down meaningfully versus the equivalent count of pre-consolidation rows since each unique card is now represented once with a small printings list rather than as 1-3 separate full rows.
Reproduce with `PYTHONPATH=. uv run python benchmarks/bench_toon.py` (requires a populated `cards.json` cache).
This is a personal, unofficial fan and engineering project. It is not affiliated with, endorsed by, sponsored by, or reviewed by Disney, Ravensburger, or the Disney Lorcana TCG team. I worked only with publicly available/community data sources. All Disney Lorcana TCG names, card text, trademarks, and related intellectual property belong to Disney and Ravensburger. This project is non-commercial and reflects my personal views only, not those of my employer.