Server data from the Official MCP Registry
MCP server for the full Pāli Canon — search, cite, compare translations. Offered as Dhamma Dāna.
Remote endpoints:
- streamable-http: https://mcp.tripitaka-mcp.com/mcp
- sse: https://mcp.tripitaka-mcp.com/sse
The Tripitaka MCP server is a well-structured Buddhist canon search tool with generally sound security practices. However, there are input validation gaps in keyword search that could enable SQL injection, missing authentication on all endpoints, and insufficient rate-limiting on the MCP transport. Permissions are appropriate for a developer tool serving structured data. Supply chain analysis found 15 known vulnerabilities in dependencies (1 critical, 3 high severity).
3 files analyzed · 25 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Available as Local & Remote
This plugin can run on your machine or connect to a hosted endpoint; you choose during install.
From the project's GitHub README.
An MCP Server for searching and citing content from the Pāli Tipiṭaka. Gives AI agents (such as Claude or Cursor) the ability to look up suttas, quote the teachings, and compare translations across languages.
🙏 This project is offered as Dhamma Dāna — 100% free, non-commercial only. License details: LICENSE (code) + NOTICE.md (data)
Highlights:
- Coverage across all three piṭakas (Abhidhamma is Pāli-only: no English translations exist in bilara-data for any Abhidhamma book). Live counts via list_structure.
- Fetch any text by ID (mn1, pli-tv-bu-vb-pj1, patthana1.1) and generate properly formatted academic citations.
- Pāli morphology parsing (bhikkhūnaṁ → bhikkhu).
- Legacy SSE (/sse) and canonical Streamable HTTP (/mcp, MCP spec 2025-03-26).
- MCP resources: tripitaka://structure, tripitaka://sutta/{id}, tripitaka://word/{w} for clients that pin context as resources.
- /topics/* — six markdown pages covering canon structure, getting-started + tool selection, places (Mahājanapada + holy sites + cosmology), 10 foundational themes with locus classicus, ~30 major figures, and a phase-based timeline of the Buddha's 45-year mission. Sutta IDs verified against live data; AI clients can fetch a page in one shot instead of running 30+ tool calls.
- skills/tipitaka-research.md ships a ready-to-install workflow file that activates a multi-step research pattern (clarify → verify coverage → search → drill in → cite) on Claude Desktop / Claude Code.

| Technology | Role |
|---|---|
| Python + FastMCP | MCP Server |
| PostgreSQL + pgvector | Database + Vector Search |
| sentence-transformers | Embeddings for semantic search |
| Docker Compose | Infrastructure |
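The semantic-search half of this stack reduces to nearest-neighbour search over sentence embeddings stored in pgvector. As a toy illustration of the underlying similarity measure (plain Python on hand-made vectors, not the real model or database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors.
    pgvector's <=> operator computes cosine *distance* (1 - similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score 1.0; orthogonal vectors score 0.0.
print(cosine([1.0, 2.0], [2.0, 4.0]))
```

In the real pipeline, sentence-transformers produces the vectors and PostgreSQL does the neighbour search; the arithmetic being ranked is the same.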
The maintainers run a free public instance at tripitaka-mcp.com.
| Endpoint | Use |
|---|---|
| https://mcp.tripitaka-mcp.com/mcp | Streamable HTTP (MCP spec 2025-03-26) |
| https://mcp.tripitaka-mcp.com/sse | Legacy SSE (older clients) |
Connect Claude Desktop in three steps (no install, no Docker, no GPU — you just need Node.js):
1. Find your absolute npx path. Claude Desktop doesn't read your shell profile, so a bare npx won't resolve. Open a terminal:
which npx
# example: /Users/you/.nvm/versions/node/v22.14.0/bin/npx
2. Open claude_desktop_config.json (~/Library/Application Support/Claude/ on macOS, %APPDATA%\Claude\ on Windows) and add the entry below — substitute YOUR_NPX_PATH with the output from step 1, and YOUR_NODE_BIN_DIR with that path's parent directory:
{
"mcpServers": {
"tripitaka": {
"command": "YOUR_NPX_PATH",
"args": ["-y", "mcp-remote", "https://mcp.tripitaka-mcp.com/mcp"],
"env": { "PATH": "YOUR_NODE_BIN_DIR:/usr/local/bin:/usr/bin:/bin" }
}
}
}
3. Quit Claude Desktop completely (⌘Q on macOS, tray → Quit on Windows) and reopen. The 🔌 indicator in the bottom-left should show tripitaka with 10 tools available.
First connection takes 5–10 seconds while npx downloads mcp-remote on demand — give Claude Desktop a moment after restart before assuming it failed.
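If you prefer to script steps 1–2 rather than edit the file by hand, the config entry can be generated programmatically. A minimal sketch (it only prints the JSON; writing it into claude_desktop_config.json is left to you):

```python
import json
import os
import shutil

# Step 1: resolve the absolute npx path. Falls back to a common
# location if npx is not on this process's PATH.
npx = shutil.which("npx") or "/usr/local/bin/npx"

# Step 2: build the "tripitaka" entry with that path and its bin dir.
entry = {
    "mcpServers": {
        "tripitaka": {
            "command": npx,
            "args": ["-y", "mcp-remote", "https://mcp.tripitaka-mcp.com/mcp"],
            # PATH must include the node bin dir so mcp-remote can find node
            "env": {"PATH": os.path.dirname(npx) + ":/usr/local/bin:/usr/bin:/bin"},
        }
    }
}
print(json.dumps(entry, indent=2))
```

Merge the printed object into your existing config rather than overwriting it if you already have other mcpServers entries.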
Once connected, ask Claude about the suttas in natural language. It will pick the right tool, fetch the canonical Pāli, and surface clickable links back to SuttaCentral for verification.
The hosted server is rate-limited (10 req/10s + 60 req/min per IP) and offered for personal study, research, and dhamma practice — see NOTICE.md before redistributing or using commercially.
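A client that batches many requests should stay under those published limits (10 req/10 s and 60 req/min per IP). A sketch of a client-side sliding-window limiter matching them (the class name and structure are my own, not part of the project):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Tracks request timestamps and reports how long to wait before
    the next request stays within every configured window."""

    def __init__(self, limits=((10, 10.0), (60, 60.0))):
        self.limits = limits      # (max_requests, window_seconds) pairs
        self.sent = deque()       # timestamps of past requests

    def wait_time(self, now=None):
        """Seconds to sleep before the next request is safe to send."""
        now = time.monotonic() if now is None else now
        delay = 0.0
        for max_req, window in self.limits:
            recent = [t for t in self.sent if now - t < window]
            if len(recent) >= max_req:
                # the oldest request in the window must age out first
                delay = max(delay, recent[0] + window - now)
        return delay

    def record(self, now=None):
        """Call once per request actually sent."""
        self.sent.append(time.monotonic() if now is None else now)
```

Typical use: sleep for wait_time(), send the request, then record().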
git clone https://github.com/dhamma-seeker/tripitaka-mcp.git
cd tripitaka-mcp
./scripts/install.sh
The installer downloads a prepared database dump from Hugging Face (dhamma-seeker/tripitaka-mcp-dump) and restores it automatically, cutting setup time from 2–4 hours (loading data + generating embeddings) down to ~5 minutes. If a local dump file already exists, the local copy is used instead.
The installer will:
- check that docker, compose, openssl, and curl are installed
- create .env with random passwords (for both the admin and the readonly user)

Options:
./scripts/install.sh --dump PATH # use an existing dump file
./scripts/install.sh --dump-url URL # override the dump source
./scripts/install.sh --no-dump # skip restore (load data yourself later)
git clone https://github.com/dhamma-seeker/tripitaka-mcp.git
cd tripitaka-mcp
cp .env.example .env
# Set POSTGRES_PASSWORD in .env to a random password
docker compose up db -d
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# 1. Seed metadata (pitaka, nikāya)
python scripts/seed_metadata.py
# 2. Download & load Sutta Piṭaka data from SuttaCentral
python scripts/data_loader.py
# 3. Load Thai CC0 translations (Dhīranando & Jayasāro)
python scripts/load_thai_cc0.py
# 4. Load dictionaries (DPD, PTS, DPPN, and the Payutto dictionary)
python scripts/load_dictionary.py
# 5. Generate embeddings for semantic / hybrid search
python scripts/generate_embeddings.py
python main.py
The project supports Postman testing in SSE mode:

MCP_TRANSPORT=sse python main.py

To deploy to production without re-loading the data and re-running the embedding model, restoring from a database dump is the recommended path.
docker compose -f docker-compose.prod.yml up -d --build
The production stack runs 3 services:
- db — PostgreSQL + pgvector (internal only, no exposed port)
- mcp-server — FastMCP (runs as a readonly user, read-only FS, cap_drop: ALL)
- caddy — reverse proxy + Let's Encrypt + rate limit (10 req/10s and 60 req/1 min per IP)

For an extra hardening layer, front Caddy with Cloudflare (DNS proxy + rate-limit rules + DDoS protection on the free tier).
👉 Full details: DEPLOYMENT.md
The repo ships claude_desktop_config.example.json with three ready-to-use entries — copy whichever fits your setup into claude_desktop_config.json (~/Library/Application Support/Claude/ on macOS, %APPDATA%\Claude\ on Windows), then edit the absolute paths:
| Entry | When to use | Transport |
|---|---|---|
| tripitaka-local | You ran the installer locally on the same machine as Claude Desktop | stdio (no network) |
| tripitaka-remote | You self-hosted the server on a VPS and want the modern transport | Streamable HTTP (/mcp) |
| tripitaka-remote-sse | Your client doesn't support Streamable HTTP yet | Legacy SSE (/sse) |
The remote entries route through mcp-remote — Claude Desktop ↔ npx bridge ↔ remote MCP. The example file has annotated comments explaining each field; remove the _comment keys before saving.
Heads-up for nvm users:
command and env.PATH need absolute node paths — Claude Desktop doesn't read your shell profile. Find the right paths with which npx / which python while your normal shell is active.
For Claude Desktop / Claude Code users, copying the bundled skill activates the multi-step research workflow automatically:
mkdir -p ~/.claude/skills
cp skills/tipitaka-research.md ~/.claude/skills/
# Restart Claude Desktop (Cmd+Q then reopen) to pick up the skill
Details in skills/README.md.
| Tool | Description |
|---|---|
| search_hybrid | (Recommended for concept search) Combined keyword + semantic via RRF — best when looking for "discourses about X". |
| search_by_keyword | Trigram keyword search — best for exact word lookups (appamāda, ānāpānassati). |
| search_semantic | Pure vector similarity — usually you want search_hybrid instead. |
| get_sutta | Fetch a full sutta by ID (e.g. mn1, dn22, dhp1-20) — returns every segment with cross-reference URLs. |
| get_reference | Generate a properly formatted academic citation with all source URLs. |
| compare_translations | Compare renderings of a single segment across editions. |
| list_structure | Show the Tipiṭaka structure with segment-count coverage per nikāya. |
| list_editions | List Thai/English translation editions currently loaded. |
| get_word_definition | Pāli dictionary lookup (PTS, DPPN, and the Payutto Thai dictionary). |
| parse_pali_word | Strip Pāli suffixes to recover the root form when get_word_definition misses (bhikkhūnaṁ → bhikkhu). |
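The idea behind parse_pali_word (strip an inflectional ending, recover a dictionary form) can be sketched with a tiny rule table. The rules below are illustrative examples I chose for the sketch, not the server's actual morphology data:

```python
# Illustrative rules only: (inflectional ending, stem-final vowel).
SUFFIX_RULES = [
    ("asmiṁ", "a"),
    ("ūnaṁ", "u"),    # bhikkhūnaṁ -> bhikkhu (gen./dat. plural)
    ("ānaṁ", "a"),    # dhammānaṁ  -> dhamma
    ("assa", "a"),    # buddhassa  -> buddha
    ("ena", "a"),     # cittena    -> citta
]

def guess_stem(word: str) -> str:
    """Return a candidate dictionary form by applying the longest
    matching suffix rule; fall back to the word unchanged."""
    for suffix, stem_vowel in sorted(SUFFIX_RULES, key=lambda r: -len(r[0])):
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)] + stem_vowel
    # no rule matched: the word may already be a stem
    return word
```

A real parser needs many more rules plus sandhi handling, which is presumably why the server pairs this tool with get_word_definition as a fallback path.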
search_semantic — the vector index is built only on text_pali (SuttaCentral's bilara-data does not yet include Thai translations) using a multilingual MiniLM model that is not specifically trained on Pāli. As a result:
- For exact Pāli terms such as appamāda, search_by_keyword is more precise
- search_hybrid (keyword + semantic) tolerates this limitation best

Upgrading to a Pāli-trained embedding model (e.g. bge-m3) plus embedding the Thai edition is on the roadmap.
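The Reciprocal Rank Fusion step behind search_hybrid can be sketched in a few lines. This is a generic RRF implementation, not the server's code; k=60 is the constant from the original RRF paper, and the project's actual constant is not documented here:

```python
def rrf_merge(keyword_ranked, semantic_ranked, k=60):
    """Fuse two ranked lists of document IDs.
    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked well by both retrievers float to the top."""
    scores = {}
    for ranked in (keyword_ranked, semantic_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A sutta that only the keyword pass finds (an exact appamāda hit, say) still surfaces, while one both passes agree on outranks it.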
tripitaka-mcp/
├── main.py # Main MCP Server (10 tools + 3 resources)
├── db/
│ ├── connection.py # Database connection pool
│ └── schema.py # Schema (supports translation table)
├── embedding/
│ └── model.py # SentenceTransformer wrapper
├── scripts/
│ ├── install.sh # One-shot installer (HF dump → DB)
│ ├── deploy.sh # Deploy / restart on a VPS
│ ├── backup.sh # pg_dump → S3-compatible store
│ ├── dump_and_publish.sh # Verify embeddings → pg_dump → upload to HuggingFace
│ ├── seed_metadata.py # Seed pitaka/nikāya metadata
│ ├── data_loader.py # Load Sutta Piṭaka (Pāli + Sujato English)
│ ├── load_vinaya.py # Vinaya loader (Vibhaṅga + Pātimokkha + Khandhaka + Parivāra, Brahmali EN)
│ ├── load_abhidhamma.py # Abhidhamma loader (7 books, Pāli — bilara has no EN)
│ ├── load_thai_cc0.py # Thai translation loader
│ ├── load_dictionary.py # Load dictionary data
│ ├── scrape_payutto.py # Web scraper for the Payutto dictionary
│ ├── generate_embeddings.py # Generate vector embeddings
│ ├── run_embedding_with_retry.sh # Resilient wrapper around embedding generation (retries on DB drop)
│ ├── check_embedding_progress.py # Live progress snapshot (or --watch mode) for the embedding job
│ ├── smoke_test.sh # Endpoint smoke test (TLS + /sse + /mcp + /health)
│ └── test_full_sutta.py # Full-content smoke test (22 size-tiered suttas across all 3 piṭakas)
├── topics/ # Static markdown pages served at /topics/*
│ ├── README.md # Index of available topic pages
│ ├── tipitaka-overview.md # Canon structure + coverage
│ ├── getting-started.md # Connection paths, tool selection, prompt patterns
│ ├── places.md # Geography of the suttas (Mahājanapada, holy sites, cosmology)
│ ├── themes.md # 10 foundational teachings + locus classicus
│ └── people.md # ~30 major figures (chief disciples, lay supporters, kings)
├── skills/ # Portable Claude skills for AI clients
│ ├── README.md # How to install
│ └── tipitaka-research.md # Multi-step research workflow
├── infra/ # Reverse proxy + deploy config
│ ├── Caddyfile # Caddy: TLS, rate limit, /topics, /sse, /mcp
│ ├── Dockerfile.caddy # Caddy + caddy-ratelimit plugin
│ ├── cloud-init.yml # VPS bootstrap
│ └── *.tf # Terraform (provider-agnostic)
├── docs/
│ └── CAPACITY.md # Capacity planning per VPS spec
├── claude_desktop_config.example.json
├── docker-compose.yml # Dev (single mcp-server)
├── docker-compose.prod.yml # Prod (db + 2 mcp-server + caddy)
├── Dockerfile
└── requirements.txt
This project aggregates data from multiple sources under different licenses. Please read NOTICE.md in full before redistributing.
| Source | License | Note |
|---|---|---|
| Source code | MIT | Free to use, fork, modify |
| SuttaCentral bilara-data | CC0 | Public domain |
| Thai translations (Dhīranando, Jayasāro) | CC0 | Via SuttaCentral |
| Dictionary of Buddhism by Somdet Phra Buddhaghosacariya (P. A. Payutto) | Dhamma Dāna | ⚠️ Non-commercial use only |
| PTS / DPPN / Dhammika Dictionaries | Public Domain / CC | — |
For commercial use: remove the dictionary component, or contact Wat Nyanavesakavan for permission.
See CREDITS.md for contributor details and NOTICE.md for license terms.
Gratitude to:
Sādhu 🙏 — May the sharing of this Dhamma bring benefit and happiness to all beings.