Server data from the Official MCP Registry
Search and explore the Holy Qur'an with Arabic text, transliteration, and advanced search support.
Valid MCP server (1 strong, 2 medium validity signals). 3 known CVEs in dependencies (0 critical, 3 high severity). Package registry verified. Imported from the Official MCP Registry.
9 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-alfanous-team-alfanous": {
"args": [
"alfanous3-mcp"
],
"command": "uvx"
}
}
}
From the project's GitHub README.
Alfanous is a Quranic search engine API that provides simple and advanced search capabilities for the Holy Qur'an. It enables developers to build applications that search through Quranic text in Arabic, with support for Buckwalter transliteration, advanced query syntax, and rich metadata.
Install from PyPI using pip:
$ pip install alfanous3
>>> from alfanous import api
# Simple search for a word
>>> api.search(u"الله")
# Advanced search with options
>>> api.do({"action": "search", "query": u"الله", "page": 1, "perpage": 10})
# Search using Buckwalter transliteration
>>> api.do({"action": "search", "query": u"Allh"})
# Get suggestions
>>> api.do({"action": "suggest", "query": u"الح"})
# Correct a query
>>> api.correct_query(u"الكتاب")
# Get metadata information
>>> api.do({"action": "show", "query": "translations"})
You can also use the public web service:
Or run your own web service locally (see alfanous_webapi).
Search for phrases:
>>> api.search(u'"الحمد لله"')
Boolean search (AND, OR, NOT):
>>> api.search(u'الصلاة + الزكاة') # AND
>>> api.search(u'الصلاة | الزكاة') # OR
>>> api.search(u'الصلاة - الزكاة') # NOT
Fielded search:
>>> api.search(u'سورة:يس') # Search in Sura Yasin
>>> api.search(u'سجدة:نعم') # Search verses with sajda
Wildcard search:
>>> api.search(u'*نبي*') # Words containing "نبي"
Faceted search (aggregate by fields):
>>> api.do({
... "action": "search",
... "query": u"الله",
... "facets": "sura_id,juz"
... })
api.search(query, **options) - Search Quran verses
api.do(params) - Unified interface for all actions (search, suggest, show, list_values, correct_query)
api.correct_query(query, unit, flags) - Get a spelling-corrected version of a query
api.get_info(category) - Get metadata information
The underlying raw-output engine is exposed as Engine in alfanous.api (and re-exported from alfanous directly). Use it as a context manager to ensure index resources are properly released:
from alfanous.api import Engine
# or equivalently:
# from alfanous import Engine
with Engine() as engine:
result = engine.do({"action": "search", "query": u"الله"})
Common parameters for api.do() with action="search":
query (str): Search query (required)
unit (str): Search unit - "aya", "word", or "translation" (default: "aya")
page (int): Page number (default: 1)
perpage (int): Results per page, 1-100 (default: 10)
sortedby (str): Sort order - "score", "relevance", "mushaf", "tanzil", "ayalength" (default: "score")
reverse (bool): Reverse the sort order (default: False)
view (str): Output view - "minimal", "normal", "full", "statistic", "linguistic" (default: "normal")
highlight (str): Highlight style - "css", "html", "bold", "bbcode" (default: "css")
script (str): Text script - "standard" or "uthmani" (default: "standard")
vocalized (bool): Include Arabic vocalization (default: True)
translation (str): Translation ID to include
recitation (str): Recitation ID to include (1-30, default: "1")
fuzzy (bool): Enable fuzzy search — searches both aya_ (exact) and aya (normalised/stemmed) fields, plus Levenshtein distance matching (default: False). See Exact Search vs Fuzzy Search.
fuzzy_maxdist (int): Maximum Levenshtein edit distance for fuzzy term matching — 1, 2, or 3 (default: 1; only used when fuzzy=True)
facets (str): Comma-separated list of fields for faceted search
filter (dict): Filter results by field values
For a complete list of parameters and options, see the detailed documentation.
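As an illustration of these constraints, here is a small hypothetical helper (not part of the alfanous API) that validates a few of the documented parameter ranges before the dict is handed to api.do():

```python
# Hypothetical helper — not part of alfanous — enforcing a few of the
# documented constraints (valid units, perpage range) up front.
VALID_UNITS = {"aya", "word", "translation"}

def build_search_params(query, unit="aya", page=1, perpage=10, sortedby="score"):
    if unit not in VALID_UNITS:
        raise ValueError(f"unit must be one of {sorted(VALID_UNITS)}")
    if not 1 <= perpage <= 100:
        raise ValueError("perpage must be between 1 and 100")
    return {
        "action": "search",
        "query": query,
        "unit": unit,
        "page": page,
        "perpage": perpage,
        "sortedby": sortedby,
    }
```

The resulting dict can then be passed straight to api.do(...).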
Alfanous provides two complementary search modes that control which index fields are queried.
Exact search (fuzzy=False)
When fuzzy search is off (the default), queries run against the aya_ field, which stores the fully vocalized Quranic text with diacritical marks (tashkeel) preserved. This mode is designed for precise, statistical matching: diacritics are significant, so مَلِكِ and مَالِكِ are treated as different words.
# Default exact search — only the vocalized aya_ field is used
>>> api.search(u"الله")
>>> api.search(u"الله", fuzzy=False)
# Phrase match with full diacritics
>>> api.search(u'"الْحَمْدُ لِلَّهِ"')
Fuzzy search (fuzzy=True)
When fuzzy search is on, queries run against both the aya_ field (exact matches) and the aya field (a separate index built for broad, forgiving search). At index time the aya field is processed through a richer pipeline:
the text is normalised, stop words are removed, and terms are stemmed (via pystemmer), so different morphological forms of the same root match each other. No heavy operations are performed on the query string at search time; all the linguistic enrichment lives in the index.
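As a rough sketch of what the normalisation step involves (an illustration only, not the library's actual pipeline), Arabic diacritics are Unicode combining marks and can be stripped generically:

```python
import unicodedata

def strip_tashkeel(text):
    # Arabic diacritical marks (fatha, kasra, damma, tanwin, shadda, sukun)
    # are Unicode combining characters (category "Mn"), so filtering on
    # the character category removes them while keeping the base letters.
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

print(strip_tashkeel("رَحِيمٌ"))  # prints the unvocalized form رحيم
```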
Additionally, for each Arabic term in the query, a Levenshtein distance search is performed against the aya_ac field (unvocalized, non-stemmed). This catches spelling variants and typos within a configurable edit-distance budget controlled by fuzzy_maxdist.
# Fuzzy search — aya_ (exact) + aya (normalised/stemmed) + Levenshtein distance on aya_ac
>>> api.search(u"الكتاب", fuzzy=True)
# Increase edit distance to 2 to tolerate more spelling variation
>>> api.search(u"الكتاب", fuzzy=True, fuzzy_maxdist=2)
# Via the unified interface
>>> api.do({
... "action": "search",
... "query": u"مؤمن",
... "fuzzy": True,
... "fuzzy_maxdist": 1,
... "page": 1,
... "perpage": 10
... })
| fuzzy_maxdist | Behaviour |
|---|---|
| 1 (default) | Catches single-character insertions, deletions, or substitutions |
| 2 | Broader tolerance — useful for longer words or noisy input |
| 3 | Maximum supported — use with care as recall increases significantly |
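To make the edit-distance budget concrete, here is a minimal self-contained Levenshtein implementation (an illustration only; the actual matching is performed by the engine against the aya_ac field):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: each insertion,
    # deletion, or substitution costs 1.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# الكتاب and الكتب differ by a single deleted alef,
# so fuzzy_maxdist=1 is enough to match them.
print(levenshtein("الكتاب", "الكتب"))  # → 1
```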
Fuzzy mode is particularly useful when users search without diacritics, use variant spellings, or make typos.
Note: pystemmer must be installed for stemming to take effect (pip install pystemmer). If the package is absent, the stem filter degrades silently to a no-op, leaving normalisation and stop-word removal still active.
list_values returns every unique indexed value for a given field. Use it to discover the full vocabulary of searchable fields — for example, all available translation identifiers, part-of-speech tags, or root words — before composing a query.
# Get all unique root values in the index
>>> api.do({"action": "list_values", "field": "root"})
# Returns: {"list_values": {"field": "root", "values": [...], "count": N}}
# Discover all indexed translation IDs
>>> api.do({"action": "list_values", "field": "trans_id"})
# Discover all part-of-speech categories for word search
>>> api.do({"action": "list_values", "field": "pos"})
# Retrieve all indexed lemmas on demand (replaces the former show/lemmas)
>>> api.do({"action": "list_values", "field": "lemma"})
Parameters:
field (str): The name of the indexed field whose unique values you want (required).
Return value:
A dictionary with a list_values key containing:
field — the requested field name.
values — sorted list of unique non-empty indexed values.
count — length of the values list.
correct_query() uses Whoosh's built-in spell-checker to compare each term in the query against the index vocabulary and replace unknown terms with the closest known alternative. When the query is already valid (all terms appear in the index), the corrected value in the response is identical to the original input.
# Correct a query via the dedicated function
>>> api.correct_query(u"الكتاب")
# Returns:
# {"correct_query": {"original": "الكتاب", "corrected": "الكتاب"}, "error": ...}
# Correct a misspelled / out-of-vocabulary term
>>> api.correct_query(u"الكتب")
# Returns:
# {"correct_query": {"original": "الكتب", "corrected": "الكتاب"}, "error": ...}
# Via the unified interface
>>> api.do({"action": "correct_query", "query": u"الكتب", "unit": "aya"})
Parameters:
query (str): The raw query string to correct (required).
unit (str): Search unit — currently only "aya" is supported; other units return None (default: "aya").
flags (dict): Optional dictionary of additional flags.
Return value:
A dictionary with a correct_query key containing:
original — the input query string as provided.
corrected — the corrected query string; identical to original when no correction is needed.
Alfanous supports advanced query syntax:
"الحمد لله"+ - الصلاة + الزكاة| - الصلاة | الزكاة- - الصلاة - الزكاة* for multiple chars, ? for single char - *نبي*, نعم؟سورة:يس, سجدة:نعم[X الى Y] - رقم_السورة:[1 الى 5]آية_:'مَن'> - >رحيم (searches aya_stem — corpus-derived stem)>> - >>رحيم (searches aya_lemma — all inflections of the same lexeme)>>> - >>>ملك (searches aya_root — all words from the same root){root,type} - {قول،اسم}Control how broadly the search expands morphologically using the derivation_level parameter:
| Level | Value | Index field | Description |
|---|---|---|---|
| 0 | "word" | aya | Exact match only (default) |
| 1 | "stem" | aya_stem | Corpus-derived stem — words sharing the same morphological stem |
| 2 | "lemma" | aya_lemma | Corpus lemma — all inflections of the same lexeme |
| 3 | "root" | aya_root | Trilateral root — all words from the same root |
Each derivation field is pre-indexed at build time so queries run against a compact, pre-computed representation rather than expanding at search time.
# Level 0 — exact match
>>> api.do({"action": "search", "query": "رَحِيمٌ"})
# Level 1 — stem (corpus-derived stem)
>>> api.do({"action": "search", "query": "رحيم", "derivation_level": 1})
# Level 2 — all words sharing the same lemma (e.g. all forms of رَحِيمٌ)
>>> api.do({"action": "search", "query": "رحيم", "derivation_level": 2})
# Level 3 — all words from root رحم (رحمة، رحمن، رحيم، يرحم، …)
>>> api.do({"action": "search", "query": "رحيم", "derivation_level": 3})
# String aliases are also accepted
>>> api.do({"action": "search", "query": "كتب", "derivation_level": "root"})
The derivation syntax >word, >>word, >>>word maps to levels 1, 2, 3 respectively and can be embedded directly in queries:
# Stem-level derivation in a query expression
>>> api.do({"action": "search", "query": ">رحيم"})
# Root-level: all words from root ملك
>>> api.do({"action": "search", "query": ">>>ملك"})
# Combined with field filters
>>> api.do({"action": "search", "query": ">>الله AND سورة:الإخلاص"})
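The syntax operators above can also be composed programmatically. A few hypothetical helpers (not part of the library) that build query strings from the documented operators:

```python
# Hypothetical helpers — not part of alfanous — for composing query
# strings from the operators described above.
def phrase(text):
    return f'"{text}"'           # exact phrase search

def fielded(field, value):
    return f"{field}:{value}"    # fielded search, e.g. سورة:يس

def derived(word, level):
    # level 1 = stem (>), level 2 = lemma (>>), level 3 = root (>>>)
    return ">" * level + word

query = f"{derived('الله', 2)} AND {fielded('سورة', 'الإخلاص')}"
print(query)  # → >>الله AND سورة:الإخلاص
```

The composed string can be passed as the query value to api.search() or api.do().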
unit="word")When unit="word" the engine searches word-level child documents instead of verse-level parent documents. Each word child carries its full morphological annotation — part-of-speech, lemma, stem, root, pattern, gender, number, person, voice, mood, derivation level, and more.
# Search all words matching "الله" at word level
>>> api.do({"action": "search", "query": "الله", "unit": "word"})
# Derivation levels also apply to word search:
# Level 1 — search word_stem (corpus-derived stem)
>>> api.do({"action": "search", "query": "رحيم", "unit": "word", "derivation_level": 1})
# Level 2 — search word_lemma (normalized lemma)
>>> api.do({"action": "search", "query": "رحيم", "unit": "word", "derivation_level": 2})
# Level 3 — search root (exact root value, e.g. "رحم")
>>> api.do({"action": "search", "query": "رحم", "unit": "word", "derivation_level": 3})
Linguistic search (search_by_word_linguistics)
Filter words by any combination of morphological properties using the search_by_word_linguistics action:
# Find all nouns from root قول
>>> api.do({
... "action": "search_by_word_linguistics",
... "root": "قول",
... "type": "أسماء" # Arabic POS category name
... })
# Find all active-voice past-tense verbs from root كتب
>>> api.do({
... "action": "search_by_word_linguistics",
... "root": "كتب",
... "pos": "V",
... "voice": "مبني للمعلوم",
... "aspect": "فعل ماضي"
... })
# Find all proper nouns
>>> api.do({
... "action": "search_by_word_linguistics",
... "pos": "PN"
... })
Available morphological filter fields include: pos, type, root, lemma, stem, pattern, gender, number, person, voice, mood, state, case, form, aspect, derivation, special, prefix, suffix, segments.
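Since filter names are passed as free-form dictionary keys, a small hypothetical guard (not part of the library) can catch misspelled field names before calling api.do():

```python
# Hypothetical helper — not part of alfanous — that validates filter
# names against the documented morphological fields.
ALLOWED_FILTERS = {
    "pos", "type", "root", "lemma", "stem", "pattern", "gender", "number",
    "person", "voice", "mood", "state", "case", "form", "aspect",
    "derivation", "special", "prefix", "suffix", "segments",
}

def build_linguistics_params(**filters):
    # Reject unknown filter names early rather than sending them on.
    unknown = set(filters) - ALLOWED_FILTERS
    if unknown:
        raise ValueError(f"unknown filter(s): {sorted(unknown)}")
    return {"action": "search_by_word_linguistics", **filters}
```

For example, build_linguistics_params(root="قول", pos="N") yields a dict ready for api.do(), while a typo such as rooot raises immediately.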
Faceted search allows you to aggregate search results by fields:
>>> result = api.do({
... "action": "search",
... "query": u"الله",
... "facets": "sura_id,juz,chapter"
... })
>>> print(result["search"]["facets"])
Available facet fields:
sura_id - Sura (chapter) number (1-114)
juz - Juz (part) number (1-30)
hizb - Hizb (section) number
chapter - Main topic/chapter
topic - Subtopic
sura_type - Meccan/Medinan classification
Filter search results by field values:
>>> api.do({
... "action": "search",
... "query": u"الله",
... "filter": {"sura_id": "2"} # Only from Sura Al-Baqarah
... })
Available fields for fielded search:
سورة (sura) - Sura name
رقم_السورة (sura_id) - Sura number
رقم_الآية (aya_id) - Verse number
جزء (juz) - Juz number
حزب (hizb) - Hizb number
صفحة (page) - Page number in Mushaf
سجدة (sajda) - Has prostration
موضوع (subject) - Subject/theme
فصل (chapter) - Chapter
باب (subtopic) - Subtopic
نوع_السورة (sura_type) - Sura type (Meccan/Medinan)
Word-level fields (use with unit="word"):
englishstate - Nominal state in English (e.g. "Definite state", "Indefinite state")
englishmood - Verb mood in English (e.g. "Indicative mood", "Subjunctive mood", "Jussive mood")
For the complete field list, call:
>>> api.do({"action": "show", "query": "fields"})
Different views provide different levels of detail:
Example:
>>> api.do({
... "action": "search",
... "query": u"الله",
... "view": "full",
... "word_info": True,
... "aya_theme_info": True
... })
Get various metadata using the "show" action:
# Get list of available translations
>>> api.do({"action": "show", "query": "translations"})
# Get list of recitations
>>> api.do({"action": "show", "query": "recitations"})
# Get Sura information
>>> api.do({"action": "show", "query": "surates"})
# Get search fields
>>> api.do({"action": "show", "query": "fields"})
# Get default values
>>> api.do({"action": "show", "query": "defaults"})
Note: Lemmas are no longer exposed via show. Use api.do({"action": "list_values", "field": "lemma"}) to retrieve them on demand.
You can extend the local search index with additional Zekr-compatible .trans.zip translation files using index_translations(). This requires the alfanous_import package (included in the repository under src/alfanous_import/).
import alfanous.api as alfanous
# Index all .trans.zip files found in a folder
count = alfanous.index_translations(source="/path/to/translations")
print(f"{count} translation(s) newly indexed")
The function:
Indexes every *.trans.zip file found in source
Returns the number of newly indexed translations (0 means nothing new was added)
Updates configs/translations.json so the new translations are immediately visible via api.get_info("translations") and searchable with unit="translation"
After indexing, search in the new translation:
# Search in a newly indexed translation
result = alfanous.search(u"الرحمن", unit="translation", flags={"translation": "en.newt"})
See examples/index_translations_example.py for a complete walkthrough.
The examples/ directory contains example scripts demonstrating various features:
See examples/README.md for more information.
Alfanous ships an MCP (Model Context Protocol) server that lets AI assistants (Claude, Copilot, etc.) search and explore the Qur'an directly. See alfanous_mcp/README.md for the full reference.
Quick start:
$ pip install alfanous3-mcp
$ python -m alfanous_mcp.mcp_server # stdio – works with Claude Desktop
To connect Claude Desktop, add the following to your claude_desktop_config.json:
{
"mcpServers": {
"alfanous": {
"type": "stdio",
"command": "python",
"args": ["-m", "alfanous_mcp.mcp_server"],
"tools": [
"search_quran",
"search_translations",
"get_quran_info",
"search_quran_by_themes",
"search_quran_by_stats",
"search_quran_by_position",
"suggest_query",
"correct_query",
"search_by_word_linguistics",
"list_field_values"
]
}
}
}
Alfanous includes a FastAPI-based web service for RESTful access. See alfanous_webapi/README.md for:
Quick start:
$ pip install alfanous3 fastapi uvicorn
$ cd src/
$ uvicorn alfanous_webapi.web_api:app --reload
Then visit http://localhost:8000/docs for interactive documentation.
We welcome contributions! See CONTRIBUTING.md for:
Quick development setup:
# Clone the repository
$ git clone https://github.com/Alfanous-team/alfanous.git
$ cd alfanous
# Install dependencies
$ pip install pyparsing whoosh pystemmer pytest
# Build indexes (required for development)
$ make build
# Run tests
$ pytest -vv --rootdir=src/
Alfanous is licensed under the GNU Lesser General Public License v3 or later (LGPLv3+).
See LICENSE for details.
This project handles sacred religious text (the Holy Qur'an) - please treat the data and code with respect.
If you are looking for the legacy Alfanous code, you can find it under the legacy branch.