Server data from the Official MCP Registry
9-LLM consensus + disagreement scoring + cheapest-route picks to fight hallucinations.
OpenClaw Consensus MCP is a well-structured server that calls an external RapidAPI endpoint to provide multi-LLM consensus answers. Authentication is properly handled via environment variables (RAPIDAPI_KEY), input validation is present for mode and quality parameters, and there are no apparent malicious patterns or credential leaks. Minor code quality observations exist but do not present security risks. Package verification found 1 issue (1 critical, 0 high severity).
6 files analyzed · 5 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Unverified package source
We couldn't verify that the installable package matches the reviewed source code. Proceed with caution.
Set these up before or after installing:
Environment variable: RAPIDAPI_KEY
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-miconnm-openclaw-consensus-mcp": {
      "env": {
        "RAPIDAPI_KEY": "your-rapidapi-key-here"
      },
      "args": [
        "openclaw-consensus-mcp"
      ],
      "command": "uvx"
    }
  }
}

From the project's GitHub README.
9-LLM consensus inside Claude — a hallucination guardrail you can call as a tool.
Production-grade 9-LLM consensus with disagreement scoring and cheapest-route picks to fight hallucinations.
OpenClaw runs the same prompt across 9 frontier LLMs (Claude, GPT, Gemini, Llama, Mistral, etc.), then returns a consensus answer, a disagreement score, and a cheapest-route pick.
This MCP server exposes those three capabilities as tools so Claude Desktop / Claude Code can call them mid-conversation.
A single LLM can confidently make things up. Nine models rarely make up the same thing. When 9 models agree, you can trust the answer; when they fan out, you have a cheap, calibrated hallucination signal — before the user sees the wrong answer.
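The intuition above can be sketched as a simple majority vote, where confidence is the winning answer's share of the models. The model answers below are hypothetical, and the real service aggregates free-form text rather than exact strings:

```python
from collections import Counter

def consensus_answer(answers):
    """Majority vote across model answers; confidence is the winning share."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Hypothetical answers from nine models to the same prompt.
answers = ["Paris"] * 7 + ["Lyon", "Marseille"]
best, confidence = consensus_answer(answers)
# Seven of nine models agree, so the vote is easy to trust.
```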
pip install openclaw-consensus-mcp
# or
uv pip install openclaw-consensus-mcp
You also need a RapidAPI key for the OpenClaw Consensus API: https://rapidapi.com/yanmiayn/api/openclaw-consensus
Set it in your environment:
export RAPIDAPI_KEY="your-rapidapi-key"
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or
%APPDATA%\Claude\claude_desktop_config.json (Windows):
{
  "mcpServers": {
    "openclaw-consensus": {
      "command": "openclaw-consensus",
      "env": {
        "RAPIDAPI_KEY": "your-rapidapi-key"
      }
    }
  }
}
For Claude Code:
claude mcp add openclaw-consensus -- openclaw-consensus
consensus(prompt, mode="balanced"): Get a 9-LLM consensus answer.
mode (default "balanced"): deep (9 models), balanced (5), or fast (3).
Returns:
{
  "answer": "string",
  "confidence": 0.0,
  "models_used": ["claude-opus-4.7", "gpt-5.1", "..."],
  "disagreement": 0.0
}
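A caller might gate on those fields before surfacing an answer. The thresholds below are illustrative choices, not part of the API:

```python
def trustworthy(result, min_confidence=0.8, max_disagreement=0.2):
    """Accept a consensus result only when the models broadly agree."""
    return (result["confidence"] >= min_confidence
            and result["disagreement"] <= max_disagreement)

# Shape matches the documented return value; the numbers are made up.
result = {
    "answer": "42",
    "confidence": 0.91,
    "models_used": ["model-a", "model-b", "model-c"],
    "disagreement": 0.1,
}
ok = trustworthy(result)  # high confidence, low disagreement -> accept
```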
disagreement_score(prompt): How much the 9 models disagree on a prompt — a hallucination warning signal.
Returns:
{
  "score": 0.0,
  "per_model": { "model-id": "answer summary" }
}
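One simple way to turn per-model answers into a single score (an illustration, not the service's actual formula) is the share of models that deviate from the modal answer:

```python
from collections import Counter

def disagreement(per_model):
    """Fraction of models whose answer differs from the most common one."""
    counts = Counter(per_model.values())
    top = counts.most_common(1)[0][1]
    return 1 - top / len(per_model)

# Hypothetical per-model answer summaries.
score = disagreement({"m1": "A", "m2": "A", "m3": "B", "m4": "A"})
# One of four models deviates, so the score is 0.25.
```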
cheapest_route(prompt, target_quality=0.85): Cheapest model combo that meets a quality threshold (0..1).
Returns:
{
  "models": ["..."],
  "estimated_cost_usd": 0.0,
  "estimated_quality": 0.0
}
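The cheapest-route idea can be sketched as a greedy pick over (cost, quality) entries. The catalog values and the independence assumption in the quality model are illustrative, not the service's actual pricing or formula:

```python
def pick_cheapest_route(catalog, target_quality):
    """Greedily add the cheapest models until estimated quality meets the target.

    catalog: list of (model_id, cost_usd, quality) tuples.
    Combined quality is modeled as 1 - prod(1 - q), i.e. each model
    independently catches errors the others miss (a simplifying assumption).
    """
    chosen, cost, miss = [], 0.0, 1.0
    for model, c, q in sorted(catalog, key=lambda t: t[1]):
        chosen.append(model)
        cost += c
        miss *= 1 - q
        if 1 - miss >= target_quality:
            break
    return chosen, cost, 1 - miss

# Hypothetical catalog: (model, cost per call, standalone quality).
catalog = [("small", 0.001, 0.6), ("mid", 0.01, 0.8), ("large", 0.05, 0.9)]
models, cost, quality = pick_cheapest_route(catalog, target_quality=0.85)
# "small" alone gives 0.6; adding "mid" lifts combined quality past 0.85.
```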
git clone https://github.com/yanmiayn/openclaw-consensus-mcp
cd openclaw-consensus-mcp
uv venv && source .venv/bin/activate
uv pip install -e ".[dev]"
pytest
Smoke-test the server with the official MCP Inspector:
npx @modelcontextprotocol/inspector openclaw-consensus
uv build
uv publish # to PyPI
mcp-publisher publish # to the official MCP Registry
MIT — see LICENSE.