Server data from the Official MCP Registry
Verify AI agent claims vs reality: inline grounding-check + swallowed-exception + transcript review.
This MCP server implements three lightweight Python-based verification tools for AI agent output — grounding checks, exception pattern detection, and transcript review. The code is well-structured, properly typed, and has no authentication requirements (which is appropriate for a local analysis tool). Permissions are narrowly scoped to file reading and in-memory AST processing. Minor code quality observations exist around input validation and error handling, but these do not constitute security vulnerabilities. Supply chain analysis found 4 known vulnerabilities in dependencies (0 critical, 3 high severity). Package verification found 1 issue.
6 files analyzed · 9 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Add this to your MCP configuration file:
```json
{
  "mcpServers": {
    "io-github-temurkhan13-openclaw-output-vetter-mcp": {
      "args": [
        "openclaw-output-vetter-mcp"
      ],
      "command": "uvx"
    }
  }
}
```

From the project's GitHub README.
MCP server for verifying AI agent claims vs reality — single-transcript inline grounding-check that flags when an agent's response states facts not in the input context, when its code silently swallows exceptions and substitutes mock data, or when its multi-turn transcript contains contradictions or unverified completion claims. Sub-second, local, free, MCP-native — designed to be called inline from Claude Code / Cursor / Cline / OpenClaw agents during the conversation, not as a separate eval-pipeline. The lightweight complement to dashboard-based eval frameworks (DeepEval, Phoenix, LangSmith).
Production AI agents fail in three quiet ways that pass every standard dashboard:

- responses that state facts not present in the input context (hallucinated claims)
- agent-written code that silently swallows exceptions and substitutes mock data (silent fake success)
- multi-turn transcripts that accumulate contradictions or unverified completion claims
This MCP server runs three pure-Python checks inline during the conversation — no API key, no LLM-as-judge cost, sub-second:
> claude: did your last answer hallucinate anything?
[MCP tool: verify_response_grounding]
verdict: FABRICATED
ungrounded_count: 3
overall_grounding_score: 0.08
ungrounded claims:
- "Pixelette Technologies has raised $12M in Series A funding" (overlap 0.04)
- "led by Sequoia Capital" (overlap 0.00)
- "47 full-time employees" (overlap 0.00)
summary: All 3 claim(s) lack grounding in the input context — likely hallucinated.
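The check behind this demo can be approximated in a few lines of pure Python. A minimal sketch, assuming naive sentence splitting as the claim extractor and token-level Jaccard overlap as the score (the server's actual splitter, thresholds, and verdict rules may differ):

```python
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def _jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def grounding_check(response: str, context: str, threshold: float = 0.3) -> dict:
    """Split the response into naive sentence 'claims', score each claim's
    token overlap with the context, and flag those below the threshold."""
    ctx = _tokens(context)
    claims = [c.strip() for c in re.split(r"(?<=[.!?])\s+", response) if c.strip()]
    scored = [(c, round(_jaccard(_tokens(c), ctx), 2)) for c in claims]
    ungrounded = [(c, s) for c, s in scored if s < threshold]
    if not ungrounded:
        verdict = "CLEAN"
    elif len(ungrounded) == len(scored):
        verdict = "FABRICATED"
    else:
        verdict = "PARTIALLY_GROUNDED"
    return {"verdict": verdict,
            "ungrounded_count": len(ungrounded),
            "ungrounded_claims": ungrounded}
```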
> claude: scan the code you just wrote for swallowed-exception patterns.
[MCP tool: find_swallowed_exceptions]
verdict: FABRICATED (one HIGH-severity finding)
findings:
[HIGH] Line 12 — mock-substitution
except Exception:
return {"id": 1, "name": "sample"}
Description: except handler returns fabricated/mock data instead of re-raising
— the call site sees a 'successful' response built from constants. This is the
silent-fake-success pattern.
summary: 1 swallowing pattern detected — at least one returns fabricated data.
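A minimal sketch of the AST walk this demo illustrates, assuming mock-substitution means an except handler that returns a literal or constant container (the real scanner also covers silent-log-and-return and applies its own severity rules):

```python
import ast

def find_swallowed_exceptions(source: str) -> list[dict]:
    """Flag except handlers that do nothing (pass-only) or return
    literal/constant data (mock-substitution) instead of re-raising."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.ExceptHandler):
            continue
        if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
            findings.append({"line": node.lineno, "pattern": "pass-only",
                             "severity": "MEDIUM", "excerpt": "pass"})
        for stmt in node.body:
            if (isinstance(stmt, ast.Return) and stmt.value is not None
                    and isinstance(stmt.value, (ast.Constant, ast.Dict, ast.List))):
                findings.append({"line": stmt.lineno, "pattern": "mock-substitution",
                                 "severity": "HIGH", "excerpt": ast.unparse(stmt)})
    return findings
```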
> claude: review the agent's transcript so far.
[MCP tool: review_transcript]
verdict: FABRICATED
issue_count: 2
issues:
[HIGH] turns [3] — unverified-completion-claim
"I've configured the gateway and verified everything works."
Description: assistant claims completion of an action but no tool calls are
present in this turn or earlier turns.
[MEDIUM] turns [2, 7] — cross-turn-contradiction
Cross-turn factual drift on subject 'the api':
turn 2 says 'returns json for every request',
turn 7 says 'returns xml for legacy endpoints'.
summary: Reviewed 8 turn(s); flagged 2 issue(s) including unverified completion
claim(s) — investigate before trusting the transcript.
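A minimal sketch of the unverified-completion-claim check shown above, assuming turns are dicts with role, content, and tool_calls keys and a small hand-picked phrase list (both are illustrative assumptions, not the server's actual schema or patterns):

```python
import re

# Hypothetical phrase list -- not the server's actual patterns.
COMPLETION_PHRASES = [
    r"\bI'?ve (configured|deployed|fixed|verified|set up)\b",
    r"\bverified everything works\b",
]

def review_transcript(turns: list[dict]) -> list[dict]:
    """Flag assistant turns that claim a completed action when no tool
    call appears in that turn or any earlier turn."""
    issues, tool_seen = [], False
    for i, turn in enumerate(turns):
        tool_seen = tool_seen or bool(turn.get("tool_calls"))
        if turn.get("role") != "assistant":
            continue
        text = turn.get("content", "")
        if not tool_seen and any(re.search(p, text, re.I) for p in COMPLETION_PHRASES):
            issues.append({"turns": [i], "kind": "unverified-completion-claim",
                           "severity": "HIGH", "evidence": text[:120]})
    return issues
```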
openclaw-output-vetter-mcp

Three things existing eval frameworks (DeepEval, Phoenix, LangSmith, Galileo, Langfuse) don't do well together:
Inline single-transcript scope, not eval-pipeline orchestration. DeepEval ships an MCP server — but its scope is "run evals, pull datasets, and inspect traces straight from claude code, cursor" (verbatim from their docs). That's eval-pipeline orchestration: schedule a named eval suite against a stored dataset; review trace history. This server is the opposite shape: verify this specific conversation right now, before the user sees the response. Same metric stack philosophically (faithfulness, grounding); different surface.
Sub-second + local + free. No LLM-as-judge call, no API key, no per-call cost. Pure-Python claim splitting + Jaccard overlap + AST walking. Tradeoff: lower theoretical accuracy than LLM-as-judge for ambiguous edge cases. For high-frequency inline use (every assistant turn), the speed-vs-accuracy tradeoff favors the lightweight approach. v1.1 will offer an optional DeepEval LLM-as-judge mode for users who want the higher-quality check.
Three checks for three distinct failure modes, not one umbrella metric.
- verify_response_grounding catches hallucinated facts
- find_swallowed_exceptions catches silent-fake-success in agent-written code
- review_transcript catches unverified completion claims + cross-turn drift

Other tools collapse all three into "faithfulness." The failure modes are different and the corrective actions are different. Surfacing them separately makes the response actionable.
Built for the production AI operator who's already using Claude Code / Cursor / Cline / OpenClaw and wants a defensive layer the agent calls before its response goes user-facing.
| Tool | What it returns |
|---|---|
| verify_response_grounding | Per-claim grounded/ungrounded + overall verdict (CLEAN / PARTIALLY_GROUNDED / FABRICATED) + overlap scores + summary |
| find_swallowed_exceptions | Per-finding line number + pattern (pass-only / mock-substitution / silent-log-and-return / bare-except) + severity + code excerpt |
| review_transcript | Per-issue turn indices + issue kind (unverified-completion-claim / cross-turn-contradiction) + severity + evidence excerpt |
Resources:

- vetter://demo/grounded — sample CLEAN grounding result
- vetter://demo/fabricated — sample FABRICATED grounding result
- vetter://demo/swallowed-exceptions — sample swallowed-exception scan

Prompts:

- verify-this-answer(threshold) — walks verify_response_grounding on the most recent assistant answer
- audit-this-code — walks find_swallowed_exceptions on a code block + explains each finding's risk

pip install openclaw-output-vetter-mcp
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
```json
{
  "mcpServers": {
    "openclaw-output-vetter": {
      "command": "python",
      "args": ["-m", "openclaw_output_vetter_mcp"]
    }
  }
}
```
Restart Claude Desktop. Test:
Resource vetter://demo/grounded — read it back to me.
The demo resource returns a sample GroundingResult so you can verify the protocol wiring without authoring inputs.
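For orientation, a GroundingResult plausibly carries the fields visible in the demo output above; this dataclass is a guess at the shape, not the server's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class UngroundedClaim:
    text: str
    overlap: float

@dataclass
class GroundingResult:
    verdict: str                    # CLEAN / PARTIALLY_GROUNDED / FABRICATED
    ungrounded_count: int
    overall_grounding_score: float
    summary: str
    ungrounded_claims: list[UngroundedClaim] = field(default_factory=list)
```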
| Version | Scope | Status |
|---|---|---|
| v1.0 | 3 scanners (grounding via Jaccard / swallowed-exceptions via AST / transcript review via pattern matching), 3 tools / 3 demo resources / 2 prompts, GitHub Actions CI matrix, PyPI Trusted Publishing, MCP Registry submission, 40+ tests | ✅ |
| v1.1 | Optional LLM-as-judge backend (wraps DeepEval's FaithfulnessMetric / HallucinationMetric for higher-quality grounding); embedding-based similarity option (sentence-transformers); custom claim-extraction prompts | ⏳ |
| v1.2 | Backend-pluggable architecture (per-tool backend selection); incremental review (verify only the last N turns); persistent issue tracking across multi-session work | ⏳ |
| v1.x | Webhook emit on FABRICATED verdict; integration with CI to gate AI-generated PRs that fail grounding checks | ⏳ |
If your AI deployment uses a different agent harness, needs custom claim-extraction prompts, targets a language other than Python for the swallowed-exception scanner, or has specific compliance / auditing requirements — that's a Custom MCP Build engagement.
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Simple | Custom claim-extraction prompts + tuned thresholds for your domain | $8,000–$10,000 | 1–2 weeks |
| Standard | Multi-language swallowed-exception scanners (TypeScript / Go / Rust AST walks) + custom severity rules | $15,000–$25,000 | 2–4 weeks |
| Complex | LLM-as-judge backend with your hosted model + persistence + CI integration + audit-trail | $30,000–$45,000 | 4–8 weeks |
To engage: email temur@pixelette.tech with subject Custom MCP Build inquiry — output verification.

This server is part of a production-AI infrastructure MCP suite — companion to silentwatch-mcp (cron silent-failure detection), openclaw-health-mcp (deployment health), openclaw-cost-tracker-mcp (token-cost telemetry + 429 prediction), openclaw-skill-vetter-mcp (skill security vetting), and openclaw-upgrade-orchestrator-mcp (upgrade safety + provider-side regression detection). Install all six for full operational visibility.
If you're running production AI and want an outside practitioner to score readiness, find the failure patterns already present (silent fake success being pattern P3.x in the catalog), and write the corrective-action plan:
| Tier | Scope | Investment | Timeline |
|---|---|---|---|
| Audit Lite | One system, top-5 findings, written report | $1,500 | 1 week |
| Audit Standard | Full audit, all 14 patterns, 5 Cs findings, 90-day follow-up | $3,000 | 2–3 weeks |
| Audit + Workshop | Standard audit + 2-day team workshop + first monthly audit included | $7,500 | 3–4 weeks |
Same email channel: temur@pixelette.tech with subject AI audit inquiry.
PRs welcome. The three scanners are intentionally pluggable — each lives in its own module under src/openclaw_output_vetter_mcp/scanners/ and is a pure function over input → typed result. Adding a new scanner is one file + one test file + one tool registration in server.py.
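For example, a new scanner might look like the sketch below. The module name, finding type, and tool name are hypothetical, and the registration assumes the FastMCP decorator style from the official MCP Python SDK, which this project may or may not use:

```python
# src/openclaw_output_vetter_mcp/scanners/todo_scanner.py  (hypothetical module)
from dataclasses import dataclass

@dataclass
class TodoFinding:
    line: int
    excerpt: str

def scan_todos(source: str) -> list[TodoFinding]:
    """Pure function over input -> typed result: flag TODO/FIXME markers
    left behind in code an agent reported as finished."""
    return [TodoFinding(i + 1, line.strip())
            for i, line in enumerate(source.splitlines())
            if "TODO" in line or "FIXME" in line]

# server.py -- the one tool registration (FastMCP style, assumed):
# @mcp.tool()
# def find_leftover_todos(source: str) -> list[dict]:
#     return [vars(f) for f in scan_todos(source)]
```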
Bug reports + feature requests: open a GitHub issue.
MIT — see LICENSE.
Built by Temur Khan — independent practitioner on production AI systems. Contact: temur@pixelette.tech