Server data from the Official MCP Registry
MCP server for runtime quality validation of AI agent outputs — hallucination detection, scope compliance, and output quality scoring.
This MCP server is well-designed for quality validation of AI agent outputs with clean architecture and proper input handling via Zod validation. The codebase shows good security practices with no hardcoded credentials, no malicious patterns, and permissions appropriately scoped to its purpose (in-memory validation and logging). Only minor code quality observations around error handling and documentation prevent a higher score. Supply chain analysis found 4 known vulnerabilities in dependencies (0 critical, 3 high severity). Package verification found 1 issue.
4 files analyzed · 8 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This server requests these system permissions. Most are normal for its category.
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-mdfifty50-boop-qc-validator": {
      "command": "npx",
      "args": [
        "-y",
        "qc-validator-mcp"
      ]
    }
  }
}
From the project's GitHub README.
Runtime quality validation for AI agent outputs. Detect hallucinations, enforce scope compliance, and score output quality — all via MCP.
npx qc-validator-mcp
{
  "mcpServers": {
    "qc-validator": {
      "command": "npx",
      "args": ["qc-validator-mcp"]
    }
  }
}
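Once configured, a client invokes the server's tools over MCP's standard JSON-RPC `tools/call` method. A minimal request sketch — the tool name `validate_output` is an assumption, since this listing documents parameters but not the registered tool names:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "validate_output",
    "arguments": {
      "output": "Q3 revenue grew 12 percent, driven by subscription renewals.",
      "task_description": "Summarize the Q3 report in one sentence"
    }
  }
}
```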
Score agent output against configurable criteria: length limits, required keywords, forbidden patterns, and factual claim density.
Params: output, task_description, criteria { max_length, required_keywords[], forbidden_patterns[], factual_claims_count }
Returns: { pass, score, issues[], recommendation }
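An illustrative `arguments` payload for this tool, assembled from the parameter list above (all values are hypothetical):

```json
{
  "output": "Q3 revenue grew 12 percent, driven by subscription renewals.",
  "task_description": "Summarize the Q3 report in one sentence",
  "criteria": {
    "max_length": 200,
    "required_keywords": ["revenue", "Q3"],
    "forbidden_patterns": ["as an AI"],
    "factual_claims_count": 3
  }
}
```

The response follows the Returns shape above; when every criterion is met, one would expect `pass` to be true with an empty `issues` array.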
Estimate hallucination likelihood. With source text, checks sentence-level grounding. Without source, flags outputs dense with specific numbers, dates, and URLs.
Params: output, source_text (optional), claim_count (default 5)
Returns: { risk_level, unsupported_claims[], confidence, suggestion }
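A sketch of a grounded check, with values chosen for illustration. With `source_text` supplied, the claim about star count has no support in the source, so one would expect it to surface in `unsupported_claims`:

```json
{
  "output": "The library was first released in March 2019 and now has 4,200 GitHub stars.",
  "source_text": "The library was first released in March 2019.",
  "claim_count": 5
}
```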
Validate output against a scope contract — allowed/forbidden topics, word limits, required sections.
Params: output, scope { allowed_topics[], forbidden_topics[], max_words, required_sections[] }
Returns: { compliant, violations[], scope_utilization_percent }
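An illustrative `scope` contract using the fields documented above (topics, limits, and sections are hypothetical):

```json
{
  "output": "Summary: your refund was approved. Next steps: allow 5 business days for processing.",
  "scope": {
    "allowed_topics": ["billing", "refunds"],
    "forbidden_topics": ["legal advice"],
    "max_words": 150,
    "required_sections": ["Summary", "Next steps"]
  }
}
```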
Store validation results for per-agent trending.
Params: agent_id, output_hash, score, pass, issues_count
Returns: { logged, agent_id, total_validations }
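An illustrative logging call (the agent ID, hash, and score are placeholders; `output_hash` is presumably a digest of the validated output computed by the caller):

```json
{
  "agent_id": "support-bot",
  "output_hash": "a1b2c3d4",
  "score": 0.82,
  "pass": true,
  "issues_count": 1
}
```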
Analyze common failure modes for a specific agent.
Params: agent_id
Returns: { total_validations, pass_rate, avg_score, most_common_issues[], trend }
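An illustrative response, shaped by the Returns fields above (all values hypothetical):

```json
{
  "total_validations": 14,
  "pass_rate": 0.79,
  "avg_score": 0.81,
  "most_common_issues": ["missing required keyword", "exceeds max_length"],
  "trend": "improving"
}
```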
Quality dashboard across all validated agents — no parameters required.
Returns: { total_agents, overall_pass_rate, agents[], worst_performers[], best_performers[], recommendations[] }
qc://dashboard — Quality metrics for all validated agents
MIT