CognOS trust scoring (C=p·(1-Ue-Ua)) and session trace storage as MCP tools.
The CognOS Session Memory MCP server is generally well-structured with good authentication practices, but it has some security concerns. The main issues are a potential path traversal vulnerability due to a user-controlled database path, a lack of input validation on several parameters, and some code-quality issues around error handling. Supply chain analysis found 6 known vulnerabilities in dependencies (0 critical, 4 high severity). Package verification found 1 issue.
8 files analyzed · 12 issues found
Set these up before or after installing:
Environment variable: COGNOS_TRACE_DB
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-base76-research-lab-cognos-session-memory": {
"env": {
"COGNOS_TRACE_DB": "your-cognos-trace-db-here"
},
"args": [
"cognos-session-memory-mcp"
],
"command": "uvx"
}
}
}
From the project's GitHub README.
mcp-name: io.github.base76-research-lab/cognos-session-memory
Verified context injection via epistemic trust scoring for LLMs.
Solves session fragmentation by maintaining verified, high-confidence session context between conversations.
Large language models suffer from session fragmentation: each new conversation starts without verified context of previous work. This forces repeated explanations, loses decision history, and breaks long-running workflows.
Existing solutions (persistent memory systems, vector retrieval) either:
A plan-mode gateway that:
recent_traces (n=5)
↓
extract_context() → ContextField + coverage
↓
compute_trust_score(p, ue, ua) → C, R, decision
↓
if C > threshold:
system_prompt ← inject
else:
flagged_reason ← manual review
C = p · (1 − Ue − Ua)
R = 1 − C
where:
p = prediction confidence (coverage of required fields)
Ue = epistemic uncertainty (divergence between traces)
Ua = aleatoric uncertainty (mean risk in traces)
R < 0.25 → PASS (inject without review)
0.25 ≤ R < 0.60 → REFINE (inject with caution)
R ≥ 0.60 → ESCALATE (flag for manual review)
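The scoring rule and decision bands above can be sketched in a few lines of Python. This is a minimal illustration of the documented formula; the function name matches the pipeline step shown earlier, but the server's internal implementation may differ:

```python
def compute_trust_score(p: float, ue: float, ua: float) -> tuple[float, float, str]:
    """Score context confidence and map the resulting risk to a decision band.

    p  - prediction confidence (coverage of required fields)
    ue - epistemic uncertainty (divergence between traces)
    ua - aleatoric uncertainty (mean risk in traces)
    """
    c = p * (1 - ue - ua)  # confidence: C = p · (1 − Ue − Ua)
    r = 1 - c              # risk: R = 1 − C
    if r < 0.25:
        decision = "PASS"      # inject without review
    elif r < 0.60:
        decision = "REFINE"    # inject with caution
    else:
        decision = "ESCALATE"  # flag for manual review
    return c, r, decision
```

For example, p = 0.95 with Ue = Ua = 0.05 gives C = 0.855 and R = 0.145, which falls in the PASS band.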
Extract and score context.
Request:
{
"n": 5,
"trust_threshold": 0.75,
"mode": "auto"
}
Response (if injected):
{
"status": "injected",
"trust_score": 0.82,
"confidence": 0.82,
"risk": 0.18,
"decision": "PASS",
"context": {
"active_project": "CognOS mHC research",
"last_decision": "Verify P1 hypothesis",
"open_questions": ["How does routing entropy scale?"],
"current_output": "exp_008 complete",
"recent_models": ["gpt-4", "claude-3", "mistral"]
},
"system_prompt": "## CognOS Context...",
"trace_ids": ["uuid-1", "uuid-2", ...]
}
Response (if flagged):
{
"status": "flagged",
"trust_score": 0.45,
"decision": "REFINE",
"flagged_reason": "Trust score 0.45 below threshold 0.75. Manual review recommended.",
"trace_ids": [...]
}
Inject if trust_score ≥ threshold, else flag.
# In any Claude Code session:
/save
Claude writes a structured summary, trust-scores it, and persists it to SQLite.
Next session: automatically injected as SESSION_CONTEXT before your first prompt.
See docs/COMPACT_ALTERNATIVE.md for a full comparison.
Add to ~/.claude/settings.json:
{
"mcpServers": {
"cognos-session-memory": {
"command": "python3",
"args": ["/path/to/cognos-session-memory/mcp_server.py"]
}
}
}
Tools exposed:
| Tool | Description |
|---|---|
| save_session(summary, project?) | Trust-score and persist a session summary |
| load_session(threshold?) | Retrieve last verified context (default threshold: 0.45) |
git clone https://github.com/base76-research-lab/cognos-session-memory
cd cognos-session-memory
pip install -e .
python3 -m uvicorn --app-dir src main:app --port 8788
curl -X POST http://127.0.0.1:8788/v1/plan \
-H 'Content-Type: application/json' \
-d '{"n": 5, "mode": "dry_run"}'
curl -X POST http://127.0.0.1:8788/v1/plan \
-H 'Content-Type: application/json' \
-d '{"n": 5, "trust_threshold": 0.75, "mode": "auto"}'
Run the test suite (covers save_session, load_session):
pytest tests/ -v --cov=src
/compact
See docs/PAPER.md — "Verified Context Injection: Epistemically Scored Session Memory for Large Language Models"
Status: Independent research — Base76 Research Lab, 2026
Authors: Björn André Wikström (Base76)
@software{wikstrom2026cognos,
author = {Wikström, Björn André},
title = {{CognOS Session Memory}: Verified Context Injection via Epistemic Trust Scoring},
year = {2026},
url = {https://github.com/base76-research-lab/cognos-session-memory}
}
MIT