Compress prompts 40-60% using local LLM + embedding validation. Preserves all conditionals.
From the project's GitHub README.
mcp-name: io.github.base76-research-lab/token-compressor
# token-compressor

Semantic prompt compression for LLM workflows. Reduce token usage by 40–60% without losing meaning.
Built by Base76 Research Lab — research into epistemic AI architecture.
The Intent Compiler MVP is now live and uses this project as part of its idea -> spec -> compressed output flow:
token-compressor is a two-stage pipeline that compresses prompts before they reach an LLM:
The result: shorter prompts, lower costs, same intent.
```
Input prompt (300 tokens)
        ↓
  LLM compresses
        ↓
Embedding validates (cosine ≥ 0.85?)
        ↓
Pass → compressed (120 tokens)    Fail → original (300 tokens)
```
Key design principle: conditionality is never sacrificed. If your prompt says "only do X if Y", that constraint survives compression.
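The control flow above can be sketched in a few lines (a minimal sketch, not the shipped implementation; `compress_fn` and `embed_fn` stand in for the Ollama calls, and the token estimate is a rough character heuristic):

```python
import numpy as np

def estimate_tokens(text: str) -> int:
    # crude heuristic: roughly 4 characters per token
    return max(1, len(text) // 4)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def process(prompt, compress_fn, embed_fn, threshold=0.85, min_tokens=80):
    """Two-stage pipeline: compress with an LLM, validate with embeddings."""
    if estimate_tokens(prompt) < min_tokens:
        return prompt, "skipped"                  # too short to be worth it
    compressed = compress_fn(prompt)              # stage 1: local LLM
    coverage = cosine_similarity(embed_fn(prompt), embed_fn(compressed))
    if coverage >= threshold:
        return compressed, "compressed"           # stage 2 passed
    return prompt, "raw_fallback"                 # meaning drifted: keep original
```

Note that failure is cheap: the worst case is the original prompt plus one wasted local-model call, never a corrupted prompt.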
## Installation

```sh
ollama pull llama3.2:1b
ollama pull nomic-embed-text
pip install ollama numpy
```
## Quick start

```python
from compressor import LLMCompressEmbedValidate

pipeline = LLMCompressEmbedValidate()
result = pipeline.process("Your prompt text here...")

print(result.output_text)  # compressed (or original if validation failed)
print(result.report())     # MODE / COVERAGE / TOKENS saved
```
Result object:

| Field | Description |
|---|---|
| `output_text` | Text to send to your LLM |
| `mode` | `compressed` / `raw_fallback` / `skipped` |
| `coverage` | Cosine similarity (0.0–1.0) |
| `tokens_in` | Estimated input tokens |
| `tokens_out` | Estimated output tokens |
| `tokens_saved` | Difference (`tokens_in` minus `tokens_out`) |
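A hypothetical `Result` shape mirroring the table (field names come from the table above; the dataclass layout and the exact `report()` string are assumptions, not the library's actual definitions):

```python
from dataclasses import dataclass

@dataclass
class Result:
    output_text: str   # text to send to your LLM
    mode: str          # "compressed" | "raw_fallback" | "skipped"
    coverage: float    # cosine similarity, 0.0-1.0
    tokens_in: int     # estimated input tokens
    tokens_out: int    # estimated output tokens

    @property
    def tokens_saved(self) -> int:
        return self.tokens_in - self.tokens_out

    def report(self) -> str:
        return (f"MODE={self.mode} COVERAGE={self.coverage:.2f} "
                f"TOKENS saved={self.tokens_saved}")
```

Downstream code can branch on `mode`, for example to log how often validation falls back to the original.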
## CLI

```sh
echo "Your long prompt here..." | python3 cli.py
```

Output: compressed text on stdout, stats on stderr.
## Claude Code hook

Add to your `~/.claude/settings.json` under `hooks` → `UserPromptSubmit`:

```json
{
  "type": "command",
  "command": "echo \"${CLAUDE_USER_PROMPT:-}\" | python3 /path/to/token-compressor/cli.py > /tmp/compressed_prompt.txt 2>/tmp/compress.log || true"
}
```
This runs on every prompt submission and writes the compressed version to a temp file, which can be injected back into context via a second hook or MCP server.
## MCP server

The MCP server exposes compression as a tool callable from Claude Code and any MCP-compatible client.

Install:

```sh
pip install token-compressor-mcp
```

Tool: `compress_prompt`, which takes one parameter: `text` (string).

Claude Code MCP config (`~/.claude/settings.json`):
```json
{
  "mcpServers": {
    "token-compressor": {
      "command": "uvx",
      "args": ["token-compressor-mcp"]
    }
  }
}
```
Or from source:

```json
{
  "mcpServers": {
    "token-compressor": {
      "command": "python3",
      "args": ["-m", "token_compressor_mcp"],
      "cwd": "/path/to/token-compressor"
    }
  }
}
```
## Configuration

```python
pipeline = LLMCompressEmbedValidate(
    threshold=0.85,       # cosine similarity floor (lower = more aggressive)
    min_tokens=80,        # skip pipeline below this (not worth compressing)
    compress_model="llama3.2:1b",
    embed_model="nomic-embed-text",
)
```
## How it works

### Stage 1 — LLM compression

The compression prompt instructs the model to shorten the text while preserving every conditional keyword (`if`, `only if`, `unless`, `when`, `but only`).

### Stage 2 — Embedding validation

Computes cosine similarity between the original and compressed text using nomic-embed-text. If similarity falls below `threshold`, the original is returned unchanged. This prevents silent meaning loss.
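The validation check itself is a single cosine computation against the 0.85 floor. A toy illustration with hand-made vectors (real embeddings come from nomic-embed-text and have hundreds of dimensions; these three-dimensional vectors only show the gating behavior):

```python
import numpy as np

def coverage(orig_vec, comp_vec) -> float:
    """Cosine similarity; the pipeline compares this against the threshold."""
    a = np.asarray(orig_vec, dtype=float)
    b = np.asarray(comp_vec, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# a vector close to the original passes the 0.85 floor
close = coverage([1.0, 0.2, 0.0], [0.9, 0.3, 0.1])
# a near-orthogonal vector (meaning drift) fails it
far = coverage([1.0, 0.2, 0.0], [0.0, 0.1, 1.0])
```

Because cosine ignores magnitude, a faithful but much shorter compression can still score near 1.0, which is exactly what makes it usable as a meaning-preservation gate.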
## Benchmarks

Tested across Swedish and English prompts, technical and natural language:

| Input | Tokens in | Tokens out | Saved |
|---|---|---|---|
| Research abstract (EN) | 89 | 38 | 57% |
| Session intent (SV) | 32 | 18 | 44% |
| Technical instruction | 47 | 22 | 53% |
| Short command (<80t) | — | — | skipped |
## Research

This tool implements the architecture from:

> Wikström, B. (2026). *When Alignment Reduces Uncertainty: Epistemic Variance Collapse and Its Implications for Metacognitive AI.* DOI: 10.5281/zenodo.18731535

Part of the Base76 Research Lab toolchain for epistemic AI infrastructure.

## License

MIT — Base76 Research Lab, Sweden