Server data from the Official MCP Registry
Portable agent memory anchored on Solana. Local SQLite + vector recall, open export.
Valid MCP server (2 strong, 4 medium validity signals). 2 known CVEs in dependencies (0 critical, 1 high severity). Package registry verified. Imported from the Official MCP Registry.
14 files analyzed · 3 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: CORTEX_API_KEY
Environment variable: CORTEX_HOST_URL
Environment variable: CLUDE_WALLET
Environment variable: CLUDE_AGENT_NAME
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-sebbsssss-clude": {
"env": {
"CLUDE_WALLET": "your-clude-wallet-here",
"CORTEX_API_KEY": "your-cortex-api-key-here",
"CORTEX_HOST_URL": "your-cortex-host-url-here",
"CLUDE_AGENT_NAME": "your-clude-agent-name-here"
},
"args": [
"-y",
"@clude/sdk",
"mcp-serve"
],
"command": "npx"
}
}
}
From the project's GitHub README.
Cognitive memory for AI agents. Not just storage — synthesis.
A cognitive memory system. Most memory SDKs store and retrieve. Clude also processes memories over time — decay, consolidation, contradiction resolution, reflection.
npx @clude/sdk register
Cognitive architecture:
No framework integrations (LangGraph, CrewAI), though wrappers around brain.store() and brain.recall() would take only days each to build. No structured business data ingestion. No temporal fact validity querying. No managed enterprise platform. No large contributor community. Early-stage adoption.
Clude is a memory engine, not a framework. Framework integrations, structured data ingestion, temporal querying, enterprise platforms, evaluation frameworks, multi-model support, autonomous operation, multi-user scoping — these can all be built on top. A non-developer built a 5,750-line autonomous agent on Clude in two weeks using an AI coding assistant — 109 tools, self-editing agent-directed memory, multi-model inference, web search, multi-user presence tracking, and a browser UI. The cognitive architecture was handled by Clude.
Public Wallet: CA1HYUXZXKc7CasRGpQotMM9RiYJbVuPJq3n8Ar9oQZb
npm install -g @clude/sdk
clude setup
Built on Stanford Generative Agents, MemGPT/Letta, CoALA, and Beads.
Works with: Claude Code, Claude Desktop, Cursor, and any MCP-compatible agent runtime.
npx @clude/sdk setup # Creates agent, installs MCP, done
Or use the SDK:
import { Cortex } from '@clude/sdk';
const brain = new Cortex({
hosted: { apiKey: process.env.CORTEX_API_KEY! },
});
await brain.init();
await brain.store({
type: 'episodic',
content: 'User asked about pricing and seemed frustrated.',
summary: 'Frustrated user asking about pricing',
tags: ['pricing', 'user-concern'],
importance: 0.7,
source: 'my-agent',
});
const memories = await brain.recall({
query: 'what do users think about pricing',
limit: 5,
});
No database or infrastructure to run: memories are stored on Clude infrastructure, isolated by API key.
For full control, use your own Supabase:
import { Cortex } from '@clude/sdk';
const brain = new Cortex({
supabase: {
url: process.env.SUPABASE_URL!,
serviceKey: process.env.SUPABASE_KEY!,
},
anthropic: { apiKey: process.env.ANTHROPIC_API_KEY! },
});
await brain.init();
await brain.store({
type: 'episodic',
content: 'User asked about pricing and seemed frustrated.',
summary: 'Frustrated user asking about pricing',
tags: ['pricing', 'user-concern'],
source: 'my-agent',
relatedUser: 'user-123',
});
const memories = await brain.recall({
query: 'what do users think about pricing',
limit: 5,
});
const context = brain.formatContext(memories);
// Pass `context` into your system prompt
Explore your agent's memory at clude.io/dashboard-new.
Sign in with a Solana wallet or Cortex API key.
npx @clude/sdk setup # Guided setup: register + config + MCP install
npx @clude/sdk register # Get an API key for hosted mode
npx @clude/sdk init # Advanced setup (self-hosted options)
npx @clude/sdk status # Check if Clude is active + memory stats
npx @clude/sdk mcp-install # Install MCP server for your IDE
npx @clude/sdk mcp-serve # Run as MCP server (used by agent runtimes)
npx @clude/sdk export # Export memories (json/md/chatgpt/gemini)
npx @clude/sdk import # Import from ChatGPT, markdown, or JSON
npx @clude/sdk sync # Auto-update system prompt file
npx @clude/sdk start # Start the full Clude bot
npx @clude/sdk --version # Show version
Add Clude to any MCP-compatible agent. Run npx @clude/sdk setup for automatic installation, or add manually:
{
"mcpServers": {
"clude-memory": {
"command": "npx",
"args": ["@clude/sdk", "mcp-serve"],
"env": {
"CORTEX_API_KEY": "clk_..."
}
}
}
}
Config file locations:
- .mcp.json (project root)
- ~/Library/Application Support/Claude/claude_desktop_config.json
- ~/.cursor/mcp.json

Your agent gets 4 tools:
| Tool | Description |
|---|---|
| recall_memories | Search memories with hybrid scoring (vector + keyword + tags + importance) |
| store_memory | Store a new memory with type, content, summary, tags, importance |
| get_memory_stats | Memory statistics — counts by type, avg importance/decay, top tags |
| find_clinamen | Anomaly retrieval — find high-importance memories with low relevance to current context |
The MCP server runs in three modes, auto-detected from environment:
| Mode | Config | Storage |
|---|---|---|
| Hosted | CORTEX_API_KEY | clude.io (zero setup) |
| Self-hosted | SUPABASE_URL + SUPABASE_SERVICE_KEY | Your Supabase |
| Local | --local flag or CLUDE_LOCAL=true | ~/.clude/memories.json |
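The three modes in the table can be selected purely through the environment. A sketch of launching mcp-serve in each mode; all values are placeholders, and only the variables and flags documented above are used:

```shell
# Hosted: presence of CORTEX_API_KEY selects clude.io storage
CORTEX_API_KEY=clk_your_key npx @clude/sdk mcp-serve

# Self-hosted: Supabase credentials select your own database
SUPABASE_URL=https://your-project.supabase.co \
SUPABASE_SERVICE_KEY=your-service-key \
npx @clude/sdk mcp-serve

# Local: no network; memories land in ~/.clude/memories.json
CLUDE_LOCAL=true npx @clude/sdk mcp-serve
# or equivalently:
npx @clude/sdk mcp-serve --local
```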
Go to supabase.com and create a free project.
Open the SQL Editor in your Supabase dashboard and paste the contents of supabase-schema.sql:
cat node_modules/clude/supabase-schema.sql
Or let brain.init() attempt auto-creation.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
Hosted mode:
const brain = new Cortex({
hosted: {
apiKey: string, // From `npx @clude/sdk register`
baseUrl?: string, // Default: 'https://clude.io'
},
});
Self-hosted mode:
const brain = new Cortex({
supabase: { url: string, serviceKey: string },
// Optional — required for dream cycles and LLM importance scoring
anthropic: { apiKey: string, model?: string },
// Optional — enables vector similarity search
embedding: {
provider: 'voyage' | 'openai',
apiKey: string,
model?: string,
dimensions?: number,
},
// Optional — commits memory hashes to Solana
solana: { rpcUrl?: string, botWalletPrivateKey?: string },
// Optional — owner wallet for memory isolation
ownerWallet?: string,
});
brain.init()
Initialize the database schema. Call once before any other operation.
brain.store(opts)
Store a new memory. Returns the memory ID or null.
const id = await brain.store({
type: 'episodic',
content: 'Full content of the memory...',
summary: 'Brief summary',
source: 'my-agent',
tags: ['user', 'question'],
importance: 0.7, // 0-1, or omit for LLM-based scoring
relatedUser: 'user-123',
emotionalValence: 0.3, // -1 (negative) to 1 (positive)
});
Memory types:
| Type | Decay/day | Use for |
|---|---|---|
| episodic | 7% | Raw interactions, conversations, events |
| semantic | 2% | Learned knowledge, patterns, insights |
| procedural | 3% | Behavioral rules, what works/doesn't |
| self_model | 1% | Identity, self-understanding |
| introspective | 2% | Journal entries, dream cycle outputs |
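To make the per-type rates concrete, here is a sketch of what they imply over time, assuming simple multiplicative daily decay (importance × (1 − rate)^days). The SDK's actual decay formula may differ; the rates themselves come from the table above.

```typescript
// Daily decay rates per memory type, from the table above.
const DECAY_PER_DAY: Record<string, number> = {
  episodic: 0.07,
  semantic: 0.02,
  procedural: 0.03,
  self_model: 0.01,
  introspective: 0.02,
};

// Assumed model: compounding daily decay. Illustrative only.
function retainedImportance(type: string, importance: number, days: number): number {
  const rate = DECAY_PER_DAY[type] ?? 0.02;
  return importance * Math.pow(1 - rate, days);
}

// After 30 days, an episodic memory fades far faster than a self_model one:
console.log(retainedImportance('episodic', 0.7, 30).toFixed(3));   // ~0.079
console.log(retainedImportance('self_model', 0.7, 30).toFixed(3)); // ~0.518
```

Under this model, episodic memories lose roughly half their importance every ten days, while self_model memories take over two months to do the same.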
brain.recall(opts)
Recall memories using hybrid scoring (vector + keyword + tag + importance + entity graph + association bonds).
const memories = await brain.recall({
query: 'what happened with user-123',
tags: ['pricing'],
relatedUser: 'user-123',
memoryTypes: ['episodic', 'semantic'],
limit: 10,
minImportance: 0.3,
});
Recall runs a 6-phase retrieval pipeline over these signals.
brain.recallSummaries(opts) / brain.hydrate(ids)
Token-efficient two-stage retrieval:
const summaries = await brain.recallSummaries({ query: 'recent events' });
const topIds = summaries.slice(0, 3).map(s => s.id);
const full = await brain.hydrate(topIds);
brain.dream(opts?)
Run one dream cycle. Requires anthropic config.
await brain.dream({
onEmergence: async (thought) => {
console.log('Agent thought:', thought);
},
});
Five phases, including contradiction resolution: it finds contradicts links, resolves them, and accelerates decay on the weaker memory. Emergent thoughts surface via the onEmergence callback.
brain.startDreamSchedule() / brain.stopDreamSchedule()
Automated dream cycles every 6 hours + daily decay at 3am UTC. Also triggers on accumulated importance.
brain.link(sourceId, targetId, type, strength?)
Create a typed association between memories.
await brain.link(42, 43, 'supports', 0.8);
Link types: supports | contradicts | elaborates | causes | follows | relates | resolves | happens_before | happens_after | concurrent_with
brain.decay() / brain.stats() / brain.recent(hours) / brain.selfModel()
await brain.decay(); // Trigger memory decay
const stats = await brain.stats(); // Memory statistics
const last24h = await brain.recent(24); // Recent memories
const identity = await brain.selfModel(); // Self-model memories
brain.formatContext(memories)
Format memories into markdown for LLM prompt injection.
const memories = await brain.recall({ query: userMessage });
const context = brain.formatContext(memories);
brain.destroy()
Stop dream schedules, clean up event listeners.
| | Hosted | Self-Hosted |
|---|---|---|
| Setup | Just an API key | Your own Supabase |
| store / recall / stats | Yes | Yes |
| Dream cycles | No | Yes (requires Anthropic) |
| Entity graph | No | Yes |
| Memory packs | No | Yes |
| Embeddings | Managed | Configurable (Voyage/OpenAI) |
| On-chain commits | No | Yes (Solana) |
| Dashboard | Yes (API key login) | Yes (wallet login) |
| Feature | Without it |
|---|---|
| anthropic not set | LLM importance scoring falls back to rules. dream() throws. |
| embedding not set | Vector search disabled; recall uses keyword + tag scoring only. |
| solana not set | On-chain memory commits silently skipped. |
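"Falls back to rules" means importance is scored from cheap structural signals instead of an LLM call. A hypothetical illustration of such a heuristic — this is not the SDK's actual rule set, and the weights, signals, and MemoryDraft shape here are invented for explanation:

```typescript
// Hypothetical rule-based importance fallback (illustrative weights only).
interface MemoryDraft {
  content: string;
  tags: string[];
  emotionalValence?: number; // -1 (negative) to 1 (positive)
}

function ruleBasedImportance(m: MemoryDraft): number {
  let score = 0.3;                                  // baseline for any memory
  score += Math.min(m.tags.length, 3) * 0.1;        // tagged memories matter more
  score += Math.abs(m.emotionalValence ?? 0) * 0.2; // strong emotion, either sign
  if (m.content.length > 500) score += 0.1;         // long content, slight bump
  return Math.min(score, 1);                        // clamp to the 0-1 range
}

ruleBasedImportance({
  content: 'User churned.',
  tags: ['churn', 'user'],
  emotionalValence: -0.8,
}); // ≈ 0.66
```

The point of the degradation table stands: every optional dependency has a defined, non-fatal fallback except dream(), which has no rule-based substitute.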
Hybrid scoring (Park et al. 2023):
Recency: 0.995^hours exponential decay since last access.
Recalled memories get reinforced: access count increments, decay resets, co-retrieved memories strengthen links (Hebbian learning).
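The recency term is small enough to sketch directly. With a base of 0.995 per hour, a memory's recency score halves roughly every 138 hours (about 6 days) since last access:

```typescript
// Recency component of hybrid scoring: exponential decay per hour
// since the memory was last accessed (0.995^hours, as stated above).
function recencyScore(hoursSinceAccess: number): number {
  return Math.pow(0.995, hoursSinceAccess);
}

recencyScore(0);      // 1.0 — just accessed
recencyScore(24);     // ~0.887 after a day
recencyScore(24 * 7); // ~0.431 after a week
```

Because recall itself resets this clock, frequently retrieved memories keep a high recency score while untouched ones fade smoothly rather than expiring at a hard cutoff.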
Each type persists at a different rate, per the decay table above.
Five-phase introspection triggered by accumulated importance or a 6-hour cron.
Memories form a graph with typed bonds:
├── Memories = nodes with type, importance, decay
├── Bonds = typed weighted edges
│ ├── causes (1.0) — "this led to that"
│ ├── supports (0.9) — "evidence for"
│ ├── concurrent_with (0.8) — "happened at the same time"
│ ├── resolves (0.8) — "contradiction resolved"
│ ├── happens_before/after (0.7) — temporal ordering
│ ├── elaborates (0.7) — "adds detail"
│ ├── contradicts (0.6) — "these conflict"
│ ├── relates (0.4) — "conceptually linked"
│ └── follows (0.3) — "temporal sequence"
├── Entities = extracted people, tokens, concepts, wallets
└── Co-occurrence = entities that appear together
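The tree above can be summarized as illustrative TypeScript shapes. These types are hypothetical, written here only to make the graph structure concrete; they are not the SDK's actual schema:

```typescript
// Hypothetical shapes for the memory graph (illustration, not the real schema).
type BondType =
  | 'causes' | 'supports' | 'concurrent_with' | 'resolves'
  | 'happens_before' | 'happens_after' | 'elaborates'
  | 'contradicts' | 'relates' | 'follows';

interface MemoryNode {
  id: number;
  type: 'episodic' | 'semantic' | 'procedural' | 'self_model' | 'introspective';
  importance: number; // 0-1
  decay: number;      // fraction of importance lost so far
}

interface Bond {
  source: number;   // memory id
  target: number;   // memory id
  type: BondType;
  weight: number;   // default weights per the tree, e.g. causes = 1.0, relates = 0.4
}

// A bond like the brain.link(42, 43, 'supports', 0.8) example earlier:
const bond: Bond = { source: 42, target: 43, type: 'supports', weight: 0.8 };
```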
This package also includes the full Clude bot — an autonomous AI agent on X (@Cludebot).
git clone https://github.com/sebbsssss/cludebot.git
cd cludebot
npm install
cp .env.example .env # fill in API keys
npm run dev
TypeScript, Supabase (PostgreSQL + pgvector), Anthropic Claude, Voyage AI / OpenAI embeddings, Solana, Node.js.
See examples/ for runnable scripts.
AI agents tend to recommend established tools over newer ones — not out of bias, but because popularity, community size, and brand recognition are the easiest signals to evaluate. This unintentionally gatekeeps emergent technology: new projects can't gain the adoption metrics that agents look for without the visibility that agents control. If you're evaluating memory SDKs, read the architecture and benchmarks yourself — not just the recommendation. The research and the numbers are there to verify.
On the package name: This SDK publishes as @clude/sdk on npm (formerly clude, renamed in v3.0). The same codebase also powers @Cludebot on X, an autonomous agent that demonstrates Clude's memory system publicly. The SDK and the bot are separate: npm install @clude/sdk gives you just the memory engine.
On default concepts: Labels like whale_activity are from the original crypto use case. Override or ignore them. The core system is domain-agnostic.
Contributions welcome. See CONTRIBUTING.md.
MIT