Mirror MCP: Introspection for LLMs
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-toby-mirror-mcp": {
"args": [
"-y",
"mirror-mcp"
],
"command": "npx"
}
}
}

From the project's GitHub README:
A Model Context Protocol (MCP) server that provides a reflect tool, enabling LLMs to engage in self-reflection and introspection through recursive questioning and MCP sampling.
mirror-mcp allows AI models to "look at themselves" by providing a reflection mechanism. When an LLM uses the reflect tool, it can pose questions to itself and receive answers through the Model Context Protocol's sampling capabilities. This creates a powerful feedback loop for self-analysis, reasoning validation, and iterative problem-solving.
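The feedback loop described above can be sketched abstractly. This is a hypothetical illustration, not code from the package: `reflect` stands in for a real `tools/call` round-trip to mirror-mcp, and `refine` shows how a client might iterate until a self-critique pass stops flagging problems.

```typescript
// Abstract sketch of the reflect feedback loop (hypothetical helper;
// `reflect` stands in for a real tools/call round-trip to mirror-mcp).
type Reflect = (question: string) => string;

function refine(answer: string, reflect: Reflect, maxRounds = 3): string {
  for (let i = 0; i < maxRounds; i++) {
    // Ask the model to critique its own answer.
    const critique = reflect(`What are the weaknesses in: "${answer}"?`);
    if (critique.trim() === "") break; // no issues found: stop iterating
    answer = `${answer} [revised after: ${critique}]`;
  }
  return answer;
}
```

The cap on rounds matters: because each reflection is itself a model call, an unbounded loop could recurse indefinitely.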
For other MCP-compatible clients, add the following configuration:
{
"type": "stdio",
"command": "npx",
"args": ["mirror-mcp@latest"]
}
Install globally via npm:

npm install -g mirror-mcp

Or run directly with npx:

npx mirror-mcp

To build and run from source:

git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run build
npm start
reflect: Enables the LLM to ask itself a question and receive a response through MCP sampling. The tool supports custom system and user prompts to help the LLM self-direct what kind of response it gets.
Self-Direction with Custom Prompts:
Parameters:
- question (string, required): The question the LLM wants to ask itself
- context (string, optional): Additional context for the reflection
- system_prompt (string, optional): Custom system prompt to direct the reflection approach
- user_prompt (string, optional): Custom user prompt to replace the default reflection instructions
- max_tokens (number, optional): Maximum tokens for the response (default: 500)
- temperature (number, optional): Sampling temperature (default: 0.8)

Example:
{
"name": "reflect",
"arguments": {
"question": "How confident am I in my previous analysis of the data?",
"context": "Previous analysis showed a 23% increase in user engagement",
"max_tokens": 300,
"temperature": 0.6
}
}
Example with custom prompts:
{
"name": "reflect",
"arguments": {
"question": "What are the potential weaknesses in my reasoning?",
"system_prompt": "You are an expert critical thinking coach helping to identify logical fallacies and reasoning gaps.",
"user_prompt": "Analyze my reasoning step-by-step and provide specific examples of potential weaknesses or blind spots.",
"context": "Working on a complex machine learning model evaluation",
"max_tokens": 400,
"temperature": 0.7
}
}
Response:
{
"reflection": "Upon reflection, my confidence in the 23% engagement increase analysis is moderate to high. The data sources appear reliable, and the methodology follows standard practices. However, I should consider potential confounding variables such as seasonal effects or concurrent marketing campaigns that might influence the results.",
"metadata": {
"tokens_used": 67,
"reflection_time_ms": 1240
}
}
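The examples above can be assembled programmatically. The helper below is a hypothetical client-side sketch (not part of the package) that wraps the arguments in a JSON-RPC `tools/call` request and fills in the documented defaults (max_tokens: 500, temperature: 0.8) when the caller omits them:

```typescript
interface ReflectArgs {
  question: string;
  context?: string;
  system_prompt?: string;
  user_prompt?: string;
  max_tokens?: number;
  temperature?: number;
}

// Hypothetical helper: build a JSON-RPC tools/call request for the
// reflect tool, applying the documented defaults for any omitted
// sampling parameters. Explicit arguments override the defaults.
function buildReflectCall(args: ReflectArgs, id = 1) {
  if (!args.question) throw new Error("question is required");
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: {
      name: "reflect",
      arguments: { max_tokens: 500, temperature: 0.8, ...args },
    },
  };
}
```

Spreading `args` after the defaults means a caller-supplied `temperature` of 0.6, as in the first example, wins over the default 0.8.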
mirror-mcp is built on the principle that self-reflection is crucial for robust AI reasoning. By enabling models to question their own outputs and reasoning processes, we create opportunities for self-analysis, reasoning validation, and iterative problem-solving.
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    LLM Client    │────▶│    mirror-mcp    │────▶│   MCP Sampling   │
│                  │     │                  │     │  Infrastructure  │
│  Calls reflect() │     │    Processes     │     │                  │
│                  │◀────│    reflection    │◀────│ Returns response │
└──────────────────┘     └──────────────────┘     └──────────────────┘
The Model Context Protocol provides a standardized way for AI models to connect with external resources and tools. Implementing mirror-mcp as an MCP server means any MCP-compatible client can use the reflect tool without custom integration work.
The reflection mechanism leverages MCP's sampling capabilities to generate thoughtful responses: rather than answering the question itself, the server routes it back to the client as a sampling request, and the client samples a completion from its own model. This ensures that reflections are generated using the same model capabilities as the original reasoning, creating authentic self-assessment.
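Concretely, answering a reflect call means issuing a `sampling/createMessage` request back to the client. The sketch below is illustrative, not the package's actual code; the field names (`messages`, `maxTokens`, `temperature`) follow the MCP sampling specification, and the context-prepending strategy is an assumption:

```typescript
// Illustrative sketch of the sampling/createMessage request a server
// like mirror-mcp would send back to the client to answer a reflect
// call. Not the package's actual implementation.
function buildSamplingRequest(
  question: string,
  context?: string,
  maxTokens = 500,
  temperature = 0.8,
) {
  // Assumption: optional context is prepended to the question.
  const text = context ? `${context}\n\n${question}` : question;
  return {
    method: "sampling/createMessage",
    params: {
      messages: [{ role: "user", content: { type: "text", text } }],
      maxTokens,
      temperature,
    },
  };
}
```

The client receives this request, runs its own model on the messages, and returns the completion, which the server then packages into the `reflection` response shown earlier.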
For development:

git clone https://github.com/toby/mirror-mcp.git
cd mirror-mcp
npm install
npm run dev

Run the tests and build:

npm test
npm run build
We welcome contributions! Please see our Contributing Guidelines for details.
This project is licensed under the MIT License - see the LICENSE file for details.
"The unexamined life is not worth living" - Socrates
Enable your AI models to examine their own reasoning with mirror-mcp.