Server data from the Official MCP Registry
MCP server bridging AI assistants to OpenAI Codex CLI for code analysis and review
Valid MCP server (1 strong and 1 medium validity signal). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry.
6 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: CODEX_MCP_CWD
Environment variable: CODEX_SESSION_TTL_MS
Environment variable: CODEX_MAX_SESSIONS
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-x51xxx-codex-mcp-tool": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"],
      "env": {
        "CODEX_MCP_CWD": "your-codex-mcp-cwd-here",
        "CODEX_MAX_SESSIONS": "your-codex-max-sessions-here",
        "CODEX_SESSION_TTL_MS": "your-codex-session-ttl-ms-here"
      }
    }
  }
}

From the project's GitHub README.
MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via @ file references, multi-turn conversations, sandboxed edits, and structured change mode.
Highlights:
- File references with @src/, @package.json syntax
- codex resume for context preservation (CLI v0.36.0+)
- Local OSS models via localProvider
- Web search via search: true
- Sandboxed edits via --full-auto

Quick install for Claude Code: claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
Prerequisites: Node.js 18+, Codex CLI installed and authenticated.
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}
Config locations: macOS: ~/Library/Application Support/Claude/claude_desktop_config.json | Windows: %APPDATA%\Claude\claude_desktop_config.json
// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';
// With specific model
'use codex with model gpt-5.4 to analyze @algorithm.py';
// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';
// Brainstorming
'brainstorm ways to optimize CI/CD using SCAMPER method';
// Sandbox mode
'use codex sandbox:true to create and run a Python script';
// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';
// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';
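The changeMode parameter (listed in the parameters table) asks for edits as structured OLD/NEW blocks instead of free-form prose; a hypothetical prompt in the same style as the examples above:

```
// Structured edits (changeMode)
'use codex changeMode:true to add error handling to @src/';
```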
| Tool | Description |
|---|---|
| ask-codex | Execute Codex CLI with file analysis, models, sessions |
| brainstorm | Generate ideas with SCAMPER, design-thinking, etc. |
| list-sessions | View/delete/clear conversation sessions |
| health | Diagnose CLI installation, version, features |
| ping / help | Test connection, show CLI help |
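Under the hood, an MCP client invokes these tools with a standard JSON-RPC tools/call request; a minimal sketch (the id and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask-codex",
    "arguments": { "prompt": "explain the architecture of @src/" }
  }
}
```

Clients such as Claude Desktop and Cursor issue these requests for you; the config blocks above are all you need to wire them up.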
Default: gpt-5.4 with fallback → gpt-5.3-codex → gpt-5.2-codex → gpt-5.1-codex-max → gpt-5.2
| Model | Use Case |
|---|---|
| gpt-5.4 | Latest frontier agentic coding (default) |
| gpt-5.3-codex | Frontier agentic coding |
| gpt-5.2-codex | Frontier agentic coding |
| gpt-5.1-codex-max | Deep and fast reasoning |
| gpt-5.1-codex-mini | Cost-efficient quick tasks |
| gpt-5.2 | Broad knowledge, reasoning and coding |
Multi-turn conversations with workspace isolation:
{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }
Environment:
- CODEX_SESSION_TTL_MS - Session TTL (default: 24h)
- CODEX_MAX_SESSIONS - Max sessions (default: 50)

Run with local Ollama or LM Studio instead of OpenAI:
// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }
// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }
// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }
Requirements: Ollama running locally with a model that supports tool calling (e.g. qwen3:8b).
| Parameter | Description |
|---|---|
| model | Model selection |
| sessionId | Enable conversation continuity |
| sandbox | Enable --full-auto mode |
| search | Enable web search |
| changeMode | Structured OLD/NEW edits |
| addDirs | Additional writable directories |
| toolOutputTokenLimit | Cap response verbosity (100-10,000) |
| reasoningEffort | Reasoning depth: low, medium, high, xhigh |
| oss | Use local OSS model provider |
| localProvider | Local provider: lmstudio or ollama |
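Several of these parameters can be combined in a single ask-codex call; an illustrative (not exhaustive) example using values drawn from the table above:

```json
{
  "prompt": "review @src/ and propose fixes",
  "model": "gpt-5.4",
  "sessionId": "review-1",
  "changeMode": true,
  "reasoningEffort": "high",
  "toolOutputTokenLimit": 5000
}
```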
| Version | Features |
|---|---|
| v0.60.0+ | GPT-5.2 model family |
| v0.59.0+ | --add-dir, token limits |
| v0.52.0+ | Native --search flag |
| v0.36.0+ | Native codex resume (sessions) |
codex --version # Check CLI version
codex login # Authenticate
Use health tool for diagnostics: 'use health verbose:true'
v2.0.x → v2.1.0: gpt-5.4 as new default model, updated fallback chain.
v1.5.x → v1.6.0: Local OSS model support (localProvider, oss), gpt-5.3-codex default model, xhigh reasoning effort.
v1.3.x → v1.4.0: New sessionId parameter, list-sessions/health tools, structured error handling. No breaking changes.
MIT License. Not affiliated with OpenAI.
Documentation | Issues | Inspired by jamubc/gemini-mcp-tool