Server data from the Official MCP Registry
Wraps Codex CLI as MCP tools: query, review, search, assess, structured, sessions.
Valid MCP server (2 strong, 3 medium validity signals). 5 known CVEs in dependencies (1 critical, 3 high severity). Package registry verified. Imported from the Official MCP Registry.
4 files analyzed · 6 issues found.
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Set these up before or after installing:
Environment variables:
- CODEX_CLI_PATH
- CODEX_DEFAULT_MODEL
- CODEX_FALLBACK_MODEL
- CODEX_MAX_CONCURRENT
- CODEX_MCP_SERVERS
Add this to your MCP configuration file:

```json
{
  "mcpServers": {
    "io-github-hampsterx-codex-mcp-bridge": {
      "command": "npx",
      "args": ["-y", "codex-mcp-bridge"],
      "env": {
        "CODEX_CLI_PATH": "your-codex-cli-path-here",
        "CODEX_MCP_SERVERS": "your-codex-mcp-servers-here",
        "CODEX_DEFAULT_MODEL": "your-codex-default-model-here",
        "CODEX_FALLBACK_MODEL": "your-codex-fallback-model-here",
        "CODEX_MAX_CONCURRENT": "your-codex-max-concurrent-here"
      }
    }
  }
}
```

From the project's GitHub README:
MCP server that wraps Codex CLI as a subprocess, exposing code execution, agentic review, web search, and structured output as Model Context Protocol tools.
Works with any MCP client: Claude Code, Gemini CLI, Cursor, Windsurf, VS Code, or any tool that speaks MCP.
If you're in a terminal agent (Claude Code, Codex CLI, Gemini CLI) with shell access, call Codex CLI directly. It's faster, cheaper, and zero overhead:
```shell
# Review current branch vs main
codex review --base main

# Review uncommitted changes
codex review --uncommitted

# Review with custom focus
codex review --base main "Focus on security and error handling"

# From a worktree
codex -C /path/to/worktree review --base main

# General analysis
codex exec "Analyze src/utils/parse.ts for edge cases"
```
Use this MCP bridge instead when you need:

- Pre-review triage: `assess` classifies a diff in <2s (no CLI spawn) and recommends a review depth with estimated wall time: `scan` (diff-only, 120s), `focused` (reads changed files, 120-300s), or `deep` (full agentic, up to 30min), with per-depth auto-scaled timeouts. Focused containment is `--sandbox read-only --skip-git-repo-check --ephemeral` plus prompt guidance; Codex can still invoke shell commands, so this is lighter containment than Gemini plan mode.
- Structured output with schema validation (Codex's native `--json` has known bugs)
- Concurrency management (`CODEX_MAX_CONCURRENT`)
- Session resume and reset (`sessionId` / `resetSession`, inspected via `listSessions`)

Quick start: `npx codex-mcp-bridge`

Prerequisites: Codex CLI installed (`npm i -g @openai/codex`), and either the `OPENAI_API_KEY` environment variable set or `codex auth login` completed.

Claude Code: `claude mcp add codex-bridge -- npx -y codex-mcp-bridge`
Add to ~/.gemini/settings.json:

```json
{
  "mcpServers": {
    "codex-bridge": {
      "command": "npx",
      "args": ["-y", "codex-mcp-bridge"]
    }
  }
}
```
Add to your MCP settings:

```json
{
  "codex-bridge": {
    "command": "npx",
    "args": ["-y", "codex-mcp-bridge"],
    "env": {
      "OPENAI_API_KEY": "sk-..."
    }
  }
}
```
| Tool | Description |
|---|---|
| codex | Execute prompts with file context, session resume, and sandbox control. Multi-turn conversations via session IDs. |
| review | Agentic code review. Codex runs in full-auto inside the repo: diffs, reads files, follows imports, checks tests. Quick diff-only mode available. |
| search | Web search via codex --search. Returns synthesized answers with source URLs. |
| query | Lightweight text analysis. No repo context, no sessions. Runs in an isolated temp directory. |
| structured | JSON Schema validated output via Ajv. Data extraction, classification, or any task needing machine-parseable output. |
| ping | Health check with CLI version, capabilities, and concurrency diagnostics (activeCount, queueDepth). |
| listSessions | List active conversation sessions with metadata (turn count, model, timestamps). |
General-purpose execution. Supports multi-turn conversations via sessionId, sandbox levels (read-only, workspace-write, full-auto), and reasoning effort control. Pass resetSession: true to discard and start fresh. Use listSessions to inspect active sessions before resuming.
Key parameters: prompt (required), files, model, sessionId, sandbox, reasoningEffort, workingDirectory, timeout (default 60s).
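As a sketch, a client-side `tools/call` request for the `codex` tool might look like this (the JSON-RPC envelope follows the MCP spec; the argument values are illustrative, using parameters from the list above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "codex",
    "arguments": {
      "prompt": "Summarize what this module exports and flag any unused helpers.",
      "files": ["src/utils/parse.ts"],
      "sandbox": "read-only",
      "reasoningEffort": "medium",
      "timeout": 60
    }
  }
}
```

To continue the conversation, pass the `sessionId` from the response on the next call, or set `resetSession: true` to start fresh.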
Two modes:

- Agentic (default): Codex runs with `--full-auto` inside the repo. It diffs, reads files, follows imports, checks tests, and reads project instruction files before reviewing. Timeout auto-scales from diff size.
- Quick (`quick: true`): Diff-only review, no repo exploration. Faster but less context.

Key parameters: uncommitted (default true), base, focus, quick, workingDirectory, timeout (default auto-scaled agentic, 120s quick).
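For example, the `params` portion of a quick diff-only review against main might look like this (argument names from the parameter list above; values illustrative):

```json
{
  "name": "review",
  "arguments": {
    "uncommitted": false,
    "base": "main",
    "focus": "Focus on security and error handling",
    "quick": true,
    "timeout": 120
  }
}
```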
Web search powered by OpenAI's native search infrastructure via Codex CLI's --search flag. Returns synthesized answers with source URLs.
Key parameters: query (required), model, workingDirectory, timeout (default 120s).
Lightweight, non-agentic text analysis. Spawns in an isolated temp directory so the bridge's repo context doesn't leak. Pass text to analyze in the context parameter. Supports reasoningEffort and maxResponseLength.
Key parameters: prompt (required), context, model, reasoningEffort, timeout (default 60s).
Embeds a JSON Schema in the prompt and validates the response with Ajv. Returns clean JSON on success, validation errors on failure.
Key parameters: prompt (required), schema (required, JSON string), files, model, workingDirectory, timeout (default 60s).
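A sketch of a `structured` call's `params` (note that `schema` is passed as a JSON string, per the parameter list above; the schema itself is an illustrative example):

```json
{
  "name": "structured",
  "arguments": {
    "prompt": "Classify this commit message as feat, fix, or chore: 'fix: handle empty diff'",
    "schema": "{\"type\":\"object\",\"properties\":{\"label\":{\"type\":\"string\",\"enum\":[\"feat\",\"fix\",\"chore\"]}},\"required\":[\"label\"]}",
    "timeout": 60
  }
}
```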
No parameters. Returns CLI version, auth status, model configuration, and concurrency diagnostics (activeCount, queueDepth).
All tools attach execution metadata (_meta) with durationMs, model, fallbackUsed, and session info where applicable. See DESIGN.md for details.
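A response's `_meta` block might then look roughly like this (field names from above; values illustrative, here showing a call that fell back to the default `o3` fallback model):

```json
{
  "_meta": {
    "durationMs": 9400,
    "model": "o3",
    "fallbackUsed": true,
    "sessionId": "..."
  }
}
```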
| Variable | Default | Description |
|---|---|---|
| CODEX_DEFAULT_MODEL | (CLI default) | Default model for all tools |
| CODEX_FALLBACK_MODEL | o3 | Fallback on quota exhaustion (none to disable) |
| CODEX_CLI_PATH | codex | Path to CLI binary |
| CODEX_MAX_CONCURRENT | 3 | Max concurrent subprocess spawns |
| CODEX_MCP_SERVERS | (unset) | Control which Codex internal MCP servers stay enabled. See DESIGN.md. |
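For instance, to pin a default model, disable fallback, and raise the concurrency cap, the `env` block of your MCP config might look like this (values illustrative; variable semantics from the table above):

```json
{
  "env": {
    "CODEX_DEFAULT_MODEL": "o3",
    "CODEX_FALLBACK_MODEL": "none",
    "CODEX_MAX_CONCURRENT": "5"
  }
}
```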
| You need... | Consider |
|---|---|
| Agentic code review, structured output, model fallback, concurrency management | This bridge |
| Session threading with conversationId, callback URI forwarding | @tuannvm/codex-mcp-server |
| Structured patch output with approval policies | cexll/codex-mcp-server |
| Minimal codex exec wrapper with parallel subagents | codex-as-mcp |
| Native Codex MCP (experimental, no wrapper needed) | codex mcp serve (docs) |
Codex CLI has minimal startup overhead (<100ms), so wall time is dominated by model inference.
| Scenario | Typical time |
|---|---|
| Trivial prompt | 9-12s |
| Quick review, small diff (1KB) | ~20s |
| Quick review, medium diff (24KB) | ~35s |
| Quick review, large diff (54KB) | ~40s |
| Web search | ~17s |
Default timeouts (60-300s) are comfortable for typical workloads.
Three MCP servers, same architecture, different underlying CLIs. Each wraps a terminal agent as a subprocess and exposes it as MCP tools. Pick the one that matches your model provider, or run multiple for cross-model workflows.
| | codex-mcp-bridge | claude-mcp-bridge | gemini-mcp-bridge |
|---|---|---|---|
| CLI | Codex CLI | Claude Code | Gemini CLI |
| Provider | OpenAI | Anthropic | Google |
| Tools | codex, review, search, query, structured, ping, listSessions | query, review, search, structured, ping, listSessions | query, review, search, structured, ping |
| Agentic review | Codex explores repo in full-auto mode | Claude explores repo with Read/Grep/Glob/git | Gemini explores repo with file reads and git |
| Structured output | Ajv validation | Native --json-schema | Ajv validation |
| Session resume | Session IDs with multi-turn | Native --resume | Not supported |
| Budget caps | Not supported | Native --max-budget-usd | Not supported |
| Effort control | reasoningEffort (low/medium/high) | --effort low/medium/high/max | Not supported |
| Cold start | <100ms (inference dominates) | ~1-2s | ~16s |
| Auth | OPENAI_API_KEY | claude login (subscription) or ANTHROPIC_API_KEY | gemini auth login |
| Cost | Pay-per-token | Subscription (included) or API credits | Free tier available |
| Concurrency | 3 (configurable) | 3 (configurable) | 3 (configurable) |
| Model fallback | Auto-retry with fallback model | Auto-retry with fallback model | Auto-retry with fallback model |
All three share: subprocess env isolation, path sandboxing, FIFO concurrency queue, MCP tool annotations, _meta response metadata, progress heartbeats. The codex and claude bridges also perform output redaction (secret stripping).
```shell
npm install
npm run build      # Compile TypeScript
npm run dev        # Watch mode
npm test           # Run tests
npm run lint       # ESLint
npm run typecheck  # tsc --noEmit
```
MIT