Server data from the Official MCP Registry
Stateful OpenAPI sandbox for AI agents to validate API integrations end-to-end.
This MCP server has a clean architecture with appropriate security controls. It communicates exclusively with a backend service over HTTPS, implements proper error handling, and uses environment-based configuration for secrets. No hardcoded credentials, injection vulnerabilities, or exfiltration patterns detected. Minor code quality observations around broad exception handling do not significantly impact security. Supply chain analysis found 2 known vulnerabilities in dependencies (0 critical, 2 high severity). Package verification found 1 issue.
7 files analyzed · 6 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Set these up before or after installing:
Environment variable: FETCHSANDBOX_TELEMETRY
Environment variable: FETCHSANDBOX_BASE_URL
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-fetchsandbox-mcp": {
"env": {
"FETCHSANDBOX_BASE_URL": "your-fetchsandbox-base-url-here",
"FETCHSANDBOX_TELEMETRY": "your-fetchsandbox-telemetry-here"
},
"args": [
"-y",
"fetchsandbox-mcp"
],
"command": "npx"
}
}
}

From the project's GitHub README.
Turn any OpenAPI spec into a working sandbox your AI agent can use, right from your IDE.
This is the Model Context Protocol (MCP) server for FetchSandbox. It exposes three tools that let any MCP-compatible agent ingest an OpenAPI spec, list its workflows, and run them — with realistic, schema-validated responses for every endpoint.
Agents read raw OpenAPI specs and hallucinate. They guess field names, invent IDs that won't exist, and produce broken curl commands. FetchSandbox turns the spec into a stateful, AJV-validated sandbox so the agent can actually call the API and see real-shaped responses.
Plug it into your IDE once, and any time you ask your agent "let me try the Stripe API" or "show me the GitHub issue lifecycle," it can do that — for real, end-to-end.
The MCP runs as a stdio process spawned by your IDE. There's nothing to install globally — npx runs the latest published version on demand. Pick your tool below, paste the snippet, restart.
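Under the hood, the IDE writes newline-delimited JSON-RPC 2.0 messages to the spawned process's stdin and reads responses from stdout. A minimal sketch of the two key messages, assuming shapes from the MCP specification (the client name and the `import_spec` arguments here are illustrative, not taken from this server's code):

```typescript
// Sketch of the JSON-RPC 2.0 messages an MCP client writes to the server's
// stdin. Shapes follow the MCP spec; the import_spec arguments are illustrative.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// First message: the client introduces itself and negotiates a protocol version.
const initialize: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "example-client", version: "0.1.0" },
  },
};

// Later: invoke the import_spec tool, exactly as an agent would.
const callImportSpec: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "import_spec",
    arguments: {
      url: "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json",
    },
  },
};

// Each message goes over the wire as one JSON line on stdin.
const wire = [initialize, callImportSpec].map((m) => JSON.stringify(m)).join("\n");
console.log(wire.split("\n").length); // prints 2, one line per framed message
```

The IDE configs below do nothing more than tell each client which command to spawn for this exchange.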
File: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows)
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Quit and reopen Claude Desktop (Cmd+Q, then reopen — not just close window).
User-level (all projects): ~/.claude/settings.json. Or project-level: .mcp.json in the repo root.
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Restart the Claude Code session.
File: ~/.cursor/mcp.json (global) or .cursor/mcp.json (project)
{
"mcpServers": {
"fetchsandbox": {
"command": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
Restart Cursor.
Open the Cline panel → settings cog → MCP Servers → add a new server with:
command: npx
args: -y fetchsandbox-mcp

Reload the VS Code window.
File: ~/.continue/config.yaml
mcpServers:
- name: fetchsandbox
command: npx
args:
- -y
- fetchsandbox-mcp
Restart your IDE.
File: ~/.codex/config.toml
[mcp_servers.fetchsandbox]
command = "npx"
args = ["-y", "fetchsandbox-mcp"]
Restart Codex.
File: ~/.config/zed/settings.json
{
"context_servers": {
"fetchsandbox": {
"command": {
"path": "npx",
"args": ["-y", "fetchsandbox-mcp"]
}
}
}
}
GitHub Copilot doesn't currently support the Model Context Protocol. Track github/copilot#feedback for updates. In the meantime, run any MCP-compatible chat (Claude Code, Cursor, Cline) alongside Copilot.
If your agent speaks MCP, it accepts a stdio command. Use:
command: npx
args: ["-y", "fetchsandbox-mcp"]

After restarting your agent, paste any of these prompts. Each hits a hand-curated workflow with realistic IDs and real state transitions.
Use fetchsandbox to import the Stripe spec from
https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json and run the accept_payment workflow. Show me the trace.
The agent imports 587 endpoints, matches the bundled curated Stripe sandbox, and runs a 6-step workflow: create customer (cus_…) → create PaymentIntent (pi_…, $49.99 USD, requires_payment_method) → confirm (requires_capture) → capture (succeeded) → retrieve → verify webhooks (payment_intent.created, payment_intent.succeeded).
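The status transitions in that trace amount to a small state machine. As a toy sketch (the statuses are real Stripe PaymentIntent statuses, but the transition table is a simplification for illustration, not FetchSandbox code):

```typescript
// Toy model of the PaymentIntent status flow the accept_payment trace walks.
// Statuses mirror Stripe's; the transition table is a simplification.
type PiStatus = "requires_payment_method" | "requires_capture" | "succeeded";
type PiAction = "confirm" | "capture";

const transitions: Record<PiAction, { from: PiStatus; to: PiStatus }> = {
  confirm: { from: "requires_payment_method", to: "requires_capture" },
  capture: { from: "requires_capture", to: "succeeded" },
};

function apply(current: PiStatus, action: PiAction): PiStatus {
  const t = transitions[action];
  // Reject out-of-order calls, the way a stateful sandbox should.
  if (t.from !== current) throw new Error(`cannot ${action} from ${current}`);
  return t.to;
}

// Walk the same path the sandbox trace shows.
let piStatus: PiStatus = "requires_payment_method"; // after create PaymentIntent
piStatus = apply(piStatus, "confirm"); // -> requires_capture
piStatus = apply(piStatus, "capture"); // -> succeeded
console.log(piStatus); // prints succeeded
```

The point of the statefulness is exactly this: calling capture before confirm fails in the sandbox, just as it would against the live API.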
Use fetchsandbox to import the Twilio Messaging spec from
https://raw.githubusercontent.com/twilio/twilio-oai/main/spec/yaml/twilio_messaging_v1.yaml and run the send_sms workflow.
The agent imports the messaging API and runs a curated send-and-verify flow with realistic Twilio-formatted message SIDs (SM…).
Use fetchsandbox to import the GitHub REST API from
https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json and run the issue_lifecycle workflow.
The agent walks the create → comment → close → reopen flow against a real-shaped GitHub sandbox.
If a vendor doesn't publish their spec at a stable URL (Paddle, Notion, Linear), paste the content directly:
Here's the Paddle Billing OpenAPI spec —
<paste JSON or YAML>. Use fetchsandbox to import it and run the subscriptions_canceled workflow.
Same engine path; same curated quality if the spec's info.title matches a bundled config.
Use fetchsandbox to import
<your OpenAPI URL> — list the workflows and tell me which is most interesting.
For specs we don't have curated configs for, the engine auto-enumerates create + verify workflows for every detected resource. It is honest about what it shows: UUIDs instead of vendor-style IDs, generic enum values instead of API-specific ones — but the request/response shapes and template substitution between steps still work.
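That auto-enumeration can be pictured as pairing each collection POST with a GET-by-id on the same resource. A hedged sketch of the idea (the heuristic, the `{id}` path convention, and the workflow naming are illustrative assumptions, not the engine's actual logic):

```typescript
// Toy version of auto-enumerating create + verify workflows from an OpenAPI
// paths object. The heuristic is illustrative, not the engine's real logic.
type Paths = Record<string, Record<string, unknown>>;

function enumerateWorkflows(paths: Paths): string[] {
  const workflows: string[] = [];
  for (const [path, ops] of Object.entries(paths)) {
    // A POST on a collection plus a GET on <path>/{id} => create + verify.
    const byId = `${path}/{id}`;
    if ("post" in ops && byId in paths && "get" in paths[byId]) {
      const resource = path.split("/").pop() ?? path;
      workflows.push(`create_${resource}_and_verify`);
    }
  }
  return workflows;
}

const spec: Paths = {
  "/customers": { post: {} },
  "/customers/{id}": { get: {} },
  "/charges": { post: {} }, // no GET-by-id counterpart => skipped
};

const autoWorkflows = enumerateWorkflows(spec);
console.log(autoWorkflows); // one workflow: create_customers_and_verify
```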
import_spec: Ingest an OpenAPI 3.x spec and get a sandbox you can call. Pass either a public URL or pasted content.
url: "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json"
content: "<paste OpenAPI JSON or YAML here>"
name: "Optional friendly name"
Returns spec_id, sandbox_id, base_url (proxy that serves real-shaped responses), workflows_preview (first 10), matched_bundled (true if we matched a curated config), and a dashboard_url to view everything in the browser.
list_workflows: List the named, runnable workflows the engine inferred or curated for an imported spec.
spec_id: "<id from import_spec>"
run_workflow: Execute one workflow and return the step-by-step request/response trace. Template variables ({{step1.id}}) are resolved automatically between steps.
sandbox_id: "<id from import_spec>"
workflow_name: "<id or name from list_workflows>"
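The `{{step1.id}}` resolution amounts to substituting dotted paths against recorded step outputs. A minimal sketch (the helper name and regex are assumptions, not FetchSandbox's implementation):

```typescript
// Minimal sketch of resolving {{stepN.path}} templates against earlier step
// outputs. Hypothetical helper -- not FetchSandbox's actual implementation.
function resolveTemplates(
  input: string,
  steps: Record<string, Record<string, unknown>>,
): string {
  return input.replace(/\{\{(\w+)\.([\w.]+)\}\}/g, (_, step, path) => {
    // Walk a dotted path like "customer.id" into the recorded step output.
    let value: unknown = steps[step];
    for (const key of path.split(".")) {
      value = (value as Record<string, unknown>)?.[key];
    }
    return String(value);
  });
}

// step1 created a customer; step2's request body references its id.
const steps = { step1: { id: "cus_abc123" } };
const body = resolveTemplates('{"customer": "{{step1.id}}"}', steps);
console.log(body); // {"customer": "cus_abc123"}
```

This is why the trace in run_workflow shows later steps carrying IDs minted by earlier ones.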
| Env var | Default | Purpose |
|---|---|---|
| FETCHSANDBOX_BASE_URL | https://fetchsandbox.com | Override for stage testing or self-hosted backends. |
| FETCHSANDBOX_TELEMETRY | (on) | Set to 0 to disable anonymous usage telemetry. |
When telemetry is on, each tool call records: an opaque per-machine session id (random UUID stored at ~/.fetchsandbox/session.json), the tool name, latency, success/failure, and the spec URL or "pasted". We do not record spec content, request bodies, or credentials. We use this to count daily-active sessions and learn which APIs people are bringing to the platform.
To opt out:
export FETCHSANDBOX_TELEMETRY=0
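Put together, the description above implies an event record roughly like the following (field names are assumptions; only the stated data categories appear):

```typescript
// Illustrative shape of one telemetry event, per the description above.
// Field names are assumptions; only the stated data categories are present.
interface TelemetryEvent {
  sessionId: string; // opaque random UUID persisted at ~/.fetchsandbox/session.json
  tool: "import_spec" | "list_workflows" | "run_workflow";
  latencyMs: number;
  success: boolean;
  specSource: string; // the spec URL, or the literal string "pasted"
}

const event: TelemetryEvent = {
  sessionId: "123e4567-e89b-42d3-a456-426614174000",
  tool: "run_workflow",
  latencyMs: 412,
  success: true,
  specSource: "pasted",
};

// Deliberately absent: spec content, request bodies, credentials.
console.log(Object.keys(event).join(","));
```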
MIT — see LICENSE.