Server data from the Official MCP Registry
MCP server for n8n: generate, lint, explain failed executions, drive live n8n via REST.
Valid MCP server (2 strong, 4 medium validity signals). 2 known CVEs in dependencies (0 critical, 2 high severity). Package registry verified. Imported from the Official MCP Registry.
7 files analyzed · 3 issues found
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-ratamaha-git-n8n-mcp": {
"args": [
"-y",
"@automatelab/n8n-mcp"
],
"command": "npx"
}
}
}

From the project's GitHub README:
An MCP server that gives Claude, Cursor, and other MCP-compatible agents nine tools for working with n8n: scaffold a custom node, generate workflow JSON, lint, diagnose failed executions, and drive a live n8n instance via REST.
We use n8n daily inside AutomateLab, and we kept hitting the same friction when asking an LLM to help: it would emit workflow JSON that imported but failed at runtime, or it would generate AI Agent clusters with the wrong connection types, or - most frustrating - the user would paste an execution that "silently dropped items" and the model had no idea where to look. Generic "give the model the whole n8n catalog" approaches eat huge context and still produce broken JSON because the failure modes are subtle (typeVersion mismatches, IF v1 schema, credential references that don't survive import).
So we built a small, focused server: encode the failure modes the lint can catch, the cluster topology the generator must respect, and the diagnosis the agent can't do alone.
Other n8n MCP servers (notably czlonkowski/n8n-mcp) compete on breadth - 20+ tools and an indexed corpus of every n8n node. They own that niche.
This server is the debugging-and-first-run-correctness MCP for n8n:
- n8n_explain_execution is the wedge. Paste the execution JSON; get back per-node findings: which nodes returned 0 items, which had unresolved `={{ ... }}` expressions, error messages with concrete hints. No other MCP server does this well, and it hits the n8n community's #1 debugging pain point (silent data loss between nodes).
- n8n_generate_workflow is opinionated about AI Agent topology - it emits proper LangChain clusters with ai_languageModel / ai_memory / ai_tool connections (sub-nodes connect upward to the agent, not via main). Imports cleanly on n8n 1.x.
- n8n_lint_workflow catches the silent failures: deprecated node types (Function → Code, spreadsheetFile → convertToFile), AI Agent missing a language model, IF v1 schema, Webhook missing webhookId, broken connections across all connection types (not just main).
- The live-instance tools (configured via N8N_API_URL + N8N_API_KEY) let you list, fetch, create, and activate workflows and pull executions - so the lint and explain tools can run against your live workflows, not just JSON pasted in chat.
- Plus: a paired Agent Skill that teaches the model when to use which tool and where to load deeper context (split into references/ so it doesn't bloat the prompt).
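To illustrate what "sub-nodes connect upward to the agent" means in practice: in n8n's workflow JSON, the connections block is keyed by the sub-node, which points at the agent with a non-main connection type. A minimal sketch (node names are illustrative, not the tool's actual output):

```json
{
  "connections": {
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [{ "node": "AI Agent", "type": "ai_memory", "index": 0 }]
      ]
    }
  }
}
```

Getting this direction wrong (wiring sub-nodes to the agent via main) is exactly the class of import-then-fail bug the lint is built to catch.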
Stateless (work without a live n8n instance):
| Tool | Purpose |
|---|---|
| n8n_generate_workflow | Plain-English description → workflow JSON. Detects AI-agent intent. |
| n8n_scaffold_node | Description → single INodeType TypeScript file for a custom n8n package. |
| n8n_lint_workflow | Workflow JSON → list of errors and warnings. |
| n8n_explain_execution | Failed execution JSON → per-node diagnosis with hints. |
Live-instance (require N8N_API_URL + N8N_API_KEY env vars):
| Tool | Purpose |
|---|---|
| n8n_list_workflows | Paginate workflows; filter by active/tags/name. |
| n8n_get_workflow | Fetch a workflow by id. |
| n8n_create_workflow | POST a workflow. Strips read-only fields. |
| n8n_activate_workflow | Flip active on/off. |
| n8n_list_executions | Browse executions; pass includeData: true for the full body. |
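"Strips read-only fields" matters because n8n's create endpoint rejects server-managed fields present in exported JSON. A minimal sketch of that preprocessing step (a hypothetical reimplementation - the field list is an assumption based on n8n API behavior, not the server's actual code):

```typescript
// Hypothetical sketch of the cleanup n8n_create_workflow performs
// before POSTing. n8n re-generates these fields on create.
type WorkflowJson = Record<string, unknown>;

const READ_ONLY_FIELDS = ["id", "createdAt", "updatedAt", "versionId", "active"];

function stripReadOnlyFields(workflow: WorkflowJson): WorkflowJson {
  const clean: WorkflowJson = { ...workflow };
  for (const field of READ_ONLY_FIELDS) {
    delete clean[field]; // server-managed; sending them causes a 400
  }
  return clean;
}
```

This is why a workflow exported from one instance can be POSTed to another without manual editing.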
Requires Node 20 or later.
npm install -g @automatelab/n8n-mcp
Cursor (~/.cursor/mcp.json) or Claude Desktop (claude_desktop_config.json):
{
"mcpServers": {
"n8n": {
"command": "npx",
"args": ["-y", "@automatelab/n8n-mcp"],
"env": {
"N8N_API_URL": "https://your-n8n.example.com",
"N8N_API_KEY": "n8n_..."
}
}
}
}
The env block is optional - the 4 stateless tools work without it. Get an API key from n8n: Settings → API → Create API key.
Restart your MCP host. The 9 n8n_* tools appear in the MCP panel.
n8n_generate_workflow

Use n8n_generate_workflow to build: Stripe webhook → Slack message + new row in Google Sheets.
Returns workflow JSON ready for n8n's "Import from File" dialog.
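As a rough illustration of the shape such JSON takes (node names, typeVersion values, and parameters here are assumptions for the sketch, not the tool's actual output):

```json
{
  "name": "Stripe to Slack",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 2,
      "position": [0, 0],
      "parameters": { "path": "stripe" }
    },
    {
      "name": "Slack",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 2.2,
      "position": [220, 0],
      "parameters": {}
    }
  ],
  "connections": {
    "Webhook": { "main": [[{ "node": "Slack", "type": "main", "index": 0 }]] }
  }
}
```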
n8n_explain_execution

Here's a failed execution from n8n. Why is the Slack node not firing? [paste JSON]
Returns:
WARNING [Filter] Returned 0 items. Downstream nodes will not execute.
hint: Common causes: (1) IF/Switch routed to the other branch — check `parameters.conditions`. (2) Filter/Set node dropped everything — inspect its output explicitly.
INFO [Last node executed was "Filter". If the workflow stopped here unexpectedly, check its output items below.]
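The zero-item finding above boils down to a pass over the execution's runData. A hypothetical sketch of that check (simplified types mirroring n8n's execution format; not the server's actual code):

```typescript
// Hypothetical sketch of the zero-item check behind n8n_explain_execution.
// runData maps node name → runs; each run holds output items per branch.
type Item = { json: Record<string, unknown> };
type Run = { data?: { main?: Array<Item[] | null> } };
type RunData = Record<string, Run[]>;

function findEmptyNodes(runData: RunData): string[] {
  const empty: string[] = [];
  for (const [node, runs] of Object.entries(runData)) {
    const total = runs.reduce((sum, run) => {
      const branches = run.data?.main ?? [];
      return sum + branches.reduce((s, b) => s + (b?.length ?? 0), 0);
    }, 0);
    if (total === 0) empty.push(node); // downstream nodes will not execute
  }
  return empty;
}
```

A node that ran but emitted zero items across every branch is the "silent data loss" case: nothing errors, downstream nodes simply never fire.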
n8n_lint_workflow

Lint this workflow JSON. [paste JSON]
Returns:
ERROR [AI Agent] AI Agent has no `ai_languageModel` sub-node connected. Attach a chat model (e.g. lmChatOpenAi).
WARNING [Webhook] Webhook node has no `webhookId`. n8n auto-generates one on import, so the production URL will change.
WARNING [LegacyFunction] Node type "n8n-nodes-base.function" is deprecated. Use "n8n-nodes-base.code".
Or no issues found.
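One of the rules behind that output - flagging deprecated node types - can be sketched as a table lookup over the workflow's nodes (a hypothetical reimplementation; the mapping below covers only the deprecations named above, while the real lint is wider):

```typescript
// Hypothetical sketch of one lint rule: flag deprecated node types.
const DEPRECATED: Record<string, string> = {
  "n8n-nodes-base.function": "n8n-nodes-base.code",
  "n8n-nodes-base.spreadsheetFile": "n8n-nodes-base.convertToFile",
};

type WorkflowNode = { name: string; type: string };

function lintDeprecatedTypes(nodes: WorkflowNode[]): string[] {
  return nodes
    .filter((n) => n.type in DEPRECATED)
    .map(
      (n) =>
        `WARNING [${n.name}] Node type "${n.type}" is deprecated. Use "${DEPRECATED[n.type]}".`
    );
}
```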
The examples/ directory ships with two ready-to-import workflows:
workflow-stripe-to-slack.json - Stripe webhook fans out to Slack and Google Sheets.
workflow-rss-to-discord.json - RSS feed trigger posts new items to a Discord channel.

Import either via n8n's Import from File dialog.
git clone https://github.com/ratamaha-git/n8n-mcp
cd n8n-mcp
npm install
npm run build
npm run smoke
npm run smoke boots the server with a --smoke flag that lists registered tools and exits without binding stdio. Useful for CI or first-run sanity checks.
MIT. See LICENSE.
Developed by AutomateLab.