Server data from the Official MCP Registry
Create presentations, documents, and webpages from any MCP-compatible AI assistant via Gamma.app
A well-structured MCP server for Gamma.app integration with solid security practices. The codebase demonstrates proper authentication, input validation via Zod schemas, and appropriate error handling. API key is correctly required via environment variables with format validation. Permissions align with the server's purpose (network API calls, environment variable access). Minor code quality observations around broad error catching and logging do not materially impact security. Supply chain analysis found 8 known vulnerabilities in dependencies (0 critical, 5 high severity). Package verification found 1 issue.
7 files analyzed · 13 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: GAMMA_API_KEY
Add this to your MCP configuration file:
{
"mcpServers": {
"io-github-amer-prog-gamma-mcp-server": {
"env": {
"GAMMA_API_KEY": "your-gamma-api-key-here"
},
"args": [
"-y",
"@arkava-ai/gamma-mcp-server"
],
"command": "npx"
}
}
}
From the project's GitHub README.
A Model Context Protocol (MCP) server that integrates Gamma.app with AI assistants. Create presentations, documents, webpages, and social posts directly from your AI conversations.
Works with: Claude Code, Claude Desktop, OpenCode, GitHub Copilot CLI, Google Gemini CLI, and any other MCP-compatible AI assistant.
git clone https://github.com/Arkava-AI/gamma-mcp-server.git
cd gamma-mcp-server
npm install
npm run build
Get your Gamma API key (format: `sk-gamma-xxxxxxxx`). Note: Requires a Gamma Pro, Ultra, Team, or Business account.
Choose your AI assistant below for setup instructions.
| OS | Path |
|---|---|
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Linux | ~/.config/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
Add to the mcpServers object:
{
"mcpServers": {
"gamma": {
"command": "node",
"args": ["/absolute/path/to/gamma-mcp-server/dist/index.js"],
"env": {
"GAMMA_API_KEY": "sk-gamma-your-api-key-here"
}
}
}
}
Restart Claude Desktop to load the new MCP server. You should see "gamma" in your MCP servers list.
Claude Code uses the same MCP configuration as Claude Desktop. If you've already configured Claude Desktop, you're all set.
For project-level configuration, create a .claude/settings.json file in your project directory with the same mcpServers structure shown above. This allows different projects to use different MCP server configurations.
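As a sketch of that project-level setup, a `.claude/settings.json` could mirror the Claude Desktop config above (path and key are placeholders):

```json
{
  "mcpServers": {
    "gamma": {
      "command": "node",
      "args": ["/absolute/path/to/gamma-mcp-server/dist/index.js"],
      "env": {
        "GAMMA_API_KEY": "sk-gamma-your-api-key-here"
      }
    }
  }
}
```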
| Scope | Path |
|---|---|
| Global (user) | ~/.config/opencode/opencode.json |
| Project | ./opencode.json (in your project root) |
{
"mcp": {
"gamma": {
"type": "local",
"command": ["node", "/absolute/path/to/gamma-mcp-server/dist/index.js"],
"enabled": true,
"environment": {
"GAMMA_API_KEY": "sk-gamma-your-api-key-here"
}
}
}
}
Note: OpenCode uses a different config format from Claude Desktop — the top-level key is `mcp` (not `mcpServers`), a `type` field is required (`"local"` or `"remote"`), `command` is an array, and environment variables go under `environment`.
Restart OpenCode after editing the config to load the server.
Add to ~/.copilot/mcp-config.json:
{
"mcpServers": {
"gamma": {
"type": "local",
"command": "node",
"args": ["/absolute/path/to/gamma-mcp-server/dist/index.js"],
"env": {
"GAMMA_API_KEY": "sk-gamma-your-api-key-here"
},
"tools": ["*"]
}
}
}
Note: Requires the GitHub Copilot CLI (`gh copilot`) — not the same as OpenAI Codex.
~/.codex/config.toml (TOML format, not JSON):
[mcp_servers.gamma]
command = "node"
args = ["/absolute/path/to/gamma-mcp-server/dist/index.js"]
enabled = true
[mcp_servers.gamma.env]
GAMMA_API_KEY = "sk-gamma-your-api-key-here"
Note: Codex uses TOML format, not JSON. The `env` section is a separate table under `[mcp_servers.gamma.env]`.
| Scope | Path |
|---|---|
| User | ~/.gemini/settings.json |
| Project | .gemini/settings.json (in your project root) |
{
"mcpServers": {
"gamma": {
"command": "node",
"args": ["/absolute/path/to/gamma-mcp-server/dist/index.js"],
"cwd": "/absolute/path/to/gamma-mcp-server",
"env": {
"GAMMA_API_KEY": "sk-gamma-your-api-key-here"
},
"timeout": 30000
}
}
}
Restart Gemini CLI after editing the config to load the server.
| Tool | Description |
|---|---|
| `gamma_generate` | Create presentations, documents, webpages, or social posts |
| `gamma_get_status` | Check generation progress (with optional polling) |
| `gamma_from_template` | Remix existing Gamma templates |
| `gamma_list_themes` | Browse available visual themes |
| `gamma_list_folders` | List your Gamma folders |
| `gamma_share_email` | Share content via email |
| `gamma_health` | Verify server and API are reachable |
| `gamma_archive` | Archive a Gamma from your workspace |
Create new content using Gamma's AI.
Formats & Sizes:
- presentation: fluid, 16x9, 4x3
- document: fluid, pageless, letter, a4
- social: 1x1, 4x5, 9x16
- webpage: fluid

Example prompts in your AI assistant:
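As an illustrative sketch, a `gamma_generate` call might carry arguments like the following — the exact parameter names (`prompt`, `format`, `size`) are assumptions, not confirmed by this README:

```json
{
  "prompt": "A pitch deck for a coffee subscription startup",
  "format": "presentation",
  "size": "16x9"
}
```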
Check if a generation has completed. Set waitForCompletion: true to automatically poll until done.
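A hypothetical `gamma_get_status` call with polling enabled might look like this (`waitForCompletion` is named above; `generationId` is an assumed parameter name and placeholder value):

```json
{
  "generationId": "gen_abc123",
  "waitForCompletion": true
}
```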
Remix an existing Gamma with new content or variable substitutions.
{
"templateId": "gamma_xyz789",
"prompt": "Update for Q1 2025",
"variables": { "company_name": "Acme Corp" }
}
This repository is designed for easy deployment across multiple machines:
# On each machine:
git clone https://github.com/Arkava-AI/gamma-mcp-server.git
cd gamma-mcp-server
npm install && npm run build
# Then configure your AI assistant with the local path
To update on any machine:
git pull
npm install
npm run build
# Restart your AI assistant
| Variable | Default | Description |
|---|---|---|
| `GAMMA_API_KEY` | (required) | Your Gamma API key (`sk-gamma-...`) |
| `GAMMA_API_BASE_URL` | `https://public-api.gamma.app/v1.0` | Override for self-hosted Gamma instances |
| `GAMMA_POLL_INTERVAL_MS` | 2000 | Milliseconds between status polls (default 2s) |
| `GAMMA_MAX_POLL_ATTEMPTS` | 150 | Max polling attempts before timeout (default 150 × 2s = 5 min) |
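Putting the optional variables together: an `env` block that polls every second with a 2-minute cap (120 × 1s) might look like the sketch below. Environment variable values are conventionally strings in MCP configs, though exact quoting rules depend on your client.

```json
{
  "env": {
    "GAMMA_API_KEY": "sk-gamma-your-api-key-here",
    "GAMMA_POLL_INTERVAL_MS": "1000",
    "GAMMA_MAX_POLL_ATTEMPTS": "120"
  }
}
```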
# Run in development mode with auto-reload
npm run dev
# Build for production
npm run build
# Run linting
npm run lint
# Format code
npm run format
# Type check
npm run typecheck
# Test with MCP Inspector
npm run inspect
gamma-mcp-server/
├── src/
│ ├── index.ts # Main entry point
│ ├── constants.ts # Configuration constants
│ ├── types.ts # TypeScript interfaces
│ ├── schemas/ # Zod validation schemas
│ ├── services/ # API client and formatters
│ └── tools/ # MCP tool implementations
├── dist/ # Compiled JavaScript (generated)
├── package.json
├── tsconfig.json
└── eslint.config.js
Gamma uses a credit-based system for API usage. Credits are consumed per generation. Monitor your usage in the Gamma dashboard and enable auto-recharge if needed.
| Error | Solution |
|---|---|
| "GAMMA_API_KEY environment variable is required" | Ensure env.GAMMA_API_KEY is set in your AI assistant's MCP config |
| "Invalid API key" | Keys should start with sk-gamma-. Verify the complete key. |
| "Rate limit exceeded" | Wait a few minutes. Contact Gamma support for higher limits. |
| "Insufficient credits" | Top up credits or enable auto-recharge in Gamma settings. |
Arkava Ltd — engage@arkava.ai
MIT License - see LICENSE for details.