Server data from the Official MCP Registry
MCP server for image/video understanding & generation (Gemini/OpenAI/Grok)
Valid MCP server (1 strong, 2 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry. Trust signals: trusted author (11/12 approved).
8 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: GOOGLE_AI_STUDIO_API_KEY
Environment variable: OPENAI_API_KEY
Environment variable: XAI_API_KEY
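A quick way to confirm the three provider keys are set before launching the server is a small preflight check. This is a sketch, not part of imagine-mcp itself; the key names come from the list above.

```python
import os

# Provider API keys the server expects (from the setup list above).
REQUIRED_KEYS = ["GOOGLE_AI_STUDIO_API_KEY", "OPENAI_API_KEY", "XAI_API_KEY"]


def missing_keys(env=None):
    """Return the required provider keys that are unset or empty."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED_KEYS if not env.get(key)]


if __name__ == "__main__":
    absent = missing_keys()
    if absent:
        print("Missing environment variables:", ", ".join(absent))
    else:
        print("All provider keys are set.")
```

Only the providers whose keys you supply will be usable, so an empty result here is not strictly required.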
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-n24q02m-imagine-mcp": {
      "env": {
        "XAI_API_KEY": "your-xai-api-key-here",
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "GOOGLE_AI_STUDIO_API_KEY": "your-google-ai-studio-api-key-here"
      },
      "args": [
        "imagine-mcp"
      ],
      "command": "uvx"
    }
  }
}

From the project's GitHub README.
mcp-name: io.github.n24q02m/imagine-mcp
Production-grade MCP server for image and video understanding + generation across Gemini, OpenAI, and Grok.
Providers: gemini / openai / grok, at poor (cheap/fast) or rich (high quality) tiers; swap via parameter. No .env files or manual credential plumbing. Caches understand responses with configurable TTL.

With AI Agent -- copy and send this to your AI agent:
Please set up imagine-mcp for me. Follow this guide: https://raw.githubusercontent.com/n24q02m/imagine-mcp/main/docs/setup-with-agent.md
Manual setup -- follow docs/setup-manual.md
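The configurable-TTL response cache mentioned in the feature list could look roughly like the sketch below. This is an illustration only; imagine-mcp's actual cache keys, storage, and eviction strategy are not shown in this page.

```python
import time


class TTLCache:
    """Minimal sketch of a response cache with a configurable TTL.

    Keys and values are illustrative; the real server's cache layout
    is an internal detail not documented here.
    """

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        """Return the cached value, or None if absent or expired."""
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if now >= expires:
            del self._store[key]  # lazily evict the stale entry
            return None
        return value
```

The injectable `now` parameter makes expiry deterministic in tests; a production cache would also bound its size.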
| Tool | Actions | Description |
|---|---|---|
| understand | -- | Describe or reason over one or more image/video URLs. media_urls: list[str], prompt: str, provider, tier, max_tokens. |
| generate | -- | Generate an image or video from a text prompt. media_type: image\|video, optional reference_image_url, optional job_id (video poll), aspect_ratio, duration_seconds. |
| config | open_relay, relay_status, relay_skip, relay_reset, relay_complete, warmup, status, set, cache_clear | Credential + runtime config: open relay form, check credential state, set runtime knobs (log level, default provider, TTL), clear response cache. |
| help | -- | Full Markdown documentation for understand, generate, or config topics. |
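For reference, a call to the understand tool over MCP would be a standard JSON-RPC tools/call request. The envelope below follows the MCP specification; the argument names come from the table above, while the argument values are purely illustrative.

```python
import json

# Hypothetical MCP tools/call request for the understand tool.
# Envelope shape per the MCP JSON-RPC spec; argument values are examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "understand",
        "arguments": {
            "media_urls": ["https://example.com/photo.jpg"],
            "prompt": "Describe this image.",
            "provider": "gemini",
            "tier": "poor",
        },
    },
}

print(json.dumps(request, indent=2))
```

An MCP client library normally builds this envelope for you; it is shown here only to make the tool's parameter shape concrete.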
Model IDs per provider x action x tier are leaderboard-ranked; see docs/models.md (auto-regenerated from src/imagine_mcp/models.py).
media_urls and reference_image_url are validated at the dispatch boundary; only http:// and https:// schemes reach the providers. file://, ftp://, gopher://, and scheme-less URLs are rejected.

Credentials are stored via mcp-core (config.enc, user-scoped platformdirs).

git clone https://github.com/n24q02m/imagine-mcp.git
cd imagine-mcp
mise run setup # or: uv sync --group dev
mise run dev # run the local HTTP relay daemon
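The scheme allow-list described above (only http:// and https:// reach the providers) can be sketched with the standard library. This is an illustrative check, not the server's actual implementation.

```python
from urllib.parse import urlparse

# Only these schemes are forwarded to the providers, per the README.
ALLOWED_SCHEMES = {"http", "https"}


def is_allowed_url(url: str) -> bool:
    """Accept only absolute http/https URLs; reject file://, ftp://,
    gopher://, and scheme-less inputs (sketch of the dispatch-boundary check)."""
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and bool(parsed.netloc)
```

Requiring a non-empty netloc also rejects relative or scheme-less strings like "example.com/a.png", which urlparse treats as having no scheme.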
See CONTRIBUTING.md for the full development workflow, commit convention, and release process. Issues + Discussions welcome.
MIT -- see LICENSE.