Server data from the Official MCP Registry
Model intelligence for AI agents — syntax, params, pricing for 67+ generative AI models.
Valid MCP server (2 strong, 3 medium validity signals). 3 known CVEs in dependencies (0 critical, 3 high severity). Package registry verified. Imported from the Official MCP Registry.
4 files analyzed · 4 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
- Environment variable: `PROMPTIBUS_API_KEY`
- Environment variable: `PROMPTIBUS_API_URL`
Add this to your MCP configuration file:
```json
{
  "mcpServers": {
    "com-promptibus-mcp": {
      "env": {
        "PROMPTIBUS_API_KEY": "your-promptibus-api-key-here",
        "PROMPTIBUS_API_URL": "your-promptibus-api-url-here"
      },
      "args": [
        "-y",
        "@promptibus/mcp"
      ],
      "command": "npx"
    }
  }
}
```

From the project's GitHub README.
Model intelligence for AI agents. Syntax, parameters, pricing, and routing for 67+ generative AI models (Midjourney, Flux, Suno, Runway, DALL-E, Stable Diffusion, and more), delivered via the Model Context Protocol.
Promptibus MCP gives your AI agent structured knowledge about generative AI models: which model fits a task, how to format prompts for it, what parameters to use, what a run of 100 images will cost, and what pitfalls to avoid. It's not an API wrapper — it doesn't generate images or music. Instead, it tells your agent how to use the tools it already has access to.
Think of it as a prompt engineering co-pilot embedded in your agent's tool chain.
Works out of the box with any MCP-compatible client:
- Claude Desktop: `claude_desktop_config.json`
- Claude Code: `claude mcp add promptibus -- npx -y @promptibus/mcp`
- Cursor: `.cursor/mcp.json`
- Windsurf: `~/.codeium/windsurf/mcp_config.json`
- Zed: `settings.json` under `context_servers`
- Continue: `~/.continue/config.json`

See Client Configs for per-client snippets.
There are three install paths, in rough order of convenience:
Visit https://smithery.ai/server/@promptibus/mcp, pick your client, click install. Smithery writes the config for you.
For MCP clients that support HTTP transport, point straight at our hosted endpoint:
```json
{
  "mcpServers": {
    "promptibus": {
      "url": "https://promptibus.com/api/mcp"
    }
  }
}
```
No npm, no process to manage, no local state. Works behind firewalls as long as the client can reach promptibus.com.
```json
{
  "mcpServers": {
    "promptibus": {
      "command": "npx",
      "args": ["-y", "@promptibus/mcp"],
      "env": {
        "PROMPTIBUS_API_KEY": "psy_your_api_key_here"
      }
    }
  }
}
```
The package talks to the hosted Promptibus API — no database, no server setup. API key is optional (raises rate limits and unlocks all 67+ models).
| Variable | Required | Description |
|---|---|---|
| `PROMPTIBUS_API_KEY` | No | API key for higher rate limits and full tool access. Works anonymously without one. Get a key at promptibus.com/settings/api-keys. |
| `PROMPTIBUS_API_URL` | No | Override the API base URL (default: `https://promptibus.com`). Useful for testing or self-hosted Promptibus instances. |
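As a sketch of how a client might resolve these two variables, with the defaults stated in the table above (the `resolveConfig` helper and `PromptibusConfig` type are illustrative, not part of the package):

```typescript
// Resolve Promptibus client settings from the environment,
// falling back to the documented defaults.
interface PromptibusConfig {
  apiKey?: string; // optional: raises rate limits, unlocks all models
  baseUrl: string; // defaults to the hosted API
}

function resolveConfig(env: Record<string, string | undefined>): PromptibusConfig {
  return {
    apiKey: env.PROMPTIBUS_API_KEY, // undefined means anonymous tier
    baseUrl: env.PROMPTIBUS_API_URL ?? "https://promptibus.com",
  };
}

// Example: resolve from the current process environment.
const cfg = resolveConfig(process.env);
```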
All seven tools are available to every tier — including anonymous use without an API key. Tiering applies to rate limits and which models you can query against (see below).
| Tool | Description | Example Input |
|---|---|---|
| `recommend_model` | Find the best model for a task. Returns the top 3 with reasoning and parameters. | `{ "task": "photorealistic portrait", "domain": "IMAGE" }` |
| `optimize_prompt` | Optimize a prompt for a specific model. Applies model-specific syntax, community-tested parameters, and best-practice wording. | `{ "text": "a cat in space", "model": "midjourney-v7" }` |
| `lint_prompt` | Lint a prompt against a model's rules. Finds deprecated flags, invalid parameters, or length violations, and suggests fixes. | `{ "prompt": "a cat --ar 16:9", "model": "flux-2-pro" }` |
| `compare_models` | Side-by-side comparison of 2-5 models with provider, domain, cost, and capabilities. | `{ "models": ["flux-2-pro", "midjourney-v7"], "criteria": "photorealism" }` |
| `get_parameters` | Get recommended parameters for a model, including defaults and community-tested configs. | `{ "model": "stable-diffusion-3-5", "task_type": "portrait" }` |
| `get_model_profile` | Complete model profile: capabilities, syntax guide, parameters, community tips, and related prompts. | `{ "model": "suno-v4" }` |
| `get_pricing` | Real-world USD pricing for a model, a domain, or a planned volume. Includes cheaper alternatives and total-cost estimates. | `{ "model": "dall-e-3", "volume": 100 }` |
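Under MCP, a client invokes each of these tools with a standard JSON-RPC 2.0 `tools/call` request. A minimal sketch of the payload, using an example input from the table (the `buildToolCall` helper is ours; the request envelope follows the MCP specification):

```typescript
// Build a JSON-RPC 2.0 "tools/call" request as defined by the MCP spec.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Example input taken from the recommend_model row above.
const req = buildToolCall(1, "recommend_model", {
  task: "photorealistic portrait",
  domain: "IMAGE",
});
```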
Model profiles are available as MCP resources at:
promptibus://models/{slug}
Each resource returns a Markdown document with the model's provider, domain, version, pricing, description, and full guide content.
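A client can recover the model slug from one of these resource URIs with a one-line parse. A sketch under the scheme documented above (the `parseModelResource` helper is illustrative, not part of the package):

```typescript
// Extract the model slug from a promptibus:// resource URI.
// Returns null for URIs that don't match the documented scheme.
function parseModelResource(uri: string): string | null {
  const match = /^promptibus:\/\/models\/([a-z0-9._-]+)$/i.exec(uri);
  return match ? match[1] : null;
}
```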
The `system-prompt` prompt provides access to curated system prompts from the Promptibus community.

```
# List all
system-prompt

# Get specific
system-prompt { "slug": "midjourney-prompt-architect" }
```
Authentication is optional but recommended. Without an API key, you get anonymous-tier access.
To authenticate:

1. Get an API key at promptibus.com/settings/api-keys (keys start with `psy_`)
2. Set `PROMPTIBUS_API_KEY` in your MCP config

All tiers get access to all 7 tools. The difference is how many calls you get per day and which models you can query against (see below).
| Plan | Daily Limit | Model coverage |
|---|---|---|
| Anonymous (no key) | 25 requests | 10 free-tier models |
| Free (with key) | 100 requests | 10 free-tier models |
| Pro | 500 requests | All 67+ models |
| Studio | 2,000 requests | All 67+ models |
Limits reset daily at midnight UTC. See pricing for plan details.
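An agent budgeting its calls can encode the tier table directly. The sketch below uses the limits from the table above; the plan names match the table, while the `minimumPlanFor` helper is purely illustrative:

```typescript
// Daily request limits and model coverage per plan, from the table above.
const PLANS = {
  anonymous: { dailyLimit: 25, allModels: false },
  free: { dailyLimit: 100, allModels: false },
  pro: { dailyLimit: 500, allModels: true },
  studio: { dailyLimit: 2000, allModels: true },
} as const;

type Plan = keyof typeof PLANS;

// Smallest plan whose daily limit covers a planned request volume,
// or null if no plan is large enough.
function minimumPlanFor(dailyRequests: number): Plan | null {
  const order: Plan[] = ["anonymous", "free", "pro", "studio"];
  return order.find((p) => PLANS[p].dailyLimit >= dailyRequests) ?? null;
}
```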
To keep things fast and reduce unnecessary API traffic, the client caches responses for four tools whose output changes rarely: get_model_profile, get_parameters, compare_models, get_pricing. Cache TTL is 24 hours, in-memory (per process). Cache is skipped for tools whose output is input-dependent in a way that would get stale (recommend_model, optimize_prompt, lint_prompt).
67+ models across 5 domains:
IMAGE — Midjourney v7, v6.1, v6 | FLUX 2 Pro, 1.1 Pro, Dev, Schnell | Stable Diffusion 3.5, XL | DALL-E 3 | Ideogram 3 | Recraft V3 | Leonardo Phoenix | Google Imagen 3 | and more
VIDEO — Sora | Runway Gen-3 Alpha, Gen-4 | Kling 1.6, 2.0 | Minimax Video-01 | Pika 2.0, 2.2 | Luma Dream Machine | Hailuo | Veo 2 | and more
AUDIO — Suno v4, v3.5 | Udio v1.5 | ElevenLabs | Stable Audio 2.0 | and more
TEXT — GPT-4o | Claude 4 Sonnet | Gemini 2.5 Pro | DeepSeek V3, R1 | Llama 3.3 | and more
CODE — Claude 4 Sonnet | GPT-4o | Gemini 2.5 Pro | DeepSeek V3 | and more
Full list at promptibus.com/models.
All network traffic goes to promptibus.com over HTTPS.

Claude Desktop: add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "promptibus": {
      "command": "npx",
      "args": ["-y", "@promptibus/mcp"]
    }
  }
}
```
Or, from the command line with Claude Code:

```bash
claude mcp add promptibus -- npx -y @promptibus/mcp
```
Cursor: add to `.cursor/mcp.json` in your project (or `~/.cursor/mcp.json` globally):
```json
{
  "mcpServers": {
    "promptibus": {
      "command": "npx",
      "args": ["-y", "@promptibus/mcp"]
    }
  }
}
```
Windsurf: add to `~/.codeium/windsurf/mcp_config.json`:
```json
{
  "mcpServers": {
    "promptibus": {
      "command": "npx",
      "args": ["-y", "@promptibus/mcp"]
    }
  }
}
```
Zed: add to `settings.json`:
```json
{
  "context_servers": {
    "promptibus": {
      "command": {
        "path": "npx",
        "args": ["-y", "@promptibus/mcp"]
      }
    }
  }
}
```
Continue: add to `~/.continue/config.json`, under `experimental.modelContextProtocolServers`:
```json
{
  "transport": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@promptibus/mcp"]
  }
}
```
In the MCP Client node, set transport to stdio:
- Command: `npx`
- Arguments: `-y @promptibus/mcp`
MIT — © Promptibus