AI prompt optimization for 58+ platforms across 7 categories, with support for custom platforms
Set these environment variables up before or after installing:
LLM_API_URL — base URL of your LLM provider's API
LLM_API_KEY — API key used to authenticate with that provider
LLM_MODEL — identifier of the model to use
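On macOS or Linux, these can be exported in your shell (or shell profile) before launching your MCP client. A minimal sketch — the endpoint URL, key, and model name below are placeholders, not values from this server's documentation:

```shell
# Placeholder values — substitute your LLM provider's actual endpoint,
# API key, and model identifier.
export LLM_API_URL="https://api.example.com/v1"
export LLM_API_KEY="sk-your-key-here"
export LLM_MODEL="example-model"
```

Alternatively, the same three variables can be supplied through the `env` block of the MCP configuration shown below, which keeps them scoped to this server.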
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-lumabyteco-clarifyprompt": {
      "env": {
        "LLM_MODEL": "your-llm-model-here",
        "LLM_API_KEY": "your-llm-api-key-here",
        "LLM_API_URL": "your-llm-api-url-here"
      },
      "args": [
        "-y",
        "clarifyprompt-mcp"
      ],
      "command": "npx"
    }
  }
}

This is a well-structured MCP server for AI prompt optimization, with proper authentication patterns and permissions appropriate to its purpose. Code quality is good, with minimal security concerns, though error handling and API key validation could be improved. Supply-chain analysis found 3 known vulnerabilities in dependencies (0 critical, 3 high severity), and package verification found 1 issue.
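Before restarting your MCP client, it can help to confirm the configuration is valid JSON. A minimal sketch, assuming the snippet above is saved to a file named `mcp-config.json` (a hypothetical path — your client may expect a different location, such as `claude_desktop_config.json`):

```shell
# Write the configuration to a file (path is illustrative).
cat > mcp-config.json <<'EOF'
{
  "mcpServers": {
    "io-github-lumabyteco-clarifyprompt": {
      "env": {
        "LLM_MODEL": "your-llm-model-here",
        "LLM_API_KEY": "your-llm-api-key-here",
        "LLM_API_URL": "your-llm-api-url-here"
      },
      "args": ["-y", "clarifyprompt-mcp"],
      "command": "npx"
    }
  }
}
EOF

# Parse the file; json.tool exits non-zero on malformed JSON.
python3 -m json.tool mcp-config.json > /dev/null && echo "config is valid JSON"
```

Once the configuration is in place, the client launches the server itself via `npx -y clarifyprompt-mcp`; you do not normally need to run that command by hand.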
Scanned 5 files · 7 findings
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This server requests system permissions that are typical for its category; review them before connecting.