Server data from the Official MCP Registry
Drop-in cloud replacement for mcp-server-filesystem — all 14 MCP tools over S3, Azure Blob, and GCS
Valid MCP server (2 strong and 1 medium validity signal). No known CVEs in dependencies. ⚠️ The package registry entry links to a different repository than the scanned source. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
11 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: AZURE_STORAGE_CONNECTION_STRING
Environment variable: GOOGLE_CLOUD_PROJECT
Environment variable: AWS_ACCESS_KEY_ID
Environment variable: AWS_SECRET_ACCESS_KEY
Environment variable: REDIS_URL
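Besides the per-server `env` block shown below, these can also be supplied from the shell environment before the client launches the server. A minimal sketch with placeholder values (set only the variables your chosen provider and cache backend need):

```shell
# Placeholder values — replace with your own credentials and endpoints.
export AWS_ACCESS_KEY_ID="your-aws-access-key-id-here"
export AWS_SECRET_ACCESS_KEY="your-aws-secret-access-key-here"
export GOOGLE_CLOUD_PROJECT="your-google-cloud-project-here"
export AZURE_STORAGE_CONNECTION_STRING="your-azure-storage-connection-string-here"
export REDIS_URL="redis://localhost:6379"   # only needed with --cache-store redis
```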
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-nogoo9-mcp-server-cloud-fs": {
      "command": "npx",
      "args": [
        "-y",
        "@nogoo9/mcp-server-cloud-fs"
      ],
      "env": {
        "REDIS_URL": "your-redis-url-here",
        "AWS_ACCESS_KEY_ID": "your-aws-access-key-id-here",
        "GOOGLE_CLOUD_PROJECT": "your-google-cloud-project-here",
        "AWS_SECRET_ACCESS_KEY": "your-aws-secret-access-key-here",
        "AZURE_STORAGE_CONNECTION_STRING": "your-azure-storage-connection-string-here"
      }
    }
  }
}

From the project's GitHub README.
Drop-in cloud replacement for mcp-server-filesystem — all 14 MCP tools, same schema, backed by S3, Azure Blob, or GCS.
@nogoo9/mcp-server-cloud-fs exposes all 14 tools defined by mcp-server-filesystem — same tool names, same parameter schemas — over cloud object storage. Drop it into any MCP client config that currently points at mcp-server-filesystem and your AI assistant gains read/write access to S3, Azure Blob Storage, or Google Cloud Storage buckets.
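To illustrate the drop-in claim, a typical client entry for the local server might look like the sketch below (the `@modelcontextprotocol/server-filesystem` package name and the local path are assumptions for this example). Switching to cloud storage means replacing only the `args` with `["-y", "@nogoo9/mcp-server-cloud-fs", "s3", "s3://my-bucket"]` — tool names and schemas stay identical, so the client needs no other changes:

```json
{
  "mcpServers": {
    "fs": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```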
npx @nogoo9/mcp-server-cloud-fs s3 s3://my-bucket
cloud-fs-mcp <provider> <root-uri> [root-uri...] [options]
Providers: s3 | azure | gcs
Options:
--region <region> Cloud region (S3, GCS)
--endpoint <url> Custom endpoint for S3-compatible backends (MinIO, RustFS)
--cache-store <memory|fs|redis> Cache backend (default: memory)
--cache-ttl <seconds> Cache TTL in seconds (default: 60)
--sync-debounce <ms> Write flush delay in ms (default: 2000)
--cache-dir <path> Directory for fs cache store
--no-cache Bypass cache entirely (pass-through mode)
Credentials are always sourced from SDK credential chains — never CLI flags.
Credentials are read from the standard AWS credential chain: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars, ~/.aws/credentials, EC2 instance profiles, and so on.
cloud-fs-mcp s3 s3://my-bucket --region us-east-1
Pass --endpoint to target any S3-compatible backend:
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin
cloud-fs-mcp s3 s3://my-bucket --endpoint http://minio:9000 --region us-east-1
Uses DefaultAzureCredential — works with AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET env vars, managed identity, az login, and so on.
cloud-fs-mcp azure az://my-container
Uses Application Default Credentials (ADC). Set GOOGLE_APPLICATION_CREDENTIALS or run gcloud auth application-default login.
cloud-fs-mcp gcs gs://my-bucket
~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):
{
  "mcpServers": {
    "cloud-fs": {
      "command": "npx",
      "args": ["-y", "@nogoo9/mcp-server-cloud-fs", "s3", "s3://my-bucket"]
    }
  }
}
.mcp.json in your project root:
{
  "mcpServers": {
    "cloud-fs": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@nogoo9/mcp-server-cloud-fs", "s3", "s3://my-bucket"]
    }
  }
}
All reads and writes are routed through a transparent cache layer to reduce round-trips to cloud storage.
| Backend | Flag | Notes |
|---|---|---|
| Memory (default) | --cache-store memory | In-process; no persistence across restarts |
| Filesystem | --cache-store fs --cache-dir /tmp/cloud-fs-cache | Survives restarts |
| Redis | --cache-store redis | Set REDIS_URL (default: redis://localhost:6379) |
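The idea behind the transparent cache layer can be sketched as a read-through cache with TTL, keyed by path. This is a simplified illustration of what `--cache-store memory` and `--cache-ttl` imply, not the package's actual implementation (class and callback names are invented):

```typescript
// Read-through cache with TTL: fresh hits are served from memory,
// misses and expired entries go back to the cloud provider.
class TtlCache {
  private entries = new Map<string, { value: string; expires: number }>();

  constructor(
    private ttlMs: number,
    private fetchFromProvider: (path: string) => string, // e.g. an S3 GET
  ) {}

  get(path: string): string {
    const hit = this.entries.get(path);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // fresh hit: no round-trip to cloud storage
    }
    const value = this.fetchFromProvider(path); // miss or expired: refetch
    this.entries.set(path, { value, expires: Date.now() + this.ttlMs });
    return value;
  }
}
```

Repeated reads of the same path within the TTL window cost one provider round-trip.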
Write debounce: Writes land in cache immediately and are flushed to the provider after --sync-debounce ms (default: 2000). On SIGTERM/SIGINT the buffer is synchronously flushed before exit.
Disable caching: Pass --no-cache for pass-through mode — every read and write goes directly to the provider.
PolyForm Shield 1.0.0. Free for any non-competitive use.