Server data from the Official MCP Registry
Metadata-first fiction editing and reasoning tools for long-form writing projects.
Valid MCP server (4 strong, 0 medium validity signals). No known CVEs in dependencies. Package registry verified. Imported from the Official MCP Registry. Trust signals: 3 highly-trusted packages.
7 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
Set these up before or after installing:
Environment variable: WRITING_SYNC_DIR
Environment variable: DB_PATH
Environment variable: OWNERSHIP_GUARD_MODE
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-hannasdev-mcp-writing": {
      "command": "npx",
      "args": ["-y", "@hanna84/mcp-writing"],
      "env": {
        "DB_PATH": "your-db-path-here",
        "WRITING_SYNC_DIR": "your-writing-sync-dir-here",
        "OWNERSHIP_GUARD_MODE": "your-ownership-guard-mode-here"
      }
    }
  }
}

From the project's GitHub README.
An MCP service for AI-assisted reasoning and editing on long-form fiction projects.
Designed to work with OpenClaw but compatible with any MCP-capable AI gateway.
For local stdio MCP clients, run the published package directly:
WRITING_SYNC_DIR=/path/to/sync-dir DB_PATH=./writing.db npx -y @hanna84/mcp-writing
The CLI wrapper defaults to stdio transport and adds the Node 22 SQLite flag automatically when needed.
Instead of feeding an entire manuscript to an AI and hoping it fits in the context window, mcp-writing builds a structured index from your scene files. The AI queries that index first — finding relevant characters, beats, and loglines — then loads only the specific prose it needs.
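In MCP terms, each of those index queries is an ordinary `tools/call` request over the stdio transport. A minimal sketch, assuming the server exposes `find_scenes` with a `tag` argument (the argument name is illustrative, not taken from the server's actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_scenes",
    "arguments": { "tag": "injury" }
  }
}
```

In practice your MCP client issues these calls for you; the point is that the index is queried before any prose is loaded into context.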
Documentation:
| Guide | Description |
|---|---|
| docs/setup.md | Prerequisites, first-time setup, Scrivener import, native sync format |
| docs/docker.md | Docker Compose, OpenClaw integration, SSH hardening |
| docs/data-ownership.md | Which tools write which files, import safety rules |
| docs/tools.md | Full tool reference — auto-generated from source |
| docs/development.md | Running locally, tests, environment variables, troubleshooting |
Goal: catch inconsistencies before sharing pages.
1. `sync` after your latest writing session.
2. `find_scenes` for scenes involving a specific character or tag (for example, all scenes tagged `injury` or `promise`).
3. `get_arc` to review that character's ordered progression across the manuscript.
4. `get_scene_prose` to read the relevant prose.
5. `flag_scene` where continuity needs a fix.

Outcome: you review one narrative thread at a time instead of rereading the entire novel to find contradictions.
Goal: make sure subplot threads progress intentionally and resolve on time.
1. `list_threads` for the project.
2. `get_thread_arc` to inspect scene order and beat labels for each thread.
3. `upsert_thread_link` to add or update a thread link on the right scene.
4. `get_thread_arc` again to confirm pacing and coverage.

Outcome: subplot structure stays visible and auditable, which reduces dropped threads in late drafts.
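Step 3 of the thread workflow above maps onto a `tools/call` payload along these lines; a hedged sketch, where the argument names and sample values (`thread_id`, `scene_id`, `beat`) are illustrative assumptions, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "upsert_thread_link",
    "arguments": {
      "thread_id": "smuggling-subplot",
      "scene_id": "ch05-scene01",
      "beat": "escalation"
    }
  }
}
```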
Goal: keep indexes accurate without manually re-tagging everything.
1. `enrich_scene` to re-derive lightweight metadata from current prose.
2. `update_scene_metadata` for intentional editorial fields (for example, beat, POV, timeline position, and tags).
3. `search_metadata` and `find_scenes` to verify scenes are discoverable under the expected filters.

Outcome: your AI assistant can reliably find the right scenes without drifting from the manuscript.
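The editorial-fields step above might look like the following request. This is a sketch under stated assumptions: `scene_id`, `pov`, and the sample values are hypothetical, while beat, POV, and tags are the fields named in the workflow:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "update_scene_metadata",
    "arguments": {
      "scene_id": "ch01-scene03",
      "beat": "inciting incident",
      "pov": "Mara",
      "tags": ["promise", "injury"]
    }
  }
}
```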
Goal: let AI propose prose edits without losing control of your draft.
1. `propose_edit` for a specific scene.
2. `commit_edit` to apply the proposal, or `discard_edit` to reject it.
3. `list_snapshots` (and optionally `snapshot_scene`) to inspect or preserve revision history.

Outcome: you get AI speed with explicit approval and recoverable history for every applied change.
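Approval in this loop is itself a tool call. A minimal sketch of the approval step, assuming `commit_edit` takes the identifier of a previously proposed edit (the `edit_id` argument name and id format are illustrative, not taken from the server's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "commit_edit",
    "arguments": { "edit_id": "edit-0042" }
  }
}
```

Rejecting instead of applying would be the same shape of request with `discard_edit` as the tool name.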
Goal: rebuild scene-to-character links in a controlled way after imported prose changes or metadata drift.
1. `enrich_scene_characters_batch` with the default `dry_run=true` to preview inferred links for a project, chapter, or explicit scene list.
2. `get_async_job_status` until the batch job completes, then review `job.result.results` for changed scenes, ambiguous matches, and partial failures.
3. `get_scene_prose` to spot-check if the changes touch important continuity or cast-heavy chapters.
4. `enrich_scene_characters_batch` with `dry_run=false` once the preview looks correct.
5. Use `replace_mode=replace` with `confirm_replace=true` deliberately.

Outcome: character-link maintenance becomes a preview-first batch operation instead of a one-off regex script or manual sidecar cleanup.
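A preview run of the batch step might look like this. `dry_run` comes from the workflow above; the `chapter` argument is a hedged assumption based on the "project, chapter, or explicit scene list" scoping it describes — the exact schema lives in the server's tool reference (docs/tools.md):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "enrich_scene_characters_batch",
    "arguments": {
      "chapter": 3,
      "dry_run": true
    }
  }
}
```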
AGPL-3.0-only