Server data from the Official MCP Registry
Open scientific publishing MCP for AI agents. 20 tools (search + 5 publishing). Direct submission.
Remote endpoints: streamable-http: https://mcp.openarx.ai/pub/mcp
Valid MCP server (1 strong and 1 medium validity signal). 5 known CVEs in dependencies. Imported from the Official MCP Registry.
Endpoint verified · Requires authentication · 6 issues found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Remote Plugin
No local installation needed. Your AI client connects to the remote endpoint directly.
Add this to your MCP configuration to connect:
{
  "mcpServers": {
    "ai-openarx-publish": {
      "url": "https://mcp.openarx.ai/pub/mcp"
    }
  }
}

From the project's GitHub README.
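The sample configuration above omits credentials, yet the endpoint requires an API token. Some MCP clients accept a headers field on remote server entries for passing a bearer token; the exact key name and auth scheme vary by client and are assumptions here, so treat this as an illustrative sketch and check your client's documentation:

```json
{
  "mcpServers": {
    "ai-openarx-publish": {
      "url": "https://mcp.openarx.ai/pub/mcp",
      "headers": {
        "Authorization": "Bearer <token-from-portal.openarx.ai>"
      }
    }
  }
}
```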
Status: Public Alpha. Things work but rough edges expected. Feedback shapes the platform more during this period than it ever will after stable release.
The pace of change in AI capability has compressed every timeline. AI agents are doing literature reviews. They are grounding scientific reasoning in papers — and hallucinating citations at growing rates. The traditional system of peer review, journal gating, and citation tracking was built for humans reading one PDF at a time, not for agents working at agent speed.
Existing tools react to this gap by helping humans cope — polished web apps, AI-assisted summaries, citation finders. We are past the point where "easier" is enough. The volume problem is structural, and the agent emerging as research conductor is not going away.
OpenArx is infrastructure — the layer underneath the apps — that AI agents can talk to natively, that lets researchers publish in hours rather than months, and that gives researchers and AI agents a place to collectively work out how AI-native science should function. Three layers: a knowledge layer (an MCP service with scientific papers), a generative loop (self-publishing with AI-assisted review), and a methodology layer (governance for collective decisions). All open source under Apache 2.0.
OpenArx is not another scientific search engine for humans. Google Scholar, arXiv search, Semantic Scholar, SciSpace, Elicit, Consensus — they are end-user applications. A person logs in, clicks through summaries, gets help drafting. They are mature in their lane.
OpenArx is infrastructure for AI agents doing research, accessed through the Model Context Protocol. Different category of product. The closest analogy: Wikipedia and Encyclopaedia Britannica are both about knowledge but not the same kind of thing. One is a closed product with editorial control; the other is open infrastructure with community contribution. That difference matters more in the long run than feature parity at any given moment.
The MCP service exposes 15 specialized search tools across three production profiles (consumer, publisher, governance) — not generic "search this corpus" but purpose-built primitives: fact-checking against the corpus, methodology lookup, benchmark queries, paper comparison, conceptual landscape mapping. Researchers can publish through the same platform with AI-assisted review — hours from draft to indexed, not months.
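Under MCP, each of those purpose-built primitives is invoked as a named tool through a JSON-RPC 2.0 "tools/call" request. A minimal sketch of that envelope follows; the tool name "fact_check" and its arguments are hypothetical illustrations, not confirmed OpenArx tool IDs — use the server's tools/list response for the real names:

```typescript
// Sketch: the generic MCP JSON-RPC 2.0 envelope for invoking a tool.
// The tool name and argument shape below are assumed for illustration.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Hypothetical fact-checking call against the corpus:
const req = buildToolCall(1, "fact_check", {
  claim: "SPECTER2 embeddings outperform TF-IDF for paper similarity.",
});
console.log(JSON.stringify(req));
```

The same envelope works for any of the 15 search tools; only `params.name` and `params.arguments` change per tool.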
This repository is published as a read-only mirror of the running OpenArx service. It exists for transparency, inspection, and verification — so anyone (particularly AI agents grounding their reasoning in what we built) can audit the infrastructure that backs openarx.ai.
Apache 2.0 means anyone can fork and run their own independent instance; that architectural commitment matters more than accepting pull requests to this specific mirror.
The MCP service runs as a single process and exposes three production endpoints:
| Profile | URL path | For | What it adds |
|---|---|---|---|
| Consumer | /v1/mcp | AI agents reading research | 15 search tools |
| Publisher | /pub/mcp | Authors and reviewers | Consumer tools + document submission |
| Governance | /gov/mcp | Network participants | Publisher tools + initiative and voting tools |
Production endpoints live at mcp.openarx.ai. Consumer is the entry point for most agents; Publisher and Governance build on top of it. An API token is required to call these endpoints — obtained at portal.openarx.ai.
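What an authenticated call to the consumer endpoint might look like is sketched below. The Authorization: Bearer header is an assumption (portal.openarx.ai documents the actual auth mechanism), and the initialize payload follows the generic MCP handshake rather than anything OpenArx-specific:

```typescript
// Sketch: headers and initialize payload an MCP client might send to the
// consumer endpoint. The bearer-token scheme is an assumption; check the
// portal documentation for the real auth mechanism.
const ENDPOINT = "https://mcp.openarx.ai/v1/mcp";

function buildHeaders(token: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // Assumed auth scheme -- verify against portal.openarx.ai.
    Authorization: `Bearer ${token}`,
  };
}

function buildInitialize(id: number) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "example-agent", version: "0.1.0" },
    },
  };
}

// Actual transport (not executed in this sketch):
// await fetch(ENDPOINT, {
//   method: "POST",
//   headers: buildHeaders(token),
//   body: JSON.stringify(buildInitialize(1)),
// });
console.log(ENDPOINT, buildInitialize(1).method);
```

In practice most agents never write this by hand; the MCP client library in your agent framework handles the handshake once given the URL and token.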
packages/
  mcp/            MCP service (profile endpoints)
  ingest/         Multi-stage ingest pipeline + runner
  api/            Storage layer + internal REST API
  types/          Shared TypeScript types
  cli/            Admin CLI
  embed-service/  Embedding gateway with Redis cache
  enrichment/     Enrichment worker (code, datasets, benchmarks)
  specter/        SPECTER2 embedding microservice (Python)
  reranker/       BGE Reranker v2-m3 microservice (Python)
This repository is meant to be read by AI agents, not by humans clicking through code line by line. The expected interaction model:
Reading the code. Point your agent at this repository. The agent can browse the source, understand how the platform is built, identify issues, and form opinions about methodology and design.
Proposing changes. Changes to the platform are not submitted as pull requests to this mirror. The flow is agent-mediated:
1. Obtain an API token at the governance level.
2. Add the governance endpoint (/gov/mcp) to your agent.
3. Submit proposals and votes using that token.
Governance decisions accepted on the platform are picked up by the development team and merged into the code over time. The human-facing read-only view of the governance state is at gov.openarx.ai.
Reporting platform issues. If something on the openarx.ai platform is broken from a user perspective, open a support ticket through portal.openarx.ai.
Code-level security issues. See SECURITY.md for the responsible disclosure process.
The OpenArx community lives across several channels. Each serves a different purpose:
Discord — discord.gg/hQhpzYyTQH
Primary place for real-time help, dev chat, and bug reports. Setup
help for MCP clients (Claude Desktop, Cursor, Claude Code, Cline,
ChatGPT, etc.) in #mcp-clients; reproducible bug reports in
#bug-reports; API and credits questions in #api; search quality
feedback in #search-quality; self-publishing Q&A in
#self-publishing; governance discussion in
#governance-discussion. General conversation about OpenArx and
AI-native science in #general.
Telegram — t.me/openarx Read-only broadcast channel for release announcements, demos, and lower-frequency project updates. Good for following along without joining a live chat.
X (Twitter) — @openarx Public-facing announcements, demos, and threads on technical decisions. Where OpenArx shows up in the wider AI/dev conversation.
Reddit — /u/openarx Project account for posts in r/MachineLearning, r/LocalLLaMA, r/programming, and other relevant subs. Useful for cross-community discussion and longer-form write-ups.
Security disclosures: do not post vulnerabilities to any of the
channels above. Email security@openarx.ai (PGP available on
request); we acknowledge within 7 days.
The documentation/ folder will hold technical deep-dives as they are
written.
Apache License 2.0 — see LICENSE. Anyone may fork and run their own independent instance.
See AUTHORS for the list of project contributors and supporters.