Server data from the Official MCP Registry
MCP server for Arm64 Linux performance: parse perf output, suggest NEON intrinsics, audit deps.
Valid MCP server (2 strong, 4 medium validity signals). No known CVEs in dependencies. Imported from the Official MCP Registry. 1 finding downgraded by scanner intelligence.
13 files analyzed · 1 issue found
Security scores are indicators to help you make informed decisions, not guarantees. Always review permissions before connecting any MCP server.
This plugin requests these system permissions. Most are normal for its category.
Set these up before or after installing:
Environment variable: ARM_CODE_MCP_LOG_LEVEL
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-jean-johnson-zwix-arm-code-mcp": {
      "command": "uvx",
      "args": [
        "arm-code-mcp"
      ],
      "env": {
        "ARM_CODE_MCP_LOG_LEVEL": "your-arm-code-mcp-log-level-here"
      }
    }
  }
}

From the project's GitHub README.
An MCP server that helps AI assistants optimize Linux workloads on Arm64.
It parses perf report output, recommends NEON SIMD intrinsics for hot loops,
and audits Python dependency manifests for arm64 wheel availability —
all offline, all structured, all callable from Claude Code, GitHub Copilot, and Codex.
analyze_perf_output — parse perf report --stdio into a ranked list of hot symbols
suggest_neon_intrinsic — semantic + keyword search over 110 curated NEON intrinsics
check_arm64_deps — flag packages in requirements.txt, pyproject.toml, or Dockerfile that lack arm64 wheels or require special handling

docker pull jeannjohnson/arm-code-mcp:latest
Add to your MCP client config (e.g. ~/.claude/mcp.json):
{
  "mcpServers": {
    "arm-code-mcp": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "jeannjohnson/arm-code-mcp:latest"]
    }
  }
}
Restart your client. All three tools are now available.
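
Outside of a chat client, you can smoke-test the server directly with the MCP Python SDK. A minimal sketch, assuming the mcp package is installed and the Docker image above has been pulled:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server over stdio, the same way an MCP client does.
    params = StdioServerParameters(
        command="docker",
        args=["run", "--rm", "-i", "jeannjohnson/arm-code-mcp:latest"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect all three tool names
            result = await session.call_tool(
                "suggest_neon_intrinsic",
                {"operation_description": "32-bit float multiply-accumulate"},
            )
            print(result.content)

asyncio.run(main())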
analyze_perf_output
Parse raw perf report --stdio output and return the top hot symbols, ranked by overhead.
analyze_perf_output(
perf_report_text: str, # raw stdout of `perf report --stdio`
top_n: int = 10, # max symbols to return
min_overhead_pct: float = 0.5, # ignore symbols below this %
) -> dict
Example response:
{
"summary": {
"total_samples": 5432100,
"total_events": null,
"command": "myapp"
},
"hot_symbols": [
{"overhead_pct": 24.17, "samples": 1245, "command": "myapp",
"module": "myapp", "symbol": "process_buffer"},
{"overhead_pct": 12.34, "samples": 636, "command": "myapp",
"module": "libc-2.31.so", "symbol": "__memcpy_avx_unaligned_erms"}
],
"warnings": []
}
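
Conceptually, the parsing reduces to matching perf's columnar text output line by line. A rough sketch of that idea; the regex and column layout are illustrative assumptions (real perf report output varies with flags and headers), not the server's actual implementation:

import re

# Typical `perf report --stdio` symbol line:
#     24.17%  myapp  myapp  [.] process_buffer
LINE_RE = re.compile(
    r"^\s*(?P<overhead>\d+\.\d+)%\s+(?P<command>\S+)\s+"
    r"(?P<module>\S+)\s+\[.\]\s+(?P<symbol>.+)$"
)

def parse_hot_symbols(perf_report_text: str, min_overhead_pct: float = 0.5) -> list[dict]:
    symbols = []
    for line in perf_report_text.splitlines():
        m = LINE_RE.match(line)
        if m and float(m.group("overhead")) >= min_overhead_pct:
            symbols.append({
                "overhead_pct": float(m.group("overhead")),
                "command": m.group("command"),
                "module": m.group("module"),
                "symbol": m.group("symbol").strip(),
            })
    # Rank by overhead, hottest first.
    return sorted(symbols, key=lambda s: s["overhead_pct"], reverse=True)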
suggest_neon_intrinsic
Recommend NEON intrinsics for a hot loop using hybrid semantic + exact-name retrieval over a curated knowledge base of 110 intrinsics.
suggest_neon_intrinsic(
operation_description: str, # e.g. "32-bit float multiply-accumulate"
target_arch: str = "armv8-a", # "armv8-a" | "armv8.2-a" | "armv9-a"
top_k: int = 5,
) -> dict
Example response:
{
"matches": [
{
"intrinsic": "vmlaq_f32",
"signature": "float32x4_t vmlaq_f32(float32x4_t a, float32x4_t b, float32x4_t c)",
"header": "<arm_neon.h>",
"min_arch": "armv8-a",
"description": "Multiply-accumulate: a + (b * c), lane-wise, 4x f32.",
"score": 0.9142
}
],
"notes": "Filtered to armv8-a. KB contains 110 entries (103 compatible)."
}
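
Hybrid semantic + exact-name retrieval can be pictured as cosine similarity over description embeddings plus a boost for literal intrinsic-name hits. A simplified sketch using sentence-transformers with the same all-MiniLM-L6-v2 model the eval harness uses; the boost weight and entry fields are assumptions for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_intrinsics(query: str, kb_entries: list[dict], top_k: int = 5) -> list[dict]:
    # kb_entries: dicts with at least "intrinsic" and "description" fields.
    query_emb = model.encode(query, convert_to_tensor=True)
    doc_embs = model.encode(
        [e["description"] for e in kb_entries], convert_to_tensor=True
    )
    sims = util.cos_sim(query_emb, doc_embs)[0]
    scored = []
    for entry, sim in zip(kb_entries, sims):
        score = float(sim)
        if entry["intrinsic"] in query:
            score += 1.0  # exact-name hits jump ahead of any embedding match
        scored.append((score, entry))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [entry for _, entry in scored[:top_k]]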
check_arm64_deps
Scan a dependency manifest and flag packages with known arm64 compatibility issues. Fully offline — no network calls, fast, deterministic.
check_arm64_deps(
file_content: str, # raw text of the manifest
file_type: str = "requirements.txt", # "requirements.txt" | "pyproject.toml" | "Dockerfile"
) -> dict
Example response:
{
"checked": ["numpy", "tensorflow", "cupy-cuda12x", "faiss-cpu", "requests"],
"issues": [
{"package": "cupy-cuda12x", "severity": "error",
"message": "GPU-only package with no arm64 wheel. Use cupy with ROCm or a CPU fallback."},
{"package": "tensorflow", "severity": "warning",
"message": "Official TensorFlow PyPI wheels are x86-only before 2.10; use tensorflow-aarch64 or build from source."},
{"package": "faiss-cpu", "severity": "warning",
"message": "No official arm64 wheel on PyPI; build from source or use the conda-forge package."},
{"package": "numpy", "severity": "info",
"message": "arm64 wheels available from PyPI since 1.21.0. Ensure version >= 1.21.0."}
],
"summary": "Checked 5 package(s): 1 error(s), 2 warning(s), 1 info(s)."
}
Severity levels:
| Level | Meaning |
|---|---|
| error | No arm64 wheel exists (e.g. GPU-only packages) |
| warning | Wheel exists but requires a workaround or alternative source |
| info | Wheel available; version constraint or system-lib note applies |
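
Because the check is fully offline, it reduces to matching parsed package names against a bundled table keyed by the severities above. A minimal sketch for the requirements.txt case; the KNOWN_ISSUES entries here are illustrative stand-ins, not the server's actual database:

import re

# Illustrative subset; the real server bundles a much larger table.
KNOWN_ISSUES = {
    "cupy-cuda12x": ("error", "GPU-only package with no arm64 wheel."),
    "tensorflow": ("warning", "x86-only PyPI wheels before 2.10; use tensorflow-aarch64."),
    "numpy": ("info", "arm64 wheels on PyPI since 1.21.0."),
}

def check_requirements(file_content: str) -> dict:
    checked, issues = [], []
    for line in file_content.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        # Strip extras and version specifiers: "numpy>=1.21" -> "numpy".
        name = re.split(r"[\[<>=!~;@ ]", line)[0].strip().lower()
        checked.append(name)
        if name in KNOWN_ISSUES:
            severity, message = KNOWN_ISSUES[name]
            issues.append({"package": name, "severity": severity, "message": message})
    return {"checked": checked, "issues": issues}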
All env vars are optional. The server works with no configuration.
| Variable | Default | Description |
|---|---|---|
| ARM_CODE_MCP_LOG_LEVEL | INFO | Log verbosity: DEBUG, INFO, WARNING |
| ARM_CODE_MCP_KB_PATH | bundled JSONL | Override path to neon_intrinsics.jsonl |
| ARM_CODE_MCP_CACHE_DIR | ~/.cache/arm-code-mcp | Embedding cache directory |
Pass env vars to the container:
docker run --rm -i \
-e ARM_CODE_MCP_LOG_LEVEL=DEBUG \
jeannjohnson/arm-code-mcp:latest
suggest_neon_intrinsic is evaluated against 15 hand-curated (query, expected intrinsic) pairs
using the real all-MiniLM-L6-v2 embedding model. Current baseline:
| Metric | Score |
|---|---|
| hit@1 | 0.667 |
| hit@3 | 0.933 |
| hit@5 | 1.000 |
| MRR | 0.817 |
The regression guard exits non-zero if hit@3 drops below 0.70.
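
For reference, hit@k and MRR over the gold pairs reduce to a few lines of arithmetic; a sketch, not the harness's actual code:

def evaluate(gold_pairs, rank_fn, ks=(1, 3, 5)):
    # gold_pairs: (query, expected_intrinsic) tuples.
    # rank_fn(query) -> ranked list of intrinsic names.
    hits = {k: 0 for k in ks}
    reciprocal_ranks = []
    for query, expected in gold_pairs:
        ranking = rank_fn(query)
        rank = ranking.index(expected) + 1 if expected in ranking else None
        for k in ks:
            if rank is not None and rank <= k:
                hits[k] += 1
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    n = len(gold_pairs)
    metrics = {f"hit@{k}": hits[k] / n for k in ks}
    metrics["mrr"] = sum(reciprocal_ranks) / n
    return metrics

With 15 gold pairs, hit@3 = 0.933 means the expected intrinsic landed in the top three for 14 of the 15 queries.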
Run the eval harness locally:
uv sync
make eval
See eval/README.md for methodology and known limitations.
git clone https://github.com/jean-johnson-zwix/arm-code-mcp
cd arm-code-mcp
uv sync
make test # 78 tests
make lint # ruff check + format
make eval # real model, 15 gold queries
Makefile targets:
| Target | Description |
|---|---|
| make setup | uv sync + pre-commit install |
| make test | Run the full test suite |
| make lint | ruff check + ruff format --check |
| make eval | Run the NEON retrieval eval harness |
| make docker-build | Build arm-code-mcp:dev locally |
| make docker-run | Run the local dev image over stdio |
Multi-arch images (linux/amd64 + linux/arm64) are built and pushed automatically
by .github/workflows/release.yml on v*.*.* tags.
The NEON intrinsics knowledge base lives in src/arm_code_mcp/kb/data/neon_intrinsics.jsonl
(110 entries). To add intrinsics or refresh after a model upgrade, see docs/kb-refresh.md.
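
Each KB entry appears to mirror the fields returned by suggest_neon_intrinsic. A sketch of appending one entry; the schema here is an assumption inferred from the response format above, so consult docs/kb-refresh.md before editing:

import json

# Hypothetical entry using the fields seen in suggest_neon_intrinsic responses.
entry = {
    "intrinsic": "vaddq_f32",
    "signature": "float32x4_t vaddq_f32(float32x4_t a, float32x4_t b)",
    "header": "<arm_neon.h>",
    "min_arch": "armv8-a",
    "description": "Add two vectors lane-wise, 4x f32.",
}

with open("src/arm_code_mcp/kb/data/neon_intrinsics.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")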
Tools
parse_flamegraph — extract hot paths from Linux perf flamegraph SVGs
suggest_sve2_intrinsic — extend retrieval to SVE2 intrinsics (Neoverse V2, Cortex-X4)

Eval
Coming soon.
Stars, forks, and issues are welcome. Open a PR or file an issue on GitHub.
Good first issues:
Add intrinsics to kb/data/neon_intrinsics.jsonl
parse_flamegraph tool for Linux perf flamegraph SVG files

License: Apache 2.0 — same as arm/mcp.