Deterministic SHACL-based assessment and comparison for digital passport data models.
A Python toolkit and thin CLI for SHACL-based assessment of digital passport data models, pairwise comparison of composed solutions, and SHACL-only use-case coverage analysis.
This README is primarily a repository and development entry point. The user-facing conceptual and usage reference is at the published documentation site. The repository-source docs remain available in docs/.
- `dpawb`
- `dpawb-mcp`

The tool is designed primarily as an agent-usable analytical toolkit with a thin CLI. The core command surface is:
- `assess`
- `coverage`
- `compare`
- `prioritize`
- `schema`
- `vocabulary`
- `template`
- `capabilities`
- `summarize`

The intended primary integration mode is an AI agent orchestrating the analytical pipeline through the package API or the CLI. In practice, this means:
The CLI remains useful for direct human invocation, but the main product shape is an analytical engine that can be called by agent skills, workflow runners, or tool adapters.
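As an illustration of that integration mode, a tool adapter can assemble deterministic CLI invocations for an agent to execute. The `dpawb_command` helper below is a hypothetical sketch (it is not part of the package); it only builds argv lists and does not run anything:

```python
import shlex

def dpawb_command(subcommand, **options):
    """Assemble an argv list for the dpawb CLI (hypothetical adapter helper)."""
    argv = ["dpawb", subcommand]
    for key, value in options.items():
        # Long-option names use dashes, e.g. use_case -> --use-case.
        argv += ["--" + key.replace("_", "-"), str(value)]
    return argv

cmd = dpawb_command(
    "coverage",
    profile="fixtures/profiles/synthetic_evolution_latest.yaml",
    use_case="fixtures/use_cases/product_identity_lookup.yaml",
)
print(shlex.join(cmd))
```

A workflow runner could pass each assembled list to `subprocess.run(cmd, check=True)`; building the argv explicitly (rather than formatting a shell string) keeps file paths safe from shell interpretation.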
The package also ships a thin stdio MCP server over the same deterministic command surface:
- `assess`
- `coverage`
- `compare`
- `prioritize`
- `schema`
- `vocabulary`
- `template`
- `capabilities`
- `summarize`

The MCP surface stays intentionally thin:
- `dpawb` as the core Python package
- `dpawb-mcp`

The MCP runtime and publication shape are documented in the published MCP server page.
The public Python API is documented in the published API reference.
The MCP server is published with this identity:
- Name: `io.github.CE-RISE-software/dpawb`
- Image: `ghcr.io/ce-rise-software/dpawb-mcp:<release-version>`
- Transport: `stdio`
- Registry: `https://registry.modelcontextprotocol.io/`

The server is discoverable in the official MCP Registry under the name `io.github.CE-RISE-software/dpawb`.
Registry discovery page:
https://registry.modelcontextprotocol.io/?q=dpawb

Local client configuration example:
```json
{
  "mcpServers": {
    "dpawb": {
      "command": "dpawb-mcp"
    }
  }
}
```
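For clients that prefer running the published OCI image instead of a locally installed command, a configuration along the following lines should work. This is a sketch: the exact `args` shape depends on your MCP client, and `<release-version>` must be replaced with a concrete published tag:

```json
{
  "mcpServers": {
    "dpawb": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "ghcr.io/ce-rise-software/dpawb-mcp:<release-version>"]
    }
  }
}
```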
OCI-oriented registry metadata is declared in server.json. The publish workflow fills in the release version and image tag when publishing.
```shell
python -m venv .venv
. .venv/bin/activate
pip install -e .
```
Published package name:
```shell
pip install dpawb
```
The installed MCP server command is:
```shell
dpawb-mcp
```
If you are working in a restricted environment, the package is configured to build with setuptools so editable installs do not depend on fetching an extra build backend.
If build isolation or wheel support is unavailable locally, use:
```shell
python -m pip install --no-build-isolation -e .
```
In constrained environments where editable installation is blocked by local Python packaging tooling, the repository still supports a repo-native execution path:
```shell
./scripts/run-local.sh capabilities
./scripts/test-local.sh
make smoke
make test
make validate
```
Minimal local MCP runtime command:
```shell
python -m dpawb.mcp_server
```
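Once the server process is running, a client drives it over stdin/stdout with JSON-RPC. The sketch below only builds the standard MCP `initialize` request; the envelope shape follows the MCP specification, and the `clientInfo` values are placeholders:

```python
import json

# First message an MCP client writes to the server's stdin (stdio transport).
# "2024-11-05" is one published MCP protocol revision; newer revisions exist.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.0.0"},
    },
}
line = json.dumps(initialize)
print(line)
```

In practice your MCP client library sends this handshake for you; the sketch is only meant to show what travels over the stdio channel.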
```shell
dpawb assess --profile fixtures/profiles/synthetic_evolution_latest.yaml
dpawb coverage --profile fixtures/profiles/synthetic_evolution_latest.yaml --use-case fixtures/use_cases/product_identity_lookup.yaml
dpawb compare --left left_assessment.json --right right_assessment.json
dpawb summarize --result comparison_result.json --format markdown
dpawb capabilities
```
Before wiring PyPI CI/CD, run the local packaging check in an environment with `wheel` available:

```shell
make release-check
```
This builds a wheel and sdist, installs the wheel into a clean temporary environment, and runs installed CLI smoke checks.
There are two distinct input areas in this repository:
- `fixtures/`: Synthetic, repository-local test data only. These files are used by tests, smoke checks, and CI validation.
- `examples/01-source-ingestion/`: Live-source example inputs intended for manual runs. These are the first step in the tutorial progression and are not part of CI validation.
The full examples tree is organized by analytical task, not by input data model:
- `examples/01-source-ingestion/`: Load and assess declared profiles.
- `examples/02-structural-comparison/`: Compare two profile assessment results.
- `examples/03-reduced-use-case-comparison/`: Run the first aligned use-case comparison.
- `examples/04-extended-use-case-comparison/`: Run a broader aligned use-case comparison.

Each example is usable by humans as a step-by-step command tutorial and by AI agents as a deterministic recipe over explicit files.
Example profiles for live SHACL sources are included at:
- `examples/01-source-ingestion/profiles/battery_dpp_representation_live.yaml`
- `examples/01-source-ingestion/profiles/battery_product_identification_live.yaml`
- `examples/01-source-ingestion/profiles/dp_record_metadata_live.yaml`
- `examples/01-source-ingestion/profiles/traceability_and_life_cycle_events_live.yaml`
- `examples/01-source-ingestion/profiles/metadata_focused_composition_live.yaml`
- `examples/01-source-ingestion/profiles/metadata_and_traceability_live.yaml`
- `examples/02-structural-comparison/profiles/metadata_slice_left_live.yaml`
- `examples/02-structural-comparison/profiles/metadata_slice_right_live.yaml`

If you want a single live source, the metadata-oriented example is the main starting point:
```shell
./scripts/run-local.sh assess --profile examples/01-source-ingestion/profiles/dp_record_metadata_live.yaml
```
If you want a composed profile, use:
```shell
./scripts/run-local.sh assess --profile examples/01-source-ingestion/profiles/metadata_and_traceability_live.yaml
```
You can also run the traceability-only example:
```shell
./scripts/run-local.sh assess --profile examples/01-source-ingestion/profiles/traceability_and_life_cycle_events_live.yaml
```
For manual coverage runs, example use cases are included at:
- `examples/01-source-ingestion/use_cases/battery_dpp_representation.yaml`
- `examples/01-source-ingestion/use_cases/battery_product_identification.yaml`
- `examples/01-source-ingestion/use_cases/battery_passport_metadata_and_classification.yaml`
- `examples/01-source-ingestion/use_cases/record_identity_lookup.yaml`
- `examples/01-source-ingestion/use_cases/provenance_actor_lookup.yaml`

Example:
```shell
./scripts/run-local.sh coverage \
  --profile examples/01-source-ingestion/profiles/dp_record_metadata_live.yaml \
  --use-case examples/01-source-ingestion/use_cases/record_identity_lookup.yaml
```
The main real comparison-driver use case is:
`examples/01-source-ingestion/use_cases/battery_dpp_representation.yaml`

It intentionally stays narrow. It requires:
and the joins needed to treat those as one battery-DPP representation slice.
The matching starting composition for that use case is:
`examples/01-source-ingestion/profiles/battery_dpp_representation_live.yaml`

It currently composes:

- `dp_record_metadata`
- `traceability_and_life_cycle_events`

This is the current broader baseline for the battery-DPP comparison work.
For the first reduced real pass, the narrower identity-focused comparison slice is:
- `examples/03-reduced-use-case-comparison/`
- `examples/03-reduced-use-case-comparison/use_cases/use_case.yaml`
- `examples/03-reduced-use-case-comparison/profiles/left_profile.yaml`
- `examples/03-reduced-use-case-comparison/profiles/right_profile.yaml`

This reduced slice composes:

- `dp_record_metadata`
- `product_profile`
- `traceability_and_life_cycle_events`

and is the first validated product-identification comparison slice against the BatteryPass General Product Information model.
A second broader validated slice is also included:
- `examples/04-extended-use-case-comparison/`
- `examples/04-extended-use-case-comparison/use_cases/use_case.yaml`
- `examples/04-extended-use-case-comparison/profiles/left_profile.yaml`
- `examples/04-extended-use-case-comparison/profiles/right_profile.yaml`

This slice adds passport version/revision and battery type/classification while keeping the same CE-RISE composed model set.
The two current cross-ecosystem validation notes are:
- `examples/03-reduced-use-case-comparison/notes/comparison_note.md`
- `examples/04-extended-use-case-comparison/notes/comparison_note.md`

The step-by-step user reference for these examples is in the published example applications guide.
For manual comparison runs, a comparison-ready live pair is included with the same declared scope label:
- `examples/02-structural-comparison/profiles/metadata_slice_left_live.yaml`
- `examples/02-structural-comparison/profiles/metadata_slice_right_live.yaml`
- `examples/02-structural-comparison/alignments/metadata_slice_alignment.yaml` as a starting-point alignment example

Typical flow:
```shell
./scripts/run-local.sh assess --profile examples/02-structural-comparison/profiles/metadata_slice_left_live.yaml --output /tmp/left.json
./scripts/run-local.sh assess --profile examples/02-structural-comparison/profiles/metadata_slice_right_live.yaml --output /tmp/right.json
./scripts/run-local.sh compare --left /tmp/left.json --right /tmp/right.json
```
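With the packaged CLI installed (`pip install dpawb`), the same three-step flow can also be scripted from Python. The sketch below only assembles the commands; to actually execute them, pass each one to `subprocess.run(step, check=True)`:

```python
left = "examples/02-structural-comparison/profiles/metadata_slice_left_live.yaml"
right = "examples/02-structural-comparison/profiles/metadata_slice_right_live.yaml"

# Assess both sides, then compare the two assessment results.
steps = [
    ["dpawb", "assess", "--profile", left, "--output", "/tmp/left.json"],
    ["dpawb", "assess", "--profile", right, "--output", "/tmp/right.json"],
    ["dpawb", "compare", "--left", "/tmp/left.json", "--right", "/tmp/right.json"],
]
print(len(steps))
```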
With an explicit analyst-authored alignment:
```shell
./scripts/run-local.sh compare \
  --left /tmp/left.json \
  --right /tmp/right.json \
  --alignment examples/02-structural-comparison/alignments/metadata_slice_alignment.yaml
```
When an alignment file is provided, the comparison result now includes two alignment-oriented views:
- `evaluated_pairs`: Full per-pair presence status for every declared equivalence.
- `ranked_alignment_observations`: Review-oriented gaps for any declared pair that is only present on one side or missing on both sides.
So the main things to inspect in an alignment-aware comparison result are:
- `alignment_coverage_ratio`
- `evaluated_pairs`
- `ranked_alignment_observations`

If that comparison result is then passed into `prioritize`, those alignment gaps can also appear directly as ranked improvement targets.
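As an illustration of reading these fields, the fragment below models a result with one declared pair missing on one side. The top-level field names follow this README; the per-pair keys (`left`, `right`, `status`) are assumptions about the payload shape, not the documented schema:

```python
# Hypothetical alignment-aware comparison result (inner shape assumed).
result = {
    "alignment_coverage_ratio": 0.5,
    "evaluated_pairs": [
        {"left": "dpp:recordId", "right": "bp:productId", "status": "present_both"},
        {"left": "dpp:issuedAt", "right": "bp:issueDate", "status": "left_only"},
    ],
    "ranked_alignment_observations": [
        {"left": "dpp:issuedAt", "right": "bp:issueDate", "status": "left_only"},
    ],
}

# Gaps are declared pairs that are not present on both sides.
gaps = [p for p in result["evaluated_pairs"] if p["status"] != "present_both"]
print(len(gaps))
```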
The current analytical core is still conservative by design, but it now goes beyond token matching alone:
- `src/dpawb/`: package, CLI, and analytical operations
- `src/dpawb/data/`: bundled schemas, vocabularies, and templates
- `fixtures/`: synthetic repository-local models, profiles, use cases, and alignments for tests only
- `examples/01-source-ingestion/`: source-ingestion profiles, use cases, and alignments for manual runs
- `examples/03-reduced-use-case-comparison/`: self-contained aligned use-case comparison examples
- `scripts/`: repo-native execution and test helpers
- `.github/workflows/validate.yml` and `.forgejo/workflows/validate.yml`: CI validation via the repo-native path

Licensed under the European Union Public Licence v1.2 (EUPL-1.2).
This repository is maintained on Codeberg — the canonical source of truth. The GitHub repository is a read mirror used for release archival and Zenodo integration. Issues and pull requests should be opened on Codeberg.
Funded by the European Union under Grant Agreement No. 101092281 — CE-RISE.
Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or the granting authority (HADEA).
Neither the European Union nor the granting authority can be held responsible for them.
© 2026 CE-RISE consortium.
Licensed under the European Union Public Licence v1.2 (EUPL-1.2).
Attribution: CE-RISE project (Grant Agreement No. 101092281) and the individual authors/partners as indicated.
Developed by NILU (Riccardo Boero — ribo@nilu.no) within the CE-RISE project.