Server data from the Official MCP Registry
Validate and test llguidance grammars with batch testing and documentation
Valid MCP server (2 strong, 1 medium validity signals). 2 known CVEs in dependencies (0 critical, 1 high severity). Package registry verified. Imported from the Official MCP Registry.
Add this to your MCP configuration file:
{
  "mcpServers": {
    "io-github-guidance-ai-guidance-lark-mcp": {
      "args": [
        "guidance-lark-mcp"
      ],
      "command": "uvx"
    }
  }
}
From the project's GitHub README.
MCP server for validating and testing llguidance grammars (Lark format). Provides grammar validation, batch test execution, and syntax documentation — ideal for iteratively building grammars with AI coding assistants.
uvx guidance-lark-mcp
pip install guidance-lark-mcp
cd mcp-grammar-tools
pip install -e .
You can add the server using the interactive /mcp add command or by editing the config file directly. See the Copilot CLI MCP documentation for full details.
Option 1: Interactive setup
In the Copilot CLI, run /mcp add, select Local/STDIO, and enter uvx guidance-lark-mcp as the command.
Option 2: Edit config file
Add the following to ~/.copilot/mcp-config.json:
{
  "mcpServers": {
    "grammar-tools": {
      "type": "local",
      "command": "uvx",
      "args": ["guidance-lark-mcp"],
      "tools": ["*"]
    }
  }
}
This gives you grammar validation and batch testing out of the box. To also enable LLM-powered generation (generate_with_grammar), add ENABLE_GENERATION and your credentials to env:
"env": {
  "ENABLE_GENERATION": "true",
  "OPENAI_API_KEY": "your-key-here"
}
For Azure OpenAI (with Entra ID via az login), use guidance-lark-mcp[azure] and set the endpoint instead:
"args": ["guidance-lark-mcp[azure]"],
"env": {
  "ENABLE_GENERATION": "true",
  "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
  "OPENAI_MODEL": "your-deployment-name"
}
See Backend Configuration for all supported backends.
After saving, use /mcp show to verify the server is connected.
{
  "mcpServers": {
    "grammar-tools": {
      "type": "local",
      "command": "uvx",
      "args": ["guidance-lark-mcp"],
      "env": {
        "ENABLE_GENERATION": "true",
        "OPENAI_API_KEY": "your-key-here"
      },
      "tools": ["*"]
    }
  }
}
{
  "mcpServers": {
    "grammar-tools": {
      "command": "uvx",
      "args": ["guidance-lark-mcp"],
      "env": {
        "ENABLE_GENERATION": "true",
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}
validate_grammar — Validate grammar completeness and consistency using llguidance's built-in validator.
{"grammar": "start: \"hello\" \"world\""}
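As a sketch of how an MCP client reaches this tool on the wire, the helper below builds the standard JSON-RPC `tools/call` request for `validate_grammar`. The tool name and argument shape come from the example above; the helper function itself is illustrative, not part of this package.

```python
import json

def make_validate_request(grammar: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for the validate_grammar tool,
    following the standard MCP tool-invocation shape."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "validate_grammar",
            "arguments": {"grammar": grammar},
        },
    }
    return json.dumps(payload)

# The request corresponding to the example arguments above.
req = make_validate_request('start: "hello" "world"')
```

In practice your MCP client library constructs this for you; the sketch only shows what crosses the STDIO transport.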
run_batch_validation_tests — Run batch validation tests from a JSON file against a grammar. Returns pass/fail statistics and detailed failure info.
{
  "grammar": "start: /[0-9]+/",
  "test_file": "tests.json"
}
Test file format:
[
  {"input": "123", "should_parse": true, "description": "Valid number"},
  {"input": "abc", "should_parse": false, "description": "Not a number"}
]
get_llguidance_documentation — Fetch the llguidance grammar syntax documentation from the official repo.
generate_with_grammar (optional, requires ENABLE_GENERATION=true) — Generate text using an OpenAI model constrained by a grammar. Uses the Responses API with custom tool grammar format, so output is guaranteed to conform to the grammar. Requires OPENAI_API_KEY environment variable. See Backend Configuration for Azure and other endpoints.
The generate_with_grammar tool uses the OpenAI Python SDK, which natively supports multiple backends via environment variables:
| Backend | Required env vars | Optional env vars |
|---|---|---|
| OpenAI (default) | OPENAI_API_KEY | OPENAI_MODEL |
| Azure OpenAI (API key) | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY | AZURE_OPENAI_API_VERSION, OPENAI_MODEL |
| Azure OpenAI (Entra ID) | AZURE_OPENAI_ENDPOINT + az login | AZURE_OPENAI_API_VERSION, OPENAI_MODEL |
| Custom endpoint | OPENAI_API_KEY, OPENAI_BASE_URL | OPENAI_MODEL |
The server auto-detects which backend to use:
- If AZURE_OPENAI_ENDPOINT is set, it uses the AzureOpenAI client (with Entra ID or an API key).
- Otherwise, it uses the standard OpenAI client (which reads OPENAI_API_KEY and OPENAI_BASE_URL automatically).
The server logs which backend it detects on startup.
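The precedence rule above can be sketched as a small selector. This is an illustration of the documented behavior only; the function name and the "azure"/"openai" labels are made up, and the server's actual detection code may differ.

```python
def detect_backend(env: dict) -> str:
    """Mirror the documented precedence: an Azure endpoint wins,
    otherwise fall back to the standard OpenAI client."""
    if env.get("AZURE_OPENAI_ENDPOINT"):
        # AzureOpenAI client; auth via AZURE_OPENAI_API_KEY or Entra ID (az login)
        return "azure"
    # OpenAI client; honors OPENAI_API_KEY and OPENAI_BASE_URL automatically
    return "openai"
```

Note that with this rule, setting AZURE_OPENAI_ENDPOINT takes effect even if OPENAI_API_KEY is also present.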
{
  "mcpServers": {
    "grammar-tools": {
      "type": "local",
      "command": "uvx",
      "args": ["guidance-lark-mcp"],
      "env": {
        "ENABLE_GENERATION": "true",
        "AZURE_OPENAI_ENDPOINT": "https://my-resource.openai.azure.com",
        "AZURE_OPENAI_API_KEY": "your-azure-key",
        "OPENAI_MODEL": "gpt-4.1"
      },
      "tools": ["*"]
    }
  }
}
Requires az login and the azure extra: pip install guidance-lark-mcp[azure]
{
  "mcpServers": {
    "grammar-tools": {
      "type": "local",
      "command": "uvx",
      "args": ["guidance-lark-mcp[azure]"],
      "env": {
        "ENABLE_GENERATION": "true",
        "AZURE_OPENAI_ENDPOINT": "https://my-resource.openai.azure.com",
        "OPENAI_MODEL": "gpt-4.1"
      },
      "tools": ["*"]
    }
  }
}
Build a grammar iteratively with an AI assistant:
1. Use validate_grammar to check for missing rules.
2. Use run_batch_validation_tests to find failures.
The examples/ directory includes sample grammars built using these tools, with Lark grammar files, test suites, and documentation.
Server fails to connect in Copilot CLI / VS Code?
MCP clients like Copilot CLI only show "Connection closed" when a server crashes on startup. To see the actual error, run the server directly in your terminal:
uvx guidance-lark-mcp
Or with generation enabled:
ENABLE_GENERATION=true OPENAI_API_KEY=your-key uvx guidance-lark-mcp
Common issues:
- Setting ENABLE_GENERATION=true without a valid OPENAI_API_KEY or AZURE_OPENAI_ENDPOINT. The server will still start and serve validation tools; generate_with_grammar will return a descriptive error.
- For Azure with Entra ID, make sure you have run az login and are using guidance-lark-mcp[azure] (not the base package).
- uvx needs to resolve and install dependencies on first run, which may exceed the MCP client's connection timeout. Run uvx guidance-lark-mcp once manually to warm the cache.
- uvx caches packages, so after a new release you may need to clear the cache and restart your MCP client:
uv cache clean guidance-lark-mcp
git clone https://github.com/guidance-ai/guidance-lark-mcp
cd guidance-lark-mcp
uv sync
uv run pytest tests/ -q