# MCP Server: Generate Manual Test Cases
This MCP server provides the `generate_testcases` tool, which generates manual test cases from documentation and rules you supply.
## Installation

```bash
npm install
npm run build
```
## Running the server

- Production: `npm start` (runs `node dist/index.js`)
- Development: `npm run dev` (runs with ts-node)
The server communicates over stdio (stdin/stdout) and is intended to be started by Cursor or another MCP client.
## Configuring in Cursor

Add to your MCP config (e.g. Cursor: Settings → MCP, or `~/.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "manual-testcases": {
      "command": "node",
      "args": ["/PATH/TO/PROJECT/dist/index.js"]
    }
  }
}
```
Example with a real path:

```json
{
  "mcpServers": {
    "manual-testcases": {
      "command": "node",
      "args": ["/Users/huenguyen/Desktop/hue-data/hue-data/workspace/mcp-manual-testcases/dist/index.js"]
    }
  }
}
```
After adding this, Cursor will expose the `generate_testcases` tool from this server.
## Tool: `generate_testcases`

Generates test cases from documentation and rules.

### Parameters
| Parameter | Required | Description |
|---|---|---|
| `document_content` | No | Document content (text). Omit if using `document_path`. |
| `document_path` | No | Path to the document file (txt, md, or PDF). Prefer when available. |
| `output_format` | No | `"markdown"` (default) or `"csv"`. CSV uses columns: 模块,标题,前置条件,步骤描述,预期结果,test1测试人员,test1测试结果,buglink,PRE测试人员,PRE测试结果,buglink. |
| `rules` | Yes | Rules for generating test cases (format, priority, scope, language, etc.). |
| `use_llm` | No | Default `true`. If `true` and the client supports sampling, the LLM is used to generate test cases; if `false`, only the formatted prompt is returned. |
| `max_tokens` | No | Max tokens for the LLM response (default 4096). |
Note: At least one of `document_content` or `document_path` is required.
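For illustration, the arguments an MCP client might pass when invoking the tool could be shaped like this (the exact request wrapper follows the client's `tools/call` format; the path and rule text here are hypothetical):

```json
{
  "name": "generate_testcases",
  "arguments": {
    "document_path": "docs/feature-login.md",
    "rules": "Table format, English, P1 for happy path",
    "output_format": "csv",
    "use_llm": true,
    "max_tokens": 4096
  }
}
```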
## How it works

- Read document: from `document_content`, or by reading the file at `document_path`. PDF files are supported (text is extracted automatically).
- Combine with rules: build a prompt from the document plus the rules.
- Generate test cases:
  - If `use_llm === true` and the client (e.g. Cursor) supports sampling (LLM): the server sends a sampling request to the client so the LLM generates the test cases and returns the result.
  - Otherwise (the client does not support sampling, or `use_llm === false`): the server returns the formatted prompt for you to copy and use with an external LLM.
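The prompt-assembly step can be sketched as a small TypeScript helper. This is a minimal illustration, not the server's actual internals: the `buildPrompt` name, the section headings, and the image-list handling are all assumptions.

```typescript
// Hypothetical sketch of the "combine with rules" step.
// Joins the document text, the user-supplied rules, and (optionally)
// a list of prototype image paths into one prompt string.
function buildPrompt(
  documentText: string,
  rules: string,
  imageList: string[] = []
): string {
  const parts = [
    "You are a QA engineer. Generate manual test cases from the document below.",
    "## Rules",
    rules,
    "## Document",
    documentText,
  ];
  if (imageList.length > 0) {
    // When prototype screenshots sit next to the doc, list them so
    // generated test cases can reference them.
    parts.push("## Prototype images", imageList.map((p) => `- ${p}`).join("\n"));
  }
  return parts.join("\n\n");
}
```

In the non-sampling path, a string like this is exactly what the tool returns for you to paste into an external LLM.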
## Example rules
- "Output test cases as a table: ID, Description, Preconditions, Steps, Expected result, Priority."
- "Priority P1 for login/payment flows; P2 for secondary screens."
- "Output language: English."
- "Each scenario has at most 10 steps; split into multiple scenarios if more complex."
## Example usage in Cursor

You can ask the AI in Cursor, for example:

- "Use the generate_testcases tool: document_path is `docs/feature-login.md`, rules are 'Table format, English, P1 for happy path'."
- Or paste document content and use `document_content` with `rules`.
## Generate test cases locally (script)

The script reads the document (txt, md, or PDF) and the rules, then either calls OpenAI to generate test cases or prints the formatted prompt.
Environment variables:

| Variable | Description |
|---|---|
| `DOC_PATH` | Path to the requirement document (default: `samples/doc-example.md`). PDF is supported. |
| `RULES_PATH` | Path to the rules file (default: `samples/rules-example.txt`). |
| `OUTPUT_FORMAT` | `prompt` (default) or `csv`. |
| `OUTPUT_FILE` | Output file path. For CSV, the default is `samples/generated-testcases.csv`. |
| `OPENAI_API_KEY` | If set, the script calls OpenAI to generate the test cases. |
Examples:

```bash
# Print the formatted prompt (no API key needed)
npm run generate-testcases

# Read the requirement from a PDF and output CSV (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-your-key
export DOC_PATH=path/to/requirements.pdf
export RULES_PATH=samples/rules-example.txt
export OUTPUT_FORMAT=csv
export OUTPUT_FILE=samples/generated-testcases.csv
npm run generate-testcases
```
CSV columns match the rules format: 模块, 标题, 前置条件, 步骤描述, 预期结果, test1测试人员, test1测试结果, buglink, PRE测试人员, PRE测试结果, buglink.
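Emitting a row with those columns can be sketched as below. This is an illustrative implementation, not the script's actual code: the function names are hypothetical, and the escaping follows the common RFC 4180 convention (quote fields containing commas, quotes, or newlines; double embedded quotes).

```typescript
// The 11 CSV columns used by the rules format (buglink appears twice,
// once per test round).
const CSV_HEADER = [
  "模块", "标题", "前置条件", "步骤描述", "预期结果",
  "test1测试人员", "test1测试结果", "buglink",
  "PRE测试人员", "PRE测试结果", "buglink",
];

// RFC 4180-style escaping: wrap in quotes when the field contains
// a comma, quote, or newline, and double any embedded quotes.
function escapeCsvField(value: string): string {
  if (/[",\n]/.test(value)) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}

function toCsvRow(fields: string[]): string {
  return fields.map(escapeCsvField).join(",");
}
```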
## Quick try with samples

The `samples/` folder contains:

- `doc-example.md` — sample requirement doc (中控后台PC, sections 6.1–6.4). It references prototype images in `Images/` (image1.png–image4.png).
- `images/` (or `Images/`) — prototype screenshots. When present next to the doc, the script and MCP prompt include the image list so generated test cases can reference them (e.g. "参照原型 Images/image1.png", i.e. "refer to prototype Images/image1.png").
- `rules-example.txt` — sample rules (table columns, 模块 format 代理后台-AGBE/…, language, strict/quality rules).
Generate test cases from doc + images + rules (MCP or script):

- MCP: call the `generate_testcases` tool with `document_path: samples/doc-example.md`, `rules`: content from `samples/rules-example.txt`, and optionally `output_format: "csv"` or `"markdown"`.
- Script: `DOC_PATH=samples/doc-example.md RULES_PATH=samples/rules-example.txt OUTPUT_FORMAT=both npm run generate-testcases` (set `OPENAI_API_KEY` for LLM generation).
## Project structure

```
mcp-manual-testcases/
├── src/
│   └── index.ts           # MCP server + generate_testcases tool
├── samples/
│   ├── doc-example.md     # Sample requirement (references Images/)
│   ├── images/            # Prototype images (image1.png …)
│   ├── rules-example.txt  # Sample rules
│   ├── generated-testcases.md
│   └── generated-testcases.csv
├── dist/                  # Build output (after npm run build)
├── package.json
├── tsconfig.json
└── README.md
```
## License

ISC