mirror of
https://github.com/jmagar/unraid-mcp.git
synced 2026-03-23 12:39:24 -07:00
chore: reorganize test scripts, add destructive action tests, fix rclone bug
- Move scripts/test-tools.sh and scripts/test-actions.sh → tests/mcporter/
- Fix PROJECT_DIR path in test-tools.sh (SCRIPT_DIR/.. → SCRIPT_DIR/../..)
- Add tests/mcporter/test-destructive.sh: 2 live + 13 skipped destructive tests
  - stdio transport (no running server required)
  - notifications:delete (create→list→delete), keys:delete (create→delete→verify)
  - 3 new skips: createDockerFolder/updateSshSettings/createRCloneRemote not in API
  - Requires --confirm flag; dry-run by default
- Add tests/mcporter/README.md documenting both scripts and coverage
- Rewrite docs/DESTRUCTIVE_ACTIONS.md: merge test guide, all 15 actions with commands
- Delete docs/test-actions.md (merged into tests/mcporter/README.md)
- Fix rclone.py create_remote: send "parameters" not "config" (API field name)
- Update README.md and CLAUDE.md: 11 tools/~104 actions, new script paths
- Add AGENTS.md and GEMINI.md symlinks to CLAUDE.md
- Bump version 0.4.3 → 0.4.4

Co-authored-by: Claude <noreply@anthropic.com>
151
tests/mcporter/README.md
Normal file
@@ -0,0 +1,151 @@
# mcporter Integration Tests

Live integration smoke-tests for the unraid-mcp server, exercising real API calls via [mcporter](https://github.com/mcporter/mcporter).

---

## Two Scripts, Two Transports

| | `test-tools.sh` | `test-actions.sh` |
|---|-----------------|-------------------|
| **Transport** | stdio | HTTP |
| **Server required** | No — launched ad-hoc per call | Yes — must be running at `$MCP_URL` |
| **Flags** | `--timeout-ms N`, `--parallel`, `--verbose` | positional `[MCP_URL]` |
| **Coverage** | 10 tools (read-only actions only) | 11 tools (all non-destructive actions) |
| **Use case** | CI / offline local check | Live server smoke-test |

### `test-tools.sh` — stdio, no running server needed

```bash
./tests/mcporter/test-tools.sh                     # sequential, 25s timeout
./tests/mcporter/test-tools.sh --parallel          # parallel suites
./tests/mcporter/test-tools.sh --timeout-ms 10000  # tighter timeout
./tests/mcporter/test-tools.sh --verbose           # print raw responses
```

Launches `uv run unraid-mcp-server` in stdio mode for each tool call. Requires `mcporter`, `uv`, and `python3` in `PATH`. Good for CI pipelines — no persistent server process needed.

### `test-actions.sh` — HTTP, requires a live server

```bash
./tests/mcporter/test-actions.sh                            # default: http://localhost:6970/mcp
./tests/mcporter/test-actions.sh http://10.1.0.2:6970/mcp   # explicit URL
UNRAID_MCP_URL=http://10.1.0.2:6970/mcp ./tests/mcporter/test-actions.sh
```

Connects to an already-running streamable-http server. More up-to-date coverage — includes `unraid_settings`, all docker organizer mutations, and the full notification action set.

---

## What `test-actions.sh` Tests

### Phase 1 — Param-free reads

All actions requiring no arguments beyond `action` itself.

| Tool | Actions tested |
|------|----------------|
| `unraid_info` | `overview`, `array`, `network`, `registration`, `connect`, `variables`, `metrics`, `services`, `display`, `config`, `online`, `owner`, `settings`, `server`, `servers`, `flash`, `ups_devices`, `ups_device`, `ups_config` |
| `unraid_array` | `parity_status` |
| `unraid_storage` | `disks`, `shares`, `unassigned`, `log_files` |
| `unraid_docker` | `list`, `networks`, `port_conflicts`, `check_updates`, `sync_templates`, `refresh_digests` |
| `unraid_vm` | `list` |
| `unraid_notifications` | `overview`, `list`, `warnings`, `recalculate` |
| `unraid_rclone` | `list_remotes`, `config_form` |
| `unraid_users` | `me` |
| `unraid_keys` | `list` |
| `unraid_health` | `check`, `test_connection`, `diagnose` |
| `unraid_settings` | *(all 9 actions skipped — mutations only)* |
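
For reference, a single Phase 1 call issued by the script has the shape below. The `mcporter` invocation is commented out because it needs a live server; validating the args JSON locally does not (the URL shown is the script's default):

```shell
# args for one param-free read; validate locally before sending
ARGS='{"action":"overview"}'
echo "$ARGS" | python3 -m json.tool >/dev/null && echo "args ok"
# with a server running, the script sends it as:
# mcporter call --http-url http://localhost:6970/mcp --allow-http \
#   --tool unraid_info --args "$ARGS" --output json
```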

### Phase 2 — ID-discovered reads

IDs are extracted from Phase 1 responses and used for actions requiring a specific resource. Each is skipped if Phase 1 returned no matching resources.

| Action | Source of ID |
|--------|--------------|
| `docker: details` | first container from `docker: list` |
| `docker: logs` | first container from `docker: list` |
| `docker: network_details` | first network from `docker: networks` |
| `storage: disk_details` | first disk from `storage: disks` |
| `storage: logs` | first path from `storage: log_files` |
| `vm: details` | first VM from `vm: list` |
| `keys: get` | first key from `keys: list` |
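
The extraction step can be exercised offline against a canned payload. This sketch mirrors the field fallbacks the script uses for containers; the sample JSON is made up, not a real server response:

```shell
# discover a container ID the way Phase 2 does, from a canned response
SAMPLE='{"containers":[{"id":"abc123","names":["/plex"]}]}'
CONTAINER_ID=$(echo "$SAMPLE" | python3 -c "
import json, sys
d = json.load(sys.stdin)
containers = d.get('containers') or []
if containers:
    c = containers[0]
    # fall back to the first name (leading '/' stripped) when 'id' is absent
    print(c.get('id') or c.get('names', [''])[0].lstrip('/'))
")
echo "discovered: $CONTAINER_ID"   # → discovered: abc123
```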

### Skipped actions (and why)

| Label | Meaning |
|-------|---------|
| `destructive (confirm=True required)` | Permanently modifies or deletes data |
| `mutation — state-changing` | Modifies live system state (container/VM lifecycle, settings) |
| `mutation — creates …` | Creates a new resource |

**Full skip list:**

- `unraid_info`: `update_server`, `update_ssh`
- `unraid_array`: `parity_start`, `parity_pause`, `parity_resume`, `parity_cancel`
- `unraid_storage`: `flash_backup`
- `unraid_docker`: `start`, `stop`, `restart`, `pause`, `unpause`, `update`, `remove`, `update_all`, `create_folder`, `set_folder_children`, `delete_entries`, `move_to_folder`, `move_to_position`, `rename_folder`, `create_folder_with_items`, `update_view_prefs`, `reset_template_mappings`
- `unraid_vm`: `start`, `stop`, `pause`, `resume`, `reboot`, `force_stop`, `reset`
- `unraid_notifications`: `create`, `create_unique`, `archive`, `unread`, `archive_all`, `archive_many`, `unarchive_many`, `unarchive_all`, `delete`, `delete_archived`
- `unraid_rclone`: `create_remote`, `delete_remote`
- `unraid_keys`: `create`, `update`, `delete`
- `unraid_settings`: all 9 actions

### Output format

```
<action label> PASS
<action label> FAIL
  <first 3 lines of error detail>
<action label> SKIP (reason)

Results: 42 passed 0 failed 37 skipped (79 total)
```

Exit code `0` when all executed tests pass, `1` if any fail.

---

## Destructive Actions

Neither script executes destructive actions. They are explicitly `skip_test`-ed with reason `"destructive (confirm=True required)"`.

All destructive actions require `confirm=True` at the call site. There is no environment variable gate — `confirm` is the sole guard.
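
To sketch what a confirmed call looks like: the args must carry `confirm` explicitly. The `notification_id` parameter name and the placeholder id below are illustrative, not taken from the API schema, and the `mcporter` line is commented out because it would actually delete:

```shell
# a destructive call must carry confirm=true in its args;
# notification_id is an assumed parameter name, <id-from-list> a placeholder
ARGS='{"action":"delete","notification_id":"<id-from-list>","confirm":true}'
echo "$ARGS" | python3 -m json.tool >/dev/null && echo "args ok"
# mcporter call --http-url http://localhost:6970/mcp --allow-http \
#   --tool unraid_notifications --args "$ARGS" --output json
```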

### Safe Testing Strategy

| Strategy | When to use |
|----------|-------------|
| **Create → destroy** | Action has a create counterpart (keys, notifications, rclone remotes, docker folders) |
| **No-op apply** | Action mutates config but can be re-applied with current values unchanged (`update_ssh`) |
| **Dedicated test remote** | Action requires a remote target (`flash_backup`) |
| **Test VM** | Action requires a live VM (`force_stop`, `reset`) |
| **Mock/safety audit only** | Global blast radius, no safe isolation (`update_all`, `reset_template_mappings`, `setup_remote_access`, `configure_ups`) |
| **Secondary server only** | Run on `shart` (10.1.0.3), never `tootie` (10.1.0.2) |

For exact per-action mcporter commands, see [`docs/DESTRUCTIVE_ACTIONS.md`](../../docs/DESTRUCTIVE_ACTIONS.md).

---

## Prerequisites

```bash
# mcporter CLI
npm install -g mcporter

# uv (for test-tools.sh stdio mode)
curl -LsSf https://astral.sh/uv/install.sh | sh

# python3 — used for inline JSON extraction
python3 --version  # 3.12+

# Running server (for test-actions.sh only)
docker compose up -d
# or
uv run unraid-mcp-server
```

---

## Cleanup

Both scripts create **no temporary files and no background processes**. `test-actions.sh` connects to an existing server and leaves it running. `test-tools.sh` spawns stdio server subprocesses per call; they exit when mcporter finishes each invocation.
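
A quick sanity check after a `test-tools.sh` run (the `[r]` bracket is the usual idiom to stop `pgrep -f` matching its own command line):

```shell
# after a run, no server subprocesses should remain
pgrep -f 'unraid-mcp-serve[r]' >/dev/null || echo "no leftover server processes"
```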
397
tests/mcporter/test-actions.sh
Executable file
@@ -0,0 +1,397 @@
#!/usr/bin/env bash
# test-actions.sh — Test all non-destructive Unraid MCP actions via mcporter
#
# Usage:
#   ./tests/mcporter/test-actions.sh [MCP_URL]
#
# Default MCP_URL: http://localhost:6970/mcp
# Skips: destructive (confirm=True required), state-changing mutations,
#        and actions requiring IDs not yet discovered.
#
# Phase 1: param-free reads
# Phase 2: ID-discovered reads (container, network, disk, vm, key, log)

set -euo pipefail

MCP_URL="${1:-${UNRAID_MCP_URL:-http://localhost:6970/mcp}}"

# ── colours ──────────────────────────────────────────────────────────────────
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'

PASS=0; FAIL=0; SKIP=0
declare -a FAILED_TESTS=()

# ── helpers ───────────────────────────────────────────────────────────────────

mcall() {
  # mcall <tool> <json-args>
  local tool="$1" args="$2"
  mcporter call \
    --http-url "$MCP_URL" \
    --allow-http \
    --tool "$tool" \
    --args "$args" \
    --output json \
    2>&1
}

_check_output() {
  # Returns 0 if output looks like a successful JSON response, 1 otherwise.
  local output="$1" exit_code="$2"
  [[ $exit_code -ne 0 ]] && return 1
  echo "$output" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    if isinstance(d, dict) and (d.get('isError') or d.get('error') or 'ToolError' in str(d)):
        sys.exit(1)
except Exception:
    pass
sys.exit(0)
" 2>/dev/null
}

run_test() {
  # Print result; do NOT echo the JSON body (kept quiet for readability).
  local label="$1" tool="$2" args="$3"
  printf " %-60s" "$label"
  local output exit_code=0
  output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
  if _check_output "$output" "$exit_code"; then
    echo -e "${GREEN}PASS${NC}"
    ((PASS++)) || true
  else
    echo -e "${RED}FAIL${NC}"
    ((FAIL++)) || true
    FAILED_TESTS+=("$label")
    # Show first 3 lines of error detail, indented
    echo "$output" | head -3 | sed 's/^/ /'
  fi
}

run_test_capture() {
  # Like run_test but echoes raw JSON to stdout for ID extraction by caller.
  # Status lines go to stderr so the caller's $() captures only clean JSON.
  local label="$1" tool="$2" args="$3"
  local output exit_code=0
  printf " %-60s" "$label" >&2
  output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
  if _check_output "$output" "$exit_code"; then
    echo -e "${GREEN}PASS${NC}" >&2
    ((PASS++)) || true
  else
    echo -e "${RED}FAIL${NC}" >&2
    ((FAIL++)) || true
    FAILED_TESTS+=("$label")
    echo "$output" | head -3 | sed 's/^/ /' >&2
  fi
  echo "$output"  # pure JSON → captured by caller's $()
}

skip_test() {
  local label="$1" reason="$2"
  printf " %-60s${YELLOW}SKIP${NC} (%s)\n" "$label" "$reason"
  ((SKIP++)) || true
}

section() {
  echo ""
  echo -e "${CYAN}${BOLD}━━━ $1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}

# ── connectivity check ────────────────────────────────────────────────────────

echo ""
echo -e "${BOLD}Unraid MCP Non-Destructive Action Test Suite${NC}"
echo -e "Server: ${CYAN}$MCP_URL${NC}"
echo ""
printf "Checking connectivity... "
# Use -s (silent) without -f: a 4xx/406 means the MCP server is up and
# responding correctly to a plain GET — only "connection refused" is fatal.
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$MCP_URL" 2>/dev/null || echo "000")
if [[ "$HTTP_CODE" == "000" ]]; then
  echo -e "${RED}UNREACHABLE${NC}"
  echo "Start the server first: docker compose up -d OR uv run unraid-mcp-server"
  exit 1
fi
echo -e "${GREEN}OK${NC} (HTTP $HTTP_CODE)"

# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 1 — Param-free read actions
# ═══════════════════════════════════════════════════════════════════════════════

section "unraid_info (19 query actions)"
run_test "info: overview" unraid_info '{"action":"overview"}'
run_test "info: array" unraid_info '{"action":"array"}'
run_test "info: network" unraid_info '{"action":"network"}'
run_test "info: registration" unraid_info '{"action":"registration"}'
run_test "info: connect" unraid_info '{"action":"connect"}'
run_test "info: variables" unraid_info '{"action":"variables"}'
run_test "info: metrics" unraid_info '{"action":"metrics"}'
run_test "info: services" unraid_info '{"action":"services"}'
run_test "info: display" unraid_info '{"action":"display"}'
run_test "info: config" unraid_info '{"action":"config"}'
run_test "info: online" unraid_info '{"action":"online"}'
run_test "info: owner" unraid_info '{"action":"owner"}'
run_test "info: settings" unraid_info '{"action":"settings"}'
run_test "info: server" unraid_info '{"action":"server"}'
run_test "info: servers" unraid_info '{"action":"servers"}'
run_test "info: flash" unraid_info '{"action":"flash"}'
run_test "info: ups_devices" unraid_info '{"action":"ups_devices"}'
run_test "info: ups_device" unraid_info '{"action":"ups_device"}'
run_test "info: ups_config" unraid_info '{"action":"ups_config"}'
skip_test "info: update_server" "mutation — state-changing"
skip_test "info: update_ssh" "mutation — state-changing"

section "unraid_array"
run_test "array: parity_status" unraid_array '{"action":"parity_status"}'
skip_test "array: parity_start" "mutation — starts parity check"
skip_test "array: parity_pause" "mutation — pauses parity check"
skip_test "array: parity_resume" "mutation — resumes parity check"
skip_test "array: parity_cancel" "mutation — cancels parity check"

section "unraid_storage (param-free reads)"
STORAGE_DISKS=$(run_test_capture "storage: disks" unraid_storage '{"action":"disks"}')
run_test "storage: shares" unraid_storage '{"action":"shares"}'
run_test "storage: unassigned" unraid_storage '{"action":"unassigned"}'
LOG_FILES=$(run_test_capture "storage: log_files" unraid_storage '{"action":"log_files"}')
skip_test "storage: flash_backup" "destructive (confirm=True required)"

section "unraid_docker (param-free reads)"
DOCKER_LIST=$(run_test_capture "docker: list" unraid_docker '{"action":"list"}')
DOCKER_NETS=$(run_test_capture "docker: networks" unraid_docker '{"action":"networks"}')
run_test "docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
run_test "docker: check_updates" unraid_docker '{"action":"check_updates"}'
run_test "docker: sync_templates" unraid_docker '{"action":"sync_templates"}'
run_test "docker: refresh_digests" unraid_docker '{"action":"refresh_digests"}'
skip_test "docker: start" "mutation — changes container state"
skip_test "docker: stop" "mutation — changes container state"
skip_test "docker: restart" "mutation — changes container state"
skip_test "docker: pause" "mutation — changes container state"
skip_test "docker: unpause" "mutation — changes container state"
skip_test "docker: update" "mutation — updates container image"
skip_test "docker: remove" "destructive (confirm=True required)"
skip_test "docker: update_all" "destructive (confirm=True required)"
skip_test "docker: create_folder" "mutation — changes organizer state"
skip_test "docker: set_folder_children" "mutation — changes organizer state"
skip_test "docker: delete_entries" "destructive (confirm=True required)"
skip_test "docker: move_to_folder" "mutation — changes organizer state"
skip_test "docker: move_to_position" "mutation — changes organizer state"
skip_test "docker: rename_folder" "mutation — changes organizer state"
skip_test "docker: create_folder_with_items" "mutation — changes organizer state"
skip_test "docker: update_view_prefs" "mutation — changes organizer state"
skip_test "docker: reset_template_mappings" "destructive (confirm=True required)"

section "unraid_vm (param-free reads)"
VM_LIST=$(run_test_capture "vm: list" unraid_vm '{"action":"list"}')
skip_test "vm: start" "mutation — changes VM state"
skip_test "vm: stop" "mutation — changes VM state"
skip_test "vm: pause" "mutation — changes VM state"
skip_test "vm: resume" "mutation — changes VM state"
skip_test "vm: reboot" "mutation — changes VM state"
skip_test "vm: force_stop" "destructive (confirm=True required)"
skip_test "vm: reset" "destructive (confirm=True required)"

section "unraid_notifications"
run_test "notifications: overview" unraid_notifications '{"action":"overview"}'
run_test "notifications: list" unraid_notifications '{"action":"list"}'
run_test "notifications: warnings" unraid_notifications '{"action":"warnings"}'
run_test "notifications: recalculate" unraid_notifications '{"action":"recalculate"}'
skip_test "notifications: create" "mutation — creates notification"
skip_test "notifications: create_unique" "mutation — creates notification"
skip_test "notifications: archive" "mutation — changes notification state"
skip_test "notifications: unread" "mutation — changes notification state"
skip_test "notifications: archive_all" "mutation — changes notification state"
skip_test "notifications: archive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_all" "mutation — changes notification state"
skip_test "notifications: delete" "destructive (confirm=True required)"
skip_test "notifications: delete_archived" "destructive (confirm=True required)"

section "unraid_rclone"
run_test "rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
run_test "rclone: config_form" unraid_rclone '{"action":"config_form"}'
skip_test "rclone: create_remote" "mutation — creates remote"
skip_test "rclone: delete_remote" "destructive (confirm=True required)"

section "unraid_users"
run_test "users: me" unraid_users '{"action":"me"}'

section "unraid_keys"
KEYS_LIST=$(run_test_capture "keys: list" unraid_keys '{"action":"list"}')
skip_test "keys: create" "mutation — creates API key"
skip_test "keys: update" "mutation — modifies API key"
skip_test "keys: delete" "destructive (confirm=True required)"

section "unraid_health"
run_test "health: check" unraid_health '{"action":"check"}'
run_test "health: test_connection" unraid_health '{"action":"test_connection"}'
run_test "health: diagnose" unraid_health '{"action":"diagnose"}'

section "unraid_settings (all mutations — skipped)"
skip_test "settings: update" "mutation — modifies settings"
skip_test "settings: update_temperature" "mutation — modifies settings"
skip_test "settings: update_time" "mutation — modifies settings"
skip_test "settings: configure_ups" "destructive (confirm=True required)"
skip_test "settings: update_api" "mutation — modifies settings"
skip_test "settings: connect_sign_in" "mutation — authentication action"
skip_test "settings: connect_sign_out" "mutation — authentication action"
skip_test "settings: setup_remote_access" "destructive (confirm=True required)"
skip_test "settings: enable_dynamic_remote_access" "destructive (confirm=True required)"

# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 2 — ID-discovered read actions
# ═══════════════════════════════════════════════════════════════════════════════

section "Phase 2: ID-discovered reads"

# ── docker container ID ───────────────────────────────────────────────────────
CONTAINER_ID=$(echo "$DOCKER_LIST" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    containers = d.get('containers') or d.get('data', {}).get('containers') or []
    if isinstance(containers, list) and containers:
        c = containers[0]
        cid = c.get('id') or c.get('names', [''])[0].lstrip('/')
        if cid:
            print(cid)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$CONTAINER_ID" ]]; then
  run_test "docker: details (id=$CONTAINER_ID)" \
    unraid_docker "{\"action\":\"details\",\"container_id\":\"$CONTAINER_ID\"}"
  run_test "docker: logs (id=$CONTAINER_ID)" \
    unraid_docker "{\"action\":\"logs\",\"container_id\":\"$CONTAINER_ID\",\"tail_lines\":20}"
else
  skip_test "docker: details" "no containers found to discover ID"
  skip_test "docker: logs" "no containers found to discover ID"
fi

# ── docker network ID ─────────────────────────────────────────────────────────
NETWORK_ID=$(echo "$DOCKER_NETS" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    nets = d.get('networks') or d.get('data', {}).get('networks') or []
    if isinstance(nets, list) and nets:
        nid = nets[0].get('id') or nets[0].get('Id')
        if nid:
            print(nid)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$NETWORK_ID" ]]; then
  run_test "docker: network_details (id=$NETWORK_ID)" \
    unraid_docker "{\"action\":\"network_details\",\"network_id\":\"$NETWORK_ID\"}"
else
  skip_test "docker: network_details" "no networks found to discover ID"
fi

# ── disk ID ───────────────────────────────────────────────────────────────────
DISK_ID=$(echo "$STORAGE_DISKS" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    disks = d.get('disks') or d.get('data', {}).get('disks') or []
    if isinstance(disks, list) and disks:
        did = disks[0].get('id') or disks[0].get('device')
        if did:
            print(did)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$DISK_ID" ]]; then
  run_test "storage: disk_details (id=$DISK_ID)" \
    unraid_storage "{\"action\":\"disk_details\",\"disk_id\":\"$DISK_ID\"}"
else
  skip_test "storage: disk_details" "no disks found to discover ID"
fi

# ── log path ──────────────────────────────────────────────────────────────────
LOG_PATH=$(echo "$LOG_FILES" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    files = d.get('log_files') or d.get('files') or d.get('data', {}).get('log_files') or []
    if isinstance(files, list) and files:
        p = files[0].get('path') or (files[0] if isinstance(files[0], str) else None)
        if p:
            print(p)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$LOG_PATH" ]]; then
  run_test "storage: logs (path=$LOG_PATH)" \
    unraid_storage "{\"action\":\"logs\",\"log_path\":\"$LOG_PATH\",\"tail_lines\":20}"
else
  skip_test "storage: logs" "no log files found to discover path"
fi

# ── VM ID ─────────────────────────────────────────────────────────────────────
VM_ID=$(echo "$VM_LIST" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    vms = d.get('vms') or d.get('data', {}).get('vms') or []
    if isinstance(vms, list) and vms:
        vid = vms[0].get('uuid') or vms[0].get('id') or vms[0].get('name')
        if vid:
            print(vid)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$VM_ID" ]]; then
  run_test "vm: details (id=$VM_ID)" \
    unraid_vm "{\"action\":\"details\",\"vm_id\":\"$VM_ID\"}"
else
  skip_test "vm: details" "no VMs found to discover ID"
fi

# ── API key ID ────────────────────────────────────────────────────────────────
KEY_ID=$(echo "$KEYS_LIST" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    keys = d.get('keys') or d.get('apiKeys') or d.get('data', {}).get('keys') or []
    if isinstance(keys, list) and keys:
        kid = keys[0].get('id')
        if kid:
            print(kid)
except Exception:
    pass
" 2>/dev/null || true)

if [[ -n "$KEY_ID" ]]; then
  run_test "keys: get (id=$KEY_ID)" \
    unraid_keys "{\"action\":\"get\",\"key_id\":\"$KEY_ID\"}"
else
  skip_test "keys: get" "no API keys found to discover ID"
fi

# ═══════════════════════════════════════════════════════════════════════════════
# SUMMARY
# ═══════════════════════════════════════════════════════════════════════════════

TOTAL=$((PASS + FAIL + SKIP))
echo ""
echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}Results: ${GREEN}${PASS} passed${NC} ${RED}${FAIL} failed${NC} ${YELLOW}${SKIP} skipped${NC} (${TOTAL} total)"

if [[ ${#FAILED_TESTS[@]} -gt 0 ]]; then
  echo ""
  echo -e "${RED}${BOLD}Failed tests:${NC}"
  for t in "${FAILED_TESTS[@]}"; do
    echo -e "  ${RED}✗${NC} $t"
  done
fi

echo ""
[[ $FAIL -eq 0 ]] && exit 0 || exit 1
334
tests/mcporter/test-destructive.sh
Executable file
@@ -0,0 +1,334 @@
|
||||
#!/usr/bin/env bash
|
||||
# test-destructive.sh — Safe destructive action tests for unraid-mcp
|
||||
#
|
||||
# Tests all 15 destructive actions using create→destroy and no-op patterns.
|
||||
# Actions with global blast radius (no safe isolation) are skipped.
|
||||
#
|
||||
# Transport: stdio — spawns uv run unraid-mcp-server per call; no running server needed.
|
||||
#
|
||||
# Usage:
|
||||
# ./tests/mcporter/test-destructive.sh [--confirm]
|
||||
#
|
||||
# Options:
|
||||
# --confirm REQUIRED to execute destructive tests; without it, dry-runs only
|
||||
#
|
||||
# Exit codes:
|
||||
# 0 — all executable tests passed (or dry-run)
|
||||
# 1 — one or more tests failed
|
||||
# 2 — prerequisite check failed
|
||||
|
||||
set -uo pipefail
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Constants
|
||||
# ---------------------------------------------------------------------------
|
||||
readonly SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
|
||||
readonly SCRIPT_NAME="$(basename -- "${BASH_SOURCE[0]}")"
|
||||
|
||||
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
|
||||
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Defaults
|
||||
# ---------------------------------------------------------------------------
|
||||
readonly PROJECT_DIR="$(cd -- "${SCRIPT_DIR}/../.." && pwd -P)"
|
||||
CONFIRM=false
|
||||
|
||||
PASS=0; FAIL=0; SKIP=0
|
||||
declare -a FAILED_TESTS=()
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Argument parsing
|
||||
# ---------------------------------------------------------------------------
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case "$1" in
|
||||
--confirm) CONFIRM=true; shift ;;
|
||||
-h|--help)
|
||||
printf 'Usage: %s [--confirm]\n' "${SCRIPT_NAME}"
|
||||
exit 0
|
||||
;;
|
||||
*) printf '[ERROR] Unknown argument: %s\n' "$1" >&2; exit 2 ;;
|
||||
esac
|
||||
done
|
||||
|
||||
# ---------------------------------------------------------------------------
|
||||
# Helpers
|
||||
# ---------------------------------------------------------------------------
|
||||
section() { echo ""; echo -e "${CYAN}${BOLD}━━━ $1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"; }
|
||||
|
||||
pass_test() {
|
||||
printf " %-60s${GREEN}PASS${NC}\n" "$1"
|
||||
((PASS++)) || true
|
||||
}
|
||||
|
||||
fail_test() {
|
||||
local label="$1" reason="$2"
|
||||
printf " %-60s${RED}FAIL${NC}\n" "${label}"
|
||||
printf " %s\n" "${reason}"
|
||||
((FAIL++)) || true
|
||||
FAILED_TESTS+=("${label}")
|
||||
}
|
||||
|
||||
skip_test() {
|
||||
printf " %-60s${YELLOW}SKIP${NC} (%s)\n" "$1" "$2"
|
||||
((SKIP++)) || true
|
||||
}
|
||||
|
||||
dry_run() {
|
||||
printf " %-60s${CYAN}DRY-RUN${NC}\n" "$1"
|
||||
((SKIP++)) || true
|
||||
}
|
||||
|
||||
mcall() {
|
||||
local tool="$1" args="$2"
|
||||
mcporter call \
|
||||
--stdio "uv run --project ${PROJECT_DIR} unraid-mcp-server" \
|
||||
--tool "$tool" \
|
||||
--args "$args" \
|
||||
--output json \
|
||||
2>/dev/null
|
||||
}
|
||||
|
||||
extract() {
|
||||
# extract <json> <python-expression>
|
||||
python3 -c "import json,sys; d=json.loads('''$1'''); print($2)" 2>/dev/null || true
|
||||
}
|
||||
|
||||
# ---------------------------------------------------------------------------
# Connectivity check
# ---------------------------------------------------------------------------
echo ""
echo -e "${BOLD}Unraid MCP Destructive Action Test Suite${NC}"
echo -e "Transport: ${CYAN}stdio (uv run unraid-mcp-server)${NC}"
echo -e "Mode: $(${CONFIRM} && echo "${RED}LIVE — destructive actions will execute${NC}" || echo "${YELLOW}DRY-RUN — pass --confirm to execute${NC}")"
echo ""

# ---------------------------------------------------------------------------
# docker: remove — skipped (two-machine problem)
# ---------------------------------------------------------------------------
section "docker: remove"
skip_test "docker: remove" "requires a pre-existing stopped container on the Unraid server — can't provision via local docker"

# ---------------------------------------------------------------------------
# docker: delete_entries — create folder → delete via MCP
# ---------------------------------------------------------------------------
section "docker: delete_entries"
skip_test "docker: delete_entries" "createDockerFolder mutation not available in this Unraid API version (HTTP 400)"

# ---------------------------------------------------------------------------
# docker: update_all — mock/safety audit only
# ---------------------------------------------------------------------------
section "docker: update_all"
skip_test "docker: update_all" "global blast radius — restarts all containers; safety audit only"

# ---------------------------------------------------------------------------
# docker: reset_template_mappings — mock/safety audit only
# ---------------------------------------------------------------------------
section "docker: reset_template_mappings"
skip_test "docker: reset_template_mappings" "wipes all template mappings globally; safety audit only"

# ---------------------------------------------------------------------------
# vm: force_stop — requires manual test VM setup
# ---------------------------------------------------------------------------
section "vm: force_stop"
skip_test "vm: force_stop" "requires pre-created Alpine test VM (no persistent disk)"

# ---------------------------------------------------------------------------
# vm: reset — requires manual test VM setup
# ---------------------------------------------------------------------------
section "vm: reset"
skip_test "vm: reset" "requires pre-created Alpine test VM (no persistent disk)"

# ---------------------------------------------------------------------------
# notifications: delete — create notification → delete via MCP
# ---------------------------------------------------------------------------
section "notifications: delete"

test_notifications_delete() {
  local label="notifications: delete"

  # Create the notification
  local create_raw
  create_raw="$(mcall unraid_notifications \
    '{"action":"create","title":"mcp-test-delete","subject":"MCP destructive test","description":"Safe to delete","importance":"INFO"}')"
  local create_ok
  create_ok="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('success', False))" 2>/dev/null)"
  if [[ "${create_ok}" != "True" ]]; then
    fail_test "${label}" "create notification failed: ${create_raw}"
    return
  fi

  # The create response ID doesn't match the stored filename — list and find by title
  local list_raw nid
  list_raw="$(mcall unraid_notifications '{"action":"list","notification_type":"UNREAD"}')"
  nid="$(python3 -c "
import json,sys
d = json.loads('''${list_raw}''')
notifs = d.get('notifications', [])
match = next((n['id'] for n in notifs if n.get('title') == 'mcp-test-delete'), '')
print(match)
" 2>/dev/null)"

  if [[ -z "${nid}" ]]; then
    fail_test "${label}" "created notification not found in UNREAD list"
    return
  fi

  local del_raw
  del_raw="$(mcall unraid_notifications \
    "{\"action\":\"delete\",\"notification_id\":\"${nid}\",\"notification_type\":\"UNREAD\",\"confirm\":true}")"
  # success=true OR deleteNotification key present (raw GraphQL response) both indicate success
  local success
  success="$(python3 -c "
import json,sys
d=json.loads('''${del_raw}''')
ok = d.get('success', False) or ('deleteNotification' in d)
print(ok)
" 2>/dev/null)"

  if [[ "${success}" != "True" ]]; then
    # Leak: notification created but not deleted — archive it so it doesn't clutter the feed
    mcall unraid_notifications "{\"action\":\"archive\",\"notification_id\":\"${nid}\"}" &>/dev/null || true
    fail_test "${label}" "delete did not return success=true: ${del_raw} (notification archived as fallback cleanup)"
    return
  fi

  pass_test "${label}"
}

if ${CONFIRM}; then
  test_notifications_delete
else
  dry_run "notifications: delete [create notification → mcall unraid_notifications delete]"
fi

# ---------------------------------------------------------------------------
# notifications: delete_archived — bulk wipe; skip (hard to isolate)
# ---------------------------------------------------------------------------
section "notifications: delete_archived"
skip_test "notifications: delete_archived" "bulk wipe of ALL archived notifications; run manually on shart if needed"

# ---------------------------------------------------------------------------
# rclone: delete_remote — create local:/tmp remote → delete via MCP
# ---------------------------------------------------------------------------
section "rclone: delete_remote"
skip_test "rclone: delete_remote" "createRCloneRemote broken server-side on this Unraid version (url slash error)"

# ---------------------------------------------------------------------------
# keys: delete — create test key → delete via MCP
# ---------------------------------------------------------------------------
section "keys: delete"

test_keys_delete() {
  local label="keys: delete"

  # Guard: abort if test key already exists (don't delete a real key)
  # Note: API key names cannot contain hyphens — use "mcp test key"
  local existing_keys
  existing_keys="$(mcall unraid_keys '{"action":"list"}')"
  if python3 -c "
import json,sys
d = json.loads('''${existing_keys}''')
keys = d.get('keys', d.get('apiKeys', []))
sys.exit(1 if any(k.get('name') == 'mcp test key' for k in keys) else 0)
" 2>/dev/null; then
    : # not found, safe to proceed
  else
    fail_test "${label}" "a key named 'mcp test key' already exists — refusing to proceed"
    return
  fi

  local create_raw
  create_raw="$(mcall unraid_keys \
    '{"action":"create","name":"mcp test key","roles":["VIEWER"]}')"
  local kid
  kid="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('key',{}).get('id',''))" 2>/dev/null)"

  if [[ -z "${kid}" ]]; then
    fail_test "${label}" "create key did not return an ID"
    return
  fi

  local del_raw
  del_raw="$(mcall unraid_keys "{\"action\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}")"
  local success
  success="$(python3 -c "import json,sys; d=json.loads('''${del_raw}'''); print(d.get('success', False))" 2>/dev/null)"

  if [[ "${success}" != "True" ]]; then
    fail_test "${label}" "delete did not return success=true: ${del_raw}"
    return
  fi

  # Verify gone
  local list_raw
  list_raw="$(mcall unraid_keys '{"action":"list"}')"
  if python3 -c "
import json,sys
d = json.loads('''${list_raw}''')
keys = d.get('keys', d.get('apiKeys', []))
sys.exit(0 if not any(k.get('id') == '${kid}' for k in keys) else 1)
" 2>/dev/null; then
    pass_test "${label}"
  else
    fail_test "${label}" "key still present in list after delete"
  fi
}

if ${CONFIRM}; then
  test_keys_delete
else
  dry_run "keys: delete [create test key → mcall unraid_keys delete]"
fi

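The `sys.exit`-based guard in `test_keys_delete` turns a JSON membership test into a plain shell conditional (exit 0 means "no collision"). A standalone sketch with a hypothetical key list and no API call:

```shell
# Standalone sketch of the name-collision guard (hypothetical data; no API call).
existing='{"keys": [{"name": "real key"}, {"name": "another"}]}'
if python3 -c "
import json, sys
d = json.loads('''${existing}''')
sys.exit(1 if any(k.get('name') == 'mcp test key' for k in d.get('keys', [])) else 0)
"; then
  echo "safe to proceed"        # exit 0: no key with that name
else
  echo "refusing: test key already exists"
fi
```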
# ---------------------------------------------------------------------------
# storage: flash_backup — requires dedicated test remote
# ---------------------------------------------------------------------------
section "storage: flash_backup"
skip_test "storage: flash_backup" "requires dedicated test remote pre-configured and isolated destination"

# ---------------------------------------------------------------------------
# settings: configure_ups — mock/safety audit only
# ---------------------------------------------------------------------------
section "settings: configure_ups"
skip_test "settings: configure_ups" "wrong config breaks UPS monitoring; safety audit only"

# ---------------------------------------------------------------------------
# settings: setup_remote_access — mock/safety audit only
# ---------------------------------------------------------------------------
section "settings: setup_remote_access"
skip_test "settings: setup_remote_access" "misconfiguration can lock out remote access; safety audit only"

# ---------------------------------------------------------------------------
# settings: enable_dynamic_remote_access — shart only, toggle false → restore
# ---------------------------------------------------------------------------
section "settings: enable_dynamic_remote_access"
skip_test "settings: enable_dynamic_remote_access" "run manually on shart (10.1.0.3) only — see docs/DESTRUCTIVE_ACTIONS.md"

# ---------------------------------------------------------------------------
# info: update_ssh — read current values, re-apply same (no-op)
# ---------------------------------------------------------------------------
section "info: update_ssh"
skip_test "info: update_ssh" "updateSshSettings mutation not available in this Unraid API version (HTTP 400)"

# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
TOTAL=$((PASS + FAIL + SKIP))
echo ""
echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}Results: ${GREEN}${PASS} passed${NC} ${RED}${FAIL} failed${NC} ${YELLOW}${SKIP} skipped${NC} (${TOTAL} total)"

if [[ ${#FAILED_TESTS[@]} -gt 0 ]]; then
  echo ""
  echo -e "${RED}${BOLD}Failed tests:${NC}"
  for t in "${FAILED_TESTS[@]}"; do
    echo -e "  ${RED}✗${NC} ${t}"
  done
fi

echo ""
if ! ${CONFIRM}; then
  echo -e "${YELLOW}Dry-run complete. Pass --confirm to execute destructive tests.${NC}"
fi

[[ ${FAIL} -eq 0 ]] && exit 0 || exit 1
764
tests/mcporter/test-tools.sh
Executable file
@@ -0,0 +1,764 @@
#!/usr/bin/env bash
# =============================================================================
# test-tools.sh — Integration smoke-test for unraid-mcp MCP server tools
#
# Exercises every non-destructive action across all 10 tools using mcporter.
# The server is launched ad-hoc via mcporter's --stdio flag so no persistent
# process or registered server entry is required.
#
# Usage:
#   ./tests/mcporter/test-tools.sh [--timeout-ms N] [--parallel] [--verbose]
#
# Options:
#   --timeout-ms N   Per-call timeout in milliseconds (default: 25000)
#   --parallel       Run independent test groups in parallel (default: off)
#   --verbose        Print raw mcporter output for each call
#
# Exit codes:
#   0 — all tests passed or skipped
#   1 — one or more tests failed
#   2 — prerequisite check failed (mcporter, uv, server startup)
# =============================================================================

set -uo pipefail

# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
readonly SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
readonly PROJECT_DIR="$(cd -- "${SCRIPT_DIR}/../.." && pwd -P)"
readonly SCRIPT_NAME="$(basename -- "${BASH_SOURCE[0]}")"
readonly TS_START="$(date +%s%N)"   # nanosecond epoch
readonly LOG_FILE="${TMPDIR:-/tmp}/${SCRIPT_NAME%.sh}.$(date +%Y%m%d-%H%M%S).log"

# Colours (disabled automatically when stdout is not a terminal)
if [[ -t 1 ]]; then
  C_RESET='\033[0m'
  C_BOLD='\033[1m'
  C_GREEN='\033[0;32m'
  C_RED='\033[0;31m'
  C_YELLOW='\033[0;33m'
  C_CYAN='\033[0;36m'
  C_DIM='\033[2m'
else
  C_RESET='' C_BOLD='' C_GREEN='' C_RED='' C_YELLOW='' C_CYAN='' C_DIM=''
fi

# ---------------------------------------------------------------------------
# Defaults (overridable via flags)
# ---------------------------------------------------------------------------
CALL_TIMEOUT_MS=25000
USE_PARALLEL=false
VERBOSE=false

# ---------------------------------------------------------------------------
# Counters (updated by run_test / skip_test)
# ---------------------------------------------------------------------------
PASS_COUNT=0
FAIL_COUNT=0
SKIP_COUNT=0
declare -a FAIL_NAMES=()

# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --timeout-ms)
        CALL_TIMEOUT_MS="${2:?--timeout-ms requires a value}"
        shift 2
        ;;
      --parallel)
        USE_PARALLEL=true
        shift
        ;;
      --verbose)
        VERBOSE=true
        shift
        ;;
      -h|--help)
        printf 'Usage: %s [--timeout-ms N] [--parallel] [--verbose]\n' "${SCRIPT_NAME}"
        exit 0
        ;;
      *)
        printf '[ERROR] Unknown argument: %s\n' "$1" >&2
        exit 2
        ;;
    esac
  done
}

# ---------------------------------------------------------------------------
# Logging helpers
# ---------------------------------------------------------------------------
log_info()  { printf "${C_CYAN}[INFO]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}"; }
log_warn()  { printf "${C_YELLOW}[WARN]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}"; }
log_error() { printf "${C_RED}[ERROR]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}" >&2; }

elapsed_ms() {
  local now
  now="$(date +%s%N)"
  printf '%d' "$(( (now - TS_START) / 1000000 ))"
}

# ---------------------------------------------------------------------------
# Cleanup trap
# ---------------------------------------------------------------------------
cleanup() {
  local rc=$?
  if [[ $rc -ne 0 ]]; then
    log_warn "Script exited with rc=${rc}. Log: ${LOG_FILE}"
  fi
}
trap cleanup EXIT

# ---------------------------------------------------------------------------
# Prerequisite checks
# ---------------------------------------------------------------------------
check_prerequisites() {
  local missing=false

  if ! command -v mcporter &>/dev/null; then
    log_error "mcporter not found in PATH. Install it and re-run."
    missing=true
  fi

  if ! command -v uv &>/dev/null; then
    log_error "uv not found in PATH. Install it and re-run."
    missing=true
  fi

  if ! command -v python3 &>/dev/null; then
    log_error "python3 not found in PATH."
    missing=true
  fi

  if [[ ! -f "${PROJECT_DIR}/pyproject.toml" ]]; then
    log_error "pyproject.toml not found at ${PROJECT_DIR}. Wrong directory?"
    missing=true
  fi

  if [[ "${missing}" == true ]]; then
    return 2
  fi
}

# ---------------------------------------------------------------------------
# Server startup smoke-test
# Launches the stdio server and calls unraid_health action=check.
# Returns 0 if the server responds (even with an API error — that still
# means the Python process started cleanly), non-zero on import failure.
# ---------------------------------------------------------------------------
smoke_test_server() {
  log_info "Smoke-testing server startup..."

  local output
  output="$(
    mcporter call \
      --stdio "uv run unraid-mcp-server" \
      --cwd "${PROJECT_DIR}" \
      --name "unraid-smoke" \
      --tool unraid_health \
      --args '{"action":"check"}' \
      --timeout 30000 \
      --output json \
      2>&1
  )" || true

  # If mcporter returns the offline error the server failed to import/start
  if printf '%s' "${output}" | grep -q '"kind": "offline"'; then
    log_error "Server failed to start. Output:"
    printf '%s\n' "${output}" >&2
    log_error "Common causes:"
    log_error "  • Missing module: check 'uv run unraid-mcp-server' locally"
    log_error "  • server.py has an import for a file that doesn't exist yet"
    log_error "  • Environment variable UNRAID_API_URL or UNRAID_API_KEY missing"
    return 2
  fi

  # Assert the response contains a valid tool response field, not a bare JSON error.
  # unraid_health action=check always returns {"status": ...} on success.
  local key_check
  key_check="$(
    printf '%s' "${output}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    if 'status' in d or 'success' in d or 'error' in d:
        print('ok')
    else:
        print('missing: no status/success/error key in response')
except Exception as e:
    print('parse_error: ' + str(e))
" 2>/dev/null
  )" || key_check="parse_error"

  if [[ "${key_check}" != "ok" ]]; then
    log_error "Smoke test: unexpected response shape — ${key_check}"
    printf '%s\n' "${output}" >&2
    return 2
  fi

  log_info "Server started successfully (health response received)."
  return 0
}

# ---------------------------------------------------------------------------
# mcporter call wrapper
# Usage: mcporter_call <tool_name> <args_json>
# Writes the mcporter JSON output to stdout.
# Returns the mcporter exit code.
# ---------------------------------------------------------------------------
mcporter_call() {
  local tool_name="${1:?tool_name required}"
  local args_json="${2:?args_json required}"

  mcporter call \
    --stdio "uv run unraid-mcp-server" \
    --cwd "${PROJECT_DIR}" \
    --name "unraid" \
    --tool "${tool_name}" \
    --args "${args_json}" \
    --timeout "${CALL_TIMEOUT_MS}" \
    --output json \
    2>&1
}

# ---------------------------------------------------------------------------
# Test runner
# Usage: run_test <label> <tool_name> <args_json> [expected_key]
#
# expected_key — optional jq-style python key path to validate in the
#                response (e.g. ".status" or ".containers"). If omitted,
#                any non-offline response is a PASS (tool errors from the
#                API — e.g. VMs disabled — are still considered PASS because
#                the tool itself responded correctly).
# ---------------------------------------------------------------------------
run_test() {
  local label="${1:?label required}"
  local tool="${2:?tool required}"
  local args="${3:?args required}"
  local expected_key="${4:-}"

  local t0
  t0="$(date +%s%N)"

  local output
  output="$(mcporter_call "${tool}" "${args}" 2>&1)" || true

  local elapsed_ms
  elapsed_ms="$(( ( $(date +%s%N) - t0 ) / 1000000 ))"

  if [[ "${VERBOSE}" == true ]]; then
    printf '%s\n' "${output}" | tee -a "${LOG_FILE}"
  else
    printf '%s\n' "${output}" >> "${LOG_FILE}"
  fi

  # Detect server-offline (import/startup failure)
  if printf '%s' "${output}" | grep -q '"kind": "offline"'; then
    printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
      "${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
    printf ' server offline — check startup errors in %s\n' "${LOG_FILE}" | tee -a "${LOG_FILE}"
    FAIL_COUNT=$(( FAIL_COUNT + 1 ))
    FAIL_NAMES+=("${label}")
    return 1
  fi

  # Validate optional key presence
  if [[ -n "${expected_key}" ]]; then
    local key_check
    key_check="$(
      printf '%s' "${output}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    keys = '${expected_key}'.split('.')
    node = d
    for k in keys:
        if k:
            node = node[k]
    print('ok')
except Exception as e:
    print('missing: ' + str(e))
" 2>/dev/null
    )" || key_check="parse_error"

    if [[ "${key_check}" != "ok" ]]; then
      printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
        "${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
      printf ' expected key .%s not found: %s\n' "${expected_key}" "${key_check}" | tee -a "${LOG_FILE}"
      FAIL_COUNT=$(( FAIL_COUNT + 1 ))
      FAIL_NAMES+=("${label}")
      return 1
    fi
  fi

  printf "${C_GREEN}[PASS]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
    "${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
  PASS_COUNT=$(( PASS_COUNT + 1 ))
  return 0
}

# ---------------------------------------------------------------------------
# Skip helper — use when a prerequisite (like a list) returned empty
# ---------------------------------------------------------------------------
skip_test() {
  local label="${1:?label required}"
  local reason="${2:-prerequisite returned empty}"
  printf "${C_YELLOW}[SKIP]${C_RESET} %-55s %s\n" "${label}" "${reason}" | tee -a "${LOG_FILE}"
  SKIP_COUNT=$(( SKIP_COUNT + 1 ))
}

# ---------------------------------------------------------------------------
# ID extractors
# Each function calls the relevant list action and prints the first ID.
# Prints nothing (empty string) if the list is empty or the call fails.
# ---------------------------------------------------------------------------

# Extract first docker container ID
get_docker_id() {
  local raw
  raw="$(mcporter_call unraid_docker '{"action":"list"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    containers = d.get('containers', [])
    if containers:
        print(containers[0]['id'])
except Exception:
    pass
" 2>/dev/null || true
}

# Extract first docker network ID
get_network_id() {
  local raw
  raw="$(mcporter_call unraid_docker '{"action":"networks"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    nets = d.get('networks', [])
    if nets:
        print(nets[0]['id'])
except Exception:
    pass
" 2>/dev/null || true
}

# Extract first VM ID
get_vm_id() {
  local raw
  raw="$(mcporter_call unraid_vm '{"action":"list"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    vms = d.get('vms', d.get('domains', []))
    if vms:
        print(vms[0].get('id', vms[0].get('uuid', '')))
except Exception:
    pass
" 2>/dev/null || true
}

# Extract first API key ID
get_key_id() {
  local raw
  raw="$(mcporter_call unraid_keys '{"action":"list"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    keys = d.get('keys', d.get('apiKeys', []))
    if keys:
        print(keys[0].get('id', ''))
except Exception:
    pass
" 2>/dev/null || true
}

# Extract first disk ID
get_disk_id() {
  local raw
  raw="$(mcporter_call unraid_storage '{"action":"disks"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    disks = d.get('disks', [])
    if disks:
        print(disks[0]['id'])
except Exception:
    pass
" 2>/dev/null || true
}

# Extract first log file path
get_log_path() {
  local raw
  raw="$(mcporter_call unraid_storage '{"action":"log_files"}' 2>/dev/null)" || return 0
  printf '%s' "${raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    files = d.get('log_files', [])
    # Prefer a plain text log (not binary like btmp/lastlog)
    for f in files:
        p = f.get('path', '')
        if p.endswith('.log') or 'syslog' in p or 'messages' in p:
            print(p)
            break
    else:
        if files:
            print(files[0]['path'])
except Exception:
    pass
" 2>/dev/null || true
}

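The for/else preference logic in `get_log_path()` (pick the first plain-text log, else fall back to the first entry) can be verified offline; a standalone sketch with a hypothetical file list:

```shell
# Standalone sketch of the plain-text-log preference (hypothetical data; no server).
printf '%s' '{"log_files": [{"path": "/var/log/btmp"}, {"path": "/var/log/syslog"}]}' \
  | python3 -c "
import sys, json
d = json.load(sys.stdin)
files = d.get('log_files', [])
for f in files:
    p = f.get('path', '')
    if p.endswith('.log') or 'syslog' in p or 'messages' in p:
        print(p)        # first text-like log wins
        break
else:
    if files:
        print(files[0]['path'])   # fallback: first file of any kind
"
```

Here `/var/log/btmp` is passed over and `/var/log/syslog` is printed.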
# ---------------------------------------------------------------------------
# Grouped test suites
# ---------------------------------------------------------------------------

suite_unraid_info() {
  printf '\n%b== unraid_info (19 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_info: overview" unraid_info '{"action":"overview"}'
  run_test "unraid_info: array" unraid_info '{"action":"array"}'
  run_test "unraid_info: network" unraid_info '{"action":"network"}'
  run_test "unraid_info: registration" unraid_info '{"action":"registration"}'
  run_test "unraid_info: connect" unraid_info '{"action":"connect"}'
  run_test "unraid_info: variables" unraid_info '{"action":"variables"}'
  run_test "unraid_info: metrics" unraid_info '{"action":"metrics"}'
  run_test "unraid_info: services" unraid_info '{"action":"services"}'
  run_test "unraid_info: display" unraid_info '{"action":"display"}'
  run_test "unraid_info: config" unraid_info '{"action":"config"}'
  run_test "unraid_info: online" unraid_info '{"action":"online"}'
  run_test "unraid_info: owner" unraid_info '{"action":"owner"}'
  run_test "unraid_info: settings" unraid_info '{"action":"settings"}'
  run_test "unraid_info: server" unraid_info '{"action":"server"}'
  run_test "unraid_info: servers" unraid_info '{"action":"servers"}'
  run_test "unraid_info: flash" unraid_info '{"action":"flash"}'
  run_test "unraid_info: ups_devices" unraid_info '{"action":"ups_devices"}'
  # ups_device and ups_config require a device_id — skip if no UPS devices found
  local ups_raw
  ups_raw="$(mcporter_call unraid_info '{"action":"ups_devices"}' 2>/dev/null)" || ups_raw=''
  local ups_id
  ups_id="$(printf '%s' "${ups_raw}" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    devs = d.get('ups_devices', d.get('upsDevices', []))
    if devs:
        print(devs[0].get('id', devs[0].get('name', '')))
except Exception:
    pass
" 2>/dev/null)" || ups_id=''

  if [[ -n "${ups_id}" ]]; then
    run_test "unraid_info: ups_device" unraid_info \
      "$(printf '{"action":"ups_device","device_id":"%s"}' "${ups_id}")"
    run_test "unraid_info: ups_config" unraid_info \
      "$(printf '{"action":"ups_config","device_id":"%s"}' "${ups_id}")"
  else
    skip_test "unraid_info: ups_device" "no UPS devices found"
    skip_test "unraid_info: ups_config" "no UPS devices found"
  fi
}

suite_unraid_array() {
  printf '\n%b== unraid_array (1 read-only action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
  run_test "unraid_array: parity_status" unraid_array '{"action":"parity_status"}'
  # Destructive actions (parity_start/pause/resume/cancel) skipped
}

suite_unraid_storage() {
  printf '\n%b== unraid_storage (6 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_storage: shares" unraid_storage '{"action":"shares"}'
  run_test "unraid_storage: disks" unraid_storage '{"action":"disks"}'
  run_test "unraid_storage: unassigned" unraid_storage '{"action":"unassigned"}'
  run_test "unraid_storage: log_files" unraid_storage '{"action":"log_files"}'

  # disk_details needs a disk ID
  local disk_id
  disk_id="$(get_disk_id)" || disk_id=''
  if [[ -n "${disk_id}" ]]; then
    run_test "unraid_storage: disk_details" unraid_storage \
      "$(printf '{"action":"disk_details","disk_id":"%s"}' "${disk_id}")"
  else
    skip_test "unraid_storage: disk_details" "no disks found"
  fi

  # logs needs a valid log path
  local log_path
  log_path="$(get_log_path)" || log_path=''
  if [[ -n "${log_path}" ]]; then
    run_test "unraid_storage: logs" unraid_storage \
      "$(printf '{"action":"logs","log_path":"%s","tail_lines":20}' "${log_path}")"
  else
    skip_test "unraid_storage: logs" "no log files found"
  fi
}

suite_unraid_docker() {
  printf '\n%b== unraid_docker (7 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_docker: list" unraid_docker '{"action":"list"}'
  run_test "unraid_docker: networks" unraid_docker '{"action":"networks"}'
  run_test "unraid_docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
  run_test "unraid_docker: check_updates" unraid_docker '{"action":"check_updates"}'

  # details, logs, network_details need IDs
  local container_id
  container_id="$(get_docker_id)" || container_id=''
  if [[ -n "${container_id}" ]]; then
    run_test "unraid_docker: details" unraid_docker \
      "$(printf '{"action":"details","container_id":"%s"}' "${container_id}")"
    run_test "unraid_docker: logs" unraid_docker \
      "$(printf '{"action":"logs","container_id":"%s","tail_lines":20}' "${container_id}")"
  else
    skip_test "unraid_docker: details" "no containers found"
    skip_test "unraid_docker: logs" "no containers found"
  fi

  local network_id
  network_id="$(get_network_id)" || network_id=''
  if [[ -n "${network_id}" ]]; then
    run_test "unraid_docker: network_details" unraid_docker \
      "$(printf '{"action":"network_details","network_id":"%s"}' "${network_id}")"
  else
    skip_test "unraid_docker: network_details" "no networks found"
  fi

  # Destructive actions (start/stop/restart/pause/unpause/remove/update/update_all) skipped
}

suite_unraid_vm() {
  printf '\n%b== unraid_vm (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_vm: list" unraid_vm '{"action":"list"}'

  local vm_id
  vm_id="$(get_vm_id)" || vm_id=''
  if [[ -n "${vm_id}" ]]; then
    run_test "unraid_vm: details" unraid_vm \
      "$(printf '{"action":"details","vm_id":"%s"}' "${vm_id}")"
  else
    skip_test "unraid_vm: details" "no VMs found (or VM service unavailable)"
  fi

  # Destructive actions (start/stop/pause/resume/force_stop/reboot/reset) skipped
}

suite_unraid_notifications() {
  printf '\n%b== unraid_notifications (4 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_notifications: overview" unraid_notifications '{"action":"overview"}'
  run_test "unraid_notifications: list" unraid_notifications '{"action":"list"}'
  run_test "unraid_notifications: warnings" unraid_notifications '{"action":"warnings"}'
  run_test "unraid_notifications: unread" unraid_notifications '{"action":"unread"}'

  # Destructive actions (create/archive/delete/delete_archived/archive_all/etc.) skipped
}

suite_unraid_rclone() {
  printf '\n%b== unraid_rclone (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
  # config_form requires a provider_type — use "s3" as a safe, always-available provider
  run_test "unraid_rclone: config_form" unraid_rclone '{"action":"config_form","provider_type":"s3"}'

  # Destructive actions (create_remote/delete_remote) skipped
}

suite_unraid_users() {
  printf '\n%b== unraid_users (1 action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
  run_test "unraid_users: me" unraid_users '{"action":"me"}'
}

suite_unraid_keys() {
  printf '\n%b== unraid_keys (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_keys: list" unraid_keys '{"action":"list"}'

  local key_id
  key_id="$(get_key_id)" || key_id=''
  if [[ -n "${key_id}" ]]; then
    run_test "unraid_keys: get" unraid_keys \
      "$(printf '{"action":"get","key_id":"%s"}' "${key_id}")"
  else
    skip_test "unraid_keys: get" "no API keys found"
  fi

  # Destructive actions (create/update/delete) skipped
}

suite_unraid_health() {
  printf '\n%b== unraid_health (3 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"

  run_test "unraid_health: check" unraid_health '{"action":"check"}'
  run_test "unraid_health: test_connection" unraid_health '{"action":"test_connection"}'
  run_test "unraid_health: diagnose" unraid_health '{"action":"diagnose"}'
|
||||
}

# ---------------------------------------------------------------------------
# Print final summary
# ---------------------------------------------------------------------------
print_summary() {
  local total_ms="$(( ( $(date +%s%N) - TS_START ) / 1000000 ))"
  local total=$(( PASS_COUNT + FAIL_COUNT + SKIP_COUNT ))

  printf '\n%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
  printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "PASS" "${C_RESET}" "${C_GREEN}" "${PASS_COUNT}" "${C_RESET}"
  printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "FAIL" "${C_RESET}" "${C_RED}" "${FAIL_COUNT}" "${C_RESET}"
  printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "SKIP" "${C_RESET}" "${C_YELLOW}" "${SKIP_COUNT}" "${C_RESET}"
  printf '%b%-20s%b %d\n' "${C_BOLD}" "TOTAL" "${C_RESET}" "${total}"
  printf '%b%-20s%b %ds (%dms)\n' "${C_BOLD}" "ELAPSED" "${C_RESET}" \
    "$(( total_ms / 1000 ))" "${total_ms}"
  printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"

  if [[ "${FAIL_COUNT}" -gt 0 ]]; then
    printf '\n%bFailed tests:%b\n' "${C_RED}" "${C_RESET}"
    local name
    for name in "${FAIL_NAMES[@]}"; do
      printf '  • %s\n' "${name}"
    done
    printf '\nFull log: %s\n' "${LOG_FILE}"
  fi
}

# ---------------------------------------------------------------------------
# Parallel runner — wraps each suite in a background subshell and waits
# ---------------------------------------------------------------------------
run_parallel() {
  # Each suite is independent (only cross-suite dependency: IDs are fetched
  # fresh inside each suite function, not shared across suites).
  # Counter updates from subshells won't propagate to the parent — collect
  # results via temp files instead.
  log_warn "--parallel mode: per-suite counters aggregated via temp files."

  local tmp_dir
  tmp_dir="$(mktemp -d)"
  trap 'rm -rf -- "${tmp_dir}"' RETURN

  local suites=(
    suite_unraid_info
    suite_unraid_array
    suite_unraid_storage
    suite_unraid_docker
    suite_unraid_vm
    suite_unraid_notifications
    suite_unraid_rclone
    suite_unraid_users
    suite_unraid_keys
    suite_unraid_health
  )

  local pids=()
  local suite
  for suite in "${suites[@]}"; do
    (
      # Reset counters in the subshell
      PASS_COUNT=0; FAIL_COUNT=0; SKIP_COUNT=0; FAIL_NAMES=()
      "${suite}"
      printf '%d %d %d\n' "${PASS_COUNT}" "${FAIL_COUNT}" "${SKIP_COUNT}" \
        > "${tmp_dir}/${suite}.counts"
      printf '%s\n' "${FAIL_NAMES[@]:-}" > "${tmp_dir}/${suite}.fails"
    ) &
    pids+=($!)
  done

  # Wait for all background suites
  local pid
  for pid in "${pids[@]}"; do
    wait "${pid}" || true
  done

  # Aggregate counters
  local f
  for f in "${tmp_dir}"/*.counts; do
    [[ -f "${f}" ]] || continue
    local p fl s
    read -r p fl s < "${f}"
    PASS_COUNT=$(( PASS_COUNT + p ))
    FAIL_COUNT=$(( FAIL_COUNT + fl ))
    SKIP_COUNT=$(( SKIP_COUNT + s ))
  done

  local line
  for f in "${tmp_dir}"/*.fails; do
    [[ -f "${f}" ]] || continue
    while IFS= read -r line; do
      [[ -n "${line}" ]] && FAIL_NAMES+=("${line}")
    done < "${f}"
  done
}

# ---------------------------------------------------------------------------
# Sequential runner
# ---------------------------------------------------------------------------
run_sequential() {
  suite_unraid_info
  suite_unraid_array
  suite_unraid_storage
  suite_unraid_docker
  suite_unraid_vm
  suite_unraid_notifications
  suite_unraid_rclone
  suite_unraid_users
  suite_unraid_keys
  suite_unraid_health
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
main() {
  parse_args "$@"

  printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
  printf '%b unraid-mcp integration smoke-test%b\n' "${C_BOLD}" "${C_RESET}"
  printf '%b Project:  %s%b\n' "${C_BOLD}" "${PROJECT_DIR}" "${C_RESET}"
  printf '%b Timeout:  %dms/call | Parallel: %s%b\n' \
    "${C_BOLD}" "${CALL_TIMEOUT_MS}" "${USE_PARALLEL}" "${C_RESET}"
  printf '%b Log:      %s%b\n' "${C_BOLD}" "${LOG_FILE}" "${C_RESET}"
  printf '%b%s%b\n\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"

  # Prerequisite gate
  check_prerequisites || exit 2

  # Server startup gate — fail fast if the Python process can't start
  smoke_test_server || {
    log_error ""
    log_error "Server startup failed. Aborting — no tests will run."
    log_error ""
    log_error "To diagnose, run:"
    log_error "  cd ${PROJECT_DIR} && uv run unraid-mcp-server"
    log_error ""
    log_error "If server.py has a broken import (e.g. missing tools/settings.py),"
    log_error "stash or revert the uncommitted server.py change first:"
    log_error "  git stash -- unraid_mcp/server.py"
    log_error "  ./tests/mcporter/test-tools.sh"
    log_error "  git stash pop"
    exit 2
  }

  if [[ "${USE_PARALLEL}" == true ]]; then
    run_parallel
  else
    run_sequential
  fi

  print_summary

  if [[ "${FAIL_COUNT}" -gt 0 ]]; then
    exit 1
  fi
  exit 0
}

main "$@"