26 Commits

Author SHA1 Message Date
Jacob Magar
207b68cd8c chore: initialize beads + lavra project config (v1.1.5)
- bd init: Dolt-backed issue tracker, prefix unraid-mcp-<hash>
- .lavra/config/project-setup.md: python stack, 4 review agents
- .lavra/config/codebase-profile.md: stack/arch/conventions profile
- .gitignore: add lavra session-state and beads entries
- CLAUDE.md: beads workflow integration block

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-27 21:30:12 -04:00
Jacob Magar
1c8a81786c bd init: initialize beads issue tracking 2026-03-27 20:49:51 -04:00
Jacob Magar
abb71f17ff chore: rebrand plugin to unRAID, expand description with full action reference (v1.1.4)
- Add displayName "unRAID" to plugin.json and marketplace.json
- Expand description to list all 15 action domains and 107 subactions
- Mark destructive subactions with * in description
- Bump version 1.1.3 → 1.1.4

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-25 00:18:16 -04:00
Jacob Magar
87cd5e5193 docs: expand plugin README with full tool/action/subaction reference 2026-03-24 23:35:54 -04:00
Jacob Magar
2f9decdac9 chore: sync plugin description and marketplace version to 1.1.3 2026-03-24 23:33:48 -04:00
Jacob Magar
8a43b2535a feat: Google OAuth, API key auth, consolidated tool, docs overhaul (v1.1.3)
Merges feat/google-oauth into main.

Highlights:
- Consolidated 15 individual tools into single `unraid` tool with action/subaction routing
- Added API key bearer token authentication for HTTP transport
- Added Google OAuth via FastMCP GoogleProvider (subsequently removed in favor of external gateway pattern)
- Comprehensive security hardening: path traversal fixes, timing-safe auth, traceback leak prevention
- CI pipeline: lint, typecheck, test, version-sync, audit
- CHANGELOG.md, AUTHENTICATION.md, full PR review comment resolution
- Version: 1.0.0 → 1.1.3 across this branch

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-24 23:26:52 -04:00
Jacob Magar
3c6b59b763 chore: bump version to 1.1.3, update CHANGELOG
Version: 1.1.2 → 1.1.3 (patch)

Changelog entry documents 11 documentation accuracy fixes and 1 test
improvement addressed in 183db70 (PR review comments).

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-24 23:26:37 -04:00
Jacob Magar
183db70d97 fix: address 17 remaining PR review comments
Resolves review threads:
- PRRT_kwDOO6Hdxs50mcYz: oidc/validate_session now documents required `token`
- PRRT_kwDOO6Hdxs50mcY8: setting/update corrected to require `settings_input`
- PRRT_kwDOO6Hdxs50mcZE: rclone/create_remote corrected to `provider_type`+`config_data`
- PRRT_kwDOO6Hdxs50mcZL: disk/logs corrected to `log_path`+`tail_lines`
- PRRT_kwDOO6Hdxs50mcZe: parity_progress added to event-driven subscriptions list
- PRRT_kwDOO6Hdxs50mcZh: log_tail README example now includes required `path`
- PRRT_kwDOO6Hdxs50mcaR: parity_start quick-reference now includes required `correct=False`
- PRRT_kwDOO6Hdxs50mcaq: array_state documented as "may show" not "will always show"
- PRRT_kwDOO6Hdxs50mnR8: key/create roles is optional; add_role/remove_role use `roles` (plural)
- PRRT_kwDOO6Hdxs50mnRd: endpoints.md heading moved before blockquote (MD041)
- PRRT_kwDOO6Hdxs50mnTB: test_resources.py uses _get_resource() helper instead of raw internals
- PRRT_kwDOO6Hdxs50mYkZ: N/A — _build_google_auth removed in prior refactor commit
- PRRT_kwDOO6Hdxs50mnQf: N/A — plugin.json already at 1.1.2, matches pyproject.toml
- PRRT_kwDOO6Hdxs50mnQ7: N/A — blank line already present in CLAUDE.md
- PRRT_kwDOO6Hdxs50mnRD: N/A — fastmcp.http.json removed in prior refactor commit
- PRRT_kwDOO6Hdxs50mnRH: N/A — blank line already present in README.md
- PRRT_kwDOO6Hdxs50mnSW: N/A — test_auth_builder.py removed in prior refactor commit
2026-03-24 22:50:40 -04:00
Jacob Magar
e548f6e6c9 refactor: remove Docker and HTTP transport support, fix hypothesis cache directory 2026-03-24 19:22:27 -04:00
Jacob Magar
e68d4a80e4 refactor: simplify path validation and connection_init via shared helpers
- Extract _validate_path() in unraid.py — consolidates traversal check + normpath
  + prefix validation used by disk/logs and live/log_tail into one place
- Extract build_connection_init() in subscriptions/utils.py — removes 4 duplicate
  connection_init payload blocks from snapshot.py (×2), manager.py, diagnostics.py;
  also fixes diagnostics.py bug where x-api-key: None was sent when no key configured
- Remove _LIVE_ALLOWED_LOG_PREFIXES alias — direct reference to _ALLOWED_LOG_PREFIXES
- Move import hmac to module level in server.py (was inside verify_token hot path)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-23 11:57:00 -04:00
Jacob Magar
dc1e5f18d8 docs: add CHANGELOG.md covering v0.1.0 through v1.1.2
Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-23 11:38:27 -04:00
Jacob Magar
2b777be927 fix(security): path traversal, timing-safe auth, stale credential bindings
Security:
- Remove /mnt/ from _ALLOWED_LOG_PREFIXES to prevent Unraid share exposure
- Add early .. detection for disk/logs and live/log_tail path validation
- Add /boot/ prefix restriction for flash_backup source_path
- Use hmac.compare_digest for timing-safe API key verification in server.py
- Gate include_traceback on DEBUG log level (no tracebacks in production)

Correctness:
- Re-raise CredentialsNotConfiguredError in health check instead of swallowing
- Fix ups_device query (remove non-existent nominalPower/currentPower fields)

Best practices (BP-01, BP-05, BP-06):
- Add # noqa: ASYNC109 to timeout params in _handle_live and unraid()
- Fix start_array* → start_array in docstring (not in ARRAY_DESTRUCTIVE)
- Remove from __future__ import annotations from snapshot.py
- Replace import-time UNRAID_API_KEY/URL bindings with _settings.ATTR pattern
  in manager.py, snapshot.py, utils.py, diagnostics.py — fixes stale binding
  after apply_runtime_config() post-elicitation (BP-05)

CI/CD:
- Add .github/workflows/ci.yml (5-job pipeline: lint, typecheck, test, version-sync, audit)
- Add fail_under = 80 to [tool.coverage.report]
- Add version sync check to scripts/validate-marketplace.sh

Documentation:
- Sync plugin.json version 1.1.1 → 1.1.2 with pyproject.toml
- Update CLAUDE.md: 3 tools, system domain count 18, scripts comment fix
- Update README.md: 3 tools, security notes
- Update docs/AUTHENTICATION.md: H1 title fix
- Add UNRAID_CREDENTIALS_DIR to .env.example

Bump: 1.1.1 → 1.1.2

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-23 11:37:05 -04:00
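The timing-safe check is essentially one call to `hmac.compare_digest`; a minimal sketch (function name assumed here, the real check lives in server.py's token verification):

```python
import hmac

def verify_api_key(provided: str, expected: str) -> bool:
    """Constant-time API key comparison.

    A plain == short-circuits on the first differing byte, leaking key
    length/prefix information through response timing; compare_digest
    always examines the full input. An empty expected key is rejected
    outright, so an unset key can never match an empty bearer token.
    """
    if not expected:
        return False
    return hmac.compare_digest(provided.encode(), expected.encode())
```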
Jacob Magar
d59f8c22a8 docs: rename GOOGLE_OAUTH.md → AUTHENTICATION.md, update references 2026-03-16 11:12:25 -04:00
Jacob Magar
cc24f1ec62 feat: add API key bearer token authentication
- ApiKeyVerifier(TokenVerifier) — validates Authorization: Bearer <key>
  against UNRAID_MCP_API_KEY; guards against empty-key bypass
- _build_auth() replaces module-level _build_google_auth() call:
  returns MultiAuth(server=google, verifiers=[api_key]) when both set,
  GoogleProvider alone, ApiKeyVerifier alone, or None
- settings.py: add UNRAID_MCP_API_KEY + is_api_key_auth_configured()
  + api_key_auth_enabled in get_config_summary()
- run_server(): improved auth status logging for all three states
- tests/test_api_key_auth.py: 9 tests covering verifier + _build_auth
- .env.example: add UNRAID_MCP_API_KEY section
- docs/GOOGLE_OAUTH.md: add API Key section
- README.md / CLAUDE.md: rename section, document both auth methods
- Fix pre-existing: test_health.py patched cache_middleware/error_middleware
  now match renamed _cache_middleware/_error_middleware in server.py
2026-03-16 11:11:38 -04:00
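The selection logic in `_build_auth()` reduces to a four-way decision. A sketch with a stand-in `MultiAuth` (the real code wires FastMCP's `GoogleProvider`, `ApiKeyVerifier`, and `MultiAuth`; the stub here only illustrates the shape):

```python
from dataclasses import dataclass, field

@dataclass
class MultiAuth:  # stand-in for FastMCP's MultiAuth (shape assumed)
    server: object
    verifiers: list = field(default_factory=list)

def build_auth(google_provider=None, api_key_verifier=None):
    """Return the auth configuration based on what is configured:
    both set -> MultiAuth wrapping both; one set -> that one alone;
    neither -> None (server runs open/unauthenticated).
    """
    if google_provider and api_key_verifier:
        return MultiAuth(server=google_provider, verifiers=[api_key_verifier])
    if google_provider:
        return google_provider
    if api_key_verifier:
        return api_key_verifier
    return None
```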
Jacob Magar
6f7a58a0f9 docs: add Google OAuth setup guide and update README/CLAUDE.md
- Create docs/GOOGLE_OAUTH.md: complete OAuth setup walkthrough
  (Google Cloud Console, env vars, JWT key generation, troubleshooting)
- README.md: add Google OAuth section with quick-setup steps + link
- CLAUDE.md: add JWT key generation tip + link to full guide
2026-03-16 10:59:30 -04:00
Jacob Magar
440245108a fix(tests): clear subscription_configs before auto-start tests to account for default SNAPSHOT_ACTIONS 2026-03-16 10:54:43 -04:00
Jacob Magar
9754261402 fix(auth): use setenv('') instead of delenv to prevent dotenv re-injection in tests 2026-03-16 10:51:14 -04:00
Jacob Magar
9e9915b2fa docs(auth): document Google OAuth setup in CLAUDE.md 2026-03-16 10:48:38 -04:00
Jacob Magar
2ab61be2df feat(auth): wire GoogleProvider into FastMCP, log auth status on startup
- Call _build_google_auth() at module level before mcp = FastMCP(...)
- Pass auth=_google_auth to FastMCP() constructor
- Add startup log in run_server(): INFO when OAuth enabled (with redirect URI), WARNING when open/unauthenticated
- Add test verifying mcp has no auth provider when Google vars are absent (baseline + post-wire)
2026-03-16 10:42:51 -04:00
Jacob Magar
b319cf4932 fix(auth): use dict[str, Any] for kwargs, add typing.Any import 2026-03-16 10:40:29 -04:00
Jacob Magar
0f46cb9713 test(auth): add _build_google_auth() unit tests and bump to v1.1.1
- 4 tests covering unconfigured/configured/no-jwt-key/stdio-warning paths
- Validates GoogleProvider is called with correct kwargs
- Verifies jwt_signing_key is omitted (not passed empty) when unset

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-16 10:37:52 -04:00
Jacob Magar
1248ccd53e fix(auth): hoist _build_google_auth import to module level for ty compatibility 2026-03-16 10:37:36 -04:00
Jacob Magar
4a1ffcfd51 feat(auth): add _build_google_auth() builder with stdio warning
Adds _build_google_auth() to server.py that reads Google OAuth settings
and returns a configured GoogleProvider instance or None when unconfigured.
Includes warning for stdio transport incompatibility and conditional
jwt_signing_key passthrough. 4 new TDD tests in tests/test_auth_builder.py.
2026-03-16 10:36:41 -04:00
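A rough sketch of the builder's control flow (env var names come from the settings commit below; `provider_factory` stands in for FastMCP's `GoogleProvider` so the sketch stays dependency-free):

```python
import os

def build_google_auth(provider_factory=dict):
    """Return a configured provider, or None when Google OAuth is unconfigured.

    jwt_signing_key is passed through only when set: omitted entirely
    rather than passed as an empty string, matching the builder's tests.
    """
    client_id = os.environ.get("GOOGLE_CLIENT_ID", "")
    client_secret = os.environ.get("GOOGLE_CLIENT_SECRET", "")
    if not (client_id and client_secret):
        return None
    kwargs = {"client_id": client_id, "client_secret": client_secret}
    jwt_key = os.environ.get("UNRAID_MCP_JWT_SIGNING_KEY", "")
    if jwt_key:
        kwargs["jwt_signing_key"] = jwt_key
    return provider_factory(**kwargs)
```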
Jacob Magar
f69aa94826 feat(dx): add fastmcp.json configs, module-level tool registration, tool timeout
- Add fastmcp.http.json and fastmcp.stdio.json declarative server configs
  for streamable-http (:6970) and stdio transports respectively
- Move register_all_modules() to module level in server.py so
  `fastmcp run server.py --reload` discovers the fully-wired mcp object
  without going through run_server() — tools registered exactly once
- Add timeout=120 to @mcp.tool() decorator as a global safety net;
  any hung subaction returns a clean MCP error instead of hanging forever
- Document fastmcp run --reload, fastmcp list, fastmcp call in README
- Bump version 1.0.1 → 1.1.0

Co-authored-by: Claude <claude@anthropic.com>
2026-03-16 10:32:16 -04:00
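The global safety net behaves like an `asyncio.wait_for` wrapper. A hedged stand-alone sketch (FastMCP's `timeout=` parameter does the equivalent inside the framework; this only illustrates the failure mode it prevents):

```python
import asyncio

def with_timeout(seconds: float):
    """Decorator: a coroutine that hangs past `seconds` returns a clean
    error payload instead of blocking the MCP client forever."""
    def wrap(fn):
        async def inner(*args, **kwargs):
            try:
                return await asyncio.wait_for(fn(*args, **kwargs), timeout=seconds)
            except asyncio.TimeoutError:
                return {"error": f"tool timed out after {seconds}s"}
        return inner
    return wrap
```

Because there is a single `unraid` tool, one `timeout=120` covers every subaction across all 15 domains.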
Jacob Magar
5187cf730f fix(auth): use Any return type in _reload_settings for ty compatibility 2026-03-16 10:29:56 -04:00
Jacob Magar
896fc8db1b feat(auth): add Google OAuth settings with is_google_auth_configured()
Add GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, UNRAID_MCP_BASE_URL, and
UNRAID_MCP_JWT_SIGNING_KEY env vars to settings.py, along with the
is_google_auth_configured() predicate and three new keys in
get_config_summary(). TDD: 4 tests written red-first, all passing green.
2026-03-16 10:28:53 -04:00
59 changed files with 1619 additions and 1075 deletions

.beads/.gitignore (new vendored file, 72 lines)

@@ -0,0 +1,72 @@
# Dolt database (managed by Dolt, not git)
dolt/
dolt-access.lock
# Runtime files
bd.sock
bd.sock.startlock
sync-state.json
last-touched
.exclusive-lock
# Daemon runtime (lock, log, pid)
daemon.*
# Interactions log (runtime, not versioned)
interactions.jsonl
# Push state (runtime, per-machine)
push-state.json
# Lock files (various runtime locks)
*.lock
# Credential key (encryption key for federation peer auth — never commit)
.beads-credential-key
# Local version tracking (prevents upgrade notification spam after git ops)
.local_version
# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect
# Sync state (local-only, per-machine)
# These files are machine-specific and should not be shared across clones
.sync.lock
export-state/
# Ephemeral store (SQLite - wisps/molecules, intentionally not versioned)
ephemeral.sqlite3
ephemeral.sqlite3-journal
ephemeral.sqlite3-wal
ephemeral.sqlite3-shm
# Dolt server management (auto-started by bd)
dolt-server.pid
dolt-server.log
dolt-server.lock
dolt-server.port
dolt-server.activity
# Corrupt backup directories (created by bd doctor --fix recovery)
*.corrupt.backup/
# Backup data (auto-exported JSONL, local-only)
backup/
# Per-project environment file (Dolt connection config, GH#2520)
.env
# Legacy files (from pre-Dolt versions)
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm
db.sqlite
bd.db
# NOTE: Do NOT add negation patterns here.
# They would override fork protection in .git/info/exclude.
# Config files (metadata.json, config.yaml) are tracked by git by default
# since no pattern above ignores them.

.beads/README.md (new file, 81 lines)

@@ -0,0 +1,81 @@
# Beads - AI-Native Issue Tracking
Welcome to Beads! This repository uses **Beads** for issue tracking - a modern, AI-native tool designed to live directly in your codebase alongside your code.
## What is Beads?
Beads is issue tracking that lives in your repo, making it perfect for AI coding agents and developers who want their issues close to their code. No web UI required - everything works through the CLI and integrates seamlessly with git.
**Learn more:** [github.com/steveyegge/beads](https://github.com/steveyegge/beads)
## Quick Start
### Essential Commands
```bash
# Create new issues
bd create "Add user authentication"
# View all issues
bd list
# View issue details
bd show <issue-id>
# Update issue status
bd update <issue-id> --claim
bd update <issue-id> --status done
# Sync with Dolt remote
bd dolt push
```
### Working with Issues
Issues in Beads are:
- **Git-native**: Stored in Dolt database with version control and branching
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits
## Why Beads?
**AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs
🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way
🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Dolt-native three-way merge resolution
## Get Started with Beads
Try Beads in your own projects:
```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
# Initialize in your repo
bd init
# Create your first issue
bd create "Try out Beads"
```
## Learn More
- **Documentation**: [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**: [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)
---
*Beads: Issue tracking that moves at the speed of thought*

.beads/config.yaml (new file, 54 lines)

@@ -0,0 +1,54 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags
# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""
# Use no-db mode: JSONL-only, no Dolt database
# When true, bd will use .beads/issues.jsonl as the source of truth
# no-db: false
# Enable JSON output by default
# json: false
# Feedback title formatting for mutating commands (create/update/close/dep/edit)
# 0 = hide titles, N > 0 = truncate to N characters
# output:
# title-length: 255
# Default actor for audit trails (overridden by BEADS_ACTOR or --actor)
# actor: ""
# Export events (audit trail) to .beads/events.jsonl on each flush/sync
# When enabled, new events are appended incrementally using a high-water mark.
# Use 'bd export --events' to trigger manually regardless of this setting.
# events-export: false
# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct database
# repos:
# primary: "." # Primary repo (where this database lives)
# additional: # Additional repos to hydrate from (read-only)
# - ~/beads-planning # Personal planning repo
# - ~/work-planning # Work planning repo
# JSONL backup (periodic export for off-machine recovery)
# Auto-enabled when a git remote exists. Override explicitly:
# backup:
# enabled: false # Disable auto-backup entirely
# interval: 15m # Minimum time between auto-exports
# git-push: false # Disable git push (export locally only)
# git-repo: "" # Separate git repo for backups (default: project repo)
# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo

.beads/hooks/post-checkout (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# --- BEGIN BEADS INTEGRATION v0.62.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run post-checkout "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'post-checkout' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run post-checkout "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'post-checkout'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v0.62.0 ---

.beads/hooks/post-merge (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# --- BEGIN BEADS INTEGRATION v0.62.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run post-merge "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'post-merge' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run post-merge "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'post-merge'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v0.62.0 ---

.beads/hooks/pre-commit (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# --- BEGIN BEADS INTEGRATION v0.62.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run pre-commit "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'pre-commit' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run pre-commit "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'pre-commit'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v0.62.0 ---

.beads/hooks/pre-push (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# --- BEGIN BEADS INTEGRATION v0.62.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run pre-push "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'pre-push' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run pre-push "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'pre-push'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v0.62.0 ---

.beads/hooks/prepare-commit-msg (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# --- BEGIN BEADS INTEGRATION v0.62.0 ---
# This section is managed by beads. Do not remove these markers.
if command -v bd >/dev/null 2>&1; then
  export BD_GIT_HOOK=1
  _bd_timeout=${BEADS_HOOK_TIMEOUT:-300}
  if command -v timeout >/dev/null 2>&1; then
    timeout "$_bd_timeout" bd hooks run prepare-commit-msg "$@"
    _bd_exit=$?
    if [ $_bd_exit -eq 124 ]; then
      echo >&2 "beads: hook 'prepare-commit-msg' timed out after ${_bd_timeout}s — continuing without beads"
      _bd_exit=0
    fi
  else
    bd hooks run prepare-commit-msg "$@"
    _bd_exit=$?
  fi
  if [ $_bd_exit -eq 3 ]; then
    echo >&2 "beads: database not initialized — skipping hook 'prepare-commit-msg'"
    _bd_exit=0
  fi
  if [ $_bd_exit -ne 0 ]; then exit $_bd_exit; fi
fi
# --- END BEADS INTEGRATION v0.62.0 ---

.beads/metadata.json (new file, 7 lines)

@@ -0,0 +1,7 @@
{
"database": "dolt",
"backend": "dolt",
"dolt_mode": "server",
"dolt_database": "unraid_mcp",
"project_id": "5bb31674-18f6-4968-8fd9-71a7ceceaa48"
}


@@ -1,73 +1,239 @@
# Unraid MCP Marketplace
# Unraid MCP Plugin
This directory contains the Claude Code marketplace configuration for the Unraid MCP server and skills.
Query, monitor, and manage Unraid servers via GraphQL API using a single consolidated `unraid` tool with action+subaction routing.
**Version:** 1.1.3 | **Category:** Infrastructure | **Tags:** unraid, homelab, graphql, docker, virtualization
---
## Installation
### From GitHub (Recommended)
```bash
# Add the marketplace
/plugin marketplace add jmagar/unraid-mcp
# Install the Unraid skill
/plugin install unraid @unraid-mcp
```
### From Local Path (Development)
```bash
# Add local marketplace
/plugin marketplace add /path/to/unraid-mcp
# Install the plugin
/plugin install unraid @unraid-mcp
```
## Available Plugins
### unraid
Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring.
**Features:**
- 1 consolidated `unraid` tool with ~108 actions across 15 domains
- Real-time live subscriptions (CPU, memory, logs, array state, UPS)
- Disk health and temperature monitoring
- Docker container management
- VM status and control
- Log file access
- Network share information
- Notification management
- Plugin, rclone, API key, and OIDC management
**Version:** 1.0.0
**Category:** Infrastructure
**Tags:** unraid, monitoring, homelab, graphql, docker, virtualization
## Configuration
After installation, run setup to configure credentials interactively:
After install, configure credentials:
```python
unraid(action="health", subaction="setup")
```
Credentials are stored at `~/.unraid-mcp/.env` automatically.
Credentials are stored at `~/.unraid-mcp/.env`. Get an API key from **Unraid WebUI → Settings → Management Access → API Keys**.
**Getting an API Key:**
1. Open Unraid WebUI
2. Go to Settings → Management Access → API Keys
3. Click "Create" and select "Viewer" role (or appropriate roles for mutations)
4. Copy the generated API key
---
## Documentation
## Tools
- **Plugin Documentation:** See `skills/unraid/README.md`
- **MCP Server Documentation:** See root `README.md`
- **API Reference:** See `skills/unraid/references/`
### `unraid` — Primary Tool (107 subactions, 15 domains)
Call as `unraid(action="<domain>", subaction="<operation>", [params])`.
#### `system` — Server Information (18 subactions)
| Subaction | Description |
|-----------|-------------|
| `overview` | Complete system summary (recommended starting point) |
| `server` | Hostname, version, uptime |
| `servers` | All known Unraid servers |
| `array` | Array status and disk list |
| `network` | Network interfaces and config |
| `registration` | License and registration status |
| `variables` | Environment variables |
| `metrics` | Real-time CPU, memory, I/O usage |
| `services` | Running services status |
| `display` | Display settings |
| `config` | System configuration |
| `online` | Quick online status check |
| `owner` | Server owner information |
| `settings` | User settings and preferences |
| `flash` | USB flash drive details |
| `ups_devices` | List all UPS devices |
| `ups_device` | Single UPS device (requires `device_id`) |
| `ups_config` | UPS configuration |
#### `health` — Diagnostics (4 subactions)
| Subaction | Description |
|-----------|-------------|
| `check` | Comprehensive health check — connectivity, array, disks, containers, VMs, resources |
| `test_connection` | Test API connectivity and authentication |
| `diagnose` | Detailed diagnostic report with troubleshooting recommendations |
| `setup` | Configure credentials interactively (stores to `~/.unraid-mcp/.env`) |
#### `array` — Array & Parity (13 subactions)
| Subaction | Description |
|-----------|-------------|
| `parity_status` | Current parity check progress and status |
| `parity_history` | Historical parity check results |
| `parity_start` | Start a parity check (requires `correct`) |
| `parity_pause` | Pause a running parity check |
| `parity_resume` | Resume a paused parity check |
| `parity_cancel` | Cancel a running parity check |
| `start_array` | Start the array |
| `stop_array` | ⚠️ Stop the array (requires `confirm=True`) |
| `add_disk` | Add a disk to the array (requires `slot`, `id`) |
| `remove_disk` | ⚠️ Remove a disk (requires `slot`, `confirm=True`) |
| `mount_disk` | Mount a disk |
| `unmount_disk` | Unmount a disk |
| `clear_disk_stats` | ⚠️ Clear disk statistics (requires `confirm=True`) |
#### `disk` — Storage & Logs (6 subactions)
| Subaction | Description |
|-----------|-------------|
| `shares` | List network shares |
| `disks` | All physical disks with health and temperatures |
| `disk_details` | Detailed info for a specific disk (requires `disk_id`) |
| `log_files` | List available log files |
| `logs` | Read log content (requires `log_path`; optional `tail_lines`) |
| `flash_backup` | ⚠️ Trigger a flash backup (requires `confirm=True`) |
#### `docker` — Containers (7 subactions)
| Subaction | Description |
|-----------|-------------|
| `list` | All containers with status, image, state |
| `details` | Single container details (requires container identifier) |
| `start` | Start a container (requires container identifier) |
| `stop` | Stop a container (requires container identifier) |
| `restart` | Restart a container (requires container identifier) |
| `networks` | List Docker networks |
| `network_details` | Details for a specific network (requires `network_id`) |
Container identification: name, ID, or partial name (fuzzy match).
#### `vm` — Virtual Machines (9 subactions)
| Subaction | Description |
|-----------|-------------|
| `list` | All VMs with state |
| `details` | Single VM details (requires `vm_id`) |
| `start` | Start a VM (requires `vm_id`) |
| `stop` | Gracefully stop a VM (requires `vm_id`) |
| `pause` | Pause a VM (requires `vm_id`) |
| `resume` | Resume a paused VM (requires `vm_id`) |
| `reboot` | Reboot a VM (requires `vm_id`) |
| `force_stop` | ⚠️ Force stop a VM (requires `vm_id`, `confirm=True`) |
| `reset` | ⚠️ Hard reset a VM (requires `vm_id`, `confirm=True`) |
#### `notification` — Notifications (12 subactions)
| Subaction | Description |
|-----------|-------------|
| `overview` | Notification counts (unread, archived by type) |
| `list` | List notifications (optional `filter`, `limit`, `offset`) |
| `create` | Create a notification (requires `title`, `subject`, `description`, `importance`) |
| `archive` | Archive a notification (requires `notification_id`) |
| `mark_unread` | Mark a notification as unread (requires `notification_id`) |
| `recalculate` | Recalculate notification counts |
| `archive_all` | Archive all unread notifications |
| `archive_many` | Archive multiple (requires `ids` list) |
| `unarchive_many` | Unarchive multiple (requires `ids` list) |
| `unarchive_all` | Unarchive all archived notifications |
| `delete` | ⚠️ Delete a notification (requires `notification_id`, `notification_type`, `confirm=True`) |
| `delete_archived` | ⚠️ Delete all archived (requires `confirm=True`) |
#### `key` — API Keys (7 subactions)
| Subaction | Description |
|-----------|-------------|
| `list` | All API keys |
| `get` | Single key details (requires `key_id`) |
| `create` | Create a new key (requires `name`; optional `roles`, `permissions`) |
| `update` | Update a key (requires `key_id`) |
| `delete` | ⚠️ Delete a key (requires `key_id`, `confirm=True`) |
| `add_role` | Add roles to a key (requires `key_id`, `roles`) |
| `remove_role` | Remove roles from a key (requires `key_id`, `roles`) |
#### `plugin` — Plugins (3 subactions)
| Subaction | Description |
|-----------|-------------|
| `list` | All installed plugins |
| `add` | Install plugins (requires `names` list) |
| `remove` | ⚠️ Uninstall plugins (requires `names` list, `confirm=True`) |
#### `rclone` — Cloud Storage (4 subactions)
| Subaction | Description |
|-----------|-------------|
| `list_remotes` | List configured rclone remotes |
| `config_form` | Get configuration form for a remote type |
| `create_remote` | Create a new remote (requires `name`, `provider_type`, `config_data`) |
| `delete_remote` | ⚠️ Delete a remote (requires `name`, `confirm=True`) |
#### `setting` — System Settings (2 subactions)
| Subaction | Description |
|-----------|-------------|
| `update` | Update system settings (requires `settings_input` object) |
| `configure_ups` | ⚠️ Configure UPS settings (requires `confirm=True`) |
#### `customization` — Theme & Appearance (5 subactions)
| Subaction | Description |
|-----------|-------------|
| `theme` | Current theme settings |
| `public_theme` | Public-facing theme |
| `is_initial_setup` | Check if initial setup is complete |
| `sso_enabled` | Check SSO status |
| `set_theme` | Update theme (requires theme parameters) |
#### `oidc` — SSO / OpenID Connect (5 subactions)
| Subaction | Description |
|-----------|-------------|
| `providers` | List configured OIDC providers |
| `provider` | Single provider details (requires `provider_id`) |
| `configuration` | OIDC configuration |
| `public_providers` | Public-facing provider list |
| `validate_session` | Validate current SSO session (requires `token`) |
#### `user` — Current User (1 subaction)
| Subaction | Description |
|-----------|-------------|
| `me` | Current authenticated user info |
#### `live` — Real-Time Subscriptions (11 subactions)
Persistent WebSocket connections. Returns `{"status": "connecting"}` on first call — retry momentarily.
| Subaction | Description |
|-----------|-------------|
| `cpu` | Live CPU utilization |
| `memory` | Live memory usage |
| `cpu_telemetry` | Detailed CPU telemetry |
| `array_state` | Live array state changes |
| `parity_progress` | Live parity check progress |
| `ups_status` | Live UPS status |
| `notifications_overview` | Live notification counts |
| `owner` | Live owner info |
| `server_status` | Live server status |
| `log_tail` | Live log tail stream (requires `path`) |
| `notification_feed` | Live notification feed |
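Since `live` subactions return `{"status": "connecting"}` while the WebSocket warms up, callers typically poll until data arrives. A minimal client-side sketch (the helper name and retry policy are assumptions, not part of the tool; `call` is any zero-arg wrapper around a `live` subaction):

```python
import time

def poll_until_connected(call, attempts: int = 5, delay: float = 0.5) -> dict:
    """Invoke a live subaction wrapper until it stops reporting
    "connecting", or return the last result after `attempts` tries."""
    result: dict = {}
    for _ in range(attempts):
        result = call()
        if result.get("status") != "connecting":
            return result
        time.sleep(delay)
    return result
```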
---
### `diagnose_subscriptions` — Subscription Diagnostics
Inspect WebSocket subscription connection states, errors, and URLs. No parameters required.
---
### `test_subscription_query` — Subscription Query Tester
Test a specific GraphQL subscription query against the live Unraid API. Uses an allowlisted set of safe fields only.
---
## Destructive Actions
All require `confirm=True`. Without it, the action is blocked.
| Domain | Subaction |
|--------|-----------|
| `array` | `stop_array`, `remove_disk`, `clear_disk_stats` |
| `vm` | `force_stop`, `reset` |
| `notification` | `delete`, `delete_archived` |
| `rclone` | `delete_remote` |
| `key` | `delete` |
| `disk` | `flash_backup` |
| `setting` | `configure_ups` |
| `plugin` | `remove` |
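The gate itself reduces to a membership check that runs before any GraphQL I/O. A sketch of the pattern (the real implementation is `gate_destructive_action` in `core/guards.py` and also supports MCP elicitation; the pairs below are an illustrative subset):

```python
# Illustrative subset of the (action, subaction) pairs gated as destructive.
_DESTRUCTIVE = frozenset({
    ("array", "stop_array"),
    ("vm", "force_stop"),
    ("key", "delete"),
})

class DestructiveActionBlocked(Exception):
    """Raised before any GraphQL request is issued."""

def gate_destructive(action: str, subaction: str, confirm: bool) -> None:
    """Block destructive calls unless the caller passed confirm=True."""
    if (action, subaction) in _DESTRUCTIVE and not confirm:
        raise DestructiveActionBlocked(
            f"{action}/{subaction} is destructive; pass confirm=True"
        )
```

Blocking before dispatch is what gives the zero-I/O guarantee the safety suite asserts: an unconfirmed destructive call never builds or sends a query.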
---
## Support
- **Issues:** https://github.com/jmagar/unraid-mcp/issues
- **Repository:** https://github.com/jmagar/unraid-mcp
- **Skill docs:** `skills/unraid/SKILL.md`
- **API reference:** `skills/unraid/references/`

.claude-plugin/marketplace.json

```diff
@@ -1,21 +1,22 @@
 {
-  "name": "jmagar-unraid-mcp",
+  "name": "unraid-mcp",
   "owner": {
     "name": "jmagar",
     "email": "jmagar@users.noreply.github.com"
   },
   "metadata": {
-    "description": "Comprehensive Unraid server management and monitoring via a single consolidated MCP tool (~108 actions across 15 domains)",
-    "version": "1.0.0",
+    "description": "Unraid server management via 3 MCP tools: `unraid` (107 subactions across 15 domains), `diagnose_subscriptions`, and `test_subscription_query`",
+    "version": "1.1.4",
     "homepage": "https://github.com/jmagar/unraid-mcp",
     "repository": "https://github.com/jmagar/unraid-mcp"
   },
   "plugins": [
     {
       "name": "unraid",
+      "displayName": "unRAID",
       "source": "./",
-      "description": "Query and monitor Unraid servers via GraphQL API — single `unraid` tool with action+subaction routing for array, disk, docker, VM, notifications, live metrics, and more",
-      "version": "1.0.0",
+      "description": "Query, monitor, and manage Unraid servers via GraphQL API.\n\nTools: `unraid` (primary), `diagnose_subscriptions`, `test_subscription_query`\n\nActions + subactions:\n• system: overview, array, network, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config\n• health: check, test_connection, diagnose, setup\n• array: parity_status, parity_history, parity_start, parity_pause, parity_resume, parity_cancel, start_array, stop_array*, add_disk, remove_disk*, mount_disk, unmount_disk, clear_disk_stats*\n• disk: shares, disks, disk_details, log_files, logs, flash_backup*\n• docker: list, details, start, stop, restart, networks, network_details\n• vm: list, details, start, stop, pause, resume, force_stop*, reboot, reset*\n• notification: overview, list, create, archive, mark_unread, recalculate, archive_all, archive_many, unarchive_many, unarchive_all, delete*, delete_archived*\n• key: list, get, create, update, delete*, add_role, remove_role\n• plugin: list, add, remove*\n• rclone: list_remotes, config_form, create_remote, delete_remote*\n• setting: update, configure_ups*\n• customization: theme, public_theme, is_initial_setup, sso_enabled, set_theme\n• oidc: providers, provider, configuration, public_providers, validate_session\n• user: me\n• live: cpu, memory, cpu_telemetry, array_state, parity_progress, ups_status, notifications_overview, notification_feed, log_tail, owner, server_status\n\n* = destructive, requires confirm=True",
+      "version": "1.1.4",
       "tags": ["unraid", "monitoring", "homelab", "graphql", "docker", "virtualization"],
       "category": "infrastructure"
     }
```

.claude-plugin/plugin.json

```diff
@@ -1,7 +1,8 @@
 {
   "name": "unraid",
-  "description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring",
-  "version": "1.0.1",
+  "displayName": "unRAID",
+  "description": "Query, monitor, and manage Unraid servers via GraphQL API.\n\nTools: `unraid` (primary), `diagnose_subscriptions`, `test_subscription_query`\n\nActions + subactions:\n• system: overview, array, network, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config\n• health: check, test_connection, diagnose, setup\n• array: parity_status, parity_history, parity_start, parity_pause, parity_resume, parity_cancel, start_array, stop_array*, add_disk, remove_disk*, mount_disk, unmount_disk, clear_disk_stats*\n• disk: shares, disks, disk_details, log_files, logs, flash_backup*\n• docker: list, details, start, stop, restart, networks, network_details\n• vm: list, details, start, stop, pause, resume, force_stop*, reboot, reset*\n• notification: overview, list, create, archive, mark_unread, recalculate, archive_all, archive_many, unarchive_many, unarchive_all, delete*, delete_archived*\n• key: list, get, create, update, delete*, add_role, remove_role\n• plugin: list, add, remove*\n• rclone: list_remotes, config_form, create_remote, delete_remote*\n• setting: update, configure_ups*\n• customization: theme, public_theme, is_initial_setup, sso_enabled, set_theme\n• oidc: providers, provider, configuration, public_providers, validate_session\n• user: me\n• live: cpu, memory, cpu_telemetry, array_state, parity_progress, ups_status, notifications_overview, notification_feed, log_tail, owner, server_status\n\n* = destructive, requires confirm=True",
+  "version": "1.1.5",
   "author": {
     "name": "jmagar",
     "email": "jmagar@users.noreply.github.com"
```
.dockerignore (deleted)

```diff
@@ -1,31 +0,0 @@
-Dockerfile
-.dockerignore
-.git
-.gitignore
-__pycache__
-*.pyc
-*.pyo
-*.pyd
-.env
-.env.local
-.env.*
-*.log
-logs/
-*.db
-*.sqlite3
-instance/
-.pytest_cache/
-.ty_cache/
-.venv/
-venv/
-env/
-.vscode/
-cline_docs/
-tests/
-docs/
-scripts/
-commands/
-.full-review/
-.claude-plugin/
-*.md
-!README.md
```

.env.example

```diff
@@ -8,7 +8,10 @@ UNRAID_API_KEY=your_unraid_api_key
 
 # MCP Server Settings
 # -------------------
-UNRAID_MCP_TRANSPORT=streamable-http # Options: streamable-http (recommended), sse (deprecated), stdio
+# Default transport is stdio (for Claude Desktop / local use).
+# Docker Compose overrides this to streamable-http automatically.
+# Options: stdio (default), streamable-http, sse (deprecated)
+UNRAID_MCP_TRANSPORT=stdio
 UNRAID_MCP_HOST=0.0.0.0
 UNRAID_MCP_PORT=6970
 
@@ -35,3 +38,21 @@ UNRAID_MAX_RECONNECT_ATTEMPTS=10
 # Optional: Custom log file path for subscription auto-start diagnostics
 # Defaults to standard log if not specified
 # UNRAID_AUTOSTART_LOG_PATH=/custom/path/to/autostart.log
+
+# Credentials Directory Override (Optional)
+# -----------------------------------------
+# Override the credentials directory (default: ~/.unraid-mcp/)
+# UNRAID_CREDENTIALS_DIR=/custom/path/to/credentials
+
+# Authentication
+# --------------
+# This server has NO built-in authentication.
+# When running as HTTP (streamable-http transport), protect the endpoint with
+# an external OAuth gateway or identity-aware proxy:
+#
+#   Reverse proxy with auth:  nginx + OAuth2-proxy, Caddy + forward auth
+#   Identity-aware proxy:     Authelia, Authentik, Pomerium
+#   Network isolation:        bind to 127.0.0.1, use VPN/Tailscale for access
+#   Firewall rules:           restrict source IPs at the network layer
+#
+# stdio transport (default) is inherently local — no network exposure.
```

.github/workflows/ci.yml (new file)

```yaml
name: CI

on:
  push:
    branches: ["main", "feat/**", "fix/**"]
  pull_request:
    branches: ["main"]

jobs:
  lint:
    name: Lint & Format
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          version: "0.9.25"
      - name: Install dependencies
        run: uv sync --group dev
      - name: Ruff check
        run: uv run ruff check unraid_mcp/ tests/
      - name: Ruff format
        run: uv run ruff format --check unraid_mcp/ tests/

  typecheck:
    name: Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          version: "0.9.25"
      - name: Install dependencies
        run: uv sync --group dev
      - name: ty check
        run: uv run ty check unraid_mcp/

  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          version: "0.9.25"
      - name: Install dependencies
        run: uv sync --group dev
      - name: Run tests with coverage (excluding integration/slow)
        run: uv run pytest -m "not slow and not integration" --cov=unraid_mcp --cov-report=term-missing --tb=short -q

  version-sync:
    name: Version Sync Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check pyproject.toml and plugin.json versions match
        run: |
          TOML_VER=$(grep '^version = ' pyproject.toml | sed 's/version = "//;s/"//')
          PLUGIN_VER=$(python3 -c "import json; print(json.load(open('.claude-plugin/plugin.json'))['version'])")
          echo "pyproject.toml: $TOML_VER"
          echo "plugin.json: $PLUGIN_VER"
          if [ "$TOML_VER" != "$PLUGIN_VER" ]; then
            echo "ERROR: Version mismatch! Update .claude-plugin/plugin.json to match pyproject.toml"
            exit 1
          fi
          echo "Versions in sync: $TOML_VER"

  audit:
    name: Security Audit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
        with:
          version: "0.9.25"
      - name: Dependency audit
        run: uv audit
```

.gitignore

```diff
@@ -72,3 +72,11 @@ client_secret_*.apps.googleusercontent.com.json
 web-ui/frontend/node_modules
 web-ui/backend/.venv-backend/
 .pnpm-store/
+
+# Beads / Dolt files (added by bd init)
+.dolt/
+*.db
+.beads-credential-key
+
+# Lavra
+.lavra/memory/session-state.md
```

.lavra/config/codebase-profile.md (new file)

@@ -0,0 +1,89 @@
# Codebase Profile
Generated by /project-setup on 2026-03-27
## Stack & Integrations
**Language & Runtime**
- Python 3.12+ (min requirement), supports 3.13
- Build system: Hatchling 1.25.0+, package manager: uv
**Core Frameworks**
- FastMCP 3.0.0+: MCP server framework
- FastAPI 0.115.0+, Uvicorn 0.35.0+: ASGI layer
- Pydantic: validation/serialization
**API & Communication**
- httpx 0.28.1+: async HTTP client for GraphQL queries
- websockets 15.0.1+: WebSocket client for real-time subscriptions
- graphql-core 3.2.0+: GraphQL query validation (dev dep)
**Configuration**
- python-dotenv 1.1.1+: env var management
- rich 14.1.0+: terminal output/logging
**Testing**
- pytest 8.4.2+, pytest-asyncio 1.2.0+, pytest-cov 7.0.0+
- respx 0.22.0+: httpx request mocking
- hypothesis 6.151.9+: property-based testing
**Quality**
- ruff 0.12.8+: lint/format; ty 0.0.15+: type checking
**External Dependencies**
- Unraid GraphQL API (primary backend via httpx)
- WebSocket subscriptions to Unraid server (persistent connections)
- Supports custom CA certs (UNRAID_VERIFY_SSL)
**Entry Points**
- CLI: `unraid-mcp-server` / `unraid` bin scripts
- 1 primary MCP tool: `unraid` (15 domains, ~108 subactions)
- 2 diagnostic tools: `diagnose_subscriptions`, `test_subscription_query`
- 10 live snapshot MCP resources under `unraid://` namespace
## Architecture & Structure
```
unraid_mcp/
├── core/ # GraphQL client, exceptions, types, guards
├── config/ # Settings, logging, env validation
├── tools/ # Consolidated unraid tool (15 action domains)
├── subscriptions/ # WebSocket manager, resources, diagnostics
├── main.py # Entry point with shutdown cleanup
└── server.py # FastMCP init + 5-layer middleware chain
```
**server.py**: FastMCP init with middleware: logging → error_handling → rate_limiting → response_limiting → caching. Registers all tools/resources at import.
**tools/unraid.py**: Single consolidated tool (~1900 lines). 15 actions dispatch to domain handlers. Destructive actions require `confirm=True`.
**core/client.py**: GraphQL HTTP client (httpx-based) with sensitive key redaction, request/response logging, timeout management, credentials elicitation.
**config/settings.py**: Env var parsing for API URL, credentials, host/port, SSL, timeouts, log level, transport.
**subscriptions/manager.py**: Singleton SubscriptionManager for WebSocket lifecycle and real-time streaming.
**Data Flow**: `unraid(action, subaction)` → domain handler → GraphQL query → httpx → Unraid API → response
**Patterns**:
- Consolidated single-tool MCP (action+subaction multiplexing)
- Singleton SubscriptionManager for WebSocket lifecycle
- Middleware chain (logging wraps all; caching disabled for mutations)
- Elicitation flow for first-run credential setup
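The action+subaction multiplexing named above reduces to a nested dispatch table; a toy sketch (handler names and payloads are invented for illustration — the real handlers live in `tools/unraid.py`):

```python
# Toy dispatch table mapping domain -> subaction -> handler.
_HANDLERS = {
    "docker": {
        "list": lambda **p: ["container-a", "container-b"],
        "details": lambda **p: {"id": p["container_id"], "state": "running"},
    },
}

def unraid(action: str, subaction: str, **params):
    """Route one consolidated MCP tool call to the right domain handler."""
    try:
        handler = _HANDLERS[action][subaction]
    except KeyError:
        raise ValueError(f"Unknown action/subaction: {action}/{subaction}")
    return handler(**params)
```

One tool with internal routing keeps the MCP tool list (and the model's context window) small while still exposing every operation.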
## Conventions & Testing
**Test Framework**: pytest with asyncio auto mode; `make_tool_fn` helper in conftest for tool extraction from FastMCP.
**Test Layout**:
- Root: domain unit tests (test_array.py, test_docker.py, etc.)
- tests/safety/: destructive action guard audits
- tests/schema/: GraphQL query validation
- tests/http_layer/: respx-based HTTP mocking
- tests/contract/: response structure contracts
- tests/property/: hypothesis property-based tests
- tests/integration/: WebSocket subscription lifecycle (slow, mock WS)
**Key Patching Rule**: Patch at `unraid_mcp.tools.unraid.make_graphql_request` (tool module level), NOT core module.
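The rule follows from Python name binding: `from ...core.client import make_graphql_request` copies the name into the tool module at import time, so patching the core module leaves that copy untouched. A self-contained illustration with toy modules (`toy_core`/`toy_tools` are invented for the demo, not real package names):

```python
import sys
import types
from unittest.mock import patch

# Toy "core" module defining the function.
core = types.ModuleType("toy_core")
core.make_graphql_request = lambda q: "real"
sys.modules["toy_core"] = core

# Toy "tools" module that does `from toy_core import make_graphql_request`,
# mirroring how unraid.py binds the name at import time.
tools = types.ModuleType("toy_tools")
exec(
    "from toy_core import make_graphql_request\n"
    "def handler(q):\n"
    "    return make_graphql_request(q)\n",
    tools.__dict__,
)
sys.modules["toy_tools"] = tools

# Patching the defining (core) module does NOT affect the tool module's copy:
with patch("toy_core.make_graphql_request", return_value="mocked"):
    wrong = tools.handler("q")   # the copy in toy_tools is still the original

# Patching at the tool module level — where the name is looked up — works:
with patch("toy_tools.make_graphql_request", return_value="mocked"):
    right = tools.handler("q")
```

Hence tests patch `unraid_mcp.tools.unraid.make_graphql_request`, the lookup site, not the definition site.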
**Naming**: `Test{Feature}` classes, async `test_*` methods, `_DOMAIN_QUERIES` dicts, `register_{domain}_tool()` functions.
**CI**: uv-based: lint (ruff) → typecheck (ty) → test (pytest) → version-sync → audit. Coverage 80% with branch coverage. Version sync enforces pyproject.toml ↔ .claude-plugin/plugin.json consistency.
**Ruff config**: line length 100, ignores D1xx/D2xx/D7xx (docstring), S101 (assert in tests).

.lavra/config/project-setup.md (new file)

@@ -0,0 +1,16 @@
---
stack: python
review_agents:
- kieran-python-reviewer
- code-simplicity-reviewer
- security-sentinel
- performance-oracle
plan_review_agents:
- kieran-python-reviewer
- code-simplicity-reviewer
disabled_agents: []
---
<reviewer_context_note>
MCP server for Unraid GraphQL API, built with FastMCP. Single consolidated `unraid` tool with action/subaction routing (~108 subactions across 15 domains). Python 3.12+, uv, ruff, ty (Astral type checker), pytest. Async throughout (httpx, asyncio). No web framework — stdio transport by default, streamable-http in Docker. Tests: unit (mock at tool module level), schema validation, HTTP layer (respx), safety (destructive action guards), integration (WebSocket subscriptions). Destructive actions require confirm=True gating.
</reviewer_context_note>

CHANGELOG.md (new file)

@@ -0,0 +1,175 @@
# Changelog
All notable changes to this project are documented here.
## [1.1.5] - 2026-03-27
### Added
- **Beads issue tracking**: `bd init` — Dolt-backed issue tracker with prefix `unraid-mcp-<hash>`, hooks, and AGENTS.md integration
- **Lavra project config**: `.lavra/config/project-setup.md` — stack `python`, review agents (kieran-python-reviewer, code-simplicity-reviewer, security-sentinel, performance-oracle)
- **Codebase profile**: `.lavra/config/codebase-profile.md` — auto-generated stack/architecture/conventions reference for planning and review commands
### Changed
- **`.gitignore`**: Added lavra session-state exclusion (`.lavra/memory/session-state.md`) and beads-related entries
- **`CLAUDE.md`**: Added beads workflow integration block with mandatory `bd` usage rules and session completion protocol
## [1.1.4] - 2026-03-25
### Changed
- **Plugin branding**: `displayName` set to `unRAID` in `plugin.json` and `marketplace.json`
- **Plugin description**: Expanded to list all 3 tools and all 15 action domains with full subaction inventory (107 subactions, destructive actions marked with `*`)
## [1.1.3] - 2026-03-24
### Fixed
- **Docs accuracy**: `disk/logs` docs corrected to use `log_path`/`tail_lines` parameters (were `path`/`lines`)
- **Docs accuracy**: `rclone/create_remote` docs corrected to `provider_type`/`config_data` (were `type`/`fields`)
- **Docs accuracy**: `setting/update` docs corrected to `settings_input` parameter (was `settings`)
- **Docs accuracy**: `key/create` now documents `roles` as optional; `add_role`/`remove_role` corrected to `roles` (plural)
- **Docs accuracy**: `oidc/validate_session` now documents required `token` parameter
- **Docs accuracy**: `parity_start` quick-reference example now includes required `correct=False`
- **Docs accuracy**: `log_tail` README example now includes required `path="/var/log/syslog"`
- **Docs accuracy**: `live/parity_progress` added to event-driven subscriptions list in troubleshooting guide
- **Docs accuracy**: `live/array_state` wording softened — "may show connecting indefinitely" vs "will always show"
- **Markdown**: `endpoints.md` top-level heading moved before blockquote disclaimer (MD041)
- **Tests**: `test_resources.py` now uses `_get_resource()` helper instead of raw `mcp.providers[0]._components[...]` access; isolates FastMCP internals to one location
---
## [1.1.2] - 2026-03-23
### Security
- **Path traversal**: Removed `/mnt/` from `_ALLOWED_LOG_PREFIXES` — was exposing all Unraid user shares to path-based reads
- **Path traversal**: Added early `..` detection for `disk/logs` and `live/log_tail` before any filesystem access; added `/boot/` prefix restriction for `flash_backup` source paths
- **Timing-safe auth**: `verify_token` now uses `hmac.compare_digest` instead of `==` to prevent timing oracle attacks on API key comparison
- **Traceback leak**: `include_traceback` in `ErrorHandlingMiddleware` is now gated on `DEBUG` log level; production deployments no longer expose stack traces
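The timing-safe comparison is worth spelling out: `==` on strings short-circuits at the first differing byte, so response timing leaks how much of a guessed key's prefix is correct. A sketch of the pattern (illustrative only — the real check lives in `server.py`):

```python
import hmac

def verify_token(presented: str, expected: str) -> bool:
    """Compare an API key in constant time.

    hmac.compare_digest examines every byte regardless of where the
    inputs first differ, closing the timing oracle that `==` opens.
    """
    return hmac.compare_digest(presented.encode(), expected.encode())
```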
### Fixed
- **Health check**: `_comprehensive_health_check` now re-raises `CredentialsNotConfiguredError` instead of swallowing it into a generic unhealthy status
- **UPS device query**: Removed non-existent `nominalPower` and `currentPower` fields from `ups_device` query — every call was failing against the live API
- **Stale credential bindings**: Subscription modules (`manager.py`, `snapshot.py`, `utils.py`, `diagnostics.py`) previously captured `UNRAID_API_KEY`/`UNRAID_API_URL` at import time; replaced with `_settings.ATTR` call-time access so `apply_runtime_config()` updates propagate correctly after credential elicitation
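The stale-binding fix is the same name-binding pitfall in miniature: a module-level `API_KEY = settings.API_KEY` freezes the value at import, so later runtime updates never reach it. A toy illustration (the `Settings` class and names here are invented for the demo, not the real `settings.py`):

```python
class Settings:
    """Toy stand-in for the real settings module."""
    def __init__(self):
        self.API_KEY = None

_settings = Settings()

API_KEY = _settings.API_KEY          # import-time capture: frozen at None

def send_frozen():
    return API_KEY                   # the bug: never sees runtime updates

def send_live():
    return _settings.API_KEY         # the fix: attribute access at call time

def apply_runtime_config(key):
    _settings.API_KEY = key          # e.g. after credential elicitation

apply_runtime_config("secret")
```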
### Added
- **CI pipeline**: `.github/workflows/ci.yml` with 5 jobs — lint (`ruff`), typecheck (`ty`), test (`pytest -m "not integration"`), version-sync check, and `uv audit` dependency scan
- **Coverage threshold**: `fail_under = 80` added to `[tool.coverage.report]`
- **Version sync check**: `scripts/validate-marketplace.sh` now verifies `pyproject.toml` and `plugin.json` versions match
### Changed
- **Docs**: Updated `CLAUDE.md`, `README.md` to reflect 3 tools (1 primary + 2 diagnostic); corrected system domain count (19→18); fixed scripts comment
- **Docs**: `docs/AUTHENTICATION.md` H1 retitled to "Authentication Setup Guide"
- **Docs**: Added `UNRAID_CREDENTIALS_DIR` commented entry to `.env.example`
- Removed `from __future__ import annotations` from `snapshot.py` (caused TC002 false positives with FastMCP)
- Added `# noqa: ASYNC109` to `timeout` parameters in `_handle_live` and `unraid()` (valid suppressions)
- Fixed `start_array*``start_array` in tool docstring table (`start_array` is not in `_ARRAY_DESTRUCTIVE`)
### Refactored
- **Path validation**: Extracted `_validate_path()` in `unraid.py` — consolidates traversal check, `normpath`, and prefix validation used by both `disk/logs` and `live/log_tail` into one place; eliminates duplication
- **WebSocket auth payload**: Extracted `build_connection_init()` in `subscriptions/utils.py` — removes 4 duplicate `connection_init` blocks from `snapshot.py` (×2), `manager.py`, and `diagnostics.py`; also fixes a bug in `diagnostics.py` where `x-api-key: None` was sent when no API key was configured
- Removed `_LIVE_ALLOWED_LOG_PREFIXES` alias — direct reference to `_ALLOWED_LOG_PREFIXES`
- Moved `import hmac` to module level in `server.py` (was inside `verify_token` hot path)
---
## [1.1.1] - 2026-03-16
### Added
- **API key auth**: `Authorization: Bearer <UNRAID_MCP_API_KEY>` bearer token authentication via `ApiKeyVerifier` — machine-to-machine access without OAuth browser flow
- **MultiAuth**: When both Google OAuth and API key are configured, `MultiAuth` accepts either method
- **Google OAuth**: Full `GoogleProvider` integration — browser-based OAuth 2.0 flow with JWT session tokens; `UNRAID_MCP_JWT_SIGNING_KEY` for stable tokens across restarts
- **`fastmcp.json`**: Dev tooling configs for FastMCP
### Fixed
- Auth test isolation: use `os.environ[k] = ""` instead of `delenv` to prevent dotenv re-injection between test reloads
---
## [1.1.0] - 2026-03-16
### Breaking Changes
- **Tool consolidation**: 15 individual domain tools (`unraid_docker`, `unraid_vm`, etc.) merged into single `unraid` tool with `action` + `subaction` routing
- Old: `unraid_docker(action="list")`
- New: `unraid(action="docker", subaction="list")`
### Added
- **`live` tool** (11 subactions): Real-time WebSocket subscription snapshots — `cpu`, `memory`, `cpu_telemetry`, `array_state`, `parity_progress`, `ups_status`, `notifications_overview`, `notification_feed`, `log_tail`, `owner`, `server_status`
- **`customization` tool** (5 subactions): `theme`, `public_theme`, `is_initial_setup`, `sso_enabled`, `set_theme`
- **`plugin` tool** (3 subactions): `list`, `add`, `remove`
- **`oidc` tool** (5 subactions): `providers`, `provider`, `configuration`, `public_providers`, `validate_session`
- **Persistent `SubscriptionManager`**: `unraid://live/*` MCP resources backed by long-lived WebSocket connections with auto-start and reconnection
- **`diagnose_subscriptions`** and **`test_subscription_query`** diagnostic tools
- `array`: Added `parity_history`, `start_array`, `stop_array`, `add_disk`, `remove_disk`, `mount_disk`, `unmount_disk`, `clear_disk_stats`
- `keys`: Added `add_role`, `remove_role`
- `settings`: Added `update_ssh` (confirm required)
- `stop_array` added to `_ARRAY_DESTRUCTIVE`
- `gate_destructive_action` helper in `core/guards.py` — centralized elicitation + confirm guard
- Full safety test suite: `TestNoGraphQLCallsWhenUnconfirmed` (zero-I/O guarantee for all 13 destructive actions)
### Fixed
- Removed 29 actions confirmed absent from live API v4.29.2 via GraphQL introspection (Docker organizer mutations, `unassignedDevices`, `warningsAndAlerts`, etc.)
- `log_tail` path validated against allowlist before subscription start
- WebSocket auth uses `x-api-key` connectionParams format
---
## [1.0.0] - 2026-03-14 through 2026-03-15
### Breaking Changes
- Credential storage moved to `~/.unraid-mcp/.env` (dir 700, file 600); all runtimes load from this path
- `unraid_health(action="setup")` is the only tool that triggers credential elicitation; all others propagate `CredentialsNotConfiguredError`
### Added
- `CredentialsNotConfiguredError` sentinel — propagates cleanly through `tool_error_handler` with exact credential path in the error message
- `is_configured()` and `apply_runtime_config()` in `settings.py` for runtime credential injection
- `elicit_and_configure()` with `.env` persistence and confirmation before overwrite
- 28 GraphQL mutations across storage, docker, notifications, and new settings tool
- Comprehensive test suite expansion: schema validation (99 tests), HTTP layer (respx), property tests, safety audit, contract tests
### Fixed
- Numerous PR review fixes across 50+ commits (CodeRabbit, ChatGPT-Codex review rounds)
- Shell scripts hardened against injection and null guards
- Notification enum validation, subscription lock split, safe_get semantics
---
## [0.6.0] - 2026-03-15
### Added
- Subscription byte/line cap to prevent unbounded memory growth
- `asyncio.timeout` bounds on `subscribe_once` / `subscribe_collect`
- Partial auto-start for subscriptions (best-effort on startup)
### Fixed
- WebSocket URL scheme handling (`ws://`/`wss://`)
- `flash_backup` path validation and smoke test assertions
---
## [0.5.0] - 2026-03-15
*Tool expansion and live subscription foundation.*
---
## [0.4.x] - 2026-03-13 through 2026-03-14
*Credential elicitation system, per-tool refactors, and mutation additions.*
---
## [0.2.x] - 2026-02-15 through 2026-03-13
*Initial public release hardening: PR review cycles, test suite expansion, security fixes, plugin manifest.*
---
## [0.1.0] - 2026-02-08
### Added
- Consolidated 26 tools into 10 tools with 90 actions
- FastMCP architecture migration with `uv` toolchain
- Docker Compose support with health checks
- WebSocket subscription infrastructure
---
*Format: [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). Versioning: [Semantic Versioning](https://semver.org/).*

CLAUDE.md

@@ -38,28 +38,26 @@ uv run ty check unraid_mcp/
uv run pytest
```
### Docker Development
```bash
# Build the Docker image
docker build -t unraid-mcp-server .
# Run with Docker Compose
docker compose up -d
# View logs
docker compose logs -f unraid-mcp
# Stop service
docker compose down
```
### Environment Setup
- Copy `.env.example` to `.env` and configure:
- `UNRAID_API_URL`: Unraid GraphQL endpoint (required)
- `UNRAID_API_KEY`: Unraid API key (required)
- `UNRAID_MCP_TRANSPORT`: Transport type (default: streamable-http)
- `UNRAID_MCP_PORT`: Server port (default: 6970)
- `UNRAID_MCP_HOST`: Server host (default: 0.0.0.0)
Copy `.env.example` to `.env` and configure:
**Required:**
- `UNRAID_API_URL`: Unraid GraphQL endpoint
- `UNRAID_API_KEY`: Unraid API key
**Server:**
- `UNRAID_MCP_LOG_LEVEL`: Log verbosity (default: INFO)
- `UNRAID_MCP_LOG_FILE`: Log filename in logs/ (default: unraid-mcp.log)
**SSL/TLS:**
- `UNRAID_VERIFY_SSL`: SSL verification (default: true; set `false` for self-signed certs)
**Subscriptions:**
- `UNRAID_AUTO_START_SUBSCRIPTIONS`: Auto-start live subscriptions on startup (default: true)
- `UNRAID_MAX_RECONNECT_ATTEMPTS`: WebSocket reconnect limit (default: 10)
**Credentials override:**
- `UNRAID_CREDENTIALS_DIR`: Override the `~/.unraid-mcp/` credentials directory path
## Architecture
@@ -68,10 +66,13 @@ docker compose down
- **Entry Point**: `unraid_mcp/main.py` - Application entry point and startup logic
- **Configuration**: `unraid_mcp/config/` - Settings management and logging configuration
- **Core Infrastructure**: `unraid_mcp/core/` - GraphQL client, exceptions, and shared types
- `guards.py` — destructive action gating via MCP elicitation
- `utils.py` — shared helpers (`safe_get`, `safe_display_url`, path validation)
- `setup.py` — elicitation-based credential setup flow
- **Subscriptions**: `unraid_mcp/subscriptions/` - Real-time WebSocket subscriptions and diagnostics
- **Tools**: `unraid_mcp/tools/` - Domain-specific tool implementations
- **GraphQL Client**: Uses httpx for async HTTP requests to Unraid API
- **Transport Layer**: Supports streamable-http (recommended), SSE (deprecated), and stdio
- **Version Helper**: `unraid_mcp/version.py` - Reads version from package metadata via importlib
### Key Design Patterns
- **Consolidated Action Pattern**: Each tool uses `action: Literal[...]` parameter to expose multiple operations via a single MCP tool, reducing context window usage
@@ -89,13 +90,16 @@ docker compose down
while the subscription starts — callers should retry in a moment. When
`UNRAID_AUTO_START_SUBSCRIPTIONS=false`, resources fall back to on-demand `subscribe_once`.
### Tool Categories (1 Tool, ~107 Subactions)
### Tool Categories (3 Tools: 1 Primary + 2 Diagnostic)
The server registers a **single consolidated `unraid` tool** with `action` (domain) + `subaction` (operation) routing. Call it as `unraid(action="docker", subaction="list")`.
The server registers **3 MCP tools**:
- **`unraid`** — primary tool with `action` (domain) + `subaction` (operation) routing, 107 subactions. Call it as `unraid(action="docker", subaction="list")`.
- **`diagnose_subscriptions`** — inspect subscription connection states, errors, and WebSocket URLs.
- **`test_subscription_query`** — test a specific GraphQL subscription query (allowlisted fields only).
| action | subactions |
|--------|-----------|
| **system** (19) | overview, array, network, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config |
| **system** (18) | overview, array, network, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config |
| **health** (4) | check, test_connection, diagnose, setup |
| **array** (13) | parity_status, parity_history, parity_start, parity_pause, parity_resume, parity_cancel, start_array, stop_array*, add_disk, remove_disk*, mount_disk, unmount_disk, clear_disk_stats* |
| **disk** (6) | shares, disks, disk_details, log_files, logs, flash_backup* |
@@ -116,26 +120,20 @@ The server registers a **single consolidated `unraid` tool** with `action` (doma
### Destructive Actions (require `confirm=True`)
- **array**: stop_array, remove_disk, clear_disk_stats
- **vm**: force_stop, reset
- **notifications**: delete, delete_archived
- **notification**: delete, delete_archived
- **rclone**: delete_remote
- **keys**: delete
- **key**: delete
- **disk**: flash_backup
- **settings**: configure_ups
- **plugins**: remove
- **setting**: configure_ups
- **plugin**: remove
### Environment Variable Hierarchy
The server loads environment variables from multiple locations in order:
1. `~/.unraid-mcp/.env` (primary — canonical credentials dir, all runtimes)
2. `~/.unraid-mcp/.env.local` (local overrides, only used if primary is absent)
3. `/app/.env.local` (Docker container mount)
4. `../.env.local` (project root local overrides)
5. `../.env` (project root fallback)
6. `unraid_mcp/.env` (last resort)
### Transport Configuration
- **streamable-http** (recommended): HTTP-based transport on `/mcp` endpoint
- **sse** (deprecated): Server-Sent Events transport
- **stdio**: Standard input/output for direct integration
3. `../.env.local` (project root local overrides)
4. `../.env` (project root fallback)
5. `unraid_mcp/.env` (last resort)
### Error Handling Strategy
- GraphQL errors are converted to ToolError with descriptive messages
@@ -143,6 +141,14 @@ The server loads environment variables from multiple locations in order:
- Network errors are caught and wrapped with connection context
- All errors are logged with full context for debugging
### Middleware Chain
`server.py` wraps all tools in a 5-layer stack (order matters — outermost first):
1. **LoggingMiddleware** — logs every `tools/call` and `resources/read` with duration
2. **ErrorHandlingMiddleware** — converts unhandled exceptions to proper MCP errors
3. **SlidingWindowRateLimitingMiddleware** — 540 req/min sliding window
4. **ResponseLimitingMiddleware** — truncates responses > 512 KB with a clear suffix
5. **ResponseCachingMiddleware** — caching disabled entirely for `unraid` tool (mutations and reads share one tool name, so no per-subaction exclusion is possible)
### Performance Considerations
- Increased timeouts for disk operations (90s read timeout)
- Selective queries to avoid GraphQL type overflow issues
@@ -167,7 +173,9 @@ tests/
├── http_layer/ # httpx-level request/response tests (respx)
├── integration/ # WebSocket subscription lifecycle tests (slow)
├── safety/ # Destructive action guard tests
├── schema/ # GraphQL query validation (99 tests, all passing)
├── schema/ # GraphQL query validation (119 tests)
├── contract/ # Response shape contract tests
└── property/ # Input validation property-based tests
```
### Running Targeted Tests
@@ -181,7 +189,7 @@ uv run pytest -x # Fail fast on first error
### Scripts
```bash
# HTTP smoke-test against a live server (11 tools, all non-destructive actions)
# HTTP smoke-test against a live server (non-destructive actions, all domains)
./tests/mcporter/test-actions.sh [MCP_URL] # default: http://localhost:6970/mcp
# stdio smoke-test, no running server needed (good for CI)
@@ -195,6 +203,8 @@ See `tests/mcporter/README.md` for transport differences and `docs/DESTRUCTIVE_A
### API Reference Docs
- `docs/UNRAID_API_COMPLETE_REFERENCE.md` — Full GraphQL schema reference
- `docs/UNRAID_API_OPERATIONS.md` — All supported operations with examples
- `docs/MARKETPLACE.md` — Plugin marketplace listing and publishing guide
- `docs/PUBLISHING.md` — Step-by-step instructions for publishing to Claude plugin registry
Use these when adding new queries/mutations.
@@ -204,12 +214,11 @@ When bumping the version, **always update both files** — they must stay in syn
- `.claude-plugin/plugin.json``"version": "X.Y.Z"`
### Credential Storage (`~/.unraid-mcp/.env`)
All runtimes (plugin, direct, Docker) load credentials from `~/.unraid-mcp/.env`.
All runtimes (plugin, direct `uv run`) load credentials from `~/.unraid-mcp/.env`.
- **Plugin/direct:** `unraid action=health subaction=setup` writes this file automatically via elicitation,
**Safe to re-run**: always prompts for confirmation before overwriting existing credentials,
whether the connection is working or not (failed probe may be a transient outage, not bad creds).
or manual: `mkdir -p ~/.unraid-mcp && cp .env.example ~/.unraid-mcp/.env` then edit.
- **No symlinks needed.** Version bumps do not affect this path.
- **Permissions:** dir=700, file=600 (set automatically by elicitation; set manually if
using `cp`: `chmod 700 ~/.unraid-mcp && chmod 600 ~/.unraid-mcp/.env`).
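The permission handling described above can be sketched in a few lines. This is a hypothetical helper for illustration, not the actual elicitation code; the file layout (`.env` with `UNRAID_API_URL`/`UNRAID_API_KEY`) follows the documented format:

```python
import os
from pathlib import Path

def write_credentials(creds_dir: Path, api_url: str, api_key: str) -> Path:
    """Persist credentials with the documented permissions (dir=700, file=600)."""
    creds_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
    os.chmod(creds_dir, 0o700)  # enforce mode even if the directory pre-existed
    env_file = creds_dir / ".env"
    env_file.write_text(f"UNRAID_API_URL={api_url}\nUNRAID_API_KEY={api_key}\n")
    env_file.chmod(0o600)  # owner read/write only
    return env_file
```

Calling `write_credentials(Path.home() / ".unraid-mcp", url, key)` mirrors the manual `mkdir`/`chmod` steps above.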
@@ -219,3 +228,50 @@ All runtimes (plugin, direct, Docker) load credentials from `~/.unraid-mcp/.env`
```bash
ln -sf CLAUDE.md AGENTS.md && ln -sf CLAUDE.md GEMINI.md
```
<!-- BEGIN BEADS INTEGRATION v:1 profile:minimal hash:ca08a54f -->
## Beads Issue Tracker
This project uses **bd (beads)** for issue tracking. Run `bd prime` to see full workflow context and commands.
### Quick Reference
```bash
bd ready # Find available work
bd show <id> # View issue details
bd update <id> --claim # Claim work
bd close <id> # Complete work
```
### Rules
- Use `bd` for ALL task tracking — do NOT use TodoWrite, TaskCreate, or markdown TODO lists
- Run `bd prime` for detailed command reference and session close protocol
- Use `bd remember` for persistent knowledge — do NOT use MEMORY.md files
## Session Completion
**When ending a work session**, you MUST complete ALL steps below. Work is NOT complete until `git push` succeeds.
**MANDATORY WORKFLOW:**
1. **File issues for remaining work** - Create issues for anything that needs follow-up
2. **Run quality gates** (if code changed) - Tests, linters, builds
3. **Update issue status** - Close finished work, update in-progress items
4. **PUSH TO REMOTE** - This is MANDATORY:
```bash
git pull --rebase
bd dolt push
git push
git status # MUST show "up to date with origin"
```
5. **Clean up** - Clear stashes, prune remote branches
6. **Verify** - All changes committed AND pushed
7. **Hand off** - Provide context for next session
**CRITICAL RULES:**
- Work is NOT complete until `git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds
<!-- END BEADS INTEGRATION -->
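The "up to date with origin" check in step 4 can be verified mechanically instead of eyeballed. A sketch that parses the branch header of `git status -sb` (the `branch_synced` helper is hypothetical, not part of bd or git):

```python
import re

def branch_synced(status_line: str) -> bool:
    """True if `git status -sb`'s header line shows no ahead/behind divergence.

    Example headers: '## main...origin/main' (synced),
    '## main...origin/main [ahead 2]' (unpushed commits).
    Returns False if no upstream is configured.
    """
    m = re.match(r"## \S+?\.\.\.(\S+)(?: \[([^\]]+)\])?", status_line)
    if not m:
        return False  # no '...upstream' part: branch has no tracking remote
    return m.group(2) is None  # no '[ahead N]'/'[behind N]' annotation
```

Feeding it the first line of `git status -sb` output after step 4 confirms the push actually landed.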
@@ -1,48 +0,0 @@
# Use an official Python runtime as a parent image
FROM python:3.12-slim
# Set the working directory in the container
WORKDIR /app
# Install uv (pinned tag to avoid mutable latest)
COPY --from=ghcr.io/astral-sh/uv:0.9.25 /uv /uvx /usr/local/bin/
# Create non-root user with home directory and give ownership of /app
RUN groupadd --gid 1000 appuser && \
useradd --uid 1000 --gid 1000 --create-home --shell /bin/false appuser && \
chown appuser:appuser /app
# Copy dependency files (owned by appuser via --chown)
COPY --chown=appuser:appuser pyproject.toml .
COPY --chown=appuser:appuser uv.lock .
COPY --chown=appuser:appuser README.md .
COPY --chown=appuser:appuser LICENSE .
# Copy the source code
COPY --chown=appuser:appuser unraid_mcp/ ./unraid_mcp/
# Switch to non-root user before installing dependencies
USER appuser
# Install dependencies and the package
RUN uv sync --frozen
# Make port UNRAID_MCP_PORT available to the world outside this container
# Defaulting to 6970, but can be overridden by environment variable
EXPOSE 6970
# Define environment variables (defaults, can be overridden at runtime)
ENV UNRAID_MCP_PORT=6970
ENV UNRAID_MCP_HOST="0.0.0.0"
ENV UNRAID_MCP_TRANSPORT="streamable-http"
ENV UNRAID_API_URL=""
ENV UNRAID_API_KEY=""
ENV UNRAID_VERIFY_SSL="true"
ENV UNRAID_MCP_LOG_LEVEL="INFO"
# Health check
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD ["python", "-c", "import os, urllib.request; port = os.getenv('UNRAID_MCP_PORT', '6970'); urllib.request.urlopen(f'http://localhost:{port}/mcp')"]
# Run unraid-mcp-server when the container launches
CMD ["uv", "run", "unraid-mcp-server"]

README.md
@@ -8,13 +8,12 @@
## ✨ Features
- 🔧 **1 primary tool + 2 diagnostic tools, 107 subactions**: Complete Unraid management through a consolidated MCP tool
- 🏗️ **Modular Architecture**: Clean, maintainable, and extensible codebase
- 🔄 **Real-time Data**: WebSocket subscriptions for live metrics, logs, array state, and more
- 📊 **Health Monitoring**: Comprehensive system diagnostics and status
- 🐳 **Docker Ready**: Full containerization support with Docker Compose
- 🔒 **Secure**: Network-layer isolation
- 📝 **Rich Logging**: Structured logging with rotation and multiple levels
---
@@ -25,6 +24,7 @@
- [Quick Start](#-quick-start)
- [Installation](#-installation)
- [Configuration](#-configuration)
- [Authentication](#-authentication)
- [Available Tools & Resources](#-available-tools--resources)
- [Development](#-development)
- [Architecture](#-architecture)
@@ -45,7 +45,7 @@
```
This provides instant access to Unraid monitoring and management through Claude Code with:
- **1 primary MCP tool** (`unraid`) exposing **107 subactions** via `action` + `subaction` routing, plus `diagnose_subscriptions` and `test_subscription_query` diagnostic tools
- Real-time system metrics and health monitoring
- Docker container and VM lifecycle management
- Disk health monitoring and storage management
@@ -55,7 +55,7 @@ This provides instant access to Unraid monitoring and management through Claude
### ⚙️ Credential Setup
Credentials are stored in `~/.unraid-mcp/.env` — one location that works for the
Claude Code plugin and direct `uv run` invocations.
**Option 1 — Interactive (Claude Code plugin, elicitation-supported clients):**
```
@@ -73,9 +73,6 @@ cp .env.example ~/.unraid-mcp/.env && chmod 600 ~/.unraid-mcp/.env
# UNRAID_API_KEY=your-key-from-unraid-settings
```
> **Finding your API key:** Unraid → Settings → Management Access → API Keys
---
@@ -83,8 +80,7 @@ same file, no duplication needed.
## 🚀 Quick Start
### Prerequisites
- Python 3.12+ with [uv](https://github.com/astral-sh/uv) for development
- Unraid server with GraphQL API enabled
### 1. Clone Repository
@@ -95,7 +91,7 @@ cd unraid-mcp
### 2. Configure Environment
```bash
# Canonical credential location (all runtimes)
mkdir -p ~/.unraid-mcp && chmod 700 ~/.unraid-mcp
cp .env.example ~/.unraid-mcp/.env && chmod 600 ~/.unraid-mcp/.env
# Edit ~/.unraid-mcp/.env with your values
@@ -104,16 +100,7 @@ cp .env.example ~/.unraid-mcp/.env && chmod 600 ~/.unraid-mcp/.env
cp .env.example .env
```
### 3. Run for Development
```bash
# Install dependencies
uv sync
@@ -139,7 +126,7 @@ unraid-mcp/ # ${CLAUDE_PLUGIN_ROOT}
└── scripts/ # Validation and helper scripts
```
- **MCP Server**: 3 tools — `unraid` (107 subactions) + `diagnose_subscriptions` + `test_subscription_query`
- **Skill**: `/unraid` skill for monitoring and queries
- **Entry Point**: `unraid-mcp-server` defined in pyproject.toml
@@ -147,38 +134,6 @@ unraid-mcp/ # ${CLAUDE_PLUGIN_ROOT}
## 📦 Installation
### 🔧 Development Installation
For development and testing:
@@ -208,7 +163,7 @@ uv run unraid-mcp-server
### Environment Variables
Credentials and settings go in `~/.unraid-mcp/.env` (the canonical location loaded by all runtimes — plugin and direct `uv run`). See the [Credential Setup](#%EF%B8%8F-credential-setup) section above for how to create it.
```bash
# Core API Configuration (Required)
@@ -216,7 +171,7 @@ UNRAID_API_URL=https://your-unraid-server-url/graphql
UNRAID_API_KEY=your_unraid_api_key
# MCP Server Settings
UNRAID_MCP_TRANSPORT=stdio # stdio (default)
UNRAID_MCP_HOST=0.0.0.0
UNRAID_MCP_PORT=6970
@@ -229,27 +184,24 @@ UNRAID_VERIFY_SSL=true # true, false, or path to CA bundle
# Subscription Configuration
UNRAID_AUTO_START_SUBSCRIPTIONS=true # Auto-start WebSocket subscriptions on startup (default: true)
UNRAID_MAX_RECONNECT_ATTEMPTS=10 # Max WebSocket reconnection attempts (default: 10)
# Optional: Auto-start log file subscription path
# Defaults to /var/log/syslog if it exists and this is unset
# UNRAID_AUTOSTART_LOG_PATH=/var/log/syslog
# Optional: Credentials directory override (default: ~/.unraid-mcp/)
# Useful for containers or non-standard home directory layouts
# UNRAID_CREDENTIALS_DIR=/custom/path/to/credentials
```
---
## 🛠️ Available Tools & Resources
The single `unraid` tool uses `action` (domain) + `subaction` (operation) routing to expose all operations via one MCP tool, minimizing context window usage. Destructive actions require `confirm=True`.
### Primary Tool: 15 Domains, 107 Subactions
Call pattern: `unraid(action="<domain>", subaction="<operation>")`
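The routing convention can be pictured as a lookup table keyed by `(action, subaction)`, with destructive entries gated on `confirm`. A minimal sketch with toy handlers (these are not the server's actual tables or return shapes):

```python
# Hypothetical illustration of action/subaction dispatch with a confirm guard.
HANDLERS = {
    ("docker", "list"): lambda **kw: {"containers": []},
    ("array", "stop_array"): lambda **kw: {"stopped": True},
}
DESTRUCTIVE = {("array", "stop_array")}  # subactions requiring confirm=True

def unraid(action: str, subaction: str, confirm: bool = False, **kwargs):
    key = (action, subaction)
    if key not in HANDLERS:
        raise ValueError(f"unknown subaction: {action}/{subaction}")
    if key in DESTRUCTIVE and not confirm:
        raise PermissionError(f"{action}/{subaction} requires confirm=True")
    return HANDLERS[key](**kwargs)
```

A single entry point like this is what keeps the tool count (and context window cost) low while still exposing every operation.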
@@ -299,8 +251,10 @@ The server exposes two classes of MCP resources backed by persistent WebSocket c
**`unraid://logs/stream`** — Live log file tail (path controlled by `UNRAID_AUTOSTART_LOG_PATH`)
> **Note**: Resources return cached data from persistent WebSocket subscriptions. A `{"status": "connecting"}` placeholder is returned while the subscription initializes — retry in a moment.
>
> **`log_tail` and `notification_feed`** are accessible as tool subactions (`unraid(action="live", subaction="log_tail")`) but are not registered as MCP resources — they use transient one-shot subscriptions and require parameters.
> **Security note**: The `disk/logs` and `live/log_tail` subactions allow reading files under `/var/log/` and `/boot/logs/` on the Unraid server. Authenticated MCP clients can stream any log file within these directories.
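Clients reading these resources should tolerate the `{"status": "connecting"}` placeholder noted above. A hedged polling sketch (the `fetch` callable stands in for whatever resource-read API your MCP client exposes):

```python
import time

def read_resource_with_retry(fetch, attempts: int = 5, delay: float = 0.5):
    """Poll a resource until the subscription cache is warm.

    `fetch` returns the resource payload as a dict; while the WebSocket
    subscription initializes, the server returns {"status": "connecting"}.
    """
    for _ in range(attempts):
        data = fetch()
        if data.get("status") != "connecting":
            return data  # cache is populated
        time.sleep(delay)
    raise TimeoutError("subscription did not initialize in time")
```

In practice a couple of retries over a second or two is enough, since the subscriptions auto-start with the server.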
---
@@ -331,7 +285,7 @@ unraid-mcp/
│ │ ├── queries.py # Subscription query constants
│ │ ├── diagnostics.py # Diagnostic tools
│ │ └── utils.py # Subscription utility functions
│ └── tools/ # Consolidated tools (unraid: 107 subactions + 2 diagnostic tools)
│ └── unraid.py # All 15 domains in one file
├── tests/ # Test suite
│ ├── conftest.py # Shared fixtures
@@ -345,8 +299,6 @@ unraid-mcp/
├── skills/unraid/ # Claude skill assets
├── .claude-plugin/ # Plugin manifest & marketplace config
├── .env.example # Environment template
├── pyproject.toml # Project config & dependencies
└── logs/ # Log files (auto-created, gitignored)
```
@@ -366,17 +318,14 @@ uv run pytest
### Integration Smoke-Tests (mcporter)
Live integration tests that exercise all non-destructive actions via [mcporter](https://github.com/mcporter/mcporter).
```bash
# stdio — no running server needed (good for CI)
./tests/mcporter/test-tools.sh [--parallel] [--timeout-ms N] [--verbose]
# HTTP — connects to a live server (most up-to-date coverage)
./tests/mcporter/test-actions.sh [MCP_URL] # default: http://localhost:6970/mcp
```
Destructive actions are always skipped. For safe testing strategies and exact mcporter commands per destructive action, see [`docs/DESTRUCTIVE_ACTIONS.md`](docs/DESTRUCTIVE_ACTIONS.md).
### API Schema Docs Automation
```bash
@@ -399,6 +348,16 @@ uv run unraid-mcp-server
# Or run via module directly
uv run -m unraid_mcp.main
# Run via named config files
fastmcp run fastmcp.stdio.json # stdio transport
```
### Ad-hoc Tool Testing (fastmcp CLI)
```bash
# Call without a running server (stdio config)
fastmcp list fastmcp.stdio.json
fastmcp call fastmcp.stdio.json unraid action=health subaction=check
```
---
@@ -1,49 +0,0 @@
services:
unraid-mcp:
build:
context: .
dockerfile: Dockerfile
container_name: unraid-mcp
restart: unless-stopped
read_only: true
cap_drop:
- ALL
tmpfs:
- /tmp:noexec,nosuid,size=64m
- /app/logs:noexec,nosuid,size=16m
- /app/.cache/logs:noexec,nosuid,size=8m
ports:
# HostPort:ContainerPort (maps to UNRAID_MCP_PORT inside the container, default 6970)
# Change the host port (left side) if 6970 is already in use on your host
- "${UNRAID_MCP_PORT:-6970}:${UNRAID_MCP_PORT:-6970}"
env_file:
- path: ${HOME}/.unraid-mcp/.env
required: false # Don't fail if file missing; environment: block below takes over
environment:
# Core API Configuration (Required)
# Sourced from ~/.unraid-mcp/.env via env_file above (if present),
# or set these directly here. The :? syntax fails fast if unset.
- UNRAID_API_URL=${UNRAID_API_URL:?UNRAID_API_URL is required}
- UNRAID_API_KEY=${UNRAID_API_KEY:?UNRAID_API_KEY is required}
# MCP Server Settings
- UNRAID_MCP_PORT=${UNRAID_MCP_PORT:-6970}
- UNRAID_MCP_HOST=${UNRAID_MCP_HOST:-0.0.0.0}
- UNRAID_MCP_TRANSPORT=${UNRAID_MCP_TRANSPORT:-streamable-http}
# SSL Configuration
- UNRAID_VERIFY_SSL=${UNRAID_VERIFY_SSL:-true}
# Logging Configuration
- UNRAID_MCP_LOG_LEVEL=${UNRAID_MCP_LOG_LEVEL:-INFO}
- UNRAID_MCP_LOG_FILE=${UNRAID_MCP_LOG_FILE:-unraid-mcp.log}
# Real-time Subscription Configuration
- UNRAID_AUTO_START_SUBSCRIPTIONS=${UNRAID_AUTO_START_SUBSCRIPTIONS:-true}
- UNRAID_MAX_RECONNECT_ATTEMPTS=${UNRAID_MAX_RECONNECT_ATTEMPTS:-10}
# Optional: Custom log file path for subscription auto-start diagnostics
- UNRAID_AUTOSTART_LOG_PATH=${UNRAID_AUTOSTART_LOG_PATH}
# Optional: If you want to mount a specific directory for logs (ensure UNRAID_MCP_LOG_FILE points within this mount)
# volumes:
# - ./logs:/app/logs # Example: maps ./logs on host to /app/logs in container
@@ -1,14 +1,14 @@
# Destructive Actions
**Last Updated:** 2026-03-24
**Total destructive actions:** 12 across 8 domains (single `unraid` tool)
All destructive actions require `confirm=True` at the call site. There is no additional environment variable gate — `confirm` is the sole guard.
> **mcporter commands below** use stdio transport. Run `test-tools.sh` for automated non-destructive coverage; destructive actions are always skipped there and tested manually per the strategies below.
>
> **Calling convention (v1.0.0+):** All operations use the single `unraid` tool with `action` (domain) + `subaction` (operation). For example:
> `mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid --args '{"action":"docker","subaction":"list"}'`
---
@@ -26,7 +26,7 @@ Stopping the array unmounts all shares and can interrupt running containers and
```bash
# Prerequisite: array must already be stopped; use a disk you intend to remove
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"array","subaction":"remove_disk","disk_id":"<DISK_ID>","confirm":true}' --output json
```
@@ -36,11 +36,11 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# Discover disk IDs
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"disk","subaction":"disks"}' --output json
# Clear stats for a specific disk
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"array","subaction":"clear_disk_stats","disk_id":"<DISK_ID>","confirm":true}' --output json
```
@@ -54,15 +54,15 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
# Prerequisite: create a minimal Alpine test VM in Unraid VM manager
# (Alpine ISO, 512MB RAM, no persistent disk, name contains "mcp-test")
VID=$(mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"vm","subaction":"list"}' --output json \
| python3 -c "import json,sys; vms=json.load(sys.stdin).get('vms',[]); print(next(v.get('uuid',v.get('id','')) for v in vms if 'mcp-test' in v.get('name','')))")
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"vm\",\"subaction\":\"force_stop\",\"vm_id\":\"$VID\",\"confirm\":true}" --output json
# Verify: VM state should return to stopped
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"vm\",\"subaction\":\"details\",\"vm_id\":\"$VID\"}" --output json
```
@@ -72,11 +72,11 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# Same minimal Alpine test VM as above
VID=$(mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"vm","subaction":"list"}' --output json \
| python3 -c "import json,sys; vms=json.load(sys.stdin).get('vms',[]); print(next(v.get('uuid',v.get('id','')) for v in vms if 'mcp-test' in v.get('name','')))")
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"vm\",\"subaction\":\"reset\",\"vm_id\":\"$VID\",\"confirm\":true}" --output json
```
@@ -89,9 +89,9 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# 1. Create a test notification, then list to get the real stored ID (create response
# ID is ULID-based; stored filename uses a unix timestamp, so IDs differ)
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"create","title":"mcp-test-delete","subject":"safe to delete","description":"MCP destructive action test","importance":"INFO"}' --output json
NID=$(mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"list","notification_type":"UNREAD"}' --output json \
| python3 -c "
import json,sys
@@ -100,11 +100,11 @@ matches=[n['id'] for n in reversed(notifs) if n.get('title')=='mcp-test-delete']
print(matches[0] if matches else '')")
# 2. Delete it (notification_type required)
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"notification\",\"subaction\":\"delete\",\"notification_id\":\"$NID\",\"notification_type\":\"UNREAD\",\"confirm\":true}" --output json
# 3. Verify
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"list"}' --output json | python3 -c \
"import json,sys; ns=[n for n in json.load(sys.stdin).get('notifications',[]) if 'mcp-test' in n.get('title','')]; print('clean' if not ns else ns)"
```
@@ -115,21 +115,21 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# 1. Create and archive a test notification
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"create","title":"mcp-test-archive-wipe","subject":"archive me","description":"safe to delete","importance":"INFO"}' --output json
AID=$(mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"list","notification_type":"UNREAD"}' --output json \
| python3 -c "
import json,sys
notifs=json.load(sys.stdin).get('notifications',[])
matches=[n['id'] for n in reversed(notifs) if n.get('title')=='mcp-test-archive-wipe']
print(matches[0] if matches else '')")
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"notification\",\"subaction\":\"archive\",\"notification_id\":\"$AID\"}" --output json
# 2. Wipe all archived
# NOTE: this deletes ALL archived notifications, not just the test one
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"notification","subaction":"delete_archived","confirm":true}' --output json
```
@@ -144,15 +144,15 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# 1. Create a throwaway local remote (points to /tmp — no real data)
# Parameters: name (str), provider_type (str), config_data (dict)
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"rclone","subaction":"create_remote","name":"mcp-test-remote","provider_type":"local","config_data":{"root":"/tmp"}}' --output json
# 2. Delete it
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"rclone","subaction":"delete_remote","name":"mcp-test-remote","confirm":true}' --output json
# 3. Verify
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"rclone","subaction":"list_remotes"}' --output json | python3 -c \
"import json,sys; remotes=json.load(sys.stdin).get('remotes',[]); print('clean' if 'mcp-test-remote' not in remotes else 'FOUND — cleanup failed')"
```
@@ -167,16 +167,16 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
```bash
# 1. Create a test key (names cannot contain hyphens; ID is at key.id)
KID=$(mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"key","subaction":"create","name":"mcp test key","roles":["VIEWER"]}' --output json \
| python3 -c "import json,sys; print(json.load(sys.stdin).get('key',{}).get('id',''))")
# 2. Delete it
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args "{\"action\":\"key\",\"subaction\":\"delete\",\"key_id\":\"$KID\",\"confirm\":true}" --output json
# 3. Verify
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"key","subaction":"list"}' --output json | python3 -c \
"import json,sys; ks=json.load(sys.stdin).get('keys',[]); print('clean' if not any('mcp test key' in k.get('name','') for k in ks) else 'FOUND — cleanup failed')"
```
@@ -191,7 +191,7 @@ mcporter call --http-url "$MCP_URL" --tool unraid \
# Prerequisite: create a dedicated test remote pointing away from real backup destination
# (use rclone create_remote first, or configure mcp-test-remote manually)
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"disk","subaction":"flash_backup","remote_name":"mcp-test-remote","source_path":"/boot","destination_path":"/flash-backup-test","confirm":true}' --output json
```
@@ -217,7 +217,7 @@ Removing a plugin cannot be undone without a full re-install. Test via `tests/sa
```bash
# If live testing is necessary (intentional removal only):
mcporter call --stdio-cmd "uv run unraid-mcp-server" --tool unraid \
--args '{"action":"plugin","subaction":"remove","names":["<plugin-name>"],"confirm":true}' --output json
```
@@ -11,37 +11,77 @@ The marketplace catalog that lists all available plugins in this repository.
**Contents:**
- Marketplace metadata (name, version, owner, repository)
- Plugin catalog with the "unraid" skill
- Plugin catalog with the "unraid" plugin
- Categories and tags for discoverability
### 2. Plugin Manifest (`.claude-plugin/plugin.json`)
The individual plugin configuration for the Unraid MCP server.
**Location:** `.claude-plugin/plugin.json`
**Contents:**
- Plugin name (`unraid`), version (`1.1.2`), author
- Repository and homepage links
- Plugin-specific metadata
- `mcpServers` block that configures the server to run via `uv run unraid-mcp-server` in stdio mode
### 3. Documentation
- `.claude-plugin/README.md` - Marketplace installation guide
- Updated root `README.md` with plugin installation section
### 4. Validation Script
- `scripts/validate-marketplace.sh` - Automated validation of marketplace structure
## MCP Tools Exposed
The plugin registers **3 MCP tools**:
| Tool | Purpose |
|------|---------|
| `unraid` | Primary tool — `action` (domain) + `subaction` (operation) routing, 107 subactions across 15 domains |
| `diagnose_subscriptions` | Inspect WebSocket subscription connection states and errors |
| `test_subscription_query` | Test a specific GraphQL subscription query (allowlisted fields only) |
### Calling Convention
All Unraid operations go through the single `unraid` tool:
```
unraid(action="docker", subaction="list")
unraid(action="system", subaction="overview")
unraid(action="array", subaction="parity_status")
unraid(action="vm", subaction="list")
unraid(action="live", subaction="cpu")
```
### Domains (action=)
| action | example subactions |
|--------|--------------------|
| `system` | overview, array, network, metrics, services, ups_devices |
| `health` | check, test_connection, diagnose, setup |
| `array` | parity_status, parity_start, start_array, add_disk |
| `disk` | shares, disks, disk_details, logs |
| `docker` | list, details, start, stop, restart |
| `vm` | list, details, start, stop, pause, resume |
| `notification` | overview, list, create, archive, archive_all |
| `key` | list, get, create, update, delete |
| `plugin` | list, add, remove |
| `rclone` | list_remotes, config_form, create_remote |
| `setting` | update, configure_ups |
| `customization` | theme, set_theme, sso_enabled |
| `oidc` | providers, configuration, validate_session |
| `user` | me |
| `live` | cpu, memory, array_state, log_tail, notification_feed |
Destructive subactions (e.g. `stop_array`, `force_stop`, `delete`) require `confirm=True`.
## Installation Methods
### Method 1: GitHub Distribution (Recommended for Users)
Once pushed to GitHub, users install via:
```bash
# Add the marketplace
/plugin marketplace add jmagar/unraid-mcp
# Install the Unraid plugin
/plugin install unraid @unraid-mcp
```
@@ -59,7 +99,7 @@ For testing locally before publishing:
### Method 3: Direct URL
Install from a specific branch or commit:
```bash
# From specific branch
@@ -75,14 +115,14 @@ Users can also install from a specific commit or branch:
unraid-mcp/
├── .claude-plugin/ # Plugin manifest + marketplace manifest
│ ├── plugin.json # Plugin configuration (name, version, mcpServers)
│ ├── marketplace.json # Marketplace catalog
│ └── README.md # Marketplace installation guide
├── skills/unraid/ # Skill documentation and helpers
│ ├── SKILL.md # Skill documentation
│ ├── README.md # Plugin documentation
│ ├── examples/ # Example scripts
│ ├── scripts/ # Helper scripts
│ └── references/ # API reference docs
├── unraid_mcp/ # Python package (the actual MCP server)
│ ├── main.py # Entry point
│ ├── server.py # FastMCP server registration
│ ├── tools/unraid.py # Consolidated tool (all 3 tools registered here)
│ ├── config/ # Settings management
│ ├── core/ # GraphQL client, exceptions, shared types
│ └── subscriptions/ # Real-time WebSocket subscription manager
└── scripts/
└── validate-marketplace.sh # Validation tool
```
@@ -90,15 +130,15 @@ unraid-mcp/
## Marketplace Metadata
### Categories
- `infrastructure` - Server management and monitoring tools
### Tags
- `unraid` - Unraid-specific functionality
- `monitoring` - System monitoring capabilities
- `homelab` - Homelab automation
- `graphql` - GraphQL API integration
- `docker` - Docker container management
- `virtualization` - VM management
## Publishing Checklist
@@ -109,10 +149,10 @@ Before publishing to GitHub:
./scripts/validate-marketplace.sh
```
2. **Update Version Numbers** (must be in sync)
- `pyproject.toml` → `version = "X.Y.Z"` under `[project]`
- `.claude-plugin/plugin.json` → `"version": "X.Y.Z"`
- `.claude-plugin/marketplace.json` → `"version"` in both `metadata` and `plugins[]`
3. **Test Locally**
```bash
@@ -123,33 +163,38 @@ Before publishing to GitHub:
4. **Commit and Push**
```bash
git add .claude-plugin/
git commit -m "feat: add Claude Code marketplace configuration"
git commit -m "chore: bump marketplace to vX.Y.Z"
git push origin main
```
5. **Create Release Tag**
```bash
git tag -a v1.0.0 -m "Release v1.0.0"
git push origin v1.0.0
git tag -a vX.Y.Z -m "Release vX.Y.Z"
git push origin vX.Y.Z
```
## User Experience
After installation, users will:
After installation, users can:
1. **See the skill in their skill list**
```bash
/skill list
1. **Invoke Unraid operations directly in Claude Code**
```
unraid(action="system", subaction="overview")
unraid(action="docker", subaction="list")
unraid(action="health", subaction="check")
```
2. **Access Unraid functionality directly**
- Claude Code will automatically detect when to invoke the skill
- Users can explicitly invoke with `/unraid`
2. **Use the credential setup tool on first run**
```
unraid(action="health", subaction="setup")
```
This triggers elicitation to collect and persist credentials to `~/.unraid-mcp/.env`.
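The resulting file is a standard dotenv file. A minimal sketch (both variable names come from this repo's setup docs; the values are placeholders):

```
UNRAID_API_URL=https://your-unraid-server/graphql
UNRAID_API_KEY=<your-api-key>
```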
3. **Have access to all helper scripts**
- Example scripts in `examples/`
- Utility scripts in `scripts/`
- API reference in `references/`
3. **Monitor live data via subscriptions**
```
unraid(action="live", subaction="cpu")
unraid(action="live", subaction="log_tail")
```
## Maintenance
@@ -157,31 +202,21 @@ After installation, users will:
To release a new version:
1. Make changes to the plugin
2. Update version in `.claude-plugin/plugin.json`
3. Update marketplace catalog in `.claude-plugin/marketplace.json`
4. Run validation: `./scripts/validate-marketplace.sh`
5. Commit and push
1. Make changes to the plugin code
2. Update version in `pyproject.toml`, `.claude-plugin/plugin.json`, and `.claude-plugin/marketplace.json`
3. Run validation: `./scripts/validate-marketplace.sh`
4. Commit and push
Users with the plugin installed will see the update available and can upgrade with:
Users with the plugin installed will see the update available and can upgrade:
```bash
/plugin update unraid
```
### Adding More Plugins
To add additional plugins to this marketplace:
1. Create new plugin directory: `skills/new-plugin/`
2. Add plugin manifest: `skills/new-plugin/.claude-plugin/plugin.json`
3. Update marketplace catalog: add entry to `.plugins[]` array in `.claude-plugin/marketplace.json`
4. Validate: `./scripts/validate-marketplace.sh`
## Support
- **Repository:** https://github.com/jmagar/unraid-mcp
- **Issues:** https://github.com/jmagar/unraid-mcp/issues
- **Documentation:** See `.claude-plugin/README.md` and `skills/unraid/README.md`
- **Destructive Actions:** `docs/DESTRUCTIVE_ACTIONS.md`
## Validation
@@ -198,5 +233,3 @@ This checks:
- Plugin structure
- Source path accuracy
- Documentation completeness
All 17 checks must pass before publishing.

View File

@@ -2,6 +2,26 @@
This guide covers how to publish `unraid-mcp` to PyPI so it can be installed via `uvx` or `pip` from anywhere.
## Package Overview
**PyPI package name:** `unraid-mcp`
**Entry point binary:** `unraid-mcp-server` (also aliased as `unraid-mcp`)
**Current version:** `1.1.2`
The package ships a FastMCP server exposing **3 MCP tools**:
- `unraid` — primary tool with `action` + `subaction` routing (~107 subactions, 15 domains)
- `diagnose_subscriptions` — WebSocket subscription diagnostics
- `test_subscription_query` — test individual GraphQL subscription queries
Tool call convention: `unraid(action="docker", subaction="list")`
### Version Sync Requirement
When bumping the version, **all three files must be updated together**:
- `pyproject.toml` → `version = "X.Y.Z"` under `[project]`
- `.claude-plugin/plugin.json` → `"version": "X.Y.Z"`
- `.claude-plugin/marketplace.json` → `"version"` in both `metadata` and `plugins[]`
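The sync rule above can be checked mechanically. A minimal sketch (not part of the repo; the field paths `metadata.version` and `plugins[].version` follow the marketplace layout described above):

```python
import json
import re


def versions_in_sync(pyproject_text: str, plugin_json_text: str, marketplace_json_text: str) -> bool:
    """Return True when the version in all three files matches.

    Raises if pyproject_text has no top-level `version = "..."` line.
    """
    toml_ver = re.search(r'^version\s*=\s*"([^"]+)"', pyproject_text, re.M).group(1)
    plugin_ver = json.loads(plugin_json_text)["version"]
    mp = json.loads(marketplace_json_text)
    # Collect every version field the marketplace catalog carries.
    mp_vers = {mp["metadata"]["version"]} | {p["version"] for p in mp["plugins"]}
    return {toml_ver, plugin_ver} | mp_vers == {toml_ver}
```

The repo's `validate-marketplace.sh` performs an equivalent check in shell; this form is handy in a pre-commit hook or pytest.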
## Prerequisites
1. **PyPI Account**: Create accounts on both:
@@ -40,7 +60,7 @@ Before publishing, update the version in `pyproject.toml`:
```toml
[project]
version = "1.0.0" # Follow semantic versioning: MAJOR.MINOR.PATCH
version = "1.1.2" # Follow semantic versioning: MAJOR.MINOR.PATCH
```
**Semantic Versioning Guide:**
@@ -82,8 +102,8 @@ uv run python -m build
```
This creates:
- `dist/unraid_mcp-VERSION-py3-none-any.whl` (wheel)
- `dist/unraid_mcp-VERSION.tar.gz` (source distribution)
- `dist/unraid_mcp-1.1.2-py3-none-any.whl` (wheel)
- `dist/unraid_mcp-1.1.2.tar.gz` (source distribution)
### 4. Validate the Package
@@ -156,7 +176,7 @@ UNRAID_API_URL=https://your-server uvx unraid-mcp-server
**Benefits of uvx:**
- No installation required
- Automatic virtual environment management
- Always uses the latest version (or specify version: `uvx unraid-mcp-server@1.0.0`)
- Always uses the latest version (or specify version: `uvx unraid-mcp-server@1.1.2`)
- Clean execution environment
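For MCP clients that accept a JSON server configuration, a uvx-based entry might look like the following. This is a sketch using the common `mcpServers` config shape (not taken from this repo); the env variable names match this project's docs, and the values are placeholders:

```json
{
  "mcpServers": {
    "unraid": {
      "command": "uvx",
      "args": ["unraid-mcp-server"],
      "env": {
        "UNRAID_API_URL": "https://your-unraid-server/graphql",
        "UNRAID_API_KEY": "<your-api-key>"
      }
    }
  }
}
```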
## Automation with GitHub Actions (Future)

View File

@@ -10,7 +10,7 @@ build-backend = "hatchling.build"
# ============================================================================
[project]
name = "unraid-mcp"
version = "1.0.1"
version = "1.1.5"
description = "MCP Server for Unraid API - provides tools to interact with an Unraid server's GraphQL API"
readme = "README.md"
license = {file = "LICENSE"}
@@ -258,6 +258,7 @@ omit = [
]
[tool.coverage.report]
fail_under = 80
precision = 2
show_missing = true
skip_covered = false

View File

@@ -70,6 +70,20 @@ else
echo -e "Checking: Plugin source path is valid... ${RED}✗${NC} (plugin not found in marketplace)"
fi
# Check version sync between pyproject.toml and plugin.json
echo "Checking version sync..."
TOML_VER=$(grep -m1 '^version = ' pyproject.toml | sed 's/version = "//;s/"//')
PLUGIN_VER=$(python3 -c "import json; print(json.load(open('.claude-plugin/plugin.json'))['version'])" 2>/dev/null || echo "ERROR_READING")
if [ "$TOML_VER" != "$PLUGIN_VER" ]; then
echo -e "${RED}FAIL: Version mismatch — pyproject.toml=$TOML_VER, plugin.json=$PLUGIN_VER${NC}"
CHECKS=$((CHECKS + 1))
FAILED=$((FAILED + 1))
else
echo -e "${GREEN}PASS: Versions in sync ($TOML_VER)${NC}"
CHECKS=$((CHECKS + 1))
PASSED=$((PASSED + 1))
fi
echo ""
echo "=== Results ==="
echo -e "Total checks: $CHECKS"

View File

@@ -93,7 +93,7 @@ unraid(action="live", subaction="cpu")
| `disks` | All physical disks with health and temperatures |
| `disk_details` | Detailed info for a specific disk (requires `disk_id`) |
| `log_files` | List available log files |
| `logs` | Read log content (requires `path`; optional `lines`) |
| `logs` | Read log content (requires `log_path`; optional `tail_lines`) |
| `flash_backup` | ⚠️ Trigger a flash backup (requires `confirm=True`) |
### `docker` — Containers
@@ -143,11 +143,11 @@ unraid(action="live", subaction="cpu")
|-----------|-------------|
| `list` | All API keys |
| `get` | Single key details (requires `key_id`) |
| `create` | Create a new key (requires `name`, `roles`) |
| `create` | Create a new key (requires `name`; optional `roles`, `permissions`) |
| `update` | Update a key (requires `key_id`) |
| `delete` | ⚠️ Delete a key (requires `key_id`, `confirm=True`) |
| `add_role` | Add a role to a key (requires `key_id`, `role`) |
| `remove_role` | Remove a role from a key (requires `key_id`, `role`) |
| `add_role` | Add a role to a key (requires `key_id`, `roles`) |
| `remove_role` | Remove a role from a key (requires `key_id`, `roles`) |
### `plugin` — Plugins
| Subaction | Description |
@@ -161,13 +161,13 @@ unraid(action="live", subaction="cpu")
|-----------|-------------|
| `list_remotes` | List configured rclone remotes |
| `config_form` | Get configuration form for a remote type |
| `create_remote` | Create a new remote (requires `name`, `type`, `fields`) |
| `create_remote` | Create a new remote (requires `name`, `provider_type`, `config_data`) |
| `delete_remote` | ⚠️ Delete a remote (requires `name`, `confirm=True`) |
### `setting` — System Settings
| Subaction | Description |
|-----------|-------------|
| `update` | Update system settings (requires `settings` object) |
| `update` | Update system settings (requires `settings_input` object) |
| `configure_ups` | ⚠️ Configure UPS settings (requires `confirm=True`) |
### `customization` — Theme & Appearance
@@ -186,7 +186,7 @@ unraid(action="live", subaction="cpu")
| `provider` | Single provider details (requires `provider_id`) |
| `configuration` | OIDC configuration |
| `public_providers` | Public-facing provider list |
| `validate_session` | Validate current SSO session |
| `validate_session` | Validate current SSO session (requires `token`) |
### `user` — Current User
| Subaction | Description |
@@ -264,7 +264,7 @@ unraid(action="system", subaction="array")
### Read logs
```
unraid(action="disk", subaction="log_files")
unraid(action="disk", subaction="logs", path="syslog", lines=50)
unraid(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=50)
```
### Live monitoring

View File

@@ -1,6 +1,6 @@
# Unraid API - Complete Reference Guide
> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See `SKILL.md` for the correct calling convention.
> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See [`SKILL.md`](../SKILL.md) for the correct calling convention.
**Tested on:** Unraid 7.2 x86_64
**Date:** 2026-01-21

View File

@@ -1,7 +1,7 @@
> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents raw GraphQL endpoints for development purposes. For MCP tool usage, use `unraid(action=..., subaction=...)` calls as documented in `SKILL.md`.
# Unraid API Endpoints Reference
> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents raw GraphQL endpoints for development purposes. For MCP tool usage, use `unraid(action=..., subaction=...)` calls as documented in `SKILL.md`.
Complete list of available GraphQL read-only endpoints in Unraid 7.2+.
## System & Metrics (8)

View File

@@ -22,7 +22,7 @@ unraid(action="system", subaction="array") # Array status overview
unraid(action="disk", subaction="disks") # All disks with temps & health
unraid(action="array", subaction="parity_status") # Current parity check
unraid(action="array", subaction="parity_history") # Past parity results
unraid(action="array", subaction="parity_start") # Start parity check
unraid(action="array", subaction="parity_start", correct=False) # Start parity check
unraid(action="array", subaction="stop_array", confirm=True) # ⚠️ Stop array
```
@@ -30,9 +30,8 @@ unraid(action="array", subaction="stop_array", confirm=True) # ⚠️ Stop
```python
unraid(action="disk", subaction="log_files") # List available logs
unraid(action="disk", subaction="logs", log_path="syslog", tail_lines=50) # Read syslog
unraid(action="disk", subaction="logs", log_path="/var/log/syslog") # Full path also works
unraid(action="live", subaction="log_tail", log_path="/var/log/syslog") # Live tail
unraid(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=50) # Read syslog
unraid(action="live", subaction="log_tail", path="/var/log/syslog") # Live tail
```
### Docker Containers
@@ -64,7 +63,7 @@ unraid(action="notification", subaction="overview")
unraid(action="notification", subaction="list", list_type="UNREAD", limit=10)
unraid(action="notification", subaction="archive", notification_id="<id>")
unraid(action="notification", subaction="create", title="Test", subject="Subject",
description="Body", importance="normal")
description="Body", importance="INFO")
```
### API Keys

View File

@@ -26,15 +26,15 @@ This writes `UNRAID_API_URL` and `UNRAID_API_KEY` to `~/.unraid-mcp/.env`. Re-ru
unraid(action="health", subaction="test_connection")
```
2. Full diagnostic report:
1. Full diagnostic report:
```python
unraid(action="health", subaction="diagnose")
```
3. Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint.
1. Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint.
4. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations).
1. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations).
---
@@ -76,9 +76,9 @@ See the Destructive Actions table in `SKILL.md` for the full list.
**Explanation:** The persistent WebSocket subscription has not yet received its first event. Retry in a moment.
**Known issue:** `live/array_state` uses `arraySubscription` which has a known Unraid API bug (returns null for a non-nullable field). This subscription will always show "connecting."
**Known issue:** `live/array_state` uses `arraySubscription` which has a known Unraid API bug (returns null for a non-nullable field). This subscription may show "connecting" indefinitely.
**Event-driven subscriptions** (`live/notifications_overview`, `live/owner`, `live/server_status`, `live/ups_status`) only populate when the server emits a change event. If the server is idle, these may never populate during a session.
**Event-driven subscriptions** (`live/parity_progress`, `live/notifications_overview`, `live/owner`, `live/server_status`, `live/ups_status`) only populate when the server emits a change event. If the server is idle, these may never populate during a session.
**Workaround for array state:** Use `unraid(action="system", subaction="array")` for a synchronous snapshot instead.

View File

@@ -6,6 +6,12 @@ from unittest.mock import AsyncMock, patch
import pytest
from fastmcp import FastMCP
from hypothesis import settings
from hypothesis.database import DirectoryBasedExampleDatabase
# Configure hypothesis to use the .cache directory for its database
settings.register_profile("default", database=DirectoryBasedExampleDatabase(".cache/.hypothesis"))
settings.load_profile("default")
@pytest.fixture

View File

@@ -8,6 +8,7 @@ to verify the full request pipeline.
"""
import json
from collections.abc import Callable
from typing import Any
from unittest.mock import patch
@@ -264,7 +265,7 @@ class TestInfoToolRequests:
"""Verify unraid system tool constructs correct GraphQL queries."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -367,7 +368,7 @@ class TestDockerToolRequests:
"""Verify unraid docker tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -535,7 +536,7 @@ class TestVMToolRequests:
"""Verify unraid vm tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -625,7 +626,7 @@ class TestArrayToolRequests:
"""Verify unraid array tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -701,7 +702,7 @@ class TestStorageToolRequests:
"""Verify unraid disk tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -799,7 +800,7 @@ class TestNotificationsToolRequests:
"""Verify unraid notification tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -932,7 +933,7 @@ class TestRCloneToolRequests:
"""Verify unraid rclone tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -1029,7 +1030,7 @@ class TestUsersToolRequests:
"""Verify unraid user tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -1062,7 +1063,7 @@ class TestKeysToolRequests:
"""Verify unraid key tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock
@@ -1157,7 +1158,7 @@ class TestHealthToolRequests:
"""Verify unraid health tool constructs correct requests."""
@staticmethod
def _get_tool():
def _get_tool() -> Callable[..., Any]:
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
@respx.mock

View File

@@ -816,6 +816,15 @@ class TestAutoStart:
async def test_auto_start_only_starts_marked_subscriptions(self) -> None:
mgr = SubscriptionManager()
# Clear default SNAPSHOT_ACTIONS configs; add one with auto_start=False
# to verify that unmarked subscriptions are never started.
mgr.subscription_configs.clear()
mgr.subscription_configs["no_auto_sub"] = {
"query": "subscription { test }",
"resource": "unraid://test",
"description": "Unmarked sub",
"auto_start": False,
}
with patch.object(mgr, "start_subscription", new_callable=AsyncMock) as mock_start:
await mgr.auto_start_all_subscriptions()
mock_start.assert_not_called()
@@ -837,6 +846,7 @@ class TestAutoStart:
async def test_auto_start_calls_start_for_marked(self) -> None:
mgr = SubscriptionManager()
mgr.subscription_configs.clear()
mgr.subscription_configs["auto_sub"] = {
"query": "subscription { auto }",
"resource": "unraid://auto",

View File

@@ -4,17 +4,7 @@ Live integration smoke-tests for the unraid-mcp server, exercising real API call
---
## Two Scripts, Two Transports
| | `test-tools.sh` | `test-actions.sh` |
|-|-----------------|-------------------|
| **Transport** | stdio | HTTP |
| **Server required** | No — launched ad-hoc per call | Yes — must be running at `$MCP_URL` |
| **Flags** | `--timeout-ms N`, `--parallel`, `--verbose` | positional `[MCP_URL]` |
| **Coverage** | 10 tools (read-only actions only) | 11 tools (all non-destructive actions) |
| **Use case** | CI / offline local check | Live server smoke-test |
### `test-tools.sh` — stdio, no running server needed
## `test-tools.sh` — stdio, no running server needed
```bash
./tests/mcporter/test-tools.sh # sequential, 25s timeout
@@ -25,19 +15,9 @@ Live integration smoke-tests for the unraid-mcp server, exercising real API call
Launches `uv run unraid-mcp-server` in stdio mode for each tool call. Requires `mcporter`, `uv`, and `python3` in `PATH`. Good for CI pipelines — no persistent server process needed.
### `test-actions.sh` — HTTP, requires a live server
```bash
./tests/mcporter/test-actions.sh # default: http://localhost:6970/mcp
./tests/mcporter/test-actions.sh http://10.1.0.2:6970/mcp # explicit URL
UNRAID_MCP_URL=http://10.1.0.2:6970/mcp ./tests/mcporter/test-actions.sh
```
Connects to an already-running streamable-http server. Covers all read-only actions across 10 tools (`unraid_settings` is all-mutations and skipped; all destructive mutations are explicitly skipped).
---
## What `test-actions.sh` Tests
## What `test-tools.sh` Tests
### Phase 1 — Param-free reads
@@ -137,15 +117,10 @@ curl -LsSf https://astral.sh/uv/install.sh | sh
# python3 — used for inline JSON extraction
python3 --version # 3.12+
# Running server (for test-actions.sh only)
docker compose up -d
# or
uv run unraid-mcp-server
```
---
## Cleanup
`test-actions.sh` connects to an existing server and leaves it running; it creates no temporary files. `test-tools.sh` spawns stdio server subprocesses per call — they exit when mcporter finishes each invocation — and may write a timestamped log file under `${TMPDIR:-/tmp}`. Neither script leaves background processes.
`test-tools.sh` spawns stdio server subprocesses per call — they exit when mcporter finishes each invocation — and may write a timestamped log file under `${TMPDIR:-/tmp}`. It does not leave background processes.

View File

@@ -1,407 +0,0 @@
#!/usr/bin/env bash
# test-actions.sh — Test all non-destructive Unraid MCP actions via mcporter
#
# Usage:
# ./scripts/test-actions.sh [MCP_URL]
#
# Default MCP_URL: http://localhost:6970/mcp
# Skips: destructive (confirm=True required), state-changing mutations,
# and actions requiring IDs not yet discovered.
#
# Phase 1: param-free reads
# Phase 2: ID-discovered reads (container, network, disk, vm, key, log)
set -euo pipefail
MCP_URL="${1:-${UNRAID_MCP_URL:-http://localhost:6970/mcp}}"
# ── colours ──────────────────────────────────────────────────────────────────
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
PASS=0; FAIL=0; SKIP=0
declare -a FAILED_TESTS=()
# ── helpers ───────────────────────────────────────────────────────────────────
mcall() {
# mcall <tool> <json-args>
local tool="$1" args="$2"
mcporter call \
--http-url "$MCP_URL" \
--allow-http \
--tool "$tool" \
--args "$args" \
--output json \
2>&1
}
_check_output() {
# Returns 0 if output looks like a successful JSON response, 1 otherwise.
local output="$1" exit_code="$2"
[[ $exit_code -ne 0 ]] && return 1
echo "$output" | python3 -c "
import json, sys
try:
d = json.load(sys.stdin)
if isinstance(d, dict) and (d.get('isError') or d.get('error') or 'ToolError' in str(d)):
sys.exit(1)
except Exception:
pass
sys.exit(0)
" 2>/dev/null
}
run_test() {
# Print result; do NOT echo the JSON body (kept quiet for readability).
local label="$1" tool="$2" args="$3"
printf " %-60s" "$label"
local output exit_code=0
output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
if _check_output "$output" "$exit_code"; then
echo -e "${GREEN}PASS${NC}"
((PASS++)) || true
else
echo -e "${RED}FAIL${NC}"
((FAIL++)) || true
FAILED_TESTS+=("$label")
# Show first 3 lines of error detail, indented
echo "$output" | head -3 | sed 's/^/ /'
fi
}
run_test_capture() {
# Like run_test but echoes raw JSON to stdout for ID extraction by caller.
# Status lines go to stderr so the caller's $() captures only clean JSON.
local label="$1" tool="$2" args="$3"
local output exit_code=0
printf " %-60s" "$label" >&2
output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
if _check_output "$output" "$exit_code"; then
echo -e "${GREEN}PASS${NC}" >&2
((PASS++)) || true
else
echo -e "${RED}FAIL${NC}" >&2
((FAIL++)) || true
FAILED_TESTS+=("$label")
echo "$output" | head -3 | sed 's/^/ /' >&2
fi
echo "$output" # pure JSON → captured by caller's $()
}
extract_id() {
# Extract an ID from JSON output using a Python snippet.
# Usage: ID=$(extract_id "$JSON_OUTPUT" "$LABEL" 'python expression')
# If JSON parsing fails (malformed mcporter output), record a FAIL.
# If parsing succeeds but finds no items, return empty (caller skips).
local json_input="$1" label="$2" py_code="$3"
local result="" py_exit=0 parse_err=""
# Capture stdout (the extracted ID) and stderr (any parse errors) separately.
# A temp file is needed because $() can only capture one stream.
local errfile
errfile=$(mktemp)
result=$(echo "$json_input" | python3 -c "$py_code" 2>"$errfile") || py_exit=$?
parse_err=$(<"$errfile")
rm -f "$errfile"
if [[ $py_exit -ne 0 ]]; then
printf " %-60s${RED}FAIL${NC} (JSON parse error)\n" "$label" >&2
[[ -n "$parse_err" ]] && echo "$parse_err" | head -2 | sed 's/^/ /' >&2
((FAIL++)) || true
FAILED_TESTS+=("$label (JSON parse)")
echo ""
return 1
fi
echo "$result"
}
skip_test() {
local label="$1" reason="$2"
printf " %-60s${YELLOW}SKIP${NC} (%s)\n" "$label" "$reason"
((SKIP++)) || true
}
section() {
echo ""
echo -e "${CYAN}${BOLD}━━━ $1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}
# ── connectivity check ────────────────────────────────────────────────────────
echo ""
echo -e "${BOLD}Unraid MCP Non-Destructive Action Test Suite${NC}"
echo -e "Server: ${CYAN}$MCP_URL${NC}"
echo ""
printf "Checking connectivity... "
# Use -s (silent) without -f: a 4xx/406 means the MCP server is up and
# responding correctly to a plain GET — only "connection refused" is fatal.
# Capture curl's exit code directly — don't mask failures with a fallback.
HTTP_CODE=""
curl_exit=0
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$MCP_URL" 2>/dev/null) || curl_exit=$?
if [[ $curl_exit -ne 0 ]]; then
echo -e "${RED}UNREACHABLE${NC} (curl exit code: $curl_exit)"
echo "Start the server first: docker compose up -d OR uv run unraid-mcp-server"
exit 1
fi
echo -e "${GREEN}OK${NC} (HTTP $HTTP_CODE)"
# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 1 — Param-free read actions
# ═══════════════════════════════════════════════════════════════════════════════
section "unraid_info (19 query actions)"
run_test "info: overview" unraid_info '{"action":"overview"}'
run_test "info: array" unraid_info '{"action":"array"}'
run_test "info: network" unraid_info '{"action":"network"}'
run_test "info: registration" unraid_info '{"action":"registration"}'
run_test "info: connect" unraid_info '{"action":"connect"}'
run_test "info: variables" unraid_info '{"action":"variables"}'
run_test "info: metrics" unraid_info '{"action":"metrics"}'
run_test "info: services" unraid_info '{"action":"services"}'
run_test "info: display" unraid_info '{"action":"display"}'
run_test "info: config" unraid_info '{"action":"config"}'
run_test "info: online" unraid_info '{"action":"online"}'
run_test "info: owner" unraid_info '{"action":"owner"}'
run_test "info: settings" unraid_info '{"action":"settings"}'
run_test "info: server" unraid_info '{"action":"server"}'
run_test "info: servers" unraid_info '{"action":"servers"}'
run_test "info: flash" unraid_info '{"action":"flash"}'
run_test "info: ups_devices" unraid_info '{"action":"ups_devices"}'
run_test "info: ups_device" unraid_info '{"action":"ups_device"}'
run_test "info: ups_config" unraid_info '{"action":"ups_config"}'
skip_test "info: update_server" "mutation — state-changing"
skip_test "info: update_ssh" "mutation — state-changing"
section "unraid_array"
run_test "array: parity_status" unraid_array '{"action":"parity_status"}'
skip_test "array: parity_start" "mutation — starts parity check"
skip_test "array: parity_pause" "mutation — pauses parity check"
skip_test "array: parity_resume" "mutation — resumes parity check"
skip_test "array: parity_cancel" "mutation — cancels parity check"
section "unraid_storage (param-free reads)"
STORAGE_DISKS=$(run_test_capture "storage: disks" unraid_storage '{"action":"disks"}')
run_test "storage: shares" unraid_storage '{"action":"shares"}'
run_test "storage: unassigned" unraid_storage '{"action":"unassigned"}'
LOG_FILES=$(run_test_capture "storage: log_files" unraid_storage '{"action":"log_files"}')
skip_test "storage: flash_backup" "destructive (confirm=True required)"
section "unraid_docker (param-free reads)"
DOCKER_LIST=$(run_test_capture "docker: list" unraid_docker '{"action":"list"}')
DOCKER_NETS=$(run_test_capture "docker: networks" unraid_docker '{"action":"networks"}')
run_test "docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
run_test "docker: check_updates" unraid_docker '{"action":"check_updates"}'
run_test "docker: sync_templates" unraid_docker '{"action":"sync_templates"}'
run_test "docker: refresh_digests" unraid_docker '{"action":"refresh_digests"}'
skip_test "docker: start" "mutation — changes container state"
skip_test "docker: stop" "mutation — changes container state"
skip_test "docker: restart" "mutation — changes container state"
skip_test "docker: pause" "mutation — changes container state"
skip_test "docker: unpause" "mutation — changes container state"
skip_test "docker: update" "mutation — updates container image"
skip_test "docker: remove" "destructive (confirm=True required)"
skip_test "docker: update_all" "destructive (confirm=True required)"
skip_test "docker: create_folder" "mutation — changes organizer state"
skip_test "docker: set_folder_children" "mutation — changes organizer state"
skip_test "docker: delete_entries" "destructive (confirm=True required)"
skip_test "docker: move_to_folder" "mutation — changes organizer state"
skip_test "docker: move_to_position" "mutation — changes organizer state"
skip_test "docker: rename_folder" "mutation — changes organizer state"
skip_test "docker: create_folder_with_items" "mutation — changes organizer state"
skip_test "docker: update_view_prefs" "mutation — changes organizer state"
skip_test "docker: reset_template_mappings" "destructive (confirm=True required)"
section "unraid_vm (param-free reads)"
VM_LIST=$(run_test_capture "vm: list" unraid_vm '{"action":"list"}')
skip_test "vm: start" "mutation — changes VM state"
skip_test "vm: stop" "mutation — changes VM state"
skip_test "vm: pause" "mutation — changes VM state"
skip_test "vm: resume" "mutation — changes VM state"
skip_test "vm: reboot" "mutation — changes VM state"
skip_test "vm: force_stop" "destructive (confirm=True required)"
skip_test "vm: reset" "destructive (confirm=True required)"
section "unraid_notifications"
run_test "notifications: overview" unraid_notifications '{"action":"overview"}'
run_test "notifications: list" unraid_notifications '{"action":"list"}'
run_test "notifications: warnings" unraid_notifications '{"action":"warnings"}'
run_test "notifications: recalculate" unraid_notifications '{"action":"recalculate"}'
skip_test "notifications: create" "mutation — creates notification"
skip_test "notifications: create_unique" "mutation — creates notification"
skip_test "notifications: archive" "mutation — changes notification state"
skip_test "notifications: unread" "mutation — changes notification state"
skip_test "notifications: archive_all" "mutation — changes notification state"
skip_test "notifications: archive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_all" "mutation — changes notification state"
skip_test "notifications: delete" "destructive (confirm=True required)"
skip_test "notifications: delete_archived" "destructive (confirm=True required)"
section "unraid_rclone"
run_test "rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
run_test "rclone: config_form" unraid_rclone '{"action":"config_form"}'
skip_test "rclone: create_remote" "mutation — creates remote"
skip_test "rclone: delete_remote" "destructive (confirm=True required)"
section "unraid_users"
run_test "users: me" unraid_users '{"action":"me"}'
section "unraid_keys"
KEYS_LIST=$(run_test_capture "keys: list" unraid_keys '{"action":"list"}')
skip_test "keys: create" "mutation — creates API key"
skip_test "keys: update" "mutation — modifies API key"
skip_test "keys: delete" "destructive (confirm=True required)"
section "unraid_health"
run_test "health: check" unraid_health '{"action":"check"}'
run_test "health: test_connection" unraid_health '{"action":"test_connection"}'
run_test "health: diagnose" unraid_health '{"action":"diagnose"}'
section "unraid_settings (all mutations — skipped)"
skip_test "settings: update" "mutation — modifies settings"
skip_test "settings: update_temperature" "mutation — modifies settings"
skip_test "settings: update_time" "mutation — modifies settings"
skip_test "settings: configure_ups" "destructive (confirm=True required)"
skip_test "settings: update_api" "mutation — modifies settings"
skip_test "settings: connect_sign_in" "mutation — authentication action"
skip_test "settings: connect_sign_out" "mutation — authentication action"
skip_test "settings: setup_remote_access" "destructive (confirm=True required)"
skip_test "settings: enable_dynamic_remote_access" "destructive (confirm=True required)"
# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 2 — ID-discovered read actions
# ═══════════════════════════════════════════════════════════════════════════════
section "Phase 2: ID-discovered reads"
# ── docker container ID ───────────────────────────────────────────────────────
CONTAINER_ID=$(extract_id "$DOCKER_LIST" "docker: extract container ID" "
import json, sys
d = json.load(sys.stdin)
containers = d.get('containers') or d.get('data', {}).get('containers') or []
if isinstance(containers, list) and containers:
    c = containers[0]
    cid = c.get('id') or c.get('names', [''])[0].lstrip('/')
    if cid:
        print(cid)
")
if [[ -n "$CONTAINER_ID" ]]; then
  run_test "docker: details (id=$CONTAINER_ID)" \
    unraid_docker "{\"action\":\"details\",\"container_id\":\"$CONTAINER_ID\"}"
  run_test "docker: logs (id=$CONTAINER_ID)" \
    unraid_docker "{\"action\":\"logs\",\"container_id\":\"$CONTAINER_ID\",\"tail_lines\":20}"
else
  skip_test "docker: details" "no containers found to discover ID"
  skip_test "docker: logs" "no containers found to discover ID"
fi
# ── docker network ID ─────────────────────────────────────────────────────────
NETWORK_ID=$(extract_id "$DOCKER_NETS" "docker: extract network ID" "
import json, sys
d = json.load(sys.stdin)
nets = d.get('networks') or d.get('data', {}).get('networks') or []
if isinstance(nets, list) and nets:
    nid = nets[0].get('id') or nets[0].get('Id')
    if nid:
        print(nid)
")
if [[ -n "$NETWORK_ID" ]]; then
  run_test "docker: network_details (id=$NETWORK_ID)" \
    unraid_docker "{\"action\":\"network_details\",\"network_id\":\"$NETWORK_ID\"}"
else
  skip_test "docker: network_details" "no networks found to discover ID"
fi
# ── disk ID ───────────────────────────────────────────────────────────────────
DISK_ID=$(extract_id "$STORAGE_DISKS" "storage: extract disk ID" "
import json, sys
d = json.load(sys.stdin)
disks = d.get('disks') or d.get('data', {}).get('disks') or []
if isinstance(disks, list) and disks:
    did = disks[0].get('id') or disks[0].get('device')
    if did:
        print(did)
")
if [[ -n "$DISK_ID" ]]; then
  run_test "storage: disk_details (id=$DISK_ID)" \
    unraid_storage "{\"action\":\"disk_details\",\"disk_id\":\"$DISK_ID\"}"
else
  skip_test "storage: disk_details" "no disks found to discover ID"
fi
# ── log path ──────────────────────────────────────────────────────────────────
LOG_PATH=$(extract_id "$LOG_FILES" "storage: extract log path" "
import json, sys
d = json.load(sys.stdin)
files = d.get('log_files') or d.get('files') or d.get('data', {}).get('log_files') or []
if isinstance(files, list) and files:
    p = files[0].get('path') or (files[0] if isinstance(files[0], str) else None)
    if p:
        print(p)
")
if [[ -n "$LOG_PATH" ]]; then
  run_test "storage: logs (path=$LOG_PATH)" \
    unraid_storage "{\"action\":\"logs\",\"log_path\":\"$LOG_PATH\",\"tail_lines\":20}"
else
  skip_test "storage: logs" "no log files found to discover path"
fi
# ── VM ID ─────────────────────────────────────────────────────────────────────
VM_ID=$(extract_id "$VM_LIST" "vm: extract VM ID" "
import json, sys
d = json.load(sys.stdin)
vms = d.get('vms') or d.get('data', {}).get('vms') or []
if isinstance(vms, list) and vms:
    vid = vms[0].get('uuid') or vms[0].get('id') or vms[0].get('name')
    if vid:
        print(vid)
")
if [[ -n "$VM_ID" ]]; then
  run_test "vm: details (id=$VM_ID)" \
    unraid_vm "{\"action\":\"details\",\"vm_id\":\"$VM_ID\"}"
else
  skip_test "vm: details" "no VMs found to discover ID"
fi
# ── API key ID ────────────────────────────────────────────────────────────────
KEY_ID=$(extract_id "$KEYS_LIST" "keys: extract key ID" "
import json, sys
d = json.load(sys.stdin)
keys = d.get('keys') or d.get('apiKeys') or d.get('data', {}).get('keys') or []
if isinstance(keys, list) and keys:
    kid = keys[0].get('id')
    if kid:
        print(kid)
")
if [[ -n "$KEY_ID" ]]; then
  run_test "keys: get (id=$KEY_ID)" \
    unraid_keys "{\"action\":\"get\",\"key_id\":\"$KEY_ID\"}"
else
  skip_test "keys: get" "no API keys found to discover ID"
fi
# ═══════════════════════════════════════════════════════════════════════════════
# SUMMARY
# ═══════════════════════════════════════════════════════════════════════════════
TOTAL=$((PASS + FAIL + SKIP))
echo ""
echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}Results: ${GREEN}${PASS} passed${NC} ${RED}${FAIL} failed${NC} ${YELLOW}${SKIP} skipped${NC} (${TOTAL} total)"
if [[ ${#FAILED_TESTS[@]} -gt 0 ]]; then
  echo ""
  echo -e "${RED}${BOLD}Failed tests:${NC}"
  for t in "${FAILED_TESTS[@]}"; do
    echo -e "  ${RED}✗${NC} $t"
  done
fi
echo ""
[[ $FAIL -eq 0 ]] && exit 0 || exit 1


@@ -149,8 +149,8 @@ test_notifications_delete() {
     # Create the notification
     local create_raw
-    create_raw="$(mcall unraid_notifications \
-        '{"action":"create","title":"mcp-test-delete","subject":"MCP destructive test","description":"Safe to delete","importance":"INFO"}')"
+    create_raw="$(mcall unraid \
+        '{"action":"notification","subaction":"create","title":"mcp-test-delete","subject":"MCP destructive test","description":"Safe to delete","importance":"INFO"}')"
     local create_ok
     create_ok="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('success', False))" 2>/dev/null)"
     if [[ "${create_ok}" != "True" ]]; then
@@ -161,7 +161,7 @@ test_notifications_delete() {
     # The create response ID doesn't match the stored filename — list and find by title.
     # Use the LAST match so a stale notification with the same title is bypassed.
     local list_raw nid
-    list_raw="$(mcall unraid_notifications '{"action":"list","notification_type":"UNREAD"}')"
+    list_raw="$(mcall unraid '{"action":"notification","subaction":"list","notification_type":"UNREAD"}')"
     nid="$(python3 -c "
 import json,sys
 d = json.loads('''${list_raw}''')
@@ -177,8 +177,8 @@ print(matches[0] if matches else '')
     fi
     local del_raw
-    del_raw="$(mcall unraid_notifications \
-        "{\"action\":\"delete\",\"notification_id\":\"${nid}\",\"notification_type\":\"UNREAD\",\"confirm\":true}")"
+    del_raw="$(mcall unraid \
+        "{\"action\":\"notification\",\"subaction\":\"delete\",\"notification_id\":\"${nid}\",\"notification_type\":\"UNREAD\",\"confirm\":true}")"
     # success=true OR deleteNotification key present (raw GraphQL response) both indicate success
     local success
     success="$(python3 -c "
@@ -190,7 +190,7 @@ print(ok)
     if [[ "${success}" != "True" ]]; then
         # Leak: notification created but not deleted — archive it so it doesn't clutter the feed
-        mcall unraid_notifications "{\"action\":\"archive\",\"notification_id\":\"${nid}\"}" &>/dev/null || true
+        mcall unraid "{\"action\":\"notification\",\"subaction\":\"archive\",\"notification_id\":\"${nid}\"}" &>/dev/null || true
         fail_test "${label}" "delete did not return success=true: ${del_raw} (notification archived as fallback cleanup)"
         return
     fi
@@ -201,7 +201,7 @@ print(ok)
 if ${CONFIRM}; then
     test_notifications_delete
 else
-    dry_run "notifications: delete [create notification → mcall unraid_notifications delete]"
+    dry_run "notifications: delete [create notification → mcall unraid action=notification subaction=delete]"
 fi
 # ---------------------------------------------------------------------------
@@ -227,7 +227,7 @@ test_keys_delete() {
     # Guard: abort if test key already exists (don't delete a real key)
     # Note: API key names cannot contain hyphens — use "mcp test key"
     local existing_keys
-    existing_keys="$(mcall unraid_keys '{"action":"list"}')"
+    existing_keys="$(mcall unraid '{"action":"key","subaction":"list"}')"
     if python3 -c "
 import json,sys
 d = json.loads('''${existing_keys}''')
@@ -241,8 +241,8 @@ sys.exit(1 if any(k.get('name') == 'mcp test key' for k in keys) else 0)
     fi
     local create_raw
-    create_raw="$(mcall unraid_keys \
-        '{"action":"create","name":"mcp test key","roles":["VIEWER"]}')"
+    create_raw="$(mcall unraid \
+        '{"action":"key","subaction":"create","name":"mcp test key","roles":["VIEWER"]}')"
     local kid
     kid="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('key',{}).get('id',''))" 2>/dev/null)"
@@ -252,20 +252,20 @@ sys.exit(1 if any(k.get('name') == 'mcp test key' for k in keys) else 0)
     fi
     local del_raw
-    del_raw="$(mcall unraid_keys "{\"action\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}")"
+    del_raw="$(mcall unraid "{\"action\":\"key\",\"subaction\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}")"
     local success
     success="$(python3 -c "import json,sys; d=json.loads('''${del_raw}'''); print(d.get('success', False))" 2>/dev/null)"
     if [[ "${success}" != "True" ]]; then
         # Cleanup: attempt to delete the leaked key so future runs are not blocked
-        mcall unraid_keys "{\"action\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}" &>/dev/null || true
+        mcall unraid "{\"action\":\"key\",\"subaction\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}" &>/dev/null || true
         fail_test "${label}" "delete did not return success=true: ${del_raw} (key delete re-attempted as fallback cleanup)"
         return
     fi
     # Verify gone
     local list_raw
-    list_raw="$(mcall unraid_keys '{"action":"list"}')"
+    list_raw="$(mcall unraid '{"action":"key","subaction":"list"}')"
     if python3 -c "
 import json,sys
 d = json.loads('''${list_raw}''')
@@ -281,7 +281,7 @@ sys.exit(0 if not any(k.get('id') == '${kid}' for k in keys) else 1)
 if ${CONFIRM}; then
     test_keys_delete
 else
-    dry_run "keys: delete [create test key → mcall unraid_keys delete]"
+    dry_run "keys: delete [create test key → mcall unraid action=key subaction=delete]"
 fi
# ---------------------------------------------------------------------------


@@ -134,6 +134,11 @@ check_prerequisites() {
         missing=true
     fi
+    if ! command -v jq &>/dev/null; then
+        log_error "jq not found in PATH. Install it and re-run."
+        missing=true
+    fi
     if [[ ! -f "${PROJECT_DIR}/pyproject.toml" ]]; then
         log_error "pyproject.toml not found at ${PROJECT_DIR}. Wrong directory?"
         missing=true
@@ -181,10 +186,12 @@ smoke_test_server() {
 import sys, json
 try:
     d = json.load(sys.stdin)
-    if 'status' in d or 'success' in d or 'error' in d:
+    if 'error' in d:
+        print('error: tool returned error key — ' + str(d.get('error', '')))
+    elif 'status' in d or 'success' in d:
         print('ok')
     else:
-        print('missing: no status/success/error key in response')
+        print('missing: no status/success key in response')
 except Exception as e:
     print('parse_error: ' + str(e))
 " 2>/dev/null
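The reordered checks matter because a payload can carry both an `error` key and a `success` key; testing `error` first prevents a false "ok". The triage logic, restated as a standalone sketch (function name is illustrative):

```python
import json

def triage(raw: str) -> str:
    """Classify a tool response: error beats ok, and both beat a missing key."""
    try:
        d = json.loads(raw)
    except ValueError as e:  # json.JSONDecodeError subclasses ValueError
        return "parse_error: " + str(e)
    if "error" in d:
        return "error: tool returned error key — " + str(d.get("error", ""))
    if "status" in d or "success" in d:
        return "ok"
    return "missing: no status/success key in response"

# A payload with both keys is now flagged instead of passing:
print(triage('{"success": true, "error": "auth failed"}'))
```

Under the old ordering, the same payload would have printed `ok` and the smoke test would have passed despite the error.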
@@ -208,6 +215,7 @@ except Exception as e:
 mcporter_call() {
     local args_json="${1:?args_json required}"
+    # Redirect stderr to the log file so startup warnings/logs don't pollute the JSON stdout.
     mcporter call \
         --stdio "uv run unraid-mcp-server" \
         --cwd "${PROJECT_DIR}" \
@@ -216,7 +224,7 @@ mcporter_call() {
         --args "${args_json}" \
         --timeout "${CALL_TIMEOUT_MS}" \
         --output json \
-        2>&1
+        2>>"${LOG_FILE}"
 }
 # ---------------------------------------------------------------------------
@@ -232,7 +240,7 @@ run_test() {
     t0="$(date +%s%N)"
     local output
-    output="$(mcporter_call "${args}" 2>&1)" || true
+    output="$(mcporter_call "${args}")" || true
     local elapsed_ms
     elapsed_ms="$(( ( $(date +%s%N) - t0 ) / 1000000 ))"
@@ -253,6 +261,31 @@ run_test() {
         return 1
     fi
+    # Always validate JSON is parseable and not an error payload
+    local json_check
+    json_check="$(
+        printf '%s' "${output}" | python3 -c "
+import sys, json
+try:
+    d = json.load(sys.stdin)
+    if isinstance(d, dict) and ('error' in d or d.get('kind') == 'error'):
+        print('error: ' + str(d.get('error', d.get('message', 'unknown error'))))
+    else:
+        print('ok')
+except Exception as e:
+    print('invalid_json: ' + str(e))
+" 2>/dev/null
+    )" || json_check="parse_error"
+    if [[ "${json_check}" != "ok" ]]; then
+        printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
+            "${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
+        printf '  response validation failed: %s\n' "${json_check}" | tee -a "${LOG_FILE}"
+        FAIL_COUNT=$(( FAIL_COUNT + 1 ))
+        FAIL_NAMES+=("${label}")
+        return 1
+    fi
     # Validate optional key presence
     if [[ -n "${expected_key}" ]]; then
         local key_check
@@ -627,7 +660,7 @@ suite_live() {
     run_test "live: memory" '{"action":"live","subaction":"memory"}'
     run_test "live: cpu_telemetry" '{"action":"live","subaction":"cpu_telemetry"}'
     run_test "live: notifications_overview" '{"action":"live","subaction":"notifications_overview"}'
-    run_test "live: log_tail" '{"action":"live","subaction":"log_tail"}'
+    run_test "live: log_tail" '{"action":"live","subaction":"log_tail","path":"/var/log/syslog"}'
 }
# ---------------------------------------------------------------------------


@@ -36,7 +36,7 @@ def _all_domain_dicts(unraid_mod: object) -> list[tuple[str, dict[str, str]]]:
     """
     import types
-    m = unraid_mod  # type: ignore[assignment]
+    m = unraid_mod
     if not isinstance(m, types.ModuleType):
         import importlib
@@ -417,7 +417,6 @@ class TestDockerQueries:
             "details",
             "networks",
             "network_details",
-            "_resolve",
         }
         assert set(QUERIES.keys()) == expected


@@ -3,6 +3,7 @@
 from __future__ import annotations
+from typing import Any
 from unittest.mock import AsyncMock, patch
 import pytest


@@ -17,20 +17,26 @@ class TestGateDestructiveAction:
     @pytest.mark.asyncio
     async def test_non_destructive_action_passes_through(self) -> None:
         """Non-destructive actions are never blocked."""
-        await gate_destructive_action(None, "list", DESTRUCTIVE, False, "irrelevant")
+        await gate_destructive_action(
+            None, "list", DESTRUCTIVE, confirm=False, description="irrelevant"
+        )
     @pytest.mark.asyncio
     async def test_confirm_true_bypasses_elicitation(self) -> None:
         """confirm=True skips elicitation entirely."""
         with patch("unraid_mcp.core.guards.elicit_destructive_confirmation") as mock_elicit:
-            await gate_destructive_action(None, "delete", DESTRUCTIVE, True, "desc")
+            await gate_destructive_action(
+                None, "delete", DESTRUCTIVE, confirm=True, description="desc"
+            )
         mock_elicit.assert_not_called()
     @pytest.mark.asyncio
     async def test_no_ctx_raises_tool_error(self) -> None:
         """ctx=None means elicitation returns False → ToolError."""
         with pytest.raises(ToolError, match="not confirmed"):
-            await gate_destructive_action(None, "delete", DESTRUCTIVE, False, "desc")
+            await gate_destructive_action(
+                None, "delete", DESTRUCTIVE, confirm=False, description="desc"
+            )
     @pytest.mark.asyncio
     async def test_elicitation_accepted_does_not_raise(self) -> None:
@@ -40,7 +46,9 @@ class TestGateDestructiveAction:
             new_callable=AsyncMock,
             return_value=True,
         ):
-            await gate_destructive_action(object(), "delete", DESTRUCTIVE, False, "desc")
+            await gate_destructive_action(
+                object(), "delete", DESTRUCTIVE, confirm=False, description="desc"
+            )
     @pytest.mark.asyncio
     async def test_elicitation_declined_raises_tool_error(self) -> None:
@@ -53,7 +61,9 @@ class TestGateDestructiveAction:
             ) as mock_elicit,
             pytest.raises(ToolError, match="confirm=True"),
         ):
-            await gate_destructive_action(object(), "delete", DESTRUCTIVE, False, "desc")
+            await gate_destructive_action(
+                object(), "delete", DESTRUCTIVE, confirm=False, description="desc"
+            )
         mock_elicit.assert_called_once()
     @pytest.mark.asyncio
@@ -65,7 +75,7 @@ class TestGateDestructiveAction:
             return_value=True,
         ) as mock_elicit:
             await gate_destructive_action(
-                object(), "delete", DESTRUCTIVE, False, "Delete everything."
+                object(), "delete", DESTRUCTIVE, confirm=False, description="Delete everything."
             )
             _, _, desc = mock_elicit.call_args.args
             assert desc == "Delete everything."
@@ -79,7 +89,9 @@ class TestGateDestructiveAction:
             new_callable=AsyncMock,
             return_value=True,
         ) as mock_elicit:
-            await gate_destructive_action(object(), "wipe", DESTRUCTIVE, False, descs)
+            await gate_destructive_action(
+                object(), "wipe", DESTRUCTIVE, confirm=False, description=descs
+            )
             _, _, desc = mock_elicit.call_args.args
             assert desc == "Wipe desc."
@@ -87,4 +99,6 @@ class TestGateDestructiveAction:
     async def test_error_message_contains_action_name(self) -> None:
         """ToolError message includes the action name."""
         with pytest.raises(ToolError, match="'delete'"):
-            await gate_destructive_action(None, "delete", DESTRUCTIVE, False, "desc")
+            await gate_destructive_action(
+                None, "delete", DESTRUCTIVE, confirm=False, description="desc"
+            )


@@ -141,8 +141,8 @@ class TestHealthActions:
                 "unraid_mcp.subscriptions.utils._analyze_subscription_status",
                 return_value=(0, []),
             ),
-            patch("unraid_mcp.server.cache_middleware", mock_cache),
-            patch("unraid_mcp.server.error_middleware", mock_error),
+            patch("unraid_mcp.server._cache_middleware", mock_cache),
+            patch("unraid_mcp.server._error_middleware", mock_error),
         ):
             result = await tool_fn(action="health", subaction="diagnose")
             assert "subscriptions" in result
@@ -404,7 +404,8 @@ async def test_health_setup_declined_message_includes_manual_path() -> None:
     real_path_str = str(CREDENTIALS_ENV_PATH)
     mock_path = MagicMock()
     mock_path.exists.return_value = False
-    type(mock_path).__str__ = lambda self: real_path_str  # type: ignore[method-assign]
+    # Override __str__ on the instance's mock directly — avoids mutating the shared MagicMock class.
+    mock_path.__str__ = MagicMock(return_value=real_path_str)
     with (
         patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),

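`unittest.mock` supports assigning a mock to a supported magic method directly on a `Mock`/`MagicMock` instance, so the `__str__` override stays scoped to that one mock and needs no type-level assignment or `type: ignore`. A minimal sketch (the path string is illustrative):

```python
from unittest.mock import MagicMock

mock_path = MagicMock()
mock_path.exists.return_value = False
# Configure the magic method on this instance only — no class-level patching.
mock_path.__str__ = MagicMock(return_value="/home/user/.unraid-mcp/.env")

print(str(mock_path))       # the configured path string
print(mock_path.exists())   # False
```

This is the documented pattern for mocking magic methods; `str(mock_path)` now yields the configured value while every other attribute access keeps normal `MagicMock` behavior.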

@@ -1,6 +1,7 @@
 """Tests for key subactions of the consolidated unraid tool."""
-from collections.abc import Generator
+from collections.abc import Callable, Generator
+from typing import Any
 from unittest.mock import AsyncMock, patch
 import pytest
@@ -15,7 +16,7 @@ def _mock_graphql() -> Generator[AsyncMock, None, None]:
     yield mock
-def _make_tool():
+def _make_tool() -> Callable[..., Any]:
     return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


@@ -20,20 +20,23 @@ def _make_tool():
 class TestRcloneValidation:
-    async def test_delete_requires_confirm(self) -> None:
+    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not confirmed"):
             await tool_fn(action="rclone", subaction="delete_remote", name="gdrive")
+        _mock_graphql.assert_not_awaited()
-    async def test_create_requires_fields(self) -> None:
+    async def test_create_requires_fields(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="requires name"):
             await tool_fn(action="rclone", subaction="create_remote")
+        _mock_graphql.assert_not_awaited()
-    async def test_delete_requires_name(self) -> None:
+    async def test_delete_requires_name(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="name is required"):
             await tool_fn(action="rclone", subaction="delete_remote", confirm=True)
+        _mock_graphql.assert_not_awaited()
 class TestRcloneActions:


@@ -17,6 +17,16 @@ def _make_resources():
     return test_mcp
+def _get_resource(mcp: FastMCP, uri: str):
+    """Look up a registered resource by URI.
+    Accesses FastMCP provider internals. If this breaks after a FastMCP upgrade,
+    check whether a public resource-lookup API has been added upstream.
+    """
+    key = f"resource:{uri}@"
+    return mcp.providers[0]._components[key]
 @pytest.fixture
 def _mock_ensure_started():
     with patch(
@@ -36,7 +46,7 @@ class TestLiveResourcesUseManagerCache:
         with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
             mock_mgr.get_resource_data = AsyncMock(return_value=cached)
             mcp = _make_resources()
-            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            resource = _get_resource(mcp, f"unraid://live/{action}")
             result = await resource.fn()
             assert json.loads(result) == cached
@@ -49,7 +59,7 @@ class TestLiveResourcesUseManagerCache:
             mock_mgr.get_resource_data = AsyncMock(return_value=None)
             mock_mgr.last_error = {}
             mcp = _make_resources()
-            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            resource = _get_resource(mcp, f"unraid://live/{action}")
             result = await resource.fn()
             parsed = json.loads(result)
             assert parsed["status"] == "connecting"
@@ -60,8 +70,10 @@ class TestLiveResourcesUseManagerCache:
         with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
             mock_mgr.get_resource_data = AsyncMock(return_value=None)
             mock_mgr.last_error = {action: "WebSocket auth failed"}
+            mock_mgr.connection_states = {action: "auth_failed"}
+            mock_mgr.auto_start_enabled = True
             mcp = _make_resources()
-            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            resource = _get_resource(mcp, f"unraid://live/{action}")
             result = await resource.fn()
             parsed = json.loads(result)
             assert parsed["status"] == "error"
@@ -95,8 +107,7 @@ class TestLogsStreamResource:
         with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
             mock_mgr.get_resource_data = AsyncMock(return_value=None)
             mcp = _make_resources()
-            local_provider = mcp.providers[0]
-            resource = local_provider._components["resource:unraid://logs/stream@"]
+            resource = _get_resource(mcp, "unraid://logs/stream")
             result = await resource.fn()
             parsed = json.loads(result)
             assert "status" in parsed
@@ -107,8 +118,7 @@ class TestLogsStreamResource:
         with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
             mock_mgr.get_resource_data = AsyncMock(return_value={})
             mcp = _make_resources()
-            local_provider = mcp.providers[0]
-            resource = local_provider._components["resource:unraid://logs/stream@"]
+            resource = _get_resource(mcp, "unraid://logs/stream")
             result = await resource.fn()
             assert json.loads(result) == {}
@@ -131,7 +141,7 @@ class TestAutoStartDisabledFallback:
             mock_mgr.last_error = {}
             mock_mgr.auto_start_enabled = False
             mcp = _make_resources()
-            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            resource = _get_resource(mcp, f"unraid://live/{action}")
             result = await resource.fn()
             assert json.loads(result) == fallback_data
@@ -150,6 +160,6 @@ class TestAutoStartDisabledFallback:
             mock_mgr.last_error = {}
             mock_mgr.auto_start_enabled = False
             mcp = _make_resources()
-            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            resource = _get_resource(mcp, f"unraid://live/{action}")
             result = await resource.fn()
             assert json.loads(result)["status"] == "connecting"


@@ -1,5 +1,7 @@
 import os
+import stat
+from pathlib import Path
-from unittest.mock import patch
+from unittest.mock import AsyncMock, MagicMock, patch
 import pytest
@@ -100,8 +102,6 @@ def test_run_server_does_not_exit_when_creds_missing(monkeypatch):
 @pytest.mark.asyncio
 async def test_elicit_and_configure_writes_env_file(tmp_path):
     """elicit_and_configure writes a .env file and calls apply_runtime_config."""
-    from unittest.mock import AsyncMock, MagicMock, patch
     from unraid_mcp.core.setup import elicit_and_configure
     mock_ctx = MagicMock()
@@ -133,7 +133,6 @@ async def test_elicit_and_configure_writes_env_file(tmp_path):
 @pytest.mark.asyncio
 async def test_elicit_and_configure_returns_false_on_decline():
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_and_configure
@@ -148,7 +147,6 @@ async def test_elicit_and_configure_returns_false_on_decline():
 @pytest.mark.asyncio
 async def test_elicit_and_configure_returns_false_on_cancel():
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_and_configure
@@ -181,9 +179,6 @@ async def test_make_graphql_request_raises_sentinel_when_unconfigured():
     settings_mod.UNRAID_API_KEY = original_key
-import os  # noqa: E402 — needed for reload-based tests below
 def test_credentials_dir_defaults_to_home_unraid_mcp():
     """CREDENTIALS_DIR defaults to ~/.unraid-mcp when env var is not set."""
     import importlib
@@ -223,9 +218,6 @@ def test_credentials_env_path_is_dot_env_inside_credentials_dir():
     assert s.CREDENTIALS_ENV_PATH == s.CREDENTIALS_DIR / ".env"
-import stat  # noqa: E402
 def test_write_env_creates_credentials_dir_with_700_permissions(tmp_path):
     """_write_env creates CREDENTIALS_DIR with mode 700 (owner-only)."""
     from unraid_mcp.core.setup import _write_env
@@ -342,7 +334,6 @@ def test_write_env_updates_existing_credentials_in_place(tmp_path):
 @pytest.mark.asyncio
 async def test_elicit_and_configure_returns_false_when_client_not_supported():
     """elicit_and_configure returns False when client raises NotImplementedError."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_and_configure
@@ -404,7 +395,6 @@ async def test_elicit_reset_confirmation_returns_false_when_ctx_none():
 @pytest.mark.asyncio
 async def test_elicit_reset_confirmation_returns_true_when_user_confirms():
     """Returns True when the user accepts and answers True."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -421,7 +411,6 @@ async def test_elicit_reset_confirmation_returns_true_when_user_confirms():
 @pytest.mark.asyncio
 async def test_elicit_reset_confirmation_returns_false_when_user_answers_false():
     """Returns False when the user accepts but answers False (does not want to reset)."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -438,7 +427,6 @@ async def test_elicit_reset_confirmation_returns_false_when_user_answers_false()
 @pytest.mark.asyncio
 async def test_elicit_reset_confirmation_returns_false_when_declined():
     """Returns False when the user declines via action (dismisses the prompt)."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -454,7 +442,6 @@ async def test_elicit_reset_confirmation_returns_false_when_declined():
 @pytest.mark.asyncio
 async def test_elicit_reset_confirmation_returns_false_when_cancelled():
     """Returns False when the user cancels the prompt."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -468,13 +455,13 @@ async def test_elicit_reset_confirmation_returns_false_when_cancelled():
 @pytest.mark.asyncio
-async def test_elicit_reset_confirmation_returns_true_when_not_implemented():
-    """Returns True (proceed with reset) when the MCP client does not support elicitation.
+async def test_elicit_reset_confirmation_returns_false_when_not_implemented():
+    """Returns False (decline reset) when the MCP client does not support elicitation.
-    Non-interactive clients (stdio, CI) must not be permanently blocked from
-    reconfiguring credentials just because they can't ask the user a yes/no question.
+    Auto-approving a destructive credential reset on non-interactive clients would
+    silently overwrite working credentials. Callers must use a client that supports
+    elicitation or configure credentials directly via the .env file.
     """
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -482,13 +469,12 @@ async def test_elicit_reset_confirmation_returns_true_when_not_implemented():
     mock_ctx.elicit = AsyncMock(side_effect=NotImplementedError("elicitation not supported"))
     result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
-    assert result is True
+    assert result is False
 @pytest.mark.asyncio
 async def test_elicit_reset_confirmation_includes_current_url_in_prompt():
     """The elicitation message includes the current URL so the user knows what they're replacing."""
-    from unittest.mock import AsyncMock, MagicMock
     from unraid_mcp.core.setup import elicit_reset_confirmation
@@ -507,8 +493,6 @@ async def test_elicit_reset_confirmation_includes_current_url_in_prompt():
 @pytest.mark.asyncio
 async def test_credentials_not_configured_surfaces_as_tool_error_with_path():
     """CredentialsNotConfiguredError from a tool becomes ToolError with the credentials path."""
-    from unittest.mock import AsyncMock, patch
     from tests.conftest import make_tool_fn
     from unraid_mcp.config.settings import CREDENTIALS_ENV_PATH
     from unraid_mcp.core.exceptions import CredentialsNotConfiguredError, ToolError


@@ -56,20 +56,23 @@ class TestStorageValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="log_path"):
             await tool_fn(action="disk", subaction="logs")
+        _mock_graphql.assert_not_awaited()
     async def test_logs_rejects_invalid_path(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="log_path must start with"):
             await tool_fn(action="disk", subaction="logs", log_path="/etc/shadow")
+        _mock_graphql.assert_not_awaited()
     async def test_logs_rejects_path_traversal(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        # Traversal that escapes /var/log/ to reach /etc/shadow
-        with pytest.raises(ToolError, match="log_path must start with"):
+        # Traversal that escapes /var/log/ — detected by early .. check
+        with pytest.raises(ToolError, match="log_path"):
             await tool_fn(action="disk", subaction="logs", log_path="/var/log/../../etc/shadow")
-        # Traversal that escapes /mnt/ to reach /etc/passwd
-        with pytest.raises(ToolError, match="log_path must start with"):
-            await tool_fn(action="disk", subaction="logs", log_path="/mnt/../etc/passwd")
+        # Traversal via .. — detected by early .. check
+        with pytest.raises(ToolError, match="log_path"):
+            await tool_fn(action="disk", subaction="logs", log_path="/var/log/../etc/passwd")
+        _mock_graphql.assert_not_awaited()
     async def test_logs_allows_valid_paths(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}
@@ -83,11 +86,13 @@ class TestStorageValidation:
             await tool_fn(
                 action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=10_001
             )
+        _mock_graphql.assert_not_awaited()
     async def test_logs_tail_lines_zero_rejected(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="tail_lines must be between"):
             await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=0)
+        _mock_graphql.assert_not_awaited()
     async def test_logs_tail_lines_at_max_accepted(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}

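The updated traversal tests reflect a two-stage validation order: reject any `..` component first, then enforce the allowed prefix. The order matters because a prefix check alone accepts `/var/log/../etc/passwd`. A standalone sketch of that ordering (the prefix tuple is illustrative, not the server's actual allow-list):

```python
ALLOWED_PREFIXES = ("/var/log/", "/mnt/", "/boot/")  # illustrative allow-list

def validate_log_path(log_path: str) -> None:
    """Reject traversal first, then enforce the allow-listed prefixes."""
    if ".." in log_path.split("/"):
        # Early check: catches /var/log/../etc/passwd, whose prefix alone looks valid.
        raise ValueError("log_path must not contain '..' components")
    if not log_path.startswith(ALLOWED_PREFIXES):
        raise ValueError(f"log_path must start with one of {ALLOWED_PREFIXES}")

validate_log_path("/var/log/syslog")  # accepted
```

Because the `..` check fires first, the traversal tests can only assert on a generic `log_path` error message rather than the "must start with" text, which is exactly the relaxation the hunk makes.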

@@ -5,6 +5,7 @@ and provides all configuration constants used throughout the application.
 """
 import os
+import sys
 from pathlib import Path
 from typing import Any
@@ -51,13 +52,9 @@ def _parse_port(env_var: str, default: int) -> int:
     try:
         port = int(raw)
     except ValueError:
-        import sys
         print(f"FATAL: {env_var}={raw!r} is not a valid integer port number", file=sys.stderr)
         sys.exit(1)
     if not (1 <= port <= 65535):
-        import sys
         print(f"FATAL: {env_var}={port} outside valid port range 1-65535", file=sys.stderr)
         sys.exit(1)
     return port
@@ -65,7 +62,7 @@ def _parse_port(env_var: str, default: int) -> int:
 UNRAID_MCP_PORT = _parse_port("UNRAID_MCP_PORT", 6970)
 UNRAID_MCP_HOST = os.getenv("UNRAID_MCP_HOST", "0.0.0.0")  # noqa: S104 — intentional for Docker
-UNRAID_MCP_TRANSPORT = os.getenv("UNRAID_MCP_TRANSPORT", "streamable-http").lower()
+UNRAID_MCP_TRANSPORT = os.getenv("UNRAID_MCP_TRANSPORT", "stdio").lower()
 # SSL Configuration
 raw_verify_ssl = os.getenv("UNRAID_VERIFY_SSL", "true").lower()


@@ -52,13 +52,16 @@ async def elicit_reset_confirmation(ctx: Context | None, current_url: str) -> bo
             response_type=bool,
         )
     except NotImplementedError:
-        # Client doesn't support elicitation — treat as "proceed with reset" so
-        # non-interactive clients (stdio, CI) are not permanently blocked from
-        # reconfiguring credentials.
+        # Client doesn't support elicitation — return False (decline the reset).
+        # Auto-approving a destructive credential reset on non-interactive clients
+        # could silently overwrite working credentials; callers must use a client
+        # that supports elicitation or configure credentials directly in the .env file.
         logger.warning(
-            "MCP client does not support elicitation for reset confirmation — proceeding with reset."
+            "MCP client does not support elicitation for reset confirmation — declining reset. "
+            "To reconfigure credentials, edit %s directly.",
+            CREDENTIALS_ENV_PATH,
         )
-        return True
+        return False
     if result.action != "accept":
         logger.info("Credential reset declined by user (%s).", result.action)


@@ -13,8 +13,9 @@ from fastmcp.server.middleware.logging import LoggingMiddleware
 from fastmcp.server.middleware.rate_limiting import SlidingWindowRateLimitingMiddleware
 from fastmcp.server.middleware.response_limiting import ResponseLimitingMiddleware
-from .config.logging import logger
+from .config.logging import log_configuration_status, logger
 from .config.settings import (
+    LOG_LEVEL_STR,
     UNRAID_MCP_HOST,
     UNRAID_MCP_PORT,
     UNRAID_MCP_TRANSPORT,
@@ -39,26 +40,36 @@ _logging_middleware = LoggingMiddleware(
# 2. Catch any unhandled exceptions and convert to proper MCP errors.
# Tracks error_counts per (exception_type:method) for health diagnose.
-error_middleware = ErrorHandlingMiddleware(
+_error_middleware = ErrorHandlingMiddleware(
logger=logger,
-include_traceback=True,
+include_traceback=LOG_LEVEL_STR == "DEBUG",
)
-# 3. Unraid API rate limit: 100 requests per 10 seconds.
-# Use a sliding window that stays comfortably under that cap.
-_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=90, window_minutes=1)
+# 3. Rate limiting: 540 requests per 60-second sliding window.
+# SlidingWindowRateLimitingMiddleware only supports window_minutes (int), so the
+# upstream Unraid "100 req/10 s" burst limit cannot be enforced exactly here.
+# 540 req/min is a conservative 1-minute equivalent that prevents sustained
+# overload while staying well under the 600 req/min ceiling.
+# Note: this does NOT cap bursts within a 10 s window; a client can still send
+# up to 540 requests in the first 10 s of a window. Add a sub-minute rate limiter
+# in front of this server (e.g. nginx limit_req) if tighter burst control is needed.
+_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=540, window_minutes=1)
# 4. Cap tool responses at 512 KB to protect the client context window.
# Oversized responses are truncated with a clear suffix rather than erroring.
_response_limiter = ResponseLimitingMiddleware(max_size=512_000)
-# 5. Cache tool calls in-memory (MemoryStore default — no extra deps).
-# Short 30 s TTL absorbs burst duplicate requests while keeping data fresh.
-# Destructive calls won't hit the cache in practice (unique confirm=True + IDs).
-cache_middleware = ResponseCachingMiddleware(
+# 5. Cache middleware — all call_tool caching is disabled for the `unraid` tool.
+# CallToolSettings supports excluded_tools/included_tools by tool name only; there
+# is no per-argument or per-subaction exclusion mechanism. The cache key is
+# "{tool_name}:{arguments_str}", so a cached stop("nginx") result would be served
+# back on a retry within the TTL window even though the container is already stopped.
+# Mutation subactions (start, stop, restart, reboot, etc.) must never be cached.
+# Because the consolidated `unraid` tool mixes reads and mutations under one name,
+# the only safe option is to disable caching for the entire tool.
+_cache_middleware = ResponseCachingMiddleware(
call_tool_settings=CallToolSettings(
-ttl=30,
-included_tools=["unraid"],
+enabled=False,
),
# Disable caching for list/resource/prompt — those are cheap.
list_tools_settings={"enabled": False},
@@ -68,23 +79,30 @@ cache_middleware = ResponseCachingMiddleware(
get_prompt_settings={"enabled": False},
)
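The caching hazard described in the comment comes down to the cache key ignoring subaction semantics. A toy illustration; the key shape is quoted from the comment, while `cache_key` and its argument serialization are assumptions for demonstration:

```python
def cache_key(tool_name: str, arguments: dict) -> str:
    # Same shape as the "{tool_name}:{arguments_str}" key described above;
    # the exact serialization in the middleware may differ.
    arguments_str = repr(sorted(arguments.items()))
    return f"{tool_name}:{arguments_str}"


first = cache_key("unraid", {"action": "docker", "subaction": "stop", "container_id": "nginx"})
retry = cache_key("unraid", {"action": "docker", "subaction": "stop", "container_id": "nginx"})
# An identical retry produces an identical key, so within the TTL the cached
# (already-stale) stop result would be replayed instead of re-executing the stop.
```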
-# Initialize FastMCP instance
+# Initialize FastMCP instance — no built-in auth.
+# Authentication is delegated to an external OAuth gateway (nginx, Caddy,
+# Authelia, Authentik, etc.) placed in front of this server.
mcp = FastMCP(
name="Unraid MCP Server",
instructions="Provides tools to interact with an Unraid server's GraphQL API.",
version=VERSION,
middleware=[
_logging_middleware,
-error_middleware,
+_error_middleware,
_rate_limiter,
_response_limiter,
-cache_middleware,
+_cache_middleware,
],
)
# Note: SubscriptionManager singleton is defined in subscriptions/manager.py
# and imported by resources.py - no duplicate instance needed here
+# Register all modules at import time so `fastmcp run server.py --reload` can
+# discover the fully-configured `mcp` object without going through run_server().
+# run_server() no longer calls this — tools are registered exactly once here.
def register_all_modules() -> None:
"""Register all tools and resources with the MCP instance."""
@@ -103,6 +121,9 @@ def register_all_modules() -> None:
raise
+register_all_modules()
def run_server() -> None:
"""Run the MCP server with the configured transport."""
# Validate required configuration before anything else
@@ -113,9 +134,6 @@ def run_server() -> None:
"Server will prompt for credentials on first tool call via elicitation."
)
# Log configuration (delegated to shared function)
-from .config.logging import log_configuration_status
log_configuration_status(logger)
if UNRAID_VERIFY_SSL is False:
@@ -125,21 +143,29 @@ def run_server() -> None:
"Only use this in trusted networks or for development."
)
-# Register all modules
-register_all_modules()
+if UNRAID_MCP_TRANSPORT in ("streamable-http", "sse"):
+logger.warning(
+"⚠️ NO AUTHENTICATION — HTTP server is open to all clients on the network. "
+"Protect this server with an external OAuth gateway (nginx, Caddy, Authelia, Authentik) "
+"or restrict access at the network layer (firewall, VPN, Tailscale)."
+)
logger.info(
f"Starting Unraid MCP Server on {UNRAID_MCP_HOST}:{UNRAID_MCP_PORT} using {UNRAID_MCP_TRANSPORT} transport..."
)
try:
-if UNRAID_MCP_TRANSPORT == "streamable-http":
-mcp.run(
-transport="streamable-http", host=UNRAID_MCP_HOST, port=UNRAID_MCP_PORT, path="/mcp"
-)
-elif UNRAID_MCP_TRANSPORT == "sse":
-logger.warning("SSE transport is deprecated. Consider switching to 'streamable-http'.")
-mcp.run(transport="sse", host=UNRAID_MCP_HOST, port=UNRAID_MCP_PORT, path="/mcp")
+if UNRAID_MCP_TRANSPORT in ("streamable-http", "sse"):
+if UNRAID_MCP_TRANSPORT == "sse":
+logger.warning(
+"SSE transport is deprecated. Consider switching to 'streamable-http'."
+)
+mcp.run(
+transport=UNRAID_MCP_TRANSPORT,
+host=UNRAID_MCP_HOST,
+port=UNRAID_MCP_PORT,
+path="/mcp",
+)
elif UNRAID_MCP_TRANSPORT == "stdio":
mcp.run()
else:

View File

@@ -15,13 +15,18 @@ import websockets
from fastmcp import FastMCP
from websockets.typing import Subprotocol
+from ..config import settings as _settings
from ..config.logging import logger
-from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL
from ..core.exceptions import ToolError
from ..core.utils import safe_display_url
from .manager import subscription_manager
from .resources import ensure_subscriptions_started
-from .utils import _analyze_subscription_status, build_ws_ssl_context, build_ws_url
+from .utils import (
+_analyze_subscription_status,
+build_connection_init,
+build_ws_ssl_context,
+build_ws_url,
+)
# Schema field names that appear inside the selection set of allowed subscriptions.
@@ -125,15 +130,8 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
ping_interval=30,
ping_timeout=10,
) as websocket:
-# Send connection init (using standard X-API-Key format)
-await websocket.send(
-json.dumps(
-{
-"type": "connection_init",
-"payload": {"x-api-key": UNRAID_API_KEY},
-}
-)
-)
+# Send connection init
+await websocket.send(json.dumps(build_connection_init()))
# Wait for ack
response = await websocket.recv()
@@ -203,7 +201,7 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
# Calculate WebSocket URL
ws_url_display: str | None = None
-if UNRAID_API_URL:
+if _settings.UNRAID_API_URL:
try:
ws_url_display = build_ws_url()
except ValueError:
@@ -215,8 +213,8 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
"environment": {
"auto_start_enabled": subscription_manager.auto_start_enabled,
"max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
-"unraid_api_url": safe_display_url(UNRAID_API_URL),
-"api_key_configured": bool(UNRAID_API_KEY),
+"unraid_api_url": safe_display_url(_settings.UNRAID_API_URL),
+"api_key_configured": bool(_settings.UNRAID_API_KEY),
"websocket_url": ws_url_display,
},
"subscriptions": status,

View File

@@ -15,11 +15,11 @@ from typing import Any
import websockets
from websockets.typing import Subprotocol
+from ..config import settings as _settings
from ..config.logging import logger
-from ..config.settings import UNRAID_API_KEY
from ..core.client import redact_sensitive
from ..core.types import SubscriptionData
-from .utils import build_ws_ssl_context, build_ws_url
+from .utils import build_connection_init, build_ws_ssl_context, build_ws_url
# Resource data size limits to prevent unbounded memory growth
@@ -250,7 +250,7 @@ class SubscriptionManager:
ws_url = build_ws_url()
logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}")
logger.debug(
-f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}"
+f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if _settings.UNRAID_API_KEY else 'No'}"
)
ssl_context = build_ws_ssl_context(ws_url)
@@ -284,13 +284,9 @@ class SubscriptionManager:
logger.debug(
f"[PROTOCOL:{subscription_name}] Initializing GraphQL-WS protocol..."
)
-init_type = "connection_init"
-init_payload: dict[str, Any] = {"type": init_type}
-if UNRAID_API_KEY:
+init_payload = build_connection_init()
+if "payload" in init_payload:
logger.debug(f"[AUTH:{subscription_name}] Adding authentication payload")
-# Use graphql-ws connectionParams format (direct key, not nested headers)
-init_payload["payload"] = {"x-api-key": UNRAID_API_KEY}
else:
logger.warning(
f"[AUTH:{subscription_name}] No API key available for authentication"

View File

@@ -7,7 +7,8 @@ and the MCP protocol, providing fallback queries when subscription data is unava
import asyncio
import json
import os
-from typing import Final
+from collections.abc import Callable, Coroutine
+from typing import Any, Final
import anyio
from fastmcp import FastMCP
@@ -22,6 +23,8 @@ from .snapshot import subscribe_once
_subscriptions_started = False
_startup_lock: Final[asyncio.Lock] = asyncio.Lock()
+_terminal_states = frozenset({"failed", "auth_failed", "max_retries_exceeded"})
async def ensure_subscriptions_started() -> None:
"""Ensure subscriptions are started, called from async context."""
@@ -104,15 +107,17 @@ def register_subscription_resources(mcp: FastMCP) -> None:
}
)
-def _make_resource_fn(action: str):
+def _make_resource_fn(action: str) -> Callable[[], Coroutine[Any, Any, str]]:
async def _live_resource() -> str:
await ensure_subscriptions_started()
data = await subscription_manager.get_resource_data(action)
if data is not None:
return json.dumps(data, indent=2)
-# Surface permanent errors instead of reporting "connecting" indefinitely
+# Surface permanent errors only when the connection is in a terminal failure
+# state — if the subscription has since reconnected, ignore the stale error.
last_error = subscription_manager.last_error.get(action)
-if last_error:
+conn_state = subscription_manager.connection_states.get(action, "")
+if last_error and conn_state in _terminal_states:
return json.dumps(
{
"status": "error",

View File

@@ -11,8 +11,6 @@ WebSocket per call. This is intentional: MCP tools are request-response.
Use the SubscriptionManager for long-lived monitoring resources.
"""
-from __future__ import annotations
import asyncio
import json
from typing import Any
@@ -21,9 +19,8 @@ import websockets
from websockets.typing import Subprotocol
from ..config.logging import logger
-from ..config.settings import UNRAID_API_KEY
from ..core.exceptions import ToolError
-from .utils import build_ws_ssl_context, build_ws_url
+from .utils import build_connection_init, build_ws_ssl_context, build_ws_url
async def subscribe_once(
@@ -50,10 +47,7 @@ async def subscribe_once(
sub_id = "snapshot-1"
# Handshake
-init: dict[str, Any] = {"type": "connection_init"}
-if UNRAID_API_KEY:
-init["payload"] = {"x-api-key": UNRAID_API_KEY}
-await ws.send(json.dumps(init))
+await ws.send(json.dumps(build_connection_init()))
raw = await asyncio.wait_for(ws.recv(), timeout=timeout)
ack = json.loads(raw)
@@ -125,10 +119,7 @@ async def subscribe_collect(
proto = ws.subprotocol or "graphql-transport-ws"
sub_id = "snapshot-1"
-init: dict[str, Any] = {"type": "connection_init"}
-if UNRAID_API_KEY:
-init["payload"] = {"x-api-key": UNRAID_API_KEY}
-await ws.send(json.dumps(init))
+await ws.send(json.dumps(build_connection_init()))
raw = await asyncio.wait_for(ws.recv(), timeout=timeout)
ack = json.loads(raw)

View File

@@ -3,11 +3,11 @@
import ssl as _ssl
from typing import Any
-from ..config.settings import UNRAID_API_URL, UNRAID_VERIFY_SSL
+from ..config import settings as _settings
def build_ws_url() -> str:
-"""Build a WebSocket URL from the configured UNRAID_API_URL.
+"""Build a WebSocket URL from the configured UNRAID_API_URL setting.
Converts http(s) scheme to ws(s) and ensures /graphql path suffix.
@@ -17,19 +17,19 @@ def build_ws_url() -> str:
Raises:
ValueError: If UNRAID_API_URL is not configured or has an unrecognised scheme.
"""
-if not UNRAID_API_URL:
+if not _settings.UNRAID_API_URL:
raise ValueError("UNRAID_API_URL is not configured")
-if UNRAID_API_URL.startswith("https://"):
-ws_url = "wss://" + UNRAID_API_URL[len("https://") :]
-elif UNRAID_API_URL.startswith("http://"):
-ws_url = "ws://" + UNRAID_API_URL[len("http://") :]
-elif UNRAID_API_URL.startswith(("ws://", "wss://")):
-ws_url = UNRAID_API_URL # Already a WebSocket URL
+if _settings.UNRAID_API_URL.startswith("https://"):
+ws_url = "wss://" + _settings.UNRAID_API_URL[len("https://") :]
+elif _settings.UNRAID_API_URL.startswith("http://"):
+ws_url = "ws://" + _settings.UNRAID_API_URL[len("http://") :]
+elif _settings.UNRAID_API_URL.startswith(("ws://", "wss://")):
+ws_url = _settings.UNRAID_API_URL # Already a WebSocket URL
else:
raise ValueError(
f"UNRAID_API_URL must start with http://, https://, ws://, or wss://. "
-f"Got: {UNRAID_API_URL[:20]}..."
+f"Got: {_settings.UNRAID_API_URL[:20]}..."
)
)
if not ws_url.endswith("/graphql"):
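The scheme conversion above can be exercised in isolation. A sketch that takes the URL as a parameter instead of reading the module-level setting; the trailing-slash handling in the suffix branch is an assumption, since the diff does not show that branch's body:

```python
def to_ws_url(api_url: str) -> str:
    """Convert an http(s) GraphQL endpoint URL to its ws(s) equivalent."""
    if api_url.startswith("https://"):
        ws_url = "wss://" + api_url[len("https://"):]
    elif api_url.startswith("http://"):
        ws_url = "ws://" + api_url[len("http://"):]
    elif api_url.startswith(("ws://", "wss://")):
        ws_url = api_url  # Already a WebSocket URL
    else:
        raise ValueError(f"Unsupported scheme: {api_url[:20]}...")
    if not ws_url.endswith("/graphql"):
        # Assumed suffix handling: ensure exactly one /graphql path segment.
        ws_url = ws_url.rstrip("/") + "/graphql"
    return ws_url
```

For example, a base URL like `https://tower.local` (hypothetical host) becomes `wss://tower.local/graphql`.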
@@ -49,9 +49,9 @@ def build_ws_ssl_context(ws_url: str) -> _ssl.SSLContext | None:
"""
if not ws_url.startswith("wss://"):
return None
-if isinstance(UNRAID_VERIFY_SSL, str):
-return _ssl.create_default_context(cafile=UNRAID_VERIFY_SSL)
-if UNRAID_VERIFY_SSL:
+if isinstance(_settings.UNRAID_VERIFY_SSL, str):
+return _ssl.create_default_context(cafile=_settings.UNRAID_VERIFY_SSL)
+if _settings.UNRAID_VERIFY_SSL:
return _ssl.create_default_context()
# Explicitly disable verification (equivalent to verify=False)
ctx = _ssl.SSLContext(_ssl.PROTOCOL_TLS_CLIENT)
@@ -60,6 +60,18 @@ def build_ws_ssl_context(ws_url: str) -> _ssl.SSLContext | None:
return ctx
+def build_connection_init() -> dict[str, Any]:
+"""Build the graphql-ws connection_init message.
+Omits the payload key entirely when no API key is configured —
+sending {"x-api-key": None} and omitting the key differ for some servers.
+"""
+msg: dict[str, Any] = {"type": "connection_init"}
+if _settings.UNRAID_API_KEY:
+msg["payload"] = {"x-api-key": _settings.UNRAID_API_KEY}
+return msg
def _analyze_subscription_status(
status: dict[str, Any],
) -> tuple[int, list[dict[str, Any]]]:

View File

@@ -21,7 +21,6 @@ Actions:
live - Real-time WebSocket subscription snapshots (11 subactions)
"""
import asyncio
import datetime
import os
import re
@@ -32,7 +31,7 @@ from fastmcp import Context, FastMCP
from ..config.logging import logger
from ..core.client import DISK_TIMEOUT, make_graphql_request
-from ..core.exceptions import ToolError, tool_error_handler
+from ..core.exceptions import CredentialsNotConfiguredError, ToolError, tool_error_handler
from ..core.guards import gate_destructive_action
from ..core.setup import elicit_and_configure, elicit_reset_confirmation
from ..core.utils import format_bytes, format_kb, safe_get
@@ -110,7 +109,7 @@ _SYSTEM_QUERIES: dict[str, str] = {
"servers": "query GetServers { servers { id name status wanip lanip localurl remoteurl } }",
"flash": "query GetFlash { flash { id vendor product } }",
"ups_devices": "query GetUpsDevices { upsDevices { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } } }",
-"ups_device": "query GetUpsDevice($id: String!) { upsDeviceById(id: $id) { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage nominalPower currentPower } } }",
+"ups_device": "query GetUpsDevice($id: String!) { upsDeviceById(id: $id) { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } } }",
"ups_config": "query GetUpsConfig { upsConfiguration { service upsCable upsType device batteryLevel minutes timeout killUps upsName } }",
}
@@ -285,6 +284,16 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any
# ===========================================================================
_HEALTH_SUBACTIONS: set[str] = {"check", "test_connection", "diagnose", "setup"}
+_HEALTH_QUERIES: dict[str, str] = {
+"comprehensive_health": (
+"query ComprehensiveHealthCheck {"
+" info { machineId time versions { core { unraid } } os { uptime } }"
+" array { state }"
+" notifications { overview { unread { alert warning total } } }"
+" docker { containers(skipCache: true) { id state status } }"
+" }"
+),
+}
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}
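The `_SEVERITY` map supports a worst-level-wins aggregation, as at the `health_severity = max(...)` line further down. A minimal sketch; `overall_status` and `LEVEL_FOR` are illustrative names, not from the source:

```python
SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}
LEVEL_FOR = {v: k for k, v in SEVERITY.items()}  # inverse lookup: rank -> level name


def overall_status(levels: list[str]) -> str:
    """Fold individual check levels into one status: the worst level wins.

    Unknown levels default to healthy (rank 0), matching _SEVERITY.get(level, 0).
    """
    worst = max((SEVERITY.get(level, 0) for level in levels), default=0)
    return LEVEL_FOR[worst]
```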
@@ -346,7 +355,8 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any]
return await _comprehensive_health_check()
if subaction == "diagnose":
-from ..server import cache_middleware, error_middleware
+from ..server import _cache_middleware as cache_middleware
+from ..server import _error_middleware as error_middleware
from ..subscriptions.manager import subscription_manager
from ..subscriptions.resources import ensure_subscriptions_started
@@ -373,7 +383,7 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any]
"call_tool": {
"hits": cache_stats.call_tool.get.hit,
"misses": cache_stats.call_tool.get.miss,
-"puts": cache_stats.call_tool.put.total,
+"puts": cache_stats.call_tool.put.count,
}
if cache_stats.call_tool
else {"hits": 0, "misses": 0, "puts": 0},
@@ -403,15 +413,7 @@ async def _comprehensive_health_check() -> dict[str, Any]:
health_severity = max(health_severity, _SEVERITY.get(level, 0))
try:
-query = """
-query ComprehensiveHealthCheck {
-info { machineId time versions { core { unraid } } os { uptime } }
-array { state }
-notifications { overview { unread { alert warning total } } }
-docker { containers(skipCache: true) { id state status } }
-}
-"""
-data = await make_graphql_request(query)
+data = await make_graphql_request(_HEALTH_QUERIES["comprehensive_health"])
api_latency = round((time.time() - start_time) * 1000, 2)
health_info: dict[str, Any] = {
@@ -502,6 +504,8 @@ async def _comprehensive_health_check() -> dict[str, Any]:
}
return health_info
+except CredentialsNotConfiguredError:
+raise # Let tool_error_handler convert to setup instructions
except Exception as e:
logger.error(f"Health check failed: {e}", exc_info=True)
return {
@@ -622,10 +626,27 @@ _DISK_MUTATIONS: dict[str, str] = {
_DISK_SUBACTIONS: set[str] = set(_DISK_QUERIES) | set(_DISK_MUTATIONS)
_DISK_DESTRUCTIVE: set[str] = {"flash_backup"}
-_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")
+_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/")
_MAX_TAIL_LINES = 10_000
+def _validate_path(path: str, allowed_prefixes: tuple[str, ...], label: str) -> str:
+"""Validate a remote path string for traversal and allowed prefix.
+Uses pure string normalization — no filesystem access. The path is validated
+locally but consumed on the remote Unraid server, so realpath would resolve
+against the wrong filesystem.
+Returns the normalized path. Raises ToolError on any violation.
+"""
+if ".." in path:
+raise ToolError(f"{label} must not contain path traversal sequences (../)")
+normalized = os.path.normpath(path)
+if not any(normalized.startswith(p) for p in allowed_prefixes):
+raise ToolError(f"{label} must start with one of: {', '.join(allowed_prefixes)}")
+return normalized
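Because `_validate_path` is pure string manipulation, its behavior is easy to verify offline. A simplified stand-in, using `ValueError` in place of the project's `ToolError`:

```python
import os


def validate_path(path: str, allowed_prefixes: tuple[str, ...]) -> str:
    """String-only validation mirroring the helper above (no filesystem access)."""
    if ".." in path:
        # Reject traversal before normalization so ".." never gets collapsed away.
        raise ValueError("path must not contain traversal sequences (../)")
    normalized = os.path.normpath(path)  # collapses "./" and duplicate slashes
    if not any(normalized.startswith(p) for p in allowed_prefixes):
        raise ValueError(f"path must start with one of: {', '.join(allowed_prefixes)}")
    return normalized
```

Ordering matters: checking for `..` first means `normpath` never silently resolves `/var/log/../etc/passwd` into `/etc/passwd`, and the prefix check then runs against an already-canonical string.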
async def _handle_disk(
subaction: str,
disk_id: str | None,
@@ -659,10 +680,7 @@ async def _handle_disk(
raise ToolError(f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}")
if not log_path:
raise ToolError("log_path is required for disk/logs")
-normalized = await asyncio.to_thread(os.path.realpath, log_path)
-if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
-raise ToolError(f"log_path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}")
-log_path = normalized
+log_path = _validate_path(log_path, _ALLOWED_LOG_PREFIXES, "log_path")
if subaction == "flash_backup":
if not remote_name:
@@ -671,6 +689,15 @@ async def _handle_disk(
raise ToolError("source_path is required for disk/flash_backup")
if not destination_path:
raise ToolError("destination_path is required for disk/flash_backup")
+# Validate paths — flash backup source must come from /boot/ only
+if ".." in source_path:
+raise ToolError("source_path must not contain path traversal sequences (../)")
+normalized = os.path.normpath(source_path) # noqa: ASYNC240 — pure string, no I/O
+if not (normalized == "/boot" or normalized.startswith("/boot/")):
+raise ToolError("source_path must start with /boot/ (flash drive only)")
+source_path = normalized
+if ".." in destination_path:
+raise ToolError("destination_path must not contain path traversal sequences (../)")
input_data: dict[str, Any] = {
"remoteName": remote_name,
"sourcePath": source_path,
@@ -738,9 +765,13 @@ _DOCKER_QUERIES: dict[str, str] = {
"details": "query GetContainerDetails { docker { containers(skipCache: false) { id names image imageId command created ports { ip privatePort publicPort type } sizeRootFs labels state status hostConfig { networkMode } networkSettings mounts autoStart } } }",
"networks": "query GetDockerNetworks { docker { networks { id name driver scope } } }",
"network_details": "query GetDockerNetwork { docker { networks { id name driver scope enableIPv6 internal attachable containers options labels } } }",
-"_resolve": "query ResolveContainerID { docker { containers(skipCache: true) { id names } } }",
}
+# Internal query used only for container ID resolution — not a public subaction.
+_DOCKER_RESOLVE_QUERY = (
+"query ResolveContainerID { docker { containers(skipCache: true) { id names } } }"
+)
_DOCKER_MUTATIONS: dict[str, str] = {
"start": "mutation StartContainer($id: PrefixedID!) { docker { start(id: $id) { id names state status } } }",
"stop": "mutation StopContainer($id: PrefixedID!) { docker { stop(id: $id) { id names state status } } }",
@@ -761,21 +792,28 @@ def _find_container(
if strict:
return None
id_lower = identifier.lower()
-for c in containers:
-for name in c.get("names", []):
-if name.lower().startswith(id_lower):
-return c
-for c in containers:
-for name in c.get("names", []):
-if id_lower in name.lower():
-return c
+# Collect prefix matches first, then fall back to substring matches.
+prefix_matches = [
+c for c in containers if any(n.lower().startswith(id_lower) for n in c.get("names", []))
+]
+candidates = prefix_matches or [
+c for c in containers if any(id_lower in n.lower() for n in c.get("names", []))
+]
+if not candidates:
+return None
+if len(candidates) == 1:
+return candidates[0]
+names = [n for c in candidates for n in c.get("names", [])]
+raise ToolError(
+f"Container identifier '{identifier}' is ambiguous — matches: {', '.join(names[:10])}. "
+"Use a more specific name or the full container ID."
+)
async def _resolve_container_id(container_id: str, *, strict: bool = False) -> str:
if _DOCKER_ID_PATTERN.match(container_id):
return container_id
-data = await make_graphql_request(_DOCKER_QUERIES["_resolve"])
+data = await make_graphql_request(_DOCKER_RESOLVE_QUERY)
containers = safe_get(data, "docker", "containers", default=[])
if _DOCKER_SHORT_ID_PATTERN.match(container_id):
id_lower = container_id.lower()
@@ -1227,6 +1265,8 @@ async def _handle_key(
input_data["name"] = name
if roles is not None:
input_data["roles"] = roles
+if permissions is not None:
+input_data["permissions"] = permissions
data = await make_graphql_request(_KEY_MUTATIONS["update"], {"input": input_data})
updated_key = (data.get("apiKey") or {}).get("update")
if not updated_key:
@@ -1246,7 +1286,7 @@ async def _handle_key(
if subaction in ("add_role", "remove_role"):
if not key_id:
raise ToolError(f"key_id is required for key/{subaction}")
-if not roles or len(roles) == 0:
+if not roles:
raise ToolError(
f"roles is required for key/{subaction} (pass as roles=['ROLE_NAME'])"
)
@@ -1622,11 +1662,12 @@ async def _handle_user(subaction: str) -> dict[str, Any]:
# LIVE (subscriptions)
# ===========================================================================
-_LIVE_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")
async def _handle_live(
-subaction: str, path: str | None, collect_for: float, timeout: float
+subaction: str,
+path: str | None,
+collect_for: float,
+timeout: float, # noqa: ASYNC109
) -> dict[str, Any]:
from ..subscriptions.queries import COLLECT_ACTIONS, EVENT_DRIVEN_ACTIONS, SNAPSHOT_ACTIONS
from ..subscriptions.snapshot import subscribe_collect, subscribe_once
@@ -1640,10 +1681,7 @@ async def _handle_live(
if subaction == "log_tail":
if not path:
raise ToolError("path is required for live/log_tail")
-normalized = os.path.realpath(path) # noqa: ASYNC240
-if not any(normalized.startswith(p) for p in _LIVE_ALLOWED_LOG_PREFIXES):
-raise ToolError(f"path must start with one of: {', '.join(_LIVE_ALLOWED_LOG_PREFIXES)}")
-path = normalized
+path = _validate_path(path, _ALLOWED_LOG_PREFIXES, "path")
with tool_error_handler("live", subaction, logger):
logger.info(f"Executing unraid action=live subaction={subaction} timeout={timeout}")
@@ -1722,7 +1760,7 @@ UNRAID_ACTIONS = Literal[
def register_unraid_tool(mcp: FastMCP) -> None:
"""Register the single `unraid` tool with the FastMCP instance."""
-@mcp.tool()
+@mcp.tool(timeout=120)
async def unraid(
action: UNRAID_ACTIONS,
subaction: str,
@@ -1780,7 +1818,7 @@ def register_unraid_tool(mcp: FastMCP) -> None:
# live
path: str | None = None,
collect_for: float = 5.0,
-timeout: float = 10.0,
+timeout: float = 10.0, # noqa: ASYNC109
) -> dict[str, Any] | str:
"""Interact with an Unraid server's GraphQL API.
@@ -1797,7 +1835,7 @@ def register_unraid_tool(mcp: FastMCP) -> None:
│ health │ check, test_connection, diagnose, setup │
├─────────────────┼──────────────────────────────────────────────────────────────────────┤
│ array │ parity_status, parity_history, parity_start, parity_pause, │
-│ │ parity_resume, parity_cancel, start_array*, stop_array*, │
+│ │ parity_resume, parity_cancel, start_array, stop_array*, │
│ │ add_disk, remove_disk*, mount_disk, unmount_disk, clear_disk_stats* │
├─────────────────┼──────────────────────────────────────────────────────────────────────┤
│ disk │ shares, disks, disk_details, log_files, logs, flash_backup* │

uv.lock generated
View File

@@ -1572,7 +1572,7 @@ wheels = [
[[package]]
name = "unraid-mcp"
-version = "1.0.0"
+version = "1.1.4"
source = { editable = "." }
dependencies = [
{ name = "fastapi" },