refactor: merge comprehensive code review fixes branch

Merges 35 commits from refactor/comprehensive-code-review-fixes:
- Critical/high/medium/low PR review findings addressed
- Test suite reorganized (safety, http_layer, schema, integration)
- Destructive action guard tests added
- Rclone bug fix, diagnostics improvements
- Version bump to 0.4.5

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Jacob Magar
Date: 2026-03-14 03:01:59 -04:00

73 changed files with 9625 additions and 10444 deletions


@@ -31,7 +31,7 @@ This directory contains the Claude Code marketplace configuration for the Unraid
 Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring.
 
 **Features:**
-- 10 tools with 76 actions (queries and mutations)
+- 11 tools with ~104 actions (queries and mutations)
 - Real-time system metrics
 - Disk health and temperature monitoring
 - Docker container management


@@ -1,7 +1,7 @@
{ {
"name": "unraid", "name": "unraid",
"description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring", "description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring",
"version": "0.2.0", "version": "0.4.4",
"author": { "author": {
"name": "jmagar", "name": "jmagar",
"email": "jmagar@users.noreply.github.com" "email": "jmagar@users.noreply.github.com"


@@ -21,3 +21,11 @@ venv/
 env/
 .vscode/
 cline_docs/
+tests/
+docs/
+scripts/
+commands/
+.full-review/
+.claude-plugin/
+*.md
+!README.md

.gitignore

@@ -34,6 +34,13 @@ logs/
 # IDE/Editor
 .bivvy
 .cursor
+.windsurf/
+.1code/
+.emdash.json
+
+# Backup files
+*.bak
+*.bak-*
 
 # Claude Code user settings (gitignore local settings)
 .claude/settings.local.json
@@ -41,12 +48,17 @@ logs/
 # Serena IDE configuration
 .serena/
 
+# Claude Code worktrees (temporary agent isolation dirs)
+.claude/worktrees/
+
 # Documentation and session artifacts
 .docs/
 .full-review/
 /docs/plans/
 /docs/sessions/
 /docs/reports/
+/docs/research/
+/docs/superpowers/
 
 # Test planning documents
 /DESTRUCTIVE_ACTIONS.md

AGENTS.md (symbolic link)

@@ -0,0 +1 @@
+CLAUDE.md


@@ -84,17 +84,28 @@ docker compose down
 - **Health Monitoring**: Comprehensive health check tool for system monitoring
 - **Real-time Subscriptions**: WebSocket-based live data streaming
 
-### Tool Categories (10 Tools, 76 Actions)
+### Tool Categories (11 Tools, ~104 Actions)
 
-1. **`unraid_info`** (19 actions): overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config
+1. **`unraid_info`** (21 actions): overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config, update_server, update_ssh
 2. **`unraid_array`** (5 actions): parity_start, parity_pause, parity_resume, parity_cancel, parity_status
-3. **`unraid_storage`** (6 actions): shares, disks, disk_details, unassigned, log_files, logs
+3. **`unraid_storage`** (7 actions): shares, disks, disk_details, unassigned, log_files, logs, flash_backup
-4. **`unraid_docker`** (15 actions): list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates
+4. **`unraid_docker`** (26 actions): list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates, create_folder, set_folder_children, delete_entries, move_to_folder, move_to_position, rename_folder, create_folder_with_items, update_view_prefs, sync_templates, reset_template_mappings, refresh_digests
 5. **`unraid_vm`** (9 actions): list, details, start, stop, pause, resume, force_stop, reboot, reset
 6. **`unraid_notifications`** (9 actions): overview, list, warnings, create, archive, unread, delete, delete_archived, archive_all
 7. **`unraid_rclone`** (4 actions): list_remotes, config_form, create_remote, delete_remote
 8. **`unraid_users`** (1 action): me
 9. **`unraid_keys`** (5 actions): list, get, create, update, delete
 10. **`unraid_health`** (3 actions): check, test_connection, diagnose
+11. **`unraid_settings`** (9 actions): update, update_temperature, update_time, configure_ups, update_api, connect_sign_in, connect_sign_out, setup_remote_access, enable_dynamic_remote_access
+
+### Destructive Actions (require `confirm=True`)
+
+- **docker**: remove, update_all, delete_entries, reset_template_mappings
+- **vm**: force_stop, reset
+- **notifications**: delete, delete_archived
+- **rclone**: delete_remote
+- **keys**: delete
+- **storage**: flash_backup
+- **info**: update_ssh
+- **settings**: configure_ups, setup_remote_access, enable_dynamic_remote_access
 
 ### Environment Variable Hierarchy
 
 The server loads environment variables from multiple locations in order:
@@ -119,3 +130,55 @@ The server loads environment variables from multiple locations in order:
 - Selective queries to avoid GraphQL type overflow issues
 - Optional caching controls for Docker container queries
 - Log file overwrite at 10MB cap to prevent disk space issues
+
+## Critical Gotchas
+
+### Mutation Handler Ordering
+
+**Mutation handlers MUST return before the `QUERIES[action]` lookup.** Mutations are not in the `QUERIES` dict — reaching that line for a mutation action causes a `KeyError`. Always add early-return `if action == "mutation_name": ... return` blocks BEFORE the `QUERIES` lookup.
+
+### Test Patching
+
+- Patch at the **tool module level**: `unraid_mcp.tools.info.make_graphql_request` (not core)
+- `conftest.py`'s `mock_graphql_request` patches the core module — wrong for tool-level tests
+- Use `conftest.py`'s `make_tool_fn()` helper or local `_make_tool()` pattern
+
+### Test Suite Structure
+
+```
+tests/
+├── conftest.py      # Shared fixtures + make_tool_fn() helper
+├── test_*.py        # Unit tests (mock at tool module level)
+├── http_layer/      # httpx-level request/response tests (respx)
+├── integration/     # WebSocket subscription lifecycle tests (slow)
+├── safety/          # Destructive action guard tests
+└── schema/          # GraphQL query validation (99 tests, all passing)
+```
+
+### Running Targeted Tests
+
+```bash
+uv run pytest tests/safety/          # Destructive action guards only
+uv run pytest tests/schema/          # GraphQL query validation only
+uv run pytest tests/http_layer/      # HTTP/httpx layer only
+uv run pytest tests/test_docker.py   # Single tool only
+uv run pytest -x                     # Fail fast on first error
+```
+
+### Scripts
+
+```bash
+# HTTP smoke-test against a live server (11 tools, all non-destructive actions)
+./tests/mcporter/test-actions.sh [MCP_URL]   # default: http://localhost:6970/mcp
+
+# stdio smoke-test, no running server needed (good for CI)
+./tests/mcporter/test-tools.sh [--parallel] [--timeout-ms N] [--verbose]
+```
+
+See `tests/mcporter/README.md` for transport differences and `docs/DESTRUCTIVE_ACTIONS.md` for exact destructive-action test commands.
+
+### API Reference Docs
+
+- `docs/UNRAID_API_COMPLETE_REFERENCE.md` — Full GraphQL schema reference
+- `docs/UNRAID_API_OPERATIONS.md` — All supported operations with examples
+
+Use these when adding new queries/mutations.
+
+### Symlinks
+
+`AGENTS.md` and `GEMINI.md` are symlinks to `CLAUDE.md` for Codex/Gemini compatibility:
+
+```bash
+ln -sf CLAUDE.md AGENTS.md && ln -sf CLAUDE.md GEMINI.md
+```
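
The ordering rule above is easiest to see in code. A minimal sketch of the dispatch shape it describes; the query strings, the `run_graphql` helper, and the action handling are illustrative stand-ins, not the actual module contents:

```python
# Hypothetical sketch of the early-return dispatch pattern described above.
QUERIES = {
    "list": "query { docker { containers { id names state } } }",
}

async def run_graphql(document: str, variables: dict) -> dict:
    raise NotImplementedError  # stands in for make_graphql_request

async def unraid_docker(action: str, confirm: bool = False, **kwargs) -> dict:
    # Mutation handlers must early-return BEFORE the QUERIES lookup below,
    # because mutation actions are not keys in QUERIES.
    if action == "remove":
        if not confirm:
            # The real tools raise fastmcp's ToolError here.
            raise ValueError("'remove' is destructive; pass confirm=True")
        return await run_graphql(
            "mutation ($id: ID!) { docker { removeContainer(id: $id) } }", kwargs
        )
    # Query path: a mutation action reaching this line raises KeyError,
    # which is exactly the bug the gotcha warns about.
    return await run_graphql(QUERIES[action], kwargs)
```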


@@ -1,19 +1,28 @@
 # Use an official Python runtime as a parent image
-FROM python:3.11-slim
+FROM python:3.12-slim
 
 # Set the working directory in the container
 WORKDIR /app
 
-# Install uv
+# Install uv (pinned tag to avoid mutable latest)
-COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/
+COPY --from=ghcr.io/astral-sh/uv:0.9.25 /uv /uvx /usr/local/bin/
 
+# Create non-root user with home directory and give ownership of /app
+RUN groupadd --gid 1000 appuser && \
+    useradd --uid 1000 --gid 1000 --create-home --shell /bin/false appuser && \
+    chown appuser:appuser /app
+
-# Copy dependency files
-COPY pyproject.toml .
-COPY uv.lock .
-COPY README.md .
+# Copy dependency files (owned by appuser via --chown)
+COPY --chown=appuser:appuser pyproject.toml .
+COPY --chown=appuser:appuser uv.lock .
+COPY --chown=appuser:appuser README.md .
+COPY --chown=appuser:appuser LICENSE .
 
 # Copy the source code
-COPY unraid_mcp/ ./unraid_mcp/
+COPY --chown=appuser:appuser unraid_mcp/ ./unraid_mcp/
 
+# Switch to non-root user before installing dependencies
+USER appuser
+
 # Install dependencies and the package
 RUN uv sync --frozen
@@ -31,5 +40,9 @@ ENV UNRAID_API_KEY=""
 ENV UNRAID_VERIFY_SSL="true"
 ENV UNRAID_MCP_LOG_LEVEL="INFO"
 
-# Run unraid-mcp-server.py when the container launches
+# Health check
+HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
+    CMD ["python", "-c", "import os, urllib.request; port = os.getenv('UNRAID_MCP_PORT', '6970'); urllib.request.urlopen(f'http://localhost:{port}/mcp')"]
+
+# Run unraid-mcp-server when the container launches
 CMD ["uv", "run", "unraid-mcp-server"]

GEMINI.md (symbolic link)

@@ -0,0 +1 @@
+CLAUDE.md


@@ -8,7 +8,7 @@
 ## ✨ Features
 
-- 🔧 **10 Tools, 90 Actions**: Complete Unraid management through MCP protocol
+- 🔧 **11 Tools, ~104 Actions**: Complete Unraid management through MCP protocol
 - 🏗️ **Modular Architecture**: Clean, maintainable, and extensible codebase
 - ⚡ **High Performance**: Async/concurrent operations with optimized timeouts
 - 🔄 **Real-time Data**: WebSocket subscriptions for live log streaming
@@ -46,7 +46,7 @@
 ```
 
 This provides instant access to Unraid monitoring and management through Claude Code with:
 
-- **10 MCP tools** exposing **83 actions** via the consolidated action pattern
+- **11 MCP tools** exposing **~104 actions** via the consolidated action pattern
 - **10 slash commands** for quick CLI-style access (`commands/`)
 - Real-time system metrics and health monitoring
 - Docker container and VM lifecycle management
@@ -111,7 +111,7 @@ unraid-mcp/ # ${CLAUDE_PLUGIN_ROOT}
 └── scripts/                  # Validation and helper scripts
 ```
 
-- **MCP Server**: 10 tools with 76 actions via GraphQL API
+- **MCP Server**: 11 tools with ~104 actions via GraphQL API
 - **Slash Commands**: 10 commands in `commands/` for quick CLI-style access
 - **Skill**: `/unraid` skill for monitoring and queries
 - **Entry Point**: `unraid-mcp-server` defined in pyproject.toml
@@ -218,20 +218,21 @@ UNRAID_VERIFY_SSL=true # true, false, or path to CA bundle
 Each tool uses a consolidated `action` parameter to expose multiple operations, reducing context window usage. Destructive actions require `confirm=True`.
 
-### Tool Categories (10 Tools, 76 Actions)
+### Tool Categories (11 Tools, ~104 Actions)
 
 | Tool | Actions | Description |
 |------|---------|-------------|
-| **`unraid_info`** | 19 | overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config |
+| **`unraid_info`** | 21 | overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config, update_server, update_ssh |
 | **`unraid_array`** | 5 | parity_start, parity_pause, parity_resume, parity_cancel, parity_status |
-| **`unraid_storage`** | 6 | shares, disks, disk_details, unassigned, log_files, logs |
+| **`unraid_storage`** | 7 | shares, disks, disk_details, unassigned, log_files, logs, flash_backup |
-| **`unraid_docker`** | 15 | list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates |
+| **`unraid_docker`** | 26 | list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates, create_folder, set_folder_children, delete_entries, move_to_folder, move_to_position, rename_folder, create_folder_with_items, update_view_prefs, sync_templates, reset_template_mappings, refresh_digests |
 | **`unraid_vm`** | 9 | list, details, start, stop, pause, resume, force_stop, reboot, reset |
-| **`unraid_notifications`** | 9 | overview, list, warnings, create, archive, unread, delete, delete_archived, archive_all |
+| **`unraid_notifications`** | 14 | overview, list, warnings, create, create_unique, archive, archive_many, unread, unarchive_many, unarchive_all, recalculate, delete, delete_archived, archive_all |
 | **`unraid_rclone`** | 4 | list_remotes, config_form, create_remote, delete_remote |
 | **`unraid_users`** | 1 | me |
 | **`unraid_keys`** | 5 | list, get, create, update, delete |
 | **`unraid_health`** | 3 | check, test_connection, diagnose |
+| **`unraid_settings`** | 9 | update, update_temperature, update_time, configure_ups, update_api, connect_sign_in, connect_sign_out, setup_remote_access, enable_dynamic_remote_access |
 
 ### MCP Resources (Real-time Data)
 
 - `unraid://logs/stream` - Live log streaming from `/var/log/syslog` with WebSocket subscriptions
@@ -248,12 +249,12 @@ The project includes **10 custom slash commands** in `commands/` for quick acces
 | Command | Actions | Quick Access |
 |---------|---------|--------------|
-| `/info` | 19 | System information, metrics, configuration |
+| `/info` | 21 | System information, metrics, configuration |
 | `/array` | 5 | Parity check management |
-| `/storage` | 6 | Shares, disks, logs |
+| `/storage` | 7 | Shares, disks, logs |
-| `/docker` | 15 | Container management and monitoring |
+| `/docker` | 26 | Container management and monitoring |
 | `/vm` | 9 | Virtual machine lifecycle |
-| `/notifications` | 9 | Alert management |
+| `/notifications` | 14 | Alert management |
 | `/rclone` | 4 | Cloud storage remotes |
 | `/users` | 1 | Current user query |
 | `/keys` | 5 | API key management |
@@ -317,16 +318,17 @@ unraid-mcp/
 │   │   ├── manager.py        # WebSocket management
 │   │   ├── resources.py      # MCP resources
 │   │   └── diagnostics.py    # Diagnostic tools
-│   ├── tools/                # MCP tool categories (10 tools, 76 actions)
+│   ├── tools/                # MCP tool categories (11 tools, ~104 actions)
-│   │   ├── info.py           # System information (19 actions)
+│   │   ├── info.py           # System information (21 actions)
 │   │   ├── array.py          # Parity checks (5 actions)
-│   │   ├── storage.py        # Storage & monitoring (6 actions)
+│   │   ├── storage.py        # Storage & monitoring (7 actions)
-│   │   ├── docker.py         # Container management (15 actions)
+│   │   ├── docker.py         # Container management (26 actions)
 │   │   ├── virtualization.py # VM management (9 actions)
-│   │   ├── notifications.py  # Notification management (9 actions)
+│   │   ├── notifications.py  # Notification management (14 actions)
 │   │   ├── rclone.py         # Cloud storage (4 actions)
 │   │   ├── users.py          # Current user query (1 action)
 │   │   ├── keys.py           # API key management (5 actions)
+│   │   ├── settings.py       # Server settings (9 actions)
 │   │   └── health.py         # Health checks (3 actions)
 │   └── server.py             # FastMCP server setup
 ├── logs/                     # Log files (auto-created)
@@ -346,6 +348,20 @@ uv run ty check unraid_mcp/
 uv run pytest
 ```
 
+### Integration Smoke-Tests (mcporter)
+
+Live integration tests that exercise all non-destructive actions via [mcporter](https://github.com/mcporter/mcporter). Two scripts cover two transport modes:
+
+```bash
+# stdio — no running server needed (good for CI)
+./tests/mcporter/test-tools.sh [--parallel] [--timeout-ms N] [--verbose]
+
+# HTTP — connects to a live server (most up-to-date coverage)
+./tests/mcporter/test-actions.sh [MCP_URL]   # default: http://localhost:6970/mcp
+```
+
+Destructive actions are always skipped in both scripts. For safe testing strategies and exact mcporter commands per destructive action, see [`docs/DESTRUCTIVE_ACTIONS.md`](docs/DESTRUCTIVE_ACTIONS.md).
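
The smoke-tests above hit a live server; the unit-test layer instead mocks GraphQL at the tool module, per the test-patching gotcha in CLAUDE.md. A minimal sketch: the tool's callable name and the response payload are assumptions, and the real suite builds tool callables through `conftest.py`'s `make_tool_fn()` helper:

```python
# Sketch of a tool-level unit test (requires pytest-asyncio); the callable
# name `info.unraid_info` and the payload shape are illustrative only.
from unittest.mock import AsyncMock, patch

import pytest

@pytest.mark.asyncio
async def test_overview_mocks_at_tool_module():
    fake = AsyncMock(return_value={"info": {"os": {"uptime": "1d"}}})
    # Patch the binding the tool module actually calls; patching the core
    # module instead would leave the tool's imported reference untouched.
    with patch("unraid_mcp.tools.info.make_graphql_request", fake):
        from unraid_mcp.tools import info

        await info.unraid_info(action="overview")
    assert fake.await_count == 1
```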
 ### API Schema Docs Automation
 
 ```bash
 # Regenerate complete GraphQL schema reference from live introspection


@@ -5,31 +5,38 @@ services:
       dockerfile: Dockerfile
     container_name: unraid-mcp
     restart: unless-stopped
+    read_only: true
+    cap_drop:
+      - ALL
+    tmpfs:
+      - /tmp:noexec,nosuid,size=64m
+      - /app/logs:noexec,nosuid,size=16m
+      - /app/.cache/logs:noexec,nosuid,size=8m
     ports:
       # HostPort:ContainerPort (maps to UNRAID_MCP_PORT inside the container, default 6970)
       # Change the host port (left side) if 6970 is already in use on your host
       - "${UNRAID_MCP_PORT:-6970}:${UNRAID_MCP_PORT:-6970}"
     environment:
       # Core API Configuration (Required)
-      - UNRAID_API_URL=${UNRAID_API_URL}
+      - UNRAID_API_URL=${UNRAID_API_URL:?UNRAID_API_URL is required}
-      - UNRAID_API_KEY=${UNRAID_API_KEY}
+      - UNRAID_API_KEY=${UNRAID_API_KEY:?UNRAID_API_KEY is required}
       # MCP Server Settings
       - UNRAID_MCP_PORT=${UNRAID_MCP_PORT:-6970}
       - UNRAID_MCP_HOST=${UNRAID_MCP_HOST:-0.0.0.0}
       - UNRAID_MCP_TRANSPORT=${UNRAID_MCP_TRANSPORT:-streamable-http}
       # SSL Configuration
       - UNRAID_VERIFY_SSL=${UNRAID_VERIFY_SSL:-true}
       # Logging Configuration
       - UNRAID_MCP_LOG_LEVEL=${UNRAID_MCP_LOG_LEVEL:-INFO}
       - UNRAID_MCP_LOG_FILE=${UNRAID_MCP_LOG_FILE:-unraid-mcp.log}
       # Real-time Subscription Configuration
       - UNRAID_AUTO_START_SUBSCRIPTIONS=${UNRAID_AUTO_START_SUBSCRIPTIONS:-true}
       - UNRAID_MAX_RECONNECT_ATTEMPTS=${UNRAID_MAX_RECONNECT_ATTEMPTS:-10}
       # Optional: Custom log file path for subscription auto-start diagnostics
       - UNRAID_AUTOSTART_LOG_PATH=${UNRAID_AUTOSTART_LOG_PATH}
       # Optional: If you want to mount a specific directory for logs (ensure UNRAID_MCP_LOG_FILE points within this mount)
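
The `:?` expansions added above make `docker compose up` abort with the given message whenever a required variable is unset or empty, instead of launching the server against a blank URL. The same fail-fast idea expressed application-side, as a generic sketch rather than the server's actual startup code:

```python
# Generic fail-fast check mirroring compose's ${VAR:?message} guards.
import os
import sys

REQUIRED = ("UNRAID_API_URL", "UNRAID_API_KEY")

def require_env() -> dict[str, str]:
    missing = [name for name in REQUIRED if not os.getenv(name)]
    if missing:
        # Same outcome as ${VAR:?...}: refuse to start and say why.
        sys.exit(f"missing required environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}

if __name__ == "__main__":
    config = require_env()
    print(f"connecting to {config['UNRAID_API_URL']}")
```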


@@ -1,240 +1,321 @@
-# Destructive Actions Inventory
-
-This file lists all destructive actions across the unraid-mcp tools. Fill in the "Testing Strategy" column to specify how each should be tested in the mcporter integration test suite.
-
-**Last Updated:** 2026-02-15
-
----
-
-## Summary
-
-- **Total Destructive Actions:** 8 (after removing 4 array operations)
-- **Tools with Destructive Actions:** 6
-- **Environment Variable Gates:** 6 (one per tool)
-
----
-
-## Destructive Actions by Tool
-
-### 1. Docker (1 action)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `remove` | Permanently delete a Docker container | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_DOCKER_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Container must be stopped first
-- Removes container config and any non-volume data
-- Cannot be undone
-
----
-
-### 2. Virtual Machines (2 actions)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `force_stop` | Forcefully power off a running VM (equivalent to pulling power cord) | **MEDIUM** - Severe but recoverable, risk of data corruption | `UNRAID_ALLOW_VM_DESTRUCTIVE` | **TODO: Specify testing approach** |
-| `reset` | Hard reset a VM (power cycle without graceful shutdown) | **MEDIUM** - Severe but recoverable, risk of data corruption | `UNRAID_ALLOW_VM_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Both bypass graceful shutdown procedures
-- May corrupt VM filesystem if used during write operations
-- Use `stop` action instead for graceful shutdown
-
----
-
-### 3. Notifications (2 actions)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `delete` | Permanently delete a notification | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_NOTIFICATIONS_DESTRUCTIVE` | **TODO: Specify testing approach** |
-| `delete_archived` | Permanently delete all archived notifications | **HIGH** - Bulk data loss, irreversible | `UNRAID_ALLOW_NOTIFICATIONS_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Cannot recover deleted notifications
-- `delete_archived` affects ALL archived notifications (bulk operation)
-
----
-
-### 4. Rclone (1 action)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `delete_remote` | Permanently delete an rclone remote configuration | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_RCLONE_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Removes cloud storage connection configuration
-- Does NOT delete data in the remote storage
-- Must reconfigure remote from scratch if deleted
-
----
-
-### 5. Users (1 action)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `delete` | Permanently delete a user account | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_USERS_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Removes user account and permissions
-- Cannot delete the root user
-- User's data may remain but become orphaned
-
----
-
-### 6. API Keys (1 action)
-
-| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
-|--------|-------------|------------|--------------|------------------|
-| `delete` | Permanently delete an API key | **HIGH** - Data loss, irreversible, breaks integrations | `UNRAID_ALLOW_KEYS_DESTRUCTIVE` | **TODO: Specify testing approach** |
-
-**Notes:**
-- Immediately revokes API key access
-- Will break any integrations using the deleted key
-- Cannot be undone - must create new key
-
----
-
-## Removed Actions (No Longer Exposed)
-
-These actions were previously marked as destructive but have been **removed** from the array tool per the implementation plan:
-
-| Action | Former Risk Level | Reason for Removal |
-|--------|-------------------|-------------------|
-| `start` | CRITICAL | System-wide impact - should not be exposed via MCP |
-| `stop` | CRITICAL | System-wide impact - should not be exposed via MCP |
-| `shutdown` | CRITICAL | System-wide impact - could cause data loss |
-| `reboot` | CRITICAL | System-wide impact - disrupts all services |
-
----
-
-## Testing Strategy Options
-
-Choose one of the following for each action in the "Testing Strategy" column:
-
-### Option 1: Mock/Validation Only
-- Test parameter validation
-- Test `confirm=True` requirement
-- Test env var gate requirement
-- **DO NOT** execute the actual action
-
-### Option 2: Dry-Run Testing
-- Test with `confirm=false` to verify rejection
-- Test without env var to verify gate
-- **DO NOT** execute with both gates passed
-
-### Option 3: Test Server Execution
-- Execute on a dedicated test Unraid server (e.g., shart)
-- Requires pre-created test resources (containers, VMs, notifications)
-- Verify action succeeds and state changes as expected
-- Clean up after test
-
-### Option 4: Manual Test Checklist
-- Document manual verification steps
-- Do not automate in mcporter suite
-- Requires human operator to execute and verify
-
-### Option 5: Skip Testing
-- Too dangerous to automate
-- Rely on unit tests only
-- Document why testing is skipped
-
----
-
-## Example Testing Strategies
-
-**Safe approach (recommended for most):**
-```
-Option 1: Mock/Validation Only
-- Verify action requires UNRAID_ALLOW_DOCKER_DESTRUCTIVE=true
-- Verify action requires confirm=True
-- Do not execute actual deletion
-```
-
-**Comprehensive approach (for test server only):**
-```
-Option 3: Test Server Execution on 'shart'
-- Create test container 'mcporter-test-container'
-- Execute remove with gates enabled
-- Verify container is deleted
-- Clean up not needed (container already removed)
-```
-
-**Hybrid approach:**
-```
-Option 1 + Option 4: Mock validation + Manual checklist
-- Automated: Test gate requirements
-- Manual: Human operator verifies on test server
-```
-
----
-
-## Usage in mcporter Tests
-
-Each tool test script will check the testing strategy:
-
-```bash
-# Example from test_docker.sh
-test_remove_action() {
-    local strategy="TODO: Specify testing approach"  # From this file
-    case "$strategy" in
-        *"Option 1"*|*"Mock"*)
-            # Mock/validation testing
-            test_remove_requires_env_var
-            test_remove_requires_confirm
-            ;;
-        *"Option 3"*|*"Test Server"*)
-            # Real execution on test server
-            if [[ "$UNRAID_TEST_SERVER" != "unraid-shart" ]]; then
-                echo "SKIP: Destructive test only runs on test server"
-                return 2
-            fi
-            test_remove_real_execution
-            ;;
-        *"Option 5"*|*"Skip"*)
-            echo "SKIP: Testing disabled for this action"
-            return 2
-            ;;
-    esac
-}
-```
-
----
-
-## Security Model
-
-**Two-tier security for destructive actions:**
-
-1. **Environment Variable Gate** (first line of defense)
-   - Must be explicitly enabled per tool
-   - Defaults to disabled (safe)
-   - Prevents accidental execution
-
-2. **Runtime Confirmation** (second line of defense)
-   - Must pass `confirm=True` in each call
-   - Forces explicit acknowledgment per operation
-   - Cannot be cached or preset
-
-**Both must pass for execution.**
-
----
-
-## Next Steps
-
-1. **Fill in Testing Strategy column** for each action above
-2. **Create test fixtures** if using Option 3 (test containers, VMs, etc.)
-3. **Implement tool test scripts** following the specified strategies
-4. **Document any special setup** required for destructive testing
-
----
-
-## Questions to Consider
-
-For each action, ask:
-- Is this safe to automate on a test server?
-- Do we have test fixtures/resources available?
-- What cleanup is required after testing?
-- What's the blast radius if something goes wrong?
-- Can we verify the action worked without side effects?
+# Destructive Actions
+
+**Last Updated:** 2026-03-13
+**Total destructive actions:** 15 across 8 tools
+
+All destructive actions require `confirm=True` at the call site. There is no additional environment variable gate — `confirm` is the sole guard.
+
+> **mcporter commands below** use `$MCP_URL` (default: `http://localhost:6970/mcp`). Run `test-actions.sh` for automated non-destructive coverage; destructive actions are always skipped there and tested manually per the strategies below.
+
+---
+
+## `unraid_docker`
+
+### `remove` — Delete a container permanently
+
+```bash
+# 1. Provision a throwaway canary container
+docker run -d --name mcp-test-canary alpine sleep 3600
+
+# 2. Discover its MCP-assigned ID
+CID=$(mcporter call --http-url "$MCP_URL" --tool unraid_docker \
+  --args '{"action":"list"}' --output json \
+  | python3 -c "import json,sys; cs=json.load(sys.stdin).get('containers',[]); print(next(c['id'] for c in cs if 'mcp-test-canary' in c.get('name','')))")
+
+# 3. Remove via MCP
+mcporter call --http-url "$MCP_URL" --tool unraid_docker \
+  --args "{\"action\":\"remove\",\"container_id\":\"$CID\",\"confirm\":true}" --output json
+
+# 4. Verify
+docker ps -a | grep mcp-test-canary  # should return nothing
+```
+
+---
+
+### `update_all` — Pull latest images and restart all containers
+
+**Strategy: mock/safety audit only.**
+No safe live isolation — this hits every running container. Test via `tests/safety/` confirming the `confirm=False` guard raises `ToolError`. Do not run live unless all containers can tolerate a simultaneous restart.
+
+---
+
+### `delete_entries` — Delete Docker organizer folders/entries
+
+```bash
+# 1. Create a throwaway organizer folder
+#    Parameter: folder_name (str); ID is in organizer.views.flatEntries[type==FOLDER]
+FOLDER=$(mcporter call --http-url "$MCP_URL" --tool unraid_docker \
+  --args '{"action":"create_folder","folder_name":"mcp-test-delete-me"}' --output json)
+FID=$(echo "$FOLDER" | python3 -c "
+import json,sys
+data=json.load(sys.stdin)
+entries=(data.get('organizer',{}).get('views',{}).get('flatEntries') or [])
+match=next((e['id'] for e in entries if e.get('type')=='FOLDER' and 'mcp-test' in e.get('name','')),'')
+print(match)")
+
+# 2. Delete it
+mcporter call --http-url "$MCP_URL" --tool unraid_docker \
+  --args "{\"action\":\"delete_entries\",\"entry_ids\":[\"$FID\"],\"confirm\":true}" --output json
+
+# 3. Verify
+mcporter call --http-url "$MCP_URL" --tool unraid_docker \
+  --args '{"action":"list"}' --output json | python3 -c \
+  "import json,sys; folders=[x for x in json.load(sys.stdin).get('folders',[]) if 'mcp-test' in x.get('name','')]; print('clean' if not folders else folders)"
+```
+
+---
+
+### `reset_template_mappings` — Wipe all template-to-container associations
+
+**Strategy: mock/safety audit only.**
+Global state — wipes all template mappings and requires full remapping afterward. No safe isolation. Test via `tests/safety/` confirming the `confirm=False` guard raises `ToolError`.
+
+---
+
+## `unraid_vm`
+
+### `force_stop` — Hard power-off a VM (potential data corruption)
+
+```bash
+# Prerequisite: create a minimal Alpine test VM in Unraid VM manager
+# (Alpine ISO, 512MB RAM, no persistent disk, name contains "mcp-test")
+VID=$(mcporter call --http-url "$MCP_URL" --tool unraid_vm \
+  --args '{"action":"list"}' --output json \
+  | python3 -c "import json,sys; vms=json.load(sys.stdin).get('vms',[]); print(next(v.get('uuid',v.get('id','')) for v in vms if 'mcp-test' in v.get('name','')))")
+
+mcporter call --http-url "$MCP_URL" --tool unraid_vm \
+  --args "{\"action\":\"force_stop\",\"vm_id\":\"$VID\",\"confirm\":true}" --output json
+
+# Verify: VM state should return to stopped
+mcporter call --http-url "$MCP_URL" --tool unraid_vm \
+  --args "{\"action\":\"details\",\"vm_id\":\"$VID\"}" --output json
+```
+
+---
+
+### `reset` — Hard reset a VM (power cycle without graceful shutdown)
+
+```bash
+# Same minimal Alpine test VM as above
+VID=$(mcporter call --http-url "$MCP_URL" --tool unraid_vm \
+  --args '{"action":"list"}' --output json \
+  | python3 -c "import json,sys; vms=json.load(sys.stdin).get('vms',[]); print(next(v.get('uuid',v.get('id','')) for v in vms if 'mcp-test' in v.get('name','')))")
+
+mcporter call --http-url "$MCP_URL" --tool unraid_vm \
+  --args "{\"action\":\"reset\",\"vm_id\":\"$VID\",\"confirm\":true}" --output json
+```
+
+---
+
+## `unraid_notifications`
+
+### `delete` — Permanently delete a notification
+
+```bash
+# 1. Create a test notification, then list to get the real stored ID (create response
+#    ID is ULID-based; stored filename uses a unix timestamp, so IDs differ)
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"create","title":"mcp-test-delete","subject":"safe to delete","description":"MCP destructive action test","importance":"INFO"}' --output json
+NID=$(mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"list","notification_type":"UNREAD"}' --output json \
+  | python3 -c "
+import json,sys
+notifs=json.load(sys.stdin).get('notifications',[])
+matches=[n['id'] for n in reversed(notifs) if n.get('title')=='mcp-test-delete']
+print(matches[0] if matches else '')")
+
+# 2. Delete it (notification_type required)
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args "{\"action\":\"delete\",\"notification_id\":\"$NID\",\"notification_type\":\"UNREAD\",\"confirm\":true}" --output json
+
+# 3. Verify
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"list"}' --output json | python3 -c \
+  "import json,sys; ns=[n for n in json.load(sys.stdin).get('notifications',[]) if 'mcp-test' in n.get('title','')]; print('clean' if not ns else ns)"
+```
+
+---
+
+### `delete_archived` — Wipe all archived notifications (bulk, irreversible)
+
+```bash
+# 1. Create and archive a test notification
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"create","title":"mcp-test-archive-wipe","subject":"archive me","description":"safe to delete","importance":"INFO"}' --output json
+AID=$(mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"list","notification_type":"UNREAD"}' --output json \
+  | python3 -c "
+import json,sys
+notifs=json.load(sys.stdin).get('notifications',[])
+matches=[n['id'] for n in reversed(notifs) if n.get('title')=='mcp-test-archive-wipe']
+print(matches[0] if matches else '')")
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args "{\"action\":\"archive\",\"notification_id\":\"$AID\"}" --output json
+
+# 2. Wipe all archived
+#    NOTE: this deletes ALL archived notifications, not just the test one
+mcporter call --http-url "$MCP_URL" --tool unraid_notifications \
+  --args '{"action":"delete_archived","confirm":true}' --output json
+```
+
+> Run on `shart` if archival history on `tootie` matters.
+
+---
+
+## `unraid_rclone`
+
+### `delete_remote` — Remove an rclone remote configuration
+
+```bash
+# 1. Create a throwaway local remote (points to /tmp — no real data)
+#    Parameters: name (str), provider_type (str), config_data (dict)
+mcporter call --http-url "$MCP_URL" --tool unraid_rclone \
+  --args '{"action":"create_remote","name":"mcp-test-remote","provider_type":"local","config_data":{"root":"/tmp"}}' --output json
+
+# 2. Delete it
+mcporter call --http-url "$MCP_URL" --tool unraid_rclone \
+  --args '{"action":"delete_remote","name":"mcp-test-remote","confirm":true}' --output json
+
+# 3. Verify
+mcporter call --http-url "$MCP_URL" --tool unraid_rclone \
+  --args '{"action":"list_remotes"}' --output json | python3 -c \
+  "import json,sys; remotes=json.load(sys.stdin).get('remotes',[]); print('clean' if 'mcp-test-remote' not in remotes else 'FOUND — cleanup failed')"
+```
+
+> Note: `delete_remote` removes the config only — it does NOT delete data in the remote storage.
+
+---
+
+## `unraid_keys`
+
+### `delete` — Delete an API key (immediately revokes access)
+
+```bash
+# 1. Create a test key (names cannot contain hyphens; ID is at key.id)
+KID=$(mcporter call --http-url "$MCP_URL" --tool unraid_keys \
+  --args '{"action":"create","name":"mcp test key","roles":["VIEWER"]}' --output json \
+  | python3 -c "import json,sys; print(json.load(sys.stdin).get('key',{}).get('id',''))")
+
+# 2. Delete it
+mcporter call --http-url "$MCP_URL" --tool unraid_keys \
+  --args "{\"action\":\"delete\",\"key_id\":\"$KID\",\"confirm\":true}" --output json
+
+# 3. Verify (match the hyphen-less name used at creation)
+mcporter call --http-url "$MCP_URL" --tool unraid_keys \
+  --args '{"action":"list"}' --output json | python3 -c \
+  "import json,sys; ks=json.load(sys.stdin).get('keys',[]); print('clean' if not any('mcp test key' in k.get('name','') for k in ks) else 'FOUND — cleanup failed')"
+```
+
+---
+
+## `unraid_storage`
+
+### `flash_backup` — Rclone backup of flash drive (overwrites destination)
+
+```bash
+# Prerequisite: create a dedicated test remote pointing away from the real backup destination
+# (use rclone create_remote first, or configure mcp-test-remote manually)
+mcporter call --http-url "$MCP_URL" --tool unraid_storage \
+  --args '{"action":"flash_backup","remote_name":"mcp-test-remote","source_path":"/boot","destination_path":"/flash-backup-test","confirm":true}' --output json
+```
+
+> Never point at the same destination as your real flash backup. Create a dedicated `mcp-test-remote` (see `rclone: delete_remote` above for the provisioning pattern).
+
+---
+
+## `unraid_settings`
+
+### `configure_ups` — Overwrite UPS monitoring configuration
+
+**Strategy: mock/safety audit only.**
+Wrong config can break UPS integration. If live testing is required: read the current config via `unraid_info ups_config`, save the values, re-apply identical values (a no-op), and verify the response matches. Test via `tests/safety/` for guard behavior.
+
+---
+
+### `setup_remote_access` — Modify remote access configuration
+
+**Strategy: mock/safety audit only.**
+Misconfiguration can break remote connectivity and lock you out. Do not run live. Test via `tests/safety/` confirming `confirm=False` raises `ToolError`.
+
+---
+
+### `enable_dynamic_remote_access` — Toggle dynamic remote access
+
+```bash
+# Strategy: toggle to false (disabling is reversible) on shart only, then restore
+
+# Step 1: Read current state
+CURRENT=$(mcporter call --http-url "$SHART_MCP_URL" --tool unraid_info \
+  --args '{"action":"settings"}' --output json)
+
+# Step 2: Disable (safe — can be re-enabled)
+mcporter call --http-url "$SHART_MCP_URL" --tool unraid_settings \
+  --args '{"action":"enable_dynamic_remote_access","access_url_type":"SUBDOMAINS","dynamic_enabled":false,"confirm":true}' --output json
+
+# Step 3: Restore to previous state
+mcporter call --http-url "$SHART_MCP_URL" --tool unraid_settings \
+  --args '{"action":"enable_dynamic_remote_access","access_url_type":"SUBDOMAINS","dynamic_enabled":true,"confirm":true}' --output json
+```
+
+> Run on `shart` (10.1.0.3) only — never `tootie`.
+
+---
+
+## `unraid_info`
+
+### `update_ssh` — Change SSH enabled state and port
+
+```bash
+# Strategy: read current config, re-apply same values (no-op change)
+
+# 1. Read current SSH settings (json.dumps keeps the boolean JSON-valid: true/false)
+CURRENT=$(mcporter call --http-url "$MCP_URL" --tool unraid_info \
+  --args '{"action":"settings"}' --output json)
+SSH_ENABLED=$(echo "$CURRENT" | python3 -c "import json,sys; print(json.dumps(json.load(sys.stdin).get('ssh',{}).get('enabled', True)))")
+SSH_PORT=$(echo "$CURRENT" | python3 -c "import json,sys; print(json.load(sys.stdin).get('ssh',{}).get('port', 22))")
+
+# 2. Re-apply same values (no-op)
+mcporter call --http-url "$MCP_URL" --tool unraid_info \
+  --args "{\"action\":\"update_ssh\",\"ssh_enabled\":$SSH_ENABLED,\"ssh_port\":$SSH_PORT,\"confirm\":true}" --output json
+
+# 3. Verify SSH connectivity still works
+ssh root@"$UNRAID_HOST" -p "$SSH_PORT" exit
+```
+
+---
+
+## Safety Audit (Automated)
+
+The `tests/safety/` directory contains pytest tests that verify:
+
+- Every destructive action raises `ToolError` when called with `confirm=False`
+- Every destructive action raises `ToolError` when called without the `confirm` parameter
+- The `DESTRUCTIVE_ACTIONS` set in each tool file stays in sync with the actions listed above
+
+These run as part of the standard test suite:
+
+```bash
+uv run pytest tests/safety/ -v
+```
+
+---
+
+## Summary Table
+
+| Tool | Action | Strategy | Target Server |
+|------|--------|----------|---------------|
+| `unraid_docker` | `remove` | Pre-existing stopped container on Unraid server (skipped in test-destructive.sh) | either |
+| `unraid_docker` | `update_all` | Mock/safety audit only | — |
+| `unraid_docker` | `delete_entries` | Create folder → destroy | either |
+| `unraid_docker` | `reset_template_mappings` | Mock/safety audit only | — |
+| `unraid_vm` | `force_stop` | Minimal Alpine test VM | either |
+| `unraid_vm` | `reset` | Minimal Alpine test VM | either |
+| `unraid_notifications` | `delete` | Create notification → destroy | either |
+| `unraid_notifications` | `delete_archived` | Create → archive → wipe | shart preferred |
+| `unraid_rclone` | `delete_remote` | Create local:/tmp remote → destroy | either |
+| `unraid_keys` | `delete` | Create test key → destroy | either |
+| `unraid_storage` | `flash_backup` | Dedicated test remote, isolated path | either |
+| `unraid_settings` | `configure_ups` | Mock/safety audit only | — |
+| `unraid_settings` | `setup_remote_access` | Mock/safety audit only | — |
+| `unraid_settings` | `enable_dynamic_remote_access` | Toggle false → restore | shart only |
+| `unraid_info` | `update_ssh` | Read → re-apply same values (no-op) | either |
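
The Safety Audit section above describes the guard tests in `tests/safety/`. A minimal sketch of one such test: the module import path and the tool's keyword signature are assumptions for illustration, and `ToolError` is fastmcp's tool-failure exception:

```python
# Sketch of a tests/safety/ guard test (requires pytest-asyncio); the
# import path and call signature are illustrative, not the real suite.
import pytest
from fastmcp.exceptions import ToolError

from unraid_mcp.tools import rclone  # hypothetical direct-module import

@pytest.mark.asyncio
async def test_delete_remote_requires_confirm():
    # Omitted confirm: the guard must refuse before any GraphQL call happens.
    with pytest.raises(ToolError):
        await rclone.unraid_rclone(action="delete_remote", name="mcp-test-remote")

    # Explicit confirm=False must be refused the same way.
    with pytest.raises(ToolError):
        await rclone.unraid_rclone(
            action="delete_remote", name="mcp-test-remote", confirm=False
        )
```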


@@ -1,616 +0,0 @@
# Competitive Analysis: Unraid Integration Projects
> **Date:** 2026-02-07
> **Purpose:** Identify features and capabilities that competing Unraid integration projects offer that our `unraid-mcp` server (10 tools, 76 actions, GraphQL-based) currently lacks.
## Table of Contents
- [Executive Summary](#executive-summary)
- [Project Profiles](#project-profiles)
- [1. unraid-management-agent (Go plugin)](#1-unraid-management-agent)
- [2. domalab/unraid-api-client (Python library)](#2-domalabunraid-api-client)
- [3. mcp-ssh-sre / unraid-ssh-mcp (SSH-based MCP)](#3-mcp-ssh-sre--unraid-ssh-mcp)
- [4. PSUnraid (PowerShell module)](#4-psunraid)
- [5. ha-unraid (Home Assistant integration)](#5-ha-unraid-home-assistant-integration)
- [6. chris-mc1/unraid_api (HA integration)](#6-chris-mc1unraid_api-ha-integration)
- [Feature Matrix](#feature-matrix)
- [Gap Analysis](#gap-analysis)
- [Recommended Priorities](#recommended-priorities)
- [Sources](#sources)
---
## Executive Summary
Our `unraid-mcp` server provides 10 MCP tools (76 actions) built on the official Unraid GraphQL API. After analyzing six competing projects, we identified several significant gaps:
**Critical gaps (high-value features we lack):**
1. **Array control operations** (start/stop array, parity check control, disk spin up/down)
2. **UPS monitoring** (battery level, load, runtime, power status)
3. **GPU metrics** (utilization, temperature, memory, power draw)
4. **SMART disk health data** (per-disk SMART status, errors, power-on hours)
5. **Parity check history** (dates, durations, error counts)
6. **System reboot/shutdown** commands
7. **Services status** (running system services)
8. **Flash drive info** (boot device monitoring)
9. **Plugins list** (installed plugins)
**Moderate gaps (nice-to-have features):**
10. **Docker container resource metrics** (CPU %, memory usage per container)
11. **Docker container pause/unpause** operations
12. **ZFS pool/dataset/snapshot management**
13. **User script execution** (User Scripts plugin integration)
14. **Network bandwidth monitoring** (per-interface stats)
15. **Prometheus metrics endpoint**
16. **MQTT event publishing**
17. **WebSocket real-time streaming** (not just subscription diagnostics)
18. **MCP Resources** (subscribable data streams)
19. **MCP Prompts** (guided interaction templates)
20. **Unassigned devices** monitoring
**Architectural gaps:**
21. No confirmation/safety mechanism for destructive operations
22. No Pydantic response models (type-safe responses)
23. No Docker network listing
24. No container update capability
25. No owner/cloud/remote-access info queries
---
## Project Profiles
### 1. unraid-management-agent
- **Repository:** [ruaan-deysel/unraid-management-agent](https://github.com/ruaan-deysel/unraid-management-agent)
- **Language:** Go
- **Architecture:** Unraid plugin with REST API + WebSocket + MCP + Prometheus + MQTT
- **API Type:** REST (59 endpoints) + WebSocket (9 event types) + MCP (54 tools)
- **Data Collection:** Native Go libraries (Docker SDK, libvirt, /proc, /sys) -- does NOT depend on the GraphQL API
- **Stars/Activity:** Active development, comprehensive documentation
**Key differentiators from our project:**
- Runs as an Unraid plugin directly on the server (no external dependency on GraphQL API)
- Collects data directly from /proc, /sys, Docker SDK, and libvirt
- 59 REST endpoints vs our 10 MCP tools (76 actions)
- 54 MCP tools with Resources and Prompts
- Real-time WebSocket event streaming (9 event types, 5-60s intervals)
- 41 Prometheus metrics for Grafana dashboards
- MQTT publishing for Home Assistant/IoT integration
- Confirmation-required destructive operations (`confirm: true` parameter)
- Collector management (enable/disable collectors, adjust intervals)
- System reboot and shutdown commands
**Unique capabilities not available via GraphQL API:**
- GPU metrics (utilization, temperature, memory, power draw via nvidia-smi)
- UPS metrics via NUT (Network UPS Tools) direct integration
- Fan RPM readings from /sys
- Motherboard temperature from /sys
- SMART disk data (power-on hours, power cycles, read/write bytes, I/O utilization)
- Network interface bandwidth (rx/tx bytes, real-time)
- Docker container resource usage (CPU %, memory bytes, network I/O)
- Unassigned devices monitoring
- ZFS pools, datasets, snapshots, ARC stats
- Parity check scheduling
- Mover settings
- Disk thresholds/settings
- Service management
- Plugin and update management
- Flash drive info
- Network access URLs (LAN, WAN, mDNS, IPv6)
- User script execution
- Share configuration modification (POST endpoints)
- System settings modification
**MCP-specific features we lack:**
- MCP Resources (subscribable real-time data: `unraid://system`, `unraid://array`, `unraid://containers`, `unraid://vms`, `unraid://disks`)
- MCP Prompts (`analyze_disk_health`, `system_overview`, `troubleshoot_issue`)
- Dual MCP transport (HTTP + SSE)
- Confirmation-gated destructive operations
**REST Endpoints (59 total):**
| Category | Endpoints |
|----------|-----------|
| System & Health | `GET /health`, `GET /system`, `POST /system/reboot`, `POST /system/shutdown` |
| Array | `GET /array`, `POST /array/start`, `POST /array/stop` |
| Parity | `POST /parity-check/start\|stop\|pause\|resume`, `GET /parity-check/history`, `GET /parity-check/schedule` |
| Disks | `GET /disks`, `GET /disks/{id}` |
| Shares | `GET /shares`, `GET /shares/{name}/config`, `POST /shares/{name}/config` |
| Docker | `GET /docker`, `GET /docker/{id}`, `POST /docker/{id}/start\|stop\|restart\|pause\|unpause` |
| VMs | `GET /vm`, `GET /vm/{id}`, `POST /vm/{id}/start\|stop\|restart\|pause\|resume\|hibernate\|force-stop` |
| UPS | `GET /ups` |
| GPU | `GET /gpu` |
| Network | `GET /network`, `GET /network/access-urls`, `GET /network/{interface}/config` |
| Collectors | `GET /collectors/status`, `GET /collectors/{name}`, `POST /collectors/{name}/enable\|disable`, `PATCH /collectors/{name}/interval` |
| Logs | `GET /logs`, `GET /logs/{filename}` |
| Settings | `GET /settings/system\|docker\|vm\|disks\|disk-thresholds\|mover\|services\|network-services`, `POST /settings/system` |
| Plugins | `GET /plugins`, `GET /updates` |
| Flash | `GET /system/flash` |
| Prometheus | `GET /metrics` |
| WebSocket | `WS /ws` |
---
### 2. domalab/unraid-api-client
- **Repository:** [domalab/unraid-api-client](https://github.com/domalab/unraid-api-client)
- **Language:** Python (async, aiohttp)
- **Architecture:** Client library for the official Unraid GraphQL API
- **API Type:** GraphQL client (same API we use)
- **PyPI Package:** `unraid-api` (installable via pip)
**Key differentiators from our project:**
- Pure client library (not an MCP server), but shows what the GraphQL API can do
- Full Pydantic model coverage for all responses (type-safe)
- SSL auto-discovery (handles Unraid's "No", "Yes", "Strict" SSL modes)
- Redirect handling for myunraid.net remote access
- Session injection for Home Assistant integration
- Comprehensive exception hierarchy
**Methods we should consider adding MCP tools for:**
| Method | Our Coverage | Notes |
|--------|-------------|-------|
| `test_connection()` | Missing | Connection validation |
| `get_version()` | Missing | API and OS version info |
| `get_server_info()` | Partial | For device registration |
| `get_system_metrics()` | Missing | CPU, memory, temperature, power, uptime as typed model |
| `typed_get_array()` | Have `get_array_status()` | They have richer Pydantic model |
| `typed_get_containers()` | Have `list_docker_containers()` | They have typed models |
| `typed_get_vms()` | Have `list_vms()` | They have typed models |
| `typed_get_ups_devices()` | **Missing** | UPS battery, power, runtime |
| `typed_get_shares()` | Have `get_shares_info()` | Similar |
| `get_notification_overview()` | Have it | Same |
| `start/stop_container()` | Have `manage_docker_container()` | Same |
| `pause/unpause_container()` | **Missing** | Docker pause/unpause |
| `update_container()` | **Missing** | Container image update |
| `remove_container()` | **Missing** | Container removal |
| `start/stop_vm()` | Have `manage_vm()` | Same |
| `pause/resume_vm()` | **Missing** | VM pause/resume |
| `force_stop_vm()` | **Missing** | Force stop VM |
| `reboot_vm()` | **Missing** | VM reboot |
| `start/stop_array()` | **Missing** | Array start/stop control |
| `start/pause/resume/cancel_parity_check()` | **Missing** | Full parity control |
| `spin_up/down_disk()` | **Missing** | Disk spin control |
| `get_parity_history()` | **Missing** | Historical parity data |
| `typed_get_vars()` | Have `get_unraid_variables()` | Same |
| `typed_get_registration()` | Have `get_registration_info()` | Same |
| `typed_get_services()` | **Missing** | System services list |
| `typed_get_flash()` | **Missing** | Flash drive info |
| `typed_get_owner()` | **Missing** | Server owner info |
| `typed_get_plugins()` | **Missing** | Installed plugins |
| `typed_get_docker_networks()` | **Missing** | Docker network list |
| `typed_get_log_files()` | Have `list_available_log_files()` | Same |
| `typed_get_cloud()` | **Missing** | Unraid Connect cloud status |
| `typed_get_connect()` | Have `get_connect_settings()` | Same |
| `typed_get_remote_access()` | **Missing** | Remote access settings |
| `get_physical_disks()` | Have `list_physical_disks()` | Same |
| `get_array_disks()` | **Missing** | Array disk assignments |
---
### 3. mcp-ssh-sre / unraid-ssh-mcp
- **Repository:** [ohare93/mcp-ssh-sre](https://github.com/ohare93/mcp-ssh-sre)
- **Language:** TypeScript/Node.js
- **Architecture:** MCP server that connects via SSH to run predefined commands
- **API Type:** SSH command execution (read-only by design)
- **Tools:** 12 tool modules with 79+ actions
**Why SSH instead of GraphQL API:**
The project's documentation explicitly compares SSH vs API capabilities:
| Feature | GraphQL API | SSH |
|---------|------------|-----|
| Docker container logs | Limited | Full |
| SMART disk health data | Limited | Full (smartctl) |
| Real-time CPU/load averages | Polling | Direct |
| Network bandwidth monitoring | Limited | Full (iftop, nethogs) |
| Process monitoring (ps/top) | Not available | Full |
| Log file analysis | Basic | Full (grep, awk) |
| Security auditing | Not available | Full |
**Tool modules and actions:**
| Module | Tool Name | Actions |
|--------|-----------|---------|
| Docker | `docker` | list_containers, inspect, logs, stats, port, env, top, health, logs_aggregate, list_networks, inspect_network, list_volumes, inspect_volume, network_containers |
| System | `system` | list_files, read_file, find_files, disk_usage, system_info |
| Monitoring | `monitoring` | ps, process_tree, top, iostat, network_connections |
| Security | `security` | open_ports, audit_privileges, ssh_connections, cert_expiry |
| Log Analysis | `log` | grep_all, error_aggregator, timeline, parse_docker, compare_timerange, restart_history |
| Resources | `resource` | dangling, hogs, disk_analyzer, docker_df, zombies, io_profile |
| Performance | `performance` | bottleneck, bandwidth, track_metric |
| VMs | `vm` | list, info, vnc, logs |
| Container Topology | `container_topology` | network_topology, volume_sharing, dependency_graph, port_conflicts, network_test |
| Health Diagnostics | `health` | comprehensive, common_issues, threshold_alerts, compare_baseline, diagnostic_report, snapshot |
| **Unraid Array** | `unraid` | array_status, smart, temps, shares, share_usage, parity_status, parity_history, sync_status, spin_status, unclean_check, mover_status, mover_log, cache_usage, split_level |
| **Unraid Plugins** | `plugin` | list, updates, template, scripts, share_config, disk_assignments, recent_changes |
**Unique capabilities we lack entirely:**
- Container log retrieval and aggregation
- Container environment variable inspection
- Container topology analysis (network maps, shared volumes, dependency graphs, port conflicts)
- Process monitoring (ps, top, process trees)
- Disk I/O monitoring (iostat)
- Network connection analysis (ss/netstat)
- Security auditing (open ports, privilege audit, SSH connection logs, SSL cert expiry)
- Performance bottleneck analysis
- Resource waste detection (dangling Docker resources, zombie processes)
- Comprehensive health diagnostics with baseline comparison
- Mover status and logs
- Cache usage analysis
- Split level configuration
- User script discovery
- Docker template inspection
- Disk assignment information
- Recent config file change detection
---
### 4. PSUnraid
- **Repository:** [jlabon2/PSUnraid](https://github.com/jlabon2/PSUnraid)
- **Language:** PowerShell
- **Architecture:** PowerShell module using GraphQL API
- **API Type:** GraphQL (same as ours)
- **Status:** Proof of concept, 30+ cmdlets
**Cmdlets and operations:**
| Category | Cmdlets |
|----------|---------|
| Connection | `Connect-Unraid`, `Disconnect-Unraid` |
| System | `Get-UnraidServer`, `Get-UnraidMetrics`, `Get-UnraidLog`, `Start-UnraidMonitor` |
| Docker | `Get-UnraidContainer`, `Start-UnraidContainer`, `Stop-UnraidContainer`, `Restart-UnraidContainer` |
| VMs | `Get-UnraidVm`, `Start-UnraidVm`, `Stop-UnraidVm`, `Suspend-UnraidVm`, `Resume-UnraidVm`, `Restart-UnraidVm` |
| Array | `Get-UnraidArray`, `Get-UnraidPhysicalDisk`, `Get-UnraidShare`, `Start-UnraidArray`, `Stop-UnraidArray` |
| Parity | `Start-UnraidParityCheck`, `Stop-UnraidParityCheck`, `Suspend-UnraidParityCheck`, `Resume-UnraidParityCheck`, `Get-UnraidParityHistory` |
| Notifications | `Get-UnraidNotification`, `Set-UnraidNotification`, `Remove-UnraidNotification` |
| Other | `Get-UnraidPlugin`, `Get-UnraidUps`, `Restart-UnraidApi` |
**Features we lack that PSUnraid has (via the same GraphQL API):**
- Real-time monitoring dashboard (`Start-UnraidMonitor`)
- Notification management (mark as read, delete notifications)
- Array start/stop
- Parity check full lifecycle (start, stop, pause, resume, history)
- UPS monitoring
- Plugin listing
- API restart capability
- VM suspend/resume/restart
---
### 5. ha-unraid (Home Assistant)
- **Repository:** [domalab/ha-unraid](https://github.com/domalab/ha-unraid) (ruaan-deysel fork is active)
- **Language:** Python
- **Architecture:** Home Assistant custom integration
- **API Type:** Originally SSH-based (through v2025.06.11), rebuilt for GraphQL API (v2025.12.0+)
- **Requires:** Unraid 7.2.0+, GraphQL API v4.21.0+
**Sensors provided:**
| Entity Type | Entities |
|-------------|----------|
| **Sensors** | CPU Usage, CPU Temperature, CPU Power, Memory Usage, Uptime, Array State, Array Usage, Parity Progress, per-Disk Usage, per-Share Usage, Flash Usage, UPS Battery, UPS Load, UPS Runtime, UPS Power, Notifications count |
| **Binary Sensors** | Array Started, Parity Check Running, Parity Valid, per-Disk Health, UPS Connected |
| **Switches** | Docker Container start/stop, VM start/stop |
| **Buttons** | Array Start/Stop, Parity Check Start/Stop, Disk Spin Up/Down |
**Features we lack:**
- CPU temperature and CPU power consumption monitoring
- UPS full monitoring (battery, load, runtime, power, connected status)
- Parity progress tracking
- Per-disk health binary status
- Flash device usage monitoring
- Array start/stop buttons
- Parity check start/stop
- Disk spin up/down
- Dynamic entity creation (only creates entities for available services)
---
### 6. chris-mc1/unraid_api (HA integration)
- **Repository:** [chris-mc1/unraid_api](https://github.com/chris-mc1/unraid_api)
- **Language:** Python
- **Architecture:** Lightweight Home Assistant integration using GraphQL API
- **API Type:** GraphQL
- **Status:** Simpler/lighter alternative to ha-unraid
**Entities provided:**
- Array state sensor
- Array used space percentage
- RAM usage percentage
- CPU utilization
- Per-share free space (optional)
- Per-disk state, temperature, spinning status, used space (optional)
**Notable:** A monitoring-only integration -- it exposes no control operations, which keeps it lighter-weight than ha-unraid.
---
## Feature Matrix
### Legend
- **Y** = Supported
- **N** = Not supported
- **P** = Partial support
- **--** = Not applicable
### Monitoring Features
| Feature | Our MCP (10 tools, 76 actions) | mgmt-agent (54 MCP tools) | unraid-api-client | mcp-ssh-sre (79 actions) | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| System info (hostname, uptime) | Y | Y | Y | Y | Y | Y | N |
| CPU usage | Y | Y | Y | Y | Y | Y | Y |
| CPU temperature | N | Y | Y | N | N | Y | N |
| CPU power consumption | N | Y | N | N | N | Y | N |
| Memory usage | Y | Y | Y | Y | Y | Y | Y |
| GPU metrics | N | Y | N | N | N | N | N |
| Fan RPM | N | Y | N | N | N | N | N |
| Motherboard temperature | N | Y | N | N | N | N | N |
| UPS monitoring | N | Y | Y | N | Y | Y | N |
| Network config | Y | Y | Y | Y | N | N | N |
| Network bandwidth | N | Y | N | Y | N | N | N |
| Registration/license info | Y | Y | Y | N | N | N | N |
| Connect settings | Y | Y | Y | N | N | N | N |
| Unraid variables | Y | Y | Y | N | N | N | N |
| System services status | N | Y | Y | N | N | N | N |
| Flash drive info | N | Y | Y | N | N | Y | N |
| Owner info | N | N | Y | N | N | N | N |
| Installed plugins | N | Y | Y | Y | Y | N | N |
| Available updates | N | Y | N | Y | N | N | N |
### Storage Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Array status | Y | Y | Y | Y | Y | Y | Y |
| Array start/stop | N | Y | Y | N | Y | Y | N |
| Physical disk listing | Y | Y | Y | N | Y | N | N |
| Disk details | Y | Y | Y | Y | Y | Y | Y |
| Disk SMART data | N | Y | N | Y | N | P | N |
| Disk spin up/down | N | Y | Y | Y | N | Y | N |
| Disk temperatures | P | Y | Y | Y | N | Y | Y |
| Disk I/O stats | N | Y | N | Y | N | N | N |
| Shares info | Y | Y | Y | Y | Y | Y | Y |
| Share configuration | N | Y | N | Y | N | N | N |
| Parity check control | N | Y | Y | N | Y | Y | N |
| Parity check history | N | Y | Y | Y | Y | N | N |
| Parity progress | N | Y | Y | Y | Y | Y | N |
| ZFS pools/datasets/snapshots | N | Y | N | N | N | N | N |
| ZFS ARC stats | N | Y | N | N | N | N | N |
| Unassigned devices | N | Y | N | N | N | N | N |
| Mover status/logs | N | N | N | Y | N | N | N |
| Cache usage | N | N | N | Y | N | N | N |
### Docker Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| List containers | Y | Y | Y | Y | Y | Y | N |
| Container details | Y | Y | Y | Y | N | P | N |
| Start/stop/restart | Y | Y | Y | N | Y | Y | N |
| Pause/unpause | N | Y | Y | N | N | N | N |
| Container resource usage | N | Y | Y | Y | N | N | N |
| Container logs | N | N | N | Y | N | N | N |
| Container env vars | N | N | N | Y | N | N | N |
| Container network topology | N | N | N | Y | N | N | N |
| Container port inspection | N | N | N | Y | N | N | N |
| Docker networks | N | Y | Y | Y | N | N | N |
| Docker volumes | N | N | N | Y | N | N | N |
| Container update | N | N | Y | N | N | N | N |
| Container removal | N | N | Y | N | N | N | N |
| Docker settings | N | Y | N | N | N | N | N |
### VM Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| List VMs | Y | Y | Y | Y | Y | Y | N |
| VM details | Y | Y | Y | Y | N | P | N |
| Start/stop | Y | Y | Y | N | Y | Y | N |
| Restart | Y | Y | N | N | Y | N | N |
| Pause/resume | N | Y | Y | N | Y | N | N |
| Hibernate | N | Y | N | N | N | N | N |
| Force stop | N | Y | Y | N | Y | N | N |
| Reboot VM | N | N | Y | N | N | N | N |
| VNC info | N | N | N | Y | N | N | N |
| VM libvirt logs | N | N | N | Y | N | N | N |
| VM settings | N | Y | N | N | N | N | N |
### Cloud Storage (RClone) Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| List remotes | Y | N | N | N | N | N | N |
| Get config form | Y | N | N | N | N | N | N |
| Create remote | Y | N | N | N | N | N | N |
| Delete remote | Y | N | N | N | N | N | N |
> **Note:** RClone management is unique to our project among these competitors.
### Notification Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Notification overview | Y | Y | Y | N | N | Y | N |
| List notifications | Y | Y | Y | Y | Y | N | N |
| Mark as read | N | N | N | N | Y | N | N |
| Delete notifications | N | N | N | N | Y | N | N |
### Logs & Diagnostics
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| List log files | Y | Y | Y | N | N | N | N |
| Get log contents | Y | Y | Y | Y | Y | N | N |
| Log search/grep | N | N | N | Y | N | N | N |
| Error aggregation | N | N | N | Y | N | N | N |
| Syslog access | N | Y | N | Y | Y | N | N |
| Docker daemon log | N | Y | N | Y | N | N | N |
| Health check | Y | Y | N | Y | N | N | N |
| Subscription diagnostics | Y | N | N | N | N | N | N |
### Integration & Protocol Features
| Feature | Our MCP | mgmt-agent | unraid-api-client | mcp-ssh-sre | PSUnraid | ha-unraid | chris-mc1 |
|---------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| MCP tools | Y (10 tools, 76 actions) | Y (54) | N | Y (79 actions) | N | N | N |
| MCP Resources | N | Y (5) | N | N | N | N | N |
| MCP Prompts | N | Y (3) | N | N | N | N | N |
| REST API | N | Y (59) | N | N | N | N | N |
| WebSocket streaming | N | Y (9 events) | N | N | N | N | N |
| Prometheus metrics | N | Y (41) | N | N | N | N | N |
| MQTT publishing | N | Y | N | N | N | N | N |
| SSE transport | Y | Y | N | Y | N | N | N |
| Stdio transport | Y | N | N | Y | N | N | N |
| Streamable HTTP | Y | Y | N | Y | N | N | N |
| Pydantic models | N | N | Y | N | N | N | N |
| Safety confirmations | N | Y | N | N | N | N | N |
### Security & Operational Features
| Feature | Our MCP | mgmt-agent | mcp-ssh-sre | PSUnraid |
|---------|:---:|:---:|:---:|:---:|
| Open port scanning | N | N | Y | N |
| SSH login monitoring | N | N | Y | N |
| Container privilege audit | N | N | Y | N |
| SSL certificate expiry | N | N | Y | N |
| Process monitoring | N | N | Y | N |
| Zombie process detection | N | N | Y | N |
| Performance bottleneck analysis | N | N | Y | N |
| System reboot | N | Y | N | N |
| System shutdown | N | Y | N | N |
| User script execution | N | Y | Y | N |
---
## Gap Analysis
### Priority 1: High-Value Features Available via GraphQL API
These features are available through the same GraphQL API we already use and should be straightforward to implement (a minimal mutation sketch follows the list):
1. **Array start/stop control** -- Both `domalab/unraid-api-client` and `PSUnraid` implement this via GraphQL mutations. This is a fundamental control operation that every competitor supports.
2. **Parity check lifecycle** (start, stop, pause, resume, history) -- Available via GraphQL mutations. Critical for array management.
3. **Disk spin up/down** -- Available via GraphQL mutations. Important for power management and noise control.
4. **UPS monitoring** -- Available via GraphQL query. Present in `unraid-api-client`, `PSUnraid`, and `ha-unraid`. Data includes battery level, load, runtime, power state.
5. **System services list** -- Available via GraphQL query (`services`). Shows Docker service, VM manager status, etc.
6. **Flash drive info** -- Available via GraphQL query (`flash`). Boot device monitoring.
7. **Installed plugins list** -- Available via GraphQL query (`plugins`). Useful for understanding server configuration.
8. **Docker networks** -- Available via GraphQL query. Listed in `unraid-api-client`.
9. **Parity history** -- Available via GraphQL query. Historical parity check data.
10. **VM pause/resume and force stop** -- Available via GraphQL mutations. Completing our VM control capabilities.
11. **Docker pause/unpause** -- Available via GraphQL mutations. Completing our Docker control capabilities.
12. **Cloud/remote access status** -- Available via GraphQL queries. Shows Unraid Connect status, remote access configuration.
13. **Notification management** -- Mark as read, delete. `PSUnraid` implements this via GraphQL.
14. **API/OS version info** -- Simple query that helps with compatibility checks.
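As a rough effort check, item 2 reduces to a single GraphQL call. A minimal sketch, assuming the `startParityCheck(correct)` mutation shape documented in the gap analysis, `x-api-key` header auth, and placeholder endpoint/key values (`httpx` stands in for whatever HTTP client the server already uses):

```
# Hedged sketch: fire the startParityCheck mutation with a plain HTTP POST.
# Endpoint, API key, and header name are placeholders to verify locally.
import httpx

UNRAID_URL = "https://unraid.local/graphql"   # placeholder
API_KEY = "your-api-key"                      # placeholder

START_PARITY_CHECK = """
mutation StartParityCheck($correct: Boolean) {
  startParityCheck(correct: $correct)
}
"""

def start_parity_check(correct: bool = False) -> dict:
    """Return the raw GraphQL payload from startParityCheck."""
    resp = httpx.post(
        UNRAID_URL,
        json={"query": START_PARITY_CHECK, "variables": {"correct": correct}},
        headers={"x-api-key": API_KEY},
        timeout=30.0,
    )
    resp.raise_for_status()
    return resp.json()
```

Most of the other items in this list follow the same pattern, differing only in the mutation document and variables.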
### Priority 2: High-Value Features Requiring Non-GraphQL Data Sources
These would require SSH access or other system-level access that our GraphQL-only architecture cannot provide:
1. **Container logs** -- Not available via GraphQL. SSH-based solutions (mcp-ssh-sre) can retrieve full container logs via `docker logs`.
2. **SMART disk data** -- Limited via GraphQL. Full SMART data (power-on hours, error counts, reallocated sectors) requires `smartctl` access.
3. **GPU metrics** -- Not available via GraphQL. Requires nvidia-smi or similar.
4. **Process monitoring** -- Not available via GraphQL. Requires `ps`/`top` access.
5. **Network bandwidth** -- Not in GraphQL. Requires direct system access.
6. **Container resource usage** (CPU%, memory) -- Not available through the current GraphQL API at a per-container level in real-time.
7. **Log search/grep** -- While we can get log contents, we cannot search across logs.
8. **Security auditing** -- Not available via GraphQL.
### Priority 3: Architectural Improvements
1. **MCP Resources** -- Add subscribable data streams (system, array, containers, VMs, disks) for real-time AI agent monitoring.
2. **MCP Prompts** -- Add guided interaction templates (disk health analysis, system overview, troubleshooting).
3. **Confirmation for destructive operations** -- Add a `confirm` parameter for array stop, system reboot, container removal, etc. (see the guard sketch after this list).
4. **Pydantic response models** -- Type-safe response parsing like `domalab/unraid-api-client`.
5. **Connection validation tool** -- Simple tool to verify API connectivity and version compatibility.
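Item 3 is small enough to sketch directly. A minimal, hypothetical guard (names here are illustrative, not the project's actual API):

```
# Sketch of a confirm-parameter guard for destructive tools. All names are
# illustrative; the real tool layer would wire this into each mutation action.
class ConfirmationRequired(Exception):
    """Raised when a destructive action is invoked without confirm=True."""

def require_confirmation(action: str, confirm: bool) -> None:
    # Destructive operations must be opted into explicitly by the caller.
    if not confirm:
        raise ConfirmationRequired(
            f"{action} is destructive; retry with confirm=True to proceed."
        )

async def stop_array(confirm: bool = False):
    require_confirmation("stop_array", confirm)
    ...  # issue the stopArray mutation here
```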
---
## Recommended Priorities
### Phase 1: Low-Hanging Fruit (GraphQL mutations/queries we already have access to)
**Estimated effort: Small -- these are straightforward GraphQL queries/mutations**
| New Tool | Priority | Notes |
|----------|----------|-------|
| `start_array()` / `stop_array()` | Critical | Every competitor has this |
| `start_parity_check()` / `stop_parity_check()` | Critical | Full parity lifecycle |
| `pause_parity_check()` / `resume_parity_check()` | Critical | Full parity lifecycle |
| `get_parity_history()` | High | Historical data |
| `spin_up_disk()` / `spin_down_disk()` | High | Disk power management |
| `get_ups_status()` | High | UPS monitoring |
| `get_services_status()` | Medium | System services |
| `get_flash_info()` | Medium | Flash drive info |
| `get_plugins()` | Medium | Plugin management |
| `get_docker_networks()` | Medium | Docker networking |
| `pause_docker_container()` / `unpause_docker_container()` | Medium | Docker control |
| `pause_vm()` / `resume_vm()` / `force_stop_vm()` | Medium | VM control |
| `get_cloud_status()` / `get_remote_access()` | Low | Connect info |
| `get_version()` | Low | API version |
| `manage_notifications()` | Low | Mark read/delete |
### Phase 2: MCP Protocol Enhancements
| Enhancement | Priority | Notes |
|-------------|----------|-------|
| MCP Resources (5 streams) | High | Real-time data for AI agents |
| MCP Prompts (3 templates) | Medium | Guided interactions |
| Confirmation parameter | High | Safety for destructive ops |
| Connection validation tool | Medium | Health/compatibility check |
### Phase 3: Advanced Features (may require SSH)
| Feature | Priority | Notes |
|---------|----------|-------|
| Container log retrieval | High | Most-requested SSH-only feature |
| SMART disk health data | High | Disk failure prediction |
| GPU monitoring | Medium | For GPU passthrough users |
| Performance/resource monitoring | Medium | Bottleneck analysis |
| Security auditing | Low | Port scan, login audit |
---
## Sources
- [ruaan-deysel/unraid-management-agent](https://github.com/ruaan-deysel/unraid-management-agent) -- Go-based Unraid plugin with REST API, WebSocket, MCP, Prometheus, and MQTT
- [domalab/unraid-api-client](https://github.com/domalab/unraid-api-client) -- Async Python client for Unraid GraphQL API (PyPI: `unraid-api`)
- [ohare93/mcp-ssh-sre](https://github.com/ohare93/mcp-ssh-sre) -- SSH-based MCP server for read-only server monitoring
- [jlabon2/PSUnraid](https://github.com/jlabon2/PSUnraid) -- PowerShell module for Unraid 7.x management via GraphQL API
- [domalab/ha-unraid](https://github.com/domalab/ha-unraid) (ruaan-deysel fork) -- Home Assistant integration via GraphQL API
- [chris-mc1/unraid_api](https://github.com/chris-mc1/unraid_api) -- Lightweight Home Assistant integration for Unraid
- [nickbeddows-ctrl/unraid-ssh-mcp](https://github.com/nickbeddows-ctrl/unraid-ssh-mcp) -- Guardrailed MCP server for Unraid management via SSH
- [MCP SSH Unraid on LobeHub](https://lobehub.com/mcp/ohare93-unraid-ssh-mcp)
- [MCP SSH SRE on Glama](https://glama.ai/mcp/servers/@ohare93/mcp-ssh-sre)
- [Unraid Integration for Home Assistant (domalab docs)](https://domalab.github.io/ha-unraid/)
- [Home Assistant Unraid Integration forum thread](https://community.home-assistant.io/t/unraid-integration/785003)

View File

@@ -1,845 +0,0 @@
# Unraid API Feature Gap Analysis
> **Date:** 2026-02-07
> **Purpose:** Comprehensive inventory of every API capability that could become an MCP tool, cross-referenced against our current 10 tools (76 actions) to identify gaps.
> **Sources:** 7 research documents (3,800+ lines), Unraid API source code analysis, community project reviews, official documentation crawl.
---
## Table of Contents
1. [All GraphQL Queries Available](#a-all-graphql-queries-available)
2. [All GraphQL Mutations Available](#b-all-graphql-mutations-available)
3. [All GraphQL Subscriptions Available](#c-all-graphql-subscriptions-available)
4. [All Custom Scalars and Types](#d-all-custom-scalars-and-types)
5. [All Enums](#e-all-enums)
6. [API Capabilities NOT in Current MCP Server](#f-api-capabilities-not-currently-in-the-mcp-server)
7. [Community Project Capabilities](#g-community-project-capabilities)
8. [Known API Bugs and Limitations](#h-known-api-bugs-and-limitations)
---
## A. All GraphQL Queries Available
Every query type identified across all research documents, with their fields and sub-fields.
### A.1 System & Server Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `info` | `time`, `baseboard { manufacturer, model, version, serial }`, `cpu { manufacturer, brand, vendor, family, model, stepping, revision, voltage, speed, speedmin, speedmax, threads, cores, processors, socket, cache, flags }`, `devices`, `display`, `machineId`, `memory { max, total, free, used, active, available, buffcache, swaptotal, swapused, swapfree, layout[] }`, `os { platform, distro, release, codename, kernel, arch, hostname, codepage, logofile, serial, build, uptime }`, `system { manufacturer, model, version, serial, uuid }`, `versions { kernel, docker, unraid, node }`, `apps { installed, started }` | **YES** - `get_system_info()` |
| `vars` | `id`, `version`, `name`, `timeZone`, `comment`, `security`, `workgroup`, `domain`, `useNtp`, `ntpServer1-4`, `useSsl`, `port`, `portssl`, `useTelnet`, `useSsh`, `portssh`, `startPage`, `startArray`, `spindownDelay`, `defaultFormat`, `defaultFsType`, `shutdownTimeout`, `shareDisk`, `shareUser`, `shareSmbEnabled`, `shareNfsEnabled`, `shareAfpEnabled`, `shareCacheEnabled`, `shareMoverSchedule`, `shareMoverLogging`, `safeMode`, `configValid`, `configError`, `deviceCount`, `flashGuid`, `flashProduct`, `flashVendor`, `regState`, `regTo`, `mdState`, `mdNumDisks`, `mdNumDisabled`, `mdNumInvalid`, `mdNumMissing`, `mdResync`, `mdResyncAction`, `fsState`, `fsProgress`, `fsCopyPrcnt`, `shareCount`, `shareSmbCount`, `shareNfsCount`, `csrfToken`, `maxArraysz`, `maxCachesz` | **YES** - `get_unraid_variables()` |
| `online` | `Boolean` | **NO** |
| `owner` | Server owner information | **NO** |
| `server` | Server details | **NO** |
| `servers` | `[Server!]!` - List of all servers (Connect-managed) | **NO** |
| `me` | `id`, `name`, `description`, `roles`, `permissions` (current authenticated user) | **NO** |
| `user(id)` | `id`, `name`, `description`, `roles`, `password`, `permissions` | **NO** |
| `users(input)` | `[User!]!` - List of users | **NO** |
| `config` | `Config!` - System configuration | **NO** |
| `display` | Display settings | **NO** |
| `services` | `[Service!]!` - Running services list | **NO** |
| `cloud` | `error`, `apiKey`, `relay`, `minigraphql`, `cloud`, `allowedOrigins` | **NO** |
| `flash` | Flash drive information | **NO** |
### A.2 Network Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `network` | `id`, `iface`, `ifaceName`, `ipv4`, `ipv6`, `mac`, `internal`, `operstate`, `type`, `duplex`, `mtu`, `speed`, `carrierChanges`, `accessUrls { type, name, ipv4, ipv6 }` | **YES** - `get_network_config()` |
### A.3 Storage & Array Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `array` | `id`, `state`, `previousState`, `pendingState`, `capacity { kilobytes { free, used, total }, disks { free, used, total } }`, `boot { id, idx, name, device, size, fsSize, fsFree, fsUsed, status, rotational, temp, numReads, numWrites, numErrors, type, exportable, warning, critical, fsType, comment, format, transport, color, isSpinning }`, `parities[...]`, `disks[...]`, `caches[...]`, `parityCheckStatus` | **PARTIAL** - `get_array_status()` (missing `previousState`, `pendingState`, `parityCheckStatus`, disk fields like `color`, `isSpinning`, `transport`, `format`) |
| `parityHistory` | `[ParityCheck]` - Historical parity check records | **NO** |
| `disks` | `[Disk]!` - All physical disks with `device`, `type`, `name`, `vendor`, `size`, `bytesPerSector`, `totalCylinders`, `totalHeads`, `totalSectors`, `totalTracks`, `tracksPerCylinder`, `sectorsPerTrack`, `firmwareRevision`, `serialNum`, `interfaceType`, `smartStatus`, `temperature`, `partitions[]` | **YES** - `list_physical_disks()` |
| `disk(id)` | Single disk by PrefixedID | **YES** - `get_disk_details()` |
| `shares` | `name`, `free`, `used`, `size`, `include[]`, `exclude[]`, `cache`, `nameOrig`, `comment`, `allocator`, `splitLevel`, `floor`, `cow`, `color`, `luksStatus` | **PARTIAL** - `get_shares_info()` (may not query all fields like `allocator`, `splitLevel`, `floor`, `cow`, `luksStatus`) |
| `unassignedDevices` | `[UnassignedDevice]` - Devices not assigned to array/pool | **NO** |
### A.4 Docker Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `docker` | `id`, `containers[]`, `networks[]` | **YES** - `list_docker_containers()` |
| `dockerContainers(all)` | `[DockerContainer!]!` - All containers with full details including `id`, `names`, `image`, `imageId`, `command`, `created`, `ports[]`, `lanIpPorts[]`, `sizeRootFs`, `sizeRw`, `sizeLog`, `labels`, `state`, `status`, `hostConfig`, `networkSettings`, `mounts`, `autoStart`, `autoStartOrder`, `autoStartWait`, `templatePath`, `projectUrl`, `registryUrl`, `supportUrl`, `iconUrl`, `webUiUrl`, `shell`, `templatePorts`, `isOrphaned` | **YES** - `list_docker_containers()` / `get_docker_container_details()` |
| `container(id)` (via Docker resolver) | Single container by PrefixedID | **YES** - `get_docker_container_details()` |
| `docker.logs(id, since, tail)` | Container log output with filtering | **NO** |
| `docker.networks` / `dockerNetworks(all)` | `[DockerNetwork]` - name, id, created, scope, driver, enableIPv6, ipam, internal, attachable, ingress, configFrom, configOnly, containers, options, labels | **NO** |
| `dockerNetwork(id)` | Single network by ID | **NO** |
| `docker.portConflicts` | Port conflict detection | **NO** |
| `docker.organizer` | Container organization/folder structure | **NO** |
| `docker.containerUpdateStatuses` | Check for available container image updates (`UpdateStatus`: UP_TO_DATE, UPDATE_AVAILABLE, REBUILD_READY, UNKNOWN) | **NO** |
### A.5 VM Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `vms` | `id`, `domain[{ uuid/id, name, state }]` | **YES** - `list_vms()` / `get_vm_details()` |
### A.6 Notification Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `notifications` | `id`, `overview { unread { info, warning, alert, total }, archive { info, warning, alert, total } }`, `list(filter) [{ id, title, subject, description, importance, link, type, timestamp, formattedTimestamp }]` | **YES** - `get_notifications_overview()` / `list_notifications()` |
| `notifications.warningsAndAlerts` | Deduplicated unread warnings and alerts | **NO** |
### A.7 Registration & Connect Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `registration` | `id`, `type`, `state`, `expiration`, `updateExpiration`, `keyFile { location, contents }` | **YES** - `get_registration_info()` |
| `connect` | `id`, `dynamicRemoteAccess { ... }` | **YES** - `get_connect_settings()` |
| `remoteAccess` | `accessType`, `forwardType`, `port` | **NO** |
| `extraAllowedOrigins` | `[String!]!` | **NO** |
### A.8 RClone Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `rclone.remotes` | `name`, `type`, `parameters`, `config` | **YES** - `list_rclone_remotes()` |
| `rclone.configForm(formOptions)` | `id`, `dataSchema`, `uiSchema` | **YES** - `get_rclone_config_form()` |
### A.9 Logs Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `logFiles` | List available log files | **YES** - `list_available_log_files()` |
| `logFile(path, lines, startLine)` | Specific log file content with pagination | **YES** - `get_logs()` |
### A.10 Settings Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `settings` | `unified { values }`, SSO config | **NO** |
### A.11 API Key Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `apiKeys` | `[ApiKey!]!` - List all API keys with `id`, `name`, `description`, `roles[]`, `createdAt`, `permissions[]` | **NO** |
| `apiKey(id)` | Single API key by ID | **NO** |
### A.12 UPS Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `upsDevices` | List UPS devices with status | **NO** |
| `upsDeviceById(id)` | Specific UPS device | **NO** |
| `upsConfiguration` | UPS configuration settings | **NO** |
### A.13 Metrics Queries
| Query | Fields | Current MCP Coverage |
|-------|--------|---------------------|
| `metrics` | System performance metrics (CPU, memory utilization) | **NO** |
---
## B. All GraphQL Mutations Available
Every mutation identified across all research documents with their parameters and return types.
### B.1 Array Management Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `startArray` | none | `Array` | **NO** |
| `stopArray` | none | `Array` | **NO** |
| `addDiskToArray(input)` | `arrayDiskInput` | `Array` | **NO** |
| `removeDiskFromArray(input)` | `arrayDiskInput` | `Array` | **NO** |
| `mountArrayDisk(id)` | `ID!` | `Disk` | **NO** |
| `unmountArrayDisk(id)` | `ID!` | `Disk` | **NO** |
| `clearArrayDiskStatistics(id)` | `ID!` | `JSON` | **NO** |
### B.2 Parity Check Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `startParityCheck(correct)` | `correct: Boolean` | `JSON` | **NO** |
| `pauseParityCheck` | none | `JSON` | **NO** |
| `resumeParityCheck` | none | `JSON` | **NO** |
| `cancelParityCheck` | none | `JSON` | **NO** |
### B.3 Docker Container Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `docker.start(id)` | `PrefixedID!` | `DockerContainer` | **YES** - `manage_docker_container(action="start")` |
| `docker.stop(id)` | `PrefixedID!` | `DockerContainer` | **YES** - `manage_docker_container(action="stop")` |
| `docker.pause(id)` | `PrefixedID!` | `DockerContainer` | **NO** |
| `docker.unpause(id)` | `PrefixedID!` | `DockerContainer` | **NO** |
| `docker.removeContainer(id, withImage?)` | `PrefixedID!`, `Boolean` | `DockerContainer` | **NO** |
| `docker.updateContainer(id)` | `PrefixedID!` | `DockerContainer` | **NO** |
| `docker.updateContainers(ids)` | `[PrefixedID!]!` | `[DockerContainer]` | **NO** |
| `docker.updateAllContainers` | none | `[DockerContainer]` | **NO** |
| `docker.updateAutostartConfiguration` | auto-start config | varies | **NO** |
### B.4 Docker Organizer Mutations (Feature-Flagged)
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `docker.createDockerFolder` | folder config | varies | **NO** |
| `docker.setDockerFolderChildren` | folder ID, children | varies | **NO** |
| `docker.deleteDockerEntries` | entry IDs | varies | **NO** |
| `docker.moveDockerEntriesToFolder` | entries, folder | varies | **NO** |
| `docker.moveDockerItemsToPosition` | items, position | varies | **NO** |
| `docker.renameDockerFolder` | folder ID, name | varies | **NO** |
| `docker.createDockerFolderWithItems` | folder config, items | varies | **NO** |
### B.5 Docker Template Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `docker.syncDockerTemplatePaths` | none | varies | **NO** |
| `docker.resetDockerTemplateMappings` | none | varies | **NO** |
### B.6 VM Management Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `vm.start(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="start")` |
| `vm.stop(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="stop")` |
| `vm.pause(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="pause")` |
| `vm.resume(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="resume")` |
| `vm.forceStop(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="forceStop")` |
| `vm.reboot(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="reboot")` |
| `vm.reset(id)` | `PrefixedID!` | `Boolean` | **YES** - `manage_vm(action="reset")` |
### B.7 Notification Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `createNotification(input)` | `NotificationData!` | `Notification!` | **NO** |
| `deleteNotification(id, type)` | `String!`, `NotificationType!` | `NotificationOverview!` | **NO** |
| `deleteArchivedNotifications` | none | `NotificationOverview!` | **NO** |
| `archiveNotification(id)` | `String!` | `Notification!` | **NO** |
| `unreadNotification(id)` | `String!` | `Notification!` | **NO** |
| `archiveNotifications(ids)` | `[String!]` | `NotificationOverview!` | **NO** |
| `unarchiveNotifications(ids)` | `[String!]` | `NotificationOverview!` | **NO** |
| `archiveAll(importance?)` | `Importance` (optional) | `NotificationOverview!` | **NO** |
| `unarchiveAll(importance?)` | `Importance` (optional) | `NotificationOverview!` | **NO** |
| `recalculateOverview` | none | `NotificationOverview!` | **NO** |
| `notifyIfUnique(input)` | `NotificationData!` | `Notification!` | **NO** |
### B.8 RClone Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `createRCloneRemote(input)` | name, type, config | `RCloneRemote` | **YES** - `create_rclone_remote()` |
| `deleteRCloneRemote(input)` | name | `Boolean` | **YES** - `delete_rclone_remote()` |
### B.9 Server Power Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `shutdown` | none | `String` | **NO** |
| `reboot` | none | `String` | **NO** |
### B.10 Authentication & User Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `login(username, password)` | `String!`, `String!` | `String` | **NO** |
| `createApiKey(input)` | `CreateApiKeyInput!` | `ApiKeyWithSecret!` | **NO** |
| `addPermission(input)` | `AddPermissionInput!` | `Boolean!` | **NO** |
| `addRoleForUser(input)` | `AddRoleForUserInput!` | `Boolean!` | **NO** |
| `addRoleForApiKey(input)` | `AddRoleForApiKeyInput!` | `Boolean!` | **NO** |
| `removeRoleFromApiKey(input)` | `RemoveRoleFromApiKeyInput!` | `Boolean!` | **NO** |
| `deleteApiKeys(input)` | API key IDs | `Boolean` | **NO** |
| `updateApiKey(input)` | API key update data | `Boolean` | **NO** |
| `addUser(input)` | `addUserInput!` | `User` | **NO** |
| `deleteUser(input)` | `deleteUserInput!` | `User` | **NO** |
### B.11 Connect/Remote Access Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `connectSignIn(input)` | `ConnectSignInInput!` | `Boolean!` | **NO** |
| `connectSignOut` | none | `Boolean!` | **NO** |
| `enableDynamicRemoteAccess(input)` | `EnableDynamicRemoteAccessInput!` | `Boolean!` | **NO** |
| `setAdditionalAllowedOrigins(input)` | `AllowedOriginInput!` | `[String!]!` | **NO** |
| `setupRemoteAccess(input)` | `SetupRemoteAccessInput!` | `Boolean!` | **NO** |
### B.12 UPS Mutations
| Mutation | Parameters | Returns | Current MCP Coverage |
|----------|------------|---------|---------------------|
| `configureUps(config)` | UPS configuration | varies | **NO** |
---
## C. All GraphQL Subscriptions Available
Every subscription channel identified with update intervals and event triggers.
### C.1 PubSub Channel Definitions (from source code)
```
GRAPHQL_PUBSUB_CHANNEL {
ARRAY // Array state changes
CPU_UTILIZATION // 1-second CPU utilization data
CPU_TELEMETRY // 5-second CPU power & temperature
DASHBOARD // Dashboard aggregate updates
DISPLAY // Display settings changes
INFO // System information changes
MEMORY_UTILIZATION // 2-second memory utilization
NOTIFICATION // Notification state changes
NOTIFICATION_ADDED // New notification created
NOTIFICATION_OVERVIEW // Notification count updates
NOTIFICATION_WARNINGS_AND_ALERTS // Warning/alert changes
OWNER // Owner information changes
SERVERS // Server list changes
VMS // VM state changes
DOCKER_STATS // Container performance stats
LOG_FILE // Real-time log file updates (dynamic path)
PARITY // Parity check progress
}
```
### C.2 GraphQL Subscription Types (from schema)
| Subscription | Channel | Interval | Description | Current MCP Coverage |
|-------------|---------|----------|-------------|---------------------|
| `array` | ARRAY | Event-based | Real-time array state changes | **NO** (diag only) |
| `parityHistory` | PARITY | Event-based | Parity check progress updates | **NO** |
| `ping` | - | - | Connection keepalive | **NO** |
| `info` | INFO | Event-based | System info changes | **NO** (diag only) |
| `online` | - | Event-based | Online status changes | **NO** |
| `config` | - | Event-based | Configuration changes | **NO** |
| `display` | DISPLAY | Event-based | Display settings changes | **NO** |
| `dockerContainer(id)` | DOCKER_STATS | Polling | Single container stats (CPU%, mem, net I/O, block I/O) | **NO** |
| `dockerContainers` | DOCKER_STATS | Polling | All container state changes | **NO** |
| `dockerNetwork(id)` | - | Event-based | Single network changes | **NO** |
| `dockerNetworks` | - | Event-based | All network changes | **NO** |
| `flash` | - | Event-based | Flash drive changes | **NO** |
| `notificationAdded` | NOTIFICATION_ADDED | Event-based | New notification created | **NO** |
| `notificationsOverview` | NOTIFICATION_OVERVIEW | Event-based | Notification count updates | **NO** |
| `notificationsWarningsAndAlerts` | NOTIFICATION_WARNINGS_AND_ALERTS | Event-based | Warning/alert changes | **NO** |
| `owner` | OWNER | Event-based | Owner info changes | **NO** |
| `registration` | - | Event-based | Registration changes | **NO** |
| `server` | - | Event-based | Server status changes | **NO** |
| `service(name)` | - | Event-based | Specific service changes | **NO** |
| `share(id)` | - | Event-based | Single share changes | **NO** |
| `shares` | - | Event-based | All shares changes | **NO** |
| `unassignedDevices` | - | Event-based | Unassigned device changes | **NO** |
| `me` | - | Event-based | Current user changes | **NO** |
| `user(id)` | - | Event-based | Specific user changes | **NO** |
| `users` | - | Event-based | User list changes | **NO** |
| `vars` | - | Event-based | Server variable changes | **NO** |
| `vms` | VMS | Event-based | VM state changes | **NO** |
| `systemMetricsCpu` | CPU_UTILIZATION | 1 second | Real-time CPU utilization | **NO** |
| `systemMetricsCpuTelemetry` | CPU_TELEMETRY | 5 seconds | CPU power & temperature | **NO** |
| `systemMetricsMemory` | MEMORY_UTILIZATION | 2 seconds | Memory utilization | **NO** |
| `logFileSubscription(path)` | LOG_FILE (dynamic) | Event-based | Real-time log tailing | **NO** |
| `upsUpdates` | - | Event-based | UPS status changes | **NO** |
**Note:** The current MCP server has `test_subscription_query()` and `diagnose_subscriptions()` as diagnostic tools but does NOT expose any production subscription-based tools that stream real-time data.
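To make the gap concrete, here is a hedged sketch of what one production subscription consumer could look like, using the `gql` library over WebSockets. The `notificationAdded` subscription and its field selection come from the tables in this section; the endpoint URL, `x-api-key` init payload, and WebSocket subprotocol are assumptions to verify against the live server:

```
# Hedged sketch: stream new notifications over a GraphQL WebSocket connection.
# Endpoint, auth payload, and subprotocol details are assumptions.
import asyncio

from gql import Client, gql
from gql.transport.websockets import WebsocketsTransport

SUBSCRIPTION = gql("""
subscription {
  notificationAdded { id title subject importance timestamp }
}
""")

async def watch_notifications() -> None:
    transport = WebsocketsTransport(
        url="wss://unraid.local/graphql",            # placeholder endpoint
        init_payload={"x-api-key": "your-api-key"},  # placeholder auth
    )
    async with Client(transport=transport, fetch_schema_from_transport=False) as session:
        async for event in session.subscribe(SUBSCRIPTION):
            print(event["notificationAdded"])  # one payload per new notification

asyncio.run(watch_notifications())
```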
---
## D. All Custom Scalars and Types
### D.1 Custom Scalar Types
| Scalar | Description | Serialization | Usage |
|--------|-------------|---------------|-------|
| `PrefixedID` | Server-prefixed identifiers | String (format: `TypePrefix:uuid`) | Container IDs, VM IDs, disk IDs, share IDs |
| `Long` | 52-bit integers (exceeds GraphQL Int 32-bit limit) | String in JSON | Disk sizes, memory values, operation counters |
| `BigInt` | Large integer values | String in JSON | Same as Long (used in newer schema versions) |
| `DateTime` | ISO 8601 date-time string (RFC 3339) | String | Timestamps, uptime, creation dates |
| `JSON` | Arbitrary JSON data structures | Object | Labels, network settings, mounts, host config |
| `Port` | Valid TCP port number (0-65535) | Integer | Network port references |
| `URL` | Standard URL format | String | Web UI URLs, registry URLs, support URLs |
| `UUID` | Universally Unique Identifier | String | VM domain UUIDs |
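Two of these scalars need client-side normalization before use: `PrefixedID` arrives as `TypePrefix:uuid`, and `Long`/`BigInt` serialize as JSON strings. A small sketch (helper names are illustrative):

```
# Sketch: normalize PrefixedID and Long/BigInt scalar values client-side.
def split_prefixed_id(value: str) -> tuple[str, str]:
    """Split a PrefixedID like 'DockerContainer:abc123' into (prefix, raw_id)."""
    prefix, _, raw_id = value.partition(":")
    return prefix, raw_id

def parse_long(value) -> "int | None":
    """Long/BigInt scalars serialize as strings; coerce them to Python int."""
    return None if value is None else int(value)

assert split_prefixed_id("DockerContainer:abc123") == ("DockerContainer", "abc123")
assert parse_long("8589934592") == 8_589_934_592  # 8 GiB disk size as a Long
```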
### D.2 Core Interface Types
| Interface | Fields | Implementors |
|-----------|--------|-------------|
| `Node` | `id: ID!` | `Array`, `Info`, `Network`, `Notifications`, `Connect`, `ArrayDisk`, `DockerContainer`, `VmDomain`, `Share` |
| `UserAccount` | `id`, `name`, `description`, `roles`, `permissions` | `Me`, `User` |
### D.3 Key Object Types
| Type | Key Fields | Notes |
|------|-----------|-------|
| `Array` | `state`, `previousState`, `pendingState`, `capacity`, `boot`, `parities[]`, `disks[]`, `caches[]`, `parityCheckStatus` | Implements Node |
| `ArrayDisk` | `id`, `idx`, `name`, `device`, `size`, `fsSize`, `fsFree`, `fsUsed`, `status`, `rotational`, `temp`, `numReads`, `numWrites`, `numErrors`, `type`, `exportable`, `warning`, `critical`, `fsType`, `comment`, `format`, `transport`, `color`, `isSpinning` | Implements Node |
| `ArrayCapacity` | `kilobytes { free, used, total }`, `disks { free, used, total }` | |
| `Capacity` | `free`, `used`, `total` | All String type |
| `ParityCheck` | Parity check status/progress data | |
| `DockerContainer` | 25+ fields (see A.4) | Implements Node |
| `Docker` | `id`, `containers[]`, `networks[]` | Implements Node |
| `DockerNetwork` | `name`, `id`, `created`, `scope`, `driver`, `enableIPv6`, `ipam`, etc. | |
| `ContainerPort` | `ip`, `privatePort`, `publicPort`, `type` | |
| `ContainerHostConfig` | JSON host configuration | |
| `VmDomain` | `uuid/id`, `name`, `state` | Implements Node |
| `Vms` | `id`, `domain[]` | |
| `Info` | `time`, `baseboard`, `cpu`, `devices`, `display`, `machineId`, `memory`, `os`, `system`, `versions`, `apps` | Implements Node |
| `InfoCpu` | `manufacturer`, `brand`, `vendor`, `family`, `model`, `stepping`, `revision`, `voltage`, `speed`, `speedmin`, `speedmax`, `threads`, `cores`, `processors`, `socket`, `cache`, `flags` | |
| `InfoMemory` | `max`, `total`, `free`, `used`, `active`, `available`, `buffcache`, `swaptotal`, `swapused`, `swapfree`, `layout[]` | |
| `MemoryLayout` | `bank`, `type`, `clockSpeed`, `manufacturer` | Missing `size` field (known bug) |
| `Os` | `platform`, `distro`, `release`, `codename`, `kernel`, `arch`, `hostname`, `codepage`, `logofile`, `serial`, `build`, `uptime` | |
| `Baseboard` | `manufacturer`, `model`, `version`, `serial` | |
| `SystemInfo` | `manufacturer`, `model`, `version`, `serial`, `uuid` | |
| `Versions` | `kernel`, `docker`, `unraid`, `node` | |
| `InfoApps` | `installed`, `started` | |
| `Network` | `iface`, `ifaceName`, `ipv4`, `ipv6`, `mac`, `internal`, `operstate`, `type`, `duplex`, `mtu`, `speed`, `carrierChanges`, `id`, `accessUrls[]` | Implements Node |
| `AccessUrl` | `type`, `name`, `ipv4`, `ipv6` | |
| `Share` | `name`, `free`, `used`, `size`, `include[]`, `exclude[]`, `cache`, `nameOrig`, `comment`, `allocator`, `splitLevel`, `floor`, `cow`, `color`, `luksStatus` | |
| `Disk` (physical) | `device`, `type`, `name`, `vendor`, `size`, `bytesPerSector`, `totalCylinders`, `totalHeads`, `totalSectors`, `totalTracks`, `tracksPerCylinder`, `sectorsPerTrack`, `firmwareRevision`, `serialNum`, `interfaceType`, `smartStatus`, `temperature`, `partitions[]` | |
| `DiskPartition` | Partition details | |
| `Notification` | `id`, `title`, `subject`, `description`, `importance`, `link`, `type`, `timestamp`, `formattedTimestamp` | Implements Node |
| `NotificationOverview` | `unread { info, warning, alert, total }`, `archive { info, warning, alert, total }` | |
| `NotificationCounts` | `info`, `warning`, `alert`, `total` | |
| `Registration` | `id`, `type`, `state`, `expiration`, `updateExpiration`, `keyFile { location, contents }` | |
| `Connect` | `id`, `dynamicRemoteAccess { ... }` | Implements Node |
| `RemoteAccess` | `accessType`, `forwardType`, `port` | |
| `Cloud` | `error`, `apiKey`, `relay`, `minigraphql`, `cloud`, `allowedOrigins` | |
| `Flash` | Flash drive information | |
| `UnassignedDevice` | Unassigned device details | |
| `Service` | Service name and status | |
| `Server` | Server details (Connect-managed) | |
| `ApiKey` | `id`, `name`, `description`, `roles[]`, `createdAt`, `permissions[]` | |
| `ApiKeyWithSecret` | `id`, `key`, `name`, `description`, `roles[]`, `createdAt`, `permissions[]` | |
| `Permission` | `resource`, `actions[]` | |
| `Config` | System configuration | |
| `Display` | Display settings | |
| `Owner` | Server owner info | |
| `Me` | Current user info | Implements UserAccount |
| `User` | User account info | Implements UserAccount |
| `Vars` | Server variables (40+ fields) | Implements Node |
### D.4 Input Types
| Input Type | Used By | Fields |
|-----------|---------|--------|
| `CreateApiKeyInput` | `createApiKey` | `name!`, `description`, `roles[]`, `permissions[]`, `overwrite` |
| `AddPermissionInput` | `addPermission` | `resource!`, `actions![]` |
| `AddRoleForUserInput` | `addRoleForUser` | User + role assignment |
| `AddRoleForApiKeyInput` | `addRoleForApiKey` | API key + role assignment |
| `RemoveRoleFromApiKeyInput` | `removeRoleFromApiKey` | API key + role removal |
| `arrayDiskInput` | `addDiskToArray`, `removeDiskFromArray` | Disk assignment data |
| `ConnectSignInInput` | `connectSignIn` | Connect credentials |
| `EnableDynamicRemoteAccessInput` | `enableDynamicRemoteAccess` | Remote access config |
| `AllowedOriginInput` | `setAdditionalAllowedOrigins` | Origin URLs |
| `SetupRemoteAccessInput` | `setupRemoteAccess` | Remote access setup |
| `NotificationData` | `createNotification`, `notifyIfUnique` | title, subject, description, importance |
| `NotificationFilter` | `notifications.list` | Filter criteria |
| `addUserInput` | `addUser` | User creation data |
| `deleteUserInput` | `deleteUser` | User deletion target |
| `usersInput` | `users` | User listing filter |
---
## E. All Enums
### E.1 Array & Disk Enums
| Enum | Values |
|------|--------|
| **ArrayState** | `STARTED`, `STOPPED`, `NEW_ARRAY`, `RECON_DISK`, `DISABLE_DISK`, `SWAP_DSBL`, `INVALID_EXPANSION`, `PARITY_NOT_BIGGEST`, `TOO_MANY_MISSING_DISKS`, `NEW_DISK_TOO_SMALL`, `NO_DATA_DISKS` |
| **ArrayPendingState** | Pending state transitions (exact values not documented) |
| **ArrayDiskStatus** | `DISK_NP`, `DISK_OK`, `DISK_NP_MISSING`, `DISK_INVALID`, `DISK_WRONG`, `DISK_DSBL`, `DISK_NP_DSBL`, `DISK_DSBL_NEW`, `DISK_NEW` |
| **ArrayDiskType** | `Data`, `Parity`, `Flash`, `Cache` |
| **ArrayDiskFsColor** | `GREEN_ON`, `GREEN_BLINK`, `BLUE_ON`, `BLUE_BLINK`, `YELLOW_ON`, `YELLOW_BLINK`, `RED_ON`, `RED_OFF`, `GREY_OFF` |
| **DiskInterfaceType** | `SAS`, `SATA`, `USB`, `PCIe`, `UNKNOWN` |
| **DiskFsType** | `xfs`, `btrfs`, `vfat`, `zfs` |
| **DiskSmartStatus** | SMART health assessment values |
### E.2 Docker Enums
| Enum | Values |
|------|--------|
| **ContainerState** | `RUNNING`, `PAUSED`, `EXITED` |
| **ContainerPortType** | `TCP`, `UDP` |
| **UpdateStatus** | `UP_TO_DATE`, `UPDATE_AVAILABLE`, `REBUILD_READY`, `UNKNOWN` |
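For type-safe response parsing, these can be mirrored as Python enums; a minimal sketch using the values from the tables above:

```
# Sketch: mirror the documented Docker enums for type-safe parsing.
from enum import Enum

class ContainerState(str, Enum):
    RUNNING = "RUNNING"
    PAUSED = "PAUSED"
    EXITED = "EXITED"

class UpdateStatus(str, Enum):
    UP_TO_DATE = "UP_TO_DATE"
    UPDATE_AVAILABLE = "UPDATE_AVAILABLE"
    REBUILD_READY = "REBUILD_READY"
    UNKNOWN = "UNKNOWN"

assert ContainerState("RUNNING") is ContainerState.RUNNING
```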
### E.3 VM Enums
| Enum | Values |
|------|--------|
| **VmState** | `NOSTATE`, `RUNNING`, `IDLE`, `PAUSED`, `SHUTDOWN`, `SHUTOFF`, `CRASHED`, `PMSUSPENDED` |
### E.4 Notification Enums
| Enum | Values |
|------|--------|
| **Importance** | `ALERT`, `INFO`, `WARNING` |
| **NotificationType** | `UNREAD`, `ARCHIVE` |
### E.5 Auth & Permission Enums
| Enum | Values |
|------|--------|
| **Role** | `ADMIN`, `CONNECT`, `GUEST`, `VIEWER` |
| **AuthAction** | `CREATE_ANY`, `CREATE_OWN`, `READ_ANY`, `READ_OWN`, `UPDATE_ANY`, `UPDATE_OWN`, `DELETE_ANY`, `DELETE_OWN` |
| **Resource** (35 total; 30 shown) | `ACTIVATION_CODE`, `API_KEY`, `ARRAY`, `CLOUD`, `CONFIG`, `CONNECT`, `CONNECT__REMOTE_ACCESS`, `CUSTOMIZATIONS`, `DASHBOARD`, `DISK`, `DISPLAY`, `DOCKER`, `FLASH`, `INFO`, `LOGS`, `ME`, `NETWORK`, `NOTIFICATIONS`, `ONLINE`, `OS`, `OWNER`, `PERMISSION`, `REGISTRATION`, `SERVERS`, `SERVICES`, `SHARE`, `USER`, `VARS`, `VMS`, `WELCOME` |
### E.6 Registration Enums
| Enum | Values |
|------|--------|
| **RegistrationState** | `TRIAL`, `BASIC`, `PLUS`, `PRO`, `STARTER`, `UNLEASHED`, `LIFETIME`, `EEXPIRED`, `EGUID`, `EGUID1`, `ETRIAL`, `ENOKEYFILE`, `ENOFLASH`, `EBLACKLISTED`, `ENOCONN` |
### E.7 Configuration Enums
| Enum | Values |
|------|--------|
| **ConfigErrorState** | Configuration error state values |
| **WAN_ACCESS_TYPE** | `DYNAMIC`, `ALWAYS`, `DISABLED` |
| **WAN_FORWARD_TYPE** | WAN forwarding type values |
---
## F. API Capabilities NOT Currently in the MCP Server
The current MCP server has 10 tools (76 actions) after consolidation. The following capabilities are available in the Unraid API but NOT covered by any existing tool.
### F.1 HIGH PRIORITY - New Tool Candidates
#### Array Management (0 tools currently, 7 mutations available)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `start_array()` | `startArray` mutation | Core server management |
| `stop_array()` | `stopArray` mutation | Core server management |
| `start_parity_check(correct)` | `startParityCheck` mutation | Data integrity management |
| `pause_parity_check()` | `pauseParityCheck` mutation | Parity management |
| `resume_parity_check()` | `resumeParityCheck` mutation | Parity management |
| `cancel_parity_check()` | `cancelParityCheck` mutation | Parity management |
| `get_parity_history()` | `parityHistory` query | Historical parity check results |
#### Server Power Management (0 tools currently, 2 mutations available)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `shutdown_server()` | `shutdown` mutation | Remote server management |
| `reboot_server()` | `reboot` mutation | Remote server management |
#### Notification Management (read-only currently, 10+ mutations available)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `create_notification(input)` | `createNotification` mutation | Proactive alerting from MCP |
| `archive_notification(id)` | `archiveNotification` mutation | Notification lifecycle |
| `archive_all_notifications(importance?)` | `archiveAll` mutation | Bulk management |
| `delete_notification(id, type)` | `deleteNotification` mutation | Cleanup |
| `delete_archived_notifications()` | `deleteArchivedNotifications` mutation | Bulk cleanup |
| `unread_notification(id)` | `unreadNotification` mutation | Mark as unread |
| `get_warnings_and_alerts()` | `notifications.warningsAndAlerts` query | Focused severity view |
#### Docker Extended Operations (3 tools currently, 10+ mutations available)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `pause_docker_container(id)` | `docker.pause` mutation | Container lifecycle |
| `unpause_docker_container(id)` | `docker.unpause` mutation | Container lifecycle |
| `remove_docker_container(id, with_image?)` | `docker.removeContainer` mutation | Container cleanup |
| `update_docker_container(id)` | `docker.updateContainer` mutation | Keep containers current |
| `update_all_docker_containers()` | `docker.updateAllContainers` mutation | Bulk updates |
| `check_docker_updates()` | `containerUpdateStatuses` query | Pre-update assessment |
| `get_docker_container_logs(id, since?, tail?)` | `docker.logs` query | Debugging/monitoring (see sketch below) |
| `list_docker_networks(all?)` | `dockerNetworks` query | Network inspection |
| `get_docker_network(id)` | `dockerNetwork` query | Network details |
| `check_docker_port_conflicts()` | `docker.portConflicts` query | Conflict detection |
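Of these, `get_docker_container_logs` is the most self-contained to prototype. A hedged sketch, assuming the nested `docker { logs(...) }` field shape from section A.4, an `Int` type for `tail`, and placeholder endpoint/auth values (argument names and the return shape should be confirmed against the live schema):

```
# Hedged sketch: retrieve container logs via the docker.logs query.
# Endpoint, header name, and response shape are assumptions to verify.
import httpx

CONTAINER_LOGS = """
query ContainerLogs($id: PrefixedID!, $tail: Int) {
  docker {
    logs(id: $id, tail: $tail)
  }
}
"""

def get_docker_container_logs(container_id: str, tail: int = 100) -> str:
    resp = httpx.post(
        "https://unraid.local/graphql",         # placeholder endpoint
        json={"query": CONTAINER_LOGS,
              "variables": {"id": container_id, "tail": tail}},
        headers={"x-api-key": "your-api-key"},  # placeholder auth
        timeout=30.0,
    )
    resp.raise_for_status()
    return resp.json()["data"]["docker"]["logs"]
```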
#### Disk Operations (2 tools currently, 3 mutations available)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `mount_array_disk(id)` | `mountArrayDisk` mutation | Disk management |
| `unmount_array_disk(id)` | `unmountArrayDisk` mutation | Disk management |
| `clear_disk_statistics(id)` | `clearArrayDiskStatistics` mutation | Statistics reset |
| `add_disk_to_array(input)` | `addDiskToArray` mutation | Array expansion |
| `remove_disk_from_array(input)` | `removeDiskFromArray` mutation | Array modification |
### F.2 MEDIUM PRIORITY - New Tool Candidates
#### UPS Monitoring (0 tools currently, 3 queries + 1 mutation + 1 subscription)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `list_ups_devices()` | `upsDevices` query | UPS monitoring |
| `get_ups_device(id)` | `upsDeviceById` query | UPS details |
| `get_ups_configuration()` | `upsConfiguration` query | UPS config |
| `configure_ups(config)` | `configureUps` mutation | UPS management |
#### System Metrics (0 tools currently, 1 query + 3 subscriptions)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_system_metrics()` | `metrics` query | Performance monitoring |
| `get_cpu_utilization()` | `systemMetricsCpu` subscription (polled) | Real-time CPU |
| `get_memory_utilization()` | `systemMetricsMemory` subscription (polled) | Real-time memory |
| `get_cpu_telemetry()` | `systemMetricsCpuTelemetry` subscription (polled) | CPU temp/power |
#### Unassigned Devices (0 tools currently, 1 query + 1 subscription)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `list_unassigned_devices()` | `unassignedDevices` query | Device management |
#### Flash Drive (0 tools currently, 1 query + 1 subscription)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_flash_info()` | `flash` query | Flash drive status |
#### User Management (0 tools currently, 3 queries + 2 mutations)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_current_user()` | `me` query | Identity context |
| `list_users()` | `users` query | User management |
| `get_user(id)` | `user(id)` query | User details |
| `add_user(input)` | `addUser` mutation | User creation |
| `delete_user(input)` | `deleteUser` mutation | User removal |
#### Services (0 tools currently, 1 query + 1 subscription)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `list_services()` | `services` query | Service monitoring |
#### Settings (0 tools currently, 1 query)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_settings()` | `settings` query | Configuration inspection |
### F.3 LOW PRIORITY - New Tool Candidates
#### API Key Management (0 tools currently, 2 queries + 5 mutations)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `list_api_keys()` | `apiKeys` query | Key inventory |
| `get_api_key(id)` | `apiKey(id)` query | Key details |
| `create_api_key(input)` | `createApiKey` mutation | Key provisioning |
| `delete_api_keys(input)` | `deleteApiKeys` mutation | Key cleanup |
| `update_api_key(input)` | `updateApiKey` mutation | Key modification |
#### Remote Access Management (0 tools currently, 1 query + 3 mutations)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_remote_access()` | `remoteAccess` query | Remote access status |
| `setup_remote_access(input)` | `setupRemoteAccess` mutation | Remote access config |
| `enable_dynamic_remote_access(input)` | `enableDynamicRemoteAccess` mutation | Toggle remote access |
| `set_allowed_origins(input)` | `setAdditionalAllowedOrigins` mutation | CORS config |
#### Cloud/Connect Management (0 tools currently, 1 query + 2 mutations)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_cloud_status()` | `cloud` query | Cloud connectivity |
| `connect_sign_in(input)` | `connectSignIn` mutation | Connect auth |
| `connect_sign_out()` | `connectSignOut` mutation | Connect deauth |
#### Server Management (0 tools currently, 2 queries)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_server_info()` | `server` query | Server details |
| `list_servers()` | `servers` query | Multi-server view |
| `get_online_status()` | `online` query | Connectivity check |
| `get_owner_info()` | `owner` query | Server owner |
#### Display & Config (0 tools currently, 2 queries)
| Proposed Tool | API Operation | Why Important |
|--------------|---------------|---------------|
| `get_display_settings()` | `display` query | Display config |
| `get_config()` | `config` query | System config |
### F.4 Summary: Coverage Statistics
| Category | Available in API | Covered by MCP (actions) | Gap |
|----------|-----------------|--------------------------|-----|
| **Queries** | ~30+ | 14 | ~16+ uncovered |
| **Mutations** | ~50+ | 10 (start/stop Docker+VM, RClone CRUD) | ~40+ uncovered |
| **Subscriptions** | ~30+ | 0 (2 diagnostic only) | ~30+ uncovered |
| **Total** | ~110+ | ~24 unique API operations (76 actions across 10 tools) | ~86+ uncovered |
**Current coverage: approximately 22% of available API operations** (24 of ~110 unique GraphQL queries/mutations/subscriptions). Note: the MCP server exposes 76 actions, but many actions map to the same underlying API operation with different parameters.
---
## G. Community Project Capabilities
### G.1 unraid-management-agent (Go Plugin by Ruaan Deysel)
Capabilities this project offers that we do NOT:
| Capability | Details | Our Status |
|-----------|---------|------------|
| **SMART Disk Data** | Detailed SMART attributes, health monitoring | NOT available via GraphQL API (Issue #1839) |
| **Container Logs** | Docker container log retrieval | Available via `docker.logs` query (we don't use it) |
| **GPU Metrics** | GPU utilization, temperature, VRAM | NOT available via GraphQL API |
| **Process Monitoring** | Active process list, resource usage | NOT available via GraphQL API |
| **CPU Load Averages** | Real-time 1/5/15 min load averages | Available via `metrics` query (we don't use it) |
| **Prometheus Metrics** | 41 exportable metrics at `/metrics` | NOT applicable to MCP |
| **MQTT Publishing** | IoT event streaming | NOT applicable to MCP |
| **Home Assistant Auto-Discovery** | MQTT auto-discovery | NOT applicable to MCP |
| **Disk Temperature History** | Historical temp tracking | Limited via API |
| **UPS Data** | UPS status monitoring | Available via API (we don't use it) |
| **Plugin Information** | List installed plugins | NOT available via GraphQL API |
| **Update Status** | Check for OS/plugin updates | NOT available via GraphQL API |
| **Mover Control** | Invoke the mover tool | NOT available via GraphQL API (Issue #1873) |
| **Disk Thresholds** | Warning/critical temp settings | Partially available via `ArrayDisk.warning`/`critical` |
| **54 MCP Tools** | Full MCP tool suite | We have 10 tools (76 actions) |
| **WebSocket Events** | Real-time event stream | We have diagnostic-only subscriptions |
### G.2 PSUnraid (PowerShell Module)
| Capability | Details | Our Status |
|-----------|---------|------------|
| **Server Status** | Comprehensive server overview | We have `get_system_info()` |
| **Array Status** | Array state and disk health | We have `get_array_status()` |
| **Docker Start/Stop/Restart** | Container lifecycle | We have start/stop only (no restart, no pause) |
| **VM Start/Stop** | VM lifecycle | We have full VM lifecycle |
| **Notification Retrieval** | Read notifications | We have `list_notifications()` |
| **Restart Containers** | Dedicated restart action | We do NOT have restart (would be stop+start) |
### G.3 unraid-ssh-mcp
This project chose SSH over the GraphQL API because of these gaps:
| Missing from GraphQL API | Impact on Our Project |
|--------------------------|----------------------|
| Container logs | Now available in API (`docker.logs`); we should add it (see sketch below) |
| Detailed SMART data | Still missing from API (Issue #1839) |
| Real-time CPU load | Now available via `metrics` query -- we should add it |
| Process monitoring | Still missing from API |
| `/proc` and `/sys` access | Not applicable via API |
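Since `docker.logs` is reportedly available now, here is a hedged sketch of the log-retrieval helper we should add. The `run_query` callable, argument names, and types are assumptions to be verified against the schema:

```python
# Assumed query shape for container log retrieval; the argument names
# and the PrefixedID type usage are illustrative, not confirmed.
LOGS_QUERY = """
query ContainerLogs($id: PrefixedID!, $tail: Int) {
  docker {
    logs(id: $id, tail: $tail)
  }
}
"""

def fetch_container_logs(run_query, container_id: str, tail: int = 100) -> str:
    """Return the last `tail` lines of one container's logs."""
    data = run_query(LOGS_QUERY, variables={"id": container_id, "tail": tail})
    return data["docker"]["logs"]
```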
### G.4 Home Assistant Integrations
#### domalab/ha-unraid
| Capability | Our Status |
|-----------|------------|
| CPU usage, temperature, power consumption | NO - missing metrics tools |
| Memory utilization tracking | NO - missing metrics tools |
| Per-disk and per-share metrics | PARTIAL - have basic disk/share info |
| Docker container start/stop switches | YES |
| VM management controls | YES |
| UPS monitoring with energy dashboard | NO |
| Notification counts | YES |
| Dynamic entity creation | N/A |
#### chris-mc1/unraid_api
| Capability | Our Status |
|-----------|------------|
| Array status, storage utilization | YES |
| RAM and CPU usage | NO - missing metrics |
| Per-share free space | YES |
| Per-disk: temperature, spin state, capacity | PARTIAL |
---
## H. Known API Bugs and Limitations
### H.1 Active Bugs (from GitHub Issues)
| Issue | Title | Impact on MCP Implementation |
|-------|-------|------------------------------|
| **#1837** | GraphQL partial failures | **CRITICAL**: Entire queries fail when VMs/Docker unavailable. Must implement partial failure handling with separate try/catch per section (see sketch below). |
| **#1842** | Temperature inconsistency | SSD temps unavailable in `disks` query but accessible via `array` query. Use Array endpoint for temperature data. |
| **#1840** | Docker cache invalidation | Docker container data may be stale after external changes (docker CLI). Use `skipCache: true` parameter when available. |
| **#1825** | UPS false data | API returns hardcoded/phantom values when NO UPS is connected. Must validate UPS data before presenting to user. |
| **#1861** | VM PMSUSPENDED issues | Cannot unsuspend VMs in `PMSUSPENDED` state. Must handle this state explicitly and warn users. |
| **#1859** | Notification counting errors | Archive counts may include duplicates. Use `recalculateOverview` mutation to fix. |
| **#1818** | Network query failures | GraphQL may return empty lists for network data. Handle gracefully. |
| **#1871** | Container restart/update mutation | Single restart+update operation not yet in API. Must implement as separate stop+start. |
| **#1873** | Mover not invocable via API | No GraphQL mutation to trigger the mover. Cannot implement mover tools. |
| **#1839** | SMART disk data missing | Detailed SMART attributes not yet exposed via GraphQL. Major gap for disk health tools. |
| **#1872** | CLI list missing creation dates | Timestamp data unavailable in some CLI operations. |
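Issue #1837 directly shapes tool design: rather than one monolithic query, each subsystem should be fetched independently so a broken section (for example, VMs disabled) degrades gracefully instead of failing the whole response. A minimal sketch of that pattern, assuming a `run_query` helper and illustrative per-section query strings:

```python
# One try/except per subsystem so a single failing resolver cannot
# sink the entire overview response (#1837).
def get_overview(run_query) -> dict:
    sections = {
        "array": "query { array { state } }",
        "docker": "query { docker { containers { names state } } }",
        "vms": "query { vms { domain { name state } } }",
    }
    overview: dict = {}
    for name, query in sections.items():
        try:
            overview[name] = run_query(query)
        except Exception as exc:  # transport error or resolver failure
            overview[name] = {"error": str(exc)}  # degrade this section only
    return overview
```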
### H.2 Schema/Type Issues
| Issue | Description | Workaround |
|-------|-------------|------------|
| **Int Overflow** | Memory size fields and disk operation counters can overflow 32-bit Int. API uses `Long`/`BigInt` scalars but some fields remain problematic. | Parse values as strings, convert to Python `int` (see sketch below) |
| **NaN Values** | Fields `sysArraySlots`, `sysCacheSlots`, `cacheNumDevices`, `cacheSbNumDisks` in `vars` query can return NaN. | Query only curated subset of `vars` fields (current approach) |
| **Non-nullable Null** | `info.devices` section has non-nullable fields that return null in practice. | Avoid querying `info.devices` entirely (current approach) |
| **Memory Layout Size** | Individual memory stick `size` values not returned by API. | Cannot calculate total memory from layout data |
| **PrefixedID Format** | IDs follow `TypePrefix:uuid` format. Clients must handle as opaque strings. | Already handled in current implementation |
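The Int-overflow and NaN rows above translate into a small defensive parsing layer. A sketch of one way to normalize numeric fields:

```python
import math

def parse_api_number(value) -> int | None:
    """Normalize numeric API fields that may arrive as strings (Long/BigInt
    scalars), 32-bit-overflow-prone ints, floats, or NaN."""
    if value is None:
        return None
    if isinstance(value, float) and math.isnan(value):
        return None  # e.g. NaN from vars fields such as sysArraySlots
    try:
        return int(value)  # str and int both work; Python ints are unbounded
    except (TypeError, ValueError):
        return None  # malformed value; treat as missing
```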
### H.3 Infrastructure Limitations
| Limitation | Description | Impact |
|-----------|-------------|--------|
| **Rate Limiting** | 100 requests per 10 seconds (`@nestjs/throttler`). | Must implement request queuing/backoff for bulk operations (see sketch below) |
| **EventEmitter Limit** | Max 30 concurrent subscription listeners. | Limit simultaneous subscription tools |
| **Disk Operation Timeouts** | Disk queries require 90s+ read timeouts. | Already handled with custom timeout config |
| **Docker Size Queries** | `sizeRootFs` query is expensive. | Make it optional in list queries, only include in detail queries |
| **Storage Polling Interval** | SMART query overhead means storage data should poll at 5min minimum. | Rate-limit storage-related subscriptions |
| **Cache TTL** | cache-manager v7 expects TTL in milliseconds (not seconds). | Correct TTL units in any caching implementation |
| **Schema Volatility** | API schema is still evolving between versions. | Consider version-checking at startup, graceful degradation |
| **Nchan Memory** | WebSocket subscriptions can cause Nginx memory exhaustion (mitigated in 7.1.0+ but still possible). | Limit concurrent subscriptions, implement reconnection logic |
| **SSL/TLS** | Self-signed certificates require special handling for local IP access. | Already handled via `UNRAID_VERIFY_SSL` env var |
| **Version Dependency** | Full API requires Unraid 7.2+. Pre-7.2 needs Connect plugin. | Document minimum version requirements per tool |
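Given the 100-requests-per-10-seconds throttle, bulk operations need client-side pacing before each call. A minimal sliding-window sketch (limits taken from the table above):

```python
import asyncio
import time

class RateLimiter:
    """Client-side pacing for the API's ~100 requests per 10 s throttle."""

    def __init__(self, max_requests: int = 100, window_s: float = 10.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self._sent: list[float] = []
        self._lock = asyncio.Lock()

    async def acquire(self) -> None:
        async with self._lock:
            now = time.monotonic()
            # Keep only timestamps still inside the sliding window.
            self._sent = [t for t in self._sent if now - t < self.window_s]
            if len(self._sent) >= self.max_requests:
                # Wait until the oldest request falls out of the window.
                await asyncio.sleep(self._sent[0] + self.window_s - now)
            self._sent.append(time.monotonic())
```

Each GraphQL call would then be preceded by `await limiter.acquire()`; a 429 response should still trigger backoff and retry on top of this pacing.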
### H.4 Features Requested but NOT Yet in API
| Feature | GitHub Issue | Status |
|---------|-------------|--------|
| Mover invocation | #1873 | Open feature request |
| SMART disk data | #1839 | Open feature request (was bounty candidate) |
| System temperature monitoring (CPU, GPU, motherboard, NVMe, chipset) | #1597 | Open bounty (not implemented) |
| Container restart+update single mutation | #1871 | Open feature request |
| Docker Compose native support | Roadmap TBD | Under consideration |
| Plugin information/management via API | Not filed | Not exposed |
| File browser/upload/download | Not filed | Legacy PHP WebGUI only |
| Process list monitoring | Not filed | Not exposed |
| GPU metrics | Not filed | Not exposed |
---
## Appendix: Proposed New Tool Count by Priority
| Priority | Category | New Tools | Total After |
|----------|----------|-----------|-------------|
| **HIGH** | Array Management | 7 | |
| **HIGH** | Server Power | 2 | |
| **HIGH** | Notification Mutations | 7 | |
| **HIGH** | Docker Extended | 10 | |
| **HIGH** | Disk Operations | 5 | |
| | **High Priority Subtotal** | **31** | **57** |
| **MEDIUM** | UPS Monitoring | 4 | |
| **MEDIUM** | System Metrics | 4 | |
| **MEDIUM** | Unassigned Devices | 1 | |
| **MEDIUM** | Flash Drive | 1 | |
| **MEDIUM** | User Management | 5 | |
| **MEDIUM** | Services | 1 | |
| **MEDIUM** | Settings | 1 | |
| | **Medium Priority Subtotal** | **17** | **74** |
| **LOW** | API Key Management | 5 | |
| **LOW** | Remote Access | 4 | |
| **LOW** | Cloud/Connect | 3 | |
| **LOW** | Server Management | 4 | |
| **LOW** | Display & Config | 2 | |
| | **Low Priority Subtotal** | **18** | **92** |
| | **GRAND TOTAL NEW TOOLS** | **66** | **92** |
**Current tools: 10 (76 actions) | Potential total: ~110+ operations | Remaining gap: ~20+ uncovered operations**
---
## Appendix: Data Sources Cross-Reference
| Document | Lines | Key Contributions |
|----------|-------|-------------------|
| `unraid-api-research.md` | 819 | API overview, auth flow, query/mutation examples, version history, recommendations |
| `unraid-api-source-analysis.md` | 998 | Complete resolver listing, PubSub channels, mutation details, open issues, community projects |
| `unraid-api-exa-research.md` | 569 | DeepWiki architecture, rate limits, OIDC providers, Python client library, MCP listings |
| `unraid-api-crawl.md` | 1451 | Complete GraphQL schema (Query/Mutation/Subscription types), CLI reference, all enums/scalars |
| `raw/release-7.0.0.md` | 958 | ZFS support, VM snapshots/clones, File Manager, Tailscale, notification agents |
| `raw/release-7.2.0.md` | 348 | API built into OS, responsive WebGUI, RAIDZ expansion, SSO, Ext2/3/4/NTFS/exFAT support |
| `raw/blog-api-bounty.md` | 139 | Feature Bounty Program, community projects showcase |

View File

@@ -1,176 +0,0 @@
29 October 2025
Unraid OS 7.2.0 Stable is Now Available
=======================================
Unraid 7.2.0 delivers a **fully responsive web interface, expanded filesystem support, a built-in, open-source API**, **ZFS RAIDZ Expansion,** and much more! 
![7 2 Stable](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/7.2-Stable.png?width=788&quality=80&fit=crop&s=QslBGEL9Xr_DcPGOHkUC19ATiyuq9Bf_sOXDDtZf5iE)
**Your Server: More Responsive, Secure, and More Flexible than ever.**
Building on months of testing and feedback, this release brings major quality-of-life improvements for new and seasoned users alike. Whether you're upgrading your homelab or deploying at scale, this release brings more control, compatibility, and confidence to every system. 
We want to give a huge thanks to the _over 5,000 beta testers_ that helped bring this release to Stable.
**Fully Responsive WebGUI**
---------------------------
Unraid now adapts seamlessly to any screen size. The redesigned WebGUI ensures smooth operation across desktops, tablets, and mobile devices making it easier than ever to manage your server from anywhere, with any device.
### See the Responsive WebGUI in action
**Expand Your RAIDZ Pools and Bring Every Drive With You**
----------------------------------------------------------
### **ZFS RAIDZ Expansion**
You can now expand your single-vdev RAIDZ1/2/3 pools, one drive at a time!
1. Stop the array
2. On _**Main → Pool Devices,**_ add a slot to the pool
3. Select the appropriate drive. _Note: must be at least as large as the smallest drive in the pool._
4. Start the array
### See How RAIDZ Expansion Works
### **External Drive Support: ext2/3/4, NTFS, exFAT**
Alongside XFS, BTRFS, and the ZFS file systems, Unraid now supports ext2 / ext3 / ext4, NTFS, and exFAT out of the box, making it easier to import data from external sources or legacy systems. 
This means you can _create an array or single device pool with existing drives formatted in Ext2/3/4 or NTFS, and you can format drives in Ext4 or NTFS._  
### Learn How Unraid Handles ext, NTFS, and exFAT Out of the Box
Cyber Weekend is Coming
-----------------------
Don't miss our biggest sale of the year, November 28-December 1st. Subscribe to the [Unraid Digest](https://newsletter.unraid.net/) and be the first to know all of the details!
[Subscribe](https://newsletter.unraid.net/)
**Unraid API**
--------------
The [**Unraid API**](https://docs.unraid.net/API/) is now integrated directly into Unraid OS, giving developers and power users new ways to interact with their systems.
The new **Notifications panel** is the first major feature built on this foundation, and over time, more of the webGUI will transition to use the API for faster, more dynamic updates.
The API is fully [**open source**](https://github.com/unraid/api), providing direct access to system data and functionality for building automations, dashboards, and third-party integrations. It also supports [**external authentication (OIDC)**](https://docs.unraid.net/API/oidc-provider-setup/) for secure, scalable access.
### See the Unraid API in Action!
Learn More about the Unraid API
-------------------------------
* #### [Follow along the Unraid API Roadmap](https://docs.unraid.net/API/upcoming-features/)
* #### [See current apps using the Unraid API](https://discord.com/channels/216281096667529216/1375651142704566282)
**Additional Improvements and Fixes**
-------------------------------------
### **Storage & Array**
* Two-device ZFS pools default to mirrors; use RAIDZ1 for future vdev expansion
* New _File System Status_ shows if drives are mounted and/or empty
* Exclusive shares now exportable via NFS
* Restricted special share names (homes, global, printers)
* Improved SMB config (smb3 directory leases = no) and security settings UX
* Better handling for parity disks with 1MiB partitions
* BTRFS mounts more reliably with multiple FS signatures
* New drives now repartitioned when added to parity-protected arrays
* Devices in SMART test won't spin down
* Cleaner handling of case-insensitive share names and invalid characters
* ZFS vdevs now display correctly in allocation profiles
### **VM Manager** 
* Console access now works even when user shares are disabled
* Single quotes are no longer allowed in the Domains storage path
* Windows 11 defaults have been updated for better compatibility
* Cdrom Bus now defaults to IDE for i440fx and SATA for Q35 machines
* Vdisk locations now display properly in non-English languages
* You'll now see a warning when adding a second vdisk if you forget to assign a capacity
### **WebGUI**
* Network and RAM stats now shown in human-readable units
* Font size and layout fixes
* Better error protection for PHP-based failures
### Miscellaneous **Improvements**
* Better logging during plugin installs
* Added safeguards to protect WebGUI from fatal PHP errors
* Diagnostics ZIPs are now further anonymized
* Resolved crash related to Docker container CPU pinning
* Fixed Docker NAT issue caused by missing br\_netfilter
* Scheduled mover runs are now properly logged
### **Kernel & Packages**
* Linux Kernel 6.12.54-Unraid
* Samba 4.23.2
* Updated versions of openssl, mesa, kernel-firmware, git, exfatprogs, and more
**Plugin Compatibility Notice**
-------------------------------
To maintain stability with the new responsive WebGUI, the following plugins will be removed during upgrade if present:
* **Theme Engine**
* **Dark Theme**
* **Dynamix Date Time**
* **Flash Remount**
* **Outdated versions of Unraid Connect**
Please update all other plugins—**especially Unraid Connect and Nvidia Driver**—before upgrading!
Unraid 7.2.0
------------
Important Release Links
* #### [Docs](https://docs.unraid.net/unraid-os/release-notes/7.2.0/)
Version 7.2.0 Full Release Notes
* #### [Forum Thread](https://forums.unraid.net/topic/194610-unraid-os-version-720-available/)
Unraid 7.2.0 Forum Thread
* #### [Known Issues](https://docs.unraid.net/unraid-os/release-notes/7.2.0/#known-issues)
See the Known Issues for the Unraid 7.2 series
* #### [Learn More](https://docs.unraid.net/unraid-os/system-administration/maintain-and-update/upgrading-unraid/#standard-upgrade-process)
Ready to Upgrade? Visit your server's Tools → Update OS page to install Unraid 7.2.0.

View File

@@ -1,139 +0,0 @@
5 September 2025
Introducing the Unraid API Feature Bounty Program
=================================================
Were opening new doors for developers and power users to directly shape the Unraid experience, together.
The new [Unraid API](https://docs.unraid.net/API/) has already come a long way as a powerful, open-source toolkit that unlocks endless possibilities for automation, integrations, and third-party applications. With each release, we've seen the creativity of our community take center stage, building tools that extend the Unraid experience in ways we never imagined.
Now, we're taking it one step further with the [**Unraid API Feature Bounty Program**](https://unraid.net/feature-bounty).
### **What Is the Feature Bounty Program?**
The bounty program gives developers (and adventurous users) a way to directly contribute to the Unraid API roadmap. Here's how it works:
1. **Feature Requests Become Bounties:** We post specific API features that would benefit the entire Unraid ecosystem.
2. **You Build & Contribute:** Developers who implement these features can claim the bounty, earn recognition, and a monetary reward.
3. **Community Driven Growth:** Instead of waiting for features to arrive, you can help build them, get rewarded, and help the Unraid community.
Our core team focuses on high-priority roadmap items. Bounties give the community a way to help accelerate other highly requested features by bringing more ideas to life, faster, with recognition and reward for those who contribute.
API Feature Bounty Program Details
----------------------------------
You can turn feature requests into reality, get rewarded for your contributions, and help grow the open-source Unraid API ecosystem.
[Learn More](https://unraid.net/feature-bounty)
### **The Open-Source Unraid API**
Alongside the bounty program, we're thrilled to highlight just how open and flexible the Unraid API has become. Whether you're scripting via the CLI, building automations with the API, or integrating with external identity providers through OAuth2/OIDC, the API is designed to be transparent and extensible.
API Docs
--------
Learn about how to get started with the Unraid API.
[Start Here](https://docs.unraid.net/API/)
OIDC Provider Setup
-------------------
Configure OIDC providers for SSO authentication in the Unraid API using the web interface.
[OIDC](https://docs.unraid.net/API/oidc-provider-setup/)
Upcoming API Features
---------------------
The roadmap outlines completed and planned features for the Unraid API. Features and timelines may change based on development priorities and community feedback.
[Learn More](https://docs.unraid.net/API/upcoming-features/)
Community API Projects in Action
--------------------------------
The power of an open API is best shown by what you build with it. Here are just a few highlights from the community so far!
![Screenshot 2025 09 05 at 9 24 36 AM](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/Screenshot-2025-09-05-at-9.24.36-AM.png?width=678&quality=80&fit=crop&s=22BQj1EsG2qcoT6xJtcrm4Lo7I-Pa4OfArEG84jLAGc)
### [Unraid Mobile App](https://forums.unraid.net/topic/189522-unraid-mobile-app/)
by S3ppo
![Screenshot 2025 09 05 at 9 26 52 AM](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/Screenshot-2025-09-05-at-9.26.52-AM.png?width=678&quality=80&fit=crop&s=TAELCKbETxuccKu0Wu2kw-glpxkal9nYpdXAm8kQd1w)
### [Homepage Dashboard Widget](https://discord.com/channels/216281096667529216/1379497640110063656)
by surf108
![Image 66](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/image-66.png?width=678&quality=80&fit=crop&s=OCJFFLVo0PIP0moDyYYrgCBnXpOCTNXC_Q39MnvOCW0)
### [Home Assistant Integration](https://github.com/domalab/ha-unraid-connect)
by domalab
![Screenshot 2025 09 05 at 9 29 14 AM](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/Screenshot-2025-09-05-at-9.29.14-AM.png?width=678&quality=80&fit=crop&s=3PB7G7nDkVxu25QNYqdIFgcMYNv3CeoOgVZ-JGI0dJw)
[Unloggarr (AI-powered log analysis)](https://github.com/jmagar/unloggarr)
---------------------------------------------------------------------------
by jmagar
![Screenshot 2025 09 04 at 2 43 41 PM](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/Screenshot-2025-09-04-at-2.43.41-PM.png?width=678&quality=80&fit=crop&s=XYxguwTLXEpMn27QXtJ70HY_SGsoqE8LGqGZ2K3Opx0)
[nzb360 Mobile App (Android)](https://play.google.com/store/apps/details?id=com.kevinforeman.nzb360&hl=en_US)
--------------------------------------------------------------------------------------------------------------
by nzb360dev
![Screenshot 2025 09 05 at 9 31 43 AM](https://cdn.craft.cloud/481d40bf-939a-4dc1-918d-b4d4b48b7c04/assets/uploads/Screenshot-2025-09-05-at-9.31.43-AM.png?width=678&quality=80&fit=crop&s=XMZFTbG_-tY85Zo_HQSyYS1kZdXFVVfEa6Ukd-1hqe8)
[API Show and Tell](https://discord.com/channels/216281096667529216/1375651142704566282)
-----------------------------------------------------------------------------------------
Show off your project or see them all in action on our Discord channel!
Get Involved
------------
Whether you're a developer looking to contribute, or a user eager to see your most-wanted features come to life, the new Unraid API Feature Bounty Program is your chance to help shape the future of Unraid. The Unraid API is open and the bounties are live!
* #### [Feature Bounty Program](https://unraid.net/feature-bounty)
Learn More about the Feature Bounty Program
* #### [Claim Bounties](https://github.com/orgs/unraid/projects/3/views/1)
Browse the live bounty board
* #### [API Info](https://docs.unraid.net/API/)
Read the API Docs

View File

@@ -1,259 +0,0 @@
**Unraid Connect** is a cloud-enabled companion designed to enhance your Unraid OS server experience. It makes server management, monitoring, and maintenance easier than ever, bringing cloud convenience directly to your homelab or business setup.
Unraid Connect works seamlessly with Unraid OS, boosting your server experience without altering its core functions. You can think of Unraid Connect as your remote command center. It expands the capabilities of your Unraid server by providing secure, web-based access and advanced features, no matter where you are.
With Unraid Connect, you can:
* Remotely access and manage your Unraid server from any device, anywhere in the world.
* Monitor real-time server health and resource usage, including storage, network, and Docker container status.
* Perform and schedule secure online flash backups to protect your configuration and licensing information.
* Receive notifications about server health, storage status, and critical events.
* Use dynamic remote access and server deep linking to navigate to specific management pages or troubleshoot issues quickly.
* Manage multiple servers from a single dashboard, making it perfect for users with more than one Unraid system.
Unraid Connect is more than just an add-on; it's an essential extension of the Unraid platform, designed to maximize the value, security, and convenience of your Unraid OS investment.
[**Click here to dive in to Unraid Connect!**](https://connect.myunraid.net/)
Data collection and privacy
---------------------------
Unraid Connect prioritizes your privacy and transparency. Here's what you need to know about how we handle your data:
### What data is collected and why
When your server connects to Unraid.net, it establishes a secure connection to our infrastructure and transmits only the necessary data required for a seamless experience in the Unraid Connect Dashboard. This includes:
* Server hostname, description, and icon
* Keyfile details and flash GUID
* Local access URL and LAN IP (only if a certificate is installed)
* Remote access URL and WAN IP (if remote access is turned on)
* Installed Unraid version and uptime
* Unraid Connect plugin version and unraid-api version/uptime
* Array size and usage (only numbers, no file specifics)
* Number of Docker containers and VMs installed and running
We use this data solely to enable Unraid Connect features, such as remote monitoring, management, and notifications. It is not used for advertising or profiling.
### Data retention policy
* We only keep the most recent update from your server; no past data is stored.
* Data is retained as long as your server is registered and using Unraid Connect.
* To delete your data, simply uninstall the plugin and remove any SSL certificates issued through Let's Encrypt.
### Data sharing
* Your data is **not shared with third parties** unless it is necessary for Unraid Connect services, such as certificate provisioning through Let's Encrypt.
* We do not collect or share any user content, file details, or personal information beyond what is specified above.
For more details, check out our [Policies](https://unraid.net/policies) page.
Installation
------------
Unraid Connect is available as a plugin that requires Unraid OS 6.10 or later. Before you start, make sure your server is connected to the internet and you have the [Community Applications](https://docs.unraid.net/unraid-os/using-unraid-to/run-docker-containers/community-applications/) plugin installed.
To install Unraid Connect:
1. Navigate to the **Apps** tab in the Unraid WebGUI.
2. Search for **Unraid Connect** and proceed to install the plugin. Wait for the installation to fully complete before closing the dialog.
3. In the top right corner of your Unraid WebGUI, click on the Unraid logo and select **Sign In**.
4. Sign in with your Unraid.net credentials or create a new account if necessary.
5. Follow the on-screen instructions to register your server with Unraid Connect.
6. After registration, you can access the [Unraid Connect Dashboard](https://connect.myunraid.net/) for centralized management.
**Note:** Unraid Connect requires a myunraid.net certificate for secure remote management and access. To provision a certificate, go to _**Settings → Management Access**_ in the WebGUI and click **Provision** under the Certificate section.
Dashboard
-------------------------------------------------------------------------------------------------------------
The **Unraid Connect Dashboard** offers a centralized, cloud-based view of all your registered Unraid servers, with features like:
* **My Servers:** All linked servers appear in a sidebar and as interactive tiles for easy selection.
* **Status (at a glance):** Quickly see which servers are online or offline, along with their Unraid OS version, license type, and recent activity.
* **Health and alerts:** Visual indicators show server health, notifications, and update status.
When you click **Details** on a server, you will see:
* **Online/Offline:** Real-time connectivity status.
* **License type:** Starter, Unleashed, or Lifetime.
* **Uptime:** Duration the server has been running.
* **Unraid OS version:** Current version and update availability.
* **Storage:** Total and free space on all arrays and pools.
* **Health metrics:** CPU usage, memory usage, and temperature (if supported).
* **Notifications:** Hardware/software alerts, warnings, and errors.
* **Flash backup:** Status and date of the last successful backup.
* * *
Managing your server remotely
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**Tip:** To use all management features, provision a myunraid.net certificate under _**Settings → Management Access**_ on your server.
With a valid **myunraid.net** certificate, Unraid Connect enables secure, remote server management directly from the Connect web interface.
Remote management features include:
* **Remote WebGUI access:** Access the WebGUI from anywhere.
* **Array controls:** Start or stop arrays and manage storage pools.
* **Docker and VM management:** View, start, stop, and monitor containers and VMs.
* **Parity & Scrub:** Launch parity check or ZFS/BTRFS scrub jobs.
* **Flash backup:** Trigger and monitor flash device backups to the cloud.
* **Diagnostics:** Download a diagnostics zip for support.
* **Notifications:** Review and acknowledge system alerts.
* **Server controls:** Reboot or shut down your server remotely.
* **User management:** Manage Unraid.net account access and registration.
You can manage multiple servers from any device - phone, tablet, or computer - with a single browser window.
* * *
Deep linking
----------------------------------------------------------------------------------------------------------------------
Deep linking in Unraid Connect lets you jump directly to specific sections of your Unraid WebGUI with a single click. Simply click any of the circled link buttons (below) in the Connect interface to go straight to the relevant management page for your server.
![Deep linking](https://docs.unraid.net/assets/images/Deep-linking-b5b22e7f13d34004b213053c78fc7423.png)
* * *
Customization
-------------------------------------------------------------------------------------------------------------------------
Unraid Connect provides a flexible dashboard experience, allowing you to personalize your server view and appearance. The customization options are organized below for easy reference.
* Change banner image
* Rearrange dashboard tiles
* Switch themes
To display your server's banner image on the Connect dashboard, upload or select a banner image from your WebGUI under _**Settings → Display Settings → Banner**_. This banner will automatically appear in your Connect dashboard for that server.
You can customize your dashboard layout by dragging and dropping server tiles. In the Connect dashboard, click the hamburger (≡) button on any tile to rearrange its position. This allows you to prioritize the information and the services most important to you.
Toggle between dark and light mode by clicking the Sun or Moon icon on the far right of the Connect UI. Your theme preference will be instantly applied across the Connect dashboard for a consistent experience.
* * *
License management
----------------------------------------------------------------------------------------------------------------------------------------
Managing your licenses in Unraid Connect is easy. Under the **My Keys** section, you can:
* View or reissue a key to a new USB.
* Upgrade your license tier directly from the Connect UI.
* Download registration key files for backup or transfer.
* Review license status and expiration (if applicable).
You don't need to leave the Connect interface to manage or upgrade your licenses.
* * *
Language localization
-------------------------------------------------------------------------------------------------------------------------------------------------
Unraid Connect supports multiple languages to cater to a global user base. You can change your language preference through the language selector in the Connect interface.
To change your language preference:
1. Open the Connect UI.
2. Go to the language selector.
3. Select your preferred language from the list.
The interface will update automatically to reflect your selection.
* * *
Signing out
-------------------------------------------------------------------------------------------------------------------
You can sign out of Unraid Connect anytime from _**Settings → Management Access → Unraid Connect → Account Status**_ by clicking the **Sign Out** button.
When you sign out:
* Your server remains listed on the Connect dashboard, but you lose access to remote management features.
* Remote access, cloud-based flash backups, and other Unraid Connect features will be disabled for that server.
* You can still download your registration keys, but you cannot manage or monitor the server remotely until you sign in again.
* Signing out does **not** disconnect your server from the local network or affect local access.
* * *
Uninstalling the plugin
-------------------------------------------------------------------------------------------------------------------------------------------------------
When you uninstall the Unraid Connect plugin:
* All flash backup files will be deactivated and deleted from your local flash drive.
* Cloud backups are marked for removal from Unraid servers; they will be retained for 30 days, after which they are permanently purged. For immediate deletion, [disable Flash Backup](https://docs.unraid.net/unraid-connect/automated-flash-backup/) before uninstalling.
* Remote access will be disabled. Ensure that you remove any related port forwarding rules from your router.
* Your server will be signed out of Unraid.net.
**Note:** Uninstalling the plugin does **not** revert your server's URL from `https://yourpersonalhash.unraid.net` to `http://computername`. If you wish to change your access URL, refer to [Disabling SSL for local access](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/securing-your-connection/#disabling-ssl-for-local-access).
* * *
Connection errors
-----------------
If you encounter connection errors in Unraid Connect, [open a terminal](https://docs.unraid.net/unraid-os/system-administration/advanced-tools/command-line-interface/) from the WebGUI and run:

    unraid-api restart

View File

@@ -1,181 +0,0 @@
# Remote Access (Unraid Connect)
> **Source:** [Unraid Documentation - Remote Access](https://docs.unraid.net/unraid-connect/remote-access)
> **Scraped:** 2026-02-07 | Raw content for reference purposes
Unlock secure, browser-based access to your Unraid WebGUI from anywhere with remote access. This feature is ideal for managing your server when you're away from home - no complicated networking or VPN Tunnel setup is required. For more advanced needs, such as connecting to Docker containers or accessing network drives, a VPN Tunnel remains the recommended solution.
**Security reminder:** Before enabling remote access, ensure your root password is strong and unique. Update it on the **Users** page if required. Additionally, keep your Unraid OS updated to the latest version to protect against security vulnerabilities. [Learn more about updating Unraid here](https://docs.unraid.net/unraid-os/system-administration/maintain-and-update/upgrading-unraid/).
Remote access through Unraid Connect provides:
* **Convenience** - Quickly access your server's management interface from anywhere, using a secure, cloud-managed connection.
* **Security** - Dynamic access modes limit exposure by only allowing access to the internet when necessary, which helps reduce risks from automated attacks.
* **Simplicity** - No need for manual port forwarding or VPN client setup for basic management tasks.
**Tip:** For full network access or advanced use cases, consider setting up [Tailscale](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/tailscale/) or a VPN solution.
* * *
Initial setup
--------------------------------------------------------------------------------------------------------------------
To enable remote access:
1. In the Unraid WebGUI, navigate to _**Settings → Management Access**_.
2. Check the **HTTPS port** (default: 443). If this port is in use (e.g., by Docker), select an unused port above 1000 (like 3443, 4443, or 5443).
3. Click **Apply** if you changed any settings.
4. Under **CA-signed certificate file**, click **Provision** to generate a trusted certificate.
Your Unraid server will be ready to accept secure remote connections via the WebGUI, using the configured port and a trusted certificate.
* * *
Choosing a remote access type
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unraid Connect offers two modes:
* Dynamic remote access
* Static remote access
**Dynamic remote access** provides secure, on-demand access to your WebGUI.
* **Access is enabled only when you need it.** The WebGUI remains closed to the internet by default, minimizing the attack surface.
* **Works with UPnP or manual port forwarding.**
* **Automatically opens and closes access** through the Connect dashboard or API, with sessions limited by time for added security.
**Static remote access** keeps your WebGUI continuously available from the internet.
* **Server is always accessible from the internet** on the configured port.
* **Higher risk:** The WebGUI is exposed to WAN traffic at all times, increasing potential vulnerability.
| Feature | Dynamic remote access | Static remote access |
| --- | --- | --- |
| WebGUI open to internet | Only when enabled | Always |
| Attack surface | Minimized | Maximized |
| Automation | Auto open/close via Connect | Manual setup, always open |
| UPnP support | Yes | Yes |
| | **Recommended for most** | |
Dynamic remote access setup
--------------------------------------------------------------------------------------------------------------------------------------------------------------
To set up dynamic remote access:
1. In _**Settings → Management Access → Unraid API**_, select a dynamic option from the Remote Access dropdown:
* **Dynamic - UPnP:** Uses UPnP to open and close a random port automatically (requires UPnP enabled on your router).
* **Dynamic - Manual port forward:** Requires you to forward the selected port on your router manually.
2. Navigate to [Unraid Connect](https://connect.myunraid.net/), and go to the management or server details page.
3. The **Dynamic remote access** card will show a button if your server isn't currently accessible from your location.
4. Click the button to enable WAN access. If using UPnP, a new port forward lease is created (typically for 30 minutes) and auto-renewed while active.
5. The card will display the current status and UPnP state.
6. After 10 minutes of inactivity - or if you click **Disable remote access** - internet access is automatically revoked. UPnP leases are removed as well.
* * *
Using UPnP (Universal Plug and Play)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UPnP automates port forwarding, simplifying remote access without requiring manual router configuration.
To configure UPnP:
1. **Enable UPnP on your router.** Ensure that your router supports UPnP and verify that it is enabled in the router settings.
2. **Enable UPnP in Unraid.** Navigate to _**Settings → Management Access**_ and change **Use UPnP** to **Yes**.
3. **Select UPnP in Unraid Connect.** On the Unraid Connect settings page, choose the remote access option as UPnP (select either Dynamic or Always On) and then click **Apply**.
4. **Verify port forwarding (Always On only).** Click the **Check** button. If successful, you'll see the message, "Your Unraid Server is reachable from the Internet."
For Dynamic forwarding, you need to click **Enable Dynamic Remote Access** in [Unraid Connect](https://connect.myunraid.net/) to allow access.
**Troubleshooting:** If the setting changes from UPnP to Manual Port Forward upon reloading, Unraid might not be able to communicate with your router. Double-check that UPnP is enabled and consider updating your router's firmware.
* * *
Using manual port forwarding
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Manual port forwarding provides greater control and is compatible with most routers.
To configure manual port forwarding:
1. **Choose a WAN port:** Pick a random port number above 1000 (for example, 13856 or 48653), rather than using the default 443.
2. **Apply settings in Unraid:** Click **Apply** to save the port you selected.
3. **Configure your router:** Set up a port forwarding rule on your router, directing your chosen WAN port to your server's HTTPS port. The Unraid interface provides the correct ports and IP address.
Some routers may require the WAN port and HTTPS port to match. If so, use the same high random number for both.
4. **Verify port forwarding (Always On only):** Press the **Check** button. If everything is correct, you'll see “Your Unraid Server is reachable from the Internet.”
For dynamic forwarding, make sure to click **Enable Dynamic Remote Access** in [Unraid Connect](https://connect.myunraid.net/) to enable access.
5. **Access your server:** Log in to [Unraid Connect](https://connect.myunraid.net/) and click the **Manage** link to connect to your server remotely.
* * *
Enabling secure local access
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
Secure local access ensures that all connections to your Unraid WebGUI, even within your home or office network, are encrypted using HTTPS, thereby safeguarding any sensitive information, such as login credentials and configuration data.
Benefits of secure local access include:
* **Encryption** - All data exchanged between your browser and the server is protected.
* **Consistency** - Use the same secure URL for both local and remote access.
* **Compliance** - Adheres to security best practices for protecting administrative interfaces.
To enable secure local access:
1. Go to _**Settings → Management Access**_.
2. In the **CA-signed certificate** section, check for DNS Rebinding warnings.
* If no warnings show, set **Use SSL/TLS** to **Strict**.
* If warnings are present, review [DNS Rebinding Protection](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/securing-your-connection/#dns-rebinding-protection).
**Important:** With SSL/TLS set to Strict, client devices must resolve your server's DNS name. If your Internet connection fails, access to the WebGUI may be lost. See [Accessing your server when DNS is down](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/securing-your-connection/#accessing-your-server-when-dns-is-down) for recovery steps.

View File

@@ -1,886 +0,0 @@
# Unraid OS 7.0.0 Release Notes
> **Source:** [Unraid OS Release Notes - 7.0.0](https://docs.unraid.net/unraid-os/release-notes/7.0.0)
> **Scraped:** 2026-02-07 | Raw content for reference purposes
This version of Unraid OS includes significant improvements across all subsystems, while attempting to maintain backward compatibility as much as possible.
Special thanks to:
* @bonienl, @dlandon, @ich777, @JorgeB, @SimonF, and @Squid for their direction, support, and development work on this release
* @bonienl for merging their **Dynamix File Manager** plugin into the webgui
* @Squid for merging their **GUI Search** and **Unlimited Width Plugin** plugins into the webgui
* @ludoux (**Proxy Editor** plugin) and @Squid (**Community Applications** plugin) for pioneering the work on http proxy support, of which several ideas have been incorporated into the webgui
* @ich777 for maintaining third-party driver plugins, and for the [Tailscale Docker integration](https://docs.unraid.net/unraid-os/release-notes/7.0.0#tailscale-integration)
* @SimonF for significant new features in the Unraid OS VM Manager
* @EDACerton for development of the Tailscale plugin
View the [contributors to Unraid on GitHub](https://github.com/unraid/webgui/graphs/contributors?from=2023-09-08&to=2025-01-08&type=c) with shoutouts to these community members who have contributed PRs (these are GitHub ids):
* almightyYantao
* baumerdev
* Commifreak
* desertwitch
* dkaser
* donbuehl
* FunkeCoder23
* Garbee
* jbtwo
* jski
* Leseratte10
* Mainfrezzer
* mtongnz
* othyn
* serisman
* suzukua
* thecode
And sincere thanks to everyone who has requested features, reported bugs, and tested pre-releases!
Upgrading
---------------------------------------------------------------------------------------------------------
### Known issues
#### ZFS pools
If you are using ZFS pools, please take note of the following:
* You will see a warning about unsupported features in your existing ZFS pools. This is because the version of ZFS in 7.0 is upgraded vs. 6.12 and contains new features. This warning is harmless, meaning your pool will still function normally. A button will appear letting you upgrade a pool to support the new ZFS features; however, Unraid OS does not make use of these new features, and once upgraded previous versions of Unraid OS will not be able to mount the pool.
* Similarly, new pools created in 7.0 will not mount in 6.12 due to ZFS not supporting downgrades. There is no way around this.
* If you decide to downgrade from 7.0 to 6.12 any previously existing hybrid pools will not be recognized upon reboot into 6.12. To work around this, first click Tools/New Config in 7.0, preserving all slots, then reboot into 6.12 and your hybrid pools should import correctly.
* ZFS spares are not supported in this release. If you have created a hybrid pool in 6.12 which includes spares, please remove the 'spares' vdev before upgrading to v7.0. This will be fixed in a future release.
* Currently unable to import TrueNAS pools. This will be fixed in a future release.
* If you are using **Docker data-root=directory** on a ZFS volume, see [Add support for overlay2 storage driver](https://docs.unraid.net/unraid-os/release-notes/7.0.0#add-support-for-overlay2-storage-driver).
* We check that VM names do not include characters that are not valid for ZFS. Existing VMs are not modified but will throw an error and disable update if invalid characters are found.
#### General pool issues
If your existing pools fail to import with _Wrong Pool State, invalid expansion_ or _Wrong pool State. Too many wrong or missing devices_, see this [forum post](https://forums.unraid.net/topic/184435-unraid-os-version-700-available/#findComment-1508012).
#### Drive spindown issues
Drives may not spin down when connected to older Marvell drive controllers that use the sata\_mv driver (i.e. Supermicro SASLP and SAS2LP) or to older Intel controllers (i.e. ICH7-ICH10). This may be resolved by a future kernel update.
#### Excessive flash drive activity slows the system down
If the system is running slowly, check the Main page and see if it shows significant continuous reads from the flash drive during normal operation. If so, the system may be experiencing sufficient memory pressure to push the OS out of RAM and cause it to be re-read from the flash drive. From the web terminal type:

    touch /boot/config/fastusr
and then reboot. This will use around 500 MB of RAM to ensure the OS files always stay in memory. Please let us know if this helps.
#### New Windows changes may result in loss of access to Public shares
Due to recent security changes in Windows 11 24H2, "guest" access of Unraid public shares may not work. The easiest way around this is to create a user in Unraid with the same name as the Windows account you are using to connect. If the Unraid user password is not the same as the Windows account password, Windows will prompt for credentials.
If you are using a Microsoft account, it may be better to create a user in Unraid with a simple username, set a password, then in Windows go to _**Control Panel → Credential Manager → Windows credentials → Add a Windows Credential**_ and add the correct Unraid server name and credentials.
Alternately, you can [re-enable Windows guest fallback](https://techcommunity.microsoft.com/blog/filecab/accessing-a-third-party-nas-with-smb-in-windows-11-24h2-may-fail/4154300) (not recommended).
#### Problems due to Realtek network cards
There have been multiple reports of issues with the Realtek driver plugin after upgrading to recent kernels. You may want to preemptively uninstall it before upgrading, or remove it afterwards if you have networking issues.
#### A virtual NIC is being assigned to eth0 on certain systems
On some systems with IPMI KVM, a virtual NIC is being assigned to eth0 instead of the expected NIC. See this [forum post](https://forums.unraid.net/bug-reports/stable-releases/61214-no-network-after-updating-eth0-assigned-to-virtual-usb-nic-cdc-ethernet-device-with-169-ip-instead-of-mellanox-10gbe-nic-r3407/) for options.
#### Issues using Docker custom networks
If certain custom Docker networks are not available for use by your Docker containers, navigate to _**Settings → Docker**_ and fix the CIDR definitions for the subnet mask and DHCP pool on those custom networks. The underlying systems have gotten more strict and invalid CIDR definitions which worked in earlier releases no longer work.
### Rolling back
See the warnings under **Known Issues** above.
The Dynamix File Manager, GUI Search, and Unlimited Width Plugin plugins are now built into Unraid. If you rollback to an earlier version you will need to reinstall those plugins to retain their functionality.
If you disabled the unRAID array, we recommend enabling it again before rolling back.
If you previously had Outgoing Proxies set up using the Proxy Editor plugin or some other mechanism, you will need to re-enable that mechanism after rolling back.
If you roll back after enabling the [overlay2 storage driver](https://docs.unraid.net/unraid-os/release-notes/7.0.0#add-support-for-overlay2-storage-driver), you will need to delete the Docker directory and let Docker re-download the image layers.
If you roll back after installing [Tailscale in a Docker container](https://docs.unraid.net/unraid-os/release-notes/7.0.0#tailscale-integration), you will need to edit the container, make a dummy change, and **Apply** to rebuild the container without the Tailscale integration.
After rolling back, make a dummy change to each WireGuard config to get the settings appropriate for that version of Unraid.
If rolling back earlier than 6.12.14, also see the [6.12.14 release notes](https://docs.unraid.net/unraid-os/release-notes/6.12.14/#rolling-back).
Storage
---------------------------------------------------------------------------------------------------
### unRAID array optional
You can now set the number of unRAID array slots to 'none'. This will allow the array to Start without any devices assigned to the unRAID array itself.
If you are running an all-SSD/NVMe server, we recommend assigning all devices to one or more ZFS/BTRFS pools, since Trim/Discard is not supported with unRAID array devices.
To unassign the unRAID array from an existing server, first unassign all Array slots on the Main page, and then set the Slots to 'none'.
For new installs, the default number of slots to reserve for the unRAID array is now 'none'.
### Share secondary storage may be assigned to a pool[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#share-secondary-storage-may-be-assigned-to-a-pool "Direct link to Share secondary storage may be assigned to a pool")
Shares can now be configured with pools for both primary and secondary storage, and mover will move files between those pools. As a result of this change, the maximum number of supported pools is now 34 (previously 35).
### ReiserFS file system option has been disabled[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#reiserfs-file-system-option-has-been-disabled "Direct link to ReiserFS file system option has been disabled")
Since ReiserFS is scheduled to be removed from the Linux kernel, the option to format a device with ReiserFS has also been disabled. You may use the mover function to empty an array disk prior to reformatting it with another file system; see below. We will add a webGUI button for this in a future release.
### Using 'mover' to empty an array disk[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#using-mover-to-empty-an-array-disk "Direct link to Using 'mover' to empty an array disk")
Note: this command line option was removed in Unraid 7.2.1. On newer releases, use the webGUI method instead; see [Converting to a new file system type](https://docs.unraid.net/unraid-os/using-unraid-to/manage-storage/file-systems/#converting-to-a-new-file-system-type) for details.
Mover can now be used to empty an array disk. With the array started, run this at a web terminal:
`mover start -e diskN |& logger &  # where N is [1..28]`
Mover will look at each top-level directory (share) and then move files one-by-one to other disks in the array, following the usual config settings (include/exclude, split level, allocation method). Move targets are restricted to the unRAID array.
Watch the syslog for status; a worked example follows the list below. When the mover process ends, the syslog will show a list of files which could not be moved:
* maybe the file was in use
* maybe the file is at the top level of /mnt/diskN
* maybe we ran out of space
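Putting the above together, a hypothetical session that empties disk3 and follows mover's progress in the syslog (the disk number and grep filter are illustrative):

```bash
# Start emptying disk3, sending mover output to the syslog.
mover start -e disk3 |& logger &

# Follow the syslog to watch progress and see any files that were skipped.
tail -f /var/log/syslog | grep -i mover
```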
### Predefined shares handling[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#predefined-shares-handling "Direct link to Predefined shares handling")
The Unraid OS Docker Manager is configured by default to use these predefined shares:
* system - used to store Docker image layers in a loopback image stored in system/docker.
* appdata - used by Docker applications to store application data.
The Unraid OS VM Manager is configured by default to use these predefined shares:
* system - used to store the libvirt loopback image in system/libvirt
* domains - used to store VM vdisk images
* isos - used to store ISO boot images
When either Docker or VMs are enabled, the required predefined shares are created if necessary according to these rules:
* if a pool named 'cache' is present, predefined shares are created with 'cache' as the Primary storage with no Secondary storage.
* if no pool named 'cache' is present, the predefined shares are created with the first alphabetically present pool as Primary with no Secondary storage.
* if no pools are present, the predefined shares are created on the unRAID array as Primary with no Secondary storage.
### ZFS implementation[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#zfs-implementation "Direct link to ZFS implementation")
* Support Hybrid ZFS pools aka subpools (except 'spares')
* Support recovery from multiple drive failures in a ZFS pool with sufficient protection
* Support LUKS encryption on ZFS pools and drives
* Set reasonable default profiles for new ZFS pools and subpools
* Support upgrading ZFS pools when viewing the pool status. Note: after upgrading, the volume may not be mountable in previous versions of Unraid
### Allocation profiles for btrfs, zfs, and zfs subpools[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#allocation-profiles-for-btrfs-zfs-and-zfs-subpools "Direct link to Allocation profiles for btrfs, zfs, and zfs subpools")
When a btrfs or zfs pool/subpool is created, the default storage allocation is determined by the number of slots (devices) initially assigned to the pool (see the illustrative commands after this list):
* for a zfs main (root) pool:
  * slots == 1 => single
  * slots == 2 => mirror (1 group of 2 devices)
  * slots >= 3 => raidz1 (1 group of 'slots' devices)
* for zfs special, logs, and dedup subpools:
  * slots == 1 => single
  * slots%2 == 0 => mirror (slots/2 groups of 2 devices)
  * slots%3 == 0 => mirror (slots/3 groups of 3 devices)
  * otherwise => stripe (1 group of 'slots' devices)
* for zfs cache and spare subpools:
  * slots == 1 => single
  * slots >= 2 => stripe (1 group of 'slots' devices)
* for btrfs pools:
  * slots == 1 => single
  * slots >= 2 => raid1 (that is, what btrfs defines as raid1)
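As a rough illustration only, the zfs defaults above correspond conceptually to the following raw OpenZFS commands; Unraid creates and manages its pools itself, and the pool/device names here are placeholders:

```bash
# slots == 1 => single-device pool
zpool create tank /dev/sdb

# slots == 2 => mirror (1 group of 2 devices)
zpool create tank mirror /dev/sdb /dev/sdc

# slots >= 3 => raidz1 (1 group of 'slots' devices)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
```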
### Pool considerations[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#pool-considerations "Direct link to Pool considerations")
When adding devices to (expanding) a single-slot pool, these rules apply:
For btrfs: adding one or more devices to a single-slot pool will result in converting the pool to raid1 (that is, what btrfs defines as raid1). Adding any number of devices to an existing multiple-slot btrfs pool increases the storage capacity of the pool and does not change the storage profile.
For zfs: adding one, two, or three devices to a single-slot pool will result in converting the pool to 2-way, 3-way, or 4-way mirror. Adding a single device to an existing 2-way or 3-way mirror converts the pool to a 3-way or 4-way mirror.
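Conceptually, the zfs single-slot expansion described above behaves like a raw `zpool attach`, which widens a single device or existing mirror by one device (Unraid drives this through the webGUI; names are placeholders):

```bash
# Attach /dev/sdc alongside /dev/sdb, converting a single-device
# pool into a 2-way mirror (or an N-way mirror into an N+1-way mirror).
zpool attach tank /dev/sdb /dev/sdc
```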
Changing the file system type of a pool:
For all single-slot pools, the file system type can be changed when the array is Stopped.
For btrfs/zfs multi-slot pools, the file system type cannot be changed; to repurpose the devices you must click the **Erase pool** button.
### Other features[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-features "Direct link to Other features")
* Spin up/down the devices of a pool in parallel
* Add "Delete Pool" button, which unassigns all devices of a pool and then removes the pool. The devices themselves are not modified. This is useful when physically removing devices from a server.
* Add ability to change the encryption passphrase/keyfile for LUKS encrypted disks
* Introduce 'config/share.cfg' variable 'shareNOFILE', which sets the maximum open file descriptors for the shfs process (see the Known Issues); a sketch follows below
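A minimal sketch, assuming `config/share.cfg` uses Unraid's usual shell-style `name="value"` format (the value here is purely illustrative):

```bash
# Raise the maximum open file descriptors for the shfs process;
# takes effect the next time the array is started.
echo 'shareNOFILE="40960"' >> /boot/config/share.cfg
```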
VM Manager[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#vm-manager "Direct link to VM Manager")
------------------------------------------------------------------------------------------------------------
### Improvements[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#improvements "Direct link to Improvements")
Added support for VM clones, snapshots, and evdev passthru.
The VM editor now has a new read-only inline XML mode for advanced users, making it clear how the GUI choices affect the underlying XML used by the VM.
Big thanks to @SimonF for his ongoing enhancements to VMs.
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-changes "Direct link to Other changes")
* **VM Tab**
  * Show all graphics cards and IP addresses assigned to VMs
  * noVNC version: 1.5
* **VM Manager Settings**
  * Added VM autostart disable option
* **Add/edit VM template**
  * Added "inline xml view" option
  * Support user-created VM templates
  * Add qemu ppc64 target
  * Add qemu:override support
  * Add "QEMU command-line passthrough" feature
  * Add VM multifunction support, including "PCI Other"
  * VM template enhancements for Windows VMs, including hypervclock support
  * Add "migratable" on/off option for emulated CPU
  * Add offset and timer support
  * Add no-keymap option and set the Virtual GPU default keyboard to use it
  * Add nogpu option
  * Add SR-IOV support for Intel iGPU
  * Add storage override to specify where images are created when adding a VM
  * Add SSD flag for vdisks
  * Add Unmap support
  * Check that the VM name does not include characters that are not valid for ZFS
* **Dashboard**
  * Add VM usage statistics to the dashboard; enable on _**Settings → VM Manager → Show VM Usage**_
Docker[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#docker "Direct link to Docker")
------------------------------------------------------------------------------------------------
### Docker fork bomb prevention[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#docker-fork-bomb-prevention "Direct link to Docker fork bomb prevention")
To prevent "Docker fork bombs" we introduced a new setting, _**Settings → Docker → Docker PID Limit**_, which specifies the maximum number of process IDs that any container may have active (default: 2048).
If you have a container that requires more PIDs, you may either increase this setting or override it for a specific container by adding, for example, `--pids-limit 3000` to the Docker template _Extra Parameters_ setting.
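For reference, the template's _Extra Parameters_ are passed through to `docker run`, so the override above is equivalent to something like this (container and image names are placeholders):

```bash
# Allow this one container up to 3000 PIDs instead of the global default.
docker run -d --pids-limit 3000 --name busy-app example/busy-app:latest
```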
### Add support for overlay2 storage driver[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#add-support-for-overlay2-storage-driver "Direct link to Add support for overlay2 storage driver")
If you are using **Docker data-root=directory** on a ZFS volume, we recommend that you navigate to _**Settings → Docker**_ and switch the **Docker storage driver** to **overlay2**, then delete the directory contents and let Docker re-download the image layers. The legacy **native** setting causes significant stability issues on ZFS volumes.
If retaining the ability to downgrade to earlier releases is important, then switch to **Docker data-root=xfs vDisk** instead.
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-changes-1 "Direct link to Other changes")
* See [Tailscale integration](https://docs.unraid.net/unraid-os/release-notes/7.0.0#tailscale-integration)
* Allow custom registry with a port specification
* Use a "lazy unmount" of the docker image to prevent blocking array stop
* Updated to address multiple security issues (CVE-2024-21626, CVE-2024-24557)
* Docker Manager:
  * Allow users to select container networks in the WebUI
  * Correctly identify/show containers not managed by dockerman
* rc.docker:
  * Only stop Unraid-managed containers
  * Honor the restart policy of 3rd-party containers
* Set the MTU of the Docker WireGuard bridge to match the WireGuard default MTU
Networking[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#networking "Direct link to Networking")
------------------------------------------------------------------------------------------------------------
### Tailscale integration[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#tailscale-integration "Direct link to Tailscale integration")
Unraid OS supports [Tailscale](https://tailscale.com/) through the use of a plugin created by Community Developer EDACerton. When this plugin is installed, Tailscale certificates are supported for https webGUI access, and the Tailnet URLs will be displayed on the _**Settings → Management Access**_ page.
Natively, Unraid now also lets you optionally install Tailscale in almost any Docker container, giving you the ability to share containers with specific people, access them using valid https certificates, and give them alternate routes to the Internet via Exit Nodes.
For more details, see [the docs](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/tailscale/).
### Support iframing the webGUI[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#support-iframing-the-webgui "Direct link to Support iframing the webGUI")
Added "Content-Security-Policy frame-ancestors" support to automatically allow the webGUI to be iframed by domains it has certificates for. While not officially supported, additional customization is possible by using a script to modify `NGINX_CUSTOMFA` in `/etc/defaults/nginx`.
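A purely hypothetical sketch of such a script, run from the go script before nginx starts; the exact format of `/etc/defaults/nginx` may differ, and the domain is an example:

```bash
# Append an extra allowed frame-ancestor for the webGUI's CSP header.
echo 'NGINX_CUSTOMFA="https://dashboard.example.com"' >> /etc/defaults/nginx
```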
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-changes-2 "Direct link to Other changes")
* Upgraded to OpenSSL 3
* Allow ALL IPv4/IPv6 addresses as listeners. This solves the issue where IPv4 or IPv6 addresses change dynamically
* Samba:
  * Add an IPv6 listening address only when NetBIOS is disabled
  * Fix macOS being unable to write to the 'flash' share, and restore Time Machine compatibility (fruit changes)
* The VPN manager now adds all interfaces to WireGuard tunnels; after upgrading or changing network settings, make a dummy change to each tunnel to update the WireGuard tunnel configs.
webGUI[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#webgui "Direct link to webGUI")
------------------------------------------------------------------------------------------------
### Integrated Dynamix File Manager plugin[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#integrated-dynamix-file-manager-plugin "Direct link to Integrated Dynamix File Manager plugin")
Click the file manager icon and navigate through your directory structure with the ability to perform common operations such as copy, move, delete, and rename files and directories.
### Integrated GUI Search plugin[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#integrated-gui-search-plugin "Direct link to Integrated GUI Search plugin")
Click the search icon on the Menu bar and type the name of the setting you are looking for.
### Outgoing Proxy Manager[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#outgoing-proxy-manager "Direct link to Outgoing Proxy Manager")
If you previously used the Proxy Editor plugin or had an outgoing proxy setup for CA, those will automatically be removed/imported. You can then adjust them on _**Settings → Outgoing Proxy Manager**_.
For more details, see the [manual](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/secure-your-outgoing-comms/).
Note: this feature is completely unrelated to any reverse proxies you may be using.
### Notification Agents[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#notification-agents "Direct link to Notification Agents")
Notification agent definitions are now stored as individual XML files, making it easier to add notification agents via plugin. See this [sample plugin](https://github.com/Squidly271/Wxwork-sample) by @Squid.
* Fix: agent notifications did not work if there was a problem with email notifications
### NTP Configuration[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#ntp-configuration "Direct link to NTP Configuration")
For new installs, a single default NTP server is set to 'time.google.com'.
If your server is using our previous NTP defaults of time1.google.com, time2.google.com etc, you may notice some confusing NTP-related messages in your syslog. To avoid this, consider changing to our new defaults: navigate to _**Settings → Date & Time**_ and configure **NTP server 1** to be time.google.com, leaving all the others blank.
Of course, you are welcome to use any time servers you prefer; this is just to let you know that we have tweaked our defaults.
### NFS Shares[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#nfs-shares "Direct link to NFS Shares")
We have added a few new settings to help resolve issues with NFS shares. On _**Settings → Global Share Settings**_ you can adjust the number of fuse file descriptors and on _**Settings → NFS**_ you can adjust the NFS protocol version and number of threads it uses. See the inline help for details.
* Added support for NFS 4.1 and 4.2, and permit NFSv4 mounts by default
* Add a text box to configure multi-line NFS rules
* Bug fix: nfsd doesn't restart properly
### Dashboard[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#dashboard "Direct link to Dashboard")
* Add server date and time to the Dashboard; click the time to edit related settings
* Rework the **System** tile to clarify what is being shown, including tooltips
* Show useful content when dashboard tiles are minimized
* Show Docker RAM usage on the Dashboard
* Rename 'Services' to 'System'
* Fix memory leak on the Dashboard, VM Manager, and Docker Manager pages
### SMART improvements[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#smart-improvements "Direct link to SMART improvements")
* Display KB/MB/GB/TB written in SMART Attributes for SSDs
* Add 'SSD endurance remaining' SMART Attribute.
### Diagnostics[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#diagnostics "Direct link to Diagnostics")
* Add gpujson from gpu\_statistics to diagnostics
* Improved anonymization of LXC logs
* If the FCP plugin is installed, run scan during diagnostics
* Add phplog to identify PHP errors
* Improved anonymization of IPv6 addresses
* Removed ps.txt because it exposed passwords in the process list
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-changes-3 "Direct link to Other changes")
* Support different warning/critical temperature thresholds for HDD/SSD/NVMe drives. NVMe thresholds are set automatically by the drive itself, see _**Settings → Disk Settings**_ to set the thresholds for HDDs and SSDs. All can still be overridden for individual drives.
* Add _**Settings → Local Console Settings**_ page with options for keyboard layout, screen blank time, and persistent Bash history
* Add _**Settings → Power Mode**_ to optimize the system for power efficiency, balanced, or performance
* Hover over an entry on **Tools** and **Settings** to favorite an item, and quickly get back to it on the new top-level **Favorites** page. Or disable Favorites functionality on _**Settings → Display Settings**_.
* Enhanced shutdown/restart screen showing more details of the process
* Simplify notifications by removing submenus - View, History, and Acknowledge now apply to all notification types
* Move date & time settings from **Display Settings** to _**Settings → Date & Time Settings**_
* _**Settings → Display settings**_: new setting "width" to take advantage of larger screens
* Optionally display NVMe power usage; see _**Settings → Disk Settings**_
* Web component enhancements for downgrades, updates, and registration
* Prevent formatting new drives as ReiserFS
* Use atomic writes for updates of config files
* ZFS pool settings changes:
  * Create meaningful ZFS subpool descriptions
  * Change ZFS profile text 'raid0' to 'stripe'
* Add additional USB device passthrough smartmontools options to webgui (thanks to GitHub user jski)
* UPS Settings page (thanks to @othyn):
  * Add the ability to set a manual UPS capacity override.
* UserEdit: in addition to Ed25519, FIDO/U2F Ed25519, and RSA, support SSH key types DSA, ECDSA, and FIDO/U2F ECDSA
* OpenTerminal: use shell defined for root user in /etc/passwd file
* Always display the "delete share" option, but disable it when the share is not empty
Misc[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#misc "Direct link to Misc")
------------------------------------------------------------------------------------------
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#other-changes-4 "Direct link to Other changes")
* Replace very old 'memtest' with Memtest86+ version 6.20
  * There are also [Boot Options](https://github.com/memtest86plus/memtest86plus#boot-options) available
* Remove support for legacy unraid.net certs
* Remove "UpdateDNS" functionality since no longer using legacy non-wildcard 'unraid.net' SSL certs
* Strip proxy info and '&' from go script
* passwd file handling correction
* When avahidaemon running, add name.local to hosts file
* Remove keys.lime-technology.com from hosts file
* rc.S: remove wsync from XFS mount to prevent WebGUI from freezing during heavy I/O on /boot
* make\_bootable\_linux: version 1.4
  * detect if mtools is installed
* ntp.conf: set 'logconfig' to ignore LOG\_INFO
* Speed things up: use AVAHI reload instead of restart
* Linux kernel: force all buggy Seagate external USB enclosures to bind to usb-storage instead of UAS driver
* Startup improvements in the rc.S script:
  * Automatically repair the boot sector backup
  * Explicitly unmount all file systems if boot cannot continue
  * Detect a bad root value in syslinux.cfg
* reboot should not invoke shutdown
* Clean up empty cgroups
* Samba smb.conf: set "nmbd bind explicit broadcast = no" if NetBIOS enabled
* Add fastcgi\_path\_info to default nginx configuration
* Ensure calls to pgrep or killall are restricted to the current namespace
* (Advanced) Added ability to apply custom udev rules from `/boot/config/udev/` upon boot
* Bug fix: Correct handling of empty Trial.key when download fails
* Bug fix: Fix PHP warning for UPS status
* Create meaningful /etc/os-release file
* Misc translation fixes
* Bug fix: JavaScript console logging functionality restored
* Clicking Unraid version number loads release notes from Unraid Docs website
Linux kernel[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#linux-kernel "Direct link to Linux kernel")
------------------------------------------------------------------------------------------------------------------
* version 6.6.68
* CONFIG\_MISC\_RTSX\_PCI: Realtek PCI-E card reader
* CONFIG\_MISC\_RTSX\_USB: Realtek USB card reader
* CONFIG\_DRM\_XE: Intel Xe Graphics
* CONFIG\_DRM\_XE\_DISPLAY: Enable display support
* CONFIG\_AUDIT: Auditing support
* CONFIG\_USB\_SERIAL\_OPTION: USB driver for GSM and CDMA modems
* CONFIG\_USB\_SERIAL\_SIMPLE: USB Serial Simple Driver
* CONFIG\_USB\_UAS: USB Attached SCSI
* CONFIG\_NFS\_V4\_1: NFS client support for NFSv4.1
* CONFIG\_NFS\_V4\_1\_MIGRATION: NFSv4.1 client support for migration
* CONFIG\_NFS\_V4\_2: NFS client support for NFSv4.2
* CONFIG\_NFS\_V4\_2\_READ\_PLUS: NFS: Enable support for the NFSv4.2 READ\_PLUS operation
* CONFIG\_NFSD\_V4\_2\_INTER\_SSC: NFSv4.2 inter server to server COPY
* CONFIG\_USB\_NET\_CDC\_EEM: CDC EEM support
* CONFIG\_USB\_NET\_CDC\_NCM: CDC NCM support
* CONFIG\_USB\_SERIAL\_XR: USB MaxLinear/Exar USB to Serial driver
* CONFIG\_CAN: CAN bus subsystem support
* CONFIG\_CAN\_NETLINK: CAN device drivers with Netlink support
* CONFIG\_CAN\_GS\_USB: Geschwister Schneider UG and candleLight compatible interfaces
* CONFIG\_SCSI\_LPFC: Emulex LightPulse Fibre Channel Support
* CONFIG\_DRM\_VIRTIO\_GPU: Virtio GPU driver
* CONFIG\_DRM\_VIRTIO\_GPU\_KMS: Virtio GPU driver modesetting support
* CONFIG\_LEDS\_TRIGGERS: LED Trigger support
* CONFIG\_LEDS\_TRIGGER\_ONESHOT: LED One-shot Trigger
* CONFIG\_LEDS\_TRIGGER\_NETDEV: LED Netdev Trigger
* CONFIG\_QED: QLogic QED 25/40/100Gb core driver
* CONFIG\_QED\_SRIOV: QLogic QED 25/40/100Gb SR-IOV support
* CONFIG\_QEDE: QLogic QED 25/40/100Gb Ethernet NIC
* CONFIG\_SCSI\_UFSHCD: Universal Flash Storage Controller
* CONFIG\_SCSI\_UFS\_BSG: Universal Flash Storage BSG device node
* CONFIG\_SCSI\_UFS\_HWMON: UFS Temperature Notification
* CONFIG\_SCSI\_UFSHCD\_PCI: PCI bus based UFS Controller support
* CONFIG\_SCSI\_UFS\_DWC\_TC\_PCI: DesignWare pci support using a G210 Test Chip
* CONFIG\_SCSI\_UFSHCD\_PLATFORM: Platform bus based UFS Controller support
* CONFIG\_SCSI\_UFS\_CDNS\_PLATFORM: Cadence UFS Controller platform driver
* CONFIG\_SCSI\_QLA\_FC: QLogic QLA2XXX Fibre Channel Support
* CONFIG\_LIQUIDIO: Cavium LiquidIO support
* CONFIG\_LIQUIDIO\_VF: Cavium LiquidIO VF support
* CONFIG\_NTFS\_FS: NTFS file system support \[removed - this is the old read-only vfs module\]
* CONFIG\_NTFS3\_FS: NTFS Read-Write file system support
* CONFIG\_NTFS3\_LZX\_XPRESS: activate support of external compressions lzx/xpress
* CONFIG\_NTFS3\_FS\_POSIX\_ACL: NTFS POSIX Access Control Lists
* CONFIG\_UHID: User-space I/O driver support for HID subsystem
* md/unraid: version 2.9.33
  * fix regression: empty slots before first occupied slot returns NO\_DEVICES
  * fix handling of device failure during rebuild/sync
* removed XEN support
Base distro[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#base-distro "Direct link to Base distro")
---------------------------------------------------------------------------------------------------------------
* aaa\_base: version 15.1
* aaa\_glibc-solibs: version 2.40
* aaa\_libraries: version 15.1
* acl: version 2.3.2
* acpid: version 2.0.34
* adwaita-icon-theme: version 47.0
* apcupsd: version 3.14.14
* appres: version 1.0.7
* at: version 3.2.5
* at-spi2-atk: version 2.38.0
* at-spi2-core: version 2.54.0
* atk: version 2.38.0
* attr: version 2.5.2
* avahi: version 0.8
* bash: version 5.2.037
* bash-completion: version 2.16.0
* beep: version 1.3
* bin: version 11.1
* bind: version 9.20.4
* bluez-firmware: version 1.2
* bridge-utils: version 1.7.1
* brotli: version 1.1.0
* btrfs-progs: version 6.12
* bzip2: version 1.0.8
* ca-certificates: version 20241120
* cairo: version 1.18.2
* celt051: version 0.5.1.3
* cifs-utils: version 7.1
* coreutils: version 9.5
* cpio: version 2.15
* cpufrequtils: version 008
* cracklib: version 2.10.3
* cryptsetup: version 2.7.5
* curl: version 8.11.1
* cyrus-sasl: version 2.1.28
* db48: version 4.8.30
* dbus: version 1.16.0
* dbus-glib: version 0.112
* dcron: version 4.5
* dejavu-fonts-ttf: version 2.37
* devs: version 2.3.1
* dhcpcd: version 10.0.10
* diffutils: version 3.10
* dmidecode: version 3.6
* dnsmasq: version 2.90
* docker: version 27.0.3
* dosfstools: version 4.2
* e2fsprogs: version 1.47.1
* ebtables: version 2.0.11
* editres: version 1.0.9
* elfutils: version 0.192
* elogind: version 255.5
* elvis: version 2.2\_0
* encodings: version 1.1.0
* etc: version 15.1
* ethtool: version 5.19
* eudev: version 3.2.14
* file: version 5.46
* findutils: version 4.10.0
* flex: version 2.6.4
* floppy: version 5.5
* fluxbox: version 1.3.7
* fontconfig: version 2.15.0
* freeglut: version 3.6.0
* freetype: version 2.13.3
* fribidi: version 1.0.16
* fuse3: version 3.16.2
* gawk: version 5.3.1
* gd: version 2.3.3
* gdbm: version 1.24
* gdk-pixbuf2: version 2.42.12
* genpower: version 1.0.5
* git: version 2.47.1
* glew: version 2.2.0
* glib2: version 2.82.4
* glibc: version 2.40
* glibc-zoneinfo: version 2024b
* glu: version 9.0.3
* gmp: version 6.3.0
* gnutls: version 3.8.8
* gptfdisk: version 1.0.10
* graphite2: version 1.3.14
* grep: version 3.11
* gtk+3: version 3.24.43
* gzip: version 1.13
* harfbuzz: version 10.1.0
* hdparm: version 9.65
* hicolor-icon-theme: version 0.18
* hostname: version 3.25
* htop: version 3.3.0
* hwloc: version 2.2.0
* icu4c: version 76.1
* imlib2: version 1.7.1
* inetd: version 1.79s
* infozip: version 6.0
* inih: version 58
* inotify-tools: version 4.23.9.0
* intel-microcode: version 20241112
* iperf3: version 3.17.1
* iproute2: version 6.12.0
* iptables: version 1.8.11
* iputils: version 20240905
* irqbalance: version 1.7.0
* jansson: version 2.14
* jemalloc: version 5.3.0
* jq: version 1.6
* json-c: version 0.18\_20240915
* json-glib: version 1.10.6
* kbd: version 2.7.1
* kernel-firmware: version 20241220\_9e1d9ae
* keyutils: version 1.6.3
* kmod: version 33
* krb5: version 1.21.3
* lbzip2: version 2.5
* less: version 668
* libICE: version 1.1.2
* libSM: version 1.2.5
* libX11: version 1.8.10
* libXau: version 1.0.12
* libXaw: version 1.0.16
* libXcomposite: version 0.4.6
* libXcursor: version 1.2.3
* libXdamage: version 1.1.6
* libXdmcp: version 1.1.5
* libXevie: version 1.0.3
* libXext: version 1.3.6
* libXfixes: version 6.0.1
* libXfont2: version 2.0.7
* libXfontcache: version 1.0.5
* libXft: version 2.3.8
* libXi: version 1.8.2
* libXinerama: version 1.1.5
* libXmu: version 1.2.1
* libXpm: version 3.5.17
* libXrandr: version 1.5.4
* libXrender: version 0.9.12
* libXres: version 1.2.2
* libXt: version 1.3.1
* libXtst: version 1.2.5
* libXxf86dga: version 1.1.6
* libXxf86misc: version 1.0.4
* libXxf86vm: version 1.1.6
* libaio: version 0.3.113
* libarchive: version 3.7.7
* libcap-ng: version 0.8.5
* libcgroup: version 0.41
* libdaemon: version 0.14
* libdeflate: version 1.23
* libdmx: version 1.1.5
* libdrm: version 2.4.124
* libedit: version 20240808\_3.1
* libepoxy: version 1.5.10
* libestr: version 0.1.9
* libevdev: version 1.13.3
* libevent: version 2.1.12
* libfastjson: version 0.99.9
* libffi: version 3.4.6
* libfontenc: version 1.1.8
* libgcrypt: version 1.11.0
* libglvnd: version 1.7.0
* libgpg-error: version 1.51
* libgudev: version 238
* libidn: version 1.42
* libjpeg-turbo: version 3.1.0
* liblogging: version 1.0.6
* libmnl: version 1.0.5
* libnetfilter\_conntrack: version 1.1.0
* libnfnetlink: version 1.0.2
* libnftnl: version 1.2.8
* libnl3: version 3.11.0
* libnvme: version 1.11.1
* libpcap: version 1.10.5
* libpciaccess: version 0.18.1
* libpng: version 1.6.44
* libpsl: version 0.21.5
* libpthread-stubs: version 0.5
* libseccomp: version 2.5.5
* libssh: version 0.11.1
* libssh2: version 1.11.1
* libtasn1: version 4.19.0
* libtiff: version 4.7.0
* libtirpc: version 1.3.6
* libtpms: version 0.9.0
* libunistring: version 1.3
* libunwind: version 1.8.1
* libusb: version 1.0.27
* libusb-compat: version 0.1.8
* libuv: version 1.49.2
* libvirt: version 10.7.0
* libvirt-php: version 0.5.8
* libwebp: version 1.5.0
* libwebsockets: version 4.3.2
* libx86: version 1.1
* libxcb: version 1.17.0
* libxcvt: version 0.1.3
* libxkbcommon: version 1.7.0
* libxkbfile: version 1.1.3
* libxml2: version 2.13.5
* libxshmfence: version 1.3.3
* libxslt: version 1.1.42
* libzip: version 1.11.2
* listres: version 1.0.6
* lm\_sensors: version 3.6.0
* lmdb: version 0.9.33
* logrotate: version 3.22.0
* lshw: version B.02.19.2
* lsof: version 4.99.4
* lsscsi: version 0.32
* lvm2: version 2.03.29
* lz4: version 1.10.0
* lzip: version 1.24.1
* lzlib: version 1.14
* lzo: version 2.10
* mbuffer: version 20240107
* mc: version 4.8.31
* mcelog: version 202
* mesa: version 24.2.8
* miniupnpc: version 2.1
* mkfontscale: version 1.2.3
* mpfr: version 4.2.1
* mtdev: version 1.1.7
* nano: version 8.3
* ncompress: version 5.0
* ncurses: version 6.5
* net-tools: version 20181103\_0eebece
* nettle: version 3.10
* network-scripts: version 15.1
* nfs-utils: version 2.8.2
* nghttp2: version 1.64.0
* nghttp3: version 1.7.0
* nginx: version 1.27.2
* noto-fonts-ttf: version 2024.12.01
* nss-mdns: version 0.14.1
* ntfs-3g: version 2022.10.3
* ntp: version 4.2.8p18
* numactl: version 2.0.13
* nvme-cli: version 2.11
* oniguruma: version 6.9.9
* openssh: version 9.9p1
* openssl: version 3.4.0
* ovmf: version stable202411
* p11-kit: version 0.25.5
* pam: version 1.6.1
* pango: version 1.54.0
* patch: version 2.7.6
* pciutils: version 3.13.0
* pcre: version 8.45
* pcre2: version 10.44
* perl: version 5.40.0
* php: version 8.3.8
* pixman: version 0.44.2
* pkgtools: version 15.1
* procps-ng: version 4.0.4
* pv: version 1.6.6
* qemu: version 9.1.0
* qrencode: version 4.1.1
* readline: version 8.2.013
* reiserfsprogs: version 3.6.27
* rpcbind: version 1.2.6
* rsync: version 3.3.0
* rsyslog: version 8.2102.0
* sakura: version 3.5.0
* samba: version 4.21.1
* sdparm: version 1.12
* sed: version 4.9
* sessreg: version 1.1.3
* setxkbmap: version 1.3.4
* sg3\_utils: version 1.48
* shadow: version 4.16.0
* shared-mime-info: version 2.4
* slim: version 1.3.6
* smartmontools: version 7.4
* spice: version 0.15.0
* spirv-llvm-translator: version 19.1.2
* sqlite: version 3.46.1
* ssmtp: version 2.64
* startup-notification: version 0.12
* sudo: version 1.9.16p2
* swtpm: version 0.7.3
* sysfsutils: version 2.1.1
* sysstat: version 12.7.6
* sysvinit: version 3.12
* sysvinit-scripts: version 15.1
* talloc: version 2.4.2
* tar: version 1.35
* tcp\_wrappers: version 7.6
* tdb: version 1.4.12
* telnet: version 0.17
* tevent: version 0.16.1
* traceroute: version 2.1.6
* transset: version 1.0.4
* tree: version 2.1.1
* usbredir: version 0.8.0
* usbutils: version 018
* userspace-rcu: version 0.15.0
* utempter: version 1.2.1
* util-linux: version 2.40.2
* vbetool: version 1.2.2
* virtiofsd: version 1.11.1
* vsftpd: version 3.0.5
* vte3: version 0.50.2
* wayland: version 1.23.1
* wget: version 1.25.0
* which: version 2.21
* wireguard-tools: version 1.0.20210914
* wqy-zenhei-font-ttf: version 0.8.38\_1
* wsdd2: version 1.8.7
* xauth: version 1.1.3
* xcb-util: version 0.4.1
* xcb-util-keysyms: version 0.4.1
* xclock: version 1.1.1
* xdpyinfo: version 1.3.4
* xdriinfo: version 1.0.7
* xev: version 1.2.6
* xf86-input-evdev: version 2.11.0
* xf86-input-keyboard: version 1.9.0
* xf86-input-mouse: version 1.9.3
* xf86-input-synaptics: version 1.9.2
* xf86-video-ast: version 1.1.5
* xf86-video-mga: version 2.1.0
* xf86-video-vesa: version 2.6.0
* xfsprogs: version 6.12.0
* xhost: version 1.0.9
* xinit: version 1.4.2
* xkbcomp: version 1.4.7
* xkbevd: version 1.1.6
* xkbutils: version 1.0.6
* xkeyboard-config: version 2.43
* xkill: version 1.0.6
* xload: version 1.2.0
* xlsatoms: version 1.1.4
* xlsclients: version 1.1.5
* xmessage: version 1.0.7
* xmodmap: version 1.0.11
* xorg-server: version 21.1.15
* xprop: version 1.2.8
* xrandr: version 1.5.3
* xrdb: version 1.2.2
* xrefresh: version 1.1.0
* xset: version 1.2.5
* xsetroot: version 1.1.3
* xsm: version 1.0.6
* xterm: version 396
* xtrans: version 1.5.2
* xwd: version 1.0.9
* xwininfo: version 1.1.6
* xwud: version 1.0.7
* xxHash: version 0.8.3
* xz: version 5.6.3
* yajl: version 2.1.0
* zfs: version 2.2.7\_6.6.68\_Unraid
* zlib: version 1.3.1
* zstd: version 1.5.6
Patches[](https://docs.unraid.net/unraid-os/release-notes/7.0.0#patches "Direct link to Patches")
---------------------------------------------------------------------------------------------------
With the [Unraid Patch plugin](https://forums.unraid.net/topic/185560-unraid-patch-plugin/) installed, visit _**Tools → Unraid Patch**_ to get the following patches / hot fixes:
* mover was not moving shares with spaces in the name from array to pool
* File Manager: allow access to UD remote shares
* Share Listing: tooltip showed `%20` instead of a space
* VM Manager: fix issue with blank Discard field on vDisk
* Include installed patches in diagnostics
Note: if you have the Mover Tuning plugin installed, you will be prompted to reboot in order to apply these patches.
View File
@@ -1,374 +0,0 @@
This release adds wireless networking, the ability to import TrueNAS and other foreign pools, multiple enhancements to VMs, early steps toward making the webGUI responsive, and more.
Upgrading[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#upgrading "Direct link to Upgrading")
---------------------------------------------------------------------------------------------------------
### Known issues[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#known-issues "Direct link to Known issues")
This release has a potential data-loss issue where the recent "mover empty disk" feature does not handle split levels on shares correctly. Resolved in 7.1.2.
#### Plugins[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#plugins "Direct link to Plugins")
Please upgrade all plugins, particularly Unraid Connect and the Nvidia driver.
For other known issues, see the [7.0.0 release notes](https://docs.unraid.net/unraid-os/release-notes/7.0.0/#known-issues).
### Rolling back[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#rolling-back "Direct link to Rolling back")
We are making improvements to how we distribute patches between releases, so the standalone Patch plugin will be uninstalled by this release. If rolling back to an earlier release, we recommend reinstalling it. More details to come.
If rolling back earlier than 7.0.0, also see the [7.0.0 release notes](https://docs.unraid.net/unraid-os/release-notes/7.0.0/#rolling-back).
Changes vs. [7.0.1](https://docs.unraid.net/unraid-os/release-notes/7.0.1/)[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#changes-vs-701 "Direct link to changes-vs-701")
---------------------------------------------------------------------------
### Storage[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#storage "Direct link to Storage")
* Import foreign ZFS pools such as TrueNAS, Proxmox, Ubuntu, QNAP.
* Import the largest partition on disk instead of the first.
* Removing device from btrfs raid1 or zfs single-vdev mirror will now reduce pool slot count.
#### Other storage changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#other-storage-changes "Direct link to Other storage changes")
* Fix: Disabled disks were not shown on the Dashboard.
* Fix: Initially, only the first pool device spins down after adding a custom spin down setting.
* Fix: Array Start was permitted if only 2 Parity devices and no Data devices.
* Fix: The parity check notification often shows the previous parity check and not the current parity check.
* Fix: Resolved certain instances of _Wrong pool State. Too many wrong or missing devices_ when upgrading.
* Fix: Not possible to replace a zfs device from a smaller vdev.
* mover:
  * Fix: Resolved issue with older share.cfg files that prevented mover from running.
  * Fix: mover would fail to recreate a hard link if the parent directory did not already exist.
  * Fix: mover would hang on named pipes.
  * Fix: [Using mover to empty an array disk](https://docs.unraid.net/unraid-os/release-notes/7.0.0/#using-mover-to-empty-an-array-disk) now only moves top-level folders that have a corresponding share.cfg file; also fixed a bug that prevented the list of files _not moved_ from displaying.
### Networking[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#networking "Direct link to Networking")
#### Wireless Networking[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#wireless-networking "Direct link to Wireless Networking")
Unraid now supports WiFi! A hard-wired connection is typically preferred, but if that isn't possible for your situation you can now set up WiFi.
For the initial setup you will either need a local keyboard/monitor (boot into GUI mode) or a wired connection. In the future, the USB Creator will be able to configure wireless networking prior to the initial boot.
* Access the webGUI and visit _**Settings → Network Settings → Wireless wlan0**_
* First, enable WiFi
* The **Regulatory Region** can generally be left to **Automatic**, but set it to your location if the network you want to connect to is not available
* Find your preferred network and click the **Connect to WiFi network** icon
* Fill in your WiFi password and other settings, then press **Join this network**
* Note: if your goal is to use Docker containers over WiFi, unplug any wired connection before starting Docker
Additional details:
* WPA2/WPA3 and WPA2/WPA3 Enterprise are supported; if both WPA2 and WPA3 are available then WPA3 is used.
* Having both wired and wireless isn't recommended for long-term use; it should be one or the other. But if both connections use DHCP and you (un)plug a network cable while wireless is configured, the system (excluding Docker) should adjust within 45-60 seconds.
* Wireless chipset support: we expect to have success with modern WiFi adapters, but older adapters may not work. If your WiFi adapter isn't detected, please start a new forum thread and provide your diagnostics so it can be investigated.
* If you want to use a USB WiFi adapter, see this list of [USB WiFi adapters that are supported with Linux in-kernel drivers](https://github.com/morrownr/USB-WiFi/blob/main/home/USB_WiFi_Adapters_that_are_supported_with_Linux_in-kernel_drivers.md)
* Advanced: new firmware files placed in `/boot/config/firmware/` will be copied to `/lib/firmware/` before driver modules are loaded (existing files will not be overwritten); a sketch follows this list.
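A minimal sketch of staging an extra firmware blob, assuming the file is wanted at the top level of `/lib/firmware/` (the filename is illustrative):

```bash
# Stage a wireless firmware file on the flash drive; it is copied into
# /lib/firmware/ early on the next boot, before driver modules load.
mkdir -p /boot/config/firmware
cp rtw8852b_fw.bin /boot/config/firmware/
```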
Limitations: there are networking limitations when using wireless, as a wlan can only have a single MAC address.
* Only one wireless NIC is supported, wlan0
* wlan0 is not able to participate in a bond
* Docker containers
  * On _**Settings → Docker**_, note that when wireless is enabled, the system will ignore the **Docker custom network type** setting and always use **ipvlan** (macvlan is not possible because wireless does not support multiple MAC addresses on a single interface)
  * On _**Settings → Docker**_, **Host access to custom networks** must be disabled
  * A Docker container's **Network Type** cannot use br0/bond0/eth0
  * Docker has a limitation that it cannot participate in two networks that share the same subnet. If switching between wired and wireless, you will need to restart Docker and reconfigure all existing containers to use the new interface. We recommend setting up either wired or wireless and not switching.
* VMs
  * We recommend setting your VM **Network Source** to **virbr0**; there is no limit to how many VMs you can run in this mode. The VMs will have full network access; the downside is they will not be accessible from the network. You can still access them via VNC through the host.
  * With some manual configuration, a single VM can be made accessible on the network:
    * Configure the VM with a static IP address
    * Configure the same IP address on the ipvtap interface: `ip addr add IP-ADDRESS dev shim-wlan0`
#### Other networking changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#other-networking-changes "Direct link to Other networking changes")
* On _**Settings → Network Settings**_, you can now adjust the server's DNS settings without stopping other services first. See the top of the **eth0** section.
* When configuring a network interface, each interface has an **Info** button showing details for the current connection.
* When configuring a network interface, the **Desired MTU** field is disabled until you click **Enable jumbo frames**. Hover over the icon for a warning about changing the MTU, in most cases it should be left at the default setting.
* When configuring multiple network interfaces, by default the additional interfaces will have their gateway disabled; this is a safe default that works on most networks, where a single gateway is required. If an additional gateway is enabled, it will be given a higher metric than existing gateways so there are no conflicts. You can override as needed.
* Old network interfaces are automatically removed from config files when you save changes to _**Settings → Network Settings**_.
* Fix various issues with DHCP.
### VM Manager[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#vm-manager "Direct link to VM Manager")
#### Nouveau GPU driver[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#nouveau-gpu-driver "Direct link to Nouveau GPU driver")
The Nouveau driver for Nvidia GPUs is now included, disabled by default as we expect most users to want the Nvidia driver instead. To enable it, uninstall the Nvidia driver plugin and run `touch /boot/config/modprobe.d/nouveau.conf` then reboot.
#### VirGL[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#virgl "Direct link to VirGL")
You can now share Intel and AMD GPUs between multiple Linux VMs at the same time using VirGL, the virtual 3D OpenGL renderer. When used this way, the GPU will provide accelerated graphics but will not output on the monitor. Note that this does not yet work with Windows VMs or the standard Nvidia plugin (it does work with Nvidia GPUs using the Nouveau driver though).
To use the virtual GPU in a Linux VM, edit the VM template and set the **Graphics Card** to **Virtual**. Then set the **VM Console Video Driver** to **Virtio(3d)** and select the appropriate **Render GPU** from the list of available GPUs (note that GPUs bound to VFIO-PCI or passed through to other VMs cannot be chosen here, and Nvidia GPUs are available only if the Nouveau driver is enabled).
#### QXL Virtual GPUs[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#qxl-virtual-gpus "Direct link to QXL Virtual GPUs")
To use this feature in a VM, edit the VM template and set the **Graphics Card** to **Virtual** and the **VM Console Video Driver** to **QXL (Best)**, you can then choose how many screens it supports and how much memory to allocate to it.
#### CPU Pinning is optional[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#cpu-pinning-is-optional "Direct link to CPU Pinning is optional")
CPU pinning is now optional; if no cores are pinned to a VM, the OS chooses which cores to use.
From _**Settings → CPU Settings**_ or when editing a VM, press **Deselect All** to unpin all cores for this VM and set the number of vCPUs to 1; increase as needed.
### User VM Templates[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#user-vm-templates "Direct link to User VM Templates")
To create a user template:
* Edit the VM, choose **Create Modify Template** and give it a name. It will now be stored as a **User Template**, available on the **Add VM** screen.
To use a user template:
* From the VM listing, press **Add VM**, then choose the template from the **User Templates** area.
Import/Export:
* From the **Add VM** screen, hover over a user template and click the arrow to export the template to a location on the server or download it.
* On another Unraid system press **Import from file** or **Upload** to use the template.
#### Other VM changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#other-vm-changes "Direct link to Other VM changes")
* When the **Primary** GPU is assigned as passthrough for a VM, warn that it may not work without loading a compatible vBIOS.
* Fix: Remove confusing _Path does not exist_ message when setting up the VM service
* Feat: Unraid VMs can now boot into GUI mode, when using the QXL video driver
* Fix: Could not change VM icon when using XML view
### WebGUI[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#webgui "Direct link to WebGUI")
#### CSS changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#css-changes "Direct link to CSS changes")
As a step toward making the webGUI responsive, we have reworked the CSS. For the most part, this should not be noticeable aside from some minor color adjustments. We expect that most plugins will be fine as well, although plugin authors may want to review [this documentation](https://github.com/unraid/webgui/blob/master/emhttp/plugins/dynamix/styles/themes/README.md). Responsiveness will continue to be improved in future releases.
If you notice alignment issues or color problems in any official theme, please let us know.
#### nchan out of shared memory issues[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#nchan-out-of-shared-memory-issues "Direct link to nchan out of shared memory issues")
We have made several changes that should prevent this issue, and if we detect that it happens, we restart nginx in an attempt to automatically recover from it.
If your Main page never populates, or if you see "nchan: Out of shared memory" in your logs, please start a new forum thread and provide your diagnostics. You can optionally navigate to _**Settings → Display Settings**_ and disable **Allow realtime updates on inactive browsers**; this prevents your browser from requesting certain updates once it loses focus. When in this state you will see a banner saying **Live Updates Paused**, simply click on the webGUI to bring it to the foreground and re-enable live updates. Certain pages will automatically reload to ensure they are displaying the latest information.
#### Other WebGUI changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#other-webgui-changes "Direct link to Other WebGUI changes")
* Fix: AdBlockers could prevent Dashboard from loading
* Fix: Under certain circumstances, browser memory utilization on the Dashboard could exponentially grow
* Fix: Prevent corrupted config file from breaking the Dashboard
Misc[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#misc "Direct link to Misc")
------------------------------------------------------------------------------------------
### Other changes[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#other-changes "Direct link to Other changes")
* On _**Settings → Date and Time**_ you can now sync your clock with a **PTP** server (we expect most users will continue to use **NTP**)
* Upgraded to jQuery 3.7.1 and jQuery UI 1.14.1
* Fix: Visiting boot.php will no longer shut down the server
* Fix: On the Docker tab, the dropdown menu for the last container was truncated in certain situations
* Fix: On _**Settings → Docker**_, deleting a **Docker directory** stored on a ZFS volume now works properly
* Fix: On boot, custom ssh configuration is again copied from `/boot/config/ssh/` to `/etc/ssh/`
* Fix: File Manager can copy files from a User Share to an Unassigned Disk mount
* Fix: Remove confusing _Path does not exist_ message when setting up the Docker service
* Fix: update `rc.messagebus` to correct handling of `/etc/machine-id`
* Diagnostics
  * Fix: Improved anonymization of IPv6 addresses in diagnostics
  * Fix: Improved anonymization of user names in certain config files in diagnostics
  * Fix: diagnostics could fail due to multibyte strings in syslog
  * Feat: diagnostics now logs errors in logs/diagnostics.error.log
### Linux kernel[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#linux-kernel "Direct link to Linux kernel")
* version 6.12.24-Unraid
* Apply: \[PATCH\] [Revert "PCI: Avoid reset when disabled via sysfs"](https://lore.kernel.org/lkml/20250414211828.3530741-1-alex.williamson@redhat.com/)
* CONFIG\_NR\_CPUS: increased from 256 to 512
* CONFIG\_TEHUTI\_TN40: Tehuti Networks TN40xx 10G Ethernet adapters
* CONFIG\_DRM\_XE: Intel Xe Graphics
* CONFIG\_UDMABUF: userspace dmabuf misc driver
* CONFIG\_DRM\_NOUVEAU: Nouveau (NVIDIA) cards
* CONFIG\_DRM\_QXL: QXL virtual GPU
* CONFIG\_EXFAT\_FS: exFAT filesystem support
* CONFIG\_PSI: Pressure stall information tracking
* CONFIG\_PSI\_DEFAULT\_DISABLED: Require boot parameter to enable pressure stall information tracking, i.e., `psi=1`
* CONFIG\_ENCLOSURE\_SERVICES: Enclosure Services
* CONFIG\_SCSI\_ENCLOSURE: SCSI Enclosure Support
* CONFIG\_DRM\_ACCEL: Compute Acceleration Framework
* CONFIG\_DRM\_ACCEL\_HABANALABS: HabanaLabs AI accelerators
* CONFIG\_DRM\_ACCEL\_IVPU: Intel NPU (Neural Processing Unit)
* CONFIG\_DRM\_ACCEL\_QAIC: Qualcomm Cloud AI accelerators
* zfs: version 2.3.1
* Wireless support
  * Atheros/Qualcomm
  * Broadcom
  * Intel
  * Marvell
  * MediaTek
  * Realtek
### Base distro updates[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#base-distro-updates "Direct link to Base distro updates")
* aaa\_glibc-solibs: version 2.41
* adwaita-icon-theme: version 48.0
* at-spi2-core: version 2.56.1
* bind: version 9.20.8
* btrfs-progs: version 6.14
* ca-certificates: version 20250425
* cairo: version 1.18.4
* cifs-utils: version 7.3
* coreutils: version 9.7
* dbus: version 1.16.2
* dbus-glib: version 0.114
* dhcpcd: version 9.5.2
* diffutils: version 3.12
* dnsmasq: version 2.91
* docker: version 27.5.1
* e2fsprogs: version 1.47.2
* elogind: version 255.17
* elfutils: version 0.193
* ethtool: version 6.14
* firefox: version 128.10 (AppImage)
* floppy: version 5.6
* fontconfig: version 2.16.2
* gdbm: version 1.25
* git: version 2.49.0
* glib2: version 2.84.1
* glibc: version 2.41
* glibc-zoneinfo: version 2025b
* grep: version 3.12
* gtk+3: version 3.24.49
* gzip: version 1.14
* harfbuzz: version 11.1.0
* htop: version 3.4.1
* icu4c: version 77.1
* inih: version 60
* intel-microcode: version 20250211
* iperf3: version 3.18
* iproute2: version 6.14.0
* iw: version 6.9
* jansson: version 2.14.1
* kernel-firmware: version 20250425\_cf6ea3d
* kmod: version 34.2
* less: version 674
* libSM: version 1.2.6
* libX11: version 1.8.12
* libarchive: version 3.7.8
* libcgroup: version 3.2.0
* libedit: version 20250104\_3.1
* libevdev: version 1.13.4
* libffi: version 3.4.8
* libidn: version 1.43
* libnftnl: version 1.2.9
* libnvme: version 1.13
* libgpg-error: version 1.55
* libpng: version 1.6.47
* libseccomp: version 2.6.0
* liburing: version 2.9
* libusb: version 1.0.28
* libuv: version 1.51.0
* libvirt: version 11.2.0
* libXft: version 2.3.9
* libxkbcommon: version 1.9.0
* libxml2: version 2.13.8
* libxslt: version 1.1.43
* libzip: version 1.11.3
* linuxptp: version 4.4
* lvm2: version 2.03.31
* lzip: version 1.25
* lzlib: version 1.15
* mcelog: version 204
* mesa: version 25.0.4
* mpfr: version 4.2.2
* nano: version 8.4
* ncurses: version 6.5\_20250419
* nettle: version 3.10.1
* nghttp2: version 1.65.0
* nghttp3: version 1.9.0
* noto-fonts-ttf: version 2025.03.01
* nvme-cli: version 2.13
* oniguruma: version 6.9.10
* openssh: version 10.0p1
* openssl: version 3.5.0
* ovmf: version stable202502
* pam: version 1.7.0
* pango: version 1.56.3
* parted: version 3.6
* patch: version 2.8
* pcre2: version 10.45
* perl: version 5.40.2
* php: version 8.3.19
* procps-ng: version 4.0.5
* qemu: version 9.2.3
* rsync: version 3.4.1
* samba: version 4.21.3
* shadow: version 4.17.4
* spice: version 0.15.2
* spirv-llvm-translator: version 20.1.0
* sqlite: version 3.49.1
* sysstat: version 12.7.7
* sysvinit: version 3.14
* talloc: version 2.4.3
* tdb: version 1.4.13
* tevent: version 0.16.2
* tree: version 2.2.1
* userspace-rcu: version 0.15.2
* utempter: version 1.2.3
* util-linux: version 2.41
* virglrenderer: version 1.1.1
* virtiofsd: version 1.13.1
* which: version 2.23
* wireless-regdb: version 2025.02.20
* wpa\_supplicant: version 2.11
* xauth: version 1.1.4
* xf86-input-synaptics: version 1.10.0
* xfsprogs: version 6.14.0
* xhost: version 1.0.10
* xinit: version 1.4.4
* xkeyboard-config: version 2.44
* xorg-server: version 21.1.16
* xterm: version 398
* xtrans: version 1.6.0
* xz: version 5.8.1
* zstd: version 1.5.7
Patches[](https://docs.unraid.net/unraid-os/release-notes/7.1.0#patches "Direct link to Patches")
---------------------------------------------------------------------------------------------------
No patches are currently available for this release.
View File
@@ -1,348 +0,0 @@
The Unraid webGUI is now responsive! The interface automatically adapts to different screen sizes, making it usable on mobile devices, tablets, and desktop monitors alike. The Unraid API is now built in, and the release also brings RAIDZ expansion, Ext2/3/4, NTFS and exFAT support, and the (optional) ability to login to the webGUI via SSO, among other features and bug fixes.
Note that some plugins may have visual issues in this release; please give plugin authors time to make adjustments. Plugin authors, please see this post describing [how to update your plugins to make them responsive](https://forums.unraid.net/topic/192172-responsive-webgui-plugin-migration-guide/).
Upgrading[](https://docs.unraid.net/unraid-os/release-notes/7.2.0#upgrading "Direct link to Upgrading")
---------------------------------------------------------------------------------------------------------
For step-by-step instructions, see [Upgrading Unraid](https://docs.unraid.net/unraid-os/system-administration/maintain-and-update/upgrading-unraid/). Questions about your [license](https://docs.unraid.net/unraid-os/troubleshooting/licensing-faq/#license-types--features)?
### Known issues[](https://docs.unraid.net/unraid-os/release-notes/7.2.0#known-issues "Direct link to Known issues")
#### Plugins[](https://docs.unraid.net/unraid-os/release-notes/7.2.0#plugins "Direct link to Plugins")
The Theme Engine, Dark Theme, Dynamix Date Time, and Flash Remount plugins are incompatible and will be automatically uninstalled, as will outdated versions of Unraid Connect.
Please upgrade all plugins, particularly Unraid Connect and the Nvidia driver, before updating. Note that some plugins may have visual issues in this release; please give plugin authors time to make adjustments.
For other known issues, see the [7.1.4 release notes](https://docs.unraid.net/unraid-os/release-notes/7.1.4/#known-issues).
### Rolling back
If rolling back earlier than 7.1.4, also see the [7.1.4 release notes](https://docs.unraid.net/unraid-os/release-notes/7.1.4/#rolling-back).
## Changes vs. [7.1.4](https://docs.unraid.net/unraid-os/release-notes/7.1.4/)
### Storage
#### ZFS RAIDZ expansion
You can now expand your single-vdev RAIDZ1/2/3 pools, one drive at a time. For detailed instructions, see [RAIDZ expansion](https://docs.unraid.net/unraid-os/release-notes/7.2.0/warn/).
* With the array running, on **_Main → Pool Devices_**, select the pool name to view the details
* In the **Pool Status** area, check for an **Upgrade Pool** button. If one exists, you'll need to click that before continuing. Note that upgrading the pool will limit your ability to downgrade to earlier releases of Unraid (7.1 should be OK, but not 7.0)
* Stop the array
* On **_Main → Pool Devices_**, add a slot to the pool
* Select the appropriate drive (must be at least as large as the smallest drive in the pool)
* Start the array
#### Enhancements
* Fix: There will now be an "invalid expansion" warning if the pool needs to be upgraded first
* Improvement: Better defaults for ZFS RAIDZ vdevs
#### Ext2/3/4, NTFS, and exFAT Support
Unraid now supports Ext2/3/4, NTFS, and exFAT drive formats in addition to XFS, BTRFS, and ZFS.
Use case: say you are a content creator with a box full of hard drives containing all of your historical videos. When first creating an array (or after running **_Tools → New Config_**), add all of your existing data drives (blank, or with data in a supported drive format) to the array. Any parity drives will be overwritten but the data drives will retain their data. You can enjoy parity protection, share them on the network, and take full advantage of everything Unraid has to offer.
Critical note: you can continue adding filled data drives to the array up until you start the array with a parity drive installed. Once a parity drive has been added, any new data drives will be zeroed out when they are added to the array.
To clarify, Unraid has always worked this way; what is new is that Unraid now supports additional drive formats.
Additionally, you can create single drive pools using the new formats as well.
* Improved the usability of the **File System Type** dropdown as the list of available options is growing
#### Warn about deprecated file systems
The **_Main_** page will now warn if any array or pool drives are formatted with ReiserFS; these drives need to be migrated to another filesystem ASAP as they will not be usable in a future release of Unraid (likely Unraid 7.3). Similarly, it will warn if there are drives formatted in a deprecated version of XFS; those need to be migrated before 2030. See [Converting to a new file system type](https://docs.unraid.net/unraid-os/using-unraid-to/manage-storage/file-systems/#converting-to-a-new-file-system-type) in the docs for details.
#### Other storage changes
* Improvement: Two-device ZFS pools are mirrored by default, but you can make them RAIDZ1 if you plan to expand that vdev in the future
* Improvement: Add **File system status** to **DeviceInfo** page, showing whether a drive is mounted/unmounted and empty/not empty
* Fix: Display issue on Main page when two pools are named similarly
* Fix: [glibc bug](https://github.com/openzfs/zfs/issues/17629) which could lead to data loss with ZFS
* Fix: BTRFS array disks with multiple filesystem signatures don't mount
* Fix: Resolved some issues for parity disks with existing 1MiB aligned partitions
* Fix: When stopping array, do not attempt 'umount' on array devices that are not mounted
* Improvement: Exclusive shares may be selected for NFS export
* Improvement: Disallow shares named `homes`, `global`, and `printers` (these have special meaning in Samba)
* Fix: Correct handling of case-insensitive share names
* Fix: Shares with invalid characters in names could not be deleted or modified
* Fix: Improvements to reading from/writing to SMB Security Settings
* Improvement: A top-level `lost+found` directory will not be shared
* Fix: In smb.conf, set `smb3 directory leases = no` to avoid issues with the current release of Samba
* Fix: Restore comments in default `/etc/modprobe.d/*.conf` files
* Fix: Windows fails to create a new folder for a share with primary=ZFS pool and secondary=EXT4 array disk
* Fix: New devices added to an existing array with valid parity should be repartitioned
* Fix: Do not spin down devices for which SMART self-test is in progress
* Fix: New array device not available for shares until the array is restarted
* Fix: ZFS allocation profile always shows one vdev only
### Networking
#### Other networking changes
* Feature: IPv6 Docker custom networks now support Unique Local Addresses (ULA) in addition to the more standard Global Unicast Addresses (GUA), assuming your router provides both subnets when the Unraid host gets an IPv6 address via DHCP or SLAAC. To use, assign a custom static IP from the appropriate subnet to the container.
* Fix: The **_Settings → Network Settings → Interface Rules_** page sometimes showed the wrong network driver (was just a display issue)
### VM Manager
* Feature: Save PCI hardware data, warn if hardware used by VM changes
* Feature: Support virtual sound cards in VMs
#### Other VM changes
* Improvement: Enhance multi-monitor support, automatically enabling spicevmc when needed
* Feature: Upgrade to noVNC v1.6
* Removed historical OpenElec and LibreElec VM templates
* Fix: VM Console did not work when user shares were disabled
* Fix: Don't allow single quotes in Domains storage path
* Fix: Change Windows 11 VM defaults
* Fix: Unable to view vdisk locations in languages other than English
* Fix: No capacity warning when editing a VM to add a 2nd vdisk
* Fix: Cdrom Bus: select IDE for i440 and SATA for q35
### Unraid API
The Unraid API is now built into Unraid! The new Notifications panel is the first major feature to use it; over time, the entire webGUI will be updated to use it.
The Unraid API is fully open source: [https://github.com/unraid/api](https://github.com/unraid/api). Get started in the [API docs](https://docs.unraid.net/API/).
The Unraid Connect plugin adds functionality which communicates with our cloud servers; it remains completely optional.
#### Other Unraid API changes
* dynamix.unraid.net 4.25.3 - [see changes](https://github.com/unraid/api/releases)
### WebGUI
#### Responsive CSS
The Unraid webGUI is now responsive! Most screens should now work as well on your phone as they do on your desktop monitor.
#### Login to the webGUI via SSO
Login to the Unraid webGUI using Single Sign-On (SSO) with your Unraid.net account or any other OIDC-compliant provider. For details on this _optional_ feature, see [OIDC Provider Setup](https://docs.unraid.net/API/oidc-provider-setup/) in the Docs.
#### Other WebGUI changes
* Feature: Add new notifications management view, access via the bell in the upper right corner of the webGUI
* Feature: Add progress indicator to Docker / Plugin / VM popup window
* Feature: Show countdown timer on login page when locked out due to too many incorrect login attempts
* Feature: Add _Force Install_ button to bypass version checks when manually installing plugins
* Feature: Add **_Tools → Open Terminal_** page; can access it by searching for "terminal". Can optionally remove Terminal button from toolbar via **_Settings → Display Settings → Show Terminal Button in header_**
* Feature: **_Users → Root → SSH authorized keys_** now supports more formats (thanks [wandercone](https://github.com/wandercone))
* Feature: Added a welcome screen for new systems, shown after setting the root password
* Fix: Re-enable smart test buttons after completion of test
* Fix: Prevent webGUI from crashing when dynamix.cfg is corrupt, log any issues
* Fix: `blob:` links shouldn't be considered external
* Feature: Differentiate between Intel E-Cores and P-Cores on the Dashboard
* Feature: Dashboard now gets CPU usage stats from the Unraid API
* Fix: Dashboard: More than 1TB of RAM was not reported correctly
* Chore: Change charting libraries on the Dashboard
* Fix: Prevent Firefox from showing resend/cancel popup when starting array (thanks [dkaser](https://github.com/dkaser))
* Fix: File Manager: stop spinner and show error when it fails (thanks [poroyo](https://github.com/poroyo))
* Feature: Speed up rendering of Plugins and Docker pages
* Fix: Prevent issues when clicking an external link from within a changelog
* Improvement: Show RAM and network speed in human-readable units
* Fix: On _**Settings → Display Settings → Font size**_, remove extreme options that break the webGUI
## Misc
* Feature: Do not execute `go` script when in safe mode, create `/boot/config/go.safemode` script if needed
* Improvement: Require authentication on `http://localhost`. This improves security and allows Tailscale Funnel to work with the webGUI. Note that when booting in GUI mode, you will now need to login again to access the webGUI.
* Feature: Add favicon and web app manifest support
* Feature: License key upgrades are installed automatically, without needing to restart the array
* Feature: Thunderbolt devices will be auto-authorized when connected
* Feature: Improvements to custom udev rules and scripts, at boot:
* `/boot/config/udev/*.rules` are copied to `/etc/udev/rules.d/`
* `/boot/config/udev/*.sh` are copied to `/etc/udev/scripts/` where they can be used by your custom udev rules
* Fix: Remove support for nonworking ipv6.hash.myunraid.net URLs
* Fix: Docker custom network creation failed when IPv6 was enabled
* Fix: Resolve issues with high CPU load due to nchan and lsof
* Improvement: Removed option to disable live updates on inactive browsers; should no longer be needed
* Improvement: Better messaging around mover and "dangling links"
* Fix: Prevent errors related to _searchLink_ when installing plugins
* Fix: PHP warnings importing WireGuard tunnels
* Improvement: _Europe/Kiev_ timezone renamed to _Europe/Kyiv_ to align with the IANA Time Zone Database
* Improvement: Enhance Discord notification agent; enable/disable the agent to get the updates (thanks [mgutt](https://github.com/mgutt))
* Fix: Further anonymization of diagnostics.zip
* Improvement: Protect WebGUI from fatal PHP errors
* Improvement: Adjust logging during plugin installs
* Fix: CPU Pinning for Docker containers could crash in certain instances
* Fix: Docker NAT failure due to missing br_netfilter
* Fix: Scheduled mover runs not logged
### Linux kernel
* version 6.12.54-Unraid
* built-in: CONFIG_EFIVAR_FS: EFI Variable filesystem
* CONFIG_INTEL_RAPL: Intel RAPL support via MSR interface
* CONFIG_NLS_DEFAULT: change from "iso8859-1" to "utf8"
* Added eMMC support:
* CONFIG_MMC: MMC/SD/SDIO card support
* CONFIG_MMC_BLOCK: MMC block device driver
* CONFIG_MMC_SDHCI: Secure Digital Host Controller Interface support
* CONFIG_MMC_SDHCI_PCI: SDHCI support on PCI bus
* CONFIG_MMC_SDHCI_ACPI: SDHCI support for ACPI enumerated SDHCI controllers
* CONFIG_MMC_SDHCI_PLTFM: SDHCI platform and OF driver helper
### Base distro updates
* aaa_glibc-solibs: version 2.42
* adwaita-icon-theme: version 48.1
* at-spi2-core: version 2.58.1
* bash: version 5.3.003
* bind: version 9.20.13
* btrfs-progs: version 6.17
* ca-certificates: version 20251003
* cifs-utils: version 7.4
* coreutils: version 9.8
* cryptsetup: version 2.8.1
* curl: version 8.16.0
* e2fsprogs: version 1.47.3
* ethtool: version 6.15
* exfatprogs: version 1.3.0
* fontconfig: version 2.17.1
* freetype: version 2.14.0
* gdbm: version 1.26
* gdk-pixbuf2: version 2.44.3
* git: version 2.51.1
* glib2: version 2.86.0
* glibc: version 2.42 (build 2)
* gnutls: version 3.8.10
* grub: version 2.12
* gtk+3: version 3.24.51
* harfbuzz: version 12.1.0
* intel-microcode: version 20250812
* iproute2: version 6.17.0
* inih: version 61
* inotify-tools: version 4.25.9.0
* iputils: version 20250605
* iw: version 6.17
* json-glib: version 1.10.8
* kbd: version 2.9.0
* kernel-firmware: version 20251018_8b4de42
* krb5: version 1.22.1
* less: version 685
* libXfixes: version 6.0.2
* libXpresent: version 1.0.2
* libXres: version 1.2.3
* libarchive: version 3.8.2
* libdrm: version 2.4.127
* libedit: version 20251016_3.1
* libevdev: version 1.13.5
* libffi: version 3.5.2
* libgpg-error: version 1.56
* libjpeg-turbo: version 3.1.2
* libnftnl: version 1.3.0
* libnvme: version 1.15
* libpng: version 1.6.50
* libssh: version 0.11.3
* libtiff: version 4.7.1
* libtirpc: version 1.3.7
* libunwind: version 1.8.3
* liburing: version 2.12
* libusb: version 1.0.29
* libwebp: version 1.6.0
* libvirt: version 11.7.0
* libxkbcommon: version 1.11.0
* libxml2: version 2.14.6
* libzip: version 1.11.4
* lsof: version 4.99.5
* lvm2: version 2.03.35
* mcelog: version 207
* mesa: version 25.2.5
* nano: version 8.6
* ncurses: version 6.5_20250816
* nettle: version 3.10.2
* nghttp2: version 1.67.1
* nghttp3: version 1.12.0
* noto-fonts-ttf: version 2025.10.01
* nvme-cli: version 2.15
* openssh: version 10.2p1
* openssl: version 3.5.4
* ovmf: version unraid202502
* p11-kit: version 0.25.10
* pam: version 1.7.1
* pcre2: version 10.46
* pango: version 1.56.4
* pciutils: version 3.14.0
* perl: version 5.42.0
* php: version 8.3.26-x86_64-1_LT with gettext extension
* pixman: version 0.46.4
* rclone: version 1.70.1-x86_64-1_SBo_LT.tgz
* readline: version 8.3.001
* samba: version 4.23.2
* shadow: version 4.18.0
* smartmontools: version 7.5
* spirv-llvm-translator: version 21.1.1
* sqlite: version 3.50.4
* sudo: version 1.9.17p2
* sysstat: version 12.7.8
* sysvinit: version 3.15
* tdb: version 1.4.14
* tevent: version 0.17.1
* userspace-rcu: version 0.15.3
* util-linux: version 2.41.2
* wayland: version 1.24.0
* wireguard-tools: version 1.0.20250521
* wireless-regdb: version 2025.10.07
* xdpyinfo: version 1.4.0
* xdriinfo: version 1.0.8
* xfsprogs: version 6.16.0
* xkeyboard-config: version 2.46
* xorg-server: version 21.1.18
* xterm: version 402
* zfs: version zfs-2.3.4_6.12.54_Unraid-x86_64-2_LT

File diff suppressed because it is too large

View File

@@ -1,569 +0,0 @@
# ExaAI Research Findings: Unraid API Ecosystem
**Date:** 2026-02-07
**Research Topic:** Unraid API Ecosystem - Architecture, Authentication, GraphQL Schema, Integrations, and MCP Server
**Specialist:** ExaAI Semantic Search
## Methodology
- **Total queries executed:** 22
- **Total unique URLs discovered:** 55+
- **Sources deep-read:** 14
- **Search strategy:** Multi-perspective semantic search covering official docs, source code analysis, community integrations, DeepWiki architecture analysis, feature roadmap, and third-party client libraries
---
## Key Findings
### 1. Unraid API Overview and Availability
The Unraid API provides a **GraphQL interface** for programmatic interaction with Unraid servers. Starting with **Unraid 7.2** (released 2025-10-29), the API comes **built into the operating system** with no plugin installation required ([source](https://docs.unraid.net/API/)).
Key capabilities include:
- Automation, monitoring, and integration through a modern, strongly-typed API
- Multiple authentication methods (API keys, session cookies, SSO/OIDC)
- Comprehensive system coverage
- Built-in developer tools including a GraphQL Sandbox
For **pre-7.2 versions**, the API is available via the Unraid Connect plugin from Community Applications. Users do **not** need to sign in to Unraid Connect to use the API locally ([source](https://docs.unraid.net/API/)).
The API was announced alongside Unraid 7.2.0 which also brought RAIDZ expansion, responsive WebGUI, and SSO login capabilities ([source](https://docs.unraid.net/unraid-os/release-notes/7.2.0/)).
### 2. Architecture and Technology Stack
The Unraid API is organized as a **pnpm workspace monorepo** containing 8+ packages ([source](https://deepwiki.com/unraid/api), [source](https://github.com/unraid/api)):
**Core Packages:**
| Package | Location | Purpose |
|---------|----------|---------|
| `@unraid/api` | `api/` | NestJS-based GraphQL server, service layer, OS integration |
| `@unraid/web` | `web/` | Vue 3 web application, Apollo Client integration |
| `@unraid/ui` | `unraid-ui/` | Reusable Vue components, web component builds |
| `@unraid/shared` | `packages/unraid-shared/` | Shared TypeScript types, utilities, constants |
| `unraid-api-plugin-connect` | `packages/unraid-api-plugin-connect/` | Remote access, UPnP, dynamic DNS |
**Backend Technology Stack:**
- **NestJS 11.1.6** with **Fastify 5.5.0** HTTP server
- **Apollo Server 4.12.2** for GraphQL
- **GraphQL 16.11.0** reference implementation
- **graphql-ws 6.0.6** for WebSocket subscriptions
- **TypeScript 5.9.2** (77.4% of codebase)
- **Redux Toolkit** for state management
- **Casbin 5.38.0** for RBAC authorization
- **PM2 6.0.8** for process management
- **dockerode 4.0.7** for Docker container management
- **@unraid/libvirt 2.1.0** for VM lifecycle control
- **systeminformation 5.27.8** for hardware metrics
- **chokidar 4.0.3** for file watching
**Frontend Technology Stack:**
- **Vue 3.5.20** with Composition API
- **Apollo Client 3.14.0** with WebSocket subscriptions
- **Pinia 3.0.3** for state management
- **TailwindCSS 4.1.12** for styling
- **Vite 7.1.3** as build tool
**Current Version:** 4.29.2 (core packages) ([source](https://deepwiki.com/unraid/api))
### 3. GraphQL API Layer
The API uses a **code-first approach** where the GraphQL schema is generated automatically from TypeScript decorators ([source](https://deepwiki.com/unraid/api/2.1-graphql-api-layer)):
- `@ObjectType()` - Defines GraphQL object types
- `@InputType()` - Specifies input types for mutations
- `@Resolver()` - Declares resolver classes
- `@Query()`, `@Mutation()`, `@Subscription()` - Operation decorators
**Schema Generation Pipeline:**
```
TypeScript Classes with Decorators
-> @nestjs/graphql processes decorators
-> Schema generated at runtime
-> @graphql-codegen extracts schema
-> TypedDocumentNode generated for frontend
-> Type-safe operations in Vue 3 client
```
**Key Configuration:**
- **autoSchemaFile**: Code-first generation enabled
- **introspection**: Always enabled (controlled by security guards)
- **subscriptions**: WebSocket via `graphql-ws` protocol
- **fieldResolverEnhancers**: Guards enabled for field-level authorization
- **transformSchema**: Applies permission checks and conditional field removal
The GraphQL Sandbox is accessible at `http://YOUR_SERVER_IP/graphql` when enabled through Settings -> Management Access -> Developer Options, or via CLI: `unraid-api developer --sandbox true` ([source](https://docs.unraid.net/API/how-to-use-the-api/)).
**Live API documentation** is available through Apollo GraphQL Studio for exploring the complete schema ([source](https://docs.unraid.net/API/how-to-use-the-api/)).
### 4. Authentication and Authorization
The API implements a **multi-layered security architecture** separating authentication from authorization ([source](https://deepwiki.com/unraid/api/2.2-authentication-and-authorization)):
#### Authentication Methods
1. **API Keys** - Programmatic access via `x-api-key` HTTP header (a request sketch follows this list)
- Created via WebGUI (Settings -> Management Access -> API Keys) or CLI
- Validated using `passport-http-header-strategy`
- JWT verification via `jose 6.0.13`
2. **Session Cookies** - Automatic when signed into WebGUI
3. **SSO/OIDC** - External identity providers via `openid-client 6.6.4`
- Supported providers: Unraid.net, Google, Microsoft/Azure AD, Keycloak, Authelia, Authentik, Okta
- Configuration via Settings -> Management Access -> API -> OIDC
- Two authorization modes: Simple (email domain/address) and Advanced (claim-based rules)
([source](https://docs.unraid.net/API/oidc-provider-setup/))
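To make the API-key method concrete, here is a minimal sketch of an authenticated query in Python, assuming the `requests` package; the server URL and key are placeholders, and the query uses the documented `array { state }` field:
```python
import requests

UNRAID_URL = "https://unraid.local/graphql"  # placeholder server address
API_KEY = "your-api-key"                     # placeholder; created via WebGUI or CLI

# Query shape follows the documented array-status examples.
query = "{ array { state } }"

resp = requests.post(
    UNRAID_URL,
    json={"query": query},
    headers={"x-api-key": API_KEY},  # API-key authentication header
    timeout=10,
    # verify=False  # only for self-signed certificates in dev setups
)
resp.raise_for_status()
print(resp.json()["data"]["array"]["state"])
```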
#### API Key Authorization Flow for Third-Party Apps
Applications can request API access via a self-service flow ([source](https://docs.unraid.net/API/api-key-app-developer-authorization-flow/)):
```
https://[unraid-server]/ApiKeyAuthorize?name=MyApp&scopes=docker:read,vm:*&redirect_uri=https://myapp.com/callback&state=abc123
```
**Scope Format:** `resource:action` pattern
- Resources: docker, vm, system, share, user, network, disk
- Actions: create, read, update, delete, * (full access)
#### Programmatic API Key Management
CLI-based CRUD operations for automation ([source](https://docs.unraid.net/API/programmatic-api-key-management/)):
```bash
# Create with granular permissions
unraid-api apikey --create \
--name "monitoring key" \
--permissions "DOCKER:READ_ANY,ARRAY:READ_ANY" \
--description "Read-only access" --json
# Delete
unraid-api apikey --delete --name "monitoring key"
```
**Available Roles:** ADMIN, CONNECT, VIEWER, GUEST
**Available Resources:** ACTIVATION_CODE, API_KEY, ARRAY, CLOUD, CONFIG, CONNECT, DOCKER, FLASH, INFO, LOGS, NETWORK, NOTIFICATIONS, OS, SERVICES, SHARE, VMS
**Available Actions:** CREATE_ANY, CREATE_OWN, READ_ANY, READ_OWN, UPDATE_ANY, UPDATE_OWN, DELETE_ANY, DELETE_OWN
#### RBAC Implementation
- **Casbin 5.38.0** with **nest-authz 2.17.0** for policy-based access control
- **accesscontrol 2.2.1** maintains the permission matrix
- **@UsePermissions() directive** provides field-level authorization by removing protected fields from the GraphQL schema dynamically
- **Rate limiting:** 100 requests per 10 seconds via `@nestjs/throttler 6.4.0`
- **Security headers:** `@fastify/helmet 13.0.1` with minimal CSP
### 5. CLI Reference
All commands follow the pattern: `unraid-api <command> [options]` ([source](https://docs.unraid.net/API/cli)):
| Command | Purpose |
|---------|---------|
| `unraid-api start [--log-level <level>]` | Start API service |
| `unraid-api stop [--delete]` | Stop API service |
| `unraid-api restart` | Restart API service |
| `unraid-api logs [-l <lines>]` | View logs (default 100 lines) |
| `unraid-api config` | Display configuration |
| `unraid-api switch-env [-e <env>]` | Toggle production/staging |
| `unraid-api developer [--sandbox true/false]` | Developer mode |
| `unraid-api apikey [options]` | API key management |
| `unraid-api sso add-user/remove-user/list-users` | SSO user management |
| `unraid-api sso validate-token <token>` | Token validation |
| `unraid-api report [-r] [-j]` | Generate system report |
Log levels: trace, debug, info, warn, error, fatal
### 6. Docker Container Management
The Docker Management Service provides comprehensive container lifecycle management through GraphQL ([source](https://deepwiki.com/unraid/api/2.4.2-notification-system)):
**Container Lifecycle Mutations:**
- `start(id)` - Start a stopped container
- `stop(id)` - Stop with 10-second timeout
- `pause(id)` / `unpause(id)` - Suspend/resume
- `removeContainer(id, options)` - Remove container and optionally images
- `updateContainer(id)` - Upgrade to latest image version
- `updateAllContainers()` - Batch update all containers
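A hedged sketch of invoking one of these lifecycle mutations over HTTP, using the `docker { start }` mutation shape documented in the GraphQL findings below; the server URL, API key, and container ID are placeholders, and the `PrefixedID` variable type should be confirmed via introspection:
```python
import requests

UNRAID_URL = "https://unraid.local/graphql"  # placeholder server address
API_KEY = "your-api-key"                     # placeholder key

# start(id) lifecycle mutation; the id is a PrefixedID string.
mutation = """
mutation StartContainer($id: PrefixedID!) {
  docker {
    start(id: $id) { id names state status }
  }
}
"""

resp = requests.post(
    UNRAID_URL,
    json={"query": mutation, "variables": {"id": "container-id-here"}},
    headers={"x-api-key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```
Mutations are idempotent on the server side, so retrying a `start` on an already-running container is safe.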
**Container Data Enrichment:**
- Canonical name extraction via `autostartService`
- Auto-start configuration details
- Port deduplication (IPv4/IPv6)
- LAN-accessible URL computation
- State normalization: RUNNING, PAUSED, EXITED
**Update Detection:**
- Compares local image digests against remote registry manifests
- Returns `UpdateStatus`: UP_TO_DATE, UPDATE_AVAILABLE, REBUILD_READY, UNKNOWN
- Legacy PHP script integration for status computation
**Real-Time Event Monitoring:**
- Watches `/var/run` for Docker socket via chokidar
- Filters: start, stop, die, kill, pause, unpause, restart, oom events
- Publishes to `PUBSUB_CHANNEL.INFO` for subscription updates
**Container Organizer:**
- Folder-based hierarchical organization
- Operations: createFolder, setFolderChildren, deleteEntries, moveEntriesToFolder, renameFolder
- Behind `ENABLE_NEXT_DOCKER_RELEASE` feature flag
**Statistics Streaming:**
- Real-time resource metrics via subscriptions
- CPU percent, memory usage/percent, network I/O, block I/O
- Auto-start/stop streams based on subscription count
### 7. VM Management
VM management uses the `@unraid/libvirt` package (v2.1.0) for QEMU/KVM integration ([source](https://github.com/unraid/libvirt), [source](https://deepwiki.com/unraid/api)):
- Domain state management (start, stop, pause, resume)
- Snapshot creation and restoration
- Domain XML inspection
- Retry logic (up to 2 minutes) for libvirt daemon initialization
Unraid 7.x enhancements include VM clones, snapshots, user-created VM templates, inline XML editing, and advanced GPU passthrough ([source](https://docs.unraid.net/unraid-os/manual/vm/vm-management/)).
### 8. Storage and Array Management
**Array Operations** (available via Python client library):
- `start_array()` / `stop_array()`
- `start_parity_check(correct)` / `pause_parity_check()` / `resume_parity_check()` / `cancel_parity_check()`
- `spin_up_disk(id)` / `spin_down_disk(id)`
**GraphQL Queries for Storage:**
```graphql
# Disk Information
{ disks { device name type size vendor temperature smartStatus } }
# Share Information
{ shares { name comment free size used } }
# Array Status (from official docs example)
{ array { state capacity { free used total } disks { name size status temp } } }
```
([source](https://deepwiki.com/domalab/unraid-api-client/4.3-network-and-storage-queries), [source](https://docs.unraid.net/API/how-to-use-the-api/))
**ZFS Support:** Unraid supports ZFS pools with automatic data integrity, built-in RAID (mirrors, RAIDZ), snapshots, and send/receive ([source](https://docs.unraid.net/unraid-os/advanced-configurations/optimize-storage/zfs-storage/)).
### 9. Network Management
**Network Query Fields:**
| Field | Type | Description |
|-------|------|-------------|
| iface | String | Interface identifier |
| ifaceName | String | Interface name |
| ipv4/ipv6 | String | IP addresses |
| mac | String | MAC address |
| operstate | String | Operational state (up/down) |
| type | String | Interface type |
| duplex | String | Duplex mode |
| speed | Number | Interface speed |
| accessUrls | Array | Access URLs for the interface |
```graphql
{ network { iface ifaceName ipv4 ipv6 mac operstate type duplex speed accessUrls { type name ipv4 ipv6 } } }
```
([source](https://deepwiki.com/domalab/unraid-api-client/4.3-network-and-storage-queries))
### 10. Notification System
The Unraid API exposes a notification system with the following features ([source](https://deepwiki.com/unraid/api)):
- File-based notifications stored in `/unread/` and `/archive/` directories
- GraphQL queries for notification overview (counts by type)
- Notification listing with filters
- Notification agents: email, Discord, Slack (built-in); custom agents via scripts
Community solutions for additional notification targets include ntfy.sh, Matrix, and webhook-based approaches ([source](https://forums.unraid.net/topic/88464-webhook-notification-method/), [source](https://lder.dev/posts/ntfy-Notifications-With-unRAID/)).
### 11. WebSocket Subscriptions (Real-Time)
The API implements real-time subscriptions via the `graphql-ws` protocol (v6.0.6) ([source](https://deepwiki.com/unraid/api/2.1-graphql-api-layer)):
- **PubSub Engine:** `graphql-subscriptions@3.0.0` for event publishing
- **Transport:** WebSocket via `graphql-ws` protocol
- **Trigger:** Redux store updates from file watchers propagate to subscribed clients
- **Available subscriptions include:**
- Container state changes
- Container statistics (CPU, memory, I/O)
- System metrics updates
- Array status changes
The subscription system is event-driven: file changes on disk (detected by chokidar) -> Redux store update -> PubSub event -> WebSocket push to clients.
### 12. MCP Server Integrations
**jmagar/unraid-mcp** (this project) is the primary MCP server for Unraid ([source](https://glama.ai/mcp/servers/@jmagar/unraid-mcp), [source](https://mcpmarket.com/server/unraid)):
- Python-based MCP server using FastMCP framework
- 10 tools with 90 actions for comprehensive Unraid management
- Read-only access by default for safety
- Listed on Glama, MCP Market, MCPServers.com, LangDB, UBOS, JuheAPI
- 21 GitHub stars
- Communicates via stdio transport
**Alternative MCP implementations:**
- `lwsinclair/unraid-mcp` - Another MCP implementation ([source](https://github.com/lwsinclair/unraid-mcp))
- `ruaan-deysel/unraid-management-agent` - Go-based plugin with REST API + WebSocket + MCP integration ([source](https://github.com/ruaan-deysel/unraid-management-agent))
### 13. Third-Party Client Libraries
#### Python Client: `unraid-api` (PyPI)
**Author:** DomaLab (Ruaan Deysel)
**Version:** 1.3.1 (as of Jan 2026)
**Requirements:** Python 3.11+, Unraid 7.1.4+, API v4.21.0+
Features ([source](https://github.com/domalab/unraid-api-client), [source](https://unraid-api.domalab.net/)):
- Async/await with aiohttp
- Home Assistant ready (accepts external ClientSession)
- Pydantic models for all responses
- SSL auto-discovery
- Redirect handling for myunraid.net
**Supported Operations:**
- Docker: start/stop/restart containers
- VMs: start/stop/force_stop/pause/resume
- Array: start/stop, parity check (start/pause/resume/cancel), disk spin up/down
- System: metrics, shares, UPS, services, plugins, log files, notifications
- Custom GraphQL queries
#### Home Assistant Integration
`chris-mc1/unraid_api` (60 stars) - Full Home Assistant integration using the local GraphQL API ([source](https://github.com/chris-mc1/unraid_api)):
- Monitors array state, disk status, temperatures
- Docker container status
- Network information
- HACS compatible
#### Homey Smart Home
Unraid API integration available for the Homey smart home platform ([source](https://homey.app/no-no/app/community.unraid.api/Unraid-API/)).
#### Legacy APIs (Pre-GraphQL)
- `ElectricBrainUK/UnraidAPI` (127 stars) - Original Node.js API using web scraping ([source](https://github.com/ElectricBrainUK/UnraidAPI))
- `BoKKeR/UnraidAPI-RE` (68 stars) - Reverse-engineered Node.js API ([source](https://github.com/BoKKeR/UnraidAPI-RE))
- `ridenui/unraid` - TypeScript client via SSH ([source](https://github.com/ridenui/unraid))
### 14. Unraid Connect and Remote Access
Unraid Connect provides cloud-enabled server management ([source](https://docs.unraid.net/connect/), [source](https://unraid.net/connect)):
- **Dynamic Remote Access:** Toggle on/off server accessibility via UPnP
- **Server Management:** Manage multiple servers from Connect web UI
- **Deep Linking:** Links to relevant WebGUI sections
- **Online Flash Backup:** Cloud-based configuration backups
- **Real-time Monitoring:** Server health and resource usage monitoring
- **Notifications:** Server health, storage status, critical events
The Connect plugin (`unraid-api-plugin-connect`) handles remote access, UPnP, dynamic DNS, and Mothership API communication ([source](https://deepwiki.com/unraid/api)).
### 15. Plugin Architecture
The API supports a plugin system for extending functionality ([source](https://deepwiki.com/unraid/api)):
- Plugins are NPM packages implementing the `UnraidPlugin` interface
- Access to NestJS dependency injection
- Can extend the GraphQL schema
- Dynamic loading via `PluginLoaderService` at runtime
- `@unraid/create-api-plugin` CLI scaffolding tool available
- Plugin documentation at `api/docs/developer/api-plugins.md`
### 16. Feature Bounty Program
Unraid launched a **Feature Bounty Program** in September 2025 ([source](https://unraid.net/blog/api-feature-bounty-program)):
- Community developers implement specific API features for monetary rewards
- Bounty board: `github.com/orgs/unraid/projects/3/views/1`
- Accelerates feature development beyond core team capacity
**Notable Open Bounty: System Temperature Monitoring** ([source](https://github.com/unraid/api/issues/1597)):
- Current API provides only disk temperatures via smartctl
- Proposed comprehensive monitoring: CPU, motherboard, GPU, NVMe, chipset
- Proposed GraphQL schema with TemperatureSensor, TemperatureSummary types
- Would use lm-sensors, smartctl, nvidia-smi, IPMI
### 17. Monitoring and Grafana Integration
While the Unraid API does not natively expose Prometheus metrics, the community has established monitoring patterns ([source](https://unraid.net/blog/prometheus)):
- **Prometheus Node Exporter** plugin for Unraid
- **Grafana dashboards** available:
- Unraid System Dashboard V2 (ID: 7233) ([source](https://grafana.com/grafana/dashboards/7233-unraid-system-dashboard-v2/))
- Unraid UPS Monitoring (ID: 19243) ([source](https://grafana.com/grafana/dashboards/19243-unraid-ups-monitoring/))
- **cAdvisor** for container-level metrics
### 18. Development and Contribution
**Development Environment Requirements:**
- Node.js 22.x (enforced)
- pnpm 10.15.0
- Bash, Docker, libvirt, jq
**Key Development Commands:**
```bash
pnpm dev # All dev servers in parallel
pnpm build # Production builds
pnpm codegen # Generate GraphQL types
pnpm test # Run test suites (Vitest)
pnpm lint # ESLint
pnpm type-check # TypeScript checking
```
**Deployment to Unraid:**
```bash
pnpm unraid:deploy <SERVER_IP>
```
**CI/CD Pipeline:**
1. PR previews with unique build URLs
2. Staging deployment for merged PRs
3. Production releases via release-please with semantic versioning
([source](https://github.com/unraid/api/blob/main/CLAUDE.md))
---
## Expert Opinions and Analysis
The DeepWiki auto-generated documentation characterizes the Unraid API as "a modern GraphQL API and web interface for managing Unraid servers" that "replaces portions of the legacy PHP-based WebGUI with a type-safe, real-time API built on NestJS and Vue 3, while maintaining backward compatibility through hybrid integration" ([source](https://deepwiki.com/unraid/api)).
The Feature Bounty Program blog post indicates Unraid is actively investing in the API ecosystem: "The new Unraid API has already come a long way as a powerful, open-source toolkit that unlocks endless possibilities for automation, integrations, and third-party applications" ([source](https://unraid.net/blog/api-feature-bounty-program)).
---
## Contradictions and Debates
1. **Code-first vs Schema-first:** The CLAUDE.md mentions "GraphQL schema-first approach with code generation" while the DeepWiki analysis describes a "code-first approach with NestJS decorators that generate the GraphQL schema." The DeepWiki analysis appears more accurate based on the `autoSchemaFile` configuration and NestJS decorator usage.
2. **File Manager API:** No dedicated file browser/upload/download API was found in the GraphQL schema. File operations appear to be handled through the legacy PHP WebGUI rather than the new API.
3. **RClone via API:** While our MCP server project has RClone tools, these appear to interface with rclone config files rather than a native GraphQL API for cloud storage management.
---
## Data Points and Statistics
| Metric | Value | Source |
|--------|-------|--------|
| Unraid API native since | v7.2.0 (2025-10-29) | [docs.unraid.net](https://docs.unraid.net/unraid-os/release-notes/7.2.0/) |
| GitHub stars (official repo) | 86 | [github.com/unraid/api](https://github.com/unraid/api) |
| Total releases | 102 | [github.com/unraid/api](https://github.com/unraid/api) |
| Codebase language | TypeScript 77.4%, Vue 11.8%, PHP 5.6% | [github.com/unraid/api](https://github.com/unraid/api) |
| Current package version | 4.29.2 | [deepwiki.com](https://deepwiki.com/unraid/api) |
| Rate limit | 100 req/10 sec | [deepwiki.com](https://deepwiki.com/unraid/api/2.2-authentication-and-authorization) |
| Python client PyPI version | 1.3.1 | [pypi.org](https://pypi.org/project/unraid-api/1.3.1/) |
| Home Assistant integration stars | 60 | [github.com](https://github.com/chris-mc1/unraid_api) |
| jmagar/unraid-mcp stars | 21 | [github.com](https://github.com/jmagar/unraid-mcp) |
---
## Gaps Identified
1. **Full GraphQL Schema Dump:** No publicly accessible introspection dump or SDL file was found. The live schema is only available via the GraphQL Sandbox on a running Unraid server.
2. **File Manager API:** No evidence of file browse/upload/download GraphQL mutations. This appears to remain in the PHP WebGUI layer.
3. **Temperature Monitoring:** Currently limited to disk temperatures via smartctl. Comprehensive temperature monitoring is an open feature bounty (not yet implemented).
4. **Parity/Array Operation Mutations:** While the Python client library implements `start_array()`/`stop_array()`, the specific GraphQL mutations and their schemas were not found in public documentation.
5. **RClone GraphQL API:** The extent of rclone integration through the GraphQL API versus legacy integration is unclear.
6. **Flash Backup API:** Flash backups appear to be handled through Unraid Connect (cloud-based) rather than a local GraphQL API.
7. **Network Configuration Mutations:** While network queries return interface data, mutations for VLAN/bonding configuration were not found in the API documentation.
8. **WebSocket Subscription Schema:** The specific subscription types and their exact GraphQL definitions are not publicly documented outside the running API.
9. **Plugin API Documentation:** The plugin developer guide (`api/docs/developer/api-plugins.md`) was not publicly accessible outside the repository.
10. **Rate Limiting Details:** Only the default rate (100 req/10 sec) was found; per-endpoint or per-role rate limits were not documented.
---
## All URLs Discovered
### Primary Sources (Official Unraid Documentation)
- [Welcome to Unraid API](https://docs.unraid.net/API/) - API landing page
- [Using the Unraid API](https://docs.unraid.net/API/how-to-use-the-api/) - Usage guide with examples
- [API Key Authorization Flow](https://docs.unraid.net/API/api-key-app-developer-authorization-flow/) - Third-party auth flow
- [Programmatic API Key Management](https://docs.unraid.net/API/programmatic-api-key-management/) - CLI key management
- [CLI Reference](https://docs.unraid.net/API/cli) - Full CLI command reference
- [OIDC Provider Setup](https://docs.unraid.net/API/oidc-provider-setup/) - SSO configuration
- [Unraid 7.2.0 Release Notes](https://docs.unraid.net/unraid-os/release-notes/7.2.0/) - Release notes
- [Automated Flash Backup](https://docs.unraid.net/connect/flash-backup/) - Flash backup docs
- [Unraid Connect Overview](https://docs.unraid.net/connect/) - Connect service
- [Remote Access](https://docs.unraid.net/unraid-connect/remote-access/) - Remote access docs
- [Unraid Connect Setup](https://docs.unraid.net/unraid-connect/overview-and-setup/) - Setup guide
- [Arrays Overview](https://docs.unraid.net/unraid-os/using-unraid-to/manage-storage/array/overview/) - Storage management
- [ZFS Storage](https://docs.unraid.net/unraid-os/advanced-configurations/optimize-storage/zfs-storage/) - ZFS guide
- [SMART Reports](https://docs.unraid.net/unraid-os/system-administration/monitor-performance/smart-reports-and-disk-health/) - Disk health
- [User Management](https://docs.unraid.net/unraid-os/system-administration/secure-your-server/user-management/) - User system
- [Array Health](https://docs.unraid.net/unraid-os/using-unraid-to/manage-storage/array/array-health-and-maintenance/) - Parity/maintenance
- [VM Management](https://docs.unraid.net/unraid-os/manual/vm/vm-management/) - VM setup guide
- [Plugins](https://docs.unraid.net/unraid-os/using-unraid-to/customize-your-experience/plugins/) - Plugin overview
### Official Source Code
- [unraid/api GitHub](https://github.com/unraid/api) - Official monorepo (86 stars)
- [unraid/api CLAUDE.md](https://github.com/unraid/api/blob/main/CLAUDE.md) - Development guidelines
- [unraid/libvirt GitHub](https://github.com/unraid/libvirt) - Libvirt bindings
- [unraid/api Issues](https://github.com/unraid/api/issues) - Issue tracker
- [Temperature Monitoring Bounty](https://github.com/unraid/api/issues/1597) - Feature bounty issue
- [API Feature Bounty Program](https://unraid.net/blog/api-feature-bounty-program) - Program announcement
- [Unraid Connect](https://unraid.net/connect) - Connect product page
- [Connect Dashboard](https://connect.myunraid.net/) - Live Connect dashboard
### Architecture Analysis (DeepWiki)
- [Unraid API Overview](https://deepwiki.com/unraid/api) - Full architecture
- [Backend API System](https://deepwiki.com/unraid/api/2-api-server) - Backend details
- [GraphQL API Layer](https://deepwiki.com/unraid/api/2.1-graphql-api-layer) - GraphQL implementation
- [Authentication and Authorization](https://deepwiki.com/unraid/api/2.2-authentication-and-authorization) - Auth system
- [Core Services](https://deepwiki.com/unraid/api/2.4-docker-integration) - Docker/services
- [Docker Management Service](https://deepwiki.com/unraid/api/2.4.2-notification-system) - Docker details
- [Configuration Files](https://deepwiki.com/unraid/api/5.2-connect-settings-and-remote-access) - Config system
### Community Client Libraries
- [domalab/unraid-api-client GitHub](https://github.com/domalab/unraid-api-client) - Python client
- [unraid-api PyPI](https://pypi.org/project/unraid-api/1.3.1/) - PyPI package
- [Unraid API Documentation (DomaLab)](https://unraid-api.domalab.net/) - Python docs
- [Network and Storage Queries](https://deepwiki.com/domalab/unraid-api-client/4.3-network-and-storage-queries) - Query reference
- [chris-mc1/unraid_api GitHub](https://github.com/chris-mc1/unraid_api) - Home Assistant integration (60 stars)
- [Homey Unraid API](https://homey.app/no-no/app/community.unraid.api/Unraid-API/) - Homey integration
### MCP Server Listings
- [jmagar/unraid-mcp GitHub](https://github.com/jmagar/unraid-mcp) - This project
- [Glama MCP Listing](https://glama.ai/mcp/servers/@jmagar/unraid-mcp) - Glama listing
- [MCP Market Listing](https://mcpmarket.com/server/unraid) - MCP Market
- [MCPServers.com Listing](https://mcpservers.com/servers/jmagar-unraid) - MCPServers.com
- [LangDB Listing](https://langdb.ai/app/mcp-servers/unraid-mcp-server-8605b018-ce29-48d5-8132-48cf0792501f) - LangDB
- [UBOS Listing](https://ubos.tech/mcp/unraid-mcp-server/) - UBOS
- [JuheAPI Listing](https://www.juheapi.com/mcp-servers/jmagar/unraid-mcp) - JuheAPI
- [AIBase Listing](https://mcp.aibase.com/server/1916341265568079874) - AIBase
- [lwsinclair/unraid-mcp GitHub](https://github.com/lwsinclair/unraid-mcp) - Alternative MCP
### Alternative/Legacy APIs
- [ruaan-deysel/unraid-management-agent](https://github.com/ruaan-deysel/unraid-management-agent) - Go REST+WebSocket (5 stars)
- [BoKKeR/UnraidAPI-RE](https://github.com/BoKKeR/UnraidAPI-RE) - Node.js API (68 stars)
- [ElectricBrainUK/UnraidAPI](https://github.com/ElectricBrainUK/UnraidAPI) - Original API (127 stars)
- [ridenui/unraid](https://github.com/ridenui/unraid) - TypeScript SSH client (3 stars)
### Monitoring Integration
- [Unraid Prometheus Guide](https://unraid.net/blog/prometheus) - Official guide
- [Grafana UPS Dashboard](https://grafana.com/grafana/dashboards/19243-unraid-ups-monitoring/) - Dashboard 19243
- [Grafana System Dashboard V2](https://grafana.com/grafana/dashboards/7233-unraid-system-dashboard-v2/) - Dashboard 7233
- [Prometheus/Grafana Forum Thread](https://forums.unraid.net/topic/77593-monitoring-unraid-with-prometheus-grafana-cadvisor-nodeexporter-and-alertmanager/) - Community guide
### Community Discussion
- [Webhook Notification Forum Thread](https://forums.unraid.net/topic/88464-webhook-notification-method/) - Notification customization
- [Matrix Notification Agent](https://forums.unraid.net/topic/122107-matrix-notification-agent/) - Matrix integration
- [ntfy.sh Notifications](https://lder.dev/posts/ntfy-Notifications-With-unRAID/) - ntfy.sh setup
- [MCP HomeLab Tutorial (YouTube)](https://www.youtube.com/watch?v=AydDDYn09QA) - Christian Lempa MCP tutorial
- [Build with the Unraid API (YouTube)](https://www.youtube.com/shorts/0JJQdFfh4e0) - Short video

View File

@@ -1,824 +0,0 @@
# Unraid API Research Findings
**Date:** 2026-02-07
**Research Topic:** Unraid GraphQL API, Connect Cloud Service, MCP Integration
**Specialist:** NotebookLM Deep Research
**Notebook ID:** 4e217d5d-d68b-4bfa-881a-42f7c01d3e44
## Research Summary
- **Deep research mode:** deep (47 web sources discovered)
- **Sources indexed:** 51 ready / 77 total (26 error)
- **Q&A questions asked:** 23 comprehensive questions with follow-ups
- **Deep research status:** completed
- **Key source categories:** Official Unraid docs, GitHub repos, community forums, GraphQL references, third-party integrations
---
## Table of Contents
1. [Unraid API Overview](#1-unraid-api-overview)
2. [Architecture and Deployment](#2-architecture-and-deployment)
3. [Authentication and Security](#3-authentication-and-security)
4. [GraphQL Schema and Endpoints](#4-graphql-schema-and-endpoints)
5. [WebSocket Subscriptions](#5-websocket-subscriptions)
6. [Unraid Connect Cloud Service](#6-unraid-connect-cloud-service)
7. [Version History and API Changes](#7-version-history-and-api-changes)
8. [Community Integrations](#8-community-integrations)
9. [Known Issues and Limitations](#9-known-issues-and-limitations)
10. [API Roadmap and Future Features](#10-api-roadmap-and-future-features)
11. [Recommendations for unraid-mcp](#11-recommendations-for-unraid-mcp)
12. [Source Bibliography](#12-source-bibliography)
---
## 1. Unraid API Overview
The **Unraid API** is a programmatic interface that provides automation, monitoring, and integration capabilities for Unraid servers. It uses a **GraphQL** interface, offering a modern, strongly-typed method for developers and third-party applications to interact directly with the Unraid operating system.
### Key Facts
- **Protocol:** GraphQL (queries, mutations, subscriptions)
- **Endpoint:** `http(s)://[SERVER_IP]/graphql`
- **Authentication:** API Keys, Session Cookies, SSO/OIDC
- **Native since:** Unraid 7.2 (no plugin required)
- **Pre-7.2:** Requires Unraid Connect plugin installation
The API exposes nearly all management functions available in the Unraid WebGUI, including server management, storage operations, Docker/VM lifecycle, remote access, and backup capabilities.
**Sources:**
- [Welcome to Unraid API | Unraid Docs](https://docs.unraid.net/API/) -- Official API landing page [Tier: Primary]
- [Using the Unraid API](https://docs.unraid.net/API/how-to-use-the-api/) -- Official usage guide [Tier: Primary]
---
## 2. Architecture and Deployment
### Monorepo Structure
The Unraid API is developed in the [unraid/api](https://github.com/unraid/api) monorepo which houses:
| Directory | Purpose |
|-----------|---------|
| `api/` | GraphQL backend server (TypeScript/Node.js) |
| `web/` | Frontend interface (Nuxt/Vue.js) |
| `plugin/` | Unraid plugin packaging (.plg format) |
| `packages/` | Shared internal libraries |
| `unraid-ui/` | UI component library |
| `scripts/` | Build and maintenance utilities |
### Technology Stack
| Component | Technology |
|-----------|------------|
| Primary language | TypeScript (77.4%) |
| Frontend | Vue.js (11.8%) via Nuxt |
| Runtime | Node.js v22 |
| Package manager | pnpm v9.0+ |
| API protocol | GraphQL |
| Dev environment | Nix (optional), Docker |
| Build tool | Justfile |
### Deployment Modes
1. **Native (Unraid 7.2+):** API is built into the OS, starts automatically with the system. Managed via **Settings > Management Access > API**.
2. **Plugin (Pre-7.2):** Requires installing the Unraid Connect plugin from Community Applications. Installing the plugin on 7.2+ provides access to newer API features before they are merged into the stable OS release.
3. **Development:** Supports local Docker builds (`pnpm run docker:build-and-run` on port 5858), direct deployment to a running server (`pnpm unraid:deploy <SERVER_IP>`), and hot-reloading dev servers (API port 3001, Web port 3000).
### Integration with Nginx
The API integrates with Unraid's Nginx web server. Nginx acts as a reverse proxy, handling external requests on standard web ports (80/443) and routing `/graphql` traffic to the internal API backend. This means the API shares the same IP and port as the WebGUI.
**Sources:**
- [GitHub - unraid/api: Unraid API / Connect / UI Monorepo](https://github.com/unraid/api) [Tier: Official]
- [api/api/docs/developer/development.md](https://github.com/unraid/api/blob/main/api/docs/developer/development.md) [Tier: Official]
---
## 3. Authentication and Security
### Authentication Methods
The Unraid API supports three primary authentication mechanisms:
1. **API Keys** -- Standard method for programmatic access
- Created via WebGUI: **Settings > Management Access > API Keys**
- Created via CLI: `unraid-api apikey --create --name "mykey" --roles ADMIN --json`
- Sent in HTTP header: `x-api-key: YOUR_API_KEY`
- Displayed only once upon creation
2. **Session Cookies** -- Used for browser-based WebGUI access
- Automatic when logged into WebGUI
- Used internally by the GraphQL Sandbox
3. **SSO / OIDC (OpenID Connect)** -- Enterprise identity management
- Added in API v4.0.0
- Supports external identity providers
### API Key Authorization Flow (OAuth-like)
For third-party applications, Unraid provides an OAuth-like authorization flow:
1. App redirects user to: `https://[server]/ApiKeyAuthorize?name=MyApp&scopes=docker:read,vm:*&redirect_uri=https://myapp.com/callback&state=abc123`
2. User authenticates (if not already logged in)
3. User sees consent screen with requested permissions
4. Upon approval, API key is created and shown to user once
5. If `redirect_uri` provided, user is redirected with `?api_key=xxx&state=abc123`
**Query Parameters:**
| Parameter | Required | Description |
|-----------|----------|-------------|
| `name` | Yes | Application name |
| `scopes` | Yes | Comma-separated permissions (e.g., `docker:read,vm:*`) |
| `redirect_uri` | No | HTTPS callback URL (localhost allowed for dev) |
| `state` | No | CSRF prevention token |
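Putting those parameters together, a small sketch of constructing the authorization URL with the Python standard library; the application name, scopes, and callback values are illustrative:
```python
from urllib.parse import urlencode

server = "https://unraid.local"  # placeholder server address

params = {
    "name": "MyApp",                               # required: application name
    "scopes": "docker:read,vm:*",                  # required: comma-separated scopes
    "redirect_uri": "https://myapp.com/callback",  # optional HTTPS callback
    "state": "abc123",                             # optional CSRF prevention token
}

# The user is sent here to authenticate and approve the requested permissions.
authorize_url = f"{server}/ApiKeyAuthorize?{urlencode(params)}"
print(authorize_url)
```
`urlencode` percent-encodes the scope separators, which is equivalent to the literal URL shown above.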
### Programmatic API Key Management (CLI)
```bash
# Create a key with admin role
unraid-api apikey --create --name "workflow key" --roles ADMIN --json
# Create a key with specific permissions
unraid-api apikey --create --name "monitor" --permissions "DOCKER:READ_ANY,ARRAY:READ_ANY" --json
# Overwrite existing key
unraid-api apikey --create --name "workflow key" --roles ADMIN --overwrite --json
# Delete a key
unraid-api apikey --delete --name "workflow key"
```
### Roles and Permissions
**Roles (pre-defined access levels):**
| Role | Description |
|------|-------------|
| `ADMIN` | Full system access (all permissions) |
| `VIEWER` | Read-only access |
| `GUEST` | Limited access |
| `CONNECT` | Unraid Connect cloud features |
**Permission Scope Format:** `RESOURCE:ACTION`
**Available Resources:**
- Core: `ACTIVATION_CODE`, `API_KEY`, `CONFIG`, `CUSTOMIZATIONS`, `INFO`, `LOGS`, `OS`, `REGISTRATION`, `VARS`, `WELCOME`
- Storage: `ARRAY`, `DISK`, `FLASH`
- Services: `DOCKER`, `VMS`, `SERVICES`, `NETWORK`
- Management: `DASHBOARD`, `DISPLAY`, `ME`, `NOTIFICATIONS`, `OWNER`, `PERMISSION`, `SHARE`, `USER`
- Cloud: `CLOUD`, `CONNECT`, `CONNECT__REMOTE_ACCESS`, `ONLINE`, `SERVERS`
**Available Actions:**
- `CREATE_ANY`, `CREATE_OWN`
- `READ_ANY`, `READ_OWN`
- `UPDATE_ANY`, `UPDATE_OWN`
- `DELETE_ANY`, `DELETE_OWN`
- `*` (wildcard for all actions)
### SSL/TLS Certificate Handling
| Scenario | Recommendation |
|----------|---------------|
| Self-signed cert (local IP) | Either trust the specific CA or disable SSL verification (dev only) |
| `myunraid.net` cert (Let's Encrypt) | SSL verification works normally; use the `myunraid.net` URL |
| Strict SSL mode | Enforces HTTPS for all connections including local |
For self-signed certs in client code:
```bash
curl -k "https://your-unraid-server/graphql" -H "x-api-key: YOUR_KEY"
```
**Sources:**
- [API key authorization flow | Unraid Docs](https://docs.unraid.net/API/api-key-app-developer-authorization-flow/) [Tier: Primary]
- [Programmatic API key management | Unraid Docs](https://docs.unraid.net/API/programmatic-api-key-management/) [Tier: Primary]
---
## 4. GraphQL Schema and Endpoints
### Endpoint URLs
| Purpose | URL |
|---------|-----|
| GraphQL API | `http(s)://[SERVER_IP]/graphql` |
| GraphQL Sandbox | `http(s)://[SERVER_IP]/graphql` (must be enabled) |
| WebSocket (subscriptions) | `ws(s)://[SERVER_IP]/graphql` |
| Internal dev API | `http://localhost:3001/graphql` |
### Enabling the GraphQL Sandbox
Two methods:
1. **WebGUI:** Settings > Management Access > Developer Options > Toggle GraphQL Sandbox to "On"
2. **CLI:** `unraid-api developer --sandbox true`
Then access at `http://YOUR_SERVER_IP/graphql` to explore the schema via Apollo Sandbox.
### Query Types
#### System Information (`info`)
```graphql
query {
info {
os { platform distro release uptime hostname arch kernel }
cpu { manufacturer brand cores threads }
memory { layout { bank type clockSpeed manufacturer } }
baseboard { manufacturer model version serial }
system { manufacturer model version serial uuid }
versions { kernel docker unraid node }
apps { installed started }
machineId
time
}
}
```
#### Array Status (`array`)
```graphql
query {
array {
id
state
capacity {
kilobytes { free used total }
disks { free used total }
}
boot { id name device size status temp fsType }
parities { id name device size status temp numErrors }
disks { id name device size status temp numReads numWrites numErrors }
caches { id name device size status temp }
}
}
```
#### Docker Containers (`docker`)
```graphql
query {
docker {
containers(skipCache: false) {
id names image state status autoStart
ports { ip privatePort publicPort type }
labels
networkSettings
mounts
}
}
}
```
#### Virtual Machines (`vms`)
```graphql
query {
vms {
id
domains {
id name state uuid
}
}
}
```
#### Network (`network`)
```graphql
query {
network {
id
accessUrls { type name ipv4 ipv6 }
}
}
```
#### Registration (`registration`)
```graphql
query {
registration {
id type state expiration updateExpiration
keyFile { location contents }
}
}
```
#### Settings (`settings`)
```graphql
query {
settings {
unified { values }
}
}
```
#### System Variables (`vars`)
```graphql
query {
vars {
id version name timeZone security workgroup
useSsl port portssl
shareSmbEnabled shareNfsEnabled
mdState mdVersion
csrfToken
# Many more fields available -- some have Int overflow issues
}
}
```
#### RClone Remotes (`rclone`)
```graphql
query {
rclone {
remotes { name type parameters config }
configForm(formOptions: { providerType: "s3" }) {
id dataSchema uiSchema
}
}
}
```
#### Notifications
```graphql
query {
notifications {
id subject message importance unread
}
}
```
#### Shares
```graphql
query {
shares {
name comment free used
}
}
```
### Mutation Types
#### Docker Container Management
```graphql
mutation {
docker {
start(id: $id) { id names state status }
stop(id: $id) { id names state status }
}
}
```
- Uses `PrefixedID` type for container identification
- Mutations are idempotent (starting an already-running container returns success)
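A minimal Python sketch of the `start` mutation with variables (URL, key, and container ID are placeholders; the `PrefixedID!` variable type is assumed from the schema notes below):
```python
import httpx

START_CONTAINER = """
mutation StartContainer($id: PrefixedID!) {
  docker {
    start(id: $id) { id names state status }
  }
}
"""

resp = httpx.post(
    "https://your-unraid-server/graphql",  # placeholder URL
    headers={"x-api-key": "YOUR_API_KEY"},
    json={"query": START_CONTAINER, "variables": {"id": "YOUR_CONTAINER_PREFIXED_ID"}},
    timeout=30.0,
)
resp.raise_for_status()
print(resp.json()["data"]["docker"]["start"]["state"])  # e.g. "RUNNING"
```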
#### VM Management
```graphql
mutation {
vm {
start(id: $id) # Returns Boolean
stop(id: $id)
pause(id: $id)
resume(id: $id)
forceStop(id: $id)
reboot(id: $id)
reset(id: $id)
}
}
```
#### RClone Remote Management
```graphql
mutation {
rclone {
createRCloneRemote(input: { name: "...", type: "s3", config: {...} }) {
name type parameters
}
deleteRCloneRemote(input: { name: "..." })
}
}
```
#### System Operations (via API)
The following operations are confirmed available through the API (exact mutation names should be discovered via introspection; see the sketch after this list):
- Array start/stop
- Parity check trigger
- Server reboot/shutdown
- Flash backup trigger
- Notification management
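A sketch of that discovery step -- the standard GraphQL introspection query for root mutation fields, which works against any endpoint that has introspection enabled:
```python
import httpx

INTROSPECT_MUTATIONS = """
query {
  __schema {
    mutationType {
      fields { name description }
    }
  }
}
"""

resp = httpx.post(
    "https://your-unraid-server/graphql",  # placeholder URL
    headers={"x-api-key": "YOUR_API_KEY"},
    json={"query": INTROSPECT_MUTATIONS},
    timeout=30.0,
)
for field in resp.json()["data"]["__schema"]["mutationType"]["fields"]:
    print(field["name"], "-", field["description"] or "")
```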
### The `PrefixedID` Type
The API uses a `PrefixedID` custom scalar type for global object identification. This follows the GraphQL `Node` interface pattern, combining the object type and its internal ID (e.g., `DockerContainer:abc123`). Client libraries must handle this as a string.
### The `Long` Scalar Type
The API defines a custom `Long` scalar type for 64-bit integers to handle values that exceed the standard GraphQL `Int` (32-bit signed). This is used for:
- Disk/array capacity values (size, free, used, total)
- Memory values (total, free)
- Disk operation counters (numReads, numWrites)
These are typically serialized as strings in JSON responses.
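A minimal sketch of how a Python client might normalize both custom scalars (the helper names are illustrative, not part of any library; Python ints are arbitrary-precision, so coerced `Long` values cannot overflow):
```python
def split_prefixed_id(prefixed_id: str) -> tuple[str, str]:
    """Split a PrefixedID like 'DockerContainer:abc123' into (type, raw id)."""
    type_name, _, raw_id = prefixed_id.partition(":")
    return type_name, raw_id

def coerce_long(value: str | int | None) -> int | None:
    """Coerce a Long scalar (often serialized as a JSON string) to an int."""
    return None if value is None else int(value)

print(split_prefixed_id("DockerContainer:abc123"))  # ('DockerContainer', 'abc123')
print(coerce_long("32000000000"))                   # 32000000000
```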
**Sources:**
- [Welcome to Unraid API | Unraid Docs](https://docs.unraid.net/API/) [Tier: Primary]
- [Using the Unraid API](https://docs.unraid.net/API/how-to-use-the-api/) [Tier: Primary]
- [GitHub - jmagar/unraid-mcp](https://github.com/jmagar/unraid-mcp) [Tier: Official]
---
## 5. WebSocket Subscriptions
### Protocol
The Unraid API uses the **`graphql-transport-ws`** protocol (the modern standard, superseding the older `subscriptions-transport-ws`).
### Connection Flow
1. Client connects to `ws(s)://[SERVER_IP]/graphql`
2. Client sends `connection_init` with auth payload:
```json
{
"type": "connection_init",
"payload": {
"x-api-key": "YOUR_API_KEY"
}
}
```
3. Server responds with `connection_ack`
4. Client sends `subscribe` message with GraphQL subscription query
5. Server streams `next` messages with data as events occur
6. Server sends `complete` when subscription ends
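A minimal Python sketch of this flow using the `websockets` library (already a project dependency); the URL, key, and subscription document are placeholders:
```python
import asyncio
import json
import websockets

async def subscribe(url: str, api_key: str, query: str) -> None:
    # graphql-transport-ws is negotiated as a WebSocket subprotocol.
    async with websockets.connect(url, subprotocols=["graphql-transport-ws"]) as ws:
        # Steps 1-2: connection_init with the API key in the payload
        await ws.send(json.dumps({"type": "connection_init",
                                  "payload": {"x-api-key": api_key}}))
        # Step 3: wait for connection_ack
        ack = json.loads(await ws.recv())
        assert ack["type"] == "connection_ack"
        # Step 4: subscribe under an arbitrary operation id
        await ws.send(json.dumps({"id": "1", "type": "subscribe",
                                  "payload": {"query": query}}))
        # Steps 5-6: stream next messages until complete
        async for raw in ws:
            msg = json.loads(raw)
            if msg["type"] == "next":
                print(msg["payload"]["data"])
            elif msg["type"] == "complete":
                break

# Example usage (subscription field is a placeholder):
# asyncio.run(subscribe("wss://your-unraid-server/graphql", "YOUR_API_KEY",
#                       "subscription { notificationAdded { id subject } }"))
```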
### Known Subscription Types
| Subscription | Purpose |
|-------------|---------|
| `syslog` / `logFile` | Real-time system log streaming |
| Array events | State changes, parity check progress |
| Docker events | Container state changes |
| Notifications | Real-time alert streaming |
### Authentication for WebSockets
Since standard WebSocket APIs in browsers cannot set custom headers, the API key is passed in the `connectionParams` payload of the `connection_init` message. Alternatively, session cookies work automatically for WebGUI-based tools.
### Infrastructure Notes
- Unraid uses **Nchan** (Nginx module) for WebSocket connections internally
- Unraid 7.0.1 fixed Nchan memory leaks affecting subscription stability
- Unraid 7.1.0 added automatic Nchan shared memory recovery (restarts Nginx when memory runs out)
- A setting was added in 7.1.0 to disable real-time updates on inactive browsers to prevent memory issues
**Sources:**
- [Subscriptions - GraphQL](https://graphql.org/learn/subscriptions/) [Tier: Primary]
- [Subscriptions - Apollo GraphQL Docs](https://www.apollographql.com/docs/react/data/subscriptions) [Tier: Official]
---
## 6. Unraid Connect Cloud Service
### Overview
**Unraid Connect** is a cloud-enabled companion service that functions as a centralized "remote command center" for Unraid servers. It provides:
- **Centralized Dashboard:** View status, uptime, storage, and license details for multiple servers
- **Remote Management:** Start/stop arrays, manage Docker/VMs, reboot servers
- **Flash Backup:** Automated cloud-based backups of USB flash drive configuration
- **Deep Linking:** Jump directly from cloud dashboard to local WebGUI pages
### Relationship to Local API
- Pre-7.2: The Unraid Connect plugin provides both cloud features AND the local GraphQL API
- Post-7.2: The API is native to the OS; the Connect plugin adds cloud features
- The cloud dashboard communicates through a secure tunnel to execute commands locally
### Data Transmitted to Cloud
The local server transmits to `Unraid.net`:
- Server hostname and keyfile details
- Local/remote IP addresses
- Array usage statistics (numbers only, no file names)
- Container and VM counts
**Privacy:** The service explicitly does NOT collect or share user content, file details, or personal information beyond necessary metadata.
### Remote Access Mechanisms
1. **Dynamic Remote Access (Recommended):**
- On-demand; WebGUI closed to internet by default
- Uses UPnP for automatic port forwarding (or manual rules)
- Port lease expires after inactivity (~10 minutes)
2. **Static Remote Access:**
- Always-on; WebGUI continuously accessible
- Requires forwarding a high random WAN port (>1000) to the server's HTTPS port
3. **VPN Alternatives:**
- WireGuard (built-in)
- Tailscale (native in Unraid 7.0+)
### Flash Backup Details
- Configuration files are encrypted and uploaded
- Excludes sensitive data: passwords, WireGuard keys
- Retained as latest backup only; older/inactive backups are purged
- Can be triggered and monitored via the API
**Sources:**
- [Unraid Connect overview & setup | Unraid Docs](https://docs.unraid.net/connect/about/) [Tier: Primary]
- [Remote access | Unraid Docs](https://docs.unraid.net/connect/remote-access/) [Tier: Primary]
- [Automated flash backup | Unraid Docs](https://docs.unraid.net/connect/flash-backup/) [Tier: Primary]
---
## 7. Version History and API Changes
### Unraid 7.0.0 (2025-01-09)
**Developer & System Capabilities:**
- Notification agents stored as individual XML files (easier programmatic management)
- `Content-Security-Policy frame-ancestors` support (iframe embedding for dashboards)
- JavaScript console logging restored
- VM Manager inline XML mode (read-only libvirt XML view)
- Docker PID limits (default 2048)
- Full ZFS support (hybrid pools, subpools, encryption)
- Native Tailscale integration
- File Manager merged into core OS
- QEMU snapshots and clones for VMs
**Note:** API was still plugin-based (Unraid Connect plugin required).
### Unraid 7.0.1 (2025-02-25)
- **Nchan memory leak fix** -- Critical for WebSocket subscription stability
- Tailscale integration security restrictions for Host-network containers
### Unraid 7.1.0 (2025-05-05)
- **Nchan shared memory recovery** -- Automatic Nginx restart on memory exhaustion
- **Real-time updates toggle** -- Disable updates on inactive browsers
- Native WiFi support (`wlan0`) -- New network interface data
- User VM templates (create, export, import)
- CSS rework for WebGUI
### Unraid 7.2.0 (2025-10-29, stable)
**Major Milestone: API becomes native to the OS.**
- No plugin required for local API access
- API starts automatically with system
- Deep system integration
- Settings accessible at **Settings > Management Access > API**
- OIDC/SSO support added
- Permissions system rewritten (API v4.0.0)
- Built-in GraphQL Sandbox
- CLI key management (`unraid-api apikey`)
- Open-sourced API code
**Sources:**
- [Version 7.0.0 | Unraid Docs](https://docs.unraid.net/unraid-os/release-notes/7.0.0/) [Tier: Primary]
- [Version 7.0.1 | Unraid Docs](https://docs.unraid.net/unraid-os/release-notes/7.0.1/) [Tier: Primary]
- [Version 7.1.0 | Unraid Docs](https://docs.unraid.net/unraid-os/release-notes/7.1.0/) [Tier: Primary]
- [Unraid 7.2.0 Blog Post](https://unraid.net/blog/unraid-7-2-0) [Tier: Official]
---
## 8. Community Integrations
### Third-Party Projects Using the Unraid API
#### 1. unraid-mcp (Python MCP Server) -- This Project
- **Interface:** Official Unraid GraphQL API via HTTP/HTTPS + WebSockets
- **Auth:** `UNRAID_API_URL` + `UNRAID_API_KEY` environment variables
- **Transport:** HTTP header `X-API-Key` for queries; WebSocket `connection_init` payload for subscriptions
- **Tools:** 26+ MCP tools for Docker, VM, storage, system management
#### 2. PSUnraid (PowerShell Module)
- **Developer:** Community member "Jagula"
- **Status:** Alpha / proof of concept
- **Interface:** Official Unraid GraphQL API
- **Install:** `Install-Module PSUnraid`
- **Capabilities:** Server/array/disk status, Docker/VM start/stop/restart, notifications
- **Requires:** Unraid 7.2.2+ for full feature support
- **Key insight:** Remote-only (no SSH needed), converts JSON to PowerShell objects
#### 3. unraid-management-agent (Go Plugin)
- **Interface:** **NOT** the official GraphQL API -- independent REST API + WebSocket
- **Port:** Default 8043
- **Architecture:** Standalone Go binary, collects data via native libraries
- **Endpoints:** 50+ REST endpoints, `/metrics` for Prometheus, WebSocket at `/api/v1/ws`
- **Integrations:** Prometheus (41 metrics), MQTT, Home Assistant (auto-discovery), MCP (54 tools)
- **Key insight:** Provides data the official API lacks (SMART data, container logs, process monitoring, GPU stats, UPS data)
#### 4. unraid-ssh-mcp
- **Interface:** SSH (explicitly chose NOT to use GraphQL API)
- **Reason:** API lacked container logs, SMART data, real-time CPU load, process monitoring
- **Advantage:** Works on any Unraid version, no rate limits
#### Other Projects
- **U-Manager:** Android app for remote Unraid management
- **Unraid Deck:** Native iOS client (SwiftUI)
- **hass-unraid:** Home Assistant integration with SMART attribute notifications
**Sources:**
- [PSUnraid Reddit Thread](https://www.reddit.com/r/unRAID/comments/1ph08wi/psunraid_powershell_m) [Tier: Community]
- [unraid-management-agent GitHub](https://github.com/ruaan-deysel/unraid-management-agent) [Tier: Official]
- [Unraid MCP Reddit Thread](https://www.reddit.com/r/unRAID/comments/1pl4s4j/unraid_mcp_server_que) [Tier: Community]
---
## 9. Known Issues and Limitations
### GraphQL Schema Issues (Discovered in unraid-mcp Development)
Based on the existing unraid-mcp codebase, the following issues have been encountered:
1. **Int Overflow on Large Values:** Memory size fields (total, used, free) and some disk operation counters can overflow GraphQL's standard 32-bit `Int` type. The API uses a custom `Long` scalar but some fields still return problematic values.
2. **NaN Values:** Certain fields in the `vars` query (e.g., `sysArraySlots`, `sysCacheSlots`, `cacheNumDevices`, `cacheSbNumDisks`) can return NaN, causing type errors. The existing codebase works around this by querying a curated subset of fields.
3. **Non-nullable Fields Returning Null:** The `info.devices` section has non-nullable fields that may be null in practice. The codebase avoids querying this section entirely.
4. **Memory Layout Size Missing:** Individual memory stick `size` values are not returned by the API, preventing total memory calculation from layout data.
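Issue 2 has a direct stdlib guard if the server emits literal `NaN` tokens in its JSON: `json.loads` calls `parse_constant` for the literals `NaN`, `Infinity`, and `-Infinity`, so they can be mapped to `None` before leaking into results as `float('nan')`. A minimal sketch:
```python
import json

def safe_loads(text: str):
    """Parse JSON, mapping NaN/Infinity literals to None."""
    return json.loads(text, parse_constant=lambda _literal: None)

print(safe_loads('{"sysArraySlots": NaN, "port": 80}'))
# {'sysArraySlots': None, 'port': 80}

# httpx forwards json.loads kwargs, so the same hook works there:
# response.json(parse_constant=lambda _literal: None)
```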
### API Coverage Gaps
According to the unraid-ssh-mcp developer, the GraphQL API currently lacks:
- Docker container logs
- Detailed SMART data for drives
- Real-time CPU load averages
- Process monitoring capabilities
- Some system-level metrics available via `/proc` and `/sys`
### General Limitations
- **Rate Limiting:** The API implements rate limiting (specific limits not documented publicly)
- **Version Dependency:** Full API requires Unraid 7.2+; pre-7.2 versions need the Connect plugin
- **Self-Signed Certificates:** Client must handle SSL verification for local IP access
- **Schema Volatility:** The API schema is still evolving; field names and types may change between versions
---
## 10. API Roadmap and Future Features
### Completed (as of 7.2)
- API native to Unraid OS
- Separated from Connect Plugin
- Open-sourced
- OIDC/SSO support
- Permissions system rewrite (v4.0.0)
### Q1 2025
- New Connect Settings Interface
### Q2 2025
- New modernized Settings Pages
- Storage Pool Creation Interface (simplified)
- Storage Pool Status Interface (real-time)
- Developer Tools for Plugins
- Custom Theme Creator (start)
### Q3 2025
- Custom Theme Creator (completion)
- New Docker Status Interface
- Docker Container Setup Interface (streamlined)
- New Plugins Interface (redesigned)
### TBD (Planned but Unscheduled)
- **Native Docker Compose support** -- Highly anticipated
- Plugin Development SDK and tooling
- Advanced Plugin Management interface
- Storage Share Creation & Settings interfaces
- Storage Share Management Dashboard
### In Development
- User Interface Component Library (security components)
**Sources:**
- [Roadmap & Features | Unraid Docs](https://docs.unraid.net/API/upcoming-features/) [Tier: Primary]
---
## 11. Recommendations for unraid-mcp
Based on this research, the following improvements are recommended for the unraid-mcp project:
### High Priority
1. **ZFS/Pool Management Tools**
- Add `get_pool_status` for ZFS/BTRFS storage pools
- Current `get_array_status` insufficient for multi-pool setups introduced in Unraid 7.0
2. **Scope-Based Tool Filtering**
- Before registering tools with MCP, verify the API key has appropriate permissions
- Prevent exposing tools the key cannot use (avoid hallucinated capabilities)
- Query current key permissions at startup
3. **Improved Error Handling**
- Implement exponential backoff for rate limit errors (HTTP 429); see the sketch after this list
- Better handling of `Long` scalar type values
- Graceful degradation for unavailable schema fields
4. **API Key Authorization Flow**
- Consider implementing the OAuth-like flow (`/ApiKeyAuthorize`) for user-friendly key generation
- Enables scope-based consent before key creation
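A minimal sketch of the backoff loop from recommendation 3 (retry count and base delay are illustrative; honoring the server's `Retry-After` header, when present, is shown as well):
```python
import asyncio
import httpx

async def post_with_backoff(client: httpx.AsyncClient, url: str, payload: dict,
                            retries: int = 4) -> httpx.Response:
    """POST with exponential backoff on HTTP 429 rate-limit responses."""
    delay = 1.0
    for _attempt in range(retries):
        resp = await client.post(url, json=payload)
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        await asyncio.sleep(float(retry_after) if retry_after else delay)
        delay *= 2  # exponential growth: 1s, 2s, 4s, ...
    return resp  # last 429 after exhausting retries
```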
### Medium Priority
5. **Real-Time Notification Streaming**
- Add WebSocket subscription for notifications
- Allows proactive alerting (e.g., "Disk 5 is overheating") without user request
6. **File Manager Integration**
- Add `list_files`, `read_file` tools using the native File Manager API (merged in 7.0)
- Enables LLM to organize media or clean up `appdata`
7. **Pagination for Large Queries**
- Implement `limit` and `offset` for log listings and file browsing
- Prevent timeouts from massive result sets
8. **Flash Backup Trigger**
- Add tool to trigger flash backup via API mutation
- Monitor backup status
### Low Priority
9. **VM Snapshot Management**
- Add `create_vm_snapshot`, `revert_to_snapshot`, `clone_vm`
- Leverages QEMU snapshot support from Unraid 7.0
10. **Tailscale/VPN Status**
- Query network schemas for Tailnet IPs and VPN connection status
- Useful for remote management diagnostics
11. **Query Complexity Optimization**
- Separate list queries (lightweight) from detail queries (heavy)
- `list_docker_containers` should fetch only id/names/state
- Detail queries should be on-demand
### Implementation Notes
- **GraphQL Sandbox Discovery:** Use the built-in sandbox at `http://SERVER/graphql` to discover exact mutation names and field types for any new tools
- **Version Compatibility:** Consider checking the Unraid API version at startup and adjusting available tools accordingly
- **SSL Configuration:** The `UNRAID_VERIFY_SSL` environment variable is already implemented -- ensure documentation guides users toward `myunraid.net` certificates for proper SSL
- **PrefixedID Handling:** Container and VM IDs use the `PrefixedID` custom scalar -- ensure all ID-based operations handle this string type correctly
---
## 12. Source Bibliography
### Primary Sources (Official Documentation)
- [Welcome to Unraid API | Unraid Docs](https://docs.unraid.net/API/)
- [Using the Unraid API](https://docs.unraid.net/API/how-to-use-the-api/)
- [API key authorization flow | Unraid Docs](https://docs.unraid.net/API/api-key-app-developer-authorization-flow/)
- [Programmatic API key management | Unraid Docs](https://docs.unraid.net/API/programmatic-api-key-management/)
- [Roadmap & Features | Unraid Docs](https://docs.unraid.net/API/upcoming-features/)
- [Unraid Connect overview & setup | Unraid Docs](https://docs.unraid.net/connect/about/)
- [Remote access | Unraid Docs](https://docs.unraid.net/connect/remote-access/)
- [Automated flash backup | Unraid Docs](https://docs.unraid.net/connect/flash-backup/)
- [Version 7.0.0 Release Notes](https://docs.unraid.net/unraid-os/release-notes/7.0.0/)
- [Version 7.0.1 Release Notes](https://docs.unraid.net/unraid-os/release-notes/7.0.1/)
- [Version 7.1.0 Release Notes](https://docs.unraid.net/unraid-os/release-notes/7.1.0/)
### Official / GitHub Sources
- [GitHub - unraid/api: Unraid API / Connect / UI Monorepo](https://github.com/unraid/api)
- [GitHub - jmagar/unraid-mcp](https://github.com/jmagar/unraid-mcp)
- [api/docs/developer/development.md](https://github.com/unraid/api/blob/main/api/docs/developer/development.md)
- [Unraid OS 7.2.0 Blog Post](https://unraid.net/blog/unraid-7-2-0)
### Community Sources
- [PSUnraid PowerShell Module (Reddit)](https://www.reddit.com/r/unRAID/comments/1ph08wi/psunraid_powershell_m)
- [Unraid MCP Server (Reddit)](https://www.reddit.com/r/unRAID/comments/1pl4s4j/unraid_mcp_server_que)
- [unraid-management-agent (GitHub)](https://github.com/ruaan-deysel/unraid-management-agent)
- [Unraid API Discussion (Reddit)](https://www.reddit.com/r/unRAID/comments/1h7xkjr/unraid_api/)
- [API Key Location Question (Reddit)](https://www.reddit.com/r/unRAID/comments/1nk2jjk/i_couldnt_find_the_ap)
### Reference Sources
- [GraphQL Specification](https://spec.graphql.org/)
- [Learn GraphQL](https://graphql.org/learn/)
- [GraphQL Subscriptions](https://graphql.org/learn/subscriptions/)
- [Apollo GraphQL Sandbox](https://www.apollographql.com/docs/graphos/platform/sandbox)
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction)
---
## Cross-Source Analysis
### Where Sources Agree
- The API is GraphQL-based with queries, mutations, and subscriptions
- Unraid 7.2 is the version where API became native
- API Keys are the primary authentication method for programmatic access
- The endpoint is at `/graphql` on the server
- The API supports Docker/VM lifecycle management
- The monorepo is TypeScript/Node.js based
### Where Sources Disagree or Have Gaps
- **Exact mutation names** are not documented publicly -- must use GraphQL Sandbox introspection
- **Rate limit specifics** (thresholds, headers) are not publicly documented
- **Container logs** -- the unraid-ssh-mcp developer claims they're unavailable via API, but this may have changed in newer versions
- **Schema type issues** (Int overflow, NaN) are documented only in the unraid-mcp codebase, not in official docs
### Notable Insights
1. The unraid-management-agent project provides capabilities the official API lacks, suggesting areas for API expansion
2. PSUnraid confirms the API schema includes mutations for Docker/VM lifecycle with boolean return types
3. The OAuth-like authorization flow is a sophisticated feature not commonly found in self-hosted server APIs
4. The `Long` scalar type and `PrefixedID` type are custom additions critical for proper client implementation

View File

@@ -1,998 +0,0 @@
# Unraid API Source Code Analysis
> **Research Date:** 2026-02-07
> **Repository:** https://github.com/unraid/api
> **Latest Version:** v4.29.2 (December 19, 2025)
> **License:** Open-sourced January 2025
---
## Table of Contents
1. [Repository Structure](#1-repository-structure)
2. [Technology Stack](#2-technology-stack)
3. [GraphQL Schema & Type System](#3-graphql-schema--type-system)
4. [Authentication & Authorization](#4-authentication--authorization)
5. [Resolver Implementations](#5-resolver-implementations)
6. [Subscription System](#6-subscription-system)
7. [State Management](#7-state-management)
8. [Plugin Architecture](#8-plugin-architecture)
9. [Release History](#9-release-history)
10. [Roadmap & Upcoming Features](#10-roadmap--upcoming-features)
11. [Open Issues & Community Requests](#11-open-issues--community-requests)
12. [Community Projects & Integrations](#12-community-projects--integrations)
13. [Architectural Insights for unraid-mcp](#13-architectural-insights-for-unraid-mcp)
---
## 1. Repository Structure
The Unraid API is a **monorepo** managed with pnpm workspaces containing eight interconnected packages:
```
unraid/api/
├── api/ # NestJS GraphQL backend (port 3001)
│ ├── src/
│ │ ├── __test__/
│ │ ├── common/ # Shared utilities
│ │ ├── core/ # Core infrastructure
│ │ │ ├── errors/
│ │ │ ├── modules/
│ │ │ ├── notifiers/
│ │ │ ├── types/
│ │ │ ├── utils/
│ │ │ ├── log.ts
│ │ │ └── pubsub.ts # PubSub for GraphQL subscriptions
│ │ ├── i18n/ # Internationalization
│ │ ├── mothership/ # Unraid Connect relay communication
│ │ ├── store/ # Redux state management
│ │ │ ├── actions/
│ │ │ ├── listeners/
│ │ │ ├── modules/
│ │ │ ├── services/
│ │ │ ├── state-parsers/
│ │ │ ├── watch/
│ │ │ └── root-reducer.ts
│ │ ├── types/
│ │ ├── unraid-api/ # Main API implementation
│ │ │ ├── app/
│ │ │ ├── auth/ # Authentication system
│ │ │ ├── cli/
│ │ │ ├── config/
│ │ │ ├── cron/
│ │ │ ├── decorators/
│ │ │ ├── exceptions/
│ │ │ ├── graph/ # GraphQL resolvers & services
│ │ │ ├── nginx/
│ │ │ ├── observers/
│ │ │ ├── organizer/
│ │ │ ├── plugin/
│ │ │ ├── rest/ # REST API endpoints
│ │ │ ├── shared/
│ │ │ ├── types/
│ │ │ ├── unraid-file-modifier/
│ │ │ └── utils/
│ │ ├── upnp/ # UPnP protocol
│ │ ├── cli.ts
│ │ ├── consts.ts
│ │ ├── environment.ts
│ │ └── index.ts
│ ├── generated-schema.graphql # Auto-generated GraphQL schema
│ ├── codegen.ts # GraphQL code generation config
│ ├── Dockerfile
│ └── docker-compose.yml
├── web/ # Nuxt 3 frontend (Vue 3)
│ ├── composables/gql/ # GraphQL composables
│ ├── layouts/
│ ├── src/
│ └── codegen.ts
├── unraid-ui/ # Vue 3 component library
├── plugin/ # Plugin packaging
├── packages/
│ ├── unraid-shared/ # Shared types & utilities
│ │ └── src/
│ │ ├── pubsub/ # PubSub channel definitions
│ │ ├── types/
│ │ ├── graphql-enums.ts # AuthAction, Resource, Role enums
│ │ ├── graphql.model.ts
│ │ └── use-permissions.directive.ts
│ ├── unraid-api-plugin-connect/
│ ├── unraid-api-plugin-generator/
│ └── unraid-api-plugin-health/
├── scripts/
├── pnpm-workspace.yaml
├── .nvmrc # Node.js v22
└── flake.nix # Nix dev environment
```
---
## 2. Technology Stack
### Backend
| Component | Technology | Version |
|-----------|-----------|---------|
| Runtime | Node.js | v22 |
| Framework | NestJS | 11.1.6 |
| HTTP Server | Fastify | 5.5.0 |
| GraphQL | Apollo Server | 4.12.2 |
| Package Manager | pnpm | 10.15.0 |
| Build Tool | Vite | 7.1.3 |
| Test Framework | Vitest | 3.2.4 |
| Docker SDK | Dockerode | 4.0.7 |
| VM Management | @unraid/libvirt | 2.1.0 |
| System Info | systeminformation | 5.27.8 |
| File Watcher | Chokidar | 4.0.3 |
| Auth RBAC | Casbin + nest-authz | 5.38.0 |
| Auth Passport | Passport.js | Multiple strategies |
| State Mgmt | Redux Toolkit | - |
| Subscriptions | graphql-subscriptions | PubSub with EventEmitter |
### Frontend
| Component | Technology | Version |
|-----------|-----------|---------|
| Framework | Vue 3 + Nuxt | 3.5.20 |
| GraphQL Client | Apollo Client | 3.14.0 |
| State | Pinia | 3.0.3 |
| Styling | Tailwind CSS | v4 |
### Key Patterns
- **Schema-first GraphQL** (migrating to code-first with NestJS decorators)
- NestJS dependency injection with decorators
- TypeScript ESM imports (`.js` extensions)
- Redux for ephemeral runtime state synced with INI config files
- Chokidar filesystem watchers for reactive config synchronization
---
## 3. GraphQL Schema & Type System
### Custom Scalars
- `DateTime` - ISO date/time
- `BigInt` - Large integer values
- `JSON` - Arbitrary JSON data
- `Port` - Network port numbers
- `URL` - URL strings
- `PrefixedID` - Server-prefixed identifiers (format: `server-prefix:uuid`)
### Core Enums
#### ArrayState
```
STARTED, STOPPED, NEW_ARRAY, RECON_DISK, DISABLE_DISK,
SWAP_DSBL, INVALID_EXPANSION, PARITY_NOT_BIGGEST,
TOO_MANY_MISSING_DISKS, NEW_DISK_TOO_SMALL, NO_DATA_DISKS
```
#### ArrayDiskStatus
```
DISK_NP, DISK_OK, DISK_NP_MISSING, DISK_INVALID, DISK_WRONG,
DISK_DSBL, DISK_NP_DSBL, DISK_DSBL_NEW, DISK_NEW
```
#### ArrayDiskType
```
DATA, PARITY, FLASH, CACHE
```
#### ArrayDiskFsColor
```
GREEN_ON, GREEN_BLINK, BLUE_ON, BLUE_BLINK,
YELLOW_ON, YELLOW_BLINK, RED_ON, RED_OFF, GREY_OFF
```
#### ContainerState
```
RUNNING, PAUSED, EXITED
```
#### ContainerPortType
```
TCP, UDP
```
#### VmState
```
NOSTATE, RUNNING, IDLE, PAUSED, SHUTDOWN,
SHUTOFF, CRASHED, PMSUSPENDED
```
#### NotificationImportance / NotificationType
- Importance: Defines severity levels
- Type: Categorizes notification sources
#### Role
```
ADMIN - Full administrative access
CONNECT - Read access with remote management
GUEST - Basic profile access
VIEWER - Read-only access
```
#### AuthAction
```
CREATE_ANY, CREATE_OWN
READ_ANY, READ_OWN
UPDATE_ANY, UPDATE_OWN
DELETE_ANY, DELETE_OWN
```
#### Resource (35 total)
```
ACTIVATION_CODE, API_KEY, ARRAY, CLOUD, CONFIG, CONNECT,
CUSTOMIZATIONS, DASHBOARD, DISK, DOCKER, FLASH, INFO,
LOGS, ME, NETWORK, NOTIFICATIONS, ONLINE, OS, OWNER,
PERMISSION, REGISTRATION, SERVERS, SERVICES, SHARE,
VARS, VMS, WELCOME, ...
```
### Core Type Definitions
#### UnraidArray
```graphql
type UnraidArray {
state: ArrayState!
capacity: ArrayCapacity
boot: ArrayDisk
parities: [ArrayDisk!]!
parityCheckStatus: ParityCheck
disks: [ArrayDisk!]!
caches: [ArrayDisk!]!
}
```
#### ArrayDisk
```graphql
type ArrayDisk implements Node {
id: PrefixedID!
idx: Int
name: String
device: String
size: BigInt
fsSize: String
fsFree: String
fsUsed: String
status: ArrayDiskStatus
rotational: Boolean
temp: Int
numReads: BigInt
numWrites: BigInt
numErrors: BigInt
type: ArrayDiskType
exportable: Boolean
warning: Int
critical: Int
fsType: String
comment: String
format: String
transport: String
color: ArrayDiskFsColor
isSpinning: Boolean
}
```
#### DockerContainer
```graphql
type DockerContainer implements Node {
id: PrefixedID!
names: [String!]
image: String
imageId: String
command: String
created: DateTime
ports: [ContainerPort!]
lanIpPorts: [String] # LAN-accessible host:port values
sizeRootFs: BigInt
sizeRw: BigInt
sizeLog: BigInt
labels: JSON
state: ContainerState
status: String
hostConfig: JSON
networkSettings: JSON
mounts: JSON
autoStart: Boolean
autoStartOrder: Int
autoStartWait: Int
templatePath: String
projectUrl: String
registryUrl: String
supportUrl: String
iconUrl: String
webUiUrl: String
shell: String
templatePorts: JSON
isOrphaned: Boolean
}
```
#### VmDomain
```graphql
type VmDomain implements Node {
id: PrefixedID! # UUID-based identifier
name: String # Friendly name
state: VmState! # Current state
uuid: String @deprecated # Use id instead
}
```
#### Share
```graphql
type Share implements Node {
id: PrefixedID!
name: String
comment: String
free: String
used: String
total: String
include: [String]
exclude: [String]
# Additional capacity/config fields
}
```
#### Info (System Information)
```graphql
type Info {
time: DateTime
baseboard: Baseboard
cpu: CpuInfo
devices: Devices
display: DisplayInfo
machineId: String
memory: MemoryInfo
os: OsInfo
system: SystemInfo
versions: Versions
}
```
---
## 4. Authentication & Authorization
### Authentication Methods
#### 1. API Key Authentication
- **Header**: `x-api-key: YOUR_API_KEY`
- Keys stored as JSON files in `/boot/config/plugins/unraid-api/`
- Generated via WebGUI (Settings > Management Access > API Keys) or CLI (`unraid-api apikey --create`)
- 32-byte hexadecimal keys generated using `crypto.randomBytes`
- File system watcher (Chokidar) syncs in-memory cache with disk changes
- Keys support both **roles** (simplified) and **permissions** (granular resource:action pairs)
**API Key Service (`api-key.service.ts`):**
```typescript
// Key creation validates:
// - Name via Unicode-aware regex
// - At least one role or permission required
// - 32-byte hex key generation
// - Overwrite support for existing keys
// Lookup methods:
findById(id) // UUID-based lookup
findByField(field, value) // Generic field search
findByKey(key) // Direct secret key lookup for auth
```
#### 2. Cookie-Based Sessions
- CSRF token validation for non-GET requests
- `timingSafeEqual` comparison prevents timing attacks
- Session user ID: `-1`
- Returns admin role privileges
#### 3. Local Sessions (CLI/System)
- For CLI and system-level operations
- Local session user ID: `-2`
- Returns local admin with elevated privileges
#### 4. SSO/OIDC
- OpenID Connect client implementation
- Separate SSO module with auth, client, core, models, session, and utils subdirectories
- JWT validation using Jose library
### Authorization (RBAC via Casbin)
**Model:** Resource-based access control with `_ANY` (universal) and `_OWN` (owner-limited) permission modifiers.
```typescript
// Permission enforcement via decorators:
@UsePermissions({
action: AuthAction.READ_ANY,
resource: Resource.ARRAY,
})
```
**Casbin Implementation (`api/src/unraid-api/auth/casbin/`):**
- `casbin.service.ts` - Core RBAC service
- `policy.ts` - Policy configuration
- `model.ts` - RBAC model definitions
- `resolve-subject.util.ts` - Subject resolution utility
- Wildcard permission expansion (`*` -> full CRUD)
- Role hierarchy with inherited permissions
### Auth Files Structure
```
api/src/unraid-api/auth/
├── casbin/
│ ├── casbin.module.ts
│ ├── casbin.service.ts
│ ├── model.ts
│ ├── policy.ts
│ └── resolve-subject.util.ts
├── api-key.service.ts # API key CRUD operations
├── auth.interceptor.ts # HTTP auth interceptor
├── auth.module.ts # NestJS auth module
├── auth.service.ts # Core auth logic (3 strategies)
├── authentication.guard.ts # Route protection guard
├── cookie.service.ts # Cookie handling
├── cookie.strategy.ts # Cookie auth strategy
├── fastify-throttler.guard.ts # Rate limiting
├── header.strategy.ts # Header-based auth (API keys)
├── local-session-lifecycle.service.ts
├── local-session.service.ts
├── local-session.strategy.ts
├── public.decorator.ts # Mark endpoints as public
└── user.decorator.ts # User injection decorator
```
---
## 5. Resolver Implementations
### Resolver Directory Structure
```
api/src/unraid-api/graph/resolvers/
├── api-key/ # API key management (10 files)
├── array/ # Array operations + parity (11 files)
├── cloud/ # Cloud/Connect operations
├── config/ # System configuration
├── customization/ # UI customization
├── disks/ # Disk management (6 files)
├── display/ # Display settings
├── docker/ # Docker management (36 files)
├── flash/ # Flash drive operations
├── flash-backup/ # Flash backup management
├── info/ # System information (7 subdirs)
│ ├── cpu/
│ ├── devices/
│ ├── display/
│ ├── memory/
│ ├── os/
│ ├── system/
│ └── versions/
├── logs/ # Log management (8 files)
├── metrics/ # System metrics (5 files)
├── mutation/ # Root mutation resolver
├── notifications/ # Notification management (7 files)
├── online/ # Online status
├── owner/ # Server owner info
├── rclone/ # Cloud storage (8 files)
├── registration/ # License/registration
├── servers/ # Server management
├── settings/ # Settings management (5 files)
├── sso/ # SSO/OIDC (8 subdirs)
├── ups/ # UPS monitoring (7 files)
├── vars/ # Unraid variables
└── vms/ # VM management (7 files)
```
### Complete API Surface
#### Queries
| Domain | Query | Description | Permission |
|--------|-------|-------------|------------|
| **Array** | `array` | Get array data (state, capacity, disks, parities, caches) | READ_ANY:ARRAY |
| **Disks** | `disks` | List all disks with temp, spin state, capacity | READ_ANY:DISK |
| **Disks** | `disk(id)` | Get specific disk by PrefixedID | READ_ANY:DISK |
| **Docker** | `docker` | Get Docker instance | READ_ANY:DOCKER |
| **Docker** | `container(id)` | Get specific container | READ_ANY:DOCKER |
| **Docker** | `containers` | List all containers (optional size info) | READ_ANY:DOCKER |
| **Docker** | `logs(id, since, tail)` | Container logs with filtering | READ_ANY:DOCKER |
| **Docker** | `networks` | Docker networks | READ_ANY:DOCKER |
| **Docker** | `portConflicts` | Port conflict detection | READ_ANY:DOCKER |
| **Docker** | `organizer` | Container organization structure | READ_ANY:DOCKER |
| **Docker** | `containerUpdateStatuses` | Check update availability | READ_ANY:DOCKER |
| **VMs** | `vms` | Get all VM domains | READ_ANY:VMS |
| **Info** | `info` | System info (CPU, memory, OS, baseboard, devices, versions) | READ_ANY:INFO |
| **Metrics** | `metrics` | System performance metrics | READ_ANY:INFO |
| **Logs** | `logFiles` | List available log files | READ_ANY:LOGS |
| **Logs** | `logFile(path, lines, startLine)` | Get specific log file content | READ_ANY:LOGS |
| **Notifications** | `notifications` | Get all notifications | READ_ANY:NOTIFICATIONS |
| **Notifications** | `overview` | Notification statistics | READ_ANY:NOTIFICATIONS |
| **Notifications** | `list` | Filtered notification list | READ_ANY:NOTIFICATIONS |
| **Notifications** | `warningsAndAlerts` | Deduplicated unread warnings/alerts | READ_ANY:NOTIFICATIONS |
| **RClone** | `rclone` | Cloud storage backup settings | READ_ANY:FLASH |
| **RClone** | `configForm(formOptions)` | Config form schemas | READ_ANY:FLASH |
| **RClone** | `remotes` | List configured remote storage | READ_ANY:FLASH |
| **UPS** | `upsDevices` | List UPS devices with status | READ_ANY:* |
| **UPS** | `upsDeviceById(id)` | Specific UPS device | READ_ANY:* |
| **UPS** | `upsConfiguration` | UPS configuration settings | READ_ANY:* |
| **Settings** | `settings` | System settings + SSO config | READ_ANY:CONFIG |
| **Shares** | `shares` | Storage shares with capacity | READ_ANY:SHARE |
#### Mutations
| Domain | Mutation | Description | Permission |
|--------|---------|-------------|------------|
| **Array** | `setState(input)` | Set array state (start/stop) | UPDATE_ANY:ARRAY |
| **Array** | `addDiskToArray(input)` | Add disk to array | UPDATE_ANY:ARRAY |
| **Array** | `removeDiskFromArray(input)` | Remove disk (array must be stopped) | UPDATE_ANY:ARRAY |
| **Array** | `mountArrayDisk(id)` | Mount a disk | UPDATE_ANY:ARRAY |
| **Array** | `unmountArrayDisk(id)` | Unmount a disk | UPDATE_ANY:ARRAY |
| **Array** | `clearArrayDiskStatistics(id)` | Clear disk statistics | UPDATE_ANY:ARRAY |
| **Parity** | `start(correct)` | Start parity check | UPDATE_ANY:ARRAY |
| **Parity** | `pause` | Pause parity check | UPDATE_ANY:ARRAY |
| **Parity** | `resume` | Resume parity check | UPDATE_ANY:ARRAY |
| **Parity** | `cancel` | Cancel parity check | UPDATE_ANY:ARRAY |
| **Docker** | `start(id)` | Start container | UPDATE_ANY:DOCKER |
| **Docker** | `stop(id)` | Stop container | UPDATE_ANY:DOCKER |
| **Docker** | `pause(id)` | Pause container | UPDATE_ANY:DOCKER |
| **Docker** | `unpause(id)` | Unpause container | UPDATE_ANY:DOCKER |
| **Docker** | `removeContainer(id, withImage?)` | Remove container (optionally with image) | DELETE_ANY:DOCKER |
| **Docker** | `updateContainer(id)` | Update to latest image | UPDATE_ANY:DOCKER |
| **Docker** | `updateContainers(ids)` | Update multiple containers | UPDATE_ANY:DOCKER |
| **Docker** | `updateAllContainers` | Update all with available updates | UPDATE_ANY:DOCKER |
| **Docker** | `updateAutostartConfiguration` | Update auto-start config (feature flag) | UPDATE_ANY:DOCKER |
| **Docker** | `createDockerFolder` | Create organizational folder | UPDATE_ANY:DOCKER |
| **Docker** | `setDockerFolderChildren` | Manage folder contents | UPDATE_ANY:DOCKER |
| **Docker** | `deleteDockerEntries` | Remove folders | UPDATE_ANY:DOCKER |
| **Docker** | `moveDockerEntriesToFolder` | Reorganize containers | UPDATE_ANY:DOCKER |
| **Docker** | `moveDockerItemsToPosition` | Position items | UPDATE_ANY:DOCKER |
| **Docker** | `renameDockerFolder` | Rename folder | UPDATE_ANY:DOCKER |
| **Docker** | `createDockerFolderWithItems` | Create folder with items | UPDATE_ANY:DOCKER |
| **Docker** | `syncDockerTemplatePaths` | Sync template data | UPDATE_ANY:DOCKER |
| **Docker** | `resetDockerTemplateMappings` | Reset to defaults | UPDATE_ANY:DOCKER |
| **VMs** | `start(id)` | Start VM | UPDATE_ANY:VMS |
| **VMs** | `stop(id)` | Stop VM | UPDATE_ANY:VMS |
| **VMs** | `pause(id)` | Pause VM | UPDATE_ANY:VMS |
| **VMs** | `resume(id)` | Resume VM | UPDATE_ANY:VMS |
| **VMs** | `forceStop(id)` | Force stop VM | UPDATE_ANY:VMS |
| **VMs** | `reboot(id)` | Reboot VM | UPDATE_ANY:VMS |
| **VMs** | `reset(id)` | Reset VM | UPDATE_ANY:VMS |
| **Notifications** | `createNotification(input)` | Create notification | CREATE_ANY:NOTIFICATIONS |
| **Notifications** | `deleteNotification(id, type)` | Delete notification | DELETE_ANY:NOTIFICATIONS |
| **Notifications** | `deleteArchivedNotifications` | Clear all archived | DELETE_ANY:NOTIFICATIONS |
| **Notifications** | `archiveNotification(id)` | Archive single | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `archiveNotifications(ids)` | Archive multiple | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `archiveAll(importance?)` | Archive all (optional filter) | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `unreadNotification(id)` | Mark as unread | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `unarchiveNotifications(ids)` | Restore archived | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `unarchiveAll(importance?)` | Restore all archived | UPDATE_ANY:NOTIFICATIONS |
| **Notifications** | `notifyIfUnique(input)` | Create if no equivalent exists | CREATE_ANY:NOTIFICATIONS |
| **Notifications** | `recalculateOverview` | Recompute overview stats | UPDATE_ANY:NOTIFICATIONS |
| **RClone** | `createRCloneRemote(input)` | Create remote storage | CREATE_ANY:FLASH |
| **RClone** | `deleteRCloneRemote(input)` | Delete remote storage | DELETE_ANY:FLASH |
| **UPS** | `configureUps(config)` | Update UPS configuration | UPDATE_ANY:* |
| **API Keys** | `createApiKey(input)` | Create API key | CREATE_ANY:API_KEY |
| **API Keys** | `addRoleForApiKey(input)` | Add role to key | UPDATE_ANY:API_KEY |
| **API Keys** | `removeRoleFromApiKey(input)` | Remove role from key | UPDATE_ANY:API_KEY |
| **API Keys** | `deleteApiKeys(input)` | Delete API keys | DELETE_ANY:API_KEY |
| **API Keys** | `updateApiKey(input)` | Update API key | UPDATE_ANY:API_KEY |
---
## 6. Subscription System
### PubSub Architecture
The subscription system uses `graphql-subscriptions` PubSub with a Node.js EventEmitter (max 30 listeners).
**Core PubSub (`api/src/core/pubsub.ts`):**
```typescript
import EventEmitter from 'events';
import { GRAPHQL_PUBSUB_CHANNEL } from '@unraid/shared/pubsub/graphql.pubsub.js';
import { PubSub } from 'graphql-subscriptions';
const eventEmitter = new EventEmitter();
eventEmitter.setMaxListeners(30);
export const pubsub = new PubSub({ eventEmitter });
export const createSubscription = <T = any>(
channel: GRAPHQL_PUBSUB_CHANNEL | string
): AsyncIterableIterator<T> => {
return pubsub.asyncIterableIterator<T>(channel);
};
```
### PubSub Channel Definitions
**Source:** `packages/unraid-shared/src/pubsub/graphql.pubsub.ts`
```typescript
export const GRAPHQL_PUBSUB_TOKEN = "GRAPHQL_PUBSUB";
export enum GRAPHQL_PUBSUB_CHANNEL {
ARRAY = "ARRAY",
CPU_UTILIZATION = "CPU_UTILIZATION",
CPU_TELEMETRY = "CPU_TELEMETRY",
DASHBOARD = "DASHBOARD",
DISPLAY = "DISPLAY",
INFO = "INFO",
MEMORY_UTILIZATION = "MEMORY_UTILIZATION",
NOTIFICATION = "NOTIFICATION",
NOTIFICATION_ADDED = "NOTIFICATION_ADDED",
NOTIFICATION_OVERVIEW = "NOTIFICATION_OVERVIEW",
NOTIFICATION_WARNINGS_AND_ALERTS = "NOTIFICATION_WARNINGS_AND_ALERTS",
OWNER = "OWNER",
SERVERS = "SERVERS",
VMS = "VMS",
DOCKER_STATS = "DOCKER_STATS",
LOG_FILE = "LOG_FILE",
PARITY = "PARITY",
}
```
### Available Subscriptions
| Subscription | Channel | Interval | Description |
|-------------|---------|----------|-------------|
| `arraySubscription` | ARRAY | Event-based | Real-time array state changes |
| `systemMetricsCpu` | CPU_UTILIZATION | 1 second | CPU utilization data |
| `systemMetricsCpuTelemetry` | CPU_TELEMETRY | 5 seconds | CPU power & temperature |
| `systemMetricsMemory` | MEMORY_UTILIZATION | 2 seconds | Memory utilization |
| `dockerContainerStats` | DOCKER_STATS | Polling | Container performance stats |
| `logFileSubscription(path)` | LOG_FILE (dynamic) | Event-based | Real-time log file updates |
| `notificationAdded` | NOTIFICATION_ADDED | Event-based | New notification created |
| `notificationsOverview` | NOTIFICATION_OVERVIEW | Event-based | Overview stats updates |
| `notificationsWarningsAndAlerts` | NOTIFICATION_WARNINGS_AND_ALERTS | Event-based | Warning/alert changes |
| `upsUpdates` | - | Event-based | UPS device status changes |
### Subscription Management Services
Three-tier subscription management:
1. **SubscriptionManagerService** (low-level, internal)
- Manages both polling and event-based subscriptions
- Polling: Creates intervals via NestJS SchedulerRegistry with overlap guards
- Event-based: Persistent listeners until explicitly stopped
- Methods: `startSubscription()`, `stopSubscription()`, `stopAll()`, `isSubscriptionActive()`
2. **SubscriptionTrackerService** (mid-level)
- Reference-counted subscriptions (auto-cleanup when no subscribers)
3. **SubscriptionHelperService** (high-level, for resolvers)
- GraphQL subscriptions with automatic cleanup
- Used directly in resolver decorators
**Dynamic Topics:** The LOG_FILE channel supports dynamic paths like `LOG_FILE:/var/log/test.log` for monitoring specific log files.
---
## 7. State Management
### Redux Store Architecture
The API uses Redux Toolkit for ephemeral runtime state derived from persistent INI configuration files stored in `/boot/config/`.
```
api/src/store/
├── actions/ # Redux action creators
├── listeners/ # State change event listeners
├── modules/ # Modular state slices
├── services/ # Business logic
├── state-parsers/ # INI file parsing utilities
├── watch/ # Filesystem watchers (Chokidar)
├── index.ts # Store initialization
├── root-reducer.ts # Combined reducer
└── types.ts # State type definitions
```
**Key Design:** The StateManager singleton uses Chokidar to watch filesystem changes on INI config files, enabling reactive synchronization without polling. This accommodates legacy CLI tools and scripts that modify configuration outside the API.
---
## 8. Plugin Architecture
### Dynamic Plugin System
The API supports dynamic plugin loading at runtime through NestJS:
```
packages/
├── unraid-api-plugin-connect/ # Remote access, UPnP integration
├── unraid-api-plugin-generator/ # Code generation
├── unraid-api-plugin-health/ # Health monitoring
└── unraid-shared/ # Shared types, enums, utilities
```
**Plugin Loading:** Plugins load conditionally based on installation state. The `unraid-api-plugin-connect` handles remote access as an optional peer dependency.
### Schema Migration Status
The API is **actively migrating** from schema-first to code-first GraphQL:
- **Completed:** API Key Resolver (1/21)
- **Pending (20 resolvers):** Docker, Array, Disks, VMs, Connect, Display, Info, Owner, Unassigned Devices, Cloud, Flash, Config, Vars, Logs, Users, Notifications, Network, Registration, Servers, Services, Shares
**Migration pattern per resolver:**
1. Create model files with `@ObjectType()` and `@InputType()` decorators
2. Define return types and input parameters as classes
3. Update resolver to use new model classes
4. Create module file for dependency registration
5. Test functionality
---
## 9. Release History
### Recent Releases (Reverse Chronological)
| Version | Date | Highlights |
|---------|------|------------|
| **v4.29.2** | Dec 19, 2025 | Fix: connect plugin not loaded when connect installed |
| **v4.29.1** | Dec 19, 2025 | Reverted docker overview web component; fixed GUID/license race |
| **v4.29.0** | Dec 19, 2025 | Feature: Docker overview web component for 7.3+ |
| **v4.28.2** | Dec 16, 2025 | API startup timeout for v7.0 and v6.12 |
| **v4.28.0** | Dec 15, 2025 | Feature: Plugin cleanup on OS upgrade cancel; keyfile polling; dark mode |
| **v4.27.2** | Nov 21, 2025 | Fix: header flashing and trial date display |
| **v4.27.0** | Nov 19, 2025 | Feature: Removed API log download; fixed connect plugin uninstall |
| **v4.26.0** | Nov 17, 2025 | Feature: CPU power query/subscription; Apollo Studio schema publish |
| **v4.25.0** | Sep 26, 2025 | Feature: Tailwind scoping; notification filter pills |
| **v4.24.0** | Sep 18, 2025 | Feature: Optimized DOM content loading |
| **v4.23.0** | Sep 16, 2025 | Feature: API status manager |
### Milestone Releases
- **Open-sourced:** January 2025
- **v4.0.0:** OIDC/SSO support and permissions system
- **Native in Unraid 7.2+:** October 29, 2025
---
## 10. Roadmap & Upcoming Features
### Near-Term (Q1-Q2 2025, some may be completed)
- Developer Tools for Plugins (Q2)
- New modernized settings pages (Q2)
- Redesigned Unraid Connect configuration (Q1)
- Custom theme creation (Q2-Q3)
- Storage pool management (Q2)
### Mid-Term (Q3 2025)
- Modern Docker status interface redesign
- New plugins interface with redesigned management UI
- Streamlined Docker container deployment
- Real-time pool health monitoring
### Under Consideration (TBD)
- Docker Compose native support
- Advanced plugin configuration/development tools
- Storage share creation, settings, and unified management dashboard
---
## 11. Open Issues & Community Requests
### Open Issues: 32 total
#### Feature Requests (Enhancements)
| Issue | Title | Description |
|-------|-------|-------------|
| #1873 | Invoke Mover through API | Programmatic access to the Mover tool |
| #1872 | CLI list with creation dates | Timestamp data in CLI operations |
| #1871 | Container restart/update mutation | Single operation to restart+update containers |
| #1839 | SMART disk data | Detailed disk health monitoring via SMART |
| #1827-1828 | Nuxt UI upgrades | Interface modernization |
#### Reported Bugs
| Issue | Title | Impact |
|-------|-------|--------|
| #1861 | VM suspension issues | Cannot unsuspend PMSUSPENDED VMs |
| #1842 | Temperature inconsistency | SSD temps unavailable in Disk queries but accessible via Array |
| #1840 | Cache invalidation | Docker container data stale after external changes |
| #1837 | GraphQL partial failures | Entire queries fail when VMs/Docker unavailable |
| #1859 | Notification counting errors | Archive counts include duplicates |
| #1818 | Network query failures | GraphQL returns empty lists for network data |
| #1825 | UPS false data | Hardcoded values returned when no UPS connected |
#### Key Takeaways for unraid-mcp
1. **#1837 is critical**: We should handle partial GraphQL failures gracefully
2. **#1842**: Temperature data should be queried from Array endpoint, not Disk
3. **#1840**: Docker cache may return stale data; consider cache-busting strategies
4. **#1825**: UPS data validation needed - API returns fake data with no UPS
5. **#1861**: VM `PMSUSPENDED` state needs special handling
6. **#1871**: Container restart+update is a common need not yet in the API
---
## 12. Community Projects & Integrations
### 1. Unraid Management Agent (Go)
**Repository:** https://github.com/ruaan-deysel/unraid-management-agent
**Author:** Ruaan Deysel
**Language:** Go
A comprehensive third-party plugin providing:
- **57 REST endpoints** at `http://localhost:8043/api/v1`
- **54 MCP tools** for AI agent integration
- **41 Prometheus metrics** for monitoring
- **WebSocket** real-time event streaming
- **MQTT** publishing for IoT integration
**Architecture:** Event-driven with collectors -> event bus -> API cache pattern
- System Collector (15s): CPU, RAM, temperatures
- Array/Disk Collectors (30s): Storage metrics
- Docker/VM Collectors (30s): Container/VM data
- Native Go libraries (Docker SDK, libvirt bindings, /proc/sys access)
**Key Endpoints:**
```
/api/v1/health # Health check
/api/v1/system # System info
/api/v1/array # Array status
/api/v1/disks # Disk info
/api/v1/docker # Docker containers
/api/v1/vm # Virtual machines
/api/v1/network # Network interfaces
/api/v1/shares # User shares
/api/v1/gpu # GPU metrics
/api/v1/ups # UPS status
/api/v1/settings/* # Disk thresholds, mover config
/api/v1/plugins # Plugin info
/api/v1/updates # Update status
```
### 2. Home Assistant - domalab/ha-unraid
**Repository:** https://github.com/domalab/ha-unraid
**Status:** Active (rebuilt in 2025.12.0 for GraphQL)
**Requires:** Unraid 7.2.0+, API key
**Features:**
- CPU usage, temperature, power consumption monitoring
- Memory utilization tracking
- Array state, per-disk and per-share metrics
- Docker container start/stop switches
- VM management controls
- UPS monitoring with energy dashboard integration
- Notification counts
- Dynamic entity creation (only creates entities for available services)
**Polling:** System data 30s, storage data 5min
### 3. Home Assistant - chris-mc1/unraid_api
**Repository:** https://github.com/chris-mc1/unraid_api
**Status:** Active
**Requires:** Unraid 7.2+, API key with Info/Servers/Array/Disk/Share read permissions
**Features:**
- Array status, storage utilization
- RAM and CPU usage
- Per-share free space (optional)
- Per-disk metrics: temperature, spin state, capacity
- Python-based (99.9%)
### 4. Home Assistant - ruaan-deysel/ha-unraid
**Repository:** https://github.com/ruaan-deysel/ha-unraid
**Status:** Active
**Note:** Uses the management agent's REST API rather than official GraphQL
### 5. Home Assistant - IDmedia/hass-unraid
**Repository:** https://github.com/IDmedia/hass-unraid
**Approach:** Docker container that parses WebSocket messages and forwards to HA via MQTT
### 6. unraid-mcp (This Project)
**Repository:** https://github.com/jmagar/unraid-mcp
**Language:** Python (FastMCP)
**Features:** 26 MCP tools, GraphQL client, WebSocket subscriptions
---
## 13. Architectural Insights for unraid-mcp
### Gaps in Our Current Implementation
Based on this research, potential improvements for unraid-mcp:
#### Missing Queries We Could Add
1. **Metrics subscriptions** - CPU (1s), CPU telemetry (5s), memory (2s) real-time data
2. **Docker port conflicts** - `portConflicts` query
3. **Docker organizer** - Folder management queries/mutations
4. **Docker update statuses** - Check for container image updates
5. **Parity check operations** - Start (with correct flag), pause, resume, cancel
6. **UPS monitoring** - Devices, configuration, real-time updates subscription
7. **API key management** - Full CRUD on API keys
8. **Settings management** - System settings queries
9. **SSO/OIDC configuration** - SSO settings
10. **Disk mount/unmount** - `mountArrayDisk` and `unmountArrayDisk` mutations
11. **Container removal** - `removeContainer` with optional image cleanup
12. **Container bulk updates** - `updateContainers` and `updateAllContainers`
13. **Flash backup** - Flash drive backup operations
#### GraphQL Query Patterns to Match
**Official query examples from Unraid docs:**
```graphql
# System Status
query {
info {
os { platform, distro, release, uptime }
cpu { manufacturer, brand, cores, threads }
}
}
# Array Monitoring
query {
array {
state
capacity { disks { free, used, total } }
disks { name, size, status, temp }
}
}
# Docker Containers
query {
dockerContainers {
id, names, state, status, autoStart
}
}
```
#### Authentication Best Practices
- Use `x-api-key` header (not query parameters)
- API keys need specific resource:action permissions
- For our MCP server, recommend keys with: `READ_ANY` on all resources + `UPDATE_ANY` on DOCKER, VMS, ARRAY for management operations
- Keys are stored at `/boot/config/plugins/unraid-api/`
#### Known Issues to Handle
1. **Partial query failures (#1837):** Wrap individual sections in try/catch; don't let VM failures crash Docker queries (see the sketch after this list)
2. **Temperature inconsistency (#1842):** Prefer Array endpoint for temperature data
3. **Docker cache staleness (#1840):** Use cache-busting options when available
4. **UPS phantom data (#1825):** Validate UPS data before presenting
5. **VM PMSUSPENDED (#1861):** Handle this state explicitly; unsuspend may fail
6. **Increased timeouts for disks:** The official API uses 90s read timeouts for disk operations
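A minimal client-side sketch of item 1: issue each top-level section as its own request so one failing resolver cannot take down the rest (`make_graphql_request` and `ToolError` are this project's existing helpers; the queries are trimmed examples):
```python
from unraid_mcp.core.client import make_graphql_request
from unraid_mcp.core.exceptions import ToolError

SECTIONS = {
    "docker": "query { docker { containers { id names state } } }",
    "vms": "query { vms { domains { id name state } } }",
}

async def gather_overview() -> dict:
    """Fetch each section independently and degrade gracefully per section."""
    results: dict = {}
    for name, query in SECTIONS.items():
        try:
            results[name] = await make_graphql_request(query)
        except ToolError as exc:
            results[name] = {"error": str(exc)}
    return results
```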
#### Subscription Channel Mapping
Our subscription implementation should align with the official channels:
```
ARRAY -> array state changes
CPU_UTILIZATION -> 1s CPU metrics
CPU_TELEMETRY -> 5s CPU power/temp
MEMORY_UTILIZATION -> 2s memory metrics
DOCKER_STATS -> container stats
LOG_FILE + dynamic path -> log file tailing
NOTIFICATION_ADDED -> new notifications
NOTIFICATION_OVERVIEW -> notification counts
NOTIFICATION_WARNINGS_AND_ALERTS -> warnings/alerts
PARITY -> parity check progress
VMS -> VM state changes
```
#### Performance Considerations
- Max 30 concurrent subscription connections (EventEmitter limit)
- Disk operations need extended timeouts (90s+)
- Docker `sizeRootFs` query is expensive; make it optional
- Storage data polling at 5min intervals (not faster) due to SMART query overhead
- cache-manager v7 expects TTL in milliseconds (not seconds)
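The disk-timeout guidance translates directly into client configuration. A sketch with `httpx` mirroring this project's `DEFAULT_TIMEOUT`/`DISK_TIMEOUT` split (the values shown are illustrative):
```python
import httpx

DEFAULT_TIMEOUT = httpx.Timeout(30.0)          # ordinary queries
DISK_TIMEOUT = httpx.Timeout(30.0, read=90.0)  # disk reads need 90s+, per the official API

async def post_query(client: httpx.AsyncClient, body: dict, *, disk_op: bool = False) -> dict:
    resp = await client.post(
        "https://your-unraid-server/graphql",  # placeholder URL
        json=body,
        timeout=DISK_TIMEOUT if disk_op else DEFAULT_TIMEOUT,
    )
    resp.raise_for_status()
    return resp.json()
```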
---
## Appendix: Key Source File References
| File | Purpose |
|------|---------|
| `packages/unraid-shared/src/pubsub/graphql.pubsub.ts` | PubSub channel enum (17 channels) |
| `packages/unraid-shared/src/graphql-enums.ts` | AuthAction, Resource (35), Role enums |
| `packages/unraid-shared/src/graphql.model.ts` | Shared GraphQL models |
| `packages/unraid-shared/src/use-permissions.directive.ts` | Permission enforcement decorator |
| `api/src/core/pubsub.ts` | PubSub singleton + subscription factory |
| `api/src/unraid-api/auth/auth.service.ts` | 3-strategy auth (API key, cookie, local) |
| `api/src/unraid-api/auth/api-key.service.ts` | API key CRUD + validation |
| `api/src/unraid-api/auth/casbin/policy.ts` | RBAC policy definitions |
| `api/src/unraid-api/graph/resolvers/docker/docker.resolver.ts` | Docker queries + organizer |
| `api/src/unraid-api/graph/resolvers/docker/docker.mutations.resolver.ts` | Docker mutations (9 ops) |
| `api/src/unraid-api/graph/resolvers/vms/vms.resolver.ts` | VM queries |
| `api/src/unraid-api/graph/resolvers/vms/vms.mutations.resolver.ts` | VM mutations (7 ops) |
| `api/src/unraid-api/graph/resolvers/array/array.resolver.ts` | Array query + subscription |
| `api/src/unraid-api/graph/resolvers/array/array.mutations.resolver.ts` | Array mutations (6 ops) |
| `api/src/unraid-api/graph/resolvers/array/parity.mutations.resolver.ts` | Parity mutations (4 ops) |
| `api/src/unraid-api/graph/resolvers/notifications/notifications.resolver.ts` | Notification CRUD + subs |
| `api/src/unraid-api/graph/resolvers/metrics/metrics.resolver.ts` | System metrics + subs |
| `api/src/unraid-api/graph/resolvers/logs/logs.resolver.ts` | Log queries + subscription |
| `api/src/unraid-api/graph/resolvers/rclone/rclone.resolver.ts` | RClone queries |
| `api/src/unraid-api/graph/resolvers/rclone/rclone.mutation.resolver.ts` | RClone mutations |
| `api/src/unraid-api/graph/resolvers/ups/ups.resolver.ts` | UPS queries + mutations + sub |
| `api/src/unraid-api/graph/resolvers/api-key/api-key.mutation.ts` | API key mutations (5 ops) |
| `api/generated-schema.graphql` | Complete auto-generated schema |

File diff suppressed because it is too large

View File

@@ -10,7 +10,7 @@ build-backend = "hatchling.build"
 # ============================================================================
 [project]
 name = "unraid-mcp"
-version = "0.2.0"
+version = "0.4.5"
 description = "MCP Server for Unraid API - provides tools to interact with an Unraid server's GraphQL API"
 readme = "README.md"
 license = {file = "LICENSE"}
@@ -77,7 +77,6 @@ dependencies = [
     "uvicorn[standard]>=0.35.0",
     "websockets>=15.0.1",
     "rich>=14.1.0",
-    "pytz>=2025.2",
 ]

 # ============================================================================
@@ -108,8 +107,13 @@ only-include = ["unraid_mcp"]
 include = [
     "/unraid_mcp",
     "/tests",
+    "/commands",
+    "/skills",
     "/README.md",
     "/LICENSE",
+    "/CLAUDE.md",
+    "/AGENTS.md",
+    "/GEMINI.md",
     "/pyproject.toml",
     "/.env.example",
 ]
@@ -121,6 +125,8 @@ exclude = [
     "/.docs",
     "/.full-review",
     "/docs",
+    "/dist",
+    "/logs",
     "*.pyc",
     "__pycache__",
 ]
@@ -170,6 +176,8 @@ select = [
     "PERF",
     # Ruff-specific rules
     "RUF",
+    # flake8-bandit (security)
+    "S",
 ]
 ignore = [
     "E501", # line too long (handled by ruff formatter)
@@ -188,7 +196,7 @@ ignore = [
 [tool.ruff.lint.per-file-ignores]
 "__init__.py" = ["F401", "D104"]
-"tests/**/*.py" = ["D", "S101", "PLR2004"] # Allow asserts and magic values in tests
+"tests/**/*.py" = ["D", "S101", "S105", "S106", "S107", "PLR2004"] # Allow test-only patterns

 [tool.ruff.lint.pydocstyle]
 convention = "google"
@@ -285,7 +293,6 @@ dev = [
     "pytest-asyncio>=1.2.0",
     "pytest-cov>=7.0.0",
     "respx>=0.22.0",
-    "types-pytz>=2025.2.0.20250809",
     "ty>=0.0.15",
     "ruff>=0.12.8",
     "build>=1.2.2",

View File

@@ -19,6 +19,7 @@ from tests.conftest import make_tool_fn
from unraid_mcp.core.client import DEFAULT_TIMEOUT, DISK_TIMEOUT, make_graphql_request from unraid_mcp.core.client import DEFAULT_TIMEOUT, DISK_TIMEOUT, make_graphql_request
from unraid_mcp.core.exceptions import ToolError from unraid_mcp.core.exceptions import ToolError
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
# Shared fixtures # Shared fixtures
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
@@ -158,43 +159,43 @@ class TestHttpErrorHandling:
@respx.mock @respx.mock
async def test_http_401_raises_tool_error(self) -> None: async def test_http_401_raises_tool_error(self) -> None:
respx.post(API_URL).mock(return_value=httpx.Response(401, text="Unauthorized")) respx.post(API_URL).mock(return_value=httpx.Response(401, text="Unauthorized"))
with pytest.raises(ToolError, match="HTTP error 401"): with pytest.raises(ToolError, match="Unraid API returned HTTP 401"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_http_403_raises_tool_error(self) -> None: async def test_http_403_raises_tool_error(self) -> None:
respx.post(API_URL).mock(return_value=httpx.Response(403, text="Forbidden")) respx.post(API_URL).mock(return_value=httpx.Response(403, text="Forbidden"))
with pytest.raises(ToolError, match="HTTP error 403"): with pytest.raises(ToolError, match="Unraid API returned HTTP 403"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_http_500_raises_tool_error(self) -> None: async def test_http_500_raises_tool_error(self) -> None:
respx.post(API_URL).mock(return_value=httpx.Response(500, text="Internal Server Error")) respx.post(API_URL).mock(return_value=httpx.Response(500, text="Internal Server Error"))
with pytest.raises(ToolError, match="HTTP error 500"): with pytest.raises(ToolError, match="Unraid API returned HTTP 500"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_http_503_raises_tool_error(self) -> None: async def test_http_503_raises_tool_error(self) -> None:
respx.post(API_URL).mock(return_value=httpx.Response(503, text="Service Unavailable")) respx.post(API_URL).mock(return_value=httpx.Response(503, text="Service Unavailable"))
with pytest.raises(ToolError, match="HTTP error 503"): with pytest.raises(ToolError, match="Unraid API returned HTTP 503"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_network_connection_error(self) -> None: async def test_network_connection_error(self) -> None:
respx.post(API_URL).mock(side_effect=httpx.ConnectError("Connection refused")) respx.post(API_URL).mock(side_effect=httpx.ConnectError("Connection refused"))
with pytest.raises(ToolError, match="Network connection error"): with pytest.raises(ToolError, match="Network error connecting to Unraid API"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_network_timeout_error(self) -> None: async def test_network_timeout_error(self) -> None:
respx.post(API_URL).mock(side_effect=httpx.ReadTimeout("Read timed out")) respx.post(API_URL).mock(side_effect=httpx.ReadTimeout("Read timed out"))
with pytest.raises(ToolError, match="Network connection error"): with pytest.raises(ToolError, match="Network error connecting to Unraid API"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@respx.mock @respx.mock
async def test_invalid_json_response(self) -> None: async def test_invalid_json_response(self) -> None:
respx.post(API_URL).mock(return_value=httpx.Response(200, text="not json")) respx.post(API_URL).mock(return_value=httpx.Response(200, text="not json"))
with pytest.raises(ToolError, match="Invalid JSON response"): with pytest.raises(ToolError, match=r"invalid response.*not valid JSON"):
await make_graphql_request("query { online }") await make_graphql_request("query { online }")
@@ -227,9 +228,7 @@ class TestGraphQLErrorHandling:
@respx.mock @respx.mock
async def test_idempotent_start_error_returns_success(self) -> None: async def test_idempotent_start_error_returns_success(self) -> None:
respx.post(API_URL).mock( respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(errors=[{"message": "Container already running"}])
errors=[{"message": "Container already running"}]
)
) )
result = await make_graphql_request( result = await make_graphql_request(
'mutation { docker { start(id: "x") } }', 'mutation { docker { start(id: "x") } }',
@@ -241,9 +240,7 @@ class TestGraphQLErrorHandling:
@respx.mock @respx.mock
async def test_idempotent_stop_error_returns_success(self) -> None: async def test_idempotent_stop_error_returns_success(self) -> None:
respx.post(API_URL).mock( respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(errors=[{"message": "Container not running"}])
errors=[{"message": "Container not running"}]
)
) )
result = await make_graphql_request( result = await make_graphql_request(
'mutation { docker { stop(id: "x") } }', 'mutation { docker { stop(id: "x") } }',
@@ -274,7 +271,13 @@ class TestInfoToolRequests:
async def test_overview_sends_correct_query(self) -> None: async def test_overview_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"info": {"os": {"platform": "linux", "hostname": "tower"}, "cpu": {}, "memory": {}}} {
"info": {
"os": {"platform": "linux", "hostname": "tower"},
"cpu": {},
"memory": {},
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -328,9 +331,7 @@ class TestInfoToolRequests:
@respx.mock @respx.mock
async def test_online_sends_correct_query(self) -> None: async def test_online_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
return_value=_graphql_response({"online": True})
)
tool = self._get_tool() tool = self._get_tool()
await tool(action="online") await tool(action="online")
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
@@ -373,9 +374,7 @@ class TestDockerToolRequests:
async def test_list_sends_correct_query(self) -> None: async def test_list_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"docker": {"containers": [ {"docker": {"containers": [{"id": "c1", "names": ["plex"], "state": "running"}]}}
{"id": "c1", "names": ["plex"], "state": "running"}
]}}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -388,10 +387,16 @@ class TestDockerToolRequests:
container_id = "a" * 64 container_id = "a" * 64
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"docker": {"start": { {
"id": container_id, "names": ["plex"], "docker": {
"state": "running", "status": "Up", "start": {
}}} "id": container_id,
"names": ["plex"],
"state": "running",
"status": "Up",
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -405,10 +410,16 @@ class TestDockerToolRequests:
container_id = "b" * 64 container_id = "b" * 64
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"docker": {"stop": { {
"id": container_id, "names": ["sonarr"], "docker": {
"state": "exited", "status": "Exited", "stop": {
}}} "id": container_id,
"names": ["sonarr"],
"state": "exited",
"status": "Exited",
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -450,9 +461,11 @@ class TestDockerToolRequests:
async def test_networks_sends_correct_query(self) -> None: async def test_networks_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"dockerNetworks": [ {
{"id": "n1", "name": "bridge", "driver": "bridge", "scope": "local"} "dockerNetworks": [
]} {"id": "n1", "name": "bridge", "driver": "bridge", "scope": "local"}
]
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -463,9 +476,7 @@ class TestDockerToolRequests:
@respx.mock @respx.mock
async def test_check_updates_sends_correct_query(self) -> None: async def test_check_updates_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response({"docker": {"containerUpdateStatuses": []}})
{"docker": {"containerUpdateStatuses": []}}
)
) )
tool = self._get_tool() tool = self._get_tool()
await tool(action="check_updates") await tool(action="check_updates")
@@ -484,17 +495,29 @@ class TestDockerToolRequests:
call_count += 1 call_count += 1
if "StopContainer" in body["query"]: if "StopContainer" in body["query"]:
return _graphql_response( return _graphql_response(
{"docker": {"stop": { {
"id": container_id, "names": ["app"], "docker": {
"state": "exited", "status": "Exited", "stop": {
}}} "id": container_id,
"names": ["app"],
"state": "exited",
"status": "Exited",
}
}
}
) )
if "StartContainer" in body["query"]: if "StartContainer" in body["query"]:
return _graphql_response( return _graphql_response(
{"docker": {"start": { {
"id": container_id, "names": ["app"], "docker": {
"state": "running", "status": "Up", "start": {
}}} "id": container_id,
"names": ["app"],
"state": "running",
"status": "Up",
}
}
}
) )
return _graphql_response({"docker": {"containers": []}}) return _graphql_response({"docker": {"containers": []}})
@@ -521,10 +544,16 @@ class TestDockerToolRequests:
) )
if "StartContainer" in body["query"]: if "StartContainer" in body["query"]:
return _graphql_response( return _graphql_response(
{"docker": {"start": { {
"id": resolved_id, "names": ["plex"], "docker": {
"state": "running", "status": "Up", "start": {
}}} "id": resolved_id,
"names": ["plex"],
"state": "running",
"status": "Up",
}
}
}
) )
return _graphql_response({}) return _graphql_response({})
@@ -545,17 +574,17 @@ class TestVMToolRequests:
@staticmethod @staticmethod
def _get_tool(): def _get_tool():
return make_tool_fn( return make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
"unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"
)
@respx.mock @respx.mock
async def test_list_sends_correct_query(self) -> None: async def test_list_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"vms": {"domains": [ {
{"id": "v1", "name": "win10", "state": "running", "uuid": "u1"} "vms": {
]}} "domains": [{"id": "v1", "name": "win10", "state": "running", "uuid": "u1"}]
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -566,9 +595,7 @@ class TestVMToolRequests:
@respx.mock @respx.mock
async def test_start_sends_mutation_with_id(self) -> None: async def test_start_sends_mutation_with_id(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(return_value=_graphql_response({"vm": {"start": True}}))
return_value=_graphql_response({"vm": {"start": True}})
)
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="start", vm_id="vm-123") result = await tool(action="start", vm_id="vm-123")
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
@@ -578,11 +605,9 @@ class TestVMToolRequests:
@respx.mock @respx.mock
async def test_stop_sends_mutation_with_id(self) -> None: async def test_stop_sends_mutation_with_id(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(return_value=_graphql_response({"vm": {"stop": True}}))
return_value=_graphql_response({"vm": {"stop": True}})
)
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="stop", vm_id="vm-456") await tool(action="stop", vm_id="vm-456")
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
assert "StopVM" in body["query"] assert "StopVM" in body["query"]
assert body["variables"] == {"id": "vm-456"} assert body["variables"] == {"id": "vm-456"}
@@ -614,10 +639,14 @@ class TestVMToolRequests:
async def test_details_finds_vm_by_name(self) -> None: async def test_details_finds_vm_by_name(self) -> None:
respx.post(API_URL).mock( respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"vms": {"domains": [ {
{"id": "v1", "name": "win10", "state": "running", "uuid": "uuid-1"}, "vms": {
{"id": "v2", "name": "ubuntu", "state": "stopped", "uuid": "uuid-2"}, "domains": [
]}} {"id": "v1", "name": "win10", "state": "running", "uuid": "uuid-1"},
{"id": "v2", "name": "ubuntu", "state": "stopped", "uuid": "uuid-2"},
]
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -641,9 +670,15 @@ class TestArrayToolRequests:
async def test_parity_status_sends_correct_query(self) -> None: async def test_parity_status_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"array": {"parityCheckStatus": { {
"progress": 50, "speed": "100 MB/s", "errors": 0, "array": {
}}} "parityCheckStatus": {
"progress": 50,
"speed": "100 MB/s",
"errors": 0,
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -658,9 +693,10 @@ class TestArrayToolRequests:
return_value=_graphql_response({"parityCheck": {"start": True}}) return_value=_graphql_response({"parityCheck": {"start": True}})
) )
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="parity_start") result = await tool(action="parity_start", correct=False)
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
assert "StartParityCheck" in body["query"] assert "StartParityCheck" in body["query"]
assert body["variables"] == {"correct": False}
assert result["success"] is True assert result["success"] is True
@respx.mock @respx.mock
@@ -704,9 +740,7 @@ class TestStorageToolRequests:
@staticmethod @staticmethod
def _get_tool(): def _get_tool():
return make_tool_fn( return make_tool_fn("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage")
"unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage"
)
@respx.mock @respx.mock
async def test_shares_sends_correct_query(self) -> None: async def test_shares_sends_correct_query(self) -> None:
@@ -735,10 +769,16 @@ class TestStorageToolRequests:
async def test_disk_details_sends_variable(self) -> None: async def test_disk_details_sends_variable(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"disk": { {
"id": "d1", "device": "sda", "name": "Disk 1", "disk": {
"serialNum": "SN123", "size": 1000000, "temperature": 35, "id": "d1",
}} "device": "sda",
"name": "Disk 1",
"serialNum": "SN123",
"size": 1000000,
"temperature": 35,
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -764,10 +804,14 @@ class TestStorageToolRequests:
async def test_logs_sends_path_and_lines_variables(self) -> None: async def test_logs_sends_path_and_lines_variables(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"logFile": { {
"path": "/var/log/syslog", "content": "log line", "logFile": {
"totalLines": 100, "startLine": 1, "path": "/var/log/syslog",
}} "content": "log line",
"totalLines": 100,
"startLine": 1,
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -785,9 +829,7 @@ class TestStorageToolRequests:
@respx.mock @respx.mock
async def test_unassigned_sends_correct_query(self) -> None: async def test_unassigned_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(return_value=_graphql_response({"unassignedDevices": []}))
return_value=_graphql_response({"unassignedDevices": []})
)
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="unassigned") result = await tool(action="unassigned")
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
@@ -815,9 +857,13 @@ class TestNotificationsToolRequests:
async def test_overview_sends_correct_query(self) -> None: async def test_overview_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"notifications": {"overview": { {
"unread": {"info": 1, "warning": 0, "alert": 0, "total": 1}, "notifications": {
}}} "overview": {
"unread": {"info": 1, "warning": 0, "alert": 0, "total": 1},
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -831,9 +877,7 @@ class TestNotificationsToolRequests:
return_value=_graphql_response({"notifications": {"list": []}}) return_value=_graphql_response({"notifications": {"list": []}})
) )
tool = self._get_tool() tool = self._get_tool()
await tool( await tool(action="list", list_type="ARCHIVE", importance="WARNING", offset=5, limit=10)
action="list", list_type="ARCHIVE", importance="WARNING", offset=5, limit=10
)
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
assert "ListNotifications" in body["query"] assert "ListNotifications" in body["query"]
filt = body["variables"]["filter"] filt = body["variables"]["filter"]
@@ -857,9 +901,13 @@ class TestNotificationsToolRequests:
async def test_create_sends_input_variables(self) -> None: async def test_create_sends_input_variables(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"notifications": {"createNotification": { {
"id": "n1", "title": "Test", "importance": "INFO", "createNotification": {
}}} "id": "n1",
"title": "Test",
"importance": "INFO",
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -875,14 +923,12 @@ class TestNotificationsToolRequests:
inp = body["variables"]["input"] inp = body["variables"]["input"]
assert inp["title"] == "Test" assert inp["title"] == "Test"
assert inp["subject"] == "Sub" assert inp["subject"] == "Sub"
assert inp["importance"] == "INFO" # uppercased assert inp["importance"] == "INFO" # uppercased from "info"
@respx.mock @respx.mock
async def test_archive_sends_id_variable(self) -> None: async def test_archive_sends_id_variable(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response({"archiveNotification": {"id": "notif-1"}})
{"notifications": {"archiveNotification": True}}
)
) )
tool = self._get_tool() tool = self._get_tool()
await tool(action="archive", notification_id="notif-1") await tool(action="archive", notification_id="notif-1")
@@ -899,9 +945,7 @@ class TestNotificationsToolRequests:
@respx.mock @respx.mock
async def test_delete_sends_id_and_type(self) -> None: async def test_delete_sends_id_and_type(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response({"deleteNotification": {"unread": {"total": 0}}})
{"notifications": {"deleteNotification": True}}
)
) )
tool = self._get_tool() tool = self._get_tool()
await tool( await tool(
@@ -918,9 +962,7 @@ class TestNotificationsToolRequests:
@respx.mock @respx.mock
async def test_archive_all_sends_importance_when_provided(self) -> None: async def test_archive_all_sends_importance_when_provided(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response({"archiveAll": {"archive": {"total": 1}}})
{"notifications": {"archiveAll": True}}
)
) )
tool = self._get_tool() tool = self._get_tool()
await tool(action="archive_all", importance="warning") await tool(action="archive_all", importance="warning")
@@ -939,9 +981,7 @@ class TestRCloneToolRequests:
@staticmethod @staticmethod
def _get_tool(): def _get_tool():
return make_tool_fn( return make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")
"unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone"
)
@respx.mock @respx.mock
async def test_list_remotes_sends_correct_query(self) -> None: async def test_list_remotes_sends_correct_query(self) -> None:
@@ -960,9 +1000,15 @@ class TestRCloneToolRequests:
async def test_config_form_sends_provider_type(self) -> None: async def test_config_form_sends_provider_type(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"rclone": {"configForm": { {
"id": "form1", "dataSchema": {}, "uiSchema": {}, "rclone": {
}}} "configForm": {
"id": "form1",
"dataSchema": {},
"uiSchema": {},
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -975,9 +1021,15 @@ class TestRCloneToolRequests:
async def test_create_remote_sends_input_variables(self) -> None: async def test_create_remote_sends_input_variables(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"rclone": {"createRCloneRemote": { {
"name": "my-s3", "type": "s3", "parameters": {}, "rclone": {
}}} "createRCloneRemote": {
"name": "my-s3",
"type": "s3",
"parameters": {},
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -992,7 +1044,7 @@ class TestRCloneToolRequests:
inp = body["variables"]["input"] inp = body["variables"]["input"]
assert inp["name"] == "my-s3" assert inp["name"] == "my-s3"
assert inp["type"] == "s3" assert inp["type"] == "s3"
assert inp["config"] == {"bucket": "my-bucket"} assert inp["parameters"] == {"bucket": "my-bucket"}
@respx.mock @respx.mock
async def test_delete_remote_requires_confirm(self) -> None: async def test_delete_remote_requires_confirm(self) -> None:
@@ -1023,18 +1075,20 @@ class TestUsersToolRequests:
@staticmethod @staticmethod
def _get_tool(): def _get_tool():
return make_tool_fn( return make_tool_fn("unraid_mcp.tools.users", "register_users_tool", "unraid_users")
"unraid_mcp.tools.users", "register_users_tool", "unraid_users"
)
@respx.mock @respx.mock
async def test_me_sends_correct_query(self) -> None: async def test_me_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"me": { {
"id": "u1", "name": "admin", "me": {
"description": "Admin", "roles": ["admin"], "id": "u1",
}} "name": "admin",
"description": "Admin",
"roles": ["admin"],
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -1059,9 +1113,7 @@ class TestKeysToolRequests:
@respx.mock @respx.mock
async def test_list_sends_correct_query(self) -> None: async def test_list_sends_correct_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response({"apiKeys": [{"id": "k1", "name": "my-key"}]})
{"apiKeys": [{"id": "k1", "name": "my-key"}]}
)
) )
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="list") result = await tool(action="list")
@@ -1086,10 +1138,16 @@ class TestKeysToolRequests:
async def test_create_sends_input_variables(self) -> None: async def test_create_sends_input_variables(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"createApiKey": { {
"id": "k2", "name": "new-key", "apiKey": {
"key": "secret", "roles": ["read"], "create": {
}} "id": "k2",
"name": "new-key",
"key": "secret",
"roles": ["read"],
}
}
}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -1105,7 +1163,7 @@ class TestKeysToolRequests:
async def test_update_sends_input_variables(self) -> None: async def test_update_sends_input_variables(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(
{"updateApiKey": {"id": "k1", "name": "renamed", "roles": ["admin"]}} {"apiKey": {"update": {"id": "k1", "name": "renamed", "roles": ["admin"]}}}
) )
) )
tool = self._get_tool() tool = self._get_tool()
@@ -1125,12 +1183,12 @@ class TestKeysToolRequests:
@respx.mock @respx.mock
async def test_delete_sends_ids_when_confirmed(self) -> None: async def test_delete_sends_ids_when_confirmed(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response({"deleteApiKeys": True}) return_value=_graphql_response({"apiKey": {"delete": True}})
) )
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="delete", key_id="k1", confirm=True) result = await tool(action="delete", key_id="k1", confirm=True)
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
assert "DeleteApiKeys" in body["query"] assert "DeleteApiKey" in body["query"]
assert body["variables"]["input"]["ids"] == ["k1"] assert body["variables"]["input"]["ids"] == ["k1"]
assert result["success"] is True assert result["success"] is True
@@ -1145,15 +1203,11 @@ class TestHealthToolRequests:
@staticmethod @staticmethod
def _get_tool(): def _get_tool():
return make_tool_fn( return make_tool_fn("unraid_mcp.tools.health", "register_health_tool", "unraid_health")
"unraid_mcp.tools.health", "register_health_tool", "unraid_health"
)
@respx.mock @respx.mock
async def test_test_connection_sends_online_query(self) -> None: async def test_test_connection_sends_online_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
return_value=_graphql_response({"online": True})
)
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="test_connection") result = await tool(action="test_connection")
body = _extract_request_body(route.calls.last.request) body = _extract_request_body(route.calls.last.request)
@@ -1164,21 +1218,23 @@ class TestHealthToolRequests:
@respx.mock @respx.mock
async def test_check_sends_comprehensive_query(self) -> None: async def test_check_sends_comprehensive_query(self) -> None:
route = respx.post(API_URL).mock( route = respx.post(API_URL).mock(
return_value=_graphql_response({ return_value=_graphql_response(
"info": { {
"machineId": "m1", "info": {
"time": 1234567890, "machineId": "m1",
"versions": {"unraid": "7.0"}, "time": 1234567890,
"os": {"uptime": 86400}, "versions": {"unraid": "7.0"},
}, "os": {"uptime": 86400},
"array": {"state": "STARTED"}, },
"notifications": { "array": {"state": "STARTED"},
"overview": {"unread": {"alert": 0, "warning": 1, "total": 3}}, "notifications": {
}, "overview": {"unread": {"alert": 0, "warning": 1, "total": 3}},
"docker": { },
"containers": [{"id": "c1", "state": "running", "status": "Up"}], "docker": {
}, "containers": [{"id": "c1", "state": "running", "status": "Up"}],
}) },
}
)
) )
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="check") result = await tool(action="check")
@@ -1189,9 +1245,7 @@ class TestHealthToolRequests:
@respx.mock @respx.mock
async def test_test_connection_measures_latency(self) -> None: async def test_test_connection_measures_latency(self) -> None:
respx.post(API_URL).mock( respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
return_value=_graphql_response({"online": True})
)
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="test_connection") result = await tool(action="test_connection")
assert "latency_ms" in result assert "latency_ms" in result
@@ -1200,18 +1254,21 @@ class TestHealthToolRequests:
@respx.mock @respx.mock
async def test_check_reports_warning_on_alerts(self) -> None: async def test_check_reports_warning_on_alerts(self) -> None:
respx.post(API_URL).mock( respx.post(API_URL).mock(
return_value=_graphql_response({ return_value=_graphql_response(
"info": { {
"machineId": "m1", "time": 0, "info": {
"versions": {"unraid": "7.0"}, "machineId": "m1",
"os": {"uptime": 0}, "time": 0,
}, "versions": {"unraid": "7.0"},
"array": {"state": "STARTED"}, "os": {"uptime": 0},
"notifications": { },
"overview": {"unread": {"alert": 3, "warning": 0, "total": 5}}, "array": {"state": "STARTED"},
}, "notifications": {
"docker": {"containers": []}, "overview": {"unread": {"alert": 3, "warning": 0, "total": 5}},
}) },
"docker": {"containers": []},
}
)
) )
tool = self._get_tool() tool = self._get_tool()
result = await tool(action="check") result = await tool(action="check")
@@ -1250,37 +1307,25 @@ class TestCrossCuttingConcerns:
@respx.mock @respx.mock
async def test_tool_error_from_http_layer_propagates(self) -> None: async def test_tool_error_from_http_layer_propagates(self) -> None:
"""When an HTTP error occurs, the ToolError bubbles up through the tool.""" """When an HTTP error occurs, the ToolError bubbles up through the tool."""
respx.post(API_URL).mock( respx.post(API_URL).mock(return_value=httpx.Response(500, text="Server Error"))
return_value=httpx.Response(500, text="Server Error") tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
) with pytest.raises(ToolError, match="Unraid API returned HTTP 500"):
tool = make_tool_fn(
"unraid_mcp.tools.info", "register_info_tool", "unraid_info"
)
with pytest.raises(ToolError, match="HTTP error 500"):
await tool(action="online") await tool(action="online")
@respx.mock @respx.mock
async def test_network_error_propagates_through_tool(self) -> None: async def test_network_error_propagates_through_tool(self) -> None:
"""When a network error occurs, the ToolError bubbles up through the tool.""" """When a network error occurs, the ToolError bubbles up through the tool."""
respx.post(API_URL).mock( respx.post(API_URL).mock(side_effect=httpx.ConnectError("Connection refused"))
side_effect=httpx.ConnectError("Connection refused") tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
) with pytest.raises(ToolError, match="Network error connecting to Unraid API"):
tool = make_tool_fn(
"unraid_mcp.tools.info", "register_info_tool", "unraid_info"
)
with pytest.raises(ToolError, match="Network connection error"):
await tool(action="online") await tool(action="online")
@respx.mock @respx.mock
async def test_graphql_error_propagates_through_tool(self) -> None: async def test_graphql_error_propagates_through_tool(self) -> None:
"""When a GraphQL error occurs, the ToolError bubbles up through the tool.""" """When a GraphQL error occurs, the ToolError bubbles up through the tool."""
respx.post(API_URL).mock( respx.post(API_URL).mock(
return_value=_graphql_response( return_value=_graphql_response(errors=[{"message": "Permission denied"}])
errors=[{"message": "Permission denied"}]
)
)
tool = make_tool_fn(
"unraid_mcp.tools.info", "register_info_tool", "unraid_info"
) )
tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
with pytest.raises(ToolError, match="Permission denied"): with pytest.raises(ToolError, match="Permission denied"):
await tool(action="online") await tool(action="online")

View File

@@ -7,7 +7,7 @@ data management without requiring a live Unraid server.
import asyncio import asyncio
import json import json
from datetime import datetime from datetime import UTC, datetime
from typing import Any from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch from unittest.mock import AsyncMock, MagicMock, patch
@@ -16,6 +16,7 @@ import websockets.exceptions
from unraid_mcp.subscriptions.manager import SubscriptionManager from unraid_mcp.subscriptions.manager import SubscriptionManager
pytestmark = pytest.mark.integration pytestmark = pytest.mark.integration
@@ -83,7 +84,7 @@ SAMPLE_QUERY = "subscription { test { value } }"
# Shared patch targets # Shared patch targets
_WS_CONNECT = "unraid_mcp.subscriptions.manager.websockets.connect" _WS_CONNECT = "unraid_mcp.subscriptions.manager.websockets.connect"
_API_URL = "unraid_mcp.subscriptions.manager.UNRAID_API_URL" _API_URL = "unraid_mcp.subscriptions.utils.UNRAID_API_URL"
_API_KEY = "unraid_mcp.subscriptions.manager.UNRAID_API_KEY" _API_KEY = "unraid_mcp.subscriptions.manager.UNRAID_API_KEY"
_SSL_CTX = "unraid_mcp.subscriptions.manager.build_ws_ssl_context" _SSL_CTX = "unraid_mcp.subscriptions.manager.build_ws_ssl_context"
_SLEEP = "unraid_mcp.subscriptions.manager.asyncio.sleep" _SLEEP = "unraid_mcp.subscriptions.manager.asyncio.sleep"
@@ -100,7 +101,7 @@ class TestSubscriptionManagerInit:
mgr = SubscriptionManager() mgr = SubscriptionManager()
assert mgr.active_subscriptions == {} assert mgr.active_subscriptions == {}
assert mgr.resource_data == {} assert mgr.resource_data == {}
assert mgr.websocket is None assert not hasattr(mgr, "websocket")
def test_default_auto_start_enabled(self) -> None: def test_default_auto_start_enabled(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
@@ -720,20 +721,20 @@ class TestWebSocketURLConstruction:
class TestResourceData: class TestResourceData:
def test_get_resource_data_returns_none_when_empty(self) -> None: async def test_get_resource_data_returns_none_when_empty(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
assert mgr.get_resource_data("nonexistent") is None assert await mgr.get_resource_data("nonexistent") is None
def test_get_resource_data_returns_stored_data(self) -> None: async def test_get_resource_data_returns_stored_data(self) -> None:
from unraid_mcp.core.types import SubscriptionData from unraid_mcp.core.types import SubscriptionData
mgr = SubscriptionManager() mgr = SubscriptionManager()
mgr.resource_data["test"] = SubscriptionData( mgr.resource_data["test"] = SubscriptionData(
data={"key": "value"}, data={"key": "value"},
last_updated=datetime.now(), last_updated=datetime.now(UTC),
subscription_type="test", subscription_type="test",
) )
result = mgr.get_resource_data("test") result = await mgr.get_resource_data("test")
assert result == {"key": "value"} assert result == {"key": "value"}
def test_list_active_subscriptions_empty(self) -> None: def test_list_active_subscriptions_empty(self) -> None:
@@ -755,46 +756,46 @@ class TestResourceData:
class TestSubscriptionStatus: class TestSubscriptionStatus:
def test_status_includes_all_configured_subscriptions(self) -> None: async def test_status_includes_all_configured_subscriptions(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
for name in mgr.subscription_configs: for name in mgr.subscription_configs:
assert name in status assert name in status
def test_status_default_connection_state(self) -> None: async def test_status_default_connection_state(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
for sub_status in status.values(): for sub_status in status.values():
assert sub_status["runtime"]["connection_state"] == "not_started" assert sub_status["runtime"]["connection_state"] == "not_started"
def test_status_shows_active_flag(self) -> None: async def test_status_shows_active_flag(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
mgr.active_subscriptions["logFileSubscription"] = MagicMock() mgr.active_subscriptions["logFileSubscription"] = MagicMock()
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
assert status["logFileSubscription"]["runtime"]["active"] is True assert status["logFileSubscription"]["runtime"]["active"] is True
def test_status_shows_data_availability(self) -> None: async def test_status_shows_data_availability(self) -> None:
from unraid_mcp.core.types import SubscriptionData from unraid_mcp.core.types import SubscriptionData
mgr = SubscriptionManager() mgr = SubscriptionManager()
mgr.resource_data["logFileSubscription"] = SubscriptionData( mgr.resource_data["logFileSubscription"] = SubscriptionData(
data={"log": "content"}, data={"log": "content"},
last_updated=datetime.now(), last_updated=datetime.now(UTC),
subscription_type="logFileSubscription", subscription_type="logFileSubscription",
) )
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
assert status["logFileSubscription"]["data"]["available"] is True assert status["logFileSubscription"]["data"]["available"] is True
def test_status_shows_error_info(self) -> None: async def test_status_shows_error_info(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
mgr.last_error["logFileSubscription"] = "Test error message" mgr.last_error["logFileSubscription"] = "Test error message"
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
assert status["logFileSubscription"]["runtime"]["last_error"] == "Test error message" assert status["logFileSubscription"]["runtime"]["last_error"] == "Test error message"
def test_status_reconnect_attempts_tracked(self) -> None: async def test_status_reconnect_attempts_tracked(self) -> None:
mgr = SubscriptionManager() mgr = SubscriptionManager()
mgr.reconnect_attempts["logFileSubscription"] = 3 mgr.reconnect_attempts["logFileSubscription"] = 3
status = mgr.get_subscription_status() status = await mgr.get_subscription_status()
assert status["logFileSubscription"]["runtime"]["reconnect_attempts"] == 3 assert status["logFileSubscription"]["runtime"]["reconnect_attempts"] == 3

151
tests/mcporter/README.md Normal file
View File

@@ -0,0 +1,151 @@
# mcporter Integration Tests
Live integration smoke-tests for the unraid-mcp server, exercising real API calls via [mcporter](https://github.com/mcporter/mcporter).
---
## Two Scripts, Two Transports
| | `test-tools.sh` | `test-actions.sh` |
|-|-----------------|-------------------|
| **Transport** | stdio | HTTP |
| **Server required** | No — launched ad-hoc per call | Yes — must be running at `$MCP_URL` |
| **Flags** | `--timeout-ms N`, `--parallel`, `--verbose` | positional `[MCP_URL]` |
| **Coverage** | 10 tools (read-only actions only) | 11 tools (all non-destructive actions) |
| **Use case** | CI / offline local check | Live server smoke-test |
### `test-tools.sh` — stdio, no running server needed
```bash
./tests/mcporter/test-tools.sh # sequential, 25s timeout
./tests/mcporter/test-tools.sh --parallel # parallel suites
./tests/mcporter/test-tools.sh --timeout-ms 10000 # tighter timeout
./tests/mcporter/test-tools.sh --verbose # print raw responses
```
Launches `uv run unraid-mcp-server` in stdio mode for each tool call. Requires `mcporter`, `uv`, and `python3` in `PATH`. Good for CI pipelines — no persistent server process needed.
### `test-actions.sh` — HTTP, requires a live server
```bash
./tests/mcporter/test-actions.sh # default: http://localhost:6970/mcp
./tests/mcporter/test-actions.sh http://10.1.0.2:6970/mcp # explicit URL
UNRAID_MCP_URL=http://10.1.0.2:6970/mcp ./tests/mcporter/test-actions.sh
```
Connects to an already-running streamable-http server. Covers all read-only actions across 10 of the 11 tools (`unraid_settings` is all-mutations and therefore fully skipped; all destructive mutations are explicitly skipped).
---
## What `test-actions.sh` Tests
### Phase 1 — Param-free reads
All actions requiring no arguments beyond `action` itself; a representative call is shown after the table.
| Tool | Actions tested |
|------|----------------|
| `unraid_info` | `overview`, `array`, `network`, `registration`, `connect`, `variables`, `metrics`, `services`, `display`, `config`, `online`, `owner`, `settings`, `server`, `servers`, `flash`, `ups_devices`, `ups_device`, `ups_config` |
| `unraid_array` | `parity_status` |
| `unraid_storage` | `disks`, `shares`, `unassigned`, `log_files` |
| `unraid_docker` | `list`, `networks`, `port_conflicts`, `check_updates`, `sync_templates`, `refresh_digests` |
| `unraid_vm` | `list` |
| `unraid_notifications` | `overview`, `list`, `warnings`, `recalculate` |
| `unraid_rclone` | `list_remotes`, `config_form` |
| `unraid_users` | `me` |
| `unraid_keys` | `list` |
| `unraid_health` | `check`, `test_connection`, `diagnose` |
| `unraid_settings` | *(all 9 actions skipped — mutations only)* |
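
Each row above reduces to a single mcporter invocation. A representative call, using the same flags as the script's `mcall` helper and the default server URL:

```bash
# One param-free read, issued exactly as the harness does:
mcporter call \
  --http-url http://localhost:6970/mcp \
  --allow-http \
  --tool unraid_info \
  --args '{"action":"overview"}' \
  --output json
```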
### Phase 2 — ID-discovered reads
IDs are extracted from Phase 1 responses and used for actions requiring a specific resource. Each is skipped if Phase 1 returned no matching resources. A minimal sketch of the pattern follows the table.
| Action | Source of ID |
|--------|--------------|
| `docker: details` | first container from `docker: list` |
| `docker: logs` | first container from `docker: list` |
| `docker: network_details` | first network from `docker: networks` |
| `storage: disk_details` | first disk from `storage: disks` |
| `storage: logs` | first path from `storage: log_files` |
| `vm: details` | first VM from `vm: list` |
| `keys: get` | first key from `keys: list` |
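
The discover-then-query pattern behind this table, as a standalone sketch. The `containers`/`data.containers` fallbacks mirror the extraction code in `test-actions.sh`; actual response shapes may vary:

```bash
# Phase 1: capture the container list as JSON.
LIST=$(mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_docker --args '{"action":"list"}' --output json)

# Pull the first container ID out of the response (empty if none found).
CID=$(echo "$LIST" | python3 -c "
import json, sys
d = json.load(sys.stdin)
containers = d.get('containers') or d.get('data', {}).get('containers') or []
print(containers[0].get('id', '') if containers else '')
")

# Phase 2: reuse the discovered ID for a targeted read; skip when nothing was found.
[[ -n "$CID" ]] && mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_docker --args "{\"action\":\"details\",\"container_id\":\"$CID\"}" --output json
```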
### Skipped actions (and why)
| Label | Meaning |
|-------|---------|
| `destructive (confirm=True required)` | Permanently modifies or deletes data |
| `mutation — state-changing` | Modifies live system state (container/VM lifecycle, settings) |
| `mutation — creates …` | Creates a new resource |
**Full skip list:**
- `unraid_info`: `update_server`, `update_ssh`
- `unraid_array`: `parity_start`, `parity_pause`, `parity_resume`, `parity_cancel`
- `unraid_storage`: `flash_backup`
- `unraid_docker`: `start`, `stop`, `restart`, `pause`, `unpause`, `update`, `remove`, `update_all`, `create_folder`, `set_folder_children`, `delete_entries`, `move_to_folder`, `move_to_position`, `rename_folder`, `create_folder_with_items`, `update_view_prefs`, `reset_template_mappings`
- `unraid_vm`: `start`, `stop`, `pause`, `resume`, `reboot`, `force_stop`, `reset`
- `unraid_notifications`: `create`, `create_unique`, `archive`, `unread`, `archive_all`, `archive_many`, `unarchive_many`, `unarchive_all`, `delete`, `delete_archived`
- `unraid_rclone`: `create_remote`, `delete_remote`
- `unraid_keys`: `create`, `update`, `delete`
- `unraid_settings`: all 9 actions
### Output format
```
<action label> PASS
<action label> FAIL
<first 3 lines of error detail>
<action label> SKIP (reason)
Results: 42 passed 0 failed 37 skipped (79 total)
```
Exit code `0` when all executed tests pass, `1` if any fail.
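
The exit code makes the suite easy to gate on in automation. For example, a CI step against the secondary test server (10.1.0.3, per the Safe Testing Strategy below):

```bash
# Gate a CI job on the smoke test; any FAIL flips the exit code to 1.
if ! ./tests/mcporter/test-actions.sh http://10.1.0.3:6970/mcp; then
  echo "unraid-mcp smoke test failed" >&2
  exit 1
fi
```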
---
## Destructive Actions
Neither script executes destructive actions. They are explicitly `skip_test`-ed with reason `"destructive (confirm=True required)"`.
All destructive actions require `confirm=True` at the call site. There is no environment variable gate — `confirm` is the sole guard.
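
What the guard looks like at the call site, sketched with the API-key delete action (`<test-key-id>` is a placeholder for a disposable key's ID):

```bash
# Without the guard, the tool refuses and deletes nothing:
mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_keys --args '{"action":"delete","key_id":"<test-key-id>"}' --output json

# The same call with confirm=true actually performs the deletion:
mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_keys --args '{"action":"delete","key_id":"<test-key-id>","confirm":true}' \
  --output json
```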
### Safe Testing Strategy
| Strategy | When to use |
|----------|-------------|
| **Create → destroy** | Action has a create counterpart (keys, notifications, rclone remotes, docker folders) |
| **No-op apply** | Action mutates config but can be re-applied with current values unchanged (`update_ssh`) |
| **Dedicated test remote** | Action requires a remote target (`flash_backup`) |
| **Test VM** | Action requires a live VM (`force_stop`, `reset`) |
| **Mock/safety audit only** | Global blast radius, no safe isolation (`update_all`, `reset_template_mappings`, `setup_remote_access`, `configure_ups`) |
| **Secondary server only** | Run on `shart` (10.1.0.3), never `tootie` (10.1.0.2) |
For exact per-action mcporter commands, see [`docs/DESTRUCTIVE_ACTIONS.md`](../../docs/DESTRUCTIVE_ACTIONS.md).
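
As a concrete instance of the create → destroy strategy, a sketch for API keys. The `name` and `roles` values are illustrative, and the ID-extraction paths are assumptions about the response shape:

```bash
# Create a throwaway key, capture its ID, then delete it with the confirm guard.
CREATED=$(mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_keys --args '{"action":"create","name":"mcp-smoke-test","roles":["guest"]}' \
  --output json)

KEY_ID=$(echo "$CREATED" | python3 -c "
import json, sys
d = json.load(sys.stdin)
print(d.get('id') or d.get('data', {}).get('id') or '')
")

[[ -n "$KEY_ID" ]] && mcporter call --http-url http://localhost:6970/mcp --allow-http \
  --tool unraid_keys --args "{\"action\":\"delete\",\"key_id\":\"$KEY_ID\",\"confirm\":true}" \
  --output json
```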
---
## Prerequisites
```bash
# mcporter CLI
npm install -g mcporter
# uv (for test-tools.sh stdio mode)
curl -LsSf https://astral.sh/uv/install.sh | sh
# python3 — used for inline JSON extraction
python3 --version # 3.12+
# Running server (for test-actions.sh only)
docker compose up -d
# or
uv run unraid-mcp-server
```
---
## Cleanup
`test-actions.sh` connects to an existing server and leaves it running; apart from a transient mktemp file used for error capture (removed as soon as it is read), it writes nothing to disk. `test-tools.sh` spawns stdio server subprocesses per call — they exit when mcporter finishes each invocation — and may write a timestamped log file under `${TMPDIR:-/tmp}`. Neither script leaves background processes.

407
tests/mcporter/test-actions.sh Executable file
View File

@@ -0,0 +1,407 @@
#!/usr/bin/env bash
# test-actions.sh — Test all non-destructive Unraid MCP actions via mcporter
#
# Usage:
#   ./tests/mcporter/test-actions.sh [MCP_URL]
#
# Default MCP_URL: http://localhost:6970/mcp
# Skips: destructive (confirm=True required), state-changing mutations,
# and actions requiring IDs not yet discovered.
#
# Phase 1: param-free reads
# Phase 2: ID-discovered reads (container, network, disk, vm, key, log)
set -euo pipefail
MCP_URL="${1:-${UNRAID_MCP_URL:-http://localhost:6970/mcp}}"
# ── colours ──────────────────────────────────────────────────────────────────
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
PASS=0; FAIL=0; SKIP=0
declare -a FAILED_TESTS=()
# ── helpers ───────────────────────────────────────────────────────────────────
mcall() {
    # mcall <tool> <json-args>
    local tool="$1" args="$2"
    mcporter call \
        --http-url "$MCP_URL" \
        --allow-http \
        --tool "$tool" \
        --args "$args" \
        --output json \
        2>&1
}
_check_output() {
    # Returns 0 if output looks like a successful JSON response, 1 otherwise.
    local output="$1" exit_code="$2"
    [[ $exit_code -ne 0 ]] && return 1
    echo "$output" | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    if isinstance(d, dict) and (d.get('isError') or d.get('error') or 'ToolError' in str(d)):
        sys.exit(1)
except Exception:
    pass
sys.exit(0)
" 2>/dev/null
}
run_test() {
    # Print result; do NOT echo the JSON body (kept quiet for readability).
    local label="$1" tool="$2" args="$3"
    printf " %-60s" "$label"
    local output exit_code=0
    output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
    if _check_output "$output" "$exit_code"; then
        echo -e "${GREEN}PASS${NC}"
        ((PASS++)) || true
    else
        echo -e "${RED}FAIL${NC}"
        ((FAIL++)) || true
        FAILED_TESTS+=("$label")
        # Show first 3 lines of error detail, indented
        echo "$output" | head -3 | sed 's/^/ /'
    fi
}
run_test_capture() {
    # Like run_test but leaves the raw JSON in the global CAPTURED_OUTPUT for
    # ID extraction by the caller. Call it directly, not via $(...): a command
    # substitution would run it in a subshell and discard the PASS/FAIL updates.
    local label="$1" tool="$2" args="$3"
    local output exit_code=0
    printf " %-60s" "$label"
    output=$(mcall "$tool" "$args" 2>&1) || exit_code=$?
    if _check_output "$output" "$exit_code"; then
        echo -e "${GREEN}PASS${NC}"
        ((PASS++)) || true
    else
        echo -e "${RED}FAIL${NC}"
        ((FAIL++)) || true
        FAILED_TESTS+=("$label")
        echo "$output" | head -3 | sed 's/^/ /'
    fi
    CAPTURED_OUTPUT="$output"
}
extract_id() {
    # Extract an ID from JSON output using a Python snippet; the result lands
    # in the global EXTRACTED_ID. Call it directly, not via $(...), so the
    # FAIL counter survives (a subshell would discard it).
    # If JSON parsing fails (malformed mcporter output), record a FAIL.
    # If parsing succeeds but finds no items, EXTRACTED_ID stays empty (caller skips).
    local json_input="$1" label="$2" py_code="$3"
    local result="" py_exit=0 parse_err=""
    # Capture stdout (the extracted ID) and stderr (any parse errors) separately.
    # A temp file is needed because $() can only capture one stream.
    local errfile
    errfile=$(mktemp)
    result=$(echo "$json_input" | python3 -c "$py_code" 2>"$errfile") || py_exit=$?
    parse_err=$(<"$errfile")
    rm -f "$errfile"
    EXTRACTED_ID=""
    if [[ $py_exit -ne 0 ]]; then
        printf " %-60s${RED}FAIL${NC} (JSON parse error)\n" "$label"
        [[ -n "$parse_err" ]] && echo "$parse_err" | head -2 | sed 's/^/ /'
        ((FAIL++)) || true
        FAILED_TESTS+=("$label (JSON parse)")
        return 0
    fi
    EXTRACTED_ID="$result"
}
skip_test() {
    local label="$1" reason="$2"
    printf " %-60s${YELLOW}SKIP${NC} (%s)\n" "$label" "$reason"
    ((SKIP++)) || true
}

section() {
    echo ""
    echo -e "${CYAN}${BOLD}━━━ $1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}
# ── connectivity check ────────────────────────────────────────────────────────
echo ""
echo -e "${BOLD}Unraid MCP Non-Destructive Action Test Suite${NC}"
echo -e "Server: ${CYAN}$MCP_URL${NC}"
echo ""
printf "Checking connectivity... "
# Use -s (silent) without -f: a 4xx/406 means the MCP server is up and
# responding correctly to a plain GET — only "connection refused" is fatal.
# Capture curl's exit code directly — don't mask failures with a fallback.
HTTP_CODE=""
curl_exit=0
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 "$MCP_URL" 2>/dev/null) || curl_exit=$?
if [[ $curl_exit -ne 0 ]]; then
    echo -e "${RED}UNREACHABLE${NC} (curl exit code: $curl_exit)"
    echo "Start the server first: docker compose up -d OR uv run unraid-mcp-server"
    exit 1
fi
echo -e "${GREEN}OK${NC} (HTTP $HTTP_CODE)"
# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 1 — Param-free read actions
# ═══════════════════════════════════════════════════════════════════════════════
section "unraid_info (19 query actions)"
run_test "info: overview" unraid_info '{"action":"overview"}'
run_test "info: array" unraid_info '{"action":"array"}'
run_test "info: network" unraid_info '{"action":"network"}'
run_test "info: registration" unraid_info '{"action":"registration"}'
run_test "info: connect" unraid_info '{"action":"connect"}'
run_test "info: variables" unraid_info '{"action":"variables"}'
run_test "info: metrics" unraid_info '{"action":"metrics"}'
run_test "info: services" unraid_info '{"action":"services"}'
run_test "info: display" unraid_info '{"action":"display"}'
run_test "info: config" unraid_info '{"action":"config"}'
run_test "info: online" unraid_info '{"action":"online"}'
run_test "info: owner" unraid_info '{"action":"owner"}'
run_test "info: settings" unraid_info '{"action":"settings"}'
run_test "info: server" unraid_info '{"action":"server"}'
run_test "info: servers" unraid_info '{"action":"servers"}'
run_test "info: flash" unraid_info '{"action":"flash"}'
run_test "info: ups_devices" unraid_info '{"action":"ups_devices"}'
run_test "info: ups_device" unraid_info '{"action":"ups_device"}'
run_test "info: ups_config" unraid_info '{"action":"ups_config"}'
skip_test "info: update_server" "mutation — state-changing"
skip_test "info: update_ssh" "mutation — state-changing"
section "unraid_array"
run_test "array: parity_status" unraid_array '{"action":"parity_status"}'
skip_test "array: parity_start" "mutation — starts parity check"
skip_test "array: parity_pause" "mutation — pauses parity check"
skip_test "array: parity_resume" "mutation — resumes parity check"
skip_test "array: parity_cancel" "mutation — cancels parity check"
section "unraid_storage (param-free reads)"
run_test_capture "storage: disks" unraid_storage '{"action":"disks"}'
STORAGE_DISKS="$CAPTURED_OUTPUT"
run_test "storage: shares" unraid_storage '{"action":"shares"}'
run_test "storage: unassigned" unraid_storage '{"action":"unassigned"}'
run_test_capture "storage: log_files" unraid_storage '{"action":"log_files"}'
LOG_FILES="$CAPTURED_OUTPUT"
skip_test "storage: flash_backup" "destructive (confirm=True required)"
section "unraid_docker (param-free reads)"
run_test_capture "docker: list" unraid_docker '{"action":"list"}'
DOCKER_LIST="$CAPTURED_OUTPUT"
run_test_capture "docker: networks" unraid_docker '{"action":"networks"}'
DOCKER_NETS="$CAPTURED_OUTPUT"
run_test "docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
run_test "docker: check_updates" unraid_docker '{"action":"check_updates"}'
run_test "docker: sync_templates" unraid_docker '{"action":"sync_templates"}'
run_test "docker: refresh_digests" unraid_docker '{"action":"refresh_digests"}'
skip_test "docker: start" "mutation — changes container state"
skip_test "docker: stop" "mutation — changes container state"
skip_test "docker: restart" "mutation — changes container state"
skip_test "docker: pause" "mutation — changes container state"
skip_test "docker: unpause" "mutation — changes container state"
skip_test "docker: update" "mutation — updates container image"
skip_test "docker: remove" "destructive (confirm=True required)"
skip_test "docker: update_all" "destructive (confirm=True required)"
skip_test "docker: create_folder" "mutation — changes organizer state"
skip_test "docker: set_folder_children" "mutation — changes organizer state"
skip_test "docker: delete_entries" "destructive (confirm=True required)"
skip_test "docker: move_to_folder" "mutation — changes organizer state"
skip_test "docker: move_to_position" "mutation — changes organizer state"
skip_test "docker: rename_folder" "mutation — changes organizer state"
skip_test "docker: create_folder_with_items" "mutation — changes organizer state"
skip_test "docker: update_view_prefs" "mutation — changes organizer state"
skip_test "docker: reset_template_mappings" "destructive (confirm=True required)"
section "unraid_vm (param-free reads)"
run_test_capture "vm: list" unraid_vm '{"action":"list"}'
VM_LIST="$CAPTURED_OUTPUT"
skip_test "vm: start" "mutation — changes VM state"
skip_test "vm: stop" "mutation — changes VM state"
skip_test "vm: pause" "mutation — changes VM state"
skip_test "vm: resume" "mutation — changes VM state"
skip_test "vm: reboot" "mutation — changes VM state"
skip_test "vm: force_stop" "destructive (confirm=True required)"
skip_test "vm: reset" "destructive (confirm=True required)"
section "unraid_notifications"
run_test "notifications: overview" unraid_notifications '{"action":"overview"}'
run_test "notifications: list" unraid_notifications '{"action":"list"}'
run_test "notifications: warnings" unraid_notifications '{"action":"warnings"}'
run_test "notifications: recalculate" unraid_notifications '{"action":"recalculate"}'
skip_test "notifications: create" "mutation — creates notification"
skip_test "notifications: create_unique" "mutation — creates notification"
skip_test "notifications: archive" "mutation — changes notification state"
skip_test "notifications: unread" "mutation — changes notification state"
skip_test "notifications: archive_all" "mutation — changes notification state"
skip_test "notifications: archive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_many" "mutation — changes notification state"
skip_test "notifications: unarchive_all" "mutation — changes notification state"
skip_test "notifications: delete" "destructive (confirm=True required)"
skip_test "notifications: delete_archived" "destructive (confirm=True required)"
section "unraid_rclone"
run_test "rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
run_test "rclone: config_form" unraid_rclone '{"action":"config_form"}'
skip_test "rclone: create_remote" "mutation — creates remote"
skip_test "rclone: delete_remote" "destructive (confirm=True required)"
section "unraid_users"
run_test "users: me" unraid_users '{"action":"me"}'
section "unraid_keys"
KEYS_LIST=$(run_test_capture "keys: list" unraid_keys '{"action":"list"}')
skip_test "keys: create" "mutation — creates API key"
skip_test "keys: update" "mutation — modifies API key"
skip_test "keys: delete" "destructive (confirm=True required)"
section "unraid_health"
run_test "health: check" unraid_health '{"action":"check"}'
run_test "health: test_connection" unraid_health '{"action":"test_connection"}'
run_test "health: diagnose" unraid_health '{"action":"diagnose"}'
section "unraid_settings (all mutations — skipped)"
skip_test "settings: update" "mutation — modifies settings"
skip_test "settings: update_temperature" "mutation — modifies settings"
skip_test "settings: update_time" "mutation — modifies settings"
skip_test "settings: configure_ups" "destructive (confirm=True required)"
skip_test "settings: update_api" "mutation — modifies settings"
skip_test "settings: connect_sign_in" "mutation — authentication action"
skip_test "settings: connect_sign_out" "mutation — authentication action"
skip_test "settings: setup_remote_access" "destructive (confirm=True required)"
skip_test "settings: enable_dynamic_remote_access" "destructive (confirm=True required)"
# ═══════════════════════════════════════════════════════════════════════════════
# PHASE 2 — ID-discovered read actions
# ═══════════════════════════════════════════════════════════════════════════════
section "Phase 2: ID-discovered reads"
# ── docker container ID ───────────────────────────────────────────────────────
CONTAINER_ID=$(extract_id "$DOCKER_LIST" "docker: extract container ID" "
import json, sys
d = json.load(sys.stdin)
containers = d.get('containers') or d.get('data', {}).get('containers') or []
if isinstance(containers, list) and containers:
c = containers[0]
cid = c.get('id') or c.get('names', [''])[0].lstrip('/')
if cid:
print(cid)
")
if [[ -n "$CONTAINER_ID" ]]; then
run_test "docker: details (id=$CONTAINER_ID)" \
unraid_docker "{\"action\":\"details\",\"container_id\":\"$CONTAINER_ID\"}"
run_test "docker: logs (id=$CONTAINER_ID)" \
unraid_docker "{\"action\":\"logs\",\"container_id\":\"$CONTAINER_ID\",\"tail_lines\":20}"
else
skip_test "docker: details" "no containers found to discover ID"
skip_test "docker: logs" "no containers found to discover ID"
fi
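# NOTE: the capture → extract → run-or-skip sequence above is the template
# for every Phase 2 block below: reuse a Phase 1 JSON capture, pull the
# first usable ID with an inline python3 snippet, and fall back to
# skip_test when the server has nothing to offer.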
# ── docker network ID ─────────────────────────────────────────────────────────
NETWORK_ID=$(extract_id "$DOCKER_NETS" "docker: extract network ID" "
import json, sys
d = json.load(sys.stdin)
nets = d.get('networks') or d.get('data', {}).get('networks') or []
if isinstance(nets, list) and nets:
nid = nets[0].get('id') or nets[0].get('Id')
if nid:
print(nid)
")
if [[ -n "$NETWORK_ID" ]]; then
run_test "docker: network_details (id=$NETWORK_ID)" \
unraid_docker "{\"action\":\"network_details\",\"network_id\":\"$NETWORK_ID\"}"
else
skip_test "docker: network_details" "no networks found to discover ID"
fi
# ── disk ID ───────────────────────────────────────────────────────────────────
DISK_ID=$(extract_id "$STORAGE_DISKS" "storage: extract disk ID" "
import json, sys
d = json.load(sys.stdin)
disks = d.get('disks') or d.get('data', {}).get('disks') or []
if isinstance(disks, list) and disks:
did = disks[0].get('id') or disks[0].get('device')
if did:
print(did)
")
if [[ -n "$DISK_ID" ]]; then
run_test "storage: disk_details (id=$DISK_ID)" \
unraid_storage "{\"action\":\"disk_details\",\"disk_id\":\"$DISK_ID\"}"
else
skip_test "storage: disk_details" "no disks found to discover ID"
fi
# ── log path ──────────────────────────────────────────────────────────────────
LOG_PATH=$(extract_id "$LOG_FILES" "storage: extract log path" "
import json, sys
d = json.load(sys.stdin)
files = d.get('log_files') or d.get('files') or d.get('data', {}).get('log_files') or []
if isinstance(files, list) and files:
p = files[0].get('path') or (files[0] if isinstance(files[0], str) else None)
if p:
print(p)
")
if [[ -n "$LOG_PATH" ]]; then
run_test "storage: logs (path=$LOG_PATH)" \
unraid_storage "{\"action\":\"logs\",\"log_path\":\"$LOG_PATH\",\"tail_lines\":20}"
else
skip_test "storage: logs" "no log files found to discover path"
fi
# ── VM ID ─────────────────────────────────────────────────────────────────────
VM_ID=$(extract_id "$VM_LIST" "vm: extract VM ID" "
import json, sys
d = json.load(sys.stdin)
vms = d.get('vms') or d.get('data', {}).get('vms') or []
if isinstance(vms, list) and vms:
vid = vms[0].get('uuid') or vms[0].get('id') or vms[0].get('name')
if vid:
print(vid)
")
if [[ -n "$VM_ID" ]]; then
run_test "vm: details (id=$VM_ID)" \
unraid_vm "{\"action\":\"details\",\"vm_id\":\"$VM_ID\"}"
else
skip_test "vm: details" "no VMs found to discover ID"
fi
# ── API key ID ────────────────────────────────────────────────────────────────
KEY_ID=$(extract_id "$KEYS_LIST" "keys: extract key ID" "
import json, sys
d = json.load(sys.stdin)
keys = d.get('keys') or d.get('apiKeys') or d.get('data', {}).get('keys') or []
if isinstance(keys, list) and keys:
kid = keys[0].get('id')
if kid:
print(kid)
")
if [[ -n "$KEY_ID" ]]; then
run_test "keys: get (id=$KEY_ID)" \
unraid_keys "{\"action\":\"get\",\"key_id\":\"$KEY_ID\"}"
else
skip_test "keys: get" "no API keys found to discover ID"
fi
# ═══════════════════════════════════════════════════════════════════════════════
# SUMMARY
# ═══════════════════════════════════════════════════════════════════════════════
TOTAL=$((PASS + FAIL + SKIP))
echo ""
echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}Results: ${GREEN}${PASS} passed${NC} ${RED}${FAIL} failed${NC} ${YELLOW}${SKIP} skipped${NC} (${TOTAL} total)"
if [[ ${#FAILED_TESTS[@]} -gt 0 ]]; then
echo ""
echo -e "${RED}${BOLD}Failed tests:${NC}"
for t in "${FAILED_TESTS[@]}"; do
echo -e " ${RED}${NC} $t"
done
fi
echo ""
[[ $FAIL -eq 0 ]] && exit 0 || exit 1

338
tests/mcporter/test-destructive.sh Executable file
View File

@@ -0,0 +1,338 @@
#!/usr/bin/env bash
# test-destructive.sh — Safe destructive action tests for unraid-mcp
#
# Tests all 15 destructive actions using create→destroy and no-op patterns.
# Actions with global blast radius (no safe isolation) are skipped.
#
# Transport: stdio — spawns uv run unraid-mcp-server per call; no running server needed.
#
# Usage:
# ./tests/mcporter/test-destructive.sh [--confirm]
#
# Options:
# --confirm REQUIRED to execute destructive tests; without it, dry-runs only
#
# Exit codes:
# 0 — all executable tests passed (or dry-run)
# 1 — one or more tests failed
# 2 — prerequisite check failed
set -uo pipefail
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
readonly SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
readonly SCRIPT_NAME="$(basename -- "${BASH_SOURCE[0]}")"
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
# ---------------------------------------------------------------------------
# Defaults
# ---------------------------------------------------------------------------
readonly PROJECT_DIR="$(cd -- "${SCRIPT_DIR}/../.." && pwd -P)"
CONFIRM=false
PASS=0; FAIL=0; SKIP=0
declare -a FAILED_TESTS=()
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
while [[ $# -gt 0 ]]; do
case "$1" in
--confirm) CONFIRM=true; shift ;;
-h|--help)
printf 'Usage: %s [--confirm]\n' "${SCRIPT_NAME}"
exit 0
;;
*) printf '[ERROR] Unknown argument: %s\n' "$1" >&2; exit 2 ;;
esac
done
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
section() { echo ""; echo -e "${CYAN}${BOLD}━━━ $1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"; }
pass_test() {
printf " %-60s${GREEN}PASS${NC}\n" "$1"
((PASS++)) || true
}
fail_test() {
local label="$1" reason="$2"
printf " %-60s${RED}FAIL${NC}\n" "${label}"
printf " %s\n" "${reason}"
((FAIL++)) || true
FAILED_TESTS+=("${label}")
}
skip_test() {
printf " %-60s${YELLOW}SKIP${NC} (%s)\n" "$1" "$2"
((SKIP++)) || true
}
dry_run() {
printf " %-60s${CYAN}DRY-RUN${NC}\n" "$1"
((SKIP++)) || true
}
mcall() {
local tool="$1" args="$2"
mcporter call \
--stdio "uv run --project ${PROJECT_DIR} unraid-mcp-server" \
--tool "$tool" \
--args "$args" \
--output json \
2>/dev/null
}
extract() {
  # extract <json> <python-expression>
  # JSON arrives on stdin rather than interpolated into the -c source, so
  # payloads containing quotes or backslashes cannot break the Python parse.
  printf '%s' "$1" | python3 -c "import json,sys; d=json.load(sys.stdin); print($2)" 2>/dev/null || true
}
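# Example (illustrative only; the 'unread' key path is an assumption about
# the overview payload, not a documented contract):
#   raw="$(mcall unraid_notifications '{"action":"overview"}')"
#   total="$(extract "${raw}" "d.get('unread', {}).get('total', 0)")"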
# ---------------------------------------------------------------------------
# Connectivity check
# ---------------------------------------------------------------------------
echo ""
echo -e "${BOLD}Unraid MCP Destructive Action Test Suite${NC}"
echo -e "Transport: ${CYAN}stdio (uv run unraid-mcp-server)${NC}"
echo -e "Mode: $(${CONFIRM} && echo "${RED}LIVE — destructive actions will execute${NC}" || echo "${YELLOW}DRY-RUN — pass --confirm to execute${NC}")"
echo ""
# ---------------------------------------------------------------------------
# docker: remove — skipped (two-machine problem)
# ---------------------------------------------------------------------------
section "docker: remove"
skip_test "docker: remove" "requires a pre-existing stopped container on the Unraid server — can't provision via local docker"
# ---------------------------------------------------------------------------
# docker: delete_entries — create folder → delete via MCP
# ---------------------------------------------------------------------------
section "docker: delete_entries"
skip_test "docker: delete_entries" "createDockerFolder mutation not available in this Unraid API version (HTTP 400)"
# ---------------------------------------------------------------------------
# docker: update_all — mock/safety audit only
# ---------------------------------------------------------------------------
section "docker: update_all"
skip_test "docker: update_all" "global blast radius — restarts all containers; safety audit only"
# ---------------------------------------------------------------------------
# docker: reset_template_mappings — mock/safety audit only
# ---------------------------------------------------------------------------
section "docker: reset_template_mappings"
skip_test "docker: reset_template_mappings" "wipes all template mappings globally; safety audit only"
# ---------------------------------------------------------------------------
# vm: force_stop — requires manual test VM setup
# ---------------------------------------------------------------------------
section "vm: force_stop"
skip_test "vm: force_stop" "requires pre-created Alpine test VM (no persistent disk)"
# ---------------------------------------------------------------------------
# vm: reset — requires manual test VM setup
# ---------------------------------------------------------------------------
section "vm: reset"
skip_test "vm: reset" "requires pre-created Alpine test VM (no persistent disk)"
# ---------------------------------------------------------------------------
# notifications: delete — create notification → delete via MCP
# ---------------------------------------------------------------------------
section "notifications: delete"
test_notifications_delete() {
local label="notifications: delete"
# Create the notification
local create_raw
create_raw="$(mcall unraid_notifications \
'{"action":"create","title":"mcp-test-delete","subject":"MCP destructive test","description":"Safe to delete","importance":"INFO"}')"
local create_ok
create_ok="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('success', False))" 2>/dev/null)"
if [[ "${create_ok}" != "True" ]]; then
fail_test "${label}" "create notification failed: ${create_raw}"
return
fi
# The create response ID doesn't match the stored filename — list and find by title.
# Use the LAST match so a stale notification with the same title is bypassed.
local list_raw nid
list_raw="$(mcall unraid_notifications '{"action":"list","notification_type":"UNREAD"}')"
nid="$(python3 -c "
import json,sys
d = json.loads('''${list_raw}''')
notifs = d.get('notifications', [])
# Reverse so the most-recent match wins over any stale leftover
matches = [n['id'] for n in reversed(notifs) if n.get('title') == 'mcp-test-delete']
print(matches[0] if matches else '')
" 2>/dev/null)"
if [[ -z "${nid}" ]]; then
fail_test "${label}" "created notification not found in UNREAD list"
return
fi
local del_raw
del_raw="$(mcall unraid_notifications \
"{\"action\":\"delete\",\"notification_id\":\"${nid}\",\"notification_type\":\"UNREAD\",\"confirm\":true}")"
# success=true OR deleteNotification key present (raw GraphQL response) both indicate success
local success
success="$(python3 -c "
import json,sys
d=json.loads('''${del_raw}''')
ok = d.get('success', False) or ('deleteNotification' in d)
print(ok)
" 2>/dev/null)"
if [[ "${success}" != "True" ]]; then
# Leak: notification created but not deleted — archive it so it doesn't clutter the feed
mcall unraid_notifications "{\"action\":\"archive\",\"notification_id\":\"${nid}\"}" &>/dev/null || true
fail_test "${label}" "delete did not return success=true: ${del_raw} (notification archived as fallback cleanup)"
return
fi
pass_test "${label}"
}
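# The flow above (create a disposable artifact, find it by a unique marker,
# delete it through the MCP tool under test, then fall back to a
# non-destructive cleanup on failure) is the template for every executable
# test in this file.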
if ${CONFIRM}; then
test_notifications_delete
else
dry_run "notifications: delete [create notification → mcall unraid_notifications delete]"
fi
# ---------------------------------------------------------------------------
# notifications: delete_archived — bulk wipe; skip (hard to isolate)
# ---------------------------------------------------------------------------
section "notifications: delete_archived"
skip_test "notifications: delete_archived" "bulk wipe of ALL archived notifications; run manually on shart if needed"
# ---------------------------------------------------------------------------
# rclone: delete_remote — create local:/tmp remote → delete via MCP
# ---------------------------------------------------------------------------
section "rclone: delete_remote"
skip_test "rclone: delete_remote" "createRCloneRemote broken server-side on this Unraid version (url slash error)"
# ---------------------------------------------------------------------------
# keys: delete — create test key → delete via MCP
# ---------------------------------------------------------------------------
section "keys: delete"
test_keys_delete() {
local label="keys: delete"
# Guard: abort if test key already exists (don't delete a real key)
# Note: API key names cannot contain hyphens — use "mcp test key"
local existing_keys
existing_keys="$(mcall unraid_keys '{"action":"list"}')"
if python3 -c "
import json,sys
d = json.loads('''${existing_keys}''')
keys = d.get('keys', d.get('apiKeys', []))
sys.exit(1 if any(k.get('name') == 'mcp test key' for k in keys) else 0)
" 2>/dev/null; then
: # not found, safe to proceed
else
fail_test "${label}" "a key named 'mcp test key' already exists — refusing to proceed"
return
fi
local create_raw
create_raw="$(mcall unraid_keys \
'{"action":"create","name":"mcp test key","roles":["VIEWER"]}')"
local kid
kid="$(python3 -c "import json,sys; d=json.loads('''${create_raw}'''); print(d.get('key',{}).get('id',''))" 2>/dev/null)"
if [[ -z "${kid}" ]]; then
fail_test "${label}" "create key did not return an ID"
return
fi
local del_raw
del_raw="$(mcall unraid_keys "{\"action\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}")"
local success
success="$(python3 -c "import json,sys; d=json.loads('''${del_raw}'''); print(d.get('success', False))" 2>/dev/null)"
if [[ "${success}" != "True" ]]; then
# Cleanup: attempt to delete the leaked key so future runs are not blocked
mcall unraid_keys "{\"action\":\"delete\",\"key_id\":\"${kid}\",\"confirm\":true}" &>/dev/null || true
fail_test "${label}" "delete did not return success=true: ${del_raw} (key delete re-attempted as fallback cleanup)"
return
fi
# Verify gone
local list_raw
list_raw="$(mcall unraid_keys '{"action":"list"}')"
if python3 -c "
import json,sys
d = json.loads('''${list_raw}''')
keys = d.get('keys', d.get('apiKeys', []))
sys.exit(0 if not any(k.get('id') == '${kid}' for k in keys) else 1)
" 2>/dev/null; then
pass_test "${label}"
else
fail_test "${label}" "key still present in list after delete"
fi
}
if ${CONFIRM}; then
test_keys_delete
else
dry_run "keys: delete [create test key → mcall unraid_keys delete]"
fi
# ---------------------------------------------------------------------------
# storage: flash_backup — requires dedicated test remote
# ---------------------------------------------------------------------------
section "storage: flash_backup"
skip_test "storage: flash_backup" "requires dedicated test remote pre-configured and isolated destination"
# ---------------------------------------------------------------------------
# settings: configure_ups — mock/safety audit only
# ---------------------------------------------------------------------------
section "settings: configure_ups"
skip_test "settings: configure_ups" "wrong config breaks UPS monitoring; safety audit only"
# ---------------------------------------------------------------------------
# settings: setup_remote_access — mock/safety audit only
# ---------------------------------------------------------------------------
section "settings: setup_remote_access"
skip_test "settings: setup_remote_access" "misconfiguration can lock out remote access; safety audit only"
# ---------------------------------------------------------------------------
# settings: enable_dynamic_remote_access — shart only, toggle false → restore
# ---------------------------------------------------------------------------
section "settings: enable_dynamic_remote_access"
skip_test "settings: enable_dynamic_remote_access" "run manually on shart (10.1.0.3) only — see docs/DESTRUCTIVE_ACTIONS.md"
# ---------------------------------------------------------------------------
# info: update_ssh — read current values, re-apply same (no-op)
# ---------------------------------------------------------------------------
section "info: update_ssh"
skip_test "info: update_ssh" "updateSshSettings mutation not available in this Unraid API version (HTTP 400)"
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
TOTAL=$((PASS + FAIL + SKIP))
echo ""
echo -e "${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BOLD}Results: ${GREEN}${PASS} passed${NC} ${RED}${FAIL} failed${NC} ${YELLOW}${SKIP} skipped${NC} (${TOTAL} total)"
if [[ ${#FAILED_TESTS[@]} -gt 0 ]]; then
echo ""
echo -e "${RED}${BOLD}Failed tests:${NC}"
for t in "${FAILED_TESTS[@]}"; do
echo -e " ${RED}${NC} ${t}"
done
fi
echo ""
if ! ${CONFIRM}; then
echo -e "${YELLOW}Dry-run complete. Pass --confirm to execute destructive tests.${NC}"
fi
[[ ${FAIL} -eq 0 ]] && exit 0 || exit 1

764
tests/mcporter/test-tools.sh Executable file
View File

@@ -0,0 +1,764 @@
#!/usr/bin/env bash
# =============================================================================
# test-tools.sh — Integration smoke-test for unraid-mcp MCP server tools
#
# Exercises every non-destructive action across 10 of the 11 tools using
# mcporter (unraid_settings exposes only mutations, so it has no read suite).
# The server is launched ad-hoc via mcporter's --stdio flag so no persistent
# process or registered server entry is required.
#
# Usage:
# ./tests/mcporter/test-tools.sh [--timeout-ms N] [--parallel] [--verbose]
#
# Options:
# --timeout-ms N Per-call timeout in milliseconds (default: 25000)
# --parallel Run independent test groups in parallel (default: off)
# --verbose Print raw mcporter output for each call
#
# Exit codes:
# 0 — all tests passed or skipped
# 1 — one or more tests failed
# 2 — prerequisite check failed (mcporter, uv, server startup)
# =============================================================================
set -uo pipefail
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
readonly SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
readonly PROJECT_DIR="$(cd -- "${SCRIPT_DIR}/../.." && pwd -P)"
readonly SCRIPT_NAME="$(basename -- "${BASH_SOURCE[0]}")"
readonly TS_START="$(date +%s%N)" # nanosecond epoch
readonly LOG_FILE="${TMPDIR:-/tmp}/${SCRIPT_NAME%.sh}.$(date +%Y%m%d-%H%M%S).log"
# Colours (disabled automatically when stdout is not a terminal)
if [[ -t 1 ]]; then
C_RESET='\033[0m'
C_BOLD='\033[1m'
C_GREEN='\033[0;32m'
C_RED='\033[0;31m'
C_YELLOW='\033[0;33m'
C_CYAN='\033[0;36m'
C_DIM='\033[2m'
else
C_RESET='' C_BOLD='' C_GREEN='' C_RED='' C_YELLOW='' C_CYAN='' C_DIM=''
fi
# ---------------------------------------------------------------------------
# Defaults (overridable via flags)
# ---------------------------------------------------------------------------
CALL_TIMEOUT_MS=25000
USE_PARALLEL=false
VERBOSE=false
# ---------------------------------------------------------------------------
# Counters (updated by run_test / skip_test)
# ---------------------------------------------------------------------------
PASS_COUNT=0
FAIL_COUNT=0
SKIP_COUNT=0
declare -a FAIL_NAMES=()
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------
parse_args() {
while [[ $# -gt 0 ]]; do
case "$1" in
--timeout-ms)
CALL_TIMEOUT_MS="${2:?--timeout-ms requires a value}"
shift 2
;;
--parallel)
USE_PARALLEL=true
shift
;;
--verbose)
VERBOSE=true
shift
;;
-h|--help)
printf 'Usage: %s [--timeout-ms N] [--parallel] [--verbose]\n' "${SCRIPT_NAME}"
exit 0
;;
*)
printf '[ERROR] Unknown argument: %s\n' "$1" >&2
exit 2
;;
esac
done
}
# ---------------------------------------------------------------------------
# Logging helpers
# ---------------------------------------------------------------------------
log_info() { printf "${C_CYAN}[INFO]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}"; }
log_warn() { printf "${C_YELLOW}[WARN]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}"; }
log_error() { printf "${C_RED}[ERROR]${C_RESET} %s\n" "$*" | tee -a "${LOG_FILE}" >&2; }
elapsed_ms() {
local now
now="$(date +%s%N)"
printf '%d' "$(( (now - TS_START) / 1000000 ))"
}
# ---------------------------------------------------------------------------
# Cleanup trap
# ---------------------------------------------------------------------------
cleanup() {
local rc=$?
if [[ $rc -ne 0 ]]; then
log_warn "Script exited with rc=${rc}. Log: ${LOG_FILE}"
fi
}
trap cleanup EXIT
# ---------------------------------------------------------------------------
# Prerequisite checks
# ---------------------------------------------------------------------------
check_prerequisites() {
local missing=false
if ! command -v mcporter &>/dev/null; then
log_error "mcporter not found in PATH. Install it and re-run."
missing=true
fi
if ! command -v uv &>/dev/null; then
log_error "uv not found in PATH. Install it and re-run."
missing=true
fi
if ! command -v python3 &>/dev/null; then
log_error "python3 not found in PATH."
missing=true
fi
if [[ ! -f "${PROJECT_DIR}/pyproject.toml" ]]; then
log_error "pyproject.toml not found at ${PROJECT_DIR}. Wrong directory?"
missing=true
fi
if [[ "${missing}" == true ]]; then
return 2
fi
}
# ---------------------------------------------------------------------------
# Server startup smoke-test
# Launches the stdio server and calls unraid_health action=check.
# Returns 0 if the server responds (even with an API error — that still
# means the Python process started cleanly), non-zero on import failure.
# ---------------------------------------------------------------------------
smoke_test_server() {
log_info "Smoke-testing server startup..."
local output
output="$(
mcporter call \
--stdio "uv run unraid-mcp-server" \
--cwd "${PROJECT_DIR}" \
--name "unraid-smoke" \
--tool unraid_health \
--args '{"action":"check"}' \
--timeout 30000 \
--output json \
2>&1
)" || true
# If mcporter returns the offline error the server failed to import/start
if printf '%s' "${output}" | grep -q '"kind": "offline"'; then
log_error "Server failed to start. Output:"
printf '%s\n' "${output}" >&2
log_error "Common causes:"
log_error " • Missing module: check 'uv run unraid-mcp-server' locally"
log_error " • server.py has an import for a file that doesn't exist yet"
log_error " • Environment variable UNRAID_API_URL or UNRAID_API_KEY missing"
return 2
fi
# Assert the response contains a valid tool response field, not a bare JSON error.
# unraid_health action=check always returns {"status": ...} on success.
local key_check
key_check="$(
printf '%s' "${output}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
if 'status' in d or 'success' in d or 'error' in d:
print('ok')
else:
print('missing: no status/success/error key in response')
except Exception as e:
print('parse_error: ' + str(e))
" 2>/dev/null
)" || key_check="parse_error"
if [[ "${key_check}" != "ok" ]]; then
log_error "Smoke test: unexpected response shape — ${key_check}"
printf '%s\n' "${output}" >&2
return 2
fi
log_info "Server started successfully (health response received)."
return 0
}
# ---------------------------------------------------------------------------
# mcporter call wrapper
# Usage: mcporter_call <tool_name> <args_json>
# Writes the mcporter JSON output to stdout.
# Returns the mcporter exit code.
# ---------------------------------------------------------------------------
mcporter_call() {
local tool_name="${1:?tool_name required}"
local args_json="${2:?args_json required}"
mcporter call \
--stdio "uv run unraid-mcp-server" \
--cwd "${PROJECT_DIR}" \
--name "unraid" \
--tool "${tool_name}" \
--args "${args_json}" \
--timeout "${CALL_TIMEOUT_MS}" \
--output json \
2>&1
}
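# Example (illustrative only): probing a response shape by hand, outside
# run_test; note that stderr is merged into stdout, so pretty-printing can
# fail on noisy output:
#   mcporter_call unraid_health '{"action":"check"}' | python3 -m json.tool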
# ---------------------------------------------------------------------------
# Test runner
# Usage: run_test <label> <tool_name> <args_json> [expected_key]
#
# expected_key — optional jq-style python key path to validate in the
# response (e.g. ".status" or ".containers"). If omitted,
# any non-offline response is a PASS (tool errors from the
# API — e.g. VMs disabled — are still considered PASS because
# the tool itself responded correctly).
# ---------------------------------------------------------------------------
run_test() {
local label="${1:?label required}"
local tool="${2:?tool required}"
local args="${3:?args required}"
local expected_key="${4:-}"
local t0
t0="$(date +%s%N)"
local output
output="$(mcporter_call "${tool}" "${args}" 2>&1)" || true
local elapsed_ms
elapsed_ms="$(( ( $(date +%s%N) - t0 ) / 1000000 ))"
if [[ "${VERBOSE}" == true ]]; then
printf '%s\n' "${output}" | tee -a "${LOG_FILE}"
else
printf '%s\n' "${output}" >> "${LOG_FILE}"
fi
# Detect server-offline (import/startup failure)
if printf '%s' "${output}" | grep -q '"kind": "offline"'; then
printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
"${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
printf ' server offline — check startup errors in %s\n' "${LOG_FILE}" | tee -a "${LOG_FILE}"
FAIL_COUNT=$(( FAIL_COUNT + 1 ))
FAIL_NAMES+=("${label}")
return 1
fi
# Validate optional key presence
if [[ -n "${expected_key}" ]]; then
local key_check
key_check="$(
printf '%s' "${output}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
keys = '${expected_key}'.split('.')
node = d
for k in keys:
if k:
node = node[k]
print('ok')
except Exception as e:
print('missing: ' + str(e))
" 2>/dev/null
)" || key_check="parse_error"
if [[ "${key_check}" != "ok" ]]; then
printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
"${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
printf ' expected key .%s not found: %s\n' "${expected_key}" "${key_check}" | tee -a "${LOG_FILE}"
FAIL_COUNT=$(( FAIL_COUNT + 1 ))
FAIL_NAMES+=("${label}")
return 1
fi
fi
printf "${C_GREEN}[PASS]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
"${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
PASS_COUNT=$(( PASS_COUNT + 1 ))
return 0
}
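# Example (illustrative): the optional fourth argument asserts a key path in
# the response; ".status" matches the shape unraid_health action=check
# returned during the smoke test above:
#   run_test "unraid_health: check" unraid_health '{"action":"check"}' ".status"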
# ---------------------------------------------------------------------------
# Skip helper — use when a prerequisite (like a list) returned empty
# ---------------------------------------------------------------------------
skip_test() {
local label="${1:?label required}"
local reason="${2:-prerequisite returned empty}"
printf "${C_YELLOW}[SKIP]${C_RESET} %-55s %s\n" "${label}" "${reason}" | tee -a "${LOG_FILE}"
SKIP_COUNT=$(( SKIP_COUNT + 1 ))
}
# ---------------------------------------------------------------------------
# ID extractors
# Each function calls the relevant list action and prints the first ID.
# Prints nothing (empty string) if the list is empty or the call fails.
# ---------------------------------------------------------------------------
# Extract first docker container ID
get_docker_id() {
local raw
raw="$(mcporter_call unraid_docker '{"action":"list"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
containers = d.get('containers', [])
if containers:
print(containers[0]['id'])
except Exception:
pass
" 2>/dev/null || true
}
# Extract first docker network ID
get_network_id() {
local raw
raw="$(mcporter_call unraid_docker '{"action":"networks"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
nets = d.get('networks', [])
if nets:
print(nets[0]['id'])
except Exception:
pass
" 2>/dev/null || true
}
# Extract first VM ID
get_vm_id() {
local raw
raw="$(mcporter_call unraid_vm '{"action":"list"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
vms = d.get('vms', d.get('domains', []))
if vms:
print(vms[0].get('id', vms[0].get('uuid', '')))
except Exception:
pass
" 2>/dev/null || true
}
# Extract first API key ID
get_key_id() {
local raw
raw="$(mcporter_call unraid_keys '{"action":"list"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
keys = d.get('keys', d.get('apiKeys', []))
if keys:
print(keys[0].get('id', ''))
except Exception:
pass
" 2>/dev/null || true
}
# Extract first disk ID
get_disk_id() {
local raw
raw="$(mcporter_call unraid_storage '{"action":"disks"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
disks = d.get('disks', [])
if disks:
print(disks[0]['id'])
except Exception:
pass
" 2>/dev/null || true
}
# Extract first log file path
get_log_path() {
local raw
raw="$(mcporter_call unraid_storage '{"action":"log_files"}' 2>/dev/null)" || return 0
printf '%s' "${raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
files = d.get('log_files', [])
# Prefer a plain text log (not binary like btmp/lastlog)
for f in files:
p = f.get('path', '')
if p.endswith('.log') or 'syslog' in p or 'messages' in p:
print(p)
break
else:
if files:
print(files[0]['path'])
except Exception:
pass
" 2>/dev/null || true
}
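# All extractors deliberately print nothing on any failure (note the trailing
# '|| true' and '|| return 0'), so suite code can branch on an empty string
# instead of juggling exit codes.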
# ---------------------------------------------------------------------------
# Grouped test suites
# ---------------------------------------------------------------------------
suite_unraid_info() {
printf '\n%b== unraid_info (19 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_info: overview" unraid_info '{"action":"overview"}'
run_test "unraid_info: array" unraid_info '{"action":"array"}'
run_test "unraid_info: network" unraid_info '{"action":"network"}'
run_test "unraid_info: registration" unraid_info '{"action":"registration"}'
run_test "unraid_info: connect" unraid_info '{"action":"connect"}'
run_test "unraid_info: variables" unraid_info '{"action":"variables"}'
run_test "unraid_info: metrics" unraid_info '{"action":"metrics"}'
run_test "unraid_info: services" unraid_info '{"action":"services"}'
run_test "unraid_info: display" unraid_info '{"action":"display"}'
run_test "unraid_info: config" unraid_info '{"action":"config"}'
run_test "unraid_info: online" unraid_info '{"action":"online"}'
run_test "unraid_info: owner" unraid_info '{"action":"owner"}'
run_test "unraid_info: settings" unraid_info '{"action":"settings"}'
run_test "unraid_info: server" unraid_info '{"action":"server"}'
run_test "unraid_info: servers" unraid_info '{"action":"servers"}'
run_test "unraid_info: flash" unraid_info '{"action":"flash"}'
run_test "unraid_info: ups_devices" unraid_info '{"action":"ups_devices"}'
# ups_device and ups_config require a device_id — skip if no UPS devices found
local ups_raw
ups_raw="$(mcporter_call unraid_info '{"action":"ups_devices"}' 2>/dev/null)" || ups_raw=''
local ups_id
ups_id="$(printf '%s' "${ups_raw}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
devs = d.get('ups_devices', d.get('upsDevices', []))
if devs:
print(devs[0].get('id', devs[0].get('name', '')))
except Exception:
pass
" 2>/dev/null)" || ups_id=''
if [[ -n "${ups_id}" ]]; then
run_test "unraid_info: ups_device" unraid_info \
"$(printf '{"action":"ups_device","device_id":"%s"}' "${ups_id}")"
run_test "unraid_info: ups_config" unraid_info \
"$(printf '{"action":"ups_config","device_id":"%s"}' "${ups_id}")"
else
skip_test "unraid_info: ups_device" "no UPS devices found"
skip_test "unraid_info: ups_config" "no UPS devices found"
fi
}
suite_unraid_array() {
printf '\n%b== unraid_array (1 read-only action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_array: parity_status" unraid_array '{"action":"parity_status"}'
# Destructive actions (parity_start/pause/resume/cancel) skipped
}
suite_unraid_storage() {
printf '\n%b== unraid_storage (6 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_storage: shares" unraid_storage '{"action":"shares"}'
run_test "unraid_storage: disks" unraid_storage '{"action":"disks"}'
run_test "unraid_storage: unassigned" unraid_storage '{"action":"unassigned"}'
run_test "unraid_storage: log_files" unraid_storage '{"action":"log_files"}'
# disk_details needs a disk ID
local disk_id
disk_id="$(get_disk_id)" || disk_id=''
if [[ -n "${disk_id}" ]]; then
run_test "unraid_storage: disk_details" unraid_storage \
"$(printf '{"action":"disk_details","disk_id":"%s"}' "${disk_id}")"
else
skip_test "unraid_storage: disk_details" "no disks found"
fi
# logs needs a valid log path
local log_path
log_path="$(get_log_path)" || log_path=''
if [[ -n "${log_path}" ]]; then
run_test "unraid_storage: logs" unraid_storage \
"$(printf '{"action":"logs","log_path":"%s","tail_lines":20}' "${log_path}")"
else
skip_test "unraid_storage: logs" "no log files found"
fi
}
suite_unraid_docker() {
printf '\n%b== unraid_docker (7 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_docker: list" unraid_docker '{"action":"list"}'
run_test "unraid_docker: networks" unraid_docker '{"action":"networks"}'
run_test "unraid_docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
run_test "unraid_docker: check_updates" unraid_docker '{"action":"check_updates"}'
# details, logs, network_details need IDs
local container_id
container_id="$(get_docker_id)" || container_id=''
if [[ -n "${container_id}" ]]; then
run_test "unraid_docker: details" unraid_docker \
"$(printf '{"action":"details","container_id":"%s"}' "${container_id}")"
run_test "unraid_docker: logs" unraid_docker \
"$(printf '{"action":"logs","container_id":"%s","tail_lines":20}' "${container_id}")"
else
skip_test "unraid_docker: details" "no containers found"
skip_test "unraid_docker: logs" "no containers found"
fi
local network_id
network_id="$(get_network_id)" || network_id=''
if [[ -n "${network_id}" ]]; then
run_test "unraid_docker: network_details" unraid_docker \
"$(printf '{"action":"network_details","network_id":"%s"}' "${network_id}")"
else
skip_test "unraid_docker: network_details" "no networks found"
fi
# Destructive actions (start/stop/restart/pause/unpause/remove/update/update_all) skipped
}
suite_unraid_vm() {
printf '\n%b== unraid_vm (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_vm: list" unraid_vm '{"action":"list"}'
local vm_id
vm_id="$(get_vm_id)" || vm_id=''
if [[ -n "${vm_id}" ]]; then
run_test "unraid_vm: details" unraid_vm \
"$(printf '{"action":"details","vm_id":"%s"}' "${vm_id}")"
else
skip_test "unraid_vm: details" "no VMs found (or VM service unavailable)"
fi
# Destructive actions (start/stop/pause/resume/force_stop/reboot/reset) skipped
}
suite_unraid_notifications() {
printf '\n%b== unraid_notifications (4 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_notifications: overview" unraid_notifications '{"action":"overview"}'
run_test "unraid_notifications: list" unraid_notifications '{"action":"list"}'
run_test "unraid_notifications: warnings" unraid_notifications '{"action":"warnings"}'
run_test "unraid_notifications: unread" unraid_notifications '{"action":"unread"}'
# Destructive actions (create/archive/delete/delete_archived/archive_all/etc.) skipped
}
suite_unraid_rclone() {
printf '\n%b== unraid_rclone (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
# config_form requires a provider_type — use "s3" as a safe, always-available provider
run_test "unraid_rclone: config_form" unraid_rclone '{"action":"config_form","provider_type":"s3"}'
# Destructive actions (create_remote/delete_remote) skipped
}
suite_unraid_users() {
printf '\n%b== unraid_users (1 action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_users: me" unraid_users '{"action":"me"}'
}
suite_unraid_keys() {
printf '\n%b== unraid_keys (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_keys: list" unraid_keys '{"action":"list"}'
local key_id
key_id="$(get_key_id)" || key_id=''
if [[ -n "${key_id}" ]]; then
run_test "unraid_keys: get" unraid_keys \
"$(printf '{"action":"get","key_id":"%s"}' "${key_id}")"
else
skip_test "unraid_keys: get" "no API keys found"
fi
# Destructive actions (create/update/delete) skipped
}
suite_unraid_health() {
printf '\n%b== unraid_health (3 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
run_test "unraid_health: check" unraid_health '{"action":"check"}'
run_test "unraid_health: test_connection" unraid_health '{"action":"test_connection"}'
run_test "unraid_health: diagnose" unraid_health '{"action":"diagnose"}'
}
# ---------------------------------------------------------------------------
# Print final summary
# ---------------------------------------------------------------------------
print_summary() {
local total_ms="$(( ( $(date +%s%N) - TS_START ) / 1000000 ))"
local total=$(( PASS_COUNT + FAIL_COUNT + SKIP_COUNT ))
printf '\n%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "PASS" "${C_RESET}" "${C_GREEN}" "${PASS_COUNT}" "${C_RESET}"
printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "FAIL" "${C_RESET}" "${C_RED}" "${FAIL_COUNT}" "${C_RESET}"
printf '%b%-20s%b %b%d%b\n' "${C_BOLD}" "SKIP" "${C_RESET}" "${C_YELLOW}" "${SKIP_COUNT}" "${C_RESET}"
printf '%b%-20s%b %d\n' "${C_BOLD}" "TOTAL" "${C_RESET}" "${total}"
printf '%b%-20s%b %ds (%dms)\n' "${C_BOLD}" "ELAPSED" "${C_RESET}" \
"$(( total_ms / 1000 ))" "${total_ms}"
printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
if [[ "${FAIL_COUNT}" -gt 0 ]]; then
printf '\n%bFailed tests:%b\n' "${C_RED}" "${C_RESET}"
local name
for name in "${FAIL_NAMES[@]}"; do
printf ' • %s\n' "${name}"
done
printf '\nFull log: %s\n' "${LOG_FILE}"
fi
}
# ---------------------------------------------------------------------------
# Parallel runner — wraps each suite in a background subshell and waits
# ---------------------------------------------------------------------------
run_parallel() {
# Each suite is independent (only cross-suite dependency: IDs are fetched
# fresh inside each suite function, not shared across suites).
# Counter updates from subshells won't propagate to the parent — collect
# results via temp files instead.
log_warn "--parallel mode: per-suite counters aggregated via temp files."
local tmp_dir
tmp_dir="$(mktemp -d)"
trap 'rm -rf -- "${tmp_dir}"' RETURN
local suites=(
suite_unraid_info
suite_unraid_array
suite_unraid_storage
suite_unraid_docker
suite_unraid_vm
suite_unraid_notifications
suite_unraid_rclone
suite_unraid_users
suite_unraid_keys
suite_unraid_health
)
local pids=()
local suite
for suite in "${suites[@]}"; do
(
# Reset counters in subshell
PASS_COUNT=0; FAIL_COUNT=0; SKIP_COUNT=0; FAIL_NAMES=()
"${suite}"
printf '%d %d %d\n' "${PASS_COUNT}" "${FAIL_COUNT}" "${SKIP_COUNT}" \
> "${tmp_dir}/${suite}.counts"
printf '%s\n' "${FAIL_NAMES[@]:-}" > "${tmp_dir}/${suite}.fails"
) &
pids+=($!)
done
# Wait for all background suites
local pid
for pid in "${pids[@]}"; do
wait "${pid}" || true
done
# Aggregate counters
local f
for f in "${tmp_dir}"/*.counts; do
[[ -f "${f}" ]] || continue
local p fl s
read -r p fl s < "${f}"
PASS_COUNT=$(( PASS_COUNT + p ))
FAIL_COUNT=$(( FAIL_COUNT + fl ))
SKIP_COUNT=$(( SKIP_COUNT + s ))
done
for f in "${tmp_dir}"/*.fails; do
[[ -f "${f}" ]] || continue
while IFS= read -r line; do
[[ -n "${line}" ]] && FAIL_NAMES+=("${line}")
done < "${f}"
done
}
# ---------------------------------------------------------------------------
# Sequential runner
# ---------------------------------------------------------------------------
run_sequential() {
suite_unraid_info
suite_unraid_array
suite_unraid_storage
suite_unraid_docker
suite_unraid_vm
suite_unraid_notifications
suite_unraid_rclone
suite_unraid_users
suite_unraid_keys
suite_unraid_health
}
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
main() {
parse_args "$@"
printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
printf '%b unraid-mcp integration smoke-test%b\n' "${C_BOLD}" "${C_RESET}"
printf '%b Project: %s%b\n' "${C_BOLD}" "${PROJECT_DIR}" "${C_RESET}"
printf '%b Timeout: %dms/call | Parallel: %s%b\n' \
"${C_BOLD}" "${CALL_TIMEOUT_MS}" "${USE_PARALLEL}" "${C_RESET}"
printf '%b Log: %s%b\n' "${C_BOLD}" "${LOG_FILE}" "${C_RESET}"
printf '%b%s%b\n\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
# Prerequisite gate
check_prerequisites || exit 2
# Server startup gate — fail fast if the Python process can't start
smoke_test_server || {
log_error ""
log_error "Server startup failed. Aborting — no tests will run."
log_error ""
log_error "To diagnose, run:"
log_error " cd ${PROJECT_DIR} && uv run unraid-mcp-server"
log_error ""
log_error "If server.py has a broken import (e.g. missing tools/settings.py),"
log_error "stash or revert the uncommitted server.py change first:"
log_error " git stash -- unraid_mcp/server.py"
log_error " ./scripts/test-tools.sh"
log_error " git stash pop"
exit 2
}
if [[ "${USE_PARALLEL}" == true ]]; then
run_parallel
else
run_sequential
fi
print_summary
if [[ "${FAIL_COUNT}" -gt 0 ]]; then
exit 1
fi
exit 0
}
main "$@"

View File

@@ -10,6 +10,12 @@ from unittest.mock import AsyncMock, patch
 
 import pytest
 
+# conftest.py is the shared test-helper module for this project.
+# pytest automatically adds tests/ to sys.path, making it importable here
+# without a package __init__.py. Do NOT add tests/__init__.py — it breaks
+# conftest.py's fixture auto-discovery.
+from conftest import make_tool_fn
+
 from unraid_mcp.core.exceptions import ToolError
 
 # Import DESTRUCTIVE_ACTIONS sets from every tool module that defines one
@@ -24,10 +30,6 @@ from unraid_mcp.tools.rclone import MUTATIONS as RCLONE_MUTATIONS
 from unraid_mcp.tools.virtualization import DESTRUCTIVE_ACTIONS as VM_DESTRUCTIVE
 from unraid_mcp.tools.virtualization import MUTATIONS as VM_MUTATIONS
 
-# Centralized import for make_tool_fn helper
-# conftest.py sits in tests/ and is importable without __init__.py
-from conftest import make_tool_fn
-
 # ---------------------------------------------------------------------------
 # Known destructive actions registry (ground truth for this audit)
@@ -39,7 +41,7 @@ KNOWN_DESTRUCTIVE: dict[str, dict[str, set[str]]] = {
         "module": "unraid_mcp.tools.docker",
         "register_fn": "register_docker_tool",
         "tool_name": "unraid_docker",
-        "actions": {"remove"},
+        "actions": {"remove", "update_all", "delete_entries", "reset_template_mappings"},
         "runtime_set": DOCKER_DESTRUCTIVE,
     },
     "vm": {
@@ -86,8 +88,7 @@ class TestDestructiveActionRegistries:
         """Each tool's DESTRUCTIVE_ACTIONS must exactly match the audited set."""
         info = KNOWN_DESTRUCTIVE[tool_key]
         assert info["runtime_set"] == info["actions"], (
-            f"{tool_key}: DESTRUCTIVE_ACTIONS is {info['runtime_set']}, "
-            f"expected {info['actions']}"
+            f"{tool_key}: DESTRUCTIVE_ACTIONS is {info['runtime_set']}, expected {info['actions']}"
         )
 
     @pytest.mark.parametrize("tool_key", list(KNOWN_DESTRUCTIVE.keys()))
@@ -126,9 +127,12 @@ class TestDestructiveActionRegistries:
         missing: list[str] = []
         for tool_key, mutations in all_mutations.items():
             destructive = all_destructive[tool_key]
-            for action_name in mutations:
-                if ("delete" in action_name or "remove" in action_name) and action_name not in destructive:
-                    missing.append(f"{tool_key}/{action_name}")
+            missing.extend(
+                f"{tool_key}/{action_name}"
+                for action_name in mutations
+                if ("delete" in action_name or "remove" in action_name)
+                and action_name not in destructive
+            )
 
         assert not missing, (
             f"Mutations with 'delete'/'remove' not in DESTRUCTIVE_ACTIONS: {missing}"
         )
@@ -143,6 +147,9 @@ class TestDestructiveActionRegistries:
 _DESTRUCTIVE_TEST_CASES: list[tuple[str, str, dict]] = [
     # Docker
     ("docker", "remove", {"container_id": "abc123"}),
+    ("docker", "update_all", {}),
+    ("docker", "delete_entries", {"entry_ids": ["e1"]}),
+    ("docker", "reset_template_mappings", {}),
     # VM
     ("vm", "force_stop", {"vm_id": "test-vm-uuid"}),
     ("vm", "reset", {"vm_id": "test-vm-uuid"}),
@@ -193,7 +200,11 @@ def _mock_keys_graphql() -> Generator[AsyncMock, None, None]:
 _TOOL_REGISTRY = {
     "docker": ("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"),
     "vm": ("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"),
-    "notifications": ("unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"),
+    "notifications": (
+        "unraid_mcp.tools.notifications",
+        "register_notifications_tool",
+        "unraid_notifications",
+    ),
     "rclone": ("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone"),
     "keys": ("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys"),
 }
@@ -268,6 +279,41 @@ class TestConfirmationGuards:
 class TestConfirmAllowsExecution:
     """Destructive actions with confirm=True should reach the GraphQL layer."""
 
+    async def test_docker_update_all_with_confirm(self, _mock_docker_graphql: AsyncMock) -> None:
+        _mock_docker_graphql.return_value = {
+            "docker": {
+                "updateAllContainers": [
+                    {"id": "c1", "names": ["app"], "state": "running", "status": "Up"}
+                ]
+            }
+        }
+        tool_fn = make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+        result = await tool_fn(action="update_all", confirm=True)
+        assert result["success"] is True
+        assert result["action"] == "update_all"
+
+    async def test_docker_delete_entries_with_confirm(
+        self, _mock_docker_graphql: AsyncMock
+    ) -> None:
+        organizer_response = {
+            "version": 1.0,
+            "views": [{"id": "default", "name": "Default", "rootId": "root", "flatEntries": []}],
+        }
+        _mock_docker_graphql.return_value = {"deleteDockerEntries": organizer_response}
+        tool_fn = make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+        result = await tool_fn(action="delete_entries", entry_ids=["e1"], confirm=True)
+        assert result["success"] is True
+        assert result["action"] == "delete_entries"
+
+    async def test_docker_reset_template_mappings_with_confirm(
+        self, _mock_docker_graphql: AsyncMock
+    ) -> None:
+        _mock_docker_graphql.return_value = {"resetDockerTemplateMappings": True}
+        tool_fn = make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+        result = await tool_fn(action="reset_template_mappings", confirm=True)
+        assert result["success"] is True
+        assert result["action"] == "reset_template_mappings"
+
     async def test_docker_remove_with_confirm(self, _mock_docker_graphql: AsyncMock) -> None:
         cid = "a" * 64 + ":local"
         _mock_docker_graphql.side_effect = [
@@ -291,7 +337,12 @@ class TestConfirmAllowsExecution:
         assert result["success"] is True
 
     async def test_notifications_delete_with_confirm(self, _mock_notif_graphql: AsyncMock) -> None:
-        _mock_notif_graphql.return_value = {"notifications": {"deleteNotification": True}}
+        _mock_notif_graphql.return_value = {
+            "deleteNotification": {
+                "unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
+                "archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
+            }
+        }
         tool_fn = make_tool_fn(
             "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
         )
@@ -303,8 +354,15 @@ class TestConfirmAllowsExecution:
         )
         assert result["success"] is True
 
-    async def test_notifications_delete_archived_with_confirm(self, _mock_notif_graphql: AsyncMock) -> None:
-        _mock_notif_graphql.return_value = {"notifications": {"deleteArchivedNotifications": True}}
+    async def test_notifications_delete_archived_with_confirm(
+        self, _mock_notif_graphql: AsyncMock
+    ) -> None:
+        _mock_notif_graphql.return_value = {
+            "deleteArchivedNotifications": {
+                "unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
+                "archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
+            }
+        }
         tool_fn = make_tool_fn(
             "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
         )
@@ -318,7 +376,7 @@ class TestConfirmAllowsExecution:
         assert result["success"] is True
 
     async def test_keys_delete_with_confirm(self, _mock_keys_graphql: AsyncMock) -> None:
-        _mock_keys_graphql.return_value = {"deleteApiKeys": True}
+        _mock_keys_graphql.return_value = {"apiKey": {"delete": True}}
        tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
         result = await tool_fn(action="delete", key_id="key-123", confirm=True)
         assert result["success"] is True

View File

@@ -153,10 +153,25 @@ class TestInfoQueries:
         from unraid_mcp.tools.info import QUERIES
 
         expected_actions = {
-            "overview", "array", "network", "registration", "connect",
-            "variables", "metrics", "services", "display", "config",
-            "online", "owner", "settings", "server", "servers",
-            "flash", "ups_devices", "ups_device", "ups_config",
+            "overview",
+            "array",
+            "network",
+            "registration",
+            "connect",
+            "variables",
+            "metrics",
+            "services",
+            "display",
+            "config",
+            "online",
+            "owner",
+            "settings",
+            "server",
+            "servers",
+            "flash",
+            "ups_devices",
+            "ups_device",
+            "ups_config",
         }
         assert set(QUERIES.keys()) == expected_actions
@@ -314,8 +329,13 @@ class TestDockerQueries:
         from unraid_mcp.tools.docker import QUERIES
 
         expected = {
-            "list", "details", "logs", "networks",
-            "network_details", "port_conflicts", "check_updates",
+            "list",
+            "details",
+            "logs",
+            "networks",
+            "network_details",
+            "port_conflicts",
+            "check_updates",
         }
         assert set(QUERIES.keys()) == expected
@@ -368,7 +388,26 @@ class TestDockerMutations:
     def test_all_docker_mutations_covered(self, schema: GraphQLSchema) -> None:
         from unraid_mcp.tools.docker import MUTATIONS
 
-        expected = {"start", "stop", "pause", "unpause", "remove", "update", "update_all"}
+        expected = {
+            "start",
+            "stop",
+            "pause",
+            "unpause",
+            "remove",
+            "update",
+            "update_all",
+            "create_folder",
+            "set_folder_children",
+            "delete_entries",
+            "move_to_folder",
+            "move_to_position",
+            "rename_folder",
+            "create_folder_with_items",
+            "update_view_prefs",
+            "sync_templates",
+            "reset_template_mappings",
+            "refresh_digests",
+        }
         assert set(MUTATIONS.keys()) == expected
@@ -384,10 +423,16 @@ class TestVmQueries:
         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"list query validation failed: {errors}"
 
+    def test_details_query(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.virtualization import QUERIES
+
+        errors = _validate_operation(schema, QUERIES["details"])
+        assert not errors, f"details query validation failed: {errors}"
+
     def test_all_vm_queries_covered(self, schema: GraphQLSchema) -> None:
         from unraid_mcp.tools.virtualization import QUERIES
 
-        assert set(QUERIES.keys()) == {"list"}
+        assert set(QUERIES.keys()) == {"list", "details"}
 
 
 class TestVmMutations:
@@ -511,10 +556,52 @@ class TestNotificationMutations:
         errors = _validate_operation(schema, MUTATIONS["archive_all"])
         assert not errors, f"archive_all mutation validation failed: {errors}"
 
+    def test_archive_many_mutation(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.notifications import MUTATIONS
+
+        errors = _validate_operation(schema, MUTATIONS["archive_many"])
+        assert not errors, f"archive_many mutation validation failed: {errors}"
+
+    def test_create_unique_mutation(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.notifications import MUTATIONS
+
+        errors = _validate_operation(schema, MUTATIONS["create_unique"])
+        assert not errors, f"create_unique mutation validation failed: {errors}"
+
+    def test_unarchive_many_mutation(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.notifications import MUTATIONS
+
+        errors = _validate_operation(schema, MUTATIONS["unarchive_many"])
+        assert not errors, f"unarchive_many mutation validation failed: {errors}"
+
+    def test_unarchive_all_mutation(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.notifications import MUTATIONS
+
+        errors = _validate_operation(schema, MUTATIONS["unarchive_all"])
+        assert not errors, f"unarchive_all mutation validation failed: {errors}"
+
+    def test_recalculate_mutation(self, schema: GraphQLSchema) -> None:
+        from unraid_mcp.tools.notifications import MUTATIONS
+
+        errors = _validate_operation(schema, MUTATIONS["recalculate"])
+        assert not errors, f"recalculate mutation validation failed: {errors}"
+
     def test_all_notification_mutations_covered(self, schema: GraphQLSchema) -> None:
         from unraid_mcp.tools.notifications import MUTATIONS
 
-        expected = {"create", "archive", "unread", "delete", "delete_archived", "archive_all"}
+        expected = {
+            "create",
+            "archive",
+            "unread",
+            "delete",
+            "delete_archived",
+            "archive_all",
+            "archive_many",
+            "create_unique",
+            "unarchive_many",
+            "unarchive_all",
+            "recalculate",
+        }
         assert set(MUTATIONS.keys()) == expected
@@ -647,7 +734,7 @@ class TestHealthQueries:
             query ComprehensiveHealthCheck {
               info {
                 machineId time
-                versions { unraid }
+                versions { core { unraid } }
                 os { uptime }
               }
               array { state }
@@ -707,8 +794,7 @@ class TestSchemaCompleteness:
                     failures.append(f"{tool_name}/MUTATIONS/{action}: {errors[0]}")
 
         assert not failures, (
-            f"{len(failures)} of {total} operations failed validation:\n"
-            + "\n".join(failures)
+            f"{len(failures)} of {total} operations failed validation:\n" + "\n".join(failures)
         )
 
     def test_schema_has_query_type(self, schema: GraphQLSchema) -> None:

View File

@@ -39,15 +39,23 @@ class TestArrayValidation:
with pytest.raises(ToolError, match="Invalid action"): with pytest.raises(ToolError, match="Invalid action"):
await tool_fn(action=action) await tool_fn(action=action)
async def test_parity_start_requires_correct(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="correct is required"):
await tool_fn(action="parity_start")
_mock_graphql.assert_not_called()
class TestArrayActions: class TestArrayActions:
async def test_parity_start(self, _mock_graphql: AsyncMock) -> None: async def test_parity_start(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"parityCheck": {"start": True}} _mock_graphql.return_value = {"parityCheck": {"start": True}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="parity_start") result = await tool_fn(action="parity_start", correct=False)
assert result["success"] is True assert result["success"] is True
assert result["action"] == "parity_start" assert result["action"] == "parity_start"
_mock_graphql.assert_called_once() _mock_graphql.assert_called_once()
call_args = _mock_graphql.call_args
assert call_args[0][1] == {"correct": False}
async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None: async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"parityCheck": {"start": True}} _mock_graphql.return_value = {"parityCheck": {"start": True}}
@@ -84,7 +92,7 @@ class TestArrayActions:
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None: async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = RuntimeError("disk error") _mock_graphql.side_effect = RuntimeError("disk error")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="disk error"): with pytest.raises(ToolError, match="Failed to execute array/parity_status"):
await tool_fn(action="parity_status") await tool_fn(action="parity_status")
@@ -94,14 +102,14 @@ class TestArrayMutationFailures:
async def test_parity_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None: async def test_parity_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"parityCheck": {"start": False}} _mock_graphql.return_value = {"parityCheck": {"start": False}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="parity_start") result = await tool_fn(action="parity_start", correct=False)
assert result["success"] is True assert result["success"] is True
assert result["data"] == {"parityCheck": {"start": False}} assert result["data"] == {"parityCheck": {"start": False}}
async def test_parity_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None: async def test_parity_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"parityCheck": {"start": None}} _mock_graphql.return_value = {"parityCheck": {"start": None}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="parity_start") result = await tool_fn(action="parity_start", correct=False)
assert result["success"] is True assert result["success"] is True
assert result["data"] == {"parityCheck": {"start": None}} assert result["data"] == {"parityCheck": {"start": None}}
@@ -110,7 +118,7 @@ class TestArrayMutationFailures:
) -> None: ) -> None:
_mock_graphql.return_value = {"parityCheck": {"start": {}}} _mock_graphql.return_value = {"parityCheck": {"start": {}}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="parity_start") result = await tool_fn(action="parity_start", correct=False)
assert result["success"] is True assert result["success"] is True
assert result["data"] == {"parityCheck": {"start": {}}} assert result["data"] == {"parityCheck": {"start": {}}}
@@ -128,7 +136,7 @@ class TestArrayNetworkErrors:
_mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error") _mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="HTTP error 500"): with pytest.raises(ToolError, match="HTTP error 500"):
await tool_fn(action="parity_start") await tool_fn(action="parity_start", correct=False)
async def test_connection_refused(self, _mock_graphql: AsyncMock) -> None: async def test_connection_refused(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = ToolError("Network connection error: Connection refused") _mock_graphql.side_effect = ToolError("Network connection error: Connection refused")

View File

@@ -1,6 +1,7 @@
"""Tests for unraid_mcp.core.client — GraphQL client infrastructure.""" """Tests for unraid_mcp.core.client — GraphQL client infrastructure."""
import json import json
import time
from unittest.mock import AsyncMock, MagicMock, patch from unittest.mock import AsyncMock, MagicMock, patch
import httpx import httpx
@@ -9,9 +10,11 @@ import pytest
from unraid_mcp.core.client import ( from unraid_mcp.core.client import (
DEFAULT_TIMEOUT, DEFAULT_TIMEOUT,
DISK_TIMEOUT, DISK_TIMEOUT,
_redact_sensitive, _QueryCache,
_RateLimiter,
is_idempotent_error, is_idempotent_error,
make_graphql_request, make_graphql_request,
redact_sensitive,
) )
from unraid_mcp.core.exceptions import ToolError from unraid_mcp.core.exceptions import ToolError
@@ -57,7 +60,7 @@ class TestIsIdempotentError:
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
# _redact_sensitive # redact_sensitive
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
@@ -66,36 +69,36 @@ class TestRedactSensitive:
def test_flat_dict(self) -> None: def test_flat_dict(self) -> None:
data = {"username": "admin", "password": "hunter2", "host": "10.0.0.1"} data = {"username": "admin", "password": "hunter2", "host": "10.0.0.1"}
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result["username"] == "admin" assert result["username"] == "admin"
assert result["password"] == "***" assert result["password"] == "***"
assert result["host"] == "10.0.0.1" assert result["host"] == "10.0.0.1"
def test_nested_dict(self) -> None: def test_nested_dict(self) -> None:
data = {"config": {"apiKey": "abc123", "url": "http://host"}} data = {"config": {"apiKey": "abc123", "url": "http://host"}}
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result["config"]["apiKey"] == "***" assert result["config"]["apiKey"] == "***"
assert result["config"]["url"] == "http://host" assert result["config"]["url"] == "http://host"
def test_list_of_dicts(self) -> None: def test_list_of_dicts(self) -> None:
data = [{"token": "t1"}, {"name": "safe"}] data = [{"token": "t1"}, {"name": "safe"}]
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result[0]["token"] == "***" assert result[0]["token"] == "***"
assert result[1]["name"] == "safe" assert result[1]["name"] == "safe"
def test_deeply_nested(self) -> None: def test_deeply_nested(self) -> None:
data = {"a": {"b": {"c": {"secret": "deep"}}}} data = {"a": {"b": {"c": {"secret": "deep"}}}}
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result["a"]["b"]["c"]["secret"] == "***" assert result["a"]["b"]["c"]["secret"] == "***"
def test_non_dict_passthrough(self) -> None: def test_non_dict_passthrough(self) -> None:
assert _redact_sensitive("plain_string") == "plain_string" assert redact_sensitive("plain_string") == "plain_string"
assert _redact_sensitive(42) == 42 assert redact_sensitive(42) == 42
assert _redact_sensitive(None) is None assert redact_sensitive(None) is None
def test_case_insensitive_keys(self) -> None: def test_case_insensitive_keys(self) -> None:
data = {"Password": "p1", "TOKEN": "t1", "ApiKey": "k1", "Secret": "s1", "Key": "x1"} data = {"Password": "p1", "TOKEN": "t1", "ApiKey": "k1", "Secret": "s1", "Key": "x1"}
result = _redact_sensitive(data) result = redact_sensitive(data)
for v in result.values(): for v in result.values():
assert v == "***" assert v == "***"
@@ -109,7 +112,7 @@ class TestRedactSensitive:
"username": "safe", "username": "safe",
"host": "safe", "host": "safe",
} }
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result["user_password"] == "***" assert result["user_password"] == "***"
assert result["api_key_value"] == "***" assert result["api_key_value"] == "***"
assert result["auth_token_expiry"] == "***" assert result["auth_token_expiry"] == "***"
@@ -119,12 +122,26 @@ class TestRedactSensitive:
def test_mixed_list_content(self) -> None: def test_mixed_list_content(self) -> None:
data = [{"key": "val"}, "string", 123, [{"token": "inner"}]] data = [{"key": "val"}, "string", 123, [{"token": "inner"}]]
result = _redact_sensitive(data) result = redact_sensitive(data)
assert result[0]["key"] == "***" assert result[0]["key"] == "***"
assert result[1] == "string" assert result[1] == "string"
assert result[2] == 123 assert result[2] == 123
assert result[3][0]["token"] == "***" assert result[3][0]["token"] == "***"
def test_new_sensitive_keys_are_redacted(self) -> None:
"""PR-added keys: authorization, cookie, session, credential, passphrase, jwt."""
data = {
"authorization": "Bearer token123",
"cookie": "session=abc",
"jwt": "eyJ...",
"credential": "secret_cred",
"passphrase": "hunter2",
"session": "sess_id",
}
result = redact_sensitive(data)
for key, val in result.items():
assert val == "***", f"Key '{key}' was not redacted"
# --------------------------------------------------------------------------- # ---------------------------------------------------------------------------
# Timeout constants # Timeout constants
@@ -274,7 +291,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="HTTP error 401"), pytest.raises(ToolError, match="Unraid API returned HTTP 401"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -292,7 +309,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="HTTP error 500"), pytest.raises(ToolError, match="Unraid API returned HTTP 500"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -310,7 +327,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="HTTP error 503"), pytest.raises(ToolError, match="Unraid API returned HTTP 503"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -320,7 +337,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="Network connection error"), pytest.raises(ToolError, match="Network error connecting to Unraid API"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -330,7 +347,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="Network connection error"), pytest.raises(ToolError, match="Network error connecting to Unraid API"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -344,7 +361,7 @@ class TestMakeGraphQLRequestErrors:
with ( with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client), patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="Invalid JSON response"), pytest.raises(ToolError, match=r"invalid response.*not valid JSON"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
@@ -464,3 +481,240 @@ class TestGraphQLErrorHandling:
pytest.raises(ToolError, match="GraphQL API error"), pytest.raises(ToolError, match="GraphQL API error"),
): ):
await make_graphql_request("{ info }") await make_graphql_request("{ info }")
# ---------------------------------------------------------------------------
# _RateLimiter
# ---------------------------------------------------------------------------
class TestRateLimiter:
"""Unit tests for the token bucket rate limiter."""
async def test_acquire_consumes_one_token(self) -> None:
limiter = _RateLimiter(max_tokens=10, refill_rate=1.0)
initial = limiter.tokens
await limiter.acquire()
assert limiter.tokens == pytest.approx(initial - 1, abs=1e-3)
async def test_acquire_succeeds_when_tokens_available(self) -> None:
limiter = _RateLimiter(max_tokens=5, refill_rate=1.0)
# Should complete without sleeping
for _ in range(5):
await limiter.acquire()
# _refill() runs during each acquire() call and adds a tiny time-based
# amount; check < 1.0 (not enough for another immediate request) rather
# than == 0.0 to avoid flakiness from timing.
assert limiter.tokens < 1.0
async def test_tokens_do_not_exceed_max(self) -> None:
limiter = _RateLimiter(max_tokens=10, refill_rate=1.0)
# Force refill with large elapsed time
limiter.last_refill = time.monotonic() - 100.0 # 100 seconds ago
limiter._refill()
assert limiter.tokens == 10.0 # Capped at max_tokens
async def test_refill_adds_tokens_based_on_elapsed(self) -> None:
limiter = _RateLimiter(max_tokens=100, refill_rate=10.0)
limiter.tokens = 0.0
limiter.last_refill = time.monotonic() - 1.0 # 1 second ago
limiter._refill()
# Should have refilled ~10 tokens (10.0 rate * 1.0 sec)
assert 9.5 < limiter.tokens < 10.5
async def test_acquire_sleeps_when_no_tokens(self) -> None:
"""When tokens are exhausted, acquire should sleep before consuming."""
limiter = _RateLimiter(max_tokens=1, refill_rate=1.0)
limiter.tokens = 0.0
sleep_calls = []
async def fake_sleep(duration: float) -> None:
sleep_calls.append(duration)
# Simulate refill by advancing last_refill so tokens replenish
limiter.tokens = 1.0
limiter.last_refill = time.monotonic()
with patch("unraid_mcp.core.client.asyncio.sleep", side_effect=fake_sleep):
await limiter.acquire()
assert len(sleep_calls) == 1
assert sleep_calls[0] > 0
async def test_default_params_match_api_limits(self) -> None:
"""Default rate limiter must use 90 tokens at 9.0/sec (10% headroom from 100/10s)."""
limiter = _RateLimiter()
assert limiter.max_tokens == 90
assert limiter.refill_rate == 9.0
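For orientation, a sketch of the token bucket these tests describe. The attribute and method names (`tokens`, `last_refill`, `max_tokens`, `refill_rate`, `acquire`, `_refill`) come straight from the assertions; the sleep math is an assumption rather than a copy of `client.py`:

```python
import asyncio
import time

class _RateLimiter:
    """Token bucket: defaults give 10% headroom under the API's 100 req/10s limit."""

    def __init__(self, max_tokens: int = 90, refill_rate: float = 9.0) -> None:
        self.max_tokens = max_tokens
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(max_tokens)
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(
            float(self.max_tokens),
            self.tokens + (now - self.last_refill) * self.refill_rate,
        )
        self.last_refill = now

    async def acquire(self) -> None:
        self._refill()
        while self.tokens < 1.0:
            # Sleep roughly long enough for one token, then re-check.
            await asyncio.sleep((1.0 - self.tokens) / self.refill_rate)
            self._refill()
        self.tokens -= 1.0
```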
# ---------------------------------------------------------------------------
# _QueryCache
# ---------------------------------------------------------------------------
class TestQueryCache:
"""Unit tests for the TTL query cache."""
async def test_miss_on_empty_cache(self) -> None:
cache = _QueryCache()
assert await cache.get("{ info }", None) is None
async def test_put_and_get_hit(self) -> None:
cache = _QueryCache()
data = {"result": "ok"}
await cache.put("GetNetworkConfig { }", None, data)
result = await cache.get("GetNetworkConfig { }", None)
assert result == data
async def test_expired_entry_returns_none(self) -> None:
cache = _QueryCache()
data = {"result": "ok"}
await cache.put("GetNetworkConfig { }", None, data)
# Manually expire the entry
key = cache._cache_key("GetNetworkConfig { }", None)
cache._store[key] = (time.monotonic() - 1.0, data) # expired 1 sec ago
assert await cache.get("GetNetworkConfig { }", None) is None
async def test_invalidate_all_clears_store(self) -> None:
cache = _QueryCache()
await cache.put("GetNetworkConfig { }", None, {"x": 1})
await cache.put("GetOwner { }", None, {"y": 2})
assert len(cache._store) == 2
await cache.invalidate_all()
assert len(cache._store) == 0
async def test_variables_affect_cache_key(self) -> None:
"""Different variables produce different cache keys."""
cache = _QueryCache()
q = "GetNetworkConfig($id: ID!) { network(id: $id) { name } }"
await cache.put(q, {"id": "1"}, {"name": "eth0"})
await cache.put(q, {"id": "2"}, {"name": "eth1"})
assert await cache.get(q, {"id": "1"}) == {"name": "eth0"}
assert await cache.get(q, {"id": "2"}) == {"name": "eth1"}
def test_is_cacheable_returns_true_for_known_prefixes(self) -> None:
assert _QueryCache.is_cacheable("GetNetworkConfig { ... }") is True
assert _QueryCache.is_cacheable("GetRegistrationInfo { ... }") is True
assert _QueryCache.is_cacheable("GetOwner { ... }") is True
assert _QueryCache.is_cacheable("GetFlash { ... }") is True
def test_is_cacheable_returns_false_for_mutations(self) -> None:
assert _QueryCache.is_cacheable('mutation { docker { start(id: "x") } }') is False
def test_is_cacheable_returns_false_for_unlisted_queries(self) -> None:
assert _QueryCache.is_cacheable("{ docker { containers { id } } }") is False
assert _QueryCache.is_cacheable("{ info { os } }") is False
def test_is_cacheable_mutation_check_is_prefix(self) -> None:
"""Queries that start with 'mutation' after whitespace are not cacheable."""
assert _QueryCache.is_cacheable(" mutation { ... }") is False
def test_is_cacheable_with_explicit_query_keyword(self) -> None:
"""Operation names after explicit 'query' keyword must be recognized."""
assert _QueryCache.is_cacheable("query GetNetworkConfig { network { name } }") is True
assert _QueryCache.is_cacheable("query GetOwner { owner { name } }") is True
def test_is_cacheable_anonymous_query_returns_false(self) -> None:
"""Anonymous 'query { ... }' has no operation name — must not be cached."""
assert _QueryCache.is_cacheable("query { network { name } }") is False
async def test_expired_entry_removed_from_store(self) -> None:
"""Accessing an expired entry should remove it from the internal store."""
cache = _QueryCache()
await cache.put("GetOwner { }", None, {"owner": "root"})
key = cache._cache_key("GetOwner { }", None)
cache._store[key] = (time.monotonic() - 1.0, {"owner": "root"})
assert key in cache._store
await cache.get("GetOwner { }", None) # triggers deletion
assert key not in cache._store
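The cache contract implied here: `_store` maps a query+variables key to an `(expires_at, data)` pair keyed on `time.monotonic()`, expired entries are evicted on read, and only a small whitelist of named, slow-changing queries is cacheable. A sketch under those assumptions (the TTL default and the lock usage are guesses, not taken from `client.py`):

```python
import asyncio
import json
import time
from typing import Any

_CACHEABLE_OPS = {"GetNetworkConfig", "GetRegistrationInfo", "GetOwner", "GetFlash"}

class _QueryCache:
    """TTL cache for a whitelist of rarely-changing queries."""

    def __init__(self, ttl: float = 30.0) -> None:  # TTL default is assumed
        self.ttl = ttl
        self._store: dict[str, tuple[float, Any]] = {}
        self._lock = asyncio.Lock()

    @staticmethod
    def _cache_key(query: str, variables: dict | None) -> str:
        return f"{query}::{json.dumps(variables, sort_keys=True)}"

    @staticmethod
    def is_cacheable(query: str) -> bool:
        stripped = query.strip()
        if stripped.startswith("mutation"):
            return False
        if stripped.startswith("query"):
            stripped = stripped[len("query"):].lstrip()
        op_name = stripped.split("{", 1)[0].strip()
        return op_name in _CACHEABLE_OPS  # anonymous queries yield "" -> False

    async def get(self, query: str, variables: dict | None) -> Any | None:
        key = self._cache_key(query, variables)
        async with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            expires_at, data = entry
            if time.monotonic() >= expires_at:
                del self._store[key]  # evict expired entries on access
                return None
            return data

    async def put(self, query: str, variables: dict | None, data: Any) -> None:
        async with self._lock:
            key = self._cache_key(query, variables)
            self._store[key] = (time.monotonic() + self.ttl, data)

    async def invalidate_all(self) -> None:
        async with self._lock:
            self._store.clear()
```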
# ---------------------------------------------------------------------------
# make_graphql_request — 429 retry behavior
# ---------------------------------------------------------------------------
class TestRateLimitRetry:
"""Tests for the 429 retry loop in make_graphql_request."""
@pytest.fixture(autouse=True)
def _patch_config(self):
with (
patch("unraid_mcp.core.client.UNRAID_API_URL", "https://unraid.local/graphql"),
patch("unraid_mcp.core.client.UNRAID_API_KEY", "test-key"),
patch("unraid_mcp.core.client.asyncio.sleep", new_callable=AsyncMock),
):
yield
def _make_429_response(self) -> MagicMock:
resp = MagicMock()
resp.status_code = 429
resp.raise_for_status = MagicMock()
return resp
def _make_ok_response(self, data: dict) -> MagicMock:
resp = MagicMock()
resp.status_code = 200
resp.raise_for_status = MagicMock()
resp.json.return_value = {"data": data}
return resp
async def test_single_429_then_success_retries(self) -> None:
"""One 429 followed by a success should return the data."""
mock_client = AsyncMock()
mock_client.post.side_effect = [
self._make_429_response(),
self._make_ok_response({"info": {"os": "Unraid"}}),
]
with patch("unraid_mcp.core.client.get_http_client", return_value=mock_client):
result = await make_graphql_request("{ info { os } }")
assert result == {"info": {"os": "Unraid"}}
assert mock_client.post.call_count == 2
async def test_two_429s_then_success(self) -> None:
"""Two 429s followed by success returns data after 2 retries."""
mock_client = AsyncMock()
mock_client.post.side_effect = [
self._make_429_response(),
self._make_429_response(),
self._make_ok_response({"x": 1}),
]
with patch("unraid_mcp.core.client.get_http_client", return_value=mock_client):
result = await make_graphql_request("{ x }")
assert result == {"x": 1}
assert mock_client.post.call_count == 3
async def test_three_429s_raises_tool_error(self) -> None:
"""Three consecutive 429s (all retries exhausted) raises ToolError."""
mock_client = AsyncMock()
mock_client.post.side_effect = [
self._make_429_response(),
self._make_429_response(),
self._make_429_response(),
]
with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="rate limiting"),
):
await make_graphql_request("{ info }")
async def test_rate_limit_error_message_advises_wait(self) -> None:
"""The ToolError message should tell the user to wait ~10 seconds."""
mock_client = AsyncMock()
mock_client.post.side_effect = [
self._make_429_response(),
self._make_429_response(),
self._make_429_response(),
]
with (
patch("unraid_mcp.core.client.get_http_client", return_value=mock_client),
pytest.raises(ToolError, match="10 seconds"),
):
await make_graphql_request("{ info }")
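These cases fix the retry budget at two retries (three attempts total) and require the failure message to mention both rate limiting and a roughly 10-second wait. A standalone sketch of such a loop; the helper name and backoff schedule are assumptions:

```python
import asyncio

import httpx

from unraid_mcp.core.exceptions import ToolError

MAX_429_RETRIES = 2  # three attempts total, matching the tests above

async def _post_with_retry(
    client: httpx.AsyncClient, url: str, payload: dict
) -> httpx.Response:
    """POST, retrying responses that come back HTTP 429."""
    for attempt in range(MAX_429_RETRIES + 1):
        response = await client.post(url, json=payload)
        if response.status_code != 429:
            return response
        if attempt < MAX_429_RETRIES:
            await asyncio.sleep(2 ** attempt)  # assumed backoff: 1s, then 2s
    raise ToolError(
        "Unraid API is rate limiting requests; wait about 10 seconds and retry."
    )
```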

View File

@@ -70,7 +70,9 @@ class TestDockerValidation:
await tool_fn(action="remove", container_id="abc123") await tool_fn(action="remove", container_id="abc123")
@pytest.mark.parametrize("action", ["start", "stop", "details", "logs", "pause", "unpause"]) @pytest.mark.parametrize("action", ["start", "stop", "details", "logs", "pause", "unpause"])
async def test_container_actions_require_id(self, _mock_graphql: AsyncMock, action: str) -> None: async def test_container_actions_require_id(
self, _mock_graphql: AsyncMock, action: str
) -> None:
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="container_id"): with pytest.raises(ToolError, match="container_id"):
await tool_fn(action=action) await tool_fn(action=action)
@@ -80,6 +82,14 @@ class TestDockerValidation:
with pytest.raises(ToolError, match="network_id"): with pytest.raises(ToolError, match="network_id"):
await tool_fn(action="network_details") await tool_fn(action="network_details")
async def test_non_logs_action_ignores_tail_lines_validation(
self, _mock_graphql: AsyncMock
) -> None:
_mock_graphql.return_value = {"docker": {"containers": []}}
tool_fn = _make_tool()
result = await tool_fn(action="list", tail_lines=0)
assert result["containers"] == []
class TestDockerActions: class TestDockerActions:
async def test_list(self, _mock_graphql: AsyncMock) -> None: async def test_list(self, _mock_graphql: AsyncMock) -> None:
@@ -94,13 +104,7 @@ class TestDockerActions:
# First call resolves ID, second performs start # First call resolves ID, second performs start
cid = "a" * 64 + ":local" cid = "a" * 64 + ":local"
_mock_graphql.side_effect = [ _mock_graphql.side_effect = [
{ {"docker": {"containers": [{"id": cid, "names": ["plex"]}]}},
"docker": {
"containers": [
{"id": cid, "names": ["plex"]}
]
}
},
{ {
"docker": { "docker": {
"start": { "start": {
@@ -115,7 +119,7 @@ class TestDockerActions:
assert result["success"] is True assert result["success"] is True
async def test_networks(self, _mock_graphql: AsyncMock) -> None: async def test_networks(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"dockerNetworks": [{"id": "net:1", "name": "bridge"}]} _mock_graphql.return_value = {"docker": {"networks": [{"id": "net:1", "name": "bridge"}]}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="networks") result = await tool_fn(action="networks")
assert len(result["networks"]) == 1 assert len(result["networks"]) == 1
@@ -175,7 +179,7 @@ class TestDockerActions:
"docker": {"updateAllContainers": [{"id": "c1", "state": "running"}]} "docker": {"updateAllContainers": [{"id": "c1", "state": "running"}]}
} }
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="update_all") result = await tool_fn(action="update_all", confirm=True)
assert result["success"] is True assert result["success"] is True
assert len(result["containers"]) == 1 assert len(result["containers"]) == 1
@@ -224,9 +228,28 @@ class TestDockerActions:
async def test_generic_exception_wraps_in_tool_error(self, _mock_graphql: AsyncMock) -> None: async def test_generic_exception_wraps_in_tool_error(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = RuntimeError("unexpected failure") _mock_graphql.side_effect = RuntimeError("unexpected failure")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="unexpected failure"): with pytest.raises(ToolError, match="Failed to execute docker/list"):
await tool_fn(action="list") await tool_fn(action="list")
async def test_short_id_prefix_ambiguous_rejected(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"docker": {
"containers": [
{
"id": "abcdef1234560000000000000000000000000000000000000000000000000000:local",
"names": ["plex"],
},
{
"id": "abcdef1234561111111111111111111111111111111111111111111111111111:local",
"names": ["sonarr"],
},
]
}
}
tool_fn = _make_tool()
with pytest.raises(ToolError, match="ambiguous"):
await tool_fn(action="logs", container_id="abcdef123456")
class TestDockerMutationFailures: class TestDockerMutationFailures:
"""Tests for mutation responses that indicate failure or unexpected shapes.""" """Tests for mutation responses that indicate failure or unexpected shapes."""
@@ -271,10 +294,16 @@ class TestDockerMutationFailures:
"""update_all with no containers to update.""" """update_all with no containers to update."""
_mock_graphql.return_value = {"docker": {"updateAllContainers": []}} _mock_graphql.return_value = {"docker": {"updateAllContainers": []}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="update_all") result = await tool_fn(action="update_all", confirm=True)
assert result["success"] is True assert result["success"] is True
assert result["containers"] == [] assert result["containers"] == []
async def test_update_all_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
"""update_all is destructive and requires confirm=True."""
tool_fn = _make_tool()
with pytest.raises(ToolError, match="destructive"):
await tool_fn(action="update_all")
async def test_mutation_timeout(self, _mock_graphql: AsyncMock) -> None: async def test_mutation_timeout(self, _mock_graphql: AsyncMock) -> None:
"""Mid-operation timeout during a docker mutation.""" """Mid-operation timeout during a docker mutation."""
@@ -315,3 +344,159 @@ class TestDockerNetworkErrors:
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="Invalid JSON"): with pytest.raises(ToolError, match="Invalid JSON"):
await tool_fn(action="list") await tool_fn(action="list")
_ORGANIZER_RESPONSE = {
"version": 1.0,
"views": [{"id": "default", "name": "Default", "rootId": "root", "flatEntries": []}],
}
class TestDockerOrganizerMutations:
async def test_create_folder_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"createDockerFolder": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="create_folder", folder_name="Media")
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["name"] == "Media"
async def test_create_folder_requires_name(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="folder_name"):
await _make_tool()(action="create_folder")
async def test_set_folder_children_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"setDockerFolderChildren": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="set_folder_children", children_ids=["c1"])
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["childrenIds"] == ["c1"]
async def test_set_folder_children_requires_children(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="children_ids"):
await _make_tool()(action="set_folder_children")
async def test_delete_entries_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="destructive"):
await _make_tool()(action="delete_entries", entry_ids=["e1"])
async def test_delete_entries_requires_ids(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="entry_ids"):
await _make_tool()(action="delete_entries", confirm=True)
async def test_delete_entries_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"deleteDockerEntries": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="delete_entries", entry_ids=["e1", "e2"], confirm=True)
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["entryIds"] == ["e1", "e2"]
async def test_move_to_folder_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"moveDockerEntriesToFolder": _ORGANIZER_RESPONSE}
result = await _make_tool()(
action="move_to_folder", source_entry_ids=["e1"], destination_folder_id="f1"
)
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["sourceEntryIds"] == ["e1"]
assert call_vars["destinationFolderId"] == "f1"
async def test_move_to_folder_requires_source_ids(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="source_entry_ids"):
await _make_tool()(action="move_to_folder", destination_folder_id="f1")
async def test_move_to_folder_requires_destination(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="destination_folder_id"):
await _make_tool()(action="move_to_folder", source_entry_ids=["e1"])
async def test_move_to_position_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"moveDockerItemsToPosition": _ORGANIZER_RESPONSE}
result = await _make_tool()(
action="move_to_position",
source_entry_ids=["e1"],
destination_folder_id="f1",
position=2.0,
)
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["sourceEntryIds"] == ["e1"]
assert call_vars["destinationFolderId"] == "f1"
assert call_vars["position"] == 2.0
async def test_move_to_position_requires_position(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="position"):
await _make_tool()(
action="move_to_position", source_entry_ids=["e1"], destination_folder_id="f1"
)
async def test_rename_folder_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"renameDockerFolder": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="rename_folder", folder_id="f1", new_folder_name="New")
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["folderId"] == "f1"
assert call_vars["newName"] == "New"
async def test_rename_folder_requires_folder_id(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="folder_id"):
await _make_tool()(action="rename_folder", new_folder_name="New")
async def test_rename_folder_requires_new_name(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="new_folder_name"):
await _make_tool()(action="rename_folder", folder_id="f1")
async def test_create_folder_with_items_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"createDockerFolderWithItems": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="create_folder_with_items", folder_name="New")
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["name"] == "New"
assert "sourceEntryIds" not in call_vars # not forwarded when not provided
async def test_create_folder_with_items_with_source_ids(self, _mock_graphql: AsyncMock) -> None:
"""Passing source_entry_ids must forward sourceEntryIds to the mutation."""
_mock_graphql.return_value = {"createDockerFolderWithItems": _ORGANIZER_RESPONSE}
result = await _make_tool()(
action="create_folder_with_items",
folder_name="Media",
source_entry_ids=["c1", "c2"],
)
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["name"] == "Media"
assert call_vars["sourceEntryIds"] == ["c1", "c2"]
async def test_create_folder_with_items_requires_name(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="folder_name"):
await _make_tool()(action="create_folder_with_items")
async def test_update_view_prefs_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"updateDockerViewPreferences": _ORGANIZER_RESPONSE}
result = await _make_tool()(action="update_view_prefs", view_prefs={"sort": "name"})
assert result["success"] is True
call_vars = _mock_graphql.call_args[0][1]
assert call_vars["prefs"] == {"sort": "name"}
async def test_update_view_prefs_requires_prefs(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="view_prefs"):
await _make_tool()(action="update_view_prefs")
async def test_sync_templates_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"syncDockerTemplatePaths": {"scanned": 5, "matched": 4, "skipped": 1, "errors": []}
}
result = await _make_tool()(action="sync_templates")
assert result["success"] is True
async def test_reset_template_mappings_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
with pytest.raises(ToolError, match="destructive"):
await _make_tool()(action="reset_template_mappings")
async def test_reset_template_mappings_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"resetDockerTemplateMappings": True}
result = await _make_tool()(action="reset_template_mappings", confirm=True)
assert result["success"] is True
async def test_refresh_digests_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"refreshDockerDigests": True}
result = await _make_tool()(action="refresh_digests")
assert result["success"] is True

View File

@@ -7,6 +7,7 @@ import pytest
from conftest import make_tool_fn from conftest import make_tool_fn
from unraid_mcp.core.exceptions import ToolError from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.core.utils import safe_display_url
@pytest.fixture @pytest.fixture
@@ -99,7 +100,7 @@ class TestHealthActions:
"unraid_mcp.tools.health._diagnose_subscriptions", "unraid_mcp.tools.health._diagnose_subscriptions",
side_effect=RuntimeError("broken"), side_effect=RuntimeError("broken"),
), ),
pytest.raises(ToolError, match="broken"), pytest.raises(ToolError, match="Failed to execute health/diagnose"),
): ):
await tool_fn(action="diagnose") await tool_fn(action="diagnose")
@@ -114,7 +115,7 @@ class TestHealthActions:
assert "cpu_sub" in result assert "cpu_sub" in result
async def test_diagnose_import_error_internal(self) -> None: async def test_diagnose_import_error_internal(self) -> None:
"""_diagnose_subscriptions catches ImportError and returns error dict.""" """_diagnose_subscriptions raises ToolError when subscription modules are unavailable."""
import sys import sys
from unraid_mcp.tools.health import _diagnose_subscriptions from unraid_mcp.tools.health import _diagnose_subscriptions
@@ -126,16 +127,70 @@ class TestHealthActions:
try: try:
# Replace the modules with objects that raise ImportError on access # Replace the modules with objects that raise ImportError on access
with patch.dict( with (
sys.modules, patch.dict(
{ sys.modules,
"unraid_mcp.subscriptions": None, {
"unraid_mcp.subscriptions.manager": None, "unraid_mcp.subscriptions": None,
"unraid_mcp.subscriptions.resources": None, "unraid_mcp.subscriptions.manager": None,
}, "unraid_mcp.subscriptions.resources": None,
},
),
pytest.raises(ToolError, match="Subscription modules not available"),
): ):
result = await _diagnose_subscriptions() await _diagnose_subscriptions()
assert "error" in result
finally: finally:
# Restore cached modules # Restore cached modules
sys.modules.update(cached) sys.modules.update(cached)
# ---------------------------------------------------------------------------
# _safe_display_url — URL redaction helper
# ---------------------------------------------------------------------------
class TestSafeDisplayUrl:
"""Verify that safe_display_url strips credentials/path and preserves scheme+host+port."""
def test_none_returns_none(self) -> None:
assert safe_display_url(None) is None
def test_empty_string_returns_none(self) -> None:
assert safe_display_url("") is None
def test_simple_url_scheme_and_host(self) -> None:
assert safe_display_url("https://unraid.local/graphql") == "https://unraid.local"
def test_preserves_port(self) -> None:
assert safe_display_url("https://10.1.0.2:31337/api/graphql") == "https://10.1.0.2:31337"
def test_strips_path(self) -> None:
result = safe_display_url("http://unraid.local/some/deep/path?query=1")
assert "path" not in result
assert "query" not in result
def test_strips_credentials(self) -> None:
result = safe_display_url("https://user:password@unraid.local/graphql")
assert "user" not in result
assert "password" not in result
assert result == "https://unraid.local"
def test_strips_query_params(self) -> None:
result = safe_display_url("http://host.local?token=abc&key=xyz")
assert "token" not in result
assert "abc" not in result
def test_http_scheme_preserved(self) -> None:
result = safe_display_url("http://10.0.0.1:8080/api")
assert result == "http://10.0.0.1:8080"
def test_tailscale_url(self) -> None:
result = safe_display_url("https://100.118.209.1:31337/graphql")
assert result == "https://100.118.209.1:31337"
def test_malformed_ipv6_url_returns_unparseable(self) -> None:
"""Malformed IPv6 brackets in netloc cause urlparse.hostname to raise ValueError."""
# urlparse("https://[invalid") parses without error, but accessing .hostname
# raises ValueError: Invalid IPv6 URL — this triggers the except branch.
result = safe_display_url("https://[invalid")
assert result == "<unparseable>"

View File

@@ -186,7 +186,7 @@ class TestUnraidInfoTool:
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None: async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = RuntimeError("unexpected") _mock_graphql.side_effect = RuntimeError("unexpected")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="unexpected"): with pytest.raises(ToolError, match="Failed to execute info/online"):
await tool_fn(action="online") await tool_fn(action="online")
async def test_metrics(self, _mock_graphql: AsyncMock) -> None: async def test_metrics(self, _mock_graphql: AsyncMock) -> None:
@@ -201,6 +201,7 @@ class TestUnraidInfoTool:
_mock_graphql.return_value = {"services": [{"name": "docker", "state": "running"}]} _mock_graphql.return_value = {"services": [{"name": "docker", "state": "running"}]}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="services") result = await tool_fn(action="services")
assert "services" in result
assert len(result["services"]) == 1 assert len(result["services"]) == 1
assert result["services"][0]["name"] == "docker" assert result["services"][0]["name"] == "docker"
@@ -225,6 +226,7 @@ class TestUnraidInfoTool:
} }
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="servers") result = await tool_fn(action="servers")
assert "servers" in result
assert len(result["servers"]) == 1 assert len(result["servers"]) == 1
assert result["servers"][0]["name"] == "tower" assert result["servers"][0]["name"] == "tower"
@@ -248,6 +250,7 @@ class TestUnraidInfoTool:
} }
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="ups_devices") result = await tool_fn(action="ups_devices")
assert "ups_devices" in result
assert len(result["ups_devices"]) == 1 assert len(result["ups_devices"]) == 1
assert result["ups_devices"][0]["model"] == "APC" assert result["ups_devices"][0]["model"] == "APC"
@@ -279,3 +282,64 @@ class TestInfoNetworkErrors:
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="Invalid JSON"): with pytest.raises(ToolError, match="Invalid JSON"):
await tool_fn(action="network") await tool_fn(action="network")
class TestInfoMutations:
async def test_update_server_requires_name(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="server_name"):
await tool_fn(action="update_server")
async def test_update_server_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateServerIdentity": {
"id": "s:1",
"name": "tootie",
"comment": None,
"status": "online",
}
}
tool_fn = _make_tool()
result = await tool_fn(action="update_server", server_name="tootie")
assert result["success"] is True
assert result["data"]["name"] == "tootie"
async def test_update_server_passes_optional_fields(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateServerIdentity": {"id": "s:1", "name": "x", "comment": None, "status": "online"}
}
tool_fn = _make_tool()
await tool_fn(action="update_server", server_name="x", sys_model="custom")
assert _mock_graphql.call_args[0][1]["sysModel"] == "custom"
async def test_update_ssh_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="confirm=True"):
await tool_fn(action="update_ssh", ssh_enabled=True, ssh_port=22)
async def test_update_ssh_requires_enabled(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="ssh_enabled"):
await tool_fn(action="update_ssh", confirm=True, ssh_port=22)
async def test_update_ssh_requires_port(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="ssh_port"):
await tool_fn(action="update_ssh", confirm=True, ssh_enabled=True)
async def test_update_ssh_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateSshSettings": {"id": "s:1", "useSsh": True, "portssh": 22}
}
tool_fn = _make_tool()
result = await tool_fn(action="update_ssh", confirm=True, ssh_enabled=True, ssh_port=22)
assert result["success"] is True
assert result["data"]["useSsh"] is True
async def test_update_ssh_passes_correct_input(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateSshSettings": {"id": "s:1", "useSsh": False, "portssh": 2222}
}
tool_fn = _make_tool()
await tool_fn(action="update_ssh", confirm=True, ssh_enabled=False, ssh_port=2222)
assert _mock_graphql.call_args[0][1] == {"input": {"enabled": False, "port": 2222}}
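The update_ssh tests pin three argument checks and the exact variables payload `{"input": {"enabled": ..., "port": ...}}`. A sketch of that argument handling (hypothetical helper name; the real validation is inline in the info tool):

```python
from unraid_mcp.core.exceptions import ToolError

def build_update_ssh_variables(
    confirm: bool, ssh_enabled: bool | None, ssh_port: int | None
) -> dict:
    """Validate update_ssh arguments and build the GraphQL variables payload."""
    if not confirm:
        raise ToolError("update_ssh changes remote access and requires confirm=True.")
    if ssh_enabled is None:
        raise ToolError("update_ssh requires ssh_enabled.")
    if ssh_port is None:
        raise ToolError("update_ssh requires ssh_port.")
    return {"input": {"enabled": ssh_enabled, "port": ssh_port}}
```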

View File

@@ -65,7 +65,9 @@ class TestKeysActions:
async def test_create(self, _mock_graphql: AsyncMock) -> None: async def test_create(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = { _mock_graphql.return_value = {
"createApiKey": {"id": "k:new", "name": "new-key", "key": "secret123", "roles": []} "apiKey": {
"create": {"id": "k:new", "name": "new-key", "key": "secret123", "roles": []}
}
} }
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="create", name="new-key") result = await tool_fn(action="create", name="new-key")
@@ -74,11 +76,13 @@ class TestKeysActions:
async def test_create_with_roles(self, _mock_graphql: AsyncMock) -> None: async def test_create_with_roles(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = { _mock_graphql.return_value = {
"createApiKey": { "apiKey": {
"id": "k:new", "create": {
"name": "admin-key", "id": "k:new",
"key": "secret", "name": "admin-key",
"roles": ["admin"], "key": "secret",
"roles": ["admin"],
}
} }
} }
tool_fn = _make_tool() tool_fn = _make_tool()
@@ -86,13 +90,15 @@ class TestKeysActions:
assert result["success"] is True assert result["success"] is True
async def test_update(self, _mock_graphql: AsyncMock) -> None: async def test_update(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"updateApiKey": {"id": "k:1", "name": "renamed", "roles": []}} _mock_graphql.return_value = {
"apiKey": {"update": {"id": "k:1", "name": "renamed", "roles": []}}
}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="update", key_id="k:1", name="renamed") result = await tool_fn(action="update", key_id="k:1", name="renamed")
assert result["success"] is True assert result["success"] is True
async def test_delete(self, _mock_graphql: AsyncMock) -> None: async def test_delete(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"deleteApiKeys": True} _mock_graphql.return_value = {"apiKey": {"delete": True}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="delete", key_id="k:1", confirm=True) result = await tool_fn(action="delete", key_id="k:1", confirm=True)
assert result["success"] is True assert result["success"] is True
@@ -100,5 +106,5 @@ class TestKeysActions:
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None: async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = RuntimeError("connection lost") _mock_graphql.side_effect = RuntimeError("connection lost")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="connection lost"): with pytest.raises(ToolError, match="Failed to execute keys/list"):
await tool_fn(action="list") await tool_fn(action="list")

View File

@@ -82,9 +82,7 @@ class TestNotificationsActions:
async def test_create(self, _mock_graphql: AsyncMock) -> None: async def test_create(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = { _mock_graphql.return_value = {
"notifications": { "createNotification": {"id": "n:new", "title": "Test", "importance": "INFO"}
"createNotification": {"id": "n:new", "title": "Test", "importance": "INFO"}
}
} }
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn( result = await tool_fn(
@@ -97,13 +95,18 @@ class TestNotificationsActions:
assert result["success"] is True assert result["success"] is True
async def test_archive_notification(self, _mock_graphql: AsyncMock) -> None: async def test_archive_notification(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"notifications": {"archiveNotification": True}} _mock_graphql.return_value = {"archiveNotification": {"id": "n:1"}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="archive", notification_id="n:1") result = await tool_fn(action="archive", notification_id="n:1")
assert result["success"] is True assert result["success"] is True
async def test_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None: async def test_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"notifications": {"deleteNotification": True}} _mock_graphql.return_value = {
"deleteNotification": {
"unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
"archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
}
}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn( result = await tool_fn(
action="delete", action="delete",
@@ -114,13 +117,18 @@ class TestNotificationsActions:
assert result["success"] is True assert result["success"] is True
async def test_archive_all(self, _mock_graphql: AsyncMock) -> None: async def test_archive_all(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"notifications": {"archiveAll": True}} _mock_graphql.return_value = {
"archiveAll": {
"unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
"archive": {"info": 0, "warning": 0, "alert": 0, "total": 1},
}
}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="archive_all") result = await tool_fn(action="archive_all")
assert result["success"] is True assert result["success"] is True
async def test_unread_notification(self, _mock_graphql: AsyncMock) -> None: async def test_unread_notification(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"notifications": {"unreadNotification": True}} _mock_graphql.return_value = {"unreadNotification": {"id": "n:1"}}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="unread", notification_id="n:1") result = await tool_fn(action="unread", notification_id="n:1")
assert result["success"] is True assert result["success"] is True
@@ -140,7 +148,12 @@ class TestNotificationsActions:
assert filter_var["offset"] == 5 assert filter_var["offset"] == 5
async def test_delete_archived(self, _mock_graphql: AsyncMock) -> None: async def test_delete_archived(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"notifications": {"deleteArchivedNotifications": True}} _mock_graphql.return_value = {
"deleteArchivedNotifications": {
"unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
"archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
}
}
tool_fn = _make_tool() tool_fn = _make_tool()
result = await tool_fn(action="delete_archived", confirm=True) result = await tool_fn(action="delete_archived", confirm=True)
assert result["success"] is True assert result["success"] is True
@@ -149,5 +162,187 @@ class TestNotificationsActions:
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None: async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.side_effect = RuntimeError("boom") _mock_graphql.side_effect = RuntimeError("boom")
tool_fn = _make_tool() tool_fn = _make_tool()
with pytest.raises(ToolError, match="boom"): with pytest.raises(ToolError, match="Failed to execute notifications/overview"):
await tool_fn(action="overview") await tool_fn(action="overview")
class TestNotificationsCreateValidation:
"""Tests for importance enum and field length validation added in this PR."""
async def test_invalid_importance_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="Invalid importance"):
await tool_fn(
action="create",
title="T",
subject="S",
description="D",
importance="invalid",
)
async def test_normal_importance_rejected(self, _mock_graphql: AsyncMock) -> None:
"""NORMAL is not a valid GraphQL NotificationImportance value (INFO/WARNING/ALERT are)."""
tool_fn = _make_tool()
with pytest.raises(ToolError, match="Invalid importance"):
await tool_fn(
action="create",
title="T",
subject="S",
description="D",
importance="normal",
)
async def test_alert_importance_accepted(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"createNotification": {"id": "n:1", "importance": "ALERT"}}
tool_fn = _make_tool()
result = await tool_fn(
action="create", title="T", subject="S", description="D", importance="alert"
)
assert result["success"] is True
async def test_title_too_long_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="title must be at most 200"):
await tool_fn(
action="create",
title="x" * 201,
subject="S",
description="D",
importance="info",
)
async def test_subject_too_long_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="subject must be at most 500"):
await tool_fn(
action="create",
title="T",
subject="x" * 501,
description="D",
importance="info",
)
async def test_description_too_long_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="description must be at most 2000"):
await tool_fn(
action="create",
title="T",
subject="S",
description="x" * 2001,
importance="info",
)
async def test_title_at_max_accepted(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"createNotification": {"id": "n:1", "importance": "INFO"}}
tool_fn = _make_tool()
result = await tool_fn(
action="create",
title="x" * 200,
subject="S",
description="D",
importance="info",
)
assert result["success"] is True
class TestNewNotificationMutations:
async def test_archive_many_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"archiveNotifications": {
"unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
"archive": {"info": 2, "warning": 0, "alert": 0, "total": 2},
}
}
tool_fn = _make_tool()
result = await tool_fn(action="archive_many", notification_ids=["n:1", "n:2"])
assert result["success"] is True
call_args = _mock_graphql.call_args
assert call_args[0][1] == {"ids": ["n:1", "n:2"]}
async def test_archive_many_requires_ids(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="notification_ids"):
await tool_fn(action="archive_many")
async def test_create_unique_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"notifyIfUnique": {"id": "n:1", "title": "Test", "importance": "INFO"}
}
tool_fn = _make_tool()
result = await tool_fn(
action="create_unique",
title="Test",
subject="Subj",
description="Desc",
importance="info",
)
assert result["success"] is True
async def test_create_unique_returns_none_when_duplicate(
self, _mock_graphql: AsyncMock
) -> None:
_mock_graphql.return_value = {"notifyIfUnique": None}
tool_fn = _make_tool()
result = await tool_fn(
action="create_unique",
title="T",
subject="S",
description="D",
importance="info",
)
assert result["success"] is True
assert result["duplicate"] is True
async def test_create_unique_requires_fields(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="requires title"):
await tool_fn(action="create_unique")
async def test_unarchive_many_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"unarchiveNotifications": {
"unread": {"info": 2, "warning": 0, "alert": 0, "total": 2},
"archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
}
}
tool_fn = _make_tool()
result = await tool_fn(action="unarchive_many", notification_ids=["n:1", "n:2"])
assert result["success"] is True
async def test_unarchive_many_requires_ids(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="notification_ids"):
await tool_fn(action="unarchive_many")
async def test_unarchive_all_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"unarchiveAll": {
"unread": {"info": 5, "warning": 1, "alert": 0, "total": 6},
"archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
}
}
tool_fn = _make_tool()
result = await tool_fn(action="unarchive_all")
assert result["success"] is True
async def test_unarchive_all_with_importance(self, _mock_graphql: AsyncMock) -> None:
"""Lowercase importance input must be uppercased before being sent to GraphQL."""
_mock_graphql.return_value = {
"unarchiveAll": {"unread": {"total": 1}, "archive": {"total": 0}}
}
tool_fn = _make_tool()
await tool_fn(action="unarchive_all", importance="warning")
call_args = _mock_graphql.call_args
assert call_args[0][1] == {"importance": "WARNING"}
async def test_recalculate_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"recalculateOverview": {
"unread": {"info": 3, "warning": 1, "alert": 0, "total": 4},
"archive": {"info": 10, "warning": 0, "alert": 0, "total": 10},
}
}
tool_fn = _make_tool()
result = await tool_fn(action="recalculate")
assert result["success"] is True

View File

@@ -19,7 +19,6 @@ def _make_tool():
    return make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")

-@pytest.mark.usefixtures("_mock_graphql")
class TestRcloneValidation:
    async def test_delete_requires_confirm(self) -> None:
        tool_fn = _make_tool()
@@ -100,3 +99,83 @@ class TestRcloneActions:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="Failed to delete"):
            await tool_fn(action="delete_remote", name="gdrive", confirm=True)
class TestRcloneConfigDataValidation:
"""Tests for _validate_config_data security guards."""
async def test_path_traversal_in_key_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="disallowed characters"):
await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={"../evil": "value"},
)
async def test_shell_metachar_in_key_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="disallowed characters"):
await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={"key;rm": "value"},
)
async def test_too_many_keys_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="max 50"):
await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={f"key{i}": "v" for i in range(51)},
)
async def test_dict_value_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="string, number, or boolean"):
await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={"nested": {"key": "val"}},
)
async def test_value_too_long_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="exceeds max length"):
await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={"key": "x" * 4097},
)
async def test_boolean_value_accepted(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"rclone": {"createRCloneRemote": {"name": "r", "type": "s3"}}
}
tool_fn = _make_tool()
result = await tool_fn(
action="create_remote",
name="r",
provider_type="s3",
config_data={"use_path_style": True},
)
assert result["success"] is True
async def test_int_value_accepted(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"rclone": {"createRCloneRemote": {"name": "r", "type": "sftp"}}
}
tool_fn = _make_tool()
result = await tool_fn(
action="create_remote",
name="r",
provider_type="sftp",
config_data={"port": 22},
)
assert result["success"] is True
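Taken together, these tests imply a guard of roughly the following shape. This is a sketch reconstructed from the assertions above (key regex, limits, and messages inferred from the tests; the real `_validate_config_data` lives in the rclone tool module):

import re

from unraid_mcp.core.exceptions import ToolError

_KEY_RE = re.compile(r"^[A-Za-z0-9_]+$")  # assumed charset; blocks '../' and shell metacharacters

def _validate_config_data(config_data: dict) -> None:
    # Limits taken directly from the tests: max 50 keys, scalar values, 4096-char strings.
    if len(config_data) > 50:
        raise ToolError("config_data has too many keys (max 50)")
    for key, value in config_data.items():
        if not _KEY_RE.match(key):
            raise ToolError(f"config_data key {key!r} contains disallowed characters")
        if not isinstance(value, (str, int, float, bool)):
            raise ToolError("config_data values must be a string, number, or boolean")
        if isinstance(value, str) and len(value) > 4096:
            raise ToolError(f"config_data value for {key!r} exceeds max length 4096")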

tests/test_settings.py Normal file
View File

@@ -0,0 +1,266 @@
"""Tests for the unraid_settings tool."""
from __future__ import annotations
from collections.abc import Generator
from unittest.mock import AsyncMock, patch
import pytest
from fastmcp import FastMCP
from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.tools.settings import register_settings_tool
@pytest.fixture
def _mock_graphql() -> Generator[AsyncMock, None, None]:
with patch("unraid_mcp.tools.settings.make_graphql_request", new_callable=AsyncMock) as mock:
yield mock
def _make_tool() -> AsyncMock:
test_mcp = FastMCP("test")
register_settings_tool(test_mcp)
return test_mcp._tool_manager._tools["unraid_settings"].fn # type: ignore[union-attr]
class TestSettingsValidation:
"""Tests for action validation and destructive guard."""
async def test_invalid_action(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="Invalid action"):
await tool_fn(action="nonexistent_action")
async def test_destructive_configure_ups_requires_confirm(
self, _mock_graphql: AsyncMock
) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="confirm=True"):
await tool_fn(action="configure_ups", ups_config={"mode": "slave"})
async def test_destructive_setup_remote_access_requires_confirm(
self, _mock_graphql: AsyncMock
) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="confirm=True"):
await tool_fn(action="setup_remote_access", access_type="STATIC")
async def test_destructive_enable_dynamic_remote_access_requires_confirm(
self, _mock_graphql: AsyncMock
) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="confirm=True"):
await tool_fn(
action="enable_dynamic_remote_access", access_url_type="WAN", dynamic_enabled=True
)
class TestSettingsUpdate:
"""Tests for update action."""
async def test_update_requires_settings_input(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="settings_input is required"):
await tool_fn(action="update")
async def test_update_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateSettings": {"restartRequired": False, "values": {}, "warnings": []}
}
tool_fn = _make_tool()
result = await tool_fn(action="update", settings_input={"shareCount": 5})
assert result["success"] is True
assert result["action"] == "update"
async def test_update_temperature_requires_config(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="temperature_config is required"):
await tool_fn(action="update_temperature")
async def test_update_temperature_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"updateTemperatureConfig": True}
tool_fn = _make_tool()
result = await tool_fn(action="update_temperature", temperature_config={"unit": "C"})
assert result["success"] is True
assert result["action"] == "update_temperature"
class TestSystemTime:
"""Tests for update_time action."""
async def test_update_time_requires_at_least_one_field(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="update_time requires"):
await tool_fn(action="update_time")
async def test_update_time_with_timezone(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateSystemTime": {
"currentTime": "2026-03-13T00:00:00Z",
"timeZone": "America/New_York",
"useNtp": True,
"ntpServers": [],
}
}
tool_fn = _make_tool()
result = await tool_fn(action="update_time", time_zone="America/New_York")
assert result["success"] is True
assert result["action"] == "update_time"
async def test_update_time_with_ntp(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateSystemTime": {"useNtp": True, "ntpServers": ["0.pool.ntp.org"]}
}
tool_fn = _make_tool()
result = await tool_fn(action="update_time", use_ntp=True, ntp_servers=["0.pool.ntp.org"])
assert result["success"] is True
async def test_update_time_manual(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"updateSystemTime": {"currentTime": "2026-03-13T12:00:00Z"}}
tool_fn = _make_tool()
result = await tool_fn(action="update_time", manual_datetime="2026-03-13T12:00:00Z")
assert result["success"] is True
class TestUpsConfig:
"""Tests for configure_ups action."""
async def test_configure_ups_requires_ups_config(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="ups_config is required"):
await tool_fn(action="configure_ups", confirm=True)
async def test_configure_ups_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"configureUps": True}
tool_fn = _make_tool()
result = await tool_fn(
action="configure_ups", confirm=True, ups_config={"mode": "master", "cable": "usb"}
)
assert result["success"] is True
assert result["action"] == "configure_ups"
class TestApiSettings:
"""Tests for update_api action."""
async def test_update_api_requires_at_least_one_field(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="update_api requires"):
await tool_fn(action="update_api")
async def test_update_api_with_port(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {
"updateApiSettings": {"accessType": "STATIC", "forwardType": "NONE", "port": 8080}
}
tool_fn = _make_tool()
result = await tool_fn(action="update_api", port=8080)
assert result["success"] is True
assert result["action"] == "update_api"
async def test_update_api_with_access_type(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"updateApiSettings": {"accessType": "STATIC"}}
tool_fn = _make_tool()
result = await tool_fn(action="update_api", access_type="STATIC")
assert result["success"] is True
class TestConnectActions:
"""Tests for connect_sign_in and connect_sign_out actions."""
async def test_connect_sign_in_requires_api_key(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="api_key is required"):
await tool_fn(action="connect_sign_in")
async def test_connect_sign_in_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"connectSignIn": True}
tool_fn = _make_tool()
result = await tool_fn(action="connect_sign_in", api_key="test-api-key-abc123")
assert result["success"] is True
assert result["action"] == "connect_sign_in"
async def test_connect_sign_in_with_user_info(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"connectSignIn": True}
tool_fn = _make_tool()
result = await tool_fn(
action="connect_sign_in",
api_key="test-api-key",
username="testuser",
email="test@example.com",
avatar="https://example.com/avatar.png",
)
assert result["success"] is True
async def test_connect_sign_out_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"connectSignOut": True}
tool_fn = _make_tool()
result = await tool_fn(action="connect_sign_out")
assert result["success"] is True
assert result["action"] == "connect_sign_out"
class TestRemoteAccess:
"""Tests for setup_remote_access and enable_dynamic_remote_access actions."""
async def test_setup_remote_access_requires_access_type(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="access_type is required"):
await tool_fn(action="setup_remote_access", confirm=True)
async def test_setup_remote_access_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"setupRemoteAccess": True}
tool_fn = _make_tool()
result = await tool_fn(action="setup_remote_access", confirm=True, access_type="STATIC")
assert result["success"] is True
assert result["action"] == "setup_remote_access"
async def test_setup_remote_access_with_port(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"setupRemoteAccess": True}
tool_fn = _make_tool()
result = await tool_fn(
action="setup_remote_access",
confirm=True,
access_type="STATIC",
forward_type="UPNP",
port=9999,
)
assert result["success"] is True
async def test_enable_dynamic_requires_url_type(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="access_url_type is required"):
await tool_fn(action="enable_dynamic_remote_access", confirm=True, dynamic_enabled=True)
async def test_enable_dynamic_requires_dynamic_enabled(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="dynamic_enabled is required"):
await tool_fn(
action="enable_dynamic_remote_access", confirm=True, access_url_type="WAN"
)
async def test_enable_dynamic_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"enableDynamicRemoteAccess": True}
tool_fn = _make_tool()
result = await tool_fn(
action="enable_dynamic_remote_access",
confirm=True,
access_url_type="WAN",
dynamic_enabled=True,
)
assert result["success"] is True
assert result["action"] == "enable_dynamic_remote_access"
async def test_enable_dynamic_with_optional_fields(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"enableDynamicRemoteAccess": True}
tool_fn = _make_tool()
result = await tool_fn(
action="enable_dynamic_remote_access",
confirm=True,
access_url_type="WAN",
dynamic_enabled=False,
access_url_name="myserver",
access_url_ipv4="1.2.3.4",
access_url_ipv6="::1",
)
assert result["success"] is True
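The confirm-gating pattern these destructive-action tests rely on is small enough to sketch. A plausible shape, assuming a module-level set of destructive action names (illustrative, not the literal implementation):

_DESTRUCTIVE_ACTIONS = frozenset(
    {"configure_ups", "setup_remote_access", "enable_dynamic_remote_access"}
)

def _require_confirm(action: str, confirm: bool) -> None:
    # Destructive actions refuse to run unless the caller explicitly opts in.
    if action in _DESTRUCTIVE_ACTIONS and not confirm:
        raise ToolError(f"Action '{action}' is destructive; pass confirm=True to proceed")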

View File

@@ -7,7 +7,7 @@ import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError
-from unraid_mcp.tools.storage import format_bytes
from unraid_mcp.core.utils import format_bytes, format_kb, safe_get

# --- Unit tests for helpers ---
@@ -77,6 +77,87 @@ class TestStorageValidation:
        result = await tool_fn(action="logs", log_path="/var/log/syslog")
        assert result["content"] == "ok"
async def test_logs_tail_lines_too_large(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="tail_lines must be between"):
await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=10_001)
async def test_logs_tail_lines_zero_rejected(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="tail_lines must be between"):
await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=0)
async def test_logs_tail_lines_at_max_accepted(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}
tool_fn = _make_tool()
result = await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=10_000)
assert result["content"] == "ok"
async def test_non_logs_action_ignores_tail_lines_validation(
self, _mock_graphql: AsyncMock
) -> None:
_mock_graphql.return_value = {"shares": []}
tool_fn = _make_tool()
result = await tool_fn(action="shares", tail_lines=0)
assert result["shares"] == []
class TestFormatKb:
def test_none_returns_na(self) -> None:
assert format_kb(None) == "N/A"
def test_invalid_string_returns_na(self) -> None:
assert format_kb("not-a-number") == "N/A"
def test_kilobytes_range(self) -> None:
assert format_kb(512) == "512.00 KB"
def test_megabytes_range(self) -> None:
assert format_kb(2048) == "2.00 MB"
def test_gigabytes_range(self) -> None:
assert format_kb(1_048_576) == "1.00 GB"
def test_terabytes_range(self) -> None:
assert format_kb(1_073_741_824) == "1.00 TB"
def test_boundary_exactly_1024_kb(self) -> None:
# 1024 KB = 1 MB
assert format_kb(1024) == "1.00 MB"
class TestSafeGet:
def test_simple_key_access(self) -> None:
assert safe_get({"a": 1}, "a") == 1
def test_nested_key_access(self) -> None:
assert safe_get({"a": {"b": "val"}}, "a", "b") == "val"
def test_missing_key_returns_none(self) -> None:
assert safe_get({"a": 1}, "missing") is None
def test_none_intermediate_returns_default(self) -> None:
assert safe_get({"a": None}, "a", "b") is None
def test_custom_default_returned(self) -> None:
assert safe_get({}, "x", default="fallback") == "fallback"
def test_non_dict_intermediate_returns_default(self) -> None:
assert safe_get({"a": "string"}, "a", "b") is None
def test_empty_list_default(self) -> None:
result = safe_get({}, "missing", default=[])
assert result == []
def test_zero_value_not_replaced_by_default(self) -> None:
assert safe_get({"temp": 0}, "temp", default="N/A") == 0
def test_false_value_not_replaced_by_default(self) -> None:
assert safe_get({"active": False}, "active", default=True) is False
def test_empty_string_not_replaced_by_default(self) -> None:
assert safe_get({"name": ""}, "name", default="unknown") == ""
class TestStorageActions:
    async def test_shares(self, _mock_graphql: AsyncMock) -> None:
@@ -202,3 +283,38 @@ class TestStorageNetworkErrors:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="HTTP error 500"):
            await tool_fn(action="disks")
class TestStorageFlashBackup:
async def test_flash_backup_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="destructive"):
await tool_fn(action="flash_backup", remote_name="r", source_path="/boot", destination_path="r:b")
async def test_flash_backup_requires_remote_name(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="remote_name"):
await tool_fn(action="flash_backup", confirm=True)
async def test_flash_backup_requires_source_path(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="source_path"):
await tool_fn(action="flash_backup", confirm=True, remote_name="r")
async def test_flash_backup_requires_destination_path(self, _mock_graphql: AsyncMock) -> None:
tool_fn = _make_tool()
with pytest.raises(ToolError, match="destination_path"):
await tool_fn(action="flash_backup", confirm=True, remote_name="r", source_path="/boot")
async def test_flash_backup_success(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"initiateFlashBackup": {"status": "started", "jobId": "j:1"}}
tool_fn = _make_tool()
result = await tool_fn(action="flash_backup", confirm=True, remote_name="r", source_path="/boot", destination_path="r:b")
assert result["success"] is True
assert result["data"]["status"] == "started"
async def test_flash_backup_passes_options(self, _mock_graphql: AsyncMock) -> None:
_mock_graphql.return_value = {"initiateFlashBackup": {"status": "started", "jobId": "j:2"}}
tool_fn = _make_tool()
await tool_fn(action="flash_backup", confirm=True, remote_name="r", source_path="/boot", destination_path="r:b", backup_options={"dryRun": True})
assert _mock_graphql.call_args[0][1]["input"]["options"] == {"dryRun": True}
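The helper tests above fully pin down format_kb and safe_get. Plausible implementations consistent with those assertions (sketches; the real code is in unraid_mcp/core/utils.py):

def format_kb(value) -> str:
    # Input is kilobytes; scale through MB/GB/TB, two decimals, "N/A" on bad input.
    try:
        kb = float(value)
    except (TypeError, ValueError):
        return "N/A"
    for unit in ("KB", "MB", "GB"):
        if kb < 1024:
            return f"{kb:.2f} {unit}"
        kb /= 1024
    return f"{kb:.2f} TB"

def safe_get(data, *keys, default=None):
    # Walk nested dicts; a missing key or non-dict intermediate yields the default,
    # but falsy stored values (0, False, "") are returned as-is.
    current = data
    for key in keys:
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current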

View File

@@ -0,0 +1,156 @@
"""Tests for _cap_log_content in subscriptions/manager.py.
_cap_log_content is a pure utility that prevents unbounded memory growth from
log subscription data. It must: return a NEW dict (not mutate), recursively
cap nested 'content' fields, and only truncate when both byte limit and line
limit are exceeded.
"""
from unittest.mock import patch
from unraid_mcp.subscriptions.manager import _cap_log_content
class TestCapLogContentImmutability:
"""The function must return a new dict — never mutate the input."""
def test_returns_new_dict(self) -> None:
data = {"key": "value"}
result = _cap_log_content(data)
assert result is not data
def test_input_not_mutated_on_passthrough(self) -> None:
data = {"content": "short text", "other": "value"}
original_content = data["content"]
_cap_log_content(data)
assert data["content"] == original_content
def test_input_not_mutated_on_truncation(self) -> None:
# Use small limits so the truncation path is exercised
large_content = "\n".join(f"line {i}" for i in range(200))
data = {"content": large_content}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
_cap_log_content(data)
# Original data must be unchanged
assert data["content"] == large_content
class TestCapLogContentSmallData:
"""Content below the byte limit must be returned unchanged."""
def test_small_content_unchanged(self) -> None:
data = {"content": "just a few lines\nof log data\n"}
result = _cap_log_content(data)
assert result["content"] == data["content"]
def test_non_content_keys_passed_through(self) -> None:
data = {"name": "cpu_subscription", "timestamp": "2026-02-18T00:00:00Z"}
result = _cap_log_content(data)
assert result == data
def test_integer_value_passed_through(self) -> None:
data = {"count": 42, "active": True}
result = _cap_log_content(data)
assert result == data
class TestCapLogContentTruncation:
"""Content exceeding both byte AND line limits must be truncated to the last N lines."""
def test_oversized_content_truncated_and_byte_capped(self) -> None:
# 200 lines, tiny byte limit: must keep recent content within byte cap.
lines = [f"line {i}" for i in range(200)]
data = {"content": "\n".join(lines)}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
result = _cap_log_content(data)
result_lines = result["content"].splitlines()
assert len(result["content"].encode("utf-8", errors="replace")) <= 10
# Must keep the most recent line suffix.
assert result_lines[-1] == "line 199"
def test_content_with_fewer_lines_than_limit_still_honors_byte_cap(self) -> None:
"""If byte limit is exceeded, output must still be capped even with few lines."""
# 30 lines, byte limit 10, line limit 50 -> must cap bytes regardless of line count
lines = [f"line {i}" for i in range(30)]
data = {"content": "\n".join(lines)}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
result = _cap_log_content(data)
assert len(result["content"].encode("utf-8", errors="replace")) <= 10
def test_non_content_keys_preserved_alongside_truncated_content(self) -> None:
lines = [f"line {i}" for i in range(200)]
data = {"content": "\n".join(lines), "path": "/var/log/syslog", "total_lines": 200}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
result = _cap_log_content(data)
assert result["path"] == "/var/log/syslog"
assert result["total_lines"] == 200
assert len(result["content"].encode("utf-8", errors="replace")) <= 10
class TestCapLogContentNested:
"""Nested 'content' fields inside sub-dicts must also be capped recursively."""
def test_nested_content_field_capped(self) -> None:
lines = [f"line {i}" for i in range(200)]
data = {"logFile": {"content": "\n".join(lines), "path": "/var/log/syslog"}}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
result = _cap_log_content(data)
assert len(result["logFile"]["content"].encode("utf-8", errors="replace")) <= 10
assert result["logFile"]["path"] == "/var/log/syslog"
def test_deeply_nested_content_capped(self) -> None:
lines = [f"line {i}" for i in range(200)]
data = {"outer": {"inner": {"content": "\n".join(lines)}}}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 50),
):
result = _cap_log_content(data)
assert len(result["outer"]["inner"]["content"].encode("utf-8", errors="replace")) <= 10
def test_nested_non_content_keys_unaffected(self) -> None:
data = {"metrics": {"cpu": 42.5, "memory": 8192}}
result = _cap_log_content(data)
assert result == data
class TestCapLogContentSingleMassiveLine:
"""A single line larger than the byte cap must be hard-capped at byte level."""
def test_single_massive_line_hard_caps_bytes(self) -> None:
# One line, no newlines, larger than the byte cap.
# The while-loop can't reduce it (len(lines) == 1), so the
# last-resort byte-slice path at manager.py:65-69 must fire.
huge_content = "x" * 200
data = {"content": huge_content}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 5_000),
):
result = _cap_log_content(data)
assert len(result["content"].encode("utf-8", errors="replace")) <= 10
def test_single_massive_line_input_not_mutated(self) -> None:
huge_content = "x" * 200
data = {"content": huge_content}
with (
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_BYTES", 10),
patch("unraid_mcp.subscriptions.manager._MAX_RESOURCE_DATA_LINES", 5_000),
):
_cap_log_content(data)
assert data["content"] == huge_content
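For readers without manager.py open, here is a sketch of _cap_log_content consistent with every assertion above. The two _MAX_* names are the real module constants the tests patch; the values shown are illustrative:

_MAX_RESOURCE_DATA_BYTES = 512_000  # illustrative default
_MAX_RESOURCE_DATA_LINES = 5_000  # illustrative default

def _cap_log_content(data: dict) -> dict:
    result = {}
    for key, value in data.items():
        if isinstance(value, dict):
            result[key] = _cap_log_content(value)  # recurse so nested 'content' is capped too
        elif key == "content" and isinstance(value, str):
            encoded = value.encode("utf-8", errors="replace")
            if len(encoded) <= _MAX_RESOURCE_DATA_BYTES:
                result[key] = value
                continue
            lines = value.splitlines()
            # Drop oldest lines first, keeping the most recent suffix.
            while len(lines) > 1 and (
                len("\n".join(lines).encode("utf-8", errors="replace")) > _MAX_RESOURCE_DATA_BYTES
                or len(lines) > _MAX_RESOURCE_DATA_LINES
            ):
                lines.pop(0)
            capped = "\n".join(lines)
            # Last resort: a single line larger than the byte cap gets byte-sliced.
            tail = capped.encode("utf-8", errors="replace")
            if len(tail) > _MAX_RESOURCE_DATA_BYTES:
                capped = tail[-_MAX_RESOURCE_DATA_BYTES:].decode("utf-8", errors="replace")
            result[key] = capped
        else:
            result[key] = value
    return result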

View File

@@ -0,0 +1,131 @@
"""Tests for _validate_subscription_query in diagnostics.py.
Security-critical: this function is the only guard against arbitrary GraphQL
operations (mutations, queries) being sent over the WebSocket subscription channel.
"""
import pytest
from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.subscriptions.diagnostics import (
_ALLOWED_SUBSCRIPTION_FIELDS,
_validate_subscription_query,
)
class TestValidateSubscriptionQueryAllowed:
"""All whitelisted subscription names must be accepted."""
@pytest.mark.parametrize("sub_name", sorted(_ALLOWED_SUBSCRIPTION_FIELDS))
def test_all_allowed_names_accepted(self, sub_name: str) -> None:
query = f"subscription {{ {sub_name} {{ data }} }}"
result = _validate_subscription_query(query)
assert result == sub_name
def test_returns_extracted_subscription_name(self) -> None:
query = "subscription { cpu { usage } }"
assert _validate_subscription_query(query) == "cpu"
def test_leading_whitespace_accepted(self) -> None:
query = " subscription { memory { free } }"
assert _validate_subscription_query(query) == "memory"
def test_multiline_query_accepted(self) -> None:
query = "subscription {\n logFile {\n content\n }\n}"
assert _validate_subscription_query(query) == "logFile"
def test_case_insensitive_subscription_keyword(self) -> None:
"""'SUBSCRIPTION' should be accepted (regex uses IGNORECASE)."""
query = "SUBSCRIPTION { cpu { usage } }"
assert _validate_subscription_query(query) == "cpu"
class TestValidateSubscriptionQueryForbiddenKeywords:
"""Queries containing 'mutation' or 'query' as standalone keywords must be rejected."""
def test_mutation_keyword_rejected(self) -> None:
query = 'mutation { docker { start(id: "abc") } }'
with pytest.raises(ToolError, match="must be a subscription"):
_validate_subscription_query(query)
def test_query_keyword_rejected(self) -> None:
query = "query { info { os { platform } } }"
with pytest.raises(ToolError, match="must be a subscription"):
_validate_subscription_query(query)
def test_mutation_embedded_in_subscription_rejected(self) -> None:
"""'mutation' anywhere in the string triggers rejection."""
query = "subscription { cpuSubscription { mutation data } }"
with pytest.raises(ToolError, match="must be a subscription"):
_validate_subscription_query(query)
def test_query_embedded_in_subscription_rejected(self) -> None:
query = "subscription { cpuSubscription { query data } }"
with pytest.raises(ToolError, match="must be a subscription"):
_validate_subscription_query(query)
def test_mutation_case_insensitive_rejection(self) -> None:
query = 'MUTATION { docker { start(id: "abc") } }'
with pytest.raises(ToolError, match="must be a subscription"):
_validate_subscription_query(query)
def test_mutation_field_identifier_not_rejected(self) -> None:
"""'mutationField' as an identifier must NOT be rejected — only standalone 'mutation'."""
# This tests the \b word boundary in _FORBIDDEN_KEYWORDS
query = "subscription { cpu { mutationField } }"
# Should not raise — "mutationField" is an identifier, not the keyword
result = _validate_subscription_query(query)
assert result == "cpu"
def test_query_field_identifier_not_rejected(self) -> None:
"""'queryResult' as an identifier must NOT be rejected."""
query = "subscription { cpu { queryResult } }"
result = _validate_subscription_query(query)
assert result == "cpu"
class TestValidateSubscriptionQueryInvalidFormat:
"""Queries that don't match the expected subscription format must be rejected."""
def test_empty_string_rejected(self) -> None:
with pytest.raises(ToolError, match="must start with 'subscription'"):
_validate_subscription_query("")
def test_plain_identifier_rejected(self) -> None:
with pytest.raises(ToolError, match="must start with 'subscription'"):
_validate_subscription_query("cpuSubscription { usage }")
def test_missing_operation_body_rejected(self) -> None:
with pytest.raises(ToolError, match="must start with 'subscription'"):
_validate_subscription_query("subscription")
def test_subscription_without_field_rejected(self) -> None:
"""subscription { } with no field name doesn't match the pattern."""
with pytest.raises(ToolError, match="must start with 'subscription'"):
_validate_subscription_query("subscription { }")
class TestValidateSubscriptionQueryUnknownName:
"""Subscription names not in the whitelist must be rejected even if format is valid."""
def test_unknown_subscription_name_rejected(self) -> None:
query = "subscription { unknownSubscription { data } }"
with pytest.raises(ToolError, match="not allowed"):
_validate_subscription_query(query)
def test_error_message_includes_allowed_list(self) -> None:
"""Error message must list the allowed subscription field names for usability."""
query = "subscription { badSub { data } }"
with pytest.raises(ToolError, match="Allowed fields"):
_validate_subscription_query(query)
def test_arbitrary_field_name_rejected(self) -> None:
query = "subscription { users { id email } }"
with pytest.raises(ToolError, match="not allowed"):
_validate_subscription_query(query)
def test_close_but_not_whitelisted_rejected(self) -> None:
"""'cpuSubscription' (old operation-style name) is not in the field allow-list."""
query = "subscription { cpuSubscription { usage } }"
with pytest.raises(ToolError, match="not allowed"):
_validate_subscription_query(query)
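A sketch of the validator these tests describe, reconstructed from the assertions (the regexes are assumed shapes; the allow-list shown is a subset for illustration, the real frozenset lives in diagnostics.py):

import re

from unraid_mcp.core.exceptions import ToolError

_ALLOWED_SUBSCRIPTION_FIELDS = frozenset({"cpu", "memory", "logFile"})  # subset for illustration
_FORBIDDEN_KEYWORDS = re.compile(r"\b(?:mutation|query)\b", re.IGNORECASE)
_SUBSCRIPTION_RE = re.compile(r"^\s*subscription\s*\{\s*([_A-Za-z][_0-9A-Za-z]*)", re.IGNORECASE)

def _validate_subscription_query(query: str) -> str:
    # Reject 'mutation'/'query' anywhere as standalone keywords (word-boundary match,
    # so identifiers like 'mutationField' and 'queryResult' pass).
    if _FORBIDDEN_KEYWORDS.search(query):
        raise ToolError("Operation must be a subscription; 'query'/'mutation' keywords are rejected")
    match = _SUBSCRIPTION_RE.match(query)
    if not match:
        raise ToolError("Operation must start with 'subscription { <field> ... }'")
    name = match.group(1)
    if name not in _ALLOWED_SUBSCRIPTION_FIELDS:
        allowed = ", ".join(sorted(_ALLOWED_SUBSCRIPTION_FIELDS))
        raise ToolError(f"Subscription field '{name}' is not allowed. Allowed fields: {allowed}")
    return name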

View File

@@ -1,7 +1,6 @@
"""Unraid MCP Server Package. """Unraid MCP Server Package."""
A modular MCP (Model Context Protocol) server that provides tools to interact from .version import VERSION
with an Unraid server's GraphQL API.
"""
__version__ = "0.2.0"
__version__ = VERSION

View File

@@ -5,16 +5,10 @@ that cap at 10MB and start over (no rotation) for consistent use across all modu
""" """
import logging import logging
from datetime import datetime
from pathlib import Path from pathlib import Path
import pytz
from rich.align import Align
from rich.console import Console from rich.console import Console
from rich.logging import RichHandler from rich.logging import RichHandler
from rich.panel import Panel
from rich.rule import Rule
from rich.text import Text
try: try:
@@ -28,7 +22,7 @@ from .settings import LOG_FILE_PATH, LOG_LEVEL_STR
# Global Rich console for consistent formatting
-console = Console(stderr=True, force_terminal=True)
console = Console(stderr=True)
class OverwriteFileHandler(logging.FileHandler):
@@ -45,29 +39,45 @@ class OverwriteFileHandler(logging.FileHandler):
            delay: Whether to delay file opening
        """
        self.max_bytes = max_bytes
        self._emit_count = 0
        self._check_interval = 100
        super().__init__(filename, mode, encoding, delay)

    def emit(self, record):
-        """Emit a record, checking file size and overwriting if needed."""
-        # Check file size before writing
-        if self.stream and hasattr(self.stream, "name"):
        """Emit a record, checking file size periodically and overwriting if needed."""
        self._emit_count += 1
        if (
            (self._emit_count == 1 or self._emit_count % self._check_interval == 0)
            and self.stream
            and hasattr(self.stream, "name")
        ):
            try:
                base_path = Path(self.baseFilename)
-                if base_path.exists():
-                    file_size = base_path.stat().st_size
-                    if file_size >= self.max_bytes:
-                        # Close current stream
-                        if self.stream:
-                            self.stream.close()
-                        # Remove the old file and start fresh
-                        if base_path.exists():
-                            base_path.unlink()
-                        # Reopen with truncate mode
-                        self.stream = self._open()
-                        # Log a marker that the file was reset
                file_size = base_path.stat().st_size if base_path.exists() else 0
                if file_size >= self.max_bytes:
                    old_stream = self.stream
                    self.stream = None
                    try:
                        old_stream.close()
                        base_path.unlink(missing_ok=True)
                        self.stream = self._open()
                    except OSError:
                        # Recovery: attempt to reopen even if unlink failed
                        try:
                            self.stream = self._open()
                        except OSError:
                            # old_stream is already closed — do NOT restore it.
                            # Leave self.stream = None so super().emit() skips output
                            # rather than writing to a closed file descriptor.
                            import sys

                            print(
                                "WARNING: Failed to reopen log file after rotation. "
                                "File logging suspended until next successful open.",
                                file=sys.stderr,
                            )
                    if self.stream is not None:
                        reset_record = logging.LogRecord(
                            name="UnraidMCPServer.Logging",
                            level=logging.INFO,
@@ -91,6 +101,28 @@ class OverwriteFileHandler(logging.FileHandler):
        super().emit(record)
def _create_shared_file_handler() -> OverwriteFileHandler:
"""Create the single shared file handler for all loggers.
Returns:
Configured OverwriteFileHandler instance
"""
numeric_log_level = getattr(logging, LOG_LEVEL_STR, logging.INFO)
handler = OverwriteFileHandler(LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8")
handler.setLevel(numeric_log_level)
handler.setFormatter(
logging.Formatter(
"%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s"
)
)
return handler
# Single shared file handler — all loggers reuse this instance to avoid
# race conditions from multiple OverwriteFileHandler instances on the same file.
_shared_file_handler = _create_shared_file_handler()
def setup_logger(name: str = "UnraidMCPServer") -> logging.Logger:
    """Set up and configure the logger with console and file handlers.
@@ -118,19 +150,13 @@ def setup_logger(name: str = "UnraidMCPServer") -> logging.Logger:
        show_level=True,
        show_path=False,
        rich_tracebacks=True,
-        tracebacks_show_locals=True,
        tracebacks_show_locals=False,
    )
    console_handler.setLevel(numeric_log_level)
    logger.addHandler(console_handler)

-    # File Handler with 10MB cap (overwrites instead of rotating)
-    file_handler = OverwriteFileHandler(LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8")
-    file_handler.setLevel(numeric_log_level)
-    file_formatter = logging.Formatter(
-        "%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s"
-    )
-    file_handler.setFormatter(file_formatter)
-    logger.addHandler(file_handler)
    # Reuse the shared file handler
    logger.addHandler(_shared_file_handler)

    return logger
@@ -157,59 +183,28 @@ def configure_fastmcp_logger_with_rich() -> logging.Logger | None:
        show_level=True,
        show_path=False,
        rich_tracebacks=True,
-        tracebacks_show_locals=True,
        tracebacks_show_locals=False,
        markup=True,
    )
    console_handler.setLevel(numeric_log_level)
    fastmcp_logger.addHandler(console_handler)

-    # File Handler with 10MB cap (overwrites instead of rotating)
-    file_handler = OverwriteFileHandler(LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8")
-    file_handler.setLevel(numeric_log_level)
-    file_formatter = logging.Formatter(
-        "%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s"
-    )
-    file_handler.setFormatter(file_formatter)
-    fastmcp_logger.addHandler(file_handler)
    # Reuse the shared file handler
    fastmcp_logger.addHandler(_shared_file_handler)

    fastmcp_logger.setLevel(numeric_log_level)

-    # Also configure the root logger to catch any other logs
    # Attach shared file handler to the root logger so that library/third-party
    # loggers (httpx, websockets, etc.) whose propagate=True flows up to root
    # will also be written to the log file, not just the console.
    root_logger = logging.getLogger()
-    root_logger.handlers.clear()
-    root_logger.propagate = False
-    # Rich Console Handler for root logger
-    root_console_handler = RichHandler(
-        console=console,
-        show_time=True,
-        show_level=True,
-        show_path=False,
-        rich_tracebacks=True,
-        tracebacks_show_locals=True,
-        markup=True,
-    )
-    root_console_handler.setLevel(numeric_log_level)
-    root_logger.addHandler(root_console_handler)
-    # File Handler for root logger with 10MB cap (overwrites instead of rotating)
-    root_file_handler = OverwriteFileHandler(
-        LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8"
-    )
-    root_file_handler.setLevel(numeric_log_level)
-    root_file_handler.setFormatter(file_formatter)
-    root_logger.addHandler(root_file_handler)
    root_logger.setLevel(numeric_log_level)
    if _shared_file_handler not in root_logger.handlers:
        root_logger.addHandler(_shared_file_handler)

    return fastmcp_logger

-def setup_uvicorn_logging() -> logging.Logger | None:
-    """Configure uvicorn and other third-party loggers to use Rich formatting."""
-    # This function is kept for backward compatibility but now delegates to FastMCP
-    return configure_fastmcp_logger_with_rich()
def log_configuration_status(logger: logging.Logger) -> None:
    """Log configuration status at startup.
@@ -242,97 +237,6 @@ def log_configuration_status(logger: logging.Logger) -> None:
        logger.error(f"Missing required configuration: {config['missing_config']}")
# Development logging helpers for Rich formatting
def get_est_timestamp() -> str:
"""Get current timestamp in EST timezone with YY/MM/DD format."""
est = pytz.timezone("US/Eastern")
now = datetime.now(est)
return now.strftime("%y/%m/%d %H:%M:%S")
def log_header(title: str) -> None:
"""Print a beautiful header panel with Nordic blue styling."""
panel = Panel(
Align.center(Text(title, style="bold white")),
style="#5E81AC", # Nordic blue
padding=(0, 2),
border_style="#81A1C1", # Light Nordic blue
)
console.print(panel)
def log_with_level_and_indent(message: str, level: str = "info", indent: int = 0) -> None:
"""Log a message with specific level and indentation."""
timestamp = get_est_timestamp()
indent_str = " " * indent
# Enhanced Nordic color scheme with more blues
level_config = {
"error": {"color": "#BF616A", "icon": "", "style": "bold"}, # Nordic red
"warning": {"color": "#EBCB8B", "icon": "⚠️", "style": ""}, # Nordic yellow
"success": {"color": "#A3BE8C", "icon": "", "style": "bold"}, # Nordic green
"info": {"color": "#5E81AC", "icon": "\u2139\ufe0f", "style": "bold"}, # Nordic blue (bold)
"status": {"color": "#81A1C1", "icon": "🔍", "style": ""}, # Light Nordic blue
"debug": {"color": "#4C566A", "icon": "🐛", "style": ""}, # Nordic dark gray
}
config = level_config.get(
level, {"color": "#81A1C1", "icon": "", "style": ""}
) # Default to light Nordic blue
# Create beautifully formatted text
text = Text()
# Timestamp with Nordic blue styling
text.append(f"[{timestamp}]", style="#81A1C1") # Light Nordic blue for timestamps
text.append(" ")
# Indentation with Nordic blue styling
if indent > 0:
text.append(indent_str, style="#81A1C1")
# Level icon (only for certain levels)
if level in ["error", "warning", "success"]:
# Extract emoji from message if it starts with one, to avoid duplication
if message and len(message) > 0 and ord(message[0]) >= 0x1F600: # Emoji range
# Message already has emoji, don't add icon
pass
else:
text.append(f"{config['icon']} ", style=config["color"])
# Message content
message_style = f"{config['color']} {config['style']}".strip()
text.append(message, style=message_style)
console.print(text)
def log_separator() -> None:
"""Print a beautiful separator line with Nordic blue styling."""
console.print(Rule(style="#81A1C1"))
# Convenience functions for different log levels
def log_error(message: str, indent: int = 0) -> None:
log_with_level_and_indent(message, "error", indent)
def log_warning(message: str, indent: int = 0) -> None:
log_with_level_and_indent(message, "warning", indent)
def log_success(message: str, indent: int = 0) -> None:
log_with_level_and_indent(message, "success", indent)
def log_info(message: str, indent: int = 0) -> None:
log_with_level_and_indent(message, "info", indent)
def log_status(message: str, indent: int = 0) -> None:
log_with_level_and_indent(message, "status", indent)
# Global logger instance - modules can import this directly
if FASTMCP_AVAILABLE:
    # Use FastMCP logger with Rich formatting
@@ -341,5 +245,3 @@ if FASTMCP_AVAILABLE:
else:
    # Fallback to our custom logger if FastMCP is not available
    logger = setup_logger()

-# Setup uvicorn logging when module is imported
-setup_uvicorn_logging()

View File

@@ -10,6 +10,8 @@ from typing import Any
from dotenv import load_dotenv

from ..version import VERSION as APP_VERSION

# Get the script directory (config module location)
SCRIPT_DIR = Path(__file__).parent  # /home/user/code/unraid-mcp/unraid_mcp/config/
@@ -30,16 +32,32 @@ for dotenv_path in dotenv_paths:
        load_dotenv(dotenv_path=dotenv_path)
        break
-# Application Version
-VERSION = "0.2.0"
# Core API Configuration
UNRAID_API_URL = os.getenv("UNRAID_API_URL")
UNRAID_API_KEY = os.getenv("UNRAID_API_KEY")

# Server Configuration
-UNRAID_MCP_PORT = int(os.getenv("UNRAID_MCP_PORT", "6970"))
-UNRAID_MCP_HOST = os.getenv("UNRAID_MCP_HOST", "0.0.0.0")
def _parse_port(env_var: str, default: int) -> int:
    """Parse a port number from environment variable with validation."""
raw = os.getenv(env_var, str(default))
try:
port = int(raw)
except ValueError:
import sys
print(f"FATAL: {env_var}={raw!r} is not a valid integer port number", file=sys.stderr)
sys.exit(1)
if not (1 <= port <= 65535):
import sys
print(f"FATAL: {env_var}={port} outside valid port range 1-65535", file=sys.stderr)
sys.exit(1)
return port
UNRAID_MCP_PORT = _parse_port("UNRAID_MCP_PORT", 6970)
UNRAID_MCP_HOST = os.getenv("UNRAID_MCP_HOST", "0.0.0.0") # noqa: S104 — intentional for Docker
UNRAID_MCP_TRANSPORT = os.getenv("UNRAID_MCP_TRANSPORT", "streamable-http").lower() UNRAID_MCP_TRANSPORT = os.getenv("UNRAID_MCP_TRANSPORT", "streamable-http").lower()
# SSL Configuration # SSL Configuration
@@ -54,11 +72,18 @@ else: # Path to CA bundle
# Logging Configuration
LOG_LEVEL_STR = os.getenv("UNRAID_MCP_LOG_LEVEL", "INFO").upper()
LOG_FILE_NAME = os.getenv("UNRAID_MCP_LOG_FILE", "unraid-mcp.log")
-LOGS_DIR = Path("/tmp")
# Use /.dockerenv as the container indicator for robust Docker detection.
IS_DOCKER = Path("/.dockerenv").exists()
LOGS_DIR = Path("/app/logs") if IS_DOCKER else PROJECT_ROOT / "logs"
LOG_FILE_PATH = LOGS_DIR / LOG_FILE_NAME

-# Ensure logs directory exists
-LOGS_DIR.mkdir(parents=True, exist_ok=True)
# Ensure logs directory exists; if creation fails, fall back to PROJECT_ROOT / ".cache" / "logs".
try:
LOGS_DIR.mkdir(parents=True, exist_ok=True)
except OSError:
LOGS_DIR = PROJECT_ROOT / ".cache" / "logs"
LOGS_DIR.mkdir(parents=True, exist_ok=True)
LOG_FILE_PATH = LOGS_DIR / LOG_FILE_NAME
# HTTP Client Configuration # HTTP Client Configuration
TIMEOUT_CONFIG = { TIMEOUT_CONFIG = {
@@ -91,9 +116,11 @@ def get_config_summary() -> dict[str, Any]:
""" """
is_valid, missing = validate_required_config() is_valid, missing = validate_required_config()
from ..core.utils import safe_display_url
return { return {
"api_url_configured": bool(UNRAID_API_URL), "api_url_configured": bool(UNRAID_API_URL),
"api_url_preview": UNRAID_API_URL[:20] + "..." if UNRAID_API_URL else None, "api_url_preview": safe_display_url(UNRAID_API_URL) if UNRAID_API_URL else None,
"api_key_configured": bool(UNRAID_API_KEY), "api_key_configured": bool(UNRAID_API_KEY),
"server_host": UNRAID_MCP_HOST, "server_host": UNRAID_MCP_HOST,
"server_port": UNRAID_MCP_PORT, "server_port": UNRAID_MCP_PORT,
@@ -104,3 +131,7 @@ def get_config_summary() -> dict[str, Any]:
"config_valid": is_valid, "config_valid": is_valid,
"missing_config": missing if not is_valid else None, "missing_config": missing if not is_valid else None,
} }
# Re-export application version from a single source of truth.
VERSION = APP_VERSION

View File

@@ -5,8 +5,11 @@ to the Unraid API with proper timeout handling and error management.
""" """
import asyncio import asyncio
import hashlib
import json import json
from typing import Any import re
import time
from typing import Any, Final
import httpx import httpx
@@ -19,10 +22,25 @@ from ..config.settings import (
    VERSION,
)
from ..core.exceptions import ToolError
from .utils import safe_display_url

-# Sensitive keys to redact from debug logs
-_SENSITIVE_KEYS = {"password", "key", "secret", "token", "apikey"}
# Sensitive keys to redact from debug logs (frozenset — immutable, Final — no accidental reassignment)
_SENSITIVE_KEYS: Final[frozenset[str]] = frozenset(
{
"password",
"key",
"secret",
"token",
"apikey",
"authorization",
"cookie",
"session",
"credential",
"passphrase",
"jwt",
}
)
def _is_sensitive_key(key: str) -> bool:
@@ -31,14 +49,12 @@ def _is_sensitive_key(key: str) -> bool:
    return any(s in key_lower for s in _SENSITIVE_KEYS)

-def _redact_sensitive(obj: Any) -> Any:
def redact_sensitive(obj: Any) -> Any:
    """Recursively redact sensitive values from nested dicts/lists."""
    if isinstance(obj, dict):
-        return {
-            k: ("***" if _is_sensitive_key(k) else _redact_sensitive(v)) for k, v in obj.items()
-        }
        return {k: ("***" if _is_sensitive_key(k) else redact_sensitive(v)) for k, v in obj.items()}
    if isinstance(obj, list):
-        return [_redact_sensitive(item) for item in obj]
        return [redact_sensitive(item) for item in obj]
    return obj
@@ -66,8 +82,128 @@ def get_timeout_for_operation(profile: str) -> httpx.Timeout:
# Global connection pool (module-level singleton)
# Python 3.12+ asyncio.Lock() is safe at module level — no running event loop required
_http_client: httpx.AsyncClient | None = None
-_client_lock = asyncio.Lock()
_client_lock: Final[asyncio.Lock] = asyncio.Lock()
class _RateLimiter:
"""Token bucket rate limiter for Unraid API (100 req / 10s hard limit).
Uses 90 tokens with 9.0 tokens/sec refill for 10% safety headroom.
"""
def __init__(self, max_tokens: int = 90, refill_rate: float = 9.0) -> None:
self.max_tokens = max_tokens
self.tokens = float(max_tokens)
self.refill_rate = refill_rate # tokens per second
self.last_refill = time.monotonic()
# asyncio.Lock() is safe to create at __init__ time (Python 3.12+)
self._lock: Final[asyncio.Lock] = asyncio.Lock()
def _refill(self) -> None:
"""Refill tokens based on elapsed time."""
now = time.monotonic()
elapsed = now - self.last_refill
self.tokens = min(self.max_tokens, self.tokens + elapsed * self.refill_rate)
self.last_refill = now
async def acquire(self) -> None:
"""Consume one token, waiting if necessary for refill."""
while True:
async with self._lock:
self._refill()
if self.tokens >= 1:
self.tokens -= 1
return
wait_time = (1 - self.tokens) / self.refill_rate
# Sleep outside the lock so other coroutines aren't blocked
await asyncio.sleep(wait_time)
_rate_limiter = _RateLimiter()
# --- TTL Cache for stable read-only queries ---
# Queries whose results change infrequently and are safe to cache.
# Mutations and volatile queries (metrics, docker, array state) are excluded.
_CACHEABLE_QUERY_PREFIXES = frozenset(
{
"GetNetworkConfig",
"GetRegistrationInfo",
"GetOwner",
"GetFlash",
}
)
_CACHE_TTL_SECONDS = 60.0
_OPERATION_NAME_PATTERN = re.compile(r"^(?:query\s+)?([_A-Za-z][_0-9A-Za-z]*)\b")
class _QueryCache:
"""Simple TTL cache for GraphQL query responses.
Keyed by a hash of (query, variables). Entries expire after _CACHE_TTL_SECONDS.
Only caches responses for queries whose operation name is in _CACHEABLE_QUERY_PREFIXES.
Mutation requests always bypass the cache.
Thread-safe via asyncio.Lock. Bounded to _MAX_ENTRIES with FIFO eviction (oldest
expiry timestamp evicted first when the store is full).
"""
_MAX_ENTRIES: Final[int] = 256
def __init__(self) -> None:
self._store: dict[str, tuple[float, dict[str, Any]]] = {}
self._lock: Final[asyncio.Lock] = asyncio.Lock()
@staticmethod
def _cache_key(query: str, variables: dict[str, Any] | None) -> str:
raw = query + json.dumps(variables or {}, sort_keys=True)
return hashlib.sha256(raw.encode()).hexdigest()
@staticmethod
def is_cacheable(query: str) -> bool:
"""Check if a query is eligible for caching based on its operation name."""
normalized = query.lstrip()
if normalized.startswith("mutation"):
return False
match = _OPERATION_NAME_PATTERN.match(normalized)
if not match:
return False
return match.group(1) in _CACHEABLE_QUERY_PREFIXES
async def get(self, query: str, variables: dict[str, Any] | None) -> dict[str, Any] | None:
"""Return cached result if present and not expired, else None."""
async with self._lock:
key = self._cache_key(query, variables)
entry = self._store.get(key)
if entry is None:
return None
expires_at, data = entry
if time.monotonic() > expires_at:
del self._store[key]
return None
return data
async def put(self, query: str, variables: dict[str, Any] | None, data: dict[str, Any]) -> None:
"""Store a query result with TTL expiry, evicting oldest entry if at capacity."""
async with self._lock:
if len(self._store) >= self._MAX_ENTRIES:
oldest_key = min(self._store, key=lambda k: self._store[k][0])
del self._store[oldest_key]
key = self._cache_key(query, variables)
self._store[key] = (time.monotonic() + _CACHE_TTL_SECONDS, data)
async def invalidate_all(self) -> None:
"""Clear the entire cache (called after mutations)."""
async with self._lock:
self._store.clear()
_query_cache = _QueryCache()
def is_idempotent_error(error_message: str, operation: str) -> bool:
@@ -109,7 +245,7 @@ async def _create_http_client() -> httpx.AsyncClient:
    return httpx.AsyncClient(
        # Connection pool settings
        limits=httpx.Limits(
-            max_keepalive_connections=20, max_connections=100, keepalive_expiry=30.0
            max_keepalive_connections=20, max_connections=20, keepalive_expiry=30.0
        ),
        # Default timeout (can be overridden per-request)
        timeout=DEFAULT_TIMEOUT,
@@ -123,33 +259,28 @@ async def _create_http_client() -> httpx.AsyncClient:
async def get_http_client() -> httpx.AsyncClient:
    """Get or create shared HTTP client with connection pooling.

-    The client is protected by an asyncio lock to prevent concurrent creation.
-    If the existing client was closed (e.g., during shutdown), a new one is created.
    Uses double-checked locking: fast-path skips the lock when the client
    is already initialized, only acquiring it for initial creation or
    recovery after close.

    Returns:
        Singleton AsyncClient instance with connection pooling enabled
    """
    global _http_client

    # Fast-path: skip lock if client is already initialized and open
    client = _http_client
    if client is not None and not client.is_closed:
        return client

    # Slow-path: acquire lock for initialization
    async with _client_lock:
        if _http_client is None or _http_client.is_closed:
            _http_client = await _create_http_client()
            logger.info(
-                "Created shared HTTP client with connection pooling (20 keepalive, 100 max connections)"
                "Created shared HTTP client with connection pooling (20 keepalive, 20 max connections)"
            )
-    return _http_client
client = _http_client
# Verify client is still open after releasing the lock.
# In asyncio's cooperative model this is unlikely to fail, but guards
# against edge cases where close_http_client runs between yield points.
if client.is_closed:
async with _client_lock:
_http_client = await _create_http_client()
client = _http_client
logger.info("Re-created HTTP client after unexpected close")
return client
async def close_http_client() -> None:
@@ -190,6 +321,14 @@ async def make_graphql_request(
    if not UNRAID_API_KEY:
        raise ToolError("UNRAID_API_KEY not configured")
# Check TTL cache — short-circuits rate limiter on hits
is_mutation = query.lstrip().startswith("mutation")
if not is_mutation and _query_cache.is_cacheable(query):
cached = await _query_cache.get(query, variables)
if cached is not None:
logger.debug("Returning cached response for query")
return cached
    headers = {
        "Content-Type": "application/json",
        "X-API-Key": UNRAID_API_KEY,
@@ -199,22 +338,44 @@ async def make_graphql_request(
    if variables:
        payload["variables"] = variables

-    logger.debug(f"Making GraphQL request to {UNRAID_API_URL}:")
    logger.debug(f"Making GraphQL request to {safe_display_url(UNRAID_API_URL)}:")
    logger.debug(f"Query: {query[:200]}{'...' if len(query) > 200 else ''}")  # Log truncated query
    if variables:
-        logger.debug(f"Variables: {_redact_sensitive(variables)}")
        logger.debug(f"Variables: {redact_sensitive(variables)}")
try: try:
# Rate limit: consume a token before making the request
await _rate_limiter.acquire()
# Get the shared HTTP client with connection pooling # Get the shared HTTP client with connection pooling
client = await get_http_client() client = await get_http_client()
-        # Override timeout if custom timeout specified
        # Retry loop for 429 rate limit responses
        post_kwargs: dict[str, Any] = {"json": payload, "headers": headers}
        if custom_timeout is not None:
-            response = await client.post(
-                UNRAID_API_URL, json=payload, headers=headers, timeout=custom_timeout
-            )
            post_kwargs["timeout"] = custom_timeout
response: httpx.Response | None = None
for attempt in range(3):
response = await client.post(UNRAID_API_URL, **post_kwargs)
if response.status_code == 429:
backoff = 2**attempt
logger.warning(
f"Rate limited (429) by Unraid API, retrying in {backoff}s (attempt {attempt + 1}/3)"
)
await asyncio.sleep(backoff)
continue
break
if response is None: # pragma: no cover — guaranteed by loop
raise ToolError("No response received after retry attempts")
# Provide a clear message when all retries are exhausted on 429
if response.status_code == 429:
logger.error("Rate limit (429) persisted after 3 retries — request aborted")
raise ToolError(
"Unraid API is rate limiting requests. Wait ~10 seconds before retrying."
) )
-        else:
-            response = await client.post(UNRAID_API_URL, json=payload, headers=headers)
        response.raise_for_status()  # Raise an exception for HTTP error codes 4xx/5xx
@@ -245,14 +406,27 @@ async def make_graphql_request(
        logger.debug("GraphQL request successful.")
        data = response_data.get("data", {})
-        return data if isinstance(data, dict) else {}  # Ensure we return dict
        result = data if isinstance(data, dict) else {}  # Ensure we return dict
# Invalidate cache on mutations; cache eligible query results
if is_mutation:
await _query_cache.invalidate_all()
elif _query_cache.is_cacheable(query):
await _query_cache.put(query, variables, result)
return result
except httpx.HTTPStatusError as e: except httpx.HTTPStatusError as e:
# Log full details internally; only expose status code to MCP client
logger.error(f"HTTP error occurred: {e.response.status_code} - {e.response.text}") logger.error(f"HTTP error occurred: {e.response.status_code} - {e.response.text}")
raise ToolError(f"HTTP error {e.response.status_code}: {e.response.text}") from e raise ToolError(
f"Unraid API returned HTTP {e.response.status_code}. Check server logs for details."
) from e
except httpx.RequestError as e: except httpx.RequestError as e:
# Log full error internally; give safe summary to MCP client
logger.error(f"Request error occurred: {e}") logger.error(f"Request error occurred: {e}")
raise ToolError(f"Network connection error: {e!s}") from e raise ToolError(f"Network error connecting to Unraid API: {type(e).__name__}") from e
except json.JSONDecodeError as e: except json.JSONDecodeError as e:
# Log full decode error; give safe summary to MCP client
logger.error(f"Failed to decode JSON response: {e}") logger.error(f"Failed to decode JSON response: {e}")
raise ToolError(f"Invalid JSON response from Unraid API: {e!s}") from e raise ToolError("Unraid API returned an invalid response (not valid JSON)") from e

View File

@@ -4,6 +4,10 @@ This module defines custom exception classes for consistent error handling
throughout the application, with proper integration to FastMCP's error system. throughout the application, with proper integration to FastMCP's error system.
""" """
import contextlib
import logging
from collections.abc import Iterator
from fastmcp.exceptions import ToolError as FastMCPToolError from fastmcp.exceptions import ToolError as FastMCPToolError
@@ -19,36 +23,34 @@ class ToolError(FastMCPToolError):
pass pass
class ConfigurationError(ToolError): @contextlib.contextmanager
"""Raised when there are configuration-related errors.""" def tool_error_handler(
tool_name: str,
action: str,
logger: logging.Logger,
) -> Iterator[None]:
"""Context manager that standardizes tool error handling.
pass Re-raises ToolError as-is. Gives TimeoutError a descriptive message.
Catches all other exceptions, logs them with full traceback, and wraps them
in ToolError with a descriptive message.
Args:
class UnraidAPIError(ToolError): tool_name: The tool name for error messages (e.g., "docker", "vm").
"""Raised when the Unraid API returns an error or is unreachable.""" action: The current action being executed.
logger: The logger instance to use for error logging.
pass
class SubscriptionError(ToolError):
"""Raised when there are WebSocket subscription-related errors."""
pass
class ValidationError(ToolError):
"""Raised when input validation fails."""
pass
class IdempotentOperationError(ToolError):
"""Raised when an operation is idempotent (already in desired state).
This is used internally to signal that an operation was already complete,
which should typically be converted to a success response rather than
propagated as an error to the user.
""" """
try:
pass yield
except ToolError:
raise
except TimeoutError as e:
logger.exception(f"Timeout in unraid_{tool_name} action={action}: request exceeded time limit")
raise ToolError(
f"Request timed out executing {tool_name}/{action}. The Unraid API did not respond in time."
) from e
except Exception as e:
logger.exception(f"Error in unraid_{tool_name} action={action}")
raise ToolError(
f"Failed to execute {tool_name}/{action}. Check server logs for details."
) from e
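A usage sketch for the context manager above, showing how a tool body collapses its old try/except boilerplate into one `with` block; `do_work` is a hypothetical placeholder and the import assumes the package layout shown in these diffs:

```python
import logging

from unraid_mcp.core.exceptions import tool_error_handler

logger = logging.getLogger("unraid_mcp.example")


async def do_work(action: str) -> dict:  # hypothetical tool body
    return {"success": True, "action": action}


async def unraid_example(action: str) -> dict:
    # ToolError passes through untouched; TimeoutError and everything else
    # get logged with a traceback and re-raised as a uniform ToolError.
    with tool_error_handler("example", action, logger):
        return await do_work(action)
```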

View File

@@ -9,38 +9,21 @@ from datetime import datetime
from typing import Any from typing import Any
@dataclass @dataclass(slots=True)
class SubscriptionData: class SubscriptionData:
"""Container for subscription data with metadata.""" """Container for subscription data with metadata.
Note: last_updated must be timezone-aware (use datetime.now(UTC)).
"""
data: dict[str, Any] data: dict[str, Any]
last_updated: datetime last_updated: datetime # Must be timezone-aware (UTC)
subscription_type: str subscription_type: str
def __post_init__(self) -> None:
@dataclass if self.last_updated.tzinfo is None:
class SystemHealth: raise ValueError(
"""Container for system health status information.""" "last_updated must be timezone-aware; use datetime.now(UTC)"
)
is_healthy: bool if not self.subscription_type.strip():
issues: list[str] raise ValueError("subscription_type must be a non-empty string")
warnings: list[str]
last_checked: datetime
component_status: dict[str, str]
@dataclass
class APIResponse:
"""Container for standardized API response data."""
success: bool
data: dict[str, Any] | None = None
error: str | None = None
metadata: dict[str, Any] | None = None
# Type aliases for common data structures
ConfigValue = str | int | bool | float | None
ConfigDict = dict[str, ConfigValue]
GraphQLVariables = dict[str, Any]
HealthStatus = dict[str, str | bool | int | list[Any]]
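The `__post_init__` guard above makes naive datetimes fail fast instead of poisoning age calculations later. A quick sketch of both paths, assuming the dataclass is importable as in the diffs above:

```python
from datetime import UTC, datetime

from unraid_mcp.core.types import SubscriptionData

ok = SubscriptionData(
    data={"cpu": 12.5},
    last_updated=datetime.now(UTC),  # timezone-aware: accepted
    subscription_type="cpu",
)

try:
    SubscriptionData(
        data={},
        last_updated=datetime.now(),  # naive: rejected in __post_init__
        subscription_type="cpu",
    )
except ValueError as e:
    print(e)  # last_updated must be timezone-aware; use datetime.now(UTC)
```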

97
unraid_mcp/core/utils.py Normal file
View File

@@ -0,0 +1,97 @@
"""Shared utility functions for Unraid MCP tools."""
from typing import Any
from urllib.parse import urlparse
_MISSING: object = object()
def safe_get(data: dict[str, Any], *keys: str, default: Any = None) -> Any:
"""Safely traverse nested dict keys, handling missing keys and None intermediates.
Args:
data: The root dictionary to traverse.
*keys: Sequence of keys to follow.
default: Value to return if any key is absent or any intermediate value
is not a dict.
Returns:
The value at the end of the key chain (including explicit ``None``),
or ``default`` if a key is missing or an intermediate is not a dict.
This preserves the distinction between ``{"k": None}`` (returns ``None``)
and ``{}`` (returns ``default``).
"""
current: Any = data
for key in keys:
if not isinstance(current, dict):
return default
current = current.get(key, _MISSING)
if current is _MISSING:
return default
return current
def format_bytes(bytes_value: int | None) -> str:
"""Format byte values into human-readable sizes.
Args:
bytes_value: Number of bytes, or None.
Returns:
Human-readable string like "1.00 GB" or "N/A" if input is None/invalid.
"""
if bytes_value is None:
return "N/A"
try:
value = float(int(bytes_value))
except (ValueError, TypeError):
return "N/A"
for unit in ["B", "KB", "MB", "GB", "TB", "PB"]:
if value < 1024.0:
return f"{value:.2f} {unit}"
value /= 1024.0
return f"{value:.2f} EB"
def safe_display_url(url: str | None) -> str | None:
"""Return a redacted URL showing only scheme + host + port.
Strips path, query parameters, credentials, and fragments to avoid
leaking internal network topology or embedded secrets (CWE-200).
"""
if not url:
return None
try:
parsed = urlparse(url)
host = parsed.hostname or "unknown"
if parsed.port:
return f"{parsed.scheme}://{host}:{parsed.port}"
return f"{parsed.scheme}://{host}"
except ValueError:
# urlparse raises ValueError for invalid URLs (e.g. contains control chars)
return "<unparseable>"
def format_kb(k: Any) -> str:
"""Format kilobyte values into human-readable sizes.
Args:
k: Number of kilobytes, or None.
Returns:
Human-readable string like "1.00 GB" or "N/A" if input is None/invalid.
"""
if k is None:
return "N/A"
try:
k = int(k)
except (ValueError, TypeError):
return "N/A"
if k >= 1024 * 1024 * 1024:
return f"{k / (1024 * 1024 * 1024):.2f} TB"
if k >= 1024 * 1024:
return f"{k / (1024 * 1024):.2f} GB"
if k >= 1024:
return f"{k / 1024:.2f} MB"
return f"{k:.2f} KB"

View File

@@ -11,12 +11,19 @@ import sys
async def shutdown_cleanup() -> None: async def shutdown_cleanup() -> None:
"""Cleanup resources on server shutdown.""" """Cleanup resources on server shutdown."""
try:
from .subscriptions.manager import subscription_manager
await subscription_manager.stop_all()
except Exception as e:
print(f"Error stopping subscriptions during cleanup: {e}", file=sys.stderr)
try: try:
from .core.client import close_http_client from .core.client import close_http_client
await close_http_client() await close_http_client()
except Exception as e: except Exception as e:
print(f"Error during cleanup: {e}") print(f"Error during cleanup: {e}", file=sys.stderr)
def _run_shutdown_cleanup() -> None: def _run_shutdown_cleanup() -> None:

View File

@@ -10,13 +10,14 @@ from fastmcp import FastMCP
from .config.logging import logger from .config.logging import logger
from .config.settings import ( from .config.settings import (
UNRAID_API_KEY,
UNRAID_API_URL,
UNRAID_MCP_HOST, UNRAID_MCP_HOST,
UNRAID_MCP_PORT, UNRAID_MCP_PORT,
UNRAID_MCP_TRANSPORT, UNRAID_MCP_TRANSPORT,
UNRAID_VERIFY_SSL,
VERSION, VERSION,
validate_required_config,
) )
from .subscriptions.diagnostics import register_diagnostic_tools
from .subscriptions.resources import register_subscription_resources from .subscriptions.resources import register_subscription_resources
from .tools.array import register_array_tool from .tools.array import register_array_tool
from .tools.docker import register_docker_tool from .tools.docker import register_docker_tool
@@ -25,6 +26,7 @@ from .tools.info import register_info_tool
from .tools.keys import register_keys_tool from .tools.keys import register_keys_tool
from .tools.notifications import register_notifications_tool from .tools.notifications import register_notifications_tool
from .tools.rclone import register_rclone_tool from .tools.rclone import register_rclone_tool
from .tools.settings import register_settings_tool
from .tools.storage import register_storage_tool from .tools.storage import register_storage_tool
from .tools.users import register_users_tool from .tools.users import register_users_tool
from .tools.virtualization import register_vm_tool from .tools.virtualization import register_vm_tool
@@ -44,9 +46,10 @@ mcp = FastMCP(
def register_all_modules() -> None: def register_all_modules() -> None:
"""Register all tools and resources with the MCP instance.""" """Register all tools and resources with the MCP instance."""
try: try:
# Register subscription resources first # Register subscription resources and diagnostic tools
register_subscription_resources(mcp) register_subscription_resources(mcp)
logger.info("Subscription resources registered") register_diagnostic_tools(mcp)
logger.info("Subscription resources and diagnostic tools registered")
# Register all consolidated tools # Register all consolidated tools
registrars = [ registrars = [
@@ -60,6 +63,7 @@ def register_all_modules() -> None:
register_users_tool, register_users_tool,
register_keys_tool, register_keys_tool,
register_health_tool, register_health_tool,
register_settings_tool,
] ]
for registrar in registrars: for registrar in registrars:
registrar(mcp) registrar(mcp)
@@ -73,20 +77,26 @@ def register_all_modules() -> None:
def run_server() -> None: def run_server() -> None:
"""Run the MCP server with the configured transport.""" """Run the MCP server with the configured transport."""
# Log configuration # Validate required configuration before anything else
if UNRAID_API_URL: is_valid, missing = validate_required_config()
logger.info(f"UNRAID_API_URL loaded: {UNRAID_API_URL[:20]}...") if not is_valid:
else: logger.critical(
logger.warning("UNRAID_API_URL not found in environment or .env file.") f"Missing required configuration: {', '.join(missing)}. "
"Set these environment variables or add them to your .env file."
)
sys.exit(1)
if UNRAID_API_KEY: # Log configuration (delegated to shared function)
logger.info("UNRAID_API_KEY loaded: ****") from .config.logging import log_configuration_status
else:
logger.warning("UNRAID_API_KEY not found in environment or .env file.")
logger.info(f"UNRAID_MCP_PORT set to: {UNRAID_MCP_PORT}") log_configuration_status(logger)
logger.info(f"UNRAID_MCP_HOST set to: {UNRAID_MCP_HOST}")
logger.info(f"UNRAID_MCP_TRANSPORT set to: {UNRAID_MCP_TRANSPORT}") if UNRAID_VERIFY_SSL is False:
logger.warning(
"SSL VERIFICATION DISABLED (UNRAID_VERIFY_SSL=false). "
"Connections to Unraid API are vulnerable to man-in-the-middle attacks. "
"Only use this in trusted networks or for development."
)
# Register all modules # Register all modules
register_all_modules() register_all_modules()

View File

@@ -7,7 +7,8 @@ development and debugging purposes.
import asyncio import asyncio
import json import json
from datetime import datetime import re
from datetime import UTC, datetime
from typing import Any from typing import Any
import websockets import websockets
@@ -17,9 +18,66 @@ from websockets.typing import Subprotocol
from ..config.logging import logger from ..config.logging import logger
from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL
from ..core.exceptions import ToolError from ..core.exceptions import ToolError
from ..core.utils import safe_display_url
from .manager import subscription_manager from .manager import subscription_manager
from .resources import ensure_subscriptions_started from .resources import ensure_subscriptions_started
from .utils import build_ws_ssl_context from .utils import _analyze_subscription_status, build_ws_ssl_context, build_ws_url
# Schema field names that appear inside the selection set of allowed subscriptions.
# The regex _SUBSCRIPTION_NAME_PATTERN extracts the first identifier after the
# opening "{", so we list the actual field names used in queries (e.g. "logFile"),
# NOT the operation-level names (e.g. "logFileSubscription").
_ALLOWED_SUBSCRIPTION_FIELDS = frozenset(
{
"logFile",
"containerStats",
"cpu",
"memory",
"array",
"network",
"docker",
"vm",
}
)
# Pattern: must start with "subscription" keyword, then extract the first selected
# field name (the word immediately after "{").
_SUBSCRIPTION_NAME_PATTERN = re.compile(r"^\s*subscription\b[^{]*\{\s*(\w+)", re.IGNORECASE)
# Reject any query that contains a bare "mutation" or "query" operation keyword.
_FORBIDDEN_KEYWORDS = re.compile(r"\b(mutation|query)\b", re.IGNORECASE)
def _validate_subscription_query(query: str) -> str:
"""Validate that a subscription query is safe to execute.
Only allows subscription operations targeting whitelisted schema field names.
Rejects any query containing mutation/query keywords.
Returns:
The extracted field name (e.g. "logFile").
Raises:
ToolError: If the query fails validation.
"""
if _FORBIDDEN_KEYWORDS.search(query):
raise ToolError("Query rejected: must be a subscription, not a mutation or query.")
match = _SUBSCRIPTION_NAME_PATTERN.match(query)
if not match:
raise ToolError(
"Query rejected: must start with 'subscription' and contain a valid "
'subscription field. Example: subscription { logFile(path: "/var/log/syslog") { content } }'
)
field_name = match.group(1)
if field_name not in _ALLOWED_SUBSCRIPTION_FIELDS:
raise ToolError(
f"Subscription field '{field_name}' is not allowed. "
f"Allowed fields: {sorted(_ALLOWED_SUBSCRIPTION_FIELDS)}"
)
return field_name
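Behavior sketch for the validator above (the private helper is imported here purely for illustration):

```python
from unraid_mcp.subscriptions.diagnostics import _validate_subscription_query

# Accepted: subscription targeting a whitelisted field; returns "logFile".
_validate_subscription_query(
    'subscription { logFile(path: "/var/log/syslog") { content } }'
)

# Rejected: contains the "mutation" keyword -> ToolError.
# _validate_subscription_query("mutation { reboot }")

# Rejected: "shutdown" is not a whitelisted field -> ToolError.
# _validate_subscription_query("subscription { shutdown { ok } }")
```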
def register_diagnostic_tools(mcp: FastMCP) -> None: def register_diagnostic_tools(mcp: FastMCP) -> None:
@@ -34,6 +92,8 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
"""Test a GraphQL subscription query directly to debug schema issues. """Test a GraphQL subscription query directly to debug schema issues.
Use this to find working subscription field names and structure. Use this to find working subscription field names and structure.
Only whitelisted schema fields are permitted (logFile, containerStats,
cpu, memory, array, network, docker, vm).
Args: Args:
subscription_query: The GraphQL subscription query to test subscription_query: The GraphQL subscription query to test
@@ -41,16 +101,18 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
Returns: Returns:
Dict containing test results and response data Dict containing test results and response data
""" """
try: field_name = _validate_subscription_query(subscription_query)
logger.info(f"[TEST_SUBSCRIPTION] Testing query: {subscription_query}")
# Build WebSocket URL try:
if not UNRAID_API_URL: logger.info(f"[TEST_SUBSCRIPTION] Testing validated subscription field '{field_name}'")
raise ToolError("UNRAID_API_URL is not configured")
ws_url = ( try:
UNRAID_API_URL.replace("https://", "wss://").replace("http://", "ws://") ws_url = build_ws_url()
+ "/graphql" except ValueError as e:
) logger.error("[TEST_SUBSCRIPTION] Invalid WebSocket URL configuration: %s", e)
raise ToolError(
"Subscription test failed: invalid WebSocket URL configuration."
) from e
ssl_context = build_ws_ssl_context(ws_url) ssl_context = build_ws_ssl_context(ws_url)
@@ -59,6 +121,7 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
ws_url, ws_url,
subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")], subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")],
ssl=ssl_context, ssl=ssl_context,
open_timeout=10,
ping_interval=30, ping_interval=30,
ping_timeout=10, ping_timeout=10,
) as websocket: ) as websocket:
@@ -77,7 +140,13 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
init_response = json.loads(response) init_response = json.loads(response)
if init_response.get("type") != "connection_ack": if init_response.get("type") != "connection_ack":
return {"error": f"Connection failed: {init_response}"} logger.error(
"[TEST_SUBSCRIPTION] Connection not acknowledged: %s",
init_response,
)
raise ToolError(
"Subscription test failed: WebSocket connection was not acknowledged."
)
# Send subscription # Send subscription
await websocket.send( await websocket.send(
@@ -102,9 +171,13 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
"note": "Connection successful, subscription may be waiting for events", "note": "Connection successful, subscription may be waiting for events",
} }
except ToolError:
raise
except Exception as e: except Exception as e:
logger.error(f"[TEST_SUBSCRIPTION] Error: {e}", exc_info=True) logger.error("[TEST_SUBSCRIPTION] Error: %s", e, exc_info=True)
return {"error": str(e), "query_tested": subscription_query} raise ToolError(
"Subscription test failed: an unexpected error occurred. Check server logs for details."
) from e
@mcp.tool() @mcp.tool()
async def diagnose_subscriptions() -> dict[str, Any]: async def diagnose_subscriptions() -> dict[str, Any]:
@@ -122,20 +195,29 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
logger.info("[DIAGNOSTIC] Running subscription diagnostics...") logger.info("[DIAGNOSTIC] Running subscription diagnostics...")
# Get comprehensive status # Get comprehensive status
status = subscription_manager.get_subscription_status() status = await subscription_manager.get_subscription_status()
# Initialize connection issues list with proper type # Analyze connection issues and error counts via shared helper.
connection_issues: list[dict[str, Any]] = [] # Gates connection_issues on current failure state (Bug 5 fix).
error_count, connection_issues = _analyze_subscription_status(status)
# Calculate WebSocket URL
ws_url_display: str | None = None
if UNRAID_API_URL:
try:
ws_url_display = build_ws_url()
except ValueError:
ws_url_display = None
# Add environment info with explicit typing # Add environment info with explicit typing
diagnostic_info: dict[str, Any] = { diagnostic_info: dict[str, Any] = {
"timestamp": datetime.now().isoformat(), "timestamp": datetime.now(UTC).isoformat(),
"environment": { "environment": {
"auto_start_enabled": subscription_manager.auto_start_enabled, "auto_start_enabled": subscription_manager.auto_start_enabled,
"max_reconnect_attempts": subscription_manager.max_reconnect_attempts, "max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
"unraid_api_url": UNRAID_API_URL[:50] + "..." if UNRAID_API_URL else None, "unraid_api_url": safe_display_url(UNRAID_API_URL),
"api_key_configured": bool(UNRAID_API_KEY), "api_key_configured": bool(UNRAID_API_KEY),
"websocket_url": None, "websocket_url": ws_url_display,
}, },
"subscriptions": status, "subscriptions": status,
"summary": { "summary": {
@@ -147,40 +229,11 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
), ),
"active_count": len(subscription_manager.active_subscriptions), "active_count": len(subscription_manager.active_subscriptions),
"with_data": len(subscription_manager.resource_data), "with_data": len(subscription_manager.resource_data),
"in_error_state": 0, "in_error_state": error_count,
"connection_issues": connection_issues, "connection_issues": connection_issues,
}, },
} }
# Calculate WebSocket URL
if UNRAID_API_URL:
if UNRAID_API_URL.startswith("https://"):
ws_url = "wss://" + UNRAID_API_URL[len("https://") :]
elif UNRAID_API_URL.startswith("http://"):
ws_url = "ws://" + UNRAID_API_URL[len("http://") :]
else:
ws_url = UNRAID_API_URL
if not ws_url.endswith("/graphql"):
ws_url = ws_url.rstrip("/") + "/graphql"
diagnostic_info["environment"]["websocket_url"] = ws_url
# Analyze issues
for sub_name, sub_status in status.items():
runtime = sub_status.get("runtime", {})
connection_state = runtime.get("connection_state", "unknown")
if connection_state in ["error", "auth_failed", "timeout", "max_retries_exceeded"]:
diagnostic_info["summary"]["in_error_state"] += 1
if runtime.get("last_error"):
connection_issues.append(
{
"subscription": sub_name,
"state": connection_state,
"error": runtime["last_error"],
}
)
# Add troubleshooting recommendations # Add troubleshooting recommendations
recommendations: list[str] = [] recommendations: list[str] = []
@@ -227,7 +280,9 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
return diagnostic_info return diagnostic_info
except Exception as e: except Exception as e:
logger.error(f"[DIAGNOSTIC] Failed to generate diagnostics: {e}") logger.error("[DIAGNOSTIC] Failed to generate diagnostics: %s", e, exc_info=True)
raise ToolError(f"Failed to generate diagnostics: {e!s}") from e raise ToolError(
"Failed to generate diagnostics: an unexpected error occurred. Check server logs for details."
) from e
logger.info("Subscription diagnostic tools registered successfully") logger.info("Subscription diagnostic tools registered successfully")

View File

@@ -8,16 +8,71 @@ error handling, reconnection logic, and authentication.
import asyncio import asyncio
import json import json
import os import os
from datetime import datetime import time
from datetime import UTC, datetime
from typing import Any from typing import Any
import websockets import websockets
from websockets.typing import Subprotocol from websockets.typing import Subprotocol
from ..config.logging import logger from ..config.logging import logger
from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL from ..config.settings import UNRAID_API_KEY
from ..core.client import redact_sensitive
from ..core.types import SubscriptionData from ..core.types import SubscriptionData
from .utils import build_ws_ssl_context from .utils import build_ws_ssl_context, build_ws_url
# Resource data size limits to prevent unbounded memory growth
_MAX_RESOURCE_DATA_BYTES = 1_048_576 # 1MB
_MAX_RESOURCE_DATA_LINES = 5_000
# Minimum stable connection duration (seconds) before resetting reconnect counter
_STABLE_CONNECTION_SECONDS = 30
def _cap_log_content(data: dict[str, Any]) -> dict[str, Any]:
"""Cap log content in subscription data to prevent unbounded memory growth.
Returns a new dict — does NOT mutate the input. If any nested 'content'
field (from log subscriptions) exceeds the byte limit, truncate it to the
most recent _MAX_RESOURCE_DATA_LINES lines.
The final content is guaranteed to be <= _MAX_RESOURCE_DATA_BYTES.
"""
result: dict[str, Any] = {}
for key, value in data.items():
if isinstance(value, dict):
result[key] = _cap_log_content(value)
elif (
key == "content"
and isinstance(value, str)
# Pre-check uses byte count so multibyte UTF-8 chars cannot bypass the cap
and len(value.encode("utf-8", errors="replace")) > _MAX_RESOURCE_DATA_BYTES
):
lines = value.splitlines()
original_line_count = len(lines)
# Keep most recent lines first.
if len(lines) > _MAX_RESOURCE_DATA_LINES:
lines = lines[-_MAX_RESOURCE_DATA_LINES:]
truncated = "\n".join(lines)
# Encode once and slice bytes instead of an O(n²) line-trim loop
encoded = truncated.encode("utf-8", errors="replace")
if len(encoded) > _MAX_RESOURCE_DATA_BYTES:
truncated = encoded[-_MAX_RESOURCE_DATA_BYTES:].decode("utf-8", errors="ignore")
# Strip partial first line that may have been cut mid-character
nl_pos = truncated.find("\n")
if nl_pos != -1:
truncated = truncated[nl_pos + 1 :]
logger.warning(
f"[RESOURCE] Capped log content from {original_line_count} to "
f"{len(lines)} lines ({len(value)} -> {len(truncated)} chars)"
)
result[key] = truncated
else:
result[key] = value
return result
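A quick check of the cap's guarantees (the private helper is imported only to illustrate; the limits are the real 1 MB / 5,000-line constants):

```python
from unraid_mcp.subscriptions.manager import _cap_log_content

huge = "line\n" * 2_000_000  # ~10 MB of log text, well over the 1 MB cap

capped = _cap_log_content({"logFile": {"content": huge, "path": "/var/log/syslog"}})
inner = capped["logFile"]

assert len(inner["content"].encode("utf-8")) <= 1_048_576  # byte cap holds
assert len(inner["content"].splitlines()) <= 5_000         # line cap holds
assert inner["path"] == "/var/log/syslog"                  # sibling keys untouched
assert len(huge) == 10_000_000                             # input string not mutated
```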
class SubscriptionManager: class SubscriptionManager:
@@ -26,8 +81,13 @@ class SubscriptionManager:
def __init__(self) -> None: def __init__(self) -> None:
self.active_subscriptions: dict[str, asyncio.Task[None]] = {} self.active_subscriptions: dict[str, asyncio.Task[None]] = {}
self.resource_data: dict[str, SubscriptionData] = {} self.resource_data: dict[str, SubscriptionData] = {}
self.websocket: websockets.WebSocketServerProtocol | None = None # Two fine-grained locks instead of one coarse lock (P-01):
self.subscription_lock = asyncio.Lock() # _task_lock guards active_subscriptions dict (task lifecycle).
# _data_lock guards resource_data dict (WebSocket message writes + reads).
# Splitting prevents WebSocket message updates from blocking tool reads
# of active_subscriptions and vice versa.
self._task_lock = asyncio.Lock()
self._data_lock = asyncio.Lock()
# Configuration # Configuration
self.auto_start_enabled = ( self.auto_start_enabled = (
@@ -37,6 +97,7 @@ class SubscriptionManager:
self.max_reconnect_attempts = int(os.getenv("UNRAID_MAX_RECONNECT_ATTEMPTS", "10")) self.max_reconnect_attempts = int(os.getenv("UNRAID_MAX_RECONNECT_ATTEMPTS", "10"))
self.connection_states: dict[str, str] = {} # Track connection state per subscription self.connection_states: dict[str, str] = {} # Track connection state per subscription
self.last_error: dict[str, str] = {} # Track last error per subscription self.last_error: dict[str, str] = {} # Track last error per subscription
self._connection_start_times: dict[str, float] = {} # Track when connections started
# Define subscription configurations # Define subscription configurations
self.subscription_configs = { self.subscription_configs = {
@@ -105,8 +166,9 @@ class SubscriptionManager:
# Reset connection tracking # Reset connection tracking
self.reconnect_attempts[subscription_name] = 0 self.reconnect_attempts[subscription_name] = 0
self.connection_states[subscription_name] = "starting" self.connection_states[subscription_name] = "starting"
self._connection_start_times.pop(subscription_name, None)
async with self.subscription_lock: async with self._task_lock:
try: try:
task = asyncio.create_task( task = asyncio.create_task(
self._subscription_loop(subscription_name, query, variables or {}) self._subscription_loop(subscription_name, query, variables or {})
@@ -128,7 +190,7 @@ class SubscriptionManager:
"""Stop a specific subscription.""" """Stop a specific subscription."""
logger.info(f"[SUBSCRIPTION:{subscription_name}] Stopping subscription...") logger.info(f"[SUBSCRIPTION:{subscription_name}] Stopping subscription...")
async with self.subscription_lock: async with self._task_lock:
if subscription_name in self.active_subscriptions: if subscription_name in self.active_subscriptions:
task = self.active_subscriptions[subscription_name] task = self.active_subscriptions[subscription_name]
task.cancel() task.cancel()
@@ -138,10 +200,21 @@ class SubscriptionManager:
logger.debug(f"[SUBSCRIPTION:{subscription_name}] Task cancelled successfully") logger.debug(f"[SUBSCRIPTION:{subscription_name}] Task cancelled successfully")
del self.active_subscriptions[subscription_name] del self.active_subscriptions[subscription_name]
self.connection_states[subscription_name] = "stopped" self.connection_states[subscription_name] = "stopped"
self._connection_start_times.pop(subscription_name, None)
logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription stopped") logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription stopped")
else: else:
logger.warning(f"[SUBSCRIPTION:{subscription_name}] No active subscription to stop") logger.warning(f"[SUBSCRIPTION:{subscription_name}] No active subscription to stop")
async def stop_all(self) -> None:
"""Stop all active subscriptions (called during server shutdown)."""
subscription_names = list(self.active_subscriptions.keys())
for name in subscription_names:
try:
await self.stop_subscription(name)
except Exception as e:
logger.error(f"[SHUTDOWN] Error stopping subscription '{name}': {e}", exc_info=True)
logger.info(f"[SHUTDOWN] Stopped {len(subscription_names)} subscription(s)")
async def _subscription_loop( async def _subscription_loop(
self, subscription_name: str, query: str, variables: dict[str, Any] | None self, subscription_name: str, query: str, variables: dict[str, Any] | None
) -> None: ) -> None:
@@ -165,20 +238,7 @@ class SubscriptionManager:
break break
try: try:
# Build WebSocket URL with detailed logging ws_url = build_ws_url()
if not UNRAID_API_URL:
raise ValueError("UNRAID_API_URL is not configured")
if UNRAID_API_URL.startswith("https://"):
ws_url = "wss://" + UNRAID_API_URL[len("https://") :]
elif UNRAID_API_URL.startswith("http://"):
ws_url = "ws://" + UNRAID_API_URL[len("http://") :]
else:
ws_url = UNRAID_API_URL
if not ws_url.endswith("/graphql"):
ws_url = ws_url.rstrip("/") + "/graphql"
logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}") logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}")
logger.debug( logger.debug(
f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}" f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}"
@@ -195,6 +255,7 @@ class SubscriptionManager:
async with websockets.connect( async with websockets.connect(
ws_url, ws_url,
subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")], subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")],
open_timeout=connect_timeout,
ping_interval=20, ping_interval=20,
ping_timeout=10, ping_timeout=10,
close_timeout=10, close_timeout=10,
@@ -206,9 +267,9 @@ class SubscriptionManager:
) )
self.connection_states[subscription_name] = "connected" self.connection_states[subscription_name] = "connected"
# Reset retry count on successful connection # Track connection start time — only reset retry counter
self.reconnect_attempts[subscription_name] = 0 # after the connection proves stable (>=30s connected)
retry_delay = 5 # Reset delay self._connection_start_times[subscription_name] = time.monotonic()
# Initialize GraphQL-WS protocol # Initialize GraphQL-WS protocol
logger.debug( logger.debug(
@@ -290,7 +351,9 @@ class SubscriptionManager:
f"[SUBSCRIPTION:{subscription_name}] Subscription message type: {start_type}" f"[SUBSCRIPTION:{subscription_name}] Subscription message type: {start_type}"
) )
logger.debug(f"[SUBSCRIPTION:{subscription_name}] Query: {query[:100]}...") logger.debug(f"[SUBSCRIPTION:{subscription_name}] Query: {query[:100]}...")
logger.debug(f"[SUBSCRIPTION:{subscription_name}] Variables: {variables}") logger.debug(
f"[SUBSCRIPTION:{subscription_name}] Variables: {redact_sensitive(variables)}"
)
await websocket.send(json.dumps(subscription_message)) await websocket.send(json.dumps(subscription_message))
logger.info( logger.info(
@@ -326,11 +389,18 @@ class SubscriptionManager:
logger.info( logger.info(
f"[DATA:{subscription_name}] Received subscription data update" f"[DATA:{subscription_name}] Received subscription data update"
) )
self.resource_data[subscription_name] = SubscriptionData( capped_data = (
data=payload["data"], _cap_log_content(payload["data"])
last_updated=datetime.now(), if isinstance(payload["data"], dict)
else payload["data"]
)
new_entry = SubscriptionData(
data=capped_data,
last_updated=datetime.now(UTC),
subscription_type=subscription_name, subscription_type=subscription_name,
) )
async with self._data_lock:
self.resource_data[subscription_name] = new_entry
logger.debug( logger.debug(
f"[RESOURCE:{subscription_name}] Resource data updated successfully" f"[RESOURCE:{subscription_name}] Resource data updated successfully"
) )
@@ -391,7 +461,8 @@ class SubscriptionManager:
logger.error(f"[PROTOCOL:{subscription_name}] JSON decode error: {e}") logger.error(f"[PROTOCOL:{subscription_name}] JSON decode error: {e}")
except Exception as e: except Exception as e:
logger.error( logger.error(
f"[DATA:{subscription_name}] Error processing message: {e}" f"[DATA:{subscription_name}] Error processing message: {e}",
exc_info=True,
) )
msg_preview = ( msg_preview = (
message[:200] message[:200]
@@ -421,29 +492,70 @@ class SubscriptionManager:
self.connection_states[subscription_name] = "invalid_uri" self.connection_states[subscription_name] = "invalid_uri"
break # Don't retry on invalid URI break # Don't retry on invalid URI
except Exception as e: except ValueError as e:
error_msg = f"Unexpected error: {e}" # Non-retryable configuration error (e.g. UNRAID_API_URL not set)
error_msg = f"Configuration error: {e}"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}") logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
self.last_error[subscription_name] = error_msg self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "error" self.connection_states[subscription_name] = "error"
break # Don't retry on configuration errors
# Calculate backoff delay except Exception as e:
retry_delay = min(retry_delay * 1.5, max_retry_delay) error_msg = f"Unexpected error: {e}"
logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}", exc_info=True)
self.last_error[subscription_name] = error_msg
self.connection_states[subscription_name] = "error"
# Check if connection was stable before deciding on retry behavior
start_time = self._connection_start_times.pop(subscription_name, None)
if start_time is not None:
connected_duration = time.monotonic() - start_time
if connected_duration >= _STABLE_CONNECTION_SECONDS:
# Connection was stable — reset retry counter and backoff
logger.info(
f"[WEBSOCKET:{subscription_name}] Connection was stable "
f"({connected_duration:.0f}s >= {_STABLE_CONNECTION_SECONDS}s), "
f"resetting retry counter"
)
self.reconnect_attempts[subscription_name] = 0
retry_delay = 5
else:
logger.warning(
f"[WEBSOCKET:{subscription_name}] Connection was unstable "
f"({connected_duration:.0f}s < {_STABLE_CONNECTION_SECONDS}s), "
f"keeping retry counter at {self.reconnect_attempts.get(subscription_name, 0)}"
)
# Only escalate backoff when connection was NOT stable
retry_delay = min(retry_delay * 1.5, max_retry_delay)
else:
# No connection was established — escalate backoff
retry_delay = min(retry_delay * 1.5, max_retry_delay)
logger.info( logger.info(
f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds..." f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds..."
) )
self.connection_states[subscription_name] = "reconnecting" self.connection_states[subscription_name] = "reconnecting"
await asyncio.sleep(retry_delay) await asyncio.sleep(retry_delay)
def get_resource_data(self, resource_name: str) -> dict[str, Any] | None: # The while loop exited (via break or max_retries exceeded).
# Remove from active_subscriptions so start_subscription() can restart it.
async with self._task_lock:
self.active_subscriptions.pop(subscription_name, None)
logger.info(
f"[SUBSCRIPTION:{subscription_name}] Subscription loop ended — "
f"removed from active_subscriptions. Final state: "
f"{self.connection_states.get(subscription_name, 'unknown')}"
)
async def get_resource_data(self, resource_name: str) -> dict[str, Any] | None:
"""Get current resource data with enhanced logging.""" """Get current resource data with enhanced logging."""
logger.debug(f"[RESOURCE:{resource_name}] Resource data requested") logger.debug(f"[RESOURCE:{resource_name}] Resource data requested")
if resource_name in self.resource_data: async with self._data_lock:
data = self.resource_data[resource_name] if resource_name in self.resource_data:
age_seconds = (datetime.now() - data.last_updated).total_seconds() data = self.resource_data[resource_name]
logger.debug(f"[RESOURCE:{resource_name}] Data found, age: {age_seconds:.1f}s") age_seconds = (datetime.now(UTC) - data.last_updated).total_seconds()
return data.data logger.debug(f"[RESOURCE:{resource_name}] Data found, age: {age_seconds:.1f}s")
return data.data
logger.debug(f"[RESOURCE:{resource_name}] No data available") logger.debug(f"[RESOURCE:{resource_name}] No data available")
return None return None
@@ -453,38 +565,39 @@ class SubscriptionManager:
logger.debug(f"[SUBSCRIPTION_MANAGER] Active subscriptions: {active}") logger.debug(f"[SUBSCRIPTION_MANAGER] Active subscriptions: {active}")
return active return active
def get_subscription_status(self) -> dict[str, dict[str, Any]]: async def get_subscription_status(self) -> dict[str, dict[str, Any]]:
"""Get detailed status of all subscriptions for diagnostics.""" """Get detailed status of all subscriptions for diagnostics."""
status = {} status = {}
for sub_name, config in self.subscription_configs.items(): async with self._task_lock, self._data_lock:
sub_status = { for sub_name, config in self.subscription_configs.items():
"config": { sub_status = {
"resource": config["resource"], "config": {
"description": config["description"], "resource": config["resource"],
"auto_start": config.get("auto_start", False), "description": config["description"],
}, "auto_start": config.get("auto_start", False),
"runtime": { },
"active": sub_name in self.active_subscriptions, "runtime": {
"connection_state": self.connection_states.get(sub_name, "not_started"), "active": sub_name in self.active_subscriptions,
"reconnect_attempts": self.reconnect_attempts.get(sub_name, 0), "connection_state": self.connection_states.get(sub_name, "not_started"),
"last_error": self.last_error.get(sub_name, None), "reconnect_attempts": self.reconnect_attempts.get(sub_name, 0),
}, "last_error": self.last_error.get(sub_name, None),
} },
# Add data info if available
if sub_name in self.resource_data:
data_info = self.resource_data[sub_name]
age_seconds = (datetime.now() - data_info.last_updated).total_seconds()
sub_status["data"] = {
"available": True,
"last_updated": data_info.last_updated.isoformat(),
"age_seconds": age_seconds,
} }
else:
sub_status["data"] = {"available": False}
status[sub_name] = sub_status # Add data info if available
if sub_name in self.resource_data:
data_info = self.resource_data[sub_name]
age_seconds = (datetime.now(UTC) - data_info.last_updated).total_seconds()
sub_status["data"] = {
"available": True,
"last_updated": data_info.last_updated.isoformat(),
"age_seconds": age_seconds,
}
else:
sub_status["data"] = {"available": False}
status[sub_name] = sub_status
logger.debug(f"[SUBSCRIPTION_MANAGER] Generated status for {len(status)} subscriptions") logger.debug(f"[SUBSCRIPTION_MANAGER] Generated status for {len(status)} subscriptions")
return status return status
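The reconnect policy above only forgives past failures once a connection has stayed up for `_STABLE_CONNECTION_SECONDS`. A distilled, standalone sketch of that decision (names simplified; this is not the manager's real API):

```python
import time

STABLE_SECONDS = 30
MAX_DELAY = 60.0


def next_retry_state(connected_at: float | None, attempts: int, delay: float) -> tuple[int, float]:
    """Return (attempt_count, retry_delay) after a connection drops.

    connected_at is the time.monotonic() value captured when the socket
    opened, or None if the connect itself failed.
    """
    if connected_at is not None and time.monotonic() - connected_at >= STABLE_SECONDS:
        # Stable session: reset the counter and restart the backoff curve.
        return 0, 5.0
    # Failed or short-lived session: keep the counter and escalate 1.5x.
    return attempts, min(delay * 1.5, MAX_DELAY)
```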

View File

@@ -4,8 +4,10 @@ This module defines MCP resources that bridge between the subscription manager
and the MCP protocol, providing fallback queries when subscription data is unavailable. and the MCP protocol, providing fallback queries when subscription data is unavailable.
""" """
import asyncio
import json import json
import os import os
from typing import Final
import anyio import anyio
from fastmcp import FastMCP from fastmcp import FastMCP
@@ -16,22 +18,29 @@ from .manager import subscription_manager
# Global flag to track subscription startup # Global flag to track subscription startup
_subscriptions_started = False _subscriptions_started = False
_startup_lock: Final[asyncio.Lock] = asyncio.Lock()
async def ensure_subscriptions_started() -> None: async def ensure_subscriptions_started() -> None:
"""Ensure subscriptions are started, called from async context.""" """Ensure subscriptions are started, called from async context."""
global _subscriptions_started global _subscriptions_started
# Fast-path: skip lock if already started
if _subscriptions_started: if _subscriptions_started:
return return
logger.info("[STARTUP] First async operation detected, starting subscriptions...") # Slow-path: acquire lock for initialization (double-checked locking)
try: async with _startup_lock:
await autostart_subscriptions() if _subscriptions_started:
_subscriptions_started = True return
logger.info("[STARTUP] Subscriptions started successfully")
except Exception as e: logger.info("[STARTUP] First async operation detected, starting subscriptions...")
logger.error(f"[STARTUP] Failed to start subscriptions: {e}", exc_info=True) try:
await autostart_subscriptions()
_subscriptions_started = True
logger.info("[STARTUP] Subscriptions started successfully")
except Exception as e:
logger.error(f"[STARTUP] Failed to start subscriptions: {e}", exc_info=True)
async def autostart_subscriptions() -> None: async def autostart_subscriptions() -> None:
@@ -39,11 +48,12 @@ async def autostart_subscriptions() -> None:
logger.info("[AUTOSTART] Initiating subscription auto-start process...") logger.info("[AUTOSTART] Initiating subscription auto-start process...")
try: try:
# Use the new SubscriptionManager auto-start method # Use the SubscriptionManager auto-start method
await subscription_manager.auto_start_all_subscriptions() await subscription_manager.auto_start_all_subscriptions()
logger.info("[AUTOSTART] Auto-start process completed successfully") logger.info("[AUTOSTART] Auto-start process completed successfully")
except Exception as e: except Exception as e:
logger.error(f"[AUTOSTART] Failed during auto-start process: {e}", exc_info=True) logger.error(f"[AUTOSTART] Failed during auto-start process: {e}", exc_info=True)
raise # Propagate so ensure_subscriptions_started doesn't mark as started
# Optional log file subscription # Optional log file subscription
log_path = os.getenv("UNRAID_AUTOSTART_LOG_PATH") log_path = os.getenv("UNRAID_AUTOSTART_LOG_PATH")
@@ -82,7 +92,7 @@ def register_subscription_resources(mcp: FastMCP) -> None:
async def logs_stream_resource() -> str: async def logs_stream_resource() -> str:
"""Real-time log stream data from subscription.""" """Real-time log stream data from subscription."""
await ensure_subscriptions_started() await ensure_subscriptions_started()
data = subscription_manager.get_resource_data("logFileSubscription") data = await subscription_manager.get_resource_data("logFileSubscription")
if data: if data:
return json.dumps(data, indent=2) return json.dumps(data, indent=2)
return json.dumps( return json.dumps(
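The startup path above is async double-checked locking: a lock-free fast path once initialized, a re-check under the lock so concurrent first callers run startup exactly once, and a flag set only on success so failures can retry. A generic sketch of the same shape (names illustrative):

```python
import asyncio
from collections.abc import Awaitable, Callable

_initialized = False
_init_lock = asyncio.Lock()


async def ensure_initialized(init: Callable[[], Awaitable[None]]) -> None:
    global _initialized
    if _initialized:            # fast path: no lock after first success
        return
    async with _init_lock:      # slow path: serialize concurrent first callers
        if _initialized:        # re-check: another task may have finished init
            return
        await init()            # if this raises, the flag stays False and a
        _initialized = True     # later call will retry the initialization
```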

View File

@@ -1,8 +1,41 @@
"""Shared utilities for the subscription system.""" """Shared utilities for the subscription system."""
import ssl as _ssl import ssl as _ssl
from typing import Any
from ..config.settings import UNRAID_VERIFY_SSL from ..config.settings import UNRAID_API_URL, UNRAID_VERIFY_SSL
def build_ws_url() -> str:
"""Build a WebSocket URL from the configured UNRAID_API_URL.
Converts http(s) scheme to ws(s) and ensures /graphql path suffix.
Returns:
The WebSocket URL string (e.g. "wss://10.1.0.2:31337/graphql").
Raises:
ValueError: If UNRAID_API_URL is not configured or has an unrecognised scheme.
"""
if not UNRAID_API_URL:
raise ValueError("UNRAID_API_URL is not configured")
if UNRAID_API_URL.startswith("https://"):
ws_url = "wss://" + UNRAID_API_URL[len("https://") :]
elif UNRAID_API_URL.startswith("http://"):
ws_url = "ws://" + UNRAID_API_URL[len("http://") :]
elif UNRAID_API_URL.startswith(("ws://", "wss://")):
ws_url = UNRAID_API_URL # Already a WebSocket URL
else:
raise ValueError(
f"UNRAID_API_URL must start with http://, https://, ws://, or wss://. "
f"Got: {UNRAID_API_URL[:20]}..."
)
if not ws_url.endswith("/graphql"):
ws_url = ws_url.rstrip("/") + "/graphql"
return ws_url
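Input/output sketch for `build_ws_url` (the left side shows example values of `UNRAID_API_URL`):

```python
# UNRAID_API_URL                 -> build_ws_url()
# "https://10.1.0.2:31337"       -> "wss://10.1.0.2:31337/graphql"
# "http://tower.local"           -> "ws://tower.local/graphql"
# "wss://10.1.0.2/graphql"       -> "wss://10.1.0.2/graphql" (already a WS URL)
# "ftp://10.1.0.2"               -> ValueError (unrecognised scheme)
# unset / empty                  -> ValueError ("UNRAID_API_URL is not configured")
```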
def build_ws_ssl_context(ws_url: str) -> _ssl.SSLContext | None: def build_ws_ssl_context(ws_url: str) -> _ssl.SSLContext | None:
@@ -25,3 +58,41 @@ def build_ws_ssl_context(ws_url: str) -> _ssl.SSLContext | None:
ctx.check_hostname = False ctx.check_hostname = False
ctx.verify_mode = _ssl.CERT_NONE ctx.verify_mode = _ssl.CERT_NONE
return ctx return ctx
def _analyze_subscription_status(
status: dict[str, Any],
) -> tuple[int, list[dict[str, Any]]]:
"""Analyze subscription status dict, returning error count and connection issues.
Only reports connection_issues for subscriptions that are currently in a
failure state (not recovered ones that happen to have a stale last_error).
Args:
status: Dict of subscription name -> status info from get_subscription_status().
Returns:
Tuple of (error_count, connection_issues_list).
"""
_error_states = frozenset(
{"error", "auth_failed", "timeout", "max_retries_exceeded", "invalid_uri"}
)
error_count = 0
connection_issues: list[dict[str, Any]] = []
for sub_name, sub_status in status.items():
runtime = sub_status.get("runtime", {})
conn_state = runtime.get("connection_state", "unknown")
if conn_state in _error_states:
error_count += 1
# Gate on current failure state so recovered subscriptions are not reported
if runtime.get("last_error") and conn_state in _error_states:
connection_issues.append(
{
"subscription": sub_name,
"state": conn_state,
"error": runtime["last_error"],
}
)
return error_count, connection_issues
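A worked example of the analyzer above: the recovered subscription still carries a stale `last_error` but is not reported, while the currently failing one is (private helper imported for illustration):

```python
from unraid_mcp.subscriptions.utils import _analyze_subscription_status

status = {
    "cpu": {
        "runtime": {
            "connection_state": "connected",   # recovered
            "last_error": "timeout (stale)",   # ignored: not in a failure state
        }
    },
    "docker": {
        "runtime": {
            "connection_state": "auth_failed",  # current failure
            "last_error": "401 Unauthorized",
        }
    },
}

error_count, issues = _analyze_subscription_status(status)
assert error_count == 1
assert issues == [
    {"subscription": "docker", "state": "auth_failed", "error": "401 Unauthorized"}
]
```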

View File

@@ -1,14 +1,14 @@
"""MCP tools organized by functional domain. """MCP tools organized by functional domain.
10 consolidated tools with ~90 actions total: 11 consolidated tools with ~104 actions total:
unraid_info - System information queries (19 actions) unraid_info - System information queries (21 actions)
unraid_array - Array operations and power management (12 actions) unraid_array - Array operations and parity management (5 actions)
unraid_storage - Storage, disks, and logs (6 actions) unraid_storage - Storage, disks, and logs (6 actions)
unraid_docker - Docker container management (15 actions) unraid_docker - Docker container management (26 actions)
unraid_vm - Virtual machine management (9 actions) unraid_vm - Virtual machine management (9 actions)
unraid_notifications - Notification management (9 actions) unraid_notifications - Notification management (9 actions)
unraid_rclone - Cloud storage remotes (4 actions) unraid_rclone - Cloud storage remotes (4 actions)
unraid_users - User management (8 actions) unraid_users - User management (1 action)
unraid_keys - API key management (5 actions) unraid_keys - API key management (5 actions)
unraid_health - Health monitoring and diagnostics (3 actions) unraid_health - Health monitoring and diagnostics (3 actions)
unraid_settings - Unraid settings management
""" """

View File

@@ -3,13 +3,13 @@
Provides the `unraid_array` tool with 5 actions for parity check management. Provides the `unraid_array` tool with 5 actions for parity check management.
""" """
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
from ..config.logging import logger from ..config.logging import logger
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = { QUERIES: dict[str, str] = {
@@ -22,7 +22,7 @@ QUERIES: dict[str, str] = {
MUTATIONS: dict[str, str] = { MUTATIONS: dict[str, str] = {
"parity_start": """ "parity_start": """
mutation StartParityCheck($correct: Boolean) { mutation StartParityCheck($correct: Boolean!) {
parityCheck { start(correct: $correct) } parityCheck { start(correct: $correct) }
} }
""", """,
@@ -53,6 +53,14 @@ ARRAY_ACTIONS = Literal[
"parity_status", "parity_status",
] ]
if set(get_args(ARRAY_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(ARRAY_ACTIONS))
_extra = set(get_args(ARRAY_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"ARRAY_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_array_tool(mcp: FastMCP) -> None: def register_array_tool(mcp: FastMCP) -> None:
"""Register the unraid_array tool with the FastMCP instance.""" """Register the unraid_array tool with the FastMCP instance."""
@@ -65,7 +73,7 @@ def register_array_tool(mcp: FastMCP) -> None:
"""Manage Unraid array parity checks. """Manage Unraid array parity checks.
Actions: Actions:
parity_start - Start parity check (optional correct=True to fix errors) parity_start - Start parity check (correct=True to fix errors, correct=False for read-only; required)
parity_pause - Pause running parity check parity_pause - Pause running parity check
parity_resume - Resume paused parity check parity_resume - Resume paused parity check
parity_cancel - Cancel running parity check parity_cancel - Cancel running parity check
@@ -74,7 +82,7 @@ def register_array_tool(mcp: FastMCP) -> None:
if action not in ALL_ACTIONS: if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}") raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
try: with tool_error_handler("array", action, logger):
logger.info(f"Executing unraid_array action={action}") logger.info(f"Executing unraid_array action={action}")
if action in QUERIES: if action in QUERIES:
@@ -84,7 +92,9 @@ def register_array_tool(mcp: FastMCP) -> None:
query = MUTATIONS[action] query = MUTATIONS[action]
variables: dict[str, Any] | None = None variables: dict[str, Any] | None = None
if action == "parity_start" and correct is not None: if action == "parity_start":
if correct is None:
raise ToolError("correct is required for 'parity_start' action")
variables = {"correct": correct} variables = {"correct": correct}
data = await make_graphql_request(query, variables) data = await make_graphql_request(query, variables)
@@ -95,10 +105,4 @@ def register_array_tool(mcp: FastMCP) -> None:
"data": data, "data": data,
} }
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_array action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute array/{action}: {e!s}") from e
logger.info("Array tool registered successfully") logger.info("Array tool registered successfully")

View File

@@ -1,17 +1,18 @@
"""Docker container management. """Docker container management.
Provides the `unraid_docker` tool with 15 actions for container lifecycle, Provides the `unraid_docker` tool with 26 actions for container lifecycle,
logs, networks, and update management. logs, networks, update management, and Docker organizer operations.
""" """
import re import re
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
from ..config.logging import logger from ..config.logging import logger
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import safe_get
QUERIES: dict[str, str] = { QUERIES: dict[str, str] = {
@@ -35,27 +36,27 @@ QUERIES: dict[str, str] = {
""", """,
"logs": """ "logs": """
query GetContainerLogs($id: PrefixedID!, $tail: Int) { query GetContainerLogs($id: PrefixedID!, $tail: Int) {
docker { logs(id: $id, tail: $tail) } docker { logs(id: $id, tail: $tail) { containerId lines { timestamp message } cursor } }
} }
""", """,
"networks": """ "networks": """
query GetDockerNetworks { query GetDockerNetworks {
dockerNetworks { id name driver scope } docker { networks { id name driver scope } }
} }
""", """,
"network_details": """ "network_details": """
query GetDockerNetwork($id: PrefixedID!) { query GetDockerNetwork {
dockerNetwork(id: $id) { id name driver scope containers } docker { networks { id name driver scope enableIPv6 internal attachable containers options labels } }
} }
""", """,
"port_conflicts": """ "port_conflicts": """
query GetPortConflicts { query GetPortConflicts {
docker { portConflicts { containerName port conflictsWith } } docker { portConflicts { containerPorts { privatePort type containers { id name } } lanPorts { lanIpPort publicPort type containers { id name } } } }
} }
""", """,
"check_updates": """ "check_updates": """
query CheckContainerUpdates { query CheckContainerUpdates {
docker { containerUpdateStatuses { id name updateAvailable currentVersion latestVersion } } docker { containerUpdateStatuses { name updateStatus } }
} }
""", """,
} }
@@ -96,9 +97,83 @@ MUTATIONS: dict[str, str] = {
docker { updateAllContainers { id names state status } } docker { updateAllContainers { id names state status } }
} }
""", """,
"create_folder": """
mutation CreateDockerFolder($name: String!, $parentId: String, $childrenIds: [String!]) {
createDockerFolder(name: $name, parentId: $parentId, childrenIds: $childrenIds) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"set_folder_children": """
mutation SetDockerFolderChildren($folderId: String, $childrenIds: [String!]!) {
setDockerFolderChildren(folderId: $folderId, childrenIds: $childrenIds) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"delete_entries": """
mutation DeleteDockerEntries($entryIds: [String!]!) {
deleteDockerEntries(entryIds: $entryIds) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"move_to_folder": """
mutation MoveDockerEntriesToFolder($sourceEntryIds: [String!]!, $destinationFolderId: String!) {
moveDockerEntriesToFolder(sourceEntryIds: $sourceEntryIds, destinationFolderId: $destinationFolderId) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"move_to_position": """
mutation MoveDockerItemsToPosition($sourceEntryIds: [String!]!, $destinationFolderId: String!, $position: Float!) {
moveDockerItemsToPosition(sourceEntryIds: $sourceEntryIds, destinationFolderId: $destinationFolderId, position: $position) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"rename_folder": """
mutation RenameDockerFolder($folderId: String!, $newName: String!) {
renameDockerFolder(folderId: $folderId, newName: $newName) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"create_folder_with_items": """
mutation CreateDockerFolderWithItems($name: String!, $parentId: String, $sourceEntryIds: [String!], $position: Float) {
createDockerFolderWithItems(name: $name, parentId: $parentId, sourceEntryIds: $sourceEntryIds, position: $position) {
version views { id name rootId flatEntries { id type name parentId depth position path hasChildren childrenIds } }
}
}
""",
"update_view_prefs": """
mutation UpdateDockerViewPreferences($viewId: String, $prefs: JSON!) {
updateDockerViewPreferences(viewId: $viewId, prefs: $prefs) {
version views { id name rootId }
}
}
""",
"sync_templates": """
mutation SyncDockerTemplatePaths {
syncDockerTemplatePaths { scanned matched skipped errors }
}
""",
"reset_template_mappings": """
mutation ResetDockerTemplateMappings {
resetDockerTemplateMappings
}
""",
"refresh_digests": """
mutation RefreshDockerDigests {
refreshDockerDigests
}
""",
} }
DESTRUCTIVE_ACTIONS = {"remove"} DESTRUCTIVE_ACTIONS = {"remove", "update_all", "delete_entries", "reset_template_mappings"}
# NOTE (Code-M-07): "details" and "logs" are listed here because they require a
# container_id parameter, but unlike mutations they use fuzzy name matching (not
# strict). This is intentional: read-only queries are safe with fuzzy matching.
_ACTIONS_REQUIRING_CONTAINER_ID = { _ACTIONS_REQUIRING_CONTAINER_ID = {
"start", "start",
"stop", "stop",
@@ -111,6 +186,7 @@ _ACTIONS_REQUIRING_CONTAINER_ID = {
"logs", "logs",
} }
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"restart"} ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"restart"}
_MAX_TAIL_LINES = 10_000
DOCKER_ACTIONS = Literal[ DOCKER_ACTIONS = Literal[
"list", "list",
@@ -128,35 +204,49 @@ DOCKER_ACTIONS = Literal[
"network_details", "network_details",
"port_conflicts", "port_conflicts",
"check_updates", "check_updates",
"create_folder",
"set_folder_children",
"delete_entries",
"move_to_folder",
"move_to_position",
"rename_folder",
"create_folder_with_items",
"update_view_prefs",
"sync_templates",
"reset_template_mappings",
"refresh_digests",
] ]
# Docker container IDs: 64 hex chars + optional suffix (e.g., ":local") if set(get_args(DOCKER_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(DOCKER_ACTIONS))
_extra = set(get_args(DOCKER_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"DOCKER_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
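The import-time cross-check above fails fast whenever the Literal drifts from the QUERIES/MUTATIONS dicts. A minimal standalone sketch of the pattern (toy action names, not the module's real tables):

    from typing import Literal, get_args

    QUERIES = {"list": "..."}
    MUTATIONS = {"start": "...", "stop": "..."}
    ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"restart"}

    # "restart" is deliberately missing from the Literal to show the failure mode.
    ACTIONS = Literal["list", "start", "stop"]

    if set(get_args(ACTIONS)) != ALL_ACTIONS:
        raise RuntimeError(f"out of sync: {ALL_ACTIONS - set(get_args(ACTIONS))}")
    # RuntimeError: out of sync: {'restart'}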
# Full PrefixedID: 64 hex chars + optional suffix (e.g., ":local")
_DOCKER_ID_PATTERN = re.compile(r"^[a-f0-9]{64}(:[a-z0-9]+)?$", re.IGNORECASE) _DOCKER_ID_PATTERN = re.compile(r"^[a-f0-9]{64}(:[a-z0-9]+)?$", re.IGNORECASE)
# Short hex prefix: at least 12 hex chars (standard Docker short ID length)
def _safe_get(data: dict[str, Any], *keys: str, default: Any = None) -> Any: _DOCKER_SHORT_ID_PATTERN = re.compile(r"^[a-f0-9]{12,63}$", re.IGNORECASE)
"""Safely traverse nested dict keys, handling None intermediates."""
current = data
for key in keys:
if not isinstance(current, dict):
return default
current = current.get(key)
return current if current is not None else default
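The deleted local helper is superseded by the shared safe_get from core.utils; judging by the call sites below and the removed body above, the contract is unchanged: a None or missing key at any depth yields the default. Illustrative calls:

    data = {"docker": {"containers": None}}

    safe_get(data, "docker", "containers", default=[])    # -> [] (None coerced to default)
    safe_get(data, "docker", "missing", "x", default={})  # -> {} (stops at non-dict)
    safe_get({"a": {"b": 1}}, "a", "b")                   # -> 1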
def find_container_by_identifier( def find_container_by_identifier(
identifier: str, containers: list[dict[str, Any]] identifier: str, containers: list[dict[str, Any]], *, strict: bool = False
) -> dict[str, Any] | None: ) -> dict[str, Any] | None:
"""Find a container by ID or name with fuzzy matching. """Find a container by ID or name with optional fuzzy matching.
Match priority: Match priority:
1. Exact ID match 1. Exact ID match
2. Exact name match (case-sensitive) 2. Exact name match (case-sensitive)
When strict=False (default), also tries:
3. Name starts with identifier (case-insensitive) 3. Name starts with identifier (case-insensitive)
4. Name contains identifier as substring (case-insensitive) 4. Name contains identifier as substring (case-insensitive)
Note: Short identifiers (e.g. "db") may match unintended containers When strict=True, only exact matches (1 & 2) are used.
via substring. Use more specific names or IDs for precision. Use strict=True for mutations to prevent targeting the wrong container.
""" """
if not containers: if not containers:
return None return None
@@ -168,20 +258,24 @@ def find_container_by_identifier(
if identifier in c.get("names", []): if identifier in c.get("names", []):
return c return c
# Strict mode: no fuzzy matching allowed
if strict:
return None
id_lower = identifier.lower() id_lower = identifier.lower()
# Priority 3: prefix match (more precise than substring) # Priority 3: prefix match (more precise than substring)
for c in containers: for c in containers:
for name in c.get("names", []): for name in c.get("names", []):
if name.lower().startswith(id_lower): if name.lower().startswith(id_lower):
logger.info(f"Prefix match: '{identifier}' -> '{name}'") logger.debug(f"Prefix match: '{identifier}' -> '{name}'")
return c return c
# Priority 4: substring match (least precise) # Priority 4: substring match (least precise)
for c in containers: for c in containers:
for name in c.get("names", []): for name in c.get("names", []):
if id_lower in name.lower(): if id_lower in name.lower():
logger.info(f"Substring match: '{identifier}' -> '{name}'") logger.debug(f"Substring match: '{identifier}' -> '{name}'")
return c return c
return None return None
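How the strict flag changes resolution in practice, with hypothetical container records:

    containers = [
        {"id": "a" * 64, "names": ["plex-db"]},
        {"id": "b" * 64, "names": ["mariadb"]},
    ]

    find_container_by_identifier("plex-db", containers)          # exact name match
    find_container_by_identifier("db", containers)               # substring -> "plex-db" (first hit)
    find_container_by_identifier("db", containers, strict=True)  # None: no exact match

The ambiguous "db" case is exactly why mutations resolve with strict=True.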
@@ -195,27 +289,66 @@ def get_available_container_names(containers: list[dict[str, Any]]) -> list[str]
return names return names
async def _resolve_container_id(container_id: str) -> str: async def _resolve_container_id(container_id: str, *, strict: bool = False) -> str:
"""Resolve a container name/identifier to its actual PrefixedID.""" """Resolve a container name/identifier to its actual PrefixedID.
Optimization: if the identifier is a full 64-char hex ID (with optional
:suffix), skip the container list fetch entirely and use it directly.
If it's a short hex prefix (12-63 chars), fetch the list and match by
ID prefix. Name lookups also fetch the list; only full IDs skip it.
Args:
container_id: Container name or ID to resolve
strict: When True, only exact name/ID matches are allowed (no fuzzy).
Use for mutations to prevent targeting the wrong container.
"""
# Full PrefixedID: skip the list fetch entirely
if _DOCKER_ID_PATTERN.match(container_id): if _DOCKER_ID_PATTERN.match(container_id):
return container_id return container_id
logger.info(f"Resolving container identifier '{container_id}'") logger.info(f"Resolving container identifier '{container_id}' (strict={strict})")
list_query = """ list_query = """
query ResolveContainerID { query ResolveContainerID {
docker { containers(skipCache: true) { id names } } docker { containers(skipCache: true) { id names } }
} }
""" """
data = await make_graphql_request(list_query) data = await make_graphql_request(list_query)
containers = _safe_get(data, "docker", "containers", default=[]) containers = safe_get(data, "docker", "containers", default=[])
resolved = find_container_by_identifier(container_id, containers)
# Short hex prefix: match by ID prefix before trying name matching
if _DOCKER_SHORT_ID_PATTERN.match(container_id):
id_lower = container_id.lower()
matches: list[dict[str, Any]] = []
for c in containers:
cid = (c.get("id") or "").lower()
if cid.startswith(id_lower) or cid.split(":")[0].startswith(id_lower):
matches.append(c)
if len(matches) == 1:
actual_id = str(matches[0].get("id", ""))
logger.info(f"Resolved short ID '{container_id}' -> '{actual_id}'")
return actual_id
if len(matches) > 1:
candidate_ids = [str(c.get("id", "")) for c in matches[:5]]
raise ToolError(
f"Short container ID prefix '{container_id}' is ambiguous. "
f"Matches: {', '.join(candidate_ids)}. Use a longer ID or exact name."
)
resolved = find_container_by_identifier(container_id, containers, strict=strict)
if resolved: if resolved:
actual_id = str(resolved.get("id", "")) actual_id = str(resolved.get("id", ""))
logger.info(f"Resolved '{container_id}' -> '{actual_id}'") logger.info(f"Resolved '{container_id}' -> '{actual_id}'")
return actual_id return actual_id
available = get_available_container_names(containers) available = get_available_container_names(containers)
msg = f"Container '{container_id}' not found." if strict:
msg = (
f"Container '{container_id}' not found by exact match. "
f"Mutations require an exact container name or full ID — "
f"fuzzy/substring matching is not allowed for safety."
)
else:
msg = f"Container '{container_id}' not found."
if available: if available:
msg += f" Available: {', '.join(available[:10])}" msg += f" Available: {', '.join(available[:10])}"
raise ToolError(msg) raise ToolError(msg)
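The three resolution tiers, as illustrative calls (hex values are placeholders; a short prefix must be at least 12 hex chars):

    await _resolve_container_id("ab12" * 16)          # full 64-char ID: returned as-is, no list fetch
    await _resolve_container_id("ab12ab12ab12")       # 12+ hex chars: unique ID-prefix match, or ToolError if ambiguous
    await _resolve_container_id("plex", strict=True)  # name lookup: exact match required for mutations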
@@ -232,6 +365,17 @@ def register_docker_tool(mcp: FastMCP) -> None:
*, *,
confirm: bool = False, confirm: bool = False,
tail_lines: int = 100, tail_lines: int = 100,
folder_name: str | None = None,
folder_id: str | None = None,
parent_id: str | None = None,
children_ids: list[str] | None = None,
entry_ids: list[str] | None = None,
source_entry_ids: list[str] | None = None,
destination_folder_id: str | None = None,
position: float | None = None,
new_folder_name: str | None = None,
view_id: str = "default",
view_prefs: dict[str, Any] | None = None,
) -> dict[str, Any]: ) -> dict[str, Any]:
"""Manage Docker containers, networks, and updates. """Manage Docker containers, networks, and updates.
@@ -251,6 +395,17 @@ def register_docker_tool(mcp: FastMCP) -> None:
network_details - Details of a network (requires network_id) network_details - Details of a network (requires network_id)
port_conflicts - Check for port conflicts port_conflicts - Check for port conflicts
check_updates - Check which containers have updates available check_updates - Check which containers have updates available
create_folder - Create Docker organizer folder (requires folder_name)
set_folder_children - Set children of a folder (requires children_ids)
delete_entries - Delete organizer entries (requires entry_ids, confirm=True)
move_to_folder - Move entries to a folder (requires source_entry_ids, destination_folder_id)
move_to_position - Move entries to position in folder (requires source_entry_ids, destination_folder_id, position)
rename_folder - Rename a folder (requires folder_id, new_folder_name)
create_folder_with_items - Create folder with items (requires folder_name)
update_view_prefs - Update organizer view preferences (requires view_prefs)
sync_templates - Sync Docker template paths
reset_template_mappings - Reset template mappings (confirm=True)
refresh_digests - Refresh container image digests
""" """
if action not in ALL_ACTIONS: if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}") raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
@@ -264,56 +419,86 @@ def register_docker_tool(mcp: FastMCP) -> None:
if action == "network_details" and not network_id: if action == "network_details" and not network_id:
raise ToolError("network_id is required for 'network_details' action") raise ToolError("network_id is required for 'network_details' action")
try: if action == "logs" and (tail_lines < 1 or tail_lines > _MAX_TAIL_LINES):
raise ToolError(f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}")
with tool_error_handler("docker", action, logger):
logger.info(f"Executing unraid_docker action={action}") logger.info(f"Executing unraid_docker action={action}")
# --- Read-only queries --- # --- Read-only queries ---
if action == "list": if action == "list":
data = await make_graphql_request(QUERIES["list"]) data = await make_graphql_request(QUERIES["list"])
containers = _safe_get(data, "docker", "containers", default=[]) containers = safe_get(data, "docker", "containers", default=[])
return {"containers": list(containers) if isinstance(containers, list) else []} return {"containers": containers}
if action == "details": if action == "details":
# Resolve name -> ID first (skips list fetch if already an ID)
actual_id = await _resolve_container_id(container_id or "")
data = await make_graphql_request(QUERIES["details"]) data = await make_graphql_request(QUERIES["details"])
containers = _safe_get(data, "docker", "containers", default=[]) containers = safe_get(data, "docker", "containers", default=[])
container = find_container_by_identifier(container_id or "", containers) # Match by resolved ID (exact match, no second list fetch needed)
if container: for c in containers:
return container if c.get("id") == actual_id:
available = get_available_container_names(containers) return c
msg = f"Container '{container_id}' not found." raise ToolError(f"Container '{container_id}' not found in details response.")
if available:
msg += f" Available: {', '.join(available[:10])}"
raise ToolError(msg)
if action == "logs": if action == "logs":
actual_id = await _resolve_container_id(container_id or "") actual_id = await _resolve_container_id(container_id or "")
data = await make_graphql_request( data = await make_graphql_request(
QUERIES["logs"], {"id": actual_id, "tail": tail_lines} QUERIES["logs"], {"id": actual_id, "tail": tail_lines}
) )
return {"logs": _safe_get(data, "docker", "logs")} logs_data = safe_get(data, "docker", "logs")
if logs_data is None:
raise ToolError(f"No logs returned for container '{container_id}'")
# Extract log lines into a plain text string for backward compatibility.
# The GraphQL response is { containerId, lines: [{ timestamp, message }], cursor }
# but callers expect result["logs"] to be a string of log text.
lines = logs_data.get("lines", []) if isinstance(logs_data, dict) else []
log_text = "\n".join(
f"{line.get('timestamp', '')} {line.get('message', '')}".strip()
for line in lines
)
return {
"logs": log_text,
"cursor": logs_data.get("cursor") if isinstance(logs_data, dict) else None,
}
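Shape of that flattening, assuming a typical logs payload:

    logs_data = {
        "containerId": "abc123",
        "lines": [
            {"timestamp": "2026-03-14T00:00:01Z", "message": "started"},
            {"timestamp": "2026-03-14T00:00:02Z", "message": "ready"},
        ],
        "cursor": "opaque-cursor",
    }
    # Result:
    # {"logs": "2026-03-14T00:00:01Z started\n2026-03-14T00:00:02Z ready",
    #  "cursor": "opaque-cursor"}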
if action == "networks": if action == "networks":
data = await make_graphql_request(QUERIES["networks"]) data = await make_graphql_request(QUERIES["networks"])
networks = data.get("dockerNetworks", []) networks = safe_get(data, "docker", "networks", default=[])
return {"networks": list(networks) if isinstance(networks, list) else []} return {"networks": networks}
if action == "network_details": if action == "network_details":
data = await make_graphql_request(QUERIES["network_details"], {"id": network_id}) data = await make_graphql_request(QUERIES["network_details"])
return dict(data.get("dockerNetwork") or {}) all_networks = safe_get(data, "docker", "networks", default=[])
# Filter client-side by network_id since the API returns all networks
for net in all_networks:
if net.get("id") == network_id or net.get("name") == network_id:
return dict(net)
raise ToolError(f"Network '{network_id}' not found.")
if action == "port_conflicts": if action == "port_conflicts":
data = await make_graphql_request(QUERIES["port_conflicts"]) data = await make_graphql_request(QUERIES["port_conflicts"])
conflicts = _safe_get(data, "docker", "portConflicts", default=[]) conflicts_data = safe_get(data, "docker", "portConflicts", default={})
return {"port_conflicts": list(conflicts) if isinstance(conflicts, list) else []} # The GraphQL response is { containerPorts: [...], lanPorts: [...] }
# but callers expect result["port_conflicts"] to be a flat list.
# Merge both conflict lists for backward compatibility.
if isinstance(conflicts_data, dict):
conflicts: list[Any] = []
conflicts.extend(conflicts_data.get("containerPorts", []))
conflicts.extend(conflicts_data.get("lanPorts", []))
else:
conflicts = list(conflicts_data) if conflicts_data else []
return {"port_conflicts": conflicts}
if action == "check_updates": if action == "check_updates":
data = await make_graphql_request(QUERIES["check_updates"]) data = await make_graphql_request(QUERIES["check_updates"])
statuses = _safe_get(data, "docker", "containerUpdateStatuses", default=[]) statuses = safe_get(data, "docker", "containerUpdateStatuses", default=[])
return {"update_statuses": list(statuses) if isinstance(statuses, list) else []} return {"update_statuses": statuses}
# --- Mutations --- # --- Mutations (strict matching: no fuzzy/substring) ---
if action == "restart": if action == "restart":
actual_id = await _resolve_container_id(container_id or "") actual_id = await _resolve_container_id(container_id or "", strict=True)
# Stop (idempotent: treat "already stopped" as success) # Stop (idempotent: treat "already stopped" as success)
stop_data = await make_graphql_request( stop_data = await make_graphql_request(
MUTATIONS["stop"], MUTATIONS["stop"],
@@ -330,7 +515,7 @@ def register_docker_tool(mcp: FastMCP) -> None:
if start_data.get("idempotent_success"): if start_data.get("idempotent_success"):
result = {} result = {}
else: else:
result = _safe_get(start_data, "docker", "start", default={}) result = safe_get(start_data, "docker", "start", default={})
response: dict[str, Any] = { response: dict[str, Any] = {
"success": True, "success": True,
"action": "restart", "action": "restart",
@@ -342,12 +527,156 @@ def register_docker_tool(mcp: FastMCP) -> None:
if action == "update_all": if action == "update_all":
data = await make_graphql_request(MUTATIONS["update_all"]) data = await make_graphql_request(MUTATIONS["update_all"])
results = _safe_get(data, "docker", "updateAllContainers", default=[]) results = safe_get(data, "docker", "updateAllContainers", default=[])
return {"success": True, "action": "update_all", "containers": results} return {"success": True, "action": "update_all", "containers": results}
# --- Docker organizer mutations ---
if action == "create_folder":
if not folder_name:
raise ToolError("folder_name is required for 'create_folder' action")
_vars: dict[str, Any] = {"name": folder_name}
if parent_id is not None:
_vars["parentId"] = parent_id
if children_ids is not None:
_vars["childrenIds"] = children_ids
data = await make_graphql_request(MUTATIONS["create_folder"], _vars)
organizer = data.get("createDockerFolder")
if organizer is None:
raise ToolError("create_folder failed: server returned no data")
return {"success": True, "action": "create_folder", "organizer": organizer}
if action == "set_folder_children":
if children_ids is None:
raise ToolError("children_ids is required for 'set_folder_children' action")
_vars = {"childrenIds": children_ids}
if folder_id is not None:
_vars["folderId"] = folder_id
data = await make_graphql_request(MUTATIONS["set_folder_children"], _vars)
organizer = data.get("setDockerFolderChildren")
if organizer is None:
raise ToolError("set_folder_children failed: server returned no data")
return {"success": True, "action": "set_folder_children", "organizer": organizer}
if action == "delete_entries":
if not entry_ids:
raise ToolError("entry_ids is required for 'delete_entries' action")
data = await make_graphql_request(
MUTATIONS["delete_entries"], {"entryIds": entry_ids}
)
organizer = data.get("deleteDockerEntries")
if organizer is None:
raise ToolError("delete_entries failed: server returned no data")
return {"success": True, "action": "delete_entries", "organizer": organizer}
if action == "move_to_folder":
if not source_entry_ids:
raise ToolError("source_entry_ids is required for 'move_to_folder' action")
if not destination_folder_id:
raise ToolError("destination_folder_id is required for 'move_to_folder' action")
data = await make_graphql_request(
MUTATIONS["move_to_folder"],
{
"sourceEntryIds": source_entry_ids,
"destinationFolderId": destination_folder_id,
},
)
organizer = data.get("moveDockerEntriesToFolder")
if organizer is None:
raise ToolError("move_to_folder failed: server returned no data")
return {"success": True, "action": "move_to_folder", "organizer": organizer}
if action == "move_to_position":
if not source_entry_ids:
raise ToolError("source_entry_ids is required for 'move_to_position' action")
if not destination_folder_id:
raise ToolError(
"destination_folder_id is required for 'move_to_position' action"
)
if position is None:
raise ToolError("position is required for 'move_to_position' action")
data = await make_graphql_request(
MUTATIONS["move_to_position"],
{
"sourceEntryIds": source_entry_ids,
"destinationFolderId": destination_folder_id,
"position": position,
},
)
organizer = data.get("moveDockerItemsToPosition")
if organizer is None:
raise ToolError("move_to_position failed: server returned no data")
return {"success": True, "action": "move_to_position", "organizer": organizer}
if action == "rename_folder":
if not folder_id:
raise ToolError("folder_id is required for 'rename_folder' action")
if not new_folder_name:
raise ToolError("new_folder_name is required for 'rename_folder' action")
data = await make_graphql_request(
MUTATIONS["rename_folder"], {"folderId": folder_id, "newName": new_folder_name}
)
organizer = data.get("renameDockerFolder")
if organizer is None:
raise ToolError("rename_folder failed: server returned no data")
return {"success": True, "action": "rename_folder", "organizer": organizer}
if action == "create_folder_with_items":
if not folder_name:
raise ToolError("folder_name is required for 'create_folder_with_items' action")
_vars = {"name": folder_name}
if parent_id is not None:
_vars["parentId"] = parent_id
if source_entry_ids is not None:
_vars["sourceEntryIds"] = source_entry_ids
if position is not None:
_vars["position"] = position
data = await make_graphql_request(MUTATIONS["create_folder_with_items"], _vars)
organizer = data.get("createDockerFolderWithItems")
if organizer is None:
raise ToolError("create_folder_with_items failed: server returned no data")
return {
"success": True,
"action": "create_folder_with_items",
"organizer": organizer,
}
if action == "update_view_prefs":
if view_prefs is None:
raise ToolError("view_prefs is required for 'update_view_prefs' action")
data = await make_graphql_request(
MUTATIONS["update_view_prefs"], {"viewId": view_id, "prefs": view_prefs}
)
organizer = data.get("updateDockerViewPreferences")
if organizer is None:
raise ToolError("update_view_prefs failed: server returned no data")
return {"success": True, "action": "update_view_prefs", "organizer": organizer}
if action == "sync_templates":
data = await make_graphql_request(MUTATIONS["sync_templates"])
result = data.get("syncDockerTemplatePaths")
if result is None:
raise ToolError("sync_templates failed: server returned no data")
return {"success": True, "action": "sync_templates", "result": result}
if action == "reset_template_mappings":
data = await make_graphql_request(MUTATIONS["reset_template_mappings"])
return {
"success": True,
"action": "reset_template_mappings",
"result": data.get("resetDockerTemplateMappings"),
}
if action == "refresh_digests":
data = await make_graphql_request(MUTATIONS["refresh_digests"])
return {
"success": True,
"action": "refresh_digests",
"result": data.get("refreshDockerDigests"),
}
# Single-container mutations # Single-container mutations
if action in MUTATIONS: if action in MUTATIONS:
actual_id = await _resolve_container_id(container_id or "") actual_id = await _resolve_container_id(container_id or "", strict=True)
op_context: dict[str, str] | None = ( op_context: dict[str, str] | None = (
{"operation": action} if action in ("start", "stop") else None {"operation": action} if action in ("start", "stop") else None
) )
@@ -382,10 +711,4 @@ def register_docker_tool(mcp: FastMCP) -> None:
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_docker action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute docker/{action}: {e!s}") from e
logger.info("Docker tool registered successfully") logger.info("Docker tool registered successfully")

View File

@@ -6,7 +6,7 @@ connection testing, and subscription diagnostics.
import datetime import datetime
import time import time
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
@@ -19,11 +19,23 @@ from ..config.settings import (
VERSION, VERSION,
) )
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import safe_display_url
from ..subscriptions.utils import _analyze_subscription_status
ALL_ACTIONS = {"check", "test_connection", "diagnose"}
HEALTH_ACTIONS = Literal["check", "test_connection", "diagnose"] HEALTH_ACTIONS = Literal["check", "test_connection", "diagnose"]
if set(get_args(HEALTH_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(HEALTH_ACTIONS))
_extra = set(get_args(HEALTH_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
"HEALTH_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing in HEALTH_ACTIONS: {_missing}; extra in HEALTH_ACTIONS: {_extra}"
)
# Severity ordering: only upgrade, never downgrade # Severity ordering: only upgrade, never downgrade
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3} _SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}
@@ -53,12 +65,10 @@ def register_health_tool(mcp: FastMCP) -> None:
test_connection - Quick connectivity test (just checks { online }) test_connection - Quick connectivity test (just checks { online })
diagnose - Subscription system diagnostics diagnose - Subscription system diagnostics
""" """
if action not in ("check", "test_connection", "diagnose"): if action not in ALL_ACTIONS:
raise ToolError( raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
f"Invalid action '{action}'. Must be one of: check, test_connection, diagnose"
)
try: with tool_error_handler("health", action, logger):
logger.info(f"Executing unraid_health action={action}") logger.info(f"Executing unraid_health action={action}")
if action == "test_connection": if action == "test_connection":
@@ -79,12 +89,6 @@ def register_health_tool(mcp: FastMCP) -> None:
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_health action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute health/{action}: {e!s}") from e
logger.info("Health tool registered successfully") logger.info("Health tool registered successfully")
@@ -103,7 +107,7 @@ async def _comprehensive_check() -> dict[str, Any]:
query ComprehensiveHealthCheck { query ComprehensiveHealthCheck {
info { info {
machineId time machineId time
versions { unraid } versions { core { unraid } }
os { uptime } os { uptime }
} }
array { state } array { state }
@@ -131,21 +135,21 @@ async def _comprehensive_check() -> dict[str, Any]:
return health_info return health_info
# System info # System info
info = data.get("info", {}) info = data.get("info") or {}
if info: if info:
health_info["unraid_system"] = { health_info["unraid_system"] = {
"status": "connected", "status": "connected",
"url": UNRAID_API_URL, "url": safe_display_url(UNRAID_API_URL),
"machine_id": info.get("machineId"), "machine_id": info.get("machineId"),
"version": info.get("versions", {}).get("unraid"), "version": ((info.get("versions") or {}).get("core") or {}).get("unraid"),
"uptime": info.get("os", {}).get("uptime"), "uptime": (info.get("os") or {}).get("uptime"),
} }
else: else:
_escalate("degraded") _escalate("degraded")
issues.append("Unable to retrieve system info") issues.append("Unable to retrieve system info")
# Array # Array
array_info = data.get("array", {}) array_info = data.get("array") or {}
if array_info: if array_info:
state = array_info.get("state", "unknown") state = array_info.get("state", "unknown")
health_info["array_status"] = { health_info["array_status"] = {
@@ -160,9 +164,9 @@ async def _comprehensive_check() -> dict[str, Any]:
issues.append("Unable to retrieve array status") issues.append("Unable to retrieve array status")
# Notifications # Notifications
notifications = data.get("notifications", {}) notifications = data.get("notifications") or {}
if notifications and notifications.get("overview"): if notifications and notifications.get("overview"):
unread = notifications["overview"].get("unread", {}) unread = notifications["overview"].get("unread") or {}
alerts = unread.get("alert", 0) alerts = unread.get("alert", 0)
health_info["notifications"] = { health_info["notifications"] = {
"unread_total": unread.get("total", 0), "unread_total": unread.get("total", 0),
@@ -174,7 +178,7 @@ async def _comprehensive_check() -> dict[str, Any]:
issues.append(f"{alerts} unread alert(s)") issues.append(f"{alerts} unread alert(s)")
# Docker # Docker
docker = data.get("docker", {}) docker = data.get("docker") or {}
if docker and docker.get("containers"): if docker and docker.get("containers"):
containers = docker["containers"] containers = docker["containers"]
health_info["docker_services"] = { health_info["docker_services"] = {
@@ -206,7 +210,7 @@ async def _comprehensive_check() -> dict[str, Any]:
except Exception as e: except Exception as e:
# Intentionally broad: health checks must always return a result, # Intentionally broad: health checks must always return a result,
# even on unexpected failures, so callers never get an unhandled exception. # even on unexpected failures, so callers never get an unhandled exception.
logger.error(f"Health check failed: {e}") logger.error(f"Health check failed: {e}", exc_info=True)
return { return {
"status": "unhealthy", "status": "unhealthy",
"timestamp": datetime.datetime.now(datetime.UTC).isoformat(), "timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
@@ -223,13 +227,10 @@ async def _diagnose_subscriptions() -> dict[str, Any]:
await ensure_subscriptions_started() await ensure_subscriptions_started()
status = subscription_manager.get_subscription_status() status = await subscription_manager.get_subscription_status()
# This list is intentionally placed into the summary dict below and then error_count, connection_issues = _analyze_subscription_status(status)
# appended to in the loop — the mutable alias ensures both references
# reflect the same data without a second pass.
connection_issues: list[dict[str, Any]] = []
diagnostic_info: dict[str, Any] = { return {
"timestamp": datetime.datetime.now(datetime.UTC).isoformat(), "timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
"environment": { "environment": {
"auto_start_enabled": subscription_manager.auto_start_enabled, "auto_start_enabled": subscription_manager.auto_start_enabled,
@@ -241,31 +242,12 @@ async def _diagnose_subscriptions() -> dict[str, Any]:
"total_configured": len(subscription_manager.subscription_configs), "total_configured": len(subscription_manager.subscription_configs),
"active_count": len(subscription_manager.active_subscriptions), "active_count": len(subscription_manager.active_subscriptions),
"with_data": len(subscription_manager.resource_data), "with_data": len(subscription_manager.resource_data),
"in_error_state": 0, "in_error_state": error_count,
"connection_issues": connection_issues, "connection_issues": connection_issues,
}, },
} }
for sub_name, sub_status in status.items(): except ImportError as e:
runtime = sub_status.get("runtime", {}) raise ToolError("Subscription modules not available") from e
conn_state = runtime.get("connection_state", "unknown")
if conn_state in ("error", "auth_failed", "timeout", "max_retries_exceeded"):
diagnostic_info["summary"]["in_error_state"] += 1
if runtime.get("last_error"):
connection_issues.append(
{
"subscription": sub_name,
"state": conn_state,
"error": runtime["last_error"],
}
)
return diagnostic_info
except ImportError:
return {
"error": "Subscription modules not available",
"timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
}
except Exception as e: except Exception as e:
raise ToolError(f"Failed to generate diagnostics: {e!s}") from e raise ToolError(f"Failed to generate diagnostics: {e!s}") from e

View File

@@ -4,13 +4,14 @@ Provides the `unraid_info` tool with 19 read-only actions for retrieving
system information, array status, network config, and server metadata. system information, array status, network config, and server metadata.
""" """
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
from ..config.logging import logger from ..config.logging import logger
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import format_kb
# Pre-built queries keyed by action name # Pre-built queries keyed by action name
@@ -18,15 +19,14 @@ QUERIES: dict[str, str] = {
"overview": """ "overview": """
query GetSystemInfo { query GetSystemInfo {
info { info {
os { platform distro release codename kernel arch hostname codepage logofile serial build uptime } os { platform distro release codename kernel arch hostname logofile serial build uptime }
cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache flags } cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache }
memory { memory {
layout { bank type clockSpeed formFactor manufacturer partNum serialNum } layout { bank type clockSpeed formFactor manufacturer partNum serialNum }
} }
baseboard { manufacturer model version serial assetTag } baseboard { manufacturer model version serial assetTag }
system { manufacturer model version serial uuid sku } system { manufacturer model version serial uuid sku }
versions { kernel openssl systemOpenssl systemOpensslLib node v8 npm yarn pm2 gulp grunt git tsc mysql redis mongodb apache nginx php docker postfix postgresql perl python gcc unraid } versions { core { unraid api kernel } packages { openssl node npm pm2 git nginx php docker } }
apps { installed started }
machineId machineId
time time
} }
@@ -67,7 +67,7 @@ QUERIES: dict[str, str] = {
""", """,
"connect": """ "connect": """
query GetConnectSettings { query GetConnectSettings {
connect { status sandbox flashGuid } connect { id dynamicRemoteAccess { enabledType runningType error } }
} }
""", """,
"variables": """ "variables": """
@@ -81,18 +81,17 @@ QUERIES: dict[str, str] = {
shareAvahiEnabled safeMode startMode configValid configError joinStatus shareAvahiEnabled safeMode startMode configValid configError joinStatus
deviceCount flashGuid flashProduct flashVendor mdState mdVersion deviceCount flashGuid flashProduct flashVendor mdState mdVersion
shareCount shareSmbCount shareNfsCount shareAfpCount shareMoverActive shareCount shareSmbCount shareNfsCount shareAfpCount shareMoverActive
csrfToken
} }
} }
""", """,
"metrics": """ "metrics": """
query GetMetrics { query GetMetrics {
metrics { cpu { used } memory { used total } } metrics { cpu { percentTotal } memory { used total } }
} }
""", """,
"services": """ "services": """
query GetServices { query GetServices {
services { name state } services { name online version }
} }
""", """,
"display": """ "display": """
@@ -122,7 +121,7 @@ QUERIES: dict[str, str] = {
query GetServer { query GetServer {
info { info {
os { hostname uptime } os { hostname uptime }
versions { unraid } versions { core { unraid } }
machineId time machineId time
} }
array { state } array { state }
@@ -131,31 +130,49 @@ QUERIES: dict[str, str] = {
""", """,
"servers": """ "servers": """
query GetServers { query GetServers {
servers { id name status description ip port } servers { id name status comment wanip lanip localurl remoteurl }
} }
""", """,
"flash": """ "flash": """
query GetFlash { query GetFlash {
flash { id guid product vendor size } flash { id guid product vendor }
} }
""", """,
"ups_devices": """ "ups_devices": """
query GetUpsDevices { query GetUpsDevices {
upsDevices { id model status runtime charge load } upsDevices { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } }
} }
""", """,
"ups_device": """ "ups_device": """
query GetUpsDevice($id: PrefixedID!) { query GetUpsDevice($id: String!) {
upsDeviceById(id: $id) { id model status runtime charge load voltage frequency temperature } upsDeviceById(id: $id) { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage nominalPower currentPower } }
} }
""", """,
"ups_config": """ "ups_config": """
query GetUpsConfig { query GetUpsConfig {
upsConfiguration { enabled mode cable driver port } upsConfiguration { service upsCable upsType device batteryLevel minutes timeout killUps upsName }
} }
""", """,
} }
MUTATIONS: dict[str, str] = {
"update_server": """
mutation UpdateServerIdentity($name: String!, $comment: String, $sysModel: String) {
updateServerIdentity(name: $name, comment: $comment, sysModel: $sysModel) {
id name comment status
}
}
""",
"update_ssh": """
mutation UpdateSshSettings($input: UpdateSshInput!) {
updateSshSettings(input: $input) { id useSsh portssh }
}
""",
}
DESTRUCTIVE_ACTIONS = {"update_ssh"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
INFO_ACTIONS = Literal[ INFO_ACTIONS = Literal[
"overview", "overview",
"array", "array",
@@ -176,11 +193,17 @@ INFO_ACTIONS = Literal[
"ups_devices", "ups_devices",
"ups_device", "ups_device",
"ups_config", "ups_config",
"update_server",
"update_ssh",
] ]
assert set(QUERIES.keys()) == set(INFO_ACTIONS.__args__), ( if set(get_args(INFO_ACTIONS)) != ALL_ACTIONS:
"QUERIES keys and INFO_ACTIONS are out of sync" _missing = ALL_ACTIONS - set(get_args(INFO_ACTIONS))
) _extra = set(get_args(INFO_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"QUERIES keys and INFO_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]: def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
@@ -189,17 +212,17 @@ def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
if raw_info.get("os"): if raw_info.get("os"):
os_info = raw_info["os"] os_info = raw_info["os"]
summary["os"] = ( summary["os"] = (
f"{os_info.get('distro', '')} {os_info.get('release', '')} " f"{os_info.get('distro') or 'unknown'} {os_info.get('release') or 'unknown'} "
f"({os_info.get('platform', '')}, {os_info.get('arch', '')})" f"({os_info.get('platform') or 'unknown'}, {os_info.get('arch') or 'unknown'})"
) )
summary["hostname"] = os_info.get("hostname") summary["hostname"] = os_info.get("hostname") or "unknown"
summary["uptime"] = os_info.get("uptime") summary["uptime"] = os_info.get("uptime")
if raw_info.get("cpu"): if raw_info.get("cpu"):
cpu = raw_info["cpu"] cpu = raw_info["cpu"]
summary["cpu"] = ( summary["cpu"] = (
f"{cpu.get('manufacturer', '')} {cpu.get('brand', '')} " f"{cpu.get('manufacturer') or 'unknown'} {cpu.get('brand') or 'unknown'} "
f"({cpu.get('cores', '?')} cores, {cpu.get('threads', '?')} threads)" f"({cpu.get('cores') or '?'} cores, {cpu.get('threads') or '?'} threads)"
) )
if raw_info.get("memory") and raw_info["memory"].get("layout"): if raw_info.get("memory") and raw_info["memory"].get("layout"):
@@ -207,10 +230,10 @@ def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
summary["memory_layout_details"] = [] summary["memory_layout_details"] = []
for stick in mem_layout: for stick in mem_layout:
summary["memory_layout_details"].append( summary["memory_layout_details"].append(
f"Bank {stick.get('bank', '?')}: Type {stick.get('type', '?')}, " f"Bank {stick.get('bank') or '?'}: Type {stick.get('type') or '?'}, "
f"Speed {stick.get('clockSpeed', '?')}MHz, " f"Speed {stick.get('clockSpeed') or '?'}MHz, "
f"Manufacturer: {stick.get('manufacturer', '?')}, " f"Manufacturer: {stick.get('manufacturer') or '?'}, "
f"Part: {stick.get('partNum', '?')}" f"Part: {stick.get('partNum') or '?'}"
) )
summary["memory_summary"] = ( summary["memory_summary"] = (
"Stick layout details retrieved. Overall total/used/free memory stats " "Stick layout details retrieved. Overall total/used/free memory stats "
@@ -255,31 +278,14 @@ def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
return counts return counts
def _format_kb(k: Any) -> str:
"""Format kilobyte values into human-readable sizes."""
if k is None:
return "N/A"
try:
k = int(k)
except (ValueError, TypeError):
return "N/A"
if k >= 1024 * 1024 * 1024:
return f"{k / (1024 * 1024 * 1024):.2f} TB"
if k >= 1024 * 1024:
return f"{k / (1024 * 1024):.2f} GB"
if k >= 1024:
return f"{k / 1024:.2f} MB"
return f"{k} KB"
def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]: def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]:
"""Process raw array data into summary + details.""" """Process raw array data into summary + details."""
summary: dict[str, Any] = {"state": raw.get("state")} summary: dict[str, Any] = {"state": raw.get("state")}
if raw.get("capacity") and raw["capacity"].get("kilobytes"): if raw.get("capacity") and raw["capacity"].get("kilobytes"):
kb = raw["capacity"]["kilobytes"] kb = raw["capacity"]["kilobytes"]
summary["capacity_total"] = _format_kb(kb.get("total")) summary["capacity_total"] = format_kb(kb.get("total"))
summary["capacity_used"] = _format_kb(kb.get("used")) summary["capacity_used"] = format_kb(kb.get("used"))
summary["capacity_free"] = _format_kb(kb.get("free")) summary["capacity_free"] = format_kb(kb.get("free"))
summary["num_parity_disks"] = len(raw.get("parities", [])) summary["num_parity_disks"] = len(raw.get("parities", []))
summary["num_data_disks"] = len(raw.get("disks", [])) summary["num_data_disks"] = len(raw.get("disks", []))
@@ -320,7 +326,13 @@ def register_info_tool(mcp: FastMCP) -> None:
@mcp.tool() @mcp.tool()
async def unraid_info( async def unraid_info(
action: INFO_ACTIONS, action: INFO_ACTIONS,
confirm: bool = False,
device_id: str | None = None, device_id: str | None = None,
server_name: str | None = None,
server_comment: str | None = None,
sys_model: str | None = None,
ssh_enabled: bool | None = None,
ssh_port: int | None = None,
) -> dict[str, Any]: ) -> dict[str, Any]:
"""Query Unraid system information. """Query Unraid system information.
@@ -344,13 +356,52 @@ def register_info_tool(mcp: FastMCP) -> None:
ups_devices - List UPS devices ups_devices - List UPS devices
ups_device - Single UPS device (requires device_id) ups_device - Single UPS device (requires device_id)
ups_config - UPS configuration ups_config - UPS configuration
update_server - Update server name, comment, and model (requires server_name)
update_ssh - Enable/disable SSH and set port (requires ssh_enabled, ssh_port, confirm=True)
""" """
if action not in QUERIES: if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {list(QUERIES.keys())}") raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
if action == "ups_device" and not device_id: if action == "ups_device" and not device_id:
raise ToolError("device_id is required for ups_device action") raise ToolError("device_id is required for ups_device action")
# Mutation handlers — must return before query = QUERIES[action]
if action == "update_server":
if server_name is None:
raise ToolError("server_name is required for 'update_server' action")
variables_mut: dict[str, Any] = {"name": server_name}
if server_comment is not None:
variables_mut["comment"] = server_comment
if sys_model is not None:
variables_mut["sysModel"] = sys_model
with tool_error_handler("info", action, logger):
logger.info("Executing unraid_info action=update_server")
data = await make_graphql_request(MUTATIONS["update_server"], variables_mut)
return {
"success": True,
"action": "update_server",
"data": data.get("updateServerIdentity"),
}
if action == "update_ssh":
if ssh_enabled is None:
raise ToolError("ssh_enabled is required for 'update_ssh' action")
if ssh_port is None:
raise ToolError("ssh_port is required for 'update_ssh' action")
with tool_error_handler("info", action, logger):
logger.info("Executing unraid_info action=update_ssh")
data = await make_graphql_request(
MUTATIONS["update_ssh"], {"input": {"enabled": ssh_enabled, "port": ssh_port}}
)
return {
"success": True,
"action": "update_ssh",
"data": data.get("updateSshSettings"),
}
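Example invocations for the two mutations (values hypothetical); only update_ssh is confirm-gated:

    await unraid_info(action="update_server", server_name="tower", server_comment="lab box")

    # Destructive: refused unless confirm=True.
    await unraid_info(action="update_ssh", ssh_enabled=True, ssh_port=2222, confirm=True)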
query = QUERIES[action] query = QUERIES[action]
variables: dict[str, Any] | None = None variables: dict[str, Any] | None = None
if action == "ups_device": if action == "ups_device":
@@ -377,7 +428,7 @@ def register_info_tool(mcp: FastMCP) -> None:
"ups_devices": ("upsDevices", "ups_devices"), "ups_devices": ("upsDevices", "ups_devices"),
} }
try: with tool_error_handler("info", action, logger):
logger.info(f"Executing unraid_info action={action}") logger.info(f"Executing unraid_info action={action}")
data = await make_graphql_request(query, variables) data = await make_graphql_request(query, variables)
@@ -426,14 +477,9 @@ def register_info_tool(mcp: FastMCP) -> None:
if action in list_actions: if action in list_actions:
response_key, output_key = list_actions[action] response_key, output_key = list_actions[action]
items = data.get(response_key) or [] items = data.get(response_key) or []
return {output_key: list(items) if isinstance(items, list) else []} normalized_items = list(items) if isinstance(items, list) else []
return {output_key: normalized_items}
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_info action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute info/{action}: {e!s}") from e
logger.info("Info tool registered successfully") logger.info("Info tool registered successfully")

View File

@@ -4,24 +4,24 @@ Provides the `unraid_keys` tool with 5 actions for listing, viewing,
creating, updating, and deleting API keys. creating, updating, and deleting API keys.
""" """
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
from ..config.logging import logger from ..config.logging import logger
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = { QUERIES: dict[str, str] = {
"list": """ "list": """
query ListApiKeys { query ListApiKeys {
apiKeys { id name roles permissions createdAt lastUsed } apiKeys { id name roles permissions { resource actions } createdAt }
} }
""", """,
"get": """ "get": """
query GetApiKey($id: PrefixedID!) { query GetApiKey($id: PrefixedID!) {
apiKey(id: $id) { id name roles permissions createdAt lastUsed } apiKey(id: $id) { id name roles permissions { resource actions } createdAt }
} }
""", """,
} }
@@ -29,22 +29,23 @@ QUERIES: dict[str, str] = {
MUTATIONS: dict[str, str] = { MUTATIONS: dict[str, str] = {
"create": """ "create": """
mutation CreateApiKey($input: CreateApiKeyInput!) { mutation CreateApiKey($input: CreateApiKeyInput!) {
createApiKey(input: $input) { id name key roles } apiKey { create(input: $input) { id name key roles } }
} }
""", """,
"update": """ "update": """
mutation UpdateApiKey($input: UpdateApiKeyInput!) { mutation UpdateApiKey($input: UpdateApiKeyInput!) {
updateApiKey(input: $input) { id name roles } apiKey { update(input: $input) { id name roles } }
} }
""", """,
"delete": """ "delete": """
mutation DeleteApiKeys($input: DeleteApiKeysInput!) { mutation DeleteApiKey($input: DeleteApiKeyInput!) {
deleteApiKeys(input: $input) apiKey { delete(input: $input) }
} }
""", """,
} }
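With the mutations nested under the apiKey namespace, handlers now read one level deeper. Expected response shape for create (field values hypothetical), matching the extraction used in the handlers below:

    data = {
        "apiKey": {
            "create": {"id": "key-1", "name": "ci", "key": "secret", "roles": ["ADMIN"]}
        }
    }
    created_key = (data.get("apiKey") or {}).get("create")  # inner dict, or None on failure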
DESTRUCTIVE_ACTIONS = {"delete"} DESTRUCTIVE_ACTIONS = {"delete"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
KEY_ACTIONS = Literal[ KEY_ACTIONS = Literal[
"list", "list",
@@ -54,6 +55,14 @@ KEY_ACTIONS = Literal[
"delete", "delete",
] ]
if set(get_args(KEY_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(KEY_ACTIONS))
_extra = set(get_args(KEY_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"KEY_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_keys_tool(mcp: FastMCP) -> None: def register_keys_tool(mcp: FastMCP) -> None:
"""Register the unraid_keys tool with the FastMCP instance.""" """Register the unraid_keys tool with the FastMCP instance."""
@@ -76,14 +85,13 @@ def register_keys_tool(mcp: FastMCP) -> None:
update - Update an API key (requires key_id; optional name, roles) update - Update an API key (requires key_id; optional name, roles)
delete - Delete API keys (requires key_id, confirm=True) delete - Delete API keys (requires key_id, confirm=True)
""" """
all_actions = set(QUERIES) | set(MUTATIONS) if action not in ALL_ACTIONS:
if action not in all_actions: raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
if action in DESTRUCTIVE_ACTIONS and not confirm: if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.") raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
try: with tool_error_handler("keys", action, logger):
logger.info(f"Executing unraid_keys action={action}") logger.info(f"Executing unraid_keys action={action}")
if action == "list": if action == "list":
@@ -106,10 +114,10 @@ def register_keys_tool(mcp: FastMCP) -> None:
if permissions is not None: if permissions is not None:
input_data["permissions"] = permissions input_data["permissions"] = permissions
data = await make_graphql_request(MUTATIONS["create"], {"input": input_data}) data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
return { created_key = (data.get("apiKey") or {}).get("create")
"success": True, if not created_key:
"key": data.get("createApiKey", {}), raise ToolError("Failed to create API key: no data returned from server")
} return {"success": True, "key": created_key}
if action == "update": if action == "update":
if not key_id: if not key_id:
@@ -120,16 +128,16 @@ def register_keys_tool(mcp: FastMCP) -> None:
if roles is not None: if roles is not None:
input_data["roles"] = roles input_data["roles"] = roles
data = await make_graphql_request(MUTATIONS["update"], {"input": input_data}) data = await make_graphql_request(MUTATIONS["update"], {"input": input_data})
return { updated_key = (data.get("apiKey") or {}).get("update")
"success": True, if not updated_key:
"key": data.get("updateApiKey", {}), raise ToolError("Failed to update API key: no data returned from server")
} return {"success": True, "key": updated_key}
if action == "delete": if action == "delete":
if not key_id: if not key_id:
raise ToolError("key_id is required for 'delete' action") raise ToolError("key_id is required for 'delete' action")
data = await make_graphql_request(MUTATIONS["delete"], {"input": {"ids": [key_id]}}) data = await make_graphql_request(MUTATIONS["delete"], {"input": {"ids": [key_id]}})
result = data.get("deleteApiKeys") result = (data.get("apiKey") or {}).get("delete")
if not result: if not result:
raise ToolError( raise ToolError(
f"Failed to delete API key '{key_id}': no confirmation from server" f"Failed to delete API key '{key_id}': no confirmation from server"
@@ -141,10 +149,4 @@ def register_keys_tool(mcp: FastMCP) -> None:
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_keys action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute keys/{action}: {e!s}") from e
logger.info("Keys tool registered successfully") logger.info("Keys tool registered successfully")

View File

@@ -4,13 +4,13 @@ Provides the `unraid_notifications` tool with 9 actions for viewing,
creating, archiving, and deleting system notifications. creating, archiving, and deleting system notifications.
""" """
from typing import Any, Literal from typing import Any, Literal, get_args
from fastmcp import FastMCP from fastmcp import FastMCP
from ..config.logging import logger from ..config.logging import logger
from ..core.client import make_graphql_request from ..core.client import make_graphql_request
from ..core.exceptions import ToolError from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = { QUERIES: dict[str, str] = {
@@ -44,38 +44,86 @@ QUERIES: dict[str, str] = {
MUTATIONS: dict[str, str] = { MUTATIONS: dict[str, str] = {
"create": """ "create": """
mutation CreateNotification($input: CreateNotificationInput!) { mutation CreateNotification($input: NotificationData!) {
notifications { createNotification(input: $input) { id title importance } } createNotification(input: $input) { id title importance }
} }
""", """,
"archive": """ "archive": """
mutation ArchiveNotification($id: PrefixedID!) { mutation ArchiveNotification($id: PrefixedID!) {
notifications { archiveNotification(id: $id) } archiveNotification(id: $id) { id title importance }
} }
""", """,
"unread": """ "unread": """
mutation UnreadNotification($id: PrefixedID!) { mutation UnreadNotification($id: PrefixedID!) {
notifications { unreadNotification(id: $id) } unreadNotification(id: $id) { id title importance }
} }
""", """,
"delete": """ "delete": """
mutation DeleteNotification($id: PrefixedID!, $type: NotificationType!) { mutation DeleteNotification($id: PrefixedID!, $type: NotificationType!) {
notifications { deleteNotification(id: $id, type: $type) } deleteNotification(id: $id, type: $type) {
unread { info warning alert total }
archive { info warning alert total }
}
} }
""", """,
"delete_archived": """ "delete_archived": """
mutation DeleteArchivedNotifications { mutation DeleteArchivedNotifications {
notifications { deleteArchivedNotifications } deleteArchivedNotifications {
unread { info warning alert total }
archive { info warning alert total }
}
} }
""", """,
"archive_all": """ "archive_all": """
mutation ArchiveAllNotifications($importance: NotificationImportance) { mutation ArchiveAllNotifications($importance: NotificationImportance) {
notifications { archiveAll(importance: $importance) } archiveAll(importance: $importance) {
unread { info warning alert total }
archive { info warning alert total }
}
}
""",
"archive_many": """
mutation ArchiveNotifications($ids: [PrefixedID!]!) {
archiveNotifications(ids: $ids) {
unread { info warning alert total }
archive { info warning alert total }
}
}
""",
"create_unique": """
mutation NotifyIfUnique($input: NotificationData!) {
notifyIfUnique(input: $input) { id title importance }
}
""",
"unarchive_many": """
mutation UnarchiveNotifications($ids: [PrefixedID!]!) {
unarchiveNotifications(ids: $ids) {
unread { info warning alert total }
archive { info warning alert total }
}
}
""",
"unarchive_all": """
mutation UnarchiveAll($importance: NotificationImportance) {
unarchiveAll(importance: $importance) {
unread { info warning alert total }
archive { info warning alert total }
}
}
""",
"recalculate": """
mutation RecalculateOverview {
recalculateOverview {
unread { info warning alert total }
archive { info warning alert total }
}
} }
""", """,
} }
DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"} DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
_VALID_IMPORTANCE = {"ALERT", "WARNING", "INFO"}
NOTIFICATION_ACTIONS = Literal[ NOTIFICATION_ACTIONS = Literal[
"overview", "overview",
@@ -87,8 +135,21 @@ NOTIFICATION_ACTIONS = Literal[
"delete", "delete",
"delete_archived", "delete_archived",
"archive_all", "archive_all",
"archive_many",
"create_unique",
"unarchive_many",
"unarchive_all",
"recalculate",
] ]
if set(get_args(NOTIFICATION_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(NOTIFICATION_ACTIONS))
_extra = set(get_args(NOTIFICATION_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"NOTIFICATION_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_notifications_tool(mcp: FastMCP) -> None: def register_notifications_tool(mcp: FastMCP) -> None:
"""Register the unraid_notifications tool with the FastMCP instance.""" """Register the unraid_notifications tool with the FastMCP instance."""
@@ -98,6 +159,7 @@ def register_notifications_tool(mcp: FastMCP) -> None:
action: NOTIFICATION_ACTIONS, action: NOTIFICATION_ACTIONS,
confirm: bool = False, confirm: bool = False,
notification_id: str | None = None, notification_id: str | None = None,
notification_ids: list[str] | None = None,
notification_type: str | None = None, notification_type: str | None = None,
importance: str | None = None, importance: str | None = None,
offset: int = 0, offset: int = 0,
@@ -119,17 +181,39 @@ def register_notifications_tool(mcp: FastMCP) -> None:
delete - Delete a notification (requires notification_id, notification_type, confirm=True) delete - Delete a notification (requires notification_id, notification_type, confirm=True)
delete_archived - Delete all archived notifications (requires confirm=True) delete_archived - Delete all archived notifications (requires confirm=True)
archive_all - Archive all notifications (optional importance filter) archive_all - Archive all notifications (optional importance filter)
archive_many - Archive multiple notifications by ID (requires notification_ids)
create_unique - Create notification only if no equivalent unread exists (requires title, subject, description, importance)
unarchive_many - Move notifications back to unread (requires notification_ids)
unarchive_all - Move all archived notifications to unread (optional importance filter)
recalculate - Recompute overview counts from disk
""" """
all_actions = {**QUERIES, **MUTATIONS} if action not in ALL_ACTIONS:
if action not in all_actions: raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
raise ToolError(
f"Invalid action '{action}'. Must be one of: {list(all_actions.keys())}"
)
if action in DESTRUCTIVE_ACTIONS and not confirm: if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.") raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
try: # Validate enum parameters before dispatching to GraphQL (SEC-M04).
# Invalid values waste a rate-limited request and may leak schema details in errors.
valid_list_types = frozenset({"UNREAD", "ARCHIVE"})
valid_importance = frozenset({"INFO", "WARNING", "ALERT"})
valid_notif_types = frozenset({"UNREAD", "ARCHIVE"})
if list_type.upper() not in valid_list_types:
raise ToolError(
f"Invalid list_type '{list_type}'. Must be one of: {sorted(valid_list_types)}"
)
if importance is not None and importance.upper() not in valid_importance:
raise ToolError(
f"Invalid importance '{importance}'. Must be one of: {sorted(valid_importance)}"
)
if notification_type is not None and notification_type.upper() not in valid_notif_types:
raise ToolError(
f"Invalid notification_type '{notification_type}'. "
f"Must be one of: {sorted(valid_notif_types)}"
)
with tool_error_handler("notifications", action, logger):
logger.info(f"Executing unraid_notifications action={action}") logger.info(f"Executing unraid_notifications action={action}")
if action == "overview": if action == "overview":
@@ -147,18 +231,29 @@ def register_notifications_tool(mcp: FastMCP) -> None:
filter_vars["importance"] = importance.upper() filter_vars["importance"] = importance.upper()
data = await make_graphql_request(QUERIES["list"], {"filter": filter_vars}) data = await make_graphql_request(QUERIES["list"], {"filter": filter_vars})
notifications = data.get("notifications", {}) notifications = data.get("notifications", {})
result = notifications.get("list", []) return {"notifications": notifications.get("list", [])}
return {"notifications": list(result) if isinstance(result, list) else []}
if action == "warnings": if action == "warnings":
data = await make_graphql_request(QUERIES["warnings"]) data = await make_graphql_request(QUERIES["warnings"])
notifications = data.get("notifications", {}) notifications = data.get("notifications", {})
result = notifications.get("warningsAndAlerts", []) return {"warnings": notifications.get("warningsAndAlerts", [])}
return {"warnings": list(result) if isinstance(result, list) else []}
if action == "create": if action == "create":
if title is None or subject is None or description is None or importance is None: if title is None or subject is None or description is None or importance is None:
raise ToolError("create requires title, subject, description, and importance") raise ToolError("create requires title, subject, description, and importance")
if importance.upper() not in _VALID_IMPORTANCE:
raise ToolError(
f"importance must be one of: {', '.join(sorted(_VALID_IMPORTANCE))}. "
f"Got: '{importance}'"
)
if len(title) > 200:
raise ToolError(f"title must be at most 200 characters (got {len(title)})")
if len(subject) > 500:
raise ToolError(f"subject must be at most 500 characters (got {len(subject)})")
if len(description) > 2000:
raise ToolError(
f"description must be at most 2000 characters (got {len(description)})"
)
input_data = {
"title": title,
"subject": subject,
@@ -166,7 +261,10 @@ def register_notifications_tool(mcp: FastMCP) -> None:
"importance": importance.upper(), "importance": importance.upper(),
} }
data = await make_graphql_request(MUTATIONS["create"], {"input": input_data}) data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
return {"success": True, "data": data} notification = data.get("createNotification")
if notification is None:
raise ToolError("Notification creation failed: server returned no data")
return {"success": True, "notification": notification}
if action in ("archive", "unread"): if action in ("archive", "unread"):
if not notification_id: if not notification_id:
@@ -194,12 +292,63 @@ def register_notifications_tool(mcp: FastMCP) -> None:
data = await make_graphql_request(MUTATIONS["archive_all"], variables) data = await make_graphql_request(MUTATIONS["archive_all"], variables)
return {"success": True, "action": "archive_all", "data": data} return {"success": True, "action": "archive_all", "data": data}
if action == "archive_many":
if not notification_ids:
raise ToolError("notification_ids is required for 'archive_many' action")
data = await make_graphql_request(
MUTATIONS["archive_many"], {"ids": notification_ids}
)
return {"success": True, "action": "archive_many", "data": data}
if action == "create_unique":
if title is None or subject is None or description is None or importance is None:
raise ToolError(
"create_unique requires title, subject, description, and importance"
)
if importance.upper() not in _VALID_IMPORTANCE:
raise ToolError(
f"importance must be one of: {', '.join(sorted(_VALID_IMPORTANCE))}. "
f"Got: '{importance}'"
)
if len(title) > 200:
raise ToolError(f"title must be at most 200 characters (got {len(title)})")
if len(subject) > 500:
raise ToolError(f"subject must be at most 500 characters (got {len(subject)})")
if len(description) > 2000:
raise ToolError(
f"description must be at most 2000 characters (got {len(description)})"
)
input_data = {
"title": title,
"subject": subject,
"description": description,
"importance": importance.upper(),
}
data = await make_graphql_request(MUTATIONS["create_unique"], {"input": input_data})
notification = data.get("notifyIfUnique")
if notification is None:
return {"success": True, "duplicate": True, "data": None}
return {"success": True, "duplicate": False, "data": notification}
if action == "unarchive_many":
if not notification_ids:
raise ToolError("notification_ids is required for 'unarchive_many' action")
data = await make_graphql_request(
MUTATIONS["unarchive_many"], {"ids": notification_ids}
)
return {"success": True, "action": "unarchive_many", "data": data}
if action == "unarchive_all":
vars_: dict[str, Any] | None = None
if importance:
vars_ = {"importance": importance.upper()}
data = await make_graphql_request(MUTATIONS["unarchive_all"], vars_)
return {"success": True, "action": "unarchive_all", "data": data}
if action == "recalculate":
data = await make_graphql_request(MUTATIONS["recalculate"])
return {"success": True, "action": "recalculate", "data": data}
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_notifications action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute notifications/{action}: {e!s}") from e
logger.info("Notifications tool registered successfully") logger.info("Notifications tool registered successfully")

View File

@@ -4,13 +4,14 @@ Provides the `unraid_rclone` tool with 4 actions for managing
cloud storage remotes (S3, Google Drive, Dropbox, FTP, etc.).
"""
import re
from typing import Any, Literal, get_args
from fastmcp import FastMCP
from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = {
@@ -49,6 +50,59 @@ RCLONE_ACTIONS = Literal[
"delete_remote", "delete_remote",
] ]
if set(get_args(RCLONE_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(RCLONE_ACTIONS))
_extra = set(get_args(RCLONE_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"RCLONE_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
# Max config entries to prevent abuse
_MAX_CONFIG_KEYS = 50
# Pattern for suspicious key names (path traversal, shell metacharacters)
_DANGEROUS_KEY_PATTERN = re.compile(r"\.\.|[/\\;|`$(){}]")
# Max length for individual config values
_MAX_VALUE_LENGTH = 4096
def _validate_config_data(config_data: dict[str, Any]) -> dict[str, str]:
"""Validate and sanitize rclone config_data before passing to GraphQL.
Ensures all keys and values are safe strings with no injection vectors.
Raises:
ToolError: If config_data contains invalid keys or values
"""
if len(config_data) > _MAX_CONFIG_KEYS:
raise ToolError(f"config_data has {len(config_data)} keys (max {_MAX_CONFIG_KEYS})")
validated: dict[str, str] = {}
for key, value in config_data.items():
if not isinstance(key, str) or not key.strip():
raise ToolError(
f"config_data keys must be non-empty strings, got: {type(key).__name__}"
)
if _DANGEROUS_KEY_PATTERN.search(key):
raise ToolError(
f"config_data key '{key}' contains disallowed characters "
f"(path traversal or shell metacharacters)"
)
if not isinstance(value, (str, int, float, bool)):
raise ToolError(
f"config_data['{key}'] must be a string, number, or boolean, "
f"got: {type(value).__name__}"
)
str_value = str(value)
if len(str_value) > _MAX_VALUE_LENGTH:
raise ToolError(
f"config_data['{key}'] value exceeds max length "
f"({len(str_value)} > {_MAX_VALUE_LENGTH})"
)
validated[key] = str_value
return validated
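As an illustration of these rules (values are made up), a typical remote config passes and is normalized to strings, while a traversal-style key is rejected:

# Accepted: scalar values are coerced to strings on the way through.
_validate_config_data({"provider": "s3", "port": 21, "env_auth": True})
# -> {"provider": "s3", "port": "21", "env_auth": "True"}

# Rejected: ".." in a key matches the path-traversal/metacharacter pattern.
_validate_config_data({"../secrets": "x"})  # raises ToolError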
def register_rclone_tool(mcp: FastMCP) -> None:
"""Register the unraid_rclone tool with the FastMCP instance."""
@@ -75,7 +129,7 @@ def register_rclone_tool(mcp: FastMCP) -> None:
if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
with tool_error_handler("rclone", action, logger):
logger.info(f"Executing unraid_rclone action={action}")
if action == "list_remotes":
@@ -96,9 +150,16 @@ def register_rclone_tool(mcp: FastMCP) -> None:
if action == "create_remote": if action == "create_remote":
if name is None or provider_type is None or config_data is None: if name is None or provider_type is None or config_data is None:
raise ToolError("create_remote requires name, provider_type, and config_data") raise ToolError("create_remote requires name, provider_type, and config_data")
validated_config = _validate_config_data(config_data)
data = await make_graphql_request( data = await make_graphql_request(
MUTATIONS["create_remote"], MUTATIONS["create_remote"],
{"input": {"name": name, "type": provider_type, "config": config_data}}, {
"input": {
"name": name,
"type": provider_type,
"parameters": validated_config,
}
},
)
remote = data.get("rclone", {}).get("createRCloneRemote")
if not remote:
@@ -127,10 +188,4 @@ def register_rclone_tool(mcp: FastMCP) -> None:
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_rclone action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute rclone/{action}: {e!s}") from e
logger.info("RClone tool registered successfully") logger.info("RClone tool registered successfully")

View File

@@ -0,0 +1,284 @@
"""System settings, time, UPS, and remote access mutations.
Provides the `unraid_settings` tool with 9 actions for updating system
configuration, time settings, UPS, API settings, and Unraid Connect.
"""
from typing import Any, Literal, get_args
from fastmcp import FastMCP
from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
MUTATIONS: dict[str, str] = {
"update": """
mutation UpdateSettings($input: JSON!) {
updateSettings(input: $input) { restartRequired values warnings }
}
""",
"update_temperature": """
mutation UpdateTemperatureConfig($input: TemperatureConfigInput!) {
updateTemperatureConfig(input: $input)
}
""",
"update_time": """
mutation UpdateSystemTime($input: UpdateSystemTimeInput!) {
updateSystemTime(input: $input) { currentTime timeZone useNtp ntpServers }
}
""",
"configure_ups": """
mutation ConfigureUps($config: UPSConfigInput!) {
configureUps(config: $config)
}
""",
"update_api": """
mutation UpdateApiSettings($input: ConnectSettingsInput!) {
updateApiSettings(input: $input) { accessType forwardType port }
}
""",
"connect_sign_in": """
mutation ConnectSignIn($input: ConnectSignInInput!) {
connectSignIn(input: $input)
}
""",
"connect_sign_out": """
mutation ConnectSignOut {
connectSignOut
}
""",
"setup_remote_access": """
mutation SetupRemoteAccess($input: SetupRemoteAccessInput!) {
setupRemoteAccess(input: $input)
}
""",
"enable_dynamic_remote_access": """
mutation EnableDynamicRemoteAccess($input: EnableDynamicRemoteAccessInput!) {
enableDynamicRemoteAccess(input: $input)
}
""",
}
DESTRUCTIVE_ACTIONS = {"configure_ups", "setup_remote_access", "enable_dynamic_remote_access"}
ALL_ACTIONS = set(MUTATIONS)
SETTINGS_ACTIONS = Literal[
"update",
"update_temperature",
"update_time",
"configure_ups",
"update_api",
"connect_sign_in",
"connect_sign_out",
"setup_remote_access",
"enable_dynamic_remote_access",
]
if set(get_args(SETTINGS_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(SETTINGS_ACTIONS))
_extra = set(get_args(SETTINGS_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"SETTINGS_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_settings_tool(mcp: FastMCP) -> None:
"""Register the unraid_settings tool with the FastMCP instance."""
@mcp.tool()
async def unraid_settings(
action: SETTINGS_ACTIONS,
confirm: bool = False,
settings_input: dict[str, Any] | None = None,
temperature_config: dict[str, Any] | None = None,
time_zone: str | None = None,
use_ntp: bool | None = None,
ntp_servers: list[str] | None = None,
manual_datetime: str | None = None,
ups_config: dict[str, Any] | None = None,
access_type: str | None = None,
forward_type: str | None = None,
port: int | None = None,
api_key: str | None = None,
username: str | None = None,
email: str | None = None,
avatar: str | None = None,
access_url_type: str | None = None,
access_url_name: str | None = None,
access_url_ipv4: str | None = None,
access_url_ipv6: str | None = None,
dynamic_enabled: bool | None = None,
) -> dict[str, Any]:
"""Update Unraid system settings, time, UPS, and remote access configuration.
Actions:
update - Update system settings (requires settings_input dict)
update_temperature - Update temperature sensor config (requires temperature_config dict)
update_time - Update time/timezone/NTP (requires at least one of: time_zone, use_ntp, ntp_servers, manual_datetime)
configure_ups - Configure UPS monitoring (requires ups_config dict, confirm=True)
update_api - Update API/Connect settings (requires at least one of: access_type, forward_type, port)
connect_sign_in - Sign in to Unraid Connect (requires api_key)
connect_sign_out - Sign out from Unraid Connect
setup_remote_access - Configure remote access (requires access_type, confirm=True)
enable_dynamic_remote_access - Enable/disable dynamic remote access (requires access_url_type, dynamic_enabled, confirm=True)
"""
if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
with tool_error_handler("settings", action, logger):
logger.info(f"Executing unraid_settings action={action}")
if action == "update":
if settings_input is None:
raise ToolError("settings_input is required for 'update' action")
data = await make_graphql_request(MUTATIONS["update"], {"input": settings_input})
return {"success": True, "action": "update", "data": data.get("updateSettings")}
if action == "update_temperature":
if temperature_config is None:
raise ToolError(
"temperature_config is required for 'update_temperature' action"
)
data = await make_graphql_request(
MUTATIONS["update_temperature"], {"input": temperature_config}
)
return {
"success": True,
"action": "update_temperature",
"result": data.get("updateTemperatureConfig"),
}
if action == "update_time":
time_input: dict[str, Any] = {}
if time_zone is not None:
time_input["timeZone"] = time_zone
if use_ntp is not None:
time_input["useNtp"] = use_ntp
if ntp_servers is not None:
time_input["ntpServers"] = ntp_servers
if manual_datetime is not None:
time_input["manualDateTime"] = manual_datetime
if not time_input:
raise ToolError(
"update_time requires at least one of: time_zone, use_ntp, ntp_servers, manual_datetime"
)
data = await make_graphql_request(MUTATIONS["update_time"], {"input": time_input})
return {
"success": True,
"action": "update_time",
"data": data.get("updateSystemTime"),
}
if action == "configure_ups":
if ups_config is None:
raise ToolError("ups_config is required for 'configure_ups' action")
data = await make_graphql_request(
MUTATIONS["configure_ups"], {"config": ups_config}
)
return {
"success": True,
"action": "configure_ups",
"result": data.get("configureUps"),
}
if action == "update_api":
api_input: dict[str, Any] = {}
if access_type is not None:
api_input["accessType"] = access_type
if forward_type is not None:
api_input["forwardType"] = forward_type
if port is not None:
api_input["port"] = port
if not api_input:
raise ToolError(
"update_api requires at least one of: access_type, forward_type, port"
)
data = await make_graphql_request(MUTATIONS["update_api"], {"input": api_input})
return {
"success": True,
"action": "update_api",
"data": data.get("updateApiSettings"),
}
if action == "connect_sign_in":
if not api_key:
raise ToolError("api_key is required for 'connect_sign_in' action")
sign_in_input: dict[str, Any] = {"apiKey": api_key}
user_info: dict[str, Any] = {}
if username:
user_info["preferred_username"] = username
if email:
user_info["email"] = email
if avatar:
user_info["avatar"] = avatar
if user_info:
sign_in_input["userInfo"] = user_info
data = await make_graphql_request(
MUTATIONS["connect_sign_in"], {"input": sign_in_input}
)
return {
"success": True,
"action": "connect_sign_in",
"result": data.get("connectSignIn"),
}
if action == "connect_sign_out":
data = await make_graphql_request(MUTATIONS["connect_sign_out"])
return {
"success": True,
"action": "connect_sign_out",
"result": data.get("connectSignOut"),
}
if action == "setup_remote_access":
if not access_type:
raise ToolError("access_type is required for 'setup_remote_access' action")
remote_input: dict[str, Any] = {"accessType": access_type}
if forward_type is not None:
remote_input["forwardType"] = forward_type
if port is not None:
remote_input["port"] = port
data = await make_graphql_request(
MUTATIONS["setup_remote_access"], {"input": remote_input}
)
return {
"success": True,
"action": "setup_remote_access",
"result": data.get("setupRemoteAccess"),
}
if action == "enable_dynamic_remote_access":
if not access_url_type:
raise ToolError(
"access_url_type is required for 'enable_dynamic_remote_access' action"
)
if dynamic_enabled is None:
raise ToolError(
"dynamic_enabled is required for 'enable_dynamic_remote_access' action"
)
url_input: dict[str, Any] = {"type": access_url_type}
if access_url_name is not None:
url_input["name"] = access_url_name
if access_url_ipv4 is not None:
url_input["ipv4"] = access_url_ipv4
if access_url_ipv6 is not None:
url_input["ipv6"] = access_url_ipv6
data = await make_graphql_request(
MUTATIONS["enable_dynamic_remote_access"],
{"input": {"url": url_input, "enabled": dynamic_enabled}},
)
return {
"success": True,
"action": "enable_dynamic_remote_access",
"result": data.get("enableDynamicRemoteAccess"),
}
raise ToolError(f"Unhandled action '{action}' — this is a bug")
logger.info("Settings tool registered successfully")

View File

@@ -4,17 +4,19 @@ Provides the `unraid_storage` tool with 6 actions for shares, physical disks,
unassigned devices, log files, and log content retrieval.
"""
import os
from typing import Any, Literal, get_args
from fastmcp import FastMCP
from ..config.logging import logger
from ..core.client import DISK_TIMEOUT, make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import format_bytes
_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")
_MAX_TAIL_LINES = 10_000
QUERIES: dict[str, str] = {
"shares": """
@@ -56,6 +58,17 @@ QUERIES: dict[str, str] = {
""", """,
} }
MUTATIONS: dict[str, str] = {
"flash_backup": """
mutation InitiateFlashBackup($input: InitiateFlashBackupInput!) {
initiateFlashBackup(input: $input) { status jobId }
}
""",
}
DESTRUCTIVE_ACTIONS = {"flash_backup"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
STORAGE_ACTIONS = Literal[
"shares",
"disks",
@@ -63,22 +76,16 @@ STORAGE_ACTIONS = Literal[
"unassigned", "unassigned",
"log_files", "log_files",
"logs", "logs",
"flash_backup",
] ]
if set(get_args(STORAGE_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(STORAGE_ACTIONS))
_extra = set(get_args(STORAGE_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"STORAGE_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_storage_tool(mcp: FastMCP) -> None:
@@ -90,6 +97,11 @@ def register_storage_tool(mcp: FastMCP) -> None:
disk_id: str | None = None,
log_path: str | None = None,
tail_lines: int = 100,
confirm: bool = False,
remote_name: str | None = None,
source_path: str | None = None,
destination_path: str | None = None,
backup_options: dict[str, Any] | None = None,
) -> dict[str, Any]:
"""Manage Unraid storage, disks, and logs.
@@ -100,18 +112,27 @@ def register_storage_tool(mcp: FastMCP) -> None:
unassigned - List unassigned devices
log_files - List available log files
logs - Retrieve log content (requires log_path, optional tail_lines)
flash_backup - Initiate flash backup via rclone (requires remote_name, source_path, destination_path, confirm=True)
""" """
if action not in QUERIES: if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {list(QUERIES.keys())}") raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
if action == "disk_details" and not disk_id: if action == "disk_details" and not disk_id:
raise ToolError("disk_id is required for 'disk_details' action") raise ToolError("disk_id is required for 'disk_details' action")
if action == "logs" and (tail_lines < 1 or tail_lines > _MAX_TAIL_LINES):
raise ToolError(f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}")
if action == "logs": if action == "logs":
if not log_path: if not log_path:
raise ToolError("log_path is required for 'logs' action") raise ToolError("log_path is required for 'logs' action")
# Resolve path to prevent traversal attacks (e.g. /var/log/../../etc/shadow) # Resolve path synchronously to prevent traversal attacks.
normalized = str(await anyio.Path(log_path).resolve()) # Using os.path.realpath instead of anyio.Path.resolve() because the
# async variant blocks on NFS-mounted paths under /mnt/ (Perf-AI-1).
normalized = os.path.realpath(log_path) # noqa: ASYNC240
if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
raise ToolError(
f"log_path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}. "
@@ -119,6 +140,32 @@ def register_storage_tool(mcp: FastMCP) -> None:
)
log_path = normalized
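The realpath step is what defeats traversal here: the prefix check runs against the resolved path rather than the raw input. A small self-contained illustration (hypothetical attacker input):

import os

allowed = ("/var/log/", "/boot/logs/", "/mnt/")

# "/var/log/../../etc/shadow" resolves to "/etc/shadow", which matches no
# allowed prefix, so the tool rejects it before any GraphQL request is made.
attack = os.path.realpath("/var/log/../../etc/shadow")
print(attack, any(attack.startswith(p) for p in allowed))  # /etc/shadow False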
if action == "flash_backup":
if not remote_name:
raise ToolError("remote_name is required for 'flash_backup' action")
if not source_path:
raise ToolError("source_path is required for 'flash_backup' action")
if not destination_path:
raise ToolError("destination_path is required for 'flash_backup' action")
input_data: dict[str, Any] = {
"remoteName": remote_name,
"sourcePath": source_path,
"destinationPath": destination_path,
}
if backup_options is not None:
input_data["options"] = backup_options
with tool_error_handler("storage", action, logger):
logger.info("Executing unraid_storage action=flash_backup")
data = await make_graphql_request(MUTATIONS["flash_backup"], {"input": input_data})
backup = data.get("initiateFlashBackup")
if not backup:
raise ToolError("Failed to start flash backup: no confirmation from server")
return {
"success": True,
"action": "flash_backup",
"data": backup,
}
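For reference, the assembled GraphQL variables for this mutation take the following shape (illustrative values only):

variables = {
    "input": {
        "remoteName": "gdrive",
        "sourcePath": "/boot",
        "destinationPath": "backups/flash",
        # "options" is included only when backup_options is supplied.
        "options": {"transfers": 4},
    }
}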
query = QUERIES[action]
variables: dict[str, Any] | None = None
custom_timeout = DISK_TIMEOUT if action in ("disks", "disk_details") else None
@@ -128,17 +175,15 @@ def register_storage_tool(mcp: FastMCP) -> None:
elif action == "logs": elif action == "logs":
variables = {"path": log_path, "lines": tail_lines} variables = {"path": log_path, "lines": tail_lines}
try: with tool_error_handler("storage", action, logger):
logger.info(f"Executing unraid_storage action={action}") logger.info(f"Executing unraid_storage action={action}")
data = await make_graphql_request(query, variables, custom_timeout=custom_timeout) data = await make_graphql_request(query, variables, custom_timeout=custom_timeout)
if action == "shares": if action == "shares":
shares = data.get("shares", []) return {"shares": data.get("shares", [])}
return {"shares": list(shares) if isinstance(shares, list) else []}
if action == "disks": if action == "disks":
disks = data.get("disks", []) return {"disks": data.get("disks", [])}
return {"disks": list(disks) if isinstance(disks, list) else []}
if action == "disk_details": if action == "disk_details":
raw = data.get("disk", {}) raw = data.get("disk", {})
@@ -159,22 +204,14 @@ def register_storage_tool(mcp: FastMCP) -> None:
return {"summary": summary, "details": raw} return {"summary": summary, "details": raw}
if action == "unassigned": if action == "unassigned":
devices = data.get("unassignedDevices", []) return {"devices": data.get("unassignedDevices", [])}
return {"devices": list(devices) if isinstance(devices, list) else []}
if action == "log_files": if action == "log_files":
files = data.get("logFiles", []) return {"log_files": data.get("logFiles", [])}
return {"log_files": list(files) if isinstance(files, list) else []}
if action == "logs": if action == "logs":
return dict(data.get("logFile") or {}) return dict(data.get("logFile") or {})
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_storage action={action}: {e}", exc_info=True)
raise ToolError(f"Failed to execute storage/{action}: {e!s}") from e
logger.info("Storage tool registered successfully") logger.info("Storage tool registered successfully")

View File

@@ -10,7 +10,7 @@ from fastmcp import FastMCP
from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = {
@@ -39,17 +39,11 @@ def register_users_tool(mcp: FastMCP) -> None:
Note: Unraid API does not support user management operations (list, add, delete).
"""
if action not in ALL_ACTIONS:
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
with tool_error_handler("users", action, logger):
logger.info("Executing unraid_users action=me")
data = await make_graphql_request(QUERIES["me"])
return data.get("me") or {}
logger.info("Users tool registered successfully")

View File

@@ -4,13 +4,13 @@ Provides the `unraid_vm` tool with 9 actions for VM lifecycle management
including start, stop, pause, resume, force stop, reboot, and reset.
"""
from typing import Any, Literal, get_args
from fastmcp import FastMCP
from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
QUERIES: dict[str, str] = {
@@ -19,6 +19,13 @@ QUERIES: dict[str, str] = {
vms { id domains { id name state uuid } }
}
""",
# NOTE: The Unraid GraphQL API does not expose a single-VM query.
# The details query is identical to list; client-side filtering is required.
"details": """
query ListVMs {
vms { id domains { id name state uuid } }
}
""",
}
MUTATIONS: dict[str, str] = {
@@ -64,7 +71,15 @@ VM_ACTIONS = Literal[
"reset", "reset",
] ]
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"details"} ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
if set(get_args(VM_ACTIONS)) != ALL_ACTIONS:
_missing = ALL_ACTIONS - set(get_args(VM_ACTIONS))
_extra = set(get_args(VM_ACTIONS)) - ALL_ACTIONS
raise RuntimeError(
f"VM_ACTIONS and ALL_ACTIONS are out of sync. "
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
)
def register_vm_tool(mcp: FastMCP) -> None:
@@ -98,33 +113,31 @@ def register_vm_tool(mcp: FastMCP) -> None:
if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
with tool_error_handler("vm", action, logger):
logger.info(f"Executing unraid_vm action={action}")
if action == "list":
data = await make_graphql_request(QUERIES["list"])
if data.get("vms"):
vms = data["vms"].get("domains") or data["vms"].get("domain") or []
if isinstance(vms, dict):
vms = [vms]
return {"vms": vms}
if action == "list":
return {"vms": vms}
# details: find specific VM
for vm in vms:
if (
vm.get("uuid") == vm_id
or vm.get("id") == vm_id
or vm.get("name") == vm_id
):
return dict(vm)
available = [f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms]
raise ToolError(f"VM '{vm_id}' not found. Available: {', '.join(available)}")
if action == "details":
raise ToolError("No VM data returned from server")
return {"vms": []} return {"vms": []}
if action == "details":
data = await make_graphql_request(QUERIES["details"])
if not data.get("vms"):
raise ToolError("No VM data returned from server")
vms = data["vms"].get("domains") or data["vms"].get("domain") or []
if isinstance(vms, dict):
vms = [vms]
for vm in vms:
if vm.get("uuid") == vm_id or vm.get("id") == vm_id or vm.get("name") == vm_id:
return dict(vm)
available = [f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms]
raise ToolError(f"VM '{vm_id}' not found. Available: {', '.join(available)}")
# Mutations
if action in MUTATIONS:
data = await make_graphql_request(MUTATIONS[action], {"id": vm_id})
@@ -139,15 +152,4 @@ def register_vm_tool(mcp: FastMCP) -> None:
raise ToolError(f"Unhandled action '{action}' — this is a bug") raise ToolError(f"Unhandled action '{action}' — this is a bug")
except ToolError:
raise
except Exception as e:
logger.error(f"Error in unraid_vm action={action}: {e}", exc_info=True)
msg = str(e)
if "VMs are not available" in msg:
raise ToolError(
"VMs not available on this server. Check VM support is enabled."
) from e
raise ToolError(f"Failed to execute vm/{action}: {msg}") from e
logger.info("VM tool registered successfully") logger.info("VM tool registered successfully")

11
unraid_mcp/version.py Normal file
View File

@@ -0,0 +1,11 @@
"""Application version helpers."""
from importlib.metadata import PackageNotFoundError, version
__all__ = ["VERSION"]
try:
VERSION = version("unraid-mcp")
except PackageNotFoundError:
VERSION = "0.0.0"

607
uv.lock generated
View File

@@ -2,6 +2,18 @@ version = 1
revision = 3
requires-python = ">=3.12"
[[package]]
name = "aiofile"
version = "3.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "caio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/67/e2/d7cb819de8df6b5c1968a2756c3cb4122d4fa2b8fc768b53b7c9e5edb646/aiofile-3.9.0.tar.gz", hash = "sha256:e5ad718bb148b265b6df1b3752c4d1d83024b93da9bd599df74b9d9ffcf7919b", size = 17943, upload-time = "2024-10-08T10:39:35.846Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/50/25/da1f0b4dd970e52bf5a36c204c107e11a0c6d3ed195eba0bfbc664c312b2/aiofile-3.9.0-py3-none-any.whl", hash = "sha256:ce2f6c1571538cbdfa0143b04e16b208ecb0e9cb4148e528af8a640ed51cc8aa", size = 19539, upload-time = "2024-10-08T10:39:32.955Z" },
]
[[package]]
name = "annotated-doc"
version = "0.0.4"
@@ -44,14 +56,14 @@ wheels = [
[[package]]
name = "authlib"
version = "1.6.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cryptography" },
]
sdist = { url = "https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" },
]
[[package]]
@@ -79,20 +91,41 @@ wheels = [
[[package]]
name = "cachetools"
version = "7.0.5"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/af/dd/57fe3fdb6e65b25a5987fd2cdc7e22db0aef508b91634d2e57d22928d41b/cachetools-7.0.5.tar.gz", hash = "sha256:0cd042c24377200c1dcd225f8b7b12b0ca53cc2c961b43757e774ebe190fd990", size = 37367, upload-time = "2026-03-09T20:51:29.451Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/06/f3/39cf3367b8107baa44f861dc802cbf16263c945b62d8265d36034fc07bea/cachetools-7.0.5-py3-none-any.whl", hash = "sha256:46bc8ebefbe485407621d0a4264b23c080cedd913921bad7ac3ed2f26c183114", size = 13918, upload-time = "2026-03-09T20:51:27.33Z" },
]
[[package]]
name = "caio"
version = "0.9.25"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/92/88/b8527e1b00c1811db339a1df8bd1ae49d146fcea9d6a5c40e3a80aaeb38d/caio-0.9.25.tar.gz", hash = "sha256:16498e7f81d1d0f5a4c0ad3f2540e65fe25691376e0a5bd367f558067113ed10", size = 26781, upload-time = "2025-12-26T15:21:36.501Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d3/25/79c98ebe12df31548ba4eaf44db11b7cad6b3e7b4203718335620939083c/caio-0.9.25-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:fb7ff95af4c31ad3f03179149aab61097a71fd85e05f89b4786de0359dffd044", size = 36983, upload-time = "2025-12-26T15:21:36.075Z" },
{ url = "https://files.pythonhosted.org/packages/a3/2b/21288691f16d479945968a0a4f2856818c1c5be56881d51d4dac9b255d26/caio-0.9.25-cp312-cp312-manylinux2010_x86_64.manylinux2014_x86_64.manylinux_2_12_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:97084e4e30dfa598449d874c4d8e0c8d5ea17d2f752ef5e48e150ff9d240cd64", size = 82012, upload-time = "2025-12-26T15:22:20.983Z" },
{ url = "https://files.pythonhosted.org/packages/03/c4/8a1b580875303500a9c12b9e0af58cb82e47f5bcf888c2457742a138273c/caio-0.9.25-cp312-cp312-manylinux_2_34_aarch64.whl", hash = "sha256:4fa69eba47e0f041b9d4f336e2ad40740681c43e686b18b191b6c5f4c5544bfb", size = 81502, upload-time = "2026-03-04T22:08:22.381Z" },
{ url = "https://files.pythonhosted.org/packages/d1/1c/0fe770b8ffc8362c48134d1592d653a81a3d8748d764bec33864db36319d/caio-0.9.25-cp312-cp312-manylinux_2_34_x86_64.whl", hash = "sha256:6bebf6f079f1341d19f7386db9b8b1f07e8cc15ae13bfdaff573371ba0575d69", size = 80200, upload-time = "2026-03-04T22:08:23.382Z" },
{ url = "https://files.pythonhosted.org/packages/31/57/5e6ff127e6f62c9f15d989560435c642144aa4210882f9494204bc892305/caio-0.9.25-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:d6c2a3411af97762a2b03840c3cec2f7f728921ff8adda53d7ea2315a8563451", size = 36979, upload-time = "2025-12-26T15:21:35.484Z" },
{ url = "https://files.pythonhosted.org/packages/a3/9f/f21af50e72117eb528c422d4276cbac11fb941b1b812b182e0a9c70d19c5/caio-0.9.25-cp313-cp313-manylinux2010_x86_64.manylinux2014_x86_64.manylinux_2_12_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0998210a4d5cd5cb565b32ccfe4e53d67303f868a76f212e002a8554692870e6", size = 81900, upload-time = "2025-12-26T15:22:21.919Z" },
{ url = "https://files.pythonhosted.org/packages/9c/12/c39ae2a4037cb10ad5eb3578eb4d5f8c1a2575c62bba675f3406b7ef0824/caio-0.9.25-cp313-cp313-manylinux_2_34_aarch64.whl", hash = "sha256:1a177d4777141b96f175fe2c37a3d96dec7911ed9ad5f02bac38aaa1c936611f", size = 81523, upload-time = "2026-03-04T22:08:25.187Z" },
{ url = "https://files.pythonhosted.org/packages/22/59/f8f2e950eb4f1a5a3883e198dca514b9d475415cb6cd7b78b9213a0dd45a/caio-0.9.25-cp313-cp313-manylinux_2_34_x86_64.whl", hash = "sha256:9ed3cfb28c0e99fec5e208c934e5c157d0866aa9c32aa4dc5e9b6034af6286b7", size = 80243, upload-time = "2026-03-04T22:08:26.449Z" },
{ url = "https://files.pythonhosted.org/packages/69/ca/a08fdc7efdcc24e6a6131a93c85be1f204d41c58f474c42b0670af8c016b/caio-0.9.25-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:fab6078b9348e883c80a5e14b382e6ad6aabbc4429ca034e76e730cf464269db", size = 36978, upload-time = "2025-12-26T15:21:41.055Z" },
{ url = "https://files.pythonhosted.org/packages/5e/6c/d4d24f65e690213c097174d26eda6831f45f4734d9d036d81790a27e7b78/caio-0.9.25-cp314-cp314-manylinux2010_x86_64.manylinux2014_x86_64.manylinux_2_12_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:44a6b58e52d488c75cfaa5ecaa404b2b41cc965e6c417e03251e868ecd5b6d77", size = 81832, upload-time = "2025-12-26T15:22:22.757Z" },
{ url = "https://files.pythonhosted.org/packages/87/a4/e534cf7d2d0e8d880e25dd61e8d921ffcfe15bd696734589826f5a2df727/caio-0.9.25-cp314-cp314-manylinux_2_34_aarch64.whl", hash = "sha256:628a630eb7fb22381dd8e3c8ab7f59e854b9c806639811fc3f4310c6bd711d79", size = 81565, upload-time = "2026-03-04T22:08:27.483Z" },
{ url = "https://files.pythonhosted.org/packages/3f/ed/bf81aeac1d290017e5e5ac3e880fd56ee15e50a6d0353986799d1bc5cfd5/caio-0.9.25-cp314-cp314-manylinux_2_34_x86_64.whl", hash = "sha256:0ba16aa605ccb174665357fc729cf500679c2d94d5f1458a6f0d5ca48f2060a7", size = 80071, upload-time = "2026-03-04T22:08:28.751Z" },
{ url = "https://files.pythonhosted.org/packages/86/93/1f76c8d1bafe3b0614e06b2195784a3765bbf7b0a067661af9e2dd47fc33/caio-0.9.25-py3-none-any.whl", hash = "sha256:06c0bb02d6b929119b1cfbe1ca403c768b2013a369e2db46bfa2a5761cf82e40", size = 19087, upload-time = "2025-12-26T15:22:00.221Z" },
] ]
[[package]]
name = "certifi"
version = "2026.2.25"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/af/2d/7bf41579a8986e348fa033a31cdd0e4121114f6bce2457e8876010b092dd/certifi-2026.2.25.tar.gz", hash = "sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7", size = 155029, upload-time = "2026-02-25T02:54:17.342Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9a/3c/c17fb3ca2d9c3acff52e30b309f538586f9f5b9c9cf454f3845fc9af4881/certifi-2026.2.25-py3-none-any.whl", hash = "sha256:027692e4402ad994f1c42e52a4997a9763c646b73e4096e4d5d6db8af1d6f0fa", size = 153684, upload-time = "2026-02-25T02:54:15.766Z" },
]
[[package]]
@@ -154,59 +187,59 @@ wheels = [
[[package]]
name = "charset-normalizer"
version = "3.4.5"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1d/35/02daf95b9cd686320bb622eb148792655c9412dbb9b67abb5694e5910a24/charset_normalizer-3.4.5.tar.gz", hash = "sha256:95adae7b6c42a6c5b5b559b1a99149f090a57128155daeea91732c8d970d8644", size = 134804, upload-time = "2026-03-06T06:03:19.46Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9c/b6/9ee9c1a608916ca5feae81a344dffbaa53b26b90be58cc2159e3332d44ec/charset_normalizer-3.4.5-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ed97c282ee4f994ef814042423a529df9497e3c666dca19be1d4cd1129dc7ade", size = 280976, upload-time = "2026-03-06T06:01:15.276Z" },
{ url = "https://files.pythonhosted.org/packages/f8/d8/a54f7c0b96f1df3563e9190f04daf981e365a9b397eedfdfb5dbef7e5c6c/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0294916d6ccf2d069727d65973c3a1ca477d68708db25fd758dd28b0827cff54", size = 189356, upload-time = "2026-03-06T06:01:16.511Z" },
{ url = "https://files.pythonhosted.org/packages/42/69/2bf7f76ce1446759a5787cb87d38f6a61eb47dbbdf035cfebf6347292a65/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:dc57a0baa3eeedd99fafaef7511b5a6ef4581494e8168ee086031744e2679467", size = 206369, upload-time = "2026-03-06T06:01:17.853Z" },
{ url = "https://files.pythonhosted.org/packages/10/9c/949d1a46dab56b959d9a87272482195f1840b515a3380e39986989a893ae/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:ed1a9a204f317ef879b32f9af507d47e49cd5e7f8e8d5d96358c98373314fc60", size = 203285, upload-time = "2026-03-06T06:01:19.473Z" },
{ url = "https://files.pythonhosted.org/packages/67/5c/ae30362a88b4da237d71ea214a8c7eb915db3eec941adda511729ac25fa2/charset_normalizer-3.4.5-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7ad83b8f9379176c841f8865884f3514d905bcd2a9a3b210eaa446e7d2223e4d", size = 196274, upload-time = "2026-03-06T06:01:20.728Z" },
{ url = "https://files.pythonhosted.org/packages/b2/07/c9f2cb0e46cb6d64fdcc4f95953747b843bb2181bda678dc4e699b8f0f9a/charset_normalizer-3.4.5-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:a118e2e0b5ae6b0120d5efa5f866e58f2bb826067a646431da4d6a2bdae7950e", size = 184715, upload-time = "2026-03-06T06:01:22.194Z" },
{ url = "https://files.pythonhosted.org/packages/36/64/6b0ca95c44fddf692cd06d642b28f63009d0ce325fad6e9b2b4d0ef86a52/charset_normalizer-3.4.5-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:754f96058e61a5e22e91483f823e07df16416ce76afa4ebf306f8e1d1296d43f", size = 193426, upload-time = "2026-03-06T06:01:23.795Z" },
{ url = "https://files.pythonhosted.org/packages/50/bc/a730690d726403743795ca3f5bb2baf67838c5fea78236098f324b965e40/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0c300cefd9b0970381a46394902cd18eaf2aa00163f999590ace991989dcd0fc", size = 191780, upload-time = "2026-03-06T06:01:25.053Z" },
{ url = "https://files.pythonhosted.org/packages/97/4f/6c0bc9af68222b22951552d73df4532b5be6447cee32d58e7e8c74ecbb7b/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:c108f8619e504140569ee7de3f97d234f0fbae338a7f9f360455071ef9855a95", size = 185805, upload-time = "2026-03-06T06:01:26.294Z" },
{ url = "https://files.pythonhosted.org/packages/dd/b9/a523fb9b0ee90814b503452b2600e4cbc118cd68714d57041564886e7325/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:d1028de43596a315e2720a9849ee79007ab742c06ad8b45a50db8cdb7ed4a82a", size = 208342, upload-time = "2026-03-06T06:01:27.55Z" },
{ url = "https://files.pythonhosted.org/packages/4d/61/c59e761dee4464050713e50e27b58266cc8e209e518c0b378c1580c959ba/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:19092dde50335accf365cce21998a1c6dd8eafd42c7b226eb54b2747cdce2fac", size = 193661, upload-time = "2026-03-06T06:01:29.051Z" },
{ url = "https://files.pythonhosted.org/packages/1c/43/729fa30aad69783f755c5ad8649da17ee095311ca42024742701e202dc59/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:4354e401eb6dab9aed3c7b4030514328a6c748d05e1c3e19175008ca7de84fb1", size = 204819, upload-time = "2026-03-06T06:01:30.298Z" },
{ url = "https://files.pythonhosted.org/packages/87/33/d9b442ce5a91b96fc0840455a9e49a611bbadae6122778d0a6a79683dd31/charset_normalizer-3.4.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a68766a3c58fde7f9aaa22b3786276f62ab2f594efb02d0a1421b6282e852e98", size = 198080, upload-time = "2026-03-06T06:01:31.478Z" },
{ url = "https://files.pythonhosted.org/packages/56/5a/b8b5a23134978ee9885cee2d6995f4c27cc41f9baded0a9685eabc5338f0/charset_normalizer-3.4.5-cp312-cp312-win32.whl", hash = "sha256:1827734a5b308b65ac54e86a618de66f935a4f63a8a462ff1e19a6788d6c2262", size = 132630, upload-time = "2026-03-06T06:01:33.056Z" },
{ url = "https://files.pythonhosted.org/packages/70/53/e44a4c07e8904500aec95865dc3f6464dc3586a039ef0df606eb3ac38e35/charset_normalizer-3.4.5-cp312-cp312-win_amd64.whl", hash = "sha256:728c6a963dfab66ef865f49286e45239384249672cd598576765acc2a640a636", size = 142856, upload-time = "2026-03-06T06:01:34.489Z" },
{ url = "https://files.pythonhosted.org/packages/ea/aa/c5628f7cad591b1cf45790b7a61483c3e36cf41349c98af7813c483fd6e8/charset_normalizer-3.4.5-cp312-cp312-win_arm64.whl", hash = "sha256:75dfd1afe0b1647449e852f4fb428195a7ed0588947218f7ba929f6538487f02", size = 132982, upload-time = "2026-03-06T06:01:35.641Z" },
{ url = "https://files.pythonhosted.org/packages/f5/48/9f34ec4bb24aa3fdba1890c1bddb97c8a4be1bd84ef5c42ac2352563ad05/charset_normalizer-3.4.5-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ac59c15e3f1465f722607800c68713f9fbc2f672b9eb649fe831da4019ae9b23", size = 280788, upload-time = "2026-03-06T06:01:37.126Z" },
{ url = "https://files.pythonhosted.org/packages/0e/09/6003e7ffeb90cc0560da893e3208396a44c210c5ee42efff539639def59b/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:165c7b21d19365464e8f70e5ce5e12524c58b48c78c1f5a57524603c1ab003f8", size = 188890, upload-time = "2026-03-06T06:01:38.73Z" },
{ url = "https://files.pythonhosted.org/packages/42/1e/02706edf19e390680daa694d17e2b8eab4b5f7ac285e2a51168b4b22ee6b/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:28269983f25a4da0425743d0d257a2d6921ea7d9b83599d4039486ec5b9f911d", size = 206136, upload-time = "2026-03-06T06:01:40.016Z" },
{ url = "https://files.pythonhosted.org/packages/c7/87/942c3def1b37baf3cf786bad01249190f3ca3d5e63a84f831e704977de1f/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d27ce22ec453564770d29d03a9506d449efbb9fa13c00842262b2f6801c48cce", size = 202551, upload-time = "2026-03-06T06:01:41.522Z" },
{ url = "https://files.pythonhosted.org/packages/94/0a/af49691938dfe175d71b8a929bd7e4ace2809c0c5134e28bc535660d5262/charset_normalizer-3.4.5-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0625665e4ebdddb553ab185de5db7054393af8879fb0c87bd5690d14379d6819", size = 195572, upload-time = "2026-03-06T06:01:43.208Z" },
{ url = "https://files.pythonhosted.org/packages/20/ea/dfb1792a8050a8e694cfbde1570ff97ff74e48afd874152d38163d1df9ae/charset_normalizer-3.4.5-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:c23eb3263356d94858655b3e63f85ac5d50970c6e8febcdde7830209139cc37d", size = 184438, upload-time = "2026-03-06T06:01:44.755Z" },
{ url = "https://files.pythonhosted.org/packages/72/12/c281e2067466e3ddd0595bfaea58a6946765ace5c72dfa3edc2f5f118026/charset_normalizer-3.4.5-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e6302ca4ae283deb0af68d2fbf467474b8b6aedcd3dab4db187e07f94c109763", size = 193035, upload-time = "2026-03-06T06:01:46.051Z" },
{ url = "https://files.pythonhosted.org/packages/ba/4f/3792c056e7708e10464bad0438a44708886fb8f92e3c3d29ec5e2d964d42/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e51ae7d81c825761d941962450f50d041db028b7278e7b08930b4541b3e45cb9", size = 191340, upload-time = "2026-03-06T06:01:47.547Z" },
{ url = "https://files.pythonhosted.org/packages/e7/86/80ddba897127b5c7a9bccc481b0cd36c8fefa485d113262f0fe4332f0bf4/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:597d10dec876923e5c59e48dbd366e852eacb2b806029491d307daea6b917d7c", size = 185464, upload-time = "2026-03-06T06:01:48.764Z" },
{ url = "https://files.pythonhosted.org/packages/e6/8c/d0406294828d4976f275ffbe66f00266c4b3136b7506941d87c00cab5272/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133", size = 162583, upload-time = "2025-10-14T04:41:23.754Z" }, { url = "https://files.pythonhosted.org/packages/4d/00/b5eff85ba198faacab83e0e4b6f0648155f072278e3b392a82478f8b988b/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:5cffde4032a197bd3b42fd0b9509ec60fb70918d6970e4cc773f20fc9180ca67", size = 208014, upload-time = "2026-03-06T06:01:50.371Z" },
{ url = "https://files.pythonhosted.org/packages/d7/24/e2aa1f18c8f15c4c0e932d9287b8609dd30ad56dbe41d926bd846e22fb8d/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3", size = 150366, upload-time = "2025-10-14T04:41:25.27Z" }, { url = "https://files.pythonhosted.org/packages/c8/11/d36f70be01597fd30850dde8a1269ebc8efadd23ba5785808454f2389bde/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:2da4eedcb6338e2321e831a0165759c0c620e37f8cd044a263ff67493be8ffb3", size = 193297, upload-time = "2026-03-06T06:01:51.933Z" },
{ url = "https://files.pythonhosted.org/packages/e4/5b/1e6160c7739aad1e2df054300cc618b06bf784a7a164b0f238360721ab86/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e", size = 160300, upload-time = "2025-10-14T04:41:26.725Z" }, { url = "https://files.pythonhosted.org/packages/1a/1d/259eb0a53d4910536c7c2abb9cb25f4153548efb42800c6a9456764649c0/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:65a126fb4b070d05340a84fc709dd9e7c75d9b063b610ece8a60197a291d0adf", size = 204321, upload-time = "2026-03-06T06:01:53.887Z" },
{ url = "https://files.pythonhosted.org/packages/7a/10/f882167cd207fbdd743e55534d5d9620e095089d176d55cb22d5322f2afd/charset_normalizer-3.4.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc", size = 154465, upload-time = "2025-10-14T04:41:28.322Z" }, { url = "https://files.pythonhosted.org/packages/84/31/faa6c5b9d3688715e1ed1bb9d124c384fe2fc1633a409e503ffe1c6398c1/charset_normalizer-3.4.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c7a80a9242963416bd81f99349d5f3fce1843c303bd404f204918b6d75a75fd6", size = 197509, upload-time = "2026-03-06T06:01:56.439Z" },
{ url = "https://files.pythonhosted.org/packages/89/66/c7a9e1b7429be72123441bfdbaf2bc13faab3f90b933f664db506dea5915/charset_normalizer-3.4.4-cp313-cp313-win32.whl", hash = "sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac", size = 99404, upload-time = "2025-10-14T04:41:29.95Z" }, { url = "https://files.pythonhosted.org/packages/fd/a5/c7d9dd1503ffc08950b3260f5d39ec2366dd08254f0900ecbcf3a6197c7c/charset_normalizer-3.4.5-cp313-cp313-win32.whl", hash = "sha256:f1d725b754e967e648046f00c4facc42d414840f5ccc670c5670f59f83693e4f", size = 132284, upload-time = "2026-03-06T06:01:57.812Z" },
{ url = "https://files.pythonhosted.org/packages/c4/26/b9924fa27db384bdcd97ab83b4f0a8058d96ad9626ead570674d5e737d90/charset_normalizer-3.4.4-cp313-cp313-win_amd64.whl", hash = "sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14", size = 107092, upload-time = "2025-10-14T04:41:31.188Z" }, { url = "https://files.pythonhosted.org/packages/b9/0f/57072b253af40c8aa6636e6de7d75985624c1eb392815b2f934199340a89/charset_normalizer-3.4.5-cp313-cp313-win_amd64.whl", hash = "sha256:e37bd100d2c5d3ba35db9c7c5ba5a9228cbcffe5c4778dc824b164e5257813d7", size = 142630, upload-time = "2026-03-06T06:01:59.062Z" },
{ url = "https://files.pythonhosted.org/packages/af/8f/3ed4bfa0c0c72a7ca17f0380cd9e4dd842b09f664e780c13cff1dcf2ef1b/charset_normalizer-3.4.4-cp313-cp313-win_arm64.whl", hash = "sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2", size = 100408, upload-time = "2025-10-14T04:41:32.624Z" }, { url = "https://files.pythonhosted.org/packages/31/41/1c4b7cc9f13bd9d369ce3bc993e13d374ce25fa38a2663644283ecf422c1/charset_normalizer-3.4.5-cp313-cp313-win_arm64.whl", hash = "sha256:93b3b2cc5cf1b8743660ce77a4f45f3f6d1172068207c1defc779a36eea6bb36", size = 133254, upload-time = "2026-03-06T06:02:00.281Z" },
{ url = "https://files.pythonhosted.org/packages/2a/35/7051599bd493e62411d6ede36fd5af83a38f37c4767b92884df7301db25d/charset_normalizer-3.4.4-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd", size = 207746, upload-time = "2025-10-14T04:41:33.773Z" }, { url = "https://files.pythonhosted.org/packages/43/be/0f0fd9bb4a7fa4fb5067fb7d9ac693d4e928d306f80a0d02bde43a7c4aee/charset_normalizer-3.4.5-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:8197abe5ca1ffb7d91e78360f915eef5addff270f8a71c1fc5be24a56f3e4873", size = 280232, upload-time = "2026-03-06T06:02:01.508Z" },
{ url = "https://files.pythonhosted.org/packages/10/9a/97c8d48ef10d6cd4fcead2415523221624bf58bcf68a802721a6bc807c8f/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb", size = 147889, upload-time = "2025-10-14T04:41:34.897Z" }, { url = "https://files.pythonhosted.org/packages/28/02/983b5445e4bef49cd8c9da73a8e029f0825f39b74a06d201bfaa2e55142a/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a2aecdb364b8a1802afdc7f9327d55dad5366bc97d8502d0f5854e50712dbc5f", size = 189688, upload-time = "2026-03-06T06:02:02.857Z" },
{ url = "https://files.pythonhosted.org/packages/10/bf/979224a919a1b606c82bd2c5fa49b5c6d5727aa47b4312bb27b1734f53cd/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e", size = 143641, upload-time = "2025-10-14T04:41:36.116Z" }, { url = "https://files.pythonhosted.org/packages/d0/88/152745c5166437687028027dc080e2daed6fe11cfa95a22f4602591c42db/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:a66aa5022bf81ab4b1bebfb009db4fd68e0c6d4307a1ce5ef6a26e5878dfc9e4", size = 206833, upload-time = "2026-03-06T06:02:05.127Z" },
{ url = "https://files.pythonhosted.org/packages/ba/33/0ad65587441fc730dc7bd90e9716b30b4702dc7b617e6ba4997dc8651495/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14", size = 160779, upload-time = "2025-10-14T04:41:37.229Z" }, { url = "https://files.pythonhosted.org/packages/cb/0f/ebc15c8b02af2f19be9678d6eed115feeeccc45ce1f4b098d986c13e8769/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d77f97e515688bd615c1d1f795d540f32542d514242067adcb8ef532504cb9ee", size = 202879, upload-time = "2026-03-06T06:02:06.446Z" },
{ url = "https://files.pythonhosted.org/packages/67/ed/331d6b249259ee71ddea93f6f2f0a56cfebd46938bde6fcc6f7b9a3d0e09/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191", size = 159035, upload-time = "2025-10-14T04:41:38.368Z" }, { url = "https://files.pythonhosted.org/packages/38/9c/71336bff6934418dc8d1e8a1644176ac9088068bc571da612767619c97b3/charset_normalizer-3.4.5-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:01a1ed54b953303ca7e310fafe0fe347aab348bd81834a0bcd602eb538f89d66", size = 195764, upload-time = "2026-03-06T06:02:08.763Z" },
{ url = "https://files.pythonhosted.org/packages/67/ff/f6b948ca32e4f2a4576aa129d8bed61f2e0543bf9f5f2b7fc3758ed005c9/charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838", size = 152542, upload-time = "2025-10-14T04:41:39.862Z" }, { url = "https://files.pythonhosted.org/packages/b7/95/ce92fde4f98615661871bc282a856cf9b8a15f686ba0af012984660d480b/charset_normalizer-3.4.5-cp314-cp314-manylinux_2_31_armv7l.whl", hash = "sha256:b2d37d78297b39a9eb9eb92c0f6df98c706467282055419df141389b23f93362", size = 183728, upload-time = "2026-03-06T06:02:10.137Z" },
{ url = "https://files.pythonhosted.org/packages/16/85/276033dcbcc369eb176594de22728541a925b2632f9716428c851b149e83/charset_normalizer-3.4.4-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6", size = 149524, upload-time = "2025-10-14T04:41:41.319Z" }, { url = "https://files.pythonhosted.org/packages/1c/e7/f5b4588d94e747ce45ae680f0f242bc2d98dbd4eccfab73e6160b6893893/charset_normalizer-3.4.5-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e71bbb595973622b817c042bd943c3f3667e9c9983ce3d205f973f486fec98a7", size = 192937, upload-time = "2026-03-06T06:02:11.663Z" },
{ url = "https://files.pythonhosted.org/packages/9e/f2/6a2a1f722b6aba37050e626530a46a68f74e63683947a8acff92569f979a/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e", size = 150395, upload-time = "2025-10-14T04:41:42.539Z" }, { url = "https://files.pythonhosted.org/packages/f9/29/9d94ed6b929bf9f48bf6ede6e7474576499f07c4c5e878fb186083622716/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:4cd966c2559f501c6fd69294d082c2934c8dd4719deb32c22961a5ac6db0df1d", size = 192040, upload-time = "2026-03-06T06:02:13.489Z" },
{ url = "https://files.pythonhosted.org/packages/60/bb/2186cb2f2bbaea6338cad15ce23a67f9b0672929744381e28b0592676824/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c", size = 143680, upload-time = "2025-10-14T04:41:43.661Z" }, { url = "https://files.pythonhosted.org/packages/15/d2/1a093a1cf827957f9445f2fe7298bcc16f8fc5e05c1ed2ad1af0b239035e/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:d5e52d127045d6ae01a1e821acfad2f3a1866c54d0e837828538fabe8d9d1bd6", size = 184107, upload-time = "2026-03-06T06:02:14.83Z" },
{ url = "https://files.pythonhosted.org/packages/7d/a5/bf6f13b772fbb2a90360eb620d52ed8f796f3c5caee8398c3b2eb7b1c60d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090", size = 162045, upload-time = "2025-10-14T04:41:44.821Z" }, { url = "https://files.pythonhosted.org/packages/0f/7d/82068ce16bd36135df7b97f6333c5d808b94e01d4599a682e2337ed5fd14/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:30a2b1a48478c3428d047ed9690d57c23038dac838a87ad624c85c0a78ebeb39", size = 208310, upload-time = "2026-03-06T06:02:16.165Z" },
{ url = "https://files.pythonhosted.org/packages/df/c5/d1be898bf0dc3ef9030c3825e5d3b83f2c528d207d246cbabe245966808d/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152", size = 149687, upload-time = "2025-10-14T04:41:46.442Z" }, { url = "https://files.pythonhosted.org/packages/84/4e/4dfb52307bb6af4a5c9e73e482d171b81d36f522b21ccd28a49656baa680/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:d8ed79b8f6372ca4254955005830fd61c1ccdd8c0fac6603e2c145c61dd95db6", size = 192918, upload-time = "2026-03-06T06:02:18.144Z" },
{ url = "https://files.pythonhosted.org/packages/a5/42/90c1f7b9341eef50c8a1cb3f098ac43b0508413f33affd762855f67a410e/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828", size = 160014, upload-time = "2025-10-14T04:41:47.631Z" }, { url = "https://files.pythonhosted.org/packages/08/a4/159ff7da662cf7201502ca89980b8f06acf3e887b278956646a8aeb178ab/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:c5af897b45fa606b12464ccbe0014bbf8c09191e0a66aab6aa9d5cf6e77e0c94", size = 204615, upload-time = "2026-03-06T06:02:19.821Z" },
{ url = "https://files.pythonhosted.org/packages/76/be/4d3ee471e8145d12795ab655ece37baed0929462a86e72372fd25859047c/charset_normalizer-3.4.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec", size = 154044, upload-time = "2025-10-14T04:41:48.81Z" }, { url = "https://files.pythonhosted.org/packages/d6/62/0dd6172203cb6b429ffffc9935001fde42e5250d57f07b0c28c6046deb6b/charset_normalizer-3.4.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:1088345bcc93c58d8d8f3d783eca4a6e7a7752bbff26c3eee7e73c597c191c2e", size = 197784, upload-time = "2026-03-06T06:02:21.86Z" },
{ url = "https://files.pythonhosted.org/packages/b0/6f/8f7af07237c34a1defe7defc565a9bc1807762f672c0fde711a4b22bf9c0/charset_normalizer-3.4.4-cp314-cp314-win32.whl", hash = "sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9", size = 99940, upload-time = "2025-10-14T04:41:49.946Z" }, { url = "https://files.pythonhosted.org/packages/c7/5e/1aab5cb737039b9c59e63627dc8bbc0d02562a14f831cc450e5f91d84ce1/charset_normalizer-3.4.5-cp314-cp314-win32.whl", hash = "sha256:ee57b926940ba00bca7ba7041e665cc956e55ef482f851b9b65acb20d867e7a2", size = 133009, upload-time = "2026-03-06T06:02:23.289Z" },
{ url = "https://files.pythonhosted.org/packages/4b/51/8ade005e5ca5b0d80fb4aff72a3775b325bdc3d27408c8113811a7cbe640/charset_normalizer-3.4.4-cp314-cp314-win_amd64.whl", hash = "sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c", size = 107104, upload-time = "2025-10-14T04:41:51.051Z" }, { url = "https://files.pythonhosted.org/packages/40/65/e7c6c77d7aaa4c0d7974f2e403e17f0ed2cb0fc135f77d686b916bf1eead/charset_normalizer-3.4.5-cp314-cp314-win_amd64.whl", hash = "sha256:4481e6da1830c8a1cc0b746b47f603b653dadb690bcd851d039ffaefe70533aa", size = 143511, upload-time = "2026-03-06T06:02:26.195Z" },
{ url = "https://files.pythonhosted.org/packages/da/5f/6b8f83a55bb8278772c5ae54a577f3099025f9ade59d0136ac24a0df4bde/charset_normalizer-3.4.4-cp314-cp314-win_arm64.whl", hash = "sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2", size = 100743, upload-time = "2025-10-14T04:41:52.122Z" }, { url = "https://files.pythonhosted.org/packages/ba/91/52b0841c71f152f563b8e072896c14e3d83b195c188b338d3cc2e582d1d4/charset_normalizer-3.4.5-cp314-cp314-win_arm64.whl", hash = "sha256:97ab7787092eb9b50fb47fa04f24c75b768a606af1bcba1957f07f128a7219e4", size = 133775, upload-time = "2026-03-06T06:02:27.473Z" },
{ url = "https://files.pythonhosted.org/packages/0a/4c/925909008ed5a988ccbb72dcc897407e5d6d3bd72410d69e051fc0c14647/charset_normalizer-3.4.4-py3-none-any.whl", hash = "sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f", size = 53402, upload-time = "2025-10-14T04:42:31.76Z" }, { url = "https://files.pythonhosted.org/packages/c5/60/3a621758945513adfd4db86827a5bafcc615f913dbd0b4c2ed64a65731be/charset_normalizer-3.4.5-py3-none-any.whl", hash = "sha256:9db5e3fcdcee89a78c04dffb3fe33c79f77bd741a624946db2591c81b2fc85b0", size = 55455, upload-time = "2026-03-06T06:03:17.827Z" },
] ]
[[package]] [[package]]
@@ -221,15 +254,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" }, { url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" },
] ]
[[package]]
name = "cloudpickle"
version = "3.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/27/fb/576f067976d320f5f0114a8d9fa1215425441bb35627b1993e5afd8111e5/cloudpickle-3.1.2.tar.gz", hash = "sha256:7fda9eb655c9c230dab534f1983763de5835249750e85fbcef43aaa30a9a2414", size = 22330, upload-time = "2025-11-03T09:25:26.604Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/88/39/799be3f2f0f38cc727ee3b4f1445fe6d5e4133064ec2e4115069418a5bb6/cloudpickle-3.1.2-py3-none-any.whl", hash = "sha256:9acb47f6afd73f60dc1df93bb801b472f05ff42fa6c84167d25cb206be1fbf4a", size = 22228, upload-time = "2025-11-03T09:25:25.534Z" },
]
[[package]] [[package]]
name = "colorama" name = "colorama"
version = "0.4.6" version = "0.4.6"
@@ -323,19 +347,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0d/4a/331fe2caf6799d591109bb9c08083080f6de90a823695d412a935622abb2/coverage-7.13.4-py3-none-any.whl", hash = "sha256:1af1641e57cf7ba1bd67d677c9abdbcd6cc2ab7da3bca7fa1e2b7e50e65f2ad0", size = 211242, upload-time = "2026-02-09T12:59:02.032Z" }, { url = "https://files.pythonhosted.org/packages/0d/4a/331fe2caf6799d591109bb9c08083080f6de90a823695d412a935622abb2/coverage-7.13.4-py3-none-any.whl", hash = "sha256:1af1641e57cf7ba1bd67d677c9abdbcd6cc2ab7da3bca7fa1e2b7e50e65f2ad0", size = 211242, upload-time = "2026-02-09T12:59:02.032Z" },
] ]
[[package]]
name = "croniter"
version = "6.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "python-dateutil" },
{ name = "pytz" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ad/2f/44d1ae153a0e27be56be43465e5cb39b9650c781e001e7864389deb25090/croniter-6.0.0.tar.gz", hash = "sha256:37c504b313956114a983ece2c2b07790b1f1094fe9d81cc94739214748255577", size = 64481, upload-time = "2024-12-17T17:17:47.32Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/07/4b/290b4c3efd6417a8b0c284896de19b1d5855e6dbdb97d2a35e68fa42de85/croniter-6.0.0-py2.py3-none-any.whl", hash = "sha256:2f878c3856f17896979b2a4379ba1f09c83e374931ea15cc835c5dd2eee9b368", size = 25468, upload-time = "2024-12-17T17:17:45.359Z" },
]
[[package]] [[package]]
name = "cryptography" name = "cryptography"
version = "46.0.5" version = "46.0.5"
@@ -391,7 +402,7 @@ wheels = [
 [[package]]
 name = "cyclopts"
-version = "4.5.3"
+version = "4.9.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "attrs" },
@@ -399,18 +410,9 @@ dependencies = [
{ name = "rich" }, { name = "rich" },
{ name = "rich-rst" }, { name = "rich-rst" },
] ]
sdist = { url = "https://files.pythonhosted.org/packages/a5/16/06e35c217334930ff7c476ce1c8e74ed786fa3ef6742e59a1458e2412290/cyclopts-4.5.3.tar.gz", hash = "sha256:35fa70971204c450d9668646a6ca372eb5fa3070fbe8dd51c5b4b31e65198f2d", size = 162437, upload-time = "2026-02-16T15:07:11.96Z" } sdist = { url = "https://files.pythonhosted.org/packages/75/de/75598ddea1f47589ccecdb23a560715a5a8ec2b3e34396b5628ba98d70e4/cyclopts-4.9.0.tar.gz", hash = "sha256:f292868e4be33a3e622d8cf95d89f49222e987b1ccdbf40caf6514e19dd99a63", size = 166300, upload-time = "2026-03-13T13:43:40.38Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/3a/1f/d8bce383a90d8a6a11033327777afa4d4d611ec11869284adb6f48152906/cyclopts-4.5.3-py3-none-any.whl", hash = "sha256:50af3085bb15d4a6f2582dd383dad5e4ba6a0d4d4c64ee63326d881a752a6919", size = 200231, upload-time = "2026-02-16T15:07:13.045Z" }, { url = "https://files.pythonhosted.org/packages/d1/b2/2e342a876e5b78ce99ecf65ce391f5b2935144a0528c9989c437b8578a54/cyclopts-4.9.0-py3-none-any.whl", hash = "sha256:583ea4090a040c92f9303bc0da26bca7b681c81bcea34097ace279e1acef22c1", size = 203999, upload-time = "2026-03-13T13:43:38.553Z" },
]
[[package]]
name = "diskcache"
version = "5.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/3f/21/1c1ffc1a039ddcc459db43cc108658f32c57d271d7289a2794e401d0fdb6/diskcache-5.6.3.tar.gz", hash = "sha256:2c3a3fa2743d8535d832ec61c2054a1641f41775aa7c556758a109941e33e4fc", size = 67916, upload-time = "2023-08-31T06:12:00.316Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl", hash = "sha256:5e31b2d5fbad117cc363ebaf6b689474db18a1f6438bc82358b024abd4c2ca19", size = 45550, upload-time = "2023-08-31T06:11:58.822Z" },
] ]
[[package]] [[package]]
@@ -465,27 +467,9 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" }, { url = "https://files.pythonhosted.org/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" },
] ]
[[package]]
name = "fakeredis"
version = "2.34.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "redis" },
{ name = "sortedcontainers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d8/44/c403963727d707e03f49a417712b0a23e853d33ae50729679040b6cfe281/fakeredis-2.34.0.tar.gz", hash = "sha256:72bc51a7ab39bedf5004f0cf1b5206822619c1be8c2657fd878d1f4250256c57", size = 177156, upload-time = "2026-02-16T15:56:34.318Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1a/8e/af19c00753c432355f9b76cec3ab0842578de43ba575e82735b18c1b3ec9/fakeredis-2.34.0-py3-none-any.whl", hash = "sha256:bc45d362c6cc3a537f8287372d8ea532538dfbe7f5d635d0905d7b3464ec51d2", size = 122063, upload-time = "2026-02-16T15:56:21.227Z" },
]
[package.optional-dependencies]
lua = [
{ name = "lupa" },
]
[[package]] [[package]]
name = "fastapi" name = "fastapi"
version = "0.129.0" version = "0.135.1"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
dependencies = [ dependencies = [
{ name = "annotated-doc" }, { name = "annotated-doc" },
@@ -494,14 +478,14 @@ dependencies = [
{ name = "typing-extensions" }, { name = "typing-extensions" },
{ name = "typing-inspection" }, { name = "typing-inspection" },
] ]
sdist = { url = "https://files.pythonhosted.org/packages/48/47/75f6bea02e797abff1bca968d5997793898032d9923c1935ae2efdece642/fastapi-0.129.0.tar.gz", hash = "sha256:61315cebd2e65df5f97ec298c888f9de30430dd0612d59d6480beafbc10655af", size = 375450, upload-time = "2026-02-12T13:54:52.541Z" } sdist = { url = "https://files.pythonhosted.org/packages/e7/7b/f8e0211e9380f7195ba3f3d40c292594fd81ba8ec4629e3854c353aaca45/fastapi-0.135.1.tar.gz", hash = "sha256:d04115b508d936d254cea545b7312ecaa58a7b3a0f84952535b4c9afae7668cd", size = 394962, upload-time = "2026-03-01T18:18:29.369Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/9e/dd/d0ee25348ac58245ee9f90b6f3cbb666bf01f69be7e0911f9851bddbda16/fastapi-0.129.0-py3-none-any.whl", hash = "sha256:b4946880e48f462692b31c083be0432275cbfb6e2274566b1be91479cc1a84ec", size = 102950, upload-time = "2026-02-12T13:54:54.528Z" }, { url = "https://files.pythonhosted.org/packages/e4/72/42e900510195b23a56bde950d26a51f8b723846bfcaa0286e90287f0422b/fastapi-0.135.1-py3-none-any.whl", hash = "sha256:46e2fc5745924b7c840f71ddd277382af29ce1cdb7d5eab5bf697e3fb9999c9e", size = 116999, upload-time = "2026-03-01T18:18:30.831Z" },
] ]
[[package]] [[package]]
name = "fastmcp" name = "fastmcp"
version = "2.14.5" version = "3.1.0"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
dependencies = [ dependencies = [
{ name = "authlib" }, { name = "authlib" },
@@ -512,29 +496,32 @@ dependencies = [
{ name = "jsonschema-path" }, { name = "jsonschema-path" },
{ name = "mcp" }, { name = "mcp" },
{ name = "openapi-pydantic" }, { name = "openapi-pydantic" },
{ name = "opentelemetry-api" },
{ name = "packaging" }, { name = "packaging" },
{ name = "platformdirs" }, { name = "platformdirs" },
{ name = "py-key-value-aio", extra = ["disk", "keyring", "memory"] }, { name = "py-key-value-aio", extra = ["filetree", "keyring", "memory"] },
{ name = "pydantic", extra = ["email"] }, { name = "pydantic", extra = ["email"] },
{ name = "pydocket" },
{ name = "pyperclip" }, { name = "pyperclip" },
{ name = "python-dotenv" }, { name = "python-dotenv" },
{ name = "pyyaml" },
{ name = "rich" }, { name = "rich" },
{ name = "uncalled-for" },
{ name = "uvicorn" }, { name = "uvicorn" },
{ name = "watchfiles" },
{ name = "websockets" }, { name = "websockets" },
] ]
sdist = { url = "https://files.pythonhosted.org/packages/3b/32/982678d44f13849530a74ab101ed80e060c2ee6cf87471f062dcf61705fd/fastmcp-2.14.5.tar.gz", hash = "sha256:38944dc582c541d55357082bda2241cedb42cd3a78faea8a9d6a2662c62a42d7", size = 8296329, upload-time = "2026-02-03T15:35:21.005Z" } sdist = { url = "https://files.pythonhosted.org/packages/0a/70/862026c4589441f86ad3108f05bfb2f781c6b322ad60a982f40b303b47d7/fastmcp-3.1.0.tar.gz", hash = "sha256:e25264794c734b9977502a51466961eeecff92a0c2f3b49c40c070993628d6d0", size = 17347083, upload-time = "2026-03-03T02:43:11.283Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/c1/1a35ec68ff76ea8443aa115b18bcdee748a4ada2124537ee90522899ff9f/fastmcp-2.14.5-py3-none-any.whl", hash = "sha256:d81e8ec813f5089d3624bec93944beaefa86c0c3a4ef1111cbef676a761ebccf", size = 417784, upload-time = "2026-02-03T15:35:18.489Z" }, { url = "https://files.pythonhosted.org/packages/17/07/516f5b20d88932e5a466c2216b628e5358a71b3a9f522215607c3281de05/fastmcp-3.1.0-py3-none-any.whl", hash = "sha256:b1f73b56fd3b0cb2bd9e2a144fc650d5cc31587ed129d996db7710e464ae8010", size = 633749, upload-time = "2026-03-03T02:43:09.06Z" },
] ]
[[package]] [[package]]
name = "graphql-core" name = "graphql-core"
version = "3.2.7" version = "3.2.8"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ac/9b/037a640a2983b09aed4a823f9cf1729e6d780b0671f854efa4727a7affbe/graphql_core-3.2.7.tar.gz", hash = "sha256:27b6904bdd3b43f2a0556dad5d579bdfdeab1f38e8e8788e555bdcb586a6f62c", size = 513484, upload-time = "2025-11-01T22:30:40.436Z" } sdist = { url = "https://files.pythonhosted.org/packages/68/c5/36aa96205c3ecbb3d34c7c24189e4553c7ca2ebc7e1dd07432339b980272/graphql_core-3.2.8.tar.gz", hash = "sha256:015457da5d996c924ddf57a43f4e959b0b94fb695b85ed4c29446e508ed65cf3", size = 513181, upload-time = "2026-03-05T19:55:37.332Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/0a/14/933037032608787fb92e365883ad6a741c235e0ff992865ec5d904a38f1e/graphql_core-3.2.7-py3-none-any.whl", hash = "sha256:17fc8f3ca4a42913d8e24d9ac9f08deddf0a0b2483076575757f6c412ead2ec0", size = 207262, upload-time = "2025-11-01T22:30:38.912Z" }, { url = "https://files.pythonhosted.org/packages/86/41/cb887d9afc5dabd78feefe6ccbaf83ff423c206a7a1b7aeeac05120b2125/graphql_core-3.2.8-py3-none-any.whl", hash = "sha256:cbee07bee1b3ed5e531723685369039f32ff815ef60166686e0162f540f1520c", size = 207349, upload-time = "2026-03-05T19:55:35.911Z" },
] ]
[[package]] [[package]]
@@ -668,11 +655,11 @@ wheels = [
 [[package]]
 name = "jaraco-context"
-version = "6.1.0"
+version = "6.1.1"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/cb/9c/a788f5bb29c61e456b8ee52ce76dbdd32fd72cd73dd67bc95f42c7a8d13c/jaraco_context-6.1.0.tar.gz", hash = "sha256:129a341b0a85a7db7879e22acd66902fda67882db771754574338898b2d5d86f", size = 15850, upload-time = "2026-01-13T02:53:53.847Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/27/7b/c3081ff1af947915503121c649f26a778e1a2101fd525f74aef997d75b7e/jaraco_context-6.1.1.tar.gz", hash = "sha256:bc046b2dc94f1e5532bd02402684414575cc11f565d929b6563125deb0a6e581", size = 15832, upload-time = "2026-03-07T15:46:04.63Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/8d/48/aa685dbf1024c7bd82bede569e3a85f82c32fd3d79ba5fea578f0159571a/jaraco_context-6.1.0-py3-none-any.whl", hash = "sha256:a43b5ed85815223d0d3cfdb6d7ca0d2bc8946f28f30b6f3216bda070f68badda", size = 7065, upload-time = "2026-01-13T02:53:53.031Z" },
+    { url = "https://files.pythonhosted.org/packages/f4/49/c152890d49102b280ecf86ba5f80a8c111c3a155dafa3bd24aeb64fde9e1/jaraco_context-6.1.1-py3-none-any.whl", hash = "sha256:0df6a0287258f3e364072c3e40d5411b20cafa30cb28c4839d24319cecf9f808", size = 7005, upload-time = "2026-03-07T15:46:03.515Z" },
 ]
 
 [[package]]
@@ -722,17 +709,16 @@ wheels = [
 [[package]]
 name = "jsonschema-path"
-version = "0.3.4"
+version = "0.4.5"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "pathable" },
     { name = "pyyaml" },
     { name = "referencing" },
-    { name = "requests" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/6e/45/41ebc679c2a4fced6a722f624c18d658dee42612b83ea24c1caf7c0eb3a8/jsonschema_path-0.3.4.tar.gz", hash = "sha256:8365356039f16cc65fddffafda5f58766e34bebab7d6d105616ab52bc4297001", size = 11159, upload-time = "2025-01-24T14:33:16.547Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/5b/8a/7e6102f2b8bdc6705a9eb5294f8f6f9ccd3a8420e8e8e19671d1dd773251/jsonschema_path-0.4.5.tar.gz", hash = "sha256:c6cd7d577ae290c7defd4f4029e86fdb248ca1bd41a07557795b3c95e5144918", size = 15113, upload-time = "2026-03-03T09:56:46.87Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/cb/58/3485da8cb93d2f393bce453adeef16896751f14ba3e2024bc21dc9597646/jsonschema_path-0.3.4-py3-none-any.whl", hash = "sha256:f502191fdc2b22050f9a81c9237be9d27145b9001c55842bece5e94e382e52f8", size = 14810, upload-time = "2025-01-24T14:33:14.652Z" },
+    { url = "https://files.pythonhosted.org/packages/04/d5/4e96c44f6c1ea3d812cf5391d81a4f5abaa540abf8d04ecd7f66e0ed11df/jsonschema_path-0.4.5-py3-none-any.whl", hash = "sha256:7d77a2c3f3ec569a40efe5c5f942c44c1af2a6f96fe0866794c9ef5b8f87fd65", size = 19368, upload-time = "2026-03-03T09:56:45.39Z" },
 ]
@@ -764,58 +750,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/81/db/e655086b7f3a705df045bf0933bdd9c2f79bb3c97bfef1384598bb79a217/keyring-25.7.0-py3-none-any.whl", hash = "sha256:be4a0b195f149690c166e850609a477c532ddbfbaed96a404d4e43f8d5e2689f", size = 39160, upload-time = "2025-11-16T16:26:08.402Z" }, { url = "https://files.pythonhosted.org/packages/81/db/e655086b7f3a705df045bf0933bdd9c2f79bb3c97bfef1384598bb79a217/keyring-25.7.0-py3-none-any.whl", hash = "sha256:be4a0b195f149690c166e850609a477c532ddbfbaed96a404d4e43f8d5e2689f", size = 39160, upload-time = "2025-11-16T16:26:08.402Z" },
] ]
[[package]]
name = "lupa"
version = "2.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b8/1c/191c3e6ec6502e3dbe25a53e27f69a5daeac3e56de1f73c0138224171ead/lupa-2.6.tar.gz", hash = "sha256:9a770a6e89576be3447668d7ced312cd6fd41d3c13c2462c9dc2c2ab570e45d9", size = 7240282, upload-time = "2025-10-24T07:20:29.738Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/94/86/ce243390535c39d53ea17ccf0240815e6e457e413e40428a658ea4ee4b8d/lupa-2.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:47ce718817ef1cc0c40d87c3d5ae56a800d61af00fbc0fad1ca9be12df2f3b56", size = 951707, upload-time = "2025-10-24T07:18:03.884Z" },
{ url = "https://files.pythonhosted.org/packages/86/85/cedea5e6cbeb54396fdcc55f6b741696f3f036d23cfaf986d50d680446da/lupa-2.6-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:7aba985b15b101495aa4b07112cdc08baa0c545390d560ad5cfde2e9e34f4d58", size = 1916703, upload-time = "2025-10-24T07:18:05.6Z" },
{ url = "https://files.pythonhosted.org/packages/24/be/3d6b5f9a8588c01a4d88129284c726017b2089f3a3fd3ba8bd977292fea0/lupa-2.6-cp312-cp312-macosx_11_0_x86_64.whl", hash = "sha256:b766f62f95b2739f2248977d29b0722e589dcf4f0ccfa827ccbd29f0148bd2e5", size = 985152, upload-time = "2025-10-24T07:18:08.561Z" },
{ url = "https://files.pythonhosted.org/packages/eb/23/9f9a05beee5d5dce9deca4cb07c91c40a90541fc0a8e09db4ee670da550f/lupa-2.6-cp312-cp312-manylinux2010_i686.manylinux_2_12_i686.manylinux_2_28_i686.whl", hash = "sha256:00a934c23331f94cb51760097ebfab14b005d55a6b30a2b480e3c53dd2fa290d", size = 1159599, upload-time = "2025-10-24T07:18:10.346Z" },
{ url = "https://files.pythonhosted.org/packages/40/4e/e7c0583083db9d7f1fd023800a9767d8e4391e8330d56c2373d890ac971b/lupa-2.6-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:21de9f38bd475303e34a042b7081aabdf50bd9bafd36ce4faea2f90fd9f15c31", size = 1038686, upload-time = "2025-10-24T07:18:12.112Z" },
{ url = "https://files.pythonhosted.org/packages/1c/9f/5a4f7d959d4feba5e203ff0c31889e74d1ca3153122be4a46dca7d92bf7c/lupa-2.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:cf3bda96d3fc41237e964a69c23647d50d4e28421111360274d4799832c560e9", size = 2071956, upload-time = "2025-10-24T07:18:14.572Z" },
{ url = "https://files.pythonhosted.org/packages/92/34/2f4f13ca65d01169b1720176aedc4af17bc19ee834598c7292db232cb6dc/lupa-2.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5a76ead245da54801a81053794aa3975f213221f6542d14ec4b859ee2e7e0323", size = 1057199, upload-time = "2025-10-24T07:18:16.379Z" },
{ url = "https://files.pythonhosted.org/packages/35/2a/5f7d2eebec6993b0dcd428e0184ad71afb06a45ba13e717f6501bfed1da3/lupa-2.6-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:8dd0861741caa20886ddbda0a121d8e52fb9b5bb153d82fa9bba796962bf30e8", size = 1173693, upload-time = "2025-10-24T07:18:18.153Z" },
{ url = "https://files.pythonhosted.org/packages/e4/29/089b4d2f8e34417349af3904bb40bec40b65c8731f45e3fd8d497ca573e5/lupa-2.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:239e63948b0b23023f81d9a19a395e768ed3da6a299f84e7963b8f813f6e3f9c", size = 2164394, upload-time = "2025-10-24T07:18:20.403Z" },
{ url = "https://files.pythonhosted.org/packages/f3/1b/79c17b23c921f81468a111cad843b076a17ef4b684c4a8dff32a7969c3f0/lupa-2.6-cp312-cp312-win32.whl", hash = "sha256:325894e1099499e7a6f9c351147661a2011887603c71086d36fe0f964d52d1ce", size = 1420647, upload-time = "2025-10-24T07:18:23.368Z" },
{ url = "https://files.pythonhosted.org/packages/b8/15/5121e68aad3584e26e1425a5c9a79cd898f8a152292059e128c206ee817c/lupa-2.6-cp312-cp312-win_amd64.whl", hash = "sha256:c735a1ce8ee60edb0fe71d665f1e6b7c55c6021f1d340eb8c865952c602cd36f", size = 1688529, upload-time = "2025-10-24T07:18:25.523Z" },
{ url = "https://files.pythonhosted.org/packages/28/1d/21176b682ca5469001199d8b95fa1737e29957a3d185186e7a8b55345f2e/lupa-2.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:663a6e58a0f60e7d212017d6678639ac8df0119bc13c2145029dcba084391310", size = 947232, upload-time = "2025-10-24T07:18:27.878Z" },
{ url = "https://files.pythonhosted.org/packages/ce/4c/d327befb684660ca13cf79cd1f1d604331808f9f1b6fb6bf57832f8edf80/lupa-2.6-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:d1f5afda5c20b1f3217a80e9bc1b77037f8a6eb11612fd3ada19065303c8f380", size = 1908625, upload-time = "2025-10-24T07:18:29.944Z" },
{ url = "https://files.pythonhosted.org/packages/66/8e/ad22b0a19454dfd08662237a84c792d6d420d36b061f239e084f29d1a4f3/lupa-2.6-cp313-cp313-macosx_11_0_x86_64.whl", hash = "sha256:26f2b3c085fe76e9119e48c1013c1cccdc1f51585d456858290475aa38e7089e", size = 981057, upload-time = "2025-10-24T07:18:31.553Z" },
{ url = "https://files.pythonhosted.org/packages/5c/48/74859073ab276bd0566c719f9ca0108b0cfc1956ca0d68678d117d47d155/lupa-2.6-cp313-cp313-manylinux2010_i686.manylinux_2_12_i686.manylinux_2_28_i686.whl", hash = "sha256:60d2f902c7b96fb8ab98493dcff315e7bb4d0b44dc9dd76eb37de575025d5685", size = 1156227, upload-time = "2025-10-24T07:18:33.981Z" },
{ url = "https://files.pythonhosted.org/packages/09/6c/0e9ded061916877253c2266074060eb71ed99fb21d73c8c114a76725bce2/lupa-2.6-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a02d25dee3a3250967c36590128d9220ae02f2eda166a24279da0b481519cbff", size = 1035752, upload-time = "2025-10-24T07:18:36.32Z" },
{ url = "https://files.pythonhosted.org/packages/dd/ef/f8c32e454ef9f3fe909f6c7d57a39f950996c37a3deb7b391fec7903dab7/lupa-2.6-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6eae1ee16b886b8914ff292dbefbf2f48abfbdee94b33a88d1d5475e02423203", size = 2069009, upload-time = "2025-10-24T07:18:38.072Z" },
{ url = "https://files.pythonhosted.org/packages/53/dc/15b80c226a5225815a890ee1c11f07968e0aba7a852df41e8ae6fe285063/lupa-2.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b0edd5073a4ee74ab36f74fe61450148e6044f3952b8d21248581f3c5d1a58be", size = 1056301, upload-time = "2025-10-24T07:18:40.165Z" },
{ url = "https://files.pythonhosted.org/packages/31/14/2086c1425c985acfb30997a67e90c39457122df41324d3c179d6ee2292c6/lupa-2.6-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:0c53ee9f22a8a17e7d4266ad48e86f43771951797042dd51d1494aaa4f5f3f0a", size = 1170673, upload-time = "2025-10-24T07:18:42.426Z" },
{ url = "https://files.pythonhosted.org/packages/10/e5/b216c054cf86576c0191bf9a9f05de6f7e8e07164897d95eea0078dca9b2/lupa-2.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:de7c0f157a9064a400d828789191a96da7f4ce889969a588b87ec80de9b14772", size = 2162227, upload-time = "2025-10-24T07:18:46.112Z" },
{ url = "https://files.pythonhosted.org/packages/59/2f/33ecb5bedf4f3bc297ceacb7f016ff951331d352f58e7e791589609ea306/lupa-2.6-cp313-cp313-win32.whl", hash = "sha256:ee9523941ae0a87b5b703417720c5d78f72d2f5bc23883a2ea80a949a3ed9e75", size = 1419558, upload-time = "2025-10-24T07:18:48.371Z" },
{ url = "https://files.pythonhosted.org/packages/f9/b4/55e885834c847ea610e111d87b9ed4768f0afdaeebc00cd46810f25029f6/lupa-2.6-cp313-cp313-win_amd64.whl", hash = "sha256:b1335a5835b0a25ebdbc75cf0bda195e54d133e4d994877ef025e218c2e59db9", size = 1683424, upload-time = "2025-10-24T07:18:50.976Z" },
{ url = "https://files.pythonhosted.org/packages/66/9d/d9427394e54d22a35d1139ef12e845fd700d4872a67a34db32516170b746/lupa-2.6-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:dcb6d0a3264873e1653bc188499f48c1fb4b41a779e315eba45256cfe7bc33c1", size = 953818, upload-time = "2025-10-24T07:18:53.378Z" },
{ url = "https://files.pythonhosted.org/packages/10/41/27bbe81953fb2f9ecfced5d9c99f85b37964cfaf6aa8453bb11283983721/lupa-2.6-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:a37e01f2128f8c36106726cb9d360bac087d58c54b4522b033cc5691c584db18", size = 1915850, upload-time = "2025-10-24T07:18:55.259Z" },
{ url = "https://files.pythonhosted.org/packages/a3/98/f9ff60db84a75ba8725506bbf448fb085bc77868a021998ed2a66d920568/lupa-2.6-cp314-cp314-macosx_11_0_x86_64.whl", hash = "sha256:458bd7e9ff3c150b245b0fcfbb9bd2593d1152ea7f0a7b91c1d185846da033fe", size = 982344, upload-time = "2025-10-24T07:18:57.05Z" },
{ url = "https://files.pythonhosted.org/packages/41/f7/f39e0f1c055c3b887d86b404aaf0ca197b5edfd235a8b81b45b25bac7fc3/lupa-2.6-cp314-cp314-manylinux2010_i686.manylinux_2_12_i686.manylinux_2_28_i686.whl", hash = "sha256:052ee82cac5206a02df77119c325339acbc09f5ce66967f66a2e12a0f3211cad", size = 1156543, upload-time = "2025-10-24T07:18:59.251Z" },
{ url = "https://files.pythonhosted.org/packages/9e/9c/59e6cffa0d672d662ae17bd7ac8ecd2c89c9449dee499e3eb13ca9cd10d9/lupa-2.6-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:96594eca3c87dd07938009e95e591e43d554c1dbd0385be03c100367141db5a8", size = 1047974, upload-time = "2025-10-24T07:19:01.449Z" },
{ url = "https://files.pythonhosted.org/packages/23/c6/a04e9cef7c052717fcb28fb63b3824802488f688391895b618e39be0f684/lupa-2.6-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e8faddd9d198688c8884091173a088a8e920ecc96cda2ffed576a23574c4b3f6", size = 2073458, upload-time = "2025-10-24T07:19:03.369Z" },
{ url = "https://files.pythonhosted.org/packages/e6/10/824173d10f38b51fc77785228f01411b6ca28826ce27404c7c912e0e442c/lupa-2.6-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:daebb3a6b58095c917e76ba727ab37b27477fb926957c825205fbda431552134", size = 1067683, upload-time = "2025-10-24T07:19:06.2Z" },
{ url = "https://files.pythonhosted.org/packages/b6/dc/9692fbcf3c924d9c4ece2d8d2f724451ac2e09af0bd2a782db1cef34e799/lupa-2.6-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:f3154e68972befe0f81564e37d8142b5d5d79931a18309226a04ec92487d4ea3", size = 1171892, upload-time = "2025-10-24T07:19:08.544Z" },
{ url = "https://files.pythonhosted.org/packages/84/ff/e318b628d4643c278c96ab3ddea07fc36b075a57383c837f5b11e537ba9d/lupa-2.6-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e4dadf77b9fedc0bfa53417cc28dc2278a26d4cbd95c29f8927ad4d8fe0a7ef9", size = 2166641, upload-time = "2025-10-24T07:19:10.485Z" },
{ url = "https://files.pythonhosted.org/packages/12/f7/a6f9ec2806cf2d50826980cdb4b3cffc7691dc6f95e13cc728846d5cb793/lupa-2.6-cp314-cp314-win32.whl", hash = "sha256:cb34169c6fa3bab3e8ac58ca21b8a7102f6a94b6a5d08d3636312f3f02fafd8f", size = 1456857, upload-time = "2025-10-24T07:19:37.989Z" },
{ url = "https://files.pythonhosted.org/packages/c5/de/df71896f25bdc18360fdfa3b802cd7d57d7fede41a0e9724a4625b412c85/lupa-2.6-cp314-cp314-win_amd64.whl", hash = "sha256:b74f944fe46c421e25d0f8692aef1e842192f6f7f68034201382ac440ef9ea67", size = 1731191, upload-time = "2025-10-24T07:19:40.281Z" },
{ url = "https://files.pythonhosted.org/packages/47/3c/a1f23b01c54669465f5f4c4083107d496fbe6fb45998771420e9aadcf145/lupa-2.6-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:0e21b716408a21ab65723f8841cf7f2f37a844b7a965eeabb785e27fca4099cf", size = 999343, upload-time = "2025-10-24T07:19:12.519Z" },
{ url = "https://files.pythonhosted.org/packages/c5/6d/501994291cb640bfa2ccf7f554be4e6914afa21c4026bd01bff9ca8aac57/lupa-2.6-cp314-cp314t-macosx_11_0_universal2.whl", hash = "sha256:589db872a141bfff828340079bbdf3e9a31f2689f4ca0d88f97d9e8c2eae6142", size = 2000730, upload-time = "2025-10-24T07:19:14.869Z" },
{ url = "https://files.pythonhosted.org/packages/53/a5/457ffb4f3f20469956c2d4c4842a7675e884efc895b2f23d126d23e126cc/lupa-2.6-cp314-cp314t-macosx_11_0_x86_64.whl", hash = "sha256:cd852a91a4a9d4dcbb9a58100f820a75a425703ec3e3f049055f60b8533b7953", size = 1021553, upload-time = "2025-10-24T07:19:17.123Z" },
{ url = "https://files.pythonhosted.org/packages/51/6b/36bb5a5d0960f2a5c7c700e0819abb76fd9bf9c1d8a66e5106416d6e9b14/lupa-2.6-cp314-cp314t-manylinux2010_i686.manylinux_2_12_i686.manylinux_2_28_i686.whl", hash = "sha256:0334753be028358922415ca97a64a3048e4ed155413fc4eaf87dd0a7e2752983", size = 1133275, upload-time = "2025-10-24T07:19:20.51Z" },
{ url = "https://files.pythonhosted.org/packages/19/86/202ff4429f663013f37d2229f6176ca9f83678a50257d70f61a0a97281bf/lupa-2.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:661d895cd38c87658a34780fac54a690ec036ead743e41b74c3fb81a9e65a6aa", size = 1038441, upload-time = "2025-10-24T07:19:22.509Z" },
{ url = "https://files.pythonhosted.org/packages/a7/42/d8125f8e420714e5b52e9c08d88b5329dfb02dcca731b4f21faaee6cc5b5/lupa-2.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6aa58454ccc13878cc177c62529a2056be734da16369e451987ff92784994ca7", size = 2058324, upload-time = "2025-10-24T07:19:24.979Z" },
{ url = "https://files.pythonhosted.org/packages/2b/2c/47bf8b84059876e877a339717ddb595a4a7b0e8740bacae78ba527562e1c/lupa-2.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:1425017264e470c98022bba8cff5bd46d054a827f5df6b80274f9cc71dafd24f", size = 1060250, upload-time = "2025-10-24T07:19:27.262Z" },
{ url = "https://files.pythonhosted.org/packages/c2/06/d88add2b6406ca1bdec99d11a429222837ca6d03bea42ca75afa169a78cb/lupa-2.6-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:224af0532d216e3105f0a127410f12320f7c5f1aa0300bdf9646b8d9afb0048c", size = 1151126, upload-time = "2025-10-24T07:19:29.522Z" },
{ url = "https://files.pythonhosted.org/packages/b4/a0/89e6a024c3b4485b89ef86881c9d55e097e7cb0bdb74efb746f2fa6a9a76/lupa-2.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:9abb98d5a8fd27c8285302e82199f0e56e463066f88f619d6594a450bf269d80", size = 2153693, upload-time = "2025-10-24T07:19:31.379Z" },
{ url = "https://files.pythonhosted.org/packages/b6/36/a0f007dc58fc1bbf51fb85dcc82fcb1f21b8c4261361de7dab0e3d8521ef/lupa-2.6-cp314-cp314t-win32.whl", hash = "sha256:1849efeba7a8f6fb8aa2c13790bee988fd242ae404bd459509640eeea3d1e291", size = 1590104, upload-time = "2025-10-24T07:19:33.514Z" },
{ url = "https://files.pythonhosted.org/packages/7d/5e/db903ce9cf82c48d6b91bf6d63ae4c8d0d17958939a4e04ba6b9f38b8643/lupa-2.6-cp314-cp314t-win_amd64.whl", hash = "sha256:fc1498d1a4fc028bc521c26d0fad4ca00ed63b952e32fb95949bda76a04bad52", size = 1913818, upload-time = "2025-10-24T07:19:36.039Z" },
]
[[package]] [[package]]
name = "markdown-it-py" name = "markdown-it-py"
version = "4.0.0" version = "4.0.0"
@@ -919,15 +853,15 @@ wheels = [
 [[package]]
 name = "opentelemetry-api"
-version = "1.39.1"
+version = "1.40.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "importlib-metadata" },
     { name = "typing-extensions" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/97/b9/3161be15bb8e3ad01be8be5a968a9237c3027c5be504362ff800fca3e442/opentelemetry_api-1.39.1.tar.gz", hash = "sha256:fbde8c80e1b937a2c61f20347e91c0c18a1940cecf012d62e65a7caf08967c9c", size = 65767, upload-time = "2025-12-11T13:32:39.182Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/2c/1d/4049a9e8698361cc1a1aa03a6c59e4fa4c71e0c0f94a30f988a6876a2ae6/opentelemetry_api-1.40.0.tar.gz", hash = "sha256:159be641c0b04d11e9ecd576906462773eb97ae1b657730f0ecf64d32071569f", size = 70851, upload-time = "2026-03-04T14:17:21.555Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/cf/df/d3f1ddf4bb4cb50ed9b1139cc7b1c54c34a1e7ce8fd1b9a37c0d1551a6bd/opentelemetry_api-1.39.1-py3-none-any.whl", hash = "sha256:2edd8463432a7f8443edce90972169b195e7d6a05500cd29e6d13898187c9950", size = 66356, upload-time = "2025-12-11T13:32:17.304Z" },
+    { url = "https://files.pythonhosted.org/packages/5f/bf/93795954016c522008da367da292adceed71cca6ee1717e1d64c83089099/opentelemetry_api-1.40.0-py3-none-any.whl", hash = "sha256:82dd69331ae74b06f6a874704be0cfaa49a1650e1537d4a813b86ecef7d0ecf9", size = 68676, upload-time = "2026-03-04T14:17:01.24Z" },
 ]
 
 [[package]]
@@ -941,29 +875,20 @@ wheels = [
 [[package]]
 name = "pathable"
-version = "0.4.4"
+version = "0.5.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/67/93/8f2c2075b180c12c1e9f6a09d1a985bc2036906b13dff1d8917e395f2048/pathable-0.4.4.tar.gz", hash = "sha256:6905a3cd17804edfac7875b5f6c9142a218c7caef78693c2dbbbfbac186d88b2", size = 8124, upload-time = "2025-01-10T18:43:13.247Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/72/55/b748445cb4ea6b125626f15379be7c96d1035d4fa3e8fee362fa92298abf/pathable-0.5.0.tar.gz", hash = "sha256:d81938348a1cacb525e7c75166270644782c0fb9c8cecc16be033e71427e0ef1", size = 16655, upload-time = "2026-02-20T08:47:00.748Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/7d/eb/b6260b31b1a96386c0a880edebe26f89669098acea8e0318bff6adb378fd/pathable-0.4.4-py3-none-any.whl", hash = "sha256:5ae9e94793b6ef5a4cbe0a7ce9dbbefc1eec38df253763fd0aeeacf2762dbbc2", size = 9592, upload-time = "2025-01-10T18:43:11.88Z" },
+    { url = "https://files.pythonhosted.org/packages/52/96/5a770e5c461462575474468e5af931cff9de036e7c2b4fea23c1c58d2cbe/pathable-0.5.0-py3-none-any.whl", hash = "sha256:646e3d09491a6351a0c82632a09c02cdf70a252e73196b36d8a15ba0a114f0a6", size = 16867, upload-time = "2026-02-20T08:46:59.536Z" },
 ]
-
-[[package]]
-name = "pathvalidate"
-version = "3.3.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/fa/2a/52a8da6fe965dea6192eb716b357558e103aea0a1e9a8352ad575a8406ca/pathvalidate-3.3.1.tar.gz", hash = "sha256:b18c07212bfead624345bb8e1d6141cdcf15a39736994ea0b94035ad2b1ba177", size = 63262, upload-time = "2025-06-15T09:07:20.736Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/9a/70/875f4a23bfc4731703a5835487d0d2fb999031bd415e7d17c0ae615c18b7/pathvalidate-3.3.1-py3-none-any.whl", hash = "sha256:5263baab691f8e1af96092fa5137ee17df5bdfbd6cff1fcac4d6ef4bc2e1735f", size = 24305, upload-time = "2025-06-15T09:07:19.117Z" },
-]
 
 [[package]]
 name = "platformdirs"
-version = "4.9.2"
+version = "4.9.4"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/1b/04/fea538adf7dbbd6d186f551d595961e564a3b6715bdf276b477460858672/platformdirs-4.9.2.tar.gz", hash = "sha256:9a33809944b9db043ad67ca0db94b14bf452cc6aeaac46a88ea55b26e2e9d291", size = 28394, upload-time = "2026-02-16T03:56:10.574Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/19/56/8d4c30c8a1d07013911a8fdbd8f89440ef9f08d07a1b50ab8ca8be5a20f9/platformdirs-4.9.4.tar.gz", hash = "sha256:1ec356301b7dc906d83f371c8f487070e99d3ccf9e501686456394622a01a934", size = 28737, upload-time = "2026-03-05T18:34:13.271Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/48/31/05e764397056194206169869b50cf2fee4dbbbc71b344705b9c0d878d4d8/platformdirs-4.9.2-py3-none-any.whl", hash = "sha256:9170634f126f8efdae22fb58ae8a0eaa86f38365bc57897a6c4f781d1f5875bd", size = 21168, upload-time = "2026-02-16T03:56:08.891Z" },
+    { url = "https://files.pythonhosted.org/packages/63/d7/97f7e3a6abb67d8080dd406fd4df842c2be0efaf712d1c899c32a075027c/platformdirs-4.9.4-py3-none-any.whl", hash = "sha256:68a9a4619a666ea6439f2ff250c12a853cd1cbd5158d258bd824a7df6be2f868", size = 21216, upload-time = "2026-03-05T18:34:12.172Z" },
 ]
 
 [[package]]
@@ -975,32 +900,23 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" }, { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
] ]
[[package]]
name = "prometheus-client"
version = "0.24.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/58/a794d23feb6b00fc0c72787d7e87d872a6730dd9ed7c7b3e954637d8f280/prometheus_client-0.24.1.tar.gz", hash = "sha256:7e0ced7fbbd40f7b84962d5d2ab6f17ef88a72504dcf7c0b40737b43b2a461f9", size = 85616, upload-time = "2026-01-14T15:26:26.965Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/74/c3/24a2f845e3917201628ecaba4f18bab4d18a337834c1df2a159ee9d22a42/prometheus_client-0.24.1-py3-none-any.whl", hash = "sha256:150db128af71a5c2482b36e588fc8a6b95e498750da4b17065947c16070f4055", size = 64057, upload-time = "2026-01-14T15:26:24.42Z" },
]
[[package]]
name = "py-key-value-aio"
-version = "0.3.0"
+version = "0.4.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "beartype" },
-    { name = "py-key-value-shared" },
+    { name = "typing-extensions" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/93/ce/3136b771dddf5ac905cc193b461eb67967cf3979688c6696e1f2cdcde7ea/py_key_value_aio-0.3.0.tar.gz", hash = "sha256:858e852fcf6d696d231266da66042d3355a7f9871650415feef9fca7a6cd4155", size = 50801, upload-time = "2025-11-17T16:50:04.711Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/04/3c/0397c072a38d4bc580994b42e0c90c5f44f679303489e4376289534735e5/py_key_value_aio-0.4.4.tar.gz", hash = "sha256:e3012e6243ed7cc09bb05457bd4d03b1ba5c2b1ca8700096b3927db79ffbbe55", size = 92300, upload-time = "2026-02-16T21:21:43.245Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/99/10/72f6f213b8f0bce36eff21fda0a13271834e9eeff7f9609b01afdc253c79/py_key_value_aio-0.3.0-py3-none-any.whl", hash = "sha256:1c781915766078bfd608daa769fefb97e65d1d73746a3dfb640460e322071b64", size = 96342, upload-time = "2025-11-17T16:50:03.801Z" },
+    { url = "https://files.pythonhosted.org/packages/32/69/f1b537ee70b7def42d63124a539ed3026a11a3ffc3086947a1ca6e861868/py_key_value_aio-0.4.4-py3-none-any.whl", hash = "sha256:18e17564ecae61b987f909fc2cd41ee2012c84b4b1dcb8c055cf8b4bc1bf3f5d", size = 152291, upload-time = "2026-02-16T21:21:44.241Z" },
]
[package.optional-dependencies]
-disk = [
-    { name = "diskcache" },
-    { name = "pathvalidate" },
+filetree = [
+    { name = "aiofile" },
+    { name = "anyio" },
]
keyring = [
    { name = "keyring" },
@@ -1008,22 +924,6 @@ keyring = [
memory = [
    { name = "cachetools" },
]
-redis = [
-    { name = "redis" },
-]
-[[package]]
-name = "py-key-value-shared"
-version = "0.3.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "beartype" },
-    { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/7b/e4/1971dfc4620a3a15b4579fe99e024f5edd6e0967a71154771a059daff4db/py_key_value_shared-0.3.0.tar.gz", hash = "sha256:8fdd786cf96c3e900102945f92aa1473138ebe960ef49da1c833790160c28a4b", size = 11666, upload-time = "2025-11-17T16:50:06.849Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/51/e4/b8b0a03ece72f47dce2307d36e1c34725b7223d209fc679315ffe6a4e2c3/py_key_value_shared-0.3.0-py3-none-any.whl", hash = "sha256:5b0efba7ebca08bb158b1e93afc2f07d30b8f40c2fc12ce24a4c0d84f42f9298", size = 19560, upload-time = "2025-11-17T16:50:05.954Z" },
-]
[[package]]
name = "pycparser"
@@ -1127,38 +1027,16 @@ wheels = [
[[package]]
name = "pydantic-settings"
-version = "2.13.0"
+version = "2.13.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "pydantic" },
    { name = "python-dotenv" },
    { name = "typing-inspection" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/96/a1/ae859ffac5a3338a66b74c5e29e244fd3a3cc483c89feaf9f56c39898d75/pydantic_settings-2.13.0.tar.gz", hash = "sha256:95d875514610e8595672800a5c40b073e99e4aae467fa7c8f9c263061ea2e1fe", size = 222450, upload-time = "2026-02-15T12:11:23.476Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/52/6d/fffca34caecc4a3f97bda81b2098da5e8ab7efc9a66e819074a11955d87e/pydantic_settings-2.13.1.tar.gz", hash = "sha256:b4c11847b15237fb0171e1462bf540e294affb9b86db4d9aa5c01730bdbe4025", size = 223826, upload-time = "2026-02-19T13:45:08.055Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/b0/1a/dd1b9d7e627486cf8e7523d09b70010e05a4bc41414f4ae6ce184cf0afb6/pydantic_settings-2.13.0-py3-none-any.whl", hash = "sha256:d67b576fff39cd086b595441bf9c75d4193ca9c0ed643b90360694d0f1240246", size = 58429, upload-time = "2026-02-15T12:11:22.133Z" },
+    { url = "https://files.pythonhosted.org/packages/00/4b/ccc026168948fec4f7555b9164c724cf4125eac006e176541483d2c959be/pydantic_settings-2.13.1-py3-none-any.whl", hash = "sha256:d56fd801823dbeae7f0975e1f8c8e25c258eb75d278ea7abb5d9cebb01b56237", size = 58929, upload-time = "2026-02-19T13:45:06.034Z" },
-]
-[[package]]
-name = "pydocket"
-version = "0.17.7"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "cloudpickle" },
-    { name = "croniter" },
-    { name = "fakeredis", extra = ["lua"] },
-    { name = "opentelemetry-api" },
-    { name = "prometheus-client" },
-    { name = "py-key-value-aio", extra = ["memory", "redis"] },
-    { name = "python-json-logger" },
-    { name = "redis" },
-    { name = "rich" },
-    { name = "typer" },
-    { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/cd/b2/5e12dbe2acf59e4499285e8eee66e8e81b6ba2f553696d2f4ccca0a7978c/pydocket-0.17.7.tar.gz", hash = "sha256:5c77ec6731a167cdcb44174abf793fe63e7b6c1c1c8a799cc6ec7502b361ee77", size = 347071, upload-time = "2026-02-11T21:01:31.744Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/c9/c7/68f2553819965326f968375f02597d49efe71b309ba9d8fef539aeb51c48/pydocket-0.17.7-py3-none-any.whl", hash = "sha256:d1e0921ac02026c4a0140fc72a3848545f3e91e6e74c6e32c588489017c130b2", size = 94608, upload-time = "2026-02-11T21:01:30.111Z" },
]
[[package]]
@@ -1172,11 +1050,11 @@ wheels = [
[[package]]
name = "pyjwt"
-version = "2.11.0"
+version = "2.12.1"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/5c/5a/b46fa56bf322901eee5b0454a34343cdbdae202cd421775a8ee4e42fd519/pyjwt-2.11.0.tar.gz", hash = "sha256:35f95c1f0fbe5d5ba6e43f00271c275f7a1a4db1dab27bf708073b75318ea623", size = 98019, upload-time = "2026-01-30T19:59:55.694Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/c2/27/a3b6e5bf6ff856d2509292e95c8f57f0df7017cf5394921fc4e4ef40308a/pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b", size = 102564, upload-time = "2026-03-13T19:27:37.25Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/6f/01/c26ce75ba460d5cd503da9e13b21a33804d38c2165dec7b716d06b13010c/pyjwt-2.11.0-py3-none-any.whl", hash = "sha256:94a6bde30eb5c8e04fee991062b534071fd1439ef58d2adc9ccb823e7bcd0469", size = 28224, upload-time = "2026-01-30T19:59:54.539Z" },
+    { url = "https://files.pythonhosted.org/packages/e5/7a/8dd906bd22e79e47397a61742927f6747fe93242ef86645ee9092e610244/pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c", size = 29726, upload-time = "2026-03-13T19:27:35.677Z" },
]
[package.optional-dependencies]
@@ -1245,34 +1123,13 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" }, { url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" },
] ]
-[[package]]
-name = "python-dateutil"
-version = "2.9.0.post0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "six" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432, upload-time = "2024-03-01T18:36:20.211Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892, upload-time = "2024-03-01T18:36:18.57Z" },
-]
[[package]]
name = "python-dotenv" name = "python-dotenv"
version = "1.2.1" version = "1.2.2"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" } sdist = { url = "https://files.pythonhosted.org/packages/82/ed/0301aeeac3e5353ef3d94b6ec08bbcabd04a72018415dcb29e588514bba8/python_dotenv-1.2.2.tar.gz", hash = "sha256:2c371a91fbd7ba082c2c1dc1f8bf89ca22564a087c2c287cd9b662adde799cf3", size = 50135, upload-time = "2026-03-01T16:00:26.196Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" }, { url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" },
]
[[package]]
name = "python-json-logger"
version = "4.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/29/bf/eca6a3d43db1dae7070f70e160ab20b807627ba953663ba07928cdd3dc58/python_json_logger-4.0.0.tar.gz", hash = "sha256:f58e68eb46e1faed27e0f574a55a0455eecd7b8a5b88b85a784519ba3cff047f", size = 17683, upload-time = "2025-10-06T04:15:18.984Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/51/e5/fecf13f06e5e5f67e8837d777d1bc43fac0ed2b77a676804df5c34744727/python_json_logger-4.0.0-py3-none-any.whl", hash = "sha256:af09c9daf6a813aa4cc7180395f50f2a9e5fa056034c9953aec92e381c5ba1e2", size = 15548, upload-time = "2025-10-06T04:15:17.553Z" },
] ]
[[package]] [[package]]
@@ -1284,15 +1141,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/1b/d0/397f9626e711ff749a95d96b7af99b9c566a9bb5129b8e4c10fc4d100304/python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155", size = 24579, upload-time = "2026-01-25T10:15:54.811Z" }, { url = "https://files.pythonhosted.org/packages/1b/d0/397f9626e711ff749a95d96b7af99b9c566a9bb5129b8e4c10fc4d100304/python_multipart-0.0.22-py3-none-any.whl", hash = "sha256:2b2cd894c83d21bf49d702499531c7bafd057d730c201782048f7945d82de155", size = 24579, upload-time = "2026-01-25T10:15:54.811Z" },
] ]
-[[package]]
-name = "pytz"
-version = "2025.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f8/bf/abbd3cdfb8fbc7fb3d4d38d320f2441b1e7cbe29be4f23797b4a2b5d8aac/pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3", size = 320884, upload-time = "2025-03-25T02:25:00.538Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/81/c4/34e93fe5f5429d7570ec1fa436f1986fb1f00c3e0f43a589fe2bbcd22c3f/pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00", size = 509225, upload-time = "2025-03-25T02:24:58.468Z" },
-]
[[package]]
name = "pywin32"
version = "311"
@@ -1378,27 +1226,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e1/67/921ec3024056483db83953ae8e48079ad62b92db7880013ca77632921dd0/readme_renderer-44.0-py3-none-any.whl", hash = "sha256:2fbca89b81a08526aadf1357a8c2ae889ec05fb03f5da67f9769c9a592166151", size = 13310, upload-time = "2024-07-08T15:00:56.577Z" }, { url = "https://files.pythonhosted.org/packages/e1/67/921ec3024056483db83953ae8e48079ad62b92db7880013ca77632921dd0/readme_renderer-44.0-py3-none-any.whl", hash = "sha256:2fbca89b81a08526aadf1357a8c2ae889ec05fb03f5da67f9769c9a592166151", size = 13310, upload-time = "2024-07-08T15:00:56.577Z" },
] ]
-[[package]]
-name = "redis"
-version = "7.2.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/9f/32/6fac13a11e73e1bc67a2ae821a72bfe4c2d8c4c48f0267e4a952be0f1bae/redis-7.2.0.tar.gz", hash = "sha256:4dd5bf4bd4ae80510267f14185a15cba2a38666b941aff68cccf0256b51c1f26", size = 4901247, upload-time = "2026-02-16T17:16:22.797Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/86/cf/f6180b67f99688d83e15c84c5beda831d1d341e95872d224f87ccafafe61/redis-7.2.0-py3-none-any.whl", hash = "sha256:01f591f8598e483f1842d429e8ae3a820804566f1c73dca1b80e23af9fba0497", size = 394898, upload-time = "2026-02-16T17:16:20.693Z" },
-]
[[package]]
name = "referencing" name = "referencing"
version = "0.36.2" version = "0.37.0"
source = { registry = "https://pypi.org/simple" } source = { registry = "https://pypi.org/simple" }
dependencies = [ dependencies = [
{ name = "attrs" }, { name = "attrs" },
{ name = "rpds-py" }, { name = "rpds-py" },
{ name = "typing-extensions", marker = "python_full_version < '3.13'" }, { name = "typing-extensions", marker = "python_full_version < '3.13'" },
] ]
sdist = { url = "https://files.pythonhosted.org/packages/2f/db/98b5c277be99dd18bfd91dd04e1b759cad18d1a338188c936e92f921c7e2/referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa", size = 74744, upload-time = "2025-01-25T08:48:16.138Z" } sdist = { url = "https://files.pythonhosted.org/packages/22/f5/df4e9027acead3ecc63e50fe1e36aca1523e1719559c499951bb4b53188f/referencing-0.37.0.tar.gz", hash = "sha256:44aefc3142c5b842538163acb373e24cce6632bd54bdb01b21ad5863489f50d8", size = 78036, upload-time = "2025-10-13T15:30:48.871Z" }
wheels = [ wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/b1/3baf80dc6d2b7bc27a95a67752d0208e410351e3feb4eb78de5f77454d8d/referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0", size = 26775, upload-time = "2025-01-25T08:48:14.241Z" }, { url = "https://files.pythonhosted.org/packages/2c/58/ca301544e1fa93ed4f80d724bf5b194f6e4b945841c5bfd555878eea9fcb/referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231", size = 26766, upload-time = "2025-10-13T15:30:47.625Z" },
] ]
[[package]] [[package]]
@@ -1451,15 +1290,15 @@ wheels = [
[[package]]
name = "rich"
-version = "14.3.2"
+version = "14.3.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "markdown-it-py" },
    { name = "pygments" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/74/99/a4cab2acbb884f80e558b0771e97e21e939c5dfb460f488d19df485e8298/rich-14.3.2.tar.gz", hash = "sha256:e712f11c1a562a11843306f5ed999475f09ac31ffb64281f73ab29ffdda8b3b8", size = 230143, upload-time = "2026-02-01T16:20:47.908Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/b3/c6/f3b320c27991c46f43ee9d856302c70dc2d0fb2dba4842ff739d5f46b393/rich-14.3.3.tar.gz", hash = "sha256:b8daa0b9e4eef54dd8cf7c86c03713f53241884e814f4e2f5fb342fe520f639b", size = 230582, upload-time = "2026-02-19T17:23:12.474Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/ef/45/615f5babd880b4bd7d405cc0dc348234c5ffb6ed1ea33e152ede08b2072d/rich-14.3.2-py3-none-any.whl", hash = "sha256:08e67c3e90884651da3239ea668222d19bea7b589149d8014a21c633420dbb69", size = 309963, upload-time = "2026-02-01T16:20:46.078Z" },
+    { url = "https://files.pythonhosted.org/packages/14/25/b208c5683343959b670dc001595f2f3737e051da617f66c31f7c4fa93abc/rich-14.3.3-py3-none-any.whl", hash = "sha256:793431c1f8619afa7d3b52b2cdec859562b950ea0d4b6b505397612db8d5362d", size = 310458, upload-time = "2026-02-19T17:23:13.732Z" },
]
[[package]]
@@ -1558,27 +1397,27 @@ wheels = [
[[package]]
name = "ruff"
-version = "0.15.1"
+version = "0.15.6"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/04/dc/4e6ac71b511b141cf626357a3946679abeba4cf67bc7cc5a17920f31e10d/ruff-0.15.1.tar.gz", hash = "sha256:c590fe13fb57c97141ae975c03a1aedb3d3156030cabd740d6ff0b0d601e203f", size = 4540855, upload-time = "2026-02-12T23:09:09.998Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/51/df/f8629c19c5318601d3121e230f74cbee7a3732339c52b21daa2b82ef9c7d/ruff-0.15.6.tar.gz", hash = "sha256:8394c7bb153a4e3811a4ecdacd4a8e6a4fa8097028119160dffecdcdf9b56ae4", size = 4597916, upload-time = "2026-03-12T23:05:47.51Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/23/bf/e6e4324238c17f9d9120a9d60aa99a7daaa21204c07fcd84e2ef03bb5fd1/ruff-0.15.1-py3-none-linux_armv6l.whl", hash = "sha256:b101ed7cf4615bda6ffe65bdb59f964e9f4a0d3f85cbf0e54f0ab76d7b90228a", size = 10367819, upload-time = "2026-02-12T23:09:03.598Z" },
+    { url = "https://files.pythonhosted.org/packages/9e/2f/4e03a7e5ce99b517e98d3b4951f411de2b0fa8348d39cf446671adcce9a2/ruff-0.15.6-py3-none-linux_armv6l.whl", hash = "sha256:7c98c3b16407b2cf3d0f2b80c80187384bc92c6774d85fefa913ecd941256fff", size = 10508953, upload-time = "2026-03-12T23:05:17.246Z" },
-    { url = "https://files.pythonhosted.org/packages/b3/ea/c8f89d32e7912269d38c58f3649e453ac32c528f93bb7f4219258be2e7ed/ruff-0.15.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:939c995e9277e63ea632cc8d3fae17aa758526f49a9a850d2e7e758bfef46602", size = 10798618, upload-time = "2026-02-12T23:09:22.928Z" },
+    { url = "https://files.pythonhosted.org/packages/70/60/55bcdc3e9f80bcf39edf0cd272da6fa511a3d94d5a0dd9e0adf76ceebdb4/ruff-0.15.6-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:ee7dcfaad8b282a284df4aa6ddc2741b3f4a18b0555d626805555a820ea181c3", size = 10942257, upload-time = "2026-03-12T23:05:23.076Z" },
-    { url = "https://files.pythonhosted.org/packages/5e/0f/1d0d88bc862624247d82c20c10d4c0f6bb2f346559d8af281674cf327f15/ruff-0.15.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1d83466455fdefe60b8d9c8df81d3c1bbb2115cede53549d3b522ce2bc703899", size = 10148518, upload-time = "2026-02-12T23:08:58.339Z" },
+    { url = "https://files.pythonhosted.org/packages/e7/f9/005c29bd1726c0f492bfa215e95154cf480574140cb5f867c797c18c790b/ruff-0.15.6-py3-none-macosx_11_0_arm64.whl", hash = "sha256:3bd9967851a25f038fc8b9ae88a7fbd1b609f30349231dffaa37b6804923c4bb", size = 10322683, upload-time = "2026-03-12T23:05:33.738Z" },
-    { url = "https://files.pythonhosted.org/packages/f5/c8/291c49cefaa4a9248e986256df2ade7add79388fe179e0691be06fae6f37/ruff-0.15.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9457e3c3291024866222b96108ab2d8265b477e5b1534c7ddb1810904858d16", size = 10518811, upload-time = "2026-02-12T23:09:31.865Z" },
+    { url = "https://files.pythonhosted.org/packages/5f/74/2f861f5fd7cbb2146bddb5501450300ce41562da36d21868c69b7a828169/ruff-0.15.6-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:13f4594b04e42cd24a41da653886b04d2ff87adbf57497ed4f728b0e8a4866f8", size = 10660986, upload-time = "2026-03-12T23:05:53.245Z" },
-    { url = "https://files.pythonhosted.org/packages/c3/1a/f5707440e5ae43ffa5365cac8bbb91e9665f4a883f560893829cf16a606b/ruff-0.15.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:92c92b003e9d4f7fbd33b1867bb15a1b785b1735069108dfc23821ba045b29bc", size = 10196169, upload-time = "2026-02-12T23:09:17.306Z" },
+    { url = "https://files.pythonhosted.org/packages/c1/a1/309f2364a424eccb763cdafc49df843c282609f47fe53aa83f38272389e0/ruff-0.15.6-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e2ed8aea2f3fe57886d3f00ea5b8aae5bf68d5e195f487f037a955ff9fbaac9e", size = 10332177, upload-time = "2026-03-12T23:05:56.145Z" },
-    { url = "https://files.pythonhosted.org/packages/2a/ff/26ddc8c4da04c8fd3ee65a89c9fb99eaa5c30394269d424461467be2271f/ruff-0.15.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fe5c41ab43e3a06778844c586251eb5a510f67125427625f9eb2b9526535779", size = 10990491, upload-time = "2026-02-12T23:09:25.503Z" },
+    { url = "https://files.pythonhosted.org/packages/30/41/7ebf1d32658b4bab20f8ac80972fb19cd4e2c6b78552be263a680edc55ac/ruff-0.15.6-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:70789d3e7830b848b548aae96766431c0dc01a6c78c13381f423bf7076c66d15", size = 11170783, upload-time = "2026-03-12T23:06:01.742Z" },
-    { url = "https://files.pythonhosted.org/packages/fc/00/50920cb385b89413f7cdb4bb9bc8fc59c1b0f30028d8bccc294189a54955/ruff-0.15.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:66a6dd6df4d80dc382c6484f8ce1bcceb55c32e9f27a8b94c32f6c7331bf14fb", size = 11843280, upload-time = "2026-02-12T23:09:19.88Z" },
+    { url = "https://files.pythonhosted.org/packages/76/be/6d488f6adca047df82cd62c304638bcb00821c36bd4881cfca221561fdfc/ruff-0.15.6-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:542aaf1de3154cea088ced5a819ce872611256ffe2498e750bbae5247a8114e9", size = 12044201, upload-time = "2026-03-12T23:05:28.697Z" },
-    { url = "https://files.pythonhosted.org/packages/5d/6d/2f5cad8380caf5632a15460c323ae326f1e1a2b5b90a6ee7519017a017ca/ruff-0.15.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6a4a42cbb8af0bda9bcd7606b064d7c0bc311a88d141d02f78920be6acb5aa83", size = 11274336, upload-time = "2026-02-12T23:09:14.907Z" },
+    { url = "https://files.pythonhosted.org/packages/71/68/e6f125df4af7e6d0b498f8d373274794bc5156b324e8ab4bf5c1b4fc0ec7/ruff-0.15.6-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1c22e6f02c16cfac3888aa636e9eba857254d15bbacc9906c9689fdecb1953ab", size = 11421561, upload-time = "2026-03-12T23:05:31.236Z" },
-    { url = "https://files.pythonhosted.org/packages/a3/1d/5f56cae1d6c40b8a318513599b35ea4b075d7dc1cd1d04449578c29d1d75/ruff-0.15.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4ab064052c31dddada35079901592dfba2e05f5b1e43af3954aafcbc1096a5b2", size = 11137288, upload-time = "2026-02-12T23:09:07.475Z" },
+    { url = "https://files.pythonhosted.org/packages/f1/9f/f85ef5fd01a52e0b472b26dc1b4bd228b8f6f0435975442ffa4741278703/ruff-0.15.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98893c4c0aadc8e448cfa315bd0cc343a5323d740fe5f28ef8a3f9e21b381f7e", size = 11310928, upload-time = "2026-03-12T23:05:45.288Z" },
-    { url = "https://files.pythonhosted.org/packages/cd/20/6f8d7d8f768c93b0382b33b9306b3b999918816da46537d5a61635514635/ruff-0.15.1-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5631c940fe9fe91f817a4c2ea4e81f47bee3ca4aa646134a24374f3c19ad9454", size = 11070681, upload-time = "2026-02-12T23:08:55.43Z" },
+    { url = "https://files.pythonhosted.org/packages/8c/26/b75f8c421f5654304b89471ed384ae8c7f42b4dff58fa6ce1626d7f2b59a/ruff-0.15.6-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:70d263770d234912374493e8cc1e7385c5d49376e41dfa51c5c3453169dc581c", size = 11235186, upload-time = "2026-03-12T23:05:50.677Z" },
-    { url = "https://files.pythonhosted.org/packages/9a/67/d640ac76069f64cdea59dba02af2e00b1fa30e2103c7f8d049c0cff4cafd/ruff-0.15.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:68138a4ba184b4691ccdc39f7795c66b3c68160c586519e7e8444cf5a53e1b4c", size = 10486401, upload-time = "2026-02-12T23:09:27.927Z" },
+    { url = "https://files.pythonhosted.org/packages/fc/d4/d5a6d065962ff7a68a86c9b4f5500f7d101a0792078de636526c0edd40da/ruff-0.15.6-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:55a1ad63c5a6e54b1f21b7514dfadc0c7fb40093fa22e95143cf3f64ebdcd512", size = 10635231, upload-time = "2026-03-12T23:05:37.044Z" },
-    { url = "https://files.pythonhosted.org/packages/65/3d/e1429f64a3ff89297497916b88c32a5cc88eeca7e9c787072d0e7f1d3e1e/ruff-0.15.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:518f9af03bfc33c03bdb4cb63fabc935341bb7f54af500f92ac309ecfbba6330", size = 10197452, upload-time = "2026-02-12T23:09:12.147Z" },
+    { url = "https://files.pythonhosted.org/packages/d6/56/7c3acf3d50910375349016cf33de24be021532042afbed87942858992491/ruff-0.15.6-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:8dc473ba093c5ec238bb1e7429ee676dca24643c471e11fbaa8a857925b061c0", size = 10340357, upload-time = "2026-03-12T23:06:04.748Z" },
-    { url = "https://files.pythonhosted.org/packages/78/83/e2c3bade17dad63bf1e1c2ffaf11490603b760be149e1419b07049b36ef2/ruff-0.15.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:da79f4d6a826caaea95de0237a67e33b81e6ec2e25fc7e1993a4015dffca7c61", size = 10693900, upload-time = "2026-02-12T23:09:34.418Z" },
+    { url = "https://files.pythonhosted.org/packages/06/54/6faa39e9c1033ff6a3b6e76b5df536931cd30caf64988e112bbf91ef5ce5/ruff-0.15.6-py3-none-musllinux_1_2_i686.whl", hash = "sha256:85b042377c2a5561131767974617006f99f7e13c63c111b998f29fc1e58a4cfb", size = 10860583, upload-time = "2026-03-12T23:05:58.978Z" },
-    { url = "https://files.pythonhosted.org/packages/a1/27/fdc0e11a813e6338e0706e8b39bb7a1d61ea5b36873b351acee7e524a72a/ruff-0.15.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3dd86dccb83cd7d4dcfac303ffc277e6048600dfc22e38158afa208e8bf94a1f", size = 11227302, upload-time = "2026-02-12T23:09:36.536Z" },
+    { url = "https://files.pythonhosted.org/packages/cb/1e/509a201b843b4dfb0b32acdedf68d951d3377988cae43949ba4c4133a96a/ruff-0.15.6-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cef49e30bc5a86a6a92098a7fbf6e467a234d90b63305d6f3ec01225a9d092e0", size = 11410976, upload-time = "2026-03-12T23:05:39.955Z" },
-    { url = "https://files.pythonhosted.org/packages/f6/58/ac864a75067dcbd3b95be5ab4eb2b601d7fbc3d3d736a27e391a4f92a5c1/ruff-0.15.1-py3-none-win32.whl", hash = "sha256:660975d9cb49b5d5278b12b03bb9951d554543a90b74ed5d366b20e2c57c2098", size = 10462555, upload-time = "2026-02-12T23:09:29.899Z" },
+    { url = "https://files.pythonhosted.org/packages/6c/25/3fc9114abf979a41673ce877c08016f8e660ad6cf508c3957f537d2e9fa9/ruff-0.15.6-py3-none-win32.whl", hash = "sha256:bbf67d39832404812a2d23020dda68fee7f18ce15654e96fb1d3ad21a5fe436c", size = 10616872, upload-time = "2026-03-12T23:05:42.451Z" },
-    { url = "https://files.pythonhosted.org/packages/e0/5e/d4ccc8a27ecdb78116feac4935dfc39d1304536f4296168f91ed3ec00cd2/ruff-0.15.1-py3-none-win_amd64.whl", hash = "sha256:c820fef9dd5d4172a6570e5721704a96c6679b80cf7be41659ed439653f62336", size = 11599956, upload-time = "2026-02-12T23:09:01.157Z" },
+    { url = "https://files.pythonhosted.org/packages/89/7a/09ece68445ceac348df06e08bf75db72d0e8427765b96c9c0ffabc1be1d9/ruff-0.15.6-py3-none-win_amd64.whl", hash = "sha256:aee25bc84c2f1007ecb5037dff75cef00414fdf17c23f07dc13e577883dca406", size = 11787271, upload-time = "2026-03-12T23:05:20.168Z" },
-    { url = "https://files.pythonhosted.org/packages/2a/07/5bda6a85b220c64c65686bc85bd0bbb23b29c62b3a9f9433fa55f17cda93/ruff-0.15.1-py3-none-win_arm64.whl", hash = "sha256:5ff7d5f0f88567850f45081fac8f4ec212be8d0b963e385c3f7d0d2eb4899416", size = 10874604, upload-time = "2026-02-12T23:09:05.515Z" },
+    { url = "https://files.pythonhosted.org/packages/7f/d0/578c47dd68152ddddddf31cd7fc67dc30b7cdf639a86275fda821b0d9d98/ruff-0.15.6-py3-none-win_arm64.whl", hash = "sha256:c34de3dd0b0ba203be50ae70f5910b17188556630e2178fd7d79fc030eb0d837", size = 11060497, upload-time = "2026-03-12T23:05:25.968Z" },
]
[[package]]
@@ -1594,44 +1433,17 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/46/f5af3402b579fd5e11573ce652019a67074317e18c1935cc0b4ba9b35552/secretstorage-3.5.0-py3-none-any.whl", hash = "sha256:0ce65888c0725fcb2c5bc0fdb8e5438eece02c523557ea40ce0703c266248137", size = 15554, upload-time = "2025-11-23T19:02:51.545Z" }, { url = "https://files.pythonhosted.org/packages/b7/46/f5af3402b579fd5e11573ce652019a67074317e18c1935cc0b4ba9b35552/secretstorage-3.5.0-py3-none-any.whl", hash = "sha256:0ce65888c0725fcb2c5bc0fdb8e5438eece02c523557ea40ce0703c266248137", size = 15554, upload-time = "2025-11-23T19:02:51.545Z" },
] ]
-[[package]]
-name = "shellingham"
-version = "1.5.4"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/58/15/8b3609fd3830ef7b27b655beb4b4e9c62313a4e8da8c676e142cc210d58e/shellingham-1.5.4.tar.gz", hash = "sha256:8dbca0739d487e5bd35ab3ca4b36e11c4078f3a234bfce294b0a0291363404de", size = 10310, upload-time = "2023-10-24T04:13:40.426Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" },
-]
-[[package]]
-name = "six"
-version = "1.17.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload-time = "2024-12-04T17:35:28.174Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" },
-]
-[[package]]
-name = "sortedcontainers"
-version = "2.4.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/e8/c4/ba2f8066cceb6f23394729afe52f3bf7adec04bf9ed2c820b39e19299111/sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88", size = 30594, upload-time = "2021-05-16T22:03:42.897Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/32/46/9cb0e58b2deb7f82b84065f37f3bffeb12413f947f9388e4cac22c4621ce/sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0", size = 29575, upload-time = "2021-05-16T22:03:41.177Z" },
-]
[[package]]
name = "sse-starlette"
-version = "3.2.0"
+version = "3.3.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "anyio" },
    { name = "starlette" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/8b/8d/00d280c03ffd39aaee0e86ec81e2d3b9253036a0f93f51d10503adef0e65/sse_starlette-3.2.0.tar.gz", hash = "sha256:8127594edfb51abe44eac9c49e59b0b01f1039d0c7461c6fd91d4e03b70da422", size = 27253, upload-time = "2026-01-17T13:11:05.62Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/5a/9f/c3695c2d2d4ef70072c3a06992850498b01c6bc9be531950813716b426fa/sse_starlette-3.3.2.tar.gz", hash = "sha256:678fca55a1945c734d8472a6cad186a55ab02840b4f6786f5ee8770970579dcd", size = 32326, upload-time = "2026-02-28T11:24:34.36Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/96/7f/832f015020844a8b8f7a9cbc103dd76ba8e3875004c41e08440ea3a2b41a/sse_starlette-3.2.0-py3-none-any.whl", hash = "sha256:5876954bd51920fc2cd51baee47a080eb88a37b5b784e615abb0b283f801cdbf", size = 12763, upload-time = "2026-01-17T13:11:03.775Z" },
+    { url = "https://files.pythonhosted.org/packages/61/28/8cb142d3fe80c4a2d8af54ca0b003f47ce0ba920974e7990fa6e016402d1/sse_starlette-3.3.2-py3-none-any.whl", hash = "sha256:5c3ea3dad425c601236726af2f27689b74494643f57017cafcb6f8c9acfbb862", size = 14270, upload-time = "2026-02-28T11:24:32.984Z" },
]
[[package]]
@@ -1669,50 +1481,26 @@ wheels = [
[[package]]
name = "ty"
-version = "0.0.17"
+version = "0.0.23"
source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/66/c3/41ae6346443eedb65b96761abfab890a48ce2aa5a8a27af69c5c5d99064d/ty-0.0.17.tar.gz", hash = "sha256:847ed6c120913e280bf9b54d8eaa7a1049708acb8824ad234e71498e8ad09f97", size = 5167209, upload-time = "2026-02-13T13:26:36.835Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/75/ba/d3c998ff4cf6b5d75b39356db55fe1b7caceecc522b9586174e6a5dee6f7/ty-0.0.23.tar.gz", hash = "sha256:5fb05db58f202af366f80ef70f806e48f5237807fe424ec787c9f289e3f3a4ef", size = 5341461, upload-time = "2026-03-13T12:34:23.125Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/c0/01/0ef15c22a1c54b0f728ceff3f62d478dbf8b0dcf8ff7b80b954f79584f3e/ty-0.0.17-py3-none-linux_armv6l.whl", hash = "sha256:64a9a16555cc8867d35c2647c2f1afbd3cae55f68fd95283a574d1bb04fe93e0", size = 10192793, upload-time = "2026-02-13T13:27:13.943Z" },
+    { url = "https://files.pythonhosted.org/packages/f4/21/aab32603dfdfacd4819e52fa8c6074e7bd578218a5142729452fc6a62db6/ty-0.0.23-py3-none-linux_armv6l.whl", hash = "sha256:e810eef1a5f1cfc0731a58af8d2f334906a96835829767aed00026f1334a8dd7", size = 10329096, upload-time = "2026-03-13T12:34:09.432Z" },
-    { url = "https://files.pythonhosted.org/packages/0f/2c/f4c322d9cded56edc016b1092c14b95cf58c8a33b4787316ea752bb9418e/ty-0.0.17-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:eb2dbd8acd5c5a55f4af0d479523e7c7265a88542efe73ed3d696eb1ba7b6454", size = 10051977, upload-time = "2026-02-13T13:26:57.741Z" },
+    { url = "https://files.pythonhosted.org/packages/9f/a9/dd3287a82dce3df546ec560296208d4905dcf06346b6e18c2f3c63523bd1/ty-0.0.23-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:e43d36bd89a151ddcad01acaeff7dcc507cb73ff164c1878d2d11549d39a061c", size = 10156631, upload-time = "2026-03-13T12:34:53.122Z" },
-    { url = "https://files.pythonhosted.org/packages/4c/a5/43746c1ff81e784f5fc303afc61fe5bcd85d0fcf3ef65cb2cef78c7486c7/ty-0.0.17-py3-none-macosx_11_0_arm64.whl", hash = "sha256:f18f5fd927bc628deb9ea2df40f06b5f79c5ccf355db732025a3e8e7152801f6", size = 9564639, upload-time = "2026-02-13T13:26:42.781Z" },
+    { url = "https://files.pythonhosted.org/packages/0f/01/3f25909b02fac29bb0a62b2251f8d62e65d697781ffa4cf6b47a4c075c85/ty-0.0.23-py3-none-macosx_11_0_arm64.whl", hash = "sha256:bd6a340969577b4645f231572c4e46012acba2d10d4c0c6570fe1ab74e76ae00", size = 9653211, upload-time = "2026-03-13T12:34:15.049Z" },
-    { url = "https://files.pythonhosted.org/packages/d6/b8/280b04e14a9c0474af574f929fba2398b5e1c123c1e7735893b4cd73d13c/ty-0.0.17-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5383814d1d7a5cc53b3b07661856bab04bb2aac7a677c8d33c55169acdaa83df", size = 10061204, upload-time = "2026-02-13T13:27:00.152Z" },
+    { url = "https://files.pythonhosted.org/packages/d5/60/bfc0479572a6f4b90501c869635faf8d84c8c68ffc5dd87d04f049affabc/ty-0.0.23-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:341441783e626eeb7b1ec2160432956aed5734932ab2d1c26f94d0c98b229937", size = 10156143, upload-time = "2026-03-13T12:34:34.468Z" },
-    { url = "https://files.pythonhosted.org/packages/2a/d7/493e1607d8dfe48288d8a768a2adc38ee27ef50e57f0af41ff273987cda0/ty-0.0.17-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9c20423b8744b484f93e7bf2ef8a9724bca2657873593f9f41d08bd9f83444c9", size = 10013116, upload-time = "2026-02-13T13:26:34.543Z" },
+    { url = "https://files.pythonhosted.org/packages/3a/81/8a93e923535a340f54bea20ff196f6b2787782b2f2f399bd191c4bc132d6/ty-0.0.23-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8ce1dc66c26d4167e2c78d12fa870ef5a7ec9cc344d2baaa6243297cfa88bd52", size = 10136632, upload-time = "2026-03-13T12:34:28.832Z" },
-    { url = "https://files.pythonhosted.org/packages/80/ef/22f3ed401520afac90dbdf1f9b8b7755d85b0d5c35c1cb35cf5bd11b59c2/ty-0.0.17-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e6f5b1aba97db9af86517b911674b02f5bc310750485dc47603a105bd0e83ddd", size = 10533623, upload-time = "2026-02-13T13:26:31.449Z" },
+    { url = "https://files.pythonhosted.org/packages/da/cb/2ac81c850c58acc9f976814404d28389c9c1c939676e32287b9cff61381e/ty-0.0.23-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bae1e7a294bf8528836f7617dc5c360ea2dddb63789fc9471ae6753534adca05", size = 10655025, upload-time = "2026-03-13T12:34:37.105Z" },
-    { url = "https://files.pythonhosted.org/packages/75/ce/744b15279a11ac7138832e3a55595706b4a8a209c9f878e3ab8e571d9032/ty-0.0.17-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:488bce1a9bea80b851a97cd34c4d2ffcd69593d6c3f54a72ae02e5c6e47f3d0c", size = 11069750, upload-time = "2026-02-13T13:26:48.638Z" },
+    { url = "https://files.pythonhosted.org/packages/b5/9b/bac771774c198c318ae699fc013d8cd99ed9caf993f661fba11238759244/ty-0.0.23-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d2b162768764d9dc177c83fb497a51532bb67cbebe57b8fa0f2668436bf53f3c", size = 11230107, upload-time = "2026-03-13T12:34:20.751Z" },
-    { url = "https://files.pythonhosted.org/packages/f2/be/1133c91f15a0e00d466c24f80df486d630d95d1b2af63296941f7473812f/ty-0.0.17-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8df66b91ec84239420985ec215e7f7549bfda2ac036a3b3c065f119d1c06825a", size = 10870862, upload-time = "2026-02-13T13:26:54.715Z" },
+    { url = "https://files.pythonhosted.org/packages/14/09/7644fb0e297265e18243f878aca343593323b9bb19ed5278dcbc63781be0/ty-0.0.23-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d28384e48ca03b34e4e2beee0e230c39bbfb68994bb44927fec61ef3642900da", size = 10934177, upload-time = "2026-03-13T12:34:17.904Z" },
-    { url = "https://files.pythonhosted.org/packages/3e/4a/a2ed209ef215b62b2d3246e07e833081e07d913adf7e0448fc204be443d6/ty-0.0.17-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:002139e807c53002790dfefe6e2f45ab0e04012e76db3d7c8286f96ec121af8f", size = 10628118, upload-time = "2026-02-13T13:26:45.439Z" },
+    { url = "https://files.pythonhosted.org/packages/18/14/69a25a0cad493fb6a947302471b579a03516a3b00e7bece77fdc6b4afb9b/ty-0.0.23-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:559d9a299df793cb7a7902caed5eda8a720ff69164c31c979673e928f02251ee", size = 10752487, upload-time = "2026-03-13T12:34:31.785Z" },
-    { url = "https://files.pythonhosted.org/packages/b3/0c/87476004cb5228e9719b98afffad82c3ef1f84334bde8527bcacba7b18cb/ty-0.0.17-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6c4e01f05ce82e5d489ab3900ca0899a56c4ccb52659453780c83e5b19e2b64c", size = 10038185, upload-time = "2026-02-13T13:27:02.693Z" },
+    { url = "https://files.pythonhosted.org/packages/9d/2a/42fc3cbccf95af0a62308ebed67e084798ab7a85ef073c9986ef18032743/ty-0.0.23-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:32a7b8a14a98e1d20a9d8d2af23637ed7efdb297ac1fa2450b8e465d05b94482", size = 10133007, upload-time = "2026-03-13T12:34:42.838Z" },
-    { url = "https://files.pythonhosted.org/packages/46/4b/98f0b3ba9aef53c1f0305519536967a4aa793a69ed72677b0a625c5313ac/ty-0.0.17-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2b226dd1e99c0d2152d218c7e440150d1a47ce3c431871f0efa073bbf899e881", size = 10047644, upload-time = "2026-02-13T13:27:05.474Z" },
+    { url = "https://files.pythonhosted.org/packages/e1/69/307833f1b52fa3670e0a1d496e43ef7df556ecde838192d3fcb9b35e360d/ty-0.0.23-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:6f803b9b9cca87af793467973b9abdd4b83e6b96d9b5e749d662cff7ead70b6d", size = 10169698, upload-time = "2026-03-13T12:34:12.351Z" },
-    { url = "https://files.pythonhosted.org/packages/93/e0/06737bb80aa1a9103b8651d2eb691a7e53f1ed54111152be25f4a02745db/ty-0.0.17-py3-none-musllinux_1_2_i686.whl", hash = "sha256:8b11f1da7859e0ad69e84b3c5ef9a7b055ceed376a432fad44231bdfc48061c2", size = 10231140, upload-time = "2026-02-13T13:27:10.844Z" },
+    { url = "https://files.pythonhosted.org/packages/89/ae/5dd379ec22d0b1cba410d7af31c366fcedff191d5b867145913a64889f66/ty-0.0.23-py3-none-musllinux_1_2_i686.whl", hash = "sha256:4a0bf086ec8e2197b7ea7ebfcf4be36cb6a52b235f8be61647ef1b2d99d6ffd3", size = 10346080, upload-time = "2026-03-13T12:34:40.012Z" },
-    { url = "https://files.pythonhosted.org/packages/7c/79/e2a606bd8852383ba9abfdd578f4a227bd18504145381a10a5f886b4e751/ty-0.0.17-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:c04e196809ff570559054d3e011425fd7c04161529eb551b3625654e5f2434cb", size = 10718344, upload-time = "2026-02-13T13:26:51.66Z" },
+    { url = "https://files.pythonhosted.org/packages/98/c7/dfc83203d37998620bba9c4873a080c8850a784a8a46f56f8163c5b4e320/ty-0.0.23-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:252539c3fcd7aeb9b8d5c14e2040682c3e1d7ff640906d63fd2c4ce35865a4ba", size = 10848162, upload-time = "2026-03-13T12:34:45.421Z" },
-    { url = "https://files.pythonhosted.org/packages/c5/2d/2663984ac11de6d78f74432b8b14ba64d170b45194312852b7543cf7fd56/ty-0.0.17-py3-none-win32.whl", hash = "sha256:305b6ed150b2740d00a817b193373d21f0767e10f94ac47abfc3b2e5a5aec809", size = 9672932, upload-time = "2026-02-13T13:27:08.522Z" },
+    { url = "https://files.pythonhosted.org/packages/89/08/05481511cfbcc1fd834b6c67aaae090cb609a079189ddf2032139ccfc490/ty-0.0.23-py3-none-win32.whl", hash = "sha256:51b591d19eef23bbc3807aef77d38fa1f003c354e1da908aa80ea2dca0993f77", size = 9748283, upload-time = "2026-03-13T12:34:50.607Z" },
-    { url = "https://files.pythonhosted.org/packages/de/b5/39be78f30b31ee9f5a585969930c7248354db90494ff5e3d0756560fb731/ty-0.0.17-py3-none-win_amd64.whl", hash = "sha256:531828267527aee7a63e972f54e5eee21d9281b72baf18e5c2850c6b862add83", size = 10542138, upload-time = "2026-02-13T13:27:17.084Z" },
+    { url = "https://files.pythonhosted.org/packages/31/2e/eaed4ff5c85e857a02415084c394e02c30476b65e158eec1938fdaa9a205/ty-0.0.23-py3-none-win_amd64.whl", hash = "sha256:1e137e955f05c501cfbb81dd2190c8fb7d01ec037c7e287024129c722a83c9ad", size = 10698355, upload-time = "2026-03-13T12:34:26.134Z" },
-    { url = "https://files.pythonhosted.org/packages/40/b7/f875c729c5d0079640c75bad2c7e5d43edc90f16ba242f28a11966df8f65/ty-0.0.17-py3-none-win_arm64.whl", hash = "sha256:de9810234c0c8d75073457e10a84825b9cd72e6629826b7f01c7a0b266ae25b1", size = 10023068, upload-time = "2026-02-13T13:26:39.637Z" },
+    { url = "https://files.pythonhosted.org/packages/91/29/b32cb7b4c7d56b9ed50117f8ad6e45834aec293e4cb14749daab4e9236d5/ty-0.0.23-py3-none-win_arm64.whl", hash = "sha256:a0399bd13fd2cd6683fd0a2d59b9355155d46546d8203e152c556ddbdeb20842", size = 10155890, upload-time = "2026-03-13T12:34:48.082Z" },
-]
-[[package]]
-name = "typer"
-version = "0.23.2"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "annotated-doc" },
-    { name = "click" },
-    { name = "rich" },
-    { name = "shellingham" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/d3/ae/93d16574e66dfe4c2284ffdaca4b0320ade32858cb2cc586c8dd79f127c5/typer-0.23.2.tar.gz", hash = "sha256:a99706a08e54f1aef8bb6a8611503808188a4092808e86addff1828a208af0de", size = 120162, upload-time = "2026-02-16T18:52:40.354Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/14/2c/dee705c427875402200fe779eb8a3c00ccb349471172c41178336e9599cc/typer-0.23.2-py3-none-any.whl", hash = "sha256:e9c8dc380f82450b3c851a9b9d5a0edf95d1d6456ae70c517d8b06a50c7a9978", size = 56834, upload-time = "2026-02-16T18:52:39.308Z" },
-]
-[[package]]
-name = "types-pytz"
-version = "2025.2.0.20251108"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/40/ff/c047ddc68c803b46470a357454ef76f4acd8c1088f5cc4891cdd909bfcf6/types_pytz-2025.2.0.20251108.tar.gz", hash = "sha256:fca87917836ae843f07129567b74c1929f1870610681b4c92cb86a3df5817bdb", size = 10961, upload-time = "2025-11-08T02:55:57.001Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/e7/c1/56ef16bf5dcd255155cc736d276efa6ae0a5c26fd685e28f0412a4013c01/types_pytz-2025.2.0.20251108-py3-none-any.whl", hash = "sha256:0f1c9792cab4eb0e46c52f8845c8f77cf1e313cb3d68bf826aa867fe4717d91c", size = 10116, upload-time = "2025-11-08T02:55:56.194Z" },
]
[[package]]
@@ -1737,15 +1525,23 @@ wheels = [
]
[[package]]
name = "unraid-mcp" name = "uncalled-for"
version = "0.2.0" version = "0.2.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/02/7c/b5b7d8136f872e3f13b0584e576886de0489d7213a12de6bebf29ff6ebfc/uncalled_for-0.2.0.tar.gz", hash = "sha256:b4f8fdbcec328c5a113807d653e041c5094473dd4afa7c34599ace69ccb7e69f", size = 49488, upload-time = "2026-02-27T17:40:58.137Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ff/7f/4320d9ce3be404e6310b915c3629fe27bf1e2f438a1a7a3cb0396e32e9a9/uncalled_for-0.2.0-py3-none-any.whl", hash = "sha256:2c0bd338faff5f930918f79e7eb9ff48290df2cb05fcc0b40a7f334e55d4d85f", size = 11351, upload-time = "2026-02-27T17:40:56.804Z" },
]
[[package]]
name = "unraid-mcp"
version = "0.4.4"
source = { editable = "." } source = { editable = "." }
dependencies = [ dependencies = [
{ name = "fastapi" }, { name = "fastapi" },
{ name = "fastmcp" }, { name = "fastmcp" },
{ name = "httpx" }, { name = "httpx" },
{ name = "python-dotenv" }, { name = "python-dotenv" },
{ name = "pytz" },
{ name = "rich" }, { name = "rich" },
{ name = "uvicorn", extra = ["standard"] }, { name = "uvicorn", extra = ["standard"] },
{ name = "websockets" }, { name = "websockets" },
@@ -1762,7 +1558,6 @@ dev = [
{ name = "ruff" }, { name = "ruff" },
{ name = "twine" }, { name = "twine" },
{ name = "ty" }, { name = "ty" },
{ name = "types-pytz" },
] ]
[package.metadata]
@@ -1771,7 +1566,6 @@ requires-dist = [
{ name = "fastmcp", specifier = ">=2.14.5" }, { name = "fastmcp", specifier = ">=2.14.5" },
{ name = "httpx", specifier = ">=0.28.1" }, { name = "httpx", specifier = ">=0.28.1" },
{ name = "python-dotenv", specifier = ">=1.1.1" }, { name = "python-dotenv", specifier = ">=1.1.1" },
{ name = "pytz", specifier = ">=2025.2" },
{ name = "rich", specifier = ">=14.1.0" }, { name = "rich", specifier = ">=14.1.0" },
{ name = "uvicorn", extras = ["standard"], specifier = ">=0.35.0" }, { name = "uvicorn", extras = ["standard"], specifier = ">=0.35.0" },
{ name = "websockets", specifier = ">=15.0.1" }, { name = "websockets", specifier = ">=15.0.1" },
@@ -1788,7 +1582,6 @@ dev = [
{ name = "ruff", specifier = ">=0.12.8" }, { name = "ruff", specifier = ">=0.12.8" },
{ name = "twine", specifier = ">=6.0.1" }, { name = "twine", specifier = ">=6.0.1" },
{ name = "ty", specifier = ">=0.0.15" }, { name = "ty", specifier = ">=0.0.15" },
{ name = "types-pytz", specifier = ">=2025.2.0.20250809" },
] ]
[[package]]
@@ -1802,15 +1595,15 @@ wheels = [
[[package]]
name = "uvicorn"
-version = "0.40.0"
+version = "0.41.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "click" },
    { name = "h11" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/c3/d1/8f3c683c9561a4e6689dd3b1d345c815f10f86acd044ee1fb9a4dcd0b8c5/uvicorn-0.40.0.tar.gz", hash = "sha256:839676675e87e73694518b5574fd0f24c9d97b46bea16df7b8c05ea1a51071ea", size = 81761, upload-time = "2025-12-21T14:16:22.45Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/32/ce/eeb58ae4ac36fe09e3842eb02e0eb676bf2c53ae062b98f1b2531673efdd/uvicorn-0.41.0.tar.gz", hash = "sha256:09d11cf7008da33113824ee5a1c6422d89fbc2ff476540d69a34c87fab8b571a", size = 82633, upload-time = "2026-02-16T23:07:24.1Z" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/3d/d8/2083a1daa7439a66f3a48589a57d576aa117726762618f6bb09fe3798796/uvicorn-0.40.0-py3-none-any.whl", hash = "sha256:c6c8f55bc8bf13eb6fa9ff87ad62308bbbc33d0b67f84293151efe87e0d5f2ee", size = 68502, upload-time = "2025-12-21T14:16:21.041Z" },
+    { url = "https://files.pythonhosted.org/packages/83/e4/d04a086285c20886c0daad0e026f250869201013d18f81d9ff5eada73a88/uvicorn-0.41.0-py3-none-any.whl", hash = "sha256:29e35b1d2c36a04b9e180d4007ede3bcb32a85fbdfd6c6aeb3f26839de088187", size = 68783, upload-time = "2026-02-16T23:07:22.357Z" },
]
[package.optional-dependencies]