feat: harden API safety and expand command docs with full test coverage

This commit is contained in:
Jacob Magar
2026-02-15 22:15:51 -05:00
parent d791c6b6b7
commit abb7915672
60 changed files with 7122 additions and 1247 deletions

View File

@@ -16,12 +16,7 @@
        "--directory",
        "${CLAUDE_PLUGIN_ROOT}",
        "unraid-mcp-server"
-      ],
-      "env": {
-        "UNRAID_API_URL": "${UNRAID_API_URL}",
-        "UNRAID_API_KEY": "${UNRAID_API_KEY}",
-        "UNRAID_MCP_TRANSPORT": "stdio"
-      }
+      ]
    }
  }
}

13
.gitignore vendored
View File

@@ -21,13 +21,23 @@ coverage.xml
# Virtual environments
.venv
.venv-backend
+# Environment files (only .env.example is tracked)
.env
-.env.local
+.env.*
+!.env.example
+# Logs
*.log
logs/
+# IDE/Editor
.bivvy
.cursor
+# Claude Code user settings (gitignore local settings)
+.claude/settings.local.json
# Serena IDE configuration
.serena/
@@ -36,6 +46,7 @@ logs/
.full-review/
/docs/plans/
/docs/sessions/
+/docs/reports/
# Test planning documents
/DESTRUCTIVE_ACTIONS.md

544
.plan.md
View File

@@ -1,544 +0,0 @@
# Implementation Plan: mcporter Integration Tests + Destructive Action Gating
**Date:** 2026-02-15
**Status:** Awaiting Approval
**Estimated Effort:** 8-12 hours
## Overview
Implement comprehensive integration testing using mcporter CLI to validate all 86 tool actions (after removing 4 destructive array operations) against live Unraid servers, plus add environment variable gates for remaining destructive actions to prevent accidental operations.
## Requirements
1. **Remove destructive array operations** - start, stop, shutdown, reboot should not be exposed via MCP
2. **Add per-tool environment variable gates** - UNRAID_ALLOW_*_DESTRUCTIVE flags for remaining destructive actions
3. **Build mcporter test suite** - Real end-to-end testing of all 86 actions against live servers (tootie/shart)
4. **Document all actions** - Comprehensive action catalog with test specifications
## Architecture Changes
### 1. Settings Infrastructure (Pydantic-based)
**File:** `unraid_mcp/config/settings.py`
- Migrate from simple `os.getenv()` to Pydantic `BaseSettings`
- Add 7 destructive action gate flags (all default to False for safety):
- `allow_docker_destructive` (docker remove)
- `allow_vm_destructive` (vm force_stop, reset)
- `allow_notifications_destructive` (delete, delete_archived)
- `allow_rclone_destructive` (delete_remote)
- `allow_users_destructive` (user delete)
- `allow_keys_destructive` (key delete)
- `allow_array_destructive` (REMOVED - no longer needed after task 1)
- Add `get_config_summary()` method showing gate status
- Maintain backwards compatibility via module-level exports
**Dependencies:** Add `pydantic-settings` to `pyproject.toml`
### 2. Tool Implementation Pattern
**Pattern for all tools with destructive actions:**
```python
from ..config.settings import settings

# In tool function:
if action in DESTRUCTIVE_ACTIONS:
    # Check 1: Environment variable gate (first line of defense)
    if not settings.allow_{tool}_destructive:
        raise ToolError(
            f"Destructive {tool} action '{action}' is disabled. "
            f"Set UNRAID_ALLOW_{TOOL}_DESTRUCTIVE=true to enable. "
            f"This is a safety gate to prevent accidental operations."
        )

    # Check 2: Runtime confirmation (second line of defense)
    if not confirm:
        raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
```
**Tools requiring updates:**
- `unraid_mcp/tools/docker.py` (1 action: remove)
- `unraid_mcp/tools/virtualization.py` (2 actions: force_stop, reset)
- `unraid_mcp/tools/notifications.py` (2 actions: delete, delete_archived)
- `unraid_mcp/tools/rclone.py` (1 action: delete_remote)
- `unraid_mcp/tools/users.py` (1 action: delete)
- `unraid_mcp/tools/keys.py` (1 action: delete)
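For illustration, the template instantiated for `docker.py` might look like the sketch below. This is not the actual tool code: `ToolError` and the `settings` object are local stand-ins for the FastMCP exception and the settings instance, and the surrounding tool function is elided.

```python
from types import SimpleNamespace

class ToolError(Exception):
    """Stand-in for fastmcp's ToolError in this sketch."""

DESTRUCTIVE_ACTIONS = {"remove"}

# Stand-in for unraid_mcp.config.settings.settings
settings = SimpleNamespace(allow_docker_destructive=False)

def check_docker_gates(action: str, confirm: bool) -> None:
    """Apply both safety checks before a destructive docker action runs."""
    if action in DESTRUCTIVE_ACTIONS:
        # Check 1: environment variable gate
        if not settings.allow_docker_destructive:
            raise ToolError(
                f"Destructive docker action '{action}' is disabled. "
                f"Set UNRAID_ALLOW_DOCKER_DESTRUCTIVE=true to enable."
            )
        # Check 2: runtime confirmation
        if not confirm:
            raise ToolError(
                f"Action '{action}' is destructive. Set confirm=True to proceed."
            )
```

Non-destructive actions skip both checks entirely, so read-only usage is unaffected by the gates.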
### 3. mcporter Integration Test Suite
**New Directory Structure:**
```
tests/integration/
├── helpers/
│ ├── mcporter.sh # mcporter wrapper (call_tool, call_destructive, get_field)
│ ├── validation.sh # Response validation (assert_fields, assert_equals, assert_success)
│ └── reporting.sh # Test reporting (init_report, record_test, generate_summary)
├── tools/
│ ├── test_health.sh # 3 actions
│ ├── test_info.sh # 19 actions
│ ├── test_storage.sh # 6 actions
│ ├── test_docker.sh # 15 actions
│ ├── test_vm.sh # 9 actions
│ ├── test_notifications.sh # 9 actions
│ ├── test_rclone.sh # 4 actions
│ ├── test_users.sh # 8 actions
│ ├── test_keys.sh # 5 actions
│ └── test_array.sh # 8 actions (after removal)
├── run-all.sh # Master test runner (parallel/sequential)
├── run-tool.sh # Single tool runner
└── README.md # Integration test documentation
```
**mcporter Configuration:** `config/mcporter.json`
```json
{
  "mcpServers": {
    "unraid-tootie": {
      "command": "uv",
      "args": ["run", "unraid-mcp-server"],
      "env": {
        "UNRAID_API_URL": "https://myunraid.net:31337/graphql",
        "UNRAID_API_KEY": "${UNRAID_TOOTIE_API_KEY}",
        "UNRAID_VERIFY_SSL": "false",
        "UNRAID_MCP_TRANSPORT": "stdio"
      },
      "cwd": "/home/jmagar/workspace/unraid-mcp"
    },
    "unraid-shart": {
      "command": "uv",
      "args": ["run", "unraid-mcp-server"],
      "env": {
        "UNRAID_API_URL": "http://100.118.209.1/graphql",
        "UNRAID_API_KEY": "${UNRAID_SHART_API_KEY}",
        "UNRAID_VERIFY_SSL": "false",
        "UNRAID_MCP_TRANSPORT": "stdio"
      },
      "cwd": "/home/jmagar/workspace/unraid-mcp"
    }
  }
}
```
## Implementation Tasks
### Task 1: Remove Destructive Array Operations
**Files:**
- `unraid_mcp/tools/array.py`
- `tests/test_array.py`
**Changes:**
1. Remove from `MUTATIONS` dict:
- `start` (lines 24-28)
- `stop` (lines 29-33)
- `shutdown` (lines 69-73)
- `reboot` (lines 74-78)
2. Remove from `DESTRUCTIVE_ACTIONS` set (line 81) - set becomes the empty `set()` (not `{}`, which is an empty dict)
3. Remove from `ARRAY_ACTIONS` Literal type (lines 85-86)
4. Update docstring removing these 4 actions (lines 105-106, 115-116)
5. Remove tests for these actions in `tests/test_array.py`
**Acceptance:**
- ✅ Array tool has 8 actions (down from 12)
- ✅ `DESTRUCTIVE_ACTIONS` is an empty set
- ✅ Tests pass for remaining actions
- ✅ Removed mutations are not callable
### Task 2: Add Pydantic Settings with Destructive Gates
**Files:**
- `unraid_mcp/config/settings.py`
- `pyproject.toml`
- `.env.example`
**Changes:**
1. **Add dependency:** `pydantic-settings>=2.12` in `pyproject.toml` dependencies
2. **Update settings.py:**
- Import `BaseSettings` from `pydantic_settings`
- Create `UnraidSettings` class with all config fields
- Add 6 destructive gate fields (all default to False):
- `allow_docker_destructive: bool = Field(default=False, ...)`
- `allow_vm_destructive: bool = Field(default=False, ...)`
- `allow_notifications_destructive: bool = Field(default=False, ...)`
- `allow_rclone_destructive: bool = Field(default=False, ...)`
- `allow_users_destructive: bool = Field(default=False, ...)`
- `allow_keys_destructive: bool = Field(default=False, ...)`
- Add `get_config_summary()` method including gate status
- Instantiate global `settings = UnraidSettings()`
- Keep backwards compatibility exports
3. **Update .env.example:** Add section documenting all destructive gates
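The intended loading behavior can be sketched without the `pydantic-settings` dependency. The real class would subclass `pydantic_settings.BaseSettings` with an `UNRAID_` env prefix; the field and method names below mirror the plan, but the env parsing here is a simplified stand-in.

```python
import os
from dataclasses import dataclass, fields

def _env_flag(name: str) -> bool:
    # Simplified bool parsing: "true"/"1"/"yes" enable a gate
    return os.getenv(name, "").strip().lower() in {"true", "1", "yes"}

@dataclass
class UnraidSettings:
    # All destructive gates default to False for safety
    allow_docker_destructive: bool = False
    allow_vm_destructive: bool = False
    allow_notifications_destructive: bool = False
    allow_rclone_destructive: bool = False
    allow_users_destructive: bool = False
    allow_keys_destructive: bool = False

    def __post_init__(self) -> None:
        # Override defaults from UNRAID_ALLOW_*_DESTRUCTIVE env vars
        for f in fields(self):
            env_name = f"UNRAID_{f.name.upper()}"
            if os.getenv(env_name) is not None:
                setattr(self, f.name, _env_flag(env_name))

    def get_config_summary(self) -> dict:
        # Gate status for logging/diagnostics
        return {f.name: getattr(self, f.name) for f in fields(self)}

settings = UnraidSettings()
```

With pydantic-settings, `__post_init__` and `_env_flag` disappear: `SettingsConfigDict(env_prefix="UNRAID_")` handles the env mapping and bool coercion.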
**Acceptance:**
- ✅ `settings` instance loads successfully
- ✅ All gate fields default to False
- ✅ `get_config_summary()` shows gate status
- ✅ Backwards compatibility maintained (existing code still works)
### Task 3: Update Tools with Environment Variable Gates
**Files to update:**
- `unraid_mcp/tools/docker.py`
- `unraid_mcp/tools/virtualization.py`
- `unraid_mcp/tools/notifications.py`
- `unraid_mcp/tools/rclone.py`
- `unraid_mcp/tools/users.py`
- `unraid_mcp/tools/keys.py`
**Pattern for each tool:**
1. Add import: `from ..config.settings import settings`
2. Add gate check before confirm check in destructive action handler:
```python
if action in DESTRUCTIVE_ACTIONS:
    if not settings.allow_{tool}_destructive:
        raise ToolError(
            f"Destructive {tool} action '{action}' is disabled. "
            f"Set UNRAID_ALLOW_{TOOL}_DESTRUCTIVE=true to enable."
        )
    if not confirm:
        raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
```
3. Update tool docstring documenting security requirements
**Acceptance (per tool):**
- ✅ Destructive action fails with clear error when env var not set
- ✅ Destructive action still requires confirm=True when env var is set
- ✅ Both checks must pass for execution
- ✅ Error messages guide user to correct env var
### Task 4: Update Test Suite with Settings Mocking
**Files:**
- `tests/conftest.py`
- `tests/test_docker.py`
- `tests/test_vm.py`
- `tests/test_notifications.py`
- `tests/test_rclone.py`
- `tests/test_users.py`
- `tests/test_keys.py`
**Changes:**
1. **Add fixtures to conftest.py:**
```python
@pytest.fixture
def mock_settings():
    # All gates disabled
    ...

@pytest.fixture
def mock_settings_all_enabled(mock_settings):
    # All gates enabled
    ...
```
2. **Update each test file:**
- Add `mock_settings` parameter to fixtures
- Wrap tool calls with `with patch("unraid_mcp.tools.{tool}.settings", mock_settings):`
- Add 3 destructive action tests:
- Test gate check (env var not set, confirm=True → fails)
- Test confirm check (env var set, confirm=False → fails)
- Test success (env var set, confirm=True → succeeds)
**Acceptance:**
- ✅ All 150 existing tests pass
- ✅ New gate tests cover all destructive actions
- ✅ Tests verify correct error messages
- ✅ Tests use mocked settings (don't rely on actual env vars)
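The three-case structure could look like the following stdlib-only sketch. The real tool and settings module are replaced with local stand-ins (`vm_tool`, a `SimpleNamespace`) purely for illustration; actual tests would patch `unraid_mcp.tools.virtualization.settings` as described above.

```python
from types import SimpleNamespace
from unittest.mock import patch

class ToolError(Exception):
    """Stand-in for fastmcp's ToolError in this sketch."""

# Module-level settings object, as the tools would import it
settings = SimpleNamespace(allow_vm_destructive=False)

def vm_tool(action: str, confirm: bool = False) -> str:
    # Simplified stand-in for the real unraid_vm tool
    if action in {"force_stop", "reset"}:
        if not settings.allow_vm_destructive:
            raise ToolError("disabled: set UNRAID_ALLOW_VM_DESTRUCTIVE=true")
        if not confirm:
            raise ToolError("destructive: set confirm=True")
    return f"{action}: ok"

def expect_error(fn, *args, **kwargs) -> str:
    try:
        fn(*args, **kwargs)
    except ToolError as exc:
        return str(exc)
    raise AssertionError("expected ToolError")

# Test 1: gate closed, confirm=True -> fails with env-var guidance
assert "UNRAID_ALLOW_VM_DESTRUCTIVE" in expect_error(vm_tool, "reset", confirm=True)

# Test 2: gate open, confirm=False -> fails on confirmation
with patch.object(settings, "allow_vm_destructive", True):
    assert "confirm=True" in expect_error(vm_tool, "reset")

# Test 3: gate open and confirm=True -> succeeds
with patch.object(settings, "allow_vm_destructive", True):
    assert vm_tool("reset", confirm=True) == "reset: ok"
```

Because `patch.object` restores the gate on exit, the tests never depend on real environment variables.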
### Task 5: Create mcporter Configuration
**Files:**
- `config/mcporter.json` (new)
- `tests/integration/README.md` (new)
**Changes:**
1. Create `config/mcporter.json` with tootie and shart server configs
2. Document how to use mcporter with the server in README
3. Include instructions for loading credentials from `~/workspace/homelab/.env`
**Acceptance:**
- ✅ `mcporter list unraid-tootie` shows all tools
- ✅ `mcporter call unraid-tootie.unraid_health action=test_connection` succeeds
- ✅ Configuration works for both servers
### Task 6: Build mcporter Helper Libraries
**Files to create:**
- `tests/integration/helpers/mcporter.sh`
- `tests/integration/helpers/validation.sh`
- `tests/integration/helpers/reporting.sh`
**Functions to implement:**
**mcporter.sh:**
- `call_tool <tool> <action> [params...]` - Call tool via mcporter, return JSON
- `call_destructive <tool> <action> <env_var> [params...]` - Safe destructive call
- `get_field <json> <jq_path>` - Extract field from JSON
- `is_success <json>` - Check if response indicates success
- `get_error <json>` - Extract error message
**validation.sh:**
- `assert_fields <json> <field>...` - Verify required fields exist
- `assert_equals <json> <field> <expected>` - Field value equality
- `assert_matches <json> <field> <pattern>` - Field matches regex
- `assert_success <json>` - Response indicates success
- `assert_failure <json> [pattern]` - Response indicates failure (negative test)
**reporting.sh:**
- `init_report <tool>` - Initialize JSON report file
- `record_test <report> <action> <status> [error]` - Record test result
- `generate_summary` - Generate console summary from all reports
**Acceptance:**
- ✅ Helper functions work correctly
- ✅ Error handling is robust
- ✅ Functions are reusable across all tool tests
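A minimal sketch of the jq-based validation helpers follows; the function names come from the plan, but the exact error formats and internals are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of tests/integration/helpers/validation.sh (jq-based, per the plan)
set -u

get_field() {            # get_field <json> <jq_path>
  printf '%s' "$1" | jq -r "$2"
}

assert_fields() {        # assert_fields <json> <field>...
  local json="$1"; shift
  local f
  for f in "$@"; do
    if [ "$(printf '%s' "$json" | jq -r --arg f "$f" 'has($f)')" != "true" ]; then
      echo "FAIL: missing field '$f'" >&2
      return 1
    fi
  done
}

assert_equals() {        # assert_equals <json> <jq_path> <expected>
  local actual
  actual="$(get_field "$1" "$2")"
  if [ "$actual" != "$3" ]; then
    echo "FAIL: $2 = '$actual', expected '$3'" >&2
    return 1
  fi
}
```

Each assertion returns nonzero on failure so callers can feed results straight into the reporting helpers.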
### Task 7: Implement Tool Test Scripts
**Files to create:**
- `tests/integration/tools/test_health.sh` (3 actions)
- `tests/integration/tools/test_info.sh` (19 actions)
- `tests/integration/tools/test_storage.sh` (6 actions)
- `tests/integration/tools/test_docker.sh` (15 actions)
- `tests/integration/tools/test_vm.sh` (9 actions)
- `tests/integration/tools/test_notifications.sh` (9 actions)
- `tests/integration/tools/test_rclone.sh` (4 actions)
- `tests/integration/tools/test_users.sh` (8 actions)
- `tests/integration/tools/test_keys.sh` (5 actions)
- `tests/integration/tools/test_array.sh` (8 actions)
**Per-script implementation:**
1. Source helper libraries
2. Initialize report
3. Implement test functions for each action:
- Basic functionality test
- Response structure validation
- Parameter validation
- Destructive action gate tests (if applicable)
4. Run all tests and record results
5. Return exit code based on failures
**Priority order (implement in this sequence):**
1. `test_health.sh` - Simplest (3 actions, no destructive)
2. `test_info.sh` - Large but straightforward (19 query actions)
3. `test_storage.sh` - Moderate (6 query actions)
4. `test_docker.sh` - Complex (15 actions, 1 destructive)
5. `test_vm.sh` - Complex (9 actions, 2 destructive)
6. `test_notifications.sh` - Moderate (9 actions, 2 destructive)
7. `test_rclone.sh` - Simple (4 actions, 1 destructive)
8. `test_users.sh` - Moderate (8 actions, 1 destructive)
9. `test_keys.sh` - Simple (5 actions, 1 destructive)
10. `test_array.sh` - Moderate (8 actions, no destructive after removal)
**Acceptance:**
- ✅ Each script tests all actions for its tool
- ✅ Tests validate response structure
- ✅ Destructive action gates are tested
- ✅ Scripts generate JSON reports
- ✅ Exit code indicates success/failure
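Applied to the simplest case, `test_health.sh` might follow this skeleton. The `call_tool` here is a stub that fakes a mcporter response so the structure is visible; the real script would source the helper libraries instead.

```shell
#!/usr/bin/env bash
# Skeleton of tests/integration/tools/test_health.sh
PASS=0 FAIL=0

call_tool() {  # stub: the real version shells out to `mcporter call`
  printf '{"tool":"%s","action":"%s","status":"ok"}\n' "$1" "$2"
}

run_test() {   # run_test <name> <command...>
  local name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    PASS=$((PASS + 1)); echo "PASS: $name"
  else
    FAIL=$((FAIL + 1)); echo "FAIL: $name"
  fi
}

test_check()           { call_tool unraid_health check | grep -q '"status":"ok"'; }
test_test_connection() { call_tool unraid_health test_connection | grep -q '"status":"ok"'; }
test_diagnose()        { call_tool unraid_health diagnose | grep -q '"status":"ok"'; }

run_test "health.check" test_check
run_test "health.test_connection" test_test_connection
run_test "health.diagnose" test_diagnose

echo "passed=$PASS failed=$FAIL"
# Exit code reflects failures, as Task 7 requires
[ "$FAIL" -eq 0 ]
```

The same shape scales to the larger tools: one `test_<action>` function per action, plus gate tests where the tool has destructive actions.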
### Task 8: Build Test Runners
**Files to create:**
- `tests/integration/run-all.sh`
- `tests/integration/run-tool.sh`
**run-all.sh features:**
- Load credentials from `~/workspace/homelab/.env`
- Support sequential and parallel execution modes
- Run all 10 tool test scripts
- Generate summary report
- Return exit code based on any failures
**run-tool.sh features:**
- Accept tool name as argument
- Load credentials
- Execute single tool test script
- Pass through exit code
**Acceptance:**
- ✅ `run-all.sh` executes all tool tests
- ✅ Parallel mode works correctly (no race conditions)
- ✅ Summary report shows pass/fail/skip counts
- ✅ `run-tool.sh health` runs only health tests
- ✅ Exit codes are correct
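A sketch of the `run-tool.sh` logic, written as a function for clarity; the script path convention and credentials location follow the plan, everything else is illustrative.

```shell
#!/usr/bin/env bash
# Sketch of tests/integration/run-tool.sh
run_tool() {
  local tool="${1:-}"
  [ -n "$tool" ] || { echo "usage: run-tool.sh <tool>" >&2; return 2; }
  local script="tests/integration/tools/test_${tool}.sh"

  # Load credentials if present (path from the plan)
  if [ -f "$HOME/workspace/homelab/.env" ]; then
    set -a; . "$HOME/workspace/homelab/.env"; set +a
  fi

  [ -f "$script" ] || { echo "unknown tool: $tool ($script not found)" >&2; return 2; }

  # Pass the test script's exit code through
  bash "$script"
}
```

`run-all.sh` would loop the same logic over all ten tool scripts, aggregating exit codes into the summary report.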
### Task 9: Document Action Catalog
**File to create:**
- `docs/testing/action-catalog.md`
**Content:**
- Table of all 86 actions across 10 tools
- For each action:
- Tool name
- Action name
- Type (query/mutation/compound)
- Required parameters
- Optional parameters
- Destructive? (yes/no + env var if yes)
- Expected response structure
- Example mcporter call
- Validation criteria
**Acceptance:**
- ✅ All 86 actions documented
- ✅ Specifications are detailed and accurate
- ✅ Examples are runnable
- ✅ Becomes source of truth for test implementation
### Task 10: Integration Documentation
**Files to create/update:**
- `tests/integration/README.md`
- `docs/testing/integration-tests.md`
- `docs/testing/test-environments.md`
- `README.md` (add integration test section)
**Content:**
- How to run integration tests
- How to configure mcporter
- Server setup (tootie/shart)
- Environment variable gates
- Destructive action testing
- CI/CD integration
- Troubleshooting
**Acceptance:**
- ✅ Clear setup instructions
- ✅ Examples for common use cases
- ✅ Integration with existing pytest docs
- ✅ CI/CD pipeline documented
## Testing Strategy
### Unit Tests (pytest - existing)
- **150 tests** across 10 tool modules
- Mock GraphQL responses
- Fast, isolated, offline
- Cover edge cases and error paths
### Integration Tests (mcporter - new)
- **86 tests** (one per action)
- Real Unraid server calls
- Slow, dependent, online
- Validate actual API behavior
### Test Matrix
| Tool | Actions | pytest Tests | mcporter Tests | Destructive |
|------|---------|--------------|----------------|-------------|
| health | 3 | 10 | 3 | 0 |
| info | 19 | 98 | 19 | 0 |
| storage | 6 | 11 | 6 | 0 |
| docker | 15 | 28 | 15 | 1 |
| vm | 9 | 25 | 9 | 2 |
| notifications | 9 | 7 | 9 | 2 |
| rclone | 4 | (pending) | 4 | 1 |
| users | 8 | (pending) | 8 | 1 |
| keys | 5 | (pending) | 5 | 1 |
| array | 8 | 26 | 8 | 0 |
| **TOTAL** | **86** | **~150** | **86** | **8** |
## Validation Checklist
### Code Changes
- [ ] Array tool has 8 actions (removed start/stop/shutdown/reboot)
- [ ] Settings class with 6 destructive gate flags
- [ ] All 6 tools updated with environment variable gates
- [ ] All 6 tool tests updated with gate test cases
- [ ] All existing 150 pytest tests pass
- [ ] `pydantic-settings` added to dependencies
- [ ] `.env.example` updated with gate documentation
### Integration Tests
- [ ] mcporter configuration works for both servers
- [ ] All 3 helper libraries implemented
- [ ] All 10 tool test scripts implemented
- [ ] Test runners (run-all, run-tool) work correctly
- [ ] All 86 actions have test coverage
- [ ] Destructive action gates are tested
- [ ] Reports generate correctly
### Documentation
- [ ] Action catalog documents all 86 actions
- [ ] Integration test README is clear
- [ ] Environment setup documented
- [ ] CI/CD integration documented
- [ ] Project README updated
## Success Criteria
1. **Safety:** Destructive actions require both env var AND confirm=True
2. **Coverage:** All 86 actions have integration tests
3. **Quality:** Clear error messages guide users to correct env vars
4. **Automation:** Test suite runs via single command
5. **Documentation:** Complete action catalog and testing guide
## Risks & Mitigations
### Risk: Breaking existing deployments
**Impact:** HIGH - Users suddenly can't execute destructive actions
**Mitigation:**
- Clear error messages with exact env var to set
- Document migration in release notes
- Default to disabled (safe) but guide users to enable
### Risk: Integration tests are flaky
**Impact:** MEDIUM - CI/CD unreliable
**Mitigation:**
- Test against stable servers (tootie/shart)
- Implement retry logic for network errors
- Skip destructive tests if env vars not set (not failures)
### Risk: mcporter configuration complexity
**Impact:** LOW - Difficult for contributors to run tests
**Mitigation:**
- Clear setup documentation
- Example .env template
- Helper script to validate setup
## Dependencies
- `pydantic-settings>=2.12` (Python package)
- `mcporter` (npm package - user must install)
- `jq` (system package for JSON parsing in bash)
- Access to tootie/shart servers (for integration tests)
- Credentials in `~/workspace/homelab/.env`
## Timeline Estimate
| Task | Estimated Time |
|------|---------------|
| 1. Remove array ops | 30 min |
| 2. Add settings infrastructure | 1 hour |
| 3. Update tools with gates | 2 hours |
| 4. Update test suite | 2 hours |
| 5. mcporter config | 30 min |
| 6. Helper libraries | 1.5 hours |
| 7. Tool test scripts | 4 hours |
| 8. Test runners | 1 hour |
| 9. Action catalog | 2 hours |
| 10. Documentation | 1.5 hours |
| **Total** | **~16 hours** |
## Notes
- Integration tests complement (not replace) existing pytest suite
- Tests validate actual Unraid API behavior, not just our code
- Environment variable gates provide defense-in-depth security
- mcporter enables real-world validation impossible with mocked tests
- Action catalog becomes living documentation for all tools
---
**Plan Status:** Awaiting user approval
**Next Step:** Review plan, make adjustments, then execute via task list

View File

@@ -84,15 +84,15 @@ docker compose down
- **Health Monitoring**: Comprehensive health check tool for system monitoring
- **Real-time Subscriptions**: WebSocket-based live data streaming
-### Tool Categories (10 Tools, 90 Actions)
+### Tool Categories (10 Tools, 76 Actions)
1. **`unraid_info`** (19 actions): overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config
-2. **`unraid_array`** (12 actions): start, stop, parity_start/pause/resume/cancel/history, mount_disk, unmount_disk, clear_stats, shutdown, reboot
+2. **`unraid_array`** (5 actions): parity_start, parity_pause, parity_resume, parity_cancel, parity_status
3. **`unraid_storage`** (6 actions): shares, disks, disk_details, unassigned, log_files, logs
4. **`unraid_docker`** (15 actions): list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates
5. **`unraid_vm`** (9 actions): list, details, start, stop, pause, resume, force_stop, reboot, reset
6. **`unraid_notifications`** (9 actions): overview, list, warnings, create, archive, unread, delete, delete_archived, archive_all
7. **`unraid_rclone`** (4 actions): list_remotes, config_form, create_remote, delete_remote
-8. **`unraid_users`** (8 actions): me, list, get, add, delete, cloud, remote_access, origins
+8. **`unraid_users`** (1 action): me
9. **`unraid_keys`** (5 actions): list, get, create, update, delete
10. **`unraid_health`** (3 actions): check, test_connection, diagnose

View File

@@ -26,6 +26,7 @@
- [Installation](#-installation)
- [Configuration](#-configuration)
- [Available Tools & Resources](#-available-tools--resources)
+- [Custom Slash Commands](#-custom-slash-commands)
- [Development](#-development)
- [Architecture](#-architecture)
- [Troubleshooting](#-troubleshooting)
@@ -45,10 +46,11 @@
```
This provides instant access to Unraid monitoring and management through Claude Code with:
-- 10 tools exposing 90 actions via the consolidated action pattern
-- Real-time system metrics
-- Disk health monitoring
-- Docker and VM management
+- **10 MCP tools** exposing **83 actions** via the consolidated action pattern
+- **10 slash commands** for quick CLI-style access (`commands/`)
+- Real-time system metrics and health monitoring
+- Docker container and VM lifecycle management
+- Disk health monitoring and storage management
**See [.claude-plugin/README.md](.claude-plugin/README.md) for detailed plugin documentation.**
@@ -102,13 +104,15 @@ unraid-mcp/ # ${CLAUDE_PLUGIN_ROOT}
├── .claude-plugin/
│ ├── marketplace.json # Marketplace catalog
│ └── plugin.json # Plugin manifest
+├── commands/ # 10 custom slash commands
├── unraid_mcp/ # MCP server Python package
├── skills/unraid/ # Skill and documentation
├── pyproject.toml # Dependencies and entry points
└── scripts/ # Validation and helper scripts
```
-- **MCP Server**: 10 tools with 90 actions via GraphQL API
+- **MCP Server**: 10 tools with 76 actions via GraphQL API
+- **Slash Commands**: 10 commands in `commands/` for quick CLI-style access
- **Skill**: `/unraid` skill for monitoring and queries
- **Entry Point**: `unraid-mcp-server` defined in pyproject.toml
@@ -214,18 +218,18 @@ UNRAID_VERIFY_SSL=true # true, false, or path to CA bundle
Each tool uses a consolidated `action` parameter to expose multiple operations, reducing context window usage. Destructive actions require `confirm=True`.
-### Tool Categories (10 Tools, 90 Actions)
+### Tool Categories (10 Tools, 76 Actions)
| Tool | Actions | Description |
|------|---------|-------------|
| **`unraid_info`** | 19 | overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config |
-| **`unraid_array`** | 12 | start, stop, parity_start/pause/resume/cancel/history, mount_disk, unmount_disk, clear_stats, shutdown, reboot |
+| **`unraid_array`** | 5 | parity_start, parity_pause, parity_resume, parity_cancel, parity_status |
| **`unraid_storage`** | 6 | shares, disks, disk_details, unassigned, log_files, logs |
| **`unraid_docker`** | 15 | list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates |
| **`unraid_vm`** | 9 | list, details, start, stop, pause, resume, force_stop, reboot, reset |
| **`unraid_notifications`** | 9 | overview, list, warnings, create, archive, unread, delete, delete_archived, archive_all |
| **`unraid_rclone`** | 4 | list_remotes, config_form, create_remote, delete_remote |
-| **`unraid_users`** | 8 | me, list, get, add, delete, cloud, remote_access, origins |
+| **`unraid_users`** | 1 | me |
| **`unraid_keys`** | 5 | list, get, create, update, delete |
| **`unraid_health`** | 3 | check, test_connection, diagnose |
@@ -236,6 +240,64 @@ Each tool uses a consolidated `action` parameter to expose multiple operations,
---
## 💬 Custom Slash Commands
The project includes **10 custom slash commands** in `commands/` for quick access to Unraid operations:
### Available Commands
| Command | Actions | Quick Access |
|---------|---------|--------------|
| `/info` | 19 | System information, metrics, configuration |
| `/array` | 5 | Parity check management |
| `/storage` | 6 | Shares, disks, logs |
| `/docker` | 15 | Container management and monitoring |
| `/vm` | 9 | Virtual machine lifecycle |
| `/notifications` | 9 | Alert management |
| `/rclone` | 4 | Cloud storage remotes |
| `/users` | 1 | Current user query |
| `/keys` | 5 | API key management |
| `/health` | 3 | System health checks |
### Example Usage
```bash
# System monitoring
/info overview
/health check
/storage shares
# Container management
/docker list
/docker start plex
/docker logs nginx
# VM operations
/vm list
/vm start windows-10
# Notifications
/notifications warnings
/notifications archive_all
# User management
/users me
/keys create "Automation Key" "For CI/CD"
```
### Command Features
Each slash command provides:
- **Comprehensive documentation** of all available actions
- **Argument hints** for required parameters
- **Safety warnings** for destructive operations (⚠️)
- **Usage examples** for common scenarios
- **Action categorization** (Query, Lifecycle, Management, Destructive)
Run any command without arguments to see full documentation, or type `/help` to list all available commands.
---
## 🔧 Development
@@ -255,15 +317,15 @@ unraid-mcp/
│ │ ├── manager.py # WebSocket management
│ │ ├── resources.py # MCP resources
│ │ └── diagnostics.py # Diagnostic tools
-│ ├── tools/ # MCP tool categories (10 tools, 90 actions)
+│ ├── tools/ # MCP tool categories (10 tools, 76 actions)
│ │ ├── info.py # System information (19 actions)
-│ │ ├── array.py # Array management (12 actions)
+│ │ ├── array.py # Parity checks (5 actions)
│ │ ├── storage.py # Storage & monitoring (6 actions)
│ │ ├── docker.py # Container management (15 actions)
│ │ ├── virtualization.py # VM management (9 actions)
│ │ ├── notifications.py # Notification management (9 actions)
│ │ ├── rclone.py # Cloud storage (4 actions)
-│ │ ├── users.py # User management (8 actions)
+│ │ ├── users.py # Current user query (1 action)
│ │ ├── keys.py # API key management (5 actions)
│ │ └── health.py # Health checks (3 actions)
│ └── server.py # FastMCP server setup
@@ -284,6 +346,20 @@ uv run ty check unraid_mcp/
uv run pytest
```
### API Schema Docs Automation
```bash
# Regenerate complete GraphQL schema reference from live introspection
set -a; source .env; set +a
uv run python scripts/generate_unraid_api_reference.py
```
This updates `docs/UNRAID_API_COMPLETE_REFERENCE.md` with all operations, directives, and types visible to your API key.
Optional cron example (daily at 03:15):
```bash
15 3 * * * cd /path/to/unraid-mcp && /usr/bin/env bash -lc 'set -a; source .env; set +a; uv run python scripts/generate_unraid_api_reference.py && git add docs/UNRAID_API_COMPLETE_REFERENCE.md && git commit -m "docs: refresh unraid graphql schema"'
```
### Development Workflow
```bash
# Start development server

30
commands/array.md Normal file
View File

@@ -0,0 +1,30 @@
---
description: Manage Unraid array parity checks
argument-hint: [action] [correct=true/false]
---
Execute the `unraid_array` MCP tool with action: `$1`
## Available Actions (5)
**Parity Check Operations:**
- `parity_start` - Start parity check/sync (optional: correct=true to fix errors)
- `parity_pause` - Pause running parity operation
- `parity_resume` - Resume paused parity operation
- `parity_cancel` - Cancel running parity operation
- `parity_status` - Get current parity check status
## Example Usage
```
/array parity_start
/array parity_start correct=true
/array parity_pause
/array parity_resume
/array parity_cancel
/array parity_status
```
**Note:** Use `correct=true` with `parity_start` to automatically fix any parity errors found during the check.
Use the tool to execute the requested parity operation and report the results.

48
commands/docker.md Normal file
View File

@@ -0,0 +1,48 @@
---
description: Manage Docker containers on Unraid
argument-hint: [action] [additional-args]
---
Execute the `unraid_docker` MCP tool with action: `$1`
## Available Actions (15)
**Query Operations:**
- `list` - List all Docker containers with status
- `details` - Get detailed info for a container (requires container identifier)
- `logs` - Get container logs (requires container identifier)
- `check_updates` - Check for available container updates
- `port_conflicts` - Identify port conflicts
- `networks` - List Docker networks
- `network_details` - Get network details (requires network identifier)
**Container Lifecycle:**
- `start` - Start a stopped container (requires container identifier)
- `stop` - Stop a running container (requires container identifier)
- `restart` - Restart a container (requires container identifier)
- `pause` - Pause a running container (requires container identifier)
- `unpause` - Unpause a paused container (requires container identifier)
**Updates & Management:**
- `update` - Update a specific container (requires container identifier)
- `update_all` - Update all containers with available updates
**⚠️ Destructive:**
- `remove` - Permanently delete a container (requires container identifier + confirmation)
## Example Usage
```
/unraid-docker list
/unraid-docker details plex
/unraid-docker logs plex
/unraid-docker start nginx
/unraid-docker restart sonarr
/unraid-docker check_updates
/unraid-docker update plex
/unraid-docker port_conflicts
```
**Container Identification:** Use container name, ID, or partial match (fuzzy search supported)
Use the tool to execute the requested Docker operation and report the results.

commands/health.md Normal file

@@ -0,0 +1,59 @@
---
description: Check Unraid system health and connectivity
argument-hint: [action]
---
Execute the `unraid_health` MCP tool with action: `$1`
## Available Actions (3)
**Health Monitoring:**
- `check` - Comprehensive health check of all system components
- `test_connection` - Test basic API connectivity
- `diagnose` - Detailed diagnostic information for troubleshooting
## What Each Action Checks
### `check` - System Health
- API connectivity and response time
- Array status and disk health
- Running services status
- Docker container health
- VM status
- System resources (CPU, RAM, disk I/O)
- Network connectivity
- UPS status (if configured)
Returns: Overall health status (`HEALTHY`, `WARNING`, `CRITICAL`) with component details
### `test_connection` - Connectivity
- GraphQL endpoint availability
- Authentication validity
- Basic query execution
- Network latency
Returns: Connection status and latency metrics
### `diagnose` - Diagnostic Details
- Full system configuration
- Resource utilization trends
- Error logs and warnings
- Component-level diagnostics
- Troubleshooting recommendations
Returns: Detailed diagnostic report
## Example Usage
```
/unraid-health check
/unraid-health test_connection
/unraid-health diagnose
```
**Use Cases:**
- `check` - Quick health status (monitoring dashboards)
- `test_connection` - Verify API access (troubleshooting)
- `diagnose` - Deep dive debugging (issue resolution)
Use the tool to execute the requested health check and present results with clear severity indicators.

commands/info.md Normal file

@@ -0,0 +1,50 @@
---
description: Query Unraid server information and configuration
argument-hint: [action] [additional-args]
---
Execute the `unraid_info` MCP tool with action: `$1`
## Available Actions (19)
**System Overview:**
- `overview` - Complete system summary with all key metrics
- `server` - Server details (hostname, version, uptime)
- `servers` - List all known Unraid servers
**Array & Storage:**
- `array` - Array status, disks, and health
**Network & Registration:**
- `network` - Network configuration and interfaces
- `registration` - Registration status and license info
- `connect` - Connect service configuration
- `online` - Online status check
**Configuration:**
- `config` - System configuration settings
- `settings` - User settings and preferences
- `variables` - Environment variables
- `display` - Display settings
**Services & Monitoring:**
- `services` - Running services status
- `metrics` - System metrics (CPU, RAM, disk I/O)
- `ups_devices` - List all UPS devices
- `ups_device` - Get specific UPS device details (requires device_id)
- `ups_config` - UPS configuration
**Ownership:**
- `owner` - Server owner information
- `flash` - USB flash drive details
## Example Usage
```
/unraid-info overview
/unraid-info array
/unraid-info metrics
/unraid-info ups_device [device-id]
```
Use the tool to retrieve the requested information and present it in a clear, formatted manner.

commands/keys.md Normal file

@@ -0,0 +1,37 @@
---
description: Manage Unraid API keys for authentication
argument-hint: [action] [key-id]
---
Execute the `unraid_keys` MCP tool with action: `$1`
## Available Actions (5)
**Query Operations:**
- `list` - List all API keys with metadata
- `get` - Get details for a specific API key (requires key_id)
**Management Operations:**
- `create` - Create a new API key (requires name, optional description and expiry)
- `update` - Update an existing API key (requires key_id, name, description)
**⚠️ Destructive:**
- `delete` - Permanently revoke an API key (requires key_id + confirmation)
## Example Usage
```
/unraid-keys list
/unraid-keys get [key-id]
/unraid-keys create "MCP Server Key" "Key for unraid-mcp integration"
/unraid-keys update [key-id] "Updated Name" "Updated description"
```
**Key Format:** PrefixedID (`hex64:suffix`)
**IMPORTANT:**
- Deleted keys are immediately revoked and cannot be recovered
- Store new keys securely - they're only shown once during creation
- Set expiry dates for keys used in automation
Use the tool to execute the requested API key operation and report the results.
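The `hex64:suffix` key format above can be sanity-checked client-side before issuing a call. A minimal sketch, assuming "hex64" means 64 lowercase hex characters (the server remains the authority on what it accepts):

```python
import re

# 64 hex chars, a colon, then a non-empty suffix (per the key format note above)
PREFIXED_ID_RE = re.compile(r"^[0-9a-f]{64}:.+$")


def looks_like_prefixed_id(value: str) -> bool:
    """Cheap client-side shape check for PrefixedID values."""
    return PREFIXED_ID_RE.match(value) is not None
```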

commands/notifications.md Normal file

@@ -0,0 +1,41 @@
---
description: Manage Unraid system notifications and alerts
argument-hint: [action] [additional-args]
---
Execute the `unraid_notifications` MCP tool with action: `$1`
## Available Actions (9)
**Query Operations:**
- `overview` - Summary of notification counts by category
- `list` - List all notifications with details
- `warnings` - List only warning/error notifications
- `unread` - List unread notifications only
**Management Operations:**
- `create` - Create a new notification (requires title, message, severity)
- `archive` - Archive a specific notification (requires notification_id)
- `archive_all` - Archive all current notifications
**⚠️ Destructive Operations:**
- `delete` - Permanently delete a notification (requires notification_id + confirmation)
- `delete_archived` - Permanently delete all archived notifications (requires confirmation)
## Example Usage
```
/unraid-notifications overview
/unraid-notifications list
/unraid-notifications warnings
/unraid-notifications unread
/unraid-notifications create "Test Alert" "This is a test" normal
/unraid-notifications archive [notification-id]
/unraid-notifications archive_all
```
**Severity Levels:** `normal`, `warning`, `alert`, `critical`
**IMPORTANT:** Delete operations are permanent and cannot be undone.
Use the tool to execute the requested notification operation and present results clearly.

commands/rclone.md Normal file

@@ -0,0 +1,32 @@
---
description: Manage Rclone cloud storage remotes on Unraid
argument-hint: [action] [remote-name]
---
Execute the `unraid_rclone` MCP tool with action: `$1`
## Available Actions (4)
**Query Operations:**
- `list_remotes` - List all configured Rclone remotes
- `config_form` - Get configuration form for a remote type (requires remote_type)
**Management Operations:**
- `create_remote` - Create a new Rclone remote (requires remote_name, remote_type, config)
**⚠️ Destructive:**
- `delete_remote` - Permanently delete a remote (requires remote_name + confirmation)
## Example Usage
```
/unraid-rclone list_remotes
/unraid-rclone config_form s3
/unraid-rclone create_remote mybackup s3 {"access_key":"...","secret_key":"..."}
```
**Supported Remote Types:** s3, dropbox, google-drive, onedrive, backblaze, ftp, sftp, webdav, etc.
**IMPORTANT:** Deleting a remote does NOT delete cloud data, only the local configuration.
Use the tool to execute the requested Rclone operation and report the results.

commands/storage.md Normal file

@@ -0,0 +1,33 @@
---
description: Query Unraid storage, shares, and disk information
argument-hint: [action] [additional-args]
---
Execute the `unraid_storage` MCP tool with action: `$1`
## Available Actions (6)
**Shares & Disks:**
- `shares` - List all user shares with sizes and allocation
- `disks` - List all disks in the array
- `disk_details` - Get detailed info for a specific disk (requires disk identifier)
- `unassigned` - List unassigned devices
**Logs:**
- `log_files` - List available system log files
- `logs` - Read log file contents (requires log file path)
## Example Usage
```
/unraid-storage shares
/unraid-storage disks
/unraid-storage disk_details disk1
/unraid-storage unassigned
/unraid-storage log_files
/unraid-storage logs /var/log/syslog
```
**Note:** Log file paths must start with `/var/log/`, `/boot/logs/`, or `/mnt/`
Use the tool to retrieve the requested storage information and present it clearly.
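The path restriction above amounts to a prefix allow-list; a minimal sketch (hypothetical helper name, and the server's real validation may be stricter, e.g. also resolving symlinks):

```python
from pathlib import PurePosixPath

ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")


def is_allowed_log_path(path: str) -> bool:
    """Reject relative paths and '..' traversal, then apply the prefix allow-list."""
    p = PurePosixPath(path)
    if not p.is_absolute() or ".." in p.parts:
        return False
    return str(p).startswith(ALLOWED_LOG_PREFIXES)
```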

commands/users.md Normal file

@@ -0,0 +1,31 @@
---
description: Query current authenticated Unraid user
argument-hint: [action]
---
Execute the `unraid_users` MCP tool with action: `$1`
## Available Actions (1)
**Query Operation:**
- `me` - Get current authenticated user info (id, name, description, roles)
## Example Usage
```
/users me
```
## API Limitation
⚠️ **Note:** The Unraid GraphQL API does not support user management operations. Only the `me` query is available, which returns information about the currently authenticated user (the API key holder).
**Not supported:**
- Listing all users
- Getting other user details
- Adding/deleting users
- Cloud/remote access queries
For user management, use the Unraid web UI.
Use the tool to query the current authenticated user and report the results.

commands/vm.md Normal file

@@ -0,0 +1,41 @@
---
description: Manage virtual machines on Unraid
argument-hint: [action] [vm-id]
---
Execute the `unraid_vm` MCP tool with action: `$1` and vm_id: `$2`
## Available Actions (9)
**Query Operations:**
- `list` - List all VMs with status and resource allocation
- `details` - Get detailed info for a VM (requires vm_id)
**Lifecycle Operations:**
- `start` - Start a stopped VM (requires vm_id)
- `stop` - Gracefully stop a running VM (requires vm_id)
- `pause` - Pause a running VM (requires vm_id)
- `resume` - Resume a paused VM (requires vm_id)
- `reboot` - Gracefully reboot a VM (requires vm_id)
**⚠️ Destructive Operations:**
- `force_stop` - Forcefully power off VM (like pulling power cord - requires vm_id + confirmation)
- `reset` - Hard reset VM (power cycle without graceful shutdown - requires vm_id + confirmation)
## Example Usage
```
/unraid-vm list
/unraid-vm details windows-10
/unraid-vm start ubuntu-server
/unraid-vm stop windows-10
/unraid-vm pause debian-vm
/unraid-vm resume debian-vm
/unraid-vm reboot ubuntu-server
```
**VM Identification:** Use VM ID (PrefixedID format: `hex64:suffix`)
**IMPORTANT:** `force_stop` and `reset` bypass graceful shutdown and may corrupt the VM's filesystem. Use `stop` instead for safe shutdowns.
Use the tool to execute the requested VM operation and report the results.

docs/DESTRUCTIVE_ACTIONS.md Normal file

@@ -0,0 +1,240 @@
# Destructive Actions Inventory
This file lists all destructive actions across the unraid-mcp tools. Fill in the "Testing Strategy" column to specify how each should be tested in the mcporter integration test suite.
**Last Updated:** 2026-02-15
---
## Summary
- **Total Destructive Actions:** 8 (after removing 4 array operations)
- **Tools with Destructive Actions:** 6
- **Environment Variable Gates:** 6 (one per tool)
---
## Destructive Actions by Tool
### 1. Docker (1 action)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `remove` | Permanently delete a Docker container | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_DOCKER_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Container must be stopped first
- Removes container config and any non-volume data
- Cannot be undone
---
### 2. Virtual Machines (2 actions)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `force_stop` | Forcefully power off a running VM (equivalent to pulling power cord) | **MEDIUM** - Severe but recoverable, risk of data corruption | `UNRAID_ALLOW_VM_DESTRUCTIVE` | **TODO: Specify testing approach** |
| `reset` | Hard reset a VM (power cycle without graceful shutdown) | **MEDIUM** - Severe but recoverable, risk of data corruption | `UNRAID_ALLOW_VM_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Both bypass graceful shutdown procedures
- May corrupt VM filesystem if used during write operations
- Use `stop` action instead for graceful shutdown
---
### 3. Notifications (2 actions)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `delete` | Permanently delete a notification | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_NOTIFICATIONS_DESTRUCTIVE` | **TODO: Specify testing approach** |
| `delete_archived` | Permanently delete all archived notifications | **HIGH** - Bulk data loss, irreversible | `UNRAID_ALLOW_NOTIFICATIONS_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Cannot recover deleted notifications
- `delete_archived` affects ALL archived notifications (bulk operation)
---
### 4. Rclone (1 action)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `delete_remote` | Permanently delete an rclone remote configuration | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_RCLONE_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Removes cloud storage connection configuration
- Does NOT delete data in the remote storage
- Must reconfigure remote from scratch if deleted
---
### 5. Users (1 action)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `delete` | Permanently delete a user account | **HIGH** - Data loss, irreversible | `UNRAID_ALLOW_USERS_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Removes user account and permissions
- Cannot delete the root user
- User's data may remain but become orphaned
- ⚠️ The current GraphQL API exposes only the `me` query for users (see `commands/users.md`); verify this mutation actually exists before gating or testing it
---
### 6. API Keys (1 action)
| Action | Description | Risk Level | Env Var Gate | Testing Strategy |
|--------|-------------|------------|--------------|------------------|
| `delete` | Permanently delete an API key | **HIGH** - Data loss, irreversible, breaks integrations | `UNRAID_ALLOW_KEYS_DESTRUCTIVE` | **TODO: Specify testing approach** |
**Notes:**
- Immediately revokes API key access
- Will break any integrations using the deleted key
- Cannot be undone - must create new key
---
## Removed Actions (No Longer Exposed)
These actions were previously marked as destructive but have been **removed** from the array tool per the implementation plan:
| Action | Former Risk Level | Reason for Removal |
|--------|-------------------|-------------------|
| `start` | CRITICAL | System-wide impact - should not be exposed via MCP |
| `stop` | CRITICAL | System-wide impact - should not be exposed via MCP |
| `shutdown` | CRITICAL | System-wide impact - could cause data loss |
| `reboot` | CRITICAL | System-wide impact - disrupts all services |
---
## Testing Strategy Options
Choose one of the following for each action in the "Testing Strategy" column:
### Option 1: Mock/Validation Only
- Test parameter validation
- Test `confirm=True` requirement
- Test env var gate requirement
- **DO NOT** execute the actual action
### Option 2: Dry-Run Testing
- Test with `confirm=false` to verify rejection
- Test without env var to verify gate
- **DO NOT** execute with both gates passed
### Option 3: Test Server Execution
- Execute on a dedicated test Unraid server (e.g., shart)
- Requires pre-created test resources (containers, VMs, notifications)
- Verify action succeeds and state changes as expected
- Clean up after test
### Option 4: Manual Test Checklist
- Document manual verification steps
- Do not automate in mcporter suite
- Requires human operator to execute and verify
### Option 5: Skip Testing
- Too dangerous to automate
- Rely on unit tests only
- Document why testing is skipped
---
## Example Testing Strategies
**Safe approach (recommended for most):**
```
Option 1: Mock/Validation Only
- Verify action requires UNRAID_ALLOW_DOCKER_DESTRUCTIVE=true
- Verify action requires confirm=True
- Do not execute actual deletion
```
**Comprehensive approach (for test server only):**
```
Option 3: Test Server Execution on 'shart'
- Create test container 'mcporter-test-container'
- Execute remove with gates enabled
- Verify container is deleted
- Clean up not needed (container already removed)
```
**Hybrid approach:**
```
Option 1 + Option 4: Mock validation + Manual checklist
- Automated: Test gate requirements
- Manual: Human operator verifies on test server
```
---
## Usage in mcporter Tests
Each tool test script will check the testing strategy:
```bash
# Example from test_docker.sh
test_remove_action() {
    local strategy="TODO: Specify testing approach"  # From this file
    case "$strategy" in
        *"Option 1"*|*"Mock"*)
            # Mock/validation testing
            test_remove_requires_env_var
            test_remove_requires_confirm
            ;;
        *"Option 3"*|*"Test Server"*)
            # Real execution on test server
            if [[ "$UNRAID_TEST_SERVER" != "unraid-shart" ]]; then
                echo "SKIP: Destructive test only runs on test server"
                return 2
            fi
            test_remove_real_execution
            ;;
        *"Option 5"*|*"Skip"*)
            echo "SKIP: Testing disabled for this action"
            return 2
            ;;
    esac
}
```
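The helpers referenced above (`test_remove_requires_env_var`, `test_remove_requires_confirm`) are not shown; a hedged sketch of how they might be written, assuming a `call_tool` wrapper around the mcporter CLI that prints the tool's response (function names and matched error strings are illustrative):

```bash
# Hypothetical helpers -- call_tool and the error strings below are assumptions.
test_remove_requires_env_var() {
    local output
    output=$(UNRAID_ALLOW_DOCKER_DESTRUCTIVE=false \
        call_tool unraid_docker '{"action": "remove", "container": "test", "confirm": true}')
    if echo "$output" | grep -qi "destructive actions are disabled"; then
        echo "PASS: remove blocked without env var gate"
    else
        echo "FAIL: remove was not blocked" >&2
        return 1
    fi
}

test_remove_requires_confirm() {
    local output
    output=$(UNRAID_ALLOW_DOCKER_DESTRUCTIVE=true \
        call_tool unraid_docker '{"action": "remove", "container": "test", "confirm": false}')
    if echo "$output" | grep -qi "confirm"; then
        echo "PASS: remove blocked without confirm=true"
    else
        echo "FAIL: remove executed without confirmation" >&2
        return 1
    fi
}
```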
---
## Security Model
**Two-tier security for destructive actions:**
1. **Environment Variable Gate** (first line of defense)
- Must be explicitly enabled per tool
- Defaults to disabled (safe)
- Prevents accidental execution
2. **Runtime Confirmation** (second line of defense)
- Must pass `confirm=True` in each call
- Forces explicit acknowledgment per operation
- Cannot be cached or preset
**Both must pass for execution.**
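A minimal sketch of how the two tiers could compose in the tool code (function name and error types are illustrative, not the actual implementation):

```python
import os


def check_destructive_gates(tool: str, confirm: bool) -> None:
    """Raise unless both the per-tool env var gate and the runtime confirmation pass."""
    env_var = f"UNRAID_ALLOW_{tool.upper()}_DESTRUCTIVE"
    # Tier 1: env var gate -- defaults to disabled
    if os.environ.get(env_var, "false").lower() != "true":
        raise PermissionError(
            f"Destructive {tool} actions are disabled; set {env_var}=true to enable"
        )
    # Tier 2: per-call confirmation -- must be passed explicitly every time
    if confirm is not True:
        raise ValueError("Destructive action requires confirm=True")
```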
---
## Next Steps
1. **Fill in Testing Strategy column** for each action above
2. **Create test fixtures** if using Option 3 (test containers, VMs, etc.)
3. **Implement tool test scripts** following the specified strategies
4. **Document any special setup** required for destructive testing
---
## Questions to Consider
For each action, ask:
- Is this safe to automate on a test server?
- Do we have test fixtures/resources available?
- What cleanup is required after testing?
- What's the blast radius if something goes wrong?
- Can we verify the action worked without side effects?

File diff suppressed because it is too large

@@ -0,0 +1,290 @@
# Unraid GraphQL API Operations
Generated via live introspection at `2026-02-15 23:45:50Z`.
## Schema Summary
- Query root: `Query`
- Mutation root: `Mutation`
- Subscription root: `Subscription`
- Total types: **164**
- Total directives: **6**
- Type kinds:
- `ENUM`: 32
- `INPUT_OBJECT`: 16
- `INTERFACE`: 2
- `OBJECT`: 103
- `SCALAR`: 10
- `UNION`: 1
## Queries
Total: **46**
### `apiKey(id: PrefixedID!): ApiKey`
#### Required Permissions: - Action: **READ_ANY** - Resource: **API_KEY**
Arguments:
- `id`: `PrefixedID!`
### `apiKeyPossiblePermissions(): [Permission!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **PERMISSION** #### Description: All possible permissions for API keys
### `apiKeyPossibleRoles(): [Role!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **PERMISSION** #### Description: All possible roles for API keys
### `apiKeys(): [ApiKey!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **API_KEY**
### `array(): UnraidArray!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ARRAY**
### `config(): Config!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG**
### `customization(): Customization`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CUSTOMIZATIONS**
### `disk(id: PrefixedID!): Disk!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **DISK**
Arguments:
- `id`: `PrefixedID!`
### `disks(): [Disk!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **DISK**
### `docker(): Docker!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **DOCKER**
### `flash(): Flash!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **FLASH**
### `getApiKeyCreationFormSchema(): ApiKeyFormSettings!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **API_KEY** #### Description: Get JSON Schema for API key creation form
### `getAvailableAuthActions(): [AuthAction!]!`
Get all available authentication actions with possession
### `getPermissionsForRoles(roles: [Role!]!): [Permission!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **PERMISSION** #### Description: Get the actual permissions that would be granted by a set of roles
Arguments:
- `roles`: `[Role!]!`
### `info(): Info!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **INFO**
### `isInitialSetup(): Boolean!`
### `isSSOEnabled(): Boolean!`
### `logFile(lines: Int, path: String!, startLine: Int): LogFileContent!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **LOGS**
Arguments:
- `lines`: `Int`
- `path`: `String!`
- `startLine`: `Int`
### `logFiles(): [LogFile!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **LOGS**
### `me(): UserAccount!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ME**
### `metrics(): Metrics!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **INFO**
### `notifications(): Notifications!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **NOTIFICATIONS** #### Description: Get all notifications
### `oidcConfiguration(): OidcConfiguration!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG** #### Description: Get the full OIDC configuration (admin only)
### `oidcProvider(id: PrefixedID!): OidcProvider`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG** #### Description: Get a specific OIDC provider by ID
Arguments:
- `id`: `PrefixedID!`
### `oidcProviders(): [OidcProvider!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG** #### Description: Get all configured OIDC providers (admin only)
### `online(): Boolean!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ONLINE**
### `owner(): Owner!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **OWNER**
### `parityHistory(): [ParityCheck!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ARRAY**
### `plugins(): [Plugin!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG** #### Description: List all installed plugins with their metadata
### `previewEffectivePermissions(permissions: [AddPermissionInput!], roles: [Role!]): [Permission!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **PERMISSION** #### Description: Preview the effective permissions for a combination of roles and explicit permissions
Arguments:
- `permissions`: `[AddPermissionInput!]`
- `roles`: `[Role!]`
### `publicOidcProviders(): [PublicOidcProvider!]!`
Get public OIDC provider information for login buttons
### `publicPartnerInfo(): PublicPartnerInfo`
### `publicTheme(): Theme!`
### `rclone(): RCloneBackupSettings!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **FLASH**
### `registration(): Registration`
#### Required Permissions: - Action: **READ_ANY** - Resource: **REGISTRATION**
### `server(): Server`
#### Required Permissions: - Action: **READ_ANY** - Resource: **SERVERS**
### `servers(): [Server!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **SERVERS**
### `services(): [Service!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **SERVICES**
### `settings(): Settings!`
### `shares(): [Share!]!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **SHARE**
### `upsConfiguration(): UPSConfiguration!`
### `upsDeviceById(id: String!): UPSDevice`
Arguments:
- `id`: `String!`
### `upsDevices(): [UPSDevice!]!`
### `validateOidcSession(token: String!): OidcSessionValidation!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **CONFIG** #### Description: Validate an OIDC session token (internal use for CLI validation)
Arguments:
- `token`: `String!`
### `vars(): Vars!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **VARS**
### `vms(): Vms!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **VMS** #### Description: Get information about all VMs on the system
## Mutations
Total: **22**
### `addPlugin(input: PluginManagementInput!): Boolean!`
#### Required Permissions: - Action: **UPDATE_ANY** - Resource: **CONFIG** #### Description: Add one or more plugins to the API. Returns false if restart was triggered automatically, true if manual restart is required.
Arguments:
- `input`: `PluginManagementInput!`
### `apiKey(): ApiKeyMutations!`
### `archiveAll(importance: NotificationImportance): NotificationOverview!`
Arguments:
- `importance`: `NotificationImportance`
### `archiveNotification(id: PrefixedID!): Notification!`
Marks a notification as archived.
Arguments:
- `id`: `PrefixedID!`
### `archiveNotifications(ids: [PrefixedID!]!): NotificationOverview!`
Arguments:
- `ids`: `[PrefixedID!]!`
### `array(): ArrayMutations!`
### `configureUps(config: UPSConfigInput!): Boolean!`
Arguments:
- `config`: `UPSConfigInput!`
### `createNotification(input: NotificationData!): Notification!`
Creates a new notification record
Arguments:
- `input`: `NotificationData!`
### `customization(): CustomizationMutations!`
### `deleteArchivedNotifications(): NotificationOverview!`
Deletes all archived notifications on server.
### `deleteNotification(id: PrefixedID!, type: NotificationType!): NotificationOverview!`
Arguments:
- `id`: `PrefixedID!`
- `type`: `NotificationType!`
### `docker(): DockerMutations!`
### `initiateFlashBackup(input: InitiateFlashBackupInput!): FlashBackupStatus!`
Initiates a flash drive backup using a configured remote.
Arguments:
- `input`: `InitiateFlashBackupInput!`
### `parityCheck(): ParityCheckMutations!`
### `rclone(): RCloneMutations!`
### `recalculateOverview(): NotificationOverview!`
Reads each notification to recompute & update the overview.
### `removePlugin(input: PluginManagementInput!): Boolean!`
#### Required Permissions: - Action: **DELETE_ANY** - Resource: **CONFIG** #### Description: Remove one or more plugins from the API. Returns false if restart was triggered automatically, true if manual restart is required.
Arguments:
- `input`: `PluginManagementInput!`
### `unarchiveAll(importance: NotificationImportance): NotificationOverview!`
Arguments:
- `importance`: `NotificationImportance`
### `unarchiveNotifications(ids: [PrefixedID!]!): NotificationOverview!`
Arguments:
- `ids`: `[PrefixedID!]!`
### `unreadNotification(id: PrefixedID!): Notification!`
Marks a notification as unread.
Arguments:
- `id`: `PrefixedID!`
### `updateSettings(input: JSON!): UpdateSettingsResponse!`
#### Required Permissions: - Action: **UPDATE_ANY** - Resource: **CONFIG**
Arguments:
- `input`: `JSON!`
### `vm(): VmMutations!`
## Subscriptions
Total: **11**
### `arraySubscription(): UnraidArray!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ARRAY**
### `logFile(path: String!): LogFileContent!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **LOGS**
Arguments:
- `path`: `String!`
### `notificationAdded(): Notification!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **NOTIFICATIONS**
### `notificationsOverview(): NotificationOverview!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **NOTIFICATIONS**
### `ownerSubscription(): Owner!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **OWNER**
### `parityHistorySubscription(): ParityCheck!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **ARRAY**
### `serversSubscription(): Server!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **SERVERS**
### `systemMetricsCpu(): CpuUtilization!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **INFO**
### `systemMetricsCpuTelemetry(): CpuPackages!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **INFO**
### `systemMetricsMemory(): MemoryUtilization!`
#### Required Permissions: - Action: **READ_ANY** - Resource: **INFO**
### `upsUpdates(): UPSDevice!`


@@ -284,9 +284,11 @@ dev = [
"pytest>=8.4.2", "pytest>=8.4.2",
"pytest-asyncio>=1.2.0", "pytest-asyncio>=1.2.0",
"pytest-cov>=7.0.0", "pytest-cov>=7.0.0",
"respx>=0.22.0",
"types-pytz>=2025.2.0.20250809", "types-pytz>=2025.2.0.20250809",
"ty>=0.0.15", "ty>=0.0.15",
"ruff>=0.12.8", "ruff>=0.12.8",
"build>=1.2.2", "build>=1.2.2",
"twine>=6.0.1", "twine>=6.0.1",
"graphql-core>=3.2.0",
] ]


@@ -0,0 +1,447 @@
#!/usr/bin/env python3
"""Generate a complete Markdown reference from Unraid GraphQL introspection."""
from __future__ import annotations
import argparse
import json
import os
from collections import Counter, defaultdict
from pathlib import Path
from typing import Any
import httpx
DEFAULT_OUTPUT = Path("docs/UNRAID_API_COMPLETE_REFERENCE.md")
INTROSPECTION_QUERY = """
query FullIntrospection {
__schema {
queryType { name }
mutationType { name }
subscriptionType { name }
directives {
name
description
locations
args {
name
description
defaultValue
type { ...TypeRef }
}
}
types {
kind
name
description
fields(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
args {
name
description
defaultValue
type { ...TypeRef }
}
type { ...TypeRef }
}
inputFields {
name
description
defaultValue
type { ...TypeRef }
}
interfaces { kind name }
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes { kind name }
}
}
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
"""
def _clean(text: str | None) -> str:
    """Collapse multiline description text into a single line."""
    if not text:
        return ""
    return " ".join(text.split())


def _type_to_str(type_ref: dict[str, Any] | None) -> str:
    """Render GraphQL nested type refs to SDL-like notation."""
    if not type_ref:
        return "Unknown"
    kind = type_ref.get("kind")
    if kind == "NON_NULL":
        return f"{_type_to_str(type_ref.get('ofType'))}!"
    if kind == "LIST":
        return f"[{_type_to_str(type_ref.get('ofType'))}]"
    return str(type_ref.get("name") or kind or "Unknown")


def _field_lines(field: dict[str, Any], *, is_input: bool) -> list[str]:
    """Render field/input-field markdown lines."""
    lines: list[str] = []
    lines.append(f"- `{field['name']}`: `{_type_to_str(field.get('type'))}`")
    description = _clean(field.get("description"))
    if description:
        lines.append(f"  - {description}")
    default_value = field.get("defaultValue")
    if default_value is not None:
        lines.append(f"  - Default: `{default_value}`")
    if not is_input:
        args = sorted(field.get("args") or [], key=lambda item: str(item["name"]))
        if args:
            lines.append("  - Arguments:")
            for arg in args:
                arg_line = f"    - `{arg['name']}`: `{_type_to_str(arg.get('type'))}`"
                if arg.get("defaultValue") is not None:
                    arg_line += f" (default: `{arg['defaultValue']}`)"
                lines.append(arg_line)
                arg_description = _clean(arg.get("description"))
                if arg_description:
                    lines.append(f"      - {arg_description}")
    if field.get("isDeprecated"):
        reason = _clean(field.get("deprecationReason"))
        lines.append(f"  - Deprecated: {reason}" if reason else "  - Deprecated")
    return lines


def _build_markdown(schema: dict[str, Any], *, include_introspection: bool) -> str:
    """Build full Markdown schema reference."""
    all_types = schema.get("types") or []
    types = [
        item
        for item in all_types
        if item.get("name") and (include_introspection or not str(item["name"]).startswith("__"))
    ]
    types_by_name = {str(item["name"]): item for item in types}
    kind_counts = Counter(str(item.get("kind", "UNKNOWN")) for item in types)
    directives = sorted(schema.get("directives") or [], key=lambda item: str(item["name"]))

    implements_map: dict[str, list[str]] = defaultdict(list)
    for item in types:
        for interface in item.get("interfaces") or []:
            interface_name = interface.get("name")
            if interface_name:
                implements_map[str(interface_name)].append(str(item["name"]))

    query_root = (schema.get("queryType") or {}).get("name")
    mutation_root = (schema.get("mutationType") or {}).get("name")
    subscription_root = (schema.get("subscriptionType") or {}).get("name")

    lines: list[str] = []
    lines.append("# Unraid GraphQL API Complete Schema Reference")
    lines.append("")
    lines.append(
        "Generated via live GraphQL introspection for the configured endpoint and API key."
    )
    lines.append("")
    lines.append("This is permission-scoped: it contains everything visible to the API key used.")
    lines.append("")
    lines.append("## Table of Contents")
    lines.append("- [Schema Summary](#schema-summary)")
    lines.append("- [Root Operations](#root-operations)")
    lines.append("- [Directives](#directives)")
    lines.append("- [All Types (Alphabetical)](#all-types-alphabetical)")
    lines.append("")
    lines.append("## Schema Summary")
    lines.append(f"- Query root: `{query_root}`")
    lines.append(f"- Mutation root: `{mutation_root}`")
    lines.append(f"- Subscription root: `{subscription_root}`")
    lines.append(f"- Total types: **{len(types)}**")
    lines.append(f"- Total directives: **{len(directives)}**")
    lines.append("- Type kinds:")
    lines.extend(f"- `{kind}`: {kind_counts[kind]}" for kind in sorted(kind_counts))
    lines.append("")

    def render_root(root_name: str | None, label: str) -> None:
        lines.append(f"### {label}")
        if not root_name or root_name not in types_by_name:
            lines.append("Not exposed.")
            lines.append("")
            return
        root_type = types_by_name[root_name]
        fields = sorted(root_type.get("fields") or [], key=lambda item: str(item["name"]))
        lines.append(f"Total fields: **{len(fields)}**")
        lines.append("")
        for field in fields:
            args = sorted(field.get("args") or [], key=lambda item: str(item["name"]))
            arg_signature: list[str] = []
            for arg in args:
                part = f"{arg['name']}: {_type_to_str(arg.get('type'))}"
                if arg.get("defaultValue") is not None:
                    part += f" = {arg['defaultValue']}"
                arg_signature.append(part)
            signature = (
                f"{field['name']}({', '.join(arg_signature)})"
                if arg_signature
                else f"{field['name']}()"
            )
            lines.append(f"- `{signature}: {_type_to_str(field.get('type'))}`")
            description = _clean(field.get("description"))
            if description:
                lines.append(f"  - {description}")
            if field.get("isDeprecated"):
                reason = _clean(field.get("deprecationReason"))
                lines.append(f"  - Deprecated: {reason}" if reason else "  - Deprecated")
        lines.append("")

    lines.append("## Root Operations")
    render_root(query_root, "Queries")
    render_root(mutation_root, "Mutations")
    render_root(subscription_root, "Subscriptions")

    lines.append("## Directives")
    if not directives:
        lines.append("No directives exposed.")
        lines.append("")
    else:
        for directive in directives:
            lines.append(f"### `@{directive['name']}`")
            description = _clean(directive.get("description"))
            if description:
                lines.append(description)
                lines.append("")
            locations = directive.get("locations") or []
            lines.append(
                f"- Locations: {', '.join(f'`{item}`' for item in locations) if locations else 'None'}"
            )
            args = sorted(directive.get("args") or [], key=lambda item: str(item["name"]))
            if args:
                lines.append("- Arguments:")
                for arg in args:
                    line = f"  - `{arg['name']}`: `{_type_to_str(arg.get('type'))}`"
                    if arg.get("defaultValue") is not None:
                        line += f" (default: `{arg['defaultValue']}`)"
                    lines.append(line)
                    arg_description = _clean(arg.get("description"))
                    if arg_description:
                        lines.append(f"    - {arg_description}")
            lines.append("")

    lines.append("## All Types (Alphabetical)")
    for item in sorted(types, key=lambda row: str(row["name"])):
        name = str(item["name"])
        kind = str(item["kind"])
        lines.append(f"### `{name}` ({kind})")
        description = _clean(item.get("description"))
        if description:
            lines.append(description)
            lines.append("")
        if kind == "OBJECT":
            interfaces = sorted(
                str(interface["name"])
                for interface in (item.get("interfaces") or [])
                if interface.get("name")
            )
            if interfaces:
                lines.append(f"- Implements: {', '.join(f'`{value}`' for value in interfaces)}")
            fields = sorted(item.get("fields") or [], key=lambda row: str(row["name"]))
            lines.append(f"- Fields ({len(fields)}):")
            if fields:
                for field in fields:
                    lines.extend(_field_lines(field, is_input=False))
            else:
                lines.append("- None")
        elif kind == "INPUT_OBJECT":
            fields = sorted(item.get("inputFields") or [], key=lambda row: str(row["name"]))
            lines.append(f"- Input fields ({len(fields)}):")
            if fields:
                for field in fields:
                    lines.extend(_field_lines(field, is_input=True))
            else:
                lines.append("- None")
        elif kind == "ENUM":
            enum_values = sorted(item.get("enumValues") or [], key=lambda row: str(row["name"]))
            lines.append(f"- Enum values ({len(enum_values)}):")
            if enum_values:
                for enum_value in enum_values:
                    lines.append(f"  - `{enum_value['name']}`")
                    enum_description = _clean(enum_value.get("description"))
                    if enum_description:
                        lines.append(f"    - {enum_description}")
                    if enum_value.get("isDeprecated"):
                        reason = _clean(enum_value.get("deprecationReason"))
                        lines.append(
                            f"    - Deprecated: {reason}" if reason else "    - Deprecated"
                        )
            else:
                lines.append("- None")
        elif kind == "INTERFACE":
            fields = sorted(item.get("fields") or [], key=lambda row: str(row["name"]))
            lines.append(f"- Interface fields ({len(fields)}):")
            if fields:
                for field in fields:
                    lines.extend(_field_lines(field, is_input=False))
            else:
                lines.append("- None")
            implementers = sorted(implements_map.get(name, []))
            if implementers:
                lines.append(
                    f"- Implemented by ({len(implementers)}): "
                    + ", ".join(f"`{value}`" for value in implementers)
                )
            else:
                lines.append("- Implemented by (0): None")
        elif kind == "UNION":
            possible_types = sorted(
                str(possible["name"])
                for possible in (item.get("possibleTypes") or [])
                if possible.get("name")
            )
            if possible_types:
                lines.append(
                    f"- Possible types ({len(possible_types)}): "
                    + ", ".join(f"`{value}`" for value in possible_types)
                )
            else:
                lines.append("- Possible types (0): None")
        elif kind == "SCALAR":
            lines.append("- Scalar type")
        else:
            lines.append("- Unhandled type kind")
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"


def _parse_args() -> argparse.Namespace:
    """Parse CLI args."""
    parser = argparse.ArgumentParser(
        description="Generate complete Unraid GraphQL schema reference Markdown from introspection."
    )
    parser.add_argument(
        "--api-url",
        default=os.getenv("UNRAID_API_URL", ""),
        help="GraphQL endpoint URL (default: UNRAID_API_URL env var).",
    )
    parser.add_argument(
        "--api-key",
        default=os.getenv("UNRAID_API_KEY", ""),
        help="API key (default: UNRAID_API_KEY env var).",
    )
    parser.add_argument(
        "--output",
        type=Path,
        default=DEFAULT_OUTPUT,
        help=f"Output markdown file path (default: {DEFAULT_OUTPUT}).",
    )
    parser.add_argument(
        "--timeout-seconds",
        type=float,
        default=90.0,
        help="HTTP timeout in seconds (default: 90).",
    )
    parser.add_argument(
        "--verify-ssl",
        action="store_true",
        help="Enable SSL cert verification. Default is disabled for local/self-signed setups.",
    )
    parser.add_argument(
        "--include-introspection-types",
        action="store_true",
        help="Include __Schema/__Type/etc in the generated type list.",
    )
    return parser.parse_args()


def main() -> int:
    """Run generator CLI."""
    args = _parse_args()
    if not args.api_url:
        raise SystemExit("Missing API URL. Provide --api-url or set UNRAID_API_URL.")
    if not args.api_key:
        raise SystemExit("Missing API key. Provide --api-key or set UNRAID_API_KEY.")
    headers = {"Authorization": f"Bearer {args.api_key}", "Content-Type": "application/json"}
    with httpx.Client(timeout=args.timeout_seconds, verify=args.verify_ssl) as client:
        response = client.post(args.api_url, json={"query": INTROSPECTION_QUERY}, headers=headers)
        response.raise_for_status()
        payload = response.json()
    if payload.get("errors"):
        errors = json.dumps(payload["errors"], indent=2)
        raise SystemExit(f"GraphQL introspection returned errors:\n{errors}")
    schema = (payload.get("data") or {}).get("__schema")
    if not schema:
        raise SystemExit("GraphQL introspection returned no __schema payload.")
    markdown = _build_markdown(schema, include_introspection=bool(args.include_introspection_types))
    args.output.parent.mkdir(parents=True, exist_ok=True)
    args.output.write_text(markdown, encoding="utf-8")
    print(f"Wrote {args.output}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

tests/http/__init__.py (new file, 0 additions)

(File diff suppressed because it is too large.)

(File diff suppressed because it is too large.)

tests/safety/__init__.py (new file, 0 additions)
@@ -0,0 +1,324 @@
"""Safety audit tests for destructive action confirmation guards.

Verifies that all destructive operations across every tool require
explicit `confirm=True` before execution, and that the DESTRUCTIVE_ACTIONS
registries are complete and consistent.
"""

from collections.abc import Generator
from unittest.mock import AsyncMock, patch

import pytest

from unraid_mcp.core.exceptions import ToolError

# Import DESTRUCTIVE_ACTIONS sets from every tool module that defines one
from unraid_mcp.tools.docker import DESTRUCTIVE_ACTIONS as DOCKER_DESTRUCTIVE
from unraid_mcp.tools.docker import MUTATIONS as DOCKER_MUTATIONS
from unraid_mcp.tools.keys import DESTRUCTIVE_ACTIONS as KEYS_DESTRUCTIVE
from unraid_mcp.tools.keys import MUTATIONS as KEYS_MUTATIONS
from unraid_mcp.tools.notifications import DESTRUCTIVE_ACTIONS as NOTIF_DESTRUCTIVE
from unraid_mcp.tools.notifications import MUTATIONS as NOTIF_MUTATIONS
from unraid_mcp.tools.rclone import DESTRUCTIVE_ACTIONS as RCLONE_DESTRUCTIVE
from unraid_mcp.tools.rclone import MUTATIONS as RCLONE_MUTATIONS
from unraid_mcp.tools.virtualization import DESTRUCTIVE_ACTIONS as VM_DESTRUCTIVE
from unraid_mcp.tools.virtualization import MUTATIONS as VM_MUTATIONS

# Centralized import for make_tool_fn helper
# conftest.py sits in tests/ and is importable without __init__.py
from conftest import make_tool_fn

# ---------------------------------------------------------------------------
# Known destructive actions registry (ground truth for this audit)
# ---------------------------------------------------------------------------

# Every destructive action in the codebase, keyed by tool; each entry records
# the tool module, register function, tool name, audited action set, and the
# runtime DESTRUCTIVE_ACTIONS set.
KNOWN_DESTRUCTIVE: dict[str, dict[str, object]] = {
    "docker": {
        "module": "unraid_mcp.tools.docker",
        "register_fn": "register_docker_tool",
        "tool_name": "unraid_docker",
        "actions": {"remove"},
        "runtime_set": DOCKER_DESTRUCTIVE,
    },
    "vm": {
        "module": "unraid_mcp.tools.virtualization",
        "register_fn": "register_vm_tool",
        "tool_name": "unraid_vm",
        "actions": {"force_stop", "reset"},
        "runtime_set": VM_DESTRUCTIVE,
    },
    "notifications": {
        "module": "unraid_mcp.tools.notifications",
        "register_fn": "register_notifications_tool",
        "tool_name": "unraid_notifications",
        "actions": {"delete", "delete_archived"},
        "runtime_set": NOTIF_DESTRUCTIVE,
    },
    "rclone": {
        "module": "unraid_mcp.tools.rclone",
        "register_fn": "register_rclone_tool",
        "tool_name": "unraid_rclone",
        "actions": {"delete_remote"},
        "runtime_set": RCLONE_DESTRUCTIVE,
    },
    "keys": {
        "module": "unraid_mcp.tools.keys",
        "register_fn": "register_keys_tool",
        "tool_name": "unraid_keys",
        "actions": {"delete"},
        "runtime_set": KEYS_DESTRUCTIVE,
    },
}

# ---------------------------------------------------------------------------
# Registry validation: DESTRUCTIVE_ACTIONS sets match ground truth
# ---------------------------------------------------------------------------


class TestDestructiveActionRegistries:
    """Verify that DESTRUCTIVE_ACTIONS sets in source code match the audit."""

    @pytest.mark.parametrize("tool_key", list(KNOWN_DESTRUCTIVE.keys()))
    def test_destructive_set_matches_audit(self, tool_key: str) -> None:
        """Each tool's DESTRUCTIVE_ACTIONS must exactly match the audited set."""
        info = KNOWN_DESTRUCTIVE[tool_key]
        assert info["runtime_set"] == info["actions"], (
            f"{tool_key}: DESTRUCTIVE_ACTIONS is {info['runtime_set']}, "
            f"expected {info['actions']}"
        )

    @pytest.mark.parametrize("tool_key", list(KNOWN_DESTRUCTIVE.keys()))
    def test_destructive_actions_are_valid_mutations(self, tool_key: str) -> None:
        """Every destructive action must correspond to an actual mutation."""
        info = KNOWN_DESTRUCTIVE[tool_key]
        mutations_map = {
            "docker": DOCKER_MUTATIONS,
            "vm": VM_MUTATIONS,
            "notifications": NOTIF_MUTATIONS,
            "rclone": RCLONE_MUTATIONS,
            "keys": KEYS_MUTATIONS,
        }
        mutations = mutations_map[tool_key]
        for action in info["actions"]:
            assert action in mutations, (
                f"{tool_key}: destructive action '{action}' is not in MUTATIONS"
            )

    def test_no_delete_or_remove_mutations_missing_from_destructive(self) -> None:
        """Any mutation with 'delete' or 'remove' in its name should be destructive."""
        all_mutations = {
            "docker": DOCKER_MUTATIONS,
            "vm": VM_MUTATIONS,
            "notifications": NOTIF_MUTATIONS,
            "rclone": RCLONE_MUTATIONS,
            "keys": KEYS_MUTATIONS,
        }
        all_destructive = {
            "docker": DOCKER_DESTRUCTIVE,
            "vm": VM_DESTRUCTIVE,
            "notifications": NOTIF_DESTRUCTIVE,
            "rclone": RCLONE_DESTRUCTIVE,
            "keys": KEYS_DESTRUCTIVE,
        }
        missing: list[str] = []
        for tool_key, mutations in all_mutations.items():
            destructive = all_destructive[tool_key]
            for action_name in mutations:
                if ("delete" in action_name or "remove" in action_name) and action_name not in destructive:
                    missing.append(f"{tool_key}/{action_name}")
        assert not missing, (
            f"Mutations with 'delete'/'remove' not in DESTRUCTIVE_ACTIONS: {missing}"
        )

# ---------------------------------------------------------------------------
# Confirmation guard tests: calling without confirm=True raises ToolError
# ---------------------------------------------------------------------------

# Build parametrized test cases: (tool_key, action, kwargs_without_confirm)
# Each destructive action needs the minimum required params (minus confirm)
_DESTRUCTIVE_TEST_CASES: list[tuple[str, str, dict]] = [
    # Docker
    ("docker", "remove", {"container_id": "abc123"}),
    # VM
    ("vm", "force_stop", {"vm_id": "test-vm-uuid"}),
    ("vm", "reset", {"vm_id": "test-vm-uuid"}),
    # Notifications
    ("notifications", "delete", {"notification_id": "notif-1", "notification_type": "UNREAD"}),
    ("notifications", "delete_archived", {}),
    # RClone
    ("rclone", "delete_remote", {"name": "my-remote"}),
    # Keys
    ("keys", "delete", {"key_id": "key-123"}),
]
_CASE_IDS = [f"{c[0]}/{c[1]}" for c in _DESTRUCTIVE_TEST_CASES]


@pytest.fixture
def _mock_docker_graphql() -> Generator[AsyncMock, None, None]:
    with patch("unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock) as m:
        yield m


@pytest.fixture
def _mock_vm_graphql() -> Generator[AsyncMock, None, None]:
    with patch("unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock) as m:
        yield m


@pytest.fixture
def _mock_notif_graphql() -> Generator[AsyncMock, None, None]:
    with patch("unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock) as m:
        yield m


@pytest.fixture
def _mock_rclone_graphql() -> Generator[AsyncMock, None, None]:
    with patch("unraid_mcp.tools.rclone.make_graphql_request", new_callable=AsyncMock) as m:
        yield m


@pytest.fixture
def _mock_keys_graphql() -> Generator[AsyncMock, None, None]:
    with patch("unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock) as m:
        yield m


# Map tool_key -> (module path, register fn, tool name)
_TOOL_REGISTRY = {
    "docker": ("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"),
    "vm": ("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"),
    "notifications": ("unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"),
    "rclone": ("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone"),
    "keys": ("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys"),
}


class TestConfirmationGuards:
    """Every destructive action must reject calls without confirm=True."""

    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
    async def test_rejects_without_confirm(
        self,
        tool_key: str,
        action: str,
        kwargs: dict,
        _mock_docker_graphql: AsyncMock,
        _mock_vm_graphql: AsyncMock,
        _mock_notif_graphql: AsyncMock,
        _mock_rclone_graphql: AsyncMock,
        _mock_keys_graphql: AsyncMock,
    ) -> None:
        """Calling a destructive action without confirm=True must raise ToolError."""
        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
        with pytest.raises(ToolError, match="confirm=True"):
            await tool_fn(action=action, **kwargs)

    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
    async def test_rejects_with_confirm_false(
        self,
        tool_key: str,
        action: str,
        kwargs: dict,
        _mock_docker_graphql: AsyncMock,
        _mock_vm_graphql: AsyncMock,
        _mock_notif_graphql: AsyncMock,
        _mock_rclone_graphql: AsyncMock,
        _mock_keys_graphql: AsyncMock,
    ) -> None:
        """Explicitly passing confirm=False must still raise ToolError."""
        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action=action, confirm=False, **kwargs)

    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
    async def test_error_message_includes_action_name(
        self,
        tool_key: str,
        action: str,
        kwargs: dict,
        _mock_docker_graphql: AsyncMock,
        _mock_vm_graphql: AsyncMock,
        _mock_notif_graphql: AsyncMock,
        _mock_rclone_graphql: AsyncMock,
        _mock_keys_graphql: AsyncMock,
    ) -> None:
        """The error message should include the action name for clarity."""
        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
        with pytest.raises(ToolError, match=action):
            await tool_fn(action=action, **kwargs)

# ---------------------------------------------------------------------------
# Positive tests: destructive actions proceed when confirm=True
# ---------------------------------------------------------------------------


class TestConfirmAllowsExecution:
    """Destructive actions with confirm=True should reach the GraphQL layer."""

    async def test_docker_remove_with_confirm(self, _mock_docker_graphql: AsyncMock) -> None:
        cid = "a" * 64 + ":local"
        _mock_docker_graphql.side_effect = [
            {"docker": {"containers": [{"id": cid, "names": ["old-app"]}]}},
            {"docker": {"removeContainer": True}},
        ]
        tool_fn = make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
        result = await tool_fn(action="remove", container_id="old-app", confirm=True)
        assert result["success"] is True

    async def test_vm_force_stop_with_confirm(self, _mock_vm_graphql: AsyncMock) -> None:
        _mock_vm_graphql.return_value = {"vm": {"forceStop": True}}
        tool_fn = make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
        result = await tool_fn(action="force_stop", vm_id="test-uuid", confirm=True)
        assert result["success"] is True

    async def test_vm_reset_with_confirm(self, _mock_vm_graphql: AsyncMock) -> None:
        _mock_vm_graphql.return_value = {"vm": {"reset": True}}
        tool_fn = make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
        result = await tool_fn(action="reset", vm_id="test-uuid", confirm=True)
        assert result["success"] is True

    async def test_notifications_delete_with_confirm(self, _mock_notif_graphql: AsyncMock) -> None:
        _mock_notif_graphql.return_value = {"notifications": {"deleteNotification": True}}
        tool_fn = make_tool_fn(
            "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
        )
        result = await tool_fn(
            action="delete",
            notification_id="notif-1",
            notification_type="UNREAD",
            confirm=True,
        )
        assert result["success"] is True

    async def test_notifications_delete_archived_with_confirm(self, _mock_notif_graphql: AsyncMock) -> None:
        _mock_notif_graphql.return_value = {"notifications": {"deleteArchivedNotifications": True}}
        tool_fn = make_tool_fn(
            "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
        )
        result = await tool_fn(action="delete_archived", confirm=True)
        assert result["success"] is True

    async def test_rclone_delete_remote_with_confirm(self, _mock_rclone_graphql: AsyncMock) -> None:
        _mock_rclone_graphql.return_value = {"rclone": {"deleteRCloneRemote": True}}
        tool_fn = make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")
        result = await tool_fn(action="delete_remote", name="my-remote", confirm=True)
        assert result["success"] is True

    async def test_keys_delete_with_confirm(self, _mock_keys_graphql: AsyncMock) -> None:
        _mock_keys_graphql.return_value = {"deleteApiKeys": True}
        tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
        result = await tool_fn(action="delete", key_id="key-123", confirm=True)
        assert result["success"] is True
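The guard pattern these safety tests audit can be sketched as follows. This is a simplified illustration, not the actual tool code: the `DESTRUCTIVE_ACTIONS` contents, function name, and exact error wording here are assumptions, though the tests above do match on the `confirm=True` and `destructive` phrases:

```python
# Simplified sketch of the confirm=True guard the safety tests verify.
# Names and wording are illustrative, not copied from the tool modules.


class ToolError(Exception):
    """Raised when a tool call is rejected."""


DESTRUCTIVE_ACTIONS = {"remove", "delete"}  # assumed contents for this sketch


def guard_destructive(action: str, confirm: bool = False) -> None:
    # Reject any destructive action unless the caller explicitly opts in.
    if action in DESTRUCTIVE_ACTIONS and not confirm:
        raise ToolError(
            f"Action '{action}' is destructive and requires confirm=True."
        )


guard_destructive("list")                  # non-destructive: always allowed
guard_destructive("remove", confirm=True)  # destructive: allowed with opt-in
try:
    guard_destructive("remove")            # destructive without opt-in: rejected
except ToolError as exc:
    print(exc)
```

Note the error message carries the action name, which is exactly what `test_error_message_includes_action_name` asserts on.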


@@ -20,26 +20,33 @@ def _make_tool():
 class TestArrayValidation:
-    async def test_destructive_action_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
+    async def test_invalid_action_rejected(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        for action in ("start", "stop", "shutdown", "reboot"):
-            with pytest.raises(ToolError, match="destructive"):
-                await tool_fn(action=action)
+        with pytest.raises(ToolError, match="Invalid action"):
+            await tool_fn(action="start")

-    async def test_disk_action_requires_disk_id(self, _mock_graphql: AsyncMock) -> None:
+    async def test_removed_actions_are_invalid(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        for action in ("mount_disk", "unmount_disk", "clear_stats"):
-            with pytest.raises(ToolError, match="disk_id"):
+        for action in (
+            "start",
+            "stop",
+            "shutdown",
+            "reboot",
+            "mount_disk",
+            "unmount_disk",
+            "clear_stats",
+        ):
+            with pytest.raises(ToolError, match="Invalid action"):
                 await tool_fn(action=action)


 class TestArrayActions:
-    async def test_start_array(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"setState": {"state": "STARTED"}}
+    async def test_parity_start(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"parityCheck": {"start": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", confirm=True)
+        result = await tool_fn(action="parity_start")
         assert result["success"] is True
-        assert result["action"] == "start"
+        assert result["action"] == "parity_start"
         _mock_graphql.assert_called_once()

     async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None:

@@ -56,45 +63,22 @@ class TestArrayActions:
         result = await tool_fn(action="parity_status")
         assert result["success"] is True

-    async def test_mount_disk(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"mountArrayDisk": True}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="mount_disk", disk_id="disk:1")
-        assert result["success"] is True
-        call_args = _mock_graphql.call_args
-        assert call_args[0][1] == {"id": "disk:1"}
-
-    async def test_shutdown(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"shutdown": True}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="shutdown", confirm=True)
-        assert result["success"] is True
-        assert result["action"] == "shutdown"
-
-    async def test_stop_array(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"setState": {"state": "STOPPED"}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="stop", confirm=True)
-        assert result["success"] is True
-        assert result["action"] == "stop"
-
-    async def test_reboot(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"reboot": True}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="reboot", confirm=True)
-        assert result["success"] is True
-        assert result["action"] == "reboot"
-
     async def test_parity_pause(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"parityCheck": {"pause": True}}
         tool_fn = _make_tool()
         result = await tool_fn(action="parity_pause")
         assert result["success"] is True

-    async def test_unmount_disk(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"unmountArrayDisk": True}
+    async def test_parity_resume(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"parityCheck": {"resume": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="unmount_disk", disk_id="disk:1")
-        assert result["success"] is True
-
-    async def test_parity_cancel(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"parityCheck": {"cancel": True}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="parity_cancel")
+        result = await tool_fn(action="parity_resume")
         assert result["success"] is True

     async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:

@@ -107,63 +91,46 @@ class TestArrayActions:
 class TestArrayMutationFailures:
     """Tests for mutation responses that indicate failure."""

-    async def test_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
-        """Mutation returning False in the response field should still succeed (the tool
-        wraps the raw response; it doesn't inspect the inner boolean)."""
-        _mock_graphql.return_value = {"setState": False}
+    async def test_parity_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"parityCheck": {"start": False}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", confirm=True)
+        result = await tool_fn(action="parity_start")
         assert result["success"] is True
-        assert result["data"] == {"setState": False}
+        assert result["data"] == {"parityCheck": {"start": False}}

-    async def test_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
-        """Mutation returning null for the response field."""
-        _mock_graphql.return_value = {"setState": None}
+    async def test_parity_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"parityCheck": {"start": None}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", confirm=True)
+        result = await tool_fn(action="parity_start")
         assert result["success"] is True
-        assert result["data"] == {"setState": None}
+        assert result["data"] == {"parityCheck": {"start": None}}

-    async def test_start_mutation_returns_empty_object(self, _mock_graphql: AsyncMock) -> None:
-        """Mutation returning an empty object for the response field."""
-        _mock_graphql.return_value = {"setState": {}}
+    async def test_parity_start_mutation_returns_empty_object(
+        self, _mock_graphql: AsyncMock
+    ) -> None:
+        _mock_graphql.return_value = {"parityCheck": {"start": {}}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", confirm=True)
+        result = await tool_fn(action="parity_start")
         assert result["success"] is True
-        assert result["data"] == {"setState": {}}
+        assert result["data"] == {"parityCheck": {"start": {}}}

-    async def test_mount_disk_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
-        """mountArrayDisk returning False indicates mount failed."""
-        _mock_graphql.return_value = {"mountArrayDisk": False}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="mount_disk", disk_id="disk:1")
-        assert result["success"] is True
-        assert result["data"]["mountArrayDisk"] is False
-
     async def test_mutation_timeout(self, _mock_graphql: AsyncMock) -> None:
-        """Mid-operation timeout should be wrapped in ToolError."""
         _mock_graphql.side_effect = TimeoutError("operation timed out")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="timed out"):
-            await tool_fn(action="shutdown", confirm=True)
+            await tool_fn(action="parity_cancel")


 class TestArrayNetworkErrors:
     """Tests for network-level failures in array operations."""

     async def test_http_500_server_error(self, _mock_graphql: AsyncMock) -> None:
-        """HTTP 500 from the API should be wrapped in ToolError."""
-        mock_response = AsyncMock()
-        mock_response.status_code = 500
-        mock_response.text = "Internal Server Error"
         _mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="HTTP error 500"):
-            await tool_fn(action="start", confirm=True)
+            await tool_fn(action="parity_start")

     async def test_connection_refused(self, _mock_graphql: AsyncMock) -> None:
-        """Connection refused should be wrapped in ToolError."""
         _mock_graphql.side_effect = ToolError("Network connection error: Connection refused")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Network connection error"):


@@ -368,9 +368,7 @@ class TestGraphQLErrorHandling:
     async def test_graphql_error_raises_tool_error(self) -> None:
         mock_response = MagicMock()
         mock_response.raise_for_status = MagicMock()
-        mock_response.json.return_value = {
-            "errors": [{"message": "Field 'bogus' not found"}]
-        }
+        mock_response.json.return_value = {"errors": [{"message": "Field 'bogus' not found"}]}
         mock_client = AsyncMock()
         mock_client.post.return_value = mock_response

@@ -403,9 +401,7 @@ class TestGraphQLErrorHandling:
     async def test_idempotent_start_returns_success(self) -> None:
         mock_response = MagicMock()
         mock_response.raise_for_status = MagicMock()
-        mock_response.json.return_value = {
-            "errors": [{"message": "Container already running"}]
-        }
+        mock_response.json.return_value = {"errors": [{"message": "Container already running"}]}
         mock_client = AsyncMock()
         mock_client.post.return_value = mock_response

@@ -421,9 +417,7 @@ class TestGraphQLErrorHandling:
     async def test_idempotent_stop_returns_success(self) -> None:
         mock_response = MagicMock()
         mock_response.raise_for_status = MagicMock()
-        mock_response.json.return_value = {
-            "errors": [{"message": "Container not running"}]
-        }
+        mock_response.json.return_value = {"errors": [{"message": "Container not running"}]}
         mock_client = AsyncMock()
         mock_client.post.return_value = mock_response

@@ -440,9 +434,7 @@ class TestGraphQLErrorHandling:
         """An error that doesn't match idempotent patterns still raises even with context."""
         mock_response = MagicMock()
         mock_response.raise_for_status = MagicMock()
-        mock_response.json.return_value = {
-            "errors": [{"message": "Permission denied"}]
-        }
+        mock_response.json.return_value = {"errors": [{"message": "Permission denied"}]}
         mock_client = AsyncMock()
         mock_client.post.return_value = mock_response
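The idempotent-error behavior these tests pin down (a "Container already running" / "Container not running" GraphQL error is treated as success, while anything else still raises) can be sketched as follows. The names `IDEMPOTENT_PATTERNS` and `classify_graphql_errors` are illustrative only, not the project's actual helpers:

```python
# Sketch of the idempotent-error classification exercised above.
# IDEMPOTENT_PATTERNS and classify_graphql_errors are illustrative names,
# not the project's real helpers.
IDEMPOTENT_PATTERNS = ("already running", "not running")


def classify_graphql_errors(payload: dict) -> dict:
    """Map known no-op GraphQL errors to success; re-raise everything else."""
    errors = payload.get("errors") or []
    if not errors:
        return {"success": True, "idempotent": False}
    message = errors[0].get("message", "")
    if any(pattern in message.lower() for pattern in IDEMPOTENT_PATTERNS):
        # e.g. "Container already running" -> the desired state already holds
        return {"success": True, "idempotent": True, "message": message}
    raise RuntimeError(f"GraphQL error: {message}")
```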

View File

@@ -93,8 +93,21 @@ class TestDockerActions:
     async def test_start_container(self, _mock_graphql: AsyncMock) -> None:
         # First call resolves ID, second performs start
         _mock_graphql.side_effect = [
-            {"docker": {"containers": [{"id": "abc123def456" * 4 + "abcd1234abcd1234:local", "names": ["plex"]}]}},
-            {"docker": {"start": {"id": "abc123def456" * 4 + "abcd1234abcd1234:local", "state": "running"}}},
+            {
+                "docker": {
+                    "containers": [
+                        {"id": "abc123def456" * 4 + "abcd1234abcd1234:local", "names": ["plex"]}
+                    ]
+                }
+            },
+            {
+                "docker": {
+                    "start": {
+                        "id": "abc123def456" * 4 + "abcd1234abcd1234:local",
+                        "state": "running",
+                    }
+                }
+            },
         ]
         tool_fn = _make_tool()
         result = await tool_fn(action="start", container_id="plex")

@@ -114,7 +127,9 @@ class TestDockerActions:
     async def test_check_updates(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "docker": {"containerUpdateStatuses": [{"id": "c1", "name": "plex", "updateAvailable": True}]}
+            "docker": {
+                "containerUpdateStatuses": [{"id": "c1", "name": "plex", "updateAvailable": True}]
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="check_updates")

@@ -175,7 +190,11 @@ class TestDockerActions:
     async def test_details_found(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "docker": {"containers": [{"id": "c1", "names": ["plex"], "state": "running", "image": "plexinc/pms"}]}
+            "docker": {
+                "containers": [
+                    {"id": "c1", "names": ["plex"], "state": "running", "image": "plexinc/pms"}
+                ]
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="details", container_id="plex")
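The "first call resolves ID, second performs start" comment above describes a two-round-trip flow, which is why the mock queues two responses via `side_effect`. A minimal standalone sketch of that pattern (the `start_container` helper is illustrative, not the tool's real implementation):

```python
import asyncio
from unittest.mock import AsyncMock

# An AsyncMock with a list side_effect returns the queued responses in order:
# one for the name->ID lookup, one for the start mutation.
mock_graphql = AsyncMock()
mock_graphql.side_effect = [
    {"docker": {"containers": [{"id": "abc123:local", "names": ["plex"]}]}},
    {"docker": {"start": {"id": "abc123:local", "state": "running"}}},
]


async def start_container(name: str) -> dict:
    """Illustrative two-call flow: resolve the container ID, then start it."""
    listing = await mock_graphql("query containers")
    target = next(c for c in listing["docker"]["containers"] if name in c["names"])
    result = await mock_graphql("mutation start", {"id": target["id"]})
    return result["docker"]["start"]


started = asyncio.run(start_container("plex"))
```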

View File

@@ -44,12 +44,8 @@ class TestHealthActions:
                 "os": {"uptime": 86400},
             },
             "array": {"state": "STARTED"},
-            "notifications": {
-                "overview": {"unread": {"alert": 0, "warning": 0, "total": 3}}
-            },
-            "docker": {
-                "containers": [{"id": "c1", "state": "running", "status": "Up 2 days"}]
-            },
+            "notifications": {"overview": {"unread": {"alert": 0, "warning": 0, "total": 3}}},
+            "docker": {"containers": [{"id": "c1", "state": "running", "status": "Up 2 days"}]},
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="check")

@@ -60,9 +56,7 @@ class TestHealthActions:
         _mock_graphql.return_value = {
             "info": {"machineId": "abc", "versions": {"unraid": "7.2"}, "os": {"uptime": 100}},
             "array": {"state": "STARTED"},
-            "notifications": {
-                "overview": {"unread": {"alert": 3, "warning": 0, "total": 3}}
-            },
+            "notifications": {"overview": {"unread": {"alert": 3, "warning": 0, "total": 3}}},
             "docker": {"containers": []},
         }
         tool_fn = _make_tool()

@@ -88,9 +82,7 @@ class TestHealthActions:
         _mock_graphql.return_value = {
             "info": {},
             "array": {"state": "STARTED"},
-            "notifications": {
-                "overview": {"unread": {"alert": 5, "warning": 0, "total": 5}}
-            },
+            "notifications": {"overview": {"unread": {"alert": 5, "warning": 0, "total": 5}}},
             "docker": {"containers": []},
         }
         tool_fn = _make_tool()

@@ -102,10 +94,13 @@ class TestHealthActions:
     async def test_diagnose_wraps_exception(self, _mock_graphql: AsyncMock) -> None:
         """When _diagnose_subscriptions raises, tool wraps in ToolError."""
         tool_fn = _make_tool()
-        with patch(
-            "unraid_mcp.tools.health._diagnose_subscriptions",
-            side_effect=RuntimeError("broken"),
-        ), pytest.raises(ToolError, match="broken"):
+        with (
+            patch(
+                "unraid_mcp.tools.health._diagnose_subscriptions",
+                side_effect=RuntimeError("broken"),
+            ),
+            pytest.raises(ToolError, match="broken"),
+        ):
             await tool_fn(action="diagnose")

     async def test_diagnose_success(self, _mock_graphql: AsyncMock) -> None:

@@ -131,11 +126,14 @@ class TestHealthActions:
         try:
             # Replace the modules with objects that raise ImportError on access
-            with patch.dict(sys.modules, {
-                "unraid_mcp.subscriptions": None,
-                "unraid_mcp.subscriptions.manager": None,
-                "unraid_mcp.subscriptions.resources": None,
-            }):
+            with patch.dict(
+                sys.modules,
+                {
+                    "unraid_mcp.subscriptions": None,
+                    "unraid_mcp.subscriptions.manager": None,
+                    "unraid_mcp.subscriptions.resources": None,
+                },
+            ):
                 result = await _diagnose_subscriptions()
                 assert "error" in result
         finally:
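The `with ( patch(...), pytest.raises(...) ):` shape introduced above uses parenthesized context managers, available since Python 3.10, which lets formatters split a long `with` statement across lines. A minimal standalone example of the same syntax:

```python
from contextlib import contextmanager


@contextmanager
def tag(name: str, log: list):
    """Tiny context manager that records enter/exit order."""
    log.append(f"enter {name}")
    yield
    log.append(f"exit {name}")


events: list[str] = []
# Since Python 3.10, multiple context managers can be grouped in parentheses
# under a single `with`, entering left to right and exiting in reverse.
with (
    tag("a", events),
    tag("b", events),
):
    events.append("body")
```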

View File

@@ -20,7 +20,14 @@ from unraid_mcp.tools.info import (
 class TestProcessSystemInfo:
     def test_processes_os_info(self) -> None:
         raw = {
-            "os": {"distro": "Unraid", "release": "7.2", "platform": "linux", "arch": "x86_64", "hostname": "tower", "uptime": 3600},
+            "os": {
+                "distro": "Unraid",
+                "release": "7.2",
+                "platform": "linux",
+                "arch": "x86_64",
+                "hostname": "tower",
+                "uptime": 3600,
+            },
             "cpu": {"manufacturer": "AMD", "brand": "Ryzen", "cores": 8, "threads": 16},
         }
         result = _process_system_info(raw)

@@ -34,7 +41,19 @@ class TestProcessSystemInfo:
         assert result["summary"] == {"memory_summary": "Memory information not available."}

     def test_processes_memory_layout(self) -> None:
-        raw = {"memory": {"layout": [{"bank": "0", "type": "DDR4", "clockSpeed": 3200, "manufacturer": "G.Skill", "partNum": "XYZ"}]}}
+        raw = {
+            "memory": {
+                "layout": [
+                    {
+                        "bank": "0",
+                        "type": "DDR4",
+                        "clockSpeed": 3200,
+                        "manufacturer": "G.Skill",
+                        "partNum": "XYZ",
+                    }
+                ]
+            }
+        }
         result = _process_system_info(raw)
         assert len(result["summary"]["memory_layout_details"]) == 1

@@ -130,7 +149,13 @@ class TestUnraidInfoTool:
     async def test_overview_action(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
             "info": {
-                "os": {"distro": "Unraid", "release": "7.2", "platform": "linux", "arch": "x86_64", "hostname": "test"},
+                "os": {
+                    "distro": "Unraid",
+                    "release": "7.2",
+                    "platform": "linux",
+                    "arch": "x86_64",
+                    "hostname": "test",
+                },
                 "cpu": {"manufacturer": "Intel", "brand": "i7", "cores": 4, "threads": 8},
             }
         }

@@ -165,7 +190,9 @@ class TestUnraidInfoTool:
             await tool_fn(action="online")

     async def test_metrics(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"metrics": {"cpu": {"used": 25.5}, "memory": {"used": 8192, "total": 32768}}}
+        _mock_graphql.return_value = {
+            "metrics": {"cpu": {"used": 25.5}, "memory": {"used": 8192, "total": 32768}}
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="metrics")
         assert result["cpu"]["used"] == 25.5

@@ -178,7 +205,9 @@ class TestUnraidInfoTool:
         assert result["services"][0]["name"] == "docker"

     async def test_settings(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"settings": {"unified": {"values": {"timezone": "US/Eastern"}}}}
+        _mock_graphql.return_value = {
+            "settings": {"unified": {"values": {"timezone": "US/Eastern"}}}
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="settings")
         assert result["timezone"] == "US/Eastern"

@@ -191,20 +220,32 @@ class TestUnraidInfoTool:
         assert result == {"raw": "raw_string"}

     async def test_servers(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"servers": [{"id": "s:1", "name": "tower", "status": "online"}]}
+        _mock_graphql.return_value = {
+            "servers": [{"id": "s:1", "name": "tower", "status": "online"}]
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="servers")
         assert len(result["servers"]) == 1
         assert result["servers"][0]["name"] == "tower"

     async def test_flash(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"flash": {"id": "f:1", "guid": "abc", "product": "SanDisk", "vendor": "SanDisk", "size": 32000000000}}
+        _mock_graphql.return_value = {
+            "flash": {
+                "id": "f:1",
+                "guid": "abc",
+                "product": "SanDisk",
+                "vendor": "SanDisk",
+                "size": 32000000000,
+            }
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="flash")
         assert result["product"] == "SanDisk"

     async def test_ups_devices(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"upsDevices": [{"id": "ups:1", "model": "APC", "status": "online", "charge": 100}]}
+        _mock_graphql.return_value = {
+            "upsDevices": [{"id": "ups:1", "model": "APC", "status": "online", "charge": 100}]
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="ups_devices")
         assert len(result["ups_devices"]) == 1

View File

@@ -56,7 +56,9 @@ class TestKeysActions:
         assert len(result["keys"]) == 1

     async def test_get(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"apiKey": {"id": "k:1", "name": "mcp-key", "roles": ["admin"]}}
+        _mock_graphql.return_value = {
+            "apiKey": {"id": "k:1", "name": "mcp-key", "roles": ["admin"]}
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="get", key_id="k:1")
         assert result["name"] == "mcp-key"

@@ -72,7 +74,12 @@ class TestKeysActions:
     async def test_create_with_roles(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "createApiKey": {"id": "k:new", "name": "admin-key", "key": "secret", "roles": ["admin"]}
+            "createApiKey": {
+                "id": "k:new",
+                "name": "admin-key",
+                "key": "secret",
+                "roles": ["admin"],
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="create", name="admin-key", roles=["admin"])

View File

@@ -11,7 +11,9 @@ from unraid_mcp.core.exceptions import ToolError
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch(
+        "unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock
+    ) as mock:
         yield mock

@@ -64,9 +66,7 @@ class TestNotificationsActions:
     async def test_list(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "notifications": {
-                "list": [{"id": "n:1", "title": "Test", "importance": "INFO"}]
-            }
+            "notifications": {"list": [{"id": "n:1", "title": "Test", "importance": "INFO"}]}
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="list")

@@ -82,7 +82,9 @@ class TestNotificationsActions:
     async def test_create(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "notifications": {"createNotification": {"id": "n:new", "title": "Test", "importance": "INFO"}}
+            "notifications": {
+                "createNotification": {"id": "n:new", "title": "Test", "importance": "INFO"}
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(

@@ -126,9 +128,7 @@ class TestNotificationsActions:
     async def test_list_with_importance_filter(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "notifications": {
-                "list": [{"id": "n:1", "title": "Alert", "importance": "WARNING"}]
-            }
+            "notifications": {"list": [{"id": "n:1", "title": "Alert", "importance": "WARNING"}]}
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="list", importance="warning", limit=10, offset=5)

View File

@@ -39,9 +39,7 @@ class TestRcloneValidation:
 class TestRcloneActions:
     async def test_list_remotes(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {
-            "rclone": {"remotes": [{"name": "gdrive", "type": "drive"}]}
-        }
+        _mock_graphql.return_value = {"rclone": {"remotes": [{"name": "gdrive", "type": "drive"}]}}
         tool_fn = _make_tool()
         result = await tool_fn(action="list_remotes")
         assert len(result["remotes"]) == 1

View File

@@ -95,7 +95,14 @@ class TestStorageActions:
     async def test_disk_details(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "disk": {"id": "d:1", "device": "sda", "name": "WD", "serialNum": "SN1", "size": 1073741824, "temperature": 35}
+            "disk": {
+                "id": "d:1",
+                "device": "sda",
+                "name": "WD",
+                "serialNum": "SN1",
+                "size": 1073741824,
+                "temperature": 35,
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="disk_details", disk_id="d:1")

@@ -121,7 +128,9 @@ class TestStorageActions:
         assert len(result["log_files"]) == 1

     async def test_logs(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "log line", "totalLines": 1}}
+        _mock_graphql.return_value = {
+            "logFile": {"path": "/var/log/syslog", "content": "log line", "totalLines": 1}
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="logs", log_path="/var/log/syslog")
         assert result["content"] == "log line"

View File

@@ -1,4 +1,8 @@
-"""Tests for unraid_users tool."""
+"""Tests for unraid_users tool.
+
+NOTE: Unraid GraphQL API only supports the me() query.
+User management operations (list, add, delete, cloud, remote_access, origins) are NOT available in the API.
+"""

 from collections.abc import Generator
 from unittest.mock import AsyncMock, patch

@@ -20,112 +24,54 @@ def _make_tool():
 class TestUsersValidation:
-    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
-        tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="destructive"):
-            await tool_fn(action="delete", user_id="u:1")
-
-    async def test_get_requires_user_id(self, _mock_graphql: AsyncMock) -> None:
-        tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="user_id"):
-            await tool_fn(action="get")
-
-    async def test_add_requires_name_and_password(self, _mock_graphql: AsyncMock) -> None:
-        tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="requires name and password"):
-            await tool_fn(action="add")
-
-    async def test_delete_requires_user_id(self, _mock_graphql: AsyncMock) -> None:
-        tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="user_id"):
-            await tool_fn(action="delete", confirm=True)
+    """Test validation for invalid actions."""
+
+    async def test_invalid_action_rejected(self, _mock_graphql: AsyncMock) -> None:
+        """Test that non-existent actions are rejected with clear error."""
+        tool_fn = _make_tool()
+        with pytest.raises(ToolError, match="Invalid action"):
+            await tool_fn(action="list")
+        with pytest.raises(ToolError, match="Invalid action"):
+            await tool_fn(action="add")
+        with pytest.raises(ToolError, match="Invalid action"):
+            await tool_fn(action="delete")
+        with pytest.raises(ToolError, match="Invalid action"):
+            await tool_fn(action="cloud")


 class TestUsersActions:
+    """Test the single supported action: me."""
+
     async def test_me(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}}
+        """Test querying current authenticated user."""
+        _mock_graphql.return_value = {
+            "me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}
+        }
         tool_fn = _make_tool()
         result = await tool_fn(action="me")
         assert result["name"] == "root"
+        assert result["roles"] == ["ADMIN"]
+        _mock_graphql.assert_called_once()

-    async def test_list(self, _mock_graphql: AsyncMock) -> None:
+    async def test_me_default_action(self, _mock_graphql: AsyncMock) -> None:
+        """Test that 'me' is the default action."""
         _mock_graphql.return_value = {
-            "users": [{"id": "u:1", "name": "root"}, {"id": "u:2", "name": "guest"}]
+            "me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
-        assert len(result["users"]) == 2
-
-    async def test_get(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"user": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="get", user_id="u:1")
+        result = await tool_fn()
         assert result["name"] == "root"
-
-    async def test_add(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "newuser", "description": "", "roles": ["USER"]}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="add", name="newuser", password="pass123")
-        assert result["success"] is True
-
-    async def test_add_with_role(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "admin2", "description": "", "roles": ["ADMIN"]}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="add", name="admin2", password="pass123", role="admin")
-        assert result["success"] is True
-        call_args = _mock_graphql.call_args
-        assert call_args[0][1]["input"]["role"] == "ADMIN"
-
-    async def test_delete(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"deleteUser": {"id": "u:2", "name": "guest"}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="delete", user_id="u:2", confirm=True)
-        assert result["success"] is True
-        call_args = _mock_graphql.call_args
-        assert call_args[0][1]["input"]["id"] == "u:2"
-
-    async def test_cloud(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"cloud": {"status": "connected", "apiKey": "***"}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="cloud")
-        assert result["status"] == "connected"
-
-    async def test_remote_access(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"remoteAccess": {"enabled": True, "url": "https://example.com"}}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="remote_access")
-        assert result["enabled"] is True
-
-    async def test_origins(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"allowedOrigins": ["http://localhost", "https://example.com"]}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="origins")
-        assert len(result["origins"]) == 2


 class TestUsersNoneHandling:
     """Verify actions return empty dict (not TypeError) when API returns None."""

     async def test_me_returns_none(self, _mock_graphql: AsyncMock) -> None:
+        """Test that me returns empty dict when API returns None."""
         _mock_graphql.return_value = {"me": None}
         tool_fn = _make_tool()
         result = await tool_fn(action="me")
         assert result == {}
-
-    async def test_get_returns_none(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"user": None}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="get", user_id="u:1")
-        assert result == {}
-
-    async def test_cloud_returns_none(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"cloud": None}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="cloud")
-        assert result == {}
-
-    async def test_remote_access_returns_none(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"remoteAccess": None}
-        tool_fn = _make_tool()
-        result = await tool_fn(action="remote_access")
-        assert result == {}
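Per the NOTE in the rewritten module docstring, the tool now wraps only the current-user query. A minimal sketch of that query and the None-safe response handling the tests verify; the field set mirrors what the tests mock, and `parse_me` is an illustrative helper, not the project's actual code:

```python
# The single supported user query, with the field set the tests above mock.
ME_QUERY = """
query {
  me {
    id
    name
    description
    roles
  }
}
"""


def parse_me(payload: dict) -> dict:
    """Return the current user from a GraphQL response, or {} when the API
    gives back null (the None-handling behavior TestUsersNoneHandling pins down)."""
    return payload.get("data", {}).get("me") or {}
```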

View File

@@ -11,7 +11,9 @@ from unraid_mcp.core.exceptions import ToolError
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch(
+        "unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock
+    ) as mock:
         yield mock

@@ -67,7 +69,9 @@ class TestVmActions:
     async def test_details_by_uuid(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
+            "vms": {
+                "domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="details", vm_id="uuid-1")

@@ -75,7 +79,9 @@ class TestVmActions:
     async def test_details_by_name(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
+            "vms": {
+                "domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]
+            }
         }
         tool_fn = _make_tool()
         result = await tool_fn(action="details", vm_id="Win11")

@@ -83,7 +89,9 @@ class TestVmActions:
     async def test_details_not_found(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
-            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
+            "vms": {
+                "domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]
+            }
         }
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not found"):

View File

@@ -19,6 +19,7 @@ from rich.text import Text
try: try:
from fastmcp.utilities.logging import get_logger as get_fastmcp_logger from fastmcp.utilities.logging import get_logger as get_fastmcp_logger
FASTMCP_AVAILABLE = True FASTMCP_AVAILABLE = True
except ImportError: except ImportError:
FASTMCP_AVAILABLE = False FASTMCP_AVAILABLE = False
@@ -74,14 +75,17 @@ class OverwriteFileHandler(logging.FileHandler):
lineno=0, lineno=0,
msg="=== LOG FILE RESET (10MB limit reached) ===", msg="=== LOG FILE RESET (10MB limit reached) ===",
args=(), args=(),
exc_info=None exc_info=None,
) )
super().emit(reset_record) super().emit(reset_record)
except OSError as e: except OSError as e:
import sys import sys
print(f"WARNING: Log file size check failed: {e}. Continuing without rotation.",
file=sys.stderr) print(
f"WARNING: Log file size check failed: {e}. Continuing without rotation.",
file=sys.stderr,
)
# Emit the original record # Emit the original record
super().emit(record) super().emit(record)
@@ -114,17 +118,13 @@ def setup_logger(name: str = "UnraidMCPServer") -> logging.Logger:
show_level=True, show_level=True,
show_path=False, show_path=False,
rich_tracebacks=True, rich_tracebacks=True,
tracebacks_show_locals=True tracebacks_show_locals=True,
) )
console_handler.setLevel(numeric_log_level) console_handler.setLevel(numeric_log_level)
logger.addHandler(console_handler) logger.addHandler(console_handler)
# File Handler with 10MB cap (overwrites instead of rotating) # File Handler with 10MB cap (overwrites instead of rotating)
file_handler = OverwriteFileHandler( file_handler = OverwriteFileHandler(LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8")
LOG_FILE_PATH,
max_bytes=10*1024*1024,
encoding="utf-8"
)
file_handler.setLevel(numeric_log_level) file_handler.setLevel(numeric_log_level)
file_formatter = logging.Formatter( file_formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s" "%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s"
@@ -158,17 +158,13 @@ def configure_fastmcp_logger_with_rich() -> logging.Logger | None:
show_path=False, show_path=False,
rich_tracebacks=True, rich_tracebacks=True,
tracebacks_show_locals=True, tracebacks_show_locals=True,
markup=True markup=True,
) )
console_handler.setLevel(numeric_log_level) console_handler.setLevel(numeric_log_level)
fastmcp_logger.addHandler(console_handler) fastmcp_logger.addHandler(console_handler)
# File Handler with 10MB cap (overwrites instead of rotating) # File Handler with 10MB cap (overwrites instead of rotating)
file_handler = OverwriteFileHandler( file_handler = OverwriteFileHandler(LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8")
LOG_FILE_PATH,
max_bytes=10*1024*1024,
encoding="utf-8"
)
file_handler.setLevel(numeric_log_level) file_handler.setLevel(numeric_log_level)
file_formatter = logging.Formatter( file_formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s" "%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s"
@@ -191,16 +187,14 @@ def configure_fastmcp_logger_with_rich() -> logging.Logger | None:
show_path=False, show_path=False,
rich_tracebacks=True, rich_tracebacks=True,
tracebacks_show_locals=True, tracebacks_show_locals=True,
markup=True markup=True,
) )
root_console_handler.setLevel(numeric_log_level) root_console_handler.setLevel(numeric_log_level)
root_logger.addHandler(root_console_handler) root_logger.addHandler(root_console_handler)
# File Handler for root logger with 10MB cap (overwrites instead of rotating) # File Handler for root logger with 10MB cap (overwrites instead of rotating)
root_file_handler = OverwriteFileHandler( root_file_handler = OverwriteFileHandler(
LOG_FILE_PATH, LOG_FILE_PATH, max_bytes=10 * 1024 * 1024, encoding="utf-8"
max_bytes=10*1024*1024,
encoding="utf-8"
) )
root_file_handler.setLevel(numeric_log_level) root_file_handler.setLevel(numeric_log_level)
root_file_handler.setFormatter(file_formatter) root_file_handler.setFormatter(file_formatter)
@@ -255,16 +249,18 @@ def get_est_timestamp() -> str:
now = datetime.now(est) now = datetime.now(est)
return now.strftime("%y/%m/%d %H:%M:%S") return now.strftime("%y/%m/%d %H:%M:%S")
def log_header(title: str) -> None: def log_header(title: str) -> None:
"""Print a beautiful header panel with Nordic blue styling.""" """Print a beautiful header panel with Nordic blue styling."""
panel = Panel( panel = Panel(
Align.center(Text(title, style="bold white")), Align.center(Text(title, style="bold white")),
style="#5E81AC", # Nordic blue style="#5E81AC", # Nordic blue
padding=(0, 2), padding=(0, 2),
border_style="#81A1C1" # Light Nordic blue border_style="#81A1C1", # Light Nordic blue
) )
console.print(panel) console.print(panel)
def log_with_level_and_indent(message: str, level: str = "info", indent: int = 0) -> None: def log_with_level_and_indent(message: str, level: str = "info", indent: int = 0) -> None:
"""Log a message with specific level and indentation.""" """Log a message with specific level and indentation."""
timestamp = get_est_timestamp() timestamp = get_est_timestamp()
@@ -280,7 +276,9 @@ def log_with_level_and_indent(message: str, level: str = "info", indent: int = 0
"debug": {"color": "#4C566A", "icon": "🐛", "style": ""}, # Nordic dark gray "debug": {"color": "#4C566A", "icon": "🐛", "style": ""}, # Nordic dark gray
} }
config = level_config.get(level, {"color": "#81A1C1", "icon": "", "style": ""}) # Default to light Nordic blue config = level_config.get(
level, {"color": "#81A1C1", "icon": "", "style": ""}
) # Default to light Nordic blue
# Create beautifully formatted text # Create beautifully formatted text
text = Text() text = Text()
@@ -308,26 +306,33 @@ def log_with_level_and_indent(message: str, level: str = "info", indent: int = 0
    console.print(text)


def log_separator() -> None:
    """Print a beautiful separator line with Nordic blue styling."""
    console.print(Rule(style="#81A1C1"))


# Convenience functions for different log levels
def log_error(message: str, indent: int = 0) -> None:
    log_with_level_and_indent(message, "error", indent)


def log_warning(message: str, indent: int = 0) -> None:
    log_with_level_and_indent(message, "warning", indent)


def log_success(message: str, indent: int = 0) -> None:
    log_with_level_and_indent(message, "success", indent)


def log_info(message: str, indent: int = 0) -> None:
    log_with_level_and_indent(message, "info", indent)


def log_status(message: str, indent: int = 0) -> None:
    log_with_level_and_indent(message, "status", indent)


# Global logger instance - modules can import this directly
if FASTMCP_AVAILABLE:
    # Use FastMCP logger with Rich formatting

View File

@@ -22,7 +22,7 @@ dotenv_paths = [
    Path("/app/.env.local"),  # Container mount point
    PROJECT_ROOT / ".env.local",  # Project root .env.local
    PROJECT_ROOT / ".env",  # Project root .env
    UNRAID_MCP_DIR / ".env",  # Local .env in unraid_mcp/
]

for dotenv_path in dotenv_paths:
@@ -73,10 +73,7 @@ def validate_required_config() -> tuple[bool, list[str]]:
    Returns:
        bool: True if all required config is present, False otherwise.
    """
    required_vars = [("UNRAID_API_URL", UNRAID_API_URL), ("UNRAID_API_KEY", UNRAID_API_KEY)]

    missing = []
    for name, value in required_vars:
@@ -105,5 +102,5 @@ def get_config_summary() -> dict[str, Any]:
        "log_level": LOG_LEVEL_STR,
        "log_file": str(LOG_FILE_PATH),
        "config_valid": is_valid,
        "missing_config": missing if not is_valid else None,
    }

View File

@@ -34,7 +34,9 @@ def _is_sensitive_key(key: str) -> bool:
def _redact_sensitive(obj: Any) -> Any:
    """Recursively redact sensitive values from nested dicts/lists."""
    if isinstance(obj, dict):
        return {
            k: ("***" if _is_sensitive_key(k) else _redact_sensitive(v)) for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [_redact_sensitive(item) for item in obj]
    return obj
@@ -62,6 +64,7 @@ def get_timeout_for_operation(profile: str) -> httpx.Timeout:
    """
    return _TIMEOUT_PROFILES.get(profile, DEFAULT_TIMEOUT)


# Global connection pool (module-level singleton)
_http_client: httpx.AsyncClient | None = None
_client_lock = asyncio.Lock()
@@ -82,16 +85,16 @@ def is_idempotent_error(error_message: str, operation: str) -> bool:
    # Docker container operation patterns
    if operation == "start":
        return (
            "already started" in error_lower
            or "container already running" in error_lower
            or "http code 304" in error_lower
        )

    if operation == "stop":
        return (
            "already stopped" in error_lower
            or "container already stopped" in error_lower
            or "container not running" in error_lower
            or "http code 304" in error_lower
        )

    return False
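A standalone usage sketch of the idempotency matcher above; with start/stop handled this way, repeating a Docker action on a container already in the desired state reads as success rather than an error:

```python
def is_idempotent_error(error_message: str, operation: str) -> bool:
    """Return True when the error just means the container is already in the target state."""
    error_lower = error_message.lower()

    # Docker container operation patterns (as in the hunk above)
    if operation == "start":
        return (
            "already started" in error_lower
            or "container already running" in error_lower
            or "http code 304" in error_lower
        )

    if operation == "stop":
        return (
            "already stopped" in error_lower
            or "container already stopped" in error_lower
            or "container not running" in error_lower
            or "http code 304" in error_lower
        )

    return False
```

Only `start` and `stop` are recognized; any other operation falls through to `False` and the error propagates normally.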
@@ -106,19 +109,14 @@ async def _create_http_client() -> httpx.AsyncClient:
    return httpx.AsyncClient(
        # Connection pool settings
        limits=httpx.Limits(
            max_keepalive_connections=20, max_connections=100, keepalive_expiry=30.0
        ),
        # Default timeout (can be overridden per-request)
        timeout=DEFAULT_TIMEOUT,
        # SSL verification
        verify=UNRAID_VERIFY_SSL,
        # Connection pooling headers
        headers={"Connection": "keep-alive", "User-Agent": f"UnraidMCPServer/{VERSION}"},
    )
@@ -136,7 +134,9 @@ async def get_http_client() -> httpx.AsyncClient:
    async with _client_lock:
        if _http_client is None or _http_client.is_closed:
            _http_client = await _create_http_client()
            logger.info(
                "Created shared HTTP client with connection pooling (20 keepalive, 100 max connections)"
            )

        client = _http_client
@@ -167,7 +167,7 @@ async def make_graphql_request(
    query: str,
    variables: dict[str, Any] | None = None,
    custom_timeout: httpx.Timeout | None = None,
    operation_context: dict[str, str] | None = None,
) -> dict[str, Any]:
    """Make GraphQL requests to the Unraid API.
@@ -193,7 +193,7 @@ async def make_graphql_request(
    headers = {
        "Content-Type": "application/json",
        "X-API-Key": UNRAID_API_KEY,
        "User-Agent": f"UnraidMCPServer/{VERSION}",  # Custom user-agent
    }

    payload: dict[str, Any] = {"query": query}
@@ -212,10 +212,7 @@ async def make_graphql_request(
        # Override timeout if custom timeout specified
        if custom_timeout is not None:
            response = await client.post(
                UNRAID_API_URL, json=payload, headers=headers, timeout=custom_timeout
            )
        else:
            response = await client.post(UNRAID_API_URL, json=payload, headers=headers)
@@ -224,19 +221,23 @@ async def make_graphql_request(
        response_data = response.json()

        if response_data.get("errors"):
            error_details = "; ".join(
                [err.get("message", str(err)) for err in response_data["errors"]]
            )

            # Check if this is an idempotent error that should be treated as success
            if operation_context and operation_context.get("operation"):
                operation = operation_context["operation"]
                if is_idempotent_error(error_details, operation):
                    logger.warning(
                        f"Idempotent operation '{operation}' - treating as success: {error_details}"
                    )
                    # Return a success response with the current state information
                    return {
                        "idempotent_success": True,
                        "operation": operation,
                        "message": error_details,
                        "original_errors": response_data["errors"],
                    }

            logger.error(f"GraphQL API returned errors: {response_data['errors']}")

View File

@@ -15,26 +15,31 @@ class ToolError(FastMCPToolError):
    Inherits from FastMCP's ToolError to ensure proper MCP protocol handling.
    """

    pass


class ConfigurationError(ToolError):
    """Raised when there are configuration-related errors."""

    pass


class UnraidAPIError(ToolError):
    """Raised when the Unraid API returns an error or is unreachable."""

    pass


class SubscriptionError(ToolError):
    """Raised when there are WebSocket subscription-related errors."""

    pass


class ValidationError(ToolError):
    """Raised when input validation fails."""

    pass
@@ -45,4 +50,5 @@ class IdempotentOperationError(ToolError):
    which should typically be converted to a success response rather than
    propagated as an error to the user.
    """

    pass
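Because every exception class derives from the shared `ToolError` base, a single `except ToolError` catches the whole family. A runnable sketch of that property (using a stand-in for FastMCP's base class so it runs without `fastmcp` installed):

```python
class FastMCPToolError(Exception):
    """Stand-in for FastMCP's ToolError, for illustration only."""


class ToolError(FastMCPToolError):
    """Base class for all tool errors."""


class ConfigurationError(ToolError):
    """Raised when there are configuration-related errors."""


class UnraidAPIError(ToolError):
    """Raised when the Unraid API returns an error or is unreachable."""


try:
    raise UnraidAPIError("array query failed")
except ToolError as exc:
    # One handler covers every subclass in the hierarchy
    caught = str(exc)
```

Since the real base inherits from FastMCP's ToolError, errors raised this way also surface correctly through the MCP protocol layer.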

View File

@@ -12,6 +12,7 @@ from typing import Any
@dataclass
class SubscriptionData:
    """Container for subscription data with metadata."""

    data: dict[str, Any]
    last_updated: datetime
    subscription_type: str
@@ -20,6 +21,7 @@ class SubscriptionData:
@dataclass
class SystemHealth:
    """Container for system health status information."""

    is_healthy: bool
    issues: list[str]
    warnings: list[str]
@@ -30,6 +32,7 @@ class SystemHealth:
@dataclass
class APIResponse:
    """Container for standardized API response data."""

    success: bool
    data: dict[str, Any] | None = None
    error: str | None = None

View File

@@ -13,6 +13,7 @@ async def shutdown_cleanup() -> None:
    """Cleanup resources on server shutdown."""
    try:
        from .core.client import close_http_client

        await close_http_client()
    except Exception as e:
        print(f"Error during cleanup: {e}")
@@ -22,13 +23,17 @@ def main() -> None:
    """Main entry point for the Unraid MCP Server."""
    try:
        from .server import run_server

        run_server()
    except KeyboardInterrupt:
        print("\nServer stopped by user")
        try:
            asyncio.run(shutdown_cleanup())
        except RuntimeError as e:
            if (
                "event loop is closed" in str(e).lower()
                or "no running event loop" in str(e).lower()
            ):
                pass  # Expected during shutdown
            else:
                print(f"WARNING: Unexpected error during cleanup: {e}", file=sys.stderr)
@@ -37,7 +42,10 @@ def main() -> None:
        try:
            asyncio.run(shutdown_cleanup())
        except RuntimeError as e:
            if (
                "event loop is closed" in str(e).lower()
                or "no running event loop" in str(e).lower()
            ):
                pass  # Expected during shutdown
            else:
                print(f"WARNING: Unexpected error during cleanup: {e}", file=sys.stderr)

View File

@@ -91,28 +91,24 @@ def run_server() -> None:
    # Register all modules
    register_all_modules()

    logger.info(
        f"Starting Unraid MCP Server on {UNRAID_MCP_HOST}:{UNRAID_MCP_PORT} using {UNRAID_MCP_TRANSPORT} transport..."
    )

    try:
        if UNRAID_MCP_TRANSPORT == "streamable-http":
            mcp.run(
                transport="streamable-http", host=UNRAID_MCP_HOST, port=UNRAID_MCP_PORT, path="/mcp"
            )
        elif UNRAID_MCP_TRANSPORT == "sse":
            logger.warning("SSE transport is deprecated. Consider switching to 'streamable-http'.")
            mcp.run(transport="sse", host=UNRAID_MCP_HOST, port=UNRAID_MCP_PORT, path="/mcp")
        elif UNRAID_MCP_TRANSPORT == "stdio":
            mcp.run()
        else:
            logger.error(
                f"Unsupported MCP_TRANSPORT: {UNRAID_MCP_TRANSPORT}. Choose 'streamable-http', 'sse', or 'stdio'."
            )
            sys.exit(1)
    except Exception as e:
        logger.critical(f"Failed to start Unraid MCP server: {e}", exc_info=True)

View File

@@ -47,7 +47,10 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
        # Build WebSocket URL
        if not UNRAID_API_URL:
            raise ToolError("UNRAID_API_URL is not configured")
        ws_url = (
            UNRAID_API_URL.replace("https://", "wss://").replace("http://", "ws://")
            + "/graphql"
        )

        ssl_context = build_ws_ssl_context(ws_url)
@@ -57,18 +60,17 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
            subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")],
            ssl=ssl_context,
            ping_interval=30,
            ping_timeout=10,
        ) as websocket:
            # Send connection init (using standard X-API-Key format)
            await websocket.send(
                json.dumps(
                    {
                        "type": "connection_init",
                        "payload": {"headers": {"X-API-Key": UNRAID_API_KEY}},
                    }
                )
            )

            # Wait for ack
            response = await websocket.recv()
@@ -78,11 +80,11 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
                return {"error": f"Connection failed: {init_response}"}

            # Send subscription
            await websocket.send(
                json.dumps(
                    {"id": "test", "type": "start", "payload": {"query": subscription_query}}
                )
            )

            # Wait for response with timeout
            try:
@@ -90,26 +92,19 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
                result = json.loads(response)
                logger.info(f"[TEST_SUBSCRIPTION] Response: {result}")

                return {"success": True, "response": result, "query_tested": subscription_query}
            except TimeoutError:
                return {
                    "success": True,
                    "response": "No immediate response (subscriptions may only send data on changes)",
                    "query_tested": subscription_query,
                    "note": "Connection successful, subscription may be waiting for events",
                }

        except Exception as e:
            logger.error(f"[TEST_SUBSCRIPTION] Error: {e}", exc_info=True)
            return {"error": str(e), "query_tested": subscription_query}

    @mcp.tool()
    async def diagnose_subscriptions() -> dict[str, Any]:
@@ -140,17 +135,21 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
                "max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
                "unraid_api_url": UNRAID_API_URL[:50] + "..." if UNRAID_API_URL else None,
                "api_key_configured": bool(UNRAID_API_KEY),
                "websocket_url": None,
            },
            "subscriptions": status,
            "summary": {
                "total_configured": len(subscription_manager.subscription_configs),
                "auto_start_count": sum(
                    1
                    for s in subscription_manager.subscription_configs.values()
                    if s.get("auto_start")
                ),
                "active_count": len(subscription_manager.active_subscriptions),
                "with_data": len(subscription_manager.resource_data),
                "in_error_state": 0,
                "connection_issues": connection_issues,
            },
        }

        # Calculate WebSocket URL
@@ -174,42 +173,57 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
                    diagnostic_info["summary"]["in_error_state"] += 1
                    if runtime.get("last_error"):
                        connection_issues.append(
                            {
                                "subscription": sub_name,
                                "state": connection_state,
                                "error": runtime["last_error"],
                            }
                        )

        # Add troubleshooting recommendations
        recommendations: list[str] = []
        if not diagnostic_info["environment"]["api_key_configured"]:
            recommendations.append(
                "CRITICAL: No API key configured. Set UNRAID_API_KEY environment variable."
            )
        if diagnostic_info["summary"]["in_error_state"] > 0:
            recommendations.append(
                "Some subscriptions are in error state. Check 'connection_issues' for details."
            )
        if diagnostic_info["summary"]["with_data"] == 0:
            recommendations.append(
                "No subscriptions have received data yet. Check WebSocket connectivity and authentication."
            )
        if (
            diagnostic_info["summary"]["active_count"]
            < diagnostic_info["summary"]["auto_start_count"]
        ):
            recommendations.append(
                "Not all auto-start subscriptions are active. Check server startup logs."
            )

        diagnostic_info["troubleshooting"] = {
            "recommendations": recommendations,
            "log_commands": [
                "Check server logs for [WEBSOCKET:*], [AUTH:*], [SUBSCRIPTION:*] prefixed messages",
                "Look for connection timeout or authentication errors",
                "Verify Unraid API URL is accessible and supports GraphQL subscriptions",
            ],
            "next_steps": [
                "If authentication fails: Verify API key has correct permissions",
                "If connection fails: Check network connectivity to Unraid server",
                "If no data received: Enable DEBUG logging to see detailed protocol messages",
            ],
        }

        logger.info(
            f"[DIAGNOSTIC] Completed. Active: {diagnostic_info['summary']['active_count']}, With data: {diagnostic_info['summary']['with_data']}, Errors: {diagnostic_info['summary']['in_error_state']}"
        )
        return diagnostic_info

    except Exception as e:

View File

@@ -30,7 +30,9 @@ class SubscriptionManager:
        self.subscription_lock = asyncio.Lock()

        # Configuration
        self.auto_start_enabled = (
            os.getenv("UNRAID_AUTO_START_SUBSCRIPTIONS", "true").lower() == "true"
        )
        self.reconnect_attempts: dict[str, int] = {}
        self.max_reconnect_attempts = int(os.getenv("UNRAID_MAX_RECONNECT_ATTEMPTS", "10"))
        self.connection_states: dict[str, str] = {}  # Track connection state per subscription
@@ -50,12 +52,16 @@ class SubscriptionManager:
                """,
                "resource": "unraid://logs/stream",
                "description": "Real-time log file streaming",
                "auto_start": False,  # Started manually with path parameter
            }
        }

        logger.info(
            f"[SUBSCRIPTION_MANAGER] Initialized with auto_start={self.auto_start_enabled}, max_reconnects={self.max_reconnect_attempts}"
        )
        logger.debug(
            f"[SUBSCRIPTION_MANAGER] Available subscriptions: {list(self.subscription_configs.keys())}"
        )

    async def auto_start_all_subscriptions(self) -> None:
        """Auto-start all subscriptions marked for auto-start."""
@@ -69,21 +75,31 @@ class SubscriptionManager:
        for subscription_name, config in self.subscription_configs.items():
            if config.get("auto_start", False):
                try:
                    logger.info(
                        f"[SUBSCRIPTION_MANAGER] Auto-starting subscription: {subscription_name}"
                    )
                    await self.start_subscription(subscription_name, str(config["query"]))
                    auto_start_count += 1
                except Exception as e:
                    logger.error(
                        f"[SUBSCRIPTION_MANAGER] Failed to auto-start {subscription_name}: {e}"
                    )
                    self.last_error[subscription_name] = str(e)

        logger.info(
            f"[SUBSCRIPTION_MANAGER] Auto-start completed. Started {auto_start_count} subscriptions"
        )

    async def start_subscription(
        self, subscription_name: str, query: str, variables: dict[str, Any] | None = None
    ) -> None:
        """Start a GraphQL subscription and maintain it as a resource."""
        logger.info(f"[SUBSCRIPTION:{subscription_name}] Starting subscription...")

        if subscription_name in self.active_subscriptions:
            logger.warning(
                f"[SUBSCRIPTION:{subscription_name}] Subscription already active, skipping"
            )
            return

        # Reset connection tracking
@@ -92,12 +108,18 @@ class SubscriptionManager:
        async with self.subscription_lock:
            try:
                task = asyncio.create_task(
                    self._subscription_loop(subscription_name, query, variables or {})
                )
                self.active_subscriptions[subscription_name] = task
                logger.info(
                    f"[SUBSCRIPTION:{subscription_name}] Subscription task created and started"
                )
                self.connection_states[subscription_name] = "active"
            except Exception as e:
                logger.error(
                    f"[SUBSCRIPTION:{subscription_name}] Failed to start subscription task: {e}"
                )
                self.connection_states[subscription_name] = "failed"
                self.last_error[subscription_name] = str(e)
                raise
@@ -120,7 +142,9 @@ class SubscriptionManager:
        else:
            logger.warning(f"[SUBSCRIPTION:{subscription_name}] No active subscription to stop")

    async def _subscription_loop(
        self, subscription_name: str, query: str, variables: dict[str, Any] | None
    ) -> None:
        """Main loop for maintaining a GraphQL subscription with comprehensive logging."""
        retry_delay: int | float = 5
        max_retry_delay = 300  # 5 minutes max
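The loop's backoff variables (`retry_delay` starting at 5, `max_retry_delay` of 300) suggest an escalating delay capped at five minutes. The exact growth rule is not visible in this hunk, so the sketch below assumes simple doubling:

```python
def next_retry_delay(current: float, max_retry_delay: float = 300) -> float:
    # Assumed rule: double after each failed attempt, capped at the 5-minute maximum
    return min(current * 2, max_retry_delay)


delays = []
delay: float = 5  # matches retry_delay's starting value in the loop above
for _ in range(8):
    delays.append(delay)
    delay = next_retry_delay(delay)
```

Whatever the real growth factor, the cap keeps a flapping WebSocket from retrying more than roughly once every five minutes in steady state.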
@@ -129,10 +153,14 @@ class SubscriptionManager:
            attempt = self.reconnect_attempts.get(subscription_name, 0) + 1
            self.reconnect_attempts[subscription_name] = attempt

            logger.info(
                f"[WEBSOCKET:{subscription_name}] Connection attempt #{attempt} (max: {self.max_reconnect_attempts})"
            )

            if attempt > self.max_reconnect_attempts:
                logger.error(
                    f"[WEBSOCKET:{subscription_name}] Max reconnection attempts ({self.max_reconnect_attempts}) exceeded, stopping"
                )
                self.connection_states[subscription_name] = "max_retries_exceeded"
                break
@@ -152,13 +180,17 @@ class SubscriptionManager:
                    ws_url = ws_url.rstrip("/") + "/graphql"

                logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}")
                logger.debug(
                    f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}"
                )

                ssl_context = build_ws_ssl_context(ws_url)

                # Connection with timeout
                connect_timeout = 10
                logger.debug(
                    f"[WEBSOCKET:{subscription_name}] Connection timeout: {connect_timeout}s"
                )

                async with websockets.connect(
                    ws_url,
@@ -166,11 +198,12 @@ class SubscriptionManager:
                    ping_interval=20,
                    ping_timeout=10,
                    close_timeout=10,
                    ssl=ssl_context,
                ) as websocket:
                    selected_proto = websocket.subprotocol or "none"
                    logger.info(
                        f"[WEBSOCKET:{subscription_name}] Connected! Protocol: {selected_proto}"
                    )
                    self.connection_states[subscription_name] = "connected"

                    # Reset retry count on successful connection
@@ -178,21 +211,21 @@ class SubscriptionManager:
                 retry_delay = 5  # Reset delay
                 # Initialize GraphQL-WS protocol
-                logger.debug(f"[PROTOCOL:{subscription_name}] Initializing GraphQL-WS protocol...")
+                logger.debug(
+                    f"[PROTOCOL:{subscription_name}] Initializing GraphQL-WS protocol..."
+                )
                 init_type = "connection_init"
                 init_payload: dict[str, Any] = {"type": init_type}
                 if UNRAID_API_KEY:
                     logger.debug(f"[AUTH:{subscription_name}] Adding authentication payload")
                     # Use standard X-API-Key header format (matching HTTP client)
-                    auth_payload = {
-                        "headers": {
-                            "X-API-Key": UNRAID_API_KEY
-                        }
-                    }
+                    auth_payload = {"headers": {"X-API-Key": UNRAID_API_KEY}}
                     init_payload["payload"] = auth_payload
                 else:
-                    logger.warning(f"[AUTH:{subscription_name}] No API key available for authentication")
+                    logger.warning(
+                        f"[AUTH:{subscription_name}] No API key available for authentication"
+                    )
                 logger.debug(f"[PROTOCOL:{subscription_name}] Sending connection_init message")
                 await websocket.send(json.dumps(init_payload))
@@ -203,45 +236,66 @@ class SubscriptionManager:
                 try:
                     init_data = json.loads(init_raw)
-                    logger.debug(f"[PROTOCOL:{subscription_name}] Received init response: {init_data.get('type')}")
+                    logger.debug(
+                        f"[PROTOCOL:{subscription_name}] Received init response: {init_data.get('type')}"
+                    )
                 except json.JSONDecodeError as e:
-                    init_preview = init_raw[:200] if isinstance(init_raw, str) else init_raw[:200].decode("utf-8", errors="replace")
-                    logger.error(f"[PROTOCOL:{subscription_name}] Failed to decode init response: {init_preview}...")
+                    init_preview = (
+                        init_raw[:200]
+                        if isinstance(init_raw, str)
+                        else init_raw[:200].decode("utf-8", errors="replace")
+                    )
+                    logger.error(
+                        f"[PROTOCOL:{subscription_name}] Failed to decode init response: {init_preview}..."
+                    )
                     self.last_error[subscription_name] = f"Invalid JSON in init response: {e}"
                     break
                 # Handle connection acknowledgment
                 if init_data.get("type") == "connection_ack":
-                    logger.info(f"[PROTOCOL:{subscription_name}] Connection acknowledged successfully")
+                    logger.info(
+                        f"[PROTOCOL:{subscription_name}] Connection acknowledged successfully"
+                    )
                     self.connection_states[subscription_name] = "authenticated"
                 elif init_data.get("type") == "connection_error":
                     error_payload = init_data.get("payload", {})
-                    logger.error(f"[AUTH:{subscription_name}] Authentication failed: {error_payload}")
-                    self.last_error[subscription_name] = f"Authentication error: {error_payload}"
+                    logger.error(
+                        f"[AUTH:{subscription_name}] Authentication failed: {error_payload}"
+                    )
+                    self.last_error[subscription_name] = (
+                        f"Authentication error: {error_payload}"
+                    )
                     self.connection_states[subscription_name] = "auth_failed"
                     break
                 else:
-                    logger.warning(f"[PROTOCOL:{subscription_name}] Unexpected init response: {init_data}")
+                    logger.warning(
+                        f"[PROTOCOL:{subscription_name}] Unexpected init response: {init_data}"
+                    )
                     # Continue anyway - some servers send other messages first
                 # Start the subscription
-                logger.debug(f"[SUBSCRIPTION:{subscription_name}] Starting GraphQL subscription...")
-                start_type = "subscribe" if selected_proto == "graphql-transport-ws" else "start"
+                logger.debug(
+                    f"[SUBSCRIPTION:{subscription_name}] Starting GraphQL subscription..."
+                )
+                start_type = (
+                    "subscribe" if selected_proto == "graphql-transport-ws" else "start"
+                )
                 subscription_message = {
                     "id": subscription_name,
                     "type": start_type,
-                    "payload": {
-                        "query": query,
-                        "variables": variables
-                    }
+                    "payload": {"query": query, "variables": variables},
                 }
-                logger.debug(f"[SUBSCRIPTION:{subscription_name}] Subscription message type: {start_type}")
+                logger.debug(
+                    f"[SUBSCRIPTION:{subscription_name}] Subscription message type: {start_type}"
+                )
                 logger.debug(f"[SUBSCRIPTION:{subscription_name}] Query: {query[:100]}...")
                 logger.debug(f"[SUBSCRIPTION:{subscription_name}] Variables: {variables}")
                 await websocket.send(json.dumps(subscription_message))
-                logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription started successfully")
+                logger.info(
+                    f"[SUBSCRIPTION:{subscription_name}] Subscription started successfully"
+                )
                 self.connection_states[subscription_name] = "subscribed"
                 # Listen for subscription data
@@ -253,57 +307,100 @@ class SubscriptionManager:
                         message_count += 1
                         message_type = data.get("type", "unknown")
-                        logger.debug(f"[DATA:{subscription_name}] Message #{message_count}: {message_type}")
+                        logger.debug(
+                            f"[DATA:{subscription_name}] Message #{message_count}: {message_type}"
+                        )
                         # Handle different message types
-                        expected_data_type = "next" if selected_proto == "graphql-transport-ws" else "data"
-                        if data.get("type") == expected_data_type and data.get("id") == subscription_name:
+                        expected_data_type = (
+                            "next" if selected_proto == "graphql-transport-ws" else "data"
+                        )
+                        if (
+                            data.get("type") == expected_data_type
+                            and data.get("id") == subscription_name
+                        ):
                             payload = data.get("payload", {})
                             if payload.get("data"):
-                                logger.info(f"[DATA:{subscription_name}] Received subscription data update")
+                                logger.info(
+                                    f"[DATA:{subscription_name}] Received subscription data update"
+                                )
                                 self.resource_data[subscription_name] = SubscriptionData(
                                     data=payload["data"],
                                     last_updated=datetime.now(),
-                                    subscription_type=subscription_name
+                                    subscription_type=subscription_name,
                                 )
-                                logger.debug(f"[RESOURCE:{subscription_name}] Resource data updated successfully")
+                                logger.debug(
+                                    f"[RESOURCE:{subscription_name}] Resource data updated successfully"
+                                )
                             elif payload.get("errors"):
-                                logger.error(f"[DATA:{subscription_name}] GraphQL errors in response: {payload['errors']}")
-                                self.last_error[subscription_name] = f"GraphQL errors: {payload['errors']}"
+                                logger.error(
+                                    f"[DATA:{subscription_name}] GraphQL errors in response: {payload['errors']}"
+                                )
+                                self.last_error[subscription_name] = (
+                                    f"GraphQL errors: {payload['errors']}"
+                                )
                             else:
-                                logger.warning(f"[DATA:{subscription_name}] Empty or invalid data payload: {payload}")
+                                logger.warning(
+                                    f"[DATA:{subscription_name}] Empty or invalid data payload: {payload}"
+                                )
                         elif data.get("type") == "ping":
-                            logger.debug(f"[PROTOCOL:{subscription_name}] Received ping, sending pong")
+                            logger.debug(
+                                f"[PROTOCOL:{subscription_name}] Received ping, sending pong"
+                            )
                             await websocket.send(json.dumps({"type": "pong"}))
                         elif data.get("type") == "error":
                             error_payload = data.get("payload", {})
-                            logger.error(f"[SUBSCRIPTION:{subscription_name}] Subscription error: {error_payload}")
-                            self.last_error[subscription_name] = f"Subscription error: {error_payload}"
+                            logger.error(
+                                f"[SUBSCRIPTION:{subscription_name}] Subscription error: {error_payload}"
+                            )
+                            self.last_error[subscription_name] = (
+                                f"Subscription error: {error_payload}"
+                            )
                             self.connection_states[subscription_name] = "error"
                         elif data.get("type") == "complete":
-                            logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription completed by server")
+                            logger.info(
+                                f"[SUBSCRIPTION:{subscription_name}] Subscription completed by server"
+                            )
                             self.connection_states[subscription_name] = "completed"
                             break
                         elif data.get("type") in ["ka", "ping", "pong"]:
-                            logger.debug(f"[PROTOCOL:{subscription_name}] Keepalive message: {message_type}")
+                            logger.debug(
+                                f"[PROTOCOL:{subscription_name}] Keepalive message: {message_type}"
+                            )
                         else:
-                            logger.debug(f"[PROTOCOL:{subscription_name}] Unhandled message type: {message_type}")
+                            logger.debug(
+                                f"[PROTOCOL:{subscription_name}] Unhandled message type: {message_type}"
+                            )
                     except json.JSONDecodeError as e:
-                        msg_preview = message[:200] if isinstance(message, str) else message[:200].decode("utf-8", errors="replace")
-                        logger.error(f"[PROTOCOL:{subscription_name}] Failed to decode message: {msg_preview}...")
+                        msg_preview = (
+                            message[:200]
+                            if isinstance(message, str)
+                            else message[:200].decode("utf-8", errors="replace")
+                        )
+                        logger.error(
+                            f"[PROTOCOL:{subscription_name}] Failed to decode message: {msg_preview}..."
+                        )
                         logger.error(f"[PROTOCOL:{subscription_name}] JSON decode error: {e}")
                     except Exception as e:
-                        logger.error(f"[DATA:{subscription_name}] Error processing message: {e}")
-                        msg_preview = message[:200] if isinstance(message, str) else message[:200].decode("utf-8", errors="replace")
-                        logger.debug(f"[DATA:{subscription_name}] Raw message: {msg_preview}...")
+                        logger.error(
+                            f"[DATA:{subscription_name}] Error processing message: {e}"
+                        )
+                        msg_preview = (
+                            message[:200]
+                            if isinstance(message, str)
+                            else message[:200].decode("utf-8", errors="replace")
+                        )
+                        logger.debug(
+                            f"[DATA:{subscription_name}] Raw message: {msg_preview}..."
+                        )
         except TimeoutError:
             error_msg = "Connection or authentication timeout"
@@ -332,7 +429,9 @@ class SubscriptionManager:
             # Calculate backoff delay
             retry_delay = min(retry_delay * 1.5, max_retry_delay)
-            logger.info(f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds...")
+            logger.info(
+                f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds..."
+            )
             self.connection_states[subscription_name] = "reconnecting"
             await asyncio.sleep(retry_delay)
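The reconnect loop above multiplies the delay by 1.5 after each failure and caps it at `max_retry_delay`, resetting to 5s on a successful connection. A standalone sketch of that schedule (the 60s cap here is illustrative; the actual `max_retry_delay` value is defined elsewhere in the module):

```python
def backoff_schedule(attempts: int, initial: float = 5.0, cap: float = 60.0) -> list[float]:
    """Reproduce the reconnect delay sequence: multiply by 1.5, cap at `cap`."""
    delays: list[float] = []
    delay = initial
    for _ in range(attempts):
        delay = min(delay * 1.5, cap)
        delays.append(delay)
    return delays

# Delays grow geometrically from the initial value, then flatten at the cap
schedule = backoff_schedule(10)
assert schedule[:4] == [7.5, 11.25, 16.875, 25.3125]
assert schedule[-1] == 60.0
```

Geometric backoff with a cap keeps early retries responsive while preventing a flapping server from being hammered indefinitely.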
@@ -363,14 +462,14 @@ class SubscriptionManager:
"config": { "config": {
"resource": config["resource"], "resource": config["resource"],
"description": config["description"], "description": config["description"],
"auto_start": config.get("auto_start", False) "auto_start": config.get("auto_start", False),
}, },
"runtime": { "runtime": {
"active": sub_name in self.active_subscriptions, "active": sub_name in self.active_subscriptions,
"connection_state": self.connection_states.get(sub_name, "not_started"), "connection_state": self.connection_states.get(sub_name, "not_started"),
"reconnect_attempts": self.reconnect_attempts.get(sub_name, 0), "reconnect_attempts": self.reconnect_attempts.get(sub_name, 0),
"last_error": self.last_error.get(sub_name, None) "last_error": self.last_error.get(sub_name, None),
} },
} }
# Add data info if available # Add data info if available
@@ -380,7 +479,7 @@ class SubscriptionManager:
sub_status["data"] = { sub_status["data"] = {
"available": True, "available": True,
"last_updated": data_info.last_updated.isoformat(), "last_updated": data_info.last_updated.isoformat(),
"age_seconds": age_seconds "age_seconds": age_seconds,
} }
else: else:
sub_status["data"] = {"available": False} sub_status["data"] = {"available": False}

View File

@@ -59,7 +59,9 @@ async def autostart_subscriptions() -> None:
logger.info(f"[AUTOSTART] Starting log file subscription for: {log_path}") logger.info(f"[AUTOSTART] Starting log file subscription for: {log_path}")
config = subscription_manager.subscription_configs.get("logFileSubscription") config = subscription_manager.subscription_configs.get("logFileSubscription")
if config: if config:
await subscription_manager.start_subscription("logFileSubscription", str(config["query"]), {"path": log_path}) await subscription_manager.start_subscription(
"logFileSubscription", str(config["query"]), {"path": log_path}
)
logger.info(f"[AUTOSTART] Log file subscription started for: {log_path}") logger.info(f"[AUTOSTART] Log file subscription started for: {log_path}")
else: else:
logger.error("[AUTOSTART] logFileSubscription config not found") logger.error("[AUTOSTART] logFileSubscription config not found")
@@ -83,9 +85,11 @@ def register_subscription_resources(mcp: FastMCP) -> None:
         data = subscription_manager.get_resource_data("logFileSubscription")
         if data:
             return json.dumps(data, indent=2)
-        return json.dumps({
-            "status": "No subscription data yet",
-            "message": "Subscriptions auto-start on server boot. If this persists, check server logs for WebSocket/auth issues."
-        })
+        return json.dumps(
+            {
+                "status": "No subscription data yet",
+                "message": "Subscriptions auto-start on server boot. If this persists, check server logs for WebSocket/auth issues.",
+            }
+        )
     logger.info("Subscription resources registered successfully")

View File

@@ -1,7 +1,6 @@
"""Array operations and system power management. """Array parity check operations.
Provides the `unraid_array` tool with 12 actions for array lifecycle, Provides the `unraid_array` tool with 5 actions for parity check management.
parity operations, disk management, and system power control.
""" """
from typing import Any, Literal from typing import Any, Literal
@@ -22,16 +21,6 @@ QUERIES: dict[str, str] = {
 }
 MUTATIONS: dict[str, str] = {
-    "start": """
-        mutation StartArray {
-            setState(input: { desiredState: STARTED }) { state }
-        }
-    """,
-    "stop": """
-        mutation StopArray {
-            setState(input: { desiredState: STOPPED }) { state }
-        }
-    """,
     "parity_start": """
         mutation StartParityCheck($correct: Boolean) {
             parityCheck { start(correct: $correct) }
@@ -52,42 +41,16 @@ MUTATIONS: dict[str, str] = {
             parityCheck { cancel }
         }
     """,
-    "mount_disk": """
-        mutation MountDisk($id: PrefixedID!) {
-            mountArrayDisk(id: $id)
-        }
-    """,
-    "unmount_disk": """
-        mutation UnmountDisk($id: PrefixedID!) {
-            unmountArrayDisk(id: $id)
-        }
-    """,
-    "clear_stats": """
-        mutation ClearStats($id: PrefixedID!) {
-            clearArrayDiskStatistics(id: $id)
-        }
-    """,
-    "shutdown": """
-        mutation Shutdown {
-            shutdown
-        }
-    """,
-    "reboot": """
-        mutation Reboot {
-            reboot
-        }
-    """,
 }
-DESTRUCTIVE_ACTIONS = {"start", "stop", "shutdown", "reboot"}
-DISK_ACTIONS = {"mount_disk", "unmount_disk", "clear_stats"}
 ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
 ARRAY_ACTIONS = Literal[
-    "start", "stop",
-    "parity_start", "parity_pause", "parity_resume", "parity_cancel", "parity_status",
-    "mount_disk", "unmount_disk", "clear_stats",
-    "shutdown", "reboot",
+    "parity_start",
+    "parity_pause",
+    "parity_resume",
+    "parity_cancel",
+    "parity_status",
 ]
@@ -97,52 +60,31 @@ def register_array_tool(mcp: FastMCP) -> None:
     @mcp.tool()
     async def unraid_array(
         action: ARRAY_ACTIONS,
-        confirm: bool = False,
-        disk_id: str | None = None,
         correct: bool | None = None,
     ) -> dict[str, Any]:
-        """Manage the Unraid array and system power.
+        """Manage Unraid array parity checks.
 
         Actions:
-            start - Start the array (destructive, requires confirm=True)
-            stop - Stop the array (destructive, requires confirm=True)
             parity_start - Start parity check (optional correct=True to fix errors)
             parity_pause - Pause running parity check
             parity_resume - Resume paused parity check
             parity_cancel - Cancel running parity check
             parity_status - Get current parity check status
-            mount_disk - Mount an array disk (requires disk_id)
-            unmount_disk - Unmount an array disk (requires disk_id)
-            clear_stats - Clear disk statistics (requires disk_id)
-            shutdown - Shut down the server (destructive, requires confirm=True)
-            reboot - Reboot the server (destructive, requires confirm=True)
         """
         if action not in ALL_ACTIONS:
             raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
-        if action in DESTRUCTIVE_ACTIONS and not confirm:
-            raise ToolError(
-                f"Action '{action}' is destructive. Set confirm=True to proceed."
-            )
-        if action in DISK_ACTIONS and not disk_id:
-            raise ToolError(f"disk_id is required for '{action}' action")
         try:
             logger.info(f"Executing unraid_array action={action}")
-            # Read-only query
             if action in QUERIES:
                 data = await make_graphql_request(QUERIES[action])
                 return {"success": True, "action": action, "data": data}
-            # Mutations
             query = MUTATIONS[action]
             variables: dict[str, Any] | None = None
-            if action in DISK_ACTIONS:
-                variables = {"id": disk_id}
-            elif action == "parity_start" and correct is not None:
+            if action == "parity_start" and correct is not None:
                 variables = {"correct": correct}
             data = await make_graphql_request(query, variables)

View File

@@ -99,13 +99,35 @@ MUTATIONS: dict[str, str] = {
 }
 DESTRUCTIVE_ACTIONS = {"remove"}
-_ACTIONS_REQUIRING_CONTAINER_ID = {"start", "stop", "restart", "pause", "unpause", "remove", "update", "details", "logs"}
+_ACTIONS_REQUIRING_CONTAINER_ID = {
+    "start",
+    "stop",
+    "restart",
+    "pause",
+    "unpause",
+    "remove",
+    "update",
+    "details",
+    "logs",
+}
 ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"restart"}
 DOCKER_ACTIONS = Literal[
-    "list", "details", "start", "stop", "restart", "pause", "unpause",
-    "remove", "update", "update_all", "logs",
-    "networks", "network_details", "port_conflicts", "check_updates",
+    "list",
+    "details",
+    "start",
+    "stop",
+    "restart",
+    "pause",
+    "unpause",
+    "remove",
+    "update",
+    "update_all",
+    "logs",
+    "networks",
+    "network_details",
+    "port_conflicts",
+    "check_updates",
 ]
# Docker container IDs: 64 hex chars + optional suffix (e.g., ":local") # Docker container IDs: 64 hex chars + optional suffix (e.g., ":local")
@@ -246,9 +268,7 @@ def register_docker_tool(mcp: FastMCP) -> None:
return {"networks": list(networks) if isinstance(networks, list) else []} return {"networks": list(networks) if isinstance(networks, list) else []}
if action == "network_details": if action == "network_details":
data = await make_graphql_request( data = await make_graphql_request(QUERIES["network_details"], {"id": network_id})
QUERIES["network_details"], {"id": network_id}
)
return dict(data.get("dockerNetwork", {})) return dict(data.get("dockerNetwork", {}))
if action == "port_conflicts": if action == "port_conflicts":
@@ -266,13 +286,15 @@ def register_docker_tool(mcp: FastMCP) -> None:
             actual_id = await _resolve_container_id(container_id or "")
             # Stop (idempotent: treat "already stopped" as success)
             stop_data = await make_graphql_request(
-                MUTATIONS["stop"], {"id": actual_id},
+                MUTATIONS["stop"],
+                {"id": actual_id},
                 operation_context={"operation": "stop"},
             )
             stop_was_idempotent = stop_data.get("idempotent_success", False)
             # Start (idempotent: treat "already running" as success)
             start_data = await make_graphql_request(
-                MUTATIONS["start"], {"id": actual_id},
+                MUTATIONS["start"],
+                {"id": actual_id},
                 operation_context={"operation": "start"},
             )
             if start_data.get("idempotent_success"):
@@ -280,7 +302,9 @@ def register_docker_tool(mcp: FastMCP) -> None:
             else:
                 result = start_data.get("docker", {}).get("start", {})
             response: dict[str, Any] = {
-                "success": True, "action": "restart", "container": result,
+                "success": True,
+                "action": "restart",
+                "container": result,
             }
             if stop_was_idempotent:
                 response["note"] = "Container was already stopped before restart"
@@ -294,9 +318,12 @@ def register_docker_tool(mcp: FastMCP) -> None:
         # Single-container mutations
         if action in MUTATIONS:
             actual_id = await _resolve_container_id(container_id or "")
-            op_context: dict[str, str] | None = {"operation": action} if action in ("start", "stop") else None
+            op_context: dict[str, str] | None = (
+                {"operation": action} if action in ("start", "stop") else None
+            )
             data = await make_graphql_request(
-                MUTATIONS[action], {"id": actual_id},
+                MUTATIONS[action],
+                {"id": actual_id},
                 operation_context=op_context,
             )
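The restart path above composes stop + start and treats "already stopped" / "already running" as success via the `idempotent_success` flag returned by the request layer. A toy version of that sequencing, assuming a client whose responses carry such a flag (the `FakeClient` and `mutate` names are illustrative, not the real API):

```python
import asyncio
from typing import Any


class FakeClient:
    """Stand-in for the GraphQL layer: reports the container as already stopped."""

    async def mutate(self, op: str, container_id: str) -> dict[str, Any]:
        return {"idempotent_success": op == "stop"}


async def restart(client: FakeClient, container_id: str) -> dict[str, Any]:
    # Stop, then start; either step may be a no-op that still counts as success.
    stop = await client.mutate("stop", container_id)
    start = await client.mutate("start", container_id)
    response: dict[str, Any] = {"success": True, "action": "restart", "container": {}}
    if stop.get("idempotent_success"):
        response["note"] = "Container was already stopped before restart"
    return response


result = asyncio.run(restart(FakeClient(), "abc123"))
assert result["note"] == "Container was already stopped before restart"
```

Making each step idempotent means a restart against a crashed container still succeeds cleanly instead of failing on the redundant stop.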

View File

@@ -247,11 +247,13 @@ async def _diagnose_subscriptions() -> dict[str, Any]:
         if conn_state in ("error", "auth_failed", "timeout", "max_retries_exceeded"):
             diagnostic_info["summary"]["in_error_state"] += 1
             if runtime.get("last_error"):
-                connection_issues.append({
+                connection_issues.append(
+                    {
                         "subscription": sub_name,
                         "state": conn_state,
                         "error": runtime["last_error"],
-                })
+                    }
+                )
     return diagnostic_info

View File

@@ -157,10 +157,25 @@ QUERIES: dict[str, str] = {
 }
 INFO_ACTIONS = Literal[
-    "overview", "array", "network", "registration", "connect", "variables",
-    "metrics", "services", "display", "config", "online", "owner",
-    "settings", "server", "servers", "flash",
-    "ups_devices", "ups_device", "ups_config",
+    "overview",
+    "array",
+    "network",
+    "registration",
+    "connect",
+    "variables",
+    "metrics",
+    "services",
+    "display",
+    "config",
+    "online",
+    "owner",
+    "settings",
+    "server",
+    "servers",
+    "flash",
+    "ups_devices",
+    "ups_device",
+    "ups_config",
 ]
assert set(QUERIES.keys()) == set(INFO_ACTIONS.__args__), ( assert set(QUERIES.keys()) == set(INFO_ACTIONS.__args__), (
@@ -209,7 +224,15 @@ def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
 def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
     """Analyze health status of disk arrays."""
-    counts = {"healthy": 0, "failed": 0, "missing": 0, "new": 0, "warning": 0, "critical": 0, "unknown": 0}
+    counts = {
+        "healthy": 0,
+        "failed": 0,
+        "missing": 0,
+        "new": 0,
+        "warning": 0,
+        "critical": 0,
+        "unknown": 0,
+    }
     for disk in disks:
         status = disk.get("status", "").upper()
         warning = disk.get("warning")
@@ -263,7 +286,11 @@ def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]:
summary["num_cache_pools"] = len(raw.get("caches", [])) summary["num_cache_pools"] = len(raw.get("caches", []))
health_summary: dict[str, Any] = {} health_summary: dict[str, Any] = {}
for key, label in [("parities", "parity_health"), ("disks", "data_health"), ("caches", "cache_health")]: for key, label in [
("parities", "parity_health"),
("disks", "data_health"),
("caches", "cache_health"),
]:
if raw.get(key): if raw.get(key):
health_summary[label] = _analyze_disk_health(raw[key]) health_summary[label] = _analyze_disk_health(raw[key])
@@ -377,10 +404,14 @@ def register_info_tool(mcp: FastMCP) -> None:
if action == "settings": if action == "settings":
settings = data.get("settings") or {} settings = data.get("settings") or {}
if not settings: if not settings:
raise ToolError("No settings data returned from Unraid API. Check API permissions.") raise ToolError(
"No settings data returned from Unraid API. Check API permissions."
)
if not settings.get("unified"): if not settings.get("unified"):
logger.warning(f"Settings returned unexpected structure: {settings.keys()}") logger.warning(f"Settings returned unexpected structure: {settings.keys()}")
raise ToolError(f"Unexpected settings structure. Expected 'unified' key, got: {list(settings.keys())}") raise ToolError(
f"Unexpected settings structure. Expected 'unified' key, got: {list(settings.keys())}"
)
values = settings["unified"].get("values") or {} values = settings["unified"].get("values") or {}
return dict(values) if isinstance(values, dict) else {"raw": values} return dict(values) if isinstance(values, dict) else {"raw": values}

View File

@@ -47,7 +47,11 @@ MUTATIONS: dict[str, str] = {
DESTRUCTIVE_ACTIONS = {"delete"} DESTRUCTIVE_ACTIONS = {"delete"}
KEY_ACTIONS = Literal[ KEY_ACTIONS = Literal[
"list", "get", "create", "update", "delete", "list",
"get",
"create",
"update",
"delete",
] ]
@@ -101,9 +105,7 @@ def register_keys_tool(mcp: FastMCP) -> None:
input_data["roles"] = roles input_data["roles"] = roles
if permissions: if permissions:
input_data["permissions"] = permissions input_data["permissions"] = permissions
data = await make_graphql_request( data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
MUTATIONS["create"], {"input": input_data}
)
return { return {
"success": True, "success": True,
"key": data.get("createApiKey", {}), "key": data.get("createApiKey", {}),
@@ -117,9 +119,7 @@ def register_keys_tool(mcp: FastMCP) -> None:
input_data["name"] = name input_data["name"] = name
if roles: if roles:
input_data["roles"] = roles input_data["roles"] = roles
data = await make_graphql_request( data = await make_graphql_request(MUTATIONS["update"], {"input": input_data})
MUTATIONS["update"], {"input": input_data}
)
return { return {
"success": True, "success": True,
"key": data.get("updateApiKey", {}), "key": data.get("updateApiKey", {}),
@@ -128,12 +128,12 @@ def register_keys_tool(mcp: FastMCP) -> None:
if action == "delete": if action == "delete":
if not key_id: if not key_id:
raise ToolError("key_id is required for 'delete' action") raise ToolError("key_id is required for 'delete' action")
data = await make_graphql_request( data = await make_graphql_request(MUTATIONS["delete"], {"input": {"ids": [key_id]}})
MUTATIONS["delete"], {"input": {"ids": [key_id]}}
)
result = data.get("deleteApiKeys") result = data.get("deleteApiKeys")
if not result: if not result:
raise ToolError(f"Failed to delete API key '{key_id}': no confirmation from server") raise ToolError(
f"Failed to delete API key '{key_id}': no confirmation from server"
)
return { return {
"success": True, "success": True,
"message": f"API key '{key_id}' deleted", "message": f"API key '{key_id}' deleted",

View File

@@ -78,8 +78,15 @@ MUTATIONS: dict[str, str] = {
DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"} DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"}
NOTIFICATION_ACTIONS = Literal[ NOTIFICATION_ACTIONS = Literal[
"overview", "list", "warnings", "overview",
"create", "archive", "unread", "delete", "delete_archived", "archive_all", "list",
"warnings",
"create",
"archive",
"unread",
"delete",
"delete_archived",
"archive_all",
] ]
@@ -115,7 +122,9 @@ def register_notifications_tool(mcp: FastMCP) -> None:
""" """
all_actions = {**QUERIES, **MUTATIONS} all_actions = {**QUERIES, **MUTATIONS}
if action not in all_actions: if action not in all_actions:
raise ToolError(f"Invalid action '{action}'. Must be one of: {list(all_actions.keys())}") raise ToolError(
f"Invalid action '{action}'. Must be one of: {list(all_actions.keys())}"
)
if action in DESTRUCTIVE_ACTIONS and not confirm: if action in DESTRUCTIVE_ACTIONS and not confirm:
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.") raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
@@ -136,9 +145,7 @@ def register_notifications_tool(mcp: FastMCP) -> None:
             }
             if importance:
                 filter_vars["importance"] = importance.upper()
-            data = await make_graphql_request(
-                QUERIES["list"], {"filter": filter_vars}
-            )
+            data = await make_graphql_request(QUERIES["list"], {"filter": filter_vars})
             notifications = data.get("notifications", {})
             result = notifications.get("list", [])
             return {"notifications": list(result) if isinstance(result, list) else []}
@@ -151,33 +158,25 @@ def register_notifications_tool(mcp: FastMCP) -> None:
if action == "create": if action == "create":
if title is None or subject is None or description is None or importance is None: if title is None or subject is None or description is None or importance is None:
raise ToolError( raise ToolError("create requires title, subject, description, and importance")
"create requires title, subject, description, and importance"
)
input_data = { input_data = {
"title": title, "title": title,
"subject": subject, "subject": subject,
"description": description, "description": description,
"importance": importance.upper(), "importance": importance.upper(),
} }
data = await make_graphql_request( data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
MUTATIONS["create"], {"input": input_data}
)
return {"success": True, "data": data} return {"success": True, "data": data}
if action in ("archive", "unread"): if action in ("archive", "unread"):
if not notification_id: if not notification_id:
raise ToolError(f"notification_id is required for '{action}' action") raise ToolError(f"notification_id is required for '{action}' action")
data = await make_graphql_request( data = await make_graphql_request(MUTATIONS[action], {"id": notification_id})
MUTATIONS[action], {"id": notification_id}
)
return {"success": True, "action": action, "data": data} return {"success": True, "action": action, "data": data}
if action == "delete": if action == "delete":
if not notification_id or not notification_type: if not notification_id or not notification_type:
raise ToolError( raise ToolError("delete requires notification_id and notification_type")
"delete requires notification_id and notification_type"
)
data = await make_graphql_request( data = await make_graphql_request(
MUTATIONS["delete"], MUTATIONS["delete"],
{"id": notification_id, "type": notification_type.upper()}, {"id": notification_id, "type": notification_type.upper()},


@@ -43,7 +43,10 @@ DESTRUCTIVE_ACTIONS = {"delete_remote"}
 ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
 RCLONE_ACTIONS = Literal[
-    "list_remotes", "config_form", "create_remote", "delete_remote",
+    "list_remotes",
+    "config_form",
+    "create_remote",
+    "delete_remote",
 ]

@@ -84,9 +87,7 @@ def register_rclone_tool(mcp: FastMCP) -> None:
             variables: dict[str, Any] = {}
             if provider_type:
                 variables["formOptions"] = {"providerType": provider_type}
-            data = await make_graphql_request(
-                QUERIES["config_form"], variables or None
-            )
+            data = await make_graphql_request(QUERIES["config_form"], variables or None)
             form = data.get("rclone", {}).get("configForm", {})
             if not form:
                 raise ToolError("No RClone config form data received")

@@ -94,16 +95,16 @@ def register_rclone_tool(mcp: FastMCP) -> None:
         if action == "create_remote":
             if name is None or provider_type is None or config_data is None:
-                raise ToolError(
-                    "create_remote requires name, provider_type, and config_data"
-                )
+                raise ToolError("create_remote requires name, provider_type, and config_data")
             data = await make_graphql_request(
                 MUTATIONS["create_remote"],
                 {"input": {"name": name, "type": provider_type, "config": config_data}},
             )
             remote = data.get("rclone", {}).get("createRCloneRemote")
             if not remote:
-                raise ToolError(f"Failed to create remote '{name}': no confirmation from server")
+                raise ToolError(
+                    f"Failed to create remote '{name}': no confirmation from server"
+                )
             return {
                 "success": True,
                 "message": f"Remote '{name}' created successfully",


@@ -57,7 +57,12 @@ QUERIES: dict[str, str] = {
 }
 STORAGE_ACTIONS = Literal[
-    "shares", "disks", "disk_details", "unassigned", "log_files", "logs",
+    "shares",
+    "disks",
+    "disk_details",
+    "unassigned",
+    "log_files",
+    "logs",
 ]


@@ -1,7 +1,7 @@
-"""User management.
+"""User account query.

-Provides the `unraid_users` tool with 8 actions for managing users,
-cloud access, remote access settings, and allowed origins.
+Provides the `unraid_users` tool with 1 action for querying the current authenticated user.
+Note: Unraid GraphQL API does not support user management operations (list, add, delete).
 """
 from typing import Any, Literal

@@ -19,146 +19,37 @@ QUERIES: dict[str, str] = {
         me { id name description roles }
     }
     """,
-    "list": """
-    query ListUsers {
-        users { id name description roles }
-    }
-    """,
-    "get": """
-    query GetUser($id: ID!) {
-        user(id: $id) { id name description roles }
-    }
-    """,
-    "cloud": """
-    query GetCloud {
-        cloud { status error }
-    }
-    """,
-    "remote_access": """
-    query GetRemoteAccess {
-        remoteAccess { enabled url }
-    }
-    """,
-    "origins": """
-    query GetAllowedOrigins {
-        allowedOrigins
-    }
-    """,
 }

-MUTATIONS: dict[str, str] = {
-    "add": """
-    mutation AddUser($input: addUserInput!) {
-        addUser(input: $input) { id name description roles }
-    }
-    """,
-    "delete": """
-    mutation DeleteUser($input: deleteUserInput!) {
-        deleteUser(input: $input) { id name }
-    }
-    """,
-}
+ALL_ACTIONS = set(QUERIES)

-DESTRUCTIVE_ACTIONS = {"delete"}
-
-USER_ACTIONS = Literal[
-    "me", "list", "get", "add", "delete", "cloud", "remote_access", "origins",
-]
+USER_ACTIONS = Literal["me"]

 def register_users_tool(mcp: FastMCP) -> None:
     """Register the unraid_users tool with the FastMCP instance."""

     @mcp.tool()
-    async def unraid_users(
-        action: USER_ACTIONS,
-        confirm: bool = False,
-        user_id: str | None = None,
-        name: str | None = None,
-        password: str | None = None,
-        role: str | None = None,
-    ) -> dict[str, Any]:
-        """Manage Unraid users and access settings.
+    async def unraid_users(action: USER_ACTIONS = "me") -> dict[str, Any]:
+        """Query current authenticated user.

         Actions:
-            me - Get current authenticated user info
-            list - List all users
-            get - Get a specific user (requires user_id)
-            add - Add a new user (requires name, password; optional role)
-            delete - Delete a user (requires user_id, confirm=True)
-            cloud - Get Unraid Connect cloud status
-            remote_access - Get remote access settings
-            origins - Get allowed origins
-        """
-        all_actions = set(QUERIES) | set(MUTATIONS)
-        if action not in all_actions:
-            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
-        if action in DESTRUCTIVE_ACTIONS and not confirm:
-            raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
+            me - Get current authenticated user info (id, name, description, roles)
+
+        Note: Unraid API does not support user management operations (list, add, delete).
+        """
+        if action not in ALL_ACTIONS:
+            raise ToolError(f"Invalid action '{action}'. Must be: me")
         try:
-            logger.info(f"Executing unraid_users action={action}")
-            if action == "me":
-                data = await make_graphql_request(QUERIES["me"])
-                return data.get("me") or {}
-            if action == "list":
-                data = await make_graphql_request(QUERIES["list"])
-                users = data.get("users", [])
-                return {"users": list(users) if isinstance(users, list) else []}
-            if action == "get":
-                if not user_id:
-                    raise ToolError("user_id is required for 'get' action")
-                data = await make_graphql_request(QUERIES["get"], {"id": user_id})
-                return data.get("user") or {}
-            if action == "add":
-                if not name or not password:
-                    raise ToolError("add requires name and password")
-                input_data: dict[str, Any] = {"name": name, "password": password}
-                if role:
-                    input_data["role"] = role.upper()
-                data = await make_graphql_request(
-                    MUTATIONS["add"], {"input": input_data}
-                )
-                return {
-                    "success": True,
-                    "user": data.get("addUser", {}),
-                }
-            if action == "delete":
-                if not user_id:
-                    raise ToolError("user_id is required for 'delete' action")
-                data = await make_graphql_request(
-                    MUTATIONS["delete"], {"input": {"id": user_id}}
-                )
-                return {
-                    "success": True,
-                    "message": f"User '{user_id}' deleted",
-                }
-            if action == "cloud":
-                data = await make_graphql_request(QUERIES["cloud"])
-                return data.get("cloud") or {}
-            if action == "remote_access":
-                data = await make_graphql_request(QUERIES["remote_access"])
-                return data.get("remoteAccess") or {}
-            if action == "origins":
-                data = await make_graphql_request(QUERIES["origins"])
-                origins = data.get("allowedOrigins", [])
-                return {"origins": list(origins) if isinstance(origins, list) else []}
-            raise ToolError(f"Unhandled action '{action}' — this is a bug")
+            logger.info("Executing unraid_users action=me")
+            data = await make_graphql_request(QUERIES["me"])
+            return data.get("me") or {}
         except ToolError:
             raise
         except Exception as e:
-            logger.error(f"Error in unraid_users action={action}: {e}", exc_info=True)
-            raise ToolError(f"Failed to execute users/{action}: {e!s}") from e
+            logger.error(f"Error in unraid_users action=me: {e}", exc_info=True)
+            raise ToolError(f"Failed to execute users/me: {e!s}") from e

     logger.info("Users tool registered successfully")
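The slimmed-down users tool now validates a single action and unwraps the `me` field with a null guard (`data.get("me") or {}`). A minimal stand-in that exercises the same control flow — `run_users_action` is a hypothetical name, `ValueError` replaces `ToolError`, and the GraphQL call is modeled as a pre-fetched `response` dict:

```python
from typing import Any

# The rewritten tool supports exactly one action.
ALL_ACTIONS = {"me"}


def run_users_action(action: str, response: dict[str, Any]) -> dict[str, Any]:
    # Reject anything other than "me" up front, as the diff does.
    if action not in ALL_ACTIONS:
        raise ValueError(f"Invalid action '{action}'. Must be: me")
    # `or {}` guards against the server returning an explicit null for `me`.
    return response.get("me") or {}
```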


@@ -53,8 +53,15 @@ _MUTATION_FIELDS: dict[str, str] = {
 DESTRUCTIVE_ACTIONS = {"force_stop", "reset"}
 VM_ACTIONS = Literal[
-    "list", "details",
-    "start", "stop", "pause", "resume", "force_stop", "reboot", "reset",
+    "list",
+    "details",
+    "start",
+    "stop",
+    "pause",
+    "resume",
+    "force_stop",
+    "reboot",
+    "reset",
 ]

@@ -111,21 +118,15 @@ def register_vm_tool(mcp: FastMCP) -> None:
                     or vm.get("name") == vm_id
                 ):
                     return dict(vm)
-                available = [
-                    f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms
-                ]
-                raise ToolError(
-                    f"VM '{vm_id}' not found. Available: {', '.join(available)}"
-                )
+                available = [f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms]
+                raise ToolError(f"VM '{vm_id}' not found. Available: {', '.join(available)}")
             if action == "details":
                 raise ToolError("No VM data returned from server")
             return {"vms": []}

         # Mutations
         if action in MUTATIONS:
-            data = await make_graphql_request(
-                MUTATIONS[action], {"id": vm_id}
-            )
+            data = await make_graphql_request(MUTATIONS[action], {"id": vm_id})
             field = _MUTATION_FIELDS.get(action, action)
             if data.get("vm") and field in data["vm"]:
                 return {
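The VM lookup collapsed above matches on several identifier fields and, on a miss, reports what is actually available. A self-contained sketch — `find_vm` is a hypothetical name, `ValueError` replaces `ToolError`, and the uuid/id checks before the `name` comparison are assumed since they fall outside this hunk:

```python
from typing import Any


def find_vm(vm_id: str, vms: list[dict[str, Any]]) -> dict[str, Any]:
    # A VM can be addressed by uuid, id, or name (uuid/id assumed from context).
    for vm in vms:
        if vm.get("uuid") == vm_id or vm.get("id") == vm_id or vm.get("name") == vm_id:
            return dict(vm)
    # On a miss, list the known VMs so the caller can correct the identifier.
    available = [f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms]
    raise ValueError(f"VM '{vm_id}' not found. Available: {', '.join(available)}")
```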

uv.lock (generated)

@@ -422,6 +422,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/e2/c7/562ff39f25de27caec01e4c1e88cbb5fcae5160802ba3d90be33165df24f/fastmcp-2.12.4-py3-none-any.whl", hash = "sha256:56188fbbc1a9df58c537063f25958c57b5c4d715f73e395c41b51550b247d140", size = 329090, upload-time = "2025-09-26T16:43:25.314Z" },
 ]

+[[package]]
+name = "graphql-core"
+version = "3.2.7"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/ac/9b/037a640a2983b09aed4a823f9cf1729e6d780b0671f854efa4727a7affbe/graphql_core-3.2.7.tar.gz", hash = "sha256:27b6904bdd3b43f2a0556dad5d579bdfdeab1f38e8e8788e555bdcb586a6f62c", size = 513484, upload-time = "2025-11-01T22:30:40.436Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/0a/14/933037032608787fb92e365883ad6a741c235e0ff992865ec5d904a38f1e/graphql_core-3.2.7-py3-none-any.whl", hash = "sha256:17fc8f3ca4a42913d8e24d9ac9f08deddf0a0b2483076575757f6c412ead2ec0", size = 207262, upload-time = "2025-11-01T22:30:38.912Z" },
+]
+
 [[package]]
 name = "h11"
 version = "0.16.0"

@@ -1222,6 +1231,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/3f/51/d4db610ef29373b879047326cbf6fa98b6c1969d6f6dc423279de2b1be2c/requests_toolbelt-1.0.0-py2.py3-none-any.whl", hash = "sha256:cccfdd665f0a24fcf4726e690f65639d272bb0637b9b92dfd91a5568ccf6bd06", size = 54481, upload-time = "2023-05-01T04:11:28.427Z" },
 ]

+[[package]]
+name = "respx"
+version = "0.22.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "httpx" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/f4/7c/96bd0bc759cf009675ad1ee1f96535edcb11e9666b985717eb8c87192a95/respx-0.22.0.tar.gz", hash = "sha256:3c8924caa2a50bd71aefc07aa812f2466ff489f1848c96e954a5362d17095d91", size = 28439, upload-time = "2024-12-19T22:33:59.374Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/8e/67/afbb0978d5399bc9ea200f1d4489a23c9a1dad4eee6376242b8182389c79/respx-0.22.0-py2.py3-none-any.whl", hash = "sha256:631128d4c9aba15e56903fb5f66fb1eff412ce28dd387ca3a81339e52dbd3ad0", size = 25127, upload-time = "2024-12-19T22:33:57.837Z" },
+]
+
 [[package]]
 name = "rfc3339-validator"
 version = "0.1.4"

@@ -1524,9 +1545,11 @@ dependencies = [
 [package.dev-dependencies]
 dev = [
     { name = "build" },
+    { name = "graphql-core" },
     { name = "pytest" },
     { name = "pytest-asyncio" },
     { name = "pytest-cov" },
+    { name = "respx" },
     { name = "ruff" },
     { name = "twine" },
     { name = "ty" },

@@ -1548,9 +1571,11 @@ requires-dist = [
 [package.metadata.requires-dev]
 dev = [
     { name = "build", specifier = ">=1.2.2" },
+    { name = "graphql-core", specifier = ">=3.2.0" },
     { name = "pytest", specifier = ">=8.4.2" },
     { name = "pytest-asyncio", specifier = ">=1.2.0" },
     { name = "pytest-cov", specifier = ">=7.0.0" },
+    { name = "respx", specifier = ">=0.22.0" },
     { name = "ruff", specifier = ">=0.12.8" },
     { name = "twine", specifier = ">=6.0.1" },
     { name = "ty", specifier = ">=0.0.15" },