Mirror of https://github.com/jmagar/unraid-mcp.git, synced 2026-03-02 00:04:45 -08:00
fix: address 18 CRITICAL+HIGH PR review comments
**Critical Fixes (7 issues):**
- Fix GraphQL schema field names in users tool (role→roles, remove email)
- Fix GraphQL mutation signatures (addUserInput, deleteUser input)
- Fix dict(None) TypeError guards in users tool (use `or {}` pattern)
- Fix FastAPI version constraint (0.116.1→0.115.0)
- Fix WebSocket SSL context handling (support CA bundles, bool, and None)
- Fix critical disk threshold treated as warning (split counters)
**High Priority Fixes (11 issues):**
- Fix Docker update/remove action response field mapping
- Fix path traversal vulnerability in log validation (normalize paths)
- Fix deleteApiKeys validation (check response before success)
- Fix rclone create_remote validation (check response)
- Fix keys input_data type annotation (dict[str, Any])
- Fix VM domain/domains fallback restoration
**Changes by file:**
- unraid_mcp/tools/docker.py: Response field mapping
- unraid_mcp/tools/info.py: Split critical/warning counters
- unraid_mcp/tools/storage.py: Path normalization for traversal protection
- unraid_mcp/tools/users.py: GraphQL schema + null handling
- unraid_mcp/tools/keys.py: Validation + type annotations
- unraid_mcp/tools/rclone.py: Response validation
- unraid_mcp/tools/virtualization.py: Domain fallback
- unraid_mcp/subscriptions/manager.py: SSL context creation
- pyproject.toml: FastAPI version fix
- tests/*: New tests for all fixes
**Review threads resolved:**
PRRT_kwDOO6Hdxs5uu70L, PRRT_kwDOO6Hdxs5uu70O, PRRT_kwDOO6Hdxs5uu70V,
PRRT_kwDOO6Hdxs5uu70e, PRRT_kwDOO6Hdxs5uu70i, PRRT_kwDOO6Hdxs5uu7zn,
PRRT_kwDOO6Hdxs5uu7z_, PRRT_kwDOO6Hdxs5uu7sI, PRRT_kwDOO6Hdxs5uu7sJ,
PRRT_kwDOO6Hdxs5uu7sK, PRRT_kwDOO6Hdxs5uu7Tk, PRRT_kwDOO6Hdxs5uu7Tn,
PRRT_kwDOO6Hdxs5uu7Tr, PRRT_kwDOO6Hdxs5uu7Ts, PRRT_kwDOO6Hdxs5uu7Tu,
PRRT_kwDOO6Hdxs5uu7Tv, PRRT_kwDOO6Hdxs5uu7Tw, PRRT_kwDOO6Hdxs5uu7Tx
All tests passing.
Co-authored-by: docker-fixer <agent@pr-fixes>
Co-authored-by: info-fixer <agent@pr-fixes>
Co-authored-by: storage-fixer <agent@pr-fixes>
Co-authored-by: users-fixer <agent@pr-fixes>
Co-authored-by: config-fixer <agent@pr-fixes>
Co-authored-by: websocket-fixer <agent@pr-fixes>
Co-authored-by: keys-rclone-fixer <agent@pr-fixes>
Co-authored-by: vm-fixer <agent@pr-fixes>
.claude-plugin/README.md (new file, 70 lines)
@@ -0,0 +1,70 @@

# Unraid MCP Marketplace

This directory contains the Claude Code marketplace configuration for the Unraid MCP server and skills.

## Installation

### From GitHub (Recommended)

```bash
# Add the marketplace
/plugin marketplace add jmagar/unraid-mcp

# Install the Unraid skill
/plugin install unraid @unraid-mcp
```

### From Local Path (Development)

```bash
# Add local marketplace
/plugin marketplace add /path/to/unraid-mcp

# Install the plugin
/plugin install unraid @unraid-mcp
```

## Available Plugins

### unraid

Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring.

**Features:**
- 27 read-only API endpoints
- Real-time system metrics
- Disk health and temperature monitoring
- Docker container management
- VM status and control
- Log file access
- Network share information
- Notification management

**Version:** 1.1.0
**Category:** Infrastructure
**Tags:** unraid, monitoring, homelab, graphql, docker, virtualization

## Configuration

After installation, configure your Unraid server credentials:

```bash
export UNRAID_URL="https://your-unraid-server/graphql"
export UNRAID_API_KEY="your-api-key"
```

**Getting an API Key:**
1. Open Unraid WebUI
2. Go to Settings → Management Access → API Keys
3. Click "Create" and select "Viewer" role
4. Copy the generated API key

## Documentation

- **Plugin Documentation:** See `skills/unraid/README.md`
- **MCP Server Documentation:** See root `README.md`
- **API Reference:** See `skills/unraid/references/`

## Support

- **Issues:** https://github.com/jmagar/unraid-mcp/issues
- **Repository:** https://github.com/jmagar/unraid-mcp
.claude-plugin/marketplace.json (new file, 22 lines)
@@ -0,0 +1,22 @@

```json
{
  "name": "unraid-mcp",
  "description": "Comprehensive Unraid server management and monitoring tools via GraphQL API",
  "version": "1.0.0",
  "owner": {
    "name": "jmagar",
    "email": "jmagar@users.noreply.github.com",
    "url": "https://github.com/jmagar"
  },
  "homepage": "https://github.com/jmagar/unraid-mcp",
  "repository": "https://github.com/jmagar/unraid-mcp",
  "plugins": [
    {
      "name": "unraid",
      "source": "./skills/unraid",
      "description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring",
      "version": "1.1.0",
      "tags": ["unraid", "monitoring", "homelab", "graphql", "docker", "virtualization"],
      "category": "infrastructure"
    }
  ]
}
```
.gitignore (vendored, +3 lines)
@@ -37,6 +37,9 @@ logs/

```diff
 docs/plans/
 docs/sessions/

+# Test planning documents
+DESTRUCTIVE_ACTIONS.md
+
 # Google OAuth client secrets
 client_secret_*.apps.googleusercontent.com.json
```
.plan.md (new file, 544 lines)
@@ -0,0 +1,544 @@

# Implementation Plan: mcporter Integration Tests + Destructive Action Gating

**Date:** 2026-02-15
**Status:** Awaiting Approval
**Estimated Effort:** 8-12 hours

## Overview

Implement comprehensive integration testing using the mcporter CLI to validate all 86 tool actions (after removing 4 destructive array operations) against live Unraid servers, plus add environment variable gates for the remaining destructive actions to prevent accidental operations.

## Requirements

1. **Remove destructive array operations** - start, stop, shutdown, reboot should not be exposed via MCP
2. **Add per-tool environment variable gates** - `UNRAID_ALLOW_*_DESTRUCTIVE` flags for the remaining destructive actions
3. **Build mcporter test suite** - Real end-to-end testing of all 86 actions against live servers (tootie/shart)
4. **Document all actions** - Comprehensive action catalog with test specifications

## Architecture Changes

### 1. Settings Infrastructure (Pydantic-based)

**File:** `unraid_mcp/config/settings.py`

- Migrate from simple `os.getenv()` to Pydantic `BaseSettings`
- Add 6 destructive action gate flags (all default to False for safety):
  - `allow_docker_destructive` (docker remove)
  - `allow_vm_destructive` (vm force_stop, reset)
  - `allow_notifications_destructive` (delete, delete_archived)
  - `allow_rclone_destructive` (delete_remote)
  - `allow_users_destructive` (user delete)
  - `allow_keys_destructive` (key delete)
  - (`allow_array_destructive` is no longer needed once Task 1 removes the destructive array operations)
- Add `get_config_summary()` method showing gate status
- Maintain backwards compatibility via module-level exports

**Dependencies:** Add `pydantic-settings` to `pyproject.toml`

### 2. Tool Implementation Pattern

**Pattern for all tools with destructive actions** (`{tool}`/`{TOOL}` are substituted per tool):

```python
from ..config.settings import settings

# In tool function:
if action in DESTRUCTIVE_ACTIONS:
    # Check 1: Environment variable gate (first line of defense)
    if not settings.allow_{tool}_destructive:
        raise ToolError(
            f"Destructive {tool} action '{action}' is disabled. "
            f"Set UNRAID_ALLOW_{TOOL}_DESTRUCTIVE=true to enable. "
            f"This is a safety gate to prevent accidental operations."
        )

    # Check 2: Runtime confirmation (second line of defense)
    if not confirm:
        raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
```

**Tools requiring updates:**
- `unraid_mcp/tools/docker.py` (1 action: remove)
- `unraid_mcp/tools/virtualization.py` (2 actions: force_stop, reset)
- `unraid_mcp/tools/notifications.py` (2 actions: delete, delete_archived)
- `unraid_mcp/tools/rclone.py` (1 action: delete_remote)
- `unraid_mcp/tools/users.py` (1 action: delete)
- `unraid_mcp/tools/keys.py` (1 action: delete)

### 3. mcporter Integration Test Suite

**New Directory Structure:**

```
tests/integration/
├── helpers/
│   ├── mcporter.sh      # mcporter wrapper (call_tool, call_destructive, get_field)
│   ├── validation.sh    # Response validation (assert_fields, assert_equals, assert_success)
│   └── reporting.sh     # Test reporting (init_report, record_test, generate_summary)
├── tools/
│   ├── test_health.sh         # 3 actions
│   ├── test_info.sh           # 19 actions
│   ├── test_storage.sh        # 6 actions
│   ├── test_docker.sh         # 15 actions
│   ├── test_vm.sh             # 9 actions
│   ├── test_notifications.sh  # 9 actions
│   ├── test_rclone.sh         # 4 actions
│   ├── test_users.sh          # 8 actions
│   ├── test_keys.sh           # 5 actions
│   └── test_array.sh          # 8 actions (after removal)
├── run-all.sh    # Master test runner (parallel/sequential)
├── run-tool.sh   # Single tool runner
└── README.md     # Integration test documentation
```

**mcporter Configuration:** `config/mcporter.json`

```json
{
  "mcpServers": {
    "unraid-tootie": {
      "command": "uv",
      "args": ["run", "unraid-mcp-server"],
      "env": {
        "UNRAID_API_URL": "https://myunraid.net:31337/graphql",
        "UNRAID_API_KEY": "${UNRAID_TOOTIE_API_KEY}",
        "UNRAID_VERIFY_SSL": "false",
        "UNRAID_MCP_TRANSPORT": "stdio"
      },
      "cwd": "/home/jmagar/workspace/unraid-mcp"
    },
    "unraid-shart": {
      "command": "uv",
      "args": ["run", "unraid-mcp-server"],
      "env": {
        "UNRAID_API_URL": "http://100.118.209.1/graphql",
        "UNRAID_API_KEY": "${UNRAID_SHART_API_KEY}",
        "UNRAID_VERIFY_SSL": "false",
        "UNRAID_MCP_TRANSPORT": "stdio"
      },
      "cwd": "/home/jmagar/workspace/unraid-mcp"
    }
  }
}
```

## Implementation Tasks

### Task 1: Remove Destructive Array Operations

**Files:**
- `unraid_mcp/tools/array.py`
- `tests/test_array.py`

**Changes:**
1. Remove from `MUTATIONS` dict:
   - `start` (lines 24-28)
   - `stop` (lines 29-33)
   - `shutdown` (lines 69-73)
   - `reboot` (lines 74-78)
2. Remove from `DESTRUCTIVE_ACTIONS` set (line 81) - the set becomes empty, i.e. `set()` (note that the literal `{}` would be an empty dict, not an empty set)
3. Remove from `ARRAY_ACTIONS` Literal type (lines 85-86)
4. Update the docstring, removing these 4 actions (lines 105-106, 115-116)
5. Remove tests for these actions in `tests/test_array.py`

**Acceptance:**
- ✅ Array tool has 8 actions (down from 12)
- ✅ `DESTRUCTIVE_ACTIONS` is an empty set
- ✅ Tests pass for remaining actions
- ✅ Removed mutations are not callable

### Task 2: Add Pydantic Settings with Destructive Gates

**Files:**
- `unraid_mcp/config/settings.py`
- `pyproject.toml`
- `.env.example`

**Changes:**

1. **Add dependency:** `pydantic-settings>=2.12` in `pyproject.toml` dependencies

2. **Update settings.py:**
   - Import `BaseSettings` from `pydantic_settings`
   - Create `UnraidSettings` class with all config fields
   - Add 6 destructive gate fields (all default to False):
     - `allow_docker_destructive: bool = Field(default=False, ...)`
     - `allow_vm_destructive: bool = Field(default=False, ...)`
     - `allow_notifications_destructive: bool = Field(default=False, ...)`
     - `allow_rclone_destructive: bool = Field(default=False, ...)`
     - `allow_users_destructive: bool = Field(default=False, ...)`
     - `allow_keys_destructive: bool = Field(default=False, ...)`
   - Add `get_config_summary()` method including gate status
   - Instantiate global `settings = UnraidSettings()`
   - Keep backwards compatibility exports

3. **Update .env.example:** Add a section documenting all destructive gates

**Acceptance:**
- ✅ `settings` instance loads successfully
- ✅ All gate fields default to False
- ✅ `get_config_summary()` shows gate status
- ✅ Backwards compatibility maintained (existing code still works)
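The `.env.example` section might look like the following; the variable names follow the `UNRAID_ALLOW_{TOOL}_DESTRUCTIVE` convention used throughout this plan, while the comment text is illustrative:

```
# --- Destructive action gates (all default to false; enable deliberately) ---
# A gate must be true AND the call must pass confirm=True to execute.

# docker remove
UNRAID_ALLOW_DOCKER_DESTRUCTIVE=false
# vm force_stop, reset
UNRAID_ALLOW_VM_DESTRUCTIVE=false
# notifications delete, delete_archived
UNRAID_ALLOW_NOTIFICATIONS_DESTRUCTIVE=false
# rclone delete_remote
UNRAID_ALLOW_RCLONE_DESTRUCTIVE=false
# user delete
UNRAID_ALLOW_USERS_DESTRUCTIVE=false
# key delete
UNRAID_ALLOW_KEYS_DESTRUCTIVE=false
```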

### Task 3: Update Tools with Environment Variable Gates

**Files to update:**
- `unraid_mcp/tools/docker.py`
- `unraid_mcp/tools/virtualization.py`
- `unraid_mcp/tools/notifications.py`
- `unraid_mcp/tools/rclone.py`
- `unraid_mcp/tools/users.py`
- `unraid_mcp/tools/keys.py`

**Pattern for each tool:**

1. Add import: `from ..config.settings import settings`
2. Add the gate check before the confirm check in the destructive action handler:

```python
if action in DESTRUCTIVE_ACTIONS:
    if not settings.allow_{tool}_destructive:
        raise ToolError(
            f"Destructive {tool} action '{action}' is disabled. "
            f"Set UNRAID_ALLOW_{TOOL}_DESTRUCTIVE=true to enable."
        )
    if not confirm:
        raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
```

3. Update the tool docstring to document the security requirements

**Acceptance (per tool):**
- ✅ Destructive action fails with a clear error when the env var is not set
- ✅ Destructive action still requires confirm=True when the env var is set
- ✅ Both checks must pass for execution
- ✅ Error messages guide the user to the correct env var

### Task 4: Update Test Suite with Settings Mocking

**Files:**
- `tests/conftest.py`
- `tests/test_docker.py`
- `tests/test_vm.py`
- `tests/test_notifications.py`
- `tests/test_rclone.py`
- `tests/test_users.py`
- `tests/test_keys.py`

**Changes:**

1. **Add fixtures to conftest.py:**

```python
@pytest.fixture
def mock_settings():
    ...  # All gates disabled

@pytest.fixture
def mock_settings_all_enabled(mock_settings):
    ...  # All gates enabled
```

2. **Update each test file:**
   - Add `mock_settings` parameter to fixtures
   - Wrap tool calls with `with patch("unraid_mcp.tools.{tool}.settings", mock_settings):`
   - Add 3 destructive action tests:
     - Test gate check (env var not set, confirm=True → fails)
     - Test confirm check (env var set, confirm=False → fails)
     - Test success (env var set, confirm=True → succeeds)

**Acceptance:**
- ✅ All 150 existing tests pass
- ✅ New gate tests cover all destructive actions
- ✅ Tests verify correct error messages
- ✅ Tests use mocked settings (don't rely on actual env vars)

### Task 5: Create mcporter Configuration

**Files:**
- `config/mcporter.json` (new)
- `tests/integration/README.md` (new)

**Changes:**

1. Create `config/mcporter.json` with the tootie and shart server configs
2. Document how to use mcporter with the server in the README
3. Include instructions for loading credentials from `~/workspace/homelab/.env`

**Acceptance:**
- ✅ `mcporter list unraid-tootie` shows all tools
- ✅ `mcporter call unraid-tootie.unraid_health action=test_connection` succeeds
- ✅ Configuration works for both servers

### Task 6: Build mcporter Helper Libraries

**Files to create:**
- `tests/integration/helpers/mcporter.sh`
- `tests/integration/helpers/validation.sh`
- `tests/integration/helpers/reporting.sh`

**Functions to implement:**

**mcporter.sh:**
- `call_tool <tool> <action> [params...]` - Call a tool via mcporter, return JSON
- `call_destructive <tool> <action> <env_var> [params...]` - Safe destructive call
- `get_field <json> <jq_path>` - Extract a field from JSON
- `is_success <json>` - Check if a response indicates success
- `get_error <json>` - Extract the error message

**validation.sh:**
- `assert_fields <json> <field>...` - Verify required fields exist
- `assert_equals <json> <field> <expected>` - Field value equality
- `assert_matches <json> <field> <pattern>` - Field matches regex
- `assert_success <json>` - Response indicates success
- `assert_failure <json> [pattern]` - Response indicates failure (negative test)

**reporting.sh:**
- `init_report <tool>` - Initialize JSON report file
- `record_test <report> <action> <status> [error]` - Record a test result
- `generate_summary` - Generate console summary from all reports

**Acceptance:**
- ✅ Helper functions work correctly
- ✅ Error handling is robust
- ✅ Functions are reusable across all tool tests
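A sketch of how `helpers/mcporter.sh` could implement these functions, assuming `mcporter` and `jq` are on PATH (both are listed under Dependencies); the success convention of "no top-level `error` field" is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of tests/integration/helpers/mcporter.sh; function
# names follow the plan, the success/error JSON shape is an assumption.

MCPORTER_SERVER="${MCPORTER_SERVER:-unraid-tootie}"

# call_tool <tool> <action> [params...] - invoke a tool action, emit JSON on stdout.
call_tool() {
  local tool="$1" action="$2"; shift 2
  mcporter call "${MCPORTER_SERVER}.${tool}" "action=${action}" "$@"
}

# call_destructive <tool> <action> <env_var> [params...] - skip unless the gate is set.
call_destructive() {
  local tool="$1" action="$2" env_var="$3"; shift 3
  if [ "${!env_var:-false}" != "true" ]; then
    echo "SKIP: ${env_var} not set, skipping destructive ${tool}.${action}" >&2
    return 2
  fi
  call_tool "$tool" "$action" "confirm=true" "$@"
}

# get_field <json> <jq_path> - extract a field value.
get_field() { echo "$1" | jq -r "$2"; }

# is_success <json> - success when no top-level "error" field is present.
is_success() { echo "$1" | jq -e 'has("error") | not' >/dev/null; }

# get_error <json> - extract the error message, empty string if none.
get_error() { echo "$1" | jq -r '.error // empty'; }
```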

### Task 7: Implement Tool Test Scripts

**Files to create:**
- `tests/integration/tools/test_health.sh` (3 actions)
- `tests/integration/tools/test_info.sh` (19 actions)
- `tests/integration/tools/test_storage.sh` (6 actions)
- `tests/integration/tools/test_docker.sh` (15 actions)
- `tests/integration/tools/test_vm.sh` (9 actions)
- `tests/integration/tools/test_notifications.sh` (9 actions)
- `tests/integration/tools/test_rclone.sh` (4 actions)
- `tests/integration/tools/test_users.sh` (8 actions)
- `tests/integration/tools/test_keys.sh` (5 actions)
- `tests/integration/tools/test_array.sh` (8 actions)

**Per-script implementation:**

1. Source the helper libraries
2. Initialize the report
3. Implement test functions for each action:
   - Basic functionality test
   - Response structure validation
   - Parameter validation
   - Destructive action gate tests (if applicable)
4. Run all tests and record results
5. Return an exit code based on failures

**Priority order (implement in this sequence):**
1. `test_health.sh` - Simplest (3 actions, no destructive)
2. `test_info.sh` - Large but straightforward (19 query actions)
3. `test_storage.sh` - Moderate (6 query actions)
4. `test_docker.sh` - Complex (15 actions, 1 destructive)
5. `test_vm.sh` - Complex (9 actions, 2 destructive)
6. `test_notifications.sh` - Moderate (9 actions, 2 destructive)
7. `test_rclone.sh` - Simple (4 actions, 1 destructive)
8. `test_users.sh` - Moderate (8 actions, 1 destructive)
9. `test_keys.sh` - Simple (5 actions, 1 destructive)
10. `test_array.sh` - Moderate (8 actions, no destructive after removal)

**Acceptance:**
- ✅ Each script tests all actions for its tool
- ✅ Tests validate response structure
- ✅ Destructive action gates are tested
- ✅ Scripts generate JSON reports
- ✅ Exit code indicates success/failure
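The per-script steps above might take a shape like the following skeleton; it assumes `call_tool` from the helpers is already sourced, and the `run_test` bookkeeping and action name are illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical skeleton for tests/integration/tools/test_health.sh.
# Assumes call_tool from helpers/mcporter.sh is sourced beforehand;
# the tally logic and action name are illustrative.

PASS=0
FAIL=0

# run_test <name> <fn> - run one test function and tally the result.
run_test() {
  local name="$1" fn="$2"
  if "$fn"; then
    PASS=$((PASS + 1)); echo "PASS: $name"
  else
    FAIL=$((FAIL + 1)); echo "FAIL: $name"
  fi
}

# One function per action: call the tool, then validate the response.
test_connection() {
  local json
  json="$(call_tool unraid_health test_connection)" || return 1
  [ -n "$json" ]
}

main() {
  run_test "health.test_connection" test_connection
  echo "passed=$PASS failed=$FAIL"
  [ "$FAIL" -eq 0 ]
}
```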

### Task 8: Build Test Runners

**Files to create:**
- `tests/integration/run-all.sh`
- `tests/integration/run-tool.sh`

**run-all.sh features:**
- Load credentials from `~/workspace/homelab/.env`
- Support sequential and parallel execution modes
- Run all 10 tool test scripts
- Generate a summary report
- Return an exit code based on any failures

**run-tool.sh features:**
- Accept a tool name as argument
- Load credentials
- Execute the single tool test script
- Pass through its exit code

**Acceptance:**
- ✅ `run-all.sh` executes all tool tests
- ✅ Parallel mode works correctly (no race conditions)
- ✅ Summary report shows pass/fail/skip counts
- ✅ `run-tool.sh health` runs only health tests
- ✅ Exit codes are correct
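A sketch of the run-all.sh logic, with sequential and parallel modes; the `PARALLEL` flag, the `xargs` approach, and the worker count are assumptions, only the credentials path and directory layout come from this plan:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of tests/integration/run-all.sh logic; the PARALLEL
# flag and xargs fan-out are assumptions, not the project's actual script.

# load_credentials - source the homelab .env so API keys reach the servers.
load_credentials() {
  if [ -f "$HOME/workspace/homelab/.env" ]; then
    set -a; . "$HOME/workspace/homelab/.env"; set +a
  fi
}

# run_all <dir> - run every tools/test_*.sh under <dir>; fail on any failure.
run_all() {
  local dir="$1" failures=0 script
  if [ "${PARALLEL:-false}" = "true" ]; then
    # Parallel mode: one bash process per script, up to 4 at a time.
    printf '%s\0' "$dir"/tools/test_*.sh | xargs -0 -n1 -P4 bash || failures=1
  else
    for script in "$dir"/tools/test_*.sh; do
      bash "$script" || failures=$((failures + 1))
    done
  fi
  echo "tool scripts with failures: $failures"
  [ "$failures" -eq 0 ]
}
```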

### Task 9: Document Action Catalog

**File to create:**
- `docs/testing/action-catalog.md`

**Content:**
- Table of all 86 actions across the 10 tools
- For each action:
  - Tool name
  - Action name
  - Type (query/mutation/compound)
  - Required parameters
  - Optional parameters
  - Destructive? (yes/no + env var if yes)
  - Expected response structure
  - Example mcporter call
  - Validation criteria

**Acceptance:**
- ✅ All 86 actions documented
- ✅ Specifications are detailed and accurate
- ✅ Examples are runnable
- ✅ Becomes the source of truth for test implementation

### Task 10: Integration Documentation

**Files to create/update:**
- `tests/integration/README.md`
- `docs/testing/integration-tests.md`
- `docs/testing/test-environments.md`
- `README.md` (add integration test section)

**Content:**
- How to run integration tests
- How to configure mcporter
- Server setup (tootie/shart)
- Environment variable gates
- Destructive action testing
- CI/CD integration
- Troubleshooting

**Acceptance:**
- ✅ Clear setup instructions
- ✅ Examples for common use cases
- ✅ Integration with existing pytest docs
- ✅ CI/CD pipeline documented

## Testing Strategy

### Unit Tests (pytest - existing)
- **150 tests** across 10 tool modules
- Mock GraphQL responses
- Fast, isolated, offline
- Cover edge cases and error paths

### Integration Tests (mcporter - new)
- **86 tests** (one per action)
- Real Unraid server calls
- Slow, dependent, online
- Validate actual API behavior

### Test Matrix

| Tool | Actions | pytest Tests | mcporter Tests | Destructive |
|------|---------|--------------|----------------|-------------|
| health | 3 | 10 | 3 | 0 |
| info | 19 | 98 | 19 | 0 |
| storage | 6 | 11 | 6 | 0 |
| docker | 15 | 28 | 15 | 1 |
| vm | 9 | 25 | 9 | 2 |
| notifications | 9 | 7 | 9 | 2 |
| rclone | 4 | (pending) | 4 | 1 |
| users | 8 | (pending) | 8 | 1 |
| keys | 5 | (pending) | 5 | 1 |
| array | 8 | 26 | 8 | 0 |
| **TOTAL** | **86** | **~150** | **86** | **8** |

## Validation Checklist

### Code Changes
- [ ] Array tool has 8 actions (removed start/stop/shutdown/reboot)
- [ ] Settings class with 6 destructive gate flags
- [ ] All 6 tools updated with environment variable gates
- [ ] All 6 tool test suites updated with gate test cases
- [ ] All 150 existing pytest tests pass
- [ ] `pydantic-settings` added to dependencies
- [ ] `.env.example` updated with gate documentation

### Integration Tests
- [ ] mcporter configuration works for both servers
- [ ] All 3 helper libraries implemented
- [ ] All 10 tool test scripts implemented
- [ ] Test runners (run-all, run-tool) work correctly
- [ ] All 86 actions have test coverage
- [ ] Destructive action gates are tested
- [ ] Reports generate correctly

### Documentation
- [ ] Action catalog documents all 86 actions
- [ ] Integration test README is clear
- [ ] Environment setup documented
- [ ] CI/CD integration documented
- [ ] Project README updated

## Success Criteria

1. **Safety:** Destructive actions require both the env var AND confirm=True
2. **Coverage:** All 86 actions have integration tests
3. **Quality:** Clear error messages guide users to the correct env vars
4. **Automation:** Test suite runs via a single command
5. **Documentation:** Complete action catalog and testing guide

## Risks & Mitigations

### Risk: Breaking existing deployments
**Impact:** HIGH - users suddenly can't execute destructive actions
**Mitigation:**
- Clear error messages with the exact env var to set
- Document the migration in release notes
- Default to disabled (safe) but guide users to enable

### Risk: Integration tests are flaky
**Impact:** MEDIUM - CI/CD unreliable
**Mitigation:**
- Test against stable servers (tootie/shart)
- Implement retry logic for network errors
- Skip destructive tests if env vars are not set (record as skips, not failures)

### Risk: mcporter configuration complexity
**Impact:** LOW - difficult for contributors to run tests
**Mitigation:**
- Clear setup documentation
- Example .env template
- Helper script to validate setup

## Dependencies

- `pydantic-settings>=2.12` (Python package)
- `mcporter` (npm package - user must install)
- `jq` (system package for JSON parsing in bash)
- Access to the tootie/shart servers (for integration tests)
- Credentials in `~/workspace/homelab/.env`

## Timeline Estimate

| Task | Estimated Time |
|------|---------------|
| 1. Remove array ops | 30 min |
| 2. Add settings infrastructure | 1 hour |
| 3. Update tools with gates | 2 hours |
| 4. Update test suite | 2 hours |
| 5. mcporter config | 30 min |
| 6. Helper libraries | 1.5 hours |
| 7. Tool test scripts | 4 hours |
| 8. Test runners | 1 hour |
| 9. Action catalog | 2 hours |
| 10. Documentation | 1.5 hours |
| **Total** | **~12 hours** |

## Notes

- Integration tests complement (not replace) the existing pytest suite
- Tests validate actual Unraid API behavior, not just our code
- Environment variable gates provide defense-in-depth security
- mcporter enables real-world validation impossible with mocked tests
- The action catalog becomes living documentation for all tools

---

**Plan Status:** Awaiting user approval
**Next Step:** Review the plan, make adjustments, then execute via the task list
|
||||||
203
MARKETPLACE.md
Normal file
203
MARKETPLACE.md
Normal file
@@ -0,0 +1,203 @@
|
|||||||
|
# Claude Code Marketplace Setup
|
||||||
|
|
||||||
|
This document explains the Claude Code marketplace and plugin structure for the Unraid MCP project.
|
||||||
|
|
||||||
|
## What Was Created
|
||||||
|
|
||||||
|
### 1. Marketplace Manifest (`.claude-plugin/marketplace.json`)
|
||||||
|
The marketplace catalog that lists all available plugins in this repository.
|
||||||
|
|
||||||
|
**Location:** `.claude-plugin/marketplace.json`
|
||||||
|
|
||||||
|
**Contents:**
|
||||||
|
- Marketplace metadata (name, version, owner, repository)
|
||||||
|
- Plugin catalog with the "unraid" skill
|
||||||
|
- Categories and tags for discoverability
|
||||||
|
|
||||||
|
### 2. Plugin Manifest (`skills/unraid/.claude-plugin/plugin.json`)
|
||||||
|
The individual plugin configuration for the Unraid skill.
|
||||||
|
|
||||||
|
**Location:** `skills/unraid/.claude-plugin/plugin.json`
|
||||||
|
|
||||||
|
**Contents:**
|
||||||
|
- Plugin name, version, author
|
||||||
|
- Repository and homepage links
|
||||||
|
- Plugin-specific metadata
|
||||||
|
|
||||||
|
### 3. Documentation
|
||||||
|
- `.claude-plugin/README.md` - Marketplace installation guide
|
||||||
|
- Updated root `README.md` with plugin installation section
|
||||||
|
|
||||||
|
### 4. Validation Script
|
||||||
|
- `scripts/validate-marketplace.sh` - Automated validation of marketplace structure
|
||||||
|
|
||||||

## Installation Methods

### Method 1: GitHub Distribution (Recommended for Users)

Once you push this to GitHub, users can install via:

```bash
# Add your marketplace
/plugin marketplace add jmagar/unraid-mcp

# Install the Unraid skill
/plugin install unraid@unraid-mcp
```

### Method 2: Local Installation (Development)

For testing locally before publishing:

```bash
# Add local marketplace
/plugin marketplace add /home/jmagar/workspace/unraid-mcp

# Install the plugin
/plugin install unraid@unraid-mcp
```

### Method 3: Direct Ref

Users can also install from a specific commit or branch:

```bash
# From specific branch
/plugin marketplace add jmagar/unraid-mcp#main

# From specific commit
/plugin marketplace add jmagar/unraid-mcp#abc123
```
## Plugin Structure

```
unraid-mcp/
├── .claude-plugin/                  # Marketplace manifest
│   ├── marketplace.json
│   └── README.md
├── skills/unraid/                   # Plugin directory
│   ├── .claude-plugin/              # Plugin manifest
│   │   └── plugin.json
│   ├── SKILL.md                     # Skill documentation
│   ├── README.md                    # Plugin documentation
│   ├── examples/                    # Example scripts
│   ├── scripts/                     # Helper scripts
│   └── references/                  # API reference docs
└── scripts/
    └── validate-marketplace.sh      # Validation tool
```
## Marketplace Metadata

### Categories

- `infrastructure` - Server management and monitoring tools

### Tags

- `unraid` - Unraid-specific functionality
- `monitoring` - System monitoring capabilities
- `homelab` - Homelab automation
- `graphql` - GraphQL API integration
- `docker` - Docker container management
- `virtualization` - VM management
## Publishing Checklist

Before publishing to GitHub:

1. **Validate Structure**

   ```bash
   ./scripts/validate-marketplace.sh
   ```

2. **Update Version Numbers**
   - Bump version in `.claude-plugin/marketplace.json`
   - Bump version in `skills/unraid/.claude-plugin/plugin.json`
   - Update version in `README.md` if needed

3. **Test Locally**

   ```bash
   /plugin marketplace add .
   /plugin install unraid@unraid-mcp
   ```

4. **Commit and Push**

   ```bash
   git add .claude-plugin/ skills/unraid/.claude-plugin/
   git commit -m "feat: add Claude Code marketplace configuration"
   git push origin main
   ```

5. **Create Release Tag** (Optional)

   ```bash
   git tag -a v1.0.0 -m "Release v1.0.0"
   git push origin v1.0.0
   ```
## User Experience

After installation, users will:

1. **See the skill in their skill list**

   ```bash
   /skill list
   ```

2. **Access Unraid functionality directly**
   - Claude Code will automatically detect when to invoke the skill
   - Users can explicitly invoke with `/unraid`

3. **Have access to all helper scripts**
   - Example scripts in `examples/`
   - Utility scripts in `scripts/`
   - API reference in `references/`
## Maintenance

### Updating the Plugin

To release a new version:

1. Make changes to the plugin
2. Update version in `skills/unraid/.claude-plugin/plugin.json`
3. Update marketplace catalog in `.claude-plugin/marketplace.json`
4. Run validation: `./scripts/validate-marketplace.sh`
5. Commit and push

Users with the plugin installed will see the update available and can upgrade with:

```bash
/plugin update unraid
```
### Adding More Plugins

To add additional plugins to this marketplace:

1. Create new plugin directory: `skills/new-plugin/`
2. Add plugin manifest: `skills/new-plugin/.claude-plugin/plugin.json`
3. Update marketplace catalog: add an entry to the `.plugins[]` array in `.claude-plugin/marketplace.json`
4. Validate: `./scripts/validate-marketplace.sh`
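As a sketch of step 3 (field names assumed from the `jq` queries in `scripts/validate-marketplace.sh`, which select plugins by `.name` and read `.source`; the values here are hypothetical), the new catalog entry would look something like:

```json
{
  "name": "new-plugin",
  "source": "skills/new-plugin",
  "description": "One-line description of the new plugin"
}
```

The `source` path must exist on disk, since the last validation check runs `test -d` against it.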

## Support

- **Repository:** https://github.com/jmagar/unraid-mcp
- **Issues:** https://github.com/jmagar/unraid-mcp/issues
- **Documentation:** See `.claude-plugin/README.md` and `skills/unraid/README.md`

## Validation

Run the validation script anytime to ensure marketplace integrity:

```bash
./scripts/validate-marketplace.sh
```

This checks:
- Manifest file existence and validity
- JSON syntax
- Required fields
- Plugin structure
- Source path accuracy
- Documentation completeness

All 17 checks must pass before publishing.
23 README.md
@@ -21,6 +21,7 @@
 ## 📋 Table of Contents

+- [Claude Code Plugin](#-claude-code-plugin)
 - [Quick Start](#-quick-start)
 - [Installation](#-installation)
 - [Configuration](#-configuration)
@@ -31,6 +32,28 @@
 ---

+## 🎯 Claude Code Plugin
+
+**The easiest way to use Unraid MCP is through the Claude Code marketplace:**
+
+```bash
+# Add the marketplace
+/plugin marketplace add jmagar/unraid-mcp
+
+# Install the Unraid skill
+/plugin install unraid@unraid-mcp
+```
+
+This provides instant access to Unraid monitoring and management through Claude Code with:
+- 27 GraphQL API endpoints
+- Real-time system metrics
+- Disk health monitoring
+- Docker and VM management
+
+**See [.claude-plugin/README.md](.claude-plugin/README.md) for detailed plugin documentation.**
+
+---
+
 ## 🚀 Quick Start

 ### Prerequisites
556 dev.sh
@@ -1,556 +0,0 @@
#!/bin/bash

# Unraid MCP Server Development Script
# Safely manages server processes during development with accurate process detection

set -euo pipefail

# Configuration
DEFAULT_PORT=6970
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LOG_DIR="/tmp"
LOG_FILE="$LOG_DIR/unraid-mcp.log"
PID_FILE="$LOG_DIR/dev.pid"
# Ensure logs directory exists
mkdir -p "$LOG_DIR"

# All colors are now handled by Rich logging system

# Helper function for unified Rich logging
log() {
    local message="$1"
    local level="${2:-info}"
    local indent="${3:-0}"
    local file_timestamp="$(date +'%Y-%m-%d %H:%M:%S')"

    # Use unified Rich logger for beautiful console output - escape single quotes
    local escaped_message="${message//\'/\'\"\'\"\'}"
    uv run python -c "from unraid_mcp.config.logging import log_with_level_and_indent; log_with_level_and_indent('$escaped_message', '$level', $indent)"

    # File output without color
    printf "[%s] %s\n" "$file_timestamp" "$message" >> "$LOG_FILE"
}

# Convenience functions for different log levels
log_error() { log "$1" "error" "${2:-0}"; }
log_warning() { log "$1" "warning" "${2:-0}"; }
log_success() { log "$1" "success" "${2:-0}"; }
log_info() { log "$1" "info" "${2:-0}"; }
log_status() { log "$1" "status" "${2:-0}"; }

# Rich header function
log_header() {
    uv run python -c "from unraid_mcp.config.logging import log_header; log_header('$1')"
}

# Rich separator function
log_separator() {
    uv run python -c "from unraid_mcp.config.logging import log_separator; log_separator()"
}

# Get port from environment or use default
get_port() {
    local port="${UNRAID_MCP_PORT:-$DEFAULT_PORT}"
    echo "$port"
}

# Write PID to file
write_pid_file() {
    local pid=$1
    echo "$pid" > "$PID_FILE"
}

# Read PID from file
read_pid_file() {
    if [[ -f "$PID_FILE" ]]; then
        cat "$PID_FILE" 2>/dev/null
    fi
}

# Check if PID file contains valid running process
is_pid_valid() {
    local pid=$1
    [[ -n "$pid" ]] && [[ "$pid" =~ ^[0-9]+$ ]] && kill -0 "$pid" 2>/dev/null
}

# Clean up PID file
cleanup_pid_file() {
    if [[ -f "$PID_FILE" ]]; then
        rm -f "$PID_FILE"
        log_info "🗑️ Cleaned up PID file"
    fi
}

# Get PID from PID file if valid, otherwise return empty
get_valid_pid_from_file() {
    local pid=$(read_pid_file)
    if is_pid_valid "$pid"; then
        echo "$pid"
    else
        # Clean up stale PID file
        [[ -f "$PID_FILE" ]] && cleanup_pid_file
        echo ""
    fi
}

# Find processes using multiple detection methods
find_server_processes() {
    local port=$(get_port)
    local pids=()

    # Method 0: Check PID file first (most reliable)
    local pid_from_file=$(get_valid_pid_from_file)
    if [[ -n "$pid_from_file" ]]; then
        log_status "🔍 Found server PID from file: $pid_from_file"
        pids+=("$pid_from_file")
    fi

    # Method 1: Command line pattern matching (fallback)
    while IFS= read -r line; do
        if [[ -n "$line" ]]; then
            local pid=$(echo "$line" | awk '{print $2}')
            # Add to pids if not already present
            if [[ ! " ${pids[@]} " =~ " $pid " ]]; then
                pids+=("$pid")
            fi
        fi
    done < <(ps aux | grep -E 'python.*unraid.*mcp|python.*main\.py|uv run.*main\.py|uv run -m unraid_mcp' | grep -v grep | grep -v "$0")

    # Method 2: Port binding verification (fallback)
    if command -v lsof >/dev/null 2>&1; then
        while IFS= read -r line; do
            if [[ -n "$line" ]]; then
                local pid=$(echo "$line" | awk '{print $2}')
                # Add to pids if not already present
                if [[ ! " ${pids[@]} " =~ " $pid " ]]; then
                    pids+=("$pid")
                fi
            fi
        done < <(lsof -i ":$port" 2>/dev/null | grep LISTEN || true)
    fi

    # Method 3: Working directory verification for fallback methods
    local verified_pids=()
    for pid in "${pids[@]}"; do
        # Skip if not a valid PID
        if ! [[ "$pid" =~ ^[0-9]+$ ]]; then
            continue
        fi

        # If this PID came from the PID file, it's already verified
        if [[ "$pid" == "$pid_from_file" ]]; then
            verified_pids+=("$pid")
            continue
        fi

        # Verify other PIDs by working directory
        if [[ -d "/proc/$pid" ]]; then
            local pwd_info=""
            if command -v pwdx >/dev/null 2>&1; then
                pwd_info=$(pwdx "$pid" 2>/dev/null | cut -d' ' -f2- || echo "unknown")
            else
                pwd_info=$(readlink -f "/proc/$pid/cwd" 2>/dev/null || echo "unknown")
            fi

            # Verify it's running from our project directory or a parent directory
            if [[ "$pwd_info" == "$PROJECT_DIR"* ]] || [[ "$pwd_info" == *"unraid-mcp"* ]]; then
                verified_pids+=("$pid")
            fi
        fi
    done

    # Output final list
    printf '%s\n' "${verified_pids[@]}" | grep -E '^[0-9]+$' || true
}

# Terminate a process gracefully, then forcefully if needed
terminate_process() {
    local pid=$1
    local name=${2:-"process"}

    if ! kill -0 "$pid" 2>/dev/null; then
        log_warning "⚠️ Process $pid ($name) already terminated"
        return 0
    fi

    log_warning "🔄 Terminating $name (PID: $pid)..."

    # Step 1: Graceful shutdown (SIGTERM)
    log_info "→ Sending SIGTERM to PID $pid" 1
    kill -TERM "$pid" 2>/dev/null || {
        log_warning "⚠️ Failed to send SIGTERM (process may have died)" 2
        return 0
    }

    # Step 2: Wait for graceful shutdown (5 seconds)
    local count=0
    while [[ $count -lt 5 ]]; do
        if ! kill -0 "$pid" 2>/dev/null; then
            log_success "✅ Process $pid terminated gracefully" 1

            # Clean up PID file if this was our server process
            local pid_from_file=$(read_pid_file)
            if [[ "$pid" == "$pid_from_file" ]]; then
                cleanup_pid_file
            fi

            return 0
        fi
        sleep 1
        ((count++))
        log_info "⏳ Waiting for graceful shutdown... (${count}/5)" 2
    done

    # Step 3: Force kill (SIGKILL)
    log_error "⚡ Graceful shutdown timeout, sending SIGKILL to PID $pid" 1
    kill -KILL "$pid" 2>/dev/null || {
        log_warning "⚠️ Failed to send SIGKILL (process may have died)" 2
        return 0
    }

    # Step 4: Final verification
    sleep 1
    if kill -0 "$pid" 2>/dev/null; then
        log_error "❌ Failed to terminate process $pid" 1
        return 1
    else
        log_success "✅ Process $pid terminated forcefully" 1

        # Clean up PID file if this was our server process
        local pid_from_file=$(read_pid_file)
        if [[ "$pid" == "$pid_from_file" ]]; then
            cleanup_pid_file
        fi

        return 0
    fi
}

# Stop all server processes
stop_servers() {
    log_header "Server Shutdown"
    log_error "🛑 Stopping existing server processes..."

    local pids=($(find_server_processes))

    if [[ ${#pids[@]} -eq 0 ]]; then
        log_success "✅ No processes to stop"
        return 0
    fi

    local failed=0
    for pid in "${pids[@]}"; do
        if ! terminate_process "$pid" "Unraid MCP Server"; then
            ((failed++))
        fi
    done

    # Wait for ports to be released
    local port=$(get_port)
    log_info "⏳ Waiting for port $port to be released..."
    local port_wait=0
    while [[ $port_wait -lt 3 ]]; do
        if ! lsof -i ":$port" >/dev/null 2>&1; then
            log_success "✅ Port $port released" 1
            break
        fi
        sleep 1
        ((port_wait++))
    done

    if [[ $failed -gt 0 ]]; then
        log_error "⚠️ Failed to stop $failed process(es)"
        return 1
    else
        log_success "✅ All processes stopped successfully"
        return 0
    fi
}

# Start the new modular server
start_modular_server() {
    log_header "Modular Server Startup"
    log_success "🚀 Starting modular server..."

    cd "$PROJECT_DIR"

    # Check if main.py exists in unraid_mcp/
    if [[ ! -f "unraid_mcp/main.py" ]]; then
        log_error "❌ unraid_mcp/main.py not found. Make sure modular server is implemented."
        return 1
    fi

    # Clear the log file and add a startup marker to capture fresh logs
    echo "=== Server Starting at $(date) ===" > "$LOG_FILE"

    # Start server in background using module syntax
    log_info "→ Executing: uv run -m unraid_mcp.main" 1
    # Start server in new process group to isolate it from parent signals
    setsid nohup uv run -m unraid_mcp.main >> "$LOG_FILE" 2>&1 &
    local pid=$!

    # Write PID to file
    write_pid_file "$pid"
    log_info "📝 Written PID $pid to file: $PID_FILE" 1

    # Give it a moment to start and write some logs
    sleep 3

    # Check if it's still running
    if kill -0 "$pid" 2>/dev/null; then
        local port=$(get_port)
        log_success "✅ Modular server started successfully (PID: $pid, Port: $port)"
        log_info "📋 Process info: $(ps -p "$pid" -o pid,ppid,cmd --no-headers 2>/dev/null || echo 'Process info unavailable')" 1

        # Auto-tail logs after successful start
        echo ""
        log_success "📄 Following server logs in real-time..."
        log_info "ℹ️ Press Ctrl+C to stop following logs (server will continue running)" 1
        log_separator
        echo ""

        # Set up signal handler for graceful exit from log following
        trap 'handle_log_interrupt' SIGINT

        # Start tailing from beginning of the fresh log file
        tail -f "$LOG_FILE"

        return 0
    else
        log_error "❌ Modular server failed to start"
        cleanup_pid_file
        log_warning "📄 Check $LOG_FILE for error details"
        return 1
    fi
}

# Start the original server
start_original_server() {
    log_header "Original Server Startup"
    log_success "🚀 Starting original server..."

    cd "$PROJECT_DIR"

    # Check if original server exists
    if [[ ! -f "unraid_mcp_server.py" ]]; then
        log_error "❌ unraid_mcp_server.py not found"
        return 1
    fi

    # Clear the log file and add a startup marker to capture fresh logs
    echo "=== Server Starting at $(date) ===" > "$LOG_FILE"

    # Start server in background
    log_info "→ Executing: uv run unraid_mcp_server.py" 1
    # Start server in new process group to isolate it from parent signals
    setsid nohup uv run unraid_mcp_server.py >> "$LOG_FILE" 2>&1 &
    local pid=$!

    # Write PID to file
    write_pid_file "$pid"
    log_info "📝 Written PID $pid to file: $PID_FILE" 1

    # Give it a moment to start and write some logs
    sleep 3

    # Check if it's still running
    if kill -0 "$pid" 2>/dev/null; then
        local port=$(get_port)
        log_success "✅ Original server started successfully (PID: $pid, Port: $port)"
        log_info "📋 Process info: $(ps -p "$pid" -o pid,ppid,cmd --no-headers 2>/dev/null || echo 'Process info unavailable')" 1

        # Auto-tail logs after successful start
        echo ""
        log_success "📄 Following server logs in real-time..."
        log_info "ℹ️ Press Ctrl+C to stop following logs (server will continue running)" 1
        log_separator
        echo ""

        # Set up signal handler for graceful exit from log following
        trap 'handle_log_interrupt' SIGINT

        # Start tailing from beginning of the fresh log file
        tail -f "$LOG_FILE"

        return 0
    else
        log_error "❌ Original server failed to start"
        cleanup_pid_file
        log_warning "📄 Check $LOG_FILE for error details"
        return 1
    fi
}

# Show usage information
show_usage() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Development script for Unraid MCP Server"
    echo ""
    echo "OPTIONS:"
    echo "  (no args)    Stop existing servers, start modular server, and tail logs"
    echo "  --old        Stop existing servers, start original server, and tail logs"
    echo "  --kill       Stop existing servers only (don't start new one)"
    echo "  --status     Show status of running servers"
    echo "  --logs [N]   Show last N lines of server logs (default: 50)"
    echo "  --tail       Follow server logs in real-time (without restarting server)"
    echo "  --help, -h   Show this help message"
    echo ""
    echo "ENVIRONMENT VARIABLES:"
    echo "  UNRAID_MCP_PORT    Port for server (default: $DEFAULT_PORT)"
    echo ""
    echo "EXAMPLES:"
    echo "  ./dev.sh             # Restart with modular server"
    echo "  ./dev.sh --old       # Restart with original server"
    echo "  ./dev.sh --kill      # Stop all servers"
    echo "  ./dev.sh --status    # Check server status"
    echo "  ./dev.sh --logs      # Show last 50 lines of logs"
    echo "  ./dev.sh --logs 100  # Show last 100 lines of logs"
    echo "  ./dev.sh --tail      # Follow logs in real-time"
}

# Show server status
show_status() {
    local port=$(get_port)
    log_header "Server Status"
    log_status "🔍 Server Status Check"
    log_info "📁 Project Directory: $PROJECT_DIR" 1
    log_info "📝 PID File: $PID_FILE" 1
    log_info "🔌 Expected Port: $port" 1
    echo ""

    # Check PID file status
    local pid_from_file=$(read_pid_file)
    if [[ -n "$pid_from_file" ]]; then
        if is_pid_valid "$pid_from_file"; then
            log_success "✅ PID File: Contains valid PID $pid_from_file" 1
        else
            log_warning "⚠️ PID File: Contains stale PID $pid_from_file (process not running)" 1
        fi
    else
        log_warning "🚫 PID File: Not found or empty" 1
    fi
    echo ""

    local pids=($(find_server_processes))

    if [[ ${#pids[@]} -eq 0 ]]; then
        log_warning "🟡 Status: No servers running" 1
    else
        log_success "✅ Status: ${#pids[@]} server(s) running" 1
        for pid in "${pids[@]}"; do
            local cmd=$(ps -p "$pid" -o cmd --no-headers 2>/dev/null || echo "Command unavailable")
            local source="process scan"
            if [[ "$pid" == "$pid_from_file" ]]; then
                source="PID file"
            fi
            log_success "PID $pid ($source): $cmd" 2
        done
    fi

    # Check port binding
    if command -v lsof >/dev/null 2>&1; then
        local port_info=$(lsof -i ":$port" 2>/dev/null | grep LISTEN || echo "")
        if [[ -n "$port_info" ]]; then
            log_success "Port $port: BOUND" 1
            echo "$port_info" | while IFS= read -r line; do
                log_info "$line" 2
            done
        else
            log_warning "Port $port: FREE" 1
        fi
    fi
}

# Tail the server logs
tail_logs() {
    local lines="${1:-50}"

    log_info "📄 Tailing last $lines lines from server logs..."

    if [[ ! -f "$LOG_FILE" ]]; then
        log_error "❌ Log file not found: $LOG_FILE"
        return 1
    fi

    echo ""
    echo "=== Server Logs (last $lines lines) ==="
    tail -n "$lines" "$LOG_FILE"
    echo "=== End of Logs ===="
    echo ""
}

# Handle SIGINT during log following
handle_log_interrupt() {
    echo ""
    log_info "📄 Stopped following logs. Server continues running in background."
    log_info "💡 Use './dev.sh --status' to check server status" 1
    log_info "💡 Use './dev.sh --tail' to resume following logs" 1
    exit 0
}

# Follow server logs in real-time
follow_logs() {
    log_success "📄 Following server logs in real-time..."
    log_info "ℹ️ Press Ctrl+C to stop following logs"

    if [[ ! -f "$LOG_FILE" ]]; then
        log_error "❌ Log file not found: $LOG_FILE"
        return 1
    fi

    # Set up signal handler for graceful exit
    trap 'handle_log_interrupt' SIGINT

    log_separator
    echo ""
    tail -f "$LOG_FILE"
}

# Main script logic
main() {
    # Initialize log file
    echo "=== Dev Script Started at $(date) ===" >> "$LOG_FILE"

    case "${1:-}" in
        --help|-h)
            show_usage
            ;;
        --status)
            show_status
            ;;
        --kill)
            stop_servers
            ;;
        --logs)
            tail_logs "${2:-50}"
            ;;
        --tail)
            follow_logs
            ;;
        --old)
            if stop_servers; then
                start_original_server
            else
                log_error "❌ Failed to stop existing servers"
                exit 1
            fi
            ;;
        "")
            if stop_servers; then
                start_modular_server
            else
                log_error "❌ Failed to stop existing servers"
                exit 1
            fi
            ;;
        *)
            log_error "❌ Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
}

# Run main function with all arguments
main "$@"
pyproject.toml
@@ -75,7 +75,7 @@ dependencies = [
     "python-dotenv>=1.1.1",
     "fastmcp>=2.11.2",
     "httpx>=0.28.1",
-    "fastapi>=0.116.1",
+    "fastapi>=0.115.0",
     "uvicorn[standard]>=0.35.0",
     "websockets>=13.1,<14.0",
     "rich>=14.1.0",
80 scripts/validate-marketplace.sh Executable file
@@ -0,0 +1,80 @@
#!/usr/bin/env bash
# Validate Claude Code marketplace and plugin structure

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Counters
CHECKS=0
PASSED=0
FAILED=0

check() {
    local test_name="$1"
    local test_cmd="$2"

    CHECKS=$((CHECKS + 1))
    echo -n "Checking: $test_name... "

    if eval "$test_cmd" > /dev/null 2>&1; then
        echo -e "${GREEN}✓${NC}"
        PASSED=$((PASSED + 1))
        return 0
    else
        echo -e "${RED}✗${NC}"
        FAILED=$((FAILED + 1))
        return 1
    fi
}

echo "=== Validating Claude Code Marketplace Structure ==="
echo ""

# Check marketplace manifest
check "Marketplace manifest exists" "test -f .claude-plugin/marketplace.json"
check "Marketplace manifest is valid JSON" "jq empty .claude-plugin/marketplace.json"
check "Marketplace has name" "jq -e '.name' .claude-plugin/marketplace.json"
check "Marketplace has plugins array" "jq -e '.plugins | type == \"array\"' .claude-plugin/marketplace.json"

# Check plugin manifest
check "Plugin manifest exists" "test -f skills/unraid/.claude-plugin/plugin.json"
check "Plugin manifest is valid JSON" "jq empty skills/unraid/.claude-plugin/plugin.json"
check "Plugin has name" "jq -e '.name' skills/unraid/.claude-plugin/plugin.json"
check "Plugin has version" "jq -e '.version' skills/unraid/.claude-plugin/plugin.json"

# Check plugin structure
check "Plugin has SKILL.md" "test -f skills/unraid/SKILL.md"
check "Plugin has README.md" "test -f skills/unraid/README.md"
check "Plugin has scripts directory" "test -d skills/unraid/scripts"
check "Plugin has examples directory" "test -d skills/unraid/examples"
check "Plugin has references directory" "test -d skills/unraid/references"

# Validate plugin is listed in marketplace
check "Plugin listed in marketplace" "jq -e '.plugins[] | select(.name == \"unraid\")' .claude-plugin/marketplace.json"

# Check marketplace metadata
check "Marketplace has repository" "jq -e '.repository' .claude-plugin/marketplace.json"
check "Marketplace has owner" "jq -e '.owner' .claude-plugin/marketplace.json"

# Verify source path
PLUGIN_SOURCE=$(jq -r '.plugins[] | select(.name == "unraid") | .source' .claude-plugin/marketplace.json)
check "Plugin source path is valid" "test -d \"$PLUGIN_SOURCE\""

echo ""
echo "=== Results ==="
echo -e "Total checks: $CHECKS"
echo -e "${GREEN}Passed: $PASSED${NC}"
if [ $FAILED -gt 0 ]; then
    echo -e "${RED}Failed: $FAILED${NC}"
    exit 1
else
    echo -e "${GREEN}All checks passed!${NC}"
    echo ""
    echo "Marketplace is ready for distribution at:"
    echo "  https://github.com/$(jq -r '.repository' .claude-plugin/marketplace.json | sed 's|https://github.com/||')"
fi
27 skills/unraid/.claude-plugin/plugin.json Normal file
@@ -0,0 +1,27 @@
{
  "name": "unraid",
  "description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring",
  "version": "1.1.0",
  "author": {
    "name": "jmagar",
    "email": "jmagar@users.noreply.github.com"
  },
  "homepage": "https://github.com/jmagar/unraid-mcp",
  "repository": "https://github.com/jmagar/unraid-mcp",
  "mcpServers": {
    "unraid": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "${CLAUDE_PLUGIN_ROOT}/../..",
        "unraid-mcp-server"
      ],
      "env": {
        "UNRAID_API_URL": "${UNRAID_API_URL}",
        "UNRAID_API_KEY": "${UNRAID_API_KEY}",
        "UNRAID_MCP_TRANSPORT": "stdio"
      }
    }
  }
}
149 skills/unraid/README.md Normal file
@@ -0,0 +1,149 @@
# Unraid API Skill

Query and monitor Unraid servers via the GraphQL API.

## What's Included

This skill provides complete access to all 27 read-only Unraid GraphQL API endpoints.

### Files

```
skills/unraid/
├── SKILL.md                       # Main skill documentation
├── README.md                      # This file
├── scripts/
│   └── unraid-query.sh            # GraphQL query helper script
├── examples/
│   ├── monitoring-dashboard.sh    # Complete system dashboard
│   ├── disk-health.sh             # Disk temperature & health check
│   └── read-logs.sh               # Log file reader
└── references/
    ├── api-reference.md           # Complete API documentation
    └── quick-reference.md         # Common queries cheat sheet
```
## Quick Start
|
||||||
|
|
||||||
|
1. **Set your credentials:**
|
||||||
|
```bash
|
||||||
|
export UNRAID_URL="https://your-unraid-server/graphql"
|
||||||
|
export UNRAID_API_KEY="your-api-key"
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Run a query:**
|
||||||
|
```bash
|
||||||
|
cd skills/unraid
|
||||||
|
./scripts/unraid-query.sh -q "{ online }"
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Run examples:**
|
||||||
|
```bash
|
||||||
|
./examples/monitoring-dashboard.sh
|
||||||
|
./examples/disk-health.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
## Triggers
|
||||||
|
|
||||||
|
This skill activates when you mention:
|
||||||
|
- "check Unraid"
|
||||||
|
- "monitor Unraid"
|
||||||
|
- "Unraid API"
|
||||||
|
- "Unraid disk temperatures"
|
||||||
|
- "Unraid array status"
|
||||||
|
- "read Unraid logs"
|
||||||
|
- And more Unraid-related monitoring tasks
|
||||||
|
|
||||||
|
## Features
|
||||||
|
|
||||||
|
- **27 working endpoints** - All read-only queries documented
|
||||||
|
- **Helper script** - Easy CLI interface for GraphQL queries
|
||||||
|
- **Example scripts** - Ready-to-use monitoring scripts
|
||||||
|
- **Complete reference** - Detailed documentation with examples
|
||||||
|
- **Quick reference** - Common queries cheat sheet
|
||||||
|
|
||||||
|
## Endpoints Covered
|
||||||
|
|
||||||
|
### System & Monitoring
|
||||||
|
- System info (CPU, OS, hardware)
|
||||||
|
- Real-time metrics (CPU %, memory %)
|
||||||
|
- Configuration & settings
|
||||||
|
- Log files (list & read)
|
||||||
|
|
||||||
|
### Storage
|
||||||
|
- Array status & disks
|
||||||
|
- All physical disks (including cache/USB)
|
||||||
|
- Network shares
|
||||||
|
- Parity check status
|
||||||
|
|
||||||
|
### Virtualization
|
||||||
|
- Docker containers
|
||||||
|
- Virtual machines
|
||||||
|
|
||||||
|
### Power & Alerts
|
||||||
|
- UPS devices
|
||||||
|
- System notifications
|
||||||
|
|
||||||
|
### Administration
|
||||||
|
- API key management
|
||||||
|
- User & authentication
|
||||||
|
- Server registration
|
||||||
|
- UI customization
|
||||||
|
|
||||||
|
## Requirements
|
||||||
|
|
||||||
|
- **Unraid 7.2+** (GraphQL API)
|
||||||
|
- **API Key** with Viewer role
|
||||||
|
- **jq** for JSON parsing (usually pre-installed)
|
||||||
|
- **curl** for HTTP requests
|
||||||
|
|
||||||
|
## Getting an API Key
|
||||||
|
|
||||||
|
1. Log in to Unraid WebGUI
|
||||||
|
2. Settings → Management Access → API Keys
|
||||||
|
3. Click "Create API Key"
|
||||||
|
4. Name: "monitoring" (or whatever you like)
|
||||||
|
5. Role: Select "Viewer" (read-only)
|
||||||
|
6. Copy the generated key
|
||||||
|
|
||||||
|
## Documentation
|
||||||
|
|
||||||
|
- **SKILL.md** - Start here for task-oriented guidance
|
||||||
|
- **references/api-reference.md** - Complete endpoint reference
|
||||||
|
- **references/quick-reference.md** - Quick query examples
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
### System Status
|
||||||
|
```bash
|
||||||
|
./scripts/unraid-query.sh -q "{ online metrics { cpu { percentTotal } } }"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Disk Health
|
||||||
|
```bash
|
||||||
|
./examples/disk-health.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### Complete Dashboard
|
||||||
|
```bash
|
||||||
|
./examples/monitoring-dashboard.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### Read Logs
|
||||||
|
```bash
|
||||||
|
./examples/read-logs.sh syslog 20
|
||||||
|
```
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
|
||||||
|
- All sizes are in **kilobytes**
|
||||||
|
- Temperatures are in **Celsius**
|
||||||
|
- Docker container logs are **not accessible** via API (use SSH)
|
||||||
|
- Poll no faster than every **5 seconds** to avoid server load
|
||||||
|
|
||||||
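The kilobyte sizing above matters when presenting disk capacity. A minimal conversion with jq, using a sample `fsSize` value (no server connection needed; the value is illustrative):

```shell
# Convert a kilobyte fsSize from the API into terabytes, rounded to 2 decimals.
echo '{"array":{"disks":[{"name":"disk1","fsSize":11998001574}]}}' |
  jq -r '.array.disks[] | "\(.name): \((.fsSize * 1024 / 1e12 * 100 | round) / 100) TB"'
# → disk1: 12.29 TB
```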
## Version

- **Skill Version:** 1.0.0
- **API Version:** Unraid 7.2 GraphQL
- **Tested:** 2026-01-21
- **Endpoints:** 27 working read-only queries
210 skills/unraid/SKILL.md Normal file
@@ -0,0 +1,210 @@
---
name: unraid
description: "Query and monitor Unraid servers via the GraphQL API. Use when the user asks to 'check Unraid', 'monitor Unraid', 'Unraid API', 'get Unraid status', 'check disk temperatures', 'read Unraid logs', 'list Unraid shares', 'Unraid array status', 'Unraid containers', 'Unraid VMs', or mentions Unraid system monitoring, disk health, parity checks, or server status."
---

# Unraid API Skill

**⚠️ MANDATORY SKILL INVOCATION ⚠️**

**YOU MUST invoke this skill (NOT optional) when the user mentions ANY of these triggers:**
- "Unraid status", "disk health", "array status"
- "Unraid containers", "VMs on Unraid", "Unraid logs"
- "check Unraid", "Unraid monitoring", "server health"
- Any mention of Unraid servers or system monitoring

**Failure to invoke this skill when triggers occur violates your operational requirements.**

Query and monitor Unraid servers using the GraphQL API. Access all 27 read-only endpoints for system monitoring, disk health, logs, containers, VMs, and more.

## Quick Start

Set your Unraid server credentials:

```bash
export UNRAID_URL="https://your-unraid-server/graphql"
export UNRAID_API_KEY="your-api-key"
```

**Get API Key:** Settings → Management Access → API Keys → Create (select "Viewer" role)

Use the helper script for any query:

```bash
./scripts/unraid-query.sh -q "{ online }"
```

Or run example scripts:

```bash
./scripts/dashboard.sh              # Complete multi-server dashboard
./examples/disk-health.sh           # Disk temperatures & health
./examples/read-logs.sh syslog 20   # Read system logs
```

## Core Concepts

### GraphQL API Structure

Unraid 7.2+ uses GraphQL (not REST). Key differences:
- **Single endpoint:** `/graphql` for all queries
- **Request exactly what you need:** Specify fields in query
- **Strongly typed:** Use introspection to discover fields
- **No container logs:** Docker container output logs not accessible

### Two Resources for Stats

- **`info`** - Static hardware specs (CPU model, cores, OS version)
- **`metrics`** - Real-time usage (CPU %, memory %, current load)

Always use `metrics` for monitoring, `info` for specifications.

## Common Tasks

### System Monitoring

**Check if server is online:**
```bash
./scripts/unraid-query.sh -q "{ online }"
```

**Get CPU and memory usage:**
```bash
./scripts/unraid-query.sh -q "{ metrics { cpu { percentTotal } memory { used total percentTotal } } }"
```

**Complete dashboard:**
```bash
./scripts/dashboard.sh
```

### Disk Management

**Check disk health and temperatures:**
```bash
./examples/disk-health.sh
```

**Get array status:**
```bash
./scripts/unraid-query.sh -q "{ array { state parityCheckStatus { status progress errors } } }"
```

**List all physical disks (including cache/USB):**
```bash
./scripts/unraid-query.sh -q "{ disks { name } }"
```

### Storage Shares

**List network shares:**
```bash
./scripts/unraid-query.sh -q "{ shares { name comment } }"
```

### Logs

**List available logs:**
```bash
./scripts/unraid-query.sh -q "{ logFiles { name size modifiedAt } }"
```

**Read log content:**
```bash
./examples/read-logs.sh syslog 20
```

### Containers & VMs

**List Docker containers:**
```bash
./scripts/unraid-query.sh -q "{ docker { containers { names image state status } } }"
```

**List VMs:**
```bash
./scripts/unraid-query.sh -q "{ vms { name state cpus memory } }"
```

**Note:** Container output logs are NOT accessible via API. Use `docker logs` via SSH.

### Notifications

**Get notification counts:**
```bash
./scripts/unraid-query.sh -q "{ notifications { overview { unread { info warning alert total } } } }"
```

## Helper Script Usage

The `scripts/unraid-query.sh` helper supports:

```bash
# Basic usage
./scripts/unraid-query.sh -u URL -k API_KEY -q "QUERY"

# Use environment variables
export UNRAID_URL="https://unraid.local/graphql"
export UNRAID_API_KEY="your-key"
./scripts/unraid-query.sh -q "{ online }"

# Format options
-f json     # Raw JSON (default)
-f pretty   # Pretty-printed JSON
-f raw      # Just the data (no wrapper)
```
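Conceptually the helper is a thin curl wrapper around the GraphQL endpoint. A minimal sketch of the same idea — this is an illustration, not the actual contents of `scripts/unraid-query.sh`, which also handles the `-u`/`-k`/`-f` flags:

```shell
#!/bin/bash
# Illustrative sketch of a GraphQL query wrapper (not the real helper script).
QUERY="$1"
# Embed the query safely as a JSON string with jq instead of string interpolation,
# so quotes and braces in the query cannot break the request body.
PAYLOAD=$(jq -cn --arg q "$QUERY" '{query: $q}')
curl -s -X POST "${UNRAID_URL:?set UNRAID_URL}" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${UNRAID_API_KEY:?set UNRAID_API_KEY}" \
  -d "$PAYLOAD" | jq '.data'
```

The `jq -cn --arg` step is the important part: it turns an arbitrary query string into a valid JSON request body without manual escaping.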
## Additional Resources

### Reference Files

For detailed documentation, consult:
- **`references/endpoints.md`** - Complete list of all 27 API endpoints
- **`references/troubleshooting.md`** - Common errors and solutions
- **`references/api-reference.md`** - Detailed field documentation

### Helper Scripts

- **`scripts/unraid-query.sh`** - Main GraphQL query tool
- **`scripts/dashboard.sh`** - Automated multi-server inventory reporter

## Quick Command Reference

```bash
# System status
./scripts/unraid-query.sh -q "{ online metrics { cpu { percentTotal } } }"

# Disk health
./examples/disk-health.sh

# Array status
./scripts/unraid-query.sh -q "{ array { state } }"

# Read logs
./examples/read-logs.sh syslog 20

# Complete dashboard
./scripts/dashboard.sh

# List shares
./scripts/unraid-query.sh -q "{ shares { name } }"

# List containers
./scripts/unraid-query.sh -q "{ docker { containers { names state } } }"
```

---

## 🔧 Agent Tool Usage Requirements

**CRITICAL:** When invoking scripts from this skill via the zsh-tool, **ALWAYS use `pty: true`**.

Without PTY mode, command output will not be visible even though commands execute successfully.

**Correct invocation pattern:**
```typescript
<invoke name="mcp__plugin_zsh-tool_zsh-tool__zsh">
<parameter name="command">./skills/SKILL_NAME/scripts/SCRIPT.sh [args]</parameter>
<parameter name="pty">true</parameter>
</invoke>
```
23 skills/unraid/examples/disk-health.sh Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
# Check disk health and temperatures
# Quick overview of all disks with temperature warnings

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
QUERY_SCRIPT="$SCRIPT_DIR/../scripts/unraid-query.sh"

QUERY='{ array { disks { name device temp status isSpinning } } }'

echo "=== Disk Health Report ==="
echo ""

RESPONSE=$("$QUERY_SCRIPT" -q "$QUERY" -f raw)

echo "$RESPONSE" | jq -r '.array.disks[] | "\(.name) (\(.device)): \(.temp)°C - \(.status) - \(if .isSpinning then "Spinning" else "Spun down" end)"'

echo ""
echo "Temperature warnings:"
echo "$RESPONSE" | jq -r '.array.disks[] | select(.temp > 45) | "⚠️ \(.name): \(.temp)°C (HIGH)"'

HOTTEST=$(echo "$RESPONSE" | jq -r '[.array.disks[].temp] | max')
echo ""
echo "Hottest disk: ${HOTTEST}°C"
23 skills/unraid/examples/read-logs.sh Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
# Read Unraid system logs
# Usage: ./read-logs.sh [log-name] [lines]

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
QUERY_SCRIPT="$SCRIPT_DIR/../scripts/unraid-query.sh"

LOG_NAME="${1:-syslog}"
LINES="${2:-20}"

echo "=== Reading $LOG_NAME (last $LINES lines) ==="
echo ""

QUERY="{ logFile(path: \"$LOG_NAME\", lines: $LINES) { path totalLines startLine content } }"

RESPONSE=$("$QUERY_SCRIPT" -q "$QUERY" -f raw)

echo "$RESPONSE" | jq -r '.logFile.content'

echo ""
echo "---"
echo "Total lines in log: $(echo "$RESPONSE" | jq -r '.logFile.totalLines')"
echo "Showing from line: $(echo "$RESPONSE" | jq -r '.logFile.startLine')"
946 skills/unraid/references/api-reference.md Normal file
@@ -0,0 +1,946 @@
# Unraid API - Complete Reference Guide

**Tested on:** Unraid 7.2 x86_64
**Date:** 2026-01-21
**API Type:** GraphQL
**Base URL:** `https://YOUR-UNRAID-SERVER/graphql`

---

## 📊 Summary

Out of 46 total GraphQL query endpoints:
- **✅ 27 fully working read-only endpoints**
- **⚠️ 1 works but returns empty** (`plugins`)
- **❌ 3 return null** (`flash`, `parityHistory`, `services`)
- **❓ 15 untested** (mostly write/mutation operations)

---

## Authentication

All requests require the `x-api-key` header:

```bash
-H "x-api-key: YOUR_API_KEY_HERE"
```

### How to Generate API Key:
1. Log in to Unraid WebGUI
2. Settings → Management Access → API Keys
3. Create API Key with **Viewer** role (read-only)
4. Copy the generated key

---

## 🎯 All 27 Working Read-Only Endpoints

### 1. System Info & Metrics

#### **info** - Hardware Specifications
Get CPU, OS, motherboard, and hardware details.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ info { time cpu { model cores threads } os { platform distro release arch } system { manufacturer model version uuid } } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "info": {
      "time": "2026-01-21T12:57:22.539Z",
      "cpu": {
        "model": "183",
        "cores": 16,
        "threads": 24
      },
      "os": {
        "platform": "linux",
        "distro": "Unraid OS",
        "release": "7.2 x86_64",
        "arch": "x64"
      },
      "system": {
        "manufacturer": "Micro-Star International Co., Ltd.",
        "model": "MS-7E07",
        "version": "1.0",
        "uuid": "fec05753-077c-8e18-a089-047c1644678a"
      }
    }
  }
}
```

---

#### **metrics** - Real-Time Usage Stats
Get current CPU and memory usage percentages.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ metrics { cpu { percentTotal } memory { total used free percentTotal swapTotal swapUsed swapFree } } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "metrics": {
      "cpu": {
        "percentTotal": 20.99
      },
      "memory": {
        "total": 134773903360,
        "used": 129472622592,
        "free": 5301280768,
        "percentTotal": 59.97,
        "swapTotal": 0,
        "swapUsed": 0,
        "swapFree": 0
      }
    }
  }
}
```

**Note:** Memory values are in bytes.
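Since memory values are bytes, convert before displaying them. A small jq example using the sample values from the response above (offline, no server needed):

```shell
# Convert byte counts from metrics.memory into GiB, rounded to 2 decimals.
echo '{"memory":{"total":134773903360,"used":129472622592}}' |
  jq -r '.memory | "used \((.used / 1073741824 * 100 | round) / 100) GiB of \((.total / 1073741824 * 100 | round) / 100) GiB"'
# → used 120.58 GiB of 125.52 GiB
```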
---

#### **online** - Server Online Status
Simple boolean check if server is online.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{ "query": "{ online }" }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "online": true
  }
}
```

---

#### **isInitialSetup** - Initial Setup Status
Check if server has completed initial setup.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{ "query": "{ isInitialSetup }" }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "isInitialSetup": false
  }
}
```

---

### 2. Storage & Disks

#### **array** - Array Status & Disks
Get array state, disk details, temperatures, and capacity.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ array { state disks { id name device size status temp fsSize fsFree fsUsed fsType rotational isSpinning } parityCheckStatus { status progress errors speed } } }"
  }' | jq '.'
```

**Response (sample):**
```json
{
  "data": {
    "array": {
      "state": "STARTED",
      "disks": [
        {
          "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:WDC_WD120EDBZ-11B1HA0_5QGWN5DF",
          "name": "disk1",
          "device": "sdb",
          "size": 11718885324,
          "status": "DISK_OK",
          "temp": 38,
          "fsSize": 11998001574,
          "fsFree": 1692508541,
          "fsUsed": 10305493033,
          "fsType": "xfs",
          "rotational": true,
          "isSpinning": true
        }
      ],
      "parityCheckStatus": {
        "status": "NEVER_RUN",
        "progress": 0,
        "errors": null,
        "speed": "0"
      }
    }
  }
}
```

**Note:** Sizes are in kilobytes. Temperature in Celsius.
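Because `fsUsed` and `fsSize` share the same kilobyte unit, usage percentages need no conversion. For example, with the sample disk values from the response above (offline):

```shell
# Compute per-disk usage percentage from the kilobyte fsUsed/fsSize fields.
echo '{"array":{"disks":[{"name":"disk1","fsSize":11998001574,"fsUsed":10305493033}]}}' |
  jq -r '.array.disks[] | "\(.name): \((.fsUsed / .fsSize * 100 | round))% used"'
# → disk1: 86% used
```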
---

#### **disks** - All Physical Disks
Get ALL disks including array disks, cache SSDs, and boot USB.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ disks { id name } }"
  }' | jq '.'
```

**Response (sample):**
```json
{
  "data": {
    "disks": [
      {
        "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:04009732070823130633",
        "name": "Cruzer Glide"
      },
      {
        "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:5QGWN5DF",
        "name": "WDC WD120EDBZ-11B1HA0"
      },
      {
        "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:S6S2NS0TB18572X",
        "name": "Samsung SSD 970 EVO Plus 2TB"
      }
    ]
  }
}
```

**Returns:** Array disks + Cache SSDs + Boot USB (17 disks in tested system).

---

#### **shares** - Network Shares
List all user shares with comments.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ shares { id name comment } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "shares": [
      {
        "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:appdata",
        "name": "appdata",
        "comment": "application data"
      },
      {
        "id": "3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:backups",
        "name": "backups",
        "comment": "primary homelab backup target"
      }
    ]
  }
}
```

---

### 3. Virtualization

#### **docker** - Docker Containers
List all Docker containers with status and metadata.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ docker { containers { id names image state status created autoStart } } }"
  }' | jq '.'
```

**Response (when no containers):**
```json
{
  "data": {
    "docker": {
      "containers": []
    }
  }
}
```

**Note:** Container logs are NOT accessible via this API. Use `docker logs` via SSH.

---

#### **vms** - Virtual Machines
List all VMs with status and resource allocation.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ vms { id name state cpus memory autostart } }"
  }' | jq '.'
```

**Response (when no VMs):**
```json
{
  "data": {
    "vms": []
  }
}
```

---

### 4. Logs & Monitoring

#### **logFiles** - List All Log Files
Get list of all available system log files.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ logFiles { name size modifiedAt } }"
  }' | jq '.'
```

**Response (sample, 32 logs found):**
```json
{
  "data": {
    "logFiles": [
      {
        "name": "syslog",
        "size": 142567,
        "modifiedAt": "2026-01-21T13:00:00.000Z"
      },
      {
        "name": "docker.log",
        "size": 66321,
        "modifiedAt": "2026-01-05T19:14:53.934Z"
      },
      {
        "name": "dmesg",
        "size": 93128,
        "modifiedAt": "2025-12-19T11:09:30.200Z"
      }
    ]
  }
}
```

---

#### **logFile** - Read Log Content
Read the actual contents of a log file.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "query { logFile(path: \"syslog\", lines: 10) { path totalLines startLine content } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "logFile": {
      "path": "/var/log/syslog",
      "totalLines": 1395,
      "startLine": 1386,
      "content": "Jan 21 07:49:49 unraid-server sshd-session[2992319]: Accepted keyboard-interactive/pam for root from 100.80.181.18 port 49724 ssh2\n..."
    }
  }
}
```

**Parameters:**
- `path` - Log file name (required)
- `lines` - Number of lines to return (optional, defaults to last 100)
- `startLine` - Line number to start from (optional)

**Available logs include:**
- `syslog` - System log
- `docker.log` - Docker daemon log
- `dmesg` - Kernel messages
- `wtmp` - Login records
- And 28 more...
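To page through a large log, derive `startLine` from the `totalLines` value returned by a first query. A small shell-arithmetic sketch (the sample numbers are illustrative):

```shell
# Compute the startLine that yields the last N lines, clamped at line 1.
TOTAL=1395   # from a prior logFile query's totalLines field
LINES=100
START=$(( TOTAL - LINES + 1 ))
if (( START < 1 )); then START=1; fi
echo "{ logFile(path: \"syslog\", lines: $LINES, startLine: $START) { content } }"
```

Decrease `START` by `LINES` on each subsequent request to walk backwards through the file.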
---

#### **notifications** - System Alerts
Get system notifications and alerts.

**Get notification counts:**
```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ notifications { overview { unread { info warning alert total } archive { info warning alert total } } } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "notifications": {
      "overview": {
        "unread": {
          "info": 66,
          "warning": 0,
          "alert": 0,
          "total": 66
        },
        "archive": {
          "info": 581,
          "warning": 4,
          "alert": 1,
          "total": 586
        }
      }
    }
  }
}
```

**List unread notifications:**
```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ notifications { list(filter: { type: UNREAD, offset: 0, limit: 10 }) { id subject description timestamp } } }"
  }' | jq '.'
```

**Response (sample):**
```json
{
  "data": {
    "notifications": {
      "list": [
        {
          "id": "...",
          "subject": "Backup Notification",
          "description": "ZFS replication was successful...",
          "timestamp": "2026-01-21T09:10:40.000Z"
        }
      ]
    }
  }
}
```

**Parameters for list query:**
- `type` - `UNREAD` or `ARCHIVE` (required)
- `offset` - Starting index (required, use 0 for first page)
- `limit` - Number of results (required, max typically 100)
- `importance` - Filter by `INFO`, `WARNING`, or `ALERT` (optional)
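The overview counts are handy for a one-line health summary. For example, with the sample overview response from above (offline):

```shell
# Summarize unread notification counts from the overview response.
echo '{"data":{"notifications":{"overview":{"unread":{"info":66,"warning":0,"alert":0,"total":66}}}}}' |
  jq -r '.data.notifications.overview.unread | "unread: \(.total) total, \(.alert) alerts, \(.warning) warnings"'
# → unread: 66 total, 0 alerts, 0 warnings
```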
|
---
|
||||||
|
|
||||||
|
### 5. UPS & Power
|
||||||
|
|
||||||
|
#### **upsDevices** - UPS Status
|
||||||
|
Get UPS battery backup status (if configured).
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -s -X POST "https://YOUR-UNRAID/graphql" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "x-api-key: YOUR_API_KEY" \
|
||||||
|
-d '{
|
||||||
|
"query": "{ upsDevices { id name status charge load runtime } }"
|
||||||
|
}' | jq '.'
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response (when no UPS):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"upsDevices": []
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 6. User & Authentication
|
||||||
|
|
||||||
|
#### **me** - Current User Info
|
||||||
|
Get information about the current authenticated user.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -s -X POST "https://YOUR-UNRAID/graphql" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "x-api-key: YOUR_API_KEY" \
|
||||||
|
-d '{
|
||||||
|
"query": "{ me { id } }"
|
||||||
|
}' | jq '.'
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
#### **owner** - Server Owner
|
||||||
|
Get server owner information.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -s -X POST "https://YOUR-UNRAID/graphql" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "x-api-key: YOUR_API_KEY" \
|
||||||
|
-d '{
|
||||||
|
"query": "{ owner { username url avatar } }"
|
||||||
|
}' | jq '.'
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"owner": {
|
||||||
|
"username": "root",
|
||||||
|
"url": "",
|
||||||
|
"avatar": ""
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
#### **isSSOEnabled** - SSO Status
|
||||||
|
Check if Single Sign-On is enabled.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
curl -s -X POST "https://YOUR-UNRAID/graphql" \
|
||||||
|
-H "Content-Type: application/json" \
|
||||||
|
-H "x-api-key: YOUR_API_KEY" \
|
||||||
|
-d '{ "query": "{ isSSOEnabled }" }' | jq '.'
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"data": {
|
||||||
|
"isSSOEnabled": true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
#### **oidcProviders** - OIDC Providers
List configured OpenID Connect providers.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ oidcProviders { id } }"
  }' | jq '.'
```

---

### 7. API Keys & Access

#### **apiKeys** - List API Keys
Get list of all API keys (requires appropriate permissions).

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ apiKeys { id name createdAt } }"
  }' | jq '.'
```

**Response (sample; 1 of 4 keys shown):**
```json
{
  "data": {
    "apiKeys": [
      {
        "id": "key1",
        "name": "monitoring",
        "createdAt": "2026-01-01T00:00:00.000Z"
      }
    ]
  }
}
```

---

### 8. Configuration & Settings

#### **config** - System Configuration
Get system configuration details.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ config { id } }"
  }' | jq '.'
```

---

#### **settings** - System Settings
Get system settings.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ settings { id } }"
  }' | jq '.'
```

---

#### **vars** - System Variables
Get system variables.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ vars { id } }"
  }' | jq '.'
```

---

### 9. Customization & Theming

#### **customization** - UI Customization
Get UI theme and customization settings.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ customization { theme { name headerBackgroundColor headerPrimaryTextColor showBannerImage showBannerGradient } } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "customization": {
      "theme": {
        "name": "white",
        "headerBackgroundColor": "#2e3440",
        "headerPrimaryTextColor": "#FFF",
        "showBannerImage": false,
        "showBannerGradient": false
      }
    }
  }
}
```

---

#### **publicTheme** - Public Theme Settings
Get public-facing theme settings.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ publicTheme { name showBannerImage showBannerGradient headerBackgroundColor headerPrimaryTextColor headerSecondaryTextColor } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "publicTheme": {
      "name": "white",
      "showBannerImage": false,
      "showBannerGradient": false,
      "headerBackgroundColor": "#2e3440",
      "headerPrimaryTextColor": "#FFF",
      "headerSecondaryTextColor": "#fff"
    }
  }
}
```

---

#### **publicPartnerInfo** - Partner/OEM Branding
Get partner or OEM branding information.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ publicPartnerInfo { partnerName partnerUrl partnerLogoUrl hasPartnerLogo } }"
  }' | jq '.'
```

**Response:**
```json
{
  "data": {
    "publicPartnerInfo": {
      "partnerName": null,
      "partnerUrl": null,
      "partnerLogoUrl": "/webGui/images/UN-logotype-gradient.svg",
      "hasPartnerLogo": false
    }
  }
}
```

---

### 10. Server Management

#### **registration** - License Info
Get Unraid license/registration information.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ registration { id } }"
  }' | jq '.'
```

---

#### **server** - Server Metadata
Get server metadata.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ server { id } }"
  }' | jq '.'
```

---

#### **servers** - Multi-Server Management
Get list of servers (for multi-server setups).

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ servers { id } }"
  }' | jq '.'
```

---

### 11. Plugins

#### **plugins** - Installed Plugins
List installed plugins.

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ plugins { name version author description } }"
  }' | jq '.'
```

**Response (when no plugins):**
```json
{
  "data": {
    "plugins": []
  }
}
```

---

## 🎯 Complete Dashboard Query

Get everything useful in a single query:

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "query Dashboard {
      info {
        time
        cpu { model cores threads }
        os { distro release }
        system { manufacturer model }
      }
      metrics {
        cpu { percentTotal }
        memory { total used free percentTotal }
      }
      array {
        state
        disks { name device temp status fsSize fsFree fsUsed isSpinning }
        parityCheckStatus { status progress errors }
      }
      shares { name comment }
      online
      isSSOEnabled
    }"
  }' | jq '.'
```

---

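Note that a GraphQL string containing literal newlines is not valid JSON, so strict servers can reject a payload pasted exactly as above. A safer sketch (hypothetical variable names) builds the payload with `jq --arg`, which escapes newlines and quotes for you:

```shell
# Keep the query readable in a shell variable...
QUERY='query Dashboard {
  info { time }
  metrics { cpu { percentTotal } }
  online
}'

# ...and let jq produce a correctly escaped single JSON document.
PAYLOAD=$(jq -n --arg q "$QUERY" '{query: $q}')
echo "$PAYLOAD"

# Then POST it (YOUR-UNRAID / YOUR_API_KEY are placeholders):
# curl -s -X POST "https://YOUR-UNRAID/graphql" \
#   -H "Content-Type: application/json" \
#   -H "x-api-key: YOUR_API_KEY" \
#   -d "$PAYLOAD" | jq '.'
```

This round-trips exactly: `jq -r '.query'` on the payload returns the original multi-line query.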
## ❌ Endpoints That Return Null

These queries exist but return `null` in Unraid 7.2:

1. **`flash`** - Boot USB drive info (returns `null`)
2. **`parityHistory`** - Historical parity checks (returns `null` - use `array.parityCheckStatus` instead)
3. **`services`** - System services (returns `null`)

---

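If a script consumes one of these endpoints anyway, jq's `//` alternative operator gives a readable fallback instead of a bare `null` (sample response; the fallback text is illustrative):

```shell
# flash resolves to null on Unraid 7.2, so substitute a fallback string.
RESPONSE='{"data":{"flash":null}}'
FLASH=$(echo "$RESPONSE" | jq -r '.data.flash // "not available"')
echo "flash: $FLASH"
# → flash: not available
```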
## 🔍 Schema Discovery

### Discover Available Fields for a Type

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ __type(name: \"Info\") { fields { name type { name } } } }"
  }' | jq -r '.data.__type.fields[] | "\(.name): \(.type.name)"'
```

### List All Available Queries

```bash
curl -s -X POST "https://YOUR-UNRAID/graphql" \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "query": "{ __type(name: \"Query\") { fields { name } } }"
  }' | jq -r '.data.__type.fields[].name' | sort
```

---

## 📝 Field Name Reference

Common differences from online documentation:

| Online Docs | Actual Unraid 7.2 Field |
|------------|------------------------|
| `uptime` | `time` |
| `cpu.usage` | `metrics.cpu.percentTotal` |
| `memory.usage` | `metrics.memory.percentTotal` |
| `array.status` | `array.state` |
| `disk.temperature` | `disk.temp` |
| `percentUsed` | `percentTotal` |

---

## ⚡ Best Practices

1. **Use `metrics` for real-time stats** - CPU/memory usage is in `metrics`, not `info`
2. **Use `array.disks` for array disks** - The top-level `disks` query includes ALL disks (USB, SSDs, etc.)
3. **Always check errors** - GraphQL returns errors in `errors` array
4. **Use introspection** - Field names can vary between versions
5. **Sizes are in kilobytes** - Disk sizes and capacities are in KB, not bytes
6. **Temperature is Celsius** - All temperature values are in Celsius
7. **Handle empty arrays** - Many queries return `[]` when no data exists
8. **Use viewer role** - Create API keys with "Viewer" role for read-only access

---

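A short sketch combining practices 3 and 5 — check the `errors` array first, then convert kilobyte sizes for display (sample response with illustrative values):

```shell
# Sample array response: fsSize/fsFree are in kilobytes.
RESPONSE='{"data":{"array":{"disks":[{"name":"disk1","fsSize":10485760,"fsFree":5242880}]}}}'

# Practice 3: bail out if the server reported GraphQL errors.
if echo "$RESPONSE" | jq -e '.errors' > /dev/null; then
  echo "GraphQL errors:" >&2
  echo "$RESPONSE" | jq -r '.errors[].message' >&2
  exit 1
fi

# Practice 5: convert KB to GB (1 GB = 1048576 KB) for readable output.
SUMMARY=$(echo "$RESPONSE" | jq -r '.data.array.disks[] |
  "\(.name): \(.fsFree / 1048576 | floor)GB free of \(.fsSize / 1048576 | floor)GB"')
echo "$SUMMARY"
# → disk1: 5GB free of 10GB
```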
## 🚫 Known Limitations

1. **No Docker container logs** - Container output logs are NOT accessible via API
2. **No real-time streaming** - All queries are request/response, no WebSocket subscriptions
3. **Some queries require higher permissions** - Read-only "Viewer" role may not access all queries
4. **No mutation examples included** - This guide covers read-only queries only

---

## 📚 Additional Resources

- **Unraid Docs:** https://docs.unraid.net/
- **GraphQL Spec:** https://graphql.org/
- **GraphQL Introspection:** Use `__schema` and `__type` queries to explore the API

---

**Last Updated:** 2026-01-21
**API Version:** Unraid 7.2 GraphQL API
**Total Working Endpoints:** 27 of 46

---

**File:** `skills/unraid/references/endpoints.md` (49 lines)

# Unraid API Endpoints Reference

Complete list of available GraphQL read-only endpoints in Unraid 7.2+.

## System & Metrics (8)
1. **`info`** - Hardware specs (CPU, OS, motherboard)
2. **`metrics`** - Real-time CPU/memory usage
3. **`online`** - Server online status
4. **`isInitialSetup`** - Setup completion status
5. **`config`** - System configuration
6. **`vars`** - System variables
7. **`settings`** - System settings
8. **`logFiles`** - List all log files

## Storage (4)
9. **`array`** - Array status, disks, parity
10. **`disks`** - All physical disks (array + cache + USB)
11. **`shares`** - Network shares
12. **`logFile`** - Read log content

## Virtualization (2)
13. **`docker`** - Docker containers
14. **`vms`** - Virtual machines

## Monitoring (2)
15. **`notifications`** - System alerts
16. **`upsDevices`** - UPS battery status

## User & Auth (4)
17. **`me`** - Current user info
18. **`owner`** - Server owner
19. **`isSSOEnabled`** - SSO status
20. **`oidcProviders`** - OIDC providers

## API Management (1)
21. **`apiKeys`** - List API keys

## Customization (3)
22. **`customization`** - UI theme & settings
23. **`publicTheme`** - Public theme
24. **`publicPartnerInfo`** - Partner branding

## Server Management (3)
25. **`registration`** - License info
26. **`server`** - Server metadata
27. **`servers`** - Multi-server management

## Bonus (1)
28. **`plugins`** - Installed plugins (returns empty array if none)

---

**File:** `skills/unraid/references/introspection-schema.md` (3114 lines; diff too large to display)

---

**File:** `skills/unraid/references/quick-reference.md` (219 lines)

# Unraid API Quick Reference

Quick reference for the most common Unraid GraphQL API queries.

## Setup

```bash
# Set environment variables
export UNRAID_URL="https://your-unraid-server/graphql"
export UNRAID_API_KEY="your-api-key-here"

# Or use the helper script directly
./scripts/unraid-query.sh -u "$UNRAID_URL" -k "$UNRAID_API_KEY" -q "{ online }"
```

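Responses pipe cleanly into `jq` for scripting. A small sketch (sample values, hypothetical variable names) extracting one scalar from a metrics response:

```shell
# Live usage: ./scripts/unraid-query.sh -q "{ metrics { cpu { percentTotal } } }"
# The extraction step, shown on a sample response:
RESPONSE='{"data":{"metrics":{"cpu":{"percentTotal":12.5}}}}'
PCT=$(echo "$RESPONSE" | jq -r '.data.metrics.cpu.percentTotal')
echo "CPU load: ${PCT}%"
# → CPU load: 12.5%
```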
## Common Queries

### System Status
```graphql
{
  online
  metrics {
    cpu { percentTotal }
    memory { total used free percentTotal }
  }
}
```

### Array Status
```graphql
{
  array {
    state
    parityCheckStatus { status progress errors }
  }
}
```

### Disk List with Temperatures
```graphql
{
  array {
    disks {
      name
      device
      temp
      status
      fsSize
      fsFree
      isSpinning
    }
  }
}
```

### All Physical Disks (including USB/SSDs)
```graphql
{
  disks {
    id
    name
  }
}
```

### Network Shares
```graphql
{
  shares {
    name
    comment
  }
}
```

### Docker Containers
```graphql
{
  docker {
    containers {
      id
      names
      image
      state
      status
    }
  }
}
```

### Virtual Machines
```graphql
{
  vms {
    id
    name
    state
    cpus
    memory
  }
}
```

### List Log Files
```graphql
{
  logFiles {
    name
    size
    modifiedAt
  }
}
```

### Read Log Content
```graphql
{
  logFile(path: "syslog", lines: 20) {
    content
    totalLines
  }
}
```

### System Info
```graphql
{
  info {
    time
    cpu { model cores threads }
    os { distro release }
    system { manufacturer model }
  }
}
```

### UPS Devices
```graphql
{
  upsDevices {
    id
    name
    status
    charge
    load
  }
}
```

### Notifications

**Counts:**
```graphql
{
  notifications {
    overview {
      unread { info warning alert total }
      archive { info warning alert total }
    }
  }
}
```

**List Unread:**
```graphql
{
  notifications {
    list(filter: { type: UNREAD, offset: 0, limit: 10 }) {
      id
      subject
      description
      timestamp
    }
  }
}
```

**List Archived:**
```graphql
{
  notifications {
    list(filter: { type: ARCHIVE, offset: 0, limit: 10 }) {
      id
      subject
      description
      timestamp
    }
  }
}
```

## Field Name Notes

- Use `metrics` for real-time usage (CPU/memory percentages)
- Use `info` for hardware specs (cores, model, etc.)
- Temperature field is `temp` (not `temperature`)
- Status field is `state` for array (not `status`)
- Sizes are in kilobytes
- Temperatures are in Celsius

## Response Structure

All responses follow this pattern:
```json
{
  "data": {
    "queryName": { ... }
  }
}
```

Errors appear in:
```json
{
  "errors": [
    { "message": "..." }
  ]
}
```
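A minimal helper (hypothetical name `gql_check`) that distinguishes the two shapes above before a script touches `.data`:

```shell
# Return 0 if the response carries data, 1 (printing messages to stderr)
# if it carries a GraphQL errors array.
gql_check() {
  if echo "$1" | jq -e '.errors' > /dev/null 2>&1; then
    echo "$1" | jq -r '.errors[].message' >&2
    return 1
  fi
  return 0
}

gql_check '{"data":{"online":true}}' && echo "ok"
gql_check '{"errors":[{"message":"boom"}]}' 2>/dev/null || echo "failed as expected"
```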

---

**File:** `skills/unraid/references/schema.graphql` (3114 lines; diff too large to display)

---

**File:** `skills/unraid/references/troubleshooting.md` (34 lines)

# Unraid API Troubleshooting Guide

Common issues and solutions when working with the Unraid GraphQL API.

## "Cannot query field" error
Field name doesn't exist in your Unraid version. Use introspection to find valid fields (single quotes keep the `\"` escapes intact for the JSON payload the helper builds):
```bash
./scripts/unraid-query.sh -q '{ __type(name: \"TypeName\") { fields { name } } }'
```

## "API key validation failed"
- Check API key is correct and not truncated
- Verify key has appropriate permissions (use "Viewer" role)
- Ensure URL includes `/graphql` endpoint (e.g. `http://host/graphql`)

## Empty results
Many queries return empty arrays when no data exists:
- `docker.containers` - No containers running
- `vms` - No VMs configured (or VM service disabled)
- `notifications` - No active alerts
- `plugins` - No plugins installed

This is normal behavior, not an error. Ensure your scripts handle empty arrays gracefully.

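One way to handle this in shell is jq's `// []` fallback, so a missing or null list behaves like an empty one (sample response, hypothetical variable names):

```shell
# Counting containers works whether the list is empty, null, or absent.
RESPONSE='{"data":{"docker":{"containers":[]}}}'
COUNT=$(echo "$RESPONSE" | jq '[.data.docker.containers // [] | .[]] | length')
echo "containers: $COUNT"
# → containers: 0
```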
## "VMs are not available" (GraphQL Error)
If the VM manager is disabled in Unraid settings, querying `{ vms { ... } }` will return a GraphQL error.

**Solution:** Check if VM service is enabled before querying, or use error handling (like `IGNORE_ERRORS=true` in dashboard scripts) to process partial data.

## URL connection issues
- Use HTTPS (not HTTP) for remote access if configured
- For local access: `http://unraid-server-ip/graphql`
- For Unraid Connect: Use provided URL with token in hostname
- Use `-k` (insecure) with curl if using self-signed certs on local HTTPS
- Use `-L` (follow redirects) if Unraid redirects HTTP to HTTPS

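Before debugging keys or queries, a quick probe (using the same `YOUR-UNRAID` placeholder as the examples above) can confirm the endpoint is reachable at all:

```shell
# A reachable GraphQL endpoint answers POSTs with some HTTP status,
# even unauthenticated ones; 000 means curl never connected at all
# (DNS, TLS, or network problem).
URL="https://YOUR-UNRAID/graphql"
CODE=$(curl -skL -o /dev/null -w "%{http_code}" -X POST "$URL" \
  -H "Content-Type: application/json" -d '{"query":"{ online }"}' || true)
echo "HTTP status: $CODE"
```

With the placeholder hostname left in place, this prints `HTTP status: 000`; any real status code means the server itself is reachable and the problem is auth or the query.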

---

**File:** `skills/unraid/scripts/dashboard.sh` (executable, 214 lines)

#!/bin/bash
# Complete Unraid Monitoring Dashboard (Multi-Server)
# Gets system status, disk health, and resource usage for all configured servers

set -euo pipefail

SCRIPT_DIR="$(cd -P "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
source "$REPO_ROOT/lib/load-env.sh"

QUERY_SCRIPT="$SCRIPT_DIR/unraid-query.sh"
OUTPUT_FILE="$HOME/memory/bank/unraid-inventory.md"

# Load credentials from .env for all servers
load_env_file || exit 1
for server in "TOOTIE" "SHART"; do
  url_var="UNRAID_${server}_URL"
  key_var="UNRAID_${server}_API_KEY"
  name_var="UNRAID_${server}_NAME"
  validate_env_vars "$url_var" "$key_var" || exit 1
done

# Ensure output directory exists
mkdir -p "$(dirname "$OUTPUT_FILE")"

# Start the report
echo "# Unraid Fleet Dashboard" > "$OUTPUT_FILE"
echo "Generated at: $(date)" >> "$OUTPUT_FILE"
echo "" >> "$OUTPUT_FILE"

# Function to process a single server
process_server() {
  local NAME="$1"
  local URL="$2"
  local API_KEY="$3"

  echo "Querying server: $NAME..."

  export UNRAID_URL="$URL"
  export UNRAID_API_KEY="$API_KEY"
  export IGNORE_ERRORS="true"

  # NOTE: inner quotes are pre-escaped (\") because unraid-query.sh
  # interpolates this string directly into its JSON payload.
  QUERY='query Dashboard {
    info {
      time
      cpu { model cores threads }
      os { platform distro release arch }
      system { manufacturer model version uuid }
    }
    metrics {
      cpu { percentTotal }
      memory { total used free percentTotal }
    }
    array {
      state
      capacity { kilobytes { total free used } }
      disks { name device temp status fsSize fsFree fsUsed isSpinning numErrors }
      caches { name device temp status fsSize fsFree fsUsed fsType type }
      parityCheckStatus { status progress errors }
    }
    disks { id name device size status temp numErrors }
    shares { name comment free }
    docker {
      containers { names image state status }
    }
    vms { domains { id name state } }
    vars { timeZone regTy regTo }
    notifications { id title subject description importance timestamp }
    recentLog: logFile(path: \"syslog\", lines: 50) { content }
    online
    isSSOEnabled
  }'

  RESPONSE=$("$QUERY_SCRIPT" -q "$QUERY" -f json)

  # Debug output
  echo "$RESPONSE" > "${NAME}_debug.json"

  # Check if response is valid JSON
  if ! echo "$RESPONSE" | jq -e . >/dev/null 2>&1; then
    echo "Error querying $NAME: Invalid response"
    echo "Response saved to ${NAME}_debug.json"
    echo "## Server: $NAME (⚠️ Error)" >> "$OUTPUT_FILE"
    echo "Failed to retrieve data." >> "$OUTPUT_FILE"
    return
  fi

  # Append to report
  echo "## Server: $NAME" >> "$OUTPUT_FILE"

  # System Info
  CPU_MODEL=$(echo "$RESPONSE" | jq -r '.data.info.cpu.model')
  CPU_CORES=$(echo "$RESPONSE" | jq -r '.data.info.cpu.cores')
  CPU_THREADS=$(echo "$RESPONSE" | jq -r '.data.info.cpu.threads')
  OS_REL=$(echo "$RESPONSE" | jq -r '.data.info.os.release')
  OS_ARCH=$(echo "$RESPONSE" | jq -r '.data.info.os.arch // "x64"')
  SYS_MFG=$(echo "$RESPONSE" | jq -r '.data.info.system.manufacturer // "Unknown"')
  SYS_MODEL=$(echo "$RESPONSE" | jq -r '.data.info.system.model // "Unknown"')
  TIMEZONE=$(echo "$RESPONSE" | jq -r '.data.vars.timeZone // "N/A"')
  LICENSE=$(echo "$RESPONSE" | jq -r '.data.vars.regTy // "Unknown"')
  REG_TO=$(echo "$RESPONSE" | jq -r '.data.vars.regTo // "N/A"')
  CPU_LOAD=$(echo "$RESPONSE" | jq -r '.data.metrics.cpu.percentTotal // 0')
  TOTAL_MEM=$(echo "$RESPONSE" | jq -r '.data.metrics.memory.total // 0')
  MEM_USED_PCT=$(echo "$RESPONSE" | jq -r '.data.metrics.memory.percentTotal // 0')
  TOTAL_MEM_GB=$((TOTAL_MEM / 1024 / 1024 / 1024))

  echo "### System" >> "$OUTPUT_FILE"
  echo "- **Hardware:** $SYS_MFG $SYS_MODEL" >> "$OUTPUT_FILE"
  echo "- **OS:** Unraid $OS_REL ($OS_ARCH)" >> "$OUTPUT_FILE"
  echo "- **License:** $LICENSE (Registered to: $REG_TO)" >> "$OUTPUT_FILE"
  echo "- **Timezone:** $TIMEZONE" >> "$OUTPUT_FILE"
  echo "- **CPU:** Model $CPU_MODEL ($CPU_CORES cores / $CPU_THREADS threads) - **${CPU_LOAD}% load**" >> "$OUTPUT_FILE"
  echo "- **Memory:** ${TOTAL_MEM_GB}GB - **${MEM_USED_PCT}% used**" >> "$OUTPUT_FILE"
  echo "" >> "$OUTPUT_FILE"

  # Array capacity
  ARRAY_TOTAL=$(echo "$RESPONSE" | jq -r '.data.array.capacity.kilobytes.total')
  ARRAY_FREE=$(echo "$RESPONSE" | jq -r '.data.array.capacity.kilobytes.free')
  ARRAY_USED=$(echo "$RESPONSE" | jq -r '.data.array.capacity.kilobytes.used')

  if [ "$ARRAY_TOTAL" != "null" ] && [ "$ARRAY_TOTAL" -gt 0 ]; then
    ARRAY_TOTAL_GB=$((ARRAY_TOTAL / 1024 / 1024))
    ARRAY_FREE_GB=$((ARRAY_FREE / 1024 / 1024))
    ARRAY_USED_GB=$((ARRAY_USED / 1024 / 1024))
    ARRAY_USED_PCT=$((ARRAY_USED * 100 / ARRAY_TOTAL))
    echo "### Storage" >> "$OUTPUT_FILE"
    echo "- **Array:** ${ARRAY_USED_GB}GB / ${ARRAY_TOTAL_GB}GB used (${ARRAY_USED_PCT}%)" >> "$OUTPUT_FILE"
  fi

  # Cache pools
  echo "- **Cache Pools:**" >> "$OUTPUT_FILE"
  echo "$RESPONSE" | jq -r '.data.array.caches[] | " - \(.name) (\(.device)): \(.temp)°C - \(.status) - \(if .fsSize then "\((.fsUsed / 1024 / 1024 | floor))GB / \((.fsSize / 1024 / 1024 | floor))GB used" else "N/A" end)"' >> "$OUTPUT_FILE"

  # Docker
  TOTAL_CONTAINERS=$(echo "$RESPONSE" | jq '[.data.docker.containers[]] | length')
  RUNNING_CONTAINERS=$(echo "$RESPONSE" | jq '[.data.docker.containers[] | select(.state == "RUNNING")] | length')

  echo "" >> "$OUTPUT_FILE"
  echo "### Workloads" >> "$OUTPUT_FILE"
  echo "- **Docker:** ${TOTAL_CONTAINERS} containers (${RUNNING_CONTAINERS} running)" >> "$OUTPUT_FILE"

  # Unhealthy containers
  UNHEALTHY=$(echo "$RESPONSE" | jq -r '.data.docker.containers[] | select(.status | test("unhealthy|restarting"; "i")) | " - ⚠️ \(.names[0]): \(.status)"')
  if [ -n "$UNHEALTHY" ]; then
    echo "$UNHEALTHY" >> "$OUTPUT_FILE"
  fi

  # VMs
  if [ "$(echo "$RESPONSE" | jq -r '.data.vms.domains')" != "null" ]; then
    TOTAL_VMS=$(echo "$RESPONSE" | jq '[.data.vms.domains[]] | length')
    RUNNING_VMS=$(echo "$RESPONSE" | jq '[.data.vms.domains[] | select(.state == "RUNNING")] | length')
    echo "- **VMs:** ${TOTAL_VMS} VMs (${RUNNING_VMS} running)" >> "$OUTPUT_FILE"
  else
    echo "- **VMs:** Service disabled or no data" >> "$OUTPUT_FILE"
  fi

  # Disk Health
  echo "" >> "$OUTPUT_FILE"
  echo "### Health" >> "$OUTPUT_FILE"

  HOT_DISKS=$(echo "$RESPONSE" | jq -r '.data.array.disks[] | select(.temp > 45) | "- ⚠️ \(.name): \(.temp)°C (HIGH)"')
  DISK_ERRORS=$(echo "$RESPONSE" | jq -r '.data.array.disks[] | select(.numErrors > 0) | "- ❌ \(.name): \(.numErrors) errors"')

  if [ -z "$HOT_DISKS" ] && [ -z "$DISK_ERRORS" ]; then
    echo "- ✅ All disks healthy" >> "$OUTPUT_FILE"
  else
    [ -n "$HOT_DISKS" ] && echo "$HOT_DISKS" >> "$OUTPUT_FILE"
    [ -n "$DISK_ERRORS" ] && echo "$DISK_ERRORS" >> "$OUTPUT_FILE"
  fi

  # Notifications (Alerts)
  echo "" >> "$OUTPUT_FILE"
  echo "### Notifications" >> "$OUTPUT_FILE"

  NOTIF_COUNT=$(echo "$RESPONSE" | jq '[.data.notifications[]] | length' 2>/dev/null || echo "0")
  if [ "$NOTIF_COUNT" -gt 0 ] && [ "$NOTIF_COUNT" != "null" ]; then
    # Show recent notifications (last 10)
    ALERT_NOTIFS=$(echo "$RESPONSE" | jq -r '.data.notifications | sort_by(.timestamp) | reverse | .[0:10][] | "- [\(.importance // "info")] \(.title // .subject): \(.description // "No description") (\(.timestamp | split("T")[0]))"' 2>/dev/null)
    if [ -n "$ALERT_NOTIFS" ]; then
      echo "$ALERT_NOTIFS" >> "$OUTPUT_FILE"
    else
      echo "- ✅ No recent notifications" >> "$OUTPUT_FILE"
    fi

    # Count by importance
    ALERT_COUNT=$(echo "$RESPONSE" | jq '[.data.notifications[] | select(.importance == "alert" or .importance == "warning")] | length' 2>/dev/null || echo "0")
    if [ "$ALERT_COUNT" -gt 0 ]; then
      echo "" >> "$OUTPUT_FILE"
      echo "**⚠️ $ALERT_COUNT alert/warning notifications**" >> "$OUTPUT_FILE"
    fi
  else
    echo "- ✅ No notifications" >> "$OUTPUT_FILE"
  fi

  echo "" >> "$OUTPUT_FILE"
  echo "---" >> "$OUTPUT_FILE"
  echo "" >> "$OUTPUT_FILE"
}

# Main loop - process each server from environment variables
for server in "TOOTIE" "SHART"; do
  name_var="UNRAID_${server}_NAME"
  url_var="UNRAID_${server}_URL"
  key_var="UNRAID_${server}_API_KEY"

  NAME="${!name_var}"
  URL="${!url_var}"
  KEY="${!key_var}"

  process_server "$NAME" "$URL" "$KEY"
done

echo "Dashboard saved to: $OUTPUT_FILE"
cat "$OUTPUT_FILE"

---

**File:** `skills/unraid/scripts/unraid-query.sh` (executable, 126 lines)

#!/bin/bash
# Unraid GraphQL API Query Helper
# Makes it easy to query the Unraid API from the command line

set -e

# Usage function
usage() {
  cat << EOF
Usage: $0 [OPTIONS]

Query the Unraid GraphQL API

OPTIONS:
  -u, --url URL        Unraid server URL (required)
  -k, --key KEY        API key (required)
  -q, --query QUERY    GraphQL query (required)
  -f, --format FORMAT  Output format: json (default), raw, pretty
  -h, --help           Show this help message

ENVIRONMENT VARIABLES:
  UNRAID_URL       Default Unraid server URL
  UNRAID_API_KEY   Default API key

EXAMPLES:
  # Get system status
  $0 -u https://unraid.local/graphql -k YOUR_KEY -q "{ online }"

  # Use environment variables
  export UNRAID_URL="https://unraid.local/graphql"
  export UNRAID_API_KEY="your-api-key"
  $0 -q "{ metrics { cpu { percentTotal } } }"

  # Pretty print output
  $0 -q "{ array { state } }" -f pretty

EOF
  exit 1
}

# Default values
URL="${UNRAID_URL:-}"
API_KEY="${UNRAID_API_KEY:-}"
QUERY=""
FORMAT="json"

# Parse arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    -u|--url)
      URL="$2"
      shift 2
      ;;
    -k|--key)
      API_KEY="$2"
      shift 2
      ;;
    -q|--query)
      QUERY="$2"
      shift 2
      ;;
    -f|--format)
      FORMAT="$2"
      shift 2
      ;;
    -h|--help)
      usage
      ;;
    *)
      echo "Unknown option: $1"
      usage
      ;;
  esac
done

# Validate required arguments
if [[ -z "$URL" ]]; then
  echo "Error: Unraid URL is required (use -u or set UNRAID_URL)"
  exit 1
fi

if [[ -z "$API_KEY" ]]; then
  echo "Error: API key is required (use -k or set UNRAID_API_KEY)"
  exit 1
fi

if [[ -z "$QUERY" ]]; then
  echo "Error: GraphQL query is required (use -q)"
  exit 1
fi

# Make the request
RESPONSE=$(curl -skL -X POST "$URL" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $API_KEY" \
  -d "{\"query\":\"$QUERY\"}")

# Check for errors
if echo "$RESPONSE" | jq -e '.errors' > /dev/null 2>&1; then
|
||||||
|
# If we have data despite errors, and --ignore-errors is set, continue
|
||||||
|
if [[ "$IGNORE_ERRORS" == "true" ]] && echo "$RESPONSE" | jq -e '.data' > /dev/null 2>&1; then
|
||||||
|
echo "GraphQL Warning:" >&2
|
||||||
|
echo "$RESPONSE" | jq -r '.errors[0].message' >&2
|
||||||
|
else
|
||||||
|
echo "GraphQL Error:" >&2
|
||||||
|
echo "$RESPONSE" | jq -r '.errors[0].message' >&2
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Output based on format
|
||||||
|
case "$FORMAT" in
|
||||||
|
json)
|
||||||
|
echo "$RESPONSE"
|
||||||
|
;;
|
||||||
|
raw)
|
||||||
|
echo "$RESPONSE" | jq -r '.data'
|
||||||
|
;;
|
||||||
|
pretty)
|
||||||
|
echo "$RESPONSE" | jq '.'
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo "Unknown format: $FORMAT" >&2
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
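The error-handling branch in the script above distinguishes hard failures from partial-data warnings. A minimal Python sketch of the same decision logic (the function name and return shape are illustrative, not from the repo):

```python
import json

def check_graphql_errors(raw: str, ignore_errors: bool = False) -> tuple:
    """Classify a GraphQL response body the way the script's jq checks do."""
    resp = json.loads(raw)
    if resp.get("errors"):
        message = resp["errors"][0]["message"]
        # Partial data plus the opt-in flag downgrades the failure to a warning
        if ignore_errors and resp.get("data"):
            return ("warning", message)
        return ("error", message)
    return ("ok", None)

print(check_graphql_errors('{"errors":[{"message":"Unauthorized"}]}'))
# ('error', 'Unauthorized')
```

GraphQL servers routinely return HTTP 200 with an `errors` array, so checking the body (not the status code) is what makes this reliable.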
skills/unraid/setup.sh (new executable file, 39 lines)
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash
+# Setup script for Unraid MCP Plugin
+# Installs the MCP server dependencies
+
+set -euo pipefail
+
+PLUGIN_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+PROJECT_ROOT="$(cd "$PLUGIN_ROOT/../.." && pwd)"
+
+echo "=== Unraid MCP Plugin Setup ==="
+echo ""
+echo "Plugin root: $PLUGIN_ROOT"
+echo "Project root: $PROJECT_ROOT"
+echo ""
+
+# Check if uv is installed
+if ! command -v uv &> /dev/null; then
+  echo "Error: 'uv' is not installed."
+  echo "Install it with: curl -LsSf https://astral.sh/uv/install.sh | sh"
+  exit 1
+fi
+
+echo "✓ uv is installed"
+
+# Navigate to project root and install dependencies
+cd "$PROJECT_ROOT"
+
+echo "Installing Python dependencies..."
+uv sync
+
+echo ""
+echo "✓ Setup complete!"
+echo ""
+echo "Configure your Unraid server by setting these environment variables:"
+echo "  export UNRAID_API_URL='http://your-unraid-server/graphql'"
+echo "  export UNRAID_API_KEY='your-api-key'"
+echo ""
+echo "Test the MCP server with:"
+echo "  uv run unraid-mcp-server"
@@ -52,6 +52,20 @@ class TestAnalyzeDiskHealth:
         disks = [{"status": "DISK_OK", "warning": 45}]
         result = _analyze_disk_health(disks)
         assert result["warning"] == 1
+        assert result["critical"] == 0
+
+    def test_counts_critical_disks(self) -> None:
+        disks = [{"status": "DISK_OK", "critical": 55}]
+        result = _analyze_disk_health(disks)
+        assert result["critical"] == 1
+        assert result["warning"] == 0
+        assert result["healthy"] == 0
+
+    def test_critical_takes_precedence_over_warning(self) -> None:
+        disks = [{"status": "DISK_OK", "warning": 45, "critical": 55}]
+        result = _analyze_disk_health(disks)
+        assert result["critical"] == 1
+        assert result["warning"] == 0
+
     def test_counts_missing_disks(self) -> None:
         disks = [{"status": "DISK_NP"}]
@@ -76,6 +90,16 @@ class TestProcessArrayStatus:
         assert result["summary"]["state"] == "STARTED"
         assert result["summary"]["overall_health"] == "HEALTHY"
+
+    def test_critical_disk_threshold_array(self) -> None:
+        raw = {
+            "state": "STARTED",
+            "parities": [],
+            "disks": [{"status": "DISK_OK", "critical": 55}],
+            "caches": [],
+        }
+        result = _process_array_status(raw)
+        assert result["summary"]["overall_health"] == "CRITICAL"
+
     def test_degraded_array(self) -> None:
         raw = {
             "state": "STARTED",
@@ -60,6 +60,15 @@ class TestStorageValidation:
         with pytest.raises(ToolError, match="log_path must start with"):
             await tool_fn(action="logs", log_path="/etc/shadow")
+
+    async def test_logs_rejects_path_traversal(self, _mock_graphql: AsyncMock) -> None:
+        tool_fn = _make_tool()
+        # Traversal that escapes /var/log/ to reach /etc/shadow
+        with pytest.raises(ToolError, match="log_path must start with"):
+            await tool_fn(action="logs", log_path="/var/log/../../etc/shadow")
+        # Traversal that escapes /mnt/ to reach /etc/passwd
+        with pytest.raises(ToolError, match="log_path must start with"):
+            await tool_fn(action="logs", log_path="/mnt/../etc/passwd")
+
     async def test_logs_allows_valid_paths(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}
         tool_fn = _make_tool()
@@ -42,7 +42,7 @@ class TestUsersValidation:

 class TestUsersActions:
     async def test_me(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"me": {"id": "u:1", "name": "root", "role": "ADMIN"}}
+        _mock_graphql.return_value = {"me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}}
         tool_fn = _make_tool()
         result = await tool_fn(action="me")
         assert result["name"] == "root"

@@ -56,19 +56,19 @@
         assert len(result["users"]) == 2

     async def test_get(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"user": {"id": "u:1", "name": "root", "role": "ADMIN"}}
+        _mock_graphql.return_value = {"user": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}}
         tool_fn = _make_tool()
         result = await tool_fn(action="get", user_id="u:1")
         assert result["name"] == "root"

     async def test_add(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "newuser", "role": "USER"}}
+        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "newuser", "description": "", "roles": ["USER"]}}
         tool_fn = _make_tool()
         result = await tool_fn(action="add", name="newuser", password="pass123")
         assert result["success"] is True

     async def test_add_with_role(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "admin2", "role": "ADMIN"}}
+        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "admin2", "description": "", "roles": ["ADMIN"]}}
         tool_fn = _make_tool()
         result = await tool_fn(action="add", name="admin2", password="pass123", role="admin")
         assert result["success"] is True

@@ -76,10 +76,12 @@
         assert call_args[0][1]["input"]["role"] == "ADMIN"

     async def test_delete(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {"deleteUser": True}
+        _mock_graphql.return_value = {"deleteUser": {"id": "u:2", "name": "guest"}}
         tool_fn = _make_tool()
         result = await tool_fn(action="delete", user_id="u:2", confirm=True)
         assert result["success"] is True
+        call_args = _mock_graphql.call_args
+        assert call_args[0][1]["input"]["id"] == "u:2"

     async def test_cloud(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"cloud": {"status": "connected", "apiKey": "***"}}

@@ -98,3 +100,31 @@
         tool_fn = _make_tool()
         result = await tool_fn(action="origins")
         assert len(result["origins"]) == 2
+
+
+class TestUsersNoneHandling:
+    """Verify actions return empty dict (not TypeError) when API returns None."""
+
+    async def test_me_returns_none(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"me": None}
+        tool_fn = _make_tool()
+        result = await tool_fn(action="me")
+        assert result == {}
+
+    async def test_get_returns_none(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"user": None}
+        tool_fn = _make_tool()
+        result = await tool_fn(action="get", user_id="u:1")
+        assert result == {}
+
+    async def test_cloud_returns_none(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"cloud": None}
+        tool_fn = _make_tool()
+        result = await tool_fn(action="cloud")
+        assert result == {}
+
+    async def test_remote_access_returns_none(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"remoteAccess": None}
+        tool_fn = _make_tool()
+        result = await tool_fn(action="remote_access")
+        assert result == {}
@@ -8,6 +8,7 @@ error handling, reconnection logic, and authentication.
 import asyncio
 import json
 import os
+import ssl
 from datetime import datetime
 from typing import Any

@@ -153,6 +154,16 @@
         logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}")
         logger.debug(f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}")

+        # Build SSL context for wss:// connections
+        ssl_context = None
+        if ws_url.startswith('wss://'):
+            if isinstance(UNRAID_VERIFY_SSL, str):
+                ssl_context = ssl.create_default_context(cafile=UNRAID_VERIFY_SSL)
+            elif UNRAID_VERIFY_SSL:
+                ssl_context = ssl.create_default_context()
+            else:
+                ssl_context = ssl._create_unverified_context()
+
         # Connection with timeout
         connect_timeout = 10
         logger.debug(f"[WEBSOCKET:{subscription_name}] Connection timeout: {connect_timeout}s")

@@ -163,7 +174,7 @@
             ping_interval=20,
             ping_timeout=10,
             close_timeout=10,
-            ssl=UNRAID_VERIFY_SSL
+            ssl=ssl_context
         ) as websocket:

             selected_proto = websocket.subprotocol or "none"
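The three-way branch above is what lets `UNRAID_VERIFY_SSL` carry a CA-bundle path, a plain boolean, or nothing at all. A standalone sketch of the same logic, with the setting inlined as a parameter (the helper name is illustrative):

```python
import ssl

def build_ssl_context(ws_url: str, verify):
    """Return an SSL context for wss:// URLs, or None for plain ws://."""
    if not ws_url.startswith("wss://"):
        return None
    if isinstance(verify, str):
        # A string is treated as a path to a CA bundle file
        return ssl.create_default_context(cafile=verify)
    if verify:
        return ssl.create_default_context()  # system trust store
    return ssl._create_unverified_context()  # verification disabled
```

The fix then passes this context via `ssl=ssl_context` instead of handing the raw `UNRAID_VERIFY_SSL` value to the websocket client, which could not interpret a CA-bundle path on its own.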
@@ -308,7 +308,13 @@ def register_docker_tool(mcp: FastMCP) -> None:
         }

         docker_data = data.get("docker", {})
-        result = docker_data.get(action, docker_data.get("removeContainer"))
+        # Map action names to GraphQL response field names where they differ
+        response_field_map = {
+            "update": "updateContainer",
+            "remove": "removeContainer",
+        }
+        field = response_field_map.get(action, action)
+        result = docker_data.get(field)
         return {
             "success": True,
             "action": action,
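Why the map matters: the old lookup `docker_data.get(action, docker_data.get("removeContainer"))` fell back to the remove payload for any action whose response field differs from its name, so an `update` could silently report a remove result. A self-contained sketch of the fixed lookup (the response shapes here are hypothetical):

```python
RESPONSE_FIELD_MAP = {"update": "updateContainer", "remove": "removeContainer"}

def extract_action_result(docker_data: dict, action: str):
    # Actions like "start"/"stop" share their response field name;
    # "update" and "remove" need the explicit mapping.
    field = RESPONSE_FIELD_MAP.get(action, action)
    return docker_data.get(field)
```

With this shape, a mismatched action now yields `None` (surfaced as missing data) rather than the wrong container's payload.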
@@ -204,13 +204,18 @@ def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:

 def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
     """Analyze health status of disk arrays."""
-    counts = {"healthy": 0, "failed": 0, "missing": 0, "new": 0, "warning": 0, "unknown": 0}
+    counts = {"healthy": 0, "failed": 0, "missing": 0, "new": 0, "warning": 0, "critical": 0, "unknown": 0}
     for disk in disks:
         status = disk.get("status", "").upper()
         warning = disk.get("warning")
         critical = disk.get("critical")
         if status == "DISK_OK":
-            counts["warning" if (warning or critical) else "healthy"] += 1
+            if critical:
+                counts["critical"] += 1
+            elif warning:
+                counts["warning"] += 1
+            else:
+                counts["healthy"] += 1
         elif status in ("DISK_DSBL", "DISK_INVALID"):
             counts["failed"] += 1
         elif status == "DISK_NP":

@@ -254,10 +259,11 @@ def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]:
             health_summary[label] = _analyze_disk_health(raw[key])

     total_failed = sum(h.get("failed", 0) for h in health_summary.values())
+    total_critical = sum(h.get("critical", 0) for h in health_summary.values())
     total_missing = sum(h.get("missing", 0) for h in health_summary.values())
     total_warning = sum(h.get("warning", 0) for h in health_summary.values())

-    if total_failed > 0:
+    if total_failed > 0 or total_critical > 0:
         overall = "CRITICAL"
     elif total_missing > 0:
         overall = "DEGRADED"
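The split counters above can be exercised in isolation. This sketch keeps only the DISK_OK branch of the fixed bucketing (failed/missing/new/unknown handling from the real function is omitted); note critical outranks warning, so each disk lands in exactly one bucket:

```python
def analyze_disk_health(disks: list) -> dict:
    counts = {"healthy": 0, "warning": 0, "critical": 0}
    for disk in disks:
        if disk.get("status", "").upper() != "DISK_OK":
            continue  # non-OK statuses handled elsewhere in the real function
        if disk.get("critical"):
            counts["critical"] += 1  # critical temperature threshold crossed
        elif disk.get("warning"):
            counts["warning"] += 1
        else:
            counts["healthy"] += 1
    return counts
```

Before the fix, a disk past its critical threshold was lumped into `warning`, so the array summary could report WARNING when it should escalate to CRITICAL.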
@@ -111,7 +111,7 @@ def register_keys_tool(mcp: FastMCP) -> None:
         if action == "update":
             if not key_id:
                 raise ToolError("key_id is required for 'update' action")
-            input_data = {"id": key_id}
+            input_data: dict[str, Any] = {"id": key_id}
             if name:
                 input_data["name"] = name
             if roles:

@@ -130,6 +130,9 @@ def register_keys_tool(mcp: FastMCP) -> None:
             data = await make_graphql_request(
                 MUTATIONS["delete"], {"input": {"ids": [key_id]}}
             )
+            result = data.get("deleteApiKeys")
+            if not result:
+                raise ToolError(f"Failed to delete API key '{key_id}': no confirmation from server")
             return {
                 "success": True,
                 "message": f"API key '{key_id}' deleted",
@@ -100,7 +100,9 @@ def register_rclone_tool(mcp: FastMCP) -> None:
                 MUTATIONS["create_remote"],
                 {"input": {"name": name, "type": provider_type, "config": config_data}},
             )
-            remote = data.get("rclone", {}).get("createRCloneRemote", {})
+            remote = data.get("rclone", {}).get("createRCloneRemote")
+            if not remote:
+                raise ToolError(f"Failed to create remote '{name}': no confirmation from server")
             return {
                 "success": True,
                 "message": f"Remote '{name}' created successfully",
@@ -4,6 +4,7 @@ Provides the `unraid_storage` tool with 6 actions for shares, physical disks,
 unassigned devices, log files, and log content retrieval.
 """

+import posixpath
 from typing import Any, Literal

 from fastmcp import FastMCP

@@ -99,11 +100,14 @@ def register_storage_tool(mcp: FastMCP) -> None:
             if not log_path:
                 raise ToolError("log_path is required for 'logs' action")
             _ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")
-            if not any(log_path.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
+            # Normalize path to prevent traversal attacks (e.g. /var/log/../../etc/shadow)
+            normalized = posixpath.normpath(log_path)
+            if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
                 raise ToolError(
                     f"log_path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}. "
                     f"Use log_files action to discover valid paths."
                 )
+            log_path = normalized

         query = QUERIES[action]
         variables: dict[str, Any] | None = None
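The normalization step can be checked in isolation: `posixpath.normpath` collapses `..` segments lexically before the prefix comparison runs, so a path that starts inside an allowed root but escapes it no longer passes. The prefix list below is copied from the diff; the helper name is illustrative:

```python
import posixpath

ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")

def is_allowed_log_path(log_path: str) -> bool:
    # "/var/log/../../etc/shadow" normalizes to "/etc/shadow",
    # which no longer matches any allowed prefix.
    normalized = posixpath.normpath(log_path)
    return any(normalized.startswith(p) for p in ALLOWED_LOG_PREFIXES)
```

One caveat worth noting: `normpath` is purely lexical and does not resolve symlinks, so a symlink inside an allowed root could still point elsewhere; that residual risk sits on the server side.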
@@ -15,17 +15,17 @@ from ..core.exceptions import ToolError
 QUERIES: dict[str, str] = {
     "me": """
         query GetMe {
-            me { id name role email }
+            me { id name description roles }
         }
     """,
     "list": """
         query ListUsers {
-            users { id name role email }
+            users { id name description roles }
         }
     """,
     "get": """
-        query GetUser($id: PrefixedID!) {
-            user(id: $id) { id name role email }
+        query GetUser($id: ID!) {
+            user(id: $id) { id name description roles }
         }
     """,
     "cloud": """

@@ -47,13 +47,13 @@ QUERIES: dict[str, str] = {

 MUTATIONS: dict[str, str] = {
     "add": """
-        mutation AddUser($input: AddUserInput!) {
-            addUser(input: $input) { id name role }
+        mutation AddUser($input: addUserInput!) {
+            addUser(input: $input) { id name description roles }
         }
     """,
     "delete": """
-        mutation DeleteUser($id: PrefixedID!) {
-            deleteUser(id: $id)
+        mutation DeleteUser($input: deleteUserInput!) {
+            deleteUser(input: $input) { id name }
         }
     """,
 }

@@ -101,7 +101,7 @@ def register_users_tool(mcp: FastMCP) -> None:

         if action == "me":
             data = await make_graphql_request(QUERIES["me"])
-            return dict(data.get("me", {}))
+            return data.get("me") or {}

         if action == "list":
             data = await make_graphql_request(QUERIES["list"])

@@ -112,7 +112,7 @@ def register_users_tool(mcp: FastMCP) -> None:
             if not user_id:
                 raise ToolError("user_id is required for 'get' action")
             data = await make_graphql_request(QUERIES["get"], {"id": user_id})
-            return dict(data.get("user", {}))
+            return data.get("user") or {}

         if action == "add":
             if not name or not password:

@@ -132,7 +132,7 @@ def register_users_tool(mcp: FastMCP) -> None:
             if not user_id:
                 raise ToolError("user_id is required for 'delete' action")
             data = await make_graphql_request(
-                MUTATIONS["delete"], {"id": user_id}
+                MUTATIONS["delete"], {"input": {"id": user_id}}
             )
             return {
                 "success": True,

@@ -141,11 +141,11 @@ def register_users_tool(mcp: FastMCP) -> None:

         if action == "cloud":
             data = await make_graphql_request(QUERIES["cloud"])
-            return dict(data.get("cloud", {}))
+            return data.get("cloud") or {}

         if action == "remote_access":
             data = await make_graphql_request(QUERIES["remote_access"])
-            return dict(data.get("remoteAccess", {}))
+            return data.get("remoteAccess") or {}

         if action == "origins":
             data = await make_graphql_request(QUERIES["origins"])
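The `or {}` change above matters because `dict(data.get("me", {}))` only defends against a *missing* key; an explicit JSON `null` still reaches `dict(None)` and raises `TypeError`. A minimal demonstration of the pattern (helper name illustrative):

```python
def unwrap(payload: dict, key: str) -> dict:
    # .get(key, {}) returns None when the key exists with a null value,
    # so "or {}" is needed to cover both missing and null.
    return payload.get(key) or {}
```

This is exactly what the new `TestUsersNoneHandling` tests pin down for the `me`, `get`, `cloud`, and `remote_access` actions.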
@@ -105,15 +105,16 @@ def register_vm_tool(mcp: FastMCP) -> None:

         if action == "list":
             data = await make_graphql_request(QUERIES["list"])
-            if data.get("vms") and data["vms"].get("domains"):
-                vms = data["vms"]["domains"]
-                return {"vms": list(vms) if isinstance(vms, list) else []}
+            if data.get("vms"):
+                vms = data["vms"].get("domains") or data["vms"].get("domain")
+                if vms:
+                    return {"vms": list(vms) if isinstance(vms, list) else []}
             return {"vms": []}

         if action == "details":
             data = await make_graphql_request(QUERIES["details"])
             if data.get("vms"):
-                vms = data["vms"].get("domains") or []
+                vms = data["vms"].get("domains") or data["vms"].get("domain") or []
                 for vm in vms:
                     if (
                         vm.get("uuid") == vm_id
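The restored fallback covers API versions that expose the VM list under `domain` rather than `domains`. A standalone sketch of the extraction (helper name and payload shapes are illustrative):

```python
def extract_vms(data: dict) -> list:
    vms_container = data.get("vms")
    if not vms_container:
        return []
    # Newer API field first, then the legacy singular field
    vms = vms_container.get("domains") or vms_container.get("domain")
    return list(vms) if isinstance(vms, list) else []
```

The `isinstance` guard keeps a malformed scalar value from being returned as if it were a list of VMs.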
uv.lock (generated, 2 lines changed)
@@ -1985,7 +1985,7 @@ requires-dist = [
     { name = "black", marker = "extra == 'dev'", specifier = ">=25.1.0" },
     { name = "black", marker = "extra == 'lint'", specifier = ">=25.1.0" },
     { name = "build", marker = "extra == 'dev'", specifier = ">=1.2.2" },
-    { name = "fastapi", specifier = ">=0.116.1" },
+    { name = "fastapi", specifier = ">=0.115.0" },
     { name = "fastmcp", specifier = ">=2.11.2" },
     { name = "httpx", specifier = ">=0.28.1" },
     { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.4.2" },