feat: add API key bearer token authentication

- ApiKeyVerifier(TokenVerifier) — validates Authorization: Bearer <key>
  against UNRAID_MCP_API_KEY; guards against empty-key bypass
- _build_auth() replaces module-level _build_google_auth() call:
  returns MultiAuth(server=google, verifiers=[api_key]) when both set,
  GoogleProvider alone, ApiKeyVerifier alone, or None
- settings.py: add UNRAID_MCP_API_KEY + is_api_key_auth_configured()
  + api_key_auth_enabled in get_config_summary()
- run_server(): improved auth status logging for all three states
- tests/test_api_key_auth.py: 9 tests covering verifier + _build_auth
- .env.example: add UNRAID_MCP_API_KEY section
- docs/GOOGLE_OAUTH.md: add API Key section
- README.md / CLAUDE.md: rename section, document both auth methods
- Fix pre-existing: test_health.py patched cache_middleware/error_middleware
  now match renamed _cache_middleware/_error_middleware in server.py
Jacob Magar
2026-03-16 11:11:38 -04:00
parent 6f7a58a0f9
commit cc24f1ec62
16 changed files with 406 additions and 69 deletions


@@ -61,4 +61,16 @@ UNRAID_MAX_RECONNECT_ATTEMPTS=10
# GOOGLE_CLIENT_ID=
# GOOGLE_CLIENT_SECRET=
# UNRAID_MCP_BASE_URL=http://10.1.0.2:6970
# UNRAID_MCP_JWT_SIGNING_KEY=<generate with command above>
# API Key Authentication (Optional)
# -----------------------------------
# Alternative to Google OAuth — clients present this key as a bearer token:
# Authorization: Bearer <UNRAID_MCP_API_KEY>
#
# Can be the same value as UNRAID_API_KEY (reuse your Unraid key), or a
# separate dedicated secret. Set both GOOGLE_CLIENT_ID and UNRAID_MCP_API_KEY
# to accept either auth method (MultiAuth).
#
# Leave empty to disable API key auth.
# UNRAID_MCP_API_KEY=
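A dedicated secret can be generated the same way the docs generate the JWT signing key; this standalone sketch just shows the shape of the value (the variable name here is illustrative):

```python
import secrets

# Generate a 64-character hex secret suitable for UNRAID_MCP_API_KEY
# (same generator the docs suggest for the JWT signing key).
api_key = secrets.token_hex(32)
print(len(api_key))  # 64
```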


@@ -61,29 +61,33 @@ docker compose down
- `UNRAID_MCP_PORT`: Server port (default: 6970)
- `UNRAID_MCP_HOST`: Server host (default: 0.0.0.0)

-### Google OAuth (Optional — protects the HTTP server)
+### Authentication (Optional — protects the HTTP server)

-When `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, and `UNRAID_MCP_BASE_URL` are all set,
-the MCP server requires Google login before any tool call.
-
-| Env Var | Required | Purpose |
-|---------|----------|---------|
-| `GOOGLE_CLIENT_ID` | For OAuth | Google OAuth 2.0 Client ID |
-| `GOOGLE_CLIENT_SECRET` | For OAuth | Google OAuth 2.0 Client Secret |
-| `UNRAID_MCP_BASE_URL` | For OAuth | Public URL of this server (e.g. `http://10.1.0.2:6970`) |
-| `UNRAID_MCP_JWT_SIGNING_KEY` | Recommended | Stable 32+ char secret — prevents token invalidation on restart |
-
-**Google Cloud Console setup:**
-1. APIs & Services → Credentials → Create OAuth 2.0 Client ID (Web application)
-2. Authorized redirect URIs: `<UNRAID_MCP_BASE_URL>/auth/callback`
-3. Copy Client ID + Secret to `~/.unraid-mcp/.env`
+Two independent methods. Use either or both — when both are set, `MultiAuth` accepts either.
+
+**Google OAuth** — requires all three vars:
+
+| Env Var | Purpose |
+|---------|---------|
+| `GOOGLE_CLIENT_ID` | Google OAuth 2.0 Client ID |
+| `GOOGLE_CLIENT_SECRET` | Google OAuth 2.0 Client Secret |
+| `UNRAID_MCP_BASE_URL` | Public URL of this server (e.g. `http://10.1.0.2:6970`) |
+| `UNRAID_MCP_JWT_SIGNING_KEY` | Stable 32+ char secret — prevents token invalidation on restart |
+
+Google Cloud Console setup: APIs & Services → Credentials → OAuth 2.0 Client ID (Web application) → Authorized redirect URIs: `<UNRAID_MCP_BASE_URL>/auth/callback`
+
+**API Key** — clients present as `Authorization: Bearer <key>`:
+
+| Env Var | Purpose |
+|---------|---------|
+| `UNRAID_MCP_API_KEY` | Static bearer token (can be same value as `UNRAID_API_KEY`) |

**Generate a stable JWT signing key:**
```bash
python3 -c "import secrets; print(secrets.token_hex(32))"
```

-**Omit `GOOGLE_CLIENT_ID` to run without auth** (default — preserves existing behaviour).
+**Omit all auth vars to run without auth** (default — open server).

**Full guide:** [`docs/GOOGLE_OAUTH.md`](docs/GOOGLE_OAUTH.md)
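For illustration, a raw HTTP client would attach the key like this (hypothetical URL and key; real MCP clients normally set the header through their transport configuration, and `urllib` is used here only to show the header shape):

```python
import urllib.request

# Attach the API key as a standard OAuth-style bearer token.
# The host/port mirror the docs' example UNRAID_MCP_BASE_URL.
req = urllib.request.Request(
    "http://10.1.0.2:6970/mcp",
    headers={"Authorization": "Bearer my-secret-api-key"},
)
print(req.get_header("Authorization"))  # Bearer my-secret-api-key
```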


@@ -246,13 +246,16 @@ UNRAID_MAX_RECONNECT_ATTEMPTS=10 # Max WebSocket reconnection attempts (def
---

-## 🔐 Google OAuth (Optional)
+## 🔐 Authentication (Optional)
+
+Two independent auth methods — use either or both.
+
+### Google OAuth

Protect the HTTP server with Google OAuth 2.0 — clients must complete a Google login before any tool call is executed.

-Add these to `~/.unraid-mcp/.env`:
```bash
+# Add to ~/.unraid-mcp/.env
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=GOCSPX-your-secret
UNRAID_MCP_BASE_URL=http://10.1.0.2:6970   # public URL of this server
@@ -266,7 +269,18 @@ UNRAID_MCP_JWT_SIGNING_KEY=<64-char-hex> # prevents token invalidation o
4. Generate a signing key: `python3 -c "import secrets; print(secrets.token_hex(32))"`
5. Restart the server

-Omit `GOOGLE_CLIENT_ID` to run without authentication (default behavior).
+### API Key (Bearer Token)
Simpler option for headless/machine access — no browser flow required:
```bash
# Add to ~/.unraid-mcp/.env
UNRAID_MCP_API_KEY=your-secret-token # can be same value as UNRAID_API_KEY
```
Clients present it as `Authorization: Bearer <UNRAID_MCP_API_KEY>`. Set both `GOOGLE_CLIENT_ID` and `UNRAID_MCP_API_KEY` to accept either method simultaneously.
Omit both to run without authentication (default — open server).
**Full guide:** [`docs/GOOGLE_OAUTH.md`](docs/GOOGLE_OAUTH.md)
@@ -326,7 +340,7 @@ The server exposes two classes of MCP resources backed by persistent WebSocket c
**`unraid://logs/stream`** — Live log file tail (path controlled by `UNRAID_AUTOSTART_LOG_PATH`)
> **Note**: Resources return cached data from persistent WebSocket subscriptions. A `{"status": "connecting"}` placeholder is returned while the subscription initializes — retry in a moment.
>
> **`log_tail` and `notification_feed`** are accessible as tool subactions (`unraid(action="live", subaction="log_tail")`) but are not registered as MCP resources — they use transient one-shot subscriptions and require parameters. > **`log_tail` and `notification_feed`** are accessible as tool subactions (`unraid(action="live", subaction="log_tail")`) but are not registered as MCP resources — they use transient one-shot subscriptions and require parameters.
---


@@ -157,6 +157,29 @@ Check `logs/unraid-mcp.log` or `docker compose logs unraid-mcp` for startup erro
---
## API Key Authentication (Alternative / Combined)
For machine-to-machine access (scripts, CI, other agents) without a browser-based OAuth flow, set `UNRAID_MCP_API_KEY`:
```bash
# In ~/.unraid-mcp/.env
UNRAID_MCP_API_KEY=your-secret-token
```
Clients present it as a standard bearer token:
```
Authorization: Bearer your-secret-token
```
**Combining with Google OAuth**: set both `GOOGLE_CLIENT_ID` and `UNRAID_MCP_API_KEY`. The server activates `MultiAuth` and accepts either method — Google OAuth for interactive clients, API key for headless clients.
**Reusing the Unraid API key**: you can set `UNRAID_MCP_API_KEY` to the same value as `UNRAID_API_KEY` for simplicity. The two variables are kept separate so the MCP-facing secret can be rotated independently of the Unraid key.
**Standalone API key** (no Google OAuth): set only `UNRAID_MCP_API_KEY`. The server validates bearer tokens directly with no OAuth redirect flow.
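Taken together, the three modes above follow a simple precedence. This standalone sketch mirrors the documented decision (it is not the server's actual `_build_auth()`; the return strings are illustrative labels):

```python
def select_auth(google_configured: bool, api_key: str) -> str:
    """Mirror the documented precedence: both -> MultiAuth, one -> that method, none -> open."""
    if google_configured and api_key:
        return "multi"    # MultiAuth: either credential accepted
    if api_key:
        return "api-key"  # bearer-token verification only
    if google_configured:
        return "google"   # OAuth browser flow only
    return "open"         # no auth configured

print(select_auth(True, "tok"))  # multi
print(select_auth(False, ""))    # open
```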
---
## Security Notes
- OAuth protects the MCP HTTP interface — the Unraid GraphQL API itself still uses `UNRAID_API_KEY`


@@ -1,6 +1,6 @@
# Unraid API - Complete Reference Guide
-> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See `SKILL.md` for the correct calling convention.
+> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See [`SKILL.md`](../SKILL.md) for the correct calling convention.
**Tested on:** Unraid 7.2 x86_64
**Date:** 2026-01-21


@@ -30,9 +30,8 @@ unraid(action="array", subaction="stop_array", confirm=True) # ⚠️ Stop
```python
unraid(action="disk", subaction="log_files")  # List available logs
-unraid(action="disk", subaction="logs", log_path="syslog", tail_lines=50)  # Read syslog
+unraid(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=50)  # Read syslog
-unraid(action="disk", subaction="logs", log_path="/var/log/syslog")  # Full path also works
-unraid(action="live", subaction="log_tail", log_path="/var/log/syslog")  # Live tail
+unraid(action="live", subaction="log_tail", path="/var/log/syslog")  # Live tail
```
### Docker Containers
@@ -64,7 +63,7 @@ unraid(action="notification", subaction="overview")
unraid(action="notification", subaction="list", list_type="UNREAD", limit=10)
unraid(action="notification", subaction="archive", notification_id="<id>")
unraid(action="notification", subaction="create", title="Test", subject="Subject",
-       description="Body", importance="normal")
+       description="Body", importance="INFO")
```

### API Keys


@@ -26,15 +26,15 @@ This writes `UNRAID_API_URL` and `UNRAID_API_KEY` to `~/.unraid-mcp/.env`. Re-ru
unraid(action="health", subaction="test_connection")
```
-2. Full diagnostic report:
+1. Full diagnostic report:
```python
unraid(action="health", subaction="diagnose")
```
-3. Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint.
+1. Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint.
-4. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations).
+1. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations).

---


@@ -134,6 +134,11 @@ check_prerequisites() {
    missing=true
fi
if ! command -v jq &>/dev/null; then
log_error "jq not found in PATH. Install it and re-run."
missing=true
fi
if [[ ! -f "${PROJECT_DIR}/pyproject.toml" ]]; then
    log_error "pyproject.toml not found at ${PROJECT_DIR}. Wrong directory?"
    missing=true
@@ -181,10 +186,12 @@ smoke_test_server() {
import sys, json
try:
    d = json.load(sys.stdin)
-    if 'status' in d or 'success' in d or 'error' in d:
+    if 'error' in d:
+        print('error: tool returned error key — ' + str(d.get('error', '')))
+    elif 'status' in d or 'success' in d:
        print('ok')
    else:
-        print('missing: no status/success/error key in response')
+        print('missing: no status/success key in response')
except Exception as e:
    print('parse_error: ' + str(e))
" 2>/dev/null
@@ -253,6 +260,31 @@ run_test() {
    return 1
fi
# Always validate JSON is parseable and not an error payload
local json_check
json_check="$(
printf '%s' "${output}" | python3 -c "
import sys, json
try:
d = json.load(sys.stdin)
if isinstance(d, dict) and ('error' in d or d.get('kind') == 'error'):
print('error: ' + str(d.get('error', d.get('message', 'unknown error'))))
else:
print('ok')
except Exception as e:
print('invalid_json: ' + str(e))
" 2>/dev/null
)" || json_check="parse_error"
if [[ "${json_check}" != "ok" ]]; then
printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \
"${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}"
printf ' response validation failed: %s\n' "${json_check}" | tee -a "${LOG_FILE}"
FAIL_COUNT=$(( FAIL_COUNT + 1 ))
FAIL_NAMES+=("${label}")
return 1
fi
# Validate optional key presence
if [[ -n "${expected_key}" ]]; then
    local key_check
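The JSON gate added to `run_test()` can be exercised outside the script; this sketch mirrors the embedded Python, with made-up payloads:

```python
import json

def validate_response(output: str) -> str:
    """Replicate the run_test() gate: output must parse as JSON and not be an error payload."""
    try:
        d = json.loads(output)
    except Exception as e:
        return "invalid_json: " + str(e)
    if isinstance(d, dict) and ("error" in d or d.get("kind") == "error"):
        return "error: " + str(d.get("error", d.get("message", "unknown error")))
    return "ok"

print(validate_response('{"status": "STARTED"}'))     # ok
print(validate_response('{"error": "auth failed"}'))  # error: auth failed
```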


@@ -36,7 +36,7 @@ def _all_domain_dicts(unraid_mod: object) -> list[tuple[str, dict[str, str]]]:
"""
import types

-m = unraid_mod  # type: ignore[assignment]
+m = unraid_mod
if not isinstance(m, types.ModuleType):
    import importlib
@@ -417,7 +417,6 @@ class TestDockerQueries:
    "details",
    "networks",
    "network_details",
-    "_resolve",
}

assert set(QUERIES.keys()) == expected

tests/test_api_key_auth.py (new file)

@@ -0,0 +1,155 @@
"""Tests for ApiKeyVerifier and _build_auth() in server.py."""
import importlib
from unittest.mock import MagicMock, patch
import pytest
from unraid_mcp.server import ApiKeyVerifier, _build_auth
# ---------------------------------------------------------------------------
# ApiKeyVerifier unit tests
# ---------------------------------------------------------------------------
@pytest.mark.asyncio
async def test_api_key_verifier_accepts_correct_key():
"""Returns AccessToken when the presented token matches the configured key."""
verifier = ApiKeyVerifier("secret-key-abc123")
result = await verifier.verify_token("secret-key-abc123")
assert result is not None
assert result.client_id == "api-key-client"
assert result.token == "secret-key-abc123"
@pytest.mark.asyncio
async def test_api_key_verifier_rejects_wrong_key():
"""Returns None when the token does not match."""
verifier = ApiKeyVerifier("secret-key-abc123")
result = await verifier.verify_token("wrong-key")
assert result is None
@pytest.mark.asyncio
async def test_api_key_verifier_rejects_empty_token():
"""Returns None for an empty string token."""
verifier = ApiKeyVerifier("secret-key-abc123")
result = await verifier.verify_token("")
assert result is None
@pytest.mark.asyncio
async def test_api_key_verifier_empty_key_rejects_empty_token():
"""When initialised with an empty key, even an empty token is rejected.
An empty UNRAID_MCP_API_KEY means auth is disabled — ApiKeyVerifier
should not be instantiated in that case. But if it is, it must not
grant access via an empty bearer token.
"""
verifier = ApiKeyVerifier("")
result = await verifier.verify_token("")
assert result is None
# ---------------------------------------------------------------------------
# _build_auth() integration tests
# ---------------------------------------------------------------------------
def test_build_auth_returns_none_when_nothing_configured(monkeypatch):
"""Returns None when neither Google OAuth nor API key is set."""
monkeypatch.setenv("GOOGLE_CLIENT_ID", "")
monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "")
monkeypatch.setenv("UNRAID_MCP_BASE_URL", "")
monkeypatch.setenv("UNRAID_MCP_API_KEY", "")
import unraid_mcp.config.settings as s
importlib.reload(s)
result = _build_auth()
assert result is None
def test_build_auth_returns_api_key_verifier_when_only_api_key_set(monkeypatch):
"""Returns ApiKeyVerifier when UNRAID_MCP_API_KEY is set but Google OAuth is not."""
monkeypatch.setenv("GOOGLE_CLIENT_ID", "")
monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "")
monkeypatch.setenv("UNRAID_MCP_BASE_URL", "")
monkeypatch.setenv("UNRAID_MCP_API_KEY", "my-secret-api-key")
import unraid_mcp.config.settings as s
importlib.reload(s)
result = _build_auth()
assert isinstance(result, ApiKeyVerifier)
def test_build_auth_returns_google_provider_when_only_oauth_set(monkeypatch):
"""Returns GoogleProvider when Google OAuth vars are set but no API key."""
monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com")
monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret")
monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970")
monkeypatch.setenv("UNRAID_MCP_API_KEY", "")
monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32)
import unraid_mcp.config.settings as s
importlib.reload(s)
mock_provider = MagicMock()
with patch("unraid_mcp.server.GoogleProvider", return_value=mock_provider):
result = _build_auth()
assert result is mock_provider
def test_build_auth_returns_multi_auth_when_both_configured(monkeypatch):
"""Returns MultiAuth when both Google OAuth and UNRAID_MCP_API_KEY are set."""
from fastmcp.server.auth import MultiAuth
monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com")
monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret")
monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970")
monkeypatch.setenv("UNRAID_MCP_API_KEY", "my-secret-api-key")
monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32)
import unraid_mcp.config.settings as s
importlib.reload(s)
mock_provider = MagicMock()
with patch("unraid_mcp.server.GoogleProvider", return_value=mock_provider):
result = _build_auth()
assert isinstance(result, MultiAuth)
# Server is the Google provider
assert result.server is mock_provider
# One additional verifier — the ApiKeyVerifier
assert len(result.verifiers) == 1
assert isinstance(result.verifiers[0], ApiKeyVerifier)
def test_build_auth_multi_auth_api_key_verifier_uses_correct_key(monkeypatch):
"""The ApiKeyVerifier inside MultiAuth is seeded with the configured key."""
monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com")
monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret")
monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970")
monkeypatch.setenv("UNRAID_MCP_API_KEY", "super-secret-token")
monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32)
import unraid_mcp.config.settings as s
importlib.reload(s)
with patch("unraid_mcp.server.GoogleProvider", return_value=MagicMock()):
result = _build_auth()
verifier = result.verifiers[0]
assert verifier._api_key == "super-secret-token"


@@ -3,6 +3,7 @@
from __future__ import annotations

from typing import Any
from unittest.mock import AsyncMock, patch

import pytest


@@ -141,8 +141,8 @@ class TestHealthActions:
"unraid_mcp.subscriptions.utils._analyze_subscription_status", "unraid_mcp.subscriptions.utils._analyze_subscription_status",
    return_value=(0, []),
),
-patch("unraid_mcp.server.cache_middleware", mock_cache),
+patch("unraid_mcp.server._cache_middleware", mock_cache),
-patch("unraid_mcp.server.error_middleware", mock_error),
+patch("unraid_mcp.server._error_middleware", mock_error),
):
    result = await tool_fn(action="health", subaction="diagnose")

assert "subscriptions" in result


@@ -36,6 +36,8 @@ class TestLiveResourcesUseManagerCache:
with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
    mock_mgr.get_resource_data = AsyncMock(return_value=cached)
    mcp = _make_resources()
    # Accessing FastMCP internals intentionally for unit test isolation.
    # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
    resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
    result = await resource.fn()
    assert json.loads(result) == cached
@@ -49,6 +51,8 @@ class TestLiveResourcesUseManagerCache:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.last_error = {}
mcp = _make_resources()
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
result = await resource.fn()
parsed = json.loads(result)
@@ -61,6 +65,8 @@ class TestLiveResourcesUseManagerCache:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mock_mgr.last_error = {action: "WebSocket auth failed"}
mcp = _make_resources()
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
result = await resource.fn()
parsed = json.loads(result)
@@ -96,6 +102,8 @@ class TestLogsStreamResource:
mock_mgr.get_resource_data = AsyncMock(return_value=None)
mcp = _make_resources()
local_provider = mcp.providers[0]
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = local_provider._components["resource:unraid://logs/stream@"]
result = await resource.fn()
parsed = json.loads(result)
@@ -108,6 +116,8 @@ class TestLogsStreamResource:
mock_mgr.get_resource_data = AsyncMock(return_value={})
mcp = _make_resources()
local_provider = mcp.providers[0]
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = local_provider._components["resource:unraid://logs/stream@"]
result = await resource.fn()
assert json.loads(result) == {}
@@ -131,6 +141,8 @@ class TestAutoStartDisabledFallback:
mock_mgr.last_error = {}
mock_mgr.auto_start_enabled = False
mcp = _make_resources()
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
result = await resource.fn()
assert json.loads(result) == fallback_data
@@ -150,6 +162,8 @@ class TestAutoStartDisabledFallback:
mock_mgr.last_error = {}
mock_mgr.auto_start_enabled = False
mcp = _make_resources()
# Accessing FastMCP internals intentionally for unit test isolation.
# This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does.
resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
result = await resource.fn()
assert json.loads(result)["status"] == "connecting"


@@ -98,6 +98,19 @@ def is_google_auth_configured() -> bool:
    return bool(GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET and UNRAID_MCP_BASE_URL)
# API Key Authentication (Optional)
# ----------------------------------
# A static bearer token clients can use instead of (or alongside) Google OAuth.
# Can be set to the same value as UNRAID_API_KEY for simplicity, or a separate
# dedicated secret for MCP access.
UNRAID_MCP_API_KEY = os.getenv("UNRAID_MCP_API_KEY", "")
def is_api_key_auth_configured() -> bool:
    """Return True when UNRAID_MCP_API_KEY is set."""
    return bool(UNRAID_MCP_API_KEY)
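The enable/disable semantics rest on Python truthiness: an unset or empty `UNRAID_MCP_API_KEY` leaves the method off. A standalone sketch (not the module itself, which reads `os.getenv` at import time):

```python
def is_api_key_auth_configured(env: dict[str, str]) -> bool:
    # Empty string (the default) is falsy, so a blank UNRAID_MCP_API_KEY
    # leaves API key auth disabled.
    return bool(env.get("UNRAID_MCP_API_KEY", ""))

print(is_api_key_auth_configured({}))                             # False
print(is_api_key_auth_configured({"UNRAID_MCP_API_KEY": ""}))     # False
print(is_api_key_auth_configured({"UNRAID_MCP_API_KEY": "tok"}))  # True
```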
# Logging Configuration
LOG_LEVEL_STR = os.getenv("UNRAID_MCP_LOG_LEVEL", "INFO").upper()
LOG_FILE_NAME = os.getenv("UNRAID_MCP_LOG_FILE", "unraid-mcp.log")
@@ -180,6 +193,7 @@ def get_config_summary() -> dict[str, Any]:
    "google_auth_enabled": is_google_auth_configured(),
    "google_auth_base_url": UNRAID_MCP_BASE_URL if is_google_auth_configured() else None,
    "jwt_signing_key_configured": bool(UNRAID_MCP_JWT_SIGNING_KEY),
    "api_key_auth_enabled": is_api_key_auth_configured(),
}


@@ -8,6 +8,7 @@ import sys
from typing import Any

from fastmcp import FastMCP
from fastmcp.server.auth import AccessToken, MultiAuth, TokenVerifier
from fastmcp.server.auth.providers.google import GoogleProvider
from fastmcp.server.middleware.caching import CallToolSettings, ResponseCachingMiddleware
from fastmcp.server.middleware.error_handling import ErrorHandlingMiddleware
@@ -41,26 +42,32 @@ _logging_middleware = LoggingMiddleware(
# 2. Catch any unhandled exceptions and convert to proper MCP errors.
#    Tracks error_counts per (exception_type:method) for health diagnose.
-error_middleware = ErrorHandlingMiddleware(
+_error_middleware = ErrorHandlingMiddleware(
    logger=logger,
    include_traceback=True,
)

# 3. Unraid API rate limit: 100 requests per 10 seconds.
-#    Use a sliding window that stays comfortably under that cap.
-_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=90, window_minutes=1)
+#    SlidingWindowRateLimitingMiddleware only accepts window_minutes (int), so express
+#    the 10-second budget as a 1-minute equivalent: 540 req/60 s to stay comfortably
+#    under the 600 req/min ceiling.
+_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=540, window_minutes=1)

# 4. Cap tool responses at 512 KB to protect the client context window.
#    Oversized responses are truncated with a clear suffix rather than erroring.
_response_limiter = ResponseLimitingMiddleware(max_size=512_000)

-# 5. Cache tool calls in-memory (MemoryStore default — no extra deps).
-#    Short 30 s TTL absorbs burst duplicate requests while keeping data fresh.
-#    Destructive calls won't hit the cache in practice (unique confirm=True + IDs).
-cache_middleware = ResponseCachingMiddleware(
+# 5. Cache middleware — all call_tool caching is disabled for the `unraid` tool.
+#    CallToolSettings supports excluded_tools/included_tools by tool name only; there
+#    is no per-argument or per-subaction exclusion mechanism. The cache key is
+#    "{tool_name}:{arguments_str}", so a cached stop("nginx") result would be served
+#    back on a retry within the TTL window even though the container is already stopped.
+#    Mutation subactions (start, stop, restart, reboot, etc.) must never be cached.
+#    Because the consolidated `unraid` tool mixes reads and mutations under one name,
+#    the only safe option is to disable caching for the entire tool.
+_cache_middleware = ResponseCachingMiddleware(
    call_tool_settings=CallToolSettings(
-        ttl=30,
+        enabled=False,
+        included_tools=["unraid"],
    ),
    # Disable caching for list/resource/prompt — those are cheap.
    list_tools_settings={"enabled": False},
@@ -71,6 +78,30 @@ cache_middleware = ResponseCachingMiddleware(
)
class ApiKeyVerifier(TokenVerifier):
    """Bearer token verifier that validates against a static API key.

    Clients present the key as a standard OAuth bearer token:

        Authorization: Bearer <UNRAID_MCP_API_KEY>

    This allows machine-to-machine access (e.g. CI, scripts, other agents)
    without going through the Google OAuth browser flow.
    """

    def __init__(self, api_key: str) -> None:
        super().__init__()
        self._api_key = api_key

    async def verify_token(self, token: str) -> AccessToken | None:
        if self._api_key and token == self._api_key:
            return AccessToken(
                token=token,
                client_id="api-key-client",
                scopes=[],
            )
        return None
def _build_google_auth() -> "GoogleProvider | None": def _build_google_auth() -> "GoogleProvider | None":
"""Build GoogleProvider when OAuth env vars are configured, else return None. """Build GoogleProvider when OAuth env vars are configured, else return None.
@@ -117,21 +148,45 @@ def _build_google_auth() -> "GoogleProvider | None":
     return GoogleProvider(**kwargs)


-# Build auth provider — returns GoogleProvider when configured, None otherwise.
-_google_auth = _build_google_auth()
+def _build_auth() -> "GoogleProvider | ApiKeyVerifier | MultiAuth | None":
+    """Build the active auth stack from environment configuration.
+
+    Returns:
+        - MultiAuth(server=GoogleProvider, verifiers=[ApiKeyVerifier])
+          when both GOOGLE_CLIENT_ID and UNRAID_MCP_API_KEY are set.
+        - GoogleProvider alone when only Google OAuth vars are set.
+        - ApiKeyVerifier alone when only UNRAID_MCP_API_KEY is set.
+        - None when no auth vars are configured (open server).
+    """
+    from .config.settings import UNRAID_MCP_API_KEY, is_api_key_auth_configured
+
+    google = _build_google_auth()
+    api_key = ApiKeyVerifier(UNRAID_MCP_API_KEY) if is_api_key_auth_configured() else None
+
+    if google and api_key:
+        logger.info("Auth: Google OAuth + API key both enabled (MultiAuth)")
+        return MultiAuth(server=google, verifiers=[api_key])
+    if api_key:
+        logger.info("Auth: API key authentication enabled")
+        return api_key
+    return google  # GoogleProvider or None
+
+
+# Build auth stack — GoogleProvider, ApiKeyVerifier, MultiAuth, or None.
+_auth = _build_auth()

 # Initialize FastMCP instance
 mcp = FastMCP(
     name="Unraid MCP Server",
     instructions="Provides tools to interact with an Unraid server's GraphQL API.",
     version=VERSION,
-    auth=_google_auth,
+    auth=_auth,
     middleware=[
         _logging_middleware,
-        error_middleware,
+        _error_middleware,
         _rate_limiter,
         _response_limiter,
-        cache_middleware,
+        _cache_middleware,
     ],
 )
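The four-way selection in `_build_auth()` can be sketched as a pure function over the two configuration flags (names hypothetical, illustration only — the real function returns provider objects, not strings):

```python
# Truth table for the auth stack chosen by _build_auth().
def select_auth(google_configured: bool, api_key_configured: bool) -> str:
    if google_configured and api_key_configured:
        return "multi"      # MultiAuth(server=google, verifiers=[api_key])
    if api_key_configured:
        return "api_key"    # ApiKeyVerifier alone
    if google_configured:
        return "google"     # GoogleProvider alone
    return "none"           # open server

assert select_auth(True, True) == "multi"
assert select_auth(False, True) == "api_key"
assert select_auth(True, False) == "google"
assert select_auth(False, False) == "none"
```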
@@ -185,17 +240,25 @@ def run_server() -> None:
             "Only use this in trusted networks or for development."
         )

-    if _google_auth is not None:
-        from .config.settings import UNRAID_MCP_BASE_URL
-
-        logger.info(
-            "Google OAuth ENABLED — clients must authenticate before calling tools. "
-            f"Redirect URI: {UNRAID_MCP_BASE_URL}/auth/callback"
-        )
+    if _auth is not None:
+        from .config.settings import is_google_auth_configured
+
+        if is_google_auth_configured():
+            from .config.settings import UNRAID_MCP_BASE_URL
+
+            logger.info(
+                "Google OAuth ENABLED — clients must authenticate before calling tools. "
+                f"Redirect URI: {UNRAID_MCP_BASE_URL}/auth/callback"
+            )
+        else:
+            logger.info(
+                "API key authentication ENABLED — present UNRAID_MCP_API_KEY as bearer token."
+            )
     else:
         logger.warning(
             "No authentication configured — MCP server is open to all clients on the network. "
-            "Set GOOGLE_CLIENT_ID + GOOGLE_CLIENT_SECRET + UNRAID_MCP_BASE_URL to enable OAuth."
+            "Set GOOGLE_CLIENT_ID + GOOGLE_CLIENT_SECRET + UNRAID_MCP_BASE_URL to enable Google OAuth, "
+            "or set UNRAID_MCP_API_KEY to enable bearer token authentication."
         )
     logger.info(


@@ -285,6 +285,16 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any
 # ===========================================================================

 _HEALTH_SUBACTIONS: set[str] = {"check", "test_connection", "diagnose", "setup"}

+_HEALTH_QUERIES: dict[str, str] = {
+    "comprehensive_health": (
+        "query ComprehensiveHealthCheck {"
+        " info { machineId time versions { core { unraid } } os { uptime } }"
+        " array { state }"
+        " notifications { overview { unread { alert warning total } } }"
+        " docker { containers(skipCache: true) { id state status } }"
+        " }"
+    ),
+}
+
 _SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}
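The `_HEALTH_QUERIES` entry above relies on Python's implicit concatenation of adjacent string literals, so the stored value is one single-line GraphQL query with no embedded newlines. A minimal sketch (shortened query, illustration only):

```python
# Adjacent string literals concatenate at compile time into one line.
query = (
    "query ComprehensiveHealthCheck {"
    " info { machineId }"
    " }"
)
assert query == "query ComprehensiveHealthCheck { info { machineId } }"
assert "\n" not in query
```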
@@ -346,7 +356,8 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any]
         return await _comprehensive_health_check()

     if subaction == "diagnose":
-        from ..server import cache_middleware, error_middleware
+        from ..server import _cache_middleware as cache_middleware
+        from ..server import _error_middleware as error_middleware
         from ..subscriptions.manager import subscription_manager
         from ..subscriptions.resources import ensure_subscriptions_started
@@ -373,7 +384,7 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any]
                 "call_tool": {
                     "hits": cache_stats.call_tool.get.hit,
                     "misses": cache_stats.call_tool.get.miss,
-                    "puts": cache_stats.call_tool.put.total,
+                    "puts": cache_stats.call_tool.put.count,
                 }
                 if cache_stats.call_tool
                 else {"hits": 0, "misses": 0, "puts": 0},
@@ -403,15 +414,7 @@ async def _comprehensive_health_check() -> dict[str, Any]:
         health_severity = max(health_severity, _SEVERITY.get(level, 0))

     try:
-        query = """
-        query ComprehensiveHealthCheck {
-            info { machineId time versions { core { unraid } } os { uptime } }
-            array { state }
-            notifications { overview { unread { alert warning total } } }
-            docker { containers(skipCache: true) { id state status } }
-        }
-        """
-        data = await make_graphql_request(query)
+        data = await make_graphql_request(_HEALTH_QUERIES["comprehensive_health"])
         api_latency = round((time.time() - start_time) * 1000, 2)

         health_info: dict[str, Any] = {
@@ -738,9 +741,13 @@ _DOCKER_QUERIES: dict[str, str] = {
     "details": "query GetContainerDetails { docker { containers(skipCache: false) { id names image imageId command created ports { ip privatePort publicPort type } sizeRootFs labels state status hostConfig { networkMode } networkSettings mounts autoStart } } }",
     "networks": "query GetDockerNetworks { docker { networks { id name driver scope } } }",
     "network_details": "query GetDockerNetwork { docker { networks { id name driver scope enableIPv6 internal attachable containers options labels } } }",
-    "_resolve": "query ResolveContainerID { docker { containers(skipCache: true) { id names } } }",
 }

+# Internal query used only for container ID resolution — not a public subaction.
+_DOCKER_RESOLVE_QUERY = (
+    "query ResolveContainerID { docker { containers(skipCache: true) { id names } } }"
+)
+
 _DOCKER_MUTATIONS: dict[str, str] = {
     "start": "mutation StartContainer($id: PrefixedID!) { docker { start(id: $id) { id names state status } } }",
     "stop": "mutation StopContainer($id: PrefixedID!) { docker { stop(id: $id) { id names state status } } }",
@@ -775,7 +782,7 @@ def _find_container(
 async def _resolve_container_id(container_id: str, *, strict: bool = False) -> str:
     if _DOCKER_ID_PATTERN.match(container_id):
         return container_id
-    data = await make_graphql_request(_DOCKER_QUERIES["_resolve"])
+    data = await make_graphql_request(_DOCKER_RESOLVE_QUERY)
     containers = safe_get(data, "docker", "containers", default=[])
     if _DOCKER_SHORT_ID_PATTERN.match(container_id):
         id_lower = container_id.lower()
@@ -1640,7 +1647,7 @@ async def _handle_live(
     if subaction == "log_tail":
         if not path:
             raise ToolError("path is required for live/log_tail")
-        normalized = os.path.realpath(path)  # noqa: ASYNC240
+        normalized = await asyncio.to_thread(os.path.realpath, path)
         if not any(normalized.startswith(p) for p in _LIVE_ALLOWED_LOG_PREFIXES):
             raise ToolError(f"path must start with one of: {', '.join(_LIVE_ALLOWED_LOG_PREFIXES)}")
         path = normalized
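The `asyncio.to_thread` change above matters because `os.path.realpath` can hit the filesystem while resolving symlinks, blocking the event loop. A minimal standalone sketch of the pattern (not the server code; the path is arbitrary and need not exist):

```python
import asyncio
import os

# Run the potentially blocking realpath call in a worker thread so the
# event loop stays responsive to other requests.
async def resolve(path: str) -> str:
    return await asyncio.to_thread(os.path.realpath, path)

resolved = asyncio.run(resolve("/var/log/./syslog"))
assert resolved == os.path.realpath("/var/log/./syslog")
```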