diff --git a/.env.example b/.env.example index 09adc31..e2078bc 100644 --- a/.env.example +++ b/.env.example @@ -61,4 +61,16 @@ UNRAID_MAX_RECONNECT_ATTEMPTS=10 # GOOGLE_CLIENT_ID= # GOOGLE_CLIENT_SECRET= # UNRAID_MCP_BASE_URL=http://10.1.0.2:6970 -# UNRAID_MCP_JWT_SIGNING_KEY= \ No newline at end of file +# UNRAID_MCP_JWT_SIGNING_KEY= + +# API Key Authentication (Optional) +# ----------------------------------- +# Alternative to Google OAuth — clients present this key as a bearer token: +# Authorization: Bearer <your-key> +# +# Can be the same value as UNRAID_API_KEY (reuse your Unraid key), or a +# separate dedicated secret. Set both GOOGLE_CLIENT_ID and UNRAID_MCP_API_KEY +# to accept either auth method (MultiAuth). +# +# Leave empty to disable API key auth. +# UNRAID_MCP_API_KEY= \ No newline at end of file diff --git a/CLAUDE.md b/CLAUDE.md index a095f16..6a1d0a5 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -61,29 +61,33 @@ docker compose down - `UNRAID_MCP_PORT`: Server port (default: 6970) - `UNRAID_MCP_HOST`: Server host (default: 0.0.0.0) -### Google OAuth (Optional — protects the HTTP server) +### Authentication (Optional — protects the HTTP server) -When `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, and `UNRAID_MCP_BASE_URL` are all set, -the MCP server requires Google login before any tool call. +Two independent methods. Use either or both — when both are set, `MultiAuth` accepts either. -| Env Var | Required | Purpose | -|---------|----------|---------| -| `GOOGLE_CLIENT_ID` | For OAuth | Google OAuth 2.0 Client ID | -| `GOOGLE_CLIENT_SECRET` | For OAuth | Google OAuth 2.0 Client Secret | -| `UNRAID_MCP_BASE_URL` | For OAuth | Public URL of this server (e.g. `http://10.1.0.2:6970`) | -| `UNRAID_MCP_JWT_SIGNING_KEY` | Recommended | Stable 32+ char secret — prevents token invalidation on restart | +**Google OAuth** — requires all three vars: -**Google Cloud Console setup:** -1. APIs & Services → Credentials → Create OAuth 2.0 Client ID (Web application) -2. 
Authorized redirect URIs: `/auth/callback` -3. Copy Client ID + Secret to `~/.unraid-mcp/.env` +| Env Var | Purpose | +|---------|---------| +| `GOOGLE_CLIENT_ID` | Google OAuth 2.0 Client ID | +| `GOOGLE_CLIENT_SECRET` | Google OAuth 2.0 Client Secret | +| `UNRAID_MCP_BASE_URL` | Public URL of this server (e.g. `http://10.1.0.2:6970`) | +| `UNRAID_MCP_JWT_SIGNING_KEY` | Stable 32+ char secret — prevents token invalidation on restart | + +Google Cloud Console setup: APIs & Services → Credentials → OAuth 2.0 Client ID (Web application) → Authorized redirect URIs: `/auth/callback` + +**API Key** — clients present as `Authorization: Bearer <token>`: + +| Env Var | Purpose | +|---------|---------| +| `UNRAID_MCP_API_KEY` | Static bearer token (can be same value as `UNRAID_API_KEY`) | **Generate a stable JWT signing key:** ```bash python3 -c "import secrets; print(secrets.token_hex(32))" ``` -**Omit `GOOGLE_CLIENT_ID` to run without auth** (default — preserves existing behaviour). +**Omit all auth vars to run without auth** (default — open server). **Full guide:** [`docs/GOOGLE_OAUTH.md`](docs/GOOGLE_OAUTH.md) diff --git a/README.md b/README.md index eff41a5..0ffc0a6 100644 --- a/README.md +++ b/README.md @@ -246,13 +246,16 @@ UNRAID_MAX_RECONNECT_ATTEMPTS=10 # Max WebSocket reconnection attempts (def --- -## 🔐 Google OAuth (Optional) +## 🔐 Authentication (Optional) + +Two independent auth methods — use either or both. + +### Google OAuth Protect the HTTP server with Google OAuth 2.0 — clients must complete a Google login before any tool call is executed. -Add these to `~/.unraid-mcp/.env`: - ```bash +# Add to ~/.unraid-mcp/.env GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com GOOGLE_CLIENT_SECRET=GOCSPX-your-secret UNRAID_MCP_BASE_URL=http://10.1.0.2:6970 # public URL of this server @@ -266,7 +269,18 @@ UNRAID_MCP_JWT_SIGNING_KEY=<64-char-hex> # prevents token invalidation o 4. Generate a signing key: `python3 -c "import secrets; print(secrets.token_hex(32))"` 5. 
Restart the server -Omit `GOOGLE_CLIENT_ID` to run without authentication (default behavior). +### API Key (Bearer Token) + +Simpler option for headless/machine access — no browser flow required: + +```bash +# Add to ~/.unraid-mcp/.env +UNRAID_MCP_API_KEY=your-secret-token # can be same value as UNRAID_API_KEY +``` + +Clients present it as `Authorization: Bearer <token>`. Set both `GOOGLE_CLIENT_ID` and `UNRAID_MCP_API_KEY` to accept either method simultaneously. + +Omit both to run without authentication (default — open server). **Full guide:** [`docs/GOOGLE_OAUTH.md`](docs/GOOGLE_OAUTH.md) @@ -326,7 +340,7 @@ The server exposes two classes of MCP resources backed by persistent WebSocket c **`unraid://logs/stream`** — Live log file tail (path controlled by `UNRAID_AUTOSTART_LOG_PATH`) > **Note**: Resources return cached data from persistent WebSocket subscriptions. A `{"status": "connecting"}` placeholder is returned while the subscription initializes — retry in a moment. - +> > **`log_tail` and `notification_feed`** are accessible as tool subactions (`unraid(action="live", subaction="log_tail")`) but are not registered as MCP resources — they use transient one-shot subscriptions and require parameters. --- diff --git a/docs/GOOGLE_OAUTH.md b/docs/GOOGLE_OAUTH.md index 2df3db0..0069dcf 100644 --- a/docs/GOOGLE_OAUTH.md +++ b/docs/GOOGLE_OAUTH.md @@ -157,6 +157,29 @@ Check `logs/unraid-mcp.log` or `docker compose logs unraid-mcp` for startup erro --- +## API Key Authentication (Alternative / Combined) + +For machine-to-machine access (scripts, CI, other agents) without a browser-based OAuth flow, set `UNRAID_MCP_API_KEY`: + +```bash +# In ~/.unraid-mcp/.env +UNRAID_MCP_API_KEY=your-secret-token +``` + +Clients present it as a standard bearer token: + +``` +Authorization: Bearer your-secret-token +``` + +**Combining with Google OAuth**: set both `GOOGLE_CLIENT_ID` and `UNRAID_MCP_API_KEY`. 
The server activates `MultiAuth` and accepts either method — Google OAuth for interactive clients, API key for headless clients. + +**Reusing the Unraid API key**: you can set `UNRAID_MCP_API_KEY` to the same value as `UNRAID_API_KEY` for simplicity. The two vars are kept separate so each concern has its own name. + +**Standalone API key** (no Google OAuth): set only `UNRAID_MCP_API_KEY`. The server validates bearer tokens directly with no OAuth redirect flow. + +--- + ## Security Notes - OAuth protects the MCP HTTP interface — the Unraid GraphQL API itself still uses `UNRAID_API_KEY` diff --git a/skills/unraid/references/api-reference.md b/skills/unraid/references/api-reference.md index 1a5759a..2e44f20 100644 --- a/skills/unraid/references/api-reference.md +++ b/skills/unraid/references/api-reference.md @@ -1,6 +1,6 @@ # Unraid API - Complete Reference Guide -> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See `SKILL.md` for the correct calling convention. +> **⚠️ DEVELOPER REFERENCE ONLY** — This file documents the raw GraphQL API schema for development and maintenance purposes (adding new queries/mutations). Do NOT use these curl/GraphQL examples for MCP tool usage. Use `unraid(action=..., subaction=...)` calls instead. See [`SKILL.md`](../SKILL.md) for the correct calling convention. 
**Tested on:** Unraid 7.2 x86_64 **Date:** 2026-01-21 diff --git a/skills/unraid/references/quick-reference.md b/skills/unraid/references/quick-reference.md index d39b8d6..eb102e7 100644 --- a/skills/unraid/references/quick-reference.md +++ b/skills/unraid/references/quick-reference.md @@ -30,9 +30,8 @@ unraid(action="array", subaction="stop_array", confirm=True) # ⚠️ Stop ```python unraid(action="disk", subaction="log_files") # List available logs -unraid(action="disk", subaction="logs", log_path="syslog", tail_lines=50) # Read syslog -unraid(action="disk", subaction="logs", log_path="/var/log/syslog") # Full path also works -unraid(action="live", subaction="log_tail", log_path="/var/log/syslog") # Live tail +unraid(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=50) # Read syslog +unraid(action="live", subaction="log_tail", path="/var/log/syslog") # Live tail ``` ### Docker Containers @@ -64,7 +63,7 @@ unraid(action="notification", subaction="overview") unraid(action="notification", subaction="list", list_type="UNREAD", limit=10) unraid(action="notification", subaction="archive", notification_id="") unraid(action="notification", subaction="create", title="Test", subject="Subject", - description="Body", importance="normal") + description="Body", importance="INFO") ``` ### API Keys diff --git a/skills/unraid/references/troubleshooting.md b/skills/unraid/references/troubleshooting.md index 2e99703..d0b830e 100644 --- a/skills/unraid/references/troubleshooting.md +++ b/skills/unraid/references/troubleshooting.md @@ -26,15 +26,15 @@ This writes `UNRAID_API_URL` and `UNRAID_API_KEY` to `~/.unraid-mcp/.env`. Re-ru unraid(action="health", subaction="test_connection") ``` -2. Full diagnostic report: +1. Full diagnostic report: ```python unraid(action="health", subaction="diagnose") ``` -3. Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint. +1. 
Check that `UNRAID_API_URL` in `~/.unraid-mcp/.env` points to the correct Unraid GraphQL endpoint. -4. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations). +1. Verify the API key has the required roles. Get a new key: **Unraid UI → Settings → Management Access → API Keys → Create** (select "Viewer" role for read-only, or appropriate roles for mutations). --- diff --git a/tests/mcporter/test-tools.sh b/tests/mcporter/test-tools.sh index 0b43d1d..5fb7cf2 100755 --- a/tests/mcporter/test-tools.sh +++ b/tests/mcporter/test-tools.sh @@ -134,6 +134,11 @@ check_prerequisites() { missing=true fi + if ! command -v jq &>/dev/null; then + log_error "jq not found in PATH. Install it and re-run." + missing=true + fi + if [[ ! -f "${PROJECT_DIR}/pyproject.toml" ]]; then log_error "pyproject.toml not found at ${PROJECT_DIR}. Wrong directory?" missing=true @@ -181,10 +186,12 @@ smoke_test_server() { import sys, json try: d = json.load(sys.stdin) - if 'status' in d or 'success' in d or 'error' in d: + if 'error' in d: + print('error: tool returned error key — ' + str(d.get('error', ''))) + elif 'status' in d or 'success' in d: print('ok') else: - print('missing: no status/success/error key in response') + print('missing: no status/success key in response') except Exception as e: print('parse_error: ' + str(e)) " 2>/dev/null @@ -253,6 +260,31 @@ run_test() { return 1 fi + # Always validate JSON is parseable and not an error payload + local json_check + json_check="$( + printf '%s' "${output}" | python3 -c " +import sys, json +try: + d = json.load(sys.stdin) + if isinstance(d, dict) and ('error' in d or d.get('kind') == 'error'): + print('error: ' + str(d.get('error', d.get('message', 'unknown error')))) + else: + print('ok') +except Exception as e: + print('invalid_json: ' + str(e)) +" 2>/dev/null + )" || json_check="parse_error" + + if [[ 
"${json_check}" != "ok" ]]; then + printf "${C_RED}[FAIL]${C_RESET} %-55s ${C_DIM}%dms${C_RESET}\n" \ + "${label}" "${elapsed_ms}" | tee -a "${LOG_FILE}" + printf ' response validation failed: %s\n' "${json_check}" | tee -a "${LOG_FILE}" + FAIL_COUNT=$(( FAIL_COUNT + 1 )) + FAIL_NAMES+=("${label}") + return 1 + fi + # Validate optional key presence if [[ -n "${expected_key}" ]]; then local key_check diff --git a/tests/schema/test_query_validation.py b/tests/schema/test_query_validation.py index 94b0b56..52eaaff 100644 --- a/tests/schema/test_query_validation.py +++ b/tests/schema/test_query_validation.py @@ -36,7 +36,7 @@ def _all_domain_dicts(unraid_mod: object) -> list[tuple[str, dict[str, str]]]: """ import types - m = unraid_mod # type: ignore[assignment] + m = unraid_mod if not isinstance(m, types.ModuleType): import importlib @@ -417,7 +417,6 @@ class TestDockerQueries: "details", "networks", "network_details", - "_resolve", } assert set(QUERIES.keys()) == expected diff --git a/tests/test_api_key_auth.py b/tests/test_api_key_auth.py new file mode 100644 index 0000000..97399ff --- /dev/null +++ b/tests/test_api_key_auth.py @@ -0,0 +1,155 @@ +"""Tests for ApiKeyVerifier and _build_auth() in server.py.""" + +import importlib +from unittest.mock import MagicMock, patch + +import pytest + +from unraid_mcp.server import ApiKeyVerifier, _build_auth + + +# --------------------------------------------------------------------------- +# ApiKeyVerifier unit tests +# --------------------------------------------------------------------------- + + +@pytest.mark.asyncio +async def test_api_key_verifier_accepts_correct_key(): + """Returns AccessToken when the presented token matches the configured key.""" + verifier = ApiKeyVerifier("secret-key-abc123") + result = await verifier.verify_token("secret-key-abc123") + + assert result is not None + assert result.client_id == "api-key-client" + assert result.token == "secret-key-abc123" + + +@pytest.mark.asyncio +async def 
test_api_key_verifier_rejects_wrong_key(): + """Returns None when the token does not match.""" + verifier = ApiKeyVerifier("secret-key-abc123") + result = await verifier.verify_token("wrong-key") + + assert result is None + + +@pytest.mark.asyncio +async def test_api_key_verifier_rejects_empty_token(): + """Returns None for an empty string token.""" + verifier = ApiKeyVerifier("secret-key-abc123") + result = await verifier.verify_token("") + + assert result is None + + +@pytest.mark.asyncio +async def test_api_key_verifier_empty_key_rejects_empty_token(): + """When initialised with empty key, even an empty token is rejected. + + An empty UNRAID_MCP_API_KEY means auth is disabled — ApiKeyVerifier + should not be instantiated in that case. But if it is, it must not + grant access via an empty bearer token. + """ + verifier = ApiKeyVerifier("") + result = await verifier.verify_token("") + + assert result is None + + +# --------------------------------------------------------------------------- +# _build_auth() integration tests +# --------------------------------------------------------------------------- + + +def test_build_auth_returns_none_when_nothing_configured(monkeypatch): + """Returns None when neither Google OAuth nor API key is set.""" + monkeypatch.setenv("GOOGLE_CLIENT_ID", "") + monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "") + monkeypatch.setenv("UNRAID_MCP_BASE_URL", "") + monkeypatch.setenv("UNRAID_MCP_API_KEY", "") + + import unraid_mcp.config.settings as s + + importlib.reload(s) + + result = _build_auth() + assert result is None + + +def test_build_auth_returns_api_key_verifier_when_only_api_key_set(monkeypatch): + """Returns ApiKeyVerifier when UNRAID_MCP_API_KEY is set but Google OAuth is not.""" + monkeypatch.setenv("GOOGLE_CLIENT_ID", "") + monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "") + monkeypatch.setenv("UNRAID_MCP_BASE_URL", "") + monkeypatch.setenv("UNRAID_MCP_API_KEY", "my-secret-api-key") + + import unraid_mcp.config.settings as s + + 
importlib.reload(s) + + result = _build_auth() + assert isinstance(result, ApiKeyVerifier) + + +def test_build_auth_returns_google_provider_when_only_oauth_set(monkeypatch): + """Returns GoogleProvider when Google OAuth vars are set but no API key.""" + monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com") + monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret") + monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970") + monkeypatch.setenv("UNRAID_MCP_API_KEY", "") + monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32) + + import unraid_mcp.config.settings as s + + importlib.reload(s) + + mock_provider = MagicMock() + with patch("unraid_mcp.server.GoogleProvider", return_value=mock_provider): + result = _build_auth() + + assert result is mock_provider + + +def test_build_auth_returns_multi_auth_when_both_configured(monkeypatch): + """Returns MultiAuth when both Google OAuth and UNRAID_MCP_API_KEY are set.""" + from fastmcp.server.auth import MultiAuth + + monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com") + monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret") + monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970") + monkeypatch.setenv("UNRAID_MCP_API_KEY", "my-secret-api-key") + monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32) + + import unraid_mcp.config.settings as s + + importlib.reload(s) + + mock_provider = MagicMock() + with patch("unraid_mcp.server.GoogleProvider", return_value=mock_provider): + result = _build_auth() + + assert isinstance(result, MultiAuth) + # Server is the Google provider + assert result.server is mock_provider + # One additional verifier — the ApiKeyVerifier + assert len(result.verifiers) == 1 + assert isinstance(result.verifiers[0], ApiKeyVerifier) + + +def test_build_auth_multi_auth_api_key_verifier_uses_correct_key(monkeypatch): + """The ApiKeyVerifier inside MultiAuth is seeded with the configured key.""" + 
monkeypatch.setenv("GOOGLE_CLIENT_ID", "test-id.apps.googleusercontent.com") + monkeypatch.setenv("GOOGLE_CLIENT_SECRET", "GOCSPX-test-secret") + monkeypatch.setenv("UNRAID_MCP_BASE_URL", "http://10.1.0.2:6970") + monkeypatch.setenv("UNRAID_MCP_API_KEY", "super-secret-token") + monkeypatch.setenv("UNRAID_MCP_JWT_SIGNING_KEY", "x" * 32) + + import unraid_mcp.config.settings as s + + importlib.reload(s) + + with patch("unraid_mcp.server.GoogleProvider", return_value=MagicMock()): + result = _build_auth() + + verifier = result.verifiers[0] + assert verifier._api_key == "super-secret-token" diff --git a/tests/test_customization.py b/tests/test_customization.py index 09ac4f8..4492019 100644 --- a/tests/test_customization.py +++ b/tests/test_customization.py @@ -3,6 +3,7 @@ from __future__ import annotations +from typing import Any from unittest.mock import AsyncMock, patch import pytest diff --git a/tests/test_health.py b/tests/test_health.py index 5b34e0c..2026ba6 100644 --- a/tests/test_health.py +++ b/tests/test_health.py @@ -141,8 +141,8 @@ class TestHealthActions: "unraid_mcp.subscriptions.utils._analyze_subscription_status", return_value=(0, []), ), - patch("unraid_mcp.server.cache_middleware", mock_cache), - patch("unraid_mcp.server.error_middleware", mock_error), + patch("unraid_mcp.server._cache_middleware", mock_cache), + patch("unraid_mcp.server._error_middleware", mock_error), ): result = await tool_fn(action="health", subaction="diagnose") assert "subscriptions" in result diff --git a/tests/test_resources.py b/tests/test_resources.py index 4ddafac..899a6b0 100644 --- a/tests/test_resources.py +++ b/tests/test_resources.py @@ -36,6 +36,8 @@ class TestLiveResourcesUseManagerCache: with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr: mock_mgr.get_resource_data = AsyncMock(return_value=cached) mcp = _make_resources() + # Accessing FastMCP internals intentionally for unit test isolation. 
+ # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"] result = await resource.fn() assert json.loads(result) == cached @@ -49,6 +51,8 @@ class TestLiveResourcesUseManagerCache: mock_mgr.get_resource_data = AsyncMock(return_value=None) mock_mgr.last_error = {} mcp = _make_resources() + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"] result = await resource.fn() parsed = json.loads(result) @@ -61,6 +65,8 @@ class TestLiveResourcesUseManagerCache: mock_mgr.get_resource_data = AsyncMock(return_value=None) mock_mgr.last_error = {action: "WebSocket auth failed"} mcp = _make_resources() + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"] result = await resource.fn() parsed = json.loads(result) @@ -96,6 +102,8 @@ class TestLogsStreamResource: mock_mgr.get_resource_data = AsyncMock(return_value=None) mcp = _make_resources() local_provider = mcp.providers[0] + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = local_provider._components["resource:unraid://logs/stream@"] result = await resource.fn() parsed = json.loads(result) @@ -108,6 +116,8 @@ class TestLogsStreamResource: mock_mgr.get_resource_data = AsyncMock(return_value={}) mcp = _make_resources() local_provider = mcp.providers[0] + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. 
resource = local_provider._components["resource:unraid://logs/stream@"] result = await resource.fn() assert json.loads(result) == {} @@ -131,6 +141,8 @@ class TestAutoStartDisabledFallback: mock_mgr.last_error = {} mock_mgr.auto_start_enabled = False mcp = _make_resources() + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"] result = await resource.fn() assert json.loads(result) == fallback_data @@ -150,6 +162,8 @@ class TestAutoStartDisabledFallback: mock_mgr.last_error = {} mock_mgr.auto_start_enabled = False mcp = _make_resources() + # Accessing FastMCP internals intentionally for unit test isolation. + # This may break on FastMCP upgrades — consider a make_resource_fn() helper if it does. resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"] result = await resource.fn() assert json.loads(result)["status"] == "connecting" diff --git a/unraid_mcp/config/settings.py b/unraid_mcp/config/settings.py index b7ae4a0..6523d6c 100644 --- a/unraid_mcp/config/settings.py +++ b/unraid_mcp/config/settings.py @@ -98,6 +98,19 @@ def is_google_auth_configured() -> bool: return bool(GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET and UNRAID_MCP_BASE_URL) +# API Key Authentication (Optional) +# ---------------------------------- +# A static bearer token clients can use instead of (or alongside) Google OAuth. +# Can be set to the same value as UNRAID_API_KEY for simplicity, or a separate +# dedicated secret for MCP access. 
+UNRAID_MCP_API_KEY = os.getenv("UNRAID_MCP_API_KEY", "") + + +def is_api_key_auth_configured() -> bool: + """Return True when UNRAID_MCP_API_KEY is set.""" + return bool(UNRAID_MCP_API_KEY) + + # Logging Configuration LOG_LEVEL_STR = os.getenv("UNRAID_MCP_LOG_LEVEL", "INFO").upper() LOG_FILE_NAME = os.getenv("UNRAID_MCP_LOG_FILE", "unraid-mcp.log") @@ -180,6 +193,7 @@ def get_config_summary() -> dict[str, Any]: "google_auth_enabled": is_google_auth_configured(), "google_auth_base_url": UNRAID_MCP_BASE_URL if is_google_auth_configured() else None, "jwt_signing_key_configured": bool(UNRAID_MCP_JWT_SIGNING_KEY), + "api_key_auth_enabled": is_api_key_auth_configured(), } diff --git a/unraid_mcp/server.py b/unraid_mcp/server.py index bff0e56..67276eb 100644 --- a/unraid_mcp/server.py +++ b/unraid_mcp/server.py @@ -8,6 +8,7 @@ import sys from typing import Any from fastmcp import FastMCP +from fastmcp.server.auth import AccessToken, MultiAuth, TokenVerifier from fastmcp.server.auth.providers.google import GoogleProvider from fastmcp.server.middleware.caching import CallToolSettings, ResponseCachingMiddleware from fastmcp.server.middleware.error_handling import ErrorHandlingMiddleware @@ -41,26 +42,32 @@ _logging_middleware = LoggingMiddleware( # 2. Catch any unhandled exceptions and convert to proper MCP errors. # Tracks error_counts per (exception_type:method) for health diagnose. -error_middleware = ErrorHandlingMiddleware( +_error_middleware = ErrorHandlingMiddleware( logger=logger, include_traceback=True, ) # 3. Unraid API rate limit: 100 requests per 10 seconds. -# Use a sliding window that stays comfortably under that cap. -_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=90, window_minutes=1) +# SlidingWindowRateLimitingMiddleware only accepts window_minutes (int), so express +# the 10-second budget as a 1-minute equivalent: 540 req/60 s to stay comfortably +# under the 600 req/min ceiling. 
+_rate_limiter = SlidingWindowRateLimitingMiddleware(max_requests=540, window_minutes=1) # 4. Cap tool responses at 512 KB to protect the client context window. # Oversized responses are truncated with a clear suffix rather than erroring. _response_limiter = ResponseLimitingMiddleware(max_size=512_000) -# 5. Cache tool calls in-memory (MemoryStore default — no extra deps). -# Short 30 s TTL absorbs burst duplicate requests while keeping data fresh. -# Destructive calls won't hit the cache in practice (unique confirm=True + IDs). -cache_middleware = ResponseCachingMiddleware( +# 5. Cache middleware — all call_tool caching is disabled for the `unraid` tool. +# CallToolSettings supports excluded_tools/included_tools by tool name only; there +# is no per-argument or per-subaction exclusion mechanism. The cache key is +# "{tool_name}:{arguments_str}", so a cached stop("nginx") result would be served +# back on a retry within the TTL window even though the container is already stopped. +# Mutation subactions (start, stop, restart, reboot, etc.) must never be cached. +# Because the consolidated `unraid` tool mixes reads and mutations under one name, +# the only safe option is to disable caching for the entire tool. +_cache_middleware = ResponseCachingMiddleware( call_tool_settings=CallToolSettings( - ttl=30, - included_tools=["unraid"], + enabled=False, ), # Disable caching for list/resource/prompt — those are cheap. list_tools_settings={"enabled": False}, @@ -71,6 +78,30 @@ ) +class ApiKeyVerifier(TokenVerifier): + """Bearer token verifier that validates against a static API key. + + Clients present the key as a standard OAuth bearer token: + Authorization: Bearer <api-key> + + This allows machine-to-machine access (e.g. CI, scripts, other agents) + without going through the Google OAuth browser flow. 
+ """ + + def __init__(self, api_key: str) -> None: + super().__init__() + self._api_key = api_key + + async def verify_token(self, token: str) -> AccessToken | None: + if self._api_key and token == self._api_key: + return AccessToken( + token=token, + client_id="api-key-client", + scopes=[], + ) + return None + + def _build_google_auth() -> "GoogleProvider | None": """Build GoogleProvider when OAuth env vars are configured, else return None. @@ -117,21 +148,45 @@ def _build_google_auth() -> "GoogleProvider | None": return GoogleProvider(**kwargs) -# Build auth provider — returns GoogleProvider when configured, None otherwise. -_google_auth = _build_google_auth() +def _build_auth() -> "GoogleProvider | ApiKeyVerifier | MultiAuth | None": + """Build the active auth stack from environment configuration. + + Returns: + - MultiAuth(server=GoogleProvider, verifiers=[ApiKeyVerifier]) + when both GOOGLE_CLIENT_ID and UNRAID_MCP_API_KEY are set. + - GoogleProvider alone when only Google OAuth vars are set. + - ApiKeyVerifier alone when only UNRAID_MCP_API_KEY is set. + - None when no auth vars are configured (open server). + """ + from .config.settings import UNRAID_MCP_API_KEY, is_api_key_auth_configured + + google = _build_google_auth() + api_key = ApiKeyVerifier(UNRAID_MCP_API_KEY) if is_api_key_auth_configured() else None + + if google and api_key: + logger.info("Auth: Google OAuth + API key both enabled (MultiAuth)") + return MultiAuth(server=google, verifiers=[api_key]) + if api_key: + logger.info("Auth: API key authentication enabled") + return api_key + return google # GoogleProvider or None + + +# Build auth stack — GoogleProvider, ApiKeyVerifier, MultiAuth, or None. 
+_auth = _build_auth() # Initialize FastMCP instance mcp = FastMCP( name="Unraid MCP Server", instructions="Provides tools to interact with an Unraid server's GraphQL API.", version=VERSION, - auth=_google_auth, + auth=_auth, middleware=[ _logging_middleware, - error_middleware, + _error_middleware, _rate_limiter, _response_limiter, - cache_middleware, + _cache_middleware, ], ) @@ -185,17 +240,25 @@ def run_server() -> None: "Only use this in trusted networks or for development." ) - if _google_auth is not None: - from .config.settings import UNRAID_MCP_BASE_URL + if _auth is not None: + from .config.settings import is_google_auth_configured - logger.info( - "Google OAuth ENABLED — clients must authenticate before calling tools. " - f"Redirect URI: {UNRAID_MCP_BASE_URL}/auth/callback" - ) + if is_google_auth_configured(): + from .config.settings import UNRAID_MCP_BASE_URL + + logger.info( + "Google OAuth ENABLED — clients must authenticate before calling tools. " + f"Redirect URI: {UNRAID_MCP_BASE_URL}/auth/callback" + ) + else: + logger.info( + "API key authentication ENABLED — present UNRAID_MCP_API_KEY as bearer token." + ) else: logger.warning( "No authentication configured — MCP server is open to all clients on the network. " - "Set GOOGLE_CLIENT_ID + GOOGLE_CLIENT_SECRET + UNRAID_MCP_BASE_URL to enable OAuth." + "Set GOOGLE_CLIENT_ID + GOOGLE_CLIENT_SECRET + UNRAID_MCP_BASE_URL to enable Google OAuth, " + "or set UNRAID_MCP_API_KEY to enable bearer token authentication." 
) logger.info( diff --git a/unraid_mcp/tools/unraid.py b/unraid_mcp/tools/unraid.py index a236688..ba501bf 100644 --- a/unraid_mcp/tools/unraid.py +++ b/unraid_mcp/tools/unraid.py @@ -285,6 +285,16 @@ async def _handle_system(subaction: str, device_id: str | None) -> dict[str, Any # =========================================================================== _HEALTH_SUBACTIONS: set[str] = {"check", "test_connection", "diagnose", "setup"} +_HEALTH_QUERIES: dict[str, str] = { + "comprehensive_health": ( + "query ComprehensiveHealthCheck {" + " info { machineId time versions { core { unraid } } os { uptime } }" + " array { state }" + " notifications { overview { unread { alert warning total } } }" + " docker { containers(skipCache: true) { id state status } }" + " }" + ), +} _SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3} @@ -346,7 +356,8 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any] return await _comprehensive_health_check() if subaction == "diagnose": - from ..server import cache_middleware, error_middleware + from ..server import _cache_middleware as cache_middleware + from ..server import _error_middleware as error_middleware from ..subscriptions.manager import subscription_manager from ..subscriptions.resources import ensure_subscriptions_started @@ -373,7 +384,7 @@ async def _handle_health(subaction: str, ctx: Context | None) -> dict[str, Any] "call_tool": { "hits": cache_stats.call_tool.get.hit, "misses": cache_stats.call_tool.get.miss, - "puts": cache_stats.call_tool.put.total, + "puts": cache_stats.call_tool.put.count, } if cache_stats.call_tool else {"hits": 0, "misses": 0, "puts": 0}, @@ -403,15 +414,7 @@ async def _comprehensive_health_check() -> dict[str, Any]: health_severity = max(health_severity, _SEVERITY.get(level, 0)) try: - query = """ - query ComprehensiveHealthCheck { - info { machineId time versions { core { unraid } } os { uptime } } - array { state } - notifications { overview { 
unread { alert warning total } } } - docker { containers(skipCache: true) { id state status } } - } - """ - data = await make_graphql_request(query) + data = await make_graphql_request(_HEALTH_QUERIES["comprehensive_health"]) api_latency = round((time.time() - start_time) * 1000, 2) health_info: dict[str, Any] = { @@ -738,9 +741,13 @@ _DOCKER_QUERIES: dict[str, str] = { "details": "query GetContainerDetails { docker { containers(skipCache: false) { id names image imageId command created ports { ip privatePort publicPort type } sizeRootFs labels state status hostConfig { networkMode } networkSettings mounts autoStart } } }", "networks": "query GetDockerNetworks { docker { networks { id name driver scope } } }", "network_details": "query GetDockerNetwork { docker { networks { id name driver scope enableIPv6 internal attachable containers options labels } } }", - "_resolve": "query ResolveContainerID { docker { containers(skipCache: true) { id names } } }", } +# Internal query used only for container ID resolution — not a public subaction. +_DOCKER_RESOLVE_QUERY = ( + "query ResolveContainerID { docker { containers(skipCache: true) { id names } } }" +) + _DOCKER_MUTATIONS: dict[str, str] = { "start": "mutation StartContainer($id: PrefixedID!) { docker { start(id: $id) { id names state status } } }", "stop": "mutation StopContainer($id: PrefixedID!) 
{ docker { stop(id: $id) { id names state status } } }", @@ -775,7 +782,7 @@ def _find_container( async def _resolve_container_id(container_id: str, *, strict: bool = False) -> str: if _DOCKER_ID_PATTERN.match(container_id): return container_id - data = await make_graphql_request(_DOCKER_QUERIES["_resolve"]) + data = await make_graphql_request(_DOCKER_RESOLVE_QUERY) containers = safe_get(data, "docker", "containers", default=[]) if _DOCKER_SHORT_ID_PATTERN.match(container_id): id_lower = container_id.lower() @@ -1640,7 +1647,7 @@ async def _handle_live( if subaction == "log_tail": if not path: raise ToolError("path is required for live/log_tail") - normalized = os.path.realpath(path) # noqa: ASYNC240 + normalized = await asyncio.to_thread(os.path.realpath, path) if not any(normalized.startswith(p) for p in _LIVE_ALLOWED_LOG_PREFIXES): raise ToolError(f"path must start with one of: {', '.join(_LIVE_ALLOWED_LOG_PREFIXES)}") path = normalized