mirror of https://github.com/jmagar/unraid-mcp.git
synced 2026-03-01 16:04:24 -08:00

feat: consolidate 26 tools into 10 tools with 90 actions

Refactor the entire tool layer to use the consolidated action pattern (`action: Literal[...]` with QUERIES/MUTATIONS dicts). This reduces LLM context from ~12k to ~5k tokens while adding ~60 new API capabilities.

New tools: unraid_info (19 actions), unraid_array (12), unraid_notifications (9), unraid_users (8), unraid_keys (5). Rewritten: unraid_docker (15), unraid_vm (9), unraid_storage (6), unraid_rclone (4), unraid_health (3).

Includes 129 tests across 10 test files and code review fixes for 16 issues (severity ordering, PrefixedID regex, sensitive var redaction, etc.). Removes tools/system.py (replaced by tools/info.py). Version bumped to 0.2.0.
This commit is contained in:
24  CLAUDE.md
@@ -79,21 +79,27 @@ docker-compose down

- **Transport Layer**: Supports streamable-http (recommended), SSE (deprecated), and stdio

### Key Design Patterns

- **Consolidated Action Pattern**: Each tool uses an `action: Literal[...]` parameter to expose multiple operations via a single MCP tool, reducing context window usage
- **Pre-built Query Dicts**: `QUERIES` and `MUTATIONS` dicts prevent GraphQL injection and organize operations
- **Destructive Action Safety**: `DESTRUCTIVE_ACTIONS` sets require `confirm=True` for dangerous operations
- **Modular Architecture**: Clean separation of concerns across focused modules
- **Error Handling**: Uses `ToolError` for user-facing errors, with detailed logging for debugging
- **Timeout Management**: Custom timeout configurations for different query types
- **Timeout Management**: Custom timeout configurations for different query types (90s for disk ops)
- **Data Processing**: Tools return both human-readable summaries and detailed raw data
- **Health Monitoring**: Comprehensive health check tool for system monitoring
- **Real-time Subscriptions**: WebSocket-based live data streaming
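The first three patterns above fit together in a few lines. The following is a minimal sketch under stated assumptions: the action names mirror `unraid_array`, but the GraphQL strings, the `ToolError` class, and the `make_graphql_request` stub here are illustrative stand-ins, not the repo's actual code.

```python
import asyncio
from typing import Any, Literal


class ToolError(Exception):
    """User-facing error (stand-in for unraid_mcp.core.exceptions.ToolError)."""


# Pre-built GraphQL strings keyed by action (hypothetical queries, for illustration).
QUERIES: dict[str, str] = {
    "parity_history": "query { array { parityCheckStatus { progress } } }",
}
MUTATIONS: dict[str, str] = {
    "start": "mutation { setState(input: {state: START}) { state } }",
    "shutdown": "mutation { shutdown }",
}
DESTRUCTIVE_ACTIONS: set[str] = {"start", "stop", "shutdown", "reboot"}


async def make_graphql_request(query: str) -> dict[str, Any]:
    """Stub for the real GraphQL client call."""
    return {"ok": True}


async def unraid_array(
    action: Literal["start", "stop", "shutdown", "reboot", "parity_history"],
    confirm: bool = False,
) -> dict[str, Any]:
    # Destructive actions are gated behind an explicit confirm=True.
    if action in DESTRUCTIVE_ACTIONS and not confirm:
        raise ToolError(f"Action '{action}' is destructive; pass confirm=True.")
    # Only pre-built strings are sent, so callers can never inject GraphQL.
    query = QUERIES.get(action) or MUTATIONS[action]
    data = await make_graphql_request(query)
    return {"success": True, "action": action, "data": data}
```

Because the action set is a `Literal`, an MCP client sees one tool with an enumerated parameter instead of a dozen separate tool schemas, which is where the context-window savings come from.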
### Tool Categories (26 Tools Total)

1. **System Information** (6 tools): `get_system_info()`, `get_array_status()`, `get_network_config()`, `get_registration_info()`, `get_connect_settings()`, `get_unraid_variables()`
2. **Storage Management** (7 tools): `get_shares_info()`, `list_physical_disks()`, `get_disk_details()`, `list_available_log_files()`, `get_logs()`, `get_notifications_overview()`, `list_notifications()`
3. **Docker Management** (3 tools): `list_docker_containers()`, `manage_docker_container()`, `get_docker_container_details()`
4. **VM Management** (3 tools): `list_vms()`, `manage_vm()`, `get_vm_details()`
5. **Cloud Storage (RClone)** (4 tools): `list_rclone_remotes()`, `get_rclone_config_form()`, `create_rclone_remote()`, `delete_rclone_remote()`
6. **Health Monitoring** (1 tool): `health_check()`
7. **Subscription Diagnostics** (2 tools): `test_subscription_query()`, `diagnose_subscriptions()`

### Tool Categories (10 Tools, 90 Actions)

1. **`unraid_info`** (19 actions): overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config
2. **`unraid_array`** (12 actions): start, stop, parity_start/pause/resume/cancel/history, mount_disk, unmount_disk, clear_stats, shutdown, reboot
3. **`unraid_storage`** (6 actions): shares, disks, disk_details, unassigned, log_files, logs
4. **`unraid_docker`** (15 actions): list, details, start, stop, restart, pause, unpause, remove, update, update_all, logs, networks, network_details, port_conflicts, check_updates
5. **`unraid_vm`** (9 actions): list, details, start, stop, pause, resume, force_stop, reboot, reset
6. **`unraid_notifications`** (9 actions): overview, list, warnings, create, archive, unread, delete, delete_archived, archive_all
7. **`unraid_rclone`** (4 actions): list_remotes, config_form, create_remote, delete_remote
8. **`unraid_users`** (8 actions): me, list, get, add, delete, cloud, remote_access, origins
9. **`unraid_keys`** (5 actions): list, get, create, update, delete
10. **`unraid_health`** (3 actions): check, test_connection, diagnose
### Environment Variable Hierarchy

The server loads environment variables from multiple locations in order:
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "unraid-mcp"
version = "0.1.0"
version = "0.2.0"
description = "MCP Server for Unraid API - provides tools to interact with an Unraid server's GraphQL API"
authors = [
    {name = "jmagar", email = "jmagar@users.noreply.github.com"}
@@ -33,18 +33,6 @@ dependencies = [
    "websockets>=13.1,<14.0",
    "rich>=14.1.0",
    "pytz>=2025.2",
    "mypy>=1.17.1",
    "ruff>=0.12.8",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.4.1",
    "pytest-asyncio>=1.1.0",
    "black>=25.1.0",
    "ruff>=0.12.8",
    "mypy>=1.17.1",
    "types-python-dateutil",
]

[project.urls]
@@ -151,5 +139,11 @@ exclude_lines = [

[dependency-groups]
dev = [
    "pytest>=8.4.2",
    "pytest-asyncio>=1.2.0",
    "pytest-cov>=7.0.0",
    "types-pytz>=2025.2.0.20250809",
    "mypy>=1.17.1",
    "ruff>=0.12.8",
    "black>=25.1.0",
]
50  tests/conftest.py  Normal file
@@ -0,0 +1,50 @@
"""Shared test fixtures and helpers for Unraid MCP server tests."""

from typing import Any
from unittest.mock import AsyncMock, patch

import pytest
from fastmcp import FastMCP


@pytest.fixture
def mock_graphql_request() -> AsyncMock:
    """Fixture that patches make_graphql_request at the core module.

    NOTE: Since each tool file imports make_graphql_request into its own
    namespace, tool-specific tests should patch at the tool module level
    (e.g., "unraid_mcp.tools.info.make_graphql_request") instead of using
    this fixture. This fixture is useful for testing the core client
    or for integration tests that reload modules.
    """
    with patch("unraid_mcp.core.client.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def make_tool_fn(
    module_path: str,
    register_fn_name: str,
    tool_name: str,
) -> Any:
    """Extract a tool function from a FastMCP registration for testing.

    This wraps the repeated pattern of creating a test FastMCP instance,
    registering a tool, and extracting the inner function. Centralizing
    this avoids reliance on FastMCP's private `_tool_manager._tools` API
    in every test file.

    Args:
        module_path: Dotted import path to the tool module (e.g., "unraid_mcp.tools.info")
        register_fn_name: Name of the registration function (e.g., "register_info_tool")
        tool_name: Name of the registered tool (e.g., "unraid_info")

    Returns:
        The async tool function callable
    """
    import importlib

    module = importlib.import_module(module_path)
    register_fn = getattr(module, register_fn_name)
    test_mcp = FastMCP("test")
    register_fn(test_mcp)
    return test_mcp._tool_manager._tools[tool_name].fn
77  tests/test_array.py  Normal file
@@ -0,0 +1,77 @@
"""Tests for unraid_array tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.array.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")


class TestArrayValidation:
    async def test_destructive_action_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        for action in ("start", "stop", "shutdown", "reboot"):
            with pytest.raises(ToolError, match="destructive"):
                await tool_fn(action=action)

    async def test_disk_action_requires_disk_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        for action in ("mount_disk", "unmount_disk", "clear_stats"):
            with pytest.raises(ToolError, match="disk_id"):
                await tool_fn(action=action)


class TestArrayActions:
    async def test_start_array(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"setState": {"state": "STARTED"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="start", confirm=True)
        assert result["success"] is True
        assert result["action"] == "start"
        _mock_graphql.assert_called_once()

    async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"parityCheck": {"start": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="parity_start", correct=True)
        assert result["success"] is True
        call_args = _mock_graphql.call_args
        assert call_args[0][1] == {"correct": True}

    async def test_parity_history(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"array": {"parityCheckStatus": {"progress": 50}}}
        tool_fn = _make_tool()
        result = await tool_fn(action="parity_history")
        assert result["success"] is True

    async def test_mount_disk(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"mountArrayDisk": True}
        tool_fn = _make_tool()
        result = await tool_fn(action="mount_disk", disk_id="disk:1")
        assert result["success"] is True
        call_args = _mock_graphql.call_args
        assert call_args[0][1] == {"id": "disk:1"}

    async def test_shutdown(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"shutdown": True}
        tool_fn = _make_tool()
        result = await tool_fn(action="shutdown", confirm=True)
        assert result["success"] is True
        assert result["action"] == "shutdown"

    async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.side_effect = RuntimeError("disk error")
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="disk error"):
            await tool_fn(action="parity_history")
178  tests/test_docker.py  Normal file
@@ -0,0 +1,178 @@
"""Tests for unraid_docker tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.tools.docker import find_container_by_identifier, get_available_container_names

# --- Unit tests for helpers ---


class TestFindContainerByIdentifier:
    def test_by_exact_id(self) -> None:
        containers = [{"id": "abc123", "names": ["plex"]}]
        assert find_container_by_identifier("abc123", containers) == containers[0]

    def test_by_exact_name(self) -> None:
        containers = [{"id": "abc123", "names": ["plex"]}]
        assert find_container_by_identifier("plex", containers) == containers[0]

    def test_fuzzy_match(self) -> None:
        containers = [{"id": "abc123", "names": ["plex-media-server"]}]
        result = find_container_by_identifier("plex", containers)
        assert result == containers[0]

    def test_not_found(self) -> None:
        containers = [{"id": "abc123", "names": ["plex"]}]
        assert find_container_by_identifier("sonarr", containers) is None

    def test_empty_list(self) -> None:
        assert find_container_by_identifier("plex", []) is None


class TestGetAvailableContainerNames:
    def test_extracts_names(self) -> None:
        containers = [
            {"names": ["plex"]},
            {"names": ["sonarr", "sonarr-v3"]},
        ]
        names = get_available_container_names(containers)
        assert "plex" in names
        assert "sonarr" in names
        assert "sonarr-v3" in names

    def test_empty(self) -> None:
        assert get_available_container_names([]) == []
# --- Integration tests ---


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")


class TestDockerValidation:
    async def test_remove_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="remove", container_id="abc123")

    async def test_container_actions_require_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        for action in ("start", "stop", "details", "logs", "pause", "unpause"):
            with pytest.raises(ToolError, match="container_id"):
                await tool_fn(action=action)

    async def test_network_details_requires_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="network_id"):
            await tool_fn(action="network_details")


class TestDockerActions:
    async def test_list(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "docker": {"containers": [{"id": "c1", "names": ["plex"], "state": "running"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert len(result["containers"]) == 1

    async def test_start_container(self, _mock_graphql: AsyncMock) -> None:
        # First call resolves ID, second performs start
        _mock_graphql.side_effect = [
            {"docker": {"containers": [{"id": "abc123def456" * 4 + "abcd1234abcd1234:local", "names": ["plex"]}]}},
            {"docker": {"start": {"id": "abc123def456" * 4 + "abcd1234abcd1234:local", "state": "running"}}},
        ]
        tool_fn = _make_tool()
        result = await tool_fn(action="start", container_id="plex")
        assert result["success"] is True

    async def test_networks(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"dockerNetworks": [{"id": "net:1", "name": "bridge"}]}
        tool_fn = _make_tool()
        result = await tool_fn(action="networks")
        assert len(result["networks"]) == 1

    async def test_port_conflicts(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"docker": {"portConflicts": []}}
        tool_fn = _make_tool()
        result = await tool_fn(action="port_conflicts")
        assert result["port_conflicts"] == []

    async def test_check_updates(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "docker": {"containerUpdateStatuses": [{"id": "c1", "name": "plex", "updateAvailable": True}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="check_updates")
        assert len(result["update_statuses"]) == 1

    async def test_idempotent_start(self, _mock_graphql: AsyncMock) -> None:
        # Resolve + idempotent success
        _mock_graphql.side_effect = [
            {"docker": {"containers": [{"id": "a" * 64 + ":local", "names": ["plex"]}]}},
            {"idempotent_success": True, "docker": {}},
        ]
        tool_fn = _make_tool()
        result = await tool_fn(action="start", container_id="plex")
        assert result["idempotent"] is True

    async def test_restart(self, _mock_graphql: AsyncMock) -> None:
        cid = "a" * 64 + ":local"
        _mock_graphql.side_effect = [
            {"docker": {"containers": [{"id": cid, "names": ["plex"]}]}},
            {"docker": {"stop": {"id": cid, "state": "exited"}}},
            {"docker": {"start": {"id": cid, "state": "running"}}},
        ]
        tool_fn = _make_tool()
        result = await tool_fn(action="restart", container_id="plex")
        assert result["success"] is True
        assert result["action"] == "restart"

    async def test_restart_idempotent_stop(self, _mock_graphql: AsyncMock) -> None:
        cid = "a" * 64 + ":local"
        _mock_graphql.side_effect = [
            {"docker": {"containers": [{"id": cid, "names": ["plex"]}]}},
            {"idempotent_success": True},
            {"docker": {"start": {"id": cid, "state": "running"}}},
        ]
        tool_fn = _make_tool()
        result = await tool_fn(action="restart", container_id="plex")
        assert result["success"] is True
        assert "note" in result

    async def test_update_all(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "docker": {"updateAllContainers": [{"id": "c1", "state": "running"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="update_all")
        assert result["success"] is True
        assert len(result["containers"]) == 1

    async def test_remove_with_confirm(self, _mock_graphql: AsyncMock) -> None:
        cid = "a" * 64 + ":local"
        _mock_graphql.side_effect = [
            {"docker": {"containers": [{"id": cid, "names": ["old-app"]}]}},
            {"docker": {"removeContainer": True}},
        ]
        tool_fn = _make_tool()
        result = await tool_fn(action="remove", container_id="old-app", confirm=True)
        assert result["success"] is True

    async def test_generic_exception_wraps_in_tool_error(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.side_effect = RuntimeError("unexpected failure")
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="unexpected failure"):
            await tool_fn(action="list")
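The two restart tests above imply a stop-then-start sequence that tolerates an already-stopped container. A hedged sketch of that flow, with a hypothetical `restart_container` helper and a plain string standing in for the real GraphQL mutations:

```python
import asyncio
from typing import Any, Awaitable, Callable

# A GraphQL-call stand-in: takes an operation name, returns the response dict.
GraphQLCall = Callable[[str], Awaitable[dict[str, Any]]]


async def restart_container(container_id: str, call: GraphQLCall) -> dict[str, Any]:
    """Hypothetical restart flow: stop, then start.

    A stop that reports idempotent_success (the container was already
    stopped) is tolerated and surfaced as a note rather than an error.
    """
    result: dict[str, Any] = {"success": True, "action": "restart"}
    stop_resp = await call("stop")  # stand-in for the real stop mutation
    if stop_resp.get("idempotent_success"):
        result["note"] = "container was already stopped; starting it"
    await call("start")  # stand-in for the real start mutation
    return result
```

Treating the redundant stop as informational rather than fatal is what lets `restart` behave the same whether the container was running or not.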
126  tests/test_health.py  Normal file
@@ -0,0 +1,126 @@
"""Tests for unraid_health tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.health.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.health", "register_health_tool", "unraid_health")


class TestHealthValidation:
    async def test_invalid_action(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="Invalid action"):
            await tool_fn(action="invalid")


class TestHealthActions:
    async def test_test_connection(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"online": True}
        tool_fn = _make_tool()
        result = await tool_fn(action="test_connection")
        assert result["status"] == "connected"
        assert result["online"] is True
        assert "latency_ms" in result

    async def test_check_healthy(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "info": {
                "machineId": "abc123",
                "time": "2026-02-08T12:00:00Z",
                "versions": {"unraid": "7.2.0"},
                "os": {"uptime": 86400},
            },
            "array": {"state": "STARTED"},
            "notifications": {
                "overview": {"unread": {"alert": 0, "warning": 0, "total": 3}}
            },
            "docker": {
                "containers": [{"id": "c1", "state": "running", "status": "Up 2 days"}]
            },
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="check")
        assert result["status"] == "healthy"
        assert "api_latency_ms" in result

    async def test_check_warning_on_alerts(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "info": {"machineId": "abc", "versions": {"unraid": "7.2"}, "os": {"uptime": 100}},
            "array": {"state": "STARTED"},
            "notifications": {
                "overview": {"unread": {"alert": 3, "warning": 0, "total": 3}}
            },
            "docker": {"containers": []},
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="check")
        assert result["status"] == "warning"
        assert any("alert" in i for i in result.get("issues", []))

    async def test_check_no_data(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {}
        tool_fn = _make_tool()
        result = await tool_fn(action="check")
        assert result["status"] == "unhealthy"

    async def test_check_api_error(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.side_effect = Exception("Connection refused")
        tool_fn = _make_tool()
        result = await tool_fn(action="check")
        assert result["status"] == "unhealthy"
        assert "Connection refused" in result["error"]

    async def test_check_severity_never_downgrades(self, _mock_graphql: AsyncMock) -> None:
        """Degraded from missing info should not be overwritten by warning from alerts."""
        _mock_graphql.return_value = {
            "info": {},
            "array": {"state": "STARTED"},
            "notifications": {
                "overview": {"unread": {"alert": 5, "warning": 0, "total": 5}}
            },
            "docker": {"containers": []},
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="check")
        # Missing info escalates to "degraded"; alerts only escalate to "warning".
        # Severity should stay at "degraded" (not downgrade to "warning").
        assert result["status"] == "degraded"

    async def test_diagnose_wraps_exception(self, _mock_graphql: AsyncMock) -> None:
        """When _diagnose_subscriptions raises, tool wraps in ToolError."""
        tool_fn = _make_tool()
        with patch(
            "unraid_mcp.tools.health._diagnose_subscriptions",
            side_effect=RuntimeError("broken"),
        ):
            with pytest.raises(ToolError, match="broken"):
                await tool_fn(action="diagnose")

    async def test_diagnose_import_error_internal(self) -> None:
        """_diagnose_subscriptions catches ImportError and returns error dict."""
        import builtins

        from unraid_mcp.tools.health import _diagnose_subscriptions

        real_import = builtins.__import__

        def fail_subscriptions(name, *args, **kwargs):
            if "subscriptions" in name:
                raise ImportError("no module")
            return real_import(name, *args, **kwargs)

        with patch("builtins.__import__", side_effect=fail_subscriptions):
            result = await _diagnose_subscriptions()
            assert "error" in result
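The "never downgrades" behavior pinned by `test_check_severity_never_downgrades` implies an ordered severity scale where each sub-check can only escalate the overall status. A minimal sketch of that idea (the names here are illustrative, not the repo's actual code):

```python
# Ordered from least to most severe; escalation only ever moves rightward.
SEVERITY_ORDER = ["healthy", "warning", "degraded", "unhealthy"]


def escalate(current: str, new: str) -> str:
    """Return the more severe of two statuses, so checks can never downgrade."""
    return max(current, new, key=SEVERITY_ORDER.index)
```

With this shape, running the sub-checks in any order yields the same final status, which is exactly what the severity-ordering fix in this commit's code review addresses.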
159  tests/test_info.py  Normal file
@@ -0,0 +1,159 @@
"""Tests for unraid_info tool."""

from unittest.mock import AsyncMock, patch

import pytest

from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.tools.info import (
    _analyze_disk_health,
    _process_array_status,
    _process_system_info,
)

# --- Unit tests for helper functions ---


class TestProcessSystemInfo:
    def test_processes_os_info(self) -> None:
        raw = {
            "os": {"distro": "Unraid", "release": "7.2", "platform": "linux", "arch": "x86_64", "hostname": "tower", "uptime": 3600},
            "cpu": {"manufacturer": "AMD", "brand": "Ryzen", "cores": 8, "threads": 16},
        }
        result = _process_system_info(raw)
        assert "summary" in result
        assert "details" in result
        assert result["summary"]["hostname"] == "tower"
        assert "AMD" in result["summary"]["cpu"]

    def test_handles_missing_fields(self) -> None:
        result = _process_system_info({})
        assert result["summary"] == {"memory_summary": "Memory information not available."}

    def test_processes_memory_layout(self) -> None:
        raw = {"memory": {"layout": [{"bank": "0", "type": "DDR4", "clockSpeed": 3200, "manufacturer": "G.Skill", "partNum": "XYZ"}]}}
        result = _process_system_info(raw)
        assert len(result["summary"]["memory_layout_details"]) == 1


class TestAnalyzeDiskHealth:
    def test_counts_healthy_disks(self) -> None:
        disks = [{"status": "DISK_OK"}, {"status": "DISK_OK"}]
        result = _analyze_disk_health(disks)
        assert result["healthy"] == 2

    def test_counts_failed_disks(self) -> None:
        disks = [{"status": "DISK_DSBL"}, {"status": "DISK_INVALID"}]
        result = _analyze_disk_health(disks)
        assert result["failed"] == 2

    def test_counts_warning_disks(self) -> None:
        disks = [{"status": "DISK_OK", "warning": 45}]
        result = _analyze_disk_health(disks)
        assert result["warning"] == 1

    def test_counts_missing_disks(self) -> None:
        disks = [{"status": "DISK_NP"}]
        result = _analyze_disk_health(disks)
        assert result["missing"] == 1

    def test_empty_list(self) -> None:
        result = _analyze_disk_health([])
        assert result["healthy"] == 0


class TestProcessArrayStatus:
    def test_basic_array(self) -> None:
        raw = {
            "state": "STARTED",
            "capacity": {"kilobytes": {"free": "1048576", "used": "524288", "total": "1572864"}},
            "parities": [{"status": "DISK_OK"}],
            "disks": [{"status": "DISK_OK"}],
            "caches": [],
        }
        result = _process_array_status(raw)
        assert result["summary"]["state"] == "STARTED"
        assert result["summary"]["overall_health"] == "HEALTHY"

    def test_degraded_array(self) -> None:
        raw = {
            "state": "STARTED",
            "parities": [],
            "disks": [{"status": "DISK_NP"}],
            "caches": [],
        }
        result = _process_array_status(raw)
        assert result["summary"]["overall_health"] == "DEGRADED"


# --- Integration tests for the tool function ---


class TestUnraidInfoTool:
    @pytest.fixture
    def _mock_graphql(self) -> AsyncMock:
        with patch("unraid_mcp.tools.info.make_graphql_request", new_callable=AsyncMock) as mock:
            yield mock

    @pytest.mark.asyncio
    async def test_overview_action(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "info": {
                "os": {"distro": "Unraid", "release": "7.2", "platform": "linux", "arch": "x86_64", "hostname": "test"},
                "cpu": {"manufacturer": "Intel", "brand": "i7", "cores": 4, "threads": 8},
            }
        }
        # Import and call the inner function by simulating registration
        from fastmcp import FastMCP
        test_mcp = FastMCP("test")
        from unraid_mcp.tools.info import register_info_tool
        register_info_tool(test_mcp)
        tool_fn = test_mcp._tool_manager._tools["unraid_info"].fn
        result = await tool_fn(action="overview")
        assert "summary" in result
        _mock_graphql.assert_called_once()

    @pytest.mark.asyncio
    async def test_ups_device_requires_device_id(self, _mock_graphql: AsyncMock) -> None:
        from fastmcp import FastMCP
        test_mcp = FastMCP("test")
        from unraid_mcp.tools.info import register_info_tool
        register_info_tool(test_mcp)
        tool_fn = test_mcp._tool_manager._tools["unraid_info"].fn
        with pytest.raises(ToolError, match="device_id is required"):
            await tool_fn(action="ups_device")

    @pytest.mark.asyncio
    async def test_network_action(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"network": {"id": "net:1", "accessUrls": []}}
        from fastmcp import FastMCP
        test_mcp = FastMCP("test")
        from unraid_mcp.tools.info import register_info_tool
        register_info_tool(test_mcp)
        tool_fn = test_mcp._tool_manager._tools["unraid_info"].fn
        result = await tool_fn(action="network")
        assert result["id"] == "net:1"

    @pytest.mark.asyncio
    async def test_connect_action(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "connect": {"status": "connected", "sandbox": False, "flashGuid": "abc123"}
        }
        from fastmcp import FastMCP
        test_mcp = FastMCP("test")
        from unraid_mcp.tools.info import register_info_tool
        register_info_tool(test_mcp)
        tool_fn = test_mcp._tool_manager._tools["unraid_info"].fn
        result = await tool_fn(action="connect")
        assert result["status"] == "connected"

    @pytest.mark.asyncio
    async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.side_effect = RuntimeError("unexpected")
        from fastmcp import FastMCP
        test_mcp = FastMCP("test")
        from unraid_mcp.tools.info import register_info_tool
        register_info_tool(test_mcp)
        tool_fn = test_mcp._tool_manager._tools["unraid_info"].fn
        with pytest.raises(ToolError, match="unexpected"):
            await tool_fn(action="online")
90  tests/test_keys.py  Normal file
@@ -0,0 +1,90 @@
"""Tests for unraid_keys tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")


class TestKeysValidation:
    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="delete", key_id="k:1")

    async def test_get_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="key_id"):
            await tool_fn(action="get")

    async def test_create_requires_name(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="name"):
            await tool_fn(action="create")

    async def test_update_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="key_id"):
            await tool_fn(action="update")

    async def test_delete_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="key_id"):
            await tool_fn(action="delete", confirm=True)


class TestKeysActions:
    async def test_list(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "apiKeys": [{"id": "k:1", "name": "mcp-key", "roles": ["admin"]}]
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert len(result["keys"]) == 1

    async def test_get(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"apiKey": {"id": "k:1", "name": "mcp-key", "roles": ["admin"]}}
        tool_fn = _make_tool()
        result = await tool_fn(action="get", key_id="k:1")
        assert result["name"] == "mcp-key"

    async def test_create(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "createApiKey": {"id": "k:new", "name": "new-key", "key": "secret123", "roles": []}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="create", name="new-key")
        assert result["success"] is True
        assert result["key"]["name"] == "new-key"

    async def test_create_with_roles(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "createApiKey": {"id": "k:new", "name": "admin-key", "key": "secret", "roles": ["admin"]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="create", name="admin-key", roles=["admin"])
        assert result["success"] is True

    async def test_update(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"updateApiKey": {"id": "k:1", "name": "renamed", "roles": []}}
        tool_fn = _make_tool()
        result = await tool_fn(action="update", key_id="k:1", name="renamed")
        assert result["success"] is True

    async def test_delete(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"deleteApiKeys": True}
        tool_fn = _make_tool()
        result = await tool_fn(action="delete", key_id="k:1", confirm=True)
        assert result["success"] is True
145
tests/test_notifications.py
Normal file
@@ -0,0 +1,145 @@
"""Tests for unraid_notifications tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn(
        "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
    )


class TestNotificationsValidation:
    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="delete", notification_id="n:1", notification_type="UNREAD")

    async def test_delete_archived_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="delete_archived")

    async def test_create_requires_fields(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="requires title"):
            await tool_fn(action="create")

    async def test_archive_requires_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="notification_id"):
            await tool_fn(action="archive")

    async def test_delete_requires_id_and_type(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="requires notification_id"):
            await tool_fn(action="delete", confirm=True)


class TestNotificationsActions:
    async def test_overview(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "notifications": {
                "overview": {
                    "unread": {"info": 5, "warning": 2, "alert": 0, "total": 7},
                    "archive": {"info": 10, "warning": 1, "alert": 0, "total": 11},
                }
            }
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="overview")
        assert result["unread"]["total"] == 7

    async def test_list(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "notifications": {
                "list": [{"id": "n:1", "title": "Test", "importance": "INFO"}]
            }
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert len(result["notifications"]) == 1

    async def test_warnings(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "notifications": {"warningsAndAlerts": [{"id": "n:1", "importance": "WARNING"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="warnings")
        assert len(result["warnings"]) == 1

    async def test_create(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "notifications": {"createNotification": {"id": "n:new", "title": "Test", "importance": "INFO"}}
        }
        tool_fn = _make_tool()
        result = await tool_fn(
            action="create",
            title="Test",
            subject="Test Subject",
            description="Test Desc",
            importance="info",
        )
        assert result["success"] is True

    async def test_archive_notification(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"notifications": {"archiveNotification": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="archive", notification_id="n:1")
        assert result["success"] is True

    async def test_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"notifications": {"deleteNotification": True}}
        tool_fn = _make_tool()
        result = await tool_fn(
            action="delete",
            notification_id="n:1",
            notification_type="unread",
            confirm=True,
        )
        assert result["success"] is True

    async def test_archive_all(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"notifications": {"archiveAll": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="archive_all")
        assert result["success"] is True

    async def test_unread_notification(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"notifications": {"unreadNotification": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="unread", notification_id="n:1")
        assert result["success"] is True
        assert result["action"] == "unread"

    async def test_list_with_importance_filter(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "notifications": {
                "list": [{"id": "n:1", "title": "Alert", "importance": "WARNING"}]
            }
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list", importance="warning", limit=10, offset=5)
        assert len(result["notifications"]) == 1
        call_args = _mock_graphql.call_args
        filter_var = call_args[0][1]["filter"]
        assert filter_var["importance"] == "WARNING"
        assert filter_var["limit"] == 10
        assert filter_var["offset"] == 5

    async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.side_effect = RuntimeError("boom")
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="boom"):
            await tool_fn(action="overview")
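`test_create` passes `importance="info"` while `test_list_with_importance_filter` asserts the variable reaches GraphQL as `"WARNING"`, so the tool evidently upper-cases user input into the enum form. A sketch of that normalization (names here are illustrative, not the tool's actual helpers):

```python
# Assumed enum values, mirroring the importance levels seen in the tests.
VALID_IMPORTANCE = {"INFO", "WARNING", "ALERT"}


def normalize_importance(value: str) -> str:
    """Upper-case a user-supplied importance so it matches the GraphQL enum."""
    upper = value.upper()
    if upper not in VALID_IMPORTANCE:
        # Reject anything outside the enum instead of sending it upstream.
        raise ValueError(f"Unknown importance: {value!r}")
    return upper
```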
102
tests/test_rclone.py
Normal file
@@ -0,0 +1,102 @@
"""Tests for unraid_rclone tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.rclone.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")


class TestRcloneValidation:
    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="delete_remote", name="gdrive")

    async def test_create_requires_fields(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="requires name"):
            await tool_fn(action="create_remote")

    async def test_delete_requires_name(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="name is required"):
            await tool_fn(action="delete_remote", confirm=True)


class TestRcloneActions:
    async def test_list_remotes(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "rclone": {"remotes": [{"name": "gdrive", "type": "drive"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list_remotes")
        assert len(result["remotes"]) == 1

    async def test_config_form(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "rclone": {"configForm": {"id": "form:1", "dataSchema": {}, "uiSchema": {}}}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="config_form")
        assert result["id"] == "form:1"

    async def test_config_form_with_provider(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "rclone": {"configForm": {"id": "form:s3", "dataSchema": {}, "uiSchema": {}}}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="config_form", provider_type="s3")
        assert result["id"] == "form:s3"
        call_args = _mock_graphql.call_args
        assert call_args[0][1] == {"formOptions": {"providerType": "s3"}}

    async def test_create_remote(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "rclone": {"createRCloneRemote": {"name": "newremote", "type": "s3"}}
        }
        tool_fn = _make_tool()
        result = await tool_fn(
            action="create_remote",
            name="newremote",
            provider_type="s3",
            config_data={"bucket": "mybucket"},
        )
        assert result["success"] is True

    async def test_create_remote_with_empty_config(self, _mock_graphql: AsyncMock) -> None:
        """Empty config_data dict should be accepted (not rejected by truthiness)."""
        _mock_graphql.return_value = {
            "rclone": {"createRCloneRemote": {"name": "ftp-remote", "type": "ftp"}}
        }
        tool_fn = _make_tool()
        result = await tool_fn(
            action="create_remote",
            name="ftp-remote",
            provider_type="ftp",
            config_data={},
        )
        assert result["success"] is True

    async def test_delete_remote(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"rclone": {"deleteRCloneRemote": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="delete_remote", name="gdrive", confirm=True)
        assert result["success"] is True

    async def test_delete_remote_failure(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"rclone": {"deleteRCloneRemote": False}}
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="Failed to delete"):
            await tool_fn(action="delete_remote", name="gdrive", confirm=True)
105
tests/test_storage.py
Normal file
@@ -0,0 +1,105 @@
"""Tests for unraid_storage tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError
from unraid_mcp.tools.storage import format_bytes

# --- Unit tests for helpers ---


class TestFormatBytes:
    def test_none(self) -> None:
        assert format_bytes(None) == "N/A"

    def test_bytes(self) -> None:
        assert format_bytes(512) == "512.00 B"

    def test_kilobytes(self) -> None:
        assert format_bytes(2048) == "2.00 KB"

    def test_megabytes(self) -> None:
        assert format_bytes(1048576) == "1.00 MB"

    def test_gigabytes(self) -> None:
        assert format_bytes(1073741824) == "1.00 GB"

    def test_terabytes(self) -> None:
        assert format_bytes(1099511627776) == "1.00 TB"


# --- Integration tests ---


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.storage.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage")


class TestStorageValidation:
    async def test_disk_details_requires_disk_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="disk_id"):
            await tool_fn(action="disk_details")

    async def test_logs_requires_log_path(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="log_path"):
            await tool_fn(action="logs")


class TestStorageActions:
    async def test_shares(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "shares": [{"id": "s:1", "name": "media"}, {"id": "s:2", "name": "backups"}]
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="shares")
        assert len(result["shares"]) == 2

    async def test_disks(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"disks": [{"id": "d:1", "device": "sda"}]}
        tool_fn = _make_tool()
        result = await tool_fn(action="disks")
        assert len(result["disks"]) == 1

    async def test_disk_details(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "disk": {"id": "d:1", "device": "sda", "name": "WD", "serialNum": "SN1", "size": 1073741824, "temperature": 35}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="disk_details", disk_id="d:1")
        assert result["summary"]["temperature"] == "35C"
        assert "1.00 GB" in result["summary"]["size_formatted"]

    async def test_disk_details_not_found(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"disk": None}
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="not found"):
            await tool_fn(action="disk_details", disk_id="d:missing")

    async def test_unassigned(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"unassignedDevices": []}
        tool_fn = _make_tool()
        result = await tool_fn(action="unassigned")
        assert result["devices"] == []

    async def test_log_files(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"logFiles": [{"name": "syslog", "path": "/var/log/syslog"}]}
        tool_fn = _make_tool()
        result = await tool_fn(action="log_files")
        assert len(result["log_files"]) == 1

    async def test_logs(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "log line", "totalLines": 1}}
        tool_fn = _make_tool()
        result = await tool_fn(action="logs", log_path="/var/log/syslog")
        assert result["content"] == "log line"
100
tests/test_users.py
Normal file
@@ -0,0 +1,100 @@
"""Tests for unraid_users tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.users.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.users", "register_users_tool", "unraid_users")


class TestUsersValidation:
    async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="delete", user_id="u:1")

    async def test_get_requires_user_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="user_id"):
            await tool_fn(action="get")

    async def test_add_requires_name_and_password(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="requires name and password"):
            await tool_fn(action="add")

    async def test_delete_requires_user_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="user_id"):
            await tool_fn(action="delete", confirm=True)


class TestUsersActions:
    async def test_me(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"me": {"id": "u:1", "name": "root", "role": "ADMIN"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="me")
        assert result["name"] == "root"

    async def test_list(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "users": [{"id": "u:1", "name": "root"}, {"id": "u:2", "name": "guest"}]
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert len(result["users"]) == 2

    async def test_get(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"user": {"id": "u:1", "name": "root", "role": "ADMIN"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="get", user_id="u:1")
        assert result["name"] == "root"

    async def test_add(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "newuser", "role": "USER"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="add", name="newuser", password="pass123")
        assert result["success"] is True

    async def test_add_with_role(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"addUser": {"id": "u:3", "name": "admin2", "role": "ADMIN"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="add", name="admin2", password="pass123", role="admin")
        assert result["success"] is True
        call_args = _mock_graphql.call_args
        assert call_args[0][1]["input"]["role"] == "ADMIN"

    async def test_delete(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"deleteUser": True}
        tool_fn = _make_tool()
        result = await tool_fn(action="delete", user_id="u:2", confirm=True)
        assert result["success"] is True

    async def test_cloud(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"cloud": {"status": "connected", "apiKey": "***"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="cloud")
        assert result["status"] == "connected"

    async def test_remote_access(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"remoteAccess": {"enabled": True, "url": "https://example.com"}}
        tool_fn = _make_tool()
        result = await tool_fn(action="remote_access")
        assert result["enabled"] is True

    async def test_origins(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"allowedOrigins": ["http://localhost", "https://example.com"]}
        tool_fn = _make_tool()
        result = await tool_fn(action="origins")
        assert len(result["origins"]) == 2
109
tests/test_vm.py
Normal file
@@ -0,0 +1,109 @@
"""Tests for unraid_vm tool."""

from unittest.mock import AsyncMock, patch

import pytest
from conftest import make_tool_fn

from unraid_mcp.core.exceptions import ToolError


@pytest.fixture
def _mock_graphql() -> AsyncMock:
    with patch("unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock) as mock:
        yield mock


def _make_tool():
    return make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")


class TestVmValidation:
    async def test_actions_except_list_require_vm_id(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        for action in ("details", "start", "stop", "pause", "resume", "reboot"):
            with pytest.raises(ToolError, match="vm_id"):
                await tool_fn(action=action)

    async def test_destructive_actions_require_confirm(self, _mock_graphql: AsyncMock) -> None:
        tool_fn = _make_tool()
        for action in ("force_stop", "reset"):
            with pytest.raises(ToolError, match="destructive"):
                await tool_fn(action=action, vm_id="uuid-1")

    async def test_destructive_vm_id_check_before_confirm(self, _mock_graphql: AsyncMock) -> None:
        """Destructive actions without vm_id should fail on confirm first."""
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="destructive"):
            await tool_fn(action="force_stop")


class TestVmActions:
    async def test_list(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "vms": {
                "domains": [
                    {"id": "vm:1", "name": "Windows 11", "state": "RUNNING", "uuid": "uuid-1"},
                ]
            }
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert len(result["vms"]) == 1
        assert result["vms"][0]["name"] == "Windows 11"

    async def test_list_empty(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"vms": {"domains": []}}
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert result["vms"] == []

    async def test_list_no_vms_key(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {}
        tool_fn = _make_tool()
        result = await tool_fn(action="list")
        assert result["vms"] == []

    async def test_details_by_uuid(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="details", vm_id="uuid-1")
        assert result["name"] == "Win11"

    async def test_details_by_name(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
        }
        tool_fn = _make_tool()
        result = await tool_fn(action="details", vm_id="Win11")
        assert result["uuid"] == "uuid-1"

    async def test_details_not_found(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {
            "vms": {"domains": [{"id": "vm:1", "name": "Win11", "state": "RUNNING", "uuid": "uuid-1"}]}
        }
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="not found"):
            await tool_fn(action="details", vm_id="nonexistent")

    async def test_start_vm(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"vm": {"start": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="start", vm_id="uuid-1")
        assert result["success"] is True
        assert result["action"] == "start"

    async def test_force_stop(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"vm": {"forceStop": True}}
        tool_fn = _make_tool()
        result = await tool_fn(action="force_stop", vm_id="uuid-1", confirm=True)
        assert result["success"] is True
        assert result["action"] == "force_stop"

    async def test_mutation_unexpected_response(self, _mock_graphql: AsyncMock) -> None:
        _mock_graphql.return_value = {"vm": {}}
        tool_fn = _make_tool()
        with pytest.raises(ToolError, match="Failed to start"):
            await tool_fn(action="start", vm_id="uuid-1")
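`test_details_by_uuid` and `test_details_by_name` show the tool resolves `vm_id` against either field of the listed domains. A minimal sketch of that lookup (`find_vm` is a hypothetical name, and the real tool raises its own `ToolError` rather than `LookupError`):

```python
def find_vm(domains: list[dict], vm_id: str) -> dict:
    """Resolve a VM by libvirt UUID or by display name via a linear scan."""
    for vm in domains:
        if vm.get("uuid") == vm_id or vm.get("name") == vm_id:
            return vm
    raise LookupError(f"VM '{vm_id}' not found")
```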
@@ -76,7 +76,7 @@ class OverwriteFileHandler(logging.FileHandler):
             )
             super().emit(reset_record)
 
-        except (OSError, IOError):
+        except OSError:
             # If there's an issue checking file size, just continue normally
             pass
@@ -29,6 +29,9 @@ for dotenv_path in dotenv_paths:
     load_dotenv(dotenv_path=dotenv_path)
     break
 
+# Application Version
+VERSION = "0.2.0"
+
 # Core API Configuration
 UNRAID_API_URL = os.getenv("UNRAID_API_URL")
 UNRAID_API_KEY = os.getenv("UNRAID_API_KEY")
@@ -4,19 +4,30 @@ This module provides the HTTP client interface for making GraphQL requests
 to the Unraid API with proper timeout handling and error management.
 """
 
+import asyncio
 import json
 from typing import Any
 
 import httpx
 
 from ..config.logging import logger
-from ..config.settings import TIMEOUT_CONFIG, UNRAID_API_KEY, UNRAID_API_URL, UNRAID_VERIFY_SSL
+from ..config.settings import (
+    TIMEOUT_CONFIG,
+    UNRAID_API_KEY,
+    UNRAID_API_URL,
+    UNRAID_VERIFY_SSL,
+    VERSION,
+)
 from ..core.exceptions import ToolError
 
 # HTTP timeout configuration
 DEFAULT_TIMEOUT = httpx.Timeout(10.0, read=30.0, connect=5.0)
 DISK_TIMEOUT = httpx.Timeout(10.0, read=TIMEOUT_CONFIG['disk_operations'], connect=5.0)
 
+# Global connection pool (module-level singleton)
+_http_client: httpx.AsyncClient | None = None
+_client_lock = asyncio.Lock()
+
 
 def is_idempotent_error(error_message: str, operation: str) -> bool:
     """Check if a GraphQL error represents an idempotent operation that should be treated as success.
@@ -48,6 +59,49 @@ def is_idempotent_error(error_message: str, operation: str) -> bool:
     return False
 
 
+async def get_http_client() -> httpx.AsyncClient:
+    """Get or create shared HTTP client with connection pooling.
+
+    Returns:
+        Singleton AsyncClient instance with connection pooling enabled
+    """
+    global _http_client
+
+    async with _client_lock:
+        if _http_client is None or _http_client.is_closed:
+            _http_client = httpx.AsyncClient(
+                # Connection pool settings
+                limits=httpx.Limits(
+                    max_keepalive_connections=20,
+                    max_connections=100,
+                    keepalive_expiry=30.0
+                ),
+                # Default timeout (can be overridden per-request)
+                timeout=DEFAULT_TIMEOUT,
+                # SSL verification
+                verify=UNRAID_VERIFY_SSL,
+                # Connection pooling headers
+                headers={
+                    "Connection": "keep-alive",
+                    "User-Agent": f"UnraidMCPServer/{VERSION}"
+                }
+            )
+            logger.info("Created shared HTTP client with connection pooling (20 keepalive, 100 max connections)")
+
+    return _http_client
+
+
+async def close_http_client() -> None:
+    """Close the shared HTTP client (call on server shutdown)."""
+    global _http_client
+
+    async with _client_lock:
+        if _http_client is not None:
+            await _http_client.aclose()
+            _http_client = None
+            logger.info("Closed shared HTTP client")
+
+
 async def make_graphql_request(
     query: str,
     variables: dict[str, Any] | None = None,
@@ -78,7 +132,7 @@ async def make_graphql_request(
     headers = {
         "Content-Type": "application/json",
         "X-API-Key": UNRAID_API_KEY,
-        "User-Agent": "UnraidMCPServer/0.1.0"  # Custom user-agent
+        "User-Agent": f"UnraidMCPServer/{VERSION}"  # Custom user-agent
     }
 
     payload: dict[str, Any] = {"query": query}
@@ -88,13 +142,28 @@ async def make_graphql_request(
     logger.debug(f"Making GraphQL request to {UNRAID_API_URL}:")
     logger.debug(f"Query: {query[:200]}{'...' if len(query) > 200 else ''}")  # Log truncated query
     if variables:
-        logger.debug(f"Variables: {variables}")
-
-    current_timeout = custom_timeout if custom_timeout is not None else DEFAULT_TIMEOUT
+        _SENSITIVE_KEYS = {"password", "key", "secret", "token", "apikey"}
+        redacted = {
+            k: ("***" if k.lower() in _SENSITIVE_KEYS else v)
+            for k, v in (variables.get("input", variables) if isinstance(variables.get("input"), dict) else variables).items()
+        }
+        logger.debug(f"Variables: {redacted}")
 
     try:
-        async with httpx.AsyncClient(timeout=current_timeout, verify=UNRAID_VERIFY_SSL) as client:
+        # Get the shared HTTP client with connection pooling
+        client = await get_http_client()
+
+        # Override timeout if custom timeout specified
+        if custom_timeout is not None:
+            response = await client.post(
+                UNRAID_API_URL,
+                json=payload,
+                headers=headers,
+                timeout=custom_timeout
+            )
+        else:
+            response = await client.post(UNRAID_API_URL, json=payload, headers=headers)
 
         response.raise_for_status()  # Raise an exception for HTTP error codes 4xx/5xx
 
         response_data = response.json()
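The redaction above compares `k.lower()` against the sensitive-key set, so every entry in that set must itself be lowercase ("apikey", not "apiKey") or it can never match. As a standalone sketch of the rule (the helper name `redact_variables` is illustrative; the shipped code does this inline):

```python
# Lowercase forms only: lookups use k.lower(), so a mixed-case entry like
# "apiKey" would silently never match.
SENSITIVE_KEYS = {"password", "key", "secret", "token", "apikey"}


def redact_variables(variables: dict) -> dict:
    """Mask sensitive values before they are written to the debug log."""
    # Mutations often nest their payload under "input"; redact that level
    # when present, otherwise the top-level variables.
    inner = variables.get("input", variables)
    target = inner if isinstance(inner, dict) else variables
    return {k: ("***" if k.lower() in SENSITIVE_KEYS else v) for k, v in target.items()}
```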
@@ -5,6 +5,17 @@ This is the main entry point for the Unraid MCP Server. It imports and starts
 the modular server implementation from unraid_mcp.server.
 """
 
+import asyncio
+
+
+async def shutdown_cleanup() -> None:
+    """Cleanup resources on server shutdown."""
+    try:
+        from .core.client import close_http_client
+        await close_http_client()
+    except Exception as e:
+        print(f"Error during cleanup: {e}")
+
 
 def main() -> None:
     """Main entry point for the Unraid MCP Server."""
@@ -13,8 +24,18 @@ def main() -> None:
         run_server()
     except KeyboardInterrupt:
         print("\nServer stopped by user")
+        try:
+            asyncio.run(shutdown_cleanup())
+        except RuntimeError:
+            # Event loop already closed, skip cleanup
+            pass
     except Exception as e:
         print(f"Server failed to start: {e}")
+        try:
+            asyncio.run(shutdown_cleanup())
+        except RuntimeError:
+            # Event loop already closed, skip cleanup
+            pass
         raise
@@ -15,38 +15,29 @@ from .config.settings import (
     UNRAID_MCP_HOST,
     UNRAID_MCP_PORT,
     UNRAID_MCP_TRANSPORT,
+    VERSION,
 )
 from .subscriptions.diagnostics import register_diagnostic_tools
 from .subscriptions.manager import SubscriptionManager
 from .subscriptions.resources import register_subscription_resources
-from .tools.docker import register_docker_tools
-from .tools.health import register_health_tools
-from .tools.rclone import register_rclone_tools
-from .tools.storage import register_storage_tools
-from .tools.system import register_system_tools
-from .tools.virtualization import register_vm_tools
+from .tools.array import register_array_tool
+from .tools.docker import register_docker_tool
+from .tools.health import register_health_tool
+from .tools.info import register_info_tool
+from .tools.keys import register_keys_tool
+from .tools.notifications import register_notifications_tool
+from .tools.rclone import register_rclone_tool
+from .tools.storage import register_storage_tool
+from .tools.users import register_users_tool
+from .tools.virtualization import register_vm_tool
 
 # Initialize FastMCP instance
 mcp = FastMCP(
     name="Unraid MCP Server",
     instructions="Provides tools to interact with an Unraid server's GraphQL API.",
-    version="0.1.0",
+    version=VERSION,
 )
 
-# Initialize subscription manager
-subscription_manager = SubscriptionManager()
-
-
-async def autostart_subscriptions() -> None:
-    """Auto-start all subscriptions marked for auto-start in SubscriptionManager"""
-    logger.info("[AUTOSTART] Initiating subscription auto-start process...")
-
-    try:
-        # Use the SubscriptionManager auto-start method
-        await subscription_manager.auto_start_all_subscriptions()
-        logger.info("[AUTOSTART] Auto-start process completed successfully")
-    except Exception as e:
-        logger.error(f"[AUTOSTART] Failed during auto-start process: {e}", exc_info=True)
+# Note: SubscriptionManager singleton is defined in subscriptions/manager.py
+# and imported by resources.py - no duplicate instance needed here
 
 
 def register_all_modules() -> None:
@@ -54,35 +45,24 @@ def register_all_modules() -> None:
|
||||
try:
|
||||
# Register subscription resources first
|
||||
register_subscription_resources(mcp)
|
||||
logger.info("📊 Subscription resources registered")
|
||||
logger.info("Subscription resources registered")
|
||||
|
||||
# Register diagnostic tools
|
||||
register_diagnostic_tools(mcp)
|
||||
logger.info("🔧 Diagnostic tools registered")
|
||||
# Register all 10 consolidated tools
|
||||
register_info_tool(mcp)
|
||||
register_array_tool(mcp)
|
||||
register_storage_tool(mcp)
|
||||
register_docker_tool(mcp)
|
||||
register_vm_tool(mcp)
|
||||
register_notifications_tool(mcp)
|
||||
register_rclone_tool(mcp)
|
||||
register_users_tool(mcp)
|
||||
register_keys_tool(mcp)
|
||||
register_health_tool(mcp)
|
||||
|
||||
# Register all tool categories
|
||||
register_system_tools(mcp)
|
||||
logger.info("🖥️ System tools registered")
|
||||
|
||||
register_docker_tools(mcp)
|
||||
logger.info("🐳 Docker tools registered")
|
||||
|
||||
register_vm_tools(mcp)
|
||||
logger.info("💻 Virtualization tools registered")
|
||||
|
||||
register_storage_tools(mcp)
|
||||
logger.info("💾 Storage tools registered")
|
||||
|
||||
register_health_tools(mcp)
|
||||
logger.info("🏥 Health tools registered")
|
||||
|
||||
register_rclone_tools(mcp)
|
||||
logger.info("☁️ RClone tools registered")
|
||||
|
||||
logger.info("🎯 All modules registered successfully - Server ready!")
|
||||
logger.info("All 10 tools registered successfully - Server ready!")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Failed to register modules: {e}", exc_info=True)
|
||||
logger.error(f"Failed to register modules: {e}", exc_info=True)
|
||||
raise
|
||||
|
||||
|
||||
@@ -106,34 +86,31 @@ def run_server() -> None:
|
||||
# Register all modules
|
||||
register_all_modules()
|
||||
|
||||
logger.info(f"🚀 Starting Unraid MCP Server on {UNRAID_MCP_HOST}:{UNRAID_MCP_PORT} using {UNRAID_MCP_TRANSPORT} transport...")
|
||||
logger.info(f"Starting Unraid MCP Server on {UNRAID_MCP_HOST}:{UNRAID_MCP_PORT} using {UNRAID_MCP_TRANSPORT} transport...")
|
||||
|
||||
try:
|
||||
# Auto-start subscriptions on first async operation
|
||||
if UNRAID_MCP_TRANSPORT == "streamable-http":
|
||||
# Use the recommended Streamable HTTP transport
|
||||
mcp.run(
|
||||
transport="streamable-http",
|
||||
host=UNRAID_MCP_HOST,
|
||||
port=UNRAID_MCP_PORT,
|
||||
path="/mcp" # Standard path for MCP
|
||||
path="/mcp"
|
||||
)
|
||||
elif UNRAID_MCP_TRANSPORT == "sse":
|
||||
# Deprecated SSE transport - log warning
|
||||
logger.warning("SSE transport is deprecated and may be removed in a future version. Consider switching to 'streamable-http'.")
|
||||
logger.warning("SSE transport is deprecated. Consider switching to 'streamable-http'.")
|
||||
mcp.run(
|
||||
transport="sse",
|
||||
host=UNRAID_MCP_HOST,
|
||||
port=UNRAID_MCP_PORT,
|
||||
path="/mcp" # Keep custom path for SSE
|
||||
path="/mcp"
|
||||
)
|
||||
elif UNRAID_MCP_TRANSPORT == "stdio":
|
||||
mcp.run() # Defaults to stdio
|
||||
mcp.run()
|
||||
else:
|
||||
logger.error(f"Unsupported MCP_TRANSPORT: {UNRAID_MCP_TRANSPORT}. Choose 'streamable-http' (recommended), 'sse' (deprecated), or 'stdio'.")
|
||||
logger.error(f"Unsupported MCP_TRANSPORT: {UNRAID_MCP_TRANSPORT}. Choose 'streamable-http', 'sse', or 'stdio'.")
|
||||
sys.exit(1)
|
||||
except Exception as e:
|
||||
logger.critical(f"❌ Failed to start Unraid MCP server: {e}", exc_info=True)
|
||||
logger.critical(f"Failed to start Unraid MCP server: {e}", exc_info=True)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
|
||||
@@ -57,10 +57,14 @@ def register_diagnostic_tools(mcp: FastMCP) -> None:
                 ping_timeout=10
             ) as websocket:
 
-                # Send connection init
+                # Send connection init (using standard X-API-Key format)
                 await websocket.send(json.dumps({
                     "type": "connection_init",
-                    "payload": {"Authorization": f"Bearer {UNRAID_API_KEY}"}
+                    "payload": {
+                        "headers": {
+                            "X-API-Key": UNRAID_API_KEY
+                        }
+                    }
                 }))
 
                 # Wait for ack
@@ -15,7 +15,7 @@ import websockets
 from websockets.legacy.protocol import Subprotocol
 
 from ..config.logging import logger
-from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL
+from ..config.settings import UNRAID_API_KEY, UNRAID_API_URL, UNRAID_VERIFY_SSL
 from ..core.types import SubscriptionData
 
 
@@ -162,7 +162,8 @@ class SubscriptionManager:
             subprotocols=[Subprotocol("graphql-transport-ws"), Subprotocol("graphql-ws")],
             ping_interval=20,
             ping_timeout=10,
-            close_timeout=10
+            close_timeout=10,
+            ssl=UNRAID_VERIFY_SSL
         ) as websocket:
 
             selected_proto = websocket.subprotocol or "none"
@@ -180,15 +181,10 @@ class SubscriptionManager:
 
             if UNRAID_API_KEY:
                 logger.debug(f"[AUTH:{subscription_name}] Adding authentication payload")
+                # Use standard X-API-Key header format (matching HTTP client)
                 auth_payload = {
-                    "X-API-Key": UNRAID_API_KEY,
-                    "x-api-key": UNRAID_API_KEY,
-                    "authorization": f"Bearer {UNRAID_API_KEY}",
-                    "Authorization": f"Bearer {UNRAID_API_KEY}",
                     "headers": {
-                        "X-API-Key": UNRAID_API_KEY,
-                        "x-api-key": UNRAID_API_KEY,
-                        "Authorization": f"Bearer {UNRAID_API_KEY}"
+                        "X-API-Key": UNRAID_API_KEY
                     }
                 }
                 init_payload["payload"] = auth_payload
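The graphql-transport-ws handshake message these hunks converge on can be sketched in a few lines. `build_connection_init` is a hypothetical helper; the payload shape (an `X-API-Key` entry nested under `"headers"`) is the one the diff above settles on:

```python
import json

def build_connection_init(api_key: str) -> str:
    # graphql-transport-ws handshake: the first client frame is
    # connection_init, and authentication travels in its payload.
    message = {
        "type": "connection_init",
        "payload": {"headers": {"X-API-Key": api_key}},
    }
    return json.dumps(message)
```

Sending one canonical header entry, rather than every casing and Bearer variant at once, is what the commit's "sensitive var redaction" and auth cleanup simplify down to.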
@@ -1 +1,14 @@
-"""MCP tools organized by functional domain."""
+"""MCP tools organized by functional domain.
+
+10 consolidated tools with ~90 actions total:
+    unraid_info - System information queries (19 actions)
+    unraid_array - Array operations and power management (12 actions)
+    unraid_storage - Storage, disks, and logs (6 actions)
+    unraid_docker - Docker container management (15 actions)
+    unraid_vm - Virtual machine management (9 actions)
+    unraid_notifications - Notification management (9 actions)
+    unraid_rclone - Cloud storage remotes (4 actions)
+    unraid_users - User management (8 actions)
+    unraid_keys - API key management (5 actions)
+    unraid_health - Health monitoring and diagnostics (3 actions)
+"""
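The consolidated action pattern shared by all ten tools reduces to a small dispatch core. The sketch below uses made-up query strings, not the server's real GraphQL documents:

```python
# Pre-built operation dicts: user input selects a document by key but is
# never interpolated into one, which rules out GraphQL injection.
QUERIES = {"list": "query { docker { containers { id names } } }"}
MUTATIONS = {"start": "mutation Start($id: ID!) { docker { start(id: $id) { id } } }"}

def resolve_operation(action: str) -> str:
    # One Literal-typed `action` parameter replaces many separate tools;
    # the union of both dicts is the full action surface.
    all_actions = set(QUERIES) | set(MUTATIONS)
    if action not in all_actions:
        raise ValueError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
    return QUERIES.get(action) or MUTATIONS[action]
```

Exposing one tool with an enumerated `action` keeps the MCP schema small (the ~12k-to-~5k token reduction in the commit message) while the dict lookup keeps each operation's GraphQL fixed at import time.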
unraid_mcp/tools/array.py (new file, 161 lines)
@@ -0,0 +1,161 @@
+"""Array operations and system power management.
+
+Provides the `unraid_array` tool with 12 actions for array lifecycle,
+parity operations, disk management, and system power control.
+"""
+
+from typing import Any, Literal
+
+from fastmcp import FastMCP
+
+from ..config.logging import logger
+from ..core.client import make_graphql_request
+from ..core.exceptions import ToolError
+
+QUERIES: dict[str, str] = {
+    "parity_history": """
+        query GetParityHistory {
+            array { parityCheckStatus { progress speed errors } }
+        }
+    """,
+}
+
+MUTATIONS: dict[str, str] = {
+    "start": """
+        mutation StartArray {
+            setState(input: { desiredState: STARTED }) { state }
+        }
+    """,
+    "stop": """
+        mutation StopArray {
+            setState(input: { desiredState: STOPPED }) { state }
+        }
+    """,
+    "parity_start": """
+        mutation StartParityCheck($correct: Boolean) {
+            parityCheck { start(correct: $correct) }
+        }
+    """,
+    "parity_pause": """
+        mutation PauseParityCheck {
+            parityCheck { pause }
+        }
+    """,
+    "parity_resume": """
+        mutation ResumeParityCheck {
+            parityCheck { resume }
+        }
+    """,
+    "parity_cancel": """
+        mutation CancelParityCheck {
+            parityCheck { cancel }
+        }
+    """,
+    "mount_disk": """
+        mutation MountDisk($id: PrefixedID!) {
+            mountArrayDisk(id: $id)
+        }
+    """,
+    "unmount_disk": """
+        mutation UnmountDisk($id: PrefixedID!) {
+            unmountArrayDisk(id: $id)
+        }
+    """,
+    "clear_stats": """
+        mutation ClearStats($id: PrefixedID!) {
+            clearArrayDiskStatistics(id: $id)
+        }
+    """,
+    "shutdown": """
+        mutation Shutdown {
+            shutdown
+        }
+    """,
+    "reboot": """
+        mutation Reboot {
+            reboot
+        }
+    """,
+}
+
+DESTRUCTIVE_ACTIONS = {"start", "stop", "shutdown", "reboot"}
+DISK_ACTIONS = {"mount_disk", "unmount_disk", "clear_stats"}
+
+ARRAY_ACTIONS = Literal[
+    "start", "stop",
+    "parity_start", "parity_pause", "parity_resume", "parity_cancel", "parity_history",
+    "mount_disk", "unmount_disk", "clear_stats",
+    "shutdown", "reboot",
+]
+
+
+def register_array_tool(mcp: FastMCP) -> None:
+    """Register the unraid_array tool with the FastMCP instance."""
+
+    @mcp.tool()
+    async def unraid_array(
+        action: ARRAY_ACTIONS,
+        confirm: bool = False,
+        disk_id: str | None = None,
+        correct: bool | None = None,
+    ) -> dict[str, Any]:
+        """Manage the Unraid array and system power.
+
+        Actions:
+            start - Start the array (destructive, requires confirm=True)
+            stop - Stop the array (destructive, requires confirm=True)
+            parity_start - Start parity check (optional correct=True to fix errors)
+            parity_pause - Pause running parity check
+            parity_resume - Resume paused parity check
+            parity_cancel - Cancel running parity check
+            parity_history - Get parity check status/history
+            mount_disk - Mount an array disk (requires disk_id)
+            unmount_disk - Unmount an array disk (requires disk_id)
+            clear_stats - Clear disk statistics (requires disk_id)
+            shutdown - Shut down the server (destructive, requires confirm=True)
+            reboot - Reboot the server (destructive, requires confirm=True)
+        """
+        all_actions = set(QUERIES) | set(MUTATIONS)
+        if action not in all_actions:
+            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
+
+        if action in DESTRUCTIVE_ACTIONS and not confirm:
+            raise ToolError(
+                f"Action '{action}' is destructive. Set confirm=True to proceed."
+            )
+
+        if action in DISK_ACTIONS and not disk_id:
+            raise ToolError(f"disk_id is required for '{action}' action")
+
+        try:
+            logger.info(f"Executing unraid_array action={action}")
+
+            # Read-only query
+            if action in QUERIES:
+                data = await make_graphql_request(QUERIES[action])
+                return {"success": True, "action": action, "data": data}
+
+            # Mutations
+            query = MUTATIONS[action]
+            variables: dict[str, Any] | None = None
+
+            if action in DISK_ACTIONS:
+                variables = {"id": disk_id}
+            elif action == "parity_start" and correct is not None:
+                variables = {"correct": correct}
+
+            data = await make_graphql_request(query, variables)
+
+            return {
+                "success": True,
+                "action": action,
+                "data": data,
+            }
+
+        except ToolError:
+            raise
+        except Exception as e:
+            logger.error(f"Error in unraid_array action={action}: {e}", exc_info=True)
+            raise ToolError(f"Failed to execute array/{action}: {str(e)}") from e
+
+    logger.info("Array tool registered successfully")
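The `confirm=True` safety gate used by `unraid_array` (and every other tool with destructive actions) boils down to one membership check. In this sketch `ToolError` is a local stand-in for the server's exception type:

```python
DESTRUCTIVE_ACTIONS = {"start", "stop", "shutdown", "reboot"}

class ToolError(Exception):
    """Stand-in for the server's user-facing error type."""

def require_confirmation(action: str, confirm: bool) -> None:
    # Destructive operations never run on a bare action name; the caller
    # must opt in explicitly with confirm=True.
    if action in DESTRUCTIVE_ACTIONS and not confirm:
        raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
```

Because the check runs before any GraphQL request is built, an LLM client that forgets the flag gets a self-describing error rather than a rebooted server.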
@@ -1,11 +1,11 @@
-"""Docker container management tools.
+"""Docker container management.
 
-This module provides tools for Docker container lifecycle and management
-including listing containers with caching options, start/stop operations,
-and detailed container information retrieval.
+Provides the `unraid_docker` tool with 15 actions for container lifecycle,
+logs, networks, and update management.
 """
 
-from typing import Any
+import re
+from typing import Any, Literal
 
 from fastmcp import FastMCP
@@ -13,376 +13,311 @@ from ..config.logging import logger
 from ..core.client import make_graphql_request
 from ..core.exceptions import ToolError
 
+QUERIES: dict[str, str] = {
+    "list": """
+        query ListDockerContainers {
+            docker { containers(skipCache: false) {
+                id names image state status autoStart
+            } }
+        }
+    """,
+    "details": """
+        query GetContainerDetails {
+            docker { containers(skipCache: false) {
+                id names image imageId command created
+                ports { ip privatePort publicPort type }
+                sizeRootFs labels state status
+                hostConfig { networkMode }
+                networkSettings mounts autoStart
+            } }
+        }
+    """,
+    "logs": """
+        query GetContainerLogs($id: PrefixedID!, $tail: Int) {
+            docker { logs(id: $id, tail: $tail) }
+        }
+    """,
+    "networks": """
+        query GetDockerNetworks {
+            dockerNetworks { id name driver scope }
+        }
+    """,
+    "network_details": """
+        query GetDockerNetwork($id: PrefixedID!) {
+            dockerNetwork(id: $id) { id name driver scope containers }
+        }
+    """,
+    "port_conflicts": """
+        query GetPortConflicts {
+            docker { portConflicts { containerName port conflictsWith } }
+        }
+    """,
+    "check_updates": """
+        query CheckContainerUpdates {
+            docker { containerUpdateStatuses { id name updateAvailable currentVersion latestVersion } }
+        }
+    """,
+}
 
-def find_container_by_identifier(container_identifier: str, containers: list[dict[str, Any]]) -> dict[str, Any] | None:
-    """Find a container by ID or name with fuzzy matching.
-
-    Args:
-        container_identifier: Container ID or name to find
-        containers: List of container dictionaries to search
-
-    Returns:
-        Container dictionary if found, None otherwise
-    """
+MUTATIONS: dict[str, str] = {
+    "start": """
+        mutation StartContainer($id: PrefixedID!) {
+            docker { start(id: $id) { id names state status } }
+        }
+    """,
+    "stop": """
+        mutation StopContainer($id: PrefixedID!) {
+            docker { stop(id: $id) { id names state status } }
+        }
+    """,
+    "pause": """
+        mutation PauseContainer($id: PrefixedID!) {
+            docker { pause(id: $id) { id names state status } }
+        }
+    """,
+    "unpause": """
+        mutation UnpauseContainer($id: PrefixedID!) {
+            docker { unpause(id: $id) { id names state status } }
+        }
+    """,
+    "remove": """
+        mutation RemoveContainer($id: PrefixedID!) {
+            docker { removeContainer(id: $id) }
+        }
+    """,
+    "update": """
+        mutation UpdateContainer($id: PrefixedID!) {
+            docker { updateContainer(id: $id) { id names state status } }
+        }
+    """,
+    "update_all": """
+        mutation UpdateAllContainers {
+            docker { updateAllContainers { id names state status } }
+        }
+    """,
+}
+
+DESTRUCTIVE_ACTIONS = {"remove"}
+CONTAINER_ACTIONS = {"start", "stop", "restart", "pause", "unpause", "remove", "update", "details", "logs"}
+
+DOCKER_ACTIONS = Literal[
+    "list", "details", "start", "stop", "restart", "pause", "unpause",
+    "remove", "update", "update_all", "logs",
+    "networks", "network_details", "port_conflicts", "check_updates",
+]
+
+# Docker container IDs: 64 hex chars + optional suffix (e.g., ":local")
+_DOCKER_ID_PATTERN = re.compile(r"^[a-f0-9]{64}(:[a-z0-9]+)?$", re.IGNORECASE)
+
+
+def find_container_by_identifier(
+    identifier: str, containers: list[dict[str, Any]]
+) -> dict[str, Any] | None:
+    """Find a container by ID or name with fuzzy matching."""
     if not containers:
         return None
 
     # Exact matches first
-    for container in containers:
-        if container.get("id") == container_identifier:
-            return container
-
-        # Check all names for exact match
-        names = container.get("names", [])
-        if container_identifier in names:
-            return container
+    for c in containers:
+        if c.get("id") == identifier:
+            return c
+        if identifier in c.get("names", []):
+            return c
 
     # Fuzzy matching - case insensitive partial matches
-    container_identifier_lower = container_identifier.lower()
-    for container in containers:
-        names = container.get("names", [])
-        for name in names:
-            if container_identifier_lower in name.lower() or name.lower() in container_identifier_lower:
-                logger.info(f"Found container via fuzzy match: '{container_identifier}' -> '{name}'")
-                return container
+    id_lower = identifier.lower()
+    for c in containers:
+        for name in c.get("names", []):
+            if id_lower in name.lower() or name.lower() in id_lower:
+                logger.info(f"Fuzzy match: '{identifier}' -> '{name}'")
+                return c
 
     return None
 
 def get_available_container_names(containers: list[dict[str, Any]]) -> list[str]:
-    """Extract all available container names for error reporting.
-
-    Args:
-        containers: List of container dictionaries
-
-    Returns:
-        List of container names
-    """
-    names = []
-    for container in containers:
-        container_names = container.get("names", [])
-        names.extend(container_names)
+    """Extract all container names for error messages."""
+    names: list[str] = []
+    for c in containers:
+        names.extend(c.get("names", []))
     return names
 
 
-def register_docker_tools(mcp: FastMCP) -> None:
-    """Register all Docker tools with the FastMCP instance.
-
-    Args:
-        mcp: FastMCP instance to register tools with
-    """
-
-    @mcp.tool()
-    async def list_docker_containers() -> list[dict[str, Any]]:
-        """Lists all Docker containers on the Unraid system.
-
-        Returns:
-            List of Docker container information dictionaries
-        """
-        query = """
-        query ListDockerContainers {
-            docker {
-                containers(skipCache: false) {
-                    id
-                    names
-                    image
-                    state
-                    status
-                    autoStart
-                }
-            }
-        }
-        """
-        try:
-            logger.info("Executing list_docker_containers tool")
-            response_data = await make_graphql_request(query)
-            if response_data.get("docker"):
-                containers = response_data["docker"].get("containers", [])
-                return list(containers) if isinstance(containers, list) else []
-            return []
-        except Exception as e:
-            logger.error(f"Error in list_docker_containers: {e}", exc_info=True)
-            raise ToolError(f"Failed to list Docker containers: {str(e)}") from e
-
-    @mcp.tool()
-    async def manage_docker_container(container_id: str, action: str) -> dict[str, Any]:
-        """Starts or stops a specific Docker container. Action must be 'start' or 'stop'.
-
-        Args:
-            container_id: Container ID to manage
-            action: Action to perform - 'start' or 'stop'
-
-        Returns:
-            Dict containing operation result and container information
-        """
-        import asyncio
-
-        if action.lower() not in ["start", "stop"]:
-            logger.warning(f"Invalid action '{action}' for manage_docker_container")
-            raise ToolError("Invalid action. Must be 'start' or 'stop'.")
-
-        mutation_name = action.lower()
-
-        # Step 1: Execute the operation mutation
-        operation_query = f"""
-        mutation ManageDockerContainer($id: PrefixedID!) {{
-            docker {{
-                {mutation_name}(id: $id) {{
-                    id
-                    names
-                    state
-                    status
-                }}
-            }}
-        }}
-        """
-
-        variables = {"id": container_id}
-
-        try:
-            logger.info(f"Executing manage_docker_container: action={action}, id={container_id}")
-
-            # Step 1: Resolve container identifier to actual container ID if needed
-            actual_container_id = container_id
-            if not container_id.startswith("3cb1026338736ed07b8afec2c484e429710b0f6550dc65d0c5c410ea9d0fa6b2:"):
-                # This looks like a name, not a full container ID - need to resolve it
-                logger.info(f"Resolving container identifier '{container_id}' to actual container ID")
-                list_query = """
-                query ResolveContainerID {
-                    docker {
-                        containers(skipCache: true) {
-                            id
-                            names
-                        }
-                    }
-                }
-                """
-                list_response = await make_graphql_request(list_query)
-                if list_response.get("docker"):
-                    containers = list_response["docker"].get("containers", [])
-                    resolved_container = find_container_by_identifier(container_id, containers)
-                    if resolved_container:
-                        actual_container_id = str(resolved_container.get("id", ""))
-                        logger.info(f"Resolved '{container_id}' to container ID: {actual_container_id}")
-                    else:
-                        available_names = get_available_container_names(containers)
-                        error_msg = f"Container '{container_id}' not found for {action} operation."
-                        if available_names:
-                            error_msg += f" Available containers: {', '.join(available_names[:10])}"
-                        raise ToolError(error_msg)
-
-            # Update variables with the actual container ID
-            variables = {"id": actual_container_id}
-
-            # Execute the operation with idempotent error handling
-            operation_context = {"operation": action}
-            operation_response = await make_graphql_request(
-                operation_query,
-                variables,
-                operation_context=operation_context
-            )
-
-            # Handle idempotent success case
-            if operation_response.get("idempotent_success"):
-                logger.info(f"Container {action} operation was idempotent: {operation_response.get('message')}")
-                # Get current container state since the operation was already complete
-                try:
-                    list_query = """
-                    query GetContainerStateAfterIdempotent($skipCache: Boolean!) {
-                        docker {
-                            containers(skipCache: $skipCache) {
-                                id
-                                names
-                                image
-                                state
-                                status
-                                autoStart
-                            }
-                        }
-                    }
-                    """
-                    list_response = await make_graphql_request(list_query, {"skipCache": True})
-
-                    if list_response.get("docker"):
-                        containers = list_response["docker"].get("containers", [])
-                        container = find_container_by_identifier(container_id, containers)
-
-                        if container:
-                            return {
-                                "operation_result": {"id": container_id, "names": container.get("names", [])},
-                                "container_details": container,
-                                "success": True,
-                                "message": f"Container {action} operation was already complete - current state returned",
-                                "idempotent": True
-                            }
-
-                except Exception as lookup_error:
-                    logger.warning(f"Could not retrieve container state after idempotent operation: {lookup_error}")
-
-                return {
-                    "operation_result": {"id": container_id},
-                    "container_details": None,
-                    "success": True,
-                    "message": f"Container {action} operation was already complete",
-                    "idempotent": True
-                }
-
-            # Handle normal successful operation
-            if not (operation_response.get("docker") and operation_response["docker"].get(mutation_name)):
-                raise ToolError(f"Failed to execute {action} operation on container")
-
-            operation_result = operation_response["docker"][mutation_name]
-            logger.info(f"Container {action} operation completed for {container_id}")
-
-            # Step 2: Wait briefly for state to propagate, then fetch current container details
-            await asyncio.sleep(1.0)  # Give the container state time to update
-
-            # Step 3: Try to get updated container details with retry logic
-            max_retries = 3
-            retry_delay = 1.0
-
-            for attempt in range(max_retries):
-                try:
-                    # Query all containers and find the one we just operated on
-                    list_query = """
-                    query GetUpdatedContainerState($skipCache: Boolean!) {
-                        docker {
-                            containers(skipCache: $skipCache) {
-                                id
-                                names
-                                image
-                                state
-                                status
-                                autoStart
-                            }
-                        }
-                    }
-                    """
-
-                    # Skip cache to get fresh data
-                    list_response = await make_graphql_request(list_query, {"skipCache": True})
-
-                    if list_response.get("docker"):
-                        containers = list_response["docker"].get("containers", [])
-
-                        # Find the container using our helper function
-                        container = find_container_by_identifier(container_id, containers)
-                        if container:
-                            logger.info(f"Found updated container state for {container_id}")
-                            return {
-                                "operation_result": operation_result,
-                                "container_details": container,
-                                "success": True,
-                                "message": f"Container {action} operation completed successfully"
-                            }
-
-                    # If not found in this attempt, wait and retry
-                    if attempt < max_retries - 1:
-                        logger.warning(f"Container {container_id} not found after {action}, retrying in {retry_delay}s (attempt {attempt + 1}/{max_retries})")
-                        await asyncio.sleep(retry_delay)
-                        retry_delay *= 1.5  # Exponential backoff
-
-                except Exception as query_error:
-                    logger.warning(f"Error querying updated container state (attempt {attempt + 1}): {query_error}")
-                    if attempt < max_retries - 1:
-                        await asyncio.sleep(retry_delay)
-                        retry_delay *= 1.5
-                    else:
-                        # On final attempt failure, still return operation success
-                        logger.warning(f"Could not retrieve updated container details after {action}, but operation succeeded")
-                        return {
-                            "operation_result": operation_result,
-                            "container_details": None,
-                            "success": True,
-                            "message": f"Container {action} operation completed, but updated state could not be retrieved",
-                            "warning": "Container state query failed after operation - this may be due to timing or the container not being found in the updated state"
-                        }
-
-            # If we get here, all retries failed to find the container
-            logger.warning(f"Container {container_id} not found in any retry attempt after {action}")
-            return {
-                "operation_result": operation_result,
-                "container_details": None,
-                "success": True,
-                "message": f"Container {action} operation completed, but container not found in subsequent queries",
-                "warning": "Container not found in updated state - this may indicate the operation succeeded but container is no longer listed"
-            }
-
-        except Exception as e:
-            logger.error(f"Error in manage_docker_container ({action}): {e}", exc_info=True)
-            raise ToolError(f"Failed to {action} Docker container: {str(e)}") from e
-
-    @mcp.tool()
-    async def get_docker_container_details(container_identifier: str) -> dict[str, Any]:
-        """Retrieves detailed information for a specific Docker container by its ID or name.
-
-        Args:
-            container_identifier: Container ID or name to retrieve details for
-
-        Returns:
-            Dict containing detailed container information
-        """
-        # This tool fetches all containers and then filters by ID or name.
-        # More detailed query for a single container if found:
-        detailed_query_fields = """
-            id
-            names
-            image
-            imageId
-            command
-            created
-            ports { ip privatePort publicPort type }
-            sizeRootFs
-            labels # JSONObject
-            state
-            status
-            hostConfig { networkMode }
-            networkSettings # JSONObject
-            mounts # JSONObject array
-            autoStart
-        """
-
-        # Fetch all containers first
-        list_query = f"""
-        query GetAllContainerDetailsForFiltering {{
-            docker {{
-                containers(skipCache: false) {{
-                    {detailed_query_fields}
-                }}
-            }}
-        }}
-        """
-        try:
-            logger.info(f"Executing get_docker_container_details for identifier: {container_identifier}")
-            response_data = await make_graphql_request(list_query)
-
-            containers = []
-            if response_data.get("docker"):
-                containers = response_data["docker"].get("containers", [])
-
-            # Use our enhanced container lookup
-            container = find_container_by_identifier(container_identifier, containers)
+async def _resolve_container_id(container_id: str) -> str:
+    """Resolve a container name/identifier to its actual PrefixedID."""
+    if _DOCKER_ID_PATTERN.match(container_id):
+        return container_id
+
+    logger.info(f"Resolving container identifier '{container_id}'")
+    list_query = """
+        query ResolveContainerID {
+            docker { containers(skipCache: true) { id names } }
+        }
+    """
+    data = await make_graphql_request(list_query)
+    containers = data.get("docker", {}).get("containers", [])
+    resolved = find_container_by_identifier(container_id, containers)
+    if resolved:
+        actual_id = str(resolved.get("id", ""))
+        logger.info(f"Resolved '{container_id}' -> '{actual_id}'")
+        return actual_id
+
+    available = get_available_container_names(containers)
+    msg = f"Container '{container_id}' not found."
+    if available:
+        msg += f" Available: {', '.join(available[:10])}"
+    raise ToolError(msg)
+
+
+def register_docker_tool(mcp: FastMCP) -> None:
+    """Register the unraid_docker tool with the FastMCP instance."""
+
+    @mcp.tool()
+    async def unraid_docker(
+        action: DOCKER_ACTIONS,
+        container_id: str | None = None,
+        network_id: str | None = None,
+        confirm: bool = False,
+        tail_lines: int = 100,
+    ) -> dict[str, Any]:
+        """Manage Docker containers, networks, and updates.
+
+        Actions:
+            list - List all containers
+            details - Detailed info for a container (requires container_id)
+            start - Start a container (requires container_id)
+            stop - Stop a container (requires container_id)
+            restart - Stop then start a container (requires container_id)
+            pause - Pause a container (requires container_id)
+            unpause - Unpause a container (requires container_id)
+            remove - Remove a container (requires container_id, confirm=True)
+            update - Update a container to latest image (requires container_id)
+            update_all - Update all containers with available updates
+            logs - Get container logs (requires container_id, optional tail_lines)
+            networks - List Docker networks
+            network_details - Details of a network (requires network_id)
+            port_conflicts - Check for port conflicts
+            check_updates - Check which containers have updates available
+        """
+        all_actions = set(QUERIES) | set(MUTATIONS) | {"restart"}
+        if action not in all_actions:
+            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
+
+        if action in DESTRUCTIVE_ACTIONS and not confirm:
+            raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
+
+        if action in CONTAINER_ACTIONS and not container_id:
+            raise ToolError(f"container_id is required for '{action}' action")
+
+        if action == "network_details" and not network_id:
+            raise ToolError("network_id is required for 'network_details' action")
+
+        try:
+            logger.info(f"Executing unraid_docker action={action}")
+
+            # --- Read-only queries ---
+            if action == "list":
+                data = await make_graphql_request(QUERIES["list"])
+                containers = data.get("docker", {}).get("containers", [])
+                return {"containers": list(containers) if isinstance(containers, list) else []}
+
+            if action == "details":
+                data = await make_graphql_request(QUERIES["details"])
+                containers = data.get("docker", {}).get("containers", [])
+                container = find_container_by_identifier(container_id or "", containers)
                 if container:
-                logger.info(f"Found container {container_identifier}")
                     return container
|
||||
available = get_available_container_names(containers)
|
||||
msg = f"Container '{container_id}' not found."
|
||||
if available:
|
||||
msg += f" Available: {', '.join(available[:10])}"
|
||||
raise ToolError(msg)
|
||||
|
||||
# Container not found - provide helpful error message with available containers
|
||||
available_names = get_available_container_names(containers)
|
||||
logger.warning(f"Container with identifier '{container_identifier}' not found.")
|
||||
logger.info(f"Available containers: {available_names}")
|
||||
if action == "logs":
|
||||
actual_id = await _resolve_container_id(container_id or "")
|
||||
data = await make_graphql_request(
|
||||
QUERIES["logs"], {"id": actual_id, "tail": tail_lines}
|
||||
)
|
||||
return {"logs": data.get("docker", {}).get("logs")}
|
||||
|
||||
error_msg = f"Container '{container_identifier}' not found."
|
||||
if available_names:
|
||||
error_msg += f" Available containers: {', '.join(available_names[:10])}" # Limit to first 10
|
||||
if len(available_names) > 10:
|
||||
error_msg += f" (and {len(available_names) - 10} more)"
|
||||
else:
|
||||
error_msg += " No containers are currently available."
|
||||
if action == "networks":
|
||||
data = await make_graphql_request(QUERIES["networks"])
|
||||
networks = data.get("dockerNetworks", [])
|
||||
return {"networks": list(networks) if isinstance(networks, list) else []}
|
||||
|
||||
raise ToolError(error_msg)
|
||||
if action == "network_details":
|
||||
data = await make_graphql_request(
|
||||
QUERIES["network_details"], {"id": network_id}
|
||||
)
|
||||
return dict(data.get("dockerNetwork", {}))
|
||||
|
||||
if action == "port_conflicts":
|
||||
data = await make_graphql_request(QUERIES["port_conflicts"])
|
||||
conflicts = data.get("docker", {}).get("portConflicts", [])
|
||||
return {"port_conflicts": list(conflicts) if isinstance(conflicts, list) else []}
|
||||
|
||||
if action == "check_updates":
|
||||
data = await make_graphql_request(QUERIES["check_updates"])
|
||||
statuses = data.get("docker", {}).get("containerUpdateStatuses", [])
|
||||
return {"update_statuses": list(statuses) if isinstance(statuses, list) else []}
|
||||
|
||||
# --- Mutations ---
|
||||
if action == "restart":
|
||||
actual_id = await _resolve_container_id(container_id or "")
|
||||
# Stop (idempotent: treat "already stopped" as success)
|
||||
stop_data = await make_graphql_request(
|
||||
MUTATIONS["stop"], {"id": actual_id},
|
||||
operation_context={"operation": "stop"},
|
||||
)
|
||||
stop_was_idempotent = stop_data.get("idempotent_success", False)
|
||||
# Start (idempotent: treat "already running" as success)
|
||||
start_data = await make_graphql_request(
|
||||
MUTATIONS["start"], {"id": actual_id},
|
||||
operation_context={"operation": "start"},
|
||||
)
|
||||
result = start_data.get("docker", {}).get("start", {})
|
||||
response: dict[str, Any] = {
|
||||
"success": True, "action": "restart", "container": result,
|
||||
}
|
||||
if stop_was_idempotent:
|
||||
response["note"] = "Container was already stopped before restart"
|
||||
return response
|
||||
|
||||
if action == "update_all":
|
||||
data = await make_graphql_request(MUTATIONS["update_all"])
|
||||
results = data.get("docker", {}).get("updateAllContainers", [])
|
||||
return {"success": True, "action": "update_all", "containers": results}
|
||||
|
||||
# Single-container mutations
|
||||
if action in MUTATIONS:
|
||||
actual_id = await _resolve_container_id(container_id or "")
|
||||
op_context = {"operation": action} if action in ("start", "stop") else None
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS[action], {"id": actual_id},
|
||||
operation_context=op_context,
|
||||
)
|
||||
|
||||
# Handle idempotent success
|
||||
if data.get("idempotent_success"):
|
||||
return {
|
||||
"success": True,
|
||||
"action": action,
|
||||
"idempotent": True,
|
||||
"message": f"Container already in desired state for '{action}'",
|
||||
}
|
||||
|
||||
docker_data = data.get("docker", {})
|
||||
result = docker_data.get(action, docker_data.get("removeContainer"))
|
||||
return {
|
||||
"success": True,
|
||||
"action": action,
|
||||
"container": result,
|
||||
}
|
||||
|
||||
return {}
|
||||
|
||||
except ToolError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_docker_container_details: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to retrieve Docker container details: {str(e)}") from e
|
||||
logger.error(f"Error in unraid_docker action={action}: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to execute docker/{action}: {str(e)}") from e
|
||||
|
||||
logger.info("Docker tools registered successfully")
|
||||
logger.info("Docker tool registered successfully")
|
||||
|
||||
@@ -1,168 +1,198 @@
"""Health monitoring and diagnostics.

Provides the `unraid_health` tool with 3 actions for system health checks,
connection testing, and subscription diagnostics.
"""

import datetime
import time
from typing import Any, Literal

from fastmcp import FastMCP

from ..config.logging import logger
from ..config.settings import (
    UNRAID_API_URL,
    UNRAID_MCP_HOST,
    UNRAID_MCP_PORT,
    UNRAID_MCP_TRANSPORT,
    VERSION,
)
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError

HEALTH_ACTIONS = Literal["check", "test_connection", "diagnose"]

# Severity ordering: only upgrade, never downgrade
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}


def register_health_tool(mcp: FastMCP) -> None:
    """Register the unraid_health tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_health(
        action: HEALTH_ACTIONS,
    ) -> dict[str, Any]:
        """Monitor Unraid MCP server and system health.

        Actions:
            check - Comprehensive health check (API latency, array, notifications, Docker)
            test_connection - Quick connectivity test (just checks { online })
            diagnose - Subscription system diagnostics
        """
        if action not in ("check", "test_connection", "diagnose"):
            raise ToolError(
                f"Invalid action '{action}'. Must be one of: check, test_connection, diagnose"
            )

        try:
            logger.info(f"Executing unraid_health action={action}")

            if action == "test_connection":
                start = time.time()
                data = await make_graphql_request("query { online }")
                latency = round((time.time() - start) * 1000, 2)
                return {
                    "status": "connected",
                    "online": data.get("online"),
                    "latency_ms": latency,
                }

            if action == "check":
                return await _comprehensive_check()

            if action == "diagnose":
                return await _diagnose_subscriptions()

            return {}

        except ToolError:
            raise
        except Exception as e:
            logger.error(f"Error in unraid_health action={action}: {e}", exc_info=True)
            raise ToolError(f"Failed to execute health/{action}: {str(e)}") from e

    logger.info("Health tool registered successfully")


async def _comprehensive_check() -> dict[str, Any]:
    """Run comprehensive health check against the Unraid system."""
    start_time = time.time()
    health_severity = 0  # Track as int to prevent downgrade
    issues: list[str] = []

    def _escalate(level: str) -> None:
        nonlocal health_severity
        health_severity = max(health_severity, _SEVERITY.get(level, 0))

    try:
        query = """
        query ComprehensiveHealthCheck {
            info {
                machineId time
                versions { unraid }
                os { uptime }
            }
            array { state }
            notifications {
                overview { unread { alert warning total } }
            }
            docker {
                containers(skipCache: true) { id state status }
            }
        }
        """
        data = await make_graphql_request(query)
        api_latency = round((time.time() - start_time) * 1000, 2)

        health_info: dict[str, Any] = {
            "status": "healthy",
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "api_latency_ms": api_latency,
            "server": {
                "name": "Unraid MCP Server",
                "version": VERSION,
                "transport": UNRAID_MCP_TRANSPORT,
                "host": UNRAID_MCP_HOST,
                "port": UNRAID_MCP_PORT,
            },
        }

        if not data:
            health_info["status"] = "unhealthy"
            health_info["issues"] = ["No response from Unraid API"]
            return health_info

        # System info
        info = data.get("info", {})
        if info:
            health_info["unraid_system"] = {
                "status": "connected",
                "url": UNRAID_API_URL,
                "machine_id": info.get("machineId"),
                "time": info.get("time"),
                "version": info.get("versions", {}).get("unraid"),
                "uptime": info.get("os", {}).get("uptime"),
            }
        else:
            _escalate("degraded")
            issues.append("Unable to retrieve system info")

        # Array
        array_info = data.get("array", {})
        if array_info:
            state = array_info.get("state", "unknown")
            health_info["array_status"] = {
                "state": state,
                "healthy": state in ("STARTED", "STOPPED"),
            }
            if state not in ("STARTED", "STOPPED"):
                _escalate("warning")
                issues.append(f"Array in unexpected state: {state}")
        else:
            _escalate("warning")
            issues.append("Unable to retrieve array status")

        # Notifications
        notifications = data.get("notifications", {})
        if notifications and notifications.get("overview"):
            unread = notifications["overview"].get("unread", {})
            alerts = unread.get("alert", 0)
            health_info["notifications"] = {
                "unread_total": unread.get("total", 0),
                "unread_alerts": alerts,
                "unread_warnings": unread.get("warning", 0),
            }
            if alerts > 0:
                _escalate("warning")
                issues.append(f"{alerts} unread alert(s)")

        # Docker
        docker = data.get("docker", {})
        if docker and docker.get("containers"):
            containers = docker["containers"]
            health_info["docker_services"] = {
                "total": len(containers),
                "running": len([c for c in containers if c.get("state") == "running"]),
                "stopped": len([c for c in containers if c.get("state") == "exited"]),
            }

        # Latency assessment
        if api_latency > 10000:
            _escalate("degraded")
            issues.append(f"Very high API latency: {api_latency}ms")
        elif api_latency > 5000:
            _escalate("warning")
            issues.append(f"High API latency: {api_latency}ms")

        # Resolve final status from severity level
        severity_to_status = {v: k for k, v in _SEVERITY.items()}
        health_info["status"] = severity_to_status.get(health_severity, "healthy")
        if issues:
            health_info["issues"] = issues

        health_info["performance"] = {
            "api_response_time_ms": api_latency,
            "check_duration_ms": round((time.time() - start_time) * 1000, 2),
        }

        return health_info

    except Exception as e:
        logger.error(f"Health check failed: {e}")
        return {
            "status": "unhealthy",
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "error": str(e),
            "api_latency_ms": round((time.time() - start_time) * 1000, 2),
            "server": {
                "name": "Unraid MCP Server",
                "version": VERSION,
                "transport": UNRAID_MCP_TRANSPORT,
                "host": UNRAID_MCP_HOST,
                "port": UNRAID_MCP_PORT,
            },
        }


async def _diagnose_subscriptions() -> dict[str, Any]:
    """Import and run subscription diagnostics."""
    try:
        from ..subscriptions.manager import subscription_manager
        from ..subscriptions.resources import ensure_subscriptions_started

        await ensure_subscriptions_started()

        status = subscription_manager.get_subscription_status()
        connection_issues: list[dict[str, Any]] = []

        diagnostic_info: dict[str, Any] = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "environment": {
                "auto_start_enabled": subscription_manager.auto_start_enabled,
                "max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
                "api_url_configured": bool(UNRAID_API_URL),
            },
            "subscriptions": status,
            "summary": {
                "total_configured": len(subscription_manager.subscription_configs),
                "active_count": len(subscription_manager.active_subscriptions),
                "with_data": len(subscription_manager.resource_data),
                "in_error_state": 0,
                "connection_issues": connection_issues,
            },
        }

        for sub_name, sub_status in status.items():
            runtime = sub_status.get("runtime", {})
            conn_state = runtime.get("connection_state", "unknown")
            if conn_state in ("error", "auth_failed", "timeout", "max_retries_exceeded"):
                diagnostic_info["summary"]["in_error_state"] += 1
                if runtime.get("last_error"):
                    connection_issues.append({
                        "subscription": sub_name,
                        "state": conn_state,
                        "error": runtime["last_error"],
                    })

        return diagnostic_info

    except ImportError:
        return {
            "error": "Subscription modules not available",
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
    except Exception as e:
        raise ToolError(f"Failed to generate diagnostics: {str(e)}") from e
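The health check's upgrade-only severity model (`_SEVERITY` plus an escalate helper) can be exercised in isolation: statuses map to integers so that a later "warning" can never mask an earlier "degraded". A minimal sketch of that pattern:

```python
# Upgrade-only severity: escalation takes the max level ever observed,
# and the final status is recovered from the reverse mapping.
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}


def final_status(observed: list[str]) -> str:
    """Fold a sequence of observed statuses into the worst one seen."""
    level = 0
    for s in observed:
        level = max(level, _SEVERITY.get(s, 0))
    return {v: k for k, v in _SEVERITY.items()}[level]
```

This is the property the code-review fix for "severity ordering" enforces: checks run in any order, and the reported status is always the most severe.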
unraid_mcp/tools/info.py (new file, 400 lines)
@@ -0,0 +1,400 @@
"""System information and server status queries.

Provides the `unraid_info` tool with 19 read-only actions for retrieving
system information, array status, network config, and server metadata.
"""

from typing import Any, Literal

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError

# Pre-built queries keyed by action name
QUERIES: dict[str, str] = {
    "overview": """
        query GetSystemInfo {
            info {
                os { platform distro release codename kernel arch hostname codepage logofile serial build uptime }
                cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache flags }
                memory {
                    layout { bank type clockSpeed formFactor manufacturer partNum serialNum }
                }
                baseboard { manufacturer model version serial assetTag }
                system { manufacturer model version serial uuid sku }
                versions { kernel openssl systemOpenssl systemOpensslLib node v8 npm yarn pm2 gulp grunt git tsc mysql redis mongodb apache nginx php docker postfix postgresql perl python gcc unraid }
                apps { installed started }
                machineId
                time
            }
        }
    """,
    "array": """
        query GetArrayStatus {
            array {
                id
                state
                capacity {
                    kilobytes { free used total }
                    disks { free used total }
                }
                boot { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                parities { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                disks { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                caches { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
            }
        }
    """,
    "network": """
        query GetNetworkConfig {
            network {
                id
                accessUrls { type name ipv4 ipv6 }
            }
        }
    """,
    "registration": """
        query GetRegistrationInfo {
            registration {
                id type
                keyFile { location contents }
                state expiration updateExpiration
            }
        }
    """,
    "connect": """
        query GetConnectSettings {
            connect { status sandbox flashGuid }
        }
    """,
    "variables": """
        query GetSelectiveUnraidVariables {
            vars {
                id version name timeZone comment security workgroup domain domainShort
                hideDotFiles localMaster enableFruit useNtp domainLogin sysModel
                sysFlashSlots useSsl port portssl localTld bindMgt useTelnet porttelnet
                useSsh portssh startPage startArray shutdownTimeout
                shareSmbEnabled shareNfsEnabled shareAfpEnabled shareCacheEnabled
                shareAvahiEnabled safeMode startMode configValid configError joinStatus
                deviceCount flashGuid flashProduct flashVendor mdState mdVersion
                shareCount shareSmbCount shareNfsCount shareAfpCount shareMoverActive
                csrfToken
            }
        }
    """,
    "metrics": """
        query GetMetrics {
            metrics { cpu { used } memory { used total } }
        }
    """,
    "services": """
        query GetServices {
            services { name state }
        }
    """,
    "display": """
        query GetDisplay {
            info { display { theme } }
        }
    """,
    "config": """
        query GetConfig {
            config { valid error }
        }
    """,
    "online": """
        query GetOnline { online }
    """,
    "owner": """
        query GetOwner {
            owner { username avatar url }
        }
    """,
    "settings": """
        query GetSettings {
            settings { unified { values } }
        }
    """,
    "server": """
        query GetServer {
            info {
                os { hostname uptime }
                versions { unraid }
                machineId time
            }
            array { state }
            online
        }
    """,
    "servers": """
        query GetServers {
            servers { id name status description ip port }
        }
    """,
    "flash": """
        query GetFlash {
            flash { id guid product vendor size }
        }
    """,
    "ups_devices": """
        query GetUpsDevices {
            upsDevices { id model status runtime charge load }
        }
    """,
    "ups_device": """
        query GetUpsDevice($id: PrefixedID!) {
            upsDeviceById(id: $id) { id model status runtime charge load voltage frequency temperature }
        }
    """,
    "ups_config": """
        query GetUpsConfig {
            upsConfiguration { enabled mode cable driver port }
        }
    """,
}

INFO_ACTIONS = Literal[
    "overview", "array", "network", "registration", "connect", "variables",
    "metrics", "services", "display", "config", "online", "owner",
    "settings", "server", "servers", "flash",
    "ups_devices", "ups_device", "ups_config",
]


def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
    """Process raw system info into summary + details."""
    summary: dict[str, Any] = {}
    if raw_info.get("os"):
        os_info = raw_info["os"]
        summary["os"] = (
            f"{os_info.get('distro', '')} {os_info.get('release', '')} "
            f"({os_info.get('platform', '')}, {os_info.get('arch', '')})"
        )
        summary["hostname"] = os_info.get("hostname")
        summary["uptime"] = os_info.get("uptime")

    if raw_info.get("cpu"):
        cpu = raw_info["cpu"]
        summary["cpu"] = (
            f"{cpu.get('manufacturer', '')} {cpu.get('brand', '')} "
            f"({cpu.get('cores')} cores, {cpu.get('threads')} threads)"
        )

    if raw_info.get("memory") and raw_info["memory"].get("layout"):
        mem_layout = raw_info["memory"]["layout"]
        summary["memory_layout_details"] = []
        for stick in mem_layout:
            summary["memory_layout_details"].append(
                f"Bank {stick.get('bank', '?')}: Type {stick.get('type', '?')}, "
                f"Speed {stick.get('clockSpeed', '?')}MHz, "
                f"Manufacturer: {stick.get('manufacturer', '?')}, "
                f"Part: {stick.get('partNum', '?')}"
            )
        summary["memory_summary"] = (
            "Stick layout details retrieved. Overall total/used/free memory stats "
            "are unavailable due to API limitations."
        )
    else:
        summary["memory_summary"] = "Memory information not available."

    return {"summary": summary, "details": raw_info}


def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
    """Analyze health status of disk arrays."""
    counts = {"healthy": 0, "failed": 0, "missing": 0, "new": 0, "warning": 0, "unknown": 0}
    for disk in disks:
        status = disk.get("status", "").upper()
        warning = disk.get("warning")
        critical = disk.get("critical")
        if status == "DISK_OK":
            counts["warning" if (warning or critical) else "healthy"] += 1
        elif status in ("DISK_DSBL", "DISK_INVALID"):
            counts["failed"] += 1
        elif status == "DISK_NP":
            counts["missing"] += 1
        elif status == "DISK_NEW":
            counts["new"] += 1
        else:
            counts["unknown"] += 1
    return counts


def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]:
    """Process raw array data into summary + details."""

    def format_kb(k: Any) -> str:
        if k is None:
            return "N/A"
        k = int(k)
        if k >= 1024 * 1024 * 1024:
            return f"{k / (1024 * 1024 * 1024):.2f} TB"
        if k >= 1024 * 1024:
            return f"{k / (1024 * 1024):.2f} GB"
        if k >= 1024:
            return f"{k / 1024:.2f} MB"
        return f"{k} KB"

    summary: dict[str, Any] = {"state": raw.get("state")}
    if raw.get("capacity") and raw["capacity"].get("kilobytes"):
        kb = raw["capacity"]["kilobytes"]
        summary["capacity_total"] = format_kb(kb.get("total"))
        summary["capacity_used"] = format_kb(kb.get("used"))
        summary["capacity_free"] = format_kb(kb.get("free"))

    summary["num_parity_disks"] = len(raw.get("parities", []))
    summary["num_data_disks"] = len(raw.get("disks", []))
    summary["num_cache_pools"] = len(raw.get("caches", []))

    health_summary: dict[str, Any] = {}
    for key, label in [("parities", "parity_health"), ("disks", "data_health"), ("caches", "cache_health")]:
        if raw.get(key):
            health_summary[label] = _analyze_disk_health(raw[key])

    total_failed = sum(h.get("failed", 0) for h in health_summary.values())
    total_missing = sum(h.get("missing", 0) for h in health_summary.values())
    total_warning = sum(h.get("warning", 0) for h in health_summary.values())

    if total_failed > 0:
        overall = "CRITICAL"
    elif total_missing > 0:
        overall = "DEGRADED"
    elif total_warning > 0:
        overall = "WARNING"
    else:
        overall = "HEALTHY"

    summary["overall_health"] = overall
    summary["health_summary"] = health_summary

    return {"summary": summary, "details": raw}


def register_info_tool(mcp: FastMCP) -> None:
    """Register the unraid_info tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_info(
        action: INFO_ACTIONS,
        device_id: str | None = None,
    ) -> dict[str, Any]:
        """Query Unraid system information.

        Actions:
            overview - OS, CPU, memory, baseboard, versions
            array - Array state, capacity, disk health
            network - Access URLs, interfaces
            registration - License type, state, expiration
            connect - Unraid Connect settings
            variables - System variables and configuration
            metrics - CPU and memory utilization
            services - Running services
            display - Theme settings
            config - Configuration validity
            online - Server online status
            owner - Server owner info
            settings - All unified settings
            server - Quick server summary
            servers - Connected servers list
            flash - Flash drive info
            ups_devices - List UPS devices
            ups_device - Single UPS device (requires device_id)
            ups_config - UPS configuration
        """
        if action not in QUERIES:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {list(QUERIES.keys())}")

        if action == "ups_device" and not device_id:
            raise ToolError("device_id is required for ups_device action")

        query = QUERIES[action]
        variables: dict[str, Any] | None = None
        if action == "ups_device":
            variables = {"id": device_id}

        try:
            logger.info(f"Executing unraid_info action={action}")
            data = await make_graphql_request(query, variables)

            # Action-specific response processing
            if action == "overview":
                raw = data.get("info", {})
                if not raw:
                    raise ToolError("No system info returned from Unraid API")
                return _process_system_info(raw)

            if action == "array":
                raw = data.get("array", {})
                if not raw:
                    raise ToolError("No array information returned from Unraid API")
                return _process_array_status(raw)

            if action == "network":
                return dict(data.get("network", {}))

            if action == "registration":
                return dict(data.get("registration", {}))

            if action == "connect":
                return dict(data.get("connect", {}))

            if action == "variables":
                return dict(data.get("vars", {}))

            if action == "metrics":
                return dict(data.get("metrics", {}))

            if action == "services":
                services = data.get("services", [])
                return {"services": list(services) if isinstance(services, list) else []}

            if action == "display":
                info = data.get("info", {})
                return dict(info.get("display", {}))

            if action == "config":
                return dict(data.get("config", {}))

            if action == "online":
                return {"online": data.get("online")}

            if action == "owner":
                return dict(data.get("owner", {}))

            if action == "settings":
                settings = data.get("settings", {})
                if settings and settings.get("unified"):
                    return dict(settings["unified"].get("values", {}))
                return {}

            if action == "server":
                return data

            if action == "servers":
                servers = data.get("servers", [])
                return {"servers": list(servers) if isinstance(servers, list) else []}

            if action == "flash":
                return dict(data.get("flash", {}))

            if action == "ups_devices":
                devices = data.get("upsDevices", [])
                return {"ups_devices": list(devices) if isinstance(devices, list) else []}

            if action == "ups_device":
                return dict(data.get("upsDeviceById", {}))

            if action == "ups_config":
                return dict(data.get("upsConfiguration", {}))

            return data

        except ToolError:
            raise
        except Exception as e:
            logger.error(f"Error in unraid_info action={action}: {e}", exc_info=True)
            raise ToolError(f"Failed to execute info/{action}: {str(e)}") from e
|
||||
raise ToolError(f"Failed to execute info/{action}: {str(e)}") from e
|
||||
|
||||
logger.info("Info tool registered successfully")
|
||||
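The dispatch above can be reduced to a small standalone sketch (the query strings and names here are illustrative placeholders, not the real Unraid GraphQL schema): the pre-built `QUERIES` dict doubles as the set of valid actions, so validation and lookup share one source of truth and no user input is ever spliced into a query string.

```python
from typing import Any

# Illustrative sketch only: placeholder queries, not the real Unraid schema.
QUERIES: dict[str, str] = {
    "overview": "query { info { os { platform } } }",
    "array": "query { array { state } }",
}

def resolve(action: str) -> str:
    # The dict of pre-built queries doubles as the set of valid actions,
    # so unknown actions fail fast with a helpful message.
    if action not in QUERIES:
        raise ValueError(f"Invalid action '{action}'. Must be one of: {sorted(QUERIES)}")
    return QUERIES[action]
```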
146 unraid_mcp/tools/keys.py Normal file
@@ -0,0 +1,146 @@
"""API key management.
|
||||
|
||||
Provides the `unraid_keys` tool with 5 actions for listing, viewing,
|
||||
creating, updating, and deleting API keys.
|
||||
"""
|
||||
|
||||
from typing import Any, Literal
|
||||
|
||||
from fastmcp import FastMCP
|
||||
|
||||
from ..config.logging import logger
|
||||
from ..core.client import make_graphql_request
|
||||
from ..core.exceptions import ToolError
|
||||
|
||||
QUERIES: dict[str, str] = {
|
||||
"list": """
|
||||
query ListApiKeys {
|
||||
apiKeys { id name roles permissions createdAt lastUsed }
|
||||
}
|
||||
""",
|
||||
"get": """
|
||||
query GetApiKey($id: PrefixedID!) {
|
||||
apiKey(id: $id) { id name roles permissions createdAt lastUsed }
|
||||
}
|
||||
""",
|
||||
}
|
||||
|
||||
MUTATIONS: dict[str, str] = {
|
||||
"create": """
|
||||
mutation CreateApiKey($input: CreateApiKeyInput!) {
|
||||
createApiKey(input: $input) { id name key roles }
|
||||
}
|
||||
""",
|
||||
"update": """
|
||||
mutation UpdateApiKey($input: UpdateApiKeyInput!) {
|
||||
updateApiKey(input: $input) { id name roles }
|
||||
}
|
||||
""",
|
||||
"delete": """
|
||||
mutation DeleteApiKeys($input: DeleteApiKeysInput!) {
|
||||
deleteApiKeys(input: $input)
|
||||
}
|
||||
""",
|
||||
}
|
||||
|
||||
DESTRUCTIVE_ACTIONS = {"delete"}
|
||||
|
||||
KEY_ACTIONS = Literal[
|
||||
"list", "get", "create", "update", "delete",
|
||||
]
|
||||
|
||||
|
||||
def register_keys_tool(mcp: FastMCP) -> None:
|
||||
"""Register the unraid_keys tool with the FastMCP instance."""
|
||||
|
||||
@mcp.tool()
|
||||
async def unraid_keys(
|
||||
action: KEY_ACTIONS,
|
||||
confirm: bool = False,
|
||||
key_id: str | None = None,
|
||||
name: str | None = None,
|
||||
roles: list[str] | None = None,
|
||||
permissions: list[str] | None = None,
|
||||
) -> dict[str, Any]:
|
||||
"""Manage Unraid API keys.
|
||||
|
||||
Actions:
|
||||
list - List all API keys
|
||||
get - Get a specific API key (requires key_id)
|
||||
create - Create a new API key (requires name; optional roles, permissions)
|
||||
update - Update an API key (requires key_id; optional name, roles)
|
||||
delete - Delete API keys (requires key_id, confirm=True)
|
||||
"""
|
||||
all_actions = set(QUERIES) | set(MUTATIONS)
|
||||
if action not in all_actions:
|
||||
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
|
||||
|
||||
if action in DESTRUCTIVE_ACTIONS and not confirm:
|
||||
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
|
||||
|
||||
try:
|
||||
logger.info(f"Executing unraid_keys action={action}")
|
||||
|
||||
if action == "list":
|
||||
data = await make_graphql_request(QUERIES["list"])
|
||||
keys = data.get("apiKeys", [])
|
||||
return {"keys": list(keys) if isinstance(keys, list) else []}
|
||||
|
||||
if action == "get":
|
||||
if not key_id:
|
||||
raise ToolError("key_id is required for 'get' action")
|
||||
data = await make_graphql_request(QUERIES["get"], {"id": key_id})
|
||||
return dict(data.get("apiKey", {}))
|
||||
|
||||
if action == "create":
|
||||
if not name:
|
||||
raise ToolError("name is required for 'create' action")
|
||||
input_data: dict[str, Any] = {"name": name}
|
||||
if roles:
|
||||
input_data["roles"] = roles
|
||||
if permissions:
|
||||
input_data["permissions"] = permissions
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS["create"], {"input": input_data}
|
||||
)
|
||||
return {
|
||||
"success": True,
|
||||
"key": data.get("createApiKey", {}),
|
||||
}
|
||||
|
||||
if action == "update":
|
||||
if not key_id:
|
||||
raise ToolError("key_id is required for 'update' action")
|
||||
input_data = {"id": key_id}
|
||||
if name:
|
||||
input_data["name"] = name
|
||||
if roles:
|
||||
input_data["roles"] = roles
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS["update"], {"input": input_data}
|
||||
)
|
||||
return {
|
||||
"success": True,
|
||||
"key": data.get("updateApiKey", {}),
|
||||
}
|
||||
|
||||
if action == "delete":
|
||||
if not key_id:
|
||||
raise ToolError("key_id is required for 'delete' action")
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS["delete"], {"input": {"ids": [key_id]}}
|
||||
)
|
||||
return {
|
||||
"success": True,
|
||||
"message": f"API key '{key_id}' deleted",
|
||||
}
|
||||
|
||||
return {}
|
||||
|
||||
except ToolError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error in unraid_keys action={action}: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to execute keys/{action}: {str(e)}") from e
|
||||
|
||||
logger.info("Keys tool registered successfully")
|
||||
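The confirm gate used for destructive actions can be sketched in isolation; the names mirror the pattern in this file, but this is a standalone illustration rather than the repo's actual code.

```python
# Standalone sketch of the DESTRUCTIVE_ACTIONS confirm gate.
DESTRUCTIVE_ACTIONS = {"delete"}

def check_confirm(action: str, confirm: bool) -> None:
    # Destructive mutations are refused unless the caller opts in explicitly,
    # which keeps an LLM from deleting keys on a single ambiguous request.
    if action in DESTRUCTIVE_ACTIONS and not confirm:
        raise PermissionError(
            f"Action '{action}' is destructive. Set confirm=True to proceed."
        )
```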
205 unraid_mcp/tools/notifications.py Normal file
@@ -0,0 +1,205 @@
"""Notification management.
|
||||
|
||||
Provides the `unraid_notifications` tool with 9 actions for viewing,
|
||||
creating, archiving, and deleting system notifications.
|
||||
"""
|
||||
|
||||
from typing import Any, Literal
|
||||
|
||||
from fastmcp import FastMCP
|
||||
|
||||
from ..config.logging import logger
|
||||
from ..core.client import make_graphql_request
|
||||
from ..core.exceptions import ToolError
|
||||
|
||||
QUERIES: dict[str, str] = {
|
||||
"overview": """
|
||||
query GetNotificationsOverview {
|
||||
notifications {
|
||||
overview {
|
||||
unread { info warning alert total }
|
||||
archive { info warning alert total }
|
||||
}
|
||||
}
|
||||
}
|
||||
""",
|
||||
"list": """
|
||||
query ListNotifications($filter: NotificationFilter!) {
|
||||
notifications {
|
||||
list(filter: $filter) {
|
||||
id title subject description importance link type timestamp formattedTimestamp
|
||||
}
|
||||
}
|
||||
}
|
||||
""",
|
||||
"warnings": """
|
||||
query GetWarningsAndAlerts {
|
||||
notifications {
|
||||
warningsAndAlerts { id title subject description importance type timestamp }
|
||||
}
|
||||
}
|
||||
""",
|
||||
}
|
||||
|
||||
MUTATIONS: dict[str, str] = {
|
||||
"create": """
|
||||
mutation CreateNotification($input: CreateNotificationInput!) {
|
||||
notifications { createNotification(input: $input) { id title importance } }
|
||||
}
|
||||
""",
|
||||
"archive": """
|
||||
mutation ArchiveNotification($id: PrefixedID!) {
|
||||
notifications { archiveNotification(id: $id) }
|
||||
}
|
||||
""",
|
||||
"unread": """
|
||||
mutation UnreadNotification($id: PrefixedID!) {
|
||||
notifications { unreadNotification(id: $id) }
|
||||
}
|
||||
""",
|
||||
"delete": """
|
||||
mutation DeleteNotification($id: PrefixedID!, $type: NotificationType!) {
|
||||
notifications { deleteNotification(id: $id, type: $type) }
|
||||
}
|
||||
""",
|
||||
"delete_archived": """
|
||||
mutation DeleteArchivedNotifications {
|
||||
notifications { deleteArchivedNotifications }
|
||||
}
|
||||
""",
|
||||
"archive_all": """
|
||||
mutation ArchiveAllNotifications($importance: NotificationImportance) {
|
||||
notifications { archiveAll(importance: $importance) }
|
||||
}
|
||||
""",
|
||||
}
|
||||
|
||||
DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"}
|
||||
|
||||
NOTIFICATION_ACTIONS = Literal[
|
||||
"overview", "list", "warnings",
|
||||
"create", "archive", "unread", "delete", "delete_archived", "archive_all",
|
||||
]
|
||||
|
||||
|
||||
def register_notifications_tool(mcp: FastMCP) -> None:
|
||||
"""Register the unraid_notifications tool with the FastMCP instance."""
|
||||
|
||||
@mcp.tool()
|
||||
async def unraid_notifications(
|
||||
action: NOTIFICATION_ACTIONS,
|
||||
confirm: bool = False,
|
||||
notification_id: str | None = None,
|
||||
notification_type: str | None = None,
|
||||
importance: str | None = None,
|
||||
offset: int = 0,
|
||||
limit: int = 20,
|
||||
list_type: str = "UNREAD",
|
||||
title: str | None = None,
|
||||
subject: str | None = None,
|
||||
description: str | None = None,
|
||||
) -> dict[str, Any]:
|
||||
"""Manage Unraid system notifications.
|
||||
|
||||
Actions:
|
||||
overview - Notification counts by severity (unread/archive)
|
||||
list - List notifications with filtering (list_type=UNREAD/ARCHIVE, importance=INFO/WARNING/ALERT)
|
||||
warnings - Get deduplicated unread warnings and alerts
|
||||
create - Create notification (requires title, subject, description, importance)
|
||||
archive - Archive a notification (requires notification_id)
|
||||
unread - Mark notification as unread (requires notification_id)
|
||||
delete - Delete a notification (requires notification_id, notification_type, confirm=True)
|
||||
delete_archived - Delete all archived notifications (requires confirm=True)
|
||||
archive_all - Archive all notifications (optional importance filter)
|
||||
"""
|
||||
all_actions = {**QUERIES, **MUTATIONS}
|
||||
if action not in all_actions:
|
||||
raise ToolError(f"Invalid action '{action}'. Must be one of: {list(all_actions.keys())}")
|
||||
|
||||
if action in DESTRUCTIVE_ACTIONS and not confirm:
|
||||
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
|
||||
|
||||
try:
|
||||
logger.info(f"Executing unraid_notifications action={action}")
|
||||
|
||||
if action == "overview":
|
||||
data = await make_graphql_request(QUERIES["overview"])
|
||||
notifications = data.get("notifications", {})
|
||||
return dict(notifications.get("overview", {}))
|
||||
|
||||
if action == "list":
|
||||
filter_vars: dict[str, Any] = {
|
||||
"type": list_type.upper(),
|
||||
"offset": offset,
|
||||
"limit": limit,
|
||||
}
|
||||
if importance:
|
||||
filter_vars["importance"] = importance.upper()
|
||||
data = await make_graphql_request(
|
||||
QUERIES["list"], {"filter": filter_vars}
|
||||
)
|
||||
notifications = data.get("notifications", {})
|
||||
result = notifications.get("list", [])
|
||||
return {"notifications": list(result) if isinstance(result, list) else []}
|
||||
|
||||
if action == "warnings":
|
||||
data = await make_graphql_request(QUERIES["warnings"])
|
||||
notifications = data.get("notifications", {})
|
||||
result = notifications.get("warningsAndAlerts", [])
|
||||
return {"warnings": list(result) if isinstance(result, list) else []}
|
||||
|
||||
if action == "create":
|
||||
if title is None or subject is None or description is None or importance is None:
|
||||
raise ToolError(
|
||||
"create requires title, subject, description, and importance"
|
||||
)
|
||||
input_data = {
|
||||
"title": title,
|
||||
"subject": subject,
|
||||
"description": description,
|
||||
"importance": importance.upper() if importance else "INFO",
|
||||
}
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS["create"], {"input": input_data}
|
||||
)
|
||||
return {"success": True, "data": data}
|
||||
|
||||
if action in ("archive", "unread"):
|
||||
if not notification_id:
|
||||
raise ToolError(f"notification_id is required for '{action}' action")
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS[action], {"id": notification_id}
|
||||
)
|
||||
return {"success": True, "action": action, "data": data}
|
||||
|
||||
if action == "delete":
|
||||
if not notification_id or not notification_type:
|
||||
raise ToolError(
|
||||
"delete requires notification_id and notification_type"
|
||||
)
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS["delete"],
|
||||
{"id": notification_id, "type": notification_type.upper()},
|
||||
)
|
||||
return {"success": True, "action": "delete", "data": data}
|
||||
|
||||
if action == "delete_archived":
|
||||
data = await make_graphql_request(MUTATIONS["delete_archived"])
|
||||
return {"success": True, "action": "delete_archived", "data": data}
|
||||
|
||||
if action == "archive_all":
|
||||
variables: dict[str, Any] | None = None
|
||||
if importance:
|
||||
variables = {"importance": importance.upper()}
|
||||
data = await make_graphql_request(MUTATIONS["archive_all"], variables)
|
||||
return {"success": True, "action": "archive_all", "data": data}
|
||||
|
||||
return {}
|
||||
|
||||
except ToolError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error in unraid_notifications action={action}: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to execute notifications/{action}: {str(e)}") from e
|
||||
|
||||
logger.info("Notifications tool registered successfully")
|
||||
@@ -1,11 +1,10 @@
-"""RClone cloud storage remote management tools.
+"""RClone cloud storage remote management.
 
-This module provides tools for managing RClone remotes including listing existing
-remotes, getting configuration forms, creating new remotes, and deleting remotes
-for various cloud storage providers (S3, Google Drive, Dropbox, FTP, etc.).
+Provides the `unraid_rclone` tool with 4 actions for managing
+cloud storage remotes (S3, Google Drive, Dropbox, FTP, etc.).
 """
 
-from typing import Any
+from typing import Any, Literal
 
 from fastmcp import FastMCP
 
@@ -13,166 +12,121 @@ from ..config.logging import logger
 from ..core.client import make_graphql_request
 from ..core.exceptions import ToolError
 
 
-def register_rclone_tools(mcp: FastMCP) -> None:
-    """Register all RClone tools with the FastMCP instance.
-
-    Args:
-        mcp: FastMCP instance to register tools with
-    """
-
-    @mcp.tool()
-    async def list_rclone_remotes() -> list[dict[str, Any]]:
-        """Retrieves all configured RClone remotes with their configuration details."""
-        try:
-            query = """
-            query ListRCloneRemotes {
-                rclone {
-                    remotes {
-                        name
-                        type
-                        parameters
-                        config
-                    }
-                }
-            }
-            """
-
-            response_data = await make_graphql_request(query)
-
-            if "rclone" in response_data and "remotes" in response_data["rclone"]:
-                remotes = response_data["rclone"]["remotes"]
-                logger.info(f"Retrieved {len(remotes)} RClone remotes")
-                return list(remotes) if isinstance(remotes, list) else []
-
-            return []
-
-        except Exception as e:
-            logger.error(f"Failed to list RClone remotes: {str(e)}")
-            raise ToolError(f"Failed to list RClone remotes: {str(e)}") from e
+QUERIES: dict[str, str] = {
+    "list_remotes": """
+        query ListRCloneRemotes {
+            rclone { remotes { name type parameters config } }
+        }
+    """,
+    "config_form": """
+        query GetRCloneConfigForm($formOptions: RCloneConfigFormInput) {
+            rclone { configForm(formOptions: $formOptions) { id dataSchema uiSchema } }
+        }
+    """,
+}
+
+MUTATIONS: dict[str, str] = {
+    "create_remote": """
+        mutation CreateRCloneRemote($input: CreateRCloneRemoteInput!) {
+            rclone { createRCloneRemote(input: $input) { name type parameters } }
+        }
+    """,
+    "delete_remote": """
+        mutation DeleteRCloneRemote($input: DeleteRCloneRemoteInput!) {
+            rclone { deleteRCloneRemote(input: $input) }
+        }
+    """,
+}
+
+DESTRUCTIVE_ACTIONS = {"delete_remote"}
+
+RCLONE_ACTIONS = Literal[
+    "list_remotes", "config_form", "create_remote", "delete_remote",
+]
+
+
+def register_rclone_tool(mcp: FastMCP) -> None:
+    """Register the unraid_rclone tool with the FastMCP instance."""
 
     @mcp.tool()
-    async def get_rclone_config_form(provider_type: str | None = None) -> dict[str, Any]:
-        """
-        Get RClone configuration form schema for setting up new remotes.
-
-        Args:
-            provider_type: Optional provider type to get specific form (e.g., 's3', 'drive', 'dropbox')
-        """
-        try:
-            query = """
-            query GetRCloneConfigForm($formOptions: RCloneConfigFormInput) {
-                rclone {
-                    configForm(formOptions: $formOptions) {
-                        id
-                        dataSchema
-                        uiSchema
-                    }
-                }
-            }
-            """
-
-            variables = {}
-            if provider_type:
-                variables["formOptions"] = {"providerType": provider_type}
-
-            response_data = await make_graphql_request(query, variables)
-
-            if "rclone" in response_data and "configForm" in response_data["rclone"]:
-                form_data = response_data["rclone"]["configForm"]
-                logger.info(f"Retrieved RClone config form for {provider_type or 'general'}")
-                return dict(form_data) if isinstance(form_data, dict) else {}
-
-        except Exception as e:
-            logger.error(f"Failed to get RClone config form: {str(e)}")
-            raise ToolError(f"Failed to get RClone config form: {str(e)}") from e
-
-    @mcp.tool()
-    async def create_rclone_remote(name: str, provider_type: str, config_data: dict[str, Any]) -> dict[str, Any]:
-        """
-        Create a new RClone remote with the specified configuration.
-
-        Args:
-            name: Name for the new remote
-            provider_type: Type of provider (e.g., 's3', 'drive', 'dropbox', 'ftp')
-            config_data: Configuration parameters specific to the provider type
-        """
-        try:
-            mutation = """
-            mutation CreateRCloneRemote($input: CreateRCloneRemoteInput!) {
-                rclone {
-                    createRCloneRemote(input: $input) {
-                        name
-                        type
-                        parameters
-                    }
-                }
-            }
-            """
-
-            variables = {
-                "input": {
-                    "name": name,
-                    "type": provider_type,
-                    "config": config_data
-                }
-            }
-
-            response_data = await make_graphql_request(mutation, variables)
-
-            if "rclone" in response_data and "createRCloneRemote" in response_data["rclone"]:
-                remote_info = response_data["rclone"]["createRCloneRemote"]
-                logger.info(f"Successfully created RClone remote: {name}")
-                return {
-                    "success": True,
-                    "message": f"RClone remote '{name}' created successfully",
-                    "remote": remote_info
-                }
-
-            raise ToolError("Failed to create RClone remote")
-
-        except Exception as e:
-            logger.error(f"Failed to create RClone remote {name}: {str(e)}")
-            raise ToolError(f"Failed to create RClone remote {name}: {str(e)}") from e
-
-    @mcp.tool()
-    async def delete_rclone_remote(name: str) -> dict[str, Any]:
-        """
-        Delete an existing RClone remote by name.
-
-        Args:
-            name: Name of the remote to delete
-        """
-        try:
-            mutation = """
-            mutation DeleteRCloneRemote($input: DeleteRCloneRemoteInput!) {
-                rclone {
-                    deleteRCloneRemote(input: $input)
-                }
-            }
-            """
-
-            variables = {
-                "input": {
-                    "name": name
-                }
-            }
-
-            response_data = await make_graphql_request(mutation, variables)
-
-            if "rclone" in response_data and response_data["rclone"]["deleteRCloneRemote"]:
-                logger.info(f"Successfully deleted RClone remote: {name}")
-                return {
-                    "success": True,
-                    "message": f"RClone remote '{name}' deleted successfully"
-                }
-
-            raise ToolError(f"Failed to delete RClone remote '{name}'")
-
-        except Exception as e:
-            logger.error(f"Failed to delete RClone remote {name}: {str(e)}")
-            raise ToolError(f"Failed to delete RClone remote {name}: {str(e)}") from e
-
-    logger.info("RClone tools registered successfully")
+    async def unraid_rclone(
+        action: RCLONE_ACTIONS,
+        confirm: bool = False,
+        name: str | None = None,
+        provider_type: str | None = None,
+        config_data: dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
+        """Manage RClone cloud storage remotes.
+
+        Actions:
+            list_remotes - List all configured remotes
+            config_form - Get config form schema (optional provider_type for specific provider)
+            create_remote - Create a new remote (requires name, provider_type, config_data)
+            delete_remote - Delete a remote (requires name, confirm=True)
+        """
+        all_actions = set(QUERIES) | set(MUTATIONS)
+        if action not in all_actions:
+            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
+
+        if action in DESTRUCTIVE_ACTIONS and not confirm:
+            raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
+
+        try:
+            logger.info(f"Executing unraid_rclone action={action}")
+
+            if action == "list_remotes":
+                data = await make_graphql_request(QUERIES["list_remotes"])
+                remotes = data.get("rclone", {}).get("remotes", [])
+                return {"remotes": list(remotes) if isinstance(remotes, list) else []}
+
+            if action == "config_form":
+                variables: dict[str, Any] = {}
+                if provider_type:
+                    variables["formOptions"] = {"providerType": provider_type}
+                data = await make_graphql_request(
+                    QUERIES["config_form"], variables or None
+                )
+                form = data.get("rclone", {}).get("configForm", {})
+                if not form:
+                    raise ToolError("No RClone config form data received")
+                return dict(form)
+
+            if action == "create_remote":
+                if name is None or provider_type is None or config_data is None:
+                    raise ToolError(
+                        "create_remote requires name, provider_type, and config_data"
+                    )
+                data = await make_graphql_request(
+                    MUTATIONS["create_remote"],
+                    {"input": {"name": name, "type": provider_type, "config": config_data}},
+                )
+                remote = data.get("rclone", {}).get("createRCloneRemote", {})
+                return {
+                    "success": True,
+                    "message": f"Remote '{name}' created successfully",
+                    "remote": remote,
+                }
+
+            if action == "delete_remote":
+                if not name:
+                    raise ToolError("name is required for 'delete_remote' action")
+                data = await make_graphql_request(
+                    MUTATIONS["delete_remote"], {"input": {"name": name}}
+                )
+                success = data.get("rclone", {}).get("deleteRCloneRemote", False)
+                if not success:
+                    raise ToolError(f"Failed to delete remote '{name}'")
+                return {
+                    "success": True,
+                    "message": f"Remote '{name}' deleted successfully",
+                }
+
+            return {}
+
+        except ToolError:
+            raise
+        except Exception as e:
+            logger.error(f"Error in unraid_rclone action={action}: {e}", exc_info=True)
+            raise ToolError(f"Failed to execute rclone/{action}: {str(e)}") from e
+
+    logger.info("RClone tool registered successfully")
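The pairing of a `Literal` action type with a runtime check against `set(QUERIES) | set(MUTATIONS)` can be sketched standalone (the dict values below are placeholders for the real GraphQL documents): the `Literal` advertises the actions in the tool schema, while the runtime union derives the same set from the two dicts so the two views cannot drift silently.

```python
from typing import Literal

# Placeholder dicts standing in for the real GraphQL documents.
QUERIES = {"list_remotes": "...", "config_form": "..."}
MUTATIONS = {"create_remote": "...", "delete_remote": "..."}

RCLONE_ACTIONS = Literal["list_remotes", "config_form", "create_remote", "delete_remote"]

def is_valid(action: str) -> bool:
    # Runtime validation derives the action set from the dicts themselves,
    # so adding an entry to either dict automatically makes it dispatchable.
    return action in (set(QUERIES) | set(MUTATIONS))
```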
@@ -1,277 +1,159 @@
|
||||
"""Storage, disk, and notification management tools.
|
||||
"""Storage and disk management.
|
||||
|
||||
This module provides tools for managing user shares, notifications,
|
||||
log files, physical disks with SMART data, and system storage operations
|
||||
with custom timeout configurations for disk-intensive operations.
|
||||
Provides the `unraid_storage` tool with 6 actions for shares, physical disks,
|
||||
unassigned devices, log files, and log content retrieval.
|
||||
"""
|
||||
|
||||
from typing import Any
|
||||
from typing import Any, Literal
|
||||
|
||||
import httpx
|
||||
from fastmcp import FastMCP
|
||||
|
||||
from ..config.logging import logger
|
||||
from ..core.client import make_graphql_request
|
||||
from ..core.client import DISK_TIMEOUT, make_graphql_request
|
||||
from ..core.exceptions import ToolError
|
||||
|
||||
|
||||
def register_storage_tools(mcp: FastMCP) -> None:
|
||||
"""Register all storage tools with the FastMCP instance.
|
||||
|
||||
Args:
|
||||
mcp: FastMCP instance to register tools with
|
||||
"""
|
||||
|
||||
@mcp.tool()
|
||||
async def get_shares_info() -> list[dict[str, Any]]:
|
||||
"""Retrieves information about user shares."""
|
||||
query = """
|
||||
QUERIES: dict[str, str] = {
|
||||
"shares": """
|
||||
query GetSharesInfo {
|
||||
shares {
|
||||
id
|
||||
name
|
||||
free
|
||||
used
|
||||
size
|
||||
include
|
||||
exclude
|
||||
cache
|
||||
nameOrig
|
||||
comment
|
||||
allocator
|
||||
splitLevel
|
||||
floor
|
||||
cow
|
||||
color
|
||||
luksStatus
|
||||
id name free used size include exclude cache nameOrig
|
||||
comment allocator splitLevel floor cow color luksStatus
|
||||
}
|
||||
}
|
||||
"""
|
||||
try:
|
||||
logger.info("Executing get_shares_info tool")
|
||||
response_data = await make_graphql_request(query)
|
||||
shares = response_data.get("shares", [])
|
||||
return list(shares) if isinstance(shares, list) else []
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_shares_info: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to retrieve shares information: {str(e)}") from e
|
||||
|
||||
@mcp.tool()
|
||||
async def get_notifications_overview() -> dict[str, Any]:
|
||||
"""Retrieves an overview of system notifications (unread and archive counts by severity)."""
|
||||
query = """
|
||||
query GetNotificationsOverview {
|
||||
notifications {
|
||||
overview {
|
||||
unread { info warning alert total }
|
||||
archive { info warning alert total }
|
||||
""",
|
||||
"disks": """
|
||||
query ListPhysicalDisks {
|
||||
disks { id device name }
|
||||
}
|
||||
}
|
||||
}
|
||||
"""
|
||||
try:
|
||||
logger.info("Executing get_notifications_overview tool")
|
||||
response_data = await make_graphql_request(query)
|
||||
if response_data.get("notifications"):
|
||||
overview = response_data["notifications"].get("overview", {})
|
||||
return dict(overview) if isinstance(overview, dict) else {}
|
||||
return {}
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_notifications_overview: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to retrieve notifications overview: {str(e)}") from e
|
||||
|
||||
@mcp.tool()
|
||||
async def list_notifications(
|
||||
type: str,
|
||||
offset: int,
|
||||
limit: int,
|
||||
importance: str | None = None
|
||||
) -> list[dict[str, Any]]:
|
||||
"""Lists notifications with filtering. Type: UNREAD/ARCHIVE. Importance: INFO/WARNING/ALERT."""
|
||||
query = """
|
||||
query ListNotifications($filter: NotificationFilter!) {
|
||||
notifications {
|
||||
list(filter: $filter) {
|
||||
id
|
||||
title
|
||||
subject
|
||||
description
|
||||
importance
|
||||
link
|
||||
type
|
||||
timestamp
|
||||
formattedTimestamp
|
||||
}
|
||||
}
|
||||
}
|
||||
"""
|
||||
variables = {
|
||||
"filter": {
|
||||
"type": type.upper(),
|
||||
"offset": offset,
|
||||
"limit": limit,
|
||||
"importance": importance.upper() if importance else None
|
||||
}
|
||||
}
|
||||
# Remove null importance from variables if not provided, as GraphQL might be strict
|
||||
if not importance:
|
||||
del variables["filter"]["importance"]
|
||||
|
||||
try:
|
||||
logger.info(f"Executing list_notifications: type={type}, offset={offset}, limit={limit}, importance={importance}")
|
||||
response_data = await make_graphql_request(query, variables)
|
||||
if response_data.get("notifications"):
|
||||
notifications_list = response_data["notifications"].get("list", [])
|
||||
return list(notifications_list) if isinstance(notifications_list, list) else []
|
||||
return []
|
||||
except Exception as e:
|
||||
logger.error(f"Error in list_notifications: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to list notifications: {str(e)}") from e
|
||||
|
||||
@mcp.tool()
|
||||
async def list_available_log_files() -> list[dict[str, Any]]:
|
||||
"""Lists all available log files that can be queried."""
|
||||
query = """
|
||||
query ListLogFiles {
|
||||
logFiles {
|
||||
name
|
||||
path
|
||||
size
|
||||
modifiedAt
|
||||
}
|
||||
}
|
||||
"""
|
||||
try:
|
||||
logger.info("Executing list_available_log_files tool")
|
||||
response_data = await make_graphql_request(query)
|
||||
log_files = response_data.get("logFiles", [])
|
||||
return list(log_files) if isinstance(log_files, list) else []
|
||||
except Exception as e:
|
||||
logger.error(f"Error in list_available_log_files: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to list available log files: {str(e)}") from e
|
||||
|
||||
@mcp.tool()
|
||||
async def get_logs(log_file_path: str, tail_lines: int = 100) -> dict[str, Any]:
|
||||
"""Retrieves content from a specific log file, defaulting to the last 100 lines."""
|
||||
# The Unraid GraphQL API Query.logFile takes 'lines' and 'startLine'.
|
||||
# To implement 'tail_lines', we would ideally need to know the total lines first,
|
||||
# then calculate startLine. However, Query.logFile itself returns totalLines.
|
||||
# A simple approach for 'tail' is to request a large number of lines if totalLines is not known beforehand,
|
||||
# and let the API handle it, or make two calls (one to get totalLines, then another).
|
||||
# For now, let's assume 'lines' parameter in Query.logFile effectively means tail if startLine is not given.
|
||||
# If not, this tool might need to be smarter or the API might not directly support 'tail' easily.
|
||||
# The SDL for LogFileContent implies it returns startLine, so it seems aware of ranges.
|
||||
|
||||
# Let's try fetching with just 'lines' to see if it acts as a tail,
|
||||
# or if we need Query.logFiles first to get totalLines for calculation.
|
||||
# For robust tailing, one might need to fetch totalLines first, then calculate start_line for the tail.
|
||||
# Simplified: query for the last 'tail_lines'. If the API doesn't support tailing this way, we may need adjustment.
|
||||
# The current plan is to pass 'lines=tail_lines' directly.
|
||||
|
||||
query = """
|
||||
query GetLogContent($path: String!, $lines: Int) {
|
||||
logFile(path: $path, lines: $lines) {
|
||||
path
|
||||
content
|
||||
totalLines
|
||||
startLine
|
||||
}
|
||||
}
|
||||
"""
|
||||
variables = {"path": log_file_path, "lines": tail_lines}
|
||||
try:
|
||||
logger.info(f"Executing get_logs for {log_file_path}, tail_lines={tail_lines}")
|
||||
response_data = await make_graphql_request(query, variables)
|
||||
log_file = response_data.get("logFile", {})
|
||||
return dict(log_file) if isinstance(log_file, dict) else {}
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_logs for {log_file_path}: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to retrieve logs from {log_file_path}: {str(e)}") from e
|
||||
|
||||
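The comments above sketch a two-call tailing strategy (fetch `totalLines`, then compute `startLine`). A minimal sketch of that calculation, with a hypothetical helper name not present in the original code:

```python
def tail_start_line(total_lines: int, tail_lines: int) -> int:
    """Compute the 1-based startLine for Query.logFile so the last
    `tail_lines` lines are returned (hypothetical helper; assumes a
    first call has already reported totalLines)."""
    if tail_lines >= total_lines:
        return 1  # fewer lines than requested: read from the top
    return total_lines - tail_lines + 1


print(tail_start_line(1000, 100))  # 901
print(tail_start_line(50, 100))    # 1
```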
    @mcp.tool()
    async def list_physical_disks() -> list[dict[str, Any]]:
        """Lists all physical disks recognized by the Unraid system."""
        # Querying an extremely minimal set of fields for diagnostics
        query = """
        query ListPhysicalDisksMinimal {
          disks {
            id
            device
            name
          }
        }
        """
        try:
            logger.info("Executing list_physical_disks tool with minimal query and increased timeout")
            # Increased read timeout for this potentially slow query
            long_timeout = httpx.Timeout(10.0, read=90.0, connect=5.0)
            response_data = await make_graphql_request(query, custom_timeout=long_timeout)
            disks = response_data.get("disks", [])
            return list(disks) if isinstance(disks, list) else []
        except Exception as e:
            logger.error(f"Error in list_physical_disks: {e}", exc_info=True)
            raise ToolError(f"Failed to list physical disks: {str(e)}") from e

    @mcp.tool()
    async def get_disk_details(disk_id: str) -> dict[str, Any]:
        """Retrieves detailed SMART information and partition data for a specific physical disk."""
        # Enhanced query with more comprehensive disk information
        query = """
    """,
    "disk_details": """
        query GetDiskDetails($id: PrefixedID!) {
          disk(id: $id) {
            id
            device
            name
            serialNum
            size
            temperature
            id device name serialNum size temperature
          }
        }
    """
        variables = {"id": disk_id}
        try:
            logger.info(f"Executing get_disk_details for disk: {disk_id}")
            response_data = await make_graphql_request(query, variables)
            raw_disk = response_data.get("disk", {})
    """,
    "unassigned": """
        query GetUnassignedDevices {
          unassignedDevices { id device name size type }
        }
    """,
    "log_files": """
        query ListLogFiles {
          logFiles { name path size modifiedAt }
        }
    """,
    "logs": """
        query GetLogContent($path: String!, $lines: Int) {
          logFile(path: $path, lines: $lines) {
            path content totalLines startLine
          }
        }
    """,
}

        if not raw_disk:
            raise ToolError(f"Disk '{disk_id}' not found")
STORAGE_ACTIONS = Literal[
    "shares", "disks", "disk_details", "unassigned", "log_files", "logs",
]


# Process disk information for human-readable output
def format_bytes(bytes_value: int | None) -> str:
    """Format byte values into human-readable sizes."""
    if bytes_value is None:
        return "N/A"
    value = float(int(bytes_value))
    for unit in ["B", "KB", "MB", "GB", "TB", "PB"]:
        if value < 1024.0:
            return f"{value:.2f} {unit}"
        value /= 1024.0
    return f"{value:.2f} EB"

def register_storage_tool(mcp: FastMCP) -> None:
    """Register the unraid_storage tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_storage(
        action: STORAGE_ACTIONS,
        disk_id: str | None = None,
        log_path: str | None = None,
        tail_lines: int = 100,
    ) -> dict[str, Any]:
        """Manage Unraid storage, disks, and logs.

        Actions:
            shares       - List all user shares with capacity info
            disks        - List all physical disks
            disk_details - Detailed SMART info for a disk (requires disk_id)
            unassigned   - List unassigned devices
            log_files    - List available log files
            logs         - Retrieve log content (requires log_path, optional tail_lines)
        """
        if action not in QUERIES:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {list(QUERIES.keys())}")

        if action == "disk_details" and not disk_id:
            raise ToolError("disk_id is required for 'disk_details' action")

        if action == "logs" and not log_path:
            raise ToolError("log_path is required for 'logs' action")

        query = QUERIES[action]
        variables: dict[str, Any] | None = None
        custom_timeout = DISK_TIMEOUT if action == "disks" else None

        if action == "disk_details":
            variables = {"id": disk_id}
        elif action == "logs":
            variables = {"path": log_path, "lines": tail_lines}

        try:
            logger.info(f"Executing unraid_storage action={action}")
            data = await make_graphql_request(query, variables, custom_timeout=custom_timeout)

            if action == "shares":
                shares = data.get("shares", [])
                return {"shares": list(shares) if isinstance(shares, list) else []}

            if action == "disks":
                disks = data.get("disks", [])
                return {"disks": list(disks) if isinstance(disks, list) else []}

            if action == "disk_details":
                raw = data.get("disk", {})
                if not raw:
                    raise ToolError(f"Disk '{disk_id}' not found")
                summary = {
                    "disk_id": raw.get("id"),
                    "device": raw.get("device"),
                    "name": raw.get("name"),
                    "serial_number": raw.get("serialNum"),
                    "size_formatted": format_bytes(raw.get("size")),
                    "temperature": (
                        f"{raw.get('temperature')}C"
                        if raw.get("temperature")
                        else "N/A"
                    ),
                }
                return {"summary": summary, "details": raw}

            if action == "unassigned":
                devices = data.get("unassignedDevices", [])
                return {"devices": list(devices) if isinstance(devices, list) else []}

            if action == "log_files":
                files = data.get("logFiles", [])
                return {"log_files": list(files) if isinstance(files, list) else []}

            if action == "logs":
                return dict(data.get("logFile", {}))

            return data

        except ToolError:
            raise
        except Exception as e:
            logger.error(f"Error in unraid_storage action={action}: {e}", exc_info=True)
            raise ToolError(f"Failed to execute storage/{action}: {str(e)}") from e

    logger.info("Storage tool registered successfully")

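The consolidated action pattern above boils down to: validate the caller-supplied action against a pre-built query dict, then dispatch. A framework-free sketch of that shape, with hypothetical minimal queries standing in for the real ones:

```python
from typing import Any

# Hypothetical stand-ins for the real pre-built GraphQL strings.
QUERIES: dict[str, str] = {
    "shares": "query { shares { name } }",
    "disks": "query { disks { id } }",
}


def dispatch(action: str) -> dict[str, Any]:
    """Validate the action against the pre-built dict before use, mirroring
    the guard in unraid_storage; unknown actions never reach the API."""
    if action not in QUERIES:
        raise ValueError(f"Invalid action '{action}'. Must be one of: {list(QUERIES)}")
    return {"action": action, "query": QUERIES[action]}


print(dispatch("disks"))
```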
@@ -1,392 +0,0 @@
"""System information and array status tools.

This module provides tools for retrieving core Unraid system information,
array status with health analysis, network configuration, registration info,
and system variables.
"""

from typing import Any

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


# Standalone functions for use by subscription resources
async def _get_system_info() -> dict[str, Any]:
    """Standalone function to get system info - used by subscriptions and tools."""
    query = """
    query GetSystemInfo {
      info {
        os { platform distro release codename kernel arch hostname codepage logofile serial build uptime }
        cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache flags }
        memory {
          # Avoid fetching problematic fields that cause type errors
          layout { bank type clockSpeed formFactor manufacturer partNum serialNum }
        }
        baseboard { manufacturer model version serial assetTag }
        system { manufacturer model version serial uuid sku }
        versions { kernel openssl systemOpenssl systemOpensslLib node v8 npm yarn pm2 gulp grunt git tsc mysql redis mongodb apache nginx php docker postfix postgresql perl python gcc unraid }
        apps { installed started }
        # Remove devices section as it has non-nullable fields that might be null
        machineId
        time
      }
    }
    """
    try:
        logger.info("Executing get_system_info")
        response_data = await make_graphql_request(query)
        raw_info = response_data.get("info", {})
        if not raw_info:
            raise ToolError("No system info returned from Unraid API")

        # Process for human-readable output
        summary: dict[str, Any] = {}
        if raw_info.get('os'):
            os_info = raw_info['os']
            summary['os'] = f"{os_info.get('distro', '')} {os_info.get('release', '')} ({os_info.get('platform', '')}, {os_info.get('arch', '')})"
            summary['hostname'] = os_info.get('hostname')
            summary['uptime'] = os_info.get('uptime')

        if raw_info.get('cpu'):
            cpu_info = raw_info['cpu']
            summary['cpu'] = f"{cpu_info.get('manufacturer', '')} {cpu_info.get('brand', '')} ({cpu_info.get('cores')} cores, {cpu_info.get('threads')} threads)"

        if raw_info.get('memory') and raw_info['memory'].get('layout'):
            mem_layout = raw_info['memory']['layout']
            summary['memory_layout_details'] = []  # Renamed for clarity
            # The API is not returning 'size' for individual sticks in the layout, even if queried.
            # So, we cannot calculate total from layout currently.
            for stick in mem_layout:
                # stick_size = stick.get('size') # This is None in the actual API response
                summary['memory_layout_details'].append(
                    f"Bank {stick.get('bank', '?')}: Type {stick.get('type', '?')}, Speed {stick.get('clockSpeed', '?')}MHz, Manufacturer: {stick.get('manufacturer','?')}, Part: {stick.get('partNum', '?')}"
                )
            summary['memory_summary'] = "Stick layout details retrieved. Overall total/used/free memory stats are unavailable due to API limitations (Int overflow or data not provided by API)."
        else:
            summary['memory_summary'] = "Memory information (layout or stats) not available or failed to retrieve."

        # Include a key for the full details if needed by an LLM for deeper dives
        return {"summary": summary, "details": raw_info}

    except Exception as e:
        logger.error(f"Error in get_system_info: {e}", exc_info=True)
        raise ToolError(f"Failed to retrieve system information: {str(e)}") from e

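The summary strings above are plain f-string joins over the raw `info` dict. A small illustration with made-up sample data (the dict contents are illustrative, not real API output):

```python
# Illustrative sample of the raw_info shape consumed above.
raw_info = {
    "os": {"distro": "Unraid", "release": "7.0", "platform": "linux", "arch": "x64"},
    "cpu": {"manufacturer": "AMD", "brand": "Ryzen 9", "cores": 12, "threads": 24},
}

os_info = raw_info["os"]
os_line = f"{os_info.get('distro', '')} {os_info.get('release', '')} ({os_info.get('platform', '')}, {os_info.get('arch', '')})"

cpu_info = raw_info["cpu"]
cpu_line = f"{cpu_info.get('manufacturer', '')} {cpu_info.get('brand', '')} ({cpu_info.get('cores')} cores, {cpu_info.get('threads')} threads)"

print(os_line)   # Unraid 7.0 (linux, x64)
print(cpu_line)  # AMD Ryzen 9 (12 cores, 24 threads)
```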

async def _get_array_status() -> dict[str, Any]:
    """Standalone function to get array status - used by subscriptions and tools."""
    query = """
    query GetArrayStatus {
      array {
        id
        state
        capacity {
          kilobytes { free used total }
          disks { free used total }
        }
        boot { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        parities { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        disks { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        caches { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
      }
    }
    """
    try:
        logger.info("Executing get_array_status")
        response_data = await make_graphql_request(query)
        raw_array_info = response_data.get("array", {})
        if not raw_array_info:
            raise ToolError("No array information returned from Unraid API")

        summary: dict[str, Any] = {}
        summary['state'] = raw_array_info.get('state')

        if raw_array_info.get('capacity') and raw_array_info['capacity'].get('kilobytes'):
            kb_cap = raw_array_info['capacity']['kilobytes']

            # Helper to format KB into TB/GB/MB
            def format_kb(k: Any) -> str:
                if k is None:
                    return "N/A"
                k = int(k)  # Values are strings in SDL for PrefixedID containing types like capacity
                if k >= 1024*1024*1024:
                    return f"{k / (1024*1024*1024):.2f} TB"
                if k >= 1024*1024:
                    return f"{k / (1024*1024):.2f} GB"
                if k >= 1024:
                    return f"{k / 1024:.2f} MB"
                return f"{k} KB"

            summary['capacity_total'] = format_kb(kb_cap.get('total'))
            summary['capacity_used'] = format_kb(kb_cap.get('used'))
            summary['capacity_free'] = format_kb(kb_cap.get('free'))

        summary['num_parity_disks'] = len(raw_array_info.get('parities', []))
        summary['num_data_disks'] = len(raw_array_info.get('disks', []))
        summary['num_cache_pools'] = len(raw_array_info.get('caches', []))  # Note: caches are pools, not individual cache disks

        # Enhanced: Add disk health summary
        def analyze_disk_health(disks: list[dict[str, Any]], disk_type: str) -> dict[str, int]:
            """Analyze health status of disk arrays"""
            if not disks:
                return {}

            health_counts = {
                'healthy': 0,
                'failed': 0,
                'missing': 0,
                'new': 0,
                'warning': 0,
                'unknown': 0
            }

            for disk in disks:
                status = disk.get('status', '').upper()
                warning = disk.get('warning')
                critical = disk.get('critical')

                if status == 'DISK_OK':
                    if warning or critical:
                        health_counts['warning'] += 1
                    else:
                        health_counts['healthy'] += 1
                elif status in ['DISK_DSBL', 'DISK_INVALID']:
                    health_counts['failed'] += 1
                elif status == 'DISK_NP':
                    health_counts['missing'] += 1
                elif status == 'DISK_NEW':
                    health_counts['new'] += 1
                else:
                    health_counts['unknown'] += 1

            return health_counts

        # Analyze health for each disk type
        health_summary = {}
        if raw_array_info.get('parities'):
            health_summary['parity_health'] = analyze_disk_health(raw_array_info['parities'], 'parity')
        if raw_array_info.get('disks'):
            health_summary['data_health'] = analyze_disk_health(raw_array_info['disks'], 'data')
        if raw_array_info.get('caches'):
            health_summary['cache_health'] = analyze_disk_health(raw_array_info['caches'], 'cache')

        # Overall array health assessment
        total_failed = sum(h.get('failed', 0) for h in health_summary.values())
        total_missing = sum(h.get('missing', 0) for h in health_summary.values())
        total_warning = sum(h.get('warning', 0) for h in health_summary.values())

        if total_failed > 0:
            overall_health = "CRITICAL"
        elif total_missing > 0:
            overall_health = "DEGRADED"
        elif total_warning > 0:
            overall_health = "WARNING"
        else:
            overall_health = "HEALTHY"

        summary['overall_health'] = overall_health
        summary['health_summary'] = health_summary

        return {"summary": summary, "details": raw_array_info}

    except Exception as e:
        logger.error(f"Error in get_array_status: {e}", exc_info=True)
        raise ToolError(f"Failed to retrieve array status: {str(e)}") from e

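The rollup above maps per-pool health counts to one grade with the precedence failed > missing > warning > healthy. A self-contained restatement of that logic, so the precedence can be checked in isolation:

```python
def overall_health(health_summary: dict[str, dict[str, int]]) -> str:
    """Roll per-pool health counts up to one grade, using the same
    precedence as _get_array_status: failed > missing > warning > healthy."""
    failed = sum(h.get("failed", 0) for h in health_summary.values())
    missing = sum(h.get("missing", 0) for h in health_summary.values())
    warning = sum(h.get("warning", 0) for h in health_summary.values())
    if failed:
        return "CRITICAL"
    if missing:
        return "DEGRADED"
    if warning:
        return "WARNING"
    return "HEALTHY"


print(overall_health({"data_health": {"healthy": 3, "warning": 1}}))   # WARNING
print(overall_health({"data_health": {"failed": 1, "missing": 2}}))    # CRITICAL
print(overall_health({}))                                              # HEALTHY
```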

def register_system_tools(mcp: FastMCP) -> None:
    """Register all system tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def get_system_info() -> dict[str, Any]:
        """Retrieves comprehensive information about the Unraid system, OS, CPU, memory, and baseboard."""
        return await _get_system_info()

    @mcp.tool()
    async def get_array_status() -> dict[str, Any]:
        """Retrieves the current status of the Unraid storage array, including its state, capacity, and details of all disks."""
        return await _get_array_status()

    @mcp.tool()
    async def get_network_config() -> dict[str, Any]:
        """Retrieves network configuration details, including access URLs."""
        query = """
        query GetNetworkConfig {
          network {
            id
            accessUrls { type name ipv4 ipv6 }
          }
        }
        """
        try:
            logger.info("Executing get_network_config tool")
            response_data = await make_graphql_request(query)
            network = response_data.get("network", {})
            return dict(network) if isinstance(network, dict) else {}
        except Exception as e:
            logger.error(f"Error in get_network_config: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve network configuration: {str(e)}") from e

    @mcp.tool()
    async def get_registration_info() -> dict[str, Any]:
        """Retrieves Unraid registration details."""
        query = """
        query GetRegistrationInfo {
          registration {
            id
            type
            keyFile { location contents }
            state
            expiration
            updateExpiration
          }
        }
        """
        try:
            logger.info("Executing get_registration_info tool")
            response_data = await make_graphql_request(query)
            registration = response_data.get("registration", {})
            return dict(registration) if isinstance(registration, dict) else {}
        except Exception as e:
            logger.error(f"Error in get_registration_info: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve registration information: {str(e)}") from e

    @mcp.tool()
    async def get_connect_settings() -> dict[str, Any]:
        """Retrieves settings related to Unraid Connect."""
        # Based on actual schema: settings.unified.values contains the JSON settings
        query = """
        query GetConnectSettingsForm {
          settings {
            unified {
              values
            }
          }
        }
        """
        try:
            logger.info("Executing get_connect_settings tool")
            response_data = await make_graphql_request(query)

            # Navigate down to the unified settings values
            if response_data.get("settings") and response_data["settings"].get("unified"):
                values = response_data["settings"]["unified"].get("values", {})
                # Filter for Connect-related settings if values is a dict
                if isinstance(values, dict):
                    # Look for connect-related keys in the unified settings
                    connect_settings = {}
                    for key, value in values.items():
                        if 'connect' in key.lower() or key in ['accessType', 'forwardType', 'port']:
                            connect_settings[key] = value
                    return connect_settings if connect_settings else values
                return dict(values) if isinstance(values, dict) else {}
            return {}
        except Exception as e:
            logger.error(f"Error in get_connect_settings: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve Unraid Connect settings: {str(e)}") from e

    @mcp.tool()
    async def get_unraid_variables() -> dict[str, Any]:
        """Retrieves a selection of Unraid system variables and settings.
        Note: Many variables are omitted due to API type issues (Int overflow/NaN).
        """
        # Querying a smaller, curated set of fields to avoid Int overflow and NaN issues
        # pending Unraid API schema fixes for the full Vars type.
        query = """
        query GetSelectiveUnraidVariables {
          vars {
            id
            version
            name
            timeZone
            comment
            security
            workgroup
            domain
            domainShort
            hideDotFiles
            localMaster
            enableFruit
            useNtp
            # ntpServer1, ntpServer2, ... are strings, should be okay but numerous
            domainLogin # Boolean
            sysModel # String
            # sysArraySlots, sysCacheSlots are Int, were problematic (NaN)
            sysFlashSlots # Int, might be okay if small and always set
            useSsl # Boolean
            port # Int, usually small
            portssl # Int, usually small
            localTld # String
            bindMgt # Boolean
            useTelnet # Boolean
            porttelnet # Int, usually small
            useSsh # Boolean
            portssh # Int, usually small
            startPage # String
            startArray # Boolean
            # spindownDelay, queueDepth are Int, potentially okay if always set
            # defaultFormat, defaultFsType are String
            shutdownTimeout # Int, potentially okay
            # luksKeyfile is String
            # pollAttributes, pollAttributesDefault, pollAttributesStatus are Int/String, were problematic (NaN or type)
            # nrRequests, nrRequestsDefault, nrRequestsStatus were problematic
            # mdNumStripes, mdNumStripesDefault, mdNumStripesStatus were problematic
            # mdSyncWindow, mdSyncWindowDefault, mdSyncWindowStatus were problematic
            # mdSyncThresh, mdSyncThreshDefault, mdSyncThreshStatus were problematic
            # mdWriteMethod, mdWriteMethodDefault, mdWriteMethodStatus were problematic
            # shareDisk, shareUser, shareUserInclude, shareUserExclude are String arrays/String
            shareSmbEnabled # Boolean
            shareNfsEnabled # Boolean
            shareAfpEnabled # Boolean
            # shareInitialOwner, shareInitialGroup are String
            shareCacheEnabled # Boolean
            # shareCacheFloor is String (numeric string?)
            # shareMoverSchedule, shareMoverLogging are String
            # fuseRemember, fuseRememberDefault, fuseRememberStatus are String/Boolean, were problematic
            # fuseDirectio, fuseDirectioDefault, fuseDirectioStatus are String/Boolean, were problematic
            shareAvahiEnabled # Boolean
            # shareAvahiSmbName, shareAvahiSmbModel, shareAvahiAfpName, shareAvahiAfpModel are String
            safeMode # Boolean
            startMode # String
            configValid # Boolean
            configError # String
            joinStatus # String
            deviceCount # Int, might be okay
            flashGuid # String
            flashProduct # String
            flashVendor # String
            # regCheck, regFile, regGuid, regTy, regState, regTo, regTm, regTm2, regGen are varied, mostly String/Int
            # sbName, sbVersion, sbUpdated, sbEvents, sbState, sbClean, sbSynced, sbSyncErrs, sbSynced2, sbSyncExit are varied
            # mdColor, mdNumDisks, mdNumDisabled, mdNumInvalid, mdNumMissing, mdNumNew, mdNumErased are Int, potentially okay if counts
            # mdResync, mdResyncCorr, mdResyncPos, mdResyncDb, mdResyncDt, mdResyncAction are varied (Int/Boolean/String)
            # mdResyncSize was an overflow
            mdState # String (enum)
            mdVersion # String
            # cacheNumDevices, cacheSbNumDisks were problematic (NaN)
            # fsState, fsProgress, fsCopyPrcnt, fsNumMounted, fsNumUnmountable, fsUnmountableMask are varied
            shareCount # Int, might be okay
            shareSmbCount # Int, might be okay
            shareNfsCount # Int, might be okay
            shareAfpCount # Int, might be okay
            shareMoverActive # Boolean
            csrfToken # String
          }
        }
        """
        try:
            logger.info("Executing get_unraid_variables tool with a selective query")
            response_data = await make_graphql_request(query)
            vars_data = response_data.get("vars", {})
            return dict(vars_data) if isinstance(vars_data, dict) else {}
        except Exception as e:
            logger.error(f"Error in get_unraid_variables: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve Unraid variables: {str(e)}") from e

    logger.info("System tools registered successfully")

163
unraid_mcp/tools/users.py
Normal file
@@ -0,0 +1,163 @@
"""User management.

Provides the `unraid_users` tool with 8 actions for managing users,
cloud access, remote access settings, and allowed origins.
"""

from typing import Any, Literal

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError

QUERIES: dict[str, str] = {
    "me": """
        query GetMe {
          me { id name role email }
        }
    """,
    "list": """
        query ListUsers {
          users { id name role email }
        }
    """,
    "get": """
        query GetUser($id: PrefixedID!) {
          user(id: $id) { id name role email }
        }
    """,
    "cloud": """
        query GetCloud {
          cloud { status apiKey error }
        }
    """,
    "remote_access": """
        query GetRemoteAccess {
          remoteAccess { enabled url }
        }
    """,
    "origins": """
        query GetAllowedOrigins {
          allowedOrigins
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "add": """
        mutation AddUser($input: AddUserInput!) {
          addUser(input: $input) { id name role }
        }
    """,
    "delete": """
        mutation DeleteUser($id: PrefixedID!) {
          deleteUser(id: $id)
        }
    """,
}

DESTRUCTIVE_ACTIONS = {"delete"}

USER_ACTIONS = Literal[
    "me", "list", "get", "add", "delete", "cloud", "remote_access", "origins",
]


def register_users_tool(mcp: FastMCP) -> None:
    """Register the unraid_users tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_users(
        action: USER_ACTIONS,
        confirm: bool = False,
        user_id: str | None = None,
        name: str | None = None,
        password: str | None = None,
        role: str | None = None,
    ) -> dict[str, Any]:
        """Manage Unraid users and access settings.

        Actions:
            me            - Get current authenticated user info
            list          - List all users
            get           - Get a specific user (requires user_id)
            add           - Add a new user (requires name, password; optional role)
            delete        - Delete a user (requires user_id, confirm=True)
            cloud         - Get Unraid Connect cloud status
            remote_access - Get remote access settings
            origins       - Get allowed origins
        """
        all_actions = set(QUERIES) | set(MUTATIONS)
        if action not in all_actions:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")

        if action in DESTRUCTIVE_ACTIONS and not confirm:
            raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")

        try:
            logger.info(f"Executing unraid_users action={action}")

            if action == "me":
                data = await make_graphql_request(QUERIES["me"])
                return dict(data.get("me", {}))

            if action == "list":
                data = await make_graphql_request(QUERIES["list"])
                users = data.get("users", [])
                return {"users": list(users) if isinstance(users, list) else []}

            if action == "get":
                if not user_id:
                    raise ToolError("user_id is required for 'get' action")
                data = await make_graphql_request(QUERIES["get"], {"id": user_id})
                return dict(data.get("user", {}))

            if action == "add":
                if not name or not password:
                    raise ToolError("add requires name and password")
                input_data: dict[str, Any] = {"name": name, "password": password}
                if role:
                    input_data["role"] = role.upper()
                data = await make_graphql_request(
                    MUTATIONS["add"], {"input": input_data}
                )
                return {
                    "success": True,
                    "user": data.get("addUser", {}),
                }

            if action == "delete":
                if not user_id:
                    raise ToolError("user_id is required for 'delete' action")
                data = await make_graphql_request(
                    MUTATIONS["delete"], {"id": user_id}
                )
                return {
                    "success": True,
                    "message": f"User '{user_id}' deleted",
                }

            if action == "cloud":
                data = await make_graphql_request(QUERIES["cloud"])
                return dict(data.get("cloud", {}))

            if action == "remote_access":
                data = await make_graphql_request(QUERIES["remote_access"])
                return dict(data.get("remoteAccess", {}))

            if action == "origins":
                data = await make_graphql_request(QUERIES["origins"])
                origins = data.get("allowedOrigins", [])
                return {"origins": list(origins) if isinstance(origins, list) else []}

            return {}

        except ToolError:
            raise
        except Exception as e:
            logger.error(f"Error in unraid_users action={action}: {e}", exc_info=True)
            raise ToolError(f"Failed to execute users/{action}: {str(e)}") from e

    logger.info("Users tool registered successfully")
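The destructive-action gate above is a plain set-membership check before any dispatch. A minimal standalone sketch of that guard (the `PermissionError` stand-in is hypothetical; the real tool raises `ToolError`):

```python
DESTRUCTIVE_ACTIONS = {"delete"}


def check_confirm(action: str, confirm: bool) -> None:
    """Refuse destructive actions unless the caller passed confirm=True,
    mirroring the guard in unraid_users."""
    if action in DESTRUCTIVE_ACTIONS and not confirm:
        raise PermissionError(
            f"Action '{action}' is destructive. Set confirm=True to proceed."
        )


check_confirm("me", False)       # read-only action: passes
check_confirm("delete", True)    # destructive but confirmed: passes
```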
@@ -1,11 +1,10 @@
"""Virtual machine management tools.
"""Virtual machine management.

This module provides tools for VM lifecycle management and monitoring
including listing VMs, VM operations (start/stop/pause/reboot/etc),
and detailed VM information retrieval.
Provides the `unraid_vm` tool with 9 actions for VM lifecycle management
including start, stop, pause, resume, force stop, reboot, and reset.
"""

from typing import Any
from typing import Any, Literal

from fastmcp import FastMCP

@@ -13,150 +12,148 @@ from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


def register_vm_tools(mcp: FastMCP) -> None:
    """Register all VM tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def list_vms() -> list[dict[str, Any]]:
        """Lists all Virtual Machines (VMs) on the Unraid system and their current state.

        Returns:
            List of VM information dictionaries with UUID, name, and state
        """
        query = """
QUERIES: dict[str, str] = {
    "list": """
        query ListVMs {
          vms {
            id
            domains {
              id
              name
              state
              uuid
          vms { id domains { id name state uuid } }
            }
          }
        }
        """
        try:
            logger.info("Executing list_vms tool")
            response_data = await make_graphql_request(query)
            logger.info(f"VM query response: {response_data}")
            if response_data.get("vms") and response_data["vms"].get("domains"):
                vms = response_data["vms"]["domains"]
                logger.info(f"Found {len(vms)} VMs")
                return list(vms) if isinstance(vms, list) else []
            else:
                logger.info("No VMs found in domains field")
                return []
        except Exception as e:
            logger.error(f"Error in list_vms: {e}", exc_info=True)
            error_msg = str(e)
            if "VMs are not available" in error_msg:
                raise ToolError("VMs are not available on this Unraid server. This could mean: 1) VM support is not enabled, 2) VM service is not running, or 3) no VMs are configured. Check Unraid VM settings.") from e
            else:
                raise ToolError(f"Failed to list virtual machines: {error_msg}") from e

    @mcp.tool()
    async def manage_vm(vm_uuid: str, action: str) -> dict[str, Any]:
        """Manages a VM: start, stop, pause, resume, force_stop, reboot, reset. Uses VM UUID.

        Args:
            vm_uuid: UUID of the VM to manage
            action: Action to perform - one of: start, stop, pause, resume, forceStop, reboot, reset

        Returns:
            Dict containing operation success status and details
        """
        valid_actions = ["start", "stop", "pause", "resume", "forceStop", "reboot", "reset"]  # Added reset operation
        if action not in valid_actions:
            logger.warning(f"Invalid action '{action}' for manage_vm")
            raise ToolError(f"Invalid action. Must be one of {valid_actions}.")

        mutation_name = action
        query = f"""
        mutation ManageVM($id: PrefixedID!) {{
          vm {{
            {mutation_name}(id: $id)
          }}
        }}
        """
        variables = {"id": vm_uuid}
        try:
            logger.info(f"Executing manage_vm tool: action={action}, uuid={vm_uuid}")
            response_data = await make_graphql_request(query, variables)
            if response_data.get("vm") and mutation_name in response_data["vm"]:
                # Mutations for VM return Boolean for success
                success = response_data["vm"][mutation_name]
                return {"success": success, "action": action, "vm_uuid": vm_uuid}
            raise ToolError(f"Failed to {action} VM or unexpected response structure.")
        except Exception as e:
            logger.error(f"Error in manage_vm ({action}): {e}", exc_info=True)
            raise ToolError(f"Failed to {action} virtual machine: {str(e)}") from e

||||
@mcp.tool()
|
||||
async def get_vm_details(vm_identifier: str) -> dict[str, Any]:
|
||||
"""Retrieves detailed information for a specific VM by its UUID or name.
|
||||
|
||||
Args:
|
||||
vm_identifier: VM UUID or name to retrieve details for
|
||||
|
||||
Returns:
|
||||
Dict containing detailed VM information
|
||||
"""
|
||||
# Make direct GraphQL call instead of calling list_vms() tool
|
||||
query = """
|
||||
""",
|
||||
"details": """
|
||||
query GetVmDetails {
|
||||
vms {
|
||||
domains {
|
||||
id
|
||||
name
|
||||
state
|
||||
uuid
|
||||
}
|
||||
domain {
|
||||
id
|
||||
name
|
||||
state
|
||||
uuid
|
||||
}
|
||||
}
|
||||
vms { domains { id name state uuid } }
|
||||
}
|
||||
""",
|
||||
}
|
||||
|
||||
MUTATIONS: dict[str, str] = {
|
||||
"start": """
|
||||
mutation StartVM($id: PrefixedID!) { vm { start(id: $id) } }
|
||||
""",
|
||||
"stop": """
|
||||
mutation StopVM($id: PrefixedID!) { vm { stop(id: $id) } }
|
||||
""",
|
||||
"pause": """
|
||||
mutation PauseVM($id: PrefixedID!) { vm { pause(id: $id) } }
|
||||
""",
|
||||
"resume": """
|
||||
mutation ResumeVM($id: PrefixedID!) { vm { resume(id: $id) } }
|
||||
""",
|
||||
"force_stop": """
|
||||
mutation ForceStopVM($id: PrefixedID!) { vm { forceStop(id: $id) } }
|
||||
""",
|
||||
"reboot": """
|
||||
mutation RebootVM($id: PrefixedID!) { vm { reboot(id: $id) } }
|
||||
""",
|
||||
"reset": """
|
||||
mutation ResetVM($id: PrefixedID!) { vm { reset(id: $id) } }
|
||||
""",
|
||||
}
|
||||
|
||||
# Map action names to their GraphQL field names
|
||||
_MUTATION_FIELDS: dict[str, str] = {
|
||||
"start": "start",
|
||||
"stop": "stop",
|
||||
"pause": "pause",
|
||||
"resume": "resume",
|
||||
"force_stop": "forceStop",
|
||||
"reboot": "reboot",
|
||||
"reset": "reset",
|
||||
}
|
||||
|
||||
DESTRUCTIVE_ACTIONS = {"force_stop", "reset"}
|
||||
|
||||
VM_ACTIONS = Literal[
|
||||
"list", "details",
|
||||
"start", "stop", "pause", "resume", "force_stop", "reboot", "reset",
|
||||
]
|
||||
|
||||
|
||||
def register_vm_tool(mcp: FastMCP) -> None:
|
||||
"""Register the unraid_vm tool with the FastMCP instance."""
|
||||
|
||||
@mcp.tool()
|
||||
async def unraid_vm(
|
||||
action: VM_ACTIONS,
|
||||
vm_id: str | None = None,
|
||||
confirm: bool = False,
|
||||
) -> dict[str, Any]:
|
||||
"""Manage Unraid virtual machines.
|
||||
|
||||
Actions:
|
||||
list - List all VMs with state
|
||||
details - Detailed info for a VM (requires vm_id: UUID, PrefixedID, or name)
|
||||
start - Start a VM (requires vm_id)
|
||||
stop - Gracefully stop a VM (requires vm_id)
|
||||
pause - Pause a VM (requires vm_id)
|
||||
resume - Resume a paused VM (requires vm_id)
|
||||
force_stop - Force stop a VM (requires vm_id, confirm=True)
|
||||
reboot - Reboot a VM (requires vm_id)
|
||||
reset - Reset a VM (requires vm_id, confirm=True)
|
||||
"""
|
||||
all_actions = set(QUERIES) | set(MUTATIONS)
|
||||
if action not in all_actions:
|
||||
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(all_actions)}")
|
||||
|
||||
if action in DESTRUCTIVE_ACTIONS and not confirm:
|
||||
raise ToolError(f"Action '{action}' is destructive. Set confirm=True to proceed.")
|
||||
|
||||
if action != "list" and not vm_id:
|
||||
raise ToolError(f"vm_id is required for '{action}' action")
|
||||
|
||||
try:
|
||||
logger.info(f"Executing get_vm_details for identifier: {vm_identifier}")
|
||||
response_data = await make_graphql_request(query)
|
||||
logger.info(f"Executing unraid_vm action={action}")
|
||||
|
||||
if response_data.get("vms"):
|
||||
vms_data = response_data["vms"]
|
||||
# Try to get VMs from either domains or domain field
|
||||
vms = vms_data.get("domains") or vms_data.get("domain") or []
|
||||
if action == "list":
|
||||
data = await make_graphql_request(QUERIES["list"])
|
||||
if data.get("vms") and data["vms"].get("domains"):
|
||||
vms = data["vms"]["domains"]
|
||||
return {"vms": list(vms) if isinstance(vms, list) else []}
|
||||
return {"vms": []}
|
||||
|
||||
if vms:
|
||||
for vm_data in vms:
|
||||
if (vm_data.get("uuid") == vm_identifier or
|
||||
vm_data.get("id") == vm_identifier or
|
||||
vm_data.get("name") == vm_identifier):
|
||||
logger.info(f"Found VM {vm_identifier}")
|
||||
return dict(vm_data) if isinstance(vm_data, dict) else {}
|
||||
if action == "details":
|
||||
data = await make_graphql_request(QUERIES["details"])
|
||||
if data.get("vms"):
|
||||
vms = data["vms"].get("domains") or []
|
||||
for vm in vms:
|
||||
if (
|
||||
vm.get("uuid") == vm_id
|
||||
or vm.get("id") == vm_id
|
||||
or vm.get("name") == vm_id
|
||||
):
|
||||
return dict(vm) if isinstance(vm, dict) else {}
|
||||
available = [
|
||||
f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms
|
||||
]
|
||||
raise ToolError(
|
||||
f"VM '{vm_id}' not found. Available: {', '.join(available)}"
|
||||
)
|
||||
raise ToolError("No VM data returned from server")
|
||||
|
||||
logger.warning(f"VM with identifier '{vm_identifier}' not found.")
|
||||
available_vms = [f"{vm.get('name')} (UUID: {vm.get('uuid')}, ID: {vm.get('id')})" for vm in vms]
|
||||
raise ToolError(f"VM '{vm_identifier}' not found. Available VMs: {', '.join(available_vms)}")
|
||||
else:
|
||||
raise ToolError("No VMs available or VMs not accessible")
|
||||
else:
|
||||
raise ToolError("No VMs data returned from server")
|
||||
# Mutations
|
||||
if action in MUTATIONS:
|
||||
data = await make_graphql_request(
|
||||
MUTATIONS[action], {"id": vm_id}
|
||||
)
|
||||
field = _MUTATION_FIELDS[action]
|
||||
if data.get("vm") and field in data["vm"]:
|
||||
return {
|
||||
"success": data["vm"][field],
|
||||
"action": action,
|
||||
"vm_id": vm_id,
|
||||
}
|
||||
raise ToolError(f"Failed to {action} VM or unexpected response")
|
||||
|
||||
return {}
|
||||
|
||||
except ToolError:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_vm_details: {e}", exc_info=True)
|
||||
error_msg = str(e)
|
||||
if "VMs are not available" in error_msg:
|
||||
raise ToolError("VMs are not available on this Unraid server. This could mean: 1) VM support is not enabled, 2) VM service is not running, or 3) no VMs are configured. Check Unraid VM settings.") from e
|
||||
else:
|
||||
raise ToolError(f"Failed to retrieve VM details: {error_msg}") from e
|
||||
logger.error(f"Error in unraid_vm action={action}: {e}", exc_info=True)
|
||||
msg = str(e)
|
||||
if "VMs are not available" in msg:
|
||||
raise ToolError(
|
||||
"VMs not available on this server. Check VM support is enabled."
|
||||
) from e
|
||||
raise ToolError(f"Failed to execute vm/{action}: {msg}") from e
|
||||
|
||||
logger.info("VM tools registered successfully")
|
||||
logger.info("VM tool registered successfully")
|
||||
|
||||
160  uv.lock  generated
@@ -1,5 +1,5 @@
 version = 1
-revision = 2
+revision = 3
 requires-python = ">=3.10"

 [[package]]
@@ -267,6 +267,110 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]

[[package]]
name = "coverage"
version = "7.13.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/11/43/3e4ac666cc35f231fa70c94e9f38459299de1a152813f9d2f60fc5f3ecaf/coverage-7.13.3.tar.gz", hash = "sha256:f7f6182d3dfb8802c1747eacbfe611b669455b69b7c037484bb1efbbb56711ac", size = 826832, upload-time = "2026-02-03T14:02:30.944Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ab/07/1c8099563a8a6c389a31c2d0aa1497cee86d6248bb4b9ba5e779215db9f9/coverage-7.13.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0b4f345f7265cdbdb5ec2521ffff15fa49de6d6c39abf89fc7ad68aa9e3a55f0", size = 219143, upload-time = "2026-02-03T13:59:40.459Z" },
{ url = "https://files.pythonhosted.org/packages/69/39/a892d44af7aa092cab70e0cc5cdbba18eeccfe1d6930695dab1742eef9e9/coverage-7.13.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:96c3be8bae9d0333e403cc1a8eb078a7f928b5650bae94a18fb4820cc993fb9b", size = 219663, upload-time = "2026-02-03T13:59:41.951Z" },
{ url = "https://files.pythonhosted.org/packages/9a/25/9669dcf4c2bb4c3861469e6db20e52e8c11908cf53c14ec9b12e9fd4d602/coverage-7.13.3-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d6f4a21328ea49d38565b55599e1c02834e76583a6953e5586d65cb1efebd8f8", size = 246424, upload-time = "2026-02-03T13:59:43.418Z" },
{ url = "https://files.pythonhosted.org/packages/f3/68/d9766c4e298aca62ea5d9543e1dd1e4e1439d7284815244d8b7db1840bfb/coverage-7.13.3-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:fc970575799a9d17d5c3fafd83a0f6ccf5d5117cdc9ad6fbd791e9ead82418b0", size = 248228, upload-time = "2026-02-03T13:59:44.816Z" },
{ url = "https://files.pythonhosted.org/packages/f0/e2/eea6cb4a4bd443741adf008d4cccec83a1f75401df59b6559aca2bdd9710/coverage-7.13.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:87ff33b652b3556b05e204ae20793d1f872161b0fa5ec8a9ac76f8430e152ed6", size = 250103, upload-time = "2026-02-03T13:59:46.271Z" },
{ url = "https://files.pythonhosted.org/packages/db/77/664280ecd666c2191610842177e2fab9e5dbdeef97178e2078fed46a3d2c/coverage-7.13.3-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:7df8759ee57b9f3f7b66799b7660c282f4375bef620ade1686d6a7b03699e75f", size = 247107, upload-time = "2026-02-03T13:59:48.53Z" },
{ url = "https://files.pythonhosted.org/packages/2b/df/2a672eab99e0d0eba52d8a63e47dc92245eee26954d1b2d3c8f7d372151f/coverage-7.13.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f45c9bcb16bee25a798ccba8a2f6a1251b19de6a0d617bb365d7d2f386c4e20e", size = 248143, upload-time = "2026-02-03T13:59:50.027Z" },
{ url = "https://files.pythonhosted.org/packages/a5/dc/a104e7a87c13e57a358b8b9199a8955676e1703bb372d79722b54978ae45/coverage-7.13.3-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:318b2e4753cbf611061e01b6cc81477e1cdfeb69c36c4a14e6595e674caadb56", size = 246148, upload-time = "2026-02-03T13:59:52.025Z" },
{ url = "https://files.pythonhosted.org/packages/2b/89/e113d3a58dc20b03b7e59aed1e53ebc9ca6167f961876443e002b10e3ae9/coverage-7.13.3-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:24db3959de8ee394eeeca89ccb8ba25305c2da9a668dd44173394cbd5aa0777f", size = 246414, upload-time = "2026-02-03T13:59:53.859Z" },
{ url = "https://files.pythonhosted.org/packages/3f/60/a3fd0a6e8d89b488396019a2268b6a1f25ab56d6d18f3be50f35d77b47dc/coverage-7.13.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:be14d0622125edef21b3a4d8cd2d138c4872bf6e38adc90fd92385e3312f406a", size = 247023, upload-time = "2026-02-03T13:59:55.454Z" },
{ url = "https://files.pythonhosted.org/packages/19/fa/de4840bb939dbb22ba0648a6d8069fa91c9cf3b3fca8b0d1df461e885b3d/coverage-7.13.3-cp310-cp310-win32.whl", hash = "sha256:53be4aab8ddef18beb6188f3a3fdbf4d1af2277d098d4e618be3a8e6c88e74be", size = 221751, upload-time = "2026-02-03T13:59:57.383Z" },
{ url = "https://files.pythonhosted.org/packages/de/87/233ff8b7ef62fb63f58c78623b50bef69681111e0c4d43504f422d88cda4/coverage-7.13.3-cp310-cp310-win_amd64.whl", hash = "sha256:bfeee64ad8b4aae3233abb77eb6b52b51b05fa89da9645518671b9939a78732b", size = 222686, upload-time = "2026-02-03T13:59:58.825Z" },
{ url = "https://files.pythonhosted.org/packages/ec/09/1ac74e37cf45f17eb41e11a21854f7f92a4c2d6c6098ef4a1becb0c6d8d3/coverage-7.13.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5907605ee20e126eeee2abe14aae137043c2c8af2fa9b38d2ab3b7a6b8137f73", size = 219276, upload-time = "2026-02-03T14:00:00.296Z" },
{ url = "https://files.pythonhosted.org/packages/2e/cb/71908b08b21beb2c437d0d5870c4ec129c570ca1b386a8427fcdb11cf89c/coverage-7.13.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a88705500988c8acad8b8fd86c2a933d3aa96bec1ddc4bc5cb256360db7bbd00", size = 219776, upload-time = "2026-02-03T14:00:02.414Z" },
{ url = "https://files.pythonhosted.org/packages/09/85/c4f3dd69232887666a2c0394d4be21c60ea934d404db068e6c96aa59cd87/coverage-7.13.3-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:7bbb5aa9016c4c29e3432e087aa29ebee3f8fda089cfbfb4e6d64bd292dcd1c2", size = 250196, upload-time = "2026-02-03T14:00:04.197Z" },
{ url = "https://files.pythonhosted.org/packages/9c/cc/560ad6f12010344d0778e268df5ba9aa990aacccc310d478bf82bf3d302c/coverage-7.13.3-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0c2be202a83dde768937a61cdc5d06bf9fb204048ca199d93479488e6247656c", size = 252111, upload-time = "2026-02-03T14:00:05.639Z" },
{ url = "https://files.pythonhosted.org/packages/f0/66/3193985fb2c58e91f94cfbe9e21a6fdf941e9301fe2be9e92c072e9c8f8c/coverage-7.13.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f45e32ef383ce56e0ca099b2e02fcdf7950be4b1b56afaab27b4ad790befe5b", size = 254217, upload-time = "2026-02-03T14:00:07.738Z" },
{ url = "https://files.pythonhosted.org/packages/c5/78/f0f91556bf1faa416792e537c523c5ef9db9b1d32a50572c102b3d7c45b3/coverage-7.13.3-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6ed2e787249b922a93cd95c671cc9f4c9797a106e81b455c83a9ddb9d34590c0", size = 250318, upload-time = "2026-02-03T14:00:09.224Z" },
{ url = "https://files.pythonhosted.org/packages/6f/aa/fc654e45e837d137b2c1f3a2cc09b4aea1e8b015acd2f774fa0f3d2ddeba/coverage-7.13.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:05dd25b21afffe545e808265897c35f32d3e4437663923e0d256d9ab5031fb14", size = 251909, upload-time = "2026-02-03T14:00:10.712Z" },
{ url = "https://files.pythonhosted.org/packages/73/4d/ab53063992add8a9ca0463c9d92cce5994a29e17affd1c2daa091b922a93/coverage-7.13.3-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:46d29926349b5c4f1ea4fca95e8c892835515f3600995a383fa9a923b5739ea4", size = 249971, upload-time = "2026-02-03T14:00:12.402Z" },
{ url = "https://files.pythonhosted.org/packages/29/25/83694b81e46fcff9899694a1b6f57573429cdd82b57932f09a698f03eea5/coverage-7.13.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:fae6a21537519c2af00245e834e5bf2884699cc7c1055738fd0f9dc37a3644ad", size = 249692, upload-time = "2026-02-03T14:00:13.868Z" },
{ url = "https://files.pythonhosted.org/packages/d4/ef/d68fc304301f4cb4bf6aefa0045310520789ca38dabdfba9dbecd3f37919/coverage-7.13.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c672d4e2f0575a4ca2bf2aa0c5ced5188220ab806c1bb6d7179f70a11a017222", size = 250597, upload-time = "2026-02-03T14:00:15.461Z" },
{ url = "https://files.pythonhosted.org/packages/8d/85/240ad396f914df361d0f71e912ddcedb48130c71b88dc4193fe3c0306f00/coverage-7.13.3-cp311-cp311-win32.whl", hash = "sha256:fcda51c918c7a13ad93b5f89a58d56e3a072c9e0ba5c231b0ed81404bf2648fb", size = 221773, upload-time = "2026-02-03T14:00:17.462Z" },
{ url = "https://files.pythonhosted.org/packages/2f/71/165b3a6d3d052704a9ab52d11ea64ef3426745de517dda44d872716213a7/coverage-7.13.3-cp311-cp311-win_amd64.whl", hash = "sha256:d1a049b5c51b3b679928dd35e47c4a2235e0b6128b479a7596d0ef5b42fa6301", size = 222711, upload-time = "2026-02-03T14:00:19.449Z" },
{ url = "https://files.pythonhosted.org/packages/51/d0/0ddc9c5934cdd52639c5df1f1eb0fdab51bb52348f3a8d1c7db9c600d93a/coverage-7.13.3-cp311-cp311-win_arm64.whl", hash = "sha256:79f2670c7e772f4917895c3d89aad59e01f3dbe68a4ed2d0373b431fad1dcfba", size = 221377, upload-time = "2026-02-03T14:00:20.968Z" },
{ url = "https://files.pythonhosted.org/packages/94/44/330f8e83b143f6668778ed61d17ece9dc48459e9e74669177de02f45fec5/coverage-7.13.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ed48b4170caa2c4420e0cd27dc977caaffc7eecc317355751df8373dddcef595", size = 219441, upload-time = "2026-02-03T14:00:22.585Z" },
{ url = "https://files.pythonhosted.org/packages/08/e7/29db05693562c2e65bdf6910c0af2fd6f9325b8f43caf7a258413f369e30/coverage-7.13.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8f2adf4bcffbbec41f366f2e6dffb9d24e8172d16e91da5799c9b7ed6b5716e6", size = 219801, upload-time = "2026-02-03T14:00:24.186Z" },
{ url = "https://files.pythonhosted.org/packages/90/ae/7f8a78249b02b0818db46220795f8ac8312ea4abd1d37d79ea81db5cae81/coverage-7.13.3-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:01119735c690786b6966a1e9f098da4cd7ca9174c4cfe076d04e653105488395", size = 251306, upload-time = "2026-02-03T14:00:25.798Z" },
{ url = "https://files.pythonhosted.org/packages/62/71/a18a53d1808e09b2e9ebd6b47dad5e92daf4c38b0686b4c4d1b2f3e42b7f/coverage-7.13.3-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:8bb09e83c603f152d855f666d70a71765ca8e67332e5829e62cb9466c176af23", size = 254051, upload-time = "2026-02-03T14:00:27.474Z" },
{ url = "https://files.pythonhosted.org/packages/4a/0a/eb30f6455d04c5a3396d0696cad2df0269ae7444bb322f86ffe3376f7bf9/coverage-7.13.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b607a40cba795cfac6d130220d25962931ce101f2f478a29822b19755377fb34", size = 255160, upload-time = "2026-02-03T14:00:29.024Z" },
{ url = "https://files.pythonhosted.org/packages/7b/7e/a45baac86274ce3ed842dbb84f14560c673ad30535f397d89164ec56c5df/coverage-7.13.3-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:44f14a62f5da2e9aedf9080e01d2cda61df39197d48e323538ec037336d68da8", size = 251709, upload-time = "2026-02-03T14:00:30.641Z" },
{ url = "https://files.pythonhosted.org/packages/c0/df/dd0dc12f30da11349993f3e218901fdf82f45ee44773596050c8f5a1fb25/coverage-7.13.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:debf29e0b157769843dff0981cc76f79e0ed04e36bb773c6cac5f6029054bd8a", size = 253083, upload-time = "2026-02-03T14:00:32.14Z" },
{ url = "https://files.pythonhosted.org/packages/ab/32/fc764c8389a8ce95cb90eb97af4c32f392ab0ac23ec57cadeefb887188d3/coverage-7.13.3-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:824bb95cd71604031ae9a48edb91fd6effde669522f960375668ed21b36e3ec4", size = 251227, upload-time = "2026-02-03T14:00:34.721Z" },
{ url = "https://files.pythonhosted.org/packages/dd/ca/d025e9da8f06f24c34d2da9873957cfc5f7e0d67802c3e34d0caa8452130/coverage-7.13.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:8f1010029a5b52dc427c8e2a8dbddb2303ddd180b806687d1acd1bb1d06649e7", size = 250794, upload-time = "2026-02-03T14:00:36.278Z" },
{ url = "https://files.pythonhosted.org/packages/45/c7/76bf35d5d488ec8f68682eb8e7671acc50a6d2d1c1182de1d2b6d4ffad3b/coverage-7.13.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:cd5dee4fd7659d8306ffa79eeaaafd91fa30a302dac3af723b9b469e549247e0", size = 252671, upload-time = "2026-02-03T14:00:38.368Z" },
{ url = "https://files.pythonhosted.org/packages/bf/10/1921f1a03a7c209e1cb374f81a6b9b68b03cdb3ecc3433c189bc90e2a3d5/coverage-7.13.3-cp312-cp312-win32.whl", hash = "sha256:f7f153d0184d45f3873b3ad3ad22694fd73aadcb8cdbc4337ab4b41ea6b4dff1", size = 221986, upload-time = "2026-02-03T14:00:40.442Z" },
{ url = "https://files.pythonhosted.org/packages/3c/7c/f5d93297f8e125a80c15545edc754d93e0ed8ba255b65e609b185296af01/coverage-7.13.3-cp312-cp312-win_amd64.whl", hash = "sha256:03a6e5e1e50819d6d7436f5bc40c92ded7e484e400716886ac921e35c133149d", size = 222793, upload-time = "2026-02-03T14:00:42.106Z" },
{ url = "https://files.pythonhosted.org/packages/43/59/c86b84170015b4555ebabca8649bdf9f4a1f737a73168088385ed0f947c4/coverage-7.13.3-cp312-cp312-win_arm64.whl", hash = "sha256:51c4c42c0e7d09a822b08b6cf79b3c4db8333fffde7450da946719ba0d45730f", size = 221410, upload-time = "2026-02-03T14:00:43.726Z" },
{ url = "https://files.pythonhosted.org/packages/81/f3/4c333da7b373e8c8bfb62517e8174a01dcc373d7a9083698e3b39d50d59c/coverage-7.13.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:853c3d3c79ff0db65797aad79dee6be020efd218ac4510f15a205f1e8d13ce25", size = 219468, upload-time = "2026-02-03T14:00:45.829Z" },
{ url = "https://files.pythonhosted.org/packages/d6/31/0714337b7d23630c8de2f4d56acf43c65f8728a45ed529b34410683f7217/coverage-7.13.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f75695e157c83d374f88dcc646a60cb94173304a9258b2e74ba5a66b7614a51a", size = 219839, upload-time = "2026-02-03T14:00:47.407Z" },
{ url = "https://files.pythonhosted.org/packages/12/99/bd6f2a2738144c98945666f90cae446ed870cecf0421c767475fcf42cdbe/coverage-7.13.3-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:2d098709621d0819039f3f1e471ee554f55a0b2ac0d816883c765b14129b5627", size = 250828, upload-time = "2026-02-03T14:00:49.029Z" },
{ url = "https://files.pythonhosted.org/packages/6f/99/97b600225fbf631e6f5bfd3ad5bcaf87fbb9e34ff87492e5a572ff01bbe2/coverage-7.13.3-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:16d23d6579cf80a474ad160ca14d8b319abaa6db62759d6eef53b2fc979b58c8", size = 253432, upload-time = "2026-02-03T14:00:50.655Z" },
{ url = "https://files.pythonhosted.org/packages/5f/5c/abe2b3490bda26bd4f5e3e799be0bdf00bd81edebedc2c9da8d3ef288fa8/coverage-7.13.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:00d34b29a59d2076e6f318b30a00a69bf63687e30cd882984ed444e753990cc1", size = 254672, upload-time = "2026-02-03T14:00:52.757Z" },
{ url = "https://files.pythonhosted.org/packages/31/ba/5d1957c76b40daff53971fe0adb84d9c2162b614280031d1d0653dd010c1/coverage-7.13.3-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:ab6d72bffac9deb6e6cb0f61042e748de3f9f8e98afb0375a8e64b0b6e11746b", size = 251050, upload-time = "2026-02-03T14:00:54.332Z" },
{ url = "https://files.pythonhosted.org/packages/69/dc/dffdf3bfe9d32090f047d3c3085378558cb4eb6778cda7de414ad74581ed/coverage-7.13.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e129328ad1258e49cae0123a3b5fcb93d6c2fa90d540f0b4c7cdcdc019aaa3dc", size = 252801, upload-time = "2026-02-03T14:00:56.121Z" },
{ url = "https://files.pythonhosted.org/packages/87/51/cdf6198b0f2746e04511a30dc9185d7b8cdd895276c07bdb538e37f1cd50/coverage-7.13.3-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:2213a8d88ed35459bda71597599d4eec7c2ebad201c88f0bfc2c26fd9b0dd2ea", size = 250763, upload-time = "2026-02-03T14:00:58.719Z" },
{ url = "https://files.pythonhosted.org/packages/d7/1a/596b7d62218c1d69f2475b69cc6b211e33c83c902f38ee6ae9766dd422da/coverage-7.13.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:00dd3f02de6d5f5c9c3d95e3e036c3c2e2a669f8bf2d3ceb92505c4ce7838f67", size = 250587, upload-time = "2026-02-03T14:01:01.197Z" },
{ url = "https://files.pythonhosted.org/packages/f7/46/52330d5841ff660f22c130b75f5e1dd3e352c8e7baef5e5fef6b14e3e991/coverage-7.13.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f9bada7bc660d20b23d7d312ebe29e927b655cf414dadcdb6335a2075695bd86", size = 252358, upload-time = "2026-02-03T14:01:02.824Z" },
{ url = "https://files.pythonhosted.org/packages/36/8a/e69a5be51923097ba7d5cff9724466e74fe486e9232020ba97c809a8b42b/coverage-7.13.3-cp313-cp313-win32.whl", hash = "sha256:75b3c0300f3fa15809bd62d9ca8b170eb21fcf0100eb4b4154d6dc8b3a5bbd43", size = 222007, upload-time = "2026-02-03T14:01:04.876Z" },
{ url = "https://files.pythonhosted.org/packages/0a/09/a5a069bcee0d613bdd48ee7637fa73bc09e7ed4342b26890f2df97cc9682/coverage-7.13.3-cp313-cp313-win_amd64.whl", hash = "sha256:a2f7589c6132c44c53f6e705e1a6677e2b7821378c22f7703b2cf5388d0d4587", size = 222812, upload-time = "2026-02-03T14:01:07.296Z" },
{ url = "https://files.pythonhosted.org/packages/3d/4f/d62ad7dfe32f9e3d4a10c178bb6f98b10b083d6e0530ca202b399371f6c1/coverage-7.13.3-cp313-cp313-win_arm64.whl", hash = "sha256:123ceaf2b9d8c614f01110f908a341e05b1b305d6b2ada98763b9a5a59756051", size = 221433, upload-time = "2026-02-03T14:01:09.156Z" },
{ url = "https://files.pythonhosted.org/packages/04/b2/4876c46d723d80b9c5b695f1a11bf5f7c3dabf540ec00d6edc076ff025e6/coverage-7.13.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:cc7fd0f726795420f3678ac82ff882c7fc33770bd0074463b5aef7293285ace9", size = 220162, upload-time = "2026-02-03T14:01:11.409Z" },
{ url = "https://files.pythonhosted.org/packages/fc/04/9942b64a0e0bdda2c109f56bda42b2a59d9d3df4c94b85a323c1cae9fc77/coverage-7.13.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:d358dc408edc28730aed5477a69338e444e62fba0b7e9e4a131c505fadad691e", size = 220510, upload-time = "2026-02-03T14:01:13.038Z" },
{ url = "https://files.pythonhosted.org/packages/5a/82/5cfe1e81eae525b74669f9795f37eb3edd4679b873d79d1e6c1c14ee6c1c/coverage-7.13.3-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5d67b9ed6f7b5527b209b24b3df9f2e5bf0198c1bbf99c6971b0e2dcb7e2a107", size = 261801, upload-time = "2026-02-03T14:01:14.674Z" },
{ url = "https://files.pythonhosted.org/packages/0b/ec/a553d7f742fd2cd12e36a16a7b4b3582d5934b496ef2b5ea8abeb10903d4/coverage-7.13.3-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:59224bfb2e9b37c1335ae35d00daa3a5b4e0b1a20f530be208fff1ecfa436f43", size = 263882, upload-time = "2026-02-03T14:01:16.343Z" },
{ url = "https://files.pythonhosted.org/packages/e1/58/8f54a2a93e3d675635bc406de1c9ac8d551312142ff52c9d71b5e533ad45/coverage-7.13.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae9306b5299e31e31e0d3b908c66bcb6e7e3ddca143dea0266e9ce6c667346d3", size = 266306, upload-time = "2026-02-03T14:01:18.02Z" },
{ url = "https://files.pythonhosted.org/packages/1a/be/e593399fd6ea1f00aee79ebd7cc401021f218d34e96682a92e1bae092ff6/coverage-7.13.3-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:343aaeb5f8bb7bcd38620fd7bc56e6ee8207847d8c6103a1e7b72322d381ba4a", size = 261051, upload-time = "2026-02-03T14:01:19.757Z" },
{ url = "https://files.pythonhosted.org/packages/5c/e5/e9e0f6138b21bcdebccac36fbfde9cf15eb1bbcea9f5b1f35cd1f465fb91/coverage-7.13.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:b2182129f4c101272ff5f2f18038d7b698db1bf8e7aa9e615cb48440899ad32e", size = 263868, upload-time = "2026-02-03T14:01:21.487Z" },
{ url = "https://files.pythonhosted.org/packages/9a/bf/de72cfebb69756f2d4a2dde35efcc33c47d85cd3ebdf844b3914aac2ef28/coverage-7.13.3-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:94d2ac94bd0cc57c5626f52f8c2fffed1444b5ae8c9fc68320306cc2b255e155", size = 261498, upload-time = "2026-02-03T14:01:23.097Z" },
{ url = "https://files.pythonhosted.org/packages/f2/91/4a2d313a70fc2e98ca53afd1c8ce67a89b1944cd996589a5b1fe7fbb3e5c/coverage-7.13.3-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:65436cde5ecabe26fb2f0bf598962f0a054d3f23ad529361326ac002c61a2a1e", size = 260394, upload-time = "2026-02-03T14:01:24.949Z" },
{ url = "https://files.pythonhosted.org/packages/40/83/25113af7cf6941e779eb7ed8de2a677865b859a07ccee9146d4cc06a03e3/coverage-7.13.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:db83b77f97129813dbd463a67e5335adc6a6a91db652cc085d60c2d512746f96", size = 262579, upload-time = "2026-02-03T14:01:26.703Z" },
{ url = "https://files.pythonhosted.org/packages/1e/19/a5f2b96262977e82fb9aabbe19b4d83561f5d063f18dde3e72f34ffc3b2f/coverage-7.13.3-cp313-cp313t-win32.whl", hash = "sha256:dfb428e41377e6b9ba1b0a32df6db5409cb089a0ed1d0a672dc4953ec110d84f", size = 222679, upload-time = "2026-02-03T14:01:28.553Z" },
{ url = "https://files.pythonhosted.org/packages/81/82/ef1747b88c87a5c7d7edc3704799ebd650189a9158e680a063308b6125ef/coverage-7.13.3-cp313-cp313t-win_amd64.whl", hash = "sha256:5badd7e596e6b0c89aa8ec6d37f4473e4357f982ce57f9a2942b0221cd9cf60c", size = 223740, upload-time = "2026-02-03T14:01:30.776Z" },
{ url = "https://files.pythonhosted.org/packages/1c/4c/a67c7bb5b560241c22736a9cb2f14c5034149ffae18630323fde787339e4/coverage-7.13.3-cp313-cp313t-win_arm64.whl", hash = "sha256:989aa158c0eb19d83c76c26f4ba00dbb272485c56e452010a3450bdbc9daafd9", size = 221996, upload-time = "2026-02-03T14:01:32.495Z" },
{ url = "https://files.pythonhosted.org/packages/5e/b3/677bb43427fed9298905106f39c6520ac75f746f81b8f01104526a8026e4/coverage-7.13.3-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:c6f6169bbdbdb85aab8ac0392d776948907267fcc91deeacf6f9d55f7a83ae3b", size = 219513, upload-time = "2026-02-03T14:01:34.29Z" },
{ url = "https://files.pythonhosted.org/packages/42/53/290046e3bbf8986cdb7366a42dab3440b9983711eaff044a51b11006c67b/coverage-7.13.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:2f5e731627a3d5ef11a2a35aa0c6f7c435867c7ccbc391268eb4f2ca5dbdcc10", size = 219850, upload-time = "2026-02-03T14:01:35.984Z" },
{ url = "https://files.pythonhosted.org/packages/ea/2b/ab41f10345ba2e49d5e299be8663be2b7db33e77ac1b85cd0af985ea6406/coverage-7.13.3-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:9db3a3285d91c0b70fab9f39f0a4aa37d375873677efe4e71e58d8321e8c5d39", size = 250886, upload-time = "2026-02-03T14:01:38.287Z" },
{ url = "https://files.pythonhosted.org/packages/72/2d/b3f6913ee5a1d5cdd04106f257e5fac5d048992ffc2d9995d07b0f17739f/coverage-7.13.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:06e49c5897cb12e3f7ecdc111d44e97c4f6d0557b81a7a0204ed70a8b038f86f", size = 253393, upload-time = "2026-02-03T14:01:40.118Z" },
{ url = "https://files.pythonhosted.org/packages/f0/f6/b1f48810ffc6accf49a35b9943636560768f0812330f7456aa87dc39aff5/coverage-7.13.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fb25061a66802df9fc13a9ba1967d25faa4dae0418db469264fd9860a921dde4", size = 254740, upload-time = "2026-02-03T14:01:42.413Z" },
{ url = "https://files.pythonhosted.org/packages/57/d0/e59c54f9be0b61808f6bc4c8c4346bd79f02dd6bbc3f476ef26124661f20/coverage-7.13.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:99fee45adbb1caeb914da16f70e557fb7ff6ddc9e4b14de665bd41af631367ef", size = 250905, upload-time = "2026-02-03T14:01:44.163Z" },
{ url = "https://files.pythonhosted.org/packages/d5/f7/5291bcdf498bafbee3796bb32ef6966e9915aebd4d0954123c8eae921c32/coverage-7.13.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:318002f1fd819bdc1651c619268aa5bc853c35fa5cc6d1e8c96bd9cd6c828b75", size = 252753, upload-time = "2026-02-03T14:01:45.974Z" },
{ url = "https://files.pythonhosted.org/packages/a0/a9/1dcafa918c281554dae6e10ece88c1add82db685be123e1b05c2056ff3fb/coverage-7.13.3-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:71295f2d1d170b9977dc386d46a7a1b7cbb30e5405492529b4c930113a33f895", size = 250716, upload-time = "2026-02-03T14:01:48.844Z" },
{ url = "https://files.pythonhosted.org/packages/44/bb/4ea4eabcce8c4f6235df6e059fbc5db49107b24c4bdffc44aee81aeca5a8/coverage-7.13.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:5b1ad2e0dc672625c44bc4fe34514602a9fd8b10d52ddc414dc585f74453516c", size = 250530, upload-time = "2026-02-03T14:01:50.793Z" },
{ url = "https://files.pythonhosted.org/packages/6d/31/4a6c9e6a71367e6f923b27b528448c37f4e959b7e4029330523014691007/coverage-7.13.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:b2beb64c145593a50d90db5c7178f55daeae129123b0d265bdb3cbec83e5194a", size = 252186, upload-time = "2026-02-03T14:01:52.607Z" },
{ url = "https://files.pythonhosted.org/packages/27/92/e1451ef6390a4f655dc42da35d9971212f7abbbcad0bdb7af4407897eb76/coverage-7.13.3-cp314-cp314-win32.whl", hash = "sha256:3d1aed4f4e837a832df2f3b4f68a690eede0de4560a2dbc214ea0bc55aabcdb4", size = 222253, upload-time = "2026-02-03T14:01:55.071Z" },
{ url = "https://files.pythonhosted.org/packages/8a/98/78885a861a88de020c32a2693487c37d15a9873372953f0c3c159d575a43/coverage-7.13.3-cp314-cp314-win_amd64.whl", hash = "sha256:9f9efbbaf79f935d5fbe3ad814825cbce4f6cdb3054384cb49f0c0f496125fa0", size = 223069, upload-time = "2026-02-03T14:01:56.95Z" },
{ url = "https://files.pythonhosted.org/packages/eb/fb/3784753a48da58a5337972abf7ca58b1fb0f1bda21bc7b4fae992fd28e47/coverage-7.13.3-cp314-cp314-win_arm64.whl", hash = "sha256:31b6e889c53d4e6687ca63706148049494aace140cffece1c4dc6acadb70a7b3", size = 221633, upload-time = "2026-02-03T14:01:58.758Z" },
{ url = "https://files.pythonhosted.org/packages/40/f9/75b732d9674d32cdbffe801ed5f770786dd1c97eecedef2125b0d25102dc/coverage-7.13.3-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:c5e9787cec750793a19a28df7edd85ac4e49d3fb91721afcdc3b86f6c08d9aa8", size = 220243, upload-time = "2026-02-03T14:02:01.109Z" },
{ url = "https://files.pythonhosted.org/packages/cf/7e/2868ec95de5a65703e6f0c87407ea822d1feb3619600fbc3c1c4fa986090/coverage-7.13.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:e5b86db331c682fd0e4be7098e6acee5e8a293f824d41487c667a93705d415ca", size = 220515, upload-time = "2026-02-03T14:02:02.862Z" },
{ url = "https://files.pythonhosted.org/packages/7d/eb/9f0d349652fced20bcaea0f67fc5777bd097c92369f267975732f3dc5f45/coverage-7.13.3-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:edc7754932682d52cf6e7a71806e529ecd5ce660e630e8bd1d37109a2e5f63ba", size = 261874, upload-time = "2026-02-03T14:02:04.727Z" },
{ url = "https://files.pythonhosted.org/packages/ee/a5/6619bc4a6c7b139b16818149a3e74ab2e21599ff9a7b6811b6afde99f8ec/coverage-7.13.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:d3a16d6398666510a6886f67f43d9537bfd0e13aca299688a19daa84f543122f", size = 264004, upload-time = "2026-02-03T14:02:06.634Z" },
{ url = "https://files.pythonhosted.org/packages/29/b7/90aa3fc645a50c6f07881fca4fd0ba21e3bfb6ce3a7078424ea3a35c74c9/coverage-7.13.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:303d38b19626c1981e1bb067a9928236d88eb0e4479b18a74812f05a82071508", size = 266408, upload-time = "2026-02-03T14:02:09.037Z" },
{ url = "https://files.pythonhosted.org/packages/62/55/08bb2a1e4dcbae384e638f0effef486ba5987b06700e481691891427d879/coverage-7.13.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:284e06eadfe15ddfee2f4ee56631f164ef897a7d7d5a15bca5f0bb88889fc5ba", size = 260977, upload-time = "2026-02-03T14:02:11.755Z" },
{ url = "https://files.pythonhosted.org/packages/9b/76/8bd4ae055a42d8fb5dd2230e5cf36ff2e05f85f2427e91b11a27fea52ed7/coverage-7.13.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:d401f0864a1d3198422816878e4e84ca89ec1c1bf166ecc0ae01380a39b888cd", size = 263868, upload-time = "2026-02-03T14:02:13.565Z" },
{ url = "https://files.pythonhosted.org/packages/e3/f9/ba000560f11e9e32ec03df5aa8477242c2d95b379c99ac9a7b2e7fbacb1a/coverage-7.13.3-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:3f379b02c18a64de78c4ccdddf1c81c2c5ae1956c72dacb9133d7dd7809794ab", size = 261474, upload-time = "2026-02-03T14:02:16.069Z" },
{ url = "https://files.pythonhosted.org/packages/90/4b/4de4de8f9ca7af4733bfcf4baa440121b7dbb3856daf8428ce91481ff63b/coverage-7.13.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:7a482f2da9086971efb12daca1d6547007ede3674ea06e16d7663414445c683e", size = 260317, upload-time = "2026-02-03T14:02:17.996Z" },
{ url = "https://files.pythonhosted.org/packages/05/71/5cd8436e2c21410ff70be81f738c0dddea91bcc3189b1517d26e0102ccb3/coverage-7.13.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:562136b0d401992118d9b49fbee5454e16f95f85b120a4226a04d816e33fe024", size = 262635, upload-time = "2026-02-03T14:02:20.405Z" },
{ url = "https://files.pythonhosted.org/packages/e7/f8/2834bb45bdd70b55a33ec354b8b5f6062fc90e5bb787e14385903a979503/coverage-7.13.3-cp314-cp314t-win32.whl", hash = "sha256:ca46e5c3be3b195098dd88711890b8011a9fa4feca942292bb84714ce5eab5d3", size = 223035, upload-time = "2026-02-03T14:02:22.323Z" },
{ url = "https://files.pythonhosted.org/packages/26/75/f8290f0073c00d9ae14056d2b84ab92dff21d5370e464cb6cb06f52bf580/coverage-7.13.3-cp314-cp314t-win_amd64.whl", hash = "sha256:06d316dbb3d9fd44cca05b2dbcfbef22948493d63a1f28e828d43e6cc505fed8", size = 224142, upload-time = "2026-02-03T14:02:24.143Z" },
{ url = "https://files.pythonhosted.org/packages/03/01/43ac78dfea8946c4a9161bbc034b5549115cb2b56781a4b574927f0d141a/coverage-7.13.3-cp314-cp314t-win_arm64.whl", hash = "sha256:299d66e9218193f9dc6e4880629ed7c4cd23486005166247c283fb98531656c3", size = 222166, upload-time = "2026-02-03T14:02:26.005Z" },
{ url = "https://files.pythonhosted.org/packages/7d/fb/70af542d2d938c778c9373ce253aa4116dbe7c0a5672f78b2b2ae0e1b94b/coverage-7.13.3-py3-none-any.whl", hash = "sha256:90a8af9dba6429b2573199622d72e0ebf024d6276f16abce394ad4d181bb0910", size = 211237, upload-time = "2026-02-03T14:02:27.986Z" },
]
[package.optional-dependencies]
toml = [
{ name = "tomli", marker = "python_full_version <= '3.11'" },
]
[[package]]
name = "cryptography"
version = "46.0.1"
@@ -1082,6 +1186,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/04/93/2fa34714b7a4ae72f2f8dad66ba17dd9a2c793220719e736dda28b7aec27/pytest_asyncio-1.2.0-py3-none-any.whl", hash = "sha256:8e17ae5e46d8e7efe51ab6494dd2010f4ca8dae51652aa3c8d55acf50bfb2e99", size = 15095, upload-time = "2025-09-12T07:33:52.639Z" },
]
[[package]]
name = "pytest-cov"
version = "7.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "coverage", extra = ["toml"] },
{ name = "pluggy" },
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5e/f7/c933acc76f5208b3b00089573cf6a2bc26dc80a8aece8f52bb7d6b1855ca/pytest_cov-7.0.0.tar.gz", hash = "sha256:33c97eda2e049a0c5298e91f519302a1334c26ac65c1a483d6206fd458361af1", size = 54328, upload-time = "2025-09-09T10:57:02.113Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/49/1377b49de7d0c1ce41292161ea0f721913fa8722c19fb9c1e3aa0367eecb/pytest_cov-7.0.0-py3-none-any.whl", hash = "sha256:3b8e9558b16cc1479da72058bdecf8073661c7f57f7d3c5f22a1c23507f2d861", size = 22424, upload-time = "2025-09-09T10:57:00.695Z" },
]
[[package]]
name = "python-dotenv"
version = "1.1.1"
@@ -1514,15 +1632,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6e/c2/61d3e0f47e2b74ef40a68b9e6ad5984f6241a942f7cd3bbfbdbd03861ea9/tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc", size = 14257, upload-time = "2024-11-27T22:38:35.385Z" },
]
[[package]]
name = "types-python-dateutil"
version = "2.9.0.20250822"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0c/0a/775f8551665992204c756be326f3575abba58c4a3a52eef9909ef4536428/types_python_dateutil-2.9.0.20250822.tar.gz", hash = "sha256:84c92c34bd8e68b117bff742bc00b692a1e8531262d4507b33afcc9f7716cd53", size = 16084, upload-time = "2025-08-22T03:02:00.613Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ab/d9/a29dfa84363e88b053bf85a8b7f212a04f0d7343a4d24933baa45c06e08b/types_python_dateutil-2.9.0.20250822-py3-none-any.whl", hash = "sha256:849d52b737e10a6dc6621d2bd7940ec7c65fcb69e6aa2882acf4e56b2b508ddc", size = 17892, upload-time = "2025-08-22T03:01:59.436Z" },
]
[[package]]
name = "types-pytz"
version = "2025.2.0.20250809"
@@ -1555,59 +1664,52 @@ wheels = [
[[package]]
name = "unraid-mcp"
version = "0.1.0"
version = "0.2.0"
source = { editable = "." }
dependencies = [
{ name = "fastapi" },
{ name = "fastmcp" },
{ name = "httpx" },
{ name = "mypy" },
{ name = "python-dotenv" },
{ name = "pytz" },
{ name = "rich" },
{ name = "ruff" },
{ name = "uvicorn" },
{ name = "websockets" },
]
[package.optional-dependencies]
[package.dev-dependencies]
dev = [
{ name = "black" },
{ name = "mypy" },
{ name = "pytest" },
{ name = "pytest-asyncio" },
{ name = "pytest-cov" },
{ name = "ruff" },
{ name = "types-python-dateutil" },
]
[package.dev-dependencies]
dev = [
{ name = "types-pytz" },
]
[package.metadata]
requires-dist = [
{ name = "black", marker = "extra == 'dev'", specifier = ">=25.1.0" },
{ name = "fastapi", specifier = ">=0.116.1" },
{ name = "fastmcp", specifier = ">=2.11.2" },
{ name = "httpx", specifier = ">=0.28.1" },
{ name = "mypy", specifier = ">=1.17.1" },
{ name = "mypy", marker = "extra == 'dev'", specifier = ">=1.17.1" },
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=8.4.1" },
{ name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=1.1.0" },
{ name = "python-dotenv", specifier = ">=1.1.1" },
{ name = "pytz", specifier = ">=2025.2" },
{ name = "rich", specifier = ">=14.1.0" },
{ name = "ruff", specifier = ">=0.12.8" },
{ name = "ruff", marker = "extra == 'dev'", specifier = ">=0.12.8" },
{ name = "types-python-dateutil", marker = "extra == 'dev'" },
{ name = "uvicorn", specifier = ">=0.35.0" },
{ name = "websockets", specifier = ">=13.1,<14.0" },
]
provides-extras = ["dev"]
[package.metadata.requires-dev]
dev = [{ name = "types-pytz", specifier = ">=2025.2.0.20250809" }]
dev = [
{ name = "black", specifier = ">=25.1.0" },
{ name = "mypy", specifier = ">=1.17.1" },
{ name = "pytest", specifier = ">=8.4.2" },
{ name = "pytest-asyncio", specifier = ">=1.2.0" },
{ name = "pytest-cov", specifier = ">=7.0.0" },
{ name = "ruff", specifier = ">=0.12.8" },
{ name = "types-pytz", specifier = ">=2025.2.0.20250809" },
]
[[package]]
name = "urllib3"