mirror of
https://github.com/jmagar/unraid-mcp.git
synced 2026-03-23 04:29:17 -07:00
refactor(tools)!: consolidate 15 individual tools into single unified unraid tool
BREAKING CHANGE: Replaces 15 separate MCP tools (unraid_info, unraid_array, unraid_storage, unraid_docker, unraid_vm, unraid_notifications, unraid_rclone, unraid_users, unraid_keys, unraid_health, unraid_settings, unraid_customization, unraid_plugins, unraid_oidc, unraid_live) with a single `unraid` tool using action (domain) + subaction (operation) routing.

New interface: unraid(action="system", subaction="overview") replaces unraid_info(action="overview"). All 15 domains and ~108 subactions preserved.

- Add unraid_mcp/tools/unraid.py (1891 lines, all domains consolidated)
- Remove 15 individual tool files
- Update tools/__init__.py to register single unified tool
- Update server.py for new tool registration pattern
- Update subscriptions/manager.py and resources.py for new tool names
- Update all 25 test files + integration/contract/safety/schema/property tests
- Update mcporter smoke-test script for new tool interface
- Bump version 0.6.0 → 1.0.0

Co-authored-by: Claude <noreply@anthropic.com>
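The action (domain) + subaction (operation) routing described above can be sketched as follows. This is a minimal illustration only: the handler names, registry layout, and error message are hypothetical, not the implementation in unraid_mcp/tools/unraid.py.

```python
# Sketch of action+subaction routing (hypothetical names; the real
# unified tool dispatches 15 domains and ~108 subactions).
from typing import Any, Awaitable, Callable

Handler = Callable[..., Awaitable[dict[str, Any]]]

async def _system_overview(**kwargs: Any) -> dict[str, Any]:
    # Stand-in for the real GraphQL-backed handler.
    return {"summary": {"hostname": "example"}}

# One registry per domain (action), keyed by operation (subaction).
ROUTES: dict[str, dict[str, Handler]] = {
    "system": {"overview": _system_overview},
}

async def unraid(action: str, subaction: str, **kwargs: Any) -> dict[str, Any]:
    """Route a call to the handler registered for (action, subaction)."""
    try:
        handler = ROUTES[action][subaction]
    except KeyError:
        raise ValueError(f"unknown action/subaction: {action}/{subaction}") from None
    return await handler(**kwargs)
```

Under this shape, `await unraid(action="system", subaction="overview")` takes the place of the old `unraid_info(action="overview")` call.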
@@ -1,7 +1,7 @@
 {
   "name": "unraid",
   "description": "Query and monitor Unraid servers via GraphQL API - array status, disk health, containers, VMs, system monitoring",
-  "version": "0.6.0",
+  "version": "1.0.0",
   "author": {
     "name": "jmagar",
     "email": "jmagar@users.noreply.github.com"
CLAUDE.md
@@ -83,9 +83,13 @@ docker compose down
 - **Data Processing**: Tools return both human-readable summaries and detailed raw data
 - **Health Monitoring**: Comprehensive health check tool for system monitoring
 - **Real-time Subscriptions**: WebSocket-based live data streaming
+- **Persistent Subscription Manager**: `unraid_live` actions use a shared `SubscriptionManager`
+  that maintains persistent WebSocket connections. Resources serve cached data via
+  `subscription_manager.get_resource_data(action)`. A "connecting" placeholder is returned
+  while the subscription starts — callers should retry in a moment.
 
-### Tool Categories (15 Tools, ~103 Actions)
-1. **`unraid_info`** (18 actions): overview, array, network, registration, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config
+### Tool Categories (15 Tools, ~108 Actions)
+1. **`unraid_info`** (19 actions): overview, array, network, registration, connect, variables, metrics, services, display, config, online, owner, settings, server, servers, flash, ups_devices, ups_device, ups_config
 2. **`unraid_array`** (13 actions): parity_start, parity_pause, parity_resume, parity_cancel, parity_status, parity_history, start_array, stop_array, add_disk, remove_disk, mount_disk, unmount_disk, clear_disk_stats
 3. **`unraid_storage`** (6 actions): shares, disks, disk_details, log_files, logs, flash_backup
 4. **`unraid_docker`** (7 actions): list, details, start, stop, restart, networks, network_details
@@ -102,7 +106,7 @@ docker compose down
 15. **`unraid_live`** (11 actions): cpu, memory, cpu_telemetry, array_state, parity_progress, ups_status, notifications_overview, notification_feed, log_tail, owner, server_status
 
 ### Destructive Actions (require `confirm=True`)
-- **array**: remove_disk, clear_disk_stats
+- **array**: stop_array, remove_disk, clear_disk_stats
 - **vm**: force_stop, reset
 - **notifications**: delete, delete_archived
 - **rclone**: delete_remote
@@ -191,6 +195,8 @@ When bumping the version, **always update both files** — they must stay in syn
 ### Credential Storage (`~/.unraid-mcp/.env`)
 All runtimes (plugin, direct, Docker) load credentials from `~/.unraid-mcp/.env`.
 - **Plugin/direct:** `unraid_health action=setup` writes this file automatically via elicitation,
+  **Safe to re-run**: if credentials exist and are working, it asks before overwriting.
+  If credentials exist but connection fails, it silently re-configures without prompting.
   or manual: `mkdir -p ~/.unraid-mcp && cp .env.example ~/.unraid-mcp/.env` then edit.
 - **Docker:** `docker-compose.yml` loads it via `env_file` before container start.
 - **No symlinks needed.** Version bumps do not affect this path.
@@ -10,7 +10,7 @@ build-backend = "hatchling.build"
 # ============================================================================
 [project]
 name = "unraid-mcp"
-version = "0.6.0"
+version = "1.0.0"
 description = "MCP Server for Unraid API - provides tools to interact with an Unraid server's GraphQL API"
 readme = "README.md"
 license = {file = "LICENSE"}
@@ -70,7 +70,7 @@ class DockerMutationResult(BaseModel):
     """Shape returned by docker start/stop/pause/unpause mutations."""
 
     success: bool
-    action: str
+    subaction: str
     container: Any = None
 
 
@@ -287,48 +287,42 @@ class NotificationCreateResult(BaseModel):
 
 @pytest.fixture
 def _docker_mock() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 @pytest.fixture
 def _info_mock() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.info.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 @pytest.fixture
 def _storage_mock() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.storage.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 @pytest.fixture
 def _notifications_mock() -> Generator[AsyncMock, None, None]:
-    with patch(
-        "unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock
-    ) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _docker_tool():
-    return make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 def _info_tool():
-    return make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 def _storage_tool():
-    return make_tool_fn("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 def _notifications_tool():
-    return make_tool_fn(
-        "unraid_mcp.tools.notifications",
-        "register_notifications_tool",
-        "unraid_notifications",
-    )
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 # ---------------------------------------------------------------------------
@@ -341,7 +335,7 @@ class TestDockerListContract:
 
     async def test_list_result_has_containers_key(self, _docker_mock: AsyncMock) -> None:
         _docker_mock.return_value = {"docker": {"containers": []}}
-        result = await _docker_tool()(action="list")
+        result = await _docker_tool()(action="docker", subaction="list")
         DockerListResult(**result)
 
     async def test_list_containers_conform_to_shape(self, _docker_mock: AsyncMock) -> None:
@@ -353,14 +347,14 @@ class TestDockerListContract:
                 ]
             }
         }
-        result = await _docker_tool()(action="list")
+        result = await _docker_tool()(action="docker", subaction="list")
         validated = DockerListResult(**result)
         for container in validated.containers:
             DockerContainer(**container)
 
     async def test_list_empty_containers_is_valid(self, _docker_mock: AsyncMock) -> None:
         _docker_mock.return_value = {"docker": {"containers": []}}
-        result = await _docker_tool()(action="list")
+        result = await _docker_tool()(action="docker", subaction="list")
         validated = DockerListResult(**result)
         assert validated.containers == []
 
@@ -369,7 +363,7 @@ class TestDockerListContract:
         _docker_mock.return_value = {
             "docker": {"containers": [{"id": "abc123", "names": ["plex"], "state": "running"}]}
         }
-        result = await _docker_tool()(action="list")
+        result = await _docker_tool()(action="docker", subaction="list")
         container_raw = result["containers"][0]
         DockerContainer(**container_raw)
 
@@ -378,7 +372,7 @@ class TestDockerListContract:
         _docker_mock.return_value = {
             "docker": {"containers": [{"id": "abc123", "state": "running"}]}
         }
-        result = await _docker_tool()(action="list")
+        result = await _docker_tool()(action="docker", subaction="list")
         with pytest.raises(ValidationError):
             DockerContainer(**result["containers"][0])
 
@@ -403,7 +397,7 @@ class TestDockerDetailsContract:
                 ]
             }
         }
-        result = await _docker_tool()(action="details", container_id=cid)
+        result = await _docker_tool()(action="docker", subaction="details", container_id=cid)
         DockerContainerDetails(**result)
 
     async def test_details_has_required_fields(self, _docker_mock: AsyncMock) -> None:
@@ -411,7 +405,7 @@ class TestDockerDetailsContract:
         _docker_mock.return_value = {
             "docker": {"containers": [{"id": cid, "names": ["sonarr"], "state": "exited"}]}
         }
-        result = await _docker_tool()(action="details", container_id=cid)
+        result = await _docker_tool()(action="docker", subaction="details", container_id=cid)
         assert "id" in result
         assert "names" in result
         assert "state" in result
@@ -424,7 +418,7 @@ class TestDockerNetworksContract:
         _docker_mock.return_value = {
             "docker": {"networks": [{"id": "net:1", "name": "bridge", "driver": "bridge"}]}
         }
-        result = await _docker_tool()(action="networks")
+        result = await _docker_tool()(action="docker", subaction="networks")
         DockerNetworkListResult(**result)
 
     async def test_network_entries_conform_to_shape(self, _docker_mock: AsyncMock) -> None:
@@ -436,13 +430,13 @@ class TestDockerNetworksContract:
                 ]
             }
         }
-        result = await _docker_tool()(action="networks")
+        result = await _docker_tool()(action="docker", subaction="networks")
         for net in result["networks"]:
             DockerNetwork(**net)
 
     async def test_empty_networks_is_valid(self, _docker_mock: AsyncMock) -> None:
         _docker_mock.return_value = {"docker": {"networks": []}}
-        result = await _docker_tool()(action="networks")
+        result = await _docker_tool()(action="docker", subaction="networks")
         validated = DockerNetworkListResult(**result)
         assert validated.networks == []
 
@@ -456,10 +450,10 @@ class TestDockerMutationContract:
             {"docker": {"containers": [{"id": cid, "names": ["plex"]}]}},
             {"docker": {"start": {"id": cid, "names": ["plex"], "state": "running"}}},
         ]
-        result = await _docker_tool()(action="start", container_id=cid)
+        result = await _docker_tool()(action="docker", subaction="start", container_id=cid)
         validated = DockerMutationResult(**result)
         assert validated.success is True
-        assert validated.action == "start"
+        assert validated.subaction == "start"
 
     async def test_stop_mutation_result_shape(self, _docker_mock: AsyncMock) -> None:
         cid = "d" * 64 + ":local"
@@ -467,10 +461,10 @@ class TestDockerMutationContract:
             {"docker": {"containers": [{"id": cid, "names": ["nginx"]}]}},
             {"docker": {"stop": {"id": cid, "names": ["nginx"], "state": "exited"}}},
         ]
-        result = await _docker_tool()(action="stop", container_id=cid)
+        result = await _docker_tool()(action="docker", subaction="stop", container_id=cid)
         validated = DockerMutationResult(**result)
         assert validated.success is True
-        assert validated.action == "stop"
+        assert validated.subaction == "stop"
 
 
 # ---------------------------------------------------------------------------
@@ -501,7 +495,7 @@ class TestInfoOverviewContract:
                 "memory": {"layout": []},
             }
         }
-        result = await _info_tool()(action="overview")
+        result = await _info_tool()(action="system", subaction="overview")
         validated = InfoOverviewResult(**result)
         assert isinstance(validated.summary, dict)
         assert isinstance(validated.details, dict)
@@ -521,7 +515,7 @@ class TestInfoOverviewContract:
                 "memory": {"layout": []},
             }
         }
-        result = await _info_tool()(action="overview")
+        result = await _info_tool()(action="system", subaction="overview")
         InfoOverviewSummary(**result["summary"])
         assert result["summary"]["hostname"] == "myserver"
 
@@ -538,7 +532,7 @@ class TestInfoOverviewContract:
             "memory": {"layout": []},
         }
         _info_mock.return_value = {"info": raw_info}
-        result = await _info_tool()(action="overview")
+        result = await _info_tool()(action="system", subaction="overview")
         assert result["details"] == raw_info
 
 
@@ -557,7 +551,7 @@ class TestInfoArrayContract:
                 "boot": None,
             }
         }
-        result = await _info_tool()(action="array")
+        result = await _info_tool()(action="system", subaction="array")
         validated = InfoArrayResult(**result)
         assert isinstance(validated.summary, dict)
         assert isinstance(validated.details, dict)
@@ -572,7 +566,7 @@ class TestInfoArrayContract:
                 "caches": [],
             }
         }
-        result = await _info_tool()(action="array")
+        result = await _info_tool()(action="system", subaction="array")
         ArraySummary(**result["summary"])
 
     async def test_array_health_overall_healthy(self, _info_mock: AsyncMock) -> None:
@@ -585,7 +579,7 @@ class TestInfoArrayContract:
                 "caches": [],
             }
         }
-        result = await _info_tool()(action="array")
+        result = await _info_tool()(action="system", subaction="array")
         assert result["summary"]["overall_health"] == "HEALTHY"
 
     async def test_array_health_critical_with_failed_disk(self, _info_mock: AsyncMock) -> None:
@@ -598,7 +592,7 @@ class TestInfoArrayContract:
                 "caches": [],
             }
         }
-        result = await _info_tool()(action="array")
+        result = await _info_tool()(action="system", subaction="array")
         assert result["summary"]["overall_health"] == "CRITICAL"
 
 
@@ -619,7 +613,7 @@ class TestInfoMetricsContract:
                 },
             }
         }
-        result = await _info_tool()(action="metrics")
+        result = await _info_tool()(action="system", subaction="metrics")
         validated = InfoMetricsResult(**result)
         assert validated.cpu is not None
         assert validated.memory is not None
@@ -628,7 +622,7 @@ class TestInfoMetricsContract:
         _info_mock.return_value = {
             "metrics": {"cpu": {"percentTotal": 75.3}, "memory": {"percentTotal": 60.0}}
         }
-        result = await _info_tool()(action="metrics")
+        result = await _info_tool()(action="system", subaction="metrics")
         cpu_pct = result["cpu"]["percentTotal"]
         assert 0.0 <= cpu_pct <= 100.0
 
@@ -643,14 +637,14 @@ class TestInfoServicesContract:
                 {"name": "docker", "online": True, "version": "24.0"},
             ]
         }
-        result = await _info_tool()(action="services")
+        result = await _info_tool()(action="system", subaction="services")
         validated = InfoServicesResult(**result)
         for svc in validated.services:
             ServiceEntry(**svc)
 
     async def test_services_empty_list_is_valid(self, _info_mock: AsyncMock) -> None:
         _info_mock.return_value = {"services": []}
-        result = await _info_tool()(action="services")
+        result = await _info_tool()(action="system", subaction="services")
         InfoServicesResult(**result)
         assert result["services"] == []
 
@@ -660,13 +654,13 @@ class TestInfoOnlineContract:
 
     async def test_online_true_shape(self, _info_mock: AsyncMock) -> None:
         _info_mock.return_value = {"online": True}
-        result = await _info_tool()(action="online")
+        result = await _info_tool()(action="system", subaction="online")
         validated = InfoOnlineResult(**result)
         assert validated.online is True
 
     async def test_online_false_shape(self, _info_mock: AsyncMock) -> None:
         _info_mock.return_value = {"online": False}
-        result = await _info_tool()(action="online")
+        result = await _info_tool()(action="system", subaction="online")
         validated = InfoOnlineResult(**result)
         assert validated.online is False
 
@@ -687,7 +681,7 @@ class TestInfoNetworkContract:
             ],
             "vars": {"port": 80, "portssl": 443, "localTld": "local", "useSsl": "no"},
         }
-        result = await _info_tool()(action="network")
+        result = await _info_tool()(action="system", subaction="network")
         validated = InfoNetworkResult(**result)
         assert isinstance(validated.accessUrls, list)
 
@@ -696,7 +690,7 @@ class TestInfoNetworkContract:
             "servers": [],
             "vars": {"port": 80, "portssl": 443, "localTld": "local", "useSsl": "no"},
         }
-        result = await _info_tool()(action="network")
+        result = await _info_tool()(action="system", subaction="network")
         validated = InfoNetworkResult(**result)
         assert validated.accessUrls == []
 
@@ -716,21 +710,21 @@ class TestStorageSharesContract:
             {"id": "share:2", "name": "appdata", "free": 200000, "used": 50000, "size": 250000},
         ]
         }
-        result = await _storage_tool()(action="shares")
+        result = await _storage_tool()(action="disk", subaction="shares")
         validated = StorageSharesResult(**result)
         for share in validated.shares:
             ShareEntry(**share)
 
     async def test_shares_empty_list_is_valid(self, _storage_mock: AsyncMock) -> None:
         _storage_mock.return_value = {"shares": []}
-        result = await _storage_tool()(action="shares")
+        result = await _storage_tool()(action="disk", subaction="shares")
         StorageSharesResult(**result)
         assert result["shares"] == []
 
     async def test_shares_missing_name_fails_contract(self, _storage_mock: AsyncMock) -> None:
         """A share without required 'name' must fail contract validation."""
         _storage_mock.return_value = {"shares": [{"id": "share:1", "free": 100}]}
-        result = await _storage_tool()(action="shares")
+        result = await _storage_tool()(action="disk", subaction="shares")
         with pytest.raises(ValidationError):
             ShareEntry(**result["shares"][0])
 
@@ -745,14 +739,14 @@ class TestStorageDisksContract:
             {"id": "disk:2", "device": "sdb", "name": "Seagate_8TB"},
         ]
         }
-        result = await _storage_tool()(action="disks")
+        result = await _storage_tool()(action="disk", subaction="disks")
         validated = StorageDisksResult(**result)
         for disk in validated.disks:
             DiskEntry(**disk)
 
     async def test_disks_empty_list_is_valid(self, _storage_mock: AsyncMock) -> None:
         _storage_mock.return_value = {"disks": []}
-        result = await _storage_tool()(action="disks")
+        result = await _storage_tool()(action="disk", subaction="disks")
         StorageDisksResult(**result)
         assert result["disks"] == []
 
@@ -771,7 +765,7 @@ class TestStorageDiskDetailsContract:
                 "temperature": 35,
             }
         }
-        result = await _storage_tool()(action="disk_details", disk_id="disk:1")
+        result = await _storage_tool()(action="disk", subaction="disk_details", disk_id="disk:1")
         validated = StorageDiskDetailsResult(**result)
         assert isinstance(validated.summary, dict)
         assert isinstance(validated.details, dict)
@@ -787,7 +781,7 @@ class TestStorageDiskDetailsContract:
                 "temperature": 40,
             }
         }
-        result = await _storage_tool()(action="disk_details", disk_id="disk:2")
+        result = await _storage_tool()(action="disk", subaction="disk_details", disk_id="disk:2")
         DiskDetailsSummary(**result["summary"])
 
     async def test_disk_details_temperature_formatted(self, _storage_mock: AsyncMock) -> None:
@@ -801,7 +795,7 @@ class TestStorageDiskDetailsContract:
                 "temperature": 38,
             }
         }
-        result = await _storage_tool()(action="disk_details", disk_id="disk:3")
+        result = await _storage_tool()(action="disk", subaction="disk_details", disk_id="disk:3")
         assert "°C" in result["summary"]["temperature"]
 
     async def test_disk_details_no_temperature_shows_na(self, _storage_mock: AsyncMock) -> None:
@@ -815,7 +809,7 @@ class TestStorageDiskDetailsContract:
                 "temperature": None,
             }
         }
-        result = await _storage_tool()(action="disk_details", disk_id="disk:4")
+        result = await _storage_tool()(action="disk", subaction="disk_details", disk_id="disk:4")
         assert result["summary"]["temperature"] == "N/A"
 
 
@@ -839,14 +833,14 @@ class TestStorageLogFilesContract:
                 },
             ]
         }
-        result = await _storage_tool()(action="log_files")
+        result = await _storage_tool()(action="disk", subaction="log_files")
         validated = StorageLogFilesResult(**result)
         for log_file in validated.log_files:
             LogFileEntry(**log_file)

     async def test_log_files_empty_list_is_valid(self, _storage_mock: AsyncMock) -> None:
         _storage_mock.return_value = {"logFiles": []}
-        result = await _storage_tool()(action="log_files")
+        result = await _storage_tool()(action="disk", subaction="log_files")
         StorageLogFilesResult(**result)
         assert result["log_files"] == []

@@ -868,7 +862,7 @@ class TestNotificationsOverviewContract:
                 }
             }
         }
-        result = await _notifications_tool()(action="overview")
+        result = await _notifications_tool()(action="notification", subaction="overview")
         validated = NotificationOverviewResult(**result)
         assert validated.unread is not None
         assert validated.archive is not None
@@ -882,7 +876,7 @@ class TestNotificationsOverviewContract:
                 }
             }
         }
-        result = await _notifications_tool()(action="overview")
+        result = await _notifications_tool()(action="notification", subaction="overview")
         NotificationCountBucket(**result["unread"])
         NotificationCountBucket(**result["archive"])

@@ -895,7 +889,7 @@ class TestNotificationsOverviewContract:
                 }
             }
         }
-        result = await _notifications_tool()(action="overview")
+        result = await _notifications_tool()(action="notification", subaction="overview")
         NotificationOverviewResult(**result)


@@ -920,14 +914,14 @@ class TestNotificationsListContract:
                 ]
             }
         }
-        result = await _notifications_tool()(action="list")
+        result = await _notifications_tool()(action="notification", subaction="list")
         validated = NotificationListResult(**result)
         for notif in validated.notifications:
             NotificationEntry(**notif)

     async def test_list_empty_notifications_valid(self, _notifications_mock: AsyncMock) -> None:
         _notifications_mock.return_value = {"notifications": {"list": []}}
-        result = await _notifications_tool()(action="list")
+        result = await _notifications_tool()(action="notification", subaction="list")
         NotificationListResult(**result)
         assert result["notifications"] == []

@@ -938,7 +932,7 @@ class TestNotificationsListContract:
         _notifications_mock.return_value = {
             "notifications": {"list": [{"title": "No ID here", "importance": "INFO"}]}
         }
-        result = await _notifications_tool()(action="list")
+        result = await _notifications_tool()(action="notification", subaction="list")
         with pytest.raises(ValidationError):
             NotificationEntry(**result["notifications"][0])

@@ -955,7 +949,8 @@ class TestNotificationsCreateContract:
             }
         }
         result = await _notifications_tool()(
-            action="create",
+            action="notification",
+            subaction="create",
             title="Test notification",
             subject="Test subject",
             description="This is a test",
@@ -970,7 +965,8 @@ class TestNotificationsCreateContract:
             "createNotification": {"id": "notif:42", "title": "Alert!", "importance": "ALERT"}
         }
         result = await _notifications_tool()(
-            action="create",
+            action="notification",
+            subaction="create",
             title="Alert!",
             subject="Critical issue",
             description="Something went wrong",
@@ -261,11 +261,11 @@ class TestGraphQLErrorHandling:


 class TestInfoToolRequests:
-    """Verify unraid_info tool constructs correct GraphQL queries."""
+    """Verify unraid system tool constructs correct GraphQL queries."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_overview_sends_correct_query(self) -> None:
@@ -281,7 +281,7 @@ class TestInfoToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="overview")
+        await tool(action="system", subaction="overview")
         body = _extract_request_body(route.calls.last.request)
         assert "GetSystemInfo" in body["query"]
         assert "info" in body["query"]
@@ -292,7 +292,7 @@ class TestInfoToolRequests:
             return_value=_graphql_response({"array": {"state": "STARTED", "capacity": {}}})
         )
         tool = self._get_tool()
-        await tool(action="array")
+        await tool(action="system", subaction="array")
         body = _extract_request_body(route.calls.last.request)
         assert "GetArrayStatus" in body["query"]

@@ -302,7 +302,7 @@ class TestInfoToolRequests:
             return_value=_graphql_response({"network": {"id": "n1", "accessUrls": []}})
         )
         tool = self._get_tool()
-        await tool(action="network")
+        await tool(action="system", subaction="network")
         body = _extract_request_body(route.calls.last.request)
         assert "GetNetworkInfo" in body["query"]

@@ -314,7 +314,7 @@ class TestInfoToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="metrics")
+        await tool(action="system", subaction="metrics")
         body = _extract_request_body(route.calls.last.request)
         assert "GetMetrics" in body["query"]

@@ -324,7 +324,7 @@ class TestInfoToolRequests:
             return_value=_graphql_response({"upsDeviceById": {"id": "ups1", "model": "APC"}})
         )
         tool = self._get_tool()
-        await tool(action="ups_device", device_id="ups1")
+        await tool(action="system", subaction="ups_device", device_id="ups1")
         body = _extract_request_body(route.calls.last.request)
         assert body["variables"] == {"id": "ups1"}
         assert "GetUpsDevice" in body["query"]
@@ -333,7 +333,7 @@ class TestInfoToolRequests:
     async def test_online_sends_correct_query(self) -> None:
         route = respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
         tool = self._get_tool()
-        await tool(action="online")
+        await tool(action="system", subaction="online")
         body = _extract_request_body(route.calls.last.request)
         assert "GetOnline" in body["query"]

@@ -343,7 +343,7 @@ class TestInfoToolRequests:
             return_value=_graphql_response({"servers": [{"id": "s1", "name": "tower"}]})
         )
         tool = self._get_tool()
-        await tool(action="servers")
+        await tool(action="system", subaction="servers")
         body = _extract_request_body(route.calls.last.request)
         assert "GetServers" in body["query"]

@@ -353,7 +353,7 @@ class TestInfoToolRequests:
             return_value=_graphql_response({"flash": {"id": "f1", "guid": "abc"}})
         )
         tool = self._get_tool()
-        await tool(action="flash")
+        await tool(action="system", subaction="flash")
         body = _extract_request_body(route.calls.last.request)
         assert "GetFlash" in body["query"]

@@ -364,11 +364,11 @@ class TestInfoToolRequests:


 class TestDockerToolRequests:
-    """Verify unraid_docker tool constructs correct requests."""
+    """Verify unraid docker tool constructs correct requests."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_list_sends_correct_query(self) -> None:
@@ -378,7 +378,7 @@ class TestDockerToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="list")
+        await tool(action="docker", subaction="list")
         body = _extract_request_body(route.calls.last.request)
         assert "ListDockerContainers" in body["query"]

@@ -400,7 +400,7 @@ class TestDockerToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="start", container_id=container_id)
+        await tool(action="docker", subaction="start", container_id=container_id)
         body = _extract_request_body(route.calls.last.request)
         assert "StartContainer" in body["query"]
         assert body["variables"] == {"id": container_id}
@@ -423,7 +423,7 @@ class TestDockerToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="stop", container_id=container_id)
+        await tool(action="docker", subaction="stop", container_id=container_id)
         body = _extract_request_body(route.calls.last.request)
         assert "StopContainer" in body["query"]
         assert body["variables"] == {"id": container_id}
@@ -440,7 +440,7 @@ class TestDockerToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="networks")
+        await tool(action="docker", subaction="networks")
         body = _extract_request_body(route.calls.last.request)
         assert "GetDockerNetworks" in body["query"]

@@ -484,9 +484,9 @@ class TestDockerToolRequests:

         respx.post(API_URL).mock(side_effect=side_effect)
         tool = self._get_tool()
-        result = await tool(action="restart", container_id=container_id)
+        result = await tool(action="docker", subaction="restart", container_id=container_id)
         assert result["success"] is True
-        assert result["action"] == "restart"
+        assert result["subaction"] == "restart"
         assert call_count == 2

     @respx.mock
@@ -499,7 +499,8 @@ class TestDockerToolRequests:
             nonlocal call_count
             body = json.loads(request.content.decode())
             call_count += 1
-            if "ResolveContainerID" in body["query"]:
+            if "skipCache" in body["query"]:
+                # Resolution query: docker { containers(skipCache: true) { id names } }
                 return _graphql_response(
                     {"docker": {"containers": [{"id": resolved_id, "names": ["plex"]}]}}
                 )
@@ -520,7 +521,7 @@ class TestDockerToolRequests:

         respx.post(API_URL).mock(side_effect=side_effect)
         tool = self._get_tool()
-        result = await tool(action="start", container_id="plex")
+        result = await tool(action="docker", subaction="start", container_id="plex")
         assert call_count == 2  # resolve + start
         assert result["success"] is True

@@ -531,11 +532,11 @@ class TestDockerToolRequests:


 class TestVMToolRequests:
-    """Verify unraid_vm tool constructs correct requests."""
+    """Verify unraid vm tool constructs correct requests."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_list_sends_correct_query(self) -> None:
@@ -549,7 +550,7 @@ class TestVMToolRequests:
             )
         )
         tool = self._get_tool()
-        result = await tool(action="list")
+        result = await tool(action="vm", subaction="list")
         body = _extract_request_body(route.calls.last.request)
         assert "ListVMs" in body["query"]
         assert "vms" in result
@@ -558,7 +559,7 @@ class TestVMToolRequests:
     async def test_start_sends_mutation_with_id(self) -> None:
         route = respx.post(API_URL).mock(return_value=_graphql_response({"vm": {"start": True}}))
         tool = self._get_tool()
-        result = await tool(action="start", vm_id="vm-123")
+        result = await tool(action="vm", subaction="start", vm_id="vm-123")
         body = _extract_request_body(route.calls.last.request)
         assert "StartVM" in body["query"]
         assert body["variables"] == {"id": "vm-123"}
@@ -568,7 +569,7 @@ class TestVMToolRequests:
     async def test_stop_sends_mutation_with_id(self) -> None:
         route = respx.post(API_URL).mock(return_value=_graphql_response({"vm": {"stop": True}}))
         tool = self._get_tool()
-        await tool(action="stop", vm_id="vm-456")
+        await tool(action="vm", subaction="stop", vm_id="vm-456")
         body = _extract_request_body(route.calls.last.request)
         assert "StopVM" in body["query"]
         assert body["variables"] == {"id": "vm-456"}
@@ -577,7 +578,7 @@ class TestVMToolRequests:
     async def test_force_stop_requires_confirm(self) -> None:
         tool = self._get_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool(action="force_stop", vm_id="vm-789")
+            await tool(action="vm", subaction="force_stop", vm_id="vm-789")

     @respx.mock
     async def test_force_stop_sends_mutation_when_confirmed(self) -> None:
@@ -585,7 +586,7 @@ class TestVMToolRequests:
             return_value=_graphql_response({"vm": {"forceStop": True}})
         )
         tool = self._get_tool()
-        result = await tool(action="force_stop", vm_id="vm-789", confirm=True)
+        result = await tool(action="vm", subaction="force_stop", vm_id="vm-789", confirm=True)
         body = _extract_request_body(route.calls.last.request)
         assert "ForceStopVM" in body["query"]
         assert result["success"] is True
@@ -594,7 +595,7 @@ class TestVMToolRequests:
     async def test_reset_requires_confirm(self) -> None:
         tool = self._get_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool(action="reset", vm_id="vm-abc")
+            await tool(action="vm", subaction="reset", vm_id="vm-abc")

     @respx.mock
     async def test_details_finds_vm_by_name(self) -> None:
@@ -611,7 +612,7 @@ class TestVMToolRequests:
             )
         )
         tool = self._get_tool()
-        result = await tool(action="details", vm_id="ubuntu")
+        result = await tool(action="vm", subaction="details", vm_id="ubuntu")
         assert result["name"] == "ubuntu"


@@ -621,11 +622,11 @@ class TestVMToolRequests:


 class TestArrayToolRequests:
-    """Verify unraid_array tool constructs correct requests."""
+    """Verify unraid array tool constructs correct requests."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_parity_status_sends_correct_query(self) -> None:
@@ -643,7 +644,7 @@ class TestArrayToolRequests:
             )
         )
         tool = self._get_tool()
-        result = await tool(action="parity_status")
+        result = await tool(action="array", subaction="parity_status")
         body = _extract_request_body(route.calls.last.request)
         assert "GetParityStatus" in body["query"]
         assert result["success"] is True
@@ -654,7 +655,7 @@ class TestArrayToolRequests:
             return_value=_graphql_response({"parityCheck": {"start": True}})
         )
         tool = self._get_tool()
-        result = await tool(action="parity_start", correct=False)
+        result = await tool(action="array", subaction="parity_start", correct=False)
         body = _extract_request_body(route.calls.last.request)
         assert "StartParityCheck" in body["query"]
         assert body["variables"] == {"correct": False}
@@ -666,7 +667,7 @@ class TestArrayToolRequests:
             return_value=_graphql_response({"parityCheck": {"start": True}})
         )
         tool = self._get_tool()
-        await tool(action="parity_start", correct=True)
+        await tool(action="array", subaction="parity_start", correct=True)
         body = _extract_request_body(route.calls.last.request)
         assert body["variables"] == {"correct": True}

@@ -676,7 +677,7 @@ class TestArrayToolRequests:
             return_value=_graphql_response({"parityCheck": {"pause": True}})
         )
         tool = self._get_tool()
-        await tool(action="parity_pause")
+        await tool(action="array", subaction="parity_pause")
         body = _extract_request_body(route.calls.last.request)
         assert "PauseParityCheck" in body["query"]

@@ -686,7 +687,7 @@ class TestArrayToolRequests:
             return_value=_graphql_response({"parityCheck": {"cancel": True}})
         )
         tool = self._get_tool()
-        await tool(action="parity_cancel")
+        await tool(action="array", subaction="parity_cancel")
         body = _extract_request_body(route.calls.last.request)
         assert "CancelParityCheck" in body["query"]

@@ -697,11 +698,11 @@ class TestArrayToolRequests:


 class TestStorageToolRequests:
-    """Verify unraid_storage tool constructs correct requests."""
+    """Verify unraid disk tool constructs correct requests."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage")
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_shares_sends_correct_query(self) -> None:
@@ -709,7 +710,7 @@ class TestStorageToolRequests:
             return_value=_graphql_response({"shares": [{"id": "s1", "name": "appdata"}]})
         )
         tool = self._get_tool()
-        result = await tool(action="shares")
+        result = await tool(action="disk", subaction="shares")
         body = _extract_request_body(route.calls.last.request)
         assert "GetSharesInfo" in body["query"]
         assert "shares" in result
@@ -722,7 +723,7 @@ class TestStorageToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="disks")
+        await tool(action="disk", subaction="disks")
         body = _extract_request_body(route.calls.last.request)
         assert "ListPhysicalDisks" in body["query"]

@@ -743,7 +744,7 @@ class TestStorageToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="disk_details", disk_id="d1")
+        await tool(action="disk", subaction="disk_details", disk_id="d1")
         body = _extract_request_body(route.calls.last.request)
         assert "GetDiskDetails" in body["query"]
         assert body["variables"] == {"id": "d1"}
@@ -756,7 +757,7 @@ class TestStorageToolRequests:
             )
         )
         tool = self._get_tool()
-        result = await tool(action="log_files")
+        result = await tool(action="disk", subaction="log_files")
         body = _extract_request_body(route.calls.last.request)
         assert "ListLogFiles" in body["query"]
         assert "log_files" in result
@@ -776,7 +777,7 @@ class TestStorageToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="logs", log_path="/var/log/syslog", tail_lines=50)
+        await tool(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=50)
         body = _extract_request_body(route.calls.last.request)
         assert "GetLogContent" in body["query"]
         assert body["variables"]["path"] == "/var/log/syslog"
@@ -786,7 +787,7 @@ class TestStorageToolRequests:
     async def test_logs_rejects_path_traversal(self) -> None:
         tool = self._get_tool()
         with pytest.raises(ToolError, match="log_path must start with"):
-            await tool(action="logs", log_path="/etc/shadow")
+            await tool(action="disk", subaction="logs", log_path="/etc/shadow")


 # ===========================================================================
@@ -795,15 +796,11 @@ class TestStorageToolRequests:


 class TestNotificationsToolRequests:
-    """Verify unraid_notifications tool constructs correct requests."""
+    """Verify unraid notification tool constructs correct requests."""

     @staticmethod
     def _get_tool():
-        return make_tool_fn(
-            "unraid_mcp.tools.notifications",
-            "register_notifications_tool",
-            "unraid_notifications",
-        )
+        return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")

     @respx.mock
     async def test_overview_sends_correct_query(self) -> None:
@@ -819,7 +816,7 @@ class TestNotificationsToolRequests:
             )
         )
         tool = self._get_tool()
-        await tool(action="overview")
+        await tool(action="notification", subaction="overview")
         body = _extract_request_body(route.calls.last.request)
         assert "GetNotificationsOverview" in body["query"]

@@ -829,7 +826,14 @@ class TestNotificationsToolRequests:
             return_value=_graphql_response({"notifications": {"list": []}})
         )
         tool = self._get_tool()
-        await tool(action="list", list_type="ARCHIVE", importance="WARNING", offset=5, limit=10)
+        await tool(
+            action="notification",
+            subaction="list",
+            list_type="ARCHIVE",
+            importance="WARNING",
+            offset=5,
+            limit=10,
+        )
         body = _extract_request_body(route.calls.last.request)
         assert "ListNotifications" in body["query"]
         filt = body["variables"]["filter"]
@@ -853,7 +857,8 @@ class TestNotificationsToolRequests:
         )
         tool = self._get_tool()
         await tool(
-            action="create",
+            action="notification",
+            subaction="create",
             title="Test",
             subject="Sub",
             description="Desc",
@@ -872,7 +877,7 @@ class TestNotificationsToolRequests:
             return_value=_graphql_response({"archiveNotification": {"id": "notif-1"}})
         )
         tool = self._get_tool()
-        await tool(action="archive", notification_id="notif-1")
+        await tool(action="notification", subaction="archive", notification_id="notif-1")
         body = _extract_request_body(route.calls.last.request)
         assert "ArchiveNotification" in body["query"]
         assert body["variables"] == {"id": "notif-1"}
@@ -881,7 +886,12 @@ class TestNotificationsToolRequests:
     async def test_delete_requires_confirm(self) -> None:
         tool = self._get_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool(action="delete", notification_id="n1", notification_type="UNREAD")
+            await tool(
+                action="notification",
+                subaction="delete",
+                notification_id="n1",
+                notification_type="UNREAD",
+            )

     @respx.mock
     async def test_delete_sends_id_and_type(self) -> None:
@@ -890,7 +900,8 @@ class TestNotificationsToolRequests:
         )
         tool = self._get_tool()
         await tool(
-            action="delete",
+            action="notification",
+            subaction="delete",
             notification_id="n1",
             notification_type="unread",
             confirm=True,
@@ -906,7 +917,7 @@ class TestNotificationsToolRequests:
             return_value=_graphql_response({"archiveAll": {"archive": {"total": 1}}})
|
return_value=_graphql_response({"archiveAll": {"archive": {"total": 1}}})
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
await tool(action="archive_all", importance="warning")
|
await tool(action="notification", subaction="archive_all", importance="warning")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "ArchiveAllNotifications" in body["query"]
|
assert "ArchiveAllNotifications" in body["query"]
|
||||||
assert body["variables"]["importance"] == "WARNING"
|
assert body["variables"]["importance"] == "WARNING"
|
||||||
@@ -918,11 +929,11 @@ class TestNotificationsToolRequests:
|
|||||||
|
|
||||||
|
|
||||||
class TestRCloneToolRequests:
|
class TestRCloneToolRequests:
|
||||||
"""Verify unraid_rclone tool constructs correct requests."""
|
"""Verify unraid rclone tool constructs correct requests."""
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _get_tool():
|
def _get_tool():
|
||||||
return make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")
|
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_list_remotes_sends_correct_query(self) -> None:
|
async def test_list_remotes_sends_correct_query(self) -> None:
|
||||||
@@ -932,7 +943,7 @@ class TestRCloneToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="list_remotes")
|
result = await tool(action="rclone", subaction="list_remotes")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "ListRCloneRemotes" in body["query"]
|
assert "ListRCloneRemotes" in body["query"]
|
||||||
assert "remotes" in result
|
assert "remotes" in result
|
||||||
@@ -953,7 +964,7 @@ class TestRCloneToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
await tool(action="config_form", provider_type="s3")
|
await tool(action="rclone", subaction="config_form", provider_type="s3")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "GetRCloneConfigForm" in body["query"]
|
assert "GetRCloneConfigForm" in body["query"]
|
||||||
assert body["variables"]["formOptions"]["providerType"] == "s3"
|
assert body["variables"]["formOptions"]["providerType"] == "s3"
|
||||||
@@ -975,7 +986,8 @@ class TestRCloneToolRequests:
|
|||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
await tool(
|
await tool(
|
||||||
action="create_remote",
|
action="rclone",
|
||||||
|
subaction="create_remote",
|
||||||
name="my-s3",
|
name="my-s3",
|
||||||
provider_type="s3",
|
provider_type="s3",
|
||||||
config_data={"bucket": "my-bucket"},
|
config_data={"bucket": "my-bucket"},
|
||||||
@@ -991,7 +1003,7 @@ class TestRCloneToolRequests:
|
|||||||
async def test_delete_remote_requires_confirm(self) -> None:
|
async def test_delete_remote_requires_confirm(self) -> None:
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
with pytest.raises(ToolError, match="not confirmed"):
|
with pytest.raises(ToolError, match="not confirmed"):
|
||||||
await tool(action="delete_remote", name="old-remote")
|
await tool(action="rclone", subaction="delete_remote", name="old-remote")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_delete_remote_sends_name_when_confirmed(self) -> None:
|
async def test_delete_remote_sends_name_when_confirmed(self) -> None:
|
||||||
@@ -999,7 +1011,9 @@ class TestRCloneToolRequests:
|
|||||||
return_value=_graphql_response({"rclone": {"deleteRCloneRemote": True}})
|
return_value=_graphql_response({"rclone": {"deleteRCloneRemote": True}})
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="delete_remote", name="old-remote", confirm=True)
|
result = await tool(
|
||||||
|
action="rclone", subaction="delete_remote", name="old-remote", confirm=True
|
||||||
|
)
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "DeleteRCloneRemote" in body["query"]
|
assert "DeleteRCloneRemote" in body["query"]
|
||||||
assert body["variables"]["input"]["name"] == "old-remote"
|
assert body["variables"]["input"]["name"] == "old-remote"
|
||||||
@@ -1012,11 +1026,11 @@ class TestRCloneToolRequests:
|
|||||||
|
|
||||||
|
|
||||||
class TestUsersToolRequests:
|
class TestUsersToolRequests:
|
||||||
"""Verify unraid_users tool constructs correct requests."""
|
"""Verify unraid user tool constructs correct requests."""
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _get_tool():
|
def _get_tool():
|
||||||
return make_tool_fn("unraid_mcp.tools.users", "register_users_tool", "unraid_users")
|
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_me_sends_correct_query(self) -> None:
|
async def test_me_sends_correct_query(self) -> None:
|
||||||
@@ -1033,7 +1047,7 @@ class TestUsersToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="me")
|
result = await tool(action="user", subaction="me")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "GetMe" in body["query"]
|
assert "GetMe" in body["query"]
|
||||||
assert result["name"] == "admin"
|
assert result["name"] == "admin"
|
||||||
@@ -1045,11 +1059,11 @@ class TestUsersToolRequests:
|
|||||||
|
|
||||||
|
|
||||||
class TestKeysToolRequests:
|
class TestKeysToolRequests:
|
||||||
"""Verify unraid_keys tool constructs correct requests."""
|
"""Verify unraid key tool constructs correct requests."""
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _get_tool():
|
def _get_tool():
|
||||||
return make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_list_sends_correct_query(self) -> None:
|
async def test_list_sends_correct_query(self) -> None:
|
||||||
@@ -1057,7 +1071,7 @@ class TestKeysToolRequests:
|
|||||||
return_value=_graphql_response({"apiKeys": [{"id": "k1", "name": "my-key"}]})
|
return_value=_graphql_response({"apiKeys": [{"id": "k1", "name": "my-key"}]})
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="list")
|
result = await tool(action="key", subaction="list")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "ListApiKeys" in body["query"]
|
assert "ListApiKeys" in body["query"]
|
||||||
assert "keys" in result
|
assert "keys" in result
|
||||||
@@ -1070,7 +1084,7 @@ class TestKeysToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
await tool(action="get", key_id="k1")
|
await tool(action="key", subaction="get", key_id="k1")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "GetApiKey" in body["query"]
|
assert "GetApiKey" in body["query"]
|
||||||
assert body["variables"] == {"id": "k1"}
|
assert body["variables"] == {"id": "k1"}
|
||||||
@@ -1092,7 +1106,7 @@ class TestKeysToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="create", name="new-key", roles=["read"])
|
result = await tool(action="key", subaction="create", name="new-key", roles=["read"])
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "CreateApiKey" in body["query"]
|
assert "CreateApiKey" in body["query"]
|
||||||
inp = body["variables"]["input"]
|
inp = body["variables"]["input"]
|
||||||
@@ -1108,7 +1122,7 @@ class TestKeysToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
await tool(action="update", key_id="k1", name="renamed")
|
await tool(action="key", subaction="update", key_id="k1", name="renamed")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "UpdateApiKey" in body["query"]
|
assert "UpdateApiKey" in body["query"]
|
||||||
inp = body["variables"]["input"]
|
inp = body["variables"]["input"]
|
||||||
@@ -1119,7 +1133,7 @@ class TestKeysToolRequests:
|
|||||||
async def test_delete_requires_confirm(self) -> None:
|
async def test_delete_requires_confirm(self) -> None:
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
with pytest.raises(ToolError, match="not confirmed"):
|
with pytest.raises(ToolError, match="not confirmed"):
|
||||||
await tool(action="delete", key_id="k1")
|
await tool(action="key", subaction="delete", key_id="k1")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_delete_sends_ids_when_confirmed(self) -> None:
|
async def test_delete_sends_ids_when_confirmed(self) -> None:
|
||||||
@@ -1127,7 +1141,7 @@ class TestKeysToolRequests:
|
|||||||
return_value=_graphql_response({"apiKey": {"delete": True}})
|
return_value=_graphql_response({"apiKey": {"delete": True}})
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="delete", key_id="k1", confirm=True)
|
result = await tool(action="key", subaction="delete", key_id="k1", confirm=True)
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "DeleteApiKey" in body["query"]
|
assert "DeleteApiKey" in body["query"]
|
||||||
assert body["variables"]["input"]["ids"] == ["k1"]
|
assert body["variables"]["input"]["ids"] == ["k1"]
|
||||||
@@ -1140,17 +1154,17 @@ class TestKeysToolRequests:
|
|||||||
|
|
||||||
|
|
||||||
class TestHealthToolRequests:
|
class TestHealthToolRequests:
|
||||||
"""Verify unraid_health tool constructs correct requests."""
|
"""Verify unraid health tool constructs correct requests."""
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _get_tool():
|
def _get_tool():
|
||||||
return make_tool_fn("unraid_mcp.tools.health", "register_health_tool", "unraid_health")
|
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_test_connection_sends_online_query(self) -> None:
|
async def test_test_connection_sends_online_query(self) -> None:
|
||||||
route = respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
|
route = respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="test_connection")
|
result = await tool(action="health", subaction="test_connection")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "online" in body["query"]
|
assert "online" in body["query"]
|
||||||
assert result["status"] == "connected"
|
assert result["status"] == "connected"
|
||||||
@@ -1178,7 +1192,7 @@ class TestHealthToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="check")
|
result = await tool(action="health", subaction="check")
|
||||||
body = _extract_request_body(route.calls.last.request)
|
body = _extract_request_body(route.calls.last.request)
|
||||||
assert "ComprehensiveHealthCheck" in body["query"]
|
assert "ComprehensiveHealthCheck" in body["query"]
|
||||||
assert result["status"] == "healthy"
|
assert result["status"] == "healthy"
|
||||||
@@ -1188,7 +1202,7 @@ class TestHealthToolRequests:
|
|||||||
async def test_test_connection_measures_latency(self) -> None:
|
async def test_test_connection_measures_latency(self) -> None:
|
||||||
respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
|
respx.post(API_URL).mock(return_value=_graphql_response({"online": True}))
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="test_connection")
|
result = await tool(action="health", subaction="test_connection")
|
||||||
assert "latency_ms" in result
|
assert "latency_ms" in result
|
||||||
assert isinstance(result["latency_ms"], float)
|
assert isinstance(result["latency_ms"], float)
|
||||||
|
|
||||||
@@ -1212,7 +1226,7 @@ class TestHealthToolRequests:
|
|||||||
)
|
)
|
||||||
)
|
)
|
||||||
tool = self._get_tool()
|
tool = self._get_tool()
|
||||||
result = await tool(action="check")
|
result = await tool(action="health", subaction="check")
|
||||||
assert result["status"] == "warning"
|
assert result["status"] == "warning"
|
||||||
assert any("alert" in issue for issue in result.get("issues", []))
|
assert any("alert" in issue for issue in result.get("issues", []))
|
||||||
|
|
||||||
@@ -1249,17 +1263,17 @@ class TestCrossCuttingConcerns:
|
|||||||
async def test_tool_error_from_http_layer_propagates(self) -> None:
|
async def test_tool_error_from_http_layer_propagates(self) -> None:
|
||||||
"""When an HTTP error occurs, the ToolError bubbles up through the tool."""
|
"""When an HTTP error occurs, the ToolError bubbles up through the tool."""
|
||||||
respx.post(API_URL).mock(return_value=httpx.Response(500, text="Server Error"))
|
respx.post(API_URL).mock(return_value=httpx.Response(500, text="Server Error"))
|
||||||
tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
|
tool = make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
with pytest.raises(ToolError, match="Unraid API returned HTTP 500"):
|
with pytest.raises(ToolError, match="Unraid API returned HTTP 500"):
|
||||||
await tool(action="online")
|
await tool(action="system", subaction="online")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_network_error_propagates_through_tool(self) -> None:
|
async def test_network_error_propagates_through_tool(self) -> None:
|
||||||
"""When a network error occurs, the ToolError bubbles up through the tool."""
|
"""When a network error occurs, the ToolError bubbles up through the tool."""
|
||||||
respx.post(API_URL).mock(side_effect=httpx.ConnectError("Connection refused"))
|
respx.post(API_URL).mock(side_effect=httpx.ConnectError("Connection refused"))
|
||||||
tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
|
tool = make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
with pytest.raises(ToolError, match="Network error connecting to Unraid API"):
|
with pytest.raises(ToolError, match="Network error connecting to Unraid API"):
|
||||||
await tool(action="online")
|
await tool(action="system", subaction="online")
|
||||||
|
|
||||||
@respx.mock
|
@respx.mock
|
||||||
async def test_graphql_error_propagates_through_tool(self) -> None:
|
async def test_graphql_error_propagates_through_tool(self) -> None:
|
||||||
@@ -1267,6 +1281,6 @@ class TestCrossCuttingConcerns:
|
|||||||
respx.post(API_URL).mock(
|
respx.post(API_URL).mock(
|
||||||
return_value=_graphql_response(errors=[{"message": "Permission denied"}])
|
return_value=_graphql_response(errors=[{"message": "Permission denied"}])
|
||||||
)
|
)
|
||||||
tool = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
|
tool = make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
with pytest.raises(ToolError, match="Permission denied"):
|
with pytest.raises(ToolError, match="Permission denied"):
|
||||||
await tool(action="online")
|
await tool(action="system", subaction="online")
|
||||||
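The pattern these tests converge on is a single entry point keyed on an (action, subaction) pair. A minimal sketch of that routing idea follows; it is illustrative only and does not reproduce the project's actual `unraid_mcp.tools.unraid` implementation — the registry, `handler` decorator, and example handler are all hypothetical.

```python
# Hypothetical sketch of action + subaction routing (not the real tool code).
from typing import Any, Callable

HANDLERS: dict[tuple[str, str], Callable[..., Any]] = {}


def handler(action: str, subaction: str):
    """Register a handler for one (domain, operation) pair."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        HANDLERS[(action, subaction)] = fn
        return fn
    return wrap


@handler("system", "online")
def system_online(**kwargs: Any) -> dict[str, Any]:
    # Stand-in for a GraphQL call against the Unraid API.
    return {"online": True}


def unraid(action: str, subaction: str, **kwargs: Any) -> dict[str, Any]:
    """Single tool entry point: dispatch to the registered handler."""
    try:
        fn = HANDLERS[(action, subaction)]
    except KeyError:
        raise ValueError(f"unknown action/subaction: {action}/{subaction}")
    return fn(**kwargs)
```

With this shape, `unraid(action="system", subaction="online")` replaces the old per-domain `unraid_info(action="online")` call, and unknown pairs fail fast at the dispatcher.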
@@ -125,6 +125,24 @@ class TestSubscriptionManagerInit:
         cfg = mgr.subscription_configs["logFileSubscription"]
         assert cfg.get("auto_start") is False

+    def test_subscription_configs_contain_all_snapshot_actions(self) -> None:
+        from unraid_mcp.subscriptions.queries import SNAPSHOT_ACTIONS
+
+        mgr = SubscriptionManager()
+        for action in SNAPSHOT_ACTIONS:
+            assert action in mgr.subscription_configs, (
+                f"'{action}' missing from subscription_configs"
+            )
+
+    def test_snapshot_actions_all_auto_start(self) -> None:
+        from unraid_mcp.subscriptions.queries import SNAPSHOT_ACTIONS
+
+        mgr = SubscriptionManager()
+        for action in SNAPSHOT_ACTIONS:
+            assert mgr.subscription_configs[action].get("auto_start") is True, (
+                f"'{action}' missing auto_start=True"
+            )
+

 # ---------------------------------------------------------------------------
 # Connection Lifecycle
@@ -2,12 +2,12 @@
 # =============================================================================
 # test-tools.sh — Integration smoke-test for unraid-mcp MCP server tools
 #
-# Exercises every non-destructive action across all 10 tools using mcporter.
-# The server is launched ad-hoc via mcporter's --stdio flag so no persistent
-# process or registered server entry is required.
+# Exercises every non-destructive action using the consolidated `unraid` tool
+# (action + subaction pattern). The server is launched ad-hoc via mcporter's
+# --stdio flag so no persistent process or registered server entry is required.
 #
 # Usage:
-#   ./scripts/test-tools.sh [--timeout-ms N] [--parallel] [--verbose]
+#   ./tests/mcporter/test-tools.sh [--timeout-ms N] [--parallel] [--verbose]
 #
 # Options:
 #   --timeout-ms N   Per-call timeout in milliseconds (default: 25000)
@@ -146,9 +146,8 @@ check_prerequisites() {

 # ---------------------------------------------------------------------------
 # Server startup smoke-test
-# Launches the stdio server and calls unraid_health action=check.
-# Returns 0 if the server responds (even with an API error — that still
-# means the Python process started cleanly), non-zero on import failure.
+# Launches the stdio server and calls unraid action=health subaction=check.
+# Returns 0 if the server responds, non-zero on import failure.
 # ---------------------------------------------------------------------------
 smoke_test_server() {
   log_info "Smoke-testing server startup..."
@@ -159,14 +158,13 @@ smoke_test_server() {
     --stdio "uv run unraid-mcp-server" \
     --cwd "${PROJECT_DIR}" \
     --name "unraid-smoke" \
-    --tool unraid_health \
-    --args '{"action":"check"}' \
+    --tool unraid \
+    --args '{"action":"health","subaction":"check"}' \
     --timeout 30000 \
     --output json \
     2>&1
   )" || true

-  # If mcporter returns the offline error the server failed to import/start
   if printf '%s' "${output}" | grep -q '"kind": "offline"'; then
     log_error "Server failed to start. Output:"
     printf '%s\n' "${output}" >&2
@@ -177,8 +175,6 @@ smoke_test_server() {
     return 2
   fi

-  # Assert the response contains a valid tool response field, not a bare JSON error.
-  # unraid_health action=check always returns {"status": ...} on success.
   local key_check
   key_check="$(
     printf '%s' "${output}" | python3 -c "
@@ -206,19 +202,17 @@ except Exception as e:

 # ---------------------------------------------------------------------------
 # mcporter call wrapper
-# Usage: mcporter_call <tool_name> <args_json>
-# Writes the mcporter JSON output to stdout.
-# Returns the mcporter exit code.
+# Usage: mcporter_call <args_json>
+# All calls go to the single `unraid` tool.
 # ---------------------------------------------------------------------------
 mcporter_call() {
-  local tool_name="${1:?tool_name required}"
-  local args_json="${2:?args_json required}"
+  local args_json="${1:?args_json required}"

   mcporter call \
     --stdio "uv run unraid-mcp-server" \
     --cwd "${PROJECT_DIR}" \
     --name "unraid" \
-    --tool "${tool_name}" \
+    --tool unraid \
     --args "${args_json}" \
     --timeout "${CALL_TIMEOUT_MS}" \
     --output json \
@@ -227,25 +221,18 @@ mcporter_call() {

 # ---------------------------------------------------------------------------
 # Test runner
-# Usage: run_test <label> <tool_name> <args_json> [expected_key]
-#
-# expected_key — optional jq-style python key path to validate in the
-#                response (e.g. ".status" or ".containers"). If omitted,
-#                any non-offline response is a PASS (tool errors from the
-#                API — e.g. VMs disabled — are still considered PASS because
-#                the tool itself responded correctly).
+# Usage: run_test <label> <args_json> [expected_key]
 # ---------------------------------------------------------------------------
 run_test() {
   local label="${1:?label required}"
-  local tool="${2:?tool required}"
-  local args="${3:?args required}"
-  local expected_key="${4:-}"
+  local args="${2:?args required}"
+  local expected_key="${3:-}"

   local t0
   t0="$(date +%s%N)"

   local output
-  output="$(mcporter_call "${tool}" "${args}" 2>&1)" || true
+  output="$(mcporter_call "${args}" 2>&1)" || true

   local elapsed_ms
   elapsed_ms="$(( ( $(date +%s%N) - t0 ) / 1000000 ))"
@@ -302,7 +289,7 @@ except Exception as e:
 }

 # ---------------------------------------------------------------------------
-# Skip helper — use when a prerequisite (like a list) returned empty
+# Skip helper
 # ---------------------------------------------------------------------------
 skip_test() {
   local label="${1:?label required}"
@@ -313,14 +300,11 @@ skip_test() {

 # ---------------------------------------------------------------------------
 # ID extractors
-# Each function calls the relevant list action and prints the first ID.
-# Prints nothing (empty string) if the list is empty or the call fails.
 # ---------------------------------------------------------------------------

-# Extract first docker container ID
 get_docker_id() {
   local raw
-  raw="$(mcporter_call unraid_docker '{"action":"list"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"docker","subaction":"list"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
@@ -333,10 +317,9 @@ except Exception:
 " 2>/dev/null || true
 }

-# Extract first docker network ID
 get_network_id() {
   local raw
-  raw="$(mcporter_call unraid_docker '{"action":"networks"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"docker","subaction":"networks"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
@@ -349,10 +332,9 @@ except Exception:
 " 2>/dev/null || true
 }

-# Extract first VM ID
 get_vm_id() {
   local raw
-  raw="$(mcporter_call unraid_vm '{"action":"list"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"vm","subaction":"list"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
@@ -365,10 +347,9 @@ except Exception:
 " 2>/dev/null || true
 }

-# Extract first API key ID
 get_key_id() {
   local raw
-  raw="$(mcporter_call unraid_keys '{"action":"list"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"key","subaction":"list"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
@@ -381,10 +362,9 @@ except Exception:
 " 2>/dev/null || true
 }

-# Extract first disk ID
 get_disk_id() {
   local raw
-  raw="$(mcporter_call unraid_storage '{"action":"disks"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"disk","subaction":"disks"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
@@ -397,16 +377,14 @@ except Exception:
 " 2>/dev/null || true
 }

-# Extract first log file path
 get_log_path() {
   local raw
-  raw="$(mcporter_call unraid_storage '{"action":"log_files"}' 2>/dev/null)" || return 0
+  raw="$(mcporter_call '{"action":"disk","subaction":"log_files"}' 2>/dev/null)" || return 0
   printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
   d = json.load(sys.stdin)
   files = d.get('log_files', [])
-  # Prefer a plain text log (not binary like btmp/lastlog)
   for f in files:
     p = f.get('path', '')
     if p.endswith('.log') or 'syslog' in p or 'messages' in p:
@@ -420,35 +398,10 @@ except Exception:
 " 2>/dev/null || true
 }

-# ---------------------------------------------------------------------------
-# Grouped test suites
-# ---------------------------------------------------------------------------
-
-suite_unraid_info() {
-  printf '\n%b== unraid_info (19 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
-
-  run_test "unraid_info: overview" unraid_info '{"action":"overview"}'
-  run_test "unraid_info: array" unraid_info '{"action":"array"}'
-  run_test "unraid_info: network" unraid_info '{"action":"network"}'
-  run_test "unraid_info: registration" unraid_info '{"action":"registration"}'
-  run_test "unraid_info: connect" unraid_info '{"action":"connect"}'
-  run_test "unraid_info: variables" unraid_info '{"action":"variables"}'
-  run_test "unraid_info: metrics" unraid_info '{"action":"metrics"}'
-  run_test "unraid_info: services" unraid_info '{"action":"services"}'
-  run_test "unraid_info: display" unraid_info '{"action":"display"}'
-  run_test "unraid_info: config" unraid_info '{"action":"config"}'
-  run_test "unraid_info: online" unraid_info '{"action":"online"}'
-  run_test "unraid_info: owner" unraid_info '{"action":"owner"}'
-  run_test "unraid_info: settings" unraid_info '{"action":"settings"}'
-  run_test "unraid_info: server" unraid_info '{"action":"server"}'
-  run_test "unraid_info: servers" unraid_info '{"action":"servers"}'
-  run_test "unraid_info: flash" unraid_info '{"action":"flash"}'
-  run_test "unraid_info: ups_devices" unraid_info '{"action":"ups_devices"}'
-  # ups_device and ups_config require a device_id — skip if no UPS devices found
-  local ups_raw
-  ups_raw="$(mcporter_call unraid_info '{"action":"ups_devices"}' 2>/dev/null)" || ups_raw=''
-  local ups_id
-  ups_id="$(printf '%s' "${ups_raw}" | python3 -c "
+get_ups_id() {
+  local raw
+  raw="$(mcporter_call '{"action":"system","subaction":"ups_devices"}' 2>/dev/null)" || return 0
+  printf '%s' "${raw}" | python3 -c "
 import sys, json
 try:
   d = json.load(sys.stdin)
@@ -457,153 +410,193 @@ try:
     print(devs[0].get('id', devs[0].get('name', '')))
 except Exception:
   pass
" 2>/dev/null)" || ups_id=''
|
" 2>/dev/null || true
|
||||||
|
}
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Grouped test suites
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
suite_system() {
|
||||||
|
printf '\n%b== system (info/metrics/UPS) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
|
run_test "system: overview" '{"action":"system","subaction":"overview"}'
|
||||||
|
run_test "system: network" '{"action":"system","subaction":"network"}'
|
||||||
|
run_test "system: registration" '{"action":"system","subaction":"registration"}'
|
||||||
|
run_test "system: variables" '{"action":"system","subaction":"variables"}'
|
||||||
|
run_test "system: metrics" '{"action":"system","subaction":"metrics"}'
|
||||||
|
run_test "system: services" '{"action":"system","subaction":"services"}'
|
||||||
|
run_test "system: display" '{"action":"system","subaction":"display"}'
|
||||||
|
run_test "system: config" '{"action":"system","subaction":"config"}'
|
||||||
|
run_test "system: online" '{"action":"system","subaction":"online"}'
|
||||||
|
run_test "system: owner" '{"action":"system","subaction":"owner"}'
|
||||||
|
run_test "system: settings" '{"action":"system","subaction":"settings"}'
|
||||||
|
run_test "system: server" '{"action":"system","subaction":"server"}'
|
||||||
|
run_test "system: servers" '{"action":"system","subaction":"servers"}'
|
||||||
|
run_test "system: flash" '{"action":"system","subaction":"flash"}'
|
||||||
|
run_test "system: ups_devices" '{"action":"system","subaction":"ups_devices"}'
|
||||||
|
|
||||||
|
local ups_id
|
||||||
|
ups_id="$(get_ups_id)" || ups_id=''
|
||||||
if [[ -n "${ups_id}" ]]; then
|
if [[ -n "${ups_id}" ]]; then
|
||||||
run_test "unraid_info: ups_device" unraid_info \
|
run_test "system: ups_device" \
|
||||||
"$(printf '{"action":"ups_device","device_id":"%s"}' "${ups_id}")"
|
"$(printf '{"action":"system","subaction":"ups_device","device_id":"%s"}' "${ups_id}")"
|
||||||
run_test "unraid_info: ups_config" unraid_info \
|
run_test "system: ups_config" \
|
||||||
"$(printf '{"action":"ups_config","device_id":"%s"}' "${ups_id}")"
|
"$(printf '{"action":"system","subaction":"ups_config","device_id":"%s"}' "${ups_id}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_info: ups_device" "no UPS devices found"
|
skip_test "system: ups_device" "no UPS devices found"
|
||||||
skip_test "unraid_info: ups_config" "no UPS devices found"
|
skip_test "system: ups_config" "no UPS devices found"
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_array() {
|
suite_array() {
|
||||||
printf '\n%b== unraid_array (1 read-only action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== array (read-only) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
run_test "unraid_array: parity_status" unraid_array '{"action":"parity_status"}'
|
run_test "array: parity_status" '{"action":"array","subaction":"parity_status"}'
|
||||||
# Destructive actions (parity_start/pause/resume/cancel) skipped
|
run_test "array: parity_history" '{"action":"array","subaction":"parity_history"}'
|
||||||
|
# Destructive: parity_start/pause/resume/cancel, start_array, stop_array,
|
||||||
|
# add_disk, remove_disk, mount_disk, unmount_disk, clear_disk_stats — skipped
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_storage() {
|
suite_disk() {
|
||||||
printf '\n%b== unraid_storage (6 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== disk (storage/shares/logs) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_storage: shares" unraid_storage '{"action":"shares"}'
|
run_test "disk: shares" '{"action":"disk","subaction":"shares"}'
|
||||||
run_test "unraid_storage: disks" unraid_storage '{"action":"disks"}'
|
run_test "disk: disks" '{"action":"disk","subaction":"disks"}'
|
||||||
run_test "unraid_storage: unassigned" unraid_storage '{"action":"unassigned"}'
|
run_test "disk: log_files" '{"action":"disk","subaction":"log_files"}'
|
||||||
run_test "unraid_storage: log_files" unraid_storage '{"action":"log_files"}'
|
|
||||||
|
|
||||||
# disk_details needs a disk ID
|
|
||||||
local disk_id
|
local disk_id
|
||||||
disk_id="$(get_disk_id)" || disk_id=''
|
disk_id="$(get_disk_id)" || disk_id=''
|
||||||
if [[ -n "${disk_id}" ]]; then
|
if [[ -n "${disk_id}" ]]; then
|
||||||
run_test "unraid_storage: disk_details" unraid_storage \
|
run_test "disk: disk_details" \
|
||||||
"$(printf '{"action":"disk_details","disk_id":"%s"}' "${disk_id}")"
|
"$(printf '{"action":"disk","subaction":"disk_details","disk_id":"%s"}' "${disk_id}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_storage: disk_details" "no disks found"
|
skip_test "disk: disk_details" "no disks found"
|
||||||
fi
|
fi
|
||||||
|
|
||||||
# logs needs a valid log path
|
|
||||||
local log_path
|
local log_path
|
||||||
log_path="$(get_log_path)" || log_path=''
|
log_path="$(get_log_path)" || log_path=''
|
||||||
if [[ -n "${log_path}" ]]; then
|
if [[ -n "${log_path}" ]]; then
|
||||||
run_test "unraid_storage: logs" unraid_storage \
|
run_test "disk: logs" \
|
||||||
"$(printf '{"action":"logs","log_path":"%s","tail_lines":20}' "${log_path}")"
|
"$(printf '{"action":"disk","subaction":"logs","log_path":"%s","tail_lines":20}' "${log_path}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_storage: logs" "no log files found"
|
skip_test "disk: logs" "no log files found"
|
||||||
fi
|
fi
|
||||||
|
# Destructive: flash_backup — skipped
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_docker() {
|
suite_docker() {
|
||||||
printf '\n%b== unraid_docker (7 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== docker ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_docker: list" unraid_docker '{"action":"list"}'
|
run_test "docker: list" '{"action":"docker","subaction":"list"}'
|
||||||
run_test "unraid_docker: networks" unraid_docker '{"action":"networks"}'
|
run_test "docker: networks" '{"action":"docker","subaction":"networks"}'
|
||||||
run_test "unraid_docker: port_conflicts" unraid_docker '{"action":"port_conflicts"}'
|
|
||||||
run_test "unraid_docker: check_updates" unraid_docker '{"action":"check_updates"}'
|
|
||||||
|
|
||||||
# details, logs, network_details need IDs
|
|
||||||
local container_id
|
local container_id
|
||||||
container_id="$(get_docker_id)" || container_id=''
|
container_id="$(get_docker_id)" || container_id=''
|
||||||
if [[ -n "${container_id}" ]]; then
|
if [[ -n "${container_id}" ]]; then
|
||||||
run_test "unraid_docker: details" unraid_docker \
|
run_test "docker: details" \
|
||||||
"$(printf '{"action":"details","container_id":"%s"}' "${container_id}")"
|
"$(printf '{"action":"docker","subaction":"details","container_id":"%s"}' "${container_id}")"
|
||||||
run_test "unraid_docker: logs" unraid_docker \
|
|
||||||
"$(printf '{"action":"logs","container_id":"%s","tail_lines":20}' "${container_id}")"
|
|
||||||
else
|
else
|
||||||
skip_test "unraid_docker: details" "no containers found"
|
skip_test "docker: details" "no containers found"
|
||||||
skip_test "unraid_docker: logs" "no containers found"
|
|
||||||
fi
|
fi
|
||||||
|
|
||||||
local network_id
|
local network_id
|
||||||
network_id="$(get_network_id)" || network_id=''
|
network_id="$(get_network_id)" || network_id=''
|
||||||
if [[ -n "${network_id}" ]]; then
|
if [[ -n "${network_id}" ]]; then
|
||||||
run_test "unraid_docker: network_details" unraid_docker \
|
run_test "docker: network_details" \
|
||||||
"$(printf '{"action":"network_details","network_id":"%s"}' "${network_id}")"
|
"$(printf '{"action":"docker","subaction":"network_details","network_id":"%s"}' "${network_id}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_docker: network_details" "no networks found"
|
skip_test "docker: network_details" "no networks found"
|
||||||
fi
|
fi
|
||||||
|
# Destructive/mutating: start/stop/restart — skipped
|
||||||
# Destructive actions (start/stop/restart/pause/unpause/remove/update/update_all) skipped
|
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_vm() {
|
suite_vm() {
|
||||||
printf '\n%b== unraid_vm (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== vm ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_vm: list" unraid_vm '{"action":"list"}'
|
run_test "vm: list" '{"action":"vm","subaction":"list"}'
|
||||||
|
|
||||||
local vm_id
|
local vm_id
|
||||||
vm_id="$(get_vm_id)" || vm_id=''
|
vm_id="$(get_vm_id)" || vm_id=''
|
||||||
if [[ -n "${vm_id}" ]]; then
|
if [[ -n "${vm_id}" ]]; then
|
||||||
run_test "unraid_vm: details" unraid_vm \
|
run_test "vm: details" \
|
||||||
"$(printf '{"action":"details","vm_id":"%s"}' "${vm_id}")"
|
"$(printf '{"action":"vm","subaction":"details","vm_id":"%s"}' "${vm_id}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_vm: details" "no VMs found (or VM service unavailable)"
|
skip_test "vm: details" "no VMs found (or VM service unavailable)"
|
||||||
fi
|
fi
|
||||||
|
# Destructive: start/stop/pause/resume/force_stop/reboot/reset — skipped
|
||||||
# Destructive actions (start/stop/pause/resume/force_stop/reboot/reset) skipped
|
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_notifications() {
|
suite_notification() {
|
||||||
printf '\n%b== unraid_notifications (4 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== notification ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_notifications: overview" unraid_notifications '{"action":"overview"}'
|
run_test "notification: overview" '{"action":"notification","subaction":"overview"}'
|
||||||
run_test "unraid_notifications: list" unraid_notifications '{"action":"list"}'
|
run_test "notification: list" '{"action":"notification","subaction":"list"}'
|
||||||
run_test "unraid_notifications: warnings" unraid_notifications '{"action":"warnings"}'
|
run_test "notification: unread" '{"action":"notification","subaction":"unread"}'
|
||||||
run_test "unraid_notifications: unread" unraid_notifications '{"action":"unread"}'
|
# Mutating: create/archive/delete/delete_archived/archive_all/etc. — skipped
|
||||||
|
|
||||||
# Destructive actions (create/archive/delete/delete_archived/archive_all/etc.) skipped
|
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_rclone() {
|
suite_rclone() {
|
||||||
printf '\n%b== unraid_rclone (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== rclone ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_rclone: list_remotes" unraid_rclone '{"action":"list_remotes"}'
|
run_test "rclone: list_remotes" '{"action":"rclone","subaction":"list_remotes"}'
|
||||||
# config_form requires a provider_type — use "s3" as a safe, always-available provider
|
run_test "rclone: config_form" '{"action":"rclone","subaction":"config_form","provider_type":"s3"}'
|
||||||
run_test "unraid_rclone: config_form" unraid_rclone '{"action":"config_form","provider_type":"s3"}'
|
# Destructive: create_remote/delete_remote — skipped
|
||||||
|
|
||||||
# Destructive actions (create_remote/delete_remote) skipped
|
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_users() {
|
suite_user() {
|
||||||
printf '\n%b== unraid_users (1 action) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== user ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
run_test "unraid_users: me" unraid_users '{"action":"me"}'
|
run_test "user: me" '{"action":"user","subaction":"me"}'
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_keys() {
|
suite_key() {
|
||||||
printf '\n%b== unraid_keys (2 read-only actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== key (API keys) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_keys: list" unraid_keys '{"action":"list"}'
|
run_test "key: list" '{"action":"key","subaction":"list"}'
|
||||||
|
|
||||||
local key_id
|
local key_id
|
||||||
key_id="$(get_key_id)" || key_id=''
|
key_id="$(get_key_id)" || key_id=''
|
||||||
if [[ -n "${key_id}" ]]; then
|
if [[ -n "${key_id}" ]]; then
|
||||||
run_test "unraid_keys: get" unraid_keys \
|
run_test "key: get" \
|
||||||
"$(printf '{"action":"get","key_id":"%s"}' "${key_id}")"
|
"$(printf '{"action":"key","subaction":"get","key_id":"%s"}' "${key_id}")"
|
||||||
else
|
else
|
||||||
skip_test "unraid_keys: get" "no API keys found"
|
skip_test "key: get" "no API keys found"
|
||||||
fi
|
fi
|
||||||
|
# Destructive: create/update/delete/add_role/remove_role — skipped
|
||||||
# Destructive actions (create/update/delete) skipped
|
|
||||||
}
|
}
|
||||||
|
|
||||||
suite_unraid_health() {
|
suite_health() {
|
||||||
printf '\n%b== unraid_health (3 actions) ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
printf '\n%b== health ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
run_test "unraid_health: check" unraid_health '{"action":"check"}'
|
run_test "health: check" '{"action":"health","subaction":"check"}'
|
||||||
run_test "unraid_health: test_connection" unraid_health '{"action":"test_connection"}'
|
run_test "health: test_connection" '{"action":"health","subaction":"test_connection"}'
|
||||||
run_test "unraid_health: diagnose" unraid_health '{"action":"diagnose"}'
|
run_test "health: diagnose" '{"action":"health","subaction":"diagnose"}'
|
||||||
|
# setup triggers elicitation — skipped
|
||||||
|
}
|
||||||
|
|
||||||
|
suite_customization() {
|
||||||
|
printf '\n%b== customization ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
|
run_test "customization: theme" '{"action":"customization","subaction":"theme"}'
|
||||||
|
run_test "customization: public_theme" '{"action":"customization","subaction":"public_theme"}'
|
||||||
|
run_test "customization: sso_enabled" '{"action":"customization","subaction":"sso_enabled"}'
|
||||||
|
run_test "customization: is_initial_setup" '{"action":"customization","subaction":"is_initial_setup"}'
|
||||||
|
# Mutating: set_theme — skipped
|
||||||
|
}
|
||||||
|
|
||||||
|
suite_plugin() {
|
||||||
|
printf '\n%b== plugin ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
|
run_test "plugin: list" '{"action":"plugin","subaction":"list"}'
|
||||||
|
# Destructive: add/remove — skipped
|
||||||
|
}
|
||||||
|
|
||||||
|
suite_oidc() {
|
||||||
|
printf '\n%b== oidc ==%b\n' "${C_BOLD}" "${C_RESET}" | tee -a "${LOG_FILE}"
|
||||||
|
|
||||||
|
run_test "oidc: providers" '{"action":"oidc","subaction":"providers"}'
|
||||||
|
run_test "oidc: public_providers" '{"action":"oidc","subaction":"public_providers"}'
|
||||||
|
run_test "oidc: configuration" '{"action":"oidc","subaction":"configuration"}'
|
||||||
|
# provider and validate_session require IDs — skipped
|
||||||
}
|
}
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
@@ -633,13 +626,9 @@ print_summary() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
# Parallel runner — wraps each suite in a background subshell and waits
|
# Parallel runner
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
run_parallel() {
|
run_parallel() {
|
||||||
# Each suite is independent (only cross-suite dependency: IDs are fetched
|
|
||||||
# fresh inside each suite function, not shared across suites).
|
|
||||||
# Counter updates from subshells won't propagate to the parent — collect
|
|
||||||
# results via temp files instead.
|
|
||||||
log_warn "--parallel mode: per-suite counters aggregated via temp files."
|
log_warn "--parallel mode: per-suite counters aggregated via temp files."
|
||||||
|
|
||||||
local tmp_dir
|
local tmp_dir
|
||||||
@@ -647,23 +636,25 @@ run_parallel() {
|
|||||||
trap 'rm -rf -- "${tmp_dir}"' RETURN
|
trap 'rm -rf -- "${tmp_dir}"' RETURN
|
||||||
|
|
||||||
local suites=(
|
local suites=(
|
||||||
suite_unraid_info
|
suite_system
|
||||||
suite_unraid_array
|
suite_array
|
||||||
suite_unraid_storage
|
suite_disk
|
||||||
suite_unraid_docker
|
suite_docker
|
||||||
suite_unraid_vm
|
suite_vm
|
||||||
suite_unraid_notifications
|
suite_notification
|
||||||
suite_unraid_rclone
|
suite_rclone
|
||||||
suite_unraid_users
|
suite_user
|
||||||
suite_unraid_keys
|
suite_key
|
||||||
suite_unraid_health
|
suite_health
|
||||||
|
suite_customization
|
||||||
|
suite_plugin
|
||||||
|
suite_oidc
|
||||||
)
|
)
|
||||||
|
|
||||||
local pids=()
|
local pids=()
|
||||||
local suite
|
local suite
|
||||||
for suite in "${suites[@]}"; do
|
for suite in "${suites[@]}"; do
|
||||||
(
|
(
|
||||||
# Reset counters in subshell
|
|
||||||
PASS_COUNT=0; FAIL_COUNT=0; SKIP_COUNT=0; FAIL_NAMES=()
|
PASS_COUNT=0; FAIL_COUNT=0; SKIP_COUNT=0; FAIL_NAMES=()
|
||||||
"${suite}"
|
"${suite}"
|
||||||
printf '%d %d %d\n' "${PASS_COUNT}" "${FAIL_COUNT}" "${SKIP_COUNT}" \
|
printf '%d %d %d\n' "${PASS_COUNT}" "${FAIL_COUNT}" "${SKIP_COUNT}" \
|
||||||
@@ -673,13 +664,11 @@ run_parallel() {
|
|||||||
pids+=($!)
|
pids+=($!)
|
||||||
done
|
done
|
||||||
|
|
||||||
# Wait for all background suites
|
|
||||||
local pid
|
local pid
|
||||||
for pid in "${pids[@]}"; do
|
for pid in "${pids[@]}"; do
|
||||||
wait "${pid}" || true
|
wait "${pid}" || true
|
||||||
done
|
done
|
||||||
|
|
||||||
# Aggregate counters
|
|
||||||
local f
|
local f
|
||||||
for f in "${tmp_dir}"/*.counts; do
|
for f in "${tmp_dir}"/*.counts; do
|
||||||
[[ -f "${f}" ]] || continue
|
[[ -f "${f}" ]] || continue
|
||||||
@@ -702,16 +691,19 @@ run_parallel() {
|
|||||||
# Sequential runner
|
# Sequential runner
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
run_sequential() {
|
run_sequential() {
|
||||||
suite_unraid_info
|
suite_system
|
||||||
suite_unraid_array
|
suite_array
|
||||||
suite_unraid_storage
|
suite_disk
|
||||||
suite_unraid_docker
|
suite_docker
|
||||||
suite_unraid_vm
|
suite_vm
|
||||||
suite_unraid_notifications
|
suite_notification
|
||||||
suite_unraid_rclone
|
suite_rclone
|
||||||
suite_unraid_users
|
suite_user
|
||||||
suite_unraid_keys
|
suite_key
|
||||||
suite_unraid_health
|
suite_health
|
||||||
|
suite_customization
|
||||||
|
suite_plugin
|
||||||
|
suite_oidc
|
||||||
}
|
}
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
@@ -721,29 +713,21 @@ main() {
|
|||||||
parse_args "$@"
|
parse_args "$@"
|
||||||
|
|
||||||
printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
|
printf '%b%s%b\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
|
||||||
printf '%b unraid-mcp integration smoke-test%b\n' "${C_BOLD}" "${C_RESET}"
|
printf '%b unraid-mcp integration smoke-test (single unraid tool)%b\n' "${C_BOLD}" "${C_RESET}"
|
||||||
printf '%b Project: %s%b\n' "${C_BOLD}" "${PROJECT_DIR}" "${C_RESET}"
|
printf '%b Project: %s%b\n' "${C_BOLD}" "${PROJECT_DIR}" "${C_RESET}"
|
||||||
printf '%b Timeout: %dms/call | Parallel: %s%b\n' \
|
printf '%b Timeout: %dms/call | Parallel: %s%b\n' \
|
||||||
"${C_BOLD}" "${CALL_TIMEOUT_MS}" "${USE_PARALLEL}" "${C_RESET}"
|
"${C_BOLD}" "${CALL_TIMEOUT_MS}" "${USE_PARALLEL}" "${C_RESET}"
|
||||||
printf '%b Log: %s%b\n' "${C_BOLD}" "${LOG_FILE}" "${C_RESET}"
|
printf '%b Log: %s%b\n' "${C_BOLD}" "${LOG_FILE}" "${C_RESET}"
|
||||||
printf '%b%s%b\n\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
|
printf '%b%s%b\n\n' "${C_BOLD}" "$(printf '=%.0s' {1..65})" "${C_RESET}"
|
||||||
|
|
||||||
# Prerequisite gate
|
|
||||||
check_prerequisites || exit 2
|
check_prerequisites || exit 2
|
||||||
|
|
||||||
# Server startup gate — fail fast if the Python process can't start
|
|
||||||
smoke_test_server || {
|
smoke_test_server || {
|
||||||
log_error ""
|
log_error ""
|
||||||
log_error "Server startup failed. Aborting — no tests will run."
|
log_error "Server startup failed. Aborting — no tests will run."
|
||||||
log_error ""
|
log_error ""
|
||||||
log_error "To diagnose, run:"
|
log_error "To diagnose, run:"
|
||||||
log_error " cd ${PROJECT_DIR} && uv run unraid-mcp-server"
|
log_error " cd ${PROJECT_DIR} && uv run unraid-mcp-server"
|
||||||
log_error ""
|
|
||||||
log_error "If server.py has a broken import (e.g. missing tools/settings.py),"
|
|
||||||
log_error "stash or revert the uncommitted server.py change first:"
|
|
||||||
log_error " git stash -- unraid_mcp/server.py"
|
|
||||||
log_error " ./scripts/test-tools.sh"
|
|
||||||
log_error " git stash pop"
|
|
||||||
exit 2
|
exit 2
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
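The comment deleted from `run_parallel` in the hunk above ("Counter updates from subshells won't propagate to the parent — collect results via temp files instead") names a general Bash pattern worth illustrating. A standalone sketch of that pattern, independent of this repo's helpers (suite names and counts here are invented for the demo):

```shell
#!/usr/bin/env bash
# Each "suite" runs in a background subshell; its counter updates are
# invisible to the parent, so it writes counts to a temp file and the
# parent aggregates after wait.
set -euo pipefail

tmp_dir="$(mktemp -d)"
trap 'rm -rf -- "${tmp_dir}"' EXIT

for suite in alpha beta; do
  (
    PASS_COUNT=0
    PASS_COUNT=$((PASS_COUNT + 2))   # stand-in for running the suite's tests
    printf '%d\n' "${PASS_COUNT}" > "${tmp_dir}/${suite}.counts"
  ) &
done
wait

total=0
for f in "${tmp_dir}"/*.counts; do
  total=$((total + $(cat "${f}")))
done
printf 'total passes: %d\n' "${total}"   # total passes: 4
```

Assigning `PASS_COUNT` directly in the parent loop instead would always leave it at 0 there, which is exactly why the script routes results through `${tmp_dir}/*.counts`.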
@@ -6,11 +6,11 @@ Uses Hypothesis to fuzz tool inputs and verify the core invariant:
 other unhandled exception from arbitrary inputs is a bug.
 
 Each test class targets a distinct tool domain and strategy profile:
-- Docker: arbitrary container IDs, action names, numeric params
+- Docker: arbitrary container IDs, subaction names, numeric params
 - Notifications: importance strings, list_type strings, field lengths
 - Keys: arbitrary key IDs, role lists, name strings
-- VM: arbitrary VM IDs, action names
-- Info: invalid action names (cross-tool invariant for the action guard)
+- VM: arbitrary VM IDs, subaction names
+- Info: invalid subaction names (cross-tool invariant for the subaction guard)
 """
 
 import asyncio
@@ -60,6 +60,10 @@ def _assert_only_tool_error(exc: BaseException) -> None:
     )
 
 
+def _make_tool():
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
+
+
 # ---------------------------------------------------------------------------
 # Docker: arbitrary container IDs
 # ---------------------------------------------------------------------------
@@ -79,16 +83,14 @@ class TestDockerContainerIdFuzzing:
         """Arbitrary container IDs for 'details' must not crash the tool."""
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
-            )
+            tool_fn = _make_tool()
             with patch(
-                "unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock
+                "unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
             ) as mock:
                 mock.return_value = {"docker": {"containers": []}}
                 with contextlib.suppress(ToolError):
                     # ToolError is the only acceptable exception — suppress it
-                    await tool_fn(action="details", container_id=container_id)
+                    await tool_fn(action="docker", subaction="details", container_id=container_id)
 
         _run(_run_test())
 
@@ -98,15 +100,13 @@ class TestDockerContainerIdFuzzing:
         """Arbitrary container IDs for 'start' must not crash the tool."""
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
-            )
+            tool_fn = _make_tool()
             with patch(
-                "unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock
+                "unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
             ) as mock:
                 mock.return_value = {"docker": {"containers": []}}
                 with contextlib.suppress(ToolError):
-                    await tool_fn(action="start", container_id=container_id)
+                    await tool_fn(action="docker", subaction="start", container_id=container_id)
 
         _run(_run_test())
 
@@ -116,15 +116,13 @@ class TestDockerContainerIdFuzzing:
         """Arbitrary container IDs for 'stop' must not crash the tool."""
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
-            )
+            tool_fn = _make_tool()
             with patch(
-                "unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock
+                "unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
             ) as mock:
                 mock.return_value = {"docker": {"containers": []}}
                 with contextlib.suppress(ToolError):
-                    await tool_fn(action="stop", container_id=container_id)
+                    await tool_fn(action="docker", subaction="stop", container_id=container_id)
 
         _run(_run_test())
 
@@ -134,80 +132,57 @@ class TestDockerContainerIdFuzzing:
         """Arbitrary container IDs for 'restart' must not crash the tool."""
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
-            )
+            tool_fn = _make_tool()
             with patch(
-                "unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock
+                "unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
             ) as mock:
                 # stop then start both need container list + mutation responses
                 mock.return_value = {"docker": {"containers": []}}
                 with contextlib.suppress(ToolError):
-                    await tool_fn(action="restart", container_id=container_id)
+                    await tool_fn(action="docker", subaction="restart", container_id=container_id)
 
         _run(_run_test())
 
 
 # ---------------------------------------------------------------------------
-# Docker: invalid action names
+# Docker: invalid subaction names
 # ---------------------------------------------------------------------------
 
 
 class TestDockerInvalidActions:
-    """Fuzz the action parameter with arbitrary strings.
+    """Fuzz the subaction parameter with arbitrary strings for the docker domain.
 
-    Invariant: invalid action names raise ToolError, never KeyError or crash.
-    This validates the action guard that sits at the top of every tool function.
+    Invariant: invalid subaction names raise ToolError, never KeyError or crash.
+    This validates the subaction guard that sits inside every domain handler.
     """
 
     @given(st.text())
     @settings(max_examples=200, suppress_health_check=[HealthCheck.function_scoped_fixture])
-    def test_invalid_action_raises_tool_error(self, action: str) -> None:
-        """Any non-valid action string must raise ToolError, not crash."""
-        valid_actions = {
+    def test_invalid_action_raises_tool_error(self, subaction: str) -> None:
+        """Any non-valid subaction string for docker must raise ToolError, not crash."""
+        valid_subactions = {
             "list",
             "details",
             "start",
             "stop",
             "restart",
-            "pause",
-            "unpause",
-            "remove",
-            "update",
-            "update_all",
-            "logs",
             "networks",
             "network_details",
-            "port_conflicts",
-            "check_updates",
-            "create_folder",
-            "set_folder_children",
-            "delete_entries",
-            "move_to_folder",
-            "move_to_position",
-            "rename_folder",
-            "create_folder_with_items",
-            "update_view_prefs",
-            "sync_templates",
-            "reset_template_mappings",
-            "refresh_digests",
         }
-        if action in valid_actions:
-            return  # Skip valid actions — they have different semantics
+        if subaction in valid_subactions:
+            return  # Skip valid subactions — they have different semantics
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
-            )
-            with patch("unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock):
+            tool_fn = _make_tool()
+            with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock):
                 try:
-                    await tool_fn(action=action)
+                    await tool_fn(action="docker", subaction=subaction)
                 except ToolError:
-                    pass  # Correct: invalid action raises ToolError
+                    pass  # Correct: invalid subaction raises ToolError
                 except Exception as exc:
                     # Any other exception is a bug
                     pytest.fail(
-                        f"Action '{action!r}' raised {type(exc).__name__} "
+                        f"subaction={subaction!r} raised {type(exc).__name__} "
                         f"instead of ToolError: {exc!r}"
                     )
 
@@ -235,13 +210,9 @@ class TestNotificationsEnumFuzzing:
             return  # Skip valid values
 
         async def _run_test():
-            tool_fn = make_tool_fn(
-                "unraid_mcp.tools.notifications",
-                "register_notifications_tool",
-                "unraid_notifications",
-            )
+            tool_fn = _make_tool()
             with patch(
-                "unraid_mcp.tools.notifications.make_graphql_request",
+                "unraid_mcp.tools.unraid.make_graphql_request",
                 new_callable=AsyncMock,
             ) as mock:
|
) as mock:
|
||||||
mock.return_value = {
|
mock.return_value = {
|
||||||
@@ -249,7 +220,8 @@ class TestNotificationsEnumFuzzing:
|
|||||||
}
|
}
|
||||||
try:
|
try:
|
||||||
await tool_fn(
|
await tool_fn(
|
||||||
action="create",
|
action="notification",
|
||||||
|
subaction="create",
|
||||||
title="Test",
|
title="Test",
|
||||||
subject="Sub",
|
subject="Sub",
|
||||||
description="Desc",
|
description="Desc",
|
||||||
@@ -271,18 +243,14 @@ class TestNotificationsEnumFuzzing:
|
|||||||
return # Skip valid values
|
return # Skip valid values
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.notifications",
|
|
||||||
"register_notifications_tool",
|
|
||||||
"unraid_notifications",
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.notifications.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"notifications": {"list": []}}
|
mock.return_value = {"notifications": {"list": []}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="list", list_type=list_type)
|
await tool_fn(action="notification", subaction="list", list_type=list_type)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -306,13 +274,9 @@ class TestNotificationsEnumFuzzing:
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.notifications",
|
|
||||||
"register_notifications_tool",
|
|
||||||
"unraid_notifications",
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.notifications.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {
|
mock.return_value = {
|
||||||
@@ -320,7 +284,8 @@ class TestNotificationsEnumFuzzing:
|
|||||||
}
|
}
|
||||||
try:
|
try:
|
||||||
await tool_fn(
|
await tool_fn(
|
||||||
action="create",
|
action="notification",
|
||||||
|
subaction="create",
|
||||||
title=title,
|
title=title,
|
||||||
subject=subject,
|
subject=subject,
|
||||||
description=description,
|
description=description,
|
||||||
@@ -344,19 +309,16 @@ class TestNotificationsEnumFuzzing:
|
|||||||
return
|
return
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.notifications",
|
|
||||||
"register_notifications_tool",
|
|
||||||
"unraid_notifications",
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.notifications.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"deleteNotification": {}}
|
mock.return_value = {"deleteNotification": {}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(
|
await tool_fn(
|
||||||
action="delete",
|
action="notification",
|
||||||
|
subaction="delete",
|
||||||
notification_id="some-id",
|
notification_id="some-id",
|
||||||
notification_type=notif_type,
|
notification_type=notif_type,
|
||||||
confirm=True,
|
confirm=True,
|
||||||
@@ -372,12 +334,11 @@ class TestNotificationsEnumFuzzing:
|
|||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@settings(max_examples=100, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
@settings(max_examples=100, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
||||||
def test_invalid_action_raises_tool_error(self, action: str) -> None:
|
def test_invalid_action_raises_tool_error(self, subaction: str) -> None:
|
||||||
"""Invalid action names for notifications tool raise ToolError."""
|
"""Invalid subaction names for notifications domain raise ToolError."""
|
||||||
valid_actions = {
|
valid_subactions = {
|
||||||
"overview",
|
"overview",
|
||||||
"list",
|
"list",
|
||||||
"warnings",
|
|
||||||
"create",
|
"create",
|
||||||
"archive",
|
"archive",
|
||||||
"unread",
|
"unread",
|
||||||
@@ -385,31 +346,26 @@ class TestNotificationsEnumFuzzing:
|
|||||||
"delete_archived",
|
"delete_archived",
|
||||||
"archive_all",
|
"archive_all",
|
||||||
"archive_many",
|
"archive_many",
|
||||||
"create_unique",
|
|
||||||
"unarchive_many",
|
"unarchive_many",
|
||||||
"unarchive_all",
|
"unarchive_all",
|
||||||
"recalculate",
|
"recalculate",
|
||||||
}
|
}
|
||||||
if action in valid_actions:
|
if subaction in valid_subactions:
|
||||||
return
|
return
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.notifications",
|
|
||||||
"register_notifications_tool",
|
|
||||||
"unraid_notifications",
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.notifications.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
):
|
):
|
||||||
try:
|
try:
|
||||||
await tool_fn(action=action)
|
await tool_fn(action="notification", subaction=subaction)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
pytest.fail(
|
pytest.fail(
|
||||||
f"Action {action!r} raised {type(exc).__name__} "
|
f"subaction={subaction!r} raised {type(exc).__name__} "
|
||||||
f"instead of ToolError: {exc!r}"
|
f"instead of ToolError: {exc!r}"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -425,7 +381,7 @@ class TestKeysInputFuzzing:
|
|||||||
"""Fuzz API key management parameters.
|
"""Fuzz API key management parameters.
|
||||||
|
|
||||||
Invariant: arbitrary key_id strings, names, and role lists never crash
|
Invariant: arbitrary key_id strings, names, and role lists never crash
|
||||||
the keys tool — only ToolError or clean return values are acceptable.
|
the keys domain — only ToolError or clean return values are acceptable.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@@ -434,13 +390,13 @@ class TestKeysInputFuzzing:
|
|||||||
"""Arbitrary key_id for 'get' must not crash the tool."""
|
"""Arbitrary key_id for 'get' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
tool_fn = _make_tool()
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock
|
"unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"apiKey": None}
|
mock.return_value = {"apiKey": None}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="get", key_id=key_id)
|
await tool_fn(action="key", subaction="get", key_id=key_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -454,15 +410,15 @@ class TestKeysInputFuzzing:
|
|||||||
"""Arbitrary name strings for 'create' must not crash the tool."""
|
"""Arbitrary name strings for 'create' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
tool_fn = _make_tool()
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock
|
"unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {
|
mock.return_value = {
|
||||||
"apiKey": {"create": {"id": "1", "name": name, "key": "k", "roles": []}}
|
"apiKey": {"create": {"id": "1", "name": name, "key": "k", "roles": []}}
|
||||||
}
|
}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="create", name=name)
|
await tool_fn(action="key", subaction="create", name=name)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -476,13 +432,15 @@ class TestKeysInputFuzzing:
|
|||||||
"""Arbitrary role lists for 'add_role' must not crash the tool."""
|
"""Arbitrary role lists for 'add_role' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
tool_fn = _make_tool()
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock
|
"unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"apiKey": {"addRole": True}}
|
mock.return_value = {"apiKey": {"addRole": True}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="add_role", key_id="some-key-id", roles=roles)
|
await tool_fn(
|
||||||
|
action="key", subaction="add_role", key_id="some-key-id", roles=roles
|
||||||
|
)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -492,22 +450,22 @@ class TestKeysInputFuzzing:
|
|||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@settings(max_examples=100, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
@settings(max_examples=100, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
||||||
def test_invalid_action_raises_tool_error(self, action: str) -> None:
|
def test_invalid_action_raises_tool_error(self, subaction: str) -> None:
|
||||||
"""Invalid action names for keys tool raise ToolError."""
|
"""Invalid subaction names for keys domain raise ToolError."""
|
||||||
valid_actions = {"list", "get", "create", "update", "delete", "add_role", "remove_role"}
|
valid_subactions = {"list", "get", "create", "update", "delete", "add_role", "remove_role"}
|
||||||
if action in valid_actions:
|
if subaction in valid_subactions:
|
||||||
return
|
return
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
tool_fn = _make_tool()
|
||||||
with patch("unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock):
|
with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock):
|
||||||
try:
|
try:
|
||||||
await tool_fn(action=action)
|
await tool_fn(action="key", subaction=subaction)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
pytest.fail(
|
pytest.fail(
|
||||||
f"Action {action!r} raised {type(exc).__name__} "
|
f"subaction={subaction!r} raised {type(exc).__name__} "
|
||||||
f"instead of ToolError: {exc!r}"
|
f"instead of ToolError: {exc!r}"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -515,15 +473,15 @@ class TestKeysInputFuzzing:
|
|||||||
|
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
# VM: arbitrary VM IDs and action names
|
# VM: arbitrary VM IDs and subaction names
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
|
||||||
class TestVMInputFuzzing:
|
class TestVMInputFuzzing:
|
||||||
"""Fuzz VM management parameters.
|
"""Fuzz VM management parameters.
|
||||||
|
|
||||||
Invariant: arbitrary vm_id strings and action names must never crash
|
Invariant: arbitrary vm_id strings and subaction names must never crash
|
||||||
the VM tool — only ToolError or clean return values are acceptable.
|
the VM domain — only ToolError or clean return values are acceptable.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@@ -532,16 +490,14 @@ class TestVMInputFuzzing:
|
|||||||
"""Arbitrary vm_id for 'start' must not crash the tool."""
|
"""Arbitrary vm_id for 'start' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.virtualization.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"vm": {"start": True}}
|
mock.return_value = {"vm": {"start": True}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="start", vm_id=vm_id)
|
await tool_fn(action="vm", subaction="start", vm_id=vm_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -555,16 +511,14 @@ class TestVMInputFuzzing:
|
|||||||
"""Arbitrary vm_id for 'stop' must not crash the tool."""
|
"""Arbitrary vm_id for 'stop' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.virtualization.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"vm": {"stop": True}}
|
mock.return_value = {"vm": {"stop": True}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="stop", vm_id=vm_id)
|
await tool_fn(action="vm", subaction="stop", vm_id=vm_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -578,17 +532,15 @@ class TestVMInputFuzzing:
|
|||||||
"""Arbitrary vm_id for 'details' must not crash the tool."""
|
"""Arbitrary vm_id for 'details' must not crash the tool."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.virtualization.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
# Return an empty VM list so the lookup gracefully fails
|
# Return an empty VM list so the lookup gracefully fails
|
||||||
mock.return_value = {"vms": {"domains": []}}
|
mock.return_value = {"vms": {"domains": []}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="details", vm_id=vm_id)
|
await tool_fn(action="vm", subaction="details", vm_id=vm_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -598,9 +550,9 @@ class TestVMInputFuzzing:
|
|||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@settings(max_examples=200, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
@settings(max_examples=200, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
||||||
def test_invalid_action_raises_tool_error(self, action: str) -> None:
|
def test_invalid_action_raises_tool_error(self, subaction: str) -> None:
|
||||||
"""Invalid action names for VM tool raise ToolError."""
|
"""Invalid subaction names for VM domain raise ToolError."""
|
||||||
valid_actions = {
|
valid_subactions = {
|
||||||
"list",
|
"list",
|
||||||
"details",
|
"details",
|
||||||
"start",
|
"start",
|
||||||
@@ -611,24 +563,22 @@ class TestVMInputFuzzing:
|
|||||||
"reboot",
|
"reboot",
|
||||||
"reset",
|
"reset",
|
||||||
}
|
}
|
||||||
if action in valid_actions:
|
if subaction in valid_subactions:
|
||||||
return
|
return
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.virtualization.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
):
|
):
|
||||||
try:
|
try:
|
||||||
await tool_fn(action=action)
|
await tool_fn(action="vm", subaction=subaction)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
pytest.fail(
|
pytest.fail(
|
||||||
f"Action {action!r} raised {type(exc).__name__} "
|
f"subaction={subaction!r} raised {type(exc).__name__} "
|
||||||
f"instead of ToolError: {exc!r}"
|
f"instead of ToolError: {exc!r}"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -664,18 +614,16 @@ class TestBoundaryValues:
|
|||||||
)
|
)
|
||||||
@settings(max_examples=50, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
@settings(max_examples=50, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
||||||
def test_docker_details_adversarial_inputs(self, container_id: str) -> None:
|
def test_docker_details_adversarial_inputs(self, container_id: str) -> None:
|
||||||
"""Adversarial container_id values must not crash the Docker tool."""
|
"""Adversarial container_id values must not crash the Docker domain."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker"
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock
|
"unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"docker": {"containers": []}}
|
mock.return_value = {"docker": {"containers": []}}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="details", container_id=container_id)
|
await tool_fn(action="docker", subaction="details", container_id=container_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -702,13 +650,9 @@ class TestBoundaryValues:
|
|||||||
"""Adversarial importance values must raise ToolError, not crash."""
|
"""Adversarial importance values must raise ToolError, not crash."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn(
|
tool_fn = _make_tool()
|
||||||
"unraid_mcp.tools.notifications",
|
|
||||||
"register_notifications_tool",
|
|
||||||
"unraid_notifications",
|
|
||||||
)
|
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.notifications.make_graphql_request",
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
new_callable=AsyncMock,
|
new_callable=AsyncMock,
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {
|
mock.return_value = {
|
||||||
@@ -716,7 +660,8 @@ class TestBoundaryValues:
|
|||||||
}
|
}
|
||||||
try:
|
try:
|
||||||
await tool_fn(
|
await tool_fn(
|
||||||
action="create",
|
action="notification",
|
||||||
|
subaction="create",
|
||||||
title="t",
|
title="t",
|
||||||
subject="s",
|
subject="s",
|
||||||
description="d",
|
description="d",
|
||||||
@@ -743,13 +688,13 @@ class TestBoundaryValues:
|
|||||||
"""Adversarial key_id values must not crash the keys get action."""
|
"""Adversarial key_id values must not crash the keys get action."""
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
|
tool_fn = _make_tool()
|
||||||
with patch(
|
with patch(
|
||||||
"unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock
|
"unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock
|
||||||
) as mock:
|
) as mock:
|
||||||
mock.return_value = {"apiKey": None}
|
mock.return_value = {"apiKey": None}
|
||||||
try:
|
try:
|
||||||
await tool_fn(action="get", key_id=key_id)
|
await tool_fn(action="key", subaction="get", key_id=key_id)
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
@@ -759,49 +704,46 @@ class TestBoundaryValues:
|
|||||||
|
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
# Info: action guard (invalid actions on a read-only tool)
|
# Top-level action guard (invalid domain names)
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
|
||||||
class TestInfoActionGuard:
|
class TestInfoActionGuard:
|
||||||
"""Fuzz the action parameter on unraid_info.
|
"""Fuzz the top-level action parameter (domain selector).
|
||||||
|
|
||||||
Invariant: the info tool exposes no mutations and its action guard must
|
Invariant: the consolidated unraid tool must reject any invalid domain
|
||||||
reject any invalid action with a ToolError rather than a KeyError crash.
|
with a ToolError rather than a KeyError crash.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
@given(st.text())
|
@given(st.text())
|
||||||
@settings(max_examples=200, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
@settings(max_examples=200, suppress_health_check=[HealthCheck.function_scoped_fixture])
|
||||||
def test_invalid_action_raises_tool_error(self, action: str) -> None:
|
def test_invalid_action_raises_tool_error(self, action: str) -> None:
|
||||||
"""Invalid action names for the info tool raise ToolError."""
|
"""Invalid domain names raise ToolError."""
|
||||||
valid_actions = {
|
valid_actions = {
|
||||||
"overview",
|
|
||||||
"array",
|
"array",
|
||||||
"network",
|
"customization",
|
||||||
"registration",
|
"disk",
|
||||||
"variables",
|
"docker",
|
||||||
"metrics",
|
"health",
|
||||||
"services",
|
"key",
|
||||||
"display",
|
"live",
|
||||||
"config",
|
"notification",
|
||||||
"online",
|
"oidc",
|
||||||
"owner",
|
"plugin",
|
||||||
"settings",
|
"rclone",
|
||||||
"server",
|
"setting",
|
||||||
"servers",
|
"system",
|
||||||
"flash",
|
"user",
|
||||||
"ups_devices",
|
"vm",
|
||||||
"ups_device",
|
|
||||||
"ups_config",
|
|
||||||
}
|
}
|
||||||
if action in valid_actions:
|
if action in valid_actions:
|
||||||
return
|
return
|
||||||
|
|
||||||
async def _run_test():
|
async def _run_test():
|
||||||
tool_fn = make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
|
tool_fn = _make_tool()
|
||||||
with patch("unraid_mcp.tools.info.make_graphql_request", new_callable=AsyncMock):
|
with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock):
|
||||||
try:
|
try:
|
||||||
await tool_fn(action=action)
|
await tool_fn(action=action, subaction="list")
|
||||||
except ToolError:
|
except ToolError:
|
||||||
pass
|
pass
|
||||||
except Exception as exc:
|
except Exception as exc:
|
||||||
|
|||||||
@@ -1,6 +1,6 @@
|
|||||||
"""Safety audit tests for destructive action confirmation guards.
|
"""Safety audit tests for destructive action confirmation guards.
|
||||||
|
|
||||||
Verifies that all destructive operations across every tool require
|
Verifies that all destructive operations across every domain require
|
||||||
explicit `confirm=True` before execution, and that the DESTRUCTIVE_ACTIONS
|
explicit `confirm=True` before execution, and that the DESTRUCTIVE_ACTIONS
|
||||||
registries are complete and consistent.
|
registries are complete and consistent.
|
||||||
"""
|
"""
|
||||||
@@ -9,97 +9,75 @@ from collections.abc import Generator
|
|||||||
from unittest.mock import AsyncMock, patch
|
from unittest.mock import AsyncMock, patch
|
||||||
|
|
||||||
import pytest
|
import pytest
|
||||||
|
|
||||||
# conftest.py is the shared test-helper module for this project.
|
|
||||||
# pytest automatically adds tests/ to sys.path, making it importable here
|
|
||||||
# without a package __init__.py. Do NOT add tests/__init__.py — it breaks
|
|
||||||
# conftest.py's fixture auto-discovery.
|
|
||||||
from conftest import make_tool_fn
|
from conftest import make_tool_fn
|
||||||
|
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
# Import DESTRUCTIVE_ACTIONS sets from every tool module that defines one
|
# Import DESTRUCTIVE_ACTIONS and MUTATIONS sets from the consolidated unraid module
|
||||||
from unraid_mcp.tools.array import DESTRUCTIVE_ACTIONS as ARRAY_DESTRUCTIVE
|
from unraid_mcp.tools.unraid import (
|
||||||
from unraid_mcp.tools.array import MUTATIONS as ARRAY_MUTATIONS
|
_ARRAY_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.keys import DESTRUCTIVE_ACTIONS as KEYS_DESTRUCTIVE
|
_ARRAY_MUTATIONS,
|
||||||
from unraid_mcp.tools.keys import MUTATIONS as KEYS_MUTATIONS
|
_DISK_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.notifications import DESTRUCTIVE_ACTIONS as NOTIF_DESTRUCTIVE
|
_DISK_MUTATIONS,
|
||||||
from unraid_mcp.tools.notifications import MUTATIONS as NOTIF_MUTATIONS
|
_KEY_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.plugins import DESTRUCTIVE_ACTIONS as PLUGINS_DESTRUCTIVE
|
_KEY_MUTATIONS,
|
||||||
from unraid_mcp.tools.plugins import MUTATIONS as PLUGINS_MUTATIONS
|
_NOTIFICATION_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.rclone import DESTRUCTIVE_ACTIONS as RCLONE_DESTRUCTIVE
|
_NOTIFICATION_MUTATIONS,
|
||||||
from unraid_mcp.tools.rclone import MUTATIONS as RCLONE_MUTATIONS
|
_PLUGIN_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.settings import DESTRUCTIVE_ACTIONS as SETTINGS_DESTRUCTIVE
|
_PLUGIN_MUTATIONS,
|
||||||
from unraid_mcp.tools.settings import MUTATIONS as SETTINGS_MUTATIONS
|
_RCLONE_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.storage import DESTRUCTIVE_ACTIONS as STORAGE_DESTRUCTIVE
|
_RCLONE_MUTATIONS,
|
||||||
from unraid_mcp.tools.storage import MUTATIONS as STORAGE_MUTATIONS
|
_SETTING_DESTRUCTIVE,
|
||||||
from unraid_mcp.tools.virtualization import DESTRUCTIVE_ACTIONS as VM_DESTRUCTIVE
|
_SETTING_MUTATIONS,
|
||||||
from unraid_mcp.tools.virtualization import MUTATIONS as VM_MUTATIONS
|
_VM_DESTRUCTIVE,
|
||||||
|
_VM_MUTATIONS,
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
# Known destructive actions registry (ground truth for this audit)
|
# Known destructive actions registry (ground truth for this audit)
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
# Every destructive action in the codebase, keyed by (tool_module, tool_name)
|
KNOWN_DESTRUCTIVE: dict[str, dict] = {
|
||||||
KNOWN_DESTRUCTIVE: dict[str, dict[str, set[str] | str]] = {
|
|
||||||
"array": {
|
"array": {
|
||||||
"module": "unraid_mcp.tools.array",
|
|
||||||
"register_fn": "register_array_tool",
|
|
||||||
"tool_name": "unraid_array",
|
|
||||||
"actions": {"remove_disk", "clear_disk_stats", "stop_array"},
|
"actions": {"remove_disk", "clear_disk_stats", "stop_array"},
|
||||||
"runtime_set": ARRAY_DESTRUCTIVE,
|
"runtime_set": _ARRAY_DESTRUCTIVE,
|
||||||
|
+        "mutations": _ARRAY_MUTATIONS,
     },
     "vm": {
-        "module": "unraid_mcp.tools.virtualization",
-        "register_fn": "register_vm_tool",
-        "tool_name": "unraid_vm",
         "actions": {"force_stop", "reset"},
-        "runtime_set": VM_DESTRUCTIVE,
+        "runtime_set": _VM_DESTRUCTIVE,
+        "mutations": _VM_MUTATIONS,
     },
-    "notifications": {
-        "module": "unraid_mcp.tools.notifications",
-        "register_fn": "register_notifications_tool",
-        "tool_name": "unraid_notifications",
+    "notification": {
         "actions": {"delete", "delete_archived"},
-        "runtime_set": NOTIF_DESTRUCTIVE,
+        "runtime_set": _NOTIFICATION_DESTRUCTIVE,
+        "mutations": _NOTIFICATION_MUTATIONS,
     },
     "rclone": {
-        "module": "unraid_mcp.tools.rclone",
-        "register_fn": "register_rclone_tool",
-        "tool_name": "unraid_rclone",
         "actions": {"delete_remote"},
-        "runtime_set": RCLONE_DESTRUCTIVE,
+        "runtime_set": _RCLONE_DESTRUCTIVE,
+        "mutations": _RCLONE_MUTATIONS,
     },
-    "keys": {
-        "module": "unraid_mcp.tools.keys",
-        "register_fn": "register_keys_tool",
-        "tool_name": "unraid_keys",
+    "key": {
         "actions": {"delete"},
-        "runtime_set": KEYS_DESTRUCTIVE,
+        "runtime_set": _KEY_DESTRUCTIVE,
+        "mutations": _KEY_MUTATIONS,
     },
-    "storage": {
-        "module": "unraid_mcp.tools.storage",
-        "register_fn": "register_storage_tool",
-        "tool_name": "unraid_storage",
+    "disk": {
         "actions": {"flash_backup"},
-        "runtime_set": STORAGE_DESTRUCTIVE,
+        "runtime_set": _DISK_DESTRUCTIVE,
+        "mutations": _DISK_MUTATIONS,
     },
-    "settings": {
-        "module": "unraid_mcp.tools.settings",
-        "register_fn": "register_settings_tool",
-        "tool_name": "unraid_settings",
-        "actions": {
-            "configure_ups",
-        },
-        "runtime_set": SETTINGS_DESTRUCTIVE,
+    "setting": {
+        "actions": {"configure_ups"},
+        "runtime_set": _SETTING_DESTRUCTIVE,
+        "mutations": _SETTING_MUTATIONS,
     },
-    "plugins": {
-        "module": "unraid_mcp.tools.plugins",
-        "register_fn": "register_plugins_tool",
-        "tool_name": "unraid_plugins",
+    "plugin": {
         "actions": {"remove"},
-        "runtime_set": PLUGINS_DESTRUCTIVE,
+        "runtime_set": _PLUGIN_DESTRUCTIVE,
+        "mutations": _PLUGIN_MUTATIONS,
     },
 }
 
@@ -112,90 +90,53 @@ KNOWN_DESTRUCTIVE: dict[str, dict[str, set[str] | str]] = {
 class TestDestructiveActionRegistries:
     """Verify that DESTRUCTIVE_ACTIONS sets in source code match the audit."""
 
-    @pytest.mark.parametrize("tool_key", list(KNOWN_DESTRUCTIVE.keys()))
-    def test_destructive_set_matches_audit(self, tool_key: str) -> None:
-        """Each tool's DESTRUCTIVE_ACTIONS must exactly match the audited set."""
-        info = KNOWN_DESTRUCTIVE[tool_key]
+    @pytest.mark.parametrize("domain", list(KNOWN_DESTRUCTIVE.keys()))
+    def test_destructive_set_matches_audit(self, domain: str) -> None:
+        info = KNOWN_DESTRUCTIVE[domain]
         assert info["runtime_set"] == info["actions"], (
-            f"{tool_key}: DESTRUCTIVE_ACTIONS is {info['runtime_set']}, expected {info['actions']}"
+            f"{domain}: DESTRUCTIVE_ACTIONS is {info['runtime_set']}, expected {info['actions']}"
         )
 
-    @pytest.mark.parametrize("tool_key", list(KNOWN_DESTRUCTIVE.keys()))
-    def test_destructive_actions_are_valid_mutations(self, tool_key: str) -> None:
-        """Every destructive action must correspond to an actual mutation."""
-        info = KNOWN_DESTRUCTIVE[tool_key]
-        mutations_map = {
-            "array": ARRAY_MUTATIONS,
-            "vm": VM_MUTATIONS,
-            "notifications": NOTIF_MUTATIONS,
-            "rclone": RCLONE_MUTATIONS,
-            "keys": KEYS_MUTATIONS,
-            "storage": STORAGE_MUTATIONS,
-            "settings": SETTINGS_MUTATIONS,
-            "plugins": PLUGINS_MUTATIONS,
-        }
-        mutations = mutations_map[tool_key]
+    @pytest.mark.parametrize("domain", list(KNOWN_DESTRUCTIVE.keys()))
+    def test_destructive_actions_are_valid_mutations(self, domain: str) -> None:
+        info = KNOWN_DESTRUCTIVE[domain]
         for action in info["actions"]:
-            assert action in mutations, (
-                f"{tool_key}: destructive action '{action}' is not in MUTATIONS"
+            assert action in info["mutations"], (
+                f"{domain}: destructive action '{action}' is not in MUTATIONS"
             )
 
     def test_no_delete_or_remove_mutations_missing_from_destructive(self) -> None:
         """Any mutation with 'delete' or 'remove' in its name should be destructive.
 
         Exceptions (documented, intentional):
-        keys/remove_role — fully reversible; the role can always be re-added via add_role.
-        No data is lost and there is no irreversible side-effect.
+        key/remove_role — fully reversible; the role can always be re-added via add_role.
         """
-        # Mutations explicitly exempted from the delete/remove heuristic with justification.
-        # Add entries here only when the action is demonstrably reversible and non-destructive.
         _HEURISTIC_EXCEPTIONS: frozenset[str] = frozenset(
             {
-                "keys/remove_role",  # reversible — role can be re-added via add_role
+                "key/remove_role",  # reversible — role can be re-added via add_role
             }
         )
 
-        all_mutations = {
-            "array": ARRAY_MUTATIONS,
-            "vm": VM_MUTATIONS,
-            "notifications": NOTIF_MUTATIONS,
-            "rclone": RCLONE_MUTATIONS,
-            "keys": KEYS_MUTATIONS,
-            "storage": STORAGE_MUTATIONS,
-            "settings": SETTINGS_MUTATIONS,
-            "plugins": PLUGINS_MUTATIONS,
-        }
-        all_destructive = {
-            "array": ARRAY_DESTRUCTIVE,
-            "vm": VM_DESTRUCTIVE,
-            "notifications": NOTIF_DESTRUCTIVE,
-            "rclone": RCLONE_DESTRUCTIVE,
-            "keys": KEYS_DESTRUCTIVE,
-            "storage": STORAGE_DESTRUCTIVE,
-            "settings": SETTINGS_DESTRUCTIVE,
-            "plugins": PLUGINS_DESTRUCTIVE,
-        }
         missing: list[str] = []
-        for tool_key, mutations in all_mutations.items():
-            destructive = all_destructive[tool_key]
-            missing.extend(
-                f"{tool_key}/{action_name}"
-                for action_name in mutations
-                if ("delete" in action_name or "remove" in action_name)
-                and action_name not in destructive
-                and f"{tool_key}/{action_name}" not in _HEURISTIC_EXCEPTIONS
-            )
+        for domain, info in KNOWN_DESTRUCTIVE.items():
+            destructive = info["runtime_set"]
+            for action_name in info["mutations"]:
+                if (
+                    ("delete" in action_name or "remove" in action_name)
+                    and action_name not in destructive
+                    and f"{domain}/{action_name}" not in _HEURISTIC_EXCEPTIONS
+                ):
+                    missing.append(f"{domain}/{action_name}")
         assert not missing, (
             f"Mutations with 'delete'/'remove' not in DESTRUCTIVE_ACTIONS: {missing}"
         )
 
 
 # ---------------------------------------------------------------------------
-# Confirmation guard tests: calling without confirm=True raises ToolError
+# Confirmation guard tests
 # ---------------------------------------------------------------------------
 
-# Build parametrized test cases: (tool_key, action, kwargs_without_confirm)
-# Each destructive action needs the minimum required params (minus confirm)
+# (action, subaction, extra_kwargs)
 _DESTRUCTIVE_TEST_CASES: list[tuple[str, str, dict]] = [
     # Array
     ("array", "remove_disk", {"disk_id": "abc123:local"}),
@@ -205,161 +146,112 @@ _DESTRUCTIVE_TEST_CASES: list[tuple[str, str, dict]] = [
     ("vm", "force_stop", {"vm_id": "test-vm-uuid"}),
     ("vm", "reset", {"vm_id": "test-vm-uuid"}),
     # Notifications
-    ("notifications", "delete", {"notification_id": "notif-1", "notification_type": "UNREAD"}),
-    ("notifications", "delete_archived", {}),
+    ("notification", "delete", {"notification_id": "notif-1", "notification_type": "UNREAD"}),
+    ("notification", "delete_archived", {}),
     # RClone
     ("rclone", "delete_remote", {"name": "my-remote"}),
     # Keys
-    ("keys", "delete", {"key_id": "key-123"}),
-    # Storage
+    ("key", "delete", {"key_id": "key-123"}),
+    # Disk (flash_backup)
     (
-        "storage",
+        "disk",
         "flash_backup",
         {"remote_name": "r", "source_path": "/boot", "destination_path": "r:b"},
     ),
     # Settings
-    ("settings", "configure_ups", {"ups_config": {"mode": "slave"}}),
+    ("setting", "configure_ups", {"ups_config": {"mode": "slave"}}),
     # Plugins
-    ("plugins", "remove", {"names": ["my-plugin"]}),
+    ("plugin", "remove", {"names": ["my-plugin"]}),
 ]
 
 
 _CASE_IDS = [f"{c[0]}/{c[1]}" for c in _DESTRUCTIVE_TEST_CASES]
 
-
-@pytest.fixture
-def _mock_array_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.array.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
+_MODULE = "unraid_mcp.tools.unraid"
+_REGISTER_FN = "register_unraid_tool"
+_TOOL_NAME = "unraid"
 
 
 @pytest.fixture
-def _mock_vm_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock) as m:
+def _mock_graphql() -> Generator[AsyncMock, None, None]:
+    with patch(f"{_MODULE}.make_graphql_request", new_callable=AsyncMock) as m:
         yield m
 
 
-@pytest.fixture
-def _mock_notif_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-@pytest.fixture
-def _mock_rclone_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.rclone.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-@pytest.fixture
-def _mock_keys_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-@pytest.fixture
-def _mock_storage_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.storage.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-@pytest.fixture
-def _mock_settings_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.settings.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-@pytest.fixture
-def _mock_plugins_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.plugins.make_graphql_request", new_callable=AsyncMock) as m:
-        yield m
-
-
-# Map tool_key -> (module path, register fn, tool name)
-_TOOL_REGISTRY = {
-    "array": ("unraid_mcp.tools.array", "register_array_tool", "unraid_array"),
-    "vm": ("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm"),
-    "notifications": (
-        "unraid_mcp.tools.notifications",
-        "register_notifications_tool",
-        "unraid_notifications",
-    ),
-    "rclone": ("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone"),
-    "keys": ("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys"),
-    "storage": ("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage"),
-    "settings": ("unraid_mcp.tools.settings", "register_settings_tool", "unraid_settings"),
-    "plugins": ("unraid_mcp.tools.plugins", "register_plugins_tool", "unraid_plugins"),
-}
 
 
 class TestConfirmationGuards:
     """Every destructive action must reject calls without confirm=True."""
 
-    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
+    @pytest.mark.parametrize("action,subaction,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
     async def test_rejects_without_confirm(
         self,
-        tool_key: str,
         action: str,
+        subaction: str,
         kwargs: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
+        _mock_graphql: AsyncMock,
     ) -> None:
-        """Calling a destructive action without confirm=True must raise ToolError."""
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
         with pytest.raises(ToolError, match="confirm=True"):
-            await tool_fn(action=action, **kwargs)
+            await tool_fn(action=action, subaction=subaction, **kwargs)
 
-    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
+    @pytest.mark.parametrize("action,subaction,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
     async def test_rejects_with_confirm_false(
         self,
-        tool_key: str,
         action: str,
+        subaction: str,
         kwargs: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
+        _mock_graphql: AsyncMock,
    ) -> None:
-        """Explicitly passing confirm=False must still raise ToolError."""
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
         with pytest.raises(ToolError, match="confirm=True"):
-            await tool_fn(action=action, confirm=False, **kwargs)
+            await tool_fn(action=action, subaction=subaction, confirm=False, **kwargs)
 
-    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
-    async def test_error_message_includes_action_name(
+    @pytest.mark.parametrize("action,subaction,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
+    async def test_error_message_includes_subaction_name(
         self,
-        tool_key: str,
         action: str,
+        subaction: str,
         kwargs: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
+        _mock_graphql: AsyncMock,
     ) -> None:
-        """The error message should include the action name for clarity."""
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-
-        with pytest.raises(ToolError, match=action):
-            await tool_fn(action=action, **kwargs)
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        with pytest.raises(ToolError, match=subaction):
+            await tool_fn(action=action, subaction=subaction, **kwargs)
+
+
+# ---------------------------------------------------------------------------
+# Strict guard tests: no network calls escape when unconfirmed
+# ---------------------------------------------------------------------------
+
+
+class TestNoGraphQLCallsWhenUnconfirmed:
+    """The most critical safety property: when confirm is missing/False,
+    NO GraphQL request must ever reach the network layer.
+    """
+
+    @pytest.mark.parametrize("action,subaction,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
+    async def test_no_graphql_call_without_confirm(
+        self,
+        action: str,
+        subaction: str,
+        kwargs: dict,
+        _mock_graphql: AsyncMock,
+    ) -> None:
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        with pytest.raises(ToolError):
+            await tool_fn(action=action, subaction=subaction, **kwargs)
+        _mock_graphql.assert_not_called()
+
+    @pytest.mark.parametrize("action,subaction,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
+    async def test_no_graphql_call_with_confirm_false(
+        self,
+        action: str,
+        subaction: str,
+        kwargs: dict,
+        _mock_graphql: AsyncMock,
+    ) -> None:
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        with pytest.raises(ToolError):
+            await tool_fn(action=action, subaction=subaction, confirm=False, **kwargs)
+        _mock_graphql.assert_not_called()
 
 
 # ---------------------------------------------------------------------------
@@ -370,30 +262,29 @@ class TestConfirmationGuards:
 class TestConfirmAllowsExecution:
     """Destructive actions with confirm=True should reach the GraphQL layer."""
 
-    async def test_vm_force_stop_with_confirm(self, _mock_vm_graphql: AsyncMock) -> None:
-        _mock_vm_graphql.return_value = {"vm": {"forceStop": True}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
-        result = await tool_fn(action="force_stop", vm_id="test-uuid", confirm=True)
+    async def test_vm_force_stop_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"vm": {"forceStop": True}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action="vm", subaction="force_stop", vm_id="test-uuid", confirm=True)
         assert result["success"] is True
 
-    async def test_vm_reset_with_confirm(self, _mock_vm_graphql: AsyncMock) -> None:
-        _mock_vm_graphql.return_value = {"vm": {"reset": True}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
-        result = await tool_fn(action="reset", vm_id="test-uuid", confirm=True)
+    async def test_vm_reset_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"vm": {"reset": True}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action="vm", subaction="reset", vm_id="test-uuid", confirm=True)
         assert result["success"] is True
 
-    async def test_notifications_delete_with_confirm(self, _mock_notif_graphql: AsyncMock) -> None:
-        _mock_notif_graphql.return_value = {
+    async def test_notifications_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {
             "deleteNotification": {
                 "unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
                 "archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
             }
         }
-        tool_fn = make_tool_fn(
-            "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
-        )
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
         result = await tool_fn(
-            action="delete",
+            action="notification",
+            subaction="delete",
             notification_id="notif-1",
             notification_type="UNREAD",
             confirm=True,
@@ -401,43 +292,38 @@ class TestConfirmAllowsExecution:
         assert result["success"] is True
 
     async def test_notifications_delete_archived_with_confirm(
-        self, _mock_notif_graphql: AsyncMock
+        self, _mock_graphql: AsyncMock
     ) -> None:
-        _mock_notif_graphql.return_value = {
+        _mock_graphql.return_value = {
             "deleteArchivedNotifications": {
                 "unread": {"info": 0, "warning": 0, "alert": 0, "total": 0},
                 "archive": {"info": 0, "warning": 0, "alert": 0, "total": 0},
             }
         }
-        tool_fn = make_tool_fn(
-            "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
-        )
-        result = await tool_fn(action="delete_archived", confirm=True)
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action="notification", subaction="delete_archived", confirm=True)
         assert result["success"] is True
 
-    async def test_rclone_delete_remote_with_confirm(self, _mock_rclone_graphql: AsyncMock) -> None:
-        _mock_rclone_graphql.return_value = {"rclone": {"deleteRCloneRemote": True}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")
-        result = await tool_fn(action="delete_remote", name="my-remote", confirm=True)
-        assert result["success"] is True
-
-    async def test_keys_delete_with_confirm(self, _mock_keys_graphql: AsyncMock) -> None:
-        _mock_keys_graphql.return_value = {"apiKey": {"delete": True}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
-        result = await tool_fn(action="delete", key_id="key-123", confirm=True)
-        assert result["success"] is True
-
-    async def test_storage_flash_backup_with_confirm(
-        self, _mock_storage_graphql: AsyncMock
-    ) -> None:
-        _mock_storage_graphql.return_value = {
-            "initiateFlashBackup": {"status": "started", "jobId": "j:1"}
-        }
-        tool_fn = make_tool_fn(
-            "unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage"
-        )
+    async def test_rclone_delete_remote_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"rclone": {"deleteRCloneRemote": True}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
         result = await tool_fn(
-            action="flash_backup",
+            action="rclone", subaction="delete_remote", name="my-remote", confirm=True
+        )
+        assert result["success"] is True
+
+    async def test_keys_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"apiKey": {"delete": True}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action="key", subaction="delete", key_id="key-123", confirm=True)
+        assert result["success"] is True
+
+    async def test_disk_flash_backup_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"initiateFlashBackup": {"status": "started", "jobId": "j:1"}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(
+            action="disk",
+            subaction="flash_backup",
             confirm=True,
             remote_name="r",
             source_path="/boot",
@@ -445,125 +331,46 @@ class TestConfirmAllowsExecution:
         )
         assert result["success"] is True
 
-    async def test_settings_configure_ups_with_confirm(
-        self, _mock_settings_graphql: AsyncMock
-    ) -> None:
-        _mock_settings_graphql.return_value = {"configureUps": True}
-        tool_fn = make_tool_fn(
-            "unraid_mcp.tools.settings", "register_settings_tool", "unraid_settings"
-        )
+    async def test_settings_configure_ups_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"configureUps": True}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
         result = await tool_fn(
-            action="configure_ups", confirm=True, ups_config={"mode": "master", "cable": "usb"}
+            action="setting",
+            subaction="configure_ups",
+            confirm=True,
+            ups_config={"mode": "master", "cable": "usb"},
         )
         assert result["success"] is True
 
-    async def test_array_remove_disk_with_confirm(self, _mock_array_graphql: AsyncMock) -> None:
-        _mock_array_graphql.return_value = {"array": {"removeDiskFromArray": {"state": "STOPPED"}}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")
-        result = await tool_fn(action="remove_disk", disk_id="abc:local", confirm=True)
-        assert result["success"] is True
-
-    async def test_array_clear_disk_stats_with_confirm(
-        self, _mock_array_graphql: AsyncMock
-    ) -> None:
-        _mock_array_graphql.return_value = {"array": {"clearArrayDiskStatistics": True}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")
-        result = await tool_fn(action="clear_disk_stats", disk_id="abc:local", confirm=True)
-        assert result["success"] is True
-
-    async def test_array_stop_array_with_confirm(self, _mock_array_graphql: AsyncMock) -> None:
-        _mock_array_graphql.return_value = {"array": {"setState": {"state": "STOPPED"}}}
-        tool_fn = make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")
-        result = await tool_fn(action="stop_array", confirm=True)
-        assert result["success"] is True
-
-    async def test_plugins_remove_with_confirm(self, _mock_plugins_graphql: AsyncMock) -> None:
-        _mock_plugins_graphql.return_value = {"removePlugin": True}
-        tool_fn = make_tool_fn(
-            "unraid_mcp.tools.plugins", "register_plugins_tool", "unraid_plugins"
-        )
-        result = await tool_fn(action="remove", names=["my-plugin"], confirm=True)
-        assert result["success"] is True
-
-
-# ---------------------------------------------------------------------------
-# Strict guard tests: no network calls escape when unconfirmed
-# ---------------------------------------------------------------------------
-
-
-class TestNoGraphQLCallsWhenUnconfirmed:
-    """The most critical safety property: when confirm is missing/False,
-    NO GraphQL request must ever reach the network layer. This verifies that
-    the guard fires before any I/O, not just that a ToolError is raised.
-    """
-
-    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
-    async def test_no_graphql_call_without_confirm(
-        self,
-        tool_key: str,
-        action: str,
-        kwargs: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
-    ) -> None:
-        """make_graphql_request must NOT be called when confirm is absent."""
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-        mock_map = {
-            "array": _mock_array_graphql,
-            "vm": _mock_vm_graphql,
-            "notifications": _mock_notif_graphql,
-            "rclone": _mock_rclone_graphql,
-            "keys": _mock_keys_graphql,
-            "storage": _mock_storage_graphql,
-            "settings": _mock_settings_graphql,
-            "plugins": _mock_plugins_graphql,
-        }
-
-        with pytest.raises(ToolError):
-            await tool_fn(action=action, **kwargs)
-
-        mock_map[tool_key].assert_not_called()
-
-    @pytest.mark.parametrize("tool_key,action,kwargs", _DESTRUCTIVE_TEST_CASES, ids=_CASE_IDS)
-    async def test_no_graphql_call_with_confirm_false(
-        self,
-        tool_key: str,
-        action: str,
-        kwargs: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
-    ) -> None:
-        """make_graphql_request must NOT be called when confirm=False."""
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-        mock_map = {
-            "array": _mock_array_graphql,
-            "vm": _mock_vm_graphql,
-            "notifications": _mock_notif_graphql,
-            "rclone": _mock_rclone_graphql,
-            "keys": _mock_keys_graphql,
-            "storage": _mock_storage_graphql,
-            "settings": _mock_settings_graphql,
-            "plugins": _mock_plugins_graphql,
-        }
-
-        with pytest.raises(ToolError):
-            await tool_fn(action=action, confirm=False, **kwargs)
-
-        mock_map[tool_key].assert_not_called()
+    async def test_array_remove_disk_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"array": {"removeDiskFromArray": {"state": "STOPPED"}}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(
+            action="array", subaction="remove_disk", disk_id="abc:local", confirm=True
+        )
+        assert result["success"] is True
+
+    async def test_array_clear_disk_stats_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"array": {"clearArrayDiskStatistics": True}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(
+            action="array", subaction="clear_disk_stats", disk_id="abc:local", confirm=True
+        )
+        assert result["success"] is True
+
+    async def test_array_stop_array_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"array": {"setState": {"state": "STOPPED"}}}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action="array", subaction="stop_array", confirm=True)
+        assert result["success"] is True
+
+    async def test_plugins_remove_with_confirm(self, _mock_graphql: AsyncMock) -> None:
+        _mock_graphql.return_value = {"removePlugin": True}
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(
+            action="plugin", subaction="remove", names=["my-plugin"], confirm=True
+        )
+        assert result["success"] is True
 
 
 # ---------------------------------------------------------------------------
@@ -572,57 +379,35 @@ class TestNoGraphQLCallsWhenUnconfirmed:
 
 
 class TestNonDestructiveActionsNeverRequireConfirm:
-    """Guard regression test: non-destructive mutations must work without confirm.
-
-    If a non-destructive action starts requiring confirm=True (over-guarding),
-    it would break normal use cases. This test class prevents that regression.
-    """
+    """Guard regression: non-destructive ops must work without confirm."""
 
     @pytest.mark.parametrize(
-        "tool_key,action,kwargs,mock_return",
+        "action,subaction,kwargs,mock_return",
         [
             ("array", "parity_cancel", {}, {"parityCheck": {"cancel": True}}),
             ("vm", "start", {"vm_id": "test-uuid"}, {"vm": {"start": True}}),
-            ("notifications", "archive_all", {}, {"archiveAll": {"info": 0, "total": 0}}),
+            ("notification", "archive_all", {}, {"archiveAll": {"info": 0, "total": 0}}),
             ("rclone", "list_remotes", {}, {"rclone": {"remotes": []}}),
-            ("keys", "list", {}, {"apiKeys": []}),
+            ("key", "list", {}, {"apiKeys": []}),
         ],
         ids=[
             "array/parity_cancel",
             "vm/start",
-            "notifications/archive_all",
+            "notification/archive_all",
             "rclone/list_remotes",
-            "keys/list",
+            "key/list",
         ],
     )
     async def test_non_destructive_action_works_without_confirm(
         self,
-        tool_key: str,
         action: str,
+        subaction: str,
         kwargs: dict,
         mock_return: dict,
-        _mock_array_graphql: AsyncMock,
-        _mock_vm_graphql: AsyncMock,
-        _mock_notif_graphql: AsyncMock,
-        _mock_rclone_graphql: AsyncMock,
-        _mock_keys_graphql: AsyncMock,
-        _mock_storage_graphql: AsyncMock,
-        _mock_settings_graphql: AsyncMock,
-        _mock_plugins_graphql: AsyncMock,
+        _mock_graphql: AsyncMock,
     ) -> None:
-        """Non-destructive actions must not raise ToolError for missing confirm."""
-        mock_map = {
-            "array": _mock_array_graphql,
-            "vm": _mock_vm_graphql,
-            "notifications": _mock_notif_graphql,
-            "rclone": _mock_rclone_graphql,
-            "keys": _mock_keys_graphql,
-        }
-        mock_map[tool_key].return_value = mock_return
-
-        module_path, register_fn, tool_name = _TOOL_REGISTRY[tool_key]
-        tool_fn = make_tool_fn(module_path, register_fn, tool_name)
-        # Just verify no ToolError is raised for missing confirm — return shape varies by action
-        result = await tool_fn(action=action, **kwargs)
+        _mock_graphql.return_value = mock_return
+        tool_fn = make_tool_fn(_MODULE, _REGISTER_FN, _TOOL_NAME)
+        result = await tool_fn(action=action, subaction=subaction, **kwargs)
         assert result is not None
-        mock_map[tool_key].assert_called_once()
+        _mock_graphql.assert_called_once()
@@ -35,116 +35,116 @@ class TestInfoQueries:
     """Validate all queries from unraid_mcp/tools/info.py."""
 
     def test_overview_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["overview"])
         assert not errors, f"overview query validation failed: {errors}"
 
     def test_array_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["array"])
         assert not errors, f"array query validation failed: {errors}"
 
     def test_network_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["network"])
         assert not errors, f"network query validation failed: {errors}"
 
     def test_registration_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["registration"])
         assert not errors, f"registration query validation failed: {errors}"
 
     def test_variables_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["variables"])
         assert not errors, f"variables query validation failed: {errors}"
 
     def test_metrics_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["metrics"])
         assert not errors, f"metrics query validation failed: {errors}"
 
     def test_services_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["services"])
         assert not errors, f"services query validation failed: {errors}"
 
     def test_display_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["display"])
         assert not errors, f"display query validation failed: {errors}"
 
     def test_config_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["config"])
         assert not errors, f"config query validation failed: {errors}"
 
     def test_online_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["online"])
         assert not errors, f"online query validation failed: {errors}"
 
     def test_owner_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["owner"])
         assert not errors, f"owner query validation failed: {errors}"
 
     def test_settings_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["settings"])
         assert not errors, f"settings query validation failed: {errors}"
 
     def test_server_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["server"])
         assert not errors, f"server query validation failed: {errors}"
 
     def test_servers_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["servers"])
         assert not errors, f"servers query validation failed: {errors}"
 
     def test_flash_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["flash"])
         assert not errors, f"flash query validation failed: {errors}"
 
     def test_ups_devices_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["ups_devices"])
         assert not errors, f"ups_devices query validation failed: {errors}"
 
     def test_ups_device_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["ups_device"])
         assert not errors, f"ups_device query validation failed: {errors}"
 
     def test_ups_config_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["ups_config"])
         assert not errors, f"ups_config query validation failed: {errors}"
 
     def test_all_info_actions_covered(self, schema: GraphQLSchema) -> None:
         """Ensure every key in QUERIES has a corresponding test."""
-        from unraid_mcp.tools.info import QUERIES
+        from unraid_mcp.tools.unraid import _SYSTEM_QUERIES as QUERIES
 
         expected_actions = {
             "overview",
@@ -177,19 +177,19 @@ class TestArrayQueries:
     """Validate all queries from unraid_mcp/tools/array.py."""
 
     def test_parity_status_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import QUERIES
+        from unraid_mcp.tools.unraid import _ARRAY_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["parity_status"])
         assert not errors, f"parity_status query validation failed: {errors}"
 
     def test_parity_history_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import QUERIES
+        from unraid_mcp.tools.unraid import _ARRAY_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["parity_history"])
         assert not errors, f"parity_history query validation failed: {errors}"
 
     def test_all_array_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import QUERIES
+        from unraid_mcp.tools.unraid import _ARRAY_QUERIES as QUERIES
 
         assert set(QUERIES.keys()) == {"parity_status", "parity_history"}
 
@@ -198,73 +198,73 @@ class TestArrayMutations:
     """Validate all mutations from unraid_mcp/tools/array.py."""
 
     def test_parity_start_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["parity_start"])
         assert not errors, f"parity_start mutation validation failed: {errors}"
 
     def test_parity_pause_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["parity_pause"])
         assert not errors, f"parity_pause mutation validation failed: {errors}"
 
     def test_parity_resume_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["parity_resume"])
         assert not errors, f"parity_resume mutation validation failed: {errors}"
 
     def test_parity_cancel_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["parity_cancel"])
         assert not errors, f"parity_cancel mutation validation failed: {errors}"
 
     def test_start_array_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["start_array"])
         assert not errors, f"start_array mutation validation failed: {errors}"
 
     def test_stop_array_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["stop_array"])
         assert not errors, f"stop_array mutation validation failed: {errors}"
 
     def test_add_disk_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["add_disk"])
         assert not errors, f"add_disk mutation validation failed: {errors}"
 
     def test_remove_disk_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["remove_disk"])
         assert not errors, f"remove_disk mutation validation failed: {errors}"
 
     def test_mount_disk_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["mount_disk"])
         assert not errors, f"mount_disk mutation validation failed: {errors}"
 
     def test_unmount_disk_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["unmount_disk"])
         assert not errors, f"unmount_disk mutation validation failed: {errors}"
 
     def test_clear_disk_stats_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["clear_disk_stats"])
         assert not errors, f"clear_disk_stats mutation validation failed: {errors}"
 
     def test_all_array_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.array import MUTATIONS
+        from unraid_mcp.tools.unraid import _ARRAY_MUTATIONS as MUTATIONS
 
         expected = {
             "parity_start",
@@ -289,37 +289,37 @@ class TestStorageQueries:
     """Validate all queries from unraid_mcp/tools/storage.py."""
 
     def test_shares_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["shares"])
         assert not errors, f"shares query validation failed: {errors}"
 
     def test_disks_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["disks"])
         assert not errors, f"disks query validation failed: {errors}"
 
     def test_disk_details_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["disk_details"])
         assert not errors, f"disk_details query validation failed: {errors}"
 
     def test_log_files_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["log_files"])
         assert not errors, f"log_files query validation failed: {errors}"
 
     def test_logs_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["logs"])
         assert not errors, f"logs query validation failed: {errors}"
 
     def test_all_storage_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import QUERIES
+        from unraid_mcp.tools.unraid import _DISK_QUERIES as QUERIES
 
         expected = {"shares", "disks", "disk_details", "log_files", "logs"}
         assert set(QUERIES.keys()) == expected
@@ -329,13 +329,13 @@ class TestStorageMutations:
     """Validate all mutations from unraid_mcp/tools/storage.py."""
 
     def test_flash_backup_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import MUTATIONS
+        from unraid_mcp.tools.unraid import _DISK_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["flash_backup"])
         assert not errors, f"flash_backup mutation validation failed: {errors}"
 
     def test_all_storage_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.storage import MUTATIONS
+        from unraid_mcp.tools.unraid import _DISK_MUTATIONS as MUTATIONS
 
         assert set(MUTATIONS.keys()) == {"flash_backup"}
 
@@ -347,31 +347,31 @@ class TestDockerQueries:
     """Validate all queries from unraid_mcp/tools/docker.py."""
 
     def test_list_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import QUERIES
+        from unraid_mcp.tools.unraid import _DOCKER_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"list query validation failed: {errors}"
 
     def test_details_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import QUERIES
+        from unraid_mcp.tools.unraid import _DOCKER_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["details"])
         assert not errors, f"details query validation failed: {errors}"
 
     def test_networks_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import QUERIES
+        from unraid_mcp.tools.unraid import _DOCKER_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["networks"])
         assert not errors, f"networks query validation failed: {errors}"
 
     def test_network_details_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import QUERIES
+        from unraid_mcp.tools.unraid import _DOCKER_QUERIES as QUERIES
 
         errors = _validate_operation(schema, QUERIES["network_details"])
         assert not errors, f"network_details query validation failed: {errors}"
 
     def test_all_docker_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import QUERIES
+        from unraid_mcp.tools.unraid import _DOCKER_QUERIES as QUERIES
 
         expected = {
             "list",
@@ -386,19 +386,19 @@ class TestDockerMutations:
    """Validate all mutations from unraid_mcp/tools/docker.py."""
 
     def test_start_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import MUTATIONS
+        from unraid_mcp.tools.unraid import _DOCKER_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["start"])
         assert not errors, f"start mutation validation failed: {errors}"
 
     def test_stop_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import MUTATIONS
+        from unraid_mcp.tools.unraid import _DOCKER_MUTATIONS as MUTATIONS
 
         errors = _validate_operation(schema, MUTATIONS["stop"])
         assert not errors, f"stop mutation validation failed: {errors}"
 
     def test_all_docker_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.docker import MUTATIONS
+        from unraid_mcp.tools.unraid import _DOCKER_MUTATIONS as MUTATIONS
 
         expected = {
             "start",
@@ -414,19 +414,19 @@ class TestVmQueries:
     """Validate all queries from unraid_mcp/tools/virtualization.py."""

     def test_list_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import QUERIES
+        from unraid_mcp.tools.unraid import _VM_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"list query validation failed: {errors}"

     def test_details_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import QUERIES
+        from unraid_mcp.tools.unraid import _VM_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["details"])
         assert not errors, f"details query validation failed: {errors}"

     def test_all_vm_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import QUERIES
+        from unraid_mcp.tools.unraid import _VM_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"list", "details"}

@@ -435,49 +435,49 @@ class TestVmMutations:
     """Validate all mutations from unraid_mcp/tools/virtualization.py."""

     def test_start_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["start"])
         assert not errors, f"start mutation validation failed: {errors}"

     def test_stop_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["stop"])
         assert not errors, f"stop mutation validation failed: {errors}"

     def test_pause_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["pause"])
         assert not errors, f"pause mutation validation failed: {errors}"

     def test_resume_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["resume"])
         assert not errors, f"resume mutation validation failed: {errors}"

     def test_force_stop_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["force_stop"])
         assert not errors, f"force_stop mutation validation failed: {errors}"

     def test_reboot_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["reboot"])
         assert not errors, f"reboot mutation validation failed: {errors}"

     def test_reset_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["reset"])
         assert not errors, f"reset mutation validation failed: {errors}"

     def test_all_vm_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.virtualization import MUTATIONS
+        from unraid_mcp.tools.unraid import _VM_MUTATIONS as MUTATIONS

         expected = {"start", "stop", "pause", "resume", "force_stop", "reboot", "reset"}
         assert set(MUTATIONS.keys()) == expected
@@ -490,19 +490,19 @@ class TestNotificationQueries:
    """Validate all queries from unraid_mcp/tools/notifications.py."""

     def test_overview_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import QUERIES
+        from unraid_mcp.tools.unraid import _NOTIFICATION_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["overview"])
         assert not errors, f"overview query validation failed: {errors}"

     def test_list_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import QUERIES
+        from unraid_mcp.tools.unraid import _NOTIFICATION_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"list query validation failed: {errors}"

     def test_all_notification_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import QUERIES
+        from unraid_mcp.tools.unraid import _NOTIFICATION_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"overview", "list"}

@@ -511,67 +511,67 @@ class TestNotificationMutations:
    """Validate all mutations from unraid_mcp/tools/notifications.py."""

     def test_create_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["create"])
         assert not errors, f"create mutation validation failed: {errors}"

     def test_archive_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["archive"])
         assert not errors, f"archive mutation validation failed: {errors}"

     def test_unread_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["unread"])
         assert not errors, f"unread mutation validation failed: {errors}"

     def test_delete_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["delete"])
         assert not errors, f"delete mutation validation failed: {errors}"

     def test_delete_archived_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["delete_archived"])
         assert not errors, f"delete_archived mutation validation failed: {errors}"

     def test_archive_all_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["archive_all"])
         assert not errors, f"archive_all mutation validation failed: {errors}"

     def test_archive_many_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["archive_many"])
         assert not errors, f"archive_many mutation validation failed: {errors}"

     def test_unarchive_many_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["unarchive_many"])
         assert not errors, f"unarchive_many mutation validation failed: {errors}"

     def test_unarchive_all_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["unarchive_all"])
         assert not errors, f"unarchive_all mutation validation failed: {errors}"

     def test_recalculate_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["recalculate"])
         assert not errors, f"recalculate mutation validation failed: {errors}"

     def test_all_notification_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.notifications import MUTATIONS
+        from unraid_mcp.tools.unraid import _NOTIFICATION_MUTATIONS as MUTATIONS

         expected = {
             "create",
@@ -595,19 +595,19 @@ class TestRcloneQueries:
    """Validate all queries from unraid_mcp/tools/rclone.py."""

     def test_list_remotes_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import QUERIES
+        from unraid_mcp.tools.unraid import _RCLONE_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["list_remotes"])
         assert not errors, f"list_remotes query validation failed: {errors}"

     def test_config_form_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import QUERIES
+        from unraid_mcp.tools.unraid import _RCLONE_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["config_form"])
         assert not errors, f"config_form query validation failed: {errors}"

     def test_all_rclone_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import QUERIES
+        from unraid_mcp.tools.unraid import _RCLONE_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"list_remotes", "config_form"}

@@ -616,19 +616,19 @@ class TestRcloneMutations:
    """Validate all mutations from unraid_mcp/tools/rclone.py."""

     def test_create_remote_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import MUTATIONS
+        from unraid_mcp.tools.unraid import _RCLONE_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["create_remote"])
         assert not errors, f"create_remote mutation validation failed: {errors}"

     def test_delete_remote_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import MUTATIONS
+        from unraid_mcp.tools.unraid import _RCLONE_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["delete_remote"])
         assert not errors, f"delete_remote mutation validation failed: {errors}"

     def test_all_rclone_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.rclone import MUTATIONS
+        from unraid_mcp.tools.unraid import _RCLONE_MUTATIONS as MUTATIONS

         assert set(MUTATIONS.keys()) == {"create_remote", "delete_remote"}

@@ -640,13 +640,13 @@ class TestUsersQueries:
    """Validate all queries from unraid_mcp/tools/users.py."""

     def test_me_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.users import QUERIES
+        from unraid_mcp.tools.unraid import _USER_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["me"])
         assert not errors, f"me query validation failed: {errors}"

     def test_all_users_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.users import QUERIES
+        from unraid_mcp.tools.unraid import _USER_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"me"}

@@ -658,19 +658,19 @@ class TestKeysQueries:
    """Validate all queries from unraid_mcp/tools/keys.py."""

     def test_list_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import QUERIES
+        from unraid_mcp.tools.unraid import _KEY_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"list query validation failed: {errors}"

     def test_get_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import QUERIES
+        from unraid_mcp.tools.unraid import _KEY_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["get"])
         assert not errors, f"get query validation failed: {errors}"

     def test_all_keys_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import QUERIES
+        from unraid_mcp.tools.unraid import _KEY_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"list", "get"}

@@ -679,37 +679,37 @@ class TestKeysMutations:
    """Validate all mutations from unraid_mcp/tools/keys.py."""

     def test_create_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["create"])
         assert not errors, f"create mutation validation failed: {errors}"

     def test_update_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["update"])
         assert not errors, f"update mutation validation failed: {errors}"

     def test_delete_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["delete"])
         assert not errors, f"delete mutation validation failed: {errors}"

     def test_add_role_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["add_role"])
         assert not errors, f"add_role mutation validation failed: {errors}"

     def test_remove_role_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["remove_role"])
         assert not errors, f"remove_role mutation validation failed: {errors}"

     def test_all_keys_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.keys import MUTATIONS
+        from unraid_mcp.tools.unraid import _KEY_MUTATIONS as MUTATIONS

         assert set(MUTATIONS.keys()) == {"create", "update", "delete", "add_role", "remove_role"}

@@ -721,19 +721,19 @@ class TestSettingsMutations:
    """Validate all mutations from unraid_mcp/tools/settings.py."""

     def test_update_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.settings import MUTATIONS
+        from unraid_mcp.tools.unraid import _SETTING_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["update"])
         assert not errors, f"update mutation validation failed: {errors}"

     def test_configure_ups_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.settings import MUTATIONS
+        from unraid_mcp.tools.unraid import _SETTING_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["configure_ups"])
         assert not errors, f"configure_ups mutation validation failed: {errors}"

     def test_all_settings_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.settings import MUTATIONS
+        from unraid_mcp.tools.unraid import _SETTING_MUTATIONS as MUTATIONS

         expected = {
             "update",
@@ -790,7 +790,7 @@ class TestCustomizationQueries:
         assert not errors, f"is_initial_setup (isFreshInstall) query validation failed: {errors}"

     def test_sso_enabled_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.customization import QUERIES
+        from unraid_mcp.tools.unraid import _CUSTOMIZATION_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["sso_enabled"])
         assert not errors, f"sso_enabled query validation failed: {errors}"
@@ -805,13 +805,13 @@ class TestCustomizationMutations:
    """Validate mutations from unraid_mcp/tools/customization.py."""

     def test_set_theme_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.customization import MUTATIONS
+        from unraid_mcp.tools.unraid import _CUSTOMIZATION_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["set_theme"])
         assert not errors, f"set_theme mutation validation failed: {errors}"

     def test_all_customization_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.customization import MUTATIONS
+        from unraid_mcp.tools.unraid import _CUSTOMIZATION_MUTATIONS as MUTATIONS

         assert set(MUTATIONS.keys()) == {"set_theme"}

@@ -823,13 +823,13 @@ class TestPluginsQueries:
    """Validate all queries from unraid_mcp/tools/plugins.py."""

     def test_list_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.plugins import QUERIES
+        from unraid_mcp.tools.unraid import _PLUGIN_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["list"])
         assert not errors, f"plugins list query validation failed: {errors}"

     def test_all_plugins_queries_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.plugins import QUERIES
+        from unraid_mcp.tools.unraid import _PLUGIN_QUERIES as QUERIES

         assert set(QUERIES.keys()) == {"list"}

@@ -838,19 +838,19 @@ class TestPluginsMutations:
    """Validate all mutations from unraid_mcp/tools/plugins.py."""

     def test_add_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.plugins import MUTATIONS
+        from unraid_mcp.tools.unraid import _PLUGIN_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["add"])
         assert not errors, f"plugins add mutation validation failed: {errors}"

     def test_remove_mutation(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.plugins import MUTATIONS
+        from unraid_mcp.tools.unraid import _PLUGIN_MUTATIONS as MUTATIONS

         errors = _validate_operation(schema, MUTATIONS["remove"])
         assert not errors, f"plugins remove mutation validation failed: {errors}"

     def test_all_plugins_mutations_covered(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.plugins import MUTATIONS
+        from unraid_mcp.tools.unraid import _PLUGIN_MUTATIONS as MUTATIONS

         assert set(MUTATIONS.keys()) == {"add", "remove"}

@@ -862,37 +862,37 @@ class TestOidcQueries:
    """Validate all queries from unraid_mcp/tools/oidc.py."""

     def test_providers_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.oidc import QUERIES
+        from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["providers"])
         assert not errors, f"oidc providers query validation failed: {errors}"

     def test_provider_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.oidc import QUERIES
+        from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["provider"])
         assert not errors, f"oidc provider query validation failed: {errors}"

     def test_configuration_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.oidc import QUERIES
+        from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["configuration"])
         assert not errors, f"oidc configuration query validation failed: {errors}"

     def test_public_providers_query(self, schema: GraphQLSchema) -> None:
-        from unraid_mcp.tools.oidc import QUERIES
+        from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES

         errors = _validate_operation(schema, QUERIES["public_providers"])
         assert not errors, f"oidc public_providers query validation failed: {errors}"
|
assert not errors, f"oidc public_providers query validation failed: {errors}"
|
||||||
|
|
||||||
def test_validate_session_query(self, schema: GraphQLSchema) -> None:
|
def test_validate_session_query(self, schema: GraphQLSchema) -> None:
|
||||||
from unraid_mcp.tools.oidc import QUERIES
|
from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES
|
||||||
|
|
||||||
errors = _validate_operation(schema, QUERIES["validate_session"])
|
errors = _validate_operation(schema, QUERIES["validate_session"])
|
||||||
assert not errors, f"oidc validate_session query validation failed: {errors}"
|
assert not errors, f"oidc validate_session query validation failed: {errors}"
|
||||||
|
|
||||||
def test_all_oidc_queries_covered(self, schema: GraphQLSchema) -> None:
|
def test_all_oidc_queries_covered(self, schema: GraphQLSchema) -> None:
|
||||||
from unraid_mcp.tools.oidc import QUERIES
|
from unraid_mcp.tools.unraid import _OIDC_QUERIES as QUERIES
|
||||||
|
|
||||||
expected = {
|
expected = {
|
||||||
"providers",
|
"providers",
|
||||||
@@ -911,36 +911,43 @@ class TestSchemaCompleteness:
|
|||||||
"""Validate that all tool operations are covered by the schema."""
|
"""Validate that all tool operations are covered by the schema."""
|
||||||
|
|
||||||
def test_all_tool_queries_validate(self, schema: GraphQLSchema) -> None:
|
def test_all_tool_queries_validate(self, schema: GraphQLSchema) -> None:
|
||||||
"""Bulk-validate every query across all tools.
|
"""Bulk-validate every query/mutation across all domains in the consolidated unraid module.
|
||||||
|
|
||||||
Known schema mismatches are tracked in KNOWN_SCHEMA_ISSUES and excluded
|
Known schema mismatches are tracked in KNOWN_SCHEMA_ISSUES and excluded
|
||||||
from the assertion so the test suite stays green while the underlying
|
from the assertion so the test suite stays green while the underlying
|
||||||
tool queries are fixed incrementally.
|
tool queries are fixed incrementally.
|
||||||
"""
|
"""
|
||||||
import importlib
|
import unraid_mcp.tools.unraid as unraid_mod
|
||||||
|
|
||||||
tool_modules = [
|
# All query/mutation dicts in the consolidated module, keyed by domain/type label
|
||||||
"unraid_mcp.tools.info",
|
all_operation_dicts: list[tuple[str, dict[str, str]]] = [
|
||||||
"unraid_mcp.tools.array",
|
("system/QUERIES", unraid_mod._SYSTEM_QUERIES),
|
||||||
"unraid_mcp.tools.storage",
|
("array/QUERIES", unraid_mod._ARRAY_QUERIES),
|
||||||
"unraid_mcp.tools.docker",
|
("array/MUTATIONS", unraid_mod._ARRAY_MUTATIONS),
|
||||||
"unraid_mcp.tools.virtualization",
|
("disk/QUERIES", unraid_mod._DISK_QUERIES),
|
||||||
"unraid_mcp.tools.notifications",
|
("disk/MUTATIONS", unraid_mod._DISK_MUTATIONS),
|
||||||
"unraid_mcp.tools.rclone",
|
("docker/QUERIES", unraid_mod._DOCKER_QUERIES),
|
||||||
"unraid_mcp.tools.users",
|
("docker/MUTATIONS", unraid_mod._DOCKER_MUTATIONS),
|
||||||
"unraid_mcp.tools.keys",
|
("vm/QUERIES", unraid_mod._VM_QUERIES),
|
||||||
"unraid_mcp.tools.settings",
|
("vm/MUTATIONS", unraid_mod._VM_MUTATIONS),
|
||||||
"unraid_mcp.tools.customization",
|
("notification/QUERIES", unraid_mod._NOTIFICATION_QUERIES),
|
||||||
"unraid_mcp.tools.plugins",
|
("notification/MUTATIONS", unraid_mod._NOTIFICATION_MUTATIONS),
|
||||||
"unraid_mcp.tools.oidc",
|
("rclone/QUERIES", unraid_mod._RCLONE_QUERIES),
|
||||||
|
("rclone/MUTATIONS", unraid_mod._RCLONE_MUTATIONS),
|
||||||
|
("user/QUERIES", unraid_mod._USER_QUERIES),
|
||||||
|
("key/QUERIES", unraid_mod._KEY_QUERIES),
|
||||||
|
("key/MUTATIONS", unraid_mod._KEY_MUTATIONS),
|
||||||
|
("setting/MUTATIONS", unraid_mod._SETTING_MUTATIONS),
|
||||||
|
("customization/QUERIES", unraid_mod._CUSTOMIZATION_QUERIES),
|
||||||
|
("customization/MUTATIONS", unraid_mod._CUSTOMIZATION_MUTATIONS),
|
||||||
|
("plugin/QUERIES", unraid_mod._PLUGIN_QUERIES),
|
||||||
|
("plugin/MUTATIONS", unraid_mod._PLUGIN_MUTATIONS),
|
||||||
|
("oidc/QUERIES", unraid_mod._OIDC_QUERIES),
|
||||||
]
|
]
|
||||||
|
|
||||||
# Known schema mismatches in tool QUERIES/MUTATIONS dicts.
|
# Known schema mismatches — bugs in tool implementation, not in tests.
|
||||||
# These represent bugs in the tool implementation, not in the tests.
|
# Remove entries as they are fixed.
|
||||||
# Remove entries from this set as they are fixed.
|
|
||||||
KNOWN_SCHEMA_ISSUES: set[str] = {
|
KNOWN_SCHEMA_ISSUES: set[str] = {
|
||||||
# storage: unassignedDevices not in Query type
|
|
||||||
"storage/QUERIES/unassigned",
|
|
||||||
# customization: Customization.theme field does not exist
|
# customization: Customization.theme field does not exist
|
||||||
"customization/QUERIES/theme",
|
"customization/QUERIES/theme",
|
||||||
# customization: publicPartnerInfo not in Query type
|
# customization: publicPartnerInfo not in Query type
|
||||||
@@ -953,26 +960,10 @@ class TestSchemaCompleteness:
|
|||||||
unexpected_passes: list[str] = []
|
unexpected_passes: list[str] = []
|
||||||
total = 0
|
total = 0
|
||||||
|
|
||||||
for module_path in tool_modules:
|
for label, ops_dict in all_operation_dicts:
|
||||||
mod = importlib.import_module(module_path)
|
for action, query_str in ops_dict.items():
|
||||||
tool_name = module_path.split(".")[-1]
|
|
||||||
|
|
||||||
queries = getattr(mod, "QUERIES", {})
|
|
||||||
for action, query_str in queries.items():
|
|
||||||
total += 1
|
total += 1
|
||||||
key = f"{tool_name}/QUERIES/{action}"
|
key = f"{label}/{action}"
|
||||||
errors = _validate_operation(schema, query_str)
|
|
||||||
if errors:
|
|
||||||
if key not in KNOWN_SCHEMA_ISSUES:
|
|
||||||
failures.append(f"{key}: {errors[0]}")
|
|
||||||
else:
|
|
||||||
if key in KNOWN_SCHEMA_ISSUES:
|
|
||||||
unexpected_passes.append(key)
|
|
||||||
|
|
||||||
mutations = getattr(mod, "MUTATIONS", {})
|
|
||||||
for action, query_str in mutations.items():
|
|
||||||
total += 1
|
|
||||||
key = f"{tool_name}/MUTATIONS/{action}"
|
|
||||||
errors = _validate_operation(schema, query_str)
|
errors = _validate_operation(schema, query_str)
|
||||||
if errors:
|
if errors:
|
||||||
if key not in KNOWN_SCHEMA_ISSUES:
|
if key not in KNOWN_SCHEMA_ISSUES:
|
||||||
@@ -982,7 +973,6 @@ class TestSchemaCompleteness:
|
|||||||
unexpected_passes.append(key)
|
unexpected_passes.append(key)
|
||||||
|
|
||||||
if unexpected_passes:
|
if unexpected_passes:
|
||||||
# A known issue was fixed — remove it from KNOWN_SCHEMA_ISSUES
|
|
||||||
raise AssertionError(
|
raise AssertionError(
|
||||||
"The following operations are listed in KNOWN_SCHEMA_ISSUES but now pass — "
|
"The following operations are listed in KNOWN_SCHEMA_ISSUES but now pass — "
|
||||||
"remove them from the set:\n" + "\n".join(unexpected_passes)
|
"remove them from the set:\n" + "\n".join(unexpected_passes)
|
||||||
@@ -1003,29 +993,32 @@ class TestSchemaCompleteness:
|
|||||||
|
|
||||||
def test_total_operations_count(self, schema: GraphQLSchema) -> None:
|
def test_total_operations_count(self, schema: GraphQLSchema) -> None:
|
||||||
"""Verify the expected number of tool operations exist."""
|
"""Verify the expected number of tool operations exist."""
|
||||||
import importlib
|
import unraid_mcp.tools.unraid as unraid_mod
|
||||||
|
|
||||||
tool_modules = [
|
all_dicts = [
|
||||||
"unraid_mcp.tools.info",
|
unraid_mod._SYSTEM_QUERIES,
|
||||||
"unraid_mcp.tools.array",
|
unraid_mod._ARRAY_QUERIES,
|
||||||
"unraid_mcp.tools.storage",
|
unraid_mod._ARRAY_MUTATIONS,
|
||||||
"unraid_mcp.tools.docker",
|
unraid_mod._DISK_QUERIES,
|
||||||
"unraid_mcp.tools.virtualization",
|
unraid_mod._DISK_MUTATIONS,
|
||||||
"unraid_mcp.tools.notifications",
|
unraid_mod._DOCKER_QUERIES,
|
||||||
"unraid_mcp.tools.rclone",
|
unraid_mod._DOCKER_MUTATIONS,
|
||||||
"unraid_mcp.tools.users",
|
unraid_mod._VM_QUERIES,
|
||||||
"unraid_mcp.tools.keys",
|
unraid_mod._VM_MUTATIONS,
|
||||||
"unraid_mcp.tools.settings",
|
unraid_mod._NOTIFICATION_QUERIES,
|
||||||
"unraid_mcp.tools.customization",
|
unraid_mod._NOTIFICATION_MUTATIONS,
|
||||||
"unraid_mcp.tools.plugins",
|
unraid_mod._RCLONE_QUERIES,
|
||||||
"unraid_mcp.tools.oidc",
|
unraid_mod._RCLONE_MUTATIONS,
|
||||||
|
unraid_mod._USER_QUERIES,
|
||||||
|
unraid_mod._KEY_QUERIES,
|
||||||
|
unraid_mod._KEY_MUTATIONS,
|
||||||
|
unraid_mod._SETTING_MUTATIONS,
|
||||||
|
unraid_mod._CUSTOMIZATION_QUERIES,
|
||||||
|
unraid_mod._CUSTOMIZATION_MUTATIONS,
|
||||||
|
unraid_mod._PLUGIN_QUERIES,
|
||||||
|
unraid_mod._PLUGIN_MUTATIONS,
|
||||||
|
unraid_mod._OIDC_QUERIES,
|
||||||
]
|
]
|
||||||
|
|
||||||
total = 0
|
total = sum(len(d) for d in all_dicts)
|
||||||
for module_path in tool_modules:
|
|
||||||
mod = importlib.import_module(module_path)
|
|
||||||
total += len(getattr(mod, "QUERIES", {}))
|
|
||||||
total += len(getattr(mod, "MUTATIONS", {}))
|
|
||||||
|
|
||||||
# Operations across all tools (queries + mutations in dicts)
|
|
||||||
assert total >= 50, f"Expected at least 50 operations, found {total}"
|
assert total >= 50, f"Expected at least 50 operations, found {total}"
|
||||||
|
|||||||
@@ -1,4 +1,4 @@
|
|||||||
"""Tests for unraid_array tool."""
|
"""Tests for array subactions of the consolidated unraid tool."""
|
||||||
|
|
||||||
from collections.abc import Generator
|
from collections.abc import Generator
|
||||||
from unittest.mock import AsyncMock, patch
|
from unittest.mock import AsyncMock, patch
|
||||||
@@ -11,36 +11,36 @@ from unraid_mcp.core.exceptions import ToolError
|
|||||||
|
|
||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
def _mock_graphql() -> Generator[AsyncMock, None, None]:
|
def _mock_graphql() -> Generator[AsyncMock, None, None]:
|
||||||
with patch("unraid_mcp.tools.array.make_graphql_request", new_callable=AsyncMock) as mock:
|
with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
|
||||||
yield mock
|
yield mock
|
||||||
|
|
||||||
|
|
||||||
def _make_tool():
|
def _make_tool():
|
||||||
return make_tool_fn("unraid_mcp.tools.array", "register_array_tool", "unraid_array")
|
return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
|
||||||
|
|
||||||
|
|
||||||
class TestArrayValidation:
|
class TestArrayValidation:
|
||||||
async def test_invalid_action_rejected(self, _mock_graphql: AsyncMock) -> None:
|
async def test_invalid_subaction_rejected(self, _mock_graphql: AsyncMock) -> None:
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Invalid action"):
|
with pytest.raises(ToolError, match="Invalid subaction"):
|
||||||
await tool_fn(action="start")
|
await tool_fn(action="array", subaction="start")
|
||||||
|
|
||||||
async def test_removed_actions_are_invalid(self, _mock_graphql: AsyncMock) -> None:
|
async def test_removed_actions_are_invalid(self, _mock_graphql: AsyncMock) -> None:
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
for action in (
|
for subaction in (
|
||||||
"start",
|
"start",
|
||||||
"stop",
|
"stop",
|
||||||
"shutdown",
|
"shutdown",
|
||||||
"reboot",
|
"reboot",
|
||||||
"clear_stats",
|
"clear_stats",
|
||||||
):
|
):
|
||||||
with pytest.raises(ToolError, match="Invalid action"):
|
with pytest.raises(ToolError, match="Invalid subaction"):
|
||||||
await tool_fn(action=action)
|
await tool_fn(action="array", subaction=subaction)
|
||||||
|
|
||||||
async def test_parity_start_requires_correct(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_start_requires_correct(self, _mock_graphql: AsyncMock) -> None:
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="correct is required"):
|
with pytest.raises(ToolError, match="correct is required"):
|
||||||
await tool_fn(action="parity_start")
|
await tool_fn(action="array", subaction="parity_start")
|
||||||
_mock_graphql.assert_not_called()
|
_mock_graphql.assert_not_called()
|
||||||
|
|
||||||
|
|
||||||
@@ -48,9 +48,9 @@ class TestArrayActions:
|
|||||||
async def test_parity_start(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_start(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"start": True}}
|
_mock_graphql.return_value = {"parityCheck": {"start": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_start", correct=False)
|
result = await tool_fn(action="array", subaction="parity_start", correct=False)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["action"] == "parity_start"
|
assert result["subaction"] == "parity_start"
|
||||||
_mock_graphql.assert_called_once()
|
_mock_graphql.assert_called_once()
|
||||||
call_args = _mock_graphql.call_args
|
call_args = _mock_graphql.call_args
|
||||||
assert call_args[0][1] == {"correct": False}
|
assert call_args[0][1] == {"correct": False}
|
||||||
@@ -58,7 +58,7 @@ class TestArrayActions:
|
|||||||
async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_start_with_correct(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"start": True}}
|
_mock_graphql.return_value = {"parityCheck": {"start": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_start", correct=True)
|
result = await tool_fn(action="array", subaction="parity_start", correct=True)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
call_args = _mock_graphql.call_args
|
call_args = _mock_graphql.call_args
|
||||||
assert call_args[0][1] == {"correct": True}
|
assert call_args[0][1] == {"correct": True}
|
||||||
@@ -66,32 +66,32 @@ class TestArrayActions:
|
|||||||
async def test_parity_status(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_status(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"array": {"parityCheckStatus": {"progress": 50}}}
|
_mock_graphql.return_value = {"array": {"parityCheckStatus": {"progress": 50}}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_status")
|
result = await tool_fn(action="array", subaction="parity_status")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
async def test_parity_pause(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_pause(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"pause": True}}
|
_mock_graphql.return_value = {"parityCheck": {"pause": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_pause")
|
result = await tool_fn(action="array", subaction="parity_pause")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
async def test_parity_resume(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_resume(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"resume": True}}
|
_mock_graphql.return_value = {"parityCheck": {"resume": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_resume")
|
result = await tool_fn(action="array", subaction="parity_resume")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
async def test_parity_cancel(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_cancel(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"cancel": True}}
|
_mock_graphql.return_value = {"parityCheck": {"cancel": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_cancel")
|
result = await tool_fn(action="array", subaction="parity_cancel")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
|
async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.side_effect = RuntimeError("disk error")
|
_mock_graphql.side_effect = RuntimeError("disk error")
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Failed to execute array/parity_status"):
|
with pytest.raises(ToolError, match="Failed to execute array/parity_status"):
|
||||||
await tool_fn(action="parity_status")
|
await tool_fn(action="array", subaction="parity_status")
|
||||||
|
|
||||||
|
|
||||||
class TestArrayMutationFailures:
|
class TestArrayMutationFailures:
|
||||||
@@ -100,14 +100,14 @@ class TestArrayMutationFailures:
|
|||||||
async def test_parity_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"start": False}}
|
_mock_graphql.return_value = {"parityCheck": {"start": False}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_start", correct=False)
|
result = await tool_fn(action="array", subaction="parity_start", correct=False)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["data"] == {"parityCheck": {"start": False}}
|
assert result["data"] == {"parityCheck": {"start": False}}
|
||||||
|
|
||||||
async def test_parity_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
|
async def test_parity_start_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"start": None}}
|
_mock_graphql.return_value = {"parityCheck": {"start": None}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_start", correct=False)
|
result = await tool_fn(action="array", subaction="parity_start", correct=False)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["data"] == {"parityCheck": {"start": None}}
|
assert result["data"] == {"parityCheck": {"start": None}}
|
||||||
|
|
||||||
@@ -116,7 +116,7 @@ class TestArrayMutationFailures:
|
|||||||
) -> None:
|
) -> None:
|
||||||
_mock_graphql.return_value = {"parityCheck": {"start": {}}}
|
_mock_graphql.return_value = {"parityCheck": {"start": {}}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="parity_start", correct=False)
|
result = await tool_fn(action="array", subaction="parity_start", correct=False)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["data"] == {"parityCheck": {"start": {}}}
|
assert result["data"] == {"parityCheck": {"start": {}}}
|
||||||
|
|
||||||
@@ -124,7 +124,7 @@ class TestArrayMutationFailures:
|
|||||||
_mock_graphql.side_effect = TimeoutError("operation timed out")
|
_mock_graphql.side_effect = TimeoutError("operation timed out")
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="timed out"):
|
with pytest.raises(ToolError, match="timed out"):
|
||||||
await tool_fn(action="parity_cancel")
|
await tool_fn(action="array", subaction="parity_cancel")
|
||||||
|
|
||||||
|
|
||||||
class TestArrayNetworkErrors:
|
class TestArrayNetworkErrors:
|
||||||
@@ -134,13 +134,13 @@ class TestArrayNetworkErrors:
|
|||||||
_mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error")
|
_mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error")
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="HTTP error 500"):
|
with pytest.raises(ToolError, match="HTTP error 500"):
|
||||||
await tool_fn(action="parity_start", correct=False)
|
await tool_fn(action="array", subaction="parity_start", correct=False)
|
||||||
|
|
||||||
async def test_connection_refused(self, _mock_graphql: AsyncMock) -> None:
|
async def test_connection_refused(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.side_effect = ToolError("Network connection error: Connection refused")
|
_mock_graphql.side_effect = ToolError("Network connection error: Connection refused")
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Network connection error"):
|
with pytest.raises(ToolError, match="Network connection error"):
|
||||||
await tool_fn(action="parity_status")
|
await tool_fn(action="array", subaction="parity_status")
|
||||||
|
|
||||||
|
|
||||||
# ---------------------------------------------------------------------------
|
# ---------------------------------------------------------------------------
|
||||||
@@ -156,7 +156,7 @@ async def test_parity_history_returns_history(_mock_graphql):
|
|||||||
_mock_graphql.return_value = {
|
_mock_graphql.return_value = {
|
||||||
"parityHistory": [{"date": "2026-03-01T00:00:00Z", "status": "COMPLETED", "errors": 0}]
|
"parityHistory": [{"date": "2026-03-01T00:00:00Z", "status": "COMPLETED", "errors": 0}]
|
||||||
}
|
}
|
||||||
result = await _make_tool()(action="parity_history")
|
result = await _make_tool()(action="array", subaction="parity_history")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert len(result["data"]["parityHistory"]) == 1
|
assert len(result["data"]["parityHistory"]) == 1
|
||||||
|
|
||||||
@@ -167,20 +167,20 @@ async def test_parity_history_returns_history(_mock_graphql):
|
|||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_start_array(_mock_graphql):
|
async def test_start_array(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"setState": {"state": "STARTED"}}}
|
_mock_graphql.return_value = {"array": {"setState": {"state": "STARTED"}}}
|
||||||
result = await _make_tool()(action="start_array")
|
result = await _make_tool()(action="array", subaction="start_array")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_stop_array_requires_confirm(_mock_graphql):
|
async def test_stop_array_requires_confirm(_mock_graphql):
|
||||||
with pytest.raises(ToolError, match="not confirmed"):
|
with pytest.raises(ToolError, match="not confirmed"):
|
||||||
await _make_tool()(action="stop_array", confirm=False)
|
await _make_tool()(action="array", subaction="stop_array", confirm=False)
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_stop_array_with_confirm(_mock_graphql):
|
async def test_stop_array_with_confirm(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"setState": {"state": "STOPPED"}}}
|
_mock_graphql.return_value = {"array": {"setState": {"state": "STOPPED"}}}
|
||||||
result = await _make_tool()(action="stop_array", confirm=True)
|
result = await _make_tool()(action="array", subaction="stop_array", confirm=True)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@@ -190,13 +190,13 @@ async def test_stop_array_with_confirm(_mock_graphql):
|
|||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_add_disk_requires_disk_id(_mock_graphql):
|
async def test_add_disk_requires_disk_id(_mock_graphql):
|
||||||
with pytest.raises(ToolError, match="disk_id"):
|
with pytest.raises(ToolError, match="disk_id"):
|
||||||
await _make_tool()(action="add_disk")
|
await _make_tool()(action="array", subaction="add_disk")
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_add_disk_success(_mock_graphql):
|
async def test_add_disk_success(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"addDiskToArray": {"state": "STARTED"}}}
|
_mock_graphql.return_value = {"array": {"addDiskToArray": {"state": "STARTED"}}}
|
||||||
result = await _make_tool()(action="add_disk", disk_id="abc123:local")
|
result = await _make_tool()(action="array", subaction="add_disk", disk_id="abc123:local")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@@ -206,13 +206,17 @@ async def test_add_disk_success(_mock_graphql):
|
|||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_remove_disk_requires_confirm(_mock_graphql):
|
async def test_remove_disk_requires_confirm(_mock_graphql):
|
||||||
with pytest.raises(ToolError, match="not confirmed"):
|
with pytest.raises(ToolError, match="not confirmed"):
|
||||||
await _make_tool()(action="remove_disk", disk_id="abc123:local", confirm=False)
|
await _make_tool()(
|
||||||
|
action="array", subaction="remove_disk", disk_id="abc123:local", confirm=False
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_remove_disk_with_confirm(_mock_graphql):
|
async def test_remove_disk_with_confirm(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"removeDiskFromArray": {"state": "STOPPED"}}}
|
_mock_graphql.return_value = {"array": {"removeDiskFromArray": {"state": "STOPPED"}}}
|
||||||
result = await _make_tool()(action="remove_disk", disk_id="abc123:local", confirm=True)
|
result = await _make_tool()(
|
||||||
|
action="array", subaction="remove_disk", disk_id="abc123:local", confirm=True
|
||||||
|
)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@@ -222,13 +226,13 @@ async def test_remove_disk_with_confirm(_mock_graphql):
|
|||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_mount_disk_requires_disk_id(_mock_graphql):
|
async def test_mount_disk_requires_disk_id(_mock_graphql):
|
||||||
with pytest.raises(ToolError, match="disk_id"):
|
with pytest.raises(ToolError, match="disk_id"):
|
||||||
await _make_tool()(action="mount_disk")
|
await _make_tool()(action="array", subaction="mount_disk")
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_unmount_disk_success(_mock_graphql):
|
async def test_unmount_disk_success(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"unmountArrayDisk": {"id": "abc123:local"}}}
|
_mock_graphql.return_value = {"array": {"unmountArrayDisk": {"id": "abc123:local"}}}
|
||||||
result = await _make_tool()(action="unmount_disk", disk_id="abc123:local")
|
result = await _make_tool()(action="array", subaction="unmount_disk", disk_id="abc123:local")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@@ -238,11 +242,15 @@ async def test_unmount_disk_success(_mock_graphql):
|
|||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_clear_disk_stats_requires_confirm(_mock_graphql):
|
async def test_clear_disk_stats_requires_confirm(_mock_graphql):
|
||||||
with pytest.raises(ToolError, match="not confirmed"):
|
with pytest.raises(ToolError, match="not confirmed"):
|
||||||
await _make_tool()(action="clear_disk_stats", disk_id="abc123:local", confirm=False)
|
await _make_tool()(
|
||||||
|
action="array", subaction="clear_disk_stats", disk_id="abc123:local", confirm=False
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_clear_disk_stats_with_confirm(_mock_graphql):
|
async def test_clear_disk_stats_with_confirm(_mock_graphql):
|
||||||
_mock_graphql.return_value = {"array": {"clearArrayDiskStatistics": True}}
|
_mock_graphql.return_value = {"array": {"clearArrayDiskStatistics": True}}
|
||||||
result = await _make_tool()(action="clear_disk_stats", disk_id="abc123:local", confirm=True)
|
result = await _make_tool()(
|
||||||
|
action="array", subaction="clear_disk_stats", disk_id="abc123:local", confirm=True
|
||||||
|
)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
 # tests/test_customization.py
-"""Tests for unraid_customization tool."""
+"""Tests for customization subactions of the consolidated unraid tool."""
 
 from __future__ import annotations
 
@@ -11,16 +11,12 @@ from conftest import make_tool_fn
 
 @pytest.fixture
 def _mock_graphql():
-    with patch("unraid_mcp.tools.customization.make_graphql_request", new_callable=AsyncMock) as m:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as m:
        yield m
 
 
 def _make_tool():
-    return make_tool_fn(
-        "unraid_mcp.tools.customization",
-        "register_customization_tool",
-        "unraid_customization",
-    )
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 @pytest.mark.asyncio
@@ -28,23 +24,22 @@ async def test_theme_returns_customization(_mock_graphql):
     _mock_graphql.return_value = {
         "customization": {"theme": {"name": "azure"}, "partnerInfo": None, "activationCode": None}
     }
-    result = await _make_tool()(action="theme")
-    assert result["success"] is True
+    result = await _make_tool()(action="customization", subaction="theme")
+    assert "customization" in result
 
 
 @pytest.mark.asyncio
 async def test_public_theme(_mock_graphql):
     _mock_graphql.return_value = {"publicTheme": {"name": "black"}}
-    result = await _make_tool()(action="public_theme")
-    assert result["success"] is True
+    result = await _make_tool()(action="customization", subaction="public_theme")
+    assert "publicTheme" in result
 
 
 @pytest.mark.asyncio
 async def test_is_initial_setup(_mock_graphql):
     _mock_graphql.return_value = {"isInitialSetup": False}
-    result = await _make_tool()(action="is_initial_setup")
-    assert result["success"] is True
-    assert result["data"]["isInitialSetup"] is False
+    result = await _make_tool()(action="customization", subaction="is_initial_setup")
+    assert result["isInitialSetup"] is False
 
 
 @pytest.mark.asyncio
@@ -52,7 +47,7 @@ async def test_set_theme_requires_theme(_mock_graphql):
     from unraid_mcp.core.exceptions import ToolError
 
     with pytest.raises(ToolError, match="theme_name"):
-        await _make_tool()(action="set_theme")
+        await _make_tool()(action="customization", subaction="set_theme")
 
 
 @pytest.mark.asyncio
@@ -60,5 +55,5 @@ async def test_set_theme_success(_mock_graphql):
     _mock_graphql.return_value = {
         "customization": {"setTheme": {"name": "azure", "showBannerImage": True}}
     }
-    result = await _make_tool()(action="set_theme", theme_name="azure")
+    result = await _make_tool()(action="customization", subaction="set_theme", theme_name="azure")
     assert result["success"] is True
@@ -1,58 +1,12 @@
-"""Tests for unraid_docker tool."""
+"""Tests for docker subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
-from typing import get_args
 from unittest.mock import AsyncMock, patch
 
 import pytest
 from conftest import make_tool_fn
 
 from unraid_mcp.core.exceptions import ToolError
-from unraid_mcp.tools.docker import (
-    DOCKER_ACTIONS,
-    find_container_by_identifier,
-    get_available_container_names,
-)
-
-
-# --- Unit tests for helpers ---
-
-
-class TestFindContainerByIdentifier:
-    def test_by_exact_id(self) -> None:
-        containers = [{"id": "abc123", "names": ["plex"]}]
-        assert find_container_by_identifier("abc123", containers) == containers[0]
-
-    def test_by_exact_name(self) -> None:
-        containers = [{"id": "abc123", "names": ["plex"]}]
-        assert find_container_by_identifier("plex", containers) == containers[0]
-
-    def test_fuzzy_match(self) -> None:
-        containers = [{"id": "abc123", "names": ["plex-media-server"]}]
-        result = find_container_by_identifier("plex", containers)
-        assert result == containers[0]
-
-    def test_not_found(self) -> None:
-        containers = [{"id": "abc123", "names": ["plex"]}]
-        assert find_container_by_identifier("sonarr", containers) is None
-
-    def test_empty_list(self) -> None:
-        assert find_container_by_identifier("plex", []) is None
-
-
-class TestGetAvailableContainerNames:
-    def test_extracts_names(self) -> None:
-        containers = [
-            {"names": ["plex"]},
-            {"names": ["sonarr", "sonarr-v3"]},
-        ]
-        names = get_available_container_names(containers)
-        assert "plex" in names
-        assert "sonarr" in names
-        assert "sonarr-v3" in names
-
-    def test_empty(self) -> None:
-        assert get_available_container_names([]) == []
-
 
 # --- Integration tests ---
@@ -60,55 +14,34 @@ class TestGetAvailableContainerNames:
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.docker.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.docker", "register_docker_tool", "unraid_docker")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestDockerValidation:
-    @pytest.mark.parametrize(
-        "action",
-        [
-            "logs",
-            "port_conflicts",
-            "check_updates",
-            "pause",
-            "unpause",
-            "remove",
-            "update",
-            "update_all",
-            "create_folder",
-            "delete_entries",
-            "reset_template_mappings",
-        ],
-    )
-    def test_removed_actions_are_gone(self, action: str) -> None:
-        assert action not in get_args(DOCKER_ACTIONS), (
-            f"Action '{action}' should have been removed from DOCKER_ACTIONS"
-        )
-
-    @pytest.mark.parametrize("action", ["start", "stop", "details"])
+    @pytest.mark.parametrize("subaction", ["start", "stop", "details"])
     async def test_container_actions_require_id(
-        self, _mock_graphql: AsyncMock, action: str
+        self, _mock_graphql: AsyncMock, subaction: str
     ) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="container_id"):
-            await tool_fn(action=action)
+            await tool_fn(action="docker", subaction=subaction)
 
     async def test_network_details_requires_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="network_id"):
-            await tool_fn(action="network_details")
+            await tool_fn(action="docker", subaction="network_details")
 
     async def test_non_logs_action_ignores_tail_lines_validation(
         self, _mock_graphql: AsyncMock
     ) -> None:
        _mock_graphql.return_value = {"docker": {"containers": []}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="docker", subaction="list")
         assert result["containers"] == []
 
 
@@ -118,7 +51,7 @@ class TestDockerActions:
             "docker": {"containers": [{"id": "c1", "names": ["plex"], "state": "running"}]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="docker", subaction="list")
         assert len(result["containers"]) == 1
 
     async def test_start_container(self, _mock_graphql: AsyncMock) -> None:
@@ -136,13 +69,13 @@ class TestDockerActions:
             },
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", container_id="plex")
+        result = await tool_fn(action="docker", subaction="start", container_id="plex")
         assert result["success"] is True
 
     async def test_networks(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"docker": {"networks": [{"id": "net:1", "name": "bridge"}]}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="networks")
+        result = await tool_fn(action="docker", subaction="networks")
         assert len(result["networks"]) == 1
 
     async def test_idempotent_start(self, _mock_graphql: AsyncMock) -> None:
@@ -152,7 +85,7 @@ class TestDockerActions:
             {"idempotent_success": True, "docker": {}},
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", container_id="plex")
+        result = await tool_fn(action="docker", subaction="start", container_id="plex")
         assert result["idempotent"] is True
 
     async def test_restart(self, _mock_graphql: AsyncMock) -> None:
@@ -163,9 +96,9 @@ class TestDockerActions:
             {"docker": {"start": {"id": cid, "state": "running"}}},
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="restart", container_id="plex")
+        result = await tool_fn(action="docker", subaction="restart", container_id="plex")
         assert result["success"] is True
-        assert result["action"] == "restart"
+        assert result["subaction"] == "restart"
 
     async def test_restart_idempotent_stop(self, _mock_graphql: AsyncMock) -> None:
         cid = "a" * 64 + ":local"
@@ -175,7 +108,7 @@ class TestDockerActions:
             {"docker": {"start": {"id": cid, "state": "running"}}},
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="restart", container_id="plex")
+        result = await tool_fn(action="docker", subaction="restart", container_id="plex")
         assert result["success"] is True
         assert "note" in result
 
@@ -188,14 +121,14 @@ class TestDockerActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="details", container_id="plex")
+        result = await tool_fn(action="docker", subaction="details", container_id="plex")
         assert result["names"] == ["plex"]
 
     async def test_generic_exception_wraps_in_tool_error(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.side_effect = RuntimeError("unexpected failure")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Failed to execute docker/list"):
-            await tool_fn(action="list")
+            await tool_fn(action="docker", subaction="list")
 
     async def test_short_id_prefix_ambiguous_rejected(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
@@ -214,7 +147,7 @@ class TestDockerActions:
         }
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="ambiguous"):
-            await tool_fn(action="details", container_id="abcdef123456")
+            await tool_fn(action="docker", subaction="details", container_id="abcdef123456")
 
 
 class TestDockerMutationFailures:
@@ -228,7 +161,7 @@ class TestDockerMutationFailures:
             {"docker": {}},
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", container_id="plex")
+        result = await tool_fn(action="docker", subaction="start", container_id="plex")
         assert result["success"] is True
         assert result["container"] is None
 
@@ -240,7 +173,7 @@
             {"docker": {"stop": {"id": cid, "state": "running"}}},
         ]
         tool_fn = _make_tool()
-        result = await tool_fn(action="stop", container_id="plex")
+        result = await tool_fn(action="docker", subaction="stop", container_id="plex")
         assert result["success"] is True
         assert result["container"]["state"] == "running"
 
@@ -254,7 +187,7 @@
         ]
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="timed out"):
-            await tool_fn(action="start", container_id="plex")
+            await tool_fn(action="docker", subaction="start", container_id="plex")
 
 
 class TestDockerNetworkErrors:
@@ -267,14 +200,14 @@ class TestDockerNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Connection refused"):
-            await tool_fn(action="list")
+            await tool_fn(action="docker", subaction="list")
 
     async def test_list_http_401_unauthorized(self, _mock_graphql: AsyncMock) -> None:
         """HTTP 401 should propagate as ToolError."""
         _mock_graphql.side_effect = ToolError("HTTP error 401: Unauthorized")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="401"):
-            await tool_fn(action="list")
+            await tool_fn(action="docker", subaction="list")
 
     async def test_json_decode_error_on_list(self, _mock_graphql: AsyncMock) -> None:
         """Invalid JSON response should be wrapped in ToolError."""
@@ -283,4 +216,4 @@ class TestDockerNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Invalid JSON"):
-            await tool_fn(action="list")
+            await tool_fn(action="docker", subaction="list")
@@ -1,7 +1,7 @@
-"""Tests for unraid_health tool."""
+"""Tests for health subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
-from unittest.mock import AsyncMock, patch
+from unittest.mock import AsyncMock, MagicMock, patch
 
 import pytest
 from conftest import make_tool_fn
@@ -12,26 +12,26 @@ from unraid_mcp.core.utils import safe_display_url
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.health.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.health", "register_health_tool", "unraid_health")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestHealthValidation:
-    async def test_invalid_action(self, _mock_graphql: AsyncMock) -> None:
+    async def test_invalid_subaction(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="invalid")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="health", subaction="invalid")
 
 
 class TestHealthActions:
     async def test_test_connection(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"online": True}
         tool_fn = _make_tool()
-        result = await tool_fn(action="test_connection")
+        result = await tool_fn(action="health", subaction="test_connection")
         assert result["status"] == "connected"
         assert result["online"] is True
         assert "latency_ms" in result
@@ -46,13 +46,38 @@
             },
             "array": {"state": "STARTED"},
             "notifications": {"overview": {"unread": {"alert": 0, "warning": 0, "total": 3}}},
-            "docker": {"containers": [{"id": "c1", "state": "running", "status": "Up 2 days"}]},
+            "docker": {"containers": [{"id": "c1", "state": "RUNNING", "status": "Up 2 days"}]},
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="check")
+        result = await tool_fn(action="health", subaction="check")
         assert result["status"] == "healthy"
         assert "api_latency_ms" in result
 
+    async def test_check_docker_counts_uppercase_states(self, _mock_graphql: AsyncMock) -> None:
+        """ContainerState enum is UPPERCASE — running/stopped counts must use case-insensitive match."""
+        _mock_graphql.return_value = {
+            "info": {
+                "machineId": "x",
+                "versions": {"core": {"unraid": "7.0"}},
+                "os": {"uptime": 1},
+            },
+            "array": {"state": "STARTED"},
+            "notifications": {"overview": {"unread": {"alert": 0, "warning": 0, "total": 0}}},
+            "docker": {
+                "containers": [
+                    {"id": "c1", "state": "RUNNING"},
+                    {"id": "c2", "state": "RUNNING"},
+                    {"id": "c3", "state": "EXITED"},
+                ]
+            },
+        }
+        tool_fn = _make_tool()
+        result = await tool_fn(action="health", subaction="check")
+        svc = result["docker_services"]
+        assert svc["total"] == 3
+        assert svc["running"] == 2
+        assert svc["stopped"] == 1
+
     async def test_check_warning_on_alerts(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
             "info": {"machineId": "abc", "versions": {"unraid": "7.2"}, "os": {"uptime": 100}},
@@ -61,20 +86,20 @@
             "docker": {"containers": []},
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="check")
+        result = await tool_fn(action="health", subaction="check")
         assert result["status"] == "warning"
         assert any("alert" in i for i in result.get("issues", []))
 
     async def test_check_no_data(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {}
         tool_fn = _make_tool()
-        result = await tool_fn(action="check")
+        result = await tool_fn(action="health", subaction="check")
         assert result["status"] == "unhealthy"
 
     async def test_check_api_error(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.side_effect = Exception("Connection refused")
         tool_fn = _make_tool()
-        result = await tool_fn(action="check")
+        result = await tool_fn(action="health", subaction="check")
         assert result["status"] == "unhealthy"
         assert "Connection refused" in result["error"]
 
@@ -87,61 +112,51 @@ class TestHealthActions:
             "docker": {"containers": []},
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="check")
+        result = await tool_fn(action="health", subaction="check")
         # Missing info escalates to "degraded"; alerts only escalate to "warning"
         # Severity should stay at "degraded" (not downgrade to "warning")
         assert result["status"] == "degraded"
 
-    async def test_diagnose_wraps_exception(self, _mock_graphql: AsyncMock) -> None:
-        """When _diagnose_subscriptions raises, tool wraps in ToolError."""
+    async def test_diagnose_success(self, _mock_graphql: AsyncMock) -> None:
+        """Diagnose returns subscription status."""
         tool_fn = _make_tool()
+        mock_status = {"cpu": {"connection_state": "connected"}}
+        mock_manager = MagicMock()
+        mock_manager.get_subscription_status = AsyncMock(return_value=mock_status)
+        mock_manager.auto_start_enabled = True
+        mock_manager.max_reconnect_attempts = 3
+        mock_manager.subscription_configs = {}
+        mock_manager.active_subscriptions = {}
+        mock_manager.resource_data = {}
+
         with (
+            patch("unraid_mcp.subscriptions.manager.subscription_manager", mock_manager),
+            patch("unraid_mcp.subscriptions.resources.ensure_subscriptions_started", AsyncMock()),
             patch(
-                "unraid_mcp.tools.health._diagnose_subscriptions",
-                side_effect=RuntimeError("broken"),
+                "unraid_mcp.subscriptions.utils._analyze_subscription_status",
+                return_value=(0, []),
             ),
+        ):
+            result = await tool_fn(action="health", subaction="diagnose")
+        assert "subscriptions" in result
+        assert "summary" in result
+
+    async def test_diagnose_wraps_exception(self, _mock_graphql: AsyncMock) -> None:
+        """When subscription manager raises, tool wraps in ToolError."""
+        tool_fn = _make_tool()
+        mock_manager = MagicMock()
+        mock_manager.get_subscription_status = AsyncMock(side_effect=RuntimeError("broken"))
+
+        with (
+            patch("unraid_mcp.subscriptions.manager.subscription_manager", mock_manager),
+            patch("unraid_mcp.subscriptions.resources.ensure_subscriptions_started", AsyncMock()),
+            patch(
+                "unraid_mcp.subscriptions.utils._analyze_subscription_status",
+                return_value=(0, []),
+            ),
             pytest.raises(ToolError, match="Failed to execute health/diagnose"),
         ):
-            await tool_fn(action="diagnose")
-
-    async def test_diagnose_success(self, _mock_graphql: AsyncMock) -> None:
-        """Diagnose returns subscription status when modules are available."""
-        tool_fn = _make_tool()
-        mock_status = {
-            "cpu_sub": {"runtime": {"connection_state": "connected", "last_error": None}},
-        }
-        with patch("unraid_mcp.tools.health._diagnose_subscriptions", return_value=mock_status):
-            result = await tool_fn(action="diagnose")
-        assert "cpu_sub" in result
-
-    async def test_diagnose_import_error_internal(self) -> None:
-        """_diagnose_subscriptions raises ToolError when subscription modules are unavailable."""
-        import sys
-
-        from unraid_mcp.tools.health import _diagnose_subscriptions
-
-        # Remove cached subscription modules so the import is re-triggered
-        cached = {k: v for k, v in sys.modules.items() if "unraid_mcp.subscriptions" in k}
-        for k in cached:
-            del sys.modules[k]
-
-        try:
-            # Replace the modules with objects that raise ImportError on access
-            with (
-                patch.dict(
-                    sys.modules,
-                    {
-                        "unraid_mcp.subscriptions": None,
-                        "unraid_mcp.subscriptions.manager": None,
-                        "unraid_mcp.subscriptions.resources": None,
-                    },
-                ),
-                pytest.raises(ToolError, match="Subscription modules not available"),
-            ):
-                await _diagnose_subscriptions()
-        finally:
-            # Restore cached modules
-            sys.modules.update(cached)
+            await tool_fn(action="health", subaction="diagnose")
 
 
 # ---------------------------------------------------------------------------
@@ -166,17 +181,20 @@ class TestSafeDisplayUrl:
 
     def test_strips_path(self) -> None:
         result = safe_display_url("http://unraid.local/some/deep/path?query=1")
+        assert result is not None
         assert "path" not in result
         assert "query" not in result
 
     def test_strips_credentials(self) -> None:
         result = safe_display_url("https://user:password@unraid.local/graphql")
+        assert result is not None
         assert "user" not in result
         assert "password" not in result
         assert result == "https://unraid.local"
 
     def test_strips_query_params(self) -> None:
         result = safe_display_url("http://host.local?token=abc&key=xyz")
+        assert result is not None
         assert "token" not in result
         assert "abc" not in result
 
@@ -190,23 +208,25 @@ class TestSafeDisplayUrl:
 
     def test_malformed_ipv6_url_returns_unparseable(self) -> None:
         """Malformed IPv6 brackets in netloc cause urlparse.hostname to raise ValueError."""
-        # urlparse("https://[invalid") parses without error, but accessing .hostname
-        # raises ValueError: Invalid IPv6 URL — this triggers the except branch.
         result = safe_display_url("https://[invalid")
         assert result == "<unparseable>"
 
 
 @pytest.mark.asyncio
 async def test_health_setup_action_calls_elicitation() -> None:
-    """setup action triggers elicit_and_configure and returns success message."""
-    from unittest.mock import AsyncMock, MagicMock
-
+    """setup subaction triggers elicit_and_configure when no credentials exist."""
     tool_fn = _make_tool()
 
-    with patch(
-        "unraid_mcp.tools.health.elicit_and_configure", new=AsyncMock(return_value=True)
-    ) as mock_elicit:
-        result = await tool_fn(action="setup", ctx=MagicMock())
+    mock_path = MagicMock()
+    mock_path.exists.return_value = False
+
+    with (
+        patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
+        patch(
+            "unraid_mcp.core.setup.elicit_and_configure", new=AsyncMock(return_value=True)
+        ) as mock_elicit,
+    ):
+        result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
 
     assert mock_elicit.called
     assert "configured" in result.lower() or "success" in result.lower()
@@ -214,13 +234,17 @@ async def test_health_setup_action_calls_elicitation() -> None:
 
 @pytest.mark.asyncio
 async def test_health_setup_action_returns_declined_message() -> None:
-    """setup action with declined elicitation returns appropriate message."""
-    from unittest.mock import AsyncMock, MagicMock
-
+    """setup subaction with declined elicitation returns appropriate message."""
    tool_fn = _make_tool()
 
-    with patch("unraid_mcp.tools.health.elicit_and_configure", new=AsyncMock(return_value=False)):
-        result = await tool_fn(action="setup", ctx=MagicMock())
+    mock_path = MagicMock()
+    mock_path.exists.return_value = False
+
+    with (
+        patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
+        patch("unraid_mcp.core.setup.elicit_and_configure", new=AsyncMock(return_value=False)),
+    ):
+        result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
 
     assert (
         "not configured" in result.lower()
@@ -229,18 +253,126 @@ async def test_health_setup_action_returns_declined_message() -> None:
     )
 
 
+@pytest.mark.asyncio
+async def test_health_setup_already_configured_and_working_no_reset() -> None:
+    """setup returns early when credentials exist, connection works, and user declines reset."""
+    tool_fn = _make_tool()
+
+    mock_path = MagicMock()
+    mock_path.exists.return_value = True
+
+    with (
+        patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
+        patch(
+            "unraid_mcp.tools.unraid.make_graphql_request",
+            new=AsyncMock(return_value={"online": True}),
+        ),
+        patch(
+            "unraid_mcp.core.setup.elicit_reset_confirmation",
+            new=AsyncMock(return_value=False),
+        ),
+        patch("unraid_mcp.core.setup.elicit_and_configure") as mock_configure,
+    ):
+        result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
+
+    mock_configure.assert_not_called()
|
||||||
|
assert "already configured" in result.lower()
|
||||||
|
assert "no changes" in result.lower()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_health_setup_already_configured_user_confirms_reset() -> None:
|
||||||
|
"""setup proceeds with elicitation when credentials exist but user confirms reset."""
|
||||||
|
tool_fn = _make_tool()
|
||||||
|
|
||||||
|
mock_path = MagicMock()
|
||||||
|
mock_path.exists.return_value = True
|
||||||
|
|
||||||
|
with (
|
||||||
|
patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
|
new=AsyncMock(return_value={"online": True}),
|
||||||
|
),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.core.setup.elicit_reset_confirmation",
|
||||||
|
new=AsyncMock(return_value=True),
|
||||||
|
),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.core.setup.elicit_and_configure", new=AsyncMock(return_value=True)
|
||||||
|
) as mock_configure,
|
||||||
|
):
|
||||||
|
result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
|
||||||
|
|
||||||
|
mock_configure.assert_called_once()
|
||||||
|
assert "configured" in result.lower() or "success" in result.lower()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_health_setup_credentials_exist_but_connection_fails() -> None:
|
||||||
|
"""setup proceeds with elicitation when credentials exist but connection fails."""
|
||||||
|
tool_fn = _make_tool()
|
||||||
|
|
||||||
|
mock_path = MagicMock()
|
||||||
|
mock_path.exists.return_value = True
|
||||||
|
|
||||||
|
with (
|
||||||
|
patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
|
new=AsyncMock(side_effect=Exception("connection refused")),
|
||||||
|
),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.core.setup.elicit_and_configure", new=AsyncMock(return_value=True)
|
||||||
|
) as mock_configure,
|
||||||
|
):
|
||||||
|
result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
|
||||||
|
|
||||||
|
mock_configure.assert_called_once()
|
||||||
|
assert "configured" in result.lower() or "success" in result.lower()
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_health_setup_ctx_none_already_configured_returns_no_changes() -> None:
|
||||||
|
"""When ctx=None and credentials are working, setup returns 'already configured' gracefully."""
|
||||||
|
tool_fn = _make_tool()
|
||||||
|
|
||||||
|
mock_path = MagicMock()
|
||||||
|
mock_path.exists.return_value = True
|
||||||
|
|
||||||
|
with (
|
||||||
|
patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
|
||||||
|
patch(
|
||||||
|
"unraid_mcp.tools.unraid.make_graphql_request",
|
||||||
|
new=AsyncMock(return_value={"online": True}),
|
||||||
|
),
|
||||||
|
patch("unraid_mcp.core.setup.elicit_and_configure") as mock_configure,
|
||||||
|
):
|
||||||
|
result = await tool_fn(action="health", subaction="setup", ctx=None)
|
||||||
|
|
||||||
|
mock_configure.assert_not_called()
|
||||||
|
assert "already configured" in result.lower()
|
||||||
|
assert "no changes" in result.lower()
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_health_setup_declined_message_includes_manual_path() -> None:
|
async def test_health_setup_declined_message_includes_manual_path() -> None:
|
||||||
"""Declined setup message includes the exact credentials file path and variable names."""
|
"""Declined setup message includes the exact credentials file path and variable names."""
|
||||||
from unittest.mock import AsyncMock, MagicMock, patch
|
|
||||||
|
|
||||||
from unraid_mcp.config.settings import CREDENTIALS_ENV_PATH
|
from unraid_mcp.config.settings import CREDENTIALS_ENV_PATH
|
||||||
|
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
|
|
||||||
with patch("unraid_mcp.tools.health.elicit_and_configure", new=AsyncMock(return_value=False)):
|
real_path_str = str(CREDENTIALS_ENV_PATH)
|
||||||
result = await tool_fn(action="setup", ctx=MagicMock())
|
mock_path = MagicMock()
|
||||||
|
mock_path.exists.return_value = False
|
||||||
|
type(mock_path).__str__ = lambda self: real_path_str # type: ignore[method-assign]
|
||||||
|
|
||||||
assert str(CREDENTIALS_ENV_PATH) in result
|
with (
|
||||||
assert "UNRAID_API_URL=" in result # inline variable shown
|
patch("unraid_mcp.config.settings.CREDENTIALS_ENV_PATH", mock_path),
|
||||||
|
patch("unraid_mcp.core.setup.elicit_and_configure", new=AsyncMock(return_value=False)),
|
||||||
|
):
|
||||||
|
result = await tool_fn(action="health", subaction="setup", ctx=MagicMock())
|
||||||
|
|
||||||
|
assert real_path_str in result
|
||||||
|
assert "UNRAID_API_URL=" in result
|
||||||
assert "UNRAID_API_KEY=" in result
|
assert "UNRAID_API_KEY=" in result
|
||||||
|
|||||||
@@ -1,65 +1,18 @@
-"""Tests for unraid_info tool."""
+"""Tests for system subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
-from typing import get_args
 from unittest.mock import AsyncMock, patch
 
 import pytest
 from conftest import make_tool_fn
 
 from unraid_mcp.core.exceptions import ToolError
-from unraid_mcp.tools.info import (
-    INFO_ACTIONS,
-    _analyze_disk_health,
-    _process_array_status,
-    _process_system_info,
-)
+from unraid_mcp.tools.unraid import _analyze_disk_health
 
 
 # --- Unit tests for helper functions ---
 
 
-class TestProcessSystemInfo:
-    def test_processes_os_info(self) -> None:
-        raw = {
-            "os": {
-                "distro": "Unraid",
-                "release": "7.2",
-                "platform": "linux",
-                "arch": "x86_64",
-                "hostname": "tower",
-                "uptime": 3600,
-            },
-            "cpu": {"manufacturer": "AMD", "brand": "Ryzen", "cores": 8, "threads": 16},
-        }
-        result = _process_system_info(raw)
-        assert "summary" in result
-        assert "details" in result
-        assert result["summary"]["hostname"] == "tower"
-        assert "AMD" in result["summary"]["cpu"]
-
-    def test_handles_missing_fields(self) -> None:
-        result = _process_system_info({})
-        assert result["summary"] == {"memory_summary": "Memory information not available."}
-
-    def test_processes_memory_layout(self) -> None:
-        raw = {
-            "memory": {
-                "layout": [
-                    {
-                        "bank": "0",
-                        "type": "DDR4",
-                        "clockSpeed": 3200,
-                        "manufacturer": "G.Skill",
-                        "partNum": "XYZ",
-                    }
-                ]
-            }
-        }
-        result = _process_system_info(raw)
-        assert len(result["summary"]["memory_layout_details"]) == 1
 
 
 class TestAnalyzeDiskHealth:
     def test_counts_healthy_disks(self) -> None:
         disks = [{"status": "DISK_OK"}, {"status": "DISK_OK"}]
@@ -100,51 +53,17 @@ class TestAnalyzeDiskHealth:
         assert result["healthy"] == 0
 
 
-class TestProcessArrayStatus:
-    def test_basic_array(self) -> None:
-        raw = {
-            "state": "STARTED",
-            "capacity": {"kilobytes": {"free": "1048576", "used": "524288", "total": "1572864"}},
-            "parities": [{"status": "DISK_OK"}],
-            "disks": [{"status": "DISK_OK"}],
-            "caches": [],
-        }
-        result = _process_array_status(raw)
-        assert result["summary"]["state"] == "STARTED"
-        assert result["summary"]["overall_health"] == "HEALTHY"
-
-    def test_critical_disk_threshold_array(self) -> None:
-        raw = {
-            "state": "STARTED",
-            "parities": [],
-            "disks": [{"status": "DISK_OK", "critical": 55}],
-            "caches": [],
-        }
-        result = _process_array_status(raw)
-        assert result["summary"]["overall_health"] == "CRITICAL"
-
-    def test_degraded_array(self) -> None:
-        raw = {
-            "state": "STARTED",
-            "parities": [],
-            "disks": [{"status": "DISK_NP"}],
-            "caches": [],
-        }
-        result = _process_array_status(raw)
-        assert result["summary"]["overall_health"] == "DEGRADED"
 
 
 # --- Integration tests for the tool function ---
 
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.info.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.info", "register_info_tool", "unraid_info")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestUnraidInfoTool:
@@ -162,14 +81,14 @@ class TestUnraidInfoTool:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="overview")
+        result = await tool_fn(action="system", subaction="overview")
         assert "summary" in result
         _mock_graphql.assert_called_once()
 
     async def test_ups_device_requires_device_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="device_id is required"):
-            await tool_fn(action="ups_device")
+            await tool_fn(action="system", subaction="ups_device")
 
     async def test_network_action(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
@@ -193,7 +112,7 @@ class TestUnraidInfoTool:
             },
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="network")
+        result = await tool_fn(action="system", subaction="network")
         assert "accessUrls" in result
         assert result["httpPort"] == 6969
         assert result["httpsPort"] == 31337
@@ -202,26 +121,26 @@ class TestUnraidInfoTool:
     async def test_connect_action_raises_tool_error(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="connect.*not available"):
-            await tool_fn(action="connect")
+            await tool_fn(action="system", subaction="connect")
 
     async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.side_effect = RuntimeError("unexpected")
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Failed to execute info/online"):
-            await tool_fn(action="online")
+        with pytest.raises(ToolError, match="Failed to execute system/online"):
+            await tool_fn(action="system", subaction="online")
 
     async def test_metrics(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
             "metrics": {"cpu": {"used": 25.5}, "memory": {"used": 8192, "total": 32768}}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="metrics")
+        result = await tool_fn(action="system", subaction="metrics")
         assert result["cpu"]["used"] == 25.5
 
     async def test_services(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"services": [{"name": "docker", "state": "running"}]}
         tool_fn = _make_tool()
-        result = await tool_fn(action="services")
+        result = await tool_fn(action="system", subaction="services")
         assert "services" in result
         assert len(result["services"]) == 1
         assert result["services"][0]["name"] == "docker"
@@ -231,14 +150,14 @@ class TestUnraidInfoTool:
             "settings": {"unified": {"values": {"timezone": "US/Eastern"}}}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="settings")
+        result = await tool_fn(action="system", subaction="settings")
        assert result["timezone"] == "US/Eastern"
 
     async def test_settings_non_dict_values(self, _mock_graphql: AsyncMock) -> None:
         """Settings values that are not a dict should be wrapped in {'raw': ...}."""
         _mock_graphql.return_value = {"settings": {"unified": {"values": "raw_string"}}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="settings")
+        result = await tool_fn(action="system", subaction="settings")
         assert result == {"raw": "raw_string"}
 
     async def test_servers(self, _mock_graphql: AsyncMock) -> None:
@@ -246,7 +165,7 @@ class TestUnraidInfoTool:
             "servers": [{"id": "s:1", "name": "tower", "status": "online"}]
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="servers")
+        result = await tool_fn(action="system", subaction="servers")
         assert "servers" in result
         assert len(result["servers"]) == 1
         assert result["servers"][0]["name"] == "tower"
@@ -262,7 +181,7 @@ class TestUnraidInfoTool:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="flash")
+        result = await tool_fn(action="system", subaction="flash")
         assert result["product"] == "SanDisk"
 
     async def test_ups_devices(self, _mock_graphql: AsyncMock) -> None:
@@ -270,7 +189,7 @@ class TestUnraidInfoTool:
             "upsDevices": [{"id": "ups:1", "model": "APC", "status": "online", "charge": 100}]
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="ups_devices")
+        result = await tool_fn(action="system", subaction="ups_devices")
         assert "ups_devices" in result
         assert len(result["ups_devices"]) == 1
         assert result["ups_devices"][0]["model"] == "APC"
@@ -284,7 +203,7 @@ class TestInfoNetworkErrors:
         _mock_graphql.side_effect = ToolError("HTTP error 401: Unauthorized")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="401"):
-            await tool_fn(action="overview")
+            await tool_fn(action="system", subaction="overview")
 
     async def test_overview_connection_refused(self, _mock_graphql: AsyncMock) -> None:
         """Connection refused should propagate as ToolError."""
@@ -293,7 +212,7 @@ class TestInfoNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Connection refused"):
-            await tool_fn(action="overview")
+            await tool_fn(action="system", subaction="overview")
 
     async def test_network_json_decode_error(self, _mock_graphql: AsyncMock) -> None:
         """Invalid JSON from API should propagate as ToolError."""
@@ -302,16 +221,17 @@ class TestInfoNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Invalid JSON"):
-            await tool_fn(action="network")
+            await tool_fn(action="system", subaction="network")
 
 
 # ---------------------------------------------------------------------------
-# Regression: removed actions must not appear in INFO_ACTIONS
+# Regression: removed actions must not be valid subactions
 # ---------------------------------------------------------------------------
 
 
-@pytest.mark.parametrize("action", ["update_server", "update_ssh"])
-def test_removed_info_actions_are_gone(action: str) -> None:
-    assert action not in get_args(INFO_ACTIONS), (
-        f"{action} references a non-existent mutation and must not be in INFO_ACTIONS"
-    )
+@pytest.mark.asyncio
+@pytest.mark.parametrize("subaction", ["update_server", "update_ssh"])
+async def test_removed_info_subactions_are_invalid(subaction: str) -> None:
+    tool_fn = _make_tool()
+    with pytest.raises(ToolError, match="Invalid subaction"):
+        await tool_fn(action="system", subaction=subaction)
@@ -1,4 +1,4 @@
-"""Tests for unraid_keys tool."""
+"""Tests for key subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
 from unittest.mock import AsyncMock, patch
@@ -11,39 +11,39 @@ from unraid_mcp.core.exceptions import ToolError
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.keys.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.keys", "register_keys_tool", "unraid_keys")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestKeysValidation:
     async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="confirm=True"):
-            await tool_fn(action="delete", key_id="k:1")
+            await tool_fn(action="key", subaction="delete", key_id="k:1")
 
     async def test_get_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="key_id"):
-            await tool_fn(action="get")
+            await tool_fn(action="key", subaction="get")
 
     async def test_create_requires_name(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="name"):
-            await tool_fn(action="create")
+            await tool_fn(action="key", subaction="create")
 
     async def test_update_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="key_id"):
-            await tool_fn(action="update")
+            await tool_fn(action="key", subaction="update")
 
     async def test_delete_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="key_id"):
-            await tool_fn(action="delete", confirm=True)
+            await tool_fn(action="key", subaction="delete", confirm=True)
 
 
 class TestKeysActions:
@@ -52,7 +52,7 @@ class TestKeysActions:
             "apiKeys": [{"id": "k:1", "name": "mcp-key", "roles": ["admin"]}]
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="key", subaction="list")
         assert len(result["keys"]) == 1
 
     async def test_get(self, _mock_graphql: AsyncMock) -> None:
@@ -60,7 +60,7 @@ class TestKeysActions:
             "apiKey": {"id": "k:1", "name": "mcp-key", "roles": ["admin"]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="get", key_id="k:1")
+        result = await tool_fn(action="key", subaction="get", key_id="k:1")
         assert result["name"] == "mcp-key"
 
     async def test_create(self, _mock_graphql: AsyncMock) -> None:
@@ -70,7 +70,7 @@ class TestKeysActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="create", name="new-key")
+        result = await tool_fn(action="key", subaction="create", name="new-key")
         assert result["success"] is True
         assert result["key"]["name"] == "new-key"
 
@@ -86,7 +86,7 @@ class TestKeysActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="create", name="admin-key", roles=["admin"])
+        result = await tool_fn(action="key", subaction="create", name="admin-key", roles=["admin"])
         assert result["success"] is True
 
     async def test_update(self, _mock_graphql: AsyncMock) -> None:
@@ -94,39 +94,43 @@ class TestKeysActions:
             "apiKey": {"update": {"id": "k:1", "name": "renamed", "roles": []}}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="update", key_id="k:1", name="renamed")
+        result = await tool_fn(action="key", subaction="update", key_id="k:1", name="renamed")
         assert result["success"] is True
 
     async def test_delete(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"apiKey": {"delete": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="delete", key_id="k:1", confirm=True)
+        result = await tool_fn(action="key", subaction="delete", key_id="k:1", confirm=True)
         assert result["success"] is True
 
     async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.side_effect = RuntimeError("connection lost")
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Failed to execute keys/list"):
-            await tool_fn(action="list")
+        with pytest.raises(ToolError, match="Failed to execute key/list"):
+            await tool_fn(action="key", subaction="list")
 
     async def test_add_role_requires_key_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="key_id"):
-            await tool_fn(action="add_role", roles=["VIEWER"])
+            await tool_fn(action="key", subaction="add_role", roles=["VIEWER"])
 
     async def test_add_role_requires_role(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="role"):
-            await tool_fn(action="add_role", key_id="abc:local")
+            await tool_fn(action="key", subaction="add_role", key_id="abc:local")
 
     async def test_add_role_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"apiKey": {"addRole": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="add_role", key_id="abc:local", roles=["VIEWER"])
+        result = await tool_fn(
+            action="key", subaction="add_role", key_id="abc:local", roles=["VIEWER"]
+        )
         assert result["success"] is True
 
     async def test_remove_role_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"apiKey": {"removeRole": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="remove_role", key_id="abc:local", roles=["VIEWER"])
+        result = await tool_fn(
+            action="key", subaction="remove_role", key_id="abc:local", roles=["VIEWER"]
+        )
         assert result["success"] is True
@@ -1,125 +1,108 @@
 # tests/test_live.py
-"""Tests for unraid_live subscription snapshot tool."""
+"""Tests for live subactions of the consolidated unraid tool."""
 
 from __future__ import annotations
 
 from unittest.mock import patch
 
 import pytest
-from fastmcp import FastMCP
+from conftest import make_tool_fn
 
 
-@pytest.fixture
-def mcp():
-    return FastMCP("test")
+def _make_tool():
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
-def _make_live_tool(mcp):
-    from unraid_mcp.tools.live import register_live_tool
-
-    register_live_tool(mcp)
-    local_provider = mcp.providers[0]
-    tool = local_provider._components["tool:unraid_live@"]
-    return tool.fn
 
 
 @pytest.fixture
 def _mock_subscribe_once():
-    with patch("unraid_mcp.tools.live.subscribe_once") as m:
+    with patch("unraid_mcp.subscriptions.snapshot.subscribe_once") as m:
         yield m
 
 
 @pytest.fixture
 def _mock_subscribe_collect():
-    with patch("unraid_mcp.tools.live.subscribe_collect") as m:
+    with patch("unraid_mcp.subscriptions.snapshot.subscribe_collect") as m:
         yield m
 
 
 @pytest.mark.asyncio
-async def test_cpu_returns_snapshot(mcp, _mock_subscribe_once):
+async def test_cpu_returns_snapshot(_mock_subscribe_once):
     _mock_subscribe_once.return_value = {"systemMetricsCpu": {"percentTotal": 23.5, "cpus": []}}
-    tool_fn = _make_live_tool(mcp)
-    result = await tool_fn(action="cpu")
+    result = await _make_tool()(action="live", subaction="cpu")
     assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["data"]["systemMetricsCpu"]["percentTotal"] == 23.5
|
assert result["data"]["systemMetricsCpu"]["percentTotal"] == 23.5
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_memory_returns_snapshot(mcp, _mock_subscribe_once):
|
async def test_memory_returns_snapshot(_mock_subscribe_once):
|
||||||
_mock_subscribe_once.return_value = {
|
_mock_subscribe_once.return_value = {
|
||||||
"systemMetricsMemory": {"total": 32000000000, "used": 10000000000, "percentTotal": 31.2}
|
"systemMetricsMemory": {"total": 32000000000, "used": 10000000000, "percentTotal": 31.2}
|
||||||
}
|
}
|
||||||
tool_fn = _make_live_tool(mcp)
|
result = await _make_tool()(action="live", subaction="memory")
|
||||||
result = await tool_fn(action="memory")
|
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_log_tail_requires_path(mcp, _mock_subscribe_collect):
|
async def test_log_tail_requires_path(_mock_subscribe_collect):
|
||||||
_mock_subscribe_collect.return_value = []
|
_mock_subscribe_collect.return_value = []
|
||||||
tool_fn = _make_live_tool(mcp)
|
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
with pytest.raises(ToolError, match="path"):
|
with pytest.raises(ToolError, match="path"):
|
||||||
await tool_fn(action="log_tail")
|
await _make_tool()(action="live", subaction="log_tail")
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_log_tail_with_path(mcp, _mock_subscribe_collect):
|
async def test_log_tail_with_path(_mock_subscribe_collect):
|
||||||
_mock_subscribe_collect.return_value = [
|
_mock_subscribe_collect.return_value = [
|
||||||
{"logFile": {"path": "/var/log/syslog", "content": "line1\nline2", "totalLines": 2}}
|
{"logFile": {"path": "/var/log/syslog", "content": "line1\nline2", "totalLines": 2}}
|
||||||
]
|
]
|
||||||
tool_fn = _make_live_tool(mcp)
|
result = await _make_tool()(
|
||||||
result = await tool_fn(action="log_tail", path="/var/log/syslog", collect_for=1.0)
|
action="live", subaction="log_tail", path="/var/log/syslog", collect_for=1.0
|
||||||
|
)
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["event_count"] == 1
|
assert result["event_count"] == 1
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_notification_feed_collects_events(mcp, _mock_subscribe_collect):
|
async def test_notification_feed_collects_events(_mock_subscribe_collect):
|
||||||
_mock_subscribe_collect.return_value = [
|
_mock_subscribe_collect.return_value = [
|
||||||
{"notificationAdded": {"id": "1", "title": "Alert"}},
|
{"notificationAdded": {"id": "1", "title": "Alert"}},
|
||||||
{"notificationAdded": {"id": "2", "title": "Info"}},
|
{"notificationAdded": {"id": "2", "title": "Info"}},
|
||||||
]
|
]
|
||||||
tool_fn = _make_live_tool(mcp)
|
result = await _make_tool()(action="live", subaction="notification_feed", collect_for=2.0)
|
||||||
result = await tool_fn(action="notification_feed", collect_for=2.0)
|
|
||||||
assert result["event_count"] == 2
|
assert result["event_count"] == 2
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_invalid_action_raises(mcp):
|
async def test_invalid_subaction_raises():
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
tool_fn = _make_live_tool(mcp)
|
with pytest.raises(ToolError, match="Invalid subaction"):
|
||||||
with pytest.raises(ToolError, match="Invalid action"):
|
await _make_tool()(action="live", subaction="nonexistent")
|
||||||
await tool_fn(action="nonexistent") # type: ignore[arg-type]
|
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_snapshot_propagates_tool_error(mcp, _mock_subscribe_once):
|
async def test_snapshot_propagates_tool_error(_mock_subscribe_once):
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
_mock_subscribe_once.side_effect = ToolError("Subscription timed out after 10s")
|
_mock_subscribe_once.side_effect = ToolError("Subscription timed out after 10s")
|
||||||
tool_fn = _make_live_tool(mcp)
|
|
||||||
with pytest.raises(ToolError, match="timed out"):
|
with pytest.raises(ToolError, match="timed out"):
|
||||||
await tool_fn(action="cpu")
|
await _make_tool()(action="live", subaction="cpu")
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_log_tail_rejects_invalid_path(mcp, _mock_subscribe_collect):
|
async def test_log_tail_rejects_invalid_path(_mock_subscribe_collect):
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
tool_fn = _make_live_tool(mcp)
|
|
||||||
with pytest.raises(ToolError, match="must start with"):
|
with pytest.raises(ToolError, match="must start with"):
|
||||||
await tool_fn(action="log_tail", path="/etc/shadow")
|
await _make_tool()(action="live", subaction="log_tail", path="/etc/shadow")
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
@pytest.mark.asyncio
|
||||||
async def test_snapshot_wraps_bare_exception(mcp, _mock_subscribe_once):
|
async def test_snapshot_wraps_bare_exception(_mock_subscribe_once):
|
||||||
"""Bare exceptions from subscribe_once are wrapped in ToolError by tool_error_handler."""
|
"""Bare exceptions from subscribe_once are wrapped in ToolError by tool_error_handler."""
|
||||||
from unraid_mcp.core.exceptions import ToolError
|
from unraid_mcp.core.exceptions import ToolError
|
||||||
|
|
||||||
_mock_subscribe_once.side_effect = RuntimeError("WebSocket connection refused")
|
_mock_subscribe_once.side_effect = RuntimeError("WebSocket connection refused")
|
||||||
tool_fn = _make_live_tool(mcp)
|
|
||||||
with pytest.raises(ToolError):
|
with pytest.raises(ToolError):
|
||||||
await tool_fn(action="cpu")
|
await _make_tool()(action="live", subaction="cpu")
|
||||||
|
@@ -1,67 +1,54 @@
-"""Tests for unraid_notifications tool."""
+"""Tests for notification subactions of the consolidated unraid tool."""

 from collections.abc import Generator
-from typing import get_args
 from unittest.mock import AsyncMock, patch

 import pytest
 from conftest import make_tool_fn

 from unraid_mcp.core.exceptions import ToolError
-from unraid_mcp.tools.notifications import NOTIFICATION_ACTIONS
-
-
-def test_warnings_action_removed() -> None:
-    assert "warnings" not in get_args(NOTIFICATION_ACTIONS), (
-        "warnings action references warningsAndAlerts which is not in live API"
-    )
-
-
-def test_create_unique_action_removed() -> None:
-    assert "create_unique" not in get_args(NOTIFICATION_ACTIONS), (
-        "create_unique references notifyIfUnique which is not in live API"
-    )


 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch(
-        "unraid_mcp.tools.notifications.make_graphql_request", new_callable=AsyncMock
-    ) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock


 def _make_tool():
-    return make_tool_fn(
-        "unraid_mcp.tools.notifications", "register_notifications_tool", "unraid_notifications"
-    )
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


 class TestNotificationsValidation:
     async def test_delete_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool_fn(action="delete", notification_id="n:1", notification_type="UNREAD")
+            await tool_fn(
+                action="notification",
+                subaction="delete",
+                notification_id="n:1",
+                notification_type="UNREAD",
+            )

     async def test_delete_archived_requires_confirm(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool_fn(action="delete_archived")
+            await tool_fn(action="notification", subaction="delete_archived")

     async def test_create_requires_fields(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="requires title"):
-            await tool_fn(action="create")
+            await tool_fn(action="notification", subaction="create")

     async def test_archive_requires_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="notification_id"):
-            await tool_fn(action="archive")
+            await tool_fn(action="notification", subaction="archive")

     async def test_delete_requires_id_and_type(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="requires notification_id"):
-            await tool_fn(action="delete", confirm=True)
+            await tool_fn(action="notification", subaction="delete", confirm=True)


 class TestNotificationsActions:
@@ -75,7 +62,7 @@ class TestNotificationsActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="overview")
+        result = await tool_fn(action="notification", subaction="overview")
         assert result["unread"]["total"] == 7

     async def test_list(self, _mock_graphql: AsyncMock) -> None:
@@ -83,7 +70,7 @@ class TestNotificationsActions:
             "notifications": {"list": [{"id": "n:1", "title": "Test", "importance": "INFO"}]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="notification", subaction="list")
         assert len(result["notifications"]) == 1

     async def test_create(self, _mock_graphql: AsyncMock) -> None:
@@ -92,7 +79,8 @@ class TestNotificationsActions:
         }
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create",
+            action="notification",
+            subaction="create",
             title="Test",
             subject="Test Subject",
             description="Test Desc",
@@ -103,7 +91,7 @@ class TestNotificationsActions:
     async def test_archive_notification(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"archiveNotification": {"id": "n:1"}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="archive", notification_id="n:1")
+        result = await tool_fn(action="notification", subaction="archive", notification_id="n:1")
         assert result["success"] is True

     async def test_delete_with_confirm(self, _mock_graphql: AsyncMock) -> None:
@@ -115,7 +103,8 @@ class TestNotificationsActions:
         }
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="delete",
+            action="notification",
+            subaction="delete",
             notification_id="n:1",
             notification_type="unread",
             confirm=True,
@@ -130,22 +119,24 @@ class TestNotificationsActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="archive_all")
+        result = await tool_fn(action="notification", subaction="archive_all")
         assert result["success"] is True

     async def test_unread_notification(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"unreadNotification": {"id": "n:1"}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="unread", notification_id="n:1")
+        result = await tool_fn(action="notification", subaction="unread", notification_id="n:1")
         assert result["success"] is True
-        assert result["action"] == "unread"
+        assert result["subaction"] == "unread"

     async def test_list_with_importance_filter(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
             "notifications": {"list": [{"id": "n:1", "title": "Alert", "importance": "WARNING"}]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list", importance="warning", limit=10, offset=5)
+        result = await tool_fn(
+            action="notification", subaction="list", importance="warning", limit=10, offset=5
+        )
         assert len(result["notifications"]) == 1
         call_args = _mock_graphql.call_args
         filter_var = call_args[0][1]["filter"]
@@ -161,15 +152,15 @@ class TestNotificationsActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="delete_archived", confirm=True)
+        result = await tool_fn(action="notification", subaction="delete_archived", confirm=True)
         assert result["success"] is True
-        assert result["action"] == "delete_archived"
+        assert result["subaction"] == "delete_archived"

     async def test_generic_exception_wraps(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.side_effect = RuntimeError("boom")
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Failed to execute notifications/overview"):
-            await tool_fn(action="overview")
+        with pytest.raises(ToolError, match="Failed to execute notification/overview"):
+            await tool_fn(action="notification", subaction="overview")


 class TestNotificationsCreateValidation:
@@ -179,7 +170,8 @@ class TestNotificationsCreateValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Invalid importance"):
             await tool_fn(
-                action="create",
+                action="notification",
+                subaction="create",
                 title="T",
                 subject="S",
                 description="D",
@@ -191,7 +183,8 @@ class TestNotificationsCreateValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Invalid importance"):
             await tool_fn(
-                action="create",
+                action="notification",
+                subaction="create",
                 title="T",
                 subject="S",
                 description="D",
@@ -202,7 +195,12 @@ class TestNotificationsCreateValidation:
         _mock_graphql.return_value = {"createNotification": {"id": "n:1", "importance": "ALERT"}}
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create", title="T", subject="S", description="D", importance="alert"
+            action="notification",
+            subaction="create",
+            title="T",
+            subject="S",
+            description="D",
+            importance="alert",
         )
         assert result["success"] is True

@@ -210,7 +208,8 @@ class TestNotificationsCreateValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="title must be at most 200"):
             await tool_fn(
-                action="create",
+                action="notification",
+                subaction="create",
                 title="x" * 201,
                 subject="S",
                 description="D",
@@ -221,7 +220,8 @@ class TestNotificationsCreateValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="subject must be at most 500"):
             await tool_fn(
-                action="create",
+                action="notification",
+                subaction="create",
                 title="T",
                 subject="x" * 501,
                 description="D",
@@ -232,7 +232,8 @@ class TestNotificationsCreateValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="description must be at most 2000"):
             await tool_fn(
-                action="create",
+                action="notification",
+                subaction="create",
                 title="T",
                 subject="S",
                 description="x" * 2001,
@@ -243,7 +244,8 @@ class TestNotificationsCreateValidation:
         _mock_graphql.return_value = {"createNotification": {"id": "n:1", "importance": "INFO"}}
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create",
+            action="notification",
+            subaction="create",
             title="x" * 200,
             subject="S",
             description="D",
@@ -261,7 +263,9 @@ class TestNewNotificationMutations:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="archive_many", notification_ids=["n:1", "n:2"])
+        result = await tool_fn(
+            action="notification", subaction="archive_many", notification_ids=["n:1", "n:2"]
+        )
         assert result["success"] is True
         call_args = _mock_graphql.call_args
         assert call_args[0][1] == {"ids": ["n:1", "n:2"]}
@@ -269,7 +273,7 @@ class TestNewNotificationMutations:
     async def test_archive_many_requires_ids(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="notification_ids"):
-            await tool_fn(action="archive_many")
+            await tool_fn(action="notification", subaction="archive_many")

     async def test_unarchive_many_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
@@ -279,13 +283,15 @@ class TestNewNotificationMutations:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="unarchive_many", notification_ids=["n:1", "n:2"])
+        result = await tool_fn(
+            action="notification", subaction="unarchive_many", notification_ids=["n:1", "n:2"]
+        )
         assert result["success"] is True

     async def test_unarchive_many_requires_ids(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="notification_ids"):
-            await tool_fn(action="unarchive_many")
+            await tool_fn(action="notification", subaction="unarchive_many")

     async def test_unarchive_all_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
@@ -295,7 +301,7 @@ class TestNewNotificationMutations:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="unarchive_all")
+        result = await tool_fn(action="notification", subaction="unarchive_all")
         assert result["success"] is True

     async def test_unarchive_all_with_importance(self, _mock_graphql: AsyncMock) -> None:
@@ -304,7 +310,7 @@ class TestNewNotificationMutations:
             "unarchiveAll": {"unread": {"total": 1}, "archive": {"total": 0}}
         }
         tool_fn = _make_tool()
-        await tool_fn(action="unarchive_all", importance="warning")
+        await tool_fn(action="notification", subaction="unarchive_all", importance="warning")
         call_args = _mock_graphql.call_args
         assert call_args[0][1] == {"importance": "WARNING"}

@@ -316,5 +322,5 @@ class TestNewNotificationMutations:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="recalculate")
+        result = await tool_fn(action="notification", subaction="recalculate")
         assert result["success"] is True
@@ -1,5 +1,5 @@
 # tests/test_oidc.py
-"""Tests for unraid_oidc tool."""
+"""Tests for oidc subactions of the consolidated unraid tool."""

 from __future__ import annotations

@@ -11,16 +11,12 @@ from conftest import make_tool_fn

 @pytest.fixture
 def _mock_graphql():
-    with patch("unraid_mcp.tools.oidc.make_graphql_request", new_callable=AsyncMock) as m:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as m:
         yield m


 def _make_tool():
-    return make_tool_fn(
-        "unraid_mcp.tools.oidc",
-        "register_oidc_tool",
-        "unraid_oidc",
-    )
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


 @pytest.mark.asyncio
@@ -30,15 +26,16 @@ async def test_providers_returns_list(_mock_graphql):
            {"id": "1:local", "name": "Google", "clientId": "abc", "scopes": ["openid"]}
        ]
    }
-    result = await _make_tool()(action="providers")
-    assert result["success"] is True
+    result = await _make_tool()(action="oidc", subaction="providers")
+    assert "providers" in result
+    assert len(result["providers"]) == 1


 @pytest.mark.asyncio
 async def test_public_providers(_mock_graphql):
     _mock_graphql.return_value = {"publicOidcProviders": []}
-    result = await _make_tool()(action="public_providers")
-    assert result["success"] is True
+    result = await _make_tool()(action="oidc", subaction="public_providers")
+    assert "providers" in result


 @pytest.mark.asyncio
@@ -46,7 +43,7 @@ async def test_provider_requires_provider_id(_mock_graphql):
     from unraid_mcp.core.exceptions import ToolError

     with pytest.raises(ToolError, match="provider_id"):
-        await _make_tool()(action="provider")
+        await _make_tool()(action="oidc", subaction="provider")


 @pytest.mark.asyncio
@@ -54,7 +51,7 @@ async def test_validate_session_requires_token(_mock_graphql):
     from unraid_mcp.core.exceptions import ToolError

     with pytest.raises(ToolError, match="token"):
-        await _make_tool()(action="validate_session")
+        await _make_tool()(action="oidc", subaction="validate_session")


 @pytest.mark.asyncio
@@ -62,5 +59,5 @@ async def test_configuration(_mock_graphql):
     _mock_graphql.return_value = {
         "oidcConfiguration": {"providers": [], "defaultAllowedOrigins": []}
     }
-    result = await _make_tool()(action="configuration")
-    assert result["success"] is True
+    result = await _make_tool()(action="oidc", subaction="configuration")
+    assert "providers" in result
@@ -1,72 +1,63 @@
 # tests/test_plugins.py
-"""Tests for unraid_plugins tool."""
+"""Tests for plugin subactions of the consolidated unraid tool."""

 from __future__ import annotations

 from unittest.mock import patch

 import pytest
-from fastmcp import FastMCP
+from conftest import make_tool_fn


-@pytest.fixture
-def mcp():
-    return FastMCP("test")


 @pytest.fixture
 def _mock_graphql():
-    with patch("unraid_mcp.tools.plugins.make_graphql_request") as m:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request") as m:
         yield m


-def _make_tool(mcp):
-    from unraid_mcp.tools.plugins import register_plugins_tool
-
-    register_plugins_tool(mcp)
-    # FastMCP 3.x: access tool fn via internal provider components (same as conftest.make_tool_fn)
-    local_provider = mcp.providers[0]
-    tool = local_provider._components["tool:unraid_plugins@"]
-    return tool.fn
+def _make_tool():
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


 @pytest.mark.asyncio
-async def test_list_returns_plugins(mcp, _mock_graphql):
+async def test_list_returns_plugins(_mock_graphql):
     _mock_graphql.return_value = {
         "plugins": [
             {"name": "my-plugin", "version": "1.0.0", "hasApiModule": True, "hasCliModule": False}
         ]
     }
-    result = await _make_tool(mcp)(action="list")
+    result = await _make_tool()(action="plugin", subaction="list")
     assert result["success"] is True
     assert len(result["data"]["plugins"]) == 1


 @pytest.mark.asyncio
-async def test_add_requires_names(mcp, _mock_graphql):
+async def test_add_requires_names(_mock_graphql):
     from unraid_mcp.core.exceptions import ToolError

     with pytest.raises(ToolError, match="names"):
-        await _make_tool(mcp)(action="add")
+        await _make_tool()(action="plugin", subaction="add")


 @pytest.mark.asyncio
-async def test_add_success(mcp, _mock_graphql):
+async def test_add_success(_mock_graphql):
     _mock_graphql.return_value = {"addPlugin": False}  # False = auto-restart triggered
-    result = await _make_tool(mcp)(action="add", names=["my-plugin"])
+    result = await _make_tool()(action="plugin", subaction="add", names=["my-plugin"])
     assert result["success"] is True


 @pytest.mark.asyncio
-async def test_remove_requires_confirm(mcp, _mock_graphql):
+async def test_remove_requires_confirm(_mock_graphql):
     from unraid_mcp.core.exceptions import ToolError

     with pytest.raises(ToolError, match="not confirmed"):
-        await _make_tool(mcp)(action="remove", names=["my-plugin"], confirm=False)
+        await _make_tool()(action="plugin", subaction="remove", names=["my-plugin"], confirm=False)


 @pytest.mark.asyncio
-async def test_remove_with_confirm(mcp, _mock_graphql):
+async def test_remove_with_confirm(_mock_graphql):
     _mock_graphql.return_value = {"removePlugin": True}
-    result = await _make_tool(mcp)(action="remove", names=["my-plugin"], confirm=True)
+    result = await _make_tool()(
+        action="plugin", subaction="remove", names=["my-plugin"], confirm=True
+    )
     assert result["success"] is True
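The plugin tests above all route through one entry point instead of a per-domain tool. A minimal sketch of the action (domain) + subaction (operation) dispatch they exercise, with hypothetical names and a synchronous stand-in (the real tool in `unraid_mcp/tools/unraid.py` is async and covers 15 domains):

```python
from typing import Any, Callable


class ToolError(Exception):
    """Stand-in for unraid_mcp.core.exceptions.ToolError."""


def _plugin(subaction: str, **kwargs: Any) -> dict:
    # One handler per domain; each validates its own subactions.
    if subaction == "list":
        return {"success": True, "data": {"plugins": []}}
    raise ToolError(f"Invalid subaction: {subaction}")


# Domain registry: action name -> handler.
_DOMAINS: dict[str, Callable[..., dict]] = {"plugin": _plugin}


def unraid(action: str, subaction: str, **kwargs: Any) -> dict:
    # Two-level routing: reject unknown domains first, then delegate.
    if action not in _DOMAINS:
        raise ToolError(f"Invalid action: {action}")
    return _DOMAINS[action](subaction, **kwargs)
```

Under this shape, `unraid(action="system", subaction="overview")` replaces the old `unraid_info(action="overview")` call described in the commit message.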
@@ -1,4 +1,4 @@
-"""Tests for unraid_rclone tool."""
+"""Tests for rclone subactions of the consolidated unraid tool."""

 from collections.abc import Generator
 from unittest.mock import AsyncMock, patch
@@ -11,36 +11,36 @@ from unraid_mcp.core.exceptions import ToolError

 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.rclone.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock


 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.rclone", "register_rclone_tool", "unraid_rclone")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


 class TestRcloneValidation:
     async def test_delete_requires_confirm(self) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not confirmed"):
-            await tool_fn(action="delete_remote", name="gdrive")
+            await tool_fn(action="rclone", subaction="delete_remote", name="gdrive")

     async def test_create_requires_fields(self) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="requires name"):
-            await tool_fn(action="create_remote")
+            await tool_fn(action="rclone", subaction="create_remote")

     async def test_delete_requires_name(self) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="name is required"):
-            await tool_fn(action="delete_remote", confirm=True)
+            await tool_fn(action="rclone", subaction="delete_remote", confirm=True)


 class TestRcloneActions:
     async def test_list_remotes(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"rclone": {"remotes": [{"name": "gdrive", "type": "drive"}]}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="list_remotes")
+        result = await tool_fn(action="rclone", subaction="list_remotes")
         assert len(result["remotes"]) == 1

     async def test_config_form(self, _mock_graphql: AsyncMock) -> None:
@@ -48,7 +48,7 @@ class TestRcloneActions:
             "rclone": {"configForm": {"id": "form:1", "dataSchema": {}, "uiSchema": {}}}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="config_form")
+        result = await tool_fn(action="rclone", subaction="config_form")
         assert result["id"] == "form:1"

     async def test_config_form_with_provider(self, _mock_graphql: AsyncMock) -> None:
@@ -56,7 +56,7 @@ class TestRcloneActions:
             "rclone": {"configForm": {"id": "form:s3", "dataSchema": {}, "uiSchema": {}}}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="config_form", provider_type="s3")
+        result = await tool_fn(action="rclone", subaction="config_form", provider_type="s3")
         assert result["id"] == "form:s3"
         call_args = _mock_graphql.call_args
         assert call_args[0][1] == {"formOptions": {"providerType": "s3"}}
@@ -67,7 +67,8 @@
         }
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create_remote",
+            action="rclone",
+            subaction="create_remote",
             name="newremote",
             provider_type="s3",
             config_data={"bucket": "mybucket"},
@@ -81,7 +82,8 @@
         }
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create_remote",
+            action="rclone",
+            subaction="create_remote",
             name="ftp-remote",
             provider_type="ftp",
             config_data={},
@@ -91,14 +93,16 @@
     async def test_delete_remote(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"rclone": {"deleteRCloneRemote": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="delete_remote", name="gdrive", confirm=True)
+        result = await tool_fn(
+            action="rclone", subaction="delete_remote", name="gdrive", confirm=True
+        )
         assert result["success"] is True

     async def test_delete_remote_failure(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"rclone": {"deleteRCloneRemote": False}}
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Failed to delete"):
-            await tool_fn(action="delete_remote", name="gdrive", confirm=True)
+            await tool_fn(action="rclone", subaction="delete_remote", name="gdrive", confirm=True)


 class TestRcloneConfigDataValidation:
@@ -108,7 +112,8 @@
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="disallowed characters"):
             await tool_fn(
-                action="create_remote",
+                action="rclone",
+                subaction="create_remote",
                 name="r",
                 provider_type="s3",
                 config_data={"../evil": "value"},
@@ -118,7 +123,8 @@
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="disallowed characters"):
             await tool_fn(
-                action="create_remote",
+                action="rclone",
+                subaction="create_remote",
                 name="r",
                 provider_type="s3",
                 config_data={"key;rm": "value"},
@@ -128,7 +134,8 @@
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="max 50"):
             await tool_fn(
-                action="create_remote",
+                action="rclone",
+                subaction="create_remote",
                 name="r",
                 provider_type="s3",
                 config_data={f"key{i}": "v" for i in range(51)},
@@ -138,7 +145,8 @@
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="string, number, or boolean"):
             await tool_fn(
-                action="create_remote",
+                action="rclone",
+                subaction="create_remote",
                 name="r",
                 provider_type="s3",
                 config_data={"nested": {"key": "val"}},
@@ -148,19 +156,19 @@ class TestRcloneConfigDataValidation:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="exceeds max length"):
             await tool_fn(
-                action="create_remote",
+                action="rclone",
+                subaction="create_remote",
                 name="r",
                 provider_type="s3",
                 config_data={"key": "x" * 4097},
             )

     async def test_boolean_value_accepted(self, _mock_graphql: AsyncMock) -> None:
-        _mock_graphql.return_value = {
-            "rclone": {"createRCloneRemote": {"name": "r", "type": "s3"}}
-        }
+        _mock_graphql.return_value = {"rclone": {"createRCloneRemote": {"name": "r", "type": "s3"}}}
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create_remote",
+            action="rclone",
+            subaction="create_remote",
             name="r",
             provider_type="s3",
             config_data={"use_path_style": True},
@@ -173,7 +181,8 @@ class TestRcloneConfigDataValidation:
         }
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="create_remote",
+            action="rclone",
+            subaction="create_remote",
             name="r",
             provider_type="sftp",
             config_data={"port": 22},
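The `TestRcloneConfigDataValidation` cases above pin down a set of input rules for `config_data`. A hedged sketch of a validator satisfying exactly those expectations; the key pattern, the 50-entry cap, and the 4096-character limit are inferred from the test assertions, not copied from the implementation:

```python
import re


class ToolError(Exception):
    """Stand-in for unraid_mcp.core.exceptions.ToolError."""


# Assumed safe key alphabet: path separators and shell metacharacters rejected.
_KEY_RE = re.compile(r"^[A-Za-z0-9_.-]+$")


def validate_config_data(config_data: dict) -> None:
    if len(config_data) > 50:
        raise ToolError("config_data has too many entries (max 50)")
    for key, value in config_data.items():
        if not _KEY_RE.match(key):
            raise ToolError(f"config_data key {key!r} contains disallowed characters")
        if not isinstance(value, (str, int, float, bool)):
            raise ToolError("config_data values must be a string, number, or boolean")
        if isinstance(value, str) and len(value) > 4096:
            raise ToolError(f"config_data value for {key!r} exceeds max length 4096")
```

Rejecting `../` and `;` in keys matters here because the values ultimately land in an rclone remote definition on the server.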
@@ -6,6 +6,7 @@ from unittest.mock import AsyncMock, patch
 import pytest
 from fastmcp import FastMCP

+from unraid_mcp.subscriptions.queries import SNAPSHOT_ACTIONS
 from unraid_mcp.subscriptions.resources import register_subscription_resources


@@ -16,15 +17,6 @@ def _make_resources():
     return test_mcp


-@pytest.fixture
-def _mock_subscribe_once():
-    with patch(
-        "unraid_mcp.subscriptions.resources.subscribe_once",
-        new_callable=AsyncMock,
-    ) as mock:
-        yield mock
-
-
 @pytest.fixture
 def _mock_ensure_started():
     with patch(
@@ -34,55 +26,59 @@ def _mock_ensure_started():
         yield mock


-class TestLiveResources:
-    @pytest.mark.parametrize(
-        "action",
-        [
-            "cpu",
-            "memory",
-            "cpu_telemetry",
-            "array_state",
-            "parity_progress",
-            "ups_status",
-            "notifications_overview",
-            "owner",
-            "server_status",
-        ],
-    )
-    async def test_resource_returns_json(
-        self,
-        action: str,
-        _mock_subscribe_once: AsyncMock,
-        _mock_ensure_started: AsyncMock,
+class TestLiveResourcesUseManagerCache:
+    """All live resources must read from the persistent SubscriptionManager cache."""
+
+    @pytest.mark.parametrize("action", list(SNAPSHOT_ACTIONS.keys()))
+    async def test_resource_returns_cached_data(
+        self, action: str, _mock_ensure_started: AsyncMock
     ) -> None:
-        _mock_subscribe_once.return_value = {"data": "ok"}
-        mcp = _make_resources()
-
-        local_provider = mcp.providers[0]
-        resource_key = f"resource:unraid://live/{action}@"
-        resource = local_provider._components[resource_key]
-        result = await resource.fn()
-
-        parsed = json.loads(result)
-        assert parsed == {"data": "ok"}
-
-    async def test_resource_returns_error_dict_on_failure(
-        self,
-        _mock_subscribe_once: AsyncMock,
-        _mock_ensure_started: AsyncMock,
+        cached = {"systemMetricsCpu": {"percentTotal": 12.5}}
+        with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
+            mock_mgr.get_resource_data = AsyncMock(return_value=cached)
+            mcp = _make_resources()
+            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            result = await resource.fn()
+        assert json.loads(result) == cached
+
+    @pytest.mark.parametrize("action", list(SNAPSHOT_ACTIONS.keys()))
+    async def test_resource_returns_status_when_no_cache(
+        self, action: str, _mock_ensure_started: AsyncMock
     ) -> None:
-        from fastmcp.exceptions import ToolError
-
-        _mock_subscribe_once.side_effect = ToolError("WebSocket timeout")
-        mcp = _make_resources()
-
-        local_provider = mcp.providers[0]
-        resource = local_provider._components["resource:unraid://live/cpu@"]
-        result = await resource.fn()
+        with patch("unraid_mcp.subscriptions.resources.subscription_manager") as mock_mgr:
+            mock_mgr.get_resource_data = AsyncMock(return_value=None)
+            mcp = _make_resources()
+            resource = mcp.providers[0]._components[f"resource:unraid://live/{action}@"]
+            result = await resource.fn()

         parsed = json.loads(result)
-        assert "error" in parsed
-        assert "WebSocket timeout" in parsed["error"]
+        assert "status" in parsed
+
+    def test_subscribe_once_not_imported(self) -> None:
+        """subscribe_once must not be imported — resources use manager cache exclusively."""
+        import unraid_mcp.subscriptions.resources as res_module
+
+        assert not hasattr(res_module, "subscribe_once")
+
+
+class TestSnapshotSubscriptionsRegistered:
+    """All SNAPSHOT_ACTIONS must be registered in the SubscriptionManager with auto_start=True."""
+
+    def test_all_snapshot_actions_in_configs(self) -> None:
+        from unraid_mcp.subscriptions.manager import subscription_manager
+
+        for action in SNAPSHOT_ACTIONS:
+            assert action in subscription_manager.subscription_configs, (
+                f"'{action}' not registered in subscription_configs"
+            )
+
+    def test_all_snapshot_actions_autostart(self) -> None:
+        from unraid_mcp.subscriptions.manager import subscription_manager
+
+        for action in SNAPSHOT_ACTIONS:
+            config = subscription_manager.subscription_configs[action]
+            assert config.get("auto_start") is True, (
+                f"'{action}' missing auto_start=True in subscription_configs"
+            )


 class TestLogsStreamResource:
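The resource tests above assert a read-from-cache pattern: the live resource handler never opens its own subscription, it only reads the manager's latest snapshot and reports a status when nothing has arrived yet. A hedged sketch with illustrative names (the real manager lives in `unraid_mcp/subscriptions/manager.py`):

```python
import asyncio
import json
from typing import Optional


class FakeManager:
    """Illustrative stand-in for the SubscriptionManager's snapshot cache."""

    def __init__(self) -> None:
        self._cache: dict[str, dict] = {}

    async def get_resource_data(self, action: str) -> Optional[dict]:
        return self._cache.get(action)


async def read_live_resource(manager: FakeManager, action: str) -> str:
    data = await manager.get_resource_data(action)
    if data is None:
        # No snapshot yet: return a status payload instead of blocking
        # on the WebSocket (the old subscribe_once behavior).
        return json.dumps({"status": "no data cached yet", "action": action})
    return json.dumps(data)
```

The design choice the tests lock in: a slow or disconnected server degrades to a status message rather than a per-request WebSocket round trip.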
@@ -1,38 +1,32 @@
-"""Tests for the unraid_settings tool."""
+"""Tests for the setting subactions of the consolidated unraid tool."""

 from collections.abc import Generator
-from typing import get_args
 from unittest.mock import AsyncMock, patch

 import pytest
-from fastmcp import FastMCP
+from conftest import make_tool_fn

 from unraid_mcp.core.exceptions import ToolError
-from unraid_mcp.tools.settings import SETTINGS_ACTIONS, register_settings_tool


 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.settings.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock


-def _make_tool() -> AsyncMock:
-    test_mcp = FastMCP("test")
-    register_settings_tool(test_mcp)
-    # FastMCP 3.x stores tools in providers[0]._components keyed as "tool:{name}@"
-    local_provider = test_mcp.providers[0]
-    tool = local_provider._components["tool:unraid_settings@"]  # ty: ignore[unresolved-attribute]
-    return tool.fn
+def _make_tool():
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")


 # ---------------------------------------------------------------------------
-# Regression: removed actions must not appear in SETTINGS_ACTIONS
+# Regression: removed subactions must raise Invalid subaction
 # ---------------------------------------------------------------------------


+@pytest.mark.asyncio
 @pytest.mark.parametrize(
-    "action",
+    "subaction",
     [
         "update_temperature",
         "update_time",
@@ -44,10 +38,10 @@ def _make_tool() -> AsyncMock:
         "update_ssh",
     ],
 )
-def test_removed_settings_actions_are_gone(action: str) -> None:
-    assert action not in get_args(SETTINGS_ACTIONS), (
-        f"{action} references a non-existent mutation and must not be in SETTINGS_ACTIONS"
-    )
+async def test_removed_settings_subactions_are_invalid(subaction: str) -> None:
+    tool_fn = _make_tool()
+    with pytest.raises(ToolError, match="Invalid subaction"):
+        await tool_fn(action="setting", subaction=subaction)


 # ---------------------------------------------------------------------------
@@ -56,19 +50,19 @@ def test_removed_settings_actions_are_gone(action: str) -> None:


 class TestSettingsValidation:
-    """Tests for action validation and destructive guard."""
+    """Tests for subaction validation and destructive guard."""

-    async def test_invalid_action(self, _mock_graphql: AsyncMock) -> None:
+    async def test_invalid_subaction(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="nonexistent_action")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="setting", subaction="nonexistent_action")

     async def test_destructive_configure_ups_requires_confirm(
         self, _mock_graphql: AsyncMock
     ) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="confirm=True"):
-            await tool_fn(action="configure_ups", ups_config={"mode": "slave"})
+            await tool_fn(action="setting", subaction="configure_ups", ups_config={"mode": "slave"})


 # ---------------------------------------------------------------------------
@@ -77,21 +71,23 @@ class TestSettingsValidation:


 class TestSettingsUpdate:
-    """Tests for update action."""
+    """Tests for update subaction."""

     async def test_update_requires_settings_input(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="settings_input is required"):
-            await tool_fn(action="update")
+            await tool_fn(action="setting", subaction="update")

     async def test_update_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {
             "updateSettings": {"restartRequired": False, "values": {}, "warnings": []}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="update", settings_input={"shareCount": 5})
+        result = await tool_fn(
+            action="setting", subaction="update", settings_input={"shareCount": 5}
+        )
         assert result["success"] is True
-        assert result["action"] == "update"
+        assert result["subaction"] == "update"


 # ---------------------------------------------------------------------------
@@ -100,18 +96,21 @@ class TestSettingsUpdate:


 class TestUpsConfig:
-    """Tests for configure_ups action."""
+    """Tests for configure_ups subaction."""

     async def test_configure_ups_requires_ups_config(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="ups_config is required"):
-            await tool_fn(action="configure_ups", confirm=True)
+            await tool_fn(action="setting", subaction="configure_ups", confirm=True)

     async def test_configure_ups_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"configureUps": True}
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="configure_ups", confirm=True, ups_config={"mode": "master", "cable": "usb"}
+            action="setting",
+            subaction="configure_ups",
+            confirm=True,
+            ups_config={"mode": "master", "cable": "usb"},
         )
         assert result["success"] is True
-        assert result["action"] == "configure_ups"
+        assert result["subaction"] == "configure_ups"
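Across the settings, plugin, and rclone tests, every destructive subaction is gated on `confirm=True`. A hedged sketch of that guard, with names assumed for illustration rather than taken from the tool module:

```python
class ToolError(Exception):
    """Stand-in for unraid_mcp.core.exceptions.ToolError."""


# Assumed registry of subactions that mutate server state irreversibly.
DESTRUCTIVE_SUBACTIONS = {"configure_ups", "remove", "delete_remote"}


def guard_destructive(subaction: str, confirm: bool) -> None:
    # Fail fast before any GraphQL mutation is issued.
    if subaction in DESTRUCTIVE_SUBACTIONS and not confirm:
        raise ToolError(f"{subaction} not confirmed; pass confirm=True to proceed")
```

Raising before the request is sent is what lets the tests run with the GraphQL mock unconfigured: validation errors never reach the transport layer.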
@@ -387,6 +387,119 @@ def test_tool_error_handler_credentials_error_message_includes_path():
|
|||||||
assert "setup" in str(exc_info.value).lower()
|
assert "setup" in str(exc_info.value).lower()
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# elicit_reset_confirmation
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_elicit_reset_confirmation_returns_false_when_ctx_none():
|
||||||
|
"""Returns False immediately when no MCP context is available."""
|
||||||
|
from unraid_mcp.core.setup import elicit_reset_confirmation
|
||||||
|
|
||||||
|
result = await elicit_reset_confirmation(None, "https://example.com")
|
||||||
|
assert result is False
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_elicit_reset_confirmation_returns_true_when_user_confirms():
|
||||||
|
"""Returns True when the user accepts and answers True."""
|
||||||
|
from unittest.mock import AsyncMock, MagicMock
|
||||||
|
|
||||||
|
from unraid_mcp.core.setup import elicit_reset_confirmation
|
||||||
|
|
||||||
|
mock_ctx = MagicMock()
|
||||||
|
mock_result = MagicMock()
|
||||||
|
mock_result.action = "accept"
|
||||||
|
mock_result.data = True
|
||||||
|
mock_ctx.elicit = AsyncMock(return_value=mock_result)
|
||||||
|
|
||||||
|
result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
|
||||||
|
assert result is True
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_elicit_reset_confirmation_returns_false_when_user_answers_false():
|
||||||
|
"""Returns False when the user accepts but answers False (does not want to reset)."""
|
||||||
|
from unittest.mock import AsyncMock, MagicMock
|
||||||
|
|
||||||
|
from unraid_mcp.core.setup import elicit_reset_confirmation
|
||||||
|
|
||||||
|
mock_ctx = MagicMock()
|
||||||
|
mock_result = MagicMock()
|
||||||
|
+    mock_result.action = "accept"
+    mock_result.data = False
+    mock_ctx.elicit = AsyncMock(return_value=mock_result)
+
+    result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
+    assert result is False
+
+
+@pytest.mark.asyncio
+async def test_elicit_reset_confirmation_returns_false_when_declined():
+    """Returns False when the user declines via action (dismisses the prompt)."""
+    from unittest.mock import AsyncMock, MagicMock
+
+    from unraid_mcp.core.setup import elicit_reset_confirmation
+
+    mock_ctx = MagicMock()
+    mock_result = MagicMock()
+    mock_result.action = "decline"
+    mock_ctx.elicit = AsyncMock(return_value=mock_result)
+
+    result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
+    assert result is False
+
+
+@pytest.mark.asyncio
+async def test_elicit_reset_confirmation_returns_false_when_cancelled():
+    """Returns False when the user cancels the prompt."""
+    from unittest.mock import AsyncMock, MagicMock
+
+    from unraid_mcp.core.setup import elicit_reset_confirmation
+
+    mock_ctx = MagicMock()
+    mock_result = MagicMock()
+    mock_result.action = "cancel"
+    mock_ctx.elicit = AsyncMock(return_value=mock_result)
+
+    result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
+    assert result is False
+
+
+@pytest.mark.asyncio
+async def test_elicit_reset_confirmation_returns_false_when_not_implemented():
+    """Returns False when the MCP client does not support elicitation."""
+    from unittest.mock import AsyncMock, MagicMock
+
+    from unraid_mcp.core.setup import elicit_reset_confirmation
+
+    mock_ctx = MagicMock()
+    mock_ctx.elicit = AsyncMock(side_effect=NotImplementedError("elicitation not supported"))
+
+    result = await elicit_reset_confirmation(mock_ctx, "https://example.com")
+    assert result is False
+
+
+@pytest.mark.asyncio
+async def test_elicit_reset_confirmation_includes_current_url_in_prompt():
+    """The elicitation message includes the current URL so the user knows what they're replacing."""
+    from unittest.mock import AsyncMock, MagicMock
+
+    from unraid_mcp.core.setup import elicit_reset_confirmation
+
+    mock_ctx = MagicMock()
+    mock_result = MagicMock()
+    mock_result.action = "decline"
+    mock_ctx.elicit = AsyncMock(return_value=mock_result)
+
+    await elicit_reset_confirmation(mock_ctx, "https://my-unraid.example.com:31337")
+
+    call_kwargs = mock_ctx.elicit.call_args
+    message = call_kwargs.kwargs.get("message") or call_kwargs.args[0]
+    assert "https://my-unraid.example.com:31337" in message
+
+
 @pytest.mark.asyncio
 async def test_credentials_not_configured_surfaces_as_tool_error_with_path():
     """CredentialsNotConfiguredError from a tool becomes ToolError with the credentials path."""
@@ -396,15 +509,15 @@ async def test_credentials_not_configured_surfaces_as_tool_error_with_path():
     from unraid_mcp.config.settings import CREDENTIALS_ENV_PATH
     from unraid_mcp.core.exceptions import CredentialsNotConfiguredError, ToolError
 
-    tool_fn = make_tool_fn("unraid_mcp.tools.users", "register_users_tool", "unraid_users")
+    tool_fn = make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
     with (
         patch(
-            "unraid_mcp.tools.users.make_graphql_request",
+            "unraid_mcp.tools.unraid.make_graphql_request",
             new=AsyncMock(side_effect=CredentialsNotConfiguredError()),
         ),
         pytest.raises(ToolError) as exc_info,
     ):
-        await tool_fn(action="me")
+        await tool_fn(action="user", subaction="me")
 
     assert str(CREDENTIALS_ENV_PATH) in str(exc_info.value)
@@ -1,7 +1,6 @@
-"""Tests for unraid_storage tool."""
+"""Tests for disk subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
-from typing import get_args
 from unittest.mock import AsyncMock, patch
 
 import pytest
@@ -9,13 +8,6 @@ from conftest import make_tool_fn
 
 from unraid_mcp.core.exceptions import ToolError
 from unraid_mcp.core.utils import format_bytes, format_kb, safe_get
-from unraid_mcp.tools.storage import STORAGE_ACTIONS
-
-
-def test_unassigned_action_removed() -> None:
-    assert "unassigned" not in get_args(STORAGE_ACTIONS), (
-        "unassigned action references unassignedDevices which is not in live API"
-    )
 
 
 # --- Unit tests for helpers ---
@@ -46,59 +38,63 @@ class TestFormatBytes:
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.storage.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.storage", "register_storage_tool", "unraid_storage")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestStorageValidation:
     async def test_disk_details_requires_disk_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="disk_id"):
-            await tool_fn(action="disk_details")
+            await tool_fn(action="disk", subaction="disk_details")
 
     async def test_logs_requires_log_path(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="log_path"):
-            await tool_fn(action="logs")
+            await tool_fn(action="disk", subaction="logs")
 
     async def test_logs_rejects_invalid_path(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="log_path must start with"):
-            await tool_fn(action="logs", log_path="/etc/shadow")
+            await tool_fn(action="disk", subaction="logs", log_path="/etc/shadow")
 
     async def test_logs_rejects_path_traversal(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         # Traversal that escapes /var/log/ to reach /etc/shadow
         with pytest.raises(ToolError, match="log_path must start with"):
-            await tool_fn(action="logs", log_path="/var/log/../../etc/shadow")
+            await tool_fn(action="disk", subaction="logs", log_path="/var/log/../../etc/shadow")
         # Traversal that escapes /mnt/ to reach /etc/passwd
         with pytest.raises(ToolError, match="log_path must start with"):
-            await tool_fn(action="logs", log_path="/mnt/../etc/passwd")
+            await tool_fn(action="disk", subaction="logs", log_path="/mnt/../etc/passwd")
 
     async def test_logs_allows_valid_paths(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="logs", log_path="/var/log/syslog")
+        result = await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog")
         assert result["content"] == "ok"
 
     async def test_logs_tail_lines_too_large(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="tail_lines must be between"):
-            await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=10_001)
+            await tool_fn(
+                action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=10_001
+            )
 
     async def test_logs_tail_lines_zero_rejected(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="tail_lines must be between"):
-            await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=0)
+            await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=0)
 
     async def test_logs_tail_lines_at_max_accepted(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFile": {"path": "/var/log/syslog", "content": "ok"}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="logs", log_path="/var/log/syslog", tail_lines=10_000)
+        result = await tool_fn(
+            action="disk", subaction="logs", log_path="/var/log/syslog", tail_lines=10_000
+        )
         assert result["content"] == "ok"
 
     async def test_non_logs_action_ignores_tail_lines_validation(
@@ -106,7 +102,7 @@ class TestStorageValidation:
     ) -> None:
         _mock_graphql.return_value = {"shares": []}
         tool_fn = _make_tool()
-        result = await tool_fn(action="shares", tail_lines=0)
+        result = await tool_fn(action="disk", subaction="shares", tail_lines=0)
         assert result["shares"] == []
 
 
@@ -173,13 +169,13 @@ class TestStorageActions:
             "shares": [{"id": "s:1", "name": "media"}, {"id": "s:2", "name": "backups"}]
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="shares")
+        result = await tool_fn(action="disk", subaction="shares")
         assert len(result["shares"]) == 2
 
     async def test_disks(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"disks": [{"id": "d:1", "device": "sda"}]}
         tool_fn = _make_tool()
-        result = await tool_fn(action="disks")
+        result = await tool_fn(action="disk", subaction="disks")
         assert len(result["disks"]) == 1
 
     async def test_disk_details(self, _mock_graphql: AsyncMock) -> None:
@@ -194,7 +190,7 @@ class TestStorageActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="disk_details", disk_id="d:1")
+        result = await tool_fn(action="disk", subaction="disk_details", disk_id="d:1")
         assert result["summary"]["temperature"] == "35\u00b0C"
         assert "1.00 GB" in result["summary"]["size_formatted"]
 
@@ -211,7 +207,7 @@ class TestStorageActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="disk_details", disk_id="d:1")
+        result = await tool_fn(action="disk", subaction="disk_details", disk_id="d:1")
         assert result["summary"]["temperature"] == "0\u00b0C"
 
     async def test_disk_details_temperature_null(self, _mock_graphql: AsyncMock) -> None:
@@ -227,26 +223,26 @@ class TestStorageActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="disk_details", disk_id="d:1")
+        result = await tool_fn(action="disk", subaction="disk_details", disk_id="d:1")
         assert result["summary"]["temperature"] == "N/A"
 
     async def test_logs_null_log_file(self, _mock_graphql: AsyncMock) -> None:
         """logFile being null should return an empty dict."""
         _mock_graphql.return_value = {"logFile": None}
         tool_fn = _make_tool()
-        result = await tool_fn(action="logs", log_path="/var/log/syslog")
+        result = await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog")
         assert result == {}
 
     async def test_disk_details_not_found(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"disk": None}
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not found"):
-            await tool_fn(action="disk_details", disk_id="d:missing")
+            await tool_fn(action="disk", subaction="disk_details", disk_id="d:missing")
 
     async def test_log_files(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"logFiles": [{"name": "syslog", "path": "/var/log/syslog"}]}
         tool_fn = _make_tool()
-        result = await tool_fn(action="log_files")
+        result = await tool_fn(action="disk", subaction="log_files")
         assert len(result["log_files"]) == 1
 
     async def test_logs(self, _mock_graphql: AsyncMock) -> None:
@@ -254,7 +250,7 @@ class TestStorageActions:
             "logFile": {"path": "/var/log/syslog", "content": "log line", "totalLines": 1}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="logs", log_path="/var/log/syslog")
+        result = await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog")
         assert result["content"] == "log line"
 
 
@@ -268,7 +264,7 @@ class TestStorageNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Invalid JSON"):
-            await tool_fn(action="logs", log_path="/var/log/syslog")
+            await tool_fn(action="disk", subaction="logs", log_path="/var/log/syslog")
 
     async def test_shares_connection_refused(self, _mock_graphql: AsyncMock) -> None:
         """Connection refused when listing shares should propagate as ToolError."""
@@ -277,14 +273,14 @@ class TestStorageNetworkErrors:
         )
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="Connection refused"):
-            await tool_fn(action="shares")
+            await tool_fn(action="disk", subaction="shares")
 
     async def test_disks_http_500(self, _mock_graphql: AsyncMock) -> None:
         """HTTP 500 when listing disks should propagate as ToolError."""
         _mock_graphql.side_effect = ToolError("HTTP error 500: Internal Server Error")
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="HTTP error 500"):
-            await tool_fn(action="disks")
+            await tool_fn(action="disk", subaction="disks")
 
 
 class TestStorageFlashBackup:
@@ -292,29 +288,40 @@ class TestStorageFlashBackup:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not confirmed"):
             await tool_fn(
-                action="flash_backup", remote_name="r", source_path="/boot", destination_path="r:b"
+                action="disk",
+                subaction="flash_backup",
+                remote_name="r",
+                source_path="/boot",
+                destination_path="r:b",
             )
 
     async def test_flash_backup_requires_remote_name(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="remote_name"):
-            await tool_fn(action="flash_backup", confirm=True)
+            await tool_fn(action="disk", subaction="flash_backup", confirm=True)
 
     async def test_flash_backup_requires_source_path(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="source_path"):
-            await tool_fn(action="flash_backup", confirm=True, remote_name="r")
+            await tool_fn(action="disk", subaction="flash_backup", confirm=True, remote_name="r")
 
     async def test_flash_backup_requires_destination_path(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="destination_path"):
-            await tool_fn(action="flash_backup", confirm=True, remote_name="r", source_path="/boot")
+            await tool_fn(
+                action="disk",
+                subaction="flash_backup",
+                confirm=True,
+                remote_name="r",
+                source_path="/boot",
+            )
 
     async def test_flash_backup_success(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"initiateFlashBackup": {"status": "started", "jobId": "j:1"}}
         tool_fn = _make_tool()
         result = await tool_fn(
-            action="flash_backup",
+            action="disk",
+            subaction="flash_backup",
             confirm=True,
             remote_name="r",
             source_path="/boot",
@@ -327,7 +334,8 @@ class TestStorageFlashBackup:
         _mock_graphql.return_value = {"initiateFlashBackup": {"status": "started", "jobId": "j:2"}}
         tool_fn = _make_tool()
         await tool_fn(
-            action="flash_backup",
+            action="disk",
+            subaction="flash_backup",
             confirm=True,
             remote_name="r",
             source_path="/boot",
@@ -1,4 +1,4 @@
-"""Tests for unraid_users tool.
+"""Tests for user subactions of the consolidated unraid tool.
 
 NOTE: Unraid GraphQL API only supports the me() query.
 User management operations (list, add, delete, cloud, remote_access, origins) are NOT available in the API.
@@ -15,35 +15,35 @@ from unraid_mcp.core.exceptions import ToolError
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch("unraid_mcp.tools.users.make_graphql_request", new_callable=AsyncMock) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.users", "register_users_tool", "unraid_users")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestUsersValidation:
-    """Test validation for invalid actions."""
+    """Test validation for invalid subactions."""
 
-    async def test_invalid_action_rejected(self, _mock_graphql: AsyncMock) -> None:
-        """Test that non-existent actions are rejected with clear error."""
+    async def test_invalid_subaction_rejected(self, _mock_graphql: AsyncMock) -> None:
+        """Test that non-existent subactions are rejected with clear error."""
         tool_fn = _make_tool()
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="list")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="user", subaction="list")
 
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="add")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="user", subaction="add")
 
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="delete")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="user", subaction="delete")
 
-        with pytest.raises(ToolError, match="Invalid action"):
-            await tool_fn(action="cloud")
+        with pytest.raises(ToolError, match="Invalid subaction"):
+            await tool_fn(action="user", subaction="cloud")
 
 
 class TestUsersActions:
-    """Test the single supported action: me."""
+    """Test the single supported subaction: me."""
 
     async def test_me(self, _mock_graphql: AsyncMock) -> None:
         """Test querying current authenticated user."""
@@ -51,27 +51,18 @@ class TestUsersActions:
             "me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="me")
+        result = await tool_fn(action="user", subaction="me")
         assert result["name"] == "root"
         assert result["roles"] == ["ADMIN"]
         _mock_graphql.assert_called_once()
 
-    async def test_me_default_action(self, _mock_graphql: AsyncMock) -> None:
-        """Test that 'me' is the default action."""
-        _mock_graphql.return_value = {
-            "me": {"id": "u:1", "name": "root", "description": "", "roles": ["ADMIN"]}
-        }
-        tool_fn = _make_tool()
-        result = await tool_fn()
-        assert result["name"] == "root"
-
 
 class TestUsersNoneHandling:
-    """Verify actions return empty dict (not TypeError) when API returns None."""
+    """Verify subactions return empty dict (not TypeError) when API returns None."""
 
     async def test_me_returns_none(self, _mock_graphql: AsyncMock) -> None:
         """Test that me returns empty dict when API returns None."""
         _mock_graphql.return_value = {"me": None}
         tool_fn = _make_tool()
-        result = await tool_fn(action="me")
+        result = await tool_fn(action="user", subaction="me")
         assert result == {}
@@ -1,4 +1,4 @@
-"""Tests for unraid_vm tool."""
+"""Tests for vm subactions of the consolidated unraid tool."""
 
 from collections.abc import Generator
 from unittest.mock import AsyncMock, patch
@@ -11,34 +11,32 @@ from unraid_mcp.core.exceptions import ToolError
 
 @pytest.fixture
 def _mock_graphql() -> Generator[AsyncMock, None, None]:
-    with patch(
-        "unraid_mcp.tools.virtualization.make_graphql_request", new_callable=AsyncMock
-    ) as mock:
+    with patch("unraid_mcp.tools.unraid.make_graphql_request", new_callable=AsyncMock) as mock:
         yield mock
 
 
 def _make_tool():
-    return make_tool_fn("unraid_mcp.tools.virtualization", "register_vm_tool", "unraid_vm")
+    return make_tool_fn("unraid_mcp.tools.unraid", "register_unraid_tool", "unraid")
 
 
 class TestVmValidation:
     async def test_actions_except_list_require_vm_id(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        for action in ("details", "start", "stop", "pause", "resume", "reboot"):
+        for subaction in ("details", "start", "stop", "pause", "resume", "reboot"):
             with pytest.raises(ToolError, match="vm_id"):
-                await tool_fn(action=action)
+                await tool_fn(action="vm", subaction=subaction)
 
     async def test_destructive_actions_require_confirm(self, _mock_graphql: AsyncMock) -> None:
         tool_fn = _make_tool()
-        for action in ("force_stop", "reset"):
+        for subaction in ("force_stop", "reset"):
             with pytest.raises(ToolError, match="not confirmed"):
-                await tool_fn(action=action, vm_id="uuid-1")
+                await tool_fn(action="vm", subaction=subaction, vm_id="uuid-1")
 
     async def test_destructive_vm_id_check_before_confirm(self, _mock_graphql: AsyncMock) -> None:
-        """Destructive actions without vm_id should fail on vm_id first (validated before confirm)."""
+        """Destructive subactions without vm_id should fail on vm_id first (validated before confirm)."""
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="vm_id"):
-            await tool_fn(action="force_stop")
+            await tool_fn(action="vm", subaction="force_stop")
 
 
 class TestVmActions:
@@ -51,20 +49,20 @@ class TestVmActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="vm", subaction="list")
         assert len(result["vms"]) == 1
         assert result["vms"][0]["name"] == "Windows 11"
 
     async def test_list_empty(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"vms": {"domains": []}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="vm", subaction="list")
         assert result["vms"] == []
 
     async def test_list_no_vms_key(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {}
         tool_fn = _make_tool()
-        result = await tool_fn(action="list")
+        result = await tool_fn(action="vm", subaction="list")
         assert result["vms"] == []
 
     async def test_details_by_uuid(self, _mock_graphql: AsyncMock) -> None:
@@ -74,7 +72,7 @@ class TestVmActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="details", vm_id="uuid-1")
+        result = await tool_fn(action="vm", subaction="details", vm_id="uuid-1")
         assert result["name"] == "Win11"
 
     async def test_details_by_name(self, _mock_graphql: AsyncMock) -> None:
@@ -84,7 +82,7 @@ class TestVmActions:
             }
         }
         tool_fn = _make_tool()
-        result = await tool_fn(action="details", vm_id="Win11")
+        result = await tool_fn(action="vm", subaction="details", vm_id="Win11")
         assert result["uuid"] == "uuid-1"
 
     async def test_details_not_found(self, _mock_graphql: AsyncMock) -> None:
@@ -95,48 +93,48 @@ class TestVmActions:
         }
         tool_fn = _make_tool()
         with pytest.raises(ToolError, match="not found"):
-            await tool_fn(action="details", vm_id="nonexistent")
+            await tool_fn(action="vm", subaction="details", vm_id="nonexistent")
 
     async def test_start_vm(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"vm": {"start": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="start", vm_id="uuid-1")
+        result = await tool_fn(action="vm", subaction="start", vm_id="uuid-1")
         assert result["success"] is True
-        assert result["action"] == "start"
+        assert result["subaction"] == "start"
 
     async def test_force_stop(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"vm": {"forceStop": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="force_stop", vm_id="uuid-1", confirm=True)
+        result = await tool_fn(action="vm", subaction="force_stop", vm_id="uuid-1", confirm=True)
         assert result["success"] is True
-        assert result["action"] == "force_stop"
+        assert result["subaction"] == "force_stop"
 
     async def test_stop_vm(self, _mock_graphql: AsyncMock) -> None:
         _mock_graphql.return_value = {"vm": {"stop": True}}
         tool_fn = _make_tool()
-        result = await tool_fn(action="stop", vm_id="uuid-1")
|
result = await tool_fn(action="vm", subaction="stop", vm_id="uuid-1")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["action"] == "stop"
|
assert result["subaction"] == "stop"
|
||||||
|
|
||||||
async def test_pause_vm(self, _mock_graphql: AsyncMock) -> None:
|
async def test_pause_vm(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"vm": {"pause": True}}
|
_mock_graphql.return_value = {"vm": {"pause": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="pause", vm_id="uuid-1")
|
result = await tool_fn(action="vm", subaction="pause", vm_id="uuid-1")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["action"] == "pause"
|
assert result["subaction"] == "pause"
|
||||||
|
|
||||||
async def test_resume_vm(self, _mock_graphql: AsyncMock) -> None:
|
async def test_resume_vm(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"vm": {"resume": True}}
|
_mock_graphql.return_value = {"vm": {"resume": True}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="resume", vm_id="uuid-1")
|
result = await tool_fn(action="vm", subaction="resume", vm_id="uuid-1")
|
||||||
assert result["success"] is True
|
assert result["success"] is True
|
||||||
assert result["action"] == "resume"
|
assert result["subaction"] == "resume"
|
||||||
|
|
||||||
async def test_mutation_unexpected_response(self, _mock_graphql: AsyncMock) -> None:
|
async def test_mutation_unexpected_response(self, _mock_graphql: AsyncMock) -> None:
|
||||||
_mock_graphql.return_value = {"vm": {}}
|
_mock_graphql.return_value = {"vm": {}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Failed to start"):
|
with pytest.raises(ToolError, match="Failed to start"):
|
||||||
await tool_fn(action="start", vm_id="uuid-1")
|
await tool_fn(action="vm", subaction="start", vm_id="uuid-1")
|
||||||
|
|
||||||
|
|
||||||
class TestVmMutationFailures:
|
class TestVmMutationFailures:
|
||||||
@@ -147,38 +145,38 @@ class TestVmMutationFailures:
|
|||||||
_mock_graphql.return_value = {}
|
_mock_graphql.return_value = {}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Failed to start"):
|
with pytest.raises(ToolError, match="Failed to start"):
|
||||||
await tool_fn(action="start", vm_id="uuid-1")
|
await tool_fn(action="vm", subaction="start", vm_id="uuid-1")
|
||||||
|
|
||||||
async def test_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
|
async def test_start_mutation_returns_false(self, _mock_graphql: AsyncMock) -> None:
|
||||||
"""VM start returning False should still succeed (the tool reports the raw value)."""
|
"""VM start returning False should still succeed (the tool reports the raw value)."""
|
||||||
_mock_graphql.return_value = {"vm": {"start": False}}
|
_mock_graphql.return_value = {"vm": {"start": False}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
result = await tool_fn(action="start", vm_id="uuid-1")
|
result = await tool_fn(action="vm", subaction="start", vm_id="uuid-1")
|
||||||
assert result["success"] is False
|
assert result["success"] is False
|
||||||
assert result["action"] == "start"
|
assert result["subaction"] == "start"
|
||||||
|
|
||||||
async def test_stop_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
|
async def test_stop_mutation_returns_null(self, _mock_graphql: AsyncMock) -> None:
|
||||||
"""VM stop returning None in the field should succeed (key exists, value is None)."""
|
"""VM stop returning None in the field should succeed (key exists, value is None)."""
|
||||||
_mock_graphql.return_value = {"vm": {"stop": None}}
|
_mock_graphql.return_value = {"vm": {"stop": None}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
# The check is `field in data["vm"]` — `in` checks key existence, not truthiness
|
# The check is `field in data["vm"]` — `in` checks key existence, not truthiness
|
||||||
result = await tool_fn(action="stop", vm_id="uuid-1")
|
result = await tool_fn(action="vm", subaction="stop", vm_id="uuid-1")
|
||||||
assert result["success"] is None
|
assert result["success"] is None
|
||||||
assert result["action"] == "stop"
|
assert result["subaction"] == "stop"
|
||||||
|
|
||||||
async def test_force_stop_mutation_empty_vm_object(self, _mock_graphql: AsyncMock) -> None:
|
async def test_force_stop_mutation_empty_vm_object(self, _mock_graphql: AsyncMock) -> None:
|
||||||
"""Empty vm object with no matching field should raise ToolError."""
|
"""Empty vm object with no matching field should raise ToolError."""
|
||||||
_mock_graphql.return_value = {"vm": {}}
|
_mock_graphql.return_value = {"vm": {}}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Failed to force_stop"):
|
with pytest.raises(ToolError, match="Failed to force_stop"):
|
||||||
await tool_fn(action="force_stop", vm_id="uuid-1", confirm=True)
|
await tool_fn(action="vm", subaction="force_stop", vm_id="uuid-1", confirm=True)
|
||||||
|
|
||||||
async def test_reboot_mutation_vm_key_none(self, _mock_graphql: AsyncMock) -> None:
|
async def test_reboot_mutation_vm_key_none(self, _mock_graphql: AsyncMock) -> None:
|
||||||
"""vm key being None should raise ToolError."""
|
"""vm key being None should raise ToolError."""
|
||||||
_mock_graphql.return_value = {"vm": None}
|
_mock_graphql.return_value = {"vm": None}
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="Failed to reboot"):
|
with pytest.raises(ToolError, match="Failed to reboot"):
|
||||||
await tool_fn(action="reboot", vm_id="uuid-1")
|
await tool_fn(action="vm", subaction="reboot", vm_id="uuid-1")
|
||||||
|
|
||||||
async def test_mutation_timeout(self, _mock_graphql: AsyncMock) -> None:
|
async def test_mutation_timeout(self, _mock_graphql: AsyncMock) -> None:
|
||||||
"""Mid-operation timeout should be wrapped in ToolError."""
|
"""Mid-operation timeout should be wrapped in ToolError."""
|
||||||
@@ -186,4 +184,4 @@ class TestVmMutationFailures:
|
|||||||
_mock_graphql.side_effect = TimeoutError("VM operation timed out")
|
_mock_graphql.side_effect = TimeoutError("VM operation timed out")
|
||||||
tool_fn = _make_tool()
|
tool_fn = _make_tool()
|
||||||
with pytest.raises(ToolError, match="timed out"):
|
with pytest.raises(ToolError, match="timed out"):
|
||||||
await tool_fn(action="start", vm_id="uuid-1")
|
await tool_fn(action="vm", subaction="start", vm_id="uuid-1")
|
||||||
|
|||||||
@@ -19,21 +19,7 @@ from .config.settings import (
 )
 from .subscriptions.diagnostics import register_diagnostic_tools
 from .subscriptions.resources import register_subscription_resources
-from .tools.array import register_array_tool
-from .tools.customization import register_customization_tool
-from .tools.docker import register_docker_tool
-from .tools.health import register_health_tool
-from .tools.info import register_info_tool
-from .tools.keys import register_keys_tool
-from .tools.live import register_live_tool
-from .tools.notifications import register_notifications_tool
-from .tools.oidc import register_oidc_tool
-from .tools.plugins import register_plugins_tool
-from .tools.rclone import register_rclone_tool
-from .tools.settings import register_settings_tool
-from .tools.storage import register_storage_tool
-from .tools.users import register_users_tool
-from .tools.virtualization import register_vm_tool
+from .tools.unraid import register_unraid_tool


 # Initialize FastMCP instance
@@ -55,28 +41,9 @@ def register_all_modules() -> None:
         register_diagnostic_tools(mcp)
         logger.info("Subscription resources and diagnostic tools registered")

-        # Register all consolidated tools
-        registrars = [
-            register_info_tool,
-            register_array_tool,
-            register_storage_tool,
-            register_docker_tool,
-            register_vm_tool,
-            register_notifications_tool,
-            register_plugins_tool,
-            register_rclone_tool,
-            register_users_tool,
-            register_keys_tool,
-            register_health_tool,
-            register_settings_tool,
-            register_live_tool,
-            register_customization_tool,
-            register_oidc_tool,
-        ]
-        for registrar in registrars:
-            registrar(mcp)
-
-        logger.info(f"All {len(registrars)} tools registered successfully - Server ready!")
+        # Register the consolidated unraid tool
+        register_unraid_tool(mcp)
+        logger.info("unraid tool registered successfully - Server ready!")

     except Exception as e:
         logger.error(f"Failed to register modules: {e}", exc_info=True)

@@ -100,9 +100,19 @@ class SubscriptionManager:
         self._connection_start_times: dict[str, float] = {}  # Track when connections started

         # Define subscription configurations
-        self.subscription_configs = {
-            "logFileSubscription": {
-                "query": """
+        from .queries import SNAPSHOT_ACTIONS
+
+        self.subscription_configs: dict[str, dict] = {
+            action: {
+                "query": query,
+                "resource": f"unraid://live/{action}",
+                "description": f"Real-time {action.replace('_', ' ')} data",
+                "auto_start": True,
+            }
+            for action, query in SNAPSHOT_ACTIONS.items()
+        }
+        self.subscription_configs["logFileSubscription"] = {
+            "query": """
                 subscription LogFileSubscription($path: String!) {
                     logFile(path: $path) {
                         path
@@ -111,10 +121,9 @@ class SubscriptionManager:
                     }
                 }
             """,
             "resource": "unraid://logs/stream",
             "description": "Real-time log file streaming",
             "auto_start": False,  # Started manually with path parameter
-            }
         }

         logger.info(

@@ -15,7 +15,6 @@ from fastmcp import FastMCP
 from ..config.logging import logger
 from .manager import subscription_manager
 from .queries import SNAPSHOT_ACTIONS
-from .snapshot import subscribe_once


 # Global flag to track subscription startup
@@ -104,14 +103,18 @@ def register_subscription_resources(mcp: FastMCP) -> None:
         }
     )

-    def _make_resource_fn(action: str, query: str):
+    def _make_resource_fn(action: str):
         async def _live_resource() -> str:
             await ensure_subscriptions_started()
-            try:
-                data = await subscribe_once(query)
+            data = await subscription_manager.get_resource_data(action)
+            if data:
                 return json.dumps(data, indent=2)
-            except Exception as exc:
-                return json.dumps({"error": str(exc), "action": action})
+            return json.dumps(
+                {
+                    "status": "connecting",
+                    "message": f"Subscription '{action}' is starting. Retry in a moment.",
+                }
+            )

         _live_resource.__name__ = f"{action}_resource"
         _live_resource.__doc__ = (
@@ -119,7 +122,7 @@ def register_subscription_resources(mcp: FastMCP) -> None:
         )
         return _live_resource

-    for _action, _query in SNAPSHOT_ACTIONS.items():
-        mcp.resource(f"unraid://live/{_action}")(_make_resource_fn(_action, _query))
+    for _action in SNAPSHOT_ACTIONS:
+        mcp.resource(f"unraid://live/{_action}")(_make_resource_fn(_action))

     logger.info("Subscription resources registered successfully")

@@ -1,14 +1,19 @@
-"""MCP tools organized by functional domain.
+"""MCP tools — single consolidated unraid tool with action + subaction routing.

-10 consolidated tools with 76 actions total:
-    unraid_info - System information queries (19 actions)
-    unraid_array - Array operations and parity management (5 actions)
-    unraid_storage - Storage, disks, and logs (6 actions)
-    unraid_docker - Docker container management (15 actions)
-    unraid_vm - Virtual machine management (9 actions)
-    unraid_notifications - Notification management (9 actions)
-    unraid_rclone - Cloud storage remotes (4 actions)
-    unraid_users - User management (1 action)
-    unraid_keys - API key management (5 actions)
-    unraid_health - Health monitoring and diagnostics (3 actions)
+unraid - All Unraid operations (15 actions, ~88 subactions)
+    system - System info, metrics, UPS, network, registration
+    health - Health checks, connection test, diagnostics, setup
+    array - Parity, array state, disk add/remove/mount
+    disk - Shares, physical disks, logs, flash backup
+    docker - Container list/details/start/stop/restart, networks
+    vm - VM list/details and lifecycle (start/stop/pause/resume/etc)
+    notification - Notification CRUD and bulk operations
+    key - API key management
+    plugin - Plugin list/add/remove
+    rclone - Cloud remote management
+    setting - System settings and UPS config
+    customization - Theme and UI customization
+    oidc - OIDC/SSO provider management
+    user - Current user info
+    live - Real-time subscription snapshots
 """

@@ -1,214 +0,0 @@
-"""Array management: parity checks, array state, and disk operations.
-
-Provides the `unraid_array` tool with 13 actions covering parity check
-management, array start/stop, and disk add/remove/mount operations.
-"""
-
-from typing import Any, Literal, get_args
-
-from fastmcp import Context, FastMCP
-
-from ..config.logging import logger
-from ..core.client import make_graphql_request
-from ..core.exceptions import ToolError, tool_error_handler
-from ..core.guards import gate_destructive_action
-
-
-QUERIES: dict[str, str] = {
-    "parity_status": """
-        query GetParityStatus {
-            array { parityCheckStatus { progress speed errors status paused running correcting } }
-        }
-    """,
-    "parity_history": """
-        query GetParityHistory {
-            parityHistory {
-                date duration speed status errors progress correcting paused running
-            }
-        }
-    """,
-}
-
-MUTATIONS: dict[str, str] = {
-    "parity_start": """
-        mutation StartParityCheck($correct: Boolean!) {
-            parityCheck { start(correct: $correct) }
-        }
-    """,
-    "parity_pause": """
-        mutation PauseParityCheck {
-            parityCheck { pause }
-        }
-    """,
-    "parity_resume": """
-        mutation ResumeParityCheck {
-            parityCheck { resume }
-        }
-    """,
-    "parity_cancel": """
-        mutation CancelParityCheck {
-            parityCheck { cancel }
-        }
-    """,
-    "start_array": """
-        mutation StartArray {
-            array { setState(input: { desiredState: START }) {
-                state capacity { kilobytes { free used total } }
-            }}
-        }
-    """,
-    "stop_array": """
-        mutation StopArray {
-            array { setState(input: { desiredState: STOP }) {
-                state
-            }}
-        }
-    """,
-    "add_disk": """
-        mutation AddDisk($id: PrefixedID!, $slot: Int) {
-            array { addDiskToArray(input: { id: $id, slot: $slot }) {
-                state disks { id name device type status }
-            }}
-        }
-    """,
-    "remove_disk": """
-        mutation RemoveDisk($id: PrefixedID!) {
-            array { removeDiskFromArray(input: { id: $id }) {
-                state disks { id name device type }
-            }}
-        }
-    """,
-    "mount_disk": """
-        mutation MountDisk($id: PrefixedID!) {
-            array { mountArrayDisk(id: $id) { id name device status } }
-        }
-    """,
-    "unmount_disk": """
-        mutation UnmountDisk($id: PrefixedID!) {
-            array { unmountArrayDisk(id: $id) { id name device status } }
-        }
-    """,
-    "clear_disk_stats": """
-        mutation ClearDiskStats($id: PrefixedID!) {
-            array { clearArrayDiskStatistics(id: $id) }
-        }
-    """,
-}
-
-DESTRUCTIVE_ACTIONS = {"remove_disk", "clear_disk_stats", "stop_array"}
-ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
-
-ARRAY_ACTIONS = Literal[
-    "add_disk",
-    "clear_disk_stats",
-    "mount_disk",
-    "parity_cancel",
-    "parity_history",
-    "parity_pause",
-    "parity_resume",
-    "parity_start",
-    "parity_status",
-    "remove_disk",
-    "start_array",
-    "stop_array",
-    "unmount_disk",
-]
-
-if set(get_args(ARRAY_ACTIONS)) != ALL_ACTIONS:
-    _missing = ALL_ACTIONS - set(get_args(ARRAY_ACTIONS))
-    _extra = set(get_args(ARRAY_ACTIONS)) - ALL_ACTIONS
-    raise RuntimeError(
-        f"ARRAY_ACTIONS and ALL_ACTIONS are out of sync. "
-        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
-    )
-
-
-def register_array_tool(mcp: FastMCP) -> None:
-    """Register the unraid_array tool with the FastMCP instance."""
-
-    @mcp.tool()
-    async def unraid_array(
-        action: ARRAY_ACTIONS,
-        ctx: Context | None = None,
-        confirm: bool = False,
-        correct: bool | None = None,
-        disk_id: str | None = None,
-        slot: int | None = None,
-    ) -> dict[str, Any]:
-        """Manage Unraid array: parity checks, array state, and disk operations.
-
-        Parity check actions:
-            parity_start - Start parity check (correct=True to write fixes; required)
-            parity_pause - Pause running parity check
-            parity_resume - Resume paused parity check
-            parity_cancel - Cancel running parity check
-            parity_status - Get current parity check status and progress
-            parity_history - Get parity check history log
-
-        Array state actions:
-            start_array - Start the array (desiredState=START)
-            stop_array - Stop the array (desiredState=STOP)
-
-        Disk operations (requires disk_id):
-            add_disk - Add a disk to the array (requires disk_id; optional slot)
-            remove_disk - Remove a disk from the array (requires disk_id, confirm=True; array must be stopped)
-            mount_disk - Mount a disk (requires disk_id)
-            unmount_disk - Unmount a disk (requires disk_id)
-            clear_disk_stats - Clear I/O statistics for a disk (requires disk_id, confirm=True)
-        """
-        if action not in ALL_ACTIONS:
-            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
-
-        await gate_destructive_action(
-            ctx,
-            action,
-            DESTRUCTIVE_ACTIONS,
-            confirm,
-            {
-                "remove_disk": f"Remove disk **{disk_id}** from the array. The array must be stopped first.",
-                "clear_disk_stats": f"Clear all I/O statistics for disk **{disk_id}**. This cannot be undone.",
-                "stop_array": "Stop the Unraid array. Running containers and VMs may lose access to array shares.",
-            },
-        )
-
-        with tool_error_handler("array", action, logger):
-            logger.info(f"Executing unraid_array action={action}")
-
-            # --- Queries ---
-            if action in QUERIES:
-                data = await make_graphql_request(QUERIES[action])
-                return {"success": True, "action": action, "data": data}
-
-            # --- Mutations ---
-            if action == "parity_start":
-                if correct is None:
-                    raise ToolError("correct is required for 'parity_start' action")
-                data = await make_graphql_request(MUTATIONS[action], {"correct": correct})
-                return {"success": True, "action": action, "data": data}
-
-            if action in ("parity_pause", "parity_resume", "parity_cancel"):
-                data = await make_graphql_request(MUTATIONS[action])
-                return {"success": True, "action": action, "data": data}
-
-            if action in ("start_array", "stop_array"):
-                data = await make_graphql_request(MUTATIONS[action])
-                return {"success": True, "action": action, "data": data}
-
-            if action == "add_disk":
-                if not disk_id:
-                    raise ToolError("disk_id is required for 'add_disk' action")
-                variables: dict[str, Any] = {"id": disk_id}
-                if slot is not None:
-                    variables["slot"] = slot
-                data = await make_graphql_request(MUTATIONS[action], variables)
-                return {"success": True, "action": action, "data": data}
-
-            if action in ("remove_disk", "mount_disk", "unmount_disk", "clear_disk_stats"):
-                if not disk_id:
-                    raise ToolError(f"disk_id is required for '{action}' action")
-                data = await make_graphql_request(MUTATIONS[action], {"id": disk_id})
-                return {"success": True, "action": action, "data": data}
-
-            raise ToolError(f"Unhandled action '{action}' — this is a bug")
-
-    logger.info("Array tool registered successfully")

@@ -1,119 +0,0 @@
-"""UI customization and system state queries.
-
-Provides the `unraid_customization` tool with 5 actions covering
-theme/customization data, public UI config, initial setup state, and
-theme mutation.
-"""
-
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, Any, Literal, get_args
-
-
-if TYPE_CHECKING:
-    from fastmcp import FastMCP
-
-from ..config.logging import logger
-from ..core.client import make_graphql_request
-from ..core.exceptions import ToolError, tool_error_handler
-
-
-QUERIES: dict[str, str] = {
-    "theme": """
-        query GetCustomization {
-            customization {
-                theme { name showBannerImage showBannerGradient showHeaderDescription
-                        headerBackgroundColor headerPrimaryTextColor headerSecondaryTextColor }
-                partnerInfo { partnerName hasPartnerLogo partnerUrl partnerLogoUrl }
-                activationCode { code partnerName serverName sysModel comment header theme }
-            }
-        }
-    """,
-    "public_theme": """
-        query GetPublicTheme {
-            publicTheme { name showBannerImage showBannerGradient showHeaderDescription
-                          headerBackgroundColor headerPrimaryTextColor headerSecondaryTextColor }
-            publicPartnerInfo { partnerName hasPartnerLogo partnerUrl partnerLogoUrl }
-        }
-    """,
-    "is_initial_setup": """
-        query IsInitialSetup {
-            isInitialSetup
-        }
-    """,
-    "sso_enabled": """
-        query IsSSOEnabled {
-            isSSOEnabled
-        }
-    """,
-}
-
-MUTATIONS: dict[str, str] = {
-    "set_theme": """
-        mutation SetTheme($theme: ThemeName!) {
-            customization { setTheme(theme: $theme) {
-                name showBannerImage showBannerGradient showHeaderDescription
-            }}
-        }
-    """,
-}
-
-ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
-
-CUSTOMIZATION_ACTIONS = Literal[
-    "is_initial_setup",
-    "public_theme",
-    "set_theme",
-    "sso_enabled",
-    "theme",
-]
-
-if set(get_args(CUSTOMIZATION_ACTIONS)) != ALL_ACTIONS:
-    _missing = ALL_ACTIONS - set(get_args(CUSTOMIZATION_ACTIONS))
-    _extra = set(get_args(CUSTOMIZATION_ACTIONS)) - ALL_ACTIONS
-    raise RuntimeError(
-        f"CUSTOMIZATION_ACTIONS and ALL_ACTIONS are out of sync. "
-        f"Missing: {_missing or 'none'}. Extra: {_extra or 'none'}"
-    )
-
-
-def register_customization_tool(mcp: FastMCP) -> None:
-    """Register the unraid_customization tool with the FastMCP instance."""
-
-    @mcp.tool()
-    async def unraid_customization(
-        action: CUSTOMIZATION_ACTIONS,
-        theme_name: str | None = None,
-    ) -> dict[str, Any]:
-        """Manage Unraid UI customization and system state.
-
-        Actions:
-            theme - Get full customization (theme, partner info, activation code)
-            public_theme - Get public theme and partner info (no auth required)
-            is_initial_setup - Check if server is in initial setup mode
-            sso_enabled - Check if SSO is enabled
-            set_theme - Change the UI theme (requires theme_name: azure/black/gray/white)
-        """
-        if action not in ALL_ACTIONS:
-            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
-
-        if action == "set_theme" and not theme_name:
-            raise ToolError(
-                "theme_name is required for 'set_theme' action "
-                "(valid values: azure, black, gray, white)"
-            )
-
-        with tool_error_handler("customization", action, logger):
-            logger.info(f"Executing unraid_customization action={action}")
-
-            if action in QUERIES:
-                data = await make_graphql_request(QUERIES[action])
-                return {"success": True, "action": action, "data": data}
-
-            if action == "set_theme":
-                data = await make_graphql_request(MUTATIONS[action], {"theme": theme_name})
-                return {"success": True, "action": action, "data": data}
-
-            raise ToolError(f"Unhandled action '{action}' — this is a bug")
-
-    logger.info("Customization tool registered successfully")

@@ -1,342 +0,0 @@
"""Docker container management.

Provides the `unraid_docker` tool with 7 actions for container lifecycle
and network inspection.
"""

import re
from typing import Any, Literal, get_args

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import safe_get


QUERIES: dict[str, str] = {
    "list": """
        query ListDockerContainers {
            docker { containers(skipCache: false) {
                id names image state status autoStart
            } }
        }
    """,
    "details": """
        query GetContainerDetails {
            docker { containers(skipCache: false) {
                id names image imageId command created
                ports { ip privatePort publicPort type }
                sizeRootFs labels state status
                hostConfig { networkMode }
                networkSettings mounts autoStart
            } }
        }
    """,
    "networks": """
        query GetDockerNetworks {
            docker { networks { id name driver scope } }
        }
    """,
    "network_details": """
        query GetDockerNetwork {
            docker { networks { id name driver scope enableIPv6 internal attachable containers options labels } }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "start": """
        mutation StartContainer($id: PrefixedID!) {
            docker { start(id: $id) { id names state status } }
        }
    """,
    "stop": """
        mutation StopContainer($id: PrefixedID!) {
            docker { stop(id: $id) { id names state status } }
        }
    """,
}

DESTRUCTIVE_ACTIONS: set[str] = set()
# NOTE (Code-M-07): "details" is listed here because it requires a container_id
# parameter, but unlike mutations it uses fuzzy name matching (not strict).
# This is intentional: read-only queries are safe with fuzzy matching.
_ACTIONS_REQUIRING_CONTAINER_ID = {"start", "stop", "details"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS) | {"restart"}

DOCKER_ACTIONS = Literal[
    "list",
    "details",
    "start",
    "stop",
    "restart",
    "networks",
    "network_details",
]

if set(get_args(DOCKER_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(DOCKER_ACTIONS))
    _extra = set(get_args(DOCKER_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"DOCKER_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )

# Full PrefixedID: 64 hex chars + optional suffix (e.g., ":local")
_DOCKER_ID_PATTERN = re.compile(r"^[a-f0-9]{64}(:[a-z0-9]+)?$", re.IGNORECASE)

# Short hex prefix: at least 12 hex chars (standard Docker short ID length)
_DOCKER_SHORT_ID_PATTERN = re.compile(r"^[a-f0-9]{12,63}$", re.IGNORECASE)


def find_container_by_identifier(
    identifier: str, containers: list[dict[str, Any]], *, strict: bool = False
) -> dict[str, Any] | None:
    """Find a container by ID or name with optional fuzzy matching.

    Match priority:
    1. Exact ID match
    2. Exact name match (case-sensitive)

    When strict=False (default), also tries:
    3. Name starts with identifier (case-insensitive)
    4. Name contains identifier as substring (case-insensitive)

    When strict=True, only exact matches (1 & 2) are used.
    Use strict=True for mutations to prevent targeting the wrong container.
    """
    if not containers:
        return None

    # Priority 1 & 2: exact matches
    for c in containers:
        if c.get("id") == identifier:
            return c
        if identifier in c.get("names", []):
            return c

    # Strict mode: no fuzzy matching allowed
    if strict:
        return None

    id_lower = identifier.lower()

    # Priority 3: prefix match (more precise than substring)
    for c in containers:
        for name in c.get("names", []):
            if name.lower().startswith(id_lower):
                logger.debug(f"Prefix match: '{identifier}' -> '{name}'")
                return c

    # Priority 4: substring match (least precise)
    for c in containers:
        for name in c.get("names", []):
            if id_lower in name.lower():
                logger.debug(f"Substring match: '{identifier}' -> '{name}'")
                return c

    return None


def get_available_container_names(containers: list[dict[str, Any]]) -> list[str]:
    """Extract all container names for error messages."""
    names: list[str] = []
    for c in containers:
        names.extend(c.get("names", []))
    return names


async def _resolve_container_id(container_id: str, *, strict: bool = False) -> str:
    """Resolve a container name/identifier to its actual PrefixedID.

    Optimization: if the identifier is a full 64-char hex ID (with optional
    :suffix), skip the container list fetch entirely and use it directly.
    If it's a short hex prefix (12-63 chars), fetch the list and match by
    ID prefix. Only fetch the container list for name-based lookups.

    Args:
        container_id: Container name or ID to resolve
        strict: When True, only exact name/ID matches are allowed (no fuzzy).
            Use for mutations to prevent targeting the wrong container.
    """
    # Full PrefixedID: skip the list fetch entirely
    if _DOCKER_ID_PATTERN.match(container_id):
        return container_id

    logger.info(f"Resolving container identifier '{container_id}' (strict={strict})")
    list_query = """
        query ResolveContainerID {
            docker { containers(skipCache: true) { id names } }
        }
    """
    data = await make_graphql_request(list_query)
    containers = safe_get(data, "docker", "containers", default=[])

    # Short hex prefix: match by ID prefix before trying name matching
    if _DOCKER_SHORT_ID_PATTERN.match(container_id):
        id_lower = container_id.lower()
        matches: list[dict[str, Any]] = []
        for c in containers:
            cid = (c.get("id") or "").lower()
            if cid.startswith(id_lower) or cid.split(":")[0].startswith(id_lower):
                matches.append(c)
        if len(matches) == 1:
            actual_id = str(matches[0].get("id", ""))
            logger.info(f"Resolved short ID '{container_id}' -> '{actual_id}'")
            return actual_id
        if len(matches) > 1:
            candidate_ids = [str(c.get("id", "")) for c in matches[:5]]
            raise ToolError(
                f"Short container ID prefix '{container_id}' is ambiguous. "
                f"Matches: {', '.join(candidate_ids)}. Use a longer ID or exact name."
            )

    resolved = find_container_by_identifier(container_id, containers, strict=strict)
    if resolved:
        actual_id = str(resolved.get("id", ""))
        logger.info(f"Resolved '{container_id}' -> '{actual_id}'")
        return actual_id

    available = get_available_container_names(containers)
    if strict:
        msg = (
            f"Container '{container_id}' not found by exact match. "
            f"Mutations require an exact container name or full ID — "
            f"fuzzy/substring matching is not allowed for safety."
        )
    else:
        msg = f"Container '{container_id}' not found."
    if available:
        msg += f" Available: {', '.join(available[:10])}"
    raise ToolError(msg)


def register_docker_tool(mcp: FastMCP) -> None:
    """Register the unraid_docker tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_docker(
        action: DOCKER_ACTIONS,
        container_id: str | None = None,
        network_id: str | None = None,
        *,
        confirm: bool = False,
    ) -> dict[str, Any]:
        """Manage Docker containers and networks.

        Actions:
            list            - List all containers
            details         - Detailed info for a container (requires container_id)
            start           - Start a container (requires container_id)
            stop            - Stop a container (requires container_id)
            restart         - Stop then start a container (requires container_id)
            networks        - List Docker networks
            network_details - Details of a network (requires network_id)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        if action in _ACTIONS_REQUIRING_CONTAINER_ID and not container_id:
            raise ToolError(f"container_id is required for '{action}' action")

        if action == "network_details" and not network_id:
            raise ToolError("network_id is required for 'network_details' action")

        with tool_error_handler("docker", action, logger):
            logger.info(f"Executing unraid_docker action={action}")

            # --- Read-only queries ---
            if action == "list":
                data = await make_graphql_request(QUERIES["list"])
                containers = safe_get(data, "docker", "containers", default=[])
                return {"containers": containers}

            if action == "details":
                # Resolve name -> ID first (skips list fetch if already an ID)
                actual_id = await _resolve_container_id(container_id or "")
                data = await make_graphql_request(QUERIES["details"])
                containers = safe_get(data, "docker", "containers", default=[])
                # Match by resolved ID (exact match, no second list fetch needed)
                for c in containers:
                    if c.get("id") == actual_id:
                        return c
                raise ToolError(f"Container '{container_id}' not found in details response.")

            if action == "networks":
                data = await make_graphql_request(QUERIES["networks"])
                networks = safe_get(data, "docker", "networks", default=[])
                return {"networks": networks}

            if action == "network_details":
                data = await make_graphql_request(QUERIES["network_details"])
                all_networks = safe_get(data, "docker", "networks", default=[])
                # Filter client-side by network_id since the API returns all networks
                for net in all_networks:
                    if net.get("id") == network_id or net.get("name") == network_id:
                        return dict(net)
                raise ToolError(f"Network '{network_id}' not found.")

            # --- Mutations (strict matching: no fuzzy/substring) ---
            if action == "restart":
                actual_id = await _resolve_container_id(container_id or "", strict=True)
                # Stop (idempotent: treat "already stopped" as success)
                stop_data = await make_graphql_request(
                    MUTATIONS["stop"],
                    {"id": actual_id},
                    operation_context={"operation": "stop"},
                )
                stop_was_idempotent = stop_data.get("idempotent_success", False)
                # Start (idempotent: treat "already running" as success)
                start_data = await make_graphql_request(
                    MUTATIONS["start"],
                    {"id": actual_id},
                    operation_context={"operation": "start"},
                )
                if start_data.get("idempotent_success"):
                    result = {}
                else:
                    result = safe_get(start_data, "docker", "start", default={})
                response: dict[str, Any] = {
                    "success": True,
                    "action": "restart",
                    "container": result,
                }
                if stop_was_idempotent:
                    response["note"] = "Container was already stopped before restart"
                return response

            # Single-container mutations (start, stop)
            if action in MUTATIONS:
                actual_id = await _resolve_container_id(container_id or "", strict=True)
                op_context: dict[str, str] | None = (
                    {"operation": action} if action in ("start", "stop") else None
                )
                data = await make_graphql_request(
                    MUTATIONS[action],
                    {"id": actual_id},
                    operation_context=op_context,
                )

                # Handle idempotent success
                if data.get("idempotent_success"):
                    return {
                        "success": True,
                        "action": action,
                        "idempotent": True,
                        "message": f"Container already in desired state for '{action}'",
                    }

                docker_data = data.get("docker") or {}
                field = action
                result_container = docker_data.get(field)
                return {
                    "success": True,
                    "action": action,
                    "container": result_container,
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Docker tool registered successfully")
@@ -1,275 +0,0 @@
"""Health monitoring and diagnostics.

Provides the `unraid_health` tool with 4 actions for system health checks,
connection testing, subscription diagnostics, and credential setup.
"""

import datetime
import time
from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..config.settings import (
    CREDENTIALS_ENV_PATH,
    UNRAID_API_URL,
    UNRAID_MCP_HOST,
    UNRAID_MCP_PORT,
    UNRAID_MCP_TRANSPORT,
    VERSION,
)
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.setup import elicit_and_configure
from ..core.utils import safe_display_url
from ..subscriptions.utils import _analyze_subscription_status


ALL_ACTIONS = {"check", "test_connection", "diagnose", "setup"}

HEALTH_ACTIONS = Literal["check", "test_connection", "diagnose", "setup"]

if set(get_args(HEALTH_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(HEALTH_ACTIONS))
    _extra = set(get_args(HEALTH_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        "HEALTH_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing in HEALTH_ACTIONS: {_missing}; extra in HEALTH_ACTIONS: {_extra}"
    )

# Severity ordering: only upgrade, never downgrade
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}


def _server_info() -> dict[str, Any]:
    """Return the standard server info block used in health responses."""
    return {
        "name": "Unraid MCP Server",
        "version": VERSION,
        "transport": UNRAID_MCP_TRANSPORT,
        "host": UNRAID_MCP_HOST,
        "port": UNRAID_MCP_PORT,
    }


def register_health_tool(mcp: FastMCP) -> None:
    """Register the unraid_health tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_health(
        action: HEALTH_ACTIONS,
        ctx: Context | None = None,
    ) -> dict[str, Any] | str:
        """Monitor Unraid MCP server and system health.

        Actions:
            setup           - Configure Unraid credentials via interactive elicitation
            check           - Comprehensive health check (API latency, array, notifications, Docker)
            test_connection - Quick connectivity test (just checks { online })
            diagnose        - Subscription system diagnostics
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        if action == "setup":
            configured = await elicit_and_configure(ctx)
            if configured:
                return (
                    "✅ Credentials configured successfully. You can now use all Unraid MCP tools."
                )
            return (
                f"⚠️ Credentials not configured.\n\n"
                f"Your MCP client may not support elicitation, or setup was cancelled.\n\n"
                f"**Manual setup** — create `{CREDENTIALS_ENV_PATH}` with:\n"
                f"```\n"
                f"UNRAID_API_URL=https://your-unraid-server:port\n"
                f"UNRAID_API_KEY=your-api-key\n"
                f"```\n\n"
                f"Then run any Unraid tool to connect."
            )

        with tool_error_handler("health", action, logger):
            logger.info(f"Executing unraid_health action={action}")

            if action == "test_connection":
                start = time.time()
                data = await make_graphql_request("query { online }")
                latency = round((time.time() - start) * 1000, 2)
                return {
                    "status": "connected",
                    "online": data.get("online"),
                    "latency_ms": latency,
                }

            if action == "check":
                return await _comprehensive_check()

            if action == "diagnose":
                return await _diagnose_subscriptions()

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Health tool registered successfully")


async def _comprehensive_check() -> dict[str, Any]:
    """Run comprehensive health check against the Unraid system."""
    start_time = time.time()
    health_severity = 0  # Track as int to prevent downgrade
    issues: list[str] = []

    def _escalate(level: str) -> None:
        nonlocal health_severity
        health_severity = max(health_severity, _SEVERITY.get(level, 0))

    try:
        query = """
            query ComprehensiveHealthCheck {
                info {
                    machineId time
                    versions { core { unraid } }
                    os { uptime }
                }
                array { state }
                notifications {
                    overview { unread { alert warning total } }
                }
                docker {
                    containers(skipCache: true) { id state status }
                }
            }
        """
        data = await make_graphql_request(query)
        api_latency = round((time.time() - start_time) * 1000, 2)

        health_info: dict[str, Any] = {
            "status": "healthy",
            "timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
            "api_latency_ms": api_latency,
            "server": _server_info(),
        }

        if not data:
            health_info["status"] = "unhealthy"
            health_info["issues"] = ["No response from Unraid API"]
            return health_info

        # System info
        info = data.get("info") or {}
        if info:
            health_info["unraid_system"] = {
                "status": "connected",
                "url": safe_display_url(UNRAID_API_URL),
                "machine_id": info.get("machineId"),
                "version": ((info.get("versions") or {}).get("core") or {}).get("unraid"),
                "uptime": (info.get("os") or {}).get("uptime"),
            }
        else:
            _escalate("degraded")
            issues.append("Unable to retrieve system info")

        # Array
        array_info = data.get("array") or {}
        if array_info:
            state = array_info.get("state", "unknown")
            health_info["array_status"] = {
                "state": state,
                "healthy": state in ("STARTED", "STOPPED"),
            }
            if state not in ("STARTED", "STOPPED"):
                _escalate("warning")
                issues.append(f"Array in unexpected state: {state}")
        else:
            _escalate("warning")
            issues.append("Unable to retrieve array status")

        # Notifications
        notifications = data.get("notifications") or {}
        if notifications and notifications.get("overview"):
            unread = notifications["overview"].get("unread") or {}
            alerts = unread.get("alert", 0)
            health_info["notifications"] = {
                "unread_total": unread.get("total", 0),
                "unread_alerts": alerts,
                "unread_warnings": unread.get("warning", 0),
            }
            if alerts > 0:
                _escalate("warning")
                issues.append(f"{alerts} unread alert(s)")

        # Docker
        docker = data.get("docker") or {}
        if docker and docker.get("containers"):
            containers = docker["containers"]
            health_info["docker_services"] = {
                "total": len(containers),
                "running": len([c for c in containers if c.get("state") == "running"]),
                "stopped": len([c for c in containers if c.get("state") == "exited"]),
            }

        # Latency assessment
        if api_latency > 10000:
            _escalate("degraded")
            issues.append(f"Very high API latency: {api_latency}ms")
        elif api_latency > 5000:
            _escalate("warning")
            issues.append(f"High API latency: {api_latency}ms")

        # Resolve final status from severity level
        severity_to_status = {v: k for k, v in _SEVERITY.items()}
        health_info["status"] = severity_to_status.get(health_severity, "healthy")
        if issues:
            health_info["issues"] = issues
        health_info["performance"] = {
            "api_response_time_ms": api_latency,
            "check_duration_ms": round((time.time() - start_time) * 1000, 2),
        }

        return health_info

    except Exception as e:
        # Intentionally broad: health checks must always return a result,
        # even on unexpected failures, so callers never get an unhandled exception.
        logger.error(f"Health check failed: {e}", exc_info=True)
        return {
            "status": "unhealthy",
            "timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
            "error": str(e),
            "server": _server_info(),
        }


async def _diagnose_subscriptions() -> dict[str, Any]:
    """Import and run subscription diagnostics."""
    try:
        from ..subscriptions.manager import subscription_manager
        from ..subscriptions.resources import ensure_subscriptions_started

        await ensure_subscriptions_started()

        status = await subscription_manager.get_subscription_status()
        error_count, connection_issues = _analyze_subscription_status(status)

        return {
            "timestamp": datetime.datetime.now(datetime.UTC).isoformat(),
            "environment": {
                "auto_start_enabled": subscription_manager.auto_start_enabled,
                "max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
                "api_url_configured": bool(UNRAID_API_URL),
            },
            "subscriptions": status,
            "summary": {
                "total_configured": len(subscription_manager.subscription_configs),
                "active_count": len(subscription_manager.active_subscriptions),
                "with_data": len(subscription_manager.resource_data),
                "in_error_state": error_count,
                "connection_issues": connection_issues,
            },
        }

    except ImportError as e:
        raise ToolError("Subscription modules not available") from e
    except Exception as e:
        raise ToolError(f"Failed to generate diagnostics: {e!s}") from e
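The `_escalate` helper above implements an upgrade-only severity lattice: each observed issue can raise the overall status but never lower it, so one healthy subsystem cannot mask a degraded one. A standalone sketch of that pattern (the function name `overall_status` is illustrative, not part of the file; the `_SEVERITY` table is copied from it):

```python
_SEVERITY = {"healthy": 0, "warning": 1, "degraded": 2, "unhealthy": 3}


def overall_status(observed_levels: list[str]) -> str:
    # Fold every observation through max(): order of checks cannot matter,
    # and a later "healthy" observation never downgrades an earlier "degraded".
    severity = 0
    for level in observed_levels:
        severity = max(severity, _SEVERITY.get(level, 0))
    # Invert the table to map the final integer back to a status name.
    return {v: k for k, v in _SEVERITY.items()}[severity]


print(overall_status([]))                                   # healthy
print(overall_status(["warning", "degraded", "warning"]))   # degraded
```

Tracking the level as an integer rather than a string is what makes the invariant trivial to enforce: `max()` over the ordered codes is monotone by construction.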
@@ -1,449 +0,0 @@
"""System information and server status queries.

Provides the `unraid_info` tool with 19 read-only actions for retrieving
system information, array status, network config, and server metadata.
"""

from typing import Any, Literal, get_args

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.utils import format_kb


# Pre-built queries keyed by action name
QUERIES: dict[str, str] = {
    "overview": """
        query GetSystemInfo {
            info {
                os { platform distro release codename kernel arch hostname logofile serial build uptime }
                cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache }
                memory {
                    layout { bank type clockSpeed formFactor manufacturer partNum serialNum }
                }
                baseboard { manufacturer model version serial assetTag }
                system { manufacturer model version serial uuid sku }
                versions { core { unraid api kernel } packages { openssl node npm pm2 git nginx php docker } }
                machineId
                time
            }
        }
    """,
    "array": """
        query GetArrayStatus {
            array {
                id
                state
                capacity {
                    kilobytes { free used total }
                    disks { free used total }
                }
                boot { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                parities { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                disks { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
                caches { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
            }
        }
    """,
    "network": """
        query GetNetworkInfo {
            servers { id name status wanip lanip localurl remoteurl }
            vars { id port portssl localTld useSsl }
        }
    """,
    "registration": """
        query GetRegistrationInfo {
            registration {
                id type
                keyFile { location }
                state expiration updateExpiration
            }
        }
    """,
    "connect": """
        query GetConnectSettings {
            connect { id dynamicRemoteAccess { enabledType runningType error } }
        }
    """,
    "variables": """
        query GetSelectiveUnraidVariables {
            vars {
                id version name timeZone comment security workgroup domain domainShort
                hideDotFiles localMaster enableFruit useNtp domainLogin sysModel
                sysFlashSlots useSsl port portssl localTld bindMgt useTelnet porttelnet
                useSsh portssh startPage startArray shutdownTimeout
                shareSmbEnabled shareNfsEnabled shareAfpEnabled shareCacheEnabled
                shareAvahiEnabled safeMode startMode configValid configError joinStatus
                deviceCount flashGuid flashProduct flashVendor mdState mdVersion
                shareCount shareSmbCount shareNfsCount shareAfpCount shareMoverActive
            }
        }
    """,
    "metrics": """
        query GetMetrics {
            metrics { cpu { percentTotal } memory { total used free available buffcache percentTotal } }
        }
    """,
    "services": """
        query GetServices {
            services { name online version }
        }
    """,
    "display": """
        query GetDisplay {
            info { display { theme } }
        }
    """,
    "config": """
        query GetConfig {
            config { valid error }
        }
    """,
    "online": """
        query GetOnline { online }
    """,
    "owner": """
        query GetOwner {
            owner { username avatar url }
        }
    """,
    "settings": """
        query GetSettings {
            settings { unified { values } }
        }
    """,
    "server": """
        query GetServer {
            info {
                os { hostname uptime }
                versions { core { unraid } }
                machineId time
            }
            array { state }
            online
        }
    """,
    "servers": """
        query GetServers {
            servers { id name status wanip lanip localurl remoteurl }
        }
    """,
    "flash": """
        query GetFlash {
            flash { id vendor product }
        }
    """,
    "ups_devices": """
        query GetUpsDevices {
            upsDevices { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage } }
        }
    """,
    "ups_device": """
        query GetUpsDevice($id: String!) {
            upsDeviceById(id: $id) { id name model status battery { chargeLevel estimatedRuntime health } power { loadPercentage inputVoltage outputVoltage nominalPower currentPower } }
        }
    """,
    "ups_config": """
        query GetUpsConfig {
            upsConfiguration { service upsCable upsType device batteryLevel minutes timeout killUps upsName }
        }
    """,
}

MUTATIONS: dict[str, str] = {}

DESTRUCTIVE_ACTIONS: set[str] = set()
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)

INFO_ACTIONS = Literal[
    "overview",
    "array",
    "network",
    "registration",
    "connect",
    "variables",
    "metrics",
    "services",
    "display",
    "config",
    "online",
    "owner",
    "settings",
    "server",
    "servers",
    "flash",
    "ups_devices",
    "ups_device",
    "ups_config",
]

if set(get_args(INFO_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(INFO_ACTIONS))
    _extra = set(get_args(INFO_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"QUERIES keys and INFO_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )


def _process_system_info(raw_info: dict[str, Any]) -> dict[str, Any]:
    """Process raw system info into summary + details."""
    summary: dict[str, Any] = {}
    if raw_info.get("os"):
        os_info = raw_info["os"]
        summary["os"] = (
            f"{os_info.get('distro') or 'unknown'} {os_info.get('release') or 'unknown'} "
            f"({os_info.get('platform') or 'unknown'}, {os_info.get('arch') or 'unknown'})"
        )
        summary["hostname"] = os_info.get("hostname") or "unknown"
        summary["uptime"] = os_info.get("uptime")

    if raw_info.get("cpu"):
        cpu = raw_info["cpu"]
        summary["cpu"] = (
            f"{cpu.get('manufacturer') or 'unknown'} {cpu.get('brand') or 'unknown'} "
            f"({cpu.get('cores') or '?'} cores, {cpu.get('threads') or '?'} threads)"
        )

    if raw_info.get("memory") and raw_info["memory"].get("layout"):
        mem_layout = raw_info["memory"]["layout"]
        summary["memory_layout_details"] = []
        for stick in mem_layout:
            summary["memory_layout_details"].append(
                f"Bank {stick.get('bank') or '?'}: Type {stick.get('type') or '?'}, "
                f"Speed {stick.get('clockSpeed') or '?'}MHz, "
                f"Manufacturer: {stick.get('manufacturer') or '?'}, "
                f"Part: {stick.get('partNum') or '?'}"
            )
        summary["memory_summary"] = (
            "Stick layout details retrieved. Overall total/used/free memory stats "
            "are unavailable due to API limitations."
        )
    else:
        summary["memory_summary"] = "Memory information not available."

    return {"summary": summary, "details": raw_info}


def _analyze_disk_health(disks: list[dict[str, Any]]) -> dict[str, int]:
    """Analyze health status of disk arrays."""
    counts = {
        "healthy": 0,
        "failed": 0,
        "missing": 0,
        "new": 0,
        "warning": 0,
        "critical": 0,
        "unknown": 0,
    }
    for disk in disks:
        status = disk.get("status", "").upper()
        warning = disk.get("warning")
        critical = disk.get("critical")
        if status == "DISK_OK":
            if critical:
                counts["critical"] += 1
            elif warning:
                counts["warning"] += 1
            else:
                counts["healthy"] += 1
        elif status in ("DISK_DSBL", "DISK_INVALID"):
            counts["failed"] += 1
        elif status == "DISK_NP":
            counts["missing"] += 1
        elif status == "DISK_NEW":
            counts["new"] += 1
        else:
            counts["unknown"] += 1
    return counts


def _process_array_status(raw: dict[str, Any]) -> dict[str, Any]:
    """Process raw array data into summary + details."""
    summary: dict[str, Any] = {"state": raw.get("state")}
    if raw.get("capacity") and raw["capacity"].get("kilobytes"):
        kb = raw["capacity"]["kilobytes"]
        summary["capacity_total"] = format_kb(kb.get("total"))
        summary["capacity_used"] = format_kb(kb.get("used"))
        summary["capacity_free"] = format_kb(kb.get("free"))

    summary["num_parity_disks"] = len(raw.get("parities", []))
    summary["num_data_disks"] = len(raw.get("disks", []))
    summary["num_cache_pools"] = len(raw.get("caches", []))

    health_summary: dict[str, Any] = {}
|
|
||||||
for key, label in [
|
|
||||||
("parities", "parity_health"),
|
|
||||||
("disks", "data_health"),
|
|
||||||
("caches", "cache_health"),
|
|
||||||
]:
|
|
||||||
if raw.get(key):
|
|
||||||
health_summary[label] = _analyze_disk_health(raw[key])
|
|
||||||
|
|
||||||
total_failed = sum(h.get("failed", 0) for h in health_summary.values())
|
|
||||||
total_critical = sum(h.get("critical", 0) for h in health_summary.values())
|
|
||||||
total_missing = sum(h.get("missing", 0) for h in health_summary.values())
|
|
||||||
total_warning = sum(h.get("warning", 0) for h in health_summary.values())
|
|
||||||
|
|
||||||
if total_failed > 0 or total_critical > 0:
|
|
||||||
overall = "CRITICAL"
|
|
||||||
elif total_missing > 0:
|
|
||||||
overall = "DEGRADED"
|
|
||||||
elif total_warning > 0:
|
|
||||||
overall = "WARNING"
|
|
||||||
else:
|
|
||||||
overall = "HEALTHY"
|
|
||||||
|
|
||||||
summary["overall_health"] = overall
|
|
||||||
summary["health_summary"] = health_summary
|
|
||||||
|
|
||||||
return {"summary": summary, "details": raw}
|
|
||||||
|
|
||||||
|
|
||||||
def register_info_tool(mcp: FastMCP) -> None:
|
|
||||||
"""Register the unraid_info tool with the FastMCP instance."""
|
|
||||||
|
|
||||||
@mcp.tool()
|
|
||||||
async def unraid_info(
|
|
||||||
action: INFO_ACTIONS,
|
|
||||||
device_id: str | None = None,
|
|
||||||
) -> dict[str, Any]:
|
|
||||||
"""Query Unraid system information.
|
|
||||||
|
|
||||||
Actions:
|
|
||||||
overview - OS, CPU, memory, baseboard, versions
|
|
||||||
array - Array state, capacity, disk health
|
|
||||||
network - Access URLs, interfaces
|
|
||||||
registration - License type, state, expiration
|
|
||||||
connect - Unraid Connect settings
|
|
||||||
variables - System variables and configuration
|
|
||||||
metrics - CPU and memory utilization
|
|
||||||
services - Running services
|
|
||||||
display - Theme settings
|
|
||||||
config - Configuration validity
|
|
||||||
online - Server online status
|
|
||||||
owner - Server owner info
|
|
||||||
settings - All unified settings
|
|
||||||
server - Quick server summary
|
|
||||||
servers - Connected servers list
|
|
||||||
flash - Flash drive info
|
|
||||||
ups_devices - List UPS devices
|
|
||||||
ups_device - Single UPS device (requires device_id)
|
|
||||||
ups_config - UPS configuration
|
|
||||||
"""
|
|
||||||
if action not in ALL_ACTIONS:
|
|
||||||
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
|
|
||||||
|
|
||||||
if action == "ups_device" and not device_id:
|
|
||||||
raise ToolError("device_id is required for ups_device action")
|
|
||||||
|
|
||||||
# connect is not available on all Unraid API versions
|
|
||||||
if action == "connect":
|
|
||||||
raise ToolError(
|
|
||||||
"The 'connect' query is not available on this Unraid API version. "
|
|
||||||
"Use the 'settings' action for API and SSO configuration."
|
|
||||||
)
|
|
||||||
|
|
||||||
query = QUERIES[action]
|
|
||||||
variables: dict[str, Any] | None = None
|
|
||||||
if action == "ups_device":
|
|
||||||
variables = {"id": device_id}
|
|
||||||
|
|
||||||
# Lookup tables for common response patterns
|
|
||||||
# Simple dict actions: action -> GraphQL response key
|
|
||||||
dict_actions: dict[str, str] = {
|
|
||||||
"registration": "registration",
|
|
||||||
"variables": "vars",
|
|
||||||
"metrics": "metrics",
|
|
||||||
"config": "config",
|
|
||||||
"owner": "owner",
|
|
||||||
"flash": "flash",
|
|
||||||
"ups_device": "upsDeviceById",
|
|
||||||
"ups_config": "upsConfiguration",
|
|
||||||
}
|
|
||||||
# List-wrapped actions: action -> (GraphQL response key, output key)
|
|
||||||
list_actions: dict[str, tuple[str, str]] = {
|
|
||||||
"services": ("services", "services"),
|
|
||||||
"servers": ("servers", "servers"),
|
|
||||||
"ups_devices": ("upsDevices", "ups_devices"),
|
|
||||||
}
|
|
||||||
|
|
||||||
with tool_error_handler("info", action, logger):
|
|
||||||
logger.info(f"Executing unraid_info action={action}")
|
|
||||||
data = await make_graphql_request(query, variables)
|
|
||||||
|
|
||||||
# Special-case actions with custom processing
|
|
||||||
if action == "overview":
|
|
||||||
raw = data.get("info") or {}
|
|
||||||
if not raw:
|
|
||||||
raise ToolError("No system info returned from Unraid API")
|
|
||||||
return _process_system_info(raw)
|
|
||||||
|
|
||||||
if action == "array":
|
|
||||||
raw = data.get("array") or {}
|
|
||||||
if not raw:
|
|
||||||
raise ToolError("No array information returned from Unraid API")
|
|
||||||
return _process_array_status(raw)
|
|
||||||
|
|
||||||
if action == "display":
|
|
||||||
info = data.get("info") or {}
|
|
||||||
return dict(info.get("display") or {})
|
|
||||||
|
|
||||||
if action == "online":
|
|
||||||
return {"online": data.get("online")}
|
|
||||||
|
|
||||||
if action == "settings":
|
|
||||||
settings = data.get("settings") or {}
|
|
||||||
if not settings:
|
|
||||||
raise ToolError(
|
|
||||||
"No settings data returned from Unraid API. Check API permissions."
|
|
||||||
)
|
|
||||||
if not settings.get("unified"):
|
|
||||||
logger.warning(f"Settings returned unexpected structure: {settings.keys()}")
|
|
||||||
raise ToolError(
|
|
||||||
f"Unexpected settings structure. Expected 'unified' key, got: {list(settings.keys())}"
|
|
||||||
)
|
|
||||||
values = settings["unified"].get("values") or {}
|
|
||||||
return dict(values) if isinstance(values, dict) else {"raw": values}
|
|
||||||
|
|
||||||
if action == "server":
|
|
||||||
return data
|
|
||||||
|
|
||||||
if action == "network":
|
|
||||||
servers_data = data.get("servers") or []
|
|
||||||
vars_data = data.get("vars") or {}
|
|
||||||
access_urls = []
|
|
||||||
for srv in servers_data:
|
|
||||||
if srv.get("lanip"):
|
|
||||||
access_urls.append(
|
|
||||||
{"type": "LAN", "ipv4": srv["lanip"], "url": srv.get("localurl")}
|
|
||||||
)
|
|
||||||
if srv.get("wanip"):
|
|
||||||
access_urls.append(
|
|
||||||
{"type": "WAN", "ipv4": srv["wanip"], "url": srv.get("remoteurl")}
|
|
||||||
)
|
|
||||||
return {
|
|
||||||
"accessUrls": access_urls,
|
|
||||||
"httpPort": vars_data.get("port"),
|
|
||||||
"httpsPort": vars_data.get("portssl"),
|
|
||||||
"localTld": vars_data.get("localTld"),
|
|
||||||
"useSsl": vars_data.get("useSsl"),
|
|
||||||
}
|
|
||||||
|
|
||||||
# Simple dict-returning actions
|
|
||||||
if action in dict_actions:
|
|
||||||
return dict(data.get(dict_actions[action]) or {})
|
|
||||||
|
|
||||||
# List-wrapped actions
|
|
||||||
if action in list_actions:
|
|
||||||
response_key, output_key = list_actions[action]
|
|
||||||
items = data.get(response_key) or []
|
|
||||||
normalized_items = list(items) if isinstance(items, list) else []
|
|
||||||
return {output_key: normalized_items}
|
|
||||||
|
|
||||||
raise ToolError(f"Unhandled action '{action}' — this is a bug")
|
|
||||||
|
|
||||||
logger.info("Info tool registered successfully")
|
|
||||||
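Each of these tool modules guards against drift between its `Literal` action type and its query registry with an import-time check. The pattern can be sketched standalone; the registry contents below are placeholders, not the real GraphQL queries:

```python
from typing import Literal, get_args

# Placeholder registry standing in for QUERIES; real query text elided.
QUERIES: dict[str, str] = {"overview": "...", "array": "..."}
INFO_ACTIONS = Literal["overview", "array"]

# Fails fast at import time if the Literal and the dict keys drift apart,
# instead of surfacing as a confusing KeyError at request time.
missing = set(QUERIES) - set(get_args(INFO_ACTIONS))
extra = set(get_args(INFO_ACTIONS)) - set(QUERIES)
if missing or extra:
    raise RuntimeError(f"out of sync: missing={missing or 'none'}, extra={extra or 'none'}")
print("Literal and registry are in sync")
```

Because the check runs at module import, any test suite that merely imports the tool module exercises it for free.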
@@ -1,202 +0,0 @@
"""API key management.

Provides the `unraid_keys` tool with 7 actions for listing, viewing,
creating, updating, and deleting API keys.
"""

from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


QUERIES: dict[str, str] = {
    "list": """
        query ListApiKeys {
          apiKeys { id name roles permissions { resource actions } createdAt }
        }
    """,
    "get": """
        query GetApiKey($id: PrefixedID!) {
          apiKey(id: $id) { id name roles permissions { resource actions } createdAt }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "create": """
        mutation CreateApiKey($input: CreateApiKeyInput!) {
          apiKey { create(input: $input) { id name key roles } }
        }
    """,
    "update": """
        mutation UpdateApiKey($input: UpdateApiKeyInput!) {
          apiKey { update(input: $input) { id name roles } }
        }
    """,
    "delete": """
        mutation DeleteApiKey($input: DeleteApiKeyInput!) {
          apiKey { delete(input: $input) }
        }
    """,
    "add_role": """
        mutation AddRole($input: AddRoleForApiKeyInput!) {
          apiKey { addRole(input: $input) }
        }
    """,
    "remove_role": """
        mutation RemoveRole($input: RemoveRoleFromApiKeyInput!) {
          apiKey { removeRole(input: $input) }
        }
    """,
}

DESTRUCTIVE_ACTIONS = {"delete"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)

KEY_ACTIONS = Literal[
    "add_role",
    "create",
    "delete",
    "get",
    "list",
    "remove_role",
    "update",
]

if set(get_args(KEY_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(KEY_ACTIONS))
    _extra = set(get_args(KEY_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"KEY_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )


def register_keys_tool(mcp: FastMCP) -> None:
    """Register the unraid_keys tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_keys(
        action: KEY_ACTIONS,
        ctx: Context | None = None,
        confirm: bool = False,
        key_id: str | None = None,
        name: str | None = None,
        roles: list[str] | None = None,
        permissions: list[str] | None = None,
    ) -> dict[str, Any]:
        """Manage Unraid API keys.

        Actions:
            list - List all API keys
            get - Get a specific API key (requires key_id)
            create - Create a new API key (requires name; optional roles, permissions)
            update - Update an API key (requires key_id; optional name, roles)
            delete - Delete an API key (requires key_id, confirm=True)
            add_role - Add a role to an API key (requires key_id and roles)
            remove_role - Remove a role from an API key (requires key_id and roles)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            f"Delete API key **{key_id}**. Any clients using this key will lose access.",
        )

        with tool_error_handler("keys", action, logger):
            logger.info(f"Executing unraid_keys action={action}")

            if action == "list":
                data = await make_graphql_request(QUERIES["list"])
                keys = data.get("apiKeys", [])
                return {"keys": list(keys) if isinstance(keys, list) else []}

            if action == "get":
                if not key_id:
                    raise ToolError("key_id is required for 'get' action")
                data = await make_graphql_request(QUERIES["get"], {"id": key_id})
                return dict(data.get("apiKey") or {})

            if action == "create":
                if not name:
                    raise ToolError("name is required for 'create' action")
                input_data: dict[str, Any] = {"name": name}
                if roles is not None:
                    input_data["roles"] = roles
                if permissions is not None:
                    input_data["permissions"] = permissions
                data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
                created_key = (data.get("apiKey") or {}).get("create")
                if not created_key:
                    raise ToolError("Failed to create API key: no data returned from server")
                return {"success": True, "key": created_key}

            if action == "update":
                if not key_id:
                    raise ToolError("key_id is required for 'update' action")
                input_data = {"id": key_id}
                if name:
                    input_data["name"] = name
                if roles is not None:
                    input_data["roles"] = roles
                data = await make_graphql_request(MUTATIONS["update"], {"input": input_data})
                updated_key = (data.get("apiKey") or {}).get("update")
                if not updated_key:
                    raise ToolError("Failed to update API key: no data returned from server")
                return {"success": True, "key": updated_key}

            if action == "delete":
                if not key_id:
                    raise ToolError("key_id is required for 'delete' action")
                data = await make_graphql_request(MUTATIONS["delete"], {"input": {"ids": [key_id]}})
                result = (data.get("apiKey") or {}).get("delete")
                if not result:
                    raise ToolError(
                        f"Failed to delete API key '{key_id}': no confirmation from server"
                    )
                return {
                    "success": True,
                    "message": f"API key '{key_id}' deleted",
                }

            if action == "add_role":
                if not key_id:
                    raise ToolError("key_id is required for 'add_role' action")
                if not roles:
                    raise ToolError(
                        "roles is required for 'add_role' action (pass as roles=['ROLE_NAME'])"
                    )
                await make_graphql_request(
                    MUTATIONS["add_role"],
                    {"input": {"apiKeyId": key_id, "role": roles[0]}},
                )
                return {"success": True, "message": f"Role '{roles[0]}' added to key '{key_id}'"}

            if action == "remove_role":
                if not key_id:
                    raise ToolError("key_id is required for 'remove_role' action")
                if not roles:
                    raise ToolError(
                        "roles is required for 'remove_role' action (pass as roles=['ROLE_NAME'])"
                    )
                await make_graphql_request(
                    MUTATIONS["remove_role"],
                    {"input": {"apiKeyId": key_id, "role": roles[0]}},
                )
                return {
                    "success": True,
                    "message": f"Role '{roles[0]}' removed from key '{key_id}'",
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Keys tool registered successfully")
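`gate_destructive_action` is defined elsewhere in the package; from the call sites here, its observable contract is that destructive actions are blocked unless `confirm=True`. A hypothetical synchronous stand-in (names and signature are illustrative, not the package's real API) makes the behavior concrete:

```python
class ToolError(Exception):
    """Stand-in for the package's ToolError."""


DESTRUCTIVE_ACTIONS = {"delete"}


def gate_destructive_action_sketch(
    action: str, destructive: set[str], confirm: bool, message: str
) -> None:
    # Non-destructive actions pass through untouched; destructive ones
    # require the caller to have explicitly set confirm=True.
    if action in destructive and not confirm:
        raise ToolError(f"Confirmation required: {message} Pass confirm=True to proceed.")


gate_destructive_action_sketch("list", DESTRUCTIVE_ACTIONS, False, "")  # no-op
try:
    gate_destructive_action_sketch("delete", DESTRUCTIVE_ACTIONS, False, "Delete API key.")
except ToolError as exc:
    print("blocked:", exc)
```

The real implementation also takes the MCP `Context` so it can prompt the client for elicitation; that part is omitted here.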
@@ -1,142 +0,0 @@
"""Real-time subscription snapshot tool.

Provides the `unraid_live` tool with 11 actions — one per GraphQL
subscription. Each action opens a transient WebSocket, receives one event
(or collects events for `collect_for` seconds), then closes.

Use `subscribe_once` actions for current-state reads (cpu, memory, array_state).
Use `subscribe_collect` actions for event streams (notification_feed, log_tail).
"""

import os
from typing import Any, Literal, get_args

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.exceptions import ToolError, tool_error_handler
from ..subscriptions.queries import COLLECT_ACTIONS, SNAPSHOT_ACTIONS
from ..subscriptions.snapshot import subscribe_collect, subscribe_once


_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")

ALL_LIVE_ACTIONS = set(SNAPSHOT_ACTIONS) | set(COLLECT_ACTIONS)

LIVE_ACTIONS = Literal[
    "array_state",
    "cpu",
    "cpu_telemetry",
    "log_tail",
    "memory",
    "notification_feed",
    "notifications_overview",
    "owner",
    "parity_progress",
    "server_status",
    "ups_status",
]

if set(get_args(LIVE_ACTIONS)) != ALL_LIVE_ACTIONS:
    _missing = ALL_LIVE_ACTIONS - set(get_args(LIVE_ACTIONS))
    _extra = set(get_args(LIVE_ACTIONS)) - ALL_LIVE_ACTIONS
    raise RuntimeError(
        f"LIVE_ACTIONS and ALL_LIVE_ACTIONS are out of sync. "
        f"Missing: {_missing or 'none'}. Extra: {_extra or 'none'}"
    )


def register_live_tool(mcp: FastMCP) -> None:
    """Register the unraid_live tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_live(
        action: LIVE_ACTIONS,
        path: str | None = None,
        collect_for: float = 5.0,
        timeout: float = 10.0,  # noqa: ASYNC109
    ) -> dict[str, Any]:
        """Get real-time data from Unraid via WebSocket subscriptions.

        Each action opens a transient WebSocket, receives data, then closes.

        Snapshot actions (return current state):
            cpu - Real-time CPU utilization (all cores)
            memory - Real-time memory and swap utilization
            cpu_telemetry - CPU power draw and temperature per package
            array_state - Live array state and parity status
            parity_progress - Live parity check progress
            ups_status - Real-time UPS battery and power state
            notifications_overview - Live notification counts by severity
            owner - Live owner info
            server_status - Live server connection state

        Collection actions (collect events for `collect_for` seconds):
            notification_feed - Collect new notification events (default: 5s window)
            log_tail - Tail a log file (requires path; default: 5s window)

        Parameters:
            path - Log file path for log_tail action (required)
            collect_for - Seconds to collect events for collect actions (default: 5.0)
            timeout - WebSocket connection/handshake timeout in seconds (default: 10.0)
        """
        if action not in ALL_LIVE_ACTIONS:
            raise ToolError(
                f"Invalid action '{action}'. Must be one of: {sorted(ALL_LIVE_ACTIONS)}"
            )

        # Validate log_tail path before entering the error handler context.
        if action == "log_tail":
            if not path:
                raise ToolError("path is required for 'log_tail' action")
            # Resolve to prevent path traversal attacks (same as storage.py).
            # Using os.path.realpath instead of anyio.Path.resolve() because the
            # async variant blocks on NFS-mounted paths under /mnt/ (Perf-AI-1).
            normalized = os.path.realpath(path)  # noqa: ASYNC240
            if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
                raise ToolError(
                    f"path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}. Got: {path!r}"
                )
            path = normalized

        with tool_error_handler("live", action, logger):
            logger.info(f"Executing unraid_live action={action} timeout={timeout}")

            if action in SNAPSHOT_ACTIONS:
                data = await subscribe_once(SNAPSHOT_ACTIONS[action], timeout=timeout)
                return {"success": True, "action": action, "data": data}

            # Collect actions
            if action == "log_tail":
                events = await subscribe_collect(
                    COLLECT_ACTIONS["log_tail"],
                    variables={"path": path},
                    collect_for=collect_for,
                    timeout=timeout,
                )
                return {
                    "success": True,
                    "action": action,
                    "path": path,
                    "collect_for": collect_for,
                    "event_count": len(events),
                    "events": events,
                }

            if action == "notification_feed":
                events = await subscribe_collect(
                    COLLECT_ACTIONS["notification_feed"],
                    collect_for=collect_for,
                    timeout=timeout,
                )
                return {
                    "success": True,
                    "action": action,
                    "collect_for": collect_for,
                    "event_count": len(events),
                    "events": events,
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Live tool registered successfully")
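The `log_tail` path guard above depends on `os.path.realpath` collapsing `..` segments and symlinks before the prefix test, so a traversal like `/mnt/../etc/passwd` is normalized and then rejected. A minimal standalone sketch of the same check, with the prefix list copied from the module:

```python
import os

ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")


def is_allowed_log_path(path: str) -> bool:
    # realpath resolves symlinks and ".." segments before the prefix check,
    # so "/mnt/../etc/passwd" normalizes to "/etc/passwd" and fails the test.
    normalized = os.path.realpath(path)
    return any(normalized.startswith(p) for p in ALLOWED_LOG_PREFIXES)


print(is_allowed_log_path("/mnt/user/appdata/app.log"))
print(is_allowed_log_path("/mnt/user/../../etc/passwd"))
```

Checking the raw string instead of the resolved one would accept the traversal, which is why the module normalizes first and only then reassigns `path`.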
@@ -1,311 +0,0 @@
"""Notification management.

Provides the `unraid_notifications` tool with 12 actions for viewing,
creating, archiving, and deleting system notifications.
"""

from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


QUERIES: dict[str, str] = {
    "overview": """
        query GetNotificationsOverview {
          notifications {
            overview {
              unread { info warning alert total }
              archive { info warning alert total }
            }
          }
        }
    """,
    "list": """
        query ListNotifications($filter: NotificationFilter!) {
          notifications {
            list(filter: $filter) {
              id title subject description importance link type timestamp formattedTimestamp
            }
          }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "create": """
        mutation CreateNotification($input: NotificationData!) {
          createNotification(input: $input) { id title importance }
        }
    """,
    "archive": """
        mutation ArchiveNotification($id: PrefixedID!) {
          archiveNotification(id: $id) { id title importance }
        }
    """,
    "unread": """
        mutation UnreadNotification($id: PrefixedID!) {
          unreadNotification(id: $id) { id title importance }
        }
    """,
    "delete": """
        mutation DeleteNotification($id: PrefixedID!, $type: NotificationType!) {
          deleteNotification(id: $id, type: $type) {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "delete_archived": """
        mutation DeleteArchivedNotifications {
          deleteArchivedNotifications {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "archive_all": """
        mutation ArchiveAllNotifications($importance: NotificationImportance) {
          archiveAll(importance: $importance) {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "archive_many": """
        mutation ArchiveNotifications($ids: [PrefixedID!]!) {
          archiveNotifications(ids: $ids) {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "unarchive_many": """
        mutation UnarchiveNotifications($ids: [PrefixedID!]!) {
          unarchiveNotifications(ids: $ids) {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "unarchive_all": """
        mutation UnarchiveAll($importance: NotificationImportance) {
          unarchiveAll(importance: $importance) {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
    "recalculate": """
        mutation RecalculateOverview {
          recalculateOverview {
            unread { info warning alert total }
            archive { info warning alert total }
          }
        }
    """,
}

DESTRUCTIVE_ACTIONS = {"delete", "delete_archived"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
_VALID_IMPORTANCE = {"ALERT", "WARNING", "INFO"}

NOTIFICATION_ACTIONS = Literal[
    "overview",
    "list",
    "create",
    "archive",
    "unread",
    "delete",
    "delete_archived",
    "archive_all",
    "archive_many",
    "unarchive_many",
    "unarchive_all",
    "recalculate",
]

if set(get_args(NOTIFICATION_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(NOTIFICATION_ACTIONS))
    _extra = set(get_args(NOTIFICATION_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"NOTIFICATION_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )


def register_notifications_tool(mcp: FastMCP) -> None:
    """Register the unraid_notifications tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_notifications(
        action: NOTIFICATION_ACTIONS,
        ctx: Context | None = None,
        confirm: bool = False,
        notification_id: str | None = None,
        notification_ids: list[str] | None = None,
        notification_type: str | None = None,
        importance: str | None = None,
        offset: int = 0,
        limit: int = 20,
        list_type: str = "UNREAD",
        title: str | None = None,
        subject: str | None = None,
        description: str | None = None,
    ) -> dict[str, Any]:
        """Manage Unraid system notifications.

        Actions:
            overview - Notification counts by severity (unread/archive)
            list - List notifications with filtering (list_type=UNREAD/ARCHIVE, importance=INFO/WARNING/ALERT)
            create - Create notification (requires title, subject, description, importance)
            archive - Archive a notification (requires notification_id)
            unread - Mark notification as unread (requires notification_id)
            delete - Delete a notification (requires notification_id, notification_type, confirm=True)
            delete_archived - Delete all archived notifications (requires confirm=True)
            archive_all - Archive all notifications (optional importance filter)
            archive_many - Archive multiple notifications by ID (requires notification_ids)
            unarchive_many - Move notifications back to unread (requires notification_ids)
            unarchive_all - Move all archived notifications to unread (optional importance filter)
            recalculate - Recompute overview counts from disk
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            {
                "delete": f"Delete notification **{notification_id}** permanently. This cannot be undone.",
                "delete_archived": "Delete ALL archived notifications permanently. This cannot be undone.",
            },
        )

        # Validate enum parameters before dispatching to GraphQL (SEC-M04).
        # Invalid values waste a rate-limited request and may leak schema details in errors.
        valid_list_types = frozenset({"UNREAD", "ARCHIVE"})
        valid_importance = frozenset({"INFO", "WARNING", "ALERT"})
        valid_notif_types = frozenset({"UNREAD", "ARCHIVE"})

        if list_type.upper() not in valid_list_types:
            raise ToolError(
                f"Invalid list_type '{list_type}'. Must be one of: {sorted(valid_list_types)}"
            )
        if importance is not None and importance.upper() not in valid_importance:
            raise ToolError(
                f"Invalid importance '{importance}'. Must be one of: {sorted(valid_importance)}"
            )
        if notification_type is not None and notification_type.upper() not in valid_notif_types:
            raise ToolError(
                f"Invalid notification_type '{notification_type}'. "
                f"Must be one of: {sorted(valid_notif_types)}"
            )

        with tool_error_handler("notifications", action, logger):
            logger.info(f"Executing unraid_notifications action={action}")

            if action == "overview":
                data = await make_graphql_request(QUERIES["overview"])
                notifications = data.get("notifications") or {}
                return dict(notifications.get("overview") or {})

            if action == "list":
                filter_vars: dict[str, Any] = {
                    "type": list_type.upper(),
                    "offset": offset,
                    "limit": limit,
                }
                if importance:
                    filter_vars["importance"] = importance.upper()
                data = await make_graphql_request(QUERIES["list"], {"filter": filter_vars})
                notifications = data.get("notifications", {})
                return {"notifications": notifications.get("list", [])}

            if action == "create":
                if title is None or subject is None or description is None or importance is None:
                    raise ToolError("create requires title, subject, description, and importance")
                if importance.upper() not in _VALID_IMPORTANCE:
                    raise ToolError(
                        f"importance must be one of: {', '.join(sorted(_VALID_IMPORTANCE))}. "
                        f"Got: '{importance}'"
                    )
                if len(title) > 200:
                    raise ToolError(f"title must be at most 200 characters (got {len(title)})")
                if len(subject) > 500:
                    raise ToolError(f"subject must be at most 500 characters (got {len(subject)})")
                if len(description) > 2000:
                    raise ToolError(
                        f"description must be at most 2000 characters (got {len(description)})"
                    )
                input_data = {
                    "title": title,
                    "subject": subject,
                    "description": description,
                    "importance": importance.upper(),
                }
                data = await make_graphql_request(MUTATIONS["create"], {"input": input_data})
                notification = data.get("createNotification")
                if notification is None:
                    raise ToolError("Notification creation failed: server returned no data")
                return {"success": True, "notification": notification}

            if action in ("archive", "unread"):
                if not notification_id:
                    raise ToolError(f"notification_id is required for '{action}' action")
                data = await make_graphql_request(MUTATIONS[action], {"id": notification_id})
                return {"success": True, "action": action, "data": data}

            if action == "delete":
                if not notification_id or not notification_type:
                    raise ToolError("delete requires notification_id and notification_type")
                _del_vars = {"id": notification_id, "type": notification_type.upper()}
                data = await make_graphql_request(MUTATIONS["delete"], _del_vars)
|
|
||||||
return {"success": True, "action": "delete", "data": data}
|
|
||||||
|
|
||||||
if action == "delete_archived":
|
|
||||||
data = await make_graphql_request(MUTATIONS["delete_archived"])
|
|
||||||
return {"success": True, "action": "delete_archived", "data": data}
|
|
||||||
|
|
||||||
if action == "archive_all":
|
|
||||||
variables: dict[str, Any] | None = None
|
|
||||||
if importance:
|
|
||||||
variables = {"importance": importance.upper()}
|
|
||||||
data = await make_graphql_request(MUTATIONS["archive_all"], variables)
|
|
||||||
return {"success": True, "action": "archive_all", "data": data}
|
|
||||||
|
|
||||||
if action == "archive_many":
|
|
||||||
if not notification_ids:
|
|
||||||
raise ToolError("notification_ids is required for 'archive_many' action")
|
|
||||||
data = await make_graphql_request(
|
|
||||||
MUTATIONS["archive_many"], {"ids": notification_ids}
|
|
||||||
)
|
|
||||||
return {"success": True, "action": "archive_many", "data": data}
|
|
||||||
|
|
||||||
if action == "unarchive_many":
|
|
||||||
if not notification_ids:
|
|
||||||
raise ToolError("notification_ids is required for 'unarchive_many' action")
|
|
||||||
data = await make_graphql_request(
|
|
||||||
MUTATIONS["unarchive_many"], {"ids": notification_ids}
|
|
||||||
)
|
|
||||||
return {"success": True, "action": "unarchive_many", "data": data}
|
|
||||||
|
|
||||||
if action == "unarchive_all":
|
|
||||||
vars_: dict[str, Any] | None = None
|
|
||||||
if importance:
|
|
||||||
vars_ = {"importance": importance.upper()}
|
|
||||||
data = await make_graphql_request(MUTATIONS["unarchive_all"], vars_)
|
|
||||||
return {"success": True, "action": "unarchive_all", "data": data}
|
|
||||||
|
|
||||||
if action == "recalculate":
|
|
||||||
data = await make_graphql_request(MUTATIONS["recalculate"])
|
|
||||||
return {"success": True, "action": "recalculate", "data": data}
|
|
||||||
|
|
||||||
raise ToolError(f"Unhandled action '{action}' — this is a bug")
|
|
||||||
|
|
||||||
logger.info("Notifications tool registered successfully")
|
|
||||||
@@ -1,115 +0,0 @@
"""OIDC/SSO provider management and session validation.

Provides the `unraid_oidc` tool with 5 read-only actions for querying
OIDC provider configuration and validating sessions.
"""

from typing import Any, Literal, get_args

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler


QUERIES: dict[str, str] = {
    "providers": """
        query GetOidcProviders {
            oidcProviders {
                id name clientId issuer authorizationEndpoint tokenEndpoint jwksUri
                scopes authorizationRules { claim operator value }
                authorizationRuleMode buttonText buttonIcon buttonVariant buttonStyle
            }
        }
    """,
    "provider": """
        query GetOidcProvider($id: PrefixedID!) {
            oidcProvider(id: $id) {
                id name clientId issuer scopes
                authorizationRules { claim operator value }
                authorizationRuleMode buttonText buttonIcon
            }
        }
    """,
    "configuration": """
        query GetOidcConfiguration {
            oidcConfiguration {
                providers { id name clientId scopes }
                defaultAllowedOrigins
            }
        }
    """,
    "public_providers": """
        query GetPublicOidcProviders {
            publicOidcProviders { id name buttonText buttonIcon buttonVariant buttonStyle }
        }
    """,
    "validate_session": """
        query ValidateOidcSession($token: String!) {
            validateOidcSession(token: $token) { valid username }
        }
    """,
}

ALL_ACTIONS = set(QUERIES)

OIDC_ACTIONS = Literal[
    "configuration",
    "provider",
    "providers",
    "public_providers",
    "validate_session",
]

if set(get_args(OIDC_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(OIDC_ACTIONS))
    _extra = set(get_args(OIDC_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"OIDC_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing: {_missing or 'none'}. Extra: {_extra or 'none'}"
    )


def register_oidc_tool(mcp: FastMCP) -> None:
    """Register the unraid_oidc tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_oidc(
        action: OIDC_ACTIONS,
        provider_id: str | None = None,
        token: str | None = None,
    ) -> dict[str, Any]:
        """Query Unraid OIDC/SSO provider configuration and validate sessions.

        Actions:
            providers        - List all configured OIDC providers (admin only)
            provider         - Get a specific OIDC provider by ID (requires provider_id)
            configuration    - Get full OIDC configuration including default origins (admin only)
            public_providers - Get public OIDC provider info for login buttons (no auth)
            validate_session - Validate an OIDC session token (requires token)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        if action == "provider" and not provider_id:
            raise ToolError("provider_id is required for 'provider' action")

        if action == "validate_session" and not token:
            raise ToolError("token is required for 'validate_session' action")

        with tool_error_handler("oidc", action, logger):
            logger.info(f"Executing unraid_oidc action={action}")

            if action == "provider":
                data = await make_graphql_request(QUERIES[action], {"id": provider_id})
                return {"success": True, "action": action, "data": data}

            if action == "validate_session":
                data = await make_graphql_request(QUERIES[action], {"token": token})
                return {"success": True, "action": action, "data": data}

            data = await make_graphql_request(QUERIES[action])
            return {"success": True, "action": action, "data": data}

    logger.info("OIDC tool registered successfully")
@@ -1,110 +0,0 @@
"""Plugin management for the Unraid API.

Provides the `unraid_plugins` tool with 3 actions: list, add, remove.
"""

from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


QUERIES: dict[str, str] = {
    "list": """
        query ListPlugins {
            plugins { name version hasApiModule hasCliModule }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "add": """
        mutation AddPlugin($input: PluginManagementInput!) {
            addPlugin(input: $input)
        }
    """,
    "remove": """
        mutation RemovePlugin($input: PluginManagementInput!) {
            removePlugin(input: $input)
        }
    """,
}

DESTRUCTIVE_ACTIONS = {"remove"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)

PLUGIN_ACTIONS = Literal["add", "list", "remove"]

if set(get_args(PLUGIN_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(PLUGIN_ACTIONS))
    _extra = set(get_args(PLUGIN_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"PLUGIN_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing: {_missing or 'none'}. Extra: {_extra or 'none'}"
    )


def register_plugins_tool(mcp: FastMCP) -> None:
    """Register the unraid_plugins tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_plugins(
        action: PLUGIN_ACTIONS,
        ctx: Context | None = None,
        confirm: bool = False,
        names: list[str] | None = None,
        bundled: bool = False,
        restart: bool = True,
    ) -> dict[str, Any]:
        """Manage Unraid API plugins.

        Actions:
            list   - List all installed plugins with version and module info
            add    - Install one or more plugins (requires names: list of package names)
            remove - Remove one or more plugins (requires names, confirm=True)

        Parameters:
            names   - List of plugin package names (required for add/remove)
            bundled - Whether plugins are bundled (default: False)
            restart - Whether to auto-restart the API after the operation (default: True)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            f"Remove plugin(s) **{names}** from the Unraid API. This cannot be undone without re-installing.",
        )

        with tool_error_handler("plugins", action, logger):
            logger.info(f"Executing unraid_plugins action={action}")

            if action == "list":
                data = await make_graphql_request(QUERIES["list"])
                return {"success": True, "action": action, "data": data}

            if action in ("add", "remove"):
                if not names:
                    raise ToolError(f"names is required for '{action}' action")
                input_data = {"names": names, "bundled": bundled, "restart": restart}
                mutation_key = "add" if action == "add" else "remove"
                data = await make_graphql_request(MUTATIONS[mutation_key], {"input": input_data})
                result_key = "addPlugin" if action == "add" else "removePlugin"
                restart_required = data.get(result_key)
                return {
                    "success": True,
                    "action": action,
                    "names": names,
                    "manual_restart_required": restart_required,
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Plugins tool registered successfully")
@@ -1,198 +0,0 @@
"""RClone cloud storage remote management.

Provides the `unraid_rclone` tool with 4 actions for managing
cloud storage remotes (S3, Google Drive, Dropbox, FTP, etc.).
"""

import re
from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


QUERIES: dict[str, str] = {
    "list_remotes": """
        query ListRCloneRemotes {
            rclone { remotes { name type parameters config } }
        }
    """,
    "config_form": """
        query GetRCloneConfigForm($formOptions: RCloneConfigFormInput) {
            rclone { configForm(formOptions: $formOptions) { id dataSchema uiSchema } }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "create_remote": """
        mutation CreateRCloneRemote($input: CreateRCloneRemoteInput!) {
            rclone { createRCloneRemote(input: $input) { name type parameters } }
        }
    """,
    "delete_remote": """
        mutation DeleteRCloneRemote($input: DeleteRCloneRemoteInput!) {
            rclone { deleteRCloneRemote(input: $input) }
        }
    """,
}

DESTRUCTIVE_ACTIONS = {"delete_remote"}
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)

RCLONE_ACTIONS = Literal[
    "list_remotes",
    "config_form",
    "create_remote",
    "delete_remote",
]

if set(get_args(RCLONE_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(RCLONE_ACTIONS))
    _extra = set(get_args(RCLONE_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"RCLONE_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )

# Max config entries to prevent abuse
_MAX_CONFIG_KEYS = 50
# Pattern for suspicious key names (path traversal, shell metacharacters)
_DANGEROUS_KEY_PATTERN = re.compile(r"\.\.|[/\\;|`$(){}]")
# Max length for individual config values
_MAX_VALUE_LENGTH = 4096


def _validate_config_data(config_data: dict[str, Any]) -> dict[str, str]:
    """Validate and sanitize rclone config_data before passing to GraphQL.

    Ensures all keys and values are safe strings with no injection vectors.

    Raises:
        ToolError: If config_data contains invalid keys or values
    """
    if len(config_data) > _MAX_CONFIG_KEYS:
        raise ToolError(f"config_data has {len(config_data)} keys (max {_MAX_CONFIG_KEYS})")

    validated: dict[str, str] = {}
    for key, value in config_data.items():
        if not isinstance(key, str) or not key.strip():
            raise ToolError(
                f"config_data keys must be non-empty strings, got: {type(key).__name__}"
            )
        if _DANGEROUS_KEY_PATTERN.search(key):
            raise ToolError(
                f"config_data key '{key}' contains disallowed characters "
                f"(path traversal or shell metacharacters)"
            )
        if not isinstance(value, (str, int, float, bool)):
            raise ToolError(
                f"config_data['{key}'] must be a string, number, or boolean, "
                f"got: {type(value).__name__}"
            )
        str_value = str(value)
        if len(str_value) > _MAX_VALUE_LENGTH:
            raise ToolError(
                f"config_data['{key}'] value exceeds max length "
                f"({len(str_value)} > {_MAX_VALUE_LENGTH})"
            )
        validated[key] = str_value

    return validated


def register_rclone_tool(mcp: FastMCP) -> None:
    """Register the unraid_rclone tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_rclone(
        action: RCLONE_ACTIONS,
        ctx: Context | None = None,
        confirm: bool = False,
        name: str | None = None,
        provider_type: str | None = None,
        config_data: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Manage RClone cloud storage remotes.

        Actions:
            list_remotes  - List all configured remotes
            config_form   - Get config form schema (optional provider_type for specific provider)
            create_remote - Create a new remote (requires name, provider_type, config_data)
            delete_remote - Delete a remote (requires name, confirm=True)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            f"Delete rclone remote **{name}**. This cannot be undone.",
        )

        with tool_error_handler("rclone", action, logger):
            logger.info(f"Executing unraid_rclone action={action}")

            if action == "list_remotes":
                data = await make_graphql_request(QUERIES["list_remotes"])
                remotes = data.get("rclone", {}).get("remotes", [])
                return {"remotes": list(remotes) if isinstance(remotes, list) else []}

            if action == "config_form":
                variables: dict[str, Any] = {}
                if provider_type:
                    variables["formOptions"] = {"providerType": provider_type}
                data = await make_graphql_request(QUERIES["config_form"], variables or None)
                form = data.get("rclone", {}).get("configForm", {})
                if not form:
                    raise ToolError("No RClone config form data received")
                return dict(form)

            if action == "create_remote":
                if name is None or provider_type is None or config_data is None:
                    raise ToolError("create_remote requires name, provider_type, and config_data")
                validated_config = _validate_config_data(config_data)
                data = await make_graphql_request(
                    MUTATIONS["create_remote"],
                    {
                        "input": {
                            "name": name,
                            "type": provider_type,
                            "parameters": validated_config,
                        }
                    },
                )
                remote = data.get("rclone", {}).get("createRCloneRemote")
                if not remote:
                    raise ToolError(
                        f"Failed to create remote '{name}': no confirmation from server"
                    )
                return {
                    "success": True,
                    "message": f"Remote '{name}' created successfully",
                    "remote": remote,
                }

            if action == "delete_remote":
                if not name:
                    raise ToolError("name is required for 'delete_remote' action")
                data = await make_graphql_request(
                    MUTATIONS["delete_remote"], {"input": {"name": name}}
                )
                success = data.get("rclone", {}).get("deleteRCloneRemote", False)
                if not success:
                    raise ToolError(f"Failed to delete remote '{name}'")
                return {
                    "success": True,
                    "message": f"Remote '{name}' deleted successfully",
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("RClone tool registered successfully")
@@ -1,100 +0,0 @@
"""System settings and UPS mutations.

Provides the `unraid_settings` tool with 2 actions for updating system
configuration and UPS monitoring.
"""

from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


MUTATIONS: dict[str, str] = {
    "update": """
        mutation UpdateSettings($input: JSON!) {
            updateSettings(input: $input) { restartRequired values warnings }
        }
    """,
    "configure_ups": """
        mutation ConfigureUps($config: UPSConfigInput!) {
            configureUps(config: $config)
        }
    """,
}

DESTRUCTIVE_ACTIONS = {
    "configure_ups",
}
ALL_ACTIONS = set(MUTATIONS)

SETTINGS_ACTIONS = Literal[
    "configure_ups",
    "update",
]

if set(get_args(SETTINGS_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(SETTINGS_ACTIONS))
    _extra = set(get_args(SETTINGS_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"SETTINGS_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )


def register_settings_tool(mcp: FastMCP) -> None:
    """Register the unraid_settings tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_settings(
        action: SETTINGS_ACTIONS,
        ctx: Context | None = None,
        confirm: bool = False,
        settings_input: dict[str, Any] | None = None,
        ups_config: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        """Update Unraid system settings and UPS configuration.

        Actions:
            update        - Update system settings (requires settings_input dict)
            configure_ups - Configure UPS monitoring (requires ups_config dict, confirm=True)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            "Configure UPS monitoring. This will overwrite the current UPS daemon settings.",
        )

        with tool_error_handler("settings", action, logger):
            logger.info(f"Executing unraid_settings action={action}")

            if action == "update":
                if settings_input is None:
                    raise ToolError("settings_input is required for 'update' action")
                data = await make_graphql_request(MUTATIONS["update"], {"input": settings_input})
                return {"success": True, "action": "update", "data": data.get("updateSettings")}

            if action == "configure_ups":
                if ups_config is None:
                    raise ToolError("ups_config is required for 'configure_ups' action")
                data = await make_graphql_request(
                    MUTATIONS["configure_ups"], {"config": ups_config}
                )
                return {
                    "success": True,
                    "action": "configure_ups",
                    "result": data.get("configureUps"),
                }

            raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Settings tool registered successfully")
@@ -1,215 +0,0 @@
|
|||||||
"""Storage and disk management.
|
|
||||||
|
|
||||||
Provides the `unraid_storage` tool with 6 actions for shares, physical disks,
|
|
||||||
log files, and log content retrieval.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
from typing import Any, Literal, get_args
|
|
||||||
|
|
||||||
from fastmcp import Context, FastMCP
|
|
||||||
|
|
||||||
from ..config.logging import logger
|
|
||||||
from ..core.client import DISK_TIMEOUT, make_graphql_request
|
|
||||||
from ..core.exceptions import ToolError, tool_error_handler
|
|
||||||
from ..core.guards import gate_destructive_action
|
|
||||||
from ..core.utils import format_bytes
|
|
||||||
|
|
||||||
|
|
||||||
_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/", "/mnt/")
|
|
||||||
_MAX_TAIL_LINES = 10_000
|
|
||||||
|
|
||||||
QUERIES: dict[str, str] = {
|
|
||||||
"shares": """
|
|
||||||
query GetSharesInfo {
|
|
||||||
shares {
|
|
||||||
id name free used size include exclude cache nameOrig
|
|
||||||
comment allocator splitLevel floor cow color luksStatus
|
|
||||||
}
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
"disks": """
|
|
||||||
query ListPhysicalDisks {
|
|
||||||
disks { id device name }
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
"disk_details": """
|
|
||||||
query GetDiskDetails($id: PrefixedID!) {
|
|
||||||
disk(id: $id) {
|
|
||||||
id device name serialNum size temperature
|
|
||||||
}
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
"log_files": """
|
|
||||||
query ListLogFiles {
|
|
||||||
logFiles { name path size modifiedAt }
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
"logs": """
|
|
||||||
query GetLogContent($path: String!, $lines: Int) {
|
|
||||||
logFile(path: $path, lines: $lines) {
|
|
||||||
path content totalLines startLine
|
|
||||||
}
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
}
|
|
||||||
|
|
||||||
MUTATIONS: dict[str, str] = {
|
|
||||||
"flash_backup": """
|
|
||||||
mutation InitiateFlashBackup($input: InitiateFlashBackupInput!) {
|
|
||||||
initiateFlashBackup(input: $input) { status jobId }
|
|
||||||
}
|
|
||||||
""",
|
|
||||||
}
|
|
||||||
|
|
||||||
DESTRUCTIVE_ACTIONS = {"flash_backup"}
|
|
||||||
ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)
|
|
||||||
|
|
||||||
STORAGE_ACTIONS = Literal[
|
|
||||||
"shares",
|
|
||||||
"disks",
|
|
||||||
"disk_details",
|
|
||||||
"log_files",
|
|
||||||
"logs",
|
|
||||||
"flash_backup",
|
|
||||||
]
|
|
||||||
|
|
||||||
if set(get_args(STORAGE_ACTIONS)) != ALL_ACTIONS:
|
|
||||||
_missing = ALL_ACTIONS - set(get_args(STORAGE_ACTIONS))
|
|
||||||
_extra = set(get_args(STORAGE_ACTIONS)) - ALL_ACTIONS
|
|
||||||
raise RuntimeError(
|
|
||||||
f"STORAGE_ACTIONS and ALL_ACTIONS are out of sync. "
|
|
||||||
f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def register_storage_tool(mcp: FastMCP) -> None:
|
|
||||||
"""Register the unraid_storage tool with the FastMCP instance."""
|
|
||||||
|
|
||||||
@mcp.tool()
|
|
||||||
async def unraid_storage(
|
|
||||||
action: STORAGE_ACTIONS,
|
|
||||||
ctx: Context | None = None,
|
|
||||||
disk_id: str | None = None,
|
|
||||||
log_path: str | None = None,
|
|
||||||
tail_lines: int = 100,
|
|
||||||
confirm: bool = False,
|
|
||||||
remote_name: str | None = None,
|
|
||||||
source_path: str | None = None,
|
|
||||||
destination_path: str | None = None,
|
|
||||||
backup_options: dict[str, Any] | None = None,
|
|
||||||
) -> dict[str, Any]:
|
|
||||||
"""Manage Unraid storage, disks, and logs.
|
|
||||||
|
|
||||||
Actions:
|
|
||||||
shares - List all user shares with capacity info
|
|
||||||
disks - List all physical disks
|
|
||||||
disk_details - Detailed SMART info for a disk (requires disk_id)
|
|
||||||
log_files - List available log files
|
|
||||||
logs - Retrieve log content (requires log_path, optional tail_lines)
|
|
||||||
flash_backup - Initiate flash backup via rclone (requires remote_name, source_path, destination_path, confirm=True)
|
|
||||||
"""
|
|
||||||
if action not in ALL_ACTIONS:
|
|
||||||
raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")
|
|
||||||
|
|
||||||
await gate_destructive_action(
|
|
||||||
ctx,
|
|
||||||
action,
|
|
||||||
DESTRUCTIVE_ACTIONS,
|
|
||||||
confirm,
|
|
||||||
f"Back up flash drive to **{remote_name}:{destination_path}**. "
|
|
||||||
"Existing backups at this destination will be overwritten.",
|
|
||||||
)
|
|
||||||
|
|
||||||
if action == "disk_details" and not disk_id:
|
|
||||||
raise ToolError("disk_id is required for 'disk_details' action")
|
|
||||||
|
|
||||||
if action == "logs" and (tail_lines < 1 or tail_lines > _MAX_TAIL_LINES):
|
|
||||||
raise ToolError(f"tail_lines must be between 1 and {_MAX_TAIL_LINES}, got {tail_lines}")
|
|
||||||
|
|
||||||
if action == "logs":
|
|
||||||
if not log_path:
|
|
||||||
raise ToolError("log_path is required for 'logs' action")
|
|
||||||
# Resolve path synchronously to prevent traversal attacks.
|
|
||||||
# Using os.path.realpath instead of anyio.Path.resolve() because the
|
|
        # async variant blocks on NFS-mounted paths under /mnt/ (Perf-AI-1).
        normalized = os.path.realpath(log_path)  # noqa: ASYNC240
        if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
            raise ToolError(
                f"log_path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}. "
                f"Use log_files action to discover valid paths."
            )
        log_path = normalized

        if action == "flash_backup":
            if not remote_name:
                raise ToolError("remote_name is required for 'flash_backup' action")
            if not source_path:
                raise ToolError("source_path is required for 'flash_backup' action")
            if not destination_path:
                raise ToolError("destination_path is required for 'flash_backup' action")
            input_data: dict[str, Any] = {
                "remoteName": remote_name,
                "sourcePath": source_path,
                "destinationPath": destination_path,
            }
            if backup_options is not None:
                input_data["options"] = backup_options
            with tool_error_handler("storage", action, logger):
                logger.info("Executing unraid_storage action=flash_backup")
                data = await make_graphql_request(MUTATIONS["flash_backup"], {"input": input_data})
                backup = data.get("initiateFlashBackup")
                if not backup:
                    raise ToolError("Failed to start flash backup: no confirmation from server")
                return {
                    "success": True,
                    "action": "flash_backup",
                    "data": backup,
                }

        query = QUERIES[action]
        variables: dict[str, Any] | None = None
        custom_timeout = DISK_TIMEOUT if action in ("disks", "disk_details") else None

        if action == "disk_details":
            variables = {"id": disk_id}
        elif action == "logs":
            variables = {"path": log_path, "lines": tail_lines}

        with tool_error_handler("storage", action, logger):
            logger.info(f"Executing unraid_storage action={action}")
            data = await make_graphql_request(query, variables, custom_timeout=custom_timeout)

            if action == "shares":
                return {"shares": data.get("shares", [])}

            if action == "disks":
                return {"disks": data.get("disks", [])}

            if action == "disk_details":
                raw = data.get("disk", {})
                if not raw:
                    raise ToolError(f"Disk '{disk_id}' not found")
                summary = {
                    "disk_id": raw.get("id"),
                    "device": raw.get("device"),
                    "name": raw.get("name"),
                    "serial_number": raw.get("serialNum"),
                    "size_formatted": format_bytes(raw.get("size")),
                    "temperature": (
                        f"{raw['temperature']}\u00b0C"
                        if raw.get("temperature") is not None
                        else "N/A"
                    ),
                }
                return {"summary": summary, "details": raw}

            if action == "log_files":
                return {"log_files": data.get("logFiles", [])}

            if action == "logs":
                return dict(data.get("logFile") or {})

        raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("Storage tool registered successfully")
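The log-path guard at the top of this fragment resolves the path before checking it against the allow-list, which is what defeats `..` and symlink traversal. A minimal standalone sketch of the same pattern (the prefix values here are hypothetical; the real `_ALLOWED_LOG_PREFIXES` is defined elsewhere in storage.py):

```python
import os

# Hypothetical allow-list for illustration only.
_ALLOWED_LOG_PREFIXES = ("/var/log/", "/boot/logs/")


def validate_log_path(log_path: str) -> str:
    """Resolve symlinks and '..' segments, then enforce the prefix allow-list."""
    normalized = os.path.realpath(log_path)
    if not any(normalized.startswith(p) for p in _ALLOWED_LOG_PREFIXES):
        raise ValueError(
            f"log_path must start with one of: {', '.join(_ALLOWED_LOG_PREFIXES)}"
        )
    return normalized


# '/var/log/x/../syslog' normalizes to '/var/log/syslog' and passes, while
# '/var/log/../../etc/shadow' normalizes to '/etc/shadow' and is rejected.
```

Checking the raw string instead of the resolved one would accept `/var/log/../../etc/shadow`, since it literally starts with the allowed prefix.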
unraid_mcp/tools/unraid.py: 1891 lines (new file). File diff suppressed because it is too large.
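Per the commit message, the new unraid.py replaces these per-domain tools with a single tool using action (domain) + subaction (operation) routing. A hypothetical, heavily simplified sketch of that dispatch shape (handler names and return values are invented for illustration; the real tool routes 15 domains and ~108 subactions to GraphQL calls):

```python
from typing import Any, Callable

# Invented handlers standing in for real GraphQL-backed operations.
HANDLERS: dict[tuple[str, str], Callable[[], dict[str, Any]]] = {
    ("system", "overview"): lambda: {"status": "ok"},
    ("vm", "list"): lambda: {"vms": []},
}


def unraid(action: str, subaction: str) -> dict[str, Any]:
    """Route a (domain, operation) pair to its handler."""
    handler = HANDLERS.get((action, subaction))
    if handler is None:
        raise ValueError(f"Unknown action/subaction: {action}/{subaction}")
    return handler()
```

Under this shape, `unraid(action="system", subaction="overview")` takes the role previously played by `unraid_info(action="overview")`.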
@@ -1,51 +0,0 @@
"""User account query.

Provides the `unraid_users` tool with 1 action for querying the current authenticated user.
Note: Unraid GraphQL API does not support user management operations (list, add, delete).
"""

from typing import Any, Literal

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler


QUERIES: dict[str, str] = {
    "me": """
        query GetMe {
            me { id name description roles }
        }
    """,
}

ALL_ACTIONS = set(QUERIES)

USER_ACTIONS = Literal["me"]


def register_users_tool(mcp: FastMCP) -> None:
    """Register the unraid_users tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_users(
        action: USER_ACTIONS = "me",
    ) -> dict[str, Any]:
        """Query current authenticated user.

        Actions:
            me - Get current authenticated user info (id, name, description, roles)

        Note: Unraid API does not support user management operations (list, add, delete).
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        with tool_error_handler("users", action, logger):
            logger.info("Executing unraid_users action=me")
            data = await make_graphql_request(QUERIES["me"])
            return data.get("me") or {}

    logger.info("Users tool registered successfully")
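The `data.get("me") or {}` return in the removed tool handles two distinct cases: a GraphQL response can carry `{"me": null}` for an unauthenticated session, or omit the field entirely. A minimal sketch of why `or {}` covers both:

```python
from typing import Any


def extract_me(data: dict[str, Any]) -> dict[str, Any]:
    # .get() maps a missing key to None; `or {}` then collapses both
    # None and an explicit GraphQL null to an empty dict.
    return data.get("me") or {}
```

A plain `data.get("me", {})` would still return `None` when the server sends `{"me": null}`, which is exactly the case the `or {}` form guards against.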
@@ -1,165 +0,0 @@
"""Virtual machine management.

Provides the `unraid_vm` tool with 9 actions for VM lifecycle management
including start, stop, pause, resume, force stop, reboot, and reset.
"""

from typing import Any, Literal, get_args

from fastmcp import Context, FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError, tool_error_handler
from ..core.guards import gate_destructive_action


QUERIES: dict[str, str] = {
    "list": """
        query ListVMs {
            vms { id domains { id name state uuid } }
        }
    """,
    # NOTE: The Unraid GraphQL API does not expose a single-VM query.
    # The details query is identical to list; client-side filtering is required.
    "details": """
        query ListVMs {
            vms { id domains { id name state uuid } }
        }
    """,
}

MUTATIONS: dict[str, str] = {
    "start": """
        mutation StartVM($id: PrefixedID!) { vm { start(id: $id) } }
    """,
    "stop": """
        mutation StopVM($id: PrefixedID!) { vm { stop(id: $id) } }
    """,
    "pause": """
        mutation PauseVM($id: PrefixedID!) { vm { pause(id: $id) } }
    """,
    "resume": """
        mutation ResumeVM($id: PrefixedID!) { vm { resume(id: $id) } }
    """,
    "force_stop": """
        mutation ForceStopVM($id: PrefixedID!) { vm { forceStop(id: $id) } }
    """,
    "reboot": """
        mutation RebootVM($id: PrefixedID!) { vm { reboot(id: $id) } }
    """,
    "reset": """
        mutation ResetVM($id: PrefixedID!) { vm { reset(id: $id) } }
    """,
}

# Map action names to GraphQL field names (only where they differ)
_MUTATION_FIELDS: dict[str, str] = {
    "force_stop": "forceStop",
}

DESTRUCTIVE_ACTIONS = {"force_stop", "reset"}

VM_ACTIONS = Literal[
    "list",
    "details",
    "start",
    "stop",
    "pause",
    "resume",
    "force_stop",
    "reboot",
    "reset",
]

ALL_ACTIONS = set(QUERIES) | set(MUTATIONS)

if set(get_args(VM_ACTIONS)) != ALL_ACTIONS:
    _missing = ALL_ACTIONS - set(get_args(VM_ACTIONS))
    _extra = set(get_args(VM_ACTIONS)) - ALL_ACTIONS
    raise RuntimeError(
        f"VM_ACTIONS and ALL_ACTIONS are out of sync. "
        f"Missing from Literal: {_missing or 'none'}. Extra in Literal: {_extra or 'none'}"
    )


def register_vm_tool(mcp: FastMCP) -> None:
    """Register the unraid_vm tool with the FastMCP instance."""

    @mcp.tool()
    async def unraid_vm(
        action: VM_ACTIONS,
        ctx: Context | None = None,
        vm_id: str | None = None,
        confirm: bool = False,
    ) -> dict[str, Any]:
        """Manage Unraid virtual machines.

        Actions:
            list - List all VMs with state
            details - Detailed info for a VM (requires vm_id: UUID, PrefixedID, or name)
            start - Start a VM (requires vm_id)
            stop - Gracefully stop a VM (requires vm_id)
            pause - Pause a VM (requires vm_id)
            resume - Resume a paused VM (requires vm_id)
            force_stop - Force stop a VM (requires vm_id, confirm=True)
            reboot - Reboot a VM (requires vm_id)
            reset - Reset a VM (requires vm_id, confirm=True)
        """
        if action not in ALL_ACTIONS:
            raise ToolError(f"Invalid action '{action}'. Must be one of: {sorted(ALL_ACTIONS)}")

        if action != "list" and not vm_id:
            raise ToolError(f"vm_id is required for '{action}' action")

        await gate_destructive_action(
            ctx,
            action,
            DESTRUCTIVE_ACTIONS,
            confirm,
            {
                "force_stop": f"Force stop VM **{vm_id}**. Unsaved data may be lost.",
                "reset": f"Reset VM **{vm_id}**. This is a hard reset — unsaved data may be lost.",
            },
        )

        with tool_error_handler("vm", action, logger):
            logger.info(f"Executing unraid_vm action={action}")

            if action == "list":
                data = await make_graphql_request(QUERIES["list"])
                if data.get("vms"):
                    vms = data["vms"].get("domains") or data["vms"].get("domain") or []
                    if isinstance(vms, dict):
                        vms = [vms]
                    return {"vms": vms}
                return {"vms": []}

            if action == "details":
                data = await make_graphql_request(QUERIES["details"])
                if not data.get("vms"):
                    raise ToolError("No VM data returned from server")
                vms = data["vms"].get("domains") or data["vms"].get("domain") or []
                if isinstance(vms, dict):
                    vms = [vms]
                for vm in vms:
                    if vm.get("uuid") == vm_id or vm.get("id") == vm_id or vm.get("name") == vm_id:
                        return dict(vm)
                available = [f"{v.get('name')} (UUID: {v.get('uuid')})" for v in vms]
                raise ToolError(f"VM '{vm_id}' not found. Available: {', '.join(available)}")

            # Mutations
            if action in MUTATIONS:
                data = await make_graphql_request(MUTATIONS[action], {"id": vm_id})
                field = _MUTATION_FIELDS.get(action, action)
                if data.get("vm") and field in data["vm"]:
                    return {
                        "success": data["vm"][field],
                        "action": action,
                        "vm_id": vm_id,
                    }
                raise ToolError(f"Failed to {action} VM or unexpected response")

        raise ToolError(f"Unhandled action '{action}' — this is a bug")

    logger.info("VM tool registered successfully")