Mirror of https://github.com/jmagar/unraid-mcp.git, synced 2026-03-01 16:04:24 -08:00

Remove unused MCP resources and update documentation

- Remove the array_status, system_info, notifications_overview, and parity_status resources
- Keep only the logs_stream resource (unraid://logs/stream), which is working properly
- Update README.md with current resource documentation and modern docker compose syntax
- Fix import path issues that were causing subscription errors
- Update environment configuration examples
- Clean up the subscription manager to include only the working log streaming

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
.env.example | 31
@@ -1,18 +1,31 @@
 # Unraid MCP Server Configuration
-UNRAID_API_URL=https://your-unraid-server-url/graphql # Ensure this matches what the server script (unraid-mcp-server.py) expects
+# =================================
+
+# Core API Configuration (Required)
+# ---------------------------------
+UNRAID_API_URL=https://your-unraid-server-url/graphql
 UNRAID_API_KEY=your_unraid_api_key
 
 # MCP Server Settings
-UNRAID_MCP_TRANSPORT=sse
+# -------------------
+UNRAID_MCP_TRANSPORT=streamable-http # Options: streamable-http (recommended), sse (deprecated), stdio
 UNRAID_MCP_HOST=0.0.0.0
 UNRAID_MCP_PORT=6970
 
-# Logging
-UNRAID_MCP_LOG_LEVEL=INFO # Changed from UNRAID_LOG_LEVEL
-UNRAID_MCP_LOG_FILE=unraid-mcp.log # Added
+# Logging Configuration
+# ---------------------
+UNRAID_MCP_LOG_LEVEL=INFO # Options: DEBUG, INFO, WARNING, ERROR
+UNRAID_MCP_LOG_FILE=unraid-mcp.log # Log file name (saved to logs/ directory)
 
-# Optional: SSL verification for Unraid API calls
-# Set to 'false' or '0' to disable (e.g., for self-signed certs).
-# Set to a path to a CA bundle file to use custom CAs.
-# Defaults to 'true' (SSL verification enabled) if not set in server code, but explicitly configurable via UNRAID_VERIFY_SSL in script.
+# SSL/TLS Configuration
+# --------------------
+# Set to 'false' or '0' to disable SSL verification (e.g., for self-signed certificates)
+# Set to 'true' or '1' to enable SSL verification (default)
+# Set to a file path to use a custom CA bundle
 UNRAID_VERIFY_SSL=true
+
+# Optional: Subscription Auto-start Log Path
+# ------------------------------------------
+# Custom log file path for subscription auto-start diagnostics
+# Defaults to standard log if not specified
+# UNRAID_AUTOSTART_LOG_PATH=/custom/path/to/autostart.log
.gitignore (vendored) | 2
@@ -12,7 +12,7 @@ wheels/
.env
.env.local
*.log

logs/
.bivvy
.cursor
@@ -1,189 +0,0 @@
# Missing Unraid API Features

This document details the comprehensive analysis of Unraid API capabilities that are **NOT** currently implemented in our MCP server, based on investigation of the official Unraid API repository (https://github.com/unraid/api).

## Current Implementation Status

### ✅ What We HAVE Implemented
- Basic system info, array status, physical disks
- Docker container listing/management/details
- VM listing/management/details
- Basic notification overview/listing
- Log file listing/content retrieval
- User shares information
- Network configuration, registration, and Connect settings
- Unraid variables and system health check

### ❌ What We're MISSING

---
## 1. GraphQL Mutations (Server Control Operations)

### Array Management Mutations
- **`array.setState(input: ArrayStateInput)`** - Start/stop the Unraid array
- **`array.addDiskToArray(input: ArrayDiskInput)`** - Add disk to array
- **`array.removeDiskFromArray(input: ArrayDiskInput)`** - Remove disk from array (requires stopped array)

### Parity Check Mutations
- **`parityCheck.start(correct: boolean)`** - Start parity check with optional correction
- **`parityCheck.pause()`** - Pause ongoing parity check
- **`parityCheck.resume()`** - Resume paused parity check
### Enhanced VM Management Mutations
- **`vm.pause(id: PrefixedID)`** - Pause running VM
- **`vm.resume(id: PrefixedID)`** - Resume paused VM
- *(We have start/stop, missing pause/resume)*

### RClone Remote Management
- **`rclone.createRCloneRemote(input: CreateRCloneRemoteInput)`** - Create new RClone remote
- **`rclone.deleteRCloneRemote(input: DeleteRCloneRemoteInput)`** - Delete RClone remote

### Settings & Configuration Management
- **`updateSettings(input: JSON!)`** - Update server settings with validation
  - **`api`** namespace: `sandbox` (boolean), `ssoSubIds` (string[]), `extraOrigins` (string[])
  - **`connect`** namespace: `accessType` (string), `port` (number|null), `forwardType` (string)
  - **Plugin namespaces**: Dynamic settings from installed plugins
- **`setAdditionalAllowedOrigins(input: AllowedOriginInput!)`** - Configure API allowed origins
- **`setupRemoteAccess(input: SetupRemoteAccessInput!)`** - Configure remote access settings

### Unraid Connect Authentication
- **`connectSignIn(input: ConnectSignInInput!)`** - Sign in to Unraid Connect
- **`connectSignOut()`** - Sign out from Unraid Connect

### Advanced Notification Management
- **`archiveNotification(id: PrefixedID!)`** - Archive specific notification
- **`archiveAllNotifications()`** - Archive all unread notifications
- **`deleteNotification(id: PrefixedID!, type: NotificationType!)`** - Delete specific notification
- **`deleteArchivedNotifications()`** - Delete all archived notifications
- **`recalculateOverview()`** - Recompute notification overview counts

### API Key Management
- **`apiKey.create(input: CreateApiKeyInput!)`** - Create new API key with roles/permissions
- **`apiKey.delete(input: DeleteApiKeyInput!)`** - Delete existing API key

---

## 2. GraphQL Queries (Information Retrieval)

### Enhanced System Information
- **`cloud`** - Cloud connection status, API key validity, allowed origins
- **`servers`** - List of registered multi-server setups via **Unraid Connect**
  - Provides centralized management of multiple Unraid servers through cloud connectivity
  - Returns: server identification, system info, status, configuration data
  - Enables "one-stop shop" server management, monitoring, and maintenance
- **`publicTheme`** - Current theme settings (colors, branding, etc.)
- **`extraAllowedOrigins`** - Additional configured allowed origins
- **`remoteAccess`** - Remote access configuration details
- **`publicPartnerInfo`** - Partner/OEM branding information
- **`customization.activationCode`** - Activation code and customization details
- **`apiKeyPossibleRoles`** and **`apiKeyPossiblePermissions`** - Available API key roles and permissions
- **`settings.unified`** - Unified settings with JSON schema validation

### RClone Configuration
- **`rclone.configForm(formOptions: RCloneConfigFormInput)`** - Get RClone configuration form schema
- **`rclone.remotes`** - List all configured RClone remotes with parameters

### Enhanced Log Management
- Better log file metadata (size, modification timestamps) - we have a basic implementation but are missing some fields

---

## 3. GraphQL Subscriptions (Real-time Updates)

**We have ZERO subscription capabilities implemented.** All of these provide real-time updates:

### Core Infrastructure Monitoring
- **`arraySubscription`** - Real-time array status changes (critical for storage monitoring)
- **`infoSubscription`** - System information updates (CPU, memory, uptime changes)
- **`parityHistorySubscription`** - Parity check progress and status updates

### Application & Service Monitoring
- **`logFile(path: String!)`** - Real-time log file content streaming
- **`notificationAdded`** - New notification events
- **`notificationsOverview`** - Live notification count changes

### Advanced System Monitoring
- **`displaySubscription`** - Display-related information updates
- **`ownerSubscription`** - Owner profile/status changes
- **`registrationSubscription`** - Registration status changes (API key updates, etc.)
- **`serversSubscription`** - Multi-server status updates

### Events & General Updates
- **`events`** - General system events including client connections
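GraphQL subscriptions like these are commonly driven by the graphql-transport-ws protocol: the client opens a WebSocket, sends a `connection_init` frame, waits for `connection_ack`, then sends a `subscribe` frame carrying the subscription document, after which the server streams `next` frames. A minimal sketch of the frame payloads; the subscription text uses the `logFile` field listed above, and how auth is carried in the init payload is an assumption that varies by server:

```python
import json

def connection_init(api_key: str) -> str:
    """First frame on the WebSocket; auth placement in the payload is server-specific."""
    return json.dumps({"type": "connection_init", "payload": {"x-api-key": api_key}})

def subscribe(sub_id: str, query: str, variables: dict) -> str:
    """Start a subscription; the server replies with 'next' frames under the same id."""
    return json.dumps({
        "id": sub_id,
        "type": "subscribe",
        "payload": {"query": query, "variables": variables},
    })

# Hypothetical subscription document based on the field listed above.
LOG_FILE_SUB = "subscription ($path: String!) { logFile(path: $path) }"
frame = subscribe("1", LOG_FILE_SUB, {"path": "/var/log/syslog"})
```

A real client would pair these frames with a WebSocket library and dispatch incoming `next`/`error`/`complete` frames by `id`.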
---

## 4. Priority Implementation Recommendations

### **HIGH PRIORITY** (Critical for Infrastructure Management)
1. **Array Control Mutations** - `setState`, `addDisk`, `removeDisk`
2. **Parity Operations** - `start`, `pause`, `resume` parity checks
3. **Real-time Subscriptions** - `arraySubscription`, `infoSubscription`, `parityHistorySubscription`
4. **Enhanced Notification Management** - Archive, delete operations

### **MEDIUM PRIORITY** (Valuable for Administration)
1. **RClone Management** - Create, delete, list remotes
2. **Settings Management** - Update server configurations
3. **API Key Management** - Create, delete keys
4. **Real-time Log Streaming** - `logFile` subscription

### **LOW PRIORITY** (Nice to Have)
1. **Connect Authentication** - Sign in/out operations
2. **Enhanced System Queries** - Cloud status, themes, partner info
3. **Advanced VM Operations** - Pause/resume VMs
4. **Multi-server Support** - Server listing and management

---

## 5. Implementation Strategy

### Phase 1: Core Operations (Highest Value)
- Implement array control mutations
- Add parity check operations
- Create a subscription-to-resource framework for real-time monitoring

### Phase 2: Enhanced Management
- RClone remote management
- Advanced notification operations
- Settings management

### Phase 3: Advanced Features
- API key management
- Connect authentication
- Multi-server capabilities

---

## 6. Technical Notes

### GraphQL Schema Migration Status
According to the Unraid API repository:
- **Docker Resolver**: Still needs migration to code-first approach
- **Disks Resolver**: Still needs migration to code-first approach
- **API Key Operations**: Mentioned in docs but GraphQL mutations not fully defined in context

### Authentication Requirements
Most mutations require:
- Valid API key in `x-api-key` header
- Appropriate role-based permissions
- Some operations may require `admin` role

### Real-time Capabilities
The Unraid API uses:
- GraphQL subscriptions over WebSocket
- PubSub event system for real-time updates
- Domain event bus architecture

---

## 7. Impact Assessment

Implementing these missing features would:
- **Dramatically increase** our MCP server's capabilities
- **Enable full remote management** of Unraid servers
- **Provide real-time monitoring** through MCP resources
- **Support automation and orchestration** workflows
- **Reach feature parity** with the official Unraid API

The subscription-to-resource approach would be particularly powerful, making our MCP server one of the most capable infrastructure monitoring tools available in the MCP ecosystem.
README.md | 481
@@ -1,230 +1,349 @@
# Unraid MCP Server
# 🚀 Unraid MCP Server

This server provides an MCP interface to interact with an Unraid server's GraphQL API.
[](https://www.python.org/downloads/)
[](https://github.com/jlowin/fastmcp)
[](LICENSE)

## Setup
**A powerful MCP (Model Context Protocol) server that provides comprehensive tools to interact with an Unraid server's GraphQL API.**

This section describes the setup for local development **without Docker**. For Docker-based deployment, see the "Docker" section below.
## ✨ Features

1. Install dependencies using uv:
```bash
uv sync
```
2. Navigate to the project root directory containing `unraid_mcp_server.py`.
3. Copy `.env.example` to `.env`: `cp .env.example .env`
4. Edit `.env` and fill in your Unraid and MCP server details:
   * `UNRAID_API_URL`: Your Unraid GraphQL endpoint (e.g., `http://your-unraid-ip/graphql`). **Required.**
   * `UNRAID_API_KEY`: Your Unraid API key. **Required.**
   * `UNRAID_MCP_TRANSPORT` (optional, defaults to `streamable-http` for both local and Docker; recommended for new setups). Valid options: `streamable-http`, `sse`, `stdio`.
   * `UNRAID_MCP_HOST` (optional, defaults to `0.0.0.0` for network transports, listens on all interfaces).
   * `UNRAID_MCP_PORT` (optional, defaults to `6970` for network transports).
   * `UNRAID_MCP_LOG_LEVEL` (optional, defaults to `INFO`). Examples: `DEBUG`, `INFO`, `WARNING`, `ERROR`.
   * `UNRAID_MCP_LOG_FILE` (optional, defaults to `unraid-mcp.log` in the script's directory).
   * `UNRAID_VERIFY_SSL` (optional, defaults to `true`. Set to `false` for self-signed certificates, or provide a path to a CA bundle).
- 🔧 **25+ Tools**: Complete Unraid management through the MCP protocol
- 🏗️ **Modular Architecture**: Clean, maintainable, and extensible codebase
- ⚡ **High Performance**: Async/concurrent operations with optimized timeouts
- 🔄 **Real-time Data**: WebSocket subscriptions for live log streaming
- 📊 **Health Monitoring**: Comprehensive system diagnostics and status
- 🐳 **Docker Ready**: Full containerization support with Docker Compose
- 🔒 **Secure**: Proper SSL/TLS configuration and API key management
- 📝 **Rich Logging**: Structured logging with rotation and multiple levels

## Running the Server
---

From the project root:
## 📋 Table of Contents

```bash
uv run unraid-mcp-server
```
- [Quick Start](#-quick-start)
- [Installation](#-installation)
- [Configuration](#-configuration)
- [Available Tools & Resources](#-available-tools--resources)
- [Docker Deployment](#-docker-deployment)
- [Development](#-development)
- [Architecture](#-architecture)
- [Troubleshooting](#-troubleshooting)

Alternatively, you can run the Python file directly:
---

```bash
uv run python unraid_mcp_server.py
```

The server starts using the streamable-http transport on port 6970 by default.
## Implemented Tools

Below is a list of the implemented tools and their basic functions.
Refer to the Unraid GraphQL schema for detailed response structures.

* `get_system_info()`: Retrieves comprehensive system, OS, CPU, memory, and hardware information.
* `get_array_status()`: Gets the current status of the storage array, capacity, and disk details.
* `list_docker_containers(skip_cache: Optional[bool] = False)`: Lists all Docker containers.
* `manage_docker_container(container_id: str, action: str)`: Starts or stops a Docker container (action: "start" or "stop").
* `get_docker_container_details(container_identifier: str)`: Gets detailed info for a specific Docker container by ID or name.
* `list_vms()`: Lists all Virtual Machines and their states.
* `manage_vm(vm_id: str, action: str)`: Manages a VM (actions: "start", "stop", "pause", "resume", "forceStop", "reboot").
* `get_vm_details(vm_identifier: str)`: Gets details for a specific VM by ID or name.
* `get_shares_info()`: Retrieves information about all user shares.
* `get_notifications_overview()`: Gets an overview of system notifications (counts by severity/status).
* `list_notifications(type: str, offset: int, limit: int, importance: Optional[str] = None)`: Lists notifications with filters.
* `list_available_log_files()`: Lists all available log files.
* `get_logs(log_file_path: str, tail_lines: Optional[int] = 100)`: Retrieves content from a specific log file (tails the last N lines).
* `list_physical_disks()`: Lists all physical disks recognized by the system.
* `get_disk_details(disk_id: str)`: Retrieves detailed SMART info and partition data for a specific physical disk.
* `get_unraid_variables()`: Retrieves a wide range of Unraid system variables and settings.
* `get_network_config()`: Retrieves network configuration details, including access URLs.
* `get_registration_info()`: Retrieves Unraid registration details.
* `get_connect_settings()`: Retrieves settings related to Unraid Connect.
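The `tail_lines` behavior described for `get_logs` above (return only the last N lines of a log) can be sketched in a few lines. This helper is illustrative, not the server's actual implementation:

```python
from collections import deque

def tail(text: str, n: int = 100) -> str:
    """Return the last n lines of text, mirroring get_logs' tail_lines option."""
    # deque with maxlen keeps only the most recent n lines while iterating once.
    return "\n".join(deque(text.splitlines(), maxlen=n))
```

Using a bounded deque avoids materializing a second full copy of the line list when only the tail is needed.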
### Claude Desktop Client Configuration

If your Unraid MCP Server is running on `localhost:6970` (the default):

Create or update your Claude Desktop MCP settings file at `~/.config/claude/claude_mcp_settings.jsonc` (create the `claude` directory if it doesn't exist).
Add or update the entry for this server:

```jsonc
{
  "mcp_servers": {
    "unraid": { // Use a short, descriptive name for the client
      "url": "http://localhost:6970/mcp", // Default path for FastMCP streamable-http is /mcp
      "disabled": false,
      "timeout": 60, // Optional: timeout in seconds for requests
      "transport": "streamable-http" // Default transport
    }
    // ... other server configurations
  }
}
```

Make sure the `url` matches your `UNRAID_MCP_HOST` and `UNRAID_MCP_PORT` settings if you've changed them from the defaults.

(Details to be added after implementation based on the approved toolset.)
## Docker

This application can be containerized using Docker.
## 🚀 Quick Start

### Prerequisites
- Python 3.10+
- [uv](https://github.com/astral-sh/uv) package manager
- Unraid server with GraphQL API enabled

* Docker installed and running.
### 1. Installation
```bash
git clone <your-repo-url>
cd unraid-mcp
uv sync
```

### Building the Image
### 2. Configuration
```bash
cp .env.example .env
# Edit .env with your Unraid details
```

1. Navigate to the root directory of this project (`unraid-mcp`).
2. Build the Docker image:
### 3. Run
```bash
# Using uv script (recommended)
uv run unraid-mcp-server

# Using development script (with hot reload)
./dev.sh

# Using module syntax
uv run -m unraid_mcp.main
```

---

## 📦 Installation

### Using uv (Recommended)
```bash
# Install dependencies
uv sync

# Install development dependencies
uv sync --group dev
```

### Manual Installation
```bash
pip install -r requirements.txt # If you have a requirements.txt
```

---

## ⚙️ Configuration

### Environment Variables

Create a `.env` file in the project root:

```bash
# Core API Configuration (Required)
UNRAID_API_URL=https://your-unraid-server-url/graphql
UNRAID_API_KEY=your_unraid_api_key

# MCP Server Settings
UNRAID_MCP_TRANSPORT=streamable-http # streamable-http (recommended), sse (deprecated), stdio
UNRAID_MCP_HOST=0.0.0.0
UNRAID_MCP_PORT=6970

# Logging Configuration
UNRAID_MCP_LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
UNRAID_MCP_LOG_FILE=unraid-mcp.log

# SSL/TLS Configuration
UNRAID_VERIFY_SSL=true # true, false, or path to CA bundle

# Optional: Log Stream Configuration
# UNRAID_AUTOSTART_LOG_PATH=/var/log/syslog # Path for log streaming resource
```

### Transport Options

| Transport | Description | Use Case |
|-----------|-------------|----------|
| `streamable-http` | HTTP-based (recommended) | Most compatible, best performance |
| `sse` | Server-Sent Events (deprecated) | Legacy support only |
| `stdio` | Standard I/O | Direct integration scenarios |
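The three transports in the table above imply a small piece of validation when reading `UNRAID_MCP_TRANSPORT`. A hedged sketch of how that resolution might look; the helper name is illustrative, not the server's actual code:

```python
import os

VALID_TRANSPORTS = {"streamable-http", "sse", "stdio"}

def resolve_transport(default: str = "streamable-http") -> str:
    """Read UNRAID_MCP_TRANSPORT, falling back to the recommended default."""
    value = os.environ.get("UNRAID_MCP_TRANSPORT", default).strip().lower()
    if value not in VALID_TRANSPORTS:
        raise ValueError(f"unsupported transport: {value!r}")
    return value
```

Failing fast on an unknown transport surfaces typos at startup rather than as confusing client-side connection errors.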
---

## 🛠️ Available Tools & Resources

### System Information & Status
- `get_system_info()` - Comprehensive system, OS, CPU, memory, hardware info
- `get_array_status()` - Storage array status, capacity, and disk details
- `get_unraid_variables()` - System variables and settings
- `get_network_config()` - Network configuration and access URLs
- `get_registration_info()` - Unraid registration details
- `get_connect_settings()` - Unraid Connect configuration

### Docker Container Management
- `list_docker_containers()` - List all containers with caching options
- `manage_docker_container(id, action)` - Start/stop containers (idempotent)
- `get_docker_container_details(identifier)` - Detailed container information

### Virtual Machine Management
- `list_vms()` - List all VMs and their states
- `manage_vm(id, action)` - VM lifecycle (start/stop/pause/resume/reboot)
- `get_vm_details(identifier)` - Detailed VM information

### Storage & File Systems
- `get_shares_info()` - User shares information
- `list_physical_disks()` - Physical disk discovery
- `get_disk_details(disk_id)` - SMART data and detailed disk info

### Monitoring & Diagnostics
- `health_check()` - Comprehensive system health assessment
- `get_notifications_overview()` - Notification counts by severity
- `list_notifications(type, offset, limit)` - Filtered notification listing
- `list_available_log_files()` - Available system logs
- `get_logs(path, tail_lines)` - Log file content retrieval

### Cloud Storage (RClone)
- `list_rclone_remotes()` - List configured remotes
- `get_rclone_config_form(provider)` - Configuration schemas
- `create_rclone_remote(name, type, config)` - Create new remote
- `delete_rclone_remote(name)` - Remove existing remote

### Real-time Subscriptions & Resources
- `test_subscription_query(query)` - Test GraphQL subscriptions
- `diagnose_subscriptions()` - Subscription system diagnostics

### MCP Resources (Real-time Data)
- `unraid://logs/stream` - Live log streaming from `/var/log/syslog` with WebSocket subscriptions

> **Note**: MCP Resources provide real-time data streams that can be accessed via MCP clients. The log stream resource automatically connects to your Unraid system logs and provides live updates.
---

## 🐳 Docker Deployment

### Using Docker Compose (Recommended)

1. **Prepare Environment**
   ```bash
   cp .env.example .env.local
   # Edit .env.local with your settings
   ```

2. **Start Services**
   ```bash
   docker compose up -d
   ```

3. **View Logs**
   ```bash
   docker compose logs -f unraid-mcp
   ```

### Manual Docker

```bash
# Build image
docker build -t unraid-mcp-server .
```

### Running the Container

To run the container, you'll need to provide the necessary environment variables. You can do this directly on the command line or by using an environment file.

**Option 1: Using an environment file (recommended)**

1. Create a file named `.env.local` in the project root (this file is in `.dockerignore` and won't be copied into the image).
2. Add your environment variables to `.env.local`:

```env
UNRAID_API_URL=http://your-unraid-ip/graphql
UNRAID_API_KEY=your-unraid-api-key
# Optional: override default port
# UNRAID_MCP_PORT=6971
# Optional: override log level
# UNRAID_MCP_LOG_LEVEL=DEBUG
# Optional: SSL verification settings
# UNRAID_VERIFY_SSL=false
```
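Outside Docker, a file like `.env.local` above can be loaded without extra dependencies. This is a minimal sketch that handles only simple `KEY=VALUE` lines and `#` comments, unlike a full dotenv parser, and the function name is illustrative:

```python
def load_env_file(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comment lines."""
    env: dict[str, str] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env
```

Note this does not handle quoting, multi-line values, or `export` prefixes; real deployments may prefer a dotenv library.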
3. Run the container, mounting the `.env.local` file:

```bash
docker run -d --name unraid-mcp --restart unless-stopped -p 6970:6970 --env-file .env.local unraid-mcp-server
```
* `-d`: Run in detached mode.
* `--name unraid-mcp`: Assign a name to the container.
* `--restart unless-stopped`: Restart policy.
* `-p 6970:6970`: Map port 6970 on the host to port 6970 in the container. Adjust if you changed `UNRAID_MCP_PORT`.
* `--env-file .env.local`: Load environment variables from the specified file.

**Option 2: Providing environment variables directly**

```bash
docker run -d --name unraid-mcp --restart unless-stopped -p 6970:6970 \
  -e UNRAID_API_URL="http://your-unraid-ip/graphql" \
  -e UNRAID_API_KEY="your-unraid-api-key" \
  unraid-mcp-server

# Run container
docker run -d --name unraid-mcp \
  --restart unless-stopped \
  -p 6970:6970 \
  --env-file .env.local \
  unraid-mcp-server
```
### Accessing Logs
---

To view the logs of the running container: `docker logs unraid-mcp`
## 🔧 Development

### Project Structure
```
unraid-mcp/
├── unraid_mcp/               # Main package
│   ├── main.py               # Entry point
│   ├── config/               # Configuration management
│   │   ├── settings.py       # Environment & settings
│   │   └── logging.py        # Logging setup
│   ├── core/                 # Core infrastructure
│   │   ├── client.py         # GraphQL client
│   │   ├── exceptions.py     # Custom exceptions
│   │   └── types.py          # Shared data types
│   ├── subscriptions/        # Real-time subscriptions
│   │   ├── manager.py        # WebSocket management
│   │   ├── resources.py      # MCP resources
│   │   └── diagnostics.py    # Diagnostic tools
│   ├── tools/                # MCP tool categories
│   │   ├── docker.py         # Container management
│   │   ├── system.py         # System information
│   │   ├── storage.py        # Storage & monitoring
│   │   ├── health.py         # Health checks
│   │   ├── virtualization.py # VM management
│   │   └── rclone.py         # Cloud storage
│   └── server.py             # FastMCP server setup
├── logs/                     # Log files (auto-created)
├── dev.sh                    # Development script
└── docker-compose.yml        # Docker Compose deployment
```

### Code Quality Commands
```bash
# Format code
uv run black unraid_mcp/

# Lint code
uv run ruff check unraid_mcp/

# Type checking
uv run mypy unraid_mcp/

# Run tests
uv run pytest
```

Follow logs in real-time: `docker logs -f unraid-mcp`

### Development Workflow
```bash
# Start development server (kills existing processes safely)
./dev.sh

# Stop server only
./dev.sh --kill
```
### Stopping and Removing the Container
---

(Using `docker run` commands)
## 🏗️ Architecture

### Core Principles
- **Modular Design**: Separate concerns across focused modules
- **Async First**: All operations are non-blocking and concurrent-safe
- **Error Resilience**: Comprehensive error handling with graceful degradation
- **Configuration Driven**: Environment-based configuration with validation
- **Observability**: Structured logging and health monitoring

### Key Components

| Component | Purpose |
|-----------|---------|
| **FastMCP Server** | MCP protocol implementation and tool registration |
| **GraphQL Client** | Async HTTP client with timeout management |
| **Subscription Manager** | WebSocket connections for real-time data |
| **Tool Modules** | Domain-specific business logic (Docker, VMs, etc.) |
| **Configuration System** | Environment loading and validation |
| **Logging Framework** | Structured logging with file rotation |

---

## 🐛 Troubleshooting

### Common Issues

**🔥 Port Already in Use**
```bash
docker stop unraid-mcp
docker rm unraid-mcp
./dev.sh # Automatically kills existing processes
```

### Using Docker Compose

A `docker-compose.yml` file is provided for easier management.

**Prerequisites:**

* Docker Compose installed (usually included with Docker Desktop).
* Ensure you have an `.env.local` file in the same directory as `docker-compose.yml` with your `UNRAID_API_URL` and `UNRAID_API_KEY` (and any other overrides). See "Option 1: Using an environment file" in the `docker run` section above for an example of `.env.local` content.
* If you haven't built the image yet, Docker Compose can build it for you if you uncomment the `build` section in `docker-compose.yml`, or build it manually first: `docker build -t unraid-mcp-server .`

**Starting the service:** `docker-compose up -d`

**🔧 Connection Refused**
```bash
# Check Unraid API configuration
curl -k "${UNRAID_API_URL}" -H "X-API-Key: ${UNRAID_API_KEY}"
```

This will start the `unraid-mcp` service in detached mode.

**Viewing logs:** `docker-compose logs -f unraid-mcp`

**📝 Import Errors**
```bash
# Reinstall dependencies
uv sync --reinstall
```

**Stopping the service:** `docker-compose down`

**🔍 Debug Mode**
```bash
# Enable debug logging
export UNRAID_MCP_LOG_LEVEL=DEBUG
uv run unraid-mcp-server
```
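The `UNRAID_MCP_LOG_LEVEL` toggle shown above maps directly onto Python's standard logging levels. A hedged sketch of how a server might apply it; the helper name is illustrative, not the repository's actual code:

```python
import logging
import os

def configure_logging(default: str = "INFO") -> int:
    """Resolve UNRAID_MCP_LOG_LEVEL to a stdlib logging level and apply it."""
    name = os.environ.get("UNRAID_MCP_LOG_LEVEL", default).upper()
    level = getattr(logging, name, logging.INFO)  # unknown names fall back to INFO
    logging.basicConfig(level=level)
    return level
```

With this shape, `export UNRAID_MCP_LOG_LEVEL=DEBUG` takes effect on the next start without any code change.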
This stops and removes the container defined in the `docker-compose.yml` file.

### Health Check
```bash
docker stop unraid-mcp
docker rm unraid-mcp
# Use the built-in health check tool via MCP client
# or check logs at: logs/unraid-mcp.log
```

### Claude Desktop Client Configuration (for Docker)
---

If your Unraid MCP Server is running in Docker and exposed on `localhost:6970` (the default Docker setup):
## 📄 License

Create or update your Claude Desktop MCP settings file at `~/.config/claude/claude_mcp_settings.jsonc`.
Add or update the entry for this server:
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

```jsonc
{
  "mcp_servers": {
    "unraid": { // Use a short, descriptive name for the client
      "url": "http://localhost:6970/mcp", // Default path for FastMCP streamable-http is /mcp
      "disabled": false,
      "timeout": 60, // Optional: timeout in seconds for requests
      "transport": "streamable-http" // Ensure this matches the server's transport
    }
    // ... other server configurations
  }
}
```

Make sure the `url` (host and port) matches your Docker port mapping. The default transport in the Dockerfile is `streamable-http`.
---

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Run tests: `uv run pytest`
4. Commit changes: `git commit -m 'Add amazing feature'`
5. Push to branch: `git push origin feature/amazing-feature`
6. Open a Pull Request

---

## 📞 Support

- 📚 Documentation: Check inline code documentation
- 🐛 Issues: [GitHub Issues](https://github.com/your-username/unraid-mcp/issues)
- 💬 Discussions: Use GitHub Discussions for questions

---

*Built with ❤️ for the Unraid community*
400
dev.sh
Executable file
@@ -0,0 +1,400 @@
#!/bin/bash

# Unraid MCP Server Development Script
# Safely manages server processes during development with accurate process detection

set -euo pipefail

# Configuration
DEFAULT_PORT=6970
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LOG_FILE="$PROJECT_DIR/dev.log"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper function for colored output
log() {
    echo -e "${2:-$NC}[$(date +'%H:%M:%S')] $1${NC}"
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}

# Get port from environment or use default
get_port() {
    local port="${UNRAID_MCP_PORT:-$DEFAULT_PORT}"
    echo "$port"
}

# Find processes using multiple detection methods
find_server_processes() {
    local port=$(get_port)
    local pids=()

    # Method 1: Command line pattern matching
    while IFS= read -r line; do
        if [[ -n "$line" ]]; then
            local pid=$(echo "$line" | awk '{print $2}')
            pids+=("$pid")
        fi
    done < <(ps aux | grep -E 'python.*unraid.*mcp|python.*main\.py|uv run.*main\.py|uv run -m unraid_mcp' | grep -v grep | grep -v "$0")

    # Method 2: Port binding verification
    if command -v lsof >/dev/null 2>&1; then
        while IFS= read -r line; do
            if [[ -n "$line" ]]; then
                local pid=$(echo "$line" | awk '{print $2}')
                # Add to pids if not already present
                if [[ ! " ${pids[*]:-} " =~ " $pid " ]]; then
                    pids+=("$pid")
                fi
            fi
        done < <(lsof -i ":$port" 2>/dev/null | grep LISTEN || true)
    fi

    # Method 3: Working directory verification
    local verified_pids=()
    for pid in "${pids[@]}"; do
        # Skip if not a valid PID
        if ! [[ "$pid" =~ ^[0-9]+$ ]]; then
            continue
        fi

        if [[ -d "/proc/$pid" ]]; then
            local pwd_info=""
            if command -v pwdx >/dev/null 2>&1; then
                pwd_info=$(pwdx "$pid" 2>/dev/null | cut -d' ' -f2- || echo "unknown")
            else
                pwd_info=$(readlink -f "/proc/$pid/cwd" 2>/dev/null || echo "unknown")
            fi

            # Verify it's running from our project directory or a parent directory
            if [[ "$pwd_info" == "$PROJECT_DIR"* ]] || [[ "$pwd_info" == *"unraid-mcp"* ]]; then
                verified_pids+=("$pid")
            fi
        fi
    done

    # Output final list
    printf '%s\n' "${verified_pids[@]:-}" | grep -E '^[0-9]+$' || true
}

# Terminate a process gracefully, then forcefully if needed
terminate_process() {
    local pid=$1
    local name=${2:-"process"}

    if ! kill -0 "$pid" 2>/dev/null; then
        log "Process $pid ($name) already terminated" "$YELLOW"
        return 0
    fi

    log "Terminating $name (PID: $pid)..." "$YELLOW"

    # Step 1: Graceful shutdown (SIGTERM)
    log "  → Sending SIGTERM to PID $pid" "$BLUE"
    kill -TERM "$pid" 2>/dev/null || {
        log "  Failed to send SIGTERM (process may have died)" "$YELLOW"
        return 0
    }

    # Step 2: Wait for graceful shutdown (5 seconds)
    local count=0
    while [[ $count -lt 5 ]]; do
        if ! kill -0 "$pid" 2>/dev/null; then
            log "  ✓ Process $pid terminated gracefully" "$GREEN"
            return 0
        fi
        sleep 1
        count=$((count + 1))
        log "  Waiting for graceful shutdown... (${count}/5)" "$BLUE"
    done

    # Step 3: Force kill (SIGKILL)
    log "  → Graceful shutdown timeout, sending SIGKILL to PID $pid" "$RED"
    kill -KILL "$pid" 2>/dev/null || {
        log "  Failed to send SIGKILL (process may have died)" "$YELLOW"
        return 0
    }

    # Step 4: Final verification
    sleep 1
    if kill -0 "$pid" 2>/dev/null; then
        log "  ✗ Failed to terminate process $pid" "$RED"
        return 1
    else
        log "  ✓ Process $pid terminated forcefully" "$GREEN"
        return 0
    fi
}

# Stop all server processes
stop_servers() {
    log "🛑 Stopping existing server processes..." "$RED"

    local pids=($(find_server_processes))

    if [[ ${#pids[@]} -eq 0 ]]; then
        log "No processes to stop" "$GREEN"
        return 0
    fi

    local failed=0
    for pid in "${pids[@]}"; do
        if ! terminate_process "$pid" "Unraid MCP Server"; then
            failed=$((failed + 1))
        fi
    done

    # Wait for ports to be released
    local port=$(get_port)
    log "Waiting for port $port to be released..." "$BLUE"
    local port_wait=0
    while [[ $port_wait -lt 3 ]]; do
        if ! lsof -i ":$port" >/dev/null 2>&1; then
            log "✓ Port $port released" "$GREEN"
            break
        fi
        sleep 1
        port_wait=$((port_wait + 1))
    done

    if [[ $failed -gt 0 ]]; then
        log "⚠️  Failed to stop $failed process(es)" "$RED"
        return 1
    else
        log "✅ All processes stopped successfully" "$GREEN"
        return 0
    fi
}

# Start the new modular server
start_modular_server() {
    log "🚀 Starting modular server..." "$GREEN"

    cd "$PROJECT_DIR"

    # Check if main.py exists in unraid_mcp/
    if [[ ! -f "unraid_mcp/main.py" ]]; then
        log "❌ unraid_mcp/main.py not found. Make sure the modular server is implemented." "$RED"
        return 1
    fi

    # Start server in background using module syntax
    log "  → Executing: uv run -m unraid_mcp.main" "$BLUE"
    nohup uv run -m unraid_mcp.main >> "$LOG_FILE" 2>&1 &
    local pid=$!

    # Give it a moment to start
    sleep 2

    # Check if it's still running
    if kill -0 "$pid" 2>/dev/null; then
        local port=$(get_port)
        log "✅ Modular server started successfully (PID: $pid, Port: $port)" "$GREEN"
        log "📋 Process info: $(ps -p "$pid" -o pid,ppid,cmd --no-headers 2>/dev/null || echo 'Process info unavailable')" "$BLUE"

        # Auto-tail logs after successful start
        echo ""
        log "📄 Following server logs in real-time..." "$GREEN"
        log "Press Ctrl+C to stop following logs (server will continue running)" "$YELLOW"
        echo ""
        echo -e "${GREEN}=== Following Server Logs (Press Ctrl+C to exit) ===${NC}"
        tail -f "$LOG_FILE"

        return 0
    else
        log "❌ Modular server failed to start" "$RED"
        log "📄 Check $LOG_FILE for error details" "$YELLOW"
        return 1
    fi
}

# Start the original server
start_original_server() {
    log "🚀 Starting original server..." "$GREEN"

    cd "$PROJECT_DIR"

    # Check if the original server exists
    if [[ ! -f "unraid_mcp_server.py" ]]; then
        log "❌ unraid_mcp_server.py not found" "$RED"
        return 1
    fi

    # Start server in background
    log "  → Executing: uv run unraid_mcp_server.py" "$BLUE"
    nohup uv run unraid_mcp_server.py >> "$LOG_FILE" 2>&1 &
    local pid=$!

    # Give it a moment to start
    sleep 2

    # Check if it's still running
    if kill -0 "$pid" 2>/dev/null; then
        local port=$(get_port)
        log "✅ Original server started successfully (PID: $pid, Port: $port)" "$GREEN"
        log "📋 Process info: $(ps -p "$pid" -o pid,ppid,cmd --no-headers 2>/dev/null || echo 'Process info unavailable')" "$BLUE"

        # Auto-tail logs after successful start
        echo ""
        log "📄 Following server logs in real-time..." "$GREEN"
        log "Press Ctrl+C to stop following logs (server will continue running)" "$YELLOW"
        echo ""
        echo -e "${GREEN}=== Following Server Logs (Press Ctrl+C to exit) ===${NC}"
        tail -f "$LOG_FILE"

        return 0
    else
        log "❌ Original server failed to start" "$RED"
        log "📄 Check $LOG_FILE for error details" "$YELLOW"
        return 1
    fi
}

# Show usage information
show_usage() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Development script for Unraid MCP Server"
    echo ""
    echo "OPTIONS:"
    echo "  (no args)     Stop existing servers, start modular server, and tail logs"
    echo "  --old         Stop existing servers, start original server, and tail logs"
    echo "  --kill        Stop existing servers only (don't start a new one)"
    echo "  --status      Show status of running servers"
    echo "  --logs [N]    Show last N lines of server logs (default: 50)"
    echo "  --tail        Follow server logs in real-time (without restarting the server)"
    echo "  --help, -h    Show this help message"
    echo ""
    echo "ENVIRONMENT VARIABLES:"
    echo "  UNRAID_MCP_PORT   Port for server (default: $DEFAULT_PORT)"
    echo ""
    echo "EXAMPLES:"
    echo "  ./dev.sh              # Restart with modular server"
    echo "  ./dev.sh --old        # Restart with original server"
    echo "  ./dev.sh --kill       # Stop all servers"
    echo "  ./dev.sh --status     # Check server status"
    echo "  ./dev.sh --logs       # Show last 50 lines of logs"
    echo "  ./dev.sh --logs 100   # Show last 100 lines of logs"
    echo "  ./dev.sh --tail       # Follow logs in real-time"
}

# Show server status
show_status() {
    local port=$(get_port)
    log "🔍 Server Status Check" "$BLUE"
    log "Project Directory: $PROJECT_DIR" "$BLUE"
    log "Expected Port: $port" "$BLUE"
    echo ""

    local pids=($(find_server_processes))

    if [[ ${#pids[@]} -eq 0 ]]; then
        log "Status: No servers running" "$YELLOW"
    else
        log "Status: ${#pids[@]} server(s) running" "$GREEN"
        for pid in "${pids[@]}"; do
            local cmd=$(ps -p "$pid" -o cmd --no-headers 2>/dev/null || echo "Command unavailable")
            log "  PID $pid: $cmd" "$GREEN"
        done
    fi

    # Check port binding
    if command -v lsof >/dev/null 2>&1; then
        local port_info=$(lsof -i ":$port" 2>/dev/null | grep LISTEN || echo "")
        if [[ -n "$port_info" ]]; then
            log "Port $port: BOUND" "$GREEN"
            echo "$port_info" | while IFS= read -r line; do
                log "  $line" "$BLUE"
            done
        else
            log "Port $port: FREE" "$YELLOW"
        fi
    fi
}

# Tail the server logs
tail_logs() {
    local lines="${1:-50}"

    log "📄 Tailing last $lines lines from server logs..." "$BLUE"

    if [[ ! -f "$LOG_FILE" ]]; then
        log "❌ Log file not found: $LOG_FILE" "$RED"
        return 1
    fi

    echo ""
    echo -e "${YELLOW}=== Server Logs (last $lines lines) ===${NC}"
    tail -n "$lines" "$LOG_FILE"
    echo -e "${YELLOW}=== End of Logs ===${NC}"
    echo ""
}

# Follow server logs in real-time
follow_logs() {
    log "📄 Following server logs in real-time..." "$GREEN"
    log "Press Ctrl+C to stop following" "$YELLOW"

    if [[ ! -f "$LOG_FILE" ]]; then
        log "❌ Log file not found: $LOG_FILE" "$RED"
        return 1
    fi

    echo ""
    echo -e "${GREEN}=== Following Server Logs (Press Ctrl+C to exit) ===${NC}"
    tail -f "$LOG_FILE"
}

# Main script logic
main() {
    # Initialize log file
    echo "=== Dev Script Started at $(date) ===" >> "$LOG_FILE"

    case "${1:-}" in
        --help|-h)
            show_usage
            ;;
        --status)
            show_status
            ;;
        --kill)
            stop_servers
            ;;
        --logs)
            tail_logs "${2:-50}"
            ;;
        --tail)
            follow_logs
            ;;
        --old)
            if stop_servers; then
                start_original_server
            else
                log "❌ Failed to stop existing servers" "$RED"
                exit 1
            fi
            ;;
        "")
            if stop_servers; then
                start_modular_server
            else
                log "❌ Failed to stop existing servers" "$RED"
                exit 1
            fi
            ;;
        *)
            log "❌ Unknown option: $1" "$RED"
            show_usage
            exit 1
            ;;
    esac
}

# Run main function with all arguments
main "$@"
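The `terminate_process` escalation in dev.sh (SIGTERM, poll up to five times, then SIGKILL) can be modeled as a small pure function; a minimal Python sketch, where `alive`, `send_term`, and `send_kill` are hypothetical stand-ins for `kill -0`, `kill -TERM`, and `kill -KILL`:

```python
def terminate(alive, send_term, send_kill, wait_steps=5):
    """Escalating shutdown: try SIGTERM first, poll for exit,
    and fall back to SIGKILL only if the process is still alive."""
    send_term()
    for _ in range(wait_steps):
        if not alive():
            return "graceful"
        # dev.sh sleeps 1s between polls; omitted in this sketch
    send_kill()
    # dev.sh does one final kill -0 check after SIGKILL
    return "failed" if alive() else "forced"
```

dev.sh applies this sequence once per PID returned by `find_server_processes`, which is why a hung server still frees the port within roughly six seconds.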
@@ -30,7 +30,7 @@ dependencies = [
     "httpx>=0.28.1",
     "fastapi>=0.116.1",
     "uvicorn>=0.35.0",
-    "websockets>=14.1",
+    "websockets>=13.1,<14.0",
 ]

 [project.optional-dependencies]
@@ -49,10 +49,10 @@ Repository = "https://github.com/your-username/unraid-mcp"
 Issues = "https://github.com/your-username/unraid-mcp/issues"

 [project.scripts]
-unraid-mcp-server = "unraid_mcp_server:main"
+unraid-mcp-server = "unraid_mcp.main:main"

 [tool.hatch.build.targets.wheel]
-only-include = ["unraid_mcp_server.py"]
+only-include = ["unraid_mcp/"]

 [tool.black]
 line-length = 100
7
unraid_mcp/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""Unraid MCP Server Package.

A modular MCP (Model Context Protocol) server that provides tools to interact
with an Unraid server's GraphQL API.
"""

__version__ = "0.1.0"
1
unraid_mcp/config/__init__.py
Normal file
@@ -0,0 +1 @@
"""Configuration management for Unraid MCP Server."""
92
unraid_mcp/config/logging.py
Normal file
@@ -0,0 +1,92 @@
"""Logging configuration for Unraid MCP Server.

This module sets up structured logging with console and rotating file handlers
that can be used consistently across all modules.
"""

import logging
import sys
from logging.handlers import RotatingFileHandler

from .settings import LOG_FILE_PATH, LOG_LEVEL_STR


def setup_logger(name: str = "UnraidMCPServer") -> logging.Logger:
    """Set up and configure the logger with console and file handlers.

    Args:
        name: Logger name (defaults to "UnraidMCPServer")

    Returns:
        Configured logger instance
    """
    # Get numeric log level
    numeric_log_level = getattr(logging, LOG_LEVEL_STR, logging.INFO)

    # Define the logger
    logger = logging.getLogger(name)
    logger.setLevel(numeric_log_level)
    logger.propagate = False  # Prevent the root logger from duplicating handlers

    # Clear any existing handlers
    logger.handlers.clear()

    # Console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(numeric_log_level)
    console_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    console_handler.setFormatter(console_formatter)
    logger.addHandler(console_handler)

    # File handler with rotation: rotate logs at 5 MB, keep 3 backup logs
    file_handler = RotatingFileHandler(
        LOG_FILE_PATH,
        maxBytes=5 * 1024 * 1024,
        backupCount=3,
        encoding='utf-8'
    )
    file_handler.setLevel(numeric_log_level)
    file_formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(module)s - %(funcName)s - %(lineno)d - %(message)s'
    )
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)

    return logger


def log_configuration_status(logger: logging.Logger) -> None:
    """Log configuration status at startup.

    Args:
        logger: Logger instance to use for logging
    """
    from .settings import get_config_summary

    logger.info(f"Logging initialized (console and file: {LOG_FILE_PATH}).")

    config = get_config_summary()

    # Log configuration status
    if config['api_url_configured']:
        logger.info(f"UNRAID_API_URL loaded: {config['api_url_preview']}")
    else:
        logger.warning("UNRAID_API_URL not found in environment or .env file.")

    if config['api_key_configured']:
        logger.info("UNRAID_API_KEY loaded: ****")  # Never log the key itself
    else:
        logger.warning("UNRAID_API_KEY not found in environment or .env file.")

    logger.info(f"UNRAID_MCP_PORT set to: {config['server_port']}")
    logger.info(f"UNRAID_MCP_HOST set to: {config['server_host']}")
    logger.info(f"UNRAID_MCP_TRANSPORT set to: {config['transport']}")
    logger.info(f"UNRAID_MCP_LOG_LEVEL set to: {config['log_level']}")

    if not config['config_valid']:
        logger.error(f"Missing required configuration: {config['missing_config']}")


# Global logger instance - modules can import this directly
logger = setup_logger()
104
unraid_mcp/config/settings.py
Normal file
@@ -0,0 +1,104 @@
"""Configuration management for Unraid MCP Server.

This module handles loading environment variables from multiple .env locations
and provides all configuration constants used throughout the application.
"""

import os
from pathlib import Path
from typing import Union

from dotenv import load_dotenv

# Resolve project paths from this module's location
SCRIPT_DIR = Path(__file__).parent        # .../unraid-mcp/unraid_mcp/config/
UNRAID_MCP_DIR = SCRIPT_DIR.parent        # .../unraid-mcp/unraid_mcp/
PROJECT_ROOT = UNRAID_MCP_DIR.parent      # .../unraid-mcp/

# Load environment variables from the first .env file found.
# In a container, /app/.env.local (mounted) takes precedence over project files.
dotenv_paths = [
    Path('/app/.env.local'),          # Container mount point
    PROJECT_ROOT / '.env.local',      # Project root .env.local
    PROJECT_ROOT / '.env',            # Project root .env
    UNRAID_MCP_DIR / '.env'           # Local .env in unraid_mcp/
]

for dotenv_path in dotenv_paths:
    if dotenv_path.exists():
        load_dotenv(dotenv_path=dotenv_path)
        break

# Core API configuration
UNRAID_API_URL = os.getenv("UNRAID_API_URL")
UNRAID_API_KEY = os.getenv("UNRAID_API_KEY")

# Server configuration
UNRAID_MCP_PORT = int(os.getenv("UNRAID_MCP_PORT", "6970"))
UNRAID_MCP_HOST = os.getenv("UNRAID_MCP_HOST", "0.0.0.0")
UNRAID_MCP_TRANSPORT = os.getenv("UNRAID_MCP_TRANSPORT", "streamable-http").lower()

# SSL configuration: boolean keywords toggle verification; any other value
# is treated as a path to a CA bundle (case preserved, since paths may be
# case-sensitive).
raw_verify_ssl = os.getenv("UNRAID_VERIFY_SSL", "true")
if raw_verify_ssl.lower() in ("false", "0", "no"):
    UNRAID_VERIFY_SSL: Union[bool, str] = False
elif raw_verify_ssl.lower() in ("true", "1", "yes"):
    UNRAID_VERIFY_SSL = True
else:  # Path to a CA bundle
    UNRAID_VERIFY_SSL = raw_verify_ssl

# Logging configuration
LOG_LEVEL_STR = os.getenv('UNRAID_MCP_LOG_LEVEL', 'INFO').upper()
LOG_FILE_NAME = os.getenv("UNRAID_MCP_LOG_FILE", "unraid-mcp.log")
LOGS_DIR = PROJECT_ROOT / "logs"
LOG_FILE_PATH = LOGS_DIR / LOG_FILE_NAME

# Ensure the logs directory exists
LOGS_DIR.mkdir(parents=True, exist_ok=True)

# HTTP client configuration (seconds)
TIMEOUT_CONFIG = {
    'default': 30,
    'disk_operations': 90,  # Longer timeout for SMART data queries
}


def validate_required_config() -> tuple[bool, list[str]]:
    """Validate that required configuration is present.

    Returns:
        A (is_valid, missing) tuple: is_valid is True when all required
        variables are set, and missing lists the names of any that are not.
    """
    required_vars = [
        ("UNRAID_API_URL", UNRAID_API_URL),
        ("UNRAID_API_KEY", UNRAID_API_KEY)
    ]

    missing = [name for name, value in required_vars if not value]
    return len(missing) == 0, missing


def get_config_summary() -> dict:
    """Get a summary of the current configuration (safe for logging).

    Returns:
        dict: Configuration summary with sensitive data redacted.
    """
    is_valid, missing = validate_required_config()

    return {
        'api_url_configured': bool(UNRAID_API_URL),
        'api_url_preview': UNRAID_API_URL[:20] + '...' if UNRAID_API_URL else None,
        'api_key_configured': bool(UNRAID_API_KEY),
        'server_host': UNRAID_MCP_HOST,
        'server_port': UNRAID_MCP_PORT,
        'transport': UNRAID_MCP_TRANSPORT,
        'ssl_verify': UNRAID_VERIFY_SSL,
        'log_level': LOG_LEVEL_STR,
        'log_file': str(LOG_FILE_PATH),
        'config_valid': is_valid,
        'missing_config': missing if not is_valid else None
    }
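The tri-state `UNRAID_VERIFY_SSL` parsing in settings.py lends itself to a pure helper; a standalone sketch of the same rules (boolean keywords matched case-insensitively, anything else passed through as a CA-bundle path):

```python
from typing import Union

def parse_verify_ssl(raw: str) -> Union[bool, str]:
    """Map an UNRAID_VERIFY_SSL value to an httpx-style verify argument:
    False disables verification, True enables it, and any other string
    is passed through unchanged as a CA-bundle path."""
    value = raw.strip()
    if value.lower() in ("false", "0", "no"):
        return False
    if value.lower() in ("true", "1", "yes"):
        return True
    return value  # CA-bundle path; preserve original case
```

This bool-or-path union is exactly what `httpx.AsyncClient(verify=...)` accepts, so the parsed value can be handed to the client unchanged.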
1
unraid_mcp/core/__init__.py
Normal file
@@ -0,0 +1 @@
"""Core infrastructure components for Unraid MCP Server."""
147
unraid_mcp/core/client.py
Normal file
@@ -0,0 +1,147 @@
"""GraphQL client for Unraid API communication.

This module provides the HTTP client interface for making GraphQL requests
to the Unraid API with proper timeout handling and error management.
"""

import json
from typing import Any

import httpx

from ..config.logging import logger
from ..config.settings import TIMEOUT_CONFIG, UNRAID_API_KEY, UNRAID_API_URL, UNRAID_VERIFY_SSL
from ..core.exceptions import ToolError

# HTTP timeout configuration
DEFAULT_TIMEOUT = httpx.Timeout(10.0, read=30.0, connect=5.0)
DISK_TIMEOUT = httpx.Timeout(10.0, read=TIMEOUT_CONFIG['disk_operations'], connect=5.0)


def is_idempotent_error(error_message: str, operation: str) -> bool:
    """Check if a GraphQL error represents an idempotent operation that should be treated as success.

    Args:
        error_message: The error message from the GraphQL API
        operation: The operation being performed (e.g., 'start', 'stop')

    Returns:
        True if this is an idempotent error that should be treated as success
    """
    error_lower = error_message.lower()

    # Docker container operation patterns
    if operation == 'start':
        return (
            'already started' in error_lower or
            'container already running' in error_lower or
            'http code 304' in error_lower
        )
    elif operation == 'stop':
        return (
            'already stopped' in error_lower or
            'container already stopped' in error_lower or
            'container not running' in error_lower or
            'http code 304' in error_lower
        )

    return False


async def make_graphql_request(
    query: str,
    variables: dict[str, Any] | None = None,
    custom_timeout: httpx.Timeout | None = None,
    operation_context: dict[str, str] | None = None
) -> dict[str, Any]:
    """Make a GraphQL request to the Unraid API.

    Args:
        query: GraphQL query string
        variables: Optional query variables
        custom_timeout: Optional custom timeout configuration
        operation_context: Optional context for operation-specific error handling.
            Should contain an 'operation' key (e.g., 'start', 'stop').

    Returns:
        Dict containing the GraphQL response data

    Raises:
        ToolError: For HTTP errors, network errors, or non-idempotent GraphQL errors
    """
    if not UNRAID_API_URL:
        raise ToolError("UNRAID_API_URL not configured")

    if not UNRAID_API_KEY:
        raise ToolError("UNRAID_API_KEY not configured")

    headers = {
        "Content-Type": "application/json",
        "X-API-Key": UNRAID_API_KEY,
        "User-Agent": "UnraidMCPServer/0.1.0"  # Custom user agent
    }

    payload: dict[str, Any] = {"query": query}
    if variables:
        payload["variables"] = variables

    logger.debug(f"Making GraphQL request to {UNRAID_API_URL}:")
    logger.debug(f"Query: {query[:200]}{'...' if len(query) > 200 else ''}")  # Log truncated query
    if variables:
        logger.debug(f"Variables: {variables}")

    current_timeout = custom_timeout if custom_timeout is not None else DEFAULT_TIMEOUT

    try:
        async with httpx.AsyncClient(timeout=current_timeout, verify=UNRAID_VERIFY_SSL) as client:
            response = await client.post(UNRAID_API_URL, json=payload, headers=headers)
            response.raise_for_status()  # Raise an exception for HTTP 4xx/5xx codes

            response_data = response.json()
            if "errors" in response_data and response_data["errors"]:
                error_details = "; ".join([err.get("message", str(err)) for err in response_data["errors"]])

                # Check if this is an idempotent error that should be treated as success
                if operation_context and operation_context.get('operation'):
                    operation = operation_context['operation']
                    if is_idempotent_error(error_details, operation):
                        logger.warning(f"Idempotent operation '{operation}' - treating as success: {error_details}")
                        # Return a success response with the current state information
                        return {
                            "idempotent_success": True,
                            "operation": operation,
                            "message": error_details,
                            "original_errors": response_data["errors"]
                        }

                logger.error(f"GraphQL API returned errors: {response_data['errors']}")
                # Use ToolError for GraphQL errors to provide better feedback to the LLM
                raise ToolError(f"GraphQL API error: {error_details}")

            logger.debug("GraphQL request successful.")
            return response_data.get("data", {})  # Return only the data part

    except httpx.HTTPStatusError as e:
        logger.error(f"HTTP error occurred: {e.response.status_code} - {e.response.text}")
        raise ToolError(f"HTTP error {e.response.status_code}: {e.response.text}")
    except httpx.RequestError as e:
        logger.error(f"Request error occurred: {e}")
        raise ToolError(f"Network connection error: {str(e)}")
    except json.JSONDecodeError as e:
        logger.error(f"Failed to decode JSON response: {e}")
        raise ToolError(f"Invalid JSON response from Unraid API: {str(e)}")


def get_timeout_for_operation(operation_type: str = "default") -> httpx.Timeout:
    """Get the appropriate timeout configuration for different operation types.

    Args:
        operation_type: Type of operation ('default', 'disk_operations')

    Returns:
        httpx.Timeout configuration appropriate for the operation
    """
    if operation_type == "disk_operations":
        return DISK_TIMEOUT
    return DEFAULT_TIMEOUT
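The substring matching in `is_idempotent_error` is easy to exercise in isolation; a condensed standalone version with the same patterns, table-driven instead of branch-per-operation:

```python
# Messages that mean "the container is already in the requested state"
IDEMPOTENT_PATTERNS = {
    "start": ("already started", "container already running", "http code 304"),
    "stop": ("already stopped", "container already stopped",
             "container not running", "http code 304"),
}

def is_idempotent_error(error_message: str, operation: str) -> bool:
    """True when the API error just means the container is already in the
    requested state, so the caller can report success instead of failing."""
    error_lower = error_message.lower()
    return any(p in error_lower for p in IDEMPOTENT_PATTERNS.get(operation, ()))
```

Docker's HTTP 304 ("not modified") on a repeated start or stop is the common case this absorbs; unknown operations match nothing and fall through to the normal error path.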
48
unraid_mcp/core/exceptions.py
Normal file
@@ -0,0 +1,48 @@
"""Custom exceptions for Unraid MCP Server.

This module defines custom exception classes for consistent error handling
throughout the application, with proper integration into FastMCP's error system.
"""

from fastmcp.exceptions import ToolError as FastMCPToolError


class ToolError(FastMCPToolError):
    """User-facing error that MCP clients can handle.

    This is the main exception type used throughout the application for
    errors that should be presented to the user/LLM in a friendly way.

    Inherits from FastMCP's ToolError to ensure proper MCP protocol handling.
    """
    pass


class ConfigurationError(ToolError):
    """Raised when there are configuration-related errors."""
    pass


class UnraidAPIError(ToolError):
    """Raised when the Unraid API returns an error or is unreachable."""
    pass


class SubscriptionError(ToolError):
    """Raised when there are WebSocket subscription-related errors."""
    pass


class ValidationError(ToolError):
    """Raised when input validation fails."""
    pass


class IdempotentOperationError(ToolError):
    """Raised when an operation is idempotent (already in the desired state).

    This is used internally to signal that an operation was already complete,
    which should typically be converted to a success response rather than
    propagated as an error to the user.
    """
    pass
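Because every class above derives from `ToolError`, one `except` clause can catch all of them while callers still branch on the specific type; a standalone sketch (plain `Exception` stands in for `fastmcp.exceptions.ToolError` here, and `fetch_status` is a hypothetical caller):

```python
class ToolError(Exception):  # stand-in for fastmcp.exceptions.ToolError
    """User-facing error that MCP clients can handle."""

class ConfigurationError(ToolError):
    """Raised when there are configuration-related errors."""

class UnraidAPIError(ToolError):
    """Raised when the Unraid API returns an error or is unreachable."""

def fetch_status(api_url):
    # Hypothetical caller: missing config and API failures surface
    # as distinct subclasses of the same user-facing base.
    if not api_url:
        raise ConfigurationError("UNRAID_API_URL not configured")
    raise UnraidAPIError(f"Unraid API unreachable at {api_url}")

def describe_failure(api_url):
    try:
        fetch_status(api_url)
    except ToolError as exc:  # one handler covers every subclass
        return type(exc).__name__
```

This is the reason the hierarchy bottoms out in FastMCP's own `ToolError`: the framework can surface any of these to the MCP client without tool code enumerating each subclass.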
43
unraid_mcp/core/types.py
Normal file
@@ -0,0 +1,43 @@
"""Shared data types for Unraid MCP Server.

This module defines data classes and type definitions used across
multiple modules for consistent data handling.
"""

from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict, Optional, Union


@dataclass
class SubscriptionData:
    """Container for subscription data with metadata."""
    data: Dict[str, Any]
    last_updated: datetime
    subscription_type: str


@dataclass
class SystemHealth:
    """Container for system health status information."""
    is_healthy: bool
    issues: list[str]
    warnings: list[str]
    last_checked: datetime
    component_status: Dict[str, str]


@dataclass
class APIResponse:
    """Container for standardized API response data."""
    success: bool
    data: Optional[Dict[str, Any]] = None
    error: Optional[str] = None
    metadata: Optional[Dict[str, Any]] = None


# Type aliases for common data structures
ConfigValue = Union[str, int, bool, float, None]
ConfigDict = Dict[str, ConfigValue]
GraphQLVariables = Dict[str, Any]
HealthStatus = Dict[str, Union[str, bool, int, list]]
22
unraid_mcp/main.py
Normal file
@@ -0,0 +1,22 @@
#!/usr/bin/env python3
"""Unraid MCP Server - entry point.

This is the main entry point for the Unraid MCP Server. It imports and starts
the modular server implementation from unraid_mcp.server.
"""


def main():
    """Main entry point for the Unraid MCP Server."""
    try:
        from .server import run_server
        run_server()
    except KeyboardInterrupt:
        print("\nServer stopped by user")
    except Exception as e:
        print(f"Server failed to start: {e}")
        raise


if __name__ == "__main__":
    main()
141 unraid_mcp/server.py Normal file
@@ -0,0 +1,141 @@
"""Modular Unraid MCP Server.

This is the main server implementation using the modular architecture with
separate modules for configuration, core functionality, subscriptions, and tools.
"""

import sys

from fastmcp import FastMCP

from .config.logging import logger
from .config.settings import (
    UNRAID_API_KEY,
    UNRAID_API_URL,
    UNRAID_MCP_HOST,
    UNRAID_MCP_PORT,
    UNRAID_MCP_TRANSPORT,
)
from .subscriptions.diagnostics import register_diagnostic_tools
from .subscriptions.manager import SubscriptionManager
from .subscriptions.resources import register_subscription_resources
from .tools.docker import register_docker_tools
from .tools.health import register_health_tools
from .tools.rclone import register_rclone_tools
from .tools.storage import register_storage_tools
from .tools.system import register_system_tools
from .tools.virtualization import register_vm_tools

# Initialize FastMCP instance
mcp = FastMCP(
    name="Unraid MCP Server",
    instructions="Provides tools to interact with an Unraid server's GraphQL API.",
    version="0.1.0",
)

# Initialize subscription manager
subscription_manager = SubscriptionManager()


async def autostart_subscriptions():
    """Auto-start all subscriptions marked for auto-start in SubscriptionManager."""
    logger.info("[AUTOSTART] Initiating subscription auto-start process...")

    try:
        # Use the SubscriptionManager auto-start method
        await subscription_manager.auto_start_all_subscriptions()
        logger.info("[AUTOSTART] Auto-start process completed successfully")
    except Exception as e:
        logger.error(f"[AUTOSTART] Failed during auto-start process: {e}", exc_info=True)


def register_all_modules():
    """Register all tools and resources with the MCP instance."""
    try:
        # Register subscription resources first
        register_subscription_resources(mcp)
        logger.info("📊 Subscription resources registered")

        # Register diagnostic tools
        register_diagnostic_tools(mcp)
        logger.info("🔧 Diagnostic tools registered")

        # Register all tool categories
        register_system_tools(mcp)
        logger.info("🖥️ System tools registered")

        register_docker_tools(mcp)
        logger.info("🐳 Docker tools registered")

        register_vm_tools(mcp)
        logger.info("💻 Virtualization tools registered")

        register_storage_tools(mcp)
        logger.info("💾 Storage tools registered")

        register_health_tools(mcp)
        logger.info("🏥 Health tools registered")

        register_rclone_tools(mcp)
        logger.info("☁️ RClone tools registered")

        logger.info("🎯 All modules registered successfully - Server ready!")

    except Exception as e:
        logger.error(f"❌ Failed to register modules: {e}", exc_info=True)
        raise


def run_server():
    """Run the MCP server with the configured transport."""
    # Log configuration
    if UNRAID_API_URL:
        logger.info(f"UNRAID_API_URL loaded: {UNRAID_API_URL[:20]}...")
    else:
        logger.warning("UNRAID_API_URL not found in environment or .env file.")

    if UNRAID_API_KEY:
        logger.info("UNRAID_API_KEY loaded: ****")
    else:
        logger.warning("UNRAID_API_KEY not found in environment or .env file.")

    logger.info(f"UNRAID_MCP_PORT set to: {UNRAID_MCP_PORT}")
    logger.info(f"UNRAID_MCP_HOST set to: {UNRAID_MCP_HOST}")
    logger.info(f"UNRAID_MCP_TRANSPORT set to: {UNRAID_MCP_TRANSPORT}")

    # Register all modules
    register_all_modules()

    logger.info(f"🚀 Starting Unraid MCP Server on {UNRAID_MCP_HOST}:{UNRAID_MCP_PORT} using {UNRAID_MCP_TRANSPORT} transport...")

    try:
        # Auto-start subscriptions on first async operation
        if UNRAID_MCP_TRANSPORT == "streamable-http":
            # Use the recommended Streamable HTTP transport
            mcp.run(
                transport="streamable-http",
                host=UNRAID_MCP_HOST,
                port=UNRAID_MCP_PORT,
                path="/mcp",  # Standard path for MCP
            )
        elif UNRAID_MCP_TRANSPORT == "sse":
            # Deprecated SSE transport - log warning
            logger.warning("SSE transport is deprecated and may be removed in a future version. Consider switching to 'streamable-http'.")
            mcp.run(
                transport="sse",
                host=UNRAID_MCP_HOST,
                port=UNRAID_MCP_PORT,
                path="/mcp",  # Keep custom path for SSE
            )
        elif UNRAID_MCP_TRANSPORT == "stdio":
            mcp.run()  # Defaults to stdio
        else:
            logger.error(f"Unsupported UNRAID_MCP_TRANSPORT: {UNRAID_MCP_TRANSPORT}. Choose 'streamable-http' (recommended), 'sse' (deprecated), or 'stdio'.")
            sys.exit(1)
    except Exception as e:
        logger.critical(f"❌ Failed to start Unraid MCP server: {e}", exc_info=True)
        sys.exit(1)


if __name__ == "__main__":
    run_server()
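The transport dispatch in `run_server()` can be sketched as a small pure function, which is easier to unit-test than the branching around `mcp.run()`. The `transport_kwargs` helper is hypothetical, not part of the diff:

```python
from typing import Any, Dict


def transport_kwargs(transport: str, host: str, port: int) -> Dict[str, Any]:
    """Map an UNRAID_MCP_TRANSPORT setting to mcp.run() keyword arguments.

    Hypothetical helper mirroring the branching in run_server() above.
    """
    if transport in ("streamable-http", "sse"):
        # Both HTTP-based transports bind host/port and serve under /mcp.
        return {"transport": transport, "host": host, "port": port, "path": "/mcp"}
    if transport == "stdio":
        # mcp.run() with no arguments defaults to stdio.
        return {}
    raise ValueError(f"Unsupported transport: {transport}")


print(transport_kwargs("streamable-http", "0.0.0.0", 6970))
```

With this shape, `run_server()` could reduce to `mcp.run(**transport_kwargs(...))` plus the deprecation warning for SSE.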
1 unraid_mcp/subscriptions/__init__.py Normal file
@@ -0,0 +1 @@
"""WebSocket subscription system for real-time Unraid data."""
206 unraid_mcp/subscriptions/diagnostics.py Normal file
@@ -0,0 +1,206 @@
"""Subscription system troubleshooting and monitoring.

This module provides diagnostic tools for WebSocket connection testing,
subscription system monitoring, and detailed status reporting for
development and debugging purposes.
"""

import asyncio
import json
from datetime import datetime
from typing import Any, Dict

import websockets
from fastmcp import FastMCP

from ..config.logging import logger
from ..config.settings import UNRAID_API_URL, UNRAID_API_KEY, UNRAID_VERIFY_SSL
from ..core.exceptions import ToolError
from .manager import subscription_manager
from .resources import ensure_subscriptions_started


def register_diagnostic_tools(mcp: FastMCP):
    """Register diagnostic tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def test_subscription_query(subscription_query: str) -> Dict[str, Any]:
        """Test a GraphQL subscription query directly to debug schema issues.

        Use this to find working subscription field names and structure.

        Args:
            subscription_query: The GraphQL subscription query to test

        Returns:
            Dict containing test results and response data
        """
        try:
            logger.info(f"[TEST_SUBSCRIPTION] Testing query: {subscription_query}")

            # Build WebSocket URL (avoid appending a second /graphql suffix
            # when UNRAID_API_URL already ends with it)
            ws_url = UNRAID_API_URL.replace("https://", "wss://").replace("http://", "ws://")
            if not ws_url.endswith("/graphql"):
                ws_url = ws_url.rstrip("/") + "/graphql"

            # Test connection
            async with websockets.connect(
                ws_url,
                subprotocols=["graphql-transport-ws", "graphql-ws"],
                ssl=UNRAID_VERIFY_SSL,
                ping_interval=30,
                ping_timeout=10,
            ) as websocket:

                # Send connection init
                await websocket.send(json.dumps({
                    "type": "connection_init",
                    "payload": {"Authorization": f"Bearer {UNRAID_API_KEY}"}
                }))

                # Wait for ack
                response = await websocket.recv()
                init_response = json.loads(response)

                if init_response.get("type") != "connection_ack":
                    return {"error": f"Connection failed: {init_response}"}

                # Send subscription
                await websocket.send(json.dumps({
                    "id": "test",
                    "type": "start",
                    "payload": {"query": subscription_query}
                }))

                # Wait for response with timeout
                try:
                    response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
                    result = json.loads(response)

                    logger.info(f"[TEST_SUBSCRIPTION] Response: {result}")
                    return {
                        "success": True,
                        "response": result,
                        "query_tested": subscription_query,
                    }

                except asyncio.TimeoutError:
                    return {
                        "success": True,
                        "response": "No immediate response (subscriptions may only send data on changes)",
                        "query_tested": subscription_query,
                        "note": "Connection successful, subscription may be waiting for events",
                    }

        except Exception as e:
            logger.error(f"[TEST_SUBSCRIPTION] Error: {e}", exc_info=True)
            return {
                "error": str(e),
                "query_tested": subscription_query,
            }

    @mcp.tool()
    async def diagnose_subscriptions() -> Dict[str, Any]:
        """Comprehensive diagnostic tool for the subscription system.

        Shows detailed status, connection states, errors, and troubleshooting info.

        Returns:
            Dict containing comprehensive subscription system diagnostics
        """
        # Ensure subscriptions are started before diagnosing
        await ensure_subscriptions_started()

        try:
            logger.info("[DIAGNOSTIC] Running subscription diagnostics...")

            # Get comprehensive status
            status = subscription_manager.get_subscription_status()

            # Add environment info
            diagnostic_info = {
                "timestamp": datetime.now().isoformat(),
                "environment": {
                    "auto_start_enabled": subscription_manager.auto_start_enabled,
                    "max_reconnect_attempts": subscription_manager.max_reconnect_attempts,
                    "unraid_api_url": UNRAID_API_URL[:50] + "..." if UNRAID_API_URL else None,
                    "api_key_configured": bool(UNRAID_API_KEY),
                    "websocket_url": None,
                },
                "subscriptions": status,
                "summary": {
                    "total_configured": len(subscription_manager.subscription_configs),
                    "auto_start_count": sum(1 for s in subscription_manager.subscription_configs.values() if s.get("auto_start")),
                    "active_count": len(subscription_manager.active_subscriptions),
                    "with_data": len(subscription_manager.resource_data),
                    "in_error_state": 0,
                    "connection_issues": [],
                },
            }

            # Calculate WebSocket URL
            if UNRAID_API_URL:
                if UNRAID_API_URL.startswith('https://'):
                    ws_url = 'wss://' + UNRAID_API_URL[len('https://'):]
                elif UNRAID_API_URL.startswith('http://'):
                    ws_url = 'ws://' + UNRAID_API_URL[len('http://'):]
                else:
                    ws_url = UNRAID_API_URL
                if not ws_url.endswith('/graphql'):
                    ws_url = ws_url.rstrip('/') + '/graphql'
                diagnostic_info["environment"]["websocket_url"] = ws_url

            # Analyze issues
            for sub_name, sub_status in status.items():
                runtime = sub_status.get("runtime", {})
                connection_state = runtime.get("connection_state", "unknown")

                if connection_state in ["error", "auth_failed", "timeout", "max_retries_exceeded"]:
                    diagnostic_info["summary"]["in_error_state"] += 1

                if runtime.get("last_error"):
                    diagnostic_info["summary"]["connection_issues"].append({
                        "subscription": sub_name,
                        "state": connection_state,
                        "error": runtime["last_error"],
                    })

            # Add troubleshooting recommendations
            recommendations = []

            if not diagnostic_info["environment"]["api_key_configured"]:
                recommendations.append("CRITICAL: No API key configured. Set UNRAID_API_KEY environment variable.")

            if diagnostic_info["summary"]["in_error_state"] > 0:
                recommendations.append("Some subscriptions are in error state. Check 'connection_issues' for details.")

            if diagnostic_info["summary"]["with_data"] == 0:
                recommendations.append("No subscriptions have received data yet. Check WebSocket connectivity and authentication.")

            if diagnostic_info["summary"]["active_count"] < diagnostic_info["summary"]["auto_start_count"]:
                recommendations.append("Not all auto-start subscriptions are active. Check server startup logs.")

            diagnostic_info["troubleshooting"] = {
                "recommendations": recommendations,
                "log_commands": [
                    "Check server logs for [WEBSOCKET:*], [AUTH:*], [SUBSCRIPTION:*] prefixed messages",
                    "Look for connection timeout or authentication errors",
                    "Verify Unraid API URL is accessible and supports GraphQL subscriptions",
                ],
                "next_steps": [
                    "If authentication fails: Verify API key has correct permissions",
                    "If connection fails: Check network connectivity to Unraid server",
                    "If no data received: Enable DEBUG logging to see detailed protocol messages",
                ],
            }

            logger.info(f"[DIAGNOSTIC] Completed. Active: {diagnostic_info['summary']['active_count']}, With data: {diagnostic_info['summary']['with_data']}, Errors: {diagnostic_info['summary']['in_error_state']}")
            return diagnostic_info

        except Exception as e:
            logger.error(f"[DIAGNOSTIC] Failed to generate diagnostics: {e}")
            raise ToolError(f"Failed to generate diagnostics: {str(e)}")

    logger.info("Subscription diagnostic tools registered successfully")
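The HTTP-to-WebSocket URL derivation appears in both the diagnostic tool and the subscription manager; it can be isolated as a standalone pure function. This is a sketch of that shared logic, not a function that exists in the diff:

```python
def to_ws_url(api_url: str) -> str:
    """Derive the GraphQL WebSocket endpoint from the HTTP API URL.

    https:// becomes wss://, http:// becomes ws://, and the /graphql
    suffix is appended only if not already present.
    """
    if api_url.startswith("https://"):
        ws_url = "wss://" + api_url[len("https://"):]
    elif api_url.startswith("http://"):
        ws_url = "ws://" + api_url[len("http://"):]
    else:
        ws_url = api_url
    if not ws_url.endswith("/graphql"):
        ws_url = ws_url.rstrip("/") + "/graphql"
    return ws_url


print(to_ws_url("https://tower.local/graphql"))  # wss://tower.local/graphql
print(to_ws_url("http://tower.local"))           # ws://tower.local/graphql
```

Hoisting this into a shared utility would keep the two call sites from drifting apart.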
392 unraid_mcp/subscriptions/manager.py Normal file
@@ -0,0 +1,392 @@
"""WebSocket subscription manager for real-time Unraid data.

This module manages GraphQL subscriptions over WebSocket connections,
providing real-time data streaming for MCP resources with comprehensive
error handling, reconnection logic, and authentication.
"""

import asyncio
import json
import os
from datetime import datetime
from typing import Any, Dict, List, Optional

import websockets

from ..config.logging import logger
from ..config.settings import UNRAID_API_URL, UNRAID_API_KEY
from ..core.types import SubscriptionData


class SubscriptionManager:
    """Manages GraphQL subscriptions and converts them to MCP resources."""

    def __init__(self):
        self.active_subscriptions: Dict[str, asyncio.Task] = {}
        self.resource_data: Dict[str, SubscriptionData] = {}
        self.websocket: Optional[websockets.WebSocketClientProtocol] = None
        self.subscription_lock = asyncio.Lock()

        # Configuration
        self.auto_start_enabled = os.getenv("UNRAID_AUTO_START_SUBSCRIPTIONS", "true").lower() == "true"
        self.reconnect_attempts: Dict[str, int] = {}
        self.max_reconnect_attempts = int(os.getenv("UNRAID_MAX_RECONNECT_ATTEMPTS", "10"))
        self.connection_states: Dict[str, str] = {}  # Track connection state per subscription
        self.last_error: Dict[str, str] = {}  # Track last error per subscription

        # Define subscription configurations
        self.subscription_configs = {
            "logFileSubscription": {
                "query": """
                    subscription LogFileSubscription($path: String!) {
                        logFile(path: $path) {
                            path
                            content
                            totalLines
                        }
                    }
                """,
                "resource": "unraid://logs/stream",
                "description": "Real-time log file streaming",
                "auto_start": False,  # Started manually with path parameter
            }
        }

        logger.info(f"[SUBSCRIPTION_MANAGER] Initialized with auto_start={self.auto_start_enabled}, max_reconnects={self.max_reconnect_attempts}")
        logger.debug(f"[SUBSCRIPTION_MANAGER] Available subscriptions: {list(self.subscription_configs.keys())}")

    async def auto_start_all_subscriptions(self):
        """Auto-start all subscriptions marked for auto-start."""
        if not self.auto_start_enabled:
            logger.info("[SUBSCRIPTION_MANAGER] Auto-start disabled")
            return

        logger.info("[SUBSCRIPTION_MANAGER] Starting auto-start process...")
        auto_start_count = 0

        for subscription_name, config in self.subscription_configs.items():
            if config.get("auto_start", False):
                try:
                    logger.info(f"[SUBSCRIPTION_MANAGER] Auto-starting subscription: {subscription_name}")
                    await self.start_subscription(subscription_name, config["query"])
                    auto_start_count += 1
                except Exception as e:
                    logger.error(f"[SUBSCRIPTION_MANAGER] Failed to auto-start {subscription_name}: {e}")
                    self.last_error[subscription_name] = str(e)

        logger.info(f"[SUBSCRIPTION_MANAGER] Auto-start completed. Started {auto_start_count} subscriptions")

    async def start_subscription(self, subscription_name: str, query: str, variables: Optional[Dict[str, Any]] = None):
        """Start a GraphQL subscription and maintain it as a resource."""
        logger.info(f"[SUBSCRIPTION:{subscription_name}] Starting subscription...")

        if subscription_name in self.active_subscriptions:
            logger.warning(f"[SUBSCRIPTION:{subscription_name}] Subscription already active, skipping")
            return

        # Reset connection tracking
        self.reconnect_attempts[subscription_name] = 0
        self.connection_states[subscription_name] = "starting"

        async with self.subscription_lock:
            try:
                task = asyncio.create_task(self._subscription_loop(subscription_name, query, variables or {}))
                self.active_subscriptions[subscription_name] = task
                logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription task created and started")
                self.connection_states[subscription_name] = "active"
            except Exception as e:
                logger.error(f"[SUBSCRIPTION:{subscription_name}] Failed to start subscription task: {e}")
                self.connection_states[subscription_name] = "failed"
                self.last_error[subscription_name] = str(e)
                raise

    async def stop_subscription(self, subscription_name: str):
        """Stop a specific subscription."""
        logger.info(f"[SUBSCRIPTION:{subscription_name}] Stopping subscription...")

        async with self.subscription_lock:
            if subscription_name in self.active_subscriptions:
                task = self.active_subscriptions[subscription_name]
                task.cancel()
                try:
                    await task
                except asyncio.CancelledError:
                    logger.debug(f"[SUBSCRIPTION:{subscription_name}] Task cancelled successfully")
                del self.active_subscriptions[subscription_name]
                self.connection_states[subscription_name] = "stopped"
                logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription stopped")
            else:
                logger.warning(f"[SUBSCRIPTION:{subscription_name}] No active subscription to stop")

    async def _subscription_loop(self, subscription_name: str, query: str, variables: Dict[str, Any]):
        """Main loop for maintaining a GraphQL subscription with comprehensive logging."""
        retry_delay = 5
        max_retry_delay = 300  # 5 minutes max

        while True:
            attempt = self.reconnect_attempts.get(subscription_name, 0) + 1
            self.reconnect_attempts[subscription_name] = attempt

            logger.info(f"[WEBSOCKET:{subscription_name}] Connection attempt #{attempt} (max: {self.max_reconnect_attempts})")

            if attempt > self.max_reconnect_attempts:
                logger.error(f"[WEBSOCKET:{subscription_name}] Max reconnection attempts ({self.max_reconnect_attempts}) exceeded, stopping")
                self.connection_states[subscription_name] = "max_retries_exceeded"
                break

            try:
                # Build WebSocket URL with detailed logging
                if UNRAID_API_URL.startswith('https://'):
                    ws_url = 'wss://' + UNRAID_API_URL[len('https://'):]
                elif UNRAID_API_URL.startswith('http://'):
                    ws_url = 'ws://' + UNRAID_API_URL[len('http://'):]
                else:
                    ws_url = UNRAID_API_URL

                if not ws_url.endswith('/graphql'):
                    ws_url = ws_url.rstrip('/') + '/graphql'

                logger.debug(f"[WEBSOCKET:{subscription_name}] Connecting to: {ws_url}")
                logger.debug(f"[WEBSOCKET:{subscription_name}] API Key present: {'Yes' if UNRAID_API_KEY else 'No'}")

                # Connection with timeout
                connect_timeout = 10
                logger.debug(f"[WEBSOCKET:{subscription_name}] Connection timeout: {connect_timeout}s")

                async with websockets.connect(
                    ws_url,
                    subprotocols=["graphql-transport-ws", "graphql-ws"],
                    ping_interval=20,
                    ping_timeout=10,
                    close_timeout=10,
                ) as websocket:

                    selected_proto = websocket.subprotocol or "none"
                    logger.info(f"[WEBSOCKET:{subscription_name}] Connected! Protocol: {selected_proto}")
                    self.connection_states[subscription_name] = "connected"

                    # Reset retry count on successful connection
                    self.reconnect_attempts[subscription_name] = 0
                    retry_delay = 5  # Reset delay

                    # Initialize GraphQL-WS protocol
                    logger.debug(f"[PROTOCOL:{subscription_name}] Initializing GraphQL-WS protocol...")
                    init_type = "connection_init"
                    init_payload: Dict[str, Any] = {"type": init_type}

                    if UNRAID_API_KEY:
                        logger.debug(f"[AUTH:{subscription_name}] Adding authentication payload")
                        auth_payload = {
                            "X-API-Key": UNRAID_API_KEY,
                            "x-api-key": UNRAID_API_KEY,
                            "authorization": f"Bearer {UNRAID_API_KEY}",
                            "Authorization": f"Bearer {UNRAID_API_KEY}",
                            "headers": {
                                "X-API-Key": UNRAID_API_KEY,
                                "x-api-key": UNRAID_API_KEY,
                                "Authorization": f"Bearer {UNRAID_API_KEY}",
                            },
                        }
                        init_payload["payload"] = auth_payload
                    else:
                        logger.warning(f"[AUTH:{subscription_name}] No API key available for authentication")

                    logger.debug(f"[PROTOCOL:{subscription_name}] Sending connection_init message")
                    await websocket.send(json.dumps(init_payload))

                    # Wait for connection acknowledgment
                    logger.debug(f"[PROTOCOL:{subscription_name}] Waiting for connection_ack...")
                    init_raw = await asyncio.wait_for(websocket.recv(), timeout=30)

                    try:
                        init_data = json.loads(init_raw)
                        logger.debug(f"[PROTOCOL:{subscription_name}] Received init response: {init_data.get('type')}")
                    except json.JSONDecodeError as e:
                        logger.error(f"[PROTOCOL:{subscription_name}] Failed to decode init response: {init_raw[:200]}...")
                        self.last_error[subscription_name] = f"Invalid JSON in init response: {e}"
                        break

                    # Handle connection acknowledgment
                    if init_data.get("type") == "connection_ack":
                        logger.info(f"[PROTOCOL:{subscription_name}] Connection acknowledged successfully")
                        self.connection_states[subscription_name] = "authenticated"
                    elif init_data.get("type") == "connection_error":
                        error_payload = init_data.get('payload', {})
                        logger.error(f"[AUTH:{subscription_name}] Authentication failed: {error_payload}")
                        self.last_error[subscription_name] = f"Authentication error: {error_payload}"
                        self.connection_states[subscription_name] = "auth_failed"
                        break
                    else:
                        logger.warning(f"[PROTOCOL:{subscription_name}] Unexpected init response: {init_data}")
                        # Continue anyway - some servers send other messages first

                    # Start the subscription
                    logger.debug(f"[SUBSCRIPTION:{subscription_name}] Starting GraphQL subscription...")
                    start_type = "subscribe" if selected_proto == "graphql-transport-ws" else "start"
                    subscription_message = {
                        "id": subscription_name,
                        "type": start_type,
                        "payload": {
                            "query": query,
                            "variables": variables,
                        },
                    }

                    logger.debug(f"[SUBSCRIPTION:{subscription_name}] Subscription message type: {start_type}")
                    logger.debug(f"[SUBSCRIPTION:{subscription_name}] Query: {query[:100]}...")
                    logger.debug(f"[SUBSCRIPTION:{subscription_name}] Variables: {variables}")

                    await websocket.send(json.dumps(subscription_message))
                    logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription started successfully")
                    self.connection_states[subscription_name] = "subscribed"

                    # Listen for subscription data
                    message_count = 0
                    last_data_time = datetime.now()

                    async for message in websocket:
                        try:
                            data = json.loads(message)
                            message_count += 1
                            message_type = data.get('type', 'unknown')

                            logger.debug(f"[DATA:{subscription_name}] Message #{message_count}: {message_type}")

                            # Handle different message types
                            expected_data_type = "next" if selected_proto == "graphql-transport-ws" else "data"

                            if data.get("type") == expected_data_type and data.get("id") == subscription_name:
                                payload = data.get("payload", {})

                                if payload.get("data"):
                                    logger.info(f"[DATA:{subscription_name}] Received subscription data update")
                                    self.resource_data[subscription_name] = SubscriptionData(
                                        data=payload["data"],
                                        last_updated=datetime.now(),
                                        subscription_type=subscription_name,
                                    )
                                    last_data_time = datetime.now()
                                    logger.debug(f"[RESOURCE:{subscription_name}] Resource data updated successfully")
                                elif payload.get("errors"):
                                    logger.error(f"[DATA:{subscription_name}] GraphQL errors in response: {payload['errors']}")
                                    self.last_error[subscription_name] = f"GraphQL errors: {payload['errors']}"
                                else:
                                    logger.warning(f"[DATA:{subscription_name}] Empty or invalid data payload: {payload}")

                            elif data.get("type") == "ping":
                                logger.debug(f"[PROTOCOL:{subscription_name}] Received ping, sending pong")
                                await websocket.send(json.dumps({"type": "pong"}))

                            elif data.get("type") == "error":
                                error_payload = data.get('payload', {})
                                logger.error(f"[SUBSCRIPTION:{subscription_name}] Subscription error: {error_payload}")
                                self.last_error[subscription_name] = f"Subscription error: {error_payload}"
                                self.connection_states[subscription_name] = "error"

                            elif data.get("type") == "complete":
                                logger.info(f"[SUBSCRIPTION:{subscription_name}] Subscription completed by server")
                                self.connection_states[subscription_name] = "completed"
                                break

                            elif data.get("type") in ["ka", "ping", "pong"]:
                                logger.debug(f"[PROTOCOL:{subscription_name}] Keepalive message: {message_type}")

                            else:
                                logger.debug(f"[PROTOCOL:{subscription_name}] Unhandled message type: {message_type}")

                        except json.JSONDecodeError as e:
                            logger.error(f"[PROTOCOL:{subscription_name}] Failed to decode message: {message[:200]}...")
                            logger.error(f"[PROTOCOL:{subscription_name}] JSON decode error: {e}")
                        except Exception as e:
                            logger.error(f"[DATA:{subscription_name}] Error processing message: {e}")
                            logger.debug(f"[DATA:{subscription_name}] Raw message: {message[:200]}...")

            except asyncio.TimeoutError:
                error_msg = "Connection or authentication timeout"
                logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
                self.last_error[subscription_name] = error_msg
                self.connection_states[subscription_name] = "timeout"

            except websockets.exceptions.ConnectionClosed as e:
                error_msg = f"WebSocket connection closed: {e}"
                logger.warning(f"[WEBSOCKET:{subscription_name}] {error_msg}")
                self.last_error[subscription_name] = error_msg
                self.connection_states[subscription_name] = "disconnected"

            except websockets.exceptions.InvalidURI as e:
                error_msg = f"Invalid WebSocket URI: {e}"
                logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
                self.last_error[subscription_name] = error_msg
                self.connection_states[subscription_name] = "invalid_uri"
                break  # Don't retry on invalid URI

            except Exception as e:
                error_msg = f"Unexpected error: {e}"
                logger.error(f"[WEBSOCKET:{subscription_name}] {error_msg}")
                self.last_error[subscription_name] = error_msg
                self.connection_states[subscription_name] = "error"

            # Calculate backoff delay
            retry_delay = min(retry_delay * 1.5, max_retry_delay)
            logger.info(f"[WEBSOCKET:{subscription_name}] Reconnecting in {retry_delay:.1f} seconds...")
            self.connection_states[subscription_name] = "reconnecting"
            await asyncio.sleep(retry_delay)

    def get_resource_data(self, resource_name: str) -> Optional[Dict[str, Any]]:
        """Get current resource data with enhanced logging."""
        logger.debug(f"[RESOURCE:{resource_name}] Resource data requested")

        if resource_name in self.resource_data:
            data = self.resource_data[resource_name]
            age_seconds = (datetime.now() - data.last_updated).total_seconds()
            logger.debug(f"[RESOURCE:{resource_name}] Data found, age: {age_seconds:.1f}s")
            return data.data
        else:
            logger.debug(f"[RESOURCE:{resource_name}] No data available")
            return None

    def list_active_subscriptions(self) -> List[str]:
        """List all active subscriptions."""
        active = list(self.active_subscriptions.keys())
        logger.debug(f"[SUBSCRIPTION_MANAGER] Active subscriptions: {active}")
        return active

    def get_subscription_status(self) -> Dict[str, Dict[str, Any]]:
        """Get detailed status of all subscriptions for diagnostics."""
        status = {}

        for sub_name, config in self.subscription_configs.items():
            sub_status = {
                "config": {
                    "resource": config["resource"],
                    "description": config["description"],
                    "auto_start": config.get("auto_start", False),
                },
                "runtime": {
                    "active": sub_name in self.active_subscriptions,
                    "connection_state": self.connection_states.get(sub_name, "not_started"),
                    "reconnect_attempts": self.reconnect_attempts.get(sub_name, 0),
                    "last_error": self.last_error.get(sub_name, None),
                },
            }

            # Add data info if available
            if sub_name in self.resource_data:
                data_info = self.resource_data[sub_name]
                age_seconds = (datetime.now() - data_info.last_updated).total_seconds()
                sub_status["data"] = {
                    "available": True,
                    "last_updated": data_info.last_updated.isoformat(),
                    "age_seconds": age_seconds,
                }
            else:
                sub_status["data"] = {"available": False}

            status[sub_name] = sub_status

        logger.debug(f"[SUBSCRIPTION_MANAGER] Generated status for {len(status)} subscriptions")
        return status


# Global subscription manager instance
subscription_manager = SubscriptionManager()
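The reconnect logic in `_subscription_loop` grows the delay by a factor of 1.5 per failure, capped at 300 seconds. That schedule can be expressed as a small generator (a standalone sketch, not code from the diff):

```python
def backoff_delays(initial: float = 5.0, factor: float = 1.5,
                   cap: float = 300.0, attempts: int = 5):
    """Yield the reconnect delays the loop above would sleep between attempts.

    Mirrors retry_delay = min(retry_delay * factor, cap) applied after
    each failed connection attempt.
    """
    delay = initial
    for _ in range(attempts):
        delay = min(delay * factor, cap)
        yield delay


print([round(d, 2) for d in backoff_delays(attempts=4)])  # [7.5, 11.25, 16.88, 25.31]
```

With the defaults, the delay reaches the 300-second cap after roughly ten failed attempts, which bounds how hard a flapping server gets hammered.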
91 unraid_mcp/subscriptions/resources.py Normal file
@@ -0,0 +1,91 @@
"""MCP resources that expose subscription data.

This module defines MCP resources that bridge between the subscription manager
and the MCP protocol, providing fallback queries when subscription data is unavailable.
"""

import json
import os
from pathlib import Path

from fastmcp import FastMCP

from ..config.logging import logger
from .manager import subscription_manager


# Global flag to track subscription startup
_subscriptions_started = False


async def ensure_subscriptions_started():
    """Ensure subscriptions are started; called from an async context."""
    global _subscriptions_started

    if _subscriptions_started:
        return

    logger.info("[STARTUP] First async operation detected, starting subscriptions...")
    try:
        await autostart_subscriptions()
        _subscriptions_started = True
        logger.info("[STARTUP] Subscriptions started successfully")
    except Exception as e:
        logger.error(f"[STARTUP] Failed to start subscriptions: {e}", exc_info=True)


async def autostart_subscriptions():
    """Auto-start all subscriptions marked for auto-start in SubscriptionManager."""
    logger.info("[AUTOSTART] Initiating subscription auto-start process...")

    try:
        # Use the new SubscriptionManager auto-start method
        await subscription_manager.auto_start_all_subscriptions()
        logger.info("[AUTOSTART] Auto-start process completed successfully")
    except Exception as e:
        logger.error(f"[AUTOSTART] Failed during auto-start process: {e}", exc_info=True)

    # Optional log file subscription
    log_path = os.getenv("UNRAID_AUTOSTART_LOG_PATH")
    if log_path is None:
        # Default to syslog if available
        default_path = "/var/log/syslog"
        if Path(default_path).exists():
            log_path = default_path
            logger.info(f"[AUTOSTART] Using default log path: {default_path}")

    if log_path:
        try:
            logger.info(f"[AUTOSTART] Starting log file subscription for: {log_path}")
            config = subscription_manager.subscription_configs.get("logFileSubscription")
            if config:
                await subscription_manager.start_subscription("logFileSubscription", config["query"], {"path": log_path})
                logger.info(f"[AUTOSTART] Log file subscription started for: {log_path}")
            else:
                logger.error("[AUTOSTART] logFileSubscription config not found")
        except Exception as e:
            logger.error(f"[AUTOSTART] Failed to start log file subscription: {e}", exc_info=True)
    else:
        logger.info("[AUTOSTART] No log file path configured for auto-start")


def register_subscription_resources(mcp: FastMCP):
    """Register all subscription resources with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register resources with
    """

    @mcp.resource("unraid://logs/stream")
    async def logs_stream_resource() -> str:
        """Real-time log stream data from subscription."""
        await ensure_subscriptions_started()
        data = subscription_manager.get_resource_data("logFileSubscription")
        if data:
            return json.dumps(data, indent=2)
        return json.dumps({
            "status": "No subscription data yet",
            "message": "Subscriptions auto-start on server boot. If this persists, check server logs for WebSocket/auth issues."
        })

    logger.info("Subscription resources registered successfully")
unraid_mcp/tools/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""MCP tools organized by functional domain."""
unraid_mcp/tools/docker.py (new file, 387 lines)
@@ -0,0 +1,387 @@
"""Docker container management tools.

This module provides tools for Docker container lifecycle and management,
including listing containers with caching options, start/stop operations,
and detailed container information retrieval.
"""

from typing import Any

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


def find_container_by_identifier(container_identifier: str, containers: list[dict[str, Any]]) -> dict[str, Any] | None:
    """Find a container by ID or name with fuzzy matching.

    Args:
        container_identifier: Container ID or name to find
        containers: List of container dictionaries to search

    Returns:
        Container dictionary if found, None otherwise
    """
    if not containers:
        return None

    # Exact matches first
    for container in containers:
        if container.get("id") == container_identifier:
            return container

        # Check all names for an exact match
        names = container.get("names", [])
        if container_identifier in names:
            return container

    # Fuzzy matching - case-insensitive partial matches
    container_identifier_lower = container_identifier.lower()
    for container in containers:
        names = container.get("names", [])
        for name in names:
            if container_identifier_lower in name.lower() or name.lower() in container_identifier_lower:
                logger.info(f"Found container via fuzzy match: '{container_identifier}' -> '{name}'")
                return container

    return None


def get_available_container_names(containers: list[dict[str, Any]]) -> list[str]:
    """Extract all available container names for error reporting.

    Args:
        containers: List of container dictionaries

    Returns:
        List of container names
    """
    names = []
    for container in containers:
        container_names = container.get("names", [])
        names.extend(container_names)
    return names
def register_docker_tools(mcp: FastMCP):
    """Register all Docker tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def list_docker_containers() -> list[dict[str, Any]]:
        """Lists all Docker containers on the Unraid system.

        Returns:
            List of Docker container information dictionaries
        """
        query = """
        query ListDockerContainers {
          docker {
            containers(skipCache: false) {
              id
              names
              image
              state
              status
              autoStart
            }
          }
        }
        """
        try:
            logger.info("Executing list_docker_containers tool")
            response_data = await make_graphql_request(query)
            if response_data.get("docker"):
                return response_data["docker"].get("containers", [])
            return []
        except Exception as e:
            logger.error(f"Error in list_docker_containers: {e}", exc_info=True)
            raise ToolError(f"Failed to list Docker containers: {str(e)}")

    @mcp.tool()
    async def manage_docker_container(container_id: str, action: str) -> dict[str, Any]:
        """Starts or stops a specific Docker container. Action must be 'start' or 'stop'.

        Args:
            container_id: Container ID or name to manage
            action: Action to perform - 'start' or 'stop'

        Returns:
            Dict containing operation result and container information
        """
        import asyncio

        if action.lower() not in ["start", "stop"]:
            logger.warning(f"Invalid action '{action}' for manage_docker_container")
            raise ToolError("Invalid action. Must be 'start' or 'stop'.")

        mutation_name = action.lower()

        # Build the operation mutation
        operation_query = f"""
        mutation ManageDockerContainer($id: PrefixedID!) {{
          docker {{
            {mutation_name}(id: $id) {{
              id
              names
              state
              status
            }}
          }}
        }}
        """

        variables = {"id": container_id}

        try:
            logger.info(f"Executing manage_docker_container: action={action}, id={container_id}")

            # Step 1: Resolve the container identifier to an actual container ID if needed.
            # A full Docker container ID is a 64-character hex string; anything else is
            # treated as a name that must be resolved via the container list.
            actual_container_id = container_id
            is_full_id = len(container_id) == 64 and all(c in "0123456789abcdef" for c in container_id.lower())
            if not is_full_id:
                logger.info(f"Resolving container identifier '{container_id}' to actual container ID")
                list_query = """
                query ResolveContainerID {
                  docker {
                    containers(skipCache: true) {
                      id
                      names
                    }
                  }
                }
                """
                list_response = await make_graphql_request(list_query)
                if list_response.get("docker"):
                    containers = list_response["docker"].get("containers", [])
                    resolved_container = find_container_by_identifier(container_id, containers)
                    if resolved_container:
                        actual_container_id = resolved_container.get("id")
                        logger.info(f"Resolved '{container_id}' to container ID: {actual_container_id}")
                    else:
                        available_names = get_available_container_names(containers)
                        error_msg = f"Container '{container_id}' not found for {action} operation."
                        if available_names:
                            error_msg += f" Available containers: {', '.join(available_names[:10])}"
                        raise ToolError(error_msg)

                # Update variables with the actual container ID
                variables = {"id": actual_container_id}

            # Step 2: Execute the operation with idempotent error handling
            operation_context = {"operation": action}
            operation_response = await make_graphql_request(
                operation_query,
                variables,
                operation_context=operation_context
            )

            # Handle idempotent success case
            if operation_response.get("idempotent_success"):
                logger.info(f"Container {action} operation was idempotent: {operation_response.get('message')}")
                # Get current container state since the operation was already complete
                try:
                    list_query = """
                    query GetContainerStateAfterIdempotent($skipCache: Boolean!) {
                      docker {
                        containers(skipCache: $skipCache) {
                          id
                          names
                          image
                          state
                          status
                          autoStart
                        }
                      }
                    }
                    """
                    list_response = await make_graphql_request(list_query, {"skipCache": True})

                    if list_response.get("docker"):
                        containers = list_response["docker"].get("containers", [])
                        container = find_container_by_identifier(container_id, containers)

                        if container:
                            return {
                                "operation_result": {"id": container_id, "names": container.get("names", [])},
                                "container_details": container,
                                "success": True,
                                "message": f"Container {action} operation was already complete - current state returned",
                                "idempotent": True
                            }

                except Exception as lookup_error:
                    logger.warning(f"Could not retrieve container state after idempotent operation: {lookup_error}")

                return {
                    "operation_result": {"id": container_id},
                    "container_details": None,
                    "success": True,
                    "message": f"Container {action} operation was already complete",
                    "idempotent": True
                }

            # Handle normal successful operation
            if not (operation_response.get("docker") and operation_response["docker"].get(mutation_name)):
                raise ToolError(f"Failed to execute {action} operation on container")

            operation_result = operation_response["docker"][mutation_name]
            logger.info(f"Container {action} operation completed for {container_id}")

            # Step 3: Wait briefly for state to propagate, then fetch current container details
            await asyncio.sleep(1.0)  # Give the container state time to update

            # Step 4: Try to get updated container details with retry logic
            max_retries = 3
            retry_delay = 1.0

            for attempt in range(max_retries):
                try:
                    # Query all containers and find the one we just operated on
                    list_query = """
                    query GetUpdatedContainerState($skipCache: Boolean!) {
                      docker {
                        containers(skipCache: $skipCache) {
                          id
                          names
                          image
                          state
                          status
                          autoStart
                        }
                      }
                    }
                    """

                    # Skip cache to get fresh data
                    list_response = await make_graphql_request(list_query, {"skipCache": True})

                    if list_response.get("docker"):
                        containers = list_response["docker"].get("containers", [])

                        # Find the container using our helper function
                        container = find_container_by_identifier(container_id, containers)
                        if container:
                            logger.info(f"Found updated container state for {container_id}")
                            return {
                                "operation_result": operation_result,
                                "container_details": container,
                                "success": True,
                                "message": f"Container {action} operation completed successfully"
                            }

                    # If not found in this attempt, wait and retry
                    if attempt < max_retries - 1:
                        logger.warning(f"Container {container_id} not found after {action}, retrying in {retry_delay}s (attempt {attempt + 1}/{max_retries})")
                        await asyncio.sleep(retry_delay)
                        retry_delay *= 1.5  # Exponential backoff

                except Exception as query_error:
                    logger.warning(f"Error querying updated container state (attempt {attempt + 1}): {query_error}")
                    if attempt < max_retries - 1:
                        await asyncio.sleep(retry_delay)
                        retry_delay *= 1.5
                    else:
                        # On final attempt failure, still return operation success
                        logger.warning(f"Could not retrieve updated container details after {action}, but operation succeeded")
                        return {
                            "operation_result": operation_result,
                            "container_details": None,
                            "success": True,
                            "message": f"Container {action} operation completed, but updated state could not be retrieved",
                            "warning": "Container state query failed after operation - this may be due to timing or the container not being found in the updated state"
                        }

            # If we get here, all retries failed to find the container
            logger.warning(f"Container {container_id} not found in any retry attempt after {action}")
            return {
                "operation_result": operation_result,
                "container_details": None,
                "success": True,
                "message": f"Container {action} operation completed, but container not found in subsequent queries",
                "warning": "Container not found in updated state - this may indicate the operation succeeded but container is no longer listed"
            }

        except Exception as e:
            logger.error(f"Error in manage_docker_container ({action}): {e}", exc_info=True)
            raise ToolError(f"Failed to {action} Docker container: {str(e)}")

    @mcp.tool()
    async def get_docker_container_details(container_identifier: str) -> dict[str, Any]:
        """Retrieves detailed information for a specific Docker container by its ID or name.

        Args:
            container_identifier: Container ID or name to retrieve details for

        Returns:
            Dict containing detailed container information
        """
        # This tool fetches all containers and then filters by ID or name.
        # More detailed query fields for a single container, once found:
        detailed_query_fields = """
        id
        names
        image
        imageId
        command
        created
        ports { ip privatePort publicPort type }
        sizeRootFs
        labels # JSONObject
        state
        status
        hostConfig { networkMode }
        networkSettings # JSONObject
        mounts # JSONObject array
        autoStart
        """

        # Fetch all containers first
        list_query = f"""
        query GetAllContainerDetailsForFiltering {{
          docker {{
            containers(skipCache: false) {{
              {detailed_query_fields}
            }}
          }}
        }}
        """
        try:
            logger.info(f"Executing get_docker_container_details for identifier: {container_identifier}")
            response_data = await make_graphql_request(list_query)

            containers = []
            if response_data.get("docker"):
                containers = response_data["docker"].get("containers", [])

            # Use our enhanced container lookup
            container = find_container_by_identifier(container_identifier, containers)
            if container:
                logger.info(f"Found container {container_identifier}")
                return container

            # Container not found - provide a helpful error message with available containers
            available_names = get_available_container_names(containers)
            logger.warning(f"Container with identifier '{container_identifier}' not found.")
            logger.info(f"Available containers: {available_names}")

            error_msg = f"Container '{container_identifier}' not found."
            if available_names:
                error_msg += f" Available containers: {', '.join(available_names[:10])}"  # Limit to first 10
                if len(available_names) > 10:
                    error_msg += f" (and {len(available_names) - 10} more)"
            else:
                error_msg += " No containers are currently available."

            raise ToolError(error_msg)

        except Exception as e:
            logger.error(f"Error in get_docker_container_details: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve Docker container details: {str(e)}")

    logger.info("Docker tools registered successfully")
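The state-refresh loop in `manage_docker_container` uses a small exponential backoff: up to three attempts, sleeping `retry_delay` seconds between attempts and multiplying it by 1.5 each time. As a standalone sketch of the schedule it produces (constants copied from the tool above):

```python
# Replicate the retry schedule: no sleep after the final attempt.
max_retries = 3
retry_delay = 1.0

delays = []
for attempt in range(max_retries):
    if attempt < max_retries - 1:
        delays.append(retry_delay)
        retry_delay *= 1.5

print(delays)  # → [1.0, 1.5]
```

Including the initial 1.0 s settle sleep, the worst-case wait before the tool gives up on refreshing state is therefore about 3.5 seconds, which keeps the tool responsive while still tolerating slow state propagation.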
unraid_mcp/tools/health.py (new file, 187 lines)
@@ -0,0 +1,187 @@
"""Comprehensive health monitoring tools.

This module provides tools for comprehensive health checks of the Unraid MCP server
and the underlying Unraid system, including performance metrics, system status,
notifications, Docker services, and API responsiveness.
"""

import datetime
import time
from typing import Any, Dict

from fastmcp import FastMCP

from ..config.logging import logger
from ..config.settings import UNRAID_API_URL, UNRAID_MCP_HOST, UNRAID_MCP_PORT, UNRAID_MCP_TRANSPORT
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


def register_health_tools(mcp: FastMCP):
    """Register all health tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def health_check() -> Dict[str, Any]:
        """Returns comprehensive health status of the Unraid MCP server and system for monitoring purposes."""
        start_time = time.time()
        health_status = "healthy"
        issues = []

        try:
            # Enhanced health check covering multiple system components
            comprehensive_query = """
            query ComprehensiveHealthCheck {
              info {
                machineId
                time
                versions { unraid }
                os { uptime }
              }
              array {
                state
              }
              notifications {
                overview {
                  unread { alert warning total }
                }
              }
              docker {
                containers(skipCache: true) {
                  id
                  state
                  status
                }
              }
            }
            """

            response_data = await make_graphql_request(comprehensive_query)
            api_latency = round((time.time() - start_time) * 1000, 2)  # ms

            # Base health info
            health_info = {
                "status": health_status,
                "timestamp": datetime.datetime.utcnow().isoformat(),
                "api_latency_ms": api_latency,
                "server": {
                    "name": "Unraid MCP Server",
                    "version": "0.1.0",
                    "transport": UNRAID_MCP_TRANSPORT,
                    "host": UNRAID_MCP_HOST,
                    "port": UNRAID_MCP_PORT,
                    "process_uptime_seconds": time.time() - start_time  # Duration of this check, not true process uptime
                }
            }

            if not response_data:
                health_status = "unhealthy"
                issues.append("No response from Unraid API")
                health_info["status"] = health_status
                health_info["issues"] = issues
                return health_info

            # System info analysis
            info = response_data.get("info", {})
            if info:
                health_info["unraid_system"] = {
                    "status": "connected",
                    "url": UNRAID_API_URL,
                    "machine_id": info.get("machineId"),
                    "time": info.get("time"),
                    "version": info.get("versions", {}).get("unraid"),
                    "uptime": info.get("os", {}).get("uptime")
                }
            else:
                health_status = "degraded"
                issues.append("Unable to retrieve system info")

            # Array health analysis
            array_info = response_data.get("array", {})
            if array_info:
                array_state = array_info.get("state", "unknown")
                health_info["array_status"] = {
                    "state": array_state,
                    "healthy": array_state in ["STARTED", "STOPPED"]
                }
                if array_state not in ["STARTED", "STOPPED"]:
                    health_status = "warning"
                    issues.append(f"Array in unexpected state: {array_state}")
            else:
                health_status = "warning"
                issues.append("Unable to retrieve array status")

            # Notifications analysis
            notifications = response_data.get("notifications", {})
            if notifications and notifications.get("overview"):
                unread = notifications["overview"].get("unread", {})
                alert_count = unread.get("alert", 0)
                warning_count = unread.get("warning", 0)
                total_unread = unread.get("total", 0)

                health_info["notifications"] = {
                    "unread_total": total_unread,
                    "unread_alerts": alert_count,
                    "unread_warnings": warning_count,
                    "has_critical_notifications": alert_count > 0
                }

                if alert_count > 0:
                    health_status = "warning"
                    issues.append(f"{alert_count} unread alert notification(s)")

            # Docker services analysis
            docker_info = response_data.get("docker", {})
            if docker_info and docker_info.get("containers"):
                containers = docker_info["containers"]
                running_containers = [c for c in containers if c.get("state") == "running"]
                stopped_containers = [c for c in containers if c.get("state") == "exited"]

                health_info["docker_services"] = {
                    "total_containers": len(containers),
                    "running_containers": len(running_containers),
                    "stopped_containers": len(stopped_containers),
                    "containers_healthy": len([c for c in containers if c.get("status", "").startswith("Up")])
                }

            # API performance assessment. Check the stricter threshold first;
            # otherwise the "degraded" branch could never be reached.
            if api_latency > 10000:  # > 10 seconds
                health_status = "degraded"
                issues.append(f"Very high API latency: {api_latency}ms")
            elif api_latency > 5000:  # > 5 seconds
                health_status = "warning"
                issues.append(f"High API latency: {api_latency}ms")

            # Final status determination
            health_info["status"] = health_status
            if issues:
                health_info["issues"] = issues

            # Add performance metrics
            health_info["performance"] = {
                "api_response_time_ms": api_latency,
                "health_check_duration_ms": round((time.time() - start_time) * 1000, 2)
            }

            return health_info

        except Exception as e:
            logger.error(f"Health check failed: {e}")
            return {
                "status": "unhealthy",
                "timestamp": datetime.datetime.utcnow().isoformat(),
                "error": str(e),
                "api_latency_ms": round((time.time() - start_time) * 1000, 2),
                "server": {
                    "name": "Unraid MCP Server",
                    "version": "0.1.0",
                    "transport": UNRAID_MCP_TRANSPORT,
                    "host": UNRAID_MCP_HOST,
                    "port": UNRAID_MCP_PORT
                }
            }

    logger.info("Health tools registered successfully")
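The latency assessment in `health_check` reduces to an ordered threshold check, where the stricter bound must be tested first or the lower branch shadows it. A minimal sketch (thresholds copied from the tool; the function name is hypothetical):

```python
def classify_latency(ms: float) -> str:
    # Stricter bound first, so the "degraded" branch is reachable.
    if ms > 10000:
        return "degraded"
    if ms > 5000:
        return "warning"
    return "healthy"

print([classify_latency(ms) for ms in (120, 6000, 15000)])
# → ['healthy', 'warning', 'degraded']
```

The same ordering rule applies to any cascading severity check: test thresholds from most to least severe.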
unraid_mcp/tools/rclone.py (new file, 178 lines)
@@ -0,0 +1,178 @@
"""RClone cloud storage remote management tools.

This module provides tools for managing RClone remotes, including listing existing
remotes, getting configuration forms, creating new remotes, and deleting remotes
for various cloud storage providers (S3, Google Drive, Dropbox, FTP, etc.).
"""

from typing import Any, Dict, List, Optional

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


def register_rclone_tools(mcp: FastMCP):
    """Register all RClone tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def list_rclone_remotes() -> List[Dict[str, Any]]:
        """Retrieves all configured RClone remotes with their configuration details."""
        try:
            query = """
            query ListRCloneRemotes {
              rclone {
                remotes {
                  name
                  type
                  parameters
                  config
                }
              }
            }
            """

            response_data = await make_graphql_request(query)

            if "rclone" in response_data and "remotes" in response_data["rclone"]:
                remotes = response_data["rclone"]["remotes"]
                logger.info(f"Retrieved {len(remotes)} RClone remotes")
                return remotes

            return []

        except Exception as e:
            logger.error(f"Failed to list RClone remotes: {str(e)}")
            raise ToolError(f"Failed to list RClone remotes: {str(e)}")

    @mcp.tool()
    async def get_rclone_config_form(provider_type: Optional[str] = None) -> Dict[str, Any]:
        """Get the RClone configuration form schema for setting up new remotes.

        Args:
            provider_type: Optional provider type to get a specific form (e.g., 's3', 'drive', 'dropbox')
        """
        try:
            query = """
            query GetRCloneConfigForm($formOptions: RCloneConfigFormInput) {
              rclone {
                configForm(formOptions: $formOptions) {
                  id
                  dataSchema
                  uiSchema
                }
              }
            }
            """

            variables = {}
            if provider_type:
                variables["formOptions"] = {"providerType": provider_type}

            response_data = await make_graphql_request(query, variables)

            if "rclone" in response_data and "configForm" in response_data["rclone"]:
                form_data = response_data["rclone"]["configForm"]
                logger.info(f"Retrieved RClone config form for {provider_type or 'general'}")
                return form_data

            raise ToolError("No RClone config form data received")

        except Exception as e:
            logger.error(f"Failed to get RClone config form: {str(e)}")
            raise ToolError(f"Failed to get RClone config form: {str(e)}")

    @mcp.tool()
    async def create_rclone_remote(name: str, provider_type: str, config_data: Dict[str, Any]) -> Dict[str, Any]:
        """Create a new RClone remote with the specified configuration.

        Args:
            name: Name for the new remote
            provider_type: Type of provider (e.g., 's3', 'drive', 'dropbox', 'ftp')
            config_data: Configuration parameters specific to the provider type
        """
        try:
            mutation = """
            mutation CreateRCloneRemote($input: CreateRCloneRemoteInput!) {
              rclone {
                createRCloneRemote(input: $input) {
                  name
                  type
                  parameters
                }
              }
            }
            """

            variables = {
                "input": {
                    "name": name,
                    "type": provider_type,
                    "config": config_data
                }
            }

            response_data = await make_graphql_request(mutation, variables)

            if "rclone" in response_data and "createRCloneRemote" in response_data["rclone"]:
                remote_info = response_data["rclone"]["createRCloneRemote"]
                logger.info(f"Successfully created RClone remote: {name}")
                return {
                    "success": True,
                    "message": f"RClone remote '{name}' created successfully",
                    "remote": remote_info
                }

            raise ToolError("Failed to create RClone remote")

        except Exception as e:
            logger.error(f"Failed to create RClone remote {name}: {str(e)}")
            raise ToolError(f"Failed to create RClone remote {name}: {str(e)}")

    @mcp.tool()
    async def delete_rclone_remote(name: str) -> Dict[str, Any]:
        """Delete an existing RClone remote by name.

        Args:
            name: Name of the remote to delete
        """
        try:
            mutation = """
            mutation DeleteRCloneRemote($input: DeleteRCloneRemoteInput!) {
              rclone {
                deleteRCloneRemote(input: $input)
              }
            }
            """

            variables = {
                "input": {
                    "name": name
                }
            }

            response_data = await make_graphql_request(mutation, variables)

            if "rclone" in response_data and response_data["rclone"]["deleteRCloneRemote"]:
                logger.info(f"Successfully deleted RClone remote: {name}")
                return {
                    "success": True,
                    "message": f"RClone remote '{name}' deleted successfully"
                }

            raise ToolError(f"Failed to delete RClone remote '{name}'")

        except Exception as e:
            logger.error(f"Failed to delete RClone remote {name}: {str(e)}")
            raise ToolError(f"Failed to delete RClone remote {name}: {str(e)}")

    logger.info("RClone tools registered successfully")
unraid_mcp/tools/storage.py (new file, 270 lines)
@@ -0,0 +1,270 @@
|
||||
"""Storage, disk, and notification management tools.
|
||||
|
||||
This module provides tools for managing user shares, notifications,
|
||||
log files, physical disks with SMART data, and system storage operations
|
||||
with custom timeout configurations for disk-intensive operations.
|
||||
"""
|
||||
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
import httpx
|
||||
from fastmcp import FastMCP
|
||||
|
||||
from ..config.logging import logger
|
||||
from ..core.client import make_graphql_request
|
||||
from ..core.exceptions import ToolError
|
||||
|
||||
|
||||
def register_storage_tools(mcp: FastMCP):
|
||||
"""Register all storage tools with the FastMCP instance.
|
||||
|
||||
Args:
|
||||
mcp: FastMCP instance to register tools with
|
||||
"""
|
||||
|
||||
@mcp.tool()
|
||||
async def get_shares_info() -> List[Dict[str, Any]]:
|
||||
"""Retrieves information about user shares."""
|
||||
query = """
|
||||
query GetSharesInfo {
|
||||
shares {
|
||||
id
|
||||
name
|
||||
free
|
||||
used
|
||||
size
|
||||
include
|
||||
exclude
|
||||
cache
|
||||
nameOrig
|
||||
comment
|
||||
allocator
|
||||
splitLevel
|
||||
floor
|
||||
cow
|
||||
color
|
||||
luksStatus
|
||||
}
|
||||
}
|
||||
"""
|
||||
try:
|
||||
logger.info("Executing get_shares_info tool")
|
||||
response_data = await make_graphql_request(query)
|
||||
return response_data.get("shares", [])
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_shares_info: {e}", exc_info=True)
|
||||
raise ToolError(f"Failed to retrieve shares information: {str(e)}")
|
||||
|
||||
    @mcp.tool()
    async def get_notifications_overview() -> Dict[str, Any]:
        """Retrieves an overview of system notifications (unread and archive counts by severity)."""
        query = """
        query GetNotificationsOverview {
          notifications {
            overview {
              unread { info warning alert total }
              archive { info warning alert total }
            }
          }
        }
        """
        try:
            logger.info("Executing get_notifications_overview tool")
            response_data = await make_graphql_request(query)
            if response_data.get("notifications"):
                return response_data["notifications"].get("overview", {})
            return {}
        except Exception as e:
            logger.error(f"Error in get_notifications_overview: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve notifications overview: {str(e)}")

    @mcp.tool()
    async def list_notifications(
        type: str,
        offset: int,
        limit: int,
        importance: Optional[str] = None
    ) -> List[Dict[str, Any]]:
        """Lists notifications with filtering. Type: UNREAD/ARCHIVE. Importance: INFO/WARNING/ALERT."""
        query = """
        query ListNotifications($filter: NotificationFilter!) {
          notifications {
            list(filter: $filter) {
              id
              title
              subject
              description
              importance
              link
              type
              timestamp
              formattedTimestamp
            }
          }
        }
        """
        variables = {
            "filter": {
                "type": type.upper(),
                "offset": offset,
                "limit": limit,
                "importance": importance.upper() if importance else None
            }
        }
        # Remove null importance from variables if not provided, as GraphQL might be strict
        if not importance:
            del variables["filter"]["importance"]

        try:
            logger.info(f"Executing list_notifications: type={type}, offset={offset}, limit={limit}, importance={importance}")
            response_data = await make_graphql_request(query, variables)
            if response_data.get("notifications"):
                return response_data["notifications"].get("list", [])
            return []
        except Exception as e:
            logger.error(f"Error in list_notifications: {e}", exc_info=True)
            raise ToolError(f"Failed to list notifications: {str(e)}")

    @mcp.tool()
    async def list_available_log_files() -> List[Dict[str, Any]]:
        """Lists all available log files that can be queried."""
        query = """
        query ListLogFiles {
          logFiles {
            name
            path
            size
            modifiedAt
          }
        }
        """
        try:
            logger.info("Executing list_available_log_files tool")
            response_data = await make_graphql_request(query)
            return response_data.get("logFiles", [])
        except Exception as e:
            logger.error(f"Error in list_available_log_files: {e}", exc_info=True)
            raise ToolError(f"Failed to list available log files: {str(e)}")

    @mcp.tool()
    async def get_logs(log_file_path: str, tail_lines: int = 100) -> Dict[str, Any]:
        """Retrieves content from a specific log file, defaulting to the last 100 lines."""
        # Query.logFile accepts 'lines' and 'startLine'. A robust tail would fetch
        # totalLines first and then compute startLine; since LogFileContent reports
        # startLine itself, the API is range-aware. For now we simply pass
        # lines=tail_lines and rely on the API returning the trailing portion.
        # Revisit this if it does not behave like a tail in practice.
        query = """
        query GetLogContent($path: String!, $lines: Int) {
          logFile(path: $path, lines: $lines) {
            path
            content
            totalLines
            startLine
          }
        }
        """
        variables = {"path": log_file_path, "lines": tail_lines}
        try:
            logger.info(f"Executing get_logs for {log_file_path}, tail_lines={tail_lines}")
            response_data = await make_graphql_request(query, variables)
            return response_data.get("logFile", {})
        except Exception as e:
            logger.error(f"Error in get_logs for {log_file_path}: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve logs from {log_file_path}: {str(e)}")

    @mcp.tool()
    async def list_physical_disks() -> List[Dict[str, Any]]:
        """Lists all physical disks recognized by the Unraid system."""
        # Querying an extremely minimal set of fields for diagnostics
        query = """
        query ListPhysicalDisksMinimal {
          disks {
            id
            device
            name
          }
        }
        """
        try:
            logger.info("Executing list_physical_disks tool with minimal query and increased timeout")
            # Increased read timeout for this potentially slow query
            long_timeout = httpx.Timeout(10.0, read=90.0, connect=5.0)
            response_data = await make_graphql_request(query, custom_timeout=long_timeout)
            return response_data.get("disks", [])
        except Exception as e:
            logger.error(f"Error in list_physical_disks: {e}", exc_info=True)
            raise ToolError(f"Failed to list physical disks: {str(e)}")

    @mcp.tool()
    async def get_disk_details(disk_id: str) -> Dict[str, Any]:
        """Retrieves detailed SMART information and partition data for a specific physical disk."""
        # Deliberately minimal query; fields like interfaceType, smartStatus and
        # partitions are read defensively below and fall back to None/empty when
        # the API does not return them.
        query = """
        query GetDiskDetails($id: PrefixedID!) {
          disk(id: $id) {
            id
            device
            name
            serialNum
            size
            temperature
          }
        }
        """
        variables = {"id": disk_id}
        try:
            logger.info(f"Executing get_disk_details for disk: {disk_id}")
            response_data = await make_graphql_request(query, variables)
            raw_disk = response_data.get("disk", {})

            if not raw_disk:
                raise ToolError(f"Disk '{disk_id}' not found")

            # Process disk information for human-readable output
            def format_bytes(bytes_value):
                if bytes_value is None:
                    return "N/A"
                bytes_value = int(bytes_value)
                for unit in ['B', 'KB', 'MB', 'GB', 'TB', 'PB']:
                    if bytes_value < 1024.0:
                        return f"{bytes_value:.2f} {unit}"
                    bytes_value /= 1024.0
                return f"{bytes_value:.2f} EB"

            summary = {
                'disk_id': raw_disk.get('id'),
                'device': raw_disk.get('device'),
                'name': raw_disk.get('name'),
                'serial_number': raw_disk.get('serialNum'),
                'size_formatted': format_bytes(raw_disk.get('size')),
                'temperature': f"{raw_disk.get('temperature')}°C" if raw_disk.get('temperature') is not None else 'N/A',
                'interface_type': raw_disk.get('interfaceType'),
                'smart_status': raw_disk.get('smartStatus'),
                'is_spinning': raw_disk.get('isSpinning'),
                'power_on_hours': raw_disk.get('powerOnHours'),
                'reallocated_sectors': raw_disk.get('reallocatedSectorCount'),
                'partition_count': len(raw_disk.get('partitions', [])),
                'total_partition_size': format_bytes(sum(p.get('size', 0) for p in raw_disk.get('partitions', []) if p.get('size')))
            }

            return {
                'summary': summary,
                'partitions': raw_disk.get('partitions', []),
                'details': raw_disk
            }

        except Exception as e:
            logger.error(f"Error in get_disk_details for {disk_id}: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve disk details for {disk_id}: {str(e)}")

    logger.info("Storage tools registered successfully")
385
unraid_mcp/tools/system.py
Normal file
@@ -0,0 +1,385 @@
"""System information and array status tools.

This module provides tools for retrieving core Unraid system information,
array status with health analysis, network configuration, registration info,
and system variables.
"""

from typing import Any, Dict

from fastmcp import FastMCP

from ..config.logging import logger
from ..core.client import make_graphql_request
from ..core.exceptions import ToolError


# Standalone functions for use by subscription resources
async def _get_system_info() -> Dict[str, Any]:
    """Standalone function to get system info - used by subscriptions and tools."""
    query = """
    query GetSystemInfo {
      info {
        os { platform distro release codename kernel arch hostname codepage logofile serial build uptime }
        cpu { manufacturer brand vendor family model stepping revision voltage speed speedmin speedmax threads cores processors socket cache flags }
        memory {
          # Avoid fetching problematic fields that cause type errors
          layout { bank type clockSpeed formFactor manufacturer partNum serialNum }
        }
        baseboard { manufacturer model version serial assetTag }
        system { manufacturer model version serial uuid sku }
        versions { kernel openssl systemOpenssl systemOpensslLib node v8 npm yarn pm2 gulp grunt git tsc mysql redis mongodb apache nginx php docker postfix postgresql perl python gcc unraid }
        apps { installed started }
        # Omit the devices section: it declares non-nullable fields that may be null
        machineId
        time
      }
    }
    """
    try:
        logger.info("Executing get_system_info")
        response_data = await make_graphql_request(query)
        raw_info = response_data.get("info", {})
        if not raw_info:
            raise ToolError("No system info returned from Unraid API")

        # Process for human-readable output
        summary = {}
        if raw_info.get('os'):
            os_info = raw_info['os']
            summary['os'] = f"{os_info.get('distro', '')} {os_info.get('release', '')} ({os_info.get('platform', '')}, {os_info.get('arch', '')})"
            summary['hostname'] = os_info.get('hostname')
            summary['uptime'] = os_info.get('uptime')

        if raw_info.get('cpu'):
            cpu_info = raw_info['cpu']
            summary['cpu'] = f"{cpu_info.get('manufacturer', '')} {cpu_info.get('brand', '')} ({cpu_info.get('cores')} cores, {cpu_info.get('threads')} threads)"

        if raw_info.get('memory') and raw_info['memory'].get('layout'):
            mem_layout = raw_info['memory']['layout']
            summary['memory_layout_details'] = []
            # The API does not return 'size' for individual sticks in the layout,
            # even when queried, so total memory cannot be derived from it.
            for stick in mem_layout:
                summary['memory_layout_details'].append(
                    f"Bank {stick.get('bank', '?')}: Type {stick.get('type', '?')}, Speed {stick.get('clockSpeed', '?')}MHz, Manufacturer: {stick.get('manufacturer', '?')}, Part: {stick.get('partNum', '?')}"
                )
            summary['memory_summary'] = "Stick layout details retrieved. Overall total/used/free memory stats are unavailable due to API limitations (Int overflow or data not provided by API)."
        else:
            summary['memory_summary'] = "Memory information (layout or stats) not available or failed to retrieve."

        # Include full details so an LLM can dig deeper when needed
        return {"summary": summary, "details": raw_info}

    except Exception as e:
        logger.error(f"Error in get_system_info: {e}", exc_info=True)
        raise ToolError(f"Failed to retrieve system information: {str(e)}")

async def _get_array_status() -> Dict[str, Any]:
    """Standalone function to get array status - used by subscriptions and tools."""
    query = """
    query GetArrayStatus {
      array {
        id
        state
        capacity {
          kilobytes { free used total }
          disks { free used total }
        }
        boot { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        parities { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        disks { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
        caches { id idx name device size status rotational temp numReads numWrites numErrors fsSize fsFree fsUsed exportable type warning critical fsType comment format transport color }
      }
    }
    """
    try:
        logger.info("Executing get_array_status")
        response_data = await make_graphql_request(query)
        raw_array_info = response_data.get("array", {})
        if not raw_array_info:
            raise ToolError("No array information returned from Unraid API")

        summary = {}
        summary['state'] = raw_array_info.get('state')

        if raw_array_info.get('capacity') and raw_array_info['capacity'].get('kilobytes'):
            kb_cap = raw_array_info['capacity']['kilobytes']

            # Helper to format KB into TB/GB/MB
            def format_kb(k):
                if k is None:
                    return "N/A"
                k = int(k)  # capacity values arrive as strings in the GraphQL schema
                if k >= 1024 * 1024 * 1024:
                    return f"{k / (1024 * 1024 * 1024):.2f} TB"
                if k >= 1024 * 1024:
                    return f"{k / (1024 * 1024):.2f} GB"
                if k >= 1024:
                    return f"{k / 1024:.2f} MB"
                return f"{k} KB"

            summary['capacity_total'] = format_kb(kb_cap.get('total'))
            summary['capacity_used'] = format_kb(kb_cap.get('used'))
            summary['capacity_free'] = format_kb(kb_cap.get('free'))

        summary['num_parity_disks'] = len(raw_array_info.get('parities', []))
        summary['num_data_disks'] = len(raw_array_info.get('disks', []))
        summary['num_cache_pools'] = len(raw_array_info.get('caches', []))  # caches are pools, not individual cache disks

        # Enhanced: Add disk health summary
        def analyze_disk_health(disks, disk_type):
            """Analyze health status of disk arrays"""
            if not disks:
                return {}

            health_counts = {
                'healthy': 0,
                'failed': 0,
                'missing': 0,
                'new': 0,
                'warning': 0,
                'unknown': 0
            }

            for disk in disks:
                status = disk.get('status', '').upper()
                warning = disk.get('warning')
                critical = disk.get('critical')

                if status == 'DISK_OK':
                    if warning or critical:
                        health_counts['warning'] += 1
                    else:
                        health_counts['healthy'] += 1
                elif status in ['DISK_DSBL', 'DISK_INVALID']:
                    health_counts['failed'] += 1
                elif status == 'DISK_NP':
                    health_counts['missing'] += 1
                elif status == 'DISK_NEW':
                    health_counts['new'] += 1
                else:
                    health_counts['unknown'] += 1

            return health_counts

        # Analyze health for each disk type
        health_summary = {}
        if raw_array_info.get('parities'):
            health_summary['parity_health'] = analyze_disk_health(raw_array_info['parities'], 'parity')
        if raw_array_info.get('disks'):
            health_summary['data_health'] = analyze_disk_health(raw_array_info['disks'], 'data')
        if raw_array_info.get('caches'):
            health_summary['cache_health'] = analyze_disk_health(raw_array_info['caches'], 'cache')

        # Overall array health assessment
        total_failed = sum(h.get('failed', 0) for h in health_summary.values())
        total_missing = sum(h.get('missing', 0) for h in health_summary.values())
        total_warning = sum(h.get('warning', 0) for h in health_summary.values())

        if total_failed > 0:
            overall_health = "CRITICAL"
        elif total_missing > 0:
            overall_health = "DEGRADED"
        elif total_warning > 0:
            overall_health = "WARNING"
        else:
            overall_health = "HEALTHY"

        summary['overall_health'] = overall_health
        summary['health_summary'] = health_summary

        return {"summary": summary, "details": raw_array_info}

    except Exception as e:
        logger.error(f"Error in get_array_status: {e}", exc_info=True)
        raise ToolError(f"Failed to retrieve array status: {str(e)}")

def register_system_tools(mcp: FastMCP):
    """Register all system tools with the FastMCP instance.

    Args:
        mcp: FastMCP instance to register tools with
    """

    @mcp.tool()
    async def get_system_info() -> Dict[str, Any]:
        """Retrieves comprehensive information about the Unraid system, OS, CPU, memory, and baseboard."""
        return await _get_system_info()

    @mcp.tool()
    async def get_array_status() -> Dict[str, Any]:
        """Retrieves the current status of the Unraid storage array, including its state, capacity, and details of all disks."""
        return await _get_array_status()

    @mcp.tool()
    async def get_network_config() -> Dict[str, Any]:
        """Retrieves network configuration details, including access URLs."""
        query = """
        query GetNetworkConfig {
          network {
            id
            accessUrls { type name ipv4 ipv6 }
          }
        }
        """
        try:
            logger.info("Executing get_network_config tool")
            response_data = await make_graphql_request(query)
            return response_data.get("network", {})
        except Exception as e:
            logger.error(f"Error in get_network_config: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve network configuration: {str(e)}")

    @mcp.tool()
    async def get_registration_info() -> Dict[str, Any]:
        """Retrieves Unraid registration details."""
        query = """
        query GetRegistrationInfo {
          registration {
            id
            type
            keyFile { location contents }
            state
            expiration
            updateExpiration
          }
        }
        """
        try:
            logger.info("Executing get_registration_info tool")
            response_data = await make_graphql_request(query)
            return response_data.get("registration", {})
        except Exception as e:
            logger.error(f"Error in get_registration_info: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve registration information: {str(e)}")

    @mcp.tool()
    async def get_connect_settings() -> Dict[str, Any]:
        """Retrieves settings related to Unraid Connect."""
        # Based on actual schema: settings.unified.values contains the JSON settings
        query = """
        query GetConnectSettingsForm {
          settings {
            unified {
              values
            }
          }
        }
        """
        try:
            logger.info("Executing get_connect_settings tool")
            response_data = await make_graphql_request(query)

            # Navigate down to the unified settings values
            if response_data.get("settings") and response_data["settings"].get("unified"):
                values = response_data["settings"]["unified"].get("values", {})
                # Filter for Connect-related settings if values is a dict
                if isinstance(values, dict):
                    # Look for connect-related keys in the unified settings
                    connect_settings = {}
                    for key, value in values.items():
                        if 'connect' in key.lower() or key in ['accessType', 'forwardType', 'port']:
                            connect_settings[key] = value
                    return connect_settings if connect_settings else values
                return values
            return {}
        except Exception as e:
            logger.error(f"Error in get_connect_settings: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve Unraid Connect settings: {str(e)}")

    @mcp.tool()
    async def get_unraid_variables() -> Dict[str, Any]:
        """Retrieves a selection of Unraid system variables and settings.

        Note: Many variables are omitted due to API type issues (Int overflow/NaN).
        """
        # Querying a smaller, curated set of fields to avoid Int overflow and NaN issues
        # pending Unraid API schema fixes for the full Vars type.
        query = """
        query GetSelectiveUnraidVariables {
          vars {
            id
            version
            name
            timeZone
            comment
            security
            workgroup
            domain
            domainShort
            hideDotFiles
            localMaster
            enableFruit
            useNtp
            # ntpServer1, ntpServer2, ... are strings, should be okay but numerous
            domainLogin # Boolean
            sysModel # String
            # sysArraySlots, sysCacheSlots are Int, were problematic (NaN)
            sysFlashSlots # Int, might be okay if small and always set
            useSsl # Boolean
            port # Int, usually small
            portssl # Int, usually small
            localTld # String
            bindMgt # Boolean
            useTelnet # Boolean
            porttelnet # Int, usually small
            useSsh # Boolean
            portssh # Int, usually small
            startPage # String
            startArray # Boolean
            # spindownDelay, queueDepth are Int, potentially okay if always set
            # defaultFormat, defaultFsType are String
            shutdownTimeout # Int, potentially okay
            # luksKeyfile is String
            # pollAttributes, pollAttributesDefault, pollAttributesStatus are Int/String, were problematic (NaN or type)
            # nrRequests, nrRequestsDefault, nrRequestsStatus were problematic
            # mdNumStripes, mdNumStripesDefault, mdNumStripesStatus were problematic
            # mdSyncWindow, mdSyncWindowDefault, mdSyncWindowStatus were problematic
            # mdSyncThresh, mdSyncThreshDefault, mdSyncThreshStatus were problematic
            # mdWriteMethod, mdWriteMethodDefault, mdWriteMethodStatus were problematic
            # shareDisk, shareUser, shareUserInclude, shareUserExclude are String arrays/String
            shareSmbEnabled # Boolean
            shareNfsEnabled # Boolean
            shareAfpEnabled # Boolean
            # shareInitialOwner, shareInitialGroup are String
            shareCacheEnabled # Boolean
            # shareCacheFloor is String (numeric string?)
            # shareMoverSchedule, shareMoverLogging are String
            # fuseRemember, fuseRememberDefault, fuseRememberStatus are String/Boolean, were problematic
            # fuseDirectio, fuseDirectioDefault, fuseDirectioStatus are String/Boolean, were problematic
            shareAvahiEnabled # Boolean
            # shareAvahiSmbName, shareAvahiSmbModel, shareAvahiAfpName, shareAvahiAfpModel are String
            safeMode # Boolean
            startMode # String
            configValid # Boolean
            configError # String
            joinStatus # String
            deviceCount # Int, might be okay
            flashGuid # String
            flashProduct # String
            flashVendor # String
            # regCheck, regFile, regGuid, regTy, regState, regTo, regTm, regTm2, regGen are varied, mostly String/Int
            # sbName, sbVersion, sbUpdated, sbEvents, sbState, sbClean, sbSynced, sbSyncErrs, sbSynced2, sbSyncExit are varied
            # mdColor, mdNumDisks, mdNumDisabled, mdNumInvalid, mdNumMissing, mdNumNew, mdNumErased are Int, potentially okay if counts
            # mdResync, mdResyncCorr, mdResyncPos, mdResyncDb, mdResyncDt, mdResyncAction are varied (Int/Boolean/String)
            # mdResyncSize was an overflow
            mdState # String (enum)
            mdVersion # String
            # cacheNumDevices, cacheSbNumDisks were problematic (NaN)
            # fsState, fsProgress, fsCopyPrcnt, fsNumMounted, fsNumUnmountable, fsUnmountableMask are varied
            shareCount # Int, might be okay
            shareSmbCount # Int, might be okay
            shareNfsCount # Int, might be okay
            shareAfpCount # Int, might be okay
            shareMoverActive # Boolean
            csrfToken # String
          }
        }
        """
        try:
            logger.info("Executing get_unraid_variables tool with a selective query")
            response_data = await make_graphql_request(query)
            return response_data.get("vars", {})
        except Exception as e:
            logger.error(f"Error in get_unraid_variables: {e}", exc_info=True)
            raise ToolError(f"Failed to retrieve Unraid variables: {str(e)}")

    logger.info("System tools registered successfully")
162
unraid_mcp/tools/virtualization.py
Normal file
@@ -0,0 +1,162 @@
|
||||
"""Virtual machine management tools.
|
||||
|
||||
This module provides tools for VM lifecycle management and monitoring
|
||||
including listing VMs, VM operations (start/stop/pause/reboot/etc),
|
||||
and detailed VM information retrieval.
|
||||
"""
|
||||
|
||||
from typing import Any, Dict, List
|
||||
|
||||
from fastmcp import FastMCP
|
||||
|
||||
from ..config.logging import logger
|
||||
from ..core.client import make_graphql_request
|
||||
from ..core.exceptions import ToolError
|
||||
|
||||
|
||||
def register_vm_tools(mcp: FastMCP):
|
||||
"""Register all VM tools with the FastMCP instance.
|
||||
|
||||
Args:
|
||||
mcp: FastMCP instance to register tools with
|
||||
"""
|
||||
|
||||
@mcp.tool()
|
||||
async def list_vms() -> List[Dict[str, Any]]:
|
||||
"""Lists all Virtual Machines (VMs) on the Unraid system and their current state.
|
||||
|
||||
Returns:
|
||||
List of VM information dictionaries with UUID, name, and state
|
||||
"""
|
||||
query = """
|
||||
query ListVMs {
|
||||
vms {
|
||||
id
|
||||
domains {
|
||||
id
|
||||
name
|
||||
state
|
||||
uuid
|
||||
}
|
||||
}
|
||||
}
|
||||
"""
|
||||
try:
|
||||
logger.info("Executing list_vms tool")
|
||||
response_data = await make_graphql_request(query)
|
||||
logger.info(f"VM query response: {response_data}")
|
||||
if response_data.get("vms") and response_data["vms"].get("domains"):
|
||||
vms = response_data["vms"]["domains"]
|
||||
logger.info(f"Found {len(vms)} VMs")
|
||||
return vms
|
||||
else:
|
||||
logger.info("No VMs found in domains field")
|
||||
return []
|
||||
except Exception as e:
|
||||
logger.error(f"Error in list_vms: {e}", exc_info=True)
|
||||
error_msg = str(e)
|
||||
if "VMs are not available" in error_msg:
|
||||
raise ToolError("VMs are not available on this Unraid server. This could mean: 1) VM support is not enabled, 2) VM service is not running, or 3) no VMs are configured. Check Unraid VM settings.")
|
||||
else:
|
||||
raise ToolError(f"Failed to list virtual machines: {error_msg}")
|
||||
|
||||
    @mcp.tool()
    async def manage_vm(vm_uuid: str, action: str) -> Dict[str, Any]:
        """Manages a VM: start, stop, pause, resume, forceStop, reboot, reset. Uses VM UUID.

        Args:
            vm_uuid: UUID of the VM to manage
            action: Action to perform - one of: start, stop, pause, resume, forceStop, reboot, reset

        Returns:
            Dict containing operation success status and details
        """
        valid_actions = ["start", "stop", "pause", "resume", "forceStop", "reboot", "reset"]
        if action not in valid_actions:
            logger.warning(f"Invalid action '{action}' for manage_vm")
            raise ToolError(f"Invalid action. Must be one of {valid_actions}.")

        mutation_name = action
        query = f"""
        mutation ManageVM($id: PrefixedID!) {{
          vm {{
            {mutation_name}(id: $id)
          }}
        }}
        """
        variables = {"id": vm_uuid}
        try:
            logger.info(f"Executing manage_vm tool: action={action}, uuid={vm_uuid}")
            response_data = await make_graphql_request(query, variables)
            if response_data.get("vm") and mutation_name in response_data["vm"]:
                # VM mutations return a Boolean success flag
                success = response_data["vm"][mutation_name]
                return {"success": success, "action": action, "vm_uuid": vm_uuid}
            raise ToolError(f"Failed to {action} VM or unexpected response structure.")
        except Exception as e:
            logger.error(f"Error in manage_vm ({action}): {e}", exc_info=True)
            raise ToolError(f"Failed to {action} virtual machine: {str(e)}")

    @mcp.tool()
    async def get_vm_details(vm_identifier: str) -> Dict[str, Any]:
        """Retrieves detailed information for a specific VM by its UUID or name.

        Args:
            vm_identifier: VM UUID or name to retrieve details for

        Returns:
            Dict containing detailed VM information
        """
        # Make direct GraphQL call instead of calling list_vms() tool
        query = """
        query GetVmDetails {
          vms {
            domains {
              id
              name
              state
              uuid
            }
            domain {
              id
              name
              state
              uuid
            }
          }
        }
        """
        try:
            logger.info(f"Executing get_vm_details for identifier: {vm_identifier}")
            response_data = await make_graphql_request(query)

            if response_data.get("vms"):
                vms_data = response_data["vms"]
                # Try to get VMs from either domains or domain field
                vms = vms_data.get("domains") or vms_data.get("domain") or []

                if vms:
                    for vm_data in vms:
                        if (vm_data.get("uuid") == vm_identifier or
                                vm_data.get("id") == vm_identifier or
                                vm_data.get("name") == vm_identifier):
                            logger.info(f"Found VM {vm_identifier}")
                            return vm_data

                    logger.warning(f"VM with identifier '{vm_identifier}' not found.")
                    available_vms = [f"{vm.get('name')} (UUID: {vm.get('uuid')}, ID: {vm.get('id')})" for vm in vms]
                    raise ToolError(f"VM '{vm_identifier}' not found. Available VMs: {', '.join(available_vms)}")
                else:
                    raise ToolError("No VMs available or VMs not accessible")
            else:
                raise ToolError("No VMs data returned from server")

        except Exception as e:
            logger.error(f"Error in get_vm_details: {e}", exc_info=True)
            error_msg = str(e)
            if "VMs are not available" in error_msg:
                raise ToolError("VMs are not available on this Unraid server. This could mean: 1) VM support is not enabled, 2) VM service is not running, or 3) no VMs are configured. Check Unraid VM settings.")
            else:
                raise ToolError(f"Failed to retrieve VM details: {error_msg}")

    logger.info("VM tools registered successfully")
2129
unraid_mcp_server.py
File diff suppressed because it is too large
108
uv.lock
generated
@@ -1437,7 +1437,7 @@ requires-dist = [
|
||||
{ name = "ruff", marker = "extra == 'dev'", specifier = ">=0.12.8" },
|
||||
{ name = "types-python-dateutil", marker = "extra == 'dev'" },
|
||||
{ name = "uvicorn", specifier = ">=0.35.0" },
|
||||
{ name = "websockets", specifier = ">=14.1" },
|
||||
{ name = "websockets", specifier = ">=13.1,<14.0" },
|
||||
]
|
||||
provides-extras = ["dev"]
|
||||
|
||||
@@ -1466,61 +1466,61 @@ wheels = [
|
||||
|
||||
[[package]]
|
||||
name = "websockets"
|
||||
version = "15.0.1"
|
||||
version = "13.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/21/e6/26d09fab466b7ca9c7737474c52be4f76a40301b08362eb2dbc19dcc16c1/websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee", size = 177016, upload-time = "2025-03-05T20:03:41.606Z" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/e2/73/9223dbc7be3dcaf2a7bbf756c351ec8da04b1fa573edaf545b95f6b0c7fd/websockets-13.1.tar.gz", hash = "sha256:a3b3366087c1bc0a2795111edcadddb8b3b59509d5db5d7ea3fdd69f954a8878", size = 158549, upload-time = "2024-09-21T17:34:21.54Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/da/6462a9f510c0c49837bbc9345aca92d767a56c1fb2939e1579df1e1cdcf7/websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b", size = 175423, upload-time = "2025-03-05T20:01:35.363Z" },
{ url = "https://files.pythonhosted.org/packages/1c/9f/9d11c1a4eb046a9e106483b9ff69bce7ac880443f00e5ce64261b47b07e7/websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205", size = 173080, upload-time = "2025-03-05T20:01:37.304Z" },
{ url = "https://files.pythonhosted.org/packages/d5/4f/b462242432d93ea45f297b6179c7333dd0402b855a912a04e7fc61c0d71f/websockets-15.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5756779642579d902eed757b21b0164cd6fe338506a8083eb58af5c372e39d9a", size = 173329, upload-time = "2025-03-05T20:01:39.668Z" },
{ url = "https://files.pythonhosted.org/packages/6e/0c/6afa1f4644d7ed50284ac59cc70ef8abd44ccf7d45850d989ea7310538d0/websockets-15.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdfe3e2a29e4db3659dbd5bbf04560cea53dd9610273917799f1cde46aa725e", size = 182312, upload-time = "2025-03-05T20:01:41.815Z" },
{ url = "https://files.pythonhosted.org/packages/dd/d4/ffc8bd1350b229ca7a4db2a3e1c482cf87cea1baccd0ef3e72bc720caeec/websockets-15.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c2529b320eb9e35af0fa3016c187dffb84a3ecc572bcee7c3ce302bfeba52bf", size = 181319, upload-time = "2025-03-05T20:01:43.967Z" },
{ url = "https://files.pythonhosted.org/packages/97/3a/5323a6bb94917af13bbb34009fac01e55c51dfde354f63692bf2533ffbc2/websockets-15.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac1e5c9054fe23226fb11e05a6e630837f074174c4c2f0fe442996112a6de4fb", size = 181631, upload-time = "2025-03-05T20:01:46.104Z" },
{ url = "https://files.pythonhosted.org/packages/a6/cc/1aeb0f7cee59ef065724041bb7ed667b6ab1eeffe5141696cccec2687b66/websockets-15.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5df592cd503496351d6dc14f7cdad49f268d8e618f80dce0cd5a36b93c3fc08d", size = 182016, upload-time = "2025-03-05T20:01:47.603Z" },
{ url = "https://files.pythonhosted.org/packages/79/f9/c86f8f7af208e4161a7f7e02774e9d0a81c632ae76db2ff22549e1718a51/websockets-15.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:0a34631031a8f05657e8e90903e656959234f3a04552259458aac0b0f9ae6fd9", size = 181426, upload-time = "2025-03-05T20:01:48.949Z" },
{ url = "https://files.pythonhosted.org/packages/c7/b9/828b0bc6753db905b91df6ae477c0b14a141090df64fb17f8a9d7e3516cf/websockets-15.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d00075aa65772e7ce9e990cab3ff1de702aa09be3940d1dc88d5abf1ab8a09c", size = 181360, upload-time = "2025-03-05T20:01:50.938Z" },
{ url = "https://files.pythonhosted.org/packages/89/fb/250f5533ec468ba6327055b7d98b9df056fb1ce623b8b6aaafb30b55d02e/websockets-15.0.1-cp310-cp310-win32.whl", hash = "sha256:1234d4ef35db82f5446dca8e35a7da7964d02c127b095e172e54397fb6a6c256", size = 176388, upload-time = "2025-03-05T20:01:52.213Z" },
{ url = "https://files.pythonhosted.org/packages/1c/46/aca7082012768bb98e5608f01658ff3ac8437e563eca41cf068bd5849a5e/websockets-15.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:39c1fec2c11dc8d89bba6b2bf1556af381611a173ac2b511cf7231622058af41", size = 176830, upload-time = "2025-03-05T20:01:53.922Z" },
{ url = "https://files.pythonhosted.org/packages/9f/32/18fcd5919c293a398db67443acd33fde142f283853076049824fc58e6f75/websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431", size = 175423, upload-time = "2025-03-05T20:01:56.276Z" },
{ url = "https://files.pythonhosted.org/packages/76/70/ba1ad96b07869275ef42e2ce21f07a5b0148936688c2baf7e4a1f60d5058/websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57", size = 173082, upload-time = "2025-03-05T20:01:57.563Z" },
{ url = "https://files.pythonhosted.org/packages/86/f2/10b55821dd40eb696ce4704a87d57774696f9451108cff0d2824c97e0f97/websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905", size = 173330, upload-time = "2025-03-05T20:01:59.063Z" },
{ url = "https://files.pythonhosted.org/packages/a5/90/1c37ae8b8a113d3daf1065222b6af61cc44102da95388ac0018fcb7d93d9/websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562", size = 182878, upload-time = "2025-03-05T20:02:00.305Z" },
{ url = "https://files.pythonhosted.org/packages/8e/8d/96e8e288b2a41dffafb78e8904ea7367ee4f891dafc2ab8d87e2124cb3d3/websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792", size = 181883, upload-time = "2025-03-05T20:02:03.148Z" },
{ url = "https://files.pythonhosted.org/packages/93/1f/5d6dbf551766308f6f50f8baf8e9860be6182911e8106da7a7f73785f4c4/websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413", size = 182252, upload-time = "2025-03-05T20:02:05.29Z" },
{ url = "https://files.pythonhosted.org/packages/d4/78/2d4fed9123e6620cbf1706c0de8a1632e1a28e7774d94346d7de1bba2ca3/websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8", size = 182521, upload-time = "2025-03-05T20:02:07.458Z" },
{ url = "https://files.pythonhosted.org/packages/e7/3b/66d4c1b444dd1a9823c4a81f50231b921bab54eee2f69e70319b4e21f1ca/websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3", size = 181958, upload-time = "2025-03-05T20:02:09.842Z" },
{ url = "https://files.pythonhosted.org/packages/08/ff/e9eed2ee5fed6f76fdd6032ca5cd38c57ca9661430bb3d5fb2872dc8703c/websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf", size = 181918, upload-time = "2025-03-05T20:02:11.968Z" },
{ url = "https://files.pythonhosted.org/packages/d8/75/994634a49b7e12532be6a42103597b71098fd25900f7437d6055ed39930a/websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85", size = 176388, upload-time = "2025-03-05T20:02:13.32Z" },
{ url = "https://files.pythonhosted.org/packages/98/93/e36c73f78400a65f5e236cd376713c34182e6663f6889cd45a4a04d8f203/websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065", size = 176828, upload-time = "2025-03-05T20:02:14.585Z" },
{ url = "https://files.pythonhosted.org/packages/51/6b/4545a0d843594f5d0771e86463606a3988b5a09ca5123136f8a76580dd63/websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3", size = 175437, upload-time = "2025-03-05T20:02:16.706Z" },
{ url = "https://files.pythonhosted.org/packages/f4/71/809a0f5f6a06522af902e0f2ea2757f71ead94610010cf570ab5c98e99ed/websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665", size = 173096, upload-time = "2025-03-05T20:02:18.832Z" },
{ url = "https://files.pythonhosted.org/packages/3d/69/1a681dd6f02180916f116894181eab8b2e25b31e484c5d0eae637ec01f7c/websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2", size = 173332, upload-time = "2025-03-05T20:02:20.187Z" },
{ url = "https://files.pythonhosted.org/packages/a6/02/0073b3952f5bce97eafbb35757f8d0d54812b6174ed8dd952aa08429bcc3/websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215", size = 183152, upload-time = "2025-03-05T20:02:22.286Z" },
{ url = "https://files.pythonhosted.org/packages/74/45/c205c8480eafd114b428284840da0b1be9ffd0e4f87338dc95dc6ff961a1/websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5", size = 182096, upload-time = "2025-03-05T20:02:24.368Z" },
{ url = "https://files.pythonhosted.org/packages/14/8f/aa61f528fba38578ec553c145857a181384c72b98156f858ca5c8e82d9d3/websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65", size = 182523, upload-time = "2025-03-05T20:02:25.669Z" },
{ url = "https://files.pythonhosted.org/packages/ec/6d/0267396610add5bc0d0d3e77f546d4cd287200804fe02323797de77dbce9/websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe", size = 182790, upload-time = "2025-03-05T20:02:26.99Z" },
{ url = "https://files.pythonhosted.org/packages/02/05/c68c5adbf679cf610ae2f74a9b871ae84564462955d991178f95a1ddb7dd/websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4", size = 182165, upload-time = "2025-03-05T20:02:30.291Z" },
{ url = "https://files.pythonhosted.org/packages/29/93/bb672df7b2f5faac89761cb5fa34f5cec45a4026c383a4b5761c6cea5c16/websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597", size = 182160, upload-time = "2025-03-05T20:02:31.634Z" },
{ url = "https://files.pythonhosted.org/packages/ff/83/de1f7709376dc3ca9b7eeb4b9a07b4526b14876b6d372a4dc62312bebee0/websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9", size = 176395, upload-time = "2025-03-05T20:02:33.017Z" },
{ url = "https://files.pythonhosted.org/packages/7d/71/abf2ebc3bbfa40f391ce1428c7168fb20582d0ff57019b69ea20fa698043/websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7", size = 176841, upload-time = "2025-03-05T20:02:34.498Z" },
{ url = "https://files.pythonhosted.org/packages/cb/9f/51f0cf64471a9d2b4d0fc6c534f323b664e7095640c34562f5182e5a7195/websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931", size = 175440, upload-time = "2025-03-05T20:02:36.695Z" },
{ url = "https://files.pythonhosted.org/packages/8a/05/aa116ec9943c718905997412c5989f7ed671bc0188ee2ba89520e8765d7b/websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675", size = 173098, upload-time = "2025-03-05T20:02:37.985Z" },
{ url = "https://files.pythonhosted.org/packages/ff/0b/33cef55ff24f2d92924923c99926dcce78e7bd922d649467f0eda8368923/websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151", size = 173329, upload-time = "2025-03-05T20:02:39.298Z" },
{ url = "https://files.pythonhosted.org/packages/31/1d/063b25dcc01faa8fada1469bdf769de3768b7044eac9d41f734fd7b6ad6d/websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22", size = 183111, upload-time = "2025-03-05T20:02:40.595Z" },
{ url = "https://files.pythonhosted.org/packages/93/53/9a87ee494a51bf63e4ec9241c1ccc4f7c2f45fff85d5bde2ff74fcb68b9e/websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f", size = 182054, upload-time = "2025-03-05T20:02:41.926Z" },
{ url = "https://files.pythonhosted.org/packages/ff/b2/83a6ddf56cdcbad4e3d841fcc55d6ba7d19aeb89c50f24dd7e859ec0805f/websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8", size = 182496, upload-time = "2025-03-05T20:02:43.304Z" },
{ url = "https://files.pythonhosted.org/packages/98/41/e7038944ed0abf34c45aa4635ba28136f06052e08fc2168520bb8b25149f/websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375", size = 182829, upload-time = "2025-03-05T20:02:48.812Z" },
{ url = "https://files.pythonhosted.org/packages/e0/17/de15b6158680c7623c6ef0db361da965ab25d813ae54fcfeae2e5b9ef910/websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d", size = 182217, upload-time = "2025-03-05T20:02:50.14Z" },
{ url = "https://files.pythonhosted.org/packages/33/2b/1f168cb6041853eef0362fb9554c3824367c5560cbdaad89ac40f8c2edfc/websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4", size = 182195, upload-time = "2025-03-05T20:02:51.561Z" },
{ url = "https://files.pythonhosted.org/packages/86/eb/20b6cdf273913d0ad05a6a14aed4b9a85591c18a987a3d47f20fa13dcc47/websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa", size = 176393, upload-time = "2025-03-05T20:02:53.814Z" },
{ url = "https://files.pythonhosted.org/packages/1b/6c/c65773d6cab416a64d191d6ee8a8b1c68a09970ea6909d16965d26bfed1e/websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561", size = 176837, upload-time = "2025-03-05T20:02:55.237Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/d40f779fa16f74d3468357197af8d6ad07e7c5a27ea1ca74ceb38986f77a/websockets-15.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0c9e74d766f2818bb95f84c25be4dea09841ac0f734d1966f415e4edfc4ef1c3", size = 173109, upload-time = "2025-03-05T20:03:17.769Z" },
{ url = "https://files.pythonhosted.org/packages/bc/cd/5b887b8585a593073fd92f7c23ecd3985cd2c3175025a91b0d69b0551372/websockets-15.0.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1009ee0c7739c08a0cd59de430d6de452a55e42d6b522de7aa15e6f67db0b8e1", size = 173343, upload-time = "2025-03-05T20:03:19.094Z" },
{ url = "https://files.pythonhosted.org/packages/fe/ae/d34f7556890341e900a95acf4886833646306269f899d58ad62f588bf410/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76d1f20b1c7a2fa82367e04982e708723ba0e7b8d43aa643d3dcd404d74f1475", size = 174599, upload-time = "2025-03-05T20:03:21.1Z" },
{ url = "https://files.pythonhosted.org/packages/71/e6/5fd43993a87db364ec60fc1d608273a1a465c0caba69176dd160e197ce42/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f29d80eb9a9263b8d109135351caf568cc3f80b9928bccde535c235de55c22d9", size = 174207, upload-time = "2025-03-05T20:03:23.221Z" },
{ url = "https://files.pythonhosted.org/packages/2b/fb/c492d6daa5ec067c2988ac80c61359ace5c4c674c532985ac5a123436cec/websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b359ed09954d7c18bbc1680f380c7301f92c60bf924171629c5db97febb12f04", size = 174155, upload-time = "2025-03-05T20:03:25.321Z" },
{ url = "https://files.pythonhosted.org/packages/68/a1/dcb68430b1d00b698ae7a7e0194433bce4f07ded185f0ee5fb21e2a2e91e/websockets-15.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cad21560da69f4ce7658ca2cb83138fb4cf695a2ba3e475e0559e05991aa8122", size = 176884, upload-time = "2025-03-05T20:03:27.934Z" },
{ url = "https://files.pythonhosted.org/packages/fa/a8/5b41e0da817d64113292ab1f8247140aac61cbf6cfd085d6a0fa77f4984f/websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f", size = 169743, upload-time = "2025-03-05T20:03:39.41Z" },
{ url = "https://files.pythonhosted.org/packages/0a/94/d15dbfc6a5eb636dbc754303fba18208f2e88cf97e733e1d64fb9cb5c89e/websockets-13.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f48c749857f8fb598fb890a75f540e3221d0976ed0bf879cf3c7eef34151acee", size = 157815, upload-time = "2024-09-21T17:32:27.107Z" },
{ url = "https://files.pythonhosted.org/packages/30/02/c04af33f4663945a26f5e8cf561eb140c35452b50af47a83c3fbcfe62ae1/websockets-13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c7e72ce6bda6fb9409cc1e8164dd41d7c91466fb599eb047cfda72fe758a34a7", size = 155466, upload-time = "2024-09-21T17:32:28.428Z" },
{ url = "https://files.pythonhosted.org/packages/35/e8/719f08d12303ea643655e52d9e9851b2dadbb1991d4926d9ce8862efa2f5/websockets-13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f779498eeec470295a2b1a5d97aa1bc9814ecd25e1eb637bd9d1c73a327387f6", size = 155716, upload-time = "2024-09-21T17:32:29.905Z" },
{ url = "https://files.pythonhosted.org/packages/91/e1/14963ae0252a8925f7434065d25dcd4701d5e281a0b4b460a3b5963d2594/websockets-13.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4676df3fe46956fbb0437d8800cd5f2b6d41143b6e7e842e60554398432cf29b", size = 164806, upload-time = "2024-09-21T17:32:31.384Z" },
{ url = "https://files.pythonhosted.org/packages/ec/fa/ab28441bae5e682a0f7ddf3d03440c0c352f930da419301f4a717f675ef3/websockets-13.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7affedeb43a70351bb811dadf49493c9cfd1ed94c9c70095fd177e9cc1541fa", size = 163810, upload-time = "2024-09-21T17:32:32.384Z" },
{ url = "https://files.pythonhosted.org/packages/44/77/dea187bd9d16d4b91566a2832be31f99a40d0f5bfa55eeb638eb2c3bc33d/websockets-13.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1971e62d2caa443e57588e1d82d15f663b29ff9dfe7446d9964a4b6f12c1e700", size = 164125, upload-time = "2024-09-21T17:32:33.398Z" },
{ url = "https://files.pythonhosted.org/packages/cf/d9/3af14544e83f1437eb684b399e6ba0fa769438e869bf5d83d74bc197fae8/websockets-13.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5f2e75431f8dc4a47f31565a6e1355fb4f2ecaa99d6b89737527ea917066e26c", size = 164532, upload-time = "2024-09-21T17:32:35.109Z" },
{ url = "https://files.pythonhosted.org/packages/1c/8a/6d332eabe7d59dfefe4b8ba6f46c8c5fabb15b71c8a8bc3d2b65de19a7b6/websockets-13.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:58cf7e75dbf7e566088b07e36ea2e3e2bd5676e22216e4cad108d4df4a7402a0", size = 163948, upload-time = "2024-09-21T17:32:36.214Z" },
{ url = "https://files.pythonhosted.org/packages/1a/91/a0aeadbaf3017467a1ee03f8fb67accdae233fe2d5ad4b038c0a84e357b0/websockets-13.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c90d6dec6be2c7d03378a574de87af9b1efea77d0c52a8301dd831ece938452f", size = 163898, upload-time = "2024-09-21T17:32:37.277Z" },
{ url = "https://files.pythonhosted.org/packages/71/31/a90fb47c63e0ae605be914b0b969d7c6e6ffe2038cd744798e4b3fbce53b/websockets-13.1-cp310-cp310-win32.whl", hash = "sha256:730f42125ccb14602f455155084f978bd9e8e57e89b569b4d7f0f0c17a448ffe", size = 158706, upload-time = "2024-09-21T17:32:38.755Z" },
{ url = "https://files.pythonhosted.org/packages/93/ca/9540a9ba80da04dc7f36d790c30cae4252589dbd52ccdc92e75b0be22437/websockets-13.1-cp310-cp310-win_amd64.whl", hash = "sha256:5993260f483d05a9737073be197371940c01b257cc45ae3f1d5d7adb371b266a", size = 159141, upload-time = "2024-09-21T17:32:40.495Z" },
{ url = "https://files.pythonhosted.org/packages/b2/f0/cf0b8a30d86b49e267ac84addbebbc7a48a6e7bb7c19db80f62411452311/websockets-13.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:61fc0dfcda609cda0fc9fe7977694c0c59cf9d749fbb17f4e9483929e3c48a19", size = 157813, upload-time = "2024-09-21T17:32:42.188Z" },
{ url = "https://files.pythonhosted.org/packages/bf/e7/22285852502e33071a8cf0ac814f8988480ec6db4754e067b8b9d0e92498/websockets-13.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ceec59f59d092c5007e815def4ebb80c2de330e9588e101cf8bd94c143ec78a5", size = 155469, upload-time = "2024-09-21T17:32:43.858Z" },
{ url = "https://files.pythonhosted.org/packages/68/d4/c8c7c1e5b40ee03c5cc235955b0fb1ec90e7e37685a5f69229ad4708dcde/websockets-13.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c1dca61c6db1166c48b95198c0b7d9c990b30c756fc2923cc66f68d17dc558fd", size = 155717, upload-time = "2024-09-21T17:32:44.914Z" },
{ url = "https://files.pythonhosted.org/packages/c9/e4/c50999b9b848b1332b07c7fd8886179ac395cb766fda62725d1539e7bc6c/websockets-13.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:308e20f22c2c77f3f39caca508e765f8725020b84aa963474e18c59accbf4c02", size = 165379, upload-time = "2024-09-21T17:32:45.933Z" },
{ url = "https://files.pythonhosted.org/packages/bc/49/4a4ad8c072f18fd79ab127650e47b160571aacfc30b110ee305ba25fffc9/websockets-13.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:62d516c325e6540e8a57b94abefc3459d7dab8ce52ac75c96cad5549e187e3a7", size = 164376, upload-time = "2024-09-21T17:32:46.987Z" },
{ url = "https://files.pythonhosted.org/packages/af/9b/8c06d425a1d5a74fd764dd793edd02be18cf6fc3b1ccd1f29244ba132dc0/websockets-13.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87c6e35319b46b99e168eb98472d6c7d8634ee37750d7693656dc766395df096", size = 164753, upload-time = "2024-09-21T17:32:48.046Z" },
{ url = "https://files.pythonhosted.org/packages/d5/5b/0acb5815095ff800b579ffc38b13ab1b915b317915023748812d24e0c1ac/websockets-13.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:5f9fee94ebafbc3117c30be1844ed01a3b177bb6e39088bc6b2fa1dc15572084", size = 165051, upload-time = "2024-09-21T17:32:49.271Z" },
{ url = "https://files.pythonhosted.org/packages/30/93/c3891c20114eacb1af09dedfcc620c65c397f4fd80a7009cd12d9457f7f5/websockets-13.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:7c1e90228c2f5cdde263253fa5db63e6653f1c00e7ec64108065a0b9713fa1b3", size = 164489, upload-time = "2024-09-21T17:32:50.392Z" },
{ url = "https://files.pythonhosted.org/packages/28/09/af9e19885539759efa2e2cd29b8b3f9eecef7ecefea40d46612f12138b36/websockets-13.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6548f29b0e401eea2b967b2fdc1c7c7b5ebb3eeb470ed23a54cd45ef078a0db9", size = 164438, upload-time = "2024-09-21T17:32:52.223Z" },
{ url = "https://files.pythonhosted.org/packages/b6/08/6f38b8e625b3d93de731f1d248cc1493327f16cb45b9645b3e791782cff0/websockets-13.1-cp311-cp311-win32.whl", hash = "sha256:c11d4d16e133f6df8916cc5b7e3e96ee4c44c936717d684a94f48f82edb7c92f", size = 158710, upload-time = "2024-09-21T17:32:53.244Z" },
{ url = "https://files.pythonhosted.org/packages/fb/39/ec8832ecb9bb04a8d318149005ed8cee0ba4e0205835da99e0aa497a091f/websockets-13.1-cp311-cp311-win_amd64.whl", hash = "sha256:d04f13a1d75cb2b8382bdc16ae6fa58c97337253826dfe136195b7f89f661557", size = 159137, upload-time = "2024-09-21T17:32:54.721Z" },
{ url = "https://files.pythonhosted.org/packages/df/46/c426282f543b3c0296cf964aa5a7bb17e984f58dde23460c3d39b3148fcf/websockets-13.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:9d75baf00138f80b48f1eac72ad1535aac0b6461265a0bcad391fc5aba875cfc", size = 157821, upload-time = "2024-09-21T17:32:56.442Z" },
{ url = "https://files.pythonhosted.org/packages/aa/85/22529867010baac258da7c45848f9415e6cf37fef00a43856627806ffd04/websockets-13.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:9b6f347deb3dcfbfde1c20baa21c2ac0751afaa73e64e5b693bb2b848efeaa49", size = 155480, upload-time = "2024-09-21T17:32:57.698Z" },
{ url = "https://files.pythonhosted.org/packages/29/2c/bdb339bfbde0119a6e84af43ebf6275278698a2241c2719afc0d8b0bdbf2/websockets-13.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:de58647e3f9c42f13f90ac7e5f58900c80a39019848c5547bc691693098ae1bd", size = 155715, upload-time = "2024-09-21T17:32:59.429Z" },
{ url = "https://files.pythonhosted.org/packages/9f/d0/8612029ea04c5c22bf7af2fd3d63876c4eaeef9b97e86c11972a43aa0e6c/websockets-13.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1b54689e38d1279a51d11e3467dd2f3a50f5f2e879012ce8f2d6943f00e83f0", size = 165647, upload-time = "2024-09-21T17:33:00.495Z" },
{ url = "https://files.pythonhosted.org/packages/56/04/1681ed516fa19ca9083f26d3f3a302257e0911ba75009533ed60fbb7b8d1/websockets-13.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf1781ef73c073e6b0f90af841aaf98501f975d306bbf6221683dd594ccc52b6", size = 164592, upload-time = "2024-09-21T17:33:02.223Z" },
{ url = "https://files.pythonhosted.org/packages/38/6f/a96417a49c0ed132bb6087e8e39a37db851c70974f5c724a4b2a70066996/websockets-13.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d23b88b9388ed85c6faf0e74d8dec4f4d3baf3ecf20a65a47b836d56260d4b9", size = 165012, upload-time = "2024-09-21T17:33:03.288Z" },
{ url = "https://files.pythonhosted.org/packages/40/8b/fccf294919a1b37d190e86042e1a907b8f66cff2b61e9befdbce03783e25/websockets-13.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3c78383585f47ccb0fcf186dcb8a43f5438bd7d8f47d69e0b56f71bf431a0a68", size = 165311, upload-time = "2024-09-21T17:33:04.728Z" },
{ url = "https://files.pythonhosted.org/packages/c1/61/f8615cf7ce5fe538476ab6b4defff52beb7262ff8a73d5ef386322d9761d/websockets-13.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:d6d300f8ec35c24025ceb9b9019ae9040c1ab2f01cddc2bcc0b518af31c75c14", size = 164692, upload-time = "2024-09-21T17:33:05.829Z" },
{ url = "https://files.pythonhosted.org/packages/5c/f1/a29dd6046d3a722d26f182b783a7997d25298873a14028c4760347974ea3/websockets-13.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a9dcaf8b0cc72a392760bb8755922c03e17a5a54e08cca58e8b74f6902b433cf", size = 164686, upload-time = "2024-09-21T17:33:06.823Z" },
{ url = "https://files.pythonhosted.org/packages/0f/99/ab1cdb282f7e595391226f03f9b498f52109d25a2ba03832e21614967dfa/websockets-13.1-cp312-cp312-win32.whl", hash = "sha256:2f85cf4f2a1ba8f602298a853cec8526c2ca42a9a4b947ec236eaedb8f2dc80c", size = 158712, upload-time = "2024-09-21T17:33:07.877Z" },
{ url = "https://files.pythonhosted.org/packages/46/93/e19160db48b5581feac8468330aa11b7292880a94a37d7030478596cc14e/websockets-13.1-cp312-cp312-win_amd64.whl", hash = "sha256:38377f8b0cdeee97c552d20cf1865695fcd56aba155ad1b4ca8779a5b6ef4ac3", size = 159145, upload-time = "2024-09-21T17:33:09.202Z" },
{ url = "https://files.pythonhosted.org/packages/51/20/2b99ca918e1cbd33c53db2cace5f0c0cd8296fc77558e1908799c712e1cd/websockets-13.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a9ab1e71d3d2e54a0aa646ab6d4eebfaa5f416fe78dfe4da2839525dc5d765c6", size = 157828, upload-time = "2024-09-21T17:33:10.987Z" },
{ url = "https://files.pythonhosted.org/packages/b8/47/0932a71d3d9c0e9483174f60713c84cee58d62839a143f21a2bcdbd2d205/websockets-13.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:b9d7439d7fab4dce00570bb906875734df13d9faa4b48e261c440a5fec6d9708", size = 155487, upload-time = "2024-09-21T17:33:12.153Z" },
{ url = "https://files.pythonhosted.org/packages/a9/60/f1711eb59ac7a6c5e98e5637fef5302f45b6f76a2c9d64fd83bbb341377a/websockets-13.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:327b74e915cf13c5931334c61e1a41040e365d380f812513a255aa804b183418", size = 155721, upload-time = "2024-09-21T17:33:13.909Z" },
{ url = "https://files.pythonhosted.org/packages/6a/e6/ba9a8db7f9d9b0e5f829cf626ff32677f39824968317223605a6b419d445/websockets-13.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:325b1ccdbf5e5725fdcb1b0e9ad4d2545056479d0eee392c291c1bf76206435a", size = 165609, upload-time = "2024-09-21T17:33:14.967Z" },
{ url = "https://files.pythonhosted.org/packages/c1/22/4ec80f1b9c27a0aebd84ccd857252eda8418ab9681eb571b37ca4c5e1305/websockets-13.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:346bee67a65f189e0e33f520f253d5147ab76ae42493804319b5716e46dddf0f", size = 164556, upload-time = "2024-09-21T17:33:17.113Z" },
{ url = "https://files.pythonhosted.org/packages/27/ac/35f423cb6bb15600438db80755609d27eda36d4c0b3c9d745ea12766c45e/websockets-13.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:91a0fa841646320ec0d3accdff5b757b06e2e5c86ba32af2e0815c96c7a603c5", size = 164993, upload-time = "2024-09-21T17:33:18.168Z" },
{ url = "https://files.pythonhosted.org/packages/31/4e/98db4fd267f8be9e52e86b6ee4e9aa7c42b83452ea0ea0672f176224b977/websockets-13.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:18503d2c5f3943e93819238bf20df71982d193f73dcecd26c94514f417f6b135", size = 165360, upload-time = "2024-09-21T17:33:19.233Z" },
{ url = "https://files.pythonhosted.org/packages/3f/15/3f0de7cda70ffc94b7e7024544072bc5b26e2c1eb36545291abb755d8cdb/websockets-13.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:a9cd1af7e18e5221d2878378fbc287a14cd527fdd5939ed56a18df8a31136bb2", size = 164745, upload-time = "2024-09-21T17:33:20.361Z" },
{ url = "https://files.pythonhosted.org/packages/a1/6e/66b6b756aebbd680b934c8bdbb6dcb9ce45aad72cde5f8a7208dbb00dd36/websockets-13.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:70c5be9f416aa72aab7a2a76c90ae0a4fe2755c1816c153c1a2bcc3333ce4ce6", size = 164732, upload-time = "2024-09-21T17:33:23.103Z" },
{ url = "https://files.pythonhosted.org/packages/35/c6/12e3aab52c11aeb289e3dbbc05929e7a9d90d7a9173958477d3ef4f8ce2d/websockets-13.1-cp313-cp313-win32.whl", hash = "sha256:624459daabeb310d3815b276c1adef475b3e6804abaf2d9d2c061c319f7f187d", size = 158709, upload-time = "2024-09-21T17:33:24.196Z" },
{ url = "https://files.pythonhosted.org/packages/41/d8/63d6194aae711d7263df4498200c690a9c39fb437ede10f3e157a6343e0d/websockets-13.1-cp313-cp313-win_amd64.whl", hash = "sha256:c518e84bb59c2baae725accd355c8dc517b4a3ed8db88b4bc93c78dae2974bf2", size = 159144, upload-time = "2024-09-21T17:33:25.96Z" },
{ url = "https://files.pythonhosted.org/packages/2d/75/6da22cb3ad5b8c606963f9a5f9f88656256fecc29d420b4b2bf9e0c7d56f/websockets-13.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:5dd6da9bec02735931fccec99d97c29f47cc61f644264eb995ad6c0c27667238", size = 155499, upload-time = "2024-09-21T17:33:54.917Z" },
{ url = "https://files.pythonhosted.org/packages/c0/ba/22833d58629088fcb2ccccedfae725ac0bbcd713319629e97125b52ac681/websockets-13.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:2510c09d8e8df777177ee3d40cd35450dc169a81e747455cc4197e63f7e7bfe5", size = 155737, upload-time = "2024-09-21T17:33:56.052Z" },
{ url = "https://files.pythonhosted.org/packages/95/54/61684fe22bdb831e9e1843d972adadf359cf04ab8613285282baea6a24bb/websockets-13.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f1c3cf67185543730888b20682fb186fc8d0fa6f07ccc3ef4390831ab4b388d9", size = 157095, upload-time = "2024-09-21T17:33:57.21Z" },
{ url = "https://files.pythonhosted.org/packages/fc/f5/6652fb82440813822022a9301a30afde85e5ff3fb2aebb77f34aabe2b4e8/websockets-13.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bcc03c8b72267e97b49149e4863d57c2d77f13fae12066622dc78fe322490fe6", size = 156701, upload-time = "2024-09-21T17:33:59.061Z" },
{ url = "https://files.pythonhosted.org/packages/67/33/ae82a7b860fa8a08aba68818bdf7ff61f04598aa5ab96df4cd5a3e418ca4/websockets-13.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:004280a140f220c812e65f36944a9ca92d766b6cc4560be652a0a3883a79ed8a", size = 156654, upload-time = "2024-09-21T17:34:00.944Z" },
{ url = "https://files.pythonhosted.org/packages/63/0b/a1b528d36934f833e20f6da1032b995bf093d55cb416b9f2266f229fb237/websockets-13.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:e2620453c075abeb0daa949a292e19f56de518988e079c36478bacf9546ced23", size = 159192, upload-time = "2024-09-21T17:34:02.656Z" },
{ url = "https://files.pythonhosted.org/packages/56/27/96a5cd2626d11c8280656c6c71d8ab50fe006490ef9971ccd154e0c42cd2/websockets-13.1-py3-none-any.whl", hash = "sha256:a9a396a6ad26130cdae92ae10c36af09d9bfe6cafe69670fd3b6da9b07b4044f", size = 152134, upload-time = "2024-09-21T17:34:19.904Z" },
]
[[package]]
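The uv.lock change above records a downgrade of websockets from 15.0.1 to 13.1, driven by the new `>=13.1,<14.0` specifier in the project requirements. A hedged sketch of what the corresponding pyproject.toml entry would look like, assuming the project uses standard PEP 621 `[project]` metadata (this diff does not show pyproject.toml itself):

```toml
[project]
dependencies = [
    "uvicorn>=0.35.0",
    # Pinned below 14.0 so uv resolves the 13.x series,
    # matching the resolved 13.1 seen in uv.lock.
    "websockets>=13.1,<14.0",
]
```

Running `uv lock` after editing the specifier is what regenerates the `[[package]]` entry and wheel list shown in this diff.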