Compare commits

12 commits

| SHA1 |
|---|
| cbbed83915 |
| 1e72bc9a28 |
| b0c95323fd |
| d60e753acf |
| 94c38359c7 |
| 2943fc79ab |
| 3e40338bbf |
| 39f9651236 |
| 3175c53504 |
| 29cf2aa6bd |
| b881ef635a |
| e35db0a361 |
HISTORY.md (+58)

@@ -4,6 +4,64 @@ Changelog

 (unreleased)
 ------------
+
+Fix
+~~~
+- Better code generation, refs NOISSUE. [Simon Diesenreiter]
+
+
+0.9.4 (2026-04-11)
+------------------
+
+Fix
+~~~
+- Add commit retry, refs NOISSUE. [Simon Diesenreiter]
+
+Other
+~~~~~
+
+
+0.9.3 (2026-04-11)
+------------------
+
+Fix
+~~~
+- Better home assistant integration, refs NOISSUE. [Simon Diesenreiter]
+
+Other
+~~~~~
+
+
+0.9.2 (2026-04-11)
+------------------
+
+Fix
+~~~
+- UI improvements and prompt hardening, refs NOISSUE. [Simon
+  Diesenreiter]
+
+Other
+~~~~~
+
+
+0.9.1 (2026-04-11)
+------------------
+
+Fix
+~~~
+- Better repo name generation, refs NOISSUE. [Simon Diesenreiter]
+
+Other
+~~~~~
+
+
+0.9.0 (2026-04-11)
+------------------
+- Feat: editable guardrails, refs NOISSUE. [Simon Diesenreiter]
+
+
+0.8.0 (2026-04-11)
+------------------
 - Feat: better dashboard reloading mechanism, refs NOISSUE. [Simon
   Diesenreiter]
 - Feat: add explicit workflow steps and guardrail prompts, refs NOISSUE.
README.md (+18)

@@ -48,6 +48,7 @@ OLLAMA_URL=http://localhost:11434
 OLLAMA_MODEL=llama3
 
 # Gitea
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
 GITEA_URL=https://gitea.yourserver.com
 GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=ai-software-factory
@@ -69,6 +70,12 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
 # Telegram
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+# Optional: Home Assistant integration.
+# Only the base URL and token are required in the environment.
+# Entity ids, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+
 ```
 
 ### Build and Run
@@ -93,6 +100,7 @@ docker-compose up -d
 
 The backend now interprets free-form Telegram text with Ollama before generation.
 If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat.
+If queueing is enabled from the dashboard System tab, Telegram prompts are stored in a durable queue and processed only when the configured Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
 
 2. **Monitor progress via Web UI:**
 
@@ -104,6 +112,16 @@ docker-compose up -d
 
 If you deploy the container with PostgreSQL environment variables set, the service now selects PostgreSQL automatically even though SQLite remains the default for local/test usage.
+
+The health tab now shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics so misconfigured integrations are visible without checking container logs.
+
+The dashboard Health tab exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
+
+The dashboard System tab now also stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
+
+Projects that show `uncommitted`, `local_only`, or `pushed_no_pr` delivery warnings in the dashboard can now be retried in place from the UI before resorting to purging orphan audit rows.
+
+Guardrail and system prompts are no longer environment-only in practice: the factory can persist DB-backed overrides for the editable LLM prompt set, expose them at `/llm/prompts`, and edit them from the dashboard System tab. Environment values still act as defaults and as the reset target.
 
 ## API Endpoints
 
 | Endpoint | Method | Description |
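The energy gate described in the README additions above boils down to a small predicate. A minimal sketch under stated assumptions: the function name and the threshold defaults of 95 % battery and 500 W surplus are illustrative only; in the project the real values are configured from the dashboard System tab.

```python
# Illustrative sketch of the queue energy gate, not the project's code.
def gate_allows_processing(battery_percent, surplus_watts,
                           battery_threshold=95.0, surplus_threshold=500.0,
                           force=False):
    """Queued prompts run only when both thresholds are met, unless forced."""
    if force:
        return True  # /queue/process or process_now=true bypasses the gate
    if battery_percent is None or surplus_watts is None:
        return False  # missing sensor data blocks the queue
    return (battery_percent >= battery_threshold
            and surplus_watts >= surplus_threshold)

print(gate_allows_processing(100, 800))           # gate open
print(gate_allows_processing(40, 800))            # battery too low
print(gate_allows_processing(40, 0, force=True))  # override always allowed
```

This mirrors the behavior the README describes: both checks must pass, and a forced run skips them entirely.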
Additional file (name not shown in this capture):

@@ -24,7 +24,7 @@ LLM_MAX_TOOL_CALL_ROUNDS=1
 
 # Gitea
 # Configure Gitea API for your organization
-# GITEA_URL can be left empty to use GITEA_ORGANIZATION instead of GITEA_OWNER
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev automatically.
 GITEA_URL=https://gitea.yourserver.com
 GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=your_organization_name
@@ -42,6 +42,12 @@ N8N_PASSWORD=your_secure_password
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+# Home Assistant energy gate for queued Telegram prompts
+# Only the base URL and token are environment-backed.
+# Queue toggles, entity ids, thresholds, and batch sizing can be edited in the dashboard System tab and are stored in the database.
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+
 # PostgreSQL
 # In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.
 # You can also set USE_SQLITE=false explicitly if you want the intent to be obvious.
Additional file (name not shown in this capture):

@@ -62,10 +62,11 @@ LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "
 LLM_MAX_TOOL_CALL_ROUNDS=1
 
 # Gitea
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
 GITEA_URL=https://gitea.yourserver.com
-GITEA_TOKEN= analyze your_gitea_api_token
+GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=ai-software-factory
-GITEA_REPO=ai-software-factory
+GITEA_REPO=
 
 # n8n
 N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
@@ -73,6 +74,12 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
 # Telegram
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+# Optional: Home Assistant integration.
+# Only the base URL and token are required in the environment.
+# Entity ids, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+
 ```
 
 ### Build and Run
@@ -95,6 +102,10 @@ docker-compose up -d
 Features: user authentication, task CRUD, notifications
 ```
 
+If queueing is enabled from the dashboard System tab, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
+
+The dashboard System tab stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
+
 2. **Monitor progress via Web UI:**
 
 Open `http://yourserver:8000` to see real-time progress
@@ -138,6 +149,12 @@ New project creation can also run a dedicated `project_id_naming` stage. `LLM_PR
 
 Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.
+
+Operational visibility for the Gitea integration, Home Assistant energy gate, and queued prompt counts is available in the dashboard Health tab, plus `/gitea/health`, `/home-assistant/health`, and `/queue`.
+
+The dashboard Health tab also includes operator controls for manually processing queued Telegram prompts, force-processing them when needed, and retrying failed items.
+
+Editable guardrail and system prompts are persisted in the database as overrides on top of the environment defaults. The current merged values are available at `/llm/prompts`, and the dashboard System tab can edit or reset them without restarting the service.
 
 These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
 
 ## Development
Version file (name not shown):

@@ -1 +1 @@
-0.8.0
+0.9.5
File diff suppressed because it is too large.
GiteaAPI module (file name not shown):

@@ -4,6 +4,20 @@ import os
 import urllib.error
 import urllib.request
 import json
+from urllib.parse import urlparse
+
+
+def _normalize_base_url(base_url: str) -> str:
+    """Normalize host-only service addresses into valid absolute URLs."""
+    normalized = (base_url or '').strip().rstrip('/')
+    if not normalized:
+        return ''
+    if '://' not in normalized:
+        normalized = f'https://{normalized}'
+    parsed = urlparse(normalized)
+    if not parsed.scheme or not parsed.netloc:
+        return ''
+    return normalized
 
 
 class GiteaAPI:
@@ -11,7 +25,7 @@ class GiteaAPI:
 
     def __init__(self, token: str, base_url: str, owner: str | None = None, repo: str | None = None):
         self.token = token
-        self.base_url = base_url.rstrip("/")
+        self.base_url = _normalize_base_url(base_url)
         self.owner = owner
         self.repo = repo
         self.headers = {
@@ -26,7 +40,7 @@ class GiteaAPI:
         owner = os.getenv("GITEA_OWNER", "ai-test")
         repo = os.getenv("GITEA_REPO", "")
         return {
-            "base_url": base_url.rstrip("/"),
+            "base_url": _normalize_base_url(base_url),
             "token": token,
             "owner": owner,
             "repo": repo,
@@ -96,16 +110,16 @@ class GiteaAPI:
 
     def _request_sync(self, method: str, path: str, payload: dict | None = None) -> dict:
         """Perform a synchronous Gitea API request."""
-        request = urllib.request.Request(
-            self._api_url(path),
-            headers=self.get_auth_headers(),
-            method=method.upper(),
-        )
-        data = None
-        if payload is not None:
-            data = json.dumps(payload).encode('utf-8')
-            request.data = data
         try:
+            if not self.base_url:
+                return {'error': 'Gitea base URL is not configured or is invalid'}
+            request = urllib.request.Request(
+                self._api_url(path),
+                headers=self.get_auth_headers(),
+                method=method.upper(),
+            )
+            if payload is not None:
+                request.data = json.dumps(payload).encode('utf-8')
             with urllib.request.urlopen(request) as response:
                 body = response.read().decode('utf-8')
                 return json.loads(body) if body else {}
@@ -182,6 +196,10 @@ class GiteaAPI:
         """Get the user associated with the configured token."""
         return await self._request("GET", "user")
 
+    def get_current_user_sync(self) -> dict:
+        """Synchronously get the user associated with the configured token."""
+        return self._request_sync("GET", "user")
+
     async def create_branch(self, branch: str, base: str = "main", owner: str | None = None, repo: str | None = None):
         """Create a new branch."""
         _owner = owner or self.owner
@@ -212,6 +230,26 @@ class GiteaAPI:
         }
         return await self._request("POST", f"repos/{_owner}/{_repo}/pulls", payload)
 
+    def create_pull_request_sync(
+        self,
+        title: str,
+        body: str,
+        owner: str,
+        repo: str,
+        base: str = "main",
+        head: str | None = None,
+    ) -> dict:
+        """Synchronously create a pull request."""
+        _owner = owner or self.owner
+        _repo = repo or self.repo
+        payload = {
+            "title": title,
+            "body": body,
+            "base": base,
+            "head": head or f"{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
+        }
+        return self._request_sync("POST", f"repos/{_owner}/{_repo}/pulls", payload)
+
     async def list_pull_requests(
         self,
         owner: str | None = None,
@@ -383,4 +421,14 @@ class GiteaAPI:
         if not _repo:
             return {"error": "Repository name required for org operations"}
 
         return await self._request("GET", f"repos/{_owner}/{_repo}")
+
+    def get_repo_info_sync(self, owner: str | None = None, repo: str | None = None) -> dict:
+        """Synchronously get repository information."""
+        _owner = owner or self.owner
+        _repo = repo or self.repo
+
+        if not _repo:
+            return {"error": "Repository name required for org operations"}
+
+        return self._request_sync("GET", f"repos/{_owner}/{_repo}")
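The `_normalize_base_url` helper introduced in this diff can be exercised standalone. The following is a direct copy of the function from the hunk above, with a usage demonstration of the host-only case the comments in the README mention (`git.disi.dev`):

```python
from urllib.parse import urlparse

def normalize_base_url(base_url):
    """Normalize host-only service addresses into valid absolute URLs."""
    normalized = (base_url or '').strip().rstrip('/')
    if not normalized:
        return ''
    if '://' not in normalized:
        # No scheme given: assume HTTPS, matching the README's example
        normalized = f'https://{normalized}'
    parsed = urlparse(normalized)
    if not parsed.scheme or not parsed.netloc:
        return ''  # unparseable input is treated as "not configured"
    return normalized

print(normalize_base_url('git.disi.dev'))            # https://git.disi.dev
print(normalize_base_url('https://gitea.example.com/'))  # trailing slash stripped
print(repr(normalize_base_url('')))                  # ''
```

Returning `''` for invalid input is what lets `_request_sync` short-circuit with a clear "base URL is not configured or is invalid" error instead of raising deep inside `urllib`.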
ai_software_factory/agents/home_assistant.py (new file, +162)

@@ -0,0 +1,162 @@
+"""Home Assistant integration for energy-gated queue processing."""
+
+from __future__ import annotations
+
+try:
+    from ..config import settings
+except ImportError:
+    from config import settings
+
+
+class HomeAssistantAgent:
+    """Query Home Assistant for queue-processing eligibility and health."""
+
+    def __init__(self, base_url: str | None = None, token: str | None = None):
+        self.base_url = (base_url or settings.home_assistant_url).rstrip('/')
+        self.token = token or settings.home_assistant_token
+
+    def _headers(self) -> dict[str, str]:
+        return {
+            'Authorization': f'Bearer {self.token}',
+            'Content-Type': 'application/json',
+        }
+
+    def _state_url(self, entity_id: str) -> str:
+        return f'{self.base_url}/api/states/{entity_id}'
+
+    async def _get_state(self, entity_id: str) -> dict:
+        if not self.base_url:
+            return {'error': 'Home Assistant URL is not configured'}
+        if not self.token:
+            return {'error': 'Home Assistant token is not configured'}
+        if not entity_id:
+            return {'error': 'Home Assistant entity id is not configured'}
+        try:
+            import aiohttp
+
+            async with aiohttp.ClientSession() as session:
+                async with session.get(self._state_url(entity_id), headers=self._headers()) as resp:
+                    payload = await resp.json(content_type=None)
+                    if 200 <= resp.status < 300:
+                        return payload if isinstance(payload, dict) else {'value': payload}
+                    return {'error': payload, 'status_code': resp.status}
+        except Exception as exc:
+            return {'error': str(exc)}
+
+    def _get_state_sync(self, entity_id: str) -> dict:
+        if not self.base_url:
+            return {'error': 'Home Assistant URL is not configured'}
+        if not self.token:
+            return {'error': 'Home Assistant token is not configured'}
+        if not entity_id:
+            return {'error': 'Home Assistant entity id is not configured'}
+        try:
+            import json
+            import urllib.error
+            import urllib.request
+
+            request = urllib.request.Request(self._state_url(entity_id), headers=self._headers(), method='GET')
+            with urllib.request.urlopen(request) as response:
+                body = response.read().decode('utf-8')
+                return json.loads(body) if body else {}
+        except urllib.error.HTTPError as exc:
+            try:
+                body = exc.read().decode('utf-8')
+            except Exception:
+                body = str(exc)
+            return {'error': body, 'status_code': exc.code}
+        except Exception as exc:
+            return {'error': str(exc)}
+
+    @staticmethod
+    def _coerce_float(payload: dict) -> float | None:
+        raw = payload.get('state') if isinstance(payload, dict) else None
+        try:
+            return float(raw)
+        except Exception:
+            return None
+
+    async def queue_gate_status(self, force: bool = False) -> dict:
+        """Return whether queued prompts may be processed now."""
+        if force or settings.prompt_queue_force_process:
+            return {
+                'status': 'success',
+                'allowed': True,
+                'forced': True,
+                'reason': 'Queue override is enabled',
+            }
+        battery = await self._get_state(settings.home_assistant_battery_entity_id)
+        surplus = await self._get_state(settings.home_assistant_surplus_entity_id)
+        battery_value = self._coerce_float(battery)
+        surplus_value = self._coerce_float(surplus)
+        checks = []
+        if battery.get('error'):
+            checks.append({'name': 'battery', 'ok': False, 'message': str(battery.get('error')), 'entity_id': settings.home_assistant_battery_entity_id})
+        else:
+            checks.append({'name': 'battery', 'ok': battery_value is not None and battery_value >= settings.home_assistant_battery_full_threshold, 'message': f'{battery_value}%', 'entity_id': settings.home_assistant_battery_entity_id})
+        if surplus.get('error'):
+            checks.append({'name': 'surplus', 'ok': False, 'message': str(surplus.get('error')), 'entity_id': settings.home_assistant_surplus_entity_id})
+        else:
+            checks.append({'name': 'surplus', 'ok': surplus_value is not None and surplus_value >= settings.home_assistant_surplus_threshold_watts, 'message': f'{surplus_value} W', 'entity_id': settings.home_assistant_surplus_entity_id})
+        allowed = all(check['ok'] for check in checks)
+        return {
+            'status': 'success' if allowed else 'blocked',
+            'allowed': allowed,
+            'forced': False,
+            'checks': checks,
+            'battery_level': battery_value,
+            'surplus_watts': surplus_value,
+            'thresholds': {
+                'battery_full_percent': settings.home_assistant_battery_full_threshold,
+                'surplus_watts': settings.home_assistant_surplus_threshold_watts,
+            },
+            'reason': 'Energy gate open' if allowed else 'Battery or surplus threshold not met',
+        }
+
+    def health_check_sync(self) -> dict:
+        """Return current Home Assistant connectivity and queue gate diagnostics."""
+        if not self.base_url:
+            return {
+                'status': 'error',
+                'message': 'Home Assistant URL is not configured.',
+                'base_url': '',
+                'configured': False,
+                'checks': [],
+            }
+        if not self.token:
+            return {
+                'status': 'error',
+                'message': 'Home Assistant token is not configured.',
+                'base_url': self.base_url,
+                'configured': False,
+                'checks': [],
+            }
+        battery = self._get_state_sync(settings.home_assistant_battery_entity_id)
+        surplus = self._get_state_sync(settings.home_assistant_surplus_entity_id)
+        checks = []
+        for name, entity_id, payload in (
+            ('battery', settings.home_assistant_battery_entity_id, battery),
+            ('surplus', settings.home_assistant_surplus_entity_id, surplus),
+        ):
+            checks.append(
+                {
+                    'name': name,
+                    'entity_id': entity_id,
+                    'ok': not bool(payload.get('error')),
+                    'message': str(payload.get('error') or payload.get('state') or 'ok'),
+                    'status_code': payload.get('status_code'),
+                    'url': self._state_url(entity_id) if entity_id else self.base_url,
+                }
+            )
+        return {
+            'status': 'success' if all(check['ok'] for check in checks) else 'error',
+            'message': 'Home Assistant connectivity is healthy.' if all(check['ok'] for check in checks) else 'Home Assistant checks failed.',
+            'base_url': self.base_url,
+            'configured': True,
+            'checks': checks,
+            'queue_gate': {
+                'battery_full_percent': settings.home_assistant_battery_full_threshold,
+                'surplus_watts': settings.home_assistant_surplus_threshold_watts,
+                'force_process': settings.prompt_queue_force_process,
+            },
+        }
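A subtle point in the new agent is `_coerce_float`: Home Assistant returns an entity's state as a string (for example `"87.5"`), and states such as `"unavailable"` must not crash the gate. A standalone sketch of that coercion, narrowing the source's broad `except Exception` to the two exceptions `float()` actually raises:

```python
# Standalone re-implementation of HomeAssistantAgent._coerce_float for
# illustration; the original catches Exception, this catches only what
# float() raises on bad input.
def coerce_float(payload):
    raw = payload.get('state') if isinstance(payload, dict) else None
    try:
        return float(raw)
    except (TypeError, ValueError):
        return None

print(coerce_float({'state': '87.5'}))              # 87.5
print(coerce_float({'state': 'unavailable'}))       # None: non-numeric state
print(coerce_float({'error': 'no such entity'}))    # None: error payload
```

A `None` result feeds into the gate checks as "not ok", so a sensor that is offline or misnamed blocks queue processing rather than being silently treated as zero.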
AgentOrchestrator module (file name not shown):

@@ -3,6 +3,7 @@
 from __future__ import annotations
 
 import difflib
+import json
 import py_compile
 import re
 import subprocess
@@ -14,12 +15,14 @@ try:
     from .database_manager import DatabaseManager
     from .git_manager import GitManager
     from .gitea import GiteaAPI
+    from .llm_service import LLMServiceClient
     from .ui_manager import UIManager
 except ImportError:
     from config import settings
     from agents.database_manager import DatabaseManager
     from agents.git_manager import GitManager
     from agents.gitea import GiteaAPI
+    from agents.llm_service import LLMServiceClient
     from agents.ui_manager import UIManager
 
 
@@ -62,6 +65,7 @@ class AgentOrchestrator:
         self.repo_name_override = repo_name_override
         self.existing_history = existing_history
         self.changed_files: list[str] = []
+        self.pending_code_changes: list[dict] = []
         self.gitea_api = GiteaAPI(
             token=settings.GITEA_TOKEN,
             base_url=settings.GITEA_URL,
@@ -137,6 +141,40 @@ class AgentOrchestrator:
         if self.active_pull_request:
             self.ui_manager.ui_data["pull_request"] = self.active_pull_request
 
+    def _static_files(self) -> dict[str, str]:
+        """Files that do not need prompt-specific generation."""
+        return {
+            ".gitignore": "__pycache__/\n*.pyc\n.venv/\n.pytest_cache/\n.mypy_cache/\n",
+        }
+
+    def _fallback_generated_files(self) -> dict[str, str]:
+        """Deterministic fallback files when LLM generation is unavailable."""
+        feature_section = "\n".join(f"- {feature}" for feature in self.features) or "- None specified"
+        tech_section = "\n".join(f"- {tech}" for tech in self.tech_stack) or "- Python"
+        return {
+            "README.md": (
+                f"# {self.project_name}\n\n"
+                f"{self.description}\n\n"
+                "## Features\n"
+                f"{feature_section}\n\n"
+                "## Tech Stack\n"
+                f"{tech_section}\n"
+            ),
+            "requirements.txt": "fastapi\nuvicorn\npytest\n",
+            "main.py": (
+                "from fastapi import FastAPI\n\n"
+                "app = FastAPI(title=\"Generated App\")\n\n"
+                "@app.get('/')\n"
+                "def read_root():\n"
+                f"    return {{'name': '{self.project_name}', 'status': 'generated', 'features': {self.features!r}}}\n"
+            ),
+            "tests/test_app.py": (
+                "from main import read_root\n\n"
+                "def test_read_root():\n"
+                f"    assert read_root()['name'] == '{self.project_name}'\n"
+            ),
+        }
+
     def _build_pr_branch_name(self, project_id: str) -> str:
         """Build a stable branch name used until the PR is merged."""
         return f"ai/{project_id}"
@@ -157,7 +195,7 @@ class AgentOrchestrator:
         """Persist the current generation plan as an inspectable trace."""
         if not self.db_manager or not self.history or not self.prompt_audit:
             return
-        planned_files = list(self._template_files().keys())
+        planned_files = list(self._static_files().keys()) + list(self._fallback_generated_files().keys())
         self.db_manager.log_llm_trace(
             project_id=self.project_id,
             history_id=self.history.id,
@@ -187,6 +225,66 @@ class AgentOrchestrator:
|
|||||||
fallback_used=False,
|
fallback_used=False,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
def _parse_generated_files(self, content: str | None) -> dict[str, str]:
|
||||||
|
"""Parse an LLM file bundle response into relative-path/content pairs."""
|
||||||
|
if not content:
|
||||||
|
return {}
|
||||||
|
try:
|
||||||
|
parsed = json.loads(content)
|
||||||
|
except Exception:
|
||||||
|
return {}
|
||||||
|
allowed_paths = set(self._fallback_generated_files().keys())
|
||||||
|
generated: dict[str, str] = {}
|
||||||
|
if isinstance(parsed, dict) and isinstance(parsed.get('files'), list):
|
||||||
|
for item in parsed['files']:
|
||||||
|
if not isinstance(item, dict):
|
||||||
|
continue
|
||||||
|
path = str(item.get('path') or '').strip()
|
||||||
|
file_content = item.get('content')
|
||||||
|
if path in allowed_paths and isinstance(file_content, str) and file_content.strip():
|
||||||
|
generated[path] = file_content.rstrip() + "\n"
|
||||||
|
elif isinstance(parsed, dict):
|
||||||
|
for path, file_content in parsed.items():
|
||||||
|
if path in allowed_paths and isinstance(file_content, str) and file_content.strip():
|
||||||
|
generated[str(path)] = file_content.rstrip() + "\n"
|
||||||
|
return generated
|
||||||
|
|
||||||
|
async def _generate_prompt_driven_files(self) -> tuple[dict[str, str], dict | None]:
|
||||||
|
"""Use the configured LLM to generate prompt-specific project files."""
|
||||||
|
fallback_files = self._fallback_generated_files()
|
||||||
|
system_prompt = (
|
||||||
|
'You generate small but concrete starter projects. '
|
||||||
|
'Return only JSON. Provide production-like but compact code that directly reflects the user request. '
|
||||||
|
'Include the files README.md, requirements.txt, main.py, and tests/test_app.py. '
|
||||||
|
'Use FastAPI for Python web requests unless the prompt clearly demands something else. '
|
||||||
|
'The test must verify a real behavior from main.py. '
|
||||||
|
'Do not wrap the JSON in markdown fences.'
|
||||||
|
)
|
||||||
|
user_prompt = (
|
||||||
|
f"Project name: {self.project_name}\n"
|
||||||
|
f"Description: {self.description}\n"
|
||||||
|
f"Original prompt: {self.prompt_text or self.description}\n"
|
||||||
|
f"Requested features: {json.dumps(self.features)}\n"
|
||||||
|
f"Preferred tech stack: {json.dumps(self.tech_stack)}\n"
|
||||||
|
f"Related issue: {json.dumps(self.related_issue) if self.related_issue else 'null'}\n\n"
|
||||||
|
"Return JSON shaped as {\"files\": [{\"path\": \"README.md\", \"content\": \"...\"}, ...]}."
|
||||||
|
)
|
||||||
|
content, trace = await LLMServiceClient().chat_with_trace(
|
||||||
|
stage='generation_plan',
|
||||||
|
system_prompt=system_prompt,
|
||||||
|
user_prompt=user_prompt,
|
||||||
|
tool_context_input={
|
||||||
|
'project_id': self.project_id,
|
||||||
|
'project_name': self.project_name,
|
||||||
|
'repository': self.ui_manager.ui_data.get('repository'),
|
||||||
|
'related_issue': self.related_issue,
|
||||||
|
},
|
||||||
|
expect_json=True,
|
||||||
|
)
|
||||||
|
generated_files = self._parse_generated_files(content)
|
||||||
|
merged_files = {**fallback_files, **generated_files}
|
||||||
|
return merged_files, trace
|
||||||
|
|
||||||
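The added `_parse_generated_files` helper accepts two payload shapes: a `{"files": [{"path", "content"}, ...]}` bundle and a flat `{path: content}` mapping. A minimal standalone sketch of that behavior (with the allow-list passed in explicitly rather than read from `self._fallback_generated_files()`) looks like this:

```python
import json

# Hypothetical allow-list standing in for self._fallback_generated_files().keys()
ALLOWED = {"README.md", "requirements.txt", "main.py", "tests/test_app.py"}

def parse_file_bundle(content, allowed_paths=ALLOWED):
    """Accept either {"files": [{"path", "content"}, ...]} or a flat
    {path: content} mapping, keeping only allowed, non-empty entries."""
    if not content:
        return {}
    try:
        parsed = json.loads(content)
    except Exception:
        return {}
    generated = {}
    if isinstance(parsed, dict) and isinstance(parsed.get("files"), list):
        for item in parsed["files"]:
            if not isinstance(item, dict):
                continue
            path = str(item.get("path") or "").strip()
            body = item.get("content")
            if path in allowed_paths and isinstance(body, str) and body.strip():
                generated[path] = body.rstrip() + "\n"
    elif isinstance(parsed, dict):
        for path, body in parsed.items():
            if path in allowed_paths and isinstance(body, str) and body.strip():
                generated[str(path)] = body.rstrip() + "\n"
    return generated

bundle = json.dumps({"files": [{"path": "main.py", "content": "print('hi')"}]})
flat = json.dumps({"README.md": "# Demo", "ignored.txt": "x"})
print(parse_file_bundle(bundle))  # {'main.py': "print('hi')\n"}
print(parse_file_bundle(flat))    # {'README.md': '# Demo\n'}
```

Unknown paths and non-string contents are silently dropped, so a malformed model response degrades to the deterministic fallback files rather than failing.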
     async def _sync_issue_context(self) -> None:
         """Sync repository issues and resolve a linked issue from the prompt when present."""
         if not self.db_manager or not self.history:
@@ -457,47 +555,15 @@ class AgentOrchestrator:
         diff_text = self._build_diff_text(relative_path, previous_content, content)
         target.write_text(content, encoding="utf-8")
         self.changed_files.append(relative_path)
-        if self.db_manager and self.history:
-            self.db_manager.log_code_change(
-                project_id=self.project_id,
-                change_type=change_type,
-                file_path=relative_path,
-                actor="orchestrator",
-                actor_type="agent",
-                details=f"{change_type.title()}d generated artifact {relative_path}",
-                history_id=self.history.id,
-                prompt_id=self.prompt_audit.id if self.prompt_audit else None,
-                diff_summary=f"Wrote {len(content.splitlines())} lines to {relative_path}",
-                diff_text=diff_text,
-            )
+        self.pending_code_changes.append(
+            {
+                'change_type': change_type,
+                'file_path': relative_path,
+                'details': f"{change_type.title()}d generated artifact {relative_path}",
+                'diff_summary': f"Wrote {len(content.splitlines())} lines to {relative_path}",
+                'diff_text': diff_text,
+            }
+        )
-    def _template_files(self) -> dict[str, str]:
-        feature_section = "\n".join(f"- {feature}" for feature in self.features) or "- None specified"
-        tech_section = "\n".join(f"- {tech}" for tech in self.tech_stack) or "- Python"
-        return {
-            ".gitignore": "__pycache__/\n*.pyc\n.venv/\n.pytest_cache/\n.mypy_cache/\n",
-            "README.md": (
-                f"# {self.project_name}\n\n"
-                f"{self.description}\n\n"
-                "## Features\n"
-                f"{feature_section}\n\n"
-                "## Tech Stack\n"
-                f"{tech_section}\n"
-            ),
-            "requirements.txt": "fastapi\nuvicorn\npytest\n",
-            "main.py": (
-                "from fastapi import FastAPI\n\n"
-                "app = FastAPI(title=\"Generated App\")\n\n"
-                "@app.get('/')\n"
-                "def read_root():\n"
-                f"    return {{'name': '{self.project_name}', 'status': 'generated', 'features': {self.features!r}}}\n"
-            ),
-            "tests/test_app.py": (
-                "from main import read_root\n\n"
-                "def test_read_root():\n"
-                f"    assert read_root()['name'] == '{self.project_name}'\n"
-            ),
-        }
     async def run(self) -> dict:
         """Run the software generation process with full audit logging."""
@@ -588,18 +654,34 @@ class AgentOrchestrator:
     async def _create_project_structure(self) -> None:
         """Create initial project structure."""
         self.project_root.mkdir(parents=True, exist_ok=True)
-        for relative_path, content in self._template_files().items():
-            if relative_path.startswith("main.py") or relative_path.startswith("tests/"):
-                continue
+        for relative_path, content in self._static_files().items():
             self._write_file(relative_path, content)
         self._append_log(f"Project structure created under {self.project_root}.")

     async def _generate_code(self) -> None:
         """Generate code using Ollama."""
-        for relative_path, content in self._template_files().items():
-            if relative_path in {"main.py", "tests/test_app.py"}:
-                self._write_file(relative_path, content)
-        self._append_log("Application entrypoint and smoke test generated.")
+        generated_files, trace = await self._generate_prompt_driven_files()
+        for relative_path, content in generated_files.items():
+            self._write_file(relative_path, content)
+        fallback_used = bool(trace and trace.get('fallback_used')) or trace is None
+        if self.db_manager and self.history and self.prompt_audit and trace:
+            self.db_manager.log_llm_trace(
+                project_id=self.project_id,
+                history_id=self.history.id,
+                prompt_id=self.prompt_audit.id,
+                stage='code_generation',
+                provider=trace.get('provider', 'ollama'),
+                model=trace.get('model', settings.OLLAMA_MODEL),
+                system_prompt=trace.get('system_prompt', ''),
+                user_prompt=trace.get('user_prompt', self.prompt_text or self.description),
+                assistant_response=trace.get('assistant_response', ''),
+                raw_response=trace.get('raw_response'),
+                fallback_used=fallback_used,
+            )
+        if fallback_used:
+            self._append_log('LLM code generation was unavailable; used deterministic scaffolding fallback.')
+        else:
+            self._append_log('Application files generated from the prompt with the configured LLM.')
     async def _run_tests(self) -> None:
         """Run tests for the generated code."""
@@ -668,6 +750,23 @@ class AgentOrchestrator:
                 remote_status=remote_record.get("status") if remote_record else "local-only",
                 related_issue=self.related_issue,
             )
+            for change in self.pending_code_changes:
+                self.db_manager.log_code_change(
+                    project_id=self.project_id,
+                    change_type=change['change_type'],
+                    file_path=change['file_path'],
+                    actor='orchestrator',
+                    actor_type='agent',
+                    details=change['details'],
+                    history_id=self.history.id if self.history else None,
+                    prompt_id=self.prompt_audit.id if self.prompt_audit else None,
+                    diff_summary=change.get('diff_summary'),
+                    diff_text=change.get('diff_text'),
+                    commit_hash=commit_hash,
+                    remote_status=remote_record.get('status') if remote_record else 'local-only',
+                    branch=self.branch_name,
+                )
+            self.pending_code_changes.clear()
             if self.related_issue:
                 self.db_manager.log_issue_work(
                     project_id=self.project_id,
@@ -18,6 +18,20 @@ except ImportError:
 class RequestInterpreter:
     """Use Ollama to turn free-form text into a structured software request."""

+    REQUEST_PREFIX_WORDS = {
+        'a', 'an', 'app', 'application', 'build', 'create', 'dashboard', 'develop', 'design', 'for', 'generate',
+        'internal', 'make', 'me', 'modern', 'need', 'new', 'our', 'platform', 'please', 'project', 'service',
+        'simple', 'site', 'start', 'system', 'the', 'tool', 'us', 'want', 'web', 'website', 'with',
+    }
+
+    REPO_NOISE_WORDS = REQUEST_PREFIX_WORDS | {'and', 'from', 'into', 'on', 'that', 'this', 'to'}
+    GENERIC_PROJECT_NAME_WORDS = {
+        'app', 'application', 'harness', 'platform', 'project', 'purpose', 'service', 'solution', 'suite', 'system', 'test', 'tool',
+    }
+    PLACEHOLDER_PROJECT_NAME_WORDS = {
+        'generated project', 'new project', 'project', 'temporary name', 'temp name', 'placeholder', 'untitled project',
+    }
+
     def __init__(self, ollama_url: str | None = None, model: str | None = None):
         self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
         self.model = model or settings.OLLAMA_MODEL
@@ -145,10 +159,11 @@ class RequestInterpreter:
         )
         if content:
             try:
+                fallback_name = self._preferred_project_name_fallback(prompt_text, interpreted.get('name'))
                 parsed = json.loads(content)
                 project_name, repo_name = self._normalize_project_identity(
                     parsed,
-                    fallback_name=interpreted.get('name') or self._derive_name(prompt_text),
+                    fallback_name=fallback_name,
                 )
                 repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
                 interpreted['name'] = project_name
@@ -158,7 +173,7 @@ class RequestInterpreter:
             except Exception:
                 pass

-        fallback_name = interpreted.get('name') or self._derive_name(prompt_text)
+        fallback_name = self._preferred_project_name_fallback(prompt_text, interpreted.get('name'))
         routing['project_name'] = fallback_name
         routing['repo_name'] = self._ensure_unique_repo_name(self._derive_repo_name(fallback_name), constraints['repo_names'])
         return interpreted, routing, trace
@@ -280,13 +295,22 @@ class RequestInterpreter:
         noun_phrase = re.search(
             r'(?:build|create|start|make|develop|generate|design|need|want)\s+'
             r'(?:me\s+|us\s+|an?\s+|the\s+|new\s+|internal\s+|simple\s+|lightweight\s+|modern\s+|web\s+|mobile\s+)*'
-            r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager))\b',
+            r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager|harness|runner|framework|suite|pipeline|lab))\b',
             first_line,
             flags=re.IGNORECASE,
         )
         if noun_phrase:
             return self._humanize_name(noun_phrase.group(1))

+        focused_phrase = re.search(
+            r'(?:purpose\s+is\s+to\s+create\s+(?:an?\s+)?)'
+            r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager|harness|runner|framework|suite|pipeline|lab))\b',
+            first_line,
+            flags=re.IGNORECASE,
+        )
+        if focused_phrase:
+            return self._humanize_name(focused_phrase.group(1))
+
         cleaned = re.sub(r'[^A-Za-z0-9 ]+', ' ', first_line)
         stopwords = {
             'build', 'create', 'start', 'make', 'develop', 'generate', 'design', 'need', 'want', 'please', 'for', 'our', 'with', 'that', 'this',
@@ -301,6 +325,7 @@ class RequestInterpreter:
         """Normalize a candidate project name into a readable title."""
         cleaned = re.sub(r'[^A-Za-z0-9\s-]+', ' ', raw_name).strip(' -')
         cleaned = re.sub(r'\s+', ' ', cleaned)
+        cleaned = self._trim_request_prefix(cleaned)
         special_upper = {'api', 'crm', 'erp', 'cms', 'hr', 'it', 'ui', 'qa'}
         words = []
         for word in cleaned.split()[:6]:
@@ -308,14 +333,79 @@ class RequestInterpreter:
             words.append(lowered.upper() if lowered in special_upper else lowered.capitalize())
         return ' '.join(words) or 'Generated Project'

+    def _trim_request_prefix(self, candidate: str) -> str:
+        """Remove leading request phrasing from model-produced names and slugs."""
+        tokens = [token for token in re.split(r'[-\s]+', candidate or '') if token]
+        while tokens and tokens[0].lower() in self.REQUEST_PREFIX_WORDS:
+            tokens.pop(0)
+        trimmed = ' '.join(tokens).strip()
+        return trimmed or candidate.strip()
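The new `_trim_request_prefix` is a pure string helper, so its behavior can be sketched standalone; the word set below is a small, hypothetical subset of the class-level `REQUEST_PREFIX_WORDS` shown in the diff:

```python
import re

# Hypothetical subset of RequestInterpreter.REQUEST_PREFIX_WORDS
REQUEST_PREFIX_WORDS = {'build', 'me', 'a', 'an', 'the', 'create', 'please', 'new'}

def trim_request_prefix(candidate: str) -> str:
    """Drop leading request phrasing ("build me a ...") from a name."""
    tokens = [token for token in re.split(r'[-\s]+', candidate or '') if token]
    while tokens and tokens[0].lower() in REQUEST_PREFIX_WORDS:
        tokens.pop(0)
    trimmed = ' '.join(tokens).strip()
    return trimmed or candidate.strip()

print(trim_request_prefix('build me a fleet tracker'))  # fleet tracker
print(trim_request_prefix('the the the'))               # falls back to the input
```

Note the fallback: if every token is request phrasing, the original candidate is returned rather than an empty string.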
     def _derive_repo_name(self, project_name: str) -> str:
         """Derive a repository slug from a human-readable project name."""
-        preferred = (project_name or 'project').strip().lower().replace(' ', '-')
+        preferred_name = self._trim_request_prefix((project_name or 'project').strip())
+        preferred = preferred_name.lower().replace(' ', '-')
         sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
         while '--' in sanitized:
             sanitized = sanitized.replace('--', '-')
         return sanitized.strip('-') or 'project'
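The slug pipeline in `_derive_repo_name` (lowercase, hyphenate, sanitize, collapse runs) can be exercised standalone; this sketch inlines the new prefix-trimming step as a no-op so only the sanitization is shown:

```python
# Standalone sketch of the updated _derive_repo_name sanitization steps.
def derive_repo_name(project_name: str) -> str:
    """Lowercase, hyphenate, and sanitize a title into a repo slug."""
    preferred = (project_name or 'project').strip().lower().replace(' ', '-')
    # Replace anything that is not alphanumeric, '-' or '_' with '-'.
    sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
    # Collapse runs of hyphens produced by punctuation and double spaces.
    while '--' in sanitized:
        sanitized = sanitized.replace('--', '-')
    return sanitized.strip('-') or 'project'

print(derive_repo_name('Fleet Tracker'))     # fleet-tracker
print(derive_repo_name('QA / Test  Suite'))  # qa-test-suite
print(derive_repo_name(''))                  # project
```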
+    def _should_use_repo_name_candidate(self, candidate: str, project_name: str) -> bool:
+        """Return whether a model-proposed repo slug is concise enough to trust directly."""
+        cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s_-]+', ' ', candidate or '').strip())
+        if not cleaned:
+            return False
+        candidate_tokens = [token.lower() for token in re.split(r'[-\s_]+', cleaned) if token]
+        if not candidate_tokens:
+            return False
+        if len(candidate_tokens) > 6:
+            return False
+        noise_count = sum(1 for token in candidate_tokens if token in self.REPO_NOISE_WORDS)
+        if noise_count >= 2:
+            return False
+        if len('-'.join(candidate_tokens)) > 40:
+            return False
+        project_tokens = {
+            token.lower()
+            for token in re.split(r'[-\s_]+', project_name or '')
+            if token and token.lower() not in self.REPO_NOISE_WORDS
+        }
+        if project_tokens:
+            overlap = sum(1 for token in candidate_tokens if token in project_tokens)
+            if overlap == 0:
+                return False
+        return True
+
+    def _should_use_project_name_candidate(self, candidate: str, fallback_name: str) -> bool:
+        """Return whether a model-proposed project title is concrete enough to trust."""
+        cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s-]+', ' ', candidate or '').strip())
+        if not cleaned:
+            return False
+        candidate_tokens = [token.lower() for token in re.split(r'[-\s]+', cleaned) if token]
+        if not candidate_tokens:
+            return False
+        if len(candidate_tokens) == 1 and candidate_tokens[0] in self.GENERIC_PROJECT_NAME_WORDS:
+            return False
+        if all(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
+            return False
+        fallback_tokens = {
+            token.lower() for token in re.split(r'[-\s]+', fallback_name or '') if token and token.lower() not in self.REPO_NOISE_WORDS
+        }
+        if fallback_tokens and len(candidate_tokens) <= 2:
+            overlap = sum(1 for token in candidate_tokens if token in fallback_tokens)
+            if overlap == 0 and any(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
+                return False
+        return True
+
+    def _preferred_project_name_fallback(self, prompt_text: str, interpreted_name: str | None) -> str:
+        """Pick the best fallback title when the earlier interpretation produced a placeholder."""
+        interpreted_clean = self._humanize_name(str(interpreted_name or '').strip()) if interpreted_name else ''
+        normalized_interpreted = interpreted_clean.lower()
+        if normalized_interpreted and normalized_interpreted not in self.PLACEHOLDER_PROJECT_NAME_WORDS:
+            if not (len(normalized_interpreted.split()) == 1 and normalized_interpreted in self.GENERIC_PROJECT_NAME_WORDS):
+                return interpreted_clean
+        return self._derive_name(prompt_text)
+
     def _ensure_unique_repo_name(self, repo_name: str, reserved_names: set[str]) -> str:
         """Choose a repository slug that does not collide with tracked or remote repositories."""
         base_name = self._derive_repo_name(repo_name)
@@ -328,8 +418,15 @@ class RequestInterpreter:
     def _normalize_project_identity(self, payload: dict, fallback_name: str) -> tuple[str, str]:
         """Normalize model-proposed project and repository naming."""
-        project_name = self._humanize_name(str(payload.get('project_name') or payload.get('name') or fallback_name))
-        repo_name = self._derive_repo_name(str(payload.get('repo_name') or project_name))
+        fallback_project_name = self._humanize_name(str(fallback_name or 'Generated Project'))
+        project_candidate = str(payload.get('project_name') or payload.get('name') or '').strip()
+        project_name = fallback_project_name
+        if project_candidate and self._should_use_project_name_candidate(project_candidate, fallback_project_name):
+            project_name = self._humanize_name(project_candidate)
+        repo_candidate = str(payload.get('repo_name') or '').strip()
+        repo_name = self._derive_repo_name(project_name)
+        if repo_candidate and self._should_use_repo_name_candidate(repo_candidate, project_name):
+            repo_name = self._derive_repo_name(repo_candidate)
         return project_name, repo_name

     def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:
@@ -4,10 +4,207 @@ import json
 import os
 from typing import Optional
 from pathlib import Path
+from urllib.parse import urlparse
 from pydantic import Field
 from pydantic_settings import BaseSettings, SettingsConfigDict


+def _normalize_service_url(value: str, default_scheme: str = "https") -> str:
+    """Normalize service URLs so host-only values still become valid absolute URLs."""
+    normalized = (value or "").strip().rstrip("/")
+    if not normalized:
+        return ""
+    if "://" not in normalized:
+        normalized = f"{default_scheme}://{normalized}"
+    parsed = urlparse(normalized)
+    if not parsed.scheme or not parsed.netloc:
+        return ""
+    return normalized
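The added `_normalize_service_url` helper is self-contained, so its edge cases can be checked directly; this standalone copy shows how a host-only value gains a scheme while junk input collapses to an empty string:

```python
from urllib.parse import urlparse

# Standalone copy of the added _normalize_service_url helper.
def normalize_service_url(value: str, default_scheme: str = "https") -> str:
    """Turn host-only values into absolute URLs; return "" for junk input."""
    normalized = (value or "").strip().rstrip("/")
    if not normalized:
        return ""
    # Prefix a scheme first, so urlparse does not mistake "host:8123" for a scheme.
    if "://" not in normalized:
        normalized = f"{default_scheme}://{normalized}"
    parsed = urlparse(normalized)
    if not parsed.scheme or not parsed.netloc:
        return ""
    return normalized

print(normalize_service_url("homeassistant.local:8123", "http"))  # http://homeassistant.local:8123
print(normalize_service_url("https://ha.example.com/"))           # https://ha.example.com
print(normalize_service_url("   "))                               # (empty string)
```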
+EDITABLE_LLM_PROMPTS: dict[str, dict[str, str]] = {
+    'LLM_GUARDRAIL_PROMPT': {
+        'label': 'Global Guardrails',
+        'category': 'guardrail',
+        'description': 'Applied to every outbound external LLM call.',
+    },
+    'LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT': {
+        'label': 'Request Interpretation Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains project routing and continuation selection.',
+    },
+    'LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT': {
+        'label': 'Change Summary Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains factual delivery summaries.',
+    },
+    'LLM_PROJECT_NAMING_GUARDRAIL_PROMPT': {
+        'label': 'Project Naming Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains project display names and repo slugs.',
+    },
+    'LLM_PROJECT_NAMING_SYSTEM_PROMPT': {
+        'label': 'Project Naming System Prompt',
+        'category': 'system_prompt',
+        'description': 'Guides the dedicated new-project naming stage.',
+    },
+    'LLM_PROJECT_ID_GUARDRAIL_PROMPT': {
+        'label': 'Project ID Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains stable project id generation.',
+    },
+    'LLM_PROJECT_ID_SYSTEM_PROMPT': {
+        'label': 'Project ID System Prompt',
+        'category': 'system_prompt',
+        'description': 'Guides the dedicated project id naming stage.',
+    },
+}
+
+EDITABLE_RUNTIME_SETTINGS: dict[str, dict[str, str]] = {
+    'HOME_ASSISTANT_BATTERY_ENTITY_ID': {
+        'label': 'Battery Entity ID',
+        'category': 'home_assistant',
+        'description': 'Home Assistant entity used for battery state-of-charge gating.',
+        'value_type': 'string',
+    },
+    'HOME_ASSISTANT_SURPLUS_ENTITY_ID': {
+        'label': 'Surplus Power Entity ID',
+        'category': 'home_assistant',
+        'description': 'Home Assistant entity used for export or surplus power gating.',
+        'value_type': 'string',
+    },
+    'HOME_ASSISTANT_BATTERY_FULL_THRESHOLD': {
+        'label': 'Battery Full Threshold',
+        'category': 'home_assistant',
+        'description': 'Minimum battery percentage required before queued prompts may run.',
+        'value_type': 'float',
+    },
+    'HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS': {
+        'label': 'Surplus Threshold Watts',
+        'category': 'home_assistant',
+        'description': 'Minimum surplus/export power required before queued prompts may run.',
+        'value_type': 'float',
+    },
+    'PROMPT_QUEUE_ENABLED': {
+        'label': 'Queue Telegram Prompts',
+        'category': 'prompt_queue',
+        'description': 'When enabled, Telegram prompts are queued and gated instead of processed immediately.',
+        'value_type': 'boolean',
+    },
+    'PROMPT_QUEUE_AUTO_PROCESS': {
+        'label': 'Auto Process Queue',
+        'category': 'prompt_queue',
+        'description': 'Let the background worker drain the queue automatically when the gate is open.',
+        'value_type': 'boolean',
+    },
+    'PROMPT_QUEUE_FORCE_PROCESS': {
+        'label': 'Force Queue Processing',
+        'category': 'prompt_queue',
+        'description': 'Bypass the Home Assistant energy gate for queued prompts.',
+        'value_type': 'boolean',
+    },
+    'PROMPT_QUEUE_POLL_INTERVAL_SECONDS': {
+        'label': 'Queue Poll Interval Seconds',
+        'category': 'prompt_queue',
+        'description': 'Polling interval for the background queue worker.',
+        'value_type': 'integer',
+    },
+    'PROMPT_QUEUE_MAX_BATCH_SIZE': {
+        'label': 'Queue Max Batch Size',
+        'category': 'prompt_queue',
+        'description': 'Maximum number of queued prompts processed in one batch.',
+        'value_type': 'integer',
+    },
+}
+def _get_persisted_llm_prompt_override(env_key: str) -> str | None:
+    """Load one persisted LLM prompt override from the database when available."""
+    if env_key not in EDITABLE_LLM_PROMPTS:
+        return None
+    try:
+        try:
+            from .database import get_db_sync
+            from .agents.database_manager import DatabaseManager
+        except ImportError:
+            from database import get_db_sync
+            from agents.database_manager import DatabaseManager
+
+        db = get_db_sync()
+        if db is None:
+            return None
+        try:
+            return DatabaseManager(db).get_llm_prompt_override(env_key)
+        finally:
+            db.close()
+    except Exception:
+        return None
+
+
+def _resolve_llm_prompt_value(env_key: str, fallback: str) -> str:
+    """Resolve one editable prompt from DB override first, then environment/defaults."""
+    override = _get_persisted_llm_prompt_override(env_key)
+    if override is not None:
+        return override.strip()
+    return (fallback or '').strip()
+
+
+def _get_persisted_runtime_setting_override(key: str):
+    """Load one persisted runtime-setting override from the database when available."""
+    if key not in EDITABLE_RUNTIME_SETTINGS:
+        return None
+    try:
+        try:
+            from .database import get_db_sync
+            from .agents.database_manager import DatabaseManager
+        except ImportError:
+            from database import get_db_sync
+            from agents.database_manager import DatabaseManager
+
+        db = get_db_sync()
+        if db is None:
+            return None
+        try:
+            return DatabaseManager(db).get_runtime_setting_override(key)
+        finally:
+            db.close()
+    except Exception:
+        return None
+def _coerce_runtime_setting_value(key: str, value, fallback):
+    """Coerce a persisted runtime setting override into the expected scalar type."""
+    value_type = EDITABLE_RUNTIME_SETTINGS.get(key, {}).get('value_type')
+    if value is None:
+        return fallback
+    if value_type == 'boolean':
+        if isinstance(value, bool):
+            return value
+        normalized = str(value).strip().lower()
+        if normalized in {'1', 'true', 'yes', 'on'}:
+            return True
+        if normalized in {'0', 'false', 'no', 'off'}:
+            return False
+        return bool(fallback)
+    if value_type == 'integer':
+        try:
+            return int(value)
+        except Exception:
+            return int(fallback)
+    if value_type == 'float':
+        try:
+            return float(value)
+        except Exception:
+            return float(fallback)
+    return str(value).strip()
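The coercion rules above are easiest to see with concrete inputs; this sketch looks up the `value_type` from a small local table standing in for `EDITABLE_RUNTIME_SETTINGS`:

```python
# Hypothetical stand-in for EDITABLE_RUNTIME_SETTINGS value_type lookups.
TYPES = {'PROMPT_QUEUE_ENABLED': 'boolean', 'PROMPT_QUEUE_MAX_BATCH_SIZE': 'integer'}

def coerce(key, value, fallback):
    """Coerce a persisted string override into the expected scalar type."""
    value_type = TYPES.get(key)
    if value is None:
        return fallback  # no override persisted
    if value_type == 'boolean':
        if isinstance(value, bool):
            return value
        normalized = str(value).strip().lower()
        if normalized in {'1', 'true', 'yes', 'on'}:
            return True
        if normalized in {'0', 'false', 'no', 'off'}:
            return False
        return bool(fallback)  # unrecognized boolean text
    if value_type == 'integer':
        try:
            return int(value)
        except Exception:
            return int(fallback)  # non-numeric override
    return str(value).strip()

print(coerce('PROMPT_QUEUE_ENABLED', 'yes', False))      # True
print(coerce('PROMPT_QUEUE_MAX_BATCH_SIZE', 'oops', 1))  # 1
print(coerce('PROMPT_QUEUE_ENABLED', None, False))       # False
```

Unparseable overrides fall back to the environment/default value instead of raising, which keeps a bad dashboard edit from breaking startup.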
+def _resolve_runtime_setting_value(key: str, fallback):
+    """Resolve one editable runtime setting from DB override first, then environment/defaults."""
+    override = _get_persisted_runtime_setting_override(key)
+    return _coerce_runtime_setting_value(key, override, fallback)
+
+
 class Settings(BaseSettings):
     """Application settings loaded from environment variables."""

@@ -36,10 +233,10 @@ class Settings(BaseSettings):
         "For summaries: only describe facts present in the provided context and tool outputs. Never claim a repository, commit, or pull request exists unless it is present in the supplied data."
     )
     LLM_PROJECT_NAMING_GUARDRAIL_PROMPT: str = (
-        "For project naming: prefer clear, product-like names and repository slugs that match the user's intent. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
+        "For project naming: prefer clear, product-like names and repository slugs that match the user's concrete deliverable. Avoid abstract or instructional words such as purpose, project, system, app, tool, platform, solution, new, create, or test unless the request truly centers on that exact noun. Base the name on the actual artifact or workflow being built, and avoid copying sentence fragments from the prompt. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
     )
     LLM_PROJECT_NAMING_SYSTEM_PROMPT: str = (
-        "You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles. Repo names should be lowercase kebab-case slugs suitable for a Gitea repository name."
+        "You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles based on the real product, artifact, or workflow being created. Repo names should be lowercase kebab-case slugs derived from that title. Never return generic names like purpose, project, system, app, tool, platform, solution, harness, or test by themselves, and never return a repo_name that is a copied sentence fragment from the prompt. Prefer 2 to 4 specific words when possible."
     )
     LLM_PROJECT_ID_GUARDRAIL_PROMPT: str = (
         "For project ids: produce short stable slugs for newly created projects. Avoid collisions with known project ids and keep ids lowercase with hyphens."
@@ -76,6 +273,19 @@ class Settings(BaseSettings):
     TELEGRAM_BOT_TOKEN: str = ""
     TELEGRAM_CHAT_ID: str = ""
+
+    # Home Assistant and prompt queue settings
+    HOME_ASSISTANT_URL: str = ""
+    HOME_ASSISTANT_TOKEN: str = ""
+    HOME_ASSISTANT_BATTERY_ENTITY_ID: str = ""
+    HOME_ASSISTANT_SURPLUS_ENTITY_ID: str = ""
+    HOME_ASSISTANT_BATTERY_FULL_THRESHOLD: float = 95.0
+    HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS: float = 100.0
+    PROMPT_QUEUE_ENABLED: bool = False
+    PROMPT_QUEUE_AUTO_PROCESS: bool = True
+    PROMPT_QUEUE_FORCE_PROCESS: bool = False
+    PROMPT_QUEUE_POLL_INTERVAL_SECONDS: int = 60
+    PROMPT_QUEUE_MAX_BATCH_SIZE: int = 1
 
     # PostgreSQL settings
     POSTGRES_HOST: str = "localhost"
     POSTGRES_PORT: int = 5432
@@ -163,37 +373,74 @@ class Settings(BaseSettings):
     @property
     def llm_guardrail_prompt(self) -> str:
         """Get the global guardrail prompt used for all external LLM calls."""
-        return self.LLM_GUARDRAIL_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_GUARDRAIL_PROMPT', self.LLM_GUARDRAIL_PROMPT)
 
     @property
     def llm_request_interpreter_guardrail_prompt(self) -> str:
         """Get the request-interpretation specific guardrail prompt."""
-        return self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT', self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT)
 
     @property
     def llm_change_summary_guardrail_prompt(self) -> str:
         """Get the change-summary specific guardrail prompt."""
-        return self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT', self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT)
 
     @property
     def llm_project_naming_guardrail_prompt(self) -> str:
         """Get the project-naming specific guardrail prompt."""
-        return self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_GUARDRAIL_PROMPT', self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT)
 
     @property
     def llm_project_naming_system_prompt(self) -> str:
         """Get the project-naming system prompt."""
-        return self.LLM_PROJECT_NAMING_SYSTEM_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_SYSTEM_PROMPT', self.LLM_PROJECT_NAMING_SYSTEM_PROMPT)
 
     @property
     def llm_project_id_guardrail_prompt(self) -> str:
         """Get the project-id naming specific guardrail prompt."""
-        return self.LLM_PROJECT_ID_GUARDRAIL_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_PROJECT_ID_GUARDRAIL_PROMPT', self.LLM_PROJECT_ID_GUARDRAIL_PROMPT)
 
     @property
     def llm_project_id_system_prompt(self) -> str:
         """Get the project-id naming system prompt."""
-        return self.LLM_PROJECT_ID_SYSTEM_PROMPT.strip()
+        return _resolve_llm_prompt_value('LLM_PROJECT_ID_SYSTEM_PROMPT', self.LLM_PROJECT_ID_SYSTEM_PROMPT)
 
+    @property
+    def editable_llm_prompts(self) -> list[dict[str, str]]:
+        """Return metadata for all LLM prompts that may be persisted and edited from the UI."""
+        prompts = []
+        for env_key, metadata in EDITABLE_LLM_PROMPTS.items():
+            prompts.append(
+                {
+                    'key': env_key,
+                    'label': metadata['label'],
+                    'category': metadata['category'],
+                    'description': metadata['description'],
+                    'default_value': (getattr(self, env_key, '') or '').strip(),
+                    'value': _resolve_llm_prompt_value(env_key, getattr(self, env_key, '')),
+                }
+            )
+        return prompts
+
+    @property
+    def editable_runtime_settings(self) -> list[dict]:
+        """Return metadata for all DB-editable runtime settings."""
+        items = []
+        for key, metadata in EDITABLE_RUNTIME_SETTINGS.items():
+            default_value = getattr(self, key)
+            value = _resolve_runtime_setting_value(key, default_value)
+            items.append(
+                {
+                    'key': key,
+                    'label': metadata['label'],
+                    'category': metadata['category'],
+                    'description': metadata['description'],
+                    'value_type': metadata['value_type'],
+                    'default_value': default_value,
+                    'value': value,
+                }
+            )
+        return items
+
     @property
     def llm_tool_allowlist(self) -> list[str]:
@@ -254,7 +501,7 @@ class Settings(BaseSettings):
     @property
     def gitea_url(self) -> str:
         """Get Gitea URL with trimmed whitespace."""
-        return self.GITEA_URL.strip()
+        return _normalize_service_url(self.GITEA_URL)
 
     @property
     def gitea_token(self) -> str:
@@ -279,12 +526,12 @@ class Settings(BaseSettings):
     @property
     def n8n_webhook_url(self) -> str:
         """Get n8n webhook URL with trimmed whitespace."""
-        return self.N8N_WEBHOOK_URL.strip()
+        return _normalize_service_url(self.N8N_WEBHOOK_URL, default_scheme="http")
 
     @property
     def n8n_api_url(self) -> str:
         """Get n8n API URL with trimmed whitespace."""
-        return self.N8N_API_URL.strip()
+        return _normalize_service_url(self.N8N_API_URL, default_scheme="http")
 
     @property
     def n8n_api_key(self) -> str:
@@ -309,7 +556,62 @@ class Settings(BaseSettings):
     @property
     def backend_public_url(self) -> str:
         """Get backend public URL with trimmed whitespace."""
-        return self.BACKEND_PUBLIC_URL.strip().rstrip("/")
+        return _normalize_service_url(self.BACKEND_PUBLIC_URL, default_scheme="http")
 
+    @property
+    def home_assistant_url(self) -> str:
+        """Get Home Assistant URL with trimmed whitespace."""
+        return _normalize_service_url(self.HOME_ASSISTANT_URL, default_scheme="http")
+
+    @property
+    def home_assistant_token(self) -> str:
+        """Get Home Assistant token with trimmed whitespace."""
+        return self.HOME_ASSISTANT_TOKEN.strip()
+
+    @property
+    def home_assistant_battery_entity_id(self) -> str:
+        """Get the Home Assistant battery state entity id."""
+        return str(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_ENTITY_ID', self.HOME_ASSISTANT_BATTERY_ENTITY_ID)).strip()
+
+    @property
+    def home_assistant_surplus_entity_id(self) -> str:
+        """Get the Home Assistant surplus power entity id."""
+        return str(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_ENTITY_ID', self.HOME_ASSISTANT_SURPLUS_ENTITY_ID)).strip()
+
+    @property
+    def home_assistant_battery_full_threshold(self) -> float:
+        """Get the minimum battery SoC percentage for queue processing."""
+        return float(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_FULL_THRESHOLD', self.HOME_ASSISTANT_BATTERY_FULL_THRESHOLD))
+
+    @property
+    def home_assistant_surplus_threshold_watts(self) -> float:
+        """Get the minimum export/surplus power threshold for queue processing."""
+        return float(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS', self.HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS))
+
+    @property
+    def prompt_queue_enabled(self) -> bool:
+        """Whether Telegram prompts should be queued instead of processed immediately."""
+        return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_ENABLED', self.PROMPT_QUEUE_ENABLED))
+
+    @property
+    def prompt_queue_auto_process(self) -> bool:
+        """Whether the background worker should automatically process queued prompts."""
+        return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_AUTO_PROCESS', self.PROMPT_QUEUE_AUTO_PROCESS))
+
+    @property
+    def prompt_queue_force_process(self) -> bool:
+        """Whether queued prompts should bypass the Home Assistant energy gate."""
+        return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_FORCE_PROCESS', self.PROMPT_QUEUE_FORCE_PROCESS))
+
+    @property
+    def prompt_queue_poll_interval_seconds(self) -> int:
+        """Get the queue polling interval for background processing."""
+        return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_POLL_INTERVAL_SECONDS', self.PROMPT_QUEUE_POLL_INTERVAL_SECONDS)), 5)
+
+    @property
+    def prompt_queue_max_batch_size(self) -> int:
+        """Get the maximum number of queued prompts to process in one batch."""
+        return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_MAX_BATCH_SIZE', self.PROMPT_QUEUE_MAX_BATCH_SIZE)), 1)
+
     @property
     def projects_root(self) -> Path:
@@ -5,17 +5,22 @@ from __future__ import annotations
 from contextlib import closing
 from html import escape
 import json
+import re
 import time
+import urllib.error
+import urllib.request
 
 from nicegui import app, ui
 
 
 AUTO_SYNC_INTERVAL_SECONDS = 60
 _last_background_repo_sync_at = 0.0
+_DIFF_HUNK_PATTERN = re.compile(r'^@@ -(\d+)(?:,\d+)? \+(\d+)(?:,\d+)? @@')
 
 try:
     from .agents.database_manager import DatabaseManager
     from .agents.gitea import GiteaAPI
+    from .agents.home_assistant import HomeAssistantAgent
     from .agents.llm_service import LLMServiceClient
     from .agents.n8n_setup import N8NSetupAgent
     from .agents.prompt_workflow import PromptWorkflowManager
@@ -25,6 +30,7 @@ try:
 except ImportError:
     from agents.database_manager import DatabaseManager
     from agents.gitea import GiteaAPI
+    from agents.home_assistant import HomeAssistantAgent
     from agents.llm_service import LLMServiceClient
     from agents.n8n_setup import N8NSetupAgent
     from agents.prompt_workflow import PromptWorkflowManager
@@ -235,6 +241,126 @@ def _render_timeline(events: list[dict]) -> None:
             ui.label(f"Prompt {metadata['prompt_id']}").classes('factory-chip')
 
 
+def _parse_side_by_side_diff(diff_text: str) -> list[dict]:
+    """Parse unified diff text into rows suitable for side-by-side rendering."""
+    rows: list[dict] = []
+    left_line = 0
+    right_line = 0
+    lines = diff_text.splitlines()
+    index = 0
+    while index < len(lines):
+        line = lines[index]
+        if line.startswith(('diff --git', 'index ', '--- ', '+++ ')):
+            index += 1
+            continue
+        if line.startswith('@@'):
+            match = _DIFF_HUNK_PATTERN.match(line)
+            if match:
+                left_line = int(match.group(1))
+                right_line = int(match.group(2))
+            rows.append({'type': 'hunk', 'header': line})
+            index += 1
+            continue
+        if line.startswith('-') and not line.startswith('---'):
+            next_line = lines[index + 1] if index + 1 < len(lines) else None
+            if next_line and next_line.startswith('+') and not next_line.startswith('+++'):
+                rows.append(
+                    {
+                        'type': 'change',
+                        'kind': 'modified',
+                        'left_no': left_line,
+                        'right_no': right_line,
+                        'left_text': line[1:],
+                        'right_text': next_line[1:],
+                    }
+                )
+                left_line += 1
+                right_line += 1
+                index += 2
+                continue
+            rows.append(
+                {
+                    'type': 'change',
+                    'kind': 'removed',
+                    'left_no': left_line,
+                    'right_no': '',
+                    'left_text': line[1:],
+                    'right_text': '',
+                }
+            )
+            left_line += 1
+            index += 1
+            continue
+        if line.startswith('+') and not line.startswith('+++'):
+            rows.append(
+                {
+                    'type': 'change',
+                    'kind': 'added',
+                    'left_no': '',
+                    'right_no': right_line,
+                    'left_text': '',
+                    'right_text': line[1:],
+                }
+            )
+            right_line += 1
+            index += 1
+            continue
+        if line.startswith(' '):
+            rows.append(
+                {
+                    'type': 'change',
+                    'kind': 'context',
+                    'left_no': left_line,
+                    'right_no': right_line,
+                    'left_text': line[1:],
+                    'right_text': line[1:],
+                }
+            )
+            left_line += 1
+            right_line += 1
+            index += 1
+            continue
+        rows.append({'type': 'meta', 'text': line})
+        index += 1
+    return rows
+
+
+def _render_side_by_side_diff(diff_text: str) -> None:
+    """Render a side-by-side diff table from unified diff text."""
+    rows = _parse_side_by_side_diff(diff_text)
+    if not rows:
+        ui.label('No diff content recorded.').classes('factory-muted')
+        return
+    html_rows = []
+    for row in rows:
+        if row['type'] == 'hunk':
+            html_rows.append(
+                f"<tr class='factory-diff-hunk'><td colspan='4'>{escape(row['header'])}</td></tr>"
+            )
+            continue
+        if row['type'] == 'meta':
+            html_rows.append(
+                f"<tr class='factory-diff-meta'><td colspan='4'>{escape(row['text'])}</td></tr>"
+            )
+            continue
+        kind = row['kind']
+        html_rows.append(
+            "<tr>"
+            f"<td class='factory-diff-line factory-diff-line-{kind}'>{escape(str(row['left_no'])) if row['left_no'] != '' else ''}</td>"
+            f"<td class='factory-diff-cell factory-diff-cell-{kind}'>{escape(row['left_text'])}</td>"
+            f"<td class='factory-diff-line factory-diff-line-{kind}'>{escape(str(row['right_no'])) if row['right_no'] != '' else ''}</td>"
+            f"<td class='factory-diff-cell factory-diff-cell-{kind}'>{escape(row['right_text'])}</td>"
+            "</tr>"
+        )
+    ui.html(
+        "<div class='factory-diff-wrapper'>"
+        "<table class='factory-diff-table'>"
+        "<thead><tr><th colspan='2'>Before</th><th colspan='2'>After</th></tr></thead>"
+        f"<tbody>{''.join(html_rows)}</tbody>"
+        "</table></div>"
+    )
+
+
 def _render_commit_context(context: dict | None) -> None:
     """Render a commit provenance lookup result."""
     if not context:
@@ -351,8 +477,8 @@ def _render_change_list(changes: list[dict]) -> None:
         ui.label(change.get('change_type') or change.get('action_type') or 'CHANGE').classes('factory-chip')
         ui.label(change.get('diff_summary') or change.get('details') or 'No diff summary recorded').classes('factory-muted')
         if change.get('diff_text'):
-            with ui.expansion('Show diff').classes('w-full q-mt-sm'):
-                ui.label(change['diff_text']).classes('factory-code')
+            with ui.expansion('Show side-by-side diff').classes('w-full q-mt-sm'):
+                _render_side_by_side_diff(change['diff_text'])
 
 
 def _render_llm_traces(traces: list[dict]) -> None:
@@ -467,10 +593,96 @@ def _load_n8n_health_snapshot() -> dict:
     }
 
 
+def _load_gitea_health_snapshot() -> dict:
+    """Load a Gitea health snapshot for UI rendering."""
+    if not settings.gitea_url:
+        return {
+            'status': 'error',
+            'message': 'GITEA_URL is not configured.',
+            'base_url': 'Not configured',
+            'checks': [],
+        }
+    if not settings.gitea_token:
+        return {
+            'status': 'error',
+            'message': 'GITEA_TOKEN is not configured.',
+            'base_url': settings.gitea_url,
+            'checks': [],
+        }
+    try:
+        response = GiteaAPI(token=settings.GITEA_TOKEN, base_url=settings.GITEA_URL, owner=settings.GITEA_OWNER, repo=settings.GITEA_REPO or '').get_current_user_sync()
+        if response.get('error'):
+            return {
+                'status': 'error',
+                'message': response.get('error', 'Unable to reach Gitea.'),
+                'base_url': settings.gitea_url,
+                'checks': [
+                    {
+                        'name': 'token_auth',
+                        'ok': False,
+                        'message': response.get('error'),
+                        'status_code': response.get('status_code'),
+                        'url': f'{settings.gitea_url}/api/v1/user',
+                    }
+                ],
+            }
+        return {
+            'status': 'success',
+            'message': f"Authenticated as {response.get('login') or response.get('username') or 'unknown'}.",
+            'base_url': settings.gitea_url,
+            'checks': [
+                {
+                    'name': 'token_auth',
+                    'ok': True,
+                    'message': response.get('login') or response.get('username') or 'authenticated',
+                    'url': f'{settings.gitea_url}/api/v1/user',
+                }
+            ],
+        }
+    except Exception as exc:
+        return {
+            'status': 'error',
+            'message': f'Unable to run Gitea health checks: {exc}',
+            'base_url': settings.gitea_url,
+            'checks': [],
+        }
+
+
+def _load_home_assistant_health_snapshot() -> dict:
+    """Load a Home Assistant health snapshot for UI rendering."""
+    try:
+        return HomeAssistantAgent(base_url=settings.home_assistant_url, token=settings.home_assistant_token).health_check_sync()
+    except Exception as exc:
+        return {
+            'status': 'error',
+            'message': f'Unable to run Home Assistant health checks: {exc}',
+            'base_url': settings.home_assistant_url or 'Not configured',
+            'checks': [],
+        }
+
+
 def _add_dashboard_styles() -> None:
     """Register shared dashboard styles."""
     ui.add_head_html(
         """
+        <script>
+        (() => {
+            const scrollKey = 'factory-dashboard-scroll-y';
+            const rememberScroll = () => sessionStorage.setItem(scrollKey, String(window.scrollY || 0));
+            const restoreScroll = () => {
+                const stored = sessionStorage.getItem(scrollKey);
+                if (stored === null) return;
+                window.requestAnimationFrame(() => window.scrollTo({top: Number(stored) || 0, left: 0, behavior: 'auto'}));
+            };
+            window.addEventListener('scroll', rememberScroll, {passive: true});
+            document.addEventListener('click', rememberScroll, true);
+            const observer = new MutationObserver(() => restoreScroll());
+            window.addEventListener('load', () => {
+                observer.observe(document.body, {childList: true, subtree: true});
+                restoreScroll();
+            });
+        })();
+        </script>
         <style>
         body { background: radial-gradient(circle at top, #f4efe7 0%, #e9e1d4 38%, #d7cec1 100%); }
         .factory-shell { max-width: 1240px; margin: 0 auto; }
@@ -479,6 +691,20 @@ def _add_dashboard_styles() -> None:
         .factory-muted { color: #745e4c; }
         .factory-code { font-family: 'IBM Plex Mono', 'Fira Code', monospace; background: rgba(32,26,20,0.92); color: #f4efe7; border-radius: 14px; padding: 12px; white-space: pre-wrap; }
         .factory-chip { background: rgba(173, 129, 82, 0.14); color: #6b4b2e; border-radius: 999px; padding: 4px 10px; font-size: 12px; }
+        .factory-diff-wrapper { overflow-x: auto; border-radius: 16px; border: 1px solid rgba(73,54,40,0.10); }
+        .factory-diff-table { width: 100%; border-collapse: collapse; font-family: 'IBM Plex Mono', 'Fira Code', monospace; font-size: 0.85rem; }
+        .factory-diff-table thead th { background: rgba(58,40,26,0.08); color: #3a281a; padding: 10px 12px; text-align: left; }
+        .factory-diff-line { width: 3.5rem; text-align: right; padding: 8px 10px; color: #8a7461; background: rgba(58,40,26,0.04); vertical-align: top; }
+        .factory-diff-cell { white-space: pre-wrap; padding: 8px 12px; vertical-align: top; }
+        .factory-diff-cell-context { background: rgba(255,255,255,0.88); }
+        .factory-diff-cell-added { background: rgba(41,121,82,0.12); }
+        .factory-diff-cell-removed { background: rgba(198,40,40,0.10); }
+        .factory-diff-cell-modified { background: linear-gradient(90deg, rgba(198,40,40,0.08), rgba(41,121,82,0.10)); }
+        .factory-diff-line-added { background: rgba(41,121,82,0.16); }
+        .factory-diff-line-removed { background: rgba(198,40,40,0.14); }
+        .factory-diff-line-modified { background: rgba(173,129,82,0.18); }
+        .factory-diff-hunk td { padding: 8px 12px; background: rgba(48,33,22,0.9); color: #f4efe7; }
+        .factory-diff-meta td { padding: 8px 12px; background: rgba(58,40,26,0.06); color: #745e4c; }
         </style>
         """
     )
@@ -529,9 +755,13 @@ def _render_confirmation_dialog(title: str, message: str, confirm_label: str, on
 
 
 def _render_health_panels() -> None:
-    """Render application and n8n health panels."""
+    """Render application, integration, and queue health panels."""
     runtime = get_database_runtime_summary()
     n8n_health = _load_n8n_health_snapshot()
+    gitea_health = _load_gitea_health_snapshot()
+    home_assistant_health = _load_home_assistant_health_snapshot()
+    snapshot = _load_dashboard_snapshot()
+    queue_summary = ((snapshot.get('prompt_queue') or {}).get('summary') if isinstance(snapshot, dict) else {}) or {}
 
     with ui.grid(columns=2).classes('w-full gap-4'):
         with ui.card().classes('factory-panel q-pa-lg'):
@@ -579,6 +809,54 @@ def _render_health_panels() -> None:
                 if check.get('message'):
                     ui.label(check['message']).classes('factory-muted')
+
+        with ui.card().classes('factory-panel q-pa-lg'):
+            ui.label('Gitea Integration').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+            ui.label(gitea_health.get('status', 'unknown').upper()).classes('factory-chip')
+            ui.label(gitea_health.get('message', 'No Gitea status available.')).classes('factory-muted q-mt-sm')
+            for label, value in [
+                ('Base URL', gitea_health.get('base_url') or 'Not configured'),
+                ('Owner', settings.gitea_owner or 'Not configured'),
+                ('Mode', 'per-project' if settings.use_project_repositories else 'shared'),
+            ]:
+                with ui.row().classes('justify-between w-full q-mt-sm'):
+                    ui.label(label).classes('factory-muted')
+                    ui.label(str(value)).style('font-weight: 600; color: #3a281a;')
+            for check in gitea_health.get('checks', []):
+                status = 'OK' if check.get('ok') else 'FAIL'
+                ui.markdown(
+                    f"- **{escape(check.get('name', 'check'))}** · {status} · {escape(str(check.get('status_code') or 'n/a'))} · {escape(check.get('url') or 'unknown url')}"
+                )
+                if check.get('message'):
+                    ui.label(check['message']).classes('factory-muted')
+
+        with ui.card().classes('factory-panel q-pa-lg'):
+            ui.label('Home Assistant Queue Gate').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+            ui.label(home_assistant_health.get('status', 'unknown').upper()).classes('factory-chip')
+            ui.label(home_assistant_health.get('message', 'No Home Assistant status available.')).classes('factory-muted q-mt-sm')
+            for label, value in [
+                ('Base URL', home_assistant_health.get('base_url') or 'Not configured'),
+                ('Queue Enabled', 'yes' if settings.prompt_queue_enabled else 'no'),
+                ('Auto Process', 'yes' if settings.prompt_queue_auto_process else 'no'),
+                ('Force Override', 'yes' if settings.prompt_queue_force_process else 'no'),
+                ('Queued Prompts', queue_summary.get('queued', 0)),
+                ('Failed Prompts', queue_summary.get('failed', 0)),
+            ]:
+                with ui.row().classes('justify-between w-full q-mt-sm'):
+                    ui.label(label).classes('factory-muted')
+                    ui.label(str(value)).style('font-weight: 600; color: #3a281a;')
+            queue_gate = home_assistant_health.get('queue_gate') or {}
+            if queue_gate:
+                ui.label(
+                    f"Thresholds: battery >= {queue_gate.get('battery_full_percent')}%, surplus >= {queue_gate.get('surplus_watts')} W"
+                ).classes('factory-muted q-mt-sm')
+            for check in home_assistant_health.get('checks', []):
+                status = 'OK' if check.get('ok') else 'FAIL'
+                ui.markdown(
+                    f"- **{escape(check.get('name', 'check'))}** · {status} · {escape(str(check.get('status_code') or 'n/a'))} · {escape(check.get('url') or 'unknown url')}"
+                )
+                if check.get('message'):
+                    ui.label(check['message']).classes('factory-muted')
+
 
 def create_health_page() -> None:
     """Create a dedicated health page for runtime diagnostics."""
@@ -607,6 +885,30 @@ def create_dashboard():
     repo_discovery_key = 'dashboard.repo_discovery'
     repo_owner_key = 'dashboard.repo_owner'
     repo_name_key = 'dashboard.repo_name'
+    expansion_state_prefix = 'dashboard.expansion.'
+
+    def _expansion_state_key(name: str) -> str:
+        return f'{expansion_state_prefix}{name}'
+
+    def _expansion_value(name: str, default: bool = False) -> bool:
+        return bool(app.storage.user.get(_expansion_state_key(name), default))
+
+    def _store_expansion_value(name: str, event) -> None:
+        app.storage.user[_expansion_state_key(name)] = bool(event.value)
+
+    def _sticky_expansion(name: str, text: str, *, icon: str | None = None, default: bool = False, classes: str = 'w-full'):
+        return ui.expansion(
+            text,
+            icon=icon,
+            value=_expansion_value(name, default),
+            on_value_change=lambda event, expansion_name=name: _store_expansion_value(expansion_name, event),
+        ).classes(classes)
+
+    def _llm_prompt_draft_key(prompt_key: str) -> str:
+        return f'dashboard.llm_prompt_draft.{prompt_key}'
+
+    def _runtime_setting_draft_key(setting_key: str) -> str:
+        return f'dashboard.runtime_setting_draft.{setting_key}'
+
     def _selected_tab_name() -> str:
         """Return the persisted active dashboard tab."""
@@ -668,6 +970,42 @@ def create_dashboard():
     def _get_discovered_repositories() -> list[dict]:
         return app.storage.user.get(repo_discovery_key, [])
+
+    def _prompt_draft_value(prompt_key: str, fallback: str) -> str:
+        return app.storage.user.get(_llm_prompt_draft_key(prompt_key), fallback)
+
+    def _store_prompt_draft(prompt_key: str, value: str) -> None:
+        app.storage.user[_llm_prompt_draft_key(prompt_key)] = value
+
+    def _clear_prompt_draft(prompt_key: str) -> None:
+        app.storage.user.pop(_llm_prompt_draft_key(prompt_key), None)
+
+    def _runtime_setting_draft_value(setting_key: str, fallback):
+        return app.storage.user.get(_runtime_setting_draft_key(setting_key), fallback)
+
+    def _store_runtime_setting_draft(setting_key: str, value) -> None:
+        app.storage.user[_runtime_setting_draft_key(setting_key)] = value
+
+    def _clear_runtime_setting_draft(setting_key: str) -> None:
+        app.storage.user.pop(_runtime_setting_draft_key(setting_key), None)
+
+    def _call_backend_json(path: str, method: str = 'GET', payload: dict | None = None) -> dict:
+        target = f"{settings.backend_public_url}{path}"
+        data = json.dumps(payload).encode('utf-8') if payload is not None else None
+        request = urllib.request.Request(target, data=data, headers={'Content-Type': 'application/json'}, method=method.upper())
+        try:
+            with urllib.request.urlopen(request) as response:
+                body = response.read().decode('utf-8')
+                return json.loads(body) if body else {}
+        except urllib.error.HTTPError as exc:
+            try:
+                body = exc.read().decode('utf-8')
+                parsed = json.loads(body) if body else {}
+            except Exception:
+                parsed = {'detail': str(exc)}
+            return {'error': parsed.get('detail') or parsed.get('error') or str(exc), 'status_code': exc.code}
+        except Exception as exc:
+            return {'error': str(exc)}

     async def discover_gitea_repositories_action() -> None:
         if not settings.gitea_url or not settings.gitea_token:
             ui.notify('Configure GITEA_URL and GITEA_TOKEN first', color='negative')
@@ -817,6 +1155,115 @@ def create_dashboard():
         ui.notify(result.get('message', 'Telegram message sent'), color='positive' if result.get('status') == 'success' else 'negative')
         _refresh_health_sections()
+
+    def process_prompt_queue_action(force: bool = False, limit: int | None = None) -> None:
+        result = _call_backend_json(
+            '/queue/process',
+            method='POST',
+            payload={'force': force, 'limit': limit or settings.prompt_queue_max_batch_size},
+        )
+        if result.get('error'):
+            ui.notify(result.get('error', 'Queue processing failed'), color='negative')
+            return
+        processed_count = result.get('processed_count', 0)
+        if processed_count:
+            ui.notify(f'Processed {processed_count} queued prompt(s)', color='positive')
+        else:
+            ui.notify(result.get('queue_gate', {}).get('reason', 'No queued prompts were processed'), color='warning')
+        _refresh_all_dashboard_sections()
+
+    def retry_prompt_queue_item_action(queue_item_id: int) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            result = DatabaseManager(db).retry_queued_prompt(queue_item_id)
+        if result is None:
+            ui.notify('Queued prompt not found', color='negative')
+            return
+        ui.notify('Queued prompt returned to pending state', color='positive')
+        _refresh_all_dashboard_sections()
+
+    def purge_orphan_code_changes_action(project_id: str | None = None) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            result = DatabaseManager(db).cleanup_orphan_code_changes(project_id=project_id)
+        ui.notify(result.get('message', 'Audit cleanup completed'), color='positive')
+        _refresh_all_dashboard_sections()
+
+    def retry_project_delivery_action(project_id: str) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            result = DatabaseManager(db).retry_project_delivery(project_id)
+        ui.notify(result.get('message', 'Delivery retry completed'), color='positive' if result.get('status') == 'success' else 'negative')
+        _refresh_all_dashboard_sections()
+
+    def save_llm_prompt_action(prompt_key: str) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            current = next((item for item in DatabaseManager(db).get_llm_prompt_settings() if item['key'] == prompt_key), None)
+            value = _prompt_draft_value(prompt_key, current['value'] if current else '')
+            result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, value, actor='dashboard')
+        if result.get('status') == 'error':
+            ui.notify(result.get('message', 'Prompt save failed'), color='negative')
+            return
+        _clear_prompt_draft(prompt_key)
+        ui.notify('LLM prompt setting saved', color='positive')
+        _refresh_system_sections()
+
+    def reset_llm_prompt_action(prompt_key: str) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='dashboard')
+        if result.get('status') == 'error':
+            ui.notify(result.get('message', 'Prompt reset failed'), color='negative')
+            return
+        _clear_prompt_draft(prompt_key)
+        ui.notify('LLM prompt setting reset to environment default', color='positive')
+        _refresh_system_sections()
+
+    def save_runtime_setting_action(setting_key: str) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            current = next((item for item in DatabaseManager(db).get_runtime_settings() if item['key'] == setting_key), None)
+            value = _runtime_setting_draft_value(setting_key, current['value'] if current else None)
+            result = DatabaseManager(db).save_runtime_setting(setting_key, value, actor='dashboard')
+        if result.get('status') == 'error':
+            ui.notify(result.get('message', 'Runtime setting save failed'), color='negative')
+            return
+        _clear_runtime_setting_draft(setting_key)
+        ui.notify('Runtime setting saved', color='positive')
+        _refresh_all_dashboard_sections()
+
+    def reset_runtime_setting_action(setting_key: str) -> None:
+        db = get_db_sync()
+        if db is None:
+            ui.notify('Database session could not be created', color='negative')
+            return
+        with closing(db):
+            result = DatabaseManager(db).reset_runtime_setting(setting_key, actor='dashboard')
+        if result.get('status') == 'error':
+            ui.notify(result.get('message', 'Runtime setting reset failed'), color='negative')
+            return
+        _clear_runtime_setting_draft(setting_key)
+        ui.notify('Runtime setting reset to environment default', color='positive')
+        _refresh_all_dashboard_sections()

     def init_db_action() -> None:
         result = init_db()
         ui.notify(result.get('message', 'Database initialized'), color='positive' if result.get('status') == 'success' else 'negative')
@@ -868,13 +1315,18 @@ def create_dashboard():
         if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and settings.gitea_url and settings.gitea_token:
             gitea_api = GiteaAPI(token=settings.GITEA_TOKEN, base_url=settings.GITEA_URL, owner=settings.GITEA_OWNER, repo=settings.GITEA_REPO or '')
             remote_delete = gitea_api.delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
-            if remote_delete.get('error') and remote_delete.get('status_code') not in {404, None}:
-                ui.notify(remote_delete.get('error', 'Remote repository deletion failed'), color='negative')
-                return
+            if remote_delete.get('error'):
+                manager.log_system_event(
+                    component='gitea',
+                    level='WARNING',
+                    message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
+                )
         result = manager.delete_project(project_id)
         message = result.get('message', 'Project deleted')
         if remote_delete and not remote_delete.get('error'):
             message = f"{message}; remote repository deleted"
+        elif remote_delete and remote_delete.get('error'):
+            message = f"{message}; remote repository delete failed: {remote_delete.get('error')}"
         ui.notify(message, color='positive' if result.get('status') == 'success' else 'negative')
         _refresh_all_dashboard_sections()
@@ -889,6 +1341,17 @@ def create_dashboard():
         branch_scope_filter = _selected_branch_scope()
         commit_lookup_query = _selected_commit_lookup()
        discovered_repositories = _get_discovered_repositories()
+        prompt_settings = settings.editable_llm_prompts
+        runtime_settings = settings.editable_runtime_settings
+        db = get_db_sync()
+        if db is not None:
+            with closing(db):
+                try:
+                    prompt_settings = DatabaseManager(db).get_llm_prompt_settings()
+                    runtime_settings = DatabaseManager(db).get_runtime_settings()
+                except Exception:
+                    prompt_settings = settings.editable_llm_prompts
+                    runtime_settings = settings.editable_runtime_settings
         if snapshot.get('error'):
             return {
                 'error': snapshot['error'],
@@ -899,6 +1362,8 @@ def create_dashboard():
                 'branch_scope_filter': branch_scope_filter,
                 'commit_lookup_query': commit_lookup_query,
                 'discovered_repositories': discovered_repositories,
+                'prompt_settings': prompt_settings,
+                'runtime_settings': runtime_settings,
             }
         projects = snapshot['projects']
         all_llm_traces = [trace for project_bundle in projects for trace in project_bundle.get('llm_traces', [])]
@@ -917,6 +1382,8 @@ def create_dashboard():
             'commit_lookup_query': commit_lookup_query,
             'commit_context': _load_commit_context(commit_lookup_query, branch_scope_filter) if commit_lookup_query else None,
             'discovered_repositories': discovered_repositories,
+            'prompt_settings': prompt_settings,
+            'runtime_settings': runtime_settings,
            'llm_stage_options': [''] + sorted({trace.get('stage') for trace in all_llm_traces if trace.get('stage')}),
            'llm_model_options': [''] + sorted({trace.get('model') for trace in all_llm_traces if trace.get('model')}),
            'project_repository_map': {
@@ -973,6 +1440,7 @@ def create_dashboard():
            ('Completed', summary['completed_projects'], 'Finished project runs'),
            ('Prompts', summary['prompt_events'], 'Recorded originating prompts'),
            ('Open PRs', summary['open_pull_requests'], 'Unmerged review branches'),
+            ('Orphans', summary.get('orphan_code_changes', 0), 'Generated diffs with no matching commit'),
        ]
        for title, value, subtitle in metrics:
            with ui.card().classes('factory-kpi'):
@@ -991,15 +1459,38 @@ def create_dashboard():
        with ui.grid(columns=2).classes('w-full gap-4'):
            with ui.card().classes('factory-panel q-pa-lg'):
                ui.label('Project Pipeline').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+                if summary.get('orphan_code_changes'):
+                    with ui.card().classes('q-pa-md q-mt-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
+                        ui.label('Uncommitted generated changes detected').style('font-weight: 700; color: #7a4b16;')
+                        ui.label(
+                            f"{summary['orphan_code_changes']} generated file change row(s) have no matching git commit or PR delivery record."
+                        ).classes('factory-muted')
+                        ui.button(
+                            'Purge orphan change rows',
+                            on_click=lambda: _render_confirmation_dialog(
+                                'Purge orphaned generated change rows?',
+                                'Delete only generated CODE_CHANGE audit rows that have no matching git commit. Valid prompt, commit, and PR history will be kept.',
+                                'Purge Orphans',
+                                lambda: purge_orphan_code_changes_action(),
+                                color='warning',
+                            ),
+                        ).props('outline color=warning').classes('q-mt-sm')
                if projects:
                    for project_bundle in projects[:4]:
                        project = project_bundle['project']
                        with ui.column().classes('gap-1 q-mt-md'):
                            with ui.row().classes('justify-between items-center'):
                                ui.label(project['project_name']).style('font-weight: 700; color: #2f241d;')
-                                ui.label(project['status']).classes('factory-chip')
+                                with ui.row().classes('items-center gap-2'):
+                                    if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
+                                        ui.label(project.get('delivery_status', 'delivery')).classes('factory-chip')
+                                    ui.label(project['status']).classes('factory-chip')
                            ui.linear_progress(value=(project['progress'] or 0) / 100, show_value=False).classes('w-full')
-                            ui.label(project['message'] or 'No status message').classes('factory-muted')
+                            ui.label(
+                                project.get('delivery_message')
+                                if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}
+                                else project['message'] or 'No status message'
+                            ).classes('factory-muted')
                else:
                    ui.label('No projects in the database yet.').classes('factory-muted')
@@ -1029,7 +1520,12 @@ def create_dashboard():
            ui.label('No project data available yet.').classes('factory-muted')
        for project_bundle in projects:
            project = project_bundle['project']
-            with ui.expansion(f"{project['project_name']} · {project['status']}", icon='folder').classes('factory-panel w-full q-mb-md'):
+            with _sticky_expansion(
+                f"projects.{project['project_id']}",
+                f"{project['project_name']} · {project['status']}",
+                icon='folder',
+                classes='factory-panel w-full q-mb-md',
+            ):
                with ui.row().classes('items-center gap-2 q-pa-md'):
                    ui.button(
                        'Archive',
@@ -1050,6 +1546,28 @@ def create_dashboard():
                            lambda: delete_project_action(project_id),
                        ),
                    ).props('outline color=negative')
+                if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
+                    with ui.card().classes('q-ma-md q-pa-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
+                        with ui.row().classes('items-center justify-between w-full gap-3'):
+                            with ui.column().classes('gap-1'):
+                                ui.label('Remote delivery attention needed').style('font-weight: 700; color: #7a4b16;')
+                                ui.label(project.get('delivery_message') or 'Generated changes were not published to the tracked repository.').classes('factory-muted')
+                            with ui.row().classes('items-center gap-2'):
+                                ui.button(
+                                    'Retry delivery',
+                                    on_click=lambda _=None, project_id=project['project_id']: retry_project_delivery_action(project_id),
+                                ).props('outline color=positive')
+                                if project.get('delivery_status') == 'uncommitted':
+                                    ui.button(
+                                        'Purge project orphan rows',
+                                        on_click=lambda _=None, project_id=project['project_id']: _render_confirmation_dialog(
+                                            'Purge orphaned generated change rows for this project?',
+                                            'Delete only generated CODE_CHANGE audit rows for this project that have no matching git commit. Valid history remains intact.',
+                                            'Purge Project Orphans',
+                                            lambda: purge_orphan_code_changes_action(project_id),
+                                            color='warning',
+                                        ),
+                                    ).props('outline color=warning')
                with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
                    with ui.card().classes('q-pa-md'):
                        ui.label('Repository').style('font-weight: 700; color: #3a281a;')
@@ -1074,7 +1592,12 @@ def create_dashboard():
            ui.label('No archived projects yet.').classes('factory-muted')
        for project_bundle in archived_projects:
            project = project_bundle['project']
-            with ui.expansion(f"{project['project_name']} · archived", icon='archive').classes('factory-panel w-full q-mb-md'):
+            with _sticky_expansion(
+                f"archived.{project['project_id']}",
+                f"{project['project_name']} · archived",
+                icon='archive',
+                classes='factory-panel w-full q-mb-md',
+            ):
                with ui.row().classes('items-center gap-2 q-pa-md'):
                    ui.button(
                        'Restore',
@@ -1095,6 +1618,26 @@ def create_dashboard():
                            lambda: delete_project_action(project_id),
                        ),
                    ).props('outline color=negative')
+                if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
+                    with ui.card().classes('q-ma-md q-pa-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
+                        ui.label('Archived project needs delivery attention').style('font-weight: 700; color: #7a4b16;')
+                        ui.label(project.get('delivery_message') or 'Generated changes were not published to the tracked repository.').classes('factory-muted')
+                        with ui.row().classes('items-center gap-2 q-mt-sm'):
+                            ui.button(
+                                'Retry delivery',
+                                on_click=lambda _=None, project_id=project['project_id']: retry_project_delivery_action(project_id),
+                            ).props('outline color=positive')
+                            if project.get('delivery_status') == 'uncommitted':
+                                ui.button(
+                                    'Purge archived project orphan rows',
+                                    on_click=lambda _=None, project_id=project['project_id']: _render_confirmation_dialog(
+                                        'Purge orphaned generated change rows for this archived project?',
+                                        'Delete only generated CODE_CHANGE audit rows for this project that have no matching git commit. Valid history remains intact.',
+                                        'Purge Archived Orphans',
+                                        lambda: purge_orphan_code_changes_action(project_id),
+                                        color='warning',
+                                    ),
+                                ).props('outline color=warning')
                with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
                    with ui.card().classes('q-pa-md'):
                        ui.label('Repository').style('font-weight: 700; color: #3a281a;')
@@ -1281,7 +1824,12 @@ def create_dashboard():
        if projects:
            for project_bundle in projects:
                project = project_bundle['project']
-                with ui.expansion(f"{project['project_name']} · {project['project_id']}", icon='schedule').classes('q-mt-md w-full'):
+                with _sticky_expansion(
+                    f"timeline.{project['project_id']}",
+                    f"{project['project_name']} · {project['project_id']}",
+                    icon='schedule',
+                    classes='q-mt-md w-full',
+                ):
                    _render_timeline(_filter_timeline_events(project_bundle.get('timeline', []), branch_scope_filter))
        else:
            ui.label('No project timelines recorded yet.').classes('factory-muted')
@@ -1295,6 +1843,8 @@ def create_dashboard():
        system_logs = view_model['system_logs']
        llm_runtime = view_model['llm_runtime']
        discovered_repositories = view_model['discovered_repositories']
+        prompt_settings = view_model.get('prompt_settings', [])
+        runtime_settings = view_model.get('runtime_settings', [])
        with ui.grid(columns=2).classes('w-full gap-4'):
            with ui.card().classes('factory-panel q-pa-lg'):
                ui.label('System Logs').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
@@ -1345,6 +1895,70 @@ def create_dashboard():
                for label, text in system_prompts.items():
                    ui.label(label.replace('_', ' ').title()).classes('factory-muted q-mt-sm')
                    ui.label(text or 'Not configured').classes('factory-code')
+            with ui.card().classes('factory-panel q-pa-lg'):
+                ui.label('Home Assistant and Queue Settings').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+                ui.label('Keep only the Home Assistant base URL and access token in the environment. Entity ids, thresholds, and queue behavior are edited here and persisted in the database.').classes('factory-muted')
+                for setting in runtime_settings:
+                    with ui.card().classes('q-pa-sm q-mt-md'):
+                        with ui.row().classes('items-center justify-between w-full'):
+                            with ui.column().classes('gap-1'):
+                                ui.label(setting['label']).style('font-weight: 700; color: #2f241d;')
+                                ui.label(setting.get('description') or '').classes('factory-muted')
+                            with ui.row().classes('items-center gap-2'):
+                                ui.label(setting.get('category', 'setting')).classes('factory-chip')
+                                ui.label(setting.get('source', 'environment')).classes('factory-chip')
+                        draft_value = _runtime_setting_draft_value(setting['key'], setting.get('value'))
+                        if setting.get('value_type') == 'boolean':
+                            ui.switch(
+                                value=bool(draft_value),
+                                on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, bool(event.value)),
+                            ).props('color=accent').classes('q-mt-sm')
+                        elif setting.get('value_type') == 'integer':
+                            ui.number(
+                                value=int(draft_value),
+                                on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, int(event.value) if event.value is not None else None),
+                            ).classes('w-full q-mt-sm')
+                        elif setting.get('value_type') == 'float':
+                            ui.number(
+                                value=float(draft_value),
+                                on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, float(event.value) if event.value is not None else None),
+                            ).classes('w-full q-mt-sm')
+                        else:
+                            ui.input(
+                                value=str(draft_value or ''),
+                                on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, event.value or ''),
+                            ).classes('w-full q-mt-sm')
+                        ui.label(f"Environment default: {setting.get('default_value')}").classes('factory-muted q-mt-sm')
+                        if setting.get('updated_at'):
+                            ui.label(f"Last updated: {setting['updated_at']} by {setting.get('updated_by') or 'unknown'}").classes('factory-muted q-mt-sm')
+                        with ui.row().classes('items-center gap-2 q-mt-md'):
+                            ui.button('Save Override', on_click=lambda _=None, setting_key=setting['key']: save_runtime_setting_action(setting_key)).props('unelevated color=accent')
+                            ui.button('Reset To Default', on_click=lambda _=None, setting_key=setting['key']: reset_runtime_setting_action(setting_key)).props('outline color=warning')
+            with ui.card().classes('factory-panel q-pa-lg'):
+                ui.label('Editable LLM Prompts').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+                ui.label('These guardrails and system prompts are persisted in the database and override environment defaults until reset.').classes('factory-muted')
+                for prompt in prompt_settings:
+                    with ui.card().classes('q-pa-sm q-mt-md'):
+                        with ui.row().classes('items-center justify-between w-full'):
+                            with ui.column().classes('gap-1'):
+                                ui.label(prompt['label']).style('font-weight: 700; color: #2f241d;')
+                                ui.label(prompt.get('description') or '').classes('factory-muted')
+                            with ui.row().classes('items-center gap-2'):
+                                ui.label(prompt.get('category', 'prompt')).classes('factory-chip')
+                                ui.label(prompt.get('source', 'environment')).classes('factory-chip')
+                        draft_value = _prompt_draft_value(prompt['key'], prompt.get('value') or '')
+                        ui.textarea(
+                            label=prompt['key'],
+                            value=draft_value,
+                            on_change=lambda event, prompt_key=prompt['key']: _store_prompt_draft(prompt_key, event.value or ''),
+                        ).props('autogrow outlined').classes('w-full q-mt-sm')
+                        ui.label('Environment default').classes('factory-muted q-mt-sm')
+                        ui.label(prompt.get('default_value') or 'Not configured').classes('factory-code')
+                        if prompt.get('updated_at'):
+                            ui.label(f"Last updated: {prompt['updated_at']} by {prompt.get('updated_by') or 'unknown'}").classes('factory-muted q-mt-sm')
+                        with ui.row().classes('items-center gap-2 q-mt-md'):
+                            ui.button('Save Override', on_click=lambda _=None, prompt_key=prompt['key']: save_llm_prompt_action(prompt_key)).props('unelevated color=dark')
+                            ui.button('Reset To Default', on_click=lambda _=None, prompt_key=prompt['key']: reset_llm_prompt_action(prompt_key)).props('outline color=warning')
            with ui.card().classes('factory-panel q-pa-lg'):
                ui.label('Repository Onboarding').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
                ui.label('Discover repositories in the Gitea organization, onboard manually created repos, and import their recent commits into the dashboard.').classes('factory-muted')
@@ -1377,15 +1991,19 @@ def create_dashboard():
     with ui.card().classes('factory-panel q-pa-lg'):
         ui.label('Important Endpoints').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
         endpoints = [
-            '/health', '/llm/runtime', '/generate', '/projects', '/audit/projects', '/audit/prompts', '/audit/changes', '/audit/issues',
+            '/health', '/llm/runtime', '/generate', '/generate/text', '/queue', '/queue/process', '/projects', '/audit/projects', '/audit/prompts', '/audit/changes', '/audit/issues',
             '/audit/commit-context', '/audit/timeline', '/audit/llm-traces', '/audit/correlations', '/projects/{project_id}/sync-repository',
-            '/gitea/repos', '/gitea/repos/onboard', '/n8n/health', '/n8n/setup',
+            '/gitea/repos', '/gitea/repos/onboard', '/gitea/health', '/home-assistant/health', '/n8n/health', '/n8n/setup',
         ]
         for endpoint in endpoints:
             ui.label(endpoint).classes('factory-code q-mt-sm')

     @ui.refreshable
     def render_health_panel() -> None:
+        view_model = _view_model()
+        prompt_queue = (view_model.get('snapshot') or {}).get('prompt_queue') or {}
+        queue_items = prompt_queue.get('items') or []
+        queue_summary = prompt_queue.get('summary') or {}
         with ui.card().classes('factory-panel q-pa-lg q-mb-md'):
             ui.label('Health and Diagnostics').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
             ui.label('Use this page to verify runtime configuration, n8n API connectivity, and likely causes of provisioning failures.').classes('factory-muted')
@@ -1398,6 +2016,37 @@ def create_dashboard():
             ui.label(settings.telegram_chat_id or 'Not configured').style('font-weight: 600; color: #3a281a;')
         with ui.row().classes('items-center gap-2 q-mt-md'):
             ui.button('Send Prompt Guide', on_click=send_telegram_prompt_guide_action).props('unelevated color=secondary')
+        with ui.card().classes('factory-panel q-pa-lg q-mb-md'):
+            ui.label('Prompt Queue Controls').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
+            ui.label('Process queued Telegram prompts manually, or requeue failed items for another pass.').classes('factory-muted')
+            with ui.row().classes('items-center gap-2 q-mt-md'):
+                ui.button('Process Next Batch', on_click=lambda: process_prompt_queue_action(force=False)).props('outline color=secondary')
+                ui.button('Force Process Next Batch', on_click=lambda: process_prompt_queue_action(force=True)).props('unelevated color=warning')
+            with ui.row().classes('items-center gap-2 q-mt-md'):
+                ui.label(f"Queued: {queue_summary.get('queued', 0)}").classes('factory-chip')
+                ui.label(f"Processing: {queue_summary.get('processing', 0)}").classes('factory-chip')
+                ui.label(f"Failed: {queue_summary.get('failed', 0)}").classes('factory-chip')
+                ui.label(f"Completed: {queue_summary.get('completed', 0)}").classes('factory-chip')
+            if queue_items:
+                for item in queue_items:
+                    with ui.card().classes('q-pa-sm q-mt-md'):
+                        with ui.row().classes('items-start justify-between w-full'):
+                            with ui.column().classes('gap-1'):
+                                ui.label((item.get('prompt_text') or 'Prompt').strip()[:220]).classes('factory-code')
+                                ui.label(item.get('queued_at') or item.get('processed_at') or item.get('failed_at') or 'Timestamp unavailable').classes('factory-muted')
+                            with ui.column().classes('items-end gap-1'):
+                                ui.label(item.get('status') or 'unknown').classes('factory-chip')
+                                if item.get('chat_id'):
+                                    ui.label(str(item['chat_id'])).classes('factory-chip')
+                        if item.get('error'):
+                            ui.label(item['error']).classes('factory-muted q-mt-sm')
+                        with ui.row().classes('items-center gap-2 q-mt-md'):
+                            if item.get('status') == 'failed':
+                                ui.button('Retry', on_click=lambda _=None, queue_item_id=item['id']: retry_prompt_queue_item_action(queue_item_id)).props('outline color=warning')
+                            if item.get('status') in {'queued', 'failed'}:
+                                ui.button('Force Process', on_click=lambda: process_prompt_queue_action(force=True, limit=1)).props('outline color=dark')
+            else:
+                ui.label('No queued prompts recorded yet.').classes('factory-muted q-mt-md')
     _render_health_panels()

     panel_refreshers: dict[str, callable] = {}
@@ -1406,7 +2055,8 @@ def create_dashboard():
         _update_dashboard_state()
         panel_refreshers['metrics']()
         active_tab = _selected_tab_name()
-        if active_tab in panel_refreshers:
+        # Avoid rebuilding the more interactive tabs on the timer; manual refresh keeps them current.
+        if active_tab in {'overview', 'health'} and active_tab in panel_refreshers:
             panel_refreshers[active_tab]()

     def _refresh_all_dashboard_sections() -> None:
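The timer-gating change above (only the passive `overview`/`health` tabs, plus metrics, are rebuilt on each tick) can be sketched as a pure selection function. The names below are stand-ins for the NiceGUI refreshables, not the real dashboard objects:

```python
def tabs_to_refresh(active_tab: str, refreshers: dict) -> list:
    """Pick which panel refreshers a timer tick should invoke."""
    selected = ['metrics'] if 'metrics' in refreshers else []
    # Interactive tabs are skipped on the timer; manual refresh keeps them current.
    if active_tab in {'overview', 'health'} and active_tab in refreshers:
        selected.append(active_tab)
    return selected


refreshers = {'metrics': None, 'overview': None, 'health': None, 'projects': None}
passive = tabs_to_refresh('overview', refreshers)      # passive tab: refreshed
interactive = tabs_to_refresh('projects', refreshers)  # interactive tab: metrics only
```

Factoring the decision out like this also makes the gating trivially unit-testable, which the inline `if` in the refresh callback is not.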
@@ -1429,6 +2079,7 @@ def create_dashboard():
         panel_refreshers['system']()

     def _refresh_health_sections() -> None:
+        _update_dashboard_state()
         panel_refreshers['health']()

     _update_dashboard_state()
@@ -13,6 +13,7 @@ The NiceGUI frontend provides:

 from __future__ import annotations

+import asyncio
 from contextlib import asynccontextmanager
 import json
 import re
@@ -29,6 +30,7 @@ try:
     from . import database as database_module
     from .agents.change_summary import ChangeSummaryGenerator
     from .agents.database_manager import DatabaseManager
+    from .agents.home_assistant import HomeAssistantAgent
     from .agents.request_interpreter import RequestInterpreter
     from .agents.llm_service import LLMServiceClient
     from .agents.orchestrator import AgentOrchestrator
@@ -41,6 +43,7 @@ except ImportError:
     import database as database_module
     from agents.change_summary import ChangeSummaryGenerator
     from agents.database_manager import DatabaseManager
+    from agents.home_assistant import HomeAssistantAgent
     from agents.request_interpreter import RequestInterpreter
     from agents.llm_service import LLMServiceClient
     from agents.orchestrator import AgentOrchestrator
@@ -59,7 +62,16 @@ async def lifespan(_app: FastAPI):
     print(
         f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
     )
-    yield
+    queue_worker = asyncio.create_task(_prompt_queue_worker())
+    try:
+        yield
+    finally:
+        if queue_worker is not None:
+            queue_worker.cancel()
+            try:
+                await queue_worker
+            except asyncio.CancelledError:
+                pass


 app = FastAPI(lifespan=lifespan)
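The reworked `lifespan` above follows the standard start-on-enter, cancel-on-exit pattern for an asyncio background task. A self-contained sketch of that pattern, with a stand-in coroutine in place of the real `_prompt_queue_worker`:

```python
import asyncio
from contextlib import asynccontextmanager

ticks = []


async def fake_worker() -> None:
    # Stand-in for _prompt_queue_worker: loop forever until cancelled.
    while True:
        ticks.append(1)
        await asyncio.sleep(0.01)


@asynccontextmanager
async def lifespan():
    worker = asyncio.create_task(fake_worker())
    try:
        yield
    finally:
        worker.cancel()
        try:
            await worker
        except asyncio.CancelledError:
            pass  # cancellation is the expected shutdown path


async def main() -> int:
    async with lifespan():
        await asyncio.sleep(0.05)  # app "runs" here; worker ticks in the background
    return len(ticks)


tick_count = asyncio.run(main())
```

Awaiting the cancelled task inside `finally` (and swallowing only `CancelledError`) is what guarantees the worker has actually stopped before shutdown completes.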
@@ -94,6 +106,26 @@ class FreeformSoftwareRequest(BaseModel):
     source: str = 'telegram'
     chat_id: str | None = None
     chat_type: str | None = None
+    process_now: bool = False
+
+
+class PromptQueueProcessRequest(BaseModel):
+    """Request body for manual queue processing."""
+
+    force: bool = False
+    limit: int = Field(default=1, ge=1, le=25)
+
+
+class LLMPromptSettingUpdateRequest(BaseModel):
+    """Request body for persisting one editable LLM prompt override."""
+
+    value: str = Field(default='')
+
+
+class RuntimeSettingUpdateRequest(BaseModel):
+    """Request body for persisting one editable runtime setting override."""
+
+    value: str | bool | int | float | None = None
+
+
 class GiteaRepositoryOnboardRequest(BaseModel):
@@ -372,8 +404,18 @@ async def _run_generation(
         fallback_used=summary_trace.get('fallback_used', False),
     )
     response_data['summary_message'] = summary_message
+    response_data['summary_metadata'] = {
+        'provider': summary_trace.get('provider'),
+        'model': summary_trace.get('model'),
+        'fallback_used': bool(summary_trace.get('fallback_used')),
+    }
     response_data['pull_request'] = result.get('pull_request') or manager.get_open_pull_request(project_id=project_id)
-    return {'status': result['status'], 'data': response_data, 'summary_message': summary_message}
+    return {
+        'status': result['status'],
+        'data': response_data,
+        'summary_message': summary_message,
+        'summary_metadata': response_data['summary_metadata'],
+    }


 def _project_root(project_id: str) -> Path:
@@ -397,6 +439,276 @@ def _create_gitea_api():
     )


+def _create_home_assistant_agent() -> HomeAssistantAgent:
+    """Create a configured Home Assistant client."""
+    return HomeAssistantAgent(
+        base_url=database_module.settings.home_assistant_url,
+        token=database_module.settings.home_assistant_token,
+    )
+
+
+def _get_gitea_health() -> dict:
+    """Return current Gitea connectivity diagnostics."""
+    if not database_module.settings.gitea_url:
+        return {
+            'status': 'error',
+            'message': 'Gitea URL is not configured.',
+            'base_url': '',
+            'configured': False,
+            'checks': [],
+        }
+    if not database_module.settings.gitea_token:
+        return {
+            'status': 'error',
+            'message': 'Gitea token is not configured.',
+            'base_url': database_module.settings.gitea_url,
+            'configured': False,
+            'checks': [],
+        }
+    response = _create_gitea_api().get_current_user_sync()
+    if response.get('error'):
+        return {
+            'status': 'error',
+            'message': response.get('error'),
+            'base_url': database_module.settings.gitea_url,
+            'configured': True,
+            'checks': [
+                {
+                    'name': 'token_auth',
+                    'ok': False,
+                    'message': response.get('error'),
+                    'url': f"{database_module.settings.gitea_url}/api/v1/user",
+                    'status_code': response.get('status_code'),
+                }
+            ],
+        }
+    username = response.get('login') or response.get('username') or response.get('full_name') or 'unknown'
+    return {
+        'status': 'success',
+        'message': f'Authenticated as {username}.',
+        'base_url': database_module.settings.gitea_url,
+        'configured': True,
+        'checks': [
+            {
+                'name': 'token_auth',
+                'ok': True,
+                'message': f'Authenticated as {username}',
+                'url': f"{database_module.settings.gitea_url}/api/v1/user",
+            }
+        ],
+        'user': username,
+    }
+
+
+def _get_home_assistant_health() -> dict:
+    """Return current Home Assistant connectivity diagnostics."""
+    return _create_home_assistant_agent().health_check_sync()
+
+
+async def _get_queue_gate_status(force: bool = False) -> dict:
+    """Return whether queued prompts may be processed now."""
+    if not database_module.settings.prompt_queue_enabled:
+        return {
+            'status': 'disabled',
+            'allowed': True,
+            'forced': False,
+            'reason': 'Prompt queue is disabled',
+        }
+    if not database_module.settings.home_assistant_url:
+        if force or database_module.settings.prompt_queue_force_process:
+            return {
+                'status': 'success',
+                'allowed': True,
+                'forced': True,
+                'reason': 'Queue override is enabled',
+            }
+        return {
+            'status': 'blocked',
+            'allowed': False,
+            'forced': False,
+            'reason': 'Home Assistant URL is not configured',
+        }
+    return await _create_home_assistant_agent().queue_gate_status(force=force)
+
+
+async def _interpret_freeform_request(request: FreeformSoftwareRequest, manager: DatabaseManager) -> tuple[SoftwareRequest, dict, dict]:
+    """Interpret a free-form request and return the structured request plus routing trace."""
+    interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
+    interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
+        request.prompt_text,
+        context=interpreter_context,
+    )
+    routing = interpretation_trace.get('routing') or {}
+    selected_history = manager.get_project_by_id(routing.get('project_id'), include_archived=False) if routing.get('project_id') else None
+    if selected_history is not None and routing.get('intent') != 'new_project':
+        interpreted['name'] = selected_history.project_name
+        interpreted['description'] = selected_history.description or interpreted['description']
+    return SoftwareRequest(**interpreted), routing, interpretation_trace
+
+
+async def _run_freeform_generation(
+    request: FreeformSoftwareRequest,
+    db: Session,
+    *,
+    queue_item_id: int | None = None,
+) -> dict:
+    """Shared free-form request flow used by direct calls and queued processing."""
+    manager = DatabaseManager(db)
+    try:
+        structured_request, routing, interpretation_trace = await _interpret_freeform_request(request, manager)
+        response = await _run_generation(
+            structured_request,
+            db,
+            prompt_text=request.prompt_text,
+            prompt_actor=request.source,
+            prompt_source_context={
+                'chat_id': request.chat_id,
+                'chat_type': request.chat_type,
+                'queue_item_id': queue_item_id,
+            },
+            prompt_routing=routing,
+            preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
+            repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
+            related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
+        )
+        project_data = response.get('data', {})
+        if project_data.get('history_id') is not None:
+            manager = DatabaseManager(db)
+            prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
+            prompt_id = prompts[0]['id'] if prompts else None
+            manager.log_llm_trace(
+                project_id=project_data.get('project_id'),
+                history_id=project_data.get('history_id'),
+                prompt_id=prompt_id,
+                stage=interpretation_trace['stage'],
+                provider=interpretation_trace['provider'],
+                model=interpretation_trace['model'],
+                system_prompt=interpretation_trace['system_prompt'],
+                user_prompt=interpretation_trace['user_prompt'],
+                assistant_response=interpretation_trace['assistant_response'],
+                raw_response=interpretation_trace.get('raw_response'),
+                fallback_used=interpretation_trace.get('fallback_used', False),
+            )
+            naming_trace = interpretation_trace.get('project_naming')
+            if naming_trace:
+                manager.log_llm_trace(
+                    project_id=project_data.get('project_id'),
+                    history_id=project_data.get('history_id'),
+                    prompt_id=prompt_id,
+                    stage=naming_trace['stage'],
+                    provider=naming_trace['provider'],
+                    model=naming_trace['model'],
+                    system_prompt=naming_trace['system_prompt'],
+                    user_prompt=naming_trace['user_prompt'],
+                    assistant_response=naming_trace['assistant_response'],
+                    raw_response=naming_trace.get('raw_response'),
+                    fallback_used=naming_trace.get('fallback_used', False),
+                )
+        response['interpreted_request'] = structured_request.model_dump()
+        response['routing'] = routing
+        response['llm_trace'] = interpretation_trace
+        response['source'] = {
+            'type': request.source,
+            'chat_id': request.chat_id,
+            'chat_type': request.chat_type,
+        }
+        if queue_item_id is not None:
+            DatabaseManager(db).complete_queued_prompt(
+                queue_item_id,
+                {
+                    'project_id': project_data.get('project_id'),
+                    'history_id': project_data.get('history_id'),
+                    'status': response.get('status'),
+                },
+            )
+        return response
+    except Exception as exc:
+        if queue_item_id is not None:
+            DatabaseManager(db).fail_queued_prompt(queue_item_id, str(exc))
+        raise
+
+
+async def _process_prompt_queue_batch(limit: int = 1, force: bool = False) -> dict:
+    """Process up to `limit` queued prompts if the energy gate allows it."""
+    queue_gate = await _get_queue_gate_status(force=force)
+    if not queue_gate.get('allowed'):
+        db = database_module.get_db_sync()
+        try:
+            summary = DatabaseManager(db).get_prompt_queue_summary()
+        finally:
+            db.close()
+        return {
+            'status': queue_gate.get('status', 'blocked'),
+            'processed_count': 0,
+            'queue_gate': queue_gate,
+            'queue_summary': summary,
+            'processed': [],
+        }
+
+    processed = []
+    for _ in range(max(limit, 1)):
+        claim_db = database_module.get_db_sync()
+        try:
+            claimed = DatabaseManager(claim_db).claim_next_queued_prompt()
+        finally:
+            claim_db.close()
+        if claimed is None:
+            break
+        work_db = database_module.get_db_sync()
+        try:
+            request = FreeformSoftwareRequest(
+                prompt_text=claimed['prompt_text'],
+                source=claimed['source'] or 'telegram',
+                chat_id=claimed.get('chat_id'),
+                chat_type=claimed.get('chat_type'),
+                process_now=True,
+            )
+            response = await _run_freeform_generation(request, work_db, queue_item_id=claimed['id'])
+            processed.append(
+                {
+                    'queue_item_id': claimed['id'],
+                    'project_id': (response.get('data') or {}).get('project_id'),
+                    'status': response.get('status'),
+                }
+            )
+        except Exception as exc:
+            DatabaseManager(work_db).fail_queued_prompt(claimed['id'], str(exc))
+            processed.append({'queue_item_id': claimed['id'], 'status': 'failed', 'error': str(exc)})
+        finally:
+            work_db.close()
+
+    summary_db = database_module.get_db_sync()
+    try:
+        summary = DatabaseManager(summary_db).get_prompt_queue_summary()
+    finally:
+        summary_db.close()
+    return {
+        'status': 'success',
+        'processed_count': len(processed),
+        'processed': processed,
+        'queue_gate': queue_gate,
+        'queue_summary': summary,
+    }
+
+
+async def _prompt_queue_worker() -> None:
+    """Background worker that drains the prompt queue when the energy gate opens."""
+    while True:
+        try:
+            if database_module.settings.prompt_queue_enabled and database_module.settings.prompt_queue_auto_process:
+                await _process_prompt_queue_batch(
+                    limit=database_module.settings.prompt_queue_max_batch_size,
+                    force=database_module.settings.prompt_queue_force_process,
+                )
+        except Exception as exc:
+            db = database_module.get_db_sync()
+            try:
+                DatabaseManager(db).log_system_event('prompt-queue', 'ERROR', f'Queue worker error: {exc}')
+            finally:
+                db.close()
+        await asyncio.sleep(database_module.settings.prompt_queue_poll_interval_seconds)
+
+
 def _resolve_n8n_api_url(explicit_url: str | None = None) -> str:
     """Resolve the effective n8n API URL from explicit input or settings."""
     if explicit_url and explicit_url.strip():
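The locally decidable branches of `_get_queue_gate_status` above form a small decision table. A stdlib-only sketch, where the settings object is a stand-in that mirrors the field names used in the diff and the Home-Assistant-backed branch is reduced to a `'deferred'` placeholder:

```python
from dataclasses import dataclass


@dataclass
class QueueSettings:
    # Stand-in for database_module.settings; field names mirror the real ones.
    prompt_queue_enabled: bool = True
    prompt_queue_force_process: bool = False
    home_assistant_url: str = ''


def gate_status(settings: QueueSettings, force: bool = False) -> dict:
    """Mirror the locally decidable branches of _get_queue_gate_status."""
    if not settings.prompt_queue_enabled:
        # Queue disabled: nothing is gated, everything may run immediately.
        return {'status': 'disabled', 'allowed': True, 'forced': False}
    if not settings.home_assistant_url:
        if force or settings.prompt_queue_force_process:
            return {'status': 'success', 'allowed': True, 'forced': True}
        return {'status': 'blocked', 'allowed': False, 'forced': False}
    # With a Home Assistant URL configured, the real code defers to the agent.
    return {'status': 'deferred', 'allowed': False, 'forced': False}


blocked = gate_status(QueueSettings())
forced = gate_status(QueueSettings(), force=True)
disabled = gate_status(QueueSettings(prompt_queue_enabled=False))
```

Note the asymmetry: a disabled queue *allows* processing (there is no gate), while a missing Home Assistant URL *blocks* it unless explicitly forced.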
@@ -420,8 +732,14 @@ def read_api_info():
         '/api',
         '/health',
         '/llm/runtime',
+        '/llm/prompts',
+        '/llm/prompts/{prompt_key}',
+        '/settings/runtime',
+        '/settings/runtime/{setting_key}',
         '/generate',
         '/generate/text',
+        '/queue',
+        '/queue/process',
         '/projects',
         '/status/{project_id}',
         '/audit/projects',
@@ -442,7 +760,9 @@ def read_api_info():
         '/projects/{project_id}/prompts/{prompt_id}/undo',
         '/projects/{project_id}/sync-repository',
         '/gitea/repos',
+        '/gitea/health',
         '/gitea/repos/onboard',
+        '/home-assistant/health',
         '/n8n/health',
         '/n8n/setup',
     ],
@@ -453,11 +773,30 @@ def read_api_info():
 def health_check():
     """Health check endpoint."""
     runtime = database_module.get_database_runtime_summary()
+    queue_summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': 0, 'next_item': None}
+    db = database_module.get_db_sync()
+    try:
+        try:
+            queue_summary = DatabaseManager(db).get_prompt_queue_summary()
+        except Exception:
+            pass
+    finally:
+        db.close()
     return {
         'status': 'healthy',
         'database': runtime['backend'],
         'database_target': runtime['target'],
         'database_name': runtime['database'],
+        'integrations': {
+            'gitea': _get_gitea_health(),
+            'home_assistant': _get_home_assistant_health(),
+        },
+        'prompt_queue': {
+            'enabled': database_module.settings.prompt_queue_enabled,
+            'auto_process': database_module.settings.prompt_queue_auto_process,
+            'force_process': database_module.settings.prompt_queue_force_process,
+            'summary': queue_summary,
+        },
+    }
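The health handler above deliberately swallows queue-summary failures so `/health` stays green even while the queue tables are missing. That fallback pattern in isolation, with a hypothetical `fetch` callable standing in for `DatabaseManager(db).get_prompt_queue_summary`:

```python
EMPTY_SUMMARY = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': 0, 'next_item': None}


def safe_queue_summary(fetch) -> dict:
    """Return fetch()'s summary, or a zeroed default if it raises."""
    try:
        return fetch()
    except Exception:
        # e.g. the prompt_queue table has not been created yet
        return dict(EMPTY_SUMMARY)


ok = safe_queue_summary(
    lambda: {'queued': 2, 'processing': 1, 'completed': 5, 'failed': 0, 'total': 8, 'next_item': None}
)


def broken():
    raise RuntimeError('prompt_queue table missing')


fallback = safe_queue_summary(broken)
```

The trade-off is that a broken queue backend is invisible in the top-level `status`; it only shows up as an all-zero summary.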
@@ -467,6 +806,58 @@ def get_llm_runtime():
     return LLMServiceClient().get_runtime_configuration()


+@app.get('/llm/prompts')
+def get_llm_prompt_settings(db: DbSession):
+    """Return editable LLM prompt settings with DB overrides merged over environment defaults."""
+    return {'prompts': DatabaseManager(db).get_llm_prompt_settings()}
+
+
+@app.put('/llm/prompts/{prompt_key}')
+def update_llm_prompt_setting(prompt_key: str, request: LLMPromptSettingUpdateRequest, db: DbSession):
+    """Persist one editable LLM prompt override into the database."""
+    database_module.init_db()
+    result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, request.value, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Prompt save failed'))
+    return result
+
+
+@app.delete('/llm/prompts/{prompt_key}')
+def reset_llm_prompt_setting(prompt_key: str, db: DbSession):
+    """Reset one editable LLM prompt override back to the environment/default value."""
+    database_module.init_db()
+    result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Prompt reset failed'))
+    return result
+
+
+@app.get('/settings/runtime')
+def get_runtime_settings(db: DbSession):
+    """Return editable runtime settings with DB overrides merged over environment defaults."""
+    return {'settings': DatabaseManager(db).get_runtime_settings()}
+
+
+@app.put('/settings/runtime/{setting_key}')
+def update_runtime_setting(setting_key: str, request: RuntimeSettingUpdateRequest, db: DbSession):
+    """Persist one editable runtime setting override into the database."""
+    database_module.init_db()
+    result = DatabaseManager(db).save_runtime_setting(setting_key, request.value, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting save failed'))
+    return result
+
+
+@app.delete('/settings/runtime/{setting_key}')
+def reset_runtime_setting(setting_key: str, db: DbSession):
+    """Reset one editable runtime setting override back to the environment/default value."""
+    database_module.init_db()
+    result = DatabaseManager(db).reset_runtime_setting(setting_key, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting reset failed'))
+    return result
+
+
 @app.post('/generate')
 async def generate_software(request: SoftwareRequest, db: DbSession):
     """Create and record a software-generation request."""
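The GET endpoints above return database overrides merged over environment defaults. The merge itself can be sketched stdlib-only; the function and key names here are illustrative, not the real `DatabaseManager` API:

```python
def merge_settings(defaults: dict, overrides: dict) -> dict:
    """Overlay per-key DB overrides on environment defaults, tagging each value's source."""
    merged = {}
    for key, default_value in defaults.items():
        if key in overrides and overrides[key] is not None:
            merged[key] = {'value': overrides[key], 'source': 'database', 'default_value': default_value}
        else:
            merged[key] = {'value': default_value, 'source': 'environment', 'default_value': default_value}
    return merged


merged = merge_settings(
    {'interpreter_system_prompt': 'env default', 'naming_system_prompt': 'env naming'},
    {'interpreter_system_prompt': 'db override'},
)
```

Keeping `default_value` alongside the effective `value` is what lets the dashboard's "Reset To Default" button show the user exactly what a reset would restore.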
@@ -492,74 +883,64 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSession):
         },
     }

-    manager = DatabaseManager(db)
-    interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
-    interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
-        request.prompt_text,
-        context=interpreter_context,
-    )
-    routing = interpretation_trace.get('routing') or {}
-    selected_history = manager.get_project_by_id(routing.get('project_id'), include_archived=False) if routing.get('project_id') else None
-    if selected_history is not None and routing.get('intent') != 'new_project':
-        interpreted['name'] = selected_history.project_name
-        interpreted['description'] = selected_history.description or interpreted['description']
-    structured_request = SoftwareRequest(**interpreted)
-    response = await _run_generation(
-        structured_request,
-        db,
-        prompt_text=request.prompt_text,
-        prompt_actor=request.source,
-        prompt_source_context={
-            'chat_id': request.chat_id,
-            'chat_type': request.chat_type,
-        },
-        prompt_routing=routing,
-        preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
-        repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
-        related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
-    )
-    project_data = response.get('data', {})
-    if project_data.get('history_id') is not None:
-        manager = DatabaseManager(db)
-        prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
-        prompt_id = prompts[0]['id'] if prompts else None
-        manager.log_llm_trace(
-            project_id=project_data.get('project_id'),
-            history_id=project_data.get('history_id'),
-            prompt_id=prompt_id,
-            stage=interpretation_trace['stage'],
-            provider=interpretation_trace['provider'],
-            model=interpretation_trace['model'],
-            system_prompt=interpretation_trace['system_prompt'],
-            user_prompt=interpretation_trace['user_prompt'],
-            assistant_response=interpretation_trace['assistant_response'],
-            raw_response=interpretation_trace.get('raw_response'),
-            fallback_used=interpretation_trace.get('fallback_used', False),
-        )
-        naming_trace = interpretation_trace.get('project_naming')
-        if naming_trace:
-            manager.log_llm_trace(
-                project_id=project_data.get('project_id'),
-                history_id=project_data.get('history_id'),
-                prompt_id=prompt_id,
-                stage=naming_trace['stage'],
-                provider=naming_trace['provider'],
-                model=naming_trace['model'],
-                system_prompt=naming_trace['system_prompt'],
-                user_prompt=naming_trace['user_prompt'],
-                assistant_response=naming_trace['assistant_response'],
-                raw_response=naming_trace.get('raw_response'),
-                fallback_used=naming_trace.get('fallback_used', False),
-            )
-    response['interpreted_request'] = interpreted
-    response['routing'] = routing
-    response['llm_trace'] = interpretation_trace
-    response['source'] = {
-        'type': request.source,
-        'chat_id': request.chat_id,
-        'chat_type': request.chat_type,
+    if request.source == 'telegram' and database_module.settings.prompt_queue_enabled and not request.process_now:
+        manager = DatabaseManager(db)
+        queue_item = manager.enqueue_prompt(
+            prompt_text=request.prompt_text,
+            source=request.source,
+            chat_id=request.chat_id,
+            chat_type=request.chat_type,
+            source_context={'chat_id': request.chat_id, 'chat_type': request.chat_type},
+        )
+        return {
+            'status': 'queued',
+            'message': 'Prompt queued for energy-aware processing.',
+            'queue_item': queue_item,
+            'queue_summary': manager.get_prompt_queue_summary(),
+            'queue_gate': await _get_queue_gate_status(force=False),
+            'source': {
+                'type': request.source,
+                'chat_id': request.chat_id,
+                'chat_type': request.chat_type,
+            },
+        }
+
+    return await _run_freeform_generation(request, db)
+
+
+@app.get('/queue')
+def get_prompt_queue(db: DbSession):
+    """Return queued prompt items and prompt queue configuration."""
+    manager = DatabaseManager(db)
+    return {
+        'queue': manager.get_prompt_queue(),
+        'summary': manager.get_prompt_queue_summary(),
+        'config': {
+            'enabled': database_module.settings.prompt_queue_enabled,
|
||||||
|
'auto_process': database_module.settings.prompt_queue_auto_process,
|
||||||
|
'force_process': database_module.settings.prompt_queue_force_process,
|
||||||
|
'poll_interval_seconds': database_module.settings.prompt_queue_poll_interval_seconds,
|
||||||
|
'max_batch_size': database_module.settings.prompt_queue_max_batch_size,
|
||||||
|
},
|
||||||
}
|
}
|
||||||
return response
|
|
||||||
|
|
||||||
|
@app.post('/queue/process')
|
||||||
|
async def process_prompt_queue(request: PromptQueueProcessRequest):
|
||||||
|
"""Manually process queued prompts, optionally bypassing the HA gate."""
|
||||||
|
return await _process_prompt_queue_batch(limit=request.limit, force=request.force)
|
||||||
|
|
||||||
|
|
||||||
|
@app.get('/gitea/health')
|
||||||
|
def get_gitea_health():
|
||||||
|
"""Return Gitea integration connectivity diagnostics."""
|
||||||
|
return _get_gitea_health()
|
||||||
|
|
||||||
|
|
||||||
|
@app.get('/home-assistant/health')
|
||||||
|
def get_home_assistant_health():
|
||||||
|
"""Return Home Assistant integration connectivity diagnostics."""
|
||||||
|
return _get_home_assistant_health()
|
||||||
|
|
||||||
|
|
||||||
@app.get('/projects')
|
@app.get('/projects')
|
||||||
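The old side of this hunk ran generation inline and logged LLM traces before returning; the new side instead enqueues the prompt and lets a gated worker drain the queue later. A minimal sketch of that enqueue-then-gate flow, under stated assumptions: `PromptQueue`, its in-memory item list, and `process_batch` are illustrative stand-ins, not this repository's `DatabaseManager` API.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PromptQueue:
    """Toy in-memory stand-in for the persistent prompt queue."""
    items: List[Dict] = field(default_factory=list)

    def enqueue_prompt(self, prompt_text: str, source: str) -> Dict:
        # Mirror the endpoint's behaviour: store the prompt and report 'queued'.
        item = {
            'id': len(self.items) + 1,
            'prompt_text': prompt_text,
            'source': source,
            'status': 'queued',
        }
        self.items.append(item)
        return item

    def process_batch(self, gate_open: bool, limit: int = 5, force: bool = False) -> List[Dict]:
        # The energy gate blocks processing unless it is open,
        # or unless force=True (the /queue/process bypass).
        if not (gate_open or force):
            return []
        batch = [i for i in self.items if i['status'] == 'queued'][:limit]
        for item in batch:
            item['status'] = 'processed'
        return batch


queue = PromptQueue()
queue.enqueue_prompt('build a todo app', source='telegram')
queue.enqueue_prompt('add dark mode', source='web')
print(len(queue.process_batch(gate_open=False)))              # → 0 (gate closed)
print(len(queue.process_batch(gate_open=False, force=True)))  # → 2 (forced)
```

The design point is that submission always succeeds immediately; only the (energy-aware) gate decides when queued work actually runs.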
@@ -743,13 +1124,18 @@ def delete_project(project_id: str, db: DbSession):
     remote_delete = None
     if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and database_module.settings.gitea_url and database_module.settings.gitea_token:
         remote_delete = _create_gitea_api().delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
-        if remote_delete.get('error') and remote_delete.get('status_code') not in {404, None}:
-            raise HTTPException(status_code=502, detail=remote_delete.get('error'))
+        if remote_delete.get('error'):
+            manager.log_system_event(
+                component='gitea',
+                level='WARNING',
+                message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
+            )
 
     result = manager.delete_project(project_id)
     if result.get('status') == 'error':
         raise HTTPException(status_code=400, detail=result.get('message', 'Project deletion failed'))
     result['remote_repository_deleted'] = bool(remote_delete and not remote_delete.get('error'))
+    result['remote_repository_delete_error'] = remote_delete.get('error') if remote_delete else None
     result['remote_repository'] = repository if repository else None
     return result
 
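This `delete_project` hunk switches remote cleanup from fail-hard (502 on Gitea errors) to best-effort: the error is logged and surfaced in the result while local deletion proceeds. A hedged sketch of that pattern; the standalone `delete_project` signature and the `delete_remote`/`delete_local` callables are illustrative, not the endpoint's actual interface.

```python
from typing import Callable, Dict, Optional


def delete_project(delete_remote: Optional[Callable[[], Dict]],
                   delete_local: Callable[[], Dict]) -> Dict:
    # Best-effort remote cleanup: a remote failure is logged and reported
    # in the result, but never blocks the local deletion.
    remote_delete = delete_remote() if delete_remote else None
    if remote_delete and remote_delete.get('error'):
        print(f"WARNING: remote delete failed: {remote_delete['error']}")
    result = delete_local()
    result['remote_repository_deleted'] = bool(remote_delete and not remote_delete.get('error'))
    result['remote_repository_delete_error'] = remote_delete.get('error') if remote_delete else None
    return result


out = delete_project(lambda: {'error': 'repo not found'}, lambda: {'status': 'ok'})
print(out['remote_repository_deleted'], out['remote_repository_delete_error'])  # → False repo not found
```

Callers that previously relied on a 502 now have to inspect `remote_repository_delete_error` to detect a failed remote cleanup.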