6 Commits
0.8.0 ... 0.9.2

Author SHA1 Message Date
3e40338bbf release: version 0.9.2 🚀 2026-04-11 11:53:25 +02:00
39f9651236 fix: UI improvements and prompt hardening, refs NOISSUE 2026-04-11 11:53:18 +02:00
3175c53504 release: version 0.9.1 🚀 2026-04-11 11:37:22 +02:00
29cf2aa6bd fix: better repo name generation, refs NOISSUE 2026-04-11 11:37:19 +02:00
b881ef635a release: version 0.9.0 🚀 2026-04-11 11:12:54 +02:00
e35db0a361 feat: editable guardrails, refs NOISSUE 2026-04-11 11:12:50 +02:00
12 changed files with 1748 additions and 116 deletions

View File

@@ -4,6 +4,31 @@ Changelog
(unreleased)
------------
Fix
~~~
- UI improvements and prompt hardening, refs NOISSUE. [Simon
Diesenreiter]
0.9.1 (2026-04-11)
------------------
Fix
~~~
- Better repo name generation, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.0 (2026-04-11)
------------------
- Feat: editable guardrails, refs NOISSUE. [Simon Diesenreiter]
0.8.0 (2026-04-11)
------------------
- Feat: better dashboard reloading mechanism, refs NOISSUE. [Simon
Diesenreiter]
- Feat: add explicit workflow steps and guardrail prompts, refs NOISSUE.

View File

@@ -48,6 +48,7 @@ OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
@@ -69,6 +70,19 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: queue Telegram prompts until Home Assistant reports battery/surplus targets are met.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
```
### Build and Run
@@ -93,6 +107,7 @@ docker-compose up -d
The backend now interprets free-form Telegram text with Ollama before generation.
If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat.
If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are stored in a durable queue and processed only when the Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
2. **Monitor progress via Web UI:**
@@ -104,6 +119,12 @@ docker-compose up -d
If you deploy the container with PostgreSQL environment variables set, the service selects PostgreSQL automatically; SQLite remains the default for local/test usage.
The health tab now shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics so misconfigured integrations are visible without checking container logs.
The dashboard Health tab also exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
Guardrail and system prompts are no longer environment-only: the factory persists DB-backed overrides for the editable LLM prompt set, exposes the merged values at `/llm/prompts`, and lets operators edit them from the dashboard System tab. Environment values still act as the defaults and as the reset target.
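For example, the merged values can be inspected over HTTP (a minimal sketch: it assumes the backend listens on `localhost:8000`, that `requests` is installed, and that the exact response envelope may differ):

```python
import requests

# Fetch the merged guardrail/system prompts (DB overrides over env defaults).
payload = requests.get("http://localhost:8000/llm/prompts", timeout=10).json()
items = payload if isinstance(payload, list) else payload.get("prompts", [])
for item in items:
    # Each entry is expected to expose the prompt key, its effective value,
    # and whether it came from a database override or the environment.
    print(item.get("key"), "->", item.get("source"))
```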
## API Endpoints
| Endpoint | Method | Description |

View File

@@ -24,7 +24,7 @@ LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
# Configure Gitea API for your organization
# GITEA_URL can be left empty to use GITEA_ORGANIZATION instead of GITEA_OWNER
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev automatically.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=your_organization_name
@@ -42,6 +42,20 @@ N8N_PASSWORD=your_secure_password
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Home Assistant energy gate for queued Telegram prompts
# Leave PROMPT_QUEUE_ENABLED=false to preserve immediate Telegram processing.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
# PostgreSQL
# In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.
# You can also set USE_SQLITE=false explicitly if you want the intent to be obvious.

View File

@@ -62,10 +62,11 @@ LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "
LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=ai-software-factory
GITEA_REPO=
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
@@ -73,6 +74,19 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: queue Telegram prompts until Home Assistant reports energy surplus.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
```
### Build and Run
@@ -95,6 +109,8 @@ docker-compose up -d
Features: user authentication, task CRUD, notifications
```
If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000` to see real-time progress
@@ -138,6 +154,12 @@ New project creation can also run a dedicated `project_id_naming` stage. `LLM_PR
Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.
Operational visibility for the Gitea integration, Home Assistant energy gate, and queued prompt counts is available in the dashboard Health tab, plus `/gitea/health`, `/home-assistant/health`, and `/queue`.
The dashboard Health tab also includes operator controls for manually processing queued Telegram prompts, force-processing them when needed, and retrying failed items.
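The queue and its energy gate are also scriptable (illustrative sketch; the endpoints are the ones listed above, while the host and request bodies are assumptions):

```python
import requests

BASE = "http://localhost:8000"  # assumed backend address

# Inspect queued prompts and the Home Assistant gate diagnostics.
print(requests.get(f"{BASE}/queue", timeout=10).json())
print(requests.get(f"{BASE}/home-assistant/health", timeout=10).json())

# Trigger one manual processing pass from a script instead of the dashboard.
requests.post(f"{BASE}/queue/process", timeout=60)
```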
Editable guardrail and system prompts are persisted in the database as overrides on top of the environment defaults. The current merged values are available at `/llm/prompts`, and the dashboard System tab can edit or reset them without restarting the service.
These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
## Development

View File

@@ -1 +1 @@
0.8.0
0.9.2

View File

@@ -4,7 +4,7 @@ from sqlalchemy.orm import Session
from sqlalchemy import text
try:
from ..config import settings
from ..config import EDITABLE_LLM_PROMPTS, settings
from ..models import (
AuditTrail,
ProjectHistory,
@@ -18,7 +18,7 @@ try:
UserAction,
)
except ImportError:
from config import settings
from config import EDITABLE_LLM_PROMPTS, settings
from models import (
AuditTrail,
ProjectHistory,
@@ -83,6 +83,11 @@ class DatabaseMigrations:
class DatabaseManager:
"""Manages database operations for audit logging and history tracking."""
PROMPT_QUEUE_PROJECT_ID = '__prompt_queue__'
PROMPT_QUEUE_ACTION = 'PROMPT_QUEUED'
PROMPT_CONFIG_PROJECT_ID = '__llm_prompt_config__'
PROMPT_CONFIG_ACTION = 'LLM_PROMPT_CONFIG'
def __init__(self, db: Session):
"""Initialize database manager."""
self.db = db
@@ -270,6 +275,277 @@ class DatabaseManager:
self.db.refresh(audit)
return audit
def enqueue_prompt(
self,
prompt_text: str,
source: str = 'telegram',
chat_id: str | None = None,
chat_type: str | None = None,
source_context: dict | None = None,
process_now: bool = False,
) -> dict:
"""Persist a queued prompt so it can be processed later by the worker."""
metadata = {
'status': 'queued',
'prompt_text': prompt_text,
'source': source,
'chat_id': chat_id,
'chat_type': chat_type,
'source_context': source_context or {},
'process_now': bool(process_now),
'queued_at': datetime.utcnow().isoformat(),
}
audit = AuditTrail(
project_id=self.PROMPT_QUEUE_PROJECT_ID,
action=self.PROMPT_QUEUE_ACTION,
actor=source or 'queue',
action_type='QUEUE',
details=prompt_text,
message='Prompt queued for deferred processing',
metadata_json=metadata,
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return self._serialize_prompt_queue_item(audit)
def _serialize_prompt_queue_item(self, audit: AuditTrail) -> dict:
"""Convert a queue audit record into a stable API payload."""
metadata = self._normalize_metadata(audit.metadata_json)
return {
'id': audit.id,
'prompt_text': metadata.get('prompt_text') or audit.details,
'source': metadata.get('source') or audit.actor,
'chat_id': metadata.get('chat_id'),
'chat_type': metadata.get('chat_type'),
'status': metadata.get('status') or 'queued',
'queued_at': metadata.get('queued_at') or (audit.created_at.isoformat() if audit.created_at else None),
'claimed_at': metadata.get('claimed_at'),
'processed_at': metadata.get('processed_at'),
'failed_at': metadata.get('failed_at'),
'process_now': bool(metadata.get('process_now')),
'result': metadata.get('result') or {},
'error': metadata.get('error'),
'source_context': metadata.get('source_context') or {},
}
def _update_audit_metadata(self, audit: AuditTrail, updates: dict) -> AuditTrail:
"""Apply shallow metadata updates to an audit record."""
metadata = dict(self._normalize_metadata(audit.metadata_json))
metadata.update(updates)
audit.metadata_json = metadata
self.db.commit()
self.db.refresh(audit)
return audit
def get_prompt_queue(self, status: str | None = None, limit: int = 100) -> list[dict]:
"""Return queued prompt items, optionally filtered by queue status."""
audits = (
self.db.query(AuditTrail)
.filter(AuditTrail.action == self.PROMPT_QUEUE_ACTION)
.order_by(AuditTrail.created_at.desc(), AuditTrail.id.desc())
.all()
)
items = []
for audit in audits:
item = self._serialize_prompt_queue_item(audit)
if status and item['status'] != status:
continue
items.append(item)
if len(items) >= limit:
break
return items
def get_prompt_queue_summary(self) -> dict:
"""Return aggregate prompt queue counts for operations and health views."""
items = self.get_prompt_queue(limit=1000)
summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': len(items)}
for item in items:
summary[item['status']] = summary.get(item['status'], 0) + 1
summary['next_item'] = next((item for item in reversed(items) if item['status'] == 'queued'), None)
return summary
def claim_next_queued_prompt(self) -> dict | None:
"""Claim the oldest queued prompt for processing."""
audits = (
self.db.query(AuditTrail)
.filter(AuditTrail.action == self.PROMPT_QUEUE_ACTION)
.order_by(AuditTrail.created_at.asc(), AuditTrail.id.asc())
.all()
)
for audit in audits:
item = self._serialize_prompt_queue_item(audit)
if item['status'] != 'queued':
continue
updated = self._update_audit_metadata(
audit,
{
'status': 'processing',
'claimed_at': datetime.utcnow().isoformat(),
'error': None,
},
)
return self._serialize_prompt_queue_item(updated)
return None
def complete_queued_prompt(self, queue_item_id: int, result: dict | None = None) -> dict | None:
"""Mark a queued prompt as successfully processed."""
audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
if audit is None:
return None
updated = self._update_audit_metadata(
audit,
{
'status': 'completed',
'processed_at': datetime.utcnow().isoformat(),
'result': result or {},
'error': None,
},
)
return self._serialize_prompt_queue_item(updated)
def fail_queued_prompt(self, queue_item_id: int, error: str) -> dict | None:
"""Mark a queued prompt as failed."""
audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
if audit is None:
return None
updated = self._update_audit_metadata(
audit,
{
'status': 'failed',
'failed_at': datetime.utcnow().isoformat(),
'error': error,
},
)
return self._serialize_prompt_queue_item(updated)
def get_prompt_queue_item(self, queue_item_id: int) -> dict | None:
"""Return a single queued prompt item by audit id."""
audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
if audit is None:
return None
return self._serialize_prompt_queue_item(audit)
def retry_queued_prompt(self, queue_item_id: int) -> dict | None:
"""Return a failed or completed queue item back to queued state."""
audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
if audit is None:
return None
updated = self._update_audit_metadata(
audit,
{
'status': 'queued',
'queued_at': datetime.utcnow().isoformat(),
'claimed_at': None,
'processed_at': None,
'failed_at': None,
'error': None,
},
)
return self._serialize_prompt_queue_item(updated)
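# Illustrative usage (hypothetical helper, not part of this class): process one
# queued prompt via the claim/complete/fail API above. `handle_prompt` is a
# caller-supplied callable that performs the actual generation.
def _example_process_one_queued_prompt(db_manager, handle_prompt):
    item = db_manager.claim_next_queued_prompt()
    if item is None:
        return None  # nothing queued
    try:
        result = handle_prompt(item['prompt_text'])
        return db_manager.complete_queued_prompt(item['id'], result=result)
    except Exception as exc:
        # Record the failure so operators can retry it from the Health tab.
        return db_manager.fail_queued_prompt(item['id'], error=str(exc))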
def _latest_llm_prompt_config_entries(self) -> dict[str, AuditTrail]:
"""Return the most recent persisted audit row for each editable LLM prompt key."""
entries: dict[str, AuditTrail] = {}
try:
audits = (
self.db.query(AuditTrail)
.filter(AuditTrail.action == self.PROMPT_CONFIG_ACTION)
.order_by(AuditTrail.created_at.desc(), AuditTrail.id.desc())
.all()
)
except Exception:
return entries
for audit in audits:
metadata = self._normalize_metadata(audit.metadata_json)
key = str(metadata.get('key') or '').strip()
if not key or key in entries or key not in EDITABLE_LLM_PROMPTS:
continue
entries[key] = audit
return entries
def get_llm_prompt_override(self, key: str) -> str | None:
"""Return the persisted override for one editable LLM prompt key."""
entry = self._latest_llm_prompt_config_entries().get(key)
if entry is None:
return None
metadata = self._normalize_metadata(entry.metadata_json)
if metadata.get('reset_to_default'):
return None
value = metadata.get('value')
if value is None:
return None
return str(value)
def get_llm_prompt_settings(self) -> list[dict]:
"""Return editable LLM prompt definitions merged with persisted DB overrides."""
latest = self._latest_llm_prompt_config_entries()
items = []
for key, metadata in EDITABLE_LLM_PROMPTS.items():
entry = latest.get(key)
entry_metadata = self._normalize_metadata(entry.metadata_json) if entry is not None else {}
default_value = (getattr(settings, key, '') or '').strip()
persisted_value = None if entry_metadata.get('reset_to_default') else entry_metadata.get('value')
items.append(
{
'key': key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'default_value': default_value,
'value': str(persisted_value).strip() if persisted_value is not None else default_value,
'source': 'database' if persisted_value is not None else 'environment',
'updated_at': entry.created_at.isoformat() if entry and entry.created_at else None,
'updated_by': entry.actor if entry is not None else None,
'reset_to_default': bool(entry_metadata.get('reset_to_default')) if entry is not None else False,
}
)
return items
def save_llm_prompt_setting(self, key: str, value: str, actor: str = 'dashboard') -> dict:
"""Persist one editable LLM prompt override into the audit trail."""
if key not in EDITABLE_LLM_PROMPTS:
return {'status': 'error', 'message': f'Unsupported prompt key: {key}'}
audit = AuditTrail(
project_id=self.PROMPT_CONFIG_PROJECT_ID,
action=self.PROMPT_CONFIG_ACTION,
actor=actor,
action_type='UPDATE',
details=f'Updated LLM prompt setting {key}',
message=f'Updated LLM prompt setting {key}',
metadata_json={
'key': key,
'value': value,
'reset_to_default': False,
},
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
def reset_llm_prompt_setting(self, key: str, actor: str = 'dashboard') -> dict:
"""Reset one editable LLM prompt override back to its environment/default value."""
if key not in EDITABLE_LLM_PROMPTS:
return {'status': 'error', 'message': f'Unsupported prompt key: {key}'}
audit = AuditTrail(
project_id=self.PROMPT_CONFIG_PROJECT_ID,
action=self.PROMPT_CONFIG_ACTION,
actor=actor,
action_type='RESET',
details=f'Reset LLM prompt setting {key} to default',
message=f'Reset LLM prompt setting {key} to default',
metadata_json={
'key': key,
'value': None,
'reset_to_default': True,
},
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
def attach_issue_to_prompt(self, prompt_id: int, related_issue: dict) -> AuditTrail | None:
"""Attach resolved issue context to a previously recorded prompt."""
prompt = self.db.query(AuditTrail).filter(AuditTrail.id == prompt_id, AuditTrail.action == 'PROMPT_RECEIVED').first()
@@ -1987,6 +2263,7 @@ class DatabaseManager:
def get_dashboard_snapshot(self, limit: int = 8) -> dict:
"""Return DB-backed dashboard data for the UI."""
queue_summary = self.get_prompt_queue_summary()
if settings.gitea_url and settings.gitea_token:
try:
try:
@@ -2015,6 +2292,8 @@ class DatabaseManager:
"completed_projects": len([project for project in active_projects if project.status == ProjectStatus.COMPLETED.value]),
"error_projects": len([project for project in active_projects if project.status == ProjectStatus.ERROR.value]),
"prompt_events": self.db.query(AuditTrail).filter(AuditTrail.action == "PROMPT_RECEIVED").count(),
"queued_prompts": queue_summary.get('queued', 0),
"failed_queued_prompts": queue_summary.get('failed', 0),
"code_changes": self.db.query(AuditTrail).filter(AuditTrail.action == "CODE_CHANGE").count(),
"open_pull_requests": self.db.query(PullRequest).filter(PullRequest.pr_state == "open", PullRequest.merged.is_(False)).count(),
"tracked_issues": self.db.query(AuditTrail).filter(AuditTrail.action == "REPOSITORY_ISSUE").count(),
@@ -2034,6 +2313,10 @@ class DatabaseManager:
],
"lineage_links": self.get_prompt_change_links(limit=limit * 10),
"correlations": self.get_prompt_change_correlations(limit=limit),
"prompt_queue": {
'items': self.get_prompt_queue(limit=limit),
'summary': queue_summary,
},
}
def cleanup_audit_trail(self) -> None:

View File

@@ -4,6 +4,20 @@ import os
import urllib.error
import urllib.request
import json
from urllib.parse import urlparse
def _normalize_base_url(base_url: str) -> str:
"""Normalize host-only service addresses into valid absolute URLs."""
normalized = (base_url or '').strip().rstrip('/')
if not normalized:
return ''
if '://' not in normalized:
normalized = f'https://{normalized}'
parsed = urlparse(normalized)
if not parsed.scheme or not parsed.netloc:
return ''
return normalized
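# Behavior sketch (illustrative):
#   _normalize_base_url('git.disi.dev')          -> 'https://git.disi.dev'
#   _normalize_base_url('https://git.disi.dev/') -> 'https://git.disi.dev'
#   _normalize_base_url('')                      -> ''   (invalid or empty input)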
class GiteaAPI:
@@ -11,7 +25,7 @@ class GiteaAPI:
def __init__(self, token: str, base_url: str, owner: str | None = None, repo: str | None = None):
self.token = token
self.base_url = base_url.rstrip("/")
self.base_url = _normalize_base_url(base_url)
self.owner = owner
self.repo = repo
self.headers = {
@@ -26,7 +40,7 @@ class GiteaAPI:
owner = os.getenv("GITEA_OWNER", "ai-test")
repo = os.getenv("GITEA_REPO", "")
return {
"base_url": base_url.rstrip("/"),
"base_url": _normalize_base_url(base_url),
"token": token,
"owner": owner,
"repo": repo,
@@ -96,16 +110,16 @@ class GiteaAPI:
def _request_sync(self, method: str, path: str, payload: dict | None = None) -> dict:
"""Perform a synchronous Gitea API request."""
request = urllib.request.Request(
self._api_url(path),
headers=self.get_auth_headers(),
method=method.upper(),
)
data = None
if payload is not None:
data = json.dumps(payload).encode('utf-8')
request.data = data
try:
if not self.base_url:
return {'error': 'Gitea base URL is not configured or is invalid'}
request = urllib.request.Request(
self._api_url(path),
headers=self.get_auth_headers(),
method=method.upper(),
)
if payload is not None:
request.data = json.dumps(payload).encode('utf-8')
with urllib.request.urlopen(request) as response:
body = response.read().decode('utf-8')
return json.loads(body) if body else {}
@@ -182,6 +196,10 @@ class GiteaAPI:
"""Get the user associated with the configured token."""
return await self._request("GET", "user")
def get_current_user_sync(self) -> dict:
"""Synchronously get the user associated with the configured token."""
return self._request_sync("GET", "user")
async def create_branch(self, branch: str, base: str = "main", owner: str | None = None, repo: str | None = None):
"""Create a new branch."""
_owner = owner or self.owner

View File

@@ -0,0 +1,162 @@
"""Home Assistant integration for energy-gated queue processing."""
from __future__ import annotations
try:
from ..config import settings
except ImportError:
from config import settings
class HomeAssistantAgent:
"""Query Home Assistant for queue-processing eligibility and health."""
def __init__(self, base_url: str | None = None, token: str | None = None):
self.base_url = (base_url or settings.home_assistant_url).rstrip('/')
self.token = token or settings.home_assistant_token
def _headers(self) -> dict[str, str]:
return {
'Authorization': f'Bearer {self.token}',
'Content-Type': 'application/json',
}
def _state_url(self, entity_id: str) -> str:
return f'{self.base_url}/api/states/{entity_id}'
async def _get_state(self, entity_id: str) -> dict:
if not self.base_url:
return {'error': 'Home Assistant URL is not configured'}
if not self.token:
return {'error': 'Home Assistant token is not configured'}
if not entity_id:
return {'error': 'Home Assistant entity id is not configured'}
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.get(self._state_url(entity_id), headers=self._headers()) as resp:
payload = await resp.json(content_type=None)
if 200 <= resp.status < 300:
return payload if isinstance(payload, dict) else {'value': payload}
return {'error': payload, 'status_code': resp.status}
except Exception as exc:
return {'error': str(exc)}
def _get_state_sync(self, entity_id: str) -> dict:
if not self.base_url:
return {'error': 'Home Assistant URL is not configured'}
if not self.token:
return {'error': 'Home Assistant token is not configured'}
if not entity_id:
return {'error': 'Home Assistant entity id is not configured'}
try:
import json
import urllib.error
import urllib.request
request = urllib.request.Request(self._state_url(entity_id), headers=self._headers(), method='GET')
with urllib.request.urlopen(request) as response:
body = response.read().decode('utf-8')
return json.loads(body) if body else {}
except urllib.error.HTTPError as exc:
try:
body = exc.read().decode('utf-8')
except Exception:
body = str(exc)
return {'error': body, 'status_code': exc.code}
except Exception as exc:
return {'error': str(exc)}
@staticmethod
def _coerce_float(payload: dict) -> float | None:
raw = payload.get('state') if isinstance(payload, dict) else None
try:
return float(raw)
except Exception:
return None
async def queue_gate_status(self, force: bool = False) -> dict:
"""Return whether queued prompts may be processed now."""
if force or settings.prompt_queue_force_process:
return {
'status': 'success',
'allowed': True,
'forced': True,
'reason': 'Queue override is enabled',
}
battery = await self._get_state(settings.home_assistant_battery_entity_id)
surplus = await self._get_state(settings.home_assistant_surplus_entity_id)
battery_value = self._coerce_float(battery)
surplus_value = self._coerce_float(surplus)
checks = []
if battery.get('error'):
checks.append({'name': 'battery', 'ok': False, 'message': str(battery.get('error')), 'entity_id': settings.home_assistant_battery_entity_id})
else:
checks.append({'name': 'battery', 'ok': battery_value is not None and battery_value >= settings.home_assistant_battery_full_threshold, 'message': f'{battery_value}%', 'entity_id': settings.home_assistant_battery_entity_id})
if surplus.get('error'):
checks.append({'name': 'surplus', 'ok': False, 'message': str(surplus.get('error')), 'entity_id': settings.home_assistant_surplus_entity_id})
else:
checks.append({'name': 'surplus', 'ok': surplus_value is not None and surplus_value >= settings.home_assistant_surplus_threshold_watts, 'message': f'{surplus_value} W', 'entity_id': settings.home_assistant_surplus_entity_id})
allowed = all(check['ok'] for check in checks)
return {
'status': 'success' if allowed else 'blocked',
'allowed': allowed,
'forced': False,
'checks': checks,
'battery_level': battery_value,
'surplus_watts': surplus_value,
'thresholds': {
'battery_full_percent': settings.home_assistant_battery_full_threshold,
'surplus_watts': settings.home_assistant_surplus_threshold_watts,
},
'reason': 'Energy gate open' if allowed else 'Battery or surplus threshold not met',
}
def health_check_sync(self) -> dict:
"""Return current Home Assistant connectivity and queue gate diagnostics."""
if not self.base_url:
return {
'status': 'error',
'message': 'Home Assistant URL is not configured.',
'base_url': '',
'configured': False,
'checks': [],
}
if not self.token:
return {
'status': 'error',
'message': 'Home Assistant token is not configured.',
'base_url': self.base_url,
'configured': False,
'checks': [],
}
battery = self._get_state_sync(settings.home_assistant_battery_entity_id)
surplus = self._get_state_sync(settings.home_assistant_surplus_entity_id)
checks = []
for name, entity_id, payload in (
('battery', settings.home_assistant_battery_entity_id, battery),
('surplus', settings.home_assistant_surplus_entity_id, surplus),
):
checks.append(
{
'name': name,
'entity_id': entity_id,
'ok': not bool(payload.get('error')),
'message': str(payload.get('error') or payload.get('state') or 'ok'),
'status_code': payload.get('status_code'),
'url': self._state_url(entity_id) if entity_id else self.base_url,
}
)
return {
'status': 'success' if all(check['ok'] for check in checks) else 'error',
'message': 'Home Assistant connectivity is healthy.' if all(check['ok'] for check in checks) else 'Home Assistant checks failed.',
'base_url': self.base_url,
'configured': True,
'checks': checks,
'queue_gate': {
'battery_full_percent': settings.home_assistant_battery_full_threshold,
'surplus_watts': settings.home_assistant_surplus_threshold_watts,
'force_process': settings.prompt_queue_force_process,
},
}

View File

@@ -18,6 +18,20 @@ except ImportError:
class RequestInterpreter:
"""Use Ollama to turn free-form text into a structured software request."""
REQUEST_PREFIX_WORDS = {
'a', 'an', 'app', 'application', 'build', 'create', 'dashboard', 'develop', 'design', 'for', 'generate',
'internal', 'make', 'me', 'modern', 'need', 'new', 'our', 'platform', 'please', 'project', 'service',
'simple', 'site', 'start', 'system', 'the', 'tool', 'us', 'want', 'web', 'website', 'with',
}
REPO_NOISE_WORDS = REQUEST_PREFIX_WORDS | {'and', 'from', 'into', 'on', 'that', 'this', 'to'}
GENERIC_PROJECT_NAME_WORDS = {
'app', 'application', 'harness', 'platform', 'project', 'purpose', 'service', 'solution', 'suite', 'system', 'test', 'tool',
}
PLACEHOLDER_PROJECT_NAME_WORDS = {
'generated project', 'new project', 'project', 'temporary name', 'temp name', 'placeholder', 'untitled project',
}
def __init__(self, ollama_url: str | None = None, model: str | None = None):
self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
self.model = model or settings.OLLAMA_MODEL
@@ -145,10 +159,11 @@ class RequestInterpreter:
)
if content:
try:
fallback_name = self._preferred_project_name_fallback(prompt_text, interpreted.get('name'))
parsed = json.loads(content)
project_name, repo_name = self._normalize_project_identity(
parsed,
fallback_name=interpreted.get('name') or self._derive_name(prompt_text),
fallback_name=fallback_name,
)
repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
interpreted['name'] = project_name
@@ -158,7 +173,7 @@ class RequestInterpreter:
except Exception:
pass
fallback_name = interpreted.get('name') or self._derive_name(prompt_text)
fallback_name = self._preferred_project_name_fallback(prompt_text, interpreted.get('name'))
routing['project_name'] = fallback_name
routing['repo_name'] = self._ensure_unique_repo_name(self._derive_repo_name(fallback_name), constraints['repo_names'])
return interpreted, routing, trace
@@ -280,13 +295,22 @@ class RequestInterpreter:
noun_phrase = re.search(
r'(?:build|create|start|make|develop|generate|design|need|want)\s+'
r'(?:me\s+|us\s+|an?\s+|the\s+|new\s+|internal\s+|simple\s+|lightweight\s+|modern\s+|web\s+|mobile\s+)*'
r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager))\b',
r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager|harness|runner|framework|suite|pipeline|lab))\b',
first_line,
flags=re.IGNORECASE,
)
if noun_phrase:
return self._humanize_name(noun_phrase.group(1))
focused_phrase = re.search(
r'(?:purpose\s+is\s+to\s+create\s+(?:an?\s+)?)'
r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager|harness|runner|framework|suite|pipeline|lab))\b',
first_line,
flags=re.IGNORECASE,
)
if focused_phrase:
return self._humanize_name(focused_phrase.group(1))
cleaned = re.sub(r'[^A-Za-z0-9 ]+', ' ', first_line)
stopwords = {
'build', 'create', 'start', 'make', 'develop', 'generate', 'design', 'need', 'want', 'please', 'for', 'our', 'with', 'that', 'this',
@@ -301,6 +325,7 @@ class RequestInterpreter:
"""Normalize a candidate project name into a readable title."""
cleaned = re.sub(r'[^A-Za-z0-9\s-]+', ' ', raw_name).strip(' -')
cleaned = re.sub(r'\s+', ' ', cleaned)
cleaned = self._trim_request_prefix(cleaned)
special_upper = {'api', 'crm', 'erp', 'cms', 'hr', 'it', 'ui', 'qa'}
words = []
for word in cleaned.split()[:6]:
@@ -308,14 +333,79 @@ class RequestInterpreter:
words.append(lowered.upper() if lowered in special_upper else lowered.capitalize())
return ' '.join(words) or 'Generated Project'
def _trim_request_prefix(self, candidate: str) -> str:
"""Remove leading request phrasing from model-produced names and slugs."""
tokens = [token for token in re.split(r'[-\s]+', candidate or '') if token]
while tokens and tokens[0].lower() in self.REQUEST_PREFIX_WORDS:
tokens.pop(0)
trimmed = ' '.join(tokens).strip()
return trimmed or candidate.strip()
def _derive_repo_name(self, project_name: str) -> str:
"""Derive a repository slug from a human-readable project name."""
preferred = (project_name or 'project').strip().lower().replace(' ', '-')
preferred_name = self._trim_request_prefix((project_name or 'project').strip())
preferred = preferred_name.lower().replace(' ', '-')
sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
while '--' in sanitized:
sanitized = sanitized.replace('--', '-')
return sanitized.strip('-') or 'project'
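# Illustrative behavior: leading request phrasing is trimmed before slugging,
# e.g. _derive_repo_name('Create A Task Tracker App') -> 'task-tracker-app'.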
def _should_use_repo_name_candidate(self, candidate: str, project_name: str) -> bool:
"""Return whether a model-proposed repo slug is concise enough to trust directly."""
cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s_-]+', ' ', candidate or '').strip())
if not cleaned:
return False
candidate_tokens = [token.lower() for token in re.split(r'[-\s_]+', cleaned) if token]
if not candidate_tokens:
return False
if len(candidate_tokens) > 6:
return False
noise_count = sum(1 for token in candidate_tokens if token in self.REPO_NOISE_WORDS)
if noise_count >= 2:
return False
if len('-'.join(candidate_tokens)) > 40:
return False
project_tokens = {
token.lower()
for token in re.split(r'[-\s_]+', project_name or '')
if token and token.lower() not in self.REPO_NOISE_WORDS
}
if project_tokens:
overlap = sum(1 for token in candidate_tokens if token in project_tokens)
if overlap == 0:
return False
return True
def _should_use_project_name_candidate(self, candidate: str, fallback_name: str) -> bool:
"""Return whether a model-proposed project title is concrete enough to trust."""
cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s-]+', ' ', candidate or '').strip())
if not cleaned:
return False
candidate_tokens = [token.lower() for token in re.split(r'[-\s]+', cleaned) if token]
if not candidate_tokens:
return False
if len(candidate_tokens) == 1 and candidate_tokens[0] in self.GENERIC_PROJECT_NAME_WORDS:
return False
if all(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
return False
fallback_tokens = {
token.lower() for token in re.split(r'[-\s]+', fallback_name or '') if token and token.lower() not in self.REPO_NOISE_WORDS
}
if fallback_tokens and len(candidate_tokens) <= 2:
overlap = sum(1 for token in candidate_tokens if token in fallback_tokens)
if overlap == 0 and any(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
return False
return True
def _preferred_project_name_fallback(self, prompt_text: str, interpreted_name: str | None) -> str:
"""Pick the best fallback title when the earlier interpretation produced a placeholder."""
interpreted_clean = self._humanize_name(str(interpreted_name or '').strip()) if interpreted_name else ''
normalized_interpreted = interpreted_clean.lower()
if normalized_interpreted and normalized_interpreted not in self.PLACEHOLDER_PROJECT_NAME_WORDS:
if not (len(normalized_interpreted.split()) == 1 and normalized_interpreted in self.GENERIC_PROJECT_NAME_WORDS):
return interpreted_clean
return self._derive_name(prompt_text)
def _ensure_unique_repo_name(self, repo_name: str, reserved_names: set[str]) -> str:
"""Choose a repository slug that does not collide with tracked or remote repositories."""
base_name = self._derive_repo_name(repo_name)
@@ -328,8 +418,15 @@ class RequestInterpreter:
def _normalize_project_identity(self, payload: dict, fallback_name: str) -> tuple[str, str]:
"""Normalize model-proposed project and repository naming."""
project_name = self._humanize_name(str(payload.get('project_name') or payload.get('name') or fallback_name))
repo_name = self._derive_repo_name(str(payload.get('repo_name') or project_name))
fallback_project_name = self._humanize_name(str(fallback_name or 'Generated Project'))
project_candidate = str(payload.get('project_name') or payload.get('name') or '').strip()
project_name = fallback_project_name
if project_candidate and self._should_use_project_name_candidate(project_candidate, fallback_project_name):
project_name = self._humanize_name(project_candidate)
repo_candidate = str(payload.get('repo_name') or '').strip()
repo_name = self._derive_repo_name(project_name)
if repo_candidate and self._should_use_repo_name_candidate(repo_candidate, project_name):
repo_name = self._derive_repo_name(repo_candidate)
return project_name, repo_name
def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:

View File

@@ -4,10 +4,94 @@ import json
import os
from typing import Optional
from pathlib import Path
from urllib.parse import urlparse
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
def _normalize_service_url(value: str, default_scheme: str = "https") -> str:
"""Normalize service URLs so host-only values still become valid absolute URLs."""
normalized = (value or "").strip().rstrip("/")
if not normalized:
return ""
if "://" not in normalized:
normalized = f"{default_scheme}://{normalized}"
parsed = urlparse(normalized)
if not parsed.scheme or not parsed.netloc:
return ""
return normalized
EDITABLE_LLM_PROMPTS: dict[str, dict[str, str]] = {
'LLM_GUARDRAIL_PROMPT': {
'label': 'Global Guardrails',
'category': 'guardrail',
'description': 'Applied to every outbound external LLM call.',
},
'LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT': {
'label': 'Request Interpretation Guardrails',
'category': 'guardrail',
'description': 'Constrains project routing and continuation selection.',
},
'LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT': {
'label': 'Change Summary Guardrails',
'category': 'guardrail',
'description': 'Constrains factual delivery summaries.',
},
'LLM_PROJECT_NAMING_GUARDRAIL_PROMPT': {
'label': 'Project Naming Guardrails',
'category': 'guardrail',
'description': 'Constrains project display names and repo slugs.',
},
'LLM_PROJECT_NAMING_SYSTEM_PROMPT': {
'label': 'Project Naming System Prompt',
'category': 'system_prompt',
'description': 'Guides the dedicated new-project naming stage.',
},
'LLM_PROJECT_ID_GUARDRAIL_PROMPT': {
'label': 'Project ID Guardrails',
'category': 'guardrail',
'description': 'Constrains stable project id generation.',
},
'LLM_PROJECT_ID_SYSTEM_PROMPT': {
'label': 'Project ID System Prompt',
'category': 'system_prompt',
'description': 'Guides the dedicated project id naming stage.',
},
}
def _get_persisted_llm_prompt_override(env_key: str) -> str | None:
"""Load one persisted LLM prompt override from the database when available."""
if env_key not in EDITABLE_LLM_PROMPTS:
return None
try:
try:
from .database import get_db_sync
from .agents.database_manager import DatabaseManager
except ImportError:
from database import get_db_sync
from agents.database_manager import DatabaseManager
db = get_db_sync()
if db is None:
return None
try:
return DatabaseManager(db).get_llm_prompt_override(env_key)
finally:
db.close()
except Exception:
return None
def _resolve_llm_prompt_value(env_key: str, fallback: str) -> str:
"""Resolve one editable prompt from DB override first, then environment/defaults."""
override = _get_persisted_llm_prompt_override(env_key)
if override is not None:
return override.strip()
return (fallback or '').strip()
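# Resolution order (illustrative): a persisted DB override, when present, wins
# over the environment/default value passed as `fallback`; both are stripped.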
class Settings(BaseSettings):
"""Application settings loaded from environment variables."""
@@ -36,10 +120,10 @@ class Settings(BaseSettings):
"For summaries: only describe facts present in the provided context and tool outputs. Never claim a repository, commit, or pull request exists unless it is present in the supplied data."
)
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT: str = (
"For project naming: prefer clear, product-like names and repository slugs that match the user's intent. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
"For project naming: prefer clear, product-like names and repository slugs that match the user's concrete deliverable. Avoid abstract or instructional words such as purpose, project, system, app, tool, platform, solution, new, create, or test unless the request truly centers on that exact noun. Base the name on the actual artifact or workflow being built, and avoid copying sentence fragments from the prompt. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
)
LLM_PROJECT_NAMING_SYSTEM_PROMPT: str = (
"You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles. Repo names should be lowercase kebab-case slugs suitable for a Gitea repository name."
"You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles based on the real product, artifact, or workflow being created. Repo names should be lowercase kebab-case slugs derived from that title. Never return generic names like purpose, project, system, app, tool, platform, solution, harness, or test by themselves, and never return a repo_name that is a copied sentence fragment from the prompt. Prefer 2 to 4 specific words when possible."
)
LLM_PROJECT_ID_GUARDRAIL_PROMPT: str = (
"For project ids: produce short stable slugs for newly created projects. Avoid collisions with known project ids and keep ids lowercase with hyphens."
@@ -76,6 +160,19 @@ class Settings(BaseSettings):
TELEGRAM_BOT_TOKEN: str = ""
TELEGRAM_CHAT_ID: str = ""
# Home Assistant and prompt queue settings
HOME_ASSISTANT_URL: str = ""
HOME_ASSISTANT_TOKEN: str = ""
HOME_ASSISTANT_BATTERY_ENTITY_ID: str = ""
HOME_ASSISTANT_SURPLUS_ENTITY_ID: str = ""
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD: float = 95.0
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS: float = 100.0
PROMPT_QUEUE_ENABLED: bool = False
PROMPT_QUEUE_AUTO_PROCESS: bool = True
PROMPT_QUEUE_FORCE_PROCESS: bool = False
PROMPT_QUEUE_POLL_INTERVAL_SECONDS: int = 60
PROMPT_QUEUE_MAX_BATCH_SIZE: int = 1
# PostgreSQL settings
POSTGRES_HOST: str = "localhost"
POSTGRES_PORT: int = 5432
@@ -163,37 +260,54 @@ class Settings(BaseSettings):
@property
def llm_guardrail_prompt(self) -> str:
"""Get the global guardrail prompt used for all external LLM calls."""
return self.LLM_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_GUARDRAIL_PROMPT', self.LLM_GUARDRAIL_PROMPT)
@property
def llm_request_interpreter_guardrail_prompt(self) -> str:
"""Get the request-interpretation specific guardrail prompt."""
return self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT', self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT)
@property
def llm_change_summary_guardrail_prompt(self) -> str:
"""Get the change-summary specific guardrail prompt."""
return self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT', self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT)
@property
def llm_project_naming_guardrail_prompt(self) -> str:
"""Get the project-naming specific guardrail prompt."""
return self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_GUARDRAIL_PROMPT', self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT)
@property
def llm_project_naming_system_prompt(self) -> str:
"""Get the project-naming system prompt."""
return self.LLM_PROJECT_NAMING_SYSTEM_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_SYSTEM_PROMPT', self.LLM_PROJECT_NAMING_SYSTEM_PROMPT)
@property
def llm_project_id_guardrail_prompt(self) -> str:
"""Get the project-id naming specific guardrail prompt."""
return self.LLM_PROJECT_ID_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_ID_GUARDRAIL_PROMPT', self.LLM_PROJECT_ID_GUARDRAIL_PROMPT)
@property
def llm_project_id_system_prompt(self) -> str:
"""Get the project-id naming system prompt."""
return self.LLM_PROJECT_ID_SYSTEM_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_ID_SYSTEM_PROMPT', self.LLM_PROJECT_ID_SYSTEM_PROMPT)
@property
def editable_llm_prompts(self) -> list[dict[str, str]]:
"""Return metadata for all LLM prompts that may be persisted and edited from the UI."""
prompts = []
for env_key, metadata in EDITABLE_LLM_PROMPTS.items():
prompts.append(
{
'key': env_key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'default_value': (getattr(self, env_key, '') or '').strip(),
'value': _resolve_llm_prompt_value(env_key, getattr(self, env_key, '')),
}
)
return prompts
@property
def llm_tool_allowlist(self) -> list[str]:
@@ -254,7 +368,7 @@ class Settings(BaseSettings):
@property
def gitea_url(self) -> str:
"""Get Gitea URL with trimmed whitespace."""
return self.GITEA_URL.strip()
return _normalize_service_url(self.GITEA_URL)
@property
def gitea_token(self) -> str:
@@ -279,12 +393,12 @@ class Settings(BaseSettings):
@property
def n8n_webhook_url(self) -> str:
"""Get n8n webhook URL with trimmed whitespace."""
return self.N8N_WEBHOOK_URL.strip()
return _normalize_service_url(self.N8N_WEBHOOK_URL, default_scheme="http")
@property
def n8n_api_url(self) -> str:
"""Get n8n API URL with trimmed whitespace."""
return self.N8N_API_URL.strip()
return _normalize_service_url(self.N8N_API_URL, default_scheme="http")
@property
def n8n_api_key(self) -> str:
@@ -309,7 +423,62 @@ class Settings(BaseSettings):
@property
def backend_public_url(self) -> str:
"""Get backend public URL with trimmed whitespace."""
return self.BACKEND_PUBLIC_URL.strip().rstrip("/")
return _normalize_service_url(self.BACKEND_PUBLIC_URL, default_scheme="http")
@property
def home_assistant_url(self) -> str:
"""Get Home Assistant URL with trimmed whitespace."""
return _normalize_service_url(self.HOME_ASSISTANT_URL, default_scheme="http")
@property
def home_assistant_token(self) -> str:
"""Get Home Assistant token with trimmed whitespace."""
return self.HOME_ASSISTANT_TOKEN.strip()
@property
def home_assistant_battery_entity_id(self) -> str:
"""Get the Home Assistant battery state entity id."""
return self.HOME_ASSISTANT_BATTERY_ENTITY_ID.strip()
@property
def home_assistant_surplus_entity_id(self) -> str:
"""Get the Home Assistant surplus power entity id."""
return self.HOME_ASSISTANT_SURPLUS_ENTITY_ID.strip()
@property
def home_assistant_battery_full_threshold(self) -> float:
"""Get the minimum battery SoC percentage for queue processing."""
return float(self.HOME_ASSISTANT_BATTERY_FULL_THRESHOLD)
@property
def home_assistant_surplus_threshold_watts(self) -> float:
"""Get the minimum export/surplus power threshold for queue processing."""
return float(self.HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS)
@property
def prompt_queue_enabled(self) -> bool:
"""Whether Telegram prompts should be queued instead of processed immediately."""
return bool(self.PROMPT_QUEUE_ENABLED)
@property
def prompt_queue_auto_process(self) -> bool:
"""Whether the background worker should automatically process queued prompts."""
return bool(self.PROMPT_QUEUE_AUTO_PROCESS)
@property
def prompt_queue_force_process(self) -> bool:
"""Whether queued prompts should bypass the Home Assistant energy gate."""
return bool(self.PROMPT_QUEUE_FORCE_PROCESS)
@property
def prompt_queue_poll_interval_seconds(self) -> int:
"""Get the queue polling interval for background processing."""
return max(int(self.PROMPT_QUEUE_POLL_INTERVAL_SECONDS), 5)
@property
def prompt_queue_max_batch_size(self) -> int:
"""Get the maximum number of queued prompts to process in one batch."""
return max(int(self.PROMPT_QUEUE_MAX_BATCH_SIZE), 1)
@property
def projects_root(self) -> Path:

View File

@@ -5,17 +5,22 @@ from __future__ import annotations
from contextlib import closing
from html import escape
import json
import re
import time
import urllib.error
import urllib.request
from nicegui import app, ui
AUTO_SYNC_INTERVAL_SECONDS = 60
_last_background_repo_sync_at = 0.0
_DIFF_HUNK_PATTERN = re.compile(r'^@@ -(\d+)(?:,\d+)? \+(\d+)(?:,\d+)? @@')
try:
from .agents.database_manager import DatabaseManager
from .agents.gitea import GiteaAPI
from .agents.home_assistant import HomeAssistantAgent
from .agents.llm_service import LLMServiceClient
from .agents.n8n_setup import N8NSetupAgent
from .agents.prompt_workflow import PromptWorkflowManager
@@ -25,6 +30,7 @@ try:
except ImportError:
from agents.database_manager import DatabaseManager
from agents.gitea import GiteaAPI
from agents.home_assistant import HomeAssistantAgent
from agents.llm_service import LLMServiceClient
from agents.n8n_setup import N8NSetupAgent
from agents.prompt_workflow import PromptWorkflowManager
@@ -235,6 +241,126 @@ def _render_timeline(events: list[dict]) -> None:
ui.label(f"Prompt {metadata['prompt_id']}").classes('factory-chip')
def _parse_side_by_side_diff(diff_text: str) -> list[dict]:
"""Parse unified diff text into rows suitable for side-by-side rendering."""
rows: list[dict] = []
left_line = 0
right_line = 0
lines = diff_text.splitlines()
index = 0
while index < len(lines):
line = lines[index]
if line.startswith(('diff --git', 'index ', '--- ', '+++ ')):
index += 1
continue
if line.startswith('@@'):
match = _DIFF_HUNK_PATTERN.match(line)
if match:
left_line = int(match.group(1))
right_line = int(match.group(2))
rows.append({'type': 'hunk', 'header': line})
index += 1
continue
if line.startswith('-') and not line.startswith('---'):
next_line = lines[index + 1] if index + 1 < len(lines) else None
if next_line and next_line.startswith('+') and not next_line.startswith('+++'):
rows.append(
{
'type': 'change',
'kind': 'modified',
'left_no': left_line,
'right_no': right_line,
'left_text': line[1:],
'right_text': next_line[1:],
}
)
left_line += 1
right_line += 1
index += 2
continue
rows.append(
{
'type': 'change',
'kind': 'removed',
'left_no': left_line,
'right_no': '',
'left_text': line[1:],
'right_text': '',
}
)
left_line += 1
index += 1
continue
if line.startswith('+') and not line.startswith('+++'):
rows.append(
{
'type': 'change',
'kind': 'added',
'left_no': '',
'right_no': right_line,
'left_text': '',
'right_text': line[1:],
}
)
right_line += 1
index += 1
continue
if line.startswith(' '):
rows.append(
{
'type': 'change',
'kind': 'context',
'left_no': left_line,
'right_no': right_line,
'left_text': line[1:],
'right_text': line[1:],
}
)
left_line += 1
right_line += 1
index += 1
continue
rows.append({'type': 'meta', 'text': line})
index += 1
return rows
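# Illustrative example: a single-hunk diff yields one 'hunk' row plus a paired
# 'modified' change row, e.g.
#   _parse_side_by_side_diff('@@ -1,1 +1,1 @@\n-old\n+new')
#   -> [{'type': 'hunk', 'header': '@@ -1,1 +1,1 @@'},
#       {'type': 'change', 'kind': 'modified', 'left_no': 1, 'right_no': 1,
#        'left_text': 'old', 'right_text': 'new'}]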
def _render_side_by_side_diff(diff_text: str) -> None:
"""Render a side-by-side diff table from unified diff text."""
rows = _parse_side_by_side_diff(diff_text)
if not rows:
ui.label('No diff content recorded.').classes('factory-muted')
return
html_rows = []
for row in rows:
if row['type'] == 'hunk':
html_rows.append(
f"<tr class='factory-diff-hunk'><td colspan='4'>{escape(row['header'])}</td></tr>"
)
continue
if row['type'] == 'meta':
html_rows.append(
f"<tr class='factory-diff-meta'><td colspan='4'>{escape(row['text'])}</td></tr>"
)
continue
kind = row['kind']
html_rows.append(
"<tr>"
f"<td class='factory-diff-line factory-diff-line-{kind}'>{escape(str(row['left_no'])) if row['left_no'] != '' else ''}</td>"
f"<td class='factory-diff-cell factory-diff-cell-{kind}'>{escape(row['left_text'])}</td>"
f"<td class='factory-diff-line factory-diff-line-{kind}'>{escape(str(row['right_no'])) if row['right_no'] != '' else ''}</td>"
f"<td class='factory-diff-cell factory-diff-cell-{kind}'>{escape(row['right_text'])}</td>"
"</tr>"
)
ui.html(
"<div class='factory-diff-wrapper'>"
"<table class='factory-diff-table'>"
"<thead><tr><th colspan='2'>Before</th><th colspan='2'>After</th></tr></thead>"
f"<tbody>{''.join(html_rows)}</tbody>"
"</table></div>"
)
def _render_commit_context(context: dict | None) -> None:
"""Render a commit provenance lookup result."""
if not context:
@@ -351,8 +477,8 @@ def _render_change_list(changes: list[dict]) -> None:
ui.label(change.get('change_type') or change.get('action_type') or 'CHANGE').classes('factory-chip')
ui.label(change.get('diff_summary') or change.get('details') or 'No diff summary recorded').classes('factory-muted')
if change.get('diff_text'):
with ui.expansion('Show diff').classes('w-full q-mt-sm'):
ui.label(change['diff_text']).classes('factory-code')
with ui.expansion('Show side-by-side diff').classes('w-full q-mt-sm'):
_render_side_by_side_diff(change['diff_text'])
def _render_llm_traces(traces: list[dict]) -> None:
@@ -467,10 +593,96 @@ def _load_n8n_health_snapshot() -> dict:
}
def _load_gitea_health_snapshot() -> dict:
"""Load a Gitea health snapshot for UI rendering."""
if not settings.gitea_url:
return {
'status': 'error',
'message': 'GITEA_URL is not configured.',
'base_url': 'Not configured',
'checks': [],
}
if not settings.gitea_token:
return {
'status': 'error',
'message': 'GITEA_TOKEN is not configured.',
'base_url': settings.gitea_url,
'checks': [],
}
try:
response = GiteaAPI(token=settings.GITEA_TOKEN, base_url=settings.GITEA_URL, owner=settings.GITEA_OWNER, repo=settings.GITEA_REPO or '').get_current_user_sync()
if response.get('error'):
return {
'status': 'error',
'message': response.get('error', 'Unable to reach Gitea.'),
'base_url': settings.gitea_url,
'checks': [
{
'name': 'token_auth',
'ok': False,
'message': response.get('error'),
'status_code': response.get('status_code'),
'url': f'{settings.gitea_url}/api/v1/user',
}
],
}
return {
'status': 'success',
'message': f"Authenticated as {response.get('login') or response.get('username') or 'unknown'}.",
'base_url': settings.gitea_url,
'checks': [
{
'name': 'token_auth',
'ok': True,
'message': response.get('login') or response.get('username') or 'authenticated',
'url': f'{settings.gitea_url}/api/v1/user',
}
],
}
except Exception as exc:
return {
'status': 'error',
'message': f'Unable to run Gitea health checks: {exc}',
'base_url': settings.gitea_url,
'checks': [],
}
def _load_home_assistant_health_snapshot() -> dict:
"""Load a Home Assistant health snapshot for UI rendering."""
try:
return HomeAssistantAgent(base_url=settings.home_assistant_url, token=settings.home_assistant_token).health_check_sync()
except Exception as exc:
return {
'status': 'error',
'message': f'Unable to run Home Assistant health checks: {exc}',
'base_url': settings.home_assistant_url or 'Not configured',
'checks': [],
}
def _add_dashboard_styles() -> None:
"""Register shared dashboard styles."""
ui.add_head_html(
"""
<script>
(() => {
const scrollKey = 'factory-dashboard-scroll-y';
const rememberScroll = () => sessionStorage.setItem(scrollKey, String(window.scrollY || 0));
const restoreScroll = () => {
const stored = sessionStorage.getItem(scrollKey);
if (stored === null) return;
window.requestAnimationFrame(() => window.scrollTo({top: Number(stored) || 0, left: 0, behavior: 'auto'}));
};
window.addEventListener('scroll', rememberScroll, {passive: true});
document.addEventListener('click', rememberScroll, true);
const observer = new MutationObserver(() => restoreScroll());
window.addEventListener('load', () => {
observer.observe(document.body, {childList: true, subtree: true});
restoreScroll();
});
})();
</script>
<style>
body { background: radial-gradient(circle at top, #f4efe7 0%, #e9e1d4 38%, #d7cec1 100%); }
.factory-shell { max-width: 1240px; margin: 0 auto; }
@@ -479,6 +691,20 @@ def _add_dashboard_styles() -> None:
.factory-muted { color: #745e4c; }
.factory-code { font-family: 'IBM Plex Mono', 'Fira Code', monospace; background: rgba(32,26,20,0.92); color: #f4efe7; border-radius: 14px; padding: 12px; white-space: pre-wrap; }
.factory-chip { background: rgba(173, 129, 82, 0.14); color: #6b4b2e; border-radius: 999px; padding: 4px 10px; font-size: 12px; }
.factory-diff-wrapper { overflow-x: auto; border-radius: 16px; border: 1px solid rgba(73,54,40,0.10); }
.factory-diff-table { width: 100%; border-collapse: collapse; font-family: 'IBM Plex Mono', 'Fira Code', monospace; font-size: 0.85rem; }
.factory-diff-table thead th { background: rgba(58,40,26,0.08); color: #3a281a; padding: 10px 12px; text-align: left; }
.factory-diff-line { width: 3.5rem; text-align: right; padding: 8px 10px; color: #8a7461; background: rgba(58,40,26,0.04); vertical-align: top; }
.factory-diff-cell { white-space: pre-wrap; padding: 8px 12px; vertical-align: top; }
.factory-diff-cell-context { background: rgba(255,255,255,0.88); }
.factory-diff-cell-added { background: rgba(41,121,82,0.12); }
.factory-diff-cell-removed { background: rgba(198,40,40,0.10); }
.factory-diff-cell-modified { background: linear-gradient(90deg, rgba(198,40,40,0.08), rgba(41,121,82,0.10)); }
.factory-diff-line-added { background: rgba(41,121,82,0.16); }
.factory-diff-line-removed { background: rgba(198,40,40,0.14); }
.factory-diff-line-modified { background: rgba(173,129,82,0.18); }
.factory-diff-hunk td { padding: 8px 12px; background: rgba(48,33,22,0.9); color: #f4efe7; }
.factory-diff-meta td { padding: 8px 12px; background: rgba(58,40,26,0.06); color: #745e4c; }
</style>
"""
)
@@ -529,9 +755,13 @@ def _render_confirmation_dialog(title: str, message: str, confirm_label: str, on
def _render_health_panels() -> None:
"""Render application and n8n health panels."""
"""Render application, integration, and queue health panels."""
runtime = get_database_runtime_summary()
n8n_health = _load_n8n_health_snapshot()
gitea_health = _load_gitea_health_snapshot()
home_assistant_health = _load_home_assistant_health_snapshot()
snapshot = _load_dashboard_snapshot()
queue_summary = ((snapshot.get('prompt_queue') or {}).get('summary') if isinstance(snapshot, dict) else {}) or {}
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
@@ -579,6 +809,54 @@ def _render_health_panels() -> None:
if check.get('message'):
ui.label(check['message']).classes('factory-muted')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Gitea Integration').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label(gitea_health.get('status', 'unknown').upper()).classes('factory-chip')
ui.label(gitea_health.get('message', 'No Gitea status available.')).classes('factory-muted q-mt-sm')
for label, value in [
('Base URL', gitea_health.get('base_url') or 'Not configured'),
('Owner', settings.gitea_owner or 'Not configured'),
('Mode', 'per-project' if settings.use_project_repositories else 'shared'),
]:
with ui.row().classes('justify-between w-full q-mt-sm'):
ui.label(label).classes('factory-muted')
ui.label(str(value)).style('font-weight: 600; color: #3a281a;')
for check in gitea_health.get('checks', []):
status = 'OK' if check.get('ok') else 'FAIL'
ui.markdown(
f"- **{escape(check.get('name', 'check'))}** · {status} · {escape(str(check.get('status_code') or 'n/a'))} · {escape(check.get('url') or 'unknown url')}"
)
if check.get('message'):
ui.label(check['message']).classes('factory-muted')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Home Assistant Queue Gate').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label(home_assistant_health.get('status', 'unknown').upper()).classes('factory-chip')
ui.label(home_assistant_health.get('message', 'No Home Assistant status available.')).classes('factory-muted q-mt-sm')
for label, value in [
('Base URL', home_assistant_health.get('base_url') or 'Not configured'),
('Queue Enabled', 'yes' if settings.prompt_queue_enabled else 'no'),
('Auto Process', 'yes' if settings.prompt_queue_auto_process else 'no'),
('Force Override', 'yes' if settings.prompt_queue_force_process else 'no'),
('Queued Prompts', queue_summary.get('queued', 0)),
('Failed Prompts', queue_summary.get('failed', 0)),
]:
with ui.row().classes('justify-between w-full q-mt-sm'):
ui.label(label).classes('factory-muted')
ui.label(str(value)).style('font-weight: 600; color: #3a281a;')
queue_gate = home_assistant_health.get('queue_gate') or {}
if queue_gate:
ui.label(
f"Thresholds: battery >= {queue_gate.get('battery_full_percent')}%, surplus >= {queue_gate.get('surplus_watts')} W"
).classes('factory-muted q-mt-sm')
for check in home_assistant_health.get('checks', []):
status = 'OK' if check.get('ok') else 'FAIL'
ui.markdown(
f"- **{escape(check.get('name', 'check'))}** · {status} · {escape(str(check.get('status_code') or 'n/a'))} · {escape(check.get('url') or 'unknown url')}"
)
if check.get('message'):
ui.label(check['message']).classes('factory-muted')
def create_health_page() -> None:
"""Create a dedicated health page for runtime diagnostics."""
@@ -607,6 +885,27 @@ def create_dashboard():
repo_discovery_key = 'dashboard.repo_discovery'
repo_owner_key = 'dashboard.repo_owner'
repo_name_key = 'dashboard.repo_name'
expansion_state_prefix = 'dashboard.expansion.'
def _expansion_state_key(name: str) -> str:
return f'{expansion_state_prefix}{name}'
def _expansion_value(name: str, default: bool = False) -> bool:
return bool(app.storage.user.get(_expansion_state_key(name), default))
def _store_expansion_value(name: str, event) -> None:
app.storage.user[_expansion_state_key(name)] = bool(event.value)
def _sticky_expansion(name: str, text: str, *, icon: str | None = None, default: bool = False, classes: str = 'w-full'):
return ui.expansion(
text,
icon=icon,
value=_expansion_value(name, default),
on_value_change=lambda event, expansion_name=name: _store_expansion_value(expansion_name, event),
).classes(classes)
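    # Illustrative aside, not part of this change: the persistence pattern behind
    # _sticky_expansion, with a plain dict standing in for NiceGUI's app.storage.user.
    # All `_example_*` names below are hypothetical.
    _example_storage: dict[str, bool] = {}

    def _example_remember(name: str, value: bool) -> None:
        # Mirrors _store_expansion_value: persist the open/closed flag under a stable key.
        _example_storage[f'dashboard.expansion.{name}'] = bool(value)

    def _example_recall(name: str, default: bool = False) -> bool:
        # Mirrors _expansion_value: read the flag back on the next render.
        return bool(_example_storage.get(f'dashboard.expansion.{name}', default))

    assert _example_recall('projects.demo') is False
    _example_remember('projects.demo', True)
    assert _example_recall('projects.demo') is True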
def _llm_prompt_draft_key(prompt_key: str) -> str:
return f'dashboard.llm_prompt_draft.{prompt_key}'
def _selected_tab_name() -> str:
"""Return the persisted active dashboard tab."""
@@ -668,6 +967,33 @@ def create_dashboard():
def _get_discovered_repositories() -> list[dict]:
return app.storage.user.get(repo_discovery_key, [])
def _prompt_draft_value(prompt_key: str, fallback: str) -> str:
return app.storage.user.get(_llm_prompt_draft_key(prompt_key), fallback)
def _store_prompt_draft(prompt_key: str, value: str) -> None:
app.storage.user[_llm_prompt_draft_key(prompt_key)] = value
def _clear_prompt_draft(prompt_key: str) -> None:
app.storage.user.pop(_llm_prompt_draft_key(prompt_key), None)
def _call_backend_json(path: str, method: str = 'GET', payload: dict | None = None) -> dict:
target = f"{settings.backend_public_url}{path}"
data = json.dumps(payload).encode('utf-8') if payload is not None else None
request = urllib.request.Request(target, data=data, headers={'Content-Type': 'application/json'}, method=method.upper())
try:
with urllib.request.urlopen(request) as response:
body = response.read().decode('utf-8')
return json.loads(body) if body else {}
except urllib.error.HTTPError as exc:
try:
body = exc.read().decode('utf-8')
parsed = json.loads(body) if body else {}
except Exception:
parsed = {'detail': str(exc)}
return {'error': parsed.get('detail') or parsed.get('error') or str(exc), 'status_code': exc.code}
except Exception as exc:
return {'error': str(exc)}
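    # Illustrative usage of _call_backend_json, not part of this change; the
    # '/queue/process' path and payload keys match the backend models later in
    # this diff, and the limit value is an arbitrary example.
    #
    #     result = _call_backend_json('/queue/process', method='POST',
    #                                 payload={'force': False, 'limit': 1})
    #     if result.get('error'):
    #         ui.notify(result['error'], color='negative')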
async def discover_gitea_repositories_action() -> None:
if not settings.gitea_url or not settings.gitea_token:
ui.notify('Configure GITEA_URL and GITEA_TOKEN first', color='negative')
@@ -817,6 +1143,65 @@ def create_dashboard():
ui.notify(result.get('message', 'Telegram message sent'), color='positive' if result.get('status') == 'success' else 'negative')
_refresh_health_sections()
def process_prompt_queue_action(force: bool = False, limit: int | None = None) -> None:
result = _call_backend_json(
'/queue/process',
method='POST',
payload={'force': force, 'limit': limit or settings.prompt_queue_max_batch_size},
)
if result.get('error'):
ui.notify(result.get('error', 'Queue processing failed'), color='negative')
return
processed_count = result.get('processed_count', 0)
if processed_count:
ui.notify(f'Processed {processed_count} queued prompt(s)', color='positive')
else:
ui.notify(result.get('queue_gate', {}).get('reason', 'No queued prompts were processed'), color='warning')
_refresh_all_dashboard_sections()
def retry_prompt_queue_item_action(queue_item_id: int) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
result = DatabaseManager(db).retry_queued_prompt(queue_item_id)
if result is None:
ui.notify('Queued prompt not found', color='negative')
return
ui.notify('Queued prompt returned to pending state', color='positive')
_refresh_all_dashboard_sections()
def save_llm_prompt_action(prompt_key: str) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
current = next((item for item in DatabaseManager(db).get_llm_prompt_settings() if item['key'] == prompt_key), None)
value = _prompt_draft_value(prompt_key, current['value'] if current else '')
result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, value, actor='dashboard')
if result.get('status') == 'error':
ui.notify(result.get('message', 'Prompt save failed'), color='negative')
return
_clear_prompt_draft(prompt_key)
ui.notify('LLM prompt setting saved', color='positive')
_refresh_system_sections()
def reset_llm_prompt_action(prompt_key: str) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='dashboard')
if result.get('status') == 'error':
ui.notify(result.get('message', 'Prompt reset failed'), color='negative')
return
_clear_prompt_draft(prompt_key)
ui.notify('LLM prompt setting reset to environment default', color='positive')
_refresh_system_sections()
def init_db_action() -> None:
result = init_db()
ui.notify(result.get('message', 'Database initialized'), color='positive' if result.get('status') == 'success' else 'negative')
@@ -868,13 +1253,18 @@ def create_dashboard():
if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and settings.gitea_url and settings.gitea_token:
            gitea_api = GiteaAPI(token=settings.gitea_token, base_url=settings.gitea_url, owner=settings.gitea_owner, repo=settings.gitea_repo or '')
remote_delete = gitea_api.delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
if remote_delete.get('error'):
manager.log_system_event(
component='gitea',
level='WARNING',
message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
)
result = manager.delete_project(project_id)
message = result.get('message', 'Project deleted')
if remote_delete and not remote_delete.get('error'):
message = f"{message}; remote repository deleted"
elif remote_delete and remote_delete.get('error'):
message = f"{message}; remote repository delete failed: {remote_delete.get('error')}"
ui.notify(message, color='positive' if result.get('status') == 'success' else 'negative')
_refresh_all_dashboard_sections()
@@ -889,6 +1279,14 @@ def create_dashboard():
branch_scope_filter = _selected_branch_scope()
commit_lookup_query = _selected_commit_lookup()
discovered_repositories = _get_discovered_repositories()
prompt_settings = settings.editable_llm_prompts
db = get_db_sync()
if db is not None:
with closing(db):
try:
prompt_settings = DatabaseManager(db).get_llm_prompt_settings()
except Exception:
prompt_settings = settings.editable_llm_prompts
if snapshot.get('error'):
return {
'error': snapshot['error'],
@@ -899,6 +1297,7 @@ def create_dashboard():
'branch_scope_filter': branch_scope_filter,
'commit_lookup_query': commit_lookup_query,
'discovered_repositories': discovered_repositories,
'prompt_settings': prompt_settings,
}
projects = snapshot['projects']
all_llm_traces = [trace for project_bundle in projects for trace in project_bundle.get('llm_traces', [])]
@@ -917,6 +1316,7 @@ def create_dashboard():
'commit_lookup_query': commit_lookup_query,
'commit_context': _load_commit_context(commit_lookup_query, branch_scope_filter) if commit_lookup_query else None,
'discovered_repositories': discovered_repositories,
'prompt_settings': prompt_settings,
'llm_stage_options': [''] + sorted({trace.get('stage') for trace in all_llm_traces if trace.get('stage')}),
'llm_model_options': [''] + sorted({trace.get('model') for trace in all_llm_traces if trace.get('model')}),
'project_repository_map': {
@@ -1029,7 +1429,12 @@ def create_dashboard():
ui.label('No project data available yet.').classes('factory-muted')
for project_bundle in projects:
project = project_bundle['project']
with ui.expansion(f"{project['project_name']} · {project['status']}", icon='folder').classes('factory-panel w-full q-mb-md'):
with _sticky_expansion(
f"projects.{project['project_id']}",
f"{project['project_name']} · {project['status']}",
icon='folder',
classes='factory-panel w-full q-mb-md',
):
with ui.row().classes('items-center gap-2 q-pa-md'):
ui.button(
'Archive',
@@ -1074,7 +1479,12 @@ def create_dashboard():
ui.label('No archived projects yet.').classes('factory-muted')
for project_bundle in archived_projects:
project = project_bundle['project']
with ui.expansion(f"{project['project_name']} · archived", icon='archive').classes('factory-panel w-full q-mb-md'):
with _sticky_expansion(
f"archived.{project['project_id']}",
f"{project['project_name']} · archived",
icon='archive',
classes='factory-panel w-full q-mb-md',
):
with ui.row().classes('items-center gap-2 q-pa-md'):
ui.button(
'Restore',
@@ -1281,7 +1691,12 @@ def create_dashboard():
if projects:
for project_bundle in projects:
project = project_bundle['project']
with ui.expansion(f"{project['project_name']} · {project['project_id']}", icon='schedule').classes('q-mt-md w-full'):
with _sticky_expansion(
f"timeline.{project['project_id']}",
f"{project['project_name']} · {project['project_id']}",
icon='schedule',
classes='q-mt-md w-full',
):
_render_timeline(_filter_timeline_events(project_bundle.get('timeline', []), branch_scope_filter))
else:
ui.label('No project timelines recorded yet.').classes('factory-muted')
@@ -1295,6 +1710,7 @@ def create_dashboard():
system_logs = view_model['system_logs']
llm_runtime = view_model['llm_runtime']
discovered_repositories = view_model['discovered_repositories']
prompt_settings = view_model.get('prompt_settings', [])
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('System Logs').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
@@ -1345,6 +1761,31 @@ def create_dashboard():
for label, text in system_prompts.items():
ui.label(label.replace('_', ' ').title()).classes('factory-muted q-mt-sm')
ui.label(text or 'Not configured').classes('factory-code')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Editable LLM Prompts').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('These guardrails and system prompts are persisted in the database and override environment defaults until reset.').classes('factory-muted')
for prompt in prompt_settings:
with ui.card().classes('q-pa-sm q-mt-md'):
with ui.row().classes('items-center justify-between w-full'):
with ui.column().classes('gap-1'):
ui.label(prompt['label']).style('font-weight: 700; color: #2f241d;')
ui.label(prompt.get('description') or '').classes('factory-muted')
with ui.row().classes('items-center gap-2'):
ui.label(prompt.get('category', 'prompt')).classes('factory-chip')
ui.label(prompt.get('source', 'environment')).classes('factory-chip')
draft_value = _prompt_draft_value(prompt['key'], prompt.get('value') or '')
ui.textarea(
label=prompt['key'],
value=draft_value,
on_change=lambda event, prompt_key=prompt['key']: _store_prompt_draft(prompt_key, event.value or ''),
).props('autogrow outlined').classes('w-full q-mt-sm')
ui.label('Environment default').classes('factory-muted q-mt-sm')
ui.label(prompt.get('default_value') or 'Not configured').classes('factory-code')
if prompt.get('updated_at'):
ui.label(f"Last updated: {prompt['updated_at']} by {prompt.get('updated_by') or 'unknown'}").classes('factory-muted q-mt-sm')
with ui.row().classes('items-center gap-2 q-mt-md'):
ui.button('Save Override', on_click=lambda _=None, prompt_key=prompt['key']: save_llm_prompt_action(prompt_key)).props('unelevated color=dark')
ui.button('Reset To Default', on_click=lambda _=None, prompt_key=prompt['key']: reset_llm_prompt_action(prompt_key)).props('outline color=warning')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Repository Onboarding').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('Discover repositories in the Gitea organization, onboard manually created repos, and import their recent commits into the dashboard.').classes('factory-muted')
@@ -1377,15 +1818,19 @@ def create_dashboard():
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Important Endpoints').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
endpoints = [
            '/health', '/llm/runtime', '/generate', '/generate/text', '/queue', '/queue/process', '/projects', '/audit/projects', '/audit/prompts', '/audit/changes', '/audit/issues',
'/audit/commit-context', '/audit/timeline', '/audit/llm-traces', '/audit/correlations', '/projects/{project_id}/sync-repository',
            '/gitea/repos', '/gitea/repos/onboard', '/gitea/health', '/home-assistant/health', '/n8n/health', '/n8n/setup',
]
for endpoint in endpoints:
ui.label(endpoint).classes('factory-code q-mt-sm')
@ui.refreshable
def render_health_panel() -> None:
view_model = _view_model()
prompt_queue = (view_model.get('snapshot') or {}).get('prompt_queue') or {}
queue_items = prompt_queue.get('items') or []
queue_summary = prompt_queue.get('summary') or {}
with ui.card().classes('factory-panel q-pa-lg q-mb-md'):
ui.label('Health and Diagnostics').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('Use this page to verify runtime configuration, n8n API connectivity, and likely causes of provisioning failures.').classes('factory-muted')
@@ -1398,6 +1843,37 @@ def create_dashboard():
ui.label(settings.telegram_chat_id or 'Not configured').style('font-weight: 600; color: #3a281a;')
with ui.row().classes('items-center gap-2 q-mt-md'):
ui.button('Send Prompt Guide', on_click=send_telegram_prompt_guide_action).props('unelevated color=secondary')
with ui.card().classes('factory-panel q-pa-lg q-mb-md'):
ui.label('Prompt Queue Controls').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('Process queued Telegram prompts manually, or requeue failed items for another pass.').classes('factory-muted')
with ui.row().classes('items-center gap-2 q-mt-md'):
ui.button('Process Next Batch', on_click=lambda: process_prompt_queue_action(force=False)).props('outline color=secondary')
ui.button('Force Process Next Batch', on_click=lambda: process_prompt_queue_action(force=True)).props('unelevated color=warning')
with ui.row().classes('items-center gap-2 q-mt-md'):
ui.label(f"Queued: {queue_summary.get('queued', 0)}").classes('factory-chip')
ui.label(f"Processing: {queue_summary.get('processing', 0)}").classes('factory-chip')
ui.label(f"Failed: {queue_summary.get('failed', 0)}").classes('factory-chip')
ui.label(f"Completed: {queue_summary.get('completed', 0)}").classes('factory-chip')
if queue_items:
for item in queue_items:
with ui.card().classes('q-pa-sm q-mt-md'):
with ui.row().classes('items-start justify-between w-full'):
with ui.column().classes('gap-1'):
ui.label((item.get('prompt_text') or 'Prompt').strip()[:220]).classes('factory-code')
ui.label(item.get('queued_at') or item.get('processed_at') or item.get('failed_at') or 'Timestamp unavailable').classes('factory-muted')
with ui.column().classes('items-end gap-1'):
ui.label(item.get('status') or 'unknown').classes('factory-chip')
if item.get('chat_id'):
ui.label(str(item['chat_id'])).classes('factory-chip')
if item.get('error'):
ui.label(item['error']).classes('factory-muted q-mt-sm')
with ui.row().classes('items-center gap-2 q-mt-md'):
if item.get('status') == 'failed':
ui.button('Retry', on_click=lambda _=None, queue_item_id=item['id']: retry_prompt_queue_item_action(queue_item_id)).props('outline color=warning')
if item.get('status') in {'queued', 'failed'}:
ui.button('Force Process', on_click=lambda: process_prompt_queue_action(force=True, limit=1)).props('outline color=dark')
else:
ui.label('No queued prompts recorded yet.').classes('factory-muted q-mt-md')
_render_health_panels()
panel_refreshers: dict[str, callable] = {}
@@ -1406,7 +1882,8 @@ def create_dashboard():
_update_dashboard_state()
panel_refreshers['metrics']()
active_tab = _selected_tab_name()
        # Avoid rebuilding the more interactive tabs on the timer; manual refresh keeps them current.
        if active_tab in {'overview', 'health'} and active_tab in panel_refreshers:
panel_refreshers[active_tab]()
def _refresh_all_dashboard_sections() -> None:
@@ -1429,6 +1906,7 @@ def create_dashboard():
panel_refreshers['system']()
def _refresh_health_sections() -> None:
_update_dashboard_state()
panel_refreshers['health']()
_update_dashboard_state()

View File

@@ -13,6 +13,7 @@ The NiceGUI frontend provides:
from __future__ import annotations
import asyncio
from contextlib import asynccontextmanager
import json
import re
@@ -29,6 +30,7 @@ try:
from . import database as database_module
from .agents.change_summary import ChangeSummaryGenerator
from .agents.database_manager import DatabaseManager
from .agents.home_assistant import HomeAssistantAgent
from .agents.request_interpreter import RequestInterpreter
from .agents.llm_service import LLMServiceClient
from .agents.orchestrator import AgentOrchestrator
@@ -41,6 +43,7 @@ except ImportError:
import database as database_module
from agents.change_summary import ChangeSummaryGenerator
from agents.database_manager import DatabaseManager
from agents.home_assistant import HomeAssistantAgent
from agents.request_interpreter import RequestInterpreter
from agents.llm_service import LLMServiceClient
from agents.orchestrator import AgentOrchestrator
@@ -59,7 +62,18 @@ async def lifespan(_app: FastAPI):
print(
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
)
queue_worker = None
if database_module.settings.prompt_queue_enabled and database_module.settings.prompt_queue_auto_process:
queue_worker = asyncio.create_task(_prompt_queue_worker())
try:
yield
finally:
if queue_worker is not None:
queue_worker.cancel()
try:
await queue_worker
except asyncio.CancelledError:
pass
app = FastAPI(lifespan=lifespan)
@@ -94,6 +108,20 @@ class FreeformSoftwareRequest(BaseModel):
source: str = 'telegram'
chat_id: str | None = None
chat_type: str | None = None
process_now: bool = False
class PromptQueueProcessRequest(BaseModel):
"""Request body for manual queue processing."""
force: bool = False
limit: int = Field(default=1, ge=1, le=25)
class LLMPromptSettingUpdateRequest(BaseModel):
"""Request body for persisting one editable LLM prompt override."""
value: str = Field(default='')
class GiteaRepositoryOnboardRequest(BaseModel):
@@ -397,6 +425,275 @@ def _create_gitea_api():
)
def _create_home_assistant_agent() -> HomeAssistantAgent:
"""Create a configured Home Assistant client."""
return HomeAssistantAgent(
base_url=database_module.settings.home_assistant_url,
token=database_module.settings.home_assistant_token,
)
def _get_gitea_health() -> dict:
"""Return current Gitea connectivity diagnostics."""
if not database_module.settings.gitea_url:
return {
'status': 'error',
'message': 'Gitea URL is not configured.',
'base_url': '',
'configured': False,
'checks': [],
}
if not database_module.settings.gitea_token:
return {
'status': 'error',
'message': 'Gitea token is not configured.',
'base_url': database_module.settings.gitea_url,
'configured': False,
'checks': [],
}
response = _create_gitea_api().get_current_user_sync()
if response.get('error'):
return {
'status': 'error',
'message': response.get('error'),
'base_url': database_module.settings.gitea_url,
'configured': True,
'checks': [
{
'name': 'token_auth',
'ok': False,
'message': response.get('error'),
'url': f"{database_module.settings.gitea_url}/api/v1/user",
'status_code': response.get('status_code'),
}
],
}
username = response.get('login') or response.get('username') or response.get('full_name') or 'unknown'
return {
'status': 'success',
'message': f'Authenticated as {username}.',
'base_url': database_module.settings.gitea_url,
'configured': True,
'checks': [
{
'name': 'token_auth',
'ok': True,
'message': f'Authenticated as {username}',
'url': f"{database_module.settings.gitea_url}/api/v1/user",
}
],
'user': username,
}
def _get_home_assistant_health() -> dict:
"""Return current Home Assistant connectivity diagnostics."""
return _create_home_assistant_agent().health_check_sync()
async def _get_queue_gate_status(force: bool = False) -> dict:
"""Return whether queued prompts may be processed now."""
if not database_module.settings.prompt_queue_enabled:
return {
'status': 'disabled',
'allowed': True,
'forced': False,
'reason': 'Prompt queue is disabled',
}
if not database_module.settings.home_assistant_url:
if force or database_module.settings.prompt_queue_force_process:
return {
'status': 'success',
'allowed': True,
'forced': True,
'reason': 'Queue override is enabled',
}
return {
'status': 'blocked',
'allowed': False,
'forced': False,
'reason': 'Home Assistant URL is not configured',
}
return await _create_home_assistant_agent().queue_gate_status(force=force)
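def _example_gate_allows(battery_percent: float, surplus_watts: float,
                         battery_full_threshold: float = 95.0,
                         surplus_threshold_watts: float = 100.0) -> bool:
    # Illustrative aside, not part of this change: the threshold comparison the
    # energy gate implies. This is an assumption about HomeAssistantAgent's
    # internals; the defaults mirror the documented sample env values.
    # Prompts run only when the battery is effectively full AND PV surplus is available.
    return battery_percent >= battery_full_threshold and surplus_watts >= surplus_threshold_watts

assert _example_gate_allows(97.0, 250.0)
assert not _example_gate_allows(80.0, 250.0)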
async def _interpret_freeform_request(request: FreeformSoftwareRequest, manager: DatabaseManager) -> tuple[SoftwareRequest, dict, dict]:
"""Interpret a free-form request and return the structured request plus routing trace."""
interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
request.prompt_text,
context=interpreter_context,
)
routing = interpretation_trace.get('routing') or {}
selected_history = manager.get_project_by_id(routing.get('project_id'), include_archived=False) if routing.get('project_id') else None
if selected_history is not None and routing.get('intent') != 'new_project':
interpreted['name'] = selected_history.project_name
interpreted['description'] = selected_history.description or interpreted['description']
return SoftwareRequest(**interpreted), routing, interpretation_trace
async def _run_freeform_generation(
request: FreeformSoftwareRequest,
db: Session,
*,
queue_item_id: int | None = None,
) -> dict:
"""Shared free-form request flow used by direct calls and queued processing."""
manager = DatabaseManager(db)
try:
structured_request, routing, interpretation_trace = await _interpret_freeform_request(request, manager)
response = await _run_generation(
structured_request,
db,
prompt_text=request.prompt_text,
prompt_actor=request.source,
prompt_source_context={
'chat_id': request.chat_id,
'chat_type': request.chat_type,
'queue_item_id': queue_item_id,
},
prompt_routing=routing,
preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
)
project_data = response.get('data', {})
if project_data.get('history_id') is not None:
manager = DatabaseManager(db)
prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
prompt_id = prompts[0]['id'] if prompts else None
manager.log_llm_trace(
project_id=project_data.get('project_id'),
history_id=project_data.get('history_id'),
prompt_id=prompt_id,
stage=interpretation_trace['stage'],
provider=interpretation_trace['provider'],
model=interpretation_trace['model'],
system_prompt=interpretation_trace['system_prompt'],
user_prompt=interpretation_trace['user_prompt'],
assistant_response=interpretation_trace['assistant_response'],
raw_response=interpretation_trace.get('raw_response'),
fallback_used=interpretation_trace.get('fallback_used', False),
)
naming_trace = interpretation_trace.get('project_naming')
if naming_trace:
manager.log_llm_trace(
project_id=project_data.get('project_id'),
history_id=project_data.get('history_id'),
prompt_id=prompt_id,
stage=naming_trace['stage'],
provider=naming_trace['provider'],
model=naming_trace['model'],
system_prompt=naming_trace['system_prompt'],
user_prompt=naming_trace['user_prompt'],
assistant_response=naming_trace['assistant_response'],
raw_response=naming_trace.get('raw_response'),
fallback_used=naming_trace.get('fallback_used', False),
)
response['interpreted_request'] = structured_request.model_dump()
response['routing'] = routing
response['llm_trace'] = interpretation_trace
response['source'] = {
'type': request.source,
'chat_id': request.chat_id,
'chat_type': request.chat_type,
}
if queue_item_id is not None:
DatabaseManager(db).complete_queued_prompt(
queue_item_id,
{
'project_id': project_data.get('project_id'),
'history_id': project_data.get('history_id'),
'status': response.get('status'),
},
)
return response
except Exception as exc:
if queue_item_id is not None:
DatabaseManager(db).fail_queued_prompt(queue_item_id, str(exc))
raise
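# Illustrative usage, not part of this change: both the /generate/text endpoint
# and the queue worker funnel into the helper above. The prompt text and chat id
# are invented examples.
#
#     request = FreeformSoftwareRequest(prompt_text='Build a CLI todo app',
#                                       source='telegram', chat_id='12345')
#     response = await _run_freeform_generation(request, db)  # inside an async endpoint
#     print(response.get('status'), response.get('routing'))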
async def _process_prompt_queue_batch(limit: int = 1, force: bool = False) -> dict:
"""Process up to `limit` queued prompts if the energy gate allows it."""
queue_gate = await _get_queue_gate_status(force=force)
if not queue_gate.get('allowed'):
db = database_module.get_db_sync()
try:
summary = DatabaseManager(db).get_prompt_queue_summary()
finally:
db.close()
return {
'status': queue_gate.get('status', 'blocked'),
'processed_count': 0,
'queue_gate': queue_gate,
'queue_summary': summary,
'processed': [],
}
processed = []
for _ in range(max(limit, 1)):
claim_db = database_module.get_db_sync()
try:
claimed = DatabaseManager(claim_db).claim_next_queued_prompt()
finally:
claim_db.close()
if claimed is None:
break
work_db = database_module.get_db_sync()
try:
request = FreeformSoftwareRequest(
prompt_text=claimed['prompt_text'],
source=claimed['source'] or 'telegram',
chat_id=claimed.get('chat_id'),
chat_type=claimed.get('chat_type'),
process_now=True,
)
response = await _run_freeform_generation(request, work_db, queue_item_id=claimed['id'])
processed.append(
{
'queue_item_id': claimed['id'],
'project_id': (response.get('data') or {}).get('project_id'),
'status': response.get('status'),
}
)
except Exception as exc:
DatabaseManager(work_db).fail_queued_prompt(claimed['id'], str(exc))
processed.append({'queue_item_id': claimed['id'], 'status': 'failed', 'error': str(exc)})
finally:
work_db.close()
summary_db = database_module.get_db_sync()
try:
summary = DatabaseManager(summary_db).get_prompt_queue_summary()
finally:
summary_db.close()
return {
'status': 'success',
'processed_count': len(processed),
'processed': processed,
'queue_gate': queue_gate,
'queue_summary': summary,
}
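# Illustrative usage, not part of this change: draining the queue once from a
# maintenance script, mirroring a single iteration of the worker below.
#
#     import asyncio
#     result = asyncio.run(_process_prompt_queue_batch(limit=5, force=False))
#     print(result['processed_count'], (result.get('queue_gate') or {}).get('reason'))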
async def _prompt_queue_worker() -> None:
"""Background worker that drains the prompt queue when the energy gate opens."""
while True:
try:
await _process_prompt_queue_batch(
limit=database_module.settings.prompt_queue_max_batch_size,
force=database_module.settings.prompt_queue_force_process,
)
except Exception as exc:
db = database_module.get_db_sync()
try:
DatabaseManager(db).log_system_event('prompt-queue', 'ERROR', f'Queue worker error: {exc}')
finally:
db.close()
await asyncio.sleep(database_module.settings.prompt_queue_poll_interval_seconds)
def _resolve_n8n_api_url(explicit_url: str | None = None) -> str:
"""Resolve the effective n8n API URL from explicit input or settings."""
if explicit_url and explicit_url.strip():
@@ -420,8 +717,12 @@ def read_api_info():
'/api',
'/health',
'/llm/runtime',
'/llm/prompts',
'/llm/prompts/{prompt_key}',
'/generate',
'/generate/text',
'/queue',
'/queue/process',
'/projects',
'/status/{project_id}',
'/audit/projects',
@@ -442,7 +743,9 @@ def read_api_info():
'/projects/{project_id}/prompts/{prompt_id}/undo',
'/projects/{project_id}/sync-repository',
'/gitea/repos',
'/gitea/health',
'/gitea/repos/onboard',
'/home-assistant/health',
'/n8n/health',
'/n8n/setup',
],
@@ -453,11 +756,30 @@ def read_api_info():
def health_check():
"""Health check endpoint."""
runtime = database_module.get_database_runtime_summary()
queue_summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': 0, 'next_item': None}
db = database_module.get_db_sync()
try:
try:
queue_summary = DatabaseManager(db).get_prompt_queue_summary()
except Exception:
pass
finally:
db.close()
return {
'status': 'healthy',
'database': runtime['backend'],
'database_target': runtime['target'],
'database_name': runtime['database'],
'integrations': {
'gitea': _get_gitea_health(),
'home_assistant': _get_home_assistant_health(),
},
'prompt_queue': {
'enabled': database_module.settings.prompt_queue_enabled,
'auto_process': database_module.settings.prompt_queue_auto_process,
'force_process': database_module.settings.prompt_queue_force_process,
'summary': queue_summary,
},
}
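# Illustrative response shape for GET /health, not part of this change; the
# field names follow the dict built above, but every value is an invented example.
#
#     {"status": "healthy",
#      "database": "postgresql",
#      "database_target": "postgresql://factory@db:5432/factory",
#      "database_name": "factory",
#      "integrations": {"gitea": {...}, "home_assistant": {...}},
#      "prompt_queue": {"enabled": true, "auto_process": true, "force_process": false,
#                       "summary": {"queued": 2, "processing": 0, "completed": 14,
#                                   "failed": 1, "total": 17, "next_item": null}}}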
@@ -467,6 +789,32 @@ def get_llm_runtime():
return LLMServiceClient().get_runtime_configuration()
@app.get('/llm/prompts')
def get_llm_prompt_settings(db: DbSession):
"""Return editable LLM prompt settings with DB overrides merged over environment defaults."""
return {'prompts': DatabaseManager(db).get_llm_prompt_settings()}
@app.put('/llm/prompts/{prompt_key}')
def update_llm_prompt_setting(prompt_key: str, request: LLMPromptSettingUpdateRequest, db: DbSession):
"""Persist one editable LLM prompt override into the database."""
database_module.init_db()
result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, request.value, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Prompt save failed'))
return result
@app.delete('/llm/prompts/{prompt_key}')
def reset_llm_prompt_setting(prompt_key: str, db: DbSession):
"""Reset one editable LLM prompt override back to the environment/default value."""
database_module.init_db()
result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Prompt reset failed'))
return result
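# Illustrative client flow, not part of this change. It assumes the service
# listens on http://localhost:8000, and 'interpreter_system_prompt' is a
# hypothetical prompt key; use a key returned by GET /llm/prompts.
#
#     import json, urllib.request
#
#     def _json_call(url, method, payload=None):
#         data = json.dumps(payload).encode('utf-8') if payload is not None else None
#         req = urllib.request.Request(url, data=data,
#                                      headers={'Content-Type': 'application/json'},
#                                      method=method)
#         with urllib.request.urlopen(req) as resp:
#             return json.loads(resp.read().decode('utf-8') or '{}')
#
#     base = 'http://localhost:8000'
#     _json_call(f'{base}/llm/prompts', 'GET')                               # list keys and sources
#     _json_call(f'{base}/llm/prompts/interpreter_system_prompt', 'PUT',
#                {'value': 'Never delete user data.'})                       # persist override
#     _json_call(f'{base}/llm/prompts/interpreter_system_prompt', 'DELETE')  # back to env default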
@app.post('/generate')
async def generate_software(request: SoftwareRequest, db: DbSession):
"""Create and record a software-generation request."""
@@ -492,74 +840,64 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSe
},
}
    manager = DatabaseManager(db)
    if request.source == 'telegram' and database_module.settings.prompt_queue_enabled and not request.process_now:
        queue_item = manager.enqueue_prompt(
            prompt_text=request.prompt_text,
            source=request.source,
            chat_id=request.chat_id,
            chat_type=request.chat_type,
            source_context={'chat_id': request.chat_id, 'chat_type': request.chat_type},
        )
        return {
            'status': 'queued',
            'message': 'Prompt queued for energy-aware processing.',
            'queue_item': queue_item,
            'queue_summary': manager.get_prompt_queue_summary(),
            'queue_gate': await _get_queue_gate_status(force=False),
            'source': {
                'type': request.source,
                'chat_id': request.chat_id,
                'chat_type': request.chat_type,
            },
        }
    return await _run_freeform_generation(request, db)
@app.get('/queue')
def get_prompt_queue(db: DbSession):
"""Return queued prompt items and prompt queue configuration."""
manager = DatabaseManager(db)
return {
'queue': manager.get_prompt_queue(),
'summary': manager.get_prompt_queue_summary(),
'config': {
'enabled': database_module.settings.prompt_queue_enabled,
'auto_process': database_module.settings.prompt_queue_auto_process,
'force_process': database_module.settings.prompt_queue_force_process,
'poll_interval_seconds': database_module.settings.prompt_queue_poll_interval_seconds,
'max_batch_size': database_module.settings.prompt_queue_max_batch_size,
},
}
@app.post('/queue/process')
async def process_prompt_queue(request: PromptQueueProcessRequest):
"""Manually process queued prompts, optionally bypassing the HA gate."""
return await _process_prompt_queue_batch(limit=request.limit, force=request.force)
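# Illustrative client call, not part of this change. It assumes the service
# listens on http://localhost:8000; force=True bypasses the Home Assistant gate,
# and limit must be within 1..25 per PromptQueueProcessRequest above.
#
#     import json, urllib.request
#
#     req = urllib.request.Request(
#         'http://localhost:8000/queue/process',
#         data=json.dumps({'force': True, 'limit': 5}).encode('utf-8'),
#         headers={'Content-Type': 'application/json'},
#         method='POST',
#     )
#     with urllib.request.urlopen(req) as resp:
#         print(json.loads(resp.read().decode('utf-8')))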
@app.get('/gitea/health')
def get_gitea_health():
"""Return Gitea integration connectivity diagnostics."""
return _get_gitea_health()
@app.get('/home-assistant/health')
def get_home_assistant_health():
"""Return Home Assistant integration connectivity diagnostics."""
return _get_home_assistant_health()
@app.get('/projects')
@@ -743,13 +1081,18 @@ def delete_project(project_id: str, db: DbSession):
remote_delete = None
if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and database_module.settings.gitea_url and database_module.settings.gitea_token:
remote_delete = _create_gitea_api().delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
if remote_delete.get('error'):
manager.log_system_event(
component='gitea',
level='WARNING',
message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
)
result = manager.delete_project(project_id)
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Project deletion failed'))
result['remote_repository_deleted'] = bool(remote_delete and not remote_delete.get('error'))
result['remote_repository_delete_error'] = remote_delete.get('error') if remote_delete else None
result['remote_repository'] = repository if repository else None
return result