4 Commits
0.9.2 ... 0.9.4

Author SHA1 Message Date
b0c95323fd release: version 0.9.4 🚀
All checks were successful
Upload Python Package / Create Release (push) Successful in 24s
Upload Python Package / deploy (push) Successful in 56s
2026-04-11 13:06:54 +02:00
d60e753acf fix: add commit retry, refs NOISSUE 2026-04-11 13:06:48 +02:00
94c38359c7 release: version 0.9.3 🚀
All checks were successful
Upload Python Package / Create Release (push) Successful in 29s
Upload Python Package / deploy (push) Successful in 43s
2026-04-11 12:45:59 +02:00
2943fc79ab fix: better home assistant integration, refs NOISSUE 2026-04-11 12:45:56 +02:00
11 changed files with 1054 additions and 76 deletions


@@ -5,11 +5,33 @@ Changelog
(unreleased)
------------

Fix
~~~
- Add commit retry, refs NOISSUE. [Simon Diesenreiter]

0.9.3 (2026-04-11)
------------------

Fix
~~~
- Better home assistant integration, refs NOISSUE. [Simon Diesenreiter]

Other
~~~~~

0.9.2 (2026-04-11)
------------------

Fix
~~~
- UI improvements and prompt hardening, refs NOISSUE. [Simon Diesenreiter]

Other
~~~~~

0.9.1 (2026-04-11)
------------------


@@ -71,18 +71,11 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: queue Telegram prompts until Home Assistant reports that battery/surplus targets are met.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity IDs, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
```
### Build and Run
@@ -107,7 +100,7 @@ docker-compose up -d
The backend now interprets free-form Telegram text with Ollama before generation.
If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat.
If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are stored in a durable queue and processed only when the Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
If queueing is enabled from the dashboard System tab, Telegram prompts are stored in a durable queue and processed only when the configured Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
2. **Monitor progress via Web UI:**
@@ -121,7 +114,11 @@ If you deploy the container with PostgreSQL environment variables set, the servi
The health tab now shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics so misconfigured integrations are visible without checking container logs.
The dashboard Health tab also exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
The dashboard Health tab exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
The dashboard System tab now also stores Home Assistant entity IDs, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
Projects that show `uncommitted`, `local_only`, or `pushed_no_pr` delivery warnings in the dashboard can now be retried in place from the UI before resorting to purging orphan audit rows.
Guardrail and system prompts are no longer environment-only: the factory persists DB-backed overrides for the editable LLM prompt set, exposes them at `/llm/prompts`, and lets operators edit them from the dashboard System tab. Environment values still act as defaults and as the reset target.
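The override semantics described above reduce to one lookup rule: the newest DB row wins unless it is a reset marker, in which case the environment value applies again. A minimal sketch of that rule (hypothetical names, not the factory's actual API):

```python
def effective_setting(key: str, db_overrides: dict[str, dict], env_defaults: dict[str, str]) -> str:
    """Resolve a prompt or runtime setting: DB override first, environment as default and reset target."""
    entry = db_overrides.get(key)
    if entry and not entry.get("reset_to_default") and entry.get("value") is not None:
        return entry["value"]
    return env_defaults[key]
```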


@@ -43,18 +43,10 @@ TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Home Assistant energy gate for queued Telegram prompts
# Leave PROMPT_QUEUE_ENABLED=false to preserve immediate Telegram processing.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
# Only the base URL and token are environment-backed.
# Queue toggles, entity IDs, thresholds, and batch sizing can be edited in the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
# PostgreSQL
# In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.


@@ -75,18 +75,11 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: queue Telegram prompts until Home Assistant reports energy surplus.
PROMPT_QUEUE_ENABLED=false
PROMPT_QUEUE_AUTO_PROCESS=true
PROMPT_QUEUE_FORCE_PROCESS=false
PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
PROMPT_QUEUE_MAX_BATCH_SIZE=1
# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity IDs, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
```
### Build and Run
@@ -109,7 +102,9 @@ docker-compose up -d
Features: user authentication, task CRUD, notifications
```
If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
If queueing is enabled from the dashboard System tab, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
The dashboard System tab stores Home Assistant entity IDs, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
2. **Monitor progress via Web UI:**


@@ -1 +1 @@
0.9.2
0.9.4


@@ -4,7 +4,7 @@ from sqlalchemy.orm import Session
from sqlalchemy import text
try:
from ..config import EDITABLE_LLM_PROMPTS, settings
from ..config import EDITABLE_LLM_PROMPTS, EDITABLE_RUNTIME_SETTINGS, settings
from ..models import (
AuditTrail,
ProjectHistory,
@@ -18,7 +18,7 @@ try:
UserAction,
)
except ImportError:
from config import EDITABLE_LLM_PROMPTS, settings
from config import EDITABLE_LLM_PROMPTS, EDITABLE_RUNTIME_SETTINGS, settings
from models import (
AuditTrail,
ProjectHistory,
@@ -35,6 +35,7 @@ from datetime import datetime
import json
import re
import shutil
from pathlib import Path
class DatabaseMigrations:
@@ -87,6 +88,8 @@ class DatabaseManager:
PROMPT_QUEUE_ACTION = 'PROMPT_QUEUED'
PROMPT_CONFIG_PROJECT_ID = '__llm_prompt_config__'
PROMPT_CONFIG_ACTION = 'LLM_PROMPT_CONFIG'
RUNTIME_SETTINGS_PROJECT_ID = '__runtime_settings__'
RUNTIME_SETTINGS_ACTION = 'RUNTIME_SETTING'
def __init__(self, db: Session):
"""Initialize database manager."""
@@ -122,6 +125,56 @@ class DatabaseManager:
sanitized = sanitized.replace('--', '-')
return sanitized.strip('-') or 'external-project'
@staticmethod
def _partition_code_changes(raw_code_changes: list[dict], commits: list[dict]) -> tuple[list[dict], list[dict], list[dict]]:
"""Split code changes into remotely delivered, local-only, and orphaned rows."""
published_hashes = {
commit.get('commit_hash')
for commit in commits
if commit.get('commit_hash') and (
commit.get('remote_status') == 'pushed'
or commit.get('imported_from_remote')
or commit.get('commit_url')
)
}
published_prompt_ids = {
commit.get('prompt_id')
for commit in commits
if commit.get('prompt_id') is not None and (
commit.get('remote_status') == 'pushed'
or commit.get('imported_from_remote')
or commit.get('commit_url')
)
}
local_commit_hashes = {commit.get('commit_hash') for commit in commits if commit.get('commit_hash')}
local_prompt_ids = {commit.get('prompt_id') for commit in commits if commit.get('prompt_id') is not None}
visible_changes: list[dict] = []
local_only_changes: list[dict] = []
orphaned_changes: list[dict] = []
for change in raw_code_changes:
change_commit_hash = change.get('commit_hash')
prompt_id = change.get('prompt_id')
if (change_commit_hash and change_commit_hash in published_hashes) or (prompt_id is not None and prompt_id in published_prompt_ids):
visible_changes.append(change)
elif (change_commit_hash and change_commit_hash in local_commit_hashes) or (prompt_id is not None and prompt_id in local_prompt_ids):
local_only_changes.append(change)
else:
orphaned_changes.append(change)
return visible_changes, local_only_changes, orphaned_changes
@staticmethod
def _dedupe_preserve_order(values: list[str | None]) -> list[str]:
"""Return non-empty values in stable unique order."""
result: list[str] = []
seen: set[str] = set()
for value in values:
normalized = (value or '').strip()
if not normalized or normalized in seen:
continue
seen.add(normalized)
result.append(normalized)
return result
def get_project_by_repository(self, owner: str, repo_name: str, include_archived: bool = False) -> ProjectHistory | None:
"""Return the project currently associated with a repository."""
normalized_owner = (owner or '').strip().lower()
@@ -464,6 +517,26 @@ class DatabaseManager:
entries[key] = audit
return entries
def _latest_runtime_setting_entries(self) -> dict[str, AuditTrail]:
"""Return the most recent persisted audit row for each editable runtime setting key."""
entries: dict[str, AuditTrail] = {}
try:
audits = (
self.db.query(AuditTrail)
.filter(AuditTrail.action == self.RUNTIME_SETTINGS_ACTION)
.order_by(AuditTrail.created_at.desc(), AuditTrail.id.desc())
.all()
)
except Exception:
return entries
for audit in audits:
metadata = self._normalize_metadata(audit.metadata_json)
key = str(metadata.get('key') or '').strip()
if not key or key in entries or key not in EDITABLE_RUNTIME_SETTINGS:
continue
entries[key] = audit
return entries
def get_llm_prompt_override(self, key: str) -> str | None:
"""Return the persisted override for one editable LLM prompt key."""
entry = self._latest_llm_prompt_config_entries().get(key)
@@ -477,6 +550,16 @@ class DatabaseManager:
return None
return str(value)
def get_runtime_setting_override(self, key: str):
"""Return the persisted override for one editable runtime setting key."""
entry = self._latest_runtime_setting_entries().get(key)
if entry is None:
return None
metadata = self._normalize_metadata(entry.metadata_json)
if metadata.get('reset_to_default'):
return None
return metadata.get('value')
def get_llm_prompt_settings(self) -> list[dict]:
"""Return editable LLM prompt definitions merged with persisted DB overrides."""
latest = self._latest_llm_prompt_config_entries()
@@ -502,6 +585,32 @@ class DatabaseManager:
)
return items
def get_runtime_settings(self) -> list[dict]:
"""Return editable runtime settings merged with persisted DB overrides."""
latest = self._latest_runtime_setting_entries()
items = []
for key, metadata in EDITABLE_RUNTIME_SETTINGS.items():
entry = latest.get(key)
entry_metadata = self._normalize_metadata(entry.metadata_json) if entry is not None else {}
default_value = getattr(settings, key)
persisted_value = None if entry_metadata.get('reset_to_default') else entry_metadata.get('value')
items.append(
{
'key': key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'value_type': metadata['value_type'],
'default_value': default_value,
'value': persisted_value if persisted_value is not None else default_value,
'source': 'database' if persisted_value is not None else 'environment',
'updated_at': entry.created_at.isoformat() if entry and entry.created_at else None,
'updated_by': entry.actor if entry is not None else None,
'reset_to_default': bool(entry_metadata.get('reset_to_default')) if entry is not None else False,
}
)
return items
def save_llm_prompt_setting(self, key: str, value: str, actor: str = 'dashboard') -> dict:
"""Persist one editable LLM prompt override into the audit trail."""
if key not in EDITABLE_LLM_PROMPTS:
@@ -524,6 +633,28 @@ class DatabaseManager:
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
def save_runtime_setting(self, key: str, value, actor: str = 'dashboard') -> dict:
"""Persist one editable runtime setting override into the audit trail."""
if key not in EDITABLE_RUNTIME_SETTINGS:
return {'status': 'error', 'message': f'Unsupported runtime setting key: {key}'}
audit = AuditTrail(
project_id=self.RUNTIME_SETTINGS_PROJECT_ID,
action=self.RUNTIME_SETTINGS_ACTION,
actor=actor,
action_type='UPDATE',
details=f'Updated runtime setting {key}',
message=f'Updated runtime setting {key}',
metadata_json={
'key': key,
'value': value,
'reset_to_default': False,
},
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_runtime_settings() if item['key'] == key)}
def reset_llm_prompt_setting(self, key: str, actor: str = 'dashboard') -> dict:
"""Reset one editable LLM prompt override back to its environment/default value."""
if key not in EDITABLE_LLM_PROMPTS:
@@ -546,6 +677,28 @@ class DatabaseManager:
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
def reset_runtime_setting(self, key: str, actor: str = 'dashboard') -> dict:
"""Reset one editable runtime setting override back to its environment/default value."""
if key not in EDITABLE_RUNTIME_SETTINGS:
return {'status': 'error', 'message': f'Unsupported runtime setting key: {key}'}
audit = AuditTrail(
project_id=self.RUNTIME_SETTINGS_PROJECT_ID,
action=self.RUNTIME_SETTINGS_ACTION,
actor=actor,
action_type='RESET',
details=f'Reset runtime setting {key} to default',
message=f'Reset runtime setting {key} to default',
metadata_json={
'key': key,
'value': None,
'reset_to_default': True,
},
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return {'status': 'success', 'setting': next(item for item in self.get_runtime_settings() if item['key'] == key)}
def attach_issue_to_prompt(self, prompt_id: int, related_issue: dict) -> AuditTrail | None:
"""Attach resolved issue context to a previously recorded prompt."""
prompt = self.db.query(AuditTrail).filter(AuditTrail.id == prompt_id, AuditTrail.action == 'PROMPT_RECEIVED').first()
@@ -1423,7 +1576,9 @@ class DatabaseManager:
def log_code_change(self, project_id: str, change_type: str, file_path: str,
actor: str, actor_type: str, details: str,
history_id: int | None = None, prompt_id: int | None = None,
diff_summary: str | None = None, diff_text: str | None = None) -> AuditTrail:
diff_summary: str | None = None, diff_text: str | None = None,
commit_hash: str | None = None, remote_status: str | None = None,
branch: str | None = None) -> AuditTrail:
"""Log a code change."""
audit = AuditTrail(
project_id=project_id,
@@ -1442,6 +1597,9 @@ class DatabaseManager:
"details": details,
"diff_summary": diff_summary,
"diff_text": diff_text,
"commit_hash": commit_hash,
"remote_status": remote_status,
"branch": branch,
}
)
self.db.add(audit)
@@ -2132,16 +2290,43 @@ class DatabaseManager:
).order_by(AuditTrail.created_at.desc()).all()
prompts = self.get_prompt_events(project_id=project_id)
code_changes = self.get_code_changes(project_id=project_id)
raw_code_changes = self.get_code_changes(project_id=project_id)
commits = self.get_commits(project_id=project_id)
pull_requests = self.get_pull_requests(project_id=project_id)
llm_traces = self.get_llm_traces(project_id=project_id)
correlations = self.get_prompt_change_correlations(project_id=project_id)
code_changes, local_only_code_changes, orphan_code_changes = self._partition_code_changes(raw_code_changes, commits)
repository = self._get_project_repository(history)
timeline = self.get_project_timeline(project_id=project_id)
repository_sync = self.get_repository_sync_status(project_id=project_id)
issues = self.get_repository_issues(project_id=project_id)
issue_work = self.get_issue_work_events(project_id=project_id)
published_commits = [
commit for commit in commits
if commit.get('remote_status') == 'pushed' or commit.get('imported_from_remote') or commit.get('commit_url')
]
has_pull_request = any(pr.get('pr_state') == 'open' and not pr.get('merged') for pr in pull_requests)
if orphan_code_changes:
delivery_status = 'uncommitted'
delivery_message = (
f"{len(orphan_code_changes)} generated file change(s) were recorded without a matching git commit. "
"These changes never reached a PR-backed delivery."
)
elif local_only_code_changes:
delivery_status = 'local_only'
delivery_message = (
f"{len(local_only_code_changes)} generated file change(s) were committed only in the local workspace. "
"No remote repo push was recorded for this prompt yet."
)
elif published_commits and repository and repository.get('mode') == 'project' and not has_pull_request:
delivery_status = 'pushed_no_pr'
delivery_message = 'Changes were pushed to the remote repository, but no pull request is currently tracked for review.'
elif published_commits:
delivery_status = 'delivered'
delivery_message = 'Generated changes were published to the tracked repository and are reviewable through the recorded pull request.'
else:
delivery_status = 'pending'
delivery_message = 'No git commit has been recorded for this project yet.'
return {
"project": {
@@ -2157,6 +2342,10 @@ class DatabaseManager:
"repository": repository,
"repository_sync": repository_sync,
"open_pull_requests": len([pr for pr in pull_requests if pr["pr_state"] == "open" and not pr["merged"]]),
"delivery_status": delivery_status,
"delivery_message": delivery_message,
"local_only_code_change_count": len(local_only_code_changes),
"orphan_code_change_count": len(orphan_code_changes),
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
"created_at": history.started_at.isoformat() if history.started_at else None
},
@@ -2195,6 +2384,8 @@ class DatabaseManager:
],
"prompts": prompts,
"code_changes": code_changes,
"local_only_code_changes": local_only_code_changes,
"orphan_code_changes": orphan_code_changes,
"commits": commits,
"pull_requests": pull_requests,
"llm_traces": llm_traces,
@@ -2249,6 +2440,9 @@ class DatabaseManager:
"history_id": self._normalize_metadata(change.metadata_json).get("history_id"),
"diff_summary": self._normalize_metadata(change.metadata_json).get("diff_summary"),
"diff_text": self._normalize_metadata(change.metadata_json).get("diff_text"),
"commit_hash": self._normalize_metadata(change.metadata_json).get("commit_hash"),
"remote_status": self._normalize_metadata(change.metadata_json).get("remote_status"),
"branch": self._normalize_metadata(change.metadata_json).get("branch"),
"timestamp": change.created_at.isoformat() if change.created_at else None,
}
for change in changes
@@ -2258,8 +2452,21 @@ class DatabaseManager:
"""Correlate prompts with the concrete code changes that followed them."""
correlations = self._build_correlations_from_links(project_id=project_id, limit=limit)
if correlations:
return correlations
return self._build_correlations_from_audit_fallback(project_id=project_id, limit=limit)
return [
correlation for correlation in correlations
if any(
commit.get('remote_status') == 'pushed' or commit.get('imported_from_remote') or commit.get('commit_url')
for commit in correlation.get('commits', [])
)
]
fallback = self._build_correlations_from_audit_fallback(project_id=project_id, limit=limit)
return [
correlation for correlation in fallback
if any(
commit.get('remote_status') == 'pushed' or commit.get('imported_from_remote') or commit.get('commit_url')
for commit in correlation.get('commits', [])
)
]
def get_dashboard_snapshot(self, limit: int = 8) -> dict:
"""Return DB-backed dashboard data for the UI."""
@@ -2282,7 +2489,10 @@ class DatabaseManager:
pass
active_projects = self.get_all_projects()
archived_projects = self.get_all_projects(archived_only=True)
projects = active_projects[:limit]
project_bundles = [self.get_project_audit_data(project.project_id) for project in active_projects[:limit]]
archived_project_bundles = [self.get_project_audit_data(project.project_id) for project in archived_projects[:limit]]
all_project_bundles = [self.get_project_audit_data(project.project_id) for project in active_projects]
all_project_bundles.extend(self.get_project_audit_data(project.project_id) for project in archived_projects)
system_logs = self.db.query(SystemLog).order_by(SystemLog.created_at.desc()).limit(limit).all()
return {
"summary": {
@@ -2294,13 +2504,14 @@ class DatabaseManager:
"prompt_events": self.db.query(AuditTrail).filter(AuditTrail.action == "PROMPT_RECEIVED").count(),
"queued_prompts": queue_summary.get('queued', 0),
"failed_queued_prompts": queue_summary.get('failed', 0),
"code_changes": self.db.query(AuditTrail).filter(AuditTrail.action == "CODE_CHANGE").count(),
"code_changes": sum(len(bundle.get('code_changes', [])) for bundle in all_project_bundles),
"orphan_code_changes": sum(len(bundle.get('orphan_code_changes', [])) for bundle in all_project_bundles),
"open_pull_requests": self.db.query(PullRequest).filter(PullRequest.pr_state == "open", PullRequest.merged.is_(False)).count(),
"tracked_issues": self.db.query(AuditTrail).filter(AuditTrail.action == "REPOSITORY_ISSUE").count(),
"issue_work_events": self.db.query(AuditTrail).filter(AuditTrail.action == "ISSUE_WORKED").count(),
},
"projects": [self.get_project_audit_data(project.project_id) for project in projects],
"archived_projects": [self.get_project_audit_data(project.project_id) for project in archived_projects[:limit]],
"projects": project_bundles,
"archived_projects": archived_project_bundles,
"system_logs": [
{
"id": log.id,
@@ -2319,6 +2530,384 @@ class DatabaseManager:
},
}
def _build_commit_url(self, owner: str, repo_name: str, commit_hash: str) -> str | None:
"""Build a browser commit URL from configured Gitea settings."""
if not settings.gitea_url or not owner or not repo_name or not commit_hash:
return None
return f"{str(settings.gitea_url).rstrip('/')}/{owner}/{repo_name}/commit/{commit_hash}"
def _update_project_audit_rows_for_delivery(
self,
project_id: str,
branch: str,
owner: str,
repo_name: str,
code_change_ids: list[int],
orphan_code_change_ids: list[int],
published_commit_hashes: list[str],
) -> None:
"""Mark matching commit and code-change rows as remotely published."""
commit_hashes = set(self._dedupe_preserve_order(published_commit_hashes))
for commit_row in self.db.query(AuditTrail).filter(
AuditTrail.project_id == project_id,
AuditTrail.action == 'GIT_COMMIT',
).all():
metadata = self._normalize_metadata(commit_row.metadata_json)
commit_hash = metadata.get('commit_hash')
if not commit_hash or commit_hash not in commit_hashes:
continue
metadata['branch'] = branch
metadata['remote_status'] = 'pushed'
metadata['commit_url'] = self._build_commit_url(owner, repo_name, commit_hash)
commit_row.metadata_json = metadata
retry_ids = set(code_change_ids)
orphan_ids = set(orphan_code_change_ids)
new_commit_hash = next(iter(commit_hashes), None)
for change_row in self.db.query(AuditTrail).filter(
AuditTrail.project_id == project_id,
AuditTrail.action == 'CODE_CHANGE',
).all():
if change_row.id not in retry_ids:
continue
metadata = self._normalize_metadata(change_row.metadata_json)
metadata['branch'] = branch
metadata['remote_status'] = 'pushed'
if change_row.id in orphan_ids and new_commit_hash:
metadata['commit_hash'] = new_commit_hash
change_row.metadata_json = metadata
self.db.commit()
def _find_or_create_delivery_pull_request(
self,
history: ProjectHistory,
gitea_api,
owner: str,
repo_name: str,
branch: str,
prompt_text: str | None,
) -> dict:
"""Return an open PR for the project branch, creating one if necessary."""
existing = self.get_open_pull_request(project_id=history.project_id)
if existing is not None:
return existing
remote_prs = gitea_api.list_pull_requests_sync(owner=owner, repo=repo_name, state='open')
if isinstance(remote_prs, list):
for item in remote_prs:
remote_head = ((item.get('head') or {}) if isinstance(item.get('head'), dict) else {})
if remote_head.get('ref') != branch:
continue
pr = self.save_pr_data(
history.id,
{
'pr_number': item.get('number') or item.get('id') or 0,
'title': item.get('title') or f"AI delivery for {history.project_name}",
'body': item.get('body') or '',
'state': item.get('state', 'open'),
'base': ((item.get('base') or {}) if isinstance(item.get('base'), dict) else {}).get('ref', 'main'),
'user': ((item.get('user') or {}) if isinstance(item.get('user'), dict) else {}).get('login', 'system'),
'pr_url': item.get('html_url') or gitea_api.build_pull_request_url(item.get('number') or item.get('id'), owner=owner, repo=repo_name),
'merged': bool(item.get('merged')),
'head': remote_head.get('ref'),
},
)
return {
'pr_number': pr.pr_number,
'title': pr.pr_title,
'body': pr.pr_body,
'pr_url': pr.pr_url,
'pr_state': pr.pr_state,
'merged': pr.merged,
}
title = f"AI delivery for {history.project_name}"
body = (
f"Automated software factory changes for {history.project_name}.\n\n"
f"Prompt: {prompt_text or history.description}\n\n"
f"Branch: {branch}"
)
created = gitea_api.create_pull_request_sync(
title=title,
body=body,
owner=owner,
repo=repo_name,
base='main',
head=branch,
)
if created.get('error'):
raise RuntimeError(f"Unable to create pull request: {created.get('error')}")
pr = self.save_pr_data(
history.id,
{
'pr_number': created.get('number') or created.get('id') or 0,
'title': created.get('title', title),
'body': created.get('body', body),
'state': created.get('state', 'open'),
'base': ((created.get('base') or {}) if isinstance(created.get('base'), dict) else {}).get('ref', 'main'),
'user': ((created.get('user') or {}) if isinstance(created.get('user'), dict) else {}).get('login', 'system'),
'pr_url': created.get('html_url') or gitea_api.build_pull_request_url(created.get('number') or created.get('id'), owner=owner, repo=repo_name),
'merged': bool(created.get('merged')),
'head': branch,
},
)
return {
'pr_number': pr.pr_number,
'title': pr.pr_title,
'body': pr.pr_body,
'pr_url': pr.pr_url,
'pr_state': pr.pr_state,
'merged': pr.merged,
}
def retry_project_delivery(self, project_id: str) -> dict:
"""Retry remote delivery for orphaned, local-only, or missing-PR project changes."""
history = self.get_project_by_id(project_id)
if history is None:
return {'status': 'error', 'message': 'Project not found'}
audit_data = self.get_project_audit_data(project_id)
project = audit_data.get('project') or {}
delivery_status = project.get('delivery_status')
if delivery_status not in {'uncommitted', 'local_only', 'pushed_no_pr'}:
return {'status': 'success', 'message': 'No failed delivery state was found for this project.', 'project_id': project_id}
snapshot_data = self._get_latest_ui_snapshot_data(history.id)
repository = self._get_project_repository(history) or {}
if repository.get('mode') != 'project':
return {'status': 'error', 'message': 'Only project-scoped repositories support delivery retry.', 'project_id': project_id}
owner = repository.get('owner') or settings.gitea_owner
repo_name = repository.get('name') or settings.gitea_repo
if not owner or not repo_name or not settings.gitea_url or not settings.gitea_token:
return {'status': 'error', 'message': 'Gitea repository settings are incomplete; cannot retry delivery.', 'project_id': project_id}
project_root = Path(snapshot_data.get('project_root') or (settings.projects_root / project_id)).expanduser().resolve()
if not project_root.exists():
return {'status': 'error', 'message': f'Project workspace does not exist at {project_root}', 'project_id': project_id}
try:
from .git_manager import GitManager
from .gitea import GiteaAPI
except ImportError:
from agents.git_manager import GitManager
from agents.gitea import GiteaAPI
git_manager = GitManager(project_id=project_id, project_dir=str(project_root))
if not git_manager.is_git_available():
return {'status': 'error', 'message': 'git executable is not available in PATH', 'project_id': project_id}
if not git_manager.has_repo():
return {'status': 'error', 'message': 'Local git repository is missing; cannot retry delivery safely.', 'project_id': project_id}
commits = audit_data.get('commits', [])
local_only_changes = audit_data.get('local_only_code_changes', [])
orphan_changes = audit_data.get('orphan_code_changes', [])
published_commits = [
commit for commit in commits
if commit.get('remote_status') == 'pushed' or commit.get('imported_from_remote') or commit.get('commit_url')
]
branch_candidates = [
*(change.get('branch') for change in local_only_changes),
*(change.get('branch') for change in orphan_changes),
*(commit.get('branch') for commit in commits),
((snapshot_data.get('git') or {}).get('active_branch') if isinstance(snapshot_data.get('git'), dict) else None),
f'ai/{project_id}',
]
branch = self._dedupe_preserve_order(branch_candidates)[0]
head = git_manager.current_head_or_none()
if head is None:
return {'status': 'error', 'message': 'Local repository has no commits; retry delivery cannot determine a safe base commit.', 'project_id': project_id}
if git_manager.branch_exists(branch):
git_manager.checkout_branch(branch)
else:
git_manager.checkout_branch(branch, create=True, start_point=head)
code_change_ids = [change['id'] for change in local_only_changes] + [change['id'] for change in orphan_changes]
orphan_ids = [change['id'] for change in orphan_changes]
published_commit_hashes = [commit.get('commit_hash') for commit in published_commits if commit.get('commit_hash')]
if orphan_changes:
files_to_commit = self._dedupe_preserve_order([change.get('file_path') for change in orphan_changes])
missing_files = [path for path in files_to_commit if not (project_root / path).exists()]
if missing_files:
return {
'status': 'error',
'message': f"Cannot retry delivery because generated files are missing locally: {', '.join(missing_files)}",
'project_id': project_id,
}
git_manager.add_files(files_to_commit)
if not git_manager.get_status():
return {
'status': 'error',
'message': 'No local git changes remain for the orphaned files; purge them or regenerate the project.',
'project_id': project_id,
}
commit_message = f"Retry AI delivery for prompt: {history.project_name}"
retried_commit_hash = git_manager.commit(commit_message)
prompt_id = max((change.get('prompt_id') for change in orphan_changes if change.get('prompt_id') is not None), default=None)
self.log_commit(
project_id=project_id,
commit_message=commit_message,
actor='dashboard',
actor_type='operator',
history_id=history.id,
prompt_id=prompt_id,
commit_hash=retried_commit_hash,
changed_files=files_to_commit,
branch=branch,
remote_status='local-only',
)
published_commit_hashes.append(retried_commit_hash)
gitea_api = GiteaAPI(token=settings.gitea_token, base_url=settings.gitea_url, owner=owner, repo=repo_name)
user = gitea_api.get_current_user_sync()
if user.get('error'):
return {'status': 'error', 'message': f"Unable to authenticate with Gitea: {user.get('error')}", 'project_id': project_id}
clone_url = repository.get('clone_url') or gitea_api.build_repo_git_url(owner=owner, repo=repo_name)
if not clone_url:
return {'status': 'error', 'message': 'Repository clone URL could not be determined for retry delivery.', 'project_id': project_id}
try:
git_manager.push_with_credentials(
remote_url=clone_url,
username=user.get('login') or 'git',
password=settings.gitea_token,
remote='origin',
branch=branch,
)
except Exception as exc:
self.log_system_event(component='git', level='ERROR', message=f'Retry delivery push failed for {project_id}: {exc}')
return {'status': 'error', 'message': f'Remote git push failed: {exc}', 'project_id': project_id}
if not published_commit_hashes:
head_commit = git_manager.current_head_or_none()
if head_commit:
published_commit_hashes.append(head_commit)
prompt_text = audit_data['prompts'][0].get('prompt_text') if audit_data.get('prompts') else None
try:
pull_request = self._find_or_create_delivery_pull_request(history, gitea_api, owner, repo_name, branch, prompt_text)
except Exception as exc:
self.log_system_event(component='gitea', level='ERROR', message=f'Retry delivery PR creation failed for {project_id}: {exc}')
return {'status': 'error', 'message': str(exc), 'project_id': project_id}
self._update_project_audit_rows_for_delivery(
project_id=project_id,
branch=branch,
owner=owner,
repo_name=repo_name,
code_change_ids=code_change_ids,
orphan_code_change_ids=orphan_ids,
published_commit_hashes=published_commit_hashes,
)
refreshed_snapshot = dict(snapshot_data)
refreshed_git = dict(refreshed_snapshot.get('git') or {})
latest_commit_hash = self._dedupe_preserve_order(published_commit_hashes)[-1]
latest_commit = dict(refreshed_git.get('latest_commit') or {})
latest_commit.update(
{
'hash': latest_commit_hash,
'scope': 'remote',
'branch': branch,
'commit_url': gitea_api.build_commit_url(latest_commit_hash, owner=owner, repo=repo_name),
}
)
refreshed_git['latest_commit'] = latest_commit
refreshed_git['active_branch'] = branch
refreshed_git['remote_error'] = None
refreshed_git['remote_push'] = {
'status': 'pushed',
'remote': clone_url,
'branch': branch,
'commit_url': latest_commit.get('commit_url'),
'pull_request': pull_request,
}
refreshed_snapshot['git'] = refreshed_git
refreshed_repository = dict(repository)
refreshed_repository['last_commit_url'] = latest_commit.get('commit_url')
refreshed_snapshot['repository'] = refreshed_repository
refreshed_snapshot['pull_request'] = pull_request
refreshed_snapshot['project_root'] = str(project_root)
self.save_ui_snapshot(history.id, refreshed_snapshot)
self._log_audit_trail(
project_id=project_id,
action='DELIVERY_RETRIED',
actor='dashboard',
action_type='RETRY',
details=f'Retried remote delivery for branch {branch}',
message='Remote delivery retried successfully',
metadata_json={
'history_id': history.id,
'branch': branch,
'commit_hashes': self._dedupe_preserve_order(published_commit_hashes),
'pull_request': pull_request,
},
)
self.log_system_event(component='git', level='INFO', message=f'Retried remote delivery for {project_id} on {branch}')
return {
'status': 'success',
'message': 'Remote delivery retried successfully.',
'project_id': project_id,
'branch': branch,
'commit_hashes': self._dedupe_preserve_order(published_commit_hashes),
'pull_request': pull_request,
}
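The retry path above leans on `_dedupe_preserve_order` to keep commit hashes and file paths unique while preserving first-seen order. The helper itself is defined elsewhere in `DatabaseManager`, so treat this as a minimal stdlib sketch of its assumed behavior:

```python
def dedupe_preserve_order(items):
    """Drop duplicates while keeping the first occurrence of each item."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# First-seen order survives; later repeats are dropped.
print(dedupe_preserve_order(["abc123", "def456", "abc123"]))  # → ['abc123', 'def456']
```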
def cleanup_orphan_code_changes(self, project_id: str | None = None) -> dict:
"""Delete code change rows that cannot be tied to any recorded commit."""
change_query = self.db.query(AuditTrail).filter(AuditTrail.action == 'CODE_CHANGE')
commit_query = self.db.query(AuditTrail).filter(AuditTrail.action == 'GIT_COMMIT')
if project_id:
change_query = change_query.filter(AuditTrail.project_id == project_id)
commit_query = commit_query.filter(AuditTrail.project_id == project_id)
change_rows = change_query.all()
commit_rows = commit_query.all()
commits = [
{
'commit_hash': self._normalize_metadata(commit.metadata_json).get('commit_hash'),
'prompt_id': self._normalize_metadata(commit.metadata_json).get('prompt_id'),
}
for commit in commit_rows
]
raw_code_changes = [
{
'id': change.id,
'project_id': change.project_id,
'prompt_id': self._normalize_metadata(change.metadata_json).get('prompt_id'),
'commit_hash': self._normalize_metadata(change.metadata_json).get('commit_hash'),
}
for change in change_rows
]
_, _, orphaned_changes = self._partition_code_changes(raw_code_changes, commits)
orphan_ids = [change['id'] for change in orphaned_changes]
orphan_projects = sorted({change['project_id'] for change in orphaned_changes if change.get('project_id')})
if orphan_ids:
self.db.query(PromptCodeLink).filter(PromptCodeLink.code_change_audit_id.in_(orphan_ids)).delete(synchronize_session=False)
self.db.query(AuditTrail).filter(AuditTrail.id.in_(orphan_ids)).delete(synchronize_session=False)
self.db.commit()
self.log_system_event(
component='audit',
level='INFO',
message=(
f"Purged {len(orphan_ids)} orphaned code change audit row(s)"
+ (f" for project {project_id}" if project_id else '')
),
)
return {
'status': 'success',
'deleted_count': len(orphan_ids),
'project_count': len(orphan_projects),
'projects': orphan_projects,
'project_id': project_id,
'message': (
f"Purged {len(orphan_ids)} orphaned code change row(s)."
if orphan_ids else 'No orphaned code change rows were found.'
),
}
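`_partition_code_changes` is not shown in this diff; the cleanup above only consumes its third return value. A hedged sketch of the assumed contract — a change row is orphaned when it carries a `commit_hash` that matches no recorded commit; the exact treatment of hash-less rows is an assumption here:

```python
def partition_code_changes(code_changes, commits):
    """Split change rows by whether their commit_hash matches a recorded commit."""
    known_hashes = {c.get('commit_hash') for c in commits if c.get('commit_hash')}
    committed, uncommitted, orphaned = [], [], []
    for change in code_changes:
        commit_hash = change.get('commit_hash')
        if commit_hash is None:
            uncommitted.append(change)   # never tied to a commit at all
        elif commit_hash in known_hashes:
            committed.append(change)     # backed by a recorded GIT_COMMIT row
        else:
            orphaned.append(change)      # points at a commit nobody recorded
    return committed, uncommitted, orphaned
```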
def cleanup_audit_trail(self) -> None:
"""Clear audit-related test data across all related tables."""
self.db.query(PromptCodeLink).delete()

View File

@@ -230,6 +230,26 @@ class GiteaAPI:
}
return await self._request("POST", f"repos/{_owner}/{_repo}/pulls", payload)
def create_pull_request_sync(
self,
title: str,
body: str,
owner: str,
repo: str,
base: str = "main",
head: str | None = None,
) -> dict:
"""Synchronously create a pull request."""
_owner = owner or self.owner
_repo = repo or self.repo
payload = {
"title": title,
"body": body,
"base": base,
"head": head or f"{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
}
return self._request_sync("POST", f"repos/{_owner}/{_repo}/pulls", payload)
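One caveat with the fallback `head` above: Python's `hash()` is salted per process for strings (see PYTHONHASHSEED), so the generated branch suffix differs between interpreter runs. If a run-to-run stable suffix is ever wanted, a deterministic alternative (an assumption, not the project's current code) is to derive it from a real digest:

```python
import hashlib

def stable_branch_suffix(title: str, modulo: int = 10000) -> int:
    """Derive a run-to-run stable suffix from the PR title via SHA-256."""
    digest = hashlib.sha256(title.encode("utf-8")).hexdigest()
    return int(digest, 16) % modulo

# Unlike hash(), this yields the same value in every interpreter session.
suffix = stable_branch_suffix("Retry AI delivery")
```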
async def list_pull_requests(
self,
owner: str | None = None,
@@ -402,3 +422,13 @@ class GiteaAPI:
return {"error": "Repository name required for org operations"}
return await self._request("GET", f"repos/{_owner}/{_repo}")
def get_repo_info_sync(self, owner: str | None = None, repo: str | None = None) -> dict:
"""Synchronously get repository information."""
_owner = owner or self.owner
_repo = repo or self.repo
if not _repo:
return {"error": "Repository name required for org operations"}
return self._request_sync("GET", f"repos/{_owner}/{_repo}")

View File

@@ -62,6 +62,7 @@ class AgentOrchestrator:
self.repo_name_override = repo_name_override
self.existing_history = existing_history
self.changed_files: list[str] = []
self.pending_code_changes: list[dict] = []
self.gitea_api = GiteaAPI(
token=settings.GITEA_TOKEN,
base_url=settings.GITEA_URL,
@@ -457,18 +458,14 @@ class AgentOrchestrator:
diff_text = self._build_diff_text(relative_path, previous_content, content)
target.write_text(content, encoding="utf-8")
self.changed_files.append(relative_path)
if self.db_manager and self.history:
self.pending_code_changes.append(
{
'change_type': change_type,
'file_path': relative_path,
'details': f"{change_type.title()}d generated artifact {relative_path}",
'diff_summary': f"Wrote {len(content.splitlines())} lines to {relative_path}",
'diff_text': diff_text,
}
)
def _template_files(self) -> dict[str, str]:
@@ -668,6 +665,23 @@ class AgentOrchestrator:
remote_status=remote_record.get("status") if remote_record else "local-only",
related_issue=self.related_issue,
)
for change in self.pending_code_changes:
self.db_manager.log_code_change(
project_id=self.project_id,
change_type=change['change_type'],
file_path=change['file_path'],
actor='orchestrator',
actor_type='agent',
details=change['details'],
history_id=self.history.id if self.history else None,
prompt_id=self.prompt_audit.id if self.prompt_audit else None,
diff_summary=change.get('diff_summary'),
diff_text=change.get('diff_text'),
commit_hash=commit_hash,
remote_status=remote_record.get('status') if remote_record else 'local-only',
branch=self.branch_name,
)
self.pending_code_changes.clear()
if self.related_issue:
self.db_manager.log_issue_work(
project_id=self.project_id,

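The `pending_code_changes` buffer above exists so that per-file change rows can be logged with the commit hash that only becomes known after the commit happens. The pattern in isolation (a minimal sketch, not the orchestrator's actual classes):

```python
class ChangeRecorder:
    """Buffer per-file changes, then flush them once the commit hash is known."""

    def __init__(self):
        self.pending = []
        self.logged = []

    def record(self, file_path: str, change_type: str) -> None:
        # No commit hash yet: just remember what changed.
        self.pending.append({'file_path': file_path, 'change_type': change_type})

    def flush(self, commit_hash: str) -> None:
        # Now every buffered row can carry the real commit hash.
        for change in self.pending:
            self.logged.append({**change, 'commit_hash': commit_hash})
        self.pending.clear()

recorder = ChangeRecorder()
recorder.record('src/app.py', 'create')
recorder.record('README.md', 'update')
recorder.flush('abc1234')
# recorder.logged now holds both rows, each tagged with 'abc1234'.
```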
View File

@@ -60,6 +60,63 @@ EDITABLE_LLM_PROMPTS: dict[str, dict[str, str]] = {
},
}
EDITABLE_RUNTIME_SETTINGS: dict[str, dict[str, str]] = {
'HOME_ASSISTANT_BATTERY_ENTITY_ID': {
'label': 'Battery Entity ID',
'category': 'home_assistant',
'description': 'Home Assistant entity used for battery state-of-charge gating.',
'value_type': 'string',
},
'HOME_ASSISTANT_SURPLUS_ENTITY_ID': {
'label': 'Surplus Power Entity ID',
'category': 'home_assistant',
'description': 'Home Assistant entity used for export or surplus power gating.',
'value_type': 'string',
},
'HOME_ASSISTANT_BATTERY_FULL_THRESHOLD': {
'label': 'Battery Full Threshold',
'category': 'home_assistant',
'description': 'Minimum battery percentage required before queued prompts may run.',
'value_type': 'float',
},
'HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS': {
'label': 'Surplus Threshold Watts',
'category': 'home_assistant',
'description': 'Minimum surplus/export power required before queued prompts may run.',
'value_type': 'float',
},
'PROMPT_QUEUE_ENABLED': {
'label': 'Queue Telegram Prompts',
'category': 'prompt_queue',
'description': 'When enabled, Telegram prompts are queued and gated instead of processed immediately.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_AUTO_PROCESS': {
'label': 'Auto Process Queue',
'category': 'prompt_queue',
'description': 'Let the background worker drain the queue automatically when the gate is open.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_FORCE_PROCESS': {
'label': 'Force Queue Processing',
'category': 'prompt_queue',
'description': 'Bypass the Home Assistant energy gate for queued prompts.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_POLL_INTERVAL_SECONDS': {
'label': 'Queue Poll Interval Seconds',
'category': 'prompt_queue',
'description': 'Polling interval for the background queue worker.',
'value_type': 'integer',
},
'PROMPT_QUEUE_MAX_BATCH_SIZE': {
'label': 'Queue Max Batch Size',
'category': 'prompt_queue',
'description': 'Maximum number of queued prompts processed in one batch.',
'value_type': 'integer',
},
}
def _get_persisted_llm_prompt_override(env_key: str) -> str | None:
"""Load one persisted LLM prompt override from the database when available."""
@@ -92,6 +149,62 @@ def _resolve_llm_prompt_value(env_key: str, fallback: str) -> str:
return (fallback or '').strip()
def _get_persisted_runtime_setting_override(key: str):
"""Load one persisted runtime-setting override from the database when available."""
if key not in EDITABLE_RUNTIME_SETTINGS:
return None
try:
try:
from .database import get_db_sync
from .agents.database_manager import DatabaseManager
except ImportError:
from database import get_db_sync
from agents.database_manager import DatabaseManager
db = get_db_sync()
if db is None:
return None
try:
return DatabaseManager(db).get_runtime_setting_override(key)
finally:
db.close()
except Exception:
return None
def _coerce_runtime_setting_value(key: str, value, fallback):
"""Coerce a persisted runtime setting override into the expected scalar type."""
value_type = EDITABLE_RUNTIME_SETTINGS.get(key, {}).get('value_type')
if value is None:
return fallback
if value_type == 'boolean':
if isinstance(value, bool):
return value
normalized = str(value).strip().lower()
if normalized in {'1', 'true', 'yes', 'on'}:
return True
if normalized in {'0', 'false', 'no', 'off'}:
return False
return bool(fallback)
if value_type == 'integer':
try:
return int(value)
except Exception:
return int(fallback)
if value_type == 'float':
try:
return float(value)
except Exception:
return float(fallback)
return str(value).strip()
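The boolean branch above accepts the usual string spellings; a standalone sketch of that normalization (same truthy/falsy sets as `_coerce_runtime_setting_value`):

```python
def coerce_bool(value, fallback: bool) -> bool:
    """Normalize common boolean spellings; fall back when the value is unrecognized."""
    if isinstance(value, bool):
        return value
    normalized = str(value).strip().lower()
    if normalized in {'1', 'true', 'yes', 'on'}:
        return True
    if normalized in {'0', 'false', 'no', 'off'}:
        return False
    return bool(fallback)

print(coerce_bool(' YES ', fallback=False))  # → True
print(coerce_bool('off', fallback=True))     # → False
print(coerce_bool('maybe', fallback=True))   # → True (unrecognized, so fallback wins)
```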
def _resolve_runtime_setting_value(key: str, fallback):
"""Resolve one editable runtime setting from DB override first, then environment/defaults."""
override = _get_persisted_runtime_setting_override(key)
return _coerce_runtime_setting_value(key, override, fallback)
class Settings(BaseSettings):
"""Application settings loaded from environment variables."""
@@ -309,6 +422,26 @@ class Settings(BaseSettings):
)
return prompts
@property
def editable_runtime_settings(self) -> list[dict]:
"""Return metadata for all DB-editable runtime settings."""
items = []
for key, metadata in EDITABLE_RUNTIME_SETTINGS.items():
default_value = getattr(self, key)
value = _resolve_runtime_setting_value(key, default_value)
items.append(
{
'key': key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'value_type': metadata['value_type'],
'default_value': default_value,
'value': value,
}
)
return items
@property
def llm_tool_allowlist(self) -> list[str]:
"""Get the allowed LLM tool names as a normalized list."""
@@ -438,47 +571,47 @@ class Settings(BaseSettings):
@property
def home_assistant_battery_entity_id(self) -> str:
"""Get the Home Assistant battery state entity id."""
return str(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_ENTITY_ID', self.HOME_ASSISTANT_BATTERY_ENTITY_ID)).strip()
@property
def home_assistant_surplus_entity_id(self) -> str:
"""Get the Home Assistant surplus power entity id."""
return str(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_ENTITY_ID', self.HOME_ASSISTANT_SURPLUS_ENTITY_ID)).strip()
@property
def home_assistant_battery_full_threshold(self) -> float:
"""Get the minimum battery SoC percentage for queue processing."""
return float(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_FULL_THRESHOLD', self.HOME_ASSISTANT_BATTERY_FULL_THRESHOLD))
@property
def home_assistant_surplus_threshold_watts(self) -> float:
"""Get the minimum export/surplus power threshold for queue processing."""
return float(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS', self.HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS))
@property
def prompt_queue_enabled(self) -> bool:
"""Whether Telegram prompts should be queued instead of processed immediately."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_ENABLED', self.PROMPT_QUEUE_ENABLED))
@property
def prompt_queue_auto_process(self) -> bool:
"""Whether the background worker should automatically process queued prompts."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_AUTO_PROCESS', self.PROMPT_QUEUE_AUTO_PROCESS))
@property
def prompt_queue_force_process(self) -> bool:
"""Whether queued prompts should bypass the Home Assistant energy gate."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_FORCE_PROCESS', self.PROMPT_QUEUE_FORCE_PROCESS))
@property
def prompt_queue_poll_interval_seconds(self) -> int:
"""Get the queue polling interval for background processing."""
return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_POLL_INTERVAL_SECONDS', self.PROMPT_QUEUE_POLL_INTERVAL_SECONDS)), 5)
@property
def prompt_queue_max_batch_size(self) -> int:
"""Get the maximum number of queued prompts to process in one batch."""
return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_MAX_BATCH_SIZE', self.PROMPT_QUEUE_MAX_BATCH_SIZE)), 1)
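Resolution order for these settings is: database override first, environment/default second, then any clamp. A compact sketch of that layering (an in-memory dict stands in for the persisted runtime-setting rows):

```python
overrides = {}  # stands in for persisted runtime-setting rows
env_defaults = {'PROMPT_QUEUE_POLL_INTERVAL_SECONDS': 60}

def resolve(key):
    """DB override wins when present; otherwise fall back to the environment default."""
    return overrides.get(key, env_defaults[key])

def poll_interval_seconds() -> int:
    # The clamp mirrors the property: never poll faster than every 5 seconds.
    return max(int(resolve('PROMPT_QUEUE_POLL_INTERVAL_SECONDS')), 5)

print(poll_interval_seconds())  # → 60 (environment default)
overrides['PROMPT_QUEUE_POLL_INTERVAL_SECONDS'] = '2'
print(poll_interval_seconds())  # → 5 (override accepted, then clamped)
```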
@property
def projects_root(self) -> Path:

View File

@@ -907,6 +907,9 @@ def create_dashboard():
def _llm_prompt_draft_key(prompt_key: str) -> str:
return f'dashboard.llm_prompt_draft.{prompt_key}'
def _runtime_setting_draft_key(setting_key: str) -> str:
return f'dashboard.runtime_setting_draft.{setting_key}'
def _selected_tab_name() -> str:
"""Return the persisted active dashboard tab."""
return app.storage.user.get(active_tab_key, 'overview')
@@ -976,6 +979,15 @@ def create_dashboard():
def _clear_prompt_draft(prompt_key: str) -> None:
app.storage.user.pop(_llm_prompt_draft_key(prompt_key), None)
def _runtime_setting_draft_value(setting_key: str, fallback):
return app.storage.user.get(_runtime_setting_draft_key(setting_key), fallback)
def _store_runtime_setting_draft(setting_key: str, value) -> None:
app.storage.user[_runtime_setting_draft_key(setting_key)] = value
def _clear_runtime_setting_draft(setting_key: str) -> None:
app.storage.user.pop(_runtime_setting_draft_key(setting_key), None)
def _call_backend_json(path: str, method: str = 'GET', payload: dict | None = None) -> dict:
target = f"{settings.backend_public_url}{path}"
data = json.dumps(payload).encode('utf-8') if payload is not None else None
@@ -1172,6 +1184,26 @@ def create_dashboard():
ui.notify('Queued prompt returned to pending state', color='positive')
_refresh_all_dashboard_sections()
def purge_orphan_code_changes_action(project_id: str | None = None) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
result = DatabaseManager(db).cleanup_orphan_code_changes(project_id=project_id)
ui.notify(result.get('message', 'Audit cleanup completed'), color='positive')
_refresh_all_dashboard_sections()
def retry_project_delivery_action(project_id: str) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
result = DatabaseManager(db).retry_project_delivery(project_id)
ui.notify(result.get('message', 'Delivery retry completed'), color='positive' if result.get('status') == 'success' else 'negative')
_refresh_all_dashboard_sections()
def save_llm_prompt_action(prompt_key: str) -> None:
db = get_db_sync()
if db is None:
@@ -1202,6 +1234,36 @@ def create_dashboard():
ui.notify('LLM prompt setting reset to environment default', color='positive')
_refresh_system_sections()
def save_runtime_setting_action(setting_key: str) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
current = next((item for item in DatabaseManager(db).get_runtime_settings() if item['key'] == setting_key), None)
value = _runtime_setting_draft_value(setting_key, current['value'] if current else None)
result = DatabaseManager(db).save_runtime_setting(setting_key, value, actor='dashboard')
if result.get('status') == 'error':
ui.notify(result.get('message', 'Runtime setting save failed'), color='negative')
return
_clear_runtime_setting_draft(setting_key)
ui.notify('Runtime setting saved', color='positive')
_refresh_all_dashboard_sections()
def reset_runtime_setting_action(setting_key: str) -> None:
db = get_db_sync()
if db is None:
ui.notify('Database session could not be created', color='negative')
return
with closing(db):
result = DatabaseManager(db).reset_runtime_setting(setting_key, actor='dashboard')
if result.get('status') == 'error':
ui.notify(result.get('message', 'Runtime setting reset failed'), color='negative')
return
_clear_runtime_setting_draft(setting_key)
ui.notify('Runtime setting reset to environment default', color='positive')
_refresh_all_dashboard_sections()
def init_db_action() -> None:
result = init_db()
ui.notify(result.get('message', 'Database initialized'), color='positive' if result.get('status') == 'success' else 'negative')
@@ -1280,13 +1342,16 @@ def create_dashboard():
commit_lookup_query = _selected_commit_lookup()
discovered_repositories = _get_discovered_repositories()
prompt_settings = settings.editable_llm_prompts
runtime_settings = settings.editable_runtime_settings
db = get_db_sync()
if db is not None:
with closing(db):
try:
prompt_settings = DatabaseManager(db).get_llm_prompt_settings()
runtime_settings = DatabaseManager(db).get_runtime_settings()
except Exception:
prompt_settings = settings.editable_llm_prompts
runtime_settings = settings.editable_runtime_settings
if snapshot.get('error'):
return {
'error': snapshot['error'],
@@ -1298,6 +1363,7 @@ def create_dashboard():
'commit_lookup_query': commit_lookup_query,
'discovered_repositories': discovered_repositories,
'prompt_settings': prompt_settings,
'runtime_settings': runtime_settings,
}
projects = snapshot['projects']
all_llm_traces = [trace for project_bundle in projects for trace in project_bundle.get('llm_traces', [])]
@@ -1317,6 +1383,7 @@ def create_dashboard():
'commit_context': _load_commit_context(commit_lookup_query, branch_scope_filter) if commit_lookup_query else None,
'discovered_repositories': discovered_repositories,
'prompt_settings': prompt_settings,
'runtime_settings': runtime_settings,
'llm_stage_options': [''] + sorted({trace.get('stage') for trace in all_llm_traces if trace.get('stage')}),
'llm_model_options': [''] + sorted({trace.get('model') for trace in all_llm_traces if trace.get('model')}),
'project_repository_map': {
@@ -1373,6 +1440,7 @@ def create_dashboard():
('Completed', summary['completed_projects'], 'Finished project runs'),
('Prompts', summary['prompt_events'], 'Recorded originating prompts'),
('Open PRs', summary['open_pull_requests'], 'Unmerged review branches'),
('Orphans', summary.get('orphan_code_changes', 0), 'Generated diffs with no matching commit'),
]
for title, value, subtitle in metrics:
with ui.card().classes('factory-kpi'):
@@ -1391,15 +1459,38 @@ def create_dashboard():
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Project Pipeline').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
if summary.get('orphan_code_changes'):
with ui.card().classes('q-pa-md q-mt-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
ui.label('Uncommitted generated changes detected').style('font-weight: 700; color: #7a4b16;')
ui.label(
f"{summary['orphan_code_changes']} generated file change row(s) have no matching git commit or PR delivery record."
).classes('factory-muted')
ui.button(
'Purge orphan change rows',
on_click=lambda: _render_confirmation_dialog(
'Purge orphaned generated change rows?',
'Delete only generated CODE_CHANGE audit rows that have no matching git commit. Valid prompt, commit, and PR history will be kept.',
'Purge Orphans',
lambda: purge_orphan_code_changes_action(),
color='warning',
),
).props('outline color=warning').classes('q-mt-sm')
if projects:
for project_bundle in projects[:4]:
project = project_bundle['project']
with ui.column().classes('gap-1 q-mt-md'):
with ui.row().classes('justify-between items-center'):
ui.label(project['project_name']).style('font-weight: 700; color: #2f241d;')
with ui.row().classes('items-center gap-2'):
if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
ui.label(project.get('delivery_status', 'delivery')).classes('factory-chip')
ui.label(project['status']).classes('factory-chip')
ui.linear_progress(value=(project['progress'] or 0) / 100, show_value=False).classes('w-full')
ui.label(project['message'] or 'No status message').classes('factory-muted')
ui.label(
project.get('delivery_message')
if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}
else project['message'] or 'No status message'
).classes('factory-muted')
else:
ui.label('No projects in the database yet.').classes('factory-muted')
@@ -1455,6 +1546,28 @@ def create_dashboard():
lambda: delete_project_action(project_id),
),
).props('outline color=negative')
if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
with ui.card().classes('q-ma-md q-pa-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
with ui.row().classes('items-center justify-between w-full gap-3'):
with ui.column().classes('gap-1'):
ui.label('Remote delivery attention needed').style('font-weight: 700; color: #7a4b16;')
ui.label(project.get('delivery_message') or 'Generated changes were not published to the tracked repository.').classes('factory-muted')
with ui.row().classes('items-center gap-2'):
ui.button(
'Retry delivery',
on_click=lambda _=None, project_id=project['project_id']: retry_project_delivery_action(project_id),
).props('outline color=positive')
if project.get('delivery_status') == 'uncommitted':
ui.button(
'Purge project orphan rows',
on_click=lambda _=None, project_id=project['project_id']: _render_confirmation_dialog(
'Purge orphaned generated change rows for this project?',
'Delete only generated CODE_CHANGE audit rows for this project that have no matching git commit. Valid history remains intact.',
'Purge Project Orphans',
lambda: purge_orphan_code_changes_action(project_id),
color='warning',
),
).props('outline color=warning')
with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
with ui.card().classes('q-pa-md'):
ui.label('Repository').style('font-weight: 700; color: #3a281a;')
@@ -1505,6 +1618,26 @@ def create_dashboard():
lambda: delete_project_action(project_id),
),
).props('outline color=negative')
if project.get('delivery_status') in {'uncommitted', 'local_only', 'pushed_no_pr'}:
with ui.card().classes('q-ma-md q-pa-md').style('background: #fff4dd; border: 1px solid #e0b36a;'):
ui.label('Archived project needs delivery attention').style('font-weight: 700; color: #7a4b16;')
ui.label(project.get('delivery_message') or 'Generated changes were not published to the tracked repository.').classes('factory-muted')
with ui.row().classes('items-center gap-2 q-mt-sm'):
ui.button(
'Retry delivery',
on_click=lambda _=None, project_id=project['project_id']: retry_project_delivery_action(project_id),
).props('outline color=positive')
if project.get('delivery_status') == 'uncommitted':
ui.button(
'Purge archived project orphan rows',
on_click=lambda _=None, project_id=project['project_id']: _render_confirmation_dialog(
'Purge orphaned generated change rows for this archived project?',
'Delete only generated CODE_CHANGE audit rows for this project that have no matching git commit. Valid history remains intact.',
'Purge Archived Orphans',
lambda: purge_orphan_code_changes_action(project_id),
color='warning',
),
).props('outline color=warning')
with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
with ui.card().classes('q-pa-md'):
ui.label('Repository').style('font-weight: 700; color: #3a281a;')
@@ -1711,6 +1844,7 @@ def create_dashboard():
llm_runtime = view_model['llm_runtime']
discovered_repositories = view_model['discovered_repositories']
prompt_settings = view_model.get('prompt_settings', [])
runtime_settings = view_model.get('runtime_settings', [])
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('System Logs').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
@@ -1761,6 +1895,45 @@ def create_dashboard():
for label, text in system_prompts.items():
ui.label(label.replace('_', ' ').title()).classes('factory-muted q-mt-sm')
ui.label(text or 'Not configured').classes('factory-code')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Home Assistant and Queue Settings').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('Keep only the Home Assistant base URL and access token in the environment. Entity ids, thresholds, and queue behavior are edited here and persisted in the database.').classes('factory-muted')
for setting in runtime_settings:
with ui.card().classes('q-pa-sm q-mt-md'):
with ui.row().classes('items-center justify-between w-full'):
with ui.column().classes('gap-1'):
ui.label(setting['label']).style('font-weight: 700; color: #2f241d;')
ui.label(setting.get('description') or '').classes('factory-muted')
with ui.row().classes('items-center gap-2'):
ui.label(setting.get('category', 'setting')).classes('factory-chip')
ui.label(setting.get('source', 'environment')).classes('factory-chip')
draft_value = _runtime_setting_draft_value(setting['key'], setting.get('value'))
if setting.get('value_type') == 'boolean':
ui.switch(
value=bool(draft_value),
on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, bool(event.value)),
).props('color=accent').classes('q-mt-sm')
elif setting.get('value_type') == 'integer':
ui.number(
value=int(draft_value),
on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, int(event.value) if event.value is not None else None),
).classes('w-full q-mt-sm')
elif setting.get('value_type') == 'float':
ui.number(
value=float(draft_value),
on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, float(event.value) if event.value is not None else None),
).classes('w-full q-mt-sm')
else:
ui.input(
value=str(draft_value or ''),
on_change=lambda event, setting_key=setting['key']: _store_runtime_setting_draft(setting_key, event.value or ''),
).classes('w-full q-mt-sm')
ui.label(f"Environment default: {setting.get('default_value')}").classes('factory-muted q-mt-sm')
if setting.get('updated_at'):
ui.label(f"Last updated: {setting['updated_at']} by {setting.get('updated_by') or 'unknown'}").classes('factory-muted q-mt-sm')
with ui.row().classes('items-center gap-2 q-mt-md'):
ui.button('Save Override', on_click=lambda _=None, setting_key=setting['key']: save_runtime_setting_action(setting_key)).props('unelevated color=accent')
ui.button('Reset To Default', on_click=lambda _=None, setting_key=setting['key']: reset_runtime_setting_action(setting_key)).props('outline color=warning')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Editable LLM Prompts').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('These guardrails and system prompts are persisted in the database and override environment defaults until reset.').classes('factory-muted')

View File

@@ -62,8 +62,6 @@ async def lifespan(_app: FastAPI):
print(
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
)
queue_worker = asyncio.create_task(_prompt_queue_worker())
try:
yield
@@ -124,6 +122,12 @@ class LLMPromptSettingUpdateRequest(BaseModel):
value: str = Field(default='')
class RuntimeSettingUpdateRequest(BaseModel):
"""Request body for persisting one editable runtime setting override."""
value: str | bool | int | float | None = None
class GiteaRepositoryOnboardRequest(BaseModel):
"""Request body for onboarding a manually created Gitea repository."""
@@ -681,6 +685,7 @@ async def _prompt_queue_worker() -> None:
"""Background worker that drains the prompt queue when the energy gate opens."""
while True:
try:
if database_module.settings.prompt_queue_enabled and database_module.settings.prompt_queue_auto_process:
await _process_prompt_queue_batch(
limit=database_module.settings.prompt_queue_max_batch_size,
force=database_module.settings.prompt_queue_force_process,
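Moving the enable/auto-process check inside the loop means the worker task runs continuously and re-reads the gate on every tick, so dashboard toggles take effect without a restart. The pattern in isolation (a minimal sketch, assuming a plain dict in place of the real settings object):

```python
import asyncio

settings = {'enabled': False, 'auto': True}
processed = []

async def process_batch():
    processed.append('batch')

async def queue_worker(iterations: int = 3) -> None:
    """Always-running worker; the gate is re-read on every tick so toggles apply live."""
    for _ in range(iterations):
        if settings['enabled'] and settings['auto']:
            await process_batch()
        await asyncio.sleep(0)  # stand-in for the real poll interval

async def main():
    task = asyncio.create_task(queue_worker())
    await asyncio.sleep(0)      # let the first tick run with the gate closed
    settings['enabled'] = True  # flip the gate mid-flight, as the dashboard would
    await task

asyncio.run(main())
# processed holds the batches recorded after the toggle, without restarting the worker.
```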
@@ -719,6 +724,8 @@ def read_api_info():
'/llm/runtime',
'/llm/prompts',
'/llm/prompts/{prompt_key}',
'/settings/runtime',
'/settings/runtime/{setting_key}',
'/generate',
'/generate/text',
'/queue',
@@ -815,6 +822,32 @@ def reset_llm_prompt_setting(prompt_key: str, db: DbSession):
return result
@app.get('/settings/runtime')
def get_runtime_settings(db: DbSession):
"""Return editable runtime settings with DB overrides merged over environment defaults."""
return {'settings': DatabaseManager(db).get_runtime_settings()}
@app.put('/settings/runtime/{setting_key}')
def update_runtime_setting(setting_key: str, request: RuntimeSettingUpdateRequest, db: DbSession):
"""Persist one editable runtime setting override into the database."""
database_module.init_db()
result = DatabaseManager(db).save_runtime_setting(setting_key, request.value, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting save failed'))
return result
@app.delete('/settings/runtime/{setting_key}')
def reset_runtime_setting(setting_key: str, db: DbSession):
"""Reset one editable runtime setting override back to the environment/default value."""
database_module.init_db()
result = DatabaseManager(db).reset_runtime_setting(setting_key, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting reset failed'))
return result
@app.post('/generate')
async def generate_software(request: SoftwareRequest, db: DbSession):
"""Create and record a software-generation request."""