30 Commits

SHA1  Message  Date
(All release commits passed the "Upload Python Package" workflow: Create Release and deploy.)
0770b254b1  release: version 0.9.14 🚀  2026-04-11 21:40:53 +02:00
e651e3324d  fix: add Ollama connection health details in UI, refs NOISSUE  2026-04-11 21:40:50 +02:00
bbe0279af4  release: version 0.9.13 🚀  2026-04-11 21:17:16 +02:00
5e5e7b4f35  fix: fix internal server error, refs NOISSUE  2026-04-11 21:17:12 +02:00
634f4326c6  release: version 0.9.12 🚀  2026-04-11 20:31:22 +02:00
f54d3b3b7a  fix: remove heuristic decision making fallbacks, refs NOISSUE  2026-04-11 20:31:19 +02:00
c147d8be78  release: version 0.9.11 🚀  2026-04-11 20:09:34 +02:00
9ffaa18efe  fix: project association improvements, refs NOISSUE  2026-04-11 20:09:31 +02:00
d53f3fe207  release: version 0.9.10 🚀  2026-04-11 18:05:25 +02:00
4f1d757dd8  fix: more git integration fixes, refs NOISSUE  2026-04-11 18:05:20 +02:00
ac75cc2e3a  release: version 0.9.9 🚀  2026-04-11 17:41:29 +02:00
f7f00d4e14  fix: add missing git binary, refs NOISSUE  2026-04-11 17:41:24 +02:00
1c539d5f60  release: version 0.9.8 🚀  2026-04-11 16:32:23 +02:00
64fcd2967c  fix: more file change fixes, refs NOISSUE  2026-04-11 16:32:19 +02:00
4d050ff527  release: version 0.9.7 🚀  2026-04-11 14:33:47 +02:00
1944e2a9cf  fix: more file generation improvements, refs NOISSUE  2026-04-11 14:33:45 +02:00
7e4066c609  release: version 0.9.6 🚀  2026-04-11 13:37:52 +02:00
4eeec5d808  fix: repo onboarding fix, refs NOISSUE  2026-04-11 13:37:49 +02:00
cbbed83915  release: version 0.9.5 🚀  2026-04-11 13:27:26 +02:00
1e72bc9a28  fix: better code generation, refs NOISSUE  2026-04-11 13:27:23 +02:00
b0c95323fd  release: version 0.9.4 🚀  2026-04-11 13:06:54 +02:00
d60e753acf  fix: add commit retry, refs NOISSUE  2026-04-11 13:06:48 +02:00
94c38359c7  release: version 0.9.3 🚀  2026-04-11 12:45:59 +02:00
2943fc79ab  fix: better home assistant integration, refs NOISSUE  2026-04-11 12:45:56 +02:00
3e40338bbf  release: version 0.9.2 🚀  2026-04-11 11:53:25 +02:00
39f9651236  fix: UI improvements and prompt hardening, refs NOISSUE  2026-04-11 11:53:18 +02:00
3175c53504  release: version 0.9.1 🚀  2026-04-11 11:37:22 +02:00
29cf2aa6bd  fix: better repo name generation, refs NOISSUE  2026-04-11 11:37:19 +02:00
b881ef635a  release: version 0.9.0 🚀  2026-04-11 11:12:54 +02:00
e35db0a361  feat: editable guardrails, refs NOISSUE  2026-04-11 11:12:50 +02:00
16 changed files with 3673 additions and 391 deletions

View File

@@ -12,7 +12,10 @@ WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
git \
&& update-ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Install dependencies
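
The Dockerfile hunk above corresponds to the "add missing git binary" fix: the image now ships git so the orchestrator's commit step can run. As a minimal, stdlib-only sketch of the kind of availability probe the orchestrator performs before committing (the function name here is hypothetical, not the project's GitManager API):

```
import shutil
import subprocess


def git_available() -> bool:
    """Return True when a usable git executable is on PATH."""
    git_path = shutil.which("git")
    if not git_path:
        return False
    try:
        # `git --version` is cheap and verifies the binary actually runs.
        subprocess.run([git_path, "--version"], check=True, capture_output=True)
        return True
    except (OSError, subprocess.CalledProcessError):
        return False


if __name__ == "__main__":
    print("git available:", git_available())
```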

View File

@@ -4,6 +4,165 @@ Changelog
(unreleased)
------------
Fix
~~~
- Add Ollama connection health details in UI, refs NOISSUE. [Simon
Diesenreiter]
0.9.13 (2026-04-11)
-------------------
Fix
~~~
- Fix internal server error, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.12 (2026-04-11)
-------------------
Fix
~~~
- Remove heuristic decision making fallbacks, refs NOISSUE. [Simon
Diesenreiter]
Other
~~~~~
0.9.11 (2026-04-11)
-------------------
Fix
~~~
- Project association improvements, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.10 (2026-04-11)
-------------------
Fix
~~~
- More git integration fixes, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.9 (2026-04-11)
------------------
Fix
~~~
- Add missing git binary, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.8 (2026-04-11)
------------------
Fix
~~~
- More file change fixes, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.7 (2026-04-11)
------------------
Fix
~~~
- More file generation improvements, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.6 (2026-04-11)
------------------
Fix
~~~
- Repo onboarding fix, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.5 (2026-04-11)
------------------
Fix
~~~
- Better code generation, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.4 (2026-04-11)
------------------
Fix
~~~
- Add commit retry, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.3 (2026-04-11)
------------------
Fix
~~~
- Better home assistant integration, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.2 (2026-04-11)
------------------
Fix
~~~
- UI improvements and prompt hardening, refs NOISSUE. [Simon
Diesenreiter]
Other
~~~~~
0.9.1 (2026-04-11)
------------------
Fix
~~~
- Better repo name generation, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.9.0 (2026-04-11)
------------------
- Feat: editable guardrails, refs NOISSUE. [Simon Diesenreiter]
0.8.0 (2026-04-11)
------------------
- Feat: better dashboard reloading mechanism, refs NOISSUE. [Simon
Diesenreiter]
- Feat: add explicit workflow steps and guardrail prompts, refs NOISSUE.

View File

@@ -48,6 +48,7 @@ OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
@@ -69,6 +70,12 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity ids, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
```
### Build and Run
@@ -93,6 +100,7 @@ docker-compose up -d
The backend now interprets free-form Telegram text with Ollama before generation.
If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat.
If queueing is enabled from the dashboard System tab, Telegram prompts are stored in a durable queue and processed only when the configured Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
2. **Monitor progress via Web UI:**
@@ -104,6 +112,16 @@ docker-compose up -d
If you deploy the container with PostgreSQL environment variables set, the service now selects PostgreSQL automatically even though SQLite remains the default for local/test usage.
The health tab now shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics so misconfigured integrations are visible without checking container logs.
The dashboard Health tab exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
The dashboard System tab now also stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
Projects that show `uncommitted`, `local_only`, or `pushed_no_pr` delivery warnings in the dashboard can now be retried in place from the UI before resorting to purging orphan audit rows.
Guardrail and system prompts are no longer environment-only in practice: the factory can persist DB-backed overrides for the editable LLM prompt set, expose them at `/llm/prompts`, and edit them from the dashboard System tab. Environment values still act as defaults and as the reset target.
## API Endpoints
| Endpoint | Method | Description |
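
The README additions above name several operational endpoints (`/gitea/health`, `/home-assistant/health`, `/queue`, `/llm/prompts`). A minimal polling sketch, assuming the factory listens on `http://yourserver:8000` and returns JSON as described; the exact response shapes are not shown in this diff:

```
import json
import urllib.request

BASE_URL = "http://yourserver:8000"  # assumption: factory address from the README


def get_json(path: str) -> dict:
    """Fetch a factory endpoint and decode its JSON payload."""
    with urllib.request.urlopen(f"{BASE_URL}{path}", timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Diagnostics surfaced in the dashboard Health tab, plus the editable prompt set.
    for path in ("/gitea/health", "/home-assistant/health", "/queue", "/llm/prompts"):
        print(path, json.dumps(get_json(path), indent=2)[:200])
```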

View File

@@ -24,7 +24,7 @@ LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
# Configure Gitea API for your organization
# GITEA_URL can be left empty to use GITEA_ORGANIZATION instead of GITEA_OWNER
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev automatically.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=your_organization_name
@@ -42,6 +42,12 @@ N8N_PASSWORD=your_secure_password
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Home Assistant energy gate for queued Telegram prompts
# Only the base URL and token are environment-backed.
# Queue toggles, entity ids, thresholds, and batch sizing can be edited in the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
# PostgreSQL
# In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.
# You can also set USE_SQLITE=false explicitly if you want the intent to be obvious.
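
A minimal sketch of the database-selection rule those comments describe: PostgreSQL settings take precedence when present, SQLite stays the default, and `USE_SQLITE` can state the intent explicitly. The `POSTGRES_*` variable names below are illustrative assumptions; only the precedence rule comes from the README text above.

```
import os


def database_url() -> str:
    """Pick PostgreSQL when its settings are present, otherwise fall back to SQLite."""
    host = os.getenv("POSTGRES_HOST")        # assumed name, for illustration only
    user = os.getenv("POSTGRES_USER")        # assumed name
    password = os.getenv("POSTGRES_PASSWORD")  # assumed name
    db_name = os.getenv("POSTGRES_DB")       # assumed name
    force_sqlite = os.getenv("USE_SQLITE", "").lower() == "true"
    if not force_sqlite and all((host, user, password, db_name)):
        return f"postgresql://{user}:{password}@{host}/{db_name}"
    return "sqlite:///./app.db"
```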

View File

@@ -62,10 +62,11 @@ LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "
LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN= analyze your_gitea_api_token
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=ai-software-factory
GITEA_REPO=
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
@@ -73,6 +74,12 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity ids, thresholds, and queue behavior can be configured from the dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
```
### Build and Run
@@ -95,6 +102,10 @@ docker-compose up -d
Features: user authentication, task CRUD, notifications
```
If queueing is enabled from the dashboard System tab, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
The dashboard System tab stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000` to see real-time progress
@@ -138,6 +149,12 @@ New project creation can also run a dedicated `project_id_naming` stage. `LLM_PR
Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.
Operational visibility for the Gitea integration, Home Assistant energy gate, and queued prompt counts is available in the dashboard Health tab, plus `/gitea/health`, `/home-assistant/health`, and `/queue`.
The dashboard Health tab also includes operator controls for manually processing queued Telegram prompts, force-processing them when needed, and retrying failed items.
Editable guardrail and system prompts are persisted in the database as overrides on top of the environment defaults. The current merged values are available at `/llm/prompts`, and the dashboard System tab can edit or reset them without restarting the service.
These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
## Development
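
The last README paragraph above says tool payloads are appended to the model prompt as authoritative, service-generated JSON. A rough sketch of that composition under stated assumptions — the delimiter wording and payload contents are hypothetical, only the idea of an appended JSON context block comes from the README:

```
import json


def compose_prompt(user_prompt: str, tool_payloads: dict) -> str:
    """Append service-generated context as a clearly delimited JSON block."""
    context_block = json.dumps(tool_payloads, indent=2, sort_keys=True)
    return (
        f"{user_prompt}\n\n"
        "Authoritative tool context (generated by the service; do not contradict it):\n"
        f"{context_block}"
    )


if __name__ == "__main__":
    print(compose_prompt(
        "Add a /healthz endpoint.",
        {"repository": {"owner": "ai-software-factory", "name": "example"}},
    ))
```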

View File

@@ -1 +1 @@
0.8.0
0.9.14

File diff suppressed because it is too large.

View File

@@ -4,6 +4,20 @@ import os
import urllib.error
import urllib.request
import json
from urllib.parse import urlparse
def _normalize_base_url(base_url: str) -> str:
"""Normalize host-only service addresses into valid absolute URLs."""
normalized = (base_url or '').strip().rstrip('/')
if not normalized:
return ''
if '://' not in normalized:
normalized = f'https://{normalized}'
parsed = urlparse(normalized)
if not parsed.scheme or not parsed.netloc:
return ''
return normalized
class GiteaAPI:
@@ -11,7 +25,7 @@ class GiteaAPI:
def __init__(self, token: str, base_url: str, owner: str | None = None, repo: str | None = None):
self.token = token
self.base_url = base_url.rstrip("/")
self.base_url = _normalize_base_url(base_url)
self.owner = owner
self.repo = repo
self.headers = {
@@ -26,7 +40,7 @@ class GiteaAPI:
owner = os.getenv("GITEA_OWNER", "ai-test")
repo = os.getenv("GITEA_REPO", "")
return {
"base_url": base_url.rstrip("/"),
"base_url": _normalize_base_url(base_url),
"token": token,
"owner": owner,
"repo": repo,
@@ -44,6 +58,18 @@ class GiteaAPI:
"""Build a Gitea API URL from a relative path."""
return f"{self.base_url}/api/v1/{path.lstrip('/')}"
def _normalize_pull_request_head(self, head: str | None, owner: str | None = None) -> str | None:
"""Return a Gitea-compatible head ref for pull request creation."""
normalized = (head or '').strip()
if not normalized:
return None
if ':' in normalized:
return normalized
effective_owner = (owner or self.owner or '').strip()
if not effective_owner:
return normalized
return f"{effective_owner}:{normalized}"
def build_repo_git_url(self, owner: str | None = None, repo: str | None = None) -> str | None:
"""Build the clone URL for a repository."""
_owner = owner or self.owner
@@ -96,16 +122,16 @@ class GiteaAPI:
def _request_sync(self, method: str, path: str, payload: dict | None = None) -> dict:
"""Perform a synchronous Gitea API request."""
request = urllib.request.Request(
self._api_url(path),
headers=self.get_auth_headers(),
method=method.upper(),
)
data = None
if payload is not None:
data = json.dumps(payload).encode('utf-8')
request.data = data
try:
if not self.base_url:
return {'error': 'Gitea base URL is not configured or is invalid'}
request = urllib.request.Request(
self._api_url(path),
headers=self.get_auth_headers(),
method=method.upper(),
)
if payload is not None:
request.data = json.dumps(payload).encode('utf-8')
with urllib.request.urlopen(request) as response:
body = response.read().decode('utf-8')
return json.loads(body) if body else {}
@@ -182,6 +208,10 @@ class GiteaAPI:
"""Get the user associated with the configured token."""
return await self._request("GET", "user")
def get_current_user_sync(self) -> dict:
"""Synchronously get the user associated with the configured token."""
return self._request_sync("GET", "user")
async def create_branch(self, branch: str, base: str = "main", owner: str | None = None, repo: str | None = None):
"""Create a new branch."""
_owner = owner or self.owner
@@ -204,14 +234,36 @@ class GiteaAPI:
"""Create a pull request."""
_owner = owner or self.owner
_repo = repo or self.repo
normalized_head = self._normalize_pull_request_head(head, _owner)
payload = {
"title": title,
"body": body,
"base": base,
"head": head or f"{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
"head": normalized_head or f"{_owner}:{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
}
return await self._request("POST", f"repos/{_owner}/{_repo}/pulls", payload)
def create_pull_request_sync(
self,
title: str,
body: str,
owner: str,
repo: str,
base: str = "main",
head: str | None = None,
) -> dict:
"""Synchronously create a pull request."""
_owner = owner or self.owner
_repo = repo or self.repo
normalized_head = self._normalize_pull_request_head(head, _owner)
payload = {
"title": title,
"body": body,
"base": base,
"head": normalized_head or f"{_owner}:{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
}
return self._request_sync("POST", f"repos/{_owner}/{_repo}/pulls", payload)
async def list_pull_requests(
self,
owner: str | None = None,
@@ -384,3 +436,13 @@ class GiteaAPI:
return {"error": "Repository name required for org operations"}
return await self._request("GET", f"repos/{_owner}/{_repo}")
def get_repo_info_sync(self, owner: str | None = None, repo: str | None = None) -> dict:
"""Synchronously get repository information."""
_owner = owner or self.owner
_repo = repo or self.repo
if not _repo:
return {"error": "Repository name required for org operations"}
return self._request_sync("GET", f"repos/{_owner}/{_repo}")
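
Taken together, the two helpers added in this file normalize host-only Gitea base URLs and owner-less pull-request heads. A small standalone sketch that mirrors `_normalize_base_url` and `_normalize_pull_request_head` as shown in the diff, with example inputs:

```
from urllib.parse import urlparse


def normalize_base_url(base_url: str) -> str:
    """Mirror of _normalize_base_url: host-only values gain an https:// scheme."""
    normalized = (base_url or "").strip().rstrip("/")
    if not normalized:
        return ""
    if "://" not in normalized:
        normalized = f"https://{normalized}"
    parsed = urlparse(normalized)
    if not parsed.scheme or not parsed.netloc:
        return ""
    return normalized


def normalize_pull_request_head(head: str, owner: str) -> str:
    """Mirror of _normalize_pull_request_head: prefix the owner unless already qualified."""
    normalized = (head or "").strip()
    if not normalized or ":" in normalized:
        return normalized
    return f"{owner}:{normalized}" if owner else normalized


assert normalize_base_url("git.disi.dev") == "https://git.disi.dev"
assert normalize_base_url("https://gitea.yourserver.com/") == "https://gitea.yourserver.com"
assert normalize_pull_request_head("ai/project-123", "ai-software-factory") == "ai-software-factory:ai/project-123"
```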

View File

@@ -0,0 +1,162 @@
"""Home Assistant integration for energy-gated queue processing."""
from __future__ import annotations
try:
from ..config import settings
except ImportError:
from config import settings
class HomeAssistantAgent:
"""Query Home Assistant for queue-processing eligibility and health."""
def __init__(self, base_url: str | None = None, token: str | None = None):
self.base_url = (base_url or settings.home_assistant_url).rstrip('/')
self.token = token or settings.home_assistant_token
def _headers(self) -> dict[str, str]:
return {
'Authorization': f'Bearer {self.token}',
'Content-Type': 'application/json',
}
def _state_url(self, entity_id: str) -> str:
return f'{self.base_url}/api/states/{entity_id}'
async def _get_state(self, entity_id: str) -> dict:
if not self.base_url:
return {'error': 'Home Assistant URL is not configured'}
if not self.token:
return {'error': 'Home Assistant token is not configured'}
if not entity_id:
return {'error': 'Home Assistant entity id is not configured'}
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.get(self._state_url(entity_id), headers=self._headers()) as resp:
payload = await resp.json(content_type=None)
if 200 <= resp.status < 300:
return payload if isinstance(payload, dict) else {'value': payload}
return {'error': payload, 'status_code': resp.status}
except Exception as exc:
return {'error': str(exc)}
def _get_state_sync(self, entity_id: str) -> dict:
if not self.base_url:
return {'error': 'Home Assistant URL is not configured'}
if not self.token:
return {'error': 'Home Assistant token is not configured'}
if not entity_id:
return {'error': 'Home Assistant entity id is not configured'}
try:
import json
import urllib.error
import urllib.request
request = urllib.request.Request(self._state_url(entity_id), headers=self._headers(), method='GET')
with urllib.request.urlopen(request) as response:
body = response.read().decode('utf-8')
return json.loads(body) if body else {}
except urllib.error.HTTPError as exc:
try:
body = exc.read().decode('utf-8')
except Exception:
body = str(exc)
return {'error': body, 'status_code': exc.code}
except Exception as exc:
return {'error': str(exc)}
@staticmethod
def _coerce_float(payload: dict) -> float | None:
raw = payload.get('state') if isinstance(payload, dict) else None
try:
return float(raw)
except Exception:
return None
async def queue_gate_status(self, force: bool = False) -> dict:
"""Return whether queued prompts may be processed now."""
if force or settings.prompt_queue_force_process:
return {
'status': 'success',
'allowed': True,
'forced': True,
'reason': 'Queue override is enabled',
}
battery = await self._get_state(settings.home_assistant_battery_entity_id)
surplus = await self._get_state(settings.home_assistant_surplus_entity_id)
battery_value = self._coerce_float(battery)
surplus_value = self._coerce_float(surplus)
checks = []
if battery.get('error'):
checks.append({'name': 'battery', 'ok': False, 'message': str(battery.get('error')), 'entity_id': settings.home_assistant_battery_entity_id})
else:
checks.append({'name': 'battery', 'ok': battery_value is not None and battery_value >= settings.home_assistant_battery_full_threshold, 'message': f'{battery_value}%', 'entity_id': settings.home_assistant_battery_entity_id})
if surplus.get('error'):
checks.append({'name': 'surplus', 'ok': False, 'message': str(surplus.get('error')), 'entity_id': settings.home_assistant_surplus_entity_id})
else:
checks.append({'name': 'surplus', 'ok': surplus_value is not None and surplus_value >= settings.home_assistant_surplus_threshold_watts, 'message': f'{surplus_value} W', 'entity_id': settings.home_assistant_surplus_entity_id})
allowed = all(check['ok'] for check in checks)
return {
'status': 'success' if allowed else 'blocked',
'allowed': allowed,
'forced': False,
'checks': checks,
'battery_level': battery_value,
'surplus_watts': surplus_value,
'thresholds': {
'battery_full_percent': settings.home_assistant_battery_full_threshold,
'surplus_watts': settings.home_assistant_surplus_threshold_watts,
},
'reason': 'Energy gate open' if allowed else 'Battery or surplus threshold not met',
}
def health_check_sync(self) -> dict:
"""Return current Home Assistant connectivity and queue gate diagnostics."""
if not self.base_url:
return {
'status': 'error',
'message': 'Home Assistant URL is not configured.',
'base_url': '',
'configured': False,
'checks': [],
}
if not self.token:
return {
'status': 'error',
'message': 'Home Assistant token is not configured.',
'base_url': self.base_url,
'configured': False,
'checks': [],
}
battery = self._get_state_sync(settings.home_assistant_battery_entity_id)
surplus = self._get_state_sync(settings.home_assistant_surplus_entity_id)
checks = []
for name, entity_id, payload in (
('battery', settings.home_assistant_battery_entity_id, battery),
('surplus', settings.home_assistant_surplus_entity_id, surplus),
):
checks.append(
{
'name': name,
'entity_id': entity_id,
'ok': not bool(payload.get('error')),
'message': str(payload.get('error') or payload.get('state') or 'ok'),
'status_code': payload.get('status_code'),
'url': self._state_url(entity_id) if entity_id else self.base_url,
}
)
return {
'status': 'success' if all(check['ok'] for check in checks) else 'error',
'message': 'Home Assistant connectivity is healthy.' if all(check['ok'] for check in checks) else 'Home Assistant checks failed.',
'base_url': self.base_url,
'configured': True,
'checks': checks,
'queue_gate': {
'battery_full_percent': settings.home_assistant_battery_full_threshold,
'surplus_watts': settings.home_assistant_surplus_threshold_watts,
'force_process': settings.prompt_queue_force_process,
},
}
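
The energy gate above only allows queue processing when both entity readings clear their thresholds, or when the force override is set. A condensed, synchronous sketch of that decision using the same comparisons as `queue_gate_status`; the threshold defaults below are placeholders, not the project's configured values:

```
def energy_gate_open(
    battery_percent: float | None,
    surplus_watts: float | None,
    battery_full_threshold: float = 95.0,    # placeholder threshold
    surplus_threshold_watts: float = 300.0,  # placeholder threshold
    force: bool = False,
) -> bool:
    """Allow processing when forced, or when both readings meet their thresholds."""
    if force:
        return True
    battery_ok = battery_percent is not None and battery_percent >= battery_full_threshold
    surplus_ok = surplus_watts is not None and surplus_watts >= surplus_threshold_watts
    return battery_ok and surplus_ok


assert energy_gate_open(100.0, 450.0)
assert not energy_gate_open(80.0, 450.0)
assert energy_gate_open(None, None, force=True)
```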

View File

@@ -3,6 +3,8 @@
from __future__ import annotations
import json
from urllib import error as urllib_error
from urllib import request as urllib_request
try:
from .gitea import GiteaAPI
@@ -297,6 +299,27 @@ class LLMServiceClient:
except Exception as exc:
return None, {'error': str(exc)}, str(exc)
@staticmethod
def extract_error_message(trace: dict | None) -> str | None:
"""Extract the most useful provider error message from a trace payload."""
if not isinstance(trace, dict):
return None
raw_response = trace.get('raw_response') if isinstance(trace.get('raw_response'), dict) else {}
provider_response = raw_response.get('provider_response') if isinstance(raw_response.get('provider_response'), dict) else {}
candidate_errors = [
provider_response.get('error'),
raw_response.get('error'),
trace.get('error'),
]
raw_responses = trace.get('raw_responses') if isinstance(trace.get('raw_responses'), list) else []
for payload in reversed(raw_responses):
if isinstance(payload, dict) and payload.get('error'):
candidate_errors.append(payload.get('error'))
for candidate in candidate_errors:
if candidate:
return str(candidate).strip()
return None
def _compose_system_prompt(self, stage: str, stage_prompt: str) -> str:
"""Merge the stage prompt with configured guardrails."""
sections = [stage_prompt.strip()] + self._guardrail_sections(stage)
@@ -392,3 +415,117 @@ class LLMServiceClient:
'max_tool_call_rounds': settings.llm_max_tool_call_rounds,
'gitea_live_tools_configured': bool(settings.gitea_url and settings.gitea_token),
}
def health_check_sync(self) -> dict:
"""Synchronously check Ollama reachability and configured model availability."""
if not self.ollama_url:
return {
'status': 'error',
'message': 'OLLAMA_URL is not configured.',
'ollama_url': 'Not configured',
'model': self.model,
'checks': [],
'suggestion': 'Set OLLAMA_URL to the reachable Ollama base URL.',
}
tags_url = f'{self.ollama_url}/api/tags'
try:
req = urllib_request.Request(tags_url, headers={'User-Agent': 'AI-Software-Factory'}, method='GET')
with urllib_request.urlopen(req, timeout=5) as resp:
raw_body = resp.read().decode('utf-8')
payload = json.loads(raw_body) if raw_body else {}
except urllib_error.HTTPError as exc:
body = exc.read().decode('utf-8') if exc.fp else ''
message = body or str(exc)
return {
'status': 'error',
'message': f'Ollama returned HTTP {exc.code}: {message}',
'ollama_url': self.ollama_url,
'model': self.model,
'checks': [
{
'name': 'api_tags',
'ok': False,
'status_code': exc.code,
'url': tags_url,
'message': message,
}
],
'suggestion': 'Verify OLLAMA_URL points to the Ollama service and that the API is reachable.',
}
except Exception as exc:
return {
'status': 'error',
'message': f'Unable to reach Ollama: {exc}',
'ollama_url': self.ollama_url,
'model': self.model,
'checks': [
{
'name': 'api_tags',
'ok': False,
'status_code': None,
'url': tags_url,
'message': str(exc),
}
],
'suggestion': 'Verify OLLAMA_URL resolves from the running factory process and that Ollama is listening on that address.',
}
models = payload.get('models') if isinstance(payload, dict) else []
model_names: list[str] = []
if isinstance(models, list):
for model_entry in models:
if not isinstance(model_entry, dict):
continue
name = str(model_entry.get('name') or model_entry.get('model') or '').strip()
if name:
model_names.append(name)
requested = (self.model or '').strip()
requested_base = requested.split(':', 1)[0]
model_available = any(
name == requested or name.startswith(f'{requested}:') or name.split(':', 1)[0] == requested_base
for name in model_names
)
checks = [
{
'name': 'api_tags',
'ok': True,
'status_code': 200,
'url': tags_url,
'message': f'Loaded {len(model_names)} model entries.',
},
{
'name': 'configured_model',
'ok': model_available,
'status_code': None,
'url': None,
'message': (
f'Configured model {requested} is available.'
if model_available else
f'Configured model {requested} was not found in Ollama tags.'
),
},
]
if model_available:
return {
'status': 'success',
'message': f'Ollama is reachable and model {requested} is available.',
'ollama_url': self.ollama_url,
'model': requested,
'model_available': True,
'model_count': len(model_names),
'models': model_names[:10],
'checks': checks,
}
return {
'status': 'error',
'message': f'Ollama is reachable, but model {requested} is not available.',
'ollama_url': self.ollama_url,
'model': requested,
'model_available': False,
'model_count': len(model_names),
'models': model_names[:10],
'checks': checks,
'suggestion': f'Pull or configure the model {requested}, or update OLLAMA_MODEL to a model that exists in Ollama.',
}
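
`health_check_sync` above probes Ollama's `/api/tags` listing and matches the configured model, tolerating tag suffixes such as `llama3:latest`. A compact standalone sketch of the same probe; the URL and model below are placeholders:

```
import json
import urllib.request


def ollama_model_available(ollama_url: str, model: str, timeout: int = 5) -> bool:
    """Check /api/tags and report whether the configured model is present."""
    req = urllib.request.Request(f"{ollama_url.rstrip('/')}/api/tags", method="GET")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        payload = json.loads(resp.read().decode("utf-8") or "{}")
    names = [
        str(entry.get("name") or entry.get("model") or "").strip()
        for entry in payload.get("models", [])
        if isinstance(entry, dict)
    ]
    base = model.split(":", 1)[0]
    return any(
        name == model or name.startswith(f"{model}:") or name.split(":", 1)[0] == base
        for name in names if name
    )


if __name__ == "__main__":
    print(ollama_model_available("http://localhost:11434", "llama3"))
```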

View File

@@ -3,9 +3,11 @@
from __future__ import annotations
import difflib
import json
import py_compile
import re
import subprocess
from pathlib import PurePosixPath
from typing import Optional
from datetime import datetime
@@ -14,18 +16,27 @@ try:
from .database_manager import DatabaseManager
from .git_manager import GitManager
from .gitea import GiteaAPI
from .llm_service import LLMServiceClient
from .ui_manager import UIManager
except ImportError:
from config import settings
from agents.database_manager import DatabaseManager
from agents.git_manager import GitManager
from agents.gitea import GiteaAPI
from agents.llm_service import LLMServiceClient
from agents.ui_manager import UIManager
class AgentOrchestrator:
"""Orchestrates the software generation process with full audit trail."""
REMOTE_READY_REPOSITORY_MODES = {'project', 'onboarded'}
REMOTE_READY_REPOSITORY_STATUSES = {'created', 'exists', 'ready', 'onboarded'}
GENERATED_TEXT_FILE_SUFFIXES = {'.py', '.md', '.txt', '.toml', '.yaml', '.yml', '.json', '.ini', '.cfg', '.sh', '.html', '.css', '.js', '.ts'}
GENERATED_TEXT_FILE_NAMES = {'README', 'README.md', '.gitignore', 'requirements.txt', 'pyproject.toml', 'Dockerfile', 'Containerfile', 'Makefile'}
MAX_WORKSPACE_CONTEXT_FILES = 20
MAX_WORKSPACE_CONTEXT_CHARS = 24000
def __init__(
self,
project_id: str,
@@ -62,6 +73,7 @@ class AgentOrchestrator:
self.repo_name_override = repo_name_override
self.existing_history = existing_history
self.changed_files: list[str] = []
self.pending_code_changes: list[dict] = []
self.gitea_api = GiteaAPI(
token=settings.GITEA_TOKEN,
base_url=settings.GITEA_URL,
@@ -76,6 +88,7 @@ class AgentOrchestrator:
self.branch_name = self._build_pr_branch_name(project_id)
self.active_pull_request = None
self._gitea_username: str | None = None
existing_repository: dict | None = None
hinted_issue_number = (related_issue_hint or {}).get('number') if related_issue_hint else None
self.related_issue_number = hinted_issue_number if hinted_issue_number is not None else self._extract_issue_number(prompt_text)
self.related_issue: dict | None = DatabaseManager._normalize_issue(related_issue_hint)
@@ -106,9 +119,14 @@ class AgentOrchestrator:
latest_ui = self.db_manager._get_latest_ui_snapshot_data(self.history.id)
repository = latest_ui.get('repository') if isinstance(latest_ui, dict) else None
if isinstance(repository, dict) and repository:
existing_repository = dict(repository)
self.repo_owner = repository.get('owner') or self.repo_owner
self.repo_name = repository.get('name') or self.repo_name
self.repo_url = repository.get('url') or self.repo_url
git_state = latest_ui.get('git') if isinstance(latest_ui.get('git'), dict) else {}
persisted_active_branch = git_state.get('active_branch')
if persisted_active_branch and persisted_active_branch not in {'main', 'master'}:
self.branch_name = persisted_active_branch
if self.prompt_text:
self.prompt_audit = self.db_manager.log_prompt_submission(
history_id=self.history.id,
@@ -117,6 +135,7 @@ class AgentOrchestrator:
features=self.features,
tech_stack=self.tech_stack,
actor_name=self.prompt_actor,
source=self.prompt_actor,
related_issue={'number': self.related_issue_number} if self.related_issue_number is not None else None,
source_context=self.prompt_source_context,
routing=self.prompt_routing,
@@ -125,22 +144,44 @@ class AgentOrchestrator:
self.ui_manager.ui_data["project_root"] = str(self.project_root)
self.ui_manager.ui_data["features"] = list(self.features)
self.ui_manager.ui_data["tech_stack"] = list(self.tech_stack)
self.ui_manager.ui_data["repository"] = {
repository_ui = {
"owner": self.repo_owner,
"name": self.repo_name,
"mode": "project" if settings.use_project_repositories else "shared",
"status": "pending" if settings.use_project_repositories else "shared",
"provider": "gitea",
}
if existing_repository:
repository_ui.update(existing_repository)
self.ui_manager.ui_data["repository"] = repository_ui
if self.related_issue:
self.ui_manager.ui_data["related_issue"] = self.related_issue
if self.active_pull_request:
self.ui_manager.ui_data["pull_request"] = self.active_pull_request
def _repository_supports_remote_delivery(self, repository: dict | None = None) -> bool:
"""Return whether repository metadata supports git push and PR delivery."""
repo = repository or self.ui_manager.ui_data.get('repository') or {}
return repo.get('mode') in self.REMOTE_READY_REPOSITORY_MODES and repo.get('status') in self.REMOTE_READY_REPOSITORY_STATUSES
def _static_files(self) -> dict[str, str]:
"""Files that do not need prompt-specific generation."""
return {
".gitignore": "__pycache__/\n*.pyc\n.venv/\n.pytest_cache/\n.mypy_cache/\n",
}
def _build_pr_branch_name(self, project_id: str) -> str:
"""Build a stable branch name used until the PR is merged."""
return f"ai/{project_id}"
def _should_use_pull_request_flow(self) -> bool:
"""Return whether this run should deliver changes through a PR branch."""
return self.existing_history is not None or self.active_pull_request is not None
def _delivery_branch_name(self) -> str:
"""Return the git branch used for the current delivery."""
return self.branch_name if self._should_use_pull_request_flow() else 'main'
def _extract_issue_number(self, prompt_text: str | None) -> int | None:
"""Extract an issue reference from prompt text."""
if not prompt_text:
@@ -157,7 +198,7 @@ class AgentOrchestrator:
"""Persist the current generation plan as an inspectable trace."""
if not self.db_manager or not self.history or not self.prompt_audit:
return
planned_files = list(self._template_files().keys())
planned_files = list(self._static_files().keys()) + ['README.md', 'requirements.txt', 'main.py', 'tests/test_app.py']
self.db_manager.log_llm_trace(
project_id=self.project_id,
history_id=self.history.id,
@@ -169,7 +210,7 @@ class AgentOrchestrator:
user_prompt=self.prompt_text or self.description,
assistant_response=(
f"Planned files: {', '.join(planned_files)}. "
f"Target branch: {self.branch_name}. "
f"Target branch: {self._delivery_branch_name()}. "
f"Repository mode: {self.ui_manager.ui_data.get('repository', {}).get('mode', 'unknown')}."
+ (
f" Linked issue: #{self.related_issue.get('number')} {self.related_issue.get('title')}."
@@ -180,13 +221,190 @@ class AgentOrchestrator:
'planned_files': planned_files,
'features': list(self.features),
'tech_stack': list(self.tech_stack),
'branch': self.branch_name,
'branch': self._delivery_branch_name(),
'repository': self.ui_manager.ui_data.get('repository', {}),
'related_issue': self.related_issue,
},
fallback_used=False,
)
def _is_safe_relative_path(self, path: str) -> bool:
"""Return whether a generated file path is safe to write under the project root."""
normalized = str(PurePosixPath((path or '').strip()))
if not normalized or normalized in {'.', '..'}:
return False
if normalized.startswith('/') or normalized.startswith('../') or '/../' in normalized:
return False
if normalized.startswith('.git/'):
return False
return True
def _is_supported_generated_text_file(self, path: str) -> bool:
"""Return whether the generated path is a supported text artifact."""
normalized = PurePosixPath(path)
if normalized.name in self.GENERATED_TEXT_FILE_NAMES:
return True
return normalized.suffix.lower() in self.GENERATED_TEXT_FILE_SUFFIXES
def _collect_workspace_context(self) -> dict:
"""Collect a compact, text-only snapshot of the current project workspace."""
if not self.project_root.exists():
return {'has_existing_files': False, 'files': []}
files: list[dict] = []
total_chars = 0
for path in sorted(self.project_root.rglob('*')):
if not path.is_file():
continue
relative_path = path.relative_to(self.project_root).as_posix()
if relative_path == '.gitignore':
continue
if not self._is_safe_relative_path(relative_path) or not self._is_supported_generated_text_file(relative_path):
continue
try:
content = path.read_text(encoding='utf-8')
except (UnicodeDecodeError, OSError):
continue
remaining_chars = self.MAX_WORKSPACE_CONTEXT_CHARS - total_chars
if remaining_chars <= 0:
break
snippet = content[:remaining_chars]
files.append(
{
'path': relative_path,
'content': snippet,
'truncated': len(snippet) < len(content),
}
)
total_chars += len(snippet)
if len(files) >= self.MAX_WORKSPACE_CONTEXT_FILES:
break
return {'has_existing_files': bool(files), 'files': files}
def _parse_generated_files(self, content: str | None) -> dict[str, str]:
"""Parse an LLM file bundle response into relative-path/content pairs."""
if not content:
return {}
try:
parsed = json.loads(content)
except Exception:
return {}
generated: dict[str, str] = {}
if isinstance(parsed, dict) and isinstance(parsed.get('files'), list):
for item in parsed['files']:
if not isinstance(item, dict):
continue
path = str(item.get('path') or '').strip()
file_content = item.get('content')
if (
self._is_safe_relative_path(path)
and self._is_supported_generated_text_file(path)
and isinstance(file_content, str)
and file_content.strip()
):
generated[path] = file_content.rstrip() + "\n"
elif isinstance(parsed, dict):
for path, file_content in parsed.items():
normalized_path = str(path).strip()
if (
self._is_safe_relative_path(normalized_path)
and self._is_supported_generated_text_file(normalized_path)
and isinstance(file_content, str)
and file_content.strip()
):
generated[normalized_path] = file_content.rstrip() + "\n"
return generated
async def _generate_prompt_driven_files(self) -> tuple[dict[str, str], dict | None, bool]:
"""Use the configured LLM to generate prompt-specific project files."""
workspace_context = self._collect_workspace_context()
has_existing_files = bool(workspace_context.get('has_existing_files'))
if has_existing_files:
system_prompt = (
'You modify an existing software repository. '
'Return only JSON. Update the smallest necessary set of files to satisfy the new prompt. '
'Prefer editing existing files over inventing a new starter app. '
'Only return files that should be written. Omit unchanged files. '
'Use repository-relative paths and do not wrap the JSON in markdown fences.'
)
user_prompt = (
f"Project name: {self.project_name}\n"
f"Description: {self.description}\n"
f"Original prompt: {self.prompt_text or self.description}\n"
f"Requested features: {json.dumps(self.features)}\n"
f"Preferred tech stack: {json.dumps(self.tech_stack)}\n"
f"Related issue: {json.dumps(self.related_issue) if self.related_issue else 'null'}\n\n"
f"Current workspace snapshot:\n{json.dumps(workspace_context['files'], indent=2)}\n\n"
'Return JSON shaped as {"files": [{"path": "relative/path.py", "content": "..."}, ...]}. '
'Each file path must be relative to the repository root.'
)
else:
system_prompt = (
'You generate small but concrete starter projects. '
'Return only JSON. Provide production-like but compact code that directly reflects the user request. '
'Include the files README.md, requirements.txt, main.py, and tests/test_app.py. '
'Use FastAPI for Python web requests unless the prompt clearly demands something else. '
'The test must verify a real behavior from main.py. '
'Do not wrap the JSON in markdown fences.'
)
user_prompt = (
f"Project name: {self.project_name}\n"
f"Description: {self.description}\n"
f"Original prompt: {self.prompt_text or self.description}\n"
f"Requested features: {json.dumps(self.features)}\n"
f"Preferred tech stack: {json.dumps(self.tech_stack)}\n"
f"Related issue: {json.dumps(self.related_issue) if self.related_issue else 'null'}\n\n"
'Return JSON shaped as {"files": [{"path": "README.md", "content": "..."}, ...]}. '
'At minimum include README.md, requirements.txt, main.py, and tests/test_app.py.'
)
content, trace = await LLMServiceClient().chat_with_trace(
stage='generation_plan',
system_prompt=system_prompt,
user_prompt=user_prompt,
tool_context_input={
'project_id': self.project_id,
'project_name': self.project_name,
'repository': self.ui_manager.ui_data.get('repository'),
'related_issue': self.related_issue,
'workspace_files': workspace_context.get('files', []),
},
expect_json=True,
)
raw_generated_paths = self._extract_raw_generated_paths(content)
generated_files = self._parse_generated_files(content)
accepted_paths = list(generated_files.keys())
rejected_paths = [path for path in raw_generated_paths if path not in accepted_paths]
generation_debug = {
'raw_paths': raw_generated_paths,
'accepted_paths': accepted_paths,
'rejected_paths': rejected_paths,
'existing_workspace': has_existing_files,
}
self.ui_manager.ui_data['generation_debug'] = generation_debug
self._append_log(
'LLM returned file candidates: '
f"raw={raw_generated_paths or []}; accepted={accepted_paths or []}; rejected={rejected_paths or []}."
)
self._log_system_debug(
'generation',
'LLM file candidates '
f"raw={raw_generated_paths or []}; accepted={accepted_paths or []}; rejected={rejected_paths or []}; "
f"existing_workspace={has_existing_files}",
)
if not content:
detail = LLMServiceClient.extract_error_message(trace)
if detail:
raise RuntimeError(f'LLM code generation failed: {detail}')
raise RuntimeError('LLM code generation did not return a usable response.')
if not generated_files:
raise RuntimeError('LLM code generation did not return any writable files.')
if not has_existing_files:
required_files = {'README.md', 'requirements.txt', 'main.py', 'tests/test_app.py'}
missing_files = sorted(required_files - set(generated_files))
if missing_files:
raise RuntimeError(f"LLM code generation omitted required starter files: {', '.join(missing_files)}")
return generated_files, trace, has_existing_files
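
The generation stage above expects the LLM to return a JSON file bundle and filters it through path-safety and file-type checks before anything is written. A reduced sketch of that contract, mirroring `_is_safe_relative_path` and `_parse_generated_files` from this diff (the suffix set here is a subset of the orchestrator's):

```
import json
from pathlib import PurePosixPath

TEXT_SUFFIXES = {".py", ".md", ".txt", ".toml", ".yaml", ".yml", ".json"}


def is_safe_relative_path(path: str) -> bool:
    """Reject absolute paths, parent traversal, and .git/ writes, as in the diff."""
    normalized = str(PurePosixPath((path or "").strip()))
    if not normalized or normalized in {".", ".."}:
        return False
    if normalized.startswith(("/", "../")) or "/../" in normalized:
        return False
    return not normalized.startswith(".git/")


def parse_file_bundle(content: str) -> dict[str, str]:
    """Parse {"files": [{"path": ..., "content": ...}]} into path/content pairs."""
    try:
        parsed = json.loads(content)
    except Exception:
        return {}
    files: dict[str, str] = {}
    for item in parsed.get("files", []) if isinstance(parsed, dict) else []:
        if not isinstance(item, dict):
            continue
        path = str(item.get("path") or "").strip()
        body = item.get("content")
        if (
            is_safe_relative_path(path)
            and PurePosixPath(path).suffix.lower() in TEXT_SUFFIXES
            and isinstance(body, str)
            and body.strip()
        ):
            files[path] = body.rstrip() + "\n"
    return files


bundle = '{"files": [{"path": "main.py", "content": "print(1)"}, {"path": "../evil.py", "content": "x"}]}'
assert list(parse_file_bundle(bundle)) == ["main.py"]
```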
async def _sync_issue_context(self) -> None:
"""Sync repository issues and resolve a linked issue from the prompt when present."""
if not self.db_manager or not self.history:
@@ -211,6 +429,14 @@ class AgentOrchestrator:
self.db_manager.attach_issue_to_prompt(self.prompt_audit.id, self.related_issue)
async def _ensure_remote_repository(self) -> None:
repository = self.ui_manager.ui_data.get("repository") or {}
if self._repository_supports_remote_delivery(repository):
repository.setdefault("provider", "gitea")
repository.setdefault("status", "ready")
if repository.get("url"):
self.repo_url = repository.get("url")
self.ui_manager.ui_data["repository"] = repository
return
if not settings.use_project_repositories:
self.ui_manager.ui_data["repository"]["status"] = "shared"
if settings.gitea_repo:
@@ -302,9 +528,7 @@ class AgentOrchestrator:
async def _push_branch(self, branch: str) -> dict | None:
"""Push a branch to the configured project repository when available."""
repository = self.ui_manager.ui_data.get('repository') or {}
if repository.get('mode') != 'project':
return None
if repository.get('status') not in {'created', 'exists', 'ready'}:
if not self._repository_supports_remote_delivery(repository):
return None
if not settings.gitea_token or not self.repo_owner or not self.repo_name:
return None
@@ -339,11 +563,15 @@ class AgentOrchestrator:
self.ui_manager.ui_data.setdefault('git', {})['remote_error'] = str(exc)
self._append_log(f'Initial main push skipped: {exc}')
if self.git_manager.branch_exists(self.branch_name):
self.git_manager.checkout_branch(self.branch_name)
delivery_branch = self._delivery_branch_name()
if self._should_use_pull_request_flow():
if self.git_manager.branch_exists(self.branch_name):
self.git_manager.checkout_branch(self.branch_name)
else:
self.git_manager.checkout_branch(self.branch_name, create=True, start_point='main')
else:
self.git_manager.checkout_branch(self.branch_name, create=True, start_point='main')
self.ui_manager.ui_data.setdefault('git', {})['active_branch'] = self.branch_name
self.git_manager.checkout_branch('main')
self.ui_manager.ui_data.setdefault('git', {})['active_branch'] = delivery_branch
async def _ensure_pull_request(self) -> dict | None:
"""Create the project pull request on first delivery and reuse it later."""
@@ -351,7 +579,7 @@ class AgentOrchestrator:
self.ui_manager.ui_data['pull_request'] = self.active_pull_request
return self.active_pull_request
repository = self.ui_manager.ui_data.get('repository') or {}
if repository.get('mode') != 'project' or repository.get('status') not in {'created', 'exists', 'ready'}:
if not self._repository_supports_remote_delivery(repository):
return None
title = f"AI delivery for {self.project_name}"
@@ -360,6 +588,16 @@ class AgentOrchestrator:
f"Prompt: {self.prompt_text or self.description}\n\n"
f"Branch: {self.branch_name}"
)
pull_request_debug = self.ui_manager.ui_data.setdefault('git', {}).setdefault('pull_request_debug', {})
pull_request_request = {
'owner': self.repo_owner,
'repo': self.repo_name,
'title': title,
'body': body,
'base': 'main',
'head': self.gitea_api._normalize_pull_request_head(self.branch_name, self.repo_owner) or self.branch_name,
}
pull_request_debug['request'] = pull_request_request
result = await self.gitea_api.create_pull_request(
title=title,
body=body,
@@ -368,7 +606,9 @@ class AgentOrchestrator:
base='main',
head=self.branch_name,
)
pull_request_debug['response'] = result
if result.get('error'):
pull_request_debug['status'] = 'error'
raise RuntimeError(f"Unable to create pull request: {result.get('error')}")
pr_number = result.get('number') or result.get('id') or 0
@@ -383,6 +623,8 @@ class AgentOrchestrator:
'merged': bool(result.get('merged')),
'pr_state': result.get('state', 'open'),
}
pull_request_debug['status'] = 'created'
pull_request_debug['resolved'] = pr_data
if self.db_manager and self.history:
self.db_manager.save_pr_data(self.history.id, pr_data)
self.active_pull_request = self.db_manager.get_open_pull_request(project_id=self.project_id) if self.db_manager else pr_data
@@ -392,20 +634,19 @@ class AgentOrchestrator:
async def _push_remote_commit(self, commit_hash: str, commit_message: str, changed_files: list[str], base_commit: str | None) -> dict | None:
"""Push the local commit to the provisioned Gitea repository and build browser links."""
repository = self.ui_manager.ui_data.get("repository") or {}
if repository.get("mode") != "project":
if not self._repository_supports_remote_delivery(repository):
return None
if repository.get("status") not in {"created", "exists", "ready"}:
return None
push_result = await self._push_branch(self.branch_name)
delivery_branch = self._delivery_branch_name()
push_result = await self._push_branch(delivery_branch)
if push_result is None:
return None
pull_request = await self._ensure_pull_request()
pull_request = await self._ensure_pull_request() if self._should_use_pull_request_flow() else None
commit_url = self.gitea_api.build_commit_url(commit_hash, owner=self.repo_owner, repo=self.repo_name)
compare_url = self.gitea_api.build_compare_url(base_commit, commit_hash, owner=self.repo_owner, repo=self.repo_name) if base_commit else None
remote_record = {
"status": "pushed",
"remote": push_result.get('remote'),
"branch": self.branch_name,
"branch": delivery_branch,
"commit_url": commit_url,
"compare_url": compare_url,
"changed_files": changed_files,
@@ -415,7 +656,10 @@ class AgentOrchestrator:
repository["last_commit_url"] = commit_url
if compare_url:
repository["last_compare_url"] = compare_url
self._append_log(f"Pushed generated commit to {self.repo_owner}/{self.repo_name}.")
if pull_request:
self._append_log(f"Pushed generated commit to {self.repo_owner}/{self.repo_name} and updated the delivery pull request.")
else:
self._append_log(f"Pushed generated commit directly to {self.repo_owner}/{self.repo_name} on {delivery_branch}.")
return remote_record
def _build_diff_text(self, relative_path: str, previous_content: str, new_content: str) -> str:
@@ -436,6 +680,35 @@ class AgentOrchestrator:
if self.db_manager and self.history:
self.db_manager._log_action(self.history.id, "INFO", message)
def _log_system_debug(self, component: str, message: str, level: str = 'INFO') -> None:
"""Persist a system-level debug breadcrumb for generation and git decisions."""
if not self.db_manager:
return
self.db_manager.log_system_event(component=component, level=level, message=f"{self.project_id}: {message}")
def _extract_raw_generated_paths(self, content: str | None) -> list[str]:
"""Return all file paths proposed by the LLM response before safety filtering."""
if not content:
return []
try:
parsed = json.loads(content)
except Exception:
return []
raw_paths: list[str] = []
if isinstance(parsed, dict) and isinstance(parsed.get('files'), list):
for item in parsed['files']:
if not isinstance(item, dict):
continue
path = str(item.get('path') or '').strip()
if path:
raw_paths.append(path)
elif isinstance(parsed, dict):
for path in parsed.keys():
normalized_path = str(path).strip()
if normalized_path:
raw_paths.append(normalized_path)
return raw_paths
def _update_progress(self, progress: int, step: str, message: str) -> None:
self.progress = progress
self.current_step = step
@@ -454,50 +727,20 @@ class AgentOrchestrator:
target.parent.mkdir(parents=True, exist_ok=True)
change_type = "UPDATE" if target.exists() else "CREATE"
previous_content = target.read_text(encoding="utf-8") if target.exists() else ""
if previous_content == content:
return
diff_text = self._build_diff_text(relative_path, previous_content, content)
target.write_text(content, encoding="utf-8")
self.changed_files.append(relative_path)
if self.db_manager and self.history:
self.db_manager.log_code_change(
project_id=self.project_id,
change_type=change_type,
file_path=relative_path,
actor="orchestrator",
actor_type="agent",
details=f"{change_type.title()}d generated artifact {relative_path}",
history_id=self.history.id,
prompt_id=self.prompt_audit.id if self.prompt_audit else None,
diff_summary=f"Wrote {len(content.splitlines())} lines to {relative_path}",
diff_text=diff_text,
)
def _template_files(self) -> dict[str, str]:
feature_section = "\n".join(f"- {feature}" for feature in self.features) or "- None specified"
tech_section = "\n".join(f"- {tech}" for tech in self.tech_stack) or "- Python"
return {
".gitignore": "__pycache__/\n*.pyc\n.venv/\n.pytest_cache/\n.mypy_cache/\n",
"README.md": (
f"# {self.project_name}\n\n"
f"{self.description}\n\n"
"## Features\n"
f"{feature_section}\n\n"
"## Tech Stack\n"
f"{tech_section}\n"
),
"requirements.txt": "fastapi\nuvicorn\npytest\n",
"main.py": (
"from fastapi import FastAPI\n\n"
"app = FastAPI(title=\"Generated App\")\n\n"
"@app.get('/')\n"
"def read_root():\n"
f" return {{'name': '{self.project_name}', 'status': 'generated', 'features': {self.features!r}}}\n"
),
"tests/test_app.py": (
"from main import read_root\n\n"
"def test_read_root():\n"
f" assert read_root()['name'] == '{self.project_name}'\n"
),
}
self.pending_code_changes.append(
{
'change_type': change_type,
'file_path': relative_path,
'details': f"{change_type.title()}d generated artifact {relative_path}",
'diff_summary': f"Wrote {len(content.splitlines())} lines to {relative_path}",
'diff_text': diff_text,
}
)
async def run(self) -> dict:
"""Run the software generation process with full audit logging."""
@@ -588,18 +831,34 @@ class AgentOrchestrator:
async def _create_project_structure(self) -> None:
"""Create initial project structure."""
self.project_root.mkdir(parents=True, exist_ok=True)
for relative_path, content in self._template_files().items():
if relative_path.startswith("main.py") or relative_path.startswith("tests/"):
continue
for relative_path, content in self._static_files().items():
self._write_file(relative_path, content)
self._append_log(f"Project structure created under {self.project_root}.")
async def _generate_code(self) -> None:
"""Generate code using Ollama."""
for relative_path, content in self._template_files().items():
if relative_path in {"main.py", "tests/test_app.py"}:
self._write_file(relative_path, content)
self._append_log("Application entrypoint and smoke test generated.")
change_count_before = len(self.pending_code_changes)
generated_files, trace, editing_existing_workspace = await self._generate_prompt_driven_files()
for relative_path, content in generated_files.items():
self._write_file(relative_path, content)
if editing_existing_workspace and len(self.pending_code_changes) == change_count_before:
raise RuntimeError('The LLM response did not produce any file changes for the existing project.')
fallback_used = bool(trace and trace.get('fallback_used'))
if self.db_manager and self.history and self.prompt_audit and trace:
self.db_manager.log_llm_trace(
project_id=self.project_id,
history_id=self.history.id,
prompt_id=self.prompt_audit.id,
stage='code_generation',
provider=trace.get('provider', 'ollama'),
model=trace.get('model', settings.OLLAMA_MODEL),
system_prompt=trace.get('system_prompt', ''),
user_prompt=trace.get('user_prompt', self.prompt_text or self.description),
assistant_response=trace.get('assistant_response', ''),
raw_response=trace.get('raw_response'),
fallback_used=fallback_used,
)
self._append_log('Application files generated from the prompt with the configured LLM.')
async def _run_tests(self) -> None:
"""Run tests for the generated code."""
@@ -610,11 +869,25 @@ class AgentOrchestrator:
async def _commit_to_git(self) -> None:
"""Commit changes to git."""
unique_files = list(dict.fromkeys(self.changed_files))
git_debug = self.ui_manager.ui_data.setdefault('git', {})
if not unique_files:
git_debug.update({
'commit_status': 'skipped',
'early_exit_reason': 'changed_files_empty',
'candidate_files': [],
})
self._append_log('Git commit skipped: no generated files were marked as changed.')
self._log_system_debug('git', 'Commit exited early because changed_files was empty.')
return
if not self.git_manager.is_git_available():
self.ui_manager.ui_data.setdefault('git', {})['error'] = 'git executable is not available in PATH'
git_debug.update({
'commit_status': 'error',
'early_exit_reason': 'git_unavailable',
'candidate_files': unique_files,
'error': 'git executable is not available in PATH',
})
self._append_log('Git commit skipped: git executable is not available in PATH')
self._log_system_debug('git', 'Commit exited early because git is unavailable.', level='ERROR')
return
try:
@@ -622,7 +895,23 @@ class AgentOrchestrator:
self.git_manager.init_repo()
base_commit = self.git_manager.current_head_or_none()
self.git_manager.add_files(unique_files)
if not self.git_manager.get_status():
status_after_add = self.git_manager.get_status()
if not status_after_add:
git_debug.update({
'commit_status': 'skipped',
'early_exit_reason': 'clean_after_staging',
'candidate_files': unique_files,
'status_after_add': '',
})
self._append_log(
'Git commit skipped: working tree was clean after staging candidate files '
f'{unique_files}. No repository diff was created.'
)
self._log_system_debug(
'git',
'Commit exited early because git status was clean after staging '
f'files={unique_files}',
)
return
commit_message = f"AI generation for prompt: {self.project_name}"
@@ -633,13 +922,19 @@ class AgentOrchestrator:
"files": unique_files,
"timestamp": datetime.utcnow().isoformat(),
"scope": "local",
"branch": self.branch_name,
"branch": self._delivery_branch_name(),
}
git_debug.update({
'commit_status': 'committed',
'early_exit_reason': None,
'candidate_files': unique_files,
'status_after_add': status_after_add,
})
remote_record = None
try:
remote_record = await self._push_remote_commit(commit_hash, commit_message, unique_files, base_commit)
except (RuntimeError, subprocess.CalledProcessError, FileNotFoundError) as remote_exc:
self.ui_manager.ui_data.setdefault("git", {})["remote_error"] = str(remote_exc)
git_debug["remote_error"] = str(remote_exc)
self._append_log(f"Remote git push skipped: {remote_exc}")
if remote_record:
@@ -649,8 +944,8 @@ class AgentOrchestrator:
if remote_record.get('pull_request'):
commit_record['pull_request'] = remote_record['pull_request']
self.ui_manager.ui_data['pull_request'] = remote_record['pull_request']
self.ui_manager.ui_data.setdefault("git", {})["latest_commit"] = commit_record
self.ui_manager.ui_data.setdefault("git", {})["commits"] = [commit_record]
git_debug["latest_commit"] = commit_record
git_debug["commits"] = [commit_record]
self._append_log(f"Recorded git commit {commit_hash[:12]} for generated files.")
if self.db_manager:
self.db_manager.log_commit(
@@ -668,6 +963,23 @@ class AgentOrchestrator:
remote_status=remote_record.get("status") if remote_record else "local-only",
related_issue=self.related_issue,
)
for change in self.pending_code_changes:
self.db_manager.log_code_change(
project_id=self.project_id,
change_type=change['change_type'],
file_path=change['file_path'],
actor='orchestrator',
actor_type='agent',
details=change['details'],
history_id=self.history.id if self.history else None,
prompt_id=self.prompt_audit.id if self.prompt_audit else None,
diff_summary=change.get('diff_summary'),
diff_text=change.get('diff_text'),
commit_hash=commit_hash,
remote_status=remote_record.get('status') if remote_record else 'local-only',
branch=self.branch_name,
)
self.pending_code_changes.clear()
if self.related_issue:
self.db_manager.log_issue_work(
project_id=self.project_id,
@@ -679,7 +991,12 @@ class AgentOrchestrator:
commit_url=remote_record.get('commit_url') if remote_record else None,
)
except (RuntimeError, subprocess.CalledProcessError, FileNotFoundError) as exc:
self.ui_manager.ui_data.setdefault("git", {})["error"] = str(exc)
git_debug.update({
'commit_status': 'error',
'early_exit_reason': 'commit_exception',
'candidate_files': unique_files,
'error': str(exc),
})
self._append_log(f"Git commit skipped: {exc}")
async def _create_pr(self) -> None:

View File

@@ -18,6 +18,17 @@ except ImportError:
class RequestInterpreter:
"""Use Ollama to turn free-form text into a structured software request."""
REQUEST_PREFIX_WORDS = {
'a', 'an', 'app', 'application', 'build', 'create', 'dashboard', 'develop', 'design', 'for', 'generate',
'internal', 'make', 'me', 'modern', 'need', 'new', 'our', 'platform', 'please', 'project', 'service',
'simple', 'site', 'start', 'system', 'the', 'tool', 'us', 'want', 'web', 'website', 'with',
}
REPO_NOISE_WORDS = REQUEST_PREFIX_WORDS | {'and', 'from', 'into', 'on', 'that', 'this', 'to'}
GENERIC_PROJECT_NAME_WORDS = {
'app', 'application', 'harness', 'platform', 'project', 'purpose', 'service', 'solution', 'suite', 'system', 'test', 'tool',
}
def __init__(self, ollama_url: str | None = None, model: str | None = None):
self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
self.model = model or settings.OLLAMA_MODEL
@@ -77,45 +88,34 @@ class RequestInterpreter:
},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
interpreted = self._normalize_interpreted_request(parsed, normalized)
routing = self._normalize_routing(parsed.get('routing'), interpreted, compact_context)
naming_trace = None
if routing.get('intent') == 'new_project':
interpreted, routing, naming_trace = await self._refine_new_project_identity(
prompt_text=normalized,
interpreted=interpreted,
routing=routing,
context=compact_context,
)
trace['routing'] = routing
trace['context_excerpt'] = compact_context
if naming_trace is not None:
trace['project_naming'] = naming_trace
return interpreted, trace
except Exception:
pass
if not content:
detail = self.llm_client.extract_error_message(trace)
if detail:
raise RuntimeError(f'LLM request interpretation failed: {detail}')
raise RuntimeError('LLM request interpretation did not return a usable response.')
interpreted, routing = self._heuristic_fallback(normalized, compact_context)
try:
parsed = json.loads(content)
except Exception as exc:
raise RuntimeError('LLM request interpretation did not return valid JSON.') from exc
interpreted = self._normalize_interpreted_request(parsed)
routing = self._normalize_routing(parsed.get('routing'), interpreted, compact_context)
if routing.get('intent') == 'continue_project' and routing.get('project_name'):
interpreted['name'] = routing['project_name']
naming_trace = None
if routing.get('intent') == 'new_project':
constraints = await self._collect_project_identity_constraints(compact_context)
routing['repo_name'] = self._ensure_unique_repo_name(routing.get('repo_name') or interpreted.get('name') or 'project', constraints['repo_names'])
return interpreted, {
'stage': 'request_interpretation',
'provider': 'heuristic',
'model': self.model,
'system_prompt': system_prompt,
'user_prompt': user_prompt,
'assistant_response': json.dumps({'request': interpreted, 'routing': routing}),
'raw_response': {'fallback': 'heuristic', 'llm_trace': trace.get('raw_response') if isinstance(trace, dict) else None},
'routing': routing,
'context_excerpt': compact_context,
'guardrails': trace.get('guardrails') if isinstance(trace, dict) else [],
'tool_context': trace.get('tool_context') if isinstance(trace, dict) else [],
'fallback_used': True,
}
interpreted, routing, naming_trace = await self._refine_new_project_identity(
prompt_text=normalized,
interpreted=interpreted,
routing=routing,
context=compact_context,
)
trace['routing'] = routing
trace['context_excerpt'] = compact_context
if naming_trace is not None:
trace['project_naming'] = naming_trace
return interpreted, trace
async def _refine_new_project_identity(
self,
@@ -143,24 +143,22 @@ class RequestInterpreter:
},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
project_name, repo_name = self._normalize_project_identity(
parsed,
fallback_name=interpreted.get('name') or self._derive_name(prompt_text),
)
repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
interpreted['name'] = project_name
routing['project_name'] = project_name
routing['repo_name'] = repo_name
return interpreted, routing, trace
except Exception:
pass
if not content:
detail = self.llm_client.extract_error_message(trace)
if detail:
raise RuntimeError(f'LLM project naming failed: {detail}')
raise RuntimeError('LLM project naming did not return a usable response.')
fallback_name = interpreted.get('name') or self._derive_name(prompt_text)
routing['project_name'] = fallback_name
routing['repo_name'] = self._ensure_unique_repo_name(self._derive_repo_name(fallback_name), constraints['repo_names'])
try:
parsed = json.loads(content)
except Exception as exc:
raise RuntimeError('LLM project naming did not return valid JSON.') from exc
project_name, repo_name = self._normalize_project_identity(parsed)
repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
interpreted['name'] = project_name
routing['project_name'] = project_name
routing['repo_name'] = repo_name
return interpreted, routing, trace
async def _collect_project_identity_constraints(self, context: dict) -> dict[str, set[str]]:
@@ -190,17 +188,19 @@ class RequestInterpreter:
return set()
return {str(repo.get('name')).strip() for repo in repos if repo.get('name')}
def _normalize_interpreted_request(self, interpreted: dict, original_prompt: str) -> dict:
def _normalize_interpreted_request(self, interpreted: dict) -> dict:
"""Normalize LLM output into the required request shape."""
request_payload = interpreted.get('request') if isinstance(interpreted.get('request'), dict) else interpreted
name = str(interpreted.get('name') or '').strip() or self._derive_name(original_prompt)
if isinstance(request_payload, dict):
name = str(request_payload.get('name') or '').strip() or self._derive_name(original_prompt)
description = str((request_payload or {}).get('description') or '').strip() or original_prompt[:255]
features = self._normalize_list((request_payload or {}).get('features'))
tech_stack = self._normalize_list((request_payload or {}).get('tech_stack'))
if not features:
features = ['core workflow based on free-form request']
if not isinstance(request_payload, dict):
raise RuntimeError('LLM request interpretation did not include a request object.')
name = str(request_payload.get('name') or '').strip()
description = str(request_payload.get('description') or '').strip()
if not name:
raise RuntimeError('LLM request interpretation did not provide a project name.')
if not description:
raise RuntimeError('LLM request interpretation did not provide a project description.')
features = self._normalize_list(request_payload.get('features'))
tech_stack = self._normalize_list(request_payload.get('tech_stack'))
return {
'name': name[:255],
'description': description[:255],
@@ -234,6 +234,9 @@ class RequestInterpreter:
def _normalize_routing(self, routing: dict | None, interpreted: dict, context: dict) -> dict:
"""Normalize routing metadata returned by the LLM."""
routing = routing or {}
intent = str(routing.get('intent') or '').strip()
if intent not in {'new_project', 'continue_project'}:
raise RuntimeError('LLM request interpretation did not provide a valid routing intent.')
project_id = routing.get('project_id')
project_name = routing.get('project_name')
issue_number = routing.get('issue_number')
@@ -242,25 +245,32 @@ class RequestInterpreter:
elif isinstance(issue_number, str) and issue_number.isdigit():
issue_number = int(issue_number)
matched_project = None
for project in context.get('projects', []):
if project_id and project.get('project_id') == project_id:
matched_project = project
break
if project_name and project.get('name') == project_name:
matched_project = project
break
intent = str(routing.get('intent') or '').strip() or ('continue_project' if matched_project else 'new_project')
if intent == 'continue_project':
for project in context.get('projects', []):
if project_id and project.get('project_id') == project_id:
matched_project = project
break
if project_name and project.get('name') == project_name:
matched_project = project
break
elif project_id:
matched_project = next(
(project for project in context.get('projects', []) if project.get('project_id') == project_id),
None,
)
if intent == 'continue_project' and matched_project is None:
raise RuntimeError('LLM selected continue_project without identifying a tracked project from prompt history.')
if intent == 'new_project' and matched_project is not None:
raise RuntimeError('LLM selected new_project while also pointing at an existing tracked project.')
normalized = {
'intent': intent,
'project_id': matched_project.get('project_id') if matched_project else project_id,
'project_name': matched_project.get('name') if matched_project else (project_name or interpreted.get('name')),
'repo_name': routing.get('repo_name') if intent == 'new_project' else None,
'repo_name': str(routing.get('repo_name') or '').strip() or None if intent == 'new_project' else None,
'issue_number': issue_number,
'confidence': routing.get('confidence') or ('medium' if matched_project else 'low'),
'reasoning_summary': routing.get('reasoning_summary') or ('Matched prior project context' if matched_project else 'No strong prior project match found'),
'confidence': routing.get('confidence') or 'medium',
'reasoning_summary': routing.get('reasoning_summary') or '',
}
if normalized['intent'] == 'new_project' and not normalized['repo_name']:
normalized['repo_name'] = self._derive_repo_name(normalized['project_name'] or interpreted.get('name') or 'Generated Project')
return normalized
def _normalize_list(self, value) -> list[str]:
@@ -270,37 +280,11 @@ class RequestInterpreter:
return [item.strip() for item in value.split(',') if item.strip()]
return []
def _derive_name(self, prompt_text: str) -> str:
"""Derive a stable project name when the LLM does not provide one."""
first_line = prompt_text.splitlines()[0].strip()
quoted = re.search(r'["\']([^"\']{3,80})["\']', first_line)
if quoted:
return self._humanize_name(quoted.group(1))
noun_phrase = re.search(
r'(?:build|create|start|make|develop|generate|design|need|want)\s+'
r'(?:me\s+|us\s+|an?\s+|the\s+|new\s+|internal\s+|simple\s+|lightweight\s+|modern\s+|web\s+|mobile\s+)*'
r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager))\b',
first_line,
flags=re.IGNORECASE,
)
if noun_phrase:
return self._humanize_name(noun_phrase.group(1))
cleaned = re.sub(r'[^A-Za-z0-9 ]+', ' ', first_line)
stopwords = {
'build', 'create', 'start', 'make', 'develop', 'generate', 'design', 'need', 'want', 'please', 'for', 'our', 'with', 'that', 'this',
'new', 'internal', 'simple', 'modern', 'web', 'mobile', 'app', 'application', 'tool', 'system',
}
tokens = [word for word in cleaned.split() if word and word.lower() not in stopwords]
if tokens:
return self._humanize_name(' '.join(tokens[:4]))
return 'Generated Project'
def _humanize_name(self, raw_name: str) -> str:
"""Normalize a candidate project name into a readable title."""
cleaned = re.sub(r'[^A-Za-z0-9\s-]+', ' ', raw_name).strip(' -')
cleaned = re.sub(r'\s+', ' ', cleaned)
cleaned = self._trim_request_prefix(cleaned)
special_upper = {'api', 'crm', 'erp', 'cms', 'hr', 'it', 'ui', 'qa'}
words = []
for word in cleaned.split()[:6]:
@@ -308,14 +292,70 @@ class RequestInterpreter:
words.append(lowered.upper() if lowered in special_upper else lowered.capitalize())
return ' '.join(words) or 'Generated Project'
def _trim_request_prefix(self, candidate: str) -> str:
"""Remove leading request phrasing from model-produced names and slugs."""
tokens = [token for token in re.split(r'[-\s]+', candidate or '') if token]
while tokens and tokens[0].lower() in self.REQUEST_PREFIX_WORDS:
tokens.pop(0)
trimmed = ' '.join(tokens).strip()
return trimmed or candidate.strip()
def _derive_repo_name(self, project_name: str) -> str:
"""Derive a repository slug from a human-readable project name."""
preferred = (project_name or 'project').strip().lower().replace(' ', '-')
preferred_name = self._trim_request_prefix((project_name or 'project').strip())
preferred = preferred_name.lower().replace(' ', '-')
sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
while '--' in sanitized:
sanitized = sanitized.replace('--', '-')
return sanitized.strip('-') or 'project'
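# Hedged expectations for the slug helper above (illustrative inputs, assuming the
# REQUEST_PREFIX_WORDS trimming defined earlier in this class):
#   _derive_repo_name('Build a New Inventory Tracker')  -> 'inventory-tracker'
#   _derive_repo_name('Team Wiki!!')                     -> 'team-wiki'
#   _derive_repo_name('')                                -> 'project'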
def _should_use_repo_name_candidate(self, candidate: str, project_name: str) -> bool:
"""Return whether a model-proposed repo slug is concise enough to trust directly."""
cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s_-]+', ' ', candidate or '').strip())
if not cleaned:
return False
candidate_tokens = [token.lower() for token in re.split(r'[-\s_]+', cleaned) if token]
if not candidate_tokens:
return False
if len(candidate_tokens) > 6:
return False
noise_count = sum(1 for token in candidate_tokens if token in self.REPO_NOISE_WORDS)
if noise_count >= 2:
return False
if len('-'.join(candidate_tokens)) > 40:
return False
project_tokens = {
token.lower()
for token in re.split(r'[-\s_]+', project_name or '')
if token and token.lower() not in self.REPO_NOISE_WORDS
}
if project_tokens:
overlap = sum(1 for token in candidate_tokens if token in project_tokens)
if overlap == 0:
return False
return True
def _should_use_project_name_candidate(self, candidate: str, fallback_name: str) -> bool:
"""Return whether a model-proposed project title is concrete enough to trust."""
cleaned = self._trim_request_prefix(re.sub(r'[^A-Za-z0-9\s-]+', ' ', candidate or '').strip())
if not cleaned:
return False
candidate_tokens = [token.lower() for token in re.split(r'[-\s]+', cleaned) if token]
if not candidate_tokens:
return False
if len(candidate_tokens) == 1 and candidate_tokens[0] in self.GENERIC_PROJECT_NAME_WORDS:
return False
if all(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
return False
fallback_tokens = {
token.lower() for token in re.split(r'[-\s]+', fallback_name or '') if token and token.lower() not in self.REPO_NOISE_WORDS
}
if fallback_tokens and len(candidate_tokens) <= 2:
overlap = sum(1 for token in candidate_tokens if token in fallback_tokens)
if overlap == 0 and any(token in self.GENERIC_PROJECT_NAME_WORDS for token in candidate_tokens):
return False
return True
def _ensure_unique_repo_name(self, repo_name: str, reserved_names: set[str]) -> str:
"""Choose a repository slug that does not collide with tracked or remote repositories."""
base_name = self._derive_repo_name(repo_name)
@@ -326,69 +366,19 @@ class RequestInterpreter:
suffix += 1
return f'{base_name}-{suffix}'
def _normalize_project_identity(self, payload: dict, fallback_name: str) -> tuple[str, str]:
"""Normalize model-proposed project and repository naming."""
project_name = self._humanize_name(str(payload.get('project_name') or payload.get('name') or fallback_name))
repo_name = self._derive_repo_name(str(payload.get('repo_name') or project_name))
return project_name, repo_name
def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:
"""Fallback request extraction when Ollama is unavailable."""
lowered = prompt_text.lower()
tech_candidates = [
'python', 'fastapi', 'django', 'flask', 'postgresql', 'sqlite', 'react', 'vue', 'nicegui', 'docker'
]
tech_stack = [candidate for candidate in tech_candidates if candidate in lowered]
sentences = [part.strip() for part in re.split(r'[\n\.]+', prompt_text) if part.strip()]
features = sentences[:3] or ['Implement the user request from free-form text']
interpreted = {
'name': self._derive_name(prompt_text),
'description': sentences[0][:255] if sentences else prompt_text[:255],
'features': features,
'tech_stack': tech_stack,
}
routing = self._heuristic_routing(prompt_text, context or {})
if routing.get('project_name'):
interpreted['name'] = routing['project_name']
return interpreted, routing
def _heuristic_routing(self, prompt_text: str, context: dict) -> dict:
"""Best-effort routing when the LLM is unavailable."""
lowered = prompt_text.lower()
explicit_new = any(token in lowered for token in ['new project', 'start a new project', 'create a new project', 'build a new app'])
referenced_issue = self._extract_issue_number(prompt_text)
recent_history = context.get('recent_chat_history', [])
projects = context.get('projects', [])
last_project_id = recent_history[0].get('project_id') if recent_history else None
last_issue = ((recent_history[0].get('related_issue') or {}).get('number') if recent_history else None)
matched_project = None
for project in projects:
name = (project.get('name') or '').lower()
repo = ((project.get('repository') or {}).get('name') or '').lower()
if name and name in lowered:
matched_project = project
break
if repo and repo in lowered:
matched_project = project
break
if matched_project is None and not explicit_new:
follow_up_tokens = ['also', 'continue', 'for this project', 'for that project', 'work on this', 'work on that', 'fix that', 'add this']
if any(token in lowered for token in follow_up_tokens) and last_project_id:
matched_project = next((project for project in projects if project.get('project_id') == last_project_id), None)
issue_number = referenced_issue
if issue_number is None and any(token in lowered for token in ['that issue', 'this issue', 'the issue']) and last_issue is not None:
issue_number = last_issue
intent = 'new_project' if explicit_new or matched_project is None else 'continue_project'
return {
'intent': intent,
'project_id': matched_project.get('project_id') if matched_project else None,
'project_name': matched_project.get('name') if matched_project else self._derive_name(prompt_text),
'repo_name': None if matched_project else self._derive_repo_name(self._derive_name(prompt_text)),
'issue_number': issue_number,
'confidence': 'medium' if matched_project or explicit_new else 'low',
'reasoning_summary': 'Heuristic routing from chat history and project names.',
}
def _normalize_project_identity(self, payload: dict) -> tuple[str, str]:
"""Validate model-proposed project and repository naming."""
project_candidate = str(payload.get('project_name') or payload.get('name') or '').strip()
repo_candidate = str(payload.get('repo_name') or '').strip()
if not project_candidate:
raise RuntimeError('LLM project naming did not provide a project name.')
if not repo_candidate:
raise RuntimeError('LLM project naming did not provide a repository slug.')
if not self._should_use_project_name_candidate(project_candidate, project_candidate):
raise RuntimeError('LLM project naming returned an unusable project name.')
if not self._should_use_repo_name_candidate(repo_candidate, project_candidate):
raise RuntimeError('LLM project naming returned an unusable repository slug.')
return self._humanize_name(project_candidate), self._derive_repo_name(repo_candidate)
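# Hedged expectations for the validator above (illustrative payloads only):
#   {'project_name': 'Fleet Maintenance Tracker', 'repo_name': 'fleet-maintenance-tracker'}
#       -> ('Fleet Maintenance Tracker', 'fleet-maintenance-tracker')
#   {'project_name': 'project', 'repo_name': 'project'}
#       -> RuntimeError, because single generic words are rejected by the checks above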
def _extract_issue_number(self, prompt_text: str) -> int | None:
match = re.search(r'(?:#|issue\s+)(\d+)', prompt_text, flags=re.IGNORECASE)

View File

@@ -4,10 +4,207 @@ import json
import os
from typing import Optional
from pathlib import Path
from urllib.parse import urlparse
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
def _normalize_service_url(value: str, default_scheme: str = "https") -> str:
"""Normalize service URLs so host-only values still become valid absolute URLs."""
normalized = (value or "").strip().rstrip("/")
if not normalized:
return ""
if "://" not in normalized:
normalized = f"{default_scheme}://{normalized}"
parsed = urlparse(normalized)
if not parsed.scheme or not parsed.netloc:
return ""
return normalized
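# Hedged usage sketch for the normalizer above: host-only values gain a scheme,
# trailing slashes are trimmed, and unusable values collapse to ''.
assert _normalize_service_url("gitea.internal.lan") == "https://gitea.internal.lan"
assert _normalize_service_url("http://n8n.local:5678/", default_scheme="http") == "http://n8n.local:5678"
assert _normalize_service_url("   ") == ""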
EDITABLE_LLM_PROMPTS: dict[str, dict[str, str]] = {
'LLM_GUARDRAIL_PROMPT': {
'label': 'Global Guardrails',
'category': 'guardrail',
'description': 'Applied to every outbound external LLM call.',
},
'LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT': {
'label': 'Request Interpretation Guardrails',
'category': 'guardrail',
'description': 'Constrains project routing and continuation selection.',
},
'LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT': {
'label': 'Change Summary Guardrails',
'category': 'guardrail',
'description': 'Constrains factual delivery summaries.',
},
'LLM_PROJECT_NAMING_GUARDRAIL_PROMPT': {
'label': 'Project Naming Guardrails',
'category': 'guardrail',
'description': 'Constrains project display names and repo slugs.',
},
'LLM_PROJECT_NAMING_SYSTEM_PROMPT': {
'label': 'Project Naming System Prompt',
'category': 'system_prompt',
'description': 'Guides the dedicated new-project naming stage.',
},
'LLM_PROJECT_ID_GUARDRAIL_PROMPT': {
'label': 'Project ID Guardrails',
'category': 'guardrail',
'description': 'Constrains stable project id generation.',
},
'LLM_PROJECT_ID_SYSTEM_PROMPT': {
'label': 'Project ID System Prompt',
'category': 'system_prompt',
'description': 'Guides the dedicated project id naming stage.',
},
}
EDITABLE_RUNTIME_SETTINGS: dict[str, dict[str, str]] = {
'HOME_ASSISTANT_BATTERY_ENTITY_ID': {
'label': 'Battery Entity ID',
'category': 'home_assistant',
'description': 'Home Assistant entity used for battery state-of-charge gating.',
'value_type': 'string',
},
'HOME_ASSISTANT_SURPLUS_ENTITY_ID': {
'label': 'Surplus Power Entity ID',
'category': 'home_assistant',
'description': 'Home Assistant entity used for export or surplus power gating.',
'value_type': 'string',
},
'HOME_ASSISTANT_BATTERY_FULL_THRESHOLD': {
'label': 'Battery Full Threshold',
'category': 'home_assistant',
'description': 'Minimum battery percentage required before queued prompts may run.',
'value_type': 'float',
},
'HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS': {
'label': 'Surplus Threshold Watts',
'category': 'home_assistant',
'description': 'Minimum surplus/export power required before queued prompts may run.',
'value_type': 'float',
},
'PROMPT_QUEUE_ENABLED': {
'label': 'Queue Telegram Prompts',
'category': 'prompt_queue',
'description': 'When enabled, Telegram prompts are queued and gated instead of being processed immediately.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_AUTO_PROCESS': {
'label': 'Auto Process Queue',
'category': 'prompt_queue',
'description': 'Let the background worker drain the queue automatically when the gate is open.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_FORCE_PROCESS': {
'label': 'Force Queue Processing',
'category': 'prompt_queue',
'description': 'Bypass the Home Assistant energy gate for queued prompts.',
'value_type': 'boolean',
},
'PROMPT_QUEUE_POLL_INTERVAL_SECONDS': {
'label': 'Queue Poll Interval Seconds',
'category': 'prompt_queue',
'description': 'Polling interval for the background queue worker.',
'value_type': 'integer',
},
'PROMPT_QUEUE_MAX_BATCH_SIZE': {
'label': 'Queue Max Batch Size',
'category': 'prompt_queue',
'description': 'Maximum number of queued prompts processed in one batch.',
'value_type': 'integer',
},
}
def _get_persisted_llm_prompt_override(env_key: str) -> str | None:
"""Load one persisted LLM prompt override from the database when available."""
if env_key not in EDITABLE_LLM_PROMPTS:
return None
try:
try:
from .database import get_db_sync
from .agents.database_manager import DatabaseManager
except ImportError:
from database import get_db_sync
from agents.database_manager import DatabaseManager
db = get_db_sync()
if db is None:
return None
try:
return DatabaseManager(db).get_llm_prompt_override(env_key)
finally:
db.close()
except Exception:
return None
def _resolve_llm_prompt_value(env_key: str, fallback: str) -> str:
"""Resolve one editable prompt from DB override first, then environment/defaults."""
override = _get_persisted_llm_prompt_override(env_key)
if override is not None:
return override.strip()
return (fallback or '').strip()
def _get_persisted_runtime_setting_override(key: str):
"""Load one persisted runtime-setting override from the database when available."""
if key not in EDITABLE_RUNTIME_SETTINGS:
return None
try:
try:
from .database import get_db_sync
from .agents.database_manager import DatabaseManager
except ImportError:
from database import get_db_sync
from agents.database_manager import DatabaseManager
db = get_db_sync()
if db is None:
return None
try:
return DatabaseManager(db).get_runtime_setting_override(key)
finally:
db.close()
except Exception:
return None
def _coerce_runtime_setting_value(key: str, value, fallback):
"""Coerce a persisted runtime setting override into the expected scalar type."""
value_type = EDITABLE_RUNTIME_SETTINGS.get(key, {}).get('value_type')
if value is None:
return fallback
if value_type == 'boolean':
if isinstance(value, bool):
return value
normalized = str(value).strip().lower()
if normalized in {'1', 'true', 'yes', 'on'}:
return True
if normalized in {'0', 'false', 'no', 'off'}:
return False
return bool(fallback)
if value_type == 'integer':
try:
return int(value)
except Exception:
return int(fallback)
if value_type == 'float':
try:
return float(value)
except Exception:
return float(fallback)
return str(value).strip()
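# Hedged usage sketch for the coercion helper above: string overrides persisted in
# the database are converted to the declared value_type, falling back on failure.
assert _coerce_runtime_setting_value('PROMPT_QUEUE_ENABLED', 'yes', False) is True
assert _coerce_runtime_setting_value('PROMPT_QUEUE_MAX_BATCH_SIZE', '5', 1) == 5
assert _coerce_runtime_setting_value('HOME_ASSISTANT_BATTERY_FULL_THRESHOLD', 'oops', 95.0) == 95.0
assert _coerce_runtime_setting_value('PROMPT_QUEUE_ENABLED', None, False) is False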
def _resolve_runtime_setting_value(key: str, fallback):
"""Resolve one editable runtime setting from DB override first, then environment/defaults."""
override = _get_persisted_runtime_setting_override(key)
return _coerce_runtime_setting_value(key, override, fallback)
class Settings(BaseSettings):
"""Application settings loaded from environment variables."""
@@ -36,10 +233,10 @@ class Settings(BaseSettings):
"For summaries: only describe facts present in the provided context and tool outputs. Never claim a repository, commit, or pull request exists unless it is present in the supplied data."
)
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT: str = (
"For project naming: prefer clear, product-like names and repository slugs that match the user's intent. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
"For project naming: prefer clear, product-like names and repository slugs that match the user's concrete deliverable. Avoid abstract or instructional words such as purpose, project, system, app, tool, platform, solution, new, create, or test unless the request truly centers on that exact noun. Base the name on the actual artifact or workflow being built, and avoid copying sentence fragments from the prompt. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
)
LLM_PROJECT_NAMING_SYSTEM_PROMPT: str = (
"You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles. Repo names should be lowercase kebab-case slugs suitable for a Gitea repository name."
"You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles based on the real product, artifact, or workflow being created. Repo names should be lowercase kebab-case slugs derived from that title. Never return generic names like purpose, project, system, app, tool, platform, solution, harness, or test by themselves, and never return a repo_name that is a copied sentence fragment from the prompt. Prefer 2 to 4 specific words when possible."
)
LLM_PROJECT_ID_GUARDRAIL_PROMPT: str = (
"For project ids: produce short stable slugs for newly created projects. Avoid collisions with known project ids and keep ids lowercase with hyphens."
@@ -76,6 +273,19 @@ class Settings(BaseSettings):
TELEGRAM_BOT_TOKEN: str = ""
TELEGRAM_CHAT_ID: str = ""
# Home Assistant and prompt queue settings
HOME_ASSISTANT_URL: str = ""
HOME_ASSISTANT_TOKEN: str = ""
HOME_ASSISTANT_BATTERY_ENTITY_ID: str = ""
HOME_ASSISTANT_SURPLUS_ENTITY_ID: str = ""
HOME_ASSISTANT_BATTERY_FULL_THRESHOLD: float = 95.0
HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS: float = 100.0
PROMPT_QUEUE_ENABLED: bool = False
PROMPT_QUEUE_AUTO_PROCESS: bool = True
PROMPT_QUEUE_FORCE_PROCESS: bool = False
PROMPT_QUEUE_POLL_INTERVAL_SECONDS: int = 60
PROMPT_QUEUE_MAX_BATCH_SIZE: int = 1
# PostgreSQL settings
POSTGRES_HOST: str = "localhost"
POSTGRES_PORT: int = 5432
@@ -163,37 +373,74 @@ class Settings(BaseSettings):
@property
def llm_guardrail_prompt(self) -> str:
"""Get the global guardrail prompt used for all external LLM calls."""
return self.LLM_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_GUARDRAIL_PROMPT', self.LLM_GUARDRAIL_PROMPT)
@property
def llm_request_interpreter_guardrail_prompt(self) -> str:
"""Get the request-interpretation specific guardrail prompt."""
return self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT', self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT)
@property
def llm_change_summary_guardrail_prompt(self) -> str:
"""Get the change-summary specific guardrail prompt."""
return self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT', self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT)
@property
def llm_project_naming_guardrail_prompt(self) -> str:
"""Get the project-naming specific guardrail prompt."""
return self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_GUARDRAIL_PROMPT', self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT)
@property
def llm_project_naming_system_prompt(self) -> str:
"""Get the project-naming system prompt."""
return self.LLM_PROJECT_NAMING_SYSTEM_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_SYSTEM_PROMPT', self.LLM_PROJECT_NAMING_SYSTEM_PROMPT)
@property
def llm_project_id_guardrail_prompt(self) -> str:
"""Get the project-id naming specific guardrail prompt."""
return self.LLM_PROJECT_ID_GUARDRAIL_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_ID_GUARDRAIL_PROMPT', self.LLM_PROJECT_ID_GUARDRAIL_PROMPT)
@property
def llm_project_id_system_prompt(self) -> str:
"""Get the project-id naming system prompt."""
return self.LLM_PROJECT_ID_SYSTEM_PROMPT.strip()
return _resolve_llm_prompt_value('LLM_PROJECT_ID_SYSTEM_PROMPT', self.LLM_PROJECT_ID_SYSTEM_PROMPT)
@property
def editable_llm_prompts(self) -> list[dict[str, str]]:
"""Return metadata for all LLM prompts that may be persisted and edited from the UI."""
prompts = []
for env_key, metadata in EDITABLE_LLM_PROMPTS.items():
prompts.append(
{
'key': env_key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'default_value': (getattr(self, env_key, '') or '').strip(),
'value': _resolve_llm_prompt_value(env_key, getattr(self, env_key, '')),
}
)
return prompts
@property
def editable_runtime_settings(self) -> list[dict]:
"""Return metadata for all DB-editable runtime settings."""
items = []
for key, metadata in EDITABLE_RUNTIME_SETTINGS.items():
default_value = getattr(self, key)
value = _resolve_runtime_setting_value(key, default_value)
items.append(
{
'key': key,
'label': metadata['label'],
'category': metadata['category'],
'description': metadata['description'],
'value_type': metadata['value_type'],
'default_value': default_value,
'value': value,
}
)
return items
@property
def llm_tool_allowlist(self) -> list[str]:
@@ -254,7 +501,7 @@ class Settings(BaseSettings):
@property
def gitea_url(self) -> str:
"""Get Gitea URL with trimmed whitespace."""
return self.GITEA_URL.strip()
return _normalize_service_url(self.GITEA_URL)
@property
def gitea_token(self) -> str:
@@ -279,12 +526,12 @@ class Settings(BaseSettings):
@property
def n8n_webhook_url(self) -> str:
"""Get n8n webhook URL with trimmed whitespace."""
return self.N8N_WEBHOOK_URL.strip()
return _normalize_service_url(self.N8N_WEBHOOK_URL, default_scheme="http")
@property
def n8n_api_url(self) -> str:
"""Get n8n API URL with trimmed whitespace."""
return self.N8N_API_URL.strip()
return _normalize_service_url(self.N8N_API_URL, default_scheme="http")
@property
def n8n_api_key(self) -> str:
@@ -309,7 +556,62 @@ class Settings(BaseSettings):
@property
def backend_public_url(self) -> str:
"""Get backend public URL with trimmed whitespace."""
return self.BACKEND_PUBLIC_URL.strip().rstrip("/")
return _normalize_service_url(self.BACKEND_PUBLIC_URL, default_scheme="http")
@property
def home_assistant_url(self) -> str:
"""Get Home Assistant URL with trimmed whitespace."""
return _normalize_service_url(self.HOME_ASSISTANT_URL, default_scheme="http")
@property
def home_assistant_token(self) -> str:
"""Get Home Assistant token with trimmed whitespace."""
return self.HOME_ASSISTANT_TOKEN.strip()
@property
def home_assistant_battery_entity_id(self) -> str:
"""Get the Home Assistant battery state entity id."""
return str(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_ENTITY_ID', self.HOME_ASSISTANT_BATTERY_ENTITY_ID)).strip()
@property
def home_assistant_surplus_entity_id(self) -> str:
"""Get the Home Assistant surplus power entity id."""
return str(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_ENTITY_ID', self.HOME_ASSISTANT_SURPLUS_ENTITY_ID)).strip()
@property
def home_assistant_battery_full_threshold(self) -> float:
"""Get the minimum battery SoC percentage for queue processing."""
return float(_resolve_runtime_setting_value('HOME_ASSISTANT_BATTERY_FULL_THRESHOLD', self.HOME_ASSISTANT_BATTERY_FULL_THRESHOLD))
@property
def home_assistant_surplus_threshold_watts(self) -> float:
"""Get the minimum export/surplus power threshold for queue processing."""
return float(_resolve_runtime_setting_value('HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS', self.HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS))
@property
def prompt_queue_enabled(self) -> bool:
"""Whether Telegram prompts should be queued instead of processed immediately."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_ENABLED', self.PROMPT_QUEUE_ENABLED))
@property
def prompt_queue_auto_process(self) -> bool:
"""Whether the background worker should automatically process queued prompts."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_AUTO_PROCESS', self.PROMPT_QUEUE_AUTO_PROCESS))
@property
def prompt_queue_force_process(self) -> bool:
"""Whether queued prompts should bypass the Home Assistant energy gate."""
return bool(_resolve_runtime_setting_value('PROMPT_QUEUE_FORCE_PROCESS', self.PROMPT_QUEUE_FORCE_PROCESS))
@property
def prompt_queue_poll_interval_seconds(self) -> int:
"""Get the queue polling interval for background processing."""
return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_POLL_INTERVAL_SECONDS', self.PROMPT_QUEUE_POLL_INTERVAL_SECONDS)), 5)
@property
def prompt_queue_max_batch_size(self) -> int:
"""Get the maximum number of queued prompts to process in one batch."""
return max(int(_resolve_runtime_setting_value('PROMPT_QUEUE_MAX_BATCH_SIZE', self.PROMPT_QUEUE_MAX_BATCH_SIZE)), 1)
@property
def projects_root(self) -> Path:

File diff suppressed because it is too large

View File

@@ -6,7 +6,7 @@ from urllib.parse import urlparse
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine, event, text
from sqlalchemy import create_engine, text
from sqlalchemy.engine import Engine
from sqlalchemy.orm import Session, sessionmaker
@@ -64,20 +64,6 @@ def get_engine() -> Engine:
pool_timeout=settings.DB_POOL_TIMEOUT or 30
)
# Event listener for connection checkout (PostgreSQL only)
if not settings.use_sqlite:
@event.listens_for(engine, "checkout")
def receive_checkout(dbapi_connection, connection_record, connection_proxy):
"""Log connection checkout for audit purposes."""
if settings.LOG_LEVEL in ("DEBUG", "INFO"):
print(f"DB Connection checked out from pool")
@event.listens_for(engine, "checkin")
def receive_checkin(dbapi_connection, connection_record):
"""Log connection checkin for audit purposes."""
if settings.LOG_LEVEL == "DEBUG":
print(f"DB Connection returned to pool")
return engine

View File

@@ -13,6 +13,7 @@ The NiceGUI frontend provides:
from __future__ import annotations
import asyncio
from contextlib import asynccontextmanager
import json
import re
@@ -29,6 +30,7 @@ try:
from . import database as database_module
from .agents.change_summary import ChangeSummaryGenerator
from .agents.database_manager import DatabaseManager
from .agents.home_assistant import HomeAssistantAgent
from .agents.request_interpreter import RequestInterpreter
from .agents.llm_service import LLMServiceClient
from .agents.orchestrator import AgentOrchestrator
@@ -41,6 +43,7 @@ except ImportError:
import database as database_module
from agents.change_summary import ChangeSummaryGenerator
from agents.database_manager import DatabaseManager
from agents.home_assistant import HomeAssistantAgent
from agents.request_interpreter import RequestInterpreter
from agents.llm_service import LLMServiceClient
from agents.orchestrator import AgentOrchestrator
@@ -59,7 +62,16 @@ async def lifespan(_app: FastAPI):
print(
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
)
yield
queue_worker = asyncio.create_task(_prompt_queue_worker())
try:
yield
finally:
if queue_worker is not None:
queue_worker.cancel()
try:
await queue_worker
except asyncio.CancelledError:
pass
app = FastAPI(lifespan=lifespan)
@@ -94,6 +106,26 @@ class FreeformSoftwareRequest(BaseModel):
source: str = 'telegram'
chat_id: str | None = None
chat_type: str | None = None
process_now: bool = False
class PromptQueueProcessRequest(BaseModel):
"""Request body for manual queue processing."""
force: bool = False
limit: int = Field(default=1, ge=1, le=25)
class LLMPromptSettingUpdateRequest(BaseModel):
"""Request body for persisting one editable LLM prompt override."""
value: str = Field(default='')
class RuntimeSettingUpdateRequest(BaseModel):
"""Request body for persisting one editable runtime setting override."""
value: str | bool | int | float | None = None
class GiteaRepositoryOnboardRequest(BaseModel):
@@ -155,7 +187,6 @@ async def _derive_project_id_for_request(
) -> tuple[str, dict | None]:
"""Derive a stable project id for a newly created project."""
reserved_ids = {str(project.get('project_id')).strip() for project in existing_projects if project.get('project_id')}
fallback_id = _ensure_unique_identifier((prompt_routing or {}).get('project_name') or request.name, reserved_ids)
user_prompt = (
f"Original user prompt:\n{prompt_text}\n\n"
f"Structured request:\n{json.dumps({'name': request.name, 'description': request.description, 'features': request.features, 'tech_stack': request.tech_stack}, indent=2)}\n\n"
@@ -170,14 +201,19 @@ async def _derive_project_id_for_request(
tool_context_input={'projects': existing_projects},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
candidate = parsed.get('project_id') or parsed.get('slug') or request.name
return _ensure_unique_identifier(str(candidate), reserved_ids), trace
except Exception:
pass
return fallback_id, trace
if not content:
detail = LLMServiceClient.extract_error_message(trace)
if detail:
raise RuntimeError(f'LLM project id naming failed: {detail}')
raise RuntimeError('LLM project id naming did not return a usable response.')
try:
parsed = json.loads(content)
except Exception as exc:
raise RuntimeError('LLM project id naming did not return valid JSON.') from exc
candidate = str(parsed.get('project_id') or parsed.get('slug') or '').strip()
if not candidate:
raise RuntimeError('LLM project id naming did not provide a project id.')
return _ensure_unique_identifier(candidate, reserved_ids), trace
def _serialize_project(history: ProjectHistory) -> dict:
@@ -209,6 +245,17 @@ def _serialize_project_log(log: ProjectLog) -> dict:
}
def _ensure_summary_mentions_pull_request(summary_message: str, pull_request: dict | None) -> str:
"""Append the pull request URL to chat summaries when one exists."""
if not isinstance(pull_request, dict):
return summary_message
pr_url = (pull_request.get('pr_url') or '').strip()
if not pr_url or pr_url in summary_message:
return summary_message
separator = '' if summary_message.endswith(('.', '!', '?')) else '.'
return f"{summary_message}{separator} Review PR: {pr_url}"
def _serialize_system_log(log: SystemLog) -> dict:
"""Serialize a system log row."""
return {
@@ -239,6 +286,51 @@ def _compose_prompt_text(request: SoftwareRequest) -> str:
)
def _generation_error_payload(
*,
message: str,
request: SoftwareRequest | None = None,
source: dict | None = None,
interpreted_request: dict | None = None,
routing: dict | None = None,
) -> dict:
"""Return a workflow-safe JSON payload for expected generation failures."""
response = {
'status': 'error',
'message': message,
'error': message,
'summary_message': message,
'summary_metadata': {
'provider': None,
'model': None,
'fallback_used': False,
},
'data': {
'history_id': None,
'project_id': None,
'name': request.name if request is not None else (interpreted_request or {}).get('name'),
'description': request.description if request is not None else (interpreted_request or {}).get('description'),
'status': 'error',
'progress': 0,
'message': message,
'current_step': None,
'error_message': message,
'logs': [],
'changed_files': [],
'repository': None,
'pull_request': None,
'summary_message': message,
},
}
if source is not None:
response['source'] = source
if interpreted_request is not None:
response['interpreted_request'] = interpreted_request
if routing is not None:
response['routing'] = routing
return response
async def _run_generation(
request: SoftwareRequest,
db: Session,
@@ -274,7 +366,7 @@ async def _run_generation(
resolved_prompt_text = prompt_text or _compose_prompt_text(request)
if preferred_project_id and reusable_history is not None:
project_id = reusable_history.project_id
elif reusable_history and not is_explicit_new_project and manager.get_open_pull_request(project_id=reusable_history.project_id):
elif reusable_history and not is_explicit_new_project:
project_id = reusable_history.project_id
else:
if is_explicit_new_project or prompt_text:
@@ -316,6 +408,8 @@ async def _run_generation(
response_data = _serialize_project(history)
response_data['logs'] = [_serialize_project_log(log) for log in project_logs]
response_data['ui_data'] = result.get('ui_data')
response_data['generation_debug'] = ((result.get('ui_data') or {}).get('generation_debug'))
response_data['git_debug'] = ((result.get('ui_data') or {}).get('git'))
response_data['features'] = request.features
response_data['tech_stack'] = request.tech_stack
response_data['project_root'] = result.get('project_root', str(_project_root(project_id)))
@@ -357,6 +451,7 @@ async def _run_generation(
'logs': [log.get('message', '') for log in response_data.get('logs', []) if isinstance(log, dict)],
}
summary_message, summary_trace = await ChangeSummaryGenerator().summarize_with_trace(summary_context)
summary_message = _ensure_summary_mentions_pull_request(summary_message, response_data.get('pull_request'))
if orchestrator.db_manager and orchestrator.history and orchestrator.prompt_audit:
orchestrator.db_manager.log_llm_trace(
project_id=project_id,
@@ -372,8 +467,18 @@ async def _run_generation(
fallback_used=summary_trace.get('fallback_used', False),
)
response_data['summary_message'] = summary_message
response_data['summary_metadata'] = {
'provider': summary_trace.get('provider'),
'model': summary_trace.get('model'),
'fallback_used': bool(summary_trace.get('fallback_used')),
}
response_data['pull_request'] = result.get('pull_request') or manager.get_open_pull_request(project_id=project_id)
return {'status': result['status'], 'data': response_data, 'summary_message': summary_message}
return {
'status': result['status'],
'data': response_data,
'summary_message': summary_message,
'summary_metadata': response_data['summary_metadata'],
}
def _project_root(project_id: str) -> Path:
@@ -397,6 +502,281 @@ def _create_gitea_api():
)
def _create_home_assistant_agent() -> HomeAssistantAgent:
"""Create a configured Home Assistant client."""
return HomeAssistantAgent(
base_url=database_module.settings.home_assistant_url,
token=database_module.settings.home_assistant_token,
)
def _get_gitea_health() -> dict:
"""Return current Gitea connectivity diagnostics."""
if not database_module.settings.gitea_url:
return {
'status': 'error',
'message': 'Gitea URL is not configured.',
'base_url': '',
'configured': False,
'checks': [],
}
if not database_module.settings.gitea_token:
return {
'status': 'error',
'message': 'Gitea token is not configured.',
'base_url': database_module.settings.gitea_url,
'configured': False,
'checks': [],
}
response = _create_gitea_api().get_current_user_sync()
if response.get('error'):
return {
'status': 'error',
'message': response.get('error'),
'base_url': database_module.settings.gitea_url,
'configured': True,
'checks': [
{
'name': 'token_auth',
'ok': False,
'message': response.get('error'),
'url': f"{database_module.settings.gitea_url}/api/v1/user",
'status_code': response.get('status_code'),
}
],
}
username = response.get('login') or response.get('username') or response.get('full_name') or 'unknown'
return {
'status': 'success',
'message': f'Authenticated as {username}.',
'base_url': database_module.settings.gitea_url,
'configured': True,
'checks': [
{
'name': 'token_auth',
'ok': True,
'message': f'Authenticated as {username}',
'url': f"{database_module.settings.gitea_url}/api/v1/user",
}
],
'user': username,
}
def _get_home_assistant_health() -> dict:
"""Return current Home Assistant connectivity diagnostics."""
return _create_home_assistant_agent().health_check_sync()
def _get_ollama_health() -> dict:
"""Return current Ollama connectivity diagnostics."""
return LLMServiceClient().health_check_sync()
async def _get_queue_gate_status(force: bool = False) -> dict:
"""Return whether queued prompts may be processed now."""
if not database_module.settings.prompt_queue_enabled:
return {
'status': 'disabled',
'allowed': True,
'forced': False,
'reason': 'Prompt queue is disabled',
}
if not database_module.settings.home_assistant_url:
if force or database_module.settings.prompt_queue_force_process:
return {
'status': 'success',
'allowed': True,
'forced': True,
'reason': 'Queue override is enabled',
}
return {
'status': 'blocked',
'allowed': False,
'forced': False,
'reason': 'Home Assistant URL is not configured',
}
return await _create_home_assistant_agent().queue_gate_status(force=force)
async def _interpret_freeform_request(request: FreeformSoftwareRequest, manager: DatabaseManager) -> tuple[SoftwareRequest, dict, dict]:
"""Interpret a free-form request and return the structured request plus routing trace."""
interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
request.prompt_text,
context=interpreter_context,
)
routing = interpretation_trace.get('routing') or {}
selected_history = manager.get_project_by_id(routing.get('project_id'), include_archived=False) if routing.get('project_id') else None
if selected_history is not None and routing.get('intent') != 'new_project':
interpreted['name'] = selected_history.project_name
interpreted['description'] = selected_history.description or interpreted['description']
return SoftwareRequest(**interpreted), routing, interpretation_trace
async def _run_freeform_generation(
request: FreeformSoftwareRequest,
db: Session,
*,
queue_item_id: int | None = None,
) -> dict:
"""Shared free-form request flow used by direct calls and queued processing."""
manager = DatabaseManager(db)
try:
structured_request, routing, interpretation_trace = await _interpret_freeform_request(request, manager)
response = await _run_generation(
structured_request,
db,
prompt_text=request.prompt_text,
prompt_actor=request.source,
prompt_source_context={
'chat_id': request.chat_id,
'chat_type': request.chat_type,
'queue_item_id': queue_item_id,
},
prompt_routing=routing,
preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
)
project_data = response.get('data', {})
if project_data.get('history_id') is not None:
manager = DatabaseManager(db)
prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
prompt_id = prompts[0]['id'] if prompts else None
manager.log_llm_trace(
project_id=project_data.get('project_id'),
history_id=project_data.get('history_id'),
prompt_id=prompt_id,
stage=interpretation_trace['stage'],
provider=interpretation_trace['provider'],
model=interpretation_trace['model'],
system_prompt=interpretation_trace['system_prompt'],
user_prompt=interpretation_trace['user_prompt'],
assistant_response=interpretation_trace['assistant_response'],
raw_response=interpretation_trace.get('raw_response'),
fallback_used=interpretation_trace.get('fallback_used', False),
)
naming_trace = interpretation_trace.get('project_naming')
if naming_trace:
manager.log_llm_trace(
project_id=project_data.get('project_id'),
history_id=project_data.get('history_id'),
prompt_id=prompt_id,
stage=naming_trace['stage'],
provider=naming_trace['provider'],
model=naming_trace['model'],
system_prompt=naming_trace['system_prompt'],
user_prompt=naming_trace['user_prompt'],
assistant_response=naming_trace['assistant_response'],
raw_response=naming_trace.get('raw_response'),
fallback_used=naming_trace.get('fallback_used', False),
)
response['interpreted_request'] = structured_request.model_dump()
response['routing'] = routing
response['llm_trace'] = interpretation_trace
response['source'] = {
'type': request.source,
'chat_id': request.chat_id,
'chat_type': request.chat_type,
}
if queue_item_id is not None:
DatabaseManager(db).complete_queued_prompt(
queue_item_id,
{
'project_id': project_data.get('project_id'),
'history_id': project_data.get('history_id'),
'status': response.get('status'),
},
)
return response
except Exception as exc:
if queue_item_id is not None:
DatabaseManager(db).fail_queued_prompt(queue_item_id, str(exc))
raise
async def _process_prompt_queue_batch(limit: int = 1, force: bool = False) -> dict:
"""Process up to `limit` queued prompts if the energy gate allows it."""
queue_gate = await _get_queue_gate_status(force=force)
if not queue_gate.get('allowed'):
db = database_module.get_db_sync()
try:
summary = DatabaseManager(db).get_prompt_queue_summary()
finally:
db.close()
return {
'status': queue_gate.get('status', 'blocked'),
'processed_count': 0,
'queue_gate': queue_gate,
'queue_summary': summary,
'processed': [],
}
processed = []
for _ in range(max(limit, 1)):
claim_db = database_module.get_db_sync()
try:
claimed = DatabaseManager(claim_db).claim_next_queued_prompt()
finally:
claim_db.close()
if claimed is None:
break
work_db = database_module.get_db_sync()
try:
request = FreeformSoftwareRequest(
prompt_text=claimed['prompt_text'],
source=claimed['source'] or 'telegram',
chat_id=claimed.get('chat_id'),
chat_type=claimed.get('chat_type'),
process_now=True,
)
response = await _run_freeform_generation(request, work_db, queue_item_id=claimed['id'])
processed.append(
{
'queue_item_id': claimed['id'],
'project_id': (response.get('data') or {}).get('project_id'),
'status': response.get('status'),
}
)
except Exception as exc:
DatabaseManager(work_db).fail_queued_prompt(claimed['id'], str(exc))
processed.append({'queue_item_id': claimed['id'], 'status': 'failed', 'error': str(exc)})
finally:
work_db.close()
summary_db = database_module.get_db_sync()
try:
summary = DatabaseManager(summary_db).get_prompt_queue_summary()
finally:
summary_db.close()
return {
'status': 'success',
'processed_count': len(processed),
'processed': processed,
'queue_gate': queue_gate,
'queue_summary': summary,
}
async def _prompt_queue_worker() -> None:
"""Background worker that drains the prompt queue when the energy gate opens."""
while True:
try:
if database_module.settings.prompt_queue_enabled and database_module.settings.prompt_queue_auto_process:
await _process_prompt_queue_batch(
limit=database_module.settings.prompt_queue_max_batch_size,
force=database_module.settings.prompt_queue_force_process,
)
except Exception as exc:
db = database_module.get_db_sync()
try:
DatabaseManager(db).log_system_event('prompt-queue', 'ERROR', f'Queue worker error: {exc}')
finally:
db.close()
await asyncio.sleep(database_module.settings.prompt_queue_poll_interval_seconds)
def _resolve_n8n_api_url(explicit_url: str | None = None) -> str:
"""Resolve the effective n8n API URL from explicit input or settings."""
if explicit_url and explicit_url.strip():
@@ -420,8 +800,14 @@ def read_api_info():
'/api',
'/health',
'/llm/runtime',
'/llm/prompts',
'/llm/prompts/{prompt_key}',
'/settings/runtime',
'/settings/runtime/{setting_key}',
'/generate',
'/generate/text',
'/queue',
'/queue/process',
'/projects',
'/status/{project_id}',
'/audit/projects',
@@ -442,7 +828,9 @@ def read_api_info():
'/projects/{project_id}/prompts/{prompt_id}/undo',
'/projects/{project_id}/sync-repository',
'/gitea/repos',
'/gitea/health',
'/gitea/repos/onboard',
'/home-assistant/health',
'/n8n/health',
'/n8n/setup',
],
@@ -453,11 +841,31 @@ def read_api_info():
def health_check():
"""Health check endpoint."""
runtime = database_module.get_database_runtime_summary()
queue_summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': 0, 'next_item': None}
db = database_module.get_db_sync()
try:
try:
queue_summary = DatabaseManager(db).get_prompt_queue_summary()
except Exception:
pass
finally:
db.close()
return {
'status': 'healthy',
'database': runtime['backend'],
'database_target': runtime['target'],
'database_name': runtime['database'],
'integrations': {
'ollama': _get_ollama_health(),
'gitea': _get_gitea_health(),
'home_assistant': _get_home_assistant_health(),
},
'prompt_queue': {
'enabled': database_module.settings.prompt_queue_enabled,
'auto_process': database_module.settings.prompt_queue_auto_process,
'force_process': database_module.settings.prompt_queue_force_process,
'summary': queue_summary,
},
}
@@ -467,10 +875,70 @@ def get_llm_runtime():
return LLMServiceClient().get_runtime_configuration()
@app.get('/llm/prompts')
def get_llm_prompt_settings(db: DbSession):
"""Return editable LLM prompt settings with DB overrides merged over environment defaults."""
return {'prompts': DatabaseManager(db).get_llm_prompt_settings()}
@app.put('/llm/prompts/{prompt_key}')
def update_llm_prompt_setting(prompt_key: str, request: LLMPromptSettingUpdateRequest, db: DbSession):
"""Persist one editable LLM prompt override into the database."""
database_module.init_db()
result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, request.value, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Prompt save failed'))
return result
@app.delete('/llm/prompts/{prompt_key}')
def reset_llm_prompt_setting(prompt_key: str, db: DbSession):
"""Reset one editable LLM prompt override back to the environment/default value."""
database_module.init_db()
result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='api')
if result.get('status') == 'error':
raise HTTPException(status_code=400, detail=result.get('message', 'Prompt reset failed'))
return result
@app.get('/settings/runtime')
def get_runtime_settings(db: DbSession):
    """Return editable runtime settings with DB overrides merged over environment defaults."""
    return {'settings': DatabaseManager(db).get_runtime_settings()}
@app.put('/settings/runtime/{setting_key}')
def update_runtime_setting(setting_key: str, request: RuntimeSettingUpdateRequest, db: DbSession):
    """Persist one editable runtime setting override into the database."""
    database_module.init_db()
    result = DatabaseManager(db).save_runtime_setting(setting_key, request.value, actor='api')
    if result.get('status') == 'error':
        raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting save failed'))
    return result
@app.delete('/settings/runtime/{setting_key}')
def reset_runtime_setting(setting_key: str, db: DbSession):
    """Reset one editable runtime setting override back to the environment/default value."""
    database_module.init_db()
    result = DatabaseManager(db).reset_runtime_setting(setting_key, actor='api')
    if result.get('status') == 'error':
        raise HTTPException(status_code=400, detail=result.get('message', 'Runtime setting reset failed'))
    return result
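# Same pattern sketched for runtime settings via /settings/runtime/{setting_key}. Treating
# 'prompt_queue_auto_process' as an editable key (and the string value) is an assumption;
# the 'value' body field matches RuntimeSettingUpdateRequest as used above.
import requests

BASE_URL = 'http://localhost:8000'  # assumption
KEY = 'prompt_queue_auto_process'   # hypothetical setting key

print(requests.get(f'{BASE_URL}/settings/runtime', timeout=10).json()['settings'])
print(requests.put(f'{BASE_URL}/settings/runtime/{KEY}', json={'value': 'true'}, timeout=10).json())
print(requests.delete(f'{BASE_URL}/settings/runtime/{KEY}', timeout=10).json())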
@app.post('/generate')
async def generate_software(request: SoftwareRequest, db: DbSession):
    """Create and record a software-generation request."""
    try:
        return await _run_generation(request, db)
    except Exception as exc:
        DatabaseManager(db).log_system_event(
            component='api',
            level='ERROR',
            message=f"Structured generation failed: {exc}",
        )
        return _generation_error_payload(message=str(exc), request=request)
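# Sketch of a structured request against POST /generate. The payload fields are assumptions,
# since the full SoftwareRequest schema is not shown in this hunk; only the route and the
# error-payload behaviour come from the handler above.
import requests

BASE_URL = 'http://localhost:8000'  # assumption

payload = {'name': 'demo-service', 'description': 'Small REST API that echoes requests'}  # illustrative fields
body = requests.post(f'{BASE_URL}/generate', json=payload, timeout=600).json()
# On failure the endpoint returns _generation_error_payload(...) instead of raising, so
# clients should inspect the body rather than rely on the HTTP status alone.
print(body)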
@app.post('/generate/text')
@@ -492,74 +960,79 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSe
},
}
    manager = DatabaseManager(db)
    if request.source == 'telegram' and database_module.settings.prompt_queue_enabled and not request.process_now:
        queue_item = manager.enqueue_prompt(
            prompt_text=request.prompt_text,
            source=request.source,
            chat_id=request.chat_id,
            chat_type=request.chat_type,
            source_context={'chat_id': request.chat_id, 'chat_type': request.chat_type},
        )
        return {
            'status': 'queued',
            'message': 'Prompt queued for energy-aware processing.',
            'queue_item': queue_item,
            'queue_summary': manager.get_prompt_queue_summary(),
            'queue_gate': await _get_queue_gate_status(force=False),
            'source': {
                'type': request.source,
                'chat_id': request.chat_id,
                'chat_type': request.chat_type,
            },
        }
    try:
        return await _run_freeform_generation(request, db)
    except Exception as exc:
        DatabaseManager(db).log_system_event(
            component='api',
            level='ERROR',
            message=f"Free-form generation failed for source={request.source}: {exc}",
        )
        return _generation_error_payload(
            message=str(exc),
            source={
                'type': request.source,
                'chat_id': request.chat_id,
                'chat_type': request.chat_type,
            },
        )
@app.get('/queue')
def get_prompt_queue(db: DbSession):
    """Return queued prompt items and prompt queue configuration."""
    manager = DatabaseManager(db)
    return {
        'queue': manager.get_prompt_queue(),
        'summary': manager.get_prompt_queue_summary(),
        'config': {
            'enabled': database_module.settings.prompt_queue_enabled,
            'auto_process': database_module.settings.prompt_queue_auto_process,
            'force_process': database_module.settings.prompt_queue_force_process,
            'poll_interval_seconds': database_module.settings.prompt_queue_poll_interval_seconds,
            'max_batch_size': database_module.settings.prompt_queue_max_batch_size,
        },
    }
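# Sketch of the free-form path above: a Telegram-sourced prompt is enqueued instead of
# processed when the prompt queue is enabled and process_now is not set. Field names come
# from the FreeformSoftwareRequest attributes used in the handler; the base URL and chat
# values are illustrative assumptions.
import requests

BASE_URL = 'http://localhost:8000'  # assumption

resp = requests.post(
    f'{BASE_URL}/generate/text',
    json={
        'prompt_text': 'Build a CLI that renames photos by EXIF date',
        'source': 'telegram',
        'chat_id': '12345',      # illustrative
        'chat_type': 'private',  # illustrative
        'process_now': False,
    },
    timeout=600,
).json()
if resp.get('status') == 'queued':
    print('queued:', resp['queue_item'], resp['queue_summary'])
else:
    print('processed immediately:', resp)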
@app.post('/queue/process')
async def process_prompt_queue(request: PromptQueueProcessRequest):
    """Manually process queued prompts, optionally bypassing the HA gate."""
    return await _process_prompt_queue_batch(limit=request.limit, force=request.force)
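# Sketch of inspecting and draining the prompt queue via GET /queue and POST /queue/process.
# The 'limit' and 'force' fields mirror PromptQueueProcessRequest as used above; the values
# and base URL are illustrative.
import requests

BASE_URL = 'http://localhost:8000'  # assumption

queue = requests.get(f'{BASE_URL}/queue', timeout=10).json()
print(queue['summary'], queue['config'])
if queue['summary'].get('queued'):
    # force=True would bypass the Home Assistant gate, per the handler docstring above
    print(requests.post(f'{BASE_URL}/queue/process', json={'limit': 3, 'force': False}, timeout=600).json())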
@app.get('/gitea/health')
def get_gitea_health():
    """Return Gitea integration connectivity diagnostics."""
    return _get_gitea_health()
@app.get('/home-assistant/health')
def get_home_assistant_health():
    """Return Home Assistant integration connectivity diagnostics."""
    return _get_home_assistant_health()
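# Sketch of polling the per-integration diagnostics exposed above, e.g. for the UI
# connection-health views. The base URL is an assumption; the response shapes come from the
# _get_*_health() helpers, which are not shown in this hunk.
import requests

BASE_URL = 'http://localhost:8000'  # assumption

for path in ('/gitea/health', '/home-assistant/health'):
    print(path, requests.get(f'{BASE_URL}{path}', timeout=10).json())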
@app.get('/projects')
@@ -743,13 +1216,18 @@ def delete_project(project_id: str, db: DbSession):
    remote_delete = None
    if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and database_module.settings.gitea_url and database_module.settings.gitea_token:
        remote_delete = _create_gitea_api().delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
        if remote_delete.get('error'):
            manager.log_system_event(
                component='gitea',
                level='WARNING',
                message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
            )
    result = manager.delete_project(project_id)
    if result.get('status') == 'error':
        raise HTTPException(status_code=400, detail=result.get('message', 'Project deletion failed'))
    result['remote_repository_deleted'] = bool(remote_delete and not remote_delete.get('error'))
    result['remote_repository_delete_error'] = remote_delete.get('error') if remote_delete else None
    result['remote_repository'] = repository if repository else None
    return result
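# Sketch of a client deleting a project and reporting the remote-repository outcome using
# the result fields set above. Assuming the route is DELETE /projects/{project_id}; the
# project id is illustrative, and only the response keys are taken from delete_project().
import requests

BASE_URL = 'http://localhost:8000'   # assumption
PROJECT_ID = 'example-project-id'    # illustrative

result = requests.delete(f'{BASE_URL}/projects/{PROJECT_ID}', timeout=60).json()
if result.get('remote_repository_delete_error'):
    # remote cleanup failures are logged as warnings and surfaced here instead of failing the request
    print('local project removed, remote repo cleanup failed:', result['remote_repository_delete_error'])
else:
    print('deleted:', result.get('remote_repository'), result.get('remote_repository_deleted'))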