Compare commits

7 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | b881ef635a |  |
|  | e35db0a361 |  |
|  | 798bb218f8 |  |
|  | 3d77ac3104 |  |
|  | f6681a0f85 |  |
|  | ed8dc48280 |  |
|  | c3cf8da42d |  |
HISTORY.md  +25

@@ -4,6 +4,31 @@ Changelog
 
 (unreleased)
 ------------
+- Feat: editable guardrails, refs NOISSUE. [Simon Diesenreiter]
+
+
+0.8.0 (2026-04-11)
+------------------
+- Feat: better dashboard reloading mechanism, refs NOISSUE. [Simon
+  Diesenreiter]
+- Feat: add explicit workflow steps and guardrail prompts, refs NOISSUE.
+  [Simon Diesenreiter]
+
+
+0.7.1 (2026-04-11)
+------------------
+
+Fix
+~~~
+- Add additional deletion confirmation, refs NOISSUE. [Simon
+  Diesenreiter]
+
+Other
+~~~~~
+
+
+0.7.0 (2026-04-10)
+------------------
 - Feat: gitea issue integration, refs NOISSUE. [Simon Diesenreiter]
 - Feat: better history data, refs NOISSUE. [Simon Diesenreiter]
 
README.md  +21

@@ -48,6 +48,7 @@ OLLAMA_URL=http://localhost:11434
 OLLAMA_MODEL=llama3
 
 # Gitea
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
 GITEA_URL=https://gitea.yourserver.com
 GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=ai-software-factory
@@ -69,6 +70,19 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
 # Telegram
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+# Optional: queue Telegram prompts until Home Assistant reports battery/surplus targets are met.
+PROMPT_QUEUE_ENABLED=false
+PROMPT_QUEUE_AUTO_PROCESS=true
+PROMPT_QUEUE_FORCE_PROCESS=false
+PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
+PROMPT_QUEUE_MAX_BATCH_SIZE=1
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
+HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
+HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
+HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
 ```
 
 ### Build and Run
@@ -93,6 +107,7 @@ docker-compose up -d
 
 The backend now interprets free-form Telegram text with Ollama before generation.
 If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat.
+If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are stored in a durable queue and processed only when the Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.
 
 2. **Monitor progress via Web UI:**
 
@@ -104,6 +119,12 @@ docker-compose up -d
 
 If you deploy the container with PostgreSQL environment variables set, the service now selects PostgreSQL automatically even though SQLite remains the default for local/test usage.
+
+The health tab now shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics so misconfigured integrations are visible without checking container logs.
+
+The dashboard Health tab also exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
+
+Guardrail and system prompts are no longer environment-only in practice: the factory can persist DB-backed overrides for the editable LLM prompt set, expose them at `/llm/prompts`, and edit them from the dashboard System tab. Environment values still act as defaults and as the reset target.
 
 ## API Endpoints
 
 | Endpoint | Method | Description |
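The queue gate described above can be sketched as a small predicate. This is a minimal illustration, not the service's actual implementation: the function name `energy_gate_open` and the exact combination rule (both thresholds must be met, and missing sensor data keeps the gate closed) are assumptions inferred from the README text and the `HOME_ASSISTANT_*` variables.

```python
def energy_gate_open(battery_soc, surplus_watts,
                     battery_full_threshold=95.0,
                     surplus_threshold_watts=100.0,
                     force_process=False):
    """Decide whether queued prompts may be processed (hypothetical sketch).

    Mirrors HOME_ASSISTANT_BATTERY_FULL_THRESHOLD and
    HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS from the config above.
    """
    if force_process:
        # Operator override, analogous to /queue/process or process_now=true.
        return True
    if battery_soc is None or surplus_watts is None:
        # Missing sensor readings keep the gate closed rather than guessing.
        return False
    # Assumption: BOTH thresholds must be satisfied before processing.
    return battery_soc >= battery_full_threshold and surplus_watts >= surplus_threshold_watts
```

A poller would call this every `PROMPT_QUEUE_POLL_INTERVAL_SECONDS` and, when it returns true, drain up to `PROMPT_QUEUE_MAX_BATCH_SIZE` queued prompts.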

@@ -8,10 +8,23 @@ LOG_LEVEL=INFO
 # Ollama
 OLLAMA_URL=http://localhost:11434
 OLLAMA_MODEL=llama3
+LLM_GUARDRAIL_PROMPT=You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.
+LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT=Never route work to archived projects and only reference issues that are explicit in the prompt or supplied tool outputs.
+LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT=Only summarize delivery facts that appear in the provided project context or tool outputs.
+LLM_PROJECT_NAMING_GUARDRAIL_PROMPT=Prefer clear product names and repository slugs that reflect the new request without colliding with tracked projects.
+LLM_PROJECT_NAMING_SYSTEM_PROMPT=Return JSON with project_name, repo_name, and rationale for new projects.
+LLM_PROJECT_ID_GUARDRAIL_PROMPT=Prefer short stable project ids and avoid collisions with existing project ids.
+LLM_PROJECT_ID_SYSTEM_PROMPT=Return JSON with project_id and rationale for new projects.
+LLM_TOOL_ALLOWLIST=gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests
+LLM_TOOL_CONTEXT_LIMIT=5
+LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
+LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
+LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
+LLM_MAX_TOOL_CALL_ROUNDS=1
 
 # Gitea
 # Configure Gitea API for your organization
-# GITEA_URL can be left empty to use GITEA_ORGANIZATION instead of GITEA_OWNER
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev automatically.
 GITEA_URL=https://gitea.yourserver.com
 GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=your_organization_name
@@ -29,6 +42,20 @@ N8N_PASSWORD=your_secure_password
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+
+# Home Assistant energy gate for queued Telegram prompts
+# Leave PROMPT_QUEUE_ENABLED=false to preserve immediate Telegram processing.
+PROMPT_QUEUE_ENABLED=false
+PROMPT_QUEUE_AUTO_PROCESS=true
+PROMPT_QUEUE_FORCE_PROCESS=false
+PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
+PROMPT_QUEUE_MAX_BATCH_SIZE=1
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
+HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
+HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
+HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
 
 # PostgreSQL
 # In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.
 # You can also set USE_SQLITE=false explicitly if you want the intent to be obvious.
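The three live-tool variables above interact: a tool must pass the global allowlist, the stage must be allowed, and an optional per-stage map can narrow things further. A minimal resolver sketch, assuming this resolution order (the service's actual precedence may differ) and a hypothetical function name `allowed_live_tools`:

```python
import json
import os


def allowed_live_tools(stage: str) -> list[str]:
    """Resolve which live tools a given LLM stage may call (illustrative only)."""
    global_allow = [t for t in os.environ.get('LLM_LIVE_TOOL_ALLOWLIST', '').split(',') if t]
    stage_allow = [s for s in os.environ.get('LLM_LIVE_TOOL_STAGE_ALLOWLIST', '').split(',') if s]
    if stage not in stage_allow:
        # Stage is not permitted to make live calls at all.
        return []
    stage_map_raw = os.environ.get('LLM_LIVE_TOOL_STAGE_TOOL_MAP', '')
    if stage_map_raw:
        try:
            stage_map = json.loads(stage_map_raw)
        except json.JSONDecodeError:
            stage_map = {}
        if stage in stage_map:
            # Per-stage subset, still bounded by the global allowlist.
            return [t for t in stage_map[stage] if t in global_allow]
    return global_allow
```

With the example values above, `request_interpretation` resolves to both lookup tools while `change_summary` resolves to an empty list, matching the "fully read-only" configuration the README describes.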

@@ -6,6 +6,7 @@ Automated software generation service powered by Ollama LLM. This service allows
 
 - **Telegram Integration**: Receive software requests via Telegram bot
 - **Ollama LLM**: Uses Ollama-hosted models for code generation
+- **LLM Guardrails and Tools**: Centralized guardrail prompts plus mediated tool payloads for project, Gitea, PR, and issue context
 - **Git Integration**: Automatically commits code to gitea
 - **Pull Requests**: Creates PRs for user review before merging
 - **Web UI**: Beautiful dashboard for monitoring project progress
@@ -46,12 +47,26 @@ PORT=8000
 # Ollama
 OLLAMA_URL=http://localhost:11434
 OLLAMA_MODEL=llama3
+LLM_GUARDRAIL_PROMPT=You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.
+LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT=Never route work to archived projects and only reference issues that are explicit in the prompt or supplied tool outputs.
+LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT=Only summarize delivery facts that appear in the provided project context or tool outputs.
+LLM_PROJECT_NAMING_GUARDRAIL_PROMPT=Prefer clear product names and repository slugs that reflect the new request without colliding with tracked projects.
+LLM_PROJECT_NAMING_SYSTEM_PROMPT=Return JSON with project_name, repo_name, and rationale for new projects.
+LLM_PROJECT_ID_GUARDRAIL_PROMPT=Prefer short stable project ids and avoid collisions with existing project ids.
+LLM_PROJECT_ID_SYSTEM_PROMPT=Return JSON with project_id and rationale for new projects.
+LLM_TOOL_ALLOWLIST=gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests
+LLM_TOOL_CONTEXT_LIMIT=5
+LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
+LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
+LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
+LLM_MAX_TOOL_CALL_ROUNDS=1
 
 # Gitea
+# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
 GITEA_URL=https://gitea.yourserver.com
-GITEA_TOKEN= analyze your_gitea_api_token
+GITEA_TOKEN=your_gitea_api_token
 GITEA_OWNER=ai-software-factory
-GITEA_REPO=ai-software-factory
+GITEA_REPO=
 
 # n8n
 N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
@@ -59,6 +74,19 @@ N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
 # Telegram
 TELEGRAM_BOT_TOKEN=your_telegram_bot_token
 TELEGRAM_CHAT_ID=your_chat_id
 
+
+# Optional: queue Telegram prompts until Home Assistant reports energy surplus.
+PROMPT_QUEUE_ENABLED=false
+PROMPT_QUEUE_AUTO_PROCESS=true
+PROMPT_QUEUE_FORCE_PROCESS=false
+PROMPT_QUEUE_POLL_INTERVAL_SECONDS=60
+PROMPT_QUEUE_MAX_BATCH_SIZE=1
+HOME_ASSISTANT_URL=http://homeassistant.local:8123
+HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
+HOME_ASSISTANT_BATTERY_ENTITY_ID=sensor.home_battery_soc
+HOME_ASSISTANT_SURPLUS_ENTITY_ID=sensor.home_pv_surplus_power
+HOME_ASSISTANT_BATTERY_FULL_THRESHOLD=95
+HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS=100
 ```
 
 ### Build and Run
@@ -81,6 +109,8 @@ docker-compose up -d
 Features: user authentication, task CRUD, notifications
 ```
 
+If `PROMPT_QUEUE_ENABLED=true`, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/process` or by sending `process_now=true` to `/generate/text`.
+
 2. **Monitor progress via Web UI:**
 
 Open `http://yourserver:8000` to see real-time progress
@@ -99,6 +129,39 @@ docker-compose up -d
 | `/status/{project_id}` | GET | Get project status |
 | `/projects` | GET | List all projects |
 
+## LLM Guardrails and Tool Access
+
+External LLM calls are now routed through a centralized client that applies:
+
+- A global guardrail prompt for every outbound model request
+- Stage-specific guardrails for request interpretation and change summaries
+- Service-mediated tool outputs that expose tracked Gitea/project state without giving the model raw credentials
+
+Current mediated tools include:
+
+- `gitea_project_catalog`: active tracked projects and repository mappings
+- `gitea_project_state`: current repository, PR, and linked-issue state for the project in scope
+- `gitea_project_issues`: tracked open issues for the relevant repository
+- `gitea_pull_requests`: tracked pull requests for the relevant repository
+
+The service also supports a bounded live tool-call loop for selected lookups. When enabled, the model may request one live call such as `gitea_lookup_issue` or `gitea_lookup_pull_request`, the service executes it against Gitea, and the final model response is generated from the returned result. This remains mediated by the service, so the model never receives raw credentials.
+
+Live tool access is stage-aware. `LLM_LIVE_TOOL_ALLOWLIST` controls which live tools exist globally, while `LLM_LIVE_TOOL_STAGE_ALLOWLIST` controls which LLM stages may use them. If you need per-stage subsets, `LLM_LIVE_TOOL_STAGE_TOOL_MAP` accepts a JSON object mapping each stage to the exact tools it may use. For example, you can allow issue and PR lookups during `request_interpretation` while keeping `change_summary` fully read-only.
+
+When the interpreter decides a prompt starts a new project, the service can run a dedicated `project_naming` LLM stage before generation. `LLM_PROJECT_NAMING_SYSTEM_PROMPT` and `LLM_PROJECT_NAMING_GUARDRAIL_PROMPT` let you steer how project titles and repository slugs are chosen. The interpreter now checks tracked project repositories plus live Gitea repository names when available, so if the model suggests a colliding repo slug the service will automatically move to the next available slug.
+
+New project creation can also run a dedicated `project_id_naming` stage. `LLM_PROJECT_ID_SYSTEM_PROMPT` and `LLM_PROJECT_ID_GUARDRAIL_PROMPT` control how stable project ids are chosen, and the service will append deterministic numeric suffixes when an id is already taken instead of always falling back to a random UUID-based id.
+
+Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.
+
+Operational visibility for the Gitea integration, Home Assistant energy gate, and queued prompt counts is available in the dashboard Health tab, plus `/gitea/health`, `/home-assistant/health`, and `/queue`.
+
+The dashboard Health tab also includes operator controls for manually processing queued Telegram prompts, force-processing them when needed, and retrying failed items.
+
+Editable guardrail and system prompts are persisted in the database as overrides on top of the environment defaults. The current merged values are available at `/llm/prompts`, and the dashboard System tab can edit or reset them without restarting the service.
+
+These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
+
 ## Development
 
 ### Makefile Targets
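The bounded live tool-call loop described in the README section above can be sketched as a driver that enforces `LLM_MAX_TOOL_CALL_ROUNDS`. This is an illustrative sketch only; `run_bounded_tool_loop`, `model_step`, and `execute_tool` are hypothetical names, and the service's real loop and message shapes may differ.

```python
def run_bounded_tool_loop(model_step, execute_tool, max_rounds=1):
    """Mediate at most max_rounds live tool calls before taking a final answer.

    model_step(tool_results) returns either {'tool_call': {...}} or {'content': str}.
    execute_tool(call) performs the real lookup (e.g. against Gitea) on the
    service side, so the model never holds credentials.
    """
    tool_results = []
    for round_index in range(max_rounds + 1):
        reply = model_step(tool_results)  # model only ever sees mediated results
        tool_call = reply.get('tool_call')
        if tool_call is None or round_index == max_rounds:
            # Either the model answered, or the round budget is exhausted.
            return reply.get('content')
        tool_results.append(execute_tool(tool_call))  # service executes the call
```

With `max_rounds=1` (the `LLM_MAX_TOOL_CALL_ROUNDS=1` default shown above), the model gets exactly one chance to request a lookup such as `gitea_lookup_issue` before it must produce the final response.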

@@ -1 +1 @@
-0.7.0
+0.9.0

@@ -4,8 +4,10 @@ from __future__ import annotations
 
 try:
     from ..config import settings
+    from .llm_service import LLMServiceClient
 except ImportError:
     from config import settings
+    from agents.llm_service import LLMServiceClient
 
 
 class ChangeSummaryGenerator:
@@ -14,6 +16,7 @@ class ChangeSummaryGenerator:
     def __init__(self, ollama_url: str | None = None, model: str | None = None):
        self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
        self.model = model or settings.OLLAMA_MODEL
+        self.llm_client = LLMServiceClient(ollama_url=self.ollama_url, model=self.model)
 
     async def summarize(self, context: dict) -> str:
         """Summarize project changes with Ollama, or fall back to a deterministic overview."""
@@ -28,40 +31,24 @@ class ChangeSummaryGenerator:
             'Write 3 to 5 sentences. Mention the application goal, main delivered pieces, '
             'technical direction, and what the user should expect next. Avoid markdown bullets.'
         )
-        try:
-            import aiohttp
-
-            async with aiohttp.ClientSession() as session:
-                async with session.post(
-                    f'{self.ollama_url}/api/chat',
-                    json={
-                        'model': self.model,
-                        'stream': False,
-                        'messages': [
-                            {
-                                'role': 'system',
-                                'content': system_prompt,
-                            },
-                            {'role': 'user', 'content': prompt},
-                        ],
-                    },
-                ) as resp:
-                    payload = await resp.json()
-                    if 200 <= resp.status < 300:
-                        content = payload.get('message', {}).get('content', '').strip()
-                        if content:
-                            return content, {
-                                'stage': 'change_summary',
-                                'provider': 'ollama',
-                                'model': self.model,
-                                'system_prompt': system_prompt,
-                                'user_prompt': prompt,
-                                'assistant_response': content,
-                                'raw_response': payload,
-                                'fallback_used': False,
-                            }
-        except Exception:
-            pass
+        content, trace = await self.llm_client.chat_with_trace(
+            stage='change_summary',
+            system_prompt=system_prompt,
+            user_prompt=prompt,
+            tool_context_input={
+                'project_id': context.get('project_id'),
+                'project_name': context.get('name'),
+                'repository': context.get('repository'),
+                'repository_url': context.get('repository_url'),
+                'pull_request': context.get('pull_request'),
+                'pull_request_url': context.get('pull_request_url'),
+                'pull_request_state': context.get('pull_request_state'),
+                'related_issue': context.get('related_issue'),
+                'issues': [context.get('related_issue')] if context.get('related_issue') else [],
+            },
+        )
+        if content:
+            return content.strip(), trace
 
         fallback = self._fallback(context)
         return fallback, {
@@ -71,7 +58,9 @@ class ChangeSummaryGenerator:
             'system_prompt': system_prompt,
             'user_prompt': prompt,
             'assistant_response': fallback,
-            'raw_response': {'fallback': 'deterministic'},
+            'raw_response': {'fallback': 'deterministic', 'llm_trace': trace.get('raw_response') if isinstance(trace, dict) else None},
+            'guardrails': trace.get('guardrails') if isinstance(trace, dict) else [],
+            'tool_context': trace.get('tool_context') if isinstance(trace, dict) else [],
             'fallback_used': True,
         }
 
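The fallback branch above merges a possibly malformed trace into the payload defensively, via `isinstance` checks. Extracted into a standalone helper for illustration (the function name `build_fallback_trace` is hypothetical; the dict keys mirror the diff):

```python
def build_fallback_trace(system_prompt, user_prompt, fallback, trace):
    """Illustrate the diff's defensive merge of a partial LLM trace
    into the deterministic-fallback payload."""
    is_dict = isinstance(trace, dict)  # trace may be None or malformed
    return {
        'system_prompt': system_prompt,
        'user_prompt': user_prompt,
        'assistant_response': fallback,
        'raw_response': {
            'fallback': 'deterministic',
            'llm_trace': trace.get('raw_response') if is_dict else None,
        },
        'guardrails': trace.get('guardrails') if is_dict else [],
        'tool_context': trace.get('tool_context') if is_dict else [],
        'fallback_used': True,
    }
```

The point of the pattern is that a failed or partial `chat_with_trace` result still yields a complete, uniformly shaped trace for the audit log instead of raising.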
@@ -4,7 +4,7 @@ from sqlalchemy.orm import Session
 from sqlalchemy import text
 
 try:
-    from ..config import settings
+    from ..config import EDITABLE_LLM_PROMPTS, settings
     from ..models import (
         AuditTrail,
         ProjectHistory,
@@ -18,7 +18,7 @@ try:
         UserAction,
     )
 except ImportError:
-    from config import settings
+    from config import EDITABLE_LLM_PROMPTS, settings
     from models import (
         AuditTrail,
         ProjectHistory,
@@ -34,6 +34,7 @@ except ImportError:
 from datetime import datetime
 import json
 import re
+import shutil
 
 
 class DatabaseMigrations:
@@ -82,11 +83,21 @@ class DatabaseMigrations:
 class DatabaseManager:
     """Manages database operations for audit logging and history tracking."""
 
+    PROMPT_QUEUE_PROJECT_ID = '__prompt_queue__'
+    PROMPT_QUEUE_ACTION = 'PROMPT_QUEUED'
+    PROMPT_CONFIG_PROJECT_ID = '__llm_prompt_config__'
+    PROMPT_CONFIG_ACTION = 'LLM_PROMPT_CONFIG'
+
     def __init__(self, db: Session):
         """Initialize database manager."""
         self.db = db
         self.migrations = DatabaseMigrations(self.db)
 
+    @staticmethod
+    def _is_archived_status(status: str | None) -> bool:
+        """Return whether a project status represents an archived project."""
+        return (status or '').strip().lower() == 'archived'
+
     @staticmethod
     def _normalize_metadata(metadata: object) -> dict:
         """Normalize JSON-like metadata stored in audit columns."""
@@ -111,13 +122,15 @@ class DatabaseManager:
         sanitized = sanitized.replace('--', '-')
         return sanitized.strip('-') or 'external-project'
 
-    def get_project_by_repository(self, owner: str, repo_name: str) -> ProjectHistory | None:
+    def get_project_by_repository(self, owner: str, repo_name: str, include_archived: bool = False) -> ProjectHistory | None:
         """Return the project currently associated with a repository."""
         normalized_owner = (owner or '').strip().lower()
         normalized_repo = (repo_name or '').strip().lower()
         if not normalized_owner or not normalized_repo:
             return None
         for history in self.db.query(ProjectHistory).order_by(ProjectHistory.updated_at.desc(), ProjectHistory.id.desc()).all():
+            if not include_archived and self._is_archived_status(history.status):
+                continue
             repository = self._get_project_repository(history) or {}
             if (repository.get('owner') or '').strip().lower() == normalized_owner and (repository.get('name') or '').strip().lower() == normalized_repo:
                 return history
@@ -262,6 +275,277 @@ class DatabaseManager:
         self.db.refresh(audit)
         return audit
 
+    def enqueue_prompt(
+        self,
+        prompt_text: str,
+        source: str = 'telegram',
+        chat_id: str | None = None,
+        chat_type: str | None = None,
+        source_context: dict | None = None,
+        process_now: bool = False,
+    ) -> dict:
+        """Persist a queued prompt so it can be processed later by the worker."""
+        metadata = {
+            'status': 'queued',
+            'prompt_text': prompt_text,
+            'source': source,
+            'chat_id': chat_id,
+            'chat_type': chat_type,
+            'source_context': source_context or {},
+            'process_now': bool(process_now),
+            'queued_at': datetime.utcnow().isoformat(),
+        }
+        audit = AuditTrail(
+            project_id=self.PROMPT_QUEUE_PROJECT_ID,
+            action=self.PROMPT_QUEUE_ACTION,
+            actor=source or 'queue',
+            action_type='QUEUE',
+            details=prompt_text,
+            message='Prompt queued for deferred processing',
+            metadata_json=metadata,
+        )
+        self.db.add(audit)
+        self.db.commit()
+        self.db.refresh(audit)
+        return self._serialize_prompt_queue_item(audit)
+
+    def _serialize_prompt_queue_item(self, audit: AuditTrail) -> dict:
+        """Convert a queue audit record into a stable API payload."""
+        metadata = self._normalize_metadata(audit.metadata_json)
+        return {
+            'id': audit.id,
+            'prompt_text': metadata.get('prompt_text') or audit.details,
+            'source': metadata.get('source') or audit.actor,
+            'chat_id': metadata.get('chat_id'),
+            'chat_type': metadata.get('chat_type'),
+            'status': metadata.get('status') or 'queued',
+            'queued_at': metadata.get('queued_at') or (audit.created_at.isoformat() if audit.created_at else None),
+            'claimed_at': metadata.get('claimed_at'),
+            'processed_at': metadata.get('processed_at'),
+            'failed_at': metadata.get('failed_at'),
+            'process_now': bool(metadata.get('process_now')),
+            'result': metadata.get('result') or {},
+            'error': metadata.get('error'),
+            'source_context': metadata.get('source_context') or {},
+        }
+
+    def _update_audit_metadata(self, audit: AuditTrail, updates: dict) -> AuditTrail:
+        """Apply shallow metadata updates to an audit record."""
+        metadata = dict(self._normalize_metadata(audit.metadata_json))
+        metadata.update(updates)
+        audit.metadata_json = metadata
+        self.db.commit()
+        self.db.refresh(audit)
+        return audit
+
+    def get_prompt_queue(self, status: str | None = None, limit: int = 100) -> list[dict]:
+        """Return queued prompt items, optionally filtered by queue status."""
+        audits = (
+            self.db.query(AuditTrail)
+            .filter(AuditTrail.action == self.PROMPT_QUEUE_ACTION)
+            .order_by(AuditTrail.created_at.desc(), AuditTrail.id.desc())
+            .all()
+        )
+        items = []
+        for audit in audits:
+            item = self._serialize_prompt_queue_item(audit)
+            if status and item['status'] != status:
+                continue
+            items.append(item)
+            if len(items) >= limit:
+                break
+        return items
+
+    def get_prompt_queue_summary(self) -> dict:
+        """Return aggregate prompt queue counts for operations and health views."""
+        items = self.get_prompt_queue(limit=1000)
+        summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': len(items)}
+        for item in items:
+            summary[item['status']] = summary.get(item['status'], 0) + 1
+        summary['next_item'] = next((item for item in reversed(items) if item['status'] == 'queued'), None)
+        return summary
+
+    def claim_next_queued_prompt(self) -> dict | None:
+        """Claim the oldest queued prompt for processing."""
+        audits = (
+            self.db.query(AuditTrail)
+            .filter(AuditTrail.action == self.PROMPT_QUEUE_ACTION)
+            .order_by(AuditTrail.created_at.asc(), AuditTrail.id.asc())
+            .all()
+        )
+        for audit in audits:
+            item = self._serialize_prompt_queue_item(audit)
+            if item['status'] != 'queued':
+                continue
+            updated = self._update_audit_metadata(
+                audit,
+                {
+                    'status': 'processing',
+                    'claimed_at': datetime.utcnow().isoformat(),
+                    'error': None,
+                },
+            )
+            return self._serialize_prompt_queue_item(updated)
+        return None
+
+    def complete_queued_prompt(self, queue_item_id: int, result: dict | None = None) -> dict | None:
+        """Mark a queued prompt as successfully processed."""
+        audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
+        if audit is None:
+            return None
+        updated = self._update_audit_metadata(
+            audit,
+            {
+                'status': 'completed',
+                'processed_at': datetime.utcnow().isoformat(),
+                'result': result or {},
+                'error': None,
+            },
+        )
+        return self._serialize_prompt_queue_item(updated)
+
+    def fail_queued_prompt(self, queue_item_id: int, error: str) -> dict | None:
+        """Mark a queued prompt as failed."""
+        audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
+        if audit is None:
+            return None
+        updated = self._update_audit_metadata(
+            audit,
+            {
+                'status': 'failed',
+                'failed_at': datetime.utcnow().isoformat(),
+                'error': error,
+            },
+        )
+        return self._serialize_prompt_queue_item(updated)
+
+    def get_prompt_queue_item(self, queue_item_id: int) -> dict | None:
+        """Return a single queued prompt item by audit id."""
+        audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
+        if audit is None:
+            return None
+        return self._serialize_prompt_queue_item(audit)
+
+    def retry_queued_prompt(self, queue_item_id: int) -> dict | None:
+        """Return a failed or completed queue item back to queued state."""
+        audit = self.db.query(AuditTrail).filter(AuditTrail.id == queue_item_id, AuditTrail.action == self.PROMPT_QUEUE_ACTION).first()
+        if audit is None:
+            return None
+        updated = self._update_audit_metadata(
+            audit,
+            {
+                'status': 'queued',
+                'queued_at': datetime.utcnow().isoformat(),
+                'claimed_at': None,
+                'processed_at': None,
+                'failed_at': None,
+                'error': None,
+            },
+        )
+        return self._serialize_prompt_queue_item(updated)
+
+    def _latest_llm_prompt_config_entries(self) -> dict[str, AuditTrail]:
+        """Return the most recent persisted audit row for each editable LLM prompt key."""
|
||||||
|
entries: dict[str, AuditTrail] = {}
|
||||||
|
try:
|
||||||
|
audits = (
|
||||||
|
self.db.query(AuditTrail)
|
||||||
|
.filter(AuditTrail.action == self.PROMPT_CONFIG_ACTION)
|
||||||
|
.order_by(AuditTrail.created_at.desc(), AuditTrail.id.desc())
|
||||||
|
.all()
|
||||||
|
)
|
||||||
|
except Exception:
|
||||||
|
return entries
|
||||||
|
for audit in audits:
|
||||||
|
metadata = self._normalize_metadata(audit.metadata_json)
|
||||||
|
key = str(metadata.get('key') or '').strip()
|
||||||
|
if not key or key in entries or key not in EDITABLE_LLM_PROMPTS:
|
||||||
|
continue
|
||||||
|
entries[key] = audit
|
||||||
|
return entries
|
||||||
|
|
||||||
|
def get_llm_prompt_override(self, key: str) -> str | None:
|
||||||
|
"""Return the persisted override for one editable LLM prompt key."""
|
||||||
|
entry = self._latest_llm_prompt_config_entries().get(key)
|
||||||
|
if entry is None:
|
||||||
|
return None
|
||||||
|
metadata = self._normalize_metadata(entry.metadata_json)
|
||||||
|
if metadata.get('reset_to_default'):
|
||||||
|
return None
|
||||||
|
value = metadata.get('value')
|
||||||
|
if value is None:
|
||||||
|
return None
|
||||||
|
return str(value)
|
||||||
|
|
||||||
|
def get_llm_prompt_settings(self) -> list[dict]:
|
||||||
|
"""Return editable LLM prompt definitions merged with persisted DB overrides."""
|
||||||
|
latest = self._latest_llm_prompt_config_entries()
|
||||||
|
items = []
|
||||||
|
for key, metadata in EDITABLE_LLM_PROMPTS.items():
|
||||||
|
entry = latest.get(key)
|
||||||
|
entry_metadata = self._normalize_metadata(entry.metadata_json) if entry is not None else {}
|
||||||
|
default_value = (getattr(settings, key, '') or '').strip()
|
||||||
|
persisted_value = None if entry_metadata.get('reset_to_default') else entry_metadata.get('value')
|
||||||
|
items.append(
|
||||||
|
{
|
||||||
|
'key': key,
|
||||||
|
'label': metadata['label'],
|
||||||
|
'category': metadata['category'],
|
||||||
|
'description': metadata['description'],
|
||||||
|
'default_value': default_value,
|
||||||
|
'value': str(persisted_value).strip() if persisted_value is not None else default_value,
|
||||||
|
'source': 'database' if persisted_value is not None else 'environment',
|
||||||
|
'updated_at': entry.created_at.isoformat() if entry and entry.created_at else None,
|
||||||
|
'updated_by': entry.actor if entry is not None else None,
|
||||||
|
'reset_to_default': bool(entry_metadata.get('reset_to_default')) if entry is not None else False,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
return items
|
||||||
|
|
||||||
|
def save_llm_prompt_setting(self, key: str, value: str, actor: str = 'dashboard') -> dict:
|
||||||
|
"""Persist one editable LLM prompt override into the audit trail."""
|
||||||
|
if key not in EDITABLE_LLM_PROMPTS:
|
||||||
|
return {'status': 'error', 'message': f'Unsupported prompt key: {key}'}
|
||||||
|
audit = AuditTrail(
|
||||||
|
project_id=self.PROMPT_CONFIG_PROJECT_ID,
|
||||||
|
action=self.PROMPT_CONFIG_ACTION,
|
||||||
|
actor=actor,
|
||||||
|
action_type='UPDATE',
|
||||||
|
details=f'Updated LLM prompt setting {key}',
|
||||||
|
message=f'Updated LLM prompt setting {key}',
|
||||||
|
metadata_json={
|
||||||
|
'key': key,
|
||||||
|
'value': value,
|
||||||
|
'reset_to_default': False,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
self.db.add(audit)
|
||||||
|
self.db.commit()
|
||||||
|
self.db.refresh(audit)
|
||||||
|
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
|
||||||
|
|
||||||
|
def reset_llm_prompt_setting(self, key: str, actor: str = 'dashboard') -> dict:
|
||||||
|
"""Reset one editable LLM prompt override back to its environment/default value."""
|
||||||
|
if key not in EDITABLE_LLM_PROMPTS:
|
||||||
|
return {'status': 'error', 'message': f'Unsupported prompt key: {key}'}
|
||||||
|
audit = AuditTrail(
|
||||||
|
project_id=self.PROMPT_CONFIG_PROJECT_ID,
|
||||||
|
action=self.PROMPT_CONFIG_ACTION,
|
||||||
|
actor=actor,
|
||||||
|
action_type='RESET',
|
||||||
|
details=f'Reset LLM prompt setting {key} to default',
|
||||||
|
message=f'Reset LLM prompt setting {key} to default',
|
||||||
|
metadata_json={
|
||||||
|
'key': key,
|
||||||
|
'value': None,
|
||||||
|
'reset_to_default': True,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
self.db.add(audit)
|
||||||
|
self.db.commit()
|
||||||
|
self.db.refresh(audit)
|
||||||
|
return {'status': 'success', 'setting': next(item for item in self.get_llm_prompt_settings() if item['key'] == key)}
|
||||||
|
|
||||||
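The queue methods above drive a small status lifecycle stored in audit-trail metadata: queued → processing → completed/failed, with retry returning the item to queued and clearing stale timestamps. A minimal standalone sketch of that lifecycle, with a plain dict standing in for the `AuditTrail` metadata (no database involved; the transition table is an illustrative reading of the methods, not code from the repository):

```python
from datetime import datetime

# Transitions implied by claim/complete/fail/retry above.
VALID_TRANSITIONS = {
    'queued': {'processing'},
    'processing': {'completed', 'failed'},
    'completed': {'queued'},   # retry_queued_prompt
    'failed': {'queued'},      # retry_queued_prompt
}

def transition(item: dict, new_status: str) -> dict:
    """Apply one lifecycle step, mirroring the metadata updates in the methods above."""
    if new_status not in VALID_TRANSITIONS[item['status']]:
        raise ValueError(f"cannot go from {item['status']} to {new_status}")
    stamp = datetime.utcnow().isoformat()
    item['status'] = new_status
    if new_status == 'processing':
        item['claimed_at'] = stamp
        item['error'] = None
    elif new_status == 'completed':
        item['processed_at'] = stamp
        item['error'] = None
    elif new_status == 'failed':
        item['failed_at'] = stamp
    else:  # back to 'queued': a retry resets all progress fields
        item.update(queued_at=stamp, claimed_at=None, processed_at=None,
                    failed_at=None, error=None)
    return item

item = {'status': 'queued'}
transition(item, 'processing')
transition(item, 'failed')
item['error'] = 'boom'
transition(item, 'queued')   # retry clears the error and timestamps
```

Because each transition overwrites the same metadata keys, the latest audit row is always the single source of truth for an item's state.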
     def attach_issue_to_prompt(self, prompt_id: int, related_issue: dict) -> AuditTrail | None:
         """Attach resolved issue context to a previously recorded prompt."""
         prompt = self.db.query(AuditTrail).filter(AuditTrail.id == prompt_id, AuditTrail.action == 'PROMPT_RECEIVED').first()
@@ -736,12 +1020,6 @@ class DatabaseManager:
         self.db.commit()
         return updates
 
-    def get_latest_project_by_name(self, project_name: str) -> ProjectHistory | None:
-        """Return the most recently updated project with the requested name."""
-        return self.db.query(ProjectHistory).filter(
-            ProjectHistory.project_name == project_name
-        ).order_by(ProjectHistory.updated_at.desc(), ProjectHistory.id.desc()).first()
-
     def log_prompt_revert(
         self,
         project_id: str,
@@ -813,9 +1091,14 @@ class DatabaseManager:
             }
         return None
 
-    def get_project_by_id(self, project_id: str) -> ProjectHistory | None:
+    def get_project_by_id(self, project_id: str, include_archived: bool = True) -> ProjectHistory | None:
         """Get project by ID."""
-        return self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
+        history = self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
+        if history is None:
+            return None
+        if not include_archived and self._is_archived_status(history.status):
+            return None
+        return history
 
     def get_recent_chat_history(self, chat_id: str, source: str = 'telegram', limit: int = 12) -> list[dict]:
         """Return recent prompt events for one chat/source conversation."""
@@ -832,6 +1115,9 @@ class DatabaseManager:
                 continue
             if str(source_context.get('chat_id') or '') != str(chat_id):
                 continue
+            history = self.get_project_by_id(prompt.project_id)
+            if history is None or self._is_archived_status(history.status):
+                continue
             result.append(
                 {
                     'prompt_id': prompt.id,
@@ -875,9 +1161,96 @@ class DatabaseManager:
             'projects': projects,
         }
 
-    def get_all_projects(self) -> list[ProjectHistory]:
-        """Get all projects."""
-        return self.db.query(ProjectHistory).all()
+    def get_all_projects(self, include_archived: bool = False, archived_only: bool = False) -> list[ProjectHistory]:
+        """Get tracked projects with optional archive filtering."""
+        projects = self.db.query(ProjectHistory).order_by(ProjectHistory.updated_at.desc(), ProjectHistory.id.desc()).all()
+        if archived_only:
+            return [project for project in projects if self._is_archived_status(project.status)]
+        if include_archived:
+            return projects
+        return [project for project in projects if not self._is_archived_status(project.status)]
+
+    def get_latest_project_by_name(self, project_name: str, include_archived: bool = False) -> ProjectHistory | None:
+        """Return the latest project matching a human-readable project name."""
+        if not project_name:
+            return None
+        query = self.db.query(ProjectHistory).filter(ProjectHistory.project_name == project_name).order_by(
+            ProjectHistory.updated_at.desc(), ProjectHistory.id.desc()
+        )
+        for history in query.all():
+            if include_archived or not self._is_archived_status(history.status):
+                return history
+        return None
+
+    def archive_project(self, project_id: str) -> dict:
+        """Archive a project so it no longer participates in active automation."""
+        history = self.get_project_by_id(project_id)
+        if history is None:
+            return {'status': 'error', 'message': 'Project not found'}
+        if self._is_archived_status(history.status):
+            return {'status': 'success', 'message': 'Project already archived', 'project_id': project_id}
+        history.status = 'archived'
+        history.message = 'Project archived'
+        history.current_step = 'archived'
+        history.updated_at = datetime.utcnow()
+        self.db.commit()
+        self._log_audit_trail(
+            project_id=project_id,
+            action='PROJECT_ARCHIVED',
+            actor='user',
+            action_type='ARCHIVE',
+            details=f'Project {project_id} archived',
+            message='Project archived',
+        )
+        return {'status': 'success', 'message': 'Project archived', 'project_id': project_id}
+
+    def unarchive_project(self, project_id: str) -> dict:
+        """Restore an archived project to the active automation set."""
+        history = self.get_project_by_id(project_id)
+        if history is None:
+            return {'status': 'error', 'message': 'Project not found'}
+        if not self._is_archived_status(history.status):
+            return {'status': 'success', 'message': 'Project is already active', 'project_id': project_id}
+        history.status = ProjectStatus.COMPLETED.value if history.completed_at else ProjectStatus.STARTED.value
+        history.message = 'Project restored from archive'
+        history.current_step = 'restored'
+        history.updated_at = datetime.utcnow()
+        self.db.commit()
+        self._log_audit_trail(
+            project_id=project_id,
+            action='PROJECT_UNARCHIVED',
+            actor='user',
+            action_type='RESTORE',
+            details=f'Project {project_id} restored from archive',
+            message='Project restored from archive',
+        )
+        return {'status': 'success', 'message': 'Project restored from archive', 'project_id': project_id}
+
+    def delete_project(self, project_id: str, delete_project_root: bool = True) -> dict:
+        """Delete a project and all project-scoped traces from the database."""
+        history = self.get_project_by_id(project_id)
+        if history is None:
+            return {'status': 'error', 'message': 'Project not found'}
+        snapshot_data = self._get_latest_ui_snapshot_data(history.id)
+        project_root = snapshot_data.get('project_root') or str(settings.projects_root / project_id)
+        self.db.query(PromptCodeLink).filter(PromptCodeLink.history_id == history.id).delete()
+        self.db.query(PullRequest).filter(PullRequest.history_id == history.id).delete()
+        self.db.query(PullRequestData).filter(PullRequestData.history_id == history.id).delete()
+        self.db.query(UISnapshot).filter(UISnapshot.history_id == history.id).delete()
+        self.db.query(UserAction).filter(UserAction.history_id == history.id).delete()
+        self.db.query(ProjectLog).filter(ProjectLog.history_id == history.id).delete()
+        self.db.query(AuditTrail).filter(AuditTrail.project_id == project_id).delete()
+        self.db.delete(history)
+        self.db.commit()
+        if delete_project_root and project_root:
+            shutil.rmtree(project_root, ignore_errors=True)
+        return {
+            'status': 'success',
+            'message': 'Project deleted',
+            'project_id': project_id,
+            'project_root_deleted': bool(delete_project_root and project_root),
+            'project_root': project_root,
+        }
+
     def get_project_logs(self, history_id: int, limit: int = 100) -> list[ProjectLog]:
         """Get project logs."""
@@ -1890,6 +2263,7 @@ class DatabaseManager:
 
     def get_dashboard_snapshot(self, limit: int = 8) -> dict:
         """Return DB-backed dashboard data for the UI."""
+        queue_summary = self.get_prompt_queue_summary()
         if settings.gitea_url and settings.gitea_token:
             try:
                 try:
@@ -1906,21 +2280,27 @@ class DatabaseManager:
                 )
             except Exception:
                 pass
-        projects = self.db.query(ProjectHistory).order_by(ProjectHistory.updated_at.desc()).limit(limit).all()
+        active_projects = self.get_all_projects()
+        archived_projects = self.get_all_projects(archived_only=True)
+        projects = active_projects[:limit]
         system_logs = self.db.query(SystemLog).order_by(SystemLog.created_at.desc()).limit(limit).all()
         return {
             "summary": {
-                "total_projects": self.db.query(ProjectHistory).count(),
-                "running_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.RUNNING.value).count(),
-                "completed_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.COMPLETED.value).count(),
-                "error_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.ERROR.value).count(),
+                "total_projects": len(active_projects),
+                "archived_projects": len(archived_projects),
+                "running_projects": len([project for project in active_projects if project.status == ProjectStatus.RUNNING.value]),
+                "completed_projects": len([project for project in active_projects if project.status == ProjectStatus.COMPLETED.value]),
+                "error_projects": len([project for project in active_projects if project.status == ProjectStatus.ERROR.value]),
                 "prompt_events": self.db.query(AuditTrail).filter(AuditTrail.action == "PROMPT_RECEIVED").count(),
+                "queued_prompts": queue_summary.get('queued', 0),
+                "failed_queued_prompts": queue_summary.get('failed', 0),
                 "code_changes": self.db.query(AuditTrail).filter(AuditTrail.action == "CODE_CHANGE").count(),
                 "open_pull_requests": self.db.query(PullRequest).filter(PullRequest.pr_state == "open", PullRequest.merged.is_(False)).count(),
                 "tracked_issues": self.db.query(AuditTrail).filter(AuditTrail.action == "REPOSITORY_ISSUE").count(),
                 "issue_work_events": self.db.query(AuditTrail).filter(AuditTrail.action == "ISSUE_WORKED").count(),
             },
             "projects": [self.get_project_audit_data(project.project_id) for project in projects],
+            "archived_projects": [self.get_project_audit_data(project.project_id) for project in archived_projects[:limit]],
             "system_logs": [
                 {
                     "id": log.id,
@@ -1933,6 +2313,10 @@ class DatabaseManager:
             ],
             "lineage_links": self.get_prompt_change_links(limit=limit * 10),
             "correlations": self.get_prompt_change_correlations(limit=limit),
+            "prompt_queue": {
+                'items': self.get_prompt_queue(limit=limit),
+                'summary': queue_summary,
+            },
         }
 
     def cleanup_audit_trail(self) -> None:
@@ -1,6 +1,7 @@
 """Git manager for project operations."""
 
 import os
+import shutil
 import subprocess
 import tempfile
 from pathlib import Path
@@ -32,8 +33,18 @@ class GitManager:
         resolved = (base_root / project_id).resolve()
         self.project_dir = str(resolved)
 
+    def is_git_available(self) -> bool:
+        """Return whether the git executable is available in the current environment."""
+        return shutil.which('git') is not None
+
+    def _ensure_git_available(self) -> None:
+        """Raise a clear error when git is not installed in the runtime environment."""
+        if not self.is_git_available():
+            raise RuntimeError('git executable is not available in PATH')
+
     def _run(self, args: list[str], env: dict | None = None, check: bool = True) -> subprocess.CompletedProcess:
         """Run a git command in the project directory."""
+        self._ensure_git_available()
         return subprocess.run(
             args,
             check=check,
@@ -4,6 +4,20 @@ import os
 import urllib.error
 import urllib.request
 import json
+from urllib.parse import urlparse
+
+
+def _normalize_base_url(base_url: str) -> str:
+    """Normalize host-only service addresses into valid absolute URLs."""
+    normalized = (base_url or '').strip().rstrip('/')
+    if not normalized:
+        return ''
+    if '://' not in normalized:
+        normalized = f'https://{normalized}'
+    parsed = urlparse(normalized)
+    if not parsed.scheme or not parsed.netloc:
+        return ''
+    return normalized
+
+
 class GiteaAPI:
@@ -11,7 +25,7 @@ class GiteaAPI:
 
     def __init__(self, token: str, base_url: str, owner: str | None = None, repo: str | None = None):
         self.token = token
-        self.base_url = base_url.rstrip("/")
+        self.base_url = _normalize_base_url(base_url)
         self.owner = owner
         self.repo = repo
         self.headers = {
@@ -26,7 +40,7 @@ class GiteaAPI:
        owner = os.getenv("GITEA_OWNER", "ai-test")
        repo = os.getenv("GITEA_REPO", "")
        return {
-           "base_url": base_url.rstrip("/"),
+           "base_url": _normalize_base_url(base_url),
            "token": token,
            "owner": owner,
            "repo": repo,
@@ -96,16 +110,16 @@ class GiteaAPI:
 
     def _request_sync(self, method: str, path: str, payload: dict | None = None) -> dict:
         """Perform a synchronous Gitea API request."""
+        try:
+            if not self.base_url:
+                return {'error': 'Gitea base URL is not configured or is invalid'}
         request = urllib.request.Request(
             self._api_url(path),
             headers=self.get_auth_headers(),
             method=method.upper(),
         )
-        data = None
         if payload is not None:
-            data = json.dumps(payload).encode('utf-8')
+            request.data = json.dumps(payload).encode('utf-8')
-        request.data = data
-        try:
         with urllib.request.urlopen(request) as response:
             body = response.read().decode('utf-8')
             return json.loads(body) if body else {}
@@ -156,10 +170,36 @@ class GiteaAPI:
         result.setdefault("status", "created")
         return result
 
+    async def delete_repo(self, owner: str | None = None, repo: str | None = None) -> dict:
+        """Delete a repository from the configured organization/user."""
+        _owner = owner or self.owner
+        _repo = repo or self.repo
+        if not _owner or not _repo:
+            return {'error': 'Owner and repository name are required'}
+        result = await self._request('DELETE', f'repos/{_owner}/{_repo}')
+        if not result.get('error'):
+            result.setdefault('status', 'deleted')
+        return result
+
+    def delete_repo_sync(self, owner: str | None = None, repo: str | None = None) -> dict:
+        """Synchronously delete a repository from the configured organization/user."""
+        _owner = owner or self.owner
+        _repo = repo or self.repo
+        if not _owner or not _repo:
+            return {'error': 'Owner and repository name are required'}
+        result = self._request_sync('DELETE', f'repos/{_owner}/{_repo}')
+        if not result.get('error'):
+            result.setdefault('status', 'deleted')
+        return result
+
     async def get_current_user(self) -> dict:
         """Get the user associated with the configured token."""
         return await self._request("GET", "user")
 
+    def get_current_user_sync(self) -> dict:
+        """Synchronously get the user associated with the configured token."""
+        return self._request_sync("GET", "user")
+
     async def create_branch(self, branch: str, base: str = "main", owner: str | None = None, repo: str | None = None):
         """Create a new branch."""
         _owner = owner or self.owner
|||||||
162
ai_software_factory/agents/home_assistant.py
Normal file
162
ai_software_factory/agents/home_assistant.py
Normal file
@@ -0,0 +1,162 @@
|
|||||||
|
"""Home Assistant integration for energy-gated queue processing."""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
try:
|
||||||
|
from ..config import settings
|
||||||
|
except ImportError:
|
||||||
|
from config import settings
|
||||||
|
|
||||||
|
|
||||||
|
class HomeAssistantAgent:
|
||||||
|
"""Query Home Assistant for queue-processing eligibility and health."""
|
||||||
|
|
||||||
|
def __init__(self, base_url: str | None = None, token: str | None = None):
|
||||||
|
self.base_url = (base_url or settings.home_assistant_url).rstrip('/')
|
||||||
|
self.token = token or settings.home_assistant_token
|
||||||
|
|
||||||
|
def _headers(self) -> dict[str, str]:
|
||||||
|
return {
|
||||||
|
'Authorization': f'Bearer {self.token}',
|
||||||
|
'Content-Type': 'application/json',
|
||||||
|
}
|
||||||
|
|
||||||
|
def _state_url(self, entity_id: str) -> str:
|
||||||
|
return f'{self.base_url}/api/states/{entity_id}'
|
||||||
|
|
||||||
|
async def _get_state(self, entity_id: str) -> dict:
|
||||||
|
if not self.base_url:
|
||||||
|
return {'error': 'Home Assistant URL is not configured'}
|
||||||
|
if not self.token:
|
||||||
|
return {'error': 'Home Assistant token is not configured'}
|
||||||
|
if not entity_id:
|
||||||
|
return {'error': 'Home Assistant entity id is not configured'}
|
||||||
|
try:
|
||||||
|
import aiohttp
|
||||||
|
|
||||||
|
async with aiohttp.ClientSession() as session:
|
||||||
|
async with session.get(self._state_url(entity_id), headers=self._headers()) as resp:
|
||||||
|
payload = await resp.json(content_type=None)
|
||||||
|
if 200 <= resp.status < 300:
|
||||||
|
return payload if isinstance(payload, dict) else {'value': payload}
|
||||||
|
return {'error': payload, 'status_code': resp.status}
|
||||||
|
except Exception as exc:
|
||||||
|
return {'error': str(exc)}
|
||||||
|
|
||||||
|
def _get_state_sync(self, entity_id: str) -> dict:
|
||||||
|
if not self.base_url:
|
||||||
|
return {'error': 'Home Assistant URL is not configured'}
|
||||||
|
if not self.token:
|
||||||
|
return {'error': 'Home Assistant token is not configured'}
|
||||||
|
if not entity_id:
|
||||||
|
return {'error': 'Home Assistant entity id is not configured'}
|
||||||
|
try:
|
||||||
|
import json
|
||||||
|
import urllib.error
|
||||||
|
import urllib.request
|
||||||
|
|
||||||
|
request = urllib.request.Request(self._state_url(entity_id), headers=self._headers(), method='GET')
|
||||||
|
with urllib.request.urlopen(request) as response:
|
||||||
|
body = response.read().decode('utf-8')
|
||||||
|
return json.loads(body) if body else {}
|
||||||
|
except urllib.error.HTTPError as exc:
|
||||||
|
try:
|
||||||
|
body = exc.read().decode('utf-8')
|
||||||
|
except Exception:
|
||||||
|
body = str(exc)
|
||||||
|
return {'error': body, 'status_code': exc.code}
|
||||||
|
except Exception as exc:
|
||||||
|
return {'error': str(exc)}
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _coerce_float(payload: dict) -> float | None:
|
||||||
|
raw = payload.get('state') if isinstance(payload, dict) else None
|
||||||
|
try:
|
||||||
|
return float(raw)
|
||||||
|
except Exception:
|
||||||
|
return None
|
||||||
|
|
||||||
|
async def queue_gate_status(self, force: bool = False) -> dict:
|
||||||
|
"""Return whether queued prompts may be processed now."""
|
||||||
|
if force or settings.prompt_queue_force_process:
|
||||||
|
return {
|
||||||
|
'status': 'success',
|
||||||
|
'allowed': True,
|
||||||
|
'forced': True,
|
||||||
|
                'reason': 'Queue override is enabled',
            }

        battery = await self._get_state(settings.home_assistant_battery_entity_id)
        surplus = await self._get_state(settings.home_assistant_surplus_entity_id)
        battery_value = self._coerce_float(battery)
        surplus_value = self._coerce_float(surplus)
        checks = []
        if battery.get('error'):
            checks.append({'name': 'battery', 'ok': False, 'message': str(battery.get('error')), 'entity_id': settings.home_assistant_battery_entity_id})
        else:
            checks.append({'name': 'battery', 'ok': battery_value is not None and battery_value >= settings.home_assistant_battery_full_threshold, 'message': f'{battery_value}%', 'entity_id': settings.home_assistant_battery_entity_id})
        if surplus.get('error'):
            checks.append({'name': 'surplus', 'ok': False, 'message': str(surplus.get('error')), 'entity_id': settings.home_assistant_surplus_entity_id})
        else:
            checks.append({'name': 'surplus', 'ok': surplus_value is not None and surplus_value >= settings.home_assistant_surplus_threshold_watts, 'message': f'{surplus_value} W', 'entity_id': settings.home_assistant_surplus_entity_id})
        allowed = all(check['ok'] for check in checks)
        return {
            'status': 'success' if allowed else 'blocked',
            'allowed': allowed,
            'forced': False,
            'checks': checks,
            'battery_level': battery_value,
            'surplus_watts': surplus_value,
            'thresholds': {
                'battery_full_percent': settings.home_assistant_battery_full_threshold,
                'surplus_watts': settings.home_assistant_surplus_threshold_watts,
            },
            'reason': 'Energy gate open' if allowed else 'Battery or surplus threshold not met',
        }

    def health_check_sync(self) -> dict:
        """Return current Home Assistant connectivity and queue gate diagnostics."""
        if not self.base_url:
            return {
                'status': 'error',
                'message': 'Home Assistant URL is not configured.',
                'base_url': '',
                'configured': False,
                'checks': [],
            }
        if not self.token:
            return {
                'status': 'error',
                'message': 'Home Assistant token is not configured.',
                'base_url': self.base_url,
                'configured': False,
                'checks': [],
            }
        battery = self._get_state_sync(settings.home_assistant_battery_entity_id)
        surplus = self._get_state_sync(settings.home_assistant_surplus_entity_id)
        checks = []
        for name, entity_id, payload in (
            ('battery', settings.home_assistant_battery_entity_id, battery),
            ('surplus', settings.home_assistant_surplus_entity_id, surplus),
        ):
            checks.append(
                {
                    'name': name,
                    'entity_id': entity_id,
                    'ok': not bool(payload.get('error')),
                    'message': str(payload.get('error') or payload.get('state') or 'ok'),
                    'status_code': payload.get('status_code'),
                    'url': self._state_url(entity_id) if entity_id else self.base_url,
                }
            )
        return {
            'status': 'success' if all(check['ok'] for check in checks) else 'error',
            'message': 'Home Assistant connectivity is healthy.' if all(check['ok'] for check in checks) else 'Home Assistant checks failed.',
            'base_url': self.base_url,
            'configured': True,
            'checks': checks,
            'queue_gate': {
                'battery_full_percent': settings.home_assistant_battery_full_threshold,
                'surplus_watts': settings.home_assistant_surplus_threshold_watts,
                'force_process': settings.prompt_queue_force_process,
            },
        }
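The queue gate above reduces to two threshold comparisons: a reading must exist and meet its threshold, and both checks must pass. A minimal standalone sketch of that decision (threshold values here are hypothetical defaults, and there is no Home Assistant dependency):

```python
def energy_gate(battery_percent, surplus_watts,
                battery_full_threshold=95.0, surplus_threshold_watts=300.0):
    """Mirror of the queue gate: both readings must exist and meet their thresholds."""
    checks = [
        battery_percent is not None and battery_percent >= battery_full_threshold,
        surplus_watts is not None and surplus_watts >= surplus_threshold_watts,
    ]
    # A missing or failed sensor reading (None) blocks the queue, same as a low value.
    return all(checks)
```

Note that, as in the method above, a sensor error never raises here; it simply yields a blocked gate with a per-check message.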
ai_software_factory/agents/llm_service.py — 394 lines (new file)
@@ -0,0 +1,394 @@
"""Centralized LLM client with guardrails and mediated tool context."""

from __future__ import annotations

import json

try:
    from .gitea import GiteaAPI
except ImportError:
    from gitea import GiteaAPI

try:
    from ..config import settings
except ImportError:
    from config import settings


class LLMToolbox:
    """Build named tool payloads that can be shared with external LLM providers."""

    SUPPORTED_LIVE_TOOL_STAGES = ('request_interpretation', 'change_summary', 'generation_plan', 'project_naming', 'project_id_naming')

    def build_tool_context(self, stage: str, context: dict | None = None) -> list[dict]:
        """Return the mediated tool payloads allowed for this LLM request."""
        context = context or {}
        allowed = set(settings.llm_tool_allowlist)
        limit = settings.llm_tool_context_limit
        tool_context: list[dict] = []

        if 'gitea_project_catalog' in allowed:
            projects = context.get('projects') or []
            if projects:
                tool_context.append(
                    {
                        'name': 'gitea_project_catalog',
                        'description': 'Tracked active projects and their repository mappings inside the factory.',
                        'payload': projects[:limit],
                    }
                )

        if 'gitea_project_state' in allowed:
            state_payload = {
                'project_id': context.get('project_id'),
                'project_name': context.get('project_name') or context.get('name'),
                'repository': context.get('repository'),
                'repository_url': context.get('repository_url'),
                'pull_request': context.get('pull_request'),
                'pull_request_url': context.get('pull_request_url'),
                'pull_request_state': context.get('pull_request_state'),
                'related_issue': context.get('related_issue'),
            }
            if any(value for value in state_payload.values()):
                tool_context.append(
                    {
                        'name': 'gitea_project_state',
                        'description': 'Current repository and pull-request state for the project being discussed.',
                        'payload': state_payload,
                    }
                )

        if 'gitea_project_issues' in allowed:
            issues = context.get('open_issues') or context.get('issues') or []
            if issues:
                tool_context.append(
                    {
                        'name': 'gitea_project_issues',
                        'description': 'Open tracked Gitea issues for the relevant project repository.',
                        'payload': issues[:limit],
                    }
                )

        if 'gitea_pull_requests' in allowed:
            pull_requests = context.get('pull_requests') or []
            if pull_requests:
                tool_context.append(
                    {
                        'name': 'gitea_pull_requests',
                        'description': 'Tracked pull requests associated with the relevant project repository.',
                        'payload': pull_requests[:limit],
                    }
                )

        return tool_context

    def build_live_tool_specs(self, stage: str, context: dict | None = None) -> list[dict]:
        """Return live tool-call specs that the model may request explicitly."""
        _context = context or {}
        specs = []
        allowed = set(settings.llm_live_tools_for_stage(stage))
        if 'gitea_lookup_issue' in allowed:
            specs.append(
                {
                    'name': 'gitea_lookup_issue',
                    'description': 'Fetch one live Gitea issue by issue number for a tracked repository.',
                    'arguments': {
                        'project_id': 'optional tracked project id',
                        'owner': 'optional repository owner override',
                        'repo': 'optional repository name override',
                        'issue_number': 'required integer issue number',
                    },
                }
            )
        if 'gitea_lookup_pull_request' in allowed:
            specs.append(
                {
                    'name': 'gitea_lookup_pull_request',
                    'description': 'Fetch one live Gitea pull request by PR number for a tracked repository.',
                    'arguments': {
                        'project_id': 'optional tracked project id',
                        'owner': 'optional repository owner override',
                        'repo': 'optional repository name override',
                        'pr_number': 'required integer pull request number',
                    },
                }
            )
        return specs

class LLMLiveToolExecutor:
    """Resolve bounded live tool requests on behalf of the model."""

    def __init__(self):
        self.gitea_api = None
        if settings.gitea_url and settings.gitea_token:
            self.gitea_api = GiteaAPI(
                token=settings.GITEA_TOKEN,
                base_url=settings.GITEA_URL,
                owner=settings.GITEA_OWNER,
                repo=settings.GITEA_REPO or '',
            )

    async def execute(self, tool_name: str, arguments: dict, context: dict | None = None) -> dict:
        """Execute one live tool request and normalize the result."""
        if tool_name not in set(settings.llm_live_tool_allowlist):
            return {'error': f'Tool {tool_name} is not enabled'}
        if self.gitea_api is None:
            return {'error': 'Gitea live tool execution is not configured'}
        resolved = self._resolve_repository(arguments=arguments, context=context or {})
        if resolved.get('error'):
            return resolved
        owner = resolved['owner']
        repo = resolved['repo']

        if tool_name == 'gitea_lookup_issue':
            issue_number = arguments.get('issue_number')
            if issue_number is None:
                return {'error': 'issue_number is required'}
            return await self.gitea_api.get_issue(issue_number=int(issue_number), owner=owner, repo=repo)

        if tool_name == 'gitea_lookup_pull_request':
            pr_number = arguments.get('pr_number')
            if pr_number is None:
                return {'error': 'pr_number is required'}
            return await self.gitea_api.get_pull_request(pr_number=int(pr_number), owner=owner, repo=repo)

        return {'error': f'Unsupported tool {tool_name}'}

    def _resolve_repository(self, arguments: dict, context: dict) -> dict:
        """Resolve repository owner/name from explicit args or tracked project context."""
        owner = arguments.get('owner')
        repo = arguments.get('repo')
        if owner and repo:
            return {'owner': owner, 'repo': repo}
        project_id = arguments.get('project_id')
        if project_id:
            for project in context.get('projects', []):
                if project.get('project_id') == project_id:
                    repository = project.get('repository') or {}
                    if repository.get('owner') and repository.get('name'):
                        return {'owner': repository['owner'], 'repo': repository['name']}
            state = context.get('repository') or {}
            if context.get('project_id') == project_id and state.get('owner') and state.get('name'):
                return {'owner': state['owner'], 'repo': state['name']}
        repository = context.get('repository') or {}
        if repository.get('owner') and repository.get('name'):
            return {'owner': repository['owner'], 'repo': repository['name']}
        return {'error': 'Could not resolve repository for tool request'}

class LLMServiceClient:
    """Call the configured LLM provider with consistent guardrails and tool payloads."""

    def __init__(self, ollama_url: str | None = None, model: str | None = None):
        self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
        self.model = model or settings.OLLAMA_MODEL
        self.toolbox = LLMToolbox()
        self.live_tool_executor = LLMLiveToolExecutor()

    async def chat_with_trace(
        self,
        *,
        stage: str,
        system_prompt: str,
        user_prompt: str,
        tool_context_input: dict | None = None,
        expect_json: bool = False,
    ) -> tuple[str | None, dict]:
        """Invoke the configured LLM and return both content and a structured trace."""
        effective_system_prompt = self._compose_system_prompt(stage, system_prompt)
        tool_context = self.toolbox.build_tool_context(stage=stage, context=tool_context_input)
        live_tool_specs = self.toolbox.build_live_tool_specs(stage=stage, context=tool_context_input)
        effective_user_prompt = self._compose_user_prompt(user_prompt, tool_context, live_tool_specs)
        raw_responses: list[dict] = []
        executed_tool_calls: list[dict] = []
        current_user_prompt = effective_user_prompt
        max_rounds = settings.llm_max_tool_call_rounds

        for round_index in range(max_rounds + 1):
            content, payload, error = await self._send_chat_request(
                system_prompt=effective_system_prompt,
                user_prompt=current_user_prompt,
                expect_json=expect_json,
            )
            raw_responses.append(payload)
            if content:
                tool_request = self._extract_tool_request(content)
                if tool_request and round_index < max_rounds:
                    tool_name = tool_request.get('name')
                    tool_arguments = tool_request.get('arguments') or {}
                    tool_result = await self.live_tool_executor.execute(tool_name, tool_arguments, tool_context_input)
                    executed_tool_calls.append(
                        {
                            'name': tool_name,
                            'arguments': tool_arguments,
                            'result': tool_result,
                        }
                    )
                    current_user_prompt = self._compose_follow_up_prompt(user_prompt, tool_context, live_tool_specs, executed_tool_calls)
                    continue
                return content, {
                    'stage': stage,
                    'provider': 'ollama',
                    'model': self.model,
                    'system_prompt': effective_system_prompt,
                    'user_prompt': current_user_prompt,
                    'assistant_response': content,
                    'raw_response': {
                        'provider_response': raw_responses[-1],
                        'provider_responses': raw_responses,
                        'tool_context': tool_context,
                        'live_tool_specs': live_tool_specs,
                        'executed_tool_calls': executed_tool_calls,
                    },
                    'raw_responses': raw_responses,
                    'fallback_used': False,
                    'guardrails': self._guardrail_sections(stage),
                    'tool_context': tool_context,
                    'live_tool_specs': live_tool_specs,
                    'executed_tool_calls': executed_tool_calls,
                }
            if error:
                break

        return None, {
            'stage': stage,
            'provider': 'ollama',
            'model': self.model,
            'system_prompt': effective_system_prompt,
            'user_prompt': current_user_prompt,
            'assistant_response': '',
            'raw_response': {
                'provider_response': raw_responses[-1] if raw_responses else {'error': 'No response'},
                'provider_responses': raw_responses,
                'tool_context': tool_context,
                'live_tool_specs': live_tool_specs,
                'executed_tool_calls': executed_tool_calls,
            },
            'raw_responses': raw_responses,
            'fallback_used': True,
            'guardrails': self._guardrail_sections(stage),
            'tool_context': tool_context,
            'live_tool_specs': live_tool_specs,
            'executed_tool_calls': executed_tool_calls,
        }

    async def _send_chat_request(self, *, system_prompt: str, user_prompt: str, expect_json: bool) -> tuple[str | None, dict, str | None]:
        """Send one outbound chat request to the configured model provider."""
        request_payload = {
            'model': self.model,
            'stream': False,
            'messages': [
                {'role': 'system', 'content': system_prompt},
                {'role': 'user', 'content': user_prompt},
            ],
        }
        if expect_json:
            request_payload['format'] = 'json'
        try:
            import aiohttp

            async with aiohttp.ClientSession() as session:
                async with session.post(f'{self.ollama_url}/api/chat', json=request_payload) as resp:
                    payload = await resp.json()
                    if 200 <= resp.status < 300:
                        return (payload.get('message') or {}).get('content', ''), payload, None
                    return None, payload, str(payload.get('error') or payload)
        except Exception as exc:
            return None, {'error': str(exc)}, str(exc)

    def _compose_system_prompt(self, stage: str, stage_prompt: str) -> str:
        """Merge the stage prompt with configured guardrails."""
        sections = [stage_prompt.strip()] + self._guardrail_sections(stage)
        return '\n\n'.join(section for section in sections if section)

    def _guardrail_sections(self, stage: str) -> list[str]:
        """Return all configured guardrail sections for one LLM stage."""
        sections = []
        if settings.llm_guardrail_prompt:
            sections.append(f'Global guardrails:\n{settings.llm_guardrail_prompt}')
        stage_specific = {
            'request_interpretation': settings.llm_request_interpreter_guardrail_prompt,
            'change_summary': settings.llm_change_summary_guardrail_prompt,
            'project_naming': settings.llm_project_naming_guardrail_prompt,
            'project_id_naming': settings.llm_project_id_guardrail_prompt,
        }.get(stage)
        if stage_specific:
            sections.append(f'Stage-specific guardrails:\n{stage_specific}')
        return sections

    def _compose_user_prompt(self, prompt: str, tool_context: list[dict], live_tool_specs: list[dict] | None = None) -> str:
        """Append tool payloads and live tool-call specs to the outbound user prompt."""
        live_tool_specs = live_tool_specs if live_tool_specs is not None else []
        sections = [prompt]
        if tool_context:
            sections.append(
                'Service-mediated tool outputs are available below. Treat them as authoritative read-only data supplied by the factory:\n'
                f'{json.dumps(tool_context, indent=2, sort_keys=True)}'
            )
        if live_tool_specs:
            sections.append(
                'If you need additional live repository data, you may request exactly one tool call by responding with JSON shaped as '
                '{"tool_request": {"name": "<tool name>", "arguments": {...}}}. '
                'After tool results are returned, respond with the final answer instead of another tool request.\n'
                f'Available live tools:\n{json.dumps(live_tool_specs, indent=2, sort_keys=True)}'
            )
        return '\n\n'.join(section for section in sections if section)

    def _compose_follow_up_prompt(self, original_prompt: str, tool_context: list[dict], live_tool_specs: list[dict], executed_tool_calls: list[dict]) -> str:
        """Build the follow-up user prompt after executing one or more live tool requests."""
        sections = [self._compose_user_prompt(original_prompt, tool_context, live_tool_specs)]
        sections.append(
            'The service executed the requested live tool call(s). Use the tool result(s) below to produce the final answer. Do not request another tool call.\n'
            f'{json.dumps(executed_tool_calls, indent=2, sort_keys=True)}'
        )
        return '\n\n'.join(sections)

    def _extract_tool_request(self, content: str) -> dict | None:
        """Return a normalized tool request when the model explicitly asks for one."""
        try:
            parsed = json.loads(content)
        except Exception:
            return None
        if not isinstance(parsed, dict):
            return None
        tool_request = parsed.get('tool_request')
        if not isinstance(tool_request, dict) or not tool_request.get('name'):
            return None
        return {
            'name': str(tool_request.get('name')).strip(),
            'arguments': tool_request.get('arguments') or {},
        }

    def get_runtime_configuration(self) -> dict:
        """Return the active LLM runtime config, guardrails, and tool exposure."""
        live_tool_stages = {
            stage: settings.llm_live_tools_for_stage(stage)
            for stage in self.toolbox.SUPPORTED_LIVE_TOOL_STAGES
        }
        return {
            'provider': 'ollama',
            'ollama_url': self.ollama_url,
            'model': self.model,
            'guardrails': {
                'global': settings.llm_guardrail_prompt,
                'request_interpretation': settings.llm_request_interpreter_guardrail_prompt,
                'change_summary': settings.llm_change_summary_guardrail_prompt,
                'project_naming': settings.llm_project_naming_guardrail_prompt,
                'project_id_naming': settings.llm_project_id_guardrail_prompt,
            },
            'system_prompts': {
                'project_naming': settings.llm_project_naming_system_prompt,
                'project_id_naming': settings.llm_project_id_system_prompt,
            },
            'mediated_tools': settings.llm_tool_allowlist,
            'live_tools': settings.llm_live_tool_allowlist,
            'live_tool_stage_allowlist': settings.llm_live_tool_stage_allowlist,
            'live_tool_stage_tool_map': settings.llm_live_tool_stage_tool_map,
            'live_tools_by_stage': live_tool_stages,
            'tool_context_limit': settings.llm_tool_context_limit,
            'max_tool_call_rounds': settings.llm_max_tool_call_rounds,
            'gitea_live_tools_configured': bool(settings.gitea_url and settings.gitea_token),
        }
@@ -39,6 +39,7 @@ class AgentOrchestrator:
         existing_history=None,
         prompt_source_context: dict | None = None,
         prompt_routing: dict | None = None,
+        repo_name_override: str | None = None,
         related_issue_hint: dict | None = None,
     ):
         """Initialize orchestrator."""
@@ -58,6 +59,7 @@ class AgentOrchestrator:
         self.prompt_actor = prompt_actor
         self.prompt_source_context = prompt_source_context or {}
         self.prompt_routing = prompt_routing or {}
+        self.repo_name_override = repo_name_override
         self.existing_history = existing_history
         self.changed_files: list[str] = []
         self.gitea_api = GiteaAPI(
@@ -68,7 +70,7 @@ class AgentOrchestrator:
         )
         self.project_root = settings.projects_root / project_id
         self.prompt_audit = None
-        self.repo_name = settings.gitea_repo or self.gitea_api.build_project_repo_name(project_id, project_name)
+        self.repo_name = settings.gitea_repo or self.gitea_api.build_project_repo_name(project_id, repo_name_override or project_name)
         self.repo_owner = settings.gitea_owner
         self.repo_url = None
         self.branch_name = self._build_pr_branch_name(project_id)
@@ -322,6 +324,10 @@ class AgentOrchestrator:
 
     async def _prepare_git_workspace(self) -> None:
         """Initialize the local repo and ensure the PR branch exists before writing files."""
+        if not self.git_manager.is_git_available():
+            self.ui_manager.ui_data.setdefault('git', {})['error'] = 'git executable is not available in PATH'
+            self._append_log('Local git workspace skipped: git executable is not available in PATH')
+            return
         if not self.git_manager.has_repo():
             self.git_manager.init_repo()
 
@@ -606,6 +612,10 @@ class AgentOrchestrator:
         unique_files = list(dict.fromkeys(self.changed_files))
         if not unique_files:
             return
+        if not self.git_manager.is_git_available():
+            self.ui_manager.ui_data.setdefault('git', {})['error'] = 'git executable is not available in PATH'
+            self._append_log('Git commit skipped: git executable is not available in PATH')
+            return
         try:
             if not self.git_manager.has_repo():
@@ -668,7 +678,7 @@ class AgentOrchestrator:
                 commit_hash=commit_hash,
                 commit_url=remote_record.get('commit_url') if remote_record else None,
             )
-        except (subprocess.CalledProcessError, FileNotFoundError) as exc:
+        except (RuntimeError, subprocess.CalledProcessError, FileNotFoundError) as exc:
             self.ui_manager.ui_data.setdefault("git", {})["error"] = str(exc)
             self._append_log(f"Git commit skipped: {exc}")
@@ -7,8 +7,12 @@ import re
 
 try:
     from ..config import settings
+    from .gitea import GiteaAPI
+    from .llm_service import LLMServiceClient
 except ImportError:
     from config import settings
+    from agents.gitea import GiteaAPI
+    from agents.llm_service import LLMServiceClient
 
 
 class RequestInterpreter:
@@ -17,6 +21,15 @@ class RequestInterpreter:
     def __init__(self, ollama_url: str | None = None, model: str | None = None):
         self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
         self.model = model or settings.OLLAMA_MODEL
+        self.llm_client = LLMServiceClient(ollama_url=self.ollama_url, model=self.model)
+        self.gitea_api = None
+        if settings.gitea_url and settings.gitea_token:
+            self.gitea_api = GiteaAPI(
+                token=settings.GITEA_TOKEN,
+                base_url=settings.GITEA_URL,
+                owner=settings.GITEA_OWNER,
+                repo=settings.GITEA_REPO or '',
+            )
 
     async def interpret(self, prompt_text: str, context: dict | None = None) -> dict:
         """Interpret free-form text into the request shape expected by the orchestrator."""
@@ -49,48 +62,46 @@ class RequestInterpreter:
             f"User prompt:\n{normalized}"
         )
-        try:
-            import aiohttp
-
-            async with aiohttp.ClientSession() as session:
-                async with session.post(
-                    f'{self.ollama_url}/api/chat',
-                    json={
-                        'model': self.model,
-                        'stream': False,
-                        'format': 'json',
-                        'messages': [
-                            {
-                                'role': 'system',
-                                'content': system_prompt,
-                            },
-                            {'role': 'user', 'content': user_prompt},
-                        ],
-                    },
-                ) as resp:
-                    payload = await resp.json()
-                    if 200 <= resp.status < 300:
-                        content = payload.get('message', {}).get('content', '')
+        content, trace = await self.llm_client.chat_with_trace(
+            stage='request_interpretation',
+            system_prompt=system_prompt,
+            user_prompt=user_prompt,
+            tool_context_input={
+                'projects': compact_context.get('projects', []),
+                'open_issues': [
+                    issue
+                    for project in compact_context.get('projects', [])
+                    for issue in project.get('open_issues', [])
+                ],
+                'recent_chat_history': compact_context.get('recent_chat_history', []),
+            },
+            expect_json=True,
+        )
         if content:
+            try:
                 parsed = json.loads(content)
                 interpreted = self._normalize_interpreted_request(parsed, normalized)
                 routing = self._normalize_routing(parsed.get('routing'), interpreted, compact_context)
-                return interpreted, {
-                    'stage': 'request_interpretation',
-                    'provider': 'ollama',
-                    'model': self.model,
-                    'system_prompt': system_prompt,
-                    'user_prompt': user_prompt,
-                    'assistant_response': content,
-                    'raw_response': payload,
-                    'routing': routing,
-                    'context_excerpt': compact_context,
-                    'fallback_used': False,
-                }
+                naming_trace = None
+                if routing.get('intent') == 'new_project':
+                    interpreted, routing, naming_trace = await self._refine_new_project_identity(
+                        prompt_text=normalized,
+                        interpreted=interpreted,
+                        routing=routing,
+                        context=compact_context,
+                    )
+                trace['routing'] = routing
+                trace['context_excerpt'] = compact_context
+                if naming_trace is not None:
+                    trace['project_naming'] = naming_trace
+                return interpreted, trace
             except Exception:
                 pass
 
         interpreted, routing = self._heuristic_fallback(normalized, compact_context)
+        if routing.get('intent') == 'new_project':
+            constraints = await self._collect_project_identity_constraints(compact_context)
+            routing['repo_name'] = self._ensure_unique_repo_name(routing.get('repo_name') or interpreted.get('name') or 'project', constraints['repo_names'])
         return interpreted, {
             'stage': 'request_interpretation',
             'provider': 'heuristic',
@@ -98,12 +109,87 @@ class RequestInterpreter:
             'system_prompt': system_prompt,
             'user_prompt': user_prompt,
             'assistant_response': json.dumps({'request': interpreted, 'routing': routing}),
-            'raw_response': {'fallback': 'heuristic'},
+            'raw_response': {'fallback': 'heuristic', 'llm_trace': trace.get('raw_response') if isinstance(trace, dict) else None},
             'routing': routing,
             'context_excerpt': compact_context,
+            'guardrails': trace.get('guardrails') if isinstance(trace, dict) else [],
+            'tool_context': trace.get('tool_context') if isinstance(trace, dict) else [],
             'fallback_used': True,
         }
 
+    async def _refine_new_project_identity(
+        self,
+        *,
+        prompt_text: str,
+        interpreted: dict,
+        routing: dict,
+        context: dict,
+    ) -> tuple[dict, dict, dict | None]:
+        """Refine project and repository naming for genuinely new work."""
+        constraints = await self._collect_project_identity_constraints(context)
+        user_prompt = (
+            f"Original user prompt:\n{prompt_text}\n\n"
+            f"Draft structured request:\n{json.dumps(interpreted, indent=2)}\n\n"
+            f"Tracked project names to avoid reusing unless the user clearly wants them:\n{json.dumps(sorted(constraints['project_names']))}\n\n"
+            f"Repository slugs already reserved in tracked projects or Gitea:\n{json.dumps(sorted(constraints['repo_names']))}\n\n"
+            "Suggest the best project display name and repository slug for this new project."
+        )
+        content, trace = await self.llm_client.chat_with_trace(
+            stage='project_naming',
+            system_prompt=settings.llm_project_naming_system_prompt,
+            user_prompt=user_prompt,
+            tool_context_input={
+                'projects': context.get('projects', []),
+            },
+            expect_json=True,
+        )
+        if content:
+            try:
+                parsed = json.loads(content)
+                project_name, repo_name = self._normalize_project_identity(
+                    parsed,
+                    fallback_name=interpreted.get('name') or self._derive_name(prompt_text),
+                )
+                repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
+                interpreted['name'] = project_name
+                routing['project_name'] = project_name
+                routing['repo_name'] = repo_name
+                return interpreted, routing, trace
+            except Exception:
+                pass
+
+        fallback_name = interpreted.get('name') or self._derive_name(prompt_text)
+        routing['project_name'] = fallback_name
+        routing['repo_name'] = self._ensure_unique_repo_name(self._derive_repo_name(fallback_name), constraints['repo_names'])
+        return interpreted, routing, trace
+
+    async def _collect_project_identity_constraints(self, context: dict) -> dict[str, set[str]]:
+        """Collect reserved project names and repository slugs from tracked state and Gitea."""
+        project_names: set[str] = set()
+        repo_names: set[str] = set()
+        for project in context.get('projects', []):
+            if project.get('name'):
+                project_names.add(str(project.get('name')).strip())
+            repository = project.get('repository') or {}
+            if repository.get('name'):
+                repo_names.add(str(repository.get('name')).strip())
+        repo_names.update(await self._load_remote_repo_names())
+        return {
+            'project_names': project_names,
+            'repo_names': repo_names,
+        }
+
+    async def _load_remote_repo_names(self) -> set[str]:
+        """Load current Gitea repository names when live credentials are available."""
+        if settings.gitea_repo:
+            return {settings.gitea_repo}
+        if self.gitea_api is None or not settings.gitea_owner:
+            return set()
+        repos = await self.gitea_api.list_repositories(owner=settings.gitea_owner)
+        if not isinstance(repos, list):
+            return set()
+        return {str(repo.get('name')).strip() for repo in repos if repo.get('name')}
|
||||||
|
|
||||||
def _normalize_interpreted_request(self, interpreted: dict, original_prompt: str) -> dict:
|
def _normalize_interpreted_request(self, interpreted: dict, original_prompt: str) -> dict:
|
||||||
"""Normalize LLM output into the required request shape."""
|
"""Normalize LLM output into the required request shape."""
|
||||||
request_payload = interpreted.get('request') if isinstance(interpreted.get('request'), dict) else interpreted
|
request_payload = interpreted.get('request') if isinstance(interpreted.get('request'), dict) else interpreted
|
||||||
@@ -164,14 +250,18 @@ class RequestInterpreter:
|
|||||||
matched_project = project
|
matched_project = project
|
||||||
break
|
break
|
||||||
intent = str(routing.get('intent') or '').strip() or ('continue_project' if matched_project else 'new_project')
|
intent = str(routing.get('intent') or '').strip() or ('continue_project' if matched_project else 'new_project')
|
||||||
return {
|
normalized = {
|
||||||
'intent': intent,
|
'intent': intent,
|
||||||
'project_id': matched_project.get('project_id') if matched_project else project_id,
|
'project_id': matched_project.get('project_id') if matched_project else project_id,
|
||||||
'project_name': matched_project.get('name') if matched_project else (project_name or interpreted.get('name')),
|
'project_name': matched_project.get('name') if matched_project else (project_name or interpreted.get('name')),
|
||||||
|
'repo_name': routing.get('repo_name') if intent == 'new_project' else None,
|
||||||
'issue_number': issue_number,
|
'issue_number': issue_number,
|
||||||
'confidence': routing.get('confidence') or ('medium' if matched_project else 'low'),
|
'confidence': routing.get('confidence') or ('medium' if matched_project else 'low'),
|
||||||
'reasoning_summary': routing.get('reasoning_summary') or ('Matched prior project context' if matched_project else 'No strong prior project match found'),
|
'reasoning_summary': routing.get('reasoning_summary') or ('Matched prior project context' if matched_project else 'No strong prior project match found'),
|
||||||
}
|
}
|
||||||
|
if normalized['intent'] == 'new_project' and not normalized['repo_name']:
|
||||||
|
normalized['repo_name'] = self._derive_repo_name(normalized['project_name'] or interpreted.get('name') or 'Generated Project')
|
||||||
|
return normalized
|
||||||
|
|
||||||
def _normalize_list(self, value) -> list[str]:
|
def _normalize_list(self, value) -> list[str]:
|
||||||
if isinstance(value, list):
|
if isinstance(value, list):
|
||||||
@@ -183,10 +273,65 @@ class RequestInterpreter:
|
|||||||
def _derive_name(self, prompt_text: str) -> str:
|
def _derive_name(self, prompt_text: str) -> str:
|
||||||
"""Derive a stable project name when the LLM does not provide one."""
|
"""Derive a stable project name when the LLM does not provide one."""
|
||||||
first_line = prompt_text.splitlines()[0].strip()
|
first_line = prompt_text.splitlines()[0].strip()
|
||||||
|
quoted = re.search(r'["\']([^"\']{3,80})["\']', first_line)
|
||||||
|
if quoted:
|
||||||
|
return self._humanize_name(quoted.group(1))
|
||||||
|
|
||||||
|
noun_phrase = re.search(
|
||||||
|
r'(?:build|create|start|make|develop|generate|design|need|want)\s+'
|
||||||
|
r'(?:me\s+|us\s+|an?\s+|the\s+|new\s+|internal\s+|simple\s+|lightweight\s+|modern\s+|web\s+|mobile\s+)*'
|
||||||
|
r'([a-z0-9][a-z0-9\s-]{2,80}?(?:portal|dashboard|app|application|service|tool|system|platform|api|bot|assistant|website|site|workspace|tracker|manager))\b',
|
||||||
|
first_line,
|
||||||
|
flags=re.IGNORECASE,
|
||||||
|
)
|
||||||
|
if noun_phrase:
|
||||||
|
return self._humanize_name(noun_phrase.group(1))
|
||||||
|
|
||||||
cleaned = re.sub(r'[^A-Za-z0-9 ]+', ' ', first_line)
|
cleaned = re.sub(r'[^A-Za-z0-9 ]+', ' ', first_line)
|
||||||
words = [word.capitalize() for word in cleaned.split()[:4]]
|
stopwords = {
|
||||||
|
'build', 'create', 'start', 'make', 'develop', 'generate', 'design', 'need', 'want', 'please', 'for', 'our', 'with', 'that', 'this',
|
||||||
|
'new', 'internal', 'simple', 'modern', 'web', 'mobile', 'app', 'application', 'tool', 'system',
|
||||||
|
}
|
||||||
|
tokens = [word for word in cleaned.split() if word and word.lower() not in stopwords]
|
||||||
|
if tokens:
|
||||||
|
return self._humanize_name(' '.join(tokens[:4]))
|
||||||
|
return 'Generated Project'
|
||||||
|
|
||||||
|
def _humanize_name(self, raw_name: str) -> str:
|
||||||
|
"""Normalize a candidate project name into a readable title."""
|
||||||
|
cleaned = re.sub(r'[^A-Za-z0-9\s-]+', ' ', raw_name).strip(' -')
|
||||||
|
cleaned = re.sub(r'\s+', ' ', cleaned)
|
||||||
|
special_upper = {'api', 'crm', 'erp', 'cms', 'hr', 'it', 'ui', 'qa'}
|
||||||
|
words = []
|
||||||
|
for word in cleaned.split()[:6]:
|
||||||
|
lowered = word.lower()
|
||||||
|
words.append(lowered.upper() if lowered in special_upper else lowered.capitalize())
|
||||||
return ' '.join(words) or 'Generated Project'
|
return ' '.join(words) or 'Generated Project'
|
||||||
|
|
||||||
|
def _derive_repo_name(self, project_name: str) -> str:
|
||||||
|
"""Derive a repository slug from a human-readable project name."""
|
||||||
|
preferred = (project_name or 'project').strip().lower().replace(' ', '-')
|
||||||
|
sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
|
||||||
|
while '--' in sanitized:
|
||||||
|
sanitized = sanitized.replace('--', '-')
|
||||||
|
return sanitized.strip('-') or 'project'
|
||||||
|
|
||||||
|
def _ensure_unique_repo_name(self, repo_name: str, reserved_names: set[str]) -> str:
|
||||||
|
"""Choose a repository slug that does not collide with tracked or remote repositories."""
|
||||||
|
base_name = self._derive_repo_name(repo_name)
|
||||||
|
if base_name not in reserved_names:
|
||||||
|
return base_name
|
||||||
|
suffix = 2
|
||||||
|
while f'{base_name}-{suffix}' in reserved_names:
|
||||||
|
suffix += 1
|
||||||
|
return f'{base_name}-{suffix}'
|
||||||
|
|
||||||
|
def _normalize_project_identity(self, payload: dict, fallback_name: str) -> tuple[str, str]:
|
||||||
|
"""Normalize model-proposed project and repository naming."""
|
||||||
|
project_name = self._humanize_name(str(payload.get('project_name') or payload.get('name') or fallback_name))
|
||||||
|
repo_name = self._derive_repo_name(str(payload.get('repo_name') or project_name))
|
||||||
|
return project_name, repo_name
|
||||||
|
|
||||||
def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:
|
def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:
|
||||||
"""Fallback request extraction when Ollama is unavailable."""
|
"""Fallback request extraction when Ollama is unavailable."""
|
||||||
lowered = prompt_text.lower()
|
lowered = prompt_text.lower()
|
||||||
@@ -239,6 +384,7 @@ class RequestInterpreter:
|
|||||||
'intent': intent,
|
'intent': intent,
|
||||||
'project_id': matched_project.get('project_id') if matched_project else None,
|
'project_id': matched_project.get('project_id') if matched_project else None,
|
||||||
'project_name': matched_project.get('name') if matched_project else self._derive_name(prompt_text),
|
'project_name': matched_project.get('name') if matched_project else self._derive_name(prompt_text),
|
||||||
|
'repo_name': None if matched_project else self._derive_repo_name(self._derive_name(prompt_text)),
|
||||||
'issue_number': issue_number,
|
'issue_number': issue_number,
|
||||||
'confidence': 'medium' if matched_project or explicit_new else 'low',
|
'confidence': 'medium' if matched_project or explicit_new else 'low',
|
||||||
'reasoning_summary': 'Heuristic routing from chat history and project names.',
|
'reasoning_summary': 'Heuristic routing from chat history and project names.',
|
||||||
|
|||||||
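The `_derive_repo_name` and `_ensure_unique_repo_name` helpers added above can be exercised outside the class. This is a minimal standalone sketch of the same slug and collision-suffix logic (the top-level function names and sample inputs are illustrative, not from the repository):

```python
def derive_repo_name(project_name: str) -> str:
    # Lowercase, hyphenate spaces, and replace anything that is not
    # alphanumeric, '-' or '_' with a hyphen, then collapse runs of hyphens.
    preferred = (project_name or "project").strip().lower().replace(" ", "-")
    sanitized = "".join(ch if ch.isalnum() or ch in {"-", "_"} else "-" for ch in preferred)
    while "--" in sanitized:
        sanitized = sanitized.replace("--", "-")
    return sanitized.strip("-") or "project"


def ensure_unique_repo_name(repo_name: str, reserved: set[str]) -> str:
    # Append -2, -3, ... until the slug no longer collides with a reserved name.
    base = derive_repo_name(repo_name)
    if base not in reserved:
        return base
    suffix = 2
    while f"{base}-{suffix}" in reserved:
        suffix += 1
    return f"{base}-{suffix}"


print(derive_repo_name("My CRM Portal!"))  # my-crm-portal
print(ensure_unique_repo_name("My CRM Portal!", {"my-crm-portal", "my-crm-portal-2"}))  # my-crm-portal-3
```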
@@ -1,12 +1,97 @@
 """Configuration settings for AI Software Factory."""

+import json
 import os
 from typing import Optional
 from pathlib import Path
+from urllib.parse import urlparse
 from pydantic import Field
 from pydantic_settings import BaseSettings, SettingsConfigDict
+
+
+def _normalize_service_url(value: str, default_scheme: str = "https") -> str:
+    """Normalize service URLs so host-only values still become valid absolute URLs."""
+    normalized = (value or "").strip().rstrip("/")
+    if not normalized:
+        return ""
+    if "://" not in normalized:
+        normalized = f"{default_scheme}://{normalized}"
+    parsed = urlparse(normalized)
+    if not parsed.scheme or not parsed.netloc:
+        return ""
+    return normalized
+
+
+EDITABLE_LLM_PROMPTS: dict[str, dict[str, str]] = {
+    'LLM_GUARDRAIL_PROMPT': {
+        'label': 'Global Guardrails',
+        'category': 'guardrail',
+        'description': 'Applied to every outbound external LLM call.',
+    },
+    'LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT': {
+        'label': 'Request Interpretation Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains project routing and continuation selection.',
+    },
+    'LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT': {
+        'label': 'Change Summary Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains factual delivery summaries.',
+    },
+    'LLM_PROJECT_NAMING_GUARDRAIL_PROMPT': {
+        'label': 'Project Naming Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains project display names and repo slugs.',
+    },
+    'LLM_PROJECT_NAMING_SYSTEM_PROMPT': {
+        'label': 'Project Naming System Prompt',
+        'category': 'system_prompt',
+        'description': 'Guides the dedicated new-project naming stage.',
+    },
+    'LLM_PROJECT_ID_GUARDRAIL_PROMPT': {
+        'label': 'Project ID Guardrails',
+        'category': 'guardrail',
+        'description': 'Constrains stable project id generation.',
+    },
+    'LLM_PROJECT_ID_SYSTEM_PROMPT': {
+        'label': 'Project ID System Prompt',
+        'category': 'system_prompt',
+        'description': 'Guides the dedicated project id naming stage.',
+    },
+}
+
+
+def _get_persisted_llm_prompt_override(env_key: str) -> str | None:
+    """Load one persisted LLM prompt override from the database when available."""
+    if env_key not in EDITABLE_LLM_PROMPTS:
+        return None
+    try:
+        try:
+            from .database import get_db_sync
+            from .agents.database_manager import DatabaseManager
+        except ImportError:
+            from database import get_db_sync
+            from agents.database_manager import DatabaseManager
+
+        db = get_db_sync()
+        if db is None:
+            return None
+        try:
+            return DatabaseManager(db).get_llm_prompt_override(env_key)
+        finally:
+            db.close()
+    except Exception:
+        return None
+
+
+def _resolve_llm_prompt_value(env_key: str, fallback: str) -> str:
+    """Resolve one editable prompt from DB override first, then environment/defaults."""
+    override = _get_persisted_llm_prompt_override(env_key)
+    if override is not None:
+        return override.strip()
+    return (fallback or '').strip()
+
+
 class Settings(BaseSettings):
     """Application settings loaded from environment variables."""

@@ -24,6 +109,34 @@ class Settings(BaseSettings):
     # Ollama settings computed from environment
     OLLAMA_URL: str = "http://ollama:11434"
     OLLAMA_MODEL: str = "llama3"
+    LLM_GUARDRAIL_PROMPT: str = (
+        "You are operating inside AI Software Factory. Follow the requested schema exactly, "
+        "treat provided tool outputs as authoritative, and do not invent repositories, issues, pull requests, or delivery facts."
+    )
+    LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT: str = (
+        "For routing and request interpretation: never select archived projects, prefer tracked project IDs from tool outputs, and only reference issues that are explicit in the prompt or available tool data."
+    )
+    LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT: str = (
+        "For summaries: only describe facts present in the provided context and tool outputs. Never claim a repository, commit, or pull request exists unless it is present in the supplied data."
+    )
+    LLM_PROJECT_NAMING_GUARDRAIL_PROMPT: str = (
+        "For project naming: prefer clear, product-like names and repository slugs that match the user's intent. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
+    )
+    LLM_PROJECT_NAMING_SYSTEM_PROMPT: str = (
+        "You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles. Repo names should be lowercase kebab-case slugs suitable for a Gitea repository name."
+    )
+    LLM_PROJECT_ID_GUARDRAIL_PROMPT: str = (
+        "For project ids: produce short stable slugs for newly created projects. Avoid collisions with known project ids and keep ids lowercase with hyphens."
+    )
+    LLM_PROJECT_ID_SYSTEM_PROMPT: str = (
+        "You derive stable project ids for new projects. Return only JSON with keys project_id and rationale. project_id must be a short lowercase kebab-case slug without spaces."
+    )
+    LLM_TOOL_ALLOWLIST: str = "gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests"
+    LLM_TOOL_CONTEXT_LIMIT: int = 5
+    LLM_LIVE_TOOL_ALLOWLIST: str = "gitea_lookup_issue,gitea_lookup_pull_request"
+    LLM_LIVE_TOOL_STAGE_ALLOWLIST: str = "request_interpretation,change_summary"
+    LLM_LIVE_TOOL_STAGE_TOOL_MAP: str = ""
+    LLM_MAX_TOOL_CALL_ROUNDS: int = 1
+
     # Gitea settings
     GITEA_URL: str = "https://gitea.yourserver.com"

@@ -47,6 +160,19 @@ class Settings(BaseSettings):
     TELEGRAM_BOT_TOKEN: str = ""
     TELEGRAM_CHAT_ID: str = ""
+
+    # Home Assistant and prompt queue settings
+    HOME_ASSISTANT_URL: str = ""
+    HOME_ASSISTANT_TOKEN: str = ""
+    HOME_ASSISTANT_BATTERY_ENTITY_ID: str = ""
+    HOME_ASSISTANT_SURPLUS_ENTITY_ID: str = ""
+    HOME_ASSISTANT_BATTERY_FULL_THRESHOLD: float = 95.0
+    HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS: float = 100.0
+    PROMPT_QUEUE_ENABLED: bool = False
+    PROMPT_QUEUE_AUTO_PROCESS: bool = True
+    PROMPT_QUEUE_FORCE_PROCESS: bool = False
+    PROMPT_QUEUE_POLL_INTERVAL_SECONDS: int = 60
+    PROMPT_QUEUE_MAX_BATCH_SIZE: int = 1
+
     # PostgreSQL settings
     POSTGRES_HOST: str = "localhost"
     POSTGRES_PORT: int = 5432

@@ -131,10 +257,118 @@ class Settings(BaseSettings):
         """Get Ollama URL with trimmed whitespace."""
         return self.OLLAMA_URL.strip()
+
+    @property
+    def llm_guardrail_prompt(self) -> str:
+        """Get the global guardrail prompt used for all external LLM calls."""
+        return _resolve_llm_prompt_value('LLM_GUARDRAIL_PROMPT', self.LLM_GUARDRAIL_PROMPT)
+
+    @property
+    def llm_request_interpreter_guardrail_prompt(self) -> str:
+        """Get the request-interpretation specific guardrail prompt."""
+        return _resolve_llm_prompt_value('LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT', self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT)
+
+    @property
+    def llm_change_summary_guardrail_prompt(self) -> str:
+        """Get the change-summary specific guardrail prompt."""
+        return _resolve_llm_prompt_value('LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT', self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT)
+
+    @property
+    def llm_project_naming_guardrail_prompt(self) -> str:
+        """Get the project-naming specific guardrail prompt."""
+        return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_GUARDRAIL_PROMPT', self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT)
+
+    @property
+    def llm_project_naming_system_prompt(self) -> str:
+        """Get the project-naming system prompt."""
+        return _resolve_llm_prompt_value('LLM_PROJECT_NAMING_SYSTEM_PROMPT', self.LLM_PROJECT_NAMING_SYSTEM_PROMPT)
+
+    @property
+    def llm_project_id_guardrail_prompt(self) -> str:
+        """Get the project-id naming specific guardrail prompt."""
+        return _resolve_llm_prompt_value('LLM_PROJECT_ID_GUARDRAIL_PROMPT', self.LLM_PROJECT_ID_GUARDRAIL_PROMPT)
+
+    @property
+    def llm_project_id_system_prompt(self) -> str:
+        """Get the project-id naming system prompt."""
+        return _resolve_llm_prompt_value('LLM_PROJECT_ID_SYSTEM_PROMPT', self.LLM_PROJECT_ID_SYSTEM_PROMPT)
+
+    @property
+    def editable_llm_prompts(self) -> list[dict[str, str]]:
+        """Return metadata for all LLM prompts that may be persisted and edited from the UI."""
+        prompts = []
+        for env_key, metadata in EDITABLE_LLM_PROMPTS.items():
+            prompts.append(
+                {
+                    'key': env_key,
+                    'label': metadata['label'],
+                    'category': metadata['category'],
+                    'description': metadata['description'],
+                    'default_value': (getattr(self, env_key, '') or '').strip(),
+                    'value': _resolve_llm_prompt_value(env_key, getattr(self, env_key, '')),
+                }
+            )
+        return prompts
+
+    @property
+    def llm_tool_allowlist(self) -> list[str]:
+        """Get the allowed LLM tool names as a normalized list."""
+        return [item.strip() for item in self.LLM_TOOL_ALLOWLIST.split(',') if item.strip()]
+
+    @property
+    def llm_tool_context_limit(self) -> int:
+        """Get the number of items to expose per mediated tool payload."""
+        return max(int(self.LLM_TOOL_CONTEXT_LIMIT), 1)
+
+    @property
+    def llm_live_tool_allowlist(self) -> list[str]:
+        """Get the allowed live tool-call names for model-driven lookup requests."""
+        return [item.strip() for item in self.LLM_LIVE_TOOL_ALLOWLIST.split(',') if item.strip()]
+
+    @property
+    def llm_live_tool_stage_allowlist(self) -> list[str]:
+        """Get the LLM stages where live tool requests are enabled."""
+        return [item.strip() for item in self.LLM_LIVE_TOOL_STAGE_ALLOWLIST.split(',') if item.strip()]
+
+    @property
+    def llm_live_tool_stage_tool_map(self) -> dict[str, list[str]]:
+        """Get an optional per-stage live tool map that overrides the simple stage allowlist."""
+        raw = (self.LLM_LIVE_TOOL_STAGE_TOOL_MAP or '').strip()
+        if not raw:
+            return {}
+        try:
+            parsed = json.loads(raw)
+        except Exception:
+            return {}
+        if not isinstance(parsed, dict):
+            return {}
+        allowed_tools = set(self.llm_live_tool_allowlist)
+        normalized: dict[str, list[str]] = {}
+        for stage, tools in parsed.items():
+            if not isinstance(stage, str):
+                continue
+            if not isinstance(tools, list):
+                continue
+            normalized[stage.strip()] = [str(tool).strip() for tool in tools if str(tool).strip() in allowed_tools]
+        return normalized
+
+    def llm_live_tools_for_stage(self, stage: str) -> list[str]:
+        """Return live tools enabled for a specific LLM stage."""
+        stage_map = self.llm_live_tool_stage_tool_map
+        if stage_map:
+            return stage_map.get(stage, [])
+        if stage not in set(self.llm_live_tool_stage_allowlist):
+            return []
+        return self.llm_live_tool_allowlist
+
+    @property
+    def llm_max_tool_call_rounds(self) -> int:
+        """Get the maximum number of model-driven live tool-call rounds per LLM request."""
+        return max(int(self.LLM_MAX_TOOL_CALL_ROUNDS), 0)
+
     @property
     def gitea_url(self) -> str:
         """Get Gitea URL with trimmed whitespace."""
-        return self.GITEA_URL.strip()
+        return _normalize_service_url(self.GITEA_URL)

     @property
     def gitea_token(self) -> str:

@@ -159,12 +393,12 @@ class Settings(BaseSettings):
     @property
     def n8n_webhook_url(self) -> str:
         """Get n8n webhook URL with trimmed whitespace."""
-        return self.N8N_WEBHOOK_URL.strip()
+        return _normalize_service_url(self.N8N_WEBHOOK_URL, default_scheme="http")

     @property
     def n8n_api_url(self) -> str:
         """Get n8n API URL with trimmed whitespace."""
-        return self.N8N_API_URL.strip()
+        return _normalize_service_url(self.N8N_API_URL, default_scheme="http")

     @property
     def n8n_api_key(self) -> str:

@@ -189,7 +423,62 @@ class Settings(BaseSettings):
     @property
     def backend_public_url(self) -> str:
         """Get backend public URL with trimmed whitespace."""
-        return self.BACKEND_PUBLIC_URL.strip().rstrip("/")
+        return _normalize_service_url(self.BACKEND_PUBLIC_URL, default_scheme="http")
+
+    @property
+    def home_assistant_url(self) -> str:
+        """Get Home Assistant URL with trimmed whitespace."""
+        return _normalize_service_url(self.HOME_ASSISTANT_URL, default_scheme="http")
+
+    @property
+    def home_assistant_token(self) -> str:
+        """Get Home Assistant token with trimmed whitespace."""
+        return self.HOME_ASSISTANT_TOKEN.strip()
+
+    @property
+    def home_assistant_battery_entity_id(self) -> str:
+        """Get the Home Assistant battery state entity id."""
+        return self.HOME_ASSISTANT_BATTERY_ENTITY_ID.strip()
+
+    @property
+    def home_assistant_surplus_entity_id(self) -> str:
+        """Get the Home Assistant surplus power entity id."""
+        return self.HOME_ASSISTANT_SURPLUS_ENTITY_ID.strip()
+
+    @property
+    def home_assistant_battery_full_threshold(self) -> float:
+        """Get the minimum battery SoC percentage for queue processing."""
+        return float(self.HOME_ASSISTANT_BATTERY_FULL_THRESHOLD)
+
+    @property
+    def home_assistant_surplus_threshold_watts(self) -> float:
+        """Get the minimum export/surplus power threshold for queue processing."""
+        return float(self.HOME_ASSISTANT_SURPLUS_THRESHOLD_WATTS)
+
+    @property
+    def prompt_queue_enabled(self) -> bool:
+        """Whether Telegram prompts should be queued instead of processed immediately."""
+        return bool(self.PROMPT_QUEUE_ENABLED)
+
+    @property
+    def prompt_queue_auto_process(self) -> bool:
+        """Whether the background worker should automatically process queued prompts."""
+        return bool(self.PROMPT_QUEUE_AUTO_PROCESS)
+
+    @property
+    def prompt_queue_force_process(self) -> bool:
+        """Whether queued prompts should bypass the Home Assistant energy gate."""
+        return bool(self.PROMPT_QUEUE_FORCE_PROCESS)
+
+    @property
+    def prompt_queue_poll_interval_seconds(self) -> int:
+        """Get the queue polling interval for background processing."""
+        return max(int(self.PROMPT_QUEUE_POLL_INTERVAL_SECONDS), 5)
+
+    @property
+    def prompt_queue_max_batch_size(self) -> int:
+        """Get the maximum number of queued prompts to process in one batch."""
+        return max(int(self.PROMPT_QUEUE_MAX_BATCH_SIZE), 1)
+
     @property
     def projects_root(self) -> Path:
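The `gitea_url`, `n8n_*`, and `backend_public_url` properties above all route through `_normalize_service_url`. This standalone copy of that helper (renamed to a public name, with illustrative inputs) shows the intended behavior for host-only values such as `git.disi.dev`:

```python
from urllib.parse import urlparse


def normalize_service_url(value: str, default_scheme: str = "https") -> str:
    # Trim whitespace and trailing slashes, then prepend a scheme
    # when the value is host-only (no "://" present).
    normalized = (value or "").strip().rstrip("/")
    if not normalized:
        return ""
    if "://" not in normalized:
        normalized = f"{default_scheme}://{normalized}"
    parsed = urlparse(normalized)
    # Reject values that still do not parse into a scheme plus host.
    if not parsed.scheme or not parsed.netloc:
        return ""
    return normalized


print(normalize_service_url("git.disi.dev"))               # https://git.disi.dev
print(normalize_service_url("http://n8n.local/", "http"))  # http://n8n.local
print(repr(normalize_service_url("   ")))                  # ''
```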
File diff suppressed because it is too large
@@ -13,6 +13,7 @@ The NiceGUI frontend provides:
|
|||||||
|
|
||||||
from __future__ import annotations
|
from __future__ import annotations
|
||||||
|
|
||||||
|
import asyncio
|
||||||
from contextlib import asynccontextmanager
|
from contextlib import asynccontextmanager
|
||||||
import json
|
import json
|
||||||
import re
|
import re
|
||||||
@@ -29,7 +30,9 @@ try:
|
|||||||
from . import database as database_module
|
from . import database as database_module
|
||||||
from .agents.change_summary import ChangeSummaryGenerator
|
from .agents.change_summary import ChangeSummaryGenerator
|
||||||
from .agents.database_manager import DatabaseManager
|
from .agents.database_manager import DatabaseManager
|
||||||
|
from .agents.home_assistant import HomeAssistantAgent
|
||||||
from .agents.request_interpreter import RequestInterpreter
|
from .agents.request_interpreter import RequestInterpreter
|
||||||
|
from .agents.llm_service import LLMServiceClient
|
||||||
from .agents.orchestrator import AgentOrchestrator
|
from .agents.orchestrator import AgentOrchestrator
|
||||||
from .agents.n8n_setup import N8NSetupAgent
|
from .agents.n8n_setup import N8NSetupAgent
|
||||||
from .agents.prompt_workflow import PromptWorkflowManager
|
from .agents.prompt_workflow import PromptWorkflowManager
|
||||||
@@ -40,7 +43,9 @@ except ImportError:
|
|||||||
import database as database_module
|
import database as database_module
|
||||||
from agents.change_summary import ChangeSummaryGenerator
|
from agents.change_summary import ChangeSummaryGenerator
|
||||||
from agents.database_manager import DatabaseManager
|
from agents.database_manager import DatabaseManager
|
||||||
|
from agents.home_assistant import HomeAssistantAgent
|
||||||
from agents.request_interpreter import RequestInterpreter
|
from agents.request_interpreter import RequestInterpreter
|
||||||
|
from agents.llm_service import LLMServiceClient
|
||||||
from agents.orchestrator import AgentOrchestrator
|
from agents.orchestrator import AgentOrchestrator
|
||||||
from agents.n8n_setup import N8NSetupAgent
|
from agents.n8n_setup import N8NSetupAgent
|
||||||
from agents.prompt_workflow import PromptWorkflowManager
|
from agents.prompt_workflow import PromptWorkflowManager
|
||||||
@@ -57,7 +62,18 @@ async def lifespan(_app: FastAPI):
|
|||||||
print(
|
print(
|
||||||
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
|
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
|
||||||
)
|
)
|
||||||
|
queue_worker = None
|
||||||
|
if database_module.settings.prompt_queue_enabled and database_module.settings.prompt_queue_auto_process:
|
||||||
|
queue_worker = asyncio.create_task(_prompt_queue_worker())
|
||||||
|
try:
|
||||||
yield
|
yield
|
||||||
|
finally:
|
||||||
|
if queue_worker is not None:
|
||||||
|
queue_worker.cancel()
|
||||||
|
try:
|
||||||
|
await queue_worker
|
||||||
|
except asyncio.CancelledError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
app = FastAPI(lifespan=lifespan)
|
app = FastAPI(lifespan=lifespan)
|
||||||
@@ -92,6 +108,20 @@ class FreeformSoftwareRequest(BaseModel):
|
|||||||
source: str = 'telegram'
|
source: str = 'telegram'
|
||||||
chat_id: str | None = None
|
chat_id: str | None = None
|
||||||
chat_type: str | None = None
|
chat_type: str | None = None
|
||||||
|
process_now: bool = False
|
||||||
|
|
||||||
|
|
||||||
|
class PromptQueueProcessRequest(BaseModel):
|
||||||
|
"""Request body for manual queue processing."""
|
||||||
|
|
||||||
|
force: bool = False
|
||||||
|
limit: int = Field(default=1, ge=1, le=25)
|
||||||
|
|
||||||
|
|
||||||
|
class LLMPromptSettingUpdateRequest(BaseModel):
|
||||||
|
"""Request body for persisting one editable LLM prompt override."""
|
||||||
|
|
||||||
|
value: str = Field(default='')
|
||||||
|
|
||||||
|
|
||||||
class GiteaRepositoryOnboardRequest(BaseModel):
|
class GiteaRepositoryOnboardRequest(BaseModel):
|
||||||
@@ -109,6 +139,75 @@ def _build_project_id(name: str) -> str:
     return f"{slug}-{uuid4().hex[:8]}"
+
+
+def _build_project_slug(name: str) -> str:
+    """Normalize a project name into a kebab-case identifier slug."""
+    return PROJECT_ID_PATTERN.sub("-", name.strip().lower()).strip("-") or "project"
+
+
+def _ensure_unique_identifier(base_slug: str, reserved_ids: set[str]) -> str:
+    """Return a unique identifier using deterministic numeric suffixes when needed."""
+    normalized = _build_project_slug(base_slug)
+    if normalized not in reserved_ids:
+        return normalized
+    suffix = 2
+    while f"{normalized}-{suffix}" in reserved_ids:
+        suffix += 1
+    return f"{normalized}-{suffix}"
+
+
+def _build_project_identity_context(manager: DatabaseManager) -> list[dict]:
+    """Build a compact project catalog for naming stages."""
+    projects = []
+    for history in manager.get_all_projects(include_archived=True):
+        repository = manager._get_project_repository(history) or {}
+        projects.append(
+            {
+                'project_id': history.project_id,
+                'name': history.project_name,
+                'description': history.description,
+                'repository': {
+                    'owner': repository.get('owner'),
+                    'name': repository.get('name'),
+                },
+            }
+        )
+    return projects
+
+
+async def _derive_project_id_for_request(
+    request: SoftwareRequest,
+    *,
+    prompt_text: str,
+    prompt_routing: dict | None,
+    existing_projects: list[dict],
+) -> tuple[str, dict | None]:
+    """Derive a stable project id for a newly created project."""
+    reserved_ids = {str(project.get('project_id')).strip() for project in existing_projects if project.get('project_id')}
+    fallback_id = _ensure_unique_identifier((prompt_routing or {}).get('project_name') or request.name, reserved_ids)
+    user_prompt = (
+        f"Original user prompt:\n{prompt_text}\n\n"
+        f"Structured request:\n{json.dumps({'name': request.name, 'description': request.description, 'features': request.features, 'tech_stack': request.tech_stack}, indent=2)}\n\n"
+        f"Naming context:\n{json.dumps(prompt_routing or {}, indent=2)}\n\n"
+        f"Reserved project ids:\n{json.dumps(sorted(reserved_ids))}\n\n"
+        "Suggest the best stable project id for this new project."
+    )
+    content, trace = await LLMServiceClient().chat_with_trace(
+        stage='project_id_naming',
+        system_prompt=database_module.settings.llm_project_id_system_prompt,
+        user_prompt=user_prompt,
+        tool_context_input={'projects': existing_projects},
+        expect_json=True,
+    )
+    if content:
+        try:
+            parsed = json.loads(content)
+            candidate = parsed.get('project_id') or parsed.get('slug') or request.name
+            return _ensure_unique_identifier(str(candidate), reserved_ids), trace
+        except Exception:
+            pass
+    return fallback_id, trace
 
 
 def _serialize_project(history: ProjectHistory) -> dict:
     """Serialize a project history row for API responses."""
     return {
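The slug-plus-deterministic-suffix scheme introduced above can be exercised standalone. This is a minimal sketch: the regex here is an assumption standing in for the module's real `PROJECT_ID_PATTERN`, and the function names are local copies rather than the actual helpers:

```python
import re

# Assumed pattern: collapse every run of non-lowercase-alphanumeric characters.
PROJECT_ID_PATTERN = re.compile(r"[^a-z0-9]+")

def build_project_slug(name: str) -> str:
    # Kebab-case the name; empty results fall back to "project".
    return PROJECT_ID_PATTERN.sub("-", name.strip().lower()).strip("-") or "project"

def ensure_unique_identifier(base_slug: str, reserved_ids: set) -> str:
    # Deterministic numeric suffixes: my-app, my-app-2, my-app-3, ...
    normalized = build_project_slug(base_slug)
    if normalized not in reserved_ids:
        return normalized
    suffix = 2
    while f"{normalized}-{suffix}" in reserved_ids:
        suffix += 1
    return f"{normalized}-{suffix}"
```

Starting the suffix at 2 keeps the first collision readable ("my-app-2" rather than "my-app-1" next to a bare "my-app"), and scanning upward makes the result reproducible for a given set of reserved ids.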
@@ -176,13 +275,15 @@ async def _run_generation(
     prompt_source_context: dict | None = None,
     prompt_routing: dict | None = None,
     preferred_project_id: str | None = None,
+    repo_name_override: str | None = None,
     related_issue: dict | None = None,
 ) -> dict:
     """Run the shared generation pipeline for a structured request."""
     database_module.init_db()
 
     manager = DatabaseManager(db)
-    reusable_history = manager.get_project_by_id(preferred_project_id) if preferred_project_id else manager.get_latest_project_by_name(request.name)
+    is_explicit_new_project = (prompt_routing or {}).get('intent') == 'new_project'
+    reusable_history = manager.get_project_by_id(preferred_project_id, include_archived=False) if preferred_project_id else (None if is_explicit_new_project else manager.get_latest_project_by_name(request.name))
     if reusable_history and database_module.settings.gitea_url and database_module.settings.gitea_token:
         try:
             from .agents.gitea import GiteaAPI
@@ -197,14 +298,23 @@ async def _run_generation(
             ),
             project_id=reusable_history.project_id,
         )
+    project_id_trace = None
+    resolved_prompt_text = prompt_text or _compose_prompt_text(request)
     if preferred_project_id and reusable_history is not None:
         project_id = reusable_history.project_id
-    elif reusable_history and manager.get_open_pull_request(project_id=reusable_history.project_id):
+    elif reusable_history and not is_explicit_new_project and manager.get_open_pull_request(project_id=reusable_history.project_id):
         project_id = reusable_history.project_id
+    else:
+        if is_explicit_new_project or prompt_text:
+            project_id, project_id_trace = await _derive_project_id_for_request(
+                request,
+                prompt_text=resolved_prompt_text,
+                prompt_routing=prompt_routing,
+                existing_projects=_build_project_identity_context(manager),
+            )
         else:
             project_id = _build_project_id(request.name)
         reusable_history = None
-    resolved_prompt_text = prompt_text or _compose_prompt_text(request)
 
     orchestrator = AgentOrchestrator(
         project_id=project_id,
         project_name=request.name,
@@ -217,6 +327,7 @@ async def _run_generation(
         existing_history=reusable_history,
         prompt_source_context=prompt_source_context,
         prompt_routing=prompt_routing,
+        repo_name_override=repo_name_override,
         related_issue_hint=related_issue,
     )
     result = await orchestrator.run()
@@ -240,6 +351,20 @@ async def _run_generation(
     response_data['repository'] = result.get('repository')
     response_data['related_issue'] = result.get('related_issue') or (result.get('ui_data') or {}).get('related_issue')
     response_data['pull_request'] = result.get('pull_request') or manager.get_open_pull_request(project_id=project_id)
+    if project_id_trace:
+        manager.log_llm_trace(
+            project_id=project_id,
+            history_id=history.id if history else None,
+            prompt_id=orchestrator.prompt_audit.id if orchestrator.prompt_audit else None,
+            stage=project_id_trace['stage'],
+            provider=project_id_trace['provider'],
+            model=project_id_trace['model'],
+            system_prompt=project_id_trace['system_prompt'],
+            user_prompt=project_id_trace['user_prompt'],
+            assistant_response=project_id_trace['assistant_response'],
+            raw_response=project_id_trace.get('raw_response'),
+            fallback_used=project_id_trace.get('fallback_used', False),
+        )
     summary_context = {
         'name': response_data['name'],
         'description': response_data['description'],
@@ -300,6 +425,275 @@ def _create_gitea_api():
     )
+
+
+def _create_home_assistant_agent() -> HomeAssistantAgent:
+    """Create a configured Home Assistant client."""
+    return HomeAssistantAgent(
+        base_url=database_module.settings.home_assistant_url,
+        token=database_module.settings.home_assistant_token,
+    )
+
+
+def _get_gitea_health() -> dict:
+    """Return current Gitea connectivity diagnostics."""
+    if not database_module.settings.gitea_url:
+        return {
+            'status': 'error',
+            'message': 'Gitea URL is not configured.',
+            'base_url': '',
+            'configured': False,
+            'checks': [],
+        }
+    if not database_module.settings.gitea_token:
+        return {
+            'status': 'error',
+            'message': 'Gitea token is not configured.',
+            'base_url': database_module.settings.gitea_url,
+            'configured': False,
+            'checks': [],
+        }
+    response = _create_gitea_api().get_current_user_sync()
+    if response.get('error'):
+        return {
+            'status': 'error',
+            'message': response.get('error'),
+            'base_url': database_module.settings.gitea_url,
+            'configured': True,
+            'checks': [
+                {
+                    'name': 'token_auth',
+                    'ok': False,
+                    'message': response.get('error'),
+                    'url': f"{database_module.settings.gitea_url}/api/v1/user",
+                    'status_code': response.get('status_code'),
+                }
+            ],
+        }
+    username = response.get('login') or response.get('username') or response.get('full_name') or 'unknown'
+    return {
+        'status': 'success',
+        'message': f'Authenticated as {username}.',
+        'base_url': database_module.settings.gitea_url,
+        'configured': True,
+        'checks': [
+            {
+                'name': 'token_auth',
+                'ok': True,
+                'message': f'Authenticated as {username}',
+                'url': f"{database_module.settings.gitea_url}/api/v1/user",
+            }
+        ],
+        'user': username,
+    }
+
+
+def _get_home_assistant_health() -> dict:
+    """Return current Home Assistant connectivity diagnostics."""
+    return _create_home_assistant_agent().health_check_sync()
+
+
+async def _get_queue_gate_status(force: bool = False) -> dict:
+    """Return whether queued prompts may be processed now."""
+    if not database_module.settings.prompt_queue_enabled:
+        return {
+            'status': 'disabled',
+            'allowed': True,
+            'forced': False,
+            'reason': 'Prompt queue is disabled',
+        }
+    if not database_module.settings.home_assistant_url:
+        if force or database_module.settings.prompt_queue_force_process:
+            return {
+                'status': 'success',
+                'allowed': True,
+                'forced': True,
+                'reason': 'Queue override is enabled',
+            }
+        return {
+            'status': 'blocked',
+            'allowed': False,
+            'forced': False,
+            'reason': 'Home Assistant URL is not configured',
+        }
+    return await _create_home_assistant_agent().queue_gate_status(force=force)
+
+
+async def _interpret_freeform_request(request: FreeformSoftwareRequest, manager: DatabaseManager) -> tuple[SoftwareRequest, dict, dict]:
+    """Interpret a free-form request and return the structured request plus routing trace."""
+    interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
+    interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
+        request.prompt_text,
+        context=interpreter_context,
+    )
+    routing = interpretation_trace.get('routing') or {}
+    selected_history = manager.get_project_by_id(routing.get('project_id'), include_archived=False) if routing.get('project_id') else None
+    if selected_history is not None and routing.get('intent') != 'new_project':
+        interpreted['name'] = selected_history.project_name
+        interpreted['description'] = selected_history.description or interpreted['description']
+    return SoftwareRequest(**interpreted), routing, interpretation_trace
+
+
+async def _run_freeform_generation(
+    request: FreeformSoftwareRequest,
+    db: Session,
+    *,
+    queue_item_id: int | None = None,
+) -> dict:
+    """Shared free-form request flow used by direct calls and queued processing."""
+    manager = DatabaseManager(db)
+    try:
+        structured_request, routing, interpretation_trace = await _interpret_freeform_request(request, manager)
+        response = await _run_generation(
+            structured_request,
+            db,
+            prompt_text=request.prompt_text,
+            prompt_actor=request.source,
+            prompt_source_context={
+                'chat_id': request.chat_id,
+                'chat_type': request.chat_type,
+                'queue_item_id': queue_item_id,
+            },
+            prompt_routing=routing,
+            preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
+            repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
+            related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
+        )
+        project_data = response.get('data', {})
+        if project_data.get('history_id') is not None:
+            manager = DatabaseManager(db)
+            prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
+            prompt_id = prompts[0]['id'] if prompts else None
+            manager.log_llm_trace(
+                project_id=project_data.get('project_id'),
+                history_id=project_data.get('history_id'),
+                prompt_id=prompt_id,
+                stage=interpretation_trace['stage'],
+                provider=interpretation_trace['provider'],
+                model=interpretation_trace['model'],
+                system_prompt=interpretation_trace['system_prompt'],
+                user_prompt=interpretation_trace['user_prompt'],
+                assistant_response=interpretation_trace['assistant_response'],
+                raw_response=interpretation_trace.get('raw_response'),
+                fallback_used=interpretation_trace.get('fallback_used', False),
+            )
+            naming_trace = interpretation_trace.get('project_naming')
+            if naming_trace:
+                manager.log_llm_trace(
+                    project_id=project_data.get('project_id'),
+                    history_id=project_data.get('history_id'),
+                    prompt_id=prompt_id,
+                    stage=naming_trace['stage'],
+                    provider=naming_trace['provider'],
+                    model=naming_trace['model'],
+                    system_prompt=naming_trace['system_prompt'],
+                    user_prompt=naming_trace['user_prompt'],
+                    assistant_response=naming_trace['assistant_response'],
+                    raw_response=naming_trace.get('raw_response'),
+                    fallback_used=naming_trace.get('fallback_used', False),
+                )
+        response['interpreted_request'] = structured_request.model_dump()
+        response['routing'] = routing
+        response['llm_trace'] = interpretation_trace
+        response['source'] = {
+            'type': request.source,
+            'chat_id': request.chat_id,
+            'chat_type': request.chat_type,
+        }
+        if queue_item_id is not None:
+            DatabaseManager(db).complete_queued_prompt(
+                queue_item_id,
+                {
+                    'project_id': project_data.get('project_id'),
+                    'history_id': project_data.get('history_id'),
+                    'status': response.get('status'),
+                },
+            )
+        return response
+    except Exception as exc:
+        if queue_item_id is not None:
+            DatabaseManager(db).fail_queued_prompt(queue_item_id, str(exc))
+        raise
+
+
+async def _process_prompt_queue_batch(limit: int = 1, force: bool = False) -> dict:
+    """Process up to `limit` queued prompts if the energy gate allows it."""
+    queue_gate = await _get_queue_gate_status(force=force)
+    if not queue_gate.get('allowed'):
+        db = database_module.get_db_sync()
+        try:
+            summary = DatabaseManager(db).get_prompt_queue_summary()
+        finally:
+            db.close()
+        return {
+            'status': queue_gate.get('status', 'blocked'),
+            'processed_count': 0,
+            'queue_gate': queue_gate,
+            'queue_summary': summary,
+            'processed': [],
+        }
+
+    processed = []
+    for _ in range(max(limit, 1)):
+        claim_db = database_module.get_db_sync()
+        try:
+            claimed = DatabaseManager(claim_db).claim_next_queued_prompt()
+        finally:
+            claim_db.close()
+        if claimed is None:
+            break
+        work_db = database_module.get_db_sync()
+        try:
+            request = FreeformSoftwareRequest(
+                prompt_text=claimed['prompt_text'],
+                source=claimed['source'] or 'telegram',
+                chat_id=claimed.get('chat_id'),
+                chat_type=claimed.get('chat_type'),
+                process_now=True,
+            )
+            response = await _run_freeform_generation(request, work_db, queue_item_id=claimed['id'])
+            processed.append(
+                {
+                    'queue_item_id': claimed['id'],
+                    'project_id': (response.get('data') or {}).get('project_id'),
+                    'status': response.get('status'),
+                }
+            )
+        except Exception as exc:
+            DatabaseManager(work_db).fail_queued_prompt(claimed['id'], str(exc))
+            processed.append({'queue_item_id': claimed['id'], 'status': 'failed', 'error': str(exc)})
+        finally:
+            work_db.close()
+
+    summary_db = database_module.get_db_sync()
+    try:
+        summary = DatabaseManager(summary_db).get_prompt_queue_summary()
+    finally:
+        summary_db.close()
+    return {
+        'status': 'success',
+        'processed_count': len(processed),
+        'processed': processed,
+        'queue_gate': queue_gate,
+        'queue_summary': summary,
+    }
+
+
+async def _prompt_queue_worker() -> None:
+    """Background worker that drains the prompt queue when the energy gate opens."""
+    while True:
+        try:
+            await _process_prompt_queue_batch(
+                limit=database_module.settings.prompt_queue_max_batch_size,
+                force=database_module.settings.prompt_queue_force_process,
+            )
+        except Exception as exc:
+            db = database_module.get_db_sync()
+            try:
+                DatabaseManager(db).log_system_event('prompt-queue', 'ERROR', f'Queue worker error: {exc}')
+            finally:
+                db.close()
+        await asyncio.sleep(database_module.settings.prompt_queue_poll_interval_seconds)
+
+
 def _resolve_n8n_api_url(explicit_url: str | None = None) -> str:
     """Resolve the effective n8n API URL from explicit input or settings."""
     if explicit_url and explicit_url.strip():
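The claim-then-process loop in `_process_prompt_queue_batch` above marks an item as claimed before doing any work, so a crashing prompt never blocks the rest of the batch. This is a minimal in-memory model of that cycle; the dict-based queue and function names are illustrative stand-ins, not the real `DatabaseManager` API:

```python
# Toy queue: two pending prompts, one of which will fail during processing.
queue = [
    {'id': 1, 'prompt_text': 'build a todo app', 'status': 'queued'},
    {'id': 2, 'prompt_text': 'bad prompt', 'status': 'queued'},
]

def claim_next():
    # Claim before working so two workers never pick up the same item.
    for item in queue:
        if item['status'] == 'queued':
            item['status'] = 'processing'
            return item
    return None

def process_batch(limit: int) -> list:
    processed = []
    for _ in range(max(limit, 1)):
        item = claim_next()
        if item is None:
            break                      # queue drained early, stop the batch
        try:
            if 'bad' in item['prompt_text']:
                raise ValueError('interpretation failed')
            item['status'] = 'completed'
            processed.append({'queue_item_id': item['id'], 'status': 'completed'})
        except ValueError as exc:
            # A failure is recorded on the item and the batch keeps going.
            item['status'] = 'failed'
            processed.append({'queue_item_id': item['id'], 'status': 'failed', 'error': str(exc)})
    return processed
```

Recording the failure on the claimed item (rather than re-queueing it) mirrors `fail_queued_prompt` in the diff: a poisoned prompt fails once instead of being retried forever.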
@@ -322,8 +716,13 @@ def read_api_info():
         '/',
         '/api',
         '/health',
+        '/llm/runtime',
+        '/llm/prompts',
+        '/llm/prompts/{prompt_key}',
         '/generate',
         '/generate/text',
+        '/queue',
+        '/queue/process',
         '/projects',
         '/status/{project_id}',
         '/audit/projects',
@@ -338,10 +737,15 @@ def read_api_info():
         '/audit/pull-requests',
         '/audit/lineage',
         '/audit/correlations',
+        '/projects/{project_id}/archive',
+        '/projects/{project_id}/unarchive',
+        '/projects/{project_id}',
         '/projects/{project_id}/prompts/{prompt_id}/undo',
         '/projects/{project_id}/sync-repository',
         '/gitea/repos',
+        '/gitea/health',
         '/gitea/repos/onboard',
+        '/home-assistant/health',
         '/n8n/health',
         '/n8n/setup',
     ],
@@ -352,14 +756,63 @@ def read_api_info():
 def health_check():
     """Health check endpoint."""
     runtime = database_module.get_database_runtime_summary()
+    queue_summary = {'queued': 0, 'processing': 0, 'completed': 0, 'failed': 0, 'total': 0, 'next_item': None}
+    db = database_module.get_db_sync()
+    try:
+        try:
+            queue_summary = DatabaseManager(db).get_prompt_queue_summary()
+        except Exception:
+            pass
+    finally:
+        db.close()
     return {
         'status': 'healthy',
         'database': runtime['backend'],
         'database_target': runtime['target'],
         'database_name': runtime['database'],
+        'integrations': {
+            'gitea': _get_gitea_health(),
+            'home_assistant': _get_home_assistant_health(),
+        },
+        'prompt_queue': {
+            'enabled': database_module.settings.prompt_queue_enabled,
+            'auto_process': database_module.settings.prompt_queue_auto_process,
+            'force_process': database_module.settings.prompt_queue_force_process,
+            'summary': queue_summary,
+        },
     }
+
+
+@app.get('/llm/runtime')
+def get_llm_runtime():
+    """Return the active external LLM runtime, guardrail, and tool configuration."""
+    return LLMServiceClient().get_runtime_configuration()
+
+
+@app.get('/llm/prompts')
+def get_llm_prompt_settings(db: DbSession):
+    """Return editable LLM prompt settings with DB overrides merged over environment defaults."""
+    return {'prompts': DatabaseManager(db).get_llm_prompt_settings()}
+
+
+@app.put('/llm/prompts/{prompt_key}')
+def update_llm_prompt_setting(prompt_key: str, request: LLMPromptSettingUpdateRequest, db: DbSession):
+    """Persist one editable LLM prompt override into the database."""
+    result = DatabaseManager(db).save_llm_prompt_setting(prompt_key, request.value, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Prompt save failed'))
+    return result
+
+
+@app.delete('/llm/prompts/{prompt_key}')
+def reset_llm_prompt_setting(prompt_key: str, db: DbSession):
+    """Reset one editable LLM prompt override back to the environment/default value."""
+    result = DatabaseManager(db).reset_llm_prompt_setting(prompt_key, actor='api')
+    if result.get('status') == 'error':
+        raise HTTPException(status_code=400, detail=result.get('message', 'Prompt reset failed'))
+    return result
 
 
 @app.post('/generate')
 async def generate_software(request: SoftwareRequest, db: DbSession):
     """Create and record a software-generation request."""
@@ -385,65 +838,75 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSession):
         },
     }
+    if request.source == 'telegram' and database_module.settings.prompt_queue_enabled and not request.process_now:
         manager = DatabaseManager(db)
-    interpreter_context = manager.get_interpreter_context(chat_id=request.chat_id, source=request.source)
-    interpreted, interpretation_trace = await RequestInterpreter().interpret_with_trace(
-        request.prompt_text,
-        context=interpreter_context,
-    )
-    routing = interpretation_trace.get('routing') or {}
-    selected_history = manager.get_project_by_id(routing.get('project_id')) if routing.get('project_id') else None
-    if selected_history is not None and routing.get('intent') != 'new_project':
-        interpreted['name'] = selected_history.project_name
-        interpreted['description'] = selected_history.description or interpreted['description']
-    structured_request = SoftwareRequest(**interpreted)
-    response = await _run_generation(
-        structured_request,
-        db,
-        prompt_text=request.prompt_text,
-        prompt_actor=request.source,
-        prompt_source_context={
-            'chat_id': request.chat_id,
-            'chat_type': request.chat_type,
-        },
-        prompt_routing=routing,
-        preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
-        related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
-    )
-    project_data = response.get('data', {})
-    if project_data.get('history_id') is not None:
-        manager = DatabaseManager(db)
-        prompts = manager.get_prompt_events(project_id=project_data.get('project_id'))
-        prompt_id = prompts[0]['id'] if prompts else None
-        manager.log_llm_trace(
-            project_id=project_data.get('project_id'),
-            history_id=project_data.get('history_id'),
-            prompt_id=prompt_id,
-            stage=interpretation_trace['stage'],
-            provider=interpretation_trace['provider'],
-            model=interpretation_trace['model'],
-            system_prompt=interpretation_trace['system_prompt'],
-            user_prompt=interpretation_trace['user_prompt'],
-            assistant_response=interpretation_trace['assistant_response'],
-            raw_response=interpretation_trace.get('raw_response'),
-            fallback_used=interpretation_trace.get('fallback_used', False),
-        )
-    response['interpreted_request'] = interpreted
-    response['routing'] = routing
-    response['llm_trace'] = interpretation_trace
-    response['source'] = {
-        'type': request.source,
-        'chat_id': request.chat_id,
-        'chat_type': request.chat_type,
-    }
-    return response
+        queue_item = manager.enqueue_prompt(
+            prompt_text=request.prompt_text,
+            source=request.source,
+            chat_id=request.chat_id,
+            chat_type=request.chat_type,
+            source_context={'chat_id': request.chat_id, 'chat_type': request.chat_type},
+        )
+        return {
+            'status': 'queued',
+            'message': 'Prompt queued for energy-aware processing.',
+            'queue_item': queue_item,
+            'queue_summary': manager.get_prompt_queue_summary(),
+            'queue_gate': await _get_queue_gate_status(force=False),
+            'source': {
+                'type': request.source,
+                'chat_id': request.chat_id,
+                'chat_type': request.chat_type,
+            },
+        }
+    return await _run_freeform_generation(request, db)
+
+
+@app.get('/queue')
+def get_prompt_queue(db: DbSession):
+    """Return queued prompt items and prompt queue configuration."""
+    manager = DatabaseManager(db)
+    return {
+        'queue': manager.get_prompt_queue(),
+        'summary': manager.get_prompt_queue_summary(),
+        'config': {
+            'enabled': database_module.settings.prompt_queue_enabled,
+            'auto_process': database_module.settings.prompt_queue_auto_process,
+            'force_process': database_module.settings.prompt_queue_force_process,
+            'poll_interval_seconds': database_module.settings.prompt_queue_poll_interval_seconds,
+            'max_batch_size': database_module.settings.prompt_queue_max_batch_size,
+        },
+    }
+
+
+@app.post('/queue/process')
+async def process_prompt_queue(request: PromptQueueProcessRequest):
+    """Manually process queued prompts, optionally bypassing the HA gate."""
+    return await _process_prompt_queue_batch(limit=request.limit, force=request.force)
+
+
+@app.get('/gitea/health')
+def get_gitea_health():
+    """Return Gitea integration connectivity diagnostics."""
+    return _get_gitea_health()
+
+
+@app.get('/home-assistant/health')
+def get_home_assistant_health():
+    """Return Home Assistant integration connectivity diagnostics."""
+    return _get_home_assistant_health()
+
+
 @app.get('/projects')
-def list_projects(db: DbSession):
+def list_projects(
+    db: DbSession,
+    include_archived: bool = Query(default=False),
+    archived_only: bool = Query(default=False),
+):
     """List recorded projects."""
     manager = DatabaseManager(db)
-    projects = manager.get_all_projects()
+    projects = manager.get_all_projects(include_archived=include_archived, archived_only=archived_only)
     return {'projects': [_serialize_project(project) for project in projects]}
 
 
@@ -572,16 +1035,75 @@ def get_pull_request_audit(db: DbSession, project_id: str | None = Query(default
|
|||||||
@app.post('/projects/{project_id}/prompts/{prompt_id}/undo')
async def undo_prompt_changes(project_id: str, prompt_id: int, db: DbSession):
    """Undo all changes associated with a specific prompt."""
    manager = DatabaseManager(db)
    history = manager.get_project_by_id(project_id)
    if history is None:
        raise HTTPException(status_code=404, detail='Project not found')
    if history.status == 'archived':
        raise HTTPException(status_code=400, detail='Archived projects cannot be modified')
    result = await PromptWorkflowManager(db).undo_prompt(project_id=project_id, prompt_id=prompt_id)
    if result.get('status') == 'error':
        raise HTTPException(status_code=400, detail=result.get('message', 'Undo failed'))
    return result

@app.post('/projects/{project_id}/archive')
def archive_project(project_id: str, db: DbSession):
    """Archive a project so it no longer participates in active automation."""
    manager = DatabaseManager(db)
    result = manager.archive_project(project_id)
    if result.get('status') == 'error':
        raise HTTPException(status_code=404, detail=result.get('message', 'Archive failed'))
    return result

@app.post('/projects/{project_id}/unarchive')
def unarchive_project(project_id: str, db: DbSession):
    """Restore an archived project back into the active automation set."""
    manager = DatabaseManager(db)
    result = manager.unarchive_project(project_id)
    if result.get('status') == 'error':
        raise HTTPException(status_code=404, detail=result.get('message', 'Restore failed'))
    return result

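Taken together, the archive, unarchive, undo, and sync endpoints imply a small state machine: archiving flips a project's status to `'archived'`, unarchiving restores `'active'`, and mutating endpoints reject archived projects with HTTP 400. The sketch below is a hypothetical standalone model of that behaviour, not the real `DatabaseManager` implementation:

```python
# Hypothetical model of the archive state machine the endpoints imply.
class ProjectRecord:
    def __init__(self, project_id, status='active'):
        self.project_id = project_id
        self.status = status


def archive(record):
    """Move a project out of active automation."""
    if record.status == 'archived':
        return {'status': 'error', 'message': 'Already archived'}
    record.status = 'archived'
    return {'status': 'ok'}


def unarchive(record):
    """Restore an archived project to the active set."""
    if record.status != 'archived':
        return {'status': 'error', 'message': 'Not archived'}
    record.status = 'active'
    return {'status': 'ok'}


def ensure_mutable(record):
    """Mirror the HTTP 400 guard used by undo and sync-repository."""
    if record.status == 'archived':
        raise ValueError('Archived projects cannot be modified')


record = ProjectRecord('p1')
print(archive(record), record.status)    # {'status': 'ok'} archived
print(unarchive(record), record.status)  # {'status': 'ok'} active
```

Centralizing the guard in one place (here `ensure_mutable`) keeps the per-endpoint checks consistent.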
@app.delete('/projects/{project_id}')
def delete_project(project_id: str, db: DbSession):
    """Delete a project, its local project directory, and project-scoped DB traces."""
    manager = DatabaseManager(db)
    audit_data = manager.get_project_audit_data(project_id)
    if audit_data.get('project') is None:
        raise HTTPException(status_code=404, detail='Project not found')

    repository = audit_data.get('repository') or audit_data['project'].get('repository') or {}
    remote_delete = None
    if repository and repository.get('mode') != 'shared' and repository.get('owner') and repository.get('name') and database_module.settings.gitea_url and database_module.settings.gitea_token:
        remote_delete = _create_gitea_api().delete_repo_sync(owner=repository.get('owner'), repo=repository.get('name'))
        if remote_delete.get('error'):
            manager.log_system_event(
                component='gitea',
                level='WARNING',
                message=f"Remote repository delete failed for {repository.get('owner')}/{repository.get('name')}: {remote_delete.get('error')}",
            )

    result = manager.delete_project(project_id)
    if result.get('status') == 'error':
        raise HTTPException(status_code=400, detail=result.get('message', 'Project deletion failed'))
    result['remote_repository_deleted'] = bool(remote_delete and not remote_delete.get('error'))
    result['remote_repository_delete_error'] = remote_delete.get('error') if remote_delete else None
    result['remote_repository'] = repository if repository else None
    return result

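The remote-delete guard in `delete_project` can be factored into a pure predicate, which makes the conditions easy to test in isolation: delete the Gitea repository only when it is project-owned (not `'shared'`), fully identified, and Gitea credentials are configured. This helper is a hypothetical refactoring for illustration, not part of the source:

```python
# Hypothetical predicate mirroring the guard delete_project applies before
# attempting a remote Gitea repository delete.
def should_delete_remote_repo(repository, gitea_url, gitea_token):
    return bool(
        repository
        and repository.get('mode') != 'shared'   # never delete shared repos
        and repository.get('owner')
        and repository.get('name')
        and gitea_url                             # Gitea must be configured
        and gitea_token
    )


repo = {'mode': 'dedicated', 'owner': 'ai-software-factory', 'name': 'demo'}
print(should_delete_remote_repo(repo, 'https://git.example.com', 'token'))            # True
print(should_delete_remote_repo({**repo, 'mode': 'shared'}, 'https://git.example.com', 'token'))  # False
print(should_delete_remote_repo(repo, None, 'token'))                                 # False
```

Note the endpoint treats a failed remote delete as non-fatal: it logs a warning and still deletes the local project, reporting the outcome in `remote_repository_deleted` and `remote_repository_delete_error`.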
@app.post('/projects/{project_id}/sync-repository')
def sync_project_repository(project_id: str, db: DbSession, commit_limit: int = Query(default=25, ge=1, le=200)):
    """Import recent repository activity from Gitea for a tracked project."""
    manager = DatabaseManager(db)
    history = manager.get_project_by_id(project_id)
    if history is None:
        raise HTTPException(status_code=404, detail='Project not found')
    if history.status == 'archived':
        raise HTTPException(status_code=400, detail='Archived projects cannot be synced')
    gitea_api = _create_gitea_api()
    result = manager.sync_repository_activity(project_id=project_id, gitea_api=gitea_api, commit_limit=commit_limit)
    if result.get('status') == 'error':