3 Commits
0.7.1 ... 0.8.0

SHA1        Message                                                                  Date
798bb218f8  release: version 0.8.0 🚀                                                2026-04-11 10:30:59 +02:00
            All checks were successful
            Upload Python Package / Create Release (push): Successful in 33s
            Upload Python Package / deploy (push): Successful in 41s
3d77ac3104  feat: better dashboard reloading mechanism, refs NOISSUE                 2026-04-11 10:30:56 +02:00
f6681a0f85  feat: add explicit workflow steps and guardrail prompts, refs NOISSUE    2026-04-11 10:06:50 +02:00
11 changed files with 1473 additions and 536 deletions

View File

@@ -4,12 +4,23 @@ Changelog
(unreleased)
------------
- Feat: better dashboard reloading mechanism, refs NOISSUE. [Simon
  Diesenreiter]
- Feat: add explicit workflow steps and guardrail prompts, refs NOISSUE.
  [Simon Diesenreiter]

0.7.1 (2026-04-11)
------------------

Fix
~~~
- Add additional deletion confirmation, refs NOISSUE. [Simon
  Diesenreiter]

Other
~~~~~

0.7.0 (2026-04-10)
------------------

View File

@@ -8,6 +8,19 @@ LOG_LEVEL=INFO
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LLM_GUARDRAIL_PROMPT=You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.
LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT=Never route work to archived projects and only reference issues that are explicit in the prompt or supplied tool outputs.
LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT=Only summarize delivery facts that appear in the provided project context or tool outputs.
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT=Prefer clear product names and repository slugs that reflect the new request without colliding with tracked projects.
LLM_PROJECT_NAMING_SYSTEM_PROMPT=Return JSON with project_name, repo_name, and rationale for new projects.
LLM_PROJECT_ID_GUARDRAIL_PROMPT=Prefer short stable project ids and avoid collisions with existing project ids.
LLM_PROJECT_ID_SYSTEM_PROMPT=Return JSON with project_id and rationale for new projects.
LLM_TOOL_ALLOWLIST=gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests
LLM_TOOL_CONTEXT_LIMIT=5
LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
# Configure Gitea API for your organization

View File

@@ -6,6 +6,7 @@ Automated software generation service powered by Ollama LLM. This service allows
- **Telegram Integration**: Receive software requests via Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **LLM Guardrails and Tools**: Centralized guardrail prompts plus mediated tool payloads for project, Gitea, PR, and issue context
- **Git Integration**: Automatically commits code to Gitea
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Beautiful dashboard for monitoring project progress
@@ -46,6 +47,19 @@ PORT=8000
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LLM_GUARDRAIL_PROMPT=You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.
LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT=Never route work to archived projects and only reference issues that are explicit in the prompt or supplied tool outputs.
LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT=Only summarize delivery facts that appear in the provided project context or tool outputs.
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT=Prefer clear product names and repository slugs that reflect the new request without colliding with tracked projects.
LLM_PROJECT_NAMING_SYSTEM_PROMPT=Return JSON with project_name, repo_name, and rationale for new projects.
LLM_PROJECT_ID_GUARDRAIL_PROMPT=Prefer short stable project ids and avoid collisions with existing project ids.
LLM_PROJECT_ID_SYSTEM_PROMPT=Return JSON with project_id and rationale for new projects.
LLM_TOOL_ALLOWLIST=gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests
LLM_TOOL_CONTEXT_LIMIT=5
LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
LLM_MAX_TOOL_CALL_ROUNDS=1
# Gitea
GITEA_URL=https://gitea.yourserver.com
@@ -99,6 +113,33 @@ docker-compose up -d
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
## LLM Guardrails and Tool Access
External LLM calls are now routed through a centralized client that applies:
- A global guardrail prompt for every outbound model request
- Stage-specific guardrails for request interpretation and change summaries
- Service-mediated tool outputs that expose tracked Gitea/project state without giving the model raw credentials
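For instance, with the defaults above, the composed system prompt for a `change_summary` call looks roughly like this (stage prompt abbreviated):

```text
Write 3 to 5 sentences. Mention the application goal, main delivered pieces, ...

Global guardrails:
You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.

Stage-specific guardrails:
Only summarize delivery facts that appear in the provided project context or tool outputs.
```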
Current mediated tools include:
- `gitea_project_catalog`: active tracked projects and repository mappings
- `gitea_project_state`: current repository, PR, and linked-issue state for the project in scope
- `gitea_project_issues`: tracked open issues for the relevant repository
- `gitea_pull_requests`: tracked pull requests for the relevant repository
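As an illustration, each mediated tool entry is serialized into the prompt as JSON with `name`, `description`, and `payload` fields. A `gitea_project_state` entry might look like the following (field values are hypothetical):

```json
{
  "name": "gitea_project_state",
  "description": "Current repository and pull-request state for the project being discussed.",
  "payload": {
    "project_id": "todo-app",
    "project_name": "Todo App",
    "repository": {"owner": "factory", "name": "todo-app"},
    "pull_request": 7,
    "pull_request_state": "open",
    "related_issue": 3
  }
}
```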
The service also supports a bounded live tool-call loop for selected lookups. When enabled, the model may request a single live call such as `gitea_lookup_issue` or `gitea_lookup_pull_request`; the service executes it against Gitea and generates the final model response from the returned result. The call remains mediated by the service, so the model never receives raw credentials.
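The handshake uses a small JSON envelope. A model asking for a live issue lookup would reply with something like the following (identifiers hypothetical):

```json
{"tool_request": {"name": "gitea_lookup_issue", "arguments": {"project_id": "todo-app", "issue_number": 42}}}
```

The service resolves the repository from explicit `owner`/`repo` arguments or the tracked project context, executes the lookup, and re-prompts the model with the result.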
Live tool access is stage-aware. `LLM_LIVE_TOOL_ALLOWLIST` controls which live tools exist globally, while `LLM_LIVE_TOOL_STAGE_ALLOWLIST` controls which LLM stages may use them. If you need per-stage subsets, `LLM_LIVE_TOOL_STAGE_TOOL_MAP` accepts a JSON object mapping each stage to the exact tools it may use. For example, you can allow issue and PR lookups during `request_interpretation` while keeping `change_summary` fully read-only.
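For example, the following `.env` fragment (matching the defaults shipped in this release) enables both lookups during `request_interpretation` while keeping `change_summary` read-only:

```env
LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
```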
When the interpreter decides a prompt starts a new project, the service can run a dedicated `project_naming` LLM stage before generation. `LLM_PROJECT_NAMING_SYSTEM_PROMPT` and `LLM_PROJECT_NAMING_GUARDRAIL_PROMPT` let you steer how project titles and repository slugs are chosen. The interpreter checks tracked project repositories plus live Gitea repository names when available, so if the model suggests a colliding repo slug the service automatically falls back to the next available slug.
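Per `LLM_PROJECT_NAMING_SYSTEM_PROMPT`, the naming stage is expected to return only JSON with `project_name`, `repo_name`, and `rationale`, for example (values illustrative):

```json
{
  "project_name": "Todo App",
  "repo_name": "todo-app",
  "rationale": "Concise product title with a matching lowercase kebab-case slug."
}
```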
New project creation can also run a dedicated `project_id_naming` stage. `LLM_PROJECT_ID_SYSTEM_PROMPT` and `LLM_PROJECT_ID_GUARDRAIL_PROMPT` control how stable project ids are chosen, and the service will append deterministic numeric suffixes when an id is already taken instead of always falling back to a random UUID-based id.
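A minimal sketch of the deterministic suffixing (mirroring the service's internal `_ensure_unique_identifier` helper; the standalone function name here is illustrative):

```python
def ensure_unique_id(base: str, reserved: set[str]) -> str:
    """Append -2, -3, ... until the id no longer collides with reserved ids."""
    if base not in reserved:
        return base
    suffix = 2
    while f"{base}-{suffix}" in reserved:
        suffix += 1
    return f"{base}-{suffix}"

# "todo-app" and "todo-app-2" are taken, so the next stable id is "todo-app-3".
assert ensure_unique_id("todo-app", {"todo-app", "todo-app-2"}) == "todo-app-3"
```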
Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.
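For a quick check from Python (assuming the service listens on the default port 8000):

```python
import json
import urllib.request

# Dump the active guardrails, mediated tools, live tool exposure, and model config.
with urllib.request.urlopen("http://localhost:8000/llm/runtime") as resp:
    print(json.dumps(json.load(resp), indent=2))
```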
These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
## Development
### Makefile Targets

View File

@@ -1 +1 @@
0.7.1
0.8.0

View File

@@ -4,8 +4,10 @@ from __future__ import annotations
try:
from ..config import settings
from .llm_service import LLMServiceClient
except ImportError:
from config import settings
from agents.llm_service import LLMServiceClient
class ChangeSummaryGenerator:
@@ -14,6 +16,7 @@ class ChangeSummaryGenerator:
def __init__(self, ollama_url: str | None = None, model: str | None = None):
self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
self.model = model or settings.OLLAMA_MODEL
self.llm_client = LLMServiceClient(ollama_url=self.ollama_url, model=self.model)
async def summarize(self, context: dict) -> str:
"""Summarize project changes with Ollama, or fall back to a deterministic overview."""
@@ -28,40 +31,24 @@ class ChangeSummaryGenerator:
'Write 3 to 5 sentences. Mention the application goal, main delivered pieces, '
'technical direction, and what the user should expect next. Avoid markdown bullets.'
)
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(
f'{self.ollama_url}/api/chat',
json={
'model': self.model,
'stream': False,
'messages': [
{
'role': 'system',
'content': system_prompt,
},
{'role': 'user', 'content': prompt},
],
},
) as resp:
payload = await resp.json()
if 200 <= resp.status < 300:
content = payload.get('message', {}).get('content', '').strip()
if content:
return content, {
'stage': 'change_summary',
'provider': 'ollama',
'model': self.model,
'system_prompt': system_prompt,
'user_prompt': prompt,
'assistant_response': content,
'raw_response': payload,
'fallback_used': False,
}
except Exception:
pass
content, trace = await self.llm_client.chat_with_trace(
stage='change_summary',
system_prompt=system_prompt,
user_prompt=prompt,
tool_context_input={
'project_id': context.get('project_id'),
'project_name': context.get('name'),
'repository': context.get('repository'),
'repository_url': context.get('repository_url'),
'pull_request': context.get('pull_request'),
'pull_request_url': context.get('pull_request_url'),
'pull_request_state': context.get('pull_request_state'),
'related_issue': context.get('related_issue'),
'issues': [context.get('related_issue')] if context.get('related_issue') else [],
},
)
if content:
return content.strip(), trace
fallback = self._fallback(context)
return fallback, {
@@ -71,7 +58,9 @@ class ChangeSummaryGenerator:
'system_prompt': system_prompt,
'user_prompt': prompt,
'assistant_response': fallback,
'raw_response': {'fallback': 'deterministic'},
'raw_response': {'fallback': 'deterministic', 'llm_trace': trace.get('raw_response') if isinstance(trace, dict) else None},
'guardrails': trace.get('guardrails') if isinstance(trace, dict) else [],
'tool_context': trace.get('tool_context') if isinstance(trace, dict) else [],
'fallback_used': True,
}

View File

@@ -0,0 +1,394 @@
"""Centralized LLM client with guardrails and mediated tool context."""
from __future__ import annotations
import json
try:
from .gitea import GiteaAPI
except ImportError:
from gitea import GiteaAPI
try:
from ..config import settings
except ImportError:
from config import settings
class LLMToolbox:
"""Build named tool payloads that can be shared with external LLM providers."""
SUPPORTED_LIVE_TOOL_STAGES = ('request_interpretation', 'change_summary', 'generation_plan', 'project_naming', 'project_id_naming')
def build_tool_context(self, stage: str, context: dict | None = None) -> list[dict]:
"""Return the mediated tool payloads allowed for this LLM request."""
context = context or {}
allowed = set(settings.llm_tool_allowlist)
limit = settings.llm_tool_context_limit
tool_context: list[dict] = []
if 'gitea_project_catalog' in allowed:
projects = context.get('projects') or []
if projects:
tool_context.append(
{
'name': 'gitea_project_catalog',
'description': 'Tracked active projects and their repository mappings inside the factory.',
'payload': projects[:limit],
}
)
if 'gitea_project_state' in allowed:
state_payload = {
'project_id': context.get('project_id'),
'project_name': context.get('project_name') or context.get('name'),
'repository': context.get('repository'),
'repository_url': context.get('repository_url'),
'pull_request': context.get('pull_request'),
'pull_request_url': context.get('pull_request_url'),
'pull_request_state': context.get('pull_request_state'),
'related_issue': context.get('related_issue'),
}
if any(value for value in state_payload.values()):
tool_context.append(
{
'name': 'gitea_project_state',
'description': 'Current repository and pull-request state for the project being discussed.',
'payload': state_payload,
}
)
if 'gitea_project_issues' in allowed:
issues = context.get('open_issues') or context.get('issues') or []
if issues:
tool_context.append(
{
'name': 'gitea_project_issues',
'description': 'Open tracked Gitea issues for the relevant project repository.',
'payload': issues[:limit],
}
)
if 'gitea_pull_requests' in allowed:
pull_requests = context.get('pull_requests') or []
if pull_requests:
tool_context.append(
{
'name': 'gitea_pull_requests',
'description': 'Tracked pull requests associated with the relevant project repository.',
'payload': pull_requests[:limit],
}
)
return tool_context
def build_live_tool_specs(self, stage: str, context: dict | None = None) -> list[dict]:
"""Return live tool-call specs that the model may request explicitly."""
_context = context or {}
specs = []
allowed = set(settings.llm_live_tools_for_stage(stage))
if 'gitea_lookup_issue' in allowed:
specs.append(
{
'name': 'gitea_lookup_issue',
'description': 'Fetch one live Gitea issue by issue number for a tracked repository.',
'arguments': {
'project_id': 'optional tracked project id',
'owner': 'optional repository owner override',
'repo': 'optional repository name override',
'issue_number': 'required integer issue number',
},
}
)
if 'gitea_lookup_pull_request' in allowed:
specs.append(
{
'name': 'gitea_lookup_pull_request',
'description': 'Fetch one live Gitea pull request by PR number for a tracked repository.',
'arguments': {
'project_id': 'optional tracked project id',
'owner': 'optional repository owner override',
'repo': 'optional repository name override',
'pr_number': 'required integer pull request number',
},
}
)
return specs
class LLMLiveToolExecutor:
"""Resolve bounded live tool requests on behalf of the model."""
def __init__(self):
self.gitea_api = None
if settings.gitea_url and settings.gitea_token:
self.gitea_api = GiteaAPI(
token=settings.GITEA_TOKEN,
base_url=settings.GITEA_URL,
owner=settings.GITEA_OWNER,
repo=settings.GITEA_REPO or '',
)
async def execute(self, tool_name: str, arguments: dict, context: dict | None = None) -> dict:
"""Execute one live tool request and normalize the result."""
if tool_name not in set(settings.llm_live_tool_allowlist):
return {'error': f'Tool {tool_name} is not enabled'}
if self.gitea_api is None:
return {'error': 'Gitea live tool execution is not configured'}
resolved = self._resolve_repository(arguments=arguments, context=context or {})
if resolved.get('error'):
return resolved
owner = resolved['owner']
repo = resolved['repo']
if tool_name == 'gitea_lookup_issue':
issue_number = arguments.get('issue_number')
if issue_number is None:
return {'error': 'issue_number is required'}
return await self.gitea_api.get_issue(issue_number=int(issue_number), owner=owner, repo=repo)
if tool_name == 'gitea_lookup_pull_request':
pr_number = arguments.get('pr_number')
if pr_number is None:
return {'error': 'pr_number is required'}
return await self.gitea_api.get_pull_request(pr_number=int(pr_number), owner=owner, repo=repo)
return {'error': f'Unsupported tool {tool_name}'}
def _resolve_repository(self, arguments: dict, context: dict) -> dict:
"""Resolve repository owner/name from explicit args or tracked project context."""
owner = arguments.get('owner')
repo = arguments.get('repo')
if owner and repo:
return {'owner': owner, 'repo': repo}
project_id = arguments.get('project_id')
if project_id:
for project in context.get('projects', []):
if project.get('project_id') == project_id:
repository = project.get('repository') or {}
if repository.get('owner') and repository.get('name'):
return {'owner': repository['owner'], 'repo': repository['name']}
state = context.get('repository') or {}
if context.get('project_id') == project_id and state.get('owner') and state.get('name'):
return {'owner': state['owner'], 'repo': state['name']}
repository = context.get('repository') or {}
if repository.get('owner') and repository.get('name'):
return {'owner': repository['owner'], 'repo': repository['name']}
return {'error': 'Could not resolve repository for tool request'}
class LLMServiceClient:
"""Call the configured LLM provider with consistent guardrails and tool payloads."""
def __init__(self, ollama_url: str | None = None, model: str | None = None):
self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
self.model = model or settings.OLLAMA_MODEL
self.toolbox = LLMToolbox()
self.live_tool_executor = LLMLiveToolExecutor()
async def chat_with_trace(
self,
*,
stage: str,
system_prompt: str,
user_prompt: str,
tool_context_input: dict | None = None,
expect_json: bool = False,
) -> tuple[str | None, dict]:
"""Invoke the configured LLM and return both content and a structured trace."""
effective_system_prompt = self._compose_system_prompt(stage, system_prompt)
tool_context = self.toolbox.build_tool_context(stage=stage, context=tool_context_input)
live_tool_specs = self.toolbox.build_live_tool_specs(stage=stage, context=tool_context_input)
effective_user_prompt = self._compose_user_prompt(user_prompt, tool_context, live_tool_specs)
raw_responses: list[dict] = []
executed_tool_calls: list[dict] = []
current_user_prompt = effective_user_prompt
max_rounds = settings.llm_max_tool_call_rounds
for round_index in range(max_rounds + 1):
content, payload, error = await self._send_chat_request(
system_prompt=effective_system_prompt,
user_prompt=current_user_prompt,
expect_json=expect_json,
)
raw_responses.append(payload)
if content:
tool_request = self._extract_tool_request(content)
if tool_request and round_index < max_rounds:
tool_name = tool_request.get('name')
tool_arguments = tool_request.get('arguments') or {}
tool_result = await self.live_tool_executor.execute(tool_name, tool_arguments, tool_context_input)
executed_tool_calls.append(
{
'name': tool_name,
'arguments': tool_arguments,
'result': tool_result,
}
)
current_user_prompt = self._compose_follow_up_prompt(user_prompt, tool_context, live_tool_specs, executed_tool_calls)
continue
return content, {
'stage': stage,
'provider': 'ollama',
'model': self.model,
'system_prompt': effective_system_prompt,
'user_prompt': current_user_prompt,
'assistant_response': content,
'raw_response': {
'provider_response': raw_responses[-1],
'provider_responses': raw_responses,
'tool_context': tool_context,
'live_tool_specs': live_tool_specs,
'executed_tool_calls': executed_tool_calls,
},
'raw_responses': raw_responses,
'fallback_used': False,
'guardrails': self._guardrail_sections(stage),
'tool_context': tool_context,
'live_tool_specs': live_tool_specs,
'executed_tool_calls': executed_tool_calls,
}
if error:
break
return None, {
'stage': stage,
'provider': 'ollama',
'model': self.model,
'system_prompt': effective_system_prompt,
'user_prompt': current_user_prompt,
'assistant_response': '',
'raw_response': {
'provider_response': raw_responses[-1] if raw_responses else {'error': 'No response'},
'provider_responses': raw_responses,
'tool_context': tool_context,
'live_tool_specs': live_tool_specs,
'executed_tool_calls': executed_tool_calls,
},
'raw_responses': raw_responses,
'fallback_used': True,
'guardrails': self._guardrail_sections(stage),
'tool_context': tool_context,
'live_tool_specs': live_tool_specs,
'executed_tool_calls': executed_tool_calls,
}
async def _send_chat_request(self, *, system_prompt: str, user_prompt: str, expect_json: bool) -> tuple[str | None, dict, str | None]:
"""Send one outbound chat request to the configured model provider."""
request_payload = {
'model': self.model,
'stream': False,
'messages': [
{'role': 'system', 'content': system_prompt},
{'role': 'user', 'content': user_prompt},
],
}
if expect_json:
request_payload['format'] = 'json'
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(f'{self.ollama_url}/api/chat', json=request_payload) as resp:
payload = await resp.json()
if 200 <= resp.status < 300:
return (payload.get('message') or {}).get('content', ''), payload, None
return None, payload, str(payload.get('error') or payload)
except Exception as exc:
return None, {'error': str(exc)}, str(exc)
def _compose_system_prompt(self, stage: str, stage_prompt: str) -> str:
"""Merge the stage prompt with configured guardrails."""
sections = [stage_prompt.strip()] + self._guardrail_sections(stage)
return '\n\n'.join(section for section in sections if section)
def _guardrail_sections(self, stage: str) -> list[str]:
"""Return all configured guardrail sections for one LLM stage."""
sections = []
if settings.llm_guardrail_prompt:
sections.append(f'Global guardrails:\n{settings.llm_guardrail_prompt}')
stage_specific = {
'request_interpretation': settings.llm_request_interpreter_guardrail_prompt,
'change_summary': settings.llm_change_summary_guardrail_prompt,
'project_naming': settings.llm_project_naming_guardrail_prompt,
'project_id_naming': settings.llm_project_id_guardrail_prompt,
}.get(stage)
if stage_specific:
sections.append(f'Stage-specific guardrails:\n{stage_specific}')
return sections
def _compose_user_prompt(self, prompt: str, tool_context: list[dict], live_tool_specs: list[dict] | None = None) -> str:
"""Append tool payloads and live tool-call specs to the outbound user prompt."""
live_tool_specs = live_tool_specs if live_tool_specs is not None else []
sections = [prompt]
if tool_context:
sections.append(
'Service-mediated tool outputs are available below. Treat them as authoritative read-only data supplied by the factory:\n'
f'{json.dumps(tool_context, indent=2, sort_keys=True)}'
)
if live_tool_specs:
sections.append(
'If you need additional live repository data, you may request exactly one tool call by responding with JSON shaped as '
'{"tool_request": {"name": "<tool name>", "arguments": {...}}}. '
'After tool results are returned, respond with the final answer instead of another tool request.\n'
f'Available live tools:\n{json.dumps(live_tool_specs, indent=2, sort_keys=True)}'
)
return '\n\n'.join(section for section in sections if section)
def _compose_follow_up_prompt(self, original_prompt: str, tool_context: list[dict], live_tool_specs: list[dict], executed_tool_calls: list[dict]) -> str:
"""Build the follow-up user prompt after executing one or more live tool requests."""
sections = [self._compose_user_prompt(original_prompt, tool_context, live_tool_specs)]
sections.append(
'The service executed the requested live tool call(s). Use the tool result(s) below to produce the final answer. Do not request another tool call.\n'
f'{json.dumps(executed_tool_calls, indent=2, sort_keys=True)}'
)
return '\n\n'.join(sections)
def _extract_tool_request(self, content: str) -> dict | None:
"""Return a normalized tool request when the model explicitly asks for one."""
try:
parsed = json.loads(content)
except Exception:
return None
if not isinstance(parsed, dict):
return None
tool_request = parsed.get('tool_request')
if not isinstance(tool_request, dict) or not tool_request.get('name'):
return None
return {
'name': str(tool_request.get('name')).strip(),
'arguments': tool_request.get('arguments') or {},
}
def get_runtime_configuration(self) -> dict:
"""Return the active LLM runtime config, guardrails, and tool exposure."""
live_tool_stages = {
stage: settings.llm_live_tools_for_stage(stage)
for stage in self.toolbox.SUPPORTED_LIVE_TOOL_STAGES
}
return {
'provider': 'ollama',
'ollama_url': self.ollama_url,
'model': self.model,
'guardrails': {
'global': settings.llm_guardrail_prompt,
'request_interpretation': settings.llm_request_interpreter_guardrail_prompt,
'change_summary': settings.llm_change_summary_guardrail_prompt,
'project_naming': settings.llm_project_naming_guardrail_prompt,
'project_id_naming': settings.llm_project_id_guardrail_prompt,
},
'system_prompts': {
'project_naming': settings.llm_project_naming_system_prompt,
'project_id_naming': settings.llm_project_id_system_prompt,
},
'mediated_tools': settings.llm_tool_allowlist,
'live_tools': settings.llm_live_tool_allowlist,
'live_tool_stage_allowlist': settings.llm_live_tool_stage_allowlist,
'live_tool_stage_tool_map': settings.llm_live_tool_stage_tool_map,
'live_tools_by_stage': live_tool_stages,
'tool_context_limit': settings.llm_tool_context_limit,
'max_tool_call_rounds': settings.llm_max_tool_call_rounds,
'gitea_live_tools_configured': bool(settings.gitea_url and settings.gitea_token),
}

View File

@@ -39,6 +39,7 @@ class AgentOrchestrator:
existing_history=None,
prompt_source_context: dict | None = None,
prompt_routing: dict | None = None,
repo_name_override: str | None = None,
related_issue_hint: dict | None = None,
):
"""Initialize orchestrator."""
@@ -58,6 +59,7 @@ class AgentOrchestrator:
self.prompt_actor = prompt_actor
self.prompt_source_context = prompt_source_context or {}
self.prompt_routing = prompt_routing or {}
self.repo_name_override = repo_name_override
self.existing_history = existing_history
self.changed_files: list[str] = []
self.gitea_api = GiteaAPI(
@@ -68,7 +70,7 @@ class AgentOrchestrator:
)
self.project_root = settings.projects_root / project_id
self.prompt_audit = None
self.repo_name = settings.gitea_repo or self.gitea_api.build_project_repo_name(project_id, project_name)
self.repo_name = settings.gitea_repo or self.gitea_api.build_project_repo_name(project_id, repo_name_override or project_name)
self.repo_owner = settings.gitea_owner
self.repo_url = None
self.branch_name = self._build_pr_branch_name(project_id)

View File

@@ -7,8 +7,12 @@ import re
try:
from ..config import settings
from .gitea import GiteaAPI
from .llm_service import LLMServiceClient
except ImportError:
from config import settings
from agents.gitea import GiteaAPI
from agents.llm_service import LLMServiceClient
class RequestInterpreter:
@@ -17,6 +21,15 @@ class RequestInterpreter:
def __init__(self, ollama_url: str | None = None, model: str | None = None):
self.ollama_url = (ollama_url or settings.ollama_url).rstrip('/')
self.model = model or settings.OLLAMA_MODEL
self.llm_client = LLMServiceClient(ollama_url=self.ollama_url, model=self.model)
self.gitea_api = None
if settings.gitea_url and settings.gitea_token:
self.gitea_api = GiteaAPI(
token=settings.GITEA_TOKEN,
base_url=settings.GITEA_URL,
owner=settings.GITEA_OWNER,
repo=settings.GITEA_REPO or '',
)
async def interpret(self, prompt_text: str, context: dict | None = None) -> dict:
"""Interpret free-form text into the request shape expected by the orchestrator."""
@@ -49,48 +62,46 @@ class RequestInterpreter:
f"User prompt:\n{normalized}"
)
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(
f'{self.ollama_url}/api/chat',
json={
'model': self.model,
'stream': False,
'format': 'json',
'messages': [
{
'role': 'system',
'content': system_prompt,
},
{'role': 'user', 'content': user_prompt},
],
},
) as resp:
payload = await resp.json()
if 200 <= resp.status < 300:
content = payload.get('message', {}).get('content', '')
if content:
parsed = json.loads(content)
interpreted = self._normalize_interpreted_request(parsed, normalized)
routing = self._normalize_routing(parsed.get('routing'), interpreted, compact_context)
return interpreted, {
'stage': 'request_interpretation',
'provider': 'ollama',
'model': self.model,
'system_prompt': system_prompt,
'user_prompt': user_prompt,
'assistant_response': content,
'raw_response': payload,
'routing': routing,
'context_excerpt': compact_context,
'fallback_used': False,
}
except Exception:
pass
content, trace = await self.llm_client.chat_with_trace(
stage='request_interpretation',
system_prompt=system_prompt,
user_prompt=user_prompt,
tool_context_input={
'projects': compact_context.get('projects', []),
'open_issues': [
issue
for project in compact_context.get('projects', [])
for issue in project.get('open_issues', [])
],
'recent_chat_history': compact_context.get('recent_chat_history', []),
},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
interpreted = self._normalize_interpreted_request(parsed, normalized)
routing = self._normalize_routing(parsed.get('routing'), interpreted, compact_context)
naming_trace = None
if routing.get('intent') == 'new_project':
interpreted, routing, naming_trace = await self._refine_new_project_identity(
prompt_text=normalized,
interpreted=interpreted,
routing=routing,
context=compact_context,
)
trace['routing'] = routing
trace['context_excerpt'] = compact_context
if naming_trace is not None:
trace['project_naming'] = naming_trace
return interpreted, trace
except Exception:
pass
interpreted, routing = self._heuristic_fallback(normalized, compact_context)
if routing.get('intent') == 'new_project':
constraints = await self._collect_project_identity_constraints(compact_context)
routing['repo_name'] = self._ensure_unique_repo_name(routing.get('repo_name') or interpreted.get('name') or 'project', constraints['repo_names'])
return interpreted, {
'stage': 'request_interpretation',
'provider': 'heuristic',
@@ -98,12 +109,87 @@ class RequestInterpreter:
'system_prompt': system_prompt,
'user_prompt': user_prompt,
'assistant_response': json.dumps({'request': interpreted, 'routing': routing}),
'raw_response': {'fallback': 'heuristic'},
'raw_response': {'fallback': 'heuristic', 'llm_trace': trace.get('raw_response') if isinstance(trace, dict) else None},
'routing': routing,
'context_excerpt': compact_context,
'guardrails': trace.get('guardrails') if isinstance(trace, dict) else [],
'tool_context': trace.get('tool_context') if isinstance(trace, dict) else [],
'fallback_used': True,
}
async def _refine_new_project_identity(
self,
*,
prompt_text: str,
interpreted: dict,
routing: dict,
context: dict,
) -> tuple[dict, dict, dict | None]:
"""Refine project and repository naming for genuinely new work."""
constraints = await self._collect_project_identity_constraints(context)
user_prompt = (
f"Original user prompt:\n{prompt_text}\n\n"
f"Draft structured request:\n{json.dumps(interpreted, indent=2)}\n\n"
f"Tracked project names to avoid reusing unless the user clearly wants them:\n{json.dumps(sorted(constraints['project_names']))}\n\n"
f"Repository slugs already reserved in tracked projects or Gitea:\n{json.dumps(sorted(constraints['repo_names']))}\n\n"
"Suggest the best project display name and repository slug for this new project."
)
content, trace = await self.llm_client.chat_with_trace(
stage='project_naming',
system_prompt=settings.llm_project_naming_system_prompt,
user_prompt=user_prompt,
tool_context_input={
'projects': context.get('projects', []),
},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
project_name, repo_name = self._normalize_project_identity(
parsed,
fallback_name=interpreted.get('name') or self._derive_name(prompt_text),
)
repo_name = self._ensure_unique_repo_name(repo_name, constraints['repo_names'])
interpreted['name'] = project_name
routing['project_name'] = project_name
routing['repo_name'] = repo_name
return interpreted, routing, trace
except Exception:
pass
fallback_name = interpreted.get('name') or self._derive_name(prompt_text)
routing['project_name'] = fallback_name
routing['repo_name'] = self._ensure_unique_repo_name(self._derive_repo_name(fallback_name), constraints['repo_names'])
return interpreted, routing, trace
async def _collect_project_identity_constraints(self, context: dict) -> dict[str, set[str]]:
"""Collect reserved project names and repository slugs from tracked state and Gitea."""
project_names: set[str] = set()
repo_names: set[str] = set()
for project in context.get('projects', []):
if project.get('name'):
project_names.add(str(project.get('name')).strip())
repository = project.get('repository') or {}
if repository.get('name'):
repo_names.add(str(repository.get('name')).strip())
repo_names.update(await self._load_remote_repo_names())
return {
'project_names': project_names,
'repo_names': repo_names,
}
async def _load_remote_repo_names(self) -> set[str]:
"""Load current Gitea repository names when live credentials are available."""
if settings.gitea_repo:
return {settings.gitea_repo}
if self.gitea_api is None or not settings.gitea_owner:
return set()
repos = await self.gitea_api.list_repositories(owner=settings.gitea_owner)
if not isinstance(repos, list):
return set()
return {str(repo.get('name')).strip() for repo in repos if repo.get('name')}
def _normalize_interpreted_request(self, interpreted: dict, original_prompt: str) -> dict:
"""Normalize LLM output into the required request shape."""
request_payload = interpreted.get('request') if isinstance(interpreted.get('request'), dict) else interpreted
@@ -164,14 +250,18 @@ class RequestInterpreter:
matched_project = project
break
intent = str(routing.get('intent') or '').strip() or ('continue_project' if matched_project else 'new_project')
return {
normalized = {
'intent': intent,
'project_id': matched_project.get('project_id') if matched_project else project_id,
'project_name': matched_project.get('name') if matched_project else (project_name or interpreted.get('name')),
'repo_name': routing.get('repo_name') if intent == 'new_project' else None,
'issue_number': issue_number,
'confidence': routing.get('confidence') or ('medium' if matched_project else 'low'),
'reasoning_summary': routing.get('reasoning_summary') or ('Matched prior project context' if matched_project else 'No strong prior project match found'),
}
if normalized['intent'] == 'new_project' and not normalized['repo_name']:
normalized['repo_name'] = self._derive_repo_name(normalized['project_name'] or interpreted.get('name') or 'Generated Project')
return normalized
def _normalize_list(self, value) -> list[str]:
if isinstance(value, list):
@@ -218,6 +308,30 @@ class RequestInterpreter:
words.append(lowered.upper() if lowered in special_upper else lowered.capitalize())
return ' '.join(words) or 'Generated Project'
def _derive_repo_name(self, project_name: str) -> str:
"""Derive a repository slug from a human-readable project name."""
preferred = (project_name or 'project').strip().lower().replace(' ', '-')
sanitized = ''.join(ch if ch.isalnum() or ch in {'-', '_'} else '-' for ch in preferred)
while '--' in sanitized:
sanitized = sanitized.replace('--', '-')
return sanitized.strip('-') or 'project'
def _ensure_unique_repo_name(self, repo_name: str, reserved_names: set[str]) -> str:
"""Choose a repository slug that does not collide with tracked or remote repositories."""
base_name = self._derive_repo_name(repo_name)
if base_name not in reserved_names:
return base_name
suffix = 2
while f'{base_name}-{suffix}' in reserved_names:
suffix += 1
return f'{base_name}-{suffix}'
def _normalize_project_identity(self, payload: dict, fallback_name: str) -> tuple[str, str]:
"""Normalize model-proposed project and repository naming."""
project_name = self._humanize_name(str(payload.get('project_name') or payload.get('name') or fallback_name))
repo_name = self._derive_repo_name(str(payload.get('repo_name') or project_name))
return project_name, repo_name
def _heuristic_fallback(self, prompt_text: str, context: dict | None = None) -> tuple[dict, dict]:
"""Fallback request extraction when Ollama is unavailable."""
lowered = prompt_text.lower()
@@ -270,6 +384,7 @@ class RequestInterpreter:
'intent': intent,
'project_id': matched_project.get('project_id') if matched_project else None,
'project_name': matched_project.get('name') if matched_project else self._derive_name(prompt_text),
'repo_name': None if matched_project else self._derive_repo_name(self._derive_name(prompt_text)),
'issue_number': issue_number,
'confidence': 'medium' if matched_project or explicit_new else 'low',
'reasoning_summary': 'Heuristic routing from chat history and project names.',

View File

@@ -1,5 +1,6 @@
"""Configuration settings for AI Software Factory."""
import json
import os
from typing import Optional
from pathlib import Path
@@ -24,6 +25,34 @@ class Settings(BaseSettings):
# Ollama settings computed from environment
OLLAMA_URL: str = "http://ollama:11434"
OLLAMA_MODEL: str = "llama3"
LLM_GUARDRAIL_PROMPT: str = (
"You are operating inside AI Software Factory. Follow the requested schema exactly, "
"treat provided tool outputs as authoritative, and do not invent repositories, issues, pull requests, or delivery facts."
)
LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT: str = (
"For routing and request interpretation: never select archived projects, prefer tracked project IDs from tool outputs, and only reference issues that are explicit in the prompt or available tool data."
)
LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT: str = (
"For summaries: only describe facts present in the provided context and tool outputs. Never claim a repository, commit, or pull request exists unless it is present in the supplied data."
)
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT: str = (
"For project naming: prefer clear, product-like names and repository slugs that match the user's intent. Avoid reusing tracked project identities unless the request is clearly asking for an existing project."
)
LLM_PROJECT_NAMING_SYSTEM_PROMPT: str = (
"You name newly requested software projects. Return only JSON with keys project_name, repo_name, and rationale. Project names should be concise human-readable titles. Repo names should be lowercase kebab-case slugs suitable for a Gitea repository name."
)
LLM_PROJECT_ID_GUARDRAIL_PROMPT: str = (
"For project ids: produce short stable slugs for newly created projects. Avoid collisions with known project ids and keep ids lowercase with hyphens."
)
LLM_PROJECT_ID_SYSTEM_PROMPT: str = (
"You derive stable project ids for new projects. Return only JSON with keys project_id and rationale. project_id must be a short lowercase kebab-case slug without spaces."
)
LLM_TOOL_ALLOWLIST: str = "gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests"
LLM_TOOL_CONTEXT_LIMIT: int = 5
LLM_LIVE_TOOL_ALLOWLIST: str = "gitea_lookup_issue,gitea_lookup_pull_request"
LLM_LIVE_TOOL_STAGE_ALLOWLIST: str = "request_interpretation,change_summary"
LLM_LIVE_TOOL_STAGE_TOOL_MAP: str = ""
LLM_MAX_TOOL_CALL_ROUNDS: int = 1
# Gitea settings
GITEA_URL: str = "https://gitea.yourserver.com"
@@ -131,6 +160,97 @@ class Settings(BaseSettings):
"""Get Ollama URL with trimmed whitespace."""
return self.OLLAMA_URL.strip()
@property
def llm_guardrail_prompt(self) -> str:
"""Get the global guardrail prompt used for all external LLM calls."""
return self.LLM_GUARDRAIL_PROMPT.strip()
@property
def llm_request_interpreter_guardrail_prompt(self) -> str:
"""Get the request-interpretation specific guardrail prompt."""
return self.LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT.strip()
@property
def llm_change_summary_guardrail_prompt(self) -> str:
"""Get the change-summary specific guardrail prompt."""
return self.LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT.strip()
@property
def llm_project_naming_guardrail_prompt(self) -> str:
"""Get the project-naming specific guardrail prompt."""
return self.LLM_PROJECT_NAMING_GUARDRAIL_PROMPT.strip()
@property
def llm_project_naming_system_prompt(self) -> str:
"""Get the project-naming system prompt."""
return self.LLM_PROJECT_NAMING_SYSTEM_PROMPT.strip()
@property
def llm_project_id_guardrail_prompt(self) -> str:
"""Get the project-id naming specific guardrail prompt."""
return self.LLM_PROJECT_ID_GUARDRAIL_PROMPT.strip()
@property
def llm_project_id_system_prompt(self) -> str:
"""Get the project-id naming system prompt."""
return self.LLM_PROJECT_ID_SYSTEM_PROMPT.strip()
@property
def llm_tool_allowlist(self) -> list[str]:
"""Get the allowed LLM tool names as a normalized list."""
return [item.strip() for item in self.LLM_TOOL_ALLOWLIST.split(',') if item.strip()]
@property
def llm_tool_context_limit(self) -> int:
"""Get the number of items to expose per mediated tool payload."""
return max(int(self.LLM_TOOL_CONTEXT_LIMIT), 1)
@property
def llm_live_tool_allowlist(self) -> list[str]:
"""Get the allowed live tool-call names for model-driven lookup requests."""
return [item.strip() for item in self.LLM_LIVE_TOOL_ALLOWLIST.split(',') if item.strip()]
@property
def llm_live_tool_stage_allowlist(self) -> list[str]:
"""Get the LLM stages where live tool requests are enabled."""
return [item.strip() for item in self.LLM_LIVE_TOOL_STAGE_ALLOWLIST.split(',') if item.strip()]
@property
def llm_live_tool_stage_tool_map(self) -> dict[str, list[str]]:
"""Get an optional per-stage live tool map that overrides the simple stage allowlist."""
raw = (self.LLM_LIVE_TOOL_STAGE_TOOL_MAP or '').strip()
if not raw:
return {}
try:
parsed = json.loads(raw)
except Exception:
return {}
if not isinstance(parsed, dict):
return {}
allowed_tools = set(self.llm_live_tool_allowlist)
normalized: dict[str, list[str]] = {}
for stage, tools in parsed.items():
if not isinstance(stage, str):
continue
if not isinstance(tools, list):
continue
normalized[stage.strip()] = [str(tool).strip() for tool in tools if str(tool).strip() in allowed_tools]
return normalized
def llm_live_tools_for_stage(self, stage: str) -> list[str]:
"""Return live tools enabled for a specific LLM stage."""
stage_map = self.llm_live_tool_stage_tool_map
if stage_map:
return stage_map.get(stage, [])
if stage not in set(self.llm_live_tool_stage_allowlist):
return []
return self.llm_live_tool_allowlist
@property
def llm_max_tool_call_rounds(self) -> int:
"""Get the maximum number of model-driven live tool-call rounds per LLM request."""
return max(int(self.LLM_MAX_TOOL_CALL_ROUNDS), 0)
@property
def gitea_url(self) -> str:
"""Get Gitea URL with trimmed whitespace."""

File diff suppressed because it is too large

View File

@@ -30,6 +30,7 @@ try:
from .agents.change_summary import ChangeSummaryGenerator
from .agents.database_manager import DatabaseManager
from .agents.request_interpreter import RequestInterpreter
from .agents.llm_service import LLMServiceClient
from .agents.orchestrator import AgentOrchestrator
from .agents.n8n_setup import N8NSetupAgent
from .agents.prompt_workflow import PromptWorkflowManager
@@ -41,6 +42,7 @@ except ImportError:
from agents.change_summary import ChangeSummaryGenerator
from agents.database_manager import DatabaseManager
from agents.request_interpreter import RequestInterpreter
from agents.llm_service import LLMServiceClient
from agents.orchestrator import AgentOrchestrator
from agents.n8n_setup import N8NSetupAgent
from agents.prompt_workflow import PromptWorkflowManager
@@ -109,6 +111,75 @@ def _build_project_id(name: str) -> str:
return f"{slug}-{uuid4().hex[:8]}"
def _build_project_slug(name: str) -> str:
"""Normalize a project name into a kebab-case identifier slug."""
return PROJECT_ID_PATTERN.sub("-", name.strip().lower()).strip("-") or "project"
def _ensure_unique_identifier(base_slug: str, reserved_ids: set[str]) -> str:
"""Return a unique identifier using deterministic numeric suffixes when needed."""
normalized = _build_project_slug(base_slug)
if normalized not in reserved_ids:
return normalized
suffix = 2
while f"{normalized}-{suffix}" in reserved_ids:
suffix += 1
return f"{normalized}-{suffix}"
def _build_project_identity_context(manager: DatabaseManager) -> list[dict]:
"""Build a compact project catalog for naming stages."""
projects = []
for history in manager.get_all_projects(include_archived=True):
repository = manager._get_project_repository(history) or {}
projects.append(
{
'project_id': history.project_id,
'name': history.project_name,
'description': history.description,
'repository': {
'owner': repository.get('owner'),
'name': repository.get('name'),
},
}
)
return projects
async def _derive_project_id_for_request(
request: SoftwareRequest,
*,
prompt_text: str,
prompt_routing: dict | None,
existing_projects: list[dict],
) -> tuple[str, dict | None]:
"""Derive a stable project id for a newly created project."""
reserved_ids = {str(project.get('project_id')).strip() for project in existing_projects if project.get('project_id')}
fallback_id = _ensure_unique_identifier((prompt_routing or {}).get('project_name') or request.name, reserved_ids)
user_prompt = (
f"Original user prompt:\n{prompt_text}\n\n"
f"Structured request:\n{json.dumps({'name': request.name, 'description': request.description, 'features': request.features, 'tech_stack': request.tech_stack}, indent=2)}\n\n"
f"Naming context:\n{json.dumps(prompt_routing or {}, indent=2)}\n\n"
f"Reserved project ids:\n{json.dumps(sorted(reserved_ids))}\n\n"
"Suggest the best stable project id for this new project."
)
content, trace = await LLMServiceClient().chat_with_trace(
stage='project_id_naming',
system_prompt=database_module.settings.llm_project_id_system_prompt,
user_prompt=user_prompt,
tool_context_input={'projects': existing_projects},
expect_json=True,
)
if content:
try:
parsed = json.loads(content)
candidate = parsed.get('project_id') or parsed.get('slug') or request.name
return _ensure_unique_identifier(str(candidate), reserved_ids), trace
except Exception:
pass
return fallback_id, trace
def _serialize_project(history: ProjectHistory) -> dict:
"""Serialize a project history row for API responses."""
return {
@@ -176,13 +247,15 @@ async def _run_generation(
prompt_source_context: dict | None = None,
prompt_routing: dict | None = None,
preferred_project_id: str | None = None,
repo_name_override: str | None = None,
related_issue: dict | None = None,
) -> dict:
"""Run the shared generation pipeline for a structured request."""
database_module.init_db()
manager = DatabaseManager(db)
reusable_history = manager.get_project_by_id(preferred_project_id, include_archived=False) if preferred_project_id else manager.get_latest_project_by_name(request.name)
is_explicit_new_project = (prompt_routing or {}).get('intent') == 'new_project'
reusable_history = manager.get_project_by_id(preferred_project_id, include_archived=False) if preferred_project_id else (None if is_explicit_new_project else manager.get_latest_project_by_name(request.name))
if reusable_history and database_module.settings.gitea_url and database_module.settings.gitea_token:
try:
from .agents.gitea import GiteaAPI
@@ -197,14 +270,23 @@ async def _run_generation(
),
project_id=reusable_history.project_id,
)
project_id_trace = None
resolved_prompt_text = prompt_text or _compose_prompt_text(request)
if preferred_project_id and reusable_history is not None:
project_id = reusable_history.project_id
elif reusable_history and manager.get_open_pull_request(project_id=reusable_history.project_id):
elif reusable_history and not is_explicit_new_project and manager.get_open_pull_request(project_id=reusable_history.project_id):
project_id = reusable_history.project_id
else:
project_id = _build_project_id(request.name)
if is_explicit_new_project or prompt_text:
project_id, project_id_trace = await _derive_project_id_for_request(
request,
prompt_text=resolved_prompt_text,
prompt_routing=prompt_routing,
existing_projects=_build_project_identity_context(manager),
)
else:
project_id = _build_project_id(request.name)
reusable_history = None
resolved_prompt_text = prompt_text or _compose_prompt_text(request)
orchestrator = AgentOrchestrator(
project_id=project_id,
project_name=request.name,
@@ -217,6 +299,7 @@ async def _run_generation(
existing_history=reusable_history,
prompt_source_context=prompt_source_context,
prompt_routing=prompt_routing,
repo_name_override=repo_name_override,
related_issue_hint=related_issue,
)
result = await orchestrator.run()
@@ -240,6 +323,20 @@ async def _run_generation(
response_data['repository'] = result.get('repository')
response_data['related_issue'] = result.get('related_issue') or (result.get('ui_data') or {}).get('related_issue')
response_data['pull_request'] = result.get('pull_request') or manager.get_open_pull_request(project_id=project_id)
if project_id_trace:
manager.log_llm_trace(
project_id=project_id,
history_id=history.id if history else None,
prompt_id=orchestrator.prompt_audit.id if orchestrator.prompt_audit else None,
stage=project_id_trace['stage'],
provider=project_id_trace['provider'],
model=project_id_trace['model'],
system_prompt=project_id_trace['system_prompt'],
user_prompt=project_id_trace['user_prompt'],
assistant_response=project_id_trace['assistant_response'],
raw_response=project_id_trace.get('raw_response'),
fallback_used=project_id_trace.get('fallback_used', False),
)
summary_context = {
'name': response_data['name'],
'description': response_data['description'],
@@ -322,6 +419,7 @@ def read_api_info():
'/',
'/api',
'/health',
'/llm/runtime',
'/generate',
'/generate/text',
'/projects',
@@ -363,6 +461,12 @@ def health_check():
}
@app.get('/llm/runtime')
def get_llm_runtime():
"""Return the active external LLM runtime, guardrail, and tool configuration."""
return LLMServiceClient().get_runtime_configuration()
@app.post('/generate')
async def generate_software(request: SoftwareRequest, db: DbSession):
"""Create and record a software-generation request."""
@@ -411,6 +515,7 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSe
},
prompt_routing=routing,
preferred_project_id=routing.get('project_id') if routing.get('intent') != 'new_project' else None,
repo_name_override=routing.get('repo_name') if routing.get('intent') == 'new_project' else None,
related_issue={'number': routing.get('issue_number')} if routing.get('issue_number') is not None else None,
)
project_data = response.get('data', {})
@@ -431,6 +536,21 @@ async def generate_software_from_text(request: FreeformSoftwareRequest, db: DbSe
raw_response=interpretation_trace.get('raw_response'),
fallback_used=interpretation_trace.get('fallback_used', False),
)
naming_trace = interpretation_trace.get('project_naming')
if naming_trace:
manager.log_llm_trace(
project_id=project_data.get('project_id'),
history_id=project_data.get('history_id'),
prompt_id=prompt_id,
stage=naming_trace['stage'],
provider=naming_trace['provider'],
model=naming_trace['model'],
system_prompt=naming_trace['system_prompt'],
user_prompt=naming_trace['user_prompt'],
assistant_response=naming_trace['assistant_response'],
raw_response=naming_trace.get('raw_response'),
fallback_used=naming_trace.get('fallback_used', False),
)
response['interpreted_request'] = interpreted
response['routing'] = routing
response['llm_trace'] = interpretation_trace