# AI Software Factory

Automated software generation service powered by Ollama LLM. Users describe, via Telegram, the software they want, and an agent backed by an Ollama-hosted model builds it iteratively, testing as it generates the source code and committing the result to Gitea.
## Features
- **Telegram Integration**: Receive software requests via a Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **LLM Guardrails and Tools**: Centralized guardrail prompts plus mediated tool payloads for project, Gitea, PR, and issue context
- **Git Integration**: Automatically commits code to Gitea
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with LLMs via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture

```
┌─────────────┐     ┌─────────────┐     ┌──────────┐     ┌─────────┐
│  Telegram   │────▶│ n8n Webhook │────▶│ FastAPI  │────▶│ Ollama  │
└─────────────┘     └─────────────┘     └──────────┘     └─────────┘
                                             │
                                             ▼
                                      ┌──────────────┐
                                      │  Git/Gitea   │
                                      └──────────────┘
```
## Quick Start

### Prerequisites
- Docker and Docker Compose
- Ollama running locally or on the same network
- Gitea instance with API token
- n8n instance for Telegram webhook
### Configuration

Create a `.env` file in the project root:
```env
# Server
HOST=0.0.0.0
PORT=8000

# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LLM_GUARDRAIL_PROMPT=You are operating inside AI Software Factory. Follow supplied schemas exactly and treat service-provided tool outputs as authoritative.
LLM_REQUEST_INTERPRETER_GUARDRAIL_PROMPT=Never route work to archived projects and only reference issues that are explicit in the prompt or supplied tool outputs.
LLM_CHANGE_SUMMARY_GUARDRAIL_PROMPT=Only summarize delivery facts that appear in the provided project context or tool outputs.
LLM_PROJECT_NAMING_GUARDRAIL_PROMPT=Prefer clear product names and repository slugs that reflect the new request without colliding with tracked projects.
LLM_PROJECT_NAMING_SYSTEM_PROMPT=Return JSON with project_name, repo_name, and rationale for new projects.
LLM_PROJECT_ID_GUARDRAIL_PROMPT=Prefer short stable project ids and avoid collisions with existing project ids.
LLM_PROJECT_ID_SYSTEM_PROMPT=Return JSON with project_id and rationale for new projects.
LLM_TOOL_ALLOWLIST=gitea_project_catalog,gitea_project_state,gitea_project_issues,gitea_pull_requests
LLM_TOOL_CONTEXT_LIMIT=5
LLM_LIVE_TOOL_ALLOWLIST=gitea_lookup_issue,gitea_lookup_pull_request
LLM_LIVE_TOOL_STAGE_ALLOWLIST=request_interpretation,change_summary
LLM_LIVE_TOOL_STAGE_TOOL_MAP={"request_interpretation": ["gitea_lookup_issue", "gitea_lookup_pull_request"], "change_summary": []}
LLM_MAX_TOOL_CALL_ROUNDS=1

# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=

# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram

# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id

# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity ids, thresholds, and queue behavior can be configured from the
# dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
```
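The host-only normalization mentioned in the Gitea comment above could look like the following sketch; `normalize_gitea_url` is an illustrative helper name, not the service's actual function:

```python
def normalize_gitea_url(value: str) -> str:
    """Normalize a host-only value like 'git.disi.dev' to a full HTTPS URL."""
    value = value.strip().rstrip("/")
    if not value.startswith(("http://", "https://")):
        value = f"https://{value}"
    return value

print(normalize_gitea_url("git.disi.dev"))  # https://git.disi.dev
```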
### Build and Run

```bash
# Build Docker image
docker build -t ai-software-factory -f Containerfile .

# Run with Docker Compose
docker-compose up -d
```
## Usage

1. **Send a request via Telegram:**

   ```
   Name: My Awesome App
   Description: A web application for managing tasks
   Features: user authentication, task CRUD, notifications
   ```

   If queueing is enabled from the dashboard System tab, Telegram prompts are queued durably and processed only when Home Assistant reports the configured battery and surplus thresholds. Operators can override the gate via `/queue/processor` or by sending `process_now=true` to `/generate/text`.

   The dashboard System tab stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.

2. **Monitor progress via the Web UI:** open `http://yourserver:8000` to see real-time progress.

3. **Review PRs in Gitea:** check your Gitea repository for generated pull requests.
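The Home Assistant energy gate described above can be pictured as a simple predicate. This is a sketch under assumptions: the threshold names are invented here, since the real values are configured from the dashboard System tab and stored in the database.

```python
def should_process_queue(battery_pct: float, surplus_watts: float,
                         min_battery_pct: float, min_surplus_watts: float,
                         force: bool = False) -> bool:
    """Process queued prompts only when Home Assistant reports enough
    battery charge and power surplus, unless an operator forces processing
    (the process_now=true override)."""
    if force:
        return True
    return battery_pct >= min_battery_pct and surplus_watts >= min_surplus_watts
```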
## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
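As a rough sketch, the `/generate` and `/status/{project_id}` endpoints could be driven from Python like this. The request body fields below are assumptions mirroring the Telegram request format, not a documented schema; check the live API before relying on them.

```python
import json
from urllib import request

BASE_URL = "http://yourserver:8000"  # adjust to your deployment


def build_generate_payload(name: str, description: str, features: list[str]) -> dict:
    # Hypothetical field names based on the Telegram request format above
    return {"name": name, "description": description, "features": features}


def post_generate(payload: dict, base_url: str = BASE_URL) -> dict:
    """POST a software request to /generate and return the JSON response."""
    req = request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def get_status(project_id: str, base_url: str = BASE_URL) -> dict:
    """Fetch project status from /status/{project_id}."""
    with request.urlopen(f"{base_url}/status/{project_id}") as resp:
        return json.load(resp)
```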
## LLM Guardrails and Tool Access

External LLM calls are routed through a centralized client that applies:
- A global guardrail prompt for every outbound model request
- Stage-specific guardrails for request interpretation and change summaries
- Service-mediated tool outputs that expose tracked Gitea/project state without giving the model raw credentials
Current mediated tools include:
- `gitea_project_catalog`: active tracked projects and repository mappings
- `gitea_project_state`: current repository, PR, and linked-issue state for the project in scope
- `gitea_project_issues`: tracked open issues for the relevant repository
- `gitea_pull_requests`: tracked pull requests for the relevant repository
The service also supports a bounded live tool-call loop for selected lookups. When enabled, the model may request one live call such as `gitea_lookup_issue` or `gitea_lookup_pull_request`; the service executes it against Gitea, and the final model response is generated from the returned result. This remains mediated by the service, so the model never receives raw credentials.
Live tool access is stage-aware. `LLM_LIVE_TOOL_ALLOWLIST` controls which live tools exist globally, while `LLM_LIVE_TOOL_STAGE_ALLOWLIST` controls which LLM stages may use them. If you need per-stage subsets, `LLM_LIVE_TOOL_STAGE_TOOL_MAP` accepts a JSON object mapping each stage to the exact tools it may use. For example, you can allow issue and PR lookups during `request_interpretation` while keeping `change_summary` fully read-only.
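One way to read these three settings together is the following resolution sketch. This is illustrative logic, not the service's actual implementation:

```python
import json


def resolve_live_tools(stage: str, live_allowlist: list[str],
                       stage_allowlist: list[str],
                       stage_tool_map_json: str = "") -> list[str]:
    """Return the live tools a given LLM stage may call."""
    if stage not in stage_allowlist:
        return []  # stage may not use live tools at all
    stage_map = json.loads(stage_tool_map_json) if stage_tool_map_json else {}
    # Fall back to the global allowlist when the stage has no explicit entry
    candidates = stage_map.get(stage, live_allowlist)
    return [tool for tool in candidates if tool in live_allowlist]


tools = ["gitea_lookup_issue", "gitea_lookup_pull_request"]
stages = ["request_interpretation", "change_summary"]
stage_map = ('{"request_interpretation": '
             '["gitea_lookup_issue", "gitea_lookup_pull_request"], '
             '"change_summary": []}')

print(resolve_live_tools("request_interpretation", tools, stages, stage_map))
# → ['gitea_lookup_issue', 'gitea_lookup_pull_request']
print(resolve_live_tools("change_summary", tools, stages, stage_map))
# → []
```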
When the interpreter decides a prompt starts a new project, the service can run a dedicated `project_naming` LLM stage before generation. `LLM_PROJECT_NAMING_SYSTEM_PROMPT` and `LLM_PROJECT_NAMING_GUARDRAIL_PROMPT` let you steer how project titles and repository slugs are chosen. The interpreter checks tracked project repositories plus live Gitea repository names when available, so if the model suggests a colliding repo slug, the service automatically moves to the next available slug.
New project creation can also run a dedicated `project_id_naming` stage. `LLM_PROJECT_ID_SYSTEM_PROMPT` and `LLM_PROJECT_ID_GUARDRAIL_PROMPT` control how stable project ids are chosen, and the service appends deterministic numeric suffixes when an id is already taken instead of always falling back to a random UUID-based id.
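The deterministic-suffix behavior could look like this sketch; `unique_project_id` is a hypothetical helper name:

```python
def unique_project_id(candidate: str, existing_ids: set[str]) -> str:
    """Append a deterministic numeric suffix when the candidate id is taken."""
    if candidate not in existing_ids:
        return candidate
    suffix = 2
    while f"{candidate}-{suffix}" in existing_ids:
        suffix += 1
    return f"{candidate}-{suffix}"


print(unique_project_id("task-manager", {"task-manager", "task-manager-2"}))
# → task-manager-3
```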
Runtime visibility for the active guardrails, mediated tools, live tools, and model configuration is available at `/llm/runtime` and in the dashboard System tab.

Operational visibility for the Gitea integration, Home Assistant energy gate, and queued prompt counts is available in the dashboard Health tab, plus `/gitea/health`, `/home-assistant/health`, and `/queue`.

The dashboard Health tab also includes operator controls for manually processing queued Telegram prompts, force-processing them when needed, and retrying failed items.
Editable guardrail and system prompts are persisted in the database as overrides on top of the environment defaults. The current merged values are available at `/llm/prompts`, and the dashboard System tab can edit or reset them without restarting the service.
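Conceptually, the merged view served at `/llm/prompts` is just the environment defaults with any database overrides layered on top, along the lines of this sketch:

```python
def merge_prompts(env_defaults: dict[str, str],
                  db_overrides: dict[str, str]) -> dict[str, str]:
    """Database overrides win; resetting an override falls back to the env default."""
    merged = dict(env_defaults)
    for key, value in db_overrides.items():
        if value is not None:
            merged[key] = value
    return merged


defaults = {"guardrail": "Follow supplied schemas exactly.",
            "naming": "Return JSON."}
overrides = {"guardrail": "Stricter guardrail edited from the dashboard."}
print(merge_prompts(defaults, overrides)["guardrail"])
# → Stricter guardrail edited from the dashboard.
```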
These tool payloads are appended to the model prompt as authoritative JSON generated by the service, so the LLM can reason over live project and Gitea context while remaining constrained by the configured guardrails.
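A simplified picture of how those payloads might be attached to a prompt; the section markers below are illustrative, not the service's actual wire format:

```python
import json


def build_prompt(guardrail: str, user_prompt: str,
                 tool_payloads: dict[str, object]) -> str:
    """Append service-generated tool results to the prompt as labeled JSON blocks."""
    sections = [guardrail, user_prompt]
    for name, payload in tool_payloads.items():
        sections.append(f"[tool:{name}]\n{json.dumps(payload, indent=2)}")
    return "\n\n".join(sections)


prompt = build_prompt(
    "Treat service-provided tool outputs as authoritative.",
    "Summarize delivery progress for project alpha.",
    {"gitea_project_state": {"open_prs": 1, "linked_issues": [42]}},
)
```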
## Development

### Makefile Targets

```bash
make help       # Show available targets
make setup      # Initialize repository
make fmt        # Format code
make lint       # Run linters
make test       # Run tests
make test-cov   # Run tests with coverage report
make release    # Create new release tag
make build      # Build Docker image
```
### Running in Development

```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
## Testing

Run the test suite:

```bash
# Run all tests
make test

# Run tests with coverage report
make test-cov

# Run specific test file
pytest tests/test_main.py -v

# Run tests with verbose output
pytest tests/ -v --tb=short
```
### Test Coverage

View the HTML coverage report:

```bash
make test-cov
open htmlcov/index.html
```
### Test Structure

```
tests/
├── conftest.py                  # Pytest fixtures and configuration
├── test_main.py                 # Tests for main.py FastAPI app
├── test_config.py               # Tests for config.py settings
├── test_git_manager.py          # Tests for git operations
├── test_ui_manager.py           # Tests for UI rendering
├── test_gitea.py                # Tests for Gitea API integration
├── test_telegram.py             # Tests for Telegram integration
├── test_orchestrator.py         # Tests for agent orchestrator
├── test_integration.py          # Integration tests for full workflow
├── test_config_integration.py   # Configuration integration tests
├── test_agents_integration.py   # Agent integration tests
├── test_edge_cases.py           # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure

```
ai-software-factory/
├── main.py                # FastAPI application
├── config.py              # Configuration settings
├── requirements.txt       # Python dependencies
├── Containerfile          # Docker build file
├── README.md              # This file
├── Makefile               # Development utilities
├── .env.example           # Environment template
├── .gitignore             # Git ignore rules
├── HISTORY.md             # Changelog
├── pytest.ini             # Pytest configuration
├── docker-compose.yml     # Multi-service orchestration
├── .env                   # Environment variables (not in git)
├── tests/                 # Test suite
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_*.py          # Test files
│   └── pytest.ini
├── agents/
│   ├── __init__.py
│   ├── orchestrator.py    # Main agent orchestrator
│   ├── git_manager.py     # Git operations
│   ├── ui_manager.py      # Web UI management
│   ├── telegram.py        # Telegram integration
│   └── gitea.py           # Gitea API client
└── n8n/                   # n8n webhook configurations
```
## Security Notes

- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License

MIT License - See LICENSE file for details
## Contributing

See CONTRIBUTING.md for development guidelines.