# AI Software Factory

Automated software generation service powered by Ollama LLMs. Users describe the software they want via Telegram, and an agent backed by an Ollama-hosted model builds it iteratively, testing the code as it goes and committing the results to Gitea.
## Features

- **Telegram Integration**: Receive software requests via a Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **Git Integration**: Creates a dedicated Gitea repository per generated project inside your organization
- **Pull Requests**: Opens PRs for user review before merging
- **Web UI**: Dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with the LLM via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture

```
┌─────────────┐     ┌──────────────┐     ┌──────────┐     ┌─────────┐
│  Telegram   │────▶│  n8n Webhook │────▶│ FastAPI  │────▶│ Ollama  │
└─────────────┘     └──────────────┘     └──────────┘     └─────────┘
                                              │
                                              ▼
                                       ┌──────────────┐
                                       │  Git/Gitea   │
                                       └──────────────┘
```
## Quick Start

### Prerequisites

- Docker and Docker Compose
- Ollama running locally or on the same network
- Gitea instance with an API token
- n8n instance for the Telegram webhook
### Configuration

Create a `.env` file in the project root:

```bash
# Server
HOST=0.0.0.0
PORT=8000

# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3

# Gitea
# Host-only values such as git.disi.dev are normalized to https://git.disi.dev.
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
# Optional legacy fixed-repository mode. Leave empty to create one repo per project.
GITEA_REPO=

# Database
# In production, provide PostgreSQL settings. They take precedence over the SQLite default.
# Setting USE_SQLITE=false is still supported if you want to make the choice explicit.
POSTGRES_HOST=postgres.yourserver.com
POSTGRES_PORT=5432
POSTGRES_USER=ai_software_factory
POSTGRES_PASSWORD=change-me
POSTGRES_DB=ai_software_factory

# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram

# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id

# Optional: Home Assistant integration.
# Only the base URL and token are required in the environment.
# Entity ids, thresholds, and queue behavior can be configured from the
# dashboard System tab and are stored in the database.
HOME_ASSISTANT_URL=http://homeassistant.local:8123
HOME_ASSISTANT_TOKEN=your_home_assistant_long_lived_token
```
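The Gitea URL normalization and database precedence described in the comments above can be sketched as two small helpers. The function names are hypothetical and the selection logic is an assumption based on the documented behavior, not the actual config code:

```python
def normalize_gitea_url(value: str) -> str:
    """Expand a host-only GITEA_URL (e.g. "git.disi.dev") to a full
    https:// URL, leaving explicit http(s) URLs untouched."""
    value = value.strip().rstrip("/")
    if value.startswith(("http://", "https://")):
        return value
    return f"https://{value}"


def use_postgres(env: dict) -> bool:
    """PostgreSQL settings take precedence over the SQLite default;
    USE_SQLITE=false makes the choice explicit. Variable names follow
    the .env template above."""
    if env.get("USE_SQLITE", "").lower() == "false":
        return True
    return bool(env.get("POSTGRES_HOST"))
```

With this sketch, a deployment that sets `POSTGRES_HOST` gets PostgreSQL automatically, while a bare local checkout falls back to SQLite.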
### Build and Run

```bash
# Build Docker image
DOCKER_API_VERSION=1.43 docker build -t ai-software-factory -f Containerfile .

# Run with Docker Compose
docker-compose up -d
```
## Usage

- **Send a request via Telegram:**

  > Build an internal task management app for our operations team. It should support user authentication, task CRUD, notifications, and reporting. Prefer FastAPI with PostgreSQL and a simple web dashboard.

  The backend interprets free-form Telegram text with Ollama before generation. If `TELEGRAM_CHAT_ID` is set, the Telegram-trigger workflow only reacts to messages from that specific chat. If queueing is enabled from the dashboard System tab, Telegram prompts are stored in a durable queue and processed only when the configured Home Assistant battery and surplus thresholds are satisfied, unless you force processing via `/queue/process` or send `process_now=true`.

- **Monitor progress via the Web UI:**

  Open `http://yourserver:8000/` to see the dashboard and `http://yourserver:8000/api` for API metadata.

- **Review PRs in Gitea:**

  Check your Gitea repository for generated pull requests.
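The queue gating described above can be sketched as a small decision function. The settings class and field names are hypothetical stand-ins for the System-tab values stored in the database; the threshold semantics are an assumption:

```python
from dataclasses import dataclass


@dataclass
class QueueSettings:
    """Hypothetical mirror of the dashboard System-tab queue settings."""
    queue_enabled: bool
    battery_threshold: float  # minimum battery level (percent)
    surplus_threshold: float  # minimum power surplus (watts)


def should_process(settings: QueueSettings, battery: float, surplus: float,
                   process_now: bool = False) -> bool:
    """Process a queued prompt when queueing is disabled, when forced via
    process_now, or when both Home Assistant readings meet their thresholds."""
    if not settings.queue_enabled or process_now:
        return True
    return battery >= settings.battery_threshold and surplus >= settings.surplus_threshold
```

For example, with an 80% battery threshold and a 500 W surplus threshold, a prompt waits in the queue until both readings are satisfied, unless processing is forced.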
### Operational Notes

- If you deploy the container with PostgreSQL environment variables set, the service selects PostgreSQL automatically; SQLite remains the default for local/test usage.
- The dashboard Health tab shows separate application, n8n, Gitea, and Home Assistant/queue diagnostics, so misconfigured integrations are visible without checking container logs.
- The Health tab also exposes operator controls for the prompt queue, including manual batch processing, forced processing, and retrying failed items.
- The dashboard System tab stores Home Assistant entity ids, queue toggles, thresholds, and batch settings in the database, so the environment only needs `HOME_ASSISTANT_URL` and `HOME_ASSISTANT_TOKEN` for that integration.
- Projects that show `uncommitted`, `local_only`, or `pushed_no_pr` delivery warnings in the dashboard can be retried in place from the UI before resorting to purging orphan audit rows.
- Guardrail and system prompts are not environment-only in practice: the factory can persist DB-backed overrides for the editable LLM prompt set, expose them at `/llm/prompts`, and edit them from the dashboard System tab. Environment values still act as defaults and as the reset target.
## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Dashboard |
| `/api` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/generate/text` | POST | Interpret free-form text and generate software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
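A minimal Python client for the text endpoint might look like the sketch below. The JSON field name `text` and the response shape are assumptions; check `/api` for the actual schema:

```python
import json
import urllib.request

BASE_URL = "http://yourserver:8000"  # adjust to your deployment


def build_text_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for /generate/text.
    The "text" field name is an assumed payload schema."""
    payload = json.dumps({"text": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/generate/text",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def build_status_request(project_id: str) -> urllib.request.Request:
    """Build a GET request for /status/{project_id}."""
    return urllib.request.Request(f"{BASE_URL}/status/{project_id}", method="GET")


if __name__ == "__main__":
    req = build_text_request("Build a CLI todo app with tests")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

After submitting, poll the status endpoint with the project id from the response until the project completes, then review the PR in Gitea.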
## Development

### Makefile Targets

```bash
make help      # Show available targets
make setup     # Initialize repository
make fmt       # Format code
make lint      # Run linters
make test      # Run tests
make test-cov  # Run tests with coverage report
make release   # Create new release tag
make build     # Build Docker image
```

### Running in Development

```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
## Testing

Run the test suite:

```bash
# Run all tests
make test

# Run tests with coverage report
make test-cov

# Run a specific test file
pytest tests/test_main.py -v

# Run tests with verbose output and short tracebacks
pytest tests/ -v --tb=short
```

### Test Coverage

View the HTML coverage report:

```bash
make test-cov
open htmlcov/index.html
```
### Test Structure

```
tests/
├── conftest.py                  # Pytest fixtures and configuration
├── test_main.py                 # Tests for main.py FastAPI app
├── test_config.py               # Tests for config.py settings
├── test_git_manager.py          # Tests for git operations
├── test_ui_manager.py           # Tests for UI rendering
├── test_gitea.py                # Tests for Gitea API integration
├── test_telegram.py             # Tests for Telegram integration
├── test_orchestrator.py         # Tests for agent orchestrator
├── test_integration.py          # Integration tests for full workflow
├── test_config_integration.py   # Configuration integration tests
├── test_agents_integration.py   # Agent integration tests
├── test_edge_cases.py           # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure

```
ai-software-factory/
├── main.py               # FastAPI application
├── config.py             # Configuration settings
├── requirements.txt      # Python dependencies
├── Containerfile         # Docker build file
├── README.md             # This file
├── Makefile              # Development utilities
├── .env.example          # Environment template
├── .gitignore            # Git ignore rules
├── HISTORY.md            # Changelog
├── pytest.ini            # Pytest configuration
├── docker-compose.yml    # Multi-service orchestration
├── .env                  # Environment variables (not in git)
├── tests/                # Test suite
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_*.py         # Test files
│   └── pytest.ini
├── agents/
│   ├── __init__.py
│   ├── orchestrator.py   # Main agent orchestrator
│   ├── git_manager.py    # Git operations
│   ├── ui_manager.py     # Web UI management
│   ├── telegram.py       # Telegram integration
│   └── gitea.py          # Gitea API client
└── n8n/                  # n8n webhook configurations
```
## Security Notes

- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License

MIT License - see the LICENSE file for details.

## Contributing

See CONTRIBUTING.md for development guidelines.