51 Commits
0.0.1 ... 0.6.0

Author SHA1 Message Date
ebfcfb969a release: version 0.6.0 🚀 [All checks successful: Create Release 17s, deploy 42s] 2026-04-10 20:43:36 +02:00
56b05eb686 feat(api): expose database target in health refs NOISSUE 2026-04-10 20:39:36 +02:00
59a7e9787e fix(db): prefer postgres config in production refs NOISSUE 2026-04-10 20:37:31 +02:00
a357a307a7 release: version 0.5.0 🚀 [All checks successful: Create Release 22s, deploy 49s] 2026-04-10 20:27:26 +02:00
af4247e657 feat(dashboard): expose repository urls refs NOISSUE 2026-04-10 20:27:08 +02:00
227ad1ad6f feat(factory): serve dashboard at root and create project repos refs NOISSUE 2026-04-10 20:23:07 +02:00
82e53a6651 release: version 0.4.1 🚀 [All checks successful: Create Release 31s, deploy 1m2s] 2026-04-10 19:59:04 +02:00
e9dc1ede55 fix(ci): pin docker api version for release builds refs NOISSUE 2026-04-10 19:58:38 +02:00
6ee1c46826 release: version 0.4.0 🚀 [Some checks failed: Create Release successful 16s, deploy failing after 1m0s] 2026-04-10 19:40:17 +02:00
4f5c87bed9 chore(git): ignore local sqlite database refs NOISSUE 2026-04-10 19:39:39 +02:00
7180031d1f feat(factory): implement db-backed dashboard and workflow automation refs NOISSUE 2026-04-10 19:37:44 +02:00
de4feb61cd release: version 0.3.6 🚀 2026-04-05 01:00:05 +02:00
ddb9f2100b fix: rename gitea workflow, refs NOISSUE 2026-04-05 01:00:03 +02:00
034bb3eb63 release: version 0.3.5 🚀 2026-04-05 00:58:13 +02:00
06a50880b7 fix: some cleanup, refs NOISSUE 2026-04-05 00:58:09 +02:00
c66b57f9cb release: version 0.3.4 🚀 2026-04-05 00:19:31 +02:00
ba30f84f49 fix: fix database init, refs NOISSUE 2026-04-05 00:19:29 +02:00
81935daaf5 release: version 0.3.3 🚀 2026-04-04 23:53:04 +02:00
d2260ac797 fix: fix runtime errors, refs NOISSUE 2026-04-04 23:53:02 +02:00
ca6f39a3e8 release: version 0.3.2 🚀 2026-04-04 23:34:32 +02:00
5eb5bd426a fix: add back DB init endpoints, ref NOISSUE 2026-04-04 23:34:29 +02:00
08af3ed38d release: version 0.3.1 🚀 2026-04-04 23:23:06 +02:00
cc5060d317 fix: fix broken Docker build, refs NOISSUE 2026-04-04 23:22:54 +02:00
c51e51c9c2 release: version 0.3.0 🚀 2026-04-04 23:16:01 +02:00
f0ec9169c4 feat: dashboard via NiceGUI, refs NOISSUE 2026-04-04 23:15:55 +02:00
9615c50ccb release: version 0.2.2 🚀 2026-04-04 21:21:55 +02:00
9fcf2e2d1a fix: add missing jijna2 reference, refs NOISSUE 2026-04-04 21:21:43 +02:00
67df87072d release: version 0.2.1 🚀 2026-04-04 21:14:47 +02:00
ef249dfbe6 fix: make dashbaord work, refs NOISSUE 2026-04-04 21:14:38 +02:00
8bbbf6b9ac release: version 0.2.0 🚀 2026-04-04 20:58:10 +02:00
7f12034bff feat: Add Python-native dashboard and main.py cleanup, refs NOISSUE 2026-04-04 20:58:07 +02:00
4430348168 release: version 0.1.8 🚀 2026-04-04 20:41:50 +02:00
578be7b6f4 fix: broken python module references, refs NOISSUE 2026-04-04 20:41:39 +02:00
dbcd3fba91 release: version 0.1.7 🚀 2026-04-04 20:35:04 +02:00
0eb0bc0d41 fix: more bugfixes, refs NOISSUE 2026-04-04 20:34:59 +02:00
a73644b1da release: version 0.1.6 🚀 2026-04-04 20:29:09 +02:00
4c7a089753 fix: proper containerfile, refs NOISSUE 2026-04-04 20:29:07 +02:00
4d70a98902 chore: update Containerfile to start the app instead of hello world refs NOISSUE 2026-04-04 20:25:31 +02:00
f65f0b3603 release: version 0.1.5 🚀 2026-04-04 20:19:48 +02:00
fec96cd049 fix: bugfix in version generation, refs NOISSUE 2026-04-04 20:19:44 +02:00
25b180a2f3 feat(ai-software-factory): add n8n setup agent and enhance orchestration refs NOISSUE 2026-04-04 20:13:40 +02:00
45bcbfe80d release: version 0.1.4 🚀 2026-04-02 02:09:40 +02:00
d82b811e55 fix: fix container build, refs NOISSUE 2026-04-02 02:09:35 +02:00
b10c34f3fc release: version 0.1.3 🚀 2026-04-02 02:04:42 +02:00
f7b8925881 fix: fix version increment logic, refs NOISSUE 2026-04-02 02:04:39 +02:00
78c8bd68cc release: version 0.1.2 🚀 2026-04-02 02:03:23 +02:00
f17e241871 fix: test version increment logic, refs NOISSUE 2026-04-02 02:03:21 +02:00
55c5fca784 release: version 0.1.1 🚀 2026-04-02 01:58:17 +02:00
aa0ca2cb7b fix: broken CI build, refs NOISSUE 2026-04-02 01:58:13 +02:00
e824475872 feat: initial release, refs NOISSUE 2026-04-02 01:43:16 +02:00
simon
0b1384279d Ready to clone and code. 2026-03-14 12:58:13 +00:00
57 changed files with 5998 additions and 115 deletions


@@ -46,7 +46,7 @@ create_file() {
}
get_commit_range() {
-rm $TEMP_FILE_PATH/messages.txt
+rm -f $TEMP_FILE_PATH/messages.txt
if [[ $LAST_TAG =~ $PATTERN ]]; then
create_file true
else
@@ -86,8 +86,8 @@ start() {
echo "New version: $new_version"
gitchangelog | grep -v "[rR]elease:" > HISTORY.md
-echo $new_version > project_name/VERSION
-git add project_name/VERSION HISTORY.md
+echo $new_version > ai_software_factory/VERSION
+git add ai_software_factory/VERSION HISTORY.md
git commit -m "release: version $new_version 🚀"
echo "creating git tag : $new_version"
git tag $new_version
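The `generate-version.sh` script patched above computes the tags seen in the commit list. A minimal Python sketch of the apparent increment rule, inferred from the history (a hypothetical helper, not the repo's actual script): a `feat` commit bumps the minor version, anything else bumps the patch.

```python
import re


def next_version(last_tag, messages):
    """Bump a semver tag: any 'feat' commit since the last tag bumps
    the minor version; otherwise only the patch version is bumped."""
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", last_tag)
    if not match:
        raise ValueError(f"not a semver tag: {last_tag!r}")
    major, minor, patch = map(int, match.groups())
    if any(m.startswith("feat") for m in messages):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

This matches the history above, e.g. 0.4.0 plus a `fix(ci)` commit yields 0.4.1, while 0.5.0 plus a `feat(api)` commit yields 0.6.0.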


@@ -1,38 +0,0 @@
#!/usr/bin/env bash
while getopts a:n:u:d: flag
do
case "${flag}" in
a) author=${OPTARG};;
n) name=${OPTARG};;
u) urlname=${OPTARG};;
d) description=${OPTARG};;
esac
done
echo "Author: $author";
echo "Project Name: $name";
echo "Project URL name: $urlname";
echo "Description: $description";
echo "Renaming project..."
original_author="author_name"
original_name="project_name"
original_urlname="project_urlname"
original_description="project_description"
# for filename in $(find . -name "*.*")
for filename in $(git ls-files)
do
sed -i "s/$original_author/$author/g" $filename
sed -i "s/$original_name/$name/g" $filename
sed -i "s/$original_urlname/$urlname/g" $filename
sed -i "s/$original_description/$description/g" $filename
echo "Renamed $filename"
done
mv project_name $name
# This command runs only once on GHA!
rm -rf .gitea/template.yml
rm -rf project_name
rm -rf project_name.Tests


@@ -1 +0,0 @@
author: rochacbruno


@@ -4,6 +4,7 @@ permissions:
env:
SKIP_MAKE_SETUP_CHECK: 'true'
DOCKER_API_VERSION: '1.43'
on:
push:
@@ -41,7 +42,7 @@ jobs:
- name: Check version match
run: |
REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')
-if [ "$(cat project_name/VERSION)" = "${GITHUB_REF_NAME}" ] ; then
+if [ "$(cat ai_software_factory/VERSION)" = "${GITHUB_REF_NAME}" ] ; then
echo "Version matches successfully!"
else
echo "Version must match!"
@@ -49,13 +50,17 @@ jobs:
fi
- name: Login to Gitea container registry
uses: docker/login-action@v3
env:
DOCKER_API_VERSION: ${{ env.DOCKER_API_VERSION }}
with:
username: gitearobot
password: ${{ secrets.PACKAGE_GITEA_PAT }}
registry: git.disi.dev
- name: Build and publish
env:
DOCKER_API_VERSION: ${{ env.DOCKER_API_VERSION }}
run: |
REPOSITORY_OWNER=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $1}' | tr '[:upper:]' '[:lower:]')
REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')
-docker build -t "git.disi.dev/$REPOSITORY_OWNER/project_name:$(cat project_name/VERSION)" -f Containerfile ./
-docker push "git.disi.dev/$REPOSITORY_OWNER/project_name:$(cat project_name/VERSION)"
+docker build -t "git.disi.dev/$REPOSITORY_OWNER/ai_software_factory:$(cat ai_software_factory/VERSION)" -f Containerfile ./
+docker push "git.disi.dev/$REPOSITORY_OWNER/ai_software_factory:$(cat ai_software_factory/VERSION)"


@@ -1,48 +0,0 @@
name: Rename the project from template
on: [push]
permissions: write-all
jobs:
rename-project:
if: ${{ !endsWith (gitea.repository, 'Templates/Docker_Image') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# by default, it uses a depth of 1
# this fetches all history so that we can read each commit
fetch-depth: 0
ref: ${{ gitea.head_ref }}
- run: echo "REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')" >> $GITHUB_ENV
shell: bash
- run: echo "REPOSITORY_URLNAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}')" >> $GITHUB_ENV
shell: bash
- run: echo "REPOSITORY_OWNER=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $1}')" >> $GITHUB_ENV
shell: bash
- name: Is this still a template
id: is_template
run: echo "::set-output name=is_template::$(ls .gitea/template.yml &> /dev/null && echo true || echo false)"
- name: Rename the project
if: steps.is_template.outputs.is_template == 'true'
run: |
echo "Renaming the project with -a(author) ${{ env.REPOSITORY_OWNER }} -n(name) ${{ env.REPOSITORY_NAME }} -u(urlname) ${{ env.REPOSITORY_URLNAME }}"
.gitea/rename_project.sh -a ${{ env.REPOSITORY_OWNER }} -n ${{ env.REPOSITORY_NAME }} -u ${{ env.REPOSITORY_URLNAME }} -d "Awesome ${{ env.REPOSITORY_NAME }} created by ${{ env.REPOSITORY_OWNER }}"
- name: Remove renaming workflow
if: steps.is_template.outputs.is_template == 'true'
run: |
rm .gitea/workflows/rename_project.yml
rm .gitea/rename_project.sh
- uses: stefanzweifel/git-auto-commit-action@v4
with:
commit_message: "✅ Ready to clone and code."
# commit_options: '--amend --no-edit'
push_options: --force

.gitignore (new file, +2 lines)

@@ -0,0 +1,2 @@
sqlite.db
.nicegui/


@@ -1,15 +1,15 @@
# How to develop on this project
-project_name welcomes contributions from the community.
+ai_software_factory welcomes contributions from the community.
These instructions are for Linux-based systems (Linux, macOS, BSD, etc.)
## Setting up your own fork of this repo.
- On gitea interface click on `Fork` button.
-- Clone your fork of this repo. `git clone git@git.disi.dev:YOUR_GIT_USERNAME/project_urlname.git`
-- Enter the directory `cd project_urlname`
-- Add upstream repo `git remote add upstream https://git.disi.dev/author_name/project_urlname`
+- Clone your fork of this repo. `git clone git@git.disi.dev:YOUR_GIT_USERNAME/ai-test.git`
+- Enter the directory `cd ai-test`
+- Add upstream repo `git remote add upstream https://git.disi.dev/Projects/ai-test`
- initialize repository for use `make setup`
## Install the project in develop mode


@@ -1,6 +1,43 @@
FROM alpine
# AI Software Factory Dockerfile
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1
# Set work directory
WORKDIR /app
COPY ./project_name/* /app
CMD ["sh", "/app/hello_world.sh"]
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install dependencies
COPY ./ai_software_factory/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY ./ai_software_factory .
# Set up environment file if it exists, otherwise use .env.example
# RUN if [ -f .env ]; then \
# cat .env; \
# elif [ -f .env.example ]; then \
# cp .env.example .env; \
# fi
# Initialize database tables (use SQLite by default, can be overridden by DB_POOL_SIZE env var)
# RUN python database.py || true
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
# Run application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
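The `HEALTHCHECK` above curls `/health`, and commit 56b05eb686 makes that endpoint expose which database the service is targeting. A minimal sketch of what such a response body might look like, assuming the Postgres-over-SQLite selection rule described in the README (field names and the helper itself are assumptions, not the service's actual API):

```python
def health_payload(env):
    """Build a /health response body whose database field mirrors the
    documented precedence: PostgreSQL when configured, else SQLite."""
    uses_postgres = bool(env.get("POSTGRES_HOST"))
    return {
        "status": "ok",
        "database": "postgresql" if uses_postgres else "sqlite",
    }
```

With this shape, the Docker health check stays a plain HTTP 200 probe while operators can still see the active database target in the JSON body.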


@@ -4,6 +4,233 @@ Changelog
(unreleased)
------------
- Feat(api): expose database target in health refs NOISSUE. [Simon
Diesenreiter]
- Fix(db): prefer postgres config in production refs NOISSUE. [Simon
Diesenreiter]
0.5.0 (2026-04-10)
------------------
- Feat(dashboard): expose repository urls refs NOISSUE. [Simon
Diesenreiter]
- Feat(factory): serve dashboard at root and create project repos refs
NOISSUE. [Simon Diesenreiter]
0.4.1 (2026-04-10)
------------------
- Fix(ci): pin docker api version for release builds refs NOISSUE.
[Simon Diesenreiter]
0.4.0 (2026-04-10)
------------------
- Chore(git): ignore local sqlite database refs NOISSUE. [Simon
Diesenreiter]
- Feat(factory): implement db-backed dashboard and workflow automation
refs NOISSUE. [Simon Diesenreiter]
0.3.6 (2026-04-04)
------------------
Fix
~~~
- Rename gitea workflow, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.5 (2026-04-04)
------------------
Fix
~~~
- Some cleanup, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.4 (2026-04-04)
------------------
Fix
~~~
- Fix database init, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.3 (2026-04-04)
------------------
Fix
~~~
- Fix runtime errors, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.2 (2026-04-04)
------------------
Fix
~~~
- Add back DB init endpoints, ref NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.1 (2026-04-04)
------------------
Fix
~~~
- Fix broken Docker build, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.3.0 (2026-04-04)
------------------
- Feat: dashboard via NiceGUI, refs NOISSUE. [Simon Diesenreiter]
0.2.2 (2026-04-04)
------------------
Fix
~~~
- Add missing jijna2 reference, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.2.1 (2026-04-04)
------------------
Fix
~~~
- Make dashbaord work, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.2.0 (2026-04-04)
------------------
- Feat: Add Python-native dashboard and main.py cleanup, refs NOISSUE.
[Simon Diesenreiter]
0.1.8 (2026-04-04)
------------------
Fix
~~~
- Broken python module references, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.7 (2026-04-04)
------------------
Fix
~~~
- More bugfixes, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.6 (2026-04-04)
------------------
Fix
~~~
- Proper containerfile, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
- Chore: update Containerfile to start the app instead of hello world
refs NOISSUE. [Simon Diesenreiter]
0.1.5 (2026-04-04)
------------------
Fix
~~~
- Bugfix in version generation, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
- Feat(ai-software-factory): add n8n setup agent and enhance
orchestration refs NOISSUE. [Simon Diesenreiter]
0.1.4 (2026-04-02)
------------------
Fix
~~~
- Fix container build, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.3 (2026-04-02)
------------------
Fix
~~~
- Fix version increment logic, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.2 (2026-04-02)
------------------
Fix
~~~
- Test version increment logic, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.1 (2026-04-01)
------------------
Fix
~~~
- Broken CI build, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.0 (2026-04-01)
------------------
- Feat: initial release, refs NOISSUE. [Simon Diesenreiter]
- ✅ Ready to clone and code. [simon]
0.0.1 (2026-03-14)
------------------
Fix
~~~


@@ -1,5 +1,7 @@
.ONESHELL:
DOCKER_API_VERSION ?= 1.43
.PHONY: issetup
issetup:
@[ -f .git/hooks/commit-msg ] || [ -z ${SKIP_MAKE_SETUP_CHECK+x} ] || (echo "You must run 'make setup' first to initialize the repo!" && exit 1)
@@ -17,26 +19,34 @@ help: ## Show the help.
.PHONY: fmt
fmt: issetup ## Format code using black & isort.
-$(ENV_PREFIX)isort project_name/
-$(ENV_PREFIX)black -l 79 project_name/
+$(ENV_PREFIX)isort ai-software-factory/
+$(ENV_PREFIX)black -l 79 ai-software-factory/
$(ENV_PREFIX)black -l 79 tests/
.PHONY: test
test: issetup ## Run tests with pytest.
$(ENV_PREFIX)pytest ai-software-factory/tests/ -v --tb=short
.PHONY: test-cov
test-cov: issetup ## Run tests with coverage report.
$(ENV_PREFIX)pytest ai-software-factory/tests/ -v --tb=short --cov=ai-software-factory --cov-report=html --cov-report=term-missing
.PHONY: lint
lint: issetup ## Run pep8, black, mypy linters.
-$(ENV_PREFIX)flake8 project_name/
-$(ENV_PREFIX)black -l 79 --check project_name/
+$(ENV_PREFIX)flake8 ai-software-factory/
+$(ENV_PREFIX)black -l 79 --check ai-software-factory/
$(ENV_PREFIX)black -l 79 --check tests/
-$(ENV_PREFIX)mypy --ignore-missing-imports project_name/
+$(ENV_PREFIX)mypy --ignore-missing-imports ai-software-factory/
.PHONY: release
release: issetup ## Create a new tag for release.
@./.gitea/conventional_commits/generate-version.sh
.PHONY: build
-build: issetup ## Create a new tag for release.
-@docker build -t project_name:$(cat project_name/VERSION) -f Containerfile .
+build: issetup ## Create a new tag for release.
+@DOCKER_API_VERSION=$(DOCKER_API_VERSION) docker build -t ai-software-factory:$(cat ai_software_factory/VERSION) -f Containerfile .
# This project has been generated from rochacbruno/python-project-template
# __author__ = 'rochacbruno'
#igest__ = 'rochacbruno'
# __repo__ = https://github.com/rochacbruno/python-project-template
# __sponsor__ = https://github.com/sponsors/rochacbruno/

README.md (225 lines changed)

@@ -1,13 +1,228 @@
-# project_name
+# AI Software Factory
-Project description goes here.
+Automated software generation service powered by Ollama LLM. This service allows users to specify via Telegram what kind of software they would like, and an agent hosted in Ollama will create it iteratively, testing it while building out the source code and committing to gitea.
-## Usage
+## Features
- **Telegram Integration**: Receive software requests via Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **Git Integration**: Creates a dedicated Gitea repository per generated project inside your organization
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Beautiful dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with LLMs via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture
```
┌─────────────┐ ┌──────────────┐ ┌──────────┐ ┌─────────┐
│ Telegram │────▶│ n8n Webhook│────▶│ FastAPI │────▶│ Ollama │
└─────────────┘ └──────────────┘ └──────────┘ └─────────┘
┌──────────────┐
│ Git/Gitea │
└──────────────┘
```
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Ollama running locally or on same network
- Gitea instance with API token
- n8n instance for Telegram webhook
### Configuration
Create a `.env` file in the project root:
```bash
$ docker build -t <tagname> -f Containerfile .
# Server
HOST=0.0.0.0
PORT=8000
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
# Optional legacy fixed-repository mode. Leave empty to create one repo per project.
GITEA_REPO=
# Database
# In production, provide PostgreSQL settings. They take precedence over the SQLite default.
# Setting USE_SQLITE=false is still supported if you want to make the choice explicit.
POSTGRES_HOST=postgres.yourserver.com
POSTGRES_PORT=5432
POSTGRES_USER=ai_software_factory
POSTGRES_PASSWORD=change-me
POSTGRES_DB=ai_software_factory
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
```
### Build and Run
```bash
# Build Docker image
DOCKER_API_VERSION=1.43 docker build -t ai-software-factory -f Containerfile .
# Run with Docker Compose
docker-compose up -d
```
### Usage
1. **Send a request via Telegram:**
```
Name: My Awesome App
Description: A web application for managing tasks
Features: user authentication, task CRUD, notifications
```
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000/` to see the dashboard and `http://yourserver:8000/api` for API metadata
3. **Review PRs in Gitea:**
Check your gitea repository for generated PRs
If you deploy the container with PostgreSQL environment variables set, the service now selects PostgreSQL automatically even though SQLite remains the default for local/test usage.
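A minimal sketch of that precedence rule, assuming the variable names from the configuration block above (this is an illustration of the documented behavior, not the service's actual config code):

```python
def database_url(env):
    """Pick the connection URL: PostgreSQL settings, when present,
    take precedence over the SQLite default used for local/test runs."""
    if env.get("POSTGRES_HOST"):
        user = env.get("POSTGRES_USER", "ai_software_factory")
        password = env.get("POSTGRES_PASSWORD", "")
        host = env["POSTGRES_HOST"]
        port = env.get("POSTGRES_PORT", "5432")
        name = env.get("POSTGRES_DB", "ai_software_factory")
        return f"postgresql://{user}:{password}@{host}:{port}/{name}"
    return "sqlite:///sqlite.db"
```

Leaving all `POSTGRES_*` variables unset keeps the SQLite fallback, which matches the local/test default described above.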
## API Endpoints
| Endpoint | Method | Description |
|------|------|-------|
| `/` | GET | Dashboard |
| `/api` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
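The endpoint table above can be exercised from any HTTP client; a small hypothetical helper for assembling the URLs (the route paths come from the table, the helper itself is an assumption):

```python
def endpoint(base, name, project_id=None):
    """Build a full URL for one of the documented endpoints."""
    if name == "status":
        # /status/{project_id} is the only parameterized route.
        return f"{base.rstrip('/')}/status/{project_id}"
    paths = {
        "dashboard": "/",
        "api": "/api",
        "health": "/health",
        "generate": "/generate",
        "projects": "/projects",
    }
    return base.rstrip("/") + paths[name]
```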
## Development
Read the [CONTRIBUTING.md](CONTRIBUTING.md) file.
### Makefile Targets
```bash
make help # Show available targets
make setup # Initialize repository
make fmt # Format code
make lint # Run linters
make test # Run tests
make test-cov # Run tests with coverage report
make release # Create new release tag
make build # Build Docker image
```
### Running in Development
```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run tests with coverage report
make test-cov
# Run specific test file
pytest tests/test_main.py -v
# Run tests with verbose output
pytest tests/ -v --tb=short
```
### Test Coverage
View HTML coverage report:
```bash
make test-cov
open htmlcov/index.html
```
### Test Structure
```
tests/
├── conftest.py # Pytest fixtures and configuration
├── test_main.py # Tests for main.py FastAPI app
├── test_config.py # Tests for config.py settings
├── test_git_manager.py # Tests for git operations
├── test_ui_manager.py # Tests for UI rendering
├── test_gitea.py # Tests for Gitea API integration
├── test_telegram.py # Tests for Telegram integration
├── test_orchestrator.py # Tests for agent orchestrator
├── test_integration.py # Integration tests for full workflow
├── test_config_integration.py # Configuration integration tests
├── test_agents_integration.py # Agent integration tests
├── test_edge_cases.py # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure
```
ai-software-factory/
├── main.py # FastAPI application
├── config.py # Configuration settings
├── requirements.txt # Python dependencies
├── Containerfile # Docker build file
├── README.md # This file
├── Makefile # Development utilities
├── .env.example # Environment template
├── .gitignore # Git ignore rules
├── HISTORY.md # Changelog
├── pytest.ini # Pytest configuration
├── docker-compose.yml # Multi-service orchestration
├── .env # Environment variables (not in git)
├── tests/ # Test suite
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_*.py # Test files
│ └── pytest.ini
├── agents/
│ ├── __init__.py
│ ├── orchestrator.py # Main agent orchestrator
│ ├── git_manager.py # Git operations
│ ├── ui_manager.py # Web UI management
│ ├── telegram.py # Telegram integration
│ └── gitea.py # Gitea API client
└── n8n/ # n8n webhook configurations
```
## Security Notes
- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License
MIT License - See LICENSE file for details
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines.


@@ -0,0 +1,45 @@
# AI Software Factory Environment Variables
# Server
HOST=0.0.0.0
PORT=8000
LOG_LEVEL=INFO
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
# Configure Gitea API for your organization
# GITEA_URL can be left empty to use GITEA_ORGANIZATION instead of GITEA_OWNER
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=your_organization_name
GITEA_REPO= (optional legacy fixed repository mode; leave empty to create one repo per project)
# n8n
# n8n webhook for Telegram integration
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# n8n API for automatic webhook configuration
N8N_API_URL=http://n8n.yourserver.com
N8N_USER=n8n_admin
N8N_PASSWORD=your_secure_password
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# PostgreSQL
# In production, provide PostgreSQL settings below. They now take precedence over the SQLite default.
# You can also set USE_SQLITE=false explicitly if you want the intent to be obvious.
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=ai_test
POSTGRES_PASSWORD=your_secure_password
POSTGRES_DB=ai_test
# Database Connection Pool Settings
DB_POOL_SIZE=10
DB_MAX_OVERFLOW=20
DB_POOL_RECYCLE=3600
DB_POOL_TIMEOUT=30
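The `DB_POOL_*` variables above plausibly map onto SQLAlchemy's `create_engine()` pooling arguments; a sketch under that assumption, with defaults mirroring this `.env.example`:

```python
def pool_kwargs(env):
    """Translate DB_POOL_* environment variables into SQLAlchemy
    create_engine() keyword arguments (assumed mapping)."""
    return {
        "pool_size": int(env.get("DB_POOL_SIZE", 10)),
        "max_overflow": int(env.get("DB_MAX_OVERFLOW", 20)),
        "pool_recycle": int(env.get("DB_POOL_RECYCLE", 3600)),
        "pool_timeout": int(env.get("DB_POOL_TIMEOUT", 30)),
    }
```

These would then be passed straight through, e.g. `create_engine(url, **pool_kwargs(os.environ))`.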


@@ -0,0 +1,88 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# IDE
.idea/
.vscode/
*.swp
*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Project specific
.git/
.gitignore
.env
.env.local
.env.*.local
ai-software-factory/
n8n/
ui/
docs/
tests/
# Temporary files
*.tmp
*.temp
*.log


@@ -0,0 +1 @@
{"dark_mode":false}


@@ -0,0 +1 @@
{"dark_mode":false}


@@ -0,0 +1,73 @@
# Contributing to AI Software Factory
Thank you for your interest in contributing to the AI Software Factory project!
## Code of Conduct
Please note that we have a Code of Conduct that all contributors are expected to follow.
## How to Contribute
### Reporting Bugs
Before creating bug reports, please check existing issues as the bug may have already been reported and fixed.
When reporting a bug, include:
- A clear description of the bug
- Steps to reproduce the bug
- Expected behavior
- Actual behavior
- Screenshots if applicable
- Your environment details (OS, Python version, etc.)
### Suggesting Features
Feature suggestions are welcome! Please create an issue with:
- A clear title and description
- Use cases for the feature
- Any relevant links or references
### Pull Requests
1. Fork the repository
2. Create a new branch (`git checkout -b feature/feature-name`)
3. Make your changes
4. Commit your changes (`git commit -am 'Add some feature'`)
5. Push to the branch (`git push origin feature/feature-name`)
6. Create a new Pull Request
### Style Guide
- Follow the existing code style
- Add comments for complex logic
- Write tests for new features
- Update documentation as needed
## Development Setup
1. Clone the repository
2. Create a virtual environment
3. Install dependencies (`pip install -r requirements.txt`)
4. Run tests (`make test`)
5. Make your changes
6. Run tests again to ensure nothing is broken
## Commit Messages
Follow the conventional commits format:
```
feat: add new feature
fix: fix bug
docs: update documentation
style: format code
refactor: refactor code
test: add tests
chore: update dependencies
```
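The format above can be checked mechanically, e.g. in a `commit-msg` hook. A small sketch using the types listed (the regex and helper are illustrative, not the repo's actual hook):

```python
import re

# type, optional (scope), then ": " and a subject line
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9_-]+\))?: .+"
)


def is_conventional(message):
    """Return True if the commit subject follows the format above."""
    return COMMIT_RE.match(message) is not None
```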
## Questions?
Feel free to open an issue or discussion for any questions.


@@ -0,0 +1,41 @@
Changelog
=========
## [0.0.1] - 2026-03-14
### Added
- Initial commit with AI Software Factory service
- FastAPI backend for software generation
- Telegram integration via n8n webhook
- Ollama LLM integration for code generation
- Gitea API integration for commits and PRs
- Web UI dashboard for monitoring progress
- Docker and docker-compose configuration for Unraid
- Environment configuration templates
- Makefile with development utilities
- PostgreSQL integration with connection pooling
- Comprehensive audit trail functionality
- User action tracking
- System log monitoring
- Database initialization and migration support
- Full test suite with pytest coverage
### Features
- Automated software generation from Telegram requests
- Iterative code generation with Ollama
- Git commit automation
- Pull request creation for user review
- Real-time progress monitoring via web UI
- n8n workflow integration
- Complete audit trail for compliance and debugging
- Connection pooling for database efficiency
- Health check endpoints
- Persistent volumes for git repos and n8n data
### Infrastructure
- Alpine-based Docker image
- GPU support for Ollama
- Persistent volumes for git repos and n8n data
- Health check endpoints
- PostgreSQL with connection pooling
- Docker Compose for multi-service orchestration


@@ -0,0 +1,28 @@
.PHONY: help run-api run-frontend run-tests init-db clean
help:
@echo "Available targets:"
@echo " make run-api - Run FastAPI app with NiceGUI frontend (default)"
@echo " make run-tests - Run pytest tests"
@echo " make init-db - Initialize database"
@echo " make clean - Remove container volumes"
@echo " make rebuild - Rebuild and run container"
run-api:
@echo "Starting FastAPI app with NiceGUI frontend..."
@bash start.sh dev
run-frontend:
@echo "NiceGUI is now integrated with FastAPI - use 'make run-api' to start everything together"
run-tests:
pytest -v
init-db:
@python -c "from main import app; from database import init_db; init_db()"
clean:
@echo "Cleaning up..."
@docker-compose down -v
rebuild: clean run-api


@@ -0,0 +1,215 @@
# AI Software Factory
Automated software generation service powered by Ollama LLM. This service allows users to specify via Telegram what kind of software they would like, and an agent hosted in Ollama will create it iteratively, testing it while building out the source code and committing to gitea.
## Features
- **Telegram Integration**: Receive software requests via Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **Git Integration**: Automatically commits code to gitea
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Beautiful dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with LLMs via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture
```
┌─────────────┐ ┌──────────────┐ ┌──────────┐ ┌─────────┐
│ Telegram │────▶│ n8n Webhook│────▶│ FastAPI │────▶│ Ollama │
└─────────────┘ └──────────────┘ └──────────┘ └─────────┘
┌──────────────┐
│ Git/Gitea │
└──────────────┘
```
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Ollama running locally or on same network
- Gitea instance with API token
- n8n instance for Telegram webhook
### Configuration
Create a `.env` file in the project root:
```bash
# Server
HOST=0.0.0.0
PORT=8000
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=ai-software-factory
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
```
### Build and Run
```bash
# Build Docker image
docker build -t ai-software-factory -f Containerfile .
# Run with Docker Compose
docker-compose up -d
```
### Usage
1. **Send a request via Telegram:**
```
Name: My Awesome App
Description: A web application for managing tasks
Features: user authentication, task CRUD, notifications
```
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000` to see real-time progress
3. **Review PRs in Gitea:**
Check your gitea repository for generated PRs
## API Endpoints
| Endpoint | Method | Description |
|------|------|-------|
| `/` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
## Development
### Makefile Targets
```bash
make help # Show available targets
make setup # Initialize repository
make fmt # Format code
make lint # Run linters
make test # Run tests
make test-cov # Run tests with coverage report
make release # Create new release tag
make build # Build Docker image
```
### Running in Development
```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run tests with coverage report
make test-cov
# Run specific test file
pytest tests/test_main.py -v
# Run tests with verbose output
pytest tests/ -v --tb=short
```
### Test Coverage
View HTML coverage report:
```bash
make test-cov
open htmlcov/index.html
```
### Test Structure
```
tests/
├── conftest.py # Pytest fixtures and configuration
├── test_main.py # Tests for main.py FastAPI app
├── test_config.py # Tests for config.py settings
├── test_git_manager.py # Tests for git operations
├── test_ui_manager.py # Tests for UI rendering
├── test_gitea.py # Tests for Gitea API integration
├── test_telegram.py # Tests for Telegram integration
├── test_orchestrator.py # Tests for agent orchestrator
├── test_integration.py # Integration tests for full workflow
├── test_config_integration.py # Configuration integration tests
├── test_agents_integration.py # Agent integration tests
├── test_edge_cases.py # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure
```
ai-software-factory/
├── main.py # FastAPI application
├── config.py # Configuration settings
├── requirements.txt # Python dependencies
├── Containerfile # Docker build file
├── README.md # This file
├── Makefile # Development utilities
├── .env.example # Environment template
├── .gitignore # Git ignore rules
├── HISTORY.md # Changelog
├── pytest.ini # Pytest configuration
├── docker-compose.yml # Multi-service orchestration
├── .env # Environment variables (not in git)
├── tests/ # Test suite
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_*.py # Test files
│ └── pytest.ini
├── agents/
│ ├── __init__.py
│ ├── orchestrator.py # Main agent orchestrator
│ ├── git_manager.py # Git operations
│ ├── ui_manager.py # Web UI management
│ ├── telegram.py # Telegram integration
│ └── gitea.py # Gitea API client
└── n8n/ # n8n webhook configurations
```
## Security Notes
- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License
MIT License. See the LICENSE file for details.
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines.


@@ -0,0 +1 @@
0.6.0


@@ -0,0 +1,3 @@
"""AI Software Factory - Automated software generation service."""
__version__ = "0.0.1"


@@ -0,0 +1,17 @@
"""AI Software Factory agents."""
from .orchestrator import AgentOrchestrator
from .git_manager import GitManager
from .ui_manager import UIManager
from .telegram import TelegramHandler
from .gitea import GiteaAPI
from .database_manager import DatabaseManager
__all__ = [
"AgentOrchestrator",
"GitManager",
"UIManager",
"TelegramHandler",
"GiteaAPI",
"DatabaseManager"
]


@@ -0,0 +1,882 @@
"""Database manager for audit logging."""
from sqlalchemy.orm import Session
from sqlalchemy import text
try:
from ..config import settings
from ..models import (
AuditTrail,
ProjectHistory,
ProjectLog,
ProjectStatus,
PromptCodeLink,
PullRequest,
PullRequestData,
SystemLog,
UISnapshot,
UserAction,
)
except ImportError:
from config import settings
from models import (
AuditTrail,
ProjectHistory,
ProjectLog,
ProjectStatus,
PromptCodeLink,
PullRequest,
PullRequestData,
SystemLog,
UISnapshot,
UserAction,
)
from datetime import datetime
import json
class DatabaseMigrations:
    """Handles database migrations."""
    def __init__(self, db: Session):
        """Initialize migrations."""
        self.db = db
    def run(self) -> int:
        """Run migrations."""
        return 0
    def get_project_by_id(self, project_id: str) -> ProjectHistory | None:
        """Get project by ID."""
        return self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
    def get_all_projects(self) -> list[ProjectHistory]:
        """Get all projects."""
        return self.db.query(ProjectHistory).all()
    def get_project_logs(self, history_id: int, limit: int = 100) -> list[ProjectLog]:
        """Get project logs."""
        return self.db.query(ProjectLog).filter(ProjectLog.history_id == history_id).limit(limit).all()
    def get_system_logs(self, limit: int = 100) -> list[SystemLog]:
        """Get system logs."""
        return self.db.query(SystemLog).limit(limit).all()
    def log_system_event(self, component: str, level: str, message: str,
                         user_agent: str | None = None, ip_address: str | None = None) -> SystemLog:
        """Log a system event."""
        log = SystemLog(
            component=component,
            log_level=level,
            log_message=message,
            user_agent=user_agent,
            ip_address=ip_address
        )
        self.db.add(log)
        self.db.commit()
        self.db.refresh(log)
        return log
class DatabaseManager:
    """Manages database operations for audit logging and history tracking."""
    def __init__(self, db: Session):
        """Initialize database manager."""
        self.db = db
        self.migrations = DatabaseMigrations(self.db)
    @staticmethod
    def _normalize_metadata(metadata: object) -> dict:
        """Normalize JSON-like metadata stored in audit columns."""
        if metadata is None:
            return {}
        if isinstance(metadata, dict):
            return metadata
        if isinstance(metadata, str):
            try:
                parsed = json.loads(metadata)
                return parsed if isinstance(parsed, dict) else {"value": parsed}
            except json.JSONDecodeError:
                return {"value": metadata}
        return {"value": metadata}
    def log_project_start(self, project_id: str, project_name: str, description: str) -> ProjectHistory:
        """Log project start."""
        history = ProjectHistory(
            project_id=project_id,
            project_name=project_name,
            description=description,
            status=ProjectStatus.INITIALIZED.value,
            progress=0,
            message="Project initialization started"
        )
        self.db.add(history)
        self.db.commit()
        self.db.refresh(history)
        # Log the action in audit trail
        self._log_audit_trail(
            project_id=project_id,
            action="PROJECT_CREATED",
            actor="system",
            action_type="CREATE",
            details=f"Project {project_name} was created",
            message="Project created successfully"
        )
        return history
    def log_prompt_submission(
        self,
        history_id: int,
        project_id: str,
        prompt_text: str,
        features: list[str] | None = None,
        tech_stack: list[str] | None = None,
        actor_name: str = "api",
        actor_type: str = "user",
        source: str = "generate-endpoint",
    ) -> AuditTrail | None:
        """Persist the originating prompt so later code changes can be correlated to it."""
        history = self.db.query(ProjectHistory).filter(ProjectHistory.id == history_id).first()
        if not history:
            return None
        feature_list = features or []
        tech_list = tech_stack or []
        history.features = json.dumps(feature_list)
        history.current_step_description = "Prompt accepted"
        history.current_step_details = prompt_text
        self.db.commit()
        self.log_user_action(
            history_id=history_id,
            action_type="PROMPT_SUBMITTED",
            actor_type=actor_type,
            actor_name=actor_name,
            action_description="Submitted software generation request",
            action_data={
                "prompt": prompt_text,
                "features": feature_list,
                "tech_stack": tech_list,
                "source": source,
            },
        )
        audit = AuditTrail(
            project_id=project_id,
            action="PROMPT_RECEIVED",
            actor=actor_name,
            action_type="PROMPT",
            details=prompt_text,
            message="Software generation prompt received",
            metadata_json={
                "history_id": history_id,
                "prompt_text": prompt_text,
                "features": feature_list,
                "tech_stack": tech_list,
                "source": source,
            },
        )
        self.db.add(audit)
        self.db.commit()
        self.db.refresh(audit)
        return audit
    def log_progress_update(self, history_id: int, progress: int, step: str, message: str) -> None:
        """Log progress update."""
        history = self.db.query(ProjectHistory).filter(
            ProjectHistory.id == history_id
        ).first()
        if history:
            history.progress = progress
            history.current_step = step
            history.message = message
            self.db.commit()
            # Log the action
            self._log_action(history_id, "INFO", f"Progress: {progress}%, Step: {step} - {message}")
            # Log to audit trail
            self._log_audit_trail(
                project_id=history.project_id,
                action="PROGRESS_UPDATE",
                actor="agent",
                action_type="UPDATE",
                details=f"Progress updated to {progress}% - {step}",
                message=f"Progress: {progress}%, Step: {step} - {message}",
                metadata_json=json.dumps({"step": step, "message": message})
            )
    def log_project_complete(self, history_id: int, message: str) -> None:
        """Log project completion."""
        history = self.db.query(ProjectHistory).filter(
            ProjectHistory.id == history_id
        ).first()
        if history:
            history.status = ProjectStatus.COMPLETED.value
            history.progress = 100
            history.current_step = "Completed"
            history.completed_at = datetime.utcnow()
            history.message = message
            self.db.commit()
            # Log the action
            self._log_action(history_id, "INFO", f"Project completed: {message}")
            # Log to audit trail
            self._log_audit_trail(
                project_id=history.project_id,
                action="PROJECT_COMPLETED",
                actor="agent",
                action_type="COMPLETE",
                details=message,
                message=f"Project completed: {message}"
            )
    def log_error(self, history_id: int, error: str) -> None:
        """Log error."""
        history = self.db.query(ProjectHistory).filter(
            ProjectHistory.id == history_id
        ).first()
        if history:
            history.status = ProjectStatus.ERROR.value
            history.error_message = error
            self.db.commit()
            # Log the action
            self._log_action(history_id, "ERROR", f"Error occurred: {error}")
            # Log to audit trail
            self._log_audit_trail(
                project_id=history.project_id,
                action="ERROR_OCCURRED",
                actor="agent",
                action_type="ERROR",
                details=error,
                message=f"Error occurred: {error}"
            )
    def _log_action(self, history_id: int, level: str, message: str) -> None:
        """Log an action to the project log."""
        project_log = ProjectLog(
            history_id=history_id,
            log_level=level,
            log_message=message,
            timestamp=datetime.utcnow()
        )
        self.db.add(project_log)
        self.db.commit()
    def save_ui_snapshot(self, history_id: int, ui_data: dict) -> UISnapshot:
        """Save UI snapshot."""
        snapshot = UISnapshot(
            history_id=history_id,
            snapshot_data=json.dumps(ui_data),
            created_at=datetime.utcnow()
        )
        self.db.add(snapshot)
        self.db.commit()
        self.db.refresh(snapshot)
        return snapshot
    def save_pr_data(self, history_id: int, pr_data: dict) -> PullRequest:
        """Save PR data."""
        # Parse PR data
        pr_number = pr_data.get("pr_number", pr_data.get("id", 0))
        pr_title = pr_data.get("title", pr_data.get("pr_title", ""))
        pr_body = pr_data.get("body", pr_data.get("pr_body", ""))
        pr_state = pr_data.get("state", pr_data.get("pr_state", "open"))
        pr_url = pr_data.get("url", pr_data.get("pr_url", ""))
        pr = PullRequest(
            history_id=history_id,
            pr_number=pr_number,
            pr_title=pr_title,
            pr_body=pr_body,
            base=pr_data.get("base", "main"),
            user=pr_data.get("user", "system"),
            pr_url=pr_url,
            merged=False,
            pr_state=pr_state
        )
        self.db.add(pr)
        self.db.commit()
        self.db.refresh(pr)
        return pr
    def _get_latest_ui_snapshot_data(self, history_id: int) -> dict:
        """Return the latest stored UI snapshot payload for a project."""
        snapshot = self.db.query(UISnapshot).filter(
            UISnapshot.history_id == history_id
        ).order_by(UISnapshot.created_at.desc(), UISnapshot.id.desc()).first()
        if not snapshot:
            return {}
        return self._normalize_metadata(snapshot.snapshot_data)
    def _get_project_repository(self, history: ProjectHistory) -> dict | None:
        """Resolve repository metadata for a project."""
        snapshot_data = self._get_latest_ui_snapshot_data(history.id)
        repository = snapshot_data.get("repository")
        if isinstance(repository, dict) and any(repository.values()):
            return repository
        if settings.gitea_owner and settings.gitea_repo and settings.gitea_url:
            return {
                "owner": settings.gitea_owner,
                "name": settings.gitea_repo,
                "url": f"{settings.gitea_url.rstrip('/')}/{settings.gitea_owner}/{settings.gitea_repo}",
                "mode": "shared",
            }
        return None
    def get_project_by_id(self, project_id: str) -> ProjectHistory | None:
        """Get project by ID."""
        return self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
    def get_all_projects(self) -> list[ProjectHistory]:
        """Get all projects."""
        return self.db.query(ProjectHistory).all()
    def get_project_logs(self, history_id: int, limit: int = 100) -> list[ProjectLog]:
        """Get project logs."""
        return self.db.query(ProjectLog).filter(ProjectLog.history_id == history_id).limit(limit).all()
    def log_system_event(self, component: str, level: str, message: str,
                         user_agent: str | None = None, ip_address: str | None = None) -> SystemLog:
        """Log a system event."""
        log = SystemLog(
            component=component,
            log_level=level,
            log_message=message,
            user_agent=user_agent,
            ip_address=ip_address
        )
        self.db.add(log)
        self.db.commit()
        self.db.refresh(log)
        return log
    def _log_audit_trail(
        self,
        project_id: str,
        action: str,
        actor: str,
        action_type: str,
        details: str,
        message: str | None = None,
        **kwargs
    ) -> AuditTrail:
        """Log to the audit trail."""
        metadata_json = kwargs.get("metadata_json", kwargs.get("metadata", "{}"))
        audit = AuditTrail(
            project_id=project_id,
            action=action,
            actor=actor,
            action_type=action_type,
            details=details,
            message=message or details,
            metadata_json=metadata_json or "{}"
        )
        self.db.add(audit)
        self.db.commit()
        return audit
    def get_logs(self, project_id: str | None = None, level: str | None = None, limit: int = 100) -> list:
        """Get logs from the database."""
        query = self.db.query(ProjectLog)
        if project_id:
            query = query.filter(ProjectLog.history_id == project_id)
        if level:
            query = query.filter(ProjectLog.log_level == level)
        logs = query.order_by(ProjectLog.timestamp.desc()).limit(limit).all()
        return [
            {
                "id": log.id,
                "history_id": log.history_id,
                "level": log.log_level,
                "message": log.log_message,
                "timestamp": log.timestamp.isoformat() if log.timestamp else None
            }
            for log in logs
        ]
    def get_audit_trail(self, project_id: str | None = None, limit: int = 100) -> list:
        """Get audit trail entries."""
        query = self.db.query(AuditTrail)
        if project_id:
            query = query.filter(AuditTrail.project_id == project_id)
        audits = query.order_by(AuditTrail.created_at.desc()).limit(limit).all()
        return [
            {
                "id": audit.id,
                "project_id": audit.project_id,
                "action": audit.action,
                "actor": audit.actor,
                "action_type": audit.action_type,
                "details": audit.details,
                "metadata_json": self._normalize_metadata(audit.metadata_json),
                "timestamp": audit.created_at.isoformat() if audit.created_at else None
            }
            for audit in audits
        ]
    def get_all_audit_trail(self, limit: int = 100) -> list:
        """Get all audit trail entries."""
        audits = self.db.query(AuditTrail).order_by(AuditTrail.created_at.desc()).limit(limit).all()
        return [
            {
                "id": audit.id,
                "project_id": audit.project_id,
                "action": audit.action,
                "actor": audit.actor,
                "action_type": audit.action_type,
                "details": audit.details,
                "metadata_json": self._normalize_metadata(audit.metadata_json),
                "timestamp": audit.created_at.isoformat() if audit.created_at else None
            }
            for audit in audits
        ]
    def log_user_action(self, history_id: int, action_type: str, actor_type: str, actor_name: str,
                        action_description: str, action_data: dict | None = None) -> UserAction | None:
        """Log a user action."""
        history = self.db.query(ProjectHistory).filter(
            ProjectHistory.id == history_id
        ).first()
        if not history:
            return None
        user_action = UserAction(
            history_id=history_id,
            action_type=action_type,
            actor_type=actor_type,
            actor_name=actor_name,
            action_description=action_description,
            action_data=action_data or {},
            created_at=datetime.utcnow()
        )
        self.db.add(user_action)
        self.db.commit()
        self.db.refresh(user_action)
        return user_action
    def get_user_actions(self, history_id: int, limit: int = 100) -> list:
        """Get user actions for a history."""
        user_actions = self.db.query(UserAction).filter(
            UserAction.history_id == history_id
        ).order_by(UserAction.created_at.desc()).limit(limit).all()
        return [
            {
                "id": ua.id,
                "history_id": ua.history_id,
                "action_type": ua.action_type,
                "actor_type": ua.actor_type,
                "actor_name": ua.actor_name,
                "action_description": ua.action_description,
                "action_data": ua.action_data,
                "created_at": ua.created_at.isoformat() if ua.created_at else None
            }
            for ua in user_actions
        ]
    def get_system_logs(self, level: str | None = None, limit: int = 100) -> list:
        """Get system logs."""
        query = self.db.query(SystemLog)
        if level:
            query = query.filter(SystemLog.log_level == level)
        logs = query.order_by(SystemLog.created_at.desc()).limit(limit).all()
        return [
            {
                "id": log.id,
                "component": log.component,
                "level": log.log_level,
                "message": log.log_message,
                "timestamp": log.created_at.isoformat() if log.created_at else None
            }
            for log in logs
        ]
    def log_code_change(self, project_id: str, change_type: str, file_path: str,
                        actor: str, actor_type: str, details: str,
                        history_id: int | None = None, prompt_id: int | None = None,
                        diff_summary: str | None = None) -> AuditTrail:
        """Log a code change."""
        audit = AuditTrail(
            project_id=project_id,
            action="CODE_CHANGE",
            actor=actor,
            action_type=change_type,
            details=f"File {file_path} {change_type}",
            message=f"Code change: {file_path}",
            metadata_json={
                "file": file_path,
                "change_type": change_type,
                "actor": actor,
                "actor_type": actor_type,
                "history_id": history_id,
                "prompt_id": prompt_id,
                "details": details,
                "diff_summary": diff_summary,
            }
        )
        self.db.add(audit)
        self.db.commit()
        self.db.refresh(audit)
        if history_id is not None and prompt_id is not None:
            link = PromptCodeLink(
                history_id=history_id,
                project_id=project_id,
                prompt_audit_id=prompt_id,
                code_change_audit_id=audit.id,
                file_path=file_path,
                change_type=change_type,
            )
            self.db.add(link)
            self.db.commit()
        return audit
    def get_prompt_change_links(self, project_id: str | None = None, limit: int = 200) -> list[dict]:
        """Return stored prompt/code lineage rows."""
        query = self.db.query(PromptCodeLink)
        if project_id:
            query = query.filter(PromptCodeLink.project_id == project_id)
        links = query.order_by(PromptCodeLink.created_at.desc()).limit(limit).all()
        return [
            {
                "id": link.id,
                "history_id": link.history_id,
                "project_id": link.project_id,
                "prompt_audit_id": link.prompt_audit_id,
                "code_change_audit_id": link.code_change_audit_id,
                "file_path": link.file_path,
                "change_type": link.change_type,
                "created_at": link.created_at.isoformat() if link.created_at else None,
            }
            for link in links
        ]
    def _build_correlations_from_links(self, project_id: str | None = None, limit: int = 100) -> list[dict]:
        """Build prompt-change correlations from explicit lineage rows."""
        prompt_events = self.get_prompt_events(project_id=project_id, limit=limit)
        if not prompt_events:
            return []
        links = self.get_prompt_change_links(project_id=project_id, limit=limit * 10)
        if not links:
            return []
        prompt_map = {prompt["id"]: {**prompt, "changes": []} for prompt in prompt_events}
        change_map = {change["id"]: change for change in self.get_code_changes(project_id=project_id, limit=limit * 10)}
        for link in links:
            prompt = prompt_map.get(link["prompt_audit_id"])
            change = change_map.get(link["code_change_audit_id"])
            if prompt is None or change is None:
                continue
            prompt["changes"].append(
                {
                    "id": change["id"],
                    "file_path": link["file_path"] or change["file_path"],
                    "change_type": link["change_type"] or change["action_type"],
                    "details": change["details"],
                    "diff_summary": change["diff_summary"],
                    "timestamp": change["timestamp"],
                }
            )
        correlations = [
            {
                "project_id": prompt["project_id"],
                "prompt_id": prompt["id"],
                "prompt_text": prompt["prompt_text"],
                "features": prompt["features"],
                "tech_stack": prompt["tech_stack"],
                "timestamp": prompt["timestamp"],
                "changes": prompt["changes"],
            }
            for prompt in prompt_map.values()
        ]
        correlations.sort(key=lambda item: item["timestamp"] or "", reverse=True)
        return correlations[:limit]
    def _build_correlations_from_audit_fallback(self, project_id: str | None = None, limit: int = 100) -> list[dict]:
        """Fallback correlation builder for older rows without explicit lineage."""
        query = self.db.query(AuditTrail)
        if project_id:
            query = query.filter(AuditTrail.project_id == project_id)
        events = query.filter(
            AuditTrail.action.in_(["PROMPT_RECEIVED", "CODE_CHANGE"])
        ).order_by(AuditTrail.project_id.asc(), AuditTrail.created_at.asc(), AuditTrail.id.asc()).all()
        grouped: dict[str, list[AuditTrail]] = {}
        for event in events:
            grouped.setdefault(event.project_id or "", []).append(event)
        correlations: list[dict] = []
        for grouped_project_id, project_events in grouped.items():
            current_prompt: AuditTrail | None = None
            current_changes: list[AuditTrail] = []
            for event in project_events:
                if event.action == "PROMPT_RECEIVED":
                    if current_prompt is not None:
                        prompt_metadata = self._normalize_metadata(current_prompt.metadata_json)
                        correlations.append({
                            "project_id": grouped_project_id,
                            "prompt_id": current_prompt.id,
                            "prompt_text": prompt_metadata.get("prompt_text", current_prompt.details),
                            "features": prompt_metadata.get("features", []),
                            "tech_stack": prompt_metadata.get("tech_stack", []),
                            "timestamp": current_prompt.created_at.isoformat() if current_prompt.created_at else None,
                            "changes": [
                                {
                                    "id": change.id,
                                    "file_path": self._normalize_metadata(change.metadata_json).get("file"),
                                    "change_type": change.action_type,
                                    "details": self._normalize_metadata(change.metadata_json).get("details", change.details),
                                    "diff_summary": self._normalize_metadata(change.metadata_json).get("diff_summary"),
                                    "timestamp": change.created_at.isoformat() if change.created_at else None,
                                }
                                for change in current_changes
                            ],
                        })
                    current_prompt = event
                    current_changes = []
                elif event.action == "CODE_CHANGE" and current_prompt is not None:
                    current_changes.append(event)
            if current_prompt is not None:
                prompt_metadata = self._normalize_metadata(current_prompt.metadata_json)
                correlations.append({
                    "project_id": grouped_project_id,
                    "prompt_id": current_prompt.id,
                    "prompt_text": prompt_metadata.get("prompt_text", current_prompt.details),
                    "features": prompt_metadata.get("features", []),
                    "tech_stack": prompt_metadata.get("tech_stack", []),
                    "timestamp": current_prompt.created_at.isoformat() if current_prompt.created_at else None,
                    "changes": [
                        {
                            "id": change.id,
                            "file_path": self._normalize_metadata(change.metadata_json).get("file"),
                            "change_type": change.action_type,
                            "details": self._normalize_metadata(change.metadata_json).get("details", change.details),
                            "diff_summary": self._normalize_metadata(change.metadata_json).get("diff_summary"),
                            "timestamp": change.created_at.isoformat() if change.created_at else None,
                        }
                        for change in current_changes
                    ],
                })
        correlations.sort(key=lambda item: item["timestamp"] or "", reverse=True)
        return correlations[:limit]
    def log_commit(self, project_id: str, commit_message: str, actor: str,
                   actor_type: str = "agent") -> AuditTrail:
        """Log a git commit."""
        audit = AuditTrail(
            project_id=project_id,
            action="GIT_COMMIT",
            actor=actor,
            action_type="COMMIT",
            details=f"Commit: {commit_message}",
            message=f"Git commit: {commit_message}",
            metadata_json=json.dumps({"commit": commit_message, "actor": actor, "actor_type": actor_type})
        )
        self.db.add(audit)
        self.db.commit()
        return audit
    def get_project_audit_data(self, project_id: str) -> dict:
        """Get comprehensive audit data for a project."""
        history = self.db.query(ProjectHistory).filter(
            ProjectHistory.project_id == project_id
        ).first()
        if not history:
            return {
                "project": None,
                "logs": [],
                "actions": [],
                "audit_trail": [],
                "prompts": [],
                "code_changes": [],
                "prompt_change_correlations": [],
            }
        # Get logs
        logs = self.db.query(ProjectLog).filter(
            ProjectLog.history_id == history.id
        ).order_by(ProjectLog.timestamp.desc()).all()
        # Get user actions
        user_actions = self.db.query(UserAction).filter(
            UserAction.history_id == history.id
        ).order_by(UserAction.created_at.desc()).all()
        # Get audit trail entries
        audit_trails = self.db.query(AuditTrail).filter(
            AuditTrail.project_id == project_id
        ).order_by(AuditTrail.created_at.desc()).all()
        prompts = self.get_prompt_events(project_id=project_id)
        code_changes = self.get_code_changes(project_id=project_id)
        correlations = self.get_prompt_change_correlations(project_id=project_id)
        repository = self._get_project_repository(history)
        return {
            "project": {
                "id": history.id,
                "project_id": history.project_id,
                "project_name": history.project_name,
                "description": history.description,
                "status": history.status,
                "progress": history.progress,
                "message": history.message,
                "error_message": history.error_message,
                "current_step": history.current_step,
                "repository": repository,
                "completed_at": history.completed_at.isoformat() if history.completed_at else None,
                "created_at": history.started_at.isoformat() if history.started_at else None
            },
            "logs": [
                {
                    "id": log.id,
                    "level": log.log_level,
                    "message": log.log_message,
                    "timestamp": log.timestamp.isoformat() if log.timestamp else None
                }
                for log in logs
            ],
            "actions": [
                {
                    "id": ua.id,
                    "action_type": ua.action_type,
                    "actor_type": ua.actor_type,
                    "actor_name": ua.actor_name,
                    "action_description": ua.action_description,
                    "action_data": ua.action_data,
                    "created_at": ua.created_at.isoformat() if ua.created_at else None
                }
                for ua in user_actions
            ],
            "audit_trail": [
                {
                    "id": audit.id,
                    "action": audit.action,
                    "actor": audit.actor,
                    "action_type": audit.action_type,
                    "details": audit.details,
                    "metadata": self._normalize_metadata(audit.metadata_json),
                    "timestamp": audit.created_at.isoformat() if audit.created_at else None
                }
                for audit in audit_trails
            ],
            "prompts": prompts,
            "code_changes": code_changes,
            "prompt_change_correlations": correlations,
            "repository": repository,
        }
    def get_prompt_events(self, project_id: str | None = None, limit: int = 100) -> list[dict]:
        """Return prompt receipt events from the audit trail."""
        query = self.db.query(AuditTrail).filter(AuditTrail.action == "PROMPT_RECEIVED")
        if project_id:
            query = query.filter(AuditTrail.project_id == project_id)
        prompts = query.order_by(AuditTrail.created_at.desc()).limit(limit).all()
        return [
            {
                "id": prompt.id,
                "project_id": prompt.project_id,
                "actor": prompt.actor,
                "message": prompt.message,
                "prompt_text": self._normalize_metadata(prompt.metadata_json).get("prompt_text", prompt.details),
                "features": self._normalize_metadata(prompt.metadata_json).get("features", []),
                "tech_stack": self._normalize_metadata(prompt.metadata_json).get("tech_stack", []),
                "history_id": self._normalize_metadata(prompt.metadata_json).get("history_id"),
                "timestamp": prompt.created_at.isoformat() if prompt.created_at else None,
            }
            for prompt in prompts
        ]
    def get_code_changes(self, project_id: str | None = None, limit: int = 100) -> list[dict]:
        """Return code change events from the audit trail."""
        query = self.db.query(AuditTrail).filter(AuditTrail.action == "CODE_CHANGE")
        if project_id:
            query = query.filter(AuditTrail.project_id == project_id)
        changes = query.order_by(AuditTrail.created_at.desc()).limit(limit).all()
        return [
            {
                "id": change.id,
                "project_id": change.project_id,
                "action_type": change.action_type,
                "actor": change.actor,
                "details": change.details,
                "file_path": self._normalize_metadata(change.metadata_json).get("file"),
                "prompt_id": self._normalize_metadata(change.metadata_json).get("prompt_id"),
                "history_id": self._normalize_metadata(change.metadata_json).get("history_id"),
                "diff_summary": self._normalize_metadata(change.metadata_json).get("diff_summary"),
                "timestamp": change.created_at.isoformat() if change.created_at else None,
            }
            for change in changes
        ]
    def get_prompt_change_correlations(self, project_id: str | None = None, limit: int = 100) -> list[dict]:
        """Correlate prompts with the concrete code changes that followed them."""
        correlations = self._build_correlations_from_links(project_id=project_id, limit=limit)
        if correlations:
            return correlations
        return self._build_correlations_from_audit_fallback(project_id=project_id, limit=limit)
    def get_dashboard_snapshot(self, limit: int = 8) -> dict:
        """Return DB-backed dashboard data for the UI."""
        projects = self.db.query(ProjectHistory).order_by(ProjectHistory.updated_at.desc()).limit(limit).all()
        system_logs = self.db.query(SystemLog).order_by(SystemLog.created_at.desc()).limit(limit).all()
        return {
            "summary": {
                "total_projects": self.db.query(ProjectHistory).count(),
                "running_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.RUNNING.value).count(),
                "completed_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.COMPLETED.value).count(),
                "error_projects": self.db.query(ProjectHistory).filter(ProjectHistory.status == ProjectStatus.ERROR.value).count(),
                "prompt_events": self.db.query(AuditTrail).filter(AuditTrail.action == "PROMPT_RECEIVED").count(),
                "code_changes": self.db.query(AuditTrail).filter(AuditTrail.action == "CODE_CHANGE").count(),
            },
            "projects": [self.get_project_audit_data(project.project_id) for project in projects],
            "system_logs": [
                {
                    "id": log.id,
                    "component": log.component,
                    "level": log.log_level,
                    "message": log.log_message,
                    "timestamp": log.created_at.isoformat() if log.created_at else None,
                }
                for log in system_logs
            ],
            "lineage_links": self.get_prompt_change_links(limit=limit * 10),
            "correlations": self.get_prompt_change_correlations(limit=limit),
        }
    def cleanup_audit_trail(self) -> None:
        """Clear audit-related test data across all related tables."""
        self.db.query(PromptCodeLink).delete()
        self.db.query(PullRequest).delete()
        self.db.query(UISnapshot).delete()
        self.db.query(UserAction).delete()
        self.db.query(ProjectLog).delete()
        self.db.query(AuditTrail).delete()
        self.db.query(SystemLog).delete()
        self.db.query(ProjectHistory).delete()
        self.db.commit()


@@ -0,0 +1,89 @@
"""Git manager for project operations."""
import os
import subprocess
from pathlib import Path
from typing import Optional
try:
from ..config import settings
except ImportError:
from config import settings
class GitManager:
"""Manages git operations for the project."""
def __init__(self, project_id: str):
if not project_id:
raise ValueError("project_id cannot be empty or None")
self.project_id = project_id
project_path = Path(project_id)
if project_path.is_absolute() or len(project_path.parts) > 1:
resolved = project_path.expanduser().resolve()
else:
base_root = settings.projects_root
if base_root.name != "test-project":
base_root = base_root / "test-project"
resolved = (base_root / project_id).resolve()
self.project_dir = str(resolved)
def init_repo(self):
"""Initialize git repository."""
os.makedirs(self.project_dir, exist_ok=True)
os.chdir(self.project_dir)
subprocess.run(["git", "init"], check=True, capture_output=True)
def add_files(self, paths: list[str]):
"""Add files to git staging."""
subprocess.run(["git", "add"] + paths, check=True, capture_output=True)
def commit(self, message: str):
"""Create a git commit."""
subprocess.run(
["git", "commit", "-m", message],
check=True,
capture_output=True
)
def push(self, remote: str = "origin", branch: str = "main"):
"""Push changes to remote."""
subprocess.run(
["git", "push", "-u", remote, branch],
check=True,
capture_output=True
)
def create_branch(self, branch_name: str):
"""Create and switch to a new branch."""
subprocess.run(
["git", "checkout", "-b", branch_name],
check=True,
capture_output=True
)
def create_pr(
self,
title: str,
body: str,
base: str = "main",
head: Optional[str] = None
) -> dict:
"""Create a pull request via gitea API."""
# This would integrate with gitea API
# For now, return placeholder
return {
"title": title,
"body": body,
"base": base,
"head": head or f"ai-gen-{self.project_id}"
}
def get_status(self) -> str:
"""Get git status."""
result = subprocess.run(
["git", "status", "--porcelain"],
capture_output=True,
text=True
)
return result.stdout.strip()


@@ -0,0 +1,163 @@
"""Gitea API integration for repository and pull request operations."""
import os
class GiteaAPI:
"""Gitea API client for repository operations."""
def __init__(self, token: str, base_url: str, owner: str | None = None, repo: str | None = None):
self.token = token
self.base_url = base_url.rstrip("/")
self.owner = owner
self.repo = repo
self.headers = {
"Authorization": f"token {token}",
"Content-Type": "application/json",
}
def get_config(self) -> dict:
"""Load configuration from environment."""
base_url = os.getenv("GITEA_URL", "https://gitea.local")
token = os.getenv("GITEA_TOKEN", "")
owner = os.getenv("GITEA_OWNER", "ai-test")
repo = os.getenv("GITEA_REPO", "")
return {
"base_url": base_url.rstrip("/"),
"token": token,
"owner": owner,
"repo": repo,
"supports_project_repos": not bool(repo),
}
def get_auth_headers(self) -> dict:
"""Get authentication headers."""
return {
"Authorization": f"token {self.token}",
"Content-Type": "application/json",
}
def _api_url(self, path: str) -> str:
"""Build a Gitea API URL from a relative path."""
return f"{self.base_url}/api/v1/{path.lstrip('/')}"
async def _request(self, method: str, path: str, payload: dict | None = None) -> dict:
"""Perform a Gitea API request and normalize the response."""
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.request(
method,
self._api_url(path),
headers=self.get_auth_headers(),
json=payload,
) as resp:
if resp.status in (200, 201):
return await resp.json()
return {"error": await resp.text(), "status_code": resp.status}
except Exception as e:
return {"error": str(e)}
def build_project_repo_name(self, project_id: str, project_name: str | None = None) -> str:
"""Build a repository name for a generated project."""
preferred = (project_name or project_id or "project").strip().lower().replace(" ", "-")
sanitized = "".join(ch if ch.isalnum() or ch in {"-", "_"} else "-" for ch in preferred)
while "--" in sanitized:
sanitized = sanitized.replace("--", "-")
return sanitized.strip("-") or project_id
async def create_repo(
self,
repo_name: str,
owner: str | None = None,
description: str | None = None,
private: bool = False,
auto_init: bool = True,
) -> dict:
"""Create a repository inside the configured organization."""
_owner = owner or self.owner
if not _owner:
return {"error": "Owner or organization is required"}
payload = {
"name": repo_name,
"description": description or f"AI-generated project repository for {repo_name}",
"private": private,
"auto_init": auto_init,
"default_branch": "main",
}
result = await self._request("POST", f"orgs/{_owner}/repos", payload)
if result.get("status_code") == 409:
existing = await self.get_repo_info(owner=_owner, repo=repo_name)
if not existing.get("error"):
existing["status"] = "exists"
return existing
if not result.get("error"):
result.setdefault("status", "created")
return result
async def create_branch(self, branch: str, base: str = "main", owner: str | None = None, repo: str | None = None):
"""Create a new branch."""
_owner = owner or self.owner
_repo = repo or self.repo
return await self._request(
"POST",
f"repos/{_owner}/{_repo}/branches",
{"new_branch_name": branch, "old_ref_name": base},
)
async def create_pull_request(
self,
title: str,
body: str,
owner: str | None = None,
repo: str | None = None,
base: str = "main",
head: str | None = None,
) -> dict:
"""Create a pull request."""
_owner = owner or self.owner
_repo = repo or self.repo
payload = {
"title": title,
"body": body,
"base": base,
"head": head or f"{_owner}-{_repo}-ai-gen-{hash(title) % 10000}",
}
return await self._request("POST", f"repos/{_owner}/{_repo}/pulls", payload)
async def push_commit(
self,
branch: str,
files: list[dict],
message: str,
owner: str | None = None,
repo: str | None = None,
) -> dict:
"""Push files to a branch.
In production, this would use gitea's API or git push.
For now, this remains simulated.
"""
_owner = owner or self.owner
_repo = repo or self.repo
return {
"status": "simulated",
"branch": branch,
"message": message,
"files": files,
"owner": _owner,
"repo": _repo,
}
async def get_repo_info(self, owner: str | None = None, repo: str | None = None) -> dict:
"""Get repository information."""
_owner = owner or self.owner
_repo = repo or self.repo
if not _repo:
return {"error": "Repository name required for org operations"}
return await self._request("GET", f"repos/{_owner}/{_repo}")
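The sanitization in `build_project_repo_name` can be exercised in isolation. A standalone sketch of the same logic — `sanitize_repo_name` is an illustrative name, not part of the class:

```python
def sanitize_repo_name(preferred: str, fallback: str = "project") -> str:
    """Lowercase, replace spaces, and collapse invalid characters into single hyphens."""
    name = preferred.strip().lower().replace(" ", "-")
    # Replace anything that is not alphanumeric, hyphen, or underscore.
    name = "".join(ch if ch.isalnum() or ch in {"-", "_"} else "-" for ch in name)
    # Collapse runs of hyphens produced by adjacent invalid characters.
    while "--" in name:
        name = name.replace("--", "-")
    return name.strip("-") or fallback

print(sanitize_repo_name("My Cool App!"))  # my-cool-app
```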


@@ -0,0 +1,379 @@
"""n8n setup agent for automatic webhook configuration."""
import json
from typing import Optional
try:
from ..config import settings
except ImportError:
from config import settings
class N8NSetupAgent:
"""Automatically configures n8n webhooks and workflows using API token authentication."""
def __init__(self, api_url: str, webhook_token: str):
"""Initialize n8n setup agent.
Args:
api_url: n8n API URL (e.g., http://n8n.yourserver.com)
webhook_token: n8n API key, sent as the X-N8N-API-KEY header on every request
Note: Generate the API key in the n8n UI (Settings > n8n API)
and prefer it over Basic Auth for automation
"""
self.api_url = api_url.rstrip("/")
self.webhook_token = webhook_token
self.session = None
def _api_path(self, path: str) -> str:
"""Build a full n8n API URL for a given endpoint path."""
if path.startswith("http://") or path.startswith("https://"):
return path
trimmed = path.lstrip("/")
if trimmed.startswith("api/"):
return f"{self.api_url}/{trimmed}"
return f"{self.api_url}/api/v1/{trimmed}"
def get_auth_headers(self) -> dict:
"""Get authentication headers for n8n API using webhook token."""
headers = {
"n8n-no-credentials": "true",
"Content-Type": "application/json",
"User-Agent": "AI-Software-Factory"
}
if self.webhook_token:
headers["X-N8N-API-KEY"] = self.webhook_token
return headers
async def _request(self, method: str, path: str, **kwargs) -> dict:
"""Send a request to n8n and normalize the response."""
import aiohttp
headers = kwargs.pop("headers", None) or self.get_auth_headers()
url = self._api_path(path)
try:
async with aiohttp.ClientSession() as session:
async with session.request(method, url, headers=headers, **kwargs) as resp:
content_type = resp.headers.get("Content-Type", "")
if "application/json" in content_type:
payload = await resp.json()
else:
payload = {"text": await resp.text()}
if 200 <= resp.status < 300:
if isinstance(payload, dict):
payload.setdefault("status_code", resp.status)
return payload
return {"data": payload, "status_code": resp.status}
message = payload.get("message") if isinstance(payload, dict) else str(payload)
return {"error": f"Status {resp.status}: {message}", "status_code": resp.status, "payload": payload}
except Exception as e:
return {"error": str(e)}
async def get_workflow(self, workflow_name: str) -> Optional[dict]:
"""Get a workflow by name."""
workflows = await self.list_workflows()
if isinstance(workflows, dict) and workflows.get("error"):
return workflows
for workflow in workflows:
if workflow.get("name") == workflow_name:
return workflow
return None
async def create_workflow(self, workflow_json: dict) -> dict:
"""Create or update a workflow."""
return await self._request("POST", "workflows", json=workflow_json)
async def update_workflow(self, workflow_id: str, workflow_json: dict) -> dict:
"""Update an existing workflow."""
return await self._request("PATCH", f"workflows/{workflow_id}", json=workflow_json)
async def enable_workflow(self, workflow_id: str) -> dict:
"""Enable a workflow."""
result = await self._request("POST", f"workflows/{workflow_id}/activate")
if result.get("error"):
fallback = await self._request("PATCH", f"workflows/{workflow_id}", json={"active": True})
if fallback.get("error"):
return fallback
return {"success": True, "id": workflow_id, "method": "patch"}
return {"success": True, "id": workflow_id, "method": "activate"}
async def list_workflows(self) -> list:
"""List all workflows."""
result = await self._request("GET", "workflows")
if result.get("error"):
return result
if isinstance(result, list):
return result
if isinstance(result, dict):
for key in ("data", "workflows"):
value = result.get(key)
if isinstance(value, list):
return value
return []
def build_telegram_workflow(self, webhook_path: str, backend_url: str) -> dict:
"""Build the Telegram-to-backend workflow definition."""
normalized_path = webhook_path.strip().strip("/") or "telegram"
return {
"name": "Telegram to AI Software Factory",
"active": False,
"settings": {"executionOrder": "v1"},
"nodes": [
{
"id": "webhook-node",
"name": "Telegram Webhook",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [-520, 120],
"parameters": {
"httpMethod": "POST",
"path": normalized_path,
"responseMode": "responseNode",
"options": {},
},
},
{
"id": "parse-node",
"name": "Prepare Software Request",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [-200, 120],
"parameters": {
"language": "javaScript",
"jsCode": "const body = $json.body ?? $json;\nconst message = body.message ?? body;\nconst text = String(message.text ?? '').trim();\nconst lines = text.split(/\\r?\\n/);\nconst request = { name: null, description: '', features: [], tech_stack: [] };\nlet nameIndex = -1;\nlet featuresIndex = -1;\nlet techIndex = -1;\nfor (let i = 0; i < lines.length; i += 1) {\n const line = lines[i].trim();\n if (line.toLowerCase().startsWith('name:')) { request.name = line.split(':', 2)[1]?.trim() || null; nameIndex = i; }\n if (line.toLowerCase().startsWith('features:') && featuresIndex === -1) { featuresIndex = i; }\n if (line.toLowerCase().startsWith('tech stack:') && techIndex === -1) { techIndex = i; }\n}\nif (nameIndex >= 0) {\n const descriptionEnd = featuresIndex >= 0 ? featuresIndex : (techIndex >= 0 ? techIndex : lines.length);\n request.description = lines.slice(nameIndex + 1, descriptionEnd).join('\\n').replace(/^description:\\s*/i, '').trim();\n}\nfunction collectList(startIndex, fieldName) {\n if (startIndex < 0) return;\n const firstLine = lines[startIndex].split(':').slice(1).join(':').trim();\n if (firstLine && !firstLine.startsWith('-') && !firstLine.startsWith('*')) {\n request[fieldName].push(...firstLine.split(',').map(item => item.trim()).filter(Boolean));\n }\n for (const rawLine of lines.slice(startIndex + 1)) {\n const line = rawLine.trim();\n if (!line) continue;\n if (/^[A-Za-z ]+:/.test(line)) break;\n if (line.startsWith('-') || line.startsWith('*')) {\n const value = line.slice(1).trim();\n if (value) request[fieldName].push(value);\n }\n }\n}\ncollectList(featuresIndex, 'features');\ncollectList(techIndex, 'tech_stack');\nif (!request.name || request.features.length === 0) { throw new Error('Could not parse software request from Telegram message'); }\nreturn [{ json: { ...request, _source: { raw_text: text, chat_id: message.chat?.id ?? null } } }];",
},
},
{
"id": "api-node",
"name": "AI Software Factory API",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [120, 120],
"parameters": {
"method": "POST",
"url": backend_url,
"sendBody": True,
"specifyBody": "json",
"jsonBody": "={{ $json }}",
"options": {"response": {"response": {"fullResponse": False}}},
},
},
{
"id": "response-node",
"name": "Respond to Telegram Webhook",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1.2,
"position": [420, 120],
"parameters": {
"respondWith": "json",
"responseBody": "={{ $json }}",
},
},
],
"connections": {
"Telegram Webhook": {"main": [[{"node": "Prepare Software Request", "type": "main", "index": 0}]]},
"Prepare Software Request": {"main": [[{"node": "AI Software Factory API", "type": "main", "index": 0}]]},
"AI Software Factory API": {"main": [[{"node": "Respond to Telegram Webhook", "type": "main", "index": 0}]]},
},
}
def build_telegram_trigger_workflow(
self,
backend_url: str,
credential_name: str,
) -> dict:
"""Build a production Telegram Trigger based workflow."""
return {
"name": "Telegram to AI Software Factory",
"active": False,
"settings": {"executionOrder": "v1"},
"nodes": [
{
"id": "telegram-trigger-node",
"name": "Telegram Trigger",
"type": "n8n-nodes-base.telegramTrigger",
"typeVersion": 1,
"position": [-520, 120],
"parameters": {"updates": ["message"]},
"credentials": {"telegramApi": {"name": credential_name}},
},
{
"id": "parse-node",
"name": "Prepare Software Request",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [-180, 120],
"parameters": {
"language": "javaScript",
"jsCode": "const message = $json.message ?? $json;\nconst text = String(message.text ?? '').trim();\nconst lines = text.split(/\\r?\\n/);\nconst request = { name: null, description: '', features: [], tech_stack: [], _source: { raw_text: text, chat_id: message.chat?.id ?? null } };\nlet nameIndex = -1;\nlet featuresIndex = -1;\nlet techIndex = -1;\nfor (let i = 0; i < lines.length; i += 1) {\n const line = lines[i].trim();\n if (line.toLowerCase().startsWith('name:')) { request.name = line.split(':', 2)[1]?.trim() || null; nameIndex = i; }\n if (line.toLowerCase().startsWith('features:') && featuresIndex === -1) { featuresIndex = i; }\n if (line.toLowerCase().startsWith('tech stack:') && techIndex === -1) { techIndex = i; }\n}\nif (nameIndex >= 0) {\n const descriptionEnd = featuresIndex >= 0 ? featuresIndex : (techIndex >= 0 ? techIndex : lines.length);\n request.description = lines.slice(nameIndex + 1, descriptionEnd).join('\\n').replace(/^description:\\s*/i, '').trim();\n}\nfunction collectList(startIndex, fieldName) {\n if (startIndex < 0) return;\n const firstLine = lines[startIndex].split(':').slice(1).join(':').trim();\n if (firstLine && !firstLine.startsWith('-') && !firstLine.startsWith('*')) {\n request[fieldName].push(...firstLine.split(',').map(item => item.trim()).filter(Boolean));\n }\n for (const rawLine of lines.slice(startIndex + 1)) {\n const line = rawLine.trim();\n if (!line) continue;\n if (/^[A-Za-z ]+:/.test(line)) break;\n if (line.startsWith('-') || line.startsWith('*')) {\n const value = line.slice(1).trim();\n if (value) request[fieldName].push(value);\n }\n }\n}\ncollectList(featuresIndex, 'features');\ncollectList(techIndex, 'tech_stack');\nif (!request.name || request.features.length === 0) { throw new Error('Could not parse software request from Telegram message'); }\nreturn [{ json: request }];",
},
},
{
"id": "api-node",
"name": "AI Software Factory API",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [120, 120],
"parameters": {
"method": "POST",
"url": backend_url,
"sendBody": True,
"specifyBody": "json",
"jsonBody": "={{ $json }}",
"options": {"response": {"response": {"fullResponse": False}}},
},
},
{
"id": "reply-node",
"name": "Send Telegram Update",
"type": "n8n-nodes-base.telegram",
"typeVersion": 1,
"position": [420, 120],
"parameters": {
"resource": "message",
"operation": "sendMessage",
"chatId": "={{ $('Telegram Trigger').item.json.message.chat.id }}",
"text": "={{ $json.data ? `Generated ${$json.data.name} (${($json.data.changed_files || []).length} files)` : ($json.message || 'Software generation request accepted') }}",
},
"credentials": {"telegramApi": {"name": credential_name}},
},
],
"connections": {
"Telegram Trigger": {"main": [[{"node": "Prepare Software Request", "type": "main", "index": 0}]]},
"Prepare Software Request": {"main": [[{"node": "AI Software Factory API", "type": "main", "index": 0}]]},
"AI Software Factory API": {"main": [[{"node": "Send Telegram Update", "type": "main", "index": 0}]]},
},
}
async def list_credentials(self) -> list:
"""List n8n credentials."""
result = await self._request("GET", "credentials")
if result.get("error"):
return []
if isinstance(result, list):
return result
if isinstance(result, dict):
for key in ("data", "credentials"):
value = result.get(key)
if isinstance(value, list):
return value
return []
async def get_credential(self, credential_name: str, credential_type: str = "telegramApi") -> Optional[dict]:
"""Get an existing credential by name and type."""
credentials = await self.list_credentials()
for credential in credentials:
if credential.get("name") == credential_name and credential.get("type") == credential_type:
return credential
return None
async def create_credential(self, name: str, credential_type: str, data: dict) -> dict:
"""Create an n8n credential."""
payload = {"name": name, "type": credential_type, "data": data}
return await self._request("POST", "credentials", json=payload)
async def ensure_telegram_credential(self, bot_token: str, credential_name: str) -> dict:
"""Ensure a Telegram credential exists for the workflow trigger."""
existing = await self.get_credential(credential_name)
if existing:
return existing
return await self.create_credential(
name=credential_name,
credential_type="telegramApi",
data={"accessToken": bot_token},
)
async def setup_telegram_workflow(self, webhook_path: str) -> dict:
"""Setup the Telegram webhook workflow in n8n.
Args:
webhook_path: The webhook path (e.g., /webhook/telegram)
Returns:
Result of setup operation
"""
return await self.setup(
webhook_path=webhook_path,
backend_url=f"{settings.backend_public_url}/generate",
force_update=False,
)
async def health_check(self) -> dict:
"""Check n8n API health."""
result = await self._request("GET", f"{self.api_url}/healthz")
if result.get("error"):
fallback = await self._request("GET", "workflows")
if fallback.get("error"):
return fallback
return {"status": "ok", "checked_via": "workflows"}
return {"status": "ok", "checked_via": "healthz"}
async def setup(
self,
webhook_path: str = "telegram",
backend_url: str | None = None,
force_update: bool = False,
use_telegram_trigger: bool | None = None,
telegram_bot_token: str | None = None,
telegram_credential_name: str | None = None,
) -> dict:
"""Setup n8n webhooks automatically."""
# First, verify n8n is accessible
health = await self.health_check()
if health.get("error"):
return {"status": "error", "message": health.get("error")}
effective_backend_url = backend_url or f"{settings.backend_public_url}/generate"
effective_bot_token = telegram_bot_token or settings.telegram_bot_token
effective_credential_name = telegram_credential_name or settings.n8n_telegram_credential_name
trigger_mode = use_telegram_trigger if use_telegram_trigger is not None else bool(effective_bot_token)
if trigger_mode:
if not effective_bot_token:
return {"status": "error", "message": "Telegram bot token required for trigger mode"}
credential = await self.ensure_telegram_credential(effective_bot_token, effective_credential_name)
if credential.get("error"):
return {"status": "error", "message": credential["error"]}
workflow = self.build_telegram_trigger_workflow(
backend_url=effective_backend_url,
credential_name=effective_credential_name,
)
else:
workflow = self.build_telegram_workflow(
webhook_path=webhook_path,
backend_url=effective_backend_url,
)
existing = await self.get_workflow(workflow["name"])
if isinstance(existing, dict) and existing.get("error"):
return {"status": "error", "message": existing["error"]}
workflow_id = None
if existing and existing.get("id"):
workflow_id = str(existing["id"])
if force_update:
result = await self.update_workflow(workflow_id, workflow)
else:
result = existing
else:
result = await self.create_workflow(workflow)
workflow_id = str(result.get("id", "")) if isinstance(result, dict) else None
if isinstance(result, dict) and result.get("error"):
return {"status": "error", "message": result["error"]}
workflow_id = workflow_id or str(result.get("id", ""))
enable_result = await self.enable_workflow(workflow_id)
if enable_result.get("error"):
return {"status": "error", "message": enable_result["error"], "workflow": result}
return {
"status": "success",
"message": f'Workflow "{workflow["name"]}" is active',
"workflow_id": workflow_id,
"workflow_name": workflow["name"],
"webhook_path": webhook_path.strip().strip("/") or "telegram",
"backend_url": effective_backend_url,
"trigger_mode": "telegram" if trigger_mode else "webhook",
}
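The JavaScript Code nodes above parse a `Name:` / `Features:` / `Tech stack:` message. The same format can be sketched in Python for local testing of the message contract — `parse_software_request` is an illustrative standalone function, not part of the agent:

```python
def parse_software_request(text: str) -> dict:
    """Parse a 'Name:/Features:/Tech stack:' message into a request dict."""
    lines = [ln.strip() for ln in text.splitlines()]
    request = {"name": None, "description": "", "features": [], "tech_stack": []}
    name_i = feat_i = tech_i = -1
    for i, line in enumerate(lines):
        low = line.lower()
        if low.startswith("name:"):
            request["name"] = line.split(":", 1)[1].strip() or None
            name_i = i
        elif low.startswith("features:") and feat_i == -1:
            feat_i = i
        elif low.startswith("tech stack:") and tech_i == -1:
            tech_i = i
    if name_i >= 0:
        end = feat_i if feat_i >= 0 else (tech_i if tech_i >= 0 else len(lines))
        desc = "\n".join(lines[name_i + 1:end]).strip()
        if desc.lower().startswith("description:"):
            desc = desc[len("description:"):].strip()
        request["description"] = desc

    def collect(start: int, field: str) -> None:
        # Accept inline comma-separated values after the label, or dash/asterisk lines below it.
        if start < 0:
            return
        inline = lines[start].split(":", 1)[1].strip()
        if inline and not inline.startswith(("-", "*")):
            request[field].extend(v.strip() for v in inline.split(",") if v.strip())
        for line in lines[start + 1:]:
            if not line:
                continue
            if line.startswith(("-", "*")):
                value = line[1:].strip()
                if value:
                    request[field].append(value)
            elif ":" in line:
                break  # next labeled section

    collect(feat_i, "features")
    collect(tech_i, "tech_stack")
    if not request["name"] or not request["features"]:
        raise ValueError("Could not parse software request")
    return request
```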


@@ -0,0 +1,332 @@
"""Agent orchestrator for software generation."""
from __future__ import annotations
import py_compile
from typing import Optional
from datetime import datetime
try:
from ..config import settings
from .database_manager import DatabaseManager
from .git_manager import GitManager
from .gitea import GiteaAPI
from .ui_manager import UIManager
except ImportError:
from config import settings
from agents.database_manager import DatabaseManager
from agents.git_manager import GitManager
from agents.gitea import GiteaAPI
from agents.ui_manager import UIManager
class AgentOrchestrator:
"""Orchestrates the software generation process with full audit trail."""
def __init__(
self,
project_id: str,
project_name: str,
description: str,
features: list,
tech_stack: list,
db=None,
prompt_text: str | None = None,
prompt_actor: str = "api",
):
"""Initialize orchestrator."""
self.project_id = project_id
self.project_name = project_name
self.description = description
self.features = features
self.tech_stack = tech_stack
self.status = "initialized"
self.progress = 0
self.current_step = None
self.message = ""
self.logs = []
self.ui_data = {}
self.db = db
self.prompt_text = prompt_text
self.prompt_actor = prompt_actor
self.changed_files: list[str] = []
self.gitea_api = GiteaAPI(
token=settings.gitea_token,
base_url=settings.gitea_url,
owner=settings.gitea_owner,
repo=settings.gitea_repo or ""
)
self.project_root = settings.projects_root / project_id
self.prompt_audit = None
self.repo_name = settings.gitea_repo or self.gitea_api.build_project_repo_name(project_id, project_name)
self.repo_owner = settings.gitea_owner
self.repo_url = self._build_repo_url(self.repo_owner, self.repo_name)
# Initialize agents
self.git_manager = GitManager(project_id)
self.ui_manager = UIManager(project_id)
# Initialize database manager if db session provided
self.db_manager = None
self.history = None
if db:
self.db_manager = DatabaseManager(db)
# Log project start to database
self.history = self.db_manager.log_project_start(
project_id=project_id,
project_name=project_name,
description=description
)
if self.prompt_text:
self.prompt_audit = self.db_manager.log_prompt_submission(
history_id=self.history.id,
project_id=project_id,
prompt_text=self.prompt_text,
features=self.features,
tech_stack=self.tech_stack,
actor_name=self.prompt_actor,
)
self.ui_manager.ui_data["project_root"] = str(self.project_root)
self.ui_manager.ui_data["features"] = list(self.features)
self.ui_manager.ui_data["tech_stack"] = list(self.tech_stack)
self.ui_manager.ui_data["repository"] = {
"owner": self.repo_owner,
"name": self.repo_name,
"url": self.repo_url,
"mode": "project" if settings.use_project_repositories else "shared",
}
def _build_repo_url(self, owner: str | None, repo: str | None) -> str | None:
if not owner or not repo or not settings.gitea_url:
return None
return f"{settings.gitea_url.rstrip('/')}/{owner}/{repo}"
async def _ensure_remote_repository(self) -> None:
if not settings.use_project_repositories:
return
if not self.repo_owner or not settings.gitea_token or not settings.gitea_url:
return
repo_name = self.repo_name
result = await self.gitea_api.create_repo(
repo_name=repo_name,
owner=self.repo_owner,
description=f"AI-generated project for {self.project_name}",
)
if result.get("status") == "exists" and repo_name == self.gitea_api.build_project_repo_name(self.project_id, self.project_name):
repo_name = f"{repo_name}-{self.project_id.split('-')[-1]}"
result = await self.gitea_api.create_repo(
repo_name=repo_name,
owner=self.repo_owner,
description=f"AI-generated project for {self.project_name}",
)
self.repo_name = repo_name
self.ui_manager.ui_data["repository"]["name"] = repo_name
if self.db_manager:
self.db_manager.log_system_event(
component="gitea",
level="ERROR" if result.get("error") else "INFO",
message=(
f"Repository setup failed for {self.repo_owner}/{self.repo_name}: {result.get('error')}"
if result.get("error")
else f"Prepared repository {self.repo_owner}/{self.repo_name}"
),
)
self.ui_manager.ui_data["repository"]["status"] = result.get("status", "error" if result.get("error") else "ready")
if result.get("html_url"):
self.repo_url = result["html_url"]
self.ui_manager.ui_data["repository"]["url"] = self.repo_url
def _append_log(self, message: str) -> None:
timestamped = f"[{datetime.utcnow().isoformat()}] {message}"
self.logs.append(timestamped)
if self.db_manager and self.history:
self.db_manager._log_action(self.history.id, "INFO", message)
def _update_progress(self, progress: int, step: str, message: str) -> None:
self.progress = progress
self.current_step = step
self.message = message
self.ui_manager.update_status(self.status, progress, message)
if self.db_manager and self.history:
self.db_manager.log_progress_update(
history_id=self.history.id,
progress=progress,
step=step,
message=message,
)
def _write_file(self, relative_path: str, content: str) -> None:
target = self.project_root / relative_path
target.parent.mkdir(parents=True, exist_ok=True)
change_type = "UPDATE" if target.exists() else "CREATE"
target.write_text(content, encoding="utf-8")
self.changed_files.append(relative_path)
if self.db_manager and self.history:
self.db_manager.log_code_change(
project_id=self.project_id,
change_type=change_type,
file_path=relative_path,
actor="orchestrator",
actor_type="agent",
details=f"{change_type.title()}d generated artifact {relative_path}",
history_id=self.history.id,
prompt_id=self.prompt_audit.id if self.prompt_audit else None,
diff_summary=f"Wrote {len(content.splitlines())} lines to {relative_path}",
)
def _template_files(self) -> dict[str, str]:
feature_section = "\n".join(f"- {feature}" for feature in self.features) or "- None specified"
tech_section = "\n".join(f"- {tech}" for tech in self.tech_stack) or "- Python"
return {
".gitignore": "__pycache__/\n*.pyc\n.venv/\n.pytest_cache/\n.mypy_cache/\n",
"README.md": (
f"# {self.project_name}\n\n"
f"{self.description}\n\n"
"## Features\n"
f"{feature_section}\n\n"
"## Tech Stack\n"
f"{tech_section}\n"
),
"requirements.txt": "fastapi\nuvicorn\npytest\n",
"main.py": (
"from fastapi import FastAPI\n\n"
"app = FastAPI(title=\"Generated App\")\n\n"
"@app.get('/')\n"
"def read_root():\n"
f" return {{'name': '{self.project_name}', 'status': 'generated', 'features': {self.features!r}}}\n"
),
"tests/test_app.py": (
"from main import read_root\n\n"
"def test_read_root():\n"
f" assert read_root()['name'] == '{self.project_name}'\n"
),
}
async def run(self) -> dict:
"""Run the software generation process with full audit logging."""
try:
# Step 1: Initialize project
self.status = "running"
self._update_progress(5, "initializing", "Setting up project structure...")
self._append_log("Initializing project.")
await self._ensure_remote_repository()
# Step 2: Create project structure (skip git operations)
self._update_progress(20, "project-structure", "Creating project files...")
await self._create_project_structure()
# Step 3: Generate initial code
self._update_progress(55, "code-generation", "Generating project entrypoint and tests...")
await self._generate_code()
# Step 4: Test the code
self._update_progress(80, "validation", "Validating generated code...")
await self._run_tests()
# Step 5: Complete
self.status = "completed"
self._update_progress(100, "completed", "Software generation complete!")
self._append_log("Software generation complete!")
self.ui_manager.ui_data["changed_files"] = list(dict.fromkeys(self.changed_files))
# Log completion to database if available
if self.db_manager and self.history:
self.db_manager.save_ui_snapshot(self.history.id, self.ui_manager.get_ui_data())
self.db_manager.log_project_complete(
history_id=self.history.id,
message="Software generation complete!"
)
return {
"status": "completed",
"progress": self.progress,
"message": self.message,
"current_step": self.current_step,
"logs": self.logs,
"ui_data": self.ui_manager.ui_data,
"history_id": self.history.id if self.history else None,
"project_root": str(self.project_root),
"changed_files": list(dict.fromkeys(self.changed_files)),
"repository": self.ui_manager.ui_data.get("repository"),
}
except Exception as e:
self.status = "error"
self.message = f"Error: {str(e)}"
self._append_log(f"Error: {str(e)}")
# Log error to database if available
if self.db_manager and self.history:
self.db_manager.log_error(
history_id=self.history.id,
error=str(e)
)
return {
"status": "error",
"progress": self.progress,
"message": self.message,
"current_step": self.current_step,
"logs": self.logs,
"error": str(e),
"ui_data": self.ui_manager.ui_data,
"history_id": self.history.id if self.history else None,
"project_root": str(self.project_root),
"changed_files": list(dict.fromkeys(self.changed_files)),
"repository": self.ui_manager.ui_data.get("repository"),
}
async def _create_project_structure(self) -> None:
"""Create initial project structure."""
self.project_root.mkdir(parents=True, exist_ok=True)
for relative_path, content in self._template_files().items():
if relative_path in {"main.py", "tests/test_app.py"}:
continue
self._write_file(relative_path, content)
self._append_log(f"Project structure created under {self.project_root}.")
async def _generate_code(self) -> None:
"""Generate code using Ollama."""
for relative_path, content in self._template_files().items():
if relative_path in {"main.py", "tests/test_app.py"}:
self._write_file(relative_path, content)
self._append_log("Application entrypoint and smoke test generated.")
async def _run_tests(self) -> None:
"""Run tests for the generated code."""
py_compile.compile(str(self.project_root / "main.py"), doraise=True)
py_compile.compile(str(self.project_root / "tests/test_app.py"), doraise=True)
self._append_log("Generated Python files compiled successfully.")
async def _commit_to_git(self) -> None:
"""Commit changes to git."""
pass # Skip git operations in test environment
async def _create_pr(self) -> None:
"""Create pull request."""
pass # Skip PR creation in test environment
def update_status(self, status: str, progress: int, message: str) -> None:
"""Update status and progress."""
self.status = status
self.progress = progress
self.message = message
def get_ui_data(self) -> dict:
"""Get UI data."""
return self.ui_manager.ui_data
def render_dashboard(self) -> str:
"""Render dashboard HTML."""
return self.ui_manager.render_dashboard()
def get_history(self) -> Optional[dict]:
"""Get project history from database."""
if self.db_manager and self.history:
return self.db_manager.get_project_audit_data(self.history.project_id)
return None
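`_run_tests` validates generated files with `py_compile` rather than executing them. A minimal standalone sketch of that check against a temp file — `compiles_ok` is a hypothetical helper, not part of the orchestrator:

```python
import py_compile
import tempfile
from pathlib import Path

def compiles_ok(source: str) -> bool:
    """Return True when the given Python source byte-compiles cleanly."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "candidate.py"
        path.write_text(source, encoding="utf-8")
        try:
            # doraise=True turns compile problems into PyCompileError instead of stderr output.
            py_compile.compile(str(path), doraise=True)
            return True
        except py_compile.PyCompileError:
            return False

print(compiles_ok("def ok():\n    return 1\n"))  # True
print(compiles_ok("def broken(:\n"))             # False
```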


@@ -0,0 +1,151 @@
"""Telegram bot integration for n8n webhook."""
from typing import Optional
class TelegramHandler:
"""Handles Telegram messages via n8n webhook."""
def __init__(self, webhook_url: str):
self.webhook_url = webhook_url
self.api_url = "https://api.telegram.org/bot"
async def handle_message(self, message_data: dict) -> dict:
"""Handle incoming Telegram message."""
text = message_data.get("text", "")
chat_id = message_data.get("chat", {}).get("id", "")
# Extract software request from message
request = self._parse_request(text)
if request:
# Forward to backend API
async def fetch_software():
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(
"http://localhost:8000/generate",
json=request
) as resp:
return await resp.json()
except Exception as e:
return {"error": str(e)}
result = await fetch_software()
return {
"status": "success",
"data": result,
"response": self._format_response(result)
}
else:
return {
"status": "error",
"message": "Could not parse software request"
}
def _parse_request(self, text: str) -> Optional[dict]:
"""Parse software request from user message."""
# Simple parser - in production, use LLM to extract
request = {
"name": None,
"description": None,
"features": []
}
lines = text.split("\n")
# Parse name
name_idx = -1
for i, line in enumerate(lines):
line_stripped = line.strip()
if line_stripped.lower().startswith("name:"):
request["name"] = line_stripped.split(":", 1)[1].strip()
name_idx = i
break
if not request["name"]:
return None
# Parse description (everything after name until features section)
# First, find where features section starts
features_idx = -1
for i in range(name_idx + 1, len(lines)):
line_stripped = lines[i].strip()
if line_stripped.lower().startswith("features:"):
features_idx = i
break
if features_idx > name_idx:
# Description is between name and features
request["description"] = "\n".join(lines[name_idx + 1:features_idx]).strip()
else:
# Description is everything after name
request["description"] = "\n".join(lines[name_idx + 1:]).strip()
# Strip description prefix if present
if request["description"]:
request["description"] = request["description"].strip()
if request["description"].lower().startswith("description:"):
request["description"] = request["description"][len("description:") + 1:].strip()
# Parse features
if features_idx > 0:
features_line = lines[features_idx]
# Parse inline features after "Features:"
if ":" in features_line:
inline_part = features_line.split(":", 1)[1].strip()
# Skip if it starts with a dash (that marks a multiline list, handled below)
if inline_part and not inline_part.startswith("-"):
# Remove a leading asterisk marker (the dash case is unreachable here)
if inline_part.startswith("*"):
inline_part = inline_part[1:].strip()
if inline_part:
# Split inline features on commas
request["features"].extend([f.strip() for f in inline_part.split(",") if f.strip()])
# Parse multiline features (dash lines after features:)
for line in lines[features_idx + 1:]:
line_stripped = line.strip()
if not line_stripped:
continue
if line_stripped.startswith(("-", "*")):
feature_text = line_stripped[1:].strip()
if feature_text:
request["features"].append(feature_text)
elif ":" in line_stripped:
# A new "Key:" line ends the features section
break
# MUST have features
if not request["features"]:
return None
return request
def _format_response(self, result: dict) -> dict:
"""Format response for Telegram."""
status = result.get("status", "error")
message = result.get("message", result.get("detail", ""))
progress = result.get("progress", 0)
response_data = {
"status": status,
"message": message,
"progress": progress,
"project_name": result.get("name", "N/A"),
"logs": result.get("logs", [])
}
return response_data
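The parser above expects a `Name:` line, an optional description, and a `Features:` list, and rejects messages that lack either a name or at least one feature. A standalone sketch of that contract (a simplified, hypothetical re-implementation for illustration, not the module above):

```python
def parse_request(text: str):
    """Minimal sketch of the Name/Description/Features message format."""
    name = None
    description = []
    features = []
    section = None
    for raw in text.splitlines():
        line = raw.strip()
        lower = line.lower()
        if lower.startswith("name:"):
            name = line.split(":", 1)[1].strip()
            section = "description"
        elif lower.startswith("features:"):
            section = "features"
        elif section == "features" and line.startswith(("-", "*")):
            features.append(line[1:].strip())
        elif section == "description" and line:
            if lower.startswith("description:"):
                line = line.split(":", 1)[1].strip()
            description.append(line)
    if not name or not features:
        return None  # both a name and at least one feature are required
    return {"name": name, "description": " ".join(description), "features": features}

message = """Name: todo-app
Description: A simple task tracker
Features:
- add tasks
- mark tasks done"""
print(parse_request(message))
```

Feeding it a message without a `Features:` section returns `None`, mirroring the "MUST have features" rule above.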

View File

@@ -0,0 +1,429 @@
"""UI manager for web dashboard with audit trail display."""
import html
import json
from typing import Optional, List
class UIManager:
"""Manages UI data and updates with audit trail display."""
def __init__(self, project_id: str):
"""Initialize UI manager."""
self.project_id = project_id
self.ui_data = {
"project_id": project_id,
"status": "initialized",
"progress": 0,
"message": "Ready",
"snapshots": [],
"features": []
}
def update_status(self, status: str, progress: int, message: str) -> None:
"""Update UI status."""
self.ui_data["status"] = status
self.ui_data["progress"] = progress
self.ui_data["message"] = message
def add_snapshot(self, data: str, created_at: Optional[str] = None) -> None:
"""Add a snapshot of UI data."""
snapshot = {
"data": data,
"created_at": created_at or self._get_current_timestamp()
}
self.ui_data.setdefault("snapshots", []).append(snapshot)
def add_feature(self, feature: str) -> None:
"""Add a feature tag."""
self.ui_data.setdefault("features", []).append(feature)
def _get_current_timestamp(self) -> str:
"""Get current timestamp in ISO format."""
from datetime import datetime
return datetime.now().isoformat()
def get_ui_data(self) -> dict:
"""Get current UI data."""
return self.ui_data
def _escape_html(self, text: str) -> str:
"""Escape HTML special characters for safe display."""
if text is None:
return ""
return html.escape(str(text), quote=True)
def render_dashboard(self, audit_trail: Optional[List[dict]] = None,
actions: Optional[List[dict]] = None,
logs: Optional[List[dict]] = None) -> str:
"""Render dashboard HTML with audit trail and history display."""
# Format logs for display
logs_html = ""
if logs:
for log in logs:
level = log.get("level", "INFO")
message = self._escape_html(log.get("message", ""))
timestamp = self._escape_html(log.get("timestamp", ""))
if level == "ERROR":
level_class = "error"
else:
level_class = "info"
logs_html += f"""
<div class="log-item">
<span class="timestamp">{timestamp}</span>
<span class="log-level {level_class}">[{level}]</span>
<span>{message}</span>
</div>"""
# Format audit trail for display
audit_html = ""
if audit_trail:
for audit in audit_trail:
action = audit.get("action", "")
actor = self._escape_html(audit.get("actor", ""))
timestamp = self._escape_html(audit.get("timestamp", ""))
details = self._escape_html(audit.get("details", ""))
metadata = audit.get("metadata", {})
action_type = audit.get("action_type", "")
# Color classes for action types
action_color = action_type.lower() if action_type else "neutral"
audit_html += f"""
<div class="audit-item">
<div class="audit-header">
<span class="audit-action {action_color}">
{self._escape_html(action)}
</span>
<span class="audit-actor">{actor}</span>
<span class="audit-time">{timestamp}</span>
</div>
<div class="audit-details">{details}</div>
{f'<div class="audit-metadata">{self._escape_html(json.dumps(metadata))}</div>' if metadata else ''}
</div>
"""
# Format actions for display
actions_html = ""
if actions:
for action in actions:
action_type = action.get("action_type", "")
description = self._escape_html(action.get("description", ""))
actor_name = self._escape_html(action.get("actor_name", ""))
actor_type = self._escape_html(action.get("actor_type", ""))
timestamp = self._escape_html(action.get("timestamp", ""))
actions_html += f"""
<div class="action-item">
<div class="action-type">{self._escape_html(action_type)}</div>
<div class="action-description">{description}</div>
<div class="action-actor">{actor_type}: {actor_name}</div>
<div class="action-time">{timestamp}</div>
</div>"""
# Format snapshots for display
snapshots_html = ""
snapshots = self.ui_data.get("snapshots", [])
if snapshots:
for snapshot in snapshots:
data = self._escape_html(snapshot.get("data", ""))
created_at = self._escape_html(snapshot.get("created_at", ""))
snapshots_html += f"""
<div class="snapshot-item">
<div class="snapshot-time">{created_at}</div>
<pre class="snapshot-data">{data}</pre>
</div>"""
# Build features HTML
features_html = ""
features = self.ui_data.get("features", [])
if features:
feature_tags = []
for feat in features:
feature_tags.append(f'<span class="feature-tag">{self._escape_html(feat)}</span>')
features_html = f'<div class="features">{"".join(feature_tags)}</div>'
# Build project header HTML
project_id_escaped = self._escape_html(self.ui_data.get('project_id', 'Project'))
status = self.ui_data.get('status', 'initialized')
# Determine empty state message
empty_state_message = ""
if not audit_trail and not actions and not snapshots_html:
empty_state_message = 'No audit trail entries available'
return f"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Software Factory Dashboard</title>
<style>
* {{ margin: 0; padding: 0; box-sizing: border-box; }}
body {{
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
padding: 2rem;
}}
.container {{
max-width: 1200px;
margin: 0 auto;
background: white;
border-radius: 16px;
padding: 2rem;
box-shadow: 0 20px 60px rgba(0,0,0,0.3);
}}
h1 {{
color: #333;
margin-bottom: 1.5rem;
font-size: 2rem;
}}
h2 {{
color: #444;
margin: 2rem 0 1rem;
font-size: 1.5rem;
border-bottom: 2px solid #667eea;
padding-bottom: 0.5rem;
}}
.project {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 1rem;
}}
.project-header {{
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}}
.project-name {{
font-size: 1.25rem;
font-weight: bold;
color: #333;
}}
.status-badge {{
padding: 0.5rem 1rem;
border-radius: 20px;
font-weight: bold;
font-size: 0.85rem;
}}
.status-badge.running {{ background: #fff3cd; color: #856404; }}
.status-badge.completed {{ background: #d4edda; color: #155724; }}
.status-badge.error {{ background: #f8d7da; color: #721c24; }}
.status-badge.initialized {{ background: #e2e3e5; color: #383d41; }}
.progress-bar {{
width: 100%;
height: 24px;
background: #e9ecef;
border-radius: 12px;
overflow: hidden;
margin: 1rem 0;
}}
.progress-fill {{
height: 100%;
background: linear-gradient(90deg, #667eea, #764ba2);
transition: width 0.5s ease;
}}
.message {{
color: #495057;
margin: 0.5rem 0;
}}
.logs {{
background: #f8f9fa;
border-radius: 8px;
padding: 1rem;
max-height: 200px;
overflow-y: auto;
font-family: monospace;
font-size: 0.85rem;
}}
.log-item {{
padding: 0.25rem 0;
border-bottom: 1px solid #e9ecef;
}}
.log-item:last-child {{ border-bottom: none; }}
.timestamp {{
color: #6c757d;
font-size: 0.8rem;
}}
.log-level {{
font-weight: bold;
margin-right: 0.5rem;
}}
.log-level.info {{ color: #28a745; }}
.log-level.error {{ color: #dc3545; }}
.features {{
margin-top: 1rem;
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
}}
.feature-tag {{
background: #e7f3ff;
color: #0066cc;
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.85rem;
}}
.audit-section {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-top: 1rem;
}}
.audit-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.audit-header {{
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
flex-wrap: wrap;
gap: 0.5rem;
}}
.audit-action {{
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.85rem;
font-weight: bold;
}}
/* class names come from action_type.lower() in render_dashboard */
.audit-action.create {{ background: #d4edda; color: #155724; }}
.audit-action.update {{ background: #cce5ff; color: #004085; }}
.audit-action.delete {{ background: #f8d7da; color: #721c24; }}
.audit-action.prompt {{ background: #d1ecf1; color: #0c5460; }}
.audit-action.commit {{ background: #fff3cd; color: #856404; }}
.audit-action.pr_created {{ background: #d4edda; color: #155724; }}
.audit-action.neutral {{ background: #e9ecef; color: #495057; }}
.audit-actor {{
background: #e9ecef;
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.8rem;
}}
.audit-time {{
color: #6c757d;
font-size: 0.8rem;
}}
.audit-details {{
color: #495057;
font-size: 0.9rem;
font-weight: bold;
text-transform: uppercase;
}}
.audit-metadata {{
background: #f1f3f5;
padding: 0.5rem;
border-radius: 4px;
font-size: 0.75rem;
font-family: monospace;
margin-top: 0.5rem;
max-height: 100px;
overflow-y: auto;
}}
.action-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.action-type {{
font-weight: bold;
color: #667eea;
font-size: 0.9rem;
}}
.action-description {{
color: #495057;
margin: 0.5rem 0;
}}
.action-actor {{
color: #6c757d;
font-size: 0.8rem;
}}
.snapshot-section {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-top: 1rem;
}}
.snapshot-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.snapshot-time {{
color: #6c757d;
font-size: 0.8rem;
margin-bottom: 0.5rem;
}}
.snapshot-data {{
background: #f8f9fa;
padding: 0.5rem;
border-radius: 4px;
font-family: monospace;
font-size: 0.75rem;
max-height: 200px;
overflow-y: auto;
white-space: pre-wrap;
word-break: break-all;
}}
.empty-state {{
text-align: center;
color: #6c757d;
padding: 2rem;
}}
@media (max-width: 768px) {{
.container {{
padding: 1rem;
}}
h1 {{
font-size: 1.5rem;
}}
}}
</style>
</head>
<body>
<div class="container">
<h1>AI Software Factory Dashboard</h1>
<div class="project">
<div class="project-header">
<span class="project-name">{project_id_escaped}</span>
<span class="status-badge {status}">
{status.upper()}
</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: {self.ui_data.get('progress', 0)}%;"></div>
</div>
<div class="message">{self._escape_html(self.ui_data.get('message', 'No message'))}</div>
{f'<div class="logs" id="logs">{logs_html}</div>' if logs else '<div class="empty-state">No logs available</div>'}
{features_html}
</div>
{f'<div class="audit-section"><h2>Audit Trail</h2>{audit_html}</div>' if audit_html else ''}
{f'<div class="action-section"><h2>User Actions</h2>{actions_html}</div>' if actions_html else ''}
{f'<div class="snapshot-section"><h2>UI Snapshots</h2>{snapshots_html}</div>' if snapshots_html else ''}
{empty_state_message}
</div>
</body>
</html>"""
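`_escape_html` is the only thing standing between stored log and audit text and the rendered page, which is why the snapshot and metadata fields are escaped like everything else. A quick stdlib check of what escaping buys (nothing here is specific to UIManager):

```python
import html

# A hostile value that could land in a log message or snapshot
payload = '<script>alert("xss")</script>'
safe = html.escape(payload, quote=True)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

With `quote=True`, double quotes are also escaped, so the value is safe inside attribute values, not just element bodies.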

View File

@@ -0,0 +1,37 @@
[alembic]
script_location = alembic
prepend_sys_path = .
path_separator = os
sqlalchemy.url = sqlite:////tmp/ai_software_factory_test.db
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers = console
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
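Note the four slashes in `sqlite:////tmp/...`: `sqlite:///` is the dialect prefix and the fourth slash starts the absolute path `/tmp/ai_software_factory_test.db`. The file is plain `configparser` syntax, so the URL can be read programmatically; a small sketch over a reduced fragment:

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""\
[alembic]
script_location = alembic
sqlalchemy.url = sqlite:////tmp/ai_software_factory_test.db
""")
url = cfg["alembic"]["sqlalchemy.url"]
# "sqlite:///" plus an absolute path yields four consecutive slashes
assert url.removeprefix("sqlite:///") == "/tmp/ai_software_factory_test.db"
print(url)
```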

View File

@@ -0,0 +1,50 @@
"""Alembic environment for AI Software Factory."""
from __future__ import annotations
from logging.config import fileConfig
from alembic import context
from sqlalchemy import engine_from_config, pool
try:
from ai_software_factory.models import Base
except ImportError:
from models import Base
config = context.config
if config.config_file_name is not None:
fileConfig(config.config_file_name)
target_metadata = Base.metadata
def run_migrations_offline() -> None:
"""Run migrations in offline mode."""
url = config.get_main_option("sqlalchemy.url")
context.configure(url=url, target_metadata=target_metadata, literal_binds=True, compare_type=True)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in online mode."""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(connection=connection, target_metadata=target_metadata, compare_type=True)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

View File

@@ -0,0 +1,17 @@
"""${message}"""
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
from alembic import op
import sqlalchemy as sa
def upgrade() -> None:
${upgrades if upgrades else "pass"}
def downgrade() -> None:
${downgrades if downgrades else "pass"}

View File

@@ -0,0 +1,164 @@
"""initial schema
Revision ID: 20260410_01
Revises:
Create Date: 2026-04-10 00:00:00
"""
from alembic import op
import sqlalchemy as sa
revision = "20260410_01"
down_revision = None
branch_labels = None
depends_on = None
def upgrade() -> None:
op.create_table(
"agent_actions",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("agent_name", sa.String(length=100), nullable=False),
sa.Column("action_type", sa.String(length=100), nullable=False),
sa.Column("success", sa.Boolean(), nullable=True),
sa.Column("message", sa.String(length=500), nullable=True),
sa.Column("timestamp", sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"audit_trail",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("component", sa.String(length=50), nullable=True),
sa.Column("log_level", sa.String(length=50), nullable=True),
sa.Column("message", sa.String(length=500), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.Column("project_id", sa.String(length=255), nullable=True),
sa.Column("action", sa.String(length=100), nullable=True),
sa.Column("actor", sa.String(length=100), nullable=True),
sa.Column("action_type", sa.String(length=50), nullable=True),
sa.Column("details", sa.Text(), nullable=True),
sa.Column("metadata_json", sa.JSON(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"project_history",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("project_id", sa.String(length=255), nullable=False),
sa.Column("project_name", sa.String(length=255), nullable=True),
sa.Column("features", sa.Text(), nullable=True),
sa.Column("description", sa.String(length=255), nullable=True),
sa.Column("status", sa.String(length=50), nullable=True),
sa.Column("progress", sa.Integer(), nullable=True),
sa.Column("message", sa.String(length=500), nullable=True),
sa.Column("current_step", sa.String(length=255), nullable=True),
sa.Column("total_steps", sa.Integer(), nullable=True),
sa.Column("current_step_description", sa.String(length=1024), nullable=True),
sa.Column("current_step_details", sa.Text(), nullable=True),
sa.Column("error_message", sa.Text(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.Column("started_at", sa.DateTime(), nullable=True),
sa.Column("updated_at", sa.DateTime(), nullable=True),
sa.Column("completed_at", sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"system_logs",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("component", sa.String(length=50), nullable=False),
sa.Column("log_level", sa.String(length=50), nullable=True),
sa.Column("log_message", sa.String(length=500), nullable=False),
sa.Column("user_agent", sa.String(length=255), nullable=True),
sa.Column("ip_address", sa.String(length=45), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"project_logs",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=False),
sa.Column("log_level", sa.String(length=50), nullable=True),
sa.Column("log_message", sa.String(length=500), nullable=False),
sa.Column("timestamp", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"prompt_code_links",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=False),
sa.Column("project_id", sa.String(length=255), nullable=False),
sa.Column("prompt_audit_id", sa.Integer(), nullable=False),
sa.Column("code_change_audit_id", sa.Integer(), nullable=False),
sa.Column("file_path", sa.String(length=500), nullable=True),
sa.Column("change_type", sa.String(length=50), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"pull_request_data",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=False),
sa.Column("pr_number", sa.Integer(), nullable=False),
sa.Column("pr_title", sa.String(length=500), nullable=False),
sa.Column("pr_body", sa.Text(), nullable=True),
sa.Column("pr_state", sa.String(length=50), nullable=False),
sa.Column("pr_url", sa.String(length=500), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"pull_requests",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=False),
sa.Column("pr_number", sa.Integer(), nullable=False),
sa.Column("pr_title", sa.String(length=500), nullable=False),
sa.Column("pr_body", sa.Text(), nullable=True),
sa.Column("base", sa.String(length=255), nullable=False),
sa.Column("user", sa.String(length=255), nullable=False),
sa.Column("pr_url", sa.String(length=500), nullable=False),
sa.Column("merged", sa.Boolean(), nullable=True),
sa.Column("merged_at", sa.DateTime(), nullable=True),
sa.Column("pr_state", sa.String(length=50), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"ui_snapshots",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=False),
sa.Column("snapshot_data", sa.JSON(), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
"user_actions",
sa.Column("id", sa.Integer(), nullable=False),
sa.Column("history_id", sa.Integer(), nullable=True),
sa.Column("user_id", sa.String(length=100), nullable=True),
sa.Column("action_type", sa.String(length=100), nullable=True),
sa.Column("actor_type", sa.String(length=50), nullable=True),
sa.Column("actor_name", sa.String(length=100), nullable=True),
sa.Column("action_description", sa.String(length=500), nullable=True),
sa.Column("action_data", sa.JSON(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(["history_id"], ["project_history.id"]),
sa.PrimaryKeyConstraint("id"),
)
def downgrade() -> None:
op.drop_table("user_actions")
op.drop_table("ui_snapshots")
op.drop_table("pull_requests")
op.drop_table("pull_request_data")
op.drop_table("prompt_code_links")
op.drop_table("project_logs")
op.drop_table("system_logs")
op.drop_table("project_history")
op.drop_table("audit_trail")
op.drop_table("agent_actions")
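`downgrade()` drops the child tables before `project_history` because of the foreign keys pointing at it. The ordering constraint can be demonstrated with stdlib `sqlite3` (a sketch; table names are borrowed from the migration, the schema is heavily reduced):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE project_history (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE project_logs ("
    "id INTEGER PRIMARY KEY, "
    "history_id INTEGER NOT NULL REFERENCES project_history(id))"
)
conn.execute("INSERT INTO project_history (id) VALUES (1)")
conn.execute("INSERT INTO project_logs (id, history_id) VALUES (1, 1)")

# Dropping the parent first fails while child rows still reference it
try:
    conn.execute("DROP TABLE project_history")
    parent_first = "dropped"
except sqlite3.Error:
    parent_first = "blocked"

# Child first, then parent: the order downgrade() uses
conn.execute("DROP TABLE project_logs")
conn.execute("DROP TABLE project_history")
print(parent_first)
```

In SQLite, `DROP TABLE` runs an implicit `DELETE FROM` first, which is what trips the foreign-key check when the parent goes before its children.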

View File

@@ -0,0 +1,232 @@
"""Configuration settings for AI Software Factory."""
import os
from typing import Optional
from pathlib import Path
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
"""Application settings loaded from environment variables."""
model_config = SettingsConfigDict(
env_file=".env",
env_file_encoding="utf-8",
extra="ignore",
)
# Server settings
HOST: str = "0.0.0.0"
PORT: int = 8000
LOG_LEVEL: str = "INFO"
# Ollama settings
OLLAMA_URL: str = "http://ollama:11434"
OLLAMA_MODEL: str = "llama3"
# Gitea settings
GITEA_URL: str = "https://gitea.yourserver.com"
GITEA_TOKEN: str = ""
GITEA_OWNER: str = "ai-software-factory"
GITEA_REPO: str = ""
# n8n settings
N8N_WEBHOOK_URL: str = ""
N8N_API_URL: str = ""
N8N_API_KEY: str = ""
N8N_TELEGRAM_CREDENTIAL_NAME: str = "AI Software Factory Telegram"
N8N_USER: str = ""
N8N_PASSWORD: str = ""
# Runtime integration settings
BACKEND_PUBLIC_URL: str = "http://localhost:8000"
PROJECTS_ROOT: str = ""
# Telegram settings
TELEGRAM_BOT_TOKEN: str = ""
TELEGRAM_CHAT_ID: str = ""
# PostgreSQL settings
POSTGRES_HOST: str = "localhost"
POSTGRES_PORT: int = 5432
POSTGRES_USER: str = "postgres"
POSTGRES_PASSWORD: str = ""
POSTGRES_DB: str = "ai_software_factory"
POSTGRES_TEST_DB: str = "ai_software_factory_test"
POSTGRES_URL: Optional[str] = None # Optional direct PostgreSQL connection URL
# SQLite settings for testing
USE_SQLITE: bool = True # Enable SQLite by default for testing
SQLITE_DB_PATH: str = "sqlite.db"
# Database connection pool settings (only for PostgreSQL)
DB_POOL_SIZE: int = 10
DB_MAX_OVERFLOW: int = 20
DB_POOL_RECYCLE: int = 3600
DB_POOL_TIMEOUT: int = 30
@property
def postgres_url(self) -> str:
"""Get PostgreSQL URL with trimmed whitespace."""
return (self.POSTGRES_URL or "").strip()
@property
def postgres_env_configured(self) -> bool:
"""Whether PostgreSQL was explicitly configured via environment variables."""
if self.postgres_url:
return True
postgres_env_keys = (
"POSTGRES_HOST",
"POSTGRES_PORT",
"POSTGRES_USER",
"POSTGRES_PASSWORD",
"POSTGRES_DB",
)
return any(bool(os.environ.get(key, "").strip()) for key in postgres_env_keys)
@property
def use_sqlite(self) -> bool:
"""Whether SQLite should be used as the active database backend."""
if not self.USE_SQLITE:
return False
return not self.postgres_env_configured
@property
def pool(self) -> dict:
"""Get database pool configuration."""
return {
"pool_size": self.DB_POOL_SIZE,
"max_overflow": self.DB_MAX_OVERFLOW,
"pool_recycle": self.DB_POOL_RECYCLE,
"pool_timeout": self.DB_POOL_TIMEOUT
}
@property
def database_url(self) -> str:
"""Get database connection URL."""
if self.use_sqlite:
return f"sqlite:///{self.SQLITE_DB_PATH}"
if self.postgres_url:
return self.postgres_url
return (
f"postgresql://{self.POSTGRES_USER}:{self.POSTGRES_PASSWORD}"
f"@{self.POSTGRES_HOST}:{self.POSTGRES_PORT}/{self.POSTGRES_DB}"
)
@property
def test_database_url(self) -> str:
"""Get test database connection URL."""
if self.use_sqlite:
return f"sqlite:///{self.SQLITE_DB_PATH}"
if self.postgres_url:
return self.postgres_url
return (
f"postgresql://{self.POSTGRES_USER}:{self.POSTGRES_PASSWORD}"
f"@{self.POSTGRES_HOST}:{self.POSTGRES_PORT}/{self.POSTGRES_TEST_DB}"
)
@property
def ollama_url(self) -> str:
"""Get Ollama URL with trimmed whitespace."""
return self.OLLAMA_URL.strip()
@property
def gitea_url(self) -> str:
"""Get Gitea URL with trimmed whitespace."""
return self.GITEA_URL.strip()
@property
def gitea_token(self) -> str:
"""Get Gitea token with trimmed whitespace."""
return self.GITEA_TOKEN.strip()
@property
def gitea_owner(self) -> str:
"""Get Gitea owner/organization with trimmed whitespace."""
return self.GITEA_OWNER.strip()
@property
def gitea_repo(self) -> str:
"""Get the optional fixed Gitea repository name with trimmed whitespace."""
return self.GITEA_REPO.strip()
@property
def use_project_repositories(self) -> bool:
"""Whether the service should create one repository per generated project."""
return not bool(self.gitea_repo)
@property
def n8n_webhook_url(self) -> str:
"""Get n8n webhook URL with trimmed whitespace."""
return self.N8N_WEBHOOK_URL.strip()
@property
def n8n_api_url(self) -> str:
"""Get n8n API URL with trimmed whitespace."""
return self.N8N_API_URL.strip()
@property
def n8n_api_key(self) -> str:
"""Get n8n API key with trimmed whitespace."""
return self.N8N_API_KEY.strip()
@property
def n8n_telegram_credential_name(self) -> str:
"""Get the preferred n8n Telegram credential name."""
return self.N8N_TELEGRAM_CREDENTIAL_NAME.strip() or "AI Software Factory Telegram"
@property
def telegram_bot_token(self) -> str:
"""Get Telegram bot token with trimmed whitespace."""
return self.TELEGRAM_BOT_TOKEN.strip()
@property
def telegram_chat_id(self) -> str:
"""Get Telegram chat ID with trimmed whitespace."""
return self.TELEGRAM_CHAT_ID.strip()
@property
def backend_public_url(self) -> str:
"""Get backend public URL with trimmed whitespace."""
return self.BACKEND_PUBLIC_URL.strip().rstrip("/")
@property
def projects_root(self) -> Path:
"""Get the root directory for generated project artifacts."""
if self.PROJECTS_ROOT.strip():
return Path(self.PROJECTS_ROOT).expanduser().resolve()
return Path(__file__).resolve().parent.parent / "test-project"
@property
def postgres_host(self) -> str:
"""Get PostgreSQL host."""
return self.POSTGRES_HOST.strip()
@property
def postgres_port(self) -> int:
"""Get PostgreSQL port as integer."""
return int(self.POSTGRES_PORT)
@property
def postgres_user(self) -> str:
"""Get PostgreSQL user."""
return self.POSTGRES_USER.strip()
@property
def postgres_password(self) -> str:
"""Get PostgreSQL password."""
return self.POSTGRES_PASSWORD.strip()
@property
def postgres_db(self) -> str:
"""Get PostgreSQL database name."""
return self.POSTGRES_DB.strip()
@property
def postgres_test_db(self) -> str:
"""Get test PostgreSQL database name."""
return self.POSTGRES_TEST_DB.strip()
# Create instance for module-level access
settings = Settings()
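`database_url` resolves in a fixed order: SQLite only when `USE_SQLITE` is on and no PostgreSQL configuration is present, then an explicit `POSTGRES_URL`, then a URL assembled from the individual parts. A standalone sketch of that precedence (a hypothetical helper, not the Settings class; the real code also treats any `POSTGRES_*` environment variable as "configured"):

```python
def pick_database_url(use_sqlite: bool, postgres_url: str = "",
                      sqlite_path: str = "sqlite.db",
                      user: str = "postgres", password: str = "",
                      host: str = "localhost", port: int = 5432,
                      db: str = "ai_software_factory") -> str:
    """Mirror the precedence used by Settings.database_url (simplified)."""
    postgres_configured = bool(postgres_url.strip())
    if use_sqlite and not postgres_configured:
        return f"sqlite:///{sqlite_path}"
    if postgres_configured:
        return postgres_url.strip()
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

print(pick_database_url(True))   # sqlite:///sqlite.db
print(pick_database_url(True, "postgresql://app:s3cret@db:5432/factory"))  # explicit URL wins
print(pick_database_url(False))  # assembled from parts
```

The second call shows why the `fix(db): prefer postgres config in production` behavior holds: an explicit PostgreSQL URL overrides the SQLite default even when `USE_SQLITE` is still true.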

View File

@@ -0,0 +1,322 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Software Factory Dashboard</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background: linear-gradient(135deg, #1a1a2e 0%, #16213e 100%);
min-height: 100vh;
color: #fff;
padding: 20px;
}
.dashboard {
max-width: 1200px;
margin: 0 auto;
}
.header {
text-align: center;
padding: 30px;
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.header h1 {
font-size: 2.5em;
margin-bottom: 10px;
background: linear-gradient(90deg, #00d4ff, #00ff88);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
}
.header p {
color: #888;
font-size: 1.1em;
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 20px;
margin-bottom: 20px;
}
.stat-card {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
border: 1px solid rgba(255, 255, 255, 0.1);
text-align: center;
}
.stat-card h3 {
font-size: 0.9em;
color: #888;
margin-bottom: 10px;
text-transform: uppercase;
letter-spacing: 1px;
}
.stat-card .value {
font-size: 2.5em;
font-weight: bold;
color: #00d4ff;
}
.stat-card.project .value { color: #00ff88; }
.stat-card.active .value { color: #ff6b6b; }
.stat-card.code .value { color: #ffd93d; }
.status-panel {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.status-panel h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #00d4ff;
}
.status-bar {
height: 20px;
background: #2a2a4a;
border-radius: 10px;
overflow: hidden;
margin-bottom: 10px;
}
.status-fill {
height: 100%;
background: linear-gradient(90deg, #00d4ff, #00ff88);
border-radius: 10px;
transition: width 0.5s ease;
}
.message {
padding: 10px;
background: rgba(0, 212, 255, 0.1);
border-radius: 8px;
border-left: 4px solid #00d4ff;
}
.projects-section {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.projects-section h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #00ff88;
}
.projects-list {
display: flex;
flex-wrap: wrap;
gap: 15px;
}
.project-item {
background: rgba(0, 255, 136, 0.1);
padding: 15px 20px;
border-radius: 10px;
border: 1px solid rgba(0, 255, 136, 0.3);
font-size: 0.9em;
}
.project-item.active {
background: rgba(255, 107, 107, 0.1);
border-color: rgba(255, 107, 107, 0.3);
}
.audit-section {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.audit-section h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #ffd93d;
}
.audit-table {
width: 100%;
border-collapse: collapse;
margin-top: 10px;
}
.audit-table th, .audit-table td {
padding: 12px;
text-align: left;
border-bottom: 1px solid rgba(255, 255, 255, 0.1);
}
.audit-table th {
color: #888;
font-weight: 600;
font-size: 0.85em;
}
.audit-table td {
font-size: 0.9em;
}
.audit-table .timestamp {
color: #666;
font-size: 0.8em;
}
.actions-panel {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
border: 1px solid rgba(255, 255, 255, 0.1);
text-align: center;
}
.actions-panel h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #ff6b6b;
}
.actions-panel p {
color: #888;
margin-bottom: 20px;
}
@media (max-width: 768px) {
.stats-grid {
grid-template-columns: 1fr;
}
.projects-list {
flex-direction: column;
}
}
</style>
</head>
<body>
<div class="dashboard">
<div class="header">
<h1>🚀 AI Software Factory</h1>
<p>Real-time Dashboard & Audit Trail Display</p>
</div>
<div class="stats-grid">
<div class="stat-card project">
<h3>Current Project</h3>
<div class="value">test-project</div>
</div>
<div class="stat-card active">
<h3>Active Projects</h3>
<div class="value">1</div>
</div>
<div class="stat-card code">
<h3>Code Generated</h3>
<div class="value">12.4 KB</div>
</div>
<div class="stat-card">
<h3>Status</h3>
<div class="value" id="status-value">running</div>
</div>
</div>
<div class="status-panel">
<h2>📊 Current Status</h2>
<div class="status-bar">
<div class="status-fill" id="status-fill" style="width: 75%"></div>
</div>
<div class="message">
<strong>Generating code...</strong><br>
<span style="color: #888;">Progress: 75%</span>
</div>
</div>
<div class="projects-section">
<h2>📁 Active Projects</h2>
<div class="projects-list">
<div class="project-item active">
<strong>test-project</strong> • Agent: Orchestrator • Last update: just now
</div>
</div>
</div>
<div class="audit-section">
<h2>📜 Audit Trail</h2>
<table class="audit-table">
<thead>
<tr>
<th>Timestamp</th>
<th>Agent</th>
<th>Action</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr>
<td class="timestamp">2026-03-22 01:41:00</td>
<td>Orchestrator</td>
<td>Initialized project</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:05</td>
<td>Git Manager</td>
<td>Initialized git repository</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:10</td>
<td>Code Generator</td>
<td>Generated main.py</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:15</td>
<td>Code Generator</td>
<td>Generated requirements.txt</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:18</td>
<td>Orchestrator</td>
<td>Running</td>
<td style="color: #00d4ff;">In Progress</td>
</tr>
</tbody>
</table>
</div>
<div class="actions-panel">
<h2>⚙️ System Actions</h2>
<p>Dashboard is rendering successfully. The UI manager is active and monitoring all projects.</p>
<p style="color: #888; font-size: 0.9em;">This dashboard is powered by the UIManager component and displays real-time status updates, audit trails, and project information.</p>
</div>
</div>
</body>
</html>


@@ -0,0 +1,310 @@
"""NiceGUI dashboard backed by real database state."""
from __future__ import annotations
from contextlib import closing
from nicegui import ui
try:
from .agents.database_manager import DatabaseManager
from .agents.n8n_setup import N8NSetupAgent
from .config import settings
from .database import get_db_sync, init_db
except ImportError:
from agents.database_manager import DatabaseManager
from agents.n8n_setup import N8NSetupAgent
from config import settings
from database import get_db_sync, init_db
def _resolve_n8n_api_url() -> str:
"""Resolve the configured n8n API base URL."""
if settings.n8n_api_url:
return settings.n8n_api_url
if settings.n8n_webhook_url:
return settings.n8n_webhook_url.split('/webhook', 1)[0].rstrip('/')
return ''
def _render_repository_block(repository: dict | None) -> None:
"""Render repository details and URL when available."""
if not repository:
ui.label('Repository URL not available yet.').classes('factory-muted')
return
owner = repository.get('owner') or 'unknown-owner'
name = repository.get('name') or 'unknown-repo'
mode = repository.get('mode') or 'project'
status = repository.get('status')
repo_url = repository.get('url')
with ui.column().classes('gap-1'):
with ui.row().classes('items-center gap-2'):
ui.label(f'{owner}/{name}').style('font-weight: 700; color: #2f241d;')
ui.label(mode).classes('factory-chip')
if status:
ui.label(status).classes('factory-chip')
if repo_url:
ui.link(repo_url, repo_url, new_tab=True).classes('factory-code')
else:
ui.label('Repository URL not available yet.').classes('factory-muted')
def _load_dashboard_snapshot() -> dict:
"""Load dashboard data from the database."""
db = get_db_sync()
if db is None:
return {'error': 'Database session could not be created'}
with closing(db):
manager = DatabaseManager(db)
try:
return manager.get_dashboard_snapshot(limit=8)
except Exception as exc:
return {'error': f'Database error: {exc}'}
def create_dashboard():
"""Create the main NiceGUI dashboard."""
ui.add_head_html(
"""
<style>
body { background: radial-gradient(circle at top, #f4efe7 0%, #e9e1d4 38%, #d7cec1 100%); }
.factory-shell { max-width: 1240px; margin: 0 auto; }
.factory-panel { background: rgba(255,255,255,0.78); backdrop-filter: blur(18px); border: 1px solid rgba(73,54,40,0.10); border-radius: 24px; box-shadow: 0 24px 60px rgba(84,55,24,0.14); }
.factory-kpi { background: linear-gradient(145deg, rgba(63,94,78,0.94), rgba(29,52,45,0.92)); color: #f8f3eb; border-radius: 18px; padding: 18px; min-height: 128px; }
.factory-muted { color: #745e4c; }
.factory-code { font-family: 'IBM Plex Mono', 'Fira Code', monospace; background: rgba(32,26,20,0.92); color: #f4efe7; border-radius: 14px; padding: 12px; white-space: pre-wrap; }
.factory-chip { background: rgba(173, 129, 82, 0.14); color: #6b4b2e; border-radius: 999px; padding: 4px 10px; font-size: 12px; }
</style>
"""
)
async def setup_n8n_workflow_action() -> None:
api_url = _resolve_n8n_api_url()
if not api_url:
ui.notify('Configure N8N_API_URL or N8N_WEBHOOK_URL first', color='negative')
return
agent = N8NSetupAgent(api_url=api_url, webhook_token=settings.n8n_api_key)
result = await agent.setup(
webhook_path='telegram',
backend_url=f'{settings.backend_public_url}/generate',
force_update=True,
)
db = get_db_sync()
if db is not None:
with closing(db):
DatabaseManager(db).log_system_event(
component='n8n',
level='INFO' if result.get('status') == 'success' else 'ERROR',
message=result.get('message', str(result)),
)
ui.notify(result.get('message', 'n8n setup finished'), color='positive' if result.get('status') == 'success' else 'negative')
dashboard_body.refresh()
def init_db_action() -> None:
result = init_db()
ui.notify(result.get('message', 'Database initialized'), color='positive' if result.get('status') == 'success' else 'negative')
dashboard_body.refresh()
@ui.refreshable
def dashboard_body() -> None:
snapshot = _load_dashboard_snapshot()
if snapshot.get('error'):
with ui.card().classes('factory-panel w-full max-w-4xl mx-auto q-pa-xl'):
ui.label('Dashboard unavailable').style('font-size: 1.5rem; font-weight: 700; color: #5c2d1f;')
ui.label(snapshot['error']).classes('factory-muted')
ui.button('Initialize Database', on_click=init_db_action).props('unelevated')
return
summary = snapshot['summary']
projects = snapshot['projects']
correlations = snapshot['correlations']
system_logs = snapshot['system_logs']
project_repository_map = {
project_bundle['project']['project_id']: {
'project_name': project_bundle['project']['project_name'],
'repository': project_bundle.get('repository') or project_bundle['project'].get('repository'),
}
for project_bundle in projects
if project_bundle.get('project')
}
with ui.column().classes('factory-shell w-full gap-4 q-pa-lg'):
with ui.card().classes('factory-panel w-full q-pa-lg'):
with ui.row().classes('items-center justify-between w-full'):
with ui.column().classes('gap-1'):
ui.label('AI Software Factory').style('font-size: 2.3rem; font-weight: 800; color: #302116;')
ui.label('Operational dashboard with project audit, prompt traces, and n8n controls.').classes('factory-muted')
with ui.row().classes('items-center gap-2'):
ui.button('Refresh', on_click=dashboard_body.refresh).props('outline')
ui.button('Initialize DB', on_click=init_db_action).props('unelevated color=dark')
ui.button('Provision n8n Workflow', on_click=setup_n8n_workflow_action).props('unelevated color=accent')
with ui.grid(columns=4).classes('w-full gap-4'):
metrics = [
('Projects', summary['total_projects'], 'Tracked generation requests'),
('Completed', summary['completed_projects'], 'Finished project runs'),
('Prompts', summary['prompt_events'], 'Recorded originating prompts'),
('Code Changes', summary['code_changes'], 'Audited generated file writes'),
]
for title, value, subtitle in metrics:
with ui.card().classes('factory-kpi'):
ui.label(title).style('font-size: 0.78rem; text-transform: uppercase; letter-spacing: 0.08em; opacity: 0.8;')
ui.label(str(value)).style('font-size: 2.1rem; font-weight: 800; margin-top: 6px;')
ui.label(subtitle).style('font-size: 0.9rem; opacity: 0.78; margin-top: 8px;')
tabs = ui.tabs().classes('w-full')
overview_tab = ui.tab('Overview')
projects_tab = ui.tab('Projects')
trace_tab = ui.tab('Prompt Trace')
system_tab = ui.tab('System')
with ui.tab_panels(tabs, value=overview_tab).classes('w-full'):
with ui.tab_panel(overview_tab):
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Project Pipeline').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
if projects:
for project_bundle in projects[:4]:
project = project_bundle['project']
with ui.column().classes('gap-1 q-mt-md'):
with ui.row().classes('justify-between items-center'):
ui.label(project['project_name']).style('font-weight: 700; color: #2f241d;')
ui.label(project['status']).classes('factory-chip')
ui.linear_progress(value=(project['progress'] or 0) / 100, show_value=False).classes('w-full')
ui.label(project['message'] or 'No status message').classes('factory-muted')
else:
ui.label('No projects in the database yet.').classes('factory-muted')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('n8n and Runtime').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
rows = [
('Backend URL', settings.backend_public_url),
('Project Root', str(settings.projects_root)),
('n8n API URL', _resolve_n8n_api_url() or 'Not configured'),
('Running Projects', str(summary['running_projects'])),
('Errored Projects', str(summary['error_projects'])),
]
for label, value in rows:
with ui.row().classes('justify-between w-full q-mt-sm'):
ui.label(label).classes('factory-muted')
ui.label(value).style('font-weight: 600; color: #3a281a;')
with ui.tab_panel(projects_tab):
if not projects:
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('No project data available yet.').classes('factory-muted')
for project_bundle in projects:
project = project_bundle['project']
with ui.expansion(f"{project['project_name']} · {project['status']}", icon='folder').classes('factory-panel w-full q-mb-md'):
with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
with ui.card().classes('q-pa-md'):
ui.label('Repository').style('font-weight: 700; color: #3a281a;')
_render_repository_block(project_bundle.get('repository') or project.get('repository'))
with ui.card().classes('q-pa-md'):
ui.label('Prompt').style('font-weight: 700; color: #3a281a;')
prompts = project_bundle.get('prompts', [])
if prompts:
prompt = prompts[0]
ui.markdown(f"**Requested features:** {', '.join(prompt['features']) or 'None'}")
ui.markdown(f"**Tech stack:** {', '.join(prompt['tech_stack']) or 'None'}")
ui.label(prompt['prompt_text']).classes('factory-code')
else:
ui.label('No prompt recorded.').classes('factory-muted')
with ui.card().classes('q-pa-md'):
ui.label('Generated Changes').style('font-weight: 700; color: #3a281a;')
changes = project_bundle.get('code_changes', [])
if changes:
for change in changes:
with ui.row().classes('justify-between items-start w-full q-mt-sm'):
ui.label(change['file_path'] or 'unknown file').style('font-weight: 600; color: #2f241d;')
ui.label(change['action_type']).classes('factory-chip')
ui.label(change['diff_summary'] or change['details']).classes('factory-muted')
else:
ui.label('No code changes recorded.').classes('factory-muted')
with ui.grid(columns=2).classes('w-full gap-4 q-pa-md'):
with ui.card().classes('q-pa-md'):
ui.label('Recent Logs').style('font-weight: 700; color: #3a281a;')
logs = project_bundle.get('logs', [])[:6]
if logs:
for log in logs:
ui.markdown(f"- {log['timestamp'] or 'n/a'} · {log['level']} · {log['message']}")
else:
ui.label('No project logs yet.').classes('factory-muted')
with ui.card().classes('q-pa-md'):
ui.label('Audit Trail').style('font-weight: 700; color: #3a281a;')
audits = project_bundle.get('audit_trail', [])[:6]
if audits:
for audit in audits:
ui.markdown(f"- {audit['timestamp'] or 'n/a'} · {audit['action']} · {audit['details']}")
else:
ui.label('No audit events yet.').classes('factory-muted')
with ui.tab_panel(trace_tab):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Prompt to Code Correlation').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
ui.label('Each prompt entry is linked to the generated files recorded after that prompt for the same project.').classes('factory-muted')
if correlations:
for correlation in correlations:
correlation_project = project_repository_map.get(correlation['project_id'], {})
with ui.card().classes('q-pa-md q-mt-md'):
ui.label(correlation_project.get('project_name') or correlation['project_id']).style('font-size: 1rem; font-weight: 700; color: #2f241d;')
_render_repository_block(correlation_project.get('repository'))
ui.label(correlation['prompt_text']).classes('factory-code q-mt-sm')
if correlation['changes']:
for change in correlation['changes']:
ui.markdown(
f"- **{change['file_path'] or 'unknown'}** · {change['change_type']} · {change['diff_summary'] or change['details']}"
)
else:
ui.label('No code changes correlated to this prompt yet.').classes('factory-muted')
else:
ui.label('No prompt traces recorded yet.').classes('factory-muted')
with ui.tab_panel(system_tab):
with ui.grid(columns=2).classes('w-full gap-4'):
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('System Logs').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
if system_logs:
for log in system_logs:
ui.markdown(f"- {log['timestamp'] or 'n/a'} · **{log['component']}** · {log['level']} · {log['message']}")
else:
ui.label('No system logs yet.').classes('factory-muted')
with ui.card().classes('factory-panel q-pa-lg'):
ui.label('Important Endpoints').style('font-size: 1.25rem; font-weight: 700; color: #3a281a;')
endpoints = [
'/health',
'/generate',
'/projects',
'/audit/projects',
'/audit/prompts',
'/audit/changes',
'/audit/correlations',
'/n8n/health',
'/n8n/setup',
]
for endpoint in endpoints:
ui.label(endpoint).classes('factory-code q-mt-sm')
dashboard_body()
ui.timer(10.0, dashboard_body.refresh)
def run_app(port=None, reload=False, browser=True, storage_secret=None):
    """Run the NiceGUI app standalone."""
    ui.run(
        title='AI Software Factory Dashboard',
        port=port or 8080,
        reload=reload,
        show=browser,  # ui.run's parameter for opening a browser window is `show`
        storage_secret=storage_secret,
    )
if __name__ in {'__main__', '__console__'}:
create_dashboard()
run_app()


@@ -0,0 +1,230 @@
"""Database connection and session management."""
from collections.abc import Generator
from pathlib import Path
from urllib.parse import urlparse
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine, event, text
from sqlalchemy.engine import Engine
from sqlalchemy.orm import Session, sessionmaker
try:
from .config import settings
from .models import Base
except ImportError:
from config import settings
from models import Base
def get_database_runtime_summary() -> dict[str, str]:
"""Return a human-readable summary of the effective database backend."""
if settings.use_sqlite:
db_path = str(Path(settings.SQLITE_DB_PATH or "/tmp/ai_software_factory_test.db").expanduser().resolve())
return {
"backend": "sqlite",
"target": db_path,
"database": db_path,
}
parsed = urlparse(settings.database_url)
database_name = parsed.path.lstrip("/") or "unknown"
host = parsed.hostname or "unknown-host"
port = str(parsed.port or 5432)
return {
"backend": parsed.scheme.split("+", 1)[0] or "postgresql",
"target": f"{host}:{port}/{database_name}",
"database": database_name,
}
_ENGINE: Engine | None = None

def get_engine() -> Engine:
    """Create (once per process) and return a SQLAlchemy engine with connection pooling."""
    global _ENGINE
    if _ENGINE is not None:
        # Reuse the cached engine so every session shares one connection pool.
        return _ENGINE
    # Use SQLite for tests, PostgreSQL for production
    if settings.use_sqlite:
        db_path = settings.SQLITE_DB_PATH or "/tmp/ai_software_factory_test.db"
        Path(db_path).expanduser().resolve().parent.mkdir(parents=True, exist_ok=True)
        db_url = f"sqlite:///{db_path}"
        # SQLite-specific configuration - no pooling for SQLite
        engine = create_engine(
            db_url,
            connect_args={"check_same_thread": False},
            echo=settings.LOG_LEVEL == "DEBUG"
        )
    else:
        db_url = settings.database_url
        # PostgreSQL-specific configuration
        engine = create_engine(
            db_url,
            pool_size=settings.DB_POOL_SIZE or 10,
            max_overflow=settings.DB_MAX_OVERFLOW or 20,
            pool_pre_ping=True,  # validate pooled connections before handing them out
            echo=settings.LOG_LEVEL == "DEBUG",
            pool_timeout=settings.DB_POOL_TIMEOUT or 30
        )

        # Event listeners for connection checkout/checkin (PostgreSQL only)
        @event.listens_for(engine, "checkout")
        def receive_checkout(dbapi_connection, connection_record, connection_proxy):
            """Log connection checkout for audit purposes."""
            if settings.LOG_LEVEL in ("DEBUG", "INFO"):
                print("DB connection checked out from pool")

        @event.listens_for(engine, "checkin")
        def receive_checkin(dbapi_connection, connection_record):
            """Log connection checkin for audit purposes."""
            if settings.LOG_LEVEL == "DEBUG":
                print("DB connection returned to pool")
    _ENGINE = engine
    return engine
def get_session() -> Generator[Session, None, None]:
"""Yield a managed database session."""
engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
session = SessionLocal()
try:
yield session
session.commit()
except Exception:
session.rollback()
raise
finally:
session.close()
def get_db() -> Generator[Session, None, None]:
"""Dependency for FastAPI routes that need database access."""
yield from get_session()
def get_db_sync() -> Session:
"""Get a database session directly (for non-FastAPI/NiceGUI usage)."""
engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
session = SessionLocal()
return session
def get_db_session() -> Session:
    """Get a database session directly (for non-FastAPI usage).

    The caller is responsible for committing and closing the session.
    """
    return get_db_sync()
def get_alembic_config(database_url: str | None = None) -> Config:
"""Return an Alembic config bound to the active database URL."""
package_root = Path(__file__).resolve().parent
alembic_ini = package_root / "alembic.ini"
config = Config(str(alembic_ini))
config.set_main_option("script_location", str(package_root / "alembic"))
config.set_main_option("sqlalchemy.url", database_url or settings.database_url)
return config
def run_migrations(database_url: str | None = None) -> dict:
"""Apply Alembic migrations to the configured database."""
try:
config = get_alembic_config(database_url)
command.upgrade(config, "head")
return {"status": "success", "message": "Database migrations applied."}
except Exception as exc:
return {"status": "error", "message": str(exc)}
def init_db() -> dict:
    """Initialize database tables, creating the database itself if needed."""
    if settings.use_sqlite:
        result = run_migrations()
        if result["status"] == "success":
            print("SQLite database migrations applied successfully.")
            return {"status": "success", "message": "SQLite database initialized via migrations."}
        # Migrations failed; fall back to creating tables from the ORM metadata.
        engine = get_engine()
        try:
            Base.metadata.create_all(bind=engine)
            print("SQLite database tables created successfully.")
            return {"status": "success", "message": "SQLite database initialized with metadata fallback."}
        except Exception as e:
            print(f"Error initializing SQLite database: {e}")
            return {'status': 'error', 'message': f'Error: {e}'}
    # PostgreSQL
    db_url = settings.database_url
    db_name = urlparse(db_url).path.lstrip('/') or 'ai_software_factory'
    try:
        engine = create_engine(db_url)
        try:
            # A successful connection means the database already exists.
            with engine.connect() as conn:
                conn.execute(text("SELECT 1"))
            print(f"PostgreSQL database '{db_name}' already exists.")
        except Exception:
            # Connecting failed, most likely because the database is missing.
            # CREATE DATABASE cannot run inside a transaction, so connect to the
            # 'postgres' maintenance database with autocommit to create it.
            admin_url = db_url.rsplit('/', 1)[0] + '/postgres'
            admin_engine = create_engine(admin_url, isolation_level='AUTOCOMMIT')
            try:
                with admin_engine.connect() as conn:
                    conn.execute(text(f'CREATE DATABASE "{db_name}"'))
                print(f"PostgreSQL database '{db_name}' created.")
            except Exception as db_error:
                print(f"Could not create database: {db_error}")
            finally:
                admin_engine.dispose()
        migration_result = run_migrations(db_url)
        if migration_result["status"] == "success":
            print(f"PostgreSQL database '{db_name}' migrations applied successfully.")
            return {'status': 'success', 'message': f'PostgreSQL database "{db_name}" initialized via migrations.'}
        # Migrations failed; fall back to creating tables from the ORM metadata.
        Base.metadata.create_all(bind=engine)
        print(f"PostgreSQL database '{db_name}' tables created successfully.")
        return {'status': 'success', 'message': f'PostgreSQL database "{db_name}" initialized with metadata fallback.'}
    except Exception as e:
        print(f"Error initializing PostgreSQL database: {e}")
        return {'status': 'error', 'message': f'Error: {e}'}
def drop_db() -> dict:
"""Drop all database tables (use with caution!)."""
if settings.use_sqlite:
engine = get_engine()
try:
Base.metadata.drop_all(bind=engine)
print("SQLite database tables dropped successfully.")
return {'status': 'success', 'message': 'SQLite tables dropped.'}
except Exception as e:
print(f"Error dropping SQLite tables: {str(e)}")
return {'status': 'error', 'message': str(e)}
else:
db_url = settings.database_url
db_name = db_url.split('/')[-1] if '/' in db_url else 'ai_software_factory'
try:
engine = create_engine(db_url)
Base.metadata.drop_all(bind=engine)
print(f"PostgreSQL database '{db_name}' tables dropped successfully.")
return {'status': 'success', 'message': f'PostgreSQL "{db_name}" tables dropped.'}
except Exception as e:
print(f"Error dropping PostgreSQL tables: {str(e)}")
return {'status': 'error', 'message': str(e)}
def create_migration_script() -> str:
"""Generate a migration script for database schema changes."""
return """See ai_software_factory/alembic/versions for managed schema migrations."""
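The PostgreSQL branch of `get_database_runtime_summary` above reduces a SQLAlchemy DSN to a short backend/target summary. A self-contained sketch of that parsing, using a made-up DSN for illustration:

```python
from urllib.parse import urlparse

def summarize_dsn(database_url: str) -> dict[str, str]:
    """Sketch of the PostgreSQL branch of get_database_runtime_summary."""
    parsed = urlparse(database_url)
    database_name = parsed.path.lstrip('/') or 'unknown'
    host = parsed.hostname or 'unknown-host'
    port = str(parsed.port or 5432)  # default PostgreSQL port when the DSN omits it
    return {
        # 'postgresql+psycopg2' collapses to 'postgresql'
        'backend': parsed.scheme.split('+', 1)[0] or 'postgresql',
        'target': f'{host}:{port}/{database_name}',
        'database': database_name,
    }

print(summarize_dsn('postgresql+psycopg2://user:secret@db.internal/factory'))
# → {'backend': 'postgresql', 'target': 'db.internal:5432/factory', 'database': 'factory'}
```

Note that credentials never appear in the summary, which is why the result is safe to expose through the `/health` endpoint.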


@@ -0,0 +1,48 @@
"""Frontend module for NiceGUI with FastAPI integration.
This module provides the NiceGUI frontend that can be initialized with a FastAPI app.
The dashboard shown is from dashboard_ui.py with real-time database data.
"""
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from nicegui import app, ui
try:
from .dashboard_ui import create_dashboard
except ImportError:
from dashboard_ui import create_dashboard
def init(fastapi_app: FastAPI, storage_secret: str = 'Secr2t!') -> None:
"""Initialize the NiceGUI frontend with the FastAPI app.
Args:
fastapi_app: The FastAPI application instance.
storage_secret: Secret for persistent user storage; override the weak default in production.
"""
def render_dashboard_page() -> None:
create_dashboard()
# NOTE dark mode will be persistent for each user across tabs and server restarts
ui.dark_mode().bind_value(app.storage.user, 'dark_mode')
ui.checkbox('dark mode').bind_value(app.storage.user, 'dark_mode')
@ui.page('/')
def home() -> None:
render_dashboard_page()
@ui.page('/show')
def show() -> None:
render_dashboard_page()
@fastapi_app.get('/dashboard', include_in_schema=False)
def dashboard_redirect() -> RedirectResponse:
return RedirectResponse(url='/', status_code=307)
ui.run_with(
fastapi_app,
storage_secret=storage_secret, # NOTE setting a secret is optional but allows for persistent storage per user
)

ai_software_factory/main.py Normal file

@@ -0,0 +1,374 @@
#!/usr/bin/env python3
"""AI Software Factory - Main application with FastAPI backend and NiceGUI frontend.
This application uses FastAPI to:
1. Provide HTTP API endpoints
2. Host NiceGUI frontend via ui.run_with()
The NiceGUI frontend provides:
1. Interactive dashboard at /
2. Real-time data visualization
3. Audit trail display
"""
from __future__ import annotations
from contextlib import asynccontextmanager
import json
import re
from pathlib import Path
from typing import Annotated
from uuid import uuid4
from fastapi import Depends, FastAPI, HTTPException, Query
from pydantic import BaseModel, Field
from sqlalchemy.orm import Session
try:
from . import __version__, frontend
from . import database as database_module
from .agents.database_manager import DatabaseManager
from .agents.orchestrator import AgentOrchestrator
from .agents.n8n_setup import N8NSetupAgent
from .agents.ui_manager import UIManager
from .models import ProjectHistory, ProjectLog, SystemLog
except ImportError:
import frontend
import database as database_module
from agents.database_manager import DatabaseManager
from agents.orchestrator import AgentOrchestrator
from agents.n8n_setup import N8NSetupAgent
from agents.ui_manager import UIManager
from models import ProjectHistory, ProjectLog, SystemLog
__version__ = "0.0.1"
@asynccontextmanager
async def lifespan(_app: FastAPI):
"""Log resolved runtime configuration when the app starts."""
runtime = database_module.get_database_runtime_summary()
print(
f"Runtime configuration: database_backend={runtime['backend']} target={runtime['target']}"
)
yield
app = FastAPI(lifespan=lifespan)
DbSession = Annotated[Session, Depends(database_module.get_db)]
PROJECT_ID_PATTERN = re.compile(r"[^a-z0-9]+")
class SoftwareRequest(BaseModel):
"""Request body for software generation."""
name: str = Field(min_length=1, max_length=255)
description: str = Field(min_length=1, max_length=255)
features: list[str] = Field(default_factory=list)
tech_stack: list[str] = Field(default_factory=list)
class N8NSetupRequest(BaseModel):
"""Request body for n8n workflow provisioning."""
api_url: str | None = None
api_key: str | None = None
webhook_path: str = "telegram"
backend_url: str | None = None
force_update: bool = False
def _build_project_id(name: str) -> str:
"""Create a stable project id from the requested name."""
slug = PROJECT_ID_PATTERN.sub("-", name.strip().lower()).strip("-") or "project"
return f"{slug}-{uuid4().hex[:8]}"
def _serialize_project(history: ProjectHistory) -> dict:
"""Serialize a project history row for API responses."""
return {
"history_id": history.id,
"project_id": history.project_id,
"name": history.project_name,
"description": history.description,
"status": history.status,
"progress": history.progress,
"message": history.message,
"current_step": history.current_step,
"error_message": history.error_message,
"created_at": history.created_at.isoformat() if history.created_at else None,
"updated_at": history.updated_at.isoformat() if history.updated_at else None,
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
}
def _serialize_project_log(log: ProjectLog) -> dict:
"""Serialize a project log row."""
return {
"id": log.id,
"history_id": log.history_id,
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None,
}
def _serialize_system_log(log: SystemLog) -> dict:
"""Serialize a system log row."""
return {
"id": log.id,
"component": log.component,
"level": log.log_level,
"message": log.log_message,
"user_agent": log.user_agent,
"ip_address": log.ip_address,
"timestamp": log.created_at.isoformat() if log.created_at else None,
}
def _serialize_audit_item(item: dict) -> dict:
"""Return audit-shaped dictionaries unchanged for API output."""
return item
def _compose_prompt_text(request: SoftwareRequest) -> str:
"""Render the originating software request into a stable prompt string."""
features = ", ".join(request.features) if request.features else "None"
tech_stack = ", ".join(request.tech_stack) if request.tech_stack else "None"
return (
f"Name: {request.name}\n"
f"Description: {request.description}\n"
f"Features: {features}\n"
f"Tech Stack: {tech_stack}"
)
def _project_root(project_id: str) -> Path:
"""Resolve the filesystem location for a generated project."""
return database_module.settings.projects_root / project_id
def _resolve_n8n_api_url(explicit_url: str | None = None) -> str:
"""Resolve the effective n8n API URL from explicit input or settings."""
if explicit_url and explicit_url.strip():
return explicit_url.strip()
if database_module.settings.n8n_api_url:
return database_module.settings.n8n_api_url
webhook_url = database_module.settings.n8n_webhook_url
if webhook_url:
return webhook_url.split("/webhook", 1)[0].rstrip("/")
return ""
@app.get('/api')
def read_api_info():
"""Return service metadata for API clients."""
return {
'service': 'AI Software Factory',
'version': __version__,
'endpoints': [
'/',
'/api',
'/health',
'/generate',
'/projects',
'/status/{project_id}',
'/audit/projects',
'/audit/logs',
'/audit/system/logs',
'/audit/prompts',
'/audit/changes',
'/audit/lineage',
'/audit/correlations',
'/n8n/health',
'/n8n/setup',
],
}
@app.get('/health')
def health_check():
"""Health check endpoint."""
runtime = database_module.get_database_runtime_summary()
return {
'status': 'healthy',
'database': runtime['backend'],
'database_target': runtime['target'],
'database_name': runtime['database'],
}
@app.post('/generate')
async def generate_software(request: SoftwareRequest, db: DbSession):
"""Create and record a software-generation request."""
database_module.init_db()
project_id = _build_project_id(request.name)
prompt_text = _compose_prompt_text(request)
orchestrator = AgentOrchestrator(
project_id=project_id,
project_name=request.name,
description=request.description,
features=request.features,
tech_stack=request.tech_stack,
db=db,
prompt_text=prompt_text,
)
result = await orchestrator.run()
manager = DatabaseManager(db)
manager.log_system_event(
component='api',
level='INFO' if result['status'] == 'completed' else 'ERROR',
message=f"Generated project {project_id} with {len(result.get('changed_files', []))} artifact(s)",
)
history = manager.get_project_by_id(project_id)
project_logs = manager.get_project_logs(history.id)
response_data = _serialize_project(history)
response_data['logs'] = [_serialize_project_log(log) for log in project_logs]
response_data['ui_data'] = result.get('ui_data')
response_data['features'] = request.features
response_data['tech_stack'] = request.tech_stack
response_data['project_root'] = result.get('project_root', str(_project_root(project_id)))
response_data['changed_files'] = result.get('changed_files', [])
response_data['repository'] = result.get('repository')
return {'status': result['status'], 'data': response_data}
@app.get('/projects')
def list_projects(db: DbSession):
"""List recorded projects."""
manager = DatabaseManager(db)
projects = manager.get_all_projects()
return {'projects': [_serialize_project(project) for project in projects]}
@app.get('/status/{project_id}')
def get_project_status(project_id: str, db: DbSession):
"""Get the current status for a single project."""
manager = DatabaseManager(db)
history = manager.get_project_by_id(project_id)
if history is None:
raise HTTPException(status_code=404, detail='Project not found')
return _serialize_project(history)
@app.get('/audit/projects')
def get_audit_projects(db: DbSession):
"""Return projects together with their related logs and audit data."""
manager = DatabaseManager(db)
projects = []
for history in manager.get_all_projects():
project_data = _serialize_project(history)
audit_data = manager.get_project_audit_data(history.project_id)
project_data['logs'] = audit_data['logs']
project_data['actions'] = audit_data['actions']
project_data['audit_trail'] = audit_data['audit_trail']
projects.append(project_data)
return {'projects': projects}
@app.get('/audit/prompts')
def get_prompt_audit(db: DbSession, project_id: str | None = Query(default=None)):
"""Return stored prompt submissions."""
manager = DatabaseManager(db)
return {'prompts': [_serialize_audit_item(item) for item in manager.get_prompt_events(project_id=project_id)]}
@app.get('/audit/changes')
def get_code_change_audit(db: DbSession, project_id: str | None = Query(default=None)):
"""Return recorded code changes."""
manager = DatabaseManager(db)
return {'changes': [_serialize_audit_item(item) for item in manager.get_code_changes(project_id=project_id)]}
@app.get('/audit/lineage')
def get_prompt_change_lineage(db: DbSession, project_id: str | None = Query(default=None)):
"""Return explicit prompt-to-code lineage rows."""
manager = DatabaseManager(db)
return {'lineage': manager.get_prompt_change_links(project_id=project_id)}
@app.get('/audit/correlations')
def get_prompt_change_correlations(db: DbSession, project_id: str | None = Query(default=None)):
"""Return prompt-to-change correlations for generated projects."""
manager = DatabaseManager(db)
return {'correlations': manager.get_prompt_change_correlations(project_id=project_id)}
@app.get('/audit/logs')
def get_audit_logs(db: DbSession):
"""Return all project logs ordered newest first."""
logs = db.query(ProjectLog).order_by(ProjectLog.id.desc()).all()
return {'logs': [_serialize_project_log(log) for log in logs]}
@app.get('/audit/system/logs')
def get_system_audit_logs(
db: DbSession,
component: str | None = Query(default=None),
):
"""Return system logs with optional component filtering."""
query = db.query(SystemLog).order_by(SystemLog.id.desc())
if component:
query = query.filter(SystemLog.component == component)
return {'logs': [_serialize_system_log(log) for log in query.all()]}
@app.get('/n8n/health')
async def get_n8n_health():
"""Check whether the configured n8n instance is reachable."""
api_url = _resolve_n8n_api_url()
if not api_url:
return {'status': 'error', 'message': 'N8N_API_URL or N8N_WEBHOOK_URL is not configured'}
agent = N8NSetupAgent(api_url=api_url, webhook_token=database_module.settings.n8n_api_key)
result = await agent.health_check()
return {'status': 'ok' if not result.get('error') else 'error', 'data': result}
@app.post('/n8n/setup')
async def setup_n8n_workflow(request: N8NSetupRequest, db: DbSession):
"""Create or update the n8n Telegram workflow."""
api_url = _resolve_n8n_api_url(request.api_url)
if not api_url:
raise HTTPException(status_code=400, detail='n8n API URL is not configured')
agent = N8NSetupAgent(
api_url=api_url,
webhook_token=(request.api_key or database_module.settings.n8n_api_key),
)
result = await agent.setup(
webhook_path=request.webhook_path,
backend_url=request.backend_url or f"{database_module.settings.backend_public_url}/generate",
force_update=request.force_update,
telegram_bot_token=database_module.settings.telegram_bot_token,
telegram_credential_name=database_module.settings.n8n_telegram_credential_name,
)
manager = DatabaseManager(db)
log_level = 'INFO' if result.get('status') != 'error' else 'ERROR'
manager.log_system_event(
component='n8n',
level=log_level,
message=result.get('message', json.dumps(result)),
)
return result
@app.post('/init-db')
def initialize_database():
"""Initialize database tables (POST endpoint for NiceGUI to call before dashboard)."""
try:
database_module.init_db()
return {'message': 'Database tables created successfully', 'status': 'success'}
except Exception as e:
return {'message': f'Error initializing database: {str(e)}', 'status': 'error'}
frontend.init(app)
if __name__ == '__main__':
print('Please start the app with the "uvicorn" command as shown in the start.sh script')
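The `/n8n/health` route above derives its status as `'ok' if not result.get('error') else 'error'`. That rule can be checked in isolation; the result dicts below are hypothetical stand-ins, not real n8n responses:

```python
def derive_status(result: dict) -> str:
    # Mirrors the health route: a truthy 'error' value means failure,
    # a missing key or empty string counts as healthy.
    return 'ok' if not result.get('error') else 'error'

print(derive_status({'version': '1.1'}))    # ok
print(derive_status({'error': 'timeout'}))  # error
print(derive_status({'error': ''}))         # ok (empty error is falsy)
```

Note that an empty `'error'` string is treated as healthy, which is why the route checks truthiness rather than key presence.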


@@ -0,0 +1,192 @@
"""Database models for AI Software Factory."""
from datetime import datetime
from enum import Enum
from typing import List, Optional
import logging
from sqlalchemy import (
Column, Integer, String, Text, Boolean, ForeignKey, DateTime, JSON
)
from sqlalchemy.orm import relationship, declarative_base
try:
from .config import settings
except ImportError:
from config import settings
Base = declarative_base()
logger = logging.getLogger(__name__)
class ProjectStatus(str, Enum):
"""Project status enumeration."""
INITIALIZED = "initialized"
STARTED = "started"
RUNNING = "running"
COMPLETED = "completed"
ERROR = "error"
class ProjectHistory(Base):
"""Main project tracking table."""
__tablename__ = "project_history"
id = Column(Integer, primary_key=True)
project_id = Column(String(255), nullable=False)
project_name = Column(String(255), nullable=True)
features = Column(Text, default="")
description = Column(String(255), default="")
status = Column(String(50), default='started')
progress = Column(Integer, default=0)
message = Column(String(500), default="")
current_step = Column(String(255), nullable=True)
total_steps = Column(Integer, nullable=True)
current_step_description = Column(String(1024), nullable=True)
current_step_details = Column(Text, nullable=True)
error_message = Column(Text, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
started_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
completed_at = Column(DateTime, nullable=True)
# Relationships
project_logs = relationship("ProjectLog", back_populates="project_history", cascade="all, delete-orphan")
ui_snapshots = relationship("UISnapshot", back_populates="project_history", cascade="all, delete-orphan")
pull_requests = relationship("PullRequest", back_populates="project_history", cascade="all, delete-orphan")
pull_request_data = relationship("PullRequestData", back_populates="project_history", cascade="all, delete-orphan")
prompt_code_links = relationship("PromptCodeLink", back_populates="project_history", cascade="all, delete-orphan")
class ProjectLog(Base):
"""Detailed log entries for projects."""
__tablename__ = "project_logs"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
log_level = Column(String(50), default="INFO") # INFO, WARNING, ERROR
log_message = Column(String(500), nullable=False)
timestamp = Column(DateTime, nullable=True)
project_history = relationship("ProjectHistory", back_populates="project_logs")
class UISnapshot(Base):
"""UI snapshots for projects."""
__tablename__ = "ui_snapshots"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
snapshot_data = Column(JSON, nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="ui_snapshots")
class PullRequest(Base):
"""Pull request data for projects."""
__tablename__ = "pull_requests"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
pr_number = Column(Integer, nullable=False)
pr_title = Column(String(500), nullable=False)
pr_body = Column(Text)
base = Column(String(255), nullable=False)
user = Column(String(255), nullable=False)
pr_url = Column(String(500), nullable=False)
merged = Column(Boolean, default=False)
merged_at = Column(DateTime, nullable=True)
pr_state = Column(String(50), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="pull_requests")
class PullRequestData(Base):
"""Pull request data for audit API."""
__tablename__ = "pull_request_data"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
pr_number = Column(Integer, nullable=False)
pr_title = Column(String(500), nullable=False)
pr_body = Column(Text)
pr_state = Column(String(50), nullable=False)
pr_url = Column(String(500), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="pull_request_data")
class SystemLog(Base):
"""System-wide log entries."""
__tablename__ = "system_logs"
id = Column(Integer, primary_key=True)
component = Column(String(50), nullable=False)
log_level = Column(String(50), default="INFO")
log_message = Column(String(500), nullable=False)
user_agent = Column(String(255), nullable=True)
ip_address = Column(String(45), nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
class AuditTrail(Base):
"""Audit trail entries for system-wide logging."""
__tablename__ = "audit_trail"
id = Column(Integer, primary_key=True)
component = Column(String(50), nullable=True)
log_level = Column(String(50), default="INFO")
message = Column(String(500), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_id = Column(String(255), nullable=True)
action = Column(String(100), nullable=True)
actor = Column(String(100), nullable=True)
action_type = Column(String(50), nullable=True)
details = Column(Text, nullable=True)
metadata_json = Column(JSON, nullable=True)
class PromptCodeLink(Base):
"""Explicit lineage between a prompt event and a resulting code change."""
__tablename__ = "prompt_code_links"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
project_id = Column(String(255), nullable=False)
prompt_audit_id = Column(Integer, nullable=False)
code_change_audit_id = Column(Integer, nullable=False)
file_path = Column(String(500), nullable=True)
change_type = Column(String(50), nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="prompt_code_links")
class UserAction(Base):
"""User action audit entries."""
__tablename__ = "user_actions"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=True)
user_id = Column(String(100), nullable=True)
action_type = Column(String(100), nullable=True)
actor_type = Column(String(50), nullable=True)
actor_name = Column(String(100), nullable=True)
action_description = Column(String(500), nullable=True)
action_data = Column(JSON, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
class AgentAction(Base):
"""Agent action audit entries."""
__tablename__ = "agent_actions"
id = Column(Integer, primary_key=True)
agent_name = Column(String(100), nullable=False)
action_type = Column(String(100), nullable=False)
success = Column(Boolean, default=True)
message = Column(String(500), nullable=True)
timestamp = Column(DateTime, default=datetime.utcnow)
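Because `ProjectStatus` subclasses both `str` and `Enum`, its members compare equal to their raw string values, which is what lets them be stored directly in the plain `String(50)` status column. A standalone sketch of that behaviour (re-declaring the enum so the snippet runs on its own):

```python
from enum import Enum

class ProjectStatus(str, Enum):
    """Same str-backed enum as in the models file."""
    INITIALIZED = "initialized"
    STARTED = "started"
    RUNNING = "running"
    COMPLETED = "completed"
    ERROR = "error"

# str mixin: members compare equal to their values, so they
# round-trip through a text column without a custom type.
print(ProjectStatus.COMPLETED == "completed")         # True
print(ProjectStatus("error") is ProjectStatus.ERROR)  # True (lookup by value)
print(ProjectStatus.STARTED.value)                    # started
```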


@@ -0,0 +1,10 @@
[pytest]
testpaths = tests
pythonpath = .
addopts = -v --tb=short
filterwarnings =
ignore::DeprecationWarning
asyncio_mode = auto
asyncio_default_fixture_loop_scope = function
asyncio_default_test_loop_scope = function


@@ -0,0 +1,21 @@
fastapi>=0.135.3
uvicorn[standard]==0.27.0
sqlalchemy==2.0.25
psycopg2-binary==2.9.9
pydantic==2.12.5
pydantic-settings==2.1.0
python-multipart==0.0.22
aiofiles==23.2.1
python-telegram-bot==20.7
requests==2.31.0
pytest==7.4.3
pytest-cov==4.1.0
black==23.12.1
isort==5.13.2
flake8==6.1.0
mypy==1.7.1
httpx==0.25.2
nicegui==3.9.0
aiohttp>=3.9.0
pytest-asyncio>=0.23.0
alembic>=1.14.0


@@ -0,0 +1,17 @@
#!/usr/bin/env bash
# use path of this example as working directory; enables starting this script from anywhere
cd "$(dirname "$0")"
if [ "$1" = "prod" ]; then
echo "Starting Uvicorn server in production mode..."
# we also use a single worker in production mode so socket.io connections are always handled by the same worker
uvicorn main:app --workers 1 --log-level info --port 80
elif [ "$1" = "dev" ]; then
echo "Starting Uvicorn server in development mode..."
# reload implies workers = 1
uvicorn main:app --reload --log-level debug --port 8000
else
echo "Invalid parameter. Use 'prod' or 'dev'."
exit 1
fi


@@ -0,0 +1,7 @@
# Test
Test
## Features
## Tech Stack


@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')


@@ -0,0 +1,400 @@
"""Test logging utility for validating agent responses and system outputs."""
import re
from typing import Optional, Dict, Any, List
from datetime import datetime
# Color codes for terminal output
class Colors:
GREEN = '\033[92m'
RED = '\033[91m'
YELLOW = '\033[93m'
BLUE = '\033[94m'
CYAN = '\033[96m'
RESET = '\033[0m'
class TestLogger:
"""Utility class for logging test results and assertions."""
def __init__(self):
self.assertions: List[Dict[str, Any]] = []
self.errors: List[Dict[str, Any]] = []
self.logs: List[str] = []
def log(self, message: str, level: str = 'INFO') -> None:
"""Log an informational message."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"[{timestamp}] [{level}] {message}"
self.logs.append(formatted)
print(formatted)
def success(self, message: str) -> None:
"""Log a success message with green color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.GREEN}[{timestamp}] [✓ PASS] {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def error(self, message: str) -> None:
"""Log an error message with red color and record it in the error list."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.RED}[{timestamp}] [✗ ERROR] {message}{Colors.RESET}"
self.logs.append(formatted)
# Record the error so get_errors() actually returns something.
self.errors.append({'message': message, 'timestamp': timestamp})
print(formatted)
def warning(self, message: str) -> None:
"""Log a warning message with yellow color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.YELLOW}[{timestamp}] [! WARN] {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def info(self, message: str) -> None:
"""Log an info message with blue color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.BLUE}[{timestamp}] [ INFO] {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def assert_contains(self, text: str, expected: str, message: str = '') -> bool:
"""Assert that text contains expected substring."""
try:
contains = expected in text
if contains:
self.success(f"'{expected}' found in text")
self.assertions.append({
'type': 'assert_contains',
'result': 'pass',
'expected': expected,
'message': message or f"'{expected}' in text"
})
return True
else:
self.error(f"✗ Expected '{expected}' not found in text")
self.assertions.append({
'type': 'assert_contains',
'result': 'fail',
'expected': expected,
'message': message or f"'{expected}' in text"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_contains',
'result': 'error',
'expected': expected,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_not_contains(self, text: str, unexpected: str, message: str = '') -> bool:
"""Assert that text does not contain the given substring."""
try:
contains = unexpected in text
if not contains:
self.success(f"'{unexpected}' not found in text")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'pass',
'unexpected': unexpected,
'message': message or f"'{unexpected}' not in text"
})
return True
else:
self.error(f"✗ Unexpected '{unexpected}' found in text")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'fail',
'unexpected': unexpected,
'message': message or f"'{unexpected}' not in text"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'error',
'unexpected': unexpected,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_equal(self, actual: str, expected: str, message: str = '') -> bool:
"""Assert that two strings are equal."""
try:
if actual == expected:
self.success("✓ Strings equal")
self.assertions.append({
'type': 'assert_equal',
'result': 'pass',
'expected': expected,
'message': message or "actual == expected"
})
return True
else:
self.error(f"✗ Strings not equal. Expected: '{expected}', Got: '{actual}'")
self.assertions.append({
'type': 'assert_equal',
'result': 'fail',
'expected': expected,
'actual': actual,
'message': message or "actual == expected"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_equal',
'result': 'error',
'expected': expected,
'actual': actual,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_starts_with(self, text: str, prefix: str, message: str = '') -> bool:
"""Assert that text starts with expected prefix."""
try:
starts_with = text.startswith(prefix)
if starts_with:
self.success(f"✓ Text starts with '{prefix}'")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'pass',
'prefix': prefix,
'message': message or f"text starts with '{prefix}'"
})
return True
else:
self.error(f"✗ Text does not start with '{prefix}'")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'fail',
'prefix': prefix,
'message': message or f"text starts with '{prefix}'"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'error',
'prefix': prefix,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_ends_with(self, text: str, suffix: str, message: str = '') -> bool:
"""Assert that text ends with expected suffix."""
try:
ends_with = text.endswith(suffix)
if ends_with:
self.success(f"✓ Text ends with '{suffix}'")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'pass',
'suffix': suffix,
'message': message or f"text ends with '{suffix}'"
})
return True
else:
self.error(f"✗ Text does not end with '{suffix}'")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'fail',
'suffix': suffix,
'message': message or f"text ends with '{suffix}'"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'error',
'suffix': suffix,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_regex(self, text: str, pattern: str, message: str = '') -> bool:
"""Assert that text matches a regex pattern."""
try:
if re.search(pattern, text):
self.success("✓ Regex pattern matched")
self.assertions.append({
'type': 'assert_regex',
'result': 'pass',
'pattern': pattern,
'message': message or f"text matches regex '{pattern}'"
})
return True
else:
self.error("✗ Regex pattern did not match")
self.assertions.append({
'type': 'assert_regex',
'result': 'fail',
'pattern': pattern,
'message': message or f"text matches regex '{pattern}'"
})
return False
except re.error as e:
self.error(f"✗ Invalid regex pattern: {e}")
self.assertions.append({
'type': 'assert_regex',
'result': 'error',
'pattern': pattern,
'message': message or f"Invalid regex: {e}"
})
return False
except Exception as ex:
self.error(f"Assertion failed with exception: {ex}")
self.assertions.append({
'type': 'assert_regex',
'result': 'error',
'pattern': pattern,
'message': message or f"Assertion failed: {ex}"
})
return False
def assert_length(self, text: str, expected_length: int, message: str = '') -> bool:
"""Assert that text has expected length."""
try:
length = len(text)
if length == expected_length:
self.success(f"✓ Length is {expected_length}")
self.assertions.append({
'type': 'assert_length',
'result': 'pass',
'expected_length': expected_length,
'message': message or f"len(text) == {expected_length}"
})
return True
else:
self.error(f"✗ Length is {length}, expected {expected_length}")
self.assertions.append({
'type': 'assert_length',
'result': 'fail',
'expected_length': expected_length,
'actual_length': length,
'message': message or f"len(text) == {expected_length}"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_length',
'result': 'error',
'expected_length': expected_length,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_key_exists(self, text: str, key: str, message: str = '') -> bool:
"""Assert that a key exists in a JSON-like text."""
try:
if f'"{key}":' in text or f"'{key}':" in text:
self.success(f"✓ Key '{key}' exists")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'pass',
'key': key,
'message': message or f"key '{key}' exists"
})
return True
else:
self.error(f"✗ Key '{key}' not found")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'fail',
'key': key,
'message': message or f"key '{key}' exists"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'error',
'key': key,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_substring_count(self, text: str, substring: str, count: int, message: str = '') -> bool:
"""Assert that substring appears count times in text."""
try:
actual_count = text.count(substring)
if actual_count == count:
self.success(f"✓ Substring appears {count} time(s)")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'pass',
'substring': substring,
'expected_count': count,
'actual_count': actual_count,
'message': message or f"'{substring}' appears {count} times"
})
return True
else:
self.error(f"✗ Substring appears {actual_count} time(s), expected {count}")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'fail',
'substring': substring,
'expected_count': count,
'actual_count': actual_count,
'message': message or f"'{substring}' appears {count} times"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'error',
'substring': substring,
'expected_count': count,
'message': message or f"Assertion failed: {e}"
})
return False
def get_assertion_count(self) -> int:
"""Get total number of assertions made."""
return len(self.assertions)
def get_failure_count(self) -> int:
"""Get number of failed assertions."""
return sum(1 for assertion in self.assertions if assertion.get('result') == 'fail')
def get_success_count(self) -> int:
"""Get number of passed assertions."""
return sum(1 for assertion in self.assertions if assertion.get('result') == 'pass')
def get_logs(self) -> List[str]:
"""Get all log messages."""
return self.logs.copy()
def get_errors(self) -> List[Dict[str, Any]]:
"""Get all error records."""
return self.errors.copy()
def clear(self) -> None:
"""Clear all logs and assertions."""
self.assertions.clear()
self.errors.clear()
self.logs.clear()
def __enter__(self):
"""Context manager entry."""
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Context manager exit."""
return False
# Convenience function for context manager usage
def test_logger():
"""Create and return a TestLogger instance."""
return TestLogger()
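The counting helpers at the end of `TestLogger` reduce to filtering the recorded assertion dicts by their `result` field. A minimal standalone sketch of that bookkeeping, using the same record shape the class appends (note that `'error'` results, i.e. assertions that raised, count as neither pass nor fail):

```python
# Hypothetical assertion records in the same shape TestLogger stores.
assertions = [
    {'type': 'assert_contains', 'result': 'pass'},
    {'type': 'assert_equal', 'result': 'fail'},
    {'type': 'assert_regex', 'result': 'error'},  # raised an exception
    {'type': 'assert_length', 'result': 'pass'},
]

passed = sum(1 for a in assertions if a.get('result') == 'pass')
failed = sum(1 for a in assertions if a.get('result') == 'fail')
print(passed, failed, len(assertions))  # 2 1 4
```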


@@ -1 +0,0 @@
0.0.1

test-project/test/TestApp/.gitignore

@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git


@@ -0,0 +1,11 @@
# TestApp
A test application
## Features
- feature1
- feature2
## Tech Stack
- python
- fastapi


@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')


@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git


@@ -0,0 +1,11 @@
# test-project
Test project description
## Features
- feature-1
- feature-2
## Tech Stack
- python
- fastapi


@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')

test-project/test/test/.gitignore

@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git


@@ -0,0 +1,7 @@
# Test
Test
## Features
## Tech Stack


@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')