10 Commits
0.0.1 ... 0.1.4

Author SHA1 Message Date
45bcbfe80d release: version 0.1.4 🚀
All checks were successful
Upload Python Package / Create Release (push) Successful in 15s
Upload Python Package / deploy (push) Successful in 1m5s
2026-04-02 02:09:40 +02:00
d82b811e55 fix: fix container build, refs NOISSUE 2026-04-02 02:09:35 +02:00
b10c34f3fc release: version 0.1.3 🚀
Some checks failed
Upload Python Package / Create Release (push) Successful in 21s
Upload Python Package / deploy (push) Failing after 39s
2026-04-02 02:04:42 +02:00
f7b8925881 fix: fix version increment logic, refs NOISSUE 2026-04-02 02:04:39 +02:00
78c8bd68cc release: version 0.1.2 🚀
Some checks failed
Upload Python Package / Create Release (push) Successful in 26s
Upload Python Package / deploy (push) Failing after 22s
2026-04-02 02:03:23 +02:00
f17e241871 fix: test version increment logic, refs NOISSUE 2026-04-02 02:03:21 +02:00
55c5fca784 release: version 0.1.1 🚀
Some checks failed
Upload Python Package / Create Release (push) Successful in 15s
Upload Python Package / deploy (push) Failing after 13s
2026-04-02 01:58:17 +02:00
aa0ca2cb7b fix: broken CI build, refs NOISSUE 2026-04-02 01:58:13 +02:00
e824475872 feat: initial release, refs NOISSUE
Some checks failed
Upload Python Package / Create Release (push) Successful in 37s
Upload Python Package / deploy (push) Failing after 38s
2026-04-02 01:43:16 +02:00
simon
0b1384279d Ready to clone and code. 2026-03-14 12:58:13 +00:00
49 changed files with 4465 additions and 112 deletions


@@ -86,8 +86,8 @@ start() {
echo "New version: $new_version"
gitchangelog | grep -v "[rR]elease:" > HISTORY.md
echo $new_version > project_name/VERSION
git add project_name/VERSION HISTORY.md
echo $new_version > ai_software_factory/VERSION
git add ai_software_factory/VERSION HISTORY.md
git commit -m "release: version $new_version 🚀"
echo "creating git tag : $new_version"
git tag $new_version
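The `fix: fix version increment logic` commits above revolve around how `$new_version` is computed before tagging. The derivation itself is not shown in this hunk, so the following is only an assumed sketch of a patch-level semver bump consistent with the 0.1.1 → 0.1.4 release sequence in the commit list:

```python
def bump_patch(version: str) -> str:
    """Increment the patch component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(part) for part in version.strip().split("."))
    return f"{major}.{minor}.{patch + 1}"

print(bump_patch("0.1.3"))  # 0.1.4
```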


@@ -1,38 +0,0 @@
#!/usr/bin/env bash
while getopts a:n:u:d: flag
do
case "${flag}" in
a) author=${OPTARG};;
n) name=${OPTARG};;
u) urlname=${OPTARG};;
d) description=${OPTARG};;
esac
done
echo "Author: $author";
echo "Project Name: $name";
echo "Project URL name: $urlname";
echo "Description: $description";
echo "Renaming project..."
original_author="author_name"
original_name="project_name"
original_urlname="project_urlname"
original_description="project_description"
# for filename in $(find . -name "*.*")
for filename in $(git ls-files)
do
sed -i "s/$original_author/$author/g" $filename
sed -i "s/$original_name/$name/g" $filename
sed -i "s/$original_urlname/$urlname/g" $filename
sed -i "s/$original_description/$description/g" $filename
echo "Renamed $filename"
done
mv project_name $name
# This command runs only once on GHA!
rm -rf .gitea/template.yml
rm -rf project_name
rm -rf project_name.Tests
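The `sed` loop in the deleted script above is a plain per-file string substitution over all tracked files. The same replacement pass can be sketched in Python; the concrete author/name values here are illustrative, taken from this repository's rename:

```python
# Placeholder -> replacement pairs, mirroring the four sed invocations.
replacements = {
    "author_name": "simon",
    "project_name": "ai_software_factory",
    "project_urlname": "ai-test",
    "project_description": "Awesome ai_software_factory created by simon",
}

def rename_placeholders(text: str) -> str:
    """Apply each template placeholder substitution to one file's contents."""
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    return text

print(rename_placeholders("# project_name by author_name"))
```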


@@ -1 +0,0 @@
author: rochacbruno


@@ -41,7 +41,7 @@ jobs:
- name: Check version match
run: |
REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')
if [ "$(cat project_name/VERSION)" = "${GITHUB_REF_NAME}" ] ; then
if [ "$(cat ai_software_factory/VERSION)" = "${GITHUB_REF_NAME}" ] ; then
echo "Version matches successfully!"
else
echo "Version must match!"
@@ -57,5 +57,5 @@ jobs:
run: |
REPOSITORY_OWNER=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $1}' | tr '[:upper:]' '[:lower:]')
REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')
docker build -t "git.disi.dev/$REPOSITORY_OWNER/project_name:$(cat project_name/VERSION)" -f Containerfile ./
docker push "git.disi.dev/$REPOSITORY_OWNER/project_name:$(cat project_name/VERSION)"
docker build -t "git.disi.dev/$REPOSITORY_OWNER/ai_software_factory:$(cat ai_software_factory/VERSION)" -f Containerfile ./
docker push "git.disi.dev/$REPOSITORY_OWNER/ai_software_factory:$(cat ai_software_factory/VERSION)"
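The `awk`/`tr` pipelines in this workflow split `$GITHUB_REPOSITORY` (`owner/repo`) into a lowercased owner and a repo name with `-` mapped to `_`. A sketch of the equivalent image-tag construction (the example repository path is hypothetical):

```python
def image_coordinates(github_repository: str, version: str) -> str:
    """Mirror the awk/tr pipeline: lowercase owner, repo with '-' -> '_'."""
    owner, repo = github_repository.split("/", 1)
    return f"git.disi.dev/{owner.lower()}/{repo.replace('-', '_')}:{version}"

print(image_coordinates("Projects/ai-software-factory", "0.1.4"))
# git.disi.dev/projects/ai_software_factory:0.1.4
```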


@@ -1,48 +0,0 @@
name: Rename the project from template
on: [push]
permissions: write-all
jobs:
rename-project:
if: ${{ !endsWith (gitea.repository, 'Templates/Docker_Image') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# by default, it uses a depth of 1
# this fetches all history so that we can read each commit
fetch-depth: 0
ref: ${{ gitea.head_ref }}
- run: echo "REPOSITORY_NAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}' | tr '-' '_')" >> $GITHUB_ENV
shell: bash
- run: echo "REPOSITORY_URLNAME=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $2}')" >> $GITHUB_ENV
shell: bash
- run: echo "REPOSITORY_OWNER=$(echo "$GITHUB_REPOSITORY" | awk -F '/' '{print $1}')" >> $GITHUB_ENV
shell: bash
- name: Is this still a template
id: is_template
run: echo "::set-output name=is_template::$(ls .gitea/template.yml &> /dev/null && echo true || echo false)"
- name: Rename the project
if: steps.is_template.outputs.is_template == 'true'
run: |
echo "Renaming the project with -a(author) ${{ env.REPOSITORY_OWNER }} -n(name) ${{ env.REPOSITORY_NAME }} -u(urlname) ${{ env.REPOSITORY_URLNAME }}"
.gitea/rename_project.sh -a ${{ env.REPOSITORY_OWNER }} -n ${{ env.REPOSITORY_NAME }} -u ${{ env.REPOSITORY_URLNAME }} -d "Awesome ${{ env.REPOSITORY_NAME }} created by ${{ env.REPOSITORY_OWNER }}"
- name: Remove renaming workflow
if: steps.is_template.outputs.is_template == 'true'
run: |
rm .gitea/workflows/rename_project.yml
rm .gitea/rename_project.sh
- uses: stefanzweifel/git-auto-commit-action@v4
with:
commit_message: "✅ Ready to clone and code."
# commit_options: '--amend --no-edit'
push_options: --force


@@ -1,15 +1,15 @@
# How to develop on this project
project_name welcomes contributions from the community.
ai_software_factory welcomes contributions from the community.
These instructions apply to Linux-based systems (Linux, macOS, BSD, etc.)
## Setting up your own fork of this repo.
- On gitea interface click on `Fork` button.
- Clone your fork of this repo. `git clone git@git.disi.dev:YOUR_GIT_USERNAME/project_urlname.git`
- Enter the directory `cd project_urlname`
- Add upstream repo `git remote add upstream https://git.disi.dev/author_name/project_urlname`
- Clone your fork of this repo. `git clone git@git.disi.dev:YOUR_GIT_USERNAME/ai-test.git`
- Enter the directory `cd ai-test`
- Add upstream repo `git remote add upstream https://git.disi.dev/Projects/ai-test`
- initialize repository for use `make setup`
## Install the project in develop mode


@@ -1,6 +1,6 @@
FROM alpine
WORKDIR /app
COPY ./project_name/* /app
COPY ./ai_software_factory/* /app
CMD ["sh", "/app/hello_world.sh"]


@@ -5,6 +5,53 @@ Changelog
(unreleased)
------------
Fix
~~~
- Fix container build, refs NOISSUE. [Simon Diesenreiter]
0.1.3 (2026-04-02)
------------------
Fix
~~~
- Fix version increment logic, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.2 (2026-04-02)
------------------
Fix
~~~
- Test version increment logic, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.1 (2026-04-01)
------------------
Fix
~~~
- Broken CI build, refs NOISSUE. [Simon Diesenreiter]
Other
~~~~~
0.1.0 (2026-04-01)
------------------
- Feat: initial release, refs NOISSUE. [Simon Diesenreiter]
- ✅ Ready to clone and code. [simon]
0.0.1 (2026-03-14)
------------------
Fix
~~~
- Second initial commit refs NOISSUE. [Simon Diesenreiter]


@@ -17,16 +17,24 @@ help: ## Show the help.
.PHONY: fmt
fmt: issetup ## Format code using black & isort.
$(ENV_PREFIX)isort project_name/
$(ENV_PREFIX)black -l 79 project_name/
$(ENV_PREFIX)isort ai-software-factory/
$(ENV_PREFIX)black -l 79 ai-software-factory/
$(ENV_PREFIX)black -l 79 tests/
.PHONY: test
test: issetup ## Run tests with pytest.
$(ENV_PREFIX)pytest ai-software-factory/tests/ -v --tb=short
.PHONY: test-cov
test-cov: issetup ## Run tests with coverage report.
$(ENV_PREFIX)pytest ai-software-factory/tests/ -v --tb=short --cov=ai-software-factory --cov-report=html --cov-report=term-missing
.PHONY: lint
lint: issetup ## Run pep8, black, mypy linters.
$(ENV_PREFIX)flake8 project_name/
$(ENV_PREFIX)black -l 79 --check project_name/
$(ENV_PREFIX)flake8 ai-software-factory/
$(ENV_PREFIX)black -l 79 --check ai-software-factory/
$(ENV_PREFIX)black -l 79 --check tests/
$(ENV_PREFIX)mypy --ignore-missing-imports project_name/
$(ENV_PREFIX)mypy --ignore-missing-imports ai-software-factory/
.PHONY: release
release: issetup ## Create a new tag for release.
@@ -34,9 +42,9 @@ release: issetup ## Create a new tag for release.
.PHONY: build
build: issetup ## Create a new tag for release.
@docker build -t project_name:$(cat project_name/VERSION) -f Containerfile .
@docker build -t ai-software-factory:$(cat ai-software-factory/VERSION) -f Containerfile .
# This project has been generated from rochacbruno/python-project-template
# __author__ = 'rochacbruno'
# __repo__ = https://github.com/rochacbruno/python-project-template
# __sponsor__ = https://github.com/sponsors/rochacbruno/

README.md

@@ -1,13 +1,215 @@
# project_name
# AI Software Factory
Project description goes here.
Automated software generation service powered by Ollama LLM. This service allows users to specify via Telegram what kind of software they would like, and an agent hosted in Ollama will create it iteratively, testing it while building out the source code and committing to gitea.
## Usage
## Features
- **Telegram Integration**: Receive software requests via Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **Git Integration**: Automatically commits code to gitea
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Beautiful dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with LLMs via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture
```
┌─────────────┐ ┌──────────────┐ ┌──────────┐ ┌─────────┐
│ Telegram │────▶│ n8n Webhook│────▶│ FastAPI │────▶│ Ollama │
└─────────────┘ └──────────────┘ └──────────┘ └─────────┘
┌──────────────┐
│ Git/Gitea │
└──────────────┘
```
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Ollama running locally or on same network
- Gitea instance with API token
- n8n instance for Telegram webhook
### Configuration
Create a `.env` file in the project root:
```bash
$ docker build -t <tagname> -f Containerfile .
# Server
HOST=0.0.0.0
PORT=8000
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=ai-software-factory
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
```
### Build and Run
```bash
# Build Docker image
docker build -t ai-software-factory -f Containerfile .
# Run with Docker Compose
docker-compose up -d
```
### Usage
1. **Send a request via Telegram:**
```
Name: My Awesome App
Description: A web application for managing tasks
Features: user authentication, task CRUD, notifications
```
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000` to see real-time progress
3. **Review PRs in Gitea:**
Check your gitea repository for generated PRs
## API Endpoints
| Endpoint | Method | Description |
|------|------|-------|
| `/` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
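The request body for `POST /generate` is not documented in this README; assuming it mirrors the fields of the Telegram request above (name, description, features — a guess, not a confirmed schema), a client call might be sketched as:

```python
import json
from urllib import request

def build_generate_payload(name: str, description: str, features: list[str]) -> dict:
    # Field names are assumptions based on the Telegram request format above.
    return {"name": name, "description": description, "features": features}

payload = build_generate_payload(
    "My Awesome App",
    "A web application for managing tasks",
    ["user authentication", "task CRUD", "notifications"],
)
req = request.Request(
    "http://yourserver:8000/generate",  # hypothetical host, per Quick Start
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would submit the job; omitted here since the server is hypothetical.
```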
## Development
Read the [CONTRIBUTING.md](CONTRIBUTING.md) file.
### Makefile Targets
```bash
make help # Show available targets
make setup # Initialize repository
make fmt # Format code
make lint # Run linters
make test # Run tests
make test-cov # Run tests with coverage report
make release # Create new release tag
make build # Build Docker image
```
### Running in Development
```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run tests with coverage report
make test-cov
# Run specific test file
pytest tests/test_main.py -v
# Run tests with verbose output
pytest tests/ -v --tb=short
```
### Test Coverage
View HTML coverage report:
```bash
make test-cov
open htmlcov/index.html
```
### Test Structure
```
tests/
├── conftest.py # Pytest fixtures and configuration
├── test_main.py # Tests for main.py FastAPI app
├── test_config.py # Tests for config.py settings
├── test_git_manager.py # Tests for git operations
├── test_ui_manager.py # Tests for UI rendering
├── test_gitea.py # Tests for Gitea API integration
├── test_telegram.py # Tests for Telegram integration
├── test_orchestrator.py # Tests for agent orchestrator
├── test_integration.py # Integration tests for full workflow
├── test_config_integration.py # Configuration integration tests
├── test_agents_integration.py # Agent integration tests
├── test_edge_cases.py # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure
```
ai-software-factory/
├── main.py # FastAPI application
├── config.py # Configuration settings
├── requirements.txt # Python dependencies
├── Containerfile # Docker build file
├── README.md # This file
├── Makefile # Development utilities
├── .env.example # Environment template
├── .gitignore # Git ignore rules
├── HISTORY.md # Changelog
├── pytest.ini # Pytest configuration
├── docker-compose.yml # Multi-service orchestration
├── .env # Environment variables (not in git)
├── tests/ # Test suite
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_*.py # Test files
│ └── pytest.ini
├── agents/
│ ├── __init__.py
│ ├── orchestrator.py # Main agent orchestrator
│ ├── git_manager.py # Git operations
│ ├── ui_manager.py # Web UI management
│ ├── telegram.py # Telegram integration
│ └── gitea.py # Gitea API client
└── n8n/ # n8n webhook configurations
```
## Security Notes
- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License
MIT License - See LICENSE file for details
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines.


@@ -0,0 +1,36 @@
# AI Software Factory Environment Variables
# Server
HOST=0.0.0.0
PORT=8000
LOG_LEVEL=INFO
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-test
GITEA_REPO=ai-test
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=ai_test
POSTGRES_PASSWORD=your_secure_password
POSTGRES_DB=ai_test
# Database Connection Pool Settings
DB_POOL_SIZE=10
DB_MAX_OVERFLOW=20
DB_POOL_RECYCLE=3600
DB_POOL_TIMEOUT=30


@@ -0,0 +1,88 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# IDE
.idea/
.vscode/
*.swp
*.swo
*~
# OS files
.DS_Store
Thumbs.db
# Project specific
.git/
.gitignore
.env
.env.local
.env.*.local
ai-software-factory/
n8n/
ui/
docs/
tests/
# Temporary files
*.tmp
*.temp
*.log


@@ -0,0 +1,73 @@
# Contributing to AI Software Factory
Thank you for your interest in contributing to the AI Software Factory project!
## Code of Conduct
Please note that we have a Code of Conduct that all contributors are expected to follow.
## How to Contribute
### Reporting Bugs
Before creating bug reports, please check existing issues as the bug may have already been reported and fixed.
When reporting a bug, include:
- A clear description of the bug
- Steps to reproduce the bug
- Expected behavior
- Actual behavior
- Screenshots if applicable
- Your environment details (OS, Python version, etc.)
### Suggesting Features
Feature suggestions are welcome! Please create an issue with:
- A clear title and description
- Use cases for the feature
- Any relevant links or references
### Pull Requests
1. Fork the repository
2. Create a new branch (`git checkout -b feature/feature-name`)
3. Make your changes
4. Commit your changes (`git commit -am 'Add some feature'`)
5. Push to the branch (`git push origin feature/feature-name`)
6. Create a new Pull Request
### Style Guide
- Follow the existing code style
- Add comments for complex logic
- Write tests for new features
- Update documentation as needed
## Development Setup
1. Clone the repository
2. Create a virtual environment
3. Install dependencies (`pip install -r requirements.txt`)
4. Run tests (`make test`)
5. Make your changes
6. Run tests again to ensure nothing is broken
## Commit Messages
Follow the conventional commits format:
```
feat: add new feature
fix: fix bug
docs: update documentation
style: format code
refactor: refactor code
test: add tests
chore: update dependencies
```
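A small sketch that checks a commit subject against the types listed above (the type list is taken verbatim from this guide):

```python
import re

COMMIT_TYPES = ("feat", "fix", "docs", "style", "refactor", "test", "chore")
PATTERN = re.compile(rf"^(?:{'|'.join(COMMIT_TYPES)}): .+")

def is_conventional(subject: str) -> bool:
    """Return True if the commit subject matches `type: description`."""
    return bool(PATTERN.match(subject))

print(is_conventional("fix: fix container build, refs NOISSUE"))  # True
print(is_conventional("Ready to clone and code."))                # False
```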
## Questions?
Feel free to open an issue or discussion for any questions.


@@ -0,0 +1,43 @@
# AI Software Factory Dockerfile
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1
# Set work directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Set up environment file if it exists, otherwise use .env.example
RUN if [ -f .env ]; then \
cat .env; \
elif [ -f .env.example ]; then \
cp .env.example .env; \
fi
# Initialize database tables (SQLite by default; the POSTGRES_* environment variables can point it at PostgreSQL instead)
RUN python database.py || true
# Expose port
EXPOSE 8000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
# Run application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]


@@ -0,0 +1,41 @@
Changelog
=========
## [0.0.1] - 2026-03-14
### Added
- Initial commit with AI Software Factory service
- FastAPI backend for software generation
- Telegram integration via n8n webhook
- Ollama LLM integration for code generation
- Gitea API integration for commits and PRs
- Web UI dashboard for monitoring progress
- Docker and docker-compose configuration for Unraid
- Environment configuration templates
- Makefile with development utilities
- PostgreSQL integration with connection pooling
- Comprehensive audit trail functionality
- User action tracking
- System log monitoring
- Database initialization and migration support
- Full test suite with pytest coverage
### Features
- Automated software generation from Telegram requests
- Iterative code generation with Ollama
- Git commit automation
- Pull request creation for user review
- Real-time progress monitoring via web UI
- n8n workflow integration
- Complete audit trail for compliance and debugging
- Connection pooling for database efficiency
- Health check endpoints
- Persistent volumes for git repos and n8n data
### Infrastructure
- Alpine-based Docker image
- GPU support for Ollama
- Persistent volumes for git repos and n8n data
- Health check endpoints
- PostgreSQL with connection pooling
- Docker Compose for multi-service orchestration


@@ -0,0 +1,215 @@
# AI Software Factory
Automated software generation service powered by Ollama LLM. This service allows users to specify via Telegram what kind of software they would like, and an agent hosted in Ollama will create it iteratively, testing it while building out the source code and committing to gitea.
## Features
- **Telegram Integration**: Receive software requests via Telegram bot
- **Ollama LLM**: Uses Ollama-hosted models for code generation
- **Git Integration**: Automatically commits code to gitea
- **Pull Requests**: Creates PRs for user review before merging
- **Web UI**: Beautiful dashboard for monitoring project progress
- **n8n Workflows**: Bridges Telegram with LLMs via n8n webhooks
- **Comprehensive Testing**: Full test suite with pytest coverage
## Architecture
```
┌─────────────┐ ┌──────────────┐ ┌──────────┐ ┌─────────┐
│ Telegram │────▶│ n8n Webhook│────▶│ FastAPI │────▶│ Ollama │
└─────────────┘ └──────────────┘ └──────────┘ └─────────┘
┌──────────────┐
│ Git/Gitea │
└──────────────┘
```
## Quick Start
### Prerequisites
- Docker and Docker Compose
- Ollama running locally or on same network
- Gitea instance with API token
- n8n instance for Telegram webhook
### Configuration
Create a `.env` file in the project root:
```bash
# Server
HOST=0.0.0.0
PORT=8000
# Ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3
# Gitea
GITEA_URL=https://gitea.yourserver.com
GITEA_TOKEN=your_gitea_api_token
GITEA_OWNER=ai-software-factory
GITEA_REPO=ai-software-factory
# n8n
N8N_WEBHOOK_URL=http://n8n.yourserver.com/webhook/telegram
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_chat_id
```
### Build and Run
```bash
# Build Docker image
docker build -t ai-software-factory -f Containerfile .
# Run with Docker Compose
docker-compose up -d
```
### Usage
1. **Send a request via Telegram:**
```
Name: My Awesome App
Description: A web application for managing tasks
Features: user authentication, task CRUD, notifications
```
2. **Monitor progress via Web UI:**
Open `http://yourserver:8000` to see real-time progress
3. **Review PRs in Gitea:**
Check your gitea repository for generated PRs
## API Endpoints
| Endpoint | Method | Description |
|------|------|-------|
| `/` | GET | API information |
| `/health` | GET | Health check |
| `/generate` | POST | Generate new software |
| `/status/{project_id}` | GET | Get project status |
| `/projects` | GET | List all projects |
## Development
### Makefile Targets
```bash
make help # Show available targets
make setup # Initialize repository
make fmt # Format code
make lint # Run linters
make test # Run tests
make test-cov # Run tests with coverage report
make release # Create new release tag
make build # Build Docker image
```
### Running in Development
```bash
pip install -r requirements.txt
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run tests with coverage report
make test-cov
# Run specific test file
pytest tests/test_main.py -v
# Run tests with verbose output
pytest tests/ -v --tb=short
```
### Test Coverage
View HTML coverage report:
```bash
make test-cov
open htmlcov/index.html
```
### Test Structure
```
tests/
├── conftest.py # Pytest fixtures and configuration
├── test_main.py # Tests for main.py FastAPI app
├── test_config.py # Tests for config.py settings
├── test_git_manager.py # Tests for git operations
├── test_ui_manager.py # Tests for UI rendering
├── test_gitea.py # Tests for Gitea API integration
├── test_telegram.py # Tests for Telegram integration
├── test_orchestrator.py # Tests for agent orchestrator
├── test_integration.py # Integration tests for full workflow
├── test_config_integration.py # Configuration integration tests
├── test_agents_integration.py # Agent integration tests
├── test_edge_cases.py # Edge case tests
└── test_postgres_integration.py # PostgreSQL integration tests
```
## Project Structure
```
ai-software-factory/
├── main.py # FastAPI application
├── config.py # Configuration settings
├── requirements.txt # Python dependencies
├── Containerfile # Docker build file
├── README.md # This file
├── Makefile # Development utilities
├── .env.example # Environment template
├── .gitignore # Git ignore rules
├── HISTORY.md # Changelog
├── pytest.ini # Pytest configuration
├── docker-compose.yml # Multi-service orchestration
├── .env # Environment variables (not in git)
├── tests/ # Test suite
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_*.py # Test files
│ └── pytest.ini
├── agents/
│ ├── __init__.py
│ ├── orchestrator.py # Main agent orchestrator
│ ├── git_manager.py # Git operations
│ ├── ui_manager.py # Web UI management
│ ├── telegram.py # Telegram integration
│ └── gitea.py # Gitea API client
└── n8n/ # n8n webhook configurations
```
## Security Notes
- Never commit `.env` files to git
- Use environment variables for sensitive data
- Rotate Gitea API tokens regularly
- Restrict Telegram bot permissions
- Use HTTPS for Gitea and n8n endpoints
## License
MIT License - See LICENSE file for details
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development guidelines.


@@ -0,0 +1 @@
0.1.4


@@ -0,0 +1,3 @@
"""AI Software Factory - Automated software generation service."""
__version__ = "0.0.1"


@@ -0,0 +1,17 @@
"""AI Software Factory agents."""
from ai_software_factory.agents.orchestrator import AgentOrchestrator
from ai_software_factory.agents.git_manager import GitManager
from ai_software_factory.agents.ui_manager import UIManager
from ai_software_factory.agents.telegram import TelegramHandler
from ai_software_factory.agents.gitea import GiteaAPI
from ai_software_factory.agents.database_manager import DatabaseManager
__all__ = [
"AgentOrchestrator",
"GitManager",
"UIManager",
"TelegramHandler",
"GiteaAPI",
"DatabaseManager"
]


@@ -0,0 +1,501 @@
"""Database manager for audit logging."""
from sqlalchemy.orm import Session
from sqlalchemy import text
from ai_software_factory.database import get_db
from ai_software_factory.models import (
ProjectHistory, ProjectLog, UISnapshot, PullRequestData, SystemLog, UserAction, AuditTrail, PullRequest, ProjectStatus
)
from datetime import datetime
import json
class DatabaseMigrations:
"""Handles database migrations."""
def __init__(self, db: Session):
"""Initialize migrations."""
self.db = db
def run(self) -> int:
"""Run migrations."""
return 0
def get_project_by_id(self, project_id: str) -> ProjectHistory | None:
"""Get project by ID."""
return self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
def get_all_projects(self) -> list[ProjectHistory]:
"""Get all projects."""
return self.db.query(ProjectHistory).all()
def get_project_logs(self, history_id: int, limit: int = 100) -> list[ProjectLog]:
"""Get project logs."""
return self.db.query(ProjectLog).filter(ProjectLog.history_id == history_id).limit(limit).all()
def get_system_logs(self, limit: int = 100) -> list[SystemLog]:
"""Get system logs."""
return self.db.query(SystemLog).limit(limit).all()
def log_system_event(self, component: str, level: str, message: str,
user_agent: str | None = None, ip_address: str | None = None) -> SystemLog:
"""Log a system event."""
log = SystemLog(
component=component,
log_level=level,
log_message=message,
user_agent=user_agent,
ip_address=ip_address
)
self.db.add(log)
self.db.commit()
self.db.refresh(log)
return log
class DatabaseManager:
"""Manages database operations for audit logging and history tracking."""
def __init__(self, db: Session):
"""Initialize database manager."""
self.db = db
self.migrations = DatabaseMigrations(self.db)
def log_project_start(self, project_id: str, project_name: str, description: str) -> ProjectHistory:
"""Log project start."""
history = ProjectHistory(
project_id=project_id,
project_name=project_name,
description=description,
status=ProjectStatus.INITIALIZED.value,
progress=0,
message="Project initialization started"
)
self.db.add(history)
self.db.commit()
self.db.refresh(history)
# Log the action in audit trail
self._log_audit_trail(
project_id=project_id,
action="PROJECT_CREATED",
actor="system",
action_type="CREATE",
details=f"Project {project_name} was created",
message="Project created successfully"
)
return history
def log_progress_update(self, history_id: int, progress: int, step: str, message: str) -> None:
"""Log progress update."""
history = self.db.query(ProjectHistory).filter(
ProjectHistory.id == history_id
).first()
if history:
history.progress = progress
history.current_step = step
history.message = message
self.db.commit()
# Log the action
self._log_action(history_id, "INFO", f"Progress: {progress}%, Step: {step} - {message}")
# Log to audit trail
self._log_audit_trail(
project_id=history.project_id,
action="PROGRESS_UPDATE",
actor="agent",
action_type="UPDATE",
details=f"Progress updated to {progress}% - {step}",
message=f"Progress: {progress}%, Step: {step} - {message}",
metadata_json=json.dumps({"step": step, "message": message})
)
def log_project_complete(self, history_id: int, message: str) -> None:
"""Log project completion."""
history = self.db.query(ProjectHistory).filter(
ProjectHistory.id == history_id
).first()
if history:
history.status = ProjectStatus.COMPLETED.value
history.completed_at = datetime.utcnow()
history.message = message
self.db.commit()
# Log the action
self._log_action(history_id, "INFO", f"Project completed: {message}")
# Log to audit trail
self._log_audit_trail(
project_id=history.project_id,
action="PROJECT_COMPLETED",
actor="agent",
action_type="COMPLETE",
details=message,
message=f"Project completed: {message}"
)
def log_error(self, history_id: int, error: str) -> None:
"""Log error."""
history = self.db.query(ProjectHistory).filter(
ProjectHistory.id == history_id
).first()
if history:
history.status = ProjectStatus.ERROR.value
history.error_message = error
self.db.commit()
# Log the action
self._log_action(history_id, "ERROR", f"Error occurred: {error}")
# Log to audit trail
self._log_audit_trail(
project_id=history.project_id,
action="ERROR_OCCURRED",
actor="agent",
action_type="ERROR",
details=error,
message=f"Error occurred: {error}"
)
def _log_action(self, history_id: int, level: str, message: str) -> None:
"""Log an action to the project log."""
project_log = ProjectLog(
history_id=history_id,
log_level=level,
log_message=message,
timestamp=datetime.utcnow()
)
self.db.add(project_log)
self.db.commit()
def save_ui_snapshot(self, history_id: int, ui_data: dict) -> UISnapshot:
"""Save UI snapshot."""
snapshot = UISnapshot(
history_id=history_id,
snapshot_data=json.dumps(ui_data),
created_at=datetime.utcnow()
)
self.db.add(snapshot)
self.db.commit()
self.db.refresh(snapshot)
return snapshot
def save_pr_data(self, history_id: int, pr_data: dict) -> PullRequest:
"""Save PR data."""
# Parse PR data
pr_number = pr_data.get("pr_number", pr_data.get("id", 0))
pr_title = pr_data.get("title", pr_data.get("pr_title", ""))
pr_body = pr_data.get("body", pr_data.get("pr_body", ""))
pr_state = pr_data.get("state", pr_data.get("pr_state", "open"))
pr_url = pr_data.get("url", pr_data.get("pr_url", ""))
pr = PullRequest(
history_id=history_id,
pr_number=pr_number,
pr_title=pr_title,
pr_body=pr_body,
base=pr_data.get("base", "main"),
user=pr_data.get("user", "system"),
pr_url=pr_url,
merged=False,
pr_state=pr_state
)
self.db.add(pr)
self.db.commit()
self.db.refresh(pr)
return pr
def get_project_by_id(self, project_id: str) -> ProjectHistory | None:
"""Get project by ID."""
return self.db.query(ProjectHistory).filter(ProjectHistory.project_id == project_id).first()
def get_all_projects(self) -> list[ProjectHistory]:
"""Get all projects."""
return self.db.query(ProjectHistory).all()
def get_project_logs(self, history_id: int, limit: int = 100) -> list[ProjectLog]:
"""Get project logs."""
return self.db.query(ProjectLog).filter(ProjectLog.history_id == history_id).limit(limit).all()
def log_system_event(self, component: str, level: str, message: str,
user_agent: str | None = None, ip_address: str | None = None) -> SystemLog:
"""Log a system event."""
log = SystemLog(
component=component,
log_level=level,
log_message=message,
user_agent=user_agent,
ip_address=ip_address
)
self.db.add(log)
self.db.commit()
self.db.refresh(log)
return log
def _log_audit_trail(
self,
project_id: str,
action: str,
actor: str,
action_type: str,
details: str,
message: str | None = None,
**kwargs
) -> AuditTrail:
"""Log to the audit trail."""
metadata_json = kwargs.get("metadata_json", kwargs.get("metadata", "{}"))
audit = AuditTrail(
project_id=project_id,
action=action,
actor=actor,
action_type=action_type,
details=details,
message=message or details,
metadata_json=metadata_json or "{}"
)
self.db.add(audit)
self.db.commit()
return audit
    def get_logs(self, project_id: str | None = None, level: str | None = None, limit: int = 100) -> list:
        """Get logs from the database."""
        query = self.db.query(ProjectLog)
        if project_id:
            # ProjectLog rows key on history_id, so resolve the external
            # project_id to its ProjectHistory row first.
            history = self.db.query(ProjectHistory).filter(
                ProjectHistory.project_id == project_id
            ).first()
            query = query.filter(ProjectLog.history_id == (history.id if history else -1))
if level:
query = query.filter(ProjectLog.log_level == level)
logs = query.order_by(ProjectLog.timestamp.desc()).limit(limit).all()
return [
{
"id": log.id,
"history_id": log.history_id,
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None
}
for log in logs
]
    def get_audit_trail(self, project_id: str | None = None, limit: int = 100) -> list:
"""Get audit trail entries."""
query = self.db.query(AuditTrail)
if project_id:
query = query.filter(AuditTrail.project_id == project_id)
audits = query.order_by(AuditTrail.created_at.desc()).limit(limit).all()
return [
{
"id": audit.id,
"project_id": audit.project_id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata_json": audit.metadata_json,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audits
]
def get_all_audit_trail(self, limit: int = 100) -> list:
"""Get all audit trail entries."""
audits = self.db.query(AuditTrail).order_by(AuditTrail.created_at.desc()).limit(limit).all()
return [
{
"id": audit.id,
"project_id": audit.project_id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata_json": audit.metadata_json,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audits
]
    def log_user_action(self, history_id: int, action_type: str, actor_type: str, actor_name: str,
                        action_description: str, action_data: dict | None = None) -> UserAction | None:
"""Log a user action."""
history = self.db.query(ProjectHistory).filter(
ProjectHistory.id == history_id
).first()
if not history:
return None
user_action = UserAction(
history_id=history_id,
action_type=action_type,
actor_type=actor_type,
actor_name=actor_name,
action_description=action_description,
action_data=action_data or {},
created_at=datetime.utcnow()
)
self.db.add(user_action)
self.db.commit()
self.db.refresh(user_action)
return user_action
def get_user_actions(self, history_id: int, limit: int = 100) -> list:
"""Get user actions for a history."""
user_actions = self.db.query(UserAction).filter(
UserAction.history_id == history_id
).order_by(UserAction.created_at.desc()).limit(limit).all()
return [
{
"id": ua.id,
"history_id": ua.history_id,
"action_type": ua.action_type,
"actor_type": ua.actor_type,
"actor_name": ua.actor_name,
"action_description": ua.action_description,
"action_data": ua.action_data,
"created_at": ua.created_at.isoformat() if ua.created_at else None
}
for ua in user_actions
]
    def get_system_logs(self, level: str | None = None, limit: int = 100) -> list:
"""Get system logs."""
query = self.db.query(SystemLog)
if level:
query = query.filter(SystemLog.log_level == level)
logs = query.order_by(SystemLog.created_at.desc()).limit(limit).all()
return [
{
"id": log.id,
"component": log.component,
"level": log.log_level,
"message": log.log_message,
"timestamp": log.created_at.isoformat() if log.created_at else None
}
for log in logs
]
def log_code_change(self, project_id: str, change_type: str, file_path: str,
actor: str, actor_type: str, details: str) -> AuditTrail:
"""Log a code change."""
audit = AuditTrail(
project_id=project_id,
action="CODE_CHANGE",
actor=actor,
action_type=change_type,
details=f"File {file_path} {change_type}",
message=f"Code change: {file_path}",
metadata_json=json.dumps({"file": file_path, "change_type": change_type, "actor": actor})
)
self.db.add(audit)
self.db.commit()
return audit
def log_commit(self, project_id: str, commit_message: str, actor: str,
actor_type: str = "agent") -> AuditTrail:
"""Log a git commit."""
audit = AuditTrail(
project_id=project_id,
action="GIT_COMMIT",
actor=actor,
action_type="COMMIT",
details=f"Commit: {commit_message}",
message=f"Git commit: {commit_message}",
metadata_json=json.dumps({"commit": commit_message, "actor": actor, "actor_type": actor_type})
)
self.db.add(audit)
self.db.commit()
return audit
def get_project_audit_data(self, project_id: str) -> dict:
"""Get comprehensive audit data for a project."""
history = self.db.query(ProjectHistory).filter(
ProjectHistory.project_id == project_id
).first()
if not history:
return {
"project": None,
"logs": [],
"actions": [],
"audit_trail": []
}
# Get logs
logs = self.db.query(ProjectLog).filter(
ProjectLog.history_id == history.id
).order_by(ProjectLog.timestamp.desc()).all()
# Get user actions
user_actions = self.db.query(UserAction).filter(
UserAction.history_id == history.id
).order_by(UserAction.created_at.desc()).all()
# Get audit trail entries
audit_trails = self.db.query(AuditTrail).filter(
AuditTrail.project_id == project_id
).order_by(AuditTrail.created_at.desc()).all()
return {
"project": {
"id": history.id,
"project_id": history.project_id,
"project_name": history.project_name,
"description": history.description,
"status": history.status,
"progress": history.progress,
"message": history.message,
"error_message": history.error_message,
"current_step": history.current_step,
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
"created_at": history.started_at.isoformat() if history.started_at else None
},
"logs": [
{
"id": log.id,
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None
}
for log in logs
],
"actions": [
{
"id": ua.id,
"action_type": ua.action_type,
"actor_type": ua.actor_type,
"actor_name": ua.actor_name,
"action_description": ua.action_description,
"action_data": ua.action_data,
"created_at": ua.created_at.isoformat() if ua.created_at else None
}
for ua in user_actions
],
"audit_trail": [
{
"id": audit.id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audit_trails
]
}
def cleanup_audit_trail(self) -> None:
"""Clear all audit trail entries."""
self.db.query(AuditTrail).delete()
self.db.commit()
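The query helpers above all follow the same serialization pattern: ORM rows are flattened into JSON-safe dicts, with datetimes rendered via `isoformat()` and missing values left as `None`. A minimal standalone sketch of that pattern (the `SimpleNamespace` rows stand in for ORM objects; the names are illustrative, not from this codebase):

```python
from datetime import datetime
from types import SimpleNamespace

def serialize_logs(rows):
    """Flatten ORM-style log rows into JSON-safe dicts (sketch)."""
    return [
        {
            "id": row.id,
            "level": row.log_level,
            "message": row.log_message,
            # Render datetimes as ISO-8601 strings; keep None as None.
            "timestamp": row.timestamp.isoformat() if row.timestamp else None,
        }
        for row in rows
    ]

rows = [
    SimpleNamespace(id=1, log_level="INFO", log_message="started",
                    timestamp=datetime(2026, 4, 2, 1, 43, 16)),
    SimpleNamespace(id=2, log_level="ERROR", log_message="boom", timestamp=None),
]
print(serialize_logs(rows))
```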


@@ -0,0 +1,75 @@
"""Git manager for project operations."""
import os
import subprocess
from typing import Optional
class GitManager:
"""Manages git operations for the project."""
def __init__(self, project_id: str):
if not project_id:
raise ValueError("project_id cannot be empty or None")
self.project_id = project_id
self.project_dir = f"{os.path.dirname(__file__)}/../../test-project/{project_id}"
    def init_repo(self):
        """Initialize git repository."""
        os.makedirs(self.project_dir, exist_ok=True)
        # Use cwd= on each call instead of os.chdir(), which would leak a
        # process-wide working-directory change to the rest of the app.
        subprocess.run(["git", "init"], cwd=self.project_dir, check=True, capture_output=True)
    def add_files(self, paths: list[str]):
        """Add files to git staging."""
        subprocess.run(["git", "add"] + paths, cwd=self.project_dir, check=True, capture_output=True)
    def commit(self, message: str):
        """Create a git commit."""
        subprocess.run(
            ["git", "commit", "-m", message],
            cwd=self.project_dir,
            check=True,
            capture_output=True
        )
    def push(self, remote: str = "origin", branch: str = "main"):
        """Push changes to remote."""
        subprocess.run(
            ["git", "push", "-u", remote, branch],
            cwd=self.project_dir,
            check=True,
            capture_output=True
        )
    def create_branch(self, branch_name: str):
        """Create and switch to a new branch."""
        subprocess.run(
            ["git", "checkout", "-b", branch_name],
            cwd=self.project_dir,
            check=True,
            capture_output=True
        )
    def create_pr(
        self,
        title: str,
        body: str,
        base: str = "main",
        head: Optional[str] = None
    ) -> dict:
        """Create a pull request via the Gitea API."""
        # This would integrate with the Gitea API
        # For now, return placeholder
        return {
            "title": title,
            "body": body,
            "base": base,
            "head": head or f"ai-gen-{self.project_id}"
        }
    def get_status(self) -> str:
        """Get git status."""
        result = subprocess.run(
            ["git", "status", "--porcelain"],
            cwd=self.project_dir,
            capture_output=True,
            text=True
        )
        return result.stdout.strip()
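`get_status` returns raw `git status --porcelain` text; callers usually want it parsed. A sketch of a parser for the two-character status prefix (a pure function, so no git checkout is needed to try it):

```python
def parse_porcelain(output: str) -> list[tuple[str, str]]:
    """Parse `git status --porcelain` output into (status, path) pairs.

    Each line is a two-character XY status code, one space, then the path.
    """
    entries = []
    for line in output.splitlines():
        if len(line) < 4:
            continue  # too short to hold "XY path"
        entries.append((line[:2], line[3:]))
    return entries

sample = "M  main.py\n?? README.md\nA  src/app.py"
print(parse_porcelain(sample))
```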


@@ -0,0 +1,115 @@
"""Gitea API integration for commits and PRs."""
import json
import os
from typing import Optional
class GiteaAPI:
"""Gitea API client for repository operations."""
def __init__(self, token: str, base_url: str):
self.token = token
self.base_url = base_url.rstrip("/")
self.headers = {
"Authorization": f"token {token}",
"Content-Type": "application/json"
}
async def create_branch(self, owner: str, repo: str, branch: str, base: str = "main"):
"""Create a new branch."""
        # Gitea exposes branch creation as POST /repos/{owner}/{repo}/branches,
        # with the new and source branch names in the request body.
        url = f"{self.base_url}/repos/{owner}/{repo}/branches"
        payload = {"new_branch_name": branch, "old_branch_name": base}
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=self.headers, json=payload) as resp:
if resp.status == 201:
return await resp.json()
else:
return {"error": await resp.text()}
except Exception as e:
return {"error": str(e)}
async def create_pull_request(
self,
owner: str,
repo: str,
title: str,
body: str,
base: str = "main",
        head: Optional[str] = None
) -> dict:
"""Create a pull request."""
url = f"{self.base_url}/repos/{owner}/{repo}/pulls"
        payload = {
            "title": title,
            "body": body,
            # Gitea expects base and head as plain branch-name strings,
            # not nested objects.
            "base": base,
            "head": head or f"ai-gen-{hash(title) % 10000}"
        }
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(url, headers=self.headers, json=payload) as resp:
if resp.status == 201:
return await resp.json()
else:
return {"error": await resp.text()}
except Exception as e:
return {"error": str(e)}
async def push_commit(
self,
owner: str,
repo: str,
branch: str,
files: list[dict],
message: str
) -> dict:
"""
Push files to a branch.
        In production, this would use Gitea's API or git push.
For now, we'll simulate the operation.
"""
# In reality, you'd need to:
# 1. Clone repo
# 2. Create branch
# 3. Add files
# 4. Commit
# 5. Push
return {
"status": "simulated",
"branch": branch,
"message": message,
"files": files
}
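The clone→branch→add→commit→push sequence the comment lists can be made concrete as a command plan; a sketch that only builds the command lists (suitable for feeding to `subprocess.run` with `cwd` set to the checkout; the URL and names below are placeholders):

```python
def push_plan(repo_url: str, branch: str, files: list[str], message: str) -> list[list[str]]:
    """Build the git command sequence for pushing files to a branch (sketch)."""
    return [
        ["git", "clone", repo_url, "."],          # 1. clone the repo
        ["git", "checkout", "-b", branch],        # 2. create the branch
        ["git", "add", *files],                   # 3. stage the files
        ["git", "commit", "-m", message],         # 4. commit
        ["git", "push", "-u", "origin", branch],  # 5. push
    ]

plan = push_plan("https://gitea.local/ai-test/ai-test.git", "ai-gen-42",
                 ["main.py"], "feat: initial code")
for cmd in plan:
    print(" ".join(cmd))
```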
async def get_repo_info(self, owner: str, repo: str) -> dict:
"""Get repository information."""
url = f"{self.base_url}/repos/{owner}/{repo}"
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.get(url, headers=self.headers) as resp:
if resp.status == 200:
return await resp.json()
else:
return {"error": await resp.text()}
except Exception as e:
return {"error": str(e)}
def get_config(self) -> dict:
"""Load configuration from environment."""
return {
"base_url": os.getenv("GITEA_URL", "https://gitea.local"),
"token": os.getenv("GITEA_TOKEN", ""),
"owner": os.getenv("GITEA_OWNER", "ai-test"),
"repo": os.getenv("GITEA_REPO", "ai-test")
}
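For reference, the same PR-creation request can be exercised without aiohttp. A minimal synchronous sketch using stdlib urllib (endpoint and field names per the Gitea API as I understand it — verify against your Gitea version; the function only builds the request, it does not send it):

```python
import json
import urllib.request

def build_pr_request(base_url: str, token: str, owner: str, repo: str,
                     title: str, body: str, base: str, head: str) -> urllib.request.Request:
    """Build (not send) a Gitea create-PR request (sketch)."""
    payload = json.dumps({"title": title, "body": body, "base": base, "head": head})
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/repos/{owner}/{repo}/pulls",
        data=payload.encode("utf-8"),
        headers={"Authorization": "token " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values for illustration only.
req = build_pr_request("https://gitea.local/api/v1", "secret", "ai-test", "ai-test",
                       "feat: initial code", "Generated by the agent", "main", "ai-gen-42")
print(req.full_url)
```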


@@ -0,0 +1,227 @@
"""Agent orchestrator for software generation."""
import asyncio
from typing import Optional
from ai_software_factory.agents.git_manager import GitManager
from ai_software_factory.agents.ui_manager import UIManager
from ai_software_factory.agents.gitea import GiteaAPI
from ai_software_factory.agents.database_manager import DatabaseManager
from ai_software_factory.config import settings
from datetime import datetime
import os
class AgentOrchestrator:
"""Orchestrates the software generation process with full audit trail."""
def __init__(
self,
project_id: str,
project_name: str,
description: str,
features: list,
tech_stack: list,
        db=None
):
"""Initialize orchestrator."""
self.project_id = project_id
self.project_name = project_name
self.description = description
self.features = features
self.tech_stack = tech_stack
self.status = "initialized"
self.progress = 0
self.current_step = None
self.message = ""
self.logs = []
self.ui_data = {}
self.db = db
# Initialize agents
self.git_manager = GitManager(project_id)
self.ui_manager = UIManager(project_id)
self.gitea_api = GiteaAPI(
token=settings.GITEA_TOKEN,
base_url=settings.GITEA_URL
)
# Initialize database manager if db session provided
self.db_manager = None
self.history = None
if db:
self.db_manager = DatabaseManager(db)
# Log project start to database
self.history = self.db_manager.log_project_start(
project_id=project_id,
project_name=project_name,
description=description
)
async def run(self) -> dict:
"""Run the software generation process with full audit logging."""
try:
            # Step 1: Initialize project
            self.status = "running"
            self.progress = 5
            self.current_step = "Initializing project"
            self.message = "Setting up project structure..."
self.logs.append(f"[{datetime.utcnow().isoformat()}] Initializing project.")
# Step 2: Create project structure (skip git operations)
self.progress = 15
self.current_step = "Creating project structure"
self.message = "Creating project files..."
await self._create_project_structure()
# Step 3: Generate initial code
self.progress = 25
self.current_step = "Generating initial code"
self.message = "Generating initial code with Ollama..."
await self._generate_code()
# Step 4: Test the code
self.progress = 50
self.current_step = "Testing code"
self.message = "Running tests..."
await self._run_tests()
# Step 5: Commit to git (skip in test env)
self.progress = 75
self.current_step = "Committing to git"
self.message = "Skipping git operations in test environment..."
# Step 6: Create PR (skip in test env)
self.progress = 90
self.current_step = "Creating PR"
self.message = "Skipping PR creation in test environment..."
            # Step 7: Complete
            self.status = "completed"
            self.progress = 100
            self.current_step = "Completed"
            self.message = "Software generation complete!"
self.logs.append(f"[{datetime.utcnow().isoformat()}] Software generation complete!")
# Log completion to database if available
if self.db_manager and self.history:
self.db_manager.log_project_complete(
history_id=self.history.id,
message="Software generation complete!"
)
return {
"status": "completed",
"progress": self.progress,
"message": self.message,
"current_step": self.current_step,
"logs": self.logs,
"ui_data": self.ui_manager.ui_data,
"history_id": self.history.id if self.history else None
}
except Exception as e:
self.status = "error"
self.message = f"Error: {str(e)}"
self.logs.append(f"[{datetime.utcnow().isoformat()}] Error: {str(e)}")
# Log error to database if available
if self.db_manager and self.history:
self.db_manager.log_error(
history_id=self.history.id,
error=str(e)
)
return {
"status": "error",
"progress": self.progress,
"message": self.message,
"current_step": self.current_step,
"logs": self.logs,
"error": str(e),
"ui_data": self.ui_manager.ui_data,
"history_id": self.history.id if self.history else None
}
async def _create_project_structure(self) -> None:
"""Create initial project structure."""
project_dir = self.project_id
# Create .gitignore
gitignore_path = f"{project_dir}/.gitignore"
try:
os.makedirs(project_dir, exist_ok=True)
with open(gitignore_path, "w") as f:
f.write("# Python\n__pycache__/\n*.pyc\n*.pyo\n*.pyd\n.Python\n*.env\n.venv/\nnode_modules/\n.env\nbuild/\ndist/\n.pytest_cache/\n.mypy_cache/\n.coverage\nhtmlcov/\n.idea/\n.vscode/\n*.swp\n*.swo\n*~\n.DS_Store\n.git\n")
except Exception as e:
self.logs.append(f"[{datetime.utcnow().isoformat()}] Failed to create .gitignore: {str(e)}")
# Create README.md
readme_path = f"{project_dir}/README.md"
try:
with open(readme_path, "w") as f:
f.write(f"# {self.project_name}\n\n{self.description}\n\n## Features\n")
for feature in self.features:
f.write(f"- {feature}\n")
f.write(f"\n## Tech Stack\n")
for tech in self.tech_stack:
f.write(f"- {tech}\n")
except Exception as e:
self.logs.append(f"[{datetime.utcnow().isoformat()}] Failed to create README.md: {str(e)}")
async def _generate_code(self) -> None:
"""Generate code using Ollama."""
# This would call Ollama API to generate code
# For now, create a placeholder file
try:
main_py_path = f"{self.project_id}/main.py"
os.makedirs(self.project_id, exist_ok=True)
with open(main_py_path, "w") as f:
f.write("# Generated by AI Software Factory\n")
f.write("print('Hello, World!')\n")
except Exception as e:
self.logs.append(f"[{datetime.utcnow().isoformat()}] Failed to create main.py: {str(e)}")
# Log code change to audit trail
if self.db_manager and self.history:
self.db_manager.log_code_change(
project_id=self.project_id,
change_type="CREATE",
file_path="main.py",
actor="agent",
actor_type="agent",
details="Generated main.py file"
)
async def _run_tests(self) -> None:
"""Run tests for the generated code."""
# This would run pytest or other test framework
# For now, simulate test success
pass
async def _commit_to_git(self) -> None:
"""Commit changes to git."""
pass # Skip git operations in test environment
async def _create_pr(self) -> None:
"""Create pull request."""
pass # Skip PR creation in test environment
def update_status(self, status: str, progress: int, message: str) -> None:
"""Update status and progress."""
self.status = status
self.progress = progress
self.message = message
def get_ui_data(self) -> dict:
"""Get UI data."""
return self.ui_manager.ui_data
def render_dashboard(self) -> str:
"""Render dashboard HTML."""
return self.ui_manager.render_dashboard()
def get_history(self) -> Optional[dict]:
"""Get project history from database."""
if self.db_manager and self.history:
return self.db_manager.get_project_audit_data(self.history.project_id)
return None


@@ -0,0 +1,151 @@
"""Telegram bot integration for n8n webhook."""
import asyncio
import json
import re
from typing import Optional
class TelegramHandler:
"""Handles Telegram messages via n8n webhook."""
def __init__(self, webhook_url: str):
self.webhook_url = webhook_url
self.api_url = "https://api.telegram.org/bot"
async def handle_message(self, message_data: dict) -> dict:
"""Handle incoming Telegram message."""
text = message_data.get("text", "")
chat_id = message_data.get("chat", {}).get("id", "")
# Extract software request from message
request = self._parse_request(text)
if request:
# Forward to backend API
async def fetch_software():
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(
"http://localhost:8000/generate",
json=request
) as resp:
return await resp.json()
except Exception as e:
return {"error": str(e)}
result = await fetch_software()
return {
"status": "success",
"data": result,
"response": self._format_response(result)
}
else:
return {
"status": "error",
"message": "Could not parse software request"
}
def _parse_request(self, text: str) -> Optional[dict]:
"""Parse software request from user message."""
# Simple parser - in production, use LLM to extract
request = {
"name": None,
"description": None,
"features": []
}
lines = text.split("\n")
# Parse name
name_idx = -1
for i, line in enumerate(lines):
line_stripped = line.strip()
if line_stripped.lower().startswith("name:"):
request["name"] = line_stripped.split(":", 1)[1].strip()
name_idx = i
break
if not request["name"]:
return None
# Parse description (everything after name until features section)
# First, find where features section starts
features_idx = -1
for i in range(name_idx + 1, len(lines)):
line_stripped = lines[i].strip()
if line_stripped.lower().startswith("features:"):
features_idx = i
break
if features_idx > name_idx:
# Description is between name and features
request["description"] = "\n".join(lines[name_idx + 1:features_idx]).strip()
else:
# Description is everything after name
request["description"] = "\n".join(lines[name_idx + 1:]).strip()
        # Strip description prefix if present
        if request["description"]:
            request["description"] = request["description"].strip()
            if request["description"].lower().startswith("description:"):
                request["description"] = request["description"][len("description:"):].strip()
# Parse features
if features_idx > 0:
features_line = lines[features_idx]
# Parse inline features after "Features:"
if ":" in features_line:
inline_part = features_line.split(":", 1)[1].strip()
                # Skip if it starts with a dash (it's a multiline list marker)
                if inline_part and not inline_part.startswith("-"):
                    # A leading dash is ruled out above; only strip an asterisk
                    if inline_part.startswith("*"):
                        inline_part = inline_part[1:].strip()
if inline_part:
# Split by comma for inline features
request["features"].extend([f.strip() for f in inline_part.split(",") if f.strip()])
# Parse multiline features (dash lines after features:)
for line in lines[features_idx + 1:]:
line_stripped = line.strip()
if not line_stripped:
continue
if line_stripped.startswith("-"):
feature_text = line_stripped[1:].strip()
if feature_text:
request["features"].append(feature_text)
elif line_stripped.startswith("*"):
feature_text = line_stripped[1:].strip()
if feature_text:
request["features"].append(feature_text)
elif ":" in line_stripped:
# Non-feature line with colon
break
# MUST have features
if not request["features"]:
return None
return request
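The parser above expects messages of the form `Name: … / Description: … / Features: - …`, and rejects anything without both a name and at least one feature. A condensed standalone version of the same contract, useful for seeing what does and does not parse (a simplified sketch, not the method above):

```python
def parse_request(text: str):
    """Parse 'Name:/Description:/Features:' messages (simplified sketch)."""
    name, description, features = None, [], []
    section = None
    for line in text.splitlines():
        stripped = line.strip()
        lower = stripped.lower()
        if lower.startswith("name:"):
            name = stripped.split(":", 1)[1].strip()
            section = "description"
        elif lower.startswith("description:"):
            description.append(stripped.split(":", 1)[1].strip())
            section = "description"
        elif lower.startswith("features:"):
            section = "features"
        elif stripped.startswith(("-", "*")) and section == "features":
            features.append(stripped[1:].strip())
        elif section == "description" and stripped:
            description.append(stripped)
    if not name or not features:
        return None  # both a name and at least one feature are required
    return {"name": name, "description": " ".join(description), "features": features}

msg = "Name: Todo App\nDescription: A simple task tracker\nFeatures:\n- add tasks\n- mark done"
print(parse_request(msg))
```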
def _format_response(self, result: dict) -> dict:
"""Format response for Telegram."""
status = result.get("status", "error")
message = result.get("message", result.get("detail", ""))
progress = result.get("progress", 0)
response_data = {
"status": status,
"message": message,
"progress": progress,
"project_name": result.get("name", "N/A"),
"logs": result.get("logs", [])
}
return response_data


@@ -0,0 +1,435 @@
"""UI manager for web dashboard with audit trail display."""
import json
from typing import Optional, List
class UIManager:
"""Manages UI data and updates with audit trail display."""
def __init__(self, project_id: str):
"""Initialize UI manager."""
self.project_id = project_id
self.ui_data = {
"project_id": project_id,
"status": "initialized",
"progress": 0,
"message": "Ready",
"snapshots": [],
"features": []
}
def update_status(self, status: str, progress: int, message: str) -> None:
"""Update UI status."""
self.ui_data["status"] = status
self.ui_data["progress"] = progress
self.ui_data["message"] = message
def add_snapshot(self, data: str, created_at: Optional[str] = None) -> None:
"""Add a snapshot of UI data."""
snapshot = {
"data": data,
"created_at": created_at or self._get_current_timestamp()
}
self.ui_data.setdefault("snapshots", []).append(snapshot)
def add_feature(self, feature: str) -> None:
"""Add a feature tag."""
self.ui_data.setdefault("features", []).append(feature)
def _get_current_timestamp(self) -> str:
"""Get current timestamp in ISO format."""
from datetime import datetime
return datetime.now().isoformat()
def get_ui_data(self) -> dict:
"""Get current UI data."""
return self.ui_data
def _escape_html(self, text: str) -> str:
"""Escape HTML special characters for safe display."""
if text is None:
return ""
        safe_chars = {
            '&': '&amp;',
            '<': '&lt;',
            '>': '&gt;',
            '"': '&quot;',
            "'": '&#x27;'
        }
return ''.join(safe_chars.get(c, c) for c in str(text))
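The hand-rolled table above matches what the standard library already provides; `html.escape` performs the same five substitutions (with `quote=True`, its default):

```python
import html

# html.escape replaces &, <, >, and (by default) both quote characters.
sample = "<script>alert('hi & \"bye\"')</script>"
print(html.escape(sample))
```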
def render_dashboard(self, audit_trail: Optional[List[dict]] = None,
actions: Optional[List[dict]] = None,
logs: Optional[List[dict]] = None) -> str:
"""Render dashboard HTML with audit trail and history display."""
# Format logs for display
logs_html = ""
if logs:
for log in logs:
level = log.get("level", "INFO")
message = self._escape_html(log.get("message", ""))
timestamp = self._escape_html(log.get("timestamp", ""))
if level == "ERROR":
level_class = "error"
else:
level_class = "info"
logs_html += f"""
<div class="log-item">
<span class="timestamp">{timestamp}</span>
<span class="log-level {level_class}">[{level}]</span>
<span>{message}</span>
</div>"""
# Format audit trail for display
audit_html = ""
if audit_trail:
for audit in audit_trail:
action = audit.get("action", "")
actor = self._escape_html(audit.get("actor", ""))
timestamp = self._escape_html(audit.get("timestamp", ""))
details = self._escape_html(audit.get("details", ""))
                metadata = audit.get("metadata") or {}
                if not metadata:
                    # DB-backed entries carry a JSON string under "metadata_json"
                    try:
                        metadata = json.loads(audit.get("metadata_json") or "{}")
                    except (TypeError, ValueError):
                        metadata = {}
action_type = audit.get("action_type", "")
# Color classes for action types
action_color = action_type.lower() if action_type else "neutral"
audit_html += f"""
<div class="audit-item">
<div class="audit-header">
<span class="audit-action {action_color}">
{self._escape_html(action)}
</span>
<span class="audit-actor">{actor}</span>
<span class="audit-time">{timestamp}</span>
</div>
<div class="audit-details">{details}</div>
{f'<div class="audit-metadata">{json.dumps(metadata)}</div>' if metadata else ''}
</div>
"""
# Format actions for display
actions_html = ""
if actions:
for action in actions:
                action_type = action.get("action_type", "")
                # DatabaseManager serializes these as "action_description"/"created_at"
                description = self._escape_html(action.get("action_description", action.get("description", "")))
                actor_name = self._escape_html(action.get("actor_name", ""))
                actor_type = self._escape_html(action.get("actor_type", ""))
                timestamp = self._escape_html(action.get("created_at", action.get("timestamp", "")))
actions_html += f"""
<div class="action-item">
<div class="action-type">{self._escape_html(action_type)}</div>
<div class="action-description">{description}</div>
<div class="action-actor">{actor_type}: {actor_name}</div>
<div class="action-time">{timestamp}</div>
</div>"""
# Format snapshots for display
snapshots_html = ""
snapshots = self.ui_data.get("snapshots", [])
if snapshots:
for snapshot in snapshots:
                data = self._escape_html(snapshot.get("data", ""))
                created_at = self._escape_html(snapshot.get("created_at", ""))
snapshots_html += f"""
<div class="snapshot-item">
<div class="snapshot-time">{created_at}</div>
<pre class="snapshot-data">{data}</pre>
</div>"""
# Build features HTML
features_html = ""
features = self.ui_data.get("features", [])
if features:
feature_tags = []
for feat in features:
feature_tags.append(f'<span class="feature-tag">{self._escape_html(feat)}</span>')
features_html = f'<div class="features">{"".join(feature_tags)}</div>'
# Build project header HTML
project_id_escaped = self._escape_html(self.ui_data.get('project_id', 'Project'))
status = self.ui_data.get('status', 'initialized')
        # Determine empty-state message
        empty_state_message = ""
        if not audit_trail and not actions and not snapshots_html:
            empty_state_message = '<div class="empty-state">No audit trail entries available</div>'
return f"""<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Software Factory Dashboard</title>
<style>
* {{ margin: 0; padding: 0; box-sizing: border-box; }}
body {{
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
padding: 2rem;
}}
.container {{
max-width: 1200px;
margin: 0 auto;
background: white;
border-radius: 16px;
padding: 2rem;
box-shadow: 0 20px 60px rgba(0,0,0,0.3);
}}
h1 {{
color: #333;
margin-bottom: 1.5rem;
font-size: 2rem;
}}
h2 {{
color: #444;
margin: 2rem 0 1rem;
font-size: 1.5rem;
border-bottom: 2px solid #667eea;
padding-bottom: 0.5rem;
}}
.project {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-bottom: 1rem;
}}
.project-header {{
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}}
.project-name {{
font-size: 1.25rem;
font-weight: bold;
color: #333;
}}
.status-badge {{
padding: 0.5rem 1rem;
border-radius: 20px;
font-weight: bold;
font-size: 0.85rem;
}}
.status-badge.running {{ background: #fff3cd; color: #856404; }}
.status-badge.completed {{ background: #d4edda; color: #155724; }}
.status-badge.error {{ background: #f8d7da; color: #721c24; }}
.status-badge.initialized {{ background: #e2e3e5; color: #383d41; }}
.progress-bar {{
width: 100%;
height: 24px;
background: #e9ecef;
border-radius: 12px;
overflow: hidden;
margin: 1rem 0;
}}
.progress-fill {{
height: 100%;
background: linear-gradient(90deg, #667eea, #764ba2);
transition: width 0.5s ease;
}}
.message {{
color: #495057;
margin: 0.5rem 0;
}}
.logs {{
background: #f8f9fa;
border-radius: 8px;
padding: 1rem;
max-height: 200px;
overflow-y: auto;
font-family: monospace;
font-size: 0.85rem;
}}
.log-item {{
padding: 0.25rem 0;
border-bottom: 1px solid #e9ecef;
}}
.log-item:last-child {{ border-bottom: none; }}
.timestamp {{
color: #6c757d;
font-size: 0.8rem;
}}
.log-level {{
font-weight: bold;
margin-right: 0.5rem;
}}
.log-level.info {{ color: #28a745; }}
.log-level.error {{ color: #dc3545; }}
.features {{
margin-top: 1rem;
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
}}
.feature-tag {{
background: #e7f3ff;
color: #0066cc;
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.85rem;
}}
.audit-section {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-top: 1rem;
}}
.audit-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.audit-header {{
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
flex-wrap: wrap;
gap: 0.5rem;
}}
.audit-action {{
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.85rem;
font-weight: bold;
}}
        /* Class names are lowercase to match action_color = action_type.lower() */
        .audit-action.create {{ background: #d4edda; color: #155724; }}
        .audit-action.update {{ background: #cce5ff; color: #004085; }}
        .audit-action.delete {{ background: #f8d7da; color: #721c24; }}
        .audit-action.prompt {{ background: #d1ecf1; color: #0c5460; }}
        .audit-action.commit {{ background: #fff3cd; color: #856404; }}
        .audit-action.pr_created {{ background: #d4edda; color: #155724; }}
        .audit-action.neutral {{ background: #e9ecef; color: #495057; }}
.audit-actor {{
background: #e9ecef;
padding: 0.25rem 0.75rem;
border-radius: 12px;
font-size: 0.8rem;
}}
.audit-time {{
color: #6c757d;
font-size: 0.8rem;
}}
.audit-details {{
color: #495057;
font-size: 0.9rem;
font-weight: bold;
text-transform: uppercase;
}}
.audit-metadata {{
background: #f1f3f5;
padding: 0.5rem;
border-radius: 4px;
font-size: 0.75rem;
font-family: monospace;
margin-top: 0.5rem;
max-height: 100px;
overflow-y: auto;
}}
.action-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.action-type {{
font-weight: bold;
color: #667eea;
font-size: 0.9rem;
}}
.action-description {{
color: #495057;
margin: 0.5rem 0;
}}
.action-actor {{
color: #6c757d;
font-size: 0.8rem;
}}
.snapshot-section {{
background: #f8f9fa;
border-radius: 12px;
padding: 1.5rem;
margin-top: 1rem;
}}
.snapshot-item {{
background: white;
border: 1px solid #dee2e6;
border-radius: 8px;
padding: 1rem;
margin-bottom: 0.5rem;
}}
.snapshot-time {{
color: #6c757d;
font-size: 0.8rem;
margin-bottom: 0.5rem;
}}
.snapshot-data {{
background: #f8f9fa;
padding: 0.5rem;
border-radius: 4px;
font-family: monospace;
font-size: 0.75rem;
max-height: 200px;
overflow-y: auto;
white-space: pre-wrap;
word-break: break-all;
}}
.empty-state {{
text-align: center;
color: #6c757d;
padding: 2rem;
}}
@media (max-width: 768px) {{
.container {{
padding: 1rem;
}}
h1 {{
font-size: 1.5rem;
}}
}}
</style>
</head>
<body>
<div class="container">
<h1>AI Software Factory Dashboard</h1>
<div class="project">
<div class="project-header">
<span class="project-name">{project_id_escaped}</span>
<span class="status-badge {status}">
{status.upper()}
</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: {self.ui_data.get('progress', 0)}%;"></div>
</div>
<div class="message">{self._escape_html(self.ui_data.get('message', 'No message'))}</div>
{f'<div class="logs" id="logs">{logs_html}</div>' if logs else '<div class="empty-state">No logs available</div>'}
{features_html}
</div>
{f'<div class="audit-section"><h2>Audit Trail</h2>{audit_html}</div>' if audit_html else ''}
{f'<div class="action-section"><h2>User Actions</h2>{actions_html}</div>' if actions_html else ''}
{f'<div class="snapshot-section"><h2>UI Snapshots</h2>{snapshots_html}</div>' if snapshots_html else ''}
{empty_state_message}
</div>
</body>
</html>"""


@@ -0,0 +1,151 @@
"""Configuration settings for AI Software Factory."""
from typing import Optional
from pydantic_settings import BaseSettings
class Settings(BaseSettings):
"""Application settings loaded from environment variables."""
# Server settings
HOST: str = "0.0.0.0"
PORT: int = 8000
LOG_LEVEL: str = "INFO"
# Ollama settings computed from environment
OLLAMA_URL: str = "http://ollama:11434"
OLLAMA_MODEL: str = "llama3"
# Gitea settings
GITEA_URL: str = "https://gitea.yourserver.com"
GITEA_TOKEN: str = ""
GITEA_OWNER: str = "ai-software-factory"
GITEA_REPO: str = "ai-software-factory"
# n8n settings
N8N_WEBHOOK_URL: str = ""
# Telegram settings
TELEGRAM_BOT_TOKEN: str = ""
TELEGRAM_CHAT_ID: str = ""
# PostgreSQL settings
POSTGRES_HOST: str = "localhost"
POSTGRES_PORT: int = 5432
POSTGRES_USER: str = "postgres"
POSTGRES_PASSWORD: str = ""
POSTGRES_DB: str = "ai_software_factory"
POSTGRES_TEST_DB: str = "ai_software_factory_test"
POSTGRES_URL: Optional[str] = None # Optional direct PostgreSQL connection URL
# SQLite settings for testing
USE_SQLITE: bool = True # Enable SQLite by default for testing
SQLITE_DB_PATH: str = "sqlite.db"
# Database connection pool settings (only for PostgreSQL)
DB_POOL_SIZE: int = 10
DB_MAX_OVERFLOW: int = 20
DB_POOL_RECYCLE: int = 3600
DB_POOL_TIMEOUT: int = 30
@property
def pool(self) -> dict:
"""Get database pool configuration."""
return {
"pool_size": self.DB_POOL_SIZE,
"max_overflow": self.DB_MAX_OVERFLOW,
"pool_recycle": self.DB_POOL_RECYCLE,
"pool_timeout": self.DB_POOL_TIMEOUT
}
@property
def database_url(self) -> str:
"""Get database connection URL."""
if self.USE_SQLITE:
return f"sqlite:///{self.SQLITE_DB_PATH}"
return (
f"postgresql://{self.POSTGRES_USER}:{self.POSTGRES_PASSWORD}"
f"@{self.POSTGRES_HOST}:{self.POSTGRES_PORT}/{self.POSTGRES_DB}"
)
@property
def test_database_url(self) -> str:
"""Get test database connection URL."""
if self.USE_SQLITE:
return f"sqlite:///{self.SQLITE_DB_PATH}"
return (
f"postgresql://{self.POSTGRES_USER}:{self.POSTGRES_PASSWORD}"
f"@{self.POSTGRES_HOST}:{self.POSTGRES_PORT}/{self.POSTGRES_TEST_DB}"
)
@property
def ollama_url(self) -> str:
"""Get Ollama URL with trimmed whitespace."""
return self.OLLAMA_URL.strip()
@property
def gitea_url(self) -> str:
"""Get Gitea URL with trimmed whitespace."""
return self.GITEA_URL.strip()
@property
def gitea_token(self) -> str:
"""Get Gitea token with trimmed whitespace."""
return self.GITEA_TOKEN.strip()
@property
def n8n_webhook_url(self) -> str:
"""Get n8n webhook URL with trimmed whitespace."""
return self.N8N_WEBHOOK_URL.strip()
@property
def telegram_bot_token(self) -> str:
"""Get Telegram bot token with trimmed whitespace."""
return self.TELEGRAM_BOT_TOKEN.strip()
@property
def telegram_chat_id(self) -> str:
"""Get Telegram chat ID with trimmed whitespace."""
return self.TELEGRAM_CHAT_ID.strip()
@property
def postgres_host(self) -> str:
"""Get PostgreSQL host."""
return self.POSTGRES_HOST.strip()
@property
def postgres_port(self) -> int:
"""Get PostgreSQL port as integer."""
return int(self.POSTGRES_PORT)
@property
def postgres_user(self) -> str:
"""Get PostgreSQL user."""
return self.POSTGRES_USER.strip()
@property
def postgres_password(self) -> str:
"""Get PostgreSQL password."""
return self.POSTGRES_PASSWORD.strip()
@property
def postgres_db(self) -> str:
"""Get PostgreSQL database name."""
return self.POSTGRES_DB.strip()
@property
def postgres_test_db(self) -> str:
"""Get test PostgreSQL database name."""
return self.POSTGRES_TEST_DB.strip()
class Config:
env_file = ".env"
env_file_encoding = "utf-8"
extra = "ignore"
# Create instance for module-level access
settings = Settings()
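The `database_url` property above switches between a SQLite and a PostgreSQL DSN depending on `USE_SQLITE`. A minimal stand-alone sketch of that resolution logic, using a plain dataclass instead of pydantic and hypothetical example values:

```python
from dataclasses import dataclass

@dataclass
class DbConfig:
    # Mirrors only the Settings fields that database_url reads
    # (values here are illustrative, not the project's real defaults).
    USE_SQLITE: bool = True
    SQLITE_DB_PATH: str = "sqlite.db"
    POSTGRES_USER: str = "postgres"
    POSTGRES_PASSWORD: str = "secret"
    POSTGRES_HOST: str = "localhost"
    POSTGRES_PORT: int = 5432
    POSTGRES_DB: str = "ai_software_factory"

    @property
    def database_url(self) -> str:
        if self.USE_SQLITE:
            return f"sqlite:///{self.SQLITE_DB_PATH}"
        return (
            f"postgresql://{self.POSTGRES_USER}:{self.POSTGRES_PASSWORD}"
            f"@{self.POSTGRES_HOST}:{self.POSTGRES_PORT}/{self.POSTGRES_DB}"
        )

print(DbConfig().database_url)
# sqlite:///sqlite.db
print(DbConfig(USE_SQLITE=False).database_url)
# postgresql://postgres:secret@localhost:5432/ai_software_factory
```

Because `USE_SQLITE` defaults to `True`, a fresh checkout resolves to the SQLite URL unless the environment overrides it.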


@@ -0,0 +1,322 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Software Factory Dashboard</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background: linear-gradient(135deg, #1a1a2e 0%, #16213e 100%);
min-height: 100vh;
color: #fff;
padding: 20px;
}
.dashboard {
max-width: 1200px;
margin: 0 auto;
}
.header {
text-align: center;
padding: 30px;
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.header h1 {
font-size: 2.5em;
margin-bottom: 10px;
background: linear-gradient(90deg, #00d4ff, #00ff88);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
}
.header p {
color: #888;
font-size: 1.1em;
}
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 20px;
margin-bottom: 20px;
}
.stat-card {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
border: 1px solid rgba(255, 255, 255, 0.1);
text-align: center;
}
.stat-card h3 {
font-size: 0.9em;
color: #888;
margin-bottom: 10px;
text-transform: uppercase;
letter-spacing: 1px;
}
.stat-card .value {
font-size: 2.5em;
font-weight: bold;
color: #00d4ff;
}
.stat-card.project .value { color: #00ff88; }
.stat-card.active .value { color: #ff6b6b; }
.stat-card.code .value { color: #ffd93d; }
.status-panel {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.status-panel h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #00d4ff;
}
.status-bar {
height: 20px;
background: #2a2a4a;
border-radius: 10px;
overflow: hidden;
margin-bottom: 10px;
}
.status-fill {
height: 100%;
background: linear-gradient(90deg, #00d4ff, #00ff88);
border-radius: 10px;
transition: width 0.5s ease;
}
.message {
padding: 10px;
background: rgba(0, 212, 255, 0.1);
border-radius: 8px;
border-left: 4px solid #00d4ff;
}
.projects-section {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.projects-section h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #00ff88;
}
.projects-list {
display: flex;
flex-wrap: wrap;
gap: 15px;
}
.project-item {
background: rgba(0, 255, 136, 0.1);
padding: 15px 20px;
border-radius: 10px;
border: 1px solid rgba(0, 255, 136, 0.3);
font-size: 0.9em;
}
.project-item.active {
background: rgba(255, 107, 107, 0.1);
border-color: rgba(255, 107, 107, 0.3);
}
.audit-section {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
margin-bottom: 20px;
border: 1px solid rgba(255, 255, 255, 0.1);
}
.audit-section h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #ffd93d;
}
.audit-table {
width: 100%;
border-collapse: collapse;
margin-top: 10px;
}
.audit-table th, .audit-table td {
padding: 12px;
text-align: left;
border-bottom: 1px solid rgba(255, 255, 255, 0.1);
}
.audit-table th {
color: #888;
font-weight: 600;
font-size: 0.85em;
}
.audit-table td {
font-size: 0.9em;
}
.audit-table .timestamp {
color: #666;
font-size: 0.8em;
}
.actions-panel {
background: rgba(255, 255, 255, 0.05);
border-radius: 15px;
padding: 25px;
border: 1px solid rgba(255, 255, 255, 0.1);
text-align: center;
}
.actions-panel h2 {
font-size: 1.3em;
margin-bottom: 15px;
color: #ff6b6b;
}
.actions-panel p {
color: #888;
margin-bottom: 20px;
}
@media (max-width: 768px) {
.stats-grid {
grid-template-columns: 1fr;
}
.projects-list {
flex-direction: column;
}
}
</style>
</head>
<body>
<div class="dashboard">
<div class="header">
<h1>🚀 AI Software Factory</h1>
<p>Real-time Dashboard & Audit Trail Display</p>
</div>
<div class="stats-grid">
<div class="stat-card project">
<h3>Current Project</h3>
<div class="value">test-project</div>
</div>
<div class="stat-card active">
<h3>Active Projects</h3>
<div class="value">1</div>
</div>
<div class="stat-card code">
<h3>Code Generated</h3>
<div class="value">12.4 KB</div>
</div>
<div class="stat-card">
<h3>Status</h3>
<div class="value" id="status-value">running</div>
</div>
</div>
<div class="status-panel">
<h2>📊 Current Status</h2>
<div class="status-bar">
<div class="status-fill" id="status-fill" style="width: 75%"></div>
</div>
<div class="message">
<strong>Generating code...</strong><br>
<span style="color: #888;">Progress: 75%</span>
</div>
</div>
<div class="projects-section">
<h2>📁 Active Projects</h2>
<div class="projects-list">
<div class="project-item active">
<strong>test-project</strong> • Agent: Orchestrator • Last update: just now
</div>
</div>
</div>
<div class="audit-section">
<h2>📜 Audit Trail</h2>
<table class="audit-table">
<thead>
<tr>
<th>Timestamp</th>
<th>Agent</th>
<th>Action</th>
<th>Status</th>
</tr>
</thead>
<tbody>
<tr>
<td class="timestamp">2026-03-22 01:41:00</td>
<td>Orchestrator</td>
<td>Initialized project</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:05</td>
<td>Git Manager</td>
<td>Initialized git repository</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:10</td>
<td>Code Generator</td>
<td>Generated main.py</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:15</td>
<td>Code Generator</td>
<td>Generated requirements.txt</td>
<td style="color: #00ff88;">Success</td>
</tr>
<tr>
<td class="timestamp">2026-03-22 01:41:18</td>
<td>Orchestrator</td>
<td>Running</td>
<td style="color: #00d4ff;">In Progress</td>
</tr>
</tbody>
</table>
</div>
<div class="actions-panel">
<h2>⚙️ System Actions</h2>
<p>Dashboard is rendering successfully. The UI manager is active and monitoring all projects.</p>
<p style="color: #888; font-size: 0.9em;">This dashboard is powered by the UIManager component and displays real-time status updates, audit trails, and project information.</p>
</div>
</div>
</body>
</html>


@@ -0,0 +1,126 @@
"""Database connection and session management."""
from sqlalchemy import create_engine, event
from sqlalchemy.engine import Engine
from sqlalchemy.orm import sessionmaker, Session
from ai_software_factory.config import settings
from ai_software_factory.models import Base
def get_engine() -> Engine:
"""Create and return SQLAlchemy engine with connection pooling."""
# Use SQLite for tests, PostgreSQL for production
if settings.USE_SQLITE:
db_path = settings.SQLITE_DB_PATH or "/tmp/ai_software_factory_test.db"
db_url = f"sqlite:///{db_path}"
# SQLite-specific configuration - no pooling for SQLite
engine = create_engine(
db_url,
connect_args={"check_same_thread": False},
echo=settings.LOG_LEVEL == "DEBUG"
)
else:
db_url = settings.POSTGRES_URL or settings.database_url
# PostgreSQL-specific configuration
engine = create_engine(
db_url,
pool_size=settings.DB_POOL_SIZE or 10,
max_overflow=settings.DB_MAX_OVERFLOW or 20,
pool_pre_ping=settings.LOG_LEVEL == "DEBUG",
echo=settings.LOG_LEVEL == "DEBUG",
pool_timeout=settings.DB_POOL_TIMEOUT or 30
)
# Event listener for connection checkout (PostgreSQL only)
if not settings.USE_SQLITE:
@event.listens_for(engine, "checkout")
def receive_checkout(dbapi_connection, connection_record, connection_proxy):
"""Log connection checkout for audit purposes."""
if settings.LOG_LEVEL in ("DEBUG", "INFO"):
print("DB connection checked out from pool")
@event.listens_for(engine, "checkin")
def receive_checkin(dbapi_connection, connection_record):
"""Log connection checkin for audit purposes."""
if settings.LOG_LEVEL == "DEBUG":
print("DB connection returned to pool")
return engine
def get_session() -> Session:
"""Create and return database session factory."""
engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
def session_factory() -> Session:
session = SessionLocal()
try:
yield session
session.commit()
except Exception:
session.rollback()
raise
finally:
session.close()
return session_factory
def init_db() -> None:
"""Initialize database tables."""
engine = get_engine()
Base.metadata.create_all(bind=engine)
print("Database tables created successfully.")
def drop_db() -> None:
"""Drop all database tables (use with caution!)."""
engine = get_engine()
Base.metadata.drop_all(bind=engine)
print("Database tables dropped successfully.")
def get_db() -> Session:
"""Dependency for FastAPI routes that need database access."""
engine = get_engine()
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
session = SessionLocal()
try:
yield session
finally:
session.close()
def get_db_session() -> Session:
    """Get a database session directly (for non-FastAPI usage)."""
    # get_session() returns a factory function; call it to obtain the
    # generator, then advance it once to receive the yielded session.
    return next(get_session()())
def create_migration_script() -> str:
"""Generate a migration script for database schema changes."""
return '''-- Migration script for AI Software Factory database
-- Generated automatically - review before applying
-- Add new columns to existing tables if needed
-- This is a placeholder for future migrations
-- Example: Add audit_trail_index for better query performance
CREATE INDEX IF NOT EXISTS idx_audit_trail_timestamp ON audit_trail(timestamp);
CREATE INDEX IF NOT EXISTS idx_audit_trail_action ON audit_trail(action);
CREATE INDEX IF NOT EXISTS idx_audit_trail_project ON audit_trail(project_id);
-- Example: Add user_actions_index for better query performance
CREATE INDEX IF NOT EXISTS idx_user_actions_timestamp ON user_actions(timestamp);
CREATE INDEX IF NOT EXISTS idx_user_actions_actor ON user_actions(actor_type, actor_name);
CREATE INDEX IF NOT EXISTS idx_user_actions_history ON user_actions(history_id);
-- Example: Add project_logs_index for better query performance
CREATE INDEX IF NOT EXISTS idx_project_logs_timestamp ON project_logs(timestamp);
CREATE INDEX IF NOT EXISTS idx_project_logs_level ON project_logs(log_level);
-- Example: Add system_logs_index for better query performance
CREATE INDEX IF NOT EXISTS idx_system_logs_timestamp ON system_logs(timestamp);
CREATE INDEX IF NOT EXISTS idx_system_logs_component ON system_logs(component);
'''
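`get_db` above relies on the generator-as-dependency pattern: yield the resource to the caller, then clean up in `finally` once they are done. The same shape, sketched with stdlib `sqlite3` so it runs without SQLAlchemy:

```python
import sqlite3
from typing import Iterator

def get_conn(path: str = ":memory:") -> Iterator[sqlite3.Connection]:
    # Yield the connection to the caller; close it when they finish,
    # even if an exception escaped on their side.
    conn = sqlite3.connect(path)
    try:
        yield conn
    finally:
        conn.close()

# Manual use of the dependency (FastAPI drives this loop for you):
gen = get_conn()
conn = next(gen)          # "checkout": run up to the yield
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
print(conn.execute("SELECT x FROM t").fetchone())  # (1,)
gen.close()               # runs the finally block, closing the connection
```

FastAPI calls `next()` to obtain the session before the route runs and closes the generator afterwards, which is why the route body never sees the cleanup code.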


@@ -0,0 +1,85 @@
version: '3.8'
services:
ai-software-factory:
build:
context: .
dockerfile: Containerfile
ports:
- "8000:8000"
environment:
- HOST=0.0.0.0
- PORT=8000
- OLLAMA_URL=http://ollama:11434
- OLLAMA_MODEL=llama3
- GITEA_URL=${GITEA_URL:-https://gitea.yourserver.com}
- GITEA_TOKEN=${GITEA_TOKEN:-}
- GITEA_OWNER=${GITEA_OWNER:-ai-test}
- GITEA_REPO=${GITEA_REPO:-ai-test}
- N8N_WEBHOOK_URL=${N8N_WEBHOOK_URL:-}
- TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN:-}
- TELEGRAM_CHAT_ID=${TELEGRAM_CHAT_ID:-}
- POSTGRES_HOST=postgres
- POSTGRES_PORT=5432
- POSTGRES_USER=${POSTGRES_USER:-ai_software_factory}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-}
- POSTGRES_DB=${POSTGRES_DB:-ai_software_factory}
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- DB_POOL_SIZE=${DB_POOL_SIZE:-10}
- DB_MAX_OVERFLOW=${DB_MAX_OVERFLOW:-20}
- DB_POOL_RECYCLE=${DB_POOL_RECYCLE:-3600}
- DB_POOL_TIMEOUT=${DB_POOL_TIMEOUT:-30}
depends_on:
- postgres
networks:
- ai-test-network
postgres:
image: postgres:15-alpine
environment:
- POSTGRES_USER=${POSTGRES_USER:-ai_software_factory}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-}
- POSTGRES_DB=${POSTGRES_DB:-ai_software_factory}
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
networks:
- ai-test-network
# Health check for PostgreSQL
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-ai_software_factory} -d ${POSTGRES_DB:-ai_software_factory}"]
interval: 10s
timeout: 5s
retries: 5
n8n:
image: n8nio/n8n:latest
ports:
- "5678:5678"
environment:
- N8N_HOST=n8n
- N8N_PORT=5678
- N8N_PROTOCOL=http
volumes:
- n8n_data:/home/node/.n8n
networks:
- ai-test-network
ollama:
image: ollama/ollama:latest
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
networks:
- ai-test-network
volumes:
postgres_data:
n8n_data:
ollama_data:
networks:
ai-test-network:
driver: bridge

ai_software_factory/main.py Normal file

@@ -0,0 +1,754 @@
"""FastAPI application for AI Software Factory."""
from fastapi import FastAPI, Depends, HTTPException, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from sqlalchemy.orm import Session
from ai_software_factory.database import get_db, init_db, get_engine
from ai_software_factory.models import (
ProjectHistory, ProjectStatus, AuditTrail, UserAction, ProjectLog, SystemLog,
PullRequestData, UISnapshot
)
from ai_software_factory.agents.orchestrator import AgentOrchestrator
from ai_software_factory.agents.ui_manager import UIManager
from ai_software_factory.agents.database_manager import DatabaseManager
from ai_software_factory.config import settings
from datetime import datetime
import json
app = FastAPI(
title="AI Software Factory",
description="Automated software generation service with PostgreSQL audit trail",
version="0.0.2"
)
# Add CORS middleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/")
async def root():
"""API information endpoint."""
return {
"service": "AI Software Factory",
"version": "0.0.2",
"description": "Automated software generation with PostgreSQL audit trail",
"endpoints": {
"/": "API information",
"/health": "Health check",
"/generate": "Generate new software",
"/status/{project_id}": "Get project status",
"/projects": "List all projects",
"/audit/projects": "Get project audit data",
"/audit/logs": "Get project logs",
"/audit/system/logs": "Get system audit logs",
"/audit/trail": "Get audit trail",
"/audit/trail/{project_id}": "Get project audit trail",
"/audit/actions": "Get user actions",
"/audit/actions/{project_id}": "Get project user actions",
"/audit/history": "Get project history",
"/audit/history/{project_id}": "Get project history",
"/init-db": "Initialize database",
}
}
@app.get("/health")
async def health_check():
"""Health check endpoint."""
return {"status": "healthy", "timestamp": datetime.utcnow().isoformat()}
@app.post("/init-db")
async def initialize_database(db: Session = Depends(get_db)):
"""Initialize database tables."""
try:
init_db()
return {"status": "success", "message": "Database tables initialized successfully"}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to initialize database: {str(e)}"
)
@app.post("/generate")
async def generate_software(
request: dict,
db: Session = Depends(get_db)
):
"""Generate new software based on user request."""
try:
# Validate request has required fields
if not request.get("name"):
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="Request must contain 'name' field"
)
# Create orchestrator with database session
orchestrator = AgentOrchestrator(
project_id=request.get("name", "project"),
project_name=request.get("name", "Project"),
description=request.get("description", ""),
features=request.get("features", []),
tech_stack=request.get("tech_stack", []),
db=db
)
# Run orchestrator
result = await orchestrator.run()
# Flatten the response structure for tests
ui_data = orchestrator.ui_manager.ui_data
# Wrap data in {'status': '...'} format to match test expectations
return {
"status": result.get("status", orchestrator.status),
"data": {
"project_id": orchestrator.project_id,
"name": orchestrator.project_name,
"progress": orchestrator.progress,
"message": orchestrator.message,
"logs": orchestrator.logs,
"ui_data": ui_data,
"history_id": result.get("history_id")
}
}
except HTTPException:
raise
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/projects")
async def list_projects(db: Session = Depends(get_db), limit: int = 100, offset: int = 0):
"""List all projects."""
projects = db.query(ProjectHistory).offset(offset).limit(limit).all()
return {
"projects": [
{
"project_id": p.project_id,
"project_name": p.project_name,
"status": p.status,
"progress": p.progress,
"message": p.message,
"created_at": p.created_at.isoformat()
}
for p in projects
],
"total": db.query(ProjectHistory).count()
}
@app.get("/status/{project_id}")
async def get_project_status(project_id: str, db: Session = Depends(get_db)):
"""Get status of a specific project."""
history = db.query(ProjectHistory).filter(
ProjectHistory.project_id == project_id
).first()
if not history:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Project {project_id} not found"
)
# Get latest UI snapshot
try:
latest_snapshot = db.query(UISnapshot).filter(
UISnapshot.history_id == history.id
).order_by(UISnapshot.created_at.desc()).first()
except Exception:
latest_snapshot = None
return {
"project_id": history.project_id,
"project_name": history.project_name,
"status": history.status,
"progress": history.progress,
"message": history.message,
"current_step": history.current_step,
"created_at": history.created_at.isoformat(),
"updated_at": history.updated_at.isoformat(),
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
"ui_data": json.loads(latest_snapshot.snapshot_data) if latest_snapshot else None
}
@app.get("/audit/projects")
async def get_project_audit_data(db: Session = Depends(get_db)):
"""Get audit data for all projects."""
projects = db.query(ProjectHistory).all()
# Build PR data cache keyed by history_id
pr_cache = {}
all_prs = db.query(PullRequestData).all()
for pr in all_prs:
pr_cache[pr.history_id] = {
"pr_number": pr.pr_number,
"pr_title": pr.pr_title,
"pr_body": pr.pr_body,
"pr_state": pr.pr_state,
"pr_url": pr.pr_url,
"created_at": pr.created_at.isoformat() if pr.created_at else None
}
return {
"projects": [
{
"project_id": p.project_id,
"project_name": p.project_name,
"status": p.status,
"progress": p.progress,
"message": p.message,
"created_at": p.created_at.isoformat(),
"updated_at": p.updated_at.isoformat() if p.updated_at else None,
"completed_at": p.completed_at.isoformat() if p.completed_at else None,
"logs": [
{
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None
}
for log in db.query(ProjectLog).filter(
ProjectLog.history_id == p.id
).limit(10).all()
],
"pr_data": pr_cache.get(p.id, None)
}
for p in projects
],
"total": len(projects)
}
@app.get("/audit/logs")
async def get_system_logs(
level: str = "INFO",
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get project logs."""
try:
logs = db.query(ProjectLog).filter(
ProjectLog.log_level == level
).offset(offset).limit(limit).all()
return {
"logs": [
{
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None
}
for log in logs
],
"total": db.query(ProjectLog).filter(
ProjectLog.log_level == level
).count()
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/system/logs")
async def get_system_audit_logs(
level: str = "INFO",
component: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get system-level audit logs."""
try:
query = db.query(SystemLog).filter(SystemLog.log_level == level)
if component:
query = query.filter(SystemLog.component == component)
logs = query.offset(offset).limit(limit).all()
return {
"logs": [
{
"level": log.log_level,
"message": log.log_message,
"component": log.component,
"timestamp": log.created_at.isoformat() if log.created_at else None
}
for log in logs
],
"total": query.count()
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/trail")
async def get_audit_trail(
action: str = None,
actor: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get audit trail entries."""
try:
query = db.query(AuditTrail).order_by(AuditTrail.created_at.desc())
if action:
query = query.filter(AuditTrail.action == action)
if actor:
query = query.filter(AuditTrail.actor == actor)
audit_entries = query.offset(offset).limit(limit).all()
return {
"audit_trail": [
{
"id": audit.id,
"project_id": audit.project_id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata": audit.metadata,
"ip_address": audit.ip_address,
"user_agent": audit.user_agent,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audit_entries
],
"total": query.count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/trail/{project_id}")
async def get_project_audit_trail(
project_id: str,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get audit trail for a specific project."""
try:
audit_entries = db.query(AuditTrail).filter(
AuditTrail.project_id == project_id
).order_by(AuditTrail.created_at.desc()).offset(offset).limit(limit).all()
return {
"project_id": project_id,
"audit_trail": [
{
"id": audit.id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata": audit.metadata,
"ip_address": audit.ip_address,
"user_agent": audit.user_agent,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audit_entries
],
"total": db.query(AuditTrail).filter(
AuditTrail.project_id == project_id
).count(),
"limit": limit,
"offset": offset
}
    except Exception as e:
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail=str(e)
        )
@app.get("/audit/actions")
async def get_user_actions(
actor_type: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get user actions."""
try:
query = db.query(UserAction).order_by(UserAction.created_at.desc())
if actor_type:
query = query.filter(UserAction.actor_type == actor_type)
actions = query.offset(offset).limit(limit).all()
return {
"actions": [
{
"id": action.id,
"history_id": action.history_id,
"action_type": action.action_type,
"actor_type": action.actor_type,
"actor_name": action.actor_name,
"description": action.action_description,
"data": action.action_data,
"ip_address": action.ip_address,
"user_agent": action.user_agent,
"timestamp": action.created_at.isoformat() if action.created_at else None
}
for action in actions
],
"total": query.count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/actions/{project_id}")
async def get_project_user_actions(
project_id: str,
actor_type: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get user actions for a specific project."""
history = db.query(ProjectHistory).filter(
ProjectHistory.project_id == project_id
).first()
if not history:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Project {project_id} not found"
)
try:
query = db.query(UserAction).filter(
UserAction.history_id == history.id
).order_by(UserAction.created_at.desc())
if actor_type:
query = query.filter(UserAction.actor_type == actor_type)
actions = query.offset(offset).limit(limit).all()
return {
"project_id": project_id,
"actions": [
{
"id": action.id,
"action_type": action.action_type,
"actor_type": action.actor_type,
"actor_name": action.actor_name,
"description": action.action_description,
"data": action.action_data,
"timestamp": action.created_at.isoformat() if action.created_at else None
}
for action in actions
],
"total": query.count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/history")
async def get_project_history(
project_id: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get project history."""
try:
if project_id:
history = db.query(ProjectHistory).filter(
ProjectHistory.project_id == project_id
).first()
if not history:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Project {project_id} not found"
)
return {
"project": {
"id": history.id,
"project_id": history.project_id,
"project_name": history.project_name,
"description": history.description,
"status": history.status,
"progress": history.progress,
"message": history.message,
"created_at": history.created_at.isoformat(),
"updated_at": history.updated_at.isoformat() if history.updated_at else None,
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
"error_message": history.error_message
}
}
else:
histories = db.query(ProjectHistory).offset(offset).limit(limit).all()
return {
"histories": [
{
"id": h.id,
"project_id": h.project_id,
"project_name": h.project_name,
"status": h.status,
"progress": h.progress,
"message": h.message,
"created_at": h.created_at.isoformat()
}
for h in histories
],
"total": db.query(ProjectHistory).count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/history/{project_id}")
async def get_detailed_project_history(
project_id: str,
db: Session = Depends(get_db)
):
"""Get detailed history for a project including all audit data."""
history = db.query(ProjectHistory).filter(
ProjectHistory.project_id == project_id
).first()
if not history:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Project {project_id} not found"
)
try:
# Get all logs
logs = db.query(ProjectLog).filter(
ProjectLog.history_id == history.id
).order_by(ProjectLog.created_at.desc()).all()
# Get all user actions
actions = db.query(UserAction).filter(
UserAction.history_id == history.id
).order_by(UserAction.created_at.desc()).all()
# Get all audit trail entries
audit_entries = db.query(AuditTrail).filter(
AuditTrail.project_id == project_id
).order_by(AuditTrail.created_at.desc()).all()
# Get all UI snapshots
snapshots = db.query(UISnapshot).filter(
UISnapshot.history_id == history.id
).order_by(UISnapshot.created_at.desc()).all()
# Get PR data
pr = db.query(PullRequestData).filter(
PullRequestData.history_id == history.id
).first()
pr_data = None
if pr:
pr_data = {
"pr_number": pr.pr_number,
"pr_title": pr.pr_title,
"pr_body": pr.pr_body,
"pr_state": pr.pr_state,
"pr_url": pr.pr_url,
"created_at": pr.created_at.isoformat() if pr.created_at else None
}
return {
"project": {
"id": history.id,
"project_id": history.project_id,
"project_name": history.project_name,
"description": history.description,
"status": history.status,
"progress": history.progress,
"message": history.message,
"created_at": history.created_at.isoformat(),
"updated_at": history.updated_at.isoformat() if history.updated_at else None,
"completed_at": history.completed_at.isoformat() if history.completed_at else None,
"error_message": history.error_message
},
"logs": [
{
"id": log.id,
"level": log.log_level,
"message": log.log_message,
"timestamp": log.timestamp.isoformat() if log.timestamp else None
}
for log in logs
],
"actions": [
{
"id": action.id,
"action_type": action.action_type,
"actor_type": action.actor_type,
"actor_name": action.actor_name,
"description": action.action_description,
"data": action.action_data,
"timestamp": action.created_at.isoformat() if action.created_at else None
}
for action in actions
],
"audit_trail": [
{
"id": audit.id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata": audit.metadata,
"ip_address": audit.ip_address,
"user_agent": audit.user_agent,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in audit_entries
],
"snapshots": [
{
"id": snapshot.id,
"data": snapshot.snapshot_data,
"created_at": snapshot.created_at.isoformat()
}
for snapshot in snapshots
],
"pr_data": pr_data
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/prompts")
async def get_prompts(
project_id: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get prompts submitted by users."""
try:
query = db.query(AuditTrail).filter(
AuditTrail.action_type == "PROMPT"
).order_by(AuditTrail.created_at.desc())
if project_id:
query = query.filter(AuditTrail.project_id == project_id)
prompts = query.offset(offset).limit(limit).all()
return {
"prompts": [
{
"id": audit.id,
"project_id": audit.project_id,
"actor": audit.actor,
"details": audit.details,
"metadata": audit.metadata,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in prompts
],
"total": query.count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
@app.get("/audit/changes")
async def get_code_changes(
project_id: str = None,
action_type: str = None,
limit: int = 100,
offset: int = 0,
db: Session = Depends(get_db)
):
"""Get code changes made by users and agents."""
try:
query = db.query(AuditTrail).filter(
AuditTrail.action_type.in_(["CREATE", "UPDATE", "DELETE", "CODE_CHANGE"])
).order_by(AuditTrail.created_at.desc())
if project_id:
query = query.filter(AuditTrail.project_id == project_id)
if action_type:
query = query.filter(AuditTrail.action_type == action_type)
changes = query.offset(offset).limit(limit).all()
return {
"changes": [
{
"id": audit.id,
"project_id": audit.project_id,
"action": audit.action,
"actor": audit.actor,
"action_type": audit.action_type,
"details": audit.details,
"metadata": audit.metadata,
"timestamp": audit.created_at.isoformat() if audit.created_at else None
}
for audit in changes
],
"total": query.count(),
"limit": limit,
"offset": offset
}
except Exception as e:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
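The paging contract shared by these `/audit` endpoints can be sketched stand-alone (plain Python lists stand in for the query rows; the dict keys mirror the response shape above):

```python
# Hypothetical stand-in for the offset/limit paging used by the /audit
# endpoints: "total" counts every matching row, while the returned page
# is only the requested slice.
rows = list(range(25))              # 25 matching audit rows
limit, offset = 10, 20
page = rows[offset:offset + limit]  # rows 20..24 -> 5 items
response = {
    "changes": page,
    "total": len(rows),             # mirrors query.count()
    "limit": limit,
    "offset": offset,
}
print(len(response["changes"]), response["total"])  # 5 25
```

Note that `total` is computed against the unsliced query, so clients can page past the end and simply receive a short (or empty) `changes` list.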

View File

@@ -0,0 +1,172 @@
"""Database models for AI Software Factory."""
from datetime import datetime
from enum import Enum
from typing import List, Optional
import logging
from sqlalchemy import (
Column, Integer, String, Text, Boolean, ForeignKey, DateTime, JSON
)
from sqlalchemy.orm import relationship, declarative_base
from ai_software_factory.config import settings
Base = declarative_base()
logger = logging.getLogger(__name__)
class ProjectStatus(str, Enum):
"""Project status enumeration."""
INITIALIZED = "initialized"
STARTED = "started"
RUNNING = "running"
COMPLETED = "completed"
ERROR = "error"
class ProjectHistory(Base):
"""Main project tracking table."""
__tablename__ = "project_history"
id = Column(Integer, primary_key=True)
project_id = Column(String(255), nullable=False)
project_name = Column(String(255), nullable=True)
features = Column(Text, default="")
description = Column(String(255), default="")
status = Column(String(50), default='started')
progress = Column(Integer, default=0)
message = Column(String(500), default="")
current_step = Column(String(255), nullable=True)
total_steps = Column(Integer, nullable=True)
current_step_description = Column(String(1024), nullable=True)
current_step_details = Column(Text, nullable=True)
error_message = Column(Text, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
started_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
completed_at = Column(DateTime, nullable=True)
# Relationships
project_logs = relationship("ProjectLog", back_populates="project_history", cascade="all, delete-orphan")
ui_snapshots = relationship("UISnapshot", back_populates="project_history", cascade="all, delete-orphan")
pull_requests = relationship("PullRequest", back_populates="project_history", cascade="all, delete-orphan")
pull_request_data = relationship("PullRequestData", back_populates="project_history", cascade="all, delete-orphan")
class ProjectLog(Base):
"""Detailed log entries for projects."""
__tablename__ = "project_logs"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
log_level = Column(String(50), default="INFO") # INFO, WARNING, ERROR
log_message = Column(String(500), nullable=False)
timestamp = Column(DateTime, nullable=True)
project_history = relationship("ProjectHistory", back_populates="project_logs")
class UISnapshot(Base):
"""UI snapshots for projects."""
__tablename__ = "ui_snapshots"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
snapshot_data = Column(JSON, nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="ui_snapshots")
class PullRequest(Base):
"""Pull request data for projects."""
__tablename__ = "pull_requests"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
pr_number = Column(Integer, nullable=False)
pr_title = Column(String(500), nullable=False)
pr_body = Column(Text)
base = Column(String(255), nullable=False)
user = Column(String(255), nullable=False)
pr_url = Column(String(500), nullable=False)
merged = Column(Boolean, default=False)
merged_at = Column(DateTime, nullable=True)
pr_state = Column(String(50), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="pull_requests")
class PullRequestData(Base):
"""Pull request data for audit API."""
__tablename__ = "pull_request_data"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=False)
pr_number = Column(Integer, nullable=False)
pr_title = Column(String(500), nullable=False)
pr_body = Column(Text)
pr_state = Column(String(50), nullable=False)
pr_url = Column(String(500), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_history = relationship("ProjectHistory", back_populates="pull_request_data")
class SystemLog(Base):
"""System-wide log entries."""
__tablename__ = "system_logs"
id = Column(Integer, primary_key=True)
component = Column(String(50), nullable=False)
log_level = Column(String(50), default="INFO")
log_message = Column(String(500), nullable=False)
user_agent = Column(String(255), nullable=True)
ip_address = Column(String(45), nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
class AuditTrail(Base):
"""Audit trail entries for system-wide logging."""
__tablename__ = "audit_trail"
id = Column(Integer, primary_key=True)
component = Column(String(50), nullable=True)
log_level = Column(String(50), default="INFO")
message = Column(String(500), nullable=False)
created_at = Column(DateTime, default=datetime.utcnow)
project_id = Column(String(255), nullable=True)
action = Column(String(100), nullable=True)
actor = Column(String(100), nullable=True)
action_type = Column(String(50), nullable=True)
details = Column(Text, nullable=True)
metadata_json = Column(JSON, nullable=True)
# Used by the /audit endpoints
ip_address = Column(String(45), nullable=True)
user_agent = Column(String(255), nullable=True)
class UserAction(Base):
"""User action audit entries."""
__tablename__ = "user_actions"
id = Column(Integer, primary_key=True)
history_id = Column(Integer, ForeignKey("project_history.id"), nullable=True)
user_id = Column(String(100), nullable=True)
action_type = Column(String(100), nullable=True)
actor_type = Column(String(50), nullable=True)
actor_name = Column(String(100), nullable=True)
action_description = Column(String(500), nullable=True)
action_data = Column(JSON, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
class AgentAction(Base):
"""Agent action audit entries."""
__tablename__ = "agent_actions"
id = Column(Integer, primary_key=True)
agent_name = Column(String(100), nullable=False)
action_type = Column(String(100), nullable=False)
success = Column(Boolean, default=True)
message = Column(String(500), nullable=True)
timestamp = Column(DateTime, default=datetime.utcnow)

View File

@@ -0,0 +1,10 @@
[pytest]
testpaths = tests
pythonpath = .
addopts = -v --tb=short
filterwarnings =
ignore::DeprecationWarning
asyncio_mode = auto
asyncio_default_fixture_loop_scope = function
asyncio_default_test_loop_scope = function

View File

@@ -0,0 +1,17 @@
fastapi==0.109.0
uvicorn[standard]==0.27.0
sqlalchemy==2.0.25
psycopg2-binary==2.9.9
pydantic==2.5.3
pydantic-settings==2.1.0
python-multipart==0.0.6
aiofiles==23.2.1
python-telegram-bot==20.7
requests==2.31.0
pytest==7.4.3
pytest-asyncio==0.23.3
pytest-cov==4.1.0
black==23.12.1
isort==5.13.2
flake8==6.1.0
mypy==1.7.1
httpx==0.25.2

View File

@@ -0,0 +1,11 @@
# test-project
Test project description
## Features
- feature-1
- feature-2
## Tech Stack
- python
- fastapi

View File

@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')

View File

@@ -0,0 +1,7 @@
# Test
Test
## Features
## Tech Stack

View File

@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')

View File

@@ -0,0 +1,400 @@
"""Test logging utility for validating agent responses and system outputs."""
import re
from typing import Dict, Any, List
from datetime import datetime
# Color codes for terminal output
class Colors:
GREEN = '\033[92m'
RED = '\033[91m'
YELLOW = '\033[93m'
BLUE = '\033[94m'
CYAN = '\033[96m'
RESET = '\033[0m'
class TestLogger:
"""Utility class for logging test results and assertions."""
def __init__(self):
self.assertions: List[Dict[str, Any]] = []
self.errors: List[Dict[str, Any]] = []
self.logs: List[str] = []
def log(self, message: str, level: str = 'INFO') -> None:
"""Log an informational message."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"[{timestamp}] [{level}] {message}"
self.logs.append(formatted)
print(formatted)
def success(self, message: str) -> None:
"""Log a success message with green color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.GREEN}[{timestamp}] [✓ PASS] {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def error(self, message: str) -> None:
"""Log an error message with red color and record it for get_errors()."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.RED}[{timestamp}] [✗ ERROR] {message}{Colors.RESET}"
self.logs.append(formatted)
self.errors.append({'message': message, 'timestamp': timestamp})
print(formatted)
def warning(self, message: str) -> None:
"""Log a warning message with yellow color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.YELLOW}[{timestamp}] [!] WARN {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def info(self, message: str) -> None:
"""Log an info message with blue color."""
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
formatted = f"{Colors.BLUE}[{timestamp}] [ INFO] {message}{Colors.RESET}"
self.logs.append(formatted)
print(formatted)
def assert_contains(self, text: str, expected: str, message: str = '') -> bool:
"""Assert that text contains expected substring."""
try:
contains = expected in text
if contains:
self.success(f"'{expected}' found in text")
self.assertions.append({
'type': 'assert_contains',
'result': 'pass',
'expected': expected,
'message': message or f"'{expected}' in text"
})
return True
else:
self.error(f"✗ Expected '{expected}' not found in text")
self.assertions.append({
'type': 'assert_contains',
'result': 'fail',
'expected': expected,
'message': message or f"'{expected}' in text"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_contains',
'result': 'error',
'expected': expected,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_not_contains(self, text: str, unexpected: str, message: str = '') -> bool:
"""Assert that text does not contain expected substring."""
try:
contains = unexpected in text
if not contains:
self.success(f"'{unexpected}' not found in text")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'pass',
'unexpected': unexpected,
'message': message or f"'{unexpected}' not in text"
})
return True
else:
self.error(f"✗ Unexpected '{unexpected}' found in text")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'fail',
'unexpected': unexpected,
'message': message or f"'{unexpected}' not in text"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_not_contains',
'result': 'error',
'unexpected': unexpected,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_equal(self, actual: str, expected: str, message: str = '') -> bool:
"""Assert that two strings are equal."""
try:
if actual == expected:
self.success(f"✓ Strings equal")
self.assertions.append({
'type': 'assert_equal',
'result': 'pass',
'expected': expected,
'message': message or f"actual == expected"
})
return True
else:
self.error(f"✗ Strings not equal. Expected: '{expected}', Got: '{actual}'")
self.assertions.append({
'type': 'assert_equal',
'result': 'fail',
'expected': expected,
'actual': actual,
'message': message or "actual == expected"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_equal',
'result': 'error',
'expected': expected,
'actual': actual,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_starts_with(self, text: str, prefix: str, message: str = '') -> bool:
"""Assert that text starts with expected prefix."""
try:
starts_with = text.startswith(prefix)
if starts_with:
self.success(f"✓ Text starts with '{prefix}'")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'pass',
'prefix': prefix,
'message': message or f"text starts with '{prefix}'"
})
return True
else:
self.error(f"✗ Text does not start with '{prefix}'")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'fail',
'prefix': prefix,
'message': message or f"text starts with '{prefix}'"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_starts_with',
'result': 'error',
'prefix': prefix,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_ends_with(self, text: str, suffix: str, message: str = '') -> bool:
"""Assert that text ends with expected suffix."""
try:
ends_with = text.endswith(suffix)
if ends_with:
self.success(f"✓ Text ends with '{suffix}'")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'pass',
'suffix': suffix,
'message': message or f"text ends with '{suffix}'"
})
return True
else:
self.error(f"✗ Text does not end with '{suffix}'")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'fail',
'suffix': suffix,
'message': message or f"text ends with '{suffix}'"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_ends_with',
'result': 'error',
'suffix': suffix,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_regex(self, text: str, pattern: str, message: str = '') -> bool:
"""Assert that text matches a regex pattern."""
try:
if re.search(pattern, text):
self.success(f"✓ Regex pattern matched")
self.assertions.append({
'type': 'assert_regex',
'result': 'pass',
'pattern': pattern,
'message': message or f"text matches regex '{pattern}'"
})
return True
else:
self.error(f"✗ Regex pattern did not match")
self.assertions.append({
'type': 'assert_regex',
'result': 'fail',
'pattern': pattern,
'message': message or f"text matches regex '{pattern}'"
})
return False
except re.error as e:
self.error(f"✗ Invalid regex pattern: {e}")
self.assertions.append({
'type': 'assert_regex',
'result': 'error',
'pattern': pattern,
'message': message or f"Invalid regex: {e}"
})
return False
except Exception as ex:
self.error(f"Assertion failed with exception: {ex}")
self.assertions.append({
'type': 'assert_regex',
'result': 'error',
'pattern': pattern,
'message': message or f"Assertion failed: {ex}"
})
return False
def assert_length(self, text: str, expected_length: int, message: str = '') -> bool:
"""Assert that text has expected length."""
try:
length = len(text)
if length == expected_length:
self.success(f"✓ Length is {expected_length}")
self.assertions.append({
'type': 'assert_length',
'result': 'pass',
'expected_length': expected_length,
'message': message or f"len(text) == {expected_length}"
})
return True
else:
self.error(f"✗ Length is {length}, expected {expected_length}")
self.assertions.append({
'type': 'assert_length',
'result': 'fail',
'expected_length': expected_length,
'actual_length': length,
'message': message or f"len(text) == {expected_length}"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_length',
'result': 'error',
'expected_length': expected_length,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_key_exists(self, text: str, key: str, message: str = '') -> bool:
"""Assert that a key exists in a JSON-like text."""
try:
if f'"{key}":' in text or f"'{key}':" in text:
self.success(f"✓ Key '{key}' exists")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'pass',
'key': key,
'message': message or f"key '{key}' exists"
})
return True
else:
self.error(f"✗ Key '{key}' not found")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'fail',
'key': key,
'message': message or f"key '{key}' exists"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_key_exists',
'result': 'error',
'key': key,
'message': message or f"Assertion failed: {e}"
})
return False
def assert_substring_count(self, text: str, substring: str, count: int, message: str = '') -> bool:
"""Assert that substring appears count times in text."""
try:
actual_count = text.count(substring)
if actual_count == count:
self.success(f"✓ Substring appears {count} time(s)")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'pass',
'substring': substring,
'expected_count': count,
'actual_count': actual_count,
'message': message or f"'{substring}' appears {count} times"
})
return True
else:
self.error(f"✗ Substring appears {actual_count} time(s), expected {count}")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'fail',
'substring': substring,
'expected_count': count,
'actual_count': actual_count,
'message': message or f"'{substring}' appears {count} times"
})
return False
except Exception as e:
self.error(f"Assertion failed with exception: {e}")
self.assertions.append({
'type': 'assert_substring_count',
'result': 'error',
'substring': substring,
'expected_count': count,
'message': message or f"Assertion failed: {e}"
})
return False
def get_assertion_count(self) -> int:
"""Get total number of assertions made."""
return len(self.assertions)
def get_failure_count(self) -> int:
"""Get number of failed assertions."""
return sum(1 for assertion in self.assertions if assertion.get('result') == 'fail')
def get_success_count(self) -> int:
"""Get number of passed assertions."""
return sum(1 for assertion in self.assertions if assertion.get('result') == 'pass')
def get_logs(self) -> List[str]:
"""Get all log messages."""
return self.logs.copy()
def get_errors(self) -> List[Dict[str, Any]]:
"""Get all error records."""
return self.errors.copy()
def clear(self) -> None:
"""Clear all logs and assertions."""
self.assertions.clear()
self.errors.clear()
self.logs.clear()
def __enter__(self):
"""Context manager entry."""
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""Context manager exit."""
return False
# Convenience function for context manager usage
def test_logger():
"""Create and return a TestLogger instance."""
return TestLogger()
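The bookkeeping behind `get_success_count()` and `get_failure_count()` reduces to a tally over the assertion records; a minimal stand-alone illustration (the record dicts mirror the shape appended by the assert methods above, the sample values are made up):

```python
# Each assertion records a 'result' of 'pass', 'fail', or 'error';
# the two counters tally only the first two, so 'error' records
# (exceptions, bad regexes) are counted by neither.
assertions = [
    {'type': 'assert_contains', 'result': 'pass'},
    {'type': 'assert_equal', 'result': 'fail'},
    {'type': 'assert_regex', 'result': 'error'},
]
passed = sum(1 for a in assertions if a.get('result') == 'pass')
failed = sum(1 for a in assertions if a.get('result') == 'fail')
print(passed, failed)  # 1 1
```

A consequence of this design is that `passed + failed` can be less than `get_assertion_count()`, so a runner should treat a nonzero error tally as a failure in its own right.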

View File

@@ -1 +0,0 @@
0.0.1

test-project/test/TestApp/.gitignore
View File

@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git

View File

@@ -0,0 +1,11 @@
# TestApp
A test application
## Features
- feature1
- feature2
## Tech Stack
- python
- fastapi

View File

@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')

View File

@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git

View File

@@ -0,0 +1,11 @@
# test-project
Test project description
## Features
- feature-1
- feature-2
## Tech Stack
- python
- fastapi

View File

@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')

test-project/test/test/.gitignore
View File

@@ -0,0 +1,23 @@
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
*.env
.venv/
node_modules/
.env
build/
dist/
.pytest_cache/
.mypy_cache/
.coverage
htmlcov/
.idea/
.vscode/
*.swp
*.swo
*~
.DS_Store
.git

View File

@@ -0,0 +1,7 @@
# Test
Test
## Features
## Tech Stack

View File

@@ -0,0 +1,2 @@
# Generated by AI Software Factory
print('Hello, World!')