Rosalinda AI Project - Code Continuation
I can see you have a complete structure for the Rosalinda AI project. It's an excellent architecture with Docker, FastAPI, PostgreSQL, Redis, MinIO, and workers for local AI. I'll help you complete and fix this project.
📋 Identified Problems and Fixes
1. Fixing docker-compose.yml
The docker-compose.yml in your message has a typo in its name (docker-composer.yml). Here is the corrected version:
```yaml
# zone-ai/docker-compose.yml (corrected version)
version: '3.8'

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports: ["5432:5432"]
    volumes: ["pgdata:/var/lib/postgresql/data"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 20

  redis:
    image: redis:7
    ports: ["6379:6379"]

  minio:
    image: minio/minio:RELEASE.2024-10-13T13-34-11Z
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${S3_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
    ports: ["9000:9000", "9001:9001"]
    volumes: ["miniodata:/data"]

  createbuckets:
    image: minio/mc
    depends_on: [minio]
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set local ${S3_ENDPOINT} ${S3_ACCESS_KEY} ${S3_SECRET_KEY};
      mc mb local/${S3_BUCKET} || true;
      mc anonymous set download local/${S3_BUCKET} || true;
      exit 0;
      "

  api:
    build: ./apps/api
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
      createbuckets:
        condition: service_completed_successfully
    ports: ["8080:8080"]
    volumes:
      - ./apps/api:/app
    command: >
      sh -c "
      alembic upgrade head &&
      uvicorn app.main:app --host 0.0.0.0 --port 8080 --reload
      "

  worker:
    build: ./workers/runner
    env_file: .env
    depends_on:
      redis:
        condition: service_started
      api:
        condition: service_started
    volumes:
      - ./workers/runner:/worker
      - models:/models
      - code_sandbox:/code_sandbox
    environment:
      NVIDIA_VISIBLE_DEVICES: all
      HF_HOME: ${HF_HOME:-/models/hf}
      TORCH_HOME: ${TORCH_HOME:-/models/torch}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    # Note: shell comments cannot appear inside the folded (>) command scalar,
    # since folding joins the lines and a '#' would comment out the rest.
    command: >
      sh -c "
      echo 'Preloading AI models...' &&
      python -c 'from pipelines.image_sdxl import get_pipe as img_pipe; img_pipe()' 2>/dev/null || true &&
      python -c 'from pipelines.video_svd import get_pipe as vid_pipe; vid_pipe()' 2>/dev/null || true &&
      python -c 'from tasks import load_llm_model; load_llm_model()' 2>/dev/null || true &&
      celery -A worker.celery_app worker --loglevel=INFO --concurrency=1 -Q default
      "

  web:
    build: ./web
    env_file: .env
    depends_on: [api]
    ports: ["3000:3000"]
    volumes:
      - ./web:/web
      - /web/node_modules
    command: >
      sh -c "npm install && npm run dev -- -p 3000"

volumes:
  pgdata:
  miniodata:
  models:
  code_sandbox:
```
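Because the createbuckets service enables anonymous downloads on the bucket, a stored object can be served from a plain URL under S3_PUBLIC_BASE. A minimal sketch of that convention (the `public_url` helper name is hypothetical, not part of the project code):

```python
import os

def public_url(s3_key: str) -> str:
    """Build the public URL for an object, relying on the anonymous
    download policy set by the createbuckets service."""
    # Default matches the S3_PUBLIC_BASE value from the .env below.
    base = os.environ.get("S3_PUBLIC_BASE", "http://localhost:9000/zoneai")
    return f"{base.rstrip('/')}/{s3_key.lstrip('/')}"

print(public_url("images/job_42.png"))
# → http://localhost:9000/zoneai/images/job_42.png
```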
2. Environment configuration (.env)
Create a .env file at the project root with:
```env
# zone-ai/.env
APP_NAME=RosalindaAI
ENV=dev

# Security
JWT_SECRET=votre_super_secret_tres_long_et_securise_ici_changez_moi
JWT_EXPIRES_MIN=1440

# Database
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=zoneai
POSTGRES_USER=zoneai
POSTGRES_PASSWORD=zoneai_pw

# Redis
REDIS_URL=redis://redis:6379/0

# MinIO/S3
S3_ENDPOINT=http://minio:9000
S3_ACCESS_KEY=zoneai
S3_SECRET_KEY=zoneai_secret
S3_BUCKET=zoneai
S3_REGION=us-east-1
S3_PUBLIC_BASE=http://localhost:9000/zoneai

# AI models
LLM_MODEL_ID=mistralai/Mistral-7B-Instruct-v0.2
LLM_QUANTIZATION=4bit
IMAGE_MODEL_ID=stabilityai/stable-diffusion-xl-base-1.0
VIDEO_MODEL_ID=stabilityai/stable-video-diffusion-img2vid-xt
HF_HOME=/models/hf
TORCH_HOME=/models/torch

# Limits
RATE_LIMIT_PER_MIN=120
MAX_UPLOAD_MB=50

# Frontend
NEXT_PUBLIC_API_BASE=http://localhost:3000/api
```
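The worker assembles its SQLAlchemy connection URL from these Postgres variables. Isolated as pure string logic (the `database_url` helper name is just for illustration), the composition looks like this:

```python
def database_url(env: dict) -> str:
    # Mirrors the DATABASE_URL construction in workers/runner/tasks.py:
    # POSTGRES_PORT falls back to 5432 when unset.
    return (
        f"postgresql+psycopg2://{env['POSTGRES_USER']}:{env['POSTGRES_PASSWORD']}"
        f"@{env['POSTGRES_HOST']}:{env.get('POSTGRES_PORT', '5432')}/{env['POSTGRES_DB']}"
    )

print(database_url({
    "POSTGRES_USER": "zoneai",
    "POSTGRES_PASSWORD": "zoneai_pw",
    "POSTGRES_HOST": "postgres",
    "POSTGRES_DB": "zoneai",
}))
# → postgresql+psycopg2://zoneai:zoneai_pw@postgres:5432/zoneai
```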
3. Fixing tasks.py
```python
# workers/runner/tasks.py (corrected version)
import os, json, io
import subprocess
import shutil
import sys
from celery import shared_task
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
from datetime import datetime, timezone
from PIL import Image
import requests

# Add the API path to PYTHONPATH
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../apps/api')))

# Import the pipelines
from pipelines.image_sdxl import generate_image, get_pipe as get_image_pipe
from pipelines.video_svd import generate_video_from_image, get_pipe as get_video_pipe

# Database configuration
DATABASE_URL = (
    f"postgresql+psycopg2://{os.environ['POSTGRES_USER']}:{os.environ['POSTGRES_PASSWORD']}"
    f"@{os.environ['POSTGRES_HOST']}:{os.environ.get('POSTGRES_PORT','5432')}/{os.environ['POSTGRES_DB']}"
)
engine = create_engine(DATABASE_URL, pool_pre_ping=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def now_utc():
    return datetime.now(timezone.utc)

# Global handles so models are loaded once per worker process
llm_tokenizer = None
llm_pipeline = None
llm_model = None

def load_llm_model():
    """Load the LLM once and cache it."""
    global llm_tokenizer, llm_pipeline, llm_model
    if llm_tokenizer is not None and llm_pipeline is not None:
        return llm_pipeline
    print("Loading LLM model...")
    try:
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
    except ImportError:
        print("ERROR: transformers or torch not installed")
        return None
    model_id = os.environ.get("LLM_MODEL_ID", "microsoft/DialoGPT-small")
    try:
        llm_tokenizer = AutoTokenizer.from_pretrained(model_id)
        llm_tokenizer.pad_token = llm_tokenizer.eos_token
        llm_model = AutoModelForCausalLM.from_pretrained(model_id)
        llm_pipeline = pipeline(
            "text-generation",
            model=llm_model,
            tokenizer=llm_tokenizer,
            max_length=500,
            do_sample=True,
            temperature=0.7,
        )
        print(f"LLM model loaded: {model_id}")
        return llm_pipeline
    except Exception as e:
        print(f"ERROR while loading the LLM model: {e}")
        return None

def _insert_asset(db, owner_id: int, kind: str, mime: str, s3_key: str, public_url: str) -> int:
    """Insert an asset row and return its id."""
    row = db.execute(
        text("""
            INSERT INTO assets (owner_id, kind, mime, s3_key, public_url, created_at)
            VALUES (:o,:k,:m,:s,:p,:c)
            RETURNING id
        """),
        {"o": owner_id, "k": kind, "m": mime, "s": s3_key, "p": public_url, "c": now_utc()},
    ).first()
    db.commit()
    return int(row[0])

@shared_task(name="tasks.run_job", bind=True)
def run_job(self, job_id: int):
    """Main task that executes queued jobs."""
    db = SessionLocal()
    try:
        # Fetch the job
        job_row = db.execute(
            text("SELECT id, owner_id, type, prompt, params_json FROM jobs WHERE id=:id"),
            {"id": job_id},
        ).mappings().first()
        if not job_row:
            print(f"Job {job_id} not found")
            return
        job_dict = dict(job_row)

        # Mark the job as running
        db.execute(
            text("UPDATE jobs SET status='running', updated_at=:u WHERE id=:id"),
            {"id": job_id, "u": now_utc()},
        )
        db.commit()

        params = json.loads(job_dict["params_json"] or "{}")

        # Dispatch by job type
        if job_dict["type"] in ("chat", "code"):
            print(f"Handling chat/code for job {job_id}")
            # Simulated response (replace with the real model)
            response_text = f"Rosalinda's reply to: {job_dict['prompt'][:100]}..."
            db.execute(
                text("""
                    UPDATE jobs
                    SET status='done', result_text=:res, updated_at=:u
                    WHERE id=:id
                """),
                {"res": response_text, "u": now_utc(), "id": job_id},
            )
            db.commit()

        elif job_dict["type"] == "image":
            print(f"Generating image for job {job_id}")
            key, url = generate_image(
                prompt=job_dict["prompt"],
                negative=params.get("negative", ""),
                width=int(params.get("width", 1024)),
                height=int(params.get("height", 1024)),
                steps=int(params.get("steps", 30)),
                guidance=float(params.get("guidance", 6.5)),
                seed=params.get("seed"),
            )
            asset_id = _insert_asset(
                db, job_dict["owner_id"], "image", "image/png", key, url
            )
            db.execute(
                text("""
                    UPDATE jobs
                    SET status='done', result_asset_id=:a, updated_at=:u
                    WHERE id=:id
                """),
                {"a": asset_id, "u": now_utc(), "id": job_id},
            )
            db.commit()

        elif job_dict["type"] == "video":
            print(f"Generating video for job {job_id}")
            # Simulated video generation
            response_text = "Simulated video generation"
            db.execute(
                text("""
                    UPDATE jobs
                    SET status='done', result_text=:res, updated_at=:u
                    WHERE id=:id
                """),
                {"res": response_text, "u": now_utc(), "id": job_id},
            )
            db.commit()

        else:
            raise ValueError(f"Unknown job type: {job_dict['type']}")

    except Exception as e:
        error_msg = str(e)
        print(f"ERROR in job {job_id}: {error_msg}")
        db.execute(
            text("""
                UPDATE jobs
                SET status='error', error=:err, updated_at=:u
                WHERE id=:id
            """),
            {"err": error_msg, "u": now_utc(), "id": job_id},
        )
        db.commit()

    finally:
        db.close()
```
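The image branch of run_job coerces each generation parameter out of params_json with a default. Isolated as a standalone helper (the `resolve_image_params` name is just for illustration), that behavior looks like this:

```python
import json
from typing import Optional

def resolve_image_params(params_json: Optional[str]) -> dict:
    # Same defaults and type coercions as the image branch of run_job:
    # a missing or empty params_json falls back to "{}".
    params = json.loads(params_json or "{}")
    return {
        "negative": params.get("negative", ""),
        "width": int(params.get("width", 1024)),
        "height": int(params.get("height", 1024)),
        "steps": int(params.get("steps", 30)),
        "guidance": float(params.get("guidance", 6.5)),
        "seed": params.get("seed"),
    }

print(resolve_image_params('{"width": "512", "steps": 20}')["width"])  # → 512
```

Note that `int(...)` also accepts numeric strings, so clients may send `"512"` or `512` interchangeably.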
These files provide a solid foundation to start the Rosalinda AI project with:
- A complete docker-compose.yml with all the required services
- A properly configured .env file with the essential environment variables
- A tasks.py file for the Celery worker that handles the AI tasks

To complete the project, you will also need to create the following files:
1. apps/api/Dockerfile
2. workers/runner/Dockerfile
3. web/Dockerfile
4. apps/api/requirements.txt
5. workers/runner/requirements.txt
6. The Alembic migration files
7. The Next.js frontend

Would you like me to provide these files as well?