lojol469-cmd committed · Commit 1198924 · Parent(s): e7dda70

Complete Kibali AI FastAPI API - Docker CUDA + Tavily + RAG + memory

Files changed:
- README.md +53 -30
- app.py +460 -0
- requirements.txt +15 -23
README.md
CHANGED
@@ -1,40 +1,63 @@

Removed (old stub README): front matter with blank values for title, emoji, colorFrom, colorTo and sdk, plus app_file: app.py (pinned: false kept); the body was a skeleton (RAG + FAISS memory) listing: autonomous LLM agent, FAISS vector memory, custom tools, Hugging Face Spaces compatible, followed by empty section headings.
Added (new README):

---
title: Kibali AI
emoji: 🇬🇦
colorFrom: green
colorTo: emerald
sdk: docker
app_port: 7860
pinned: false
---

# 🇬🇦 Kibali AI - Sovereign Gabonese Artificial Intelligence

An AI assistant specializing in Gabon, with geographic context, RAG over PDF documents, adaptive conversational memory, and real-time web search.

Model: `BelikanM/kibali-final-merged` (7B, 4-bit quantized), loaded on a CUDA GPU.

## 🚀 Features
- Natural answers in French, always factual and warm
- Geolocated context (Libreville by default, configurable)
- Upload and query PDF documents (vector RAG with FAISS)
- Persistent, adaptive conversational memory
- Built-in web search (Tavily)
- Full FastAPI API with generation streaming

## 🔗 API endpoints
- `POST /chat` → main conversation
- `GET /status` → system state (chunks, memory, GPU, etc.)
- `POST /upload` → PDF import
- `POST /clear-memory` → memory reset
- `/docs` → interactive Swagger documentation
- `/static` → static files (logo, etc.)

## 🛠️ Tech stack
- FastAPI + Uvicorn
- Transformers + BitsAndBytes (4-bit quantization)
- Sentence-Transformers + FAISS (embeddings & RAG)
- Tavily for web search
- Docker CUDA 12.4 (GPU-accelerated)

## 🔐 Required secrets
Add in the Space's **Settings → Secrets**:

- `TAVILY_API_KEY` → your Tavily key (starts with `tvly-...`)

## 📂 Project layout
- `app.py` → main FastAPI application
- `tools/` → custom tools (web, reflection, geo)
- `static/` → static assets
- `Dockerfile` → optimized CUDA build
- `requirements.txt` → Python dependencies

## 🌍 Usage
The API is ready to be consumed by any frontend (React, Streamlit, Gradio, mobile, etc.).

Example `/chat` request:
```json
{
  "messages": [{"role": "user", "content": "Bonjour Kibali, quel est le climat à Libreville aujourd'hui ?"}],
  "latitude": 0.4061,
  "longitude": 9.4673,
  "city": "Libreville",
  "thinking_mode": true
}
```
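For illustration only, a minimal Python client for this payload might look like the sketch below; the Space URL is a placeholder, and the `requests` package is an assumed client-side dependency (not part of this commit):

```python
import requests

# Placeholder URL: substitute your own Space or local deployment.
API_URL = "https://your-username-your-space.hf.space"

payload = {
    "messages": [{"role": "user",
                  "content": "Bonjour Kibali, quel est le climat à Libreville aujourd'hui ?"}],
    "latitude": 0.4061,
    "longitude": 9.4673,
    "city": "Libreville",
    "thinking_mode": True,
}

# /chat returns the full generated answer plus context metadata.
resp = requests.post(f"{API_URL}/chat", json=payload, timeout=300)
resp.raise_for_status()
data = resp.json()
print(data["response"])       # assistant reply (in French)
print(data["context_info"])   # subject keywords, memory/RAG/web usage
```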
app.py
ADDED
@@ -0,0 +1,460 @@
```python
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel
from typing import List, Optional
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, BitsAndBytesConfig
from sentence_transformers import SentenceTransformer
import faiss
import numpy as np
from threading import Thread
import os
from io import BytesIO
import logging
from datetime import datetime
import json
import hashlib

# --- LOGGING CONFIGURATION ---
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# --- PDF READER CONFIGURATION ---
try:
    from pypdf import PdfReader as PypdfReader
    PDF_READER = "pypdf"
except ImportError:
    try:
        import PyPDF2
        from PyPDF2 import PdfReader as PypdfReader
        PDF_READER = "PyPDF2"
    except ImportError:
        raise ImportError("Installe pypdf ou PyPDF2 : pip install pypdf")

# --- CUSTOM TOOLS ---
from tools.web import web_search
from tools.todo import execute_reflection_plan
from tools.geo import get_geo_context

app = FastAPI(title="Kibali AI API", version="1.0")

# --- STATIC FILE SERVER ---
script_dir = os.path.dirname(os.path.abspath(__file__))
static_dir = os.path.join(script_dir, "static")
os.makedirs(static_dir, exist_ok=True)
app.mount("/static", StaticFiles(directory=static_dir), name="static")

# --- CORS ---
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# --- MODEL LOADING (downloaded from the Hugging Face Hub) ---
HF_MODEL_ID = "BelikanM/kibali-final-merged"
CACHE_DIR = "/data/cache"  # Persistent directory on HF Spaces

os.makedirs(CACHE_DIR, exist_ok=True)

logger.info("Chargement du modèle d'embedding...")
embed_model = SentenceTransformer(
    'paraphrase-multilingual-MiniLM-L12-v2',
    cache_folder=CACHE_DIR
)

logger.info(f"Chargement du tokenizer et du modèle LLM depuis Hugging Face : {HF_MODEL_ID}")
tokenizer = AutoTokenizer.from_pretrained(HF_MODEL_ID, cache_dir=CACHE_DIR)

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# 4-bit quantization config to cut VRAM usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16
)

try:
    model = AutoModelForCausalLM.from_pretrained(
        HF_MODEL_ID,
        quantization_config=bnb_config,
        device_map="auto",
        torch_dtype=torch.float16,
        trust_remote_code=True,
        low_cpu_mem_usage=True,
        cache_dir=CACHE_DIR
    )
    logger.info(f"Modèle chargé avec succès sur {model.device}")
except Exception as e:
    logger.error(f"Erreur lors du chargement du modèle : {e}")
    raise e

# --- GLOBAL VECTOR STORES ---
dimension = 384  # embedding size of paraphrase-multilingual-MiniLM-L12-v2
doc_index = faiss.IndexFlatL2(dimension)
doc_chunks: List[str] = []
doc_metadata: List[dict] = []

memory_index = faiss.IndexFlatL2(dimension)
memory_texts: List[str] = []
memory_metadata: List[dict] = []

# --- CONVERSATIONAL CONTEXT MANAGEMENT ---
class ConversationContext:
    def __init__(self):
        self.current_subject = None
        self.subject_embedding = None
        self.subject_start_time = None
        self.message_count = 0
        self.subject_keywords = []

    def update_subject(self, message: str, embedding: np.ndarray):
        keywords = self._extract_keywords(message)

        if self.subject_embedding is not None:
            # Embeddings are L2-normalized, so this dot product is a cosine similarity.
            similarity = np.dot(embedding.flatten(), self.subject_embedding.flatten())
            if similarity < 0.6:
                logger.info(f"Changement de sujet détecté (similarité: {similarity:.2f})")
                self._archive_current_subject()
                self.current_subject = message
                self.subject_embedding = embedding
                self.subject_start_time = datetime.now()
                self.message_count = 1
                self.subject_keywords = keywords
            else:
                self.message_count += 1
                self.subject_keywords.extend(keywords)
                self.subject_keywords = list(set(self.subject_keywords))[:10]
        else:
            self.current_subject = message
            self.subject_embedding = embedding
            self.subject_start_time = datetime.now()
            self.message_count = 1
            self.subject_keywords = keywords

    def _extract_keywords(self, text: str) -> List[str]:
        # French stopwords: keyword extraction targets French user messages.
        stopwords = {'le', 'la', 'les', 'un', 'une', 'des', 'de', 'du', 'et', 'ou',
                     'est', 'sont', 'à', 'au', 'en', 'pour', 'dans', 'sur', 'avec'}
        words = text.lower().split()
        keywords = [w for w in words if len(w) > 3 and w not in stopwords]
        return keywords[:5]

    def _archive_current_subject(self):
        if self.current_subject and memory_index.ntotal > 0:
            summary = {
                "subject": self.current_subject[:200],
                "keywords": self.subject_keywords,
                "message_count": self.message_count,
                "duration": (datetime.now() - self.subject_start_time).seconds,
                "archived_at": datetime.now().isoformat()
            }
            logger.info(f"Sujet archivé: {summary['keywords']}")

conversation_ctx = ConversationContext()

# --- PYDANTIC MODELS ---
class Message(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    messages: List[Message]
    latitude: float
    longitude: float
    city: Optional[str] = "Libreville"
    thinking_mode: bool = True

class ChatResponse(BaseModel):
    response: str
    images: List[str] = []
    context_info: Optional[dict] = None

# --- UTILITIES ---
def extract_text_from_pdf(pdf_bytes: bytes) -> str:
    text = ""
    try:
        pdf_file = BytesIO(pdf_bytes)
        reader = PypdfReader(pdf_file)
        for page in reader.pages:
            page_text = page.extract_text()
            if page_text:
                text += page_text + "\n"
        return text.strip()
    except Exception as e:
        logger.error(f"Erreur extraction PDF : {e}")
        return ""

def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> List[str]:
    # Word-based sliding window: 400-word chunks with a 50-word overlap.
    if not text.strip():
        return []
    words = text.split()
    chunks = []
    i = 0
    while i < len(words):
        chunk_words = words[i:i + chunk_size]
        chunk = " ".join(chunk_words)
        if chunk.strip():
            chunks.append(chunk.strip())
        i += chunk_size - overlap
        if i >= len(words) and len(chunk_words) < overlap:
            break
    return chunks

def add_to_memory_realtime(user_msg: str, ai_response: str, subject_keywords: List[str]):
    timestamp = datetime.now().isoformat()
    memory_entry = f"""[{timestamp}]
Sujet: {', '.join(subject_keywords)}
Utilisateur: {user_msg}
Kibali: {ai_response}"""

    metadata = {
        "timestamp": timestamp,
        "subject_keywords": subject_keywords,
        "user_length": len(user_msg),
        "ai_length": len(ai_response),
        "hash": hashlib.md5(memory_entry.encode()).hexdigest()
    }

    # Deduplicate on the MD5 of the full entry before indexing.
    if metadata["hash"] not in [m.get("hash") for m in memory_metadata]:
        memory_texts.append(memory_entry)
        memory_metadata.append(metadata)
        mem_emb = embed_model.encode([memory_entry], normalize_embeddings=True).astype('float32')
        memory_index.add(mem_emb)
        logger.info(f"Mémoire ajoutée en temps réel: {subject_keywords} (total: {len(memory_texts)})")
        return True
    return False

def retrieve_adaptive_memory(query: str, k: int = 5) -> tuple:
    if memory_index.ntotal == 0:
        return [], []

    query_emb = embed_model.encode([query], normalize_embeddings=True).astype('float32')
    k_search = min(k * 2, memory_index.ntotal)
    D, I = memory_index.search(query_emb, k=k_search)

    results = []
    for dist, idx in zip(D[0], I[0]):
        if 0 <= idx < len(memory_texts):
            metadata = memory_metadata[idx] if idx < len(memory_metadata) else {}
            recency_score = 1.0 / (1 + (datetime.now() - datetime.fromisoformat(metadata.get("timestamp", datetime.now().isoformat()))).seconds / 3600)
            similarity_score = 1.0 / (1 + dist)
            keyword_bonus = 0
            if conversation_ctx.subject_keywords:
                text_lower = memory_texts[idx].lower()
                keyword_bonus = sum(1 for kw in conversation_ctx.subject_keywords if kw in text_lower) * 0.1
            # Blend semantic similarity (60%), recency (30%) and the keyword bonus.
            total_score = similarity_score * 0.6 + recency_score * 0.3 + keyword_bonus

            results.append({
                "text": memory_texts[idx],
                "score": total_score,
                "metadata": metadata
            })

    results = sorted(results, key=lambda x: x["score"], reverse=True)[:k]
    texts = [r["text"] for r in results]
    scores = [r["score"] for r in results]
    return texts, scores

# --- ROUTES ---
@app.get("/status")
async def status():
    return {
        "status": "ready",
        "doc_chunks": len(doc_chunks),
        "memory_entries": len(memory_texts),
        "pdf_library": PDF_READER,
        "model_device": str(model.device),
        "torch_cuda_available": torch.cuda.is_available(),
        "current_subject": conversation_ctx.current_subject[:100] if conversation_ctx.current_subject else None,
        "subject_message_count": conversation_ctx.message_count
    }

@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    user_message = request.messages[-1].content.strip()
    if not user_message:
        raise HTTPException(status_code=400, detail="Message vide")

    geo = {
        "latitude": request.latitude,
        "longitude": request.longitude,
        "city": request.city or "Libreville"
    }

    user_emb = embed_model.encode([user_message], normalize_embeddings=True).astype('float32')
    conversation_ctx.update_subject(user_message, user_emb)

    # RAG over uploaded PDF documents
    rag_context = ""
    rag_sources = []
    if doc_index.ntotal > 0 and len(doc_chunks) > 0:
        D, I = doc_index.search(user_emb, k=5)
        relevant_chunks = []
        for idx in I[0]:
            if 0 <= idx < len(doc_chunks):
                relevant_chunks.append(doc_chunks[idx][:1000])
                if idx < len(doc_metadata):
                    rag_sources.append(doc_metadata[idx].get("source", "PDF"))
        if relevant_chunks:
            rag_context = "\n\n".join([f"Document : {chunk}" for chunk in relevant_chunks])

    # Adaptive memory retrieval
    memory_context = ""
    memory_texts_filtered, memory_scores = retrieve_adaptive_memory(user_message, k=5)
    if memory_texts_filtered:
        memory_context = "\n\n".join([f"Mémoire (score: {score:.2f}): {text}"
                                      for text, score in zip(memory_texts_filtered, memory_scores)])

    # Strategic reflection (thinking mode)
    if request.thinking_mode:
        execute_reflection_plan(
            user_message,
            geo_info=geo,
            messages=request.messages,
            current_subject=conversation_ctx.current_subject,
            subject_keywords=conversation_ctx.subject_keywords
        )

    # Web search (Tavily)
    search_query = user_message
    if conversation_ctx.subject_keywords:
        search_query = f"{user_message} {' '.join(conversation_ctx.subject_keywords[:3])} Gabon"

    search_results = web_search(search_query)
    web_context = "\n".join([f"- {r['content'][:500]}" for r in search_results.get("results", [])[:6]])
    web_images = search_results.get("images", [])[:4]

    # Final prompt assembly (kept in French: the model answers in French)
    system_prompt = f"""Tu es Kibali, un assistant IA chaleureux, précis et expert du Gabon, basé à {geo['city']}.
Réponds toujours en français, de façon naturelle, concise et factuelle.
CONTEXTE CONVERSATIONNEL ACTUEL:
- Sujet en cours: {', '.join(conversation_ctx.subject_keywords) if conversation_ctx.subject_keywords else 'Nouveau sujet'}
- Nombre de messages sur ce sujet: {conversation_ctx.message_count}
PRIORITÉ DES SOURCES:
1. Documents uploadés (PDF Vault) - Source la plus fiable
2. Mémoire conversationnelle récente et pertinente
3. Informations Web actualisées
Si une information vient d'un document uploadé, mentionne-le brièvement.
Adapte-toi aux changements brusques de sujet en restant cohérent."""

    full_prompt = f"""### INSTRUCTIONS STRICTES :
{system_prompt}
### CONTEXTE DOCUMENTS (PDF Vault) :
{rag_context if rag_context else "Aucun document pertinent trouvé."}
### HISTORIQUE PERTINENT (Mémoire adaptative) :
{memory_context if memory_context else "Pas d'historique pertinent."}
### INFORMATIONS WEB RÉCENTES :
{web_context if web_context else "Pas d'informations web disponibles."}
### QUESTION :
{user_message}
### RÉPONSE (en français uniquement) :
"""

    inputs = tokenizer(full_prompt, return_tensors="pt", truncation=True, max_length=8192).to(model.device)
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=120.0)

    def generate_stream():
        try:
            model.generate(
                **inputs,
                streamer=streamer,
                max_new_tokens=1024,
                temperature=0.6,
                do_sample=True,
                top_p=0.85,
                top_k=50,
                repetition_penalty=1.2,
                length_penalty=0.8
            )
        except Exception as e:
            logger.error(f"Erreur génération : {e}")

    # Generation runs in a worker thread; the streamer is drained here, so the
    # endpoint returns the complete text rather than a streamed response.
    thread = Thread(target=generate_stream)
    thread.start()

    response_text = ""
    for new_text in streamer:
        if new_text is not None:
            response_text += new_text
    response_text = response_text.strip()

    if response_text:
        add_to_memory_realtime(
            user_message,
            response_text,
            conversation_ctx.subject_keywords
        )

    context_info = {
        "subject_keywords": conversation_ctx.subject_keywords,
        "message_count": conversation_ctx.message_count,
        "memory_used": len(memory_texts_filtered),
        "rag_sources": list(set(rag_sources)),
        "web_results": len(search_results.get("results", []))
    }

    return ChatResponse(response=response_text, images=web_images, context_info=context_info)

@app.post("/upload")
async def upload(files: List[UploadFile] = File(...)):
    total_added = 0
    processed_files = 0
    for file in files:
        if not file.filename.lower().endswith(".pdf"):
            continue
        try:
            content = await file.read()
            text = extract_text_from_pdf(content)
            if not text:
                logger.warning(f"Aucun texte extrait de {file.filename}")
                continue
            chunks = chunk_text(text)
            if not chunks:
                continue
            timestamp = datetime.now().isoformat()
            for chunk in chunks:
                doc_metadata.append({
                    "source": file.filename,
                    "timestamp": timestamp,
                    "length": len(chunk)
                })
            embeddings = embed_model.encode(chunks, normalize_embeddings=True).astype('float32')
            doc_index.add(embeddings)
            doc_chunks.extend(chunks)
            total_added += len(chunks)
            processed_files += 1
            logger.info(f"Upload réussi : {file.filename} → {len(chunks)} chunks ajoutés")
        except Exception as e:
            logger.error(f"Erreur lors du traitement de {file.filename} : {e}")
    return {
        "status": "success",
        "files_processed": processed_files,
        "chunks_added": total_added,
        "total_doc_chunks": len(doc_chunks)
    }

@app.post("/upload-pdfs")
async def upload_pdfs(files: List[UploadFile] = File(...)):
    # Alias route delegating to /upload.
    return await upload(files)

@app.post("/clear-memory")
async def clear_memory():
    global memory_index, memory_texts, memory_metadata
    memory_index = faiss.IndexFlatL2(dimension)
    memory_texts = []
    memory_metadata = []
    conversation_ctx.__init__()  # reset the conversational context in place
    return {"status": "memory_cleared", "message": "Mémoire conversationnelle effacée"}

# --- STARTUP ---
@app.on_event("startup")
async def startup_event():
    logger.info("🚀 Kibali AI API démarrée avec succès sur Hugging Face Spaces !")
    logger.info("Accès : https://your-username-your-space.hf.space | Docs : /docs")
    logger.info("Mémoire adaptative et réflexion contextuelle activées ✓")
```
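As a usage sketch (not part of the commit): assuming the app is served locally with `uvicorn app:app --host 0.0.0.0 --port 7860` and a PDF is at hand (the file name below is hypothetical), the upload and status routes can be smoke-tested with `requests`:

```python
import requests

API_URL = "http://localhost:7860"  # local uvicorn run; adjust as needed

# Index a PDF into the document vault; the multipart field name "files"
# matches the List[UploadFile] parameter of the /upload route.
with open("rapport_gabon.pdf", "rb") as f:  # hypothetical file name
    r = requests.post(
        f"{API_URL}/upload",
        files=[("files", ("rapport_gabon.pdf", f, "application/pdf"))],
    )
print(r.json())  # e.g. {"status": "success", "files_processed": 1, ...}

# /status reports chunk counts, memory size, GPU availability, etc.
print(requests.get(f"{API_URL}/status").json())
```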
requirements.txt
CHANGED
@@ -1,27 +1,19 @@

Removed (old requirements, with their section comments): transformers, sentence-transformers, faiss- (suffix truncated in the page), pydantic, python-multipart, tavily-python, numpy<2.0.0 (avoids incompatibilities with sentence-transformers and transformers), huggingface_hub>=0.25.0 (model cache and downloads), einops>=0.8.0 and safetensors>=0.4.0 (noted as often required by merged Mistral/Llama models); fastapi and uvicorn[standard] kept unchanged.
Added (new requirements):

```text
# --- Core AI ---
transformers==4.41.2
bitsandbytes>=0.43.0
accelerate>=0.30.0
sentence-transformers
faiss-gpu

# --- Server & API ---
fastapi
uvicorn[standard]
pydantic
python-multipart

# --- Tools & Data ---
tavily-python
pypdf>=3.0.0
numpy<2.0.0
duckduckgo-search
huggingface_hub
```