Commit
·
9fdf9a2
0
Parent(s):
🚀 Initial commit - VerifAI Handler V2 Haute Précision
Browse files- ✨ Handler personnalisé avec haywoodsloan/ai-image-detector-deploy
- 🎯 Modèle unique haute précision, faibles faux positifs
- 🔍 SimpleGradCAM intégré pour explications visuelles
- 🛡️ Fiabilité optimisée pour l'écosystème VerifAI
- 📝 Documentation complète et script de test inclus
- README.md +81 -0
- handler.py +197 -0
- requirements.txt +6 -0
- test_verifai_v2.py +126 -0
README.md
ADDED
|
@@ -0,0 +1,81 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# VerifAI Handler V2 - Détecteur d'Images IA Haute Précision
|
| 2 |
+
|
| 3 |
+
Handler personnalisé VerifAI utilisant le modèle `haywoodsloan/ai-image-detector-deploy` optimisé pour une détection haute précision des images générées par l'intelligence artificielle.
|
| 4 |
+
|
| 5 |
+
## 🎯 VerifAI Handler V2 - Caractéristiques
|
| 6 |
+
|
| 7 |
+
- **🎯 Haute Précision** : Modèle haywoodsloan/ai-image-detector-deploy
|
| 8 |
+
- **🚫 Faibles Faux Positifs** : Évite les alertes incorrectes
|
| 9 |
+
- **🔍 SimpleGradCAM Intégré** : Cartes de saillance personnalisées
|
| 10 |
+
- **🏷️ Normalisation Intelligente** : Labels cohérents (Human/AI Generated)
|
| 11 |
+
- **⚡ Performance Optimisée** : Traitement rapide avec un modèle unique
|
| 12 |
+
- **🛡️ Fiabilité VerifAI** : Conçu pour l'écosystème VerifAI
|
| 13 |
+
|
| 14 |
+
## 🚀 Utilisation
|
| 15 |
+
|
| 16 |
+
### Format d'entrée VerifAI
|
| 17 |
+
```json
|
| 18 |
+
{
|
| 19 |
+
"inputs": "iVBORw0KGgoAAAANSUhEUgAAA..." // Image en base64
|
| 20 |
+
}
|
| 21 |
+
```
|
| 22 |
+
|
| 23 |
+
### Format de sortie VerifAI V2
|
| 24 |
+
```json
|
| 25 |
+
{
|
| 26 |
+
"status": "success",
|
| 27 |
+
"prediction": 1,
|
| 28 |
+
"predicted_class_name": "AI Generated",
|
| 29 |
+
"confidence": 0.8756,
|
| 30 |
+
"class_probabilities": {
|
| 31 |
+
"Human": 0.1244,
|
| 32 |
+
"AI Generated": 0.8756
|
| 33 |
+
},
|
| 34 |
+
"cam_image": "base64...",
|
| 35 |
+
"reliability": "TRÈS ÉLEVÉE",
|
| 36 |
+
"version": "2.0",
|
| 37 |
+
"handler_name": "VerifAI Handler V2",
|
| 38 |
+
"model_info": {
|
| 39 |
+
"handler_version": "verifai-v2",
|
| 40 |
+
"precision_mode": "high"
|
| 41 |
+
}
|
| 42 |
+
}
|
| 43 |
+
```
|
| 44 |
+
|
| 45 |
+
## 📈 Avantages VerifAI V2
|
| 46 |
+
|
| 47 |
+
✅ **Excellence en Précision** - Spécialisé sur images IA sophistiquées
|
| 48 |
+
✅ **Confiance Utilisateur** - Faibles faux positifs pour une expérience fiable
|
| 49 |
+
✅ **Performance VerifAI** - Optimisé pour l'infrastructure VerifAI
|
| 50 |
+
✅ **Explications Visuelles** - Grad-CAM intégré pour la transparence
|
| 51 |
+
✅ **Compatibilité** - API cohérente avec l'écosystème VerifAI
|
| 52 |
+
|
| 53 |
+
## ⚠️ Spécifications Techniques
|
| 54 |
+
|
| 55 |
+
- **Modèle Principal** : haywoodsloan/ai-image-detector-deploy
|
| 56 |
+
- **Type de Task** : Custom Handler
|
| 57 |
+
- **Formats Supportés** : PNG, JPEG, WebP
|
| 58 |
+
- **Résolution** : Auto-adaptative
|
| 59 |
+
- **Temps de Traitement** : ~2-4 secondes par image
|
| 60 |
+
|
| 61 |
+
## 📝 Notes de Version
|
| 62 |
+
|
| 63 |
+
### Version 2.0 - VerifAI Handler
|
| 64 |
+
- ✨ **Nouveau** : Handler VerifAI dédié
|
| 65 |
+
- 🎯 **Amélioration** : Modèle unique haute précision
|
| 66 |
+
- 🚀 **Performance** : Optimisation pour VerifAI
|
| 67 |
+
- 🛡️ **Fiabilité** : Réduction drastique des faux positifs
|
| 68 |
+
|
| 69 |
+
### Différences avec V1
|
| 70 |
+
- **V1** : Ensemble de modèles (plus complexe, plus de faux positifs)
|
| 71 |
+
- **V2** : Modèle unique (plus simple, plus fiable, plus rapide)
|
| 72 |
+
|
| 73 |
+
## 🔧 Déploiement
|
| 74 |
+
|
| 75 |
+
Le VerifAI Handler V2 se configure automatiquement. Aucune configuration manuelle requise.
|
| 76 |
+
|
| 77 |
+
Compatible avec Hugging Face Inference Endpoints et l'infrastructure VerifAI.
|
| 78 |
+
|
| 79 |
+
---
|
| 80 |
+
|
| 81 |
+
**VerifAI** - Intelligence Artificielle de Confiance pour la Vérification d'Images
|
handler.py
ADDED
|
@@ -0,0 +1,197 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
from typing import Any, Dict
|
| 2 |
+
import torch
|
| 3 |
+
from torchvision import transforms
|
| 4 |
+
from PIL import Image
|
| 5 |
+
import base64
|
| 6 |
+
import io
|
| 7 |
+
import numpy as np
|
| 8 |
+
from transformers import AutoModelForImageClassification, AutoImageProcessor
|
| 9 |
+
import torch.nn.functional as F
|
| 10 |
+
import json
|
| 11 |
+
import re
|
| 12 |
+
|
| 13 |
+
class SimpleGradCAM:
|
| 14 |
+
def __init__(self, model, target_layer):
|
| 15 |
+
self.model = model
|
| 16 |
+
self.target_layer = target_layer
|
| 17 |
+
self.gradients = None
|
| 18 |
+
self.activations = None
|
| 19 |
+
|
| 20 |
+
# Hook pour capturer les gradients
|
| 21 |
+
self.target_layer.register_backward_hook(self.save_gradients)
|
| 22 |
+
self.target_layer.register_forward_hook(self.save_activations)
|
| 23 |
+
|
| 24 |
+
def save_gradients(self, module, grad_input, grad_output):
|
| 25 |
+
self.gradients = grad_output[0]
|
| 26 |
+
|
| 27 |
+
def save_activations(self, module, input, output):
|
| 28 |
+
self.activations = output
|
| 29 |
+
|
| 30 |
+
def generate_cam(self, input_tensor, class_idx=None):
|
| 31 |
+
# Forward pass
|
| 32 |
+
output = self.model(input_tensor)
|
| 33 |
+
|
| 34 |
+
if class_idx is None:
|
| 35 |
+
class_idx = output.logits.argmax(dim=1).item()
|
| 36 |
+
|
| 37 |
+
# Backward pass
|
| 38 |
+
self.model.zero_grad()
|
| 39 |
+
output.logits[0, class_idx].backward()
|
| 40 |
+
|
| 41 |
+
# Generate CAM
|
| 42 |
+
gradients = self.gradients[0] # (C, H, W)
|
| 43 |
+
activations = self.activations[0] # (C, H, W)
|
| 44 |
+
|
| 45 |
+
# Moyenne globale des gradients
|
| 46 |
+
weights = torch.mean(gradients, dim=(1, 2)) # (C,)
|
| 47 |
+
|
| 48 |
+
# CAM = somme pondérée des activations
|
| 49 |
+
cam = torch.zeros(activations.shape[1:]) # (H, W)
|
| 50 |
+
for i, w in enumerate(weights):
|
| 51 |
+
cam += w * activations[i, :, :]
|
| 52 |
+
|
| 53 |
+
# ReLU et normalisation
|
| 54 |
+
cam = F.relu(cam)
|
| 55 |
+
cam = cam / cam.max() if cam.max() > 0 else cam
|
| 56 |
+
|
| 57 |
+
return cam.detach().cpu().numpy()
|
| 58 |
+
|
| 59 |
+
def get_last_conv_layer(model):
|
| 60 |
+
"""Trouve la dernière couche de convolution du modèle"""
|
| 61 |
+
last_conv = None
|
| 62 |
+
for name, module in model.named_modules():
|
| 63 |
+
if isinstance(module, (torch.nn.Conv2d, torch.nn.AdaptiveAvgPool2d)):
|
| 64 |
+
last_conv = module
|
| 65 |
+
return last_conv
|
| 66 |
+
|
| 67 |
+
class EndpointHandler:
|
| 68 |
+
def __init__(self, path=""):
|
| 69 |
+
print("🚀 VerifAI Handler V2 - Initialisation")
|
| 70 |
+
print("📋 Modèle: haywoodsloan/ai-image-detector-deploy (Haute Précision)")
|
| 71 |
+
|
| 72 |
+
try:
|
| 73 |
+
# Modèle unique : Haywoodsloan AI Image Detector
|
| 74 |
+
print("🔄 Chargement du modèle: haywoodsloan/ai-image-detector-deploy")
|
| 75 |
+
self.model_name = "haywoodsloan/ai-image-detector-deploy"
|
| 76 |
+
self.processor = AutoImageProcessor.from_pretrained(self.model_name)
|
| 77 |
+
self.model = AutoModelForImageClassification.from_pretrained(self.model_name)
|
| 78 |
+
self.model.eval()
|
| 79 |
+
|
| 80 |
+
# Configuration Grad-CAM
|
| 81 |
+
target_layer = get_last_conv_layer(self.model)
|
| 82 |
+
self.grad_cam = SimpleGradCAM(self.model, target_layer) if target_layer else None
|
| 83 |
+
self.model_labels = self.model.config.id2label
|
| 84 |
+
|
| 85 |
+
print("✅ Modèle chargé avec succès")
|
| 86 |
+
print(f"📋 Étiquettes du modèle: {self.model_labels}")
|
| 87 |
+
print("🎯 VerifAI Handler V2 prêt!")
|
| 88 |
+
|
| 89 |
+
except Exception as e:
|
| 90 |
+
print(f"❌ Erreur lors du chargement: {e}")
|
| 91 |
+
raise e
|
| 92 |
+
|
| 93 |
+
def _normalize_label(self, label: str) -> str:
|
| 94 |
+
"""Normalise les étiquettes pour qu'elles soient cohérentes."""
|
| 95 |
+
label_lower = label.lower()
|
| 96 |
+
if re.search(r'real|human|authentic', label_lower):
|
| 97 |
+
return "Human"
|
| 98 |
+
if re.search(r'fake|generated|ai|artificial', label_lower):
|
| 99 |
+
return "AI Generated"
|
| 100 |
+
return "Unknown"
|
| 101 |
+
|
| 102 |
+
def __call__(self, data):
|
| 103 |
+
try:
|
| 104 |
+
# Traitement de l'image
|
| 105 |
+
image_data = data.get("inputs") or data
|
| 106 |
+
image_bytes = base64.b64decode(image_data)
|
| 107 |
+
image = Image.open(io.BytesIO(image_bytes)).convert('RGB')
|
| 108 |
+
|
| 109 |
+
# Prédiction avec le modèle unique
|
| 110 |
+
print("🔄 VerifAI V2 - Analyse en cours...")
|
| 111 |
+
inputs = self.processor(image, return_tensors="pt")
|
| 112 |
+
|
| 113 |
+
with torch.no_grad():
|
| 114 |
+
outputs = self.model(**inputs)
|
| 115 |
+
logits = outputs.logits
|
| 116 |
+
probabilities = F.softmax(logits, dim=-1)[0]
|
| 117 |
+
predicted_class_id = logits.argmax().item()
|
| 118 |
+
|
| 119 |
+
# Normaliser les probabilités
|
| 120 |
+
class_probs = {}
|
| 121 |
+
for class_id, prob in enumerate(probabilities):
|
| 122 |
+
label_str = self.model_labels.get(class_id, f"Class {class_id}")
|
| 123 |
+
normalized_label = self._normalize_label(label_str)
|
| 124 |
+
if normalized_label != "Unknown":
|
| 125 |
+
class_probs[normalized_label] = float(prob)
|
| 126 |
+
|
| 127 |
+
# S'assurer que les deux classes existent
|
| 128 |
+
class_probs.setdefault("Human", 0.0)
|
| 129 |
+
class_probs.setdefault("AI Generated", 0.0)
|
| 130 |
+
|
| 131 |
+
prediction_label = self._normalize_label(self.model_labels.get(predicted_class_id))
|
| 132 |
+
confidence = class_probs.get(prediction_label, 0.0)
|
| 133 |
+
|
| 134 |
+
# Déterminer l'ID de prédiction pour la compatibilité
|
| 135 |
+
prediction_id = 1 if prediction_label == "AI Generated" else 0
|
| 136 |
+
|
| 137 |
+
print(f"🔍 VerifAI V2 Résultat: {prediction_label} (confiance: {confidence:.3f})")
|
| 138 |
+
|
| 139 |
+
# Génération du Grad-CAM
|
| 140 |
+
cam_image_b64 = None
|
| 141 |
+
if self.grad_cam:
|
| 142 |
+
try:
|
| 143 |
+
print("🎨 Génération du Grad-CAM VerifAI...")
|
| 144 |
+
cam = self.grad_cam.generate_cam(
|
| 145 |
+
inputs['pixel_values'],
|
| 146 |
+
predicted_class_id
|
| 147 |
+
)
|
| 148 |
+
cam_resized = np.array(Image.fromarray((cam * 255).astype(np.uint8)).resize(image.size))
|
| 149 |
+
|
| 150 |
+
import matplotlib.pyplot as plt
|
| 151 |
+
plt.figure(figsize=(10, 5))
|
| 152 |
+
plt.imshow(image)
|
| 153 |
+
plt.imshow(cam_resized, cmap='jet', alpha=0.5)
|
| 154 |
+
plt.axis('off')
|
| 155 |
+
|
| 156 |
+
buf = io.BytesIO()
|
| 157 |
+
plt.savefig(buf, format='png', bbox_inches='tight', pad_inches=0)
|
| 158 |
+
buf.seek(0)
|
| 159 |
+
cam_image_b64 = base64.b64encode(buf.read()).decode('utf-8')
|
| 160 |
+
plt.close()
|
| 161 |
+
print("✅ Grad-CAM VerifAI généré avec succès")
|
| 162 |
+
except Exception as e:
|
| 163 |
+
print(f"⚠️ Erreur Grad-CAM: {e}")
|
| 164 |
+
|
| 165 |
+
return {
|
| 166 |
+
"status": "success",
|
| 167 |
+
"prediction": prediction_id,
|
| 168 |
+
"predicted_class_name": prediction_label,
|
| 169 |
+
"confidence": confidence,
|
| 170 |
+
"class_probabilities": class_probs,
|
| 171 |
+
"cam_image": cam_image_b64,
|
| 172 |
+
"model_info": {
|
| 173 |
+
"model_name": self.model_name,
|
| 174 |
+
"handler_version": "verifai-v2",
|
| 175 |
+
"precision_mode": "high",
|
| 176 |
+
"raw_prediction_id": predicted_class_id,
|
| 177 |
+
"raw_labels": self.model_labels
|
| 178 |
+
},
|
| 179 |
+
"reliability": "TRÈS ÉLEVÉE",
|
| 180 |
+
"version": "2.0",
|
| 181 |
+
"handler_name": "VerifAI Handler V2",
|
| 182 |
+
"deployment_note": "VERIFAI HANDLER V2 - HAUTE PRÉCISION"
|
| 183 |
+
}
|
| 184 |
+
|
| 185 |
+
except Exception as e:
|
| 186 |
+
print(f"❌ Erreur dans VerifAI Handler V2: {e}")
|
| 187 |
+
return {
|
| 188 |
+
"status": "error",
|
| 189 |
+
"error": str(e),
|
| 190 |
+
"prediction": 0,
|
| 191 |
+
"predicted_class_name": "Error",
|
| 192 |
+
"confidence": 0.0,
|
| 193 |
+
"class_probabilities": {},
|
| 194 |
+
"cam_image": None,
|
| 195 |
+
"version": "2.0",
|
| 196 |
+
"handler_name": "VerifAI Handler V2"
|
| 197 |
+
}
|
requirements.txt
ADDED
|
@@ -0,0 +1,6 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
torch>=2.0.0
|
| 2 |
+
torchvision>=0.15.0
|
| 3 |
+
pillow>=9.0.0
|
| 4 |
+
transformers>=4.21.0
|
| 5 |
+
numpy>=1.21.0
|
| 6 |
+
matplotlib>=3.0.0
|
test_verifai_v2.py
ADDED
|
@@ -0,0 +1,126 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
import requests
|
| 2 |
+
import base64
|
| 3 |
+
import json
|
| 4 |
+
from PIL import Image
|
| 5 |
+
import io
|
| 6 |
+
|
| 7 |
+
class VerifAIV2Tester:
|
| 8 |
+
def __init__(self, endpoint_url: str, api_token: str = None):
|
| 9 |
+
"""
|
| 10 |
+
Testeur pour VerifAI Handler V2
|
| 11 |
+
"""
|
| 12 |
+
self.endpoint_url = endpoint_url
|
| 13 |
+
self.headers = {"Content-Type": "application/json"}
|
| 14 |
+
if api_token:
|
| 15 |
+
self.headers["Authorization"] = f"Bearer {api_token}"
|
| 16 |
+
|
| 17 |
+
def test_prediction(self, image_path: str) -> dict:
|
| 18 |
+
"""Test une prédiction avec VerifAI Handler V2"""
|
| 19 |
+
try:
|
| 20 |
+
with open(image_path, "rb") as f:
|
| 21 |
+
image_bytes = f.read()
|
| 22 |
+
|
| 23 |
+
payload = {
|
| 24 |
+
"inputs": base64.b64encode(image_bytes).decode("utf-8")
|
| 25 |
+
}
|
| 26 |
+
|
| 27 |
+
response = requests.post(
|
| 28 |
+
self.endpoint_url,
|
| 29 |
+
json=payload,
|
| 30 |
+
headers=self.headers,
|
| 31 |
+
timeout=30
|
| 32 |
+
)
|
| 33 |
+
|
| 34 |
+
if response.status_code == 200:
|
| 35 |
+
return response.json()
|
| 36 |
+
else:
|
| 37 |
+
return {
|
| 38 |
+
"status": "error",
|
| 39 |
+
"error": f"HTTP {response.status_code}: {response.text}"
|
| 40 |
+
}
|
| 41 |
+
|
| 42 |
+
except Exception as e:
|
| 43 |
+
return {
|
| 44 |
+
"status": "error",
|
| 45 |
+
"error": str(e)
|
| 46 |
+
}
|
| 47 |
+
|
| 48 |
+
def compare_with_v1(self, v1_endpoint_url: str, image_path: str, v1_token: str = None):
|
| 49 |
+
"""Compare les résultats V1 vs V2"""
|
| 50 |
+
print("🔄 Comparaison VerifAI V1 vs V2...")
|
| 51 |
+
|
| 52 |
+
# Test V2 (actuel)
|
| 53 |
+
v2_result = self.test_prediction(image_path)
|
| 54 |
+
|
| 55 |
+
# Test V1 (ancien handler)
|
| 56 |
+
v1_headers = {"Content-Type": "application/json"}
|
| 57 |
+
if v1_token:
|
| 58 |
+
v1_headers["Authorization"] = f"Bearer {v1_token}"
|
| 59 |
+
|
| 60 |
+
try:
|
| 61 |
+
with open(image_path, "rb") as f:
|
| 62 |
+
image_bytes = f.read()
|
| 63 |
+
payload = {"inputs": base64.b64encode(image_bytes).decode("utf-8")}
|
| 64 |
+
|
| 65 |
+
v1_response = requests.post(v1_endpoint_url, json=payload, headers=v1_headers, timeout=30)
|
| 66 |
+
v1_result = v1_response.json() if v1_response.status_code == 200 else {"error": "V1 failed"}
|
| 67 |
+
|
| 68 |
+
except Exception as e:
|
| 69 |
+
v1_result = {"error": str(e)}
|
| 70 |
+
|
| 71 |
+
return {
|
| 72 |
+
"v1_result": v1_result,
|
| 73 |
+
"v2_result": v2_result,
|
| 74 |
+
"comparison": {
|
| 75 |
+
"v1_prediction": v1_result.get("predicted_class_name", "Error"),
|
| 76 |
+
"v2_prediction": v2_result.get("predicted_class_name", "Error"),
|
| 77 |
+
"v1_confidence": v1_result.get("confidence", 0),
|
| 78 |
+
"v2_confidence": v2_result.get("confidence", 0),
|
| 79 |
+
"agreement": v1_result.get("predicted_class_name") == v2_result.get("predicted_class_name")
|
| 80 |
+
}
|
| 81 |
+
}
|
| 82 |
+
|
| 83 |
+
def main():
|
| 84 |
+
"""Test principal VerifAI Handler V2"""
|
| 85 |
+
# Configuration
|
| 86 |
+
VERIFAI_V2_ENDPOINT = "https://[votre-endpoint-verifai-v2-url]"
|
| 87 |
+
VERIFAI_V1_ENDPOINT = "https://[votre-endpoint-v1-url]" # Optionnel pour comparaison
|
| 88 |
+
API_TOKEN = "votre_token_hf" # Optionnel
|
| 89 |
+
TEST_IMAGE = "test_image.jpg"
|
| 90 |
+
|
| 91 |
+
print("🧪 Test VerifAI Handler V2")
|
| 92 |
+
print("="*50)
|
| 93 |
+
|
| 94 |
+
tester = VerifAIV2Tester(VERIFAI_V2_ENDPOINT, API_TOKEN)
|
| 95 |
+
|
| 96 |
+
# Test simple
|
| 97 |
+
result = tester.test_prediction(TEST_IMAGE)
|
| 98 |
+
|
| 99 |
+
if result.get("status") == "success":
|
| 100 |
+
print("✅ VerifAI V2 - Test réussi!")
|
| 101 |
+
print(f"🎯 Handler: {result.get('handler_name', 'N/A')}")
|
| 102 |
+
print(f"📊 Prédiction: {result.get('predicted_class_name', 'N/A')}")
|
| 103 |
+
print(f"🎲 Confiance: {result.get('confidence', 0):.2%}")
|
| 104 |
+
print(f"🛡️ Fiabilité: {result.get('reliability', 'N/A')}")
|
| 105 |
+
print(f"📋 Version: {result.get('version', 'N/A')}")
|
| 106 |
+
|
| 107 |
+
if "model_info" in result:
|
| 108 |
+
print(f"🔧 Mode: {result['model_info'].get('precision_mode', 'N/A')}")
|
| 109 |
+
|
| 110 |
+
# Comparaison V1 vs V2 (si V1 disponible)
|
| 111 |
+
if VERIFAI_V1_ENDPOINT != "https://[votre-endpoint-v1-url]":
|
| 112 |
+
print("\n" + "="*30)
|
| 113 |
+
print("🆚 Comparaison V1 vs V2")
|
| 114 |
+
comparison = tester.compare_with_v1(VERIFAI_V1_ENDPOINT, TEST_IMAGE, API_TOKEN)
|
| 115 |
+
comp_data = comparison["comparison"]
|
| 116 |
+
|
| 117 |
+
print(f"V1: {comp_data['v1_prediction']} ({comp_data['v1_confidence']:.2%})")
|
| 118 |
+
print(f"V2: {comp_data['v2_prediction']} ({comp_data['v2_confidence']:.2%})")
|
| 119 |
+
print(f"Accord: {'✅ Oui' if comp_data['agreement'] else '❌ Non'}")
|
| 120 |
+
|
| 121 |
+
else:
|
| 122 |
+
print("❌ Erreur VerifAI V2:")
|
| 123 |
+
print(f" {result.get('error', 'Erreur inconnue')}")
|
| 124 |
+
|
| 125 |
+
if __name__ == "__main__":
|
| 126 |
+
main()
|