---
title: DSBackend
emoji: 🦀
colorFrom: red
colorTo: pink
sdk: docker
app_port: 7860
license: mit
language:
  - en
metrics:
  - accuracy
library_name: tf-keras
pipeline_tag: image-classification
---

# Deepfake Detection Backend & Model (V1)

This repository contains a Convolutional Neural Network (CNN) based model fine-tuned for deepfake classification, now wrapped in a FastAPI backend that natively processes both images and videos (frame by frame).

## Core Advancements

To drastically improve real-world accuracy (especially on webcam footage and under scaling distortions), we implemented Ultralytics YOLO11-Pose (`yolo11n-pose.pt`) for face extraction.

The underlying CNN (`model.h5`) performs well only when evaluated on tight facial crops matching its training data. Generic YOLO bounding boxes are too loose and capture background noise. By extracting tracked keypoints (eyes, nose, ears) with YOLO11-Pose and computing a bounding box around them, we generate tight facial crops, ensuring that the CNN sees exactly what it was trained to see, regardless of camera distance.

## Key Features

- Model Architecture: Convolutional Neural Network (CNN)
- Input Size: 128x128 pixels (tight facial crop)
- Face Extractor: Ultralytics YOLO11-Pose (`yolo11n-pose.pt`)
- Video Processing: Extracts and analyzes 1 in every 5 frames (~6 FPS for a 30 FPS video) for robust temporal spoof detection. A video is flagged as "Fake" if any evaluated frame's prediction score reaches or exceeds 0.5.
- Number of Classes: 2 (Real, Fake)
- API Framework: FastAPI, Uvicorn, Python-Multipart

## Processing Flow & Algorithm

The system natively processes both images and videos using a unified core prediction pipeline. The following describes the step-by-step logic.

### 1. Media Handling Flow

**For Images:**

1. The image is parsed and decoded directly from the HTTP request (see the sketch after this list).
2. The image is passed to the Core Prediction Pipeline.
3. A confidence score is returned, classifying the image as "Real" or "Fake".
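
For illustration, here is a minimal sketch of what the image branch of such an endpoint could look like. This is not the repository's actual implementation: the endpoint body and the `process_frame` helper (standing in for the Core Prediction Pipeline described below) are assumptions.

```python
# Minimal sketch of the image branch of a /predict endpoint (illustrative,
# not the repository's actual code).
import cv2
import numpy as np
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    data = await file.read()
    # Decode the image directly from the request bytes (no temp file needed)
    img = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)
    score = float(process_frame(img))  # Core Prediction Pipeline (see below)
    return {
        "filename": file.filename,
        "type": "image",
        "prediction": "Fake" if score >= 0.5 else "Real",
        "confidence_score": round(score, 4),
    }
```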

**For Videos:**

1. The video is saved to a temporary file and read using OpenCV.
2. Frames are iteratively extracted.
3. To optimize performance without sacrificing temporal accuracy, 1 in every 5 frames (~6 FPS for a 30 FPS video) is analyzed.
4. Each selected frame is individually passed to the Core Prediction Pipeline.
5. The backend collects a list of confidence scores from the analyzed frames.
6. The video is flagged as "Fake" if the maximum confidence score across all frames (i.e., the most manipulated frame) reaches or exceeds 0.5, as sketched below.
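
A minimal sketch of this sampling loop, assuming the same hypothetical `process_frame` helper as above. The field names mirror the JSON output documented later; the actual backend code may differ.

```python
import cv2

FRAME_STRIDE = 5  # analyze 1 in every 5 frames, per the flow above

def analyze_video(path: str) -> dict:
    """Sample frames from a video and aggregate per-frame scores."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % FRAME_STRIDE == 0:
            scores.append(float(process_frame(frame)))  # Core Prediction Pipeline
        index += 1
    cap.release()

    # Flag the video on the single worst (most manipulated) frame
    max_score = max(scores) if scores else 0.0
    return {
        "prediction": "Fake" if max_score >= 0.5 else "Real",
        "frames_analyzed": len(scores),
        "fake_frames_count": sum(s >= 0.5 for s in scores),
        "max_fake_score": round(max_score, 4),
        "avg_score": round(sum(scores) / len(scores), 4) if scores else 0.0,
    }
```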

### 2. Core Prediction Pipeline (Pseudocode)

To definitively locate and strictly frame the face, the YOLO11-Pose pipeline extracts 5 specific facial keypoints: Nose, Left Eye, Right Eye, Left Ear, and Right Ear.

```text
function process_frame(frame):
    # Step 1: Detect face & extract keypoints (YOLO11-Pose)
    results = yolo_pose_model.predict(frame)

    if face_keypoints_found(results):
        # Eyes, nose, and ears detected
        bounding_box = calculate_tight_box_from_keypoints(results)
        face_crop = crop_image(frame, bounding_box)
    elif person_bounding_box_found(results):
        # Fallback to the standard object-detection box if keypoints fail
        bounding_box = shrink_box_to_approximate_face(results)
        face_crop = crop_image(frame, bounding_box)
    else:
        # Extreme fallback if no person is detected
        face_crop = frame

    # Step 2: Preprocessing
    resized_face = resize_image(face_crop, width=128, height=128)
    normalized_face = resized_face / 255.0
    model_input = expand_dimensions(normalized_face)

    # Step 3: CNN model inference
    confidence_score = cnn_model.predict(model_input)

    return confidence_score
```
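
The keypoint path of this pipeline is implemented concretely in the inference script at the end of this README. The `shrink_box_to_approximate_face` fallback is not shown there, so here is an illustrative sketch; the fractions are guesses based on typical head proportions, not the repository's actual ratios.

```python
def shrink_box_to_approximate_face(person_box, top_fraction=0.2, width_fraction=0.5):
    """Approximate a face region from a full-person detection box.

    Illustrative only: the head usually sits in the top slice of a person
    box, roughly centered horizontally. The real fallback may use
    different proportions.
    """
    x1, y1, x2, y2 = person_box
    w, h = x2 - x1, y2 - y1
    face_w = w * width_fraction  # narrow the box around its horizontal center
    cx = (x1 + x2) / 2.0
    return (int(cx - face_w / 2), int(y1),
            int(cx + face_w / 2), int(y1 + h * top_fraction))
```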

## Training Performance

Below are the graphs illustrating the training and validation accuracy and loss for the model:

Model Training/Validation Graph 1

Model Training/Validation Graph 2

## Installation

1. Create a Python 3.11 virtual environment and activate it:

   ```bash
   python3.11 -m venv venv
   source venv/bin/activate
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

## Running the API Server

We provide a convenient startup script to launch the FastAPI backend:

```bash
chmod +x start_server.sh
./start_server.sh
```

The server will bind to `0.0.0.0:8000`, making the `/predict` endpoint available.

## Usage (API)

You can send a `POST` request with an image or video to the `/predict` endpoint using `multipart/form-data`:

```python
import requests

url = "http://localhost:8000/predict"
file_path = "sample_video.mp4"  # or an image, e.g. "image.jpg"

with open(file_path, "rb") as file:
    files = {"file": file}
    response = requests.post(url, files=files)

print(response.json())
```

**JSON Output Structure (Video):**

```json
{
  "filename": "sample_video.mp4",
  "type": "video",
  "prediction": "Fake",
  "confidence_score": 0.8921,
  "frames_analyzed": 120,
  "fake_frames_count": 14,
  "max_fake_score": 0.8921,
  "avg_score": 0.3102
}
```

Note: A score closer to 1.0 indicates heavy manipulation, while a score closer to 0.0 indicates authentic content. A `max_fake_score` ≥ 0.5 triggers a "Fake" prediction.
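
As a client-side illustration, you could summarize the response like this (assuming `fake_frames_count` counts frames scoring at or above the 0.5 threshold, which the field name suggests but the source does not state explicitly):

```python
# Interpret the video response from the requests example above
result = response.json()
if result["type"] == "video":
    ratio = result["fake_frames_count"] / result["frames_analyzed"]
    print(f"{result['prediction']}: max score {result['max_fake_score']:.4f}, "
          f"{result['fake_frames_count']}/{result['frames_analyzed']} "
          f"frames ({ratio:.0%}) at or above the 0.5 threshold")
```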

## Usage (Direct Python Inference)

If you'd like to use the YOLO11 inference pipeline directly in your Python code without the API server, feel free to adapt this minimal inference script:

```python
import cv2
import numpy as np
import warnings
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from ultralytics import YOLO

warnings.filterwarnings('ignore', category=UserWarning)

# Load models
model = load_model('model.h5', compile=False)
detector = YOLO('yolo11n-pose.pt')

def detect_and_predict(img_path):
    img = cv2.imread(img_path)
    if img is None:
        print(f"Could not read image: {img_path}")
        return

    # 1. Detect face using YOLO11-Pose keypoints
    results = detector.predict(img, verbose=False)
    if (len(results) > 0 and results[0].keypoints is not None
            and len(results[0].keypoints.xy) > 0
            and len(results[0].keypoints.xy[0]) > 0):
        kpts = results[0].keypoints.xy[0].cpu().numpy()
        # COCO keypoints 0-4: nose, left eye, right eye, left ear, right ear
        valid_kpts = np.array([k for k in kpts[0:5] if k[0] > 0 and k[1] > 0])

        if len(valid_kpts) > 0:
            x_min, y_min = np.min(valid_kpts, axis=0)
            x_max, y_max = np.max(valid_kpts, axis=0)

            # Expand the tight keypoint box to capture the full face (forehead to jaw)
            w, h = x_max - x_min, y_max - y_min
            if w > 0 and h > 0:
                x1 = max(0, int(x_min - w * 0.3))
                y1 = max(0, int(y_min - h * 0.5))
                x2 = min(img.shape[1], int(x_max + w * 0.3))
                y2 = min(img.shape[0], int(y_max + h * 0.8))

                face = img[y1:y2, x1:x2]
                if face.size > 0:
                    face = cv2.resize(face, (128, 128))

                    # 2. Preprocess & predict
                    img_array = np.expand_dims(image.img_to_array(face), axis=0) / 255.0
                    score = float(model.predict(img_array, verbose=0)[0][0])

                    prediction = 'Fake' if score >= 0.5 else 'Real'
                    print(f"Prediction: {prediction} (Score: {score:.4f})")
                    return

    print("Could not detect a clear face.")

# Try it out
detect_and_predict('path_to_your_image.jpg')
```