Deepfake Detection Pipeline

A complete deepfake detection system that combines a backbone classifier with Vision-Language Model (VLM) reasoning for explainable predictions.

Features

  • Backbone Classification: Uses a SigLIP-based model to classify images as Artificial, Deepfake, or Real
  • Forensic Signal Extraction: Analyzes texture, frequency, and compression artifacts
  • Conditional VLM Analysis: Provides natural language explanations for non-real images using Qwen2-VL-2B
  • Efficient Processing: Runs the VLM only on images classified as non-real, or as Real with low confidence
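The conditional gating described above can be sketched as follows. This is a minimal illustration, not the actual predict.py internals; the function name and signature are hypothetical, and the default threshold mirrors the --real_threshold default of 0.90:

```python
def should_run_vlm(label: str, confidence: float, real_threshold: float = 0.90) -> bool:
    """Decide whether an image needs a VLM explanation.

    Non-real predictions (Artificial, Deepfake) always get one;
    a 'Real' prediction is only re-examined when its confidence
    falls below the threshold.
    """
    if label != "Real":
        return True
    return confidence < real_threshold
```

This keeps the expensive VLM pass off the common case of confidently real images while still double-checking borderline ones.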

Installation

pip install -r requirements.txt

Usage

python predict.py --input_dir /path/to/test_images --output_file predictions.json

Arguments

  • --input_dir (required): Path to folder containing images to analyze
  • --output_file (required): Path to output JSON file for predictions
  • --real_threshold (optional): Confidence threshold for "Real" classification (default: 0.90)

Example

python predict.py --input_dir ./test_images --output_file results.json --real_threshold 0.85

Output Format

The script generates a JSON file with predictions for each image:

[
  {
    "image_name": "example.jpg",
    "manipulation_type": "Deepfake",
    "authenticity_score": 0.8542,
    "explanation": "The image exhibits unnatural texture smoothing in facial regions. Frequency analysis reveals artifacts consistent with GAN-based synthesis."
  }
]
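Since the output is plain JSON, downstream code can consume it with the standard library alone. A small sketch (the helper name is illustrative, not part of this repo):

```python
import json

def load_predictions(path: str) -> dict:
    """Load the predictions JSON file and index the records by image name."""
    with open(path, "r", encoding="utf-8") as f:
        records = json.load(f)
    return {r["image_name"]: r for r in records}

# Example: collect the images flagged as manipulated.
# preds = load_predictions("results.json")
# fakes = [name for name, r in preds.items() if r["manipulation_type"] != "Real"]
```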

Requirements

  • Python 3.8+
  • CUDA-capable GPU (recommended for faster processing)
  • ~8GB GPU memory for VLM inference

Model Details

  • Backbone: prithivMLmods/AI-vs-Deepfake-vs-Real-9999 (SigLIP)
  • VLM: Qwen/Qwen2-VL-2B-Instruct
  • Forensic Analysis: Laplacian, LBP, FFT, DCT
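As a rough illustration of what two of the listed forensic signals measure, here is a NumPy-only sketch of Laplacian texture variance and FFT high-frequency energy. The function names, kernel, and cutoff are illustrative assumptions, not the repo's actual implementation, and the LBP and DCT signals are omitted for brevity:

```python
import numpy as np

def texture_sharpness(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response over a grayscale image.

    Low values indicate the over-smoothed textures often seen in
    generated or heavily retouched faces.
    """
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # manual 3x3 convolution (valid region only)
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def high_freq_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of 2D-FFT spectral energy outside a low-frequency disc.

    GAN-synthesized images frequently show characteristic
    high-frequency artifacts in this band.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r > cutoff * min(h, w) / 2   # everything beyond the disc
    return float(spec[mask].sum() / spec.sum())
```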

Notes

  • The VLM runs only on images classified as non-real, or as Real with confidence below --real_threshold
  • First run will download models (~2-4GB total)
  • Supported image formats: .jpg, .jpeg, .png
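Given the supported formats above, input discovery can be as simple as an extension filter. A hypothetical sketch (this helper is not part of the repo):

```python
from pathlib import Path

SUPPORTED = {".jpg", ".jpeg", ".png"}

def list_images(input_dir: str) -> list:
    """Return the supported image files in input_dir, sorted by name."""
    return sorted(p for p in Path(input_dir).iterdir()
                  if p.suffix.lower() in SUPPORTED)
```

Lower-casing the suffix keeps files like photo.JPG from being skipped.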