# Kaeva Deepfake Detection — Training Datasets (V1–V9)
This repository documents all training datasets used across Kaeva deepfake detection model versions V1 through V9. No raw data is hosted here — this serves as a comprehensive reference card.
Training code: `Viraj-FG/kaeva-verify/training/`
## Dataset Inventory
### Established Benchmarks
| Dataset | Type | Source | License |
|---|---|---|---|
| CIFAKE | Real + AI-generated (CIFAR-10 scale) | HF: Bird/CIFAKE | CC BY-SA 4.0 |
| ArtiFact | Multi-generator forensics benchmark | GitHub: awsaf49/artifact | Research |
| OpenFake | Open-source deepfake benchmark | GitHub | Research |
| DeepFakeFace | Face-swap deepfakes | Kaggle | Research |
| GenImage | Multi-generator image detection | GitHub: GenImage-Dataset | Research |
| Kaggle DFD | Deepfake Detection Challenge | Kaggle DFD | Competition |
### Face Datasets (Real Baselines)
| Dataset | Description | Source | License |
|---|---|---|---|
| CelebA-HQ | 30k high-quality celebrity faces | GitHub: tkarras/progressive_growing_of_gans | Non-commercial research |
| FFHQ | 70k Flickr-sourced high-quality faces | GitHub: NVlabs/ffhq-dataset | CC BY-NC-SA 4.0 |
### Large-Scale Image Datasets
| Dataset | Description | Source | License |
|---|---|---|---|
| ImageNet-1k | 1.28M images, 1000 classes | image-net.org | Research (non-commercial) |
| ai-artbench | AI-generated art benchmark | HF: ramonpzg/ai-artbench | MIT |
| dima806/ai_vs_real | AI vs real photo classification | HF: dima806/ai_vs_real | CC BY 4.0 |
### Web-Scraped Sources
| Source | Type | Usage |
|---|---|---|
| thispersondoesnotexist.com | GAN-generated faces (StyleGAN) | Fake samples |
| picsum.photos | Random real photographs | Real baseline samples |
| StyleGAN3 | NVIDIA StyleGAN3 generated faces | Fake samples (GAN family) |
## V9 Generator Coverage
V9 expanded coverage to 10 modern generators to ensure broad generalization:
| Generator | Family | Notes |
|---|---|---|
| sdxl_turbo | Stable Diffusion XL Turbo | Distilled, few-step |
| playground_v2.5 | Playground AI | Aesthetic-optimized |
| pixart_sigma | PixArt-Σ | DiT-based |
| kandinsky3 | Kandinsky 3 | Sber AI |
| sd35_medium | Stable Diffusion 3.5 Medium | MMDiT |
| kolors | Kolors (Kwai) | Chinese text-to-image |
| sd35_large | Stable Diffusion 3.5 Large | MMDiT (large) |
| flux_schnell | FLUX.1 [schnell] | Black Forest Labs, distilled |
| flux_dev | FLUX.1 [dev] | Black Forest Labs, guidance-distilled |
| wan2.1 | Wan 2.1 | Video/image generation |
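For broad generalization it helps to balance fake samples across generator families rather than individual models (e.g. the three Stable Diffusion variants share many artifacts). The sketch below illustrates one way to do that; the family grouping and the `balanced_by_family` helper are hypothetical, not the repository's actual pipeline code:

```python
import random
from collections import defaultdict

# Hypothetical grouping of the V9 generator IDs into model families.
GENERATOR_FAMILY = {
    "sdxl_turbo": "stable-diffusion",
    "playground_v2.5": "playground",
    "pixart_sigma": "pixart",
    "kandinsky3": "kandinsky",
    "sd35_medium": "stable-diffusion",
    "kolors": "kolors",
    "sd35_large": "stable-diffusion",
    "flux_schnell": "flux",
    "flux_dev": "flux",
    "wan2.1": "wan",
}

def balanced_by_family(samples, per_family, seed=0):
    """Draw up to `per_family` samples from each generator family.

    `samples` is a list of (path, generator_id) pairs; sampling is
    seeded for reproducibility.
    """
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for path, gen in samples:
        buckets[GENERATOR_FAMILY[gen]].append(path)
    chosen = []
    for family in sorted(buckets):
        paths = buckets[family]
        rng.shuffle(paths)
        chosen.extend(paths[:per_family])
    return chosen
```

Grouping by family keeps any one architecture's fingerprint from dominating the fake class.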
## Data Principles
### 1. Real Baseline — Pristine
All real images are sourced at the highest available quality with no re-compression, ensuring the model learns authentic camera/sensor characteristics rather than compression artifacts.
### 2. Compression Washing for Fakes
Fake images undergo compression washing (JPEG re-save at varying quality levels, WebP conversion, etc.) to strip superficial generation artifacts. This forces the model to detect deeper structural signals rather than relying on compression-level shortcuts.
### 3. GER Buffer — Hard Negatives
A Generator-Error-Rate (GER) buffer of hard negative samples is maintained. These are AI-generated images that closely mimic real image statistics and are difficult to classify. Including them during training improves calibration and pushes the decision boundary into the ambiguous region where it matters most.
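A fixed-capacity hard-negative buffer of this kind can be kept with a min-heap, retaining only the fakes the model finds hardest. This is a minimal sketch assuming "hardness" is the model's predicted probability that a fake image is real; it is not the repository's `ger_buffer.py`:

```python
import heapq

class GERBuffer:
    """Fixed-capacity buffer keeping the hardest fake samples.

    Hardness here is assumed to be the model's predicted P(real) on a
    fake image: the higher it is, the more the sample fools the model.
    A min-heap keeps the top-`capacity` hardest samples seen so far.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []      # min-heap of (hardness, tiebreak, sample)
        self._counter = 0    # tiebreak so samples never get compared

    def add(self, sample, p_real: float) -> None:
        item = (p_real, self._counter, sample)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif p_real > self._heap[0][0]:
            # Evict the easiest sample currently in the buffer.
            heapq.heapreplace(self._heap, item)

    def samples(self):
        """Return buffered samples, hardest first."""
        return [s for _, _, s in sorted(self._heap, reverse=True)]
```

Mixing these buffered samples into each training epoch concentrates gradient signal on the ambiguous decision boundary the section describes.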
## Training Scripts
All training code is maintained in the private repository:
```
Viraj-FG/kaeva-verify/training/
├── train_lnclip.py       # LNCLIP LayerNorm probe training
├── train_audio.py        # Audio deepfake detector training
├── data_pipeline.py      # Dataset loading & augmentation
├── compression_wash.py   # Compression washing transforms
└── ger_buffer.py         # GER hard negative mining
```
## Citation
If you use this dataset documentation or the Kaeva models, please reference:
```bibtex
@misc{kaeva2026,
  title={Kaeva: Multi-Modal Deepfake Detection},
  author={Viraj},
  year={2026},
  url={https://github.com/Viraj-FG/kaeva-verify}
}
```