---
license: gpl-3.0
pipeline_tag: image-classification
tags:
- ai-detection
- deepfake-detection
- image-classification
- computer-vision
- pytorch
---

# BAILU - Lightweight AI-Generated Image Detector

BAILU is a highly efficient deepfake detection model designed to identify AI-generated images from a variety of image generation models. With only **2M parameters (~8MB)**, it achieves **95.88% overall accuracy** by analyzing artifacts and signatures unique to AI generation pipelines.

## 🔓 Why Open-Source Matters for Deepfake Detection

This model was only possible because companies like Black Forest Labs and Stability AI release their models publicly. Private, closed-source models create detection blind spots: we cannot defend against what we cannot study.

We strongly encourage all AI companies to open-source their models to enable:

- Effective deepfake detection research
- Transparency in AI development
- Collaborative safety measures
- Public trust through verifiable defenses

## 🎯 Key Features

- **Ultra-Lightweight**: 2M parameters, ~8MB model size - runs on CPU or GPU
- **Multi-VAE Detection**: Trained to detect artifacts from FLUX.1, FLUX.2, SDXL, and Stable Diffusion 1.5
- **High Accuracy**: 95.88% overall accuracy (97.75% AI detection rate, 94.00% real detection)
- **Fast Inference**: <10ms per image on modern GPUs
- **Open-Source Advocacy**: Built to demonstrate the importance of open-source model transparency
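
A minimal inference sketch in PyTorch. The `TinyDetector` class below is a hypothetical stand-in (this card does not publish BAILU's layer layout or a loading snippet); what it illustrates is the classification contract implied by the BCE-with-logits training objective: a single logit, passed through a sigmoid, giving the probability that an image is AI-generated.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Hypothetical stand-in for the detector; the real BAILU
    architecture is not specified in this card."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: "AI-generated?"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyDetector().eval()
image = torch.rand(1, 3, 224, 224)  # placeholder for a preprocessed RGB image
with torch.no_grad():
    prob_ai = torch.sigmoid(model(image)).item()  # P(image is AI-generated)
label = "ai-generated" if prob_ai >= 0.5 else "real"
```

Because the network is small, the same code runs unmodified on CPU; move `model` and `image` to `cuda` only if a GPU is available.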

## 📊 Performance Metrics

| Metric | Score |
|--------|-------|
| **Overall Validation Accuracy** | 95.88% (767/800) |
| **Loss** | 0.2645 |

*Tested on a balanced dataset of 400 AI-generated and 400 real images*
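
The headline number is consistent with the per-class rates quoted above, which can be checked directly:

```python
# Per-class counts implied by the card's figures (400 AI + 400 real images):
ai_correct = round(0.9775 * 400)    # 97.75% AI detection rate -> 391 images
real_correct = round(0.9400 * 400)  # 94.00% real detection rate -> 376 images
overall = (ai_correct + real_correct) / 800  # 767/800 = 95.875%, i.e. 95.88%
```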

## 🔧 Training Details

- **Hardware**: NVIDIA RTX 5090
- **Training Time**: ~110 hours
- **Data Augmentation**: Random crops, flips, compression, resizing
- **Optimizer**: AdamW (lr=1e-4, weight_decay=1e-4)
- **Scheduler**: CosineAnnealingLR (T_max=50)
- **Loss**: Binary Cross-Entropy with Logits
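
The optimizer, scheduler, and loss listed above translate into the following training-step sketch. The model and batch here are placeholders (the real architecture, dataset, and augmentation pipeline are not published in this card); the hyperparameters are the ones stated above.

```python
import torch
import torch.nn as nn

# Placeholder model; the real ~2M-parameter BAILU network is not shown here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

# Hyperparameters as listed in Training Details.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(2):  # the real run trained far longer (~110 GPU-hours)
    x = torch.rand(8, 3, 64, 64)             # stand-in augmented image batch
    y = torch.randint(0, 2, (8, 1)).float()  # 1 = AI-generated, 0 = real
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # cosine decay of the learning rate per epoch
```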

Detection must keep pace with generation. That requires open access.

## ⚠️ Important Limitations

- Not foolproof: Adversarial attacks and new model architectures may evade detection (**we plan to train a model capable of detecting adversarial attacks later**)

- No attribution: Cannot identify which specific AI model created an image
- Temporal degradation: Effectiveness may decrease as new AI models emerge

**Disclaimer**: This tool is for research and educational purposes. Results should not be used as sole evidence in legal or high-stakes decisions without human expert verification.