# Deepfake Detector (FaceForensics++)

## Model Overview
This model detects whether a facial image is real or manipulated (deepfake). It is fine-tuned on the FaceForensics++ dataset, which contains both authentic and synthetically manipulated facial images generated using various deepfake techniques.
The model uses a Vision Transformer (ViT) architecture to learn spatial and contextual patterns that differentiate real faces from manipulated ones.
The goal of this project is to provide a lightweight and accessible deepfake detection model that can be integrated into content moderation systems, digital forensics pipelines, and misinformation detection tools.
## Model Details

- Model Name: deepfake-detector-faceforensics
- Task: Image Classification
- Architecture: Vision Transformer (ViT)
- Parameters: ~85M
- Framework: Hugging Face Transformers
- File Format: Safetensors
## Labels

- 0 → Real
- 1 → Deepfake
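The label mapping above can be expressed as a simple lookup, useful when working with raw model outputs instead of the pipeline (the dictionary here is written from the table, not read from the model's config):

```python
# Label mapping from the table above: 0 = Real, 1 = Deepfake
ID2LABEL = {0: "Real", 1: "Deepfake"}

def index_to_label(class_index: int) -> str:
    """Map a predicted class index to its human-readable label."""
    return ID2LABEL[class_index]

print(index_to_label(1))  # Deepfake
```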
## Dataset
The model is trained on the FaceForensics++ dataset, a widely used benchmark for deepfake detection research.
The dataset contains manipulated videos generated using several face manipulation methods:
- FaceSwap
- Face2Face
- DeepFakes
- NeuralTextures
Frames extracted from these videos were used as training samples.
Dataset characteristics:
- Thousands of real and manipulated face images
- Multiple deepfake generation techniques
- Various compression levels
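Since training samples are frames extracted from videos, a minimal sketch of uniform frame sampling may help illustrate the idea (pure index arithmetic; the frame counts and sample sizes are hypothetical, and the card does not state which sampling scheme was actually used):

```python
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Pick `num_samples` frame indices spread evenly across a video clip."""
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

# e.g. sample 5 evenly spaced frames from a 100-frame clip
print(sample_frame_indices(100, 5))  # [0, 20, 40, 60, 80]
```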
## Training Configuration

### Training Setup

- Architecture: Vision Transformer (ViT-base)
- Optimizer: AdamW
- Batch Size: 32
- Loss Function: Cross-Entropy Loss
- Framework: PyTorch + Hugging Face Transformers
The model was fine-tuned on facial images extracted from FaceForensics++.
Standard preprocessing steps were applied:
- Face cropping
- Image resizing
- Normalization
- Data augmentation
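The normalization step above can be sketched as follows. The mean/std of 0.5 used here are common ViT defaults, assumed for illustration rather than read from this model's preprocessor config:

```python
def normalize_pixel(value: int, mean: float = 0.5, std: float = 0.5) -> float:
    """Scale an 8-bit pixel value to [0, 1], then standardize with mean/std."""
    return (value / 255.0 - mean) / std

# A fully black pixel maps to -1.0, a fully white pixel to 1.0
print(normalize_pixel(0), normalize_pixel(255))  # -1.0 1.0
```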
## Evaluation
The model was evaluated on a held-out validation set.
| Metric | Score |
|---|---|
| Accuracy | ~93% |
| Precision | ~0.92 |
| Recall | ~0.94 |
| F1-score | ~0.93 |
These results demonstrate that transformer-based architectures can effectively identify subtle manipulation artifacts in deepfake images.
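As a sanity check on how the metrics in the table relate to each other, precision, recall, and F1 can be computed from confusion-matrix counts (the counts below are made up for illustration and are not the model's actual validation results):

```python
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts chosen to land near the reported scores
p, r, f1 = prf(tp=94, fp=8, fn=6)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.92 0.94 0.93
```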
## Usage

Example inference using the Hugging Face pipeline API:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="HrutikAdsare/deepfake-detector-faceforensics"
)

result = classifier("face_image.jpg")
print(result)
```
Example output:

```python
[
    {"label": "deepfake", "score": 0.94}
]
```
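Downstream systems often need a single yes/no decision rather than raw scores. A minimal sketch of thresholding the pipeline output (the 0.5 threshold is a design choice for the caller, not something the model prescribes):

```python
def is_deepfake(predictions: list[dict], threshold: float = 0.5) -> bool:
    """Return True if the 'deepfake' score meets or exceeds the threshold."""
    for pred in predictions:
        if pred["label"].lower() == "deepfake":
            return pred["score"] >= threshold
    return False

# Using the example output shown above
result = [{"label": "deepfake", "score": 0.94}]
print(is_deepfake(result))  # True
```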
## Applications
Possible real-world applications include:
- Social media content moderation
- Deepfake detection tools
- Digital forensics
- Misinformation detection
- Identity verification systems
## Limitations
- Performance may decrease on low-resolution images
- May struggle with highly advanced GAN-generated faces
- Works best when the face region is clearly visible
Future work may include training on larger and more diverse datasets to improve robustness.
## Ethical Considerations
Deepfake detection technology should be used responsibly. This model is intended for research and educational purposes and should not be used as the sole method for determining authenticity.
Human verification is recommended in critical decision-making scenarios.
## Author
Hrutik Adsare
This project was developed as part of a deep learning research exploration into AI-generated media detection using transformer-based architectures.
## License
Please refer to the repository license for usage terms.
If you use this model, consider giving it a ⭐