---
license: mit
language:
- en
library_name: pytorch
tags:
- image-classification
- computer-vision
- ai-detection
- deepfake-detection
- pytorch
- image-quality
datasets:
- custom
metrics:
- accuracy
pipeline_tag: image-classification
model-index:
- name: TIGAS
  results:
  - task:
      type: image-classification
      name: AI-Generated Image Detection
    metrics:
    - type: accuracy
      value: 0.656
      name: Validation Accuracy
    - type: loss
      value: 0.308
      name: Validation Loss
---
# TIGAS - Trained Image Generation Authenticity Score
<div align="center">

[MIT License](https://opensource.org/licenses/MIT) • [Python](https://www.python.org/downloads/) • [PyTorch](https://pytorch.org/)

**Neural network metric for detecting AI-generated images**

[GitHub Repository](https://github.com/H1merka/TIGAS) • [Documentation](https://github.com/H1merka/TIGAS/blob/main/README_eng.md)

</div>
## Model Description
TIGAS (Trained Image Generation Authenticity Score) is a multi-branch neural network designed to distinguish between real/natural images and AI-generated/fake images. It provides a continuous score in the range [0, 1]:
- **1.0** → Natural/Real image
- **0.0** → AI-Generated/Fake image
### Architecture
The model uses a **Full Mode** architecture with three complementary analysis branches plus an attention-based fusion module:
1. **Perceptual Features** – Multi-scale CNN extracting visual patterns at 1/2, 1/4, 1/8, 1/16 resolutions
2. **Spectral Analysis** – FFT-based frequency-domain analysis for detecting GAN artifacts
3. **Statistical Consistency** – Distribution analysis and moment estimation
4. **Cross-Modal Attention** – Fuses features from all branches for the final prediction
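The branch layout above can be illustrated with a toy PyTorch module. Everything here (layer sizes, the `TinyTIGAS` name, the exact statistics) is invented for illustration; the real TIGASModel differs, but the data flow is the same: three branch embeddings fused by attention into one score in [0, 1].

```python
import torch
import torch.nn as nn


class TinyTIGAS(nn.Module):
    """Illustrative multi-branch detector: perceptual CNN, FFT-based
    spectral branch, channel statistics, and attention fusion."""

    def __init__(self, dim: int = 32):
        super().__init__()
        # Perceptual branch: strided convs downsample to 1/2, 1/4, 1/8, 1/16
        self.percep = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Spectral branch: convolves the log-magnitude of the 2D FFT
        self.spectral = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Statistical branch: per-channel mean, std, and third moment (9 values)
        self.stats = nn.Linear(9, dim)
        # Attention-style fusion over the three branch embeddings
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.percep(x).flatten(1)                        # [B, dim]
        mag = torch.fft.fft2(x).abs().log1p()                # frequency magnitudes
        s = self.spectral(mag).flatten(1)                    # [B, dim]
        mu = x.mean(dim=(2, 3), keepdim=True)
        m = torch.cat([
            x.mean(dim=(2, 3)),
            x.std(dim=(2, 3)),
            ((x - mu) ** 3).mean(dim=(2, 3)),
        ], dim=1)
        t = self.stats(m)                                    # [B, dim]
        tokens = torch.stack([p, s, t], dim=1)               # [B, 3, dim]
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1)).squeeze(1)       # [B] scores in (0, 1)


model = TinyTIGAS()
scores = model(torch.randn(2, 3, 256, 256))
print(scores.shape)  # torch.Size([2])
```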
### Model Specifications
| Property | Value |
|----------|-------|
| **Parameters** | ~18.9M |
| **Input Size** | 256×256 RGB |
| **Output** | Single score [0, 1] |
| **Architecture** | TIGASModel (Full Mode) |
| **File Size** | ~217 MB |
## Training Details
### Dataset
- **Training samples**: 128,776 images
- **Validation samples**: 14,167 images
- **Test samples**: 14,126 images
- **Total**: 157,069 images
- **Class balance**: ~46% real, ~54% fake
### Training Configuration
| Parameter | Value |
|-----------|-------|
| Epochs | 3 |
| Batch Size | 8 |
| Image Size | 256×256 |
| Learning Rate | 1e-4 (with warmup) |
| Optimizer | AdamW |
| Scheduler | Cosine Annealing |
| Mixed Precision | Enabled (AMP) |
| Hardware | NVIDIA RTX 3050 Ti (4GB VRAM) |
### Training Results
| Epoch | Train Loss | Val Loss | Val Accuracy |
|-------|------------|----------|--------------|
| 0 | 0.4115 | 0.3262 | 61.46% |
| 1 | 0.3707 | 0.3099 | 65.09% |
| 2 | 0.3506 | **0.3079** | **65.55%** |
**Note**: This is an early checkpoint after 3 epochs of training. The model is still learning and accuracy will improve with more training epochs (recommended: 30-50 epochs for production use).
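The configuration above (AdamW, cosine annealing, AMP) can be sketched as a minimal training loop. The model, batch, and data here are stand-ins, not the repository's training script:

```python
import torch
from torch import nn

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"

# Stand-in for TIGASModel; outputs one logit per image
model = nn.Sequential(
    nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 8 * 8, 1)
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=3)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # AMP; no-op on CPU
criterion = nn.BCEWithLogitsLoss()

for epoch in range(3):
    # One synthetic batch per epoch stands in for the real DataLoader
    images = torch.randn(8, 3, 256, 256, device=device)
    labels = torch.randint(0, 2, (8, 1), device=device).float()  # 1 = real, 0 = fake

    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_cuda):  # mixed precision on GPU
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```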
## Usage
### Installation
```bash
pip install torch torchvision
pip install huggingface-hub
# Clone the TIGAS repository
git clone https://github.com/H1merka/TIGAS.git
cd TIGAS
pip install -e .
```
### Quick Start
```python
from tigas import TIGAS

# Initialize with auto-download from HuggingFace Hub
tigas = TIGAS(auto_download=True)

# Evaluate single image
score = tigas('path/to/image.jpg')
print(f"Authenticity Score: {score:.4f}")

# Interpretation
if score > 0.7:
    print("Likely REAL (High Confidence)")
elif score > 0.5:
    print("Probably REAL (Medium Confidence)")
elif score > 0.3:
    print("Probably FAKE (Medium Confidence)")
else:
    print("Likely FAKE (High Confidence)")
```
### Batch Processing
```python
import torch
from tigas import TIGAS
tigas = TIGAS(auto_download=True, device='cuda')
# Process batch of images
images = torch.randn(8, 3, 256, 256) # [B, C, H, W]
scores = tigas(images)
print(f"Mean score: {scores.mean():.4f}")
```
### Directory Processing
```python
from tigas import TIGAS

tigas = TIGAS(auto_download=True)

# Evaluate all images in a directory
results = tigas.compute_directory(
    'path/to/images/',
    return_paths=True,
    batch_size=32
)

for img_path, score in results.items():
    print(f"{img_path}: {score:.4f}")
```
### As a Differentiable Loss Function
```python
from tigas import TIGAS
tigas = TIGAS(auto_download=True)
# In generator training loop
generated_images = generator(noise)
authenticity_score = tigas(generated_images)
# Maximize authenticity (make images look more real)
loss = 1.0 - authenticity_score.mean()
loss.backward()
```
### Command Line
```bash
# Single image evaluation
python scripts/evaluate.py --image test.jpg --auto_download
# Directory evaluation
python scripts/evaluate.py --image_dir images/ --auto_download --batch_size 32
# Save results
python scripts/evaluate.py --image_dir images/ --output results.json --plot
```
## Loading the Model Manually
```python
import torch
from huggingface_hub import hf_hub_download

# Download checkpoint
checkpoint_path = hf_hub_download(
    repo_id="H1merka/TIGAS",
    filename="best_model.pt"
)

# Load checkpoint
checkpoint = torch.load(checkpoint_path, map_location='cpu')

# Access model weights
model_state_dict = checkpoint['model_state_dict']
epoch = checkpoint['epoch']
best_val_loss = checkpoint['best_val_loss']

print(f"Loaded model from epoch {epoch}")
print(f"Best validation loss: {best_val_loss:.4f}")
```
## Checkpoint Contents
The checkpoint file (`best_model.pt`) contains:
| Key | Description |
|-----|-------------|
| `model_state_dict` | Model weights |
| `optimizer_state_dict` | Optimizer state (for resume training) |
| `scheduler_state_dict` | LR scheduler state |
| `scaler_state_dict` | AMP GradScaler state |
| `epoch` | Training epoch number |
| `global_step` | Global training step |
| `best_val_loss` | Best validation loss achieved |
| `train_history` | Training loss history |
| `val_history` | Validation metrics history |
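These keys make it possible to resume training exactly where it stopped. A self-contained round-trip sketch, using a small `nn.Linear` as a stand-in for the real TIGASModel (the epoch and loss values mirror the table above; `global_step` is a placeholder):

```python
import torch
from torch import nn

# Stand-in components; in practice these would match the training setup
model = nn.Linear(4, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
scaler = torch.cuda.amp.GradScaler(enabled=False)

# Save a checkpoint with the same keys as best_model.pt
torch.save({
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "scheduler_state_dict": scheduler.state_dict(),
    "scaler_state_dict": scaler.state_dict(),
    "epoch": 2,
    "global_step": 0,  # placeholder
    "best_val_loss": 0.3079,
}, "checkpoint.pt")

# Resume: restore every component from its matching key
ckpt = torch.load("checkpoint.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
optimizer.load_state_dict(ckpt["optimizer_state_dict"])
scheduler.load_state_dict(ckpt["scheduler_state_dict"])
scaler.load_state_dict(ckpt["scaler_state_dict"])
start_epoch = ckpt["epoch"] + 1
print(f"Resuming from epoch {start_epoch}")
```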
## Limitations
- **Early Training Stage**: This checkpoint is from early training (3 epochs). For production use, train for 30-50+ epochs.
- **Dataset Bias**: Performance may vary on images from generators not represented in the training set.
- **Resolution Dependency**: Best results at 256×256. Other resolutions are automatically resized.
- **Adversarial Robustness**: Not specifically hardened against adversarial attacks.
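The library handles the resizing mentioned above internally; if you preprocess batches yourself, a minimal equivalent sketch (the helper name is invented):

```python
import torch
import torch.nn.functional as F


def to_model_size(x: torch.Tensor) -> torch.Tensor:
    """Resize a [B, C, H, W] batch to the model's native 256x256."""
    return F.interpolate(x, size=(256, 256), mode="bilinear", align_corners=False)


x = to_model_size(torch.randn(1, 3, 640, 480))
print(x.shape)  # torch.Size([1, 3, 256, 256])
```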
## Intended Use
### Primary Use Cases
- Detecting AI-generated images in content moderation
- Evaluating quality of generative models
- Research on image authenticity
- Integration as a loss function for training more realistic generators
### Out-of-Scope Use
- Legal evidence without human verification
- Sole basis for content removal decisions
- Real-time processing of high-volume streams (without optimization)
## Citation
```bibtex
@software{tigas2025,
  title   = {TIGAS: Trained Image Generation Authenticity Score},
  author  = {Morgenshtern, Dmitrij},
  year    = {2025},
  url     = {https://github.com/H1merka/TIGAS},
  license = {MIT}
}
```
## License
This model is released under the [MIT License](LICENSE).
## Contact
- **GitHub**: [H1merka/TIGAS](https://github.com/H1merka/TIGAS)
- **Issues**: [GitHub Issues](https://github.com/H1merka/TIGAS/issues)