---
language: en
license: apache-2.0
tags:
- image-classification
- ai-detection
- flux
- vision-transformer
- fake-detection
datasets:
- huggan/wikiart
- ash12321/flux-1-dev-generated-10k
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: FLUX Detector ViT
  results:
  - task:
      type: image-classification
      name: AI Image Detection
    metrics:
    - type: accuracy
      value: 0.9985
      name: Test Accuracy
    - type: f1
      value: 0.9985
      name: F1 Score
---

# FLUX Detector - Vision Transformer

## Model Description

This model is a **specialized binary classifier** trained to detect images generated by **FLUX.1-dev** (Black Forest Labs). On a held-out test set it achieves **99.85% accuracy** with **zero false positives**.

### Key Features

- 🎯 **Specialist Detector**: Optimized specifically for FLUX.1-dev images
- 🚀 **Exceptional Accuracy**: 99.85% test accuracy
- 🛡️ **Zero False Positives**: No real images were misclassified as fake on the test set
- ⚡ **Fast Inference**: ~10ms per image on GPU
- 📊 **Well-Validated**: Separate train/val/test splits with no overlap

### Performance

```
Test Accuracy:  0.9985
Precision:      1.0000 (PERFECT!)
Recall:         0.9970
F1 Score:       0.9985
AUC-ROC:        1.0000 (PERFECT!)

False Positive Rate: 0.0000 (0.0%!)
False Negative Rate: 0.0030
```
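The headline numbers above are internally consistent and can be cross-checked from the error rates alone. A minimal sketch in plain Python, assuming a balanced test set (the per-class count of 1,000 is illustrative, not taken from this repository):

```python
# Cross-check the reported metrics from the false positive/negative rates.
# Assumes equal numbers of real and FLUX-generated test images;
# the split size below is illustrative only.
n_per_class = 1000

fpr = 0.0000  # real images flagged as fake
fnr = 0.0030  # fake images missed

fp = fpr * n_per_class          # 0 false positives
fn = fnr * n_per_class          # 3 false negatives
tp = n_per_class - fn           # fakes correctly caught
tn = n_per_class - fp           # reals correctly passed

precision = tp / (tp + fp)      # 1.0 whenever fp == 0
recall = tp / (tp + fn)         # 0.9970
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (2 * n_per_class)

print(f"precision={precision:.4f} recall={recall:.4f} "
      f"f1={f1:.4f} accuracy={accuracy:.4f}")
```

With these inputs the arithmetic reproduces the reported precision (1.0000), recall (0.9970), F1 (0.9985), and accuracy (0.9985).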

## Quick Start

```python
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Load model and processor
model = ViTForImageClassification.from_pretrained(
    "ash12321/flux-detector-vit"
)
processor = ViTImageProcessor.from_pretrained(
    "google/vit-base-patch16-224"
)

# Load image (convert to RGB in case of grayscale or RGBA inputs)
image = Image.open("test.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Get prediction
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=1)
    
    if probs[0][1] > 0.5:
        print(f"FLUX-Generated ({probs[0][1]:.2%} confident)")
    else:
        print(f"Real Image ({probs[0][0]:.2%} confident)")
```
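The 0.5 cutoff above can be raised if you want to suppress false alarms even further. A toy sketch of the decision rule in plain Python (the logit values are made up for illustration; in practice they come from `outputs.logits`, with index 0 = real and index 1 = fake as in the Quick Start code):

```python
import math

def softmax2(logit_real, logit_fake):
    """Softmax over the two class logits (index 0 = real, 1 = fake)."""
    m = max(logit_real, logit_fake)  # subtract max for numerical stability
    e0 = math.exp(logit_real - m)
    e1 = math.exp(logit_fake - m)
    return e0 / (e0 + e1), e1 / (e0 + e1)

def classify(logit_real, logit_fake, threshold=0.5):
    """Return (label, confidence) using a configurable fake-probability cutoff."""
    p_real, p_fake = softmax2(logit_real, logit_fake)
    if p_fake > threshold:
        return "FLUX-Generated", p_fake
    return "Real Image", p_real

# Made-up logits that lean mildly toward "fake"
label, conf = classify(0.2, 1.0, threshold=0.5)
print(label, f"{conf:.2%}")   # FLUX-Generated at ~69% confidence

# A stricter threshold flips the same logits to "Real Image"
label, conf = classify(0.2, 1.0, threshold=0.9)
print(label, f"{conf:.2%}")
```

Raising the threshold trades recall for an even lower false-positive rate, which may be preferable in settings where flagging a real image is costlier than missing a generated one.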

## Using the model.py Helper

```python
from model import detect_image

result = detect_image("test.jpg", model_path="ash12321/flux-detector-vit")
print(f"Is Fake: {result['is_fake']}")
print(f"Confidence: {result['confidence']:.2%}")
```

## Files in this Repository

- `pytorch_model.bin` - Model weights
- `config.json` - Model configuration
- `model.py` - Model architecture and helper functions
- `README.md` - This documentation
- `training_results.json` - Detailed training metrics
- `training_curves.png` - Training visualization
- `confusion_matrix.png` - Test set confusion matrix

## Citation

```bibtex
@misc{flux-detector-vit,
  author = {ash12321},
  title = {FLUX Detector - Vision Transformer},
  year = {2024},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/ash12321/flux-detector-vit}},
}
```

---

**License**: Apache 2.0  
**Created**: 2025-12-31