jove1661 and pabberpe committed · Commit 57407f4
Duplicate from HPAI-BSC/SuSy

Co-authored-by: Pablo Bernabeu <pabberpe@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,370 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - HPAI-BSC/SuSy-Dataset
+ - HuggingFaceM4/COCO
+ - ehristoforu/dalle-3-images
+ - poloclub/diffusiondb
+ - ehristoforu/midjourney-images
+ - nateraw/midjourney-texttoimage
+ - duchaiten/duchaiten-realistic-sdxl
+ tags:
+ - vision
+ - image-classification
+ - synthetic image detection
+ pipeline_tag: image-classification
+ metrics:
+ - recall
+ widget:
+ - src: midjourney-images-example-patch0.jpg
+   output:
+   - label: authentic
+     score: 0.000049
+   - label: dalle-3-images
+     score: 0.004659
+   - label: diffusiondb
+     score: 0.00011
+   - label: midjourney-images
+     score: 0.994384
+   - label: midjourney_tti
+     score: 0.000569
+   - label: realisticSDXL
+     score: 0.000229
+ ---
+
+ <div align="center">
+ <h2>SuSy - Synthetic Image Detector</h2>
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/NobqlpFbFkTyBi1LsT9JE.png" alt="image" width="300" height="auto">
+ </div>
+ <hr>
+ <div align="center" style="line-height: 1;">
+ <a href="https://arxiv.org/abs/2409.14128" target="_blank" style="margin: 2px;">
+ <img alt="Paper" src="https://img.shields.io/badge/arXiv-2409.14128-b31b1b.svg" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://github.com/HPAI-BSC/SuSy" target="_blank" style="margin: 2px;">
+ <img alt="Repository" src="https://img.shields.io/badge/Repository-GitHub-181717?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/datasets/HPAI-BSC/SuSy-Dataset" target="_blank" style="margin: 2px;">
+ <img alt="Dataset" src="https://img.shields.io/badge/Dataset-Hugging%20Face-FFD21E?logo=huggingface" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+ <a href="https://huggingface.co/spaces/HPAI-BSC/SuSyGame" target="_blank" style="margin: 2px;">
+ <img alt="SuSy Challenge" src="https://img.shields.io/badge/SuSy%20Challenge-Hugging%20Face-FFD21E?logo=huggingface" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/spaces/HPAI-BSC/SuSy" target="_blank" style="margin: 2px;">
+ <img alt="Interactive Demo" src="https://img.shields.io/badge/Interactive%20Demo-Hugging%20Face-FFD21E?logo=huggingface" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://colab.research.google.com/drive/15nxo0FVd-snOnj9TcX737fFH0j3SmS05" target="_blank" style="margin: 2px;">
+ <img alt="Code Demo" src="https://img.shields.io/badge/Code%20Demo-Colab-F9AB00?logo=googlecolab&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ <div align="center" style="line-height: 1;">
+ <a href="https://hpai.bsc.es/" target="_blank" style="margin: 2px;">
+ <img alt="HPAI Website" src="https://img.shields.io/badge/HPAI-Website-blue" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://www.linkedin.com/company/hpai" target="_blank" style="margin: 2px;">
+ <img alt="LinkedIn" src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://bsky.app/profile/hpai.bsky.social" target="_blank" style="margin: 2px;">
+ <img alt="Bluesky" src="https://img.shields.io/badge/Bluesky-0285FF?logo=bluesky&logoColor=fff" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+
+ **Model Results**
+
+ | Dataset             | Type      | Model                     | Year | Recall (%) |
+ |:-------------------:|:---------:|:-------------------------:|:----:|:----------:|
+ | Flickr30k           | Authentic | -                         | 2014 | 90.53 |
+ | Google Landmarks v2 | Authentic | -                         | 2020 | 64.54 |
+ | Synthbuster         | Synthetic | Glide                     | 2021 | 53.50 |
+ | Synthbuster         | Synthetic | Stable Diffusion 1.3      | 2022 | 87.00 |
+ | Synthbuster         | Synthetic | Stable Diffusion 1.4      | 2022 | 87.10 |
+ | Synthbuster         | Synthetic | Stable Diffusion 2        | 2022 | 68.40 |
+ | Synthbuster         | Synthetic | DALL-E 2                  | 2022 | 20.70 |
+ | Synthbuster         | Synthetic | MidJourney V5             | 2023 | 73.10 |
+ | Synthbuster         | Synthetic | Stable Diffusion XL       | 2023 | 79.50 |
+ | Synthbuster         | Synthetic | Firefly                   | 2023 | 40.90 |
+ | Synthbuster         | Synthetic | DALL-E 3                  | 2023 | 88.60 |
+ | Authors             | Synthetic | Stable Diffusion 3 Medium | 2024 | 93.23 |
+ | Authors             | Synthetic | Flux.1-dev                | 2024 | 96.46 |
+ | In-the-wild         | Synthetic | Mixed/Unknown             | 2024 | 89.90 |
+ | In-the-wild         | Authentic | -                         | 2024 | 33.06 |
+
+ ## Model Details
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ SuSy is a Spatial-Based Synthetic Image Detection and Recognition Model, designed and trained to detect synthetic images and attribute them to a generative model (i.e., two Stable Diffusion models, two Midjourney versions and DALL·E 3). The model takes image patches of size 224x224 as input and outputs the probability of the image being authentic or having been created by each of the aforementioned generative models.
+
+ <img src="model_architecture.png" alt="image" width="900" height="auto">
+
+ The model is based on a CNN architecture and is trained using a supervised learning approach. Its design builds on [previous work](https://upcommons.upc.edu/handle/2117/395959), originally intended for video super-resolution detection and adapted here to the tasks of synthetic image detection and recognition. The architecture consists of two modules, a feature extractor and a multi-layer perceptron (MLP), and is quite lightweight: SuSy has a total of 12.7M parameters, with the feature extractor accounting for 12.5M and the MLP for the remaining 197K.
+
+ The CNN feature extractor consists of five stages following a ResNet-18 scheme. The output of each block is used as input for various bottleneck modules arranged in a staircase pattern. Each bottleneck module consists of three 2D convolutional layers. Each level of bottlenecks takes input at a later stage than the previous level, and each bottleneck module takes input from the current stage and, except for the first bottleneck of each level, from the previous bottleneck module.
+
+ The outputs of each level of bottlenecks and of stage 4 are passed through a 2D adaptive average pooling layer and then concatenated to form the feature map fed to the MLP. The MLP consists of three fully connected layers with 512, 256 and 256 units, respectively. Between each pair of layers, a dropout layer (rate 0.5) prevents overfitting. The output of the MLP has 6 units, corresponding to the number of classes in the dataset (5 synthetic models and 1 real image class).
+
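+ For concreteness, here is a minimal PyTorch sketch of the classifier head as described above. The input width `feature_dim` and the ReLU activations are assumptions not stated in this card; the published weights in `SuSy.pt` are the authoritative architecture.
+
+ ```python
+ import torch.nn as nn
+
+ def make_mlp_head(feature_dim: int, num_classes: int = 6) -> nn.Sequential:
+     """Sketch of the MLP head: 512/256/256 hidden units, dropout 0.5
+     between layers, 6 output classes."""
+     return nn.Sequential(
+         nn.Linear(feature_dim, 512), nn.ReLU(), nn.Dropout(0.5),
+         nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.5),
+         nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.5),
+         nn.Linear(256, num_classes),  # 5 generative models + 1 authentic class
+     )
+ ```
+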
+ The model can be used as a detector by either taking the class with the highest probability as the output, or by summing the probabilities of the synthetic classes and comparing the total to the probability of the real class. The model can also be used as a recognition model by taking the class with the highest probability as the output.
+
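+ Both decision rules are easy to express in code. A minimal sketch, assuming `probs` is the six-way probability vector for a single patch (class order as in `test_patch.py`):
+
+ ```python
+ import torch
+
+ CLASSES = ['authentic', 'dalle-3-images', 'diffusiondb',
+            'midjourney-images', 'midjourney_tti', 'realisticSDXL']
+
+ def detect_argmax(probs: torch.Tensor) -> bool:
+     # Rule 1: synthetic if the most likely class is not the authentic one
+     return CLASSES[probs.argmax().item()] != 'authentic'
+
+ def detect_sum(probs: torch.Tensor) -> bool:
+     # Rule 2: synthetic if the summed synthetic mass exceeds the authentic probability
+     return probs[1:].sum().item() > probs[0].item()
+ ```
+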
+ ### Model Description
+
+ - **Developed by:** [Pablo Bernabeu Perez](https://huggingface.co/pabberpe), [Enrique Lopez Cuena](https://huggingface.co/Cuena) and [Dario Garcia Gasulla](https://huggingface.co/dariog) from [HPAI](https://hpai.bsc.es/)
+ - **Model type:** Spatial-Based Synthetic Image Detection and Recognition Convolutional Neural Network
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ ## Uses
+
+ This model can be used to detect synthetic images in a scalable manner, thanks to its small size. Since it operates on 224x224 patches, a moving window should be implemented at inference time when the model is applied to larger inputs (the most likely scenario, and the one it was trained under); `test_image.py` below shows a runnable example. This also enables localization of synthetic content within a high-resolution input.
+
+ Any individual or organization seeking support in identifying synthetic content can use this model. However, it should not be used as the only source of evidence, particularly when applied to inputs produced by generative models not included in its training (see details in Training Data below).
+
+ ### Intended Uses
+
+ Intended uses include the following:
+
+ * Detection of authentic and synthetic images
+ * Attribution of synthetic images to their generative model (if included in the training data)
+ * Localization of image patches likely to be synthetic or tampered with
+
+ ### Out-of-Scope Uses
+
+ Out-of-scope uses include the following:
+
+ * Detection of manually edited images using traditional tools
+ * Detection of images automatically downscaled and/or upscaled; these are considered non-synthetic samples in the model training phase
+ * Detection of inpainted images
+ * Detection of synthetic vs. manually crafted illustrations; the model is trained mainly on photorealistic samples
+ * Attribution of synthetic images to their generative model if that model was not included in the training data; although some generalization capabilities are expected, reliability in this case cannot be estimated
+
+ ### Forbidden Uses
+
+ This model may not be used to train generative models or tools aimed at purposefully deceiving the model or creating misleading content.
+
+ ## Bias, Risks, and Limitations
+
+ The model may be biased in the following ways:
+
+ * The model may be biased towards its training data, which may not be representative of all authentic and synthetic images, particularly for the class of real-world images, which were obtained from a single source.
+ * The model may be biased towards the generative models included in the training data, which may not be representative of all possible generative models, particularly newer ones, since all models included were released between 2022 and 2023.
+ * The model may be biased towards certain types of images or content. While it was trained on roughly 18K synthetic images, no assessment was made of which domains and profiles are represented in them.
+
+ The model has the following technical limitations:
+
+ * The performance of the model may be influenced by transformations and edits applied to the images. While the model was trained with some alterations (blur, brightness, compression and gamma), other alterations applicable to images could reduce its accuracy.
+ * The performance of the model may vary depending on the type and source of the images.
+ * The model will not be able to attribute synthetic images to their generative model if that model was not included in the training data.
+ * The model is trained on patches with high gray-level contrast. For images composed entirely of low-contrast regions, the model may not work as expected.
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ import torch
+ from PIL import Image
+ from torchvision import transforms
+
+ # Load the TorchScript model
+ model = torch.jit.load("SuSy.pt")
+
+ # Load a 224x224 patch
+ patch = Image.open("midjourney-images-example-patch0.png")
+
+ # Convert the patch to a [0, 1] float tensor with a batch dimension
+ patch = transforms.PILToTensor()(patch).unsqueeze(0) / 255.
+
+ # Predict the class probabilities for the patch
+ model.eval()
+ with torch.no_grad():
+     preds = model(patch)
+
+ print(preds)
+ ```
+
+ See `test_image.py` and `test_patch.py` for further examples of how to use the model.
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ The dataset is available at: https://huggingface.co/datasets/HPAI-BSC/SuSy-Dataset
+
+ | Dataset           | Year | Train | Validation | Test  | Total |
+ |:-----------------:|:----:|:-----:|:----------:|:-----:|:-----:|
+ | COCO              | 2017 | 2,967 | 1,234      | 1,234 | 5,435 |
+ | dalle-3-images    | 2023 | 987   | 330        | 330   | 1,647 |
+ | diffusiondb       | 2022 | 2,967 | 1,234      | 1,234 | 5,435 |
+ | realisticSDXL     | 2023 | 2,967 | 1,234      | 1,234 | 5,435 |
+ | midjourney-tti    | 2022 | 2,718 | 906        | 906   | 4,530 |
+ | midjourney-images | 2023 | 1,845 | 617        | 617   | 3,079 |
+
+ #### Authentic Images
+
+ - [COCO](https://cocodataset.org/)
+
+ We use a random subset of the COCO dataset, containing 5,435 images, for the authentic images in our training dataset. The partitions respect the original COCO splits, with 2,967 images in the training partition and 1,234 in each of the validation and test partitions.
+
+ #### Synthetic Images
+
+ - [dalle-3-images](https://huggingface.co/datasets/ehristoforu/dalle-3-images)
+ - [diffusiondb](https://poloclub.github.io/diffusiondb/)
+ - [midjourney-images](https://huggingface.co/datasets/ehristoforu/midjourney-images)
+ - [midjourney-texttoimage](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage)
+ - [realistic-SDXL](https://huggingface.co/datasets/DucHaiten/DucHaiten-realistic-SDXL)
+
+ For the diffusiondb dataset, we use a random subset of 5,435 images, with 2,967 in the training partition and 1,234 in each of the validation and test partitions. We use only the realistic images from the realisticSDXL dataset, with the realistic-2.2 split in our training data and the realistic-1 split in our test partition. The remaining datasets are used in their entirety, with 60% of the images in the training partition, 20% in the validation partition and 20% in the test partition.
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ The training code is available at: https://github.com/HPAI-BSC/SuSy
+
+ #### Preprocessing
+
+ **Patch Extraction**
+
+ To prepare the training data, we extract 240x240 patches from the images, minimizing the overlap between them. We then select the most informative patches by calculating the gray-level co-occurrence matrix (GLCM) for each patch. Given the GLCM, we calculate its contrast and select the five patches with the highest contrast. These patches are then passed to the model in their original RGB format and cropped to 224x224. A contrast computation of this kind is sketched below and also used in `test_image.py`.
+
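+ A minimal sketch of the GLCM contrast score; the distance and angle parameters mirror those in `test_image.py` and may differ from the actual training code:
+
+ ```python
+ import numpy as np
+ from skimage.feature import graycomatrix, graycoprops
+
+ def patch_contrast(gray_patch: np.ndarray) -> float:
+     """GLCM contrast of a uint8 grayscale patch (distance 5, angle 0)."""
+     glcm = graycomatrix(gray_patch, distances=[5], angles=[0], levels=256,
+                         symmetric=True, normed=True)
+     return float(graycoprops(glcm, "contrast")[0, 0])
+ ```
+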
+ **Data Augmentation**
+
+ | Technique                | Probability | Other Parameters                          |
+ |:------------------------:|:-----------:|:-----------------------------------------:|
+ | HorizontalFlip           | 0.50        | -                                         |
+ | RandomBrightnessContrast | 0.20        | brightness\_limit=0.2 contrast\_limit=0.2 |
+ | RandomGamma              | 0.20        | gamma\_limit=(80, 120)                    |
+ | AdvancedBlur             | 0.20        | -                                         |
+ | GaussianBlur             | 0.20        | -                                         |
+ | JPEGCompression          | 0.20        | quality\_lower=75 quality\_upper=100      |
+
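+ These transform names and parameters follow the albumentations API, so the pipeline can plausibly be reconstructed as below. This is a sketch under that assumption, with `A.ImageCompression` standing in for the JPEGCompression entry; exact transform names vary across albumentations versions:
+
+ ```python
+ import albumentations as A
+
+ # Hypothetical reconstruction of the augmentation pipeline from the table above
+ augment = A.Compose([
+     A.HorizontalFlip(p=0.5),
+     A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.2),
+     A.RandomGamma(gamma_limit=(80, 120), p=0.2),
+     A.AdvancedBlur(p=0.2),
+     A.GaussianBlur(p=0.2),
+     A.ImageCompression(quality_lower=75, quality_upper=100, p=0.2),
+ ])
+
+ # augmented = augment(image=patch)["image"]  # patch: HxWxC uint8 ndarray
+ ```
+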
+ #### Training Hyperparameters
+
+ - Loss Function: Cross-Entropy Loss
+ - Optimizer: Adam
+ - Learning Rate: 0.0001
+ - Weight Decay: 0
+ - Scheduler: ReduceLROnPlateau
+   - Factor: 0.1
+   - Patience: 4
+ - Batch Size: 128
+ - Epochs: 10
+ - Early Stopping: 2
+
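+ As a sketch, these settings map directly onto standard PyTorch objects (the `model` and the validation-loss plumbing are assumed to exist):
+
+ ```python
+ import torch
+
+ criterion = torch.nn.CrossEntropyLoss()
+ optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
+ scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=4)
+
+ # After each epoch: step the scheduler on the validation loss, and stop
+ # early if the loss has not improved for 2 consecutive epochs.
+ # scheduler.step(val_loss)
+ ```
+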
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ The evaluation code is available at: https://github.com/HPAI-BSC/SuSy
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ - Test split of our training dataset
+ - Synthetic images generated with [Stable Diffusion 3 Medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium) and [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) using prompts from [Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts)
+ - Synthetic images in the wild: a dataset containing 210 authentic and synthetic images obtained from social media platforms
+ - [Flickr 30k Dataset](https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset)
+ - [Google Landmarks v2](https://github.com/cvdfoundation/google-landmark)
+ - [Synthbuster](https://zenodo.org/records/10066460)
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ - Recall: the proportion of correctly classified positive instances out of all actual positive instances in a dataset, i.e. TP / (TP + FN).
+
+ ### Results
+
+ <!-- This section provides the results of the evaluation. -->
+
+ #### Authentic Sources
+
+ | Dataset             | Year | Recall (%) |
+ |:-------------------:|:----:|:----------:|
+ | Flickr30k           | 2014 | 90.53 |
+ | Google Landmarks v2 | 2020 | 64.54 |
+ | In-the-wild         | 2024 | 33.06 |
+
+ #### Synthetic Sources
+
+ | Dataset     | Model                     | Year | Recall (%) |
+ |:-----------:|:-------------------------:|:----:|:----------:|
+ | Synthbuster | Glide                     | 2021 | 53.50 |
+ | Synthbuster | Stable Diffusion 1.3      | 2022 | 87.00 |
+ | Synthbuster | Stable Diffusion 1.4      | 2022 | 87.10 |
+ | Synthbuster | Stable Diffusion 2        | 2022 | 68.40 |
+ | Synthbuster | DALL-E 2                  | 2022 | 20.70 |
+ | Synthbuster | MidJourney V5             | 2023 | 73.10 |
+ | Synthbuster | Stable Diffusion XL       | 2023 | 79.50 |
+ | Synthbuster | Firefly                   | 2023 | 40.90 |
+ | Synthbuster | DALL-E 3                  | 2023 | 88.60 |
+ | Authors     | Stable Diffusion 3 Medium | 2024 | 93.23 |
+ | Authors     | Flux.1-dev                | 2024 | 96.46 |
+ | In-the-wild | Mixed/Unknown             | 2024 | 89.90 |
+
+ ### Summary
+
+ The results for authentic image datasets reveal varying detection performance across sources. Recall ranges from 33.06% on the In-the-wild dataset to 90.53% on Flickr30k, with Google Landmarks v2 at an intermediate 64.54%. These results indicate a significant disparity in the detectability of authentic images across datasets, with the In-the-wild dataset presenting the most challenging case for SuSy.
+
+ The results for synthetic image datasets show varying detection performance across image generation models. Recall ranges from 20.70% for DALL-E 2 (2022) to 96.46% for Flux.1-dev (2024). Stable Diffusion models are generally highly detectable, with versions 1.3 and 1.4 (2022) showing recall above 87%. More recent models tested by the authors, such as Stable Diffusion 3 Medium (2024) and Flux.1-dev (2024), are even more detectable, with recall above 93%. The in-the-wild mixed/unknown synthetic dataset from 2024 shows a high recall of 89.90%, indicating effective detection across various unknown generation methods. These results suggest an overall trend of improving detection capabilities for synthetic images, with newer generation models generally being more easily detectable.
+
+ Note that these metrics were computed using only the center patch of each image, rather than the patch-voting mechanism described previously. This strategy allows a fairer comparison with other state-of-the-art methods, although it hinders the performance of SuSy.
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ - **Hardware Type:** H100
+ - **Hours used:** 16
+ - **Hardware Provider:** Barcelona Supercomputing Center (BSC)
+ - **Compute Region:** Spain
+ - **Carbon Emitted:** 0.63 kg CO2eq
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ ```bibtex
+ @misc{bernabeu2024susy,
+       title={Present and Future Generalization of Synthetic Image Detectors},
+       author={Pablo Bernabeu-Perez and Enrique Lopez-Cuena and Dario Garcia-Gasulla},
+       year={2024},
+       eprint={2409.14128},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2409.14128},
+ }
+ ```
+
+ ```bibtex
+ @thesis{bernabeu2024aidetection,
+         title={Detecting and Attributing AI-Generated Images with Machine Learning},
+         author={Bernabeu Perez, Pablo},
+         school={UPC, Facultat d'Informàtica de Barcelona, Departament de Ciències de la Computació},
+         year={2024},
+         month={06}
+ }
+ ```
+
+ ## Model Card Authors
+
+ [Pablo Bernabeu Perez](https://huggingface.co/pabberpe) and [Dario Garcia Gasulla](https://huggingface.co/dariog)
+
+ ## Model Card Contact
+
+ For further inquiries, please contact [HPAI](mailto:hpai@bsc.es).
SuSy.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa10fae300ee2742c7a373b6c3332c2595b461954b8f5616d2d382ef2751020e
+ size 50810392
config.json ADDED
@@ -0,0 +1,3 @@
+ {
+     "description": "This JSON file does not contain any functional data. Its presence allows Hugging Face to monitor downloads for this repository."
+ }
midjourney-images-example-patch0.png ADDED
midjourney-images-example.jpg ADDED
model_architecture.png ADDED
susy_logo.jpeg ADDED
test_image.py ADDED
@@ -0,0 +1,57 @@
+ import numpy as np
+ import pandas as pd
+ import torch
+ from PIL import Image
+ from skimage.feature import graycomatrix, graycoprops
+ from torchvision import transforms
+
+ # Load the model
+ model = torch.jit.load("SuSy.pt")
+
+ # Load the image
+ image = Image.open("midjourney-images-example.jpg")
+
+ # Set parameters
+ top_k_patches = 5
+ patch_size = 224
+
+ # Get the image dimensions
+ width, height = image.size
+
+ # Calculate the number of patches
+ num_patches_x = width // patch_size
+ num_patches_y = height // patch_size
+
+ # Divide the image into non-overlapping patches
+ patches = np.zeros((num_patches_x * num_patches_y, patch_size, patch_size, 3), dtype=np.uint8)
+ for i in range(num_patches_x):
+     for j in range(num_patches_y):
+         x = i * patch_size
+         y = j * patch_size
+         patch = image.crop((x, y, x + patch_size, y + patch_size))
+         patches[i * num_patches_y + j] = np.array(patch)
+
+ # Score each patch by its GLCM contrast to keep the most informative ones (optional)
+ to_grayscale = transforms.Compose([transforms.PILToTensor(), transforms.Grayscale()])
+ contrast_scores = []
+ for patch in patches:
+     grayscale_patch = to_grayscale(Image.fromarray(patch)).squeeze(0).numpy()
+     glcm = graycomatrix(grayscale_patch, [5], [0], 256, symmetric=True, normed=True)
+     contrast_scores.append(graycoprops(glcm, "contrast")[0, 0])
+
+ # Sort patch indices by their contrast score, highest first
+ sorted_indices = np.argsort(contrast_scores)[::-1]
+
+ # Extract the top k patches and convert them to a [0, 1] float tensor
+ top_patches = patches[sorted_indices[:top_k_patches]]
+ top_patches = torch.from_numpy(np.transpose(top_patches, (0, 3, 1, 2))) / 255.0
+
+ # Predict the class probabilities for each patch
+ model.eval()
+ with torch.no_grad():
+     preds = model(top_patches)
+
+ # Print the per-patch results
+ classes = ['authentic', 'dalle-3-images', 'diffusiondb', 'midjourney-images', 'midjourney_tti', 'realisticSDXL']
+ result = pd.DataFrame(preds.numpy(), columns=classes)
+ print(result)
test_patch.py ADDED
@@ -0,0 +1,23 @@
+ import pandas as pd
+ import torch
+ from PIL import Image
+ from torchvision import transforms
+
+ # Load the model
+ model = torch.jit.load("SuSy.pt")
+
+ # Load a single 224x224 patch
+ patch = Image.open("midjourney-images-example-patch0.png")
+
+ # Convert the patch to a [0, 1] float tensor with a batch dimension
+ patch = transforms.PILToTensor()(patch).unsqueeze(0) / 255.
+
+ # Predict the class probabilities for the patch
+ model.eval()
+ with torch.no_grad():
+     preds = model(patch)
+
+ # Print the results
+ classes = ['authentic', 'dalle-3-images', 'diffusiondb', 'midjourney-images', 'midjourney_tti', 'realisticSDXL']
+ result = pd.DataFrame(preds.numpy(), columns=classes)
+ print(result)