---
license: cc-by-nc-sa-4.0
task_categories:
- image-classification
pretty_name: Community Forensics (small)
configs:
- config_name: default
data_files:
- split: train
path:
- data/*.parquet
tags:
- image
size_categories:
- 100K<n<1M
language:
- en
---
# *Community Forensics: Using Thousands of Generators to Train Fake Image Detectors (CVPR 2025)*
[Paper](https://arxiv.org/abs/2411.04125) / [Project Page](https://jespark.net/projects/2024/community_forensics/) / [Code (GitHub)](https://github.com/JeongsooP/Community-Forensics)
This is a small version of the [Community Forensics dataset](https://huggingface.co/datasets/OwensLab/CommunityForensics). It contains roughly 11% of the generated images of the base dataset, paired with real data that carries a redistributable license. This dataset is intended for easier prototyping, as you do not have to download the corresponding real datasets separately.
We distribute this dataset under a `cc-by-nc-sa-4.0` license for non-commercial research purposes only.
The following table shows the performance (AP) difference between the classifier trained on the base dataset and this version of the dataset:
| Version | GAN | Lat. Diff. | Pix. Diff. | Commercial | Other | Mean |
| :------ | :---: | :--------: | :--------: | :--------: | :----: | :---: |
| Base | 0.995 | 0.996 | 0.947 | 0.985 | 0.998 | 0.984 |
| Small | 0.986 | 0.995 | 0.888 | 0.852 | 0.993 | 0.943 |
*Please check [Community Forensics-Eval](https://huggingface.co/datasets/OwensLab/CommunityForensics-Eval) for the recommended 'comprehensive' evaluation set.*
## Dataset Summary
- The Community Forensics (small) dataset is intended for developing and benchmarking forensics methods that detect or analyze AI-generated images. It contains 278K generated images collected from 4803 generator models, paired with 278K "real" images sourced from the [FFHQ](https://github.com/NVlabs/ffhq-dataset), [VISION](https://lesc.dinfo.unifi.it/VISION/), [COCO](https://cocodataset.org/), and [Landscapes HQ](https://github.com/universome/alis) datasets.
## Supported Tasks
- Image Classification: identify whether the given image is AI-generated. We mainly study this task in our paper, but other tasks may be possible with our dataset.
# Dataset Structure
## Data Instances
Our dataset is formatted in a Parquet data frame of the following structure:
```
{
    "image_name": "00000162.png",
    "format": "PNG",
    "resolution": "[512, 512]",
    "mode": "RGB",
    "image_data": "b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\...",
    "model_name": "stabilityai/stable-diffusion-2",
    "nsfw_flag": False,
    "prompt": "montreal grand prix 2018 von icrdesigns",
    "real_source": "LAION",
    "subset": "Systematic",
    "split": "train",
    "label": "1",
    "architecture": "LatDiff"
}
```
## Data Fields
`image_name`: Filename of an image. \
`format`: PIL image format. \
`resolution`: Image resolution. \
`mode`: PIL image mode (e.g., RGB). \
`image_data`: Image data in byte format. Can be read using Python's `io.BytesIO`. \
`model_name`: Name of the model used to sample this image. Formatted as `{author_name}/{model_name}` for the `Systematic` subset and `{model_name}` for the other subsets. \
`nsfw_flag`: NSFW flag determined using the [Stable Diffusion Safety Checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker). \
`prompt`: Input prompt (if one exists). \
`real_source`: Paired real dataset(s) that were used to source the prompts or to train the generators. \
`subset`: Denotes which subset the image belongs to (Systematic: Hugging Face models, Manual: manually downloaded models, Commercial: commercial models). \
`split`: Train/test split. \
`label`: Fake/Real label. (1: Fake, 0: Real) \
`architecture`: Architecture of the generative model that was used to generate this image. (Categories: `LatDiff`, `PixDiff`, `GAN`, `other`, `real`)
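To illustrate how these fields decode, the sketch below builds a synthetic record that mirrors the schema (the image bytes are generated locally for illustration; real records come from the Parquet files). Note that `image_data` opens with `io.BytesIO`, `resolution` is stored as a stringified list, and `label` is a string:

```python
import ast
import io

from PIL import Image

# Synthetic record mirroring the schema above (bytes generated locally,
# purely for illustration; real records come from the dataset itself).
buf = io.BytesIO()
Image.new("RGB", (512, 512)).save(buf, format="PNG")
record = {
    "image_name": "00000162.png",
    "format": "PNG",
    "resolution": "[512, 512]",
    "mode": "RGB",
    "image_data": buf.getvalue(),
    "label": "1",
}

# Decode the image bytes and parse the stringified fields.
img = Image.open(io.BytesIO(record["image_data"]))
width, height = ast.literal_eval(record["resolution"])
is_fake = record["label"] == "1"
print(img.size, (width, height), is_fake)  # (512, 512) (512, 512) True
```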
## Data splits
`train`: Default split containing the paired dataset (278K real and 278K generated images).
## Usage examples
Default train/eval settings:
```python
import datasets as ds
import PIL.Image as Image
import io
# default training set
commfor_small_train = ds.load_dataset("OwensLab/CommunityForensics-Small", split="train", cache_dir="~/.cache/huggingface/datasets")
# evaluation set
commfor_eval = ds.load_dataset("OwensLab/CommunityForensics-Eval", split="CompEval", cache_dir="~/.cache/huggingface/datasets")
# optionally shuffle the dataset
commfor_small_train = commfor_small_train.shuffle(seed=123, writer_batch_size=3000)
for i, data in enumerate(commfor_small_train):
    img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
    ## Your operations here ##
    # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)
```
*Note:*
- Downloading and indexing the data can take some time, but only the first time. **Downloading may use up to ~600GB** (~278GB of Parquet data + ~278GB of re-indexed `arrow` files).
- It is possible to randomly access data by passing an index (e.g., `commfor_small_train[10]`, `commfor_small_train[247]`).
- You can set `cache_dir` to some other directory if your home directory is limited. By default, it will download data to `~/.cache/huggingface/datasets`.
It is also possible to use streaming for some use cases (e.g., downloading only a certain subset or a small portion of data).
```python
import datasets as ds
import PIL.Image as Image
import io
# streaming the training set. Note that when streaming, you can only load specific splits
commfor_train_stream = ds.load_dataset("OwensLab/CommunityForensics-Small", split='train', streaming=True)
# streaming the evaluation set
commfor_eval_stream = ds.load_dataset("OwensLab/CommunityForensics-Eval", split='CompEval', streaming=True)
# optionally shuffle the streaming dataset
commfor_train_stream = commfor_train_stream.shuffle(seed=123, buffer_size=3000)
# usage example
for i, data in enumerate(commfor_train_stream):
    if i >= 10000: # use only the first 10000 samples
        break
    img, label = Image.open(io.BytesIO(data['image_data'])), data['label']
    ## Your operations here ##
    # e.g., img_torch = torchvision.transforms.functional.pil_to_tensor(img)
```
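Since the streaming iterator yields plain dicts, subsets (e.g., only GAN-generated images) can also be selected on the fly. The sketch below uses stand-in records, but the same generator works on `commfor_train_stream`; `datasets` also provides `IterableDataset.filter` for this purpose.

```python
# Stand-in records shaped like those yielded by the streaming dataset
# (only a few of the documented fields are shown).
records = [
    {"image_name": "a.png", "architecture": "GAN", "label": "1"},
    {"image_name": "b.png", "architecture": "LatDiff", "label": "1"},
    {"image_name": "c.png", "architecture": "real", "label": "0"},
]

def keep_architectures(stream, wanted):
    """Yield only records whose generator architecture is in `wanted`."""
    for rec in stream:
        if rec["architecture"] in wanted:
            yield rec

gan_only = list(keep_architectures(records, {"GAN"}))
print([r["image_name"] for r in gan_only])  # ['a.png']
```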
Please check [Hugging Face documentation](https://huggingface.co/docs/datasets/v3.5.0/loading#slice-splits) for more usage examples.
# Below is the dataset card of the base dataset with minor modifications.
# Dataset Creation
## Curation Rationale
This dataset was created to address the limited model diversity of existing datasets for generated image detection. While some existing datasets contain millions of images, they are typically sampled from a handful of generator models. We instead sample 2.7M images from 4803 generator models, approximately 34 times more generators than the most extensive previous dataset that we are aware of.
This is the "small" version of the dataset, which contains approximately 11% of the base dataset's generated images (278K); these are paired with 278K "real" images for easier prototyping.
## Collection Methodology
We collect generators in three different subgroups. (1) We systematically download and sample open source latent diffusion models from Hugging Face. (2) We manually sample open source generators with various architectures and training procedures. (3) We sample from both open and closed commercially available generators.
## Personal and Sensitive Information
The dataset does not contain any sensitive identifying information (i.e., does not contain data that reveals information such as racial or ethnic origin, sexual orientation, religious or political beliefs).
# Considerations of Using the Data
## Social Impact of Dataset
This dataset may be useful for researchers in developing and benchmarking forensics methods. Such methods may aid users in better understanding the given image. However, we believe the classifiers, at least the ones that we have trained or benchmarked, still show far too high error rates to be used directly in the wild, and can lead to unwanted consequences (e.g., falsely accusing an author of creating fake images or allowing generated content to be certified as real).
## Discussion of Biases
The dataset has been primarily sampled from LAION captions. This may introduce biases that could be present in web-scale data (e.g., favoring human photos instead of other categories of photos). In addition, a vast majority of the generators we collect are derivatives of Stable Diffusion, which may introduce bias towards detecting certain types of generators.
## Other Known Limitations
The generative models are sourced from the community and may contain inappropriate content. While in many contexts it is important to detect such images, these generated images may require further scrutiny before being used in other downstream applications.
# Additional Information
## Acknowledgement
We thank the creators of the many open source models that we used to collect the Community Forensics dataset. We thank Chenhao Zheng, Cameron Johnson, Matthias Kirchner, Daniel Geng, Ziyang Chen, Ayush Shrivastava, Yiming Dou, Chao Feng, Xuanchen Lu, Zihao Wei, Zixuan Pan, Inbum Park, Rohit Banerjee, and Ang Cao for the valuable discussions and feedback. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0123.
## Licensing Information
We release the dataset with a `cc-by-nc-sa-4.0` license for research purposes only. In addition, we note that each image in this dataset has been generated by the models with their respective licenses. We therefore provide metadata of all models present in our dataset with their license information. A vast majority of the generators use the [CreativeML OpenRAIL-M license](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE). Please refer to the [metadata](https://huggingface.co/datasets/OwensLab/CommunityForensics/tree/main/data/metadata) for detailed licensing information for your specific application.
## Citation Information
```
@InProceedings{Park_2025_CVPR,
author = {Park, Jeongsoo and Owens, Andrew},
title = {Community Forensics: Using Thousands of Generators to Train Fake Image Detectors},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {8245-8257}
}
```