---
license: cc-by-nc-sa-4.0
task_categories:
  - image-classification
tags:
  - ai-generated-image-detection
  - deepfake-detection
  - generative-models
  - benchmark
---

Is Artificial Intelligence Generated Image Detection a Solved Problem?

Ziqiang Li¹, Jiazhen Yan¹, Ziwen He¹, Kai Zeng², Weiwei Jiang¹, Lizhi Xiong¹, Zhangjie Fu¹‡

‡Corresponding author

¹Nanjing University of Information Science and Technology  ²University of Siena

Paper | GitHub Repository

AIGIBench is a comprehensive benchmark designed to rigorously evaluate the robustness and generalization capabilities of state-of-the-art Artificial Intelligence Generated Image (AIGI) detectors. It simulates real-world challenges through four core tasks: multi-source generalization, robustness to image degradation, sensitivity to data augmentation, and impact of test-time pre-processing.

This repository hosts the official dataset of AIGIBench.

The AIGIBench dataset contains two training settings and 25 test subsets. This dataset has the following advantages:

  • Comprehensive generation types: including GAN-based Noise-to-Image Generation, Diffusion for Text-to-Image Generation, GANs for Deepfake, Diffusion for Personalized Generation, and Open-source Platforms.
  • State-of-the-art generators: MidjourneyV6, Stable Diffusion 3, Imagen, DALLE3, InstantID, FaceSwap, StyleGAN-XL, and more.
  • Completely unknown generation methods: images crawled from online communities and social media form the CommunityAI & SocialRF subsets, making detection more challenging.


If this project helps you, please fork, watch, and star this repository.

📚Dataset

Each folder contains compressed files. After unzipping them, the files under the data root directory are organized as follows.

Train

AIGIBench introduces two training dataset settings: (i) Setting-I: training on 144K images generated by ProGAN across four object categories (car, cat, chair, and horse); (ii) Setting-II: training on 144K images generated by both SD-v1.4 and ProGAN, covering the same four object categories. The ProGAN data comes from ForenSynths, and the SD-v1.4 data comes from GenImage. To keep the training data balanced, we randomly sample SD-v1.4 training images from GenImage to match the number of ProGAN images, then merge the two sets. The file directory is as follows:

├── train
│   ├── car
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── cat
│   │   ├── ...
│   ├── chair
│   │   ├── ...
│   ├── horse
│   │   ├── ...
│   ├── sdv1.4
│   │   ├── 0_real
│   │   ├── 1_fake
├── val
│   ├── ...
│   │   ├── 0_real
│   │   ├── 1_fake
│   │   ...
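The `0_real` / `1_fake` folder names encode the binary labels directly. As a minimal sketch (plain Python, no training framework assumed; `collect_samples` is an illustrative helper, not part of the released code), a split directory can be indexed like this:

```python
from pathlib import Path

def collect_samples(split_root):
    """Walk a split directory (e.g. train/) and return (path, label) pairs.

    Label 0 = real image (folder '0_real'), label 1 = fake (folder '1_fake').
    """
    samples = []
    for p in sorted(Path(split_root).rglob("*")):
        if p.is_file() and p.parent.name in ("0_real", "1_fake"):
            label = int(p.parent.name[0])  # leading digit encodes the class
            samples.append((str(p), label))
    return samples
```

The same helper works for `val/` and any category subfolder, since only the immediate parent folder name is inspected.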

Test

AIGIBench comprehensively tests detector performance with a test dataset built from five perspectives: GAN-based Noise-to-Image Generation, Diffusion for Text-to-Image Generation, GANs for Deepfake, Diffusion for Personalized Generation, and Open-source Platforms. The file directory is as follows:

├── test
│   ├── ProGAN
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── R3GAN
│   │   ├── ...
│   │   ...
│   ├── BlendFace
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── InSwap
│   │   ├── ...
│   │   ...
│   ├── FLUX1-dev
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── Midjourney-V6
│   │   ├── ...
│   │   ...
│   ├── BLIP
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── Infinite-ID
│   │   ├── ...
│   │   ...
│   ├── CommunityAI
│   │   ├── 0_real
│   │   ├── 1_fake
│   ├── SocialRF
│   │   ├── ...

🔍Detection Methods

We use the official code for all detection methods, with unified modifications to the input and output interfaces. The code we use for training in Setting-II is publicly available above, and the corresponding pre-trained checkpoints are publicly available on Hugging Face. If you need the code from the original papers, the corresponding references are:

  • ResNet-50: Deep Residual Learning for Image Recognition
  • CNNDetection: CNN-generated images are surprisingly easy to spot...for now
  • GramNet: Global Texture Enhancement for Fake Face Detection in the Wild
  • LGrad: Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection
  • CLIPDetection: Towards Universal Fake Image Detectors that Generalize Across Generative Models
  • FreqNet: Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning
  • NPR: Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection
  • DFFreq: Dual Frequency Branch Framework with Reconstructed Sliding Windows Attention for AI-Generated Image Detection
  • LaDeDa: Real-Time Deepfake Detection in the Real-World
  • AIDE: A Sanity Check for AI-generated Image Detection
  • SAFE: Improving Synthetic Image Detection Towards Generalization: An Image Transformation Perspective

Citation

@inproceedings{li2025artificial,
  title={Is Artificial Intelligence Generated Image Detection a Solved Problem?},
  author={Li, Ziqiang and Yan, Jiazhen and He, Ziwen and Zeng, Kai and Jiang, Weiwei and Xiong, Lizhi and Fu, Zhangjie},
  booktitle={Advances in Neural Information Processing Systems},
  year={2025}
}

Contact

If you have any questions about this project, please feel free to contact 247918horizon@gmail.com.