---
license: cc-by-4.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: generator
    dtype: string
  - name: uid
    dtype: string
  - name: labels
    list:
    - name: label
      dtype: string
    - name: points
      list:
        list: float64
  - name: original_prompt
    dtype: string
  - name: positive_prompt
    dtype: string
  - name: negative_prompt
    dtype: string
  - name: guidance_scale
    dtype: float64
  - name: num_inference_steps
    dtype: int64
  - name: scheduler
    dtype: string
  - name: seed
    dtype: int64
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: image_format
    dtype: string
  - name: jpeg_quality
    dtype: int64
  - name: chroma_subsampling
    dtype: string
  splits:
  - name: labeled_train
    num_bytes: 1229331054
    num_examples: 918
  - name: labeled_test
    num_bytes: 3492466407
    num_examples: 2419
  - name: unlabeled_train
    num_bytes: 34599400559
    num_examples: 24013
  - name: unlabeled_test
    num_bytes: 35214906257
    num_examples: 24638
  download_size: 74508314134
  dataset_size: 74536104277
configs:
- config_name: default
  data_files:
  - split: labeled_train
    path: data/labeled_train-*
  - split: labeled_test
    path: data/labeled_test-*
  - split: unlabeled_train
    path: data/unlabeled_train-*
  - split: unlabeled_test
    path: data/unlabeled_test-*
pretty_name: X-AIGD
---
# X-AIGD
<p align="center">
<a href="https://arxiv.org/abs/2601.19430"><img src="https://img.shields.io/badge/arXiv-2601.19430-b31b1b.svg" alt="arXiv"></a>
<a href="https://github.com/Coxy7/X-AIGD"><img src="https://img.shields.io/badge/GitHub-X--AIGD-blue?logo=github" alt="GitHub"></a>
</p>
X-AIGD is a fine-grained benchmark designed for **eXplainable AI-Generated image Detection**. It provides pixel-level human annotations of perceptual artifacts in AI-generated images, spanning low-level distortions, high-level semantics, and cognitive-level counterfactuals, aiming to advance robust and explainable AI-generated image detection methods.
For more details, please refer to our paper: [Unveiling Perceptual Artifacts: A Fine-Grained Benchmark for Interpretable AI-Generated Image Detection](https://arxiv.org/abs/2601.19430).
## 🎨 Artifact Taxonomy
We define a comprehensive artifact taxonomy comprising 3 levels and 7 specific categories to capture the diverse range of perceptual artifacts in AI-generated images.
<p align="center">
<img src="taxonomy.jpg" width="800">
</p>
* **Low-level Distortions:** `low-level-edge_shape`, `low-level-texture`, `low-level-color`, `low-level-symbol`.
* **High-level Semantics:** `high-level-semantics`.
* **Cognitive-level Counterfactuals:** `cognitive-level-commonsense`, `cognitive-level-physics`.
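Since each category string encodes its taxonomy level as a prefix, the level can be recovered programmatically. A minimal sketch (the mapping and helper below are ours, not part of the dataset):

```python
# Map each of the 7 category labels listed above to its taxonomy level.
ARTIFACT_LEVELS = {
    "low-level-edge_shape": "low",
    "low-level-texture": "low",
    "low-level-color": "low",
    "low-level-symbol": "low",
    "high-level-semantics": "high",
    "cognitive-level-commonsense": "cognitive",
    "cognitive-level-physics": "cognitive",
}

def level_of(label: str) -> str:
    """Return the taxonomy level ('low', 'high', or 'cognitive') for a label,
    using the '<level>-level-<category>' naming convention."""
    return label.split("-level-")[0]

# The prefix convention agrees with the explicit mapping for every category.
assert all(level_of(k) == v for k, v in ARTIFACT_LEVELS.items())
```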
## 🚀 Dataset Contents
This repository currently hosts the **pixel-level annotated subset** of X-AIGD (over 18,000 artifact instances across 3,000+ labeled samples), together with a large-scale **unlabeled** set of AI-generated images.
**Note on Dataset Status:**
- `labeled_train`, `labeled_test`, `unlabeled_train`, and `unlabeled_test` splits are currently available.
- Real images are planned for upcoming release.
### Data Fields
- `image`: The AI-generated image (stored as a raw **PNG**).
- `generator`: Name of the text-to-image generator.
- `uid`: Unique identifier for the image.
- `labels`: List of human-annotated artifacts, each containing:
- `label`: Category of the artifact (e.g., `low-level-edge_shape`, `high-level-semantics`).
- `points`: Polygon coordinates `[[x1, y1], [x2, y2], ...]` localizing the artifact.
- `original_prompt`, `positive_prompt`, `negative_prompt`: Text prompts used for generation.
- `num_inference_steps`, `guidance_scale`, `seed`, `scheduler`: Generation parameters.
- `width`, `height`: Image resolution.
- `image_format`, `jpeg_quality`, `chroma_subsampling`: Image compression details of the _corresponding real image_ (used for optional compression alignment).
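To turn a polygon from the `points` field into a per-pixel region, a point-in-polygon test suffices. A minimal sketch assuming only the `[[x1, y1], [x2, y2], ...]` format described above (the toy triangle is made up; real pipelines would typically rasterize with a library such as Pillow's `ImageDraw.polygon`):

```python
def point_in_polygon(x, y, points):
    """Even-odd ray casting: test whether (x, y) falls inside an artifact
    polygon given as [[x1, y1], [x2, y2], ...]."""
    inside = False
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending right from (x, y)
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Toy triangle standing in for an annotated artifact region
triangle = [[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]]
print(point_in_polygon(5, 3, triangle))   # True
print(point_in_polygon(0, 9, triangle))   # False
```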
### UID Correspondence
Each AI-generated (fake) image is produced from the caption of a real image and inherits its `uid` from that real image's metadata entry. Consequently, fake images that share a `uid` across different generators trace back to the same semantic source, allowing direct pairing and comparison between them.
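The pairing can be done with a simple group-by on `uid`. A sketch over toy metadata rows (the `uid` and `generator` values below are made up for illustration; field names follow the "Data Fields" section):

```python
from collections import defaultdict

# Toy rows standing in for dataset entries from different generators.
rows = [
    {"uid": "0001", "generator": "sdxl"},
    {"uid": "0001", "generator": "flux"},
    {"uid": "0002", "generator": "sdxl"},
]

# Group fake images by the uid they inherit from their shared real image,
# so outputs of different generators can be compared pairwise.
by_uid = defaultdict(list)
for row in rows:
    by_uid[row["uid"]].append(row["generator"])

print(dict(by_uid))  # {'0001': ['sdxl', 'flux'], '0002': ['sdxl']}
```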
## 📖 Usage Example
```python
from datasets import load_dataset

# Load the labeled test split (AI-generated images with artifact annotations)
ds = load_dataset("Coxy7/X-AIGD", split="labeled_test")

# Access an example
sample = ds[0]
print(f"Generator: {sample['generator']}")
print(f"UID: {sample['uid']}")

# Access artifact labels and polygon localization
for artifact in sample["labels"]:
    print(f"Artifact category: {artifact['label']}")
    print(f"Polygon points: {artifact['points']}")

# The image is a PIL object
# sample["image"].show()
```
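The compression metadata fields can drive the optional compression alignment mentioned under "Data Fields". A sketch using Pillow on a synthetic image; `align_compression` is a hypothetical helper, and the subsampling mapping assumes Pillow's JPEG encoder options:

```python
from io import BytesIO

from PIL import Image


def align_compression(img, image_format, jpeg_quality, chroma_subsampling):
    """Re-encode a fake image with the compression settings recorded for its
    corresponding real image. Hypothetical helper, for illustration only."""
    if image_format.upper() != "JPEG":
        return img  # non-JPEG real images need no re-encoding here
    # Pillow encodes chroma subsampling as 0/1/2; -1 lets the encoder decide.
    subsampling = {"4:4:4": 0, "4:2:2": 1, "4:2:0": 2}.get(chroma_subsampling, -1)
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality, subsampling=subsampling)
    buf.seek(0)
    return Image.open(buf)


# Synthetic stand-in for a dataset image and its compression metadata
img = Image.new("RGB", (64, 64), (120, 30, 200))
aligned = align_compression(img, "JPEG", 85, "4:2:0")
print(aligned.format, aligned.size)  # JPEG (64, 64)
```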
## 📝 Citation
If you find our work useful in your research, please consider citing:
```bibtex
@article{xiao2026unveiling,
  title={Unveiling Perceptual Artifacts: A Fine-Grained Benchmark for Interpretable AI-Generated Image Detection},
  author={Xiao, Yao and Chen, Weiyan and Chen, Jiahao and Cao, Zijie and Deng, Weijian and Yang, Binbin and Dong, Ziyi and Ji, Xiangyang and Ke, Wei and Wei, Pengxu and Lin, Liang},
  journal={arXiv preprint arXiv:2601.19430},
  year={2026}
}
```
## 📄 License
The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. |