Renyang committed (verified) · commit 40a7688 · parent: d967e49

Update README.md

Files changed (1): README.md (+119 −32)
---
license: cc-by-nc-4.0
---

# DANI: Discrepancy Accessing for Natural and AI Images

**A Large-Scale Dataset for Visual Research on AI-Synthesized and Natural Images**

## Overview

DANI (Discrepancy Accessing for Natural and AI Images) is a large-scale, multimodal dataset for benchmarking and broad visual research on both AI-generated images (AIGIs) and natural images. It is designed to support a wide range of computer vision and multimodal research tasks, including but not limited to:

- AI-generated vs. real image discrimination
- Representation learning
- Image quality assessment
- Style transfer
- Image reconstruction
- Domain adaptation
- Multimodal understanding

DANI accompanies the paper:

> Liu, Renyang; Lyu, Ziyu; Zhou, Wei; Ng, See-Kiong.
> *D-Judge: How Far Are We? Evaluating the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance.*
> ACM International Conference on Multimedia (MM), 2025.

## Dataset Summary

DANI contains over **445,000 images**: 5,000 natural images (from COCO, at resolutions 224, 256, 512, and 1024) and more than 440,000 AI-generated images produced by diverse state-of-the-art generative models. Each sample is annotated with detailed metadata, enabling comprehensive evaluation and flexible use across a broad range of visual and multimodal research. Images are generated with a wide range of models and protocols:

- **Models:** GALIP, DFGAN, SD_V14, SD_V15, Versatile Diffusion (VD), SD_V21, SD_XL, Dalle2, Dalle3, and COCO (real images)
- **Image Sizes:** 224, 256, 512, 768, 1024
- **Generation Types:** Text-to-Image (T2I), Image-to-Image (I2I), Text and Image-to-Image (TI2I)
- **Categories:** indoor, outdoor, etc.

## Data Fields

Each sample in the dataset contains the following fields:

| Field     | Description                                                                |
|-----------|----------------------------------------------------------------------------|
| index     | Unique index for each image                                                |
| image     | The image itself (stored as a file, not just a path)                       |
| size      | Image resolution (e.g., 224, 256, 512, 768, 1024)                          |
| category  | Scene category (e.g., `indoor`, `outdoor`)                                 |
| class_id  | COCO class or semantic category ID/name                                    |
| model     | Generative model used (`GALIP`, `DFGAN`, `SD_V14`, `SD_V15`, `VD`, etc.)   |
| gen_type  | Generation method (`T2I`, `I2I`, `TI2I`)                                   |
| reference | Whether the image is real/natural (`True` for real, `False` for generated) |

> *Note:*
> - **COCO** images have `reference=True` and may appear at multiple resolutions.
> - For AI-generated images, the `model` and `gen_type` fields indicate the specific generative model and generation protocol (T2I, I2I, or TI2I) used for each sample.
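
These metadata fields make it easy to slice the dataset. As a minimal sketch, the helper `select` below is illustrative (not part of the dataset or the `datasets` library); it filters a sample dict by any combination of `model`, `gen_type`, and `reference`:

```python
def select(example, model=None, gen_type=None, reference=None):
    """Return True if a DANI sample matches all of the given metadata filters.

    Filters left as None are ignored.
    """
    if model is not None and example["model"] != model:
        return False
    if gen_type is not None and example["gen_type"] != gen_type:
        return False
    return reference is None or example["reference"] == reference

# With the loaded dataset (see the Usage section):
#   ds = load_dataset("Renyang/DANI", split="train")
#   sdxl_t2i  = ds.filter(lambda ex: select(ex, model="SD_XL", gen_type="T2I"))
#   real_only = ds.filter(lambda ex: select(ex, reference=True))
```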

## Model/Generation Configurations

The dataset covers the following models and settings:

| Model  | Image Size          | Generation Types Supported |
|--------|---------------------|----------------------------|
| GALIP  | 224                 | T2I                        |
| DFGAN  | 256                 | T2I                        |
| SD_V14 | 512                 | T2I, I2I, TI2I             |
| SD_V15 | 512                 | T2I, I2I, TI2I             |
| VD     | 512                 | T2I, I2I, TI2I             |
| SD_V21 | 768                 | T2I, I2I, TI2I             |
| SD_XL  | 1024                | T2I, I2I, TI2I             |
| Dalle2 | 512                 | T2I, I2I                   |
| Dalle3 | 1024                | T2I                        |
| COCO   | 224, 256, 512, 1024 | Reference/real images      |

For each generation type (`T2I`, `I2I`, `TI2I`), a diverse set of models is covered.
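
For programmatic checks, the table above can be mirrored as a plain mapping. The names `DANI_CONFIGS` and `models_supporting` are illustrative assumptions, not an official API shipped with the dataset:

```python
# Model -> ([image sizes], [supported generation types]), mirroring the table above.
DANI_CONFIGS = {
    "GALIP":  ([224], ["T2I"]),
    "DFGAN":  ([256], ["T2I"]),
    "SD_V14": ([512], ["T2I", "I2I", "TI2I"]),
    "SD_V15": ([512], ["T2I", "I2I", "TI2I"]),
    "VD":     ([512], ["T2I", "I2I", "TI2I"]),
    "SD_V21": ([768], ["T2I", "I2I", "TI2I"]),
    "SD_XL":  ([1024], ["T2I", "I2I", "TI2I"]),
    "Dalle2": ([512], ["T2I", "I2I"]),
    "Dalle3": ([1024], ["T2I"]),
    "COCO":   ([224, 256, 512, 1024], []),  # reference/real images, not generated
}

def models_supporting(gen_type):
    """List the generative models that support a given generation type."""
    return [m for m, (_, types) in DANI_CONFIGS.items() if gen_type in types]
```

For example, `models_supporting("TI2I")` returns the five Stable Diffusion / VD variants, matching the table.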

## Usage

You can load DANI directly with the 🤗 `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("Renyang/DANI")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['index', 'image', 'size', 'category', 'class_id', 'model', 'gen_type', 'reference'],
#         num_rows: 540257
#     })
# })

# Access images and metadata
img = ds["train"][0]["image"]
meta = {k: ds["train"][0][k] for k in ds["train"].column_names if k != "image"}
```

*Note:* Images are loaded as PIL Images. Use `.convert("RGB")` if needed.
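
Since individual images may load in modes other than RGB (e.g., grayscale or RGBA), a small normalization helper keeps batches uniform. `to_rgb` here is an illustrative sketch, not part of the dataset tooling:

```python
from PIL import Image

def to_rgb(img: Image.Image) -> Image.Image:
    """Normalize a PIL image (e.g. mode 'L' or 'RGBA') to 3-channel RGB."""
    return img if img.mode == "RGB" else img.convert("RGB")

# Example on a synthetic grayscale image:
gray = Image.new("L", (32, 32), color=128)
rgb = to_rgb(gray)  # now mode == "RGB", same 32x32 size
```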

## Citation

If you use this dataset or the associated benchmark, please cite:

```bibtex
@inproceedings{liu2025djudge,
  title        = {D-Judge: How Far Are We? Evaluating the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance},
  author       = {Liu, Renyang and Lyu, Ziyu and Zhou, Wei and Ng, See-Kiong},
  booktitle    = {ACM International Conference on Multimedia (MM)},
  organization = {ACM},
  year         = {2025},
}
```

## License

This dataset is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license, for non-commercial research use only.

## Contact

For questions or collaborations, please visit [Renyang Liu's homepage](https://ryliu68.github.io/).