Update dataset card for InnerControl paper (Heeding the Inner Voice)

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +95 -2
README.md CHANGED
@@ -16,9 +16,102 @@ dataset_info:
  num_examples: 2810604
  download_size: 5473812704
  dataset_size: 1145721433301.688
  ---
 
  # Dataset Card for "MultiGen-20M_train"
 
- This dataset is constructed from [UniControl](https://arxiv.org/abs/2305.11147), and used for evaluation of the paper [ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback](https://huggingface.co/papers/2404.07987)
- ControlNet++ Github repository: https://github.com/liming-ai/ControlNet_Plus_Plus
 
  num_examples: 2810604
  download_size: 5473812704
  dataset_size: 1145721433301.688
+ language:
+ - en
+ task_categories:
+ - text-to-image
+ license: cc-by-nc-4.0
+ tags:
+ - diffusion
+ - controlnet
+ - image-generation
+ size_categories:
+ - 1M<n<10M
  ---
+
  # Dataset Card for "MultiGen-20M_train"
 
+ This dataset is constructed from [UniControl](https://arxiv.org/abs/2305.11147) and is a large-scale dataset for conditional text-to-image generation.
+ It is notably used for training and evaluation in:
+
+ * The paper [ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback](https://huggingface.co/papers/2404.07987).
+   * GitHub repository: [https://github.com/liming-ai/ControlNet_Plus_Plus](https://github.com/liming-ai/ControlNet_Plus_Plus)
+ * The paper **[Heeding the Inner Voice: Aligning ControlNet Training via Intermediate Features Feedback](https://huggingface.co/papers/2507.02321)** (InnerControl).
+   * Project page: [https://controlgenai.github.io/InnerControl/](https://controlgenai.github.io/InnerControl/)
+   * Code: [https://github.com/ControlGenAI/InnerControl](https://github.com/ControlGenAI/InnerControl)
+
+ The dataset covers various control types, such as LineArt, HED, and Depth, and is designed to improve spatial control over generated images in text-to-image diffusion models.
+
+ ### Abstract of "Heeding the Inner Voice"
+
+ Despite significant progress in text-to-image diffusion models, achieving precise spatial control over generated outputs remains challenging. ControlNet addresses this by introducing an auxiliary conditioning module, while ControlNet++ further refines alignment through a cycle consistency loss applied only to the final denoising steps. However, this approach neglects intermediate generation stages, limiting its effectiveness. We propose InnerControl, a training strategy that enforces spatial consistency across all diffusion steps. Our method trains lightweight convolutional probes to reconstruct input control signals (e.g., edges, depth) from intermediate UNet features at every denoising step. These probes efficiently extract signals even from highly noisy latents, enabling pseudo ground truth controls for training. By minimizing the discrepancy between predicted and target conditions throughout the entire diffusion process, our alignment loss improves both control fidelity and generation quality. Combined with established techniques like ControlNet++, InnerControl achieves state-of-the-art performance across diverse conditioning methods (e.g., edges, depth).
+
+ ![InnerControl Method Diagram](https://cdn-uploads.huggingface.co/production/uploads/6667e3d60a7f1d1cbb63cf4d/IEOsiqRC9pV_SgzmN9zos.png)
+
+ ### Data structure
+
+ Folder `./data/data_part0` contains an example subset, and folder `./data/full_data` currently contains the full dataset, 1.04M images in total. Inside each `.zip` package, the data is arranged as below:
+
+ ```
+ - package_idx
+ --- package_idx.json # metadata
+ --- images
+ ----- 00001.png
+ ----- 00002.png
+ ...
+ ```
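For illustration, the package layout above can be walked with Python's standard `zipfile` module. This is a hedged sketch, not the repository's own tooling: the `*.zip` glob and the assumption that each package holds a single top-level `.json` metadata file come only from the layout description above.

```python
import json
import zipfile
from pathlib import Path

def list_packages(data_root):
    """Yield (package_name, metadata, image_names) for every .zip package under data_root."""
    for zip_path in sorted(Path(data_root).glob("*.zip")):
        with zipfile.ZipFile(zip_path) as zf:
            names = zf.namelist()
            # Assumes exactly one metadata .json per package, as in the layout above.
            meta_name = next(n for n in names if n.endswith(".json"))
            metadata = json.loads(zf.read(meta_name))
            images = sorted(n for n in names if n.endswith(".png"))
            yield zip_path.stem, metadata, images
```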
+
+ Each sample is a `2x2` image grid at a resolution of `1024x1024`. We count each grid as `ONE` sample, which leaves more room and diversity for randomly choosing among the 4 images at training time.
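Training-time selection of one of the 4 images can be sketched as below. This is a hypothetical helper using NumPy; the actual logic lives in `./code/dataloader.py` and may differ.

```python
import numpy as np

def random_quadrant(grid, rng=None):
    """Pick one of the four sub-images from a 2x2 grid stored as an H x W x C array."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = grid.shape[0] // 2, grid.shape[1] // 2
    row, col = int(rng.integers(0, 2)), int(rng.integers(0, 2))
    return grid[row * h:(row + 1) * h, col * w:(col + 1) * w]

# A 1024x1024 grid yields one 512x512 crop.
```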
+
+ `metadata.json` contains the metadata for each sample; an example is shown below:
+
+ ```json
+ # Metadata structure
+ [
+   {
+     "idx": ...,  # index
+     "image_path": "",  # path to the image
+     "features": {
+       "attributes": [{"attribute type": "..."}, {"attribute type": "..."}],  # attribute types and their specific descriptions in this sample
+       "subject": "..."  # subject name
+     },
+     "prompt": "..."  # the prompt used for image generation
+   },
+   ...
+ ]
+
+ # An example
+ [
+   {
+     "idx": 0,
+     "image_path": "0/00000.png",
+     "features": {
+       "attributes": [
+         {
+           "lighting": "1@hard light, highlighting texture, sharp shadow"
+         },
+         {
+           "color": "30@soft lavender, mint green, pale peach, and baby blue"
+         }
+       ],
+       "subject": "introspective sports portrait"
+     },
+     "prompt": "soft lavender, mint green, pale peach, and baby blue, hard light, highlighting texture, sharp shadow, introspective sports portrait"
+   },
+   ...
+ ]
+ ```
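The example entry suggests each attribute value packs an index and a description around an `@` separator; the parser below is a sketch built on that assumption, not a documented format.

```python
def parse_attributes(entry):
    """Split each '<index>@<description>' attribute value into (type, index, description)."""
    parsed = []
    for attr in entry["features"]["attributes"]:
        for attr_type, value in attr.items():
            idx_str, _, description = value.partition("@")
            parsed.append((attr_type, int(idx_str), description))
    return parsed

# Entry copied from the metadata example above.
entry = {
    "idx": 0,
    "image_path": "0/00000.png",
    "features": {
        "attributes": [
            {"lighting": "1@hard light, highlighting texture, sharp shadow"},
            {"color": "30@soft lavender, mint green, pale peach, and baby blue"},
        ],
        "subject": "introspective sports portrait",
    },
    "prompt": "soft lavender, mint green, pale peach, and baby blue, "
              "hard light, highlighting texture, sharp shadow, introspective sports portrait",
}
```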
+
+ ### Code and supporting files
+
+ **Attributes and Subjects**
+
+ `./code/attributes_and_subjects.json` contains the attribute and subject dictionaries.
+
+ **Range-sensitive filtering**
+
+ `./code/range_sensitive_filter.json` contains our metadata for the filter, and `./code/data_filter.py` converts it into a format that can be used in the dataloader.
+
+ **Data Loader**
 
+ `./code/dataloader.py` provides an example of loading the data into image pairs, with the filter and balanced resampling applied.