# **HMAR: Pretrained Multi-Scale Autoregressive Image Generation Models**

[**Code**](https://github.com/NVlabs/HMAR) | [**Paper**](https://arxiv.org/abs/2506.04421) | [**Paper Website**](https://research.nvidia.com/labs/dir/hmar)

# Model Overview

## Description:

Visual AutoRegressive modeling (VAR) has shown promise in bridging the speed and quality gap between autoregressive image models and diffusion models. VAR reformulates autoregressive modeling by decomposing an image into successive resolution scales. During inference, an image is generated by predicting all the tokens in the next (higher-resolution) scale, conditioned on the tokens in all previous (lower-resolution) scales. However, this formulation suffers from reduced image quality due to the parallel generation of all tokens within a resolution scale, has sequence lengths that scale superlinearly in image resolution, and requires retraining to change the sampling schedule.
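As a concrete illustration of the sequence-length point, the sketch below counts tokens across a multi-scale schedule. The schedule shown is VAR's published 256x256 schedule, used here only as an example; it is not necessarily HMAR's exact configuration.

```python
# Illustrative: token counts across resolution scales in next-scale prediction.
scales = [1, 2, 3, 4, 5, 6, 8, 10, 13, 16]  # side length (in tokens) per scale

tokens_per_scale = [s * s for s in scales]
total_tokens = sum(tokens_per_scale)

# A plain raster autoregressive model at the final 16x16 grid uses 256 tokens;
# the multi-scale decomposition uses 680, and the gap widens with resolution.
print(tokens_per_scale)  # [1, 4, 9, 16, 25, 36, 64, 100, 169, 256]
print(total_tokens)      # 680
```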
We introduce Hierarchical Masked AutoRegressive modeling (**HMAR**), a new image generation algorithm that alleviates these issues by combining next-scale prediction with masked prediction to generate high-quality images with fast sampling. HMAR reformulates next-scale prediction as a Markovian process, wherein the prediction of each resolution scale is conditioned only on the tokens in its immediate predecessor rather than on the tokens in all preceding resolutions. When predicting a resolution scale, HMAR uses a controllable multi-step masked generation procedure that generates a subset of the tokens in each step. On the ImageNet 256x256 and 512x512 benchmarks, HMAR models match or outperform parameter-matched VAR, diffusion, and autoregressive baselines. We develop efficient IO-aware block-sparse attention kernels that allow HMAR to achieve over 2.5x faster training and over 1.75x faster inference than VAR, as well as an over 3x lower inference memory footprint. Finally, HMAR offers additional flexibility over VAR: its sampling schedule can be changed without further training, and it can be applied to image editing tasks in a zero-shot manner.
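The two ideas above, Markovian conditioning and multi-step masked generation per scale, can be sketched as a toy loop. Everything below is fabricated for illustration (scale schedule, step count, random "predictions"); the real model predicts codebook indices with a transformer.

```python
import random

def toy_masked_next_scale_sampling(scales=(1, 2, 4), steps_per_scale=2, seed=0):
    """Toy sketch of HMAR-style sampling. Each scale is generated in a few
    masked steps, and conditioning is Markovian: only the immediately
    preceding scale is kept as context. All names and values are illustrative."""
    rng = random.Random(seed)
    prev_scale = None  # the only conditioning context carried forward
    for side in scales:
        n = side * side
        tokens = [None] * n                  # every position starts masked
        masked = list(range(n))
        rng.shuffle(masked)
        for step in range(steps_per_scale):
            # unmask a roughly equal slice of the remaining positions
            k = max(1, len(masked) // (steps_per_scale - step))
            reveal, masked = masked[:k], masked[k:]
            for pos in reveal:
                # a real model would predict from (prev_scale, tokens) here;
                # we just draw a fake codebook index
                tokens[pos] = rng.randrange(4096)
        prev_scale = tokens  # the next scale conditions on this scale only
    return prev_scale

final_scale = toy_masked_next_scale_sampling()
assert len(final_scale) == 16 and all(t is not None for t in final_scale)
```

Because the sampling schedule is just the `(scales, steps_per_scale)` pair driving the loop, changing it requires no retraining, which is the flexibility the paragraph above describes.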
This model is for research and development/non-commercial use only.

**Model Developer**: NVIDIA

## Model Versions

We release 4 trained checkpoints for models of different sizes: `hmar-d16`, `hmar-d20`, `hmar-d24`, and `hmar-d30`, with 0.46B, 0.84B, 1.3B, and 2.4B trainable parameters, respectively.

- `hmar-dN`: Given an ImageNet class label as input (from 0 to 999), the model produces an image belonging to that class.
22
+
23
+ ### License:
24
+
25
+ This model is released under the [NVIDIA One-Way Noncommercial License (NSCLv1)](https://github.com/NVlabs/HMAR/blob/main/LICENSE). For a custom license, please contact [cosmos-license@nvidia.com](mailto:cosmos-license@nvidia.com).
26
+
27
+ Under the NVIDIA One-Way Noncommercial License (NSCLv1), NVIDIA confirms:
28
+
29
+ * Models are not for commercial use.
30
+ * NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
31
+
32
+ ### Deployment Geography:
33
+
34
+ Global
35
+
36
+ ### Use Case:
37
+
38
+ Conditional Image Generation: Generation of images conditioned on a class label from the ImageNet dataset.
39
+
40
+ ### Release Date:
41
+
42
+ * Github: [07/08/2025](https://github.com/NVlabs/HMAR)
43
+ * Huggingface: [07/08/2025](https://huggingface.co/nvidia/HMAR/tree/main)
44
+
45
+ ## Model Architecture:
46
+
47
+ *Architecture Type:* Transformer
48
+
49
+ *Network Architecture:* Block-wise attention DiT
50
+
51
+ This model was developed based on [VAR](https://github.com/FoundationVision/VAR).
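One plausible way to picture the block-wise attention pattern implied by Markovian next-scale prediction is the toy mask below: each token attends within its own scale and to its conditioning scale. This is purely illustrative; the actual kernels are IO-aware block-sparse implementations, and the true mask layout may differ.

```python
def markov_block_mask(scales=(1, 2, 4)):
    """Toy boolean attention mask for Markovian next-scale prediction:
    a token in scale i may attend to tokens in scale i (bidirectionally,
    for masked prediction) and in scale i-1 (its conditioning context).
    Real HMAR uses IO-aware block-sparse kernels; this only shows a pattern."""
    sizes = [s * s for s in scales]
    n = sum(sizes)
    # scale index of every flattened token position
    scale_of = [i for i, sz in enumerate(sizes) for _ in range(sz)]
    return [[scale_of[q] - scale_of[k] in (0, 1) for k in range(n)]
            for q in range(n)]

mask = markov_block_mask()
allowed = sum(sum(row) for row in mask)  # 341 of 21*21 = 441 entries allowed
```

Because each query row touches at most two blocks rather than the full prefix, the mask is much sparser than VAR's block-causal pattern, which is what the kernels exploit.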
## Input

**Input Type(s)**: Class label (integer between 0 and 999)

**Input Format(s)**:
* Class label: Integer

**Input Parameters**:
* Class label: One-dimensional (1D)

**Other Properties Related to Input**:
* The sampling configs can be modified in `config/sampling/hmar-d30.yaml`.
* The number of masked sampling steps can be changed in `utils/sampling_arg_util.py`.
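A minimal sketch of validating this class-label interface before sampling. The helper name is hypothetical; the HMAR repository defines its own input pipeline.

```python
def make_label_batch(labels):
    """Validate and batch ImageNet class labels (integers in [0, 999]) for
    conditional generation. Hypothetical helper, shown for illustration only."""
    batch = []
    for label in labels:
        if not isinstance(label, int) or not 0 <= label <= 999:
            raise ValueError(
                f"class label must be an integer in [0, 999], got {label!r}")
        batch.append(label)
    return batch

make_label_batch([0, 207, 999])  # valid labels pass through unchanged
```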
## Output

**Output Type**: Image

**Output Format**: PNG

**Output Parameters**: Image: Two-dimensional (2D)

**Other Properties Related to Output:** The generated images are RGB images of size 256x256.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration

**Runtime Engine(s):**

Not Applicable (N/A)

**Supported Hardware Microarchitecture Compatibility:**

* NVIDIA Blackwell
* NVIDIA Hopper

**Note**: We have only tested inference with BF16 precision.

**Operating System(s):**

* Linux (We have not tested on other operating systems.)

# Usage

See the [HMAR repository](https://github.com/nvlabs/hmar) for details.
# Training, Testing, and Evaluation Datasets

We use the [ImageNet dataset](https://www.image-net.org/) in our experiments, for training, testing, and evaluation alike. ImageNet is a widely used dataset that spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. Each data sample is composed of an image-label pair.

**The total size (in number of data points):** 1,431,167 <br>
**Total number of datasets:** 1 <br>
**Dataset partition:** Training [89.5%], testing [7%], validation [3.5%] <br>
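The quoted partition percentages can be checked directly from the ImageNet split sizes given above:

```python
# Check the quoted partition percentages against the ImageNet split sizes.
train, val, test = 1_281_167, 50_000, 100_000
total = train + val + test
assert total == 1_431_167  # matches the total number of data points quoted above

shares = {name: round(100 * n / total, 1)
          for name, n in [("training", train), ("testing", test), ("validation", val)]}
print(shares)  # {'training': 89.5, 'testing': 7.0, 'validation': 3.5}
```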
## Training Dataset: Official ImageNet training dataset

*Link:* https://www.image-net.org <br>

## Test Dataset: Official ImageNet test dataset

*Link:* https://www.image-net.org <br>

## Evaluation Dataset: Official ImageNet validation dataset

*Link:* https://www.image-net.org <br>

# Evaluation

Please see our [technical paper](https://arxiv.org/abs/2506.04421) for detailed evaluations.
## Inference:
**Acceleration Engine:** PyTorch, FlashAttention <br>
**Test Hardware:** H100, A100, GB200 <br>
* Minimum 1 GPU card; multi-node setups require InfiniBand / RoCE connections <br>
## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

For more detailed information on ethical considerations for this model, please see the Explainability, Bias, Safety & Security, and Privacy subcards below.

Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Plus Plus (++) Promise

We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:

* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.
### Bias

| Field | Response |
| :---- | :------- |
| Participation considerations from adversely impacted groups ([protected classes](https://www.senate.ca.gov/content/protected-classes)) in model design and testing: | None |
| Measures taken to mitigate against unwanted bias: | None |

### Explainability

| Field | Response |
| :---- | :------- |
| Intended Application & Domain: | Image generation |
| Model Type: | Transformer |
| Intended Users: | Research |
| Output: | Image |
| Describe how the model works: | Generates images based on a class label from ImageNet. |
| Technical Limitations: | Due to the stochastic nature of the model, it may at times not correctly follow the label on which the generation is conditioned. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | We report multiple metrics commonly used for conditional image generation models: FID, IS, Precision, and Recall (see the results in our [technical paper](https://arxiv.org/abs/2506.04421)). In addition, we perform human verification of the generated outputs to validate image quality and adherence to the conditioning label. |
| Potential Known Risks: | None known |
| Licensing: | [NVIDIA One-Way Noncommercial License (NSCLv1)](https://github.com/NVlabs/HMAR/blob/main/LICENSE) |
### Privacy

| Field | Response |
| :---- | :------- |
| Generatable or reverse engineerable personal data? | No |
| Personal data used to create this model? | No |
| How often is dataset reviewed? | Before release |
| Is there provenance for all datasets used in training? | Not Applicable. Only externally-sourced data was used. |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
| Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |
### Safety

| Field | Response |
| :---- | :------- |
| Model Application(s): | Conditional image generation |
| Describe the life critical impact (if present). | None known |
| Use Case Restrictions: | [NVIDIA One-Way Noncommercial License (NSCLv1)](https://github.com/NVlabs/HMAR/blob/main/LICENSE) |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Access restrictions were enforced during training, and dataset license constraints were adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |