Feature Extraction · Transformers · Safetensors · custom_code

gheinrich committed (verified) · Commit 2b25f46 · 1 Parent(s): 8a9a271

Update README.md

Files changed (1): README.md (+234 −153)
---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

# Model Overview

## Description

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv4 models are available in multiple sizes:
* Base (98M parameters).
* Large (320M parameters).
* Shape-Optimized (431M parameters).
* Huge (653M parameters).

C-RADIOv4 was trained using an updated set of teacher models:
* [SigLIP2-g](https://huggingface.co/google/siglip2-giant-opt-patch16-384)
* [DINOv3-7B](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-lvd1689m)
* [SAM3](https://huggingface.co/facebook/sam3)

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global

## Use Case

The embeddings generated by this model are expected to be used by a downstream application.
For example:

* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.

## Release Date

Hugging Face: 01/23/2026 via [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* [Paper](https://arxiv.org/abs/2312.06709)
* [Paper](https://arxiv.org/abs/2410.01680)
* [Paper](https://arxiv.org/abs/2412.07679)
* [Paper](https://arxiv.org/abs/2502.16025)
* [Paper](https://arxiv.org/abs/2601.17237)

## Model Architecture

**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>
**Number of model parameters:** -B size: 98M, -L size: 320M, -SO400M size: 431M, -H size: 653M <br>

## Input

**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two-Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>
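
Since input dimensions must land on the model's 16-pixel grid, a preprocessing step can snap an arbitrary resolution to the nearest valid size. The helper below is an illustrative sketch, not part of the official pipeline; the function name and the clamping behavior are assumptions.

```python
def snap_to_patch_grid(height: int, width: int, patch_size: int = 16,
                       max_side: int = 2048) -> tuple[int, int]:
    """Round a resolution down to the nearest multiple of patch_size,
    clamped to the model's maximum supported side length."""
    snapped_h = min(height - height % patch_size, max_side)
    snapped_w = min(width - width % patch_size, max_side)
    return snapped_h, snapped_w

# e.g. a 750x1000 photo snaps to 736x992
```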

## Output

**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** Two-Dimensional (2D) <br>
**Other Properties Related to Output:** Downstream model required to leverage image features. Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Usage

RADIO returns a tuple of two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B,C)`, with `B` being the batch dimension and `C` being some number of channels.
The `spatial_features` represent more localized content and should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv4-H"

image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

summary, spatial_features = model(pixel_values)
```

Spatial features have shape `(B,T,D)`, with `T` being the number of flattened spatial tokens and `D` being the number of channels for spatial features. Note that `C != D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange
spatial_features = rearrange(spatial_features, 'b (h w) d -> b d h w',
                             h=pixel_values.shape[-2] // 16,
                             w=pixel_values.shape[-1] // 16)
```

The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.
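
If `einops` is not available, the same conversion can be done with plain PyTorch. This is a sketch assuming the `(B,T,D)` token layout described above (tokens ordered row-major over the patch grid); the helper name is hypothetical.

```python
import torch

def to_spatial_grid(spatial_features: torch.Tensor,
                    input_h: int, input_w: int,
                    patch_size: int = 16) -> torch.Tensor:
    """Reshape flattened tokens (B, T, D) into a feature map (B, D, H, W)."""
    b, t, d = spatial_features.shape
    h, w = input_h // patch_size, input_w // patch_size
    assert t == h * w, "token count must match the patch grid"
    # (B, T, D) -> (B, H, W, D) -> (B, D, H, W)
    return spatial_features.reshape(b, h, w, d).permute(0, 3, 1, 2).contiguous()
```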

## Software Integration

**Runtime Engine(s):**
* TAO 6.1 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

## Model Version(s)

* C-RADIOv4-B (98M parameters).
* C-RADIOv4-L (320M parameters).
* C-RADIOv4-SO400M (431M parameters).
* C-RADIOv4-H (653M parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv4-B
* https://huggingface.co/nvidia/C-RADIOv4-L
* https://huggingface.co/nvidia/C-RADIOv4-SO400M
* https://huggingface.co/nvidia/C-RADIOv4-H

# Training and Evaluation Datasets

## Training Dataset

NV-CC-Img-Text-Dataset <br>
**Data Modality:** <br>
* Image <br>
**Image Training Data Size:** <br>
* 1 Million to 1 Billion Images <br>
**Data Collection Method by dataset:** <br>
* Automated <br>
**Labeling Method by dataset:** <br>
* Not Applicable (no labels are needed) <br>
**Properties:** 700 Million Images <br>

## Evaluation Datasets

ImageNet <br>
**Link:** <br>
* [ImageNet](https://www.image-net.org/) <br>
**Data Collection:** <br>
* Automated <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 1,281,167 <br>
**Validation Images:** <br>
* 50,000 <br>
**Test Images:** <br>
* 100,000

To perform the semantic segmentation evaluation, we use the training sets of ADE20K and Pascal VOC to train a linear layer, and subsequently perform evaluations on the corresponding validation sets.
See below for further details:

ADE20K <br>
**Link:** <br>
* [ADE20K](https://ade20k.csail.mit.edu/) <br>
**Data Collection:** <br>
* Human <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 25,574 <br>
**Validation Images:** <br>
* 2,000 <br>

Pascal VOC <br>
**Link:** <br>
* [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) <br>
**Data Collection:** <br>
* Human <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 1,464 <br>
**Validation Images:** <br>
* 1,449 <br>

| Benchmark | C-RADIOv3-B | C-RADIOv4-B | C-RADIOv3-L | C-RADIOv4-L | C-RADIOv4-SO400M | C-RADIOv3-H | C-RADIOv4-H |
|-----------|-------------|-------------|-------------|-------------|------------------|-------------|-------------|
| **ImageNet Classification (Top-1 accuracy)** | | | | | | | |
| Zero-Shot | 71.30 | 66.45 | 79.95 | 79.67 | 82.01 | 82.65 | 82.91 |
| KNN | 81.22 | 80.12 | 84.33 | 85.16 | 85.75 | 86.23 | 86.27 |
| **ADE20K Semantic Segmentation (mIoU)** | 49.79 | 50.48 | 51.87 | 54.64 | 55.14 | 52.75 | 55.05 |
| **Pascal VOC Semantic Segmentation (mIoU)** | 84.68 | 85.47 | 86.12 | 86.55 | 87.22 | 86.41 | 87.64 |
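
The linear-probe protocol used for the segmentation numbers (a linear layer trained on frozen features) can be sketched as follows. The channel count `D`, class count `K`, learning rate, and bilinear upsampling to label resolution are illustrative assumptions, not the exact recipe behind the reported scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical setup: D-channel frozen RADIO feature maps, K segmentation classes.
D, K = 1280, 150  # placeholder channel count and ADE20K-style class count
probe = nn.Conv2d(D, K, kernel_size=1)  # a 1x1 conv acts as a per-pixel linear layer
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)

def probe_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step: features (B, D, H, W) come from the frozen backbone,
    labels (B, H*16, W*16) are per-pixel class indices at input resolution."""
    logits = probe(features)  # (B, K, H, W)
    logits = F.interpolate(logits, size=labels.shape[-2:],
                           mode='bilinear', align_corners=False)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At evaluation time, the argmax over `K` logits per pixel is compared against the validation labels to compute mIoU.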

## Inference

**Acceleration Engine:** TensorRT, TensorRT-LLM <br>
**Engine:** PyTorch <br>
**Test Hardware:** H100 <br>

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if image or video content includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the image subjects included.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
### Bias

Field | Response
:---|:---
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:---|:---
Intended Task/Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. This model has only been tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. This model may fail to surface information about the orientation of objects (e.g. whether a traffic sign points left/right).
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model may not perform well on visual domains that are not represented in the training data. The generated embeddings might fail to disambiguate differences that appear evident to humans (e.g. two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy

Field | Response
:---|:---
Generatable or reverse engineerable personal data? | No
Personal data used to create this model? | None Known
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/

### Safety

Field | Response
:---|:---
Model Application Field(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.