Feature Extraction · Transformers · Safetensors · custom_code

gheinrich committed de6f1b7 (verified; parent a51558a): Update README.md

Files changed (1): README.md (+229, −151)
---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
---

# Model Overview

## Description

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv4 models are available in multiple sizes:
* Base (98M parameters).
* Large (320M parameters).
* Shape-Optimized (431M parameters).
* Huge (653M parameters).

C-RADIOv4 was trained using an updated set of teacher models:
* [SigLIP2-g](https://huggingface.co/google/siglip2-giant-opt-patch16-384)
* [DINOv3-7B](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-lvd1689m)
* [SAM3](https://huggingface.co/facebook/sam3)

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global

## Use Case

The embeddings generated by this model are expected to be used by a downstream application.
For example:

* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.
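As a sketch of the image-classification use case, a downstream head can consume the summary embedding directly. The embedding width and class count below are illustrative placeholders, not values taken from this card:

```python
import torch
import torch.nn as nn

# Hypothetical downstream classifier head over a RADIO summary embedding.
# embedding_dim and num_classes are illustrative placeholders.
embedding_dim, num_classes = 1152, 10
head = nn.Linear(embedding_dim, num_classes)

summary = torch.randn(4, embedding_dim)  # stand-in for the model's (B, C) summary
logits = head(summary)
print(logits.shape)  # torch.Size([4, 10])
```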

## Release Date

Hugging Face: 01/23/2026 via the [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* [AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One](https://arxiv.org/abs/2312.06709)
* [PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation](https://arxiv.org/abs/2410.01680)
* [RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models](https://arxiv.org/abs/2412.07679)
* [FeatSharp: Your Vision Model Features, Sharper](https://arxiv.org/abs/2502.16025)
* [C-RADIOv4 (Tech Report)](https://arxiv.org/abs/2601.17237)
 

## Model Architecture

**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>
**Number of Model Parameters:** -B size: 98M, -L size: 320M, -SO400M size: 431M, -H size: 653M <br>

## Input

**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two-Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048, in increments of 16 pixels <br>
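As a sketch of the resolution constraint above, a hypothetical helper (`nearest_supported`, not part of the RADIO API) can snap an arbitrary side length to a supported multiple of 16, capped at 2048:

```python
def nearest_supported(size: int, patch: int = 16, max_size: int = 2048) -> int:
    """Snap a side length to the nearest multiple of `patch`, capped at `max_size`."""
    snapped = max(patch, round(size / patch) * patch)
    return min(snapped, max_size)

print(nearest_supported(1017))  # 1024
print(nearest_supported(3000))  # 2048
```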
 

## Output

**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** Two-Dimensional (2D) <br>
**Other Properties Related to Output:** A downstream model is required to leverage the image features. Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g. CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
 

## Usage

RADIO returns a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B, C)`, with `B` being the batch dimension and `C` being some number of channels.
The `spatial_features` represent more localized content, which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv4-SO400M"

image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

summary, spatial_features = model(pixel_values)
```

Spatial features have shape `(B, T, D)`, with `T` being the number of flattened spatial tokens and `D` being the number of channels for spatial features. Note that `C != D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange

patch_size = 16
spatial_features = rearrange(
    spatial_features, 'b (h w) d -> b d h w',
    h=pixel_values.shape[-2] // patch_size,
    w=pixel_values.shape[-1] // patch_size,
)
```

The resulting tensor will have shape `(B, D, H, W)`, as is typically seen with computer vision models.
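For reference, the same token-to-grid conversion can be done without einops using `permute` and `reshape`; the shapes below are small illustrative placeholders:

```python
import torch

B, D, H, W = 2, 8, 4, 4
spatial_features = torch.randn(B, H * W, D)  # stand-in for (B, T, D) features

# 'b (h w) d -> b d h w' without einops: move channels first, then unflatten tokens.
grid = spatial_features.permute(0, 2, 1).reshape(B, D, H, W)
print(grid.shape)  # torch.Size([2, 8, 4, 4])
```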

## Software Integration

**Runtime Engine(s):**
* TAO 6.1 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

## Model Version(s)

* C-RADIOv4-B (98M parameters).
* C-RADIOv4-L (320M parameters).
* C-RADIOv4-SO400M (431M parameters).
* C-RADIOv4-H (653M parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv4-B
* https://huggingface.co/nvidia/C-RADIOv4-L
* https://huggingface.co/nvidia/C-RADIOv4-SO400M
* https://huggingface.co/nvidia/C-RADIOv4-H

# Training and Evaluation Datasets

## Training Dataset

NV-CC-Img-Text-Dataset <br>
**Data Modality:** <br>
* Image <br>
**Image Training Data Size:** <br>
* 1 Million to 1 Billion Images <br>
**Data Collection Method by dataset:** <br>
* Automated <br>
**Labeling Method by dataset:** <br>
* Not Applicable (no labels are needed) <br>
**Properties:** 700 Million Images <br>

## Evaluation Datasets

ImageNet <br>
**Link:** <br>
* [ImageNet](https://www.image-net.org/) <br>
**Data Collection Method:** <br>
* Automated <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 1,281,167 <br>
**Validation Images:** <br>
* 50,000 <br>
**Test Images:** <br>
* 100,000

To perform the semantic segmentation evaluation, we use the training sets from ADE20K and Pascal VOC to train a linear layer, and subsequently perform evaluations on the respective validation sets.
See below for further details:

ADE20K <br>
**Link:** <br>
* [ADE20K](https://ade20k.csail.mit.edu/) <br>
**Data Collection Method:** <br>
* Human <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 25,574 <br>
**Validation Images:** <br>
* 2,000 <br>

Pascal VOC <br>
**Link:** <br>
* [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) <br>
**Data Collection Method:** <br>
* Human <br>
**Labeling Method:** <br>
* Human <br>
**Training Images:** <br>
* 1,464 <br>
**Validation Images:** <br>
* 1,449 <br>

| Benchmark | C-RADIOv3-B | C-RADIOv4-B | C-RADIOv3-L | C-RADIOv4-L | C-RADIOv4-SO400M | C-RADIOv3-H | C-RADIOv4-H |
|-----------|-------------|-------------|-------------|-------------|------------------|-------------|-------------|
| **ImageNet Classification (Top-1 accuracy)** | | | | | | | |
| Zero-Shot | 71.30 | 66.45 | 79.95 | 79.67 | 82.01 | 82.65 | 82.91 |
| KNN | 81.22 | 80.12 | 84.33 | 85.16 | 85.75 | 86.23 | 86.27 |
| **ADE20K Semantic Segmentation (mIoU)** | 49.79 | 50.48 | 51.87 | 54.64 | 55.14 | 52.75 | 55.05 |
| **Pascal VOC Semantic Segmentation (mIoU)** | 84.68 | 85.47 | 86.12 | 86.55 | 87.22 | 86.41 | 87.64 |
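The linear-probe protocol used for the segmentation numbers (a single linear layer trained on frozen spatial features) can be sketched with a 1x1 convolution; the feature width and spatial size below are illustrative placeholders:

```python
import torch
import torch.nn as nn

# A 1x1 conv is a per-position linear layer: it maps frozen spatial features
# (B, D, H, W) to per-pixel class logits. D here is a placeholder width.
D, num_classes = 64, 150  # ADE20K defines 150 semantic classes
probe = nn.Conv2d(D, num_classes, kernel_size=1)

features = torch.randn(2, D, 32, 32)  # stand-in for frozen backbone features
logits = probe(features)
print(logits.shape)  # torch.Size([2, 150, 32, 32])
```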

## Inference

**Acceleration Engine:** TensorRT, TensorRT-LLM <br>
**Engine:** PyTorch <br>
**Test Hardware:** H100 <br>

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if the image or video includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the included subjects.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy subcards below.

Please report model quality, risk, security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Bias

Field | Response
:---|:---
Participation considerations from adversely impacted groups ([protected classes](https://www.senate.ca.gov/content/protected-classes)) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:---|:---
Intended Task/Domain: | Visual feature extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. This model is only tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. This model may fail to surface information about the orientation of objects (e.g. whether a traffic sign points left/right).
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model may not perform well on visual domains that are not represented in the training data. The generated embeddings might fail to disambiguate differences that appear evident to humans (e.g. two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy

Field | Response
:---|:---
Generatable or reverse engineerable personal data? | No
Personal data used to create this model? | None Known
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/

### Safety

Field | Response
:---|:---
Model Application Field(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.