---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
---
# Model Overview
## Description
This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.
C-RADIOv4 models are available in multiple sizes:
* Shape-Optimized (431M parameters).
* Huge (653M parameters).
C-RADIOv4 was trained using an updated set of teacher models:
* [SigLIP2-g](https://huggingface.co/google/siglip2-giant-opt-patch16-384)
* [DINOv3-7B](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-lvd1689m)
* [SAM3](https://huggingface.co/facebook/sam3)
This model is ready for commercial/non-commercial use.
### License/Terms of Use
GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
## Deployment Geography
Global
## Use Case
The embeddings generated by this model are expected to be used by a downstream application.
For example:
* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.
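As an illustration of image-level understanding, the summary embeddings can drive a simple nearest-neighbor classifier. This is a minimal sketch, not part of the released code; the function name and embedding shapes are hypothetical, assuming summary vectors of shape `(C,)` as produced by the model:

```python
import torch
import torch.nn.functional as F

def knn_classify(query_summary, gallery_summaries, gallery_labels, k=5):
    """Classify an image by k-nearest-neighbor search over summary embeddings.

    query_summary: (C,) summary embedding of the query image.
    gallery_summaries: (N, C) embeddings of labeled reference images.
    gallery_labels: (N,) integer class labels.
    """
    # Cosine similarity between the query and every gallery embedding.
    sims = F.cosine_similarity(query_summary.unsqueeze(0), gallery_summaries, dim=-1)
    # Majority vote among the k most similar gallery images.
    topk = sims.topk(k).indices
    return gallery_labels[topk].mode().values.item()
```

In practice the gallery embeddings would be precomputed with the frozen backbone over a labeled reference set.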
## Release Date
Hugging Face: 01/27/2026 via [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).
## References
* [AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One](https://arxiv.org/abs/2312.06709)
* [PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation](https://arxiv.org/abs/2410.01680)
* [RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models](https://arxiv.org/abs/2412.07679)
* [FeatSharp: Your Vision Model Features, Sharper](https://arxiv.org/abs/2502.16025)
* [C-RADIOv4 (Tech Report)](https://arxiv.org/abs/2601.17237)
## Model Architecture
**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>
**Number of model parameters:** -SO400M size: 431M, -H size: 653M <br>
## Input
**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>
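Since input sides must be multiples of 16 and at most 2048, a small helper (not part of the released code; the function name is hypothetical) can snap an arbitrary resolution to the supported grid:

```python
def snap_resolution(height, width, patch_size=16, max_side=2048):
    """Round each side down to the nearest multiple of the patch size,
    clamped to the supported range [patch_size, max_side]."""
    def snap(side):
        return max(patch_size, min(max_side, (side // patch_size) * patch_size))
    return snap(height), snap(width)
```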
## Output
**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Output:** Downstream model required to leverage image features. Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
## Usage
RADIO will return a tuple with two tensors.
The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image.
It has shape `(B,C)` with `B` being the batch dimension, and `C` being some number of channels.
The `spatial_features` represent more localized content which should be suitable for dense tasks such as semantic segmentation, or for integration into an LLM.
```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
hf_repo = "nvidia/C-RADIOv4-H"
image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()
image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()
summary, features = model(pixel_values)
```
Spatial features have shape `(B,T,D)` with `T` being the flattened spatial tokens, and `D` being the channels for spatial features. Note that `C!=D` in general.
Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.
```python
from einops import rearrange

patch_size = 16
spatial_features = rearrange(
    spatial_features, 'b (h w) d -> b d h w',
    h=pixel_values.shape[-2] // patch_size,
    w=pixel_values.shape[-1] // patch_size,
)
```
The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.
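For dense tasks, a common next step is to upsample this `(B,D,H,W)` feature map back to the input resolution before applying a prediction head. A minimal sketch, using hypothetical shapes (a 512x512 input with patch size 16 yields a 32x32 token grid; the channel count `768` is illustrative):

```python
import torch
import torch.nn.functional as F

# Hypothetical spatial feature map: batch 1, 768 channels, 32x32 token grid.
spatial = torch.randn(1, 768, 32, 32)

# Bilinear upsampling back to the input resolution, a common first step
# before a dense head such as a linear segmentation probe.
upsampled = F.interpolate(spatial, size=(512, 512), mode='bilinear', align_corners=False)
```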
## Software Integration
**Runtime Engine(s):**
* TAO 6.1 <br>
**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>
**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.
## Model Version(s)
* C-RADIOv4-SO400M (431M parameters).
* C-RADIOv4-H (653M parameters).
**Links:**
* https://huggingface.co/nvidia/C-RADIOv4-SO400M
* https://huggingface.co/nvidia/C-RADIOv4-H
# Training and Evaluation Datasets
## Training Dataset
**NV-CC-Img-Text-Dataset**
**Data Modality:** Image <br>
**Image Training Data Size:** 1 Million to 1 Billion Images <br>
**Data Collection Method by dataset:** Automated <br>
**Labeling Method by dataset:** Not Applicable (no labels are needed) <br>
**Properties:** 700 Million Images <br>
## Evaluation Datasets
**ImageNet**
**Link:** [ImageNet](https://www.image-net.org/) <br>
**Data Collection:** Automated <br>
**Labeling Method:** Human <br>
**Training Images:** 1,281,167 <br>
**Validation Images:** 50,000 <br>
**Test Images:** 100,000 <br>
To perform the semantic segmentation evaluation, we use the training sets of ADE20K and Pascal VOC to train a linear layer, and subsequently evaluate on the corresponding validation sets.
See below for further details:
**ADE20k**
**Link:** [ADE20K](https://ade20k.csail.mit.edu/) <br>
**Data Collection:** Human <br>
**Labeling Method:** Human <br>
**Training Images:** 25,574 <br>
**Validation Images:** 2,000 <br>
**Pascal VOC**
**Link:** [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) <br>
**Data Collection:** Human <br>
**Labeling Method:** Human <br>
**Training Images:** 1,464 <br>
**Validation Images:** 1,449 <br>
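The linear-probe protocol described above can be sketched as a 1x1 convolution over frozen spatial features. This is a hedged illustration, not the evaluation code; the class count (150, as in ADE20K) and feature dimension are assumed:

```python
import torch
import torch.nn as nn

num_classes, feat_dim = 150, 768  # hypothetical: ADE20K classes, feature channels

# A linear segmentation probe: a 1x1 convolution mapping frozen backbone
# features (D channels) to per-pixel class logits. Only this layer is trained.
probe = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

features = torch.randn(2, feat_dim, 32, 32)  # frozen backbone output (B, D, H, W)
logits = probe(features)                     # (B, num_classes, H, W)
```

The logits are then upsampled to the label resolution and scored with mIoU on the validation set.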
| Benchmark | C-RADIOv3-B | C-RADIOv3-L | C-RADIOv4-SO400M | C-RADIOv3-H | C-RADIOv4-H |
|-----------|-------------|-------------|------------------|-------------|-------------|
| **ImageNet Classification (Top-1 accuracy)** | | | | | |
| Zero-Shot | 71.30 | 79.95 | 82.01 | 82.65 | 83.09 |
| KNN | 81.22 | 84.33 | 85.75 | 86.23 | 86.68 |
| **ADE20k Semantic Segmentation (mIoU)** | 49.79 | 51.87 | 55.14 | 52.75 | 55.20 |
| **Pascal VOC Semantic Segmentation (mIoU)** | 84.68 | 86.12 | 87.22 | 86.41 | 87.24 |
## Inference
**Acceleration Engine:** TensorRT, TensorRT-LLM <br>
**Engine:** PyTorch <br>
**Test Hardware:** H100 <br>
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please make sure you have the proper rights and permissions for all input image and video content; if the input includes people, personal health information, or intellectual property, be aware that the model does not blur or otherwise anonymize the subjects depicted.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
### Bias
Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None
### Explainability
Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Task/Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. This model is only tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. This model may fail to surface information about the orientation of objects (e.g. whether a traffic sign points left/right).
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model may not perform well on visual domains that are not represented in the training data. The generated embeddings might fail to disambiguate differences that appear evident to humans (e.g. two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
### Privacy
Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal data? | No
Personal data used to create this model? | None Known
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/
### Safety
Field | Response
:---------------------------------------------------|:----------------------------------
Model Application Field(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by NVIDIA Open Model License Agreement
Model and dataset restrictions: | The principle of least privilege (PoLP) is applied to limit access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.