---
license: other
license_name: nvidia-open-model-license
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
---

# Model Overview

## Description

This model performs visual feature extraction. For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv4 models are available in multiple sizes:

* Shape-Optimized (431M parameters).
* Huge (653M parameters).

C-RADIOv4 was trained using an updated set of teacher models:

* [SigLIP2-g](https://huggingface.co/google/siglip2-giant-opt-patch16-384)
* [DINOv3-7B](https://huggingface.co/facebook/dinov3-vit7b16-pretrain-lvd1689m)
* [SAM3](https://huggingface.co/facebook/sam3)

This model is ready for commercial/non-commercial use.

### License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).

## Deployment Geography

Global

## Use Case

The embeddings generated by this model are expected to be used by a downstream application. For example:

* Image-level understanding (image classification, curation, etc.).
* Dense processing (semantic segmentation, depth estimation, etc.).
* Integration into a Vision-Language Model.

## Release Date

Hugging Face: 01/27/2026 via [RADIO Collection of Models](https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6).

## References

* [AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One](https://arxiv.org/abs/2312.06709)
* [PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation](https://arxiv.org/abs/2410.01680)
* [RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models](https://arxiv.org/abs/2412.07679)
* [FeatSharp: Your Vision Model Features, Sharper](https://arxiv.org/abs/2502.16025)
* [C-RADIOv4 (Tech Report)](https://arxiv.org/abs/2601.17237)

## Model Architecture

**Architecture Type:** Neural Network
**Network Architecture:** Vision Transformer
**Number of model parameters:** SO400M: 431M; H: 653M
## Input

**Input Type(s):** Image
**Input Format(s):** Red, Green, Blue (RGB)
**Input Parameters:** Two Dimensional (2D)
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels
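Since valid side lengths are multiples of the 16-pixel patch size, a small helper can snap an arbitrary target size onto the supported grid before preprocessing. This is a minimal sketch for convenience; `snap_to_grid` is a hypothetical function, not part of the released API:

```python
# Hypothetical helper: round a requested side length to the nearest
# valid input resolution (a multiple of 16, clamped to the tested
# 256-2048 range). Not part of the released API.
def snap_to_grid(size: int, patch: int = 16, lo: int = 256, hi: int = 2048) -> int:
    size = max(lo, min(hi, size))
    return int(round(size / patch)) * patch

assert snap_to_grid(1030) == 1024
assert snap_to_grid(100) == 256
```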
## Output

**Output Type(s):** Embeddings
**Output Format:** Tensor
**Output Parameters:** Two Dimensional (2D)
**Other Properties Related to Output:** A downstream model is required to leverage the image features.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
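As a minimal sketch of that downstream requirement, a linear probe can be attached to the summary embedding. The embedding width and 1,000-class output below are illustrative assumptions, not part of this release; infer the true width from the shape of `summary` returned by a forward pass:

```python
import torch
import torch.nn as nn

# Minimal sketch of a downstream consumer: a linear probe over the
# RADIO summary embedding. The width and class count are assumptions
# for illustration only.
class LinearProbe(nn.Module):
    def __init__(self, embed_dim: int, num_classes: int = 1000):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, summary: torch.Tensor) -> torch.Tensor:
        return self.head(summary)


# With a frozen backbone (see the Usage section below):
#   with torch.no_grad():
#       summary, spatial_features = model(pixel_values)
#   logits = LinearProbe(summary.shape[-1])(summary)
```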
## Usage

RADIO will return a tuple with two tensors. The `summary` is similar to the `cls_token` in ViT and is meant to represent the general concept of the entire image. It has shape `(B,C)`, with `B` being the batch dimension and `C` being some number of channels. The `spatial_features` represent more localized content, which should be suitable for dense tasks such as semantic segmentation, or for integration into a Vision-Language Model.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

hf_repo = "nvidia/C-RADIOv4-H"

image_processor = CLIPImageProcessor.from_pretrained(hf_repo)
model = AutoModel.from_pretrained(hf_repo, trust_remote_code=True)
model.eval().cuda()

image = Image.open('./assets/radio.png').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt', do_resize=True).pixel_values
pixel_values = pixel_values.cuda()

summary, spatial_features = model(pixel_values)
```

Spatial features have shape `(B,T,D)`, with `T` being the flattened spatial tokens and `D` being the channels for spatial features. Note that `C != D` in general. Converting to a spatial tensor format can be done using the downsampling size of the model, combined with the input tensor shape. For RADIO, the patch size is 16.

```python
from einops import rearrange

patch_size = 16  # RADIO's downsampling factor
spatial_features = rearrange(spatial_features, 'b (h w) d -> b d h w',
                             h=pixel_values.shape[-2] // patch_size,
                             w=pixel_values.shape[-1] // patch_size)
```

The resulting tensor will have shape `(B,D,H,W)`, as is typically seen with computer vision models.

## Software Integration

**Runtime Engine(s):**

* [TAO-6.1]
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere
* NVIDIA Blackwell
* NVIDIA Jetson
* NVIDIA Hopper
* NVIDIA Lovelace
* NVIDIA Pascal
* NVIDIA Turing
* NVIDIA Volta
**Supported Operating System(s):**
* Linux
* Linux 4 Tegra
* QNX
* Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

## Model Version(s)

* C-RADIOv4-SO400M (431M parameters).
* C-RADIOv4-H (653M parameters).

**Links:**

* https://huggingface.co/nvidia/C-RADIOv4-SO400M
* https://huggingface.co/nvidia/C-RADIOv4-H

# Training and Evaluation Datasets

## Training Dataset

**NV-CC-Img-Text-Dataset**

**Data Modality:** Image
**Image Training Data Size:** 1 Million to 1 Billion Images
**Data Collection Method by dataset:** Automated
**Labeling Method by dataset:** Not Applicable (no labels are needed)
**Properties:** 700 Million Images
## Evaluation Datasets

**ImageNet**

**Link:** [ImageNet](https://www.image-net.org/)
**Data Collection:** Automated
**Labeling Method:** Human
**Training Images:** 1,281,167
**Validation Images:** 50,000
**Test Images:** 100,000
To perform the semantic segmentation evaluation, we use the training sets from ADE20K and Pascal VOC to train a linear layer, and subsequently perform evaluations on the validation sets; a sketch of this linear-probing protocol follows the dataset details below.

**ADE20K**

**Link:** [ADE20K](https://ade20k.csail.mit.edu/)
**Data Collection:** Human
**Labeling Method:** Human
**Training Images:** 25,574
**Validation Images:** 2,000
**Pascal VOC**

**Link:** [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/)
**Data Collection:** Human
**Labeling Method:** Human
**Training Images:** 1,464
**Validation Images:** 1,449
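A minimal sketch of the linear-probing protocol referenced above, assuming frozen backbone features reshaped to `(B, D, H/16, W/16)` as described in the Usage section. The 1x1-convolution head, bilinear upsampling, and loss settings are illustrative assumptions rather than the exact evaluation recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a linear segmentation probe: a 1x1 convolution over frozen
# spatial features, upsampled to label resolution. Channel width, class
# count, and training details are illustrative assumptions.
class LinearSegProbe(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor, out_hw: tuple) -> torch.Tensor:
        # feats: (B, D, H/16, W/16) spatial features from the backbone
        logits = self.head(feats)
        return F.interpolate(logits, size=out_hw, mode='bilinear',
                             align_corners=False)


# Training step with the backbone frozen, assuming `labels` of shape (B, H, W):
#   logits = probe(spatial_features, labels.shape[-2:])
#   loss = F.cross_entropy(logits, labels, ignore_index=255)
```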
| Benchmark | C-RADIOv3-B | C-RADIOv3-L | C-RADIOv4-SO400M | C-RADIOv3-H | C-RADIOv4-H |
|-----------|-------------|-------------|------------------|-------------|-------------|
| **ImageNet Classification (Top-1 accuracy)** | | | | | |
| Zero-Shot | 71.30 | 79.95 | 82.01 | 82.65 | 83.09 |
| KNN | 81.22 | 84.33 | 85.75 | 86.23 | 86.68 |
| **ADE20K Semantic Segmentation (mIoU)** | 49.79 | 51.87 | 55.14 | 52.75 | 55.20 |
| **Pascal VOC Semantic Segmentation (mIoU)** | 84.68 | 86.12 | 87.22 | 86.41 | 87.24 |

## Inference

**Acceleration Engine:** TensorRT, TensorRT-LLM
**Engine:** PyTorch
**Test Hardware:** H100
## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the generated output will not blur or maintain the proportions of the image subjects included.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

### Bias

Field | Response
:---|:---
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

### Explainability

Field | Response
:---|:---
Intended Task/Domain: | Visual Feature Extraction
Model Type: | Vision Transformer
Intended Users: | Developers of downstream vision applications
Output: | Image embeddings
Describe how the model works: | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Technical Limitations: | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. This model has only been tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. This model may fail to surface information about the orientation of objects (e.g., whether a traffic sign points left or right).
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU).
Potential Known Risks: | This model may not perform well on visual domains that are not represented in the training data. The generated embeddings might fail to disambiguate differences that appear evident to humans (e.g., two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application.
Licensing: | [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

### Privacy

Field | Response
:---|:---
Generatable or reverse engineerable personal data? | No
Personal data used to create this model? | None Known
How often is dataset reviewed? | Before Every Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes
Was data from user interactions with the AI model (e.g. user input and prompts) used to train the model? | No
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/

### Safety

Field | Response
:---|:---
Model Application Field(s): | Generation of visual embeddings
Describe the life critical impact (if present). | Not Applicable
Use Case Restrictions: | Abide by the NVIDIA Open Model License Agreement
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions are enforced during training, and dataset license constraints are adhered to.