---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
---

Model Overview

Description

This model performs visual feature extraction. For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.

C-RADIOv2 models are available in multiple sizes:

  • Base (90M parameters).
  • Large (320M parameters).
  • Huge (653M parameters).
  • Gigantic (1.8B parameters).

C-RADIOv2 was trained for 1M steps (400k more steps than v1), using inverse frequency sampling for data balancing and PHI Standardization for teacher distribution balancing.

This model is ready for commercial/non-commercial use.
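Loading and inference follow the standard Hugging Face custom-code path. The sketch below is illustrative: the repository id and the `(summary, patches)` return signature are assumptions based on this card's description of the outputs, not a confirmed API.

```python
import torch
from transformers import AutoModel

MODEL_ID = "nvidia/C-RADIOv2-B"  # hypothetical repo id in the RADIO collection

def load_radio(model_id: str = MODEL_ID):
    # custom_code model: trust_remote_code=True is required so that the
    # RADIO modeling code is fetched from the repository.
    return AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

@torch.no_grad()
def extract_features(model, images: torch.Tensor):
    # images: (B, 3, H, W) RGB, with H and W multiples of 16 (see Input).
    # Per this card, the model produces summary (image-level) and patch
    # (dense) embeddings; the exact return shape is an assumption.
    summary, patches = model(images)
    return summary, patches

# Usage (downloads weights, so shown here as a comment):
# model = load_radio()
# summary, patches = extract_features(model, torch.rand(1, 3, 256, 256))
```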

License/Terms of Use

GOVERNING TERMS: Use of this model is governed by the NVIDIA Open Model License Agreement.

Deployment Geography

Global.

Use Case

The embeddings generated by this model are expected to be used by a downstream application. For example:

  • Image-level understanding (image classification, curation, etc.).
  • Dense processing (semantic segmentation, depth estimation, etc.).
  • Integration into a Vision-Language Model.
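As a sketch of the image-level use case, a nearest-centroid classifier over summary embeddings: each class is represented by a centroid embedding, and an image is assigned to the class whose centroid is most cosine-similar. The toy vectors below stand in for real RADIO embeddings.

```python
import numpy as np

def nearest_centroid(embedding, class_centroids):
    """Classify an image embedding by cosine similarity against
    per-class centroid embeddings (label -> vector)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(class_centroids, key=lambda c: cos(embedding, class_centroids[c]))

# Toy 2-D vectors standing in for real summary embeddings.
centroids = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
print(nearest_centroid(np.array([0.9, 0.1]), centroids))  # -> cat
```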

Release Date

Hugging Face: 03/26/2025, via the RADIO Collection of Models.

References

Model Architecture

Architecture Type: Neural Network
Network Architecture: Vision Transformer

Input

Input Type(s): Image
Input Format(s): Red, Green, Blue (RGB)
Input Parameters: Two Dimensional (2D)
Other Properties Related to Input: Image resolutions up to 2048x2048 in increments of 16 pixels
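The resolution constraint can be enforced with a small helper that snaps a requested size to the nearest supported value. This is a sketch, not part of the model's API; the 256-pixel lower bound comes from the tested range listed under Potential Known Risks below.

```python
def snap_resolution(size: int, base: int = 16, lo: int = 256, hi: int = 2048) -> int:
    """Clamp a requested height/width to the supported range and round
    to the nearest multiple of `base` (16 pixels for C-RADIOv2)."""
    size = max(lo, min(hi, size))
    return ((size + base // 2) // base) * base

print(snap_resolution(1000))  # -> 1008
print(snap_resolution(5000))  # -> 2048
```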

Output

Output Type(s): Embeddings
Output Format: Tensor
Output Parameters: 2D
Other Properties Related to Output: Downstream model required to leverage image features

Software Integration

Runtime Engine(s):

  • TAO 24.10

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Jetson
  • NVIDIA Hopper
  • NVIDIA Ada Lovelace
  • NVIDIA Pascal
  • NVIDIA Turing
  • NVIDIA Volta

Supported Operating System(s):

  • Linux
  • Linux 4 Tegra
  • QNX
  • Windows

Model Version(s)

  • C-RADIOv2-B (90M parameters).
  • C-RADIOv2-L (320M parameters).
  • C-RADIOv2-H (653M parameters).
  • C-RADIOv2-G (1.8B parameters).


Training and Evaluation Datasets

Training Dataset

NV-CC-Img-Text-Dataset

Data Collection Method by dataset

  • Automated

Labeling Method by dataset

  • Not Applicable (no labels are needed)

Properties

  • 700 Million Images

Evaluation Dataset

Link: ImageNet

Data Collection Method by dataset

  • Automated

Labeling Method by dataset

  • Human

Properties: This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images and 100,000 test images.

Inference

Engine: PyTorch
Test Hardware: A100

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards below.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Bias

| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups protected classes in model design and testing | None |
| Measures taken to mitigate against unwanted bias | None |

Explainability

| Field | Response |
|---|---|
| Intended Application & Domain | Visual Feature Extraction |
| Model Type | Vision Transformer |
| Intended Users | Developers of downstream vision applications |
| Output | Image embeddings |
| Describe how the model works | The model takes an image as input, processes the image through multiple transformer blocks, and outputs summary and patch embeddings. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of | Not Applicable |
| Technical Limitations | This model generates image embeddings that can be used by a downstream model to, for example, classify images. The downstream model must be trained to leverage the visual embeddings. |
| Verified to have met prescribed NVIDIA quality standards | Yes |
| Performance Metrics | Image classification accuracy, semantic segmentation mean intersection-over-union (mIoU). |
| Potential Known Risks | This model has only been tested on input resolutions ranging from 256 to 2048, in increments of 16 pixels. Additionally, the generated embeddings might fail to disambiguate differences that appear evident to humans (e.g., two images showing different breeds of dogs might in fact produce very similar embeddings). Domain-specific evaluation is required for the target application. |
| Licensing | NVIDIA Open Model License |

Privacy

| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | None |
| Personal data used to create this model? | None |
| How often is dataset reviewed? | Before Every Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | Yes |

Safety

| Field | Response |
|---|---|
| Model Application(s) | Generation of visual embeddings |
| Describe the life critical impact (if present) | Not Applicable |
| Use Case Restrictions | Abide by the NVIDIA Open Model License Agreement |
| Model and dataset restrictions | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access is restricted during training, and dataset license constraints are adhered to. |