NV-Raw2insights-MRI Overview

Description:

NV-Raw2insights-MRI is a deep unrolled Convolutional Neural Network (CNN) model (SDUM — Scalable Deep Unrolled Model) designed for AI-accelerated Magnetic Resonance Imaging (MRI) reconstruction. The model reconstructs fully sampled MR images from undersampled k-space data, significantly reducing MRI scan times while maintaining high image quality.

In standard clinical MRI, acquiring the full k-space (frequency domain) is time-consuming, as every point in the k-space matrix must be traversed. Traditional compressed sensing methods use iterative optimization to reconstruct from partial k-space acquisitions but are slow and typically limited to acceleration factors below 4x. NV-Raw2insights-MRI replaces this with a single forward pass through a deep neural network, achieving acceleration factors of 8x and above — up to 24x — while maintaining reconstruction quality. The model won all four tracks of the CMRxRecon 2025 challenge.
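To make the acceleration factor concrete: at R-fold acceleration, only about 1/R of the phase-encode lines are acquired, typically with a fully sampled low-frequency (autocalibration) center. A minimal NumPy sketch of such a Cartesian undersampling mask (all names and the autocalibration width are illustrative, not the challenge's exact mask generator):

```python
import numpy as np

def cartesian_mask(ny, accel, center_lines=16, seed=0):
    """Illustrative 1D Cartesian undersampling mask along the phase-encode axis.

    Keeps a fully sampled low-frequency (autocalibration) region and randomly
    selects the remaining lines so that roughly ny / accel lines are acquired.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(ny, dtype=bool)
    # Fully sample the k-space center: low frequencies carry most image energy.
    c = ny // 2
    mask[c - center_lines // 2 : c + center_lines // 2] = True
    # Randomly fill the rest of the line budget.
    budget = max(ny // accel - int(mask.sum()), 0)
    remaining = np.flatnonzero(~mask)
    mask[rng.choice(remaining, size=budget, replace=False)] = True
    return mask

mask = cartesian_mask(ny=256, accel=8)
print(int(mask.sum()))  # 32 acquired lines out of 256, i.e. 8x acceleration
```

At 8x, the scanner traverses 32 of 256 phase-encode lines; the network's job is to recover the image content the skipped lines would have provided.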

This model has been trained and validated on cardiac MRI data.

This model is ready for commercial use.

License/Terms of Use:

Use of this model is governed by the NVIDIA Open Model License.

Deployment Geography:

Global

Use Case:

Medical researchers, radiologists, AI developers, and healthcare institutions would be expected to use this model for accelerating MRI reconstruction from undersampled k-space acquisitions, reducing patient scan times, and advancing AI-based MRI reconstruction research. The model can serve as a foundation for fine-tuning to other anatomies and MR acquisition protocols.

It is not a clinically validated medical device and should not be used for clinical diagnostic purposes.

Release Date:

Huggingface: 03/16/2026 (GTC San Jose 2026) via https://huggingface.co/nvidia/NV-raw2insights-mri

Reference(s):

[1] SDUM paper: https://arxiv.org/abs/2512.17137

Publication references required when using the CMRxRecon challenge datasets:

[2] Wang C, Lyu J, Wang S, et al. "CMRxRecon: A publicly available k-space dataset and benchmark to advance deep learning for cardiac MRI." Scientific Data, 2024, 11(1): 687. https://doi.org/10.1038/s41597-024-03525-4

[3] Wang Z, Wang F, Qin C, et al. "CMRxRecon2024: A Multimodality, Multiview k-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI." Radiology: Artificial Intelligence, 2025, 7(2): e240443. https://doi.org/10.1148/ryai.240443

[4] Wang Z, Huang M, Shi Z, et al. "Enabling Ultra-Fast Cardiovascular Imaging Across Heterogeneous Clinical Environments with a Generalist Foundation Model and Multimodal Database." arXiv preprint arXiv:2512.21652, 2025. https://doi.org/10.48550/arXiv.2512.21652

CMRxRecon 2025 Challenge Page: https://www.synapse.org/Synapse:syn59814210/wiki/631023

Model Architecture:

Architecture Type: CNN (Deep Unrolled)
Network Architecture: Restormer-based cascaded reconstruction model (SDUM)
Task: Reconstruction

The model combines the following components:

  • Restormer-based reconstructor for image restoration at each cascade stage
  • Learned Coil Sensitivity Map Estimator (CSME), refined at each cascade for improved multi-coil reconstruction
  • Sampling-Aware Weighted Data Consistency (SWDC) to enforce consistency with acquired k-space measurements
  • Universal Conditioning (UC) on cascade index and protocol metadata (acceleration factor, sampling pattern, modality)

The deep unrolled architecture uses multiple cascades (up to 34), with reconstruction quality following a predictable scaling law: PSNR grows approximately with log(parameters).
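The unrolled structure amounts to a loop over cascades, each applying an image-domain reconstructor followed by a data-consistency step against the acquired k-space. A minimal PyTorch sketch of that loop, with a toy convolution standing in for the Restormer reconstructor and a scalar weight standing in for SWDC (all class and variable names here are illustrative, not the released model's API):

```python
import torch
import torch.nn as nn

class ToyCascade(nn.Module):
    """One unrolled cascade: image-domain refinement + soft data consistency.

    Stand-in for the Restormer reconstructor and SWDC described above;
    illustrative only, not the SDUM implementation.
    """
    def __init__(self):
        super().__init__()
        self.refine = nn.Conv2d(2, 2, kernel_size=3, padding=1)  # real/imag channels
        self.dc_weight = nn.Parameter(torch.tensor(0.5))  # scalar here; SWDC learns per-location weights

    def forward(self, image, kspace_acq, mask):
        image = image + self.refine(image)  # residual refinement
        # Go to k-space, blend predicted and acquired samples where sampled, return to image domain.
        k = torch.fft.fft2(torch.view_as_complex(image.permute(0, 2, 3, 1).contiguous()))
        k = torch.where(mask, (1 - self.dc_weight) * k + self.dc_weight * kspace_acq, k)
        return torch.view_as_real(torch.fft.ifft2(k)).permute(0, 3, 1, 2)

cascades = nn.ModuleList(ToyCascade() for _ in range(4))  # SDUM scales to 34
x = torch.zeros(1, 2, 64, 64)                             # initial image estimate
k_acq = torch.fft.fft2(torch.randn(1, 64, 64, dtype=torch.complex64))
mask = torch.zeros(1, 64, 64, dtype=torch.bool)
mask[..., ::8] = True                                     # 8x Cartesian-style mask
for c in cascades:
    x = c(x, k_acq, mask)
print(x.shape)  # torch.Size([1, 2, 64, 64])
```

Unrolling replaces the hand-tuned iterations of compressed sensing with a fixed number of learned refinement steps, which is what allows a single forward pass to substitute for slow iterative optimization.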

This model was developed using MONAI components and PyTorch.

Number of model parameters: 760M

Input:

Input Type(s): Complex-valued multi-coil arrays, Binary/density mask, Categorical/numerical metadata
Input Format(s): Complex-valued multi-coil k-space data, Sampling mask, Integer/float conditioning values
Input Parameters: Undersampled multi-coil k-space data (2D/3D/4D), Sampling mask (2D/3D), Protocol metadata vector (1D)
Other Properties Related to Input: The model accepts undersampled multi-coil k-space data along with universal conditioning (UC) metadata that includes cascade index, acceleration factor, sampling pattern type, and acquisition type.

k-space data

  • Type: Complex-valued multi-coil array
  • Description: Undersampled multi-coil k-space (frequency domain) data from MRI acquisition. Shape varies by protocol (e.g., [coils, kx, ky] for 2D; [coils, kx, ky, t] for dynamic/cine). The model handles variable numbers of receiver coils natively.
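For orientation, a multi-coil k-space array of shape [coils, kx, ky] maps to an image by an inverse 2D FFT per coil followed by a coil combination; the root-sum-of-squares zero-filled baseline below is the standard starting point the network improves on (illustrative NumPy only, not part of the model's pipeline):

```python
import numpy as np

def zero_filled_rss(kspace):
    """Zero-filled baseline: inverse FFT per coil, then root-sum-of-squares combine.

    kspace: complex array of shape [coils, kx, ky] with unacquired samples left at 0.
    Returns a real, non-negative magnitude image of shape [kx, ky].
    """
    coil_images = np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1))
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

k = np.random.randn(8, 64, 64) + 1j * np.random.randn(8, 64, 64)  # 8 receiver coils
img = zero_filled_rss(k)
print(img.shape)  # (64, 64)
```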

sampling_mask

  • Type: Binary or density-weighted mask
  • Description: Indicates which k-space locations were acquired during the undersampled scan. Feeds directly into the Sampling-Aware Weighted Data Consistency (SWDC) module, which learns spatially varying k-space weights conditioned on the mask rather than using a single scalar weight.
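The SWDC idea of mask-conditioned, spatially varying weights can be sketched in plain PyTorch: a small network maps the mask to per-location weights in (0, 1), which blend predicted and measured k-space at acquired locations. This is a sketch of the concept under those assumptions, not the SDUM module itself:

```python
import torch
import torch.nn as nn

class WeightedDC(nn.Module):
    """Sketch of sampling-aware weighted data consistency (illustrative).

    A small CNN maps the sampling mask to per-location weights w in (0, 1);
    at acquired locations the output blends predicted and measured k-space:
        k_out = (1 - w) * k_pred + w * k_acq   where mask == 1
    Unacquired locations keep the network's prediction unchanged.
    """
    def __init__(self):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, k_pred, k_acq, mask):
        w = self.weight_net(mask.float())          # [B, 1, kx, ky], values in (0, 1)
        blended = (1 - w) * k_pred + w * k_acq
        return torch.where(mask.bool(), blended, k_pred)

dc = WeightedDC()
k_pred = torch.randn(1, 1, 32, 32, dtype=torch.complex64)
k_acq = torch.randn(1, 1, 32, 32, dtype=torch.complex64)
mask = torch.zeros(1, 1, 32, 32)
mask[..., ::4] = 1                                 # every 4th k-space line acquired
out = dc(k_pred, k_acq, mask)
print(out.shape)  # torch.Size([1, 1, 32, 32])
```

Replacing the usual single scalar DC weight with mask-conditioned weights lets the model trust measurements differently across k-space, e.g. near the densely sampled center versus the sparse periphery.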

conditioning metadata

  • Type: Categorical/numerical values, encoded via sinusoidal embedding + MLP
  • Description: Protocol metadata and cascade position, including:
    • Cascade index: Integer position of the current cascade in the unrolled chain (0 to T-1)
    • Acceleration factor: The undersampling rate (e.g., 4x, 8x, 10x, 24x)
    • Sampling pattern (mask type): Type of k-space traversal (Cartesian, radial, spiral, kt-space)
    • Acquisition type: MR sequence/anatomy (e.g., cine, mapping, T1, T2, FLAIR, knee)
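The "sinusoidal embedding + MLP" encoding above can be sketched with a transformer-style positional encoding applied to each conditioning scalar, concatenated, and projected by a small MLP. The embedding width, MLP sizes, and categorical codes below are illustrative choices, not the model's actual hyperparameters:

```python
import math
import torch
import torch.nn as nn

def sinusoidal_embedding(value, dim=64):
    """Transformer-style sinusoidal encoding of one scalar
    (e.g. cascade index or acceleration factor). `dim` is illustrative."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    angles = value * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

# Embed each conditioning value, concatenate, and project with a small MLP.
mlp = nn.Sequential(nn.Linear(3 * 64, 128), nn.SiLU(), nn.Linear(128, 128))
cascade_idx, accel, mask_type = 3.0, 8.0, 1.0  # e.g. cascade 3, 8x, "Cartesian" coded as 1
emb = torch.cat([sinusoidal_embedding(torch.tensor(v))
                 for v in (cascade_idx, accel, mask_type)])
cond = mlp(emb)
print(cond.shape)  # torch.Size([128])
```

The resulting conditioning vector can then modulate each cascade, which is what lets one set of weights handle many acceleration factors and protocols.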

Output:

Output Type(s): Image
Output Format: Reconstructed MR images (magnitude)
Output Parameters: Two-dimensional (2D) or three-dimensional (3D)/dynamic, matching input spatial dimensions
Other Properties Related to Output: Fully reconstructed MR images from undersampled multi-coil k-space input. The model produces high-quality reconstructions through T cascades of iterative refinement, each combining a Restormer-based reconstructor, a learned coil sensitivity map estimator (CSME), and SWDC. Intermediate outputs include per-cascade refined coil sensitivity maps. Scales up to 34 cascades with log-linear PSNR improvement.

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (GPU cores) and software frameworks (CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • MONAI
  • PyTorch

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Blackwell

Supported Operating System(s):

  • Linux

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s):

0.1 - Initial release version for AI-accelerated MRI reconstruction (cardiac)

Training, Testing, and Evaluation Datasets:

Dataset Overview:

The model was trained on the CMRxRecon 2023, 2024, and 2025 challenge datasets, which comprise cardiac MRI data from multiple centers, multiple scanner vendors (Siemens, Philips, etc.), and multiple years. The datasets cover diverse acquisition protocols, sampling patterns, and acceleration factors.

Training Dataset:

Data Modality:

  • Other: MRI k-space data (cardiac)

Data Collection Method by dataset:

  • Human

Labeling Method by dataset:

  • Automatic/Sensors

Properties: 364 training cases (~861 subjects) from CMRxRecon 2023-2025; multi-coil cardiac MRI k-space covering cine, T1/T2 mapping, phase-contrast, and dark-blood sequences; IRB-approved and fully anonymized; no synthetic content; 3T scanners (expanding to 1.5T-5.0T multi-vendor in 2025).

Testing Dataset:

Data Modality:

  • Other: MRI k-space data (cardiac)

Data Collection Method by dataset:

  • Human

Labeling Method by dataset:

  • Automatic/Sensors

Properties: Approximately 104 cases across ~246 subjects held out for testing; same modalities and acquisition protocols as training.

Evaluation Dataset:

Data Modality:

  • Other: MRI k-space data (cardiac)

Data Collection Method by dataset:

  • Human

Labeling Method by dataset:

  • Automatic/Sensors

Properties: Approximately 52 cases across ~123 subjects held out for evaluation; same modalities and acquisition protocols as training.

Performance:

The model achieved state-of-the-art results across all four tracks of the CMRxRecon 2025 challenge:

  • Outperforms PromptMR+, the previous winning method, by +0.55 dB on CMRxRecon 2024
  • Exceeds PC-RNN by +1.8 dB on fastMRI brain
  • Supports acceleration factors of 8x to 24x (compared to traditional compressed sensing at <4x)
  • Component ablation improvements: SWDC (+0.43 dB), per-cascade CSME (+0.51 dB), UC (+0.38 dB)

Inference:

Acceleration Engine: PyTorch
Test Hardware:

  • A100
  • H100

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy Subcards.

Please make sure you have the proper rights and permissions for all input image and video content; if the content includes people, personal health information, or intellectual property, the generated output will not blur or otherwise anonymize the subjects included.

Please report model quality, risk, security vulnerabilities or concerns here.
