Model Overview
Description:
The NVIDIA Wan2.2-T2V-A14B-Diffusers NVFP4 model is the quantized version of Wan-AI's Wan2.2-T2V-A14B model, a text-to-video diffusion transformer; for more information, see the Non-NVIDIA model card referenced below. The NVIDIA Wan2.2-T2V-A14B-Diffusers NVFP4 model is quantized with NVIDIA Model Optimizer.
This model is ready for commercial/non-commercial use.
Third-Party Community Consideration
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the link to the Non-NVIDIA (Wan2.2-T2V-A14B) Model Card.
License/Terms of Use:
Deployment Geography:
Global
Use Case:
Developers looking to deploy off-the-shelf, pre-quantized models in video generation applications, creative content pipelines, and other AI-powered multimedia systems.
Release Date:
Hugging Face: 05/05/2026 via https://huggingface.co/nvidia/Wan2.2-T2V-A14B-Diffusers-NVFP4
Model Architecture:
Architecture Type: Diffusion Transformer (DiT) with Mixture-of-Experts (MoE)
Network Architecture: Wan2.2-T2V-A14B
This model was developed based on Wan2.2-T2V-A14B.
Number of Model Parameters: 27B total parameters, 14B active parameters per denoising step
Input:
Input Type(s): Text
Input Format(s): String
Input Parameters: One-Dimensional (1D): Sequences
Other Properties Related to Input: Resolution and video length are user-configurable
Output:
Output Type(s): Video
Output Format: Video (MP4)
Output Parameters: Three-Dimensional (3D): Frames, Height, Width
Other Properties Related to Output: Generates video at configurable resolutions (default 480p at 480×832) and frame counts (default 81 frames); resolution must be divisible by 16
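The defaults above map directly onto a standard Diffusers text-to-video call. The sketch below shows how the resolution and frame-count settings fit together in practice; note that this card documents TRTLLM as the serving path, so loading the NVFP4 checkpoint directly through `WanPipeline` is an assumption, and the prompt and frame rate are illustrative.

```python
# Minimal sketch: configuring resolution and frame count for generation.
# Loading the NVFP4 checkpoint directly via Diffusers is an assumption
# (the card's documented serving path is TRTLLM); parameter names follow
# the standard Diffusers WanPipeline call signature.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "nvidia/Wan2.2-T2V-A14B-Diffusers-NVFP4", torch_dtype=torch.bfloat16
).to("cuda")

height, width, num_frames = 480, 832, 81  # card defaults
assert height % 16 == 0 and width % 16 == 0, "resolution must be divisible by 16"

frames = pipe(
    prompt="A red fox trotting through fresh snow at dawn",  # illustrative
    height=height,
    width=width,
    num_frames=num_frames,
).frames[0]

export_to_video(frames, "output.mp4", fps=16)  # frame rate illustrative
```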
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster inference times compared to CPU-only solutions.
Software Integration:
Supported Runtime Engine(s):
- TRTLLM
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Blackwell
Preferred Operating System(s):
- Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Model Version(s):
The model is quantized with nvidia-modelopt v0.42.0
Training, Testing, and Evaluation Datasets:
Calibration Dataset:
Link: OpenVid-1M
Data Collection Method by dataset: Automated.
Labeling Method by dataset: Automated.
Properties: The OpenVid-1M dataset contains over 1 million video-text pairs for video generation research. Only the text captions from this dataset were used for calibration.
Training Dataset:
Training Data Size: Undisclosed
Data Modality: Text, Image, Video
Data Collection Method by dataset: Undisclosed
Labeling Method by dataset: Undisclosed
Properties: The original Wan2.2 model was trained with substantially expanded image and video data relative to Wan2.1; additional training dataset details are not disclosed in the source model card.
Testing Dataset:
Data Collection Method by dataset: Undisclosed
Labeling Method by dataset: Undisclosed
Properties: Undisclosed
Evaluation Dataset:
Data Collection Method by dataset: Hybrid: Human, Automated
Labeling Method by dataset: Hybrid: Human, Automated
Properties: This model was evaluated using the VBench 2.0 benchmark, an open-source video generation benchmark suite for evaluating intrinsic faithfulness across 18 fine-grained capability dimensions grouped into 5 broad categories. Evaluation was performed on the standard VBench 2.0 prompt suite (1,012 text prompts) covering diverse subjects, scenes, and action descriptions. We report four VBench 2.0 dimensions:
- Camera Motion: whether camera movement in the generated video (pan, tilt, orbit, or static) matches the camera direction described in the prompt.
- Complex Plot: whether the generated video correctly portrays multi-stage narrative plots described in the prompt.
- Instance Preservation: anatomical and structural integrity of subjects across frames, with anomaly detection on people and objects.
- Motion Order Understanding: whether the temporal order of actions described in the prompt is preserved in the generated video.

Generated outputs were additionally subject to manual engineering review.
Inference:
Acceleration Engine: TRTLLM
Test Hardware: B200
Post Training Quantization
This model was obtained by quantizing the weights and activations of Wan2.2-T2V-A14B to the NVFP4 data type, making it ready for inference with TRTLLM. Only the weights and activations of the linear operators within both transformer denoiser blocks (transformer and transformer_2) are quantized.
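As a rough illustration of this kind of flow, the sketch below quantizes the two denoiser blocks with Model Optimizer's post-training quantization API, calibrating on text captions as described in the Calibration Dataset section. The exact configuration and calibration settings used for this checkpoint are not published; `NVFP4_DEFAULT_CFG`, the sample captions, and the step counts are assumptions based on ModelOpt's documented `mtq.quantize` interface.

```python
# Rough sketch of an NVFP4 post-training quantization flow with NVIDIA
# Model Optimizer. The exact config and calibration settings used for
# this checkpoint are not published; NVFP4_DEFAULT_CFG and the loop
# below are assumptions based on ModelOpt's documented API.
import torch
import modelopt.torch.quantization as mtq
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

# Illustrative stand-ins for OpenVid-1M text captions.
calib_prompts = [
    "A dog running across a grassy field",
    "Waves crashing against a rocky shore at sunset",
]

def forward_loop(_model):
    # Run short denoising passes so ModelOpt can observe activation
    # ranges inside the linear layers it is calibrating.
    for prompt in calib_prompts:
        pipe(prompt=prompt, height=480, width=832, num_frames=17,
             num_inference_steps=4)

# Quantize the linear operators in both denoiser experts.
for name in ("transformer", "transformer_2"):
    setattr(pipe, name,
            mtq.quantize(getattr(pipe, name), mtq.NVFP4_DEFAULT_CFG, forward_loop))
```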
Usage
To serve this checkpoint with TRTLLM:
```bash
trtllm-serve nvidia/Wan2.2-T2V-A14B-Diffusers-NVFP4 --extra_visual_gen_options ./examples/visual_gen/serve/configs/wan.yml
```
Model Characteristics
The original Wan2.2-T2V-A14B model uses a Mixture-of-Experts design with separate high-noise and low-noise experts across denoising timesteps. This enables larger total capacity (27B parameters) while keeping the active parameters per step at roughly 14B. See the original model card for more details: Wan-AI/Wan2.2-T2V-A14B-Diffusers.
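To make the expert routing concrete, here is a schematic of the timestep-based switch between the two denoiser blocks named elsewhere in this card. The boundary value is illustrative, not the published split point, and the stand-in experts exist only to make the routing visible.

```python
# Schematic of Wan2.2's two-expert denoising: a high-noise expert covers
# early (noisy) timesteps and a low-noise expert covers late ones, so
# only ~14B of the 27B total parameters run at any single step. The
# boundary_ratio below is illustrative, not the published split point.
def select_expert(t: int, high_noise_expert, low_noise_expert,
                  boundary_ratio: float = 0.875,
                  num_train_timesteps: int = 1000):
    """Route a denoising step to one of the two experts by noise level."""
    boundary = boundary_ratio * num_train_timesteps
    return high_noise_expert if t >= boundary else low_noise_expert

# Toy demonstration with stand-in experts:
high = lambda t: f"high-noise expert handles t={t}"
low = lambda t: f"low-noise expert handles t={t}"
for t in (999, 900, 500, 50):  # descending noise levels
    print(select_expert(t, high, low)(t))
```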
Model Limitations:
The base model was trained on internet-scale image and video data that may contain societal biases or undesirable content patterns. Therefore, the model may amplify those biases and may generate videos that are inaccurate, inconsistent with the prompt, low quality, or inappropriate, even when prompts are benign. Generated outputs can also reflect limitations in motion coherence, temporal consistency, and prompt adherence. This model is not designed for factual information generation or safety-critical applications without additional safeguards and testing.
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please make sure you have the proper rights and permissions for all input image and video content; if an image or video includes people, personal health information, or intellectual property, the generated output will not blur or preserve the proportions of the subjects included.
Please report model quality, risk, security vulnerabilities, or NVIDIA AI concerns here.