---
license: apache-2.0
base_model:
- Qwen/Qwen3-Coder-30B-A3B-Instruct
---

## Description
NVFP4 quantization of [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) produced with [TensorRT-Model-Optimizer](https://github.com/NVIDIA/Model-Optimizer). The KV cache is additionally quantized to FP8 for compatibility with common inference backends.
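As a sketch, a checkpoint like this can typically be served with an inference backend that understands ModelOpt-quantized weights, for example vLLM. The repo ID placeholder below is hypothetical (this card does not state the published model ID), and the flags are assumptions based on vLLM's CLI; consult your backend's documentation for the exact options it supports:

```shell
# Hypothetical serving command; replace <your-org>/<this-model-repo>
# with the actual Hub ID of this quantized checkpoint.
# --kv-cache-dtype fp8 matches the FP8 KV cache described above;
# the NVFP4 weight format is usually auto-detected from the
# checkpoint's quantization config.
vllm serve <your-org>/<this-model-repo> \
  --kv-cache-dtype fp8
```

Backend support for NVFP4 varies by version and GPU architecture, so verify that your installed backend and hardware support the format before deploying.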