vincentzed-hf committed on
Commit 86975c2 · verified · 1 Parent(s): fc3a2e9

Add NVFP4 model card for Qwen3.5-2B-Base-NVFP4

---
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen3.5-2B-Base
license: apache-2.0
library_name: transformers
tags:
- AxionML
- ModelOpt
- Qwen3.5
- quantized
- NVFP4
- nvfp4
- sglang
---

# AxionML Qwen3.5-2B-Base-NVFP4

> Developed by [AxionML](https://huggingface.co/AxionML) for open-source serving and deployment use cases. Part of AxionML's effort to provide ready-to-serve quantized models for the community.

This is an NVFP4-quantized version of [Qwen/Qwen3.5-2B-Base](https://huggingface.co/Qwen/Qwen3.5-2B-Base) (2B parameters), quantized using [NVIDIA TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer). Weights and activations of linear layers are quantized to FP4, reducing disk size and GPU memory by ~4x compared to BF16.

**About NVFP4 quantization:** NVFP4 on Blackwell couples a compact E2M1 FP4 codebook with blockwise FP8 (E4M3) scaling over 16-element micro-blocks, so that 4-bit stored values remain numerically useful for neural-network computation. The E2M1 codebook provides a small, nonuniform set of representable magnitudes up to ±6 and relies on saturating behavior rather than IEEE NaN/Inf encodings to maximize usable range per bit. Using an FP8 block scale (rather than power-of-two-only E8M0) enables fractional scales and error-minimizing scale selection strategies such as dual-pass evaluation comparing "map max to 6" versus "map max to 4 with clipping." On Blackwell Tensor Cores, native FP4 multipliers exploit E2M1 simplicity to reduce multiplier area while higher-precision FP32 accumulation protects dot-product accuracy.
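
The mechanics above can be sketched in plain Python. This is an illustrative model of blockwise E2M1 quantization with dual-pass scale selection, not NVIDIA's implementation: the block scale is kept as a plain float here (real NVFP4 rounds it to FP8 E4M3), and actual Tensor Cores operate on packed 4-bit codes rather than dequantized floats.

```python
# E2M1 (FP4) representable magnitudes: a small, nonuniform codebook that
# saturates at +/-6 instead of spending encodings on NaN/Inf.
E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block, target_max):
    """Scale a 16-element block so its amax maps to `target_max`, round each
    magnitude to the nearest E2M1 code, then dequantize for comparison."""
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return list(block)
    scale = amax / target_max  # real NVFP4 stores this scale as FP8 (E4M3)
    nearest = lambda m: min(E2M1, key=lambda c: abs(m - c))
    return [(1 if v >= 0 else -1) * nearest(abs(v) / scale) * scale
            for v in block]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def nvfp4_dual_pass(block):
    """Dual-pass scale selection: try both candidate scales ('amax maps to 6'
    vs 'amax maps to 4') and keep whichever minimizes block MSE."""
    cands = [quantize_block(block, 6.0), quantize_block(block, 4.0)]
    return min(cands, key=lambda q: mse(block, q))

block = [0.02, -1.3, 0.4, 2.9, -0.6, 0.07, 1.1, -0.9,
         0.3, -2.1, 0.8, -0.1, 1.7, -0.25, 0.5, 3.4]
print(round(mse(block, nvfp4_dual_pass(block)), 4))
```

By construction, the dual-pass result is never worse than either single scale choice on a given block, which is the point of the error-minimizing selection described above.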

> **Ready for commercial and non-commercial use under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).**

Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Qwen3.5 represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement-learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

## Qwen3.5 Highlights

Qwen3.5 features the following enhancements:

- **Unified Vision-Language Foundation**: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

- **Efficient Hybrid Architecture**: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.

- **Scalable RL Generalization**: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.

- **Global Linguistic Coverage**: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

- **Next-Generation Training Infrastructure**: Near-100% multimodal training efficiency compared to text-only training, and asynchronous RL frameworks supporting massive-scale agent scaffolds and environment orchestration.

For more details, please refer to our blog post [Qwen3.5](https://qwen.ai/blog?id=qwen3.5).

## Model Overview

- Type: Causal Language Model with Vision Encoder
- Training Stage: Pre-training & Post-training
- Language Model
  - Number of Parameters: 2B
  - Hidden Dimension: 2048
  - Token Embedding: 248320 (Padded)
  - Number of Layers: 24
  - Hidden Layout: 6 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))
  - Gated DeltaNet:
    - Number of Linear Attention Heads: 16 for V and 16 for QK
    - Head Dimension: 128
  - Gated Attention:
    - Number of Attention Heads: 8 for Q and 2 for KV
    - Head Dimension: 256
    - Rotary Position Embedding Dimension: 64
  - Feed Forward Network:
    - Intermediate Dimension: 6144
  - LM Output: 248320 (Tied to token embedding)
  - MTP: trained with multiple steps
- Context Length: 262,144 tokens natively, extensible up to 1,010,000 tokens.
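
The hybrid layer layout described above can be sanity-checked in a few lines of Python (a sketch of the block ordering only, not the actual model code):

```python
# 6 repetitions of (3 Gated DeltaNet blocks, then 1 Gated Attention block),
# each followed by its FFN, giving the stated 24 layers.
layout = (["gated_delta_net"] * 3 + ["gated_attention"]) * 6
assert len(layout) == 24
print(layout.count("gated_delta_net"), layout.count("gated_attention"))  # 18 6
```

So 18 of the 24 layers use linear attention (Gated DeltaNet) and only 6 use full gated attention, which is where the throughput advantage of the hybrid design comes from.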

### Citation

If you find our work helpful, feel free to cite it.

```bibtex
@misc{qwen3.5,
    title = {{Qwen3.5}: Towards Native Multimodal Agents},
    author = {{Qwen Team}},
    month = {February},
    year = {2026},
    url = {https://qwen.ai/blog?id=qwen3.5}
}
```

## Quantization Details

This model was quantized by applying NVFP4 to the weights and activations of linear operators within transformer blocks. The KV cache is not quantized. Vision encoder weights are kept in their original precision.

- **Quantization format:** NVFP4 (MLP-only, MSE calibration)
- **Calibration dataset:** [Nemotron-Post-Training-Dataset-v2](https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v2)
- **Quantized checkpoint size:** ~1.5 GB
- **Tool:** [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer)
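
The ~4x footprint reduction can be checked with a back-of-envelope estimate. This is a rough weight-only calculation under stated assumptions (2B quantized parameters, one 1-byte FP8 scale per 16-element block); it ignores per-tensor scales and the unquantized embedding and vision weights, which is why the real checkpoint lands closer to ~1.5 GB:

```python
params = 2.0e9                                  # assumed quantized parameter count
bf16_gb = params * 2 / 1e9                      # 2 bytes per BF16 weight
nvfp4_gb = (params * 0.5 + params / 16) / 1e9   # 4-bit codes + 1-byte scale per 16-block
print(f"BF16 ~{bf16_gb:.1f} GB, NVFP4 ~{nvfp4_gb:.2f} GB, "
      f"ratio ~{bf16_gb / nvfp4_gb:.1f}x")
```

The block scales add only 0.5 bits per parameter of overhead, so the weight-only ratio stays close to the headline 4x.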

## Usage

### Deploy with SGLang

```bash
python3 -m sglang.launch_server \
  --model-path AxionML/Qwen3.5-2B-Base-NVFP4 \
  --quantization modelopt_fp4 \
  --tp 1 \
  --reasoning-parser qwen3
```
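
Once launched, the server exposes an OpenAI-compatible HTTP API (port 30000 is SGLang's default). A minimal stdlib-only client sketch; the prompt and `max_tokens` value are illustrative placeholders:

```python
import json
import urllib.request

# Example request against the server launched above; assumes the default
# SGLang endpoint on localhost:30000.
payload = {
    "model": "AxionML/Qwen3.5-2B-Base-NVFP4",
    "messages": [{"role": "user", "content": "Summarize NVFP4 in one sentence."}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:30000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
except OSError as exc:  # server not running or unreachable
    print(f"request failed: {exc}")
```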

### Reproduce with ModelOpt

```bash
python3 examples/llm_ptq/hf_ptq.py \
  --pyt_ckpt_path Qwen/Qwen3.5-2B-Base \
  --qformat nvfp4_mse \
  --export_path ./qwen3.5-2b-base-nvfp4
```

## Limitations

The base model was trained on data that may contain toxic language and societal biases. The quantized model inherits these limitations. It may generate inaccurate, biased, or offensive content. Please refer to the [original model card](https://huggingface.co/Qwen/Qwen3.5-2B-Base) for full details.