What does “Lite” mean?
Other than the size difference (from 373 GB to 339 GB), what else is different? What does “Lite” mean here? Thanks!
Size difference only: more layers have been quantized to 4-bit.
Here’s the catch:
The DeepSeek V3 series generally performs poorly under 4-bit integer quantization (it can’t maintain stable outputs).
To make AWQ quants still work, we usually keep some critical layers unquantized so that the model can output properly.
Those variants are still named with the -AWQ suffix (yes, the naming is a bit loose here) and typically take up around 373 GB.
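To make the “keep some critical layers unquantized” idea concrete, here is a minimal sketch of how a quantizer might filter layers against a skip-list. The pattern names below are hypothetical placeholders, not the actual DeepSeek V3 skip set or any specific library’s config key:

```python
# Hedged sketch: decide which layers get 4-bit quantized.
# SKIP_PATTERNS is illustrative only; real AWQ configs expose a
# similar idea (a list of modules excluded from conversion).
SKIP_PATTERNS = ["embed", "lm_head", "norm"]

def should_quantize(layer_name: str) -> bool:
    """Quantize a layer unless its name matches a critical pattern."""
    return not any(p in layer_name for p in SKIP_PATTERNS)

# Hypothetical layer names for demonstration.
layers = ["model.embed_tokens", "model.layers.0.mlp.gate_proj", "lm_head"]
to_quantize = [name for name in layers if should_quantize(name)]
# Only the MLP projection passes the filter; embeddings and the
# output head stay in higher precision.
```

The fewer patterns in the skip-list, the more of the checkpoint ends up at 4-bit, which is exactly what distinguishes the Lite variant below.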
Some of the DeepSeek V3 NVFP4 repos on HF actually keep even more layers unquantized (over 400 GB total size),
yet this is not reflected in their naming, which makes things a bit confusing overall.
However, some of these V3 variants surprisingly work fine under the regular AWQ strategy (all the intended layers quantized).
So we released those as AWQ-Lite: same architecture, but with more layers fully quantized to 4-bit, resulting in a smaller size (≈ 339 GB).
That said, the DeepSeek V3 series still has unresolved issues with quantization quality, and we haven’t seen much improvement so far.