ChibuUkachi committed on
Commit 31ec501 · verified · 1 Parent(s): d9619a0

update quantization message

Files changed (1): README.md +5 -5
README.md CHANGED
```diff
@@ -34,12 +34,12 @@ tags:
 ### Model Optimizations
 
 This model was obtained by quantizing the weights of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) to INT8 data type.
-This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.
-
-Only the weights of the linear operators within transformers blocks are quantized.
-Weights are quantized using a symmetric per-group scheme, with group size 128.
-The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
+This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements (by approximately 50%) and increasing matrix-multiply compute throughput (by approximately 2x).
+Weight quantization also reduces disk size requirements by approximately 50%.
+
+Only weights and activations of the linear operators within transformers blocks are quantized.
+Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme.
+A combination of the [SmoothQuant](https://arxiv.org/abs/2211.10438) and [GPTQ](https://arxiv.org/abs/2210.17323) algorithms is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
 
 ## Deployment
```
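The scheme the updated text describes (symmetric static per-channel INT8 for weights, symmetric dynamic per-token INT8 for activations) can be sketched in plain NumPy. This is an illustrative round-to-nearest sketch only, with hypothetical helper names; the actual model was produced with SmoothQuant and GPTQ via llm-compressor, which choose quantized weights far more carefully than this.

```python
import numpy as np

def quantize_weights_per_channel(w):
    """Symmetric static per-channel INT8 weight quantization (sketch).

    One scale per output channel (row), computed once offline from the
    weight tensor; the zero-point is fixed at 0 (symmetric).
    """
    # Map each channel's [-max|w|, +max|w|] range onto [-127, 127].
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_activations_per_token(x):
    """Symmetric dynamic per-token INT8 activation quantization (sketch).

    "Dynamic" means the scales are recomputed at runtime from each
    token's activation vector rather than calibrated in advance.
    """
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# A linear layer y = x @ w.T evaluated in INT8, then dequantized:
# y ≈ (q_x @ q_w.T) * (s_x * s_w.T), with the matmul accumulated in int32.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)   # [out_features, in_features]
x = rng.standard_normal((3, 8)).astype(np.float32)   # [tokens, in_features]

q_w, s_w = quantize_weights_per_channel(w)
q_x, s_x = quantize_activations_per_token(x)
y_int8 = (q_x.astype(np.int32) @ q_w.T.astype(np.int32)) * (s_x * s_w.T)
y_fp = x @ w.T
print(np.max(np.abs(y_int8 - y_fp)))  # small quantization error
```

The int32 accumulation mirrors how INT8 tensor-core matmuls work in practice, which is where the roughly 2x throughput gain over 16-bit matmuls comes from.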