=== VPTQ Quantization Configuration ===
Date: 2026-02-22 19:40:24 UTC
|
|
Target: ~2-bit quantization with Residual VQ
Model: speakleash/Bielik-11B-v2.3-Instruct (MistralForCausalLM)
|
|
Parameters:
--model_name /workspace/models/bielik-11b-instruct
--output_dir /workspace/variant-e/output/
--vector_lens -1 8          # Skip embeddings (-1), quantize in vectors of 8
--group_num 1               # Single group
--num_centroids -1 65536    # 2^16 centroids = 16 bits per vector index
--num_res_centroids -1 256  # 2^8 residual centroids
--npercent 0                # No outlier channels
--blocksize 128             # Block size for quantization
--new_eval                  # Use new evaluation method
--seq_len 4096              # Sequence length for calibration
--kmeans_mode hessian       # Hessian-weighted k-means clustering
--num_gpus 1                # Single GPU
--enable_perm               # Channel permutation for better quantization
--enable_norm               # Channel normalization
--save_model                # Save quantized model
--save_packed_model         # Save packed model for inference
--hessian_path /workspace/hessians/quip-format/hessians
--kiter 100                 # K-means iterations
--ktol 1e-5                 # K-means convergence tolerance
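The `--kmeans_mode hessian` flag requests Hessian-weighted clustering. As a toy illustration only (function name, weighting scheme, and initialization are assumptions, not the tool's actual implementation), one weighted assignment/update loop can be sketched like this, with per-dimension weights standing in for the Hessian diagonal:

```python
import numpy as np

def hessian_weighted_kmeans(vecs, w, k, iters=100, tol=1e-5, seed=0):
    """Toy Hessian-weighted k-means sketch (hypothetical helper):
    squared error in each coordinate is scaled by w, e.g. a Hessian
    diagonal, before choosing the nearest centroid.
    vecs: (n, d) vectors to cluster; w: (d,) positive weights."""
    rng = np.random.default_rng(seed)
    cent = vecs[rng.choice(len(vecs), k, replace=False)]
    assign = np.zeros(len(vecs), dtype=int)
    for _ in range(iters):
        # weighted squared distance from every vector to every centroid
        d2 = (((vecs[:, None, :] - cent[None]) ** 2) * w).sum(-1)
        assign = d2.argmin(1)
        # per-dimension weights shared by all points factor out of the
        # per-cluster objective, so the plain mean is still the minimizer
        new = np.stack([
            vecs[assign == j].mean(0) if (assign == j).any() else cent[j]
            for j in range(k)
        ])
        if np.abs(new - cent).max() < tol:
            cent = new
            break
        cent = new
    return cent, assign
```

The `iters=100` and `tol=1e-5` defaults mirror the `--kiter` and `--ktol` values above.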
|
|
Effective bitwidth calculation:
Primary:  log2(65536) / 8 = 16/8 = 2.0 bits/weight
Residual: log2(256) / 8 = 8/8 = 1.0 bit/weight
Total: 2.0 + 1.0 = 3.0 bits/weight of index storage, since each vector stores both a primary and a residual index; the codebooks add only a small amortized overhead on top. This configuration therefore lands at ~3 bits/weight rather than the stated ~2-bit target.
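The arithmetic above can be reproduced directly from the flag values; this is a sketch covering index bits only (codebook storage is ignored, and the helper name is hypothetical):

```python
import math

def index_bits_per_weight(vector_len, num_centroids, num_res_centroids=None):
    """Per-weight index cost of (residual) vector quantization:
    each length-v vector stores one primary index plus, when residual
    VQ is enabled, one residual index."""
    bits = math.log2(num_centroids)
    if num_res_centroids:
        bits += math.log2(num_res_centroids)
    return bits / vector_len

print(index_bits_per_weight(8, 65536))       # primary only: 2.0
print(index_bits_per_weight(8, 65536, 256))  # with residual: 3.0
```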
|
|
Hessian source: Jakubrd4/bielik-quip-e8p12
Format: QuIP# (flatH + mu + n + ct per layer)
Calibration: CulturaX-PL, 512 samples x 4096 tokens
Layers: 50 x 4 projections (qkv, o, up, down) = 200 files
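The QuIP#-format files store the Hessian as a packed triangle (`flatH`) alongside the running mean `mu` and sample count `n`. A minimal sketch of unpacking such a layout back into a full symmetric matrix, assuming row-major upper-triangle packing with the diagonal included (the exact layout should be verified against the code that produced the files):

```python
import numpy as np

def flat_to_sym(flat, n):
    """Rebuild a full symmetric n x n matrix from a flattened upper
    triangle (diagonal included, row-major), as assumed for 'flatH'."""
    H = np.zeros((n, n), dtype=flat.dtype)
    iu = np.triu_indices(n)
    H[iu] = flat
    H.T[iu] = flat  # mirror the values into the lower triangle
    return H

def sym_to_flat(H):
    """Inverse operation: pack the upper triangle of a symmetric matrix."""
    return H[np.triu_indices(H.shape[0])]
```

Packing the upper triangle roughly halves on-disk size versus storing the full matrix, which is presumably why the format does it.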
|
|