name: 07_w4a16_gemm
display_name: "W4A16 Weight-only Quantized GEMM"
precision: int4_bf16
regime: memory # decode-dominant; M=1 is bandwidth-bound on the int4 weight stream
# Dense-equivalent FLOPs (matmul work, ignoring dequant arithmetic).
flops_formula: "2 * M * N * K"
# Bytes moved per call (memory roofline):
#   x:      M*K*2        (bf16 activations, streamed in once)
#   w_q:    (K/2)*N      (packed int4, 0.5 B/elem)
#   scales: (K/128)*N*2  (bf16 scales)
#   zeros:  (K/128)*N*2  (bf16 zero-points)
#   out:    M*N*2        (bf16 store)
bytes_formula: "M*K*2 + (K/2)*N + (K/128)*N*2 + (K/128)*N*2 + M*N*2"
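# Worked example (illustrative shape only, not part of the benchmark spec):
# for M=1, N=K=4096, group size 128:
#   flops = 2*1*4096*4096                                  = 33,554,432
#   bytes = 8,192 + 8,388,608 + 262,144 + 262,144 + 8,192  = 8,929,280
#   arithmetic intensity ~ 3.8 FLOP/byte, so the kernel stays bandwidth-bound,
#   consistent with `regime: memory` above.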
hardware: [RTX_PRO_6000]
peak_tflops_key: bf16
peak_bandwidth_key: dram
tolerance:
  bfloat16: 0.10 # group-quant adds noise on top of bf16 accumulator slop
# Forbidden ops -- agent must write the unpack + GEMM themselves, not call a
# vendor library that does both.
forbidden:
- "bitsandbytes.functional.dequantize_4bit"
- "bitsandbytes.functional.gemv_4bit"
- "marlin_kernel.gemm"
- "torch.nn.functional.linear"
sota:
name: "bitsandbytes NF4 (gemv_4bit / dequantize_4bit + matmul)"
url: "https://github.com/TimDettmers/bitsandbytes"
function: "bitsandbytes.functional.gemv_4bit"
notes: |
Marlin (IST-DASLab) is the W4A16 SOTA on Ampere/Hopper but does not have
SM120 (Blackwell consumer) kernels yet. GPTQ-Triton is unmaintained and
does not target SM120. bitsandbytes 0.49.2 *does* run on SM120 -- it
autotunes its CUDA kernels for compute capability 12.0 -- so we use its
NF4 path (different quant scheme but same regime) as the SOTA reference
line. Note that bnb's NF4 is symmetric/non-uniform; our reference uses
AWQ-style asymmetric int4 with explicit zero-points, which is what the
agent must implement. The SOTA line is informational only.
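# Hedged sketch of the per-group dequant the agent is expected to implement
# (AWQ-style asymmetric int4, group size 128 along K, as described in the sota
# notes above; tensor names and the nibble unpacking helper are illustrative
# assumptions, not fixed by this config):
#   w_int = unpack_int4(w_q)                 # (K, N) integer values in [0, 15]
#   g     = arange(K) // 128                 # group index for each row along K
#   w     = (w_int - zeros[g]) * scales[g]   # dequantize to bf16, shape (K, N)
#   y     = x @ w                            # bf16 GEMM, shape (M, N)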
deps:
- "bitsandbytes>=0.49.2"
num_correct_trials: 3
num_perf_trials: 50