WilhelmT committed · Commit 7684592 · verified · Parent(s): 3ae76e2

Update README.md

Files changed (1): README.md (+1 -1)

README.md CHANGED
```diff
@@ -39,7 +39,7 @@ FlashHead matches the baseline **Llama-3.2-1B** within rounding on standard eval
 ## Optimizations

 - **FlashHead LM Head** - lightweight replacement for the dense LM head, significantly improving throughput.
-- **Mixed-Precision Quantization (W4A16)** - optimal balance of memory footprint and accuracy.
+- **Quantization (W4A16)** - large reduction in memory footprint with minimal accuracy loss.
 - **Custom Runtime Integration** - compatible with **vLLM (0.10.2)** via the `embedl-models` package.

 ---
```
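
For readers unfamiliar with the W4A16 label in the changed bullet, here is a toy numpy sketch of the idea (this is an illustration only, not the model's actual quantizer or kernel): weights are stored as 4-bit integers with a per-row scale, while activations and the matmul stay in 16-bit floating point.

```python
import numpy as np

def quantize_w4(w):
    """Symmetric per-row 4-bit quantization: integers in [-8, 7] plus a scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # map max |w| to 7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def w4a16_matmul(x_fp16, q, scale):
    """A16 side: keep activations in fp16, dequantize weights on the fly."""
    w_deq = q.astype(np.float16) * scale.astype(np.float16)
    return x_fp16 @ w_deq.T

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)   # toy weight matrix
x = rng.standard_normal((2, 8)).astype(np.float16)   # toy activations

q, scale = quantize_w4(w)
y_quant = w4a16_matmul(x, q, scale)
y_full = x.astype(np.float32) @ w.T

# 4-bit storage: only 16 distinct levels per weight
assert q.min() >= -8 and q.max() <= 7
# Quantized output stays close to the full-precision result
assert np.abs(y_quant.astype(np.float32) - y_full).max() < 2.0
```

The memory saving comes from storing `q` (4 bits per weight, here held in int8 for simplicity) instead of 16- or 32-bit floats; the accuracy cost is the rounding error visible in the final comparison.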