WilhelmT committed on
Commit 10e648a · verified · 1 Parent(s): ca55dec

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ Designed for **low-latency inference** on **NVIDIA RTX GPUs**, leveraging:
 - Quantization (W4A16)
 - Custom vLLM generation via `embedl-models`
 
-FlashHead matches the baseline **Llama-3.2-3B-Instruct** within rounding on standard evaluations (MMLU-Pro, HellaSwag, GSM8K, etc.) and, in combination with quantization, achieves **H200-level latency** on **RTX Ada** GPUs.
+FlashHead matches the Llama-3.2-3B-Instruct baseline within rounding error on common benchmarks (MMLU-Pro, HellaSwag, GSM8K, etc.) and, combined with quantization, delivers SOTA on-device latency.
 
 ---