Quantization Dominates Rank Reduction for KV-Cache Compression
KV cache optimized with fraQtl: 3.5x less KV-cache memory during inference.
Note: The model file is the same size as the original (~2.2 GB). The optimization modifies the V projection weights so that the KV cache uses less GPU memory at inference time; the savings occur at runtime, not at download.
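For intuition, here is a minimal sketch of what runtime INT3 quantization of cached values can look like. This is an illustrative stand-in, not the fraQtl implementation: the function names, per-row scaling, and tensor shapes are assumptions, and `int8` is used only as a storage container (real 3-bit savings would require bit-packing).

```python
import torch

def quantize_int3(x: torch.Tensor):
    """Symmetric per-row INT3 quantization: integer values in [-4, 3]."""
    scale = x.abs().amax(dim=-1, keepdim=True) / 4.0  # map max magnitude to the 3-bit range
    scale = scale.clamp(min=1e-8)                     # guard against divide-by-zero
    q = torch.clamp(torch.round(x / scale), -4, 3).to(torch.int8)  # int8 as a container only
    return q, scale

def dequantize_int3(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct approximate values at attention time."""
    return q.float() * scale

# A toy value-cache slice: (batch, kv_heads, seq_len, head_dim).
v = torch.randn(1, 4, 128, 64)
q, scale = quantize_int3(v)
v_hat = dequantize_int3(q, scale)
print((v - v_hat).abs().mean())  # small reconstruction error
```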
| Metric | Value |
|---|---|
| Base model | TinyLlama/TinyLlama-1.1B-Chat-v1.0 |
| File size | Same as original (~2.2 GB) |
| Perplexity (before) | 15.5249 |
| Perplexity (after) | 15.8782 |
| Delta | +0.353 (weight-level) |
| Config | k=16, INT3 |
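The k=16 setting presumably refers to a rank-16 factorization of the V projection (the "rank reduction" in the paper's title). Below is a hedged sketch of how such a truncation could be derived via SVD; the weight shape, helper name, and caching note are illustrative assumptions, not fraQtl's actual procedure.

```python
import torch

def truncate_rank(W: torch.Tensor, k: int = 16) -> tuple[torch.Tensor, torch.Tensor]:
    """Split a projection weight W (out_dim, in_dim) into two rank-k factors.

    Caching the k-dimensional intermediate activations instead of the full
    projection output is what would shrink the KV cache at runtime.
    """
    U, S, Vh = torch.linalg.svd(W.float(), full_matrices=False)
    A = U[:, :k] * S[:k]  # (out_dim, k), singular values folded into the left factor
    B = Vh[:k, :]         # (k, in_dim)
    return A, B

W = torch.randn(256, 2048)  # illustrative v_proj-like shape
A, B = truncate_rank(W, k=16)
W_approx = A @ B            # rank-16 approximation of W
```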
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the compressed checkpoint like any standard Hugging Face model.
model = AutoModelForCausalLM.from_pretrained("fraQtl/TinyLlama-1.1B-compressed")
tokenizer = AutoTokenizer.from_pretrained("fraQtl/TinyLlama-1.1B-compressed")
```
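From there, generation works through the standard transformers API; the prompt and decoding parameters below are illustrative:

```python
prompt = "Explain KV-cache compression in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```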
Our runtime compression achieves significantly better results on larger models. Contact us for integration.
fraqtl.ai | contact@fraqtl.ai | Patent pending. Paper: arXiv:2604.11501