# Gemma 4 31B-it - TurboQuant KV Cache
TurboQuant KV-cache quantization applied to google/gemma-4-31B-it, dramatically reducing memory usage during inference without modifying the model weights.
This repository provides the TurboQuant KV-cache configuration for Gemma 4 31B-it. The model weights remain at their original precision; only the key-value cache is quantized at runtime.
## Model Specifications
| Property | Value |
|---|---|
| Base Model | google/gemma-4-31B-it |
| Parameters | 31 billion |
| Architecture | Dense transformer |
| Modality | Multimodal: image + text input, text output |
| License | Apache 2.0 |
| Quantization | TurboQuant KV-cache only (weights unchanged) |
## Quickstart
```python
from PIL import Image

from turboquant import TurboQuantCache
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "google/gemma-4-31B-it"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

# Apply TurboQuant KV-cache quantization (model weights are untouched)
cache = TurboQuantCache(model)

image = Image.open("image.jpg")  # path to your input image

inputs = processor(text="Describe this image.", images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, past_key_values=cache)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
## What is TurboQuant?
TurboQuant (arXiv: 2504.19874) is a KV-cache quantization technique that compresses the key-value cache used during autoregressive generation. Instead of quantizing the model weights, TurboQuant targets the memory bottleneck of the KV cache, which grows linearly with sequence length and batch size.
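To see why the KV cache is the bottleneck, consider its size: every generated token stores a key and a value vector per layer per KV head. A quick sketch of the arithmetic (the dimensions below are illustrative placeholders, not the published Gemma config):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch_size, bytes_per_elem):
    """Total bytes for keys + values across all layers."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem  # 2 = key + value
    return per_token * seq_len * batch_size

# Hypothetical dimensions for illustration only:
fp16_cache = kv_cache_bytes(48, 8, 128, 32_768, 1, 2)    # FP16 cache
int4_cache = kv_cache_bytes(48, 8, 128, 32_768, 1, 0.5)  # ~4-bit quantized cache
print(f"FP16: {fp16_cache / 2**30:.1f} GiB, 4-bit: {int4_cache / 2**30:.1f} GiB")
```

Doubling the sequence length or batch size doubles this footprint, which is why compressing the cache directly extends usable context.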
Key benefits:
- No weight modification -- model weights stay at original precision
- Reduced inference memory -- KV cache is compressed significantly
- Longer context windows -- fit more tokens in the same GPU memory
- Minimal quality loss -- carefully designed quantization preserves generation quality
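TurboQuant's actual quantizer is described in the paper. As a rough illustration of what KV-cache quantization does in general, here is a generic symmetric per-token int8 round-trip — a sketch only, not the TurboQuant algorithm:

```python
import numpy as np

def quantize_kv(x, bits=8):
    """Symmetric per-token quantization of a (tokens, head_dim) KV slice."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max(axis=-1, keepdims=True) / qmax  # one scale per token
    scale = np.where(scale == 0, 1.0, scale)              # guard all-zero rows
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_kv(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
keys = rng.standard_normal((4, 128)).astype(np.float32)
q, s = quantize_kv(keys)
err = np.abs(dequantize_kv(q, s) - keys).max()  # bounded by ~scale / 2 per element
```

Storing `q` (int8) plus one float scale per token is what yields the ~4x memory reduction over FP16 at 8 bits; lower bit widths trade more reconstruction error for more savings.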
## KV-Cache Quantization Comparison
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | Baseline | Baseline | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
## Memory Estimates (Gemma 4 31B-it)
| Precision | Approximate Size |
|---|---|
| FP16 (original) | ~62 GB |
| 8-bit quantized | ~31 GB |
| 4-bit quantized | ~17 GB |
| 2-bit quantized | ~9 GB |
Note: these estimates are for weight quantization and are shown for context only. This repository applies KV-cache quantization, so the weights occupy memory at whatever precision you load them in; the savings come from the compressed KV cache during generation.
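The table follows from simple bits-per-parameter arithmetic. The 4-bit and 2-bit figures come out slightly above the raw product because, in typical weight-quantization schemes, per-group scales and some layers (e.g. embeddings) are kept at higher precision:

```python
PARAMS = 31e9  # 31 billion parameters

def weight_gb(bits_per_param):
    """Approximate raw model-weight footprint in GB at a given bit width."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"FP16:  {weight_gb(16):.0f} GB")   # matches ~62 GB
print(f"8-bit: {weight_gb(8):.0f} GB")    # matches ~31 GB
print(f"4-bit: {weight_gb(4):.1f} GB")    # raw figure; metadata pushes it toward ~17 GB
```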
## See Also
- google/gemma-4-31B-it -- Base model
- majentik/gemma-4-31B-it-RotorQuant -- RotorQuant KV-cache variant
- majentik/gemma-4-31B-it-TurboQuant-MLX-8bit -- MLX 8-bit weight-quantized variant
- majentik/gemma-4-31B-it-TurboQuant-MLX-4bit -- MLX 4-bit weight-quantized variant
- majentik/gemma-4-31B-it-TurboQuant-MLX-2bit -- MLX 2-bit weight-quantized variant
- TurboQuant Paper (arXiv: 2504.19874)