---
base_model: google/gemma-4-31B
library_name: mlx
tags:
- rotorquant
- kv-cache-quantization
- gemma
- gemma4
- multimodal
- quantized
- mlx
- 4bit
license: apache-2.0
pipeline_tag: image-text-to-text
language:
- en
---

# Gemma 4 31B - RotorQuant MLX 4-bit

**4-bit weight-quantized MLX version** of [google/gemma-4-31B](https://huggingface.co/google/gemma-4-31B) with RotorQuant KV-cache quantization. Optimized for Apple Silicon inference via the [MLX](https://github.com/ml-explore/mlx) framework. RotorQuant delivers 5.3x faster prefill and 28% faster decode compared to TurboQuant.

Model size: **~17 GB**

## Model Specifications

| Property | Value |
|---|---|
| **Base Model** | [google/gemma-4-31B](https://huggingface.co/google/gemma-4-31B) |
| **Parameters** | 31 billion (dense transformer) |
| **Architecture** | Dense transformer (not MoE) |
| **Modality** | Multimodal: image + text input, text output |
| **License** | Apache 2.0 |
| **Weight Quantization** | 4-bit (~17 GB) |
| **KV-Cache Quantization** | RotorQuant |
| **Framework** | MLX (Apple Silicon) |

## Quickstart

```python
from mlx_lm import load, generate

# mlx_lm handles text-only generation; see below for image input via mlx_vlm
model, tokenizer = load("majentik/gemma-4-31B-RotorQuant-MLX-4bit")

prompt = "Explain KV-cache quantization in two sentences."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

For multimodal usage with images:

```python
from mlx_vlm import load, generate

model, processor = load("majentik/gemma-4-31B-RotorQuant-MLX-4bit")

prompt = "What do you see in this image?"
output = generate(model, processor, prompt=prompt, image="path/to/image.jpg", max_tokens=512)
print(output)
```

## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) is a high-performance KV-cache quantization method that achieves significantly higher throughput than TurboQuant. Combined with 4-bit weight quantization in MLX, it yields a dual compression strategy: smaller model weights for a reduced disk and memory footprint, plus a compressed KV cache for efficient long-context generation.

Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings
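
RotorQuant's actual transform is described in its repository; as a minimal illustration of what any KV-cache quantizer must do, here is generic asymmetric round-to-nearest quantization of one group of cache values. This is a plain-Python sketch of the general technique, not RotorQuant's algorithm:

```python
def quantize(xs, bits=4):
    """Asymmetric round-to-nearest quantization of one group of values."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid div-by-zero for constant groups
    q = [min(2**bits - 1, max(0, round((x - lo) / scale))) for x in xs]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Reconstruct approximate values from codes plus per-group scale/offset."""
    return [v * scale + lo for v in q]

group = [-1.0, -0.25, 0.0, 0.5, 0.75, 1.0, 1.5, 2.0]  # mock KV-cache slice
q, scale, lo = quantize(group)
recon = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(group, recon))
# Each 4-bit code replaces a 16-bit value: ~4x cache shrink,
# minus a small per-group overhead for the scale and offset.
```

The reconstruction error of round-to-nearest is bounded by half a quantization step per group; rotation-based schemes aim to make value distributions friendlier before this step.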

## KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv: 2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x faster** | **28% faster** | High | [GitHub](https://github.com/scrya-com/rotorquant) |

## Memory Estimates (Gemma 4 31B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| FP16 (original) | ~62 GB | -- |
| 8-bit quantized | ~31 GB | [RotorQuant-MLX-8bit](https://huggingface.co/majentik/gemma-4-31B-RotorQuant-MLX-8bit) |
| **4-bit quantized** | **~17 GB** | **This model** |
| 2-bit quantized | ~9 GB | [RotorQuant-MLX-2bit](https://huggingface.co/majentik/gemma-4-31B-RotorQuant-MLX-2bit) |
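
The 4-bit figure can be sanity-checked. MLX quantization stores, per group of weights, an fp16 scale and bias alongside the packed values, which adds roughly 0.5 bits per weight at the default group size of 64 (both defaults are assumptions here):

```python
params = 31e9                          # Gemma 4 31B parameter count
bits = 4
group_size = 64                        # assumed MLX default quantization group size
overhead = 2 * 16 / group_size         # fp16 scale + bias per group: ~0.5 extra bits/weight
size_gb = params * (bits + overhead) / 8 / 1e9
print(f"~{size_gb:.1f} GB")            # ≈ 17.4 GB, consistent with the ~17 GB figure
```

The 8-bit and 2-bit table entries follow from the same arithmetic with `bits` set accordingly.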

## Hardware Requirements

This model needs approximately 17 GB of unified memory for the weights alone; leave additional headroom for the KV cache, activations, and the OS. Recommended hardware:
- Apple M1 Pro (32 GB+)
- Apple M2 Pro (32 GB+)
- Apple M3 Pro (36 GB+)
- Apple M4 Pro (24 GB+)
- Any Apple Silicon Mac with 24 GB+ unified memory
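
To check whether a machine clears the 24 GB bar, you can query total physical memory from Python (a portable sketch; on macOS, `sysctl hw.memsize` reports the same number):

```python
import os

# Total physical (unified) memory in GB; SC_PHYS_PAGES/SC_PAGE_SIZE
# are POSIX sysconf names available on macOS and Linux.
total_gb = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 1e9
print(f"Physical memory: {total_gb:.0f} GB")
if total_gb < 24:
    print("Below the 24 GB recommended for this 4-bit model.")
```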

## See Also

- [google/gemma-4-31B](https://huggingface.co/google/gemma-4-31B) -- Base model
- [majentik/gemma-4-31B-RotorQuant](https://huggingface.co/majentik/gemma-4-31B-RotorQuant) -- RotorQuant KV-cache only (transformers)
- [majentik/gemma-4-31B-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gemma-4-31B-RotorQuant-MLX-8bit) -- MLX 8-bit variant
- [majentik/gemma-4-31B-RotorQuant-MLX-2bit](https://huggingface.co/majentik/gemma-4-31B-RotorQuant-MLX-2bit) -- MLX 2-bit variant
- [majentik/gemma-4-31B-TurboQuant-MLX-4bit](https://huggingface.co/majentik/gemma-4-31B-TurboQuant-MLX-4bit) -- TurboQuant MLX 4-bit variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
- [MLX Framework](https://github.com/ml-explore/mlx)