---
base_model: google/gemma-4-E2B
library_name: mlx
tags:
- rotorquant
- kv-cache-quantization
- gemma
- gemma4
- multimodal
- quantized
- mlx
- 2bit
license: apache-2.0
pipeline_tag: image-text-to-text
---
# Gemma 4 E2B - RotorQuant MLX 2-bit
|
|
**2-bit weight-quantized MLX version** of [google/gemma-4-E2B](https://huggingface.co/google/gemma-4-E2B) with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the [MLX](https://github.com/ml-explore/mlx) framework. RotorQuant delivers 5.3x faster prefill and 28% faster decode than TurboQuant. This is the most aggressive quantization variant, fitting the full model into the smallest available footprint.
|
|
Approximate model size: **~0.6 GB**
|
|
## Model Specifications
|
|
| Property | Value |
|---|---|
| **Base Model** | [google/gemma-4-E2B](https://huggingface.co/google/gemma-4-E2B) |
| **Parameters** | ~2 billion |
| **Architecture** | Dense transformer |
| **Modality** | Multimodal: image + text input, text output |
| **License** | Apache 2.0 |
| **Weight Quantization** | 2-bit (~0.6 GB) |
| **KV-Cache Quantization** | RotorQuant |
| **Framework** | MLX (Apple Silicon) |
|
|
## Quickstart
|
|
```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/gemma-4-E2B-RotorQuant-MLX-2bit")

prompt = "The history of artificial intelligence began"
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
|
|
For multimodal usage with images:
|
|
```python
from mlx_vlm import load, generate

model, processor = load("majentik/gemma-4-E2B-RotorQuant-MLX-2bit")

prompt = "Describe the contents of this image."
output = generate(model, processor, prompt=prompt, image="path/to/image.jpg", max_tokens=512)
print(output)
```
|
|
## What is RotorQuant?
|
|
[RotorQuant](https://github.com/scrya-com/rotorquant) is a high-performance KV-cache quantization method that achieves significantly better throughput than TurboQuant. Combined with 2-bit weight quantization in MLX, this provides maximum compression with the best available KV-cache performance: the smallest possible model footprint plus the fastest compressed KV cache for efficient long-context generation.
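RotorQuant's exact algorithm is not reproduced here, but the general shape of KV-cache quantization can be sketched in plain NumPy. The function names and the per-row affine scheme below are illustrative assumptions, not RotorQuant's actual API:

```python
import numpy as np

def quantize_kv(x: np.ndarray, bits: int = 4):
    """Per-row affine quantization of a KV tensor: store low-bit codes
    plus a (scale, offset) pair per row instead of full-precision floats."""
    qmax = (1 << bits) - 1
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / qmax, 1.0)
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_kv(q, scale, lo):
    """Reconstruct approximate floats from codes and per-row parameters."""
    return q.astype(np.float32) * scale + lo

# Toy cache: 8 cached tokens x 64-dim keys
keys = np.random.randn(8, 64).astype(np.float32)
q, scale, lo = quantize_kv(keys, bits=4)
recon = dequantize_kv(q, scale, lo)
```

Efficient implementations typically fuse dequantization into the attention kernel so the cache is expanded on the fly; throughput differences between methods largely come from how well that fused path runs.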
|
|
Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings
|
|
**Note:** 2-bit quantization is the most aggressive option and may result in some quality degradation compared to higher-precision variants. It is best suited for experimentation, rapid prototyping, or hardware-constrained environments.
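The quality caveat follows directly from the arithmetic: with 2-bit codes, every group of weights shares just four representable levels. A minimal group-wise 2-bit quantizer (illustrative NumPy, not MLX's actual kernel; the group size of 32 is an assumption) shows the effect:

```python
import numpy as np

def quantize_2bit(w: np.ndarray, group_size: int = 32):
    """Group-wise affine 2-bit quantization: 4 levels per weight group."""
    groups = w.reshape(-1, group_size)
    lo = groups.min(axis=1, keepdims=True)
    hi = groups.max(axis=1, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / 3, 1.0)  # 3 steps span 4 levels
    q = np.round((groups - lo) / scale).astype(np.uint8)  # codes in {0,1,2,3}
    return q, scale, lo

weights = np.random.randn(4096).astype(np.float32)
q, scale, lo = quantize_2bit(weights)
recon = q * scale + lo  # every group collapses onto at most 4 distinct values
```

Higher-bit variants (4-bit: 16 levels, 8-bit: 256 levels) shrink the per-group error correspondingly, which is the size/quality trade-off the table below quantifies in GB.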
|
|
## KV-Cache Quantization Comparison
|
|
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x faster** | **28% faster** | High | [GitHub](https://github.com/scrya-com/rotorquant) |
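Both methods' "High" memory savings come from the same place: KV-cache size scales linearly with bit-width. A quick estimate, using hypothetical architecture numbers (26 layers, 4 KV heads, head dim 256 are placeholders; the real gemma-4-E2B values may differ):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bits: int) -> int:
    """Bytes to store keys and values for all layers at a given precision."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bits // 8

# 8K-token context at full precision vs. 4-bit
fp16 = kv_cache_bytes(26, 4, 256, 8192, 16)
int4 = kv_cache_bytes(26, 4, 256, 8192, 4)
print(f"fp16: {fp16 / 1e9:.2f} GB, 4-bit: {int4 / 1e9:.2f} GB")  # 4x smaller
```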
|
|
## Memory Estimates (Gemma 4 E2B)
|
|
| Precision | Approximate Size | MLX Variant |
|---|---|---|
| FP16 (original) | ~4 GB | -- |
| 8-bit quantized | ~2 GB | [RotorQuant-MLX-8bit](https://huggingface.co/majentik/gemma-4-E2B-RotorQuant-MLX-8bit) |
| 4-bit quantized | ~1.2 GB | [RotorQuant-MLX-4bit](https://huggingface.co/majentik/gemma-4-E2B-RotorQuant-MLX-4bit) |
| **2-bit quantized** | **~0.6 GB** | **This model** |
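The sizes in the table follow from parameters × bits / 8 bytes; quantized formats then add per-group scales and offsets, which is why the 2-bit model lands near ~0.6 GB rather than the raw 0.5 GB. A quick sanity check (the ~2B parameter count comes from the spec table above):

```python
def raw_weight_gb(n_params: float, bits: int) -> float:
    """Raw weight storage in GB, before per-group quantization metadata."""
    return n_params * bits / 8 / 1e9

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: ~{raw_weight_gb(2e9, bits):.1f} GB")
```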
|
|
## Hardware Requirements
|
|
This model requires approximately 0.6 GB of unified memory for the weights, plus headroom for activations and the KV cache. Recommended hardware: any Apple Silicon Mac (M1, M2, M3, or M4 family) with 8 GB or more of unified memory.
|
|
## See Also
|
|
- [google/gemma-4-E2B](https://huggingface.co/google/gemma-4-E2B) -- Base model
- [majentik/gemma-4-E2B-RotorQuant](https://huggingface.co/majentik/gemma-4-E2B-RotorQuant) -- RotorQuant KV-cache only (transformers)
- [majentik/gemma-4-E2B-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gemma-4-E2B-RotorQuant-MLX-8bit) -- MLX 8-bit variant
- [majentik/gemma-4-E2B-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gemma-4-E2B-RotorQuant-MLX-4bit) -- MLX 4-bit variant
- [majentik/gemma-4-E2B-TurboQuant-MLX-2bit](https://huggingface.co/majentik/gemma-4-E2B-TurboQuant-MLX-2bit) -- TurboQuant MLX 2-bit variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
- [MLX Framework](https://github.com/ml-explore/mlx)
|
|