# GPT-OSS-20B - RotorQuant MLX 4-bit
A 4-bit weight-quantized MLX build of openai/gpt-oss-20b with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the MLX framework. Compared to TurboQuant, RotorQuant delivers 5.3x faster prefill and 28% faster decode, striking a good balance between model quality and memory efficiency. GPT-OSS-20B is OpenAI's first open-weights release in years (Apache 2.0): a Mixture-of-Experts model that rivals o3-mini on reasoning benchmarks.
Approximate model size: ~12 GB
## Model Specifications
| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Parameters | 20 billion (MoE) |
| Architecture | Mixture-of-Experts (MoE) Transformer |
| License | Apache 2.0 (commercial use OK) |
| Weight Quantization | 4-bit (~12 GB) |
| KV-Cache Quantization | RotorQuant |
| Framework | MLX (Apple Silicon) |
## Quickstart

```python
from mlx_lm import load, generate
from rotorquant import IsoQuantCache

model, tokenizer = load("majentik/gpt-oss-20b-RotorQuant-MLX-4bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
## What is RotorQuant?
RotorQuant applies block-diagonal rotations (rotors from Clifford algebra) to the KV cache before quantizing it. Combined with 4-bit weight quantization in MLX, this gives a dual compression strategy: smaller model weights plus a compressed KV cache that is faster to read and write, enabling efficient long-context generation.
Key advantages over TurboQuant:
- 5.3x faster prefill
- 28% faster decode
- Equivalent memory savings
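To make the rotate-then-quantize idea concrete, here is a minimal NumPy sketch. It is illustrative only: the 2x2 Givens-rotation angles are arbitrary (RotorQuant derives its rotors via its own Clifford-algebra construction, not reproduced here), and `rotate`/`quant4` are hypothetical helper names using a simple symmetric per-row int4 quantizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KV slice: (seq_len, head_dim). Real caches add batch/head dims.
kv = rng.normal(size=(8, 64)).astype(np.float32)

# Illustrative block-diagonal rotation: an independent 2x2 rotation per
# channel pair. The angles here are random placeholders, not RotorQuant's.
angles = rng.uniform(0.0, 2.0 * np.pi, size=kv.shape[1] // 2)

def rotate(x, angles, inverse=False):
    """Apply (or undo) a block-diagonal rotation, one 2x2 block per pair."""
    out = x.copy()
    sign = -1.0 if inverse else 1.0
    for i, a in enumerate(angles):
        c, s = np.cos(a), np.sin(sign * a)
        x0, x1 = out[:, 2 * i].copy(), out[:, 2 * i + 1].copy()
        out[:, 2 * i] = c * x0 - s * x1
        out[:, 2 * i + 1] = s * x0 + c * x1
    return out

def quant4(x):
    """Symmetric per-row 4-bit quantization: integer levels in [-8, 7]."""
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(x / scale), -8, 7)
    return q, scale

rotated = rotate(kv, angles)          # rotate into a quantization-friendly basis
q, scale = quant4(rotated)            # store q as int4 + one scale per row
restored = rotate(q * scale, angles, inverse=True)  # dequantize, rotate back

err = np.abs(restored - kv).max()
print(f"max abs reconstruction error: {err:.3f}")
```

The rotation is exactly invertible, so the only loss comes from the 4-bit rounding step; the point of choosing good rotors is to shape the distribution so that rounding loss stays small.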
## KV-Cache Quantization Comparison
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
## Memory Estimates (GPT-OSS-20B)
| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~40 GB | -- |
| 8-bit quantized | ~20 GB | RotorQuant-MLX-8bit |
| 4-bit quantized | ~12 GB | This model |
| 2-bit quantized | ~6 GB | RotorQuant-MLX-2bit |
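The table's figures follow from simple arithmetic on the parameter count. A quick sketch (raw weight bytes only; the gap between the 10 GB raw figure and the ~12 GB listed for 4-bit is presumably per-group scales and unquantized layers, which is an assumption, as the exact breakdown is not published):

```python
PARAMS = 20e9  # 20 billion parameters

def raw_weight_gb(bits):
    """Raw weight storage in GB -- ignores scales, embeddings, metadata."""
    return PARAMS * bits / 8 / 1e9

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: ~{raw_weight_gb(bits):.0f} GB raw")
# 4-bit works out to ~10 GB raw, vs. ~12 GB on disk with metadata.
```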
## Hardware Requirements
This model requires approximately 12 GB of unified memory. Recommended hardware:
- Apple M1 Pro (16 GB+)
- Apple M2 Pro (16 GB+)
- Apple M3 Pro (18 GB+)
- Apple M4 Pro (24 GB+)
- Any Apple Silicon Mac with 16 GB+ unified memory
## See Also
- openai/gpt-oss-20b -- Base model
- majentik/gpt-oss-20b-RotorQuant -- RotorQuant KV-cache only (transformers)
- majentik/gpt-oss-20b-RotorQuant-MLX-8bit -- MLX 8-bit variant
- majentik/gpt-oss-20b-RotorQuant-MLX-2bit -- MLX 2-bit variant
- majentik/gpt-oss-20b-TurboQuant-MLX-4bit -- TurboQuant MLX 4-bit variant
- RotorQuant GitHub
- MLX Framework