Add MLX quantized model
README.md (new file)
---
base_model: openai/gpt-oss-20b
library_name: mlx
tags:
- rotorquant
- kv-cache-quantization
- gpt-oss
- openai
- moe
- quantized
- mlx
- 2bit
license: apache-2.0
pipeline_tag: text-generation
---

# GPT-OSS-20B - RotorQuant MLX 2-bit

**2-bit weight-quantized MLX version** of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the [MLX](https://github.com/ml-explore/mlx) framework. This is the smallest variant in the series, and it keeps RotorQuant's fast KV-cache throughput, making it well suited to memory-constrained devices. GPT-OSS-20B is OpenAI's first open-weight release in years (Apache 2.0), a Mixture-of-Experts model that rivals o3-mini on reasoning benchmarks.

Approximate model size: **~6 GB**

## Model Specifications

| Property | Value |
|---|---|
| **Base Model** | [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) |
| **Parameters** | 20 billion (MoE) |
| **Architecture** | Mixture-of-Experts (MoE) Transformer |
| **License** | Apache 2.0 (commercial use OK) |
| **Weight Quantization** | 2-bit (~6 GB) |
| **KV-Cache Quantization** | RotorQuant |
| **Framework** | MLX (Apple Silicon) |

## Quickstart

```python
from mlx_lm import load, generate
from rotorquant import IsoQuantCache  # RotorQuant's quantized KV cache

# Download the 2-bit weights and tokenizer from the Hub
model, tokenizer = load("majentik/gpt-oss-20b-RotorQuant-MLX-2bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

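GPT-OSS is a chat-tuned model, so prompts are usually best rendered through its chat template rather than sent as raw text. A minimal sketch using the standard Hugging Face `apply_chat_template` API exposed by the loaded tokenizer (the message content is just an example):

```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/gpt-oss-20b-RotorQuant-MLX-2bit")

# Render the conversation through the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain the theory of relativity."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```
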
## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) applies block-diagonal rotations (Clifford algebra) to compress the KV cache. Combined with aggressive 2-bit weight quantization in MLX, this yields the smallest footprint of the GPT-OSS-20B variants while retaining RotorQuant's fast KV-cache throughput; the sketch after the list below illustrates the general rotate-then-quantize idea.

Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings

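A minimal NumPy sketch of that rotate-then-quantize idea. The block size, random rotation, and int4 cache format here are illustrative assumptions for exposition, not the actual rotorquant implementation:

```python
import numpy as np

def block_rotations(dim, block=8, seed=0):
    """Block-diagonal orthogonal transform built from independent small rotations."""
    rng = np.random.default_rng(seed)
    return [np.linalg.qr(rng.standard_normal((block, block)))[0]
            for _ in range(dim // block)]

def rotate(x, blocks):
    """Apply each small rotation to its slice of the head dimension."""
    b = blocks[0].shape[0]
    out = np.empty_like(x)
    for i, q in enumerate(blocks):
        out[..., i * b:(i + 1) * b] = x[..., i * b:(i + 1) * b] @ q
    return out

def quantize_int4(x):
    """Symmetric per-vector int4 quantization of the rotated cache."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 7.0
    return np.clip(np.round(x / scale), -8, 7).astype(np.int8), scale

# Rotating first spreads outlier energy across each block, so the
# per-vector scale wastes fewer of the 16 quantization levels on spikes.
keys = np.random.randn(4, 16, 64).astype(np.float32)  # (heads, seq, head_dim)
blocks = block_rotations(64)
q, scale = quantize_int4(rotate(keys, blocks))
approx = rotate(q * scale, [b.T for b in blocks])  # inverse rotation is Q^T
print("max reconstruction error:", np.abs(approx - keys).max())
```
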
## KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x faster** | **28% faster** | High | [GitHub](https://github.com/scrya-com/rotorquant) |

## Memory Estimates (GPT-OSS-20B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~40 GB | -- |
| 8-bit quantized | ~20 GB | [RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-8bit) |
| 4-bit quantized | ~12 GB | [RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-4bit) |
| **2-bit quantized** | **~6 GB** | **This model** |

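These figures follow the usual rule of thumb, size ≈ parameters × bits / 8, plus some overhead at low bit-widths for group scales, embeddings, and tensors kept at higher precision. A quick back-of-envelope check (the exact parameter count and overhead are approximations):

```python
# Rule of thumb behind the table: size ≈ params * bits / 8.
PARAMS = 20e9  # approximate GPT-OSS-20B parameter count

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: ~{PARAMS * bits / 8 / 1e9:.0f} GB")
# -> 40, 20, 10, and 5 GB; the 4-bit and 2-bit MLX files land a bit
#    higher (~12 GB and ~6 GB) once quantization overhead is included.
```
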
## Hardware Requirements

This model requires approximately 6 GB of unified memory. Recommended hardware:
- Apple M1 (8 GB+)
- Apple M2 (8 GB+)
- Apple M3 (8 GB+)
- Apple M4 (8 GB+)
- Any Apple Silicon Mac with 8 GB+ unified memory

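To confirm a machine meets the recommendation, total unified memory can be read from the macOS `hw.memsize` sysctl; a small sketch:

```python
import subprocess

# Total physical (unified) memory on macOS, in bytes.
mem_bytes = int(subprocess.run(
    ["sysctl", "-n", "hw.memsize"], capture_output=True, text=True
).stdout)
enough = mem_bytes >= 8 * 1024**3
print(f"Unified memory: {mem_bytes / 1024**3:.0f} GiB "
      f"({'meets' if enough else 'below'} the 8 GB recommendation)")
```
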
## See Also

- [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) -- Base model
- [majentik/gpt-oss-20b-RotorQuant](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant) -- RotorQuant KV-cache only (transformers)
- [majentik/gpt-oss-20b-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-8bit) -- MLX 8-bit variant
- [majentik/gpt-oss-20b-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-4bit) -- MLX 4-bit variant
- [majentik/gpt-oss-20b-TurboQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-20b-TurboQuant-MLX-2bit) -- TurboQuant MLX 2-bit variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
- [MLX Framework](https://github.com/ml-explore/mlx)