| --- |
| base_model: openai/gpt-oss-20b |
| library_name: mlx |
| tags: |
| - rotorquant |
| - kv-cache-quantization |
| - gpt-oss |
| - openai |
| - moe |
| - quantized |
| - mlx |
| - 2bit |
| license: apache-2.0 |
| pipeline_tag: text-generation |
| --- |
| |
| # GPT-OSS-20B - RotorQuant MLX 2-bit |
|
|
**2-bit weight-quantized MLX version** of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the [MLX](https://github.com/ml-explore/mlx) framework. This is the smallest variant in the series, pairing aggressive 2-bit weights with RotorQuant's fast KV-cache throughput -- ideal for memory-constrained devices. GPT-OSS-20B is OpenAI's first open-weights release since GPT-2 (Apache 2.0), a Mixture-of-Experts model that rivals o3-mini on reasoning benchmarks.
|
|
Model size: **~6 GB**
|
|
| ## Model Specifications |
|
|
| | Property | Value | |
| |---|---| |
| | **Base Model** | [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) | |
| **Parameters** | ~21B total, ~3.6B active per token (MoE) |
| | **Architecture** | Mixture-of-Experts (MoE) Transformer | |
| | **License** | Apache 2.0 (commercial use OK) | |
| | **Weight Quantization** | 2-bit (~6 GB) | |
| | **KV-Cache Quantization** | RotorQuant | |
| | **Framework** | MLX (Apple Silicon) | |
|
|
| ## Quickstart |
|
|
```python
from mlx_lm import load, generate
from rotorquant import IsoQuantCache  # RotorQuant KV-cache class

# Downloads the quantized weights on first use, then loads them.
model, tokenizer = load("majentik/gpt-oss-20b-RotorQuant-MLX-2bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)

# To drive the RotorQuant KV cache explicitly, construct an IsoQuantCache
# and pass it to generate() via its prompt_cache argument; see the
# rotorquant repository for the exact constructor arguments.
```
|
|
| ## What is RotorQuant? |
|
|
[RotorQuant](https://github.com/scrya-com/rotorquant) applies block-diagonal rotations built from Clifford-algebra rotors to compress the KV cache: keys and values are rotated before quantization, and the block structure keeps the rotation cheap to apply at inference time. Combined with aggressive 2-bit weight quantization in MLX, this yields the smallest footprint in this series for GPT-OSS-20B while retaining RotorQuant's fast KV-cache throughput.
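
For intuition, here is a toy NumPy sketch of the general rotate-then-quantize idea, not the rotorquant API; all names below are illustrative. A block-diagonal orthogonal matrix assembled from independent 2x2 rotors is applied to the keys, the rotated values are quantized to 2 bits, and the rotation is undone after dequantization:

```python
# Toy rotate-then-quantize KV compression -- NOT the rotorquant API.
import numpy as np

def block_diag_rotation(dim: int, seed: int = 0) -> np.ndarray:
    """Orthogonal matrix with independent 2x2 rotation blocks on the diagonal.
    Applying it block-by-block costs O(dim) per vector; a dense matmul is
    used below only for clarity."""
    rng = np.random.default_rng(seed)
    R = np.zeros((dim, dim))
    for i in range(0, dim, 2):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        R[i:i + 2, i:i + 2] = [[c, -s], [s, c]]
    return R

def quantize_2bit(x: np.ndarray):
    """Per-row absmax quantization to the 4 levels {-1.5, -0.5, 0.5, 1.5}."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 1.5
    q = np.clip(np.round(x / scale - 0.5) + 0.5, -1.5, 1.5)
    return q, scale

keys = np.random.randn(8, 64)        # toy KV slice: 8 tokens, head_dim = 64
R = block_diag_rotation(64)

q, scale = quantize_2bit(keys @ R)   # rotate, then quantize to 2 bits
restored = (q * scale) @ R.T         # dequantize, then undo the rotation

print("mean abs reconstruction error:", np.abs(keys - restored).mean())
```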
|
|
| Key advantages over TurboQuant: |
| - **5.3x faster prefill** |
| - **28% faster decode** |
| - Equivalent memory savings |
|
|
| ## KV-Cache Quantization Comparison |
|
|
| | Method | Prefill Speed | Decode Speed | Memory Savings | Reference | |
| |---|---|---|---|---| |
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| | **RotorQuant** | **5.3x faster** | **28% faster** | High | [GitHub](https://github.com/scrya-com/rotorquant) | |
|
|
| ## Memory Estimates (GPT-OSS-20B) |
|
|
| | Precision | Approximate Size | MLX Variant | |
| |---|---|---| |
| BF16 (unquantized) | ~40 GB | -- |
| | 8-bit quantized | ~20 GB | [RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-8bit) | |
| | 4-bit quantized | ~12 GB | [RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-4bit) | |
| | **2-bit quantized** | **~6 GB** | **This model** | |
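
These figures follow from parameter count times bits per weight; a quick back-of-the-envelope check (real files run slightly larger due to quantization scales, embeddings, and format metadata):

```python
# Weight memory estimate: params * bits-per-weight / 8 bytes.
params = 20e9
for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: ~{params * bits / 8 / 1e9:.0f} GB")
```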
|
|
| ## Hardware Requirements |
|
|
This model requires approximately 6 GB of unified memory, so any Apple Silicon Mac (M1 through M4) with 8 GB or more of unified memory can run it. A quick way to check your machine is shown below.
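
A minimal sanity check (macOS only, using the standard `sysctl` tool):

```python
# macOS-only: read total unified memory via sysctl and compare it
# against the 8 GB recommendation for this 2-bit model.
import subprocess

mem_gb = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"])) / 1e9
print(f"Unified memory: {mem_gb:.0f} GB ->",
      "OK" if mem_gb >= 8 else "below recommendation")
```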
|
|
| ## See Also |
|
|
| - [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) -- Base model |
| - [majentik/gpt-oss-20b-RotorQuant](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant) -- RotorQuant KV-cache only (transformers) |
| - [majentik/gpt-oss-20b-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-8bit) -- MLX 8-bit variant |
| - [majentik/gpt-oss-20b-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-4bit) -- MLX 4-bit variant |
| - [majentik/gpt-oss-20b-TurboQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-20b-TurboQuant-MLX-2bit) -- TurboQuant MLX 2-bit variant |
| - [RotorQuant GitHub](https://github.com/scrya-com/rotorquant) |
| - [MLX Framework](https://github.com/ml-explore/mlx) |
|
|