GPT-OSS-20B - RotorQuant MLX 8-bit

8-bit weight-quantized MLX build of openai/gpt-oss-20b with RotorQuant KV-cache quantization, optimized for inference on Apple Silicon via the MLX framework. Compared to TurboQuant, RotorQuant delivers 5.3x faster prefill and 28% faster decode. GPT-OSS-20B is OpenAI's first open-weight model release since GPT-2 (Apache 2.0 license), a Mixture-of-Experts model that rivals o3-mini on reasoning benchmarks.

Approximate model size: ~20 GB

Model Specifications

| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Parameters | 20 billion (MoE) |
| Architecture | Mixture-of-Experts (MoE) Transformer |
| License | Apache 2.0 (commercial use permitted) |
| Weight Quantization | 8-bit (~20 GB) |
| KV-Cache Quantization | RotorQuant |
| Framework | MLX (Apple Silicon) |

Quickstart

```python
from mlx_lm import load, generate
from rotorquant import IsoQuantCache  # RotorQuant KV-cache support (see below)

# Download the weights and tokenizer from the Hugging Face Hub.
model, tokenizer = load("majentik/gpt-oss-20b-RotorQuant-MLX-8bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

What is RotorQuant?

RotorQuant applies block-diagonal rotations (derived from Clifford algebra) to the KV cache before quantizing it, spreading outlier values across channels so low-bit quantization loses less information. Combined with 8-bit weight quantization in MLX, this provides a dual compression strategy: smaller weights and a smaller KV cache.

Key advantages over TurboQuant:

  • 5.3x faster prefill
  • 28% faster decode
  • Equivalent memory savings
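To make the rotate-then-quantize idea concrete, here is a minimal NumPy sketch. It is an illustration only, not the actual RotorQuant implementation: the 2x2 Givens blocks, the fixed rotation angle, and the per-tensor scale are all simplifying assumptions standing in for RotorQuant's Clifford-algebra rotors.

```python
import numpy as np

def block_rotate(x, theta=np.pi / 4):
    """Block-diagonal rotation: independent 2x2 Givens rotations over
    consecutive channel pairs (illustrative stand-in for RotorQuant)."""
    c, s = np.cos(theta), np.sin(theta)
    pairs = x.reshape(-1, 2)                      # (n_blocks, 2)
    rot = np.stack([c * pairs[:, 0] - s * pairs[:, 1],
                    s * pairs[:, 0] + c * pairs[:, 1]], axis=1)
    return rot.reshape(x.shape)

def quantize_int8(x):
    """Symmetric per-tensor 8-bit quantization."""
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

# Simulate one KV-cache row: rotate, quantize, then invert both steps.
rng = np.random.default_rng(0)
kv = rng.normal(size=128).astype(np.float32)

rotated = block_rotate(kv)
q, scale = quantize_int8(rotated)
recovered = block_rotate(q.astype(np.float32) * scale, theta=-np.pi / 4)

err = np.abs(recovered - kv).max()
print(f"max reconstruction error: {err:.4f}")
```

Because the rotation is orthogonal, inverting it (rotating by the negative angle) recovers the original values up to the 8-bit rounding error, which stays small relative to the activations.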

KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv:2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |

Memory Estimates (GPT-OSS-20B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~40 GB | -- |
| 8-bit quantized | ~20 GB | This model |
| 4-bit quantized | ~12 GB | RotorQuant-MLX-4bit |
| 2-bit quantized | ~6 GB | RotorQuant-MLX-2bit |
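The sizes above follow from simple back-of-envelope arithmetic on the weights alone; real checkpoints also carry quantization scales, embeddings, and non-quantized layers, which is why the 4-bit and 2-bit figures sit above the raw math below.

```python
# Rough weight-only memory estimate for a 20B-parameter model.
PARAMS = 20e9

def weight_gb(bits_per_param):
    """Weight memory in GB at the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for label, bits in [("BF16", 16), ("8-bit", 8), ("4-bit", 4), ("2-bit", 2)]:
    print(f"{label:>5}: ~{weight_gb(bits):.0f} GB")
```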

Hardware Requirements

This model requires approximately 20 GB of unified memory. Recommended hardware:

  • Apple M2 Max (32 GB+)
  • Apple M3 Max (32 GB+)
  • Apple M4 Max (32 GB+)
  • Any Apple Silicon Mac with 32 GB+ unified memory
