GPT-OSS-120B - RotorQuant MLX 8-bit

8-bit weight-quantized MLX version of openai/gpt-oss-120b with RotorQuant KV-cache quantization. Optimized for Apple Silicon inference via the MLX framework. RotorQuant delivers 5.3x faster prefill and 28% faster decode compared to TurboQuant. GPT-OSS-120B is OpenAI's flagship open-weights Mixture-of-Experts model (Apache 2.0), approaching o4-mini quality for reasoning tasks.

Approximate model size: 120 GB

Model Specifications

| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-120b |
| Parameters | 120 billion (MoE) |
| Architecture | Mixture-of-Experts (MoE) Transformer |
| License | Apache 2.0 (commercial use permitted) |
| Weight Quantization | 8-bit (~120 GB) |
| KV-Cache Quantization | RotorQuant |
| Framework | MLX (Apple Silicon) |

Quickstart

```python
from mlx_lm import load, generate
# The rotorquant package must be installed; it provides the quantized KV cache.
from rotorquant import IsoQuantCache  # noqa: F401

model, tokenizer = load("majentik/gpt-oss-120b-RotorQuant-MLX-8bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

What is RotorQuant?

RotorQuant applies block-diagonal rotations (rotors, in the sense of Clifford algebra) to the KV cache before quantization. Combined with 8-bit weight quantization in MLX, this yields a dual compression strategy: smaller weights plus a smaller, faster KV cache.
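As a loose illustration of the general idea (not the actual RotorQuant implementation; the function names, block size, and per-tensor scaling below are all hypothetical), quantizing after a block-diagonal orthogonal rotation spreads outlier channels across their block, and the rotation is exactly invertible:

```python
import numpy as np

def block_diag_rotation(dim, block=8, seed=0):
    """Random orthogonal block-diagonal matrix (one rotation per block)."""
    rng = np.random.default_rng(seed)
    R = np.zeros((dim, dim))
    for i in range(dim // block):
        q, _ = np.linalg.qr(rng.standard_normal((block, block)))
        s = slice(i * block, (i + 1) * block)
        R[s, s] = q
    return R

def quantize_int8(x):
    """Symmetric per-tensor 8-bit quantization."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

dim = 64
R = block_diag_rotation(dim)
k = np.random.default_rng(1).standard_normal(dim).astype(np.float32)
k[3] = 25.0  # an outlier channel, typical of KV activations

# Rotate, quantize in the rotated basis, dequantize, rotate back.
q, s = quantize_int8(R @ k)
k_hat = R.T @ dequantize(q, s)
print("max reconstruction error:", np.abs(k - k_hat).max())
```

Because each block's rotation is orthogonal, rotating back after dequantization recovers the original vector up to quantization error, and no extra memory is needed beyond the (fixed) rotation itself.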

Key advantages over TurboQuant:

  • 5.3x faster prefill
  • 28% faster decode
  • Equivalent memory savings

KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv:2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
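To see what those speedups mean end to end, here is a back-of-envelope latency comparison. The baseline throughputs are hypothetical placeholders (they will vary with hardware and context length); only the 5.3x prefill and 1.28x decode factors come from the table above:

```python
# Hypothetical baseline throughputs, for illustration only
BASE_PREFILL_TPS = 1000.0  # tokens/sec while processing the prompt
BASE_DECODE_TPS = 40.0     # tokens/sec while generating

PREFILL_SPEEDUP = 5.3      # RotorQuant vs. TurboQuant (table above)
DECODE_SPEEDUP = 1.28

def latency(prompt_toks, gen_toks, prefill_tps, decode_tps):
    """Total seconds to process a prompt and generate a response."""
    return prompt_toks / prefill_tps + gen_toks / decode_tps

t_turbo = latency(4096, 512, BASE_PREFILL_TPS, BASE_DECODE_TPS)
t_rotor = latency(4096, 512, BASE_PREFILL_TPS * PREFILL_SPEEDUP,
                  BASE_DECODE_TPS * DECODE_SPEEDUP)
print(f"TurboQuant: {t_turbo:.1f}s  RotorQuant: {t_rotor:.1f}s")
```

Note that for long prompts the prefill speedup dominates, while for long generations the 28% decode gain matters more.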

Memory Estimates (GPT-OSS-120B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~240 GB | -- |
| 8-bit quantized | ~120 GB | This model |
| 4-bit quantized | ~65 GB | RotorQuant-MLX-4bit |
| 2-bit quantized | ~30 GB | RotorQuant-MLX-2bit |
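These figures follow from simple bits-per-parameter arithmetic: 120B parameters at 16 bits each is 240 GB, halved at each step down. A quick sketch (back-of-envelope only; real checkpoints carry extra scale/zero-point metadata, which is why the 4-bit table entry is ~65 GB rather than exactly 60 GB):

```python
PARAMS = 120e9  # 120 billion parameters

def approx_size_gb(bits_per_weight):
    """Rough weight-storage estimate, ignoring quantization metadata."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits, label in [(16, "BF16"), (8, "8-bit"), (4, "4-bit"), (2, "2-bit")]:
    print(f"{label:>5}: ~{approx_size_gb(bits):.0f} GB")
```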

Hardware Requirements

This model requires approximately 120 GB of unified memory. Recommended hardware:

  • Apple M2 Ultra (192 GB)
  • Apple M3 Ultra (192 GB or 512 GB)
  • Mac Studio M4 Ultra (192 GB+)
  • Multi-device MLX inference for smaller Macs
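Before attempting a load, you can sanity-check available memory from Python. This sketch uses the standard library's `os.sysconf`; on Apple Silicon, physical RAM reported this way is the unified memory pool, and the 8 GB headroom threshold is an arbitrary illustrative choice:

```python
import os

REQUIRED_GB = 120  # approximate weight footprint of this model

# On Apple Silicon, physical RAM is the unified memory pool.
total_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
headroom = total_gb - REQUIRED_GB
print(f"Unified memory: {total_gb:.0f} GB "
      f"({'OK' if headroom > 8 else 'insufficient'} for this model)")
```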
