# GPT-OSS-120B - RotorQuant KV Cache
RotorQuant KV-cache quantization applied to openai/gpt-oss-120b. RotorQuant uses block-diagonal rotations (Clifford algebra) to compress the KV cache, delivering 5.3x faster prefill and 28% faster decode compared to TurboQuant with equivalent memory savings.
This repository provides the RotorQuant KV-cache configuration for GPT-OSS-120B, OpenAI's first open-weights release in years (Apache 2.0). The model weights remain at their original precision; only the key-value cache is quantized at runtime. GPT-OSS-120B is OpenAI's flagship Mixture-of-Experts open model, approaching o4-mini quality for reasoning tasks and designed for production inference.
## Model Specifications
| Property | Value |
|---|---|
| Base Model | openai/gpt-oss-120b |
| Parameters | 120 billion (MoE) |
| Architecture | Mixture-of-Experts (MoE) Transformer |
| License | Apache 2.0 (commercial use OK) |
| Quantization | RotorQuant KV-cache only (weights unchanged) |
| Downloads | 3.5M+ on HuggingFace |
## Quickstart
```python
from rotorquant import IsoQuantCache
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Apply RotorQuant KV-cache quantization (model weights are untouched)
cache = IsoQuantCache(model)

inputs = tokenizer("Explain the theory of relativity.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, past_key_values=cache)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
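GPT-OSS models are chat-tuned, so in practice you will usually format prompts with the tokenizer's chat template rather than passing raw text. A minimal sketch: `apply_chat_template` is the standard transformers API, while the message content and `max_new_tokens` value here are illustrative.

```python
# Chat-style prompting with the same quantized cache.
messages = [
    {"role": "user", "content": "Summarize the key ideas of special relativity."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

cache = IsoQuantCache(model)  # use a fresh cache per conversation
outputs = model.generate(input_ids, past_key_values=cache, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```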
## What is RotorQuant?
RotorQuant applies block-diagonal rotations (Clifford algebra) to key and value tensors before quantizing them, spreading outliers across dimensions so that low-bit quantization loses less information. It matches TurboQuant's memory savings while dramatically improving throughput; a minimal sketch of the rotate-then-quantize idea follows the list below.
Key advantages over TurboQuant:
- 5.3x faster prefill
- 28% faster decode
- Equivalent memory savings
- Slightly better perplexity
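To make the mechanism concrete, here is a minimal sketch of rotate-then-quantize. This is not the library's implementation: the block size, the random rotation blocks, and the per-token int4 scheme are all illustrative assumptions.

```python
import torch

def make_block_rotation(head_dim: int, block: int = 2) -> torch.Tensor:
    """Random block-diagonal orthogonal matrix: independent small rotors per block."""
    R = torch.zeros(head_dim, head_dim)
    for i in range(0, head_dim, block):
        q, _ = torch.linalg.qr(torch.randn(block, block))  # orthogonal 2x2 block
        R[i:i + block, i:i + block] = q
    return R

def quantize_int4(x: torch.Tensor):
    """Per-token symmetric 4-bit quantization (codes in [-8, 7])."""
    scale = x.abs().amax(dim=-1, keepdim=True) / 7.0
    return torch.clamp((x / scale).round(), -8, 7), scale

head_dim = 64
R = make_block_rotation(head_dim)      # fixed per layer/head in practice
k = torch.randn(10, head_dim)          # ten cached key vectors

k_rot = k @ R                          # rotate: spreads outliers across dimensions
q, scale = quantize_int4(k_rot)        # store 4-bit codes plus per-token scales
k_hat = (q * scale) @ R.T              # dequantize and rotate back

print((k - k_hat).abs().max())         # reconstruction error
```

Because the rotation is block-diagonal, applying it costs O(d·b) per vector rather than the O(d²) of a dense rotation, which is plausibly where the prefill advantage over dense-rotation schemes comes from; and because it is orthogonal, attention scores are preserved up to quantization error.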
## KV-Cache Quantization Comparison
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
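To sanity-check these numbers on your own hardware, a rough timing harness like the one below can separate prefill from decode. The harness is an assumption, not part of the RotorQuant API; `IsoQuantCache` is the class from the quickstart, the prompt length and token counts are arbitrary, and a CUDA device is assumed.

```python
import time
import torch

def timed_generate(model, input_ids, cache, new_tokens):
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    model.generate(input_ids, past_key_values=cache, max_new_tokens=new_tokens)
    torch.cuda.synchronize()
    return time.perf_counter() - t0

# Synthetic 2k-token prompt; token IDs are arbitrary for timing purposes.
input_ids = torch.randint(0, tokenizer.vocab_size, (1, 2048), device=model.device)

prefill = timed_generate(model, input_ids, IsoQuantCache(model), new_tokens=1)
total = timed_generate(model, input_ids, IsoQuantCache(model), new_tokens=129)
print(f"prefill ~{prefill:.2f}s, decode ~{(total - prefill) / 128 * 1000:.1f} ms/token")
```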
## Memory Estimates (GPT-OSS-120B)
| Precision | Approximate Size |
|---|---|
| BF16 (original) | ~240 GB |
| 8-bit quantized | ~120 GB |
| 4-bit quantized | ~65 GB |
| 2-bit quantized | ~30 GB |
Note: These estimates are for weight quantization. This repository applies KV-cache quantization only, so model weight memory remains at the precision you load the model in. The KV-cache memory savings are realized during generation.
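For the KV cache itself, the savings scale with context length. A back-of-the-envelope calculation, using placeholder architecture numbers (the actual layer and head counts for GPT-OSS-120B may differ):

```python
# Placeholder architecture numbers for illustration only.
layers, kv_heads, head_dim = 36, 8, 64
seq_len = 128_000  # long-context generation

def kv_cache_bytes(bits: int) -> float:
    # Two tensors (K and V) per layer, per cached token.
    return 2 * layers * kv_heads * head_dim * seq_len * bits / 8

print(f"16-bit KV cache: {kv_cache_bytes(16) / 1e9:.1f} GB")  # ~9.4 GB
print(f" 4-bit KV cache: {kv_cache_bytes(4) / 1e9:.1f} GB")   # ~2.4 GB
```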
## See Also
- openai/gpt-oss-120b -- Base model
- majentik/gpt-oss-120b-TurboQuant -- TurboQuant KV-cache variant
- majentik/gpt-oss-120b-RotorQuant-MLX-8bit -- MLX 8-bit variant
- majentik/gpt-oss-120b-RotorQuant-MLX-4bit -- MLX 4-bit variant
- majentik/gpt-oss-120b-RotorQuant-MLX-2bit -- MLX 2-bit variant
- majentik/gpt-oss-120b-RotorQuant-GGUF-Q4_K_M -- GGUF Q4_K_M variant
- RotorQuant GitHub