---
base_model: openai/gpt-oss-20b
library_name: transformers
tags:
- rotorquant
- kv-cache-quantization
- gpt-oss
- openai
- moe
- quantized
license: apache-2.0
pipeline_tag: text-generation
---

# GPT-OSS-20B - RotorQuant KV Cache

**RotorQuant KV-cache quantization** applied to [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). RotorQuant uses block-diagonal rotations (Clifford algebra) to compress the KV cache, delivering 5.3x faster prefill and 28% faster decode than TurboQuant while matching its memory savings.

This repository provides the RotorQuant KV-cache configuration for GPT-OSS-20B, OpenAI's first open-weights release in years (Apache 2.0). The model weights remain at their original precision; only the key-value cache is quantized at runtime. GPT-OSS-20B is a Mixture-of-Experts model that rivals o3-mini on reasoning benchmarks and is well suited to local and edge deployment.

## Model Specifications

| Property | Value |
|---|---|
| **Base Model** | [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) |
| **Parameters** | ~21B total, ~3.6B active per token (MoE) |
| **Architecture** | Mixture-of-Experts (MoE) Transformer |
| **License** | Apache 2.0 (commercial use permitted) |
| **Quantization** | RotorQuant KV-cache only (weights unchanged) |
| **Downloads** | 6M+ on Hugging Face |

## Quickstart

```python
from rotorquant import IsoQuantCache
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"

# Weights load at their original precision; only the KV cache is quantized.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Apply RotorQuant KV-cache quantization by passing the cache to generate()
cache = IsoQuantCache(model)

inputs = tokenizer("Explain the theory of relativity.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, past_key_values=cache, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
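
In recent `transformers` releases, `generate` accepts a `Cache` instance through `past_key_values`; `IsoQuantCache` presumably plugs in as such a drop-in cache class, quantizing keys and values as they are written during prefill and decode.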

## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) applies block-diagonal rotations (Clifford algebra) to compress the KV cache. It provides memory savings equivalent to TurboQuant's while dramatically improving throughput; a minimal sketch of the rotate-then-quantize pattern appears after the list below.

Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings
- Slightly better perplexity
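
The sketch below shows the general rotate-then-quantize pattern that rotation-based KV-cache schemes follow: rotate each block of the head dimension with an orthogonal matrix, quantize the rotated values at low precision, and invert the rotation after dequantization. It is a minimal illustration only; the random orthogonal blocks stand in for RotorQuant's structured Clifford rotors, and none of these names come from the `rotorquant` package.

```python
import torch

def random_rotation(block_size: int) -> torch.Tensor:
    # Random orthogonal matrix via QR decomposition -- a generic stand-in
    # for RotorQuant's structured Clifford rotors, not a replica of them.
    q, r = torch.linalg.qr(torch.randn(block_size, block_size))
    return q * torch.sign(torch.diagonal(r))  # fix signs for a proper rotation

def rotate_quantize(keys: torch.Tensor, rot: torch.Tensor):
    """Rotate each block of the head dimension, then quantize to int8.

    keys: (seq_len, head_dim) slice of the KV cache for one attention head.
    """
    seq_len, head_dim = keys.shape
    block = rot.shape[0]
    assert head_dim % block == 0
    # Rotating each block separately is equivalent to multiplying the full
    # head dimension by a block-diagonal orthogonal matrix.
    blocks = keys.view(seq_len, head_dim // block, block)
    rotated = blocks @ rot.T
    # Symmetric per-block int8 quantization.
    scale = rotated.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    return torch.round(rotated / scale).to(torch.int8), scale

def dequantize(q: torch.Tensor, scale: torch.Tensor, rot: torch.Tensor):
    # Undo quantization, then invert the rotation (rot is orthogonal).
    restored = (q.float() * scale) @ rot
    return restored.reshape(q.shape[0], -1)

rot = random_rotation(16)
keys = torch.randn(128, 64)            # 128 cached tokens, head_dim 64
q, scale = rotate_quantize(keys, rot)
print((dequantize(q, scale, rot) - keys).abs().max())  # small reconstruction error
```

Because each block's rotation is orthogonal, it is exactly invertible and costs only one small matrix multiply per block, which keeps the transform cheap enough to sit inside the cache read/write path.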

## KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1.0x (baseline) | 1.0x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x** | **1.28x** (28% faster) | High | [GitHub](https://github.com/scrya-com/rotorquant) |

## Memory Estimates (GPT-OSS-20B Weights)

| Precision | Approximate Size |
|---|---|
| BF16 (original) | ~40 GB |
| 8-bit quantized | ~20 GB |
| 4-bit quantized | ~12 GB |
| 2-bit quantized | ~6 GB |

Note: the table above estimates *weight* quantization sizes, for reference only. This repository quantizes the KV cache alone, so weight memory stays at whatever precision you load the model in; the savings apply to the cache, which grows with sequence length during generation.
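
For intuition about the cache itself, here is a back-of-the-envelope estimate of KV-cache size per token at various bit widths. The layer and head counts below are illustrative assumptions, not values read from the GPT-OSS-20B config; substitute the real hyperparameters to get actual numbers.

```python
# Illustrative per-token KV-cache size. The hyperparameters are assumed
# placeholders, NOT verified GPT-OSS-20B config values.
num_layers = 24      # assumed
num_kv_heads = 8     # assumed (grouped-query attention)
head_dim = 64        # assumed

def kv_bytes_per_token(bits: int) -> float:
    # 2x accounts for storing both keys and values.
    return 2 * num_layers * num_kv_heads * head_dim * bits / 8

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit cache: {kv_bytes_per_token(bits) / 1024:.1f} KiB per token")
```

At a fixed context length, halving the bit width halves the cache, so a 4-bit cache holds 4x the context of a 16-bit cache in the same memory budget.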

## See Also

- [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) -- Base model
- [majentik/gpt-oss-20b-TurboQuant](https://huggingface.co/majentik/gpt-oss-20b-TurboQuant) -- TurboQuant KV-cache variant
- [majentik/gpt-oss-20b-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-8bit) -- MLX 8-bit variant
- [majentik/gpt-oss-20b-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-4bit) -- MLX 4-bit variant
- [majentik/gpt-oss-20b-RotorQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-MLX-2bit) -- MLX 2-bit variant
- [majentik/gpt-oss-20b-RotorQuant-GGUF-Q4_K_M](https://huggingface.co/majentik/gpt-oss-20b-RotorQuant-GGUF-Q4_K_M) -- GGUF Q4_K_M variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)