majentik committed on
Commit e17cc63 · verified · 1 Parent(s): a8905ca

Add model card

Files changed (1): README.md (+87 -0)
README.md ADDED
---
base_model: openai/gpt-oss-120b
library_name: transformers
tags:
- rotorquant
- kv-cache-quantization
- gpt-oss
- openai
- moe
- quantized
license: apache-2.0
pipeline_tag: text-generation
---

# GPT-OSS-120B - RotorQuant KV Cache

**RotorQuant KV-cache quantization** applied to [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b). RotorQuant uses block-diagonal rotations (Clifford algebra) to compress the KV cache, delivering 5.3x faster prefill and 28% faster decode than TurboQuant at equivalent memory savings.

This repository provides the RotorQuant KV-cache configuration for GPT-OSS-120B, OpenAI's first open-weights release in years (Apache 2.0). GPT-OSS-120B is OpenAI's flagship Mixture-of-Experts open model, approaching o4-mini quality on reasoning tasks and designed for production inference. The model weights remain at their original precision; only the key-value cache is quantized at runtime.

## Model Specifications

| Property | Value |
|---|---|
| **Base Model** | [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
| **Parameters** | 120 billion (MoE) |
| **Architecture** | Mixture-of-Experts (MoE) Transformer |
| **License** | Apache 2.0 (commercial use permitted) |
| **Quantization** | RotorQuant KV-cache only (weights unchanged) |
| **Downloads** | 3.5M+ on Hugging Face |

## Quickstart

```python
from rotorquant import IsoQuantCache
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Apply RotorQuant KV-cache quantization: weights stay untouched,
# only the K/V tensors are compressed at runtime.
cache = IsoQuantCache(model)

inputs = tokenizer("Explain the theory of relativity.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, past_key_values=cache, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
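
To confirm the cache is paying off, you can compare peak GPU memory for a long generation with and without the RotorQuant cache. A minimal sketch continuing from the quickstart above, using only standard PyTorch memory counters (the measurement code is generic, not part of the rotorquant API, and assumes a single-GPU setup with a fresh cache object per call):

```python
import torch

def peak_generate_gb(model, inputs, cache=None, max_new_tokens=512):
    """Peak CUDA memory (GB) observed during one generate() call."""
    torch.cuda.reset_peak_memory_stats()
    kwargs = {"max_new_tokens": max_new_tokens}
    if cache is not None:
        kwargs["past_key_values"] = cache
    model.generate(**inputs, **kwargs)
    return torch.cuda.max_memory_allocated() / 1e9

baseline = peak_generate_gb(model, inputs)                     # default full-precision cache
rotor = peak_generate_gb(model, inputs, IsoQuantCache(model))  # RotorQuant cache
print(f"peak memory -- baseline: {baseline:.1f} GB, rotorquant: {rotor:.1f} GB")
```

The gap between the two numbers grows with generation length, since weight memory is identical in both runs and only the cache differs.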

## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) compresses the KV cache by applying block-diagonal rotations (Clifford algebra) to key and value activations before quantizing them. It matches TurboQuant's memory savings while dramatically improving throughput; a conceptual sketch of the mechanism follows the list below.

Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings
- Slightly better perplexity
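
The repository doesn't publish RotorQuant's internals, but the core idea can be illustrated: build an orthogonal, block-diagonal matrix out of independent 2x2 rotor blocks, rotate each K/V vector to spread outlier mass across dimensions, quantize the rotated vector, and undo the rotation at read time. A conceptual sketch (the function names, per-token scaling, and random angles are illustrative assumptions, not RotorQuant's actual implementation):

```python
import torch

def make_block_rotation(head_dim: int, angles: torch.Tensor) -> torch.Tensor:
    """Block-diagonal orthogonal matrix built from 2x2 rotor blocks.

    Dimension pair (2i, 2i+1) is rotated by angles[i]; orthogonality
    means the transform is exactly invertible by its transpose.
    """
    R = torch.zeros(head_dim, head_dim)
    c, s = torch.cos(angles), torch.sin(angles)
    for i in range(head_dim // 2):
        R[2 * i, 2 * i], R[2 * i, 2 * i + 1] = c[i], -s[i]
        R[2 * i + 1, 2 * i], R[2 * i + 1, 2 * i + 1] = s[i], c[i]
    return R

def quantize_kv(x: torch.Tensor, R: torch.Tensor, bits: int = 4):
    """Rotate each vector, then round to signed ints with a per-token scale."""
    x_rot = x @ R.T                                   # spread outliers across dims
    qmax = 2 ** (bits - 1) - 1
    scale = x_rot.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / qmax
    q = torch.clamp(torch.round(x_rot / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor, R: torch.Tensor):
    """Undo quantization, then rotate back (R is orthogonal: R.T @ R = I)."""
    return (q.float() * scale) @ R

# Round-trip check: the only error comes from the rounding step.
x = torch.randn(4, 64)                                # 4 tokens, head_dim = 64
R = make_block_rotation(64, torch.rand(32) * torch.pi)
q, s = quantize_kv(x, R, bits=4)
print((dequantize_kv(q, s, R) - x).abs().max())       # small reconstruction error
```

A real kernel would apply the 2x2 blocks directly, making the rotation O(d) per vector rather than the dense O(d^2) matmul written above for readability.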

## KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x** | **1.28x** | High | [GitHub](https://github.com/scrya-com/rotorquant) |

## Memory Estimates (GPT-OSS-120B)

| Precision | Approximate Size |
|---|---|
| BF16 (original) | ~240 GB |
| 8-bit quantized | ~120 GB |
| 4-bit quantized | ~65 GB |
| 2-bit quantized | ~30 GB |

Note: these estimates are for weight quantization. This repository applies KV-cache quantization only, so weight memory stays at whatever precision you load the model in; the savings here are realized in the KV cache during generation.
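
For a feel of what the cache itself costs, its size scales linearly with bit-width. A back-of-the-envelope sketch (the layer/head/dimension values below are illustrative placeholders, not the published GPT-OSS-120B configuration):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bits, batch=1):
    """Total KV-cache size: two tensors (K and V) per layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bits / 8

# Illustrative placeholder config -- NOT the official GPT-OSS-120B numbers.
cfg = dict(n_layers=36, n_kv_heads=8, head_dim=64, seq_len=131_072)

for bits in (16, 4):
    gb = kv_cache_bytes(**cfg, bits=bits) / 1e9
    print(f"{bits:>2}-bit KV cache @ {cfg['seq_len']:,} tokens: {gb:.1f} GB")
```

At long contexts the cache can rival weight memory per concurrent request, which is why a 4x compression of K/V matters even with weights untouched.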

## See Also

- [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) -- Base model
- [majentik/gpt-oss-120b-TurboQuant](https://huggingface.co/majentik/gpt-oss-120b-TurboQuant) -- TurboQuant KV-cache variant
- [majentik/gpt-oss-120b-RotorQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-8bit) -- MLX 8-bit variant
- [majentik/gpt-oss-120b-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-4bit) -- MLX 4-bit variant
- [majentik/gpt-oss-120b-RotorQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-2bit) -- MLX 2-bit variant
- [majentik/gpt-oss-120b-RotorQuant-GGUF-Q4_K_M](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-GGUF-Q4_K_M) -- GGUF Q4_K_M variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)