majentik committed
Commit 23c8f95 · verified · 1 Parent(s): 6976476

Add MLX quantized model
README.md ADDED
---
base_model: openai/gpt-oss-120b
library_name: mlx
tags:
- rotorquant
- kv-cache-quantization
- gpt-oss
- openai
- moe
- quantized
- mlx
- 8bit
license: apache-2.0
pipeline_tag: text-generation
---

# GPT-OSS-120B - RotorQuant MLX 8-bit

**8-bit weight-quantized MLX version** of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) with RotorQuant KV-cache quantization, optimized for Apple Silicon inference via the [MLX](https://github.com/ml-explore/mlx) framework. RotorQuant delivers 5.3x faster prefill and 28% faster decode compared to TurboQuant. GPT-OSS-120B is OpenAI's flagship open-weights Mixture-of-Experts model (Apache 2.0), approaching o4-mini quality on reasoning tasks.

Approximate model size: **~120 GB**

## Model Specifications

| Property | Value |
|---|---|
| **Base Model** | [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) |
| **Parameters** | 120 billion (MoE) |
| **Architecture** | Mixture-of-Experts (MoE) Transformer |
| **License** | Apache 2.0 (commercial use permitted) |
| **Weight Quantization** | 8-bit (~120 GB) |
| **KV-Cache Quantization** | RotorQuant |
| **Framework** | MLX (Apple Silicon) |

## Quickstart

```python
from mlx_lm import load, generate
from rotorquant import IsoQuantCache  # KV-cache quantizer; see the RotorQuant repo for cache wiring

model, tokenizer = load("majentik/gpt-oss-120b-RotorQuant-MLX-8bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
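
To set up an environment for the snippet above: `mlx-lm` is published on PyPI, but the install path for `rotorquant` below is an assumption based on the linked GitHub repository, not a verified package name.

```shell
# mlx-lm provides the load/generate helpers used in the Quickstart
pip install mlx-lm

# Assumption: rotorquant is installable directly from its GitHub repo
pip install git+https://github.com/scrya-com/rotorquant
```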

## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) applies block-diagonal rotations (Clifford algebra) to compress the KV cache. Combined with MLX's 8-bit weight quantization, this gives a dual compression strategy: smaller weights in memory plus a smaller KV cache at inference time.

Key advantages over TurboQuant:
- **5.3x faster prefill**
- **28% faster decode**
- Equivalent memory savings

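As an illustration of the general rotate-then-quantize idea (a minimal NumPy sketch, not the RotorQuant implementation; the block size, per-tensor scaling, and random rotation choice here are arbitrary assumptions), rotating a KV vector with a block-diagonal orthogonal matrix before 8-bit quantization spreads values across each block, and the orthogonal inverse recovers the original up to quantization error:

```python
# Illustrative rotate-then-quantize sketch for a KV-cache vector.
import numpy as np

rng = np.random.default_rng(0)

def block_diag_rotation(dim: int, block: int = 4) -> np.ndarray:
    """Block-diagonal orthogonal matrix built from random `block`-sized rotations."""
    out = np.zeros((dim, dim))
    for i in range(dim // block):
        q, _ = np.linalg.qr(rng.standard_normal((block, block)))
        out[i * block:(i + 1) * block, i * block:(i + 1) * block] = q
    return out

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: one float scale + int8 values."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

dim = 16
kv = rng.standard_normal(dim).astype(np.float32)   # toy KV-cache vector
R = block_diag_rotation(dim)

rotated = R @ kv                          # rotate before quantization
q, scale = quantize_int8(rotated)         # store int8 values + one scale
recovered = R.T @ dequantize(q, scale)    # R is orthogonal, so R.T inverts it

err = np.abs(recovered - kv).max()
print(f"max reconstruction error: {err:.4f}")
```

Because each block of the rotation is orthogonal, the inverse is just the transpose, so decompression adds only a cheap matrix multiply on top of dequantization.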
## KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| **TurboQuant** | 1x (baseline) | 1x (baseline) | High | [arXiv:2504.19874](https://arxiv.org/abs/2504.19874) |
| **RotorQuant** | **5.3x faster** | **28% faster** | High | [GitHub](https://github.com/scrya-com/rotorquant) |

## Memory Estimates (GPT-OSS-120B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~240 GB | -- |
| **8-bit quantized** | **~120 GB** | **This model** |
| 4-bit quantized | ~65 GB | [RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-4bit) |
| 2-bit quantized | ~30 GB | [RotorQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-2bit) |

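The sizes in the table are consistent with a simple weights-only estimate (parameter count times bits per weight); the few extra GB in the 4-bit row presumably come from quantization scales and tensors kept at higher precision:

```python
# Weights-only size estimate for a 120B-parameter model.
PARAMS = 120e9

def approx_size_gb(bits_per_weight: int, params: float = PARAMS) -> float:
    """Raw weight storage in decimal gigabytes, ignoring scales and overhead."""
    return params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: ~{approx_size_gb(bits):.0f} GB")  # 240, 120, 60, 30
```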
## Hardware Requirements

This model requires approximately 120 GB of unified memory. Recommended hardware:
- Apple M2 Ultra (192 GB)
- Apple M3 Ultra (192 GB or 512 GB)
- Mac Studio M4 Ultra (192 GB+)
- Multi-device MLX inference for smaller Macs

## See Also

- [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) -- Base model
- [majentik/gpt-oss-120b-RotorQuant](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant) -- RotorQuant KV-cache only (transformers)
- [majentik/gpt-oss-120b-RotorQuant-MLX-4bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-4bit) -- MLX 4-bit variant
- [majentik/gpt-oss-120b-RotorQuant-MLX-2bit](https://huggingface.co/majentik/gpt-oss-120b-RotorQuant-MLX-2bit) -- MLX 2-bit variant
- [majentik/gpt-oss-120b-TurboQuant-MLX-8bit](https://huggingface.co/majentik/gpt-oss-120b-TurboQuant-MLX-8bit) -- TurboQuant MLX 8-bit variant
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
- [MLX Framework](https://github.com/ml-explore/mlx)
model-00001-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:19fe13510a991b43eac32c9d92145b60d4d7cf340ec8b67ad372b2f614c8b568
size 5218855440

model-00002-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5ead6967decb7235f14090d3793a0cbce04615e67ca33f826b718352e6d6af07
size 5168680928

model-00003-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ab5badb15305c5f68e5ab28832e27f243fb5d68963daa2f82509f62e46b0b508
size 5168680940

model-00004-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8a1b5a3371fb84167015761f0eb902ed5d66f53d55fdd54aec7b70584124fcb5
size 5168680929

model-00005-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:573b011537c4d71e199da3ce958b92a069679d534ce252b9bdbc7bf77a9dd41e
size 5168681060

model-00006-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8b37ed414dec0649bcc315f0904b1d3fc9eb49335477f449e11b300f90ae120a
size 5168680980

model-00007-of-00013.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d64806b7615e3c4d73f2936738dd47f671e859b8f740b4fed28abe3a0093250f
size 2870636512