Commit 137d154 (verified) by majentik · Parent: c00cf09

Add model card (weights pending mlx_lm mistral3 architecture support)

---
base_model: mistralai/Leanstral-2603
library_name: mlx
tags:
- rotorquant
- kv-cache-quantization
- mlx
- 2-bit
- weight-quantization
- leanstral
- lean4
- formal-proofs
- theorem-proving
- quantized
- apple-silicon
- mistral
- moe
license: apache-2.0
---

# Leanstral-RotorQuant-MLX-2bit

**2-bit MLX weight-quantized [Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603) with [RotorQuant](https://github.com/scrya-com/rotorquant) KV-cache quantization for high-throughput Lean 4 formal proof generation on Apple Silicon.**

Leanstral is the first open-source AI agent purpose-built for Lean 4 formal proofs -- generating both executable code and machine-checkable mathematical proofs. This variant combines **dual compression**: 2-bit MLX weight quantization for aggressive model size reduction plus RotorQuant KV-cache quantization, delivering **5.3x faster prefill** and **28% faster decode** compared to TurboQuant equivalents.

## Overview

This repository provides an aggressively compressed configuration: MLX 2-bit weight quantization minimizes the static memory footprint, while RotorQuant's rotation-aware KV-cache compression delivers faster prefill and decode than TurboQuant.

| Spec | Value |
|------|-------|
| Base model | [mistralai/Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603) |
| Architecture | Mistral MoE (~119B parameters, 7 consolidated shards) |
| Weight quantization | 2-bit (MLX) |
| KV-cache quantization | RotorQuant |
| Weight memory | ~30 GB |
| Prefill speedup | 5.3x vs TurboQuant |
| Decode speedup | 28% vs TurboQuant |
| Runtime | MLX (Apple Silicon) |
| License | Apache 2.0 |
| Use case | Lean 4 formal verification, theorem proving, mathematical proofs |

## Quickstart

> **Note:** Weights are pending `mlx_lm` support for the mistral3 architecture (see the commit message); the snippet below assumes a compatible `mlx_lm` release.

```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/Leanstral-RotorQuant-MLX-2bit")

prompt = "Prove that for all natural numbers n, n + 0 = n in Lean 4:"
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
)
print(response)
```

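The prompt above asks for a proof that `n + 0 = n`. One acceptable Lean 4 answer (the theorem name here is illustrative) closes the goal with `rfl`, since `Nat.add` recurses on its second argument and `n + 0` reduces to `n` definitionally; the standard library also provides this as `Nat.add_zero`:

```lean
theorem my_add_zero (n : Nat) : n + 0 = n := rfl
```
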
## What is RotorQuant?

[RotorQuant](https://github.com/scrya-com/rotorquant) is a KV-cache quantization method that uses rotation-aware quantization to achieve higher throughput than standard KV-cache compression. By exploiting the rotary positional embedding structure, RotorQuant achieves:

- **5.3x faster prefill** -- critical for long Lean 4 proof contexts
- **28% faster decode** -- faster token-by-token proof generation
- Equivalent memory savings to TurboQuant with better computational efficiency

> **Note:** 2-bit weight quantization is lossy. Expect some degradation in proof quality compared to the 4-bit variant. For critical formal verification work, prefer the 4-bit or full-precision variants.

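The intuition behind rotation-aware quantization can be shown with a toy NumPy experiment. This is an illustrative sketch of the general technique, not RotorQuant's actual implementation (it uses 4-bit quantization and a random rotation for clarity): multiplying vectors by an orthogonal matrix spreads outlier channels across all coordinates, which shrinks the per-row quantization scale and reduces round-trip error.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_roundtrip(x, bits=4):
    # Symmetric per-row quantization: scale rows into [-levels, levels],
    # round to integers, then rescale back.
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max(axis=-1, keepdims=True) / levels
    return np.clip(np.round(x / scale), -levels, levels) * scale

# Toy "keys": 128 vectors of dim 64 with one inflated outlier channel,
# mimicking the outlier structure common in attention KV tensors.
k = rng.normal(size=(128, 64))
k[:, 0] *= 20.0

# Random orthogonal rotation via QR decomposition of a Gaussian matrix.
q_mat, _ = np.linalg.qr(rng.normal(size=(64, 64)))

err_plain = np.mean((k - quantize_roundtrip(k)) ** 2)
# Rotate, quantize, then rotate back before measuring error.
err_rotated = np.mean((k - quantize_roundtrip(k @ q_mat) @ q_mat.T) ** 2)
print(err_rotated < err_plain)
```

Without rotation, the outlier channel dominates each row's maximum, so the quantization grid is too coarse for the remaining channels; after rotation the outlier energy is spread out and the grid step shrinks.
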
## Memory Estimates

| Component | Estimate |
|-----------|----------|
| Model weights (2-bit) | ~30 GB |
| KV-cache | Reduced via RotorQuant |
| Recommended hardware | MacBook Pro M2/M3/M4 Max (64 GB+) or Mac Studio |

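The ~30 GB weight figure is consistent with a back-of-envelope calculation from the parameter count, ignoring the small additional overhead of per-group quantization scales:

```python
# ~119B parameters at 2 bits per weight; scales/biases add a few percent on top.
params = 119e9
bits_per_weight = 2
weight_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
print(f"{weight_gb:.2f} GB")  # 29.75 GB, i.e. ~30 GB as in the table above
```
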
## Lean 4 Use Case

Leanstral excels at:
- **Formal verification** -- generating machine-checkable proofs of mathematical theorems
- **Theorem proving** -- interactive and automated proof search in Lean 4
- **Code generation** -- writing verified Lean 4 programs with correctness guarantees
- **Proof repair** -- fixing incomplete or broken proof scripts

## See Also

- [mistralai/Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603) -- Base model
- [majentik/Leanstral-RotorQuant](https://huggingface.co/majentik/Leanstral-RotorQuant) -- Full-precision weights + RotorQuant KV cache
- [majentik/Leanstral-RotorQuant-MLX-4bit](https://huggingface.co/majentik/Leanstral-RotorQuant-MLX-4bit) -- MLX 4-bit + RotorQuant
- [majentik/Leanstral-RotorQuant-MLX-1bit](https://huggingface.co/majentik/Leanstral-RotorQuant-MLX-1bit) -- MLX 1-bit + RotorQuant
- [majentik/Leanstral-TurboQuant-MLX-2bit](https://huggingface.co/majentik/Leanstral-TurboQuant-MLX-2bit) -- MLX 2-bit + TurboQuant
- [RotorQuant repository](https://github.com/scrya-com/rotorquant)