---
license: apache-2.0
base_model: mistralai/Leanstral-2603
tags:
- turboquant
- kv-cache-quantization
- mistral
- moe
- lean4
- formal-proofs
- quantized
library_name: transformers
pipeline_tag: text-generation
language:
- en
inference: false
---
# Leanstral-TurboQuant
**TurboQuant KV cache compression** for [mistralai/Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603).
This is a **documentation repository** that explains how to combine Leanstral's weights with TurboQuant inference-time KV cache compression. No weights are stored here; use the base model directly and apply TurboQuant via the Python package or the llama.cpp fork.
## What is this?
KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.
| Technique | Where it's applied | Savings |
|-----------|-------------------|---------|
| Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
| **TurboQuant KV cache** | At inference time | Reduces attention memory (critical for long context) |
Both can be combined for maximum efficiency.
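To get a feel for why this matters at long context, here is a rough back-of-the-envelope estimate. The layer and head counts below are illustrative placeholders, not Leanstral-2603's actual configuration (read the real values from the model's `config.json`):

```python
# Rough KV cache size estimate: 2 tensors (K and V) per layer,
# one head_dim-sized vector per token per KV head.
# NOTE: n_layers / n_kv_heads / head_dim are placeholder values,
# NOT Leanstral-2603's real configuration.
def kv_cache_gib(seq_len, n_layers=60, n_kv_heads=8, head_dim=128, bits_per_value=16):
    total_bits = 2 * n_layers * n_kv_heads * head_dim * seq_len * bits_per_value
    return total_bits / 8 / 1024**3

for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit values @ 256K tokens: ~{kv_cache_gib(256_000, bits_per_value=bits):.1f} GiB")
```

Weight quantization does nothing for this term; only KV cache compression shrinks it.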
## Quickstart
### Option A – Python / transformers
Install the `turboquant` package:
```bash
pip install turboquant
```
Then use it with the base model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from turboquant import TurboQuantCache
tokenizer = AutoTokenizer.from_pretrained("mistralai/Leanstral-2603", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"mistralai/Leanstral-2603",
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
# Apply TurboQuant to the KV cache
cache = TurboQuantCache(bits=4) # or bits=2 for more aggressive compression
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=128,
past_key_values=cache,
use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
### Option B – llama.cpp / LM Studio / Ollama (with fork)
TurboQuant KV cache types (`planar3`) are **not** in upstream llama.cpp. They require:
- [llama-cpp-turboquant fork](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache)
Once built:
```bash
llama-cli -m Leanstral.gguf \
--cache-type-k planar3 --cache-type-v planar3 \
-ngl 99 -fa \
-p "Hello"
```
For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (`q8_0`, `q4_0`). You lose the TurboQuant-specific benefits but keep GGUF weight quantization.
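For example, with upstream `llama-cli` the equivalent invocation swaps the TurboQuant cache types for standard quantized ones (quantizing the V cache generally requires flash attention, hence `-fa`):

```bash
llama-cli -m Leanstral.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 99 -fa \
  -p "Hello"
```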
## Model Specifications
| Property | Value |
|----------|-------|
| Base Model | [mistralai/Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603) |
| Architecture | Sparse MoE (128 experts, 4 active) |
| Parameters | 119B total (MoE) |
| Context Length | 256K |
| BF16 Size | ~238 GB |
| Modalities | Text |
| License | apache-2.0 |
## What is TurboQuant?
[TurboQuant](https://arxiv.org/abs/2504.19874) (ICLR 2026) applies random orthogonal rotations followed by optimal scalar quantization to the KV cache. The authors report bit-identical prefill logits at 4 bits and up to 4-8× KV memory savings for long sequences.
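Conceptually, each key/value vector is multiplied by a shared random orthogonal matrix (which spreads outliers across dimensions) and then scalar-quantized. The toy sketch below illustrates that idea only: it uses plain uniform rounding in place of the paper's optimal quantizers, and it is **not** the `turboquant` package's API.

```python
import torch

def random_orthogonal(d, seed=0):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    g = torch.Generator().manual_seed(seed)
    q, _ = torch.linalg.qr(torch.randn(d, d, generator=g))
    return q

def quantize_dequantize(x, bits=4):
    # Simplified per-row symmetric uniform quantization (a stand-in for the
    # paper's distribution-optimal scalar quantizers).
    levels = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=-1, keepdim=True) / levels
    return torch.round(x / scale).clamp(-levels, levels) * scale

d = 128                      # head dimension (illustrative)
keys = torch.randn(1024, d)  # fake key vectors for one attention head
R = random_orthogonal(d)

# Rotate, quantize, rotate back; the rotation itself is orthogonal and lossless.
keys_hat = quantize_dequantize(keys @ R, bits=4) @ R.T
print("relative reconstruction error:", ((keys_hat - keys).norm() / keys.norm()).item())
```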
**Benchmarks** (from the TurboQuant repository, Llama 3.1 8B on an RTX 5090; results vary by model and hardware):
- 4-bit KV cache: bit-identical prefill logits
- ~1.4-1.7× speedup on Apple Silicon
- Up to 8× KV memory savings
> Benchmarks are from the TurboQuant repository using Llama 3.1 8B. Performance on Leanstral will differ. Please open a discussion if you have independent results.
## Current Ecosystem Support
| Runtime | TurboQuant Support | Notes |
|---------|----------------------|-------|
| Python transformers + `turboquant` | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ `planar3`, `iso3` | [GitHub](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache) |
| LM Studio | ❌ [Requested](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1719) | Use `q8_0` as alternative |
| Ollama | ❌ Not supported | Use `OLLAMA_KV_CACHE_TYPE=q8_0` (example below) |
| vLLM | ❌ Not supported | – |
| koboldcpp | ❌ Not supported | – |
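As referenced in the Ollama row above, a minimal way to enable a quantized (non-TurboQuant) KV cache in Ollama is via environment variables; enabling flash attention alongside it is assumed to be required for quantized cache types:

```bash
# q8_0 KV cache for all models served by this Ollama instance
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```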
## Pre-quantized weight variants
If you want combined weight + KV cache compression, majentik hosts pre-quantized versions:
- [MLX (Apple Silicon)](https://huggingface.co/majentik?search=Leanstral+MLX)
- [GGUF (llama.cpp / Ollama / LM Studio)](https://huggingface.co/majentik?search=Leanstral+GGUF)
## See Also
- [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
- [TurboQuant paper (arXiv 2504.19874)](https://arxiv.org/abs/2504.19874)
- [llama-cpp-turboquant fork](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache)
- [Base model: mistralai/Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603)
- [Leanstral announcement](https://mistral.ai/news/leanstral)