Leanstral 119B A6B - GGUF

GGUF quantizations of mistralai/Leanstral-2603 for use with llama.cpp.

Leanstral is the first open-source code agent designed for Lean 4, a proof assistant for formal mathematics and software verification. Built as part of the Mistral Small 4 family, it combines multimodal capabilities with an efficient MoE + MLA architecture.

Available Quantizations

File                                            Quant    Size     Description
mistralai_Leanstral-128x3.9B-2603-Q4_K_M.gguf   Q4_K_M   68 GB    Best balance of quality and size. Runs on 2x RTX 4090 + RAM offload
mistralai_Leanstral-128x3.9B-2603-Q8_0.gguf     Q8_0     118 GB   Near-lossless. Good base for custom requantization
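As a rough sanity check, the listed file sizes can be converted into implied bits per weight (a sketch assuming decimal gigabytes and the 119B total parameter count from the Architecture section; actual GGUF files also carry metadata, so these are approximations):

```python
# Implied bits per weight from the listed file sizes (decimal GB assumed).
total_params = 119e9

for name, size_gb in [("Q4_K_M", 68), ("Q8_0", 118)]:
    bpw = size_gb * 1e9 * 8 / total_params
    print(f"{name}: ~{bpw:.2f} bits/weight")
```

The Q4_K_M figure lands in the mid-4 bits/weight range typical for that quant type, which is a quick way to spot a truncated download.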

Architecture

  • Type: Mixture of Experts (MoE) + Multi-head Latent Attention (MLA)
  • GGUF arch: deepseek2 (Mistral 4 uses the same architecture as DeepSeek V3)
  • Total parameters: 119B (6.5B active per token)
  • Experts: 128 routed + 1 shared, 4 active per token
  • MLA: q_lora_rank=1024, kv_lora_rank=256, qk_rope_head_dim=64
  • Context: Up to 1M tokens (256k recommended)
  • RoPE: YaRN scaling (factor=128, original_ctx=8192)
  • Vocab: 131,072 tokens (Tekken tokenizer)
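The 1M-token context figure is consistent with the YaRN parameters listed above: the scaling factor times the original training context gives the extended window.

```python
# YaRN extended context = scaling factor x original training context.
factor = 128
original_ctx = 8192

extended_ctx = factor * original_ctx
print(extended_ctx)  # 1,048,576 tokens, i.e. the "1M" figure
```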

How to Run

llama-server (recommended)

./llama-server \
  -m mistralai_Leanstral-128x3.9B-2603-Q4_K_M.gguf \
  -fit on -fa on \
  --host 0.0.0.0 \
  --ctx-size 128000 \
  --jinja \
  --chat-template-file chat_template.jinja

Note: You need a chat template that supports [THINK] blocks for reasoning. Download the template from the original model repo.
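Once the server is up, it exposes llama.cpp's OpenAI-compatible API. A minimal request body might look like the following sketch (assuming the default port 8080 from the command above; the model field is not used for routing by a single-model llama-server):

```python
import json

# Minimal OpenAI-compatible chat request for llama-server.
# Host/port assume the defaults of the launch command above.
url = "http://localhost:8080/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "State and prove 1 + 1 = 2 in Lean 4."}
    ],
    "max_tokens": 1024,
}

body = json.dumps(payload)
print(body)
```

The serialized `body` can be POSTed with any HTTP client (e.g. `curl -d "$body" "$url"` with a `Content-Type: application/json` header).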

Reasoning

The model supports reasoning_effort via the chat template:

  • "high" - Enables thinking (recommended for Lean 4 proofs and complex tasks)
  • "none" - Direct answers without reasoning

Pass reasoning_effort in your API request body, or modify the chat template default.
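For example, a request body that enables thinking could be built like this (a sketch: the field name follows the description above, and the endpoint shape assumes llama-server's OpenAI-compatible API with --jinja active):

```python
import json

# Chat request with reasoning enabled via the template-level switch
# described above ("high" enables [THINK] blocks, "none" disables them).
payload = {
    "messages": [
        {"role": "user", "content": "Prove that Nat addition is commutative."}
    ],
    "reasoning_effort": "high",  # or "none" for direct answers
}
print(json.dumps(payload, indent=2))
```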

Performance

On 2x RTX 4090 (48GB VRAM) + 192GB RAM with Q4_K_M:

  • ~34 tokens/s generation speed
  • Model splits between GPU and system RAM automatically with -fit on

Conversion Details

  • Source: FP8 (e4m3) consolidated weights from mistralai/Leanstral-2603
  • Pipeline: FP8 consolidated → dequant to BF16 → Q8_0 GGUF → Q4_K_M GGUF (with --allow-requantize)
  • Converter: convert_hf_to_gguf.py with --mistral-format flag (llama.cpp)
  • Tokenizer: Tekken v15 (requires mistral-common >= 1.10.0 for conversion)

License

Apache 2.0 - same as the original model.
