---
base_model: mistralai/Leanstral-2603
library_name: mlx
tags:
  - turboquant
  - kv-cache-quantization
  - mlx
  - 4-bit
  - weight-quantization
  - leanstral
  - lean4
  - formal-proofs
  - theorem-proving
  - quantized
  - apple-silicon
  - mistral
  - moe
license: apache-2.0
pipeline_tag: text-generation
language:
  - en
---

# Leanstral-TurboQuant-MLX-4bit

4-bit MLX weight-quantized Leanstral-2603 with TurboQuant KV-cache quantization for Lean 4 formal proof generation on Apple Silicon.

Leanstral is the first open-source AI agent purpose-built for Lean 4 formal proofs -- generating both executable code and machine-checkable mathematical proofs. This variant combines dual compression: 4-bit MLX weight quantization for reduced model size plus TurboQuant KV-cache quantization for efficient long-context inference.

## Overview

This repository provides a dual-compressed configuration: MLX 4-bit weight quantization reduces the static memory footprint, while TurboQuant compresses the KV cache at runtime. Together, they enable running Leanstral on high-memory Apple Silicon machines.

| Spec | Value |
|---|---|
| Base model | mistralai/Leanstral-2603 |
| Architecture | Mistral MoE (~119B parameters, 7 consolidated shards) |
| Weight quantization | 4-bit (MLX) |
| KV-cache quantization | TurboQuant |
| Weight memory | ~60 GB |
| Runtime | MLX (Apple Silicon) |
| Use case | Lean 4 formal verification, theorem proving, mathematical proofs |
| License | Apache 2.0 |

## Quickstart

```python
from mlx_lm import load, generate

# Download and load the quantized weights from the Hub
model, tokenizer = load("majentik/Leanstral-TurboQuant-MLX-4bit")

prompt = "Prove that for all natural numbers n, n + 0 = n in Lean 4:"
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
)
print(response)
```

## What is TurboQuant?

TurboQuant (arXiv:2504.19874) is a KV-cache quantization method that compresses the key-value cache used during autoregressive generation. Because the KV cache grows linearly with context length, quantizing it to lower precision yields memory savings that scale with the length of the context. Combined with MLX 4-bit weight quantization, this dual-compression approach makes it feasible to run Leanstral's ~119B-parameter model on Apple Silicon hardware.
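To see why quantizing the cache pays off, here is a minimal NumPy sketch of generic uniform 4-bit quantization applied to a mock KV tensor. This is an illustration of the memory trade-off only, not TurboQuant's actual algorithm (which the paper describes in detail); the shapes and helper names are made up for the example.

```python
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Uniform 4-bit quantization along the last axis.
    Stores uint8 codes (0..15) plus a per-row scale and offset.
    A generic sketch, not TurboQuant's actual scheme."""
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = (hi - lo) / 15.0  # 4 bits -> 16 quantization levels
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_4bit(codes, scale, lo):
    return codes.astype(np.float32) * scale + lo

# Mock KV cache for one layer: (sequence length, head dimension)
np.random.seed(0)
kv = np.random.randn(1024, 128).astype(np.float32)
codes, scale, lo = quantize_4bit(kv)
recon = dequantize_4bit(codes, scale, lo)

fp16_bytes = kv.size * 2                                # 16-bit baseline
q_bytes = kv.size // 2 + scale.size * 4 + lo.size * 4   # packed 4-bit + params
print(f"fp16: {fp16_bytes} B, 4-bit: {q_bytes} B")
print("max abs reconstruction error:", np.abs(kv - recon).max())
```

Even this naive scheme cuts cache memory by more than 3x at modest reconstruction error; TurboQuant's contribution is achieving such compression with much tighter accuracy guarantees.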

## Memory Estimates

| Component | Estimate |
|---|---|
| Model weights (4-bit) | ~60 GB |
| KV cache | Reduced via TurboQuant |
| Recommended hardware | Mac Studio M2/M3/M4 Ultra (192 GB+) or Mac Pro |
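The ~60 GB weight figure follows directly from the parameter count: roughly 119B parameters at 4 bits each, before the small per-group scale/zero-point overhead that quantized formats add. A quick back-of-envelope check:

```python
params = 119e9           # ~119B parameters
bits_per_weight = 4.0    # plus a small overhead for group scales/offsets
weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 1e9:.1f} GB")  # ~60 GB before scale overhead
```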

## Lean 4 Use Case

Leanstral excels at:

- **Formal verification** -- generating machine-checkable proofs of mathematical theorems
- **Theorem proving** -- interactive and automated proof search in Lean 4
- **Code generation** -- writing verified Lean 4 programs with correctness guarantees
- **Proof repair** -- fixing incomplete or broken proof scripts
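For illustration, the Quickstart prompt above asks for a proof that `n + 0 = n`. In Lean 4 this holds definitionally for `Nat` (addition recurses on the second argument), so a well-formed output would be a one-line proof along these lines (the theorem name is illustrative; Mathlib already provides `Nat.add_zero`):

```lean
-- n + 0 reduces to n by the definition of Nat.add, so rfl closes the goal.
theorem my_add_zero (n : Nat) : n + 0 = n := rfl
```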

## See Also