Voxtral-Mini-3B-2507-TurboQuant-MLX-4bit

A 4-bit MLX weight-quantized build of mistralai/Voxtral-Mini-3B-2507 with a TurboQuant KV-cache profile: a balanced point between quality and memory on Apple Silicon.

Overview

  • Base: mistralai/Voxtral-Mini-3B-2507 — 3B speech-understanding model
  • Capabilities: transcription, speech translation, audio QA
  • Weight precision: 4-bit (group-wise)
  • KV-cache profile: TurboQuant (per-head static calibration)
  • Approx. on-disk size: ~1.5 GB
  • Runtime: MLX on Apple Silicon
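
The ~1.5 GB figure can be sanity-checked arithmetically, assuming 4-bit packed weights plus fp16 scale and bias metadata per 64-value group (illustrative assumptions, not the exact file layout):

```python
# Back-of-envelope size estimate for a 3B-parameter model at 4 bits per
# weight, group size 64, with fp16 scale + bias per group (assumed layout).
params = 3_000_000_000
group_size = 64
weight_bytes = params * 4 / 8                   # packed 4-bit weights
overhead_bytes = (params / group_size) * 2 * 2  # fp16 scale + fp16 bias per group
total_gib = (weight_bytes + overhead_bytes) / 2**30
print(f"{total_gib:.2f} GiB")  # ≈ 1.57 GiB, consistent with "~1.5 GB"
```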

Quickstart

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-Mini-3B-2507-TurboQuant-MLX-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": [{"type": "audio", "path": "sample.wav"},
                                  {"type": "text", "text": "Transcribe this."}]}],
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
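
The other capabilities listed in the Overview use the same message shape; only the text instruction changes. A small hypothetical helper (the task wordings here are illustrative, not prescribed by the model):

```python
def build_messages(audio_path, instruction):
    # One audio turn plus one text turn, matching the Quickstart shape.
    # Feed the result to tokenizer.apply_chat_template as above.
    return [{"role": "user",
             "content": [{"type": "audio", "path": audio_path},
                         {"type": "text", "text": instruction}]}]

# Example task instructions (illustrative):
translate = build_messages("sample.wav", "Translate this audio into English.")
audio_qa = build_messages("sample.wav", "What is the speaker's main point?")
```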

Model specs

Field            Value
Parameters       3B
Weight bits      4
Group size       64
Cache profile    TurboQuant
Size on disk     ~1.5 GB
Target hardware  Apple Silicon (M1/M2/M3/M4)
License          Apache 2.0
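
Group size 64 means each contiguous run of 64 weights shares one scale and one offset. A minimal NumPy sketch of the quantize/dequantize round-trip, as an illustration of group-wise asymmetric 4-bit quantization rather than MLX's actual packing:

```python
import numpy as np

def quantize_4bit(w, group_size=64):
    """Group-wise asymmetric 4-bit quantization: each group of 64 values
    gets its own scale and offset; codes are integers in [0, 15]."""
    g = w.reshape(-1, group_size)
    lo = g.min(axis=1, keepdims=True)
    hi = g.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15
    scale[scale == 0] = 1.0  # guard against constant groups
    q = np.clip(np.round((g - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    return (q.astype(np.float32) * scale + lo).reshape(-1)

w = np.random.default_rng(0).normal(size=4096).astype(np.float32)
q, scale, lo = quantize_4bit(w)
max_err = np.abs(dequantize_4bit(q, scale, lo) - w).max()
```

The reconstruction error per value is bounded by half the group's scale, which is why smaller groups trade a little extra metadata for better fidelity.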

RotorQuant vs TurboQuant

                  TurboQuant                   RotorQuant
Strategy          Per-head static calibration  Rotational online re-basis
Memory reduction  ~3.5x on KV-cache            ~4x on KV-cache
Best for          Batch transcription          Streaming / code-switching
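
TurboQuant's internals are not documented here; as an illustration only, assuming "per-head static calibration" means fitting one fixed symmetric scale per attention head on calibration activations and reusing it to quantize the KV-cache to 4 bits at inference:

```python
import numpy as np

def calibrate_per_head(kv_samples):
    # kv_samples: (tokens, heads, head_dim) calibration activations.
    # One static amax per head, mapped to the symmetric int4 range [-7, 7].
    amax = np.abs(kv_samples).max(axis=(0, 2))
    return amax / 7.0

def quantize_kv(kv, scales):
    return np.clip(np.round(kv / scales[None, :, None]), -7, 7).astype(np.int8)

def dequantize_kv(q, scales):
    return q.astype(np.float32) * scales[None, :, None]

rng = np.random.default_rng(1)
calib = rng.normal(size=(256, 8, 64)).astype(np.float32)  # calibration pass
scales = calibrate_per_head(calib)                        # fixed thereafter
kv = rng.normal(size=(16, 8, 64)).astype(np.float32)      # inference-time KV
q = quantize_kv(kv, scales)
max_err = np.abs(dequantize_kv(q, scales) - kv).max()
```

Because the scales are static, no per-token statistics are needed at inference, which suits batch workloads; the effective reduction versus fp16 lands below the ideal 4x once scale metadata is counted, consistent with the ~3.5x in the table.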

This 4-bit build is the recommended default for most Apple Silicon deployments. Pick the 8-bit variant for highest fidelity, or the 2-bit variant for the lightest footprint.
