Voxtral-Mini-4B-Realtime-2602-RotorQuant-MLX-4bit

A 4-bit, group-wise MLX weight quantization of mistralai/Voxtral-Mini-4B-Realtime-2602 with the RotorQuant KV-cache profile. Recommended default for noisy or multi-speaker real-time ASR on Apple Silicon.

Overview

  • Base: mistralai/Voxtral-Mini-4B-Realtime-2602, a 4B-parameter real-time ASR model
  • Weight precision: 4-bit, group-wise (group size 64; see the conversion sketch below)
  • KV-cache profile: RotorQuant
  • Approx. on-disk size: ~2 GB
  • Runtime: MLX on Apple Silicon
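
For reference, a build like this can typically be reproduced with mlx-lm's converter. A minimal sketch, assuming mlx-lm's standard conversion path supports this architecture and that the output path is illustrative; it covers the weight quantization only, not the RotorQuant cache profile:

from mlx_lm import convert

# 4-bit, group-size-64 weight quantization of the base checkpoint.
# mlx_path is illustrative; the RotorQuant cache profile is applied
# at runtime and is not produced by this step.
convert(
    "mistralai/Voxtral-Mini-4B-Realtime-2602",
    mlx_path="Voxtral-Mini-4B-Realtime-2602-MLX-4bit",
    quantize=True,
    q_bits=4,
    q_group_size=64,
)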

Quickstart

pip install mlx-lm

# Streaming transcription sketch. audio_stream() and emit() are
# application-level placeholders (sketched below), not part of mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-Mini-4B-Realtime-2602-RotorQuant-MLX-4bit")

for chunk in audio_stream():
    # Wrap each audio chunk in a chat message; the chat template turns
    # it into a prompt the model can consume.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": [{"type": "audio", "path": chunk}]}],
        add_generation_prompt=True,
    )
    # A short max_tokens keeps per-chunk latency low for real-time output.
    emit(generate(model, tokenizer, prompt=prompt, max_tokens=32))
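
The two helpers above are app-specific. A minimal, illustrative pair (the file name and chunk length are assumptions) that slices a local WAV file into short clips and prints partial transcripts:

import tempfile
import wave

def audio_stream(path="input.wav", chunk_seconds=2.0):
    # Hypothetical helper: split a local WAV file into short clips and
    # yield their file paths. A real app would capture microphone frames.
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * chunk_seconds)
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
            with wave.open(tmp.name, "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            yield tmp.name

def emit(text):
    # Hypothetical helper: forward partial transcripts to the UI; here, print.
    print(text, end=" ", flush=True)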

Model specs

  Field             Value
  Parameters        4B
  Weight bits       4
  Group size        64
  Cache profile     RotorQuant
  Size on disk      ~2 GB
  Target hardware   Apple Silicon (M1/M2/M3/M4)
  License           Apache 2.0
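
The size figure is consistent with simple arithmetic under one assumption about the storage format: MLX-style group quantization keeping one fp16 scale and one fp16 bias per 64-weight group (an assumption, not a documented detail of this build):

# 4-bit weights plus per-group fp16 scale and bias (assumed format).
params = 4e9
bits_per_weight = 4 + (16 + 16) / 64       # = 4.5 effective bits per weight
print(params * bits_per_weight / 8 / 1e9)  # -> 2.25, consistent with "~2 GB"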

RotorQuant vs TurboQuant

                    RotorQuant                    TurboQuant
  Strategy          Rotational online re-basis    Per-head static calibration
  Memory reduction  ~4x on KV-cache               ~3.5x on KV-cache
  Best for          Noisy/multi-speaker streams   Predictable domains, lowest p50 latency
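
This card does not document RotorQuant's actual algorithm. Purely as an illustration of the technique family the table names (rotational re-basis followed by low-bit quantization), here is a numpy sketch; every function and parameter name is hypothetical:

import numpy as np

def rotate_and_quantize(kv, group=64, seed=0):
    # Illustrative rotation-based cache quantization, not RotorQuant itself.
    # Assumes kv's total element count is divisible by `group`.
    # 1) Re-basis with a random orthogonal rotation to spread outliers.
    d = kv.shape[-1]
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((d, d)))
    x = (kv @ q).reshape(-1, group)
    # 2) Per-group 4-bit affine quantization (16 levels per group).
    lo = x.min(axis=1, keepdims=True)
    scale = (x.max(axis=1, keepdims=True) - lo) / 15 + 1e-8
    codes = np.clip(np.round((x - lo) / scale), 0, 15).astype(np.uint8)
    return codes, scale, lo, q  # keep q to undo the rotation at read time

The rotation spreads outlier channels across each quantization group before rounding, which is the usual rationale for this family holding up better on unpredictable inputs than static per-head calibration.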
