# Voxtral-4B-TTS-2603-RotorQuant-MLX-8bit
An 8-bit MLX weight-quantized build of `mistralai/Voxtral-4B-TTS-2603` with a RotorQuant KV-cache profile. This is the highest-fidelity MLX TTS variant in the series; prefer it when batches mix voices or languages.
## Overview

- Base: `mistralai/Voxtral-4B-TTS-2603`, a 4B multilingual TTS model with zero-shot voice cloning
- Weight precision: 8-bit (group-wise, group size 64)
- KV-cache profile: RotorQuant (rotational online re-basis)
- Approx. on-disk size: ~4 GB
- Runtime: MLX on Apple Silicon
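To make "8-bit (group-wise)" concrete, here is a minimal sketch of group-wise affine quantization with the group size 64 listed in the specs. This is an illustration of the general technique, not MLX's actual quantization kernel; the function names and the affine (scale + zero-point) scheme are assumptions for the example.

```python
import numpy as np

def quantize_groupwise(w, bits=8, group_size=64):
    """Quantize a 1-D weight vector in groups of `group_size`,
    storing one scale and one offset per group (illustrative only)."""
    qmax = (1 << bits) - 1
    w = w.reshape(-1, group_size)           # one row per group
    lo = w.min(axis=1, keepdims=True)       # per-group offset
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / qmax                # per-group step size
    scale = np.where(scale == 0, 1.0, scale)
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_groupwise(q, scale, lo):
    """Reconstruct approximate fp32 weights from the 8-bit payload."""
    return (q.astype(np.float32) * scale + lo).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
q, scale, lo = quantize_groupwise(w)
w_hat = dequantize_groupwise(q, scale, lo)
err = float(np.abs(w - w_hat).max())        # bounded by half a step per group
```

Smaller groups track local weight ranges more tightly (lower error) at the cost of more per-group metadata, which is the trade-off behind the 64-element grouping.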
## Quickstart

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-8bit")

# Example inputs: replace with your own (text, reference-voice clip) pairs.
utterances = [
    ("Hello there!", "voices/reference.wav"),
]

for text, voice in utterances:
    # Pair the reference audio with the text to speak in one chat turn.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": [
            {"type": "audio", "path": voice},
            {"type": "text", "text": text},
        ]}],
        add_generation_prompt=True,
    )
    audio_tokens = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
```
## Model specs
| Field | Value |
|---|---|
| Parameters | 4B |
| Weight bits | 8 |
| Group size | 64 |
| Cache profile | RotorQuant |
| Languages | 9 |
| Voice cloning | Zero-shot |
| Size on disk | ~4 GB |
| Target hardware | Apple Silicon (M1/M2/M3/M4) |
| License | Apache 2.0 |
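The "~4 GB" figure is consistent with a back-of-envelope estimate from the table's other rows. The per-group metadata format below (one fp16 scale and one fp16 offset per 64-element group) is an assumption for the arithmetic, not the published storage layout.

```python
# Rough on-disk size estimate for 4B parameters at 8-bit, group size 64.
# Assumes (hypothetically) fp16 scale + fp16 offset per group of 64 weights.
params = 4e9
overhead_bits = 2 * 16 / 64                 # metadata amortized per weight
bits_per_weight = 8 + overhead_bits         # 8.5 effective bits
size_gb = params * bits_per_weight / 8 / 1e9
# about 4.25 GB before tokenizer/config files, in line with "~4 GB"
```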
## RotorQuant vs TurboQuant

| | RotorQuant | TurboQuant |
|---|---|---|
| Strategy | Rotational online re-basis | Per-head static calibration |
| Memory reduction | ~4x on KV-cache | ~3.5x on KV-cache |
| Best for | Multi-voice / multi-language batches | Single-voice sessions |
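To put the "~4x on KV-cache" row in perspective, here is a budget sketch against an fp16 baseline. The layer/head dimensions are hypothetical placeholders, not the model's published config; a 4x reduction from 16-bit entries corresponds to roughly 4 effective bits per cache entry.

```python
# Illustrative KV-cache memory budget (hypothetical dimensions).
layers, kv_heads, head_dim, seq_len = 32, 8, 128, 2048
entries = 2 * layers * kv_heads * head_dim * seq_len  # K and V tensors
fp16_mb = entries * 2 / 2**20                         # 2 bytes per entry
rotor_mb = fp16_mb / 4                                # ~4x reduction claimed
```

The per-sequence cache is what limits how many concurrent voices fit in memory, which is why the cache profile matters most for the mixed-voice batches this variant targets.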
## See also

- majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-4bit
- majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-2bit
- majentik/Voxtral-4B-TTS-2603-TurboQuant-MLX-8bit
- majentik/Voxtral-4B-TTS-2603-RotorQuant (KV-cache-only bundle)
- mistralai/Voxtral-4B-TTS-2603 (upstream base model)
## Model tree

- Base model: mistralai/Ministral-3-3B-Base-2512
- Finetuned from base: mistralai/Voxtral-4B-TTS-2603