---
base_model: mistralai/Voxtral-4B-TTS-2603
library_name: mlx
license: apache-2.0
pipeline_tag: text-to-speech
tags:
  - voxtral
  - audio
  - speech
  - tts
  - text-to-speech
  - voice-cloning
  - zero-shot
  - mlx
  - rotorquant
  - quantization
  - 8-bit
language:
  - en
---

# Voxtral-4B-TTS-2603-RotorQuant-MLX-8bit

An 8-bit MLX weight-quantized build of mistralai/Voxtral-4B-TTS-2603 with a RotorQuant KV-cache profile. This is the highest-fidelity MLX TTS variant; prefer it when batches mix voices or languages.

## Overview

- Base: mistralai/Voxtral-4B-TTS-2603, a 4B multilingual TTS model with zero-shot voice cloning
- Weight precision: 8-bit (group-wise)
- KV-cache profile: RotorQuant (rotational online re-basis)
- Approx. on-disk size: ~4 GB
- Runtime: MLX on Apple Silicon
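The "8-bit (group-wise)" weight format above can be sketched in plain NumPy: each group of 64 weights gets its own scale and zero-point, which is what bounds quantization error per group. This is an illustrative reconstruction, not MLX's internal kernel:

```python
import numpy as np

def quantize_groupwise(w, bits=8, group_size=64):
    """Affine-quantize a 1-D weight vector in groups of `group_size`,
    storing a per-group scale and minimum alongside the 8-bit codes."""
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - w_min) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_groupwise(q, scale, w_min):
    return (q.astype(np.float32) * scale + w_min).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, w_min = quantize_groupwise(w)
w_hat = dequantize_groupwise(q, scale, w_min)
err = np.abs(w - w_hat).max()  # bounded by half a quantization step per group
```

Because the scale is computed per 64-weight group rather than per tensor, a single outlier weight only degrades precision inside its own group.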

## Quickstart

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-4B-TTS-2603-RotorQuant-MLX-8bit")

# (text, reference-voice-file) pairs; the audio clip drives zero-shot cloning
utterances = [
    ("Hello from Voxtral.", "reference_voice.wav"),
]

for text, voice in utterances:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": [
            {"type": "audio", "path": voice},
            {"type": "text", "text": text},
        ]}],
        add_generation_prompt=True,
    )
    audio_tokens = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
```

## Model specs

| Field | Value |
|---|---|
| Parameters | 4B |
| Weight bits | 8 |
| Group size | 64 |
| Cache profile | RotorQuant |
| Languages | 9 |
| Voice cloning | Zero-shot |
| Size on disk | ~4 GB |
| Target hardware | Apple Silicon (M1/M2/M3/M4) |
| License | Apache 2.0 |
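The ~4 GB figure is consistent with back-of-the-envelope arithmetic: 4B parameters at one byte each, plus per-group metadata. The 4 bytes per group assumed below (e.g. an fp16 scale and offset) is an illustrative guess; the exact metadata layout is internal to MLX:

```python
n_params = 4e9
group_size = 64

weight_bytes = n_params * 1                      # 8-bit = 1 byte per weight
group_meta_bytes = (n_params / group_size) * 4   # assumed 4 bytes of scale/offset per group
total_gb = (weight_bytes + group_meta_bytes) / 1e9  # ≈ 4.25
```

So group-wise metadata adds only about 6% overhead, leaving the checkpoint at roughly the "~4 GB" quoted in the table.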

## RotorQuant vs TurboQuant

|  | RotorQuant | TurboQuant |
|---|---|---|
| Strategy | Rotational online re-basis | Per-head static calibration |
| Memory reduction | ~4x on KV-cache | ~3.5x on KV-cache |
| Best for | Multi-voice / multi-language batches | Single-voice sessions |
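RotorQuant's internals are not documented in this card, but the general "rotational re-basis" idea behind such schemes can be illustrated: rotate cached key/value vectors by an orthogonal matrix so outlier channels are spread across dimensions before low-bit quantization, then invert the rotation on dequantization. Everything below is an illustrative sketch, not RotorQuant itself; 4-bit codes are chosen because they match the ~4x reduction over fp16 quoted in the table:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128  # head dimension

# Random orthogonal rotation (Q factor from the QR of a Gaussian matrix)
R, _ = np.linalg.qr(rng.standard_normal((d, d)))

def quant4(x):
    """Symmetric 4-bit quantization with one scale per vector (row)."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 7
    q = np.clip(np.round(x / scale), -8, 7)
    return q, scale

k = rng.standard_normal((16, d))  # 16 cached key vectors
k[:, 0] *= 50                     # inject an outlier channel

# Re-basis, quantize, dequantize, rotate back
q, s = quant4(k @ R)
k_hat = (q * s) @ R.T
mse_rot = np.mean((k - k_hat) ** 2)

# Baseline: quantize directly; the outlier channel inflates every scale
q0, s0 = quant4(k)
mse_plain = np.mean((k - q0 * s0) ** 2)
```

Because the rotation distributes the outlier's energy across all channels, per-vector scales shrink and the reconstruction error drops relative to quantizing the raw cache directly.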

## See also