Chatterbox-Turbo-TTS-4bit

This model was converted to MLX format from ResembleAI/chatterbox-turbo using mlx-audio-plus version 0.1.6.

This model uses 4-bit quantization for the T3 GPT2 backbone, reducing memory usage while maintaining audio quality.

Note: This model requires the S3Tokenizer weights from mlx-community/S3TokenizerV2, which will be downloaded automatically.
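If you prefer to fetch the weights ahead of time (for example, for offline use), both repositories can be pre-downloaded with huggingface_hub. A minimal sketch, assuming only the repo IDs named in this card:

```python
# Sketch: pre-fetch the quantized model and its S3Tokenizer dependency
# so later generation runs without a network connection.
# Repo IDs are taken from this card.
from huggingface_hub import snapshot_download

model_path = snapshot_download("mlx-community/Chatterbox-Turbo-TTS-4bit")
tokenizer_path = snapshot_download("mlx-community/S3TokenizerV2")
print(model_path, tokenizer_path)
```

The returned paths point into the local Hugging Face cache; the model path can be passed directly as the --model argument below.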

Use with mlx-audio-plus

pip install -U mlx-audio-plus

Command line

mlx_audio.tts --model /path/to/Chatterbox-Turbo-TTS-4bit --text "Hello, this is Chatterbox Turbo on MLX!" --ref_audio reference.wav

Python

from mlx_audio.tts.generate import generate_audio

generate_audio(
    text="Hello, this is Chatterbox Turbo on MLX!",
    model="/path/to/Chatterbox-Turbo-TTS-4bit",
    ref_audio="reference.wav",
    file_prefix="output",
)