# Chatterbox-Turbo-TTS-8bit
This model was converted to MLX format from [ResembleAI/chatterbox-turbo](https://huggingface.co/ResembleAI/chatterbox-turbo) using mlx-audio-plus version 0.1.6.
This model uses 8-bit quantization for the T3 GPT2 backbone, reducing memory usage while maintaining audio quality.
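To illustrate why 8-bit quantization saves memory, here is a minimal NumPy sketch of group-wise affine quantization: each group of weights is stored as unsigned bytes plus a per-group scale and offset. This is an illustration only, not MLX's exact kernel or grouping scheme.

```python
import numpy as np

def quantize_8bit(w, group_size=64):
    # Per-group affine quantization: map each group of float32 weights
    # onto [0, 255] using a per-group scale and offset (illustrative only).
    w = w.reshape(-1, group_size)
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / 255.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_8bit(q, scale, lo):
    # Reconstruct approximate float32 weights from the uint8 codes.
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 64)).astype(np.float32)
q, scale, lo = quantize_8bit(w)
w_hat = dequantize_8bit(q, scale, lo).reshape(w.shape)
# uint8 storage is ~4x smaller than float32 per weight,
# at the cost of a small reconstruction error per group.
print(float(np.abs(w - w_hat).max()))
```

The reconstruction error per weight is bounded by half the group's scale, which is why quality holds up well at 8 bits.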
**Note:** This model requires the S3Tokenizer weights from [mlx-community/S3TokenizerV2](https://huggingface.co/mlx-community/S3TokenizerV2), which will be downloaded automatically.
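If you want to fetch the tokenizer weights ahead of time (for example, on a machine that will later run offline), a sketch using `huggingface_hub`'s `snapshot_download` — this assumes the weights are resolved through the standard Hugging Face cache, and `prefetch_s3_tokenizer` is a hypothetical helper name:

```python
from huggingface_hub import snapshot_download

def prefetch_s3_tokenizer(repo_id: str = "mlx-community/S3TokenizerV2") -> str:
    # Download (or reuse) the tokenizer weights in the local HF cache
    # and return the cached directory path.
    return snapshot_download(repo_id=repo_id)
```

Call `prefetch_s3_tokenizer()` once while online; subsequent generation runs can then resolve the weights from the local cache.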
## Use with mlx-audio-plus

```bash
pip install -U mlx-audio-plus
```
### Command line

```bash
mlx_audio.tts --model /path/to/Chatterbox-Turbo-TTS-8bit --text "Hello, this is Chatterbox Turbo on MLX!" --ref_audio reference.wav
```
### Python

```python
from mlx_audio.tts.generate import generate_audio

generate_audio(
    text="Hello, this is Chatterbox Turbo on MLX!",
    model="/path/to/Chatterbox-Turbo-TTS-8bit",
    ref_audio="reference.wav",
    file_prefix="output",
)
```