Use from the MLX library
```sh
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir IndexTTS mlx-community/IndexTTS
```

This model was converted to MLX format from IndexTeam/Index-TTS using mlx-audio version 0.2.3. Refer to the original model card for more details on the model.

Use with mlx

```sh
pip install -U mlx-audio
python -m mlx_audio.tts.generate --model mlx-community/IndexTTS --text "Describe this image."
```
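The CLI call above can also be driven from a Python script via `subprocess`. A minimal sketch, assuming only the module path and flags shown in the card (the wrapper function name and the sample text are illustrative, not part of mlx-audio):

```python
import subprocess
import sys

def build_tts_command(model: str, text: str) -> list[str]:
    """Build the argv list for mlx-audio's TTS generate entry point,
    mirroring the CLI invocation shown in this card."""
    return [
        sys.executable, "-m", "mlx_audio.tts.generate",
        "--model", model,
        "--text", text,
    ]

cmd = build_tts_command("mlx-community/IndexTTS", "Hello from IndexTTS on MLX.")
# Uncomment to actually synthesize speech (downloads the model on first run):
# subprocess.run(cmd, check=True)
```

Keeping the invocation as an argv list (rather than a shell string) avoids quoting issues when the text to synthesize contains spaces or punctuation.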
Format: Safetensors · Model size: 0.5B params · Tensor type: F16 · MLX