mlx-community/kitten-tts-micro-0.8-bf16

This is the BF16 MLX conversion of KittenML/kitten-tts-micro-0.8.

Usage

pip install -U mlx-audio
python -m mlx_audio.tts.generate --model mlx-community/kitten-tts-micro-0.8-bf16 --text "This is a local MLX test voice." --voice "expr-voice-5-m"

Inference Notes

The MLX implementation applies light end-of-utterance smoothing to prevent abrupt cutoffs at the end of generated audio. You can disable it by passing fade_out_ms=0 and tail_silence_ms=0 to Model.generate().
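Conceptually, this kind of smoothing amounts to a short fade-out ramp followed by a stretch of appended silence. The sketch below is an illustrative NumPy implementation of that idea, not the actual mlx-audio code; the function name and defaults here are assumptions for demonstration only.

```python
import numpy as np

def smooth_tail(audio, sample_rate=24000, fade_out_ms=20, tail_silence_ms=50):
    """Illustrative sketch of end-of-utterance smoothing (not the mlx-audio source).

    Applies a linear fade-out over the last `fade_out_ms` milliseconds,
    then appends `tail_silence_ms` of silence. Setting both to 0 is a no-op,
    mirroring the fade_out_ms=0 / tail_silence_ms=0 override described above.
    """
    audio = np.asarray(audio, dtype=np.float32).copy()
    fade_samples = min(int(sample_rate * fade_out_ms / 1000), len(audio))
    if fade_samples > 0:
        # Ramp amplitude linearly from 1.0 down to 0.0 over the fade window.
        audio[-fade_samples:] *= np.linspace(1.0, 0.0, fade_samples, dtype=np.float32)
    # Append trailing silence so playback does not cut off abruptly.
    tail = np.zeros(int(sample_rate * tail_silence_ms / 1000), dtype=np.float32)
    return np.concatenate([audio, tail])
```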

Original Model

Refer to the original model card for details: https://huggingface.co/KittenML/kitten-tts-micro-0.8
