MioTTS
This repository contains GGUF quantized versions of the MioTTS models. MioTTS is a lightweight, high-speed Text-to-Speech (TTS) model family designed for high-quality English and Japanese speech generation.
For model details, usage, and citations, please refer to the original model cards (linked below).
| Model Size | Quantization | File Name | Size | Original Model |
|---|---|---|---|---|
| 0.1B | BF16 | MioTTS-0.1B-BF16.gguf | 232 MB | Link |
| | Q8_0 | MioTTS-0.1B-Q8_0.gguf | 125 MB | |
| | Q6_K | MioTTS-0.1B-Q6_K.gguf | 97.3 MB | |
| | Q4_K_M | MioTTS-0.1B-Q4_K_M.gguf | 79.6 MB | |
| 0.4B | BF16 | MioTTS-0.4B-BF16.gguf | 736 MB | Link |
| | Q8_0 | MioTTS-0.4B-Q8_0.gguf | 392 MB | |
| | Q6_K | MioTTS-0.4B-Q6_K.gguf | 304 MB | |
| | Q4_K_M | MioTTS-0.4B-Q4_K_M.gguf | 239 MB | |
| 0.6B | BF16 | MioTTS-0.6B-BF16.gguf | 1.22 GB | Link |
| | Q8_0 | MioTTS-0.6B-Q8_0.gguf | 653 MB | |
| | Q6_K | MioTTS-0.6B-Q6_K.gguf | 506 MB | |
| | Q4_K_M | MioTTS-0.6B-Q4_K_M.gguf | 408 MB | |
| 1.2B | BF16 | MioTTS-1.2B-BF16.gguf | 2.39 GB | Link |
| | Q8_0 | MioTTS-1.2B-Q8_0.gguf | 1.27 GB | |
| | Q6_K | MioTTS-1.2B-Q6_K.gguf | 983 MB | |
| | Q4_K_M | MioTTS-1.2B-Q4_K_M.gguf | 751 MB | |
| 1.7B | BF16 | MioTTS-1.7B-BF16.gguf | 3.5 GB | Link |
| | Q8_0 | MioTTS-1.7B-Q8_0.gguf | 1.86 GB | |
| | Q6_K | MioTTS-1.7B-Q6_K.gguf | 1.44 GB | |
| | Q4_K_M | MioTTS-1.7B-Q4_K_M.gguf | 1.13 GB | |
| 2.6B | BF16 | MioTTS-2.6B-BF16.gguf | 5.19 GB | Link |
| | Q8_0 | MioTTS-2.6B-Q8_0.gguf | 2.76 GB | |
| | Q6_K | MioTTS-2.6B-Q6_K.gguf | 2.13 GB | |
| | Q4_K_M | MioTTS-2.6B-Q4_K_M.gguf | 1.58 GB | |
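Every file in the table follows the same naming pattern, `MioTTS-{size}-{quant}.gguf`. As a small illustration (a sketch of my own, not part of the official tooling — the helper name and the value lists are taken from the table above, nothing else is assumed), a filename can be built programmatically:

```python
# Hypothetical helper (not part of the official MioTTS tooling):
# builds a GGUF filename from the naming pattern used in the table above.

SIZES = {"0.1B", "0.4B", "0.6B", "1.2B", "1.7B", "2.6B"}
QUANTS = {"BF16", "Q8_0", "Q6_K", "Q4_K_M"}

def miotts_filename(size: str, quant: str) -> str:
    """Return the GGUF filename for a given model size and quantization."""
    if size not in SIZES:
        raise ValueError(f"unknown model size: {size}")
    if quant not in QUANTS:
        raise ValueError(f"unknown quantization: {quant}")
    return f"MioTTS-{size}-{quant}.gguf"

print(miotts_filename("0.6B", "Q4_K_M"))  # MioTTS-0.6B-Q4_K_M.gguf
```

Such a filename could then be passed, for example, to `huggingface_hub.hf_hub_download` together with the repository id to fetch the file.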
Please check the official inference repository for instructions on how to run these models.
GitHub: Aratako/MioTTS-Inference
Note that the license differs depending on the model size (each is inherited from its respective base model). Please check the original model card for the specific license terms before use.