---
pipeline_tag: text-to-speech
tags:
- speech
- tts
- voice
- gguf
license: other
---

# MioTTS-GGUF

[![Hugging Face Collection](https://img.shields.io/badge/Collection-HuggingFace-yellow)](https://huggingface.co/collections/Aratako/miotts)
[![Inference Code](https://img.shields.io/badge/Inference-GitHub-black)](https://github.com/Aratako/MioTTS-Inference)

This repository contains **GGUF quantized versions** of the [MioTTS models](https://huggingface.co/collections/Aratako/miotts).
MioTTS is a lightweight, high-speed Text-to-Speech (TTS) model family designed for high-quality English and Japanese speech generation.

For model details, usage, and citations, please refer to the original model cards (linked below).

## 📦 Available Models & Files

| Model Size | Quantization | File Name | Size | Original Model |
| :--- | :--- | :--- | :--- | :--- |
| **0.1B** | BF16 | `MioTTS-0.1B-BF16.gguf` | 232 MB | [Link](https://huggingface.co/Aratako/MioTTS-0.1B) |
| | Q8_0 | `MioTTS-0.1B-Q8_0.gguf` | 125 MB | |
| | Q6_K | `MioTTS-0.1B-Q6_K.gguf` | 97.3 MB | |
| | Q4_K_M | `MioTTS-0.1B-Q4_K_M.gguf` | 79.6 MB | |
| **0.4B** | BF16 | `MioTTS-0.4B-BF16.gguf` | 736 MB | [Link](https://huggingface.co/Aratako/MioTTS-0.4B) |
| | Q8_0 | `MioTTS-0.4B-Q8_0.gguf` | 392 MB | |
| | Q6_K | `MioTTS-0.4B-Q6_K.gguf` | 304 MB | |
| | Q4_K_M | `MioTTS-0.4B-Q4_K_M.gguf` | 239 MB | |
| **0.6B** | BF16 | `MioTTS-0.6B-BF16.gguf` | 1.22 GB | [Link](https://huggingface.co/Aratako/MioTTS-0.6B) |
| | Q8_0 | `MioTTS-0.6B-Q8_0.gguf` | 653 MB | |
| | Q6_K | `MioTTS-0.6B-Q6_K.gguf` | 506 MB | |
| | Q4_K_M | `MioTTS-0.6B-Q4_K_M.gguf` | 408 MB | |
| **1.2B** | BF16 | `MioTTS-1.2B-BF16.gguf` | 2.39 GB | [Link](https://huggingface.co/Aratako/MioTTS-1.2B) |
| | Q8_0 | `MioTTS-1.2B-Q8_0.gguf` | 1.27 GB | |
| | Q6_K | `MioTTS-1.2B-Q6_K.gguf` | 983 MB | |
| | Q4_K_M | `MioTTS-1.2B-Q4_K_M.gguf` | 751 MB | |
| **1.7B** | BF16 | `MioTTS-1.7B-BF16.gguf` | 3.5 GB | [Link](https://huggingface.co/Aratako/MioTTS-1.7B) |
| | Q8_0 | `MioTTS-1.7B-Q8_0.gguf` | 1.86 GB | |
| | Q6_K | `MioTTS-1.7B-Q6_K.gguf` | 1.44 GB | |
| | Q4_K_M | `MioTTS-1.7B-Q4_K_M.gguf` | 1.13 GB | |
| **2.6B** | BF16 | `MioTTS-2.6B-BF16.gguf` | 5.19 GB | [Link](https://huggingface.co/Aratako/MioTTS-2.6B) |
| | Q8_0 | `MioTTS-2.6B-Q8_0.gguf` | 2.76 GB | |
| | Q6_K | `MioTTS-2.6B-Q6_K.gguf` | 2.13 GB | |
| | Q4_K_M | `MioTTS-2.6B-Q4_K_M.gguf` | 1.58 GB | |

## 🚀 Usage

Please check the official inference repository for instructions on how to run these models.

👉 **[GitHub: Aratako/MioTTS-Inference](https://github.com/Aratako/MioTTS-Inference)**
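
The file names in the table follow a regular `MioTTS-{size}-{quant}.gguf` scheme, so a specific file can be fetched directly from this repository. A minimal sketch (the helper function is hypothetical, not part of the project; the download call assumes the standard `huggingface_hub` client and this repo's id):

```python
def miotts_gguf_filename(size: str, quant: str) -> str:
    """Build a MioTTS GGUF file name from a model size and quantization type,
    following the naming scheme used in the table above."""
    return f"MioTTS-{size}-{quant}.gguf"


# e.g. the 0.4B model quantized to Q6_K:
filename = miotts_gguf_filename("0.4B", "Q6_K")
print(filename)  # MioTTS-0.4B-Q6_K.gguf

# To download the file, huggingface_hub can be used (requires network access):
# from huggingface_hub import hf_hub_download
# local_path = hf_hub_download(repo_id="Aratako/MioTTS-GGUF", filename=filename)
```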

## 📜 License

The license differs depending on the model size, as each model inherits the license of its respective base model. **Please check the original model card for the specific license terms before use.**

* **0.1B:** Falcon-LLM License
* **0.4B, 1.2B, 2.6B:** LFM Open License v1.0
* **0.6B, 1.7B:** Apache 2.0