mr-don88 committed · Commit a5753ac · verified · 1 Parent(s): 6ee6511

Update README.md

Files changed (1)
  1. README.md +127 −117
README.md CHANGED
@@ -1,117 +1,127 @@
- ---
- license: apache-2.0
- language:
- - en
- base_model:
- - yl4579/StyleTTS2-LJSpeech
- pipeline_tag: text-to-speech
- ---
- **vansarah** is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, vansarah can be deployed anywhere from production environments to personal projects.
-
- <audio controls><source src="https://huggingface.co/mr-don88/vansarah-82M/resolve/main/samples/HEARME.wav" type="audio/wav"></audio>
-
- 🐈 **GitHub**: https://github.com/mr-don88/vansarah
-
- 🚀 **Demo**: https://huggingface.co/spaces/MR-DON/vansarah-tts
-
- > [!NOTE]
- > As of April 2025, the market rate of vansarah served over API is **under $1 per million characters of text input**, or under $0.06 per hour of audio output. (On average, 1000 characters of input is about 1 minute of output.) Sources: [ArtificialAnalysis/Replicate at 65 cents per M chars](https://artificialanalysis.ai/text-to-speech/model-family/vansarah#price) and [DeepInfra at 80 cents per M chars](https://deepinfra.com/mr-don88/vansarah-82M).
- >
- > This is an Apache-licensed model, and vansarah has been deployed in numerous projects and commercial APIs. We welcome the deployment of the model in real use cases.
-
- > [!CAUTION]
- > Fake websites like vansarahttsai_com (snapshot: https://archive.ph/nRRnk) and vansarahtts_net (snapshot: https://archive.ph/60opa) are likely scams masquerading under the banner of a popular model.
- >
- > Any website containing "vansarah" in its root domain (e.g. vansarahttsai_com, vansarahtts_net) is **NOT owned by and NOT affiliated with this model page or its author**, and attempts to imply otherwise are red flags.
-
- - [Releases](#releases)
- - [Usage](#usage)
- - [EVAL.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/EVAL.md) ↗️
- - [SAMPLES.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/SAMPLES.md) ↗️
- - [VOICES.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/VOICES.md) ↗️
- - [Model Facts](#model-facts)
- - [Training Details](#training-details)
- - [Creative Commons Attribution](#creative-commons-attribution)
- - [Acknowledgements](#acknowledgements)
-
- ### Releases
-
- | Model | Published | Training Data | Langs & Voices | SHA256 |
- | ----- | --------- | ------------- | -------------- | ------ |
- | **v1.0** | **2025 Jan 27** | **Few hundred hrs** | [**8 & 54**](https://huggingface.co/mr-don88/vansarah-82M/blob/main/VOICES.md) | `496dba11` |
- | [v0.19](https://huggingface.co/mr-don88/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` |
-
- | Training Costs | v0.19 | v1.0 | **Total** |
- | -------------- | ----- | ---- | ----- |
- | in A100 80GB GPU hours | 500 | 500 | **1000** |
- | average hourly rate | $0.80/h | $1.20/h | **$1/h** |
- | in USD | $400 | $600 | **$1000** |
-
- ### Usage
- You can run this basic cell on [Google Colab](https://colab.research.google.com/). [Listen to samples](https://huggingface.co/mr-don88/vansarah-82M/blob/main/SAMPLES.md). For more languages and details, see [Advanced Usage](https://github.com/mr-don88/vansarah?tab=readme-ov-file#advanced-usage).
- ```py
- !pip install -q vansarah>=0.9.2 soundfile
- !apt-get -qq -y install espeak-ng > /dev/null 2>&1
- from vansarah import KPipeline
- from IPython.display import display, Audio
- import soundfile as sf
- import torch
- pipeline = KPipeline(lang_code='a')
- text = '''
- [vansarah](/kˈOkəɹO/) is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, [vansarah](/kˈOkəɹO/) can be deployed anywhere from production environments to personal projects.
- '''
- generator = pipeline(text, voice='af_heart')
- for i, (gs, ps, audio) in enumerate(generator):
- print(i, gs, ps)
- display(Audio(data=audio, rate=24000, autoplay=i==0))
- sf.write(f'{i}.wav', audio, 24000)
- ```
- Under the hood, `vansarah` uses [`misaki`](https://pypi.org/project/misaki/), a G2P library at https://github.com/mr-don88/misaki
-
- ### Model Facts
-
- **Architecture:**
- - StyleTTS 2: https://arxiv.org/abs/2306.07691
- - ISTFTNet: https://arxiv.org/abs/2203.02395
- - Decoder only: no diffusion, no encoder release
-
- **Architected by:** Li et al @ https://github.com/yl4579/StyleTTS2
-
- **Trained by**: `@rzvzn` on Discord
-
- **Languages:** Multiple
-
- **Model SHA256 Hash:** `496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4`
-
- ### Training Details
-
- **Data:** vansarah was trained exclusively on **permissive/non-copyrighted audio data** and IPA phoneme labels. Examples of permissive/non-copyrighted audio include:
- - Public domain audio
- - Audio licensed under Apache, MIT, etc
- - Synthetic audio<sup>[1]</sup> generated by closed<sup>[2]</sup> TTS models from large providers<br/>
- [1] https://copyright.gov/ai/ai_policy_guidance.pdf<br/>
- [2] No synthetic audio from open TTS models or "custom voice clones"
-
- **Total Dataset Size:** A few hundred hours of audio
-
- **Total Training Cost:** About $1000 for 1000 hours of A100 80GB vRAM
-
- ### Creative Commons Attribution
-
- The following CC BY audio was part of the dataset used to train vansarah v1.0.
-
- | Audio Data | Duration Used | License | Added to Training Set After |
- | ---------- | ------------- | ------- | --------------------------- |
- | [Koniwa](https://github.com/koniwa/koniwa) `tnc` | <1h | [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/deed.ja) | v0.19 / 22 Nov 2024 |
- | [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) | <11h | [CC BY 4.0](https://datashare.ed.ac.uk/bitstream/handle/10283/2353/license_text) | v0.19 / 22 Nov 2024 |
-
- ### Acknowledgements
-
- - 🛠️ [@yl4579](https://huggingface.co/yl4579) for architecting StyleTTS 2.
- - 🏆 [@Pendrokar](https://huggingface.co/Pendrokar) for adding vansarah as a contender in the TTS Spaces Arena.
- - 📊 Thank you to everyone who contributed synthetic training data.
- - ❤️ Special thanks to all compute sponsors.
- - 👾 Discord server: https://discord.gg/QuGxSWBfQy
- - 🪽 vansarah is a Japanese word that translates to "heart" or "spirit". It is also the name of an [AI in the Terminator franchise](https://terminator.fandom.com/wiki/vansarah).
-
- <img src="https://static0.gamerantimages.com/wordpress/wp-content/uploads/2024/08/terminator-zero-41-1.jpg" width="400" alt="vansarah" />

+ ---
+ title: vansarah TTS
+ emoji: 🎙️
+ colorFrom: pink
+ colorTo: purple
+ sdk: gradio # Or "streamlit" if using Streamlit
+ sdk_version: "4.28.3" # Latest Gradio version
+ app_file: app.py
+ pinned: false
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - yl4579/StyleTTS2-LJSpeech
+ pipeline_tag: text-to-speech
+ ---
+
+ **vansarah** is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, vansarah can be deployed anywhere from production environments to personal projects.
+
+ <audio controls><source src="https://huggingface.co/mr-don88/vansarah-82M/resolve/main/samples/HEARME.wav" type="audio/wav"></audio>
+
+ 🐈 **GitHub**: https://github.com/mr-don88/vansarah
+
+ 🚀 **Demo**: https://huggingface.co/spaces/MR-DON/vansarah-tts
+
+ > [!NOTE]
+ > As of April 2025, the market rate of vansarah served over API is **under $1 per million characters of text input**, or under $0.06 per hour of audio output. (On average, 1000 characters of input is about 1 minute of output.) Sources: [ArtificialAnalysis/Replicate at 65 cents per M chars](https://artificialanalysis.ai/text-to-speech/model-family/vansarah#price) and [DeepInfra at 80 cents per M chars](https://deepinfra.com/mr-don88/vansarah-82M).
+ >
+ > This is an Apache-licensed model, and vansarah has been deployed in numerous projects and commercial APIs. We welcome the deployment of the model in real use cases.
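The per-character and per-hour rates in the note are related by the stated rule of thumb (about 1000 characters per minute of audio). A minimal sketch of that conversion, using only figures quoted above:

```python
# Convert a per-character API rate into a per-hour-of-audio rate,
# using the note's rule of thumb: ~1000 chars of input -> ~1 min of audio.
CHARS_PER_MINUTE = 1000  # average stated in the note above

def cost_per_audio_hour(usd_per_million_chars: float) -> float:
    """Estimated USD per hour of generated audio at a given $/M-chars rate."""
    chars_per_hour = CHARS_PER_MINUTE * 60  # ~60,000 chars per audio hour
    return usd_per_million_chars * chars_per_hour / 1_000_000

# The two cited market rates ($/M chars): ArtificialAnalysis/Replicate, DeepInfra
for rate in (0.65, 0.80):
    print(f"${rate:.2f}/M chars -> ${cost_per_audio_hour(rate):.3f}/audio hour")
```

Both cited rates land below the note's "under $0.06 per hour of audio output" figure.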
+
+ > [!CAUTION]
+ > Fake websites like vansarahttsai_com (snapshot: https://archive.ph/nRRnk) and vansarahtts_net (snapshot: https://archive.ph/60opa) are likely scams masquerading under the banner of a popular model.
+ >
+ > Any website containing "vansarah" in its root domain (e.g. vansarahttsai_com, vansarahtts_net) is **NOT owned by and NOT affiliated with this model page or its author**, and attempts to imply otherwise are red flags.
+
+ - [Releases](#releases)
+ - [Usage](#usage)
+ - [EVAL.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/EVAL.md) ↗️
+ - [SAMPLES.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/SAMPLES.md) ↗️
+ - [VOICES.md](https://huggingface.co/mr-don88/vansarah-82M/blob/main/VOICES.md) ↗️
+ - [Model Facts](#model-facts)
+ - [Training Details](#training-details)
+ - [Creative Commons Attribution](#creative-commons-attribution)
+ - [Acknowledgements](#acknowledgements)
+
+ ### Releases
+
+ | Model | Published | Training Data | Langs & Voices | SHA256 |
+ | ----- | --------- | ------------- | -------------- | ------ |
+ | **v1.0** | **2025 Jan 27** | **Few hundred hrs** | [**8 & 54**](https://huggingface.co/mr-don88/vansarah-82M/blob/main/VOICES.md) | `496dba11` |
+ | [v0.19](https://huggingface.co/mr-don88/kLegacy/tree/main/v0.19) | 2024 Dec 25 | <100 hrs | 1 & 10 | `3b0c392f` |
+
+ | Training Costs | v0.19 | v1.0 | **Total** |
+ | -------------- | ----- | ---- | --------- |
+ | in A100 80GB GPU hours | 500 | 500 | **1000** |
+ | average hourly rate | $0.80/h | $1.20/h | **$1/h** |
+ | in USD | $400 | $600 | **$1000** |
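The cost table is internally consistent: each USD row is GPU hours × hourly rate, and the $1/h figure is the blended average. A quick check, using only the table's own numbers:

```python
# Recompute the training-cost table from its GPU-hour and rate columns.
runs = {"v0.19": (500, 0.80), "v1.0": (500, 1.20)}  # (A100 80GB GPU hours, $/h)

usd = {name: hours * rate for name, (hours, rate) in runs.items()}
total_hours = sum(hours for hours, _ in runs.values())
total_usd = sum(usd.values())
blended_rate = total_usd / total_hours  # the table's "$1/h" average

print(usd)                     # per-version cost in USD
print(total_hours, total_usd)  # 1000 GPU hours, $1000 total
print(blended_rate)            # 1.0
```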
+
+ ### Usage
+ You can run this basic cell on [Google Colab](https://colab.research.google.com/). [Listen to samples](https://huggingface.co/mr-don88/vansarah-82M/blob/main/SAMPLES.md). For more languages and details, see [Advanced Usage](https://github.com/mr-don88/vansarah?tab=readme-ov-file#advanced-usage).
+ ```py
+ !pip install -q "vansarah>=0.9.2" soundfile
+ !apt-get -qq -y install espeak-ng > /dev/null 2>&1
+ from vansarah import KPipeline
+ from IPython.display import display, Audio
+ import soundfile as sf
+ import torch
+ pipeline = KPipeline(lang_code='a')
+ text = '''
+ [vansarah](/kˈOkəɹO/) is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, [vansarah](/kˈOkəɹO/) can be deployed anywhere from production environments to personal projects.
+ '''
+ generator = pipeline(text, voice='af_heart')
+ for i, (gs, ps, audio) in enumerate(generator):
+     print(i, gs, ps)
+     display(Audio(data=audio, rate=24000, autoplay=i==0))
+     sf.write(f'{i}.wav', audio, 24000)
+ ```
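The cell above writes WAV files through `soundfile`; when that dependency is unavailable, 24 kHz mono output can also be written with Python's standard-library `wave` module. A minimal sketch, with a synthetic sine tone standing in for the model's float audio (the `write_wav` helper and the tone are illustrative, not part of the vansarah API):

```python
import math
import struct
import wave

SAMPLE_RATE = 24000  # vansarah outputs 24 kHz audio

def write_wav(path, samples, rate=SAMPLE_RATE):
    """Write float samples in [-1, 1] as 16-bit mono PCM using only the stdlib."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit PCM
        wav.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
        )
        wav.writeframes(frames)

# Stand-in for model output: one second of a 440 Hz tone at half amplitude.
tone = [0.5 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
write_wav("tone.wav", tone)

with wave.open("tone.wav", "rb") as wav:
    print(wav.getframerate(), wav.getnframes())  # 24000 24000
```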
+ Under the hood, `vansarah` uses [`misaki`](https://pypi.org/project/misaki/), a G2P library at https://github.com/mr-don88/misaki.
+
+ ### Model Facts
+
+ **Architecture:**
+ - StyleTTS 2: https://arxiv.org/abs/2306.07691
+ - ISTFTNet: https://arxiv.org/abs/2203.02395
+ - Decoder only: no diffusion, no encoder release
+
+ **Architected by:** Li et al. @ https://github.com/yl4579/StyleTTS2
+
+ **Trained by:** `@rzvzn` on Discord
+
+ **Languages:** Multiple
+
+ **Model SHA256 Hash:** `496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4`
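A downloaded checkpoint can be checked against this digest with Python's `hashlib`; the short `496dba11` in the releases table is simply the first 8 hex characters of the full hash. A minimal sketch (the checkpoint filename is illustrative):

```python
import hashlib

EXPECTED = "496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4"

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks (checkpoints can be large)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative usage; point this at the actual downloaded checkpoint file:
# assert sha256_file("vansarah-v1_0.pth") == EXPECTED
```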
+
+ ### Training Details
+
+ **Data:** vansarah was trained exclusively on **permissive/non-copyrighted audio data** and IPA phoneme labels. Examples of permissive/non-copyrighted audio include:
+ - Public domain audio
+ - Audio licensed under Apache, MIT, etc.
+ - Synthetic audio<sup>[1]</sup> generated by closed<sup>[2]</sup> TTS models from large providers<br/>
+ [1] https://copyright.gov/ai/ai_policy_guidance.pdf<br/>
+ [2] No synthetic audio from open TTS models or "custom voice clones"
+
+ **Total Dataset Size:** A few hundred hours of audio
+
+ **Total Training Cost:** About $1000 for 1000 A100 80GB GPU hours
+
+ ### Creative Commons Attribution
+
+ The following CC BY audio was part of the dataset used to train vansarah v1.0.
+
+ | Audio Data | Duration Used | License | Added to Training Set After |
+ | ---------- | ------------- | ------- | --------------------------- |
+ | [Koniwa](https://github.com/koniwa/koniwa) `tnc` | <1h | [CC BY 3.0](https://creativecommons.org/licenses/by/3.0/deed.ja) | v0.19 / 22 Nov 2024 |
+ | [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) | <11h | [CC BY 4.0](https://datashare.ed.ac.uk/bitstream/handle/10283/2353/license_text) | v0.19 / 22 Nov 2024 |
+
+ ### Acknowledgements
+
+ - 🛠️ [@yl4579](https://huggingface.co/yl4579) for architecting StyleTTS 2.
+ - 🏆 [@Pendrokar](https://huggingface.co/Pendrokar) for adding vansarah as a contender in the TTS Spaces Arena.
+ - 📊 Thank you to everyone who contributed synthetic training data.
+ - ❤️ Special thanks to all compute sponsors.
+ - 👾 Discord server: https://discord.gg/QuGxSWBfQy
+ - 🪽 vansarah is a Japanese word that translates to "heart" or "spirit". It is also the name of an [AI in the Terminator franchise](https://terminator.fandom.com/wiki/vansarah).