YunxinLi committed
Commit d95f80e · verified · 1 parent: afc84e2

Update README.md

Files changed (1): README.md (+7, −3)
README.md CHANGED
@@ -1,7 +1,12 @@
+---
+license: apache-2.0
+base_model:
+- Qwen/Qwen2.5-0.5B
+---
 <h1 align="center">Uni-MoE-TTS: Text to Speech model for Uni-MoE 2.0</h1>
 
 <p>
-<strong>Uni-MoE-TTS</strong> is the audio output module of the Uni-MoE 2.0 version. It adopts a multi-layer Transformers architecture with mixture of experts(from text tokens to audio tokens) and an innovative context-aware & long-audio chunking synthesis mechanism, enabling high-quality long-audio synthesis. Currently, it supports three distinct timbres and two languages (Chinese and English), while the function of text-controlled speech style is still in the experimental stage.
+<strong>Uni-MoE-TTS-1.2BA0.7B</strong> is the audio output module of the Uni-MoE 2.0 version. It adopts a multi-layer Transformers architecture with mixture of experts(from text tokens to audio tokens) and an innovative context-aware & long-audio chunking synthesis mechanism, enabling high-quality long-audio synthesis. Currently, it supports three distinct timbres and two languages (Chinese and English), while the function of text-controlled speech style is still in the experimental stage.
 </p>
 
 <p align="center">

@@ -226,5 +231,4 @@ infer_tts(model=model,
 Please cite the repo if you use the model or code in this repo.
 uni-moe 2.0
 
-```
-
+```
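
For reference, the five lines this commit adds at the top of README.md are a Hugging Face model-card metadata block. Rendered in the file, the YAML front matter reads (content taken verbatim from the diff):

```yaml
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-0.5B
---
```

The Hub parses this block to show the model's license (Apache-2.0) and to link the repository to its declared base model, Qwen/Qwen2.5-0.5B, on the model page.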