ASLP-lab committed on
Commit 17193f3 · verified · 1 Parent(s): 9dcee4c

Update README.md

Files changed (1):
  1. README.md +2 -136

README.md CHANGED
@@ -1,9 +1,8 @@
-[![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)
+![WenetSpeech-Pipe Overview Image](https://huggingface.co/username/model_name/resolve/main/my_image.png)
 
 ## 👉🏻 CosyVoice 👈🏻
 **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)
 
-**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)
 
 ## Highlight🔥
 
@@ -30,22 +29,6 @@
 
 - [x] 25hz cosyvoice 2.0 released
 
-- [x] 2024/09
-
-    - [x] 25hz cosyvoice base model
-    - [x] 25hz cosyvoice voice conversion model
-
-- [x] 2024/08
-
-    - [x] Repetition Aware Sampling (RAS) inference for llm stability
-    - [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization
-
-- [x] 2024/07
-
-    - [x] Flow matching training support
-    - [x] WeTextProcessing support when ttsfrd is not available
-    - [x] Fastapi server and client
-
 
 ## Install
 
@@ -78,40 +61,7 @@ sudo yum install sox sox-devel
 
 **Model download**
 
-We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
-
-``` python
-# SDK model download
-from modelscope import snapshot_download
-snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
-snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
-snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
-snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
-snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
-snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
-```
-
-``` sh
-# model download via git; please make sure git lfs is installed
-mkdir -p pretrained_models
-git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
-git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
-git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
-git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
-git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
-git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
-```
-
-Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
-
-Note that this step is not necessary. If you do not install the `ttsfrd` package, WeTextProcessing is used by default.
 
-``` sh
-cd pretrained_models/CosyVoice-ttsfrd/
-unzip resource.zip -d .
-pip install ttsfrd_dependency-0.1-py3-none-any.whl
-pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
-```
 
 **Basic Usage**
 
@@ -133,95 +83,11 @@ cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load
 # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
 # zero_shot usage
 prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
-for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
-    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-
-# fine-grained control; for supported control tokens, check cosyvoice/tokenizer/tokenizer.py#L248
-for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
-    torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
 
 # instruct usage
-for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
+for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用粤语说这句话', prompt_speech_16k, stream=False)):
     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
 ```
 
-**CosyVoice Usage**
-```python
-cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
-# sft usage
-print(cosyvoice.list_available_spks())
-# change stream=True for chunk streaming inference
-for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
-    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-
-cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')  # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
-# zero_shot usage; <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
-prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
-for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
-    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-# cross_lingual usage
-prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
-for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
-    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-# vc usage
-prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
-source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
-for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
-    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-
-cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
-# instruct usage; supports <laughter></laughter><strong></strong>[laughter][breath]
-for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
-    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
-```
-
-**Start web demo**
-
-You can use our web demo page to get familiar with CosyVoice quickly.
-
-Please see the demo website for details.
-
-``` sh
-# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
-python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
-```
-
-**Advanced Usage**
-
-For advanced users, we have provided train and inference scripts in `examples/libritts/cosyvoice/run.sh`.
-
-**Build for deployment**
-
-Optionally, if you want service deployment, you can run the following steps.
-
-``` sh
-cd runtime/python
-docker build -t cosyvoice:v1.0 .
-# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
-# for grpc usage
-docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
-cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
-# for fastapi usage
-docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
-cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
-```
-
-## Discussion & Communication
-
-You can discuss directly on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
-
-You can also scan the QR code to join our official DingTalk chat group.
-
-<img src="./asset/dingding.png" width="250px">
-
-## Acknowledge
-
-1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
-2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
-3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
-4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
-5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
-
 ## Disclaimer
 The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
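The only functional code change in this commit swaps the instruct example's dialect prompt from Sichuanese to Cantonese; the surrounding loop is unchanged. Every usage snippet in the README follows the same pattern: iterate over a generator of synthesis chunks and save each `tts_speech` payload under an indexed filename. A minimal sketch of that loop with a stubbed model (`FakeModel`, `save_all`, and the placeholder chunks are illustrations for this sketch, not the real CosyVoice API):

```python
class FakeModel:
    """Stand-in for CosyVoice2: yields placeholder chunks shaped like the
    {'tts_speech': ...} dicts the real inference generators produce."""
    sample_rate = 24000

    def inference_instruct2(self, tts_text, instruct_text, prompt_speech, stream=False):
        # The real model yields one or more audio chunks; we yield two stubs.
        for chunk in ('chunk0', 'chunk1'):
            yield {'tts_speech': chunk}


def save_all(model, tts_text, instruct_text, prompt_speech):
    """Mirror the README loop: enumerate the generator and derive one indexed
    filename per chunk (the real code calls torchaudio.save at this point)."""
    names = []
    for i, out in enumerate(model.inference_instruct2(tts_text, instruct_text,
                                                      prompt_speech, stream=False)):
        names.append('instruct_{}.wav'.format(i))
    return names


print(save_all(FakeModel(), 'some text', 'say this in Cantonese', None))
# → ['instruct_0.wav', 'instruct_1.wav']
```

Because the generator may yield several chunks (especially with `stream=True`), indexing the output filename by the enumeration counter keeps each chunk in a separate file.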