jaman21 committed on
Commit
037ba10
·
verified ·
1 Parent(s): c73d583

Upload model

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. CosyVoice-300M-25Hz/README.md +169 -0
  2. CosyVoice-300M-25Hz/campplus.onnx +3 -0
  3. CosyVoice-300M-25Hz/configuration.json +4 -0
  4. CosyVoice-300M-25Hz/cosyvoice.yaml +203 -0
  5. CosyVoice-300M-25Hz/flow.decoder.estimator.fp32.onnx +3 -0
  6. CosyVoice-300M-25Hz/flow.pt +3 -0
  7. CosyVoice-300M-25Hz/hift.pt +3 -0
  8. CosyVoice-300M-25Hz/llm.pt +3 -0
  9. CosyVoice-300M-25Hz/speech_tokenizer_v1.onnx +3 -0
  10. CosyVoice-300M-25Hz/spk2info.pt +3 -0
  11. CosyVoice-300M-Instruct/README.md +227 -0
  12. CosyVoice-300M-Instruct/campplus.onnx +3 -0
  13. CosyVoice-300M-Instruct/configuration.json +4 -0
  14. CosyVoice-300M-Instruct/cosyvoice.yaml +203 -0
  15. CosyVoice-300M-Instruct/hift.pt +3 -0
  16. CosyVoice-300M-SFT/README.md +227 -0
  17. CosyVoice-300M-SFT/campplus.onnx +3 -0
  18. CosyVoice-300M-SFT/configuration.json +4 -0
  19. CosyVoice-300M-SFT/cosyvoice.yaml +203 -0
  20. CosyVoice-300M-SFT/flow.decoder.estimator.fp32.onnx +3 -0
  21. CosyVoice-300M-SFT/flow.pt +3 -0
  22. CosyVoice-300M-SFT/hift.pt +3 -0
  23. CosyVoice-300M-SFT/llm.pt +3 -0
  24. CosyVoice-300M-SFT/speech_tokenizer_v1.onnx +3 -0
  25. CosyVoice-300M-SFT/spk2info.pt +3 -0
  26. CosyVoice-300M/README.md +227 -0
  27. CosyVoice-300M/campplus.onnx +3 -0
  28. CosyVoice-300M/configuration.json +4 -0
  29. CosyVoice-300M/cosyvoice.yaml +203 -0
  30. CosyVoice-300M/flow.decoder.estimator.fp32.onnx +3 -0
  31. CosyVoice-300M/flow.pt +3 -0
  32. CosyVoice-300M/hift.pt +3 -0
  33. CosyVoice-300M/llm.pt +3 -0
  34. CosyVoice-300M/speech_tokenizer_v1.onnx +3 -0
  35. CosyVoice2-0.5B/CosyVoice-BlankEN/config.json +27 -0
  36. CosyVoice2-0.5B/CosyVoice-BlankEN/generation_config.json +14 -0
  37. CosyVoice2-0.5B/CosyVoice-BlankEN/merges.txt +0 -0
  38. CosyVoice2-0.5B/CosyVoice-BlankEN/model.safetensors +3 -0
  39. CosyVoice2-0.5B/CosyVoice-BlankEN/tokenizer_config.json +40 -0
  40. CosyVoice2-0.5B/CosyVoice-BlankEN/vocab.json +0 -0
  41. CosyVoice2-0.5B/README.md +227 -0
  42. CosyVoice2-0.5B/campplus.onnx +3 -0
  43. CosyVoice2-0.5B/configuration.json +4 -0
  44. CosyVoice2-0.5B/cosyvoice2.yaml +233 -0
  45. CosyVoice2-0.5B/flow.decoder.estimator.fp32.onnx +3 -0
  46. CosyVoice2-0.5B/flow.pt +3 -0
  47. CosyVoice2-0.5B/hift.pt +3 -0
  48. CosyVoice2-0.5B/llm.pt +3 -0
  49. CosyVoice2-0.5B/speech_tokenizer_v2.onnx +3 -0
  50. spk2info.pt +3 -0
CosyVoice-300M-25Hz/README.md ADDED
@@ -0,0 +1,169 @@
+ # CosyVoice
+
+ ## 👉🏻 [CosyVoice Demos](https://fun-audio-llm.github.io/) 👈🏻
+
+ [[CosyVoice Paper](https://fun-audio-llm.github.io/pdf/CosyVoice_v1.pdf)][[CosyVoice Studio](https://www.modelscope.cn/studios/iic/CosyVoice-300M)][[CosyVoice Code](https://github.com/FunAudioLLM/CosyVoice)]
+
+ For `SenseVoice`, visit the [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice)
+ and [SenseVoice space](https://www.modelscope.cn/studios/iic/SenseVoice).
+
+ ## Install
+
+ **Clone and install**
+
+ - Clone the repo
+
+ ``` sh
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
+ # If the submodule clone fails due to network issues, run the following command until it succeeds
+ cd CosyVoice
+ git submodule update --init --recursive
+ ```
+
+ - Install Conda: see https://docs.conda.io/en/latest/miniconda.html
+ - Create a Conda env:
+
+ ``` sh
+ conda create -n cosyvoice python=3.8
+ conda activate cosyvoice
+ # pynini is required by WeTextProcessing; install it with conda since that works on all platforms.
+ conda install -y -c conda-forge pynini==2.1.5
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
+
+ # If you encounter sox compatibility issues
+ # ubuntu
+ sudo apt-get install sox libsox-dev
+ # centos
+ sudo yum install sox sox-devel
+ ```
+
+ **Model download**
+
+ We strongly recommend downloading our pretrained `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct`
+ models and the `CosyVoice-ttsfrd` resource.
+
+ If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can
+ skip this step.
+
+ ``` python
+ # Download models via the ModelScope SDK
+ from modelscope import snapshot_download
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
+ snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
+ ```
+
+ ``` sh
+ # Download models via git; make sure git-lfs is installed
+ mkdir -p pretrained_models
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
+ git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
+ ```
+
+ Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
+
+ Note that this step is not necessary. If you do not install the `ttsfrd` package, WeTextProcessing is used by default.
+
+ ``` sh
+ cd pretrained_models/CosyVoice-ttsfrd/
+ unzip resource.zip -d .
+ pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
+ ```
+
+ **Basic Usage**
+
+ For zero_shot/cross_lingual inference, use the `CosyVoice-300M` model.
+ For sft inference, use the `CosyVoice-300M-SFT` model.
+ For instruct inference, use the `CosyVoice-300M-Instruct` model.
+ First, add `third_party/Matcha-TTS` to your `PYTHONPATH`.
+
+ ``` sh
+ export PYTHONPATH=third_party/Matcha-TTS
+ ```
+
+ ``` python
+ from cosyvoice.cli.cosyvoice import CosyVoice
+ from cosyvoice.utils.file_utils import load_wav
+ import torchaudio
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT')
+ # sft usage
+ print(cosyvoice.list_avaliable_spks())
+ # change stream=True for chunk stream inference
+ for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
+     torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')
+ # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], 22050)
+ # cross_lingual usage
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
+     torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], 22050)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
+ # instruct usage, supports <laughter></laughter><strong></strong>[laughter][breath]
+ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], 22050)
+ ```
+
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+ The web demo supports sft/zero_shot/cross_lingual/instruct inference.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+ You can get familiar with CosyVoice by following this recipe.
+
+ **Build for deployment**
+
+ Optionally, if you want to use grpc for service deployment,
+ you can run the following steps. Otherwise, you can skip this section.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
+ # for grpc usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ # for fastapi usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && MODEL_DIR=iic/CosyVoice-300M fastapi dev --port 50000 server.py && sleep infinity"
+ cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
+ ## Discussion & Communication
+
+ You can discuss directly on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official Dingding chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some
+ examples are sourced from the internet. If any content infringes on your rights, please contact us to request its
+ removal.
CosyVoice-300M-25Hz/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
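The three-line entries above are Git LFS pointer files: the real weights are fetched separately (e.g. via `git lfs pull`), and the `oid sha256:` field lets you verify a download. A minimal sketch of recomputing that digest, assuming only the standard library (`sha256_of_file` is an illustrative helper name, not part of CosyVoice):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so multi-GB checkpoints never load into RAM."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Compare against the pointer's 'oid sha256:...' value, e.g.:
# sha256_of_file('CosyVoice-300M-25Hz/campplus.onnx') == 'a6ac6a63...d73'
```

If the digests differ, the LFS fetch was incomplete and re-running `git lfs pull` is the usual fix.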
CosyVoice-300M-25Hz/configuration.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "framework": "Pytorch",
+     "task": "text-to-speech"
+ }
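The `configuration.json` above is a tiny ModelScope manifest; reading it back is plain `json`. A sketch using the literal contents shown:

```python
import json

# The exact contents of configuration.json as shown in the diff above.
raw = '{"framework": "Pytorch", "task": "text-to-speech"}'
cfg = json.loads(raw)
print(cfg["framework"], cfg["task"])  # Pytorch text-to-speech
```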
CosyVoice-300M-25Hz/cosyvoice.yaml ADDED
@@ -0,0 +1,203 @@
+ # set random seed, so that you can reproduce your results.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 22050
+ text_encoder_input_size: 512
+ llm_input_size: 1024
+ llm_output_size: 1024
+ spk_embed_dim: 192
+
+ # model params
+ # for all classes/functions included in this repo, we use !<name> or !<new> for initialization, so that users may find every corresponding class/function from this single yaml.
+ # for system/third_party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.TransformerLM
+     text_encoder_input_size: !ref <text_encoder_input_size>
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     text_token_size: 60515 # change to 60515 if you want to train with CosyVoice-300M-25Hz recipe
+     speech_token_size: 4096
+     length_normalized_loss: True
+     lsm_weight: 0
+     spk_embed_dim: !ref <spk_embed_dim>
+     text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         input_size: !ref <text_encoder_input_size>
+         output_size: 1024
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         use_cnn_module: False
+         macaron_style: False
+         use_dynamic_chunk: False
+         use_dynamic_left_chunk: False
+         static_chunk_size: 1
+     llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
+         input_size: !ref <llm_input_size>
+         output_size: !ref <llm_output_size>
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 14
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         input_layer: "linear_legacy"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         static_chunk_size: 1
+     sampling: !name:cosyvoice.utils.common.ras_sampling
+         top_p: 0.8
+         top_k: 25
+         win_size: 10
+         tau_r: 0.1
+
+ flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: "mel"
+     vocab_size: 4096
+     input_frame_rate: 25 # change to 25 if you want to train with CosyVoice-300M-25Hz recipe
+     only_mask_loss: True
+     encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+     length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
+         channels: 80
+         sampling_ratios: [1, 1, 1, 1]
+     decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: "euler"
+                 t_scheduler: "cosine"
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: "l1"
+         estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256, 256]
+             dropout: 0.0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: "gelu"
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 8]
+     upsample_kernel_sizes: [16, 16]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # processor functions
+ parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:cosyvoice.tokenizer.tokenizer.get_tokenizer
+     multilingual: True
+     num_languages: 100
+     language: "en"
+     task: "transcribe"
+ allowed_special: "all"
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 0
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ feat_extractor: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1024
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 256
+     win_size: 1024
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500 # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: "dynamic"
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding # dataset processor pipeline
+
+ data_pipeline:
+     [
+         !ref <parquet_opener>,
+         !ref <tokenize>,
+         !ref <filter>,
+         !ref <resample>,
+         !ref <compute_fbank>,
+         !ref <parse_embedding>,
+         !ref <shuffle>,
+         !ref <sort>,
+         !ref <batch>,
+         !ref <padding>,
+     ]
+
+ # train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 1e-5 # change to 1e-5 during sft
+     scheduler: warmuplr # change to constantlr during sft
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
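The numbers in this yaml are mutually constrained: the flow model upsamples 25 speech tokens per second to the mel frame rate (`sample_rate / hop_size`), and the HiFT vocoder's upsampling stages must expand each mel frame back into exactly `hop_size` waveform samples. A small sanity-check sketch with the values copied by hand from the config above:

```python
# Values copied from cosyvoice.yaml above (not read programmatically).
sample_rate = 22050        # Hz, output audio
hop_size = 256             # feat_extractor mel hop
input_frame_rate = 25      # speech tokens per second (the 25Hz model)
upsample_rates = [8, 8]    # HiFTGenerator upsampling stages
istft_hop_len = 4          # istft_params.hop_len

# Each mel frame advances hop_size samples:
mel_frame_rate = sample_rate / hop_size            # ~86.13 mel frames / s
# The vocoder expands one mel frame into upsample_rates product * istft hop samples:
vocoder_upsample = upsample_rates[0] * upsample_rates[1] * istft_hop_len
assert vocoder_upsample == hop_size                # 8 * 8 * 4 == 256
# The length regulator interpolates 25 tokens/s up to the mel frame rate:
tokens_to_mel_ratio = mel_frame_rate / input_frame_rate
print(round(mel_frame_rate, 2), vocoder_upsample, round(tokens_to_mel_ratio, 2))
```

If you change `hop_size` or the vocoder's upsampling factors independently, the `vocoder_upsample == hop_size` invariant breaks and frame lengths no longer line up.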
CosyVoice-300M-25Hz/flow.decoder.estimator.fp32.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e37e81b4ab4c0c66d7c68fbe56da62b246c56254ecb72d8e9afd2770b8e34020
+ size 328627300
CosyVoice-300M-25Hz/flow.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1411de192039a21d53f0bf1968feb50586ce71d81ea1443f8163f4d1c46c5455
+ size 419901370
CosyVoice-300M-25Hz/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e679b6ca1eff71187ffb4f3ab0444935594cdcc20a9bd12afad111ef8d6012
+ size 81896716
CosyVoice-300M-25Hz/llm.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23bc18a6d53b516868c7827fdedfd86df16642913e168ed6949fe07464c7d6ae
+ size 1260708412
CosyVoice-300M-25Hz/speech_tokenizer_v1.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56285ddd4a83e883ee0cb9f8d69c1089b53a94b1f78ff7e4a0224a27eb4cb486
+ size 522625011
CosyVoice-300M-25Hz/spk2info.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b1d62ca87cdcb25a9003fa0c8f2cba5c94f55b0d5f80f0b63ef8c22d919cfc
+ size 7772
CosyVoice-300M-Instruct/README.md ADDED
@@ -0,0 +1,227 @@
+ [![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)
+
+ ## 👉🏻 CosyVoice 👈🏻
+ **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)
+
+ **CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)
+
+ ## Highlight🔥
+
+ **CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and better speech generation capabilities.
+ ### Multilingual
+ - **Supported Languages**: Chinese, English, Japanese, Korean, Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
+ - **Cross-lingual & Mix-lingual**: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios.
+ ### Ultra-Low Latency
+ - **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
+ - **Rapid First Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output.
+ ### High Accuracy
+ - **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
+ - **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
+ ### Strong Stability
+ - **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
+ - **Cross-language Synthesis**: Marked improvements compared to version 1.0.
+ ### Natural Experience
+ - **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
+ - **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.
+
+ ## Roadmap
+
+ - [x] 2024/12
+
+     - [x] 25hz cosyvoice 2.0 released
+
+ - [x] 2024/09
+
+     - [x] 25hz cosyvoice base model
+     - [x] 25hz cosyvoice voice conversion model
+
+ - [x] 2024/08
+
+     - [x] Repetition Aware Sampling(RAS) inference for llm stability
+     - [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization
+
+ - [x] 2024/07
+
+     - [x] Flow matching training support
+     - [x] WeTextProcessing support when ttsfrd is not available
+     - [x] Fastapi server and client
+
+
+ ## Install
+
+ **Clone and install**
+
+ - Clone the repo
+ ``` sh
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
+ # If the submodule clone fails due to network issues, run the following command until it succeeds
+ cd CosyVoice
+ git submodule update --init --recursive
+ ```
+
+ - Install Conda: see https://docs.conda.io/en/latest/miniconda.html
+ - Create a Conda env:
+
+ ``` sh
+ conda create -n cosyvoice python=3.10
+ conda activate cosyvoice
+ # pynini is required by WeTextProcessing; install it with conda since that works on all platforms.
+ conda install -y -c conda-forge pynini==2.1.5
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
+
+ # If you encounter sox compatibility issues
+ # ubuntu
+ sudo apt-get install sox libsox-dev
+ # centos
+ sudo yum install sox sox-devel
+ ```
+
+ **Model download**
+
+ We strongly recommend downloading our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
+
+ ``` python
+ # Download models via the ModelScope SDK
+ from modelscope import snapshot_download
+ snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
+ snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
+ snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
+ ```
+
+ ``` sh
+ # Download models via git; make sure git-lfs is installed
+ mkdir -p pretrained_models
+ git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
+ git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
+ ```
+
+ Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
+
+ Note that this step is not necessary. If you do not install the `ttsfrd` package, WeTextProcessing is used by default.
+
+ ``` sh
+ cd pretrained_models/CosyVoice-ttsfrd/
+ unzip resource.zip -d .
+ pip install ttsfrd_dependency-0.1-py3-none-any.whl
+ pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
+ ```
+
+ **Basic Usage**
+
+ We strongly recommend using `CosyVoice2-0.5B` for better performance.
+ Follow the code below for detailed usage of each model.
+
+ ``` python
+ import sys
+ sys.path.append('third_party/Matcha-TTS')
+ from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
+ from cosyvoice.utils.file_utils import load_wav
+ import torchaudio
+ ```
+
+ **CosyVoice2 Usage**
+ ```python
+ cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
+
+ # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
+ # zero_shot usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # fine grained control, for supported control, check cosyvoice/tokenizer/tokenizer.py#L248
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
+     torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # instruct usage
+ for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **CosyVoice Usage**
+ ```python
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
+ # sft usage
+ print(cosyvoice.list_available_spks())
+ # change stream=True for chunk stream inference
+ for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
+     torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M') # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
+ # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # cross_lingual usage
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
+     torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # vc usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
+     torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
+ # instruct usage, supports <laughter></laughter><strong></strong>[laughter][breath]
+ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+
+ **Build for deployment**
+
+ Optionally, if you want service deployment,
+ you can run the following steps.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
+ # for grpc usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ # for fastapi usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
+ ## Discussion & Communication
+
+ You can discuss directly on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official Dingding chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
CosyVoice-300M-Instruct/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
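Note that the large binary files in this commit are stored as three-line Git LFS pointer files (spec v1) rather than the weights themselves: a `version` URL, the `oid` of the real blob, and its `size` in bytes. A minimal sketch of parsing such a pointer, assuming only the simple `key value` layout shown above (the helper name `parse_lfs_pointer` is ours, not part of the LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS v1 pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is written as "<hash-algo>:<hex digest>", size as a decimal byte count
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

# the campplus.onnx pointer shown above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
size 28303423
"""
info = parse_lfs_pointer(pointer)
```

Downloading via `git clone` without `git lfs` installed leaves only these pointers on disk, which is why the README below insists on git lfs for the git download path.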
CosyVoice-300M-Instruct/configuration.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "framework": "Pytorch",
+   "task": "text-to-speech"
+ }
CosyVoice-300M-Instruct/cosyvoice.yaml ADDED
@@ -0,0 +1,203 @@
+ # set random seed, so that you may reproduce your result.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 22050
+ text_encoder_input_size: 512
+ llm_input_size: 1024
+ llm_output_size: 1024
+ spk_embed_dim: 192
+
+ # model params
+ # for all classes/functions included in this repo, we use !<name> or !<new> for initialization, so that users may find every corresponding class/function from one single yaml.
+ # for system/third_party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.TransformerLM
+     text_encoder_input_size: !ref <text_encoder_input_size>
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     text_token_size: 51866 # change to 60515 if you want to train with the CosyVoice-300M-25Hz recipe
+     speech_token_size: 4096
+     length_normalized_loss: True
+     lsm_weight: 0
+     spk_embed_dim: !ref <spk_embed_dim>
+     text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         input_size: !ref <text_encoder_input_size>
+         output_size: 1024
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         use_cnn_module: False
+         macaron_style: False
+         use_dynamic_chunk: False
+         use_dynamic_left_chunk: False
+         static_chunk_size: 1
+     llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
+         input_size: !ref <llm_input_size>
+         output_size: !ref <llm_output_size>
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 14
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         input_layer: "linear_legacy"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         static_chunk_size: 1
+     sampling: !name:cosyvoice.utils.common.ras_sampling
+         top_p: 0.8
+         top_k: 25
+         win_size: 10
+         tau_r: 0.1
+
+ flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: "mel"
+     vocab_size: 4096
+     input_frame_rate: 50 # change to 25 if you want to train with the CosyVoice-300M-25Hz recipe
+     only_mask_loss: True
+     encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+     length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
+         channels: 80
+         sampling_ratios: [1, 1, 1, 1]
+     decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: "euler"
+                 t_scheduler: "cosine"
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: "l1"
+         estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256, 256]
+             dropout: 0.0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: "gelu"
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 8]
+     upsample_kernel_sizes: [16, 16]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # processor functions
+ parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:cosyvoice.tokenizer.tokenizer.get_tokenizer
+     multilingual: True
+     num_languages: 100
+     language: "en"
+     task: "transcribe"
+ allowed_special: "all"
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 0
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ feat_extractor: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1024
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 256
+     win_size: 1024
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500 # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: "dynamic"
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding # dataset processor pipeline
+
+
+ data_pipeline:
+     [
+         !ref <parquet_opener>,
+         !ref <tokenize>,
+         !ref <filter>,
+         !ref <resample>,
+         !ref <compute_fbank>,
+         !ref <parse_embedding>,
+         !ref <shuffle>,
+         !ref <sort>,
+         !ref <batch>,
+         !ref <padding>,
+     ]
+
+ # train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 0.001 # change to 1e-5 during sft
+     scheduler: warmuplr # change to constantlr during sft
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
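The `sampling` entry in the yaml above points at `cosyvoice.utils.common.ras_sampling` with `top_p: 0.8` and `top_k: 25`. The repetition-aware part is specific to CosyVoice, but the underlying top-k/top-p (nucleus) filtering it builds on can be sketched in plain Python. This is an illustrative sketch of the generic technique only, not the actual `ras_sampling` implementation:

```python
def top_k_top_p_filter(probs, top_k=25, top_p=0.8):
    """Keep the top_k most likely tokens, then the smallest prefix of them whose
    cumulative probability reaches top_p; renormalize the survivors to sum to 1."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:  # nucleus reached
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# toy distribution over 4 tokens; tokens 0 and 1 already cover top_p=0.8
dist = [0.5, 0.35, 0.1, 0.05]
filtered = top_k_top_p_filter(dist, top_k=4, top_p=0.8)
```

A sampler would then draw from `filtered`; RAS additionally inspects a window of recent tokens (`win_size`, `tau_r` above) to damp repetition loops.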
CosyVoice-300M-Instruct/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e679b6ca1eff71187ffb4f3ab0444935594cdcc20a9bd12afad111ef8d6012
+ size 81896716
CosyVoice-300M-SFT/README.md ADDED
@@ -0,0 +1,227 @@
+ [![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)
+
+ ## 👉🏻 CosyVoice 👈🏻
+ **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)
+
+ **CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)
+
+ ## Highlight🔥
+
+ **CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and better speech generation capabilities.
+ ### Multilingual
+ - **Supported Languages**: Chinese, English, Japanese, Korean, and Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
+ - **Cross-lingual & Mixed-lingual**: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios.
+ ### Ultra-Low Latency
+ - **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
+ - **Rapid First-Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output.
+ ### High Accuracy
+ - **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
+ - **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
+ ### Strong Stability
+ - **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
+ - **Cross-language Synthesis**: Marked improvements compared to version 1.0.
+ ### Natural Experience
+ - **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
+ - **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.
+
+ ## Roadmap
+
+ - [x] 2024/12
+
+     - [x] 25Hz CosyVoice 2.0 released
+
+ - [x] 2024/09
+
+     - [x] 25Hz CosyVoice base model
+     - [x] 25Hz CosyVoice voice conversion model
+
+ - [x] 2024/08
+
+     - [x] Repetition Aware Sampling (RAS) inference for LLM stability
+     - [x] Streaming inference mode support, including KV cache and SDPA for RTF optimization
+
+ - [x] 2024/07
+
+     - [x] Flow matching training support
+     - [x] WeTextProcessing support when ttsfrd is not available
+     - [x] FastAPI server and client
+
+
+ ## Install
+
+ **Clone and install**
+
+ - Clone the repo
+ ``` sh
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
+ # If you failed to clone the submodules due to network failures, please run the following command until it succeeds
+ cd CosyVoice
+ git submodule update --init --recursive
+ ```
+
+ - Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
+ - Create a Conda env:
+
+ ``` sh
+ conda create -n cosyvoice python=3.10
+ conda activate cosyvoice
+ # pynini is required by WeTextProcessing; use conda to install it as it can be executed on all platforms.
+ conda install -y -c conda-forge pynini==2.1.5
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
+
+ # If you encounter sox compatibility issues
+ # ubuntu
+ sudo apt-get install sox libsox-dev
+ # centos
+ sudo yum install sox sox-devel
+ ```
+
+ **Model download**
+
+ We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
+
+ ``` python
+ # Download models via the ModelScope SDK
+ from modelscope import snapshot_download
+ snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
+ snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
+ snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
+ ```
+
+ ``` sh
+ # Download models via git; make sure git lfs is installed
+ mkdir -p pretrained_models
+ git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
+ git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
+ ```
+
+ Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
+
+ Note that this step is not necessary. If you do not install the `ttsfrd` package, we will use WeTextProcessing by default.
+
+ ``` sh
+ cd pretrained_models/CosyVoice-ttsfrd/
+ unzip resource.zip -d .
+ pip install ttsfrd_dependency-0.1-py3-none-any.whl
+ pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
+ ```
+
+ **Basic Usage**
+
+ We strongly recommend using `CosyVoice2-0.5B` for better performance.
+ Follow the code below for detailed usage of each model.
+
+ ``` python
+ import sys
+ sys.path.append('third_party/Matcha-TTS')
+ from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
+ from cosyvoice.utils.file_utils import load_wav
+ import torchaudio
+ ```
+
+ **CosyVoice2 Usage**
+ ```python
+ cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
+
+ # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
+ # zero_shot usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # fine-grained control; for supported controls, check cosyvoice/tokenizer/tokenizer.py#L248
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
+     torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # instruct usage
+ for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **CosyVoice Usage**
+ ```python
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
+ # sft usage
+ print(cosyvoice.list_available_spks())
+ # change stream=True for chunked streaming inference
+ for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
+     torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')  # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
+ # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # cross_lingual usage
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
+     torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # vc usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
+     torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
+ # instruct usage, supports <laughter></laughter><strong></strong>[laughter][breath]
+ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+
+ **Build for deployment**
+
+ Optionally, if you want to deploy CosyVoice as a service,
+ you can run the following steps.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
+ # for grpc usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ # for fastapi usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
+ ## Discussion & Communication
+
+ You can discuss directly on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official Dingding chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
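The `inference_*` calls in the README above are generators that yield dicts carrying a `'tts_speech'` chunk; with `stream=True`, multiple chunks arrive and can be stitched in order. A minimal sketch of that consumption pattern, with plain Python lists standing in for the audio tensors (the `synthesize_chunks` stand-in is ours, not part of the CosyVoice API):

```python
def synthesize_chunks():
    # stand-in for e.g. cosyvoice.inference_sft(..., stream=True), which yields
    # dicts with a 'tts_speech' audio chunk per iteration
    for chunk in ([0.1, 0.2], [0.3], [0.4, 0.5]):
        yield {'tts_speech': chunk}

speech = []
for i, j in enumerate(synthesize_chunks()):
    speech.extend(j['tts_speech'])  # stitch streamed chunks in arrival order
```

With real tensors you would concatenate along the time axis (as the README does by saving each chunk to its own wav file) instead of extending a list.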
CosyVoice-300M-SFT/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
CosyVoice-300M-SFT/configuration.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "framework": "Pytorch",
+   "task": "text-to-speech"
+ }
CosyVoice-300M-SFT/cosyvoice.yaml ADDED
@@ -0,0 +1,203 @@
+ # set random seed, so that you may reproduce your result.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 22050
+ text_encoder_input_size: 512
+ llm_input_size: 1024
+ llm_output_size: 1024
+ spk_embed_dim: 192
+
+ # model params
+ # for all classes/functions included in this repo, we use !<name> or !<new> for initialization, so that users may find every corresponding class/function from one single yaml.
+ # for system/third_party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.TransformerLM
+     text_encoder_input_size: !ref <text_encoder_input_size>
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     text_token_size: 51866 # change to 60515 if you want to train with the CosyVoice-300M-25Hz recipe
+     speech_token_size: 4096
+     length_normalized_loss: True
+     lsm_weight: 0
+     spk_embed_dim: !ref <spk_embed_dim>
+     text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         input_size: !ref <text_encoder_input_size>
+         output_size: 1024
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         use_cnn_module: False
+         macaron_style: False
+         use_dynamic_chunk: False
+         use_dynamic_left_chunk: False
+         static_chunk_size: 1
+     llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
+         input_size: !ref <llm_input_size>
+         output_size: !ref <llm_output_size>
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 14
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         input_layer: "linear_legacy"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         static_chunk_size: 1
+     sampling: !name:cosyvoice.utils.common.ras_sampling
+         top_p: 0.8
+         top_k: 25
+         win_size: 10
+         tau_r: 0.1
+
+ flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: "mel"
+     vocab_size: 4096
+     input_frame_rate: 50 # change to 25 if you want to train with the CosyVoice-300M-25Hz recipe
+     only_mask_loss: True
+     encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+     length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
+         channels: 80
+         sampling_ratios: [1, 1, 1, 1]
+     decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: "euler"
+                 t_scheduler: "cosine"
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: "l1"
+         estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256, 256]
+             dropout: 0.0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: "gelu"
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 8]
+     upsample_kernel_sizes: [16, 16]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # processor functions
+ parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:cosyvoice.tokenizer.tokenizer.get_tokenizer
+     multilingual: True
+     num_languages: 100
+     language: "en"
+     task: "transcribe"
+ allowed_special: "all"
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 0
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ feat_extractor: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1024
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 256
+     win_size: 1024
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500 # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: "dynamic"
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding # dataset processor pipeline
+
+
+ data_pipeline:
+     [
+         !ref <parquet_opener>,
+         !ref <tokenize>,
+         !ref <filter>,
+         !ref <resample>,
+         !ref <compute_fbank>,
+         !ref <parse_embedding>,
+         !ref <shuffle>,
+         !ref <sort>,
+         !ref <batch>,
+         !ref <padding>,
+     ]
+
+ # train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 0.001 # change to 1e-5 during sft
+     scheduler: warmuplr # change to constantlr during sft
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
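For reference, with the `feat_extractor` settings in the yaml above (`n_fft: 1024`, `hop_size: 256`, `center: False` at 22050 Hz), the number of mel frames produced for a clip follows the standard non-centered STFT framing formula. A small sketch illustrating that arithmetic (generic STFT framing, not code from this repo):

```python
def num_stft_frames(num_samples: int, n_fft: int = 1024, hop_size: int = 256) -> int:
    """Frame count for a non-centered STFT: one window of n_fft samples
    every hop_size samples, with no padding at the edges."""
    if num_samples < n_fft:
        return 0
    return 1 + (num_samples - n_fft) // hop_size

# one second of audio at the configured 22050 Hz sample rate
frames = num_stft_frames(22050)
```

At 22050 Hz with a 256-sample hop this works out to roughly 86 frames per second, which is the mel frame rate the flow model's `input_frame_rate` tokens get upsampled to by the length regulator.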
CosyVoice-300M-SFT/flow.decoder.estimator.fp32.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2b71b58497f56a5b5e8f2cacc8c2c5088b0fb0e8f9547e1a39269f0a98d0c92
+ size 328698885
CosyVoice-300M-SFT/flow.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21eae78c105b5e1c6c337b04f667843377651b4bcfb2d43247ed3ad7fd0a3470
+ size 419900943
CosyVoice-300M-SFT/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e679b6ca1eff71187ffb4f3ab0444935594cdcc20a9bd12afad111ef8d6012
+ size 81896716
CosyVoice-300M-SFT/llm.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d198ce56636e1eb1c9d0cb0d6e3529de8fdfd3fd45075c346296b0d6dcfc54ea
+ size 1242994835
CosyVoice-300M-SFT/speech_tokenizer_v1.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23b5a723ed9143aebfd9ffda14ac4c21231f31c35ef837b6a13bb9e5488abb1e
+ size 522624269
CosyVoice-300M-SFT/spk2info.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b1d62ca87cdcb25a9003fa0c8f2cba5c94f55b0d5f80f0b63ef8c22d919cfc
+ size 7772
CosyVoice-300M/README.md ADDED
@@ -0,0 +1,227 @@
+ [![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)
+
+ ## 👉🏻 CosyVoice 👈🏻
+ **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)
+
+ **CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)
+
+ ## Highlight🔥
+
+ **CosyVoice 2.0** has been released! Compared to version 1.0, the new version delivers more accurate, more stable, faster, and higher-quality speech generation.
+ ### Multilingual
+ - **Supported Languages**: Chinese, English, Japanese, Korean, and Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
+ - **Cross-lingual & Mixed-lingual**: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios.
+ ### Ultra-Low Latency
+ - **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
+ - **Rapid First-Packet Synthesis**: Achieves latency as low as 150 ms while maintaining high-quality audio output.
+ ### High Accuracy
+ - **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
+ - **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
+ ### Strong Stability
+ - **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
+ - **Cross-language Synthesis**: Marked improvements compared to version 1.0.
+ ### Natural Experience
+ - **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
+ - **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.
+
+ ## Roadmap
+
+ - [x] 2024/12
+
+     - [x] 25hz cosyvoice 2.0 released
+
+ - [x] 2024/09
+
+     - [x] 25hz cosyvoice base model
+     - [x] 25hz cosyvoice voice conversion model
+
+ - [x] 2024/08
+
+     - [x] Repetition Aware Sampling (RAS) inference for LLM stability
+     - [x] Streaming inference mode support, including KV cache and SDPA, for RTF optimization
+
+ - [x] 2024/07
+
+     - [x] Flow matching training support
+     - [x] WeTextProcessing support when ttsfrd is not available
+     - [x] FastAPI server and client
+
+
+ ## Install
+
+ **Clone and install**
+
+ - Clone the repo
+ ``` sh
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
+ # If the submodule clone fails due to network problems, run the following command until it succeeds
+ cd CosyVoice
+ git submodule update --init --recursive
+ ```
+
+ - Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
+ - Create a Conda env:
+
+ ``` sh
+ conda create -n cosyvoice python=3.10
+ conda activate cosyvoice
+ # pynini is required by WeTextProcessing; install it with conda so that it works on all platforms.
+ conda install -y -c conda-forge pynini==2.1.5
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
+
+ # If you encounter sox compatibility issues
+ # ubuntu
+ sudo apt-get install sox libsox-dev
+ # centos
+ sudo yum install sox sox-devel
+ ```
+
+ **Model download**
+
+ We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.
+
+ ``` python
+ # Download the models via the ModelScope SDK
+ from modelscope import snapshot_download
+ snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
+ snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
+ snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
+ ```
+
+ ``` sh
+ # Download the models via git; make sure git-lfs is installed
+ mkdir -p pretrained_models
+ git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
+ git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
+ ```
+
+ Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.
+
+ Note that this step is optional. If you do not install the `ttsfrd` package, WeTextProcessing is used by default.
+
+ ``` sh
+ cd pretrained_models/CosyVoice-ttsfrd/
+ unzip resource.zip -d .
+ pip install ttsfrd_dependency-0.1-py3-none-any.whl
+ pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
+ ```
+
+ **Basic Usage**
+
+ We strongly recommend using `CosyVoice2-0.5B` for better performance.
+ Follow the code below for detailed usage of each model.
+
+ ``` python
+ import sys
+ sys.path.append('third_party/Matcha-TTS')
+ from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
+ from cosyvoice.utils.file_utils import load_wav
+ import torchaudio
+ ```
+
+ **CosyVoice2 Usage**
+ ```python
+ cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
+
+ # NOTE: if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
+ # zero_shot usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # fine-grained control; for the supported control tokens, check cosyvoice/tokenizer/tokenizer.py#L248
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
+     torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ # instruct usage
+ for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **CosyVoice Usage**
+ ```python
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
+ # sft usage
+ print(cosyvoice.list_available_spks())
+ # change stream=True for chunked streaming inference
+ for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
+     torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')  # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
+ # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # cross_lingual usage
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
+     torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # vc usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
+     torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
+ # instruct usage, supports <laughter></laughter><strong></strong>[laughter][breath]
+ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+
+ **Build for deployment**
+
+ Optionally, if you want to deploy CosyVoice as a service, you can run the following steps.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
+ # for grpc usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ # for fastapi usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
+ ## Discussion & Communication
+
+ You can directly discuss on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official DingTalk chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
CosyVoice-300M/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
CosyVoice-300M/configuration.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "framework": "Pytorch",
+   "task": "text-to-speech"
+ }
CosyVoice-300M/cosyvoice.yaml ADDED
@@ -0,0 +1,203 @@
+ # set random seed, so that you may reproduce your result.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 22050
+ text_encoder_input_size: 512
+ llm_input_size: 1024
+ llm_output_size: 1024
+ spk_embed_dim: 192
+
+ # model params
+ # for every class/function included in this repo, we use !<name> or !<new> for initialization, so that users can find each corresponding class/function from this single yaml.
+ # for system/third-party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.TransformerLM
+     text_encoder_input_size: !ref <text_encoder_input_size>
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     text_token_size: 51866 # change to 60515 if you want to train with the CosyVoice-300M-25Hz recipe
+     speech_token_size: 4096
+     length_normalized_loss: True
+     lsm_weight: 0
+     spk_embed_dim: !ref <spk_embed_dim>
+     text_encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         input_size: !ref <text_encoder_input_size>
+         output_size: 1024
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         use_cnn_module: False
+         macaron_style: False
+         use_dynamic_chunk: False
+         use_dynamic_left_chunk: False
+         static_chunk_size: 1
+     llm: !new:cosyvoice.transformer.encoder.TransformerEncoder
+         input_size: !ref <llm_input_size>
+         output_size: !ref <llm_output_size>
+         attention_heads: 16
+         linear_units: 4096
+         num_blocks: 14
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.0
+         input_layer: "linear_legacy"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         static_chunk_size: 1
+     sampling: !name:cosyvoice.utils.common.ras_sampling
+         top_p: 0.8
+         top_k: 25
+         win_size: 10
+         tau_r: 0.1
+
+ flow: !new:cosyvoice.flow.flow.MaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: "mel"
+     vocab_size: 4096
+     input_frame_rate: 50 # change to 25 if you want to train with the CosyVoice-300M-25Hz recipe
+     only_mask_loss: True
+     encoder: !new:cosyvoice.transformer.encoder.ConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+     length_regulator: !new:cosyvoice.flow.length_regulator.InterpolateRegulator
+         channels: 80
+         sampling_ratios: [1, 1, 1, 1]
+     decoder: !new:cosyvoice.flow.flow_matching.ConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: "euler"
+                 t_scheduler: "cosine"
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: "l1"
+         estimator: !new:cosyvoice.flow.decoder.ConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256, 256]
+             dropout: 0.0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: "gelu"
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 8]
+     upsample_kernel_sizes: [16, 16]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # processor functions
+ parquet_opener: !name:core.cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:core.cosyvoice.tokenizer.tokenizer.get_tokenizer
+     multilingual: True
+     num_languages: 100
+     language: "en"
+     task: "transcribe"
+ allowed_special: "all"
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 0
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ feat_extractor: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1024
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 256
+     win_size: 1024
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500 # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: "dynamic"
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding # dataset processor pipeline
+
+
+ data_pipeline:
+     [
+         !ref <parquet_opener>,
+         !ref <tokenize>,
+         !ref <filter>,
+         !ref <resample>,
+         !ref <compute_fbank>,
+         !ref <parse_embedding>,
+         !ref <shuffle>,
+         !ref <sort>,
+         !ref <batch>,
+         !ref <padding>,
+     ]
+
+ # train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 0.001 # change to 1e-5 during sft
+     scheduler: warmuplr # change to constantlr during sft
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
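The `warmuplr` scheduler named in `train_conf` follows, in WeNet-style recipes (from which CosyVoice borrows training code), the familiar inverse-square-root warmup schedule: the learning rate ramps linearly up to `lr` over `warmup_steps` steps and then decays proportionally to `step**-0.5`. A minimal sketch under that assumption (the exact implementation lives in the training code, not this yaml):

```python
def warmup_lr(base_lr, step, warmup_steps=2500):
    """Inverse-square-root warmup: linear ramp to `base_lr` over
    `warmup_steps` steps, then decay proportional to step**-0.5."""
    step = max(step, 1)  # avoid division by zero at step 0
    return base_lr * warmup_steps ** 0.5 * min(step ** -0.5,
                                               step * warmup_steps ** -1.5)
```

With the values above (`lr: 0.001`, `warmup_steps: 2500`), the rate peaks at exactly 0.001 at step 2500; for SFT the config swaps this for `constantlr` at 1e-5, since fine-tuning starts from a converged checkpoint and needs no warmup.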
CosyVoice-300M/flow.decoder.estimator.fp32.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c02bc651f599d66e5786b9dc66d7841153a2f1ae682306a5bac84d067f7c947
+ size 328698885
CosyVoice-300M/flow.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd80b089444a95e52956c57cdf177d7f6017a5af13b8a697717628a1d2be6b55
+ size 419900943
CosyVoice-300M/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91e679b6ca1eff71187ffb4f3ab0444935594cdcc20a9bd12afad111ef8d6012
+ size 81896716
CosyVoice-300M/llm.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59f7eb172d2f33f6c8c02709d829c83960f97876fa9886c7ea232597b51af976
+ size 1242994835
CosyVoice-300M/speech_tokenizer_v1.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23b5a723ed9143aebfd9ffda14ac4c21231f31c35ef837b6a13bb9e5488abb1e
+ size 522624269
CosyVoice2-0.5B/CosyVoice-BlankEN/config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "hidden_act": "silu",
+   "hidden_size": 896,
+   "initializer_range": 0.02,
+   "intermediate_size": 4864,
+   "max_position_embeddings": 32768,
+   "max_window_layers": 24,
+   "model_type": "qwen2",
+   "num_attention_heads": 14,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 2,
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 1000000.0,
+   "sliding_window": 32768,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.40.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 151936
+ }
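A quick sanity check on the config above: its fields are enough to estimate the parameter count of this Qwen2 backbone. The sketch below is an approximation (it ignores biases and RMSNorm weights, which are comparatively tiny) and lands at roughly 0.49B parameters, consistent with the "0.5B" model name and with the ~988 MB bf16 `model.safetensors` file below (2 bytes per parameter).

```python
def qwen2_param_count(cfg):
    """Rough parameter count for a Qwen2-style decoder from its config.json
    fields. Ignores biases and RMSNorm weights, which contribute little."""
    d = cfg["hidden_size"]
    head_dim = d // cfg["num_attention_heads"]          # 896 / 14 = 64
    kv_dim = cfg["num_key_value_heads"] * head_dim      # GQA: 2 * 64 = 128
    attn = d * d + 2 * d * kv_dim + d * d               # q, k, v, o projections
    mlp = 3 * d * cfg["intermediate_size"]              # gate, up, down (SwiGLU)
    layers = cfg["num_hidden_layers"] * (attn + mlp)
    embed = cfg["vocab_size"] * d                       # LM head is tied, count once
    return layers + embed

cfg = {"hidden_size": 896, "num_attention_heads": 14, "num_key_value_heads": 2,
       "intermediate_size": 4864, "num_hidden_layers": 24, "vocab_size": 151936}
total = qwen2_param_count(cfg)  # roughly 0.49e9
```

Note how the token embedding alone (151936 × 896 ≈ 136M) accounts for over a quarter of the model, a typical trade-off for small LLMs with large vocabularies.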
CosyVoice2-0.5B/CosyVoice-BlankEN/generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 151643,
+   "pad_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "repetition_penalty": 1.1,
+   "temperature": 0.7,
+   "top_p": 0.8,
+   "top_k": 20,
+   "transformers_version": "4.37.0"
+ }
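The decoding defaults above compose in the usual order: scale logits by 1/temperature, keep only the `top_k` candidates, then keep the smallest nucleus whose cumulative mass reaches `top_p`, and sample from what remains. A minimal sketch of that pipeline (the `repetition_penalty` term is omitted for brevity, and the helper name is illustrative, not a transformers API):

```python
import math
import random

def sample_token(logits, temperature=0.7, top_k=20, top_p=0.8, rng=random):
    """Temperature + top-k + top-p (nucleus) sampling over raw logits,
    mirroring the defaults in generation_config.json."""
    probs = [math.exp(l / temperature) for l in logits]  # unnormalized softmax
    z = sum(probs)
    ranked = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)
    ranked = ranked[:top_k]                 # top-k filter
    kept, cum = [], 0.0
    for i in ranked:                        # smallest nucleus reaching top_p
        kept.append(i)
        cum += probs[i] / z
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)     # renormalize over survivors
    r = rng.random() * total
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a sharply peaked distribution the nucleus collapses to a single token and sampling becomes greedy; the `temperature: 0.7` setting sharpens logits toward that regime while still allowing variation among near-ties.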
CosyVoice2-0.5B/CosyVoice-BlankEN/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
CosyVoice2-0.5B/CosyVoice-BlankEN/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:130282af0dfa9fe5840737cc49a0d339d06075f83c5a315c3372c9a0740d0b96
+ size 988097824
CosyVoice2-0.5B/CosyVoice-BlankEN/tokenizer_config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
+   "bos_token": null,
+   "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
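The Jinja `chat_template` above is the standard ChatML layout: each message is wrapped in `<|im_start|>role ... <|im_end|>`, a default system prompt is injected when the conversation does not begin with one, and an open `<|im_start|>assistant` is appended when generation should continue. A pure-Python rendering of the same logic (normally you would call the tokenizer's `apply_chat_template` instead):

```python
def apply_chat_template(messages, add_generation_prompt=False):
    """Pure-Python rendering of the ChatML chat_template above."""
    out = ""
    # The template injects a default system prompt on the first loop iteration
    # when the conversation does not start with a system message.
    if messages and messages[0]["role"] != "system":
        out += "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    for m in messages:
        out += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out
```

Note that `<|im_end|>` (id 151645) doubles as the `eos_token`, which is why it also appears in the `eos_token_id` list of `generation_config.json`.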
CosyVoice2-0.5B/CosyVoice-BlankEN/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
CosyVoice2-0.5B/README.md ADDED
@@ -0,0 +1,227 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)
2
+
3
+ ## 👉🏻 CosyVoice 👈🏻
4
+ **CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)
5
+
6
+ **CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)
7
+
8
+ ## Highlight🔥
9
+
10
+ **CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and better speech generation capabilities.
11
+ ### Multilingual
12
+ - **Supported Language**: Chinese, English, Japanese, Korean, Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
13
+ - **Crosslingual & Mixlingual**:Support zero-shot voice cloning for cross-lingual and code-switching scenarios.
14
+ ### Ultra-Low Latency
15
+ - **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
16
+ - **Rapid First Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output.
17
+ ### High Accuracy
18
+ - **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
19
+ - **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
20
+ ### Strong Stability
21
+ - **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
22
+ - **Cross-language Synthesis**: Marked improvements compared to version 1.0.
23
+ ### Natural Experience
24
+ - **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
25
+ - **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.
26
+
27
+ ## Roadmap
28
+
29
+ - [x] 2024/12
30
+
31
+ - [x] 25hz cosyvoice 2.0 released
32
+
33
+ - [x] 2024/09
34
+
35
+ - [x] 25hz cosyvoice base model
36
+ - [x] 25hz cosyvoice voice conversion model
37
+
38
+ - [x] 2024/08
39
+
40
+ - [x] Repetition Aware Sampling(RAS) inference for llm stability
41
+ - [x] Streaming inference mode support, including kv cache and sdpa for rtf optimization
42
+
43
+ - [x] 2024/07
44
+
45
+ - [x] Flow matching training support
46
+ - [x] WeTextProcessing support when ttsfrd is not available
47
+ - [x] Fastapi server and client
48
+
49
+
50
+ ## Install
51
+
52
+ **Clone and install**
53
+
54
+ - Clone the repo
55
+ ``` sh
56
+ git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
57
+ # If you failed to clone submodule due to network failures, please run following command until success
58
+ cd CosyVoice
59
+ git submodule update --init --recursive
60
+ ```
61
+
62
+ - Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
63
+ - Create Conda env:
64
+
65
+ ``` sh
66
+ conda create -n cosyvoice python=3.10
67
+ conda activate cosyvoice
68
+ # pynini is required by WeTextProcessing, use conda to install it as it can be executed on all platform.
69
+ conda install -y -c conda-forge pynini==2.1.5
70
+ pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
71
+
72
+ # If you encounter sox compatibility issues
73
+ # ubuntu
74
+ sudo apt-get install sox libsox-dev
75
+ # centos
76
+ sudo yum install sox sox-devel
77
+ ```
78
+
79
+ **Model download**
80
+
81
+ We strongly recommend that you download our pretrained `CosyVoice2-0.5B` `CosyVoice-300M` `CosyVoice-300M-SFT` `CosyVoice-300M-Instruct` model and `CosyVoice-ttsfrd` resource.
82
+
83
+ ``` python
84
+ # SDK模型下载
85
+ from modelscope import snapshot_download
86
+ snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
87
+ snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
88
+ snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
89
+ snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
90
+ snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
91
+ snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
92
+ ```
93
+
94
+ ``` sh
95
+ # git模型下载,请确保已安装git lfs
96
+ mkdir -p pretrained_models
97
+ git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
98
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
99
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
100
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
101
+ git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
102
+ git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
103
+ ```
104
+
105
+ Optionally, you can unzip `ttsfrd` resouce and install `ttsfrd` package for better text normalization performance.
106
+
107
+ Notice that this step is not necessary. If you do not install `ttsfrd` package, we will use WeTextProcessing by default.
108
+
109
+ ``` sh
110
+ cd pretrained_models/CosyVoice-ttsfrd/
111
+ unzip resource.zip -d .
112
+ pip install ttsfrd_dependency-0.1-py3-none-any.whl
113
+ pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
114
+ ```
115
+
116
+ **Basic Usage**
117
+
118
+ We strongly recommend using `CosyVoice2-0.5B` for better performance.
119
+ Follow code below for detailed usage of each model.
120
+
121
+ ``` python
122
+ import sys
123
+ sys.path.append('third_party/Matcha-TTS')
124
+ from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
125
+ from cosyvoice.utils.file_utils import load_wav
126
+ import torchaudio
127
+ ```
128
+
129
+ **CosyVoice2 Usage**
130
+ ```python
131
+ cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)
132
+
133
+ # NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
134
+ # zero_shot usage
135
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
136
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
137
+ torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
138
+
139
+ # fine grained control, for supported control, check cosyvoice/tokenizer/tokenizer.py#L248
140
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中,他突然[laughter]停下来,因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
141
+ torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
142
+
143
+ # instruct usage
144
+ for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
145
+ torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
146
+ ```
147
+
148
+ **CosyVoice Usage**
+ ```python
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
+ # sft usage
+ print(cosyvoice.list_available_spks())
+ # change stream=True for chunk-by-chunk streaming inference
+ for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
+     torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M')  # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
+ # zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
+     torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # cross_lingual usage
+ prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
+     torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ # vc usage
+ prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
+ source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
+ for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
+     torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+
+ cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
+ # instruct usage, supported tags: <laughter></laughter>, <strong></strong>, [laughter], [breath]
+ for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
+     torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
+ ```
+
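As the zero-shot comment above notes, cross-lingual input text is prefixed with a language tag such as `<|zh|>` or `<|en|>`. The helper below is a hypothetical convenience (not part of the CosyVoice API); only the tag strings themselves come from the README:

```python
# Hypothetical helper (not part of the CosyVoice API): prepend one of the
# language tags listed above to the input text before cross-lingual synthesis.
LANG_TAGS = {
    'zh': '<|zh|>',    # Chinese
    'en': '<|en|>',    # English
    'jp': '<|jp|>',    # Japanese
    'yue': '<|yue|>',  # Cantonese
    'ko': '<|ko|>',    # Korean
}

def tag_text(text, lang):
    """Return `text` prefixed with the CosyVoice language tag for `lang`."""
    if lang not in LANG_TAGS:
        raise ValueError('unsupported language: {}'.format(lang))
    return LANG_TAGS[lang] + text
```

For example, `tag_text("And then later on...", "en")` produces the `<|en|>`-prefixed string passed to `inference_cross_lingual` above.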
+ **Start web demo**
+
+ You can use our web demo page to get familiar with CosyVoice quickly.
+
+ Please see the demo website for details.
+
+ ``` sh
+ # change the model_dir to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
+ python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
+ ```
+
+ **Advanced Usage**
+
+ For advanced users, we provide training and inference scripts in `examples/libritts/cosyvoice/run.sh`.
+
+ **Build for deployment**
+
+ Optionally, if you want service deployment, you can run the following steps.
+
+ ``` sh
+ cd runtime/python
+ docker build -t cosyvoice:v1.0 .
+ # change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
+ # for grpc usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ # for fastapi usage
+ docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
+ cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
+ ```
+
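In the commands above, `client.py` selects one of four inference modes via `--mode <sft|zero_shot|cross_lingual|instruct>`. A minimal sketch of how such an argument could be parsed and validated (an assumption for illustration; the real `client.py` may differ, only the mode names come from the commands above):

```python
# Sketch of parsing the client's --port/--mode flags with argparse `choices`,
# which rejects any value outside the four modes listed in the README.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description='CosyVoice runtime client (sketch)')
    parser.add_argument('--port', type=int, default=50000)
    parser.add_argument('--mode',
                        choices=['sft', 'zero_shot', 'cross_lingual', 'instruct'],
                        default='sft',
                        help='inference mode exposed by the server')
    return parser

args = build_parser().parse_args(['--port', '50000', '--mode', 'zero_shot'])
```

Passing an unsupported mode (e.g. `--mode tts`) makes `argparse` exit with a usage error before any request is sent.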
+ ## Discussion & Communication
+
+ You can discuss directly on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues).
+
+ You can also scan the QR code to join our official DingTalk chat group.
+
+ <img src="./asset/dingding.png" width="250px">
+
+ ## Acknowledgements
+
+ 1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
+ 2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
+ 3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
+ 4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
+ 5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).
+
+ ## Disclaimer
+
+ The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.
CosyVoice2-0.5B/campplus.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ac6a63997761ae2997373e2ee1c47040854b4b759ea41ec48e4e42df0f4d73
+ size 28303423
CosyVoice2-0.5B/configuration.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "framework": "Pytorch",
+     "task": "text-to-speech"
+ }
CosyVoice2-0.5B/cosyvoice2.yaml ADDED
@@ -0,0 +1,233 @@
+ # set random seed, so that you may reproduce your result.
+ __set_seed1: !apply:random.seed [1986]
+ __set_seed2: !apply:numpy.random.seed [1986]
+ __set_seed3: !apply:torch.manual_seed [1986]
+ __set_seed4: !apply:torch.cuda.manual_seed_all [1986]
+
+ # fixed params
+ sample_rate: 24000
+ llm_input_size: 896
+ llm_output_size: 896
+ spk_embed_dim: 192
+ qwen_pretrain_path: ""
+ token_frame_rate: 25
+ token_mel_ratio: 2
+
+ # stream related params
+ chunk_size: 25 # streaming inference chunk size, in tokens
+ num_decoding_left_chunks: -1 # streaming inference flow decoder left chunk size, <0 means use all left chunks
+
+ # model params
+ # for every class/function included in this repo, we use !<name> or !<new> for initialization, so that users can find each corresponding class/function from this single yaml.
+ # for system/third_party classes/functions, we do not require this.
+ llm: !new:cosyvoice.llm.llm.Qwen2LM
+     llm_input_size: !ref <llm_input_size>
+     llm_output_size: !ref <llm_output_size>
+     speech_token_size: 6561
+     length_normalized_loss: True
+     lsm_weight: 0
+     mix_ratio: [5, 15]
+     llm: !new:cosyvoice.llm.llm.Qwen2Encoder
+         pretrain_path: !ref <qwen_pretrain_path>
+     sampling: !name:cosyvoice.utils.common.ras_sampling
+         top_p: 0.8
+         top_k: 25
+         win_size: 10
+         tau_r: 0.1
+
+ flow: !new:cosyvoice.flow.flow.CausalMaskedDiffWithXvec
+     input_size: 512
+     output_size: 80
+     spk_embed_dim: !ref <spk_embed_dim>
+     output_type: "mel"
+     vocab_size: 6561
+     input_frame_rate: !ref <token_frame_rate>
+     only_mask_loss: True
+     token_mel_ratio: !ref <token_mel_ratio>
+     pre_lookahead_len: 3
+     encoder: !new:cosyvoice.transformer.upsample_encoder.UpsampleConformerEncoder
+         output_size: 512
+         attention_heads: 8
+         linear_units: 2048
+         num_blocks: 6
+         dropout_rate: 0.1
+         positional_dropout_rate: 0.1
+         attention_dropout_rate: 0.1
+         normalize_before: True
+         input_layer: "linear"
+         pos_enc_layer_type: "rel_pos_espnet"
+         selfattention_layer_type: "rel_selfattn"
+         input_size: 512
+         use_cnn_module: False
+         macaron_style: False
+         static_chunk_size: !ref <chunk_size>
+     decoder: !new:cosyvoice.flow.flow_matching.CausalConditionalCFM
+         in_channels: 240
+         n_spks: 1
+         spk_emb_dim: 80
+         cfm_params: !new:omegaconf.DictConfig
+             content:
+                 sigma_min: 1e-06
+                 solver: "euler"
+                 t_scheduler: "cosine"
+                 training_cfg_rate: 0.2
+                 inference_cfg_rate: 0.7
+                 reg_loss_type: "l1"
+         estimator: !new:cosyvoice.flow.decoder.CausalConditionalDecoder
+             in_channels: 320
+             out_channels: 80
+             channels: [256]
+             dropout: 0.0
+             attention_head_dim: 64
+             n_blocks: 4
+             num_mid_blocks: 12
+             num_heads: 8
+             act_fn: "gelu"
+             static_chunk_size: !ref <chunk_size> * <token_mel_ratio>
+             num_decoding_left_chunks: !ref <num_decoding_left_chunks>
+
+ hift: !new:cosyvoice.hifigan.generator.HiFTGenerator
+     in_channels: 80
+     base_channels: 512
+     nb_harmonics: 8
+     sampling_rate: !ref <sample_rate>
+     nsf_alpha: 0.1
+     nsf_sigma: 0.003
+     nsf_voiced_threshold: 10
+     upsample_rates: [8, 5, 3]
+     upsample_kernel_sizes: [16, 11, 7]
+     istft_params:
+         n_fft: 16
+         hop_len: 4
+     resblock_kernel_sizes: [3, 7, 11]
+     resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     source_resblock_kernel_sizes: [7, 7, 11]
+     source_resblock_dilation_sizes: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
+     lrelu_slope: 0.1
+     audio_limit: 0.99
+     f0_predictor: !new:cosyvoice.hifigan.f0_predictor.ConvRNNF0Predictor
+         num_class: 1
+         in_channels: 80
+         cond_channels: 512
+
+ # gan related module
+ mel_spec_transform1: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1920
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 480
+     win_size: 1920
+     fmin: 0
+     fmax: null
+     center: False
+ hifigan: !new:cosyvoice.hifigan.hifigan.HiFiGan
+     generator: !ref <hift>
+     discriminator: !new:cosyvoice.hifigan.discriminator.MultipleDiscriminator
+         mpd: !new:matcha.hifigan.models.MultiPeriodDiscriminator
+         mrd: !new:cosyvoice.hifigan.discriminator.MultiResSpecDiscriminator
+     mel_spec_transform: [!ref <mel_spec_transform1>]
+
+ # processor functions
+ parquet_opener: !name:cosyvoice.dataset.processor.parquet_opener
+ get_tokenizer: !name:cosyvoice.tokenizer.tokenizer.get_qwen_tokenizer
+     token_path: !ref <qwen_pretrain_path>
+     skip_special_tokens: True
+ allowed_special: "all"
+ tokenize: !name:cosyvoice.dataset.processor.tokenize
+     get_tokenizer: !ref <get_tokenizer>
+     allowed_special: !ref <allowed_special>
+ filter: !name:cosyvoice.dataset.processor.filter
+     max_length: 40960
+     min_length: 100
+     token_max_length: 200
+     token_min_length: 1
+ resample: !name:cosyvoice.dataset.processor.resample
+     resample_rate: !ref <sample_rate>
+ truncate: !name:cosyvoice.dataset.processor.truncate
+     truncate_length: 24480 # must be a multiple of hop_size
+ feat_extractor: !name:cosyvoice.utils.audio.mel_spectrogram
+     n_fft: 1920
+     num_mels: 80
+     sampling_rate: !ref <sample_rate>
+     hop_size: 480
+     win_size: 1920
+     fmin: 0
+     fmax: 8000
+     center: False
+ compute_fbank: !name:cosyvoice.dataset.processor.compute_fbank
+     feat_extractor: !ref <feat_extractor>
+ compute_f0: !name:cosyvoice.dataset.processor.compute_f0
+     sample_rate: !ref <sample_rate>
+     hop_size: 480
+ parse_embedding: !name:cosyvoice.dataset.processor.parse_embedding
+     normalize: True
+ shuffle: !name:cosyvoice.dataset.processor.shuffle
+     shuffle_size: 1000
+ sort: !name:cosyvoice.dataset.processor.sort
+     sort_size: 500 # sort_size should be less than shuffle_size
+ batch: !name:cosyvoice.dataset.processor.batch
+     batch_type: "dynamic"
+     max_frames_in_batch: 2000
+ padding: !name:cosyvoice.dataset.processor.padding
+     use_spk_embedding: False # change to True during sft
+
+ # dataset processor pipeline
+ data_pipeline: [
+     !ref <parquet_opener>,
+     !ref <tokenize>,
+     !ref <filter>,
+     !ref <resample>,
+     !ref <compute_fbank>,
+     !ref <parse_embedding>,
+     !ref <shuffle>,
+     !ref <sort>,
+     !ref <batch>,
+     !ref <padding>,
+ ]
+ data_pipeline_gan: [
+     !ref <parquet_opener>,
+     !ref <tokenize>,
+     !ref <filter>,
+     !ref <resample>,
+     !ref <truncate>,
+     !ref <compute_fbank>,
+     !ref <compute_f0>,
+     !ref <parse_embedding>,
+     !ref <shuffle>,
+     !ref <sort>,
+     !ref <batch>,
+     !ref <padding>,
+ ]
+
+ # llm flow train conf
+ train_conf:
+     optim: adam
+     optim_conf:
+         lr: 1e-5 # change to 1e-5 during sft
+     scheduler: constantlr # change to constantlr during sft
+     scheduler_conf:
+         warmup_steps: 2500
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 2
+     log_interval: 100
+     save_per_step: -1
+
+ # gan train conf
+ train_conf_gan:
+     optim: adam
+     optim_conf:
+         lr: 0.0002 # use small lr for gan training
+     scheduler: constantlr
+     optim_d: adam
+     optim_conf_d:
+         lr: 0.0002 # use small lr for gan training
+     scheduler_d: constantlr
+     max_epoch: 200
+     grad_clip: 5
+     accum_grad: 1 # in gan training, accum_grad must be 1
+     log_interval: 100
+     save_per_step: -1
CosyVoice2-0.5B/flow.decoder.estimator.fp32.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd54e4281701e6630730da64502d77b7e8b6e5c057cca65128bffb50f85cbf98
+ size 286317026
CosyVoice2-0.5B/flow.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff4c2f867674411e0a08cee702996df13fa67c1cd864c06108da88d16d088541
+ size 450575567
CosyVoice2-0.5B/hift.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3386cc880324d4e98e05987b99107f49e40ed925b8ecc87c1f4939432d429879
+ size 83390254
CosyVoice2-0.5B/llm.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b144ef55b51ce8cfb79a73c90dbba0bdaba4e451c0ebcfab20f769264f84a608
+ size 2023316821
CosyVoice2-0.5B/speech_tokenizer_v2.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d43342aa12163a80bf07bffb94c9de2e120a8df2f9917cd2f642e7f4219c6f71
+ size 496082973
spk2info.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:652d571b2efec1be6dc14345c2bae52eb41affe4b5d3fa4174548e059bd633b4
+ size 1317821