Jinbiii committed
Commit f07a66e · 1 Parent(s): ac094a6

Upload so-vits-svc_for_aliyun.ipynb

Files changed (1)
  1. so-vits-svc_for_aliyun.ipynb +824 -0
so-vits-svc_for_aliyun.ipynb ADDED
@@ -0,0 +1,824 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "5cfafef2-66b0-449e-b3f5-734215fdb747",
6
+ "metadata": {
7
+ "tags": []
8
+ },
9
+ "source": [
10
+ "## Warning: Resolve dataset licensing yourself; training on unauthorized datasets is forbidden! You bear full responsibility for any problems caused by training on unauthorized datasets; the repository, its maintainers, the svc develop team, and the image author are not involved! \n",
11
+ "\n",
12
+ " This project was created for academic exchange and is intended for learning and discussion only; it is not prepared for production use.\n",
13
+ "\n",
14
+ " Any video published to a video platform that was made with sovits must clearly state the input vocal or audio source in its description. For example, if you separated vocals from someone else's published video or audio and used them as the input, you must link the original video or music; if the input is your own voice, or audio synthesized by another singing-synthesis engine, that must also be stated in the description.\n",
15
+ "\n",
16
+ " You bear full responsibility for any infringement caused by the input source. When using another commercial singing-synthesis product as the input source, make sure you comply with its terms of use; note that many singing-synthesis engines explicitly forbid using their output as a conversion input!\n",
17
+ "\n",
18
+ " Using this project for illegal activities, or for religious or political purposes, is forbidden. The project maintainers and image author firmly oppose such behavior; if you do not agree to this clause, you may not use this project.\n",
19
+ "\n",
20
+ " Continued use constitutes agreement to the terms above\n",
21
+ "\n",
22
+ "### Any consequences arising from illegal use by users have nothing to do with the image author, project maintainers, or developers "
23
+ ]
24
+ },
25
+ {
26
+ "cell_type": "markdown",
27
+ "id": "70a8b412-1c64-4006-b8d5-47c2a2ba6a52",
28
+ "metadata": {
29
+ "tags": []
30
+ },
31
+ "source": [
32
+ "## This is an introduction, plus a partial tutorial \n",
33
+ "\n",
34
+ " First-time users, please read it through patiently\n",
35
+ "\n",
36
+ " Author: bilibili@kiss丿冷鸟鸟\n",
37
+ " Email: 2649406963@qq.com\n",
38
+ " This is my first time building an image, so please bear with any bugs (shrug)\n",
39
+ "\n",
40
+ " This image is based on the so-vits-svc branch sovits4.1-stable (currently the main branch, and the recommended one)\n",
41
+ " \n",
42
+ "### Changelog (worth a look)\n",
43
+ " 2023.6.11\n",
44
+ " 1. Merged in 羽毛布团's webui, so inference can now be run from a webui! (It cannot train yet, but it can preprocess; I'll work out a fix later)\n",
45
+ " 2. Fixed a naming error present from v1 through v5 (only now did I notice 768 had been typed as 786, oops)\n",
46
+ " 3. Added a loudness-embedding base model (available for the 768 encoder only)\n",
47
+ " 4. Added several encoders, but they have no base models yet, so they are not offered for now\n",
48
+ " 5. Fixed some bugs from image v5 and revised parts of the docs; beginners, be sure to read them carefully\n",
49
+ " 2023.6.3\n",
50
+ " 1. Updated the whisper-ppg encoder (clearer articulation); its shallow-diffusion support is still pending\n",
51
+ " 2. Added static/dynamic voice blending (mostly irrelevant to cloud training)\n",
52
+ " 3. Added loudness embedding (a model trained with it matches the input source's loudness rather than the training set's, so output better matches the original song)\n",
53
+ " 4. Added feature retrieval, from RVC (like clustering it reduces timbre leakage, with slightly better articulation, but slower inference; informally, clustering plus)\n",
54
+ " 5. Added tensorboard (training-output visualization)\n",
55
+ " 6. Updated the all-in-one package link to the latest version (as of the day after 6.3)\n",
56
+ " \n",
57
+ " 2023.05.27\n",
58
+ " 1. Updated resample.py, fixing the popping noise produced during resampling\n",
59
+ " 2. Updated the 768l12 base model for better results\n",
60
+ "\n",
61
+ " 2023.05.24\n",
62
+ " 1. Switched the input to the 12th-layer Transformer output of ContentVec, with compatibility for the 4.0 branch\n",
63
+ " 2. Shallow diffusion: a shallow-diffusion model can be used to improve audio quality\n",
64
+ " 3. Added the vec768l12 and hubert encoders\n",
65
+ "\n",
66
+ "### To do (hoping this list stops growing)\n",
67
+ " 2023.05.27\n",
68
+ " 1. onnx encoder (not planning to add this for now; it's probably unneeded and would just waste storage. If anyone needs it, ask, or see the upstream project)\n",
69
+ " Honestly, just lazy (shrug)\n",
70
+ " 2. Add a shallow-diffusion base model for the whisper-ppg encoder\n",
71
+ " 3. Maybe try adding a simple webui for inference, maybe (done)\n",
72
+ " 4. Add base models for many more encoders\n",
73
+ "\n",
74
+ "Project repository: https://github.com/svc-develop-team/so-vits-svc\n",
75
+ "\n",
76
+ "For local use, bilibili@羽毛布团's all-in-one package is recommended\n",
77
+ "\n",
78
+ "Baidu Netdisk: https://pan.baidu.com/s/12u_LDyb5KSOfvjJ9LVwCIQ?pwd=g8n4 extraction code: g8n4 \n",
79
+ "\n",
80
+ " The environment is fully configured and the base models for pretraining are loaded, including the diffusion model and the main models' base models; ready to use out of the box\n",
81
+ " They live in pre_trained_model: the diffusion base model is model_0.pt, and the main base models are G_0.pt and D_0.pt\n",
82
+ "\n",
83
+ " \n",
84
+ "#### Note \n",
85
+ "### Models trained on sovits4.1-stable are largely incompatible with other branches \n",
86
+ " Models from other branches can be used on this branch after editing their config file, though my recommendation is to train a new model\n",
87
+ " To convert, add the following keys\n",
88
+ " Example:\n",
89
+ " ...\n",
90
+ " \"ssl_dim\": 768,\n",
91
+ " \"n_speakers\": 1\n",
92
+ " change to\n",
93
+ " ...\n",
94
+ " \"ssl_dim\": 768,\n",
95
+ " \"n_speakers\": 1,\n",
96
+ " \"speech_encoder\": \"vec768l12\",\n",
97
+ " \"speaker_embedding\": false\n",
98
+ "\n",
99
+ "#### Notes on the differences between the encoders and f0 predictors (thanks to bilibili@羽毛布団) \n",
100
+ "\n",
101
+ " Encoder: the default is vec768l12. ver25619 is not recommended and training on it is not provided here; pull a different image if you need it\n",
102
+ " vec256l9: ContentVec (256, layer 9), called v1 in old versions; the base version of So-VITS-SVC 4.0. Diffusion models not yet supported (not recommended)\n",
103
+ " vec768l12: feature input switched to the 12th-layer Transformer output of ContentVec; better timbre fidelity, and supports loudness embedding (768-only, supposedly)\n",
104
+ " hubertsoft: the encoder used by So-VITS-SVC 3.0; clearer articulation, but may leak timbre (training one model per voice should avoid this well enough (?))\n",
105
+ " whisper: clearer articulation, but timbre fidelity is below vec and it needs beefier hardware. Note that with this encoder each training clip must be under 30s (shallow diffusion not yet supported)\n",
106
+ " \n",
107
+ " f0 predictor\n",
108
+ " crepe: strongest noise robustness, but very slow preprocessing (recommended for noisy datasets)\n",
109
+ " pm: fast preprocessing, but weaker noise robustness \n",
110
+ " dio: the f0 predictor preprocessing originally used \n",
111
+ " harvest: some noise robustness, and preprocessing is easy on VRAM (recommended for small GPUs)\n",
112
+ "\n",
113
+ " Other parameters are explained below where they come up\n",
114
+ "\n",
115
+ "#### Casual chat group: 829974025 \n",
116
+ "\n",
117
+ "heart heart heart heart heart heart heart ♥"
118
+ ]
119
+ },
120
+ {
121
+ "cell_type": "markdown",
122
+ "id": "2a673177-d43e-4a79-8d66-f12f351801d3",
123
+ "metadata": {
124
+ "tags": []
125
+ },
126
+ "source": [
127
+ "#### First upload your dataset to the so-vits-svc/dataset_raw folder\n",
128
+ " On preparing the dataset\n",
129
+ " See bv:114514 (to be made)\n",
130
+ " Dataset requirements\n",
131
+ " Vocal-only slices of roughly 5s - 15s, with a total length of 30 minutes to 2 hours recommended. But quality over quantity: don't dump junk into the dataset. Even an excellent 5-minute dataset can beat 30 minutes stuffed with junk clips\n",
132
+ " Slices don't have to be exactly 5~15s; slightly longer is fine, just not too long. Note: with whisper as the encoder, each audio slice must be under 30 seconds\n",
133
+ " Slice the dataset locally if possible, with the slicing tool audio-slicer\n",
134
+ " \n",
135
+ "Tool link: https://github.com/flutydeer/audio-slicer\n",
136
+ "#### How to upload?\n",
137
+ " Preferably pack it into an archive, upload, then extract (if you can't extract, dragging everything over directly also works, as long as you don't mind the wait)\n",
138
+ " Try to train only a single speaker\n",
139
+ " Say you are training a miku model\n",
140
+ " /root/so-vits-svc/dataset_raw/miku\n",
141
+ " The miku folder then holds the miku dataset\n",
142
+ " miku is the speaker, i.e. speaker0 below\n",
143
+ " Please try not to train multi-character models\n",
144
+ " I mean it's not recommended\n",
145
+ "You can also consult the platform's help docs on uploading files: https://www.autodl.com/docs/netdisk/"
146
+ ]
147
+ },
148
+ {
149
+ "cell_type": "markdown",
150
+ "id": "8752163c-5d4e-45df-aa2b-cf385c4a6303",
151
+ "metadata": {},
152
+ "source": [
153
+ "#### Dataset file structure\n",
154
+ " dataset_raw\n",
155
+ " ├───speaker0\n",
156
+ " │ ├───xxx1-xxx1.wav\n",
157
+ " │ ├───...\n",
158
+ " │ └───Lxx-0xx8.wav\n",
159
+ " └───speaker1\n",
160
+ " ├───xx2-0xxx2.wav\n",
161
+ " ├───...\n",
162
+ " └───xxx7-xxx007.wav"
163
+ ]
164
+ },
165
+ {
166
+ "cell_type": "markdown",
167
+ "id": "13bab566-0be4-42c6-b31b-7ab75bd704d5",
168
+ "metadata": {},
169
+ "source": [
170
+ "### If, after re-entering the notebook, a command fails with: can't open file 'xxxxx': [Errno 2] No such file or directory\n",
171
+ " run the enter-the-project-folder command below"
172
+ ]
173
+ },
174
+ {
175
+ "cell_type": "code",
176
+ "execution_count": null,
177
+ "id": "5556f776-8343-4376-80bc-9f2f89692e1d",
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "# Clone the so-vits-svc repository\n",
182
+ "!git clone https://ghproxy.com/https://github.com/svc-develop-team/so-vits-svc"
183
+ ]
184
+ },
185
+ {
186
+ "cell_type": "code",
187
+ "execution_count": null,
188
+ "id": "d6a59e50-22ae-40d3-8277-8e217329b2c3",
189
+ "metadata": {},
190
+ "outputs": [],
191
+ "source": [
192
+ "# Create the environment\n",
193
+ "!conda create -n sovits python=3.8\n",
194
+ "# Initialize conda\n",
195
+ "!conda init\n",
196
+ "# Activate the environment (note: !conda activate runs in a subshell and does not persist across cells; activate in a terminal if needed)\n",
197
+ "!conda activate sovits\n",
198
+ "# Install torch\n",
199
+ "!conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia"
200
+ ]
201
+ },
202
+ {
203
+ "cell_type": "code",
204
+ "execution_count": null,
205
+ "id": "7bf0e3bf-caa6-4c9d-95eb-26d0342558a4",
206
+ "metadata": {
207
+ "ExecutionIndicator": {
208
+ "show": true
209
+ },
210
+ "execution": {
211
+ "iopub.execute_input": "2023-07-19T03:33:13.532566Z",
212
+ "iopub.status.busy": "2023-07-19T03:33:13.531535Z",
213
+ "iopub.status.idle": "2023-07-19T03:33:13.543838Z",
214
+ "shell.execute_reply": "2023-07-19T03:33:13.542961Z",
215
+ "shell.execute_reply.started": "2023-07-19T03:33:13.532519Z"
216
+ },
217
+ "tags": []
218
+ },
219
+ "outputs": [],
220
+ "source": [
221
+ "# Enter the project folder\n",
222
+ "%cd /mnt/workspace/so-vits-svc"
223
+ ]
224
+ },
225
+ {
226
+ "cell_type": "code",
227
+ "execution_count": null,
228
+ "id": "a8d486d9-eda0-47d7-8715-3898cd2b7a1f",
229
+ "metadata": {},
230
+ "outputs": [],
231
+ "source": [
232
+ "# Install dependencies\n",
233
+ "!pip install pyworld==0.2.12\n",
234
+ "!pip install -r requirements.txt"
235
+ ]
236
+ },
237
+ {
238
+ "cell_type": "code",
239
+ "execution_count": null,
240
+ "id": "0baf0de1-9ee4-43da-8bec-62c18734c63f",
241
+ "metadata": {
242
+ "ExecutionIndicator": {
243
+ "show": true
244
+ },
245
+ "tags": []
246
+ },
247
+ "outputs": [],
248
+ "source": [
249
+ "# Download the ContentVec encoder (wget ignores -P when -O is given, so point -O at the full target path)\n",
250
+ "!wget -c https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -O /mnt/workspace/so-vits-svc/pretrain/checkpoint_best_legacy_500.pt"
251
+ ]
252
+ },
253
+ {
254
+ "cell_type": "code",
255
+ "execution_count": null,
256
+ "id": "09bf5744-5315-4235-8129-32977c509f3a",
257
+ "metadata": {
258
+ "ExecutionIndicator": {
259
+ "show": true
260
+ },
261
+ "tags": []
262
+ },
263
+ "outputs": [],
264
+ "source": [
265
+ "# Download the hubertsoft encoder\n",
266
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt"
267
+ ]
268
+ },
269
+ {
270
+ "cell_type": "code",
271
+ "execution_count": null,
272
+ "id": "d53a38a6-f4de-40f6-aa1a-d9d5b022f617",
273
+ "metadata": {
274
+ "tags": []
275
+ },
276
+ "outputs": [],
277
+ "source": [
278
+ "# Download the whisper-ppg encoder\n",
279
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://openaipublic.azureedge.net/main/whisper/models/345ae4da62f9b3d59415adc60127b97c714f32e89e936602e85993674d08dcb1/medium.pt"
280
+ ]
281
+ },
282
+ {
283
+ "cell_type": "code",
284
+ "execution_count": null,
285
+ "id": "5da911c3-561e-43b5-848e-86f4262e1141",
286
+ "metadata": {},
287
+ "outputs": [],
288
+ "source": [
289
+ "# Download rmvpe (an f0 predictor)\n",
290
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://huggingface.co/datasets/Jinbiii/Jinbi_s_projects/resolve/main/rmvpe.pt"
291
+ ]
292
+ },
293
+ {
294
+ "cell_type": "code",
295
+ "execution_count": null,
296
+ "id": "b93256be-5cdf-40b7-a873-216a26458f61",
297
+ "metadata": {},
298
+ "outputs": [],
299
+ "source": [
300
+ "# Download the cnhubertlarge encoder\n",
301
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://huggingface.co/TencentGameMate/chinese-hubert-large/resolve/main/chinese-hubert-large-fairseq-ckpt.pt"
302
+ ]
303
+ },
304
+ {
305
+ "cell_type": "code",
306
+ "execution_count": null,
307
+ "id": "81ba4373-605c-4aba-ab88-da832e06177c",
308
+ "metadata": {},
309
+ "outputs": [],
310
+ "source": [
311
+ "# Download the dphubert encoder\n",
312
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://huggingface.co/pyf98/DPHuBERT/resolve/main/DPHuBERT-sp0.75.pth"
313
+ ]
314
+ },
315
+ {
316
+ "cell_type": "code",
317
+ "execution_count": null,
318
+ "id": "2490bf4e-5d7b-4f35-847c-a7eba071489c",
319
+ "metadata": {
320
+ "tags": []
321
+ },
322
+ "outputs": [],
323
+ "source": [
324
+ "# Download the WavLM-Base+ encoder\n",
+ "!wget -P /mnt/workspace/so-vits-svc/pretrain https://valle.blob.core.windows.net/share/wavlm/WavLM-Base+.pt"
325
+ ]
326
+ },
327
+ {
328
+ "cell_type": "code",
329
+ "execution_count": null,
330
+ "id": "59d97962-a8ed-46d7-a98e-2411da8e11a5",
331
+ "metadata": {
332
+ "tags": []
333
+ },
334
+ "outputs": [],
335
+ "source": [
336
+ "# Download nsf_hifigan (into the same pretrain folder it is unzipped in below)\n",
337
+ "!wget -cP /mnt/workspace/so-vits-svc/pretrain https://huggingface.co/datasets/Jinbiii/Jinbi_s_projects/resolve/main/nsf_hifigan_20221211.zip\n",
338
+ "# Extract\n",
339
+ "%cd /mnt/workspace/so-vits-svc/pretrain\n",
340
+ "!unzip nsf_hifigan_20221211\n",
341
+ "%cd /mnt/workspace/so-vits-svc/"
342
+ ]
343
+ },
344
+ {
345
+ "cell_type": "code",
346
+ "execution_count": null,
347
+ "id": "254c6415-7999-4228-9b9f-ddb027ef7283",
348
+ "metadata": {
349
+ "tags": []
350
+ },
351
+ "outputs": [],
352
+ "source": [
353
+ "# Download the diffusion base model\n",
354
+ "!wget -P logs/44k/diffusion https://huggingface.co/Kakaru/sovits-whisper-pretrain/resolve/main/diffusion/model_0.pt"
355
+ ]
356
+ },
357
+ {
358
+ "cell_type": "code",
359
+ "execution_count": null,
360
+ "id": "7189ee02-7c9d-4b6d-8a36-6e73d8b0a361",
361
+ "metadata": {
362
+ "ExecutionIndicator": {
363
+ "show": true
364
+ },
365
+ "tags": []
366
+ },
367
+ "outputs": [],
368
+ "source": [
369
+ "# Download the pretrained base models\n",
370
+ "!wget -cP /mnt/workspace/so-vits-svc https://huggingface.co/datasets/Jinbiii/Jinbi_s_projects/resolve/main/pre_trained_model.zip"
371
+ ]
372
+ },
373
+ {
374
+ "cell_type": "code",
375
+ "execution_count": null,
376
+ "id": "034c9b94-7828-4420-9038-3b1f2a6641e3",
377
+ "metadata": {
378
+ "tags": []
379
+ },
380
+ "outputs": [],
381
+ "source": [
382
+ "# Extract the pretrained base models\n",
383
+ "!unzip pre_trained_model"
384
+ ]
385
+ },
386
+ {
387
+ "cell_type": "markdown",
388
+ "id": "62a0d38c-b580-427f-855e-060e819d3915",
389
+ "metadata": {
390
+ "tags": []
391
+ },
392
+ "source": [
393
+ "Upload your dataset to a Dataset on Hugging Face first, to save time\n",
394
+ "When downloading, remember to change blob in the link to resolve\n",
395
+ "For example: https://huggingface.co/datasets/xxx/xxxxx/blob/main/xxx.zip becomes https://huggingface.co/datasets/xxx/xxxxx/resolve/main/xxx.zip"
396
+ ]
397
+ },
398
+ {
399
+ "cell_type": "code",
400
+ "execution_count": null,
401
+ "id": "913e816f-8545-4fa8-96dd-990aff067b3d",
402
+ "metadata": {},
403
+ "outputs": [],
404
+ "source": [
405
+ "# Download the dataset\n",
406
+ "!wget -cP /mnt/workspace/so-vits-svc/dataset_raw https://huggingface.co/datasets/Jinbiii/Jinbi_s_projects/resolve/main/Geping.zip"
407
+ ]
408
+ },
409
+ {
410
+ "cell_type": "code",
411
+ "execution_count": null,
412
+ "id": "d5656920-2ad0-4177-98a0-3f287505b95e",
413
+ "metadata": {
414
+ "tags": []
415
+ },
416
+ "outputs": [],
417
+ "source": [
418
+ "# Extract the dataset\n",
419
+ "%cd /mnt/workspace/so-vits-svc/dataset_raw\n",
420
+ "!unzip Geping\n",
421
+ "%cd /mnt/workspace/so-vits-svc"
422
+ ]
423
+ },
424
+ {
425
+ "cell_type": "markdown",
426
+ "id": "b95bfbc4-048f-4acd-a035-16f2fb366b66",
427
+ "metadata": {},
428
+ "source": [
429
+ "Start preprocessing"
430
+ ]
431
+ },
432
+ {
433
+ "cell_type": "code",
434
+ "execution_count": null,
435
+ "id": "ef0ad7e1-44ba-474f-beae-2a2b56481795",
436
+ "metadata": {
437
+ "tags": []
438
+ },
439
+ "outputs": [],
440
+ "source": [
441
+ "# Resample the dataset (if this errors, check whether any clip is too long or too short)\n",
442
+ "!python resample.py "
443
+ ]
444
+ },
445
+ {
446
+ "cell_type": "markdown",
447
+ "id": "b316b1e7-5b2f-4edb-8f2c-06c57dc4b3fe",
448
+ "metadata": {
449
+ "tags": []
450
+ },
451
+ "source": [
452
+ "#### Choose an encoder and generate the config file \n",
453
+ "\n",
454
+ " The config file is stored in the configs folder as config.json\n",
455
+ " The diffusion model's config defaults are fine; to read about its parameters, see /config_template/diffusion_template.yaml\n",
456
+ " (too lazy to translate it)\n",
457
+ " \n",
458
+ " all_in_mem, cache_all_data: load the whole dataset into memory. Enable when disk IO is very slow and RAM capacity far exceeds the dataset size (can speed up training considerably)\n",
459
+ " \n",
460
+ " epoch: the maximum number of training epochs. Training can be stopped mid-way and resumed from the last saved model, so you don't have to finish them all; around 3000-4000 is enough to try the results, and roughly 10k-20k usually fits well\n",
461
+ " batch_size: tune to your VRAM and dataset size. Default 6; 12 is fine if VRAM allows, but please don't go too large\n",
462
+ " learning_rate: keep in sync with your batch_size. Default 0.0001; when batch_size is changed to 12, the learning rate should become 0.0002\n",
463
+ "\n",
464
+ " eval_interval: how many steps between model saves; old models are removed automatically. Default is every 800 steps; set it together with keep_ckpts\n",
465
+ " keep_ckpts: how many models to keep. Default 3; a set of models is about 1 GB, so it depends on your disk. Remember to back up good models so they don't get deleted\n",
466
+ " log_interval: how many steps between training-log lines\n",
467
+ " loss: generally the lower the better, typically around 28. If it is unusually large, check whether the base model was loaded, whether it matches the encoder, and whether the dataset is sound\n",
468
+ " warmup_epochs: warm-up epochs, during which the learning rate does not decay\n",
469
+ " seed: model-initialization seed. If training results are disappointing, try a different seed and retrain\n",
470
+ "\n",
471
+ " Other parameters can stay at their defaults\n",
472
+ "#### Do not modify the config files mid-training"
473
+ ]
474
+ },
475
+ {
476
+ "cell_type": "code",
477
+ "execution_count": null,
478
+ "id": "6d426aea-8b0e-424d-8313-12b957283921",
479
+ "metadata": {
480
+ "tags": []
481
+ },
482
+ "outputs": [],
483
+ "source": [
484
+ "# Run this if you chose the vec768l12 encoder (with loudness embedding)\n",
485
+ "!python preprocess_flist_config.py --speech_encoder vec768l12 --vol_aug\n",
486
+ "%cp pre_trained_model/vol_emb/768l12/* logs/44k"
487
+ ]
488
+ },
489
+ {
490
+ "cell_type": "code",
491
+ "execution_count": null,
492
+ "id": "05f27476-a4b6-4d89-93e7-e24ecbec69d2",
493
+ "metadata": {
494
+ "ExecutionIndicator": {
495
+ "show": true
496
+ },
497
+ "tags": []
498
+ },
499
+ "outputs": [],
500
+ "source": [
501
+ "# Run this if you chose the vec768l12 encoder (without loudness embedding)\n",
502
+ "!python preprocess_flist_config.py --speech_encoder vec768l12\n",
503
+ "%cp -r pre_trained_model/768l12/* logs/44k"
504
+ ]
505
+ },
506
+ {
507
+ "cell_type": "code",
508
+ "execution_count": null,
509
+ "id": "bc2224e1-5dc2-4569-8261-217af904d318",
510
+ "metadata": {},
511
+ "outputs": [],
512
+ "source": [
513
+ "# Run this if you chose the hubertsoft encoder\n",
514
+ "!python preprocess_flist_config.py --speech_encoder hubertsoft\n",
515
+ "%cp pre_trained_model/hubertsoft/* logs/44k"
516
+ ]
517
+ },
518
+ {
519
+ "cell_type": "code",
520
+ "execution_count": null,
521
+ "id": "4c3aacdf-11b5-43e7-8340-24ef4d5d9e36",
522
+ "metadata": {},
523
+ "outputs": [],
524
+ "source": [
525
+ "# Run this if you chose the whisper-ppg encoder (no diffusion base model yet; shallow diffusion unsupported)\n",
526
+ "!python preprocess_flist_config.py --speech_encoder whisper-ppg\n",
527
+ "%cp pre_trained_model/whisper-ppg/* logs/44k"
528
+ ]
529
+ },
530
+ {
531
+ "cell_type": "markdown",
532
+ "id": "95242345-25d1-4fbe-b5b5-de25a7e5f645",
533
+ "metadata": {
534
+ "tags": []
535
+ },
536
+ "source": [
537
+ "#### Choose an f0 predictor\n",
538
+ " Delete the leading # from the line for the predictor you want, then run (don't delete the exclamation mark!)\n",
539
+ " To use the shallow-diffusion model, append the --use_diff flag at the end of the command\n",
540
+ " e.g.\n",
541
+ " python preprocess_hubert_f0.py --f0_predictor dio --use_diff"
542
+ ]
543
+ },
544
+ {
545
+ "cell_type": "code",
546
+ "execution_count": null,
547
+ "id": "9594cf8a-3ce5-49f5-88c9-ce323f35ab67",
548
+ "metadata": {
549
+ "tags": []
550
+ },
551
+ "outputs": [],
552
+ "source": [
553
+ "!python preprocess_hubert_f0.py --f0_predictor crepe --use_diff\n",
554
+ "#!python preprocess_hubert_f0.py --f0_predictor dio\n",
555
+ "#!python preprocess_hubert_f0.py --f0_predictor pm\n",
556
+ "#!python preprocess_hubert_f0.py --f0_predictor harvest"
557
+ ]
558
+ },
559
+ {
560
+ "cell_type": "markdown",
561
+ "id": "7cc95c5a-5612-4f30-8f28-6c1c9dd57ac8",
562
+ "metadata": {},
563
+ "source": [
564
+ "#### If you chose the shallow-diffusion model above, run the command below for your encoder to put the matching diffusion model into /logs/44k/diffusion\n",
565
+ "If not, skip this part"
566
+ ]
567
+ },
568
+ {
569
+ "cell_type": "code",
570
+ "execution_count": null,
571
+ "id": "ead62598-cbf4-4198-8fcd-a71e07484cb7",
572
+ "metadata": {},
573
+ "outputs": [],
574
+ "source": [
575
+ "# Run this if you chose the vec768l12 encoder\n",
576
+ "%cp -r pre_trained_model/diffusion/768l12/* logs/44k/diffusion"
577
+ ]
578
+ },
579
+ {
580
+ "cell_type": "code",
581
+ "execution_count": null,
582
+ "id": "eee66f94-0997-435e-95f5-37d3d025eb84",
583
+ "metadata": {
584
+ "tags": []
585
+ },
586
+ "outputs": [],
587
+ "source": [
588
+ "# Run this if you chose the hubertsoft encoder\n",
589
+ "%cp pre_trained_model/diffusion/hubertsoft/* logs/44k/diffusion"
590
+ ]
591
+ },
592
+ {
593
+ "cell_type": "markdown",
594
+ "id": "6da4706f-e862-41ba-9b7d-5fab7c97d620",
595
+ "metadata": {
596
+ "tags": []
597
+ },
598
+ "source": [
599
+ "#### Now for training; note whether you chose the shallow-diffusion model\n",
600
+ " Note: as long as jupyterlab does not restart (it almost never does), its terminal keeps running and training is not interrupted, even if your local machine loses its connection or shuts down\n",
601
+ " To stop training mid-way, press ctrl + c in the terminal, or click Interrupt Kernel above\n",
602
+ " To continue training, run the same command; it resumes from the last saved checkpoint"
603
+ ]
604
+ },
605
+ {
606
+ "cell_type": "code",
607
+ "execution_count": null,
608
+ "id": "ddee8b5d-24e5-485a-af6f-941610b356a9",
609
+ "metadata": {},
610
+ "outputs": [],
611
+ "source": [
612
+ "# If you chose the shallow-diffusion model above, train the diffusion model with this command\n",
613
+ "# If you didn't, skip this step\n",
614
+ "!python train_diff.py -c configs/diffusion.yaml "
615
+ ]
616
+ },
617
+ {
618
+ "cell_type": "code",
619
+ "execution_count": null,
620
+ "id": "ebb3516e-534a-44af-be51-211d23583063",
621
+ "metadata": {},
622
+ "outputs": [],
623
+ "source": [
624
+ "# Formal training of the main model (click this to continue training, too)\n",
625
+ "!python train.py -c configs/config.json -m 44k"
626
+ ]
627
+ },
628
+ {
629
+ "cell_type": "markdown",
630
+ "id": "f87c6c55-3914-4088-96b2-686e854af6a3",
631
+ "metadata": {
632
+ "tags": []
633
+ },
634
+ "source": [
635
+ "#### Output-log parameter notes\n",
636
+ " loss: watch how well the loss converges (see tensorboard below)"
637
+ ]
638
+ },
639
+ {
640
+ "cell_type": "code",
641
+ "execution_count": null,
642
+ "id": "e7051279-ccbe-49df-a866-7bf2723812e6",
643
+ "metadata": {
644
+ "ExecutionIndicator": {
645
+ "show": false
646
+ },
647
+ "tags": []
648
+ },
649
+ "outputs": [],
650
+ "source": [
651
+ "# Launch tensorboard\n",
652
+ "# After running this, view it in the instance monitor; the entry is under quick tools, in the same place you opened jupyterlab\n",
653
+ "!ps -ef | grep tensorboard | awk '{print $2}' | xargs kill -9\n",
654
+ "!tensorboard --port 6007 --logdir /mnt/workspace/so-vits-svc/logs"
655
+ ]
656
+ },
657
+ {
658
+ "cell_type": "markdown",
659
+ "id": "1ae3f472-ea5d-4908-bd84-82962650f189",
660
+ "metadata": {},
661
+ "source": [
662
+ "#### Clustering and feature-retrieval models"
663
+ ]
664
+ },
665
+ {
666
+ "cell_type": "code",
667
+ "execution_count": null,
668
+ "id": "29c5119f-ea50-44c3-bd45-a2af7b1a5437",
669
+ "metadata": {},
670
+ "outputs": [],
671
+ "source": [
672
+ "# Clustering-model training (not really my thing, so not particularly recommended) \n",
673
+ "# The model is written to logs/44k/kmeans_10000.pt; GPU training is used by default, which is much faster\n",
674
+ "!python cluster/train_cluster.py --gpu"
675
+ ]
676
+ },
677
+ {
678
+ "cell_type": "code",
679
+ "execution_count": null,
680
+ "id": "dc6d0b09-a541-484c-938c-e29e49d5b16e",
681
+ "metadata": {},
682
+ "outputs": [],
683
+ "source": [
684
+ "# Feature-retrieval training (clustering plus), very fast\n",
685
+ "# The resulting model is saved to logs/44k/feature_and_index.pkl\n",
686
+ "!python train_index.py -c configs/config.json"
687
+ ]
688
+ },
689
+ {
690
+ "cell_type": "markdown",
691
+ "id": "6f8a53fb-f9c8-4ccb-8bf5-57f926918234",
692
+ "metadata": {},
693
+ "source": [
694
+ "#### Model compression (optional)\n",
695
+ "A trained model carries the information needed to resume training. Once you're sure you won't train it further, that information can be removed, yielding a final model about 1/3 the size."
696
+ ]
697
+ },
698
+ {
699
+ "cell_type": "code",
700
+ "execution_count": null,
701
+ "id": "5d28e48d-1aa1-450c-bb78-5ea2dd8823fb",
702
+ "metadata": {},
703
+ "outputs": [],
704
+ "source": [
705
+ "# Change ***G_***.pth to your own model's name; release.pth is the compressed final model\n",
706
+ "!python compress_model.py -c=\"configs/config.json\" -i=\"logs/44k/G_.pth\" -o=\"logs/44k/release.pth\""
707
+ ]
708
+ },
709
+ {
710
+ "cell_type": "markdown",
711
+ "id": "129e43a2-402e-4b64-be1e-9f878c014727",
712
+ "metadata": {
713
+ "tags": []
714
+ },
715
+ "source": [
716
+ "#### Inference (best done locally), though the cloud works too\n",
717
+ " The local all-in-one package link is at the very top, don't miss it; the webui is below"
718
+ ]
719
+ },
720
+ {
721
+ "cell_type": "code",
722
+ "execution_count": null,
723
+ "id": "5fe77fbb-e883-414f-a8b1-4d1df564101e",
724
+ "metadata": {},
725
+ "outputs": [],
726
+ "source": [
727
+ "# Inference via the webui. For technical reasons it cannot be used for training, but it can do inference and preprocessing; converted audio is saved to the results folder\n",
728
+ "# Running it prints two links. Open via a custom service, or click the second link directly (the former is stable; the latter is strongly discouraged, since slightly longer audio very easily times out mid-conversion; avoid it unless you have no other option)\n",
729
+ "!python app.py"
730
+ ]
731
+ },
732
+ {
733
+ "cell_type": "code",
734
+ "execution_count": null,
735
+ "id": "442429a0-23a3-4cc1-a44a-0d5e3d198bac",
736
+ "metadata": {
737
+ "tags": []
738
+ },
739
+ "outputs": [],
740
+ "source": [
741
+ "# Using inference_main.py\n",
742
+ "# For full inference options see the upstream project; other flags are not listed here\n",
743
+ "# Examples\n",
744
+ "!python inference_main.py -m \"logs/44k/G_350400.pth\" -c \"configs/config.json\" -n \"祝福.wav\" -t -12 -s \"Geping\" -f0p \"crepe\" -dm \"logs/44k/diffusion/model_6000.pt\" -dc \"logs/44k/diffusion/configs.yaml\" -ks 150\n",
745
+ "!python inference_main.py -m \"logs/44k/G_330400.pth\" -c \"configs/config.json\" -n \"向云端.wav\" -t -12 -s \"Geping\" -f0p \"crepe\" -dm \"logs/44k/diffusion/model_6000.pt\" -dc \"logs/44k/diffusion/configs.yaml\" -ks 150\n",
746
+ "!python inference_main.py -m \"logs/44k/G_192800.pth\" -c \"configs/config.json\" -n \"杀死那个石家庄人.wav\" -t 0 -s \"Geping\" -f0p \"crepe\" -dm \"logs/44k/diffusion/model_6000.pt\" -dc \"logs/44k/diffusion/configs.yaml\" -ks 200\n",
747
+ "!python inference_main.py -m \"logs/44k/G_192800.pth\" -c \"configs/config.json\" -n \"单相思.wav\" -t -12 -s \"Geping\" -f0p \"crepe\" -cm \"logs/44k/kmeans_10000.pt\" -cr 0.3 -dm \"logs/44k/diffusion/model_6000.pt\" -dc \"logs/44k/diffusion/configs.yaml\" -ks 320\n",
748
+ "!python inference_main.py -m \"logs/44k/G_192800.pth\" -c \"configs/config.json\" -n \"idol1.wav\" -t -12 -s \"Geping\" -f0p \"rmvpe\" -cm \"logs/44k/kmeans_10000.pt\" -cr 0.3\n",
749
+ "# Change the model path (G_350400.pth etc.) to your own model's name\n",
750
+ "# Change config.json to your config filename (the default is config.json)\n",
751
+ "# Change the song filename (祝福.wav etc.) to the song you want to convert; put songs for conversion in the raw folder\n",
752
+ "# Change Geping to your speaker name, i.e. your dataset folder name, listed under spk at the bottom of the config file\n",
753
+ "# Adjust the pitch (-t) as needed\n",
754
+ ]
755
+ },
756
+ {
757
+ "cell_type": "markdown",
758
+ "id": "5ac6e67e-33df-424d-b56c-37b327d85024",
759
+ "metadata": {},
760
+ "source": [
761
+ "When downloading a model, remember to download its config file as well. If this image has problems, contact the image author.\n",
762
+ "Make do with it for now"
763
+ ]
764
+ },
765
+ {
766
+ "cell_type": "markdown",
767
+ "id": "2a5024b8-7144-44ab-a077-fb5ddd1783b8",
768
+ "metadata": {
769
+ "tags": []
770
+ },
771
+ "source": [
772
+ "## Back up promptly\n",
773
+ "Retraining a model requires deleting the dataset, the config files, and the already-trained models.\n",
774
+ "After deleting them, repeat the steps from your first training run"
775
+ ]
776
+ },
777
+ {
778
+ "cell_type": "code",
779
+ "execution_count": null,
780
+ "id": "4e5ab7a0-3666-4673-a796-2a20e0efbb71",
781
+ "metadata": {},
782
+ "outputs": [],
783
+ "source": [
784
+ "# Delete the dataset, config files, and trained models\n",
785
+ "# Use with caution\n",
786
+ "%rm -rf logs/44k/*\n",
787
+ "%rm -rf dataset_raw/*\n",
788
+ "%rm -rf configs/*\n",
789
+ "%rm -rf dataset/*\n",
790
+ "%cd logs/44k/\n",
791
+ "%mkdir diffusion\n",
+ "%cd /mnt/workspace/so-vits-svc"
792
+ ]
793
+ },
794
+ {
795
+ "cell_type": "markdown",
796
+ "id": "90b4ad4d-6f4d-4501-b0f5-44cb3dabae04",
797
+ "metadata": {},
798
+ "source": [
799
+ "If you run into a bug, contact the author; what can be fixed will be, and what can't, I'll rope someone in to fix ("
800
+ ]
801
+ }
802
+ ],
803
+ "metadata": {
804
+ "kernelspec": {
805
+ "display_name": "Python 3",
806
+ "language": "python",
807
+ "name": "python3"
808
+ },
809
+ "language_info": {
810
+ "codemirror_mode": {
811
+ "name": "ipython",
812
+ "version": 3
813
+ },
814
+ "file_extension": ".py",
815
+ "mimetype": "text/x-python",
816
+ "name": "python",
817
+ "nbconvert_exporter": "python",
818
+ "pygments_lexer": "ipython3",
819
+ "version": "3.6.12"
820
+ }
821
+ },
822
+ "nbformat": 4,
823
+ "nbformat_minor": 5
824
+ }
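The cross-branch config edit described in the notebook's intro cell (adding `speech_encoder` and `speaker_embedding` so an older branch's model loads on sovits4.1-stable) can be sketched in Python. This is a minimal sketch, not the project's own tooling: the assumption that `ssl_dim`/`n_speakers` and the new keys sit under a `model` section of config.json is mine, as is the `patch_config` helper name.

```python
import json

def patch_config(cfg: dict) -> dict:
    """Add the sovits4.1-stable keys shown in the notebook's example.

    Assumption: the keys belong at the same level as "ssl_dim" and
    "n_speakers" (here, under "model"); existing values are untouched.
    """
    model = cfg.setdefault("model", {})
    model.setdefault("speech_encoder", "vec768l12")
    model.setdefault("speaker_embedding", False)
    return cfg

# Example: a minimal stand-in for an older branch's config.json.
old = {"model": {"ssl_dim": 768, "n_speakers": 1}}
patched = patch_config(old)
print(json.dumps(patched["model"], sort_keys=True))
```

In practice you would `json.load` the real configs/config.json, patch it, and `json.dump` it back; `setdefault` keeps the edit idempotent if the file was already converted.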