niobures committed
Commit 02eb85d · verified · 1 Parent(s): f9bd03b

HuBERT (models_onnx: ailia-models)
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ ailia-models/code/booth.wav filter=lfs diff=lfs merge=lfs -text
+ ailia-models/code/output.wav filter=lfs diff=lfs merge=lfs -text
ailia-models/code/README.md ADDED
@@ -0,0 +1,81 @@
+ # Retrieval-based-Voice-Conversion
+
+ ## Input
+
+ Audio file
+
+ https://github.com/axinc-ai/ailia-models/assets/29946532/689bba85-b894-4645-bd2a-8abf928733db
+
+ (Audio from https://github.com/ohashi3399/RVC-demo)
+
+ ## Output
+
+ Audio file
+
+ https://github.com/axinc-ai/ailia-models/assets/29946532/5c036243-a93b-4627-acf0-90bdb911daee
+
+ ## Requirements
+
+ This model requires additional modules.
+ ```
+ pip3 install librosa
+ pip3 install soundfile
+ pip3 install faiss-cpu==1.7.3
+ pip3 install pyworld==0.3.2
+ ```
+
+ ## Usage
+ The onnx and prototxt files are downloaded automatically on the first run.
+ An internet connection is required during the download.
+
+ For the sample wav,
+ ```bash
+ $ python3 rvc.py
+ ```
+
+ If you want to specify the input audio, put the file path after the `--input` option.
+ ```bash
+ $ python3 rvc.py --input AUDIO_FILE
+ ```
+
+ By adding the `--model_file` option, you can specify the VC model file.
+ ```bash
+ $ python3 rvc.py --model_file AISO-HOWATTO.onnx
+ ```
+
+ Specify the `--f0 1` option to run inference with a model that uses f0. You can choose `crepe` or `crepe_tiny` as the `--f0_method`.
+
+ ```bash
+ $ python3 rvc.py -i booth.wav -m Rinne.onnx --f0_method crepe_tiny --f0 1 --f0_up_key 11 --tgt_sr 48000
+ ```
+
+ By adding the `--file_index` option, you can specify the faiss feature index file.
+
+ ```bash
+ $ python3 rvc.py -i booth.wav -m Rinne.onnx --f0_method crepe --f0 1 --f0_up_key 11 --tgt_sr 48000 --file_index Rinne.index --index_rate 0.75
+ ```
+
+ By adding the `--version` option, you can specify the RVC model version.
+ ```bash
+ $ python3 rvc.py --model_file rvc_v2.onnx --version 2
+ ```
+
+ ## Reference
+
+ - [Retrieval-based-Voice-Conversion-WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
+ - [Trained voice model data for RVC](https://chihaya369.booth.pm/items/4701666)
+
+ ## Framework
+
+ Pytorch
+
+ ## Model Format
+
+ ONNX opset=14
+
+ ## Netron
+
+ - [hubert_base.onnx.prototxt](https://netron.app/?url=https://storage.googleapis.com/ailia-models/rvc/hubert_base.onnx.prototxt)
+ - [AISO-HOWATTO.onnx.prototxt](https://netron.app/?url=https://storage.googleapis.com/ailia-models/rvc/AISO-HOWATTO.onnx.prototxt)
+ - [crepe.onnx.prototxt](https://netron.app/?url=https://storage.googleapis.com/ailia-models/rvc/crepe.onnx.prototxt)
+ - [crepe_tiny.onnx.prototxt](https://netron.app/?url=https://storage.googleapis.com/ailia-models/rvc/crepe_tiny.onnx.prototxt)
ailia-models/code/booth.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:143ae994701271390e04d99a3a31977f8bba329d846a14194dc10c494d4dcad6
+ size 1197796
ailia-models/code/output.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38b24abeffe6bf7a3d6d838d04ba60607acbd23ce46e59022397ca34a1d38df0
+ size 1084844
ailia-models/code/rvc.py ADDED
@@ -0,0 +1,581 @@
+ import sys
+ import time
+ from logging import getLogger
+
+ import numpy as np
+ import scipy.signal as signal
+ from PIL import Image
+ import librosa
+ import soundfile as sf
+
+ import ailia
+
+ # import original modules
+ sys.path.append('../../util')
+ sys.path.append('../crepe')
+ from microphone_utils import start_microphone_input  # noqa
+ from model_utils import check_and_download_models  # noqa
+ from arg_utils import get_base_parser, get_savepath, update_parser  # noqa
+
+ flg_ffmpeg = False
+
+ if flg_ffmpeg:
+     import ffmpeg
+
+ logger = getLogger(__name__)
+
+ # ======================
+ # Parameters
+ # ======================
+
+ WEIGHT_HUBERT_PATH = "hubert_base.onnx"
+ MODEL_HUBERT_PATH = "hubert_base.onnx.prototxt"
+ WEIGHT_VC_PATH = "AISO-HOWATTO.onnx"
+ MODEL_VC_PATH = "AISO-HOWATTO.onnx.prototxt"
+ REMOTE_PATH = 'https://storage.googleapis.com/ailia-models/rvc/'
+
+ SAMPLE_RATE = 16000
+
+ WAV_PATH = 'booth.wav'
+ SAVE_WAV_PATH = 'output.wav'
+
+ # ======================
+ # Argument Parser Config
+ # ======================
+
+ parser = get_base_parser(
+     'Retrieval-based-Voice-Conversion', WAV_PATH, SAVE_WAV_PATH, input_ftype='audio'
+ )
+ parser.add_argument(
+     '--tgt_sr', metavar="SR", type=int, default=40000,
+     help='VC model sampling rate.',
+ )
+ parser.add_argument(
+     '--f0', type=int, default=0, choices=(0, 1),
+     help='f0 flag of VC model.',
+ )
+ parser.add_argument(
+     '--sid', type=int, default=0,
+     help='Select Speaker/Singer ID',
+ )
+ parser.add_argument(
+     '--f0_up_key', metavar="N", type=int, default=0,
+     help='Transpose (number of semitones, raise by an octave: 12, lower by an octave: -12)',
+ )
+ parser.add_argument(
+     '--f0_method', default="pm", choices=("pm", "harvest", "crepe", "crepe_tiny"),
+     help='Select the pitch extraction algorithm',
+ )
+ parser.add_argument(
+     '--file_index', metavar="FILE", type=str, default=None,
+     help='Path to the feature index file.',
+ )
+ parser.add_argument(
+     '--index_rate', metavar="RATIO", type=float, default=0.75,
+     help='Search feature ratio. (controls accent strength, too high has artifacting)',
+ )
+ parser.add_argument(
+     '--filter_radius', metavar="N", type=int, default=3,
+     help='If >=3: apply median filtering to the harvested pitch results. The value can reduce breathiness.',
+ )
+ parser.add_argument(
+     '--resample_sr', metavar="SR", type=int, default=0,
+     help='Resample the output audio. Set to 0 for no resampling.',
+ )
+ parser.add_argument(
+     '--rms_mix_rate', metavar="RATE", type=float, default=0.25,
+     help='Adjust the volume envelope scaling.',
+ )
+ parser.add_argument(
+     '--protect', metavar="N", type=float, default=0.33,
+     help='Protect voiceless consonants and breath sounds'
+          ' to prevent artifacts such as tearing in electronic music.'
+          ' Set to 0.5 to disable',
+ )
+ parser.add_argument(
+     '-m', '--model_file', default=WEIGHT_VC_PATH,
+     help='specify .onnx file'
+ )
+ parser.add_argument(
+     '--version', default=1, choices=[1, 2], type=int,
+     help='specify rvc version'
+ )
+ parser.add_argument(
+     '--onnx',
+     action='store_true',
+     help='execute onnxruntime version.'
+ )
+ args = update_parser(parser)
+
+
+ class VCParam(object):
+     def __init__(self, tgt_sr):
+         self.x_pad, self.x_query, self.x_center, self.x_max = (
+             3, 10, 60, 65
+         )
+         self.sr = 16000  # HuBERT input sampling rate
+         self.window = 160  # samples per frame
+         self.t_pad = self.sr * self.x_pad  # padding added before/after each segment
+         self.t_pad_tgt = tgt_sr * self.x_pad
+         self.t_pad2 = self.t_pad * 2
+         self.t_query = self.sr * self.x_query  # search window around each candidate cut point
+         self.t_center = self.sr * self.x_center  # spacing of candidate cut points
+         self.t_max = self.sr * self.x_max  # duration threshold below which no cutting is done
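+         # All of the above are sample counts at 16 kHz, except t_pad_tgt,
+         # which counts samples at the VC model's output rate (tgt_sr).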
+
+
+ # ======================
+ # Secondary Functions
+ # ======================
+
+ def load_audio(file: str, sr: int = SAMPLE_RATE):
+     if flg_ffmpeg:
+         # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26
+         # This launches a subprocess to decode audio while down-mixing and resampling as necessary.
+         # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed.
+         out, _ = ffmpeg.input(file, threads=0) \
+             .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) \
+             .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
+
+         audio = np.frombuffer(out, np.float32).flatten()
+     else:
+         # prepare input data
+         audio, source_sr = librosa.load(file, sr=None)
+         # Resample the wav if needed
+         if source_sr is not None and source_sr != sr:
+             audio = librosa.resample(audio, orig_sr=source_sr, target_sr=sr)
+
+     return audio
+
+
+ def change_rms(data1, sr1, data2, sr2, rate):  # data1: input audio, data2: output audio, rate: weight of data2
+     rms1 = librosa.feature.rms(
+         y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
+     )  # one point per half second
+     rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
+
+     rms1 = np.array(Image.fromarray(rms1).resize((data2.shape[0], 1), Image.Resampling.BILINEAR))
+     rms1 = rms1.flatten()
+     rms2 = np.array(Image.fromarray(rms2).resize((data2.shape[0], 1), Image.Resampling.BILINEAR))
+     rms2 = rms2.flatten()
+
+     r = np.zeros(rms2.shape) + 1e-6
+     rms2 = np.where(rms2 > r, rms2, r)
+
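+     # Blend loudness envelopes: data2 *= (rms1 / rms2)^(1 - rate), so rate=1
+     # keeps the output's own envelope and rate=0 imposes the input's envelope.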
+     data2 *= np.power(rms1, 1 - rate) * np.power(rms2, rate - 1)
+
+     return data2
+
+
+ # ======================
+ # Main functions
+ # ======================
+
+ def get_f0(
+         vc_param,
+         x,
+         p_len,
+         f0_up_key,
+         f0_method,
+         filter_radius,
+         inp_f0=None):
+     time_step = vc_param.window / vc_param.sr * 1000
+     f0_min = 50
+     f0_max = 1100
+     f0_mel_min = 1127 * np.log(1 + f0_min / 700)
+     f0_mel_max = 1127 * np.log(1 + f0_max / 700)
+
+     if f0_method == "pm":
+         import parselmouth
+
+         f0 = (
+             parselmouth.Sound(x, vc_param.sr).to_pitch_ac(
+                 time_step=time_step / 1000,
+                 voicing_threshold=0.6,
+                 pitch_floor=f0_min,
+                 pitch_ceiling=f0_max,
+             ).selected_array["frequency"]
+         )
+         pad_size = (p_len - len(f0) + 1) // 2
+         if pad_size > 0 or p_len - len(f0) - pad_size > 0:
+             f0 = np.pad(
+                 f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
+             )
+     elif f0_method == "harvest":
+         import pyworld
+
+         audio = x.astype(np.double)
+         fs = vc_param.sr
+         frame_period = 10
+         f0, t = pyworld.harvest(
+             audio,
+             fs=fs,
+             f0_ceil=f0_max,
+             f0_floor=f0_min,
+             frame_period=frame_period,
+         )
+         f0 = pyworld.stonemask(audio, f0, t, fs)
+
+         if filter_radius > 2:
+             f0 = signal.medfilt(f0, 3)
+     elif f0_method == "crepe" or f0_method == "crepe_tiny":
+         import mod_crepe
+
+         # Pick a batch size that doesn't cause memory errors on your gpu
+         batch_size = 512
+         audio = np.copy(x)[None]
+         f0, pd = mod_crepe.predict(
+             audio,
+             vc_param.sr,
+             vc_param.window,
+             f0_min,
+             f0_max,
+             batch_size=batch_size,
+             return_periodicity=True,
+         )
+         pd = mod_crepe.median(pd, 3)
+         f0 = mod_crepe.mean(f0, 3)
+         f0[pd < 0.1] = 0
+         f0 = f0[0]
+     else:
+         raise ValueError("f0_method: %s" % f0_method)
+
+     f0 *= pow(2, f0_up_key / 12)
+
+     tf0 = vc_param.sr // vc_param.window  # f0 points per second
+     if inp_f0 is not None:
+         delta_t = np.round(
+             (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
+         ).astype("int16")
+         replace_f0 = np.interp(
+             list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
+         )
+         shape = f0[vc_param.x_pad * tf0: vc_param.x_pad * tf0 + len(replace_f0)].shape[0]
+         f0[vc_param.x_pad * tf0: vc_param.x_pad * tf0 + len(replace_f0)] = \
+             replace_f0[:shape]
+
+     f0bak = f0.copy()
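+     # Quantize f0 onto 255 coarse mel-scale bins (mel = 1127 * ln(1 + f/700));
+     # bin 1 covers unvoiced frames, bins 2-255 span f0_min..f0_max.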
+     f0_mel = 1127 * np.log(1 + f0 / 700)
+     f0_mel[f0_mel > 0] = \
+         (f0_mel[f0_mel > 0] - f0_mel_min) * 254 \
+         / (f0_mel_max - f0_mel_min) + 1
+     f0_mel[f0_mel <= 1] = 1
+     f0_mel[f0_mel > 255] = 255
+     f0_coarse = np.rint(f0_mel).astype(int)
+
+     return f0_coarse, f0bak  # coarse bins and raw f0 in Hz
+
+
+ def vc(
+         hubert,
+         net_g,
+         sid,
+         audio0,
+         pitch,
+         pitchf,
+         vc_param,
+         index,
+         big_npy,
+         index_rate,
+         protect):
+     feats = audio0.reshape(1, -1).astype(np.float32)
+     padding_mask = np.zeros(feats.shape, dtype=bool)
+
+     # feedforward
+     if not args.onnx:
+         output = hubert.predict([feats, padding_mask])
+     else:
+         output = hubert.run(None, {'source': feats, 'padding_mask': padding_mask})
+
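+     # v1 models take the 256-dim projected HuBERT output; v2 models take the
+     # 768-dim features read from an intermediate encoder blob (ailia API only).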
+     if args.version == 1:
+         feats = output[0]  # v1 : 256
+     elif args.version == 2:
+         feats = hubert.get_blob_data(hubert.find_blob_index_by_name("/encoder/Slice_5_output_0"))  # v2 : 768
+
+     if protect < 0.5 and pitch is not None and pitchf is not None:
+         feats0 = np.copy(feats)
+
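+     # Retrieval: replace each frame with the distance-weighted average of its
+     # k=8 nearest neighbours in the faiss index, mixed back in by index_rate.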
+     if index is not None \
+             and big_npy is not None \
+             and index_rate > 0:
+         x = feats[0]
+
+         score, ix = index.search(x, k=8)
+         weight = np.square(1 / score)
+         weight /= weight.sum(axis=1, keepdims=True)
+         x = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
+
+         feats = (
+             np.expand_dims(x, axis=0) * index_rate
+             + (1 - index_rate) * feats
+         )
+
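+     # Double the frame rate by duplicating each feature frame: HuBERT emits one
+     # frame per 320 samples (50 Hz), while the synthesizer consumes 100 Hz frames.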
+     # interpolate
+     new_feats = np.zeros((feats.shape[0], feats.shape[1] * 2, feats.shape[2]), dtype=np.float32)
+     for i in range(feats.shape[1]):
+         new_feats[:, i * 2 + 0, :] = feats[:, i, :]
+         new_feats[:, i * 2 + 1, :] = feats[:, i, :]
+     feats = new_feats
+
+     if protect < 0.5 and pitch is not None and pitchf is not None:
+         # interpolate
+         new_feats = np.zeros((feats0.shape[0], feats0.shape[1] * 2, feats0.shape[2]), dtype=np.float32)
+         for i in range(feats0.shape[1]):
+             new_feats[:, i * 2 + 0, :] = feats0[:, i, :]
+             new_feats[:, i * 2 + 1, :] = feats0[:, i, :]
+         feats0 = new_feats
+
+     p_len = audio0.shape[0] // vc_param.window
+     if feats.shape[1] < p_len:
+         p_len = feats.shape[1]
+         if pitch is not None and pitchf is not None:
+             pitch = pitch[:, :p_len]
+             pitchf = pitchf[:, :p_len]
+
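+     # Protect voiceless consonants: where a frame is unvoiced (pitchf < 1), blend
+     # back towards the un-retrieved features feats0 by the protect factor.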
+     if protect < 0.5 and pitch is not None and pitchf is not None:
+         pitchff = np.copy(pitchf)
+         pitchff[pitchf > 0] = 1
+         pitchff[pitchf < 1] = protect
+         pitchff = np.expand_dims(pitchff, axis=-1)
+         feats = feats * pitchff + feats0 * (1 - pitchff)
+
+     p_len = np.array([p_len], dtype=int)
+
+     # feedforward
+     rnd = np.random.randn(1, 192, p_len[0]).astype(np.float32) * 0.66666  # noise (random factor)
+     if pitch is not None and pitchf is not None:
+         if not args.onnx:
+             output = net_g.predict([feats, p_len, pitch, pitchf, sid, rnd])
+         else:
+             output = net_g.run(None, {
+                 'phone': feats, 'phone_lengths': p_len,
+                 'pitch': pitch, 'pitchf': pitchf,
+                 'ds': sid, 'rnd': rnd
+             })
+     else:
+         if not args.onnx:
+             output = net_g.predict([feats, p_len, sid, rnd])
+         else:
+             output = net_g.run(None, {
+                 'phone': feats, 'phone_lengths': p_len, 'ds': sid, 'rnd': rnd
+             })
+     audio1 = output[0][0, 0]
+
+     return audio1
+
+
+ bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
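+ # (5th-order Butterworth high-pass at 48 Hz for 16 kHz audio; applied in
+ # predict() to remove DC offset and low-frequency rumble before conversion.)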
+
+
+ def predict(audio, models, tgt_sr=40000, if_f0=0):
+     audio_max = np.abs(audio).max() / 0.95
+     if audio_max > 1:
+         audio /= audio_max
+
+     sid = args.sid
+     file_index = args.file_index
+     index_rate = args.index_rate
+     resample_sr = args.resample_sr
+     rms_mix_rate = args.rms_mix_rate
+     protect = args.protect
+     f0_up_key = args.f0_up_key
+     f0_method = args.f0_method
+     filter_radius = args.filter_radius
+     inp_f0 = None
+
+     vc_param = VCParam(tgt_sr)
+
+     index = big_npy = None
+     if file_index and index_rate > 0:
+         import faiss
+         try:
+             index = faiss.read_index(file_index)
+             big_npy = index.reconstruct_n(0, index.ntotal)
+         except Exception as e:
+             logger.exception(e)
+
+     audio = signal.filtfilt(bh, ah, audio)
+     audio_pad = np.pad(audio, (vc_param.window // 2, vc_param.window // 2), mode="reflect")
+
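+     # For long inputs, pick cut points near low-energy samples (searched within
+     # ±t_query around every t_center step) so that segments can be converted
+     # independently and concatenated without audible seams.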
+     opt_ts = []
+     if audio_pad.shape[0] > vc_param.t_max:
+         audio_sum = np.zeros_like(audio)
+         for i in range(vc_param.window):
+             audio_sum += audio_pad[i: i - vc_param.window]
+         for t in range(vc_param.t_center, audio.shape[0], vc_param.t_center):
+             opt_ts.append(
+                 t - vc_param.t_query
+                 + np.where(
+                     np.abs(audio_sum[t - vc_param.t_query: t + vc_param.t_query])
+                     == np.abs(audio_sum[t - vc_param.t_query: t + vc_param.t_query]).min()
+                 )[0][0]
+             )
+
+     s = 0
+     audio_opt = []
+     t = None
+     audio_pad = np.pad(audio, (vc_param.t_pad, vc_param.t_pad), mode="reflect")
+     p_len = audio_pad.shape[0] // vc_param.window
+
+     pitch, pitchf = None, None
+     if if_f0 == 1:
+         pitch, pitchf = get_f0(
+             vc_param,
+             audio_pad,
+             p_len,
+             f0_up_key,
+             f0_method,
+             filter_radius,
+             inp_f0,
+         )
+         pitch = pitch[:p_len]
+         pitchf = pitchf[:p_len]
+         pitch = np.expand_dims(pitch, axis=0)
+         pitchf = np.expand_dims(pitchf, axis=0)
+         pitchf = pitchf.astype(np.float32)
+
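+     # Convert each segment; the reflect padding added above (t_pad samples per
+     # side) corresponds to t_pad_tgt output samples, trimmed before joining.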
+     sid = np.array([sid], dtype=int)
+     for t in opt_ts:
+         t = t // vc_param.window * vc_param.window
+         audio1 = vc(
+             models["hubert"],
+             models["net_g"],
+             sid,
+             audio_pad[s: t + vc_param.t_pad2 + vc_param.window],
+             pitch[:, s // vc_param.window: (t + vc_param.t_pad2) // vc_param.window]
+             if if_f0 == 1 else None,
+             pitchf[:, s // vc_param.window: (t + vc_param.t_pad2) // vc_param.window]
+             if if_f0 == 1 else None,
+             vc_param,
+             index,
+             big_npy,
+             index_rate,
+             protect,
+         )
+         audio_opt.append(audio1[vc_param.t_pad_tgt: -vc_param.t_pad_tgt])
+         s = t
+     audio1 = vc(
+         models["hubert"],
+         models["net_g"],
+         sid,
+         audio_pad[t:],
+         (pitch[:, t // vc_param.window:] if t is not None else pitch)
+         if if_f0 == 1 else None,
+         (pitchf[:, t // vc_param.window:] if t is not None else pitchf)
+         if if_f0 == 1 else None,
+         vc_param,
+         index,
+         big_npy,
+         index_rate,
+         protect,
+     )
+     audio_opt.append(audio1[vc_param.t_pad_tgt: -vc_param.t_pad_tgt])
+     audio_opt = np.concatenate(audio_opt)
+     audio_opt = audio_opt.astype(np.float32)
+
+     if rms_mix_rate < 1:
+         audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
+     if 16000 <= resample_sr != tgt_sr:
+         audio_opt = librosa.resample(
+             audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
+         )
+         tgt_sr = resample_sr
+
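+     # Convert to int16, scaling down only if the peak would exceed 0.99 full scale.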
+     audio_max = np.abs(audio_opt).max() / 0.99
+     max_int16 = 32768
+     if audio_max > 1:
+         max_int16 /= audio_max
+     audio_opt = (audio_opt * max_int16).astype(np.int16)
+
+     return audio_opt, tgt_sr
+
+
+ def recognize_from_audio(models):
+     # Depend on voice model
+     tgt_sr = args.tgt_sr
+     if_f0 = args.f0
+
+     # input audio loop
+     for audio_path in args.input:
+         logger.info(audio_path)
+
+         # prepare input data
+         audio = load_audio(audio_path, SAMPLE_RATE)
+
+         # inference
+         logger.info('Start inference...')
+         if args.benchmark:
+             logger.info('BENCHMARK mode')
+             start = int(round(time.time() * 1000))
+             output, sr = predict(audio, models, tgt_sr, if_f0)
+             end = int(round(time.time() * 1000))
+             estimation_time = (end - start)
+             logger.info(f'\ttotal processing time {estimation_time} ms')
+         else:
+             output, sr = predict(audio, models, tgt_sr, if_f0)
+
+         # save result
+         savepath = get_savepath(args.savepath, audio_path, ext='.wav')
+         logger.info(f'saved at : {savepath}')
+         sf.write(savepath, output, sr)
+
+     logger.info('Script finished successfully.')
+
+
+ def main():
+     WEIGHT_VC_PATH = args.model_file
+     MODEL_VC_PATH = WEIGHT_VC_PATH.replace(".onnx", ".onnx.prototxt")
+     check_and_download_models(WEIGHT_HUBERT_PATH, MODEL_HUBERT_PATH, REMOTE_PATH)
+     check_and_download_models(WEIGHT_VC_PATH, MODEL_VC_PATH, REMOTE_PATH)
+
+     if args.f0 == 1 and (args.f0_method == "crepe" or args.f0_method == "crepe_tiny"):
+         from mod_crepe import WEIGHT_CREPE_PATH, MODEL_CREPE_PATH, WEIGHT_CREPE_TINY_PATH, MODEL_CREPE_TINY_PATH
+         if args.f0_method == "crepe_tiny":
+             check_and_download_models(WEIGHT_CREPE_TINY_PATH, MODEL_CREPE_TINY_PATH, REMOTE_PATH)
+         else:
+             check_and_download_models(WEIGHT_CREPE_PATH, MODEL_CREPE_PATH, REMOTE_PATH)
+
+     env_id = args.env_id
+
+     # initialize
+     if not args.onnx:
+         hubert = ailia.Net(MODEL_HUBERT_PATH, WEIGHT_HUBERT_PATH, env_id=env_id)
+         net_g = ailia.Net(MODEL_VC_PATH, WEIGHT_VC_PATH, env_id=env_id)
+         if args.profile:
+             hubert.set_profile_mode(True)
+             net_g.set_profile_mode(True)
+     else:
+         import onnxruntime
+         providers = ["CPUExecutionProvider", "CUDAExecutionProvider"]
+         hubert = onnxruntime.InferenceSession(WEIGHT_HUBERT_PATH, providers=providers)
+         net_g = onnxruntime.InferenceSession(WEIGHT_VC_PATH, providers=providers)
+
+     if args.f0 == 1 and (args.f0_method == "crepe" or args.f0_method == "crepe_tiny"):
+         import mod_crepe
+         f0_model = mod_crepe.load_model(env_id, args.onnx, args.f0_method == "crepe_tiny")
+         if args.profile:
+             f0_model.set_profile_mode(True)
+     else:
+         f0_model = None
+
+     models = {
+         "hubert": hubert,
+         "net_g": net_g,
+     }
+
+     recognize_from_audio(models)
+
+     if args.profile and not args.onnx:
+         print("--- profile hubert")
+         print(hubert.get_summary())
+         print("")
+         print("--- profile net_g")
+         print(net_g.get_summary())
+         print("")
+         if f0_model is not None:
+             print("--- profile f0_model")
+             print(f0_model.get_summary())
+             print("")
+
+
+ if __name__ == '__main__':
+     main()
ailia-models/hubert_base.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad8743e43836dcd0fc36c1e2275359b1c43fbe706e016008749d689295805dad
+ size 293548300
ailia-models/hubert_base.onnx.prototxt ADDED
The diff for this file is too large to render.
 
ailia-models/source.txt ADDED
@@ -0,0 +1,4 @@
+ https://github.com/axinc-ai/ailia-models/tree/master/audio_processing/rvc
+
+ https://storage.googleapis.com/ailia-models/rvc/hubert_base.onnx
+ https://storage.googleapis.com/ailia-models/rvc/hubert_base.onnx.prototxt