duanyu027 committed on
Commit
4bafcf7
verified
1 Parent(s): 13a8aa2

Upload prepare_data.ipynb with huggingface_hub

Files changed (1)
  1. prepare_data.ipynb +1853 -0
prepare_data.ipynb ADDED
@@ -0,0 +1,1853 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "4f6ad8a0-2fbf-4732-adab-88acc36814d0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prepare training and test data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "cffb3f58-7324-4825-9fc2-fdae98d0749d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# load phonemizer\n",
+ "import phonemizer\n",
+ "from nltk.tokenize import word_tokenize\n",
+ "\n",
+ "global_phonemizer_en = phonemizer.backend.EspeakBackend(language='en-us', preserve_punctuation=True, with_stress=True)\n",
+ "global_phonemizer_es = phonemizer.backend.EspeakBackend(language='es', preserve_punctuation=True, with_stress=True)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "8a7bb6c1-9ce9-4d43-958f-112d0df55cce",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def preprocess(text, language):\n",
+ "    text = text.strip()\n",
+ "    if language == 'en-us':\n",
+ "        ps = global_phonemizer_en.phonemize([text])\n",
+ "    elif language == 'es':\n",
+ "        ps = global_phonemizer_es.phonemize([text])\n",
+ "    else:\n",
+ "        # guard: without this, ps would be undefined for other languages\n",
+ "        raise ValueError(f'unsupported language: {language}')\n",
+ "    ps = word_tokenize(ps[0])\n",
+ "    ps = [p for p in ps if p not in ['``', '`', '(', ')']]\n",
+ "    ps = ' '.join(ps)\n",
+ "\n",
+ "    return ps"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "dde01040-55aa-4408-9544-507a56a27b76",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # test\n",
+ "# text = '''¡Usuario! ¿De qué estás hablando? Ese tipo de conversación es inapropiada... Supongo que no puedo decir que no... sólo por esta vez. Mmm... me haces sentir tan bien.'''\n",
+ "# preprocess(text, 'es')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "35366d0d-149e-4cf6-8780-be6631dc3f3b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# !pip3 install pandas"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "da973b2e-4520-4647-8451-a83bf0c412bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import wave\n",
+ "import soundfile as sf\n",
+ "\n",
+ "def get_wav_duration(file_path):\n",
+ "    try:\n",
+ "        with wave.open(file_path, 'rb') as wf:\n",
+ "            # get the number of frames in the audio file\n",
+ "            frames = wf.getnframes()\n",
+ "            # get the frame rate (frames per second)\n",
+ "            frame_rate = wf.getframerate()\n",
+ "            # compute the duration in seconds\n",
+ "            duration = frames / float(frame_rate)\n",
+ "            return duration\n",
+ "    except Exception:\n",
+ "        # fall back to soundfile for files the wave module cannot parse\n",
+ "        wave_array, sr = sf.read(file_path)\n",
+ "        return len(wave_array) / sr"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "8b93b418-d2f6-46c9-a6b7-71fa37c063ea",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 1151/1151 [00:08<00:00, 137.66it/s]\n",
+ "100%|██████████| 40/40 [00:00<00:00, 124.61it/s]\n",
+ "100%|██████████| 39/39 [00:00<00:00, 130.81it/s]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "142731 5491 4619\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "import pandas as pd\n",
+ "from tqdm import tqdm\n",
+ "\n",
+ "# libritts\n",
+ "def prepare_libritts_data(dir_path, id1_list):\n",
+ "    final_data = []\n",
+ "    data = {}\n",
+ "\n",
+ "    for id1_index in tqdm(range(len(id1_list))):\n",
+ "        id1 = id1_list[id1_index]\n",
+ "        # id1 is the speaker id\n",
+ "        for id2 in os.listdir(dir_path+'/'+id1):\n",
+ "            for fname in os.listdir(dir_path+'/'+id1+'/'+id2):\n",
+ "                if 'txt' in fname or 'wav' in fname:\n",
+ "                    whole_id = fname.split('.')[0]\n",
+ "                    if whole_id not in data:\n",
+ "                        data[whole_id] = {}\n",
+ "\n",
+ "                    if 'wav' in fname:\n",
+ "                        wav_path = dir_path+'/'+id1+'/'+id2+'/'+fname\n",
+ "                        wav_duration = get_wav_duration(wav_path)\n",
+ "                        data[whole_id]['wav'] = wav_path\n",
+ "                        data[whole_id]['wav_dur'] = wav_duration\n",
+ "\n",
+ "                    elif 'normalized' in fname: # use the normalized version (or original)\n",
+ "                        with open(dir_path+'/'+id1+'/'+id2+'/'+fname, 'r') as f:\n",
+ "                            data[whole_id]['text'] = f.read().strip()\n",
+ "\n",
+ "                    data[whole_id]['speaker_id'] = id1\n",
+ "\n",
+ "    for d in data:\n",
+ "        if data[d]['wav_dur'] > 1:\n",
+ "            # drop clips that are too short\n",
+ "            final_data.append(f'''{data[d]['wav']}|{data[d]['text']}|{data[d]['speaker_id']}''')\n",
+ "\n",
+ "    return final_data\n",
+ "\n",
+ "train_path = '/workspace/TTS/data/LibriTTS/train-clean-460'\n",
+ "train_ids = os.listdir(train_path)\n",
+ "\n",
+ "val_path = '/workspace/TTS/data/LibriTTS/dev-clean'\n",
+ "val_ids = os.listdir(val_path)\n",
+ "\n",
+ "test_path = '/workspace/TTS/data/LibriTTS/test-clean'\n",
+ "test_ids = os.listdir(test_path)\n",
+ "\n",
+ "libritts_train_data = prepare_libritts_data(train_path, train_ids)\n",
+ "libritts_val_data = prepare_libritts_data(val_path, val_ids)\n",
+ "libritts_test_data = prepare_libritts_data(test_path, test_ids)\n",
+ "\n",
+ "print(len(libritts_train_data), len(libritts_val_data), len(libritts_test_data))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "a9f1c3cf-16ae-4b48-9b96-0440c1056db4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Inspect the LibriTTS speaker distribution for reference\n",
+ "# id2num = {}\n",
+ "# for d in libritts_train_data:\n",
+ "#     _id = d.split('|')[2]\n",
+ "#     if _id not in id2num:\n",
+ "#         id2num[_id] = 1\n",
+ "#     else:\n",
+ "#         id2num[_id] += 1\n",
+ "\n",
+ "# sorted_dict = sorted(id2num.items(), key=lambda x: x[1], reverse=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "c93c9b4f-4cee-42f6-8fa2-c3cac47220c3",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# for i, d in enumerate(sorted_dict):\n",
+ "#     print(d, i/len(id2num))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "d357def5-b0af-4a73-a04e-cc1084c83a83",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "153150 1897 1662\n"
+ ]
+ }
+ ],
+ "source": [
+ "# cml-tts\n",
+ "def prepare_cml_data():\n",
+ "    # Prepare the CML-TTS data by reading the csv files directly\n",
+ "    # Drop rows whose transcription looks unreliable\n",
+ "    # The +10000 offset keeps speaker ids from colliding with LibriTTS\n",
+ "\n",
+ "    df_train = pd.read_csv('/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/train.csv', delimiter='|')\n",
+ "    train = []\n",
+ "    for index, row in df_train.iterrows():\n",
+ "        if row['levenshtein'] >= 0.9 and row['duration'] > 1 and row['duration'] < 30:\n",
+ "            train.append(f'''/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/{row['wav_filename']}|{row['transcript']}|{10000+int(row['client_id'])}''')\n",
+ "\n",
+ "    df_val = pd.read_csv('/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/dev.csv', delimiter='|')\n",
+ "    val = []\n",
+ "    for index, row in df_val.iterrows():\n",
+ "        if row['levenshtein'] >= 0.9 and row['duration'] > 1 and row['duration'] < 30:\n",
+ "            val.append(f'''/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/{row['wav_filename']}|{row['transcript']}|{10000+int(row['client_id'])}''')\n",
+ "\n",
+ "    df_test = pd.read_csv('/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/test.csv', delimiter='|')\n",
+ "    test = []\n",
+ "    for index, row in df_test.iterrows():\n",
+ "        if row['levenshtein'] >= 0.9 and row['duration'] > 1 and row['duration'] < 30:\n",
+ "            test.append(f'''/workspace/TTS/data/cml_tts_dataset_spanish_v0.1/{row['wav_filename']}|{row['transcript']}|{10000+int(row['client_id'])}''')\n",
+ "\n",
+ "    return train, val, test\n",
+ "\n",
+ "cml_train_data, cml_val_data, cml_test_data = prepare_cml_data()\n",
+ "print(len(cml_train_data), len(cml_val_data), len(cml_test_data))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "f25a81d5-d032-4d6d-a981-9f866c5a14c6",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']\n",
+ "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC\n",
+ "import torch\n",
+ "import soundfile as sf\n",
+ "import librosa\n",
+ "\n",
+ "# load model and tokenizer\n",
+ "wav2vec_processor = Wav2Vec2Processor.from_pretrained(\"facebook/wav2vec2-base-960h\", cache_dir = '/workspace/hf_resource/')\n",
+ "wav2vec_model = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\", cache_dir = '/workspace/hf_resource/')\n",
+ "\n",
+ "def wav2vec_asr(wav_path, end_second = None):\n",
+ "    wave, sr = sf.read(wav_path)\n",
+ "    if wave.shape[-1] == 2:\n",
+ "        wave = wave[:, 0].squeeze()\n",
+ "    if sr != 16000:\n",
+ "        wave = librosa.resample(wave, orig_sr=sr, target_sr=16000)\n",
+ "\n",
+ "    wave = torch.from_numpy(wave).float()\n",
+ "\n",
+ "    if end_second is not None:\n",
+ "        wave = wave[:int(16000 * end_second)]\n",
+ "\n",
+ "    # tokenize\n",
+ "    input_values = wav2vec_processor(wave, return_tensors=\"pt\", padding=\"longest\", sampling_rate=16000).input_values # Batch size 1\n",
+ "\n",
+ "    # retrieve logits\n",
+ "    logits = wav2vec_model(input_values).logits\n",
+ "\n",
+ "    # take argmax and decode\n",
+ "    predicted_ids = torch.argmax(logits, dim=-1)\n",
+ "    transcription = wav2vec_processor.batch_decode(predicted_ids)\n",
+ "\n",
+ "    return transcription"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "f2dcd822-6d1d-41de-babd-a966ef0d1da1",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "ename": "ModuleNotFoundError",
+ "evalue": "No module named 'whisper'",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[12], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mwhisper\u001b[39;00m\n\u001b[1;32m 3\u001b[0m model \u001b[38;5;241m=\u001b[39m whisper\u001b[38;5;241m.\u001b[39mload_model(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlarge-v3\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 5\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21masr\u001b[39m(wav_path):\n",
+ "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'whisper'"
+ ]
+ }
+ ],
+ "source": [
+ "import whisper\n",
+ "\n",
+ "model = whisper.load_model(\"large-v3\")\n",
+ "\n",
+ "def asr(wav_path):\n",
+ "    result = model.transcribe(wav_path)\n",
+ "    return result[\"text\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d86ed416-9110-4e02-a605-1ca645f258bc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The point is that the first word must not start too late\n",
+ "import whisperx\n",
+ "\n",
+ "model_a, metadata = whisperx.load_align_model(language_code='en', device='cuda')\n",
+ "\n",
+ "def alignment(wav_path):\n",
+ "    result = model.transcribe(wav_path)\n",
+ "\n",
+ "    result = whisperx.align(result[\"segments\"], model_a, metadata, wav_path, 'cuda', return_char_alignments=False)\n",
+ "\n",
+ "    return result[\"segments\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a62e3c95-c1e5-48e1-82b4-56836e7c2bd4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the previously processed data, mainly to get the id lists in preparation for resampling\n",
+ "game_train_ids, game_val_ids, game_test_ids = [], [], []\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/train_list_en_20k_0318.txt', 'r') as f:\n",
+ "    for line in f:\n",
+ "        line = line.strip()\n",
+ "        if 'game_en' in line:\n",
+ "            game_train_ids.append('/'.join(line.split('|')[0].split('/')[:-1]))\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/val_list_en_20k_0318.txt', 'r') as f:\n",
+ "    for line in f:\n",
+ "        line = line.strip()\n",
+ "        if 'game_en' in line:\n",
+ "            game_val_ids.append('/'.join(line.split('|')[0].split('/')[:-1]))\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/OOD_texts_en_2k_0318.txt', 'r') as f:\n",
+ "    for line in f:\n",
+ "        line = line.strip()\n",
+ "        if 'game_en' in line:\n",
+ "            game_test_ids.append('/'.join(line.split('|')[0].split('/')[:-1]))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e45ae4c5-5163-4c48-85b0-0b84055468d6",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "for d in game_train_ids:\n",
+ "    print(d)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ebfd5f74-2796-41fd-8cc6-52464776dc33",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Add the game data (English)\n",
+ "# Filter the game data:\n",
+ "# drop clips shorter than 1 s, texts that are too short, and clips longer than 30 s\n",
+ "game_en = []\n",
+ "\n",
+ "def get_game_data(path, speaker_id):\n",
+ "    role_names = os.listdir(path)\n",
+ "    for role_name in role_names:\n",
+ "        if '#Unknown' in role_name or ' ' in role_name:\n",
+ "            continue\n",
+ "\n",
+ "        file_names = os.listdir(os.path.join(path, role_name))\n",
+ "        file_names = [n for n in file_names if 'lab' in n or 'wav' in n]\n",
+ "        id2wav, id2text = {}, {}\n",
+ "        for file_name in file_names:\n",
+ "            id = file_name.split('.')[0]\n",
+ "\n",
+ "            if 'lab' in file_name:\n",
+ "                try:\n",
+ "                    with open(path+'/'+role_name+'/'+file_name, 'r') as f:\n",
+ "                        text = f.read().strip()\n",
+ "                        id2text[id] = text\n",
+ "                except Exception:\n",
+ "                    continue\n",
+ "\n",
+ "            elif 'wav' in file_name:\n",
+ "                wav = path+'/'+role_name+'/'+file_name\n",
+ "                id2wav[id] = wav\n",
+ "\n",
+ "        for id in id2wav:\n",
+ "            if id in id2text:\n",
+ "                wav_path, text = id2wav[id], id2text[id]\n",
+ "                wav_len = get_wav_duration(wav_path)\n",
+ "                if wav_len > 1 and wav_len < 30 and len(text.split(' ')) >= 3 and len(text.split(' ')) <= 50 and '{' not in text and '<' not in text:\n",
+ "                    game_en.append(f'{wav_path}|{text}|{speaker_id}')\n",
+ "\n",
+ "        speaker_id += 1\n",
+ "\n",
+ "    return speaker_id\n",
+ "\n",
+ "sid = get_game_data('/workspace/TTS/data/game_en', 30000)\n",
+ "print(sid)\n",
+ "sid = get_game_data('/workspace/TTS/data/game_en1', sid)\n",
+ "print(sid)\n",
+ "\n",
+ "print(len(game_en))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "776c404e-9865-4f92-b593-272e4d065372",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# # Tally the distribution by speaker id\n",
+ "# sid2game = {}\n",
+ "# for d in game_en:\n",
+ "#     d = d.strip().split('|')\n",
+ "#     sid = '/'.join(d[0].split('/')[:-1])\n",
+ "#     if sid not in sid2game:\n",
+ "#         sid2game[sid] = [d]\n",
+ "#     else:\n",
+ "#         sid2game[sid].append(d)\n",
+ "\n",
+ "# sid2game = sorted(sid2game.items(), key=lambda x: len(x[1]), reverse=True)\n",
+ "\n",
+ "# whole_num = sum([len(wavs) for sid, wavs in sid2game])\n",
+ "# print(whole_num)\n",
+ "# for sid, wavs in sid2game:\n",
+ "#     print(sid, len(wavs), len(wavs)/whole_num)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "6a5abdd0-2d6e-433d-8c9e-cd64a1e512dd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# len(sid2game)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "20133137-f9b1-444e-a4e2-477d858b0af9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Post-process the game data again:\n",
+ "# Step 1: cap the number of samples per sid\n",
+ "# Step 2: split train/val/test by sid\n",
+ "\n",
+ "import random\n",
+ "import numpy as np\n",
+ "\n",
+ "# Fix the seed; removing it would make fuller use of the data\n",
+ "random.seed(42)\n",
+ "np.random.seed(42)\n",
+ "\n",
+ "sid2game = {}\n",
+ "for d in game_en:\n",
+ "    sid = d.split('|')[2]\n",
+ "    if sid not in sid2game:\n",
+ "        sid2game[sid] = [d]\n",
+ "    else:\n",
+ "        sid2game[sid].append(d)\n",
+ "\n",
+ "max_sample = 150\n",
+ "filter_sid2game = {}\n",
+ "for sid in sid2game:\n",
+ "    wavs = sid2game[sid]\n",
+ "    if len(wavs) > max_sample:\n",
+ "        np.random.shuffle(wavs)\n",
+ "        wavs = wavs[:max_sample]\n",
+ "    filter_sid2game[sid] = wavs\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "ec591e2c-2f84-44b6-9bbe-9e511c16cc84",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# print(len([d for d in filter_sid2game if len(filter_sid2game[d]) > 1]))\n",
+ "# print(len(filter_sid2game))\n",
+ "# filter_sid2game = sorted(filter_sid2game.items(), key=lambda x: len(x[1]), reverse=True)\n",
+ "\n",
+ "# whole_num = sum([len(wavs) for sid, wavs in filter_sid2game])\n",
+ "# print(whole_num)\n",
+ "# for sid, wavs in filter_sid2game:\n",
+ "#     print(sid, len(wavs), len(wavs)/whole_num)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "10dc4e4f-f3c3-4bcd-9330-413636f4e638",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# index = random.choice(range(len(game_en_train)))\n",
+ "# tmp = game_en_train[index]\n",
+ "# print(tmp)\n",
+ "# import IPython.display as ipd\n",
+ "# display(ipd.Audio(tmp.split('|')[0], rate=24000, normalize=False))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "cee4bad1-4d05-45bd-8140-0ee8e677dc92",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# game_en_train, game_en_val, game_en_test = [], [], []\n",
+ "# # whole_sids = list(filter_sid2game.keys())\n",
+ "# # np.random.shuffle(whole_sids)\n",
+ "\n",
+ "# # train_sids = whole_sids[:int(0.8 * len(whole_sids))]\n",
+ "# # val_sids = whole_sids[int(0.8 * len(whole_sids)):int(0.9 * len(whole_sids))]\n",
+ "# # test_sids = whole_sids[int(0.9 * len(whole_sids)):]\n",
+ "\n",
+ "# train_sids = set()\n",
+ "# val_sids = set()\n",
+ "# test_sids = set()\n",
+ "# for sid in filter_sid2game:\n",
+ "#     wavs = filter_sid2game[sid]\n",
+ "#     if '/'.join(wavs[0].split('/')[:-1]) in game_train_ids and '男声' not in wavs[0] and '女声' not in wavs[0]:\n",
+ "#         train_sids.add(sid)\n",
+ "#     elif '/'.join(wavs[0].split('/')[:-1]) in game_val_ids:\n",
+ "#         val_sids.add(sid)\n",
+ "#     elif '/'.join(wavs[0].split('/')[:-1]) in game_test_ids:\n",
+ "#         test_sids.add(sid)\n",
+ "\n",
+ "# for sid in train_sids:\n",
+ "#     game_en_train += filter_sid2game[sid]\n",
+ "\n",
+ "# for sid in val_sids:\n",
+ "#     game_en_val += filter_sid2game[sid]\n",
+ "\n",
+ "# for sid in test_sids:\n",
+ "#     game_en_test += filter_sid2game[sid]\n",
+ "\n",
+ "# np.random.shuffle(game_en_train)\n",
+ "# np.random.shuffle(game_en_val)\n",
+ "# np.random.shuffle(game_en_test)\n",
+ "\n",
+ "# print(len(game_en_train), len(game_en_val), len(game_en_test))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "8351bf48-26f7-4111-b0ea-4ac055953fca",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Add the emotional TTS data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "id": "bba721d0-0c35-4388-968f-ef5b7a96d91e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# !pip3 install langdetect"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "id": "f3955304-af9e-471c-8fd9-e760b0796202",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# from langdetect import detect"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "id": "5a735bbe-628d-487d-9692-2c11e6ce1909",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import re\n",
+ "\n",
+ "def is_english_string(s):\n",
+ "    return bool(re.match(r'^[a-zA-Z\\s]+$', s))\n",
+ "\n",
+ "esd = []\n",
+ "emov = []\n",
+ "jl = []\n",
+ "rav = []\n",
+ "\n",
+ "esd_path = '/workspace/TTS/data/ESD/dataset/'\n",
+ "esd_trans = []\n",
+ "for f1_name in os.listdir(esd_path):\n",
+ "    if '.' not in f1_name:\n",
+ "        with open(esd_path + f1_name + '/' + f1_name + '.txt', 'r') as f:\n",
+ "            for line in f:\n",
+ "                line = line.strip().split('\\t')\n",
+ "                id, trans, emo = line[0], line[1], line[2]\n",
+ "                if is_english_string(emo):\n",
+ "                    esd_trans.append([id, trans, emo])\n",
+ "\n",
+ "emov_path = '/workspace/TTS/data/emov_db/'\n",
+ "# no transcripts; these need to go through Whisper\n",
+ "for f1_name in os.listdir(emov_path):\n",
+ "    for f2_name in os.listdir(emov_path + f1_name):\n",
+ "        if 'wav' in f2_name:\n",
+ "            emov.append(f'{emov_path}{f1_name}/{f2_name}')\n",
+ "\n",
+ "rav_path = '/workspace/TTS/data/ravdess/'\n",
+ "# also no transcripts; these need to go through Whisper\n",
+ "for f1_name in os.listdir(rav_path):\n",
+ "    for f2_name in os.listdir(rav_path + f1_name):\n",
+ "        rav.append(f'/workspace/TTS/data/ravdess/{f1_name}/{f2_name}')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "id": "4ce217b6-09cc-4932-9209-fb213cdde19a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "jl_path = '/workspace/TTS/data/jl_corpus/raw_jl/jl/'\n",
+ "jl_sid2wav = {}\n",
+ "jl_sid2txt = {}\n",
+ "# transcripts available\n",
+ "for f1_name in os.listdir(jl_path):\n",
+ "    # compute sid for every file, so the txt branch does not reuse a stale sid\n",
+ "    sid = f1_name.split('.')[0]\n",
+ "    if 'wav' in f1_name:\n",
+ "        jl_sid2wav[sid] = jl_path + f1_name\n",
+ "    elif 'txt' in f1_name:\n",
+ "        with open(jl_path + f1_name, 'r') as f:\n",
+ "            jl_sid2txt[sid] = f.read().strip()\n",
+ "\n",
+ "for sid in jl_sid2wav:\n",
+ "    if sid in jl_sid2txt:\n",
+ "        jl.append([jl_sid2wav[sid], jl_sid2txt[sid]])\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "id": "cc8666ed-c576-409f-bfeb-03c8827de90c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Run ASR\n",
+ "need_whisper = []\n",
+ "for d in game_en_train + game_en_val + game_en_test:\n",
+ "    need_whisper.append(d.split('|')[0])\n",
+ "\n",
+ "need_whisper += rav\n",
+ "print(len(need_whisper))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "id": "2cc9361f-39dd-45d3-a548-0118041c5cb4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from tqdm import tqdm\n",
+ "asr_res = []\n",
+ "\n",
+ "with open('/workspace/TTS/data/whisper_large_v3_result_0330.txt', 'w') as f:\n",
+ "    for i in tqdm(range(len(need_whisper))):\n",
+ "        wav_path = need_whisper[i]\n",
+ "        trans = ''\n",
+ "        try:\n",
+ "            trans = asr(wav_path)\n",
+ "        except Exception:\n",
+ "            pass\n",
+ "\n",
+ "        asr_res.append([wav_path, trans])\n",
+ "        f.write(f'{wav_path}|{trans}\\n')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "id": "d25b2f6c-6bd1-488d-bb77-7891d5aee5a6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# len(emov)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "id": "bc51b44d-57f2-4fc7-bbb2-08e6cbe7bfbe",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Run ASR on emov as well\n",
+ "new_need_whisper = []\n",
+ "new_need_whisper += emov\n",
+ "\n",
+ "with open('/workspace/TTS/data/whisper_large_v3_result_0401_add.txt', 'w') as f:\n",
+ "    for i in tqdm(range(len(new_need_whisper))):\n",
+ "        wav_path = new_need_whisper[i]\n",
+ "        trans = ''\n",
+ "        try:\n",
+ "            trans = asr(wav_path)\n",
+ "        except Exception:\n",
+ "            pass\n",
+ "\n",
+ "        f.write(f'{wav_path}|{trans}\\n')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "id": "13c471aa-0d2a-4050-a438-5bcb02c437c2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "asr_res = []\n",
+ "with open('/workspace/TTS/data/whisper_large_v3_result_0330.txt', 'r') as f:\n",
+ "    for line in f:\n",
+ "        line = line.strip().split('|')\n",
+ "        asr_res.append([line[0], line[1]])\n",
+ "\n",
+ "with open('/workspace/TTS/data/whisper_large_v3_result_0401_add.txt', 'r') as f:\n",
+ "    for line in f:\n",
+ "        line = line.strip().split('|')\n",
+ "        asr_res.append([line[0], line[1]])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 34,
+ "id": "77210c33-9e55-476e-891c-edb0de0b0ad1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# len(asr_res)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "id": "3c3afd1a-ad41-4c8f-a48f-6c75202024e6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# !pip install editdistance"
+ ]
+ },
815
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "id": "eea2bb15-613a-4f85-8fc2-9995f94df5e8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Filter game data\n",
+ "import string\n",
+ "import IPython.display as ipd\n",
+ "import editdistance\n",
+ "\n",
+ "wav2asr = {}\n",
+ "for items in asr_res:\n",
+ " wav2asr[items[0]] = items[1]\n",
+ "\n",
+ "def remove_punctuation(text):\n",
+ " # Define the punctuation character set\n",
+ " punctuation = string.punctuation\n",
+ " # Walk the string and drop punctuation characters\n",
+ " result = ''.join([char if char not in punctuation else '' for char in text])\n",
+ " return result\n",
+ "\n",
+ "def jaccard_similarity(s1, s2):\n",
+ " set1 = set(s1)\n",
+ " set2 = set(s2)\n",
+ " intersection = len(set1.intersection(set2))\n",
+ " union = len(set1.union(set2))\n",
+ " return intersection / union if union != 0 else 0\n",
+ " \n",
+ "def is_game_en_valid(wav_path, text, log = False):\n",
+ " aha_words = ['ahem', 'argh', 'sigh', 'sobs', 'uh', 'um', 'hmm', 'ugh', 'woo', 'haha', 'hehe', 'huh', 'hmph', 'hm', 'whoa', 'yay', 'ah', 'mmhmm', 'ha', 'oh', 'hu', 'mmm']\n",
+ "\n",
+ " if wav_path in wav2asr:\n",
+ " whisper_trans = remove_punctuation(wav2asr[wav_path].lower().strip())\n",
+ " else:\n",
+ " whisper_trans = remove_punctuation(asr(wav_path).lower().strip())\n",
+ "\n",
+ " original_text = text\n",
+ " \n",
+ " text = remove_punctuation(text.lower().strip())\n",
+ "\n",
+ " duration = get_wav_duration(wav_path)\n",
+ " mean_dur = duration / len(text.split(' '))\n",
+ "\n",
+ " jaccard_score = jaccard_similarity(whisper_trans.split(' '), text.split(' '))\n",
+ " edit_distance = editdistance.eval(whisper_trans.split(' '), text.split(' '))\n",
+ "\n",
+ " length_ratio = len(whisper_trans.split(' ')) / len(text.split(' '))\n",
+ "\n",
+ " if '*' in original_text \\\n",
+ " or (edit_distance >= 3 and length_ratio > 1.5) \\\n",
+ " or whisper_trans.split(' ')[0] in aha_words \\\n",
+ " or text.split(' ')[0] in aha_words \\\n",
+ " or ('男声' in wav_path or '女声' in wav_path or '会场广播' in wav_path):\n",
+ " # if '*' in text or len(whisper_trans) == 0 or length_ratio > 1.5 or mean_dur > 1.0 or jaccard_score < 0.3 or (whisper_trans.split(' ')[0] != text.split(' ')[0] and whisper_trans.split(' ')[0] in aha_words) or text.split(' ')[0] in aha_words or 'sigh' in text or len(start_wav2vec_trans0) == 0 or start_wav2vec_trans0.split(' ')[0] in aha_words:\n",
+ " if log:\n",
+ " print(wav_path)\n",
+ " display(ipd.Audio(wav_path, rate=24000, normalize=False))\n",
+ " print(f'mean_dur: {mean_dur}')\n",
+ " print(f'transcript(original): {text} \\ntranscript(whisper): {whisper_trans}')\n",
+ " print(f'length_ratio: {length_ratio}')\n",
+ " print(f'jaccard_score: {jaccard_score}')\n",
+ " print(f'editdistance: {edit_distance}')\n",
+ " print()\n",
+ " return False\n",
+ " \n",
+ " wav2vec_trans0 = wav2vec_asr(wav_path, 0.5)[0].lower().strip()\n",
+ " if len(wav2vec_trans0) > 0 and wav2vec_trans0.split(' ')[0] in aha_words:\n",
+ " if log:\n",
+ " print(f'transcript-0.5s(wav2vec): {wav2vec_trans0}')\n",
+ " return False\n",
+ " \n",
+ " wav2vec_trans1 = wav2vec_asr(wav_path, 2.0)[0].lower().strip()\n",
+ " if len(wav2vec_trans1) > 0 and wav2vec_trans1.split(' ')[0] in aha_words:\n",
+ " if log:\n",
+ " print(f'transcript-2s(wav2vec): {wav2vec_trans1}')\n",
+ " return False\n",
+ " \n",
+ " if len(wav2vec_trans0) == 0:\n",
+ " alignment_asr = alignment(wav_path)\n",
+ " if len(alignment_asr) > 0 and alignment_asr[0]['start'] > 0.5:\n",
+ " if log:\n",
+ " print(f'alignment: {alignment_asr}')\n",
+ " return False \n",
+ "\n",
+ " return True\n",
+ "\n",
+ " # # Second filtering pass\n",
+ " # trans1 = asr(wav_path, 2.0)[0].lower().strip()\n",
+ " # trans2 = asr(wav_path, 3.0)[0].lower().strip()\n",
+ " # if (len(trans1) > 0 and trans1.split(' ')[0] in aha_words) or (len(trans2) > 0 and trans2.split(' ')[0] in aha_words):\n",
+ " # if log:\n",
+ " # print(wav_path, text)\n",
+ " # print(trans0)\n",
+ " # print(trans1)\n",
+ " # print(trans2)\n",
+ " # display(ipd.Audio(wav_path, rate=24000, normalize=False))\n",
+ " # return False\n",
+ " # else:\n",
+ " # return True\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 38,
+ "id": "7c4a6cc8-3a08-4a97-9658-3cece6c97628",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "# # Check the precision of the filter rules (are these really samples we don't want in training?)\n",
+ "# sample_size = 200\n",
+ "# game_en_tmp = np.array(game_en_train)\n",
+ "# random_indices = np.random.choice(len(game_en_train), size=sample_size, replace=False)\n",
+ "# game_en_tmp = game_en_tmp[random_indices]\n",
+ "\n",
+ "# game_en_tmp1 = []\n",
+ "# for i in tqdm(range(len(game_en_tmp))):\n",
+ "# d = game_en_tmp[i]\n",
+ "# if is_game_en_valid(d.split('|')[0], d.split('|')[1], log=True):\n",
+ "# game_en_tmp1.append(d)\n",
+ "\n",
+ "# print(len(game_en_tmp1))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "id": "7a1c7cc8-c93a-4dfc-b00a-651254a17fe7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Randomly inspect the filtered data (check that we want all of it for training), to ensure recall\n",
+ "# tmp = random.choice(game_en_tmp1)\n",
+ "# print(tmp)\n",
+ "# display(ipd.Audio(tmp.split('|')[0], rate=24000, normalize=False))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 40,
+ "id": "2c3da5b7-3776-4597-a719-4ac51ebdd442",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# same_speaker_data = [d for d in game_en_train if d.split('|')[-1]==tmp.split('|')[-1]]\n",
+ "# for d in same_speaker_data[:3]:\n",
+ "# print(d)\n",
+ "# display(ipd.Audio(d.split('|')[0], rate=24000, normalize=False))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 377,
+ "id": "f2554191-419a-40fc-a2e8-87fffac30fcb",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 2798/2798 [21:25<00:00, 2.18it/s] s]\n",
+ " 95%|█████████▌| 3189/3347 [22:40<01:23, 1.89it/s] "
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Failed to align segment (\"风流尽在山水间\"): no characters in this segment found in model dictionary, resorting to original...\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 3347/3347 [23:41<00:00, 2.35it/s]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "19338 2136 2674\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Filter the data\n",
+ "def purify_game(data):\n",
+ " new_data = []\n",
+ " for i in tqdm(range(len(data))):\n",
+ " d = data[i]\n",
+ " if is_game_en_valid(d.split('|')[0], d.split('|')[1], log=False):\n",
+ " new_data.append(d)\n",
+ " return new_data\n",
+ "\n",
+ "game_en_train = purify_game(game_en_train)\n",
+ "game_en_val = purify_game(game_en_val)\n",
+ "game_en_test = purify_game(game_en_test)\n",
+ "\n",
+ "print(len(game_en_train), len(game_en_val), len(game_en_test))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 386,
+ "id": "59d2027f-a23c-448f-873a-6546504e62d8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1200 150 150\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 6893/6893 [00:00<00:00, 389635.42it/s]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "375 75 75\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Filter emotional TTS data, including sampling\n",
+ "emo_start_id = 35000 # keep clear of the game speaker IDs\n",
+ "\n",
+ "emo_id_num = -1\n",
+ "emotion_data = []\n",
+ "\n",
+ "# esd\n",
+ "esd_ids = []\n",
+ "esd_data = {}\n",
+ "for d in esd_trans:\n",
+ " id, txt, emo = d[0], d[1], d[2]\n",
+ " role_id = id.split('_')[0] + d[2]\n",
+ " if role_id not in esd_ids:\n",
+ " emo_id_num += 1\n",
+ " esd_ids.append(role_id)\n",
+ "\n",
+ " wav_path = f'''/workspace/TTS/data/ESD/dataset/{id.split('_')[0]}/{emo}/{id}.wav'''\n",
+ " if role_id not in esd_data:\n",
+ " esd_data[role_id] = [[wav_path, txt, emo_start_id + emo_id_num]]\n",
+ " else:\n",
+ " esd_data[role_id].append([wav_path, txt, emo_start_id + emo_id_num])\n",
+ "\n",
+ "# Sampling\n",
+ "esd_train, esd_val, esd_test = [], [], []\n",
+ "esd_max_num = 30\n",
+ "for role_id in esd_data:\n",
+ " d = esd_data[role_id]\n",
+ " np.random.shuffle(d)\n",
+ " d = d[:esd_max_num]\n",
+ " if '0011' in role_id:\n",
+ " esd_val += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " elif '0017' in role_id:\n",
+ " esd_test += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " else:\n",
+ " esd_train += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ "\n",
+ "print(len(esd_train), len(esd_val), len(esd_test))\n",
+ "\n",
+ "# emov\n",
+ "emov_data = {}\n",
+ "for i in tqdm(range(len(emov))):\n",
+ " d = emov[i]\n",
+ " role_name = d.split('/')[5]\n",
+ " emo = d.split('/')[6].split('_')[0]\n",
+ " role_id = role_name + '_' + emo\n",
+ "\n",
+ " if 'neu' in role_id or 'ang' in role_id:\n",
+ " if role_id not in emov_data:\n",
+ " emo_id_num += 1\n",
+ " emov_data[role_id] = [[d, wav2asr[d].strip(), emo_start_id + emo_id_num]]\n",
+ " else:\n",
+ " emov_data[role_id].append([d, wav2asr[d].strip(), emo_start_id + emo_id_num])\n",
+ "\n",
+ "emov_train, emov_val, emov_test = [], [], []\n",
+ "emov_max_num = 75 # few emotion categories\n",
+ "for role_id in emov_data:\n",
+ " d = emov_data[role_id]\n",
+ " np.random.shuffle(d)\n",
+ " d = d[:emov_max_num]\n",
+ " if 'jenie_neutral' in role_id:\n",
+ " emov_val += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " elif 'jenie_anger' in role_id:\n",
+ " emov_test += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " else:\n",
+ " emov_train += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ "\n",
+ "print(len(emov_train), len(emov_val), len(emov_test))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 388,
+ "id": "3dbcf791-840c-414f-a557-5d9713dd0407",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 2380/2380 [00:00<00:00, 200093.08it/s]\n",
+ "100%|██████████| 40/40 [03:53<00:00, 5.84s/it]\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "340 20 40\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 1440/1440 [00:00<00:00, 156135.71it/s]"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "192 0 0\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# jl\n",
+ "jl_data = {}\n",
+ "for i in tqdm(range(len(jl))):\n",
+ " d = jl[i]\n",
+ " role_name = d[0].split('/')[-1].split('_')[0]\n",
+ " emo = d[0].split('/')[-1].split('_')[1]\n",
+ " text = d[1].strip()\n",
+ " role_id = role_name + '_' + emo\n",
+ " # role_id = role_name\n",
+ "\n",
+ " if role_id not in jl_data:\n",
+ " emo_id_num += 1\n",
+ " jl_data[role_id] = [[d[0], text, emo_start_id + emo_id_num]]\n",
+ " else:\n",
+ " jl_data[role_id].append([d[0], text, emo_start_id + emo_id_num])\n",
+ "\n",
+ "jl_train, jl_val, jl_test = [], [], []\n",
+ "jl_max_num = 10 # many emotion categories\n",
+ "jl_ids = list(jl_data.keys())\n",
+ "for i in tqdm(range(len(jl_ids))):\n",
+ " role_id = jl_ids[i]\n",
+ " d = jl_data[role_id]\n",
+ " np.random.shuffle(d)\n",
+ " d = d[:jl_max_num]\n",
+ "\n",
+ " # Drop samples whose transcript is not aligned\n",
+ " new_d = []\n",
+ " for _d in d:\n",
+ " wav_path = _d[0]\n",
+ " text = _d[1]\n",
+ " whisper_asr = asr(wav_path).strip()\n",
+ "\n",
+ " new_d.append([_d[0], whisper_asr, _d[2]])\n",
+ " \n",
+ " # edit_distance = editdistance.eval(remove_punctuation(text.lower().strip()).split(' '), remove_punctuation(whisper_asr.lower().strip()).split(' '))\n",
+ " # if edit_distance == 0:\n",
+ " # new_d.append(_d)\n",
+ "\n",
+ " d = new_d\n",
+ " \n",
+ " if 'female1_encouraging' in role_id or 'female2_happy' in role_id:\n",
+ " jl_val += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " elif 'male1_angry' in role_id or 'male2_anxious' in role_id:\n",
+ " jl_test += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ " else:\n",
+ " jl_train += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ "\n",
+ "print(len(jl_train), len(jl_val), len(jl_test))\n",
+ "\n",
+ "# rav data\n",
+ "rav_data = {}\n",
+ "for i in tqdm(range(len(rav))):\n",
+ " d = rav[i]\n",
+ " role_name = d.split('/')[5]\n",
+ " profile = d.split('/')[-1].split('.')[0].split('-')\n",
+ " emo = profile[2]\n",
+ " emo_inten = profile[3]\n",
+ " role_id = role_name + '_' + emo\n",
+ "\n",
+ " if role_id not in rav_data:\n",
+ " emo_id_num += 1\n",
+ " rav_data[role_id] = [[d, wav2asr[d].strip(), emo_start_id + emo_id_num]]\n",
+ " else:\n",
+ " rav_data[role_id].append([d, wav2asr[d].strip(), emo_start_id + emo_id_num])\n",
+ "\n",
+ "rav_train, rav_val, rav_test = [], [], []\n",
+ "rav_max_num = 1 # few emotion categories\n",
+ "for role_id in rav_data:\n",
+ " d = rav_data[role_id]\n",
+ " np.random.shuffle(d)\n",
+ " d = d[:rav_max_num]\n",
+ " \n",
+ " rav_train += [f'{_d[0]}|{_d[1]}|{_d[2]}' for _d in d]\n",
+ "\n",
+ "print(len(rav_train), len(rav_val), len(rav_test))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 389,
+ "id": "c37279a3-d2a6-4695-99eb-f52c202cb806",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "340 20 40\n",
+ "192 0 0\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(len(jl_train), len(jl_val), len(jl_test))\n",
+ "print(len(rav_train), len(rav_val), len(rav_test))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4bc3b899-aff4-4d73-b3eb-f79de84ac568",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Randomly inspect the data\n",
+ "# # esd\n",
+ "# sample_data = esd_train + esd_val + esd_test\n",
+ "# sample_data = [d for d in sample_data if 'Surprise' in d]\n",
+ "# tmp = random.choice(sample_data)\n",
+ "# print(tmp)\n",
+ "# display(ipd.Audio(tmp.split('|')[0], rate=24000, normalize=False))\n",
+ "\n",
+ "# # emov\n",
+ "# sample_data = [d for d in emov if 'neu' in d]\n",
+ "# # sample_data = emov\n",
+ "# tmp = random.choice(sample_data)\n",
+ "# print(tmp, wav2asr[tmp])\n",
+ "# wave, sr = sf.read(tmp)\n",
+ "# display(ipd.Audio(wave, rate=sr, normalize=False))\n",
+ "\n",
+ "# # jl\n",
+ "# sample_data = [d for d in jl if '' in d[0]]\n",
+ "# tmp = random.choice(sample_data)\n",
+ "# print(tmp)\n",
+ "# display(ipd.Audio(tmp[0], rate=sr, normalize=False))\n",
+ "\n",
+ "# # rav\n",
+ "# sample_data = [d for d in rav]\n",
+ "# # sample_data = emov\n",
+ "# tmp = random.choice(sample_data)\n",
+ "# print(tmp, wav2asr[tmp])\n",
+ "# display(ipd.Audio(tmp, rate=sr, normalize=False))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 391,
+ "id": "95e602d2-2144-4795-b39a-feec48dd26f8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2107 245 265\n"
+ ]
+ }
+ ],
+ "source": [
+ "emo_train = esd_train + emov_train + jl_train + rav_train\n",
+ "emo_val = esd_val + emov_val + jl_val + rav_val\n",
+ "emo_test = esd_test + emov_test + jl_test + rav_test\n",
+ "\n",
+ "print(len(emo_train), len(emo_val), len(emo_test))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 459,
+ "id": "65c347f5-bfc8-42a3-8dd7-f93c26b21916",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "def process_text(text):\n",
+ " new_text = re.sub(r'^\\.\\.\\.', '', text)\n",
+ " return new_text"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 470,
+ "id": "5cbf0254-082c-4fcc-b173-005df662c231",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "libritts_val_data hours 3.830820000000008\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 2380/2380 [00:01<00:00, 1391.90it/s]\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "game_train hours 36.31573596808863\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 19338/19338 [00:15<00:00, 1235.76it/s]\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "game_val hours 3.908920038225938\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 2136/2136 [00:01<00:00, 1233.00it/s]\n",
+ "100%|██████████| 2107/2107 [00:01<00:00, 1811.55it/s]\n",
+ "100%|██████████| 245/245 [00:00<00:00, 1533.77it/s]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Run the phonemizer and mix the data\n",
+ "\n",
+ "import random\n",
+ "import numpy as np\n",
+ "\n",
+ "# Fix the seed so the data splits are reproducible and the data can be used more fully\n",
+ "random.seed(42)\n",
+ "np.random.seed(42)\n",
+ "\n",
+ "train, val = [], []\n",
+ "\n",
+ "# np.random.shuffle(libritts_train_data)\n",
+ "# libritts_train_data = libritts_train_data[:10000]\n",
+ "# hours = sum([get_wav_duration(x.split('|')[0]) for x in libritts_train_data]) / 3600\n",
+ "# print('libritts_train_data hours', hours)\n",
+ "\n",
+ "# for i in tqdm(range(len(libritts_train_data))):\n",
+ "# line = libritts_train_data[i]\n",
+ "# line = line.split('|')\n",
+ "# wav, text, sid = line[0], line[1], line[2]\n",
+ "# ps = preprocess(text, 'en-us')\n",
+ "# train.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "np.random.shuffle(libritts_val_data)\n",
+ "sample_libritts_val_data = libritts_val_data[:2380]\n",
+ "hours = sum([get_wav_duration(x.split('|')[0]) for x in sample_libritts_val_data]) / 3600\n",
+ "print('libritts_val_data hours', hours)\n",
+ "for i in tqdm(range(len(sample_libritts_val_data))):\n",
+ " line = sample_libritts_val_data[i]\n",
+ " line = line.split('|')\n",
+ " wav, text, sid = line[0], line[1], line[2]\n",
+ " ps = preprocess(process_text(text.strip()), 'en-us')\n",
+ " val.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "hours = sum([get_wav_duration(x.split('|')[0]) for x in game_en_train]) / 3600\n",
+ "print('game_train hours', hours)\n",
+ "\n",
+ "for i in tqdm(range(len(game_en_train))):\n",
+ " line = game_en_train[i]\n",
+ " line = line.split('|')\n",
+ " wav, text, sid = line[0], line[1], line[2]\n",
+ " ps = preprocess(process_text(text.strip()), 'en-us')\n",
+ " train.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "hours = sum([get_wav_duration(x.split('|')[0]) for x in game_en_val]) / 3600\n",
+ "print('game_val hours', hours)\n",
+ "\n",
+ "for i in tqdm(range(len(game_en_val))):\n",
+ " line = game_en_val[i]\n",
+ " line = line.split('|')\n",
+ " wav, text, sid = line[0], line[1], line[2]\n",
+ " ps = preprocess(process_text(text.strip()), 'en-us')\n",
+ " val.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "for i in tqdm(range(len(emo_train))):\n",
+ " line = emo_train[i]\n",
+ " line = line.split('|')\n",
+ " wav, text, sid = line[0], line[1], line[2]\n",
+ " second = get_wav_duration(wav)\n",
+ " if second > 1:\n",
+ " ps = preprocess(process_text(text.strip()), 'en-us')\n",
+ " train.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "for i in tqdm(range(len(emo_val))):\n",
+ " line = emo_val[i]\n",
+ " line = line.split('|')\n",
+ " wav, text, sid = line[0], line[1], line[2]\n",
+ " second = get_wav_duration(wav)\n",
+ " if second > 1:\n",
+ " ps = preprocess(process_text(text.strip()), 'en-us')\n",
+ " val.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "# np.random.shuffle(cml_train_data)\n",
+ "# cml_train_data = cml_train_data[:25000]\n",
+ "# hours = sum([get_wav_duration(x.split('|')[0]) for x in cml_train_data]) / 3600\n",
+ "# print('cml_train_data hours', hours)\n",
+ "# for i in tqdm(range(len(cml_train_data))):\n",
+ "# line = cml_train_data[i]\n",
+ "# line = line.split('|')\n",
+ "# wav, text, sid = line[0], line[1], line[2]\n",
+ "# ps = preprocess(text, 'es')\n",
+ "# train.append(f'{wav}|{ps}|{sid}')\n",
+ "\n",
+ "# np.random.shuffle(cml_val_data)\n",
+ "# cml_val_data = cml_val_data[:500]\n",
+ "# hours = sum([get_wav_duration(x.split('|')[0]) for x in cml_val_data]) / 3600\n",
+ "# print('cml_val_data hours', hours)\n",
+ "# for i in tqdm(range(len(cml_val_data))):\n",
+ "# line = cml_val_data[i]\n",
+ "# line = line.split('|')\n",
+ "# wav, text, sid = line[0], line[1], line[2]\n",
+ "# ps = preprocess(text, 'es')\n",
+ "# val.append(f'{wav}|{ps}|{sid}')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 471,
+ "id": "fc762cfc-84c0-4280-b7d7-586c7bc293ca",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "21445\n",
+ "38.01937940192716\n"
+ ]
+ }
+ ],
+ "source": [
+ "# data statistics\n",
+ "hours = sum([get_wav_duration(x.split('|')[0]) for x in train])\n",
+ "print(len(train))\n",
+ "print(hours/3600)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 472,
+ "id": "8d4f7db0-b01a-4f3e-9949-73d7d05e12d4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "4761\n",
+ "7.950606687531487\n"
+ ]
+ }
+ ],
+ "source": [
+ "# data statistics\n",
+ "hours = sum([get_wav_duration(x.split('|')[0]) for x in val])\n",
+ "print(len(val))\n",
+ "print(hours/3600)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 473,
+ "id": "1e862965-6b93-40ad-bac6-1340325811e8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "21445\n",
+ "4761\n",
+ "21440\n",
+ "4752\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Make sure the length is divisible by the batch size\n",
+ "# Remove overly long samples\n",
+ "\n",
+ "from meldataset import TextCleaner\n",
+ "\n",
+ "text_cleaner = TextCleaner() # match training\n",
+ "\n",
+ "bs = 16\n",
+ "maxlen = 512-2 # BERT max length is 512, minus the leading and trailing pad tokens\n",
+ "\n",
+ "print(len(train))\n",
+ "print(len(val))\n",
+ "\n",
+ "train = [d for d in train if len(text_cleaner(d.split('|')[1])) < maxlen]\n",
+ "val = [d for d in val if len(text_cleaner(d.split('|')[1])) < maxlen]\n",
+ "\n",
+ "train = train[:int(len(train)/bs) * bs]\n",
+ "val = val[:int(len(val)/bs) * bs]\n",
+ "\n",
+ "print(len(train))\n",
+ "print(len(val))\n",
+ "\n",
+ "np.random.shuffle(train)\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/train_list_en_21k_0401.txt', 'w') as f:\n",
+ " for d in train:\n",
+ " f.write(d+'\\n')\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/val_list_en_21k_0401.txt', 'w') as f:\n",
+ " for d in val:\n",
+ " f.write(d+'\\n')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 474,
+ "id": "14de8383-89fd-4a44-a8e4-377b8bb77180",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# !pip install langdetect"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 475,
+ "id": "f96880cd-ce73-4e69-bc55-c54dd735a260",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# # Add OOD data\n",
+ "# import json, re, random\n",
+ "# from tqdm import tqdm\n",
+ "# from langdetect import detect\n",
+ "# from nltk.tokenize import sent_tokenize\n",
+ "\n",
+ "# ood_data = []\n",
+ "\n",
+ "# waifu_data = []\n",
+ "# with open('/workspace/TTS/data/sft_test.jsonl', 'r') as f:\n",
+ "# for line in f:\n",
+ "# line = json.loads(line.strip())\n",
+ "# if line['lang'] == 'es':\n",
+ "# waifu_data.append(line)\n",
+ "\n",
+ "# with open('/workspace/TTS/data/sft_val.jsonl', 'r') as f:\n",
+ "# for line in f:\n",
+ "# line = json.loads(line.strip())\n",
+ "# if line['lang'] == 'es':\n",
+ "# waifu_data.append(line)\n",
+ "\n",
+ "# def get_ood_data_waifu(text):\n",
+ "\n",
+ "# sentences = \n",
+ " \n",
+ "# sentences = sent_tokenize(text) # split into sentences; still too long\n",
+ "# print('sen1', sentences)\n",
+ "\n",
+ "# sentences = [preprocess(sen.strip(), 'es') for sen in sentences]\n",
+ "\n",
+ "# if len(sentences) > 0:\n",
+ "# sentence = random.choice(sentences)\n",
+ "# print('sen4', sentence)\n",
+ "# return sentence\n",
+ "# else:\n",
+ "# return None\n",
+ "\n",
+ "# for i in tqdm(range(len(waifu_data[:10]))):\n",
+ "# d = waifu_data[i]\n",
+ "# if len(d['context']) < 1:\n",
+ "# continue\n",
+ " \n",
+ "# text = d['context'][0]['translate_content']\n",
+ "\n",
+ "# # print(text)\n",
+ " \n",
+ "# try:\n",
+ "# if isinstance(text, str) and len(text) > 10 and detect(text[:100]) == 'es':\n",
+ "# # drop non-Spanish data\n",
+ "# print('sen', text)\n",
+ "# sen = get_ood_data_waifu(text)\n",
+ "# if sen is not None:\n",
+ "# ood_data.append(sen)\n",
+ "# except:\n",
+ "# pass\n",
+ "\n",
+ "# # if len(d['context']) < 2:\n",
+ "# # continue\n",
+ " \n",
+ "# # text = d['context'][1]['translate_content']\n",
+ "\n",
+ "# # try:\n",
+ "# # if isinstance(text, str) and len(text) > 10 and detect(text[:100]) == 'es':\n",
+ "# # # drop non-Spanish data\n",
+ "# # sen = get_ood_data_waifu(text)\n",
+ "# # if sen is not None:\n",
+ "# # ood_data.append(sen)\n",
+ "# # except:\n",
+ "# # pass\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 476,
+ "id": "6227d8c1-40c6-4c5e-98f1-1cd135074adc",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "2674"
+ ]
+ },
+ "execution_count": 476,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "len(game_en_test)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 477,
+ "id": "0f76809d-120c-4083-8f36-d599badf678c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "265"
+ ]
+ },
+ "execution_count": 477,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "len(emo_test)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 480,
+ "id": "f262faf8-d7eb-4cc3-bd08-64de847f3ca0",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "5938\n"
+ ]
+ }
+ ],
+ "source": [
+ "from meldataset import TextCleaner\n",
+ "\n",
+ "text_cleaner = TextCleaner() # match training\n",
+ "\n",
+ "# es_ood_data = []\n",
+ "# with open('/workspace/TTS/tts/StyleTTS2/Data/OOD_texts.txt', 'r') as f:\n",
+ "# # waifu data\n",
+ "# for line in f:\n",
+ "# line = line.strip().split('|')\n",
+ "# text, sid = line[0], line[1]\n",
+ "# es_ood_data.append(preprocess(text, 'es'))\n",
+ "\n",
+ "# # + other Spanish data\n",
+ "# for d in cml_test_data[:1500]:\n",
+ "# es_ood_data.append(preprocess(d.split('|')[1], 'es'))\n",
+ "\n",
+ "# + English; include the wav_path\n",
+ "en_ood_data = []\n",
+ "for d in libritts_test_data[:3000]:\n",
+ " en_ood_data.append([d.split('|')[0], preprocess(d.split('|')[1], 'en-us')])\n",
+ "\n",
+ "for d in game_en_test:\n",
+ " # en_ood_data.append([d.split('|')[0], d.split('|')[1]])\n",
+ " en_ood_data.append([d.split('|')[0], preprocess(process_text(d.split('|')[1].strip()), 'en-us')])\n",
+ "\n",
+ "for d in emo_test:\n",
+ " en_ood_data.append([d.split('|')[0], preprocess(process_text(d.split('|')[1].strip()), 'en-us')])\n",
+ " \n",
+ "# es_ood_data = list(set(es_ood_data))\n",
+ "# en_ood_data = list(set(en_ood_data))\n",
+ "\n",
+ "# Drop entries that are over-long after text cleaning\n",
+ "# es_ood_data = [d for d in es_ood_data if len(text_cleaner(d)) < 510]\n",
+ "en_ood_data = [d for d in en_ood_data if len(text_cleaner(d[1])) < 510]\n",
+ "\n",
+ "# print(len(es_ood_data))\n",
+ "print(len(en_ood_data))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 481,
+ "id": "3be67931-2506-44f0-b394-56cb6ecb722f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "5938\n"
+ ]
+ }
+ ],
+ "source": [
+ "# es_ood_data = [f'{d}|0|es' for d in es_ood_data]\n",
+ "en_ood_data = [f'{d[0]}|{d[1]}|en' for d in en_ood_data]\n",
+ "ood_data = en_ood_data\n",
+ "# ood_data = es_ood_data + en_ood_data\n",
+ "np.random.shuffle(ood_data)\n",
+ "print(len(ood_data))\n",
+ "\n",
+ "with open('/workspace/TTS/tts/StyleTTS2/Data/OOD_texts_en_6k_0401.txt', 'w') as f:\n",
+ " for d in ood_data:\n",
+ " f.write(d+'\\n')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2dda822d-24f7-4bb4-882f-7b9711c8e983",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "56ed7106-07d2-45aa-9fc1-1286a3a956bd",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "51992195-2507-4b56-8dd0-6a7876decf0e",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "56e4670b-598b-4c08-8137-5475f87e8aa2",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }