Your Name committed on
Commit a3c7d09 · 0 Parent(s)
.gitattributes ADDED
@@ -0,0 +1,2 @@
1
+ bert/chinese-roberta-wwm-ext-large/** filter=lfs diff=lfs merge=lfs -text
2
+ Data/** filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
1
+ __pycache__
Data/configs/config.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:896910343fc67152ad80c30a8a52efa41628c8e53de6a54cbe7790b79016adf5
3
+ size 1811
Data/models/compressed.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a4ae8ec32d2c6775e8c3f191425858006b110f6e7574dcb504745f43e4bfcd56
3
+ size 200688274
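Note: both Data/configs/config.json and Data/models/compressed.pth are stored as Git LFS pointer files, matching the `Data/**` rule in .gitattributes: three-line stubs recording the LFS spec version, the SHA-256 of the real blob, and its size in bytes (about 200 MB for the model here). A minimal sketch of reading such a pointer; the parse_lfs_pointer helper is illustrative, not part of this repository:

    # Minimal sketch: parse a Git LFS pointer file into a dict.
    def parse_lfs_pointer(path):
        fields = {}
        with open(path, "r", encoding="utf-8") as f:
            for line in f:
                key, _, value = line.strip().partition(" ")
                fields[key] = value
        return fields

    # e.g. {'version': 'https://git-lfs.github.com/spec/v1',
    #       'oid': 'sha256:a4ae...', 'size': '200688274'}
    print(parse_lfs_pointer("Data/models/compressed.pth"))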
LICENSE ADDED
@@ -0,0 +1,661 @@
1
+ GNU AFFERO GENERAL PUBLIC LICENSE
2
+ Version 3, 19 November 2007
3
+
4
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5
+ Everyone is permitted to copy and distribute verbatim copies
6
+ of this license document, but changing it is not allowed.
7
+
8
+ Preamble
9
+
10
+ The GNU Affero General Public License is a free, copyleft license for
11
+ software and other kinds of works, specifically designed to ensure
12
+ cooperation with the community in the case of network server software.
13
+
14
+ The licenses for most software and other practical works are designed
15
+ to take away your freedom to share and change the works. By contrast,
16
+ our General Public Licenses are intended to guarantee your freedom to
17
+ share and change all versions of a program--to make sure it remains free
18
+ software for all its users.
19
+
20
+ When we speak of free software, we are referring to freedom, not
21
+ price. Our General Public Licenses are designed to make sure that you
22
+ have the freedom to distribute copies of free software (and charge for
23
+ them if you wish), that you receive source code or can get it if you
24
+ want it, that you can change the software or use pieces of it in new
25
+ free programs, and that you know you can do these things.
26
+
27
+ Developers that use our General Public Licenses protect your rights
28
+ with two steps: (1) assert copyright on the software, and (2) offer
29
+ you this License which gives you legal permission to copy, distribute
30
+ and/or modify the software.
31
+
32
+ A secondary benefit of defending all users' freedom is that
33
+ improvements made in alternate versions of the program, if they
34
+ receive widespread use, become available for other developers to
35
+ incorporate. Many developers of free software are heartened and
36
+ encouraged by the resulting cooperation. However, in the case of
37
+ software used on network servers, this result may fail to come about.
38
+ The GNU General Public License permits making a modified version and
39
+ letting the public access it on a server without ever releasing its
40
+ source code to the public.
41
+
42
+ The GNU Affero General Public License is designed specifically to
43
+ ensure that, in such cases, the modified source code becomes available
44
+ to the community. It requires the operator of a network server to
45
+ provide the source code of the modified version running there to the
46
+ users of that server. Therefore, public use of a modified version, on
47
+ a publicly accessible server, gives the public access to the source
48
+ code of the modified version.
49
+
50
+ An older license, called the Affero General Public License and
51
+ published by Affero, was designed to accomplish similar goals. This is
52
+ a different license, not a version of the Affero GPL, but Affero has
53
+ released a new version of the Affero GPL which permits relicensing under
54
+ this license.
55
+
56
+ The precise terms and conditions for copying, distribution and
57
+ modification follow.
58
+
59
+ TERMS AND CONDITIONS
60
+
61
+ 0. Definitions.
62
+
63
+ "This License" refers to version 3 of the GNU Affero General Public License.
64
+
65
+ "Copyright" also means copyright-like laws that apply to other kinds of
66
+ works, such as semiconductor masks.
67
+
68
+ "The Program" refers to any copyrightable work licensed under this
69
+ License. Each licensee is addressed as "you". "Licensees" and
70
+ "recipients" may be individuals or organizations.
71
+
72
+ To "modify" a work means to copy from or adapt all or part of the work
73
+ in a fashion requiring copyright permission, other than the making of an
74
+ exact copy. The resulting work is called a "modified version" of the
75
+ earlier work or a work "based on" the earlier work.
76
+
77
+ A "covered work" means either the unmodified Program or a work based
78
+ on the Program.
79
+
80
+ To "propagate" a work means to do anything with it that, without
81
+ permission, would make you directly or secondarily liable for
82
+ infringement under applicable copyright law, except executing it on a
83
+ computer or modifying a private copy. Propagation includes copying,
84
+ distribution (with or without modification), making available to the
85
+ public, and in some countries other activities as well.
86
+
87
+ To "convey" a work means any kind of propagation that enables other
88
+ parties to make or receive copies. Mere interaction with a user through
89
+ a computer network, with no transfer of a copy, is not conveying.
90
+
91
+ An interactive user interface displays "Appropriate Legal Notices"
92
+ to the extent that it includes a convenient and prominently visible
93
+ feature that (1) displays an appropriate copyright notice, and (2)
94
+ tells the user that there is no warranty for the work (except to the
95
+ extent that warranties are provided), that licensees may convey the
96
+ work under this License, and how to view a copy of this License. If
97
+ the interface presents a list of user commands or options, such as a
98
+ menu, a prominent item in the list meets this criterion.
99
+
100
+ 1. Source Code.
101
+
102
+ The "source code" for a work means the preferred form of the work
103
+ for making modifications to it. "Object code" means any non-source
104
+ form of a work.
105
+
106
+ A "Standard Interface" means an interface that either is an official
107
+ standard defined by a recognized standards body, or, in the case of
108
+ interfaces specified for a particular programming language, one that
109
+ is widely used among developers working in that language.
110
+
111
+ The "System Libraries" of an executable work include anything, other
112
+ than the work as a whole, that (a) is included in the normal form of
113
+ packaging a Major Component, but which is not part of that Major
114
+ Component, and (b) serves only to enable use of the work with that
115
+ Major Component, or to implement a Standard Interface for which an
116
+ implementation is available to the public in source code form. A
117
+ "Major Component", in this context, means a major essential component
118
+ (kernel, window system, and so on) of the specific operating system
119
+ (if any) on which the executable work runs, or a compiler used to
120
+ produce the work, or an object code interpreter used to run it.
121
+
122
+ The "Corresponding Source" for a work in object code form means all
123
+ the source code needed to generate, install, and (for an executable
124
+ work) run the object code and to modify the work, including scripts to
125
+ control those activities. However, it does not include the work's
126
+ System Libraries, or general-purpose tools or generally available free
127
+ programs which are used unmodified in performing those activities but
128
+ which are not part of the work. For example, Corresponding Source
129
+ includes interface definition files associated with source files for
130
+ the work, and the source code for shared libraries and dynamically
131
+ linked subprograms that the work is specifically designed to require,
132
+ such as by intimate data communication or control flow between those
133
+ subprograms and other parts of the work.
134
+
135
+ The Corresponding Source need not include anything that users
136
+ can regenerate automatically from other parts of the Corresponding
137
+ Source.
138
+
139
+ The Corresponding Source for a work in source code form is that
140
+ same work.
141
+
142
+ 2. Basic Permissions.
143
+
144
+ All rights granted under this License are granted for the term of
145
+ copyright on the Program, and are irrevocable provided the stated
146
+ conditions are met. This License explicitly affirms your unlimited
147
+ permission to run the unmodified Program. The output from running a
148
+ covered work is covered by this License only if the output, given its
149
+ content, constitutes a covered work. This License acknowledges your
150
+ rights of fair use or other equivalent, as provided by copyright law.
151
+
152
+ You may make, run and propagate covered works that you do not
153
+ convey, without conditions so long as your license otherwise remains
154
+ in force. You may convey covered works to others for the sole purpose
155
+ of having them make modifications exclusively for you, or provide you
156
+ with facilities for running those works, provided that you comply with
157
+ the terms of this License in conveying all material for which you do
158
+ not control copyright. Those thus making or running the covered works
159
+ for you must do so exclusively on your behalf, under your direction
160
+ and control, on terms that prohibit them from making any copies of
161
+ your copyrighted material outside their relationship with you.
162
+
163
+ Conveying under any other circumstances is permitted solely under
164
+ the conditions stated below. Sublicensing is not allowed; section 10
165
+ makes it unnecessary.
166
+
167
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
168
+
169
+ No covered work shall be deemed part of an effective technological
170
+ measure under any applicable law fulfilling obligations under article
171
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
172
+ similar laws prohibiting or restricting circumvention of such
173
+ measures.
174
+
175
+ When you convey a covered work, you waive any legal power to forbid
176
+ circumvention of technological measures to the extent such circumvention
177
+ is effected by exercising rights under this License with respect to
178
+ the covered work, and you disclaim any intention to limit operation or
179
+ modification of the work as a means of enforcing, against the work's
180
+ users, your or third parties' legal rights to forbid circumvention of
181
+ technological measures.
182
+
183
+ 4. Conveying Verbatim Copies.
184
+
185
+ You may convey verbatim copies of the Program's source code as you
186
+ receive it, in any medium, provided that you conspicuously and
187
+ appropriately publish on each copy an appropriate copyright notice;
188
+ keep intact all notices stating that this License and any
189
+ non-permissive terms added in accord with section 7 apply to the code;
190
+ keep intact all notices of the absence of any warranty; and give all
191
+ recipients a copy of this License along with the Program.
192
+
193
+ You may charge any price or no price for each copy that you convey,
194
+ and you may offer support or warranty protection for a fee.
195
+
196
+ 5. Conveying Modified Source Versions.
197
+
198
+ You may convey a work based on the Program, or the modifications to
199
+ produce it from the Program, in the form of source code under the
200
+ terms of section 4, provided that you also meet all of these conditions:
201
+
202
+ a) The work must carry prominent notices stating that you modified
203
+ it, and giving a relevant date.
204
+
205
+ b) The work must carry prominent notices stating that it is
206
+ released under this License and any conditions added under section
207
+ 7. This requirement modifies the requirement in section 4 to
208
+ "keep intact all notices".
209
+
210
+ c) You must license the entire work, as a whole, under this
211
+ License to anyone who comes into possession of a copy. This
212
+ License will therefore apply, along with any applicable section 7
213
+ additional terms, to the whole of the work, and all its parts,
214
+ regardless of how they are packaged. This License gives no
215
+ permission to license the work in any other way, but it does not
216
+ invalidate such permission if you have separately received it.
217
+
218
+ d) If the work has interactive user interfaces, each must display
219
+ Appropriate Legal Notices; however, if the Program has interactive
220
+ interfaces that do not display Appropriate Legal Notices, your
221
+ work need not make them do so.
222
+
223
+ A compilation of a covered work with other separate and independent
224
+ works, which are not by their nature extensions of the covered work,
225
+ and which are not combined with it such as to form a larger program,
226
+ in or on a volume of a storage or distribution medium, is called an
227
+ "aggregate" if the compilation and its resulting copyright are not
228
+ used to limit the access or legal rights of the compilation's users
229
+ beyond what the individual works permit. Inclusion of a covered work
230
+ in an aggregate does not cause this License to apply to the other
231
+ parts of the aggregate.
232
+
233
+ 6. Conveying Non-Source Forms.
234
+
235
+ You may convey a covered work in object code form under the terms
236
+ of sections 4 and 5, provided that you also convey the
237
+ machine-readable Corresponding Source under the terms of this License,
238
+ in one of these ways:
239
+
240
+ a) Convey the object code in, or embodied in, a physical product
241
+ (including a physical distribution medium), accompanied by the
242
+ Corresponding Source fixed on a durable physical medium
243
+ customarily used for software interchange.
244
+
245
+ b) Convey the object code in, or embodied in, a physical product
246
+ (including a physical distribution medium), accompanied by a
247
+ written offer, valid for at least three years and valid for as
248
+ long as you offer spare parts or customer support for that product
249
+ model, to give anyone who possesses the object code either (1) a
250
+ copy of the Corresponding Source for all the software in the
251
+ product that is covered by this License, on a durable physical
252
+ medium customarily used for software interchange, for a price no
253
+ more than your reasonable cost of physically performing this
254
+ conveying of source, or (2) access to copy the
255
+ Corresponding Source from a network server at no charge.
256
+
257
+ c) Convey individual copies of the object code with a copy of the
258
+ written offer to provide the Corresponding Source. This
259
+ alternative is allowed only occasionally and noncommercially, and
260
+ only if you received the object code with such an offer, in accord
261
+ with subsection 6b.
262
+
263
+ d) Convey the object code by offering access from a designated
264
+ place (gratis or for a charge), and offer equivalent access to the
265
+ Corresponding Source in the same way through the same place at no
266
+ further charge. You need not require recipients to copy the
267
+ Corresponding Source along with the object code. If the place to
268
+ copy the object code is a network server, the Corresponding Source
269
+ may be on a different server (operated by you or a third party)
270
+ that supports equivalent copying facilities, provided you maintain
271
+ clear directions next to the object code saying where to find the
272
+ Corresponding Source. Regardless of what server hosts the
273
+ Corresponding Source, you remain obligated to ensure that it is
274
+ available for as long as needed to satisfy these requirements.
275
+
276
+ e) Convey the object code using peer-to-peer transmission, provided
277
+ you inform other peers where the object code and Corresponding
278
+ Source of the work are being offered to the general public at no
279
+ charge under subsection 6d.
280
+
281
+ A separable portion of the object code, whose source code is excluded
282
+ from the Corresponding Source as a System Library, need not be
283
+ included in conveying the object code work.
284
+
285
+ A "User Product" is either (1) a "consumer product", which means any
286
+ tangible personal property which is normally used for personal, family,
287
+ or household purposes, or (2) anything designed or sold for incorporation
288
+ into a dwelling. In determining whether a product is a consumer product,
289
+ doubtful cases shall be resolved in favor of coverage. For a particular
290
+ product received by a particular user, "normally used" refers to a
291
+ typical or common use of that class of product, regardless of the status
292
+ of the particular user or of the way in which the particular user
293
+ actually uses, or expects or is expected to use, the product. A product
294
+ is a consumer product regardless of whether the product has substantial
295
+ commercial, industrial or non-consumer uses, unless such uses represent
296
+ the only significant mode of use of the product.
297
+
298
+ "Installation Information" for a User Product means any methods,
299
+ procedures, authorization keys, or other information required to install
300
+ and execute modified versions of a covered work in that User Product from
301
+ a modified version of its Corresponding Source. The information must
302
+ suffice to ensure that the continued functioning of the modified object
303
+ code is in no case prevented or interfered with solely because
304
+ modification has been made.
305
+
306
+ If you convey an object code work under this section in, or with, or
307
+ specifically for use in, a User Product, and the conveying occurs as
308
+ part of a transaction in which the right of possession and use of the
309
+ User Product is transferred to the recipient in perpetuity or for a
310
+ fixed term (regardless of how the transaction is characterized), the
311
+ Corresponding Source conveyed under this section must be accompanied
312
+ by the Installation Information. But this requirement does not apply
313
+ if neither you nor any third party retains the ability to install
314
+ modified object code on the User Product (for example, the work has
315
+ been installed in ROM).
316
+
317
+ The requirement to provide Installation Information does not include a
318
+ requirement to continue to provide support service, warranty, or updates
319
+ for a work that has been modified or installed by the recipient, or for
320
+ the User Product in which it has been modified or installed. Access to a
321
+ network may be denied when the modification itself materially and
322
+ adversely affects the operation of the network or violates the rules and
323
+ protocols for communication across the network.
324
+
325
+ Corresponding Source conveyed, and Installation Information provided,
326
+ in accord with this section must be in a format that is publicly
327
+ documented (and with an implementation available to the public in
328
+ source code form), and must require no special password or key for
329
+ unpacking, reading or copying.
330
+
331
+ 7. Additional Terms.
332
+
333
+ "Additional permissions" are terms that supplement the terms of this
334
+ License by making exceptions from one or more of its conditions.
335
+ Additional permissions that are applicable to the entire Program shall
336
+ be treated as though they were included in this License, to the extent
337
+ that they are valid under applicable law. If additional permissions
338
+ apply only to part of the Program, that part may be used separately
339
+ under those permissions, but the entire Program remains governed by
340
+ this License without regard to the additional permissions.
341
+
342
+ When you convey a copy of a covered work, you may at your option
343
+ remove any additional permissions from that copy, or from any part of
344
+ it. (Additional permissions may be written to require their own
345
+ removal in certain cases when you modify the work.) You may place
346
+ additional permissions on material, added by you to a covered work,
347
+ for which you have or can give appropriate copyright permission.
348
+
349
+ Notwithstanding any other provision of this License, for material you
350
+ add to a covered work, you may (if authorized by the copyright holders of
351
+ that material) supplement the terms of this License with terms:
352
+
353
+ a) Disclaiming warranty or limiting liability differently from the
354
+ terms of sections 15 and 16 of this License; or
355
+
356
+ b) Requiring preservation of specified reasonable legal notices or
357
+ author attributions in that material or in the Appropriate Legal
358
+ Notices displayed by works containing it; or
359
+
360
+ c) Prohibiting misrepresentation of the origin of that material, or
361
+ requiring that modified versions of such material be marked in
362
+ reasonable ways as different from the original version; or
363
+
364
+ d) Limiting the use for publicity purposes of names of licensors or
365
+ authors of the material; or
366
+
367
+ e) Declining to grant rights under trademark law for use of some
368
+ trade names, trademarks, or service marks; or
369
+
370
+ f) Requiring indemnification of licensors and authors of that
371
+ material by anyone who conveys the material (or modified versions of
372
+ it) with contractual assumptions of liability to the recipient, for
373
+ any liability that these contractual assumptions directly impose on
374
+ those licensors and authors.
375
+
376
+ All other non-permissive additional terms are considered "further
377
+ restrictions" within the meaning of section 10. If the Program as you
378
+ received it, or any part of it, contains a notice stating that it is
379
+ governed by this License along with a term that is a further
380
+ restriction, you may remove that term. If a license document contains
381
+ a further restriction but permits relicensing or conveying under this
382
+ License, you may add to a covered work material governed by the terms
383
+ of that license document, provided that the further restriction does
384
+ not survive such relicensing or conveying.
385
+
386
+ If you add terms to a covered work in accord with this section, you
387
+ must place, in the relevant source files, a statement of the
388
+ additional terms that apply to those files, or a notice indicating
389
+ where to find the applicable terms.
390
+
391
+ Additional terms, permissive or non-permissive, may be stated in the
392
+ form of a separately written license, or stated as exceptions;
393
+ the above requirements apply either way.
394
+
395
+ 8. Termination.
396
+
397
+ You may not propagate or modify a covered work except as expressly
398
+ provided under this License. Any attempt otherwise to propagate or
399
+ modify it is void, and will automatically terminate your rights under
400
+ this License (including any patent licenses granted under the third
401
+ paragraph of section 11).
402
+
403
+ However, if you cease all violation of this License, then your
404
+ license from a particular copyright holder is reinstated (a)
405
+ provisionally, unless and until the copyright holder explicitly and
406
+ finally terminates your license, and (b) permanently, if the copyright
407
+ holder fails to notify you of the violation by some reasonable means
408
+ prior to 60 days after the cessation.
409
+
410
+ Moreover, your license from a particular copyright holder is
411
+ reinstated permanently if the copyright holder notifies you of the
412
+ violation by some reasonable means, this is the first time you have
413
+ received notice of violation of this License (for any work) from that
414
+ copyright holder, and you cure the violation prior to 30 days after
415
+ your receipt of the notice.
416
+
417
+ Termination of your rights under this section does not terminate the
418
+ licenses of parties who have received copies or rights from you under
419
+ this License. If your rights have been terminated and not permanently
420
+ reinstated, you do not qualify to receive new licenses for the same
421
+ material under section 10.
422
+
423
+ 9. Acceptance Not Required for Having Copies.
424
+
425
+ You are not required to accept this License in order to receive or
426
+ run a copy of the Program. Ancillary propagation of a covered work
427
+ occurring solely as a consequence of using peer-to-peer transmission
428
+ to receive a copy likewise does not require acceptance. However,
429
+ nothing other than this License grants you permission to propagate or
430
+ modify any covered work. These actions infringe copyright if you do
431
+ not accept this License. Therefore, by modifying or propagating a
432
+ covered work, you indicate your acceptance of this License to do so.
433
+
434
+ 10. Automatic Licensing of Downstream Recipients.
435
+
436
+ Each time you convey a covered work, the recipient automatically
437
+ receives a license from the original licensors, to run, modify and
438
+ propagate that work, subject to this License. You are not responsible
439
+ for enforcing compliance by third parties with this License.
440
+
441
+ An "entity transaction" is a transaction transferring control of an
442
+ organization, or substantially all assets of one, or subdividing an
443
+ organization, or merging organizations. If propagation of a covered
444
+ work results from an entity transaction, each party to that
445
+ transaction who receives a copy of the work also receives whatever
446
+ licenses to the work the party's predecessor in interest had or could
447
+ give under the previous paragraph, plus a right to possession of the
448
+ Corresponding Source of the work from the predecessor in interest, if
449
+ the predecessor has it or can get it with reasonable efforts.
450
+
451
+ You may not impose any further restrictions on the exercise of the
452
+ rights granted or affirmed under this License. For example, you may
453
+ not impose a license fee, royalty, or other charge for exercise of
454
+ rights granted under this License, and you may not initiate litigation
455
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
456
+ any patent claim is infringed by making, using, selling, offering for
457
+ sale, or importing the Program or any portion of it.
458
+
459
+ 11. Patents.
460
+
461
+ A "contributor" is a copyright holder who authorizes use under this
462
+ License of the Program or a work on which the Program is based. The
463
+ work thus licensed is called the contributor's "contributor version".
464
+
465
+ A contributor's "essential patent claims" are all patent claims
466
+ owned or controlled by the contributor, whether already acquired or
467
+ hereafter acquired, that would be infringed by some manner, permitted
468
+ by this License, of making, using, or selling its contributor version,
469
+ but do not include claims that would be infringed only as a
470
+ consequence of further modification of the contributor version. For
471
+ purposes of this definition, "control" includes the right to grant
472
+ patent sublicenses in a manner consistent with the requirements of
473
+ this License.
474
+
475
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
476
+ patent license under the contributor's essential patent claims, to
477
+ make, use, sell, offer for sale, import and otherwise run, modify and
478
+ propagate the contents of its contributor version.
479
+
480
+ In the following three paragraphs, a "patent license" is any express
481
+ agreement or commitment, however denominated, not to enforce a patent
482
+ (such as an express permission to practice a patent or covenant not to
483
+ sue for patent infringement). To "grant" such a patent license to a
484
+ party means to make such an agreement or commitment not to enforce a
485
+ patent against the party.
486
+
487
+ If you convey a covered work, knowingly relying on a patent license,
488
+ and the Corresponding Source of the work is not available for anyone
489
+ to copy, free of charge and under the terms of this License, through a
490
+ publicly available network server or other readily accessible means,
491
+ then you must either (1) cause the Corresponding Source to be so
492
+ available, or (2) arrange to deprive yourself of the benefit of the
493
+ patent license for this particular work, or (3) arrange, in a manner
494
+ consistent with the requirements of this License, to extend the patent
495
+ license to downstream recipients. "Knowingly relying" means you have
496
+ actual knowledge that, but for the patent license, your conveying the
497
+ covered work in a country, or your recipient's use of the covered work
498
+ in a country, would infringe one or more identifiable patents in that
499
+ country that you have reason to believe are valid.
500
+
501
+ If, pursuant to or in connection with a single transaction or
502
+ arrangement, you convey, or propagate by procuring conveyance of, a
503
+ covered work, and grant a patent license to some of the parties
504
+ receiving the covered work authorizing them to use, propagate, modify
505
+ or convey a specific copy of the covered work, then the patent license
506
+ you grant is automatically extended to all recipients of the covered
507
+ work and works based on it.
508
+
509
+ A patent license is "discriminatory" if it does not include within
510
+ the scope of its coverage, prohibits the exercise of, or is
511
+ conditioned on the non-exercise of one or more of the rights that are
512
+ specifically granted under this License. You may not convey a covered
513
+ work if you are a party to an arrangement with a third party that is
514
+ in the business of distributing software, under which you make payment
515
+ to the third party based on the extent of your activity of conveying
516
+ the work, and under which the third party grants, to any of the
517
+ parties who would receive the covered work from you, a discriminatory
518
+ patent license (a) in connection with copies of the covered work
519
+ conveyed by you (or copies made from those copies), or (b) primarily
520
+ for and in connection with specific products or compilations that
521
+ contain the covered work, unless you entered into that arrangement,
522
+ or that patent license was granted, prior to 28 March 2007.
523
+
524
+ Nothing in this License shall be construed as excluding or limiting
525
+ any implied license or other defenses to infringement that may
526
+ otherwise be available to you under applicable patent law.
527
+
528
+ 12. No Surrender of Others' Freedom.
529
+
530
+ If conditions are imposed on you (whether by court order, agreement or
531
+ otherwise) that contradict the conditions of this License, they do not
532
+ excuse you from the conditions of this License. If you cannot convey a
533
+ covered work so as to satisfy simultaneously your obligations under this
534
+ License and any other pertinent obligations, then as a consequence you may
535
+ not convey it at all. For example, if you agree to terms that obligate you
536
+ to collect a royalty for further conveying from those to whom you convey
537
+ the Program, the only way you could satisfy both those terms and this
538
+ License would be to refrain entirely from conveying the Program.
539
+
540
+ 13. Remote Network Interaction; Use with the GNU General Public License.
541
+
542
+ Notwithstanding any other provision of this License, if you modify the
543
+ Program, your modified version must prominently offer all users
544
+ interacting with it remotely through a computer network (if your version
545
+ supports such interaction) an opportunity to receive the Corresponding
546
+ Source of your version by providing access to the Corresponding Source
547
+ from a network server at no charge, through some standard or customary
548
+ means of facilitating copying of software. This Corresponding Source
549
+ shall include the Corresponding Source for any work covered by version 3
550
+ of the GNU General Public License that is incorporated pursuant to the
551
+ following paragraph.
552
+
553
+ Notwithstanding any other provision of this License, you have
554
+ permission to link or combine any covered work with a work licensed
555
+ under version 3 of the GNU General Public License into a single
556
+ combined work, and to convey the resulting work. The terms of this
557
+ License will continue to apply to the part which is the covered work,
558
+ but the work with which it is combined will remain governed by version
559
+ 3 of the GNU General Public License.
560
+
561
+ 14. Revised Versions of this License.
562
+
563
+ The Free Software Foundation may publish revised and/or new versions of
564
+ the GNU Affero General Public License from time to time. Such new versions
565
+ will be similar in spirit to the present version, but may differ in detail to
566
+ address new problems or concerns.
567
+
568
+ Each version is given a distinguishing version number. If the
569
+ Program specifies that a certain numbered version of the GNU Affero General
570
+ Public License "or any later version" applies to it, you have the
571
+ option of following the terms and conditions either of that numbered
572
+ version or of any later version published by the Free Software
573
+ Foundation. If the Program does not specify a version number of the
574
+ GNU Affero General Public License, you may choose any version ever published
575
+ by the Free Software Foundation.
576
+
577
+ If the Program specifies that a proxy can decide which future
578
+ versions of the GNU Affero General Public License can be used, that proxy's
579
+ public statement of acceptance of a version permanently authorizes you
580
+ to choose that version for the Program.
581
+
582
+ Later license versions may give you additional or different
583
+ permissions. However, no additional obligations are imposed on any
584
+ author or copyright holder as a result of your choosing to follow a
585
+ later version.
586
+
587
+ 15. Disclaimer of Warranty.
588
+
589
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
590
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
591
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
592
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
593
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
594
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
595
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
596
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
597
+
598
+ 16. Limitation of Liability.
599
+
600
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
601
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
602
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
603
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
604
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
605
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
606
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
607
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
608
+ SUCH DAMAGES.
609
+
610
+ 17. Interpretation of Sections 15 and 16.
611
+
612
+ If the disclaimer of warranty and limitation of liability provided
613
+ above cannot be given local legal effect according to their terms,
614
+ reviewing courts shall apply local law that most closely approximates
615
+ an absolute waiver of all civil liability in connection with the
616
+ Program, unless a warranty or assumption of liability accompanies a
617
+ copy of the Program in return for a fee.
618
+
619
+ END OF TERMS AND CONDITIONS
620
+
621
+ How to Apply These Terms to Your New Programs
622
+
623
+ If you develop a new program, and you want it to be of the greatest
624
+ possible use to the public, the best way to achieve this is to make it
625
+ free software which everyone can redistribute and change under these terms.
626
+
627
+ To do so, attach the following notices to the program. It is safest
628
+ to attach them to the start of each source file to most effectively
629
+ state the exclusion of warranty; and each file should have at least
630
+ the "copyright" line and a pointer to where the full notice is found.
631
+
632
+ <one line to give the program's name and a brief idea of what it does.>
633
+ Copyright (C) <year> <name of author>
634
+
635
+ This program is free software: you can redistribute it and/or modify
636
+ it under the terms of the GNU Affero General Public License as published
637
+ by the Free Software Foundation, either version 3 of the License, or
638
+ (at your option) any later version.
639
+
640
+ This program is distributed in the hope that it will be useful,
641
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
642
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
643
+ GNU Affero General Public License for more details.
644
+
645
+ You should have received a copy of the GNU Affero General Public License
646
+ along with this program. If not, see <https://www.gnu.org/licenses/>.
647
+
648
+ Also add information on how to contact you by electronic and paper mail.
649
+
650
+ If your software can interact with users remotely through a computer
651
+ network, you should also make sure that it provides a way for users to
652
+ get its source. For example, if your program is a web application, its
653
+ interface could display a "Source" link that leads users to an archive
654
+ of the code. There are many ways you could offer source, and different
655
+ solutions will be better for different programs; see section 13 for the
656
+ specific requirements.
657
+
658
+ You should also get your employer (if you work as a programmer) or school,
659
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
660
+ For more information on this, and how to apply and follow the GNU AGPL, see
661
+ <https://www.gnu.org/licenses/>.
README.md ADDED
@@ -0,0 +1,9 @@
1
+ title: Bert-VITS2
2
+ emoji: 🌟
3
+ colorFrom: red
4
+ colorTo: indigo
5
+ sdk: gradio
6
+ sdk_version: 5.33.0
7
+ app_file: webui.py
8
+ pinned: false
9
+ license: agpl-3.0
attentions.py ADDED
@@ -0,0 +1,464 @@
1
+ import math
2
+ import torch
3
+ from torch import nn
4
+ from torch.nn import functional as F
5
+
6
+ import commons
7
+ import logging
8
+
9
+ logger = logging.getLogger(__name__)
10
+
11
+
12
+ class LayerNorm(nn.Module):
13
+ def __init__(self, channels, eps=1e-5):
14
+ super().__init__()
15
+ self.channels = channels
16
+ self.eps = eps
17
+
18
+ self.gamma = nn.Parameter(torch.ones(channels))
19
+ self.beta = nn.Parameter(torch.zeros(channels))
20
+
21
+ def forward(self, x):
22
+ x = x.transpose(1, -1)
23
+ x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
24
+ return x.transpose(1, -1)
25
+
26
+
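Annotation: this LayerNorm variant normalizes channel-first tensors. It transposes [batch, channels, time] so F.layer_norm can act on the channel dimension, then transposes back. A quick usage sketch (shapes are illustrative):

    ln = LayerNorm(80)
    x = torch.randn(4, 80, 100)   # (batch, channels, time)
    y = ln(x)                     # same shape; normalized over channels
    assert y.shape == (4, 80, 100)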
27
+ @torch.jit.script
28
+ def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
29
+ n_channels_int = n_channels[0]
30
+ in_act = input_a + input_b
31
+ t_act = torch.tanh(in_act[:, :n_channels_int, :])
32
+ s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
33
+ acts = t_act * s_act
34
+ return acts
35
+
36
+
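Annotation: fused_add_tanh_sigmoid_multiply above is the WaveNet-style gated activation tanh(.) * sigmoid(.), computed on the two halves of the channel dimension; n_channels arrives as a one-element tensor so the function stays TorchScript-compatible. A small demo with illustrative sizes:

    a = torch.randn(1, 8, 10)
    b = torch.randn(1, 8, 10)
    n = torch.IntTensor([4])      # first 4 channels -> tanh, last 4 -> sigmoid
    acts = fused_add_tanh_sigmoid_multiply(a, b, n)
    assert acts.shape == (1, 4, 10)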
37
+ class Encoder(nn.Module):
38
+ def __init__(
39
+ self,
40
+ hidden_channels,
41
+ filter_channels,
42
+ n_heads,
43
+ n_layers,
44
+ kernel_size=1,
45
+ p_dropout=0.0,
46
+ window_size=4,
47
+ isflow=True,
48
+ **kwargs
49
+ ):
50
+ super().__init__()
51
+ self.hidden_channels = hidden_channels
52
+ self.filter_channels = filter_channels
53
+ self.n_heads = n_heads
54
+ self.n_layers = n_layers
55
+ self.kernel_size = kernel_size
56
+ self.p_dropout = p_dropout
57
+ self.window_size = window_size
58
+ # if isflow:
59
+ # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1)
60
+ # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1)
61
+ # self.cond_layer = weight_norm(cond_layer, name='weight')
62
+ # self.gin_channels = 256
63
+ self.cond_layer_idx = self.n_layers
64
+ if "gin_channels" in kwargs:
65
+ self.gin_channels = kwargs["gin_channels"]
66
+ if self.gin_channels != 0:
67
+ self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels)
68
+ # vits2 says 3rd block, so idx is 2 by default
69
+ self.cond_layer_idx = (
70
+ kwargs["cond_layer_idx"] if "cond_layer_idx" in kwargs else 2
71
+ )
72
+ logger.debug("gin_channels: %s, cond_layer_idx: %s", self.gin_channels, self.cond_layer_idx)
73
+ assert (
74
+ self.cond_layer_idx < self.n_layers
75
+ ), "cond_layer_idx should be less than n_layers"
76
+ self.drop = nn.Dropout(p_dropout)
77
+ self.attn_layers = nn.ModuleList()
78
+ self.norm_layers_1 = nn.ModuleList()
79
+ self.ffn_layers = nn.ModuleList()
80
+ self.norm_layers_2 = nn.ModuleList()
81
+ for i in range(self.n_layers):
82
+ self.attn_layers.append(
83
+ MultiHeadAttention(
84
+ hidden_channels,
85
+ hidden_channels,
86
+ n_heads,
87
+ p_dropout=p_dropout,
88
+ window_size=window_size,
89
+ )
90
+ )
91
+ self.norm_layers_1.append(LayerNorm(hidden_channels))
92
+ self.ffn_layers.append(
93
+ FFN(
94
+ hidden_channels,
95
+ hidden_channels,
96
+ filter_channels,
97
+ kernel_size,
98
+ p_dropout=p_dropout,
99
+ )
100
+ )
101
+ self.norm_layers_2.append(LayerNorm(hidden_channels))
102
+
103
+ def forward(self, x, x_mask, g=None):
104
+ attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
105
+ x = x * x_mask
106
+ for i in range(self.n_layers):
107
+ if i == self.cond_layer_idx and g is not None:
108
+ g = self.spk_emb_linear(g.transpose(1, 2))
109
+ g = g.transpose(1, 2)
110
+ x = x + g
111
+ x = x * x_mask
112
+ y = self.attn_layers[i](x, x, attn_mask)
113
+ y = self.drop(y)
114
+ x = self.norm_layers_1[i](x + y)
115
+
116
+ y = self.ffn_layers[i](x, x_mask)
117
+ y = self.drop(y)
118
+ x = self.norm_layers_2[i](x + y)
119
+ x = x * x_mask
120
+ return x
121
+
122
+
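Annotation: the Encoder above injects an optional speaker embedding g into the residual stream just before layer cond_layer_idx (block 3, i.e. index 2, per the VITS2 comment) after projecting it from gin_channels to hidden_channels. A hedged usage sketch with dummy shapes; the hyperparameters are illustrative, not this repo's config:

    enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
                  n_layers=6, kernel_size=3, p_dropout=0.1, gin_channels=256)
    x = torch.randn(1, 192, 50)   # (batch, hidden, time)
    x_mask = torch.ones(1, 1, 50)
    g = torch.randn(1, 256, 1)    # speaker embedding
    out = enc(x, x_mask, g=g)     # (1, 192, 50)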
123
+ class Decoder(nn.Module):
124
+ def __init__(
125
+ self,
126
+ hidden_channels,
127
+ filter_channels,
128
+ n_heads,
129
+ n_layers,
130
+ kernel_size=1,
131
+ p_dropout=0.0,
132
+ proximal_bias=False,
133
+ proximal_init=True,
134
+ **kwargs
135
+ ):
136
+ super().__init__()
137
+ self.hidden_channels = hidden_channels
138
+ self.filter_channels = filter_channels
139
+ self.n_heads = n_heads
140
+ self.n_layers = n_layers
141
+ self.kernel_size = kernel_size
142
+ self.p_dropout = p_dropout
143
+ self.proximal_bias = proximal_bias
144
+ self.proximal_init = proximal_init
145
+
146
+ self.drop = nn.Dropout(p_dropout)
147
+ self.self_attn_layers = nn.ModuleList()
148
+ self.norm_layers_0 = nn.ModuleList()
149
+ self.encdec_attn_layers = nn.ModuleList()
150
+ self.norm_layers_1 = nn.ModuleList()
151
+ self.ffn_layers = nn.ModuleList()
152
+ self.norm_layers_2 = nn.ModuleList()
153
+ for i in range(self.n_layers):
154
+ self.self_attn_layers.append(
155
+ MultiHeadAttention(
156
+ hidden_channels,
157
+ hidden_channels,
158
+ n_heads,
159
+ p_dropout=p_dropout,
160
+ proximal_bias=proximal_bias,
161
+ proximal_init=proximal_init,
162
+ )
163
+ )
164
+ self.norm_layers_0.append(LayerNorm(hidden_channels))
165
+ self.encdec_attn_layers.append(
166
+ MultiHeadAttention(
167
+ hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
168
+ )
169
+ )
170
+ self.norm_layers_1.append(LayerNorm(hidden_channels))
171
+ self.ffn_layers.append(
172
+ FFN(
173
+ hidden_channels,
174
+ hidden_channels,
175
+ filter_channels,
176
+ kernel_size,
177
+ p_dropout=p_dropout,
178
+ causal=True,
179
+ )
180
+ )
181
+ self.norm_layers_2.append(LayerNorm(hidden_channels))
182
+
183
+ def forward(self, x, x_mask, h, h_mask):
184
+ """
185
+ x: decoder input
186
+ h: encoder output
187
+ """
188
+ self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
189
+ device=x.device, dtype=x.dtype
190
+ )
191
+ encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
192
+ x = x * x_mask
193
+ for i in range(self.n_layers):
194
+ y = self.self_attn_layers[i](x, x, self_attn_mask)
195
+ y = self.drop(y)
196
+ x = self.norm_layers_0[i](x + y)
197
+
198
+ y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
199
+ y = self.drop(y)
200
+ x = self.norm_layers_1[i](x + y)
201
+
202
+ y = self.ffn_layers[i](x, x_mask)
203
+ y = self.drop(y)
204
+ x = self.norm_layers_2[i](x + y)
205
+ x = x * x_mask
206
+ return x
207
+
208
+
209
+ class MultiHeadAttention(nn.Module):
210
+ def __init__(
211
+ self,
212
+ channels,
213
+ out_channels,
214
+ n_heads,
215
+ p_dropout=0.0,
216
+ window_size=None,
217
+ heads_share=True,
218
+ block_length=None,
219
+ proximal_bias=False,
220
+ proximal_init=False,
221
+ ):
222
+ super().__init__()
223
+ assert channels % n_heads == 0
224
+
225
+ self.channels = channels
226
+ self.out_channels = out_channels
227
+ self.n_heads = n_heads
228
+ self.p_dropout = p_dropout
229
+ self.window_size = window_size
230
+ self.heads_share = heads_share
231
+ self.block_length = block_length
232
+ self.proximal_bias = proximal_bias
233
+ self.proximal_init = proximal_init
234
+ self.attn = None
235
+
236
+ self.k_channels = channels // n_heads
237
+ self.conv_q = nn.Conv1d(channels, channels, 1)
238
+ self.conv_k = nn.Conv1d(channels, channels, 1)
239
+ self.conv_v = nn.Conv1d(channels, channels, 1)
240
+ self.conv_o = nn.Conv1d(channels, out_channels, 1)
241
+ self.drop = nn.Dropout(p_dropout)
242
+
243
+ if window_size is not None:
244
+ n_heads_rel = 1 if heads_share else n_heads
245
+ rel_stddev = self.k_channels**-0.5
246
+ self.emb_rel_k = nn.Parameter(
247
+ torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
248
+ * rel_stddev
249
+ )
250
+ self.emb_rel_v = nn.Parameter(
251
+ torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
252
+ * rel_stddev
253
+ )
254
+
255
+ nn.init.xavier_uniform_(self.conv_q.weight)
256
+ nn.init.xavier_uniform_(self.conv_k.weight)
257
+ nn.init.xavier_uniform_(self.conv_v.weight)
258
+ if proximal_init:
259
+ with torch.no_grad():
260
+ self.conv_k.weight.copy_(self.conv_q.weight)
261
+ self.conv_k.bias.copy_(self.conv_q.bias)
262
+
263
+ def forward(self, x, c, attn_mask=None):
264
+ q = self.conv_q(x)
265
+ k = self.conv_k(c)
266
+ v = self.conv_v(c)
267
+
268
+ x, self.attn = self.attention(q, k, v, mask=attn_mask)
269
+
270
+ x = self.conv_o(x)
271
+ return x
272
+
273
+ def attention(self, query, key, value, mask=None):
274
+ # reshape [b, d, t] -> [b, n_h, t, d_k]
275
+ b, d, t_s, t_t = (*key.size(), query.size(2))
276
+ query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
277
+ key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
278
+ value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
279
+
280
+ scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
281
+ if self.window_size is not None:
282
+ assert (
283
+ t_s == t_t
284
+ ), "Relative attention is only available for self-attention."
285
+ key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
286
+ rel_logits = self._matmul_with_relative_keys(
287
+ query / math.sqrt(self.k_channels), key_relative_embeddings
288
+ )
289
+ scores_local = self._relative_position_to_absolute_position(rel_logits)
290
+ scores = scores + scores_local
291
+ if self.proximal_bias:
292
+ assert t_s == t_t, "Proximal bias is only available for self-attention."
293
+ scores = scores + self._attention_bias_proximal(t_s).to(
294
+ device=scores.device, dtype=scores.dtype
295
+ )
296
+ if mask is not None:
297
+ scores = scores.masked_fill(mask == 0, -1e4)
298
+ if self.block_length is not None:
299
+ assert (
300
+ t_s == t_t
301
+ ), "Local attention is only available for self-attention."
302
+ block_mask = (
303
+ torch.ones_like(scores)
304
+ .triu(-self.block_length)
305
+ .tril(self.block_length)
306
+ )
307
+ scores = scores.masked_fill(block_mask == 0, -1e4)
308
+ p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
309
+ p_attn = self.drop(p_attn)
310
+ output = torch.matmul(p_attn, value)
311
+ if self.window_size is not None:
312
+ relative_weights = self._absolute_position_to_relative_position(p_attn)
313
+ value_relative_embeddings = self._get_relative_embeddings(
314
+ self.emb_rel_v, t_s
315
+ )
316
+ output = output + self._matmul_with_relative_values(
317
+ relative_weights, value_relative_embeddings
318
+ )
319
+ output = (
320
+ output.transpose(2, 3).contiguous().view(b, d, t_t)
321
+ ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
322
+ return output, p_attn
323
+
324
+ def _matmul_with_relative_values(self, x, y):
325
+ """
326
+ x: [b, h, l, m]
327
+ y: [h or 1, m, d]
328
+ ret: [b, h, l, d]
329
+ """
330
+ ret = torch.matmul(x, y.unsqueeze(0))
331
+ return ret
332
+
333
+ def _matmul_with_relative_keys(self, x, y):
334
+ """
335
+ x: [b, h, l, d]
336
+ y: [h or 1, m, d]
337
+ ret: [b, h, l, m]
338
+ """
339
+ ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
340
+ return ret
341
+
342
+ def _get_relative_embeddings(self, relative_embeddings, length):
343
+ # relative embeddings span 2 * self.window_size + 1 positions
344
+ # Pad first before slice to avoid using cond ops.
345
+ pad_length = max(length - (self.window_size + 1), 0)
346
+ slice_start_position = max((self.window_size + 1) - length, 0)
347
+ slice_end_position = slice_start_position + 2 * length - 1
348
+ if pad_length > 0:
349
+ padded_relative_embeddings = F.pad(
350
+ relative_embeddings,
351
+ commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
352
+ )
353
+ else:
354
+ padded_relative_embeddings = relative_embeddings
355
+ used_relative_embeddings = padded_relative_embeddings[
356
+ :, slice_start_position:slice_end_position
357
+ ]
358
+ return used_relative_embeddings
359
+
360
+ def _relative_position_to_absolute_position(self, x):
361
+ """
362
+ x: [b, h, l, 2*l-1]
363
+ ret: [b, h, l, l]
364
+ """
365
+ batch, heads, length, _ = x.size()
366
+ # Concat columns of pad to shift from relative to absolute indexing.
367
+ x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
368
+
369
+ # Concat extra elements so to add up to shape (len+1, 2*len-1).
370
+ x_flat = x.view([batch, heads, length * 2 * length])
371
+ x_flat = F.pad(
372
+ x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
373
+ )
374
+
375
+ # Reshape and slice out the padded elements.
376
+ x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
377
+ :, :, :length, length - 1 :
378
+ ]
379
+ return x_final
380
+
381
+ def _absolute_position_to_relative_position(self, x):
382
+ """
383
+ x: [b, h, l, l]
384
+ ret: [b, h, l, 2*l-1]
385
+ """
386
+ batch, heads, length, _ = x.size()
387
+ # pad along column
388
+ x = F.pad(
389
+ x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
390
+ )
391
+ x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
392
+ # add 0's in the beginning that will skew the elements after reshape
393
+ x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
394
+ x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
395
+ return x_final
396
+
397
+ def _attention_bias_proximal(self, length):
398
+ """Bias for self-attention to encourage attention to close positions.
399
+ Args:
400
+ length: an integer scalar.
401
+ Returns:
402
+ a Tensor with shape [1, 1, length, length]
403
+ """
404
+ r = torch.arange(length, dtype=torch.float32)
405
+ diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
406
+ return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
407
+
408
+
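Annotation: MultiHeadAttention projects queries, keys, and values with 1x1 convolutions, so inputs stay channel-first; when window_size is set it adds learned relative-position terms, which only makes sense for self-attention (hence the t_s == t_t asserts). A minimal self-attention call, shapes illustrative:

    mha = MultiHeadAttention(channels=192, out_channels=192, n_heads=2, window_size=4)
    x = torch.randn(1, 192, 50)
    out = mha(x, x)               # self-attention: query source == key/value source
    assert out.shape == (1, 192, 50)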
409
+ class FFN(nn.Module):
410
+ def __init__(
411
+ self,
412
+ in_channels,
413
+ out_channels,
414
+ filter_channels,
415
+ kernel_size,
416
+ p_dropout=0.0,
417
+ activation=None,
418
+ causal=False,
419
+ ):
420
+ super().__init__()
421
+ self.in_channels = in_channels
422
+ self.out_channels = out_channels
423
+ self.filter_channels = filter_channels
424
+ self.kernel_size = kernel_size
425
+ self.p_dropout = p_dropout
426
+ self.activation = activation
427
+ self.causal = causal
428
+
429
+ if causal:
430
+ self.padding = self._causal_padding
431
+ else:
432
+ self.padding = self._same_padding
433
+
434
+ self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
435
+ self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
436
+ self.drop = nn.Dropout(p_dropout)
437
+
438
+ def forward(self, x, x_mask):
439
+ x = self.conv_1(self.padding(x * x_mask))
440
+ if self.activation == "gelu":
441
+ x = x * torch.sigmoid(1.702 * x)
442
+ else:
443
+ x = torch.relu(x)
444
+ x = self.drop(x)
445
+ x = self.conv_2(self.padding(x * x_mask))
446
+ return x * x_mask
447
+
448
+ def _causal_padding(self, x):
449
+ if self.kernel_size == 1:
450
+ return x
451
+ pad_l = self.kernel_size - 1
452
+ pad_r = 0
453
+ padding = [[0, 0], [0, 0], [pad_l, pad_r]]
454
+ x = F.pad(x, commons.convert_pad_shape(padding))
455
+ return x
456
+
457
+ def _same_padding(self, x):
458
+ if self.kernel_size == 1:
459
+ return x
460
+ pad_l = (self.kernel_size - 1) // 2
461
+ pad_r = self.kernel_size // 2
462
+ padding = [[0, 0], [0, 0], [pad_l, pad_r]]
463
+ x = F.pad(x, commons.convert_pad_shape(padding))
464
+ return x
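Annotation: FFN pads so that both convolutions preserve sequence length; with causal=True all padding goes on the left, so output frame t never sees inputs after t (this is what the Decoder above relies on). A short sketch with illustrative sizes:

    ffn = FFN(in_channels=192, out_channels=192, filter_channels=768,
              kernel_size=3, causal=True)
    x = torch.randn(1, 192, 50)
    x_mask = torch.ones(1, 1, 50)
    y = ffn(x, x_mask)            # (1, 192, 50), causally padded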
bert/.gitignore ADDED
@@ -0,0 +1 @@
1
+ chinese-roberta-wwm-ext-large
bert/bert_models.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "chinese-roberta-wwm-ext-large": {
3
+ "repo_id": "hfl/chinese-roberta-wwm-ext-large",
4
+ "files": ["pytorch_model.bin"]
5
+ }
6
+ }
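Annotation: bert/bert_models.json maps a local directory name to a Hugging Face repo and the files to fetch into it (the directory itself is git-ignored above). A plausible consumer sketched with huggingface_hub; the download loop is an assumption, the repo's own loader may differ:

    import json
    from huggingface_hub import hf_hub_download

    with open("bert/bert_models.json", "r", encoding="utf-8") as f:
        models = json.load(f)

    for local_dir, spec in models.items():
        for filename in spec["files"]:
            # fetch e.g. pytorch_model.bin from hfl/chinese-roberta-wwm-ext-large
            hf_hub_download(repo_id=spec["repo_id"], filename=filename,
                            local_dir=f"bert/{local_dir}")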
commons.py ADDED
@@ -0,0 +1,158 @@
1
+ import math
2
+ import torch
3
+ from torch.nn import functional as F
4
+
5
+
6
+ def init_weights(m, mean=0.0, std=0.01):
7
+ classname = m.__class__.__name__
8
+ if classname.find("Conv") != -1:
9
+ m.weight.data.normal_(mean, std)
10
+
11
+
12
+ def get_padding(kernel_size, dilation=1):
13
+ return int((kernel_size * dilation - dilation) / 2)
14
+
15
+
16
+ def convert_pad_shape(pad_shape):
17
+ layer = pad_shape[::-1]
18
+ pad_shape = [item for sublist in layer for item in sublist]
19
+ return pad_shape
20
+
21
+
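Annotation: convert_pad_shape above flattens a per-dimension [[left, right], ...] pad spec into the reversed flat list that F.pad expects (last dimension first). A worked example:

    # pad 1 frame on the left of the last (time) dimension only:
    convert_pad_shape([[0, 0], [0, 0], [1, 0]])   # -> [1, 0, 0, 0, 0, 0]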
22
+ def intersperse(lst, item):
23
+ result = [item] * (len(lst) * 2 + 1)
24
+ result[1::2] = lst
25
+ return result
26
+
27
+
28
+ def kl_divergence(m_p, logs_p, m_q, logs_q):
29
+ """KL(P||Q)"""
30
+ kl = (logs_q - logs_p) - 0.5
31
+ kl += (
32
+ 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
33
+ )
34
+ return kl
35
+
36
+
37
+ def rand_gumbel(shape):
38
+ """Sample from the Gumbel distribution, protect from overflows."""
39
+ uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
40
+ return -torch.log(-torch.log(uniform_samples))
41
+
42
+
43
+ def rand_gumbel_like(x):
44
+ g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
45
+ return g
46
+
47
+
48
+ def slice_segments(x, ids_str, segment_size=4):
49
+ gather_indices = ids_str.view(x.size(0), 1, 1).repeat(
50
+ 1, x.size(1), 1
51
+ ) + torch.arange(segment_size, device=x.device)
52
+ return torch.gather(x, 2, gather_indices)
53
+
54
+
55
+ def rand_slice_segments(x, x_lengths=None, segment_size=4):
56
+ b, d, t = x.size()
57
+ if x_lengths is None:
58
+ x_lengths = t
59
+ ids_str_max = torch.clamp(x_lengths - segment_size + 1, min=0)
60
+ ids_str = (torch.rand([b], device=x.device) * ids_str_max).to(dtype=torch.long)
61
+ ret = slice_segments(x, ids_str, segment_size)
62
+ return ret, ids_str
63
+
64
+
65
+ def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
66
+ position = torch.arange(length, dtype=torch.float)
67
+ num_timescales = channels // 2
68
+ log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
69
+ num_timescales - 1
70
+ )
71
+ inv_timescales = min_timescale * torch.exp(
72
+ torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
73
+ )
74
+ scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
75
+ signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
76
+ signal = F.pad(signal, [0, 0, 0, channels % 2])
77
+ signal = signal.view(1, channels, length)
78
+ return signal
79
+
80
+
81
+ def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
82
+ b, channels, length = x.size()
83
+ signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
84
+ return x + signal.to(dtype=x.dtype, device=x.device)
85
+
86
+
87
+ def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
88
+ b, channels, length = x.size()
89
+ signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
90
+ return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
91
+
92
+
93
+ def subsequent_mask(length):
94
+ mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
95
+ return mask
96
+
97
+
98
+ @torch.jit.script
99
+ def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
100
+ n_channels_int = n_channels[0]
101
+ in_act = input_a + input_b
102
+ t_act = torch.tanh(in_act[:, :n_channels_int, :])
103
+ s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
104
+ acts = t_act * s_act
105
+ return acts
106
+
107
+
108
+ def convert_pad_shape(pad_shape):
109
+ layer = pad_shape[::-1]
110
+ pad_shape = [item for sublist in layer for item in sublist]
111
+ return pad_shape
112
+
113
+
114
+ def shift_1d(x):
115
+ x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
116
+ return x
117
+
118
+
119
+ def sequence_mask(length, max_length=None):
120
+ if max_length is None:
121
+ max_length = length.max()
122
+ x = torch.arange(max_length, dtype=length.dtype, device=length.device)
123
+ return x.unsqueeze(0) < length.unsqueeze(1)
124
+
125
+
126
+ def generate_path(duration, mask):
127
+ """
128
+ duration: [b, 1, t_x]
129
+ mask: [b, 1, t_y, t_x]
130
+ """
131
+
132
+ b, _, t_y, t_x = mask.shape
133
+ cum_duration = torch.cumsum(duration, -1)
134
+
135
+ cum_duration_flat = cum_duration.view(b * t_x)
136
+ path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
137
+ path = path.view(b, t_x, t_y)
138
+ path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
139
+ path = path.unsqueeze(1).transpose(2, 3) * mask
140
+ return path
141
+
142
+
143
+ def clip_grad_value_(parameters, clip_value, norm_type=2):
144
+ if isinstance(parameters, torch.Tensor):
145
+ parameters = [parameters]
146
+ parameters = list(filter(lambda p: p.grad is not None, parameters))
147
+ norm_type = float(norm_type)
148
+ if clip_value is not None:
149
+ clip_value = float(clip_value)
150
+
151
+ total_norm = 0
152
+ for p in parameters:
153
+ param_norm = p.grad.data.norm(norm_type)
154
+ total_norm += param_norm.item() ** norm_type
155
+ if clip_value is not None:
156
+ p.grad.data.clamp_(min=-clip_value, max=clip_value)
157
+ total_norm = total_norm ** (1.0 / norm_type)
158
+ return total_norm
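Two helpers above are easy to misread, so a small worked example with arbitrary values: convert_pad_shape reverses the per-dimension [left, right] pad spec (last dimension first) and flattens it into the layout torch.nn.functional.pad expects, and sequence_mask turns a length vector into a boolean validity mask:

import torch

print(convert_pad_shape([[0, 0], [0, 0], [1, 0]]))
# -> [1, 0, 0, 0, 0, 0]: pad one step on the left of the last dim only

print(sequence_mask(torch.tensor([2, 4]), max_length=5))
# -> tensor([[ True,  True, False, False, False],
#            [ True,  True,  True,  True, False]])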
compress_model.py ADDED
@@ -0,0 +1,89 @@
1
+ from collections import OrderedDict
2
+ from text.symbols import symbols
3
+ import torch
4
+
5
+ from tools.log import logger
6
+ import utils
7
+ from models import SynthesizerTrn
8
+ import os
9
+
10
+
11
+ def copyStateDict(state_dict):
12
+ if list(state_dict.keys())[0].startswith("module"):
13
+ start_idx = 1
14
+ else:
15
+ start_idx = 0
16
+ new_state_dict = OrderedDict()
17
+ for k, v in state_dict.items():
18
+ name = ".".join(k.split(".")[start_idx:])
19
+ new_state_dict[name] = v
20
+ return new_state_dict
21
+
22
+
23
+ def removeOptimizer(config: str, input_model: str, ishalf: bool, output_model: str):
24
+ hps = utils.get_hparams_from_file(config)
25
+
26
+ net_g = SynthesizerTrn(
27
+ len(symbols),
28
+ hps.data.filter_length // 2 + 1,
29
+ hps.train.segment_size // hps.data.hop_length,
30
+ n_speakers=hps.data.n_speakers,
31
+ **hps.model,
32
+ )
33
+
34
+ optim_g = torch.optim.AdamW(
35
+ net_g.parameters(),
36
+ hps.train.learning_rate,
37
+ betas=hps.train.betas,
38
+ eps=hps.train.eps,
39
+ )
40
+
41
+ state_dict_g = torch.load(input_model, map_location="cpu")
42
+ new_dict_g = copyStateDict(state_dict_g)
43
+ keys = []
44
+ for k, v in new_dict_g["model"].items():
45
+ if "enc_q" in k:
46
+ continue # noqa: E701
47
+ keys.append(k)
48
+
49
+ new_dict_g = (
50
+ {k: new_dict_g["model"][k].half() for k in keys}
51
+ if ishalf
52
+ else {k: new_dict_g["model"][k] for k in keys}
53
+ )
54
+
55
+ torch.save(
56
+ {
57
+ "model": new_dict_g,
58
+ "iteration": 0,
59
+ "optimizer": optim_g.state_dict(),
60
+ "learning_rate": 0.0001,
61
+ },
62
+ output_model,
63
+ )
64
+
65
+
66
+ if __name__ == "__main__":
67
+ import argparse
68
+
69
+ parser = argparse.ArgumentParser()
70
+ parser.add_argument("-c", "--config", type=str, default="configs/config.json")
71
+ parser.add_argument("-i", "--input", type=str)
72
+ parser.add_argument("-o", "--output", type=str, default=None)
73
+ parser.add_argument(
74
+ "-hf", "--half", action="store_true", default=False, help="Save as FP16"
75
+ )
76
+
77
+ args = parser.parse_args()
78
+
79
+ output = args.output
80
+
81
+ if output is None:
82
+ import os.path
83
+
84
+ filename, ext = os.path.splitext(args.input)
85
+ half = "_half" if args.half else ""
86
+ output = filename + "_release" + half + ext
87
+
88
+ removeOptimizer(args.config, args.input, args.half, output)
89
+ logger.info(f"Model compressed successfully, output: {os.path.abspath(output)}")
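In short, removeOptimizer rebuilds a fresh AdamW state, drops every enc_q.* weight (the posterior encoder is only needed during training), optionally casts the rest to FP16, and writes a deployable checkpoint. A usage sketch; the input checkpoint path is a placeholder:

removeOptimizer(
    config="Data/configs/config.json",
    input_model="Data/models/G_10000.pth",  # hypothetical training checkpoint
    ishalf=True,                            # save weights as FP16
    output_model="Data/models/compressed.pth",
)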
config.py ADDED
@@ -0,0 +1,70 @@
1
+ """
2
+ @Desc: Global configuration file loader
3
+ """
4
+
5
+ import argparse
6
+ import yaml
7
+ from typing import Any, Dict, List
8
+ import os
9
+ import shutil
10
+ import sys
11
+
12
+
13
+ class Webui_config:
14
+ """webui 配置"""
15
+
16
+ def __init__(
17
+ self,
18
+ device: str,
19
+ model: str,
20
+ config_path: str,
21
+ port: int = 7860,
22
+ share: bool = False,
23
+ debug: bool = False,
24
+ ):
25
+ self.device: str = device
26
+ self.model: str = model  # model path
27
+ self.config_path: str = config_path  # config file path
28
+ self.port: int = port  # port number
29
+ self.share: bool = share  # whether to deploy publicly (expose to the internet)
30
+ self.debug: bool = debug  # whether to enable debug mode
31
+
32
+ @classmethod
33
+ def from_dict(cls, dataset_path: str, data: Dict[str, Any]):
34
+ data["config_path"] = os.path.join(dataset_path, data["config_path"])
35
+ data["model"] = os.path.join(dataset_path, data["model"])
36
+ return cls(**data)
37
+
38
+
39
+ class Server_config:
40
+ def __init__(
41
+ self, models: List[Dict[str, Any]], port: int = 5000, device: str = "cuda"
42
+ ):
43
+ self.models: List[Dict[str, Any]] = models  # configs of all models to load
44
+ self.port: int = port  # port number
45
+ self.device: str = device  # default device for models
46
+
47
+ @classmethod
48
+ def from_dict(cls, data: Dict[str, Any]):
49
+ return cls(**data)
50
+
51
+
52
+ class Config:
53
+ def __init__(self, config_path: str):
54
+ with open(file=config_path, mode="r", encoding="utf-8") as file:
55
+ yaml_config: Dict[str, Any] = yaml.safe_load(file.read())
56
+ dataset_path: str = yaml_config["dataset_path"]
57
+ self.dataset_path: str = dataset_path
58
+ self.webui_config: Webui_config = Webui_config.from_dict(
59
+ dataset_path, yaml_config["webui"]
60
+ )
61
+ self.server_config: Server_config = Server_config.from_dict(
62
+ yaml_config["server"]
63
+ )
64
+
65
+
66
+ parser = argparse.ArgumentParser()
67
+ # Renamed to avoid clashing with the older config.json
68
+ parser.add_argument("-y", "--yml_config", type=str, default="config.yml")
69
+ args, _ = parser.parse_known_args()
70
+ config = Config(args.yml_config)
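Since the module builds the Config instance at import time, downstream code simply imports the singleton; a minimal sketch:

from config import config

print(config.dataset_path)        # "Data/"
print(config.webui_config.model)  # dataset_path joined with webui.model
print(config.server_config.port)  # 5000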
config.yml ADDED
@@ -0,0 +1,50 @@
1
+ # Global configuration
2
+ # To use several config files at once (e.g. two GPUs each training on its own dataset), point to a config file via an environment variable; if unset, ./config.yml is the default
3
+
4
+ # A common path layout is provided so all data is stored in one place instead of being scattered
5
+ # Each dataset and its model live under one shared path; all path settings below are relative to dataset_path
6
+ # Leave empty to make paths relative to the project root
7
+ dataset_path: "Data/"
8
+
9
+ # webui configuration
10
+ # Note: a space is required after ":"
11
+ webui:
12
+ # Inference device
13
+ device: "cpu"
14
+ # Model path
15
+ model: "models/compressed.pth"
16
+ # Config file path
17
+ config_path: "configs/config.json"
18
+ # Port
19
+ port: 7860
20
+ # Whether to deploy publicly (expose to the internet)
21
+ share: false
22
+ # Whether to enable debug mode
23
+ debug: false
24
+ # Language identification library; langid or fastlid
25
+ language_identification_library: "langid"
26
+
27
+ # server-fastapi configuration
28
+ # Note: a space is required after ":"
29
+ # Note: all paths in this block are relative to the project root
30
+ server:
31
+ # Port
32
+ port: 5000
33
+ # Default device for models (this setting is not implemented yet)
34
+ device: "cpu"
35
+ # Configs of all models to load; list several, or none and load models manually once the page is up
36
+ # To load no models: delete the two default model entries and set models to [ ], an empty list, i.e. models: [ ]
37
+ # Note: every model must have valid model and config paths; empty paths cause load errors
38
+ # Models can also be left unset and filled in manually after the page loads
39
+ models:
40
+ - # Model path
41
+ model: ""
42
+ # Path to the model's config.json
43
+ config: ""
44
+ device: "cuda"
45
+ # Default language of the model
46
+ language: "ZH"
47
+ # Per-speaker default parameters
48
+ # Not every speaker needs an entry; unlisted speakers use defaults
49
+ # Can be left empty for now; per-speaker configuration is not implemented yet
50
+ speakers: []
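As a quick check of how Webui_config.from_dict resolves the webui paths above against dataset_path:

import os

dataset_path = "Data/"
print(os.path.join(dataset_path, "models/compressed.pth"))  # Data/models/compressed.pth
print(os.path.join(dataset_path, "configs/config.json"))    # Data/configs/config.json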
data_utils.py ADDED
@@ -0,0 +1,404 @@
1
+ import os
2
+ import random
3
+ import torch
4
+ import torch.utils.data
5
+ from tqdm import tqdm
6
+ from tools.log import logger
7
+ import commons
8
+ from mel_processing import spectrogram_torch, mel_spectrogram_torch
9
+ from utils import load_wav_to_torch, load_filepaths_and_text
10
+ from text import cleaned_text_to_sequence
11
+ from config import config
12
+
13
+ """Multi speaker version"""
14
+
15
+
16
+ class TextAudioSpeakerLoader(torch.utils.data.Dataset):
17
+ """
18
+ 1) loads audio, speaker_id, text pairs
19
+ 2) normalizes text and converts them to sequences of integers
20
+ 3) computes spectrograms from audio files.
21
+ """
22
+
23
+ def __init__(self, audiopaths_sid_text, hparams):
24
+ self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
25
+ self.max_wav_value = hparams.max_wav_value
26
+ self.sampling_rate = hparams.sampling_rate
27
+ self.filter_length = hparams.filter_length
28
+ self.hop_length = hparams.hop_length
29
+ self.win_length = hparams.win_length
30
+ self.sampling_rate = hparams.sampling_rate
31
+ self.spk_map = hparams.spk2id
32
+ self.hparams = hparams
33
+
34
+ self.use_mel_spec_posterior = getattr(
35
+ hparams, "use_mel_posterior_encoder", False
36
+ )
37
+ if self.use_mel_spec_posterior:
38
+ self.n_mel_channels = getattr(hparams, "n_mel_channels", 80)
39
+
40
+ self.cleaned_text = getattr(hparams, "cleaned_text", False)
41
+
42
+ self.add_blank = hparams.add_blank
43
+ self.min_text_len = getattr(hparams, "min_text_len", 1)
44
+ self.max_text_len = getattr(hparams, "max_text_len", 384)
45
+
46
+ random.seed(1234)
47
+ random.shuffle(self.audiopaths_sid_text)
48
+ self._filter()
49
+
50
+ def _filter(self):
51
+ """
52
+ Filter text & store spec lengths
53
+ """
54
+ # Store spectrogram lengths for Bucketing
55
+ # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
56
+ # spec_length = wav_length // hop_length
57
+
58
+ audiopaths_sid_text_new = []
59
+ lengths = []
60
+ skipped = 0
61
+ logger.info("Init dataset...")
62
+ for _id, spk, language, text, phones, tone, word2ph in tqdm(
63
+ self.audiopaths_sid_text
64
+ ):
65
+ audiopath = f"{_id}"
66
+ if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len:
67
+ phones = phones.split(" ")
68
+ tone = [int(i) for i in tone.split(" ")]
69
+ word2ph = [int(i) for i in word2ph.split(" ")]
70
+ audiopaths_sid_text_new.append(
71
+ [audiopath, spk, language, text, phones, tone, word2ph]
72
+ )
73
+ lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
74
+ else:
75
+ skipped += 1
76
+ logger.info(
77
+ "skipped: "
78
+ + str(skipped)
79
+ + ", total: "
80
+ + str(len(self.audiopaths_sid_text))
81
+ )
82
+ self.audiopaths_sid_text = audiopaths_sid_text_new
83
+ self.lengths = lengths
84
+
85
+ def get_audio_text_speaker_pair(self, audiopath_sid_text):
86
+ # separate filename, speaker_id and text
87
+ audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text
88
+
89
+ bert, ja_bert, en_bert, phones, tone, language = self.get_text(
90
+ text, word2ph, phones, tone, language, audiopath
91
+ )
92
+
93
+ spec, wav = self.get_audio(audiopath)
94
+ sid = torch.LongTensor([int(self.spk_map[sid])])
95
+
96
+ return (phones, spec, wav, sid, tone, language, bert, ja_bert, en_bert)
97
+
98
+ def get_audio(self, filename):
99
+ audio, sampling_rate = load_wav_to_torch(filename)
100
+ if sampling_rate != self.sampling_rate:
101
+ raise ValueError(
102
+ "{} {} SR doesn't match target {} SR".format(
103
+ filename, sampling_rate, self.sampling_rate
104
+ )
105
+ )
106
+ audio_norm = audio / self.max_wav_value
107
+ audio_norm = audio_norm.unsqueeze(0)
108
+ spec_filename = filename.replace(".wav", ".spec.pt")
109
+ if self.use_mel_spec_posterior:
110
+ spec_filename = spec_filename.replace(".spec.pt", ".mel.pt")
111
+ try:
112
+ spec = torch.load(spec_filename)
113
+ except Exception:
114
+ if self.use_mel_spec_posterior:
115
+ spec = mel_spectrogram_torch(
116
+ audio_norm,
117
+ self.filter_length,
118
+ self.n_mel_channels,
119
+ self.sampling_rate,
120
+ self.hop_length,
121
+ self.win_length,
122
+ self.hparams.mel_fmin,
123
+ self.hparams.mel_fmax,
124
+ center=False,
125
+ )
126
+ else:
127
+ spec = spectrogram_torch(
128
+ audio_norm,
129
+ self.filter_length,
130
+ self.sampling_rate,
131
+ self.hop_length,
132
+ self.win_length,
133
+ center=False,
134
+ )
135
+ spec = torch.squeeze(spec, 0)
136
+ if config.train_ms_config.spec_cache:
137
+ torch.save(spec, spec_filename)
138
+ return spec, audio_norm
139
+
140
+ def get_text(self, text, word2ph, phone, tone, language_str, wav_path):
141
+ phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
142
+ if self.add_blank:
143
+ phone = commons.intersperse(phone, 0)
144
+ tone = commons.intersperse(tone, 0)
145
+ language = commons.intersperse(language, 0)
146
+ for i in range(len(word2ph)):
147
+ word2ph[i] = word2ph[i] * 2
148
+ word2ph[0] += 1
149
+ bert_path = wav_path.replace(".wav", ".bert.pt")
150
+ try:
151
+ bert_ori = torch.load(bert_path)
152
+ assert bert_ori.shape[-1] == len(phone)
153
+ except Exception as e:
154
+ logger.warning(f"BERT load failed: {bert_path}")
155
+ logger.warning(e)
156
+
157
+ if language_str == "ZH":
158
+ bert = bert_ori
159
+ ja_bert = torch.randn(1024, len(phone))
160
+ en_bert = torch.randn(1024, len(phone))
161
+ elif language_str == "JP":
162
+ bert = torch.randn(1024, len(phone))
163
+ ja_bert = bert_ori
164
+ en_bert = torch.randn(1024, len(phone))
165
+ elif language_str == "EN":
166
+ bert = torch.randn(1024, len(phone))
167
+ ja_bert = torch.randn(1024, len(phone))
168
+ en_bert = bert_ori
169
+ phone = torch.LongTensor(phone)
170
+ tone = torch.LongTensor(tone)
171
+ language = torch.LongTensor(language)
172
+ return bert, ja_bert, en_bert, phone, tone, language
173
+
174
+ def get_sid(self, sid):
175
+ sid = torch.LongTensor([int(sid)])
176
+ return sid
177
+
178
+ def __getitem__(self, index):
179
+ return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
180
+
181
+ def __len__(self):
182
+ return len(self.audiopaths_sid_text)
183
+
184
+
185
+ class TextAudioSpeakerCollate:
186
+ """Zero-pads model inputs and targets"""
187
+
188
+ def __init__(self, return_ids=False):
189
+ self.return_ids = return_ids
190
+
191
+ def __call__(self, batch):
192
+ """Collate's training batch from normalized text, audio and speaker identities
193
+ PARAMS
194
+ ------
195
+ batch: [text_normalized, spec_normalized, wav_normalized, sid, tone, language, bert, ja_bert, en_bert]
196
+ """
197
+ # Right zero-pad all one-hot text sequences to max input length
198
+ _, ids_sorted_decreasing = torch.sort(
199
+ torch.LongTensor([x[1].size(1) for x in batch]), dim=0, descending=True
200
+ )
201
+
202
+ max_text_len = max([len(x[0]) for x in batch])
203
+ max_spec_len = max([x[1].size(1) for x in batch])
204
+ max_wav_len = max([x[2].size(1) for x in batch])
205
+
206
+ text_lengths = torch.LongTensor(len(batch))
207
+ spec_lengths = torch.LongTensor(len(batch))
208
+ wav_lengths = torch.LongTensor(len(batch))
209
+ sid = torch.LongTensor(len(batch))
210
+
211
+ text_padded = torch.LongTensor(len(batch), max_text_len)
212
+ tone_padded = torch.LongTensor(len(batch), max_text_len)
213
+ language_padded = torch.LongTensor(len(batch), max_text_len)
214
+ bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len)
215
+ ja_bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len)
216
+ en_bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len)
217
+
218
+ spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
219
+ wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
220
+ text_padded.zero_()
221
+ tone_padded.zero_()
222
+ language_padded.zero_()
223
+ spec_padded.zero_()
224
+ wav_padded.zero_()
225
+ bert_padded.zero_()
226
+ ja_bert_padded.zero_()
227
+ en_bert_padded.zero_()
228
+
229
+ for i in range(len(ids_sorted_decreasing)):
230
+ row = batch[ids_sorted_decreasing[i]]
231
+
232
+ text = row[0]
233
+ text_padded[i, : text.size(0)] = text
234
+ text_lengths[i] = text.size(0)
235
+
236
+ spec = row[1]
237
+ spec_padded[i, :, : spec.size(1)] = spec
238
+ spec_lengths[i] = spec.size(1)
239
+
240
+ wav = row[2]
241
+ wav_padded[i, :, : wav.size(1)] = wav
242
+ wav_lengths[i] = wav.size(1)
243
+
244
+ sid[i] = row[3]
245
+
246
+ tone = row[4]
247
+ tone_padded[i, : tone.size(0)] = tone
248
+
249
+ language = row[5]
250
+ language_padded[i, : language.size(0)] = language
251
+
252
+ bert = row[6]
253
+ bert_padded[i, :, : bert.size(1)] = bert
254
+
255
+ ja_bert = row[7]
256
+ ja_bert_padded[i, :, : ja_bert.size(1)] = ja_bert
257
+
258
+ en_bert = row[8]
259
+ en_bert_padded[i, :, : en_bert.size(1)] = en_bert
260
+
261
+ return (
262
+ text_padded,
263
+ text_lengths,
264
+ spec_padded,
265
+ spec_lengths,
266
+ wav_padded,
267
+ wav_lengths,
268
+ sid,
269
+ tone_padded,
270
+ language_padded,
271
+ bert_padded,
272
+ ja_bert_padded,
273
+ en_bert_padded,
274
+ )
275
+
276
+
277
+ class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
278
+ """
279
+ Maintain similar input lengths in a batch.
280
+ Length groups are specified by boundaries.
281
+ Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
282
+
283
+ It removes samples which are not included in the boundaries.
284
+ Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
285
+ """
286
+
287
+ def __init__(
288
+ self,
289
+ dataset,
290
+ batch_size,
291
+ boundaries,
292
+ num_replicas=None,
293
+ rank=None,
294
+ shuffle=True,
295
+ ):
296
+ super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
297
+ self.lengths = dataset.lengths
298
+ self.batch_size = batch_size
299
+ self.boundaries = boundaries
300
+
301
+ self.buckets, self.num_samples_per_bucket = self._create_buckets()
302
+ self.total_size = sum(self.num_samples_per_bucket)
303
+ self.num_samples = self.total_size // self.num_replicas
304
+
305
+ def _create_buckets(self):
306
+ buckets = [[] for _ in range(len(self.boundaries) - 1)]
307
+ for i in range(len(self.lengths)):
308
+ length = self.lengths[i]
309
+ idx_bucket = self._bisect(length)
310
+ if idx_bucket != -1:
311
+ buckets[idx_bucket].append(i)
312
+
313
+ try:
314
+ for i in range(len(buckets) - 1, 0, -1):
315
+ if len(buckets[i]) == 0:
316
+ buckets.pop(i)
317
+ self.boundaries.pop(i + 1)
318
+ assert all(len(bucket) > 0 for bucket in buckets)
319
+ # When one bucket is not traversed
320
+ except Exception as e:
321
+ print("Bucket warning ", e)
322
+ for i in range(len(buckets) - 1, -1, -1):
323
+ if len(buckets[i]) == 0:
324
+ buckets.pop(i)
325
+ self.boundaries.pop(i + 1)
326
+
327
+ num_samples_per_bucket = []
328
+ for i in range(len(buckets)):
329
+ len_bucket = len(buckets[i])
330
+ total_batch_size = self.num_replicas * self.batch_size
331
+ rem = (
332
+ total_batch_size - (len_bucket % total_batch_size)
333
+ ) % total_batch_size
334
+ num_samples_per_bucket.append(len_bucket + rem)
335
+ return buckets, num_samples_per_bucket
336
+
337
+ def __iter__(self):
338
+ # deterministically shuffle based on epoch
339
+ g = torch.Generator()
340
+ g.manual_seed(self.epoch)
341
+
342
+ indices = []
343
+ if self.shuffle:
344
+ for bucket in self.buckets:
345
+ indices.append(torch.randperm(len(bucket), generator=g).tolist())
346
+ else:
347
+ for bucket in self.buckets:
348
+ indices.append(list(range(len(bucket))))
349
+
350
+ batches = []
351
+ for i in range(len(self.buckets)):
352
+ bucket = self.buckets[i]
353
+ len_bucket = len(bucket)
354
+ if len_bucket == 0:
355
+ continue
356
+ ids_bucket = indices[i]
357
+ num_samples_bucket = self.num_samples_per_bucket[i]
358
+
359
+ # add extra samples to make it evenly divisible
360
+ rem = num_samples_bucket - len_bucket
361
+ ids_bucket = (
362
+ ids_bucket
363
+ + ids_bucket * (rem // len_bucket)
364
+ + ids_bucket[: (rem % len_bucket)]
365
+ )
366
+
367
+ # subsample
368
+ ids_bucket = ids_bucket[self.rank :: self.num_replicas]
369
+
370
+ # batching
371
+ for j in range(len(ids_bucket) // self.batch_size):
372
+ batch = [
373
+ bucket[idx]
374
+ for idx in ids_bucket[
375
+ j * self.batch_size : (j + 1) * self.batch_size
376
+ ]
377
+ ]
378
+ batches.append(batch)
379
+
380
+ if self.shuffle:
381
+ batch_ids = torch.randperm(len(batches), generator=g).tolist()
382
+ batches = [batches[i] for i in batch_ids]
383
+ self.batches = batches
384
+
385
+ assert len(self.batches) * self.batch_size == self.num_samples
386
+ return iter(self.batches)
387
+
388
+ def _bisect(self, x, lo=0, hi=None):
389
+ if hi is None:
390
+ hi = len(self.boundaries) - 1
391
+
392
+ if hi > lo:
393
+ mid = (hi + lo) // 2
394
+ if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
395
+ return mid
396
+ elif x <= self.boundaries[mid]:
397
+ return self._bisect(x, lo, mid)
398
+ else:
399
+ return self._bisect(x, mid + 1, hi)
400
+ else:
401
+ return -1
402
+
403
+ def __len__(self):
404
+ return self.num_samples // self.batch_size
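A hedged sketch of how the three classes above are typically wired together; hps is assumed to come from utils.get_hparams_from_file, and the filelist path, batch size, and boundaries are illustrative:

import torch.utils.data

train_dataset = TextAudioSpeakerLoader("filelists/train.list", hps.data)  # hypothetical filelist
sampler = DistributedBucketSampler(
    train_dataset, batch_size=16,
    boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
    num_replicas=1, rank=0, shuffle=True,
)
loader = torch.utils.data.DataLoader(
    train_dataset, num_workers=4, pin_memory=True,
    collate_fn=TextAudioSpeakerCollate(), batch_sampler=sampler,
)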
hiyoriUI.py ADDED
@@ -0,0 +1,735 @@
1
+ """
2
+ API service / web backend: multi-version, multi-model FastAPI implementation
3
+ Formerly server_fastapi
4
+ """
5
+
6
+ import logging
7
+ import gc
8
+ import random
9
+ import librosa
10
+ import gradio
11
+ import numpy as np
12
+ import utils
13
+ from fastapi import FastAPI, Query, Request, File, UploadFile, Form
14
+ from fastapi.responses import Response, FileResponse
15
+ from fastapi.staticfiles import StaticFiles
16
+ from io import BytesIO
17
+ from scipy.io import wavfile
18
+ import uvicorn
19
+ import torch
20
+ import webbrowser
21
+ import psutil
22
+ import GPUtil
23
+ from typing import Any, Dict, Optional, List, Set, Union, Tuple
24
+ import os
25
+ from tools.log import logger
26
+ from urllib.parse import unquote
27
+
28
+ from infer import infer, get_net_g, latest_version
29
+ import tools.translate as trans
30
+ from tools.sentence import split_by_language
31
+ from re_matching import cut_sent
32
+
33
+
34
+ from config import config
35
+
36
+ os.environ["TOKENIZERS_PARALLELISM"] = "false"
37
+
38
+
39
+ class Model:
40
+ """模型封装类"""
41
+
42
+ def __init__(self, config_path: str, model_path: str, device: str, language: str):
43
+ self.config_path: str = os.path.normpath(config_path)
44
+ self.model_path: str = os.path.normpath(model_path)
45
+ self.device: str = device
46
+ self.language: str = language
47
+ self.hps = utils.get_hparams_from_file(config_path)
48
+ self.spk2id: Dict[str, int] = self.hps.data.spk2id # spk - id 映射字典
49
+ self.id2spk: Dict[int, str] = dict() # id - spk 映射字典
50
+ for speaker, speaker_id in self.hps.data.spk2id.items():
51
+ self.id2spk[speaker_id] = speaker
52
+ self.version: str = (
53
+ self.hps.version if hasattr(self.hps, "version") else latest_version
54
+ )
55
+ self.net_g = get_net_g(
56
+ model_path=model_path,
57
+ version=self.version,
58
+ device=device,
59
+ hps=self.hps,
60
+ )
61
+
62
+ def to_dict(self) -> Dict[str, Any]:
63
+ return {
64
+ "config_path": self.config_path,
65
+ "model_path": self.model_path,
66
+ "device": self.device,
67
+ "language": self.language,
68
+ "spk2id": self.spk2id,
69
+ "id2spk": self.id2spk,
70
+ "version": self.version,
71
+ }
72
+
73
+
74
+ class Models:
75
+ def __init__(self):
76
+ self.models: Dict[int, Model] = dict()
77
+ self.num = 0
78
+ # spkInfo[角色名][模型id] = 角色id
79
+ self.spk_info: Dict[str, Dict[int, int]] = dict()
80
+ self.path2ids: Dict[str, Set[int]] = dict() # 路径指向的model的id
81
+
82
+ def init_model(
83
+ self, config_path: str, model_path: str, device: str, language: str
84
+ ) -> int:
85
+ """
86
+ Initialize and register a model
87
+
88
+ :param config_path: path to the model's config.json
89
+ :param model_path: path to the model weights
90
+ :param device: device used for inference
91
+ :param language: default inference language
92
+ """
93
+ # 若文件不存在则不进行加载
94
+ if not os.path.isfile(model_path):
95
+ if model_path != "":
96
+ logger.warning(f"模型文件{model_path} 不存在,不进行初始化")
97
+ return self.num
98
+ if not os.path.isfile(config_path):
99
+ if config_path != "":
100
+ logger.warning(f"配置文件{config_path} 不存在,不进行初始化")
101
+ return self.num
102
+
103
+ # 若路径中的模型已存在,则不添加模型,若不存在,则进行初始化。
104
+ model_path = os.path.realpath(model_path)
105
+ if model_path not in self.path2ids.keys():
106
+ self.path2ids[model_path] = {self.num}
107
+ self.models[self.num] = Model(
108
+ config_path=config_path,
109
+ model_path=model_path,
110
+ device=device,
111
+ language=language,
112
+ )
113
+ logger.success(
114
+ f"添加模型{model_path},使用配置文件{os.path.realpath(config_path)}"
115
+ )
116
+ else:
117
+ # 获取一个指向id
118
+ m_id = next(iter(self.path2ids[model_path]))
119
+ self.models[self.num] = self.models[m_id]
120
+ self.path2ids[model_path].add(self.num)
121
+ logger.success("模型已存在,添加模型引用。")
122
+ # 添加角色信息
123
+ for speaker, speaker_id in self.models[self.num].spk2id.items():
124
+ if speaker not in self.spk_info.keys():
125
+ self.spk_info[speaker] = {self.num: speaker_id}
126
+ else:
127
+ self.spk_info[speaker][self.num] = speaker_id
128
+ # 修改计数
129
+ self.num += 1
130
+ return self.num - 1
131
+
132
+ def del_model(self, index: int) -> Optional[int]:
133
+ """删除对应序号的模型,若不存在则返回None"""
134
+ if index not in self.models.keys():
135
+ return None
136
+ # 删除角色信息
137
+ for speaker, speaker_id in self.models[index].spk2id.items():
138
+ self.spk_info[speaker].pop(index)
139
+ if len(self.spk_info[speaker]) == 0:
140
+ # 若对应角色的所有模型都被删除,则清除该角色信息
141
+ self.spk_info.pop(speaker)
142
+ # 删除路径信息
143
+ model_path = os.path.realpath(self.models[index].model_path)
144
+ self.path2ids[model_path].remove(index)
145
+ if len(self.path2ids[model_path]) == 0:
146
+ self.path2ids.pop(model_path)
147
+ logger.success(f"删除模型{model_path}, id = {index}")
148
+ else:
149
+ logger.success(f"删除模型引用{model_path}, id = {index}")
150
+ # 删除模型
151
+ self.models.pop(index)
152
+ gc.collect()
153
+ if torch.cuda.is_available():
154
+ torch.cuda.empty_cache()
155
+ return index
156
+
157
+ def get_models(self):
158
+ """获取所有模型"""
159
+ return self.models
160
+
161
+
162
+ if __name__ == "__main__":
163
+ app = FastAPI()
164
+ app.logger = logger
165
+ # 挂载静态文件
166
+ logger.info("开始挂载网页页面")
167
+ StaticDir: str = "./Web"
168
+ if not os.path.isdir(StaticDir):
169
+ logger.warning(
170
+ "缺少网页资源,无法开启网页页面,如有需要请在 https://github.com/jiangyuxiaoxiao/Bert-VITS2-UI 或者Bert-VITS对应版本的release页面下载"
171
+ )
172
+ else:
173
+ dirs = [fir.name for fir in os.scandir(StaticDir) if fir.is_dir()]
174
+ files = [fir.name for fir in os.scandir(StaticDir) if fir.is_file()]
175
+ for dirName in dirs:
176
+ app.mount(
177
+ f"/{dirName}",
178
+ StaticFiles(directory=f"./{StaticDir}/{dirName}"),
179
+ name=dirName,
180
+ )
181
+ loaded_models = Models()
182
+ # 加载模型
183
+ logger.info("开始加载模型")
184
+ models_info = config.server_config.models
185
+ for model_info in models_info:
186
+ loaded_models.init_model(
187
+ config_path=model_info["config"],
188
+ model_path=model_info["model"],
189
+ device=model_info["device"],
190
+ language=model_info["language"],
191
+ )
192
+
193
+ @app.get("/")
194
+ async def index():
195
+ return FileResponse("./Web/index.html")
196
+
197
+ async def _voice(
198
+ text: str,
199
+ model_id: int,
200
+ speaker_name: str,
201
+ speaker_id: int,
202
+ sdp_ratio: float,
203
+ noise: float,
204
+ noisew: float,
205
+ length: float,
206
+ language: str,
207
+ auto_translate: bool,
208
+ auto_split: bool,
209
+ emotion: Optional[Union[int, str]] = None,
210
+ reference_audio=None,
211
+ style_text: Optional[str] = None,
212
+ style_weight: float = 0.7,
213
+ ) -> Union[Response, Dict[str, Any]]:
214
+ """TTS实现函数"""
215
+
216
+ # 检查
217
+ # 检查模型是否存在
218
+ if model_id not in loaded_models.models.keys():
219
+ logger.error(f"/voice 请求错误:模型model_id={model_id}未加载")
220
+ return {"status": 10, "detail": f"模型model_id={model_id}未加载"}
221
+ # 检查是否提供speaker
222
+ if speaker_name is None and speaker_id is None:
223
+ logger.error("/voice 请求错误:推理请求未提供speaker_name或speaker_id")
224
+ return {"status": 11, "detail": "请提供speaker_name或speaker_id"}
225
+ elif speaker_name is None:
226
+ # 检查speaker_id是否存在
227
+ if speaker_id not in loaded_models.models[model_id].id2spk.keys():
228
+ logger.error(f"/voice 请求错误:角色speaker_id={speaker_id}不存在")
229
+ return {"status": 12, "detail": f"角色speaker_id={speaker_id}不存在"}
230
+ speaker_name = loaded_models.models[model_id].id2spk[speaker_id]
231
+ # 检查speaker_name是否存在
232
+ if speaker_name not in loaded_models.models[model_id].spk2id.keys():
233
+ logger.error(f"/voice 请求错误:角色speaker_name={speaker_name}不存在")
234
+ return {"status": 13, "detail": f"角色speaker_name={speaker_name}不存在"}
235
+ # 未传入则使用默认语言
236
+ if language is None:
237
+ language = loaded_models.models[model_id].language
238
+ # 翻译会破坏mix结构,auto也会变得无意义。不要在这两个模式下使用
239
+ if auto_translate:
240
+ if language == "auto" or language == "mix":
241
+ logger.error(
242
+ f"/voice 请求错误:请勿同时使用language = {language}与auto_translate模式"
243
+ )
244
+ return {
245
+ "status": 20,
246
+ "detail": f"请勿同时使用language = {language}与auto_translate模式",
247
+ }
248
+ text = trans.translate(Sentence=text, to_Language=language.lower())
249
+ if reference_audio is not None:
250
+ ref_audio = BytesIO(await reference_audio.read())
251
+ # 2.2 适配
252
+ if loaded_models.models[model_id].version == "2.2":
253
+ ref_audio, _ = librosa.load(ref_audio, 48000)
254
+ else:
255
+ ref_audio = reference_audio
256
+
257
+ # 改动:增加使用 || 对文本进行主动切分
258
+ # 切分优先级: || → auto/mix → auto_split
259
+ text2 = text.replace("\n", "").lstrip()
260
+ texts: List[str] = text2.split("||")
261
+
262
+ # 对于mix和auto的说明:出于版本兼容性的考虑,暂时无法使用multilang的方式进行推理
263
+ if language == "MIX":
264
+ text_language_speakers: List[Tuple[str, str, str]] = []
265
+ for _text in texts:
266
+ speaker_pieces = _text.split("[") # 按说话人分割多块
267
+ for speaker_piece in speaker_pieces:
268
+ if speaker_piece == "":
269
+ continue
270
+ speaker_piece2 = speaker_piece.split("]")
271
+ if len(speaker_piece2) != 2:
272
+ return {
273
+ "status": 21,
274
+ "detail": "MIX语法错误",
275
+ }
276
+ speaker = speaker_piece2[0].strip()
277
+ lang_pieces = speaker_piece2[1].split("<")
278
+ for lang_piece in lang_pieces:
279
+ if lang_piece == "":
280
+ continue
281
+ lang_piece2 = lang_piece.split(">")
282
+ if len(lang_piece2) != 2:
283
+ return {
284
+ "status": 21,
285
+ "detail": "MIX语法错误",
286
+ }
287
+ lang = lang_piece2[0].strip()
288
+ if lang.upper() not in ["ZH", "EN", "JP"]:
289
+ return {
290
+ "status": 21,
291
+ "detail": "MIX语法错误",
292
+ }
293
+ t = lang_piece2[1]
294
+ text_language_speakers.append((t, lang.upper(), speaker))
295
+
296
+ elif language == "AUTO":
297
+ text_language_speakers: List[Tuple[str, str, str]] = [
298
+ (final_text, language.upper().replace("JA", "JP"), speaker_name)
299
+ for sub_list in [
300
+ split_by_language(_text, target_languages=["zh", "ja", "en"])
301
+ for _text in texts
302
+ if _text != ""
303
+ ]
304
+ for final_text, language in sub_list
305
+ if final_text != ""
306
+ ]
307
+ else:
308
+ text_language_speakers: List[Tuple[str, str, str]] = [
309
+ (_text, language, speaker_name) for _text in texts if _text != ""
310
+ ]
311
+
312
+ if auto_split:
313
+ text_language_speakers: List[Tuple[str, str, str]] = [
314
+ (final_text, lang, speaker)
315
+ for _text, lang, speaker in text_language_speakers
316
+ for final_text in cut_sent(_text)
317
+ ]
318
+
319
+ audios = []
320
+ with torch.no_grad():
321
+ for _text, lang, speaker in text_language_speakers:
322
+ audios.append(
323
+ infer(
324
+ text=_text,
325
+ sdp_ratio=sdp_ratio,
326
+ noise_scale=noise,
327
+ noise_scale_w=noisew,
328
+ length_scale=length,
329
+ sid=speaker,
330
+ language=lang,
331
+ hps=loaded_models.models[model_id].hps,
332
+ net_g=loaded_models.models[model_id].net_g,
333
+ device=loaded_models.models[model_id].device,
334
+ emotion=emotion,
335
+ reference_audio=ref_audio,
336
+ style_text=style_text,
337
+ style_weight=style_weight,
338
+ )
339
+ )
340
+ # audios.append(np.zeros(int(44100 * 0.2)))
341
+ # audios.pop()
342
+ audio = np.concatenate(audios)
343
+ audio = gradio.processing_utils.convert_to_16_bit_wav(audio)
344
+ with BytesIO() as wavContent:
345
+ wavfile.write(
346
+ wavContent, loaded_models.models[model_id].hps.data.sampling_rate, audio
347
+ )
348
+ response = Response(content=wavContent.getvalue(), media_type="audio/wav")
349
+ return response
350
+
351
+ @app.post("/voice")
352
+ async def voice(
353
+ request: Request, # fastapi自动注入
354
+ text: str = Form(...),
355
+ model_id: int = Query(..., description="模型ID"), # 模型序号
356
+ speaker_name: str = Query(
357
+ None, description="说话人名"
358
+ ), # speaker_name与 speaker_id二者选其一
359
+ speaker_id: int = Query(None, description="说话人id,与speaker_name二选一"),
360
+ sdp_ratio: float = Query(0.2, description="SDP/DP混合比"),
361
+ noise: float = Query(0.2, description="感情"),
362
+ noisew: float = Query(0.9, description="音素长度"),
363
+ length: float = Query(1, description="语速"),
364
+ language: str = Query(None, description="语言"), # 若不指定使用语言则使用默认值
365
+ auto_translate: bool = Query(False, description="自动翻译"),
366
+ auto_split: bool = Query(False, description="自动切分"),
367
+ emotion: Optional[Union[int, str]] = Query(None, description="emo"),
368
+ reference_audio: UploadFile = File(None),
369
+ style_text: Optional[str] = Form(None, description="风格文本"),
370
+ style_weight: float = Query(0.7, description="风格权重"),
371
+ ):
372
+ """语音接口,若需要上传参考音频请仅使用post请求"""
373
+ logger.info(
374
+ f"{request.client.host}:{request.client.port}/voice { unquote(str(request.query_params) )} text={text}"
375
+ )
376
+ return await _voice(
377
+ text=text,
378
+ model_id=model_id,
379
+ speaker_name=speaker_name,
380
+ speaker_id=speaker_id,
381
+ sdp_ratio=sdp_ratio,
382
+ noise=noise,
383
+ noisew=noisew,
384
+ length=length,
385
+ language=language,
386
+ auto_translate=auto_translate,
387
+ auto_split=auto_split,
388
+ emotion=emotion,
389
+ reference_audio=reference_audio,
390
+ style_text=style_text,
391
+ style_weight=style_weight,
392
+ )
393
+
394
+ @app.get("/voice")
395
+ async def voice(
396
+ request: Request, # fastapi自动注入
397
+ text: str = Query(..., description="输入文字"),
398
+ model_id: int = Query(..., description="模型ID"), # 模型序号
399
+ speaker_name: str = Query(
400
+ None, description="说话人名"
401
+ ), # speaker_name与 speaker_id二者选其一
402
+ speaker_id: int = Query(None, description="说话人id,与speaker_name二选一"),
403
+ sdp_ratio: float = Query(0.2, description="SDP/DP混合比"),
404
+ noise: float = Query(0.2, description="感情"),
405
+ noisew: float = Query(0.9, description="音素长度"),
406
+ length: float = Query(1, description="语速"),
407
+ language: str = Query(None, description="语言"), # 若不指定使用语言则使用默认值
408
+ auto_translate: bool = Query(False, description="自动翻译"),
409
+ auto_split: bool = Query(False, description="自动切分"),
410
+ emotion: Optional[Union[int, str]] = Query(None, description="emo"),
411
+ style_text: Optional[str] = Query(None, description="风格文本"),
412
+ style_weight: float = Query(0.7, description="风格权重"),
413
+ ):
414
+ """语音接口,不建议使用"""
415
+ logger.info(
416
+ f"{request.client.host}:{request.client.port}/voice { unquote(str(request.query_params) )}"
417
+ )
418
+ return await _voice(
419
+ text=text,
420
+ model_id=model_id,
421
+ speaker_name=speaker_name,
422
+ speaker_id=speaker_id,
423
+ sdp_ratio=sdp_ratio,
424
+ noise=noise,
425
+ noisew=noisew,
426
+ length=length,
427
+ language=language,
428
+ auto_translate=auto_translate,
429
+ auto_split=auto_split,
430
+ emotion=emotion,
431
+ style_text=style_text,
432
+ style_weight=style_weight,
433
+ )
434
+
435
+ @app.get("/models/info")
436
+ def get_loaded_models_info(request: Request):
437
+ """获取已加载模型信息"""
438
+
439
+ result: Dict[str, Dict] = dict()
440
+ for key, model in loaded_models.models.items():
441
+ result[str(key)] = model.to_dict()
442
+ return result
443
+
444
+ @app.get("/models/delete")
445
+ def delete_model(
446
+ request: Request, model_id: int = Query(..., description="删除模型id")
447
+ ):
448
+ """删除指定模型"""
449
+ logger.info(
450
+ f"{request.client.host}:{request.client.port}/models/delete { unquote(str(request.query_params) )}"
451
+ )
452
+ result = loaded_models.del_model(model_id)
453
+ if result is None:
454
+ logger.error(f"/models/delete 模型删除错误:模型{model_id}不存在,删除失败")
455
+ return {"status": 14, "detail": f"模型{model_id}不存在,删除失败"}
456
+
457
+ return {"status": 0, "detail": "删除成功"}
458
+
459
+ @app.get("/models/add")
460
+ def add_model(
461
+ request: Request,
462
+ model_path: str = Query(..., description="添加模型路径"),
463
+ config_path: str = Query(
464
+ None,
465
+ description="添加模型配置文件路径,不填则使用./config.json或../config.json",
466
+ ),
467
+ device: str = Query("cuda", description="推理使用设备"),
468
+ language: str = Query("ZH", description="模型默认语言"),
469
+ ):
470
+ """添加指定模型:允许重复添加相同路径模型,且不重复占用内存"""
471
+ logger.info(
472
+ f"{request.client.host}:{request.client.port}/models/add { unquote(str(request.query_params) )}"
473
+ )
474
+ if config_path is None:
475
+ model_dir = os.path.dirname(model_path)
476
+ if os.path.isfile(os.path.join(model_dir, "config.json")):
477
+ config_path = os.path.join(model_dir, "config.json")
478
+ elif os.path.isfile(os.path.join(model_dir, "../config.json")):
479
+ config_path = os.path.join(model_dir, "../config.json")
480
+ else:
481
+ logger.error(
482
+ "/models/add 模型添加失败:未在模型所在目录以及上级目录找到config.json文件"
483
+ )
484
+ return {
485
+ "status": 15,
486
+ "detail": "查询未传入配置文件路径,同时默认路径./与../中不存在配置文件config.json。",
487
+ }
488
+ try:
489
+ model_id = loaded_models.init_model(
490
+ config_path=config_path,
491
+ model_path=model_path,
492
+ device=device,
493
+ language=language,
494
+ )
495
+ except Exception:
496
+ logging.exception("模型加载出错")
497
+ return {
498
+ "status": 16,
499
+ "detail": "模型加载出错,详细查看日志",
500
+ }
501
+ return {
502
+ "status": 0,
503
+ "detail": "模型添加成功",
504
+ "Data": {
505
+ "model_id": model_id,
506
+ "model_info": loaded_models.models[model_id].to_dict(),
507
+ },
508
+ }
509
+
510
+ def _get_all_models(root_dir: str = "Data", only_unloaded: bool = False):
511
+ """从root_dir搜索获取所有可用模型"""
512
+ result: Dict[str, List[str]] = dict()
513
+ files = os.listdir(root_dir) + ["."]
514
+ for file in files:
515
+ if os.path.isdir(os.path.join(root_dir, file)):
516
+ sub_dir = os.path.join(root_dir, file)
517
+ # 搜索 "sub_dir" 、 "sub_dir/models" 两个路径
518
+ result[file] = list()
519
+ sub_files = os.listdir(sub_dir)
520
+ model_files = []
521
+ for sub_file in sub_files:
522
+ relpath = os.path.realpath(os.path.join(sub_dir, sub_file))
523
+ if only_unloaded and relpath in loaded_models.path2ids.keys():
524
+ continue
525
+ if sub_file.endswith(".pth") and sub_file.startswith("G_"):
526
+ if os.path.isfile(relpath):
527
+ model_files.append(sub_file)
528
+ # 对模型文件按步数排序
529
+ model_files = sorted(
530
+ model_files,
531
+ key=lambda pth: (
532
+ int(pth.lstrip("G_").rstrip(".pth"))
533
+ if pth.lstrip("G_").rstrip(".pth").isdigit()
534
+ else 10**10
535
+ ),
536
+ )
537
+ result[file] = model_files
538
+ models_dir = os.path.join(sub_dir, "models")
539
+ model_files = []
540
+ if os.path.isdir(models_dir):
541
+ sub_files = os.listdir(models_dir)
542
+ for sub_file in sub_files:
543
+ relpath = os.path.realpath(os.path.join(models_dir, sub_file))
544
+ if only_unloaded and relpath in loaded_models.path2ids.keys():
545
+ continue
546
+ if sub_file.endswith(".pth") and sub_file.startswith("G_"):
547
+ if os.path.isfile(os.path.join(models_dir, sub_file)):
548
+ model_files.append(f"models/{sub_file}")
549
+ # 对模型文件按步数排序
550
+ model_files = sorted(
551
+ model_files,
552
+ key=lambda pth: (
553
+ int(pth.lstrip("models/G_").rstrip(".pth"))
554
+ if pth.lstrip("models/G_").rstrip(".pth").isdigit()
555
+ else 10**10
556
+ ),
557
+ )
558
+ result[file] += model_files
559
+ if len(result[file]) == 0:
560
+ result.pop(file)
561
+
562
+ return result
563
+
564
+ @app.get("/models/get_unloaded")
565
+ def get_unloaded_models_info(
566
+ request: Request, root_dir: str = Query("Data", description="搜索根目录")
567
+ ):
568
+ """获取未加载模型"""
569
+ logger.info(
570
+ f"{request.client.host}:{request.client.port}/models/get_unloaded { unquote(str(request.query_params) )}"
571
+ )
572
+ return _get_all_models(root_dir, only_unloaded=True)
573
+
574
+ @app.get("/models/get_local")
575
+ def get_local_models_info(
576
+ request: Request, root_dir: str = Query("Data", description="搜索根目录")
577
+ ):
578
+ """获取全部本地模型"""
579
+ logger.info(
580
+ f"{request.client.host}:{request.client.port}/models/get_local { unquote(str(request.query_params) )}"
581
+ )
582
+ return _get_all_models(root_dir, only_unloaded=False)
583
+
584
+ @app.get("/status")
585
+ def get_status():
586
+ """获取电脑运行状态"""
587
+ cpu_percent = psutil.cpu_percent(interval=1)
588
+ memory_info = psutil.virtual_memory()
589
+ memory_total = memory_info.total
590
+ memory_available = memory_info.available
591
+ memory_used = memory_info.used
592
+ memory_percent = memory_info.percent
593
+ gpuInfo = []
594
+ devices = ["cpu"]
595
+ for i in range(torch.cuda.device_count()):
596
+ devices.append(f"cuda:{i}")
597
+ gpus = GPUtil.getGPUs()
598
+ for gpu in gpus:
599
+ gpuInfo.append(
600
+ {
601
+ "gpu_id": gpu.id,
602
+ "gpu_load": gpu.load,
603
+ "gpu_memory": {
604
+ "total": gpu.memoryTotal,
605
+ "used": gpu.memoryUsed,
606
+ "free": gpu.memoryFree,
607
+ },
608
+ }
609
+ )
610
+ return {
611
+ "devices": devices,
612
+ "cpu_percent": cpu_percent,
613
+ "memory_total": memory_total,
614
+ "memory_available": memory_available,
615
+ "memory_used": memory_used,
616
+ "memory_percent": memory_percent,
617
+ "gpu": gpuInfo,
618
+ }
619
+
620
+ @app.get("/tools/translate")
621
+ def translate(
622
+ request: Request,
623
+ texts: str = Query(..., description="待翻译文本"),
624
+ to_language: str = Query(..., description="翻译目标语言"),
625
+ ):
626
+ """翻译"""
627
+ logger.info(
628
+ f"{request.client.host}:{request.client.port}/tools/translate { unquote(str(request.query_params) )}"
629
+ )
630
+ return {"texts": trans.translate(Sentence=texts, to_Language=to_language)}
631
+
632
+ all_examples: Dict[str, Dict[str, List]] = dict() # 存放示例
633
+
634
+ @app.get("/tools/random_example")
635
+ def random_example(
636
+ request: Request,
637
+ language: str = Query(None, description="指定语言,未指定则随机返回"),
638
+ root_dir: str = Query("Data", description="搜索根目录"),
639
+ ):
640
+ """
641
+ Return a random audio+text pair for comparison; the audio is picked at random from local directories.
642
+ """
643
+ logger.info(
644
+ f"{request.client.host}:{request.client.port}/tools/random_example { unquote(str(request.query_params) )}"
645
+ )
646
+ global all_examples
647
+ # 数据初始化
648
+ if root_dir not in all_examples.keys():
649
+ all_examples[root_dir] = {"ZH": [], "JP": [], "EN": []}
650
+
651
+ examples = all_examples[root_dir]
652
+
653
+ # 从项目Data目录中搜索train/val.list
654
+ for root, directories, _files in os.walk(root_dir):
655
+ for file in _files:
656
+ if file in ["train.list", "val.list"]:
657
+ with open(
658
+ os.path.join(root, file), mode="r", encoding="utf-8"
659
+ ) as f:
660
+ lines = f.readlines()
661
+ for line in lines:
662
+ data = line.split("|")
663
+ if len(data) != 7:
664
+ continue
665
+ # 音频存在 且语言为ZH/EN/JP
666
+ if os.path.isfile(data[0]) and data[2] in [
667
+ "ZH",
668
+ "JP",
669
+ "EN",
670
+ ]:
671
+ examples[data[2]].append(
672
+ {
673
+ "text": data[3],
674
+ "audio": data[0],
675
+ "speaker": data[1],
676
+ }
677
+ )
678
+
679
+ examples = all_examples[root_dir]
680
+ if language is None:
681
+ if len(examples["ZH"]) + len(examples["JP"]) + len(examples["EN"]) == 0:
682
+ return {"status": 17, "detail": "没有加载任何示例数据"}
683
+ else:
684
+ # 随机选一个
685
+ rand_num = random.randint(
686
+ 0,
687
+ len(examples["ZH"]) + len(examples["JP"]) + len(examples["EN"]) - 1,
688
+ )
689
+ # ZH
690
+ if rand_num < len(examples["ZH"]):
691
+ return {"status": 0, "Data": examples["ZH"][rand_num]}
692
+ # JP
693
+ if rand_num < len(examples["ZH"]) + len(examples["JP"]):
694
+ return {
695
+ "status": 0,
696
+ "Data": examples["JP"][rand_num - len(examples["ZH"])],
697
+ }
698
+ # EN
699
+ return {
700
+ "status": 0,
701
+ "Data": examples["EN"][
702
+ rand_num - len(examples["ZH"]) - len(examples["JP"])
703
+ ],
704
+ }
705
+
706
+ else:
707
+ if len(examples[language]) == 0:
708
+ return {"status": 17, "detail": f"没有加载任何{language}数据"}
709
+ return {
710
+ "status": 0,
711
+ "Data": examples[language][
712
+ random.randint(0, len(examples[language]) - 1)
713
+ ],
714
+ }
715
+
716
+ @app.get("/tools/get_audio")
717
+ def get_audio(request: Request, path: str = Query(..., description="本地音频路径")):
718
+ logger.info(
719
+ f"{request.client.host}:{request.client.port}/tools/get_audio { unquote(str(request.query_params) )}"
720
+ )
721
+ if not os.path.isfile(path):
722
+ logger.error(f"/tools/get_audio 获取音频错误:指定音频{path}不存在")
723
+ return {"status": 18, "detail": "指定音频不存在"}
724
+ if not path.lower().endswith(".wav"):
725
+ logger.error(f"/tools/get_audio 获取音频错误:音频{path}非wav文件")
726
+ return {"status": 19, "detail": "非wav格式文件"}
727
+ return FileResponse(path=path)
728
+
729
+ logger.warning("本地服务,请勿将服务端口暴露于外网")
730
+ logger.info(f"api文档地址 http://127.0.0.1:{config.server_config.port}/docs")
731
+ if os.path.isdir(StaticDir):
732
+ webbrowser.open(f"http://127.0.0.1:{config.server_config.port}")
733
+ uvicorn.run(
734
+ app, port=config.server_config.port, host="0.0.0.0", log_level="warning"
735
+ )
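Once the server is running, the GET variant of /voice can be exercised with a plain HTTP client; model_id/speaker_id and the port below assume the defaults in config.yml:

import requests

resp = requests.get(
    "http://127.0.0.1:5000/voice",
    params={"text": "你好", "model_id": 0, "speaker_id": 0, "language": "ZH"},
)
with open("out.wav", "wb") as f:
    f.write(resp.content)  # audio/wav bytes on success, a JSON error object otherwise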
infer.py ADDED
@@ -0,0 +1,185 @@
1
+ """
2
+ Version management, backward-compatible inference, and model loading.
3
+ Version notes:
4
+ 1. Version numbers track the GitHub release versions; a model carries the version of the release it was trained with
5
+ 2. Please declare the version explicitly in the model's config.json by adding a field "version": "your version number"
6
+ Special versions:
7
+ 1.1.1-fix: models trained with 1.1.1, but inferred with the dev branch's Japanese fix
8
+ 2.3: current version
9
+ """
10
+
11
+ import torch
12
+ import commons
13
+ from text import cleaned_text_to_sequence, get_bert
14
+
15
+ # from clap_wrapper import get_clap_audio_feature, get_clap_text_feature
16
+ from typing import Union
17
+ from text.cleaner import clean_text
18
+ import utils
19
+
20
+ from models import SynthesizerTrn
21
+ from text.symbols import symbols
22
+
23
+ latest_version = "2.3"
24
+
25
+
26
+ # def get_emo_(reference_audio, emotion, sid):
27
+ # emo = (
28
+ # torch.from_numpy(get_emo(reference_audio))
29
+ # if reference_audio and emotion == -1
30
+ # else torch.FloatTensor(
31
+ # np.load(f"emo_clustering/{sid}/cluster_center_{emotion}.npy")
32
+ # )
33
+ # )
34
+ # return emo
35
+
36
+
37
+ def get_net_g(model_path: str, version: str, device: str, hps):
38
+ # net_g for the current model version
39
+ net_g = SynthesizerTrn(
40
+ len(symbols),
41
+ hps.data.filter_length // 2 + 1,
42
+ hps.train.segment_size // hps.data.hop_length,
43
+ n_speakers=hps.data.n_speakers,
44
+ **hps.model,
45
+ ).to(device)
46
+ _ = net_g.eval()
47
+ _ = utils.load_checkpoint(model_path, net_g, None, skip_optimizer=True)
48
+ return net_g
49
+
50
+
51
+ def get_text(text, language_str, hps, device, style_text=None, style_weight=0.7):
52
+ style_text = None if style_text == "" else style_text
53
+ # get_text for the current version is implemented here
54
+ norm_text, phone, tone, word2ph = clean_text(text, language_str)
55
+ phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
56
+
57
+ if hps.data.add_blank:
58
+ phone = commons.intersperse(phone, 0)
59
+ tone = commons.intersperse(tone, 0)
60
+ language = commons.intersperse(language, 0)
61
+ for i in range(len(word2ph)):
62
+ word2ph[i] = word2ph[i] * 2
63
+ word2ph[0] += 1
64
+ bert_ori = get_bert(
65
+ norm_text, word2ph, language_str, device, style_text, style_weight
66
+ )
67
+ del word2ph
68
+ assert bert_ori.shape[-1] == len(phone), phone
69
+
70
+ if language_str == "ZH":
71
+ bert = bert_ori
72
+ ja_bert = torch.randn(1024, len(phone))
73
+ en_bert = torch.randn(1024, len(phone))
74
+ elif language_str == "JP":
75
+ bert = torch.randn(1024, len(phone))
76
+ ja_bert = bert_ori
77
+ en_bert = torch.randn(1024, len(phone))
78
+ elif language_str == "EN":
79
+ bert = torch.randn(1024, len(phone))
80
+ ja_bert = torch.randn(1024, len(phone))
81
+ en_bert = bert_ori
82
+ else:
83
+ raise ValueError("language_str should be ZH, JP or EN")
84
+
85
+ assert bert.shape[-1] == len(
86
+ phone
87
+ ), f"Bert seq len {bert.shape[-1]} != {len(phone)}"
88
+
89
+ phone = torch.LongTensor(phone)
90
+ tone = torch.LongTensor(tone)
91
+ language = torch.LongTensor(language)
92
+ return bert, ja_bert, en_bert, phone, tone, language
93
+
94
+
95
+ def infer(
96
+ text,
97
+ emotion: Union[int, str],
98
+ sdp_ratio,
99
+ noise_scale,
100
+ noise_scale_w,
101
+ length_scale,
102
+ sid,
103
+ language,
104
+ hps,
105
+ net_g,
106
+ device,
107
+ reference_audio=None,
108
+ skip_start=False,
109
+ skip_end=False,
110
+ style_text=None,
111
+ style_weight=0.7,
112
+ ):
113
+ # inference for the current version is implemented here
114
+ # emo = get_emo_(reference_audio, emotion, sid)
115
+ # if isinstance(reference_audio, np.ndarray):
116
+ # emo = get_clap_audio_feature(reference_audio, device)
117
+ # else:
118
+ # emo = get_clap_text_feature(emotion, device)
119
+ # emo = torch.squeeze(emo, dim=1)
120
+
121
+ bert, ja_bert, en_bert, phones, tones, lang_ids = get_text(
122
+ text,
123
+ language,
124
+ hps,
125
+ device,
126
+ style_text=style_text,
127
+ style_weight=style_weight,
128
+ )
129
+ if skip_start:
130
+ phones = phones[3:]
131
+ tones = tones[3:]
132
+ lang_ids = lang_ids[3:]
133
+ bert = bert[:, 3:]
134
+ ja_bert = ja_bert[:, 3:]
135
+ en_bert = en_bert[:, 3:]
136
+ if skip_end:
137
+ phones = phones[:-2]
138
+ tones = tones[:-2]
139
+ lang_ids = lang_ids[:-2]
140
+ bert = bert[:, :-2]
141
+ ja_bert = ja_bert[:, :-2]
142
+ en_bert = en_bert[:, :-2]
143
+ with torch.no_grad():
144
+ x_tst = phones.to(device).unsqueeze(0)
145
+ tones = tones.to(device).unsqueeze(0)
146
+ lang_ids = lang_ids.to(device).unsqueeze(0)
147
+ bert = bert.to(device).unsqueeze(0)
148
+ ja_bert = ja_bert.to(device).unsqueeze(0)
149
+ en_bert = en_bert.to(device).unsqueeze(0)
150
+ x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
151
+ # emo = emo.to(device).unsqueeze(0)
152
+ del phones
153
+ speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
154
+ audio = (
155
+ net_g.infer(
156
+ x_tst,
157
+ x_tst_lengths,
158
+ speakers,
159
+ tones,
160
+ lang_ids,
161
+ bert,
162
+ ja_bert,
163
+ en_bert,
164
+ sdp_ratio=sdp_ratio,
165
+ noise_scale=noise_scale,
166
+ noise_scale_w=noise_scale_w,
167
+ length_scale=length_scale,
168
+ )[0][0, 0]
169
+ .data.cpu()
170
+ .float()
171
+ .numpy()
172
+ )
173
+ del (
174
+ x_tst,
175
+ tones,
176
+ lang_ids,
177
+ bert,
178
+ x_tst_lengths,
179
+ speakers,
180
+ ja_bert,
181
+ en_bert,
182
+ ) # , emo
183
+ if torch.cuda.is_available():
184
+ torch.cuda.empty_cache()
185
+ return audio
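A hedged sketch of a direct (non-HTTP) call into infer, mirroring how hiyoriUI.py drives it; the paths are placeholders, and emotion is effectively unused in this version (the CLAP branch is commented out):

import utils
from infer import get_net_g, infer, latest_version

hps = utils.get_hparams_from_file("Data/configs/config.json")
net_g = get_net_g("Data/models/compressed.pth", version=latest_version, device="cpu", hps=hps)
audio = infer(
    text="你好", emotion=None, sdp_ratio=0.2,
    noise_scale=0.2, noise_scale_w=0.9, length_scale=1.0,
    sid=list(hps.data.spk2id.keys())[0],  # first speaker in the config
    language="ZH", hps=hps, net_g=net_g, device="cpu",
)
# audio: float32 numpy array at hps.data.sampling_rate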
losses.py ADDED
@@ -0,0 +1,153 @@
1
+ import torch
2
+ import torchaudio
3
+ from transformers import AutoModel
4
+
5
+
6
+ def feature_loss(fmap_r, fmap_g):
7
+ loss = 0
8
+ for dr, dg in zip(fmap_r, fmap_g):
9
+ for rl, gl in zip(dr, dg):
10
+ rl = rl.float().detach()
11
+ gl = gl.float()
12
+ loss += torch.mean(torch.abs(rl - gl))
13
+
14
+ return loss * 2
15
+
16
+
17
+ def discriminator_loss(disc_real_outputs, disc_generated_outputs):
18
+ loss = 0
19
+ r_losses = []
20
+ g_losses = []
21
+ for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
22
+ dr = dr.float()
23
+ dg = dg.float()
24
+ r_loss = torch.mean((1 - dr) ** 2)
25
+ g_loss = torch.mean(dg**2)
26
+ loss += r_loss + g_loss
27
+ r_losses.append(r_loss.item())
28
+ g_losses.append(g_loss.item())
29
+
30
+ return loss, r_losses, g_losses
31
+
32
+
33
+ def generator_loss(disc_outputs):
34
+ loss = 0
35
+ gen_losses = []
36
+ for dg in disc_outputs:
37
+ dg = dg.float()
38
+ l = torch.mean((1 - dg) ** 2)
39
+ gen_losses.append(l)
40
+ loss += l
41
+
42
+ return loss, gen_losses
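
A small self-contained check of the LSGAN-style losses above, using random stand-in discriminator outputs (the tensor shapes are illustrative, not the model's real ones):

import torch

# Two "discriminators", batch of 4, flattened score maps of different lengths.
real_outs = [torch.rand(4, 100), torch.rand(4, 50)]
fake_outs = [torch.rand(4, 100), torch.rand(4, 50)]

d_loss, r_losses, g_losses = discriminator_loss(real_outs, fake_outs)
adv_loss, gen_losses = generator_loss(fake_outs)
print(d_loss.item(), adv_loss.item())  # scalars; per-discriminator terms in the lists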
43
+
44
+
45
+ def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
46
+ """
47
+ z_p, logs_q: [b, h, t_t]
48
+ m_p, logs_p: [b, h, t_t]
49
+ """
50
+ z_p = z_p.float()
51
+ logs_q = logs_q.float()
52
+ m_p = m_p.float()
53
+ logs_p = logs_p.float()
54
+ z_mask = z_mask.float()
55
+
56
+ kl = logs_p - logs_q - 0.5
57
+ kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p)
58
+ kl = torch.sum(kl * z_mask)
59
+ l = kl / torch.sum(z_mask)
60
+ return l
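
kl_loss implements the usual closed-form Gaussian KL term from VITS, masked and averaged over valid frames. A quick shape sanity check with random tensors (the sizes are illustrative):

import torch

b, h, t = 2, 192, 37  # batch, latent channels, frames (toy sizes)
z_p, logs_q = torch.randn(b, h, t), torch.randn(b, h, t)
m_p, logs_p = torch.randn(b, h, t), torch.randn(b, h, t)
z_mask = torch.ones(b, 1, t)  # no padded frames in this toy case
print(kl_loss(z_p, logs_q, m_p, logs_p, z_mask))  # 0-dim tensor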
61
+
62
+
63
+ class WavLMLoss(torch.nn.Module):
64
+ def __init__(self, model, wd, model_sr, slm_sr=16000):
65
+ super(WavLMLoss, self).__init__()
66
+ self.wavlm = AutoModel.from_pretrained(model)
67
+ self.wd = wd
68
+ self.resample = torchaudio.transforms.Resample(model_sr, slm_sr)
69
+ self.wavlm.eval()
70
+ for param in self.wavlm.parameters():
71
+ param.requires_grad = False
72
+
73
+ def forward(self, wav, y_rec):
74
+ with torch.no_grad():
75
+ wav_16 = self.resample(wav)
76
+ wav_embeddings = self.wavlm(
77
+ input_values=wav_16, output_hidden_states=True
78
+ ).hidden_states
79
+ y_rec_16 = self.resample(y_rec)
80
+ y_rec_embeddings = self.wavlm(
81
+ input_values=y_rec_16.squeeze(), output_hidden_states=True
82
+ ).hidden_states
83
+
84
+ floss = 0
85
+ for er, eg in zip(wav_embeddings, y_rec_embeddings):
86
+ floss += torch.mean(torch.abs(er - eg))
87
+
88
+ return floss.mean()
89
+
90
+ def generator(self, y_rec):
91
+ y_rec_16 = self.resample(y_rec)
92
+ y_rec_embeddings = self.wavlm(
93
+ input_values=y_rec_16, output_hidden_states=True
94
+ ).hidden_states
95
+ y_rec_embeddings = (
96
+ torch.stack(y_rec_embeddings, dim=1)
97
+ .transpose(-1, -2)
98
+ .flatten(start_dim=1, end_dim=2)
99
+ )
100
+ y_df_hat_g = self.wd(y_rec_embeddings)
101
+ loss_gen = torch.mean((1 - y_df_hat_g) ** 2)
102
+
103
+ return loss_gen
104
+
105
+ def discriminator(self, wav, y_rec):
106
+ with torch.no_grad():
107
+ wav_16 = self.resample(wav)
108
+ wav_embeddings = self.wavlm(
109
+ input_values=wav_16, output_hidden_states=True
110
+ ).hidden_states
111
+ y_rec_16 = self.resample(y_rec)
112
+ y_rec_embeddings = self.wavlm(
113
+ input_values=y_rec_16, output_hidden_states=True
114
+ ).hidden_states
115
+
116
+ y_embeddings = (
117
+ torch.stack(wav_embeddings, dim=1)
118
+ .transpose(-1, -2)
119
+ .flatten(start_dim=1, end_dim=2)
120
+ )
121
+ y_rec_embeddings = (
122
+ torch.stack(y_rec_embeddings, dim=1)
123
+ .transpose(-1, -2)
124
+ .flatten(start_dim=1, end_dim=2)
125
+ )
126
+
127
+ y_d_rs = self.wd(y_embeddings)
128
+ y_d_gs = self.wd(y_rec_embeddings)
129
+
130
+ y_df_hat_r, y_df_hat_g = y_d_rs, y_d_gs
131
+
132
+ r_loss = torch.mean((1 - y_df_hat_r) ** 2)
133
+ g_loss = torch.mean((y_df_hat_g) ** 2)
134
+
135
+ loss_disc_f = r_loss + g_loss
136
+
137
+ return loss_disc_f.mean()
138
+
139
+ def discriminator_forward(self, wav):
140
+ with torch.no_grad():
141
+ wav_16 = self.resample(wav)
142
+ wav_embeddings = self.wavlm(
143
+ input_values=wav_16, output_hidden_states=True
144
+ ).hidden_states
145
+ y_embeddings = (
146
+ torch.stack(wav_embeddings, dim=1)
147
+ .transpose(-1, -2)
148
+ .flatten(start_dim=1, end_dim=2)
149
+ )
150
+
151
+ y_d_rs = self.wd(y_embeddings)
152
+
153
+ return y_d_rs
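
How WavLMLoss is typically wired up, as a hedged sketch: the checkpoint name and sample rate below are assumptions (this file does not fix them), and WavLMDiscriminator comes from models.py later in this commit. Instantiating it downloads the WavLM weights from the Hub.

import torch
from models import WavLMDiscriminator  # defined later in this commit

wd = WavLMDiscriminator()  # defaults match WavLM's 13 x 768 hidden states
wl = WavLMLoss("microsoft/wavlm-base-plus", wd, model_sr=44100)  # names assumed

wav = torch.randn(2, 44100)       # ground-truth batch at the model sample rate
y_rec = torch.randn(2, 1, 44100)  # generator output with a channel dimension
print(wl(wav, y_rec))                            # WavLM feature-matching loss
print(wl.discriminator(wav, y_rec.squeeze(1)))   # SLM discriminator loss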
mel_processing.py ADDED
@@ -0,0 +1,142 @@
1
+ import torch
2
+ import torch.utils.data
3
+ from librosa.filters import mel as librosa_mel_fn
4
+ import warnings
5
+
6
+ # warnings.simplefilter(action='ignore', category=FutureWarning)
7
+ warnings.filterwarnings(action="ignore")
8
+ MAX_WAV_VALUE = 32768.0
9
+
10
+
11
+ def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
12
+ """
13
+ PARAMS
14
+ ------
15
+ C: compression factor
16
+ """
17
+ return torch.log(torch.clamp(x, min=clip_val) * C)
18
+
19
+
20
+ def dynamic_range_decompression_torch(x, C=1):
21
+ """
22
+ PARAMS
23
+ ------
24
+ C: compression factor used to compress
25
+ """
26
+ return torch.exp(x) / C
27
+
28
+
29
+ def spectral_normalize_torch(magnitudes):
30
+ output = dynamic_range_compression_torch(magnitudes)
31
+ return output
32
+
33
+
34
+ def spectral_de_normalize_torch(magnitudes):
35
+ output = dynamic_range_decompression_torch(magnitudes)
36
+ return output
37
+
38
+
39
+ mel_basis = {}
40
+ hann_window = {}
41
+
42
+
43
+ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
44
+ if torch.min(y) < -1.0:
45
+ print("min value is ", torch.min(y))
46
+ if torch.max(y) > 1.0:
47
+ print("max value is ", torch.max(y))
48
+
49
+ global hann_window
50
+ dtype_device = str(y.dtype) + "_" + str(y.device)
51
+ wnsize_dtype_device = str(win_size) + "_" + dtype_device
52
+ if wnsize_dtype_device not in hann_window:
53
+ hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
54
+ dtype=y.dtype, device=y.device
55
+ )
56
+
57
+ y = torch.nn.functional.pad(
58
+ y.unsqueeze(1),
59
+ (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
60
+ mode="reflect",
61
+ )
62
+ y = y.squeeze(1)
63
+
64
+ spec = torch.stft(
65
+ y,
66
+ n_fft,
67
+ hop_length=hop_size,
68
+ win_length=win_size,
69
+ window=hann_window[wnsize_dtype_device],
70
+ center=center,
71
+ pad_mode="reflect",
72
+ normalized=False,
73
+ onesided=True,
74
+ return_complex=False,
75
+ )
76
+
77
+ spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
78
+ return spec
79
+
80
+
81
+ def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
82
+ global mel_basis
83
+ dtype_device = str(spec.dtype) + "_" + str(spec.device)
84
+ fmax_dtype_device = str(fmax) + "_" + dtype_device
85
+ if fmax_dtype_device not in mel_basis:
86
+ mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)  # keyword args; librosa >= 0.10 rejects positionals
87
+ mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
88
+ dtype=spec.dtype, device=spec.device
89
+ )
90
+ spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
91
+ spec = spectral_normalize_torch(spec)
92
+ return spec
93
+
94
+
95
+ def mel_spectrogram_torch(
96
+ y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
97
+ ):
98
+ if torch.min(y) < -1.0:
99
+ print("min value is ", torch.min(y))
100
+ if torch.max(y) > 1.0:
101
+ print("max value is ", torch.max(y))
102
+
103
+ global mel_basis, hann_window
104
+ dtype_device = str(y.dtype) + "_" + str(y.device)
105
+ fmax_dtype_device = str(fmax) + "_" + dtype_device
106
+ wnsize_dtype_device = str(win_size) + "_" + dtype_device
107
+ if fmax_dtype_device not in mel_basis:
108
+ mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)  # keyword args; librosa >= 0.10 rejects positionals
109
+ mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
110
+ dtype=y.dtype, device=y.device
111
+ )
112
+ if wnsize_dtype_device not in hann_window:
113
+ hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
114
+ dtype=y.dtype, device=y.device
115
+ )
116
+
117
+ y = torch.nn.functional.pad(
118
+ y.unsqueeze(1),
119
+ (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
120
+ mode="reflect",
121
+ )
122
+ y = y.squeeze(1)
123
+
124
+ spec = torch.stft(
125
+ y,
126
+ n_fft,
127
+ hop_length=hop_size,
128
+ win_length=win_size,
129
+ window=hann_window[wnsize_dtype_device],
130
+ center=center,
131
+ pad_mode="reflect",
132
+ normalized=False,
133
+ onesided=True,
134
+ return_complex=False,
135
+ )
136
+
137
+ spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
138
+
139
+ spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
140
+ spec = spectral_normalize_torch(spec)
141
+
142
+ return spec
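
A minimal sketch of the mel pipeline above. The STFT/mel parameters are typical 44.1 kHz settings for this kind of model, but the authoritative values live in Data/configs/config.json, so treat the numbers as assumptions:

import torch

y = torch.randn(1, 44100).clamp(-1.0, 1.0)  # 1 s of audio in [-1, 1]
mel = mel_spectrogram_torch(
    y,
    n_fft=2048, num_mels=128, sampling_rate=44100,
    hop_size=512, win_size=2048, fmin=0, fmax=None,
    center=False,
)
print(mel.shape)  # [1, 128, T] log-compressed mel frames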
models.py ADDED
@@ -0,0 +1,1074 @@
1
+ import math
2
+ import torch
3
+ from torch import nn
4
+ from torch.nn import functional as F
5
+
6
+ import commons
7
+ import modules
8
+ import attentions
9
+ import monotonic_align
10
+
11
+ from torch.nn import Conv1d, ConvTranspose1d, Conv2d
12
+ from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
13
+
14
+ from commons import init_weights, get_padding
15
+ from text import symbols, num_tones, num_languages
16
+
17
+
18
+ class DurationDiscriminator(nn.Module): # vits2
19
+ def __init__(
20
+ self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0
21
+ ):
22
+ super().__init__()
23
+
24
+ self.in_channels = in_channels
25
+ self.filter_channels = filter_channels
26
+ self.kernel_size = kernel_size
27
+ self.p_dropout = p_dropout
28
+ self.gin_channels = gin_channels
29
+
30
+ self.drop = nn.Dropout(p_dropout)
31
+ self.conv_1 = nn.Conv1d(
32
+ in_channels, filter_channels, kernel_size, padding=kernel_size // 2
33
+ )
34
+ self.norm_1 = modules.LayerNorm(filter_channels)
35
+ self.conv_2 = nn.Conv1d(
36
+ filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
37
+ )
38
+ self.norm_2 = modules.LayerNorm(filter_channels)
39
+ self.dur_proj = nn.Conv1d(1, filter_channels, 1)
40
+
41
+ self.LSTM = nn.LSTM(
42
+ 2 * filter_channels, filter_channels, batch_first=True, bidirectional=True
43
+ )
44
+
45
+ if gin_channels != 0:
46
+ self.cond = nn.Conv1d(gin_channels, in_channels, 1)
47
+
48
+ self.output_layer = nn.Sequential(
49
+ nn.Linear(2 * filter_channels, 1), nn.Sigmoid()
50
+ )
51
+
52
+ def forward_probability(self, x, dur):
53
+ dur = self.dur_proj(dur)
54
+ x = torch.cat([x, dur], dim=1)
55
+ x = x.transpose(1, 2)
56
+ x, _ = self.LSTM(x)
57
+ output_prob = self.output_layer(x)
58
+ return output_prob
59
+
60
+ def forward(self, x, x_mask, dur_r, dur_hat, g=None):
61
+ x = torch.detach(x)
62
+ if g is not None:
63
+ g = torch.detach(g)
64
+ x = x + self.cond(g)
65
+ x = self.conv_1(x * x_mask)
66
+ x = torch.relu(x)
67
+ x = self.norm_1(x)
68
+ x = self.drop(x)
69
+ x = self.conv_2(x * x_mask)
70
+ x = torch.relu(x)
71
+ x = self.norm_2(x)
72
+ x = self.drop(x)
73
+
74
+ output_probs = []
75
+ for dur in [dur_r, dur_hat]:
76
+ output_prob = self.forward_probability(x, dur)
77
+ output_probs.append(output_prob)
78
+
79
+ return output_probs
80
+
81
+
82
+ class TransformerCouplingBlock(nn.Module):
83
+ def __init__(
84
+ self,
85
+ channels,
86
+ hidden_channels,
87
+ filter_channels,
88
+ n_heads,
89
+ n_layers,
90
+ kernel_size,
91
+ p_dropout,
92
+ n_flows=4,
93
+ gin_channels=0,
94
+ share_parameter=False,
95
+ ):
96
+ super().__init__()
97
+ self.channels = channels
98
+ self.hidden_channels = hidden_channels
99
+ self.kernel_size = kernel_size
100
+ self.n_layers = n_layers
101
+ self.n_flows = n_flows
102
+ self.gin_channels = gin_channels
103
+
104
+ self.flows = nn.ModuleList()
105
+
106
+ self.wn = (
107
+ attentions.FFT(
108
+ hidden_channels,
109
+ filter_channels,
110
+ n_heads,
111
+ n_layers,
112
+ kernel_size,
113
+ p_dropout,
114
+ isflow=True,
115
+ gin_channels=self.gin_channels,
116
+ )
117
+ if share_parameter
118
+ else None
119
+ )
120
+
121
+ for i in range(n_flows):
122
+ self.flows.append(
123
+ modules.TransformerCouplingLayer(
124
+ channels,
125
+ hidden_channels,
126
+ kernel_size,
127
+ n_layers,
128
+ n_heads,
129
+ p_dropout,
130
+ filter_channels,
131
+ mean_only=True,
132
+ wn_sharing_parameter=self.wn,
133
+ gin_channels=self.gin_channels,
134
+ )
135
+ )
136
+ self.flows.append(modules.Flip())
137
+
138
+ def forward(self, x, x_mask, g=None, reverse=False):
139
+ if not reverse:
140
+ for flow in self.flows:
141
+ x, _ = flow(x, x_mask, g=g, reverse=reverse)
142
+ else:
143
+ for flow in reversed(self.flows):
144
+ x = flow(x, x_mask, g=g, reverse=reverse)
145
+ return x
146
+
147
+
148
+ class StochasticDurationPredictor(nn.Module):
149
+ def __init__(
150
+ self,
151
+ in_channels,
152
+ filter_channels,
153
+ kernel_size,
154
+ p_dropout,
155
+ n_flows=4,
156
+ gin_channels=0,
157
+ ):
158
+ super().__init__()
159
+ filter_channels = in_channels  # legacy override; should be removed in a future version
160
+ self.in_channels = in_channels
161
+ self.filter_channels = filter_channels
162
+ self.kernel_size = kernel_size
163
+ self.p_dropout = p_dropout
164
+ self.n_flows = n_flows
165
+ self.gin_channels = gin_channels
166
+
167
+ self.log_flow = modules.Log()
168
+ self.flows = nn.ModuleList()
169
+ self.flows.append(modules.ElementwiseAffine(2))
170
+ for i in range(n_flows):
171
+ self.flows.append(
172
+ modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)
173
+ )
174
+ self.flows.append(modules.Flip())
175
+
176
+ self.post_pre = nn.Conv1d(1, filter_channels, 1)
177
+ self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
178
+ self.post_convs = modules.DDSConv(
179
+ filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout
180
+ )
181
+ self.post_flows = nn.ModuleList()
182
+ self.post_flows.append(modules.ElementwiseAffine(2))
183
+ for i in range(4):
184
+ self.post_flows.append(
185
+ modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)
186
+ )
187
+ self.post_flows.append(modules.Flip())
188
+
189
+ self.pre = nn.Conv1d(in_channels, filter_channels, 1)
190
+ self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
191
+ self.convs = modules.DDSConv(
192
+ filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout
193
+ )
194
+ if gin_channels != 0:
195
+ self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
196
+
197
+ def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
198
+ x = torch.detach(x)
199
+ x = self.pre(x)
200
+ if g is not None:
201
+ g = torch.detach(g)
202
+ x = x + self.cond(g)
203
+ x = self.convs(x, x_mask)
204
+ x = self.proj(x) * x_mask
205
+
206
+ if not reverse:
207
+ flows = self.flows
208
+ assert w is not None
209
+
210
+ logdet_tot_q = 0
211
+ h_w = self.post_pre(w)
212
+ h_w = self.post_convs(h_w, x_mask)
213
+ h_w = self.post_proj(h_w) * x_mask
214
+ e_q = (
215
+ torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype)
216
+ * x_mask
217
+ )
218
+ z_q = e_q
219
+ for flow in self.post_flows:
220
+ z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
221
+ logdet_tot_q += logdet_q
222
+ z_u, z1 = torch.split(z_q, [1, 1], 1)
223
+ u = torch.sigmoid(z_u) * x_mask
224
+ z0 = (w - u) * x_mask
225
+ logdet_tot_q += torch.sum(
226
+ (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]
227
+ )
228
+ logq = (
229
+ torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q**2)) * x_mask, [1, 2])
230
+ - logdet_tot_q
231
+ )
232
+
233
+ logdet_tot = 0
234
+ z0, logdet = self.log_flow(z0, x_mask)
235
+ logdet_tot += logdet
236
+ z = torch.cat([z0, z1], 1)
237
+ for flow in flows:
238
+ z, logdet = flow(z, x_mask, g=x, reverse=reverse)
239
+ logdet_tot = logdet_tot + logdet
240
+ nll = (
241
+ torch.sum(0.5 * (math.log(2 * math.pi) + (z**2)) * x_mask, [1, 2])
242
+ - logdet_tot
243
+ )
244
+ return nll + logq # [b]
245
+ else:
246
+ flows = list(reversed(self.flows))
247
+ flows = flows[:-2] + [flows[-1]] # remove a useless vflow
248
+ z = (
249
+ torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype)
250
+ * noise_scale
251
+ )
252
+ for flow in flows:
253
+ z = flow(z, x_mask, g=x, reverse=reverse)
254
+ z0, z1 = torch.split(z, [1, 1], 1)
255
+ logw = z0
256
+ return logw
257
+
258
+
259
+ class DurationPredictor(nn.Module):
260
+ def __init__(
261
+ self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0
262
+ ):
263
+ super().__init__()
264
+
265
+ self.in_channels = in_channels
266
+ self.filter_channels = filter_channels
267
+ self.kernel_size = kernel_size
268
+ self.p_dropout = p_dropout
269
+ self.gin_channels = gin_channels
270
+
271
+ self.drop = nn.Dropout(p_dropout)
272
+ self.conv_1 = nn.Conv1d(
273
+ in_channels, filter_channels, kernel_size, padding=kernel_size // 2
274
+ )
275
+ self.norm_1 = modules.LayerNorm(filter_channels)
276
+ self.conv_2 = nn.Conv1d(
277
+ filter_channels, filter_channels, kernel_size, padding=kernel_size // 2
278
+ )
279
+ self.norm_2 = modules.LayerNorm(filter_channels)
280
+ self.proj = nn.Conv1d(filter_channels, 1, 1)
281
+
282
+ if gin_channels != 0:
283
+ self.cond = nn.Conv1d(gin_channels, in_channels, 1)
284
+
285
+ def forward(self, x, x_mask, g=None):
286
+ x = torch.detach(x)
287
+ if g is not None:
288
+ g = torch.detach(g)
289
+ x = x + self.cond(g)
290
+ x = self.conv_1(x * x_mask)
291
+ x = torch.relu(x)
292
+ x = self.norm_1(x)
293
+ x = self.drop(x)
294
+ x = self.conv_2(x * x_mask)
295
+ x = torch.relu(x)
296
+ x = self.norm_2(x)
297
+ x = self.drop(x)
298
+ x = self.proj(x * x_mask)
299
+ return x * x_mask
300
+
301
+
302
+ class Bottleneck(nn.Sequential):
303
+ def __init__(self, in_dim, hidden_dim):
304
+ c_fc1 = nn.Linear(in_dim, hidden_dim, bias=False)
305
+ c_fc2 = nn.Linear(in_dim, hidden_dim, bias=False)
306
+ super().__init__(*[c_fc1, c_fc2])
307
+
308
+
309
+ class Block(nn.Module):
310
+ def __init__(self, in_dim, hidden_dim) -> None:
311
+ super().__init__()
312
+ self.norm = nn.LayerNorm(in_dim)
313
+ self.mlp = MLP(in_dim, hidden_dim)
314
+
315
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
316
+ x = x + self.mlp(self.norm(x))
317
+ return x
318
+
319
+
320
+ class MLP(nn.Module):
321
+ def __init__(self, in_dim, hidden_dim):
322
+ super().__init__()
323
+ self.c_fc1 = nn.Linear(in_dim, hidden_dim, bias=False)
324
+ self.c_fc2 = nn.Linear(in_dim, hidden_dim, bias=False)
325
+ self.c_proj = nn.Linear(hidden_dim, in_dim, bias=False)
326
+
327
+ def forward(self, x: torch.Tensor):
328
+ x = F.silu(self.c_fc1(x)) * self.c_fc2(x)
329
+ x = self.c_proj(x)
330
+ return x
331
+
332
+
333
+ class TextEncoder(nn.Module):
334
+ def __init__(
335
+ self,
336
+ n_vocab,
337
+ out_channels,
338
+ hidden_channels,
339
+ filter_channels,
340
+ n_heads,
341
+ n_layers,
342
+ kernel_size,
343
+ p_dropout,
344
+ gin_channels=0,
345
+ ):
346
+ super().__init__()
347
+ self.n_vocab = n_vocab
348
+ self.out_channels = out_channels
349
+ self.hidden_channels = hidden_channels
350
+ self.filter_channels = filter_channels
351
+ self.n_heads = n_heads
352
+ self.n_layers = n_layers
353
+ self.kernel_size = kernel_size
354
+ self.p_dropout = p_dropout
355
+ self.gin_channels = gin_channels
356
+ self.emb = nn.Embedding(len(symbols), hidden_channels)
357
+ nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
358
+ self.tone_emb = nn.Embedding(num_tones, hidden_channels)
359
+ nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels**-0.5)
360
+ self.language_emb = nn.Embedding(num_languages, hidden_channels)
361
+ nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels**-0.5)
362
+ self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
363
+ self.ja_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
364
+ self.en_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
365
+
366
+ self.encoder = attentions.Encoder(
367
+ hidden_channels,
368
+ filter_channels,
369
+ n_heads,
370
+ n_layers,
371
+ kernel_size,
372
+ p_dropout,
373
+ gin_channels=self.gin_channels,
374
+ )
375
+ self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
376
+
377
+ def forward(self, x, x_lengths, tone, language, bert, ja_bert, en_bert, g=None):
378
+ bert_emb = self.bert_proj(bert).transpose(1, 2)
379
+ ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2)
380
+ en_bert_emb = self.en_bert_proj(en_bert).transpose(1, 2)
381
+ x = (
382
+ self.emb(x)
383
+ + self.tone_emb(tone)
384
+ + self.language_emb(language)
385
+ + bert_emb
386
+ + ja_bert_emb
387
+ + en_bert_emb
388
+ ) * math.sqrt(
389
+ self.hidden_channels
390
+ ) # [b, t, h]
391
+ x = torch.transpose(x, 1, -1) # [b, h, t]
392
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
393
+ x.dtype
394
+ )
395
+
396
+ x = self.encoder(x * x_mask, x_mask, g=g)
397
+ stats = self.proj(x) * x_mask
398
+
399
+ m, logs = torch.split(stats, self.out_channels, dim=1)
400
+ return x, m, logs, x_mask
401
+
402
+
403
+ class ResidualCouplingBlock(nn.Module):
404
+ def __init__(
405
+ self,
406
+ channels,
407
+ hidden_channels,
408
+ kernel_size,
409
+ dilation_rate,
410
+ n_layers,
411
+ n_flows=4,
412
+ gin_channels=0,
413
+ ):
414
+ super().__init__()
415
+ self.channels = channels
416
+ self.hidden_channels = hidden_channels
417
+ self.kernel_size = kernel_size
418
+ self.dilation_rate = dilation_rate
419
+ self.n_layers = n_layers
420
+ self.n_flows = n_flows
421
+ self.gin_channels = gin_channels
422
+
423
+ self.flows = nn.ModuleList()
424
+ for i in range(n_flows):
425
+ self.flows.append(
426
+ modules.ResidualCouplingLayer(
427
+ channels,
428
+ hidden_channels,
429
+ kernel_size,
430
+ dilation_rate,
431
+ n_layers,
432
+ gin_channels=gin_channels,
433
+ mean_only=True,
434
+ )
435
+ )
436
+ self.flows.append(modules.Flip())
437
+
438
+ def forward(self, x, x_mask, g=None, reverse=False):
439
+ if not reverse:
440
+ for flow in self.flows:
441
+ x, _ = flow(x, x_mask, g=g, reverse=reverse)
442
+ else:
443
+ for flow in reversed(self.flows):
444
+ x = flow(x, x_mask, g=g, reverse=reverse)
445
+ return x
446
+
447
+
448
+ class PosteriorEncoder(nn.Module):
449
+ def __init__(
450
+ self,
451
+ in_channels,
452
+ out_channels,
453
+ hidden_channels,
454
+ kernel_size,
455
+ dilation_rate,
456
+ n_layers,
457
+ gin_channels=0,
458
+ ):
459
+ super().__init__()
460
+ self.in_channels = in_channels
461
+ self.out_channels = out_channels
462
+ self.hidden_channels = hidden_channels
463
+ self.kernel_size = kernel_size
464
+ self.dilation_rate = dilation_rate
465
+ self.n_layers = n_layers
466
+ self.gin_channels = gin_channels
467
+
468
+ self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
469
+ self.enc = modules.WN(
470
+ hidden_channels,
471
+ kernel_size,
472
+ dilation_rate,
473
+ n_layers,
474
+ gin_channels=gin_channels,
475
+ )
476
+ self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
477
+
478
+ def forward(self, x, x_lengths, g=None):
479
+ x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
480
+ x.dtype
481
+ )
482
+ x = self.pre(x) * x_mask
483
+ x = self.enc(x, x_mask, g=g)
484
+ stats = self.proj(x) * x_mask
485
+ m, logs = torch.split(stats, self.out_channels, dim=1)
486
+ z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
487
+ return z, m, logs, x_mask
488
+
489
+
490
+ class Generator(torch.nn.Module):
491
+ def __init__(
492
+ self,
493
+ initial_channel,
494
+ resblock,
495
+ resblock_kernel_sizes,
496
+ resblock_dilation_sizes,
497
+ upsample_rates,
498
+ upsample_initial_channel,
499
+ upsample_kernel_sizes,
500
+ gin_channels=0,
501
+ ):
502
+ super(Generator, self).__init__()
503
+ self.num_kernels = len(resblock_kernel_sizes)
504
+ self.num_upsamples = len(upsample_rates)
505
+ self.conv_pre = Conv1d(
506
+ initial_channel, upsample_initial_channel, 7, 1, padding=3
507
+ )
508
+ resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
509
+
510
+ self.ups = nn.ModuleList()
511
+ for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
512
+ self.ups.append(
513
+ weight_norm(
514
+ ConvTranspose1d(
515
+ upsample_initial_channel // (2**i),
516
+ upsample_initial_channel // (2 ** (i + 1)),
517
+ k,
518
+ u,
519
+ padding=(k - u) // 2,
520
+ )
521
+ )
522
+ )
523
+
524
+ self.resblocks = nn.ModuleList()
525
+ for i in range(len(self.ups)):
526
+ ch = upsample_initial_channel // (2 ** (i + 1))
527
+ for j, (k, d) in enumerate(
528
+ zip(resblock_kernel_sizes, resblock_dilation_sizes)
529
+ ):
530
+ self.resblocks.append(resblock(ch, k, d))
531
+
532
+ self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
533
+ self.ups.apply(init_weights)
534
+
535
+ if gin_channels != 0:
536
+ self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
537
+
538
+ def forward(self, x, g=None):
539
+ x = self.conv_pre(x)
540
+ if g is not None:
541
+ x = x + self.cond(g)
542
+
543
+ for i in range(self.num_upsamples):
544
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
545
+ x = self.ups[i](x)
546
+ xs = None
547
+ for j in range(self.num_kernels):
548
+ if xs is None:
549
+ xs = self.resblocks[i * self.num_kernels + j](x)
550
+ else:
551
+ xs += self.resblocks[i * self.num_kernels + j](x)
552
+ x = xs / self.num_kernels
553
+ x = F.leaky_relu(x)
554
+ x = self.conv_post(x)
555
+ x = torch.tanh(x)
556
+
557
+ return x
558
+
559
+ def remove_weight_norm(self):
560
+ print("Removing weight norm...")
561
+ for layer in self.ups:
562
+ remove_weight_norm(layer)
563
+ for layer in self.resblocks:
564
+ layer.remove_weight_norm()
565
+
566
+
567
+ class DiscriminatorP(torch.nn.Module):
568
+ def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
569
+ super(DiscriminatorP, self).__init__()
570
+ self.period = period
571
+ self.use_spectral_norm = use_spectral_norm
572
+ norm_f = weight_norm if use_spectral_norm is False else spectral_norm
573
+ self.convs = nn.ModuleList(
574
+ [
575
+ norm_f(
576
+ Conv2d(
577
+ 1,
578
+ 32,
579
+ (kernel_size, 1),
580
+ (stride, 1),
581
+ padding=(get_padding(kernel_size, 1), 0),
582
+ )
583
+ ),
584
+ norm_f(
585
+ Conv2d(
586
+ 32,
587
+ 128,
588
+ (kernel_size, 1),
589
+ (stride, 1),
590
+ padding=(get_padding(kernel_size, 1), 0),
591
+ )
592
+ ),
593
+ norm_f(
594
+ Conv2d(
595
+ 128,
596
+ 512,
597
+ (kernel_size, 1),
598
+ (stride, 1),
599
+ padding=(get_padding(kernel_size, 1), 0),
600
+ )
601
+ ),
602
+ norm_f(
603
+ Conv2d(
604
+ 512,
605
+ 1024,
606
+ (kernel_size, 1),
607
+ (stride, 1),
608
+ padding=(get_padding(kernel_size, 1), 0),
609
+ )
610
+ ),
611
+ norm_f(
612
+ Conv2d(
613
+ 1024,
614
+ 1024,
615
+ (kernel_size, 1),
616
+ 1,
617
+ padding=(get_padding(kernel_size, 1), 0),
618
+ )
619
+ ),
620
+ ]
621
+ )
622
+ self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
623
+
624
+ def forward(self, x):
625
+ fmap = []
626
+
627
+ # 1d to 2d
628
+ b, c, t = x.shape
629
+ if t % self.period != 0: # pad first
630
+ n_pad = self.period - (t % self.period)
631
+ x = F.pad(x, (0, n_pad), "reflect")
632
+ t = t + n_pad
633
+ x = x.view(b, c, t // self.period, self.period)
634
+
635
+ for layer in self.convs:
636
+ x = layer(x)
637
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
638
+ fmap.append(x)
639
+ x = self.conv_post(x)
640
+ fmap.append(x)
641
+ x = torch.flatten(x, 1, -1)
642
+
643
+ return x, fmap
644
+
645
+
646
+ class DiscriminatorS(torch.nn.Module):
647
+ def __init__(self, use_spectral_norm=False):
648
+ super(DiscriminatorS, self).__init__()
649
+ norm_f = weight_norm if use_spectral_norm is False else spectral_norm
650
+ self.convs = nn.ModuleList(
651
+ [
652
+ norm_f(Conv1d(1, 16, 15, 1, padding=7)),
653
+ norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
654
+ norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
655
+ norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
656
+ norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
657
+ norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
658
+ ]
659
+ )
660
+ self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
661
+
662
+ def forward(self, x):
663
+ fmap = []
664
+
665
+ for layer in self.convs:
666
+ x = layer(x)
667
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
668
+ fmap.append(x)
669
+ x = self.conv_post(x)
670
+ fmap.append(x)
671
+ x = torch.flatten(x, 1, -1)
672
+
673
+ return x, fmap
674
+
675
+
676
+ class MultiPeriodDiscriminator(torch.nn.Module):
677
+ def __init__(self, use_spectral_norm=False):
678
+ super(MultiPeriodDiscriminator, self).__init__()
679
+ periods = [2, 3, 5, 7, 11]
680
+
681
+ discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
682
+ discs = discs + [
683
+ DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
684
+ ]
685
+ self.discriminators = nn.ModuleList(discs)
686
+
687
+ def forward(self, y, y_hat):
688
+ y_d_rs = []
689
+ y_d_gs = []
690
+ fmap_rs = []
691
+ fmap_gs = []
692
+ for i, d in enumerate(self.discriminators):
693
+ y_d_r, fmap_r = d(y)
694
+ y_d_g, fmap_g = d(y_hat)
695
+ y_d_rs.append(y_d_r)
696
+ y_d_gs.append(y_d_g)
697
+ fmap_rs.append(fmap_r)
698
+ fmap_gs.append(fmap_g)
699
+
700
+ return y_d_rs, y_d_gs, fmap_rs, fmap_gs
701
+
702
+
703
+ class WavLMDiscriminator(nn.Module):
704
+ """docstring for Discriminator."""
705
+
706
+ def __init__(
707
+ self, slm_hidden=768, slm_layers=13, initial_channel=64, use_spectral_norm=False
708
+ ):
709
+ super(WavLMDiscriminator, self).__init__()
710
+ norm_f = weight_norm if not use_spectral_norm else spectral_norm
711
+ self.pre = norm_f(
712
+ Conv1d(slm_hidden * slm_layers, initial_channel, 1, 1, padding=0)
713
+ )
714
+
715
+ self.convs = nn.ModuleList(
716
+ [
717
+ norm_f(
718
+ nn.Conv1d(
719
+ initial_channel, initial_channel * 2, kernel_size=5, padding=2
720
+ )
721
+ ),
722
+ norm_f(
723
+ nn.Conv1d(
724
+ initial_channel * 2,
725
+ initial_channel * 4,
726
+ kernel_size=5,
727
+ padding=2,
728
+ )
729
+ ),
730
+ norm_f(
731
+ nn.Conv1d(initial_channel * 4, initial_channel * 4, 5, 1, padding=2)
732
+ ),
733
+ ]
734
+ )
735
+
736
+ self.conv_post = norm_f(Conv1d(initial_channel * 4, 1, 3, 1, padding=1))
737
+
738
+ def forward(self, x):
739
+ x = self.pre(x)
740
+
741
+ fmap = []
742
+ for l in self.convs:
743
+ x = l(x)
744
+ x = F.leaky_relu(x, modules.LRELU_SLOPE)
745
+ fmap.append(x)
746
+ x = self.conv_post(x)
747
+ x = torch.flatten(x, 1, -1)
748
+
749
+ return x
750
+
751
+
752
+ class ReferenceEncoder(nn.Module):
753
+ """
754
+ inputs --- [N, Ty/r, n_mels*r] mels
755
+ outputs --- [N, ref_enc_gru_size]
756
+ """
757
+
758
+ def __init__(self, spec_channels, gin_channels=0):
759
+ super().__init__()
760
+ self.spec_channels = spec_channels
761
+ ref_enc_filters = [32, 32, 64, 64, 128, 128]
762
+ K = len(ref_enc_filters)
763
+ filters = [1] + ref_enc_filters
764
+ convs = [
765
+ weight_norm(
766
+ nn.Conv2d(
767
+ in_channels=filters[i],
768
+ out_channels=filters[i + 1],
769
+ kernel_size=(3, 3),
770
+ stride=(2, 2),
771
+ padding=(1, 1),
772
+ )
773
+ )
774
+ for i in range(K)
775
+ ]
776
+ self.convs = nn.ModuleList(convs)
777
+ # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) # noqa: E501
778
+
779
+ out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K)
780
+ self.gru = nn.GRU(
781
+ input_size=ref_enc_filters[-1] * out_channels,
782
+ hidden_size=256 // 2,
783
+ batch_first=True,
784
+ )
785
+ self.proj = nn.Linear(128, gin_channels)
786
+
787
+ def forward(self, inputs, mask=None):
788
+ N = inputs.size(0)
789
+ out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs]
790
+ for conv in self.convs:
791
+ out = conv(out)
792
+ # out = wn(out)
793
+ out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K]
794
+
795
+ out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K]
796
+ T = out.size(1)
797
+ N = out.size(0)
798
+ out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K]
799
+
800
+ self.gru.flatten_parameters()
801
+ memory, out = self.gru(out) # out --- [1, N, 128]
802
+
803
+ return self.proj(out.squeeze(0))
804
+
805
+ def calculate_channels(self, L, kernel_size, stride, pad, n_convs):
806
+ for i in range(n_convs):
807
+ L = (L - kernel_size + 2 * pad) // stride + 1
808
+ return L
809
+
810
+
811
+ class SynthesizerTrn(nn.Module):
812
+ """
813
+ Synthesizer for Training
814
+ """
815
+
816
+ def __init__(
817
+ self,
818
+ n_vocab,
819
+ spec_channels,
820
+ segment_size,
821
+ inter_channels,
822
+ hidden_channels,
823
+ filter_channels,
824
+ n_heads,
825
+ n_layers,
826
+ kernel_size,
827
+ p_dropout,
828
+ resblock,
829
+ resblock_kernel_sizes,
830
+ resblock_dilation_sizes,
831
+ upsample_rates,
832
+ upsample_initial_channel,
833
+ upsample_kernel_sizes,
834
+ n_speakers=256,
835
+ gin_channels=256,
836
+ use_sdp=True,
837
+ n_flow_layer=4,
838
+ n_layers_trans_flow=4,
839
+ flow_share_parameter=False,
840
+ use_transformer_flow=True,
841
+ **kwargs
842
+ ):
843
+ super().__init__()
844
+ self.n_vocab = n_vocab
845
+ self.spec_channels = spec_channels
846
+ self.inter_channels = inter_channels
847
+ self.hidden_channels = hidden_channels
848
+ self.filter_channels = filter_channels
849
+ self.n_heads = n_heads
850
+ self.n_layers = n_layers
851
+ self.kernel_size = kernel_size
852
+ self.p_dropout = p_dropout
853
+ self.resblock = resblock
854
+ self.resblock_kernel_sizes = resblock_kernel_sizes
855
+ self.resblock_dilation_sizes = resblock_dilation_sizes
856
+ self.upsample_rates = upsample_rates
857
+ self.upsample_initial_channel = upsample_initial_channel
858
+ self.upsample_kernel_sizes = upsample_kernel_sizes
859
+ self.segment_size = segment_size
860
+ self.n_speakers = n_speakers
861
+ self.gin_channels = gin_channels
862
+ self.n_layers_trans_flow = n_layers_trans_flow
863
+ self.use_spk_conditioned_encoder = kwargs.get(
864
+ "use_spk_conditioned_encoder", True
865
+ )
866
+ self.use_sdp = use_sdp
867
+ self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False)
868
+ self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01)
869
+ self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6)
870
+ self.current_mas_noise_scale = self.mas_noise_scale_initial
871
+ if self.use_spk_conditioned_encoder and gin_channels > 0:
872
+ self.enc_gin_channels = gin_channels
873
+ self.enc_p = TextEncoder(
874
+ n_vocab,
875
+ inter_channels,
876
+ hidden_channels,
877
+ filter_channels,
878
+ n_heads,
879
+ n_layers,
880
+ kernel_size,
881
+ p_dropout,
882
+ gin_channels=self.enc_gin_channels,
883
+ )
884
+ self.dec = Generator(
885
+ inter_channels,
886
+ resblock,
887
+ resblock_kernel_sizes,
888
+ resblock_dilation_sizes,
889
+ upsample_rates,
890
+ upsample_initial_channel,
891
+ upsample_kernel_sizes,
892
+ gin_channels=gin_channels,
893
+ )
894
+ self.enc_q = PosteriorEncoder(
895
+ spec_channels,
896
+ inter_channels,
897
+ hidden_channels,
898
+ 5,
899
+ 1,
900
+ 16,
901
+ gin_channels=gin_channels,
902
+ )
903
+ if use_transformer_flow:
904
+ self.flow = TransformerCouplingBlock(
905
+ inter_channels,
906
+ hidden_channels,
907
+ filter_channels,
908
+ n_heads,
909
+ n_layers_trans_flow,
910
+ 5,
911
+ p_dropout,
912
+ n_flow_layer,
913
+ gin_channels=gin_channels,
914
+ share_parameter=flow_share_parameter,
915
+ )
916
+ else:
917
+ self.flow = ResidualCouplingBlock(
918
+ inter_channels,
919
+ hidden_channels,
920
+ 5,
921
+ 1,
922
+ n_flow_layer,
923
+ gin_channels=gin_channels,
924
+ )
925
+ self.sdp = StochasticDurationPredictor(
926
+ hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels
927
+ )
928
+ self.dp = DurationPredictor(
929
+ hidden_channels, 256, 3, 0.5, gin_channels=gin_channels
930
+ )
931
+
932
+ if n_speakers >= 1:
933
+ self.emb_g = nn.Embedding(n_speakers, gin_channels)
934
+ else:
935
+ self.ref_enc = ReferenceEncoder(spec_channels, gin_channels)
936
+
937
+ def forward(
938
+ self,
939
+ x,
940
+ x_lengths,
941
+ y,
942
+ y_lengths,
943
+ sid,
944
+ tone,
945
+ language,
946
+ bert,
947
+ ja_bert,
948
+ en_bert,
949
+ ):
950
+ if self.n_speakers > 0:
951
+ g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
952
+ else:
953
+ g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
954
+ x, m_p, logs_p, x_mask = self.enc_p(
955
+ x, x_lengths, tone, language, bert, ja_bert, en_bert, g=g
956
+ )
957
+ z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
958
+ z_p = self.flow(z, y_mask, g=g)
959
+
960
+ with torch.no_grad():
961
+ # negative cross-entropy
962
+ s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
963
+ neg_cent1 = torch.sum(
964
+ -0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True
965
+ ) # [b, 1, t_s]
966
+ neg_cent2 = torch.matmul(
967
+ -0.5 * (z_p**2).transpose(1, 2), s_p_sq_r
968
+ ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
969
+ neg_cent3 = torch.matmul(
970
+ z_p.transpose(1, 2), (m_p * s_p_sq_r)
971
+ ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
972
+ neg_cent4 = torch.sum(
973
+ -0.5 * (m_p**2) * s_p_sq_r, [1], keepdim=True
974
+ ) # [b, 1, t_s]
975
+ neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
976
+ if self.use_noise_scaled_mas:
977
+ epsilon = (
978
+ torch.std(neg_cent)
979
+ * torch.randn_like(neg_cent)
980
+ * self.current_mas_noise_scale
981
+ )
982
+ neg_cent = neg_cent + epsilon
983
+
984
+ attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
985
+ attn = (
986
+ monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1))
987
+ .unsqueeze(1)
988
+ .detach()
989
+ )
990
+
991
+ w = attn.sum(2)
992
+
993
+ l_length_sdp = self.sdp(x, x_mask, w, g=g)
994
+ l_length_sdp = l_length_sdp / torch.sum(x_mask)
995
+
996
+ logw_ = torch.log(w + 1e-6) * x_mask
997
+ logw = self.dp(x, x_mask, g=g)
998
+ logw_sdp = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=1.0)
999
+ l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(
1000
+ x_mask
1001
+ ) # for averaging
1002
+ l_length_sdp += torch.sum((logw_sdp - logw_) ** 2, [1, 2]) / torch.sum(x_mask)
1003
+
1004
+ l_length = l_length_dp + l_length_sdp
1005
+
1006
+ # expand prior
1007
+ m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
1008
+ logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
1009
+
1010
+ z_slice, ids_slice = commons.rand_slice_segments(
1011
+ z, y_lengths, self.segment_size
1012
+ )
1013
+ o = self.dec(z_slice, g=g)
1014
+ return (
1015
+ o,
1016
+ l_length,
1017
+ attn,
1018
+ ids_slice,
1019
+ x_mask,
1020
+ y_mask,
1021
+ (z, z_p, m_p, logs_p, m_q, logs_q),
1022
+ (x, logw, logw_, logw_sdp),
1023
+ g,
1024
+ )
1025
+
1026
+ def infer(
1027
+ self,
1028
+ x,
1029
+ x_lengths,
1030
+ sid,
1031
+ tone,
1032
+ language,
1033
+ bert,
1034
+ ja_bert,
1035
+ en_bert,
1036
+ noise_scale=0.667,
1037
+ length_scale=1,
1038
+ noise_scale_w=0.8,
1039
+ max_len=None,
1040
+ sdp_ratio=0,
1041
+ y=None,
1042
+ ):
1043
+ # x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert)
1044
+ # g = self.gst(y)
1045
+ if self.n_speakers > 0:
1046
+ g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
1047
+ else:
1048
+ g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
1049
+ x, m_p, logs_p, x_mask = self.enc_p(
1050
+ x, x_lengths, tone, language, bert, ja_bert, en_bert, g=g
1051
+ )
1052
+ logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (
1053
+ sdp_ratio
1054
+ ) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio)
1055
+ w = torch.exp(logw) * x_mask * length_scale
1056
+ w_ceil = torch.ceil(w)
1057
+ y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
1058
+ y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(
1059
+ x_mask.dtype
1060
+ )
1061
+ attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
1062
+ attn = commons.generate_path(w_ceil, attn_mask)
1063
+
1064
+ m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(
1065
+ 1, 2
1066
+ ) # [b, t', t], [b, t, d] -> [b, d, t']
1067
+ logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(
1068
+ 1, 2
1069
+ ) # [b, t', t], [b, t, d] -> [b, d, t']
1070
+
1071
+ z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
1072
+ z = self.flow(z_p, y_mask, g=g, reverse=True)
1073
+ o = self.dec((z * y_mask)[:, :, :max_len], g=g)
1074
+ return o, attn, y_mask, (z, z_p, m_p, logs_p)
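
A hedged sketch of how SynthesizerTrn is usually instantiated for inference against the assets shipped in this commit. The utils helpers and the exact hparams fields are assumptions based on the usual Bert-VITS2 layout, not shown in this diff:

import torch
import utils  # project helper module (assumed present alongside this file)
from text import symbols

hps = utils.get_hparams_from_file("Data/configs/config.json")
net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,                # spec_channels
    hps.train.segment_size // hps.data.hop_length,  # segment size in frames
    n_speakers=hps.data.n_speakers,
    **hps.model,
).eval()
utils.load_checkpoint("Data/models/compressed.pth", net_g, None, skip_optimizer=True)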
modules.py ADDED
@@ -0,0 +1,599 @@
1
+ import math
2
+ import torch
3
+ from torch import nn
4
+ from torch.nn import functional as F
5
+
6
+ from torch.nn import Conv1d
7
+ from torch.nn.utils import weight_norm, remove_weight_norm
8
+
9
+ import commons
10
+ from commons import init_weights, get_padding
11
+ from transforms import piecewise_rational_quadratic_transform
12
+ from attentions import Encoder
13
+
14
+ LRELU_SLOPE = 0.1
15
+
16
+
17
+ class LayerNorm(nn.Module):
18
+ def __init__(self, channels, eps=1e-5):
19
+ super().__init__()
20
+ self.channels = channels
21
+ self.eps = eps
22
+
23
+ self.gamma = nn.Parameter(torch.ones(channels))
24
+ self.beta = nn.Parameter(torch.zeros(channels))
25
+
26
+ def forward(self, x):
27
+ x = x.transpose(1, -1)
28
+ x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
29
+ return x.transpose(1, -1)
30
+
31
+
32
+ class ConvReluNorm(nn.Module):
33
+ def __init__(
34
+ self,
35
+ in_channels,
36
+ hidden_channels,
37
+ out_channels,
38
+ kernel_size,
39
+ n_layers,
40
+ p_dropout,
41
+ ):
42
+ super().__init__()
43
+ self.in_channels = in_channels
44
+ self.hidden_channels = hidden_channels
45
+ self.out_channels = out_channels
46
+ self.kernel_size = kernel_size
47
+ self.n_layers = n_layers
48
+ self.p_dropout = p_dropout
49
+ assert n_layers > 1, "Number of layers should be larger than 1."
50
+
51
+ self.conv_layers = nn.ModuleList()
52
+ self.norm_layers = nn.ModuleList()
53
+ self.conv_layers.append(
54
+ nn.Conv1d(
55
+ in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
56
+ )
57
+ )
58
+ self.norm_layers.append(LayerNorm(hidden_channels))
59
+ self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
60
+ for _ in range(n_layers - 1):
61
+ self.conv_layers.append(
62
+ nn.Conv1d(
63
+ hidden_channels,
64
+ hidden_channels,
65
+ kernel_size,
66
+ padding=kernel_size // 2,
67
+ )
68
+ )
69
+ self.norm_layers.append(LayerNorm(hidden_channels))
70
+ self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
71
+ self.proj.weight.data.zero_()
72
+ self.proj.bias.data.zero_()
73
+
74
+ def forward(self, x, x_mask):
75
+ x_org = x
76
+ for i in range(self.n_layers):
77
+ x = self.conv_layers[i](x * x_mask)
78
+ x = self.norm_layers[i](x)
79
+ x = self.relu_drop(x)
80
+ x = x_org + self.proj(x)
81
+ return x * x_mask
82
+
83
+
84
+ class DDSConv(nn.Module):
85
+ """
86
+ Dilated and Depth-Separable Convolution
87
+ """
88
+
89
+ def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
90
+ super().__init__()
91
+ self.channels = channels
92
+ self.kernel_size = kernel_size
93
+ self.n_layers = n_layers
94
+ self.p_dropout = p_dropout
95
+
96
+ self.drop = nn.Dropout(p_dropout)
97
+ self.convs_sep = nn.ModuleList()
98
+ self.convs_1x1 = nn.ModuleList()
99
+ self.norms_1 = nn.ModuleList()
100
+ self.norms_2 = nn.ModuleList()
101
+ for i in range(n_layers):
102
+ dilation = kernel_size**i
103
+ padding = (kernel_size * dilation - dilation) // 2
104
+ self.convs_sep.append(
105
+ nn.Conv1d(
106
+ channels,
107
+ channels,
108
+ kernel_size,
109
+ groups=channels,
110
+ dilation=dilation,
111
+ padding=padding,
112
+ )
113
+ )
114
+ self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
115
+ self.norms_1.append(LayerNorm(channels))
116
+ self.norms_2.append(LayerNorm(channels))
117
+
118
+ def forward(self, x, x_mask, g=None):
119
+ if g is not None:
120
+ x = x + g
121
+ for i in range(self.n_layers):
122
+ y = self.convs_sep[i](x * x_mask)
123
+ y = self.norms_1[i](y)
124
+ y = F.gelu(y)
125
+ y = self.convs_1x1[i](y)
126
+ y = self.norms_2[i](y)
127
+ y = F.gelu(y)
128
+ y = self.drop(y)
129
+ x = x + y
130
+ return x * x_mask
131
+
132
+
133
+ class WN(torch.nn.Module):
134
+ def __init__(
135
+ self,
136
+ hidden_channels,
137
+ kernel_size,
138
+ dilation_rate,
139
+ n_layers,
140
+ gin_channels=0,
141
+ p_dropout=0,
142
+ ):
143
+ super(WN, self).__init__()
144
+ assert kernel_size % 2 == 1
145
+ self.hidden_channels = hidden_channels
146
+ self.kernel_size = (kernel_size,)
147
+ self.dilation_rate = dilation_rate
148
+ self.n_layers = n_layers
149
+ self.gin_channels = gin_channels
150
+ self.p_dropout = p_dropout
151
+
152
+ self.in_layers = torch.nn.ModuleList()
153
+ self.res_skip_layers = torch.nn.ModuleList()
154
+ self.drop = nn.Dropout(p_dropout)
155
+
156
+ if gin_channels != 0:
157
+ cond_layer = torch.nn.Conv1d(
158
+ gin_channels, 2 * hidden_channels * n_layers, 1
159
+ )
160
+ self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
161
+
162
+ for i in range(n_layers):
163
+ dilation = dilation_rate**i
164
+ padding = int((kernel_size * dilation - dilation) / 2)
165
+ in_layer = torch.nn.Conv1d(
166
+ hidden_channels,
167
+ 2 * hidden_channels,
168
+ kernel_size,
169
+ dilation=dilation,
170
+ padding=padding,
171
+ )
172
+ in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
173
+ self.in_layers.append(in_layer)
174
+
175
+ # last one is not necessary
176
+ if i < n_layers - 1:
177
+ res_skip_channels = 2 * hidden_channels
178
+ else:
179
+ res_skip_channels = hidden_channels
180
+
181
+ res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
182
+ res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
183
+ self.res_skip_layers.append(res_skip_layer)
184
+
185
+ def forward(self, x, x_mask, g=None, **kwargs):
186
+ output = torch.zeros_like(x)
187
+ n_channels_tensor = torch.IntTensor([self.hidden_channels])
188
+
189
+ if g is not None:
190
+ g = self.cond_layer(g)
191
+
192
+ for i in range(self.n_layers):
193
+ x_in = self.in_layers[i](x)
194
+ if g is not None:
195
+ cond_offset = i * 2 * self.hidden_channels
196
+ g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
197
+ else:
198
+ g_l = torch.zeros_like(x_in)
199
+
200
+ acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
201
+ acts = self.drop(acts)
202
+
203
+ res_skip_acts = self.res_skip_layers[i](acts)
204
+ if i < self.n_layers - 1:
205
+ res_acts = res_skip_acts[:, : self.hidden_channels, :]
206
+ x = (x + res_acts) * x_mask
207
+ output = output + res_skip_acts[:, self.hidden_channels :, :]
208
+ else:
209
+ output = output + res_skip_acts
210
+ return output * x_mask
211
+
212
+ def remove_weight_norm(self):
213
+ if self.gin_channels != 0:
214
+ torch.nn.utils.remove_weight_norm(self.cond_layer)
215
+ for l in self.in_layers:
216
+ torch.nn.utils.remove_weight_norm(l)
217
+ for l in self.res_skip_layers:
218
+ torch.nn.utils.remove_weight_norm(l)
219
+
220
+
221
+ class ResBlock1(torch.nn.Module):
222
+ def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
223
+ super(ResBlock1, self).__init__()
224
+ self.convs1 = nn.ModuleList(
225
+ [
226
+ weight_norm(
227
+ Conv1d(
228
+ channels,
229
+ channels,
230
+ kernel_size,
231
+ 1,
232
+ dilation=dilation[0],
233
+ padding=get_padding(kernel_size, dilation[0]),
234
+ )
235
+ ),
236
+ weight_norm(
237
+ Conv1d(
238
+ channels,
239
+ channels,
240
+ kernel_size,
241
+ 1,
242
+ dilation=dilation[1],
243
+ padding=get_padding(kernel_size, dilation[1]),
244
+ )
245
+ ),
246
+ weight_norm(
247
+ Conv1d(
248
+ channels,
249
+ channels,
250
+ kernel_size,
251
+ 1,
252
+ dilation=dilation[2],
253
+ padding=get_padding(kernel_size, dilation[2]),
254
+ )
255
+ ),
256
+ ]
257
+ )
258
+ self.convs1.apply(init_weights)
259
+
260
+ self.convs2 = nn.ModuleList(
261
+ [
262
+ weight_norm(
263
+ Conv1d(
264
+ channels,
265
+ channels,
266
+ kernel_size,
267
+ 1,
268
+ dilation=1,
269
+ padding=get_padding(kernel_size, 1),
270
+ )
271
+ ),
272
+ weight_norm(
273
+ Conv1d(
274
+ channels,
275
+ channels,
276
+ kernel_size,
277
+ 1,
278
+ dilation=1,
279
+ padding=get_padding(kernel_size, 1),
280
+ )
281
+ ),
282
+ weight_norm(
283
+ Conv1d(
284
+ channels,
285
+ channels,
286
+ kernel_size,
287
+ 1,
288
+ dilation=1,
289
+ padding=get_padding(kernel_size, 1),
290
+ )
291
+ ),
292
+ ]
293
+ )
294
+ self.convs2.apply(init_weights)
295
+
296
+ def forward(self, x, x_mask=None):
297
+ for c1, c2 in zip(self.convs1, self.convs2):
298
+ xt = F.leaky_relu(x, LRELU_SLOPE)
299
+ if x_mask is not None:
300
+ xt = xt * x_mask
301
+ xt = c1(xt)
302
+ xt = F.leaky_relu(xt, LRELU_SLOPE)
303
+ if x_mask is not None:
304
+ xt = xt * x_mask
305
+ xt = c2(xt)
306
+ x = xt + x
307
+ if x_mask is not None:
308
+ x = x * x_mask
309
+ return x
310
+
311
+ def remove_weight_norm(self):
312
+ for l in self.convs1:
313
+ remove_weight_norm(l)
314
+ for l in self.convs2:
315
+ remove_weight_norm(l)
316
+
317
+
318
+ class ResBlock2(torch.nn.Module):
319
+ def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
320
+ super(ResBlock2, self).__init__()
321
+ self.convs = nn.ModuleList(
322
+ [
323
+ weight_norm(
324
+ Conv1d(
325
+ channels,
326
+ channels,
327
+ kernel_size,
328
+ 1,
329
+ dilation=dilation[0],
330
+ padding=get_padding(kernel_size, dilation[0]),
331
+ )
332
+ ),
333
+ weight_norm(
334
+ Conv1d(
335
+ channels,
336
+ channels,
337
+ kernel_size,
338
+ 1,
339
+ dilation=dilation[1],
340
+ padding=get_padding(kernel_size, dilation[1]),
341
+ )
342
+ ),
343
+ ]
344
+ )
345
+ self.convs.apply(init_weights)
346
+
347
+ def forward(self, x, x_mask=None):
348
+ for c in self.convs:
349
+ xt = F.leaky_relu(x, LRELU_SLOPE)
350
+ if x_mask is not None:
351
+ xt = xt * x_mask
352
+ xt = c(xt)
353
+ x = xt + x
354
+ if x_mask is not None:
355
+ x = x * x_mask
356
+ return x
357
+
358
+ def remove_weight_norm(self):
359
+ for l in self.convs:
360
+ remove_weight_norm(l)
361
+
362
+
363
+ class Log(nn.Module):
364
+ def forward(self, x, x_mask, reverse=False, **kwargs):
365
+ if not reverse:
366
+ y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
367
+ logdet = torch.sum(-y, [1, 2])
368
+ return y, logdet
369
+ else:
370
+ x = torch.exp(x) * x_mask
371
+ return x
372
+
373
+
374
+ class Flip(nn.Module):
375
+ def forward(self, x, *args, reverse=False, **kwargs):
376
+ x = torch.flip(x, [1])
377
+ if not reverse:
378
+ logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
379
+ return x, logdet
380
+ else:
381
+ return x
382
+
383
+
384
+ class ElementwiseAffine(nn.Module):
385
+ def __init__(self, channels):
386
+ super().__init__()
387
+ self.channels = channels
388
+ self.m = nn.Parameter(torch.zeros(channels, 1))
389
+ self.logs = nn.Parameter(torch.zeros(channels, 1))
390
+
391
+ def forward(self, x, x_mask, reverse=False, **kwargs):
392
+ if not reverse:
393
+ y = self.m + torch.exp(self.logs) * x
394
+ y = y * x_mask
395
+ logdet = torch.sum(self.logs * x_mask, [1, 2])
396
+ return y, logdet
397
+ else:
398
+ x = (x - self.m) * torch.exp(-self.logs) * x_mask
399
+ return x
400
+
401
+
402
+ class ResidualCouplingLayer(nn.Module):
403
+ def __init__(
404
+ self,
405
+ channels,
406
+ hidden_channels,
407
+ kernel_size,
408
+ dilation_rate,
409
+ n_layers,
410
+ p_dropout=0,
411
+ gin_channels=0,
412
+ mean_only=False,
413
+ ):
414
+ assert channels % 2 == 0, "channels should be divisible by 2"
415
+ super().__init__()
416
+ self.channels = channels
417
+ self.hidden_channels = hidden_channels
418
+ self.kernel_size = kernel_size
419
+ self.dilation_rate = dilation_rate
420
+ self.n_layers = n_layers
421
+ self.half_channels = channels // 2
422
+ self.mean_only = mean_only
423
+
424
+ self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
425
+ self.enc = WN(
426
+ hidden_channels,
427
+ kernel_size,
428
+ dilation_rate,
429
+ n_layers,
430
+ p_dropout=p_dropout,
431
+ gin_channels=gin_channels,
432
+ )
433
+ self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
434
+ self.post.weight.data.zero_()
435
+ self.post.bias.data.zero_()
436
+
437
+ def forward(self, x, x_mask, g=None, reverse=False):
438
+ x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
439
+ h = self.pre(x0) * x_mask
440
+ h = self.enc(h, x_mask, g=g)
441
+ stats = self.post(h) * x_mask
442
+ if not self.mean_only:
443
+ m, logs = torch.split(stats, [self.half_channels] * 2, 1)
444
+ else:
445
+ m = stats
446
+ logs = torch.zeros_like(m)
447
+
448
+ if not reverse:
449
+ x1 = m + x1 * torch.exp(logs) * x_mask
450
+ x = torch.cat([x0, x1], 1)
451
+ logdet = torch.sum(logs, [1, 2])
452
+ return x, logdet
453
+ else:
454
+ x1 = (x1 - m) * torch.exp(-logs) * x_mask
455
+ x = torch.cat([x0, x1], 1)
456
+ return x
457
+
458
+
459
+ class ConvFlow(nn.Module):
460
+ def __init__(
461
+ self,
462
+ in_channels,
463
+ filter_channels,
464
+ kernel_size,
465
+ n_layers,
466
+ num_bins=10,
467
+ tail_bound=5.0,
468
+ ):
469
+ super().__init__()
470
+ self.in_channels = in_channels
471
+ self.filter_channels = filter_channels
472
+ self.kernel_size = kernel_size
473
+ self.n_layers = n_layers
474
+ self.num_bins = num_bins
475
+ self.tail_bound = tail_bound
476
+ self.half_channels = in_channels // 2
477
+
478
+ self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
479
+ self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
480
+ self.proj = nn.Conv1d(
481
+ filter_channels, self.half_channels * (num_bins * 3 - 1), 1
482
+ )
483
+ self.proj.weight.data.zero_()
484
+ self.proj.bias.data.zero_()
485
+
486
+ def forward(self, x, x_mask, g=None, reverse=False):
487
+ x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
488
+ h = self.pre(x0)
489
+ h = self.convs(h, x_mask, g=g)
490
+ h = self.proj(h) * x_mask
491
+
492
+ b, c, t = x0.shape
493
+ h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
494
+
495
+ unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
496
+ unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
497
+ self.filter_channels
498
+ )
499
+ unnormalized_derivatives = h[..., 2 * self.num_bins :]
500
+
501
+ x1, logabsdet = piecewise_rational_quadratic_transform(
502
+ x1,
503
+ unnormalized_widths,
504
+ unnormalized_heights,
505
+ unnormalized_derivatives,
506
+ inverse=reverse,
507
+ tails="linear",
508
+ tail_bound=self.tail_bound,
509
+ )
510
+
511
+ x = torch.cat([x0, x1], 1) * x_mask
512
+ logdet = torch.sum(logabsdet * x_mask, [1, 2])
513
+ if not reverse:
514
+ return x, logdet
515
+ else:
516
+ return x
517
+
518
+
519
+ class TransformerCouplingLayer(nn.Module):
520
+ def __init__(
521
+ self,
522
+ channels,
523
+ hidden_channels,
524
+ kernel_size,
525
+ n_layers,
526
+ n_heads,
527
+ p_dropout=0,
528
+ filter_channels=0,
529
+ mean_only=False,
530
+ wn_sharing_parameter=None,
531
+ gin_channels=0,
532
+ ):
533
+ assert channels % 2 == 0, "channels should be divisible by 2"
534
+ super().__init__()
535
+ self.channels = channels
536
+ self.hidden_channels = hidden_channels
537
+ self.kernel_size = kernel_size
538
+ self.n_layers = n_layers
539
+ self.half_channels = channels // 2
540
+ self.mean_only = mean_only
541
+
542
+ self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
543
+ self.enc = (
544
+ Encoder(
545
+ hidden_channels,
546
+ filter_channels,
547
+ n_heads,
548
+ n_layers,
549
+ kernel_size,
550
+ p_dropout,
551
+ isflow=True,
552
+ gin_channels=gin_channels,
553
+ )
554
+ if wn_sharing_parameter is None
555
+ else wn_sharing_parameter
556
+ )
557
+ self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
558
+ self.post.weight.data.zero_()
559
+ self.post.bias.data.zero_()
560
+
561
+ def forward(self, x, x_mask, g=None, reverse=False):
562
+ x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
563
+ h = self.pre(x0) * x_mask
564
+ h = self.enc(h, x_mask, g=g)
565
+ stats = self.post(h) * x_mask
566
+ if not self.mean_only:
567
+ m, logs = torch.split(stats, [self.half_channels] * 2, 1)
568
+ else:
569
+ m = stats
570
+ logs = torch.zeros_like(m)
571
+
572
+ if not reverse:
573
+ x1 = m + x1 * torch.exp(logs) * x_mask
574
+ x = torch.cat([x0, x1], 1)
575
+ logdet = torch.sum(logs, [1, 2])
576
+ return x, logdet
577
+ else:
578
+ x1 = (x1 - m) * torch.exp(-logs) * x_mask
579
+ x = torch.cat([x0, x1], 1)
580
+ return x
monotonic_align/__init__.py ADDED
@@ -0,0 +1,16 @@
1
+ from numpy import zeros, int32, float32
2
+ from torch import from_numpy
3
+
4
+ from .core import maximum_path_jit
5
+
6
+
7
+ def maximum_path(neg_cent, mask):
8
+ device = neg_cent.device
9
+ dtype = neg_cent.dtype
10
+ neg_cent = neg_cent.data.cpu().numpy().astype(float32)
11
+ path = zeros(neg_cent.shape, dtype=int32)
12
+
13
+ t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
14
+ t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
15
+ maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
16
+ return from_numpy(path).to(device=device, dtype=dtype)
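A minimal sketch of how maximum_path is typically called, assuming the usual VITS convention that dim 1 of neg_cent indexes mel frames and dim 2 indexes text tokens; the sizes below are illustrative assumptions, not values from this commit:

import torch

from monotonic_align import maximum_path

# Illustrative sizes (assumption): batch of 1, 8 mel frames, 5 text tokens.
neg_cent = torch.randn(1, 8, 5)  # alignment scores
mask = torch.ones(1, 8, 5)       # every position valid

path = maximum_path(neg_cent, mask)
print(path.shape)    # torch.Size([1, 8, 5])
print(path.sum(-1))  # exactly one token is selected per frame along the monotonic path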
monotonic_align/core.py ADDED
@@ -0,0 +1,46 @@
1
+ import numba
2
+
3
+
4
+ @numba.jit(
5
+ numba.void(
6
+ numba.int32[:, :, ::1],
7
+ numba.float32[:, :, ::1],
8
+ numba.int32[::1],
9
+ numba.int32[::1],
10
+ ),
11
+ nopython=True,
12
+ nogil=True,
13
+ )
14
+ def maximum_path_jit(paths, values, t_ys, t_xs):
15
+ b = paths.shape[0]
16
+ max_neg_val = -1e9
17
+ for i in range(int(b)):
18
+ path = paths[i]
19
+ value = values[i]
20
+ t_y = t_ys[i]
21
+ t_x = t_xs[i]
22
+
23
+ v_prev = v_cur = 0.0
24
+ index = t_x - 1
25
+
26
+ for y in range(t_y):
27
+ for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
28
+ if x == y:
29
+ v_cur = max_neg_val
30
+ else:
31
+ v_cur = value[y - 1, x]
32
+ if x == 0:
33
+ if y == 0:
34
+ v_prev = 0.0
35
+ else:
36
+ v_prev = max_neg_val
37
+ else:
38
+ v_prev = value[y - 1, x - 1]
39
+ value[y, x] += max(v_prev, v_cur)
40
+
41
+ for y in range(t_y - 1, -1, -1):
42
+ path[y, index] = 1
43
+ if index != 0 and (
44
+ index == y or value[y - 1, index] < value[y - 1, index - 1]
45
+ ):
46
+ index = index - 1
re_matching.py ADDED
@@ -0,0 +1,81 @@
1
+ import re
2
+
3
+
4
+ def extract_language_and_text_updated(speaker, dialogue):
5
+ # Use a regex to match <language> tags and the text that follows each tag
6
+ pattern_language_text = r"<(\S+?)>([^<]+)"
7
+ matches = re.findall(pattern_language_text, dialogue, re.DOTALL)
8
+ speaker = speaker[1:-1]
9
+ # Clean up the text: strip surrounding whitespace
10
+ matches_cleaned = [(lang.upper(), text.strip()) for lang, text in matches]
11
+ matches_cleaned.append(speaker)
12
+ return matches_cleaned
13
+
14
+
15
+ def validate_text(input_text):
16
+ # Regex for validating the speaker format
17
+ pattern_speaker = r"(\[\S+?\])((?:\s*<\S+?>[^<\[\]]+?)+)"
18
+
19
+ # re.DOTALL lets "." match every character, including newlines
20
+ matches = re.findall(pattern_speaker, input_text, re.DOTALL)
21
+
22
+ # Run a further validation pass on each matched speaker's dialogue
23
+ for speaker, dialogue in matches:
24
+ language_text_matches = extract_language_and_text_updated(speaker, dialogue)
25
+ if not language_text_matches:
26
+ return (
27
+ False,
28
+ "Error: Invalid format detected in dialogue content. Please check your input.",
29
+ )
30
+
31
+ # No speaker matches were found anywhere in the input text
32
+ if not matches:
33
+ return (
34
+ False,
35
+ "Error: No valid speaker format detected. Please check your input.",
36
+ )
37
+
38
+ return True, "Input is valid."
39
+
40
+
41
+ def text_matching(text: str) -> list:
42
+ speaker_pattern = r"(\[\S+?\])(.+?)(?=\[\S+?\]|$)"
43
+ matches = re.findall(speaker_pattern, text, re.DOTALL)
44
+ result = []
45
+ for speaker, dialogue in matches:
46
+ result.append(extract_language_and_text_updated(speaker, dialogue))
47
+ return result
48
+
49
+
50
+ def cut_para(text):
51
+ splitted_para = re.split("[\n]", text) # split into paragraphs
52
+ splitted_para = [
53
+ sentence.strip() for sentence in splitted_para if sentence.strip()
54
+ ] # drop empty strings
55
+ return splitted_para
56
+
57
+
58
+ def cut_sent(para):
59
+ para = re.sub("([。!;?\?])([^”’])", r"\1\n\2", para) # 单字符断句符
60
+ para = re.sub("(\.{6})([^”’])", r"\1\n\2", para) # 英文省略号
61
+ para = re.sub("(\…{2})([^”’])", r"\1\n\2", para) # 中文省略号
62
+ para = re.sub("([。!?\?][”’])([^,。!?\?])", r"\1\n\2", para)
63
+ para = para.rstrip() # drop any extra trailing "\n" at the end of the paragraph
64
+ return para.split("\n")
65
+
66
+
67
+ if __name__ == "__main__":
68
+ text = """
69
+ [说话人1]
70
+ [说话人2]<zh>你好吗?<jp>元気ですか?<jp>こんにちは,世界。<zh>你好吗?
71
+ [说话人3]<zh>谢谢。<jp>どういたしまして。
72
+ """
73
+ text_matching(text)
74
+ # Test the functions
75
+ test_text = """
76
+ [说话人1]<zh>你好,こんにちは!<jp>こんにちは,世界。
77
+ [说话人2]<zh>你好吗?
78
+ """
79
+ text_matching(test_text)
80
+ res = validate_text(test_text)
81
+ print(res)
requirements.txt ADDED
@@ -0,0 +1,12 @@
1
+ librosa
2
+ matplotlib
3
+ numpy
4
+ numba
5
+ scipy
6
+ jieba
7
+ transformers
8
+ pypinyin
9
+ cn2an
10
+ #gradio
11
+ loguru
12
+ PyYAML
spec_gen.py ADDED
@@ -0,0 +1,87 @@
1
+ import torch
2
+ from tqdm import tqdm
3
+ from multiprocessing import Pool
4
+ from mel_processing import spectrogram_torch, mel_spectrogram_torch
5
+ from utils import load_wav_to_torch
6
+
7
+
8
+ class AudioProcessor:
9
+ def __init__(
10
+ self,
11
+ max_wav_value,
12
+ use_mel_spec_posterior,
13
+ filter_length,
14
+ n_mel_channels,
15
+ sampling_rate,
16
+ hop_length,
17
+ win_length,
18
+ mel_fmin,
19
+ mel_fmax,
20
+ ):
21
+ self.max_wav_value = max_wav_value
22
+ self.use_mel_spec_posterior = use_mel_spec_posterior
23
+ self.filter_length = filter_length
24
+ self.n_mel_channels = n_mel_channels
25
+ self.sampling_rate = sampling_rate
26
+ self.hop_length = hop_length
27
+ self.win_length = win_length
28
+ self.mel_fmin = mel_fmin
29
+ self.mel_fmax = mel_fmax
30
+
31
+ def process_audio(self, filename):
32
+ audio, sampling_rate = load_wav_to_torch(filename)
33
+ audio_norm = audio / self.max_wav_value
34
+ audio_norm = audio_norm.unsqueeze(0)
35
+ spec_filename = filename.replace(".wav", ".spec.pt")
36
+ if self.use_mel_spec_posterior:
37
+ spec_filename = spec_filename.replace(".spec.pt", ".mel.pt")
38
+ try:
39
+ spec = torch.load(spec_filename)
40
+ except Exception: # no cached spectrogram yet (or it is unreadable); recompute below
41
+ if self.use_mel_spec_posterior:
42
+ spec = mel_spectrogram_torch(
43
+ audio_norm,
44
+ self.filter_length,
45
+ self.n_mel_channels,
46
+ self.sampling_rate,
47
+ self.hop_length,
48
+ self.win_length,
49
+ self.mel_fmin,
50
+ self.mel_fmax,
51
+ center=False,
52
+ )
53
+ else:
54
+ spec = spectrogram_torch(
55
+ audio_norm,
56
+ self.filter_length,
57
+ self.sampling_rate,
58
+ self.hop_length,
59
+ self.win_length,
60
+ center=False,
61
+ )
62
+ spec = torch.squeeze(spec, 0)
63
+ torch.save(spec, spec_filename)
64
+ return spec, audio_norm
65
+
66
+
67
+ # Example usage
68
+ processor = AudioProcessor(
69
+ max_wav_value=32768.0,
70
+ use_mel_spec_posterior=False,
71
+ filter_length=2048,
72
+ n_mel_channels=128,
73
+ sampling_rate=44100,
74
+ hop_length=512,
75
+ win_length=2048,
76
+ mel_fmin=0.0,
77
+ mel_fmax="null",
78
+ )
79
+
80
+ with open("filelists/train.list", "r") as f:
81
+ filepaths = [line.split("|")[0] for line in f] # first field of each line is the audio path
82
+
83
+ # 使用多进程处理
84
+ with Pool(processes=32) as pool: # 32 worker processes
85
+ with tqdm(total=len(filepaths)) as pbar:
86
+ for i, _ in enumerate(pool.imap_unordered(processor.process_audio, filepaths)):
87
+ pbar.update()
text/__init__.py ADDED
@@ -0,0 +1,55 @@
1
+ from text.symbols import *
2
+
3
+ _symbol_to_id = {s: i for i, s in enumerate(symbols)}
4
+
5
+
6
+ def cleaned_text_to_sequence(cleaned_text, tones, language):
7
+ """Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
8
+ Args:
9
+ text: string to convert to a sequence
10
+ Returns:
11
+ List of integers corresponding to the symbols in the text
12
+ """
13
+ phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
14
+ tone_start = language_tone_start_map[language]
15
+ tones = [i + tone_start for i in tones]
16
+ lang_id = language_id_map[language]
17
+ lang_ids = [lang_id for i in phones]
18
+ return phones, tones, lang_ids
19
+
20
+
21
+ def get_bert(norm_text, word2ph, language, device, style_text=None, style_weight=0.7):
22
+ from .chinese_bert import get_bert_feature as zh_bert
23
+
24
+ lang_bert_func_map = {"ZH": zh_bert}
25
+ bert = lang_bert_func_map[language](
26
+ norm_text, word2ph, device, style_text, style_weight
27
+ )
28
+ return bert
29
+
30
+
31
+ def check_bert_models():
32
+ import json
33
+ from pathlib import Path
34
+
35
+ # from config import config
36
+ from .bert_utils import _check_bert
37
+
38
+ with open("./bert/bert_models.json", "r") as fp:
39
+ models = json.load(fp)
40
+ for k, v in models.items():
41
+ local_path = Path("./bert").joinpath(k)
42
+ _check_bert(v["repo_id"], v["files"], local_path)
43
+
44
+
45
+ # def init_openjtalk():
46
+ # import platform
47
+
48
+ # if platform.platform() == "Linux":
49
+ # import pyopenjtalk
50
+
51
+ # pyopenjtalk.g2p("こんにちは,世界。")
52
+
53
+
54
+ # init_openjtalk()
55
+ check_bert_models()
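As a quick illustration of the ID mapping above (a sketch only: the phoneme/tone pair is hypothetical, and importing text runs check_bert_models(), which expects ./bert/bert_models.json to be present):

from text import cleaned_text_to_sequence

# Hypothetical cleaned output for a single syllable: phonemes ["n", "i"], tone 3.
phones, tones, lang_ids = cleaned_text_to_sequence(["n", "i"], [3, 3], "ZH")
print(phones)    # IDs of "n" and "i" in text.symbols
print(tones)     # [3, 3] -- the ZH tone offset is 0
print(lang_ids)  # [0, 0] -- language_id_map["ZH"]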
text/bert_utils.py ADDED
@@ -0,0 +1,16 @@
1
+ from pathlib import Path
2
+
3
+ from huggingface_hub import hf_hub_download
4
+
5
+ from config import config
6
+
7
+
8
+ MIRROR: str = config.mirror
9
+
10
+
11
+ def _check_bert(repo_id, files, local_path):
12
+ for file in files:
13
+ if not Path(local_path).joinpath(file).exists():
14
+ hf_hub_download(
15
+ repo_id, file, local_dir=local_path, local_dir_use_symlinks=False
16
+ )
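For orientation, a sketch of calling _check_bert directly; the repo id and file list here are assumptions standing in for one entry of ./bert/bert_models.json as read by check_bert_models() in text/__init__.py:

from pathlib import Path

from text.bert_utils import _check_bert

# Hypothetical arguments for illustration only; missing files are fetched
# from the Hugging Face Hub into the local BERT directory.
_check_bert(
    repo_id="hfl/chinese-roberta-wwm-ext-large",
    files=["pytorch_model.bin", "config.json", "vocab.txt"],
    local_path=Path("./bert/chinese-roberta-wwm-ext-large"),
)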
text/chinese.py ADDED
@@ -0,0 +1,208 @@
1
+ import os
2
+ import re
3
+
4
+ from pypinyin import lazy_pinyin, Style
5
+
6
+ from text.symbols import punctuation
7
+ from text.tone_sandhi import ToneSandhi
8
+
9
+ try:
10
+ from tn.chinese.normalizer import Normalizer
11
+
12
+ normalizer = Normalizer(
13
+ remove_interjections=False, remove_erhua=False, overwrite_cache=True
14
+ ).normalize
15
+ except ImportError:
16
+ import cn2an
17
+
18
+ print("tn.chinese.normalizer not found, use cn2an normalizer")
19
+ normalizer = lambda x: cn2an.transform(x, "an2cn")
20
+
21
+ current_file_path = os.path.dirname(__file__)
22
+ pinyin_to_symbol_map = {
23
+ line.split("\t")[0]: line.strip().split("\t")[1]
24
+ for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines()
25
+ }
26
+
27
+ import jieba.posseg as psg
28
+
29
+
30
+ rep_map = {
31
+ ":": ",",
32
+ ";": ",",
33
+ ",": ",",
34
+ "。": ".",
35
+ "!": "!",
36
+ "?": "?",
37
+ "\n": ".",
38
+ "·": ",",
39
+ "、": ",",
40
+ "...": "…",
41
+ "$": ".",
42
+ "“": "'",
43
+ "”": "'",
44
+ '"': "'",
45
+ "‘": "'",
46
+ "’": "'",
47
+ "(": "'",
48
+ ")": "'",
49
+ "(": "'",
50
+ ")": "'",
51
+ "《": "'",
52
+ "》": "'",
53
+ "【": "'",
54
+ "】": "'",
55
+ "[": "'",
56
+ "]": "'",
57
+ "—": "-",
58
+ "~": "-",
59
+ "~": "-",
60
+ "「": "'",
61
+ "」": "'",
62
+ }
63
+
64
+ tone_modifier = ToneSandhi()
65
+
66
+
67
+ def replace_punctuation(text):
68
+ text = text.replace("嗯", "恩").replace("呣", "母")
69
+ pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys()))
70
+
71
+ replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
72
+
73
+ replaced_text = re.sub(
74
+ r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text
75
+ )
76
+
77
+ return replaced_text
78
+
79
+
80
+ def g2p(text):
81
+ pattern = r"(?<=[{0}])\s*".format("".join(punctuation))
82
+ sentences = [i for i in re.split(pattern, text) if i.strip() != ""]
83
+ phones, tones, word2ph = _g2p(sentences)
84
+ assert sum(word2ph) == len(phones)
85
+ assert len(word2ph) == len(text) # this assertion can occasionally fail; wrap the call in try/except if needed
86
+ phones = ["_"] + phones + ["_"]
87
+ tones = [0] + tones + [0]
88
+ word2ph = [1] + word2ph + [1]
89
+ return phones, tones, word2ph
90
+
91
+
92
+ def _get_initials_finals(word):
93
+ initials = []
94
+ finals = []
95
+ orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS)
96
+ orig_finals = lazy_pinyin(
97
+ word, neutral_tone_with_five=True, style=Style.FINALS_TONE3
98
+ )
99
+ for c, v in zip(orig_initials, orig_finals):
100
+ initials.append(c)
101
+ finals.append(v)
102
+ return initials, finals
103
+
104
+
105
+ def _g2p(segments):
106
+ phones_list = []
107
+ tones_list = []
108
+ word2ph = []
109
+ for seg in segments:
110
+ # Replace all English words in the sentence
111
+ seg = re.sub("[a-zA-Z]+", "", seg)
112
+ seg_cut = psg.lcut(seg)
113
+ initials = []
114
+ finals = []
115
+ seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
116
+ for word, pos in seg_cut:
117
+ if pos == "eng":
118
+ continue
119
+ sub_initials, sub_finals = _get_initials_finals(word)
120
+ sub_finals = tone_modifier.modified_tone(word, pos, sub_finals)
121
+ initials.append(sub_initials)
122
+ finals.append(sub_finals)
123
+
124
+ # assert len(sub_initials) == len(sub_finals) == len(word)
125
+ initials = sum(initials, [])
126
+ finals = sum(finals, [])
127
+ #
128
+ for c, v in zip(initials, finals):
129
+ raw_pinyin = c + v
130
+ # NOTE: post process for pypinyin outputs
131
+ # we discriminate i, ii and iii
132
+ if c == v:
133
+ assert c in punctuation
134
+ phone = [c]
135
+ tone = "0"
136
+ word2ph.append(1)
137
+ else:
138
+ v_without_tone = v[:-1]
139
+ tone = v[-1]
140
+
141
+ pinyin = c + v_without_tone
142
+ assert tone in "12345"
143
+
144
+ if c:
145
+ # syllable has an initial consonant
146
+ v_rep_map = {
147
+ "uei": "ui",
148
+ "iou": "iu",
149
+ "uen": "un",
150
+ }
151
+ if v_without_tone in v_rep_map.keys():
152
+ pinyin = c + v_rep_map[v_without_tone]
153
+ else:
154
+ # bare final with no initial
155
+ pinyin_rep_map = {
156
+ "ing": "ying",
157
+ "i": "yi",
158
+ "in": "yin",
159
+ "u": "wu",
160
+ }
161
+ if pinyin in pinyin_rep_map.keys():
162
+ pinyin = pinyin_rep_map[pinyin]
163
+ else:
164
+ single_rep_map = {
165
+ "v": "yu",
166
+ "e": "e",
167
+ "i": "y",
168
+ "u": "w",
169
+ }
170
+ if pinyin[0] in single_rep_map.keys():
171
+ pinyin = single_rep_map[pinyin[0]] + pinyin[1:]
172
+
173
+ assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
174
+ phone = pinyin_to_symbol_map[pinyin].split(" ")
175
+ word2ph.append(len(phone))
176
+
177
+ phones_list += phone
178
+ tones_list += [int(tone)] * len(phone)
179
+ return phones_list, tones_list, word2ph
180
+
181
+
182
+ def text_normalize(text):
183
+ text = normalizer(text)
184
+ text = replace_punctuation(text)
185
+ return text
186
+
187
+
188
+ def get_bert_feature(text, word2ph):
189
+ from text import chinese_bert
190
+
191
+ return chinese_bert.get_bert_feature(text, word2ph)
192
+
193
+
194
+ if __name__ == "__main__":
195
+ from text.chinese_bert import get_bert_feature
196
+
197
+ text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
198
+ text = text_normalize(text)
199
+ print(text)
200
+ phones, tones, word2ph = g2p(text)
201
+ bert = get_bert_feature(text, word2ph)
202
+
203
+ print(phones, tones, word2ph, bert.shape)
204
+
205
+
206
+ # # Example usage (note: g2p_paddle is not defined in this file)
207
+ # text = "这是一个示例文本:,你好!这是一个测试...."
208
+ # print(g2p_paddle(text))  # output: 这是一个示例文本你好这是一个测试
text/chinese_bert.py ADDED
@@ -0,0 +1,122 @@
1
+ import sys
2
+
3
+ import torch
4
+ from transformers import AutoModelForMaskedLM, AutoTokenizer
5
+
6
+ from config import config
7
+
8
+ LOCAL_PATH = "./bert/chinese-roberta-wwm-ext-large"
9
+
10
+ tokenizer = AutoTokenizer.from_pretrained(LOCAL_PATH)
11
+
12
+ models = dict()
13
+
14
+
15
+ def get_bert_feature(
16
+ text,
17
+ word2ph,
18
+ device=config.bert_gen_config.device,
19
+ style_text=None,
20
+ style_weight=0.7,
21
+ ):
22
+ if (
23
+ sys.platform == "darwin"
24
+ and torch.backends.mps.is_available()
25
+ and device == "cpu"
26
+ ):
27
+ device = "mps"
28
+ if not device:
29
+ if torch.cuda.is_available():
30
+ device = "cuda"
31
+ else:
32
+ device = "cpu"
33
+ if device not in models.keys():
34
+ models[device] = AutoModelForMaskedLM.from_pretrained(LOCAL_PATH).to(device)
35
+ with torch.no_grad():
36
+ inputs = tokenizer(text, return_tensors="pt")
37
+ for i in inputs:
38
+ inputs[i] = inputs[i].to(device)
39
+ res = models[device](**inputs, output_hidden_states=True)
40
+ res = torch.cat(res["hidden_states"][-3:-2], -1)[0].cpu()
41
+ if style_text:
42
+ style_inputs = tokenizer(style_text, return_tensors="pt")
43
+ for i in style_inputs:
44
+ style_inputs[i] = style_inputs[i].to(device)
45
+ style_res = models[device](**style_inputs, output_hidden_states=True)
46
+ style_res = torch.cat(style_res["hidden_states"][-3:-2], -1)[0].cpu()
47
+ style_res_mean = style_res.mean(0)
48
+ assert len(word2ph) == len(text) + 2
49
+ word2phone = word2ph
50
+ phone_level_feature = []
51
+ for i in range(len(word2phone)):
52
+ if style_text:
53
+ repeat_feature = (
54
+ res[i].repeat(word2phone[i], 1) * (1 - style_weight)
55
+ + style_res_mean.repeat(word2phone[i], 1) * style_weight
56
+ )
57
+ else:
58
+ repeat_feature = res[i].repeat(word2phone[i], 1)
59
+ phone_level_feature.append(repeat_feature)
60
+
61
+ phone_level_feature = torch.cat(phone_level_feature, dim=0)
62
+
63
+ return phone_level_feature.T
64
+
65
+
66
+ if __name__ == "__main__":
67
+ word_level_feature = torch.rand(38, 1024) # 38 tokens, each with a 1024-dim feature
68
+ word2phone = [
+ 1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1,
+ 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1,
+ ]
108
+
109
+ # Compute the total number of frames
110
+ total_frames = sum(word2phone)
111
+ print(word_level_feature.shape)
112
+ print(word2phone)
113
+ phone_level_feature = []
114
+ for i in range(len(word2phone)):
115
+ print(word_level_feature[i].shape)
116
+
117
+ # 对每个词重复word2phone[i]次
118
+ repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
119
+ phone_level_feature.append(repeat_feature)
120
+
121
+ phone_level_feature = torch.cat(phone_level_feature, dim=0)
122
+ print(phone_level_feature.shape) # torch.Size([65, 1024]): sum(word2phone) == 65
text/cleaner.py ADDED
@@ -0,0 +1,28 @@
1
+ from text import chinese, cleaned_text_to_sequence
2
+
3
+
4
+ language_module_map = {"ZH": chinese}
5
+
6
+
7
+ def clean_text(text, language):
8
+ language_module = language_module_map[language]
9
+ norm_text = language_module.text_normalize(text)
10
+ phones, tones, word2ph = language_module.g2p(norm_text)
11
+ return norm_text, phones, tones, word2ph
12
+
13
+
14
+ def clean_text_bert(text, language):
15
+ language_module = language_module_map[language]
16
+ norm_text = language_module.text_normalize(text)
17
+ phones, tones, word2ph = language_module.g2p(norm_text)
18
+ bert = language_module.get_bert_feature(norm_text, word2ph)
19
+ return phones, tones, bert
20
+
21
+
22
+ def text_to_sequence(text, language):
23
+ norm_text, phones, tones, word2ph = clean_text(text, language)
24
+ return cleaned_text_to_sequence(phones, tones, language)
25
+
26
+
27
+ if __name__ == "__main__":
28
+ pass
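Since the __main__ block above is a stub, here is a minimal sketch of driving the cleaner; the sample sentence is arbitrary, and it assumes the ZH pipeline and its data files are installed:

from text.cleaner import clean_text

norm_text, phones, tones, word2ph = clean_text("你好,世界。", "ZH")
print(norm_text)  # normalized text after text_normalize
print(phones)     # phoneme symbols, one list entry per phone
print(tones)      # one tone index per phone
print(word2ph)    # number of phones produced by each input character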
text/opencpop-strict.txt ADDED
@@ -0,0 +1,429 @@
1
+ a AA a
2
+ ai AA ai
3
+ an AA an
4
+ ang AA ang
5
+ ao AA ao
6
+ ba b a
7
+ bai b ai
8
+ ban b an
9
+ bang b ang
10
+ bao b ao
11
+ bei b ei
12
+ ben b en
13
+ beng b eng
14
+ bi b i
15
+ bian b ian
16
+ biao b iao
17
+ bie b ie
18
+ bin b in
19
+ bing b ing
20
+ bo b o
21
+ bu b u
22
+ ca c a
23
+ cai c ai
24
+ can c an
25
+ cang c ang
26
+ cao c ao
27
+ ce c e
28
+ cei c ei
29
+ cen c en
30
+ ceng c eng
31
+ cha ch a
32
+ chai ch ai
33
+ chan ch an
34
+ chang ch ang
35
+ chao ch ao
36
+ che ch e
37
+ chen ch en
38
+ cheng ch eng
39
+ chi ch ir
40
+ chong ch ong
41
+ chou ch ou
42
+ chu ch u
43
+ chua ch ua
44
+ chuai ch uai
45
+ chuan ch uan
46
+ chuang ch uang
47
+ chui ch ui
48
+ chun ch un
49
+ chuo ch uo
50
+ ci c i0
51
+ cong c ong
52
+ cou c ou
53
+ cu c u
54
+ cuan c uan
55
+ cui c ui
56
+ cun c un
57
+ cuo c uo
58
+ da d a
59
+ dai d ai
60
+ dan d an
61
+ dang d ang
62
+ dao d ao
63
+ de d e
64
+ dei d ei
65
+ den d en
66
+ deng d eng
67
+ di d i
68
+ dia d ia
69
+ dian d ian
70
+ diao d iao
71
+ die d ie
72
+ ding d ing
73
+ diu d iu
74
+ dong d ong
75
+ dou d ou
76
+ du d u
77
+ duan d uan
78
+ dui d ui
79
+ dun d un
80
+ duo d uo
81
+ e EE e
82
+ ei EE ei
83
+ en EE en
84
+ eng EE eng
85
+ er EE er
86
+ fa f a
87
+ fan f an
88
+ fang f ang
89
+ fei f ei
90
+ fen f en
91
+ feng f eng
92
+ fo f o
93
+ fou f ou
94
+ fu f u
95
+ ga g a
96
+ gai g ai
97
+ gan g an
98
+ gang g ang
99
+ gao g ao
100
+ ge g e
101
+ gei g ei
102
+ gen g en
103
+ geng g eng
104
+ gong g ong
105
+ gou g ou
106
+ gu g u
107
+ gua g ua
108
+ guai g uai
109
+ guan g uan
110
+ guang g uang
111
+ gui g ui
112
+ gun g un
113
+ guo g uo
114
+ ha h a
115
+ hai h ai
116
+ han h an
117
+ hang h ang
118
+ hao h ao
119
+ he h e
120
+ hei h ei
121
+ hen h en
122
+ heng h eng
123
+ hong h ong
124
+ hou h ou
125
+ hu h u
126
+ hua h ua
127
+ huai h uai
128
+ huan h uan
129
+ huang h uang
130
+ hui h ui
131
+ hun h un
132
+ huo h uo
133
+ ji j i
134
+ jia j ia
135
+ jian j ian
136
+ jiang j iang
137
+ jiao j iao
138
+ jie j ie
139
+ jin j in
140
+ jing j ing
141
+ jiong j iong
142
+ jiu j iu
143
+ ju j v
144
+ jv j v
145
+ juan j van
146
+ jvan j van
147
+ jue j ve
148
+ jve j ve
149
+ jun j vn
150
+ jvn j vn
151
+ ka k a
152
+ kai k ai
153
+ kan k an
154
+ kang k ang
155
+ kao k ao
156
+ ke k e
157
+ kei k ei
158
+ ken k en
159
+ keng k eng
160
+ kong k ong
161
+ kou k ou
162
+ ku k u
163
+ kua k ua
164
+ kuai k uai
165
+ kuan k uan
166
+ kuang k uang
167
+ kui k ui
168
+ kun k un
169
+ kuo k uo
170
+ la l a
171
+ lai l ai
172
+ lan l an
173
+ lang l ang
174
+ lao l ao
175
+ le l e
176
+ lei l ei
177
+ leng l eng
178
+ li l i
179
+ lia l ia
180
+ lian l ian
181
+ liang l iang
182
+ liao l iao
183
+ lie l ie
184
+ lin l in
185
+ ling l ing
186
+ liu l iu
187
+ lo l o
188
+ long l ong
189
+ lou l ou
190
+ lu l u
191
+ luan l uan
192
+ lun l un
193
+ luo l uo
194
+ lv l v
195
+ lve l ve
196
+ ma m a
197
+ mai m ai
198
+ man m an
199
+ mang m ang
200
+ mao m ao
201
+ me m e
202
+ mei m ei
203
+ men m en
204
+ meng m eng
205
+ mi m i
206
+ mian m ian
207
+ miao m iao
208
+ mie m ie
209
+ min m in
210
+ ming m ing
211
+ miu m iu
212
+ mo m o
213
+ mou m ou
214
+ mu m u
215
+ na n a
216
+ nai n ai
217
+ nan n an
218
+ nang n ang
219
+ nao n ao
220
+ ne n e
221
+ nei n ei
222
+ nen n en
223
+ neng n eng
224
+ ni n i
225
+ nian n ian
226
+ niang n iang
227
+ niao n iao
228
+ nie n ie
229
+ nin n in
230
+ ning n ing
231
+ niu n iu
232
+ nong n ong
233
+ nou n ou
234
+ nu n u
235
+ nuan n uan
236
+ nun n un
237
+ nuo n uo
238
+ nv n v
239
+ nve n ve
240
+ o OO o
241
+ ou OO ou
242
+ pa p a
243
+ pai p ai
244
+ pan p an
245
+ pang p ang
246
+ pao p ao
247
+ pei p ei
248
+ pen p en
249
+ peng p eng
250
+ pi p i
251
+ pian p ian
252
+ piao p iao
253
+ pie p ie
254
+ pin p in
255
+ ping p ing
256
+ po p o
257
+ pou p ou
258
+ pu p u
259
+ qi q i
260
+ qia q ia
261
+ qian q ian
262
+ qiang q iang
263
+ qiao q iao
264
+ qie q ie
265
+ qin q in
266
+ qing q ing
267
+ qiong q iong
268
+ qiu q iu
269
+ qu q v
270
+ qv q v
271
+ quan q van
272
+ qvan q van
273
+ que q ve
274
+ qve q ve
275
+ qun q vn
276
+ qvn q vn
277
+ ran r an
278
+ rang r ang
279
+ rao r ao
280
+ re r e
281
+ ren r en
282
+ reng r eng
283
+ ri r ir
284
+ rong r ong
285
+ rou r ou
286
+ ru r u
287
+ rua r ua
288
+ ruan r uan
289
+ rui r ui
290
+ run r un
291
+ ruo r uo
292
+ sa s a
293
+ sai s ai
294
+ san s an
295
+ sang s ang
296
+ sao s ao
297
+ se s e
298
+ sen s en
299
+ seng s eng
300
+ sha sh a
301
+ shai sh ai
302
+ shan sh an
303
+ shang sh ang
304
+ shao sh ao
305
+ she sh e
306
+ shei sh ei
307
+ shen sh en
308
+ sheng sh eng
309
+ shi sh ir
310
+ shou sh ou
311
+ shu sh u
312
+ shua sh ua
313
+ shuai sh uai
314
+ shuan sh uan
315
+ shuang sh uang
316
+ shui sh ui
317
+ shun sh un
318
+ shuo sh uo
319
+ si s i0
320
+ song s ong
321
+ sou s ou
322
+ su s u
323
+ suan s uan
324
+ sui s ui
325
+ sun s un
326
+ suo s uo
327
+ ta t a
328
+ tai t ai
329
+ tan t an
330
+ tang t ang
331
+ tao t ao
332
+ te t e
333
+ tei t ei
334
+ teng t eng
335
+ ti t i
336
+ tian t ian
337
+ tiao t iao
338
+ tie t ie
339
+ ting t ing
340
+ tong t ong
341
+ tou t ou
342
+ tu t u
343
+ tuan t uan
344
+ tui t ui
345
+ tun t un
346
+ tuo t uo
347
+ wa w a
348
+ wai w ai
349
+ wan w an
350
+ wang w ang
351
+ wei w ei
352
+ wen w en
353
+ weng w eng
354
+ wo w o
355
+ wu w u
356
+ xi x i
357
+ xia x ia
358
+ xian x ian
359
+ xiang x iang
360
+ xiao x iao
361
+ xie x ie
362
+ xin x in
363
+ xing x ing
364
+ xiong x iong
365
+ xiu x iu
366
+ xu x v
367
+ xv x v
368
+ xuan x van
369
+ xvan x van
370
+ xue x ve
371
+ xve x ve
372
+ xun x vn
373
+ xvn x vn
374
+ ya y a
375
+ yan y En
376
+ yang y ang
377
+ yao y ao
378
+ ye y E
379
+ yi y i
380
+ yin y in
381
+ ying y ing
382
+ yo y o
383
+ yong y ong
384
+ you y ou
385
+ yu y v
386
+ yv y v
387
+ yuan y van
388
+ yvan y van
389
+ yue y ve
390
+ yve y ve
391
+ yun y vn
392
+ yvn y vn
393
+ za z a
394
+ zai z ai
395
+ zan z an
396
+ zang z ang
397
+ zao z ao
398
+ ze z e
399
+ zei z ei
400
+ zen z en
401
+ zeng z eng
402
+ zha zh a
403
+ zhai zh ai
404
+ zhan zh an
405
+ zhang zh ang
406
+ zhao zh ao
407
+ zhe zh e
408
+ zhei zh ei
409
+ zhen zh en
410
+ zheng zh eng
411
+ zhi zh ir
412
+ zhong zh ong
413
+ zhou zh ou
414
+ zhu zh u
415
+ zhua zh ua
416
+ zhuai zh uai
417
+ zhuan zh uan
418
+ zhuang zh uang
419
+ zhui zh ui
420
+ zhun zh un
421
+ zhuo zh uo
422
+ zi z i0
423
+ zong z ong
424
+ zou z ou
425
+ zu z u
426
+ zuan z uan
427
+ zui z ui
428
+ zun z un
429
+ zuo z uo
text/symbols.py ADDED
@@ -0,0 +1,187 @@
1
+ punctuation = ["!", "?", "…", ",", ".", "'", "-"]
2
+ pu_symbols = punctuation + ["SP", "UNK"]
3
+ pad = "_"
4
+
5
+ # chinese
6
+ zh_symbols = [
7
+ "E",
8
+ "En",
9
+ "a",
10
+ "ai",
11
+ "an",
12
+ "ang",
13
+ "ao",
14
+ "b",
15
+ "c",
16
+ "ch",
17
+ "d",
18
+ "e",
19
+ "ei",
20
+ "en",
21
+ "eng",
22
+ "er",
23
+ "f",
24
+ "g",
25
+ "h",
26
+ "i",
27
+ "i0",
28
+ "ia",
29
+ "ian",
30
+ "iang",
31
+ "iao",
32
+ "ie",
33
+ "in",
34
+ "ing",
35
+ "iong",
36
+ "ir",
37
+ "iu",
38
+ "j",
39
+ "k",
40
+ "l",
41
+ "m",
42
+ "n",
43
+ "o",
44
+ "ong",
45
+ "ou",
46
+ "p",
47
+ "q",
48
+ "r",
49
+ "s",
50
+ "sh",
51
+ "t",
52
+ "u",
53
+ "ua",
54
+ "uai",
55
+ "uan",
56
+ "uang",
57
+ "ui",
58
+ "un",
59
+ "uo",
60
+ "v",
61
+ "van",
62
+ "ve",
63
+ "vn",
64
+ "w",
65
+ "x",
66
+ "y",
67
+ "z",
68
+ "zh",
69
+ "AA",
70
+ "EE",
71
+ "OO",
72
+ ]
73
+ num_zh_tones = 6
74
+
75
+ # japanese
76
+ ja_symbols = [
77
+ "N",
78
+ "a",
79
+ "a:",
80
+ "b",
81
+ "by",
82
+ "ch",
83
+ "d",
84
+ "dy",
85
+ "e",
86
+ "e:",
87
+ "f",
88
+ "g",
89
+ "gy",
90
+ "h",
91
+ "hy",
92
+ "i",
93
+ "i:",
94
+ "j",
95
+ "k",
96
+ "ky",
97
+ "m",
98
+ "my",
99
+ "n",
100
+ "ny",
101
+ "o",
102
+ "o:",
103
+ "p",
104
+ "py",
105
+ "q",
106
+ "r",
107
+ "ry",
108
+ "s",
109
+ "sh",
110
+ "t",
111
+ "ts",
112
+ "ty",
113
+ "u",
114
+ "u:",
115
+ "w",
116
+ "y",
117
+ "z",
118
+ "zy",
119
+ ]
120
+ num_ja_tones = 2
121
+
122
+ # English
123
+ en_symbols = [
124
+ "aa",
125
+ "ae",
126
+ "ah",
127
+ "ao",
128
+ "aw",
129
+ "ay",
130
+ "b",
131
+ "ch",
132
+ "d",
133
+ "dh",
134
+ "eh",
135
+ "er",
136
+ "ey",
137
+ "f",
138
+ "g",
139
+ "hh",
140
+ "ih",
141
+ "iy",
142
+ "jh",
143
+ "k",
144
+ "l",
145
+ "m",
146
+ "n",
147
+ "ng",
148
+ "ow",
149
+ "oy",
150
+ "p",
151
+ "r",
152
+ "s",
153
+ "sh",
154
+ "t",
155
+ "th",
156
+ "uh",
157
+ "uw",
158
+ "V",
159
+ "w",
160
+ "y",
161
+ "z",
162
+ "zh",
163
+ ]
164
+ num_en_tones = 4
165
+
166
+ # combine all symbols
167
+ normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols))
168
+ symbols = [pad] + normal_symbols + pu_symbols
169
+ sil_phonemes_ids = [symbols.index(i) for i in pu_symbols]
170
+
171
+ # combine all tones
172
+ num_tones = num_zh_tones + num_ja_tones + num_en_tones
173
+
174
+ # language maps
175
+ language_id_map = {"ZH": 0, "JP": 1, "EN": 2}
176
+ num_languages = len(language_id_map.keys())
177
+
178
+ language_tone_start_map = {
179
+ "ZH": 0,
180
+ "JP": num_zh_tones,
181
+ "EN": num_zh_tones + num_ja_tones,
182
+ }
183
+
184
+ if __name__ == "__main__":
185
+ a = set(zh_symbols)
186
+ b = set(en_symbols)
187
+ print(sorted(a & b))
text/tone_sandhi.py ADDED
@@ -0,0 +1,776 @@
1
+ # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ from typing import List
15
+ from typing import Tuple
16
+
17
+ import jieba
18
+ from pypinyin import lazy_pinyin
19
+ from pypinyin import Style
20
+
21
+
22
+ class ToneSandhi:
23
+ def __init__(self):
24
+ self.must_neural_tone_words = {
25
+ "麻烦",
26
+ "麻利",
27
+ "鸳鸯",
28
+ "高粱",
29
+ "骨头",
30
+ "骆驼",
31
+ "马虎",
32
+ "首饰",
33
+ "馒头",
34
+ "馄饨",
35
+ "风筝",
36
+ "难为",
37
+ "队伍",
38
+ "阔气",
39
+ "闺女",
40
+ "门道",
41
+ "锄头",
42
+ "铺盖",
43
+ "铃铛",
44
+ "铁匠",
45
+ "钥匙",
46
+ "里脊",
47
+ "里头",
48
+ "部分",
49
+ "那么",
50
+ "道士",
51
+ "造化",
52
+ "迷糊",
53
+ "连累",
54
+ "这么",
55
+ "这个",
56
+ "运气",
57
+ "过去",
58
+ "软和",
59
+ "转悠",
60
+ "踏实",
61
+ "跳蚤",
62
+ "跟头",
63
+ "趔趄",
64
+ "财主",
65
+ "豆腐",
66
+ "讲究",
67
+ "记性",
68
+ "记号",
69
+ "认识",
70
+ "规矩",
71
+ "见识",
72
+ "裁缝",
73
+ "补丁",
74
+ "衣裳",
75
+ "衣服",
76
+ "衙门",
77
+ "街坊",
78
+ "行李",
79
+ "行当",
80
+ "蛤蟆",
81
+ "蘑菇",
82
+ "薄荷",
83
+ "葫芦",
84
+ "葡萄",
85
+ "萝卜",
86
+ "荸荠",
87
+ "苗条",
88
+ "苗头",
89
+ "苍蝇",
90
+ "芝麻",
91
+ "舒服",
92
+ "舒坦",
93
+ "舌头",
94
+ "自在",
95
+ "膏药",
96
+ "脾气",
97
+ "脑袋",
98
+ "脊梁",
99
+ "能耐",
100
+ "胳膊",
101
+ "胭脂",
102
+ "胡萝",
103
+ "胡琴",
104
+ "胡同",
105
+ "聪明",
106
+ "耽误",
107
+ "耽搁",
108
+ "耷拉",
109
+ "耳朵",
110
+ "老爷",
111
+ "老实",
112
+ "老婆",
113
+ "老头",
114
+ "老太",
115
+ "翻腾",
116
+ "罗嗦",
117
+ "罐头",
118
+ "编辑",
119
+ "结实",
120
+ "红火",
121
+ "累赘",
122
+ "糨糊",
123
+ "糊涂",
124
+ "精神",
125
+ "粮食",
126
+ "簸箕",
127
+ "篱笆",
128
+ "算计",
129
+ "算盘",
130
+ "答应",
131
+ "笤帚",
132
+ "笑语",
133
+ "笑话",
134
+ "窟窿",
135
+ "窝囊",
136
+ "窗户",
137
+ "稳当",
138
+ "稀罕",
139
+ "称呼",
140
+ "秧歌",
141
+ "秀气",
142
+ "秀才",
143
+ "福气",
144
+ "祖宗",
145
+ "砚台",
146
+ "码头",
147
+ "石榴",
148
+ "石头",
149
+ "石匠",
150
+ "知识",
151
+ "眼睛",
152
+ "眯缝",
153
+ "眨巴",
154
+ "眉毛",
155
+ "相声",
156
+ "盘算",
157
+ "白净",
158
+ "痢疾",
159
+ "痛快",
160
+ "疟疾",
161
+ "疙瘩",
162
+ "疏忽",
163
+ "畜生",
164
+ "生意",
165
+ "甘蔗",
166
+ "琵琶",
167
+ "琢磨",
168
+ "琉璃",
169
+ "玻璃",
170
+ "玫瑰",
171
+ "玄乎",
172
+ "狐狸",
173
+ "状元",
174
+ "特务",
175
+ "牲口",
176
+ "牙碜",
177
+ "牌楼",
178
+ "爽快",
179
+ "爱人",
180
+ "热闹",
181
+ "烧饼",
182
+ "烟筒",
183
+ "烂糊",
184
+ "点心",
185
+ "炊帚",
186
+ "灯笼",
187
+ "火候",
188
+ "漂亮",
189
+ "滑溜",
190
+ "溜达",
191
+ "温和",
192
+ "清楚",
193
+ "消息",
194
+ "浪头",
195
+ "活泼",
196
+ "比方",
197
+ "正经",
198
+ "欺负",
199
+ "模糊",
200
+ "槟榔",
201
+ "棺材",
202
+ "棒槌",
203
+ "棉花",
204
+ "核桃",
205
+ "栅栏",
206
+ "柴火",
207
+ "架势",
208
+ "枕头",
209
+ "枇杷",
210
+ "机灵",
211
+ "本事",
212
+ "木头",
213
+ "木匠",
214
+ "朋友",
215
+ "月饼",
216
+ "月亮",
217
+ "暖和",
218
+ "明白",
219
+ "时候",
220
+ "新鲜",
221
+ "故事",
222
+ "收拾",
223
+ "收成",
224
+ "提防",
225
+ "挖苦",
226
+ "挑剔",
227
+ "指甲",
228
+ "指头",
229
+ "拾掇",
230
+ "拳头",
231
+ "拨弄",
232
+ "招牌",
233
+ "招呼",
234
+ "抬举",
235
+ "护士",
236
+ "折腾",
237
+ "扫帚",
238
+ "打量",
239
+ "打算",
240
+ "打点",
241
+ "打扮",
242
+ "打听",
243
+ "打发",
244
+ "扎实",
245
+ "扁担",
246
+ "戒指",
247
+ "懒得",
248
+ "意识",
249
+ "意思",
250
+ "情形",
251
+ "悟性",
252
+ "怪物",
253
+ "思量",
254
+ "怎么",
255
+ "念头",
256
+ "念叨",
257
+ "快活",
258
+ "忙活",
259
+ "志气",
260
+ "心思",
261
+ "得罪",
262
+ "张罗",
263
+ "弟兄",
264
+ "开通",
265
+ "应酬",
266
+ "庄稼",
267
+ "干事",
268
+ "帮手",
269
+ "帐篷",
270
+ "希罕",
271
+ "师父",
272
+ "师傅",
273
+ "巴结",
274
+ "巴掌",
275
+ "差事",
276
+ "工夫",
277
+ "岁数",
278
+ "屁股",
279
+ "尾巴",
280
+ "少爷",
281
+ "小气",
282
+ "小伙",
283
+ "将就",
284
+ "对头",
285
+ "对付",
286
+ "寡妇",
287
+ "家伙",
288
+ "客气",
289
+ "实在",
290
+ "官司",
291
+ "学问",
292
+ "学生",
293
+ "字号",
294
+ "嫁妆",
295
+ "媳妇",
296
+ "媒人",
297
+ "婆家",
298
+ "娘家",
299
+ "委屈",
300
+ "姑娘",
301
+ "姐夫",
302
+ "妯娌",
303
+ "妥当",
304
+ "妖精",
305
+ "奴才",
306
+ "女婿",
307
+ "头发",
308
+ "太阳",
309
+ "大爷",
310
+ "大方",
311
+ "大意",
312
+ "大夫",
313
+ "多少",
314
+ "多么",
315
+ "外甥",
316
+ "壮实",
317
+ "地道",
318
+ "地方",
319
+ "在乎",
320
+ "困难",
321
+ "嘴巴",
322
+ "嘱咐",
323
+ "嘟囔",
324
+ "嘀咕",
325
+ "喜欢",
326
+ "喇嘛",
327
+ "喇叭",
328
+ "商量",
329
+ "唾沫",
330
+ "哑巴",
331
+ "哈欠",
332
+ "哆嗦",
333
+ "咳嗽",
334
+ "和尚",
335
+ "告诉",
336
+ "告示",
337
+ "含糊",
338
+ "吓唬",
339
+ "后头",
340
+ "名字",
341
+ "名堂",
342
+ "合同",
343
+ "吆喝",
344
+ "叫唤",
345
+ "口袋",
346
+ "厚道",
347
+ "厉害",
348
+ "千斤",
349
+ "包袱",
350
+ "包涵",
351
+ "匀称",
352
+ "勤快",
353
+ "动静",
354
+ "动弹",
355
+ "功夫",
356
+ "力气",
357
+ "前头",
358
+ "刺猬",
359
+ "刺激",
360
+ "别扭",
361
+ "利落",
362
+ "利索",
363
+ "利害",
364
+ "分析",
365
+ "出息",
366
+ "凑合",
367
+ "凉快",
368
+ "冷战",
369
+ "冤枉",
370
+ "冒失",
371
+ "养活",
372
+ "关系",
373
+ "先生",
374
+ "兄弟",
375
+ "便宜",
376
+ "使唤",
377
+ "佩服",
378
+ "作坊",
379
+ "体面",
380
+ "位置",
381
+ "似的",
382
+ "伙计",
383
+ "休息",
384
+ "什么",
385
+ "人家",
386
+ "亲戚",
387
+ "亲家",
388
+ "交情",
389
+ "云彩",
390
+ "事情",
391
+ "买卖",
392
+ "主意",
393
+ "丫头",
394
+ "丧气",
395
+ "两口",
396
+ "东西",
397
+ "东家",
398
+ "世故",
399
+ "不由",
400
+ "不在",
401
+ "下水",
402
+ "下巴",
403
+ "上头",
404
+ "上司",
405
+ "丈夫",
406
+ "丈人",
407
+ "一辈",
408
+ "那个",
409
+ "菩萨",
410
+ "父亲",
411
+ "母亲",
412
+ "咕噜",
413
+ "邋遢",
414
+ "费用",
415
+ "冤家",
416
+ "甜头",
417
+ "介绍",
418
+ "荒唐",
419
+ "大人",
420
+ "泥鳅",
421
+ "幸福",
422
+ "熟悉",
423
+ "计划",
424
+ "扑腾",
425
+ "蜡烛",
426
+ "姥爷",
427
+ "照顾",
428
+ "喉咙",
429
+ "吉他",
430
+ "弄堂",
431
+ "蚂蚱",
432
+ "凤凰",
433
+ "拖沓",
434
+ "寒碜",
435
+ "糟蹋",
436
+ "倒腾",
437
+ "报复",
438
+ "逻辑",
439
+ "盘缠",
440
+ "喽啰",
441
+ "牢骚",
442
+ "咖喱",
443
+ "扫把",
444
+ "惦记",
445
+ }
446
+ self.must_not_neural_tone_words = {
447
+ "男子",
448
+ "女子",
449
+ "分子",
450
+ "原子",
451
+ "量子",
452
+ "莲子",
453
+ "石子",
454
+ "瓜子",
455
+ "电子",
456
+ "人人",
457
+ "虎虎",
458
+ }
459
+ self.punc = ":,;。?!“”‘’':,;.?!"
460
+
461
+ # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
462
+ # e.g.
463
+ # word: "家里"
464
+ # pos: "s"
465
+ # finals: ['ia1', 'i3']
466
+ def _neural_sandhi(self, word: str, pos: str, finals: List[str]) -> List[str]:
467
+ # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
468
+ for j, item in enumerate(word):
469
+ if (
470
+ j - 1 >= 0
471
+ and item == word[j - 1]
472
+ and pos[0] in {"n", "v", "a"}
473
+ and word not in self.must_not_neural_tone_words
474
+ ):
475
+ finals[j] = finals[j][:-1] + "5"
476
+ ge_idx = word.find("个")
477
+ if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
478
+ finals[-1] = finals[-1][:-1] + "5"
479
+ elif len(word) >= 1 and word[-1] in "的地得":
480
+ finals[-1] = finals[-1][:-1] + "5"
481
+ # e.g. 走了, 看着, 去过
482
+ # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
483
+ # finals[-1] = finals[-1][:-1] + "5"
484
+ elif (
485
+ len(word) > 1
486
+ and word[-1] in "们子"
487
+ and pos in {"r", "n"}
488
+ and word not in self.must_not_neural_tone_words
489
+ ):
490
+ finals[-1] = finals[-1][:-1] + "5"
491
+ # e.g. 桌上, 地下, 家里
492
+ elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
493
+ finals[-1] = finals[-1][:-1] + "5"
494
+ # e.g. 上来, 下去
495
+ elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
496
+ finals[-1] = finals[-1][:-1] + "5"
497
+ # "个" used as a measure word
498
+ elif (
499
+ ge_idx >= 1
500
+ and (
501
+ word[ge_idx - 1].isnumeric()
502
+ or word[ge_idx - 1] in "几有两半多各整每做是"
503
+ )
504
+ ) or word == "个":
505
+ finals[ge_idx] = finals[ge_idx][:-1] + "5"
506
+ else:
507
+ if (
508
+ word in self.must_neural_tone_words
509
+ or word[-2:] in self.must_neural_tone_words
510
+ ):
511
+ finals[-1] = finals[-1][:-1] + "5"
512
+
513
+ word_list = self._split_word(word)
514
+ finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]]
515
+ for i, word in enumerate(word_list):
516
+ # conventional neural in Chinese
517
+ if (
518
+ word in self.must_neural_tone_words
519
+ or word[-2:] in self.must_neural_tone_words
520
+ ):
521
+ finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
522
+ finals = sum(finals_list, [])
523
+ return finals
524
+
525
+ def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
526
+ # e.g. 看不懂
527
+ if len(word) == 3 and word[1] == "不":
528
+ finals[1] = finals[1][:-1] + "5"
529
+ else:
530
+ for i, char in enumerate(word):
531
+ # "不" before tone4 should be bu2, e.g. 不怕
532
+ if char == "不" and i + 1 < len(word) and finals[i + 1][-1] == "4":
533
+ finals[i] = finals[i][:-1] + "2"
534
+ return finals
535
+
536
+ def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
537
+ # "一" in number sequences, e.g. 一零零, 二一零
538
+ if word.find("一") != -1 and all(
539
+ [item.isnumeric() for item in word if item != "一"]
540
+ ):
541
+ return finals
542
+ # "一" between reduplication words should be yi5, e.g. 看一看
543
+ elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
544
+ finals[1] = finals[1][:-1] + "5"
545
+ # when "一" is ordinal word, it should be yi1
546
+ elif word.startswith("第一"):
547
+ finals[1] = finals[1][:-1] + "1"
548
+ else:
549
+ for i, char in enumerate(word):
550
+ if char == "一" and i + 1 < len(word):
551
+ # "一" before tone4 should be yi2, e.g. 一段
552
+ if finals[i + 1][-1] == "4":
553
+ finals[i] = finals[i][:-1] + "2"
554
+ # "一" before non-tone4 should be yi4, e.g. 一天
555
+ else:
556
+ # "一" 后面如果是标点,还读一声
557
+ if word[i + 1] not in self.punc:
558
+ finals[i] = finals[i][:-1] + "4"
559
+ return finals
560
+
561
+ def _split_word(self, word: str) -> List[str]:
562
+ word_list = jieba.cut_for_search(word)
563
+ word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
564
+ first_subword = word_list[0]
565
+ first_begin_idx = word.find(first_subword)
566
+ if first_begin_idx == 0:
567
+ second_subword = word[len(first_subword) :]
568
+ new_word_list = [first_subword, second_subword]
569
+ else:
570
+ second_subword = word[: -len(first_subword)]
571
+ new_word_list = [second_subword, first_subword]
572
+ return new_word_list
573
+
574
+ def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
575
+ if len(word) == 2 and self._all_tone_three(finals):
576
+ finals[0] = finals[0][:-1] + "2"
577
+ elif len(word) == 3:
578
+ word_list = self._split_word(word)
579
+ if self._all_tone_three(finals):
580
+ # disyllabic + monosyllabic, e.g. 蒙古/包
581
+ if len(word_list[0]) == 2:
582
+ finals[0] = finals[0][:-1] + "2"
583
+ finals[1] = finals[1][:-1] + "2"
584
+ # monosyllabic + disyllabic, e.g. 纸/老虎
585
+ elif len(word_list[0]) == 1:
586
+ finals[1] = finals[1][:-1] + "2"
587
+ else:
588
+ finals_list = [finals[: len(word_list[0])], finals[len(word_list[0]) :]]
589
+ if len(finals_list) == 2:
590
+ for i, sub in enumerate(finals_list):
591
+ # e.g. 所有/人
592
+ if self._all_tone_three(sub) and len(sub) == 2:
593
+ finals_list[i][0] = finals_list[i][0][:-1] + "2"
594
+ # e.g. 好/喜欢
595
+ elif (
596
+ i == 1
597
+ and not self._all_tone_three(sub)
598
+ and finals_list[i][0][-1] == "3"
599
+ and finals_list[0][-1][-1] == "3"
600
+ ):
601
+ finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
602
+ finals = sum(finals_list, [])
603
+ # split a four-character idiom into two two-character words
604
+ elif len(word) == 4:
605
+ finals_list = [finals[:2], finals[2:]]
606
+ finals = []
607
+ for sub in finals_list:
608
+ if self._all_tone_three(sub):
609
+ sub[0] = sub[0][:-1] + "2"
610
+ finals += sub
611
+
612
+ return finals
613
+
614
+ def _all_tone_three(self, finals: List[str]) -> bool:
615
+ return all(x[-1] == "3" for x in finals)
616
+
617
+ # merge "不" and the word behind it
618
+ # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error
619
+ def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
620
+ new_seg = []
621
+ last_word = ""
622
+ for word, pos in seg:
623
+ if last_word == "不":
624
+ word = last_word + word
625
+ if word != "不":
626
+ new_seg.append((word, pos))
627
+ last_word = word[:]
628
+ if last_word == "不":
629
+ new_seg.append((last_word, "d"))
630
+ last_word = ""
631
+ return new_seg
632
+
633
+ # function 1: merge "一" with the reduplicated words on its left and right, e.g. "听","一","听" -> "听一听"
634
+ # function 2: merge a lone "一" with the word after it
635
+ # without merging, "一" sometimes appears alone in jieba's output, which can cause sandhi errors
636
+ # e.g.
637
+ # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
638
+ # output seg: [['听一听', 'v']]
639
+ def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
640
+ new_seg = [] # note: "[] * n" is always just [], so this is a plain empty list
641
+ # function 1
642
+ i = 0
643
+ while i < len(seg):
644
+ word, pos = seg[i]
645
+ if (
646
+ i - 1 >= 0
647
+ and word == "一"
648
+ and i + 1 < len(seg)
649
+ and seg[i - 1][0] == seg[i + 1][0]
650
+ and seg[i - 1][1] == "v"
651
+ ):
652
+ new_seg[-1][0] = new_seg[-1][0] + "一" + new_seg[-1][0] # merge into the last emitted word; an absolute index can drift after earlier merges
653
+ i += 2
654
+ else:
655
+ if (
656
+ i - 2 >= 0
657
+ and seg[i - 1][0] == "一"
658
+ and seg[i - 2][0] == word
659
+ and pos == "v"
660
+ ):
661
+ i += 1 # skip the duplicated verb; a bare continue here would never advance i
+ continue
662
+ else:
663
+ new_seg.append([word, pos])
664
+ i += 1
665
+ seg = [i for i in new_seg if len(i) > 0]
666
+ new_seg = []
667
+ # function 2
668
+ for i, (word, pos) in enumerate(seg):
669
+ if new_seg and new_seg[-1][0] == "一":
670
+ new_seg[-1][0] = new_seg[-1][0] + word
671
+ else:
672
+ new_seg.append([word, pos])
673
+ return new_seg
674
+
675
+ # the first and the second words are all_tone_three
676
+ def _merge_continuous_three_tones(
677
+ self, seg: List[Tuple[str, str]]
678
+ ) -> List[Tuple[str, str]]:
679
+ new_seg = []
680
+ sub_finals_list = [
681
+ lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
682
+ for (word, pos) in seg
683
+ ]
684
+ assert len(sub_finals_list) == len(seg)
685
+ merge_last = [False] * len(seg)
686
+ for i, (word, pos) in enumerate(seg):
687
+ if (
688
+ i - 1 >= 0
689
+ and self._all_tone_three(sub_finals_list[i - 1])
690
+ and self._all_tone_three(sub_finals_list[i])
691
+ and not merge_last[i - 1]
692
+ ):
693
+ # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi
694
+ if (
695
+ not self._is_reduplication(seg[i - 1][0])
696
+ and len(seg[i - 1][0]) + len(seg[i][0]) <= 3
697
+ ):
698
+ new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
699
+ merge_last[i] = True
700
+ else:
701
+ new_seg.append([word, pos])
702
+ else:
703
+ new_seg.append([word, pos])
704
+
705
+ return new_seg
706
+
707
+ def _is_reduplication(self, word: str) -> bool:
708
+ return len(word) == 2 and word[0] == word[1]
709
+
710
+ # the last char of first word and the first char of second word is tone_three
711
+ def _merge_continuous_three_tones_2(
712
+ self, seg: List[Tuple[str, str]]
713
+ ) -> List[Tuple[str, str]]:
714
+ new_seg = []
715
+ sub_finals_list = [
716
+ lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
717
+ for (word, pos) in seg
718
+ ]
719
+ assert len(sub_finals_list) == len(seg)
720
+ merge_last = [False] * len(seg)
721
+ for i, (word, pos) in enumerate(seg):
722
+ if (
723
+ i - 1 >= 0
724
+ and sub_finals_list[i - 1][-1][-1] == "3"
725
+ and sub_finals_list[i][0][-1] == "3"
726
+ and not merge_last[i - 1]
727
+ ):
728
+ # if the previous word is a reduplication, do not merge, since reduplications must go through _neural_sandhi
729
+ if (
730
+ not self._is_reduplication(seg[i - 1][0])
731
+ and len(seg[i - 1][0]) + len(seg[i][0]) <= 3
732
+ ):
733
+ new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
734
+ merge_last[i] = True
735
+ else:
736
+ new_seg.append([word, pos])
737
+ else:
738
+ new_seg.append([word, pos])
739
+ return new_seg
740
+
741
+ def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
742
+ new_seg = []
743
+ for i, (word, pos) in enumerate(seg):
744
+ if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#":
745
+ new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
746
+ else:
747
+ new_seg.append([word, pos])
748
+ return new_seg
749
+
750
+ def _merge_reduplication(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
751
+ new_seg = []
752
+ for i, (word, pos) in enumerate(seg):
753
+ if new_seg and word == new_seg[-1][0]:
754
+ new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
755
+ else:
756
+ new_seg.append([word, pos])
757
+ return new_seg
758
+
759
+ def pre_merge_for_modify(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
760
+ seg = self._merge_bu(seg)
761
+ try:
762
+ seg = self._merge_yi(seg)
763
+ except Exception: # fall back to the unmerged segmentation if merging fails
764
+ print("_merge_yi failed")
765
+ seg = self._merge_reduplication(seg)
766
+ seg = self._merge_continuous_three_tones(seg)
767
+ seg = self._merge_continuous_three_tones_2(seg)
768
+ seg = self._merge_er(seg)
769
+ return seg
770
+
771
+ def modified_tone(self, word: str, pos: str, finals: List[str]) -> List[str]:
772
+ finals = self._bu_sandhi(word, finals)
773
+ finals = self._yi_sandhi(word, finals)
774
+ finals = self._neural_sandhi(word, pos, finals)
775
+ finals = self._three_sandhi(word, finals)
776
+ return finals
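A short sketch of how ToneSandhi is driven, mirroring the call pattern in text/chinese.py above; the sample sentence is arbitrary, and jieba and pypinyin must be installed:

import jieba.posseg as psg
from pypinyin import lazy_pinyin, Style

from text.tone_sandhi import ToneSandhi

sandhi = ToneSandhi()
# Pre-merge segments so that "不", "一", reduplications, and 儿化 attach correctly.
seg = sandhi.pre_merge_for_modify(psg.lcut("我想要一个苹果"))
for word, pos in seg:
    finals = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
    print(word, sandhi.modified_tone(word, pos, finals))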
tools/__init__.py ADDED
@@ -0,0 +1,3 @@
1
+ """
2
+ Utility package
3
+ """
tools/log.py ADDED
@@ -0,0 +1,17 @@
1
+ """
2
+ Wrapper configuration for the loguru logger
3
+ """
4
+
5
+ from loguru import logger
6
+ import sys
7
+
8
+
9
+ # Remove all default handlers
10
+ logger.remove()
11
+
12
+ # Custom format, attached to standard output
13
+ log_format = (
14
+ "<g>{time:MM-DD HH:mm:ss}</g> <lvl>{level:<9}</lvl>| {file}:{line} | {message}"
15
+ )
16
+
17
+ logger.add(sys.stdout, format=log_format, backtrace=True, diagnose=True)
transforms.py ADDED
@@ -0,0 +1,209 @@
1
+ import torch
2
+ from torch.nn import functional as F
3
+
4
+ import numpy as np
5
+
6
+
7
+ DEFAULT_MIN_BIN_WIDTH = 1e-3
8
+ DEFAULT_MIN_BIN_HEIGHT = 1e-3
9
+ DEFAULT_MIN_DERIVATIVE = 1e-3
10
+
11
+
12
+ def piecewise_rational_quadratic_transform(
13
+ inputs,
14
+ unnormalized_widths,
15
+ unnormalized_heights,
16
+ unnormalized_derivatives,
17
+ inverse=False,
18
+ tails=None,
19
+ tail_bound=1.0,
20
+ min_bin_width=DEFAULT_MIN_BIN_WIDTH,
21
+ min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
22
+ min_derivative=DEFAULT_MIN_DERIVATIVE,
23
+ ):
24
+ if tails is None:
25
+ spline_fn = rational_quadratic_spline
26
+ spline_kwargs = {}
27
+ else:
28
+ spline_fn = unconstrained_rational_quadratic_spline
29
+ spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
30
+
31
+ outputs, logabsdet = spline_fn(
32
+ inputs=inputs,
33
+ unnormalized_widths=unnormalized_widths,
34
+ unnormalized_heights=unnormalized_heights,
35
+ unnormalized_derivatives=unnormalized_derivatives,
36
+ inverse=inverse,
37
+ min_bin_width=min_bin_width,
38
+ min_bin_height=min_bin_height,
39
+ min_derivative=min_derivative,
40
+ **spline_kwargs
41
+ )
42
+ return outputs, logabsdet
43
+
44
+
45
+ def searchsorted(bin_locations, inputs, eps=1e-6):
46
+ bin_locations[..., -1] += eps
47
+ return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
48
+
49
+
50
+ def unconstrained_rational_quadratic_spline(
51
+ inputs,
52
+ unnormalized_widths,
53
+ unnormalized_heights,
54
+ unnormalized_derivatives,
55
+ inverse=False,
56
+ tails="linear",
57
+ tail_bound=1.0,
58
+ min_bin_width=DEFAULT_MIN_BIN_WIDTH,
59
+ min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
60
+ min_derivative=DEFAULT_MIN_DERIVATIVE,
61
+ ):
62
+ inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
63
+ outside_interval_mask = ~inside_interval_mask
64
+
65
+ outputs = torch.zeros_like(inputs)
66
+ logabsdet = torch.zeros_like(inputs)
67
+
68
+ if tails == "linear":
69
+ unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
70
+ constant = np.log(np.exp(1 - min_derivative) - 1)
71
+ unnormalized_derivatives[..., 0] = constant
72
+ unnormalized_derivatives[..., -1] = constant
73
+
74
+ outputs[outside_interval_mask] = inputs[outside_interval_mask]
75
+ logabsdet[outside_interval_mask] = 0
76
+ else:
77
+ raise RuntimeError("{} tails are not implemented.".format(tails))
78
+
79
+ (
80
+ outputs[inside_interval_mask],
81
+ logabsdet[inside_interval_mask],
82
+ ) = rational_quadratic_spline(
83
+ inputs=inputs[inside_interval_mask],
84
+ unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
85
+ unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
86
+ unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
87
+ inverse=inverse,
88
+ left=-tail_bound,
89
+ right=tail_bound,
90
+ bottom=-tail_bound,
91
+ top=tail_bound,
92
+ min_bin_width=min_bin_width,
93
+ min_bin_height=min_bin_height,
94
+ min_derivative=min_derivative,
95
+ )
96
+
97
+ return outputs, logabsdet
98
+
99
+
100
+ def rational_quadratic_spline(
101
+ inputs,
102
+ unnormalized_widths,
103
+ unnormalized_heights,
104
+ unnormalized_derivatives,
105
+ inverse=False,
106
+ left=0.0,
107
+ right=1.0,
108
+ bottom=0.0,
109
+ top=1.0,
110
+ min_bin_width=DEFAULT_MIN_BIN_WIDTH,
111
+ min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
112
+ min_derivative=DEFAULT_MIN_DERIVATIVE,
113
+ ):
114
+ if torch.min(inputs) < left or torch.max(inputs) > right:
115
+ raise ValueError("Input to a transform is not within its domain")
116
+
117
+ num_bins = unnormalized_widths.shape[-1]
118
+
119
+ if min_bin_width * num_bins > 1.0:
120
+ raise ValueError("Minimal bin width too large for the number of bins")
121
+ if min_bin_height * num_bins > 1.0:
122
+ raise ValueError("Minimal bin height too large for the number of bins")
123
+
124
+ widths = F.softmax(unnormalized_widths, dim=-1)
125
+ widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
126
+ cumwidths = torch.cumsum(widths, dim=-1)
127
+ cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
128
+ cumwidths = (right - left) * cumwidths + left
129
+ cumwidths[..., 0] = left
130
+ cumwidths[..., -1] = right
131
+ widths = cumwidths[..., 1:] - cumwidths[..., :-1]
132
+
133
+ derivatives = min_derivative + F.softplus(unnormalized_derivatives)
134
+
135
+ heights = F.softmax(unnormalized_heights, dim=-1)
136
+ heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
137
+ cumheights = torch.cumsum(heights, dim=-1)
138
+ cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
139
+ cumheights = (top - bottom) * cumheights + bottom
140
+ cumheights[..., 0] = bottom
141
+ cumheights[..., -1] = top
142
+ heights = cumheights[..., 1:] - cumheights[..., :-1]
143
+
144
+ if inverse:
145
+ bin_idx = searchsorted(cumheights, inputs)[..., None]
146
+ else:
147
+ bin_idx = searchsorted(cumwidths, inputs)[..., None]
148
+
149
+ input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
150
+ input_bin_widths = widths.gather(-1, bin_idx)[..., 0]
151
+
152
+ input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
153
+ delta = heights / widths
154
+ input_delta = delta.gather(-1, bin_idx)[..., 0]
155
+
156
+ input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
157
+ input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]
158
+
159
+ input_heights = heights.gather(-1, bin_idx)[..., 0]
160
+
161
+ if inverse:
162
+ a = (inputs - input_cumheights) * (
163
+ input_derivatives + input_derivatives_plus_one - 2 * input_delta
164
+ ) + input_heights * (input_delta - input_derivatives)
165
+ b = input_heights * input_derivatives - (inputs - input_cumheights) * (
166
+ input_derivatives + input_derivatives_plus_one - 2 * input_delta
167
+ )
168
+ c = -input_delta * (inputs - input_cumheights)
169
+
170
+ discriminant = b.pow(2) - 4 * a * c
171
+ assert (discriminant >= 0).all()
172
+
173
+ root = (2 * c) / (-b - torch.sqrt(discriminant))
174
+ outputs = root * input_bin_widths + input_cumwidths
175
+
176
+ theta_one_minus_theta = root * (1 - root)
177
+ denominator = input_delta + (
178
+ (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
179
+ * theta_one_minus_theta
180
+ )
181
+ derivative_numerator = input_delta.pow(2) * (
182
+ input_derivatives_plus_one * root.pow(2)
183
+ + 2 * input_delta * theta_one_minus_theta
184
+ + input_derivatives * (1 - root).pow(2)
185
+ )
186
+ logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
187
+
188
+ return outputs, -logabsdet
189
+ else:
190
+ theta = (inputs - input_cumwidths) / input_bin_widths
191
+ theta_one_minus_theta = theta * (1 - theta)
192
+
193
+ numerator = input_heights * (
194
+ input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
195
+ )
196
+ denominator = input_delta + (
197
+ (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
198
+ * theta_one_minus_theta
199
+ )
200
+ outputs = input_cumheights + numerator / denominator
201
+
202
+ derivative_numerator = input_delta.pow(2) * (
203
+ input_derivatives_plus_one * theta.pow(2)
204
+ + 2 * input_delta * theta_one_minus_theta
205
+ + input_derivatives * (1 - theta).pow(2)
206
+ )
207
+ logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)
208
+
209
+ return outputs, logabsdet
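A quick round-trip check of the transform above (a minimal sketch, assuming this chunk lives in transforms.py next to the DEFAULT_MIN_* constants; all shapes are illustrative):

import torch
from transforms import unconstrained_rational_quadratic_spline  # assumed module name

torch.manual_seed(0)
x = torch.rand(4) * 2 - 1  # inputs inside the default tail_bound of 1.0
w = torch.randn(4, 10)     # unnormalized widths for 10 bins
h = torch.randn(4, 10)     # unnormalized heights for 10 bins
d = torch.randn(4, 9)      # num_bins - 1 interior derivatives; "linear" tails pad both ends

y, logabsdet = unconstrained_rational_quadratic_spline(x, w, h, d, inverse=False)
x_rec, inv_logabsdet = unconstrained_rational_quadratic_spline(y, w, h, d, inverse=True)
assert torch.allclose(x, x_rec, atol=1e-4)                   # forward then inverse recovers the input
assert torch.allclose(logabsdet, -inv_logabsdet, atol=1e-4)  # log-determinants negate each other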
update_status.py ADDED
@@ -0,0 +1,93 @@
1
+ import os
2
+ import gradio as gr
3
+
4
+ lang_dict = {"EN(英文)": "_en", "ZH(中文)": "_zh", "JP(日语)": "_jp"}
5
+
6
+
7
+ def raw_dir_convert_to_path(target_dir: str, lang):
8
+ res = target_dir.rstrip("/").rstrip("\\")
9
+ if (not target_dir.startswith("raw")) and (not target_dir.startswith("./raw")):
10
+ res = os.path.join("./raw", res)
11
+ if (
12
+ (not res.endswith("_zh"))
13
+ and (not res.endswith("_jp"))
14
+ and (not res.endswith("_en"))
15
+ ):
16
+ res += lang_dict[lang]
17
+ return res
18
+
19
+
20
+ def update_g_files():
21
+ g_files = []
22
+ cnt = 0
23
+ for root, dirs, files in os.walk(os.path.abspath("./logs")):
24
+ for file in files:
25
+ if file.startswith("G_") and file.endswith(".pth"):
26
+ g_files.append(os.path.join(root, file))
27
+ cnt += 1
28
+ print(g_files)
29
+ return f"更新模型列表完成, 共找到{cnt}个模型", gr.Dropdown.update(choices=g_files)
30
+
31
+
32
+ def update_c_files():
33
+ c_files = []
34
+ cnt = 0
35
+ for root, dirs, files in os.walk(os.path.abspath("./logs")):
36
+ for file in files:
37
+ if file.startswith("config.json"):
38
+ c_files.append(os.path.join(root, file))
39
+ cnt += 1
40
+ print(c_files)
41
+ return f"更新模型列表完成, 共找到{cnt}个配置文件", gr.Dropdown.update(
42
+ choices=c_files
43
+ )
44
+
45
+
46
+ def update_model_folders():
47
+ subdirs = []
48
+ cnt = 0
49
+ for root, dirs, files in os.walk(os.path.abspath("./logs")):
50
+ for dir_name in dirs:
51
+ if os.path.basename(dir_name) != "eval":
52
+ subdirs.append(os.path.join(root, dir_name))
53
+ cnt += 1
54
+ print(subdirs)
55
+ return f"更新模型文件夹列表完成, 共找到{cnt}个文件夹", gr.Dropdown.update(
56
+ choices=subdirs
57
+ )
58
+
59
+
60
+ def update_wav_lab_pairs():
61
+ wav_count = tot_count = 0
62
+ for root, _, files in os.walk("./raw"):
63
+ for file in files:
64
+ # print(file)
65
+ file_path = os.path.join(root, file)
66
+ if file.lower().endswith(".wav"):
67
+ lab_file = os.path.splitext(file_path)[0] + ".lab"
68
+ if os.path.exists(lab_file):
69
+ wav_count += 1
70
+ tot_count += 1
71
+ return f"{wav_count} / {tot_count}"
72
+
73
+
74
+ def update_raw_folders():
75
+ subdirs = []
76
+ cnt = 0
77
+ script_path = os.path.dirname(os.path.abspath(__file__))  # absolute path of the current script
78
+ raw_path = os.path.join(script_path, "raw")
79
+ print(raw_path)
80
+ os.makedirs(raw_path, exist_ok=True)
81
+ for root, dirs, files in os.walk(raw_path):
82
+ for dir_name in dirs:
83
+ relative_path = os.path.relpath(
84
+ os.path.join(root, dir_name), script_path
85
+ )  # path relative to the script directory
86
+ subdirs.append(relative_path)
87
+ cnt += 1
88
+ print(subdirs)
89
+ return (
90
+ f"更新raw音频文件夹列表完成, 共找到{cnt}个文件夹",
91
+ gr.Dropdown.update(choices=subdirs),
92
+ gr.Textbox.update(value=update_wav_lab_pairs()),
93
+ )
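For reference, how raw_dir_convert_to_path normalizes a target directory (a sketch; the folder names are made up and POSIX separators are shown):

from update_status import raw_dir_convert_to_path

raw_dir_convert_to_path("my_voice", "ZH(中文)")           # -> "./raw/my_voice_zh"
raw_dir_convert_to_path("./raw/my_voice_en", "EN(英文)")  # already prefixed and suffixed -> returned unchanged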
utils.py ADDED
@@ -0,0 +1,436 @@
1
+ import os
2
+ import glob
3
+ import argparse
4
+ import logging
5
+ import json
6
+ import shutil
7
+ import subprocess
8
+ import numpy as np
9
+
10
+ # from huggingface_hub import hf_hub_download
11
+ from scipy.io.wavfile import read
12
+ import torch
13
+ import re
14
+
15
+ MATPLOTLIB_FLAG = False
16
+
17
+ logger = logging.getLogger(__name__)
18
+
19
+
20
+ # def download_emo_models(mirror, repo_id, model_name):
21
+ # hf_hub_download(
22
+ # repo_id,
23
+ # "pytorch_model.bin",
24
+ # local_dir=model_name,
25
+ # local_dir_use_symlinks=False,
26
+ # )
27
+
28
+
29
+ # def download_checkpoint(dir_path, repo_config, token=None, regex="G_*.pth"):
30
+ # repo_id = repo_config["repo_id"]
31
+ # f_list = glob.glob(os.path.join(dir_path, regex))
32
+ # if f_list:
33
+ # print("Use existed model, skip downloading.")
34
+ # return
35
+
36
+ # for file in ["DUR_0.pth", "D_0.pth", "G_0.pth"]:
37
+ # hf_hub_download(repo_id, file, local_dir=dir_path, local_dir_use_symlinks=False)
38
+
39
+
40
+ def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False):
41
+ assert os.path.isfile(checkpoint_path)
42
+ checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
43
+ iteration = checkpoint_dict["iteration"]
44
+ learning_rate = checkpoint_dict["learning_rate"]
45
+ if (
46
+ optimizer is not None
47
+ and not skip_optimizer
48
+ and checkpoint_dict["optimizer"] is not None
49
+ ):
50
+ optimizer.load_state_dict(checkpoint_dict["optimizer"])
51
+ elif optimizer is not None and not skip_optimizer:
52
+ # Disable this branch when inferring from a resumed checkpoint, and enable the branch above instead
53
+ new_opt_dict = optimizer.state_dict()
54
+ new_opt_dict_params = new_opt_dict["param_groups"][0]["params"]
55
+ new_opt_dict["param_groups"] = checkpoint_dict["optimizer"]["param_groups"]
56
+ new_opt_dict["param_groups"][0]["params"] = new_opt_dict_params
57
+ optimizer.load_state_dict(new_opt_dict)
58
+
59
+ saved_state_dict = checkpoint_dict["model"]
60
+ if hasattr(model, "module"):
61
+ state_dict = model.module.state_dict()
62
+ else:
63
+ state_dict = model.state_dict()
64
+
65
+ new_state_dict = {}
66
+ for k, v in state_dict.items():
67
+ try:
68
+ # assert "emb_g" not in k
69
+ new_state_dict[k] = saved_state_dict[k]
70
+ assert saved_state_dict[k].shape == v.shape, (
71
+ saved_state_dict[k].shape,
72
+ v.shape,
73
+ )
74
+ except (KeyError, AssertionError):  # key missing from the checkpoint, or shape mismatch
75
+ # For upgrading from the old version
76
+ if "ja_bert_proj" in k:
77
+ v = torch.zeros_like(v)
78
+ logger.warning(
79
+ f"Seems you are using the old version of the model, the {k} is automatically set to zero for backward compatibility"
80
+ )
81
+ else:
82
+ logger.error(f"{k} is not in the checkpoint")
83
+
84
+ new_state_dict[k] = v
85
+
86
+ if hasattr(model, "module"):
87
+ model.module.load_state_dict(new_state_dict, strict=False)
88
+ else:
89
+ model.load_state_dict(new_state_dict, strict=False)
90
+
91
+ logger.info(
92
+ "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration)
93
+ )
94
+
95
+ return model, optimizer, learning_rate, iteration
96
+
97
+
98
+ def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
99
+ logger.info(
100
+ "Saving model and optimizer state at iteration {} to {}".format(
101
+ iteration, checkpoint_path
102
+ )
103
+ )
104
+ if hasattr(model, "module"):
105
+ state_dict = model.module.state_dict()
106
+ else:
107
+ state_dict = model.state_dict()
108
+ torch.save(
109
+ {
110
+ "model": state_dict,
111
+ "iteration": iteration,
112
+ "optimizer": optimizer.state_dict(),
113
+ "learning_rate": learning_rate,
114
+ },
115
+ checkpoint_path,
116
+ )
117
+
118
+
119
+ def summarize(
120
+ writer,
121
+ global_step,
122
+ scalars={},
123
+ histograms={},
124
+ images={},
125
+ audios={},
126
+ audio_sampling_rate=22050,
127
+ ):
128
+ for k, v in scalars.items():
129
+ writer.add_scalar(k, v, global_step)
130
+ for k, v in histograms.items():
131
+ writer.add_histogram(k, v, global_step)
132
+ for k, v in images.items():
133
+ writer.add_image(k, v, global_step, dataformats="HWC")
134
+ for k, v in audios.items():
135
+ writer.add_audio(k, v, global_step, audio_sampling_rate)
136
+
137
+
138
+ def latest_checkpoint_path(dir_path, regex="G_*.pth"):
139
+ f_list = glob.glob(os.path.join(dir_path, regex))
140
+ f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
141
+ x = f_list[-1]
142
+ return x
143
+
144
+
145
+ def plot_spectrogram_to_numpy(spectrogram):
146
+ global MATPLOTLIB_FLAG
147
+ if not MATPLOTLIB_FLAG:
148
+ import matplotlib
149
+
150
+ matplotlib.use("Agg")
151
+ MATPLOTLIB_FLAG = True
152
+ mpl_logger = logging.getLogger("matplotlib")
153
+ mpl_logger.setLevel(logging.WARNING)
154
+ import matplotlib.pylab as plt
155
+ import numpy as np
156
+
157
+ fig, ax = plt.subplots(figsize=(10, 2))
158
+ im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
159
+ plt.colorbar(im, ax=ax)
160
+ plt.xlabel("Frames")
161
+ plt.ylabel("Channels")
162
+ plt.tight_layout()
163
+
164
+ fig.canvas.draw()
165
+ data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
166
+ data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
167
+ plt.close()
168
+ return data
169
+
170
+
171
+ def plot_alignment_to_numpy(alignment, info=None):
172
+ global MATPLOTLIB_FLAG
173
+ if not MATPLOTLIB_FLAG:
174
+ import matplotlib
175
+
176
+ matplotlib.use("Agg")
177
+ MATPLOTLIB_FLAG = True
178
+ mpl_logger = logging.getLogger("matplotlib")
179
+ mpl_logger.setLevel(logging.WARNING)
180
+ import matplotlib.pylab as plt
181
+ import numpy as np
182
+
183
+ fig, ax = plt.subplots(figsize=(6, 4))
184
+ im = ax.imshow(
185
+ alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
186
+ )
187
+ fig.colorbar(im, ax=ax)
188
+ xlabel = "Decoder timestep"
189
+ if info is not None:
190
+ xlabel += "\n\n" + info
191
+ plt.xlabel(xlabel)
192
+ plt.ylabel("Encoder timestep")
193
+ plt.tight_layout()
194
+
195
+ fig.canvas.draw()
196
+ data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
197
+ data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
198
+ plt.close()
199
+ return data
200
+
201
+
202
+ def load_wav_to_torch(full_path):
203
+ sampling_rate, data = read(full_path)
204
+ return torch.FloatTensor(data.astype(np.float32)), sampling_rate
205
+
206
+
207
+ def load_filepaths_and_text(filename, split="|"):
208
+ with open(filename, encoding="utf-8") as f:
209
+ filepaths_and_text = [line.strip().split(split) for line in f]
210
+ return filepaths_and_text
211
+
212
+
213
+ def get_hparams(init=True):
214
+ parser = argparse.ArgumentParser()
215
+ parser.add_argument(
216
+ "-c",
217
+ "--config",
218
+ type=str,
219
+ default="./configs/base.json",
220
+ help="JSON file for configuration",
221
+ )
222
+ parser.add_argument("-m", "--model", type=str, required=True, help="Model name")
223
+
224
+ args = parser.parse_args()
225
+ model_dir = os.path.join("./logs", args.model)
226
+
227
+ if not os.path.exists(model_dir):
228
+ os.makedirs(model_dir)
229
+
230
+ config_path = args.config
231
+ config_save_path = os.path.join(model_dir, "config.json")
232
+ if init:
233
+ with open(config_path, "r", encoding="utf-8") as f:
234
+ data = f.read()
235
+ with open(config_save_path, "w", encoding="utf-8") as f:
236
+ f.write(data)
237
+ else:
238
+ with open(config_save_path, "r", encoding="utf-8") as f:
239
+ data = f.read()
240
+ config = json.loads(data)
241
+ hparams = HParams(**config)
242
+ hparams.model_dir = model_dir
243
+ return hparams
244
+
245
+
246
+ def clean_checkpoints(path_to_models="logs/44k/", n_ckpts_to_keep=2, sort_by_time=True):
247
+ """Freeing up space by deleting saved ckpts
248
+
249
+ Arguments:
250
+ path_to_models -- Path to the model directory
251
+ n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth
252
+ sort_by_time -- True -> chronologically delete ckpts
253
+ False -> lexicographically delete ckpts
254
+ """
255
256
+
257
+ ckpts_files = [
258
+ f
259
+ for f in os.listdir(path_to_models)
260
+ if os.path.isfile(os.path.join(path_to_models, f))
261
+ ]
262
+
263
+ def name_key(_f):
264
+ return int(re.compile("._(\\d+)\\.pth").match(_f).group(1))
265
+
266
+ def time_key(_f):
267
+ return os.path.getmtime(os.path.join(path_to_models, _f))
268
+
269
+ sort_key = time_key if sort_by_time else name_key
270
+
271
+ def x_sorted(_x):
272
+ return sorted(
273
+ [f for f in ckpts_files if f.startswith(_x) and not f.endswith("_0.pth")],
274
+ key=sort_key,
275
+ )
276
+
277
+ to_del = [
278
+ os.path.join(path_to_models, fn)
279
+ for fn in (
280
+ x_sorted("G")[:-n_ckpts_to_keep]
281
+ + x_sorted("D")[:-n_ckpts_to_keep]
282
+ + x_sorted("WD")[:-n_ckpts_to_keep]
283
+ )
284
+ ]
285
+
286
+ for fn in to_del:
287
+ os.remove(fn)
288
+ logger.info(f".. Free up space by deleting ckpt {fn}")
293
+
294
+
295
+ def get_hparams_from_dir(model_dir):
296
+ config_save_path = os.path.join(model_dir, "config.json")
297
+ with open(config_save_path, "r", encoding="utf-8") as f:
298
+ data = f.read()
299
+ config = json.loads(data)
300
+
301
+ hparams = HParams(**config)
302
+ hparams.model_dir = model_dir
303
+ return hparams
304
+
305
+
306
+ def get_hparams_from_file(config_path):
307
+ # print("config_path: ", config_path)
308
+ with open(config_path, "r", encoding="utf-8") as f:
309
+ data = f.read()
310
+ config = json.loads(data)
311
+
312
+ hparams = HParams(**config)
313
+ return hparams
314
+
315
+
316
+ def check_git_hash(model_dir):
317
+ source_dir = os.path.dirname(os.path.realpath(__file__))
318
+ if not os.path.exists(os.path.join(source_dir, ".git")):
319
+ logger.warning(
320
+ "{} is not a git repository, therefore hash value comparison will be ignored.".format(
321
+ source_dir
322
+ )
323
+ )
324
+ return
325
+
326
+ cur_hash = subprocess.getoutput("git rev-parse HEAD")
327
+
328
+ path = os.path.join(model_dir, "githash")
329
+ if os.path.exists(path):
330
+ saved_hash = open(path).read()
331
+ if saved_hash != cur_hash:
332
+ logger.warning(
333
+ "git hash values are different. {}(saved) != {}(current)".format(
334
+ saved_hash[:8], cur_hash[:8]
335
+ )
336
+ )
337
+ else:
338
+ open(path, "w").write(cur_hash)
339
+
340
+
341
+ def get_logger(model_dir, filename="train.log"):
342
+ global logger
343
+ logger = logging.getLogger(os.path.basename(model_dir))
344
+ logger.setLevel(logging.DEBUG)
345
+
346
+ formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
347
+ if not os.path.exists(model_dir):
348
+ os.makedirs(model_dir)
349
+ h = logging.FileHandler(os.path.join(model_dir, filename))
350
+ h.setLevel(logging.DEBUG)
351
+ h.setFormatter(formatter)
352
+ logger.addHandler(h)
353
+ return logger
354
+
355
+
356
+ class HParams:
357
+ def __init__(self, **kwargs):
358
+ for k, v in kwargs.items():
359
+ if isinstance(v, dict):
360
+ v = HParams(**v)
361
+ self[k] = v
362
+
363
+ def keys(self):
364
+ return self.__dict__.keys()
365
+
366
+ def items(self):
367
+ return self.__dict__.items()
368
+
369
+ def values(self):
370
+ return self.__dict__.values()
371
+
372
+ def __len__(self):
373
+ return len(self.__dict__)
374
+
375
+ def __getitem__(self, key):
376
+ return getattr(self, key)
377
+
378
+ def __setitem__(self, key, value):
379
+ return setattr(self, key, value)
380
+
381
+ def __contains__(self, key):
382
+ return key in self.__dict__
383
+
384
+ def __repr__(self):
385
+ return self.__dict__.__repr__()
386
+
387
+
388
+ def load_model(model_path, config_path):
+ # Assumed import: SynthesizerTrn is expected to be defined in this repo's models.py
+ from models import SynthesizerTrn
389
+ hps = get_hparams_from_file(config_path)
390
+ net = SynthesizerTrn(
391
+ # len(symbols),
392
+ 108,
393
+ hps.data.filter_length // 2 + 1,
394
+ hps.train.segment_size // hps.data.hop_length,
395
+ n_speakers=hps.data.n_speakers,
396
+ **hps.model,
397
+ ).to("cpu")
398
+ _ = net.eval()
399
+ _ = load_checkpoint(model_path, net, None, skip_optimizer=True)
400
+ return net
401
+
402
+
403
+ def mix_model(
404
+ network1, network2, output_path, voice_ratio=(0.5, 0.5), tone_ratio=(0.5, 0.5)
405
+ ):
406
+ if hasattr(network1, "module"):
407
+ state_dict1 = network1.module.state_dict()
408
+ state_dict2 = network2.module.state_dict()
409
+ else:
410
+ state_dict1 = network1.state_dict()
411
+ state_dict2 = network2.state_dict()
412
+ for k in state_dict1.keys():
413
+ if k not in state_dict2.keys():
414
+ continue
415
+ if "enc_p" in k:
416
+ state_dict1[k] = (
417
+ state_dict1[k].clone() * tone_ratio[0]
418
+ + state_dict2[k].clone() * tone_ratio[1]
419
+ )
420
+ else:
421
+ state_dict1[k] = (
422
+ state_dict1[k].clone() * voice_ratio[0]
423
+ + state_dict2[k].clone() * voice_ratio[1]
424
+ )
425
+ for k in state_dict2.keys():
426
+ if k not in state_dict1.keys():
427
+ state_dict1[k] = state_dict2[k].clone()
428
+ torch.save(
429
+ {"model": state_dict1, "iteration": 0, "optimizer": None, "learning_rate": 0},
430
+ output_path,
431
+ )
432
+
433
+
434
+ def get_steps(model_path):
435
+ matches = re.findall(r"\d+", model_path)
436
+ return matches[-1] if matches else None
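HParams turns the nested JSON config into attribute-style access; a small usage sketch (the config path and keys below follow this repo's usual layout and are assumptions):

from utils import get_hparams_from_file

hps = get_hparams_from_file("Data/configs/config.json")  # assumed path
print(hps.data.sampling_rate)        # nested dicts become nested HParams
print(hps["data"]["sampling_rate"])  # dict-style access works too
print("model" in hps)                # __contains__ checks top-level keys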
webui.py ADDED
@@ -0,0 +1,297 @@
1
+ # flake8: noqa: E402
2
+ import gc
3
+ import os
4
+ import logging
5
+ import re_matching
6
+
7
+ logging.getLogger("numba").setLevel(logging.WARNING)
8
+ logging.getLogger("markdown_it").setLevel(logging.WARNING)
9
+ logging.getLogger("urllib3").setLevel(logging.WARNING)
10
+ logging.getLogger("matplotlib").setLevel(logging.WARNING)
11
+
12
+ logging.basicConfig(
13
+ level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s"
14
+ )
15
+
16
+ logger = logging.getLogger(__name__)
17
+
18
+ import torch
19
+ import utils
20
+ from infer import infer, latest_version, get_net_g
21
+ import gradio as gr
22
+
23
+ # import webbrowser
24
+ import numpy as np
25
+ from config import config
26
+
27
+ net_g = None
28
+
29
+ device = config.webui_config.device
30
+ if device == "mps":
31
+ os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
32
+
33
+
34
+ def free_up_memory():
35
+ # Prior inference run might have large variables not cleaned up due to exception during the run.
36
+ # Free up as much memory as possible to allow this run to be successful.
37
+ gc.collect()
38
+ if torch.cuda.is_available():
39
+ torch.cuda.empty_cache()
40
+
41
+
42
+ def generate_audio(
43
+ slices,
44
+ sdp_ratio,
45
+ noise_scale,
46
+ noise_scale_w,
47
+ length_scale,
48
+ speaker,
49
+ # language,
50
+ # reference_audio,
51
+ # emotion,
52
+ style_text,
53
+ style_weight,
54
+ skip_start=False,
55
+ skip_end=False,
56
+ ):
57
+ audio_list = []
58
+ # silence = np.zeros(hps.data.sampling_rate // 2, dtype=np.int16)
59
+
60
+ free_up_memory()
61
+
62
+ with torch.no_grad():
63
+ for idx, piece in enumerate(slices):
64
+ skip_start = idx != 0
65
+ skip_end = idx != len(slices) - 1
66
+ audio = infer(
67
+ piece,
68
+ # reference_audio=reference_audio,
69
+ emotion=None,
70
+ sdp_ratio=sdp_ratio,
71
+ noise_scale=noise_scale,
72
+ noise_scale_w=noise_scale_w,
73
+ length_scale=length_scale,
74
+ sid=speaker,
75
+ language="ZH",
76
+ hps=hps,
77
+ net_g=net_g,
78
+ device=device,
79
+ skip_start=skip_start,
80
+ skip_end=skip_end,
81
+ style_text=style_text,
82
+ style_weight=style_weight,
83
+ )
84
+ audio16bit = gr.processing_utils.convert_to_16_bit_wav(audio)
85
+ audio_list.append(audio16bit)
86
+ return audio_list
87
+
88
+
89
+ def process_text(
90
+ text: str,
91
+ speaker,
92
+ sdp_ratio,
93
+ noise_scale,
94
+ noise_scale_w,
95
+ length_scale,
96
+ # language,
97
+ # reference_audio,
98
+ # emotion,
99
+ style_text=None,
100
+ style_weight=0,
101
+ ):
102
+ audio_list = []
103
+ audio_list.extend(
104
+ generate_audio(
105
+ text.split("|"),
106
+ sdp_ratio,
107
+ noise_scale,
108
+ noise_scale_w,
109
+ length_scale,
110
+ speaker,
111
+ # language,
112
+ # reference_audio,
113
+ # emotion,
114
+ style_text,
115
+ style_weight,
116
+ )
117
+ )
118
+ return audio_list
119
+
120
+
121
+ def tts_fn(
122
+ text: str,
123
+ speaker,
124
+ sdp_ratio,
125
+ noise_scale,
126
+ noise_scale_w,
127
+ length_scale,
128
+ # reference_audio,
129
+ # emotion,
130
+ # prompt_mode,
131
+ style_text=None,
132
+ style_weight=0,
133
+ ):
134
+ if style_text == "":
135
+ style_text = None
136
+ # if prompt_mode == "Audio prompt":
137
+ # if reference_audio == None:
138
+ # return ("Invalid audio prompt", None)
139
+ # else:
140
+ # reference_audio = load_audio(reference_audio)[1]
141
+ # else:
142
+ # reference_audio = None
143
+
144
+ audio_list = process_text(
145
+ text,
146
+ speaker,
147
+ sdp_ratio,
148
+ noise_scale,
149
+ noise_scale_w,
150
+ length_scale,
151
+ # language,
152
+ # reference_audio,
153
+ # emotion,
154
+ style_text,
155
+ style_weight,
156
+ )
157
+
158
+ audio_concat = np.concatenate(audio_list)
159
+ return "Success", (hps.data.sampling_rate, audio_concat)
160
+
161
+
162
+ if __name__ == "__main__":
163
+ if config.webui_config.debug:
164
+ logger.info("Enable DEBUG-LEVEL log")
165
+ logging.basicConfig(level=logging.DEBUG)
166
+ hps = utils.get_hparams_from_file(config.webui_config.config_path)
167
+ # Default to the latest version if config.json does not specify one
168
+ version = hps.version if hasattr(hps, "version") else latest_version
169
+ net_g = get_net_g(
170
+ model_path=config.webui_config.model, version=version, device=device, hps=hps
171
+ )
172
+ speaker_ids = hps.data.spk2id
173
+ speakers = list(speaker_ids.keys())
174
+ languages = ["ZH", "JP", "EN", "mix", "auto"]
175
+ with gr.Blocks() as app:
176
+ with gr.Row():
177
+ with gr.Column():
178
+ text = gr.TextArea(
179
+ label="输入文本内容",
180
+ )
181
+ # trans = gr.Button("中翻日", variant="primary")
182
+ # slicer = gr.Button("快速切分", variant="primary")
183
+ # formatter = gr.Button("检测语言,并整理为 MIX 格式", variant="primary")
184
+ speaker = gr.Dropdown(
185
+ choices=speakers, value=speakers[0], label="Speaker"
186
+ )
187
+ # _ = gr.Markdown(
188
+ # value="提示模式(Prompt mode):可选文字提示或音频提示,用于生成文字或音频指定风格的声音。\n",
189
+ # visible=False,
190
+ # )
191
+ # prompt_mode = gr.Radio(
192
+ # ["Text prompt", "Audio prompt"],
193
+ # label="Prompt Mode",
194
+ # value="Text prompt",
195
+ # visible=False,
196
+ # )
197
+ # text_prompt = gr.Textbox(
198
+ # label="Text prompt",
199
+ # placeholder="用文字描述生成风格。如:Happy",
200
+ # value="Happy",
201
+ # visible=False,
202
+ # )
203
+ # audio_prompt = gr.Audio(
204
+ # label="Audio prompt", type="filepath", visible=False
205
+ # )
206
+ sdp_ratio = gr.Slider(
207
+ minimum=0, maximum=1, value=0.5, step=0.1, label="SDP Ratio"
208
+ )
209
+ noise_scale = gr.Slider(
210
+ minimum=0.1, maximum=2, value=0.6, step=0.1, label="Noise"
211
+ )
212
+ noise_scale_w = gr.Slider(
213
+ minimum=0.1, maximum=2, value=0.9, step=0.1, label="Noise_W"
214
+ )
215
+ length_scale = gr.Slider(
216
+ minimum=0.1, maximum=2, value=1.0, step=0.1, label="Length"
217
+ )
218
+ btn = gr.Button("生成音频!", variant="primary")
219
+ with gr.Column():
220
+ with gr.Accordion("融合文本语义", open=False):
221
+ gr.Markdown(
222
+ value="使用辅助文本的语意来辅助生成对话(语言保持与主文本相同)\n\n"
223
+ "**注意**:不要使用**指令式文本**(如:开心),要使用**带有强烈情感的文本**(如:我好快乐!!!)\n\n"
224
+ "效果较不明确,留空即为不使用该功能"
225
+ )
226
+ style_text = gr.Textbox(label="辅助文本")
227
+ style_weight = gr.Slider(
228
+ minimum=0,
229
+ maximum=1,
230
+ value=0.7,
231
+ step=0.1,
232
+ label="Weight",
233
+ info="主文本和辅助文本的bert混合比率,0表示仅主文本,1表示仅辅助文本",
234
+ )
235
+ text_output = gr.Textbox(label="状态信息")
236
+ audio_output = gr.Audio(label="输出音频")
237
+ # explain_image = gr.Image(
238
+ # label="参数解释信息",
239
+ # show_label=True,
240
+ # show_share_button=False,
241
+ # show_download_button=False,
242
+ # value=os.path.abspath("./img/参数说明.png"),
243
+ # )
244
+ btn.click(
245
+ tts_fn,
246
+ inputs=[
247
+ text,
248
+ speaker,
249
+ sdp_ratio,
250
+ noise_scale,
251
+ noise_scale_w,
252
+ length_scale,
253
+ # language,
254
+ # audio_prompt,
255
+ # text_prompt,
256
+ # prompt_mode,
257
+ style_text,
258
+ style_weight,
259
+ ],
260
+ outputs=[text_output, audio_output],
261
+ )
262
+
263
+ # trans.click(
264
+ # translate,
265
+ # inputs=[text],
266
+ # outputs=[text],
267
+ # )
268
+ # slicer.click(
269
+ # tts_split,
270
+ # inputs=[
271
+ # text,
272
+ # speaker,
273
+ # sdp_ratio,
274
+ # noise_scale,
275
+ # noise_scale_w,
276
+ # length_scale,
277
+ # language,
278
+ # opt_cut_by_sent,
279
+ # interval_between_para,
280
+ # interval_between_sent,
281
+ # # audio_prompt,
282
+ # # text_prompt,
283
+ # style_text,
284
+ # style_weight,
285
+ # ],
286
+ # outputs=[text_output, audio_output],
287
+ # )
288
+
289
+ # formatter.click(
290
+ # format_utils,
291
+ # inputs=[text, speaker],
292
+ # outputs=[language, text],
293
+ # )
294
+
295
+ print("推理页面已开启!")
296
+ # webbrowser.open(f"http://127.0.0.1:{config.webui_config.port}")
297
+ app.launch(share=config.webui_config.share, server_port=config.webui_config.port)
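End to end, the button wiring above reduces to a single call; a hedged sketch of driving tts_fn directly after the __main__ block has initialized hps and net_g (the speaker and text are placeholders):

status, (sr, wav) = tts_fn(
    "你好|很高兴见到你",  # "|" splits the text into slices that are synthesized and concatenated
    speaker=speakers[0],  # any key of hps.data.spk2id
    sdp_ratio=0.5,
    noise_scale=0.6,
    noise_scale_w=0.9,
    length_scale=1.0,
    style_text=None,
    style_weight=0.7,
)
print(status, sr, wav.shape)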