## Key Features
- 🚀 **Unified Representation:** A single semantic-acoustic representation shared by both understanding and generation tasks.
- 🎧 **High-Fidelity Reconstruction:** Achieves high-fidelity audio generation by modeling continuous features with a VAE, minimizing information loss and preserving intricate acoustic textures.
- 🌐 **Convolution-Free Efficiency:** Built on a pure causal transformer architecture, completely eliminating convolutional layers for improved efficiency and a simpler design (a minimal attention-mask sketch follows this list).
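
"Causal" here refers to autoregressive attention masking: each audio frame attends only to itself and to earlier frames. The snippet below is a minimal, hypothetical illustration of such a mask in PyTorch; the layer sizes and the use of `nn.MultiheadAttention` are assumptions for illustration, not the actual AudioVAE internals.

```python
# Hypothetical sketch of causal self-attention masking; dimensions and module
# choices are assumptions, not taken from the AudioVAE implementation.
import torch

frames, dim, heads = 8, 64, 4
x = torch.randn(1, frames, dim)  # (batch, frames, feature_dim)

# Boolean mask: True above the diagonal blocks attention to future frames.
causal_mask = torch.triu(torch.ones(frames, frames, dtype=torch.bool), diagonal=1)

attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
out, _ = attn(x, x, x, attn_mask=causal_mask)  # each frame sees only itself and the past
print(out.shape)  # torch.Size([1, 8, 64])
```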

## Installation
```
pip install -r requirements.txt
```

## Quick Start
```python
import torch
import torchaudio

from audio_tokenizer.modeling_audio_vae import AudioVAE

# Load the pretrained tokenizer and move it to the GPU.
model = AudioVAE.from_pretrained('inclusionAI/Ming-UniAudio-Tokenizer')
model = model.cuda()
model.eval()

# Load the example speech clip and pack it with its length in samples.
waveform, sr = torchaudio.load('data/1089-134686-0000.flac', backend='soundfile')
sample = {'waveform': waveform.cuda(), 'waveform_length': torch.tensor([waveform.size(-1)]).cuda()}

with torch.no_grad():
    with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
        # Encode to continuous latents, then decode back to a waveform.
        latent, frame_num = model.encode_latent(**sample)
        output_waveform = model.decode(latent)

torchaudio.save('./1089-134686-0000_reconstruct.wav', output_waveform.cpu()[0], sample_rate=16000)
```
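
The example above uses a bundled 16 kHz clip. For arbitrary input audio, a reasonable preparation step is to downmix to mono and resample to 16 kHz before building the `sample` dict. The sketch below does this with standard torchaudio calls and reuses `model` from the Quick Start; the 16 kHz assumption is inferred from the `sample_rate=16000` used when saving the reconstruction, the 50 Hz latent frame rate from the performance table below, and `'my_audio.wav'` is a placeholder path.

```python
# Hedged sketch: prepare arbitrary audio for the tokenizer. The 16 kHz mono
# requirement is an assumption inferred from the example above.
import torch
import torchaudio

waveform, sr = torchaudio.load('my_audio.wav', backend='soundfile')
if waveform.size(0) > 1:       # downmix multi-channel audio to mono
    waveform = waveform.mean(dim=0, keepdim=True)
if sr != 16000:                # resample to the model's 16 kHz rate
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

sample = {
    'waveform': waveform.cuda(),
    'waveform_length': torch.tensor([waveform.size(-1)]).cuda(),
}

with torch.no_grad(), torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    latent, frame_num = model.encode_latent(**sample)

# At the 50 Hz frame rate reported below, frame_num / 50 approximates the clip
# duration in seconds (frame_num's exact type and shape are an assumption).
print(latent.shape, frame_num)
```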

## Performance
### Speech reconstruction performance
<table>
<caption>Speech reconstruction performance comparison on various audio benchmark datasets. The best results are in <strong>bold</strong>.</caption>
<thead>
<tr>
<th rowspan="2" align="left"><b>System</b></th>
<th rowspan="2" align="center"><b>Frame Rate (Hz)</b></th>
<th colspan="3" align="center"><b>SEED-ZH</b></th>
<th colspan="3" align="center"><b>SEED-EN</b></th>
</tr>
<tr>
<th align="center"><b>PESQ↑</b></th>
<th align="center"><b>SIM↑</b></th>
<th align="center"><b>STOI↑</b></th>
<th align="center"><b>PESQ↑</b></th>
<th align="center"><b>SIM↑</b></th>
<th align="center"><b>STOI↑</b></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">MiMo-Audio-Tokenizer</td>
<td align="center">25</td>
<td align="center">2.71</td>
<td align="center">0.89</td>
<td align="center">0.93</td>
<td align="center">2.43</td>
<td align="center">0.85</td>
<td align="center">0.92</td>
</tr>
<tr>
<td align="left">GLM4-Voice-Tokenizer</td>
<td align="center">12.5</td>
<td align="center">1.06</td>
<td align="center">0.33</td>
<td align="center">0.61</td>
<td align="center">1.05</td>
<td align="center">0.12</td>
<td align="center">0.60</td>
</tr>
<tr>
<td align="left">Baichuan-Audio-Tokenizer</td>
<td align="center">12.5</td>
<td align="center">1.84</td>
<td align="center">0.78</td>
<td align="center">0.86</td>
<td align="center">1.62</td>
<td align="center">0.69</td>
<td align="center">0.85</td>
</tr>
<tr>
<td align="left">XY-Tokenizer</td>
<td align="center">12.5</td>
<td align="center">2.27</td>
<td align="center">0.77</td>
<td align="center">0.90</td>
<td align="center">2.14</td>
<td align="center">0.82</td>
<td align="center">0.90</td>
</tr>
<tr>
<td align="left">Mimi</td>
<td align="center">75</td>
<td align="center">2.05</td>
<td align="center">0.73</td>
<td align="center">0.89</td>
<td align="center">2.01</td>
<td align="center">0.77</td>
<td align="center">0.89</td>
</tr>
<tr>
<td align="left">XCodec2.0</td>
<td align="center">50</td>
<td align="center">2.19</td>
<td align="center">0.80</td>
<td align="center">0.92</td>
<td align="center">2.37</td>
<td align="center">0.82</td>
<td align="center">0.93</td>
</tr>
<tr>
<td align="left">BigCodec</td>
<td align="center">80</td>
<td align="center">2.26</td>
<td align="center">0.81</td>
<td align="center">0.92</td>
<td align="center">2.22</td>
<td align="center">0.80</td>
<td align="center">0.91</td>
</tr>
<tr>
<td align="left"><strong>Ming-UniAudio-Tokenizer (ours)</strong></td>
<td align="center">50</td>
<td align="center"><b>4.21</b></td>
<td align="center"><b>0.96</b></td>
<td align="center"><b>0.98</b></td>
<td align="center"><b>4.04</b></td>
<td align="center"><b>0.96</b></td>
<td align="center"><b>0.98</b></td>
</tr>
</tbody>
</table>
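
For reference, intrusive metrics like the PESQ and STOI columns above can be computed locally with the third-party `pesq` and `pystoi` packages; the sketch below scores a reconstruction against its source at 16 kHz, reusing `waveform` and `output_waveform` from the Quick Start. The exact evaluation protocol behind the table (test lists, alignment, and the speaker-embedding model used for SIM) is not specified here, so treat this as an illustration rather than a reproduction recipe.

```python
# Hedged sketch: score a reconstruction with wide-band PESQ and STOI.
# `waveform` / `output_waveform` are assumed to come from the Quick Start;
# the `pesq` and `pystoi` packages are separate installs (pip install pesq pystoi).
from pesq import pesq
from pystoi import stoi

ref = waveform[0].cpu().numpy()                       # original mono clip at 16 kHz
deg = output_waveform[0, 0].float().cpu().numpy()     # reconstruction (cast in case autocast produced bfloat16)
n = min(len(ref), len(deg))                           # align lengths before scoring
ref, deg = ref[:n], deg[:n]

print('PESQ (wb):', pesq(16000, ref, deg, 'wb'))
print('STOI:', stoi(ref, deg, 16000, extended=False))
```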

### Adaptation performance on downstream ASR tasks
<table>
<caption>Understanding ASR performance comparison on various audio benchmark datasets. The best results are in <strong>bold</strong>.</caption>
<thead>
<tr>
<th rowspan="2"><strong>Task</strong></th>
<th rowspan="2"><strong>Model</strong></th>
<th colspan="7"><strong>Performance</strong></th>
</tr>
<tr>
<th><strong>aishell2-ios</strong></th>
<th><strong>LS-clean</strong></th>
<th><strong>Hunan</strong></th>
<th><strong>Minnan</strong></th>
<th><strong>Guangyue</strong></th>
<th><strong>Chuanyu</strong></th>
<th><strong>Shanghai</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4"><strong>Understanding ASR</strong></td>
<td>Kimi-Audio</td>
<td><strong>2.56</strong></td>
<td><strong>1.28</strong></td>
<td>31.93</td>
<td>80.28</td>
<td>41.49</td>
<td>6.69</td>
<td>60.64</td>
</tr>
<tr>
<td>Qwen2.5 Omni</td>
<td>2.75</td>
<td>1.80</td>
<td>29.31</td>
<td>53.43</td>
<td>10.39</td>
<td>7.61</td>
<td>32.05</td>
</tr>
<tr>
<td>Qwen2 Audio</td>
<td>2.92</td>
<td>1.60</td>
<td>25.88</td>
<td>123.78</td>
<td>7.59</td>
<td>7.77</td>
<td>31.73</td>
</tr>
<tr>
<td><strong>Ming-UniAudio (ours)</strong></td>
<td>2.84</td>
<td>1.62</td>
<td><strong>9.80</strong></td>
<td><strong>16.50</strong></td>
<td><strong>5.51</strong></td>
<td><strong>5.46</strong></td>
<td><strong>14.65</strong></td>
</tr>
</tbody>
</table>

### Adaptation performance on downstream TTS tasks

<table>
<caption>Performance comparison on various audio benchmark datasets. The best results are in <strong>bold</strong>.</caption>
<thead>
<tr>
<th align="left"><b>Task</b></th>
<th align="left"><b>Model</b></th>
<th colspan="4" align="center"><b>Performance</b></th>
</tr>
<tr>
<th></th>
<th></th>
<th align="center"><b>Seed-zh WER (%)</b></th>
<th align="center"><b>Seed-zh SIM</b></th>
<th align="center"><b>Seed-en WER (%)</b></th>
<th align="center"><b>Seed-en SIM</b></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5" align="left" style="vertical-align: middle;"><b>Generation</b></td>
<td align="left">Seed-TTS</td>
<td align="center">1.12</td>
<td align="center"><b>0.80</b></td>
<td align="center">2.25</td>
<td align="center"><b>0.76</b></td>
</tr>
<tr>
<td align="left">MiMo-Audio</td>
<td align="center">1.96</td>
<td align="center">-</td>
<td align="center">5.37</td>
<td align="center">-</td>
</tr>
<tr>
<td align="left">Qwen3-Omni-30B-A3B-Instruct</td>
<td align="center">1.07</td>
<td align="center">-</td>
<td align="center"><b>1.39</b></td>
<td align="center">-</td>
</tr>
<tr>
<td align="left">Ming-Omni-Lite</td>
<td align="center">1.69</td>
<td align="center">0.68</td>
<td align="center">4.31</td>
<td align="center">0.51</td>
</tr>
<tr>
<td align="left"><strong>Ming-UniAudio (ours)</strong></td>
<td align="center"><b>0.95</b></td>
<td align="center">0.70</td>
<td align="center">1.85</td>
<td align="center">0.58</td>
</tr>
</tbody>
</table>

## Acknowledgements
1. Our tokenizer training code borrows heavily from [X-Codec-2.0](https://github.com/zhenye234/X-Codec-2.0.git).
2. We thank the OpenAI team for developing the [Whisper](https://github.com/openai/whisper) model and making its weights publicly available.

## License and Legal Disclaimer

This code repository is licensed under the [MIT License](./LICENSE); the legal disclaimer is located in the [LEGAL.md file](./LEGAL.md) in the project's root directory.

## Citation

If you find our work helpful, please consider citing it.