jaman21 committed · verified
Commit 0bef84e · 1 Parent(s): b02d735

Upload model

README.md ADDED
@@ -0,0 +1,272 @@
+ ---
+ license: other
+ license_name: model-license
+ license_link: https://github.com/modelscope/FunASR/blob/main/MODEL_LICENSE
+ language:
+ - en
+ - zh
+ - ja
+ - ko
+ library: funasr
+ ---
+
+ ([简体中文](./README_zh.md)|English|[日本語](./README_ja.md))
+
+ # Introduction
+
+ GitHub repo: [FunAudioLLM/SenseVoice](https://github.com/FunAudioLLM/SenseVoice)
+
+ SenseVoice is a speech foundation model with multiple speech understanding capabilities, including automatic speech
+ recognition (ASR), spoken language identification (LID), speech emotion recognition (SER), and audio event
+ detection (AED).
+
+ <img src="image/sensevoice2.png">
+
+ [//]: # (<div align="center"><img src="image/sensevoice.png" width="700"/> </div>)
+
+ <div align="center">
+ <h4>
+ <a href="https://fun-audio-llm.github.io/"> Homepage </a>
+ |<a href="#What's News"> What's News </a>
+ |<a href="#Benchmarks"> Benchmarks </a>
+ |<a href="#Install"> Install </a>
+ |<a href="#Usage"> Usage </a>
+ |<a href="#Community"> Community </a>
+ </h4>
+
+ Model Zoo:
+ [modelscope](https://www.modelscope.cn/models/iic/SenseVoiceSmall), [huggingface](https://huggingface.co/FunAudioLLM/SenseVoiceSmall)
+
+ Online Demo:
+ [modelscope demo](https://www.modelscope.cn/studios/iic/SenseVoice), [huggingface space](https://huggingface.co/spaces/FunAudioLLM/SenseVoice)
+
+ </div>
+
+ <a name="Highlights"></a>
+
+ # Highlights 🎯
+
+ **SenseVoice** focuses on high-accuracy multilingual speech recognition, speech emotion recognition, and audio event
+ detection.
+
+ - **Multilingual Speech Recognition:** Trained on over 400,000 hours of data and supporting more than 50 languages,
+   SenseVoice delivers recognition performance that surpasses the Whisper model.
+ - **Rich Transcription:**
+   - Excellent emotion recognition capabilities, matching and surpassing the best current emotion recognition models on
+     test data.
+   - Sound event detection capabilities, supporting the detection of common human-computer interaction events such as
+     background music, applause, laughter, crying, coughing, and sneezing.
+ - **Efficient Inference:** The SenseVoice-Small model uses a non-autoregressive end-to-end framework, leading to
+   exceptionally low inference latency. It requires only 70ms to process 10 seconds of audio, which is 15 times faster
+   than Whisper-Large.
+ - **Convenient Finetuning:** Provides convenient finetuning scripts and strategies, allowing users to easily address
+   long-tail sample issues according to their business scenarios.
+ - **Service Deployment:** Offers a service deployment pipeline supporting multiple concurrent requests, with
+   client-side languages including Python, C++, HTML, Java, and C#, among others.
+
+ <a name="What's News"></a>
+
+ # What's New 🔥
+
+ - 2024/7: Added export features for [ONNX](https://github.com/FunAudioLLM/SenseVoice/demo_onnx.py)
+   and [libtorch](https://github.com/FunAudioLLM/SenseVoice/demo_libtorch.py), as well as Python runtimes:
+   [funasr-onnx-0.4.0](https://pypi.org/project/funasr-onnx/), [funasr-torch-0.1.1](https://pypi.org/project/funasr-torch/).
+ - 2024/7: The [SenseVoice-Small](https://www.modelscope.cn/models/iic/SenseVoiceSmall) voice understanding model is
+   open-sourced. It offers high-precision multilingual speech recognition, emotion recognition, and audio event
+   detection for Mandarin, Cantonese, English, Japanese, and Korean, with exceptionally low inference latency.
+ - 2024/7: CosyVoice, a model for natural speech generation with multilingual, timbre, and emotion control, is
+   open-sourced. CosyVoice excels in multilingual voice generation, zero-shot voice generation, cross-lingual voice
+   cloning, and instruction-following capabilities. See the [CosyVoice repo](https://github.com/FunAudioLLM/CosyVoice)
+   and [CosyVoice space](https://www.modelscope.cn/studios/iic/CosyVoice-300M).
+ - 2024/7: [FunASR](https://github.com/modelscope/FunASR) is a fundamental speech recognition toolkit offering a
+   variety of features, including speech recognition (ASR), voice activity detection (VAD), punctuation restoration,
+   language models, speaker verification, speaker diarization, and multi-talker ASR.
+
+ <a name="Benchmarks"></a>
+
+ # Benchmarks 📝
+
+ ## Multilingual Speech Recognition
+
+ We compared the multilingual speech recognition performance of SenseVoice and Whisper on open-source benchmark
+ datasets, including AISHELL-1, AISHELL-2, WenetSpeech, LibriSpeech, and Common Voice. On Chinese and Cantonese
+ recognition, the SenseVoice-Small model has a clear advantage.
+
+ <div align="center">
+ <img src="image/asr_results1.png" width="400" /><img src="image/asr_results2.png" width="400" />
+ </div>
+
+ ## Speech Emotion Recognition
+
+ Due to the current lack of widely used benchmarks and methods for speech emotion recognition, we conducted evaluations
+ across various metrics on multiple test sets and performed a comprehensive comparison with numerous results from
+ recent benchmarks. The selected test sets cover data in both Chinese and English and include multiple styles, such as
+ performances, films, and natural conversations. Without finetuning on the target data, SenseVoice was able to match
+ and exceed the performance of the current best speech emotion recognition models.
+
+ <div align="center">
+ <img src="image/ser_table.png" width="1000" />
+ </div>
+
+ Furthermore, we compared multiple open-source speech emotion recognition models on the test sets. The results indicate
+ that the SenseVoice-Large model achieved the best performance on nearly all datasets, while the SenseVoice-Small model
+ also surpassed other open-source models on the majority of them.
+
+ <div align="center">
+ <img src="image/ser_figure.png" width="500" />
+ </div>
+
+ ## Audio Event Detection
+
+ Although trained exclusively on speech data, SenseVoice can still function as a standalone event detection model. We
+ compared its performance on the ESC-50 environmental sound classification dataset against the widely used industry
+ models BEATs and PANN. The SenseVoice model achieved commendable results on these tasks; however, due to limitations
+ in training data and methodology, its event classification performance still lags behind specialized AED models.
+
+ <div align="center">
+ <img src="image/aed_figure.png" width="500" />
+ </div>
+
+ ## Computational Efficiency
+
+ The SenseVoice-Small model uses a non-autoregressive end-to-end architecture, resulting in extremely low inference
+ latency. With a parameter count similar to the Whisper-Small model, it infers more than 5 times faster than
+ Whisper-Small and 15 times faster than Whisper-Large.
+
+ <div align="center">
+ <img src="image/inference.png" width="1000" />
+ </div>
+
+ <a name="Install"></a>
+
+ # Requirements
+
+ ```shell
+ pip install -r requirements.txt
+ ```
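+
+ The `requirements.txt` above is part of the [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice). If you only
+ want to run the inference examples on this card, installing FunASR from PyPI should suffice; a minimal sketch,
+ assuming a recent `funasr` release (check the repo's `requirements.txt` for pinned versions):
+
+ ```shell
+ pip install -U funasr
+ ```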
+
+ <a name="Usage"></a>
+
+ # Usage
+
+ ## Inference
+
+ Supports input of audio in any format and of any duration.
+
+ ```python
+ from funasr import AutoModel
+ from funasr.utils.postprocess_utils import rich_transcription_postprocess
+
+ model_dir = "FunAudioLLM/SenseVoiceSmall"
+
+ model = AutoModel(
+     model=model_dir,
+     vad_model="fsmn-vad",
+     vad_kwargs={"max_single_segment_time": 30000},
+     device="cuda:0",
+     hub="hf",
+ )
+
+ # en
+ res = model.generate(
+     input=f"{model.model_path}/example/en.mp3",
+     cache={},
+     language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
+     use_itn=True,
+     batch_size_s=60,
+     merge_vad=True,
+     merge_length_s=15,
+ )
+ text = rich_transcription_postprocess(res[0]["text"])
+ print(text)
+ ```
+
+ Parameter Description:
+
+ - `model_dir`: The name of the model, or the path to the model on the local disk.
+ - `vad_model`: Enables VAD (voice activity detection), which splits long audio into shorter clips. When it is enabled,
+   the measured inference time covers both VAD and SenseVoice and therefore represents end-to-end latency. To measure
+   the SenseVoice model's inference time separately, disable the VAD model.
+ - `vad_kwargs`: Configuration for the VAD model. `max_single_segment_time` is the maximum duration of a segment
+   produced by the `vad_model`, in milliseconds (ms).
+ - `use_itn`: Whether the output includes punctuation and inverse text normalization.
+ - `batch_size_s`: Enables dynamic batching; the total duration of audio in a batch, measured in seconds (s).
+ - `merge_vad`: Whether to merge short audio fragments produced by the VAD model; the merged length is given by
+   `merge_length_s`, in seconds (s).
+
+ If all inputs are short audio clips (under 30 s) and batch inference is needed to speed things up, the VAD model can
+ be removed and `batch_size` set accordingly:
+
+ ```python
+ model = AutoModel(model=model_dir, device="cuda:0", hub="hf")
+
+ res = model.generate(
+     input=f"{model.model_path}/example/en.mp3",
+     cache={},
+     language="zh",  # "zh", "en", "yue", "ja", "ko", "nospeech"
+     use_itn=False,
+     batch_size=64,
+ )
+ ```
+
+ For more usage, please refer to the [docs](https://github.com/modelscope/FunASR/blob/main/docs/tutorial/README.md).
+
+ ### Inference directly
+
+ Supports input of audio in any format, with an input duration limit of 30 seconds or less. (`model.py` comes from the
+ [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice).)
+
+ ```python
+ from model import SenseVoiceSmall
+ from funasr.utils.postprocess_utils import rich_transcription_postprocess
+
+ model_dir = "FunAudioLLM/SenseVoiceSmall"
+ m, kwargs = SenseVoiceSmall.from_pretrained(model=model_dir, device="cuda:0", hub="hf")
+ m.eval()
+
+ res = m.inference(
+     data_in=f"{kwargs['model_path']}/example/en.mp3",
+     language="auto",  # "zh", "en", "yue", "ja", "ko", "nospeech"
+     use_itn=False,
+     **kwargs,
+ )
+
+ text = rich_transcription_postprocess(res[0][0]["text"])
+ print(text)
+ ```
+
+ ### Export and Test (*Ongoing*)
+
+ Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
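+
+ This repository already ships `model.onnx` and `model_quant.onnx`. As a minimal sketch of how such files can be
+ produced with FunASR, assuming the `AutoModel.export` interface described in the FunASR docs (the maintained export
+ scripts are `demo_onnx.py` and `demo_libtorch.py` in the SenseVoice repo):
+
+ ```python
+ # Hedged sketch: assumes funasr's AutoModel.export interface; prefer the
+ # repo's demo_onnx.py / demo_libtorch.py for the maintained versions.
+ from funasr import AutoModel
+
+ model = AutoModel(model="FunAudioLLM/SenseVoiceSmall", hub="hf", device="cpu")
+
+ # Export to ONNX; quantize=True would additionally write a quantized variant.
+ # Treat the exact argument names as assumptions and check the FunASR docs.
+ res = model.export(type="onnx", quantize=False)
+ print(res)
+ ```
+
+ The exported model can then be served with the `funasr-onnx` runtime mentioned in What's New above.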
+
+ ## Service
+
+ Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
+
+ ## Finetune
+
+ Refer to [SenseVoice](https://github.com/FunAudioLLM/SenseVoice).
+
+ ## WebUI
+
+ The `webui.py` script is provided in the [SenseVoice repo](https://github.com/FunAudioLLM/SenseVoice).
+
+ ```shell
+ python webui.py
+ ```
+
+ <div align="center"><img src="image/webui.png" width="700"/> </div>
+
+ <a name="Community"></a>
+
+ # Community
+
+ If you encounter problems in use, you can open issues directly on the GitHub page.
+
+ You can also scan the following DingTalk group QR code to join the community for communication and discussion.
+
+ | FunAudioLLM | FunASR |
+ |:----------------------------------------------------------------:|:--------------------------------------------------------:|
+ | <div align="left"><img src="image/dingding_sv.png" width="250"/> | <img src="image/dingding_funasr.png" width="250"/></div> |
am.mvn ADDED
@@ -0,0 +1,8 @@
+ <Nnet>
+ <Splice> 560 560
+ [ 0 ]
+ <AddShift> 560 560
+ <LearnRateCoef> 0 [ -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 
-13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 -8.311879 -8.600912 -9.615928 -10.43595 -11.21292 -11.88333 -12.36243 -12.63706 -12.8818 -12.83066 -12.89103 -12.95666 -13.19763 -13.40598 -13.49113 -13.5546 -13.55639 -13.51915 -13.68284 -13.53289 -13.42107 -13.65519 -13.50713 -13.75251 -13.76715 -13.87408 -13.73109 -13.70412 -13.56073 -13.53488 -13.54895 -13.56228 -13.59408 -13.62047 -13.64198 -13.66109 -13.62669 -13.58297 -13.57387 -13.4739 -13.53063 -13.48348 -13.61047 -13.64716 -13.71546 -13.79184 -13.90614 -14.03098 -14.18205 -14.35881 -14.48419 -14.60172 -14.70591 -14.83362 -14.92122 -15.00622 -15.05122 -15.03119 -14.99028 -14.92302 -14.86927 -14.82691 -14.7972 -14.76909 -14.71356 -14.61277 -14.51696 -14.42252 -14.36405 -14.30451 -14.23161 -14.19851 -14.16633 -14.15649 -14.10504 -13.99518 -13.79562 -13.3996 -12.7767 -11.71208 ]
+ <Rescale> 560 560
+ <LearnRateCoef> 0 [ 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 
0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 0.155775 0.154484 0.1527379 0.1518718 0.1506028 0.1489256 0.147067 0.1447061 0.1436307 0.1443568 0.1451849 0.1455157 0.1452821 0.1445717 0.1439195 0.1435867 0.1436018 0.1438781 0.1442086 0.1448844 0.1454756 0.145663 0.146268 0.1467386 0.1472724 0.147664 0.1480913 0.1483739 0.1488841 0.1493636 0.1497088 0.1500379 0.1502916 0.1505389 0.1506787 0.1507102 0.1505992 0.1505445 0.1505938 0.1508133 0.1509569 0.1512396 0.1514625 0.1516195 0.1516156 0.1515561 0.1514966 0.1513976 0.1512612 0.151076 0.1510596 0.1510431 0.151077 0.1511168 0.1511917 0.151023 0.1508045 0.1505885 0.1503493 0.1502373 0.1501726 0.1500762 0.1500065 0.1499782 0.150057 0.1502658 0.150469 0.1505335 0.1505505 0.1505328 0.1504275 0.1502438 0.1499674 0.1497118 0.1494661 0.1493102 0.1493681 0.1495501 0.1499738 0.1509654 ]
+ </Nnet>
chn_jpn_yue_eng_ko_spectok.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa87f86064c3730d799ddf7af3c04659151102cba548bce325cf06ba4da4e6a8
+ size 377341
config.yaml ADDED
@@ -0,0 +1,97 @@
+ encoder: SenseVoiceEncoderSmall
+ encoder_conf:
+   output_size: 512
+   attention_heads: 4
+   linear_units: 2048
+   num_blocks: 50
+   tp_blocks: 20
+   dropout_rate: 0.1
+   positional_dropout_rate: 0.1
+   attention_dropout_rate: 0.1
+   input_layer: pe
+   pos_enc_class: SinusoidalPositionEncoder
+   normalize_before: true
+   kernel_size: 11
+   sanm_shfit: 0
+   selfattention_layer_type: sanm
+
+ model: SenseVoiceSmall
+ model_conf:
+   length_normalized_loss: true
+   sos: 1
+   eos: 2
+   ignore_id: -1
+
+ tokenizer: SentencepiecesTokenizer
+ tokenizer_conf:
+   bpemodel: null
+   unk_symbol: <unk>
+   split_with_space: true
+
+ frontend: WavFrontend
+ frontend_conf:
+   fs: 16000
+   window: hamming
+   n_mels: 80
+   frame_length: 25
+   frame_shift: 10
+   lfr_m: 7
+   lfr_n: 6
+   cmvn_file: null
+
+ dataset: SenseVoiceCTCDataset
+ dataset_conf:
+   index_ds: IndexDSJsonl
+   batch_sampler: EspnetStyleBatchSampler
+   data_split_num: 32
+   batch_type: token
+   batch_size: 14000
+   max_token_length: 2000
+   min_token_length: 60
+   max_source_length: 2000
+   min_source_length: 60
+   max_target_length: 200
+   min_target_length: 0
+   shuffle: true
+   num_workers: 4
+   sos: ${model_conf.sos}
+   eos: ${model_conf.eos}
+   IndexDSJsonl: IndexDSJsonl
+   retry: 20
+
+ train_conf:
+   accum_grad: 1
+   grad_clip: 5
+   max_epoch: 20
+   keep_nbest_models: 10
+   avg_nbest_model: 10
+   log_interval: 100
+   resume: true
+   validate_interval: 10000
+   save_checkpoint_interval: 10000
+
+ optim: adamw
+ optim_conf:
+   lr: 0.00002
+ scheduler: warmuplr
+ scheduler_conf:
+   warmup_steps: 25000
+
+ specaug: SpecAugLFR
+ specaug_conf:
+   apply_time_warp: false
+   time_warp_window: 5
+   time_warp_mode: bicubic
+   apply_freq_mask: true
+   freq_mask_width_range:
+   - 0
+   - 30
+   lfr_rate: 6
+   num_freq_mask: 1
+   apply_time_mask: true
+   time_mask_width_range:
+   - 0
+   - 12
+   num_time_mask: 1
configuration.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "framework": "pytorch",
+   "task": "auto-speech-recognition",
+   "model": {
+     "type": "funasr"
+   },
+   "pipeline": {
+     "type": "funasr-pipeline"
+   },
+   "model_name_in_hub": {
+     "ms": "",
+     "hf": ""
+   },
+   "file_path_metas": {
+     "init_param": "model.pt",
+     "config": "config.yaml",
+     "tokenizer_conf": {
+       "bpemodel": "chn_jpn_yue_eng_ko_spectok.bpe.model"
+     },
+     "frontend_conf": {
+       "cmvn_file": "am.mvn"
+     }
+   }
+ }
model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75a054eafc563ffc46febb5850bc8938ac49c5d84641a21a52de8a48cce28ef8
+ size 937615371
model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:833ca2dcfdf8ec91bd4f31cfac36d6124e0c459074d5e909aec9cabe6204a3ea
+ size 936291369
model_quant.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b466b19006784340a9f09af96f37778363ccc50917db02d4dc10ca260d73434c
+ size 241217542