FRCRN (code, models, paper)
Changed files:

- .gitattributes +3 -0
- FRCRN. Boosting Feature Representation using Frequency Recurrence for Monaural Speech Enhancement.pdf +3 -0
- code/FRCRN [KappyDays].zip +3 -0
- code/FRCRN.zip +3 -0
- code/speech_frcrn_ans_cirm_16k.zip +3 -0
- models/FRCRN_SE_16K (RedbeardNZ)/.gitattributes +35 -0
- models/FRCRN_SE_16K (RedbeardNZ)/README.md +8 -0
- models/FRCRN_SE_16K (RedbeardNZ)/last_best_checkpoint +1 -0
- models/FRCRN_SE_16K (RedbeardNZ)/last_best_checkpoint.pt +3 -0
- models/FRCRN_SE_16K (RedbeardNZ)/source.txt +1 -0
- models/FRCRN_SE_16K/.gitattributes +35 -0
- models/FRCRN_SE_16K/README.md +8 -0
- models/FRCRN_SE_16K/last_best_checkpoint +1 -0
- models/FRCRN_SE_16K/last_best_checkpoint.pt +3 -0
- models/FRCRN_SE_16K/source.txt +1 -0
- models/a_frcrn_tflocoformer/.gitattributes +35 -0
- models/a_frcrn_tflocoformer/README.md +3 -0
- models/a_frcrn_tflocoformer/last_best_checkpoint.pt +3 -0
- models/a_frcrn_tflocoformer/last_best_checkpoint_old.pt +3 -0
- models/a_frcrn_tflocoformer/source.txt +1 -0
- models/speech_frcrn_ans_cirm_16k/.gitattributes +35 -0
- models/speech_frcrn_ans_cirm_16k/README.md +292 -0
- models/speech_frcrn_ans_cirm_16k/configuration.json +65 -0
- models/speech_frcrn_ans_cirm_16k/description/matrix.png +3 -0
- models/speech_frcrn_ans_cirm_16k/description/model.png +0 -0
- models/speech_frcrn_ans_cirm_16k/examples/speech_with_noise.wav +0 -0
- models/speech_frcrn_ans_cirm_16k/examples/speech_with_noise1.wav +3 -0
- models/speech_frcrn_ans_cirm_16k/faq.md +11 -0
- models/speech_frcrn_ans_cirm_16k/pytorch_model.bin +3 -0
- models/speech_frcrn_ans_cirm_16k/source.txt +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+FRCRN.[[:space:]]Boosting[[:space:]]Feature[[:space:]]Representation[[:space:]]using[[:space:]]Frequency[[:space:]]Recurrence[[:space:]]for[[:space:]]Monaural[[:space:]]Speech[[:space:]]Enhancement.pdf filter=lfs diff=lfs merge=lfs -text
+models/speech_frcrn_ans_cirm_16k/description/matrix.png filter=lfs diff=lfs merge=lfs -text
+models/speech_frcrn_ans_cirm_16k/examples/speech_with_noise1.wav filter=lfs diff=lfs merge=lfs -text
FRCRN. Boosting Feature Representation using Frequency Recurrence for Monaural Speech Enhancement.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9691ee3646a4c20605c16f0150356a49c2f89257a32f8a7bc19b1f62ccf0559
size 636841
code/FRCRN [KappyDays].zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5efae53a4dc378e4317bf78fdf02cc0b623f8f4249f6a6866337260969a4013
size 150820149
code/FRCRN.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:675cae196df817d704e74198bf7f42c3d83a7e51c22235adde185df39b589fdf
size 88147307
code/speech_frcrn_ans_cirm_16k.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f682ba855a04229a1ec5289ef0861f5b16deefbfb7f74e20eb6f220fa0d6af3
size 50511021
models/FRCRN_SE_16K (RedbeardNZ)/.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
models/FRCRN_SE_16K (RedbeardNZ)/README.md
ADDED
@@ -0,0 +1,8 @@
---
license: apache-2.0
---
The FRCRN_SE_16K model weights for 16 kHz speech enhancement in the [ClearerVoice-Studio](https://github.com/modelscope/ClearerVoice-Studio/tree/main) repo.

This model is trained on large-scale datasets including open-source and private data.

It enhances speech audio by removing background noise.
models/FRCRN_SE_16K (RedbeardNZ)/last_best_checkpoint
ADDED
@@ -0,0 +1 @@
last_best_checkpoint.pt
models/FRCRN_SE_16K (RedbeardNZ)/last_best_checkpoint.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b22256adbb91b68cf5a3db8f6657a4fb17066eecd5f069803e59c186c1cf3ebb
size 161053751
models/FRCRN_SE_16K (RedbeardNZ)/source.txt
ADDED
@@ -0,0 +1 @@
https://huggingface.co/RedbeardNZ/FRCRN_SE_16K
models/FRCRN_SE_16K/.gitattributes
ADDED
@@ -0,0 +1,35 @@
(standard LFS attribute patterns, identical to models/FRCRN_SE_16K (RedbeardNZ)/.gitattributes above)
models/FRCRN_SE_16K/README.md
ADDED
@@ -0,0 +1,8 @@
---
license: apache-2.0
---
The FRCRN_SE_16K model weights for 16 kHz speech enhancement in the [ClearerVoice-Studio](https://github.com/modelscope/ClearerVoice-Studio/tree/main) repo.

This model is trained on large-scale datasets including open-source and private data.

It enhances speech audio by removing background noise.
models/FRCRN_SE_16K/last_best_checkpoint
ADDED
@@ -0,0 +1 @@
last_best_checkpoint.pt
models/FRCRN_SE_16K/last_best_checkpoint.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b22256adbb91b68cf5a3db8f6657a4fb17066eecd5f069803e59c186c1cf3ebb
size 161053751
models/FRCRN_SE_16K/source.txt
ADDED
@@ -0,0 +1 @@
https://huggingface.co/alibabasglab/FRCRN_SE_16K
models/a_frcrn_tflocoformer/.gitattributes
ADDED
@@ -0,0 +1,35 @@
(standard LFS attribute patterns, identical to models/FRCRN_SE_16K (RedbeardNZ)/.gitattributes above)
models/a_frcrn_tflocoformer/README.md
ADDED
@@ -0,0 +1,3 @@
---
license: apache-2.0
---
models/a_frcrn_tflocoformer/last_best_checkpoint.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:20d9b0002eb459d7663039401e8a761dfbd48976b2e6ef1831474a7b2bb02317
size 172052426
models/a_frcrn_tflocoformer/last_best_checkpoint_old.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aae3b392ed6cc9456d954c9319baaadd75f9981150a8d28167dbd14e0b809e6f
size 172052426
models/a_frcrn_tflocoformer/source.txt
ADDED
@@ -0,0 +1 @@
https://huggingface.co/alibabasglab/a_frcrn_tflocoformer
models/speech_frcrn_ans_cirm_16k/.gitattributes
ADDED
@@ -0,0 +1,35 @@
(standard LFS attribute patterns, identical to models/FRCRN_SE_16K (RedbeardNZ)/.gitattributes above)
models/speech_frcrn_ans_cirm_16k/README.md
ADDED
@@ -0,0 +1,292 @@
---
tasks:
- acoustic-noise-suppression
widgets:
- task: acoustic-noise-suppression
  inputs:
  - type: audio
    name: input
    title: Noisy input audio
    validator:
      max_size: 10M
  examples:
  - name: 1
    title: Example 1
    inputs:
    - name: input
      data: git://examples/speech_with_noise1.wav
  - name: 2
    title: Example 2
    inputs:
    - name: input
      data: git://examples/speech_with_noise.wav
  inferencespec:
    cpu: 1
    memory: 1000
    gpu: 0
    gpu_memory: 1000
model_type:
- complex-nn
domain:
- audio
frameworks:
- pytorch
model-backbone:
- frcrn
customized-quickstart: True
finetune-support: True
license: Apache License 2.0
tags:
- Alibaba
- Mind DNS
- ANS
- AI denoising
- speech enhancement
- audio pre-processing
- 3A
datasets:
  train:
  - modelscope/ICASSP_2021_DNS_Challenge
  evaluation:
  - modelscope/ICASSP_2021_DNS_Challenge

---


# FRCRN Speech Enhancement Model

### Update

FRCRN has been integrated into the speech-processing platform [ClearerVoice-Studio](https://github.com/modelscope/ClearerVoice-Studio)! We are working to open-source a shared platform that combines speech enhancement, speech separation, speech super-resolution, target-speaker extraction, and more, giving users a comprehensive toolkit for speech signal processing. For details, visit our GitHub repository https://github.com/modelscope/ClearerVoice-Studio and the ModelScope space https://modelscope.cn/studios/iic/ClearerVoice-Studio .

Recording conditions are often far from ideal: we try to capture clean speech only to find the surroundings noisy, or we raise our voice on a crowded subway or bus just to be heard on a call. Environmental noise like this gets in the way of speech applications and is a pervasive, difficult problem in speech communication. Speech quality and intelligibility are easily degraded by ambient noise, the recording device, reverberation, and echo, sharply reducing call quality and communication efficiency; maintaining high quality and intelligibility in noisy environments has long been a goal for industry and academia alike.

After years of research, speech denoising has made substantial progress, especially for complex acoustic environments: incorporating complex-domain deep learning has brought large performance gains, suppressing background noise as much as possible while keeping speech distortion low and restoring the clarity of the target speech. For this reason, denoising models are also commonly called speech enhancement models.

A speech enhancement model extracts the target speech from a corrupted recording, restoring its quality and intelligibility while also improving speech-recognition performance. Our model needs only a single-microphone recording as input and outputs denoised audio in the same format: only the noise and reverberation are removed, and the original speech is preserved as much as possible.

## Model description

The FRCRN speech enhancement model is built on the new frequency-recurrent CRN (FRCRN) framework, which extends the convolutional encoder-decoder architecture with additional recurrent layers into a convolutional recurrent encoder-decoder. This mitigates the limited receptive field of convolution kernels and improves the model's feature representation along the frequency axis, in particular long-range frequency correlations, so that speech can be identified and protected more precisely while noise is removed.

In addition, we introduce a feedforward sequential memory network (FSMN) to reduce the complexity of the recurrent layers, and combine it with complex-valued operations into a fully complex deep network. This models long speech sequences more effectively and enhances magnitude and phase jointly; related models performed well in the IEEE/INTERSPEECH DNS Challenges. The released model is further optimized over the challenge version, using two cascaded U-Nets and SE layers for more stable results. If you need a causal model, you can modify the code yourself, replacing the SE layers with convolutional layers or adding masking.

The network architecture is shown below.



Both the input and the output are 16 kHz, single-channel, time-domain waveforms; the input can be recorded directly with a single microphone, and the output is the noise-suppressed speech signal [1]. The input signal is converted by an STFT into complex spectral features; complex FSMNs model correlations along the frequency axis and long sequences along the time axis, and predict a complex ideal ratio mask (cIRM) as the intermediate target. The predicted mask is multiplied with the input spectrum to obtain the enhanced spectrum, and an inverse STFT converts it back to the enhanced waveform.
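The mask-and-resynthesize step described above can be sketched as follows. This is only an illustration, not the repository's code: the STFT settings follow configuration.json (win_len=640, win_inc=320, fft_len=640, hann window), and an all-ones dummy mask stands in for the network's predicted cIRM.

```python
import numpy as np
from scipy.signal import istft, stft

fs, fft_len, hop = 16000, 640, 320   # per configuration.json: fft_len = win_len = 640, win_inc = 320
noisy = np.random.default_rng(0).standard_normal(fs)  # 1 s placeholder waveform

# Analysis: complex spectrogram with a hann window, as in the model's front end.
_, _, spec = stft(noisy, fs=fs, window='hann', nperseg=fft_len, noverlap=fft_len - hop)

# The network would predict a complex ideal ratio mask (cIRM) of the same shape;
# an all-ones mask is used here so the round trip can be checked.
cirm = np.ones_like(spec)
enhanced_spec = cirm * spec          # complex multiplication applies the mask

# Synthesis: inverse STFT back to a time-domain waveform.
_, enhanced = istft(enhanced_spec, fs=fs, window='hann', nperseg=fft_len, noverlap=fft_len - hop)
```

With the identity mask the output reproduces the input, confirming that the analysis/synthesis pair is lossless; in the real model, the predicted cIRM is what removes the noise.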

## Intended use and scope

### How to use

Once ModelScope is installed, you can run inference with `speech_frcrn_ans_cirm_16k`. Both input and output are 16 kHz single-channel time-domain waveforms: the input can be recorded directly with a single microphone, and the output is the noise-suppressed speech signal. For convenience, the pipeline adds wav-file handling before and after the model, so it can read a wav file directly and save the output to a specified wav file.

#### Environment setup

* The model supports Linux, Windows, and macOS.
* The model has been tested on PyTorch 1.8–1.11 and 1.13. Because of a PyTorch v1.12 [bug](https://github.com/pytorch/pytorch/issues/80837) it cannot run on v1.12; please upgrade to a newer version, or roll back to v1.11 with:

```
conda install pytorch==1.11 torchaudio torchvision -c pytorch
```

* The pipeline uses the third-party library SoundFile for wav-file handling. **On Linux, users must manually install SoundFile's underlying dependency libsndfile**; on Windows and macOS it is installed automatically. See the [SoundFile documentation](https://github.com/bastibe/python-soundfile#installation) for details. On Ubuntu, for example, run:

```shell
sudo apt-get update
sudo apt-get install libsndfile1
```

#### Code example

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks


ans = pipeline(
    Tasks.acoustic_noise_suppression,
    model='damo/speech_frcrn_ans_cirm_16k')
result = ans(
    'https://modelscope.oss-cn-beijing.aliyuncs.com/test/audios/speech_with_noise1.wav',
    output_path='output.wav')
```

### Limitations and possible bias

Noise-suppression performance degrades to varying degrees in scenes with interfering speakers.

## Training data

The training data come from the open-source DNS Challenge dataset provided by the Microsoft team for the ICASSP challenges ([official site](https://github.com/microsoft/DNS-Challenge)) [2]. Since this model targets 16 kHz audio, only the fullband data were used, with minor adjustments. For convenience, we have mirrored the DNS Challenge 2020 dataset on the ModelScope [DatasetHub](https://modelscope.cn/datasets/modelscope/ICASSP_2021_DNS_Challenge/summary); see the dataset documentation for download and usage instructions.

## Training procedure

### Copy the official model

To train your own denoising model you first need a copy of the official model. The ModelScope framework saves official models in a local cache by default; copy the cached model directory into your working directory.

Check the directory ./speech_frcrn_ans_cirm_16k: pytorch_model.bin is the model file. To train a brand-new model from scratch, delete pytorch_model.bin so it is not loaded at startup; to continue training from the official model, keep it.

```bash
cp -r ~/.cache/modelscope/hub/damo/speech_frcrn_ans_cirm_16k ./
cd ./speech_frcrn_ans_cirm_16k
rm pytorch_model.bin
```

The configuration.json file in this directory holds the model and training options; we recommend modifying it only once you are thoroughly familiar with the code.

### Run the training code

The training example below has two places where you need to substitute your own local paths:

1. Replace `/your_local_path/ICASSP_2021_DNS_Challenge` with the path of the dataset you downloaded earlier.
2. Replace the model path with the path of your copy of the official model.

```python
import os

from datasets import load_dataset

from modelscope.metainfo import Trainers
from modelscope.msdatasets import MsDataset
from modelscope.trainers import build_trainer
from modelscope.utils.audio.audio_utils import to_segment

tmp_dir = './checkpoint'
if not os.path.exists(tmp_dir):
    os.makedirs(tmp_dir)

hf_ds = load_dataset(
    '/your_local_path/ICASSP_2021_DNS_Challenge',
    'train',
    split='train')
mapped_ds = hf_ds.map(
    to_segment,
    remove_columns=['duration'],
    num_proc=8,
    batched=True,
    batch_size=36)
mapped_ds = mapped_ds.train_test_split(test_size=3000)
mapped_ds = mapped_ds.shuffle()
dataset = MsDataset.from_hf_dataset(mapped_ds)

kwargs = dict(
    model='your_local_path/speech_frcrn_ans_cirm_16k',
    train_dataset=dataset['train'],
    eval_dataset=dataset['test'],
    work_dir=tmp_dir)
trainer = build_trainer(
    Trainers.speech_frcrn_ans_cirm_16k, default_args=kwargs)
trainer.train()
```

With the default configuration, training runs for 200 epochs of 2000 batches each, and checkpoints are saved in the directory set by tmp_dir = './checkpoint' in the code. That directory also contains a log file recording the training and test loss for each checkpoint.

### Use your model

Pick the best-performing checkpoint you trained, copy it to `/your_local_path/speech_frcrn_ans_cirm_16k`, and rename it `pytorch_model.bin`.
Then replace the model path `/your_local_path/speech_frcrn_ans_cirm_16k` in the code below with your copied model directory to try out your model.

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks


ans = pipeline(
    Tasks.acoustic_noise_suppression,
    model='/your_local_path/speech_frcrn_ans_cirm_16k')
result = ans(
    'https://modelscope.oss-cn-beijing.aliyuncs.com/test/audios/speech_with_noise.wav',
    output_path='output.wav')
```

The http URL in the code can also be replaced with a local audio file path; note that the model expects 16000 Hz sample-rate, 16-bit, single-channel wav files. To process multiple files, simply call ans() in a loop. For multi-threaded processing, run pipeline() inside each thread to initialize a separate ans object.
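The format requirement above is easy to get wrong, so it can help to validate files before looping ans() over them. The helper below is not part of the model code; it is a small standard-library sketch that checks the 16 kHz / 16-bit / mono constraint, and the file name it writes is made up for the demonstration.

```python
import struct
import wave

def is_supported_wav(path: str) -> bool:
    """Return True if the wav file matches the model's expected input:
    16000 Hz sample rate, 16-bit samples, single channel."""
    with wave.open(path, 'rb') as w:
        return (w.getframerate() == 16000
                and w.getsampwidth() == 2
                and w.getnchannels() == 1)

# Write 10 ms of silence in the expected format to demonstrate the check.
with wave.open('format_check.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack('<160h', *([0] * 160)))

ok = is_supported_wav('format_check.wav')
```

Files that fail the check can be resampled or downmixed (for example with SoundFile or ffmpeg) before being passed to the pipeline.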

## Evaluation and results

Comparison with other SOTA models on the official DNS Challenge 2020 test set:



Metric notes:

* PESQ (Perceptual Evaluation of Speech Quality): an objective, full-reference measure of speech quality; scores range from -0.5 to 4.5, and higher scores mean better quality.
* STOI (Short-Time Objective Intelligibility): an objective measure of speech intelligibility as perceived by the human auditory system; values range from 0 to 1, and higher values mean clearer, more intelligible speech.
* SI-SNR (Scale-Invariant Signal-to-Noise Ratio): the ordinary SNR normalized to remove the effect of signal scale; a standard measure for speech-enhancement algorithms targeting wideband noise distortion.
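As a concrete reference for the SI-SNR definition above, here is a common formulation (zero-mean, projection onto the reference); the repository's own metric code may differ in details such as the epsilon.

```python
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant SNR in dB, zero-mean convention."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to get the scaled target component.
    s_target = (estimate @ reference) / (reference @ reference + eps) * reference
    e_noise = estimate - s_target
    return 10 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps) + eps)

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noisy = clean + 0.1 * rng.standard_normal(16000)   # roughly 20 dB of additive noise

a = si_snr(noisy, clean)
b = si_snr(2.0 * noisy, clean)   # rescaling the estimate leaves SI-SNR unchanged
```

The second call illustrates the "scale-invariant" part: unlike plain SNR, the metric does not reward or penalize a global gain on the estimate.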

The DNS Challenge results list is available [here](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/results/).

### Evaluation code

The model can be evaluated and verified with the code below. The DNS Challenge 2020 validation set is hosted on the ModelScope [DatasetHub](https://modelscope.cn/datasets/modelscope/ICASSP_2021_DNS_Challenge/summary) for easy download.

```python
import os
import tempfile

from modelscope.metainfo import Trainers
from modelscope.msdatasets import MsDataset
from modelscope.trainers import build_trainer
from modelscope.utils.audio.audio_utils import to_segment

tmp_dir = tempfile.TemporaryDirectory().name
if not os.path.exists(tmp_dir):
    os.makedirs(tmp_dir)

hf_ds = MsDataset.load(
    'ICASSP_2021_DNS_Challenge', split='test').to_hf_dataset()
mapped_ds = hf_ds.map(
    to_segment,
    remove_columns=['duration'],
    # num_proc=5,  # Comment this line to avoid error in Jupyter notebook
    batched=True,
    batch_size=36)
dataset = MsDataset.from_hf_dataset(mapped_ds)
kwargs = dict(
    model='damo/speech_frcrn_ans_cirm_16k',
    model_revision='beta',
    train_dataset=None,
    eval_dataset=dataset,
    val_iters_per_epoch=125,
    work_dir=tmp_dir)

trainer = build_trainer(
    Trainers.speech_frcrn_ans_cirm_16k, default_args=kwargs)

eval_res = trainer.evaluate()
print(eval_res['avg_sisnr'])
```

See the papers below for more details.

### Papers and citation

[1]

```BibTeX
@INPROCEEDINGS{9747578,
  author={Zhao, Shengkui and Ma, Bin and Watcharasupat, Karn N. and Gan, Woon-Seng},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={FRCRN: Boosting Feature Representation Using Frequency Recurrence for Monaural Speech Enhancement},
  year={2022},
  pages={9281-9285},
  doi={10.1109/ICASSP43922.2022.9747578}}
```

[2]

```BibTeX
@INPROCEEDINGS{9747230,
  author={Dubey, Harishchandra and Gopal, Vishak and Cutler, Ross and Aazami, Ashkan and Matusevych, Sergiy and Braun, Sebastian and Eskimez, Sefik Emre and Thakker, Manthan and Yoshioka, Takuya and Gamper, Hannes and Aichner, Robert},
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={ICASSP 2022 Deep Noise Suppression Challenge},
  year={2022},
  pages={9271-9275},
  doi={10.1109/ICASSP43922.2022.9747230}}
```
models/speech_frcrn_ans_cirm_16k/configuration.json
ADDED
@@ -0,0 +1,64 @@
{
    "framework": "pytorch",
    "task": "acoustic-noise-suppression",
    "pipeline": {
        "type": "speech_frcrn_ans_cirm_16k"
    },
    "model": {
        "type": "speech_frcrn_ans_cirm_16k",
        "complex": true,
        "model_complexity": 45,
        "model_depth": 14,
        "log_amp": false,
        "padding_mode": "zeros",
        "win_len": 640,
        "win_inc": 320,
        "fft_len": 640,
        "win_type": "hann"
    },
    "preprocessor": {},
    "train": {
        "max_epochs": 200,
        "train_iters_per_epoch": 2000,
        "dataloader": {
            "batch_size_per_gpu": 12,
            "workers_per_gpu": 0
        },
        "seed": 20,
        "optimizer": {
            "type": "Adam",
            "lr": 0.001,
            "weight_decay": 0.00001,
            "options": {
                "grad_clip": {
                    "max_norm": 10.0
                }
            }
        },
        "lr_scheduler": {
            "type": "ReduceLROnPlateau",
            "mode": "min",
            "factor": 0.98,
            "patience": 2,
            "verbose": true
        },
        "lr_scheduler_hook": {
            "type": "PlateauLrSchedulerHook",
            "metric_key": "avg_loss"
        },
        "hooks": [
            {
                "type": "EvaluationHook",
                "interval": 1
            }
        ]
    },
    "evaluation": {
        "val_iters_per_epoch": 200,
        "dataloader": {
            "batch_size_per_gpu": 12,
            "workers_per_gpu": 0
        },
        "metrics": ["audio-noise-metric"]
    }
}
models/speech_frcrn_ans_cirm_16k/description/matrix.png
ADDED
Git LFS file

models/speech_frcrn_ans_cirm_16k/description/model.png
ADDED

models/speech_frcrn_ans_cirm_16k/examples/speech_with_noise.wav
ADDED

Binary file (76.8 kB)
models/speech_frcrn_ans_cirm_16k/examples/speech_with_noise1.wav
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b2882d3bcd9e8f8f9531ac34ac09c0208d86500b910d3e1ca34c022caa9be62
size 155874
models/speech_frcrn_ans_cirm_16k/faq.md
ADDED
@@ -0,0 +1,11 @@
## Q: The processed audio sounds wrong?
A: First check that the input is a 16 kHz, single-channel wav file and that its content is noisy speech.

## Q: CPU inference is slow; what can I do?
A: This version of the FRCRN model is computationally heavy, so processing on a CPU takes relatively long, and with the model unchanged there is no good optimization. We recommend a GPU, which is typically several times to tens of times faster; note that the first call initializes CUDA and therefore takes longer than subsequent calls.

## Q: Can the model be exported to ONNX?
A: Export is not supported.

## Q: Training is very slow, about 10 hours per epoch. Is that normal?
A: No. The training flow currently defaults to a single GPU, and one epoch on a single V100 normally takes about 40 minutes. Check CPU and GPU utilization while training.
models/speech_frcrn_ans_cirm_16k/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:477a2f82019b0a2b5bdd51541405d7e585a1f0355e162f3b46bf5c20d79fd612
size 57991225
models/speech_frcrn_ans_cirm_16k/source.txt
ADDED
@@ -0,0 +1,3 @@
https://github.com/ti-j-nafziger/speech_frcrn_ans_cirm_16k
https://huggingface.co/alextomcat/speech_frcrn_ans_cirm_16k
https://modelscope.cn/models/iic/speech_frcrn_ans_cirm_16k