---
license: other
license_name: nsclv1
license_link: https://developer.nvidia.com/downloads/license/nsclv1
---


# NVIDIA NeMo Mel Codec 22kHz
<style>
img {
  display: inline-table;
  vertical-align: middle;
  margin: 0;
  padding: 0;
}
</style>
[![Model architecture](https://img.shields.io/badge/Model_Arch-HiFi--GAN-lightgrey#model-badge)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-64.4M-lightgrey#model-badge)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-multilingual-lightgrey#model-badge)](#datasets)

The NeMo Mel Codec is a neural audio codec that compresses mel-spectrograms into a quantized representation and reconstructs audio from it. The model can be used as a vocoder for speech synthesis.

The model works with full-bandwidth 22.05 kHz speech. It may perform worse on low-bandwidth speech (e.g. 16 kHz speech upsampled to 22.05 kHz) or on non-speech audio.

| Sample Rate | Frame Rate | Bit Rate | # Codebooks | Codebook Size | Embed Dim | FSQ Levels |
|:-----------:|:-------------:|:--------:|:-----------:|:-------------:|:---------:|:------------:|
| 22050 Hz | 86.1 frames/s | 6.9 kbps | 8 | 1000 | 32 | [8, 5, 5, 5] |

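As a sanity check, the bit rate follows directly from the frame rate and codebook configuration in the table. The sketch below (plain Python, values taken from the table) reproduces it:

```python
import math

frame_rate = 86.1          # frames per second, from the table above
num_codebooks = 8
fsq_levels = [8, 5, 5, 5]  # FSQ levels per codebook

# Each codebook has 8 * 5 * 5 * 5 = 1000 entries, i.e. log2(1000) ≈ 9.97 bits.
codebook_size = math.prod(fsq_levels)
bits_per_frame = num_codebooks * math.log2(codebook_size)
bitrate_kbps = frame_rate * bits_per_frame / 1000

print(codebook_size)           # 1000
print(round(bitrate_kbps, 1))  # 6.9
```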
## Model Architecture
The NeMo Mel Codec model uses a residual network encoder and a [HiFi-GAN](https://arxiv.org/abs/2010.05646) decoder. We use [Finite Scalar Quantization (FSQ)](https://arxiv.org/abs/2309.15505) with 8 codebooks and 1000 entries per codebook.

For more details, please refer to [our paper](https://arxiv.org/abs/2406.05298).

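To build intuition for FSQ, here is a toy quantizer with levels [8, 5, 5, 5] (a simplified illustration, not the NeMo implementation): each bounded latent dimension is rounded to one of its allowed levels, and the tuple of level indices is packed into a single code index out of 8 × 5 × 5 × 5 = 1000.

```python
import math

levels = [8, 5, 5, 5]  # FSQ levels per codebook dimension

def fsq_quantize(z):
    """Round each latent dim (clamped to [-1, 1]) to its nearest of L evenly
    spaced levels, then pack the per-dim indices into one code index."""
    indices = []
    for zi, L in zip(z, levels):
        zi = max(-1.0, min(1.0, zi))         # clamp to [-1, 1]
        indices.append(round((zi + 1) / 2 * (L - 1)))
    # Mixed-radix packing: 1000 possible codes in total.
    code = 0
    for idx, L in zip(indices, levels):
        code = code * L + idx
    return code

assert math.prod(levels) == 1000
print(fsq_quantize([0.3, -0.9, 0.0, 1.0]))  # a single code index in [0, 999]
```

Because the "codebook" is just a fixed grid of rounded values, FSQ needs no learned codebook entries and avoids codebook-collapse issues of vector quantization.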
### Input
- **Input Type:** Audio
- **Input Format(s):** .wav files
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** 22050 Hz mono-channel audio

### Output
- **Output Type:** Audio
- **Output Format:** .wav files
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** 22050 Hz mono-channel audio

## How to Use this Model

The model is available in the [NVIDIA NeMo toolkit](https://github.com/NVIDIA/NeMo) and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Inference
For inference, you can follow our [Audio Codec Inference Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Inference.ipynb), which automatically downloads the model checkpoint. Note that you will need to set the `model_name` parameter to "nvidia/mel-codec-22khz".

Alternatively, you can download the `.nemo` checkpoint from the "Files and versions" tab and use the code below to run inference with the model:

```python
import librosa
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel

model_name = "nvidia/mel-codec-22khz"
path_to_input_audio = ??? # path of the input audio
path_to_output_audio = ??? # path of the reconstructed output audio

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nemo_codec_model = AudioCodecModel.from_pretrained(model_name).to(device).eval()

# load the input audio at the codec's sample rate (22050 Hz)
audio, _ = librosa.load(path_to_input_audio, sr=nemo_codec_model.sample_rate)
audio_tensor = torch.from_numpy(audio).unsqueeze(dim=0).to(device)
audio_len = torch.tensor([audio_tensor.shape[1]]).to(device)

with torch.no_grad():
    # get discrete tokens from audio
    encoded_tokens, encoded_len = nemo_codec_model.encode(audio=audio_tensor, audio_len=audio_len)

    # reconstruct audio from tokens
    reconstructed_audio, _ = nemo_codec_model.decode(tokens=encoded_tokens, tokens_len=encoded_len)

# save reconstructed audio
output_audio = reconstructed_audio.cpu().numpy().squeeze()
sf.write(path_to_output_audio, output_audio, nemo_codec_model.sample_rate)
```
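At the table's frame rate of 86.1 frames/s with 8 codebooks per frame, you can estimate how many tokens a clip produces before encoding it. The sketch below assumes a 256-sample hop (22050 / 256 ≈ 86.1, consistent with the table, but an assumption); the exact frame count and tensor layout depend on the model's padding conventions.

```python
sample_rate = 22050
hop_length = 256   # assumed hop size matching the 86.1 frames/s frame rate
num_codebooks = 8

def expected_token_shape(num_samples):
    """Approximate (num_codebooks, num_frames) for a mono clip."""
    num_frames = -(-num_samples // hop_length)  # ceil division
    return (num_codebooks, num_frames)

# A 3-second clip yields roughly 259 frames of 8 tokens each.
print(expected_token_shape(3 * sample_rate))  # (8, 259)
```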

### Training

For fine-tuning on another dataset, please follow the steps in our [Audio Codec Training Tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/tts/Audio_Codec_Training.ipynb). Note that you will need to set the `CONFIG_FILENAME` parameter to the "mel_codec_22050.yaml" config. You will also need to set `pretrained_model_name` to "nvidia/mel-codec-22khz".

## Training, Testing, and Evaluation Datasets:

### Training Datasets

The NeMo Audio Codec is trained on a total of 28.7k hours of speech data from 105 languages.

- [MLS English](https://www.openslr.org/94/) - 25.5k hours, 4.3k speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 3.2k hours, 100k speakers, 105 languages

### Test Datasets

- [MLS English](https://www.openslr.org/94/) - 15 hours, 42 speakers, English
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) - 2 hours, 1356 speakers, 59 languages

## Performance

We evaluate the codec using several objective audio quality metrics: [ViSQOL](https://github.com/google/visqol) and [PESQ](https://lightning.ai/docs/torchmetrics/stable/audio/perceptual_evaluation_speech_quality.html) for perceptual quality (higher is better), [ESTOI](https://ieeexplore.ieee.org/document/7539284) for intelligibility (higher is better), and mel-spectrogram and STFT distances for spectral reconstruction accuracy (lower is better). Metrics are reported on the test sets of both the MLS English and Common Voice data. The model has not been trained or evaluated on non-speech audio.

| Dataset | ViSQOL | PESQ | ESTOI | Mel Distance | STFT Distance |
|:-----------:|:------:|:----:|:-----:|:------------:|:-------------:|
| MLS English | 4.48 | 3.43 | 0.92 | 0.069 | 0.034 |
| CommonVoice | 4.51 | 3.21 | 0.91 | 0.100 | 0.057 |

## Software Integration

### Supported Hardware Microarchitecture Compatibility:
- NVIDIA Ampere
- NVIDIA Blackwell
- NVIDIA Hopper
- NVIDIA Jetson
- NVIDIA Lovelace
- NVIDIA Pascal
- NVIDIA Turing
- NVIDIA Volta

### Runtime Engine

- NeMo 2.0.0

### Preferred Operating System

- Linux

## License/Terms of Use
This model is for research and development only (non-commercial use), and its use is covered by the [NSCLv1](https://developer.nvidia.com/downloads/license/nsclv1) license.

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).