Update README.md
README.md
At least 2GB of RAM is required for the model to load; the larger the RAM, the larger the audio input it can handle.
Current version: Quantum_STT_V2.0. Previous versions can be accessed [here](https://huggingface.co/Quantamhash/Quantum_STT).
## <span style="color:#466f00;">Training and Evaluation Datasets:</span>
### <span style="color:#466f00;">Training</span>
This model was trained using the NeMo toolkit [3], following the strategies below:
- Initialized from a FastConformer SSL checkpoint that was pretrained with a wav2vec method on the LibriLight dataset [7].
- Trained for 150,000 steps on 64 A100 GPUs.
- Dataset corpora were balanced using a temperature sampling value of 0.5 (see the sketch after this list).
- Stage 2 fine-tuning was performed for 2,500 steps on 4 A100 GPUs using approximately 500 hours of high-quality, human-transcribed data from NeMo ASR Set 3.0.
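For illustration, the snippet below shows how temperature-based corpus balancing can be computed. The corpus names and hour counts are hypothetical; only the temperature value of 0.5 comes from the description above.

```python
# Hypothetical corpus sizes in hours; only the temperature (0.5) is taken from the model card.
corpus_hours = {"corpus_a": 20000, "corpus_b": 3000, "corpus_c": 500}
temperature = 0.5

# Temperature sampling: raise each corpus's share of the data to the temperature, then renormalize.
# T = 1 keeps natural proportions; T -> 0 approaches uniform sampling across corpora.
total = sum(corpus_hours.values())
scaled = {name: (hours / total) ** temperature for name, hours in corpus_hours.items()}
norm = sum(scaled.values())
sampling_probs = {name: weight / norm for name, weight in scaled.items()}

for name, prob in sampling_probs.items():
    print(f"{name}: {prob:.3f}")
```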
Training was conducted using this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and [TDT configuration](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/hybrid_transducer_ctc/fastconformer_hybrid_tdt_ctc_bpe.yaml).
The tokenizer was constructed from the training set transcripts using this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
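Under the hood, that script trains a SentencePiece model on the transcript text. A minimal, conceptually similar sketch using the sentencepiece package directly is shown below; the input file name and vocabulary size are placeholders, not the values used for this model.

```python
import sentencepiece as spm

# Train a BPE tokenizer on a plain-text file with one transcript per line.
# "transcripts.txt" and vocab_size=1024 are placeholders; the actual NeMo script
# (process_asr_text_tokenizer.py) reads manifests and exposes additional options.
spm.SentencePieceTrainer.train(
    input="transcripts.txt",
    model_prefix="tokenizer",
    vocab_size=1024,
    model_type="bpe",
)

# Load the trained tokenizer and encode a sample sentence into subword pieces.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
print(sp.encode("hello world", out_type=str))
```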
## <span style="color:#466f00;">Performance</span>
#### Huggingface Open-ASR-Leaderboard Performance
The table below summarizes the WER (%) using a Transducer decoder with greedy decoding.
| **Model** | **Avg WER** | **AMI** | **Earnings-22** | **GigaSpeech** | **LS test-clean** | **LS test-other** | **SPGI Speech** | **TEDLIUM-v3** | **VoxPopuli** |
|:-------------|:-------------:|:---------:|:------------------:|:----------------:|:-----------------:|:-----------------:|:------------------:|:----------------:|:---------------:|
| Quantum_STT_V2.0 | 6.05 | 11.16 | 11.15 | 9.74 | 1.69 | 3.19 | 2.17 | 3.38 | 5.95 |
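For reference, WER values like those above can be computed with a standard scorer. The sketch below uses the jiwer package as an illustration; it is not necessarily the exact scoring pipeline used by the leaderboard, which also applies text normalization.

```python
import jiwer

# Reference transcripts and model outputs (toy examples).
references = ["the quick brown fox jumps over the lazy dog"]
hypotheses = ["the quick brown fox jumped over the lazy dog"]

# Word Error Rate = (substitutions + deletions + insertions) / number of reference words.
wer = jiwer.wer(references, hypotheses)
print(f"WER: {wer * 100:.2f}%")
```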
### Noise Robustness
Performance was also evaluated across different Signal-to-Noise Ratios (SNR) using MUSAN music and noise samples.
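Noisy evaluation audio of this kind is typically produced by scaling a noise clip to a target SNR and adding it to the clean speech. The sketch below is a generic illustration in NumPy; the exact MUSAN mixing protocol used for this evaluation is not specified here.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add noise to speech at the requested signal-to-noise ratio (in dB)."""
    # Loop or trim the noise so it covers the whole speech segment.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Choose the scale so that 10 * log10(speech_power / (scale**2 * noise_power)) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example with synthetic signals standing in for real speech and MUSAN noise.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(8000)
noisy = mix_at_snr(speech, noise, snr_db=10.0)
```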
Performance comparison between standard 16kHz audio and telephony-style audio (μ-law encoded at 8kHz):

| **Audio** | **Avg WER** | **AMI** | **Earnings-22** | **GigaSpeech** | **LS test-clean** | **LS test-other** | **SPGI Speech** | **TEDLIUM-v3** | **VoxPopuli** | **Relative Change** |
|:-------------|:-------------:|:---------:|:------------------:|:----------------:|:-----------------:|:-----------------:|:------------------:|:----------------:|:---------------:|:---------------:|
| Standard 16kHz | 6.05 | 11.16 | 11.15 | 9.74 | 1.69 | 3.19 | 2.17 | 3.38 | 5.95 | - |
| μ-law 8kHz | 6.32 | 11.98 | 11.16 | 10.02 | 1.78 | 3.52 | 2.20 | 3.38 | 6.52 | -4.10% |
These WER scores were obtained using greedy decoding without an external language model.
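The telephony condition above can be approximated by downsampling to 8 kHz and applying 8-bit μ-law companding. The sketch below uses torchaudio for this; the tooling and file paths are assumptions, not a description of the exact pipeline used to produce the table.

```python
import torchaudio
import torchaudio.functional as F

# Load a mono recording ("speech.wav" is a placeholder path); assumed to be 16 kHz.
waveform, sample_rate = torchaudio.load("speech.wav")

# Downsample to telephony bandwidth.
telephony = F.resample(waveform, orig_freq=sample_rate, new_freq=8000)

# Apply 8-bit mu-law companding: encode to 256 levels, then decode back to floats.
encoded = F.mu_law_encoding(telephony, quantization_channels=256)
degraded = F.mu_law_decoding(encoded, quantization_channels=256)

torchaudio.save("speech_mulaw_8k.wav", degraded, sample_rate=8000)
```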
## <span style="color:#466f00;">References</span>
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)

[2] [Efficient Sequence Transduction by Jointly Predicting Tokens and Durations](https://arxiv.org/abs/2304.06795)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [YouTube-Commons: A Massive Open Corpus for Conversational and Multimodal Data](https://huggingface.co/blog/Pclanglais/youtube-commons)

[5] [YODAS: YouTube-Oriented Dataset for Audio and Speech](https://arxiv.org/abs/2406.00899)

[6] [HuggingFace ASR Leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard)

[7] [MOSEL: 950,000 Hours of Speech Data for Open-Source Speech Foundation Model Training on EU Languages](https://arxiv.org/abs/2410.01036)
## <span style="color:#466f00;">Inference:</span>
**Engine**:
* NVIDIA NeMo (see the transcription sketch below)
**Test Hardware**:
* NVIDIA A10
* NVIDIA A100
* NVIDIA A30
* NVIDIA H100
* NVIDIA L4
* NVIDIA L40
* NVIDIA Turing T4
* NVIDIA Volta V100
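A minimal transcription sketch with the NeMo engine listed above, assuming the checkpoint is available as a NeMo-compatible model under the Quantamhash/Quantum_STT_V2.0 repository; the model identifier and audio path are placeholders.

```python
import nemo.collections.asr as nemo_asr

# Assumed Hugging Face identifier for this checkpoint; adjust to the actual repository name.
model = nemo_asr.models.ASRModel.from_pretrained(model_name="Quantamhash/Quantum_STT_V2.0")

# Transcribe a local 16 kHz mono WAV file (placeholder path).
# The return format can vary by NeMo version (plain strings or hypothesis objects).
outputs = model.transcribe(["audio.wav"])
print(outputs[0])
```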
## <span style="color:#466f00;">Ethical Considerations:</span>
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards [here](https://developer.nvidia.com/blog/enhancing-ai-transparency-and-ethical-considerations-with-model-card/).
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## <span style="color:#466f00;">Bias:</span>
Field | Response
---------------------------------------------------------------------------------------------------|---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing | None
Measures taken to mitigate against unwanted bias | None
## <span style="color:#466f00;">Explainability:</span>
Field | Response
------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------
Intended Domain | Speech to Text Transcription
Model Type | FastConformer
Intended Users | This model is intended for developers, researchers, academics, and industries building conversation-based applications.
Output | Text
Describe how the model works | Speech input is encoded into embeddings and passed through the Conformer-based model, which outputs a text transcription.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of | Not Applicable
Technical Limitations & Mitigation | Transcripts may not be 100% accurate. Accuracy varies based on language and characteristics of input audio (Domain, Use Case, Accent, Noise, Speech Type, Context of speech, etc.).
Verified to have met prescribed NVIDIA quality standards | Yes
Performance Metrics | Word Error Rate
Potential Known Risks | If a word was not seen during training and is not present in the vocabulary, it is unlikely to be recognized. Not recommended for word-for-word transcription of incomplete sentences, as accuracy varies based on the context of the input.
Licensing | GOVERNING TERMS: Use of this model is governed by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode.en) license.
## <span style="color:#466f00;">Privacy:</span>
Field | Response
----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------
Generatable or reverse engineerable personal data? | None
Personal data used to create this model? | None
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data.
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/
## <span style="color:#466f00;">Safety:</span>
Field | Response
---------------------------------------------------|----------------------------------
Model Application(s) | Speech to Text Transcription
Describe the life critical impact | None
Use Case Restrictions | Abide by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode.en) License
Model and dataset restrictions | The Principle of least privilege (PoLP) is applied to limit access for dataset generation and model development. Dataset access is restricted during training, and dataset license constraints are adhered to.