rassulya committed · Commit 30b9f81 · verified · 1 Parent(s): 0b7d80e

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +14 -11
README.md CHANGED
@@ -1,20 +1,23 @@
- # Multilingual End-to-End Automatic Speech Recognition for Kazakh, Russian, and English
-
- This dataset accompanies the research paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" (https://arxiv.org/abs/2108.01280), focusing on training a single end-to-end (E2E) automatic speech recognition (ASR) model for Kazakh, Russian, and English. The work explores the development of a multilingual E2E ASR system based on Transformer networks, comparing two output grapheme set construction methods (combined and independent). The impact of language models (LMs) and data augmentation techniques on recognition performance is also evaluated. The repository includes the training recipes, datasets, and pre-trained models. The multilingual models achieve performance comparable to monolingual baselines, with the best monolingual and multilingual models achieving 20.9% and 20.5% average word error rates, respectively, on the combined test set.
-
- ## Dataset Information
-
- This dataset contains audio recordings and corresponding transcriptions in Kazakh, Russian, and English, used for training and evaluating the multilingual E2E ASR model. The data sources include the KSC corpus (https://issai.nu.edu.kz/kz-speech-corpus/), OpenSTT, and CV datasets (https://issai.nu.edu.kz/multilingual-asr/). The specific composition and splits of the datasets are detailed in the associated research paper.
-
- ## Model Information
-
- Pre-trained models are available for monolingual (Kazakh, Russian, English) and multilingual (combined and independent grapheme sets) ASR tasks. These models are based on Transformer networks and trained with varying data augmentation techniques (Speed Perturbation and SpecAugment). The pre-trained models are provided in different configurations, allowing for a range of performance-efficiency trade-offs.
-
- ## Citation
-
- Please cite the associated research paper when using this dataset and the provided pre-trained models. (Citation details omitted as per instructions)
-
- ## Contact Information
-
- For any questions or issues, please contact the authors of the associated research paper. (Contact information omitted as per instructions)
+ # Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English
+
+ ## Dataset Card
+
+ **Repository:** [https://github.com/issai-nu/MultilingualASR](https://github.com/issai-nu/MultilingualASR)
+
+ **Paper:** [A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English](https://arxiv.org/abs/2108.01280)
+
+ **Summary:** This repository contains the code and data for training a multilingual end-to-end (E2E) automatic speech recognition (ASR) model for Kazakh, Russian, and English. The research explores two output grapheme set construction methods (combined and independent) and investigates the impact of language models (LMs) and data augmentation techniques on model performance. The best monolingual and multilingual models achieved comparable performance, with average word error rates (WERs) of 20.9% and 20.5%, respectively, on a combined test set.
+
+ **Pre-trained Models:**
+
+ | Model                    | Large Transformer                  | Large Transformer with Speed Perturbation (SP) | Large Transformer with SP and SpecAugment |
+ |--------------------------|------------------------------------|------------------------------------|------------------------------------|
+ | Monolingual Kazakh       | (link removed as per instructions) | (link removed as per instructions) | (link removed as per instructions) |
+ | Monolingual Russian      | (link removed as per instructions) | (link removed as per instructions) | (link removed as per instructions) |
+ | Monolingual English      | (link removed as per instructions) | (link removed as per instructions) | (link removed as per instructions) |
+ | Multilingual Combined    | (link removed as per instructions) | (link removed as per instructions) | (link removed as per instructions) |
+ | Multilingual Independent | (link removed as per instructions) | (link removed as per instructions) | (link removed as per instructions) |
+
+ **(Note: Links to pre-trained models have been removed as per the instructions.)**
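The dataset card above reports average word error rates (WERs) of 20.9% and 20.5%. As a minimal illustration of the metric (not code from this repository), WER is the word-level Levenshtein edit distance between a hypothesis transcript and the reference, divided by the number of reference words; a sketch in Python, assuming simple whitespace tokenization:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One insertion against a 3-word reference -> WER = 1/3
print(f"{wer('the cat sat', 'the cat sat down'):.3f}")
```

In practice, evaluation toolkits aggregate edit operations over the whole test set before normalizing, rather than averaging per-utterance WERs.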