Commit 0b7d80e (verified, parent 0d1e0f8) by rassulya: Upload README.md with huggingface_hub

# Multilingual End-to-End Automatic Speech Recognition for Kazakh, Russian, and English

This dataset accompanies the research paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" (https://arxiv.org/abs/2108.01280), which trains a single end-to-end (E2E) automatic speech recognition (ASR) model covering Kazakh, Russian, and English. The work develops a multilingual E2E ASR system based on Transformer networks and compares two methods of constructing the output grapheme set (combined and independent). It also evaluates the impact of language models (LMs) and data augmentation techniques on recognition performance. The repository includes the training recipes, datasets, and pre-trained models. The multilingual models perform on par with monolingual baselines: the best monolingual and multilingual models achieve 20.9% and 20.5% average word error rate (WER), respectively, on the combined test set.
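The word error rates quoted above are the word-level edit distance between reference and hypothesis transcriptions, divided by the number of reference words. A minimal sketch of that metric (illustrative only — not the paper's scoring script, which presumably relies on standard ASR tooling):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: Levenshtein distance over word tokens,
    normalized by the reference length."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(r)][len(h)] / len(r)
```

For example, a single substitution in a four-word reference yields a WER of 0.25.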

## Dataset Information

This dataset contains audio recordings and corresponding transcriptions in Kazakh, Russian, and English, used for training and evaluating the multilingual E2E ASR model. The data sources include the KSC corpus (https://issai.nu.edu.kz/kz-speech-corpus/), OpenSTT, and the CV (Common Voice) datasets (https://issai.nu.edu.kz/multilingual-asr/). The specific composition and splits of the datasets are detailed in the associated research paper.

## Model Information

Pre-trained models are available for monolingual (Kazakh, Russian, English) and multilingual (combined and independent grapheme sets) ASR tasks. These models are based on Transformer networks and trained with varying data augmentation techniques (speed perturbation and SpecAugment). The pre-trained models are provided in different configurations, allowing for a range of performance-efficiency trade-offs.
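SpecAugment regularizes training by zeroing out random frequency bands and time spans of the input spectrogram. A minimal pure-Python sketch of those two masking operations (illustrative only: the mask counts and widths are hypothetical defaults, and the actual recipes apply this to log-mel features inside the training pipeline):

```python
import random

def spec_augment(spec, num_freq_masks=1, max_freq_width=8,
                 num_time_masks=1, max_time_width=10, seed=None):
    """Apply SpecAugment-style frequency and time masking to a
    spectrogram given as a list of frames (each a list of mel-bin values).
    Returns a masked copy; the input is left unchanged."""
    rng = random.Random(seed)
    n_frames, n_bins = len(spec), len(spec[0])
    out = [row[:] for row in spec]
    for _ in range(num_freq_masks):           # frequency masking
        w = rng.randint(0, max_freq_width)
        f0 = rng.randint(0, max(0, n_bins - w))
        for t in range(n_frames):
            for f in range(f0, min(f0 + w, n_bins)):
                out[t][f] = 0.0
    for _ in range(num_time_masks):           # time masking
        w = rng.randint(0, max_time_width)
        t0 = rng.randint(0, max(0, n_frames - w))
        for t in range(t0, min(t0 + w, n_frames)):
            for f in range(n_bins):
                out[t][f] = 0.0
    return out
```

Speed perturbation, the other augmentation mentioned, resamples the raw waveform (typically at factors like 0.9, 1.0, 1.1) and is usually done at the audio level rather than on the spectrogram.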

## Citation

Please cite the associated research paper (https://arxiv.org/abs/2108.01280) when using this dataset or the provided pre-trained models.

## Contact Information

For any questions or issues, please contact the authors of the associated research paper.