Datasets:

rassulya committed
Commit 5e56379 · verified · 1 Parent(s): 6e01ca1

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +14 -23
README.md CHANGED
@@ -1,33 +1,24 @@
- # Multilingual End-to-End Automatic Speech Recognition for Kazakh, Russian, and English

- This repository contains dataset to train models for automatic speech recognition (ASR) for Kazakh, Russian, and English, as described in [A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English](https://arxiv.org/abs/2108.01280). The research explores the performance of multilingual E2E ASR models using Transformer networks, comparing different output grapheme set constructions (combined and independent), and evaluating the impact of language models (LMs) and data augmentation techniques. The best monolingual and multilingual models achieved 20.9% and 20.5% average word error rates, respectively, on a combined test set.

- **Repository:** [https://github.com/IS2AI/MultilingualASR]
 
 
- ## Model Description

- This work trains a single E2E ASR model capable of recognizing Kazakh, Russian, and English speech. Two variants of output grapheme set construction are explored: combined and independent.
- The impact of LMs and data augmentation (Speed Perturbation and SpecAugment) on model performance is also investigated. The models are based on Transformer networks.
 
- ## Evaluation Results

- The multilingual models achieve comparable performance to monolingual baselines with a similar number of parameters.

- | Model Type        | Average Word Error Rate (%) |
- |-------------------|-----------------------------|
- | Best Monolingual  | 20.9                        |
- | Best Multilingual | 20.5                        |

- Links to the models can be found in github repository above.

- ## Citation
-
- ```bibtex
- @article{...,
-   title={A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English},
-   author={...},
-   journal={arXiv preprint arXiv:2108.01280},
-   year={2021}
- }
- ```
 
+ ## Hugging Face Dataset Card

+ **Dataset Name:** Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English

+ **Repository:** [https://github.com/IS2AI/MultilingualASR](https://github.com/IS2AI/MultilingualASR)
+ **Summary:** This repository contains the recipe for reproducing the experiments detailed in the paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" ([https://arxiv.org/abs/2108.01280](https://arxiv.org/abs/2108.01280)). The work focuses on training a single end-to-end (E2E) automatic speech recognition (ASR) model for Kazakh, Russian, and English. The research compares monolingual and multilingual models (with combined and independent output grapheme sets), investigates the impact of language models (LMs) and data augmentation techniques, and achieves performance comparable to monolingual baselines (20.9% and 20.5% average word error rates for the best monolingual and multilingual models, respectively, on the combined test set). Pre-trained models are provided.
+ **Table of Pre-trained Models:**

+ | Model                      | Large Transformer    | Large Transformer with Speed Perturbation (SP) | Large Transformer with SP and SpecAugment |
+ |----------------------------|----------------------|------------------------------------------------|-------------------------------------------|
+ | Monolingual Kazakh         | [Model Link Removed] | [Model Link Removed]                           | [Model Link Removed]                      |
+ | Monolingual Russian        | [Model Link Removed] | [Model Link Removed]                           | [Model Link Removed]                      |
+ | Monolingual English        | [Model Link Removed] | [Model Link Removed]                           | [Model Link Removed]                      |
+ | Multilingual (Combined)    | [Model Link Removed] | [Model Link Removed]                           | [Model Link Removed]                      |
+ | Multilingual (Independent) | [Model Link Removed] | [Model Link Removed]                           | [Model Link Removed]                      |
 
 
+ **Citation:**

+ Please cite the original paper: [Add proper citation here based on the arxiv paper]
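The average word error rates quoted in both versions of the card (20.9% and 20.5%) follow the standard WER definition: word-level Levenshtein edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch of that metric, assuming whitespace tokenization; the `wer` helper and the sample sentences are illustrative and not taken from the paper's recipe:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over words / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words (classic DP table).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # match or substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)


if __name__ == "__main__":
    # One substituted word in a four-word reference -> 25% WER.
    print(f"{wer('the cat sat down', 'the cat sad down'):.2%}")
```

In practice a toolkit utility (e.g. ESPnet's scoring scripts or the `jiwer` package) would be used over the full test set, averaging edit counts before normalizing.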