---
license: mit
language:
- kk
- ru
- en
tags:
- audio
- speech
task_categories:
- automatic-speech-recognition
---

## Multilingual Speech Dataset

**Dataset Name:** Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English

**Repository:** https://github.com/IS2AI/MultilingualASR

**Description:** This repository provides the dataset used in the paper "A Study of Multilingual End-to-End Speech Recognition for Kazakh, Russian, and English" (https://arxiv.org/abs/2108.01280). The paper focuses on training a single end-to-end (E2E) ASR model for Kazakh, Russian, and English, comparing monolingual and multilingual approaches (with both combined and independent grapheme sets). It also explores the effects of language models (LMs) and data augmentation. The best monolingual and multilingual models achieve comparable performance, with 20.9% and 20.5% average word error rates, respectively, on the combined test set.
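To illustrate the "combined vs. independent grapheme sets" distinction above, here is a minimal sketch (not the paper's code; the toy transcripts are placeholders for the real corpora) of how the two output vocabularies differ:

```python
# Sketch: "independent" grapheme sets keep one character vocabulary per
# language, while the "combined" set is their union, shared by a single
# multilingual model. Toy transcripts stand in for the real corpora.

def grapheme_set(transcripts):
    """Collect the unique characters (graphemes) across transcripts."""
    return {ch for line in transcripts for ch in line}

kk = ["сәлем әлем"]      # Kazakh (Cyrillic with Kazakh-specific letters)
ru = ["привет мир"]      # Russian (Cyrillic)
en = ["hello world"]     # English (Latin)

independent = {lang: grapheme_set(t)
               for lang, t in [("kk", kk), ("ru", ru), ("en", en)]}
combined = independent["kk"] | independent["ru"] | independent["en"]

print(len(combined))  # size of the shared multilingual output vocabulary
```

Note that Cyrillic "е" and Latin "e" are distinct code points, so the combined set is larger than it may appear; the multilingual model must learn all of them in one softmax layer.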

Included in this dataset:

- A 7-hour evaluation set of Kazakh-accented English audio (native Kazakh speakers reading English sentences from the SpeakingFaces dataset), along with cleaned training data adapted from CommonVoice.
- A 334-hour manually cleaned subset of the OpenSTT dataset for Russian, useful for training robust standalone Russian ASR systems.
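The word error rates quoted above follow the standard WER definition (word-level edit distance divided by reference length). A minimal, dependency-free sketch of the metric, not the paper's evaluation script:

```python
# Word error rate (WER) via Levenshtein distance over word sequences.
# WER = (substitutions + insertions + deletions) / reference word count.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("hello world again", "hello word again"))  # 1 substitution / 3 words
```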

**Citation:**

```bibtex
@inproceedings{mussakhojayeva2021study,
  title={A study of multilingual end-to-end speech recognition for Kazakh, Russian, and English},
  author={Mussakhojayeva, Saida and Khassanov, Yerbolat and Atakan Varol, Huseyin},
  booktitle={Speech and Computer: 23rd International Conference, SPECOM 2021, St. Petersburg, Russia, September 27--30, 2021, Proceedings 23},
  pages={448--459},
  year={2021},
  organization={Springer}
}
```