---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: transcription
      dtype: string
    - name: speaker
      dtype: string
    - name: category
      dtype: string
  splits:
    - name: train
      num_bytes: 43917886678.336
      num_examples: 140482
  download_size: 39649854722
  dataset_size: 43917886678.336
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---
A synthetic dataset based on highly specialised Wikipedia texts, voiced with Yandex SpeechKit using random voices, roles, and speech rates. It can be used to evaluate ASR models that were not trained on these domains and to identify areas a model handles poorly. A version split by category is available here: https://huggingface.co/datasets/rmndrnts/wikipedia_asr_splitted
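A minimal loading sketch with the `datasets` library (requires `pip install datasets[audio]`). The repo id below is an assumption inferred from the split-by-category link, not stated on this card; replace it with the actual id. Streaming avoids downloading the full ~40 GB archive, and `cast_column` resamples the audio on the fly to a rate typical for ASR models.

```python
# Assumed repo id (not confirmed by this card) -- substitute the real one.
DATASET_ID = "rmndrnts/wikipedia_asr"


def load_train(dataset_id: str = DATASET_ID, sampling_rate: int = 16_000):
    """Stream the train split with audio decoded at `sampling_rate` Hz."""
    from datasets import Audio, load_dataset  # requires `datasets[audio]`

    ds = load_dataset(dataset_id, split="train", streaming=True)
    # Resample so each example's audio array matches common ASR inputs.
    return ds.cast_column("audio", Audio(sampling_rate=sampling_rate))


if __name__ == "__main__":
    sample = next(iter(load_train()))
    print(sample["transcription"], sample["speaker"], sample["category"])
```

Each yielded example is a dict with the four features listed above; the `audio` field decodes to an array plus its sampling rate, and `category` can be used to group word-error-rate scores by domain.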