---
language:
- es
- fr
- pt
- it
- ru
- el
- ar
- de
license: cc-by-nc-nd-4.0
task_categories:
- automatic-speech-recognition
- translation
pretty_name: Multilingual TEDx (mTEDx) — SLR100
tags:
- speech
- audio
- tedx
- multilingual
- asr
- speech-translation
configs:
- config_name: ar
data_files:
- split: train
path: ar/train-*
- split: valid
path: ar/valid-*
- split: test
path: ar/test-*
- config_name: de
data_files:
- split: train
path: de/train-*
- split: valid
path: de/valid-*
- split: test
path: de/test-*
dataset_info:
- config_name: ar
features:
- name: id
dtype: string
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: duration
dtype: float32
- name: talk_id
dtype: string
- name: segment_id
dtype: int32
- name: start
dtype: float32
- name: end
dtype: float32
splits:
- name: train
num_bytes: 2391070227.46
num_examples: 16420
- name: valid
num_bytes: 146961659.916
num_examples: 1188
- name: test
num_bytes: 154468753.204
num_examples: 1332
download_size: 2556295615
dataset_size: 2692500640.58
- config_name: de
features:
- name: id
dtype: string
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: duration
dtype: float32
- name: talk_id
dtype: string
- name: segment_id
dtype: int32
- name: start
dtype: float32
- name: end
dtype: float32
splits:
- name: train
num_bytes: 1765760361.764
num_examples: 6764
- name: valid
num_bytes: 256591446.396
num_examples: 1172
- name: test
num_bytes: 245484939.996
num_examples: 1126
download_size: 2035086468
dataset_size: 2267836748.1559997
---
# Multilingual TEDx (mTEDx) — SLR100

## Dataset Description

**Homepage:** [OpenSLR SLR100](https://www.openslr.org/100/)

mTEDx is a multilingual speech recognition and translation corpus built from
TEDx talks. The corpus provides audio recordings and VTT transcripts for 8
languages (Spanish, French, Portuguese, Italian, Russian, Greek, Arabic,
German), with aligned translations into up to 5 languages (English, Spanish,
French, Portuguese, Italian).

**License:** CC BY-NC-ND 4.0

**Contact:** Elizabeth Salesky (esalesky@jhu.edu), Matthew Wiesner (wiesner@jhu.edu)
## Corpus Statistics

Each row in the dataset corresponds to one segment (an individual audio clip
plus its transcript). The tables below report per-language sentence counts and
total audio duration as given in `docs/statistics.txt`.
### Arabic (ar)

| Split | Talks | Sentences | Words | Duration |
|---|---|---|---|---|
| train | 95 | 11 821 | 115 259 | 68 310 s ≈ 18h 58m |
| valid | 7 | 1 079 | 9 374 | 5 280 s ≈ 1h 28m |
| test | 7 | 1 066 | 8 964 | 5 187 s ≈ 1h 26m |
| total | 109 | 13 966 | 133 597 | 78 778 s ≈ 21h 53m |
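The seconds-to-hours conversions in these tables can be sanity-checked with a few lines of Python. This is purely illustrative; the totals themselves come from `docs/statistics.txt` (note the card rounds some cells to the nearest minute, while this sketch truncates):

```python
def fmt_duration(total_seconds: int) -> str:
    """Render a duration in seconds as 'Xh Ym', truncating leftover seconds."""
    hours, rem = divmod(total_seconds, 3600)
    minutes = rem // 60
    return f"{hours}h {minutes}m"

# Arabic train split: 68 310 s as reported above
print(fmt_duration(68310))  # -> 18h 58m
print(fmt_duration(5280))   # -> 1h 28m (Arabic valid split)
```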
### German (de)

| Split | Talks | Sentences | Words | Duration |
|---|---|---|---|---|
| train | 53 | 6 764 | 94 984 | 44 958 s ≈ 12h 29m 18s |
| valid | 9 | 1 172 | 14 661 | 6 893 s ≈ 1h 54m 53s |
| test | 9 | 1 166 | 14 289 | 6 715 s ≈ 1h 51m 55s |
| total | 71 | 9 062 | 123 934 | 58 566 s ≈ 16h 16m 6s |
### All Languages — Download Sizes (original tarballs)

| Config | Language | Tarball size |
|---|---|---|
| es | Spanish | 35 GB |
| fr | French | 34 GB |
| pt | Portuguese | 29 GB |
| it | Italian | 19 GB |
| ru | Russian | 10 GB |
| el | Greek | 5.5 GB |
| ar | Arabic | 3.6 GB |
| de | German | 2.6 GB |
## Dataset Structure

### Schema

Each example corresponds to one audio segment extracted from a full TEDx talk
using the Kaldi `segments` timestamps file.

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique segment id: `<talk_stem>_<index>` (e.g. `14zpc3Nj_e4_0003`) |
| `audio` | Audio | float32 waveform of the segment |
| `transcript` | string | Transcription text |
| `duration` | float32 | Duration of the audio segment in seconds |
| `talk_id` | string | Source talk file stem |
| `segment_id` | int32 | 0-based index of the segment within its talk |
| `start` | float32 | Segment start time within the source talk (seconds) |
| `end` | float32 | Segment end time within the source talk (seconds) |
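As a small illustration of the id convention, the segment id is simply the talk stem joined with a zero-padded index. The 4-digit padding below is inferred from the example ids on this card and is an assumption, not documented behaviour:

```python
def make_segment_id(talk_stem: str, index: int, width: int = 4) -> str:
    """Build a segment id of the form <talk_stem>_<index>.

    The 4-digit zero padding is inferred from example ids like
    14zpc3Nj_e4_0003 and may not match the upload script exactly.
    """
    return f"{talk_stem}_{index:0{width}d}"

print(make_segment_id("14zpc3Nj_e4", 3))  # -> 14zpc3Nj_e4_0003
```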
### Splits

| Split | Description |
|---|---|
| train | Training set |
| valid | Validation / development set |
| test | Test set |
## Usage

```python
from datasets import load_dataset

# Load the Arabic training split
ds = load_dataset("deepdml/mtedx", "ar", split="train")
print(ds[0])
# {
#   'id': '14zpc3Nj_e4_0001',
#   'audio': {'array': array([...], dtype=float32), 'sampling_rate': 16000},
#   'transcript': 'أكل العالم وغص بنخلة',
#   'duration': 4.16,
#   'talk_id': '14zpc3Nj_e4',
#   'segment_id': 1,
#   'start': 9.332,
#   'end': 13.492,
# }

# Stream a large language without downloading everything
ds = load_dataset("deepdml/mtedx", "es", split="train", streaming=True)
for sample in ds:
    audio = sample["audio"]["array"]  # numpy float32 array @ 16 kHz
    text = sample["transcript"]
    dur = sample["duration"]          # seconds
    break

# ASR fine-tuning example (Whisper / wav2vec2)
ds = load_dataset("deepdml/mtedx", "fr", split="train")
ds = ds.select_columns(["audio", "transcript", "duration"])
```
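For fine-tuning it is common to drop outlier segments by length, and since every row carries a `duration` field, a simple predicate is enough. The 1–20 s bounds here are an arbitrary illustration, not a recommendation from the corpus authors:

```python
def within_bounds(example: dict, min_s: float = 1.0, max_s: float = 20.0) -> bool:
    """Predicate suitable for datasets.Dataset.filter: keep segments in [min_s, max_s]."""
    return min_s <= example["duration"] <= max_s

# With the dataset loaded above (requires the datasets library):
# ds = ds.filter(within_bounds)

# Standalone check against plain dicts shaped like mTEDx rows:
rows = [{"duration": 0.4}, {"duration": 4.16}, {"duration": 25.0}]
kept = [r for r in rows if within_bounds(r)]
print(len(kept))  # -> 1
```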
## Source Data

Downloaded from [OpenSLR SLR100](https://www.openslr.org/100/).

Each language pack (`mtedx_<lang>.tgz`) contains:

- `data/<split>/wav/` — full-talk FLAC audio files
- `data/<split>/vtt/` — WebVTT transcript files (`<id>.<lang>.vtt`)
- `data/<split>/txt/` — segments and plain-text transcripts
- `docs/statistics.txt` — per-split statistics

The upload script (`create_mtedx_dataset.py`) slices the full-talk FLAC files
into individual segments using the Kaldi `segments` timestamps and discards
segments shorter than 0.5 s or longer than 30 s.
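The timestamp-parsing and length-filtering step can be sketched roughly as follows. This is a minimal re-implementation assuming the standard Kaldi `segments` format (`<utt_id> <rec_id> <start> <end>`); the real `create_mtedx_dataset.py` may differ, and the actual audio slicing (e.g. with a FLAC reader) is omitted:

```python
MIN_DUR, MAX_DUR = 0.5, 30.0  # bounds stated above

def parse_segments(text: str) -> list[dict]:
    """Parse Kaldi 'segments' lines, keeping only in-bounds segments."""
    kept = []
    for line in text.splitlines():
        if not line.strip():
            continue
        utt_id, rec_id, start, end = line.split()
        start, end = float(start), float(end)
        dur = end - start
        if MIN_DUR <= dur <= MAX_DUR:
            kept.append({"id": utt_id, "talk_id": rec_id,
                         "start": start, "end": end,
                         "duration": round(dur, 3)})
    return kept

sample = """\
14zpc3Nj_e4_0001 14zpc3Nj_e4 9.332 13.492
14zpc3Nj_e4_0002 14zpc3Nj_e4 13.492 13.800
"""
segments = parse_segments(sample)
print(segments)  # second segment (0.308 s) is dropped as too short
```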
## Citation

```bibtex
@inproceedings{salesky2021mtedx,
  title     = {Multilingual TEDx Corpus for Speech Recognition and Translation},
  author    = {Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and
               Roldano Cattoni and Matteo Negri and Marco Turchi and
               Douglas W. Oard and Matt Post},
  booktitle = {Proceedings of Interspeech},
  year      = {2021},
}
```