---
language:
  - ar
  - es
  - fr
  - pt
  - it
  - ru
  - el
  - de
license: cc-by-nc-nd-4.0
task_categories:
  - automatic-speech-recognition
  - translation
pretty_name: Multilingual TEDx (mTEDx) — SLR100
tags:
  - speech
  - audio
  - tedx
  - multilingual
  - asr
  - speech-translation
---

# Multilingual TEDx (mTEDx) — SLR100

## Dataset Description

mTEDx is a multilingual speech recognition and translation corpus built from TEDx Talks.
Original resource: https://www.openslr.org/100/

The corpus provides audio recordings and VTT transcripts for 8 languages (Arabic, Spanish, French, Portuguese, Italian, Russian, Greek, German) with aligned translations into up to 5 languages (English, Spanish, French, Portuguese, Italian).

License: CC BY-NC-ND 4.0
Contact: Elizabeth Salesky (esalesky@jhu.edu), Matthew Wiesner (wiesner@jhu.edu)


## Corpus Statistics

Each row in the dataset corresponds to one VTT segment (an individual audio clip plus its transcript).
The table below reflects sentence counts and total audio duration as reported in each language pack's `docs/statistics.txt`.

### Arabic (ar)

| Split | Talks | Sentences | Words   | Duration             |
|-------|------:|----------:|--------:|----------------------|
| train |    95 |    11 821 | 115 259 | 68 310 s ≈ 18h 58m   |
| valid |     7 |     1 079 |   9 374 |  5 280 s ≈ 1h 28m    |
| test  |     7 |     1 066 |   8 964 |  5 187 s ≈ 1h 26m    |
| total |   109 |    13 966 | 133 597 | 78 778 s ≈ 21h 53m   |

> **Note:** Replace the rows above with the equivalent table for each language once the `docs/statistics.txt` files are available. The script `create_mtedx_dataset.py` reads and prints these statistics automatically when run.
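As a cross-check, per-split statistics like those in the table above can be recomputed from the rows themselves. A minimal sketch, using hypothetical in-memory rows in place of a real `load_dataset(...)` split (the function name `split_stats` is illustrative, not part of the dataset tooling):

```python
def split_stats(rows):
    """Return (sentences, words, total_seconds) for an iterable of rows
    shaped like this dataset's schema (has 'transcript' and 'duration')."""
    sentences = 0
    words = 0
    seconds = 0.0
    for row in rows:
        sentences += 1  # one row == one VTT segment == one "sentence"
        words += len(row["transcript"].split())
        seconds += row["duration"]
    return sentences, words, seconds

# Hypothetical stand-in rows; with the real corpus, pass ds directly.
sample = [
    {"transcript": "hello world", "duration": 2.0},
    {"transcript": "three more words", "duration": 3.5},
]
n, w, s = split_stats(sample)
print(n, w, s)  # 2 5 5.5
```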

### All Languages — Download Sizes (original tarballs)

| Config | Language   | Tarball size |
|--------|------------|-------------:|
| ar     | Arabic     | 3.6 GB       |
| es     | Spanish    | 35 GB        |
| fr     | French     | 34 GB        |
| pt     | Portuguese | 29 GB        |
| it     | Italian    | 19 GB        |
| ru     | Russian    | 10 GB        |
| el     | Greek      | 5.5 GB       |
| de     | German     | 2.6 GB       |

## Dataset Structure

### Schema

Each example corresponds to one audio segment extracted from a full TEDx talk using the VTT timestamps.

| Field        | Type    | Description                                                          |
|--------------|---------|----------------------------------------------------------------------|
| `id`         | string  | Unique segment id: `<talk_stem>_<index>` (e.g. `14zpc3Nj_e4_0003`)   |
| `talk_id`    | string  | Source talk file stem                                                |
| `segment_id` | int32   | 0-based index of the segment within its talk                         |
| `audio`      | Audio   | float32 waveform of the segment                                      |
| `duration`   | float32 | Duration of the audio segment in seconds                             |
| `transcript` | string  | Transcription text from the VTT file                                 |
| `start`      | float32 | Segment start time within the source talk (seconds)                  |
| `end`        | float32 | Segment end time within the source talk (seconds)                    |
| `language`   | string  | Language code of the config (e.g. `ar`)                              |

### Splits

| Split | Description                  |
|-------|------------------------------|
| train | Training set                 |
| valid | Validation / development set |
| test  | Test set                     |

## Usage

```python
from datasets import load_dataset

# Load the Arabic training split
ds = load_dataset("deepdml/mtedx", "ar", split="train")
print(ds[0])
# {
#   'id':         '14zpc3Nj_e4_0001',
#   'talk_id':    '14zpc3Nj_e4',
#   'segment_id': 1,
#   'audio':      {'array': array([...], dtype=float32), 'sampling_rate': 16000},
#   'duration':   4.16,
#   'transcript': 'أكل العالم وغص بنخلة',
#   'start':      9.332,
#   'end':        13.492,
#   'language':   'ar'
# }

# Stream a large language without downloading everything
ds = load_dataset("deepdml/mtedx", "es", split="train", streaming=True)
for sample in ds:
    audio = sample["audio"]["array"]           # numpy float32 array @ 16 kHz
    text  = sample["transcript"]
    dur   = sample["duration"]                 # seconds
    break

# ASR fine-tuning example (Whisper / wav2vec2)
ds = load_dataset("deepdml/mtedx", "fr", split="train")
ds = ds.select_columns(["audio", "transcript", "duration"])
```
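For fine-tuning, it is common to cap segment length below the corpus maximum of 30 s. A minimal pure-Python sketch of such a filter, using hypothetical stand-in rows; with the real dataset you would express the same predicate via `ds.filter(...)`:

```python
# Hypothetical stand-in rows matching the schema above.
rows = [
    {"transcript": "short clip", "duration": 2.5},
    {"transcript": "a much longer clip", "duration": 28.0},
]

def cap_duration(rows, max_seconds=20.0):
    """Drop segments longer than max_seconds (illustrative threshold)."""
    return [r for r in rows if r["duration"] <= max_seconds]

kept = cap_duration(rows)
print([r["duration"] for r in kept])  # [2.5]
```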

## Source Data

Downloaded from [OpenSLR SLR100](https://www.openslr.org/100/).
Each language pack (`mtedx_<lang>.tgz`) contains:

- `data/<split>/wav/` — full-talk FLAC audio files
- `data/<split>/vtt/` — WebVTT transcript files (`<id>.<lang>.vtt`)
- `data/<split>/txt/` — plain-text transcripts
- `docs/statistics.txt` — per-split statistics

The upload script (`create_mtedx_dataset.py`) slices the full-talk FLAC files into individual segments using the VTT timestamps and discards segments shorter than 0.5 s or longer than 30 s.
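The slicing step can be sketched as follows. This is an illustrative reconstruction, not the actual code of `create_mtedx_dataset.py`: it parses WebVTT cue timestamps and applies the 0.5–30 s duration filter described above.

```python
import re

# Matches WebVTT cue timing lines such as "00:00:09.332 --> 00:00:13.492".
CUE_RE = re.compile(
    r"(\d+):(\d{2}):(\d{2})\.(\d{3}) --> (\d+):(\d{2}):(\d{2})\.(\d{3})"
)

def vtt_time_to_seconds(h, m, s, ms):
    # Round to millisecond precision, matching the VTT timestamp resolution.
    return round(int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0, 3)

def parse_cues(vtt_text):
    """Yield (start, end) pairs in seconds for each cue timing line."""
    for match in CUE_RE.finditer(vtt_text):
        g = match.groups()
        yield vtt_time_to_seconds(*g[:4]), vtt_time_to_seconds(*g[4:])

def keep_segment(start, end, min_dur=0.5, max_dur=30.0):
    """Apply the duration filter described above."""
    return min_dur <= (end - start) <= max_dur

vtt = "00:00:09.332 --> 00:00:13.492\nsome transcript line\n"
cues = list(parse_cues(vtt))
print(cues)  # [(9.332, 13.492)]
```

With the (start, end) pairs in hand, each segment's waveform is a slice of the full-talk audio: `audio[int(start * sr) : int(end * sr)]` at sample rate `sr`.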


## Citation

```bibtex
@inproceedings{salesky2021mtedx,
  title     = {Multilingual TEDx Corpus for Speech Recognition and Translation},
  author    = {Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and
               Roldano Cattoni and Matteo Negri and Marco Turchi and
               Douglas W. Oard and Matt Post},
  booktitle = {Proceedings of Interspeech},
  year      = {2021},
}
```