---
dataset_info:
  features:
    - name: id
      dtype: int32
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: arabic
      dtype: string
    - name: english
      dtype: string
  splits:
    - name: train
      num_bytes: 1494439642.810786
      num_examples: 2228
    - name: validation
      num_bytes: 186469553.36575875
      num_examples: 278
    - name: test
      num_bytes: 187140351.28793773
      num_examples: 279
  download_size: 1848856905
  dataset_size: 1868049547.4644823
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# FLEURS-AR-EN Dataset

## Dataset Description

FLEURS-AR-EN is an Arabic-to-English dataset designed for speech translation. It is derived from Google's FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) dataset, pairing each Arabic audio sample with its Arabic transcription and English translation.

## Overview

- Task: Speech Translation
- Languages: Arabic (source) → English (target)
- Source: Google FLEURS dataset
- Dataset Size: 1.87 GB (1,868,049,547 bytes)
- Download Size: 1.85 GB (1,848,856,905 bytes)

## Dataset Structure

### Features

- `id` (`int32`): Unique identifier for each example
- `audio` (audio): Audio clip with a 16 kHz sampling rate
- `arabic` (`string`): Arabic transcription
- `english` (`string`): English translation
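Because the `audio` feature is stored at a fixed 16 kHz sampling rate, a clip's duration follows directly from its sample count. A minimal sketch, using a synthetic waveform in place of a decoded audio array (`clip_duration_seconds` is a hypothetical helper, not part of the dataset):

```python
SAMPLING_RATE = 16_000  # matches the dataset's audio feature


def clip_duration_seconds(waveform, sampling_rate=SAMPLING_RATE):
    """Duration in seconds of a decoded audio sample array."""
    return len(waveform) / sampling_rate


# Stand-in for a decoded audio array: two seconds of silence.
fake_audio = [0.0] * (2 * SAMPLING_RATE)
print(clip_duration_seconds(fake_audio))  # 2.0
```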

### Splits

- Train: 2,228 examples (1.49 GB)
- Validation: 278 examples (186.47 MB)
- Test: 279 examples (187.14 MB)
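These counts amount to a roughly 80/10/10 split, which can be checked directly from the numbers above:

```python
# Split sizes as listed above.
splits = {"train": 2228, "validation": 278, "test": 279}
total = sum(splits.values())  # 2785 examples overall

for name, count in splits.items():
    print(f"{name}: {count} examples ({100 * count / total:.1f}%)")
# train: 2228 examples (80.0%)
# validation: 278 examples (10.0%)
# test: 279 examples (10.0%)
```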

### Data Files

The dataset is organized into the following structure:

```
data/
├── train-*
├── validation-*
└── test-*
```

## Dataset Details

The dataset was created by aligning the Arabic and English portions of FLEURS: the IDs present in both language subsets were identified, and the two subsets were merged on those shared identifiers.
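The merge step described above can be sketched as a simple join on shared IDs. The toy rows below are illustrative stand-ins, not the real FLEURS columns:

```python
# Hypothetical per-language records, keyed by example ID:
# each language subset carries an integer id plus its text.
arabic_rows = {0: "نص عربي", 1: "نص آخر", 3: "مثال"}
english_rows = {0: "Arabic text", 2: "unpaired", 3: "an example"}

# Keep only the IDs present in both languages, then merge on them.
common_ids = sorted(arabic_rows.keys() & english_rows.keys())
merged = [
    {"id": i, "arabic": arabic_rows[i], "english": english_rows[i]}
    for i in common_ids
]
print([row["id"] for row in merged])  # [0, 3]
```

IDs that appear in only one language (1 and 2 above) are dropped, so every merged example has audio, transcription, and translation aligned.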

## Citation

```bibtex
@article{fleurs2022arxiv,
  title   = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author  = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and
             Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  journal = {arXiv preprint arXiv:2205.12446},
  url     = {https://arxiv.org/abs/2205.12446},
  year    = {2022},
}
```

## Contact

For questions or issues related to the dataset, please contact: Farah Abdou (faraahabdou@gmail.com)