---
license: mit
task_categories:
  - translation
  - token-classification
language:
  - ur
  - en
  - zh
  - ar
  - hy
  - ak
tags:
  - nmt
  - parallel-corpus
  - multilingual
  - urdu
  - large-scale
  - bitext
  - synthetic-data
pretty_name: Vast Urdu Parallel Corpus
---

# Vast Urdu Parallel Corpus

## Dataset Description

Vast-Urdu is a large-scale collection of parallel text corpora filtered specifically to support research on the Urdu (ur) language. The dataset was extracted from liboaccn/nmt-parallel-corpus to provide a dedicated resource for Neural Machine Translation (NMT), cross-lingual understanding, and token-classification tasks involving Urdu.

## Source Data

The data is sourced from a massive web-scale crawl and contains sentence-aligned pairs between Urdu and several other languages, including:

- English (en)
- Chinese (zh)
- Arabic (ar)
- Armenian (hy)
- Akan (ak)
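Web-crawled (and partly synthetic) bitext usually benefits from basic cleaning before training. The sketch below shows one common first-pass heuristic, a word-count length-ratio filter; the pair format and threshold are illustrative assumptions, not part of this dataset's pipeline:

```python
def filter_by_length_ratio(pairs, max_ratio=3.0):
    """Drop (source, target) pairs whose word-count ratio is implausible.

    A large mismatch in length often signals a misaligned or machine-garbled
    pair in web-crawled parallel corpora. `max_ratio` is an assumed default.
    """
    kept = []
    for src, tgt in pairs:
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if n_src == 0 or n_tgt == 0:
            continue  # discard empty sides outright
        if max(n_src, n_tgt) / min(n_src, n_tgt) <= max_ratio:
            kept.append((src, tgt))
    return kept

sample = [
    ("Hello world", "ہیلو دنیا"),          # 2 vs 2 words: plausible
    ("Hi", "یہ ایک بہت لمبا غیر متعلقہ اردو جملہ ہے جو میل نہیں کھاتا"),  # 1 vs 12: dropped
]
cleaned = filter_by_length_ratio(sample)
print(len(cleaned))  # 1
```

More aggressive pipelines add language identification and deduplication on top of this, but a length-ratio pass alone removes many obviously broken alignments.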

## Dataset Structure

The files are provided in .parquet format for efficient storage and fast loading. Each file represents a language pair (e.g., en-ur.parquet), containing:

- **Source text:** the text in the primary language.
- **Target text:** the corresponding translation in Urdu (or vice versa).

## Usage

You can load this dataset directly with the Hugging Face `datasets` library, selecting a language pair via `data_files`:

```python
from datasets import load_dataset

# Load a single language pair; swap the file name for other pairs.
dataset = load_dataset("ReySajju742/Vast-Urdu", data_files="en-ur.parquet")
print(dataset["train"][0])
```