---
license: cc-by-4.0
language:
  - en
  - hi
  - gu
  - ks
  - te
  - kn
  - pa
  - or
  - ur
  - sd
  - doi
---

# Indic Parallel Corpus: 11 Indian Language Pairs for Machine Translation

This repository contains a parallel corpus for machine translation across 11 Indian language pairs. The data is curated to cover three distinct domains: Governance, Health, and General. This dataset is designed to help researchers and developers build and evaluate robust machine translation models for Indian languages.

## Dataset Description

The corpus provides parallel sentences for a variety of language pairs, with a focus on Hindi as a pivot language. All translation pairs are bidirectional. The data has been sourced and cleaned to be useful for training Neural Machine Translation (NMT) models.


### Languages Covered

The dataset includes the following 11 language pairs:

| Source Language | Target Language | Language Codes |
|---|---|---|
| Hindi | Gujarati | hi - gu |
| Hindi | Kashmiri | hi - ks |
| Hindi | Telugu | hi - te |
| Hindi | Kannada | hi - kn |
| Hindi | Punjabi | hi - pa |
| Hindi | Oriya | hi - or |
| Hindi | Urdu | hi - ur |
| Hindi | Sindhi | hi - sd |
| Hindi | Dogri | hi - doi |
| English | Hindi | en - hi |
| Telugu | English | te - en |

## Dataset Structure

The data is organized by language pair and domain. Each language pair directory contains sub-directories for the specific domains.

### Domains

1. **Governance**: Includes sentences from government documents, press releases, and legal texts.
2. **Health**: Comprises text from medical journals, healthcare advisories, and public health communications.
3. **General**: A broad category including sentences from news articles, websites, and miscellaneous sources.

### Data Format

Each dataset configuration is provided as a single tab-separated text file (`.txt`).

Each line in the file represents a parallel sentence pair, with the source language sentence and the target language sentence separated by a single tab character (`\t`).
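The format above can be parsed with a few lines of Python. This is a minimal sketch: the helper name `parse_parallel` and the sample sentence are illustrative, not part of the dataset.

```python
def parse_parallel(lines):
    """Return (source, target) pairs from tab-separated lines."""
    pairs = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue  # tolerate blank lines
        # The documented format uses exactly one tab between the two sides.
        src, tgt = line.split("\t", 1)
        pairs.append((src, tgt))
    return pairs

# Illustrative line in the documented format (not real corpus data):
sample = ["Hello, how are you?\tनमस्ते, आप कैसे हैं?\n"]
print(parse_parallel(sample))
```

In practice you would pass an open file handle instead of a list, since iterating a file yields its lines.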


## How to Use

You can load this dataset with the Hugging Face `datasets` library. You will need to specify the configuration name, which combines the language pair and the domain.

The configuration name follows the pattern `{src_lang}-{tgt_lang}_{domain}`. For example, to load the Hindi-Gujarati pair from the General domain, you would use `hi-gu_general`.

```python
# Make sure you have the 'datasets' library installed:
# pip install datasets

from datasets import load_dataset

# Example 1: load the English-Hindi pair from the Health domain
en_hi_health_dataset = load_dataset("HimangY/CoRil-Parallel", "en-hi_health")

# Example 2: load the Hindi-Kannada pair from the Governance domain
hi_kn_gov_dataset = load_dataset("HimangY/CoRil-Parallel", "hi-kn_governance")

# Access the data splits (e.g., train)
print(en_hi_health_dataset["train"][0])
```
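The `{src_lang}-{tgt_lang}_{domain}` pattern can also be enumerated programmatically, for example to iterate over every configuration. A small sketch, with the caveat that whether every pair/domain combination actually exists as a configuration is an assumption; the pair and domain lists below are taken from this card.

```python
# Build all configuration names from the pattern "{src}-{tgt}_{domain}".
pairs = ["hi-gu", "hi-ks", "hi-te", "hi-kn", "hi-pa", "hi-or",
         "hi-ur", "hi-sd", "hi-doi", "en-hi", "te-en"]
domains = ["governance", "health", "general"]

configs = [f"{pair}_{domain}" for pair in pairs for domain in domains]
print(len(configs))  # 11 pairs x 3 domains = 33 names
```

Each name in `configs` can then be passed as the second argument to `load_dataset`, as in the examples above.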