---
license: cc-by-4.0
language:
- en
- hi
- gu
- ks
- te
- kn
- pa
- or
- ur
- sd
- doi
---

# Indic Parallel Corpus: 11 Indian Language Pairs for Machine Translation

This repository contains a parallel corpus for machine translation across 11 Indian language pairs. The data is curated to cover three distinct domains: **Governance**, **Health**, and **General**. This dataset is designed to help researchers and developers build and evaluate robust machine translation models for Indian languages.

## 📜 Dataset Description

The corpus provides parallel sentences for a variety of language pairs, with a focus on Hindi as a pivot language. All translation pairs are bidirectional. The data has been sourced and cleaned to be useful for training Neural Machine Translation (NMT) models.
26
+
27
+ ---
28
+
29
+ ## ๐ŸŒ Languages Covered
30
+
31
+ The dataset includes the following 11 language pairs:
32
+
33
+ | Source Language | Target Language | Language Codes |
34
+ |-----------------|-----------------|----------------|
35
+ | Hindi | Gujarati | `hi` - `gu` |
36
+ | Hindi | Kashmiri | `hi` - `ks` |
37
+ | Hindi | Telugu | `hi` - `te` |
38
+ | Hindi | Kannada | `hi` - `kn` |
39
+ | Hindi | Punjabi | `hi` - `pa` |
40
+ | Hindi | Oriya | `hi` - `or` |
41
+ | Hindi | Urdu | `hi` - `ur` |
42
+ | Hindi | Sindhi | `hi` - `sd` |
43
+ | Hindi | Dogri | `hi` - `doi` |
44
+ | English | Hindi | `en` - `hi` |
45
+ | Telugu | English | `te` - `en` |
46
+

---

## 📂 Dataset Structure

The data is organized by language pair and domain. Each language-pair directory contains sub-directories for the specific domains.

### Domains

1. **Governance**: Includes sentences from government documents, press releases, and legal texts.
2. **Health**: Comprises text from medical journals, healthcare advisories, and public health communications.
3. **General**: A broad category including sentences from news articles, websites, and miscellaneous sources.

### Data Format

Each dataset configuration is provided as a single **tab-separated text file** (`.txt`).

Each line in the file represents a parallel sentence pair, with the source-language sentence and the target-language sentence separated by a single tab character (`\t`).
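As a sketch of what reading one of these files by hand looks like, a line can be split on the first tab (the example sentence pair below is illustrative, not taken from the corpus):

```python
def parse_parallel_line(line: str) -> tuple[str, str]:
    """Split one tab-separated corpus line into (source, target) sentences."""
    # Strip the trailing newline, then split on the first tab only,
    # in case a sentence itself contains further tab-like whitespace.
    source, target = line.rstrip("\n").split("\t", maxsplit=1)
    return source, target

# Hypothetical English-Hindi line:
src, tgt = parse_parallel_line("Hello, world.\tनमस्ते दुनिया।\n")
```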

---

## 🚀 How to Use

You can load this dataset using the Hugging Face `datasets` library. You will need to specify the configuration name, which is a combination of the language pair and the domain.

The configuration name follows the pattern `{src_lang}-{tgt_lang}_{domain}`. For example, to load the Hindi-Gujarati pair from the general domain, you would use `hi-gu_general`.
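When iterating over many pairs and domains, the naming pattern can be captured in a tiny helper to avoid typos (the function name here is purely illustrative):

```python
def config_name(src_lang: str, tgt_lang: str, domain: str) -> str:
    """Build a configuration name following the {src_lang}-{tgt_lang}_{domain} pattern."""
    return f"{src_lang}-{tgt_lang}_{domain}"

# For example:
config_name("en", "hi", "health")   # -> "en-hi_health"
config_name("hi", "gu", "general")  # -> "hi-gu_general"
```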

```python
# Make sure you have the 'datasets' library installed:
# pip install datasets

from datasets import load_dataset

# Example 1: Load the English-Hindi pair from the Health domain
en_hi_health_dataset = load_dataset("YOUR_USERNAME/YOUR_REPOSITORY_NAME", "en-hi_health")

# Example 2: Load the Hindi-Kannada pair from the Governance domain
hi_kn_gov_dataset = load_dataset("YOUR_USERNAME/YOUR_REPOSITORY_NAME", "hi-kn_governance")

# Access the data splits (e.g., train)
print(en_hi_health_dataset['train'][0])
```