---
license: mit
task_categories:
- translation
language:
- en
- hi
tags:
- machine-translation
- english-hindi
- parallel-corpus
- synthetic-data
- large-scale
- nlp
- benchmark
- seq2seq
- huggingface-dataset
size_categories:
- 1M<n<10M
---
# English–Hindi Massive Synthetic Translation Dataset
## 🧠 Overview
This dataset is a large-scale synthetic parallel corpus for **English → Hindi machine translation**, designed to stress-test modern sequence-to-sequence models, tokenizers, and large-scale training pipelines.
The corpus contains **10 million aligned sentence pairs** generated using a high-entropy template engine with:
* 100+ subjects
* 100+ verbs
* 100+ objects
* 100+ adjectives, adverbs, metrics, conditions, and scales
* Structured bilingual phrase composition
* Deterministic alignment between English and Hindi
This produces **trillions of possible combinations**, ensuring minimal repetition even at massive scale.
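The scale of that claim is easy to sanity-check. Assuming roughly 100 independent options in each of seven template slots (subject, verb, object, adjective, adverb, metric/condition, scale — the exact slot layout of the generator is an assumption here), the combinatorial space works out as:

```python
# Rough estimate of the template engine's combinatorial space.
# ~100 options per slot across 7 independent slots is an assumption,
# not the generator's exact design.
options_per_slot = 100
slots = 7

combinations = options_per_slot ** slots
print(combinations)  # 100000000000000, i.e. 10**14
```

Even this conservative estimate exceeds 10¹⁴, so drawing 10 million samples leaves the chance of verbatim repetition negligible.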
---
## 📦 Dataset Structure
```
hf_translation_dataset/
├── train.jsonl (8,000,000 sentence pairs)
├── test.jsonl (2,000,000 sentence pairs)
└── README.md
```
Split ratio:
* **Training:** 80%
* **Testing:** 20%
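Because the files are plain JSONL, the split sizes can be verified locally without loading everything into memory. A minimal sketch (the paths assume the directory layout above):

```python
import json

def count_pairs(path):
    """Count sentence pairs in a JSONL file, one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# Example, assuming the files sit in hf_translation_dataset/:
# train_size = count_pairs("hf_translation_dataset/train.jsonl")  # expect 8,000,000
# test_size  = count_pairs("hf_translation_dataset/test.jsonl")   # expect 2,000,000
```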
---
## 🧾 Data Format
Each line is a JSON object:
```json
{
  "id": 934221,
  "en": "AI engineer efficiently_42 build systems condition_17 metric_88 remains optimized_12 and optimized_91 scale_55",
  "hi": "एआई इंजीनियर सिस्टम को कुशलता_42 निर्माण करते हैं स्थिति_17 मेट्रिक_88 अनुकूलित_12 और अनुकूलित_91 पैमाना_55"
}
```
### Fields
| Field | Type    | Description              |
| ----- | ------- | ------------------------ |
| `id`  | Integer | Unique sample identifier |
| `en`  | String  | English source sentence  |
| `hi`  | String  | Hindi translation        |

All text is UTF-8 encoded.
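When consuming the raw JSONL directly, a per-line schema check catches malformed records early. The field names come from the table above; the helper itself is illustrative:

```python
import json

def parse_record(line):
    """Parse one JSONL line and validate the expected id/en/hi schema."""
    record = json.loads(line)
    assert isinstance(record["id"], int), "id must be an integer"
    assert isinstance(record["en"], str), "en must be a string"
    assert isinstance(record["hi"], str), "hi must be a string"
    return record

# Example:
# parse_record('{"id": 934221, "en": "...", "hi": "..."}')
```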
---
## 📊 Dataset Characteristics
* ✔️ Total samples: **10,000,000**
* ✔️ Language pair: **English → Hindi**
* ✔️ Vocabulary size: **100+ per lexical category**
* ✔️ Combinatorial space: **>10¹⁴ unique pairs**
* ✔️ Grammar-driven generation
* ✔️ Balanced template distribution
* ✔️ Deterministic alignment
* ✔️ Streaming-friendly JSONL format
---
## 🎯 Intended Use
This dataset is suitable for:
* Machine translation benchmarking
* Seq2Seq model stress testing
* Tokenizer robustness analysis
* Curriculum learning experiments
* Large-scale distributed training validation
* Synthetic data research
* Parallel corpus augmentation
---
## ⚠️ Limitations
* Synthetic grammar (not natural conversational Hindi).
* No discourse-level coherence.
* No idiomatic expressions or cultural nuance.
* Artificial tokens (`optimized_42`, etc.) are symbolic placeholders.
* Not suitable for production translation systems.
This dataset is intended for **algorithmic benchmarking and scaling research**.
---
## 🤗 How to Load
```python
from datasets import load_dataset
dataset = load_dataset("NNEngine/your-dataset-name")
print(dataset)
```
Streaming mode:
```python
dataset = load_dataset(
    "NNEngine/your-dataset-name",
    streaming=True,
)
```
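In streaming mode each split is an iterable rather than an in-memory table, so a bounded sample is usually pulled with `itertools.islice`. A small sketch (the `dataset["train"]` access assumes the standard split names above):

```python
from itertools import islice

def take(stream, n):
    """Collect the first n examples from a (possibly very large) stream."""
    return list(islice(stream, n))

# With a streaming dataset, for example:
# first_batch = take(dataset["train"], 1000)
```

This avoids materializing all 10 million pairs when only a sample is needed for tokenizer or pipeline checks.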
---
## 📜 License
MIT License
Free for research and educational usage.
---
## ✨ Author
Created by **NNEngine** for large-scale NLP benchmarking and synthetic data research.