NNEngine committed 8b272e0 (verified; parent: bfeb10e): Create README.md

Files changed (1): README.md (+166 −3)
---
license: mit
task_categories:
- translation
language:
- en
- hi
tags:
- translation
- english
- hindi
- machine-learning
- nlp
- seq2seq
size_categories:
- 1M<n<10M
---

# English–Hindi Massive Synthetic Translation Dataset

## 🧠 Overview

This dataset is a large-scale synthetic parallel corpus for **English → Hindi machine translation**, designed to stress-test modern sequence-to-sequence models, tokenizers, and large-scale training pipelines.

The corpus contains **10 million aligned sentence pairs** generated by a high-entropy template engine with:

* 100+ subjects
* 100+ verbs
* 100+ objects
* 100+ adjectives, adverbs, metrics, conditions, and scales
* Structured bilingual phrase composition
* Deterministic alignment between English and Hindi

This produces **trillions of possible combinations**, ensuring minimal repetition even at massive scale.
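
As a back-of-the-envelope check (the exact slot counts are an assumption here, not documented generator internals), seven independent template slots with 100 options each already yield 10¹⁴ distinct fillings:

```python
# Hypothetical slot counts: 100 options in each of 7 template slots
# (subject, verb, object, adjective, adverb/metric, condition, scale).
slots = [100] * 7

combinations = 1
for n in slots:
    combinations *= n

print(f"{combinations:.0e}")  # → 1e+14
```

At 10 million sampled pairs against a space of that size, collisions are rare, which is what keeps repetition minimal.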

---

## 📦 Dataset Structure

```
hf_translation_dataset/
├── train.jsonl   (8,000,000 sentence pairs)
├── test.jsonl    (2,000,000 sentence pairs)
└── README.md
```

Split ratio:

* **Training:** 80%
* **Testing:** 20%
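
An 80/20 split like this can be reproduced deterministically, for example by bucketing on `id` (an illustrative convention only; the published files were not necessarily produced this way):

```python
def split_of(sample_id: int) -> str:
    """Hypothetical rule: ids ending in 0-7 go to train, 8-9 to test."""
    return "train" if sample_id % 10 < 8 else "test"

# Over any contiguous id range the buckets converge to 80/20.
counts = {"train": 0, "test": 0}
for i in range(1000):
    counts[split_of(i)] += 1

print(counts)  # → {'train': 800, 'test': 200}
```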

---

## 🧾 Data Format

Each line is a JSON object:

```json
{
  "id": 934221,
  "en": "AI engineer efficiently_42 build systems condition_17 metric_88 remains optimized_12 and optimized_91 scale_55",
  "hi": "एआई इंजीनियर सिस्टम को कुशलता_42 निर्माण करते हैं स्थिति_17 मेट्रिक_88 अनुकूलित_12 और अनुकूलित_91 पैमाना_55"
}
```

### Fields

| Field | Type    | Description              |
| ----- | ------- | ------------------------ |
| `id`  | Integer | Unique sample identifier |
| `en`  | String  | English sentence         |
| `hi`  | String  | Hindi translation        |

All text is UTF-8 encoded (Unicode safe).
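
Because each line is a standalone JSON object, records can be parsed with the standard library alone; a minimal sketch (the sample line is a simplified record in the same shape as the example above):

```python
import json

# One line from a .jsonl file: a complete JSON object with id/en/hi fields.
line = '{"id": 934221, "en": "AI engineer builds systems", "hi": "एआई इंजीनियर सिस्टम बनाते हैं"}'

record = json.loads(line)
assert set(record) == {"id", "en", "hi"}
print(record["id"], record["en"])  # → 934221 AI engineer builds systems
```

Reading a full file is just `for line in open(path, encoding="utf-8")` with one `json.loads` per line, which is what makes the format streaming-friendly.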

---

## 📊 Dataset Characteristics

* ✔️ Total samples: **10,000,000**
* ✔️ Language pair: **English → Hindi**
* ✔️ Vocabulary size: **100+ per lexical category**
* ✔️ Combinatorial space: **>10¹⁴ unique pairs**
* ✔️ Grammar-driven generation
* ✔️ Balanced template distribution
* ✔️ Deterministic alignment
* ✔️ Streaming-friendly JSONL format

---

## 🎯 Intended Use

This dataset is suitable for:

* Machine translation benchmarking
* Seq2Seq model stress testing
* Tokenizer robustness analysis
* Curriculum learning experiments
* Large-scale distributed training validation
* Synthetic data research
* Parallel corpus augmentation

---

## ⚠️ Limitations

* Synthetic grammar (not natural conversational Hindi).
* No discourse-level coherence.
* No idiomatic expressions or cultural nuance.
* Artificial tokens (`optimized_42`, etc.) are symbolic placeholders.
* Not suitable for production translation systems.

This dataset is intended for **algorithmic benchmarking and scaling research**.

---

## 🤗 How to Load

```python
from datasets import load_dataset

dataset = load_dataset("NNEngine/your-dataset-name")
print(dataset)
```

Streaming mode:

```python
dataset = load_dataset(
    "NNEngine/your-dataset-name",
    streaming=True,
)
```
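
In streaming mode the splits are lazy iterables, so you consume examples one at a time instead of downloading 10M pairs up front. A sketch of that consumption pattern, using a stand-in generator in place of the real hub download (the dataset name above is a placeholder, so this keeps the example runnable offline):

```python
from itertools import islice

# Stand-in for dataset["train"] in streaming mode: any iterable of dicts.
def fake_stream():
    i = 0
    while True:
        yield {"id": i, "en": f"sentence {i}", "hi": f"वाक्य {i}"}
        i += 1

# Take the first 3 examples without materializing the corpus in memory.
first_three = list(islice(fake_stream(), 3))
print([ex["id"] for ex in first_three])  # → [0, 1, 2]
```

With the real dataset, replace `fake_stream()` with `dataset["train"]` from the streaming `load_dataset` call.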

---

## 🏷️ Tags

```
machine-translation
english-hindi
parallel-corpus
synthetic-data
large-scale
nlp
benchmark
seq2seq
huggingface-dataset
```

---

## 📜 License

Released under the **MIT License**. Free for research and educational use.

---

## ✨ Author

Created by **NNEngine** for large-scale NLP benchmarking and synthetic data research.