# BHT25: Bengali-Hindi-Telugu Parallel Corpus for Literary Machine Translation

[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets/sudeshna84/BHT25)
[![Paper](https://img.shields.io/badge/Paper-Data%20in%20Brief-blue)](https://doi.org/XXXXXX)

## Overview

BHT25 is a high-quality trilingual parallel corpus comprising **27,149 sentence triplets** in Bengali (BN), Hindi (HI), and Telugu (TE). The dataset addresses a critical gap in resources for cross-family Indian-language machine translation, particularly for literary and culturally rich content spanning the Indo-Aryan and Dravidian language families.

**Key Features:**

* 🌐 **Three Indian Languages**: Bengali (Indo-Aryan), Hindi (Indo-Aryan), Telugu (Dravidian)
* 📚 **Literary Domain**: Sourced from renowned authors including Rabindranath Tagore and Sarat Chandra Chattopadhyay
* ✨ **Archaic Varieties**: Includes traditional Bengali Sadhu Bhasha for diachronic NLP research
* ✅ **Human-Verified**: 75.8% semantic alignment accuracy validated by expert linguists
* 🔍 **Unique Identifiers**: Each triplet has a unique ID (BHT25_XXXXX) for reproducibility
* 📖 **High Quality**: Composite fluency score of 4.08/5.0 from native speaker ratings
* 🔓 **Open Access**: Released under the CC BY 4.0 license

## Dataset Description

### Languages

| Language | Family | Script | Speakers | ISO 639-1 |
|----------|--------|--------|----------|-----------|
| Bengali | Indo-Aryan | Bengali (Bangla) | ~265M | bn |
| Hindi | Indo-Aryan | Devanagari | ~600M | hi |
| Telugu | Dravidian | Telugu | ~95M | te |

### Dataset Statistics

| Metric | Bengali (bn) | Hindi (hi) | Telugu (te) |
|--------|--------------|------------|-------------|
| **Total Sentences** | 27,149 | 27,149 | 27,149 |
| **Total Tokens** | ~420,000 | ~386,000 | ~445,000 |
| **Avg. Tokens/Sentence** | 15.5 ± 8.3 | 14.3 ± 7.6 | 16.4 ± 9.1 |
| **Avg. Characters/Sentence** | 87.3 ± 48.2 | 82.1 ± 45.7 | 95.7 ± 52.6 |
| **Vocabulary Size** | ~47,856 | ~43,214 | ~52,143 |
| **Min Sentence Length** | 3 tokens | 3 tokens | 3 tokens |
| **Max Sentence Length** | 147 tokens | 132 tokens | 156 tokens |
| **Median Sentence Length** | 14 tokens | 13 tokens | 15 tokens |

> **Note**: Telugu exhibits slightly longer average sentence lengths due to its agglutinative morphology.

### Content Characteristics

The corpus encompasses diverse literary genres to ensure broad applicability:

- **Narrative Fiction** (45.2%): Short stories and novel excerpts
- **Poetry and Verse** (18.7%): Traditional and modern poetry
- **Folk Literature** (15.6%): Folk tales and oral traditions
- **Contemporary Prose** (12.3%): Modern literary essays and articles
- **Classical Literature** (8.2%): Traditional Sadhu Bhasha texts

### Quality Metrics

| Quality Measure | Score | Method |
|-----------------|-------|--------|
| **Alignment Accuracy** | 75.8% | Human validation (500-sample random subset) |
| **Semantic Consistency (bn-te)** | 0.873 | Cross-lingual Word Embedding (CLWE) similarity |
| **Semantic Consistency (bn-hi)** | 0.81 | CLWE similarity |
| **Semantic Consistency (hi-te)** | 0.71 | CLWE similarity |
| **Translation Fluency** | 4.08/5.0 | Expert annotation (composite score, n=3 per language) |
| **Inter-Annotator Agreement** | κ = 0.89 | Fleiss' kappa coefficient |

> Lower semantic consistency between Hindi and Telugu reflects the greater typological distance between the Indo-Aryan and Dravidian families.

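Inter-annotator agreement is reported as Fleiss' kappa, which generalizes Cohen's kappa to more than two raters. A minimal self-contained sketch of the computation (the rating table below is invented for illustration; it is not the BHT25 annotation data):

```python
# Fleiss' kappa from a table where table[i][j] = number of raters
# assigning subject i to category j (equal rater count per subject assumed).
def fleiss_kappa(table):
    n_subjects = len(table)
    n_raters = sum(table[0])
    total = n_subjects * n_raters

    # P_j: proportion of all assignments falling in category j
    p_j = [sum(row[j] for row in table) / total for j in range(len(table[0]))]

    # P_i: observed agreement for subject i
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]

    p_bar = sum(p_i) / n_subjects      # mean observed agreement
    p_e = sum(p * p for p in p_j)      # expected chance agreement

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical 5-point fluency ratings from 3 annotators on 4 sentences
ratings = [
    [0, 0, 0, 0, 3],   # all three raters chose "5"
    [0, 0, 0, 3, 0],
    [0, 0, 1, 2, 0],
    [0, 0, 0, 1, 2],
]
print(round(fleiss_kappa(ratings), 3))  # → 0.415
```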
## Dataset Structure

### Data Format

The dataset is provided in **Apache Parquet format** with the following schema:

```python
{
    'id': string,   # Unique identifier (BHT25_00001 to BHT25_27149)
    'bn': string,   # Bengali sentence (UTF-8 encoded)
    'hi': string,   # Hindi sentence (UTF-8 encoded)
    'te': string    # Telugu sentence (UTF-8 encoded)
}
```

### Example Triplets

```python
{
    'id': 'BHT25_00001',
    'bn': 'হুগলি জেলার সপ্তগ্রামে দুই ভাই নীলাম্বর ও পীতাম্বর চক্রবর্তী বাস করিত',
    'hi': 'हुगली जिले का सप्तग्राम-उसमें दो भाई नीलाम्बर व पीताम्बर रहते थे',
    'te': 'హుగ్లీ జిల్లాలోని సప్తగ్రామ్-దీనికి ఇద్దరు సోదరులు నీలాంబర్ మరియు పితాంబర్ అక్కడ నివసించేవారు.'
}
```

```python
{
    'id': 'BHT25_00015',
    'bn': 'আজ সকালে নীলাম্বর চন্ডীমণ্ডপের একধারে বসিয়া তামাক খাইতেছিল',
    'hi': 'आज सवेरे नीलाम्बर चण्डी-मण्डप में बैठा हुक्का पी रहा था',
    'te': 'ఈ ఉదయం నీలాంబర్ చండీ-మండపంలో కూర్చుని హుక్కా తాగుతున్నాడు.'
}
```

### Data Splits

The dataset is provided as a **single unified corpus** without pre-defined train/development/test splits. This design choice maximizes research flexibility, allowing users to:

- Create custom split ratios (80-10-10, 70-15-15, 90-5-5, etc.)
- Implement k-fold cross-validation
- Combine with other datasets
- Use deterministic splitting via unique IDs

**Suggested Split Strategy** (for standardization):
- **Train**: 80% (21,719 triplets)
- **Development**: 10% (2,715 triplets)
- **Test**: 10% (2,715 triplets)

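The deterministic splitting via unique IDs mentioned above can be sketched as follows: hashing each triplet's `id` yields a stable split assignment that is independent of row order, random seeds, and library versions. The 80/10/10 thresholds below simply mirror the suggested ratios; `split_of` is an illustrative helper, not part of the dataset.

```python
import hashlib

def split_of(triplet_id: str) -> str:
    """Deterministically map a BHT25 ID to train/dev/test (80/10/10)."""
    # hashlib is stable across runs and machines, unlike Python's hash()
    digest = hashlib.md5(triplet_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket < 80:
        return "train"
    if bucket < 90:
        return "dev"
    return "test"

# The same ID always lands in the same split
assert split_of("BHT25_00001") == split_of("BHT25_00001")
print(split_of("BHT25_00001"))
```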
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("sudeshna84/BHT25")

# Access data
print(f"Total samples: {len(dataset['train'])}")
print(f"First example:\n{dataset['train'][0]}")

# Example output:
# Total samples: 27149
# First example:
# {
#   'id': 'BHT25_00001',
#   'bn': 'হুগলি জেলার সপ্তগ্রামে দুই ভাই নীলাম্বর ও পীতাম্বর চক্রবর্তী বাস করিত',
#   'hi': 'हुगली जिले का सप्तग्राम-उसमें दो भाई नीलाम्बर व पीताम्बर रहते थे',
#   'te': 'హుగ్లీ జిల్లాలోని సప్తగ్రామ్-దీనికి ఇద్దరు సోదరులు నీలాంబర్ మరియు పితాంబర్ అక్కడ నివసించేవారు.'
# }
```

### Creating Train/Dev/Test Splits

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("sudeshna84/BHT25", split="train")

# Create 80-10-10 split (reproducible with seed)
train_test = dataset.train_test_split(test_size=0.2, seed=42)
train_dataset = train_test['train']   # 21,719 triplets
temp_dataset = train_test['test']     # 5,430 triplets

# Further split the held-out portion into dev and test
dev_test = temp_dataset.train_test_split(test_size=0.5, seed=42)
dev_dataset = dev_test['train']       # 2,715 triplets
test_dataset = dev_test['test']       # 2,715 triplets

print(f"Train: {len(train_dataset)}, Dev: {len(dev_dataset)}, Test: {len(test_dataset)}")
# Output: Train: 21719, Dev: 2715, Test: 2715
```

### Accessing Specific Language Pairs

```python
# Extract Bengali-Hindi pairs
bn_hi_pairs = [(item['bn'], item['hi']) for item in dataset['train']]
print(f"Bengali-Hindi pairs: {len(bn_hi_pairs)}")

# Extract Bengali-Telugu pairs
bn_te_pairs = [(item['bn'], item['te']) for item in dataset['train']]
print(f"Bengali-Telugu pairs: {len(bn_te_pairs)}")

# Extract Hindi-Telugu pairs
hi_te_pairs = [(item['hi'], item['te']) for item in dataset['train']]
print(f"Hindi-Telugu pairs: {len(hi_te_pairs)}")
```

### Integration with Translation Models

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from datasets import load_dataset

# Load dataset
dataset = load_dataset("sudeshna84/BHT25", split="train")

# Example: load an IndicTrans2 indic-indic model for Bengali→Hindi
# (the en-indic checkpoints only translate from English; IndicTrans2
# models also require trust_remote_code=True)
model_name = "ai4bharat/indictrans2-indic-indic-dist-320M"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)

# Prepare data for fine-tuning
def preprocess_function(examples):
    # IndicTrans2 expects source and target language tags prepended to the
    # source sentence (normally inserted by IndicProcessor from the
    # ai4bharat IndicTransToolkit, which also handles normalization)
    inputs = [f"ben_Beng hin_Deva {text}" for text in examples['bn']]
    targets = list(examples['hi'])

    model_inputs = tokenizer(
        inputs,
        max_length=128,
        truncation=True,
        padding='max_length'
    )

    labels = tokenizer(
        targets,
        max_length=128,
        truncation=True,
        padding='max_length'
    )

    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Tokenize dataset
tokenized_dataset = dataset.map(
    preprocess_function,
    batched=True,
    remove_columns=dataset.column_names
)

# Now ready for training with the HuggingFace Trainer!
```

### Data Analysis Example

```python
import pandas as pd
import matplotlib.pyplot as plt
from datasets import load_dataset

# Load dataset
dataset = load_dataset("sudeshna84/BHT25", split="train")

# Convert to pandas DataFrame
df = pd.DataFrame(dataset)

# Analyze sentence lengths (in whitespace tokens)
df['bn_length'] = df['bn'].str.split().str.len()
df['hi_length'] = df['hi'].str.split().str.len()
df['te_length'] = df['te'].str.split().str.len()

# Print statistics
print("Sentence Length Statistics (tokens):")
print(df[['bn_length', 'hi_length', 'te_length']].describe())

# Example output:
#        bn_length  hi_length  te_length
# count   27149.00   27149.00   27149.00
# mean       15.48      14.27      16.39
# std         8.31       7.58       9.12
# min         3.00       3.00       3.00
# 25%        10.00       9.00      10.00
# 50%        14.00      13.00      15.00
# 75%        19.00      18.00      21.00
# max       147.00     132.00     156.00

# Visualize length distributions
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
df['bn_length'].hist(bins=50, ax=axes[0], color='skyblue', edgecolor='black')
axes[0].set_title('Bengali Sentence Length Distribution')
axes[0].set_xlabel('Tokens')
axes[0].set_ylabel('Frequency')

df['hi_length'].hist(bins=50, ax=axes[1], color='lightcoral', edgecolor='black')
axes[1].set_title('Hindi Sentence Length Distribution')
axes[1].set_xlabel('Tokens')

df['te_length'].hist(bins=50, ax=axes[2], color='lightgreen', edgecolor='black')
axes[2].set_title('Telugu Sentence Length Distribution')
axes[2].set_xlabel('Tokens')

plt.tight_layout()
plt.savefig('sentence_length_distribution.png', dpi=300)
plt.show()
```

## Applications

This corpus supports a wide range of NLP research:

### 1. Machine Translation
- **Neural MT training and evaluation**: Fine-tune mBART, mT5, IndicTrans2
- **Cross-family translation**: Study challenges in Indo-Aryan ↔ Dravidian transfer
- **Low-resource language pairs**: Bootstrap Hindi-Telugu models via a Bengali pivot
- **Domain adaptation**: Literary translation quality assessment

### 2. Cross-Lingual Analysis
- **Cross-lingual word embeddings**: Evaluation on semantic similarity tasks
- **Syntactic divergence analysis**: Compare word order and morphological strategies
- **Translation quality estimation**: Automatic metric development for Indian languages

### 3. Multilingual NLP
- **Multilingual language model fine-tuning**: BERT, XLM-R, mBERT
- **Zero-shot translation**: Transfer learning across language families
- **Multilingual sentiment/emotion analysis**: Literary emotion preservation

### 4. Linguistic Research
- **Comparative morphology**: Agglutinative (Telugu) vs. inflectional (Bengali/Hindi)
- **Literary translation studies**: Style and cultural adaptation analysis
- **Diachronic NLP**: Archaic Bengali (Sadhu Bhasha) processing
- **Typological studies**: Word order, case marking, verb morphology

### 5. Quality Estimation
- **Alignment algorithm evaluation**: Benchmark Gale-Church and CLWE-based methods
- **Translation quality metrics**: BLEU, chrF, METEOR for Indic languages
- **Human evaluation correlation**: Automatic metrics vs. expert ratings

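For the metric-development use cases above, character-level metrics such as chrF tend to suit Indic scripts better than word-level BLEU. The sketch below is a deliberately simplified chrF-style score (single reference, uniform n-gram weights, no whitespace handling); use sacreBLEU for reportable numbers.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Multiset of character n-grams of a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF-style score: mean char n-gram F-beta over n = 1..max_n.

    Illustrative sketch only, not the official chrF implementation.
    """
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # string shorter than n
        overlap = sum((hyp & ref).values())
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            f_scores.append(0.0)
        else:
            b2 = beta * beta  # beta = 2 weights recall higher, as in chrF
            f_scores.append((1 + b2) * prec * rec / (b2 * prec + rec))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0

print(simple_chrf("హుక్కా తాగుతున్నాడు", "హుక్కా తాగుతున్నాడు"))  # identical → 1.0
```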
## Methodology

### Data Collection Pipeline

```
┌─────────────────────────┐
│ Literary Text Sources   │
│ (Tagore, Sarat Chandra, │
│  Folk Literature, etc.) │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Digitization & OCR      │
│ (Google Cloud Vision,   │
│  Tesseract 4.1)         │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Text Preprocessing      │
│ (Unicode normalization, │
│  script standardization)│
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Sentence Segmentation   │
│ (Language-specific      │
│  punctuation rules)     │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Trilingual Alignment    │
│ - Gale-Church (length)  │
│ - CLWE similarity       │
│ - Hybrid scoring        │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Quality Validation      │
│ - Automatic filtering   │
│ - Expert review (n=9)   │
│ - Fluency rating        │
│ - IAA: κ=0.89           │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│ Final Corpus (27,149)   │
│ Format: Parquet         │
│ Unique IDs assigned     │
└─────────────────────────┘
```

### Alignment Algorithm

The corpus employs a **hybrid alignment approach**:

1. **Gale-Church Length-Based Alignment**:
   - Character-count ratios optimized for Indian languages
   - Adjusted for Telugu's agglutinative morphology

2. **CLWE Semantic Refinement**:
   - FastText multilingual embeddings (300-dim)
   - Cosine similarity threshold: ≥0.7

3. **Manual Validation**:
   - 500-sample random verification
   - Inter-annotator agreement: κ = 0.89

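The CLWE refinement step boils down to a cosine-similarity filter: a candidate sentence pair survives only if its embeddings clear the 0.7 threshold. A minimal sketch, with toy 3-dimensional vectors standing in for real 300-dim FastText embeddings in a shared cross-lingual space (`keep_pair` is an illustrative helper, not corpus tooling):

```python
import math

THRESHOLD = 0.7  # cosine cutoff used in the CLWE refinement stage

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def keep_pair(src_vec, tgt_vec, threshold=THRESHOLD):
    """Accept a candidate sentence pair if its embeddings are similar enough."""
    return cosine(src_vec, tgt_vec) >= threshold

# Toy "sentence embeddings": a well-aligned pair and a mismatched one
aligned = ([0.9, 0.1, 0.2], [0.8, 0.2, 0.3])
misaligned = ([0.9, 0.1, 0.2], [-0.1, 0.9, 0.1])

print(keep_pair(*aligned))      # True  (cosine ≈ 0.98)
print(keep_pair(*misaligned))   # False (cosine ≈ 0.02)
```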
### Quality Assurance

**Three-stage validation**:
- **Stage 1**: Automatic filtering (length, encoding, language detection)
- **Stage 2**: Expert review by 9 native speakers (3 per language)
- **Stage 3**: Iterative refinement based on consensus

**Fluency Rating Scale**:
- 5: Perfectly natural and fluent
- 4: Minor awkwardness, generally fluent
- 3: Understandable but with noticeable issues
- 2: Significant fluency problems
- 1: Incomprehensible or severely malformed

Mean corpus fluency: **4.08 ± 0.67**

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{sani2024bht25,
  title={BHT25: A Bengali-Hindi-Telugu Parallel Corpus for Enhanced Literary Machine Translation},
  author={Sani, Sudeshna and Gangashetty, Suryakanth V and Samudravijaya, K and Nandi, Anik and Priya, Aruna and Kumar, Vineeth and Dubey, Akhilesh Kumar},
  journal={Data in Brief},
  year={2024},
  volume={XX},
  pages={XXXXX},
  publisher={Elsevier},
  doi={10.1016/j.dib.2024.XXXXX}
}
```

**Related Research** (ESA-NMT Model):

```bibtex
@article{sani2024esa,
  title={Emotion-Semantic-Aware Neural Machine Translation for Cross-Family Indian Languages},
  author={Sani, Sudeshna and Gangashetty, Suryakanth V and others},
  journal={IEEE Access},
  year={2024},
  doi={10.1109/ACCESS.2024.XXXXXXX},
  note={Under review}
}
```

## License

This dataset is released under the **Creative Commons Attribution 4.0 International License (CC BY 4.0)**.

[![License: CC BY 4.0](https://licensebuttons.net/l/by/4.0/88x31.png)](https://creativecommons.org/licenses/by/4.0/)

**You are free to:**
- ✅ **Share**: Copy and redistribute the material in any medium or format
- ✅ **Adapt**: Remix, transform, and build upon the material for any purpose, even commercially

**Under the following terms:**
- **Attribution**: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

## Ethical Considerations

### Copyright Compliance
- All source texts are either in the public domain (published >95 years ago) or used with explicit permission
- Author attributions maintained in metadata
- No unauthorized copyrighted material included

### Annotator Welfare
- Fair compensation (₹500/hour) for all human annotators
- Work limited to 4 hours/day to prevent fatigue
- Option to decline sensitive content

### Content Screening
- No explicit sexual content
- No graphic violence or hate speech
- No personally identifiable information
- 0.3% of extracted sentences (78 triplets) excluded on ethical grounds

### Data Privacy
- All names in the corpus are fictional literary characters or historical public figures
- No user-generated content or social media data
- Full compliance with data protection regulations

## Known Limitations

### Alignment Quality
- **5.8%** of triplets have suboptimal alignment (validation score = 0)
- Occurs primarily in complex literary passages with metaphorical language
- Quality scores provided in metadata for user-aware filtering

### Genre Imbalance
- Corpus skews toward narrative fiction (**45.2%**)
- Poetry and technical writing are underrepresented
- Users should account for domain bias in downstream tasks

### Archaic Language
- Bengali Sadhu Bhasha (8.2%) differs from contemporary Bengali
- May pose challenges for modern MT systems
- Valuable for diachronic NLP research
- Can be filtered using genre metadata if needed

### Residual OCR Errors
- Estimated error rate: **<0.5%** per language
- Most common in rare characters and conjuncts
- Users are encouraged to report errors via GitHub issues

## Contributing

We welcome contributions that improve dataset quality:

### Reporting Issues
- **GitHub Issues**: [github.com/sudeshna84/BHT25](https://github.com/sudeshna84/BHT25)
- **Email**: sudeshna.sani@klef.edu.in

### Corrections and Improvements
- Submit pull requests with specific triplet corrections
- Provide evidence (source text citations) for proposed changes
- All contributions will be reviewed and credited

## Authors and Affiliations

**Sudeshna Sani**¹ *(Corresponding Author)*
**Suryakanth V Gangashetty**¹
**Samudravijaya K**¹
**Anik Nandi**²
**Aruna Priya**³
**Vineeth Kumar**³
**Akhilesh Kumar Dubey**¹

¹ Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation (KLEF), Guntur, Andhra Pradesh, India
² School of Business, Woxsen University, Hyderabad, Telangana, India
³ School of Technology, Woxsen University, Hyderabad, Telangana, India

## Contact

**Corresponding Author**: Sudeshna Sani
📧 Email: sudeshna.sani@klef.edu.in
🔗 GitHub: [github.com/sudeshna84](https://github.com/sudeshna84)
🏢 Affiliation: KLEF, Guntur, India

For dataset-specific inquiries:
- Open an issue in this repository
- Email the corresponding author
- Use the HuggingFace Discussions tab

## Acknowledgments

We gratefully acknowledge:

- **Native speakers** who participated in manual validation (9 expert annotators)
- **Literary estates** for permission to use copyrighted source materials
- **India's National Translation Mission** for inspiring this work
- The **AI4Bharat** initiative for Indic NLP tools and resources
- **KLEF** and **Woxsen University** for institutional support

## Changelog

### Version 1.0 (December 2024) - Initial Release
- 27,149 sentence triplets
- Three languages: Bengali, Hindi, Telugu
- Quality metrics: 75.8% alignment accuracy, 4.08/5.0 fluency
- Parquet format with unique IDs
- Complete metadata and documentation

## Roadmap

Planned future enhancements:

- **v1.1** (Q1 2025): Add sentence-level emotion annotations
- **v2.0** (Q3 2025): Expand to 50,000 triplets
- **v2.1** (Q4 2025): Include parallel audio for multilingual speech research
- **v3.0** (2026): Extend to Tamil and Kannada (pan-South-Indian coverage)

---

**Dataset Version**: 1.0
**Last Updated**: December 19, 2024
**DOI**: [Will be assigned upon Data in Brief publication]
**HuggingFace Downloads**: [![Downloads](https://img.shields.io/badge/dynamic/json?color=blue&label=downloads&query=%24.downloads&url=https%3A%2F%2Fhuggingface.co%2Fapi%2Fdatasets%2Fsudeshna84%2FBHT25)](https://huggingface.co/datasets/sudeshna84/BHT25)

---

*For comprehensive methodology details, statistical analyses, and validation procedures, please refer to our Data in Brief paper (citation above).*