Bhojpuri Behavioral Corpus (Phase 2: Engineered Refinement)
⚠️ NOTICE: Phase 2 Refinement. This repository contains the Phase 2 Engineered Refinement of the corpus, automatically rebalanced to prevent class collapse during fine-tuning.
Executive Summary
The Bhojpuri Behavioral Corpus (Phase 2) is a 68,822-row, rigidly balanced dataset engineered to solve the inherent instability of low-resource language fine-tuning. Moving beyond noisy web-scraped corpora, this dataset provides a sterile, perfectly stratified environment (1:1:1 ratio) to teach Large Language Models precise pragmatic boundaries and cross-lingual semantic alignment across diverse domains (Science, Agriculture, Environment, General).
Architectural Innovations
1. Mathematical Stratification (1:1:1)
To prevent the majority-class collapse common in naturalistic datasets, this corpus is artificially balanced to an exact 1:1:1 ratio:
- Positive: 22,940 samples
- Negative: 22,940 samples
- Neutral: 22,942 samples

This ensures the model's loss function penalizes misclassification equally across all sentiment classes.
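The balancing step described above can be sketched generically as downsampling each class to the size of the smallest one. This is an illustrative sketch, not the authors' actual pipeline; note the published counts are only near-exact (neutral carries two extra rows), whereas this sketch produces a strictly equal split.

```python
import random
from collections import Counter

def balance_classes(rows, label_key="label", seed=42):
    """Downsample every class to the size of the smallest one,
    yielding an exact 1:1:1 (or 1:...:1) class ratio."""
    random.seed(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row[label_key], []).append(row)
    floor = min(len(group) for group in by_label.values())
    balanced = []
    for label in sorted(by_label):
        balanced.extend(random.sample(by_label[label], floor))
    random.shuffle(balanced)
    return balanced

# Toy corpus with a skewed label distribution
toy = ([{"label": "positive"}] * 50
       + [{"label": "negative"}] * 30
       + [{"label": "neutral"}] * 20)
balanced = balance_classes(toy)
print(Counter(r["label"] for r in balanced))  # each class capped at 20
```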
2. Cross-Lingual Alignment via Anchored Translation
A subset of the corpus utilizes an Anchored Translation format. Complex technical terms (e.g., 'आइसोटोप' / Isotope) are paired with their bracketed English equivalents directly within the Bhojpuri string. This is a deliberate architectural choice to provide an explicit semantic alignment signal, bridging the gap between high-resource English representations and low-resource Bhojpuri vernacular.
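Bracketed anchors in this format can be recovered with a simple regular expression. The sample string and its exact bracketing convention below are assumptions for illustration; the card does not specify the precise delimiter.

```python
import re

# Hypothetical example in the Anchored Translation format: a Devanagari
# sentence carrying its English equivalent in parentheses (assumed convention).
sample = "आइसोटोप (Isotope) के बारे में जानकारी"

def extract_anchors(text):
    """Return bracketed English equivalents embedded in a Bhojpuri string."""
    return re.findall(r"\(([A-Za-z][A-Za-z \-]*)\)", text)

print(extract_anchors(sample))  # ['Isotope']
```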
3. Contemporary Lexical Borrowing
The dataset intentionally preserves English loanwords and technical transliterations within the Devanagari script (e.g., 'मशीन' / Machine). Rather than artificially sanitizing the corpus to an archaic standard, this reflects authentic, contemporary Bhojpuri morphology.
Dataset Schema
- id: Unique identifier.
- text: The Bhojpuri utterance (Devanagari script).
- english_tr: High-fidelity semantic English translation.
- label: Primary sentiment (positive, negative, neutral).
- domain / sub_domain: Context of the utterance (e.g., agriculture, science).
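A minimal validator for rows in this schema might look as follows. The field names mirror the card; the example values are invented for illustration.

```python
# Required fields and allowed labels, taken from the dataset schema above.
REQUIRED = {"id", "text", "english_tr", "label", "domain", "sub_domain"}
LABELS = {"positive", "negative", "neutral"}

def validate_row(row):
    """Raise ValueError if a row is missing fields or has an unknown label."""
    missing = REQUIRED - row.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if row["label"] not in LABELS:
        raise ValueError(f"unknown label: {row['label']!r}")
    return True

# Invented example row (values are illustrative, not from the corpus)
example = {
    "id": "bhoj-000001",
    "text": "मशीन ठीक से काम करत बा",
    "english_tr": "The machine is working fine",
    "label": "positive",
    "domain": "science",
    "sub_domain": "technology",
}
print(validate_row(example))  # True
```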
Intended Use & Limitations
- Best For: Parameter-efficient fine-tuning (LoRA/QLoRA), Teacher-model initialization, and cross-lingual representation alignment.
- Limitations: Because the sentiment distribution is artificially balanced (33% per class), models trained exclusively on this dataset may over-predict positive/negative sentiments in real-world, highly neutral environments without threshold calibration or Adaptive Knowledge Distillation (AdaptKD).
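The threshold-calibration caveat above can be sketched as prior correction: a model trained on the uniformly balanced corpus implicitly assumes 1/3 priors, so at deployment its class probabilities can be reweighted by an estimated real-world prior. The deployment prior and probabilities below are illustrative assumptions, not measured values.

```python
def recalibrate(probs, train_prior, deploy_prior):
    """Reweight class probabilities from train-time to deploy-time priors,
    then renormalize so they sum to 1."""
    scores = {c: probs[c] * deploy_prior[c] / train_prior[c] for c in probs}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Balanced training implies uniform priors (1/3 per class).
train_prior = {"positive": 1/3, "negative": 1/3, "neutral": 1/3}
# Assumed deployment distribution for a highly neutral environment.
deploy_prior = {"positive": 0.15, "negative": 0.15, "neutral": 0.70}

# Illustrative model output that would naively predict "positive".
probs = {"positive": 0.40, "negative": 0.25, "neutral": 0.35}
adjusted = recalibrate(probs, train_prior, deploy_prior)
print(max(adjusted, key=adjusted.get))  # neutral wins after correction
```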
Citation
If you use this dataset in your research, please cite the accompanying paper:
@article{prasad2026bhojpuri,
  title={abhiprd20/Bhojpuri-Behavioral-Corpus-8K},
  author={Prasad, Abhimanyu},
  year={2026}
}