DaliaBarua committed on
Commit
c0ae164
·
verified ·
1 Parent(s): 2926c46

Update README.md

Browse files
Files changed (1)
  1. README.md +72 -16
README.md CHANGED
@@ -8,27 +8,83 @@ tags:
8
  - code-mixed-sentiment-analysis
9
  pretty_name: En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset
10
  ---
11
- 📊 **Data Analysis**
12
 
13
- The **En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset** underwent a detailed analysis to understand its linguistic diversity, sentiment distribution, and code-mixing characteristics. The following observations summarize the key insights obtained during dataset exploration and preprocessing.
14
 
15
- **Sentiment Distribution:**
16
- The dataset follows a binary sentiment structure consisting of positive and negative classes. Both classes are distributed nearly evenly, ensuring that the dataset remains balanced and suitable for supervised machine learning tasks. This balanced structure minimizes class bias and supports fair model evaluation during binary sentiment classification.
17
 
18
- **Text Length and Token Variation:**
19
- A broad range of review lengths was observed across all categories. English reviews tend to have shorter average token counts, while Bengali and code-mixed texts demonstrate increased token length due to the nature of translation and word expansion. The fully translated Bengali reviews exhibit the highest average word count, while the original English reviews remain the most concise. This variation provides valuable diversity for training models to handle different sentence complexities and structures.
20
 
21
- **Linguistic Composition and Code-Mixing Intensity:**
22
- The dataset effectively captures multiple degrees of language mixing through its four structured categories. The Original English subset maintains pure English syntax, the Selective POS Translated subset introduces light code-mixing via selective translation of adjectives, adverbs, and conjunctions, while the Selective + Roman Bengali subset intensifies the mixing by adding Roman-script Bengali words. Finally, the Fully Translated Bengali subset represents complete linguistic transformation. This gradual increase in code-mixing intensity makes the dataset ideal for studying multilingual interference and domain adaptation in sentiment analysis.
23
 
24
- **Lexical and POS-Level Variation:**
25
- The application of POS-based selective translation ensures a meaningful linguistic shift without distorting sentence semantics. Adjectives and adverbs, the key sentiment-bearing components, are primarily translated, allowing for a balanced blend of English grammatical structure with Bengali emotional tone. This selective inclusion enhances sentiment preservation across languages.
26
 
27
- **Script Diversity and Transliteration Impact:**
28
- The Roman Bengali transliteration layer introduces significant script variation, simulating real-world code-mixed writing behavior commonly observed in South Asian digital communication. The transliteration process produces multiple spelling variations of the same Bengali word, which helps capture orthographic irregularities and boosts the robustness of downstream NLP models.
29
 
30
- **Vocabulary Expansion:**
31
- The dataset exhibits notable vocabulary growth due to the introduction of Bengali and Roman Bengali tokens. This expansion diversifies the linguistic space and challenges language models to learn bilingual lexical representations. Distinct vocabularies for English, Bengali, and Roman Bengali words were separately stored to aid further lexical analysis and embedding training.
32
 
33
- **Applicability in NLP Research:**
34
- The dataset's balanced sentiment structure, systematic code-mixing variation, and multilingual token distribution make it suitable for a range of research areas including code-mixed sentiment analysis, language identification, cross-lingual embeddings, and domain adaptation. The presence of structured categories also enables comparative performance analysis between monolingual, partially mixed, and fully translated text data.
11
+ 📊 En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset
12
 
13
+ The En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset is a multilingual dataset of 100,000 product review texts designed for code-mixed sentiment analysis involving English, Bengali, and Roman Bengali.
14
 
15
+ Each record includes:
 
16
 
17
+ 🆔 Id
 
18
 
19
+ 🛒 ProductId
 
20
 
21
+ 💬 Code-Mixed-Text
 
22
 
23
+ 💡 Sentiment
 
24
 
25
+ The dataset captures diverse linguistic styles, authentic code-mixing, and real-world sentiment patterns from multilingual digital communication.
 
26
 
27
+ 🌐 Text Distribution
28
+
29
+ The dataset includes four major text categories, representing different levels of English–Bengali mixing.
30
+ Each group is evenly structured across the dataset for balanced linguistic coverage.
31
+
32
+ 🇬🇧 English Texts : 15,000 ▓▓▓░░░░░░░░░░░░░░░░░ 15%
33
+ 🇧🇩 Bengali Texts : 15,000 ▓▓▓░░░░░░░░░░░░░░░░░ 15%
34
+ 🌐 English–Bengali Mixed Texts : 35,000 ▓▓▓▓▓▓▓░░░░░░░░░░░░░ 35%
35
+ 🔤 English–Roman Bengali Mixed : 35,000 ▓▓▓▓▓▓▓░░░░░░░░░░░░░ 35%
36
+ ---------------------------------------------------------------
37
+ 🧮 **Total Samples** : 100,000
38
+
39
+
40
+ 📘 This distribution ensures the dataset provides a rich combination of monolingual and code-mixed samples suitable for multilingual model training.
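As a minimal sanity check, the category counts in the table above can be verified against the stated total; this is an illustrative sketch using the published figures, not part of any official dataset tooling:

```python
# Per-category sample counts, copied from the text distribution table above.
categories = {
    "English": 15_000,
    "Bengali": 15_000,
    "English-Bengali Mixed": 35_000,
    "English-Roman Bengali Mixed": 35_000,
}

total = sum(categories.values())
assert total == 100_000  # matches the stated total sample count

# Share of each category as a whole-number percentage.
shares = {name: round(100 * n / total) for name, n in categories.items()}
print(shares)  # 15 / 15 / 35 / 35, as in the chart above
```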
41
+
42
+ 🧩 Word Distribution
43
+
44
+ The dataset exhibits extensive lexical diversity across three language layers.
45
+ Word counts show the dominance of English vocabulary, with embedded Bengali and Roman Bengali expressions adding cultural and emotional context.
46
+
47
+ โœด๏ธ English Words : 6,870,500 โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“ 71.5%
48
+ ๐Ÿ”ก Bengali Words : 2,136,460 โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–“โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘ 22.2%
49
+ ๐Ÿ”  Roman Bengali Words : 601,220 โ–“โ–“โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘โ–‘ 6.3%
50
+ ---------------------------------------------------------------
51
+ ๐Ÿงพ **Total Word Count** : 9,608,180
52
+
53
+
54
+ 📙 The mix of Latin-script English, Bengali-script text, and romanized Bengali provides valuable linguistic variability, allowing models to learn fine-grained lexical and orthographic distinctions.
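The word-share percentages above follow directly from the raw counts; a small sketch (counts copied from the table, rounding to one decimal place) reproduces them:

```python
# Word counts per language layer, as stated in the word distribution chart.
word_counts = {
    "English": 6_870_500,
    "Bengali": 2_136_460,
    "Roman Bengali": 601_220,
}

total_words = sum(word_counts.values())
assert total_words == 9_608_180  # matches the stated total word count

# Percentage share of each layer, rounded to one decimal place.
shares = {k: round(100 * v / total_words, 1) for k, v in word_counts.items()}
print(shares)  # {'English': 71.5, 'Bengali': 22.2, 'Roman Bengali': 6.3}
```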
55
+
56
+ 💬 Sentiment Distribution
57
+
58
+ The dataset follows a binary sentiment structure with positive and negative review labels.
59
+ Rather than being perfectly balanced, it reflects real-world customer behavior, where users share positive experiences more often than negative ones.
60
+
61
+ 😊 Positive Reviews : 79,300 ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░ 79.3%
62
+ 😠 Negative Reviews : 20,700 ▓▓▓▓░░░░░░░░░░░░░░░░ 20.7%
63
+ -------------------------------------------------------
64
+ 💡 **Total Samples** : 100,000
65
+
66
+
67
+ 📗 This realistic sentiment imbalance provides a natural testing ground for building sentiment classification models that are robust to class skew.
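One common way to compensate for this skew during training is inverse-frequency class weighting. A minimal sketch using the label counts above; the weighting heuristic shown (the same one scikit-learn applies for `class_weight="balanced"`) is an illustrative choice, not prescribed by the dataset:

```python
# Label counts from the sentiment distribution above.
counts = {"positive": 79_300, "negative": 20_700}
total = sum(counts.values())

# Balanced class weights: total / (n_classes * class_count).
# The minority (negative) class receives the larger weight.
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
print(weights)
```

Passing these weights to a classifier's loss function up-weights errors on negative reviews, so the model cannot score well by simply predicting "positive" everywhere.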
68
+
69
+ 📈 Summary
70
+
71
+ The En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset provides a comprehensive resource for multilingual NLP and code-mixed text analysis.
72
+ Key highlights include:
73
+
74
+ 🌐 Four-level text distribution: English, Bengali, English–Bengali, English–Roman Bengali
75
+
76
+ 🧩 9.6M words across three language layers (English, Bengali, Roman Bengali)
77
+
78
+ 💬 Natural sentiment imbalance (79.3% positive, 20.7% negative)
79
+
80
+ 🗣️ Rich linguistic variation for bilingual and transliterated text
81
+
82
+ ⚙️ Ideal for:
83
+
84
+ Code-Mixed Sentiment Analysis
85
+
86
+ Language Identification
87
+
88
+ Cross-Lingual Embedding Learning
89
+
90
+ Multilingual Model Evaluation
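Because the records mix Latin and Bengali scripts, a simple Unicode-range check can estimate how code-mixed a given text is, which is useful for the language-identification and code-mixing analyses listed above. A hypothetical sketch (the helper name and the sample sentence are invented for illustration, not part of the dataset tooling):

```python
def bengali_script_ratio(text: str) -> float:
    """Fraction of alphabetic characters written in Bengali script (U+0980-U+09FF)."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    bengali = [ch for ch in letters if "\u0980" <= ch <= "\u09ff"]
    return len(bengali) / len(letters)

# A toy code-mixed review mixing Roman Bengali and Bengali script:
sample = "product ta khub bhalo, দাম একটু বেশি"
print(round(bengali_script_ratio(sample), 2))  # between 0 and 1 for mixed text
```

Applied per record, this ratio separates the monolingual English subset (ratio near 0), the fully Bengali subset (ratio near 1), and the two mixed subsets in between.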