Update README.md

README.md CHANGED

@@ -8,27 +8,83 @@ tags:
- code-mixed-sentiment-analysis
pretty_name: En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset
---
- The dataset follows a binary sentiment structure consisting of positive and negative classes. Both classes are distributed nearly evenly, keeping the dataset balanced and suitable for supervised machine learning tasks. This balance minimizes class bias and supports fair model evaluation in binary sentiment classification.
- A broad range of review lengths is observed across all categories. English reviews tend to have shorter average token counts, while Bengali and code-mixed texts have longer ones because translation and word expansion lengthen the text. The fully translated Bengali reviews exhibit the highest average word count, while the original English reviews remain the most concise. This variation provides valuable diversity for training models on different sentence complexities and structures.
- The dataset captures multiple degrees of language mixing through its four structured categories. The Original English subset maintains pure English syntax; the Selective POS Translated subset introduces light code-mixing by selectively translating adjectives, adverbs, and conjunctions; the Selective + Roman Bengali subset intensifies the mixing by adding Roman-script Bengali words; and the Fully Translated Bengali subset represents complete linguistic transformation. This gradual increase in code-mixing intensity makes the dataset well suited for studying multilingual interference and domain adaptation in sentiment analysis.
- The application of POS-based selective translation ensures a meaningful linguistic shift without distorting sentence semantics. Adjectives and adverbs, the key sentiment-bearing components, are the primary translation targets, allowing a balanced blend of English grammatical structure and Bengali emotional tone. This selective inclusion enhances sentiment preservation across languages.
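The selective-translation step described above can be sketched as follows. The tiny lexicon, the tag set, and the tagged-token input format are all invented for illustration; a real pipeline would use a POS tagger and a proper translation model.

```python
# Sketch of POS-based selective translation: only sentiment-bearing
# POS categories (adjectives and adverbs) are translated, leaving the
# English sentence frame intact. The lexicon below is illustrative.
LEXICON = {"good": "ভালো", "bad": "খারাপ", "very": "খুব"}
SELECTED_POS = {"ADJ", "ADV"}

def selective_translate(tagged_tokens):
    """Translate (word, pos) pairs whose POS is in SELECTED_POS."""
    out = []
    for word, pos in tagged_tokens:
        if pos in SELECTED_POS and word in LEXICON:
            out.append(LEXICON[word])
        else:
            out.append(word)
    return " ".join(out)

print(selective_translate([("the", "DET"), ("product", "NOUN"),
                           ("is", "VERB"), ("very", "ADV"),
                           ("good", "ADJ")]))
# → "the product is খুব ভালো"
```

The grammatical skeleton stays English while the emotional words shift to Bengali, which is exactly the light code-mixing the category describes.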
- The Roman Bengali transliteration layer introduces significant script variation, simulating the real-world code-mixed writing behavior commonly observed in South Asian digital communication. The transliteration process produces multiple spelling variants of the same Bengali word, which helps capture orthographic irregularities and boosts the robustness of downstream NLP models.
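The spelling-variant effect can be illustrated with a small normalization table; the variant spellings below are invented examples, not drawn from the dataset.

```python
# Illustrative normalization of Roman Bengali spelling variants to a
# canonical form. Real data would need a larger, data-driven table.
CANONICAL = {
    "bhalo": "bhalo", "valo": "bhalo", "vhalo": "bhalo",   # "good"
    "kharap": "kharap", "karap": "kharap",                 # "bad"
}

def normalize(tokens):
    """Map each token to its canonical spelling if one is known."""
    return [CANONICAL.get(t, t) for t in tokens]

print(normalize(["product", "ta", "valo"]))
# → ['product', 'ta', 'bhalo']
```

Keeping the raw variants in the dataset (rather than normalizing them away) is what lets models learn to handle this orthographic noise.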
- The dataset exhibits notable vocabulary growth due to the introduction of Bengali and Roman Bengali tokens. This expansion diversifies the linguistic space and challenges language models to learn bilingual lexical representations. Separate vocabularies for English, Bengali, and Roman Bengali words were stored to aid further lexical analysis and embedding training.
+ En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset
+
+ The En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset is a multilingual dataset of 100,000 product review texts designed for code-mixed sentiment analysis involving English, Bengali, and Roman Bengali.
+
+ Each record includes:
+ Id
+ ProductId
+ Code-Mixed-Text
+ Sentiment
+
+ The dataset captures diverse linguistic styles, authentic code-mixing, and real-world sentiment patterns from multilingual digital communication.
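A record with these four fields can be represented and filtered as in this minimal sketch; the sample rows are invented and only mirror the documented schema.

```python
# Invented sample rows following the documented record schema:
# Id, ProductId, Code-Mixed-Text, Sentiment.
rows = [
    {"Id": 1, "ProductId": "B001",
     "Code-Mixed-Text": "product ta khub bhalo, fast delivery",
     "Sentiment": "positive"},
    {"Id": 2, "ProductId": "B002",
     "Code-Mixed-Text": "quality kharap, very disappointed",
     "Sentiment": "negative"},
]

# Filter to positive reviews, a typical first preprocessing step.
positive = [r for r in rows if r["Sentiment"] == "positive"]
print(len(positive))  # → 1
```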
+ Text Distribution
+
+ The dataset includes four major text categories, representing different levels of English-Bengali mixing.
+ Together the categories span monolingual and code-mixed text, giving broad linguistic coverage.
+ English Texts               : 15,000    ███ 15%
+ Bengali Texts               : 15,000    ███ 15%
+ English-Bengali Mixed Texts : 35,000    ███████ 35%
+ English-Roman Bengali Mixed : 35,000    ███████ 35%
+ ---------------------------------------------------
+ **Total Samples**           : 100,000
+
+ This distribution ensures the dataset provides a rich combination of monolingual and code-mixed samples suitable for multilingual model training.
+ Word Distribution
+
+ The dataset exhibits extensive lexical diversity across three language layers.
+ Word counts demonstrate the dominance of English, with embedded Bengali and Roman Bengali expressions adding cultural and emotional context.
+ English Words        : 6,870,500   ██████████████ 71.5%
+ Bengali Words        : 2,136,460   ████ 22.2%
+ Roman Bengali Words  :   601,220   █ 6.3%
+ ---------------------------------------------------
+ **Total Word Count** : 9,608,180
+
+ The mix of Bengali script with two Latin-script layers provides valuable linguistic variability, allowing models to learn fine-grained lexical and orthographic distinctions.
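The listed counts and percentages are mutually consistent, which can be checked directly:

```python
# Recompute the documented word-distribution total and percentages.
counts = {
    "English": 6_870_500,
    "Bengali": 2_136_460,
    "Roman Bengali": 601_220,
}
total = sum(counts.values())
print(total)  # → 9608180, matching the documented total
for name, n in counts.items():
    print(f"{name}: {100 * n / total:.1f}%")
# → English: 71.5%, Bengali: 22.2%, Roman Bengali: 6.3%
```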
+ Sentiment Distribution
+
+ The dataset follows a binary sentiment structure with positive and negative review labels.
+ Rather than being artificially balanced, it reflects real-world customer behavior, where users share positive experiences more often than negative ones.
+ Positive Reviews  : 79,300    ████████████████ 79.3%
+ Negative Reviews  : 20,700    ████ 20.7%
+ ---------------------------------------------------
+ **Total Samples** : 100,000
+
+ This realistic sentiment imbalance provides a natural testing ground for building sentiment classification models that are robust to class skew.
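One standard way to handle this skew is class weighting. The sketch below applies the common total / (n_classes × count) heuristic (the same rule scikit-learn uses for class_weight='balanced') to the documented label counts:

```python
# Balanced class weights for the documented 79.3% / 20.7% label skew,
# using weight = n_samples / (n_classes * class_count).
counts = {"positive": 79_300, "negative": 20_700}
total = sum(counts.values())
weights = {label: total / (len(counts) * n) for label, n in counts.items()}

print(round(weights["positive"], 3))  # → 0.631
print(round(weights["negative"], 3))  # → 2.415
```

The minority negative class receives roughly 3.8× the weight of the positive class, counteracting the imbalance during training.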
+ Summary
+
+ The En-Bn-Code-Mixed-Two-Class-Sentiment-Dataset provides a comprehensive resource for multilingual NLP and code-mixed text analysis.
+ Key highlights include:
+
+ - Four-level text distribution: English, Bengali, English-Bengali, English-Roman Bengali
+ - 9.6M words across three language layers
+ - Natural sentiment imbalance (79.3% positive, 20.7% negative)
+ - Rich linguistic variation across bilingual and transliterated text
+
+ Ideal for:
+
+ - Code-Mixed Sentiment Analysis
+ - Language Identification
+ - Cross-Lingual Embedding Learning
+ - Multilingual Model Evaluation