---
license: cc-by-nc-nd-4.0
---

### Dataset Generation:
We begin with the Amazon Review Dataset (Ni et al., 2019)[^1] as our base data and randomly sample 100,000 instances from it. The original labels are star ratings from 1 to 5; for our task, we map them to Positive (rating > 3), Neutral (rating = 3), and Negative (rating < 3), keeping the number of instances per label balanced. To generate the synthetic code-mixed dataset, we apply two distinct methodologies: the Random Code-mixing Algorithm of Krishnan et al. (2021)[^2] and r-CM of Santy et al. (2021)[^3].
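The preprocessing above can be sketched in a few lines. The rating thresholds follow the text exactly; the code-mixing function is only a simplified illustration of word-level random replacement in the spirit of Krishnan et al. (2021), not the authors' exact implementation, and any lexicon passed to it is the caller's own.

```python
import random

def rating_to_label(rating: int) -> str:
    """Map a 1-5 star rating to a sentiment label (thresholds from the text)."""
    if rating > 3:
        return "Positive"
    if rating < 3:
        return "Negative"
    return "Neutral"

def random_code_mix(sentence: str, lexicon: dict[str, str],
                    p: float = 0.5, seed: int = 0) -> str:
    """Replace each word found in `lexicon` with its translation, with probability p.

    A toy word-level random code-mixing step; a real pipeline would also
    handle tokenization, casing, and transliteration.
    """
    rng = random.Random(seed)
    out = []
    for w in sentence.split():
        if w in lexicon and rng.random() < p:
            out.append(lexicon[w])
        else:
            out.append(w)
    return " ".join(out)
```

For example, `random_code_mix("this phone is great", {"great": "bohot badhiya"}, p=1.0)` swaps every lexicon word, while lower `p` values mix languages more sparsely.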

### Class Distribution:

#### For train.csv:

| Label    | Count | Percentage |
|----------|-------|------------|
| Negative | 20000 | 33.33%     |
| Neutral  | 20000 | 33.33%     |
| Positive | 19999 | 33.33%     |

#### For dev.csv:

| Label    | Count | Percentage |
|----------|-------|------------|
| Neutral  | 6667  | 33.34%     |
| Positive | 6667  | 33.34%     |
| Negative | 6666  | 33.33%     |

#### For test.csv:

| Label    | Count | Percentage |
|----------|-------|------------|
| Negative | 6667  | 33.34%     |
| Positive | 6667  | 33.34%     |
| Neutral  | 6666  | 33.33%     |
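As a sanity check, the near-perfect balance of the splits can be verified directly from the counts in the tables above; a minimal sketch:

```python
def is_balanced(counts: dict[str, int], tolerance: int = 1) -> bool:
    """True if the largest and smallest class differ by at most `tolerance` instances."""
    return max(counts.values()) - min(counts.values()) <= tolerance

# Counts taken from the class-distribution tables above.
splits = {
    "train": {"Negative": 20000, "Neutral": 20000, "Positive": 19999},
    "dev":   {"Neutral": 6667, "Positive": 6667, "Negative": 6666},
    "test":  {"Negative": 6667, "Positive": 6667, "Neutral": 6666},
}
```

Each split is balanced to within a single instance, so per-class accuracy and overall accuracy are directly comparable.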

### Cite our Paper:

If you use this dataset, please cite our paper.

```bibtex
@article{raihan2023mixed,
  title={Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara},
  journal={arXiv preprint arXiv:2309.10272},
  year={2023}
}
```

### References

[^1]: Ni, J., Li, J., & McAuley, J. (2019). Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 188-197).

[^2]: Krishnan, J., Anastasopoulos, A., Purohit, H., & Rangwala, H. (2021). Multilingual code-switching for zero-shot cross-lingual intent prediction and slot filling. arXiv preprint arXiv:2103.07792.

[^3]: Santy, S., Srinivasan, A., & Choudhury, M. (2021). BERTologiCoMix: How does code-mixing interact with multilingual BERT? In Proceedings of the Second Workshop on Domain Adaptation for NLP (pp. 111-121).

---