Update readme

README.md
dataset_size: 285980688
---
# Dataset Card for "marc-multilingual-encodings-v4"

# marc-code-mixed-small

These encodings are based on the `review_tokens` of [msislam/marc-code-mixed-small](https://huggingface.co/datasets/msislam/marc-code-mixed-small).

The dataset contains four languages: German (DE), English (EN), Spanish (ES), and French (FR).

The labels are 0 (DE), 1 (EN), 2 (ES), and 3 (FR).

Each review contains all four languages.

Total number of tokens:

* Training set: 10195342
* Test set: 842760
* Validation set: 842760

The encodings were created using the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) tokenizer.
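The label scheme above can be mirrored in code when decoding model outputs. A minimal sketch; the `ID2LANG` mapping and helper function below are illustrative and not part of the dataset's own API:

```python
# Label ids as documented in this card: 0 (DE), 1 (EN), 2 (ES), 3 (FR).
ID2LANG = {0: "DE", 1: "EN", 2: "ES", 3: "FR"}
LANG2ID = {lang: i for i, lang in ID2LANG.items()}

def label_to_language(label: int) -> str:
    """Map a numeric label from the dataset to its language code."""
    return ID2LANG[label]
```

For example, `label_to_language(3)` returns `"FR"`, and `LANG2ID["ES"]` gives back the numeric label `2`.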