---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: lang
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 4512776
    num_examples: 25942
  - name: validation
    num_bytes: 644682
    num_examples: 3706
  - name: test
    num_bytes: 1289538
    num_examples: 7413
  download_size: 4254592
  dataset_size: 6446996
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: mit
task_categories:
- text-classification
language:
- deu
- zho
- amh
- hin
- arb
- hau
- tur
- urd
- spa
- fas
- eng
- nep
---



# Multilingual Text Classification Dataset

This dataset is designed for **multilingual text classification** tasks.
It includes labeled text samples across **12 languages**, making it suitable for training and evaluating models on **language identification**, **cross-lingual transfer**, and **multilingual understanding**.


## Dataset Overview

| Split      | # Examples | Size (bytes)  |
| ---------- | ---------- | ------------- |
| Train      | 25,942     | 4,512,776     |
| Validation | 3,706      | 644,682       |
| Test       | 7,413      | 1,289,538     |
| **Total**  | **37,061** | **6,446,996** |

**Total Download Size:** 4.3 MB
**Total Dataset Size:** 6.4 MB
**Task Type:** Text Classification


## Data Fields

| Field   | Type     | Description                                        |
| ------- | -------- | -------------------------------------------------- |
| `text`  | `string` | The input text sample.                             |
| `lang`  | `string` | The ISO 639-3 language code of the text.           |
| `label` | `int64`  | The integer label representing the language class. |
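
A record is a flat mapping with exactly these three fields. As a minimal sketch, a schema check in plain Python might look like this (the sample record is illustrative, not drawn from the dataset):

```python
def validate_record(record: dict) -> bool:
    """Check one sample against the schema above:
    text (str), lang (3-letter ISO 639-3 code), label (int in [0, 11])."""
    return (
        isinstance(record.get("text"), str)
        and isinstance(record.get("lang"), str)
        and len(record["lang"]) == 3
        and isinstance(record.get("label"), int)
        and 0 <= record["label"] <= 11
    )

# Illustrative sample record (values are examples only)
sample = {"text": "Das ist ein Beispielsatz.", "lang": "deu", "label": 0}
```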


## Language Labels

| Language        | Code  | Label ID |
| --------------- | ----- | -------- |
| German          | `deu` | 0        |
| Chinese         | `zho` | 1        |
| Amharic         | `amh` | 2        |
| Hindi           | `hin` | 3        |
| Arabic          | `arb` | 4        |
| Hausa           | `hau` | 5        |
| Turkish         | `tur` | 6        |
| Urdu            | `urd` | 7        |
| Spanish         | `spa` | 8        |
| Persian (Farsi) | `fas` | 9        |
| English         | `eng` | 10       |
| Nepali          | `nep` | 11       |




## Intended Uses

* Multilingual language classification
* Cross-lingual and zero-shot evaluation
* Benchmarking multilingual embeddings (e.g., mBERT, XLM-R, LaBSE)
* Studying language similarity and confusion patterns
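
As a concrete illustration of the last point, tallying (true, predicted) label pairs exposes which languages a model confuses. The sketch below uses hypothetical predictions (not real model output) and only the standard library; in practice `y_true` would come from the test split and `y_pred` from your classifier:

```python
from collections import Counter

# Hypothetical gold labels and model predictions (ISO 639-3 codes)
y_true = ["deu", "deu", "urd", "hin", "urd", "spa"]
y_pred = ["deu", "eng", "hin", "hin", "urd", "spa"]

# Count (true, predicted) pairs; off-diagonal entries are confusions
confusions = Counter(zip(y_true, y_pred))

for (true_lang, pred_lang), n in sorted(confusions.items()):
    marker = "" if true_lang == pred_lang else "  <- confusion"
    print(f"{true_lang} -> {pred_lang}: {n}{marker}")

# Overall accuracy over the toy predictions
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```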


## Usage Example

You can easily load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("8Opt/multilingual-classification-0001")

example = dataset["train"][0]
print(example)
```

Output:

```python
{
  "text": "Das ist ein Beispielsatz.",
  "lang": "deu",
  "label": 0
}
```

Label mapping:

```python
label2idx = {
  'deu': 0,
  'zho': 1,
  'amh': 2,
  'hin': 3,
  'arb': 4,
  'hau': 5,
  'tur': 6,
  'urd': 7,
  'spa': 8,
  'fas': 9,
  'eng': 10,
  'nep': 11
}

```
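
The inverse mapping is useful for converting integer predictions back to language codes. A minimal sketch:

```python
label2idx = {
    'deu': 0, 'zho': 1, 'amh': 2, 'hin': 3, 'arb': 4, 'hau': 5,
    'tur': 6, 'urd': 7, 'spa': 8, 'fas': 9, 'eng': 10, 'nep': 11,
}

# Invert the mapping: integer label -> ISO 639-3 code
idx2label = {idx: lang for lang, idx in label2idx.items()}

# e.g. turn a batch of predicted label IDs back into language codes
predicted_ids = [0, 10, 3]
predicted_langs = [idx2label[i] for i in predicted_ids]  # ['deu', 'eng', 'hin']
```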


## Configurations

**Configuration name:** `default`

Each split is stored under `data/`:

```
data/
 ├── train-*
 ├── validation-*
 └── test-*
```

---

## Citation

If you use this dataset in your work, please cite it as:

```
@dataset{multilingual_classification_0001,
  title={Multilingual Text Classification Dataset},
  author={8Opt},
  year={2025},
  url={https://huggingface.co/datasets/8Opt/multilingual-classification-0001}
}
```