---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: lang
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 4512776
      num_examples: 25942
    - name: validation
      num_bytes: 644682
      num_examples: 3706
    - name: test
      num_bytes: 1289538
      num_examples: 7413
  download_size: 4254592
  dataset_size: 6446996
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - text-classification
language:
  - am
  - ar
  - de
  - en
  - es
  - fa
  - ha
  - hi
  - ne
  - tr
  - ur
  - zh
---

# Multilingual Text Classification Dataset

This dataset is designed for multilingual text classification tasks. It includes labeled text samples across 12 languages, making it suitable for training and evaluating models on cross-lingual transfer, language identification, and multilingual understanding.

## Dataset Overview

| Split      | # Examples | Size (bytes) |
|------------|-----------:|-------------:|
| Train      | 25,942     | 4,512,776    |
| Validation | 3,706      | 644,682      |
| Test       | 7,413      | 1,289,538    |
| **Total**  | **37,061** | **6,446,996** |

- **Total download size:** ~4.3 MB
- **Total dataset size:** ~6.4 MB
- **Task type:** Text classification

## Data Fields

| Field   | Type   | Description |
|---------|--------|-------------|
| `text`  | string | The input text sample. |
| `lang`  | string | The ISO 639-3 language code of the text. |
| `label` | int64  | The integer label representing the language class. |

## Language Labels

| Language        | Code | Label ID |
|-----------------|------|---------:|
| German          | deu  | 0  |
| Chinese         | zho  | 1  |
| Amharic         | amh  | 2  |
| Hindi           | hin  | 3  |
| Arabic          | arb  | 4  |
| Hausa           | hau  | 5  |
| Turkish         | tur  | 6  |
| Urdu            | urd  | 7  |
| Spanish         | spa  | 8  |
| Persian (Farsi) | fas  | 9  |
| English         | eng  | 10 |
| Nepali          | nep  | 11 |

## Intended Uses

- Multilingual language classification
- Cross-lingual and zero-shot evaluation
- Benchmarking multilingual embeddings (e.g., mBERT, XLM-R, LaBSE)
- Studying language similarity and confusion patterns
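As a toy illustration of the language-identification task (this is not the dataset's official baseline, and the training sentences below are made up), a character-bigram nearest-profile classifier can be sketched with only the standard library:

```python
from collections import Counter
import math

def bigram_profile(text):
    """Normalized character-bigram frequencies for a text."""
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(bigrams.values())
    return {bg: c / total for bg, c in bigrams.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(k, 0.0) for k, v in p.items())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

# Tiny hypothetical training texts per language (illustration only).
train_texts = {
    "deu": "das ist ein haus und das ist ein garten",
    "eng": "this is a house and this is a garden",
    "spa": "esta es una casa y este es un jardin",
}
profiles = {lang: bigram_profile(t) for lang, t in train_texts.items()}

def predict(text):
    """Return the language whose bigram profile is closest to the text."""
    p = bigram_profile(text)
    return max(profiles, key=lambda lang: cosine(p, profiles[lang]))

print(predict("this garden is a house"))  # likely "eng"
```

In practice the multilingual embedding models listed above would replace this hand-rolled profile matching, but the sketch shows why character statistics alone already separate many of the 12 languages.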

## Usage Example

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("8Opt/multilingual-classification-0001")

example = dataset["train"][0]
print(example)
```

Output:

```python
{
  "text": "Das ist ein Beispielsatz.",
  "lang": "deu",
  "label": 0
}
```
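For per-language evaluation it is often handy to bucket examples by their `lang` field. The pattern below is shown on plain dicts (the sample records are made up) so it runs without downloading anything; the same loop works on rows of the loaded dataset:

```python
from collections import defaultdict

# Hypothetical records following the dataset's schema (text, lang, label).
records = [
    {"text": "Das ist ein Beispielsatz.", "lang": "deu", "label": 0},
    {"text": "This is an example sentence.", "lang": "eng", "label": 10},
    {"text": "Dies ist noch ein Satz.", "lang": "deu", "label": 0},
]

# Bucket examples by language code, e.g. to report accuracy per language.
by_lang = defaultdict(list)
for ex in records:
    by_lang[ex["lang"]].append(ex)

print({lang: len(exs) for lang, exs in by_lang.items()})  # {'deu': 2, 'eng': 1}
```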

Label mapping:

```python
label2idx = {
    'deu': 0,
    'zho': 1,
    'amh': 2,
    'hin': 3,
    'arb': 4,
    'hau': 5,
    'tur': 6,
    'urd': 7,
    'spa': 8,
    'fas': 9,
    'eng': 10,
    'nep': 11
}
```
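To map model predictions back to language codes, this mapping can simply be inverted (an `idx2label` dict is not shipped with the dataset; it is derived here):

```python
label2idx = {
    'deu': 0, 'zho': 1, 'amh': 2, 'hin': 3, 'arb': 4, 'hau': 5,
    'tur': 6, 'urd': 7, 'spa': 8, 'fas': 9, 'eng': 10, 'nep': 11,
}

# Invert the mapping to decode integer predictions into ISO 639-3 codes.
idx2label = {idx: lang for lang, idx in label2idx.items()}

predictions = [0, 10, 8]          # hypothetical model outputs
decoded = [idx2label[i] for i in predictions]
print(decoded)  # ['deu', 'eng', 'spa']
```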

## Configurations

Configuration name: `default`

Each split is stored under `data/`:

```
data/
 ├── train-*
 ├── validation-*
 └── test-*
```

## Citation

If you use this dataset in your work, please cite it as:

```bibtex
@dataset{opt_multilingual_classification_2025,
  title  = {Multilingual Text Classification Dataset},
  author = {8Opt},
  year   = {2025},
  url    = {https://huggingface.co/datasets/8Opt/multilingual-classification-0001}
}
```