---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: lang
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 2651248
      num_examples: 18657
    - name: validation
      num_bytes: 378709
      num_examples: 2665
    - name: test
      num_bytes: 757560
      num_examples: 5331
  download_size: 2646591
  dataset_size: 3787517
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - text-classification
language:
  - deu
  - zho
  - amh
  - arb
  - hau
  - urd
  - spa
  - eng
---

# Multilingual Text Classification Dataset

This dataset is designed for multilingual text classification tasks. It includes labeled text samples across 8 languages, making it ideal for training and evaluating models on cross-lingual transfer, language identification, and multilingual understanding.

## Dataset Overview

| Split      | # Examples | Size (bytes) |
|------------|-----------:|-------------:|
| Train      | 18,657     | 2,651,248    |
| Validation | 2,665      | 378,709      |
| Test       | 5,331      | 757,560      |
| **Total**  | 26,653     | 3,787,517    |

- **Total download size:** 2.6 MB
- **Total dataset size:** 3.8 MB
- **Task type:** text classification
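The split sizes work out to roughly a 70/10/20 partition, which can be verified directly from the counts above:

```python
# Split counts taken from the overview table above.
splits = {"train": 18657, "validation": 2665, "test": 5331}
total = sum(splits.values())  # 26,653 examples in total

for name, count in splits.items():
    print(f"{name}: {count / total:.1%}")
# train: 70.0%, validation: 10.0%, test: 20.0%
```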

## Data Fields

| Field   | Type   | Description                                        |
|---------|--------|----------------------------------------------------|
| `text`  | string | The input text sample.                             |
| `lang`  | string | The ISO 639-3 language code of the text.           |
| `label` | int64  | The integer label representing the language class. |

## Language Labels

| Language | Code  | Label ID |
|----------|-------|---------:|
| German   | `deu` | 0        |
| Chinese  | `zho` | 1        |
| Amharic  | `amh` | 2        |
| Arabic   | `arb` | 3        |
| Hausa    | `hau` | 4        |
| Urdu     | `urd` | 5        |
| Spanish  | `spa` | 6        |
| English  | `eng` | 7        |

This mapping is stored internally in the dataset and can be used to decode model predictions or remap outputs.

## Intended Uses

- Multilingual language classification
- Cross-lingual and zero-shot evaluation
- Benchmarking multilingual embeddings (e.g., mBERT, XLM-R, LaBSE)
- Studying language similarity and confusion patterns
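As an illustration of the last point, the non-Latin-script classes in this dataset can be partially separated by Unicode script alone, which is a useful sanity check when analyzing confusion patterns. The sketch below is not part of the dataset; `guess_lang_by_script` is a hypothetical helper using only the standard library:

```python
import unicodedata

# Illustrative heuristic: map a distinctive Unicode character-name prefix
# to one of this dataset's language codes.
SCRIPT_PREFIX_TO_LANG = {
    "ETHIOPIC": "amh",  # Amharic
    "CJK": "zho",       # Chinese
    "ARABIC": "arb",    # NOTE: Urdu (urd) also uses Arabic script, so this
                        # heuristic cannot separate arb from urd.
}

def guess_lang_by_script(text):
    """Return a language code if a character's script is distinctive, else None."""
    for ch in text:
        name = unicodedata.name(ch, "")
        for prefix, lang in SCRIPT_PREFIX_TO_LANG.items():
            if name.startswith(prefix):
                return lang
    # Latin-script languages (deu, hau, spa, eng) need an actual classifier.
    return None

print(guess_lang_by_script("你好世界"))  # zho
print(guess_lang_by_script("ሰላም"))      # amh
print(guess_lang_by_script("Hello"))     # None
```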

## Usage Example

Load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("8Opt/multilingual-classification-0001")

example = dataset["train"][0]
print(example)
```

Output:

```json
{
  "text": "Das ist ein Beispielsatz.",
  "lang": "deu",
  "label": 0
}
```

Label mapping:

```python
id2label = {
    0: "deu",
    1: "zho",
    2: "amh",
    3: "arb",
    4: "hau",
    5: "urd",
    6: "spa",
    7: "eng"
}
```
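With this mapping, integer model outputs can be decoded back to language codes (the `predictions` list below is a hypothetical example, not real model output):

```python
id2label = {0: "deu", 1: "zho", 2: "amh", 3: "arb",
            4: "hau", 5: "urd", 6: "spa", 7: "eng"}
label2id = {code: idx for idx, code in id2label.items()}  # inverse mapping

predictions = [0, 7, 6]  # hypothetical integer outputs from a classifier
decoded = [id2label[p] for p in predictions]
print(decoded)  # ['deu', 'eng', 'spa']
```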

## Configurations

Configuration name: `default`

Each split is stored under `data/`:

```
data/
 ├── train-*
 ├── validation-*
 └── test-*
```

## Citation

If you use this dataset in your work, please cite it as:

```bibtex
@dataset{8Opt,
  title={Multilingual Text Classification Dataset},
  author={8Opt},
  year={2025},
  url={https://huggingface.co/datasets/8Opt/multilingual-classification-0001}
}
```