dataset_info:
  - config_name: portuguese
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 21307799396
        num_examples: 2000000
    download_size: 7981082403
    dataset_size: 21307799396
  - config_name: bengali
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 13222913829
        num_examples: 2000000
    download_size: 3804453185
    dataset_size: 13222913829
  - config_name: code
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 6854288242
        num_examples: 975000
    download_size: 2194755063
    dataset_size: 6854288242
  - config_name: english
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 15670768606
        num_examples: 2000000
    download_size: 5700592325
    dataset_size: 15670768606
  - config_name: hindi
    features:
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 22579216927
        num_examples: 2000000
    download_size: 6194810350
    dataset_size: 22579216927
configs:
  - config_name: portuguese
    data_files:
      - split: train
        path: portuguese/train-*
  - config_name: bengali
    data_files:
      - split: train
        path: bengali/train-*
  - config_name: code
    data_files:
      - split: train
        path: code/train-*
  - config_name: english
    default: true
    data_files:
      - split: train
        path: english/train-*
  - config_name: hindi
    data_files:
      - split: train
        path: hindi/train-*
language:
  - hi
  - bn
  - en
  - pt
license: other
task_categories:
  - text-generation
tags:
  - tokenizer
  - tokenization
  - english
  - code
  - bengali
  - hindi
  - portuguese
pretty_name: Polygl0t tokenizers
size_categories:
  - 1M<n<10M

# Polygl0t Tokenizers

## Dataset Description

### Dataset Summary

This dataset contains several subsets for training multilingual tokenizers. Each subset is a collection of curated text samples in a given language (or, for the code subset, a mixture of programming languages).
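
A minimal sketch of loading one subset with the 🤗 `datasets` library. The repo id `Polygl0t/tokenizers` is an assumption inferred from this card's title; substitute the dataset's actual id on the Hub.

```python
# Sketch: load one subset's train split. The repo id below is an assumption;
# replace it with the dataset's actual id on the Hugging Face Hub.
CONFIGS = {"portuguese", "bengali", "code", "english", "hindi"}

def load_subset(config: str = "english", streaming: bool = True):
    """Return the train split of one subset, streamed to avoid a full download."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config {config!r}; choose from {sorted(CONFIGS)}")
    # Imported lazily so the sketch itself has no hard dependency at import time.
    from datasets import load_dataset
    return load_dataset("Polygl0t/tokenizers", config, split="train", streaming=streaming)
```

With `streaming=True`, samples are read lazily, which matters here since the larger subsets are several gigabytes to download.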

### Supported Tasks and Leaderboards

This dataset can be used for the task of text generation, specifically for training and evaluating tokenizers in multiple languages.

### Languages

Hindi, Bengali, English, Portuguese, and Code (a mixture of 36 programming languages).

All programming languages: fortran, jupyter, cpp, solidity, python, cmake, assembly, ruby, perl, lua, typescript, c, java, html, powershell, php, haskell, shell, scala, sql, visual_basic, ada, julia, markdown, batchfile, rust, cuda, json, kotlin, go, r, javascript, pascal, yaml, css, c_sharp.
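
For programmatic filtering, here are the same 36 names as a Python tuple, copied verbatim from the list above:

```python
# The 36 programming languages in the "code" subset, verbatim from this card.
CODE_LANGUAGES = (
    "fortran", "jupyter", "cpp", "solidity", "python", "cmake", "assembly",
    "ruby", "perl", "lua", "typescript", "c", "java", "html", "powershell",
    "php", "haskell", "shell", "scala", "sql", "visual_basic", "ada", "julia",
    "markdown", "batchfile", "rust", "cuda", "json", "kotlin", "go", "r",
    "javascript", "pascal", "yaml", "css", "c_sharp",
)
assert len(CODE_LANGUAGES) == 36  # matches the count stated above
```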

## Dataset Structure

### Data Instances

An example instance:

```json
{
  "text": "Olá, como vai você?"
}
```

### Data Fields

- `text`: a string of text in the respective language of the subset.
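
Since each record carries a single `text` field, a raw instance like the example above can be parsed with the standard library alone:

```python
import json

# Parse the example instance shown above; "text" is the only field.
record = json.loads('{"text": "Olá, como vai você?"}')
assert set(record) == {"text"}
print(record["text"])
```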

### Subsets and Splits

The dataset includes the following subsets:

- **Portuguese:** 2,000,000 text samples in Portuguese.
- **Hindi:** 2,000,000 text samples in Hindi.
- **Bengali:** 2,000,000 text samples in Bengali.
- **English:** 2,000,000 text samples in English.
- **Code:** 975,000 text samples across 36 programming languages.

The `.txt` files (e.g., `hindi_test.txt`) are for testing/evaluation purposes.
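
The split sizes in the `dataset_info` metadata at the top of this card give a quick sanity check on average sample size per subset (all numbers copied from the YAML above):

```python
# (num_bytes, num_examples) per config, copied from the dataset_info metadata.
SPLITS = {
    "portuguese": (21_307_799_396, 2_000_000),
    "bengali":    (13_222_913_829, 2_000_000),
    "code":       (6_854_288_242, 975_000),
    "english":    (15_670_768_606, 2_000_000),
    "hindi":      (22_579_216_927, 2_000_000),
}

# Average uncompressed bytes per text sample, rounded to whole bytes.
avg_bytes = {name: round(nb / ne) for name, (nb, ne) in SPLITS.items()}
for name, b in sorted(avg_bytes.items(), key=lambda kv: -kv[1]):
    print(f"{name:<10} ~{b:,} bytes/sample")
```

Note that the Hindi and Portuguese samples are noticeably larger on average than the English ones, which matters when budgeting tokenizer training data by bytes rather than by sample count.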

## Dataset Creation

### Source Data

## Additional Information

### Dataset Maintainers

### Licensing Information

Please refer to the individual licenses of the source datasets used to create this corpus, as listed in the "Source Data" section above. The combined dataset does not have a single unified license, and users should ensure compliance with the terms of each source dataset when utilizing this corpus.

### Citation Information

```bibtex
@misc{correa2026tucano2cool,
  title = {{Tucano 2 Cool: Better Open Source LLMs for Portuguese}},
  author = {Nicholas Kluge Corr{\^e}a and Aniket Sen and Shiza Fatimah and Sophia Falk and Lennard Landgraf and Julia Kastner and Lucie Flek},
  year = {2026},
  eprint = {2603.03543},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2603.03543},
}

@misc{shiza2026lilmoo,
  title = {{Raising Bars, Not Parameters: LilMoo Compact Language Model for Hindi}},
  author = {Shiza Fatimah and Aniket Sen and Sophia Falk and Florian Mai and Lucie Flek and Nicholas Kluge Corr{\^e}a},
  year = {2026},
  eprint = {2603.03508},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2603.03508},
}

@misc{fatimah2026liltii,
  title = {{LilTii: A 0.6B Bengali Language Model that Outperforms Qwen}},
  author = {Shiza Fatimah and Aniket Sen and Sophia Falk and Florian Mai and Lucie Flek and Nicholas Kluge Corr{\^e}a},
  year = {2026},
  howpublished = {\url{https://hf.co/blog/Polygl0t/liltii}},
}
```

### Acknowledgments

Polyglot is a project funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MWK) as part of TRA Sustainable Futures (University of Bonn) and the Excellence Strategy of the federal and state governments.

We also gratefully acknowledge the granted access to the Marvin cluster hosted by the University of Bonn, along with the support provided by its High Performance Computing & Analytics Lab.

### Contributions

If you want to contribute, contact us at polyglot@uni-bonn.de!