---
dataset_info:
- config_name: portuguese
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 21307799396
    num_examples: 2000000
  download_size: 7981082403
  dataset_size: 21307799396
- config_name: bengali
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 13222913829
    num_examples: 2000000
  download_size: 3804453185
  dataset_size: 13222913829
- config_name: code
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 6854288242
    num_examples: 975000
  download_size: 2194755063
  dataset_size: 6854288242
- config_name: english
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 15670768606
    num_examples: 2000000
  download_size: 5700592325
  dataset_size: 15670768606
- config_name: hindi
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 22579216927
    num_examples: 2000000
  download_size: 6194810350
  dataset_size: 22579216927
configs:
- config_name: portuguese
  data_files:
  - split: train
    path: portuguese/train-*
- config_name: bengali
  data_files:
  - split: train
    path: bengali/train-*
- config_name: code
  data_files:
  - split: train
    path: code/train-*
- config_name: english
  default: true
  data_files:
  - split: train
    path: english/train-*
- config_name: hindi
  data_files:
  - split: train
    path: hindi/train-*
language:
- hi
- bn
- en
- pt
license: other
task_categories:
- text-generation
tags:
- tokenizer
- tokenization
- english
- code
- bengali
- hindi
- portuguese
pretty_name: Portuguese | Hindi | Bengali | English | Code | Tokenizer Dataset
size_categories:
- 1M<n<10M
---
# Polygl0t Tokenizers

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Subsets and Splits](#subsets-and-splits)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Dataset Maintainers](#dataset-maintainers)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Acknowledgments](#acknowledgments)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/Polygl0t/tokenizers
- **Repository:** https://huggingface.co/datasets/Polygl0t/tokenizers
- **Point of Contact:** [Polygl0t](mailto:kluge@uni-bonn.de)

### Dataset Summary

This dataset contains several subsets for training multilingual tokenizers. Each subset is a collection of curated text samples in a single language (or, for the code subset, a mix of programming languages).

### Supported Tasks and Leaderboards

This dataset can be used for the task of text generation, specifically for training and evaluating tokenizers in multiple languages.
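
As an illustration of what tokenizer training involves, here is a toy byte-pair-encoding (BPE) merge loop in plain Python. It is a sketch of the general technique only, not the pipeline used with this dataset; the corpus and merge count are invented for the example:

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges from a list of words (toy sketch)."""
    # Represent each word as a tuple of symbols, starting from characters.
    vocab = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent symbol pair occurs across the corpus.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with a merged symbol.
        merged = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        vocab = merged
    return merges

merges = train_bpe(["low", "lower", "lowest", "low"], num_merges=2)
print(merges)  # [('l', 'o'), ('lo', 'w')]
```

Each learned merge becomes a vocabulary entry; production trainers (e.g., the Hugging Face `tokenizers` library) add pre-tokenization, byte fallback, and frequency cutoffs on top of this core loop.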

### Languages

Hindi, Bengali, English, Portuguese, and Code (a mixture of 36 programming languages).

<details>
<summary><b>All programming languages</b></summary>
 <code>
  fortran, jupyter, cpp, solidity, python, cmake, assembly, ruby, perl, lua, typescript, c, java, html, powershell, php, haskell, shell, scala, sql, visual_basic, ada, julia, markdown, batchfile, rust, cuda, json, kotlin, go, r, javascript, pascal, yaml, css, c_sharp
 </code>
</details>

## Dataset Structure

### Data Instances

Each instance of the dataset contains a single feature:

- **text:** a string of text in the respective language of the subset.

### Data Fields

```json
{
  "text": "Olá, como vai você?"
}
```

### Subsets and Splits

The dataset includes the following subsets:

- **Portuguese:** This subset contains 2,000,000 text samples in Portuguese.
- **Hindi:** This subset contains 2,000,000 text samples in Hindi.
- **Bengali:** This subset contains 2,000,000 text samples in Bengali.
- **English:** This subset contains 2,000,000 text samples in English.
- **Code:** This subset contains 975,000 text samples in various programming languages.
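
The per-subset sizes in this card's metadata imply average example lengths; a quick stdlib check (the byte and example counts are copied from the `dataset_info` block above):

```python
# (num_bytes, num_examples) per config, from the dataset_info metadata.
sizes = {
    "portuguese": (21_307_799_396, 2_000_000),
    "bengali":    (13_222_913_829, 2_000_000),
    "code":       (6_854_288_242,    975_000),
    "english":    (15_670_768_606, 2_000_000),
    "hindi":      (22_579_216_927, 2_000_000),
}

for name, (num_bytes, num_examples) in sizes.items():
    # Average UTF-8 bytes per example in each subset.
    print(f"{name}: ~{num_bytes / num_examples / 1024:.1f} KiB per example")
```

Hindi and Portuguese samples are the longest on average, which is worth keeping in mind when balancing a multilingual tokenizer's training mix.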

The `txt` files (e.g., [`hindi_test.txt`](hindi_test.txt)) are for testing/evaluation purposes.

## Dataset Creation

### Source Data

- **Bengali:** The Bengali text samples were sourced from [Polygl0t/gigakriya-v1](https://huggingface.co/datasets/Polygl0t/gigakriya-v1).
- **English:** The English text samples were sourced from [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
- **Hindi:** The Hindi text samples were sourced from [Polygl0t/GigaLekh](https://huggingface.co/datasets/Polygl0t/GigaLekh).
- **Portuguese:** The Portuguese text samples were sourced from [TucanoBR/GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo).
- **Code:** The code samples were sourced from [bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata).

## Additional Information

### Dataset Maintainers

- [Nicholas Kluge Corrêa](mailto:kluge@uni-bonn.de).
- [Aniket Sen](mailto:sen@hiskp.uni-bonn.de).

### Licensing Information

Please refer to the individual licenses of the source datasets used to create this corpus, as listed in the "Source Data" section above. The combined dataset does not have a single unified license, and users should ensure compliance with the terms of each source dataset when utilizing this corpus.

### Citation Information

```latex

```

### Acknowledgments

Polyglot is a project funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MWK) as part of TRA Sustainable Futures (University of Bonn) and the Excellence Strategy of the federal and state governments.

We also gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by [University of Bonn](https://www.uni-bonn.de/en) along with the support provided by its High Performance Computing & Analytics Lab.


### Contributions

If you want to contribute, contact us at [polyglot@uni-bonn.de](mailto:polyglot@uni-bonn.de)!