nicholasKluge committed · Commit 9a5739f (verified) · Parent: 16ba60c

Update README.md

Files changed (1): README.md (+100 −3)

---
# Polygl0t Tokenizers

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Subsets and Splits](#subsets-and-splits)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Dataset Maintainers](#dataset-maintainers)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Acknowledgments](#acknowledgments)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/Polygl0t/tokenizers
- **Repository:** https://huggingface.co/datasets/Polygl0t/tokenizers
- **Point of Contact:** [Polygl0t](mailto:kluge@uni-bonn.de)

### Dataset Summary

This dataset contains several subsets for training multilingual tokenizers. Each subset is a collection of curated text samples in its respective language, filtered for educational content (e.g., FineWeb-Edu samples with an educational score of 5 in the case of English).

### Supported Tasks and Leaderboards

This dataset supports the training and evaluation of tokenizers in multiple languages, primarily for use in text generation models.

### Languages

Hindi, Bengali, English, Portuguese, and Code (a mixture of 36 programming languages).

<details>
<summary><b>All programming languages</b></summary>

</code>
</details>

## Dataset Structure

### Data Instances

Each instance is a JSON object with a single field, for example:

```json
{
  "text": "Olá, como vai você?"
}
```

### Data Fields

- **text:** a string of text in the respective language of the subset.

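Because every record carries only a `text` field, the subsets can be consumed with plain JSON tooling. A minimal sketch, assuming the samples are stored as JSON lines (the records below are illustrative, not taken from the dataset):

```python
import json

def read_text_records(lines):
    """Parse JSON-lines records and return the `text` field of each one."""
    texts = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        record = json.loads(line)
        texts.append(record["text"])
    return texts

# Illustrative records mirroring the schema above (not real dataset samples).
sample_lines = [
    '{"text": "Olá, como vai você?"}',
    '{"text": "Hello, how are you?"}',
]
print(read_text_records(sample_lines))
```

The same loop works unchanged whether the lines come from a local file or a streamed download.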
### Subsets and Splits

The dataset includes the following subsets:

- **Portuguese:** 2,000,000 text samples in Portuguese.
- **Hindi:** 2,000,000 text samples in Hindi.
- **Bengali:** 2,000,000 text samples in Bengali.
- **English:** 2,000,000 text samples in English.
- **Code:** 975,000 text samples in various programming languages.

The `txt` files (e.g., [`hindi_test.txt`](hindi_test.txt)) are for testing/evaluation purposes.

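One common way to compare tokenizers on held-out files like these is fertility: the average number of tokens produced per whitespace-delimited word. A minimal sketch, in which the `tokenize` callable is a whitespace-splitting stand-in for a trained tokenizer (this evaluation helper is an illustration, not part of the dataset):

```python
def fertility(texts, tokenize):
    """Average number of tokens produced per whitespace-delimited word."""
    total_tokens = 0
    total_words = 0
    for text in texts:
        total_tokens += len(tokenize(text))
        total_words += len(text.split())
    return total_tokens / total_words if total_words else 0.0

# Stand-in tokenizer: splits on whitespace, so fertility is exactly 1.0.
whitespace_tokenize = str.split

sample = ["a held out sentence", "another line of text"]
print(fertility(sample, whitespace_tokenize))  # 1.0
```

Lower fertility on a test file generally means the tokenizer compresses that language better; a real comparison would pass the encode function of each trained tokenizer in place of `whitespace_tokenize`.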
## Dataset Creation

### Source Data

- **Bengali:** The Bengali text samples were sourced from [Polygl0t/bengali-corpus](https://huggingface.co/datasets/Polygl0t/bengali-corpus).
- **English:** The English text samples were sourced from [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
- **Hindi:** The Hindi text samples were sourced from [Polygl0t/hindi-corpus](https://huggingface.co/datasets/Polygl0t/hindi-corpus).
- **Portuguese:** The Portuguese text samples were sourced from [TucanoBR/GigaVerbo](https://huggingface.co/datasets/TucanoBR/GigaVerbo).
- **Code:** The code samples were sourced from [bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata).

## Additional Information

### Dataset Maintainers

- [Nicholas Kluge Corrêa](mailto:kluge@uni-bonn.de)
- [Aniket Sen](mailto:sen@hiskp.uni-bonn.de)

### Licensing Information

Please refer to the individual licenses of the source datasets used to create this corpus, as listed in the "Source Data" section above. The combined dataset does not have a single unified license, and users should ensure compliance with the terms of each source dataset when using this corpus.

### Citation Information

```latex

```

### Acknowledgments

Polyglot is a project funded by the Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MWK) as part of TRA Sustainable Futures (University of Bonn) and the Excellence Strategy of the federal and state governments.

We also gratefully acknowledge the access granted to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

### Contributions

If you want to contribute, contact us at [polyglot@uni-bonn.de](mailto:polyglot@uni-bonn.de)!