---
license: mit
multilinguality: multilingual
task_categories:
- multiple-choice
pretty_name: Tokenization Robustness Math
tags:
- tokenization
- mathematics
dataset_info:
- config_name: tokenizer_robustness_completion_math_canonical
  features:
  # ...
configs:
# ...
  data_files:
  - split: test
    path: tokenizer_robustness_completion_math_turkish/test-*
language:
- en
- fa
- zh
- it
- tr
size_categories:
- n<1K
---

# Dataset Card for Tokenization Robustness (Math)

<!-- Provide a quick summary of the dataset. -->

<img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:auto; margin-right:auto; display:block;"/>

# TokSuite Benchmark (Math Collection)

## Dataset Description

This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model behavior under controlled conditions.

This specific subset focuses on **mathematical text completion**, containing multiple-choice math questions with a variety of **surface-form perturbations** that stress tokenizer handling of numbers, symbols, formatting, scripts, and mathematical notation.

- **Curated by:** R3 Research Team
- **Domain:** Mathematics
- **License:** MIT License

### Dataset Summary

TokSuite isolates the impact of tokenization by holding **model architecture, training data, training budget, and initialization constant**, varying only the tokenizer.

The Math benchmark evaluates performance on:

- A **canonical mathematical formulation**
- Multiple **perturbed variants** that preserve mathematical meaning while altering surface representation

These perturbations reflect realistic variation in how mathematical expressions are written, formatted, localized, and queried in practice.

**Key Features:**

- Canonical math questions with unambiguous answers
- Perturbations targeting notation, symbols, scripts, and formatting
- Parallel structure with TokSuite language benchmarks
- Designed for **evaluation**, not training

### Supported Tasks

- **Multiple-Choice Math Question Answering**
- **Tokenizer Robustness Evaluation**
- **Symbolic and Numerical Text Processing**

---
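For the multiple-choice task, evaluation typically scores each answer option and picks the highest-scoring one. A toy sketch of that loop, where `score` is a stand-in for a real model's per-choice log-likelihood (an assumption; the card does not specify the evaluation harness):

```python
# Stand-in scorer: a real harness would query a language model here.
def score(question: str, choice: str) -> float:
    return 1.0 if choice == "12" else 0.0  # hypothetical model preference

question = "What is 7 + 5?"   # illustrative item, not from the dataset
choices = ["10", "11", "12", "13"]

# Pick the argmax-scoring choice, as in likelihood-based MCQA evaluation.
predicted = max(range(len(choices)), key=lambda i: score(question, choices[i]))
print(predicted, choices[predicted])  # 2 12
```

Accuracy is then the fraction of items where `predicted` matches the stored `answer` index.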

## Dataset Structure

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `question` | `string` | Mathematical question text |
| `choices` | `list[string]` | Multiple-choice answer options |
| `answer` | `int64` | Index of the correct answer |
| `answer_label` | `string` | Letter label of the correct answer |
| `split` | `string` | Dataset split identifier (all entries are `test`) |
| `subcategories` | `string` | Perturbation category |
| `lang` | `string` | Domain identifier (`math`) |
| `notes` | `string` | Additional context about the perturbation |
| `id` | `string` | Unique question identifier |
| `set_id` | `float64` | Question set grouping identifier |
| `variation_id` | `float64` | Variation number within a question set |
| `vanilla_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity to canonical form using raw token sequences |
| `trimmed_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity after token normalization |
| `token_counts` | `dict[string, int]` | Number of tokens produced per tokenizer |
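A record shaped like the fields above might look as follows; the values are hypothetical, not taken from the dataset. Note that `answer` and `answer_label` encode the same information (index vs. letter):

```python
# Illustrative record (hypothetical values, matching the field table above).
record = {
    "question": "What is 7 + 5?",
    "choices": ["10", "11", "12", "13"],
    "answer": 2,
    "answer_label": "C",
    "split": "test",
    "subcategories": "canonical",
    "lang": "math",
    "notes": "",
    "id": "math-0001-v0",
    "set_id": 1.0,
    "variation_id": 0.0,
}

def label_for(index: int) -> str:
    """Map an answer index to its letter label (0 -> 'A', 1 -> 'B', ...)."""
    return chr(ord("A") + index)

assert label_for(record["answer"]) == record["answer_label"]
print(record["choices"][record["answer"]])  # the correct choice text
```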

---
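The similarity fields compare a perturbed item's token sequence against its canonical form, per tokenizer. One way to sketch such a comparison is to treat each token sequence as a bag-of-token frequency vector (an assumption for illustration; the card does not specify the exact procedure):

```python
from collections import Counter
from math import sqrt

def cosine_sim(tokens_a, tokens_b):
    """Cosine similarity between two token sequences, treated as
    bag-of-token frequency vectors."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical tokenizations of a canonical item and its spelled-out variant.
canonical = ["What", " is", " 7", " +", " 5", "?"]
perturbed = ["What", " is", " seven", " plus", " five", "?"]

print(round(cosine_sim(canonical, canonical), 3))  # 1.0
print(round(cosine_sim(canonical, perturbed), 3))  # 0.5
```

A similarity of 1.0 against the canonical form indicates the tokenizer produced an identical token distribution despite the surface change.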

## Dataset Creation

### Curation Rationale

This dataset was created to:

1. Systematically evaluate tokenizer robustness on **mathematical notation and structure**
2. Measure sensitivity to changes in formatting, symbols, scripts, and numeric representation
3. Isolate tokenization effects from mathematical reasoning difficulty
4. Provide standardized benchmarks for math-focused language models

Canonical questions are intentionally **simple and high-accuracy**, allowing researchers to attribute performance degradation to tokenization rather than reasoning complexity.
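In practice, that attribution comes from comparing per-perturbation accuracy against the canonical baseline. A minimal sketch with hypothetical per-item results (not real numbers from the benchmark):

```python
# Hypothetical (subcategory, correct?) results for one model.
results = [
    ("canonical", True), ("canonical", True), ("canonical", True), ("canonical", True),
    ("spelled_out", True), ("spelled_out", False),
    ("space_removal", False), ("space_removal", True),
]

def accuracy(subcat):
    hits = [ok for cat, ok in results if cat == subcat]
    return sum(hits) / len(hits)

baseline = accuracy("canonical")
for cat in ("spelled_out", "space_removal"):
    # Degradation relative to the canonical baseline is attributed to
    # tokenization, since the underlying math is unchanged.
    print(cat, round(baseline - accuracy(cat), 2))
```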

### Source Data

- Canonical math questions were manually authored
- Each question was perturbed while preserving mathematical equivalence
- Canonical accuracy was validated across TokSuite models

---

## Perturbation Categories (Math)

1. **Canonical**
   The baseline mathematical text written in a standard, well-formatted form with no perturbations. This serves as the reference condition for evaluating all other perturbations.

2. **Chinese**
   Rewrites mathematical text using Chinese characters for numbers, operators, or surrounding descriptions, testing tokenizer robustness to non-Latin scripts in math contexts.

3. **Decorative Unicode**
   Replaces standard mathematical symbols with visually similar decorative or stylized Unicode characters (e.g., fancy numerals or operators), stressing Unicode normalization and symbol handling.

4. **Farsi**
   Introduces Persian (Farsi) numerals or script elements into mathematical expressions, testing tokenizer robustness to right-to-left scripts and cross-script numeric representations.

5. **Italian**
   Rewrites textual components of math problems in Italian while preserving the same mathematical structure and solution.

6. **LaTeX**
   Encodes mathematical expressions using LaTeX-style syntax (e.g., `\frac`, `^`, `_`), stressing tokenizer handling of markup-heavy mathematical notation.

7. **Space Removal**
   Removes or alters spacing within mathematical expressions and surrounding text, stressing tokenizer assumptions about whitespace in math contexts.

8. **Spelled-Out Forms**
   Replaces numerals or symbols with fully spelled-out textual equivalents (e.g., numbers written as words), increasing sequence length and altering token boundaries.

9. **Turkish**
   Rewrites textual components of math problems in Turkish while preserving the underlying mathematical meaning.

---
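Several of these categories can be sketched as simple string transforms. The snippets below are illustrative only, not the authors' generation code:

```python
# Farsi: map ASCII digits to Persian digits.
FARSI_DIGITS = str.maketrans("0123456789", "۰۱۲۳۴۵۶۷۸۹")
# Spelled-out forms: replace numerals/operators with words (toy mapping).
SPELLED = {"7": "seven", "5": "five", "+": "plus"}

canonical = "7 + 5 = 12"

farsi = canonical.translate(FARSI_DIGITS)        # cross-script numerals
no_space = canonical.replace(" ", "")            # space removal
spelled = " ".join(SPELLED.get(tok, tok) for tok in canonical.split())
latex = r"$7 + 5 = 12$"                          # LaTeX-style wrapping

print(farsi)     # ۷ + ۵ = ۱۲
print(no_space)  # 7+5=12
print(spelled)   # seven plus five = 12
```

All four variants denote the same equation, but each can tokenize very differently, which is exactly the variation the benchmark measures.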

## Considerations for Using the Data

- **Language variety:** The dataset uses standard mathematical notation and English-language math phrasing, and may not represent informal or pedagogical math language.
- **Script focus:** Mathematical expressions are primarily written using ASCII and standard Unicode; LaTeX, decorative Unicode, and non-Latin scripts are included as perturbations.
- **Domain coverage:** Questions focus on general mathematics and may not represent highly specialized or advanced mathematical domains.
- **Question simplicity:** Designed for high baseline accuracy, which may not reflect real-world mathematical task complexity.

---

## Additional Information

### Dataset Curators

The dataset was curated by the TokSuite research team at R3.

### Licensing Information

MIT License

### Citation Information

If you use this dataset in your research, please cite the TokSuite paper:

```bibtex
@inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
  year={2026}
}
```

**Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](TBD)

### Contributions

This dataset is part of TokSuite, which includes parallel benchmark collections for multiple languages.

### Contact

For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.

---