Malikeh1375 committed
Commit 86843a0 · verified · 1 Parent(s): a0d3308

Update README.md

Files changed (1): README.md (+89 −144)

README.md CHANGED
@@ -1,12 +1,12 @@
  ---
- license: cc
  multilinguality: multilingual
  task_categories:
  - multiple-choice
  pretty_name: Tokenization Robustness Math
  tags:
- - multilingual
  - tokenization
  dataset_info:
  - config_name: tokenizer_robustness_completion_math_canonical
  features:
@@ -1161,192 +1161,139 @@ configs:
  data_files:
  - split: test
  path: tokenizer_robustness_completion_math_turkish/test-*
  ---

- # Dataset Card for Tokenization Robustness Math
  <!-- Provide a quick summary of the dataset. -->

- <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
-
- # TokSuite Benchmark ({LANGUAGE_NAME} Collection)

  ## Dataset Description

- This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains {LANGUAGE_NAME} language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.

- - **Curated by:** R3 Research Team
- - **Language(s):** {LANGUAGE_NAME} ({LANGUAGE_CODE})
- - **License:** MIT License

  ### Dataset Summary

- TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The {LANGUAGE_NAME} subset specifically measures model performance on canonical questions and various perturbations including {LIST_KEY_PERTURBATION_TYPES}.

  **Key Features:**
- - {NUM_CANONICAL_QUESTIONS} canonical questions covering {TOPIC_AREAS}
- - Multiple perturbation types reflecting real-world text variations in {LANGUAGE_NAME}
- - Parallel structure with TokSuite benchmark (available in English, Turkish, Italian, Chinese, Farsi)
- - Native speaker curation ensuring linguistic authenticity

  ### Supported Tasks

- - **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
- - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
- - **Multilingual NLP Benchmarking**: Evaluating language models on {LANGUAGE_NAME} text understanding

- ### Languages
-
- The dataset contains text in {LANGUAGE_NAME} written in {SCRIPT_NAME} (language code: {LANGUAGE_CODE_FULL}).

  ## Dataset Structure

- ### Data Instances
-
- An example from the dataset:
- ```json
- {
- "question": "{EXAMPLE_QUESTION}",
- "choices": ["{CHOICE_A}", "{CHOICE_B}", "{CHOICE_C}", "{CHOICE_D}"],
- "answer": {ANSWER_INDEX},
- "answer_label": "{ANSWER_LABEL}",
- "split": "test",
- "subcategories": "{SUBCATEGORY}",
- "lang": "{LANGUAGE_CODE_FULL}",
- "second_lang": "{ENGLISH_TRANSLATION}",
- "coding_lang": "",
- "notes": "{NOTES}",
- "id": "{ID}",
- "set_id": {SET_ID},
- "variation_id": {VARIATION_ID}
- }
- ```
-
  ### Data Fields

  | Field | Type | Description |
- |-------|------|-------------|
- | question | string | The question text in {LANGUAGE_NAME} ({SCRIPT_DESCRIPTION}) |
- | choices | list[string] | Four multiple-choice answer options in {LANGUAGE_NAME} |
- | answer | int64 | Index of the correct answer (0-3) |
- | answer_label | string | Letter label of the correct answer (A, B, C, or D) |
- | split | string | Dataset split identifier (all entries are "test") |
- | subcategories | string | Perturbation category |
- | lang | string | Language code ({LANGUAGE_CODE_FULL} = {LANGUAGE_DESCRIPTION}) |
- | second_lang | string | English translation or description of the question |
- | coding_lang | string | Not applicable for this dataset (empty string) |
- | notes | string | Additional context about the question or perturbation type |
- | id | string | Unique question identifier |
- | set_id | float64 | Question set grouping identifier (ranges from {ID_RANGE_START}-{ID_RANGE_END}) |
- | variation_id | float64 | Variation number within a question set |

  ## Dataset Creation

  ### Curation Rationale

  This dataset was created to:
- 1. Systematically evaluate how different tokenization strategies handle {LANGUAGE_NAME} text
- 2. Measure robustness against real-world text perturbations specific to {LANGUAGE_NAME} language
- 3. Support research into tokenization's impact on language model behavior
- 4. Provide standardized benchmarks for {LANGUAGE_NAME} language models

- The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.

  ### Source Data

- #### Data Collection and Processing
-
- - **Canonical Questions**: {NUM_BASE_QUESTIONS} baseline questions in English were created covering general knowledge topics
- - **Translation**: Native {LANGUAGE_NAME} speakers translated questions to {LANGUAGE_NAME}
- - **Perturbations**: Each question underwent targeted perturbations designed to reflect {LINGUISTIC_CHARACTERISTICS}
- - **Validation**: Model-in-the-loop process ensured high baseline accuracy across 14 different tokenizers
-
- #### Perturbation Categories
-
- 1. **Canonical**
- {DESCRIPTION_OF_CANONICAL}
-
- 2. **{PERTURBATION_NAME_1}**
- {DESCRIPTION_1}
-
- 3. **{PERTURBATION_NAME_2}**
- {DESCRIPTION_2}
-
- 4. **{PERTURBATION_NAME_3}**
- {DESCRIPTION_3}
-
- 5. **{PERTURBATION_NAME_4}**
- {DESCRIPTION_4}
-
- 6. **{PERTURBATION_NAME_5}**
- {DESCRIPTION_5}
-
- 7. **{PERTURBATION_NAME_6}**
- {DESCRIPTION_6}
-
- 8. **{PERTURBATION_NAME_7}**
- {DESCRIPTION_7}

- #### Model Performance Comparison

- | model_name | canonical | {PERTURBATION_COL_1} | {PERTURBATION_COL_2} | {PERTURBATION_COL_3} | {PERTURBATION_COL_4} | {PERTURBATION_COL_5} | {PERTURBATION_COL_6} | {PERTURBATION_COL_7} |
- |:-------------|----------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|
- | Aya | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | BLOOM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | ByT5 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Comma | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | GPT-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | GPT-4o | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Gemma-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Llama-3.2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Phi-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Qwen-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | Tekken | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | TokenMonster | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | XGLM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
- | mBERT | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |

- #### Who are the source data producers?

- Native {LANGUAGE_NAME} speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.

- ### Annotations

- #### Annotation process

- Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world {LANGUAGE_NAME} text processing.

- #### Who are the annotators?

- Native {LANGUAGE_NAME} speakers with expertise in linguistics and NLP, working as part of the TokSuite project.

- ### Personal and Sensitive Information

- The dataset contains only general knowledge questions and does not include any personal or sensitive information.

  ## Considerations for Using the Data

- ### Social Impact of Dataset

- This dataset contributes to improving language technology for {LANGUAGE_NAME} speakers by:
- - Enabling better understanding of tokenization challenges in {LANGUAGE_NAME}
- - Supporting development of more robust multilingual models
- - Providing standardized evaluation for {LANGUAGE_NAME} NLP research
-
- ### Discussion of Biases
-
- - **Language variety**: The dataset uses {STANDARD_VARIETY} and may not fully represent dialectal variations
- - **Script focus**: {SCRIPT_LIMITATIONS_DESCRIPTION}
- - **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific language use
- - **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity
-
- ### Other Known Limitations
-
- - Relatively small dataset size (designed for evaluation, not training)
- - Focus on multiple-choice format may not capture all aspects of language understanding
- - Perturbations are specific to {LANGUAGE_NAME}'s characteristics and findings may not generalize to all languages
- - Models evaluated were trained at ~1B parameters; results may differ at larger scales

  ## Additional Information

@@ -1356,21 +1303,19 @@ The dataset was curated by the TokSuite research team at R3.

  ### Licensing Information

- MIT license

  ### Citation Information

  If you use this dataset in your research, please cite the TokSuite paper:

  ```bibtex
  @inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
- booktitle={Preprint.},
- year={2026},
- url={TBD}
  }
- ```
-
  **Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](TBD)

  ### Contributions
@@ -1383,7 +1328,7 @@ This dataset is part of TokSuite, which includes:

  ### Contact

- For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors through the paper submission system.

  ---

  ---
+ license: mit
  multilinguality: multilingual
  task_categories:
  - multiple-choice
  pretty_name: Tokenization Robustness Math
  tags:
  - tokenization
+ - mathematics
  dataset_info:
  - config_name: tokenizer_robustness_completion_math_canonical
  features:
  data_files:
  - split: test
  path: tokenizer_robustness_completion_math_turkish/test-*
+ language:
+ - en
+ - fa
+ - zh
+ - it
+ - tr
+ size_categories:
+ - n<1K
  ---
+ # Dataset Card for Tokenization Robustness (Math)

  <!-- Provide a quick summary of the dataset. -->

+ <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:auto; margin-right:auto; display:block;"/>

+ # TokSuite Benchmark (Math Collection)

  ## Dataset Description

+ This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model behavior under controlled conditions.
+
+ This specific subset focuses on **mathematical text completion**, containing multiple-choice math questions with a variety of **surface-form perturbations** that stress tokenizer handling of numbers, symbols, formatting, scripts, and mathematical notation.

+ - **Curated by:** R3 Research Team
+ - **Domain:** Mathematics
+ - **License:** MIT License

  ### Dataset Summary

+ TokSuite isolates the impact of tokenization by holding **model architecture, training data, training budget, and initialization constant**, varying only the tokenizer.
+
+ The Math benchmark evaluates performance on:
+ - A **canonical mathematical formulation**
+ - Multiple **perturbed variants** that preserve mathematical meaning while altering surface representation
+
+ These perturbations reflect realistic variation in how mathematical expressions are written, formatted, localized, and queried in practice.

  **Key Features:**
+ - Canonical math questions with unambiguous answers
+ - Perturbations targeting notation, symbols, scripts, and formatting
+ - Parallel structure with TokSuite language benchmarks
+ - Designed for **evaluation**, not training

  ### Supported Tasks

+ - **Multiple-Choice Math Question Answering**
+ - **Tokenizer Robustness Evaluation**
+ - **Symbolic and Numerical Text Processing**
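
The card does not prescribe an evaluation harness for these tasks. One common approach for multiple-choice completion is to score each answer choice by its log-likelihood under a causal language model and pick the highest-scoring choice; a minimal sketch of that approach is below. The model id and the example item are illustrative placeholders, not part of the dataset or the TokSuite setup.

```python
# Minimal sketch: score a multiple-choice completion item by summing the
# log-probabilities a causal LM assigns to each choice, then pick the best.
# Model id and example item are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # any causal LM; not a TokSuite model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def choice_logprob(question: str, choice: str) -> float:
    """Sum of token log-probs of `choice` given `question` as the prefix.

    Assumes the prefix tokenization is a prefix of the full tokenization,
    which holds for typical BPE tokenizers on examples like this one.
    """
    prefix_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # log-prob of each token given its preceding context
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # keep only the positions belonging to the choice continuation
    n_prefix = prefix_ids.shape[1]
    return token_lp[0, n_prefix - 1:].sum().item()

item = {  # hypothetical item using the documented fields
    "question": "2 + 2 =",
    "choices": ["3", "4", "5", "22"],
    "answer": 1,
}
scores = [choice_logprob(item["question"], c) for c in item["choices"]]
pred = max(range(len(scores)), key=scores.__getitem__)
print("predicted:", pred, "correct:", pred == item["answer"])
```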

+ ---

  ## Dataset Structure

  ### Data Fields

  | Field | Type | Description |
+ |------|------|-------------|
+ | `question` | `string` | Mathematical question text |
+ | `choices` | `list[string]` | Multiple-choice answer options |
+ | `answer` | `int64` | Index of the correct answer |
+ | `answer_label` | `string` | Letter label of the correct answer |
+ | `split` | `string` | Dataset split identifier (all entries are `test`) |
+ | `subcategories` | `string` | Perturbation category |
+ | `lang` | `string` | Domain identifier (`math`) |
+ | `notes` | `string` | Additional context about the perturbation |
+ | `id` | `string` | Unique question identifier |
+ | `set_id` | `float64` | Question set grouping identifier |
+ | `variation_id` | `float64` | Variation number within a question set |
+ | `vanilla_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity to canonical form using raw token sequences |
+ | `trimmed_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity after token normalization |
+ | `token_counts` | `dict[string, int]` | Number of tokens produced per tokenizer |
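
A minimal sketch of loading one configuration and inspecting these fields with the `datasets` library is shown below. The repository id is written as a placeholder (the card does not state it), and the config name is taken from the YAML front matter above.

```python
# Minimal sketch: load one config of this dataset and inspect the documented
# fields. REPO_ID is a placeholder; the config name comes from the YAML above.
from datasets import load_dataset

REPO_ID = "<hub-user>/<dataset-name>"  # placeholder, not the confirmed repo id
ds = load_dataset(REPO_ID, "tokenizer_robustness_completion_math_canonical", split="test")

example = ds[0]
print(example["question"])
print(example["choices"], "-> correct:", example["answer_label"])

# `token_counts` maps tokenizer names to the number of tokens each produces
# for this item, which makes per-tokenizer comparisons straightforward.
for tok_name, n_tokens in sorted(example["token_counts"].items()):
    print(f"{tok_name:>15}: {n_tokens} tokens")
```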

+ ---

  ## Dataset Creation

  ### Curation Rationale

  This dataset was created to:
+ 1. Systematically evaluate tokenizer robustness on **mathematical notation and structure**
+ 2. Measure sensitivity to changes in formatting, symbols, scripts, and numeric representation
+ 3. Isolate tokenization effects from mathematical reasoning difficulty
+ 4. Provide standardized benchmarks for math-focused language models

+ Canonical questions are intentionally **simple and high-accuracy**, allowing researchers to attribute performance degradation to tokenization rather than reasoning complexity.
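
Under that design, a natural summary statistic is accuracy per perturbation category relative to the canonical subset. A minimal sketch, assuming per-item correctness has already been collected (e.g., with a scoring harness like the one sketched earlier):

```python
# Minimal sketch: per-perturbation accuracy and degradation vs. canonical.
# `results` holds one entry per evaluated item; values here are illustrative.
from collections import defaultdict

results = [
    {"subcategory": "canonical", "correct": True},
    {"subcategory": "latex", "correct": True},
    {"subcategory": "space_removal", "correct": False},
]

totals, hits = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["subcategory"]] += 1
    hits[r["subcategory"]] += int(r["correct"])

accuracy = {cat: hits[cat] / totals[cat] for cat in totals}
baseline = accuracy.get("canonical", float("nan"))
for cat, acc in sorted(accuracy.items()):
    print(f"{cat:>15}: acc={acc:.2f}  degradation={baseline - acc:+.2f}")
```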

  ### Source Data

+ - Canonical math questions were manually authored
+ - Each question was perturbed while preserving mathematical equivalence
+ - Canonical accuracy was validated across TokSuite models

+ ---

+ ## Perturbation Categories (Math)

+ 1. **Canonical**
+ The baseline mathematical text written in a standard, well-formatted form with no perturbations. This serves as the reference condition for evaluating all other perturbations.

+ 2. **Chinese**
+ Rewrites mathematical text using Chinese characters for numbers, operators, or surrounding descriptions, testing tokenizer robustness to non-Latin scripts in math contexts.

+ 3. **Decorative Unicode**
+ Replaces standard mathematical symbols with visually similar decorative or stylized Unicode characters (e.g., fancy numerals or operators), stressing Unicode normalization and symbol handling.

+ 4. **Farsi**
+ Introduces Persian (Farsi) numerals or script elements into mathematical expressions, testing tokenizer robustness to right-to-left scripts and cross-script numeric representations.

+ 5. **Italian**
+ Rewrites textual components of math problems in Italian while preserving the same mathematical structure and solution.

+ 6. **LaTeX**
+ Encodes mathematical expressions using LaTeX-style syntax (e.g., `\frac`, `^`, `_`), stressing tokenizer handling of markup-heavy mathematical notation.

+ 7. **Space Removal**
+ Removes or alters spacing within mathematical expressions and surrounding text, stressing tokenizer assumptions about whitespace in math contexts.

+ 8. **Spelled-Out Forms**
+ Replaces numerals or symbols with fully spelled-out textual equivalents (e.g., numbers written as words), increasing sequence length and altering token boundaries.

+ 9. **Turkish**
+ Rewrites textual components of math problems in Turkish while preserving the underlying mathematical meaning.
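
To see why such perturbations matter at the tokenizer level, the sketch below tokenizes a hypothetical canonical question and a few hand-written perturbed variants (space removal, spelled-out forms, LaTeX) with an off-the-shelf tokenizer, then compares token counts and a simple bag-of-tokens cosine similarity to the canonical form. The example strings and tokenizer choice are illustrative, and the dataset's own `*_cos_sim_to_canonical` fields may be computed differently.

```python
# Minimal sketch: how surface-form perturbations change tokenization.
# Example strings and tokenizer choice are illustrative only.
from collections import Counter
from math import sqrt
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any tokenizer works here

variants = {  # hypothetical perturbed forms of one canonical question
    "canonical": "What is 3/4 + 1/2?",
    "space_removal": "Whatis3/4+1/2?",
    "spelled_out": "What is three quarters plus one half?",
    "latex": r"What is \frac{3}{4} + \frac{1}{2}?",
}

def bag_cosine(a: list[int], b: list[int]) -> float:
    """Cosine similarity between bag-of-token-id count vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

canonical_ids = tok(variants["canonical"])["input_ids"]
for name, text in variants.items():
    ids = tok(text)["input_ids"]
    sim = bag_cosine(canonical_ids, ids)
    print(f"{name:>14}: {len(ids):2d} tokens, cos_sim_to_canonical={sim:.2f}")
```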

+ ---

  ## Considerations for Using the Data

+ - **Language variety:** The dataset uses standard mathematical notation and English-language math phrasing, and may not represent informal or pedagogical math language.
+ - **Script focus:** Mathematical expressions are primarily written using ASCII and standard Unicode; LaTeX, decorative Unicode, and non-Latin scripts are included as perturbations.
+ - **Domain coverage:** Questions focus on general mathematics and may not represent highly specialized or advanced mathematical domains.
+ - **Question simplicity:** Designed for high baseline accuracy, which may not reflect real-world mathematical task complexity.

+ ---

  ## Additional Information

  ### Licensing Information

+ MIT License

  ### Citation Information

  If you use this dataset in your research, please cite the TokSuite paper:
+
  ```bibtex
  @inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
+ year={2026}
  }
+ ```

  **Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](TBD)

  ### Contributions

  ### Contact

+ For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.

  ---