Malikeh1375 committed on
Commit ff2865f · verified · 1 Parent(s): a4c4461

Update README.md

Files changed (1)
  1. README.md +100 -106
README.md CHANGED
@@ -1,5 +1,5 @@
1
  ---
2
- license: cc
3
  multilinguality: multilingual
4
  task_categories:
5
  - multiple-choice
@@ -7,6 +7,7 @@ pretty_name: Tokenization Robustness
7
  tags:
8
  - multilingual
9
  - tokenization
 
10
  dataset_info:
11
  - config_name: tokenizer_robustness_completion_turkish_canonical
12
  features:
@@ -2124,7 +2125,8 @@ configs:
2124
  - config_name: tokenizer_robustness_completion_turkish_code_language_script_switching
2125
  data_files:
2126
  - split: test
2127
- path: tokenizer_robustness_completion_turkish_code_language_script_switching/test-*
 
2128
  - config_name: tokenizer_robustness_completion_turkish_colloquial
2129
  data_files:
2130
  - split: test
@@ -2185,96 +2187,79 @@ configs:
2185
  data_files:
2186
  - split: test
2187
  path: tokenizer_robustness_completion_turkish_word_reordering/test-*
2188
  ---
2189
-
2190
  # Dataset Card for Tokenization Robustness
2191
 
2192
  <!-- Provide a quick summary of the dataset. -->
2193
 
2194
  <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
2195
 
2196
- # TokSuite Benchmark ({LANGUAGE_NAME} Collection)
2197
-
2198
 
2199
  ## Dataset Description
2200
 
2201
- This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains {LANGUAGE_NAME} language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.
2202
 
2203
  - **Curated by:** R3 Research Team
2204
- - **Language(s):** {LANGUAGE_NAME} ({LANGUAGE_CODE})
2205
  - **License:** MIT License
2206
 
2207
  ### Dataset Summary
2208
 
2209
- TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The {LANGUAGE_NAME} subset specifically measures model performance on canonical questions and various perturbations including {LIST_KEY_PERTURBATION_TYPES}.
2210
-
2211
  **Key Features:**
2212
- - {NUM_CANONICAL_QUESTIONS} canonical questions covering {TOPIC_AREAS}
2213
- - Multiple perturbation types reflecting real-world text variations in {LANGUAGE_NAME}
2214
- - Parallel structure with TokSuite benchmark (available in English, Turkish, Italian, Chinese, Farsi)
2215
  - Native speaker curation ensuring linguistic authenticity
2216
 
2217
  ### Supported Tasks
2218
 
2219
  - **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
2220
  - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
2221
- - **Multilingual NLP Benchmarking**: Evaluating language models on {LANGUAGE_NAME} text understanding
2222
 
2223
  ### Languages
2224
 
2225
- The dataset contains text in {LANGUAGE_NAME} written in {SCRIPT_NAME} (language code: {LANGUAGE_CODE_FULL}).
2226
 
2227
  ## Dataset Structure
2228
 
2229
- ### Data Instances
2230
-
2231
- An example from the dataset:
2232
- ```json
2233
- {
2234
- "question": "{EXAMPLE_QUESTION}",
2235
- "choices": ["{CHOICE_A}", "{CHOICE_B}", "{CHOICE_C}", "{CHOICE_D}"],
2236
- "answer": {ANSWER_INDEX},
2237
- "answer_label": "{ANSWER_LABEL}",
2238
- "split": "test",
2239
- "subcategories": "{SUBCATEGORY}",
2240
- "lang": "{LANGUAGE_CODE_FULL}",
2241
- "second_lang": "{ENGLISH_TRANSLATION}",
2242
- "coding_lang": "",
2243
- "notes": "{NOTES}",
2244
- "id": "{ID}",
2245
- "set_id": {SET_ID},
2246
- "variation_id": {VARIATION_ID}
2247
- }
2248
- ```
2249
-
2250
  ### Data Fields
2251
 
2252
  | Field | Type | Description |
2253
  |-------|------|-------------|
2254
- | question | string | The question text in {LANGUAGE_NAME} ({SCRIPT_DESCRIPTION}) |
2255
- | choices | list[string] | Four multiple-choice answer options in {LANGUAGE_NAME} |
2256
- | answer | int64 | Index of the correct answer (0-3) |
2257
- | answer_label | string | Letter label of the correct answer (A, B, C, or D) |
2258
- | split | string | Dataset split identifier (all entries are "test") |
2259
- | subcategories | string | Perturbation category |
2260
- | lang | string | Language code ({LANGUAGE_CODE_FULL} = {LANGUAGE_DESCRIPTION}) |
2261
- | second_lang | string | English translation or description of the question |
2262
- | coding_lang | string | Not applicable for this dataset (empty string) |
2263
- | notes | string | Additional context about the question or perturbation type |
2264
- | id | string | Unique question identifier |
2265
- | set_id | float64 | Question set grouping identifier (ranges from {ID_RANGE_START}-{ID_RANGE_END}) |
2266
- | variation_id | float64 | Variation number within a question set |
2267
-
 
 
2268
 
2269
  ## Dataset Creation
2270
 
2271
  ### Curation Rationale
2272
 
2273
  This dataset was created to:
2274
- 1. Systematically evaluate how different tokenization strategies handle {LANGUAGE_NAME} text
2275
- 2. Measure robustness against real-world text perturbations specific to {LANGUAGE_NAME} language
2276
- 3. Support research into tokenization's impact on language model behavior
2277
- 4. Provide standardized benchmarks for {LANGUAGE_NAME} language models
2278
 
2279
  The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
2280
 
@@ -2282,69 +2267,80 @@ The questions were designed to be straightforward with high baseline accuracy, a
2282
 
2283
  #### Data Collection and Processing
2284
 
2285
- - **Canonical Questions**: {NUM_BASE_QUESTIONS} baseline questions in English were created covering general knowledge topics
2286
- - **Translation**: Native {LANGUAGE_NAME} speakers translated questions to {LANGUAGE_NAME}
2287
- - **Perturbations**: Each question underwent targeted perturbations designed to reflect {LINGUISTIC_CHARACTERISTICS}
2288
- - **Validation**: Model-in-the-loop process ensured high baseline accuracy across 14 different tokenizers
2289
 
2290
  #### Perturbation Categories
2291
 
2292
  1. **Canonical**
2293
- {DESCRIPTION_OF_CANONICAL}
 
 
 
2294
 
2295
- 2. **{PERTURBATION_NAME_1}**
2296
- {DESCRIPTION_1}
2297
 
2298
- 3. **{PERTURBATION_NAME_2}**
2299
- {DESCRIPTION_2}
2300
 
2301
- 4. **{PERTURBATION_NAME_3}**
2302
- {DESCRIPTION_3}
2303
 
2304
- 5. **{PERTURBATION_NAME_4}**
2305
- {DESCRIPTION_4}
2306
 
2307
- 6. **{PERTURBATION_NAME_5}**
2308
- {DESCRIPTION_5}
2309
 
2310
- 7. **{PERTURBATION_NAME_6}**
2311
- {DESCRIPTION_6}
2312
 
2313
- 8. **{PERTURBATION_NAME_7}**
2314
- {DESCRIPTION_7}
2315
 
2316
- #### Model Performance Comparison
 
2317
 
2318
- | model_name | canonical | {PERTURBATION_COL_1} | {PERTURBATION_COL_2} | {PERTURBATION_COL_3} | {PERTURBATION_COL_4} | {PERTURBATION_COL_5} | {PERTURBATION_COL_6} | {PERTURBATION_COL_7} |
2319
- |:-------------|----------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|
2320
- | Aya | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2321
- | BLOOM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2322
- | ByT5 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2323
- | Comma | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2324
- | GPT-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2325
- | GPT-4o | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2326
- | Gemma-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2327
- | Llama-3.2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2328
- | Phi-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2329
- | Qwen-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2330
- | Tekken | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2331
- | TokenMonster | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2332
- | XGLM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2333
- | mBERT | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
2334
 
2335
  #### Who are the source data producers?
2336
 
2337
- Native {LANGUAGE_NAME} speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
2338
 
2339
  ### Annotations
2340
 
2341
  #### Annotation process
2342
 
2343
- Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world {LANGUAGE_NAME} text processing.
2344
 
2345
  #### Who are the annotators?
2346
 
2347
- Native {LANGUAGE_NAME} speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
2348
 
2349
  ### Personal and Sensitive Information
2350
 
@@ -2354,24 +2350,21 @@ The dataset contains only general knowledge questions and does not include any p
2354
 
2355
  ### Social Impact of Dataset
2356
 
2357
- This dataset contributes to improving language technology for {LANGUAGE_NAME} speakers by:
2358
- - Enabling better understanding of tokenization challenges in {LANGUAGE_NAME}
2359
- - Supporting development of more robust multilingual models
2360
- - Providing standardized evaluation for {LANGUAGE_NAME} NLP research
2361
 
2362
  ### Discussion of Biases
2363
 
2364
- - **Language variety**: The dataset uses {STANDARD_VARIETY} and may not fully represent dialectal variations
2365
- - **Script focus**: {SCRIPT_LIMITATIONS_DESCRIPTION}
2366
- - **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific language use
2367
- - **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity
2368
 
2369
  ### Other Known Limitations
2370
 
2371
- - Relatively small dataset size (designed for evaluation, not training)
2372
- - Focus on multiple-choice format may not capture all aspects of language understanding
2373
- - Perturbations are specific to {LANGUAGE_NAME}'s characteristics and findings may not generalize to all languages
2374
- - Models evaluated were trained at ~1B parameters; results may differ at larger scales
2375
 
2376
  ## Additional Information
2377
 
@@ -2386,6 +2379,7 @@ MIT license
2386
  ### Citation Information
2387
 
2388
  If you use this dataset in your research, please cite the TokSuite paper:
 
2389
  ```bibtex
2390
  @inproceedings{toksuite2026,
2391
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
@@ -2408,7 +2402,7 @@ This dataset is part of TokSuite, which includes:
2408
 
2409
  ### Contact
2410
 
2411
- For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors through the paper submission system.
2412
 
2413
  ---
2414
 
@@ -2418,4 +2412,4 @@ For questions or issues related to this dataset, please refer to the TokSuite pr
2418
 
2419
  *Understanding Tokenization's Role in Language Model Behavior*
2420
 
2421
- </div>
 
1
  ---
2
+ license: mit
3
  multilinguality: multilingual
4
  task_categories:
5
  - multiple-choice
 
7
  tags:
8
  - multilingual
9
  - tokenization
10
+ - robustness
11
  dataset_info:
12
  - config_name: tokenizer_robustness_completion_turkish_canonical
13
  features:
 
2125
  - config_name: tokenizer_robustness_completion_turkish_code_language_script_switching
2126
  data_files:
2127
  - split: test
2128
+ path: >-
2129
+ tokenizer_robustness_completion_turkish_code_language_script_switching/test-*
2130
  - config_name: tokenizer_robustness_completion_turkish_colloquial
2131
  data_files:
2132
  - split: test
 
2187
  data_files:
2188
  - split: test
2189
  path: tokenizer_robustness_completion_turkish_word_reordering/test-*
2190
+ language:
2191
+ - tr
2192
+ - en
2193
+ size_categories:
2194
+ - n<1K
2195
  ---
 
2196
  # Dataset Card for Tokenization Robustness
2197
 
2198
  <!-- Provide a quick summary of the dataset. -->
2199
 
2200
  <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
2201
 
2202
+ # TokSuite Benchmark (Turkish Collection)
 
2203
 
2204
  ## Dataset Description
2205
 
2206
+ This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains Turkish language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.
2207
 
2208
  - **Curated by:** R3 Research Team
2209
+ - **Language(s):** Turkish (tr)
2210
  - **License:** MIT License
2211
 
2212
  ### Dataset Summary
2213
 
2214
+ TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Turkish subset specifically measures model performance on canonical questions and various perturbations.
 
2215
  **Key Features:**
2216
+ - 40 canonical questions covering general knowledge, geography, science, and language understanding
2217
+ - Multiple perturbation types reflecting real-world text variations in Turkish
2218
+ - Parallel structure with TokSuite benchmark (available in English, Italian, Farsi, Chinese)
2219
  - Native speaker curation ensuring linguistic authenticity
2220
 
2221
  ### Supported Tasks
2222
 
2223
  - **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
2224
  - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
2225
+ - **Multilingual NLP Benchmarking**: Evaluating language models on Turkish text understanding
2226
 
2227
  ### Languages
2228
 
2229
+ The dataset contains text in Turkish (language code: `tur_Latn` / `tr`).
2230
 
2231
  ## Dataset Structure
2232
 
2233
  ### Data Fields
2234
 
2235
  | Field | Type | Description |
2236
  |-------|------|-------------|
2237
+ | `question` | `string` | The question text in Turkish |
2238
+ | `choices` | `list[string]` | 4 multiple-choice answer options |
2239
+ | `answer` | `int64` | Index of the correct answer |
2240
+ | `answer_label` | `string` | Letter label of the correct answer |
2241
+ | `split` | `string` | Dataset split identifier |
2242
+ | `subcategories` | `string` | Perturbation category |
2243
+ | `lang` | `string` | Language code |
2244
+ | `second_lang` | `string` | English translation or description of the question |
2245
+ | `coding_lang` | `string` | Not applicable unless code-switching is present |
2246
+ | `notes` | `string` | Additional context about the question or perturbation |
2247
+ | `id` | `string` | Unique question identifier |
2248
+ | `set_id` | `float64` | Question set grouping identifier |
2249
+ | `variation_id` | `float64` | Variation number within a question set |
2250
+ | `vanilla_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores to canonical form (raw tokens) |
2251
+ | `trimmed_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores after token normalization |
2252
+ | `token_counts` | `dict[string, integer]` | Number of tokens produced per tokenizer |
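A minimal sketch of a record with these fields (every value below is an invented placeholder for illustration, not an actual row from the dataset):

```python
# Illustrative record shaped like the schema above; all values are
# made-up placeholders, not actual dataset content.
record = {
    "question": "Türkiye'nin başkenti ____",
    "choices": ["Ankara", "İstanbul", "İzmir", "Bursa"],
    "answer": 0,
    "answer_label": "A",
    "split": "test",
    "subcategories": "canonical",
    "lang": "tur_Latn",
    "second_lang": "The capital of Turkey is ____",
    "coding_lang": "",
    "notes": "",
    "id": "example-0001",  # hypothetical ID format
    "set_id": 1.0,
    "variation_id": 0.0,
    "token_counts": {"gpt2": 12, "llama3": 10},  # placeholder per-tokenizer counts
}

# `answer` (index 0-3) and `answer_label` (A-D) encode the same information:
assert record["answer_label"] == "ABCD"[record["answer"]]
```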
2253
 
2254
  ## Dataset Creation
2255
 
2256
  ### Curation Rationale
2257
 
2258
  This dataset was created to:
2259
+ 1. Systematically evaluate how different tokenization strategies handle Turkish
2260
+ 2. Measure robustness against real-world text perturbations specific to Turkish
2261
+ 3. Support research into the impact of tokenization on language model behavior
2262
+ 4. Provide standardized benchmarks for Turkish language models
2263
 
2264
  The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
2265
 
 
2267
 
2268
  #### Data Collection and Processing
2269
 
2270
+ - **Canonical Questions**: 40 baseline questions created in English
2271
+ - **Translation**: Native Turkish speakers translated questions
2272
+ - **Perturbations**: Each question underwent targeted perturbations designed to reflect Turkish characteristics
2273
+ - **Validation**: Model-in-the-loop process ensured high baseline accuracy
2274
 
2275
  #### Perturbation Categories
2276
 
2277
  1. **Canonical**
2278
+ The baseline Turkish text written in standard, grammatically correct Turkish with no perturbations. This serves as the reference condition for evaluating the impact of all other perturbations.
2279
+
2280
+ 2. **Abbreviations**
2281
+ Introduces common Turkish abbreviations and shortened forms (e.g., `Dr.`, `Prof.`, `vb.`, `sn.`), testing tokenizer robustness to compressed lexical forms.
2282
 
2283
+ 3. **Capitalization**
2284
+ Alters capitalization patterns by randomly capitalizing, lowercasing, or mixing case within words and sentences, simulating informal writing or casing errors.
2285
 
2286
+ 4. **Code / Language / Script Switching**
2287
+ Mixes Turkish with English words or phrases within the same sentence, reflecting real-world code-switching common in technical, academic, or online Turkish text.
2288
 
2289
+ 5. **Contractions**
2290
+ Applies contracted or fused forms common in informal Turkish writing (e.g., dropped vowels or merged suffix boundaries), stressing tokenizer handling of agglutinative morphology.
2291
 
2292
+ 6. **Date Formats**
2293
+ Varies date representations (e.g., `12.03.2022`, `12 Mart 2022`, `03/12/22`), testing sensitivity to formatting and punctuation variation.
2294
 
2295
+ 7. **Dialects**
2296
+ Introduces regional Turkish dialectal or colloquial variants that preserve meaning but differ lexically or morphologically from Standard Turkish.
2297
 
2298
+ 8. **English Keyboard**
2299
+ Simulates Turkish text typed on an English keyboard, leading to missing or substituted Turkish-specific characters (e.g., `cok` instead of `çok`, `saglik` instead of `sağlık`).
2300
 
2301
+ 9. **Grammatical Errors**
2302
+ Injects plausible grammatical mistakes such as incorrect suffix usage, agreement errors, or case marking issues, reflecting non-standard or learner Turkish.
2303
 
2304
+ 10. **Keyboard Proximity Errors**
2305
+ Introduces typos caused by pressing adjacent keys on a keyboard, simulating realistic typing errors without intentionally changing word choice.
2306
 
2307
+ 11. **Numerical Formats**
2308
+ Varies numeric representations (e.g., `1.000` vs. `1000`, comma vs. period usage for decimals), testing tokenizer sensitivity to locale-specific number formatting.
2309
+
2310
+ 12. **Orthographic Errors**
2311
+ Applies spelling mistakes that violate standard Turkish orthography (e.g., incorrect consonant usage or misspelled suffixes) while remaining plausible to native readers.
2312
+
2313
+ 13. **Phonetic Spelling**
2314
+ Replaces words with spellings based on pronunciation rather than standard orthography, reflecting informal or speech-inspired Turkish writing.
2315
+
2316
+ 14. **Plausible Diacritics Errors**
2317
+ Introduces missing, incorrect, or substituted diacritics (e.g., `s` vs. `ş`, `g` vs. `ğ`, `i` vs. `ı`), testing tokenizer sensitivity to Turkish-specific characters.
2318
+
2319
+ 15. **Similar Words**
2320
+ Substitutes words with closely related or easily confusable alternatives (e.g., near-synonyms or minimal lexical contrasts), preserving sentence plausibility.
2321
+
2322
+ 16. **Spelled-Out Forms**
2323
+ Replaces numerals, abbreviations, or symbols with fully spelled-out Turkish equivalents, increasing sequence length and altering token boundaries.
2324
+
2325
+ 17. **Typographical Errors**
2326
+ Introduces general typographical mistakes such as duplicated letters, missing characters, or minor corruption commonly found in fast or careless typing.
2327
+
2328
+ 18. **Web Search Query**
2329
+ Rewrites questions in the style of Turkish web search queries, using keyword-heavy phrasing, omitted function words, and informal structure typical of search engine inputs.
2330
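Several of these categories are mechanical enough to sketch in code. The "English Keyboard" perturbation, for instance, amounts to folding Turkish-specific characters onto their nearest ASCII keys (a minimal illustration, not the dataset's actual generation script):

```python
# Minimal sketch of the "English Keyboard" perturbation: fold Turkish-specific
# characters onto their closest ASCII keys. Illustration only; this is not
# the actual perturbation-generation code used for the dataset.
ASCII_FOLD = str.maketrans("çğıöşüÇĞİÖŞÜ", "cgiosuCGIOSU")

def english_keyboard(text: str) -> str:
    """Simulate Turkish text typed on an English (ASCII) keyboard."""
    return text.translate(ASCII_FOLD)

print(english_keyboard("sağlık çok önemli"))  # -> "saglik cok onemli"
```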
 
2331
  #### Who are the source data producers?
2332
 
2333
+ Native Turkish speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
2334
 
2335
  ### Annotations
2336
 
2337
  #### Annotation process
2338
 
2339
+ Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Turkish text processing.
2340
 
2341
  #### Who are the annotators?
2342
 
2343
+ Native Turkish speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
2344
 
2345
  ### Personal and Sensitive Information
2346
 
 
2350
 
2351
  ### Social Impact of Dataset
2352
 
2353
+ This dataset contributes to improving language technology for Turkish speakers by enabling better understanding of tokenization challenges and supporting more robust multilingual models.
2354
 
2355
  ### Discussion of Biases
2356
 
2357
+ - **Language variety**: The dataset uses Standard Turkish (Türkiye Türkçesi) and may not fully represent regional or dialectal variations.
2358
+ - **Script focus**: Only the Latin script is used; Turkish-specific diacritics and keyboard-related variations are included as perturbations.
2359
+ - **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific Turkish language use.
2360
+ - **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity.
2361
 
2362
  ### Other Known Limitations
2363
 
2364
+ - Relatively small dataset size (designed for evaluation, not training)
2365
+ - The multiple-choice format may not capture all aspects of language understanding
2366
+ - Perturbations are specific to Turkish, so findings may not generalize to all languages
2367
+ - Results may differ at larger model scales
2368
 
2369
  ## Additional Information
2370
 
 
2379
  ### Citation Information
2380
 
2381
  If you use this dataset in your research, please cite the TokSuite paper:
2382
+
2383
  ```bibtex
2384
  @inproceedings{toksuite2026,
2385
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
 
2402
 
2403
  ### Contact
2404
 
2405
+ For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.
2406
 
2407
  ---
2408
 
 
2412
 
2413
  *Understanding Tokenization's Role in Language Model Behavior*
2414
 
2415
+ </div>