Malikeh1375 committed on
Commit a4c4461 · verified · 1 Parent(s): c8bff63

Update README.md

Files changed (1)
  1. README.md +164 -69
README.md CHANGED
@@ -2191,136 +2191,231 @@ configs:
 
  <!-- Provide a quick summary of the dataset. -->
 
- A comprehensive evaluation dataset for testing robustness of different tokenization strategies.
 
- ## Dataset Details
 
- ### Dataset Description
 
- <!-- Provide a longer summary of what this dataset is. -->
 
- This dataset evaluates how robust language models are to different tokenization strategies and edge cases. It includes text completion questions with multiple choice answers designed to test various aspects of tokenization handling.
 
- - **Curated by:** R3
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** cc
 
- ### Dataset Sources [optional]
 
- <!-- Provide the basic links for the dataset. -->
 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
- ## Uses
 
- <!-- Address questions around how the dataset is intended to be used. -->
 
- ### Direct Use
 
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
 
  ## Dataset Structure
 
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
- The dataset contains multiple-choice questions with associated metadata about tokenization types and categories.
 
  ## Dataset Creation
 
  ### Curation Rationale
 
- <!-- Motivation for the creation of this dataset. -->
 
- [More Information Needed]
 
  ### Source Data
 
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
  #### Data Collection and Processing
 
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
- [More Information Needed]
 
- #### Who are the source data producers?
 
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
 
- [More Information Needed]
 
- ### Annotations [optional]
 
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
 
- #### Annotation process
 
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
 
- [More Information Needed]
 
  #### Who are the annotators?
 
- <!-- This section describes the people or systems who created the annotations. -->
 
- [More Information Needed]
 
- #### Personal and Sensitive Information
 
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
 
- [More Information Needed]
 
- ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
- The dataset focuses primarily on English text and may not generalize to other languages or tokenization schemes not covered in the evaluation.
 
- ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
 
- ## Citation [optional]
 
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
- **BibTeX:**
 
- [More Information Needed]
 
- **APA:**
 
- [More Information Needed]
 
- ## Glossary [optional]
 
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
 
- [More Information Needed]
 
- ## More Information [optional]
 
- [More Information Needed]
 
- ## Dataset Card Authors [optional]
 
- [More Information Needed]
 
- ## Dataset Card Contact
 
- [More Information Needed]
 
 
  <!-- Provide a quick summary of the dataset. -->
 
+ <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:auto; margin-right:auto; display:block;"/>
 
+ # TokSuite Benchmark ({LANGUAGE_NAME} Collection)
 
+ ## Dataset Description
 
+ This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This subset contains {LANGUAGE_NAME} multiple-choice text-completion questions with real-world perturbations that test tokenizer robustness.
 
+ - **Curated by:** R3 Research Team
+ - **Language(s):** {LANGUAGE_NAME} ({LANGUAGE_CODE})
+ - **License:** MIT License
 
+ ### Dataset Summary
 
+ TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The {LANGUAGE_NAME} subset measures model performance on canonical questions and on perturbed variants, including {LIST_KEY_PERTURBATION_TYPES}.
 
+ **Key Features:**
+ - {NUM_CANONICAL_QUESTIONS} canonical questions covering {TOPIC_AREAS}
+ - Multiple perturbation types reflecting real-world text variations in {LANGUAGE_NAME}
+ - Parallel structure with the other TokSuite collections (English, Turkish, Italian, Chinese, Farsi)
+ - Curation by native speakers to ensure linguistic authenticity
 
+ ### Supported Tasks
 
+ - **Multiple-Choice Question Answering**: Text-completion format with 4 answer choices
+ - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
+ - **Multilingual NLP Benchmarking**: Evaluating language models on {LANGUAGE_NAME} text understanding
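For the question-answering task above, the usual evaluation recipe is to score each answer choice as a completion of the question and pick the highest-scoring one. A minimal sketch, where `toy_loglik` is a hypothetical stand-in for a real model's log-likelihood function (not part of this dataset or any official harness):

```python
# Sketch only: pick the choice whose completion the scorer rates highest.
def pick_choice(question: str, choices: list[str], loglik) -> int:
    """Return the index of the highest-scoring completion."""
    scores = [loglik(question + " " + choice) for choice in choices]
    return max(range(len(choices)), key=scores.__getitem__)

# Toy scorer for illustration only: pretends the model favors "Paris".
def toy_loglik(text: str) -> float:
    return 1.0 if "Paris" in text else 0.0

idx = pick_choice("The capital of France is",
                  ["Berlin", "Paris", "Madrid", "Rome"], toy_loglik)
# idx == 1, which corresponds to answer_label "B"
```

In a real run, `toy_loglik` would be replaced by per-token log-probabilities from the model under test.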
 
+ ### Languages
 
+ The dataset contains text in {LANGUAGE_NAME} written in {SCRIPT_NAME} (language code: {LANGUAGE_CODE_FULL}).
 
  ## Dataset Structure
 
+ ### Data Instances
+
+ An example from the dataset:
+ ```json
+ {
+   "question": "{EXAMPLE_QUESTION}",
+   "choices": ["{CHOICE_A}", "{CHOICE_B}", "{CHOICE_C}", "{CHOICE_D}"],
+   "answer": {ANSWER_INDEX},
+   "answer_label": "{ANSWER_LABEL}",
+   "split": "test",
+   "subcategories": "{SUBCATEGORY}",
+   "lang": "{LANGUAGE_CODE_FULL}",
+   "second_lang": "{ENGLISH_TRANSLATION}",
+   "coding_lang": "",
+   "notes": "{NOTES}",
+   "id": "{ID}",
+   "set_id": {SET_ID},
+   "variation_id": {VARIATION_ID}
+ }
+ ```
+
+ ### Data Fields
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | question | string | The question text in {LANGUAGE_NAME} ({SCRIPT_DESCRIPTION}) |
+ | choices | list[string] | Four multiple-choice answer options in {LANGUAGE_NAME} |
+ | answer | int64 | Index of the correct answer (0-3) |
+ | answer_label | string | Letter label of the correct answer (A, B, C, or D) |
+ | split | string | Dataset split identifier (all entries are "test") |
+ | subcategories | string | Perturbation category |
+ | lang | string | Language code ({LANGUAGE_CODE_FULL} = {LANGUAGE_DESCRIPTION}) |
+ | second_lang | string | English translation or description of the question |
+ | coding_lang | string | Not applicable for this dataset (empty string) |
+ | notes | string | Additional context about the question or perturbation type |
+ | id | string | Unique question identifier |
+ | set_id | float64 | Question set grouping identifier (ranges from {ID_RANGE_START}-{ID_RANGE_END}) |
+ | variation_id | float64 | Variation number within a question set |
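The relationships between these fields can be sketched in a few lines: `answer` and `answer_label` encode the same information, and `set_id` groups a canonical question with its perturbed variants. The records below are invented to match the schema, not drawn from the dataset:

```python
from collections import defaultdict

# Illustrative records following the schema above; all values are invented.
records = [
    {"id": "q1-v0", "answer": 1, "answer_label": "B", "set_id": 1.0, "variation_id": 0.0},
    {"id": "q1-v1", "answer": 1, "answer_label": "B", "set_id": 1.0, "variation_id": 1.0},
    {"id": "q2-v0", "answer": 3, "answer_label": "D", "set_id": 2.0, "variation_id": 0.0},
]

# answer (0-3) and answer_label (A-D) encode the same thing.
for record in records:
    assert record["answer_label"] == "ABCD"[record["answer"]]

# set_id groups a canonical question with its perturbed variants.
variants = defaultdict(list)
for record in records:
    variants[record["set_id"]].append(record["variation_id"])
```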
 
  ## Dataset Creation
 
  ### Curation Rationale
 
+ This dataset was created to:
+ 1. Systematically evaluate how different tokenization strategies handle {LANGUAGE_NAME} text
+ 2. Measure robustness against real-world text perturbations specific to the {LANGUAGE_NAME} language
+ 3. Support research into tokenization's impact on language model behavior
+ 4. Provide standardized benchmarks for {LANGUAGE_NAME} language models
 
+ The questions were designed to be straightforward, with high baseline accuracy, so that researchers can cleanly measure the performance degradation introduced by each perturbation.
 
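The degradation measurement described above amounts to a simple difference in accuracy between the canonical questions and each perturbation category. A sketch, with made-up accuracy numbers and category names (the real values come from evaluating models on this dataset):

```python
# Hypothetical per-category accuracies for one model; values are invented.
accuracy = {"canonical": 0.92, "diacritics_removed": 0.81, "spacing_noise": 0.74}

# Robustness is read as the drop from the canonical baseline.
degradation = {
    category: round(accuracy["canonical"] - acc, 2)
    for category, acc in accuracy.items()
    if category != "canonical"
}
# e.g. degradation == {"diacritics_removed": 0.11, "spacing_noise": 0.18}
```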
  ### Source Data
 
  #### Data Collection and Processing
 
+ - **Canonical Questions**: {NUM_BASE_QUESTIONS} baseline questions covering general knowledge topics were created in English
+ - **Translation**: Native {LANGUAGE_NAME} speakers translated the questions into {LANGUAGE_NAME}
+ - **Perturbations**: Each question underwent targeted perturbations designed to reflect {LINGUISTIC_CHARACTERISTICS}
+ - **Validation**: A model-in-the-loop process ensured high baseline accuracy across 14 different tokenizers
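A model-in-the-loop filter of the kind described can be sketched as keeping only questions that every reference model answers correctly, so the canonical baseline stays high and perturbation effects are easy to read. The acceptance criterion and results below are hypothetical; the card does not specify the exact rule used:

```python
# Hypothetical per-model correctness on candidate canonical questions.
results = {
    "q1": {"model_a": True, "model_b": True},
    "q2": {"model_a": True, "model_b": False},
}

# Keep only questions every reference model gets right (assumed criterion).
kept = [q for q, by_model in results.items() if all(by_model.values())]
# kept == ["q1"]
```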
 
+ #### Perturbation Categories
 
+ 1. **Canonical**
+    {DESCRIPTION_OF_CANONICAL}
 
+ 2. **{PERTURBATION_NAME_1}**
+    {DESCRIPTION_1}
 
+ 3. **{PERTURBATION_NAME_2}**
+    {DESCRIPTION_2}
 
+ 4. **{PERTURBATION_NAME_3}**
+    {DESCRIPTION_3}
 
+ 5. **{PERTURBATION_NAME_4}**
+    {DESCRIPTION_4}
 
+ 6. **{PERTURBATION_NAME_5}**
+    {DESCRIPTION_5}
+
+ 7. **{PERTURBATION_NAME_6}**
+    {DESCRIPTION_6}
+
+ 8. **{PERTURBATION_NAME_7}**
+    {DESCRIPTION_7}
 
 
+ #### Model Performance Comparison
 
+ | model_name | canonical | {PERTURBATION_COL_1} | {PERTURBATION_COL_2} | {PERTURBATION_COL_3} | {PERTURBATION_COL_4} | {PERTURBATION_COL_5} | {PERTURBATION_COL_6} | {PERTURBATION_COL_7} |
+ |:-------------|----------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|
+ | Aya | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | BLOOM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | ByT5 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Comma | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | GPT-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | GPT-4o | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Gemma-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Llama-3.2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Phi-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Qwen-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | Tekken | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | TokenMonster | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | XGLM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+ | mBERT | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
+
+ #### Who are the source data producers?
+
+ Native {LANGUAGE_NAME} speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Questions were manually created and translated by native speakers. Each perturbation was designed to reflect authentic variations encountered in real-world {LANGUAGE_NAME} text.
 
  #### Who are the annotators?
 
+ Native {LANGUAGE_NAME} speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
 
+ ### Personal and Sensitive Information
 
+ The dataset contains only general-knowledge questions and does not include any personal or sensitive information.
 
+ ## Considerations for Using the Data
 
+ ### Social Impact of Dataset
 
+ This dataset contributes to improving language technology for {LANGUAGE_NAME} speakers by:
+ - Enabling better understanding of tokenization challenges in {LANGUAGE_NAME}
+ - Supporting the development of more robust multilingual models
+ - Providing standardized evaluation for {LANGUAGE_NAME} NLP research
 
+ ### Discussion of Biases
 
+ - **Language variety**: The dataset uses {STANDARD_VARIETY} and may not fully represent dialectal variation
+ - **Script focus**: {SCRIPT_LIMITATIONS_DESCRIPTION}
+ - **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific language use
+ - **Question simplicity**: Questions are designed for high baseline accuracy, which may not reflect real-world task complexity
 
+ ### Other Known Limitations
 
+ - Relatively small dataset size (designed for evaluation, not training)
+ - The multiple-choice format may not capture all aspects of language understanding
+ - Perturbations are specific to {LANGUAGE_NAME}'s characteristics, and findings may not generalize to all languages
+ - Models evaluated were trained at ~1B parameters; results may differ at larger scales
 
 
+ ## Additional Information
 
+ ### Dataset Curators
 
+ The dataset was curated by the TokSuite research team at R3.
 
+ ### Licensing Information
 
+ MIT License
 
+ ### Citation Information
 
+ If you use this dataset in your research, please cite the TokSuite paper:
+ ```bibtex
+ @inproceedings{toksuite2026,
+   title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
+   author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
+   booktitle={Preprint},
+   year={2026},
+   url={TBD}
+ }
+ ```
 
+ **Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](TBD)
 
+ ### Contributions
 
+ This dataset is part of TokSuite, which includes:
+ - 14 language models with identical architectures but different tokenizers
+ - Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
+ - Comprehensive analysis of tokenization's impact on model behavior
 
+ ### Contact
 
+ For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors through the paper submission system.
 
+ ---
 
+ <div align="center">
+
+ **Part of the [TokSuite Project](TBD)**
+
+ *Understanding Tokenization's Role in Language Model Behavior*
+
+ </div>