Update README.md

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
 multilinguality: multilingual
 task_categories:
 - multiple-choice
@@ -7,6 +7,7 @@ pretty_name: Tokenization Robustness
 tags:
 - multilingual
 - tokenization
 dataset_info:
 - config_name: tokenizer_robustness_completion_turkish_canonical
   features:
@@ -2124,7 +2125,8 @@ configs:
 - config_name: tokenizer_robustness_completion_turkish_code_language_script_switching
   data_files:
   - split: test
-    path:
 - config_name: tokenizer_robustness_completion_turkish_colloquial
   data_files:
   - split: test
@@ -2185,96 +2187,79 @@ configs:
   data_files:
   - split: test
     path: tokenizer_robustness_completion_turkish_word_reordering/test-*
 ---
 # Dataset Card for Tokenization Robustness

 <!-- Provide a quick summary of the dataset. -->

 <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

-# TokSuite Benchmark (

 ## Dataset Description

-This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains

 - **Curated by:** R3 Research Team
-- **Language(s):**
 - **License:** MIT License

 ### Dataset Summary

-TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The

 **Key Features:**
-- Multiple perturbation types reflecting real-world text variations in
-- Parallel structure with TokSuite benchmark (available in English,
 - Native speaker curation ensuring linguistic authenticity

 ### Supported Tasks

 - **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
 - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
-- **Multilingual NLP Benchmarking**: Evaluating language models on

 ### Languages

-The dataset contains text in

 ## Dataset Structure

-### Data Instances
-
-An example from the dataset:
-```json
-{
-  "question": "{EXAMPLE_QUESTION}",
-  "choices": ["{CHOICE_A}", "{CHOICE_B}", "{CHOICE_C}", "{CHOICE_D}"],
-  "answer": {ANSWER_INDEX},
-  "answer_label": "{ANSWER_LABEL}",
-  "split": "test",
-  "subcategories": "{SUBCATEGORY}",
-  "lang": "{LANGUAGE_CODE_FULL}",
-  "second_lang": "{ENGLISH_TRANSLATION}",
-  "coding_lang": "",
-  "notes": "{NOTES}",
-  "id": "{ID}",
-  "set_id": {SET_ID},
-  "variation_id": {VARIATION_ID}
-}
-```

 ### Data Fields

 | Field | Type | Description |
 |-------|------|-------------|
-| question | string | The question text in
-| choices | list[string] |
-| answer | int64 | Index of the correct answer
-| answer_label | string | Letter label of the correct answer
-| split | string | Dataset split identifier
-| subcategories | string | Perturbation category |
-| lang | string | Language code
-| second_lang | string | English translation or description of the question |
-| coding_lang | string | Not applicable
-| notes | string | Additional context about the question or perturbation
-| id | string | Unique question identifier |
-| set_id | float64 | Question set grouping identifier
-| variation_id | float64 | Variation number within a question set |

 ## Dataset Creation

 ### Curation Rationale

 This dataset was created to:
-1. Systematically evaluate how different tokenization strategies handle
-2. Measure robustness against real-world text perturbations specific to
-3. Support research into
-4. Provide standardized benchmarks for

 The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
@@ -2282,69 +2267,80 @@ The questions were designed to be straightforward with high baseline accuracy, a

 #### Data Collection and Processing

-- **Canonical Questions**:
-- **Translation**: Native
-- **Perturbations**: Each question underwent targeted perturbations designed to reflect
-- **Validation**: Model-in-the-loop process ensured high baseline accuracy

 #### Perturbation Categories

 1. **Canonical**

 #### Who are the source data producers?

-Native

 ### Annotations

 #### Annotation process

-Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world

 #### Who are the annotators?

-Native

 ### Personal and Sensitive Information
@@ -2354,24 +2350,21 @@ The dataset contains only general knowledge questions and does not include any p

 ### Social Impact of Dataset

-This dataset contributes to improving language technology for
-- Enabling better understanding of tokenization challenges in {LANGUAGE_NAME}
-- Supporting development of more robust multilingual models
-- Providing standardized evaluation for {LANGUAGE_NAME} NLP research

 ### Discussion of Biases

-- **Language variety**:
-- **Script focus**:
-- **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific language use
-- **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity

 ### Other Known Limitations

-- Relatively small dataset size (

 ## Additional Information
@@ -2386,6 +2379,7 @@ MIT license
 ### Citation Information

 If you use this dataset in your research, please cite the TokSuite paper:
 ```bibtex
 @inproceedings{toksuite2026,
   title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
@@ -2408,7 +2402,7 @@ This dataset is part of TokSuite, which includes:

 ### Contact

-For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors

 ---
@@ -2418,4 +2412,4 @@ For questions or issues related to this dataset, please refer to the TokSuite pr

 *Understanding Tokenization's Role in Language Model Behavior*

-</div>
 ---
+license: mit
 multilinguality: multilingual
 task_categories:
 - multiple-choice

 tags:
 - multilingual
 - tokenization
+- robustness
 dataset_info:
 - config_name: tokenizer_robustness_completion_turkish_canonical
   features:

 - config_name: tokenizer_robustness_completion_turkish_code_language_script_switching
   data_files:
   - split: test
+    path: >-
+      tokenizer_robustness_completion_turkish_code_language_script_switching/test-*
 - config_name: tokenizer_robustness_completion_turkish_colloquial
   data_files:
   - split: test

   data_files:
   - split: test
     path: tokenizer_robustness_completion_turkish_word_reordering/test-*
+language:
+- tr
+- en
+size_categories:
+- n<1K
 ---
 # Dataset Card for Tokenization Robustness

 <!-- Provide a quick summary of the dataset. -->

 <img src="toksuite-logo.png" alt="TokSuite Logo" width="250px" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

+# TokSuite Benchmark (Turkish Collection)

 ## Dataset Description

+This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains Turkish-language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.

 - **Curated by:** R3 Research Team
+- **Language(s):** Turkish (tr)
 - **License:** MIT License

 ### Dataset Summary

+TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Turkish subset specifically measures model performance on canonical questions and various perturbations.

 **Key Features:**
+- 40 canonical questions covering general knowledge, geography, science, and language understanding
+- Multiple perturbation types reflecting real-world text variations in Turkish
+- Parallel structure with TokSuite benchmark (available in English, Italian, Farsi, Chinese)
 - Native speaker curation ensuring linguistic authenticity

 ### Supported Tasks

 - **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
 - **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations
+- **Multilingual NLP Benchmarking**: Evaluating language models on Turkish text understanding

 ### Languages

+The dataset contains text in Turkish (language code: `tur_Latn` / `tr`).

 ## Dataset Structure
 ### Data Fields

 | Field | Type | Description |
 |-------|------|-------------|
+| `question` | `string` | The question text in Turkish |
+| `choices` | `list[string]` | 4 multiple-choice answer options |
+| `answer` | `int64` | Index of the correct answer |
+| `answer_label` | `string` | Letter label of the correct answer |
+| `split` | `string` | Dataset split identifier |
+| `subcategories` | `string` | Perturbation category |
+| `lang` | `string` | Language code |
+| `second_lang` | `string` | English translation or description of the question |
+| `coding_lang` | `string` | Not applicable unless code-switching is present |
+| `notes` | `string` | Additional context about the question or perturbation |
+| `id` | `string` | Unique question identifier |
+| `set_id` | `float64` | Question set grouping identifier |
+| `variation_id` | `float64` | Variation number within a question set |
+| `vanilla_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores to canonical form (raw tokens) |
+| `trimmed_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores after token normalization |
+| `token_counts` | `dict[string, integer]` | Number of tokens produced per tokenizer |
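The `subcategories`, `set_id`, and `answer` fields make per-perturbation scoring straightforward. A minimal sketch in plain Python (the records and predictions below are invented toy data; only the field names come from the table above):

```python
# Score model predictions per perturbation subcategory, following the
# schema described in the data-fields table. Toy data, not real examples.
from collections import defaultdict

def accuracy_by_subcategory(records, predictions):
    """records: dicts with 'id', 'subcategories', 'answer';
    predictions: mapping from 'id' to a predicted answer index."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        cat = rec["subcategories"]
        total[cat] += 1
        if predictions[rec["id"]] == rec["answer"]:
            correct[cat] += 1
    return {cat: correct[cat] / total[cat] for cat in total}

# Hypothetical records: one canonical question and one perturbed variant.
records = [
    {"id": "q1-canonical", "subcategories": "canonical", "answer": 2},
    {"id": "q1-typo", "subcategories": "typographical_errors", "answer": 2},
]
preds = {"q1-canonical": 2, "q1-typo": 0}
scores = accuracy_by_subcategory(records, preds)
# Degradation = canonical accuracy minus accuracy under a perturbation.
degradation = scores["canonical"] - scores["typographical_errors"]
```

Comparing each perturbed subcategory against `canonical` yields the performance-degradation measurement the card describes.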
 ## Dataset Creation

 ### Curation Rationale

 This dataset was created to:
+1. Systematically evaluate how different tokenization strategies handle Turkish
+2. Measure robustness against real-world text perturbations specific to Turkish
+3. Support research into the impact of tokenization on language model behavior
+4. Provide standardized benchmarks for Turkish language models

 The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.

 #### Data Collection and Processing

+- **Canonical Questions**: 40 baseline questions created in English
+- **Translation**: Native Turkish speakers translated the questions
+- **Perturbations**: Each question underwent targeted perturbations designed to reflect Turkish characteristics
+- **Validation**: A model-in-the-loop process ensured high baseline accuracy
 #### Perturbation Categories

 1. **Canonical**
+The baseline Turkish text written in standard, grammatically correct Turkish with no perturbations. This serves as the reference condition for evaluating the impact of all other perturbations.

+2. **Abbreviations**
+Introduces common Turkish abbreviations and shortened forms (e.g., `Dr.`, `Prof.`, `vb.`, `sn.`), testing tokenizer robustness to compressed lexical forms.

+3. **Capitalization**
+Alters capitalization patterns by randomly capitalizing, lowercasing, or mixing case within words and sentences, simulating informal writing or casing errors.

+4. **Code / Language / Script Switching**
+Mixes Turkish with English words or phrases within the same sentence, reflecting real-world code-switching common in technical, academic, or online Turkish text.

+5. **Contractions**
+Applies contracted or fused forms common in informal Turkish writing (e.g., dropped vowels or merged suffix boundaries), stressing tokenizer handling of agglutinative morphology.

+6. **Date Formats**
+Varies date representations (e.g., `12.03.2022`, `12 Mart 2022`, `03/12/22`), testing sensitivity to formatting and punctuation variation.

+7. **Dialects**
+Introduces regional Turkish dialectal or colloquial variants that preserve meaning but differ lexically or morphologically from Standard Turkish.

+8. **English Keyboard**
+Simulates Turkish text typed on an English keyboard, leading to missing or substituted Turkish-specific characters (e.g., `cok` instead of `çok`, `saglik` instead of `sağlık`).

+9. **Grammatical Errors**
+Injects plausible grammatical mistakes such as incorrect suffix usage, agreement errors, or case marking issues, reflecting non-standard or learner Turkish.

+10. **Keyboard Proximity Errors**
+Introduces typos caused by pressing adjacent keys on a keyboard, simulating realistic typing errors without intentionally changing word choice.

+11. **Numerical Formats**
+Varies numeric representations (e.g., `1.000` vs. `1000`, comma vs. period usage for decimals), testing tokenizer sensitivity to locale-specific number formatting.

+12. **Orthographic Errors**
+Applies spelling mistakes that violate standard Turkish orthography (e.g., incorrect consonant usage or misspelled suffixes) while remaining plausible to native readers.

+13. **Phonetic Spelling**
+Replaces words with spellings based on pronunciation rather than standard orthography, reflecting informal or speech-inspired Turkish writing.

+14. **Plausible Diacritics Errors**
+Introduces missing, incorrect, or substituted diacritics (e.g., `s` vs. `ş`, `g` vs. `ğ`, `i` vs. `ı`), testing tokenizer sensitivity to Turkish-specific characters.

+15. **Similar Words**
+Substitutes words with closely related or easily confusable alternatives (e.g., near-synonyms or minimal lexical contrasts), preserving sentence plausibility.

+16. **Spelled-Out Forms**
+Replaces numerals, abbreviations, or symbols with fully spelled-out Turkish equivalents, increasing sequence length and altering token boundaries.

+17. **Typographical Errors**
+Introduces general typographical mistakes such as duplicated letters, missing characters, or minor corruption commonly found in fast or careless typing.

+18. **Web Search Query**
+Rewrites questions in the style of Turkish web search queries, using keyword-heavy phrasing, omitted function words, and informal structure typical of search engine inputs.
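For illustration, the "English Keyboard" category can be approximated by mapping Turkish-specific characters to their nearest ASCII keys. This is a hedged sketch under that assumption, not the authors' generation code; `english_keyboard` and `ASCII_MAP` are invented names:

```python
# Illustrative approximation of the "English Keyboard" perturbation:
# replace Turkish-specific letters with the ASCII characters an English
# keyboard would produce. Not the TokSuite pipeline itself.
ASCII_MAP = str.maketrans({
    "ç": "c", "Ç": "C",
    "ğ": "g", "Ğ": "G",
    "ı": "i", "İ": "I",
    "ö": "o", "Ö": "O",
    "ş": "s", "Ş": "S",
    "ü": "u", "Ü": "U",
})

def english_keyboard(text: str) -> str:
    """Strip Turkish diacritics as if the text were typed on an English keyboard."""
    return text.translate(ASCII_MAP)

english_keyboard("çok sağlık")  # -> "cok saglik"
```

A real perturbation pipeline might apply such a mapping only probabilistically, since typists often mix correct and ASCII-substituted characters.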
 #### Who are the source data producers?

+Native Turkish speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.

 ### Annotations

 #### Annotation process

+Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Turkish text processing.

 #### Who are the annotators?

+Native Turkish speakers with expertise in linguistics and NLP, working as part of the TokSuite project.

 ### Personal and Sensitive Information
 ### Social Impact of Dataset

+This dataset contributes to improving language technology for Turkish speakers by enabling better understanding of tokenization challenges and supporting more robust multilingual models.

 ### Discussion of Biases

+- **Language variety**: The dataset uses Standard Turkish (Türkiye Türkçesi) and may not fully represent regional or dialectal variations.
+- **Script focus**: Only the Latin script is used; Turkish-specific diacritics and keyboard-related variations are included as perturbations.
+- **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific Turkish language use.
+- **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity.

 ### Other Known Limitations

+- Relatively small dataset size (evaluation-only)
+- Multiple-choice format may not reflect free-form generation behavior
+- Perturbations are language-specific and may not transfer to other languages
+- Results may differ at larger model scales
 ## Additional Information

 ### Citation Information

 If you use this dataset in your research, please cite the TokSuite paper:
+
 ```bibtex
 @inproceedings{toksuite2026,
   title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},

 ### Contact

+For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.

 ---

 *Understanding Tokenization's Role in Language Model Behavior*

+</div>