---
language:
- ru
- en
license: mit
task_categories:
- text-generation
- fill-mask
tags:
- tokenization
- robustness
- morphology
- evaluation
- base-models
size_categories:
- 1K<n<10K
---

## 🧩 Test Categories

| Category | Focus | Description |
|---|---|---|
| **`Morphology`** | Morphological Generation | Tests the ability to inflect pseudo-words (*«wugs»*) based solely on grammar, without relying on lexical memory. |
| **`Facts`** | Factual Knowledge | Control group. Checks the integrity of perception of common named entities (e.g., *«Paris»*, *«Sun»*). If the tokenizer splits them poorly, access to knowledge becomes difficult. |
| **`Logic`** | Logical Patterns | Simple numeric and algorithmic sequences (e.g., *«Tuesday -> Wednesday»*). Assesses token stitching when working with numbers and logic. |
| **`Noise`** | Noise Robustness | The context contains typos and perturbations. Evaluates how much the model's confidence "drifts" under slight input distortions. |

## 📊 Dataset Fields

* **`prefix`**: Input context for the model.
* **`target`**: The expected continuation (ground truth).
* **`category`**: Test category (`Morphology`, `Facts`, `Logic`, `Noise`).

## 📐 Evaluation Methodology (Metrics)

Two metrics are calculated for each example. Together they distinguish a model that "does not know" (low Score) from a model that "doubts due to tokenization" (low Confidence).

1. **Levenshtein Score (Generation Quality):** Normalized Levenshtein distance; measures how close the generated text is to the reference. *Range: 0.0% – 100.0%*

$$
\text{Score}_c = \frac{1}{|D_c|} \sum_{i=1}^{|D_c|} \left( 1 - \frac{\text{Lev}(S_{\text{gen}}^{(i)}, S_{\text{ref}}^{(i)})}{\max(|S_{\text{gen}}^{(i)}|, |S_{\text{ref}}^{(i)}|)} \right) \times 100\%
$$

2. **Target Confidence (Ground Truth Certainty):** The geometric mean probability of the tokens that make up the **actual target**. It shows how ready the model was to output the correct answer. *Range: 0.0% – 100.0%*

$$
\text{Conf}(S_{\text{ref}}) = \exp \left( \frac{1}{T} \sum_{j=1}^{T} \log P(t_j \mid \text{context}, t_{<j}) \right) \times 100\%
$$
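As a minimal sketch of how the two metrics could be computed, the snippet below implements the normalized Levenshtein score and the geometric-mean target confidence directly from the formulas above. The function names (`levenshtein`, `levenshtein_score`, `target_confidence`) are illustrative, and the per-token probabilities are assumed to already be extracted from the model under evaluation.

```python
import math

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance Lev(a, b)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_score(gen: str, ref: str) -> float:
    """Normalized similarity in [0, 100]; 100 means an exact match."""
    if not gen and not ref:
        return 100.0
    return (1 - levenshtein(gen, ref) / max(len(gen), len(ref))) * 100

def target_confidence(token_probs: list[float]) -> float:
    """Geometric mean of P(t_j | context, t_<j) over the target tokens,
    scaled to [0, 100] — i.e. exp of the mean token log-probability."""
    mean_log_p = sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(mean_log_p) * 100
```

Averaging in log space before exponentiating keeps the confidence length-invariant: a 2-token and a 20-token target with the same average per-token certainty receive the same score.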