---
language:
- ru
- en
license: mit
task_categories:
- text-generation
- fill-mask
tags:
- tokenization
- robustness
- morphology
- evaluation
- base-models
size_categories:
- 1K<n<10K
---

# M.I.R.O.N. (Multi-aspect Inference Robustness on Objective Next-tokens)

**M.I.R.O.N.** is a specialized benchmark designed to evaluate the impact of tokenization and architectural constraints on the generation quality of small **Base** language models (SLMs).

Unlike global benchmarks (MMLU, GSM8K), MIRON focuses on the atomic capabilities of a model: morphological generalization, noise robustness, and factual integrity within a simple next-token prediction task.

## 🎯 Main Goal

The benchmark evaluates not "intelligence" in complex reasoning, but the **fundamental capability to correctly process input data**: robustness to word fragmentation by the tokenizer and prediction stability under noisy conditions.

The benchmark is designed to be solvable even by small transformers. The tasks do not require instruction following, making this dataset ideal for testing **Pre-trained (Base)** checkpoints.

## 🧩 Data Structure

The dataset contains **4000 examples** (1000 per category), split between two languages (`ru`, `en`). Each example is labeled with one of four short category tags:

| Tag (Category) | Full Name | Description |
| :--- | :--- | :--- |
| **`Morphology`** | Morphological Generalization | Tests on pseudo-words (*wug-words*). Checks whether the model can inflect non-existent words (e.g., *«wug» -> «wugs»*) based solely on grammar, without relying on lexical memory. |
| **`Facts`** | Factual Knowledge | Control group. Checks how reliably the model completes common named entities (e.g., *«Paris», «Sun»*). If the tokenizer splits them poorly, access to the underlying knowledge becomes harder. |
| **`Logic`** | Logical Patterns | Simple numeric and algorithmic sequences (e.g., *«Tuesday -> Wednesday»*). Assesses token stitching when working with numbers and logic. |
| **`Noise`** | Noise Robustness | The context contains typos and perturbations. Evaluates how much the model's confidence "drifts" under slight input distortions. |

## 📊 Dataset Fields

* **`prefix`**: Input context for the model.
* **`target`**: The expected continuation (ground truth).
* **`category`**: Test category (`Morphology`, `Facts`, `Logic`, `Noise`).
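
A minimal loading sketch using the `datasets` library, assuming a single `train` split; the repository id below is a placeholder and should be replaced with this dataset's actual Hugging Face path:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual Hugging Face path of this dataset.
ds = load_dataset("your-namespace/MIRON", split="train")

# Each record exposes the three fields described above.
sample = ds[0]
print(sample["prefix"], "->", sample["target"], f"[{sample['category']}]")
```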

## 📐 Evaluation Methodology (Metrics)

Two metrics are calculated for each example. This allows distinguishing a model that "does not know" (low Score) from a model that "doubts due to tokenization" (low Confidence).

1. **Levenshtein Score (Generation Quality):**
   Normalized Levenshtein distance. Evaluates how close the generated text is to the reference.
   *Range: 0.0% – 100.0%*

$$
\text{Score}_c = \frac{1}{|D_c|} \sum_{i=1}^{|D_c|} \left( 1 - \frac{\text{Lev}(S_{\text{gen}}^{(i)}, S_{\text{ref}}^{(i)})}{\max(|S_{\text{gen}}^{(i)}|, |S_{\text{ref}}^{(i)}|)} \right) \times 100\%
$$
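
For instance (illustrative numbers): if the reference continuation is «wugs» and the model generates «wugz», then Lev = 1 and max(|gen|, |ref|) = 4, giving a per-example score of (1 - 1/4) × 100% = 75%.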

2. **Target Confidence (Ground Truth Certainty):**
   The geometric mean probability of the tokens that make up the **actual target**. It shows how ready the model was to output the correct answer.
   *Range: 0.0% – 100.0%*

$$
\text{Conf}(S_{\text{ref}}) = \exp \left( \frac{1}{T} \sum_{j=1}^{T} \log P(t_j \mid \text{context}, t_{<j}) \right) \times 100\%
$$
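
For instance (illustrative numbers): if the target splits into two tokens with probabilities 0.5 and 0.8, the confidence is exp((ln 0.5 + ln 0.8) / 2) × 100% = √0.4 × 100% ≈ 63.2%.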

### 💻 Evaluation Code Example (Python)

```python
import torch
import numpy as np
from Levenshtein import distance as lev_distance


def compute_metrics(model, tokenizer, prefix: str, target: str, generated_text: str, device='cuda'):
    model.eval()

    # 1. Levenshtein Score
    max_len = max(len(generated_text), len(target))
    if max_len == 0:
        lev_score = 100.0
    else:
        dist = lev_distance(generated_text, target)
        lev_score = (1 - dist / max_len) * 100.0

    # 2. Target Confidence
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids.to(device)
    full_ids = tokenizer(prefix + target, return_tensors="pt").input_ids.to(device)

    prefix_len = prefix_ids.shape[1]
    target_len = full_ids.shape[1] - prefix_len

    if target_len <= 0:
        return {'lev_score': round(lev_score, 2), 'target_confidence': 0.0}

    with torch.no_grad():
        logits = model(full_ids).logits

    # Logits at position k predict token k+1, so align positions
    # prefix_len-1 .. T-2 with the target tokens at prefix_len .. T-1.
    shift_logits = logits[0, prefix_len-1:-1, :]
    target_labels = full_ids[0, prefix_len:]

    log_probs = torch.log_softmax(shift_logits, dim=-1)
    target_log_probs = torch.gather(log_probs, 1, target_labels.unsqueeze(1)).squeeze()

    # Keep a 1-D tensor even when the target is a single token.
    if target_log_probs.dim() == 0:
        target_log_probs = target_log_probs.unsqueeze(0)

    # Geometric mean of the target-token probabilities, as a percentage.
    confidence = np.exp(target_log_probs.mean().item()) * 100.0

    return {
        'lev_score': round(lev_score, 2),
        'target_confidence': round(confidence, 2)
    }
```
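
A minimal usage sketch, assuming the function above, a Hugging Face causal LM (here `gpt2` purely as an illustrative base checkpoint), and a made-up example record; `generated_text` is obtained by greedy decoding of the prefix:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative base checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

# Made-up Logic-style record, for illustration only.
prefix, target = "Monday, Tuesday, Wednesday, ", "Thursday"

# Greedy continuation of the prefix, truncated to the target's token length.
inputs = tokenizer(prefix, return_tensors="pt").to(device)
target_len = tokenizer(target, return_tensors="pt").input_ids.shape[1]
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=target_len, do_sample=False)
generated_text = tokenizer.decode(out[0, inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(compute_metrics(model, tokenizer, prefix, target, generated_text, device=device))
```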