# CrawlerLM Evaluation Results
This dataset contains evaluation metrics for models tested on the CrawlerLM HTML-to-JSON extraction task.
## Current Results
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Norm. Lev. | Lev. Dist | Examples | Evaluated |
|---|---|---|---|---|---|---|---|
| unsloth/Qwen3-0.6B-GGUF | 0.0560 | 0.0095 | 0.0443 | 0.1245 | 5890.88 | 42 | 2025-12-09 10:56 |
| espsluar/crawlerlm-qwen3-0.6b-test | 0.0410 | 0.0119 | 0.0306 | 0.1253 | 5902.24 | 42 | 2025-12-09 11:10 |
| espsluar/qwen-crawlerlm-sft | 0.1259 | 0.0554 | 0.0904 | 0.1342 | 5444.86 | 42 | 2025-12-09 11:14 |
## Dataset Structure
- `model`: Hugging Face model ID (`namespace/model-name`)
- `rouge1`: ROUGE-1 F1 score (higher is better)
- `rouge2`: ROUGE-2 F1 score (higher is better)
- `rougeL`: ROUGE-L F1 score (higher is better)
- `normalized_levenshtein`: Normalized Levenshtein similarity (0-1, higher is better)
- `levenshtein_distance`: Average Levenshtein edit distance (lower is better)
- `num_examples`: Number of test examples evaluated
- `evaluation_date`: ISO timestamp of the evaluation
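For reference, the fields above can be computed as sketched below. These are common formulations of the metrics (unigram-overlap F1 for ROUGE-1, dynamic-programming edit distance, and similarity normalized by the longer string); the exact implementation used to produce this dataset may differ.

```python
# Sketch of assumed metric definitions; not necessarily the exact
# implementation used to generate this dataset's scores.
from collections import Counter


def rouge1_f1(reference: str, prediction: str) -> float:
    """Unigram-overlap F1, one common ROUGE-1 formulation."""
    ref, pred = reference.split(), prediction.split()
    if not ref or not pred:
        return 0.0
    # Clipped unigram overlap: min count per shared token.
    overlap = sum((Counter(ref) & Counter(pred)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def levenshtein_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def normalized_levenshtein(a: str, b: str) -> float:
    """Similarity in [0, 1]: 1 - distance / length of longer string."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein_distance(a, b) / max(len(a), len(b))
```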
## Usage
```python
from datasets import load_dataset

ds = load_dataset("espsluar/crawlerlm-eval-results")
print(ds)
```
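Once loaded, the records can be compared directly, for example to find the model with the best ROUGE-1. The rows below are copied from the results table above; in practice they would come from the loaded dataset.

```python
# Rows copied from the results table; in practice, iterate over the
# records returned by load_dataset("espsluar/crawlerlm-eval-results").
rows = [
    {"model": "unsloth/Qwen3-0.6B-GGUF", "rouge1": 0.0560},
    {"model": "espsluar/crawlerlm-qwen3-0.6b-test", "rouge1": 0.0410},
    {"model": "espsluar/qwen-crawlerlm-sft", "rouge1": 0.1259},
]

# Pick the row with the highest ROUGE-1 score.
best = max(rows, key=lambda r: r["rouge1"])
print(best["model"])  # → espsluar/qwen-crawlerlm-sft
```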
## Test Dataset
All models are evaluated on the test split of `espsluar/crawlerlm-html-to-json`.