Dataset preview: an excerpt of the `results_aggregate` config (Gherbal models only).

| model | benchmark | scope | accuracy | f1_macro | f1_weighted | precision_macro | recall_macro | n_samples | n_classes |
|---|---|---|---|---|---|---|---|---|---|
gherbal-v2 | flores-devtest | full | 0.1495 | 0.0588 | 0.0616 | 0.0442 | 0.1452 | 222,640 | 214 |
gherbal-v2 | flores-devtest | v1 | 0.9312 | 0.6912 | 0.9352 | 0.7094 | 0.6883 | 34,408 | 34 |
gherbal-v2 | flores-devtest | v2 | 0.889 | 0.7089 | 0.8952 | 0.7361 | 0.7073 | 37,444 | 36 |
gherbal-v2 | flores-devtest | v3 | 0.3614 | 0.2188 | 0.245 | 0.1798 | 0.3254 | 92,092 | 90 |
gherbal-v2 | flores-devtest | v4 | 0.162 | 0.0662 | 0.0697 | 0.05 | 0.1564 | 205,436 | 198 |
gherbal-v2 | flores-dev | full | 0.147 | 0.0571 | 0.0602 | 0.0429 | 0.1419 | 224,325 | 220 |
gherbal-v2 | flores-dev | v1 | 0.9303 | 0.7056 | 0.9338 | 0.7244 | 0.7029 | 33,898 | 34 |
gherbal-v2 | flores-dev | v2 | 0.8939 | 0.7267 | 0.8998 | 0.7524 | 0.7254 | 36,889 | 36 |
gherbal-v2 | flores-dev | v3 | 0.3634 | 0.2195 | 0.2464 | 0.1802 | 0.3264 | 90,727 | 90 |
gherbal-v2 | flores-dev | v4 | 0.1637 | 0.0667 | 0.0709 | 0.0504 | 0.1569 | 201,394 | 198 |
gherbal-v2 | madar | full | 0.5811 | 0.1944 | 0.5625 | 0.1991 | 0.2046 | 5,600 | 15 |
gherbal-v2 | madar | v1 | 0.7487 | 0.1219 | 0.8069 | 0.135 | 0.1146 | 2,077 | 3 |
gherbal-v2 | madar | v2 | 0.6411 | 0.2388 | 0.6449 | 0.2555 | 0.2402 | 5,076 | 11 |
gherbal-v2 | madar | v3 | 0.6051 | 0.214 | 0.5954 | 0.2231 | 0.221 | 5,378 | 13 |
gherbal-v2 | madar | v4 | 0.6051 | 0.214 | 0.5954 | 0.2231 | 0.221 | 5,378 | 13 |
gherbal-v2 | gherbal-multi | full | 0.7961 | 0.6377 | 0.8132 | 0.6733 | 0.6217 | 184,994 | 36 |
gherbal-v2 | gherbal-multi | v1 | 0.7961 | 0.6377 | 0.8132 | 0.6733 | 0.6217 | 184,994 | 36 |
gherbal-v2 | gherbal-multi | v2 | 0.7961 | 0.6377 | 0.8132 | 0.6733 | 0.6217 | 184,994 | 36 |
gherbal-v2 | gherbal-multi | v3 | 0.7961 | 0.6377 | 0.8132 | 0.6733 | 0.6217 | 184,994 | 36 |
gherbal-v2 | gherbal-multi | v4 | 0.7961 | 0.6377 | 0.8132 | 0.6733 | 0.6217 | 184,994 | 36 |
gherbal-v2 | atlasia-lid | full | 0.6561 | 0.1481 | 0.6199 | 0.152 | 0.1612 | 234,327 | 15 |
gherbal-v2 | atlasia-lid | v1 | 0.9532 | 0.1099 | 0.9675 | 0.1156 | 0.1052 | 117,533 | 3 |
gherbal-v2 | atlasia-lid | v2 | 0.7099 | 0.1683 | 0.6888 | 0.1795 | 0.1746 | 216,563 | 13 |
gherbal-v2 | atlasia-lid | v3 | 0.6561 | 0.1481 | 0.6199 | 0.152 | 0.1612 | 234,327 | 15 |
gherbal-v2 | atlasia-lid | v4 | 0.6561 | 0.1481 | 0.6199 | 0.152 | 0.1612 | 234,327 | 15 |
gherbal-v2 | wili-2018 | full | 0.2374 | 0.1173 | 0.1296 | 0.0909 | 0.2149 | 62,000 | 124 |
gherbal-v2 | wili-2018 | v1 | 0.8921 | 0.6493 | 0.8854 | 0.668 | 0.6542 | 16,500 | 33 |
gherbal-v2 | wili-2018 | v2 | 0.8921 | 0.6493 | 0.8854 | 0.668 | 0.6542 | 16,500 | 33 |
gherbal-v2 | wili-2018 | v3 | 0.4673 | 0.288 | 0.3474 | 0.249 | 0.3874 | 31,500 | 63 |
gherbal-v2 | wili-2018 | v4 | 0.2374 | 0.1173 | 0.1296 | 0.0909 | 0.2149 | 62,000 | 124 |
gherbal-v2 | commonlid | full | 0.5934 | 0.1506 | 0.5493 | 0.1377 | 0.2193 | 373,230 | 101 |
gherbal-v2 | commonlid | v1 | 0.8214 | 0.4535 | 0.8319 | 0.4579 | 0.5276 | 269,625 | 31 |
gherbal-v2 | commonlid | v2 | 0.8213 | 0.4609 | 0.8318 | 0.4651 | 0.5436 | 269,667 | 33 |
gherbal-v2 | commonlid | v3 | 0.6819 | 0.3297 | 0.6563 | 0.3168 | 0.4311 | 324,781 | 45 |
gherbal-v2 | commonlid | v4 | 0.6158 | 0.1972 | 0.5777 | 0.1829 | 0.2778 | 359,646 | 77 |
gherbal-v2 | bouquet | full | 0.0986 | 0.0297 | 0.0313 | 0.0209 | 0.0935 | 289,300 | 275 |
gherbal-v2 | bouquet | v1 | 0.8887 | 0.618 | 0.9064 | 0.6455 | 0.6059 | 31,560 | 30 |
gherbal-v2 | bouquet | v2 | 0.8743 | 0.6304 | 0.8948 | 0.661 | 0.616 | 32,612 | 31 |
gherbal-v2 | bouquet | v3 | 0.4302 | 0.266 | 0.3293 | 0.2295 | 0.3475 | 66,276 | 63 |
gherbal-v2 | bouquet | v4 | 0.1869 | 0.084 | 0.0927 | 0.0643 | 0.1694 | 152,540 | 145 |
gherbal-v3 | flores-devtest | full | 0.3605 | 0.2412 | 0.2548 | 0.216 | 0.3427 | 222,640 | 214 |
gherbal-v3 | flores-devtest | v1 | 0.9596 | 0.4516 | 0.9695 | 0.4583 | 0.447 | 34,408 | 34 |
gherbal-v3 | flores-devtest | v2 | 0.9245 | 0.4648 | 0.9346 | 0.4745 | 0.4618 | 37,444 | 36 |
gherbal-v3 | flores-devtest | v3 | 0.8716 | 0.739 | 0.8675 | 0.773 | 0.7435 | 92,092 | 90 |
gherbal-v3 | flores-devtest | v4 | 0.3907 | 0.2679 | 0.2853 | 0.2429 | 0.3683 | 205,436 | 198 |
gherbal-v3 | flores-dev | full | 0.3581 | 0.2366 | 0.2514 | 0.2075 | 0.3384 | 224,325 | 220 |
gherbal-v3 | flores-dev | v1 | 0.9719 | 0.4824 | 0.9789 | 0.4866 | 0.4789 | 33,898 | 34 |
gherbal-v3 | flores-dev | v2 | 0.9526 | 0.5027 | 0.9595 | 0.5075 | 0.5005 | 36,889 | 36 |
gherbal-v3 | flores-dev | v3 | 0.8854 | 0.7493 | 0.8813 | 0.7758 | 0.7533 | 90,727 | 90 |
gherbal-v3 | flores-dev | v4 | 0.3989 | 0.273 | 0.2929 | 0.2443 | 0.3732 | 201,394 | 198 |
gherbal-v3 | madar | full | 0.5745 | 0.2402 | 0.5518 | 0.272 | 0.2442 | 5,600 | 15 |
gherbal-v3 | madar | v1 | 0.7848 | 0.1387 | 0.8243 | 0.1506 | 0.1338 | 2,077 | 3 |
gherbal-v3 | madar | v2 | 0.6328 | 0.2745 | 0.6315 | 0.3155 | 0.2687 | 5,076 | 11 |
gherbal-v3 | madar | v3 | 0.5982 | 0.2704 | 0.5845 | 0.3122 | 0.2699 | 5,378 | 13 |
gherbal-v3 | madar | v4 | 0.5982 | 0.2704 | 0.5845 | 0.3122 | 0.2699 | 5,378 | 13 |
gherbal-v3 | gherbal-multi | full | 0.8966 | 0.3534 | 0.9028 | 0.3562 | 0.3514 | 184,994 | 36 |
gherbal-v3 | gherbal-multi | v1 | 0.8966 | 0.3534 | 0.9028 | 0.3562 | 0.3514 | 184,994 | 36 |
gherbal-v3 | gherbal-multi | v2 | 0.8966 | 0.3534 | 0.9028 | 0.3562 | 0.3514 | 184,994 | 36 |
gherbal-v3 | gherbal-multi | v3 | 0.8966 | 0.3534 | 0.9028 | 0.3562 | 0.3514 | 184,994 | 36 |
gherbal-v3 | gherbal-multi | v4 | 0.8966 | 0.3534 | 0.9028 | 0.3562 | 0.3514 | 184,994 | 36 |
gherbal-v3 | atlasia-lid | full | 0.6561 | 0.108 | 0.6252 | 0.1132 | 0.1282 | 234,327 | 15 |
gherbal-v3 | atlasia-lid | v1 | 0.937 | 0.0829 | 0.9505 | 0.0837 | 0.0821 | 117,533 | 3 |
gherbal-v3 | atlasia-lid | v2 | 0.7098 | 0.1287 | 0.6939 | 0.1362 | 0.1456 | 216,563 | 13 |
gherbal-v3 | atlasia-lid | v3 | 0.6561 | 0.108 | 0.6252 | 0.1132 | 0.1282 | 234,327 | 15 |
gherbal-v3 | atlasia-lid | v4 | 0.6561 | 0.108 | 0.6252 | 0.1132 | 0.1282 | 234,327 | 15 |
gherbal-v3 | wili-2018 | full | 0.4695 | 0.2834 | 0.3771 | 0.2544 | 0.3529 | 62,000 | 124 |
gherbal-v3 | wili-2018 | v1 | 0.9209 | 0.4191 | 0.9399 | 0.4316 | 0.4107 | 16,500 | 33 |
gherbal-v3 | wili-2018 | v2 | 0.9209 | 0.4191 | 0.9399 | 0.4316 | 0.4107 | 16,500 | 33 |
gherbal-v3 | wili-2018 | v3 | 0.9242 | 0.6842 | 0.934 | 0.7014 | 0.677 | 31,500 | 63 |
gherbal-v3 | wili-2018 | v4 | 0.4695 | 0.2834 | 0.3771 | 0.2544 | 0.3529 | 62,000 | 124 |
gherbal-v3 | commonlid | full | 0.7441 | 0.1718 | 0.7457 | 0.1667 | 0.2198 | 373,230 | 101 |
gherbal-v3 | commonlid | v1 | 0.8627 | 0.2552 | 0.8926 | 0.2564 | 0.2863 | 269,625 | 31 |
gherbal-v3 | commonlid | v2 | 0.8626 | 0.2584 | 0.8925 | 0.2593 | 0.2937 | 269,667 | 33 |
gherbal-v3 | commonlid | v3 | 0.8551 | 0.2972 | 0.8828 | 0.3046 | 0.3426 | 324,781 | 45 |
gherbal-v3 | commonlid | v4 | 0.7722 | 0.2075 | 0.7808 | 0.204 | 0.2589 | 359,646 | 77 |
gherbal-v3 | bouquet | full | 0.1914 | 0.0939 | 0.1086 | 0.083 | 0.1655 | 289,300 | 275 |
gherbal-v3 | bouquet | v1 | 0.9479 | 0.3837 | 0.9593 | 0.3907 | 0.3792 | 31,560 | 30 |
gherbal-v3 | bouquet | v2 | 0.9343 | 0.3921 | 0.9485 | 0.4008 | 0.3862 | 32,612 | 31 |
gherbal-v3 | bouquet | v3 | 0.8356 | 0.5098 | 0.8497 | 0.5592 | 0.5014 | 66,276 | 63 |
gherbal-v3 | bouquet | v4 | 0.3631 | 0.2106 | 0.2731 | 0.2004 | 0.28 | 152,540 | 145 |
gherbal-v4 | flores-devtest | full | 0.85 | 0.7693 | 0.8245 | 0.7712 | 0.7943 | 222,640 | 214 |
gherbal-v4 | flores-devtest | v1 | 0.9591 | 0.3465 | 0.9682 | 0.3519 | 0.3432 | 34,408 | 34 |
gherbal-v4 | flores-devtest | v2 | 0.9213 | 0.3487 | 0.9309 | 0.357 | 0.3466 | 37,444 | 36 |
gherbal-v4 | flores-devtest | v3 | 0.8914 | 0.4411 | 0.8936 | 0.4612 | 0.4407 | 92,092 | 90 |
gherbal-v4 | flores-devtest | v4 | 0.9212 | 0.8505 | 0.9187 | 0.8686 | 0.8537 | 205,436 | 198 |
gherbal-v4 | flores-dev | full | 0.8334 | 0.7485 | 0.801 | 0.745 | 0.7798 | 224,325 | 220 |
gherbal-v4 | flores-dev | v1 | 0.9654 | 0.3558 | 0.9732 | 0.3601 | 0.3529 | 33,898 | 34 |
gherbal-v4 | flores-dev | v2 | 0.9344 | 0.3559 | 0.9423 | 0.3619 | 0.3543 | 36,889 | 36 |
gherbal-v4 | flores-dev | v3 | 0.9007 | 0.4625 | 0.9019 | 0.4816 | 0.4625 | 90,727 | 90 |
gherbal-v4 | flores-dev | v4 | 0.9282 | 0.8565 | 0.9252 | 0.873 | 0.86 | 201,394 | 198 |
gherbal-v4 | madar | full | 0.6298 | 0.2608 | 0.6169 | 0.316 | 0.2712 | 5,600 | 15 |
gherbal-v4 | madar | v1 | 0.8411 | 0.1574 | 0.8898 | 0.1672 | 0.1495 | 2,077 | 3 |
gherbal-v4 | madar | v2 | 0.6629 | 0.2595 | 0.6682 | 0.3354 | 0.2465 | 5,076 | 11 |
gherbal-v4 | madar | v3 | 0.6558 | 0.2953 | 0.6516 | 0.3682 | 0.2984 | 5,378 | 13 |
gherbal-v4 | madar | v4 | 0.6558 | 0.2953 | 0.6516 | 0.3682 | 0.2984 | 5,378 | 13 |
gherbal-v4 | gherbal-multi | full | 0.8699 | 0.163 | 0.8964 | 0.1684 | 0.1583 | 184,994 | 36 |
gherbal-v4 | gherbal-multi | v1 | 0.8699 | 0.163 | 0.8964 | 0.1684 | 0.1583 | 184,994 | 36 |
gherbal-v4 | gherbal-multi | v2 | 0.8699 | 0.163 | 0.8964 | 0.1684 | 0.1583 | 184,994 | 36 |
gherbal-v4 | gherbal-multi | v3 | 0.8699 | 0.163 | 0.8964 | 0.1684 | 0.1583 | 184,994 | 36 |
gherbal-v4 | gherbal-multi | v4 | 0.8699 | 0.163 | 0.8964 | 0.1684 | 0.1583 | 184,994 | 36 |
# LID Benchmark — Language Identification Evaluation Results
Structured evaluation results for 10 language identification models across 8 benchmarks covering 380 languages — with per-language accuracy, aggregate metrics, and confusion analysis.
Built as part of the Gherbal evaluation pipeline.
The full PDF report is also available.
## Quick Start

```python
from datasets import load_dataset

# Per-language results (26,540 rows)
per_lang = load_dataset("omneity-labs/lid-benchmark", "results_per_language", split="train")

# Aggregate metrics per model × benchmark × scope (400 rows)
aggregate = load_dataset("omneity-labs/lid-benchmark", "results_aggregate", split="train")

# Summary — one row per model × benchmark, full scope only (80 rows)
summary = load_dataset("omneity-labs/lid-benchmark", "results_summary", split="train")
```
## Leaderboard (Full Scope)
| Model | FLORES+ devtest | MADAR | Gherbal-Multi | ATLASIA-LID |
|---|---|---|---|---|
| GlotLID | 0.9253 | 0.5648 | 0.7772 | 0.4977 |
| OpenLID v2 | 0.8748 | 0.6262 | 0.7762 | 0.5735 |
| OpenLID v3 (HPLT-LID) | 0.8556 | — | 0.6619 | — |
| Gherbal v4 | 0.8500 | 0.6298 | 0.8699 | 0.6909 |
| OpenLID v1 | 0.8425 | 0.5587 | 0.8296 | 0.4845 |
| NLLB-LID | 0.8331 | 0.1052 | 0.7522 | 0.3348 |
| FastLID-176 | 0.4006 | 0.1352 | 0.6472 | 0.3899 |
| Gherbal v3 | 0.3605 | 0.5745 | 0.8966 | 0.6561 |
| Gherbal v2 | 0.1495 | 0.5811 | 0.7961 | 0.6561 |
| Gherbal v1 | 0.1374 | 0.2771 | 0.8385 | 0.2718 |
Note: Full-scope FLORES+ accuracy penalizes models that support fewer languages (unsupported languages count as errors). Use `results_aggregate` with `scope=v4` (214 languages) for a fairer comparison on a shared language set. Gherbal v4 reaches 0.9212 accuracy on FLORES+ devtest in the v4 scope.
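For instance, a minimal sketch of that scoped comparison, using the `results_aggregate` config documented below:

```python
from datasets import load_dataset

# Rank models on FLORES+ devtest under the full and v4 scopes.
agg = load_dataset("omneity-labs/lid-benchmark", "results_aggregate", split="train").to_pandas()
flores = agg[agg["benchmark"] == "flores-devtest"]
for scope in ["full", "v4"]:
    ranked = flores[flores["scope"] == scope].sort_values("accuracy", ascending=False)
    print(f"== scope={scope} ==")
    print(ranked[["model", "accuracy", "n_classes"]].head(5).to_string(index=False))
```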
## Dataset Configs

### `results_per_language` — Per-Language Breakdown

26,540 rows. One row per (model, benchmark, scope, language).
| Column | Type | Description |
|---|---|---|
| `model` | string | Model name (e.g. `gherbal-v4`, `glotlid`) |
| `benchmark` | string | Benchmark name (e.g. `flores-devtest`) |
| `scope` | string | Language scope: `full`, `v1`, `v2`, `v3`, or `v4` |
| `language` | string | Language code in `iso639-3_Script` format (e.g. `arb_Arab`) |
| `n_samples` | int | Number of test samples for this language |
| `accuracy` | float | Classification accuracy (0–1) |
| `top_confusion_1` | string | Most confused-with language |
| `top_confusion_1_count` | int | Count of samples misclassified as `top_confusion_1` |
| `top_confusion_2` | string | 2nd most confused-with language |
| `top_confusion_2_count` | int | Count of samples misclassified as `top_confusion_2` |
| `top_confusion_3` | string | 3rd most confused-with language |
| `top_confusion_3_count` | int | Count of samples misclassified as `top_confusion_3` |
| `confusions_json` | string | Full confusion map as JSON (all misclassified targets and counts) |
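Since `confusions_json` is stored as a JSON string, it needs a `json.loads` before use. A minimal sketch, assuming the decoded map has the shape `{misclassified_as: count}` implied by the description above:

```python
import json

from datasets import load_dataset

ds = load_dataset("omneity-labs/lid-benchmark", "results_per_language", split="train")
row = ds[0]
# Decode the full confusion map; assumed shape: {misclassified_as: count}
confusions = json.loads(row["confusions_json"])
for lang, count in sorted(confusions.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{row['language']} misclassified as {lang}: {count} samples")
```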
Example — find the hardest languages for a model:

```python
from datasets import load_dataset
import pandas as pd

ds = load_dataset("omneity-labs/lid-benchmark", "results_per_language", split="train")
df = ds.to_pandas()

# Worst-performing languages for Gherbal v4 on FLORES
worst = (
    df[(df["model"] == "gherbal-v4") &
       (df["benchmark"] == "flores-devtest") &
       (df["scope"] == "full") &
       (df["n_samples"] >= 100)]
    .sort_values("accuracy")
    .head(10)
    [["language", "accuracy", "n_samples", "top_confusion_1"]]
)
print(worst)
```
Example — compare Arabic dialect accuracy across models:

```python
# Reuses df from the previous example
arabic_dialects = [
    "arz_Arab", "ary_Arab", "arq_Arab", "aeb_Arab",
    "apc_Arab", "acm_Arab", "ars_Arab", "afb_Arab",
    # Add more
]

arabic_df = df[
    (df["language"].isin(arabic_dialects)) &
    (df["benchmark"] == "flores-devtest") &
    (df["scope"] == "full")
]

pivot = arabic_df.pivot_table(
    index="language", columns="model", values="accuracy"
)
print(pivot.round(3))
```
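A natural follow-up, continuing from the `pivot` above, is to rank models by their mean accuracy over the selected dialects:

```python
# Average accuracy per model across the Arabic dialects above
print(pivot.mean().sort_values(ascending=False).round(3))
```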
### `results_aggregate` — Aggregate Metrics

400 rows. One row per (model, benchmark, scope).

| Column | Type | Description |
|---|---|---|
| `model` | string | Model name |
| `benchmark` | string | Benchmark name |
| `scope` | string | Language scope |
| `accuracy` | float | Overall accuracy |
| `f1_macro` | float | Macro-averaged F1 |
| `f1_weighted` | float | Weighted F1 |
| `precision_macro` | float | Macro-averaged precision |
| `recall_macro` | float | Macro-averaged recall |
| `n_samples` | int | Total evaluation samples |
| `n_classes` | int | Number of unique languages |
Example — model comparison across scopes:

```python
from datasets import load_dataset

ds = load_dataset("omneity-labs/lid-benchmark", "results_aggregate", split="train")
df = ds.to_pandas()

comparison = df[
    (df["benchmark"] == "flores-devtest") &
    (df["model"].isin(["gherbal-v4", "glotlid", "openlid-v2"]))
].pivot_table(index="scope", columns="model", values="accuracy")
print(comparison.round(4))
```
### `results_summary` — Quick Summary

80 rows. One row per (model, benchmark) — full scope only. Best for quick leaderboard construction.

| Column | Type | Description |
|---|---|---|
| `model` | string | Model name |
| `benchmark` | string | Benchmark name |
| `accuracy` | float | Overall accuracy (full scope) |
| `f1_macro` | float | Macro F1 (full scope) |
| `f1_weighted` | float | Weighted F1 (full scope) |
| `precision_macro` | float | Macro precision (full scope) |
| `recall_macro` | float | Macro recall (full scope) |
| `n_samples` | int | Total samples |
| `n_classes` | int | Number of classes |
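As a sketch, the leaderboard above can be rebuilt from this config in a few lines (benchmark keys as they appear throughout the dataset):

```python
from datasets import load_dataset

summary = load_dataset("omneity-labs/lid-benchmark", "results_summary", split="train").to_pandas()
# One row per model, one accuracy column per benchmark (full scope only)
leaderboard = summary.pivot_table(index="model", columns="benchmark", values="accuracy")
cols = ["flores-devtest", "madar", "gherbal-multi", "atlasia-lid"]
print(leaderboard[cols].sort_values("flores-devtest", ascending=False).round(4))
```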
## Models Evaluated
| Model | Type | Languages | Source |
|---|---|---|---|
| Gherbal v4 | FastText | 214 | Omneity Labs |
| Gherbal v3 | FastText | 106 | Omneity Labs |
| Gherbal v2 | FastText | 46 | Omneity Labs |
| Gherbal v1 | FastText | 36 | Omneity Labs |
| GlotLID v3 | FastText | 2,102 | LMU Munich |
| NLLB-LID | FastText | 218 | Meta |
| OpenLID v1 | FastText | 201 | Laurie Burchell |
| OpenLID v2 | FastText | 201 | Laurie Burchell |
| OpenLID v3 (HPLT-LID) | FastText | 201 | HPLT |
| FastLID-176 | FastText | 176 | Meta |
## Benchmarks
| Benchmark | Samples | Languages | Description |
|---|---|---|---|
| FLORES+ devtest | 222,640 | 214 | openlanguagedata/flores_plus devtest split |
| FLORES+ dev | 224,325 | 220 | openlanguagedata/flores_plus dev split |
| MADAR | 5,600 | 15 | sawalni-ai/madar — Arabic dialect corpus |
| Gherbal-Multi | 185,000+ | 106+ | sawalni-ai/gherbal-multi — multi-source test set |
| ATLASIA-LID | 234,000+ | 24 | atlasia/Arabic-LID-Leaderboard — Arabic country-level dialects |
| WiLI-2018 | 62,000 | 235 | Wikipedia Language Identification |
| CommonLID | 373,230 | 101 | Common Crawl language ID |
| Bouquet | 289,300 | 275 | Cross-domain evaluation |
## Evaluation Scopes

Results include multiple scopes to enable fair comparison between models with different language coverage:

| Scope | Languages | Description |
|---|---|---|
| `full` | All | All languages in the benchmark (penalizes models with fewer supported languages) |
| `v1` | 36 | Intersection with the Gherbal v1 language set |
| `v2` | 46 | Intersection with the Gherbal v2 language set |
| `v3` | 106 | Intersection with the Gherbal v3 language set |
| `v4` | 214 | Intersection with the Gherbal v4 language set |
Using scoped evaluation ensures models are compared only on languages they were designed to handle. For example, Gherbal v3 supports 106 languages — its v3 scope accuracy on FLORES+ is much higher than its full scope accuracy, because the full scope includes 108+ languages it was never trained on.
## Language Codes

Languages use the `iso639-3_Script` format from FLORES+:

- `arb_Arab` — Modern Standard Arabic (Arabic script)
- `arz_Arab` — Egyptian Arabic
- `ary_Arab` — Moroccan Arabic (Arabic script)
- `ary_Latn` — Moroccan Arabic (Latin script)
- `eng_Latn` — English
- `fra_Latn` — French
The full list of 380 languages is available in the `results_per_language` config.
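Because the format splits cleanly on the underscore, grouping results by script is straightforward. A small sketch, reusing `df` from the per-language examples above:

```python
# Split codes like "ary_Latn" into ISO 639-3 language and ISO 15924 script
df["script"] = df["language"].str.split("_").str[1]
# Mean per-language accuracy by script for one model
mask = (df["model"] == "gherbal-v4") & (df["benchmark"] == "flores-devtest") & (df["scope"] == "full")
print(df[mask].groupby("script")["accuracy"].mean().round(3))
```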
## CSV Downloads

For convenience, CSV versions of all three configs are also included in the `csv/` directory.
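With pandas and `huggingface_hub` installed, these CSVs can be read directly over the `hf://` filesystem. A sketch with a hypothetical filename (check the `csv/` directory for the actual names):

```python
import pandas as pd

# Hypothetical path: the exact CSV filenames inside csv/ may differ.
summary = pd.read_csv("hf://datasets/omneity-labs/lid-benchmark/csv/results_summary.csv")
print(summary.head())
```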
## Citation
If you use this benchmark data in your research, please reference:
- Omneity Labs LID Benchmark: https://huggingface.co/datasets/omneity-labs/lid-benchmark
- Gherbal model: https://www.omneitylabs.com/models/gherbal
- Evaluation benchmarks: See individual benchmark datasets linked above.
## License
The evaluation results in this dataset are released under Apache 2.0. The underlying benchmark datasets retain their original licenses.