Datasets:

Schema (column: type):

- question: string
- choices: list
- answer: int64
- answer_label: string
- split: string
- subcategories: string
- category: string
- lang: string
- second_lang: string
- notes: string
- id: string
- set_id: string
- variation_id: string
- perturbed_word: string
- vanilla_cos_sim_to_canonical: dict
- trimmed_cos_sim_to_canonical: dict
- token_counts: dict
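A minimal sketch of one record under the schema above, abridged from the first row shown below (the dict-valued columns carry one entry per tokenizer; only `gpt2` is kept here for brevity). `second_lang` and `notes` appear empty in the rows shown, which is an assumption about their values, not something the preview states explicitly:

```python
# One record following the schema; values abridged from the first row below.
record = {
    "question": "Il prezzo di questa casa è 300.000 eu. Il costo di questa casa è",
    "choices": ["300.000 euro", "300,000 euro", "300 euro", "30.0000 euro"],
    "answer": 0,                     # int64 index into choices
    "answer_label": "A",             # letter form of the same index
    "split": "test",
    "subcategories": "Abbreviations",
    "category": "Script / Orthography",
    "lang": "ita_Latn",
    "second_lang": "",               # assumed empty: not rendered in the rows shown
    "notes": "",                     # assumed empty: not rendered in the rows shown
    "id": "302-3.8",                 # set_id + "-" + variation_id
    "set_id": "302",
    "variation_id": "3.8",
    "perturbed_word": "euro",        # canonical word replaced by the perturbation
    "vanilla_cos_sim_to_canonical": {"gpt2": 0.9685859680175781},
    "trimmed_cos_sim_to_canonical": {"gpt2": 0.03810340166091919},
    "token_counts": {"gpt2": 26},
}

# Invariants visible in the data: answer indexes choices,
# and answer_label is its letter form.
assert record["choices"][record["answer"]] == "300.000 euro"
assert record["answer_label"] == chr(ord("A") + record["answer"])
```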
Il prezzo di questa casa è 300.000 eu. Il costo di questa casa è
|
[
"300.000 euro",
"300,000 euro",
"300 euro",
"30.0000 euro"
] | 0
|
A
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
302-3.8
|
302
|
3.8
|
euro
|
{
"CohereLabs/aya-expanse-8b": 0.9857403039932251,
"Qwen/Qwen3-8B": 0.9869157075881958,
"bigscience/bloom": 0.9789521098136902,
"common-pile/comma-v0.1-1t": 0.9729571342468262,
"facebook/xglm-564M": 0.9706848859786987,
"google-bert/bert-base-multilingual-cased": 0.9703931212425232,
"google/byt5-small": 0.9958672523498535,
"google/gemma-2-2b": 0.9848247170448303,
"gpt2": 0.9685859680175781,
"meta-llama/Llama-3.2-1B": 0.975965142250061,
"microsoft/Phi-3-mini-4k-instruct": 0.984674870967865,
"mistralai/tekken": 0.9843989610671997,
"tiktoken/gpt-4o": 0.9751088619232178,
"tokenmonster/englishcode-32000-consistent-v1": 0.9815220832824707
}
|
{
"CohereLabs/aya-expanse-8b": 0.1385408192873001,
"Qwen/Qwen3-8B": 0.14013592898845673,
"bigscience/bloom": 0.14557421207427979,
"common-pile/comma-v0.1-1t": 0.026260867714881897,
"facebook/xglm-564M": 0.19083420932292938,
"google-bert/bert-base-multilingual-cased": 0.05557875335216522,
"google/byt5-small": 0.5528782606124878,
"google/gemma-2-2b": 0.10549915581941605,
"gpt2": 0.03810340166091919,
"meta-llama/Llama-3.2-1B": 0.14337904751300812,
"microsoft/Phi-3-mini-4k-instruct": 0.10051627457141876,
"mistralai/tekken": 0.12515200674533844,
"tiktoken/gpt-4o": 0.17352880537509918,
"tokenmonster/englishcode-32000-consistent-v1": 0.09028923511505127
}
|
{
"CohereLabs/aya-expanse-8b": 22,
"Qwen/Qwen3-8B": 23,
"bigscience/bloom": 20,
"common-pile/comma-v0.1-1t": 26,
"facebook/xglm-564M": 15,
"google-bert/bert-base-multilingual-cased": 17,
"google/byt5-small": 66,
"google/gemma-2-2b": 22,
"gpt2": 26,
"meta-llama/Llama-3.2-1B": 19,
"microsoft/Phi-3-mini-4k-instruct": 24,
"mistralai/tekken": 23,
"tiktoken/gpt-4o": 18,
"tokenmonster/englishcode-32000-consistent-v1": 23
}
|
||
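The two similarity dicts in the row above illustrate why both are recorded: vanilla cosine similarity to the canonical prompt is uniformly high for every tokenizer, while the trimmed variant spreads them apart. A small sketch using four of the row's actual values (abridged from the full dicts):

```python
# Abridged from the row above: cosine similarity of the perturbed prompt's
# embedding to the canonical prompt's, per tokenizer.
vanilla = {
    "google/byt5-small": 0.9958672523498535,
    "gpt2": 0.9685859680175781,
    "bigscience/bloom": 0.9789521098136902,
    "tiktoken/gpt-4o": 0.9751088619232178,
}
trimmed = {
    "google/byt5-small": 0.5528782606124878,
    "gpt2": 0.03810340166091919,
    "bigscience/bloom": 0.14557421207427979,
    "tiktoken/gpt-4o": 0.17352880537509918,
}

# The byte-level tokenizer retains the most signal under the trimmed metric.
most_robust = max(trimmed, key=trimmed.get)

# The trimmed scores discriminate between tokenizers far more than the
# near-saturated vanilla scores do.
spread_vanilla = max(vanilla.values()) - min(vanilla.values())
spread_trimmed = max(trimmed.values()) - min(trimmed.values())

assert most_robust == "google/byt5-small"
assert spread_trimmed > spread_vanilla
```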
Il prezzo di questa casa è 300.000 e. Il costo di questa casa è
|
[
"300.000 euro",
"300,000 euro",
"300 euro",
"30.0000 euro"
] | 0
|
A
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
302-3.10
|
302
|
3.10
|
euro
|
{
"CohereLabs/aya-expanse-8b": 0.9830060005187988,
"Qwen/Qwen3-8B": 0.9846362471580505,
"bigscience/bloom": 0.9739221930503845,
"common-pile/comma-v0.1-1t": 0.9833942651748657,
"facebook/xglm-564M": 0.9650736451148987,
"google-bert/bert-base-multilingual-cased": 0.9671679735183716,
"google/byt5-small": 0.9934729337692261,
"google/gemma-2-2b": 0.983406126499176,
"gpt2": 0.9779931306838989,
"meta-llama/Llama-3.2-1B": 0.9702856540679932,
"microsoft/Phi-3-mini-4k-instruct": 0.982698917388916,
"mistralai/tekken": 0.9820671677589417,
"tiktoken/gpt-4o": 0.9692671895027161,
"tokenmonster/englishcode-32000-consistent-v1": 0.9786087870597839
}
|
{
"CohereLabs/aya-expanse-8b": -0.006411748472601175,
"Qwen/Qwen3-8B": 0.0681527704000473,
"bigscience/bloom": -0.005266927182674408,
"common-pile/comma-v0.1-1t": -0.002005022019147873,
"facebook/xglm-564M": 0.015466060489416122,
"google-bert/bert-base-multilingual-cased": -0.01870070770382881,
"google/byt5-small": 0.46131211519241333,
"google/gemma-2-2b": 0.028713535517454147,
"gpt2": 0.011133558116853237,
"meta-llama/Llama-3.2-1B": -0.05435729771852493,
"microsoft/Phi-3-mini-4k-instruct": -0.010381445288658142,
"mistralai/tekken": -0.012979264371097088,
"tiktoken/gpt-4o": 0.0036126829218119383,
"tokenmonster/englishcode-32000-consistent-v1": 0.023911256343126297
}
|
{
"CohereLabs/aya-expanse-8b": 22,
"Qwen/Qwen3-8B": 23,
"bigscience/bloom": 20,
"common-pile/comma-v0.1-1t": 27,
"facebook/xglm-564M": 15,
"google-bert/bert-base-multilingual-cased": 17,
"google/byt5-small": 65,
"google/gemma-2-2b": 22,
"gpt2": 25,
"meta-llama/Llama-3.2-1B": 19,
"microsoft/Phi-3-mini-4k-instruct": 24,
"mistralai/tekken": 23,
"tiktoken/gpt-4o": 18,
"tokenmonster/englishcode-32000-consistent-v1": 23
}
|
||
Il numero d mesi in un anno è
|
[
"10",
"12",
"11",
"13"
] | 1
|
B
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
310-3.9
|
310
|
3.9
|
di
|
{
"CohereLabs/aya-expanse-8b": 0.9041993618011475,
"Qwen/Qwen3-8B": 0.8993269801139832,
"bigscience/bloom": 0.8886461853981018,
"common-pile/comma-v0.1-1t": 0.9249951839447021,
"facebook/xglm-564M": 0.9030013084411621,
"google-bert/bert-base-multilingual-cased": 0.8927466869354248,
"google/byt5-small": 0.9935976266860962,
"google/gemma-2-2b": 0.9197080135345459,
"gpt2": 0.9018505811691284,
"meta-llama/Llama-3.2-1B": 0.8880246877670288,
"microsoft/Phi-3-mini-4k-instruct": 0.879460334777832,
"mistralai/tekken": 0.903039813041687,
"tiktoken/gpt-4o": 0.8884236216545105,
"tokenmonster/englishcode-32000-consistent-v1": 0.9177913069725037
}
|
{
"CohereLabs/aya-expanse-8b": 0.1277914196252823,
"Qwen/Qwen3-8B": -0.038773685693740845,
"bigscience/bloom": -0.021927211433649063,
"common-pile/comma-v0.1-1t": -0.06762950122356415,
"facebook/xglm-564M": 0.10845727473497391,
"google-bert/bert-base-multilingual-cased": 0.10735861212015152,
"google/byt5-small": 0.5014738440513611,
"google/gemma-2-2b": 0.14631111919879913,
"gpt2": -0.03892117738723755,
"meta-llama/Llama-3.2-1B": -0.021200457587838173,
"microsoft/Phi-3-mini-4k-instruct": -0.042535509914159775,
"mistralai/tekken": 0.09141422808170319,
"tiktoken/gpt-4o": 0.04498814046382904,
"tokenmonster/englishcode-32000-consistent-v1": 0.12912601232528687
}
|
{
"CohereLabs/aya-expanse-8b": 8,
"Qwen/Qwen3-8B": 9,
"bigscience/bloom": 9,
"common-pile/comma-v0.1-1t": 14,
"facebook/xglm-564M": 8,
"google-bert/bert-base-multilingual-cased": 8,
"google/byt5-small": 30,
"google/gemma-2-2b": 8,
"gpt2": 12,
"meta-llama/Llama-3.2-1B": 9,
"microsoft/Phi-3-mini-4k-instruct": 9,
"mistralai/tekken": 8,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 10
}
|
||
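The `token_counts` column in the row above also shows the expected tokenizer behavior: the byte-level `byt5` tokenizer pays roughly one token per UTF-8 byte of the prompt, while subword tokenizers need far fewer. A sketch with four counts taken from that row:

```python
# token_counts abridged from the row above (prompt "Il numero d mesi in un anno è").
token_counts = {
    "google/byt5-small": 30,
    "gpt2": 12,
    "facebook/xglm-564M": 8,
    "google/gemma-2-2b": 8,
}

prompt = "Il numero d mesi in un anno è"

# byt5 tokenizes UTF-8 bytes, so its count tracks the byte length
# (29 characters, 30 bytes: "è" takes two bytes).
assert token_counts["google/byt5-small"] >= len(prompt.encode("utf-8"))

# Subword tokenizers compress the same prompt into a fraction of the tokens.
assert token_counts["gpt2"] < token_counts["google/byt5-small"] / 2
```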
I secondi che compongono un minuto sn
|
[
"50",
"60",
"100",
"30"
] | 1
|
B
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
311-3.1
|
311
|
3.1
|
sono
|
{
"CohereLabs/aya-expanse-8b": 0.889119029045105,
"Qwen/Qwen3-8B": 0.9206670522689819,
"bigscience/bloom": 0.9086928963661194,
"common-pile/comma-v0.1-1t": 0.9027715921401978,
"facebook/xglm-564M": 0.9245870113372803,
"google-bert/bert-base-multilingual-cased": 0.852838933467865,
"google/byt5-small": 0.9900485277175903,
"google/gemma-2-2b": 0.9006550312042236,
"gpt2": 0.8930383920669556,
"meta-llama/Llama-3.2-1B": 0.9155977964401245,
"microsoft/Phi-3-mini-4k-instruct": 0.9173316955566406,
"mistralai/tekken": 0.9098365306854248,
"tiktoken/gpt-4o": 0.9097579717636108,
"tokenmonster/englishcode-32000-consistent-v1": 0.959909200668335
}
|
{
"CohereLabs/aya-expanse-8b": 0.02457837015390396,
"Qwen/Qwen3-8B": 0.042501673102378845,
"bigscience/bloom": 0.019553344696760178,
"common-pile/comma-v0.1-1t": 0.016733499243855476,
"facebook/xglm-564M": 0.1390569806098938,
"google-bert/bert-base-multilingual-cased": 0.0033388654701411724,
"google/byt5-small": 0.40499550104141235,
"google/gemma-2-2b": 0.06815136969089508,
"gpt2": 0.009069850668311119,
"meta-llama/Llama-3.2-1B": 0.06639596819877625,
"microsoft/Phi-3-mini-4k-instruct": 0.047561533749103546,
"mistralai/tekken": 0.021650152280926704,
"tiktoken/gpt-4o": 0.03376496210694313,
"tokenmonster/englishcode-32000-consistent-v1": -0.007327871397137642
}
|
{
"CohereLabs/aya-expanse-8b": 8,
"Qwen/Qwen3-8B": 11,
"bigscience/bloom": 10,
"common-pile/comma-v0.1-1t": 11,
"facebook/xglm-564M": 9,
"google-bert/bert-base-multilingual-cased": 10,
"google/byt5-small": 37,
"google/gemma-2-2b": 8,
"gpt2": 11,
"meta-llama/Llama-3.2-1B": 11,
"microsoft/Phi-3-mini-4k-instruct": 11,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 10,
"tokenmonster/englishcode-32000-consistent-v1": 16
}
|
||
I secondi ke compongono un minuto sono
|
[
"50",
"60",
"100",
"30"
] | 1
|
B
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
311-3.3
|
311
|
3.3
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9062737822532654,
"Qwen/Qwen3-8B": 0.9312589764595032,
"bigscience/bloom": 0.9275574088096619,
"common-pile/comma-v0.1-1t": 0.9388861656188965,
"facebook/xglm-564M": 0.9199646711349487,
"google-bert/bert-base-multilingual-cased": 0.9145818948745728,
"google/byt5-small": 0.988102912902832,
"google/gemma-2-2b": 0.9178076982498169,
"gpt2": 0.9206658601760864,
"meta-llama/Llama-3.2-1B": 0.9282821416854858,
"microsoft/Phi-3-mini-4k-instruct": 0.9255558252334595,
"mistralai/tekken": 0.9258506298065186,
"tiktoken/gpt-4o": 0.9301422238349915,
"tokenmonster/englishcode-32000-consistent-v1": 0.9591631889343262
}
|
{
"CohereLabs/aya-expanse-8b": 0.07124575972557068,
"Qwen/Qwen3-8B": 0.04917556792497635,
"bigscience/bloom": 0.038712747395038605,
"common-pile/comma-v0.1-1t": 0.061479900032281876,
"facebook/xglm-564M": 0.10398856550455093,
"google-bert/bert-base-multilingual-cased": 0.0657016783952713,
"google/byt5-small": 0.09882692992687225,
"google/gemma-2-2b": 0.13011936843395233,
"gpt2": 0.0447414293885231,
"meta-llama/Llama-3.2-1B": 0.06527149677276611,
"microsoft/Phi-3-mini-4k-instruct": 0.03401301056146622,
"mistralai/tekken": 0.08301310241222382,
"tiktoken/gpt-4o": 0.1011124923825264,
"tokenmonster/englishcode-32000-consistent-v1": 0.01840820163488388
}
|
{
"CohereLabs/aya-expanse-8b": 8,
"Qwen/Qwen3-8B": 11,
"bigscience/bloom": 10,
"common-pile/comma-v0.1-1t": 12,
"facebook/xglm-564M": 9,
"google-bert/bert-base-multilingual-cased": 9,
"google/byt5-small": 38,
"google/gemma-2-2b": 8,
"gpt2": 12,
"meta-llama/Llama-3.2-1B": 11,
"microsoft/Phi-3-mini-4k-instruct": 11,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 10,
"tokenmonster/englishcode-32000-consistent-v1": 16
}
|
||
I secondi che compongono un minuto sn
|
[
"50",
"60",
"100",
"30"
] | 1
|
B
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
311-3.5
|
311
|
3.5
|
sono
|
{
"CohereLabs/aya-expanse-8b": 0.889119029045105,
"Qwen/Qwen3-8B": 0.9206670522689819,
"bigscience/bloom": 0.9086928963661194,
"common-pile/comma-v0.1-1t": 0.9027715921401978,
"facebook/xglm-564M": 0.9245870113372803,
"google-bert/bert-base-multilingual-cased": 0.852838933467865,
"google/byt5-small": 0.9900485277175903,
"google/gemma-2-2b": 0.9006550312042236,
"gpt2": 0.8930383920669556,
"meta-llama/Llama-3.2-1B": 0.9155977964401245,
"microsoft/Phi-3-mini-4k-instruct": 0.9173316955566406,
"mistralai/tekken": 0.9098365306854248,
"tiktoken/gpt-4o": 0.9097579717636108,
"tokenmonster/englishcode-32000-consistent-v1": 0.959909200668335
}
|
{
"CohereLabs/aya-expanse-8b": 0.02457837015390396,
"Qwen/Qwen3-8B": 0.042501673102378845,
"bigscience/bloom": 0.019553344696760178,
"common-pile/comma-v0.1-1t": 0.016733499243855476,
"facebook/xglm-564M": 0.1390569806098938,
"google-bert/bert-base-multilingual-cased": 0.0033388654701411724,
"google/byt5-small": 0.40499550104141235,
"google/gemma-2-2b": 0.06815136969089508,
"gpt2": 0.009069850668311119,
"meta-llama/Llama-3.2-1B": 0.06639596819877625,
"microsoft/Phi-3-mini-4k-instruct": 0.047561533749103546,
"mistralai/tekken": 0.021650152280926704,
"tiktoken/gpt-4o": 0.03376496210694313,
"tokenmonster/englishcode-32000-consistent-v1": -0.007327871397137642
}
|
{
"CohereLabs/aya-expanse-8b": 8,
"Qwen/Qwen3-8B": 11,
"bigscience/bloom": 10,
"common-pile/comma-v0.1-1t": 11,
"facebook/xglm-564M": 9,
"google-bert/bert-base-multilingual-cased": 10,
"google/byt5-small": 37,
"google/gemma-2-2b": 8,
"gpt2": 11,
"meta-llama/Llama-3.2-1B": 11,
"microsoft/Phi-3-mini-4k-instruct": 11,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 10,
"tokenmonster/englishcode-32000-consistent-v1": 16
}
|
||
I secondi k compongono un minuto sono
|
[
"50",
"60",
"100",
"30"
] | 1
|
B
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
311-3.10
|
311
|
3.10
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.8960384130477905,
"Qwen/Qwen3-8B": 0.9305408000946045,
"bigscience/bloom": 0.9289969205856323,
"common-pile/comma-v0.1-1t": 0.9358091950416565,
"facebook/xglm-564M": 0.9100795388221741,
"google-bert/bert-base-multilingual-cased": 0.9062113761901855,
"google/byt5-small": 0.9854108691215515,
"google/gemma-2-2b": 0.9122244715690613,
"gpt2": 0.9248408079147339,
"meta-llama/Llama-3.2-1B": 0.9165666103363037,
"microsoft/Phi-3-mini-4k-instruct": 0.9232894778251648,
"mistralai/tekken": 0.9056615233421326,
"tiktoken/gpt-4o": 0.9238539934158325,
"tokenmonster/englishcode-32000-consistent-v1": 0.9592347145080566
}
|
{
"CohereLabs/aya-expanse-8b": 0.035248227417469025,
"Qwen/Qwen3-8B": 0.10117741674184799,
"bigscience/bloom": 0.051930442452430725,
"common-pile/comma-v0.1-1t": 0.04446348547935486,
"facebook/xglm-564M": 0.0702851191163063,
"google-bert/bert-base-multilingual-cased": -0.021673817187547684,
"google/byt5-small": 0.07845024019479752,
"google/gemma-2-2b": 0.10049274563789368,
"gpt2": 0.05740271136164665,
"meta-llama/Llama-3.2-1B": 0.016170667484402657,
"microsoft/Phi-3-mini-4k-instruct": 0.06751787662506104,
"mistralai/tekken": 0.00842366460710764,
"tiktoken/gpt-4o": 0.0656261071562767,
"tokenmonster/englishcode-32000-consistent-v1": 0.04240305349230766
}
|
{
"CohereLabs/aya-expanse-8b": 8,
"Qwen/Qwen3-8B": 11,
"bigscience/bloom": 10,
"common-pile/comma-v0.1-1t": 12,
"facebook/xglm-564M": 9,
"google-bert/bert-base-multilingual-cased": 9,
"google/byt5-small": 37,
"google/gemma-2-2b": 8,
"gpt2": 12,
"meta-llama/Llama-3.2-1B": 11,
"microsoft/Phi-3-mini-4k-instruct": 11,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 10,
"tokenmonster/englishcode-32000-consistent-v1": 16
}
|
||
Il numero d lati che ha un esagono è
|
[
"6",
"5",
"7",
"8"
] | 0
|
A
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
312-3.2
|
312
|
3.2
|
di
|
{
"CohereLabs/aya-expanse-8b": 0.9364355802536011,
"Qwen/Qwen3-8B": 0.9304925203323364,
"bigscience/bloom": 0.9238738417625427,
"common-pile/comma-v0.1-1t": 0.9282301068305969,
"facebook/xglm-564M": 0.9229193925857544,
"google-bert/bert-base-multilingual-cased": 0.9181416034698486,
"google/byt5-small": 0.9942460060119629,
"google/gemma-2-2b": 0.9441303610801697,
"gpt2": 0.9171051979064941,
"meta-llama/Llama-3.2-1B": 0.9227380752563477,
"microsoft/Phi-3-mini-4k-instruct": 0.9150516986846924,
"mistralai/tekken": 0.9307839870452881,
"tiktoken/gpt-4o": 0.9249601364135742,
"tokenmonster/englishcode-32000-consistent-v1": 0.9488548040390015
}
|
{
"CohereLabs/aya-expanse-8b": 0.1277914196252823,
"Qwen/Qwen3-8B": -0.038773685693740845,
"bigscience/bloom": -0.021927211433649063,
"common-pile/comma-v0.1-1t": -0.06762950122356415,
"facebook/xglm-564M": 0.10845727473497391,
"google-bert/bert-base-multilingual-cased": 0.10735861212015152,
"google/byt5-small": 0.5014738440513611,
"google/gemma-2-2b": 0.14631111919879913,
"gpt2": -0.03892117738723755,
"meta-llama/Llama-3.2-1B": -0.021200457587838173,
"microsoft/Phi-3-mini-4k-instruct": -0.042535509914159775,
"mistralai/tekken": 0.09141422808170319,
"tiktoken/gpt-4o": 0.04498814046382904,
"tokenmonster/englishcode-32000-consistent-v1": 0.12912601232528687
}
|
{
"CohereLabs/aya-expanse-8b": 11,
"Qwen/Qwen3-8B": 12,
"bigscience/bloom": 11,
"common-pile/comma-v0.1-1t": 15,
"facebook/xglm-564M": 10,
"google-bert/bert-base-multilingual-cased": 10,
"google/byt5-small": 37,
"google/gemma-2-2b": 11,
"gpt2": 14,
"meta-llama/Llama-3.2-1B": 12,
"microsoft/Phi-3-mini-4k-instruct": 12,
"mistralai/tekken": 11,
"tiktoken/gpt-4o": 11,
"tokenmonster/englishcode-32000-consistent-v1": 14
}
|
||
X vivere, gli esseri umani devono respirare
|
[
"idrogeno",
"ossigeno",
"elio",
"metano"
] | 1
|
B
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
322-3.1
|
322
|
3.1
|
Per
|
{
"CohereLabs/aya-expanse-8b": 0.8937414288520813,
"Qwen/Qwen3-8B": 0.9413673877716064,
"bigscience/bloom": 0.9438265562057495,
"common-pile/comma-v0.1-1t": 0.9650958776473999,
"facebook/xglm-564M": 0.9179518818855286,
"google-bert/bert-base-multilingual-cased": 0.9172888994216919,
"google/byt5-small": 0.9886741638183594,
"google/gemma-2-2b": 0.9233962893486023,
"gpt2": 0.9421610832214355,
"meta-llama/Llama-3.2-1B": 0.9381470084190369,
"microsoft/Phi-3-mini-4k-instruct": 0.9380953311920166,
"mistralai/tekken": 0.9068201184272766,
"tiktoken/gpt-4o": 0.9271345138549805,
"tokenmonster/englishcode-32000-consistent-v1": 0.958351731300354
}
|
{
"CohereLabs/aya-expanse-8b": 0.03569697588682175,
"Qwen/Qwen3-8B": 0.043544597923755646,
"bigscience/bloom": 0.1250547468662262,
"common-pile/comma-v0.1-1t": 0.039114292711019516,
"facebook/xglm-564M": 0.09100203961133957,
"google-bert/bert-base-multilingual-cased": -0.004503719508647919,
"google/byt5-small": 0.06359102576971054,
"google/gemma-2-2b": 0.08995195478200912,
"gpt2": 0.07059931010007858,
"meta-llama/Llama-3.2-1B": 0.053816698491573334,
"microsoft/Phi-3-mini-4k-instruct": 0.01675296202301979,
"mistralai/tekken": 0.03543246537446976,
"tiktoken/gpt-4o": 0.11964098364114761,
"tokenmonster/englishcode-32000-consistent-v1": 0.028944389894604683
}
|
{
"CohereLabs/aya-expanse-8b": 9,
"Qwen/Qwen3-8B": 13,
"bigscience/bloom": 13,
"common-pile/comma-v0.1-1t": 20,
"facebook/xglm-564M": 10,
"google-bert/bert-base-multilingual-cased": 11,
"google/byt5-small": 43,
"google/gemma-2-2b": 10,
"gpt2": 16,
"meta-llama/Llama-3.2-1B": 13,
"microsoft/Phi-3-mini-4k-instruct": 16,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 11,
"tokenmonster/englishcode-32000-consistent-v1": 16
}
|
||
Il 10% d 100 è
|
[
"5",
"10",
"15",
"20"
] | 1
|
B
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
323-3.2
|
323
|
3.2
|
di
|
{
"CohereLabs/aya-expanse-8b": 0.9520768523216248,
"Qwen/Qwen3-8B": 0.9458957314491272,
"bigscience/bloom": 0.806017279624939,
"common-pile/comma-v0.1-1t": 0.9204361438751221,
"facebook/xglm-564M": 0.8347979187965393,
"google-bert/bert-base-multilingual-cased": 0.879058837890625,
"google/byt5-small": 0.9854932427406311,
"google/gemma-2-2b": 0.9555849432945251,
"gpt2": 0.8641003370285034,
"meta-llama/Llama-3.2-1B": 0.8903765678405762,
"microsoft/Phi-3-mini-4k-instruct": 0.9409155249595642,
"mistralai/tekken": 0.9481838941574097,
"tiktoken/gpt-4o": 0.8849680423736572,
"tokenmonster/englishcode-32000-consistent-v1": 0.8863186836242676
}
|
{
"CohereLabs/aya-expanse-8b": 0.1277914196252823,
"Qwen/Qwen3-8B": -0.038773685693740845,
"bigscience/bloom": -0.021927211433649063,
"common-pile/comma-v0.1-1t": -0.06762950122356415,
"facebook/xglm-564M": 0.10845727473497391,
"google-bert/bert-base-multilingual-cased": 0.10735861212015152,
"google/byt5-small": 0.5014738440513611,
"google/gemma-2-2b": 0.14631111919879913,
"gpt2": -0.03892117738723755,
"meta-llama/Llama-3.2-1B": -0.021200457587838173,
"microsoft/Phi-3-mini-4k-instruct": -0.042535509914159775,
"mistralai/tekken": 0.09141422808170319,
"tiktoken/gpt-4o": 0.04498814046382904,
"tokenmonster/englishcode-32000-consistent-v1": 0.12912601232528687
}
|
{
"CohereLabs/aya-expanse-8b": 11,
"Qwen/Qwen3-8B": 11,
"bigscience/bloom": 5,
"common-pile/comma-v0.1-1t": 10,
"facebook/xglm-564M": 5,
"google-bert/bert-base-multilingual-cased": 6,
"google/byt5-small": 15,
"google/gemma-2-2b": 11,
"gpt2": 7,
"meta-llama/Llama-3.2-1B": 8,
"microsoft/Phi-3-mini-4k-instruct": 11,
"mistralai/tekken": 11,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 7
}
|
||
Il 25% d 80 è
|
[
"15",
"25",
"30",
"20"
] | 3
|
D
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
324-3.2
|
324
|
3.2
|
di
|
{
"CohereLabs/aya-expanse-8b": 0.9104667901992798,
"Qwen/Qwen3-8B": 0.9128794074058533,
"bigscience/bloom": 0.7881747484207153,
"common-pile/comma-v0.1-1t": 0.9152465462684631,
"facebook/xglm-564M": 0.8274888396263123,
"google-bert/bert-base-multilingual-cased": 0.8693957328796387,
"google/byt5-small": 0.9772714972496033,
"google/gemma-2-2b": 0.9283674955368042,
"gpt2": 0.8521519303321838,
"meta-llama/Llama-3.2-1B": 0.8834445476531982,
"microsoft/Phi-3-mini-4k-instruct": 0.8923227190971375,
"mistralai/tekken": 0.9099657535552979,
"tiktoken/gpt-4o": 0.8818855285644531,
"tokenmonster/englishcode-32000-consistent-v1": 0.8755566477775574
}
|
{
"CohereLabs/aya-expanse-8b": 0.1277914196252823,
"Qwen/Qwen3-8B": -0.038773685693740845,
"bigscience/bloom": -0.021927211433649063,
"common-pile/comma-v0.1-1t": -0.06762950122356415,
"facebook/xglm-564M": 0.10845727473497391,
"google-bert/bert-base-multilingual-cased": 0.10735861212015152,
"google/byt5-small": 0.5014738440513611,
"google/gemma-2-2b": 0.14631111919879913,
"gpt2": -0.03892117738723755,
"meta-llama/Llama-3.2-1B": -0.021200457587838173,
"microsoft/Phi-3-mini-4k-instruct": -0.042535509914159775,
"mistralai/tekken": 0.09141422808170319,
"tiktoken/gpt-4o": 0.04498814046382904,
"tokenmonster/englishcode-32000-consistent-v1": 0.12912601232528687
}
|
{
"CohereLabs/aya-expanse-8b": 10,
"Qwen/Qwen3-8B": 10,
"bigscience/bloom": 5,
"common-pile/comma-v0.1-1t": 10,
"facebook/xglm-564M": 5,
"google-bert/bert-base-multilingual-cased": 6,
"google/byt5-small": 14,
"google/gemma-2-2b": 10,
"gpt2": 7,
"meta-llama/Llama-3.2-1B": 8,
"microsoft/Phi-3-mini-4k-instruct": 10,
"mistralai/tekken": 10,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 7
}
|
||
L'intento in "A ke ora chiude il negozio?" è
|
[
"fare un acquisto",
"ottenere un'informazione",
"prenotare un appuntamento",
"presentare un reclamo"
] | 1
|
B
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
331-3.7
|
331
|
3.7
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9548805356025696,
"Qwen/Qwen3-8B": 0.9524189233779907,
"bigscience/bloom": 0.9446884393692017,
"common-pile/comma-v0.1-1t": 0.9580223560333252,
"facebook/xglm-564M": 0.9478813409805298,
"google-bert/bert-base-multilingual-cased": 0.9619719982147217,
"google/byt5-small": 0.983336329460144,
"google/gemma-2-2b": 0.9595054984092712,
"gpt2": 0.9548848271369934,
"meta-llama/Llama-3.2-1B": 0.9533548951148987,
"microsoft/Phi-3-mini-4k-instruct": 0.947738766670227,
"mistralai/tekken": 0.946022093296051,
"tiktoken/gpt-4o": 0.9507603645324707,
"tokenmonster/englishcode-32000-consistent-v1": 0.9582582712173462
}
|
{
"CohereLabs/aya-expanse-8b": 0.07124575972557068,
"Qwen/Qwen3-8B": 0.04917556792497635,
"bigscience/bloom": 0.038712747395038605,
"common-pile/comma-v0.1-1t": 0.061479900032281876,
"facebook/xglm-564M": 0.10398856550455093,
"google-bert/bert-base-multilingual-cased": 0.0657016783952713,
"google/byt5-small": 0.09882692992687225,
"google/gemma-2-2b": 0.13011936843395233,
"gpt2": 0.0447414293885231,
"meta-llama/Llama-3.2-1B": 0.06527149677276611,
"microsoft/Phi-3-mini-4k-instruct": 0.03401301056146622,
"mistralai/tekken": 0.08301310241222382,
"tiktoken/gpt-4o": 0.1011124923825264,
"tokenmonster/englishcode-32000-consistent-v1": 0.01840820163488388
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 16,
"bigscience/bloom": 14,
"common-pile/comma-v0.1-1t": 19,
"facebook/xglm-564M": 14,
"google-bert/bert-base-multilingual-cased": 16,
"google/byt5-small": 45,
"google/gemma-2-2b": 15,
"gpt2": 20,
"meta-llama/Llama-3.2-1B": 16,
"microsoft/Phi-3-mini-4k-instruct": 17,
"mistralai/tekken": 15,
"tiktoken/gpt-4o": 15,
"tokenmonster/englishcode-32000-consistent-v1": 19
}
|
||
L'intento in "A k ora chiude il negozio?" è
|
[
"fare un acquisto",
"prenotare un appuntamento",
"presentare un reclamo",
"ottenere un'informazione"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
331-3.8
|
331
|
3.8
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9488560557365417,
"Qwen/Qwen3-8B": 0.9524766206741333,
"bigscience/bloom": 0.9462546706199646,
"common-pile/comma-v0.1-1t": 0.9559975862503052,
"facebook/xglm-564M": 0.9411079287528992,
"google-bert/bert-base-multilingual-cased": 0.9574185013771057,
"google/byt5-small": 0.9798530340194702,
"google/gemma-2-2b": 0.956087589263916,
"gpt2": 0.9566252827644348,
"meta-llama/Llama-3.2-1B": 0.946645200252533,
"microsoft/Phi-3-mini-4k-instruct": 0.9453569650650024,
"mistralai/tekken": 0.9327573776245117,
"tiktoken/gpt-4o": 0.9467290043830872,
"tokenmonster/englishcode-32000-consistent-v1": 0.9572751522064209
}
|
{
"CohereLabs/aya-expanse-8b": 0.035248227417469025,
"Qwen/Qwen3-8B": 0.10117741674184799,
"bigscience/bloom": 0.051930442452430725,
"common-pile/comma-v0.1-1t": 0.04446348547935486,
"facebook/xglm-564M": 0.0702851191163063,
"google-bert/bert-base-multilingual-cased": -0.021673817187547684,
"google/byt5-small": 0.07845024019479752,
"google/gemma-2-2b": 0.10049274563789368,
"gpt2": 0.05740271136164665,
"meta-llama/Llama-3.2-1B": 0.016170667484402657,
"microsoft/Phi-3-mini-4k-instruct": 0.06751787662506104,
"mistralai/tekken": 0.00842366460710764,
"tiktoken/gpt-4o": 0.0656261071562767,
"tokenmonster/englishcode-32000-consistent-v1": 0.04240305349230766
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 16,
"bigscience/bloom": 14,
"common-pile/comma-v0.1-1t": 19,
"facebook/xglm-564M": 14,
"google-bert/bert-base-multilingual-cased": 16,
"google/byt5-small": 44,
"google/gemma-2-2b": 15,
"gpt2": 20,
"meta-llama/Llama-3.2-1B": 16,
"microsoft/Phi-3-mini-4k-instruct": 17,
"mistralai/tekken": 15,
"tiktoken/gpt-4o": 15,
"tokenmonster/englishcode-32000-consistent-v1": 19
}
|
||
Il mammifero + grande del mondo è
|
[
"la balenottera azzurra",
"l'orso",
"la giraffa",
"il delfino"
] | 0
|
A
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
332-3.2
|
332
|
3.2
|
più
|
{
"CohereLabs/aya-expanse-8b": 0.8924036622047424,
"Qwen/Qwen3-8B": 0.8983543515205383,
"bigscience/bloom": 0.8852360844612122,
"common-pile/comma-v0.1-1t": 0.9010025262832642,
"facebook/xglm-564M": 0.8935565948486328,
"google-bert/bert-base-multilingual-cased": 0.8811402320861816,
"google/byt5-small": 0.9687008857727051,
"google/gemma-2-2b": 0.9094482660293579,
"gpt2": 0.8648938536643982,
"meta-llama/Llama-3.2-1B": 0.9035158753395081,
"microsoft/Phi-3-mini-4k-instruct": 0.906665563583374,
"mistralai/tekken": 0.8903003931045532,
"tiktoken/gpt-4o": 0.9090982675552368,
"tokenmonster/englishcode-32000-consistent-v1": 0.9339672327041626
}
|
{
"CohereLabs/aya-expanse-8b": 0.08528129011392593,
"Qwen/Qwen3-8B": 0.07864779233932495,
"bigscience/bloom": 0.06795525550842285,
"common-pile/comma-v0.1-1t": 0.12643569707870483,
"facebook/xglm-564M": 0.03972551226615906,
"google-bert/bert-base-multilingual-cased": 0.0873791053891182,
"google/byt5-small": -0.012513848021626472,
"google/gemma-2-2b": 0.11864964663982391,
"gpt2": 0.008266289718449116,
"meta-llama/Llama-3.2-1B": 0.14310981333255768,
"microsoft/Phi-3-mini-4k-instruct": 0.08778500556945801,
"mistralai/tekken": 0.04943012818694115,
"tiktoken/gpt-4o": 0.08131778240203857,
"tokenmonster/englishcode-32000-consistent-v1": 0.0829562321305275
}
|
{
"CohereLabs/aya-expanse-8b": 9,
"Qwen/Qwen3-8B": 9,
"bigscience/bloom": 10,
"common-pile/comma-v0.1-1t": 16,
"facebook/xglm-564M": 9,
"google-bert/bert-base-multilingual-cased": 9,
"google/byt5-small": 34,
"google/gemma-2-2b": 9,
"gpt2": 12,
"meta-llama/Llama-3.2-1B": 9,
"microsoft/Phi-3-mini-4k-instruct": 10,
"mistralai/tekken": 9,
"tiktoken/gpt-4o": 9,
"tokenmonster/englishcode-32000-consistent-v1": 12
}
|
||
L'organo ke pompa il sangue nel corpo umano è il
|
[
"fegato",
"polmone",
"rene",
"cuore"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
337-3.6
|
337
|
3.6
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.953015923500061,
"Qwen/Qwen3-8B": 0.9636853933334351,
"bigscience/bloom": 0.9492459297180176,
"common-pile/comma-v0.1-1t": 0.9649181962013245,
"facebook/xglm-564M": 0.9541990160942078,
"google-bert/bert-base-multilingual-cased": 0.9466762542724609,
"google/byt5-small": 0.9883430004119873,
"google/gemma-2-2b": 0.9601500034332275,
"gpt2": 0.9600470066070557,
"meta-llama/Llama-3.2-1B": 0.9622837901115417,
"microsoft/Phi-3-mini-4k-instruct": 0.9512009024620056,
"mistralai/tekken": 0.9526544809341431,
"tiktoken/gpt-4o": 0.9570120573043823,
"tokenmonster/englishcode-32000-consistent-v1": 0.963679313659668
}
|
{
"CohereLabs/aya-expanse-8b": 0.07124575972557068,
"Qwen/Qwen3-8B": 0.04917556792497635,
"bigscience/bloom": 0.038712747395038605,
"common-pile/comma-v0.1-1t": 0.061479900032281876,
"facebook/xglm-564M": 0.10398856550455093,
"google-bert/bert-base-multilingual-cased": 0.0657016783952713,
"google/byt5-small": 0.09882692992687225,
"google/gemma-2-2b": 0.13011936843395233,
"gpt2": 0.0447414293885231,
"meta-llama/Llama-3.2-1B": 0.06527149677276611,
"microsoft/Phi-3-mini-4k-instruct": 0.03401301056146622,
"mistralai/tekken": 0.08301310241222382,
"tiktoken/gpt-4o": 0.1011124923825264,
"tokenmonster/englishcode-32000-consistent-v1": 0.01840820163488388
}
|
{
"CohereLabs/aya-expanse-8b": 13,
"Qwen/Qwen3-8B": 16,
"bigscience/bloom": 13,
"common-pile/comma-v0.1-1t": 20,
"facebook/xglm-564M": 13,
"google-bert/bert-base-multilingual-cased": 13,
"google/byt5-small": 49,
"google/gemma-2-2b": 13,
"gpt2": 20,
"meta-llama/Llama-3.2-1B": 16,
"microsoft/Phi-3-mini-4k-instruct": 17,
"mistralai/tekken": 13,
"tiktoken/gpt-4o": 14,
"tokenmonster/englishcode-32000-consistent-v1": 18
}
|
||
L'organo k pompa il sangue nel corpo umano è il
|
[
"fegato",
"polmone",
"cuore",
"rene"
] | 2
|
C
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
337-3.7
|
337
|
3.7
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9478777647018433,
"Qwen/Qwen3-8B": 0.9629906415939331,
"bigscience/bloom": 0.949998676776886,
"common-pile/comma-v0.1-1t": 0.963193953037262,
"facebook/xglm-564M": 0.9486895203590393,
"google-bert/bert-base-multilingual-cased": 0.9423035383224487,
"google/byt5-small": 0.9857349395751953,
"google/gemma-2-2b": 0.9568383693695068,
"gpt2": 0.9618849158287048,
"meta-llama/Llama-3.2-1B": 0.9569565057754517,
"microsoft/Phi-3-mini-4k-instruct": 0.9493803977966309,
"mistralai/tekken": 0.939483642578125,
"tiktoken/gpt-4o": 0.952803373336792,
"tokenmonster/englishcode-32000-consistent-v1": 0.9628617763519287
}
|
{
"CohereLabs/aya-expanse-8b": 0.035248227417469025,
"Qwen/Qwen3-8B": 0.10117741674184799,
"bigscience/bloom": 0.051930442452430725,
"common-pile/comma-v0.1-1t": 0.04446348547935486,
"facebook/xglm-564M": 0.0702851191163063,
"google-bert/bert-base-multilingual-cased": -0.021673817187547684,
"google/byt5-small": 0.07845024019479752,
"google/gemma-2-2b": 0.10049274563789368,
"gpt2": 0.05740271136164665,
"meta-llama/Llama-3.2-1B": 0.016170667484402657,
"microsoft/Phi-3-mini-4k-instruct": 0.06751787662506104,
"mistralai/tekken": 0.00842366460710764,
"tiktoken/gpt-4o": 0.0656261071562767,
"tokenmonster/englishcode-32000-consistent-v1": 0.04240305349230766
}
|
{
"CohereLabs/aya-expanse-8b": 13,
"Qwen/Qwen3-8B": 16,
"bigscience/bloom": 13,
"common-pile/comma-v0.1-1t": 20,
"facebook/xglm-564M": 13,
"google-bert/bert-base-multilingual-cased": 13,
"google/byt5-small": 48,
"google/gemma-2-2b": 13,
"gpt2": 20,
"meta-llama/Llama-3.2-1B": 16,
"microsoft/Phi-3-mini-4k-instruct": 17,
"mistralai/tekken": 13,
"tiktoken/gpt-4o": 14,
"tokenmonster/englishcode-32000-consistent-v1": 18
}
|
||
Il pianeta ke ha la distanza minore dal Sole è
|
[
"Venere",
"Terra",
"Marte",
"Mercurio"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
338-3.13
|
338
|
3.13
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9311029314994812,
"Qwen/Qwen3-8B": 0.9395805597305298,
"bigscience/bloom": 0.9409386515617371,
"common-pile/comma-v0.1-1t": 0.9540438652038574,
"facebook/xglm-564M": 0.9350600242614746,
"google-bert/bert-base-multilingual-cased": 0.9203010201454163,
"google/byt5-small": 0.9869223237037659,
"google/gemma-2-2b": 0.9413893818855286,
"gpt2": 0.9391340017318726,
"meta-llama/Llama-3.2-1B": 0.9361344575881958,
"microsoft/Phi-3-mini-4k-instruct": 0.9408984780311584,
"mistralai/tekken": 0.9363722801208496,
"tiktoken/gpt-4o": 0.9390668272972107,
"tokenmonster/englishcode-32000-consistent-v1": 0.9529011845588684
}
|
{
"CohereLabs/aya-expanse-8b": 0.07124575972557068,
"Qwen/Qwen3-8B": 0.04917556792497635,
"bigscience/bloom": 0.038712747395038605,
"common-pile/comma-v0.1-1t": 0.061479900032281876,
"facebook/xglm-564M": 0.10398856550455093,
"google-bert/bert-base-multilingual-cased": 0.0657016783952713,
"google/byt5-small": 0.09882692992687225,
"google/gemma-2-2b": 0.13011936843395233,
"gpt2": 0.0447414293885231,
"meta-llama/Llama-3.2-1B": 0.06527149677276611,
"microsoft/Phi-3-mini-4k-instruct": 0.03401301056146622,
"mistralai/tekken": 0.08301310241222382,
"tiktoken/gpt-4o": 0.1011124923825264,
"tokenmonster/englishcode-32000-consistent-v1": 0.01840820163488388
}
|
{
"CohereLabs/aya-expanse-8b": 10,
"Qwen/Qwen3-8B": 13,
"bigscience/bloom": 13,
"common-pile/comma-v0.1-1t": 20,
"facebook/xglm-564M": 11,
"google-bert/bert-base-multilingual-cased": 10,
"google/byt5-small": 47,
"google/gemma-2-2b": 11,
"gpt2": 15,
"meta-llama/Llama-3.2-1B": 13,
"microsoft/Phi-3-mini-4k-instruct": 14,
"mistralai/tekken": 11,
"tiktoken/gpt-4o": 12,
"tokenmonster/englishcode-32000-consistent-v1": 17
}
|
||
Il pianeta k ha la distanza minore dal Sole è
|
[
"Venere",
"Terra",
"Marte",
"Mercurio"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
338-3.14
|
338
|
3.14
|
che
|
{
"CohereLabs/aya-expanse-8b": 0.9232596158981323,
"Qwen/Qwen3-8B": 0.9387797117233276,
"bigscience/bloom": 0.942047119140625,
"common-pile/comma-v0.1-1t": 0.9513167142868042,
"facebook/xglm-564M": 0.9254839420318604,
"google-bert/bert-base-multilingual-cased": 0.9135955572128296,
"google/byt5-small": 0.9840972423553467,
"google/gemma-2-2b": 0.9358816146850586,
"gpt2": 0.941421389579773,
"meta-llama/Llama-3.2-1B": 0.9274284243583679,
"microsoft/Phi-3-mini-4k-instruct": 0.9396511912345886,
"mistralai/tekken": 0.9188168048858643,
"tiktoken/gpt-4o": 0.9330872297286987,
"tokenmonster/englishcode-32000-consistent-v1": 0.951474666595459
}
|
{
"CohereLabs/aya-expanse-8b": 0.035248227417469025,
"Qwen/Qwen3-8B": 0.10117741674184799,
"bigscience/bloom": 0.051930442452430725,
"common-pile/comma-v0.1-1t": 0.04446348547935486,
"facebook/xglm-564M": 0.0702851191163063,
"google-bert/bert-base-multilingual-cased": -0.021673817187547684,
"google/byt5-small": 0.07845024019479752,
"google/gemma-2-2b": 0.10049274563789368,
"gpt2": 0.05740271136164665,
"meta-llama/Llama-3.2-1B": 0.016170667484402657,
"microsoft/Phi-3-mini-4k-instruct": 0.06751787662506104,
"mistralai/tekken": 0.00842366460710764,
"tiktoken/gpt-4o": 0.0656261071562767,
"tokenmonster/englishcode-32000-consistent-v1": 0.04240305349230766
}
|
{
"CohereLabs/aya-expanse-8b": 10,
"Qwen/Qwen3-8B": 13,
"bigscience/bloom": 13,
"common-pile/comma-v0.1-1t": 20,
"facebook/xglm-564M": 11,
"google-bert/bert-base-multilingual-cased": 10,
"google/byt5-small": 46,
"google/gemma-2-2b": 11,
"gpt2": 15,
"meta-llama/Llama-3.2-1B": 13,
"microsoft/Phi-3-mini-4k-instruct": 14,
"mistralai/tekken": 11,
"tiktoken/gpt-4o": 12,
"tokenmonster/englishcode-32000-consistent-v1": 17
}
|
||
Il pianeta con il diametro + grande del Sistema Solare è
|
[
"Terra",
"Saturno",
"Marte",
"Giove"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
339-3.5
|
339
|
3.5
|
più
|
{
"CohereLabs/aya-expanse-8b": 0.9274422526359558,
"Qwen/Qwen3-8B": 0.9387179613113403,
"bigscience/bloom": 0.9221353530883789,
"common-pile/comma-v0.1-1t": 0.9522654414176941,
"facebook/xglm-564M": 0.9314883947372437,
"google-bert/bert-base-multilingual-cased": 0.9126060009002686,
"google/byt5-small": 0.9868512153625488,
"google/gemma-2-2b": 0.9448539614677429,
"gpt2": 0.9260600209236145,
"meta-llama/Llama-3.2-1B": 0.9436653256416321,
"microsoft/Phi-3-mini-4k-instruct": 0.9457412362098694,
"mistralai/tekken": 0.9341747164726257,
"tiktoken/gpt-4o": 0.944807767868042,
"tokenmonster/englishcode-32000-consistent-v1": 0.9659094214439392
}
|
{
"CohereLabs/aya-expanse-8b": 0.08528129011392593,
"Qwen/Qwen3-8B": 0.07864779233932495,
"bigscience/bloom": 0.06795525550842285,
"common-pile/comma-v0.1-1t": 0.12643569707870483,
"facebook/xglm-564M": 0.03972551226615906,
"google-bert/bert-base-multilingual-cased": 0.0873791053891182,
"google/byt5-small": -0.012513848021626472,
"google/gemma-2-2b": 0.11864964663982391,
"gpt2": 0.008266289718449116,
"meta-llama/Llama-3.2-1B": 0.14310981333255768,
"microsoft/Phi-3-mini-4k-instruct": 0.08778500556945801,
"mistralai/tekken": 0.04943012818694115,
"tiktoken/gpt-4o": 0.08131778240203857,
"tokenmonster/englishcode-32000-consistent-v1": 0.0829562321305275
}
|
{
"CohereLabs/aya-expanse-8b": 12,
"Qwen/Qwen3-8B": 14,
"bigscience/bloom": 14,
"common-pile/comma-v0.1-1t": 28,
"facebook/xglm-564M": 14,
"google-bert/bert-base-multilingual-cased": 12,
"google/byt5-small": 57,
"google/gemma-2-2b": 13,
"gpt2": 19,
"meta-llama/Llama-3.2-1B": 14,
"microsoft/Phi-3-mini-4k-instruct": 15,
"mistralai/tekken": 14,
"tiktoken/gpt-4o": 14,
"tokenmonster/englishcode-32000-consistent-v1": 19
}
|
||
Il meccanismo cn cui le piante creano nutrimento dalla luce del sole è la
|
[
"Respirazione",
"Traspirazione",
"Germinazione",
"Fotosintesi"
] | 3
|
D
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
340-3.9
|
340
|
3.9
|
{
"CohereLabs/aya-expanse-8b": 0.963135302066803,
"Qwen/Qwen3-8B": 0.9769824743270874,
"bigscience/bloom": 0.9720531702041626,
"common-pile/comma-v0.1-1t": 0.96943598985672,
"facebook/xglm-564M": 0.9315529465675354,
"google-bert/bert-base-multilingual-cased": 0.9593971967697144,
"google/byt5-small": 0.9985104203224182,
"google/gemma-2-2b": 0.9682530164718628,
"gpt2": 0.9600086808204651,
"meta-llama/Llama-3.2-1B": 0.9717621803283691,
"microsoft/Phi-3-mini-4k-instruct": 0.9420631527900696,
"mistralai/tekken": 0.962251603603363,
"tiktoken/gpt-4o": 0.9733248353004456,
"tokenmonster/englishcode-32000-consistent-v1": 0.9761285781860352
}
|
{
"CohereLabs/aya-expanse-8b": 0.15594418346881866,
"Qwen/Qwen3-8B": 0.12046341598033905,
"bigscience/bloom": 0.14314505457878113,
"common-pile/comma-v0.1-1t": 0.0859692394733429,
"facebook/xglm-564M": 0.07062813639640808,
"google-bert/bert-base-multilingual-cased": 0.03519526124000549,
"google/byt5-small": 0.670381486415863,
"google/gemma-2-2b": 0.11807601153850555,
"gpt2": 0.15417490899562836,
"meta-llama/Llama-3.2-1B": 0.11838929355144501,
"microsoft/Phi-3-mini-4k-instruct": -0.006341450847685337,
"mistralai/tekken": 0.13359354436397552,
"tiktoken/gpt-4o": 0.17466433346271515,
"tokenmonster/englishcode-32000-consistent-v1": -0.009682351723313332
}
|
{
"CohereLabs/aya-expanse-8b": 17,
"Qwen/Qwen3-8B": 22,
"bigscience/bloom": 20,
"common-pile/comma-v0.1-1t": 31,
"facebook/xglm-564M": 18,
"google-bert/bert-base-multilingual-cased": 19,
"google/byt5-small": 74,
"google/gemma-2-2b": 17,
"gpt2": 27,
"meta-llama/Llama-3.2-1B": 22,
"microsoft/Phi-3-mini-4k-instruct": 23,
"mistralai/tekken": 18,
"tiktoken/gpt-4o": 19,
"tokenmonster/englishcode-32000-consistent-v1": 26
}
|
|||
Le api sn famose x produrre
|
[
"latte",
"seta",
"miele",
"cera"
] | 2
|
C
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
342-3.1
|
342
|
3.1
|
sono, per
|
{
"CohereLabs/aya-expanse-8b": 0.7163447141647339,
"Qwen/Qwen3-8B": 0.7560109496116638,
"bigscience/bloom": 0.7651405334472656,
"common-pile/comma-v0.1-1t": 0.7137374877929688,
"facebook/xglm-564M": 0.7786864042282104,
"google-bert/bert-base-multilingual-cased": 0.7065253853797913,
"google/byt5-small": 0.9584067463874817,
"google/gemma-2-2b": 0.7286409735679626,
"gpt2": 0.7674142122268677,
"meta-llama/Llama-3.2-1B": 0.7783521413803101,
"microsoft/Phi-3-mini-4k-instruct": 0.7637495398521423,
"mistralai/tekken": 0.7783644199371338,
"tiktoken/gpt-4o": 0.7763932943344116,
"tokenmonster/englishcode-32000-consistent-v1": 0.8173370361328125
}
|
{
"CohereLabs/aya-expanse-8b": 0.5158410668373108,
"Qwen/Qwen3-8B": 0.4966774880886078,
"bigscience/bloom": 0.4921298921108246,
"common-pile/comma-v0.1-1t": 0.4408930242061615,
"facebook/xglm-564M": 0.5119045376777649,
"google-bert/bert-base-multilingual-cased": 0.5038468241691589,
"google/byt5-small": 0.7804622650146484,
"google/gemma-2-2b": 0.5171924233436584,
"gpt2": 0.4779168665409088,
"meta-llama/Llama-3.2-1B": 0.5238360166549683,
"microsoft/Phi-3-mini-4k-instruct": 0.5037320256233215,
"mistralai/tekken": 0.4934210479259491,
"tiktoken/gpt-4o": 0.5235404372215271,
"tokenmonster/englishcode-32000-consistent-v1": 0.543506920337677
}
|
{
"CohereLabs/aya-expanse-8b": 7,
"Qwen/Qwen3-8B": 8,
"bigscience/bloom": 8,
"common-pile/comma-v0.1-1t": 8,
"facebook/xglm-564M": 8,
"google-bert/bert-base-multilingual-cased": 8,
"google/byt5-small": 27,
"google/gemma-2-2b": 7,
"gpt2": 9,
"meta-llama/Llama-3.2-1B": 8,
"microsoft/Phi-3-mini-4k-instruct": 8,
"mistralai/tekken": 8,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 11
}
|
||
Le api sono famose x produrre
|
[
"miele",
"latte",
"seta",
"cera"
] | 0
|
A
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
342-3.2
|
342
|
3.2
|
per
|
{
"CohereLabs/aya-expanse-8b": 0.8690755367279053,
"Qwen/Qwen3-8B": 0.8860767483711243,
"bigscience/bloom": 0.8982897400856018,
"common-pile/comma-v0.1-1t": 0.8992642760276794,
"facebook/xglm-564M": 0.8839998245239258,
"google-bert/bert-base-multilingual-cased": 0.8838616609573364,
"google/byt5-small": 0.9759882688522339,
"google/gemma-2-2b": 0.8735721111297607,
"gpt2": 0.9064213037490845,
"meta-llama/Llama-3.2-1B": 0.8930012583732605,
"microsoft/Phi-3-mini-4k-instruct": 0.8872737884521484,
"mistralai/tekken": 0.892910897731781,
"tiktoken/gpt-4o": 0.8919006586074829,
"tokenmonster/englishcode-32000-consistent-v1": 0.9116339683532715
}
|
{
"CohereLabs/aya-expanse-8b": 0.0667194053530693,
"Qwen/Qwen3-8B": 0.054881129413843155,
"bigscience/bloom": 0.14576995372772217,
"common-pile/comma-v0.1-1t": 0.09467829018831253,
"facebook/xglm-564M": 0.11107033491134644,
"google-bert/bert-base-multilingual-cased": 0.0742538571357727,
"google/byt5-small": 0.024468015879392624,
"google/gemma-2-2b": 0.1181274801492691,
"gpt2": 0.11088388413190842,
"meta-llama/Llama-3.2-1B": 0.11354011297225952,
"microsoft/Phi-3-mini-4k-instruct": 0.10353539884090424,
"mistralai/tekken": 0.10444324463605881,
"tiktoken/gpt-4o": 0.13031360507011414,
"tokenmonster/englishcode-32000-consistent-v1": 0.03969891741871834
}
|
{
"CohereLabs/aya-expanse-8b": 7,
"Qwen/Qwen3-8B": 8,
"bigscience/bloom": 8,
"common-pile/comma-v0.1-1t": 9,
"facebook/xglm-564M": 8,
"google-bert/bert-base-multilingual-cased": 7,
"google/byt5-small": 29,
"google/gemma-2-2b": 7,
"gpt2": 10,
"meta-llama/Llama-3.2-1B": 8,
"microsoft/Phi-3-mini-4k-instruct": 8,
"mistralai/tekken": 8,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 11
}
|
||
Le api sn famose per produrre
|
[
"latte",
"seta",
"cera",
"miele"
] | 3
|
D
|
test
|
Abbreviations
|
Script / Orthography
|
ita_Latn
|
342-3.3
|
342
|
3.3
|
sono
|
{
"CohereLabs/aya-expanse-8b": 0.8545225262641907,
"Qwen/Qwen3-8B": 0.874309778213501,
"bigscience/bloom": 0.866213858127594,
"common-pile/comma-v0.1-1t": 0.836077094078064,
"facebook/xglm-564M": 0.8987281918525696,
"google-bert/bert-base-multilingual-cased": 0.8024898767471313,
"google/byt5-small": 0.9746745824813843,
"google/gemma-2-2b": 0.862398624420166,
"gpt2": 0.8704708814620972,
"meta-llama/Llama-3.2-1B": 0.8820863366127014,
"microsoft/Phi-3-mini-4k-instruct": 0.8782750368118286,
"mistralai/tekken": 0.8837018013000488,
"tiktoken/gpt-4o": 0.8818065524101257,
"tokenmonster/englishcode-32000-consistent-v1": 0.9185905456542969
}
|
{
"CohereLabs/aya-expanse-8b": 0.02457837015390396,
"Qwen/Qwen3-8B": 0.042501673102378845,
"bigscience/bloom": 0.019553344696760178,
"common-pile/comma-v0.1-1t": 0.016733499243855476,
"facebook/xglm-564M": 0.1390569806098938,
"google-bert/bert-base-multilingual-cased": 0.0033388654701411724,
"google/byt5-small": 0.40499550104141235,
"google/gemma-2-2b": 0.06815136969089508,
"gpt2": 0.009069850668311119,
"meta-llama/Llama-3.2-1B": 0.06639596819877625,
"microsoft/Phi-3-mini-4k-instruct": 0.047561533749103546,
"mistralai/tekken": 0.021650152280926704,
"tiktoken/gpt-4o": 0.03376496210694313,
"tokenmonster/englishcode-32000-consistent-v1": -0.007327871397137642
}
|
{
"CohereLabs/aya-expanse-8b": 7,
"Qwen/Qwen3-8B": 8,
"bigscience/bloom": 8,
"common-pile/comma-v0.1-1t": 8,
"facebook/xglm-564M": 8,
"google-bert/bert-base-multilingual-cased": 8,
"google/byt5-small": 29,
"google/gemma-2-2b": 7,
"gpt2": 9,
"meta-llama/Llama-3.2-1B": 8,
"microsoft/Phi-3-mini-4k-instruct": 8,
"mistralai/tekken": 8,
"tiktoken/gpt-4o": 8,
"tokenmonster/englishcode-32000-consistent-v1": 11
}
|
||
Quello di cui hanno bisogno le piante dall'aria x produrre cibo è
|
[
"azoto",
"idrogeno",
"anidride carbonica",
"elio"
] | 2
|
C
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
per->x
|
343-3.2
|
343
|
3.2
|
per
|
{
"CohereLabs/aya-expanse-8b": 0.9534022808074951,
"Qwen/Qwen3-8B": 0.9655131101608276,
"bigscience/bloom": 0.9685128927230835,
"common-pile/comma-v0.1-1t": 0.9764779806137085,
"facebook/xglm-564M": 0.943093478679657,
"google-bert/bert-base-multilingual-cased": 0.9478885531425476,
"google/byt5-small": 0.9922052621841431,
"google/gemma-2-2b": 0.9554425477981567,
"gpt2": 0.9726036787033081,
"meta-llama/Llama-3.2-1B": 0.9624615907669067,
"microsoft/Phi-3-mini-4k-instruct": 0.964576244354248,
"mistralai/tekken": 0.9594010710716248,
"tiktoken/gpt-4o": 0.9567535519599915,
"tokenmonster/englishcode-32000-consistent-v1": 0.9666756391525269
}
|
{
"CohereLabs/aya-expanse-8b": 0.0667194053530693,
"Qwen/Qwen3-8B": 0.054881129413843155,
"bigscience/bloom": 0.14576995372772217,
"common-pile/comma-v0.1-1t": 0.09467829018831253,
"facebook/xglm-564M": 0.11107033491134644,
"google-bert/bert-base-multilingual-cased": 0.0742538571357727,
"google/byt5-small": 0.024468015879392624,
"google/gemma-2-2b": 0.1181274801492691,
"gpt2": 0.11088388413190842,
"meta-llama/Llama-3.2-1B": 0.11354011297225952,
"microsoft/Phi-3-mini-4k-instruct": 0.10353539884090424,
"mistralai/tekken": 0.10444324463605881,
"tiktoken/gpt-4o": 0.13031360507011414,
"tokenmonster/englishcode-32000-consistent-v1": 0.03969891741871834
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 20,
"bigscience/bloom": 21,
"common-pile/comma-v0.1-1t": 28,
"facebook/xglm-564M": 15,
"google-bert/bert-base-multilingual-cased": 15,
"google/byt5-small": 66,
"google/gemma-2-2b": 15,
"gpt2": 27,
"meta-llama/Llama-3.2-1B": 20,
"microsoft/Phi-3-mini-4k-instruct": 21,
"mistralai/tekken": 18,
"tiktoken/gpt-4o": 18,
"tokenmonster/englishcode-32000-consistent-v1": 23
}
|
|
Quello d cui hanno bisogno le piante dall'aria x produrre cibo è
|
[
"azoto",
"anidride carbonica",
"idrogeno",
"elio"
] | 1
|
B
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
di->d, per->x
|
343-3.3
|
343
|
3.3
|
di, per
|
{
"CohereLabs/aya-expanse-8b": 0.9063833355903625,
"Qwen/Qwen3-8B": 0.9271215200424194,
"bigscience/bloom": 0.9343258142471313,
"common-pile/comma-v0.1-1t": 0.9487971663475037,
"facebook/xglm-564M": 0.8831492066383362,
"google-bert/bert-base-multilingual-cased": 0.8994290828704834,
"google/byt5-small": 0.9914813041687012,
"google/gemma-2-2b": 0.9122122526168823,
"gpt2": 0.9460916519165039,
"meta-llama/Llama-3.2-1B": 0.9151080846786499,
"microsoft/Phi-3-mini-4k-instruct": 0.9244072437286377,
"mistralai/tekken": 0.9196685552597046,
"tiktoken/gpt-4o": 0.903580904006958,
"tokenmonster/englishcode-32000-consistent-v1": 0.9295505881309509
}
|
{
"CohereLabs/aya-expanse-8b": 0.8480374813079834,
"Qwen/Qwen3-8B": 0.8715304732322693,
"bigscience/bloom": 0.8965203166007996,
"common-pile/comma-v0.1-1t": 0.927065372467041,
"facebook/xglm-564M": 0.8090014457702637,
"google-bert/bert-base-multilingual-cased": 0.8344502449035645,
"google/byt5-small": 0.9802043437957764,
"google/gemma-2-2b": 0.8622153997421265,
"gpt2": 0.909629762172699,
"meta-llama/Llama-3.2-1B": 0.8564296364784241,
"microsoft/Phi-3-mini-4k-instruct": 0.8667284250259399,
"mistralai/tekken": 0.8478317856788635,
"tiktoken/gpt-4o": 0.829788863658905,
"tokenmonster/englishcode-32000-consistent-v1": 0.8817609548568726
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 20,
"bigscience/bloom": 21,
"common-pile/comma-v0.1-1t": 29,
"facebook/xglm-564M": 15,
"google-bert/bert-base-multilingual-cased": 15,
"google/byt5-small": 65,
"google/gemma-2-2b": 15,
"gpt2": 27,
"meta-llama/Llama-3.2-1B": 20,
"microsoft/Phi-3-mini-4k-instruct": 21,
"mistralai/tekken": 18,
"tiktoken/gpt-4o": 18,
"tokenmonster/englishcode-32000-consistent-v1": 23
}
|
|
Quello d cui hanno bisogno le piante dall'aria per produrre cibo è
|
[
"azoto",
"idrogeno",
"elio",
"anidride carbonica"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
di -> d
|
343-3.4
|
343
|
3.4
|
di
|
{
"CohereLabs/aya-expanse-8b": 0.9550455212593079,
"Qwen/Qwen3-8B": 0.9644867777824402,
"bigscience/bloom": 0.9677246809005737,
"common-pile/comma-v0.1-1t": 0.9758893847465515,
"facebook/xglm-564M": 0.9471949934959412,
"google-bert/bert-base-multilingual-cased": 0.9506067633628845,
"google/byt5-small": 0.9984737634658813,
"google/gemma-2-2b": 0.9594575762748718,
"gpt2": 0.9733547568321228,
"meta-llama/Llama-3.2-1B": 0.9591813683509827,
"microsoft/Phi-3-mini-4k-instruct": 0.9594258069992065,
"mistralai/tekken": 0.9635310173034668,
"tiktoken/gpt-4o": 0.9542483687400818,
"tokenmonster/englishcode-32000-consistent-v1": 0.9700537919998169
}
|
{
"CohereLabs/aya-expanse-8b": 0.1277914196252823,
"Qwen/Qwen3-8B": -0.038773685693740845,
"bigscience/bloom": -0.021927211433649063,
"common-pile/comma-v0.1-1t": -0.06762950122356415,
"facebook/xglm-564M": 0.10845727473497391,
"google-bert/bert-base-multilingual-cased": 0.10735861212015152,
"google/byt5-small": 0.5014738440513611,
"google/gemma-2-2b": 0.14631111919879913,
"gpt2": -0.03892117738723755,
"meta-llama/Llama-3.2-1B": -0.021200457587838173,
"microsoft/Phi-3-mini-4k-instruct": -0.042535509914159775,
"mistralai/tekken": 0.09141422808170319,
"tiktoken/gpt-4o": 0.04498814046382904,
"tokenmonster/englishcode-32000-consistent-v1": 0.12912601232528687
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 20,
"bigscience/bloom": 21,
"common-pile/comma-v0.1-1t": 29,
"facebook/xglm-564M": 15,
"google-bert/bert-base-multilingual-cased": 15,
"google/byt5-small": 67,
"google/gemma-2-2b": 15,
"gpt2": 27,
"meta-llama/Llama-3.2-1B": 20,
"microsoft/Phi-3-mini-4k-instruct": 21,
"mistralai/tekken": 18,
"tiktoken/gpt-4o": 18,
"tokenmonster/englishcode-32000-consistent-v1": 23
}
|
|
In "Puoi prenotare il volo x Parigi?", la persona vuole
|
[
"fare acquisti",
"presentare un reclamo",
"annullare un volo",
"fare una prenotazione"
] | 3
|
D
|
test
|
Abbreviations, Phonetic spelling
|
Script / Orthography, Social Media & Informal Text
|
ita_Latn
|
344-3.1
|
344
|
3.1
|
per
|
{
"CohereLabs/aya-expanse-8b": 0.9488150477409363,
"Qwen/Qwen3-8B": 0.9544429779052734,
"bigscience/bloom": 0.9680806398391724,
"common-pile/comma-v0.1-1t": 0.9684615731239319,
"facebook/xglm-564M": 0.949338436126709,
"google-bert/bert-base-multilingual-cased": 0.9616315364837646,
"google/byt5-small": 0.9903391003608704,
"google/gemma-2-2b": 0.9565054178237915,
"gpt2": 0.9571622610092163,
"meta-llama/Llama-3.2-1B": 0.9547482132911682,
"microsoft/Phi-3-mini-4k-instruct": 0.9637066721916199,
"mistralai/tekken": 0.9452743530273438,
"tiktoken/gpt-4o": 0.9550979137420654,
"tokenmonster/englishcode-32000-consistent-v1": 0.964215099811554
}
|
{
"CohereLabs/aya-expanse-8b": 0.0667194053530693,
"Qwen/Qwen3-8B": 0.054881129413843155,
"bigscience/bloom": 0.14576995372772217,
"common-pile/comma-v0.1-1t": 0.09467829018831253,
"facebook/xglm-564M": 0.11107033491134644,
"google-bert/bert-base-multilingual-cased": 0.0742538571357727,
"google/byt5-small": 0.024468015879392624,
"google/gemma-2-2b": 0.1181274801492691,
"gpt2": 0.11088388413190842,
"meta-llama/Llama-3.2-1B": 0.11354011297225952,
"microsoft/Phi-3-mini-4k-instruct": 0.10353539884090424,
"mistralai/tekken": 0.10444324463605881,
"tiktoken/gpt-4o": 0.13031360507011414,
"tokenmonster/englishcode-32000-consistent-v1": 0.03969891741871834
}
|
{
"CohereLabs/aya-expanse-8b": 15,
"Qwen/Qwen3-8B": 18,
"bigscience/bloom": 20,
"common-pile/comma-v0.1-1t": 21,
"facebook/xglm-564M": 16,
"google-bert/bert-base-multilingual-cased": 18,
"google/byt5-small": 55,
"google/gemma-2-2b": 15,
"gpt2": 20,
"meta-llama/Llama-3.2-1B": 18,
"microsoft/Phi-3-mini-4k-instruct": 21,
"mistralai/tekken": 15,
"tiktoken/gpt-4o": 17,
"tokenmonster/englishcode-32000-consistent-v1": 22
}
|
Dataset Card for Tokenization Robustness
TokSuite Benchmark (Italian Collection)
Dataset Description
This dataset is part of TokSuite, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains Italian language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.
- Curated by: R3 Research Team
- Language(s): Italian (it)
- License: MIT License
Dataset Summary
TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Italian subset specifically measures model performance on canonical questions and various perturbations. Key Features:
- 40 canonical questions covering general knowledge, geography, science, and language understanding
- Multiple perturbation types reflecting real-world text variations in Italian
- Parallel structure with TokSuite benchmark (available in English, Turkish, Farsi, Chinese)
- Native speaker curation ensuring linguistic authenticity
Supported Tasks
- Multiple-Choice Question Answering: Text completion format with 4 answer choices
- Tokenizer Robustness Evaluation: Measuring performance degradation under various text perturbations
- Multilingual NLP Benchmarking: Evaluating language models on Italian text understanding
Languages
The dataset contains text in Italian (language code: ita_Latn / it).
Dataset Structure
Data Fields
| Field | Type | Description |
|---|---|---|
| `question` | string | The question text in Italian |
| `choices` | list[string] | 4 multiple-choice answer options |
| `answer` | int64 | Index of the correct answer |
| `answer_label` | string | Letter label of the correct answer |
| `split` | string | Dataset split identifier |
| `subcategories` | string | Perturbation category |
| `lang` | string | Language code |
| `second_lang` | string | English translation or description of the question |
| `notes` | string | Additional context about the question or perturbation |
| `id` | string | Unique question identifier |
| `set_id` | float64 | Question set grouping identifier |
| `variation_id` | float64 | Variation number within a question set |
| `vanilla_cos_sim_to_canonical` | dict[string, float] | Cosine similarity scores to canonical form (raw tokens) |
| `trimmed_cos_sim_to_canonical` | dict[string, float] | Cosine similarity scores after token normalization |
| `token_counts` | dict[string, integer] | Number of tokens produced per tokenizer |
Dataset Creation
Curation Rationale
This dataset was created to:
- Systematically evaluate how different tokenization strategies handle Italian
- Measure robustness against real-world text perturbations specific to Italian
- Support research into the impact of tokenization on language model behavior
- Provide standardized benchmarks for Italian language models
The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
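The intended analysis can be sketched as follows. This is a hypothetical illustration, not the authors' evaluation code: with near-perfect accuracy on canonical questions, the relative drop on perturbed variants isolates tokenizer sensitivity rather than question difficulty.

```python
def degradation(canonical_acc: float, perturbed_acc: float) -> float:
    """Relative accuracy drop when moving from canonical to perturbed text."""
    return (canonical_acc - perturbed_acc) / canonical_acc

# Example with made-up accuracies: a drop from 0.95 to 0.76 is a 20% relative
# degradation attributable to the perturbation.
print(round(degradation(0.95, 0.76), 3))  # -> 0.2
```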
Source Data
Data Collection and Processing
- Canonical Questions: 40 baseline questions created in English
- Translation: Native Italian speakers translated questions
- Perturbations: Each question underwent targeted perturbations designed to reflect Italian characteristics
- Validation: Model-in-the-loop process ensured high baseline accuracy
Perturbation Categories
- Canonical: The original Italian question written in standard, well-formed Italian with correct spelling, grammar, accents, capitalization, and formatting. All other perturbations are derived from this version and preserve its meaning.
- Abbreviations: Words or expressions in the canonical sentence are replaced with common Italian abbreviations (e.g., titles like `Dr.`, or shortened forms such as `ecc.` or `n.`). The semantic content remains unchanged, but surface length and token boundaries are altered.
- Capitalization: Capital letters are altered relative to the canonical form (e.g., sentence-level lowercasing, random capitalization, or improper casing of proper nouns). The lexical content is the same, but casing information is corrupted or inconsistent.
- Code / Language / Script Switching: Italian sentences contain inserted English words or phrases (often technical terms or borrowed expressions). The script remains Latin, but language identity switches mid-sentence, reflecting realistic bilingual or mixed-language usage.
- Contractions: Italian elisions and contractions are introduced or modified (e.g., `l'amico`, `dell'acqua`, `all'università`). Apostrophes merge words that are separate in canonical form, changing token segmentation while preserving meaning.
- Date Formats: Dates are rewritten using alternative Italian or international formats (e.g., numeric dates, month-name formats, different separators). The temporal meaning is preserved, but punctuation and numeric structure vary.
- Dialects: Standard Italian words or constructions are replaced with dialect-influenced variants (e.g., regional lexical or morphological forms). These versions remain interpretable to native speakers but diverge from standardized Italian orthography.
- English Keyboard: Italian text is written as if typed on an English keyboard, resulting in missing or simplified accented characters (e.g., `perche` instead of `perché`). Unicode accents are dropped or normalized, stressing tokenizer handling of diacritics.
- Grammatical Errors: The sentence includes plausible grammatical mistakes such as incorrect agreement, article misuse, or tense errors. The sentence remains understandable but violates formal Italian grammar rules.
- Keyboard Proximity Errors: Introduces typos caused by pressing adjacent keys on a keyboard, simulating realistic typing errors without altering intended meaning.
- Numerical Formats: Numbers are rewritten using different Italian-appropriate formats (e.g., thousand separators, decimal symbols, or spacing). The numeric value is preserved while its surface representation changes.
- Orthographic Errors: Spelling errors are introduced that violate standard Italian orthography (e.g., incorrect consonant doubling, wrong letter choice). These errors are visually or phonetically plausible but formally incorrect.
- Phonetic Spelling: Words are spelled according to pronunciation rather than standard orthography, often resembling informal or speech-based writing. This alters character sequences while preserving phonetic identity.
- Plausible Diacritics Errors: Introduces missing, incorrect, or misplaced diacritics (e.g., `e` vs. `è`, `perché` vs. `perche`), testing tokenizer sensitivity to accent marks that affect meaning.
- Similar Words: Canonical words are replaced with closely related or confusable alternatives (e.g., near-synonyms or minimal lexical contrasts). The sentence remains plausible and grammatical but is lexically altered.
- Spelled-Out Forms: Digits, abbreviations, or compact expressions are replaced with their fully spelled-out Italian equivalents (e.g., numerals written as words). This increases token length and changes lexical composition without changing meaning.
- Typographical Errors: General typing mistakes are introduced, such as duplicated letters, missing characters, or minor corruptions. These errors are less systematic than keyboard-proximity errors and reflect careless typing.
- Web Search Query: The question is rewritten in the style of an Italian web search query: function words may be dropped, word order simplified, and phrasing becomes keyword-like rather than sentence-like, while retaining the same informational intent.
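As a rough illustration of one category, the following sketch applies SMS-style Italian shortenings of the kind used in the Abbreviations perturbation (per -> x, sono -> sn, con -> cn). This is not the generation code used to build the dataset; the mapping is a small hand-picked sample.

```python
# Hand-picked SMS-style Italian shortenings (illustrative, not exhaustive).
ABBREVIATIONS = {"per": "x", "che": "ke", "sono": "sn", "con": "cn"}

def abbreviate(sentence: str) -> str:
    # Replace whole words only, leaving everything else untouched.
    return " ".join(ABBREVIATIONS.get(w, w) for w in sentence.split())

print(abbreviate("Le api sono famose per produrre"))
# -> "Le api sn famose x produrre"
```

Note that surface length and token boundaries change while the meaning is preserved, which is exactly what the perturbation is designed to stress.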
Who are the source data producers?
Native Italian speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
Annotations
Annotation process
Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Italian text processing.
Who are the annotators?
Native Italian speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
Personal and Sensitive Information
The dataset contains only general knowledge questions and does not include any personal or sensitive information.
Considerations for Using the Data
Social Impact of Dataset
This dataset contributes to improving language technology for Italian speakers by enabling better understanding of tokenization challenges and supporting more robust multilingual models.
Discussion of Biases
- Language variety: The dataset uses Standard Italian (Italiano standard) and may not fully represent regional or dialectal variations.
- Script focus: Only the Latin script is used; accent and keyboard-related variations are included as perturbations.
- Domain coverage: Questions focus on general knowledge and may not represent domain-specific Italian language use.
- Question simplicity: Designed for high baseline accuracy, which may not reflect real-world task complexity.
Other Known Limitations
- Relatively small dataset size (evaluation-only)
- Multiple-choice format
- Language-specific perturbations
- Results may differ at larger model scales
Additional Information
Dataset Curators
The dataset was curated by the TokSuite research team at R3.
Licensing Information
MIT license
Citation Information
If you use this dataset in your research, please cite the TokSuite paper:
@inproceedings{toksuite2026,
title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
booktitle={Preprint.},
year={2026},
arxiv={https://arxiv.org/abs/2512.20757},
url={TBD}
}
Paper: TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior
Contributions
This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior
Contact
For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.
Part of the TokSuite Project
Understanding Tokenization's Role in Language Model Behavior