Datasets:

| tokens (sequence) | tags (sequence) | evaluation_predictions (sequence) |
|---|---|---|
| ["Seine", "erste", "Profistation", "war", "Lech", "Posen", "."] | [6, 6, 6, 6, 1, 4, 6] | [[2.5546875, -0.323974609375, -0.42431640625, 0.2227783203125, -0.5048828125, 0.26708984375, -0.89892578125], [9.2109375, -1.103515625, -1.7890625, -1.0537109375, -1.875, -1.029296875, -1.830078125], [8.796875, -1.3994140625, -2.2753906…] … |
| ["zum", "Koadjutorerzbischof", "von", "Armagh", "."] | [6, 2, 6, 0, 6] | [[2.291015625, -0.019744873046875, -0.0124053955078125, -0.5576171875, -1.3642578125, 0.95361328125, -0.9814453125], [9.2890625, -1.115234375, -1.0517578125, -1.3017578125, -2.078125, -1.7880859375, -1.580078125], [0.0557861328125, 4.074218…] … |
| ["Das", "war", "vor", "allem", "für", "den", "aus", "dem", "Ruhrgebiet", "nach", "Süden", "führenden", "Kohleverkehr", "vorteilhaft", "."] | [6, 6, 6, 6, 6, 6, 6, 6, 0, 6, 6, 6, 6, 6, 6] | [[5.14453125, -1.396484375, -2.642578125, 0.4287109375, -1.2392578125, 1.2001953125, -1.1611328125], [9.4375, -1.7841796875, -2.16015625, -1.01171875, -2.009765625, -0.833984375, -1.4482421875], [9.7265625, -1.9150390625, -1.9853515625, …] … |
| ["Bis", "Anfang", "der", "1990er", "Jahre", "wurden", "die", "Brigaden", "gemäß", "ihrer", "Unterstellung", "nummeriert", ",", "wobei", "auch", "hier", "das", "Folgende", "nur", "prinzipiell", "gilt", "."] | [6, 6, 6, 6, 6, 6, 6, 1, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6] | [[5.6328125, -0.5703125, -2.2890625, 0.98974609375, -1.0439453125, 0.223876953125, -2.10546875], [9.4765625, -2.095703125, -2.150390625, -1.162109375, -1.6064453125, -0.87255859375, -1.333984375], [8.984375, -2.0234375, -2.087890625, …] … |
| ["Annette", "Dasch", ",", "Sängerin"] | [2, 5, 6, 6] | [[2.921875, 1.0771484375, 0.427490234375, 0.005107879638671875, -0.69189453125, -0.91552734375, -2.4238281…] … |
| ["WEITERLEITUNG", "Freundeskreis", "Reichsführer", "SS"] | [6, 1, 4, 4] | [[2.708984375, -0.413330078125, -0.2100830078125, 0.5830078125, 0.84033203125, -0.82861328125, -1.58691406…] … |
| ["**", "'", "''", "Soissons", "''", "'"] | [6, 6, 6, 2, 6, 6] | [[0.5068359375, -0.2081298828125, -1.794921875, 0.499755859375, -0.1572265625, 2.37890625, -0.6875], [8.843…] … |
| ["Nachdem", "die", "ersten", "rund", "150", "km", "auf", "flacher", "Strecke", "absolviert", "wurden", ",", "fü…] | [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 0, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6] | [[4.21875, -1.24609375, -2.24609375, 0.102294921875, -0.94091796875, 0.83203125, -0.6552734375], [9.640625, …] … |
| ["'", "''", "Weströmisches", "Reich", "''", "'"] | [6, 6, 1, 4, 6, 6] | [[-0.5380859375, -1.4033203125, -1.736328125, 0.012786865234375, 0.489990234375, 1.4384765625, 1.578125], […] … |
| ["'", "''", "Raszien", "''", "'"] | [6, 6, 0, 6, 6] | [[0.70751953125, -0.72265625, -2.103515625, 0.29541015625, -0.6591796875, 2.3828125, -0.00469207763671875]…] … |
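The integer ids in the `tags` column can be decoded into IOB2 label names. Judging from the preview rows (e.g. "Lech Posen" tagged `1, 4` and "Annette Dasch" tagged `2, 5`), the column appears to use the alphabetically sorted PAN-X label list sketched below; this ordering is an assumption inferred from the examples, so verify it against `dataset.features["tags"]` before relying on it:

```python
# Assumed label order for the integer ids in the `tags` column
# (inferred from the preview rows; verify via dataset.features["tags"]).
TAG_NAMES = ["B-LOC", "B-ORG", "B-PER", "I-LOC", "I-ORG", "I-PER", "O"]

def decode_tags(tag_ids):
    """Turn a list of integer class ids into IOB2 tag strings."""
    return [TAG_NAMES[i] for i in tag_ids]

# First preview row: "Seine erste Profistation war Lech Posen ."
print(decode_tags([6, 6, 6, 6, 1, 4, 6]))
# ['O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'O']
```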
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
- Task: Token Classification
- Model: transformersbook/xlm-roberta-base-finetuned-panx-de
- Dataset: xtreme
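The `evaluation_predictions` column stores one raw logit vector per token, with seven scores (one per PAN-X class). A minimal sketch of turning those logits into predicted labels, assuming the id-to-label order of the fine-tuned model (with `O` first); check `model.config.id2label` before relying on this mapping:

```python
# Assumed id2label order of the fine-tuned model
# (verify against model.config.id2label).
ID2LABEL = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def logits_to_labels(prediction_rows):
    """Pick the highest-scoring class per token and map it to its tag name."""
    labels = []
    for row in prediction_rows:
        best = max(range(len(row)), key=lambda i: row[i])  # argmax without numpy
        labels.append(ID2LABEL[best])
    return labels

# First two logit vectors of the first preview row ("Seine", "erste"):
preds = [
    [2.5546875, -0.323974609375, -0.42431640625, 0.2227783203125,
     -0.5048828125, 0.26708984375, -0.89892578125],
    [9.2109375, -1.103515625, -1.7890625, -1.0537109375,
     -1.875, -1.029296875, -1.830078125],
]
print(logits_to_labels(preds))  # ['O', 'O']
```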
To run new evaluation jobs, visit Hugging Face's automatic evaluation service.
## Contributions
Thanks to @lewtun for evaluating this model.