gsaltintas committed (verified)
Commit 2bfccc2 · Parent: 5b443bf

Uploading tokenizer_robustness_completion_italian_abbreviations subset

README.md CHANGED
@@ -7,6 +7,138 @@ pretty_name: Tokenization Robustness
 tags:
 - multilingual
 - tokenization
+dataset_info:
+  config_name: tokenizer_robustness_completion_italian_abbreviations
+  features:
+  - name: question
+    dtype: string
+  - name: choices
+    list: string
+  - name: answer
+    dtype: int64
+  - name: answer_label
+    dtype: string
+  - name: split
+    dtype: string
+  - name: subcategories
+    dtype: string
+  - name: category
+    dtype: string
+  - name: lang
+    dtype: string
+  - name: second_lang
+    dtype: string
+  - name: notes
+    dtype: string
+  - name: id
+    dtype: string
+  - name: set_id
+    dtype: string
+  - name: variation_id
+    dtype: string
+  - name: perturbed_word
+    dtype: string
+  - name: vanilla_cos_sim_to_canonical
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: float64
+    - name: Qwen/Qwen3-8B
+      dtype: float64
+    - name: bigscience/bloom
+      dtype: float64
+    - name: common-pile/comma-v0.1-1t
+      dtype: float64
+    - name: facebook/xglm-564M
+      dtype: float64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: float64
+    - name: google/byt5-small
+      dtype: float64
+    - name: google/gemma-2-2b
+      dtype: float64
+    - name: gpt2
+      dtype: float64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: float64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: float64
+    - name: mistralai/tekken
+      dtype: float64
+    - name: tiktoken/gpt-4o
+      dtype: float64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: float64
+  - name: trimmed_cos_sim_to_canonical
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: float64
+    - name: Qwen/Qwen3-8B
+      dtype: float64
+    - name: bigscience/bloom
+      dtype: float64
+    - name: common-pile/comma-v0.1-1t
+      dtype: float64
+    - name: facebook/xglm-564M
+      dtype: float64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: float64
+    - name: google/byt5-small
+      dtype: float64
+    - name: google/gemma-2-2b
+      dtype: float64
+    - name: gpt2
+      dtype: float64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: float64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: float64
+    - name: mistralai/tekken
+      dtype: float64
+    - name: tiktoken/gpt-4o
+      dtype: float64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: float64
+  - name: token_counts
+    struct:
+    - name: CohereLabs/aya-expanse-8b
+      dtype: int64
+    - name: Qwen/Qwen3-8B
+      dtype: int64
+    - name: bigscience/bloom
+      dtype: int64
+    - name: common-pile/comma-v0.1-1t
+      dtype: int64
+    - name: facebook/xglm-564M
+      dtype: int64
+    - name: google-bert/bert-base-multilingual-cased
+      dtype: int64
+    - name: google/byt5-small
+      dtype: int64
+    - name: google/gemma-2-2b
+      dtype: int64
+    - name: gpt2
+      dtype: int64
+    - name: meta-llama/Llama-3.2-1B
+      dtype: int64
+    - name: microsoft/Phi-3-mini-4k-instruct
+      dtype: int64
+    - name: mistralai/tekken
+      dtype: int64
+    - name: tiktoken/gpt-4o
+      dtype: int64
+    - name: tokenmonster/englishcode-32000-consistent-v1
+      dtype: int64
+  splits:
+  - name: test
+    num_bytes: 15662
+    num_examples: 27
+  download_size: 34752
+  dataset_size: 15662
+configs:
+- config_name: tokenizer_robustness_completion_italian_abbreviations
+  data_files:
+  - split: test
+    path: tokenizer_robustness_completion_italian_abbreviations/test-*
 ---
 
 # Dataset Card for Tokenization Robustness
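
The `configs` block added above registers the subset under its config name, so it can be loaded directly with the `datasets` library. A minimal sketch: the repo id `gsaltintas/tokenization-robustness` is a hypothetical placeholder, while the config name, `test` split, and field names come from the metadata above.

```python
from datasets import load_dataset

# Repo id below is a hypothetical placeholder; config name and split
# are taken from the YAML metadata in this commit.
ds = load_dataset(
    "gsaltintas/tokenization-robustness",  # hypothetical repo id
    "tokenizer_robustness_completion_italian_abbreviations",
    split="test",
)

print(len(ds))  # 27 examples, per the `splits` metadata
print(ds[0]["question"], ds[0]["choices"])

# The per-tokenizer columns are structs keyed by tokenizer name:
print(ds[0]["token_counts"]["gpt2"])
print(ds[0]["vanilla_cos_sim_to_canonical"]["meta-llama/Llama-3.2-1B"])
```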
tokenizer_robustness_completion_italian_abbreviations/test-00000-of-00001.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a29a39f1aee0cc6b940797b138a0901f2e74ce6f8fa1ffafe5560b3bb7f3314b
-size 34757
+oid sha256:1c19c9eccf640cb680ee8dd3d5b89b6c9dbd115f50d966643c01de3572530ba4
+size 34752
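
The parquet change only swaps the Git LFS pointer: `oid` is the SHA-256 digest of the file content and `size` is its byte count. A minimal sketch of checking a downloaded copy against the new pointer, assuming the actual parquet (not the pointer file) has been fetched to the path shown in the diff header:

```python
import hashlib

# Expected digest and byte count from the new LFS pointer above.
EXPECTED_OID = "1c19c9eccf640cb680ee8dd3d5b89b6c9dbd115f50d966643c01de3572530ba4"
EXPECTED_SIZE = 34752

path = "tokenizer_robustness_completion_italian_abbreviations/test-00000-of-00001.parquet"
with open(path, "rb") as f:
    data = f.read()

assert len(data) == EXPECTED_SIZE, "size mismatch"
assert hashlib.sha256(data).hexdigest() == EXPECTED_OID, "oid mismatch"
print("parquet matches the LFS pointer")
```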