hf-transformers-bot committed on
Commit 850cd0a · verified · 1 Parent(s): 7b74ff0

Upload 2026-03-23/runs/7092-23453925729/ci_results_run_models_gpu/model_results.json with huggingface_hub

2026-03-23/runs/7092-23453925729/ci_results_run_models_gpu/model_results.json ADDED
@@ -0,0 +1,2055 @@
+ {
+ "models_auto": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 254,
+ "skipped": 14,
+ "time_spent": [
+ 74.22,
+ 75.99
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_from_pretrained_dynamic_tokenizer",
+ "trace": "(line 163) AssertionError: ValueError not raised"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/auto/test_tokenization_auto.py::AutoTokenizerTest::test_from_pretrained_dynamic_tokenizer",
+ "trace": "(line 163) AssertionError: ValueError not raised"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795749",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795832"
+ },
+ "captured_info": {}
+ },
+ "models_bert": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 415,
+ "skipped": 193,
+ "time_spent": [
+ 144.73,
+ 146.94
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence",
+ "trace": "(line 3343) AssertionError: Tensor-likes are not close!"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/bert/test_modeling_bert.py::BertModelTest::test_flash_attn_2_inference_equivalence_right_padding",
+ "trace": "(line 3345) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795812",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795808"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795812#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795808#step:16:1"
+ }
+ },
+ "models_clip": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 1020,
+ "skipped": 572,
+ "time_spent": [
+ 159.34,
+ 156.8
+ ],
+ "error": false,
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795778",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795843"
+ },
+ "captured_info": {}
+ },
+ "models_csm": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 292,
+ "skipped": 208,
+ "time_spent": [
+ 169.98,
+ 170.6
+ ],
+ "error": false,
+ "failures": {},
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795897",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795815"
+ },
+ "captured_info": {}
+ },
+ "models_detr": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 249,
+ "skipped": 211,
+ "time_spent": [
+ 95.68,
+ 93.44
+ ],
+ "error": false,
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795777",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795906"
+ },
+ "captured_info": {}
+ },
+ "models_gemma3": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 6,
+ "multi": 6
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 716,
+ "skipped": 438,
+ "time_spent": [
+ 455.93,
+ 454.42
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export",
+ "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7f8534518b20> not registered"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default",
+ "trace": "(line 862) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_export_text_only",
+ "trace": "(line 1666) torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function scaled_dot_product_attention>(*(FakeTensor(..., size=(1, 4, 1, 256), dtype=torch.bfloat16,"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
+ "trace": "(line 577) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[267 chars]ve.\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn",
+ "trace": "(line 750) AssertionError: Lists differ: ['use[75 chars]del\\nCertainly! \\n\\nThe image shows a brown an[92 chars]and'] != ['use[75 chars]del\\nThe image shows a brown and white cow sta[106 chars]day']"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
+ "trace": "(line 693) AssertionError: Lists differ: [\"use[115 chars]image:\\n\\n**Overall Scene:**\\n\\nIt looks like [26 chars]ith\"] != [\"use[115 chars]image!\\n\\nHere's a description of the scene:\\n[17 chars]rch\"]"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_torch_export",
+ "trace": "(line 481) AssertionError: Current active mode <torch.fx.experimental.proxy_tensor.ProxyTorchDispatchMode object at 0x7f48044a3820> not registered"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_dynamic_sliding_window_is_default",
+ "trace": "(line 862) AssertionError: 'DynamicSlidingWindowLayer' unexpectedly found in 'DynamicCache(layers=[DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer, DynamicLayer, DynamicSlidingWindowLayer, DynamicSlidingWindowLayer])'"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_export_text_only",
+ "trace": "(line 1666) torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function scaled_dot_product_attention>(*(FakeTensor(..., size=(1, 4, 1, 256), dtype=torch.bfloat16,"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_crops",
+ "trace": "(line 577) AssertionError: Lists differ: [\"user\\nYou are a helpful assistant.\\n\\nHe[267 chars]ve.\"] != ['user\\nYou are a helpful assistant.\\n\\nHe[268 chars]the']"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_flash_attn",
+ "trace": "(line 750) AssertionError: Lists differ: ['use[75 chars]del\\nCertainly! \\n\\nThe image shows a brown an[92 chars]and'] != ['use[75 chars]del\\nThe image shows a brown and white cow sta[106 chars]day']"
+ },
+ {
+ "line": "tests/models/gemma3/test_modeling_gemma3.py::Gemma3IntegrationTest::test_model_4b_multiimage",
+ "trace": "(line 693) AssertionError: Lists differ: [\"use[115 chars]image:\\n\\n**Overall Scene:**\\n\\nIt looks like [26 chars]ith\"] != [\"use[115 chars]image!\\n\\nHere's a description of the scene:\\n[17 chars]rch\"]"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795835",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795922"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795835#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795922#step:16:1"
+ }
+ },
+ "models_gemma3n": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 3,
+ "multi": 5
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 664,
+ "skipped": 666,
+ "time_spent": [
+ 634.01,
+ 638.3
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence",
+ "trace": "(line 632) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence",
+ "trace": "(line 3341) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding",
+ "trace": "(line 3345) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_model_parallelism",
+ "trace": "(line 1962) AttributeError: 'Gemma3nModel' object has no attribute 'hf_device_map'"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nVision2TextModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_equivalence",
+ "trace": "(line 632) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence",
+ "trace": "(line 3341) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/gemma3n/test_modeling_gemma3n.py::Gemma3nTextModelTest::test_flash_attn_2_inference_equivalence_right_padding",
+ "trace": "(line 3341) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795766",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795814"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795766#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795814#step:16:1"
+ }
+ },
+ "models_got_ocr2": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 325,
+ "skipped": 317,
+ "time_spent": [
+ 185.58,
+ 183.65
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format",
+ "trace": "(line 210) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/got_ocr2/test_modeling_got_ocr2.py::GotOcr2IntegrationTest::test_small_model_integration_test_got_ocr_format",
+ "trace": "(line 210) AssertionError: 'R\\\\&D' != '\\\\title{\\nR'"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795872",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795790"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795872#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795790#step:16:1"
+ }
+ },
+ "models_gpt2": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 439,
+ "skipped": 213,
+ "time_spent": [
+ 149.02,
+ 147.23
+ ],
+ "error": false,
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795910",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795876"
+ },
+ "captured_info": {}
+ },
+ "models_internvl": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 444,
+ "skipped": 211,
+ "time_spent": [
+ 239.25,
+ 236.06
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/internvl/test_modeling_internvl.py::InternVLModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795916",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795861"
+ },
+ "captured_info": {}
+ },
+ "models_llama": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 455,
+ "skipped": 179,
+ "time_spent": [
+ 272.98,
+ 284.98
+ ],
+ "error": false,
+ "failures": {},
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795857",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795979"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795857#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795979#step:16:1"
+ }
+ },
+ "models_llava": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 3,
+ "multi": 3
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 431,
+ "skipped": 229,
+ "time_spent": [
+ 274.05,
+ 287.85
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral",
+ "trace": "(line 812) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 10.69 MiB is free. Process 36357 has 22.29 GiB memory in use. Of the allocated memory 21.77 GiB is allocated by PyTorch, and 14.88 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit",
+ "trace": "(line 4827) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.77 GiB. GPU 0 has a total capacity of 22.30 GiB of which 704.00 KiB is free. Process 36357 has 22.29 GiB memory in use. Of the allocated memory 21.78 GiB is allocated by PyTorch, and 14.86 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched",
+ "trace": "(line 724) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral",
+ "trace": "(line 812) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 140.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 86.69 MiB is free. Process 31013 has 22.21 GiB memory in use. Of the allocated memory 21.81 GiB is allocated by PyTorch, and 13.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_4bit",
+ "trace": "(line 4827) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.77 GiB. GPU 0 has a total capacity of 22.30 GiB of which 36.69 MiB is free. Process 31013 has 22.26 GiB memory in use. Of the allocated memory 21.86 GiB is allocated by PyTorch, and 12.99 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_pixtral_batched",
+ "trace": "(line 724) AssertionError: Lists differ: ['Wha[97 chars]mage?A narrow dirt path is surrounded by grass[74 chars]ue.'] != ['Wha[97 chars]mage?The image depicts a narrow, winding dirt [175 chars]ere']"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795809",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795902"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795809#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795902#step:16:1"
+ }
+ },
+ "models_mistral3": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 2,
+ "multi": 2
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 355,
+ "skipped": 243,
+ "time_spent": [
+ 642.56,
+ 682.63
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate",
+ "trace": "(line 362) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'"
+ },
+ {
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image",
+ "trace": "(line 438) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\""
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate",
+ "trace": "(line 362) AssertionError: ' to write a short story based on this ima[70 chars]e pl' != 'Calm waters reflect\\nWooden path to dista[26 chars]oods'"
+ },
+ {
+ "line": "tests/models/mistral3/test_modeling_mistral3.py::Mistral3IntegrationTest::test_mistral3_integration_batched_generate_multi_image",
+ "trace": "(line 438) AssertionError: ' to write a short story based on this im[81 chars]ched' != \"Calm waters reflect\\nWooden path to dist[29 chars]hold\""
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795705",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796020"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795705#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796020#step:16:1"
+ }
+ },
+ "models_modernbert": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 236,
+ "skipped": 162,
+ "time_spent": [
+ 104.55,
+ 103.15
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2",
+ "trace": "(line 437) AssertionError: Tensor-likes are not close!"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/modernbert/test_modeling_modernbert.py::ModernBertModelIntegrationTest::test_inference_masked_lm_flash_attention_2",
+ "trace": "(line 437) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795733",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795969"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795733#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795969#step:16:1"
+ }
+ },
+ "models_pi0": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 4,
+ "multi": 4
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 214,
+ "skipped": 186,
+ "time_spent": [
+ 143.33,
+ 145.79
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_flash_attn_2_inference_equivalence",
+ "trace": "(line 128) AttributeError: 'NoneType' object has no attribute 'shape'"
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_flash_attn_2_inference_equivalence_right_padding",
+ "trace": "(line 128) AttributeError: 'NoneType' object has no attribute 'shape'"
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_sdpa_can_dispatch_on_flash",
+ "trace": "(line 92) RuntimeError: No available kernel. Aborting execution."
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero",
+ "trace": "(line 769) torch.OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0."
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_flash_attn_2_inference_equivalence",
+ "trace": "(line 128) AttributeError: 'NoneType' object has no attribute 'shape'"
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_flash_attn_2_inference_equivalence_right_padding",
+ "trace": "(line 128) AttributeError: 'NoneType' object has no attribute 'shape'"
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ForConditionalGenerationModelTest::test_sdpa_can_dispatch_on_flash",
+ "trace": "(line 92) RuntimeError: No available kernel. Aborting execution."
+ },
+ {
+ "line": "tests/models/pi0/test_modeling_pi0.py::PI0ModelIntegrationTest::test_train_pi0_base_libero",
+ "trace": "(line 193) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 18.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 2.69 MiB is free. Process 30268 has 22.29 GiB memory in use. Of the allocated memory 21.50 GiB is allocated by PyTorch, and 478.93 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795700",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795948"
+ },
+ "captured_info": {}
+ },
+ "models_qwen2": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 3,
+ "multi": 3
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 445,
+ "skipped": 177,
+ "time_spent": [
+ 222.89,
+ 224.22
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
+ "trace": "(line 281) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']"
+ },
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_model_450m_logits",
+ "trace": "(line 88) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_speculative_generation",
+ "trace": "(line 201) AttributeError: 'str' object has no attribute 'get_expectation'"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache",
+ "trace": "(line 281) AssertionError: Lists differ: ['My [35 chars], organic, gluten free, vegan, and free from preservatives. I'] != ['My [35 chars], organic, gluten free, vegan, and vegetarian. I love to use']"
+ },
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_model_450m_logits",
+ "trace": "(line 88) AssertionError: Tensor-likes are not close!"
+ },
+ {
+ "line": "tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_speculative_generation",
+ "trace": "(line 201) AttributeError: 'str' object has no attribute 'get_expectation'"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795925",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795752"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795925#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795752#step:16:1"
+ }
+ },
+ "models_qwen2_5_omni": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 2,
+ "multi": 3
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 358,
+ "skipped": 233,
+ "time_spent": [
+ 226.28,
+ 184.42
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test",
+ "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\""
+ },
+ {
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
+ "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniThinkerForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ },
+ {
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test",
+ "trace": "(line 692) AssertionError: \"syst[108 chars]d is glass shattering, and the dog is a Labrador Retriever.\" != \"syst[108 chars]d is a glass shattering. The dog in the pictur[22 chars]ver.\""
+ },
+ {
+ "line": "tests/models/qwen2_5_omni/test_modeling_qwen2_5_omni.py::Qwen2_5OmniModelIntegrationTest::test_small_model_integration_test_batch",
+ "trace": "(line 734) AssertionError: Lists differ: [\"sys[109 chars]d is glass shattering, and the dog is a Labrad[185 chars]er.\"] != [\"sys[109 chars]d is a glass shattering. The dog in the pictur[211 chars]er.\"]"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795996",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795802"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795996#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795802#step:16:1"
+ }
+ },
+ "models_qwen2_5_vl": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 2,
+ "multi": 2
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 391,
+ "skipped": 119,
+ "time_spent": [
+ 229.44,
+ 229.08
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2",
+ "trace": "(line 709) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]"
+ },
+ {
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_with_video",
+ "trace": "(line 764) AssertionError: Lists differ: ['sys[184 chars] The individual appears to be practicing or warming up,'] != ['sys[184 chars] The individual is wearing athletic attire, including a white']"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_batch_wo_image_flashatt2",
+ "trace": "(line 709) AssertionError: Lists differ: ['sys[216 chars]in', 'system\\nYou are a helpful assistant.\\nus[166 chars]and'] != ['sys[216 chars]in', \"system\\nYou are a helpful assistant.\\nus[162 chars]ing\"]"
+ },
+ {
+ "line": "tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py::Qwen2_5_VLIntegrationTest::test_small_model_integration_test_with_video",
+ "trace": "(line 764) AssertionError: Lists differ: ['sys[184 chars] The individual appears to be practicing or warming up,'] != ['sys[184 chars] The individual is wearing athletic attire, including a white']"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796067",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795735"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796067#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795735#step:16:1"
+ }
+ },
+ "models_qwen2_audio": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 318,
+ "skipped": 273,
+ "time_spent": [
+ 126.99,
+ 134.43
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/qwen2_audio/test_modeling_qwen2_audio.py::Qwen2AudioForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796009",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795727"
+ },
+ "captured_info": {}
+ },
+ "models_smolvlm": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 3
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 661,
+ "skipped": 307,
+ "time_spent": [
+ 118.94,
+ 122.06
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMForConditionalGenerationIntegrationTest::test_integration_test_video",
+ "trace": "(line 579) AssertionError: 'User[310 chars]ideo depicts a step-by-step process of creatin[43 chars]work' != 'User[310 chars]ideo showcases a large language model, specifi[56 chars] and'"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ },
+ {
+ "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMForConditionalGenerationModelTest::test_multi_gpu_data_parallel_forward",
+ "trace": "(line 769) StopIteration: Caught StopIteration in replica 1 on device 1."
+ },
+ {
+ "line": "tests/models/smolvlm/test_modeling_smolvlm.py::SmolVLMForConditionalGenerationIntegrationTest::test_integration_test_video",
+ "trace": "(line 579) AssertionError: 'User[310 chars]ideo depicts a step-by-step process of creatin[43 chars]work' != 'User[310 chars]ideo showcases a large language model, specifi[56 chars] and'"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795953",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795737"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795953#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795737#step:16:1"
+ }
+ },
+ "models_t5": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 3,
+ "multi": 3
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 513,
+ "skipped": 505,
+ "time_spent": [
+ 149.18,
+ 146.37
+ ],
+ "error": false,
+ "failures": {
+ "single": [
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_compile_static_cache",
+ "trace": "(line 1432) AssertionError: Lists differ: ['the[91 chars]rames . the laws of physics are the same for a[65 chars]t .'] != ['the[91 chars]rames. the laws of physics are the same for al[62 chars]nt.']"
+ },
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_byt5_integration_test",
+ "trace": "(line 1091) AssertionError: Scalars are not close!"
+ },
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_v1_1_integration_test",
+ "trace": "(line 1062) AssertionError: Scalars are not close!"
+ }
+ ],
+ "multi": [
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_compile_static_cache",
+ "trace": "(line 1432) AssertionError: Lists differ: ['the[91 chars]rames . the laws of physics are the same for a[65 chars]t .'] != ['the[91 chars]rames. the laws of physics are the same for al[62 chars]nt.']"
+ },
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_byt5_integration_test",
+ "trace": "(line 1091) AssertionError: Scalars are not close!"
+ },
+ {
+ "line": "tests/models/t5/test_modeling_t5.py::T5ModelIntegrationTests::test_small_v1_1_integration_test",
+ "trace": "(line 1062) AssertionError: Scalars are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796019",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795820"
+ },
+ "captured_info": {
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796019#step:16:1",
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795820#step:16:1"
+ }
+ },
+ "models_table_transformer": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 1,
+ "multi": 1
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Quantization": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Unclassified": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ }
+ },
+ "errors": 0,
+ "success": 154,
+ "skipped": 238,
+ "time_spent": [
+ 51.0,
+ 48.76
+ ],
+ "error": false,
+ "failures": {
+ "multi": [
+ {
+ "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection",
+ "trace": "(line 554) AssertionError: Tensor-likes are not close!"
+ }
+ ],
+ "single": [
+ {
+ "line": "tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelIntegrationTests::test_table_detection",
+ "trace": "(line 554) AssertionError: Tensor-likes are not close!"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795717",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795943"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795717#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795943#step:16:1"
+ }
+ },
+ "models_vit": {
+ "failed": {
+ "PyTorch": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Tokenizers": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Pipelines": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Trainer": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "ONNX": {
+ "unclassified": 0,
+ "single": 0,
+ "multi": 0
+ },
+ "Auto": {
+ "unclassified": 0,
+ "single": 0,
1731
+ "multi": 0
1732
+ },
1733
+ "Quantization": {
1734
+ "unclassified": 0,
1735
+ "single": 0,
1736
+ "multi": 0
1737
+ },
1738
+ "Unclassified": {
1739
+ "unclassified": 0,
1740
+ "single": 0,
1741
+ "multi": 0
1742
+ }
1743
+ },
1744
+ "errors": 0,
1745
+ "success": 259,
1746
+ "skipped": 173,
1747
+ "time_spent": [
1748
+ 53.87,
1749
+ 54.14
1750
+ ],
1751
+ "error": false,
1752
+ "failures": {},
1753
+ "job_link": {
1754
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795792",
1755
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238796002"
1756
+ },
1757
+ "captured_info": {}
1758
+ },
1759
+ "models_wav2vec2": {
1760
+ "failed": {
1761
+ "PyTorch": {
1762
+ "unclassified": 0,
1763
+ "single": 0,
1764
+ "multi": 0
1765
+ },
1766
+ "Tokenizers": {
1767
+ "unclassified": 0,
1768
+ "single": 0,
1769
+ "multi": 0
1770
+ },
1771
+ "Pipelines": {
1772
+ "unclassified": 0,
1773
+ "single": 0,
1774
+ "multi": 0
1775
+ },
1776
+ "Trainer": {
1777
+ "unclassified": 0,
1778
+ "single": 0,
1779
+ "multi": 0
1780
+ },
1781
+ "ONNX": {
1782
+ "unclassified": 0,
1783
+ "single": 0,
1784
+ "multi": 0
1785
+ },
1786
+ "Auto": {
1787
+ "unclassified": 0,
1788
+ "single": 0,
1789
+ "multi": 0
1790
+ },
1791
+ "Quantization": {
1792
+ "unclassified": 0,
1793
+ "single": 0,
1794
+ "multi": 0
1795
+ },
1796
+ "Unclassified": {
1797
+ "unclassified": 0,
1798
+ "single": 0,
1799
+ "multi": 0
1800
+ }
1801
+ },
1802
+ "errors": 0,
1803
+ "success": 688,
1804
+ "skipped": 384,
1805
+ "time_spent": [
1806
+ 350.71,
1807
+ 372.29
1808
+ ],
1809
+ "error": false,
1810
+ "failures": {},
1811
+ "job_link": {
1812
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795921",
1813
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795821"
1814
+ },
1815
+ "captured_info": {}
1816
+ },
1817
+ "models_whisper": {
1818
+ "failed": {
1819
+ "PyTorch": {
1820
+ "unclassified": 0,
1821
+ "single": 21,
1822
+ "multi": 22
1823
+ },
1824
+ "Tokenizers": {
1825
+ "unclassified": 0,
1826
+ "single": 0,
1827
+ "multi": 0
1828
+ },
1829
+ "Pipelines": {
1830
+ "unclassified": 0,
1831
+ "single": 0,
1832
+ "multi": 0
1833
+ },
1834
+ "Trainer": {
1835
+ "unclassified": 0,
1836
+ "single": 0,
1837
+ "multi": 0
1838
+ },
1839
+ "ONNX": {
1840
+ "unclassified": 0,
1841
+ "single": 0,
1842
+ "multi": 0
1843
+ },
1844
+ "Auto": {
1845
+ "unclassified": 0,
1846
+ "single": 0,
1847
+ "multi": 0
1848
+ },
1849
+ "Quantization": {
1850
+ "unclassified": 0,
1851
+ "single": 0,
1852
+ "multi": 0
1853
+ },
1854
+ "Unclassified": {
1855
+ "unclassified": 0,
1856
+ "single": 0,
1857
+ "multi": 0
1858
+ }
1859
+ },
1860
+ "errors": 0,
1861
+ "success": 1056,
1862
+ "skipped": 415,
1863
+ "time_spent": [
1864
+ 576.37,
1865
+ 598.22
1866
+ ],
1867
+ "error": false,
1868
+ "failures": {
1869
+ "single": [
1870
+ {
1871
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_distil_token_timestamp_generation",
1872
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1873
+ },
1874
+ {
1875
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation",
1876
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1877
+ },
1878
+ {
1879
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation",
1880
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1881
+ },
1882
+ {
1883
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual",
1884
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1885
+ },
1886
+ {
1887
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_timestamp_generation",
1888
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1889
+ },
1890
+ {
1891
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1892
+ "trace": "(line 1882) KeyError: 0"
1893
+ },
1894
+ {
1895
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_token_timestamp_generation",
1896
+ "trace": "(line 2023) AssertionError: Tensor-likes are not close!"
1897
+ },
1898
+ {
1899
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_distil",
1900
+ "trace": "(line 326) UnboundLocalError: local variable 'is_updated' referenced before assignment"
1901
+ },
1902
+ {
1903
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_non_distil",
1904
+ "trace": "(line 2390) AssertionError: Lists differ: [' Mr[35 chars]dle classes and we are glad to welcome his gospel. Thank you.'] != [' Mr[35 chars]dle classes and we are glad to welcome his gospel.']"
1905
+ },
1906
+ {
1907
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_batched_generation",
1908
+ "trace": "(line 1541) AssertionError: The values for attribute 'shape' do not match: torch.Size([4, 18]) != torch.Size([4, 20])."
1909
+ },
1910
+ {
1911
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_generation",
1912
+ "trace": "(line 1383) AssertionError: ' Mr.[15 chars] apostle of the middle classes, and we are glad to' != ' Mr.[15 chars] apostle of the middle classes, and we are glad to welcome his'"
1913
+ },
1914
+ {
1915
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_generation",
1916
+ "trace": "(line 1399) AssertionError: ' Mr.[21 chars]le of the middle classes and we are glad' != ' Mr.[21 chars]le of the middle classes and we are glad to welcome his gospel'"
1917
+ },
1918
+ {
1919
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
1920
+ "trace": "(line 1698) KeyError: 0"
1921
+ },
1922
+ {
1923
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_specaugment_librispeech",
1924
+ "trace": "(line 2137) AssertionError: Tensor-likes are not close!"
1925
+ },
1926
+ {
1927
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation_long_form",
1928
+ "trace": "(line 3098) RuntimeError: The size of tensor a (352) must match the size of tensor b (354) at non-singleton dimension 1"
1929
+ },
1930
+ {
1931
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_timestamp_generation",
1932
+ "trace": "(line 4091) IndexError: list index out of range"
1933
+ },
1934
+ {
1935
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
1936
+ "trace": "(line 2787) AssertionError: Lists differ: [\" Fo[272 chars]ting of classics, Sicilian, nade door variatio[8147 chars]le!'] != [\" Fo[272 chars]ting a classic Sicilian, nade door variation o[8150 chars]le!']"
1937
+ },
1938
+ {
1939
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard_prev_cond",
1940
+ "trace": "(line 2841) AssertionError: Lists differ: [\" Fo[425 chars]a fischer shows in lip nitskey attack the fisc[5577 chars]ty.\"] != [\" Fo[425 chars]a fisher shows in lip-nitsky attack that culmi[7900 chars]le!\"]"
1941
+ },
1942
+ {
1943
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_no_speech_detection",
1944
+ "trace": "(line 2947) AssertionError: Lists differ: [\" Fo[435 chars]sting And so so so so so so so so so so so so [7329 chars]our\"] != [\" Fo[435 chars]sting\", ' Ladies and gentlemen, you know, I sp[1433 chars]es.\"]"
1945
+ },
1946
+ {
1947
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch",
1948
+ "trace": "(line 294) TypeError: '>=' not supported between instances of 'list' and 'int'"
1949
+ },
1950
+ {
1951
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
1952
+ "trace": "(line 2556) AssertionError: Lists differ: [\" Fo[268 chars]ating, so soft, it would make JD power and her[196 chars]ke.\"] != [\" Fo[268 chars]ating so soft, it would make JD power and her [195 chars]ke.\"]"
1953
+ }
1954
+ ],
1955
+ "multi": [
1956
+ {
1957
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_distil_token_timestamp_generation",
1958
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1959
+ },
1960
+ {
1961
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_batched_generation",
1962
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1963
+ },
1964
+ {
1965
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation",
1966
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1967
+ },
1968
+ {
1969
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_generation_multilingual",
1970
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1971
+ },
1972
+ {
1973
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_large_timestamp_generation",
1974
+ "trace": "(line 366) RuntimeError: Input type (float) and bias type (c10::Half) should be the same"
1975
+ },
1976
+ {
1977
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_longform_timestamps_generation",
1978
+ "trace": "(line 1882) KeyError: 0"
1979
+ },
1980
+ {
1981
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_small_token_timestamp_generation",
1982
+ "trace": "(line 2023) AssertionError: Tensor-likes are not close!"
1983
+ },
1984
+ {
1985
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_distil",
1986
+ "trace": "(line 326) UnboundLocalError: local variable 'is_updated' referenced before assignment"
1987
+ },
1988
+ {
1989
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_speculative_decoding_non_distil",
1990
+ "trace": "(line 2390) AssertionError: Lists differ: [' Mr[35 chars]dle classes and we are glad to welcome his gospel. Thank you.'] != [' Mr[35 chars]dle classes and we are glad to welcome his gospel.']"
1991
+ },
1992
+ {
1993
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_batched_generation",
1994
+ "trace": "(line 1541) AssertionError: The values for attribute 'shape' do not match: torch.Size([4, 18]) != torch.Size([4, 20])."
1995
+ },
1996
+ {
1997
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_en_generation",
1998
+ "trace": "(line 1383) AssertionError: ' Mr.[15 chars] apostle of the middle classes, and we are glad to' != ' Mr.[15 chars] apostle of the middle classes, and we are glad to welcome his'"
1999
+ },
2000
+ {
2001
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_generation",
2002
+ "trace": "(line 1399) AssertionError: ' Mr.[21 chars]le of the middle classes and we are glad' != ' Mr.[21 chars]le of the middle classes and we are glad to welcome his gospel'"
2003
+ },
2004
+ {
2005
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_longform_timestamps_generation",
2006
+ "trace": "(line 1698) KeyError: 0"
2007
+ },
2008
+ {
2009
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_specaugment_librispeech",
2010
+ "trace": "(line 2137) AssertionError: Tensor-likes are not close!"
2011
+ },
2012
+ {
2013
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_static_generation_long_form",
2014
+ "trace": "(line 3098) RuntimeError: The size of tensor a (352) must match the size of tensor b (354) at non-singleton dimension 1"
2015
+ },
2016
+ {
2017
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_tiny_timestamp_generation",
2018
+ "trace": "(line 4091) IndexError: list index out of range"
2019
+ },
2020
+ {
2021
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard",
2022
+ "trace": "(line 2787) AssertionError: Lists differ: [\" Fo[272 chars]ting of classics, Sicilian, nade door variatio[8147 chars]le!'] != [\" Fo[272 chars]ting a classic Sicilian, nade door variation o[8150 chars]le!']"
2023
+ },
2024
+ {
2025
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_multi_batch_hard_prev_cond",
2026
+ "trace": "(line 2841) AssertionError: Lists differ: [\" Fo[425 chars]a fischer shows in lip nitskey attack the fisc[5577 chars]ty.\"] != [\" Fo[425 chars]a fisher shows in lip-nitsky attack that culmi[7900 chars]le!\"]"
2027
+ },
2028
+ {
2029
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_no_speech_detection",
2030
+ "trace": "(line 2947) AssertionError: Lists differ: [\" Fo[435 chars]sting And so so so so so so so so so so so so [7329 chars]our\"] != [\" Fo[435 chars]sting\", ' Ladies and gentlemen, you know, I sp[1433 chars]es.\"]"
2031
+ },
2032
+ {
2033
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_longform_single_batch",
2034
+ "trace": "(line 294) TypeError: '>=' not supported between instances of 'list' and 'int'"
2035
+ },
2036
+ {
2037
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperModelIntegrationTests::test_whisper_shortform_single_batch_prev_cond",
2038
+ "trace": "(line 2556) AssertionError: Lists differ: [\" Fo[268 chars]ating, so soft, it would make JD power and her[196 chars]ke.\"] != [\" Fo[268 chars]ating so soft, it would make JD power and her [195 chars]ke.\"]"
2039
+ },
2040
+ {
2041
+ "line": "tests/models/whisper/test_modeling_whisper.py::WhisperEncoderModelTest::test_model_parallelism",
2042
+ "trace": "(line 625) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!"
2043
+ }
2044
+ ]
2045
+ },
2046
+ "job_link": {
2047
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795983",
2048
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795751"
2049
+ },
2050
+ "captured_info": {
2051
+ "single": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795983#step:16:1",
2052
+ "multi": "https://github.com/huggingface/transformers/actions/runs/23453925729/job/68238795751#step:16:1"
2053
+ }
2054
+ }
2055
+ }