hf-transformers-bot committed
Commit da7dd95 · verified · 1 parent: c4c4414

Upload 2026-01-13/runs/26695-20960410794/ci_results_run_models_gpu/model_results.json with huggingface_hub

2026-01-13/runs/26695-20960410794/ci_results_run_models_gpu/model_results.json ADDED
@@ -0,0 +1,660 @@
{
    "models_gpt_oss": {
        "failed": {
            "PyTorch": {
                "unclassified": 0,
                "single": 74,
                "multi": 74
            },
            "Tokenizers": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "Pipelines": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "Trainer": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "ONNX": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "Auto": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "Quantization": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            },
            "Unclassified": {
                "unclassified": 0,
                "single": 0,
                "multi": 0
            }
        },
        "errors": 0,
        "success": 195,
        "skipped": 317,
        "time_spent": [
            750.91,
            3883.3
        ],
        "error": false,
        "failures": {
            "multi": [
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_matches_original_120b",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_matches_original_20b",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_00",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_01",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_02",
                    "trace": "(line 302) ImportError: `kernels` is either not installed or uses an incompatible version. Please install the latest version with `pip install -U kernels`."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_03",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_04",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 16.54 GiB is allocated by PyTorch, and 4.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_05",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 1 has a total capacity of 22.30 GiB of which 412.69 MiB is free. Process 56867 has 21.89 GiB memory in use. Of the allocated memory 15.54 GiB is allocated by PyTorch, and 5.94 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_06",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 18.52 GiB is allocated by PyTorch, and 2.97 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_07",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_08",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 1 has a total capacity of 22.30 GiB of which 412.69 MiB is free. Process 56867 has 21.89 GiB memory in use. Of the allocated memory 15.54 GiB is allocated by PyTorch, and 5.94 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_09",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_10",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 18.52 GiB is allocated by PyTorch, and 2.97 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_11",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_12",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 18.52 GiB is allocated by PyTorch, and 2.97 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_13",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_14",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 1 has a total capacity of 22.30 GiB of which 412.69 MiB is free. Process 56867 has 21.89 GiB memory in use. Of the allocated memory 15.54 GiB is allocated by PyTorch, and 5.94 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_15",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_16",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_17",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_18",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_19",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_20",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_21",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_22",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_23",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_24",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_25",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_26",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_27",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_28",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_29",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_30",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_31",
                    "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_00",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpzwzlza21_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_01",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpsgy4ma87_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_02",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp8ephpgbt_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_03",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpdq_outku_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_04",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmplupssvh4_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_05",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpubev8fu0_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_06",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpck2qkgcr_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_07",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp1qarvsqw_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_08",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp23yu0t8q_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_09",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp3ugabhpj_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_10",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpm7afkb3j_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_11",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpm1oj3kbn_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_12",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpikzwhfh4_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_13",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpnx9ubffl_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_14",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpgjsx68gw_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_15",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp5xkxzkcd_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_16",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpqlnttu9f_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_17",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp99cez5ot_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_18",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpl1sabs8p_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_19",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpe3tx2xl6_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_20",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp7q48khzn_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_21",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpqgueuw3z_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_22",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpuqrqrdeb_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_23",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp61do928o_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_24",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp_2ldqlns_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_25",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpdtn7rply_worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_26",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpq64i3f1__worker.py']' returned non-zero exit status 1."
                },
                {
                    "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_27",
                    "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpt3ccd3vw_worker.py']' returned non-zero exit status 1."
302
+ },
303
+ {
304
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_28",
305
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpry3g7k1u_worker.py']' returned non-zero exit status 1."
306
+ },
307
+ {
308
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_29",
309
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp65iw3u4x_worker.py']' returned non-zero exit status 1."
310
+ },
311
+ {
312
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_30",
313
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmpkthgrrpd_worker.py']' returned non-zero exit status 1."
314
+ },
315
+ {
316
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_31",
317
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=2', '/tmp/tmp6af0ml7t_worker.py']' returned non-zero exit status 1."
318
+ },
319
+ {
320
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_01",
321
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
322
+ },
323
+ {
324
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_03",
325
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
326
+ },
327
+ {
328
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_05",
329
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 326.69 MiB is free. Process 56867 has 21.98 GiB memory in use. Of the allocated memory 17.53 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
330
+ },
331
+ {
332
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_07",
333
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 1 has a total capacity of 22.30 GiB of which 412.69 MiB is free. Process 56867 has 21.89 GiB memory in use. Of the allocated memory 15.54 GiB is allocated by PyTorch, and 5.94 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
334
+ },
335
+ {
336
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_17",
337
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
338
+ },
339
+ {
340
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_19",
341
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
342
+ },
343
+ {
344
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_21",
345
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
346
+ },
347
+ {
348
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_23",
349
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.74 GiB is free. Process 56867 has 18.55 GiB memory in use. Of the allocated memory 9.12 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
350
+ }
351
+ ],
352
+ "single": [
353
+ {
354
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_matches_original_120b",
355
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
356
+ },
357
+ {
358
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_matches_original_20b",
359
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
360
+ },
361
+ {
362
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_00",
363
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
364
+ },
365
+ {
366
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_01",
367
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
368
+ },
369
+ {
370
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_02",
371
+ "trace": "(line 302) ImportError: `kernels` is either not installed or uses an incompatible version. Please install the latest version with `pip install -U kernels`."
372
+ },
373
+ {
374
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_03",
375
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
376
+ },
377
+ {
378
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_04",
379
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
380
+ },
381
+ {
382
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_05",
383
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
384
+ },
385
+ {
386
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_06",
387
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.97 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
388
+ },
389
+ {
390
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_07",
391
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
392
+ },
393
+ {
394
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_08",
395
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.97 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
396
+ },
397
+ {
398
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_09",
399
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
400
+ },
401
+ {
402
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_10",
403
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
404
+ },
405
+ {
406
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_11",
407
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
408
+ },
409
+ {
410
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_12",
411
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
412
+ },
413
+ {
414
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_13",
415
+ "trace": "(line 243) RuntimeError: We encountered some issues during automatic conversion of the weights. For details look at the `CONVERSION` entries of the above report!"
416
+ },
417
+ {
418
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_14",
419
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
420
+ },
421
+ {
422
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_15",
423
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
424
+ },
425
+ {
426
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_16",
427
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
428
+ },
429
+ {
430
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_17",
431
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
432
+ },
433
+ {
434
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_18",
435
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
436
+ },
437
+ {
438
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_19",
439
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
440
+ },
441
+ {
442
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_20",
443
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
444
+ },
445
+ {
446
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_21",
447
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
448
+ },
449
+ {
450
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_22",
451
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
452
+ },
453
+ {
454
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_23",
455
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
456
+ },
457
+ {
458
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_24",
459
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
460
+ },
461
+ {
462
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_25",
463
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
464
+ },
465
+ {
466
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_26",
467
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
468
+ },
469
+ {
470
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_27",
471
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
472
+ },
473
+ {
474
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_28",
475
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
476
+ },
477
+ {
478
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_29",
479
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
480
+ },
481
+ {
482
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_30",
483
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
484
+ },
485
+ {
486
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_31",
487
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
488
+ },
489
+ {
490
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_00",
491
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpdc0iv99t_worker.py']' returned non-zero exit status 1."
492
+ },
493
+ {
494
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_01",
495
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp4o5lab_s_worker.py']' returned non-zero exit status 1."
496
+ },
497
+ {
498
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_02",
499
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpm8tjcc4t_worker.py']' returned non-zero exit status 1."
500
+ },
501
+ {
502
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_03",
503
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmps9052pj__worker.py']' returned non-zero exit status 1."
504
+ },
505
+ {
506
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_04",
507
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpq71k944x_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_05",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpqli754_5_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_06",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpetm4ki1x_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_07",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpma208gay_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_08",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpsvxisax8_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_09",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpkxf8tyer_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_10",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpw4o3668e_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_11",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmplk483jyl_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_12",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpfii_3ue3_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_13",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpg6igd26u_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_14",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpz6vkzh5r_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_15",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpw98rsj7b_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_16",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpcqgvvjc8_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_17",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp_qcna48s_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_18",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp17nz83mt_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_19",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpw7my3nq3_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_20",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp9cngk29k_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_21",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpul7q0irn_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_22",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp4gw52rfd_worker.py']' returned non-zero exit status 1."
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp4gw52rfd_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_23",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp25vgb6xx_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_24",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpup11n94e_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_25",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmp5uo9uxsi_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_26",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpummqwphi_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_27",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpkj87ri65_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_28",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpqvsl9g7u_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_29",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpkugmzw7l_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_30",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpa973f8fh_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_model_outputs_distributed_31",
+ "trace": "(line 526) subprocess.CalledProcessError: Command '['torchrun', '--nproc_per_node=1', '/tmp/tmpp6jicf_d_worker.py']' returned non-zero exit status 1."
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_01",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_03",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_05",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_07",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1014.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 1006.69 MiB is free. Process 45729 has 21.31 GiB memory in use. Of the allocated memory 16.98 GiB is allocated by PyTorch, and 3.96 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_17",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_19",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_21",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ },
+ {
+ "line": "tests/models/gpt_oss/test_modeling_gpt_oss.py::GptOssIntegrationTest::test_training_step_23",
+ "trace": "(line 5113) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.96 GiB. GPU 0 has a total capacity of 22.30 GiB of which 3.86 GiB is free. Process 45729 has 18.43 GiB memory in use. Of the allocated memory 9.11 GiB is allocated by PyTorch, and 8.95 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)"
+ }
+ ]
+ },
+ "job_link": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/20960410794/job/60235991322",
+ "single": "https://github.com/huggingface/transformers/actions/runs/20960410794/job/60235991349"
+ },
+ "captured_info": {
+ "multi": "https://github.com/huggingface/transformers/actions/runs/20960410794/job/60235991322#step:16:1",
+ "single": "https://github.com/huggingface/transformers/actions/runs/20960410794/job/60235991349#step:16:1"
+ }
+ }
+ }