hf-transformers-bot committed on
Commit 1ab233f · verified · 1 Parent(s): e02edc1

Upload 2025-10-23/runs/89-18735905170/ci_results_run_pipelines_torch_gpu/test_results_diff.json with huggingface_hub

2025-10-23/runs/89-18735905170/ci_results_run_pipelines_torch_gpu/test_results_diff.json ADDED
@@ -0,0 +1,126 @@
+ === Diff for job: multi-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-68f8b0b6-0a360e8c7d0348b20e2fed34;60460819-af93-4509-89d3-d3074885d128)
+ - Diff is 625257 characters long. Set self.maxDiff to None to see it.
+ - FAILED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_conversion_additional_tensor
+ - FAILED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_small_bark_pt
+ - [[-0.08846619725227356, -0.08407789468765259, -0.080[234444 chars]518]]
+ - [[-0.08846622705459595, -0.08407790958881378, -0.080[234437 chars]488]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-68f9a229-2b28029d4947bc9b553b2ed8;ed0838e8-66f6-4118-98f5-7206bd01515f)
+ + Diff is 625739 characters long. Set self.maxDiff to None to see it.
+ + PASSED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_conversion_additional_tensor
+ + PASSED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_small_bark_pt
+ + [[-0.08846613764762878, -0.0840778797864914, -0.08044[234423 chars]266]]
+ + [[-0.08846619725227356, -0.08407788723707199, -0.0804[234406 chars]266]]
+
+ === Diff for job: single-gpu_run_examples_gpu_test_reports ===
+ --- Absent in current run:
+ - 0%| | 0/10 [00:00<?, ?it/s]
+ - 10%|█ | 5/50 [00:02<00:15, 3.00it/s]Traceback (most recent call last):
+ - 10%|█ | 5/50 [00:02<00:24, 1.81it/s]
+ - 10/22/2025 09:47:18 - INFO - __main__ - Distributed environment: NO
+ - 10/22/2025 09:47:19 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/22/2025 09:47:20 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/22/2025 09:47:20 - INFO - transformers.configuration_utils - Model config T5Config {
+ - 10/22/2025 09:47:20 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ - 10/22/2025 09:47:20 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ - 10/22/2025 09:47:20 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ - 10/22/2025 09:47:20 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ - 10/22/2025 09:47:20 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ - 10/22/2025 09:47:20 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ - 10/22/2025 09:47:21 - INFO - __main__ - Gradient Accumulation steps = 1
+ - 10/22/2025 09:47:21 - INFO - __main__ - Instantaneous batch size per device = 2
+ - 10/22/2025 09:47:21 - INFO - __main__ - Num Epochs = 10
+ - 10/22/2025 09:47:21 - INFO - __main__ - Num examples = 10
+ - 10/22/2025 09:47:21 - INFO - __main__ - Total optimization steps = 50
+ - 10/22/2025 09:47:21 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ - 10/22/2025 09:47:21 - INFO - __main__ - ***** Running training *****
+ - 10/22/2025 09:47:21 - INFO - __main__ - Sample 3 of the training set: {'input_ids': [37, 2050, 18, 3535, 17952, 2237, 57, 1363, 3424, 32, 7, 638, 15, 40, 107, 32, 751, 8, 167, 6116, 16, 8, 4356, 30, 314, 1797, 5, 299, 2730, 343, 2488, 12923, 11229, 65, 118, 464, 12, 918, 3, 9, 17952, 28, 623, 18, 17068, 2251, 5, 1404, 857, 24, 1363, 3424, 32, 7, 638, 15, 40, 107, 32, 56, 5124, 12, 1903, 8, 794, 13, 3, 9, 2902, 13, 150, 3410, 16, 12627, 31, 7, 20417, 5, 1661, 389, 23, 3849, 17655, 9, 509, 26551, 133, 258, 36, 1644, 12, 987, 8, 646, 12, 607, 3, 9, 789, 5, 290, 33, 14935, 24, 1274, 13, 14068, 228, 6263, 12627, 31, 7, 1456, 3938, 6, 72, 145, 3, 9, 215, 227, 34, 7189, 15, 26, 8, 6926, 1353, 13, 165, 3, 1439, 2, 3940, 115, 29, 41, 2, 19853, 3436, 115, 29, 61, 1038, 15794, 670, 5, 3371, 4298, 43, 16026, 12, 240, 1041, 581, 12627, 21, 3586, 3, 9, 627, 1797, 7183, 12, 915, 165, 6488, 1421, 1487, 5, 12627, 19, 341, 1180, 80, 13, 8, 2030, 1487, 11724, 7, 16, 8, 3983, 9431, 5, 3, 26821, 13, 8, 10312, 19, 73, 18749, 7580, 13, 151, 619, 666, 8, 9569, 689, 4678, 5898, 3, 15, 27467, 45, 12627, 344, 2722, 11, 1412, 586, 2712, 2814, 12, 11284, 3, 18, 8, 511, 2030, 1080, 16, 8, 1611, 3545, 1363, 3424, 32, 7, 638, 15, 40, 107, 32, 31, 7, 2730, 11882, 43, 11130, 12, 915, 3, 9, 1487, 6, 68, 8, 192, 646, 18, 3108, 2251, 2066, 15, 26, 7157, 581, 112, 91, 9545, 789, 31, 7, 1368, 13, 11955, 403, 449, 485, 5, 37, 14298, 12861, 75, 19, 894, 38, 3, 26655, 12, 8, 1181, 18, 402, 1370, 485, 5224, 13266, 9, 1088, 16, 12263, 6, 84, 21, 767, 1971, 12, 3, 1536, 6066, 17, 23, 342, 8, 1353, 13, 12263, 31, 7, 3983, 9431, 15794, 670, 5, 12627, 31, 7, 26988, 3450, 19, 3, 12327, 38, 1181, 18, 1238, 32, 11, 1181, 18, 567, 9, 235, 6, 2199, 34, 19, 816, 12, 43, 8107, 26, 165, 3983, 9431, 3101, 16, 1100, 1274, 5, 156, 1363, 11229, 31, 7, 2730, 343, 7, 33, 3725, 3934, 12, 991, 3, 9, 646, 18, 3108, 17952, 6, 34, 133, 36, 8, 166, 97, 437, 8, 1590, 13, 12627, 31, 7, 25280, 2009, 16, 17184, 24, 3, 9, 269, 18, 
3108, 2753, 7817, 3, 9, 789, 3, 9485, 57, 6639, 343, 7, 5, 621, 112, 3, 60, 18, 9, 102, 2700, 297, 38, 3427, 6323, 1374, 3, 9, 269, 18, 858, 18, 3728, 60, 17952, 6, 24923, 3424, 32, 7, 638, 15, 40, 107, 32, 65, 335, 477, 12, 3, 9, 102, 2700, 6323, 7, 11, 2451, 3, 28443, 5142, 5, 466, 164, 4410, 4586, 6, 437, 112, 17952, 1513, 165, 2942, 16, 8, 314, 1797, 4356, 11, 8, 2730, 343, 7, 43, 15387, 26, 12, 15092, 112, 2486, 3, 99, 70, 6927, 28, 119, 2251, 7229, 5, 10965, 6, 8, 2730, 343, 7, 6, 14298, 12861, 75, 11, 26988, 3450, 43, 3, 9, 2942, 5, 432, 1114, 8, 2753, 12, 3, 9, 102, 2700, 1363, 11229, 3, 18, 3, 23998, 24, 959, 1307, 47, 3, 9, 2670, 13, 97, 5, 156, 1363, 3424, 32, 7, 638, 15, 40, 107, 32, 405, 5124, 6, 8, 2753, 228, 258, 3, 9, 102, 2700, 1363, 11229, 42, 453, 8, 28406, 30, 38, 124, 4914, 52, 5, 8767, 14751, 9768, 164, 163, 240, 286, 45, 1515, 6, 227, 10861, 43, 8160, 3, 9, 126, 2753, 778, 416, 215, 5, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [37, 21076, 2753, 65, 5374, 28406, 5923, 3271, 24923, 3424, 32, 7, 638, 15, 40, 107, 32, 12, 607, 8, 416, 789, 6, 3, 3565, 376, 578, 1513, 112, 2942, 5, 1]}.
+ - 10/22/2025 09:47:21 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ - 2%|▏ | 1/50 [00:02<01:45, 2.15s/it]
+ - 6%|▌ | 3/50 [00:02<00:28, 1.63it/s]
+ - Running tokenizer on dataset: 100%|██████████| 10/10 [00:00<00:00, 539.93 examples/s]
+ - Running tokenizer on dataset: 100%|██████████| 10/10 [00:00<00:00, 833.76 examples/s]
+ - RuntimeError: DataLoader worker (pid 551) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ - RuntimeError: DataLoader worker (pid(s) 551, 552) exited unexpectedly
+ - RuntimeError: unable to write to file </torch_549_1577359929_1>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_549_1848656295_0>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_550_3510014317_0>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_550_4146703690_1>: No space left on device (28)
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmpefs52_qw', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmpeaylghsj', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+ +++ Appeared in current run:
+ + 0%| | 0/10 [00:01<?, ?it/s]
+ + 10%|█ | 5/50 [00:01<00:17, 2.50it/s]
+ + 10/23/2025 02:53:58 - INFO - __main__ - Distributed environment: NO
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ + 10/23/2025 02:53:59 - INFO - transformers.configuration_utils - Model config T5Config {
+ + 10/23/2025 02:53:59 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ + 10/23/2025 02:53:59 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ + 10/23/2025 02:53:59 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ + 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ + 10/23/2025 02:54:00 - INFO - __main__ - Sample 1 of the training set: {'input_ids': [148, 54, 217, 8285, 13, 3068, 221, 7721, 3, 208, 22358, 30, 12296, 13, 8, 1430, 44, 1630, 10, 1755, 272, 4209, 30, 1856, 30, 9938, 555, 11, 8, 9938, 3349, 475, 5, 28089, 11, 1244, 5845, 6, 21, 677, 6, 43, 708, 12, 8147, 550, 45, 8, 3, 60, 5772, 257, 2901, 68, 8, 2630, 3516, 21, 3068, 221, 7721, 2675, 19, 24, 70, 596, 103, 59, 320, 20081, 3919, 13, 692, 8, 337, 5, 27, 214, 8, 1589, 3431, 7, 43, 530, 91, 13, 3169, 274, 578, 435, 1452, 16, 3, 9, 1126, 1419, 68, 48, 97, 6, 227, 8915, 95, 163, 192, 979, 45, 70, 166, 4169, 1031, 6, 378, 320, 310, 18337, 21, 8, 163, 420, 18, 89, 2242, 372, 406, 3, 9, 1369, 5, 486, 709, 80, 3282, 13, 70, 16562, 1330, 12, 36, 1044, 18, 77, 89, 2176, 1054, 6, 28, 921, 44, 8, 1886, 1829, 8032, 21, 1452, 3, 18, 11, 59, 131, 250, 79, 43, 1513, 128, 1508, 12, 2871, 11, 28325, 26, 128, 11855, 1480, 1766, 5, 290, 19, 3, 9, 2841, 1829, 81, 8, 286, 28, 8, 2743, 1955, 1290, 10070, 11, 112, 1508, 2508, 81, 149, 79, 43, 2767, 223, 2239, 7, 437, 336, 774, 6, 116, 79, 225, 36, 4549, 21, 136, 773, 13, 13233, 24, 228, 483, 378, 300, 5, 1029, 8, 1067, 6, 479, 44, 8, 194, 79, 577, 11, 70, 2136, 13, 6933, 6, 34, 19, 614, 12, 217, 125, 24, 13233, 429, 36, 42, 125, 228, 4431, 120, 483, 365, 1290, 10070, 552, 8, 1762, 2025, 2034, 9540, 5, 156, 79, 54, 129, 80, 1369, 365, 70, 6782, 258, 79, 56, 129, 3, 9, 720, 13, 7750, 223, 68, 6, 8, 1200, 48, 1369, 924, 661, 1550, 30, 6, 8, 72, 17291, 485, 132, 56, 36, 5, 3159, 577, 1549, 19, 59, 3510, 30, 48, 1407, 3068, 221, 7721, 2369, 336, 774, 30, 3, 9, 306, 365, 3084, 432, 986, 63, 565, 6, 28, 3, 9, 661, 13, 131, 80, 9589, 16, 70, 336, 850, 1031, 3, 7, 17348, 70, 1455, 5, 86, 8, 628, 13, 874, 767, 6, 66, 13, 24, 3410, 11, 15290, 1330, 12, 43, 118, 3, 7, 15318, 91, 13, 8, 1886, 6, 3, 3565, 135, 3762, 578, 8, 337, 563, 13, 1508, 113, 6, 59, 78, 307, 977, 6, 2299, 3555, 5, 466, 19, 59, 66, 323, 12, 1290, 10070, 6, 68, 3, 88, 65, 12, 240, 
128, 3263, 21, 34, 5, 27, 183, 780, 12, 217, 3, 9, 4802, 869, 13, 577, 45, 3068, 221, 7721, 437, 3, 88, 808, 1567, 44, 8, 414, 13, 1718, 5, 466, 19, 16, 4656, 12, 432, 986, 63, 565, 31, 7, 97, 38, 2743, 6, 116, 79, 130, 3, 60, 4099, 2810, 11, 1256, 12, 3853, 11, 6, 44, 8, 414, 13, 112, 28388, 44, 8, 12750, 13, 2892, 6, 92, 1944, 28, 3, 9, 1730, 116, 79, 877, 1039, 5, 4395, 8, 6242, 6, 1290, 10070, 65, 59, 2139, 2448, 231, 893, 5, 290, 47, 150, 174, 21, 376, 12, 36, 78, 158, 7, 7, 603, 3040, 116, 3, 88, 764, 91, 227, 8, 511, 467, 13, 8, 774, 11, 2162, 79, 133, 36, 16, 3, 9, 3, 60, 5772, 257, 2870, 6, 84, 410, 59, 1299, 91, 8, 269, 1569, 12, 112, 1508, 42, 8, 2675, 5, 366, 3, 88, 808, 1567, 6, 3, 88, 141, 700, 708, 91, 57, 271, 31820, 1427, 1465, 3, 18, 2508, 81, 3068, 221, 7721, 2852, 3, 9, 1886, 24, 3842, 2369, 16, 8, 420, 985, 13, 8, 6552, 3815, 3, 18, 68, 112, 4454, 877, 323, 6321, 182, 1224, 5, 27, 214, 25, 54, 9409, 24, 3, 88, 65, 118, 9193, 269, 6, 250, 3068, 221, 7721, 33, 230, 3, 25764, 8, 2328, 6, 68, 34, 3679, 132, 47, 3, 9, 3126, 147, 45, 135, 966, 38, 1116, 38, 8, 774, 141, 708, 5, 94, 1330, 12, 36, 3, 9, 495, 24, 3, 99, 25, 1190, 446, 49, 7484, 3, 16196, 32, 15, 6, 25, 1190, 3068, 221, 7721, 5, 978, 7475, 1518, 95, 168, 16, 4993, 12, 336, 774, 6, 68, 8, 880, 13, 70, 372, 33, 59, 692, 631, 16, 3211, 5, 328, 130, 3, 60, 9333, 30, 3, 16196, 32, 15, 336, 774, 396, 6, 68, 717, 410, 6591, 16, 3, 18, 16, 70, 166, 4169, 5533, 1031, 13, 1230, 10892, 6, 874, 1508, 435, 8, 3134, 5, 100, 97, 300, 6, 163, 3, 16196, 32, 15, 11, 8643, 4049, 71, 152, 2831, 17, 43, 5799, 16, 8, 337, 1059, 5, 94, 19, 352, 12, 36, 3, 9, 3805, 4393, 21, 135, 12, 1049, 95, 45, 8, 1102, 79, 33, 230, 16, 6161, 6, 68, 79, 14621, 174, 3, 9, 1369, 11, 1224, 5, 27, 278, 31, 17, 217, 34, 1107, 44, 234, 12, 22358, 30, 1856, 6, 713, 5, 531, 79, 237, 320, 3919, 13, 3609, 91, 21, 3, 9, 3314, 581, 8, 9982, 687, 7, 6, 8, 194, 430, 8335, 372, 4551, 7, 115, 13245, 410, 44, 24106, 12750, 336, 1851, 58, 
465, 5, 156, 25, 4393, 12, 143, 6209, 11, 2604, 1766, 6, 38, 3068, 221, 7721, 103, 6, 24, 10762, 72, 1666, 30, 39, 13613, 250, 25, 214, 3, 99, 25, 28325, 258, 25, 33, 16, 600, 3169, 5, 275, 8, 1589, 3431, 7, 43, 982, 44, 8, 223, 38, 168, 3, 18, 70, 163, 1349, 4228, 16, 586, 6407, 365, 1290, 10070, 47, 581, 3815, 555, 596, 180, 13296, 7, 7165, 4463, 16, 8, 262, 10765, 3802, 5, 94, 405, 59, 3005, 221, 168, 581, 46, 22358, 596, 24, 33, 3, 9, 23980, 72, 145, 192, 1766, 3, 9, 467, 48, 774, 5, 94, 19, 614, 12, 253, 136, 1465, 7, 45, 3068, 221, 7721, 31, 7, 1419, 68, 44, 709, 79, 43, 59, 118, 1340, 3, 9, 26, 22722, 44, 8, 2007, 3, 18, 780, 5, 3, 14967, 79, 1369, 1116, 6, 24, 228, 1837, 5, 27, 317, 3455, 195, 33, 92, 16, 21, 3, 9, 182, 3429, 774, 68, 116, 27, 320, 44, 8, 119, 192, 2323, 2017, 756, 135, 6, 7254, 32, 11, 18041, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [101, 33, 1776, 3, 9, 2893, 13, 8, 194, 190, 8, 6552, 3815, 
774, 11, 128, 2323, 44, 8, 2007, 13, 8, 953, 1727, 12, 36, 5074, 378, 300, 227, 492, 3, 9, 1282, 456, 5, 1]}.
+ + 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ + 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ + 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/23/2025 02:54:00 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ + 10/23/2025 02:54:00 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ + 10/23/2025 02:54:00 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ + 10/23/2025 02:54:01 - INFO - __main__ - Gradient Accumulation steps = 1
+ + 10/23/2025 02:54:01 - INFO - __main__ - Instantaneous batch size per device = 2
+ + 10/23/2025 02:54:01 - INFO - __main__ - Num Epochs = 10
+ + 10/23/2025 02:54:01 - INFO - __main__ - Num examples = 10
+ + 10/23/2025 02:54:01 - INFO - __main__ - Total optimization steps = 50
+ + 10/23/2025 02:54:01 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ + 10/23/2025 02:54:01 - INFO - __main__ - ***** Running training *****
+ + 10/23/2025 02:54:01 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ + 2%|▏ | 1/50 [00:01<01:18, 1.60s/it]
+ + 8%|▊ | 4/50 [00:01<00:15, 2.97it/s]Traceback (most recent call last):
+ + Running tokenizer on dataset: 100%|██████████| 10/10 [00:00<00:00, 316.53 examples/s]
+ + Running tokenizer on dataset: 100%|██████████| 10/10 [00:00<00:00, 499.84 examples/s]
+ + RuntimeError: DataLoader worker (pid 548) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ + RuntimeError: DataLoader worker (pid(s) 548) exited unexpectedly
+ + RuntimeError: unable to write to file </torch_549_37395250_1>: No space left on device (28)
+ + RuntimeError: unable to write to file </torch_549_3866372777_2>: No space left on device (28)
+ + RuntimeError: unable to write to file </torch_550_3470294363_0>: No space left on device (28)
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmpd2il6wdy', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmp9tvewysj', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+
+ === Diff for job: single-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-68f8b101-37a59b4a6f63cec8490e0658;232f3cc1-59bf-4b45-b4a6-f0f3544cf72f)
+ - Diff is 627348 characters long. Set self.maxDiff to None to see it.
+ - FAILED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_conversion_additional_tensor
+ - FAILED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_small_bark_pt
+ - [[-0.08846624940633774, -0.08407793194055557, -0.0804[234439 chars]488]]
+ - [[-0.08846625685691833, -0.08407790958881378, -0.0804[234497 chars]475]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-68f9a102-40783be764e683583b44aa6c;d9bcf364-9894-4b08-a2d9-3468b07e346f)
+ + Diff is 628277 characters long. Set self.maxDiff to None to see it.
+ + PASSED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_conversion_additional_tensor
+ + PASSED tests/pipelines/test_pipelines_text_to_audio.py::TextToAudioPipelineTests::test_small_bark_pt
+ + [[-0.08846623450517654, -0.08407793939113617, -0.0804[234456 chars]458]]
+ + [[-0.08846624940633774, -0.08407793194055557, -0.0804[234462 chars]266]]