hf-transformers-bot committed on
Commit 88e1514 · verified · Parent: 754f6ef

Upload 2025-11-02/runs/99-19006313889/ci_results_run_pipelines_torch_gpu/test_results_diff.json with huggingface_hub

2025-11-02/runs/99-19006313889/ci_results_run_pipelines_torch_gpu/test_results_diff.json ADDED
@@ -0,0 +1,127 @@
+ === Diff for job: multi-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-69057d51-1dfd06567deda7664763ff9e;d819b249-15f6-40b7-8559-c036cae63854)
+ - Diff is 623682 characters long. Set self.maxDiff to None to see it.
+ - [[-0.08846612274646759, -0.08407783508300781, -0.0804[234483 chars]475]]
+ - [[-0.08846614509820938, -0.0840778574347496, -0.08044[234494 chars]667]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-6906d043-4b7aeb337137bef54d80c053;5c8e0a41-5a3b-4e63-beaa-246fbcd52185)
+ + Diff is 626190 characters long. Set self.maxDiff to None to see it.
+ + [[-0.08846613764762878, -0.08407779783010483, -0.0804[234365 chars]685]]
+ + [[-0.08846619725227356, -0.08407782763242722, -0.0804[234433 chars]488]]
+
+ === Diff for job: single-gpu_run_examples_gpu_test_reports ===
+ --- Absent in current run:
+ - 10%|β–ˆ | 5/50 [00:01<00:10, 4.42it/s]Traceback (most recent call last):
+ - 10%|β–ˆ | 5/50 [00:01<00:16, 2.70it/s]
+ - 11/01/2025 02:56:30 - INFO - __main__ - Distributed environment: NO
+ - 11/01/2025 02:56:30 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ - 11/01/2025 02:56:30 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ - 11/01/2025 02:56:30 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ - 11/01/2025 02:56:30 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 11/01/2025 02:56:30 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ - 11/01/2025 02:56:30 - INFO - transformers.configuration_utils - Model config T5Config {
+ - 11/01/2025 02:56:30 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ - 11/01/2025 02:56:31 - INFO - __main__ - Gradient Accumulation steps = 1
+ - 11/01/2025 02:56:31 - INFO - __main__ - Instantaneous batch size per device = 2
+ - 11/01/2025 02:56:31 - INFO - __main__ - Num Epochs = 10
+ - 11/01/2025 02:56:31 - INFO - __main__ - Num examples = 10
+ - 11/01/2025 02:56:31 - INFO - __main__ - Total optimization steps = 50
+ - 11/01/2025 02:56:31 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ - 11/01/2025 02:56:31 - INFO - __main__ - ***** Running training *****
+ - 11/01/2025 02:56:31 - INFO - __main__ - Sample 9 of the training set: {'input_ids': [299, 125, 65, 118, 39, 798, 13, 8, 215, 58, 1029, 2798, 28089, 7, 31, 204, 3449, 326, 3, 24151, 11607, 581, 1013, 2648, 12, 23338, 13017, 31, 7, 431, 10794, 581, 8, 337, 16383, 6, 11, 901, 9, 15303, 6176, 271, 8, 166, 1566, 348, 12, 1535, 13923, 2300, 3154, 6, 132, 33, 1995, 13, 8285, 5, 955, 2361, 25, 26061, 1361, 16, 2051, 271, 5210, 8692, 26, 21, 131, 11989, 58, 955, 8, 1782, 24, 24281, 26, 8, 6242, 44, 1813, 1629, 122, 58, 37, 18096, 2241, 7, 13, 9938, 3349, 11, 9938, 5061, 305, 619, 1380, 25, 12, 11003, 39, 420, 10372, 11, 39, 710, 3350, 56, 36, 5111, 30, 2818, 31, 7, 2740, 7010, 7, 11, 584, 18819, 152, 24661, 3111, 17543, 10, 1458, 22866, 6, 9938, 5061, 305, 619, 11, 367, 137, 3152, 1422, 56, 150, 1200, 3476, 68, 25, 54, 341, 1432, 39, 420, 335, 11, 698, 28, 803, 5, 363, 33, 39, 420, 335, 18096, 53, 4413, 45, 48, 215, 58, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [94, 31, 7, 118, 12, 19819, 18, 2905, 208, 63, 21, 8, 2789, 596, 68, 605, 1329, 11, 10230, 19236, 5, 1]}.
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ - 11/01/2025 02:56:31 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 11/01/2025 02:56:31 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ - 11/01/2025 02:56:31 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ - 11/01/2025 02:56:31 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ - 11/01/2025 02:56:31 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ - 11/01/2025 02:56:31 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ - 11/01/2025 02:56:31 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ - 2%|▏ | 1/50 [00:01<01:09, 1.42s/it]
+ - 6%|β–Œ | 3/50 [00:01<00:19, 2.46it/s]
+ - File "/opt/venv/lib/python3.10/site-packages/torch/multiprocessing/reductions.py", line 618, in reduce_storage
+ - File "/opt/venv/lib/python3.10/site-packages/torch/storage.py", line 451, in wrapper
+ - File "/opt/venv/lib/python3.10/site-packages/torch/storage.py", line 526, in _share_fd_cpu_
+ - File "/usr/lib/python3.10/multiprocessing/queues.py", line 244, in _feed
+ - File "/usr/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
+ - Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 1024.00 examples/s]
+ - Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 528.68 examples/s]
+ - RuntimeError: DataLoader worker (pid 547) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ - RuntimeError: DataLoader worker (pid(s) 547, 549) exited unexpectedly
+ - RuntimeError: unable to write to file </torch_548_43172545_0>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_550_3506692942_15>: No space left on device (28)
+ - cls(buf, protocol).dump(obj)
+ - fd, size = storage._share_fd_cpu_()
+ - obj = _ForkingPickler.dumps(obj)
+ - return fn(self, *args, **kwargs)
+ - return super()._share_fd_cpu_(*args, **kwargs)
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmp0zozvzey', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmp417kpu11', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+ +++ Appeared in current run:
+ +
+ + 10%|β–ˆ | 5/50 [00:02<00:13, 3.31it/s]Traceback (most recent call last):
+ + 10%|β–ˆ | 5/50 [00:02<00:22, 1.96it/s]
+ + 11/02/2025 03:01:00 - INFO - __main__ - Distributed environment: NO
+ + 11/02/2025 03:01:00 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ + 11/02/2025 03:01:01 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ + 11/02/2025 03:01:01 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ + 11/02/2025 03:01:01 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ + 11/02/2025 03:01:01 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ + 11/02/2025 03:01:01 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 11/02/2025 03:01:01 - INFO - transformers.configuration_utils - Model config T5Config {
+ + 11/02/2025 03:01:01 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ + 11/02/2025 03:01:01 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ + 11/02/2025 03:01:02 - INFO - __main__ - Sample 6 of the training set: {'input_ids': [907, 3, 12655, 472, 1079, 3009, 48, 471, 28, 3, 9, 903, 12, 4461, 12, 9065, 18, 1201, 18, 1490, 16634, 81, 8, 613, 68, 48, 1295, 47, 12967, 57, 8, 2788, 7, 1476, 5, 37, 332, 10878, 26, 867, 1886, 3, 18, 2007, 13, 8, 6552, 2009, 3, 18, 33, 3945, 12, 3601, 26868, 26842, 9, 1635, 9, 6, 113, 646, 336, 847, 5, 8545, 10715, 348, 808, 8, 166, 372, 21, 1856, 31, 7, 1453, 12, 2733, 3142, 100, 17, 109, 5, 37, 18939, 49, 4477, 43, 751, 163, 728, 48, 774, 11, 6377, 95, 8, 953, 28, 874, 979, 45, 335, 1031, 5, 18263, 5961, 5316, 1288, 10477, 16634, 6, 113, 5821, 5659, 1815, 2754, 44, 3038, 23770, 52, 6983, 1061, 16, 7218, 2237, 472, 1079, 3009, 12, 12580, 3802, 1269, 16, 112, 166, 774, 16, 1567, 5, 216, 65, 92, 10774, 192, 25694, 420, 18, 7, 2407, 13084, 21, 8, 22343, 596, 11, 3150, 3030, 16, 112, 29686, 5, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [472, 1079, 3009, 7930, 22091, 16634, 19, 150, 1200, 365, 4587, 21, 8, 6393, 221, 15, 907, 2743, 31, 7, 613, 6, 9938, 8288, 65, 2525, 5, 1]}.
+ + 11/02/2025 03:01:02 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ + 11/02/2025 03:01:02 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ + 11/02/2025 03:01:02 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ + 11/02/2025 03:01:02 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ + 11/02/2025 03:01:02 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 11/02/2025 03:01:02 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ + 11/02/2025 03:01:02 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ + 11/02/2025 03:01:02 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ + 11/02/2025 03:01:02 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ + 11/02/2025 03:01:03 - INFO - __main__ - Gradient Accumulation steps = 1
+ + 11/02/2025 03:01:03 - INFO - __main__ - Instantaneous batch size per device = 2
+ + 11/02/2025 03:01:03 - INFO - __main__ - Num Epochs = 10
+ + 11/02/2025 03:01:03 - INFO - __main__ - Num examples = 10
+ + 11/02/2025 03:01:03 - INFO - __main__ - Total optimization steps = 50
+ + 11/02/2025 03:01:03 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ + 11/02/2025 03:01:03 - INFO - __main__ - ***** Running training *****
+ + 11/02/2025 03:01:03 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ + 2%|▏ | 1/50 [00:01<01:35, 1.95s/it]
+ + 6%|β–Œ | 3/50 [00:02<00:26, 1.79it/s]
+ + ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
+ + Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 384.70 examples/s]
+ + Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 516.25 examples/s]
+ + RuntimeError: DataLoader worker (pid 549) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ + RuntimeError: DataLoader worker (pid(s) 549) exited unexpectedly
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmpzxuxs3lh', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmp_mj0qz50', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+
+ === Diff for job: single-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-69057d88-2d735299119e7ddc5ce1c5d3;039c000a-2b8b-4c75-b9db-70c9f9a35f50)
+ - Diff is 626091 characters long. Set self.maxDiff to None to see it.
+ - [[-0.08846622705459595, -0.0840778797864914, -0.08044[234334 chars]309]]
+ - [[-0.08846624195575714, -0.08407791703939438, -0.0804[234423 chars]607]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-6906cfe9-22ca31d164a406612e4a301c;3e9cfe1e-e777-450c-a062-4385f6a61f7e)
+ + Diff is 628254 characters long. Set self.maxDiff to None to see it.
+ + [[-0.08846620470285416, -0.08407790213823318, -0.0804[234411 chars]697]]
+ + [[-0.08846626430749893, -0.08407794684171677, -0.0804[234519 chars]667]]