Repoaner committed on
Commit 17d20eb · verified · 1 Parent(s): f2462c5

Upload LLaVA-Next-3D/17652629.err with huggingface_hub

Files changed (1):
  1. LLaVA-Next-3D/17652629.err +152 -0
LLaVA-Next-3D/17652629.err ADDED
@@ -0,0 +1,152 @@
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/huggingface_hub/file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+   warnings.warn(
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:103: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
+   warnings.warn(
+ You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:143: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
+   warnings.warn(
+ Some weights of LlavaQwenForCausalLM were not initialized from the model checkpoint at data/model_2/LLaVA-Video-7B-Qwen2 and are newly initialized: ['ground_head_obj.0.bias', 'ground_head_obj.0.weight', 'ground_head_obj.2.bias', 'ground_head_obj.2.weight', 'ground_head_obj.3.bias', 'ground_head_obj.3.weight', 'ground_head_query.0.bias', 'ground_head_query.0.weight', 'ground_head_query.2.bias', 'ground_head_query.2.weight', 'ground_head_query.3.bias', 'ground_head_query.3.weight', 'ground_head_zero_target', 'model.frame_id']
+ You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/accelerate/accelerator.py:451: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches']). Please pass an `accelerate.DataLoaderConfiguration` instead:
+ dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=None)
+   warnings.warn(
+ Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
+   return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
+   0%| | 0/5031 [00:00<?, ?it/s]
+ /mnt/petrelfs/wangzehan/miniconda3/envs/llava-3d/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
+   warnings.warn(
+   0%| | 1/5031 [00:26<37:20:03, 26.72s/it]
+   0%| | 2/5031 [00:35<22:22:14, 16.01s/it]
+   0%| | 3/5031 [00:44<18:14:53, 13.07s/it]
+   0%| | 4/5031 [00:53<15:42:06, 11.24s/it]
+   0%| | 5/5031 [01:01<14:22:16, 10.29s/it]
+   0%| | 6/5031 [01:10<13:45:28, 9.86s/it]
+   0%| | 7/5031 [01:20<13:29:24, 9.67s/it]
+   0%| | 8/5031 [01:28<13:04:18, 9.37s/it]
+   0%| | 9/5031 [01:40<13:57:51, 10.01s/it]
+   0%| | 10/5031 [01:48<13:16:33, 9.52s/it]
+   0%| | 11/5031 [01:57<12:55:33, 9.27s/it]
+   0%| | 12/5031 [02:06<12:44:51, 9.14s/it]
+   0%| | 13/5031 [02:16<13:18:55, 9.55s/it]
+ srun: Job step aborted: Waiting up to 2 seconds for job step to finish.
+ srun: Easily find out why your job was killed by following the link below:
+ https://docs.phoenix.sensetime.com/FAQ/SlurmFAQ/Find-out-why-my-job-was-killed/
+ slurmstepd: error: *** JOB 17652629 ON SH-IDC1-10-140-0-209 CANCELLED AT 2025-04-02T01:12:20 ***
+ srun: got SIGCONT
+   0%| | 14/5031 [02:25<12:54:44, 9.27s/it]
+   0%| | 15/5031 [02:33<12:36:57, 9.05s/it]
+   0%| | 16/5031 [02:42<12:27:47, 8.95s/it]
+   0%| | 17/5031 [02:51<12:18:47, 8.84s/it]
+ slurmstepd: error: *** STEP 17652629.0 ON SH-IDC1-10-140-0-209 CANCELLED AT 2025-04-02T01:12:20 ***
+ srun: forcing job termination