FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3398
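As a side note on where 3398 comes from: a minimal sketch, assuming InternVL-style dynamic tiling (448x448 tiles, 14x14 patches, 0.5x pixel-shuffle downsampling, i.e. 256 visual tokens per tile); the remainder beyond 13 x 256 = 3328 would then be prompt text tokens.

```python
# Minimal sketch, assuming InternVL-style dynamic tiling: a 448x448 tile with
# 14x14 patches gives 32x32 patches; 0.5x pixel shuffle halves each side to
# 16x16 = 256 visual tokens per tile. 13 tiles -> 3328 tokens; the log's 3398
# would then include ~70 text tokens on top (an assumption, not shown here).
def visual_tokens_per_tile(tile_size=448, patch_size=14, downsample=0.5):
    patches_per_side = tile_size // patch_size
    tokens_per_side = int(patches_per_side * downsample)
    return tokens_per_side * tokens_per_side

print(13 * visual_tokens_per_tile())  # 3328
```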
question: ['How many humans are in the image?'], responses:['2']
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3398
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
tensor([8.8733e-01, 6.0383e-02, 8.6991e-03, 3.8982e-02, 3.2001e-03, 6.4994e-04,
        7.1398e-04, 3.9062e-05], device='cuda:0', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([8.8733e-01, 6.0383e-02, 8.6991e-03, 3.8982e-02, 3.2001e-03, 6.4994e-04,
        7.1398e-04, 3.9062e-05], device='cuda:0', grad_fn=<SelectBackward0>)
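The sharp tensor above is a softmax over one score per candidate answer, followed by picking the argmax (the "2 *************" line). A minimal sketch with hypothetical raw scores, since the log only shows the softmax output:

```python
import torch

# Minimal sketch with hypothetical per-candidate scores (not from the log):
# softmax over one score per candidate, then select the argmax, mirroring
# the "2 *************" debug print above.
candidates = ['2', '3', '4', '1', '5', '8', '7', '29']
scores = torch.tensor([2.5, -0.2, -2.1, -0.6, -3.1, -4.7, -4.6, -7.2])
probs = torch.softmax(scores, dim=-1)
best = int(probs.argmax())
print(candidates[best], probs[best].item())  # '2' with p close to 0.89
```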
Final probability distribution: {True: tensor(0.8873, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.1127, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:0', grad_fn=<DivBackward0>)}
Encountered ExecuteError: CUDA out of memory. Tried to allocate 1020.00 MiB. GPU 2 has a total capacity of 44.34 GiB of which 946.94 MiB is free. Including non-PyTorch memory, this process has 43.40 GiB memory in use. Of the allocated memory 41.44 GiB is allocated by PyTorch, and 1.33 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
Final probability distribution: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
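A minimal sketch (hypothetical helper, not the repository's code) of how the per-candidate probabilities could be folded into the {True, False, 'Execute Error'} distributions printed above: each candidate answer is substituted into the EVAL expression, candidates whose evaluation fails contribute their mass to 'Execute Error', and when the whole program fails (like the OOM and TypeError above) the code apparently falls back to a near-one-hot {'Execute Error': ~1.0} distribution, with the 1e-09 floors presumably avoiding exact zeros.

```python
import torch

# Hypothetical sketch: map a distribution over candidate answers through the
# EVAL expression to a distribution over {True, False, 'Execute Error'}.
# The probabilities below are the softmax values printed earlier in the log;
# the expression is taken from the second program ('{ANSWER0} == 2').
def outcome_distribution(candidates, probs, expr):
    dist = {True: 0.0, False: 0.0, 'Execute Error': 0.0}
    for ans, p in zip(candidates, probs):
        try:
            dist[bool(eval(expr.format(ANSWER0=ans)))] += p
        except Exception:
            dist['Execute Error'] += p
    return dist

cands = ['2', '3', '4', '1', '5', '8', '7', '29']
probs = torch.tensor([8.8733e-01, 6.0383e-02, 8.6991e-03, 3.8982e-02,
                      3.2001e-03, 6.4994e-04, 7.1398e-04, 3.9062e-05])
print(outcome_distribution(cands, probs.tolist(), '{ANSWER0} == 2'))
# -> True gets the mass of '2' (~0.887); all other candidates go to False.
```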
ANSWER0=VQA(image=RIGHT,question='How many pillows are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
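For context, the three lines above are a small tool-use program. A minimal sketch of an interpreter for this format (hypothetical parser and stub tools; the actual executor is not shown in this log):

```python
import re

# Hypothetical sketch of an interpreter for the VQA/EVAL/RESULT format above:
# each line binds a variable to a tool call, string arguments may reference
# earlier variables via {NAME}, and RESULT returns the named variable.
def run_program(lines, tools):
    env = {}
    for line in lines:
        var, call = (s.strip() for s in line.split('=', 1))
        name, argstr = re.match(r"(\w+)\((.*)\)$", call).groups()
        kwargs = {}
        for kv in argstr.split(','):
            k, v = (s.strip() for s in kv.split('=', 1))
            kwargs[k] = v.strip("'").format(**env)  # substitute {ANSWER0} etc.
        if name == 'RESULT':
            return env[kwargs['var']]
        env[var] = tools[name](**kwargs)

tools = {
    'VQA': lambda image, question: '2',  # stub; the real tool queries the VLM
    'EVAL': lambda expr: eval(expr),     # e.g. '2 == 2' -> True
}
program = [
    "ANSWER0=VQA(image=RIGHT,question='How many pillows are in the image?')",
    "ANSWER1=EVAL(expr='{ANSWER0} == 2')",
    "FINAL_ANSWER=RESULT(var=ANSWER1)",
]
print(run_program(program, tools))  # True (because the stub answers '2')
```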
torch.Size([1, 3, 448, 448])
question: ['How many pillows are in the image?'], responses:['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
tensor([9.8807e-01, 2.6068e-03, 1.4850e-03, 4.6540e-04, 7.9041e-04, 6.1294e-04,
        5.8753e-03, 9.2662e-05], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.8807e-01, 2.6068e-03, 1.4850e-03, 4.6540e-04, 7.9041e-04, 6.1294e-04,
        5.8753e-03, 9.2662e-05], device='cuda:2', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.0059, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.9941, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
tensor([8.7346e-01, 6.3246e-02, 1.5991e-02, 4.0802e-02, 4.3045e-03, 9.9091e-04,
        1.1232e-03, 7.8810e-05], device='cuda:3', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([8.7346e-01, 6.3246e-02, 1.5991e-02, 4.0802e-02, 4.3045e-03, 9.9091e-04,
        1.1232e-03, 7.8810e-05], device='cuda:3', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.8735, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.1265, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:3', grad_fn=<DivBackward0>)}
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Traceback (most recent call last):
  File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 898, in <module>
    main()
  File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 882, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/transformers/trainer.py", line 2783, in training_step
    self.accelerator.backward(loss)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/accelerate/accelerator.py", line 2188, in backward
    self.deepspeed_engine_wrapped.backward(loss, **kwargs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 166, in backward
    self.engine.backward(loss, **kwargs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1895, in backward
    self.optimizer.backward(loss, retain_graph=retain_graph)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1902, in backward
    self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
    scaled_loss.backward(retain_graph=retain_graph)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/autograd/function.py", line 288, in apply
    return user_fn(self, *args)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 288, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB. GPU 3 has a total capacity of 44.34 GiB of which 2.94 MiB is free. Including non-PyTorch memory, this process has 44.32 GiB memory in use. Of the allocated memory 41.03 GiB is allocated by PyTorch, and 2.74 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
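The stack shows the OOM firing inside the backward of a checkpointed segment under DeepSpeed ZeRO, with 2.74 GiB reserved but unallocated, i.e. allocator fragmentation. A minimal sketch of the mitigation the message itself suggests (128 MiB is an illustrative value, not tuned for this run):

```python
import os

# Mitigation suggested by the error message: cap the allocator's block split
# size to reduce fragmentation. This must run before the first CUDA
# allocation, e.g. at the top of the training script, or equivalently be
# exported in the launch script before torchrun starts.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```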
[2024-10-22 17:32:36,618] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2084211 closing signal SIGTERM
[2024-10-22 17:32:36,618] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2084212 closing signal SIGTERM
[2024-10-22 17:32:36,619] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 2084213 closing signal SIGTERM
[2024-10-22 17:32:38,236] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 3 (pid: 2084214) of binary: /home/yunjie/anaconda3/envs/internvl/bin/python
Traceback (most recent call last):
  File "/home/yunjie/anaconda3/envs/internvl/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main
    run(args)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(