dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 324
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 324
tensor([7.9498e-01, 1.1453e-01, 2.4008e-02, 5.4101e-02, 8.0272e-03, 1.9098e-03,
2.3736e-03, 6.8425e-05], device='cuda:0', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([7.9498e-01, 1.1453e-01, 2.4008e-02, 5.4101e-02, 8.0272e-03, 1.9098e-03,
2.3736e-03, 6.8425e-05], device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.7950, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.2050, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:0', grad_fn=<DivBackward0>)}
tensor([0.1630, 0.1254, 0.1603, 0.1617, 0.1602, 0.0347, 0.0914, 0.1034],
device='cuda:2', grad_fn=<SoftmaxBackward0>)
10 *************
['10', '11', '12', '8', '9', '26', '13', '6'] tensor([0.1630, 0.1254, 0.1603, 0.1617, 0.1602, 0.0347, 0.0914, 0.1034],
device='cuda:2', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1., device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0., device='cuda:2', grad_fn=<MulBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
tensor([9.8959e-01, 1.4607e-03, 1.2284e-03, 2.0534e-04, 9.4250e-04, 2.7740e-04,
1.2083e-03, 5.0849e-03], device='cuda:1', grad_fn=<SoftmaxBackward0>)
0 *************
['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle'] tensor([9.8959e-01, 1.4607e-03, 1.2284e-03, 2.0534e-04, 9.4250e-04, 2.7740e-04,
1.2083e-03, 5.0849e-03], device='cuda:1', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), False: tensor(0.9896, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0104, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is a rodent eating pasta in the image?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
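The three-line programs in this log all follow the same VQA β†’ EVAL β†’ RESULT shape, and the "Registering ... step" lines further below suggest each line is parsed and dispatched to a registered step (with VQA apparently mapped to a VQA_lavis step). A rough sketch of such an interpreter, with an assumed registry and deliberately naive argument parsing:

```python
import re

# Sketch only: the step registry, keyword handling, and regex are
# assumptions; the actual executor is not shown in this log.
STEP = re.compile(r"(\w+)=(\w+)\((.*)\)")

def run_program(program: str, registry: dict, state: dict):
    for line in program.strip().splitlines():
        target, op, arg_str = STEP.match(line).groups()
        print(f"Registering {op} step")  # mirrors the log lines below
        # naive split: assumes no commas inside argument values
        kwargs = dict(kv.split("=", 1) for kv in arg_str.split(",")) if arg_str else {}
        state[target] = registry[op](state=state, **kwargs)
    return state.get("FINAL_ANSWER")
```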
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 5.86 GiB. GPU 1 has a total capacity of 44.34 GiB of which 3.88 GiB is free. Including non-PyTorch memory, this process has 40.44 GiB memory in use. Of the allocated memory 38.10 GiB is allocated by PyTorch, and 1.71 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:28:57,050] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.39 | optimizer_gradients: 0.28 | optimizer_step: 0.32
[2024-10-22 17:28:57,051] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 8941.20 | backward_microstep: 10928.38 | backward_inner_microstep: 8522.65 | backward_allreduce_microstep: 2405.63 | step_microstep: 7.50
[2024-10-22 17:28:57,051] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 8941.21 | backward: 10928.37 | backward_inner: 8522.67 | backward_allreduce: 2405.61 | step: 7.52
1%| | 26/2424 [10:29<15:13:59, 22.87s/it]
Registering VQA_lavis step
Registering EVAL step
Registering VQA_lavis step
Registering RESULT step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='How many sets of measuring utensils are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='How many rolls of paper towels are in the package?')
ANSWER1=EVAL(expr='{ANSWER0} >= 6')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='How many puppies are lying down in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='How many objects are standing straight up in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 9')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([13, 3, 448, 448])
torch.Size([7, 3, 448, 448])
question: ['How many puppies are lying down in the image?'], responses:['3']
question: ['How many rolls of paper towels are in the package?'], responses:['13']
question: ['How many objects are standing straight up in the image?'], responses:['5']
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
[('13', 0.12770862924411772), ('14', 0.12534395389083108), ('21', 0.12493249815266858), ('12', 0.12491814916612239), ('11', 0.12461120999761086), ('27', 0.12444592740053353), ('15', 0.12414436865504584), ('29', 0.1238952634930699)]
[['13', '14', '21', '12', '11', '27', '15', '29']]
[('5', 0.12793059870235002), ('8', 0.12539646467821697), ('4', 0.12509737486793587), ('6', 0.12470234839853608), ('3', 0.12467331676337925), ('7', 0.12441254825093238), ('11', 0.12401867309944531), ('9', 0.12376867523920407)]
[['5', '8', '4', '6', '3', '7', '11', '9']]
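The pairs above look like (decoded token, probability) lists produced by a top-k over a softmaxed logit vector at the answer position. A sketch of how they might be extracted (function and argument names are illustrative, not from the actual code):

```python
import torch

# Assumed sketch: softmax the logits at the answer position, keep the k most
# likely tokens, and decode each id back to a string, yielding pairs like
# [('3', 0.1281), ('4', 0.1252), ...] as printed above.
def topk_candidates(logits: torch.Tensor, tokenizer, k: int = 8):
    probs = torch.softmax(logits, dim=-1)
    top_p, top_idx = probs.topk(k)
    return [(tokenizer.decode([int(i)]), float(p)) for i, p in zip(top_idx, top_p)]
```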
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
question: ['How many sets of measuring utensils are in the image?'], responses:['5']
[('5', 0.12793059870235002), ('8', 0.12539646467821697), ('4', 0.12509737486793587), ('6', 0.12470234839853608), ('3', 0.12467331676337925), ('7', 0.12441254825093238), ('11', 0.12401867309944531), ('9', 0.12376867523920407)]
[['5', '8', '4', '6', '3', '7', '11', '9']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
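The "dynamic ViT batch size: 7" lines together with pixel-value shapes like torch.Size([7, 3, 448, 448]) indicate InternVL-style dynamic resolution: each image is cut into a variable number of 448x448 tiles, and the tiles form the ViT batch. A simplified, hedged sketch of that tiling (grid search and thumbnail handling are assumptions based on the public InternVL preprocessing, not this log's code):

```python
from PIL import Image

def dynamic_tile(image: Image.Image, tile: int = 448, max_tiles: int = 12,
                 add_thumbnail: bool = True):
    # Pick the grid (cols x rows) whose aspect ratio best matches the input,
    # resize to that grid, and cut into tile x tile crops. A 2x3 grid plus a
    # global thumbnail yields the 7 tiles seen as torch.Size([7, 3, 448, 448]).
    w, h = image.size
    cols, rows = min(
        ((c, r) for c in range(1, max_tiles + 1) for r in range(1, max_tiles + 1)
         if c * r <= max_tiles),
        key=lambda g: abs(g[0] / g[1] - w / h),
    )
    resized = image.resize((cols * tile, rows * tile))
    crops = [resized.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
             for r in range(rows) for c in range(cols)]
    if add_thumbnail and len(crops) > 1:
        crops.append(image.resize((tile, tile)))  # global view as final tile
    return crops  # each crop becomes one row of the pixel_values batch
```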
tensor([0.3802, 0.0662, 0.0936, 0.0178, 0.0022, 0.4309, 0.0081, 0.0010],
device='cuda:2', grad_fn=<SoftmaxBackward0>)
2 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.3802, 0.0662, 0.0936, 0.0178, 0.0022, 0.4309, 0.0081, 0.0010],
device='cuda:2', grad_fn=<SelectBackward0>)
tensor([0.4789, 0.0834, 0.0221, 0.2482, 0.0838, 0.0110, 0.0631, 0.0094],
device='cuda:3', grad_fn=<SoftmaxBackward0>)
13 *************
['13', '14', '21', '12', '11', '27', '15', '29'] tensor([0.4789, 0.0834, 0.0221, 0.2482, 0.0838, 0.0110, 0.0631, 0.0094],
device='cuda:3', grad_fn=<SelectBackward0>)
tensor([0.2074, 0.0852, 0.1650, 0.1714, 0.1257, 0.1376, 0.0314, 0.0763],
device='cuda:0', grad_fn=<SoftmaxBackward0>)
5 *************
['5', '8', '4', '6', '3', '7', '11', '9'] tensor([0.2074, 0.0852, 0.1650, 0.1714, 0.1257, 0.1376, 0.0314, 0.0763],
device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1., device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0., device='cuda:3', grad_fn=<MulBackward0>), 'Execute Error': tensor(0., device='cuda:3', grad_fn=<DivBackward0>)}
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.0936, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.9064, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}