tensor([5.4584e-01, 4.5252e-01, 7.6914e-05, 1.2162e-04, 1.6979e-04, 8.0130e-04,
4.3414e-04, 3.2882e-05], device='cuda:3', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([5.4584e-01, 4.5252e-01, 7.6914e-05, 1.2162e-04, 1.6979e-04, 8.0130e-04,
4.3414e-04, 3.2882e-05], device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.4525, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.5458, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0016, device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many hamsters are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
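The three lines above are one complete program in the VQA/EVAL/RESULT step language this run executes (the "Registering VQA_lavis step" / "Registering EVAL step" / "Registering RESULT step" lines further down show the steps being registered once per program). As a rough, hypothetical sketch of how such a program could be interpreted; the registry, argument parsing, and step signatures here are assumptions, not the actual codebase, and `state` is assumed to hold the input images (e.g. LEFT/RIGHT) plus the VQA model:

import re

STEP_REGISTRY = {}

def register_step(name):
    # mirrors the "Registering ... step" log lines
    def deco(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return deco

@register_step("VQA")
def vqa_step(state, image, question):
    # stub: the real VQA_lavis step queries the vision-language model
    return state["vqa_model"](state[image], question)

@register_step("EVAL")
def eval_step(state, expr):
    # '{ANSWER0} == 2' -> substitute the stored answer, then evaluate;
    # a real executor would sandbox this instead of bare eval()
    return eval(expr.format(**state))

@register_step("RESULT")
def result_step(state, var):
    return state[var]

LINE_RE = re.compile(r"(\w+)=(\w+)\((.*)\)")

def run_program(program, state):
    for line in program.strip().splitlines():
        out, step, args = LINE_RE.match(line).groups()
        kwargs = {}
        if args:
            for kv in args.split(","):  # naive: breaks if a question contains a comma
                k, v = kv.split("=", 1)
                kwargs[k] = v.strip("'")
        state[out] = STEP_REGISTRY[step](state, **kwargs)
    return state["FINAL_ANSWER"]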
torch.Size([3, 3, 448, 448])
torch.Size([5, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1349
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1349
question: ['How many hamsters are in the image?'], responses:['1']
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1349
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
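The eight (answer, probability) pairs above, together with the grad_fn=<SoftmaxBackward0> tensors printed below for the same candidate list, suggest each candidate answer is scored under the model and the scores are softmaxed over the candidate set, giving a differentiable distribution. The log does not show how this is implemented; the following is only one plausible sketch, assuming an HF-style causal LM interface (tokenizer, model.logits):

import torch
import torch.nn.functional as F

def score_candidates(model, tokenizer, prompt_ids, candidates):
    seq_logps = []
    for cand in candidates:
        cand_ids = tokenizer(cand, add_special_tokens=False,
                             return_tensors="pt").input_ids.to(prompt_ids.device)
        input_ids = torch.cat([prompt_ids, cand_ids], dim=1)
        logits = model(input_ids=input_ids).logits
        # the predictive distribution for each candidate token comes from
        # the position immediately before it
        logps = F.log_softmax(logits[0, prompt_ids.shape[1] - 1:-1], dim=-1)
        tok_logps = logps.gather(1, cand_ids[0].unsqueeze(1)).squeeze(1)
        seq_logps.append(tok_logps.sum())
    # softmax over the candidate set -> the SoftmaxBackward0 tensors in the log
    return torch.softmax(torch.stack(seq_logps), dim=0)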
torch.Size([3, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1349
(last line repeated 4 more times)
tensor([6.4988e-01, 4.4225e-02, 1.0504e-02, 1.8252e-03, 3.6301e-03, 1.4658e-03,
2.8838e-01, 8.8708e-05], device='cuda:3', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([6.4988e-01, 4.4225e-02, 1.0504e-02, 1.8252e-03, 3.6301e-03, 1.4658e-03,
2.8838e-01, 8.8708e-05], device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.2884, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.7116, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:3', grad_fn=<DivBackward0>)}
tensor([8.5591e-01, 2.9285e-02, 9.8087e-03, 2.3964e-03, 4.3501e-03, 2.0745e-03,
9.6030e-02, 1.4712e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([8.5591e-01, 2.9285e-02, 9.8087e-03, 2.3964e-03, 4.3501e-03, 2.0745e-03,
9.6030e-02, 1.4712e-04], device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.1441, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.8559, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is there water in the image?')
FINAL_ANSWER=RESULT(var=ANSWER0)
tensor([9.1888e-01, 8.0291e-02, 6.5637e-05, 1.4110e-04, 2.6952e-04, 5.9935e-05,
2.2502e-04, 6.5721e-05], device='cuda:2', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([9.1888e-01, 8.0291e-02, 6.5637e-05, 1.4110e-04, 2.6952e-04, 5.9935e-05,
2.2502e-04, 6.5721e-05], device='cuda:2', grad_fn=<SelectBackward0>)
torch.Size([7, 3, 448, 448])
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.0803, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.9189, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0008, device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is there water in the image?')
FINAL_ANSWER=RESULT(var=ANSWER0)
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 3.20 GiB. GPU 2 has a total capacity of 44.34 GiB of which 2.45 GiB is free. Including non-PyTorch memory, this process has 41.87 GiB memory in use. Of the allocated memory 38.69 GiB is allocated by PyTorch, and 2.54 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
question: ['Is there water in the image?'], responses:['yes']
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1859
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1862
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1859
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1859
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1859
Encountered ExecuteError: CUDA out of memory. Tried to allocate 656.00 MiB. GPU 0 has a total capacity of 44.34 GiB of which 496.94 MiB is free. Including non-PyTorch memory, this process has 43.84 GiB memory in use. Of the allocated memory 40.55 GiB is allocated by PyTorch, and 2.67 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:19:24,437] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.44 | optimizer_gradients: 0.20 | optimizer_step: 0.30
[2024-10-22 17:19:24,438] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 12988.75 | backward_microstep: 10520.11 | backward_inner_microstep: 10514.62 | backward_allreduce_microstep: 5.40 | step_microstep: 7.39
[2024-10-22 17:19:24,438] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 12988.76 | backward: 10520.10 | backward_inner: 10514.64 | backward_allreduce: 5.38 | step: 7.40
0%| | 2/2424 [00:56<18:30:06, 27.50s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
(last three lines repeated 2 more times)
ANSWER0=VQA(image=LEFT,question='How many chimneys are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='How many warthogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='Is there a structure with a wooden roof to the right of the yurt?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='How many creatures are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 8')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['How many warthogs are in the image?'], responses:['5']
question: ['How many creatures are in the image?'], responses:['many']
[('5', 0.12793059870235002), ('8', 0.12539646467821697), ('4', 0.12509737486793587), ('6', 0.12470234839853608), ('3', 0.12467331676337925), ('7', 0.12441254825093238), ('11', 0.12401867309944531), ('9', 0.12376867523920407)]
[['5', '8', '4', '6', '3', '7', '11', '9']]
[('many', 0.12680051474066337), ('few', 0.12559712123098582), ('several', 0.12545126119006317), ('blinds', 0.12452572291517987), ('moss', 0.12441899466837554), ('rainbow', 0.1244056457460399), ('kite', 0.12440323404357946), ('directions', 0.12439750546511286)]
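Note that the 'How many creatures are in the image?' question produced non-numeric candidates ('many', 'few', 'several'). When one of these is substituted into an expression like '{ANSWER0} <= 8' from the program above, the formatted string is not valid Python, so under the bucketing sketch earlier its probability mass lands in the 'Execute Error' bucket:

expr = "{ANSWER0} <= 8"
try:
    print(eval(expr.format(ANSWER0="many")))  # eval("many <= 8")
except NameError as err:
    print("Execute Error:", err)              # name 'many' is not defined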