question: ['What color is the purse in the image?'], responses:['white']
[('white', 0.12741698904857263), ('black', 0.12562195821587463), ('purple', 0.12482758531934457), ('orange', 0.12467593918870701), ('maroon', 0.12456097552653009), ('color', 0.12448461429606533), ('brown', 0.12421598902969112), ('dark', 0.12419594937521464)]
[['white', 'black', 'purple', 'orange', 'maroon', 'color', 'brown', 'dark']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
tensor([0.5853, 0.0655, 0.0331, 0.0174, 0.0426, 0.0039, 0.2427, 0.0095],
device='cuda:1', grad_fn=<SoftmaxBackward0>)
white *************
['white', 'black', 'purple', 'orange', 'maroon', 'color', 'brown', 'dark'] tensor([0.5853, 0.0655, 0.0331, 0.0174, 0.0426, 0.0039, 0.2427, 0.0095],
device='cuda:1', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:1', grad_fn=<DivBackward0>)}
question: ['What color is the keyboard?'], responses:['black']
[('black', 0.12706825260511387), ('white', 0.12527812565897103), ('dark', 0.1250491849195085), ('purple', 0.12486259083591467), ('orange', 0.12479002203010545), ('red', 0.12434049404478545), ('maroon', 0.12433890776852753), ('blue', 0.12427242213707339)]
[['black', 'white', 'dark', 'purple', 'orange', 'red', 'maroon', 'blue']]
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
tensor([0.8472, 0.0290, 0.0309, 0.0049, 0.0008, 0.0839, 0.0024, 0.0009],
device='cuda:2', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.8472, 0.0290, 0.0309, 0.0049, 0.0008, 0.0839, 0.0024, 0.0009],
device='cuda:2', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.8472, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.1528, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many seals are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
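
Each sample executes a short visual program like the one above: a VQA call, an optional EVAL over its answer, and a RESULT naming the final variable. A toy interpreter for this three-step DSL (the parsing and names are hypothetical; the actual executor dispatches to registered step classes, as the "Registering ... step" lines later show):

import re

STEP = re.compile(r"(?P<out>\w+)=(?P<op>\w+)\((?P<args>.*)\)$")
ARG = re.compile(r"(\w+)=('[^']*'|[^,]+)")

def run_program(program, vqa_fn, image):
    # Thread variables through an environment, one DSL line at a time.
    env = {}
    for line in program.strip().splitlines():
        m = STEP.match(line)
        args = {k: v.strip("'") for k, v in ARG.findall(m['args'])}
        if m['op'] == 'VQA':
            env[m['out']] = vqa_fn(image, args['question'])
        elif m['op'] == 'EVAL':
            try:  # '{ANSWER0} == 2' -> e.g. '1 == 2' -> False
                env[m['out']] = eval(args['expr'].format(**env))
            except Exception:
                env[m['out']] = 'Execute Error'
        elif m['op'] == 'RESULT':
            return env[args['var']]

With the VQA step answering '1' for the seal question, EVAL yields False, which is what the distribution below reflects in expectation over the candidate answers.
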
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
question: ['How many seals are in the image?'], responses:['1']
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
tensor([0.2743, 0.2215, 0.2214, 0.0611, 0.1007, 0.0732, 0.0463, 0.0015],
device='cuda:3', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([0.2743, 0.2215, 0.2214, 0.0611, 0.1007, 0.0732, 0.0463, 0.0015],
device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0.6646, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.3354, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many baboons are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
question: ['How many baboons are in the image?'], responses:['0']
[('0', 0.13077743594303964), ('circles', 0.12449813349255197), ('maroon', 0.12428926693968681), ('large', 0.1242263466991631), ('rooster', 0.12409315512763705), ('nuts', 0.12408018414184876), ('beige', 0.1240288472550799), ('bottle', 0.12400663040099273)]
[['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3395
tensor([4.3334e-01, 1.0954e-01, 2.5216e-02, 3.1070e-03, 6.3759e-03, 1.5568e-03,
4.2078e-01, 8.4552e-05], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([4.3334e-01, 1.0954e-01, 2.5216e-02, 3.1070e-03, 6.3759e-03, 1.5568e-03,
4.2078e-01, 8.4552e-05], device='cuda:2', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
{True: tensor(0.4208, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.5792, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:2', grad_fn=<DivBackward0>)}
tensor([0.7898, 0.0870, 0.0364, 0.0064, 0.0083, 0.0253, 0.0210, 0.0258],
device='cuda:0', grad_fn=<SoftmaxBackward0>)
black *************
['black', 'white', 'dark', 'purple', 'orange', 'red', 'maroon', 'blue'] tensor([0.7898, 0.0870, 0.0364, 0.0064, 0.0083, 0.0253, 0.0210, 0.0258],
device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:0', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:0', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Does the image contain a tree house?')
FINAL_ANSWER=RESULT(var=ANSWER0)
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 3.20 GiB. GPU 0 has a total capacity of 44.34 GiB of which 3.20 GiB is free. Including non-PyTorch memory, this process has 41.13 GiB memory in use. Of the allocated memory 38.63 GiB is allocated by PyTorch, and 1.88 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
tensor([9.8177e-01, 2.0182e-03, 1.5950e-03, 6.8146e-04, 1.2954e-03, 1.2288e-03,
2.0303e-03, 9.3771e-03], device='cuda:3', grad_fn=<SoftmaxBackward0>)
0 *************
['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle'] tensor([9.8177e-01, 2.0182e-03, 1.5950e-03, 6.8146e-04, 1.2954e-03, 1.2288e-03,
2.0303e-03, 9.3771e-03], device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:3', grad_fn=<MulBackward0>), False: tensor(0.9818, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0182, device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many rodents are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 3.21 GiB. GPU 3 has a total capacity of 44.34 GiB of which 1.76 GiB is free. Including non-PyTorch memory, this process has 42.57 GiB memory in use. Of the allocated memory 40.55 GiB is allocated by PyTorch, and 1.46 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:20:56,883] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.35 | optimizer_gradients: 0.27 | optimizer_step: 0.32
[2024-10-22 17:20:56,884] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 11919.90 | backward_microstep: 11580.66 | backward_inner_microstep: 10141.59 | backward_allreduce_microstep: 1438.67 | step_microstep: 7.65
[2024-10-22 17:20:56,884] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 11919.92 | backward: 11580.64 | backward_inner: 10141.75 | backward_allreduce: 1438.65 | step: 7.66
0%| | 6/2424 [02:29<15:57:53, 23.77s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
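
Before each program runs, its ops are resolved against a step registry, which is what the "Registering VQA_lavis/EVAL/RESULT step" lines report (VQA_lavis suggests a LAVIS-backed VQA module). A minimal sketch of such a registry, with hypothetical names:

STEP_REGISTRY = {}

def register_step(cls):
    # Map a step implementation to its name, mirroring the
    # 'Registering VQA_lavis step' log lines (decorator is hypothetical).
    STEP_REGISTRY[cls.__name__] = cls
    print(f"Registering {cls.__name__} step")
    return cls

@register_step
class VQA_lavis:
    def __call__(self, image, question):
        ...  # run the VQA model, return candidate answers and scores
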
ANSWER0=VQA(image=RIGHT,question='How many sets of measuring utensils are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='Does the image show a hound standing on thick green grass?')
ANSWER1=RESULT(var=ANSWER0)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step