['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 3.4635e-09, 3.0635e-10, 2.5252e-09, 2.0729e-10, 5.7565e-11,
8.3094e-12, 3.3670e-09], device='cuda:1', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1., device='cuda:1', grad_fn=<DivBackward0>), False: tensor(3.0635e-10, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-3.0635e-10, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many rodents are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
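For orientation: each program trace in this log is a short sequence of VQA / EVAL / RESULT steps like the one above. The sketch below shows one way such a trace could be interpreted; the `vqa_model` callable, the regex-based parsing, and the yes/no-to-boolean substitution are illustrative assumptions, not the system's actual executor.

```python
import re

def execute_program(program: str, images: dict, vqa_model):
    """Minimal sketch of an executor for the VQA/EVAL/RESULT traces above.

    Only the step names and the '{ANSWER0}' templating come from the log;
    everything else is an illustrative assumption.
    """
    state = {}
    for line in program.strip().splitlines():
        var, call = line.split("=", 1)
        if call.startswith("VQA"):
            image = re.search(r"image=(\w+)", call).group(1)
            question = re.search(r"question='([^']*)'", call).group(1)
            state[var] = vqa_model(images[image], question)  # e.g. '2' or 'yes'
        elif call.startswith("EVAL"):
            expr = re.search(r"expr='([^']*)'", call).group(1).format(**state)
            # Simplification: map yes/no answers onto Python booleans so a
            # bare '{ANSWER0}' expression evaluates; parse failures surface
            # as the 'Execute Error' outcome seen throughout this log.
            expr = expr.replace("yes", "True").replace("no", "False")
            try:
                state[var] = eval(expr)
            except Exception:
                state[var] = "Execute Error"
        elif call.startswith("RESULT"):
            return state[re.search(r"var=(\w+)", call).group(1)]
```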
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3403
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3402
tensor([2.6063e-07, 5.5597e-01, 2.7658e-02, 4.8767e-04, 4.1565e-01, 6.3253e-05,
7.2049e-05, 1.0488e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
babies *************
['7 eleven', 'babies', 'sunrise', 'eating', 'feet', 'candle', 'light', 'floating'] tensor([2.6063e-07, 5.5597e-01, 2.7658e-02, 4.8767e-04, 4.1565e-01, 6.3253e-05,
7.2049e-05, 1.0488e-04], device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:3', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:3', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:3', grad_fn=<DivBackward0>)}
question: ['How many rodents are in the image?'], responses:['δΈ‰']
[('biking', 0.12639990046765587), ('geese', 0.1262789403477572), ('cushion', 0.1253965842661667), ('bulldog', 0.1252365705078606), ('striped', 0.12499404846420245), ('floral', 0.12444127054742124), ('stove', 0.12381223353082338), ('dodgers', 0.12344045186811266)]
[['biking', 'geese', 'cushion', 'bulldog', 'striped', 'floral', 'stove', 'dodgers']]
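The episode above shows where the 'Execute Error' mass comes from: the generated response 'δΈ‰' (the Chinese numeral for three) and all eight scored candidates are unparseable as counts, so the entire distribution collapses onto 'Execute Error'. Below is a sketch of that soft evaluation, assuming each candidate's probability is pushed through the EVAL predicate; the function name and parsing rules are assumptions, not the logged implementation.

```python
def soft_eval(candidates, probs, predicate):
    """Map a distribution over candidate answers to masses on
    True / False / 'Execute Error'.

    Sketch only: `predicate` stands in for an EVAL expression such as
    lambda n: n == 2; the real parsing/normalization is not shown in the log.
    """
    masses = {True: probs.new_zeros(()),
              False: probs.new_zeros(()),
              "Execute Error": probs.new_zeros(())}
    for cand, p in zip(candidates, probs):
        try:
            outcome = predicate(int(cand))   # int('δΈ‰') raises ValueError
        except ValueError:
            masses["Execute Error"] = masses["Execute Error"] + p
        else:
            masses[outcome] = masses[outcome] + p
    return masses

# Applied to the episode above: none of ['biking', 'geese', 'cushion', ...]
# parse as integers, so 'Execute Error' receives the full mass and
# True/False stay at 0, matching the logged distribution.
```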
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3402
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3403
tensor([9.8184e-01, 1.8159e-02, 6.7817e-07, 3.1547e-06, 1.1296e-09, 3.5106e-07,
1.1668e-08, 5.1311e-07], device='cuda:2', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([9.8184e-01, 1.8159e-02, 6.7817e-07, 3.1547e-06, 1.1296e-09, 3.5106e-07,
1.1668e-08, 5.1311e-07], device='cuda:2', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1.0000, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(6.7817e-07, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:2', grad_fn=<DivBackward0>)}
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3403
tensor([1.0000e+00, 6.3838e-08, 1.8582e-10, 2.2658e-08, 3.2077e-10, 6.6916e-10,
4.2456e-11, 2.3077e-08], device='cuda:0', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 6.3838e-08, 1.8582e-10, 2.2658e-08, 3.2077e-10, 6.6916e-10,
4.2456e-11, 2.3077e-08], device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1.0000, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(1.8582e-10, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1902e-07, device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Are the doors open in the image?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
tensor([4.3767e-05, 1.3872e-03, 4.9485e-02, 7.9379e-01, 9.0253e-02, 4.6355e-02,
2.3832e-03, 1.6301e-02], device='cuda:1', grad_fn=<SoftmaxBackward0>)
bulldog *************
['biking', 'geese', 'cushion', 'bulldog', 'striped', 'floral', 'stove', 'dodgers'] tensor([4.3767e-05, 1.3872e-03, 4.9485e-02, 7.9379e-01, 9.0253e-02, 4.6355e-02,
2.3832e-03, 1.6301e-02], device='cuda:1', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:1', grad_fn=<DivBackward0>)}
question: ['Are the doors open in the image?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
tensor([1.0000e+00, 3.2836e-09, 6.0836e-07, 2.0203e-09, 6.5602e-12, 1.0483e-11,
2.4856e-11, 7.1947e-10], device='cuda:0', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 3.2836e-09, 6.0836e-07, 2.0203e-09, 6.5602e-12, 1.0483e-11,
2.4856e-11, 7.1947e-10], device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1.0000, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(6.0836e-07, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.2312e-08, device='cuda:0', grad_fn=<DivBackward0>)}
[2024-10-24 10:09:13,114] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.42 | optimizer_gradients: 0.29 | optimizer_step: 0.32
[2024-10-24 10:09:13,115] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 7074.58 | backward_microstep: 6811.34 | backward_inner_microstep: 6805.22 | backward_allreduce_microstep: 5.96 | step_microstep: 7.62
[2024-10-24 10:09:13,115] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 7074.59 | backward: 6811.33 | backward_inner: 6805.28 | backward_allreduce: 5.94 | step: 7.63
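For scale: forward (7074.6 ms) plus backward (6811.3 ms) comes to roughly 13.9 s per microstep, while the entire optimizer step stays under 10 ms (allgather 1.42 ms, gradients 0.29 ms, step 0.32 ms), which lines up with the ~15.50 s/it pace shown by the progress bar below.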
 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 4679/4844 [19:27:56<42:37, 15.50s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
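The 'Registering ... step' lines suggest each step type is registered by name before the traces below are executed. A minimal registry sketch follows; the decorator and its structure are assumptions, and only the registered step names (VQA_lavis, EVAL, RESULT) come from the log.

```python
# Hypothetical step registry; only the registered names are from the log.
STEP_REGISTRY = {}

def register_step(name):
    def decorator(fn):
        print(f"Registering {name} step")  # mirrors the log lines above
        STEP_REGISTRY[name] = fn
        return fn
    return decorator

@register_step("VQA_lavis")
def vqa_step(image, question):
    ...  # dispatch to the underlying VQA model

@register_step("EVAL")
def eval_step(expr):
    ...  # evaluate the templated expression

@register_step("RESULT")
def result_step(var):
    ...  # return the named variable as the final answer
```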
ANSWER0=VQA(image=RIGHT,question='How many bottles are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='How many jellyfish are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='How many dogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='Are any of the dogs wearing a vest?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['How many bottles are in the image?'], responses:['111']
[('106', 0.12556070940736277), ('120', 0.12533922270280565), ('101', 0.1252441884519632), ('56', 0.12490260878466017), ('52', 0.12476067988206749), ('193', 0.1247440897055595), ('59', 0.12474156001416575), ('75', 0.12470694105141557)]
[['106', '120', '101', '56', '52', '193', '59', '75']]
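Note the failure mode in this episode: the generated response '111' does not appear among the eight scored candidates, and the candidate probabilities are nearly uniform (each β‰ˆ 0.125 = 1/8), suggesting the scorer has essentially no preference when the free-form answer falls outside the candidate set.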
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 324
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 324