['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 2.9019e-08, 5.6450e-11, 2.9311e-08, 1.2373e-10, 2.4235e-10,
3.0871e-11, 1.6674e-08], device='cuda:0', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1.0000, device='cuda:0', grad_fn=<UnbindBackward0>), False: tensor(5.6450e-11, device='cuda:0', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(1.1915e-07, device='cuda:0', grad_fn=<SubBackward0>)}
question: ['How many sets of pads are in the image?'], responses:['4']
[('4', 0.12804651361935848), ('5', 0.12521071898947128), ('3', 0.12515925906184908), ('8', 0.12489091845155219), ('6', 0.1245383468146311), ('1', 0.12441141527606933), ('2', 0.12403713327181662), ('11', 0.12370569451525179)]
[['4', '5', '3', '8', '6', '1', '2', '11']]
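Eight candidates scored at roughly 1/8 = 0.125 each means this first ranking is nearly uniform; the sharply peaked softmax tensors further down appear to come from a separate re-scoring pass. A tiny sketch of how such a ranked (answer, probability) list could be produced from summed per-candidate log-probs (names here are illustrative, not from the training code):

import torch

def rank_candidates(candidates, seq_logprobs):
    """Softmax summed per-candidate log-probs and sort best-first."""
    probs = torch.softmax(torch.tensor(seq_logprobs), dim=0)
    return sorted(zip(candidates, probs.tolist()), key=lambda kv: -kv[1])

# nearly tied log-probs -> the near-uniform ~0.125 scores seen above
print(rank_candidates(['4', '5', '3', '8', '6', '1', '2', '11'],
                      [-3.00, -3.02, -3.03, -3.05, -3.06, -3.07, -3.08, -3.10]))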
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
tensor([6.0935e-05, 8.7979e-04, 7.7511e-02, 6.3467e-01, 1.3181e-01, 1.1890e-01,
5.5002e-03, 3.0667e-02], device='cuda:1', grad_fn=<SoftmaxBackward0>)
bulldog *************
['biking', 'geese', 'cushion', 'bulldog', 'striped', 'floral', 'stove', 'dodgers'] tensor([6.0935e-05, 8.7979e-04, 7.7511e-02, 6.3467e-01, 1.3181e-01, 1.1890e-01,
5.5002e-03, 3.0667e-02], device='cuda:1', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:1', grad_fn=<DivBackward0>)}
tensor([9.5533e-01, 4.4662e-02, 5.8698e-06, 6.7408e-09, 4.1619e-06, 1.3936e-06,
1.9459e-07, 1.6863e-08], device='cuda:2', grad_fn=<SoftmaxBackward0>)
4 *************
['4', '5', '3', '8', '6', '1', '2', '11'] tensor([9.5533e-01, 4.4662e-02, 5.8698e-06, 6.7408e-09, 4.1619e-06, 1.3936e-06,
1.9459e-07, 1.6863e-08], device='cuda:2', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1.0000, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(1.3936e-06, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.7881e-07, device='cuda:2', grad_fn=<DivBackward0>)}
[2024-10-24 09:48:19,274] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.35 | optimizer_gradients: 0.36 | optimizer_step: 0.32
[2024-10-24 09:48:19,274] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 5124.37 | backward_microstep: 12531.49 | backward_inner_microstep: 4960.21 | backward_allreduce_microstep: 7571.21 | step_microstep: 9.14
[2024-10-24 09:48:19,274] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 5124.37 | backward: 12531.48 | backward_inner: 4960.23 | backward_allreduce: 7571.20 | step: 9.15
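Per-step timing lines in this format (forward/backward/step plus optimizer_allgather and backward_allreduce) are what DeepSpeed emits when its wall-clock breakdown is enabled. A minimal config sketch, assuming a standard deepspeed.initialize setup rather than this project's actual launch script:

import torch
import deepspeed

model = torch.nn.Linear(8, 2)  # stand-in model for the sketch

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 1},  # optimizer_allgather in the log suggests ZeRO is on
    "wall_clock_breakdown": True,       # prints the per-step timing lines above
}

# Run under the deepspeed launcher; shown here only to locate the config flag.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)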
95%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 4597/4844 [19:07:03<1:08:38, 16.67s/it]
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
ANSWER0=VQA(image=LEFT,question='How many green and yellow balloons are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
ANSWER0=VQA(image=LEFT,question='What color is the jacket the person is wearing?')
ANSWER1=EVAL(expr='{ANSWER0} == "red"')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering EVAL step
Registering RESULT step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='How many animals are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='How many pairs of mittens are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
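The registered steps above form small VisProg-style programs: a VQA step queries the model, EVAL checks a condition on the returned answer, and RESULT returns the final variable. A minimal interpreter sketch, with a stubbed vqa callable standing in for the real VQA_lavis module (the parsing helpers here are illustrative, not the project's actual executor):

import re

def run_program(program, vqa):
    """Execute a tiny VQA/EVAL/RESULT program line by line.

    program : str - e.g. "ANSWER0=VQA(image=LEFT,question='...')\n..."
    vqa     : callable(image, question) -> str
    """
    env = {}
    for line in program.strip().splitlines():
        var, call = line.split('=', 1)
        if call.startswith('VQA'):
            m = re.match(r"VQA\(image=(\w+),question='(.*)'\)", call)
            env[var] = vqa(m.group(1), m.group(2))
        elif call.startswith('EVAL'):
            m = re.match(r"EVAL\(expr='(.*)'\)", call)
            expr = m.group(1)
            # Substitute prior answers, quoting non-numeric ones.
            for name, val in env.items():
                lit = val if str(val).isdigit() else repr(val)
                expr = expr.replace('{%s}' % name, str(lit))
            try:
                env[var] = eval(expr)
            except Exception:
                env[var] = 'Execute Error'
        elif call.startswith('RESULT'):
            m = re.match(r"RESULT\(var=(\w+)\)", call)
            return env[m.group(1)]

program = """ANSWER0=VQA(image=LEFT,question='How many pairs of mittens are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)"""
print(run_program(program, vqa=lambda image, question: '3'))  # -> True

The 'Execute Error' branch is what feeds the third key of the final probability distributions logged above, e.g. when a non-numeric answer meets a numeric comparison.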
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['How many green and yellow balloons are in the image?'], responses:['1']
question: ['How many animals are in the image?'], responses:['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
question: ['What color is the jacket the person is wearing?'], responses:['black']
question: ['How many pairs of mittens are in the image?'], responses:['3']
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
[('black', 0.12706825260511387), ('white', 0.12527812565897103), ('dark', 0.1250491849195085), ('purple', 0.12486259083591467), ('orange', 0.12479002203010545), ('red', 0.12434049404478545), ('maroon', 0.12433890776852753), ('blue', 0.12427242213707339)]
[['black', 'white', 'dark', 'purple', 'orange', 'red', 'maroon', 'blue']]
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
tensor([1.0000e+00, 5.4389e-10, 5.8470e-11, 1.7052e-10, 2.0649e-10, 1.5711e-08,
6.8256e-08, 4.1616e-10], device='cuda:0', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([1.0000e+00, 5.4389e-10, 5.8470e-11, 1.7052e-10, 2.0649e-10, 1.5711e-08,
6.8256e-08, 4.1616e-10], device='cuda:0', grad_fn=<SelectBackward0>)
tensor([1.0000e+00, 9.5107e-10, 1.7731e-10, 2.9004e-10, 1.2369e-10, 3.4842e-08,
9.8332e-09, 1.0512e-09], device='cuda:3', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([1.0000e+00, 9.5107e-10, 1.7731e-10, 2.9004e-10, 1.2369e-10, 3.4842e-08,
9.8332e-09, 1.0512e-09], device='cuda:3', grad_fn=<SelectBackward0>)
ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: ζœ€εŽηš„ζ¦‚ηŽ‡εˆ†εΈƒδΈΊ: {True: tensor(1., device='cuda:3', grad_fn=<DivBackward0>), False: tensor(4.7269e-08, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:3', grad_fn=<DivBackward0>)}
{True: tensor(1.0000, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0., device='cuda:0', grad_fn=<MulBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Is the bed-tent white?')
FINAL_ANSWER=RESULT(var=ANSWER0)
ANSWER0=VQA(image=LEFT,question='Does the image show food served in a rectangular dish?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['Is the bed-tent white?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1859
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1862
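The [N, 3, 448, 448] pixel-value shapes and "dynamic ViT batch size" lines throughout this log are consistent with InternVL-style dynamic resolution: each image is cut into up to max_num 448x448 tiles matching its aspect ratio, plus a thumbnail, so different samples yield different tile counts (7 vs. 13 here) and different dynamic token lengths (1859 vs. 1864). A simplified sketch of that tiling, not the project's exact preprocessing code:

from PIL import Image
import torch
import torchvision.transforms as T

def dynamic_tile(img, tile=448, max_num=12, thumbnail=True):
    """Cut an image into an aspect-ratio-matched grid of tile-sized crops."""
    w, h = img.size
    # Pick a cols x rows grid with cols*rows <= max_num, closest to the aspect ratio.
    cols, rows = min(
        ((c, r) for c in range(1, max_num + 1) for r in range(1, max_num + 1)
         if c * r <= max_num),
        key=lambda cr: abs(cr[0] / cr[1] - w / h),
    )
    resized = img.resize((cols * tile, rows * tile))
    crops = [resized.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
             for r in range(rows) for c in range(cols)]
    if thumbnail and len(crops) > 1:
        crops.append(img.resize((tile, tile)))  # global view appended last
    to_tensor = T.ToTensor()
    return torch.stack([to_tensor(c) for c in crops])  # [N, 3, 448, 448]

pixel_values = dynamic_tile(Image.new('RGB', (1344, 896)))  # 3x2 grid + thumbnail
print(pixel_values.shape)  # torch.Size([7, 3, 448, 448]), matching the log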