['tan', 'pear', 'pan', 'broom', 'chimney', 'doll', 'hood', 'sauce'] tensor([9.4119e-01, 1.0041e-03, 2.3843e-03, 2.6707e-03, 3.1399e-02, 2.9495e-03,
1.4936e-04, 1.8255e-02], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0., device='cuda:2', grad_fn=<MulBackward0>), False: tensor(0., device='cuda:2', grad_fn=<MulBackward0>), 'Execute Error': tensor(1., device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many windows are on the rusted out bus?')
ANSWER1=EVAL(expr='{ANSWER0} >= 12')
FINAL_ANSWER=RESULT(var=ANSWER1)
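The three lines above are one of the generated programs: a VQA call on an image, an EVAL check over its answer, and a RESULT step that returns the outcome. Below is a minimal sketch of how such a program string could be interpreted; `run_program`, `vqa_fn`, and the `images` dict are hypothetical stand-ins (the log only shows the DSL, not the executor), and an exception raised in the EVAL step is what the log reports as 'Execute Error'.

```python
import re

def run_program(program: str, images: dict, vqa_fn):
    """Walk a VQA/EVAL/RESULT program line by line, keeping intermediate
    answers in a small environment dict (hypothetical executor sketch)."""
    env = {}
    for line in program.strip().splitlines():
        var, call = line.split("=", 1)                      # ANSWER1, EVAL(...)
        op, argstr = call.split("(", 1)
        args = {k: v.strip("'") for k, v in
                re.findall(r"(\w+)=('[^']*'|[^,)]+)", argstr.rstrip(")"))}
        if op == "VQA":
            out = vqa_fn(images[args["image"]], args["question"])
        elif op == "EVAL":
            # substitute earlier answers, then evaluate, e.g. "8 >= 12"
            out = eval(args["expr"].format(**env), {"__builtins__": {}})
        elif op == "RESULT":
            out = env[args["var"]]
        else:
            raise ValueError(f"unknown step: {op}")
        env[var] = out
    return env["FINAL_ANSWER"]
```

The run itself reports differentiable probability distributions over outcomes rather than a single hard value, so the real executor presumably propagates distributions over candidate answers instead of one string; the hard version above only illustrates the control flow.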
torch.Size([13, 3, 448, 448])
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1862
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1862
tensor([8.2007e-01, 7.6261e-02, 4.3461e-02, 3.8345e-02, 1.1697e-02, 5.8762e-03,
4.0415e-03, 2.5063e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([8.2007e-01, 7.6261e-02, 4.3461e-02, 3.8345e-02, 1.1697e-02, 5.8762e-03,
4.0415e-03, 2.5063e-04], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8201, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.1799, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:0', grad_fn=<DivBackward0>)}
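The outcome line above carries `grad_fn=<DivBackward0>`, which suggests the True/False/'Execute Error' masses are obtained by summing the candidate-answer probabilities that satisfy, fail, or break the EVAL expression and then renormalizing. A rough sketch of that marginalization, assuming a single `{ANSWER0}` placeholder in the expression (the helper name and details are guesses, not the repository's code):

```python
import torch

def soft_eval(candidates, probs, expr):
    """Push a distribution over candidate answers through an EVAL expression.
    Candidates where the expression holds/fails add their mass to True/False;
    candidates that cannot be evaluated (e.g. 'yes' in a numeric comparison)
    go to 'Execute Error'. Summing softmax entries keeps the result
    differentiable, matching the grad_fn printed in the log."""
    zero = probs.new_zeros(())
    out = {True: zero, False: zero, "Execute Error": zero}
    for cand, p in zip(candidates, probs):
        try:
            key = bool(eval(expr.format(ANSWER0=cand), {"__builtins__": {}}))
        except Exception:
            key = "Execute Error"
        out[key] = out[key] + p
    total = out[True] + out[False] + out["Execute Error"]
    return {k: v / total for k, v in out.items()}

# e.g. soft_eval(['2', '3', '4', '1', '5', '8', '7', '29'],
#                torch.softmax(torch.randn(8), dim=0), '{ANSWER0} >= 12')
```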
tensor([8.9365e-01, 2.0976e-02, 8.3122e-02, 1.0316e-03, 9.3614e-05, 3.3869e-04,
1.8766e-05, 7.7098e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.9365e-01, 2.0976e-02, 8.3122e-02, 1.0316e-03, 9.3614e-05, 3.3869e-04,
1.8766e-05, 7.7098e-04], device='cuda:3', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0831, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.8936, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0232, device='cuda:3', grad_fn=<DivBackward0>)}
question: ['How many windows are on the rusted out bus?'], responses:['8']
[('8', 0.12723446457289017), ('9', 0.12488291461145089), ('12', 0.12481394644705951), ('7', 0.12480302292408052), ('5', 0.12471410185987472), ('6', 0.12470198211184266), ('11', 0.12452966814724155), ('10', 0.12431989932555992)]
[['8', '9', '12', '7', '5', '6', '11', '10']]
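The frozen VQA module above returns a greedy response ('8') together with eight candidates whose scores are nearly uniform; further down the same candidate list reappears with a much sharper `grad_fn=<SoftmaxBackward0>` distribution, i.e. it has apparently been re-scored by the trainable model. A hypothetical sketch of such re-scoring (per-candidate token log-likelihood, then a softmax over candidates); none of these names come from the log, and the image tokens the real prompt would contain are omitted for brevity:

```python
import torch

def rescore_candidates(model, tokenizer, prompt, candidates):
    """Score each candidate answer by its token log-likelihood under a causal
    LM conditioned on the prompt, then softmax over candidates. Gradients flow
    through the logits, so the returned distribution is trainable."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    scores = []
    for cand in candidates:
        cand_ids = tokenizer(cand, add_special_tokens=False,
                             return_tensors="pt").input_ids
        input_ids = torch.cat([prompt_ids, cand_ids], dim=1)
        logits = model(input_ids=input_ids).logits
        # logits at position i predict token i + 1, so the candidate tokens
        # are predicted by positions prompt_len - 1 .. seq_len - 2
        logprobs = torch.log_softmax(
            logits[0, prompt_ids.shape[1] - 1:-1], dim=-1)
        scores.append(logprobs.gather(1, cand_ids[0].unsqueeze(1)).sum())
    return torch.softmax(torch.stack(scores), dim=0)
```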
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
tensor([0.1509, 0.0971, 0.1223, 0.1252, 0.1078, 0.1588, 0.1089, 0.1290],
device='cuda:2', grad_fn=<SoftmaxBackward0>)
6 *************
['8', '9', '12', '7', '5', '6', '11', '10'] tensor([0.1509, 0.0971, 0.1223, 0.1252, 0.1078, 0.1588, 0.1089, 0.1290],
device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.1223, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.8777, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
[2024-10-23 14:43:59,829] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.39 | optimizer_gradients: 0.33 | optimizer_step: 0.32
[2024-10-23 14:43:59,830] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 5049.84 | backward_microstep: 12549.97 | backward_inner_microstep: 4812.35 | backward_allreduce_microstep: 7737.51 | step_microstep: 10.20
[2024-10-23 14:43:59,830] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 5049.86 | backward: 12549.96 | backward_inner: 4812.40 | backward_allreduce: 7737.36 | step: 10.22
0%| | 10/4844 [02:43<21:52:30, 16.29s/it]Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='How many people are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='How many of the ape's feet can be seen in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='Do boats float in the water on a sunny day in the image?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='How many drawers are on the cabinet?')
ANSWER1=EVAL(expr='{ANSWER0} == 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([13, 3, 448, 448])
torch.Size([5, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['How many drawers are on the cabinet?'], responses:['5']
[('5', 0.12793059870235002), ('8', 0.12539646467821697), ('4', 0.12509737486793587), ('6', 0.12470234839853608), ('3', 0.12467331676337925), ('7', 0.12441254825093238), ('11', 0.12401867309944531), ('9', 0.12376867523920407)]
[['5', '8', '4', '6', '3', '7', '11', '9']]
question: ['How many people are in the image?'], responses:['3']
torch.Size([5, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
question: ['How many of the ape'], responses:['1']
question: ['Do boats float in the water on a sunny day in the image?'], responses:['no']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 5, images per sample: 5.0, dynamic token length: 1348
tensor([0.1929, 0.0996, 0.1813, 0.1912, 0.1263, 0.1114, 0.0348, 0.0625],
device='cuda:0', grad_fn=<SoftmaxBackward0>)
5 *************
['5', '8', '4', '6', '3', '7', '11', '9'] tensor([0.1929, 0.0996, 0.1813, 0.1912, 0.1263, 0.1114, 0.0348, 0.0625],
device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.1263, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.8737, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Does the image contain a tree house?')
FINAL_ANSWER=RESULT(var=ANSWER0)
torch.Size([7, 3, 448, 448])
tensor([0.5138, 0.2921, 0.0145, 0.0837, 0.0050, 0.0695, 0.0199, 0.0014],
device='cuda:3', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.5138, 0.2921, 0.0145, 0.0837, 0.0050, 0.0695, 0.0199, 0.0014],