question: ['How many chimneys does the building have?'], responses:['3']
tensor([0.2243, 0.1001, 0.1649, 0.0883, 0.1756, 0.0371, 0.1868, 0.0229],
       device='cuda:2', grad_fn=<SoftmaxBackward0>)
4 *************
['4', '5', '3', '8', '6', '1', '2', '11'] tensor([0.2243, 0.1001, 0.1649, 0.0883, 0.1756, 0.0371, 0.1868, 0.0229],
       device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.7761, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.2239, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many parrots are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
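Every program block in this log has the same three-step shape: a VQA call yields a candidate answer, EVAL substitutes it into a Python comparison, and RESULT surfaces the chosen variable. Below is a minimal interpreter sketch for that format; only the module names VQA, EVAL, and RESULT come from the log, while the parsing rules (one assignment per line, no commas inside quoted argument values) and all function names are assumptions that happen to hold for the programs shown here.

```python
import re

def run_program(program: str, modules: dict):
    """Execute a VQA/EVAL/RESULT program line by line (a sketch,
    not the training code's actual interpreter)."""
    env = {}
    for line in program.strip().splitlines():
        var, call = line.split("=", 1)
        name, arg_str = re.match(r"(\w+)\((.*)\)$", call).groups()
        args = {}
        for pair in arg_str.split(","):  # naive: assumes no commas in values
            key, val = pair.split("=", 1)
            val = val.strip("'")
            # Inline earlier results: '{ANSWER0}' templates inside strings,
            # or bare references such as var=ANSWER1.
            val = re.sub(r"\{(\w+)\}", lambda m: str(env[m.group(1)]), val)
            args[key] = env.get(val, val)
        env[var] = modules[name](**args)
    return env.get("FINAL_ANSWER")

# Stub modules for illustration; a real run would query the VLM.
modules = {
    "VQA": lambda image, question: "1",   # hypothetical stub answer
    "EVAL": lambda expr: eval(expr),      # e.g. eval('1 <= 1') -> True
    "RESULT": lambda var: var,
}
program = """ANSWER0=VQA(image=LEFT,question='How many parrots are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 1')
FINAL_ANSWER=RESULT(var=ANSWER1)"""
print(run_program(program, modules))      # True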
tensor([8.1019e-01, 8.5339e-02, 1.2294e-02, 8.5337e-02, 4.6629e-03, 8.9106e-04,
        1.2172e-03, 6.8076e-05], device='cuda:3', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([8.1019e-01, 8.5339e-02, 1.2294e-02, 8.5337e-02, 4.6629e-03, 8.9106e-04,
        1.2172e-03, 6.8076e-05], device='cuda:3', grad_fn=<SelectBackward0>)
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
The final probability distribution is: {True: tensor(0.0853, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.9147, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)}
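The True/False masses in the line above are the candidate probabilities regrouped by the outcome of the EVAL expression: for '{ANSWER0} <= 1' over ['2', '3', '4', '1', '5', '8', '7', '29'], only '1' passes, and its probability 8.5337e-02 matches the reported True mass of 0.0853. A minimal sketch of that regrouping, assuming this is how the distribution is computed (the outcome keys True, False, and 'Execute Error' come from the log; the function name and signature are hypothetical):

```python
import torch

def outcome_distribution(candidates, probs, expr):
    """Regroup a softmax over candidate answers by their EVAL outcome."""
    masses = {True: 0.0, False: 0.0, 'Execute Error': 0.0}
    for cand, p in zip(candidates, probs):
        try:
            outcome = bool(eval(expr.format(ANSWER0=cand)))
        except Exception:
            outcome = 'Execute Error'  # e.g. a candidate that fails to parse
        masses[outcome] = masses[outcome] + p
    return masses

cands = ['2', '3', '4', '1', '5', '8', '7', '29']
p = torch.tensor([8.1019e-01, 8.5339e-02, 1.2294e-02, 8.5337e-02,
                  4.6629e-03, 8.9106e-04, 1.2172e-03, 6.8076e-05])
print(outcome_distribution(cands, p, '{ANSWER0} <= 1'))
# True ~ 0.0853, False ~ 0.9147, matching the log line above
```

The stray 'Execute Error' values of about ±1.1921e-07 elsewhere in the log are on the order of float32 epsilon, which suggests renormalization round-off rather than genuinely failed evaluations.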
torch.Size([7, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='How many warthogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 5')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
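The pixel-values shape torch.Size([7, 3, 448, 448]) and the 'dynamic ViT batch size: 7' lines agree: this sample was split into 7 RGB tiles of 448x448 before the vision encoder. Assuming an InternVL-style recipe of 14x14 patches followed by a 0.5x pixel shuffle (an assumption the log does not confirm), the token arithmetic works out as:

```python
tiles = 7          # from the log lines above
patch = 14         # ViT patch size (assumption)
shuffle = 0.5      # pixel-shuffle downsampling ratio (assumption)
tokens_per_tile = int((448 / patch) ** 2 * shuffle ** 2)  # 32**2 * 0.25 = 256
print(tiles * tokens_per_tile)  # 1792 visual tokens
```

Under those assumptions the tiles account for 1792 of the reported 1861 dynamic tokens, leaving 69 for text and special tokens.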
question: ['How many parrots are in the image?'], responses:['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
question: ['How many warthogs are in the image?'], responses:['3']
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
tensor([9.7496e-01, 4.1109e-03, 1.6606e-03, 6.6876e-04, 1.0065e-03, 7.6872e-04,
        1.6775e-02, 4.6547e-05], device='cuda:1', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.7496e-01, 4.1109e-03, 1.6606e-03, 6.6876e-04, 1.0065e-03, 7.6872e-04,
        1.6775e-02, 4.6547e-05], device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.9750, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.0250, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many cheetahs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
question: ['How many cheetahs are in the image?'], responses:['2']
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
tensor([0.2436, 0.1993, 0.0858, 0.1209, 0.0345, 0.2259, 0.0791, 0.0109],
       device='cuda:0', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.2436, 0.1993, 0.0858, 0.1209, 0.0345, 0.2259, 0.0791, 0.0109],
       device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0858, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.9142, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many hyenas are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
tensor([9.1022e-01, 2.9250e-02, 3.7187e-03, 5.4662e-02, 1.2458e-03, 4.1684e-04,
        4.4391e-04, 4.5058e-05], device='cuda:1', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([9.1022e-01, 2.9250e-02, 3.7187e-03, 5.4662e-02, 1.2458e-03, 4.1684e-04,
        4.4391e-04, 4.5058e-05], device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.9102, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.0898, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many flutes are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
question: ['How many flutes are in the image?'], responses:['2']
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
tensor([0.3841, 0.1757, 0.1469, 0.1437, 0.0573, 0.0565, 0.0328, 0.0031],
       device='cuda:1', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([0.3841, 0.1757, 0.1469, 0.1437, 0.0573, 0.0565, 0.0328, 0.0031],
       device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.1437, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.8563, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:1', grad_fn=<DivBackward0>)}
tensor([9.6867e-01, 5.4107e-03, 1.8123e-03, 8.8208e-04, 1.0649e-03, 7.0207e-04,
        2.1399e-02, 5.5348e-05], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.6867e-01, 5.4107e-03, 1.8123e-03, 8.8208e-04, 1.0649e-03, 7.0207e-04,
        2.1399e-02, 5.5348e-05], device='cuda:2', grad_fn=<SelectBackward0>)
question: ['How many hyenas are in the image?'], responses:['2']
The final probability distribution is: {True: tensor(0.9687, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.0313, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is the dog on the left sitting on the grass?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
torch.Size([13, 3, 448, 448])
tensor([0.6499, 0.2247, 0.0173, 0.0471, 0.0028, 0.0471, 0.0099, 0.0013],
       device='cuda:3', grad_fn=<SoftmaxBackward0>)
3 *************