text |
|---|
ANSWER1=EVAL(expr='{ANSWER0} == 2') |
FINAL_ANSWER=RESULT(var=ANSWER1) |
torch.Size([1, 3, 448, 448]) |
ANSWER0=VQA(image=RIGHT,question='Does the image show a pair of lips wearing makeup?') |
ANSWER1=EVAL(expr='{ANSWER0}') |
FINAL_ANSWER=RESULT(var=ANSWER1) |
ANSWER0=VQA(image=LEFT,question='How many dogs are in the image?') |
ANSWER1=EVAL(expr='{ANSWER0} >= 2') |
FINAL_ANSWER=RESULT(var=ANSWER1) |
torch.Size([1, 3, 448, 448]) |
torch.Size([7, 3, 448, 448]) |
torch.Size([13, 3, 448, 448]) |
question: ['Does the image show a laptop displayed like an inverted book with its pages fanning out?'], responses:['yes'] |
question: ['Does the image show a pair of lips wearing makeup?'], responses:['yes'] |
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)] |
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']] |
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)] |
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']] |
torch.Size([1, 3, 448, 448]) knan debug pixel values shape |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 334 |
torch.Size([1, 3, 448, 448]) knan debug pixel values shape |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 337 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 334 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 335 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 334 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 334 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 335 |
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 335 |
tensor([1.0000e+00, 2.4078e-09, 3.2853e-08, 7.5923e-10, 3.7327e-12, 4.9109e-13, |
3.7110e-12, 3.2363e-10], device='cuda:0', grad_fn=<SoftmaxBackward0>) |
yes ************* |
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 2.4078e-09, 3.2853e-08, 7.5923e-10, 3.7327e-12, 4.9109e-13, |
3.7110e-12, 3.2363e-10], device='cuda:0', grad_fn=<SelectBackward0>) |
tensor([1.0000e+00, 1.5773e-09, 1.6374e-07, 3.0004e-09, 1.2595e-11, 8.6906e-12, |
4.0324e-11, 7.2838e-10], device='cuda:1', grad_fn=<SoftmaxBackward0>) |
yes ************* |
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 1.5773e-09, 1.6374e-07, 3.0004e-09, 1.2595e-11, 8.6906e-12, |
4.0324e-11, 7.2838e-10], device='cuda:1', grad_fn=<SelectBackward0>) |
Final probability distribution: {True: tensor(1., device='cuda:0', grad_fn=<UnbindBackward0>), False: tensor(3.2853e-08, device='cuda:0', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(-3.2853e-08, device='cuda:0', grad_fn=<SubBackward0>)} |
ANSWER0=VQA(image=LEFT,question='Is there a chair in the image?') |
FINAL_ANSWER=RESULT(var=ANSWER0) |
Final probability distribution: {True: tensor(1.0000, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(1.6374e-07, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-4.4532e-08, device='cuda:1', grad_fn=<DivBackward0>)} |
torch.Size([7, 3, 448, 448]) |
ANSWER0=VQA(image=LEFT,question='How many items are in the image?') |
ANSWER1=EVAL(expr='{ANSWER0} <= 3') |
FINAL_ANSWER=RESULT(var=ANSWER1) |
torch.Size([1, 3, 448, 448]) |
question: ['How many dogs are in the image?'], responses:['3'] |
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)] |
[['3', '4', '1', '5', '8', '2', '6', '12']] |
question: ['How many items are in the image?'], responses:['2'] |
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)] |
[['2', '3', '4', '1', '5', '8', '7', '29']] |
torch.Size([1, 3, 448, 448]) knan debug pixel values shape |
torch.Size([7, 3, 448, 448]) knan debug pixel values shape |
tensor([9.9997e-01, 2.5867e-05, 1.5960e-08, 4.8882e-07, 1.6048e-09, 6.5877e-10, |
2.2814e-09, 1.0363e-09], device='cuda:1', grad_fn=<SoftmaxBackward0>) |
2 ************* |
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([9.9997e-01, 2.5867e-05, 1.5960e-08, 4.8882e-07, 1.6048e-09, 6.5877e-10, |
2.2814e-09, 1.0363e-09], device='cuda:1', grad_fn=<SelectBackward0>) |
Final probability distribution: {True: tensor(1., device='cuda:1', grad_fn=<DivBackward0>), False: tensor(2.1542e-08, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:1', grad_fn=<DivBackward0>)} |
question: ['How many white dogs are in the image?'], responses:['2'] |
question: ['Is there a chair in the image?'], responses:['yes'] |
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)] |
[['2', '3', '4', '1', '5', '8', '7', '29']] |
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)] |
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']] |
torch.Size([7, 3, 448, 448]) knan debug pixel values shape |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860 |
torch.Size([13, 3, 448, 448]) knan debug pixel values shape |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1863 |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860 |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861 |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860 |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860 |
tensor([1.0000e+00, 7.1186e-08, 4.2590e-08, 5.9389e-10, 8.7663e-11, 1.9588e-07, |
1.7246e-10, 4.3647e-09], device='cuda:3', grad_fn=<SoftmaxBackward0>) |
3 ************* |
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([1.0000e+00, 7.1186e-08, 4.2590e-08, 5.9389e-10, 8.7663e-11, 1.9588e-07, |
1.7246e-10, 4.3647e-09], device='cuda:3', grad_fn=<SelectBackward0>) |
Final probability distribution: {True: tensor(1.0000, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(4.2590e-08, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)} |
ANSWER0=VQA(image=RIGHT,question='How many crabs are in the image?') |
ANSWER1=EVAL(expr='{ANSWER0} <= 1') |
FINAL_ANSWER=RESULT(var=ANSWER1) |
torch.Size([7, 3, 448, 448]) |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861 |
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861 |
tensor([1.0000e+00, 2.1296e-08, 8.3153e-07, 8.9146e-09, 4.5468e-11, 6.2861e-10, |
2.0730e-10, 4.5756e-09], device='cuda:0', grad_fn=<SoftmaxBackward0>) |
yes ************* |
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 2.1296e-08, 8.3153e-07, 8.9146e-09, 4.5468e-11, 6.2861e-10, |
2.0730e-10, 4.5756e-09], device='cuda:0', grad_fn=<SelectBackward0>) |
Final probability distribution: {True: tensor(1.0000, device='cuda:0', grad_fn=<UnbindBackward0>), False: tensor(8.3153e-07, device='cuda:0', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(2.9370e-09, device='cuda:0', grad_fn=<SubBackward0>)} |
question: ['How many crabs are in the image?'], responses:['11'] |
[('11', 0.12740768001087358), ('10', 0.12548679249075975), ('12', 0.12538137681693887), ('9', 0.12485855662563465), ('8', 0.12469919178932766), ('13', 0.12431757055023795), ('7', 0.12396146028399917), ('14', 0.1238873714322284)] |
[['11', '10', '12', '9', '8', '13', '7', '14']] |
torch.Size([7, 3, 448, 448]) knan debug pixel values shape |
tensor([1.0000e+00, 2.5110e-08, 1.2825e-08, 5.9177e-09, 1.5166e-10, 1.8746e-09, |
5.3338e-10, 6.8763e-10], device='cuda:2', grad_fn=<SoftmaxBackward0>) |
2 ************* |
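The rows above trace a VisProg-style program executor: a VQA module returns top-k candidate answers with their softmax probabilities, EVAL substitutes each candidate into a Python expression such as `{ANSWER0} <= 3`, and RESULT pools the probability mass into a distribution over {True, False, 'Execute Error'}. The `grad_fn=<SubBackward0>` entries suggest the error mass is sometimes computed as 1 - P(True) - P(False), which would explain the tiny negative values. Below is a minimal sketch of that pooling step, reconstructed from the logs; the function name `eval_distribution` and the exact pooling rule are assumptions for illustration, not the dataset authors' code.

```python
import torch

def eval_distribution(candidates, probs, expr):
    """Pool candidate-answer probabilities into {True, False, 'Execute Error'}.

    candidates: list[str] of top-k VQA answers (hypothetical interface).
    probs: 1-D tensor of their softmax probabilities (sums to 1).
    expr: Python expression containing an {ANSWER0} placeholder.
    """
    dist = {True: probs.new_zeros(()),
            False: probs.new_zeros(()),
            'Execute Error': probs.new_zeros(())}
    for cand, p in zip(candidates, probs):
        try:
            # e.g. '{ANSWER0} <= 3' with cand='3' -> '3 <= 3' -> True;
            # non-numeric candidates like 'congratulations' raise here
            # and their mass lands in the 'Execute Error' bucket.
            outcome = bool(eval(expr.format(ANSWER0=cand)))
        except Exception:
            outcome = 'Execute Error'
        dist[outcome] = dist[outcome] + p
    return dist

# Toy usage mirroring one of the logged rows.
candidates = ['3', '4', '1', '5', '8', '2', '6', '12']
logits = torch.tensor([20.0, 3.0, 2.0, 0.5, 0.1, 4.0, 0.2, 1.0])
print(eval_distribution(candidates, torch.softmax(logits, dim=0), '{ANSWER0} <= 3'))
```

Because the pooling is a sum of softmax outputs, the resulting distribution stays differentiable, consistent with the `grad_fn` annotations on the logged tensors.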