[2024-10-24 09:52:56,533] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 7007.38 | backward: 6763.94 | backward_inner: 6758.68 | backward_allreduce: 5.19 | step: 7.38
95%|█████████▌| 4615/4844 [19:11:40<58:39, 15.37s/it] Registering VQA_lavis step
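A quick hedged sanity check on the two lines above: the DeepSpeed-timed phases account for roughly 13.8 s of the 15.37 s/it tqdm rate, with the remainder presumably untimed work such as data loading (an assumption; this log does not break it down):

    # Values copied from the timing line above (ms).
    forward, backward, step = 7007.38, 6763.94, 7.38
    timed = (forward + backward + step) / 1000
    print(f"timed phases: {timed:.2f} s of 15.37 s/it")   # ~13.78 s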
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='How many cups of dessert are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
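The three lines above form a complete program over the registered VQA_lavis, EVAL, and RESULT steps: VQA answers a question about an image, EVAL evaluates a Python expression with earlier answers substituted for placeholders like {ANSWER0}, and RESULT names the output variable. A minimal sketch of an interpreter for this format (the parsing regexes and the `run_program`/`vqa_fn` names are assumptions, not the actual trainer code):

    import re

    def run_program(program, vqa_fn):
        """Interpret VQA/EVAL/RESULT programs like the ones in this log."""
        env = {}
        for line in program.strip().splitlines():
            line = line.strip()
            var, call = line.split("=", 1)
            op = call[:call.index("(")]
            if op == "VQA":
                question = re.search(r"question='(.*)'\)", call).group(1)
                env[var] = vqa_fn(question)
            elif op == "EVAL":
                expr = re.search(r"expr='(.*)'\)", call).group(1)
                env[var] = eval(expr.format(**env))  # '{ANSWER0} == 3' -> '3 == 3'
            elif op == "RESULT":
                env[var] = env[re.search(r"var=(\w+)", call).group(1)]
        return env["FINAL_ANSWER"]

    prog = """ANSWER0=VQA(image=LEFT,question='How many cups of dessert are in the image?')
    ANSWER1=EVAL(expr='{ANSWER0} == 3')
    FINAL_ANSWER=RESULT(var=ANSWER1)"""
    print(run_program(prog, vqa_fn=lambda q: 3))   # True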
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='Does the right image include green reading lamps suspended from black arches?')
FINAL_ANSWER=RESULT(var=ANSWER0)
ANSWER0=VQA(image=RIGHT,question='Is the dog in the image standing on two legs?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([3, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='Is there at least one person standing outside near the machines?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
question: ['Is the dog in the image standing on two legs?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
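Each executed VQA step is logged three ways: the chosen response string, a list of eight (candidate, score) pairs (near-uniform here, all around 0.124), and the bare candidate list. The same eight candidates reappear further down re-scored into a sharply peaked softmax. A sketch of producing such (string, probability) pairs, assuming a softmax over per-candidate logits from a scoring model not shown in this log:

    import torch

    def score_candidates(logits, candidates):
        """Pair candidate answers with softmax probabilities, sorted to
        match the (string, prob) lists printed in the log."""
        probs = torch.softmax(logits, dim=-1)
        return sorted(zip(candidates, probs.tolist()), key=lambda x: -x[1])

    cands = ['yes', 'congratulations', 'no', 'honey',
             'solid', 'right', 'candle', 'chocolate']
    print(score_candidates(torch.randn(8), cands))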
torch.Size([3, 3, 448, 448]) knan debug pixel values shape
question: ['How many cups of dessert are in the image?'], responses:['3']
question: ['Does the right image include green reading lamps suspended from black arches?'], responses:['no']
question: ['Is there at least one person standing outside near the machines?'], responses:['yes']
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1867
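The "dynamic ViT batch size" lines tie back to the [7, 3, 448, 448] pixel tensors above: each sample is tiled into seven 448×448 crops, and the sequence length varies with the tiling and the prompt. Hedged arithmetic, assuming an InternVL-style setting of 256 visual tokens per 448×448 tile (not confirmed by this log):

    # Assumed: 256 visual tokens per 448x448 tile (InternVL-style pixel shuffle).
    tiles = 7
    visual_tokens = tiles * 256         # 1792
    text_tokens = 1864 - visual_tokens  # ~72 tokens left for the text prompt
    print(visual_tokens, text_tokens)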
tensor([1.0000e+00, 2.2645e-09, 6.8936e-07, 1.4111e-09, 1.3046e-11, 2.4900e-12,
        1.3704e-11, 7.4782e-10], device='cuda:2', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([1.0000e+00, 2.2645e-09, 6.8936e-07, 1.4111e-09, 1.3046e-11, 2.4900e-12,
        1.3704e-11, 7.4782e-10], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(1.0000, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(6.8936e-07, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(2.5895e-08, device='cuda:2', grad_fn=<DivBackward0>)}
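This distribution closes the loop on a program: the softmax over the eight candidate answers is marginalized through the EVAL expression, summing the mass of candidates that make it evaluate True, False, or raise. A minimal sketch of that marginalization (the yes/no coercion and the final renormalization are assumptions, suggested by the logged values and the DivBackward0 grad_fns):

    import torch

    def soft_eval(candidates, probs, expr):
        """Marginalize candidate probabilities into {True, False, 'Execute Error'}."""
        to_value = {'yes': True, 'no': False}   # assumed yes/no coercion
        out = {True: 0.0, False: 0.0, 'Execute Error': 0.0}
        for ans, p in zip(candidates, probs):
            try:
                key = bool(eval(expr.format(ANSWER0=to_value.get(ans, ans))))
            except Exception:
                key = 'Execute Error'
            out[key] = out[key] + p   # tensor add keeps the graph intact
        total = sum(out.values())
        return {k: v / total for k, v in out.items()}   # the DivBackward0 above

    cands = ['yes', 'congratulations', 'no', 'honey',
             'solid', 'right', 'candle', 'chocolate']
    probs = torch.softmax(torch.tensor([20., 0., 6., 0., 0., 0., 0., 0.]), dim=-1)
    print(soft_eval(cands, probs, '{ANSWER0}'))   # True ~1, False tiny, rest error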
ANSWER0=VQA(image=LEFT,question='Is a thumb pressing the phone's screen?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([3, 3, 448, 448])
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
question: ['Is a thumb pressing the phone'], responses:['no']
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
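Note that the question logged just above arrives truncated: 'Is a thumb pressing the phone' rather than the full 'Is a thumb pressing the phone's screen?' from the generated program. The unescaped apostrophe closes the single-quoted argument early. A minimal reproduction, assuming the question is extracted with a lazy single-quote regex (the actual parser is not shown in this log):

    import re

    call = "VQA(image=LEFT,question='Is a thumb pressing the phone's screen?')"
    # A lazy match stops at the first apostrophe, cutting the question off
    # exactly as seen in the log line above.
    print(re.search(r"question='(.*?)'", call).group(1))
    # -> Is a thumb pressing the phone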
torch.Size([3, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
tensor([1.0000e+00, 2.7036e-10, 3.8366e-07, 1.8399e-11, 4.8036e-11, 1.1641e-08,
        6.1861e-10, 8.2243e-07], device='cuda:1', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([1.0000e+00, 2.7036e-10, 3.8366e-07, 1.8399e-11, 4.8036e-11, 1.1641e-08,
        6.1861e-10, 8.2243e-07], device='cuda:1', grad_fn=<SelectBackward0>)
tensor([1.0000e+00, 2.6679e-09, 3.1879e-07, 9.6929e-10, 8.2062e-09, 6.1946e-08,
        3.5928e-08, 3.4208e-07], device='cuda:2', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([1.0000e+00, 2.6679e-09, 3.1879e-07, 9.6929e-10, 8.2062e-09, 6.1946e-08,
        3.5928e-08, 3.4208e-07], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(2.6679e-09, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(1.0000, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(7.1526e-07, device='cuda:2', grad_fn=<DivBackward0>)}
tensor([9.9997e-01, 4.7934e-06, 3.7370e-08, 2.6140e-09, 1.7319e-11, 2.7737e-05,
        5.7036e-11, 2.1083e-09], device='cuda:3', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([9.9997e-01, 4.7934e-06, 3.7370e-08, 2.6140e-09, 1.7319e-11, 2.7737e-05,
        5.7036e-11, 2.1083e-09], device='cuda:3', grad_fn=<SelectBackward0>)
tensor([9.9539e-01, 8.4546e-09, 4.6096e-03, 1.7350e-08, 1.2507e-10, 3.5626e-10,
        4.7322e-10, 5.1249e-09], device='cuda:0', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([9.9539e-01, 8.4546e-09, 4.6096e-03, 1.7350e-08, 1.2507e-10, 3.5626e-10,
        4.7322e-10, 5.1249e-09], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(2.7036e-10, device='cuda:1', grad_fn=<UnbindBackward0>), False: tensor(1.0000, device='cuda:1', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(1.1921e-06, device='cuda:1', grad_fn=<SubBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many birds are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
The final probability distribution is: {True: tensor(1.0000, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(3.2573e-05, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:3', grad_fn=<DivBackward0>)}
The final probability distribution is: {True: tensor(0.9954, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.0046, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(-4.2375e-08, device='cuda:0', grad_fn=<DivBackward0>)}
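Two numerical quirks worth flagging in the distributions above: one 'Execute Error' entry carries grad_fn=<SubBackward0> rather than DivBackward0, and the last distribution's error mass is slightly negative (-4.2375e-08). Both are consistent with the error mass sometimes being formed as 1 minus the True and False masses, which float32 rounding can push just below zero; this is an inference from the grad_fns, not confirmed by the log:

    import torch

    # Assumed mechanism: error mass computed by subtraction in float32.
    p = torch.tensor([0.9954, 0.0046], dtype=torch.float32)
    print(1.0 - p.sum())   # rounding can leave a tiny, possibly negative, remainder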
ANSWER0=VQA(image=LEFT,question='How many buffaloes are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='How many hyenas are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 3')