[['2', '3', '4', '1', '5', '8', '7', '29']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
tensor([9.8380e-01, 3.0346e-03, 1.3888e-03, 6.7449e-04, 9.5364e-04, 7.4243e-04,
        9.3480e-03, 5.5191e-05], device='cuda:0', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.8380e-01, 3.0346e-03, 1.3888e-03, 6.7449e-04, 9.5364e-04, 7.4243e-04,
        9.3480e-03, 5.5191e-05], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0162, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.9838, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many dogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
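These three-line traces are the programs being scored: a VQA step queries the image, EVAL substitutes the answer into a Python expression, and RESULT returns the evaluated variable. A minimal interpreter sketch, assuming regex parsing and a hypothetical vqa_model callable (illustrative names, not necessarily the executor behind these logs):

    import re

    def run_program(lines, images, vqa_model):
        # Interpret a VQA/EVAL/RESULT trace like the ones logged above.
        env = {}
        for line in lines:
            var, call = line.split("=", 1)
            if call.startswith("VQA"):
                m = re.match(r"VQA\(image=(\w+),question='(.*)'\)", call)
                env[var] = vqa_model(images[m.group(1)], m.group(2))  # e.g. '1'
            elif call.startswith("EVAL"):
                expr = re.match(r"EVAL\(expr='(.*)'\)", call).group(1)
                # Map yes/no answers to booleans, then substitute and evaluate:
                # '{ANSWER0} >= 2' with ANSWER0='1' becomes '1 >= 2' -> False.
                subs = {k: {"yes": "True", "no": "False"}.get(str(v), str(v))
                        for k, v in env.items()}
                try:
                    env[var] = eval(expr.format(**subs))
                except Exception:
                    env[var] = "Execute Error"
            elif call.startswith("RESULT"):
                return env[re.match(r"RESULT\(var=(\w+)\)", call).group(1)]

On the trace above, a VQA answer of '1' makes the EVAL step return False, which is why nearly all of the probability mass (0.9838) sits on False in the logged distribution.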
torch.Size([13, 3, 448, 448])
question: ['How many dogs are in the image?'], responses:['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
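The tensors logged with grad_fn=<SoftmaxBackward0> are distributions over these eight candidate answers, and indexing one entry out of such a tensor is what produces the grad_fn=<SelectBackward0> lines. A small PyTorch sketch of that pattern (the scores here are made up for illustration):

    import torch

    # Hypothetical per-candidate scores (e.g. sequence log-likelihoods for
    # the eight proposed answers); the real values come from the model.
    scores = torch.tensor([4.1, -1.7, -2.5, -3.2, -2.9, -3.1, -0.6, -5.7],
                          requires_grad=True)
    probs = torch.softmax(scores, dim=-1)  # grad_fn=<SoftmaxBackward0>
    top = probs[0]                         # grad_fn=<SelectBackward0>
    print(probs)
    print(top)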
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
tensor([0.3135, 0.1780, 0.2285, 0.0118, 0.0481, 0.1014, 0.1149, 0.0038],
       device='cuda:1', grad_fn=<SoftmaxBackward0>)
4 *************
['4', '5', '3', '8', '6', '1', '2', '11'] tensor([0.3135, 0.1780, 0.2285, 0.0118, 0.0481, 0.1014, 0.1149, 0.0038],
       device='cuda:1', grad_fn=<SelectBackward0>)
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
The final probability distribution is: {True: tensor(0.1014, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.8986, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Is there a woman in the image?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
tensor([0.6517, 0.0368, 0.0269, 0.0077, 0.0010, 0.2717, 0.0035, 0.0007],
       device='cuda:3', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.6517, 0.0368, 0.0269, 0.0077, 0.0010, 0.2717, 0.0035, 0.0007],
       device='cuda:3', grad_fn=<SelectBackward0>)
torch.Size([13, 3, 448, 448])
The final probability distribution is: {True: tensor(0.9503, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.0497, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Is the dog indoors?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([13, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
tensor([4.4774e-01, 3.0772e-01, 5.3480e-02, 1.6471e-01, 1.9292e-02, 2.9240e-03,
        3.9966e-03, 1.2987e-04], device='cuda:2', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([4.4774e-01, 3.0772e-01, 5.3480e-02, 1.6471e-01, 1.9292e-02, 2.9240e-03,
        3.9966e-03, 1.2987e-04], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8353, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.1647, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
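These {True, False, 'Execute Error'} dicts are consistent with pushing each candidate's probability mass through the EVAL expression and summing by outcome: here only '1' fails '{ANSWER0} >= 2', and 1 - 0.1647 = 0.8353 matches the logged True mass exactly. A sketch of that aggregation, assuming this is indeed how the distribution is formed:

    def outcome_distribution(candidates, probs, evaluate):
        # Sum per-candidate probability mass by program outcome
        # (True / False / 'Execute Error').
        dist = {True: 0.0, False: 0.0, "Execute Error": 0.0}
        for cand, p in zip(candidates, probs):
            dist[evaluate(cand)] += p
        return dist

    cands = ['2', '3', '4', '1', '5', '8', '7', '29']
    probs = [4.4774e-01, 3.0772e-01, 5.3480e-02, 1.6471e-01,
             1.9292e-02, 2.9240e-03, 3.9966e-03, 1.2987e-04]
    print(outcome_distribution(cands, probs, lambda c: int(c) >= 2))
    # -> True ~0.8353, False ~0.1647, matching the logged distribution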
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
question: ['Is there a woman in the image?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
question: ['Is the dog indoors?'], responses:['yes']
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
tensor([9.7144e-01, 5.2592e-03, 2.4075e-03, 1.2467e-03, 1.6538e-03, 1.1601e-03,
        1.6714e-02, 1.1763e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.7144e-01, 5.2592e-03, 2.4075e-03, 1.2467e-03, 1.6538e-03, 1.1601e-03,
        1.6714e-02, 1.1763e-04], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0286, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.9714, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:0', grad_fn=<DivBackward0>)}
tensor([8.1183e-01, 2.5894e-02, 1.6026e-01, 7.7498e-04, 1.5618e-04, 3.3344e-04,
        6.2298e-05, 6.9296e-04], device='cuda:1', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.1183e-01, 2.5894e-02, 1.6026e-01, 7.7498e-04, 1.5618e-04, 3.3344e-04,
        6.2298e-05, 6.9296e-04], device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8118, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.1603, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0279, device='cuda:1', grad_fn=<DivBackward0>)}
tensor([8.2971e-01, 1.4093e-02, 1.5348e-01, 1.6236e-03, 8.1267e-05, 2.9485e-04,
        1.4605e-04, 5.6869e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.2971e-01, 1.4093e-02, 1.5348e-01, 1.6236e-03, 8.1267e-05, 2.9485e-04,
        1.4605e-04, 5.6869e-04], device='cuda:3', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8297, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.1535, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0168, device='cuda:3', grad_fn=<DivBackward0>)}
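For the yes/no programs the same bookkeeping explains the nonzero 'Execute Error' mass: 'yes' evaluates to True, 'no' to False, and every other candidate ('congratulations', 'honey', ...) fails evaluation, so its probability lands in the error bucket. A self-contained check against the last distribution above (again assuming this aggregation is what the code does):

    cands = ['yes', 'congratulations', 'no', 'honey',
             'solid', 'right', 'candle', 'chocolate']
    probs = [8.2971e-01, 1.4093e-02, 1.5348e-01, 1.6236e-03,
             8.1267e-05, 2.9485e-04, 1.4605e-04, 5.6869e-04]

    dist = {True: 0.0, False: 0.0, 'Execute Error': 0.0}
    for cand, p in zip(cands, probs):
        # Only 'yes'/'no' evaluate cleanly; everything else errors out.
        outcome = True if cand == 'yes' else False if cand == 'no' else 'Execute Error'
        dist[outcome] += p
    print(dist)  # True ~0.8297, False ~0.1535, 'Execute Error' ~0.0168, as logged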
[2024-10-23 14:49:20,788] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.35 | optimizer_gradients: 0.37 | optimizer_step: 0.33
[2024-10-23 14:49:20,788] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 7041.73 | backward_microstep: 11042.20 | backward_inner_microstep: 6757.60 | backward_allreduce_microstep: 4284.44 | step_microstep: 7.75
[2024-10-23 14:49:20,788] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 7041.74 | backward: 11042.19 | backward_inner: 6757.64 | backward_allreduce: 4284.43 | step: 7.76
1%| | 32/4844 [08:04<21:05:14, 15.78s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Is the animal holding food?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering EVAL step
Registering RESULT step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Is the dog looking toward the camera?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
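The interleaved "Registering ... step" lines suggest each program execution first registers a handler per step type (VQA_lavis, EVAL, RESULT), with several workers logging concurrently. A hypothetical sketch of such a registry (names are guesses from the log text, not the repo's actual API):

    STEP_REGISTRY = {}

    def register_step(name, handler):
        # Hypothetical registry behind the "Registering ... step" lines above;
        # the executor that actually emits them may work differently.
        print(f"Registering {name} step")
        STEP_REGISTRY[name] = handler

    # Stub handlers, registered once per program execution:
    register_step("VQA_lavis", lambda image, question: "yes")
    register_step("EVAL", lambda expr: eval(expr))
    register_step("RESULT", lambda env, var: env[var])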