dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 325
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 326
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 325
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 325
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 326
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 326
dynamic ViT batch size: 1, images per sample: 1.0, dynamic token length: 326
tensor([1.0000e+00, 1.1979e-07, 1.3116e-07, 4.1053e-12, 1.2027e-12, 5.5948e-10,
        3.7237e-11, 2.2089e-07], device='cuda:0', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([1.0000e+00, 1.1979e-07, 1.3116e-07, 4.1053e-12, 1.2027e-12, 5.5948e-10,
        3.7237e-11, 2.2089e-07], device='cuda:0', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(1.1979e-07, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(1.0000, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(3.5763e-07, device='cuda:0', grad_fn=<DivBackward0>)}
question: ['How many birds are in the image?'], responses:['2']
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
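The ranked (answer, probability) pairs above look like a softmax over per-candidate answer scores. Below is a minimal sketch of how such a ranked list could be produced; the variable names and the logit values are illustrative assumptions, not the code that wrote this log.

```python
import torch

# Hypothetical candidate answers and scores (assumed, for illustration only).
candidates = ['2', '3', '4', '1', '5', '8', '7', '29']
logits = torch.tensor([0.55, 0.52, 0.51, 0.50, 0.49, 0.48, 0.47, 0.46])

# Normalize scores into a probability distribution over the candidates.
probs = torch.softmax(logits, dim=0)

# Pair each candidate with its probability, most probable first.
ranked = sorted(zip(candidates, probs.tolist()), key=lambda kv: -kv[1])
print(ranked)
print([c for c, _ in ranked])
```

With near-uniform logits, as here, the probabilities cluster around 1/8, which matches the narrow spread of values in the logged lists.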
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
tensor([9.8992e-01, 9.7050e-03, 1.4447e-06, 2.0142e-04, 2.4333e-07, 1.6699e-04,
        3.2556e-06, 1.3864e-07], device='cuda:1', grad_fn=<SoftmaxBackward0>)
3 *************
['3', '4', '1', '5', '8', '2', '6', '12'] tensor([9.8992e-01, 9.7050e-03, 1.4447e-06, 2.0142e-04, 2.4333e-07, 1.6699e-04,
        3.2556e-06, 1.3864e-07], device='cuda:1', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(1.0000, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(1.4447e-06, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='How many birds are flying in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
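The three-line VQA/EVAL/RESULT programs in this log can be executed with a small interpreter. The sketch below is an assumed reconstruction: the `VQA` step is stubbed out with a callback (the real system dispatches it to a vision-language model, as the "Registering VQA_lavis step" lines suggest), and all names are illustrative.

```python
import re

def run_program(lines, vqa_fn):
    """Interpret a VQA/EVAL/RESULT program; vqa_fn answers a question string."""
    env = {}
    for line in lines:
        var, call = line.split('=', 1)
        if call.startswith('VQA'):
            # Delegate the visual question to the supplied model callback.
            question = re.search(r"question='([^']*)'", call).group(1)
            env[var] = vqa_fn(question)
        elif call.startswith('EVAL'):
            # Substitute earlier answers into the expression, then evaluate it.
            expr = re.search(r"expr='([^']*)'", call).group(1).format(**env)
            try:
                env[var] = eval(expr)
            except Exception:
                env[var] = 'Execute Error'
        elif call.startswith('RESULT'):
            return env[re.search(r"var=(\w+)", call).group(1)]

program = [
    "ANSWER0=VQA(image=LEFT,question='How many birds are flying in the image?')",
    "ANSWER1=EVAL(expr='{ANSWER0} == 1')",
    "FINAL_ANSWER=RESULT(var=ANSWER1)",
]
print(run_program(program, vqa_fn=lambda q: 1))  # True
```

The 'Execute Error' key in the logged distributions corresponds to the `except` branch: an EVAL expression that fails to evaluate for a given answer.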
torch.Size([7, 3, 448, 448])
tensor([1.0000e+00, 8.6090e-10, 1.3074e-10, 4.2861e-10, 3.2230e-10, 1.8642e-08,
        1.2626e-08, 4.4694e-10], device='cuda:3', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([1.0000e+00, 8.6090e-10, 1.3074e-10, 4.2861e-10, 3.2230e-10, 1.8642e-08,
        1.2626e-08, 4.4694e-10], device='cuda:3', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(1., device='cuda:3', grad_fn=<DivBackward0>), False: tensor(2.0832e-08, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many monkeys are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} < 10')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([7, 3, 448, 448])
tensor([1.0000e+00, 3.1563e-07, 5.9989e-09, 2.0981e-08, 1.1034e-09, 2.2816e-09,
        4.6448e-09, 3.5752e-09], device='cuda:2', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([1.0000e+00, 3.1563e-07, 5.9989e-09, 2.0981e-08, 1.1034e-09, 2.2816e-09,
        4.6448e-09, 3.5752e-09], device='cuda:2', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(1.0000, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(3.5421e-07, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:2', grad_fn=<DivBackward0>)}
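The {True, False, 'Execute Error'} dictionaries in this log can be read as a marginalization: each candidate answer's probability is routed to the outcome its EVAL expression produces, and the probabilities are summed per outcome. The sketch below is a hypothetical reconstruction under that assumption, not the actual implementation; the expression and probabilities are taken from the preceding log lines for illustration.

```python
import torch

def outcome_distribution(candidates, probs, expr):
    """Sum candidate-answer probabilities by the outcome of evaluating expr."""
    dist = {True: torch.tensor(0.0), False: torch.tensor(0.0),
            'Execute Error': torch.tensor(0.0)}
    for cand, p in zip(candidates, probs):
        try:
            outcome = bool(eval(expr.format(ANSWER0=cand)))
        except Exception:
            # Expressions that fail for this candidate feed the error bucket.
            outcome = 'Execute Error'
        dist[outcome] = dist[outcome] + p
    return dist

probs = torch.tensor([1.0000e+00, 3.1563e-07, 5.9989e-09, 2.0981e-08,
                      1.1034e-09, 2.2816e-09, 4.6448e-09, 3.5752e-09])
dist = outcome_distribution(['2', '3', '4', '1', '5', '8', '7', '29'],
                            probs, '{ANSWER0} == 2')
print(dist)  # nearly all mass on True, as in the logged distributions
```

Because the distribution is built from softmax outputs, it stays differentiable (hence the `grad_fn` entries on the logged tensors), which is what makes outcome-level supervision possible.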
question: ['How many birds are flying in the image?'], responses:['7']
[('7', 0.12828776251745355), ('8', 0.1258361832781132), ('11', 0.12481772898325143), ('5', 0.124759881092759), ('9', 0.12447036165452931), ('10', 0.1239759375399529), ('6', 0.12393017600998846), ('12', 0.12392196892395223)]
[['7', '8', '11', '5', '9', '10', '6', '12']]
question: ['How many monkeys are in the image?'], responses:['40']
[('40', 0.12638022987124733), ('39', 0.12509919407251455), ('42', 0.12494223232783619), ('41', 0.12482626048065008), ('45', 0.12479694604159434), ('38', 0.12473125094691345), ('47', 0.1246423477331973), ('32', 0.1245815385260468)]
[['40', '39', '42', '41', '45', '38', '47', '32']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
tensor([8.1458e-01, 3.9094e-04, 8.3473e-05, 1.9431e-03, 1.6457e-03, 5.9106e-05,
        1.8130e-01, 2.3061e-06], device='cuda:1', grad_fn=<SoftmaxBackward0>)
7 *************
['7', '8', '11', '5', '9', '10', '6', '12'] tensor([8.1458e-01, 3.9094e-04, 8.3473e-05, 1.9431e-03, 1.6457e-03, 5.9106e-05,
        1.8130e-01, 2.3061e-06], device='cuda:1', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0., device='cuda:1', grad_fn=<MulBackward0>), False: tensor(1.0000, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:1', grad_fn=<DivBackward0>)}
tensor([0.8706, 0.0156, 0.0042, 0.0068, 0.0497, 0.0046, 0.0447, 0.0037],
       device='cuda:3', grad_fn=<SoftmaxBackward0>)
40 *************
['40', '39', '42', '41', '45', '38', '47', '32'] tensor([0.8706, 0.0156, 0.0042, 0.0068, 0.0497, 0.0046, 0.0447, 0.0037],
       device='cuda:3', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0., device='cuda:3', grad_fn=<MulBackward0>), False: tensor(1.0000, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)}
[2024-10-24 09:50:25,901] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.36 | optimizer_gradients: 0.35 | optimizer_step: 0.33
[2024-10-24 09:50:25,901] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 3148.90 | backward_microstep: 10864.33 | backward_inner_microstep: 3010.53 | backward_allreduce_microstep: 7853.69 | step_microstep: 7.65
[2024-10-24 09:50:25,901] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 3148.91 | backward: 10864.32 | backward_inner: 3010.57 | backward_allreduce: 7853.67 | step: 7.66
95%|██████████| 4605/4844 [19:09:09<1:01:28, 15.43s/it]Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='How many animals are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='Which direction is the car facing?')
ANSWER1=EVAL(expr='{ANSWER0} == "right"')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='How many animals are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([3, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='Is the animal looking toward the camera?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([13, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
question: ['Which direction is the car facing?'], responses:['forward']
[('forward', 0.12740750763265657), ('backwards', 0.12486722130148889), ('sideways', 0.12482029904463199), ('back', 0.124701201025097), ('straight', 0.12464134925665891), ('movement', 0.12455374859988748), ('swing', 0.12453530036866774), ('working', 0.1244733727709115)]
[['forward', 'backwards', 'sideways', 'back', 'straight', 'movement', 'swing', 'working']]
torch.Size([3, 3, 448, 448]) knan debug pixel values shape
question: ['How many animals are in the image?'], responses:['1']