['3', '4', '1', '5', '8', '2', '6', '12'] tensor([0.6499, 0.2247, 0.0173, 0.0471, 0.0028, 0.0471, 0.0099, 0.0013],
       device='cuda:3', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0471, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.9529, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)}
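Each "final probability distribution" line in this log folds the softmax over the top-k answer candidates (printed just above it) into three buckets. A minimal sketch of that folding, assuming eval-based substitution of candidates into the program expression and a direct yes/no mapping; the function name and all details are assumptions, not code from the log:

```python
import torch

def fold_answer_distribution(candidates, probs, expr='{ANSWER0}'):
    """Fold top-k candidate probabilities into the {True, False,
    'Execute Error'} buckets this log reports. Sketch only: the yes/no
    shortcut and the eval-based substitution are assumptions."""
    dist = {True: probs.new_zeros(()), False: probs.new_zeros(())}
    for ans, p in zip(candidates, probs):
        try:
            if ans in ('yes', 'no'):                   # boolean questions
                key = (ans == 'yes')
            else:                                      # e.g. '{ANSWER0} == 1'
                key = bool(eval(expr.format(ANSWER0=ans)))
            dist[key] = dist[key] + p
        except Exception:
            continue  # unparseable candidates fall into the residual below
    total = probs.sum()
    # Residual mass becomes 'Execute Error'; computing it as a difference
    # explains the -1.1921e-07 above: rounding can push it just below zero.
    dist['Execute Error'] = total - dist[True] - dist[False]
    return {k: v / total for k, v in dist.items()}  # grad_fn=<DivBackward0>
```

The yes/no case further down (True: 0.4916, False: 0.5077, 'Execute Error': 0.0007) matches this folding exactly: p('yes'), p('no'), and the residual mass of the six remaining candidates.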
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
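The repeated "dynamic ViT batch size" lines are consistent with dynamic-resolution tiling in the InternVL style: each sample is split into 7 tiles of 448×448 (matching the pixel-values shape above), and the reported token length covers the whole multimodal sequence. A back-of-the-envelope sketch, assuming a 14-pixel patch and 2× pixel-shuffle downsampling (neither value is confirmed by the log):

```python
# Assumed values -- the patch size and downsample factor are not in the log.
tile_size, patch_size, downsample = 448, 14, 2
tokens_per_tile = (tile_size // patch_size // downsample) ** 2  # 16**2 = 256
visual_tokens = 7 * tokens_per_tile                             # 1792
# 'dynamic token length: 1861' would then leave 1861 - 1792 = 69 tokens
# for the text portion of the sequence.
```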
question: ['Is the dog on the left sitting on the grass?'], responses:['no']
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
tensor([8.2555e-01, 4.3700e-02, 6.9142e-03, 1.1879e-01, 3.1601e-03, 8.2465e-04,
        9.6442e-04, 9.5103e-05], device='cuda:0', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([8.2555e-01, 4.3700e-02, 6.9142e-03, 1.1879e-01, 3.1601e-03, 8.2465e-04,
        9.6442e-04, 9.5103e-05], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0437, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.9563, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:0', grad_fn=<DivBackward0>)}
Encountered ExecuteError: CUDA out of memory. Tried to allocate 1.17 GiB. GPU 2 has a total capacity of 44.34 GiB of which 102.94 MiB is free. Including non-PyTorch memory, this process has 44.22 GiB memory in use. Of the allocated memory 40.98 GiB is allocated by PyTorch, and 2.62 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
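When program execution itself fails, as with the CUDA OOM and the TypeError above, the distribution collapses onto 'Execute Error' with tiny 1e-09 floors on True and False (1 - 2·1e-09 = 0.999999998, matching the logged value). A sketch of that fallback; the wrapper name is an assumption, and the real code apparently re-wraps failures as an "ExecuteError" before they reach a handler like this:

```python
def run_program_safely(executor, program, eps=1e-9):
    """Execute a visual program; on any failure, put (almost) all mass
    on 'Execute Error'. The eps floor is read off the logged values;
    the wrapper itself is an assumption."""
    try:
        return executor(program)
    except Exception as e:
        print(f"Encountered {type(e).__name__}: {e}")
        return {True: eps, False: eps, 'Execute Error': 1 - 2 * eps}
```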
[2024-10-22 17:30:39,893] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.35 | optimizer_gradients: 0.35 | optimizer_step: 0.33
[2024-10-22 17:30:39,894] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 10176.71 | backward_microstep: 10669.99 | backward_inner_microstep: 9656.84 | backward_allreduce_microstep: 1012.98 | step_microstep: 7.90
[2024-10-22 17:30:39,894] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 10176.73 | backward: 10669.98 | backward_inner: 9656.92 | backward_allreduce: 1012.97 | step: 7.91
  1%|▏         | 31/2424 [12:12<13:57:08, 20.99s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Can the sky be seen behind the dog?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='Is the sting ray facing towards the left?')
FINAL_ANSWER=RESULT(var=ANSWER0)
ANSWER0=VQA(image=LEFT,question='Are the bags displayed in the same position in both images?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='Are the nipples hanging down on an adult primate?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
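Taken together, the "Registering … step" lines and the four programs above suggest a small interpreter: each program line is parsed into a step (VQA, registered as VQA_lavis; EVAL; RESULT), and the steps run in order, threading answers through named variables. A minimal sketch of such an interpreter; the parsing regex, function names, and the yes/no coercion are assumptions:

```python
import re

def register_steps(program_text):
    """Parse a program like the ones above into (var, op, argstr) steps,
    logging one 'Registering ... step' line per step as in the log."""
    steps = []
    for line in program_text.strip().splitlines():
        var, call = line.split('=', 1)
        op, argstr = re.match(r"(\w+)\((.*)\)", call.strip()).groups()
        print(f"Registering {'VQA_lavis' if op == 'VQA' else op} step")
        steps.append((var.strip(), op, argstr))
    return steps

def execute_steps(steps, images, vqa_model):
    """Run registered steps in order: VQA queries the model, EVAL
    substitutes earlier answers into its expression, RESULT returns."""
    state = dict(images)  # e.g. {'LEFT': ..., 'RIGHT': ...}
    for var, op, argstr in steps:
        if op == 'VQA':
            img_kv, q_kv = argstr.split(',', 1)
            image = state[img_kv.split('=', 1)[1]]
            question = q_kv.split('=', 1)[1].strip("'")
            state[var] = vqa_model(image, question)
        elif op == 'EVAL':
            expr = argstr.split('=', 1)[1].strip("'").format(**state)
            # bare yes/no answers become booleans; anything else, e.g.
            # '3 == 1' from the counting program below, is evaluated
            state[var] = {'yes': True, 'no': False}.get(expr, expr)
            if isinstance(state[var], str):
                state[var] = eval(state[var])
        elif op == 'RESULT':
            return state[argstr.split('=', 1)[1]]
```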
torch.Size([1, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
question: ['Are the bags displayed in the same position in both images?'], responses:['no']
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
tensor([5.0768e-01, 4.9157e-01, 4.7377e-05, 1.3683e-04, 9.6224e-05, 1.0837e-04,
        3.3453e-04, 1.9256e-05], device='cuda:1', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([5.0768e-01, 4.9157e-01, 4.7377e-05, 1.3683e-04, 9.6224e-05, 1.0837e-04,
        3.3453e-04, 1.9256e-05], device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.4916, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.5077, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0007, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many vending machines are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
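For the counting program just above, the EVAL step is a literal string substitution followed by evaluation. With the top answer '3' that this question receives further below, it would resolve as:

```python
expr = '{ANSWER0} == 1'.format(ANSWER0='3')  # -> '3 == 1'
print(eval(expr))                            # False
# A candidate like 'no clock' would raise a SyntaxError here instead,
# feeding the 'Execute Error' bucket of the folded distribution.
```

The earlier TypeError ('NoneType' + 'str') is consistent with a VQA step returning None before such a substitution, though the log does not show the exact site.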
torch.Size([7, 3, 448, 448])
question: ['Can the sky be seen behind the dog?'], responses:['yes']
question: ['Is the sting ray facing towards the left?'], responses:['yes']
question: ['Are the nipples hanging down on an adult primate?'], responses:['no']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
question: ['How many vending machines are in the image?'], responses:['3']
[('3', 0.12809209985493852), ('4', 0.12520382509374006), ('1', 0.1251059160028928), ('5', 0.12483070991268265), ('8', 0.12458076282181878), ('2', 0.12413212281858195), ('6', 0.1241125313968017), ('12', 0.12394203209854344)]
[['3', '4', '1', '5', '8', '2', '6', '12']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1864
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1865
tensor([6.4795e-01, 8.6692e-03, 3.4225e-01, 6.1654e-04, 6.7809e-05, 1.5788e-04,
        4.6566e-05, 2.3541e-04], device='cuda:2', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([6.4795e-01, 8.6692e-03, 3.4225e-01, 6.1654e-04, 6.7809e-05, 1.5788e-04,
        4.6566e-05, 2.3541e-04], device='cuda:2', grad_fn=<SelectBackward0>)
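The lone answers trailed by asterisks ("2 *************" and "yes *************" above) match a simple debug print of the argmax candidate, along the lines of this sketch (names assumed):

```python
top = candidates[int(probs.argmax())]  # '2' and 'yes' for the tensors above
print(top, '*' * 13)
```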
tensor([5.6138e-01, 4.3720e-01, 8.9261e-05, 1.4403e-04, 3.6972e-04, 4.1917e-04,