text (string, lengths 0 to 1.16k)
[['1', '3', '4', '8', '6', '12', '2', '47']]
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1860
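The `dynamic ViT batch size` and `pixel values shape` lines above come from a dynamic-tiling preprocessor: each image is cut into a variable number of 448×448 tiles and the tile stack is fed to the vision encoder. A minimal sketch of that shape bookkeeping (the tile count and the `num_samples` name are illustrative, not taken from the training code):

```python
import torch

# Illustrative tile count; the logs above show 7 or 13 tiles per sample.
tiles = 7
pixel_values = torch.randn(tiles, 3, 448, 448)   # matches the logged shape
print(pixel_values.shape, "debug pixel values shape")

# "dynamic ViT batch size" is the number of tiles currently stacked;
# "images per sample" divides that by the number of samples in the batch.
num_samples = 1
print(f"dynamic ViT batch size: {pixel_values.shape[0]}, "
      f"images per sample: {pixel_values.shape[0] / num_samples}")
```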
tensor([7.0284e-01, 1.2159e-01, 1.8729e-02, 1.4732e-01, 6.4435e-03, 1.3144e-03,
        1.6316e-03, 1.2779e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([7.0284e-01, 1.2159e-01, 1.8729e-02, 1.4732e-01, 6.4435e-03, 1.3144e-03,
        1.6316e-03, 1.2779e-04], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.7028, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.2972, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:0', grad_fn=<DivBackward0>)}
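The pattern above, a softmax over eight candidate answers followed by a normalized {True, False, 'Execute Error'} dictionary with `DivBackward0` gradients, is consistent with probability-weighted program execution: every candidate answer is substituted into the program's comparison expression, and its probability mass is routed into the bucket matching the evaluation outcome. A hedged sketch (the expression and function names are illustrative, not the author's code):

```python
import torch

def soft_eval(expr_template, candidates, probs, eps=1e-9):
    """Route each candidate's probability into the bucket its evaluation
    result falls in, then normalize (the DivBackward0 seen in the logs)."""
    buckets = {True: probs.new_tensor(eps),
               False: probs.new_tensor(eps),
               'Execute Error': probs.new_tensor(eps)}
    for answer, p in zip(candidates, probs):
        try:
            outcome = bool(eval(expr_template.format(ANSWER0=repr(answer))))
        except Exception:
            outcome = 'Execute Error'
        buckets[outcome] = buckets[outcome] + p
    total = sum(buckets.values())
    return {k: v / total for k, v in buckets.items()}

probs = torch.softmax(torch.randn(8), dim=0)        # stand-in for model scores
candidates = ['2', '3', '4', '1', '5', '8', '7', '29']
print(soft_eval('int({ANSWER0}) == 2', candidates, probs))
```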
ANSWER0=VQA(image=LEFT,question='Is the drum on the left white?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
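`VQA`, `EVAL` and `RESULT` form a small VisProg-style program. Below is a hedged sketch of how such a line-per-step program could be parsed and dispatched; the real interpreter (to which the "Registering … step" lines further down belong) is not shown here, and the stub modules are purely illustrative:

```python
import re

STEP = re.compile(r"(?P<target>\w+)=(?P<op>\w+)\((?P<args>.*)\)$")

def run_program(program, modules):
    state = {}
    for line in program.strip().splitlines():
        m = STEP.match(line.strip())
        target, op, args = m.group('target'), m.group('op'), m.group('args')
        # Naive kwarg split; fine while argument values contain no commas.
        kwargs = dict(kv.split('=', 1) for kv in args.split(',')) if args else {}
        state[target] = modules[op](state, **kwargs)
    return state['FINAL_ANSWER']

modules = {
    'VQA': lambda state, image, question: 'no',                    # stub answer
    'EVAL': lambda state, expr: expr.strip("'").format(**state),   # substitution only
    'RESULT': lambda state, var: state[var],
}
program = ("ANSWER0=VQA(image=LEFT,question='Is the drum on the left white?')\n"
           "ANSWER1=EVAL(expr='{ANSWER0}')\n"
           "FINAL_ANSWER=RESULT(var=ANSWER1)")
print(run_program(program, modules))                               # -> no
```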
torch.Size([13, 3, 448, 448])
tensor([7.9706e-01, 2.2560e-02, 1.7785e-01, 1.1782e-03, 1.2777e-04, 5.6396e-04,
        7.8238e-05, 5.8722e-04], device='cuda:2', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([7.9706e-01, 2.2560e-02, 1.7785e-01, 1.1782e-03, 1.2777e-04, 5.6396e-04,
        7.8238e-05, 5.8722e-04], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.1778, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.7971, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0251, device='cuda:2', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is the bowl on the left image all white?')
FINAL_ANSWER=RESULT(var=ANSWER0)
torch.Size([13, 3, 448, 448])
question: ['Is the drum on the left white?'], responses: ['no']
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
Encountered ExecuteError: CUDA out of memory. Tried to allocate 5.86 GiB. GPU 2 has a total capacity of 44.34 GiB of which 4.32 GiB is free. Including non-PyTorch memory, this process has 40.01 GiB memory in use. Of the allocated memory 36.91 GiB is allocated by PyTorch, and 2.46 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
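The error text itself points at the usual mitigation. One way to apply it, with 128 MiB as an illustrative split size (not a value taken from the logs); the variable must be set before the first CUDA allocation in the process:

```python
import os

# Cap the size of blocks the caching allocator will split, which can
# reduce the fragmentation described in the OOM message above.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import after setting the env var so the allocator sees it
```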
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
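A plausible (unconfirmed) chain of events: the OOM above makes a step return `None`, which a later step then concatenates into a string. A defensive cast in the consuming step would turn the crash into an 'Execute Error' that the probability bookkeeping can absorb:

```python
# Hypothetical guard; `answer` stands for a step result that may be None
# after a failed VQA call.
answer = None
expr = ('' if answer is None else str(answer)) + ' <= 3'
```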
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
tensor([5.4619e-01, 4.5281e-01, 2.7730e-05, 1.6172e-04, 1.0275e-04, 1.0245e-04,
        5.8692e-04, 1.4959e-05], device='cuda:1', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([5.4619e-01, 4.5281e-01, 2.7730e-05, 1.6172e-04, 1.0275e-04, 1.0245e-04,
        5.8692e-04, 1.4959e-05], device='cuda:1', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.4528, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.5462, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0010, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Are there any fish in the image?')
ANSWER1=EVAL(expr='not {ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
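`EVAL(expr='not {ANSWER0}')` can only succeed if the substituted answer is a valid Python expression, so free-form answers like 'yes'/'no' presumably get mapped to literals before evaluation. A hedged sketch of that mapping (function names are illustrative):

```python
def to_literal(answer):
    answer = str(answer)
    if answer.lower() in ('yes', 'no'):
        return str(answer.lower() == 'yes')    # 'True' / 'False'
    if answer.lstrip('-').isdigit():
        return answer                          # numbers pass through unquoted
    return repr(answer)                        # everything else stays a string

def eval_step(expr, **answers):
    return eval(expr.format(**{k: to_literal(v) for k, v in answers.items()}))

print(eval_step('not {ANSWER0}', ANSWER0='no'))    # True
print(eval_step('{ANSWER0} <= 3', ANSWER0='1'))    # True
```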
tensor([9.3908e-01, 1.0431e-02, 3.2814e-03, 1.0205e-03, 1.3232e-03, 9.0174e-04,
        4.3921e-02, 3.9694e-05], device='cuda:3', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.3908e-01, 1.0431e-02, 3.2814e-03, 1.0205e-03, 1.3232e-03, 9.0174e-04,
        4.3921e-02, 3.9694e-05], device='cuda:3', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0439, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.9561, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:3', grad_fn=<DivBackward0>)}
torch.Size([13, 3, 448, 448])
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
ANSWER0=VQA(image=LEFT,question='How many baboons are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} <= 3')
FINAL_ANSWER=RESULT(var=ANSWER1)
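The `{True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}` lines look like an epsilon-smoothed "all mass on Execute Error" fallback, presumably emitted when the whole program fails (for example after the OOM above) rather than a single candidate. A sketch that reproduces those numbers, under that assumption:

```python
EPS = 1e-9

def error_fallback():
    # All probability mass on 'Execute Error', smoothed so that no bucket
    # is exactly zero, then renormalized.
    buckets = {True: EPS, False: EPS, 'Execute Error': 1.0}
    total = sum(buckets.values())
    return {k: v / total for k, v in buckets.items()}

print(error_fallback())
# ~{True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
```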
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 5.85 GiB. GPU 1 has a total capacity of 44.34 GiB of which 1.87 GiB is free. Including non-PyTorch memory, this process has 42.46 GiB memory in use. Of the allocated memory 39.95 GiB is allocated by PyTorch, and 1.87 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
question: ['How many baboons are in the image?'], responses: ['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3396
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
tensor([5.4618e-01, 4.5280e-01, 2.3082e-05, 1.1825e-04, 1.0979e-04, 1.8486e-04,
        5.7801e-04, 1.2334e-05], device='cuda:0', grad_fn=<SoftmaxBackward0>)
no *************
['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock'] tensor([5.4618e-01, 4.5280e-01, 2.3082e-05, 1.1825e-04, 1.0979e-04, 1.8486e-04,
        5.7801e-04, 1.2334e-05], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.4528, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.5462, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0010, device='cuda:0', grad_fn=<DivBackward0>)}
tensor([5.8034e-01, 2.8893e-02, 7.7772e-03, 2.3344e-03, 3.6720e-03, 2.0747e-03,
        3.7477e-01, 1.3864e-04], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([5.8034e-01, 2.8893e-02, 7.7772e-03, 2.3344e-03, 3.6720e-03, 2.0747e-03,
        3.7477e-01, 1.3864e-04], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.9840, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.0160, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:2', grad_fn=<DivBackward0>)}
[2024-10-22 17:25:23,538] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.37 | optimizer_gradients: 0.25 | optimizer_step: 0.32
[2024-10-22 17:25:23,538] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 12634.14 | backward_microstep: 11762.56 | backward_inner_microstep: 11756.67 | backward_allreduce_microstep: 5.66 | step_microstep: 7.57
[2024-10-22 17:25:23,538] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 12634.16 | backward: 11762.55 | backward_inner: 11756.82 | backward_allreduce: 5.64 | step: 7.58
1%| | 17/2424 [06:55<16:11:43, 24.22s/it]
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Is the dog wearing a collar?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
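The repeated "Registering … step" lines suggest a name-to-module registry that the interpreter consults when dispatching parsed steps. A hedged sketch of such a registry (the decorator and class names are illustrative, not taken from the codebase):

```python
STEP_REGISTRY = {}

def register_step(name):
    """Register an executable step under the name programs refer to it by."""
    def decorator(cls):
        print(f"Registering {name} step")
        STEP_REGISTRY[name] = cls
        return cls
    return decorator

@register_step('EVAL')
class EvalStep:
    def execute(self, state, expr):
        return eval(expr.format(**state))
```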