['1', '3', '4', '8', '6', '12', '2', '47'] tensor([7.6683e-01, 2.4650e-02, 7.7557e-03, 1.7371e-03, 2.8646e-03, 1.4294e-03, 1.9466e-01, 7.5309e-05], device='cuda:1', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.1947, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.8053, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:1', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='How many seals are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
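The three lines above are one complete program trace in the VISPROG style: a VQA step queries the image, an EVAL step substitutes the answer into a Python expression via the {ANSWER0} placeholder, and RESULT returns the final variable. A minimal sketch of how such a trace could be interpreted (the parsing scheme and the `vqa_model` callable are assumptions, not the actual training code):

```python
import re

def run_program(trace: str, images: dict, vqa_model) -> dict:
    """Execute a VQA/EVAL/RESULT trace line by line, keeping a variable store."""
    env = {}
    for line in trace.strip().splitlines():
        var, call = line.split("=", 1)
        if call.startswith("VQA"):
            image_key = re.search(r"image=(\w+)", call).group(1)
            question = re.search(r"question='([^']*)'", call).group(1)
            env[var] = vqa_model(images[image_key], question)
        elif call.startswith("EVAL"):
            expr = re.search(r"expr='([^']*)'", call).group(1)
            expr = expr.format(**env)  # fill {ANSWER0}-style placeholders
            try:
                env[var] = eval(expr)
            except Exception:
                env[var] = "Execute Error"
        elif call.startswith("RESULT"):
            ref = re.search(r"var=(\w+)", call).group(1)
            env[var] = env[ref]
    return env
```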
torch.Size([7, 3, 448, 448])
question: ['How many boars are in the image?'], responses:['1']
question: ['How many seals are in the image?'], responses:['2']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
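The near-uniform (candidate, probability) pairs above look like the top-8 answer tokens with their probabilities renormalized over just those candidates; the much sharper tensors later in the log are then a re-scored softmax over the same eight answers. A sketch of the candidate-extraction step under that assumption (`logits` is the model's next-token logits; the helper name is hypothetical):

```python
import torch

def topk_answer_distribution(logits: torch.Tensor, tokenizer, k: int = 8):
    """Top-k candidate answers with probabilities renormalized over the k candidates."""
    probs = torch.softmax(logits, dim=-1)   # distribution over the full vocabulary
    top_p, top_ids = probs.topk(k)          # k most likely answer tokens
    top_p = top_p / top_p.sum()             # renormalize so the k candidates sum to 1
    cands = [tokenizer.decode(int(i)).strip() for i in top_ids]
    return list(zip(cands, top_p.tolist()))
```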
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
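The repeated "dynamic ViT batch size" lines reflect InternVL-style dynamic tiling: each image is cut into 448x448 tiles, every tile becomes one ViT batch element (hence pixel_values of shape [7, 3, 448, 448]), and the token length grows with the tile count; 1861 is consistent with roughly 256 visual tokens per tile plus the text prompt. A hedged sketch of the tiling step, with a simplified resolution policy that is an assumption (InternVL additionally appends a global thumbnail tile, which is how odd counts like 7 arise):

```python
import torch
from PIL import Image
from torchvision import transforms

TILE = 448

def tile_image(img: Image.Image, max_tiles: int = 12) -> torch.Tensor:
    """Split an image into a grid of 448x448 tiles -> [rows*cols, 3, 448, 448]."""
    to_tensor = transforms.ToTensor()
    cols = min(max_tiles, max(1, round(img.width / TILE)))
    rows = max(1, min(max_tiles // cols, round(img.height / TILE)))
    img = img.resize((cols * TILE, rows * TILE))
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE)
            tiles.append(to_tensor(img.crop(box)))
    return torch.stack(tiles)  # one ViT batch element per tile
```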
tensor([9.6499e-01, 5.0635e-03, 1.8053e-03, 6.8403e-04, 1.0608e-03, 6.3789e-04, 2.5715e-02, 4.1845e-05], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.6499e-01, 5.0635e-03, 1.8053e-03, 6.8403e-04, 1.0608e-03, 6.3789e-04, 2.5715e-02, 4.1845e-05], device='cuda:2', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.0257, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.9743, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(1.1921e-07, device='cuda:2', grad_fn=<DivBackward0>)}
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
dynamic ViT batch size: 7, images per sample: 7.0, dynamic token length: 1861
tensor([9.3240e-01, 1.2891e-02, 5.7198e-03, 2.3819e-03, 3.2585e-03, 2.2238e-03, 4.0966e-02, 1.6121e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.3240e-01, 1.2891e-02, 5.7198e-03, 2.3819e-03, 3.2585e-03, 2.2238e-03, 4.0966e-02, 1.6121e-04], device='cuda:0', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.0676, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.9324, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0., device='cuda:0', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=RIGHT,question='Is there a human visible next to the german shepherd dog?')
ANSWER1=EVAL(expr='not {ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
tensor([7.6515e-01, 1.5067e-01, 2.4595e-02, 4.5949e-02, 9.0481e-03, 2.0186e-03, 2.4351e-03, 1.3809e-04], device='cuda:1', grad_fn=<SoftmaxBackward0>)
2 *************
['2', '3', '4', '1', '5', '8', '7', '29'] tensor([7.6515e-01, 1.5067e-01, 2.4595e-02, 4.5949e-02, 9.0481e-03, 2.0186e-03, 2.4351e-03, 1.3809e-04], device='cuda:1', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.7651, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.2349, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:1', grad_fn=<DivBackward0>)}
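Each "Final probability distribution" line collapses the eight candidate answers into {True, False, 'Execute Error'} by running the trace's EVAL expression on every candidate and summing the probability mass; candidates that make the expression raise go to the error bucket. A minimal sketch, assuming the (candidate, probability) pairs from above and a yes/no-to-boolean mapping (the helper and word map are illustrative):

```python
import torch

def eval_distribution(cands, probs: torch.Tensor, expr: str):
    """Sum candidate probabilities into {True, False, 'Execute Error'} buckets."""
    word_map = {"yes": True, "no": False}  # assumption: yes/no answers map to booleans
    out = {True: probs.new_zeros(()), False: probs.new_zeros(()),
           "Execute Error": probs.new_zeros(())}
    for cand, p in zip(cands, probs):
        try:
            val = bool(eval(expr.format(ANSWER0=word_map.get(cand, cand))))
            out[val] = out[val] + p
        except Exception:
            out["Execute Error"] = out["Execute Error"] + p
    return out

# e.g. eval_distribution(['2', '3', '4', '1', '5', '8', '7', '29'], probs, '{ANSWER0} == 2')
# puts the mass of '2' on True and everything else on False, as in the line above.
```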
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: CUDA out of memory. Tried to allocate 1.17 GiB. GPU 3 has a total capacity of 44.34 GiB of which 924.94 MiB is free. Including non-PyTorch memory, this process has 43.42 GiB memory in use. Of the allocated memory 37.85 GiB is allocated by PyTorch, and 5.02 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
Final probability distribution: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
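The two "Encountered ..." lines show the fallback path: a CUDA OOM inside a step, followed by a TypeError from concatenating None with a string, and the executor responds by emitting an almost-all-error distribution instead of crashing the training step. A sketch of that guard, with the epsilon taken from the 1e-09 / 0.999999998 values in the log (the function name is hypothetical):

```python
def safe_execute(run_step, eps: float = 1e-9):
    """Run one program step; on any failure return an 'all error' distribution."""
    try:
        return run_step()
    except Exception as e:
        print(f"Encountered {type(e).__name__}: {e}")
        # Keep tiny mass on True/False so downstream log-probabilities stay finite.
        return {True: eps, False: eps, "Execute Error": 1.0 - 2 * eps}
```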
ANSWER0=VQA(image=RIGHT,question='Are straws visible in the image?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
question: ['Are straws visible in the image?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([1, 3, 448, 448]) knan debug pixel values shape
tensor([6.4060e-01, 1.3414e-02, 3.4289e-01, 9.4435e-04, 8.3034e-05, 2.4960e-04, 8.8756e-05, 1.7375e-03], device='cuda:3', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([6.4060e-01, 1.3414e-02, 3.4289e-01, 9.4435e-04, 8.3034e-05, 2.4960e-04, 8.8756e-05, 1.7375e-03], device='cuda:3', grad_fn=<SelectBackward0>)
Final probability distribution: {True: tensor(0.6406, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.3429, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0165, device='cuda:3', grad_fn=<DivBackward0>)}
Encountered ExecuteError: CUDA out of memory. Tried to allocate 2.93 GiB. GPU 0 has a total capacity of 44.34 GiB of which 788.94 MiB is free. Including non-PyTorch memory, this process has 43.55 GiB memory in use. Of the allocated memory 40.73 GiB is allocated by PyTorch, and 2.20 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
Final probability distribution: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:26:35,779] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.52 | optimizer_gradients: 0.24 | optimizer_step: 0.31
[2024-10-22 17:26:35,780] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 13289.40 | backward_microstep: 10773.76 | backward_inner_microstep: 10768.32 | backward_allreduce_microstep: 5.22 | step_microstep: 7.82
[2024-10-22 17:26:35,780] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 13289.42 | backward: 10773.75 | backward_inner: 10768.39 | backward_allreduce: 5.20 | step: 7.83
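The three [Rank 0] lines are DeepSpeed's wall-clock breakdown for this step (forward ~13.3 s, backward ~10.8 s, optimizer step under 10 ms). DeepSpeed emits them when `wall_clock_breakdown` is enabled in the config; a minimal config sketch (the other values are placeholders, not this run's actual settings):

```python
ds_config = {
    "train_micro_batch_size_per_gpu": 1,  # placeholder, not read from the log
    "zero_optimization": {"stage": 2},    # ZeRO stage is an assumption
    "wall_clock_breakdown": True,         # enables the forward/backward/step timings above
}
```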
1%| | 20/2424 [08:08<16:06:56, 24.13s/it]
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='How many dogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} > 10')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
ANSWER0=VQA(image=RIGHT,question='How many people are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
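The "Registering ... step" triplets suggest the executor builds a registry mapping step names (VQA_lavis, EVAL, RESULT) to handler functions before running each program. A hypothetical registry sketch (names and structure are illustrative, not the actual code):

```python
STEP_INTERPRETERS = {}

def register_step(name):
    """Decorator that maps a step name to its handler and logs the registration."""
    def decorator(fn):
        print(f"Registering {name} step")
        STEP_INTERPRETERS[name] = fn
        return fn
    return decorator

@register_step("EVAL")
def eval_step(expr, env):
    # Fill {ANSWER0}-style placeholders from earlier steps, then evaluate.
    return eval(expr.format(**env))
```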
torch.Size([1, 3, 448, 448])
ANSWER0=VQA(image=RIGHT,question='Is there at least one person standing in front of and staring ahead at a row of vending machines?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='How many round plates are visible in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([3, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
question: ['How many dogs are in the image?'], responses:['15']
[('15', 0.12850265658859292), ('14', 0.12554598114685298), ('13', 0.12491622450863256), ('16', 0.12450938797787274), ('29', 0.12444750181633149), ('35', 0.12413627702798803), ('22', 0.12400388658176363), ('21', 0.12393808435196574)]
[['15', '14', '13', '16', '29', '35', '22', '21']]