The final probability distribution is: {True: tensor(0.0271, device='cuda:1', grad_fn=<DivBackward0>), False: tensor(0.9729, device='cuda:1', grad_fn=<DivBackward0>), 'Execute Error': tensor(-1.1921e-07, device='cuda:1', grad_fn=<DivBackward0>)}
tensor([9.2996e-01, 1.1346e-02, 5.3578e-03, 2.1634e-03, 2.9589e-03, 1.7769e-03,
        4.6287e-02, 1.4539e-04], device='cuda:2', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.2996e-01, 1.1346e-02, 5.3578e-03, 2.1634e-03, 2.9589e-03, 1.7769e-03,
        4.6287e-02, 1.4539e-04], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.0237, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.9763, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(5.9605e-08, device='cuda:2', grad_fn=<DivBackward0>)}
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3393
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3394
tensor([8.2918e-01, 1.4683e-02, 1.5338e-01, 1.6671e-03, 8.7692e-05, 2.9467e-04,
        1.3775e-04, 5.7093e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.2918e-01, 1.4683e-02, 1.5338e-01, 1.6671e-03, 8.7692e-05, 2.9467e-04,
        1.3775e-04, 5.7093e-04], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8292, device='cuda:0', grad_fn=<DivBackward0>), False: tensor(0.1534, device='cuda:0', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0174, device='cuda:0', grad_fn=<DivBackward0>)}
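As a sanity check, the three-way distribution above is exactly what one gets by mapping the eight candidate answers onto execution outcomes: 'yes' evaluates to True, 'no' to False, and the six non-answers fail, so the error bucket collects the leftover mass. A minimal reconstruction in Python (the mapping rule is inferred from these numbers, not taken from the training code):

probs = {'yes': 8.2918e-01, 'congratulations': 1.4683e-02, 'no': 1.5338e-01,
         'honey': 1.6671e-03, 'solid': 8.7692e-05, 'right': 2.9467e-04,
         'candle': 1.3775e-04, 'chocolate': 5.7093e-04}
# 'yes' -> True, 'no' -> False, everything else fails to evaluate.
dist = {True: probs['yes'], False: probs['no'],
        'Execute Error': sum(p for a, p in probs.items() if a not in ('yes', 'no'))}
print(dist)  # {True: 0.82918, False: 0.15338, 'Execute Error': 0.017441...}

This matches the logged {True: 0.8292, False: 0.1534, 'Execute Error': 0.0174} to rounding.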
ANSWER0=VQA(image=RIGHT,question='How many dogs are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} >= 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
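Programs like the three lines above follow a simple VAR=OP(arg=value,...) grammar. A minimal parser sketch in Python (the regex and helper name are illustrative assumptions, not the project's actual loader):

import re

def parse_program(program):
    # Each line: VAR=OP(arg1=value1,arg2=value2,...)
    steps = []
    for line in program.strip().splitlines():
        var, call = line.split('=', 1)
        op = call[:call.index('(')]
        arg_str = call[call.index('(') + 1:call.rindex(')')]
        args = dict(re.findall(r"(\w+)=('[^']*'|[^,)]+)", arg_str))
        steps.append((var.strip(), op, args))
    return steps

steps = parse_program(
    "ANSWER0=VQA(image=RIGHT,question='How many dogs are in the image?')\n"
    "ANSWER1=EVAL(expr='{ANSWER0} >= 2')\n"
    "FINAL_ANSWER=RESULT(var=ANSWER1)")
print(steps[0])  # ('ANSWER0', 'VQA', {'image': 'RIGHT', 'question': "'How many dogs are in the image?'"})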
torch.Size([13, 3, 448, 448])
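The [13, 3, 448, 448] pixel tensor is consistent with the "dynamic ViT batch size: 13" lines above: the input image is apparently split into 13 dynamic-resolution tiles of 448×448, which are batched through the vision encoder together.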
tensor([8.0291e-01, 2.6661e-02, 1.6830e-01, 7.9991e-04, 1.5460e-04, 3.5563e-04,
        6.3776e-05, 7.5329e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.0291e-01, 2.6661e-02, 1.6830e-01, 7.9991e-04, 1.5460e-04, 3.5563e-04,
        6.3776e-05, 7.5329e-04], device='cuda:3', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8029, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.1683, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0288, device='cuda:3', grad_fn=<DivBackward0>)}
Encountered ExecuteError: CUDA out of memory. Tried to allocate 2.93 GiB. GPU 0 has a total capacity of 44.34 GiB of which 1.01 GiB is free. Including non-PyTorch memory, this process has 43.31 GiB memory in use. Of the allocated memory 40.70 GiB is allocated by PyTorch, and 2.00 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
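The message's own suggestion refers to the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before the process first touches CUDA; a minimal example (the 128 MiB split size is an arbitrary illustration, not a value taken from this run):

import os

# Cap the size of cached allocator blocks to reduce fragmentation.
# Must run before the first CUDA allocation (or be exported in the shell).
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'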
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
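The near-degenerate distribution above (1 - 2e-9 on 'Execute Error') looks like a constant fallback returned when program execution itself raises, keeping all three outcomes strictly positive. A plausible sketch, with the epsilon inferred from the printed values:

EPS = 1e-9

def fallback_distribution():
    # On ExecuteError/TypeError/..., put essentially all mass on the error
    # outcome while keeping True and False numerically non-zero.
    return {True: EPS, False: EPS, 'Execute Error': 1.0 - 2 * EPS}

print(fallback_distribution())  # {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}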
[2024-10-22 17:24:59,116] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.35 | optimizer_gradients: 0.36 | optimizer_step: 0.32
[2024-10-22 17:24:59,116] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 12813.79 | backward_microstep: 11494.00 | backward_inner_microstep: 10687.08 | backward_allreduce_microstep: 806.65 | step_microstep: 7.66
[2024-10-22 17:24:59,117] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 12813.81 | backward: 11493.99 | backward_inner: 10687.16 | backward_allreduce: 806.47 | step: 7.68
1%| | 16/2424 [06:31<16:08:40, 24.14s/it]
Registering VQA_lavis step
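As a sanity check on these timings: forward (12.81 s) + backward (11.49 s) + step (0.008 s) comes to roughly 24.3 s per optimizer step, in line with the 24.14 s/it reported by the progress bar.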
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='Does the laptop on the right display the tiles from the operating system Windows?')
FINAL_ANSWER=RESULT(var=ANSWER0)
Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Is the dog looking toward the camera?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=RIGHT,question='Is the drummer wearing a blue and white shirt?')
FINAL_ANSWER=RESULT(var=ANSWER0)
ANSWER0=VQA(image=LEFT,question='How many dogs are standing in the grass?')
ANSWER1=EVAL(expr='{ANSWER0} == 2')
FINAL_ANSWER=RESULT(var=ANSWER1)
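The EVAL expressions above substitute the upstream answer into a Python expression before evaluating it. A minimal sketch of how each candidate answer could be scored, with the yes/no mapping and error handling inferred from the logged distributions rather than taken from the training code:

def eval_over_candidates(expr, candidates):
    # candidates: (answer_string, probability) pairs from the VQA step.
    dist = {True: 0.0, False: 0.0, 'Execute Error': 0.0}
    for answer, prob in candidates:
        # yes/no become Python literals; other strings are substituted
        # verbatim, so non-answers like 'congratulations' raise NameError
        # and their mass lands in 'Execute Error'.
        sub = {'yes': 'True', 'no': 'False'}.get(answer, answer)
        try:
            outcome = bool(eval(expr.format(ANSWER0=sub)))
        except Exception:
            outcome = 'Execute Error'
        dist[outcome] += prob
    return dist

print(eval_over_candidates('{ANSWER0} == 2',
                           [('2', 0.13), ('3', 0.125), ('1', 0.125)]))
# {True: 0.13, False: 0.25, 'Execute Error': 0.0}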
torch.Size([3, 3, 448, 448])
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['Is the drummer wearing a blue and white shirt?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
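Note that the eight probabilities in the tuple list sum to exactly 1.0, so the list appears to be renormalized over just the retained top-8 candidates, which would explain why it is much flatter than the peaked softmax tensors printed elsewhere in the log.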
torch.Size([3, 3, 448, 448]) knan debug pixel values shape
question: ['Is the dog looking toward the camera?'], responses:['yes']
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
tensor([8.6410e-01, 2.3322e-02, 1.0975e-01, 1.0560e-03, 1.1733e-04, 3.4943e-04,
        2.9020e-05, 1.2676e-03], device='cuda:1', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.6410e-01, 2.3322e-02, 1.0975e-01, 1.0560e-03, 1.1733e-04, 3.4943e-04,
        2.9020e-05, 1.2676e-03], device='cuda:1', grad_fn=<SelectBackward0>)
question: ['How many dogs are standing in the grass?'], responses:['2']
The final probability distribution is: {True: tensor(0.8641, device='cuda:1', grad_fn=<UnbindBackward0>), False: tensor(0.1098, device='cuda:1', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(0.0261, device='cuda:1', grad_fn=<SubBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is the animal holding food?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
[('2', 0.12961991198727602), ('3', 0.12561270547489775), ('4', 0.12556127085987287), ('1', 0.1254920833223361), ('5', 0.12407835939022728), ('8', 0.124024076973589), ('7', 0.12288810153923228), ('29', 0.12272349045256851)]
[['2', '3', '4', '1', '5', '8', '7', '29']]
torch.Size([13, 3, 448, 448])
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
question: ['Does the laptop on the right display the tiles from the operating system Windows?'], responses:['no']
[('no', 0.1313955057270409), ('yes', 0.12592208734904367), ('no smoking', 0.12472972590078177), ('gone', 0.12376514658020793), ('man', 0.12367833016285167), ('meow', 0.1235796378467502), ('kia', 0.12347643720898455), ('no clock', 0.12345312922433942)]
[['no', 'yes', 'no smoking', 'gone', 'man', 'meow', 'kia', 'no clock']]
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3403
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3403
tensor([8.3385e-01, 1.8874e-02, 1.4465e-01, 1.1085e-03, 1.0186e-04, 6.8897e-04,
        4.8068e-05, 6.8228e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.3385e-01, 1.8874e-02, 1.4465e-01, 1.1085e-03, 1.0186e-04, 6.8897e-04,
        4.8068e-05, 6.8228e-04], device='cuda:3', grad_fn=<SelectBackward0>)
question: ['Is the animal holding food?'], responses:['yes']
The final probability distribution is: {True: tensor(0.8338, device='cuda:3', grad_fn=<DivBackward0>), False: tensor(0.1447, device='cuda:3', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0215, device='cuda:3', grad_fn=<DivBackward0>)}
ANSWER0=VQA(image=LEFT,question='Is a person pushing the dispenser?')
ANSWER1=EVAL(expr='{ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER1)
torch.Size([1, 3, 448, 448])
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]