[2024-10-22 17:09:02,843] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='How many boars are in the image?')
ANSWER1=VQA(image=LEFT,question='Is the boar swimming in the water?')
ANSWER2=EVAL(expr='{ANSWER0} == 1 and {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
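The registered steps above form a small visual program: VQA steps query a vision-language model, EVAL combines their answers in a Python expression, and RESULT reports a variable. A minimal sketch of how such a program could be interpreted, with the VQA step stubbed out by canned answers (the real pipeline calls a LAVIS-backed model; the helper name here is hypothetical):

```python
# Sketch of interpreting a VQA/EVAL/RESULT program. The VQA outputs are
# assumed to be precomputed; only the EVAL/RESULT logic is shown.

def run_program(answers):
    # answers: dict mapping variable name -> VQA output (already computed)
    env = dict(answers)
    # EVAL substitutes {VAR} placeholders into a Python expression,
    # mirroring the expr seen in the log above
    expr = "{ANSWER0} == 1 and {ANSWER1}"
    for name, value in env.items():
        expr = expr.replace("{" + name + "}", repr(value))
    env["ANSWER2"] = eval(expr)
    # RESULT returns the named variable as the final answer
    return env["ANSWER2"]

print(run_program({"ANSWER0": 1, "ANSWER1": True}))  # -> True
```

Note the use of repr() during substitution, so string answers are quoted correctly when the expression compares them (as in the '{ANSWER0} == "rectangle"' program later in the log).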
torch.Size([7, 3, 448, 448])
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='How many prepared drinks are in serving cups?')
ANSWER1=VQA(image=RIGHT,question='How many prepared drinks are in serving cups?')
ANSWER2=EVAL(expr='{ANSWER0} == {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([7, 3, 448, 448])
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=RIGHT,question='Is the dog facing right in the image?')
ANSWER1=VQA(image=LEFT,question='Is the dog facing left in the image?')
ANSWER2=EVAL(expr='{ANSWER0} and {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='What shape are the pizzas?')
ANSWER1=VQA(image=RIGHT,question='What shape are the pizzas?')
ANSWER2=EVAL(expr='{ANSWER0} == "rectangle" and {ANSWER1} == "rectangle"')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([7, 3, 448, 448])
torch.Size([13, 3, 448, 448])
question: ['How many boars are in the image?'], responses:['1']
[WARNING|tokenization_utils_base.py:2697] 2024-10-22 17:09:11,433 >> Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
question: ['How many prepared drinks are in serving cups?'], responses:['0']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
[('0', 0.13077743594303964), ('circles', 0.12449813349255197), ('maroon', 0.12428926693968681), ('large', 0.1242263466991631), ('rooster', 0.12409315512763705), ('nuts', 0.12408018414184876), ('beige', 0.1240288472550799), ('bottle', 0.12400663040099273)]
[['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle']]
question: ['What shape are the pizzas?'], responses:['round']
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
question: ['Is the dog facing right in the image?'], responses:['yes']
[('round', 0.12813543442466266), ('warning', 0.12464114900863767), ('exit', 0.12459056062387183), ('cut', 0.12456996524356728), ('cup', 0.12456900943720788), ('circle', 0.12452673539867194), ('tube', 0.12449066591861223), ('tile', 0.12447647994476846)]
[['round', 'warning', 'exit', 'cut', 'cup', 'circle', 'tube', 'tile']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([13, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
tensor([9.3429e-01, 1.2519e-02, 5.9125e-03, 2.7819e-03, 3.4741e-03, 2.2894e-03,
3.8563e-02, 1.6747e-04], device='cuda:3', grad_fn=<SoftmaxBackward0>)
1 *************
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.3429e-01, 1.2519e-02, 5.9125e-03, 2.7819e-03, 3.4741e-03, 2.2894e-03,
3.8563e-02, 1.6747e-04], device='cuda:3', grad_fn=<SelectBackward0>)
['1', '3', '4', '8', '6', '12', '2', '47'] tensor([9.3429e-01, 1.2519e-02, 5.9125e-03, 2.7819e-03, 3.4741e-03, 2.2894e-03,
3.8563e-02, 1.6747e-04], device='cuda:3', grad_fn=<SelectBackward0>)
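The prints above pair a fixed candidate list with a softmax probability tensor, and the starred line is the selected answer. Assuming the selection is simply the highest-probability candidate (the log is consistent with this, though the actual selection code is not shown), it can be sketched without torch as:

```python
# Sketch of the candidate selection seen in the log: the model scores a
# short list of candidate answers, a softmax over those scores is taken,
# and the highest-probability candidate becomes the response.

candidates = ['1', '3', '4', '8', '6', '12', '2', '47']
probs = [9.3429e-01, 1.2519e-02, 5.9125e-03, 2.7819e-03,
         3.4741e-03, 2.2894e-03, 3.8563e-02, 1.6747e-04]

# argmax over the probability list
best = max(range(len(candidates)), key=lambda i: probs[i])
print(candidates[best])  # -> 1, matching the starred selection above
```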
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3400
tensor([0.9561, 0.0071, 0.0037, 0.0023, 0.0026, 0.0031, 0.0045, 0.0206],
device='cuda:2', grad_fn=<SoftmaxBackward0>)
0 *************
['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle'] tensor([0.9561, 0.0071, 0.0037, 0.0023, 0.0026, 0.0031, 0.0045, 0.0206],
device='cuda:2', grad_fn=<SelectBackward0>)
['0', 'circles', 'maroon', 'large', 'rooster', 'nuts', 'beige', 'bottle'] tensor([0.9561, 0.0071, 0.0037, 0.0023, 0.0026, 0.0031, 0.0045, 0.0206],
device='cuda:2', grad_fn=<SelectBackward0>)
torch.Size([7, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
question: ['Is the boar swimming in the water?'], responses:['yes']
tensor([5.8387e-01, 2.7038e-04, 3.1051e-04, 5.3179e-03, 1.8806e-03, 3.5417e-01,
5.1399e-02, 2.7807e-03], device='cuda:1', grad_fn=<SoftmaxBackward0>)
round *************
['round', 'warning', 'exit', 'cut', 'cup', 'circle', 'tube', 'tile'] tensor([5.8387e-01, 2.7038e-04, 3.1051e-04, 5.3179e-03, 1.8806e-03, 3.5417e-01,
5.1399e-02, 2.7807e-03], device='cuda:1', grad_fn=<SelectBackward0>)
['round', 'warning', 'exit', 'cut', 'cup', 'circle', 'tube', 'tile'] tensor([5.8387e-01, 2.7038e-04, 3.1051e-04, 5.3179e-03, 1.8806e-03, 3.5417e-01,
5.1399e-02, 2.7807e-03], device='cuda:1', grad_fn=<SelectBackward0>)
[('yes', 0.1298617250866936), ('congratulations', 0.12464161604141298), ('no', 0.12445222599225532), ('honey', 0.12437056445881921), ('solid', 0.12422595371654564), ('right', 0.12419889376311324), ('candle', 0.12414264780165109), ('chocolate', 0.12410637313950891)]
[['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate']]
torch.Size([13, 3, 448, 448])
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3398
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
question: ['How many prepared drinks are in serving cups?'], responses:['1']
[('1', 0.12829009354978346), ('3', 0.12529928082343206), ('4', 0.12464806219229535), ('8', 0.12460015878893425), ('6', 0.12451220062887247), ('12', 0.124338487048427), ('2', 0.12420459433498025), ('47', 0.12410712263327517)]
[['1', '3', '4', '8', '6', '12', '2', '47']]
torch.Size([7, 3, 448, 448]) knan debug pixel values shape
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397
dynamic ViT batch size: 13, images per sample: 13.0, dynamic token length: 3397