Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
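The two errors above always appear as a pair, which suggests the second is downstream of the first: the executor fails to find a `tokenizer` attribute on the model, a fallback yields `None`, and that `None` later hits string concatenation. A minimal sketch of that chain (the stub class and variable names are hypothetical, not the training code):

```python
# Hypothetical stand-in for an InternVLChatModel that was loaded
# without a `tokenizer` attribute attached.
class InternVLChatModelStub:
    pass

model = InternVLChatModelStub()

# Direct access raises the AttributeError behind the ExecuteError line:
try:
    model.tokenizer
except AttributeError as err:
    execute_error = str(err)

# A guarded lookup that falls back to None instead of failing...
tokenizer = getattr(model, "tokenizer", None)

# ...surfaces later as the TypeError once the None is concatenated:
try:
    prompt = tokenizer + "question text"
except TypeError as err:
    type_error = str(err)

print(execute_error)  # 'InternVLChatModelStub' object has no attribute 'tokenizer'
print(type_error)     # unsupported operand type(s) for +: 'NoneType' and 'str'
```

If this reading is right, fixing the missing `tokenizer` attribute should make both log lines disappear together.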
ANSWER0=VQA(image=LEFT,question='Does at least one dog have its mouth open?')
ANSWER1=VQA(image=RIGHT,question='Does at least one dog have its mouth open?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
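Each generated program follows the same shape: one VQA call per image, an EVAL that substitutes the stored answers into a Python expression, and a RESULT that returns a named variable. A minimal executor sketch for that shape (`run_program` and `vqa_fn` are illustrative names, not the repo's actual code):

```python
import re

def run_program(lines, vqa_fn):
    """Sketch of an executor for VQA/EVAL/RESULT programs like those in
    this log. `vqa_fn(image_name, question)` is an assumed stand-in for
    the real VQA model call; the parsing here is illustrative only."""
    env = {}
    for line in lines:
        var, call = (s.strip() for s in line.split("=", 1))
        if m := re.match(r"VQA\(image=(\w+),question='([^']*)'\)", call):
            env[var] = vqa_fn(m.group(1), m.group(2))
        elif m := re.match(r"EVAL\(expr='([^']*)'\)", call):
            # Substitute earlier answers into the expression, then evaluate.
            env[var] = eval(m.group(1).format(**env))
        elif m := re.match(r"RESULT\(var=(\w+)\)", call):
            return env[m.group(1)]

program = [
    "ANSWER0=VQA(image=LEFT,question='How many ducks are in the water?')",
    "ANSWER1=VQA(image=RIGHT,question='How many ducks are in the water?')",
    "ANSWER2=EVAL(expr='{ANSWER0} + {ANSWER1} > 3')",
    "FINAL_ANSWER=RESULT(var=ANSWER2)",
]

# Fake VQA answers: 2 ducks in the left image, 3 in the right.
result = run_program(program, lambda image, q: {"LEFT": 2, "RIGHT": 3}[image])
print(result)  # True, since 2 + 3 > 3
```

Because EVAL only ever sees the substituted string, a VQA step that returns `None` (as in the error lines above) breaks the substitution before the expression can be evaluated.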
torch.Size([1, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
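The distribution lines look like a smoothed one-hot over the three possible outcomes: near-total mass on the observed outcome and a tiny floor on the rest. One way such numbers could arise (an assumption, not the repo's actual code):

```python
def smoothed_distribution(outcome, eps=1e-09):
    """Sketch (assumed, not the training code): put almost all probability
    mass on the observed outcome and an eps floor on the other labels,
    reproducing the distribution lines seen in this log."""
    labels = [True, False, "Execute Error"]
    dist = {label: eps for label in labels}
    dist[outcome] = 1.0 - eps * (len(labels) - 1)
    return dist

print(smoothed_distribution("Execute Error"))
# {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
```

Under this reading, every `'Execute Error': 0.999999998` line in the log marks a program whose execution failed outright rather than returning True or False.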
[2024-10-22 17:08:10,750] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 42.27 | optimizer_gradients: 1.21 | optimizer_step: 0.34
[2024-10-22 17:08:10,750] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 208.34 | backward_microstep: 4.07 | backward_inner_microstep: 3.05 | backward_allreduce_microstep: 0.93 | step_microstep: 152.18
[2024-10-22 17:08:10,750] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 208.28 | backward: 4.07 | backward_inner: 3.06 | backward_allreduce: 0.94 | step: 152.19
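The DeepSpeed wall-clock lines carry per-phase timings in milliseconds behind the `time (ms) |` marker. A small parser sketch, based only on the line layout visible in this log (other DeepSpeed versions may format differently):

```python
def parse_timings(line):
    """Pull the name/value timing pairs (in ms) out of a DeepSpeed
    log_dist line. The 'time (ms) |' layout is taken from the sample
    lines in this log."""
    tail = line.split("time (ms) |", 1)[1]
    return {name.strip(): float(value)
            for name, value in (pair.split(":") for pair in tail.split("|"))}

line = ("[2024-10-22 17:08:10,750] [INFO] [logging.py:96:log_dist] [Rank 0] "
        "rank=0 time (ms) | optimizer_allgather: 42.27 | "
        "optimizer_gradients: 1.21 | optimizer_step: 0.34")
print(parse_timings(line))
# {'optimizer_allgather': 42.27, 'optimizer_gradients': 1.21, 'optimizer_step': 0.34}
```

Collecting these dicts across steps makes it easy to spot where time goes; in the lines above, `step_microstep` dominates the backward pass by a wide margin.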
0%| | 1/26352 [00:13<97:23:18, 13.30s/it]Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
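The repeated `Registering … step` lines suggest each program's steps (VQA_lavis, EVAL, RESULT) are registered with the executor by name before the program runs. A hypothetical sketch of that pattern; the step names come from the log, but the decorator mechanism below is assumed:

```python
STEP_REGISTRY = {}

def register_step(name):
    """Hypothetical sketch of the 'Registering ... step' lines: a decorator
    mapping step names to their implementations so the executor can
    dispatch by name."""
    def wrap(fn):
        print(f"Registering {name} step")
        STEP_REGISTRY[name] = fn
        return fn
    return wrap

@register_step("EVAL")
def eval_step(expr, env):
    # Substitute stored answers into the expression and evaluate it.
    return eval(expr.format(**env))

# The registered step is now dispatchable by name:
print(STEP_REGISTRY["EVAL"]("{ANSWER0} or {ANSWER1}",
                            {"ANSWER0": True, "ANSWER1": False}))  # True
```

A name-keyed registry like this would explain why the same three registration lines repeat once per parsed program.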
ANSWER0=VQA(image=LEFT,question='Are the penguins walking through the waves?')
ANSWER1=VQA(image=RIGHT,question='Are the penguins walking through the waves?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
ANSWER0=VQA(image=LEFT,question='How many upright tubes of lipstick are in the image?')
ANSWER1=VQA(image=RIGHT,question='How many upright tubes of lipstick are in the image?')
ANSWER2=EVAL(expr='{ANSWER1} > {ANSWER0}')
FINAL_ANSWER=RESULT(var=ANSWER2)
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Is there a black eared boar facing right with its snout facing forward left?')
ANSWER1=VQA(image=RIGHT,question='Is there a black eared boar facing right with its snout facing forward left?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
ANSWER0=VQA(image=LEFT,question='Is there a single purple headed crab crawling in the ground?')
ANSWER1=VQA(image=RIGHT,question='Is there a single purple headed crab crawling in the ground?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([7, 3, 448, 448])
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
torch.Size([5, 3, 448, 448])
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
ANSWER0=VQA(image=LEFT,question='How many ducks are in the water?')
ANSWER1=VQA(image=RIGHT,question='How many ducks are in the water?')
ANSWER2=EVAL(expr='{ANSWER0} + {ANSWER1} > 3')
FINAL_ANSWER=RESULT(var=ANSWER2)
ANSWER0=VQA(image=LEFT,question='How many dogs are in the image?')
ANSWER1=VQA(image=RIGHT,question='How many dogs are in the image?')
ANSWER2=EVAL(expr='{ANSWER0} == 1 and {ANSWER1} == 1')
FINAL_ANSWER=RESULT(var=ANSWER2)
ANSWER0=VQA(image=LEFT,question='Are there dogs wearing colored socks in the image?')
ANSWER1=VQA(image=RIGHT,question='Are there dogs wearing colored socks in the image?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
ANSWER0=VQA(image=LEFT,question='Is there a feather in the image?')
ANSWER1=VQA(image=RIGHT,question='Is there a feather in the image?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([1, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:08:11,040] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 44.47 | optimizer_gradients: 1.29 | optimizer_step: 0.34
[2024-10-22 17:08:11,040] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 107.60 | backward_microstep: 1.71 | backward_inner_microstep: 0.71 | backward_allreduce_microstep: 0.92 | step_microstep: 123.18
[2024-10-22 17:08:11,041] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 107.62 | backward: 1.71 | backward_inner: 0.72 | backward_allreduce: 0.92 | step: 123.18
0%| | 2/26352 [00:13<41:19:16, 5.65s/it]Registering VQA_lavis step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering EVAL step