FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([13, 3, 448, 448])
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
{True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
ANSWER0=VQA(image=LEFT,question='Is there a black dog with a white muzzle?')
ANSWER1=VQA(image=RIGHT,question='Is there a black dog with a white muzzle?')
ANSWER2=EVAL(expr='{ANSWER0} xor {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
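The `ANSWER…=VQA(…)`, `EVAL(…)`, `RESULT(…)` lines above are small visual programs executed step by step. A minimal sketch of how such a program might be parsed and run — the step names and line format are taken from the log, but the executor itself and the stubbed `vqa_fn` are hypothetical illustrations, not the actual InternVL/LAVIS pipeline:

```python
import re

def run_program(program: str, vqa_fn):
    """Execute a tiny VQA/EVAL/RESULT program line by line.

    vqa_fn(image, question) -> answer stands in for the real model call.
    """
    env = {}
    for line in program.strip().splitlines():
        name, call = line.split("=", 1)
        name = name.strip()
        if call.startswith("VQA("):
            m = re.match(r"VQA\(image=(\w+),question='(.*)'\)", call)
            env[name] = vqa_fn(m.group(1), m.group(2))
        elif call.startswith("EVAL("):
            expr = re.match(r"EVAL\(expr='(.*)'\)", call).group(1)
            # Fill in {ANSWERi} placeholders, then map 'xor' (which is not
            # a Python operator) onto '!=' for booleans.
            expr = expr.format(**env).replace("xor", "!=")
            env[name] = eval(expr)  # illustrative only; unsafe in production
        elif call.startswith("RESULT("):
            var = re.match(r"RESULT\(var=(\w+)\)", call).group(1)
            env[name] = env[var]
    return env.get("FINAL_ANSWER")

program = """ANSWER0=VQA(image=LEFT,question='Is there a black dog with a white muzzle?')
ANSWER1=VQA(image=RIGHT,question='Is there a black dog with a white muzzle?')
ANSWER2=EVAL(expr='{ANSWER0} xor {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)"""

answers = {"LEFT": True, "RIGHT": False}
print(run_program(program, lambda img, q: answers[img]))  # -> True
```

The same sketch also covers the numeric programs below (e.g. `'{ANSWER0} == 2 and {ANSWER1} == 2'`), since `format` substitutes numbers as readily as booleans.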
torch.Size([7, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
[2024-10-22 17:08:11,814] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | optimizer_allgather: 1.53 | optimizer_gradients: 0.83 | optimizer_step: 0.32
[2024-10-22 17:08:11,815] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward_microstep: 105.78 | backward_microstep: 1.47 | backward_inner_microstep: 0.61 | backward_allreduce_microstep: 0.78 | step_microstep: 115.72
[2024-10-22 17:08:11,815] [INFO] [logging.py:96:log_dist] [Rank 0] rank=0 time (ms) | forward: 105.79 | backward: 1.47 | backward_inner: 0.61 | backward_allreduce: 0.78 | step: 115.72
0%| | 5/26352 [00:14<10:06:09, 1.38s/it]Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='How many skunks are on the piece of wood?')
ANSWER1=VQA(image=RIGHT,question='How many skunks are on the piece of wood?')
ANSWER2=EVAL(expr='{ANSWER0} == 2 and {ANSWER1} == 2')
FINAL_ANSWER=RESULT(var=ANSWER2)
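The recurring distribution `{True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}` suggests that when a program step fails, essentially all probability mass is assigned to `'Execute Error'`, with a tiny epsilon floor on each real outcome so nothing is exactly zero. A sketch reproducing the logged numbers — the epsilon value is inferred from the log, and the function name is hypothetical:

```python
def outcome_distribution(result, error=False, eps=1e-9):
    """Assign probability mass over {True, False, 'Execute Error'}.

    A failed execution puts almost all mass on 'Execute Error'; each
    remaining outcome keeps a floor of eps so no class is exactly zero.
    The three values always sum to 1.
    """
    if error:
        return {True: eps, False: eps, "Execute Error": 1 - 2 * eps}
    main = 1 - 2 * eps
    return {
        True: main if result else eps,
        False: eps if result else main,
        "Execute Error": eps,
    }

print(outcome_distribution(None, error=True))
# -> {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
```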
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
Registering VQA_lavis step
Registering EVAL step
Registering RESULT step
ANSWER0=VQA(image=LEFT,question='Are the birds only drinking water?')
ANSWER1=VQA(image=RIGHT,question='Are the birds only drinking water?')
ANSWER2=EVAL(expr='{ANSWER0} xor {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
ANSWER0=VQA(image=LEFT,question='Do any of the birds have their wings spread?')
ANSWER1=VQA(image=RIGHT,question='Do any of the birds have their wings spread?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
torch.Size([13, 3, 448, 448])
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
torch.Size([7, 3, 448, 448])
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
ANSWER0=VQA(image=RIGHT,question='How many ducks are in the image?')
ANSWER1=EVAL(expr='{ANSWER0} == 1')
FINAL_ANSWER=RESULT(var=ANSWER1)
ANSWER0=VQA(image=LEFT,question='Are any of the dogs actively moving by running, jumping, or walking?')
ANSWER1=VQA(image=RIGHT,question='Are any of the dogs actively moving by running, jumping, or walking?')
ANSWER2=EVAL(expr='{ANSWER0} or {ANSWER1}')
FINAL_ANSWER=RESULT(var=ANSWER2)
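The two errors that recur throughout the log look like a cascade: the model object lacks a `.tokenizer` attribute, the VQA step therefore returns `None`, and a later step concatenates that `None` with a string, producing the `TypeError`. A hedged sketch of that failure shape — the class and variable names here are stand-ins inferred from the log, not taken from the actual source:

```python
class Model:
    pass  # stands in for a model object missing a .tokenizer attribute

def answer_question(model, question):
    try:
        tok = model.tokenizer  # raises AttributeError, surfaced as ExecuteError
    except AttributeError as exc:
        print(f"Encountered ExecuteError: {exc}")
        return None  # the None that poisons the next step
    return question  # placeholder for a real decode via tok

answer = answer_question(Model(), "How many ducks are in the image?")
try:
    prompt = answer + " is the answer"  # None + str -> the logged TypeError
except TypeError as exc:
    print(f"Encountered TypeError: {exc}")

# Defensive guard: substitute an explicit sentinel instead of passing None on.
safe = answer if answer is not None else "Execute Error"
```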
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
torch.Size([7, 3, 448, 448])
The final probability distribution is: {True: 1e-09, False: 1e-09, 'Execute Error': 0.999999998}
Encountered ExecuteError: 'InternVLChatModel' object has no attribute 'tokenizer'
Encountered TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
[2024-10-22 17:08:45,632] torch.distributed.run: [WARNING]
[2024-10-22 17:08:45,632] torch.distributed.run: [WARNING] *****************************************
[2024-10-22 17:08:45,632] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-10-22 17:08:45,632] torch.distributed.run: [WARNING] *****************************************
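The warning above means each worker spawned by `torch.distributed.run` gets `OMP_NUM_THREADS=1` unless it is overridden. A common tuning heuristic is to split the node's cores evenly across local workers; this even split is our assumption, not something torchrun prescribes:

```python
import os

def omp_threads_per_worker(total_cores: int, nproc_per_node: int) -> int:
    """Split the node's cores evenly across local workers (min 1 each)."""
    return max(1, total_cores // nproc_per_node)

# Set before torch (or any OpenMP-backed library) spins up its thread pools;
# setdefault keeps an explicit user-provided value intact.
os.environ.setdefault("OMP_NUM_THREADS", str(omp_threads_per_worker(32, 4)))
```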
[2024-10-22 17:08:47,345] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:08:47,362] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:08:47,372] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:08:47,373] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
[2024-10-22 17:08:50,367] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:08:50,367] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-22 17:08:50,367] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
10/22/2024 17:08:50 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: False
10/22/2024 17:08:50 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,