2023-07-29 19:00:59.461 | INFO | __main__:main:56 - test
2023-07-29 19:00:59.462 | INFO | model:build_model:29 - Window size 12!
2023-07-29 19:01:00.277 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:01:01.600 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:01:07.468 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:01:07.480 | ERROR | __main__:<module>:96 - An error has been caught in function '<module>', process 'MainProcess' (227226), thread 'MainThread' (139974767077184):
Traceback (most recent call last):

> File "test.py", line 96, in <module>
    main()

  File "test.py", line 84, in main
    model = load_state_dict_from_zero_checkpoint(model, args.model_dir, tag=None).cuda()
    │ │ └ CfgNode({'dataset': 'refcocog_u', 'train_split': 'train', 'train_lmdb': 'data/lmdb/refcocog_u/train.lmdb', 'val_split': 'val'...
    │ └ DataParallel(
    │     (module): CGFormer(
    │       (backbone): MultiModalSwinTransformer(
    │         (patch_embed): PatchEmbed(
    │           (proj...

  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/site-packages/deepspeed/utils/zero_to_fp32.py", line 554, in load_state_dict_from_zero_checkpoint
    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
    │ │ └ None
    │ └ 'exp/impl/cgformer/best_model.pth'

  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/site-packages/deepspeed/utils/zero_to_fp32.py", line 498, in get_fp32_state_dict_from_zero_checkpoint
    raise ValueError(f"Unable to find 'latest' file at {latest_path}")

ValueError: Unable to find 'latest' file at exp/impl/cgformer/best_model.pth/latest

2023-07-29 19:01:48.667 | INFO | __main__:main:56 - test
2023-07-29 19:01:48.668 | INFO | model:build_model:29 - Window size 12!
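The ValueError above comes from DeepSpeed's `zero_to_fp32` utility: it expects `checkpoint_dir` to be a ZeRO checkpoint *directory* containing a `latest` tag file, not a single `.pth` file, so passing `exp/impl/cgformer/best_model.pth` makes the lookup resolve to the impossible path `best_model.pth/latest`. A minimal sketch of that lookup (the helper name `resolve_zero_checkpoint_dir` is hypothetical; it only mirrors the check visible in the traceback):

```python
import os

def resolve_zero_checkpoint_dir(checkpoint_dir, tag=None):
    """Hypothetical sketch of the directory resolution DeepSpeed performs:
    when no tag is given, it reads the 'latest' file inside checkpoint_dir
    to find the subdirectory that holds the ZeRO shard files."""
    if tag is None:
        latest_path = os.path.join(checkpoint_dir, "latest")
        if not os.path.isfile(latest_path):
            # This is the condition behind the ValueError in the log above.
            raise ValueError(f"Unable to find 'latest' file at {latest_path}")
        with open(latest_path) as f:
            tag = f.read().strip()
    # The shards (e.g. *_model_states.pt) live under checkpoint_dir/<tag>/.
    return os.path.join(checkpoint_dir, tag)
```

So the argument to `load_state_dict_from_zero_checkpoint` should be the directory that training produced (the one containing `latest`), not a consolidated weights file.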
2023-07-29 19:01:49.458 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:01:50.754 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:01:56.401 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:01:57.675 | ERROR | __main__:<module>:96 - An error has been caught in function '<module>', process 'MainProcess' (243904), thread 'MainThread' (140258757855040):
Traceback (most recent call last):

> File "test.py", line 96, in <module>
    main()

  File "test.py", line 83, in main
    model.module.load_state_dict(checkpoint['model_state_dict'], strict=True)
    │ └ {'epoch': 3, 'cur_iou': tensor(0.1034, device='cuda:0'), 'best_iou': tensor(0.0702, device='cuda:0'), 'prec': {'Pr@50': 0.019...
    └ DataParallel(
        (module): CGFormer(
          (backbone): MultiModalSwinTransformer(
            (patch_embed): PatchEmbed(
              (proj...

  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

RuntimeError: Error(s) in loading state_dict for CGFormer:
size mismatch for backbone.layers.0.downsample.reduction.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for backbone.layers.1.blocks.0.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for backbone.layers.1.blocks.0.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for backbone.layers.1.blocks.0.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for backbone.layers.1.blocks.1.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for backbone.layers.1.blocks.1.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for backbone.layers.1.blocks.1.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for backbone.layers.1.pwam_fusion.image_lang_att.f_key.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([256, 768, 1]).
size mismatch for backbone.layers.1.pwam_fusion.image_lang_att.f_value.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([256, 768, 1]).
size mismatch for backbone.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for backbone.layers.2.blocks.0.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.0.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.0.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.0.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.1.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.1.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.1.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.1.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.2.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.2.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.2.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.2.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.3.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.3.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.3.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.3.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.4.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.4.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.4.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.4.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.5.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.5.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.5.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.5.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.6.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.6.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.6.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.6.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.7.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.7.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.7.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.7.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.8.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.8.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.8.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.8.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.9.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.9.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.9.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.9.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.10.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.10.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.10.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.10.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.11.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.11.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.11.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.11.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.12.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.12.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.12.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.12.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.13.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.13.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.13.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.13.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.14.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.14.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.14.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.14.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.15.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.15.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.15.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.15.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.16.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.16.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.16.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.16.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.blocks.17.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1536, 512]).
size mismatch for backbone.layers.2.blocks.17.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.blocks.17.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
size mismatch for backbone.layers.2.blocks.17.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
size mismatch for backbone.layers.2.pwam_fusion.vis_project.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 1]).
size mismatch for backbone.layers.2.pwam_fusion.image_lang_att.f_key.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768, 1]).
size mismatch for backbone.layers.2.pwam_fusion.image_lang_att.f_query.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 1]).
size mismatch for backbone.layers.2.pwam_fusion.image_lang_att.f_value.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768, 1]).
size mismatch for backbone.layers.2.pwam_fusion.image_lang_att.W.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 1]).
size mismatch for backbone.layers.2.pwam_fusion.project_mm.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 1]).
size mismatch for backbone.layers.2.pwam_gate.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.pwam_gate.2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for backbone.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 2048]).
size mismatch for backbone.layers.3.blocks.0.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.3.blocks.0.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.3.blocks.0.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.3.blocks.0.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.3.blocks.1.attn.qkv.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
size mismatch for backbone.layers.3.blocks.1.attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.3.blocks.1.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([4096, 1024]).
size mismatch for backbone.layers.3.blocks.1.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 4096]).
size mismatch for backbone.layers.3.pwam_fusion.vis_project.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 1]).
size mismatch for backbone.layers.3.pwam_fusion.image_lang_att.f_key.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 768, 1]).
size mismatch for backbone.layers.3.pwam_fusion.image_lang_att.f_query.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 1]).
size mismatch for backbone.layers.3.pwam_fusion.image_lang_att.f_value.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 768, 1]).
size mismatch for backbone.layers.3.pwam_fusion.image_lang_att.W.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 1]).
size mismatch for backbone.layers.3.pwam_fusion.project_mm.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 1]).
size mismatch for backbone.layers.3.pwam_gate.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for backbone.layers.3.pwam_gate.2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for decoder.cgattention1.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention1.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention1.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention1.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention1.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.cgattention1.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.cgattention2.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention2.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention2.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention2.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.cgattention2.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.cgattention2.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.layers.1.loadtoken.cross_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.layers.1.loadtoken.cross_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for decoder.layers.1.loadtoken.cross_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for decoder.layers.1.loadtoken.cross_attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.layers.1.loadtoken.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.layers.1.loadtoken.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.layers.1.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.layers.1.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.layers.2.loadtoken.cross_attn.q_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.layers.2.loadtoken.cross_attn.k_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for decoder.layers.2.loadtoken.cross_attn.v_proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for decoder.layers.2.loadtoken.cross_attn.proj.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for decoder.layers.2.loadtoken.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.layers.2.loadtoken.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.layers.2.mlp.fc1.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for decoder.layers.2.mlp.fc2.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
size mismatch for decoder.fuses.0.fusion.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 1536, 3, 3]).
size mismatch for decoder.fuses.0.fusion.3.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for decoder.fuses.1.fusion.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768, 3, 3]).
size mismatch for decoder.fuses.1.fusion.3.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for decoder.fuses.2.fusion.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 640, 3, 3]).
size mismatch for decoder.fuses.2.fusion.3.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for decoder.proj.vis.1.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for decoder.proj.vis.3.0.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for decoder.proj.vis.4.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 512, 1, 1]).
size mismatch for decoder.proj.txt.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([513, 512]).
size mismatch for text_encoder.embeddings.word_embeddings.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for text_encoder.embeddings.position_embeddings.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for text_encoder.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.0.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.0.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.0.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.0.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.0.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.1.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.1.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.1.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.1.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.1.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.1.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.2.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.2.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.2.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.2.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.2.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.2.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.3.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.3.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.3.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.3.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.3.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.3.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.4.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.4.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.4.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.4.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.4.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.4.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.5.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.5.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.5.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.5.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.5.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.5.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for text_encoder.encoder.layer.6.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.6.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.6.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.6.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.6.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]). size mismatch for text_encoder.encoder.layer.6.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]). size mismatch for text_encoder.encoder.layer.7.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.7.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.7.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.7.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.7.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]). size mismatch for text_encoder.encoder.layer.7.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]). 
size mismatch for text_encoder.encoder.layer.8.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.8.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.8.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.8.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.8.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]). size mismatch for text_encoder.encoder.layer.8.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]). size mismatch for text_encoder.encoder.layer.9.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.9.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.9.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.9.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). 
size mismatch for text_encoder.encoder.layer.9.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]). size mismatch for text_encoder.encoder.layer.9.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]). size mismatch for text_encoder.encoder.layer.10.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.10.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.10.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.10.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.10.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]). size mismatch for text_encoder.encoder.layer.10.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]). size mismatch for text_encoder.encoder.layer.11.attention.self.query.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for text_encoder.encoder.layer.11.attention.self.key.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]). 
size mismatch for text_encoder.encoder.layer.11.attention.self.value.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.11.attention.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for text_encoder.encoder.layer.11.intermediate.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for text_encoder.encoder.layer.11.output.dense.weight: copying a param with shape torch.Size([0]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
2023-07-29 19:28:51.165 | INFO | __main__:main:56 - test
2023-07-29 19:28:51.166 | INFO | model:build_model:29 - Window size 12!
2023-07-29 19:28:51.973 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:28:53.341 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:28:59.157 | ERROR | __main__:<module>:96 - An error has been caught in function '<module>', process 'MainProcess' (266406), thread 'MainThread' (140056220157760):
Traceback (most recent call last):
> File "test.py", line 96, in <module>
    main()
    └
  File "test.py", line 80, in main
    if os.path.isfile(args.model_dir):
    │ │ │ └ CfgNode({'dataset': 'refcocog_u', 'train_split': 'train', 'train_lmdb': 'data/lmdb/refcocog_u/train.lmdb', 'val_split': 'val'...
    │ │ └
    │ └
  File "/data/projects/seongsu/RIS/CGFormer/utils/config.py", line 30, in __getattr__
    raise AttributeError(name)
    └ 'model_dir'
AttributeError: model_dir
2023-07-29 19:29:27.919 | INFO | __main__:main:56 - test
2023-07-29 19:29:27.921 | INFO | model:build_model:29 - Window size 12!
2023-07-29 19:29:28.695 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:29:30.062 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:29:35.892 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:29:35.896 | ERROR | __main__:<module>:96 - An error has been caught in function '<module>', process 'MainProcess' (284924), thread 'MainThread' (140494160029504):
Traceback (most recent call last):
> File "test.py", line 96, in <module>
    main()
    └
  File "test.py", line 84, in main
    model = load_state_dict_from_zero_checkpoint(model, args.ouptut_dir, tag=2).cuda()
    │ │ └ CfgNode({'dataset': 'refcocog_u', 'train_split': 'train', 'train_lmdb': 'data/lmdb/refcocog_u/train.lmdb', 'val_split': 'val'...
    │ └ DataParallel(
    │     (module): CGFormer(
    │       (backbone): MultiModalSwinTransformer(
    │         (patch_embed): PatchEmbed(
    │           (proj...
  File "/data/projects/seongsu/RIS/CGFormer/utils/config.py", line 30, in __getattr__
    raise AttributeError(name)
    └ 'ouptut_dir'
AttributeError: ouptut_dir
2023-07-29 19:30:40.310 | INFO | __main__:main:56 - test
2023-07-29 19:30:40.311 | INFO | model:build_model:29 - Window size 12!
2023-07-29 19:30:41.118 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:30:42.477 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:30:48.327 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:30:48.329 | ERROR | __main__:<module>:96 - An error has been caught in function '<module>', process 'MainProcess' (299829), thread 'MainThread' (139703874156352):
Traceback (most recent call last):
> File "test.py", line 96, in <module>
    main()
    └
  File "test.py", line 84, in main
    model = load_state_dict_from_zero_checkpoint(model, args.output_dir, tag=2).cuda()
    │ │ └ CfgNode({'dataset': 'refcocog_u', 'train_split': 'train', 'train_lmdb': 'data/lmdb/refcocog_u/train.lmdb', 'val_split': 'val'...
    │ └ DataParallel(
    │     (module): CGFormer(
    │       (backbone): MultiModalSwinTransformer(
    │         (patch_embed): PatchEmbed(
    │           (proj...
  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/site-packages/deepspeed/utils/zero_to_fp32.py", line 554, in load_state_dict_from_zero_checkpoint
    state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
    │ │ └ 2
    │ └ 'exp/impl/cgformer'
  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/site-packages/deepspeed/utils/zero_to_fp32.py", line 500, in get_fp32_state_dict_from_zero_checkpoint
    ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
    │ │ │ │ └ 2
    │ │ │ └ 'exp/impl/cgformer'
    │ │ └
    │ └
  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/posixpath.py", line 90, in join
    genericpath._check_arg_types('join', a, *p)
    │ │ │ └ (2,)
    │ │ └ 'exp/impl/cgformer'
    │ └
  File "/home/seongsu/.conda/envs/cgformer/lib/python3.8/genericpath.py", line 152, in _check_arg_types
    raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'int'
2023-07-29 19:31:16.457 | INFO | __main__:main:56 - test
2023-07-29 19:31:16.459 | INFO | model:build_model:29 - Window size 12!
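[editor's note: a minimal standard-library reproduction of the TypeError above. DeepSpeed's get_fp32_state_dict_from_zero_checkpoint passes tag straight into os.path.join, so tag must be a string (e.g. "2" or a global-step directory name) or None, never an int; the paths below mirror the log but are otherwise illustrative.]

```python
import os

checkpoint_dir = "exp/impl/cgformer"  # checkpoint_dir value seen in the traceback

# os.path.join only accepts str, bytes, or os.PathLike components,
# so an integer tag raises exactly the TypeError logged above.
try:
    os.path.join(checkpoint_dir, 2)
except TypeError as err:
    print(err)  # join() argument must be str, bytes, or os.PathLike object, not 'int'

# Passing the tag as a string builds the intended checkpoint path.
print(os.path.join(checkpoint_dir, "2"))  # exp/impl/cgformer/2
```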
2023-07-29 19:31:17.294 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 19:31:18.610 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 19:31:24.321 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:31:28.450 | INFO | __main__:main:85 - => loaded checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 19:32:00.404 | INFO | engine.engine:inference:196 - => Metric Calculation <=
2023-07-29 19:32:00.410 | INFO | engine.engine:inference:209 - oIoU=2.15
2023-07-29 19:32:00.410 | INFO | engine.engine:inference:210 - mIoU=0.38
2023-07-29 19:32:00.411 | INFO | engine.engine:inference:212 - Pr@50: 0.00.
2023-07-29 19:32:00.411 | INFO | engine.engine:inference:212 - Pr@60: 0.00.
2023-07-29 19:32:00.412 | INFO | engine.engine:inference:212 - Pr@70: 0.00.
2023-07-29 19:32:00.412 | INFO | engine.engine:inference:212 - Pr@80: 0.00.
2023-07-29 19:32:00.412 | INFO | engine.engine:inference:212 - Pr@90: 0.00.
2023-07-29 21:43:39.770 | INFO | __main__:main:56 - test
2023-07-29 21:43:39.771 | INFO | model:build_model:29 - Window size 12!
2023-07-29 21:43:40.526 | INFO | model:build_model:49 - Initializing Multi-modal Swin Transformer weights from ckpts/swin_base_patch4_window12_384_22k.pth
2023-07-29 21:43:41.755 | INFO | model.backbone:init_weights:459 - loading swin success !!!
2023-07-29 21:43:47.530 | INFO | __main__:main:81 - => loading checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 21:43:51.847 | INFO | __main__:main:85 - => loaded checkpoint 'exp/impl/cgformer/best_model.pth'
2023-07-29 21:48:56.035 | INFO | engine.engine:inference:196 - => Metric Calculation <=
2023-07-29 21:48:56.048 | INFO | engine.engine:inference:209 - oIoU=11.04
2023-07-29 21:48:56.049 | INFO | engine.engine:inference:210 - mIoU=10.96
2023-07-29 21:48:56.050 | INFO | engine.engine:inference:212 - Pr@50: 1.11.
2023-07-29 21:48:56.051 | INFO | engine.engine:inference:212 - Pr@60: 0.66.
2023-07-29 21:48:56.052 | INFO | engine.engine:inference:212 - Pr@70: 0.41.
2023-07-29 21:48:56.053 | INFO | engine.engine:inference:212 - Pr@80: 0.08.
2023-07-29 21:48:56.053 | INFO | engine.engine:inference:212 - Pr@90: 0.00.