2026-02-25 11:15:18,028 INFO MainThread:121696 [wandb_setup.py:_flush():81] Current SDK version is 0.22.3
2026-02-25 11:15:18,030 INFO MainThread:121696 [wandb_setup.py:_flush():81] Configure stats pid to 121696
2026-02-25 11:15:18,031 INFO MainThread:121696 [wandb_setup.py:_flush():81] Loading settings from /mnt/petrelfs/wangmaonan/.config/wandb/settings
2026-02-25 11:15:18,031 INFO MainThread:121696 [wandb_setup.py:_flush():81] Loading settings from /mnt/petrelfs/wangmaonan/yuxin/CL_CoTNav/InternVL_cleaned/internvl_chat/wandb/settings
2026-02-25 11:15:18,032 INFO MainThread:121696 [wandb_setup.py:_flush():81] Loading settings from environment variables
2026-02-25 11:15:18,032 INFO MainThread:121696 [wandb_init.py:setup_run_log_directory():706] Logging user logs to /mnt/petrelfs/wangmaonan/yuxin/CL_CoTNav/all_log/experiments/a100_dualvit_llm-64_mlp-train-patch-32768-acc1_BEVftFOV_FrontierRGB_PosB__FRONTIER_PIXEL_NUMBER_ONLY/wandb/run-20260225_111518-pg2w7c3p/logs/debug.log
2026-02-25 11:15:18,033 INFO MainThread:121696 [wandb_init.py:setup_run_log_directory():707] Logging internal logs to /mnt/petrelfs/wangmaonan/yuxin/CL_CoTNav/all_log/experiments/a100_dualvit_llm-64_mlp-train-patch-32768-acc1_BEVftFOV_FrontierRGB_PosB__FRONTIER_PIXEL_NUMBER_ONLY/wandb/run-20260225_111518-pg2w7c3p/logs/debug-internal.log
2026-02-25 11:15:18,034 INFO MainThread:121696 [wandb_init.py:init():833] calling init triggers
2026-02-25 11:15:18,034 INFO MainThread:121696 [wandb_init.py:init():838] wandb.init called with sweep_config: {} config: {'_wandb': {}}
2026-02-25 11:15:18,035 INFO MainThread:121696 [wandb_init.py:init():881] starting backend
2026-02-25 11:15:18,255 INFO MainThread:121696 [wandb_init.py:init():884] sending inform_init request
2026-02-25 11:15:18,261 INFO MainThread:121696 [wandb_init.py:init():892] backend started and connected
2026-02-25 11:15:18,263 INFO MainThread:121696 [wandb_init.py:init():962] updated telemetry
2026-02-25 11:15:18,290 INFO MainThread:121696 
[wandb_init.py:init():986] communicating run to backend with 90.0 second timeout
2026-02-25 11:15:28,713 INFO MainThread:121696 [wandb_init.py:init():1033] starting run threads in backend
2026-02-25 11:15:28,992 INFO MainThread:121696 [wandb_run.py:_console_start():2506] atexit reg
2026-02-25 11:15:28,993 INFO MainThread:121696 [wandb_run.py:_redirect():2354] redirect: wrap_raw
2026-02-25 11:15:28,993 INFO MainThread:121696 [wandb_run.py:_redirect():2423] Wrapping output streams.
2026-02-25 11:15:28,994 INFO MainThread:121696 [wandb_run.py:_redirect():2446] Redirects installed.
2026-02-25 11:15:28,999 INFO MainThread:121696 [wandb_init.py:init():1073] run started, returning control to user process
2026-02-25 11:15:29,002 INFO MainThread:121696 [wandb_run.py:_config_callback():1390] config_cb None None {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'torch.bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['InternVLChatModel'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 
'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '../pretrained/InternVL3-2B', '_commit_hash': None, '_attn_implementation_internal': None, 'transformers_version': None, 'auto_map': {'AutoConfig': 'configuration_internvl_chat.InternVLChatConfig', 'AutoModel': 'modeling_internvl_chat.InternVLChatModel', 'AutoModelForCausalLM': 'modeling_internvl_chat.InternVLChatModel'}, 'hidden_size': 1536, 'image_fold': None, 'model_type': 'internvl_chat', 'system_message': 'You are an autonomous navigation agent operating in indoor environments. You receive spatial information through position embeddings injected into visual features and text tokens. Use the BEV map, position embeddings, and semantic information to make navigation decisions. When the target object is detected ( marker), navigate directly to it. Otherwise, explore frontiers strategically to find the goal object.', 'vision_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': True, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 
'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['InternVisionModel'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'OpenGVLab/InternViT-6B-448px-V1-5', 'transformers_version': '4.37.2', '_attn_implementation_autoset': True, 'auto_map': {'AutoConfig': 'configuration_intern_vit.InternVisionConfig', 'AutoModel': 'modeling_intern_vit.InternVisionModel'}, 'capacity_factor': 1.2, 'eval_capacity_factor': 1.4, 'laux_allreduce': 'all_nodes', 'model_type': 'intern_vit_6b', 'moe_coeff_ratio': 0.5, 'moe_intermediate_size': 768, 'moe_output_scale': 4.0, 'noisy_gate_policy': 'RSample_before', 'num_experts': 8, 'num_routed_experts': 4, 'num_shared_experts': 4, 'shared_expert_intermediate_size': 3072, 'use_moe': False, 'use_residual': True, 'use_rts': False, 'use_weighted_residual': False, 'hidden_size': 1024, 'intermediate_size': 4096, 'dropout': 0.0, 'drop_path_rate': 0.0, 'num_hidden_layers': 24, 'num_attention_heads': 16, 'num_channels': 3, 'patch_size': 14, 'image_size': 448, 'initializer_range': 1e-10, 'initializer_factor': 0.1, 'attention_dropout': 0.0, 'layer_norm_eps': 1e-06, 'hidden_act': 'gelu', 'norm_type': 'layer_norm', 'qkv_bias': True, 'qk_normalization': False, 'use_flash_attn': True}, 'llm_config': {'vocab_size': 151677, 'max_position_embeddings': 32768, 'hidden_size': 1536, 'intermediate_size': 8960, 'num_hidden_layers': 28, 'num_attention_heads': 12, 'use_sliding_window': False, 'sliding_window': None, 'max_window_layers': 70, 'num_key_value_heads': 2, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': False, 'rope_theta': 1000000.0, 'attention_dropout': 0.0, 'return_dict': 
True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': True, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['Qwen2ForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 151643, 'pad_token_id': None, 'eos_token_id': 151643, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': './pretrained/Qwen2.5-32B-Instruct', 'transformers_version': '4.37.2', '_attn_implementation_autoset': True, 'model_type': 'qwen2', 'moe_config': None, 'rope_scaling': {'factor': 2.0, 'rope_type': 'dynamic', 'type': 'dynamic'}, 'attn_implementation': 'flash_attention_2'}, 'use_backbone_lora': 0, 'use_llm_lora': 64, 'pad2square': False, 'select_layer': -1, 'force_image_size': 448, 'downsample_ratio': 0.5, 'template': 'internvl2_5_nav', 'dynamic_image_size': False, 'use_thumbnail': True, 'ps_version': 'v2', 'min_dynamic_patch': 1, 'max_dynamic_patch': 12, 'num_image_token_bev': 256, 'num_image_token_ego': 
32, 'use_pairwise_spatial_encoder': False, 'use_position_embeddings': True, 'dual_text_pos_injection': True, 'bev_image_size': 448, 'vit_bev_freeze': True, 'vit_bev_use_lora': True, 'vit_bev_lora_rank': 64, 'vit_rgb_freeze': True, 'vit_rgb_use_lora': True, 'vit_rgb_lora_rank': 16, 'output_dir': '/mnt/petrelfs/wangmaonan/yuxin/CL_CoTNav/all_log/experiments/a100_dualvit_llm-64_mlp-train-patch-32768-acc1_BEVftFOV_FrontierRGB_PosB__FRONTIER_PIXEL_NUMBER_ONLY', 'overwrite_output_dir': True, 'do_train': True, 'do_eval': False, 'do_predict': False, 'evaluation_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 1, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0001, 'weight_decay': 0.01, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 1, 'max_steps': 9300, 'lr_scheduler_type': 'cosine', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.03, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': 'runs/Feb25_11-14-26_SH-IDC1-10-140-37-90', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 1, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 0.5, 'save_total_limit': 1, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 2, 'past_index': -1, 'run_name': 
'a100_dualvit_llm-64_mlp-train-patch-32768-acc1_BEVftFOV_FrontierRGB_PosB__FRONTIER_PIXEL_NUMBER_ONLY_steps9300_gpus4_acc1', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'deepspeed': 'zero_stage2_config_acc1.json', 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': True, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': False, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None}
2026-02-26 19:46:56,115 INFO wandb-AsyncioManager-main:121696 [service_client.py:_forward_responses():80] Reached EOF.
2026-02-26 19:46:56,116 INFO wandb-AsyncioManager-main:121696 [mailbox.py:close():137] Closing mailbox, abandoning 1 handles.