pi05tests-openpi-multiarm / openpi / run_logs / split_communicating_real_smoke3.log
19:55:02.788 [I] Created experiment checkpoint directory: /workspace/pi05tests/openpi/checkpoints/pi05_twin_dual_push_128_packed_split_expert_communicating_pytorch_5k/split_communicating_real_smoke3 (22110:train_pytorch.py:533)
19:55:02.789 [I] Using batch size per GPU: 1 (total batch size across 1 GPUs: 1) (22110:train_pytorch.py:552)
19:55:02.865 [I] Loaded norm stats from /workspace/pi05tests/openpi/assets/pi05_twin_dual_push_128_packed_split_expert_communicating_pytorch_5k/lsnu/twin_dual_push_128_train (22110:config.py:234)
19:55:02.867 [I] data_config: DataConfig(repo_id='lsnu/twin_dual_push_128_train', asset_id='lsnu/twin_dual_push_128_train', norm_stats={'state': NormStats(mean=array([ 0.10604009, 0.20956482, 0.09184283, -1.98801565, -0.04930164,
2.20065784, 1.07595289, 0.52742052, 0.01585805, 0.08288047,
-0.06887393, -1.906394 , 0.04810138, 2.01086807, -0.92902797,
0.8440811 ]), std=array([0.09207697, 0.31317395, 0.08127229, 0.53812712, 0.06093267,
0.51205784, 0.22527155, 0.49924755, 0.20230208, 0.31408131,
0.21665592, 0.5264315 , 0.20170984, 0.4745712 , 1.17861438,
0.36277843]), q01=array([-5.00321221e-06, -3.88026012e-01, -2.23782954e-05, -2.98962682e+00,
-2.38592355e-01, 1.22146201e+00, 7.85383821e-01, 0.00000000e+00,
-6.15615927e-01, -4.14941930e-01, -9.43696350e-01, -2.88397729e+00,
-9.05083556e-01, 1.22148895e+00, -2.79564499e+00, 0.00000000e+00]), q99=array([ 0.31251293, 0.86546916, 0.35174239, -0.87634897, 0.05212194,
2.97208117, 1.64465171, 0.9998 , 0.7670313 , 0.96073459,
0.68710467, -0.87498123, 0.35838486, 2.9773227 , 0.78477909,
0.9998 ])), 'actions': NormStats(mean=array([ 0.03630241, 0.09624442, 0.01367408, -0.2224988 , -0.02762174,
0.27498844, 0.0892187 , 0.45650524, -0.00378086, 0.09113847,
-0.00376227, -0.22537093, 0.00826233, 0.26799494, -0.57452869,
0.7731654 ]), std=array([0.04995174, 0.29268014, 0.06852161, 0.3647725 , 0.07012808,
0.27129024, 0.11329207, 0.4981046 , 0.0917461 , 0.22704004,
0.1069391 , 0.2572591 , 0.11801817, 0.1235588 , 0.35835782,
0.41878474]), q01=array([-5.86206436e-04, -3.88117499e-01, -2.55800724e-01, -8.34769463e-01,
-3.51454727e-01, -1.54787922e-03, -5.81741333e-04, 0.00000000e+00,
-2.64436970e-01, -3.51582764e-01, -3.69693995e-01, -7.30919549e-01,
-3.35441585e-01, -6.62303925e-04, -9.34731126e-01, 0.00000000e+00]), q99=array([0.20790743, 0.81198567, 0.19612836, 0.33958174, 0.05568643,
0.75265345, 0.425256 , 0.9998 , 0.2558236 , 0.58901345,
0.35822071, 0.18567593, 0.44035054, 0.49966629, 0.12655233,
0.9998 ]))}, repack_transforms=Group(inputs=[RepackTransform(structure={'images': {'cam_high': 'front_image', 'cam_left_wrist': 'wrist_left_image', 'cam_right_wrist': 'wrist_right_image'}, 'state': 'state', 'actions': 'action', 'prompt': 'task'})], outputs=()), data_transforms=Group(inputs=[AlohaInputs(adapt_to_pi=False)], outputs=[]), model_transforms=Group(inputs=[InjectDefaultPrompt(prompt=None), ResizeImages(height=224, width=224), TokenizePrompt(tokenizer=<openpi.models.tokenizer.PaligemmaTokenizer object at 0x7ec79fca8910>, discrete_state_input=True), PackPerArmBlocks(real_arm_dims=(8, 8), block_dims=(16, 16))], outputs=[UnpackPerArmBlocks(real_arm_dims=(8, 8), block_dims=(16, 16))]), use_quantile_norm=True, action_sequence_keys=('action',), prompt_from_task=False, rlds_data_dir=None, action_space=None, datasets=()) (22110:data_loader.py:284)
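The data config above sets use_quantile_norm=True and carries q01/q99 stats, and the per-step state/action stats later in the log sit in roughly [-1, 1]. A minimal sketch of that mapping, assuming the usual quantile convention (q01 maps to -1, q99 to +1; the helper name is hypothetical, not openpi's actual function):

```python
def quantile_normalize(x, q01, q99):
    # Hypothetical helper illustrating use_quantile_norm=True: map the 1st
    # percentile to -1 and the 99th percentile to +1. Values outside
    # [q01, q99] land slightly outside [-1, 1], consistent with the logged
    # state_stats min/max of about -1.00 / 1.00.
    return 2.0 * (x - q01) / (q99 - q01) - 1.0

# First state dimension from the norm stats above.
q01, q99 = -5.00321221e-06, 0.31251293
print(quantile_normalize(q01, q01, q99))  # -1.0
print(quantile_normalize(q99, q01, q99))  # 1.0
```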
19:55:09.225 [I] JAX version 0.5.3 available. (22110:config.py:125)
19:55:34.099 [I] Using existing local LeRobot dataset mirror for lsnu/twin_dual_push_128_train: /workspace/lerobot/lsnu/twin_dual_push_128_train (22110:data_loader.py:148)
19:55:34.205 [W] 'torchcodec' is not available in your platform, falling back to 'pyav' as a default decoder (22110:video_utils.py:36)
19:56:38.376 [I] local_batch_size: 1 (22110:data_loader.py:365)
19:58:25.969 [I] Enabled gradient checkpointing for PI0Pytorch model (22110:pi0_pytorch.py:138)
19:58:25.971 [I] Enabled gradient checkpointing for memory optimization (22110:train_pytorch.py:624)
19:58:25.972 [I] Step 0 (after_model_creation): GPU memory - allocated: 17.23GB, reserved: 17.23GB, free: 0.00GB, peak_allocated: 17.23GB, peak_reserved: 17.23GB (22110:train_pytorch.py:493)
19:58:25.972 [I] Loading weights from: /workspace/checkpoints/pi05_base_split_communicating_packed_from_single (22110:train_pytorch.py:653)
19:58:29.565 [I] Weight loading missing key count: 0 (22110:train_pytorch.py:657)
19:58:29.566 [I] Weight loading missing keys: set() (22110:train_pytorch.py:658)
19:58:29.566 [I] Weight loading unexpected key count: 0 (22110:train_pytorch.py:659)
19:58:29.566 [I] Weight loading unexpected keys: [] (22110:train_pytorch.py:660)
19:58:29.567 [I] Loaded PyTorch weights from /workspace/checkpoints/pi05_base_split_communicating_packed_from_single (22110:train_pytorch.py:661)
19:58:29.571 [I] Running on: 963c158043aa | world_size=1 (22110:train_pytorch.py:701)
19:58:29.571 [I] Training config: batch_size=1, effective_batch_size=1, num_train_steps=3 (22110:train_pytorch.py:702)
19:58:29.572 [I] Memory optimizations: gradient_checkpointing=True (22110:train_pytorch.py:705)
19:58:29.572 [I] DDP settings: find_unused_parameters=False, gradient_as_bucket_view=True, static_graph=True (22110:train_pytorch.py:706)
19:58:29.573 [I] LR schedule: warmup=250, peak_lr=2.50e-05, decay_steps=5000, end_lr=2.50e-06 (22110:train_pytorch.py:707)
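The logged per-step learning rates (9.96e-08, 1.99e-07, 2.99e-07) are consistent with linear warmup of the form peak_lr * step / (warmup_steps + 1). This rule is back-solved from the numbers, not read from the training code:

```python
def warmup_lr(step, peak_lr=2.5e-5, warmup_steps=250):
    # Inferred linear-warmup rule; the +1 in the denominator is what
    # reproduces the logged 9.96e-08 at step 1 (rather than 1.00e-07).
    return peak_lr * step / (warmup_steps + 1)

for step in (1, 2, 3):
    print(f"step={step} lr={warmup_lr(step):.2e}")
```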
19:58:29.573 [I] Optimizer: AdamW, weight_decay=1e-10, clip_norm=1.0 (22110:train_pytorch.py:710)
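Note that clip_norm=1.0 here while the per-step grad_norm values below reach 60.05 and 343.73, so the logged norm is evidently the pre-clip global norm; clipping then rescales every gradient by clip_norm / grad_norm. A plain-Python sketch of that rescaling (illustrating the semantics of torch.nn.utils.clip_grad_norm_, not the call itself):

```python
import math

def clip_by_global_norm(grads, clip_norm=1.0):
    # Global L2 norm over all gradient tensors (here: flat lists of floats),
    # then rescale so the post-clip norm is at most clip_norm.
    total = math.sqrt(sum(g * g for vec in grads for g in vec))
    scale = min(1.0, clip_norm / max(total, 1e-12))
    return [[g * scale for g in vec] for vec in grads], total

grads = [[3.0, 4.0], [12.0]]  # toy gradients with global norm 13
clipped, pre_norm = clip_by_global_norm(grads, clip_norm=1.0)
print(pre_norm)  # 13.0 -- the value a log like this reports as grad_norm
```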
19:58:29.573 [I] EMA is not supported for PyTorch training (22110:train_pytorch.py:713)
19:58:29.574 [I] Training precision: float32 (22110:train_pytorch.py:714)
19:58:29.590 [I] Resolved config name: pi05_twin_dual_push_128_packed_split_expert_communicating_pytorch_5k (22110:train_pytorch.py:308)
19:58:29.590 [I] Dataset repo_id: lsnu/twin_dual_push_128_train (22110:train_pytorch.py:309)
19:58:29.591 [I] Norm-stats file path: /workspace/pi05tests/openpi/assets/pi05_twin_dual_push_128_packed_split_expert_communicating_pytorch_5k/lsnu/twin_dual_push_128_train/norm_stats.json (22110:train_pytorch.py:310)
19:58:29.592 [I] Norm-stats summary: {'keys': ['actions', 'state'], 'state_mean_len': 16, 'state_std_len': 16, 'actions_mean_len': 16, 'actions_std_len': 16} (22110:train_pytorch.py:311)
19:58:29.592 [I] Checkpoint source path: /workspace/checkpoints/pi05_base_split_communicating_packed_from_single (22110:train_pytorch.py:312)
19:58:29.592 [I] Model type: split_communicating (22110:train_pytorch.py:313)
19:58:29.593 [I] Packed transforms active: True (22110:train_pytorch.py:314)
19:58:29.593 [I] World size: 1 (22110:train_pytorch.py:315)
19:58:29.594 [I] Batch size: local=1, global=1 (22110:train_pytorch.py:316)
19:58:29.594 [I] num_workers: 0 (22110:train_pytorch.py:317)
19:58:29.595 [I] Precision: float32 (22110:train_pytorch.py:318)
19:58:29.595 [I] LR schedule summary: warmup_steps=250, peak_lr=2.50e-05, decay_steps=5000, decay_lr=2.50e-06 (22110:train_pytorch.py:319)
19:58:29.595 [I] Save/log intervals: save_interval=3, log_interval=1 (22110:train_pytorch.py:326)
19:58:29.596 [I] Action-loss mask: (1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0) (22110:train_pytorch.py:327)
19:58:29.596 [I] Active mask dims: [0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23] (22110:train_pytorch.py:328)
19:58:29.597 [I] Masked dims: [8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31] (22110:train_pytorch.py:329)
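The active/masked dims above follow directly from PackPerArmBlocks(real_arm_dims=(8, 8), block_dims=(16, 16)): each real 8-dof arm occupies the start of a 16-dim block, and the padded tail of each block is excluded from the loss. A sketch of that bookkeeping (hypothetical helper name):

```python
def per_arm_mask_dims(real_arm_dims=(8, 8), block_dims=(16, 16)):
    # For each arm, dims [offset, offset + real) are active; the padded
    # remainder [offset + real, offset + block) is masked out of the loss.
    active, masked, offset = [], [], 0
    for real, block in zip(real_arm_dims, block_dims):
        active.extend(range(offset, offset + real))
        masked.extend(range(offset + real, offset + block))
        offset += block
    return active, masked

active, masked = per_arm_mask_dims()
print(active)  # [0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23]
print(masked)  # [8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31]
```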
19:58:29.597 [I] Gradient bucket diagnostics: left_action_in, right_action_in, left_expert, right_expert, action_out, cross_arm_comm (22110:train_pytorch.py:722)
Training:   0%|          | 0/3 [00:00<?, ?it/s]
/usr/local/lib/python3.11/dist-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
  with device_autocast_ctx, torch.cpu.amp.autocast(**cpu_autocast_kwargs), recompute_context:  # type: ignore[attr-defined]
19:58:31.354 [I] debug_step=1 observation.state shape=(1, 32) dtype=torch.float64 actions shape=(1, 16, 32) dtype=torch.float32 (22110:train_pytorch.py:831)
19:58:31.355 [I] debug_step=1 image_keys=['base_0_rgb', 'left_wrist_0_rgb', 'right_wrist_0_rgb'] image_shapes={'base_0_rgb': (1, 3, 224, 224), 'left_wrist_0_rgb': (1, 3, 224, 224), 'right_wrist_0_rgb': (1, 3, 224, 224)} (22110:train_pytorch.py:835)
19:58:31.356 [I] debug_step=1 prompt_token_lengths=[75] (22110:train_pytorch.py:838)
19:58:31.356 [I] debug_step=1 state_stats min=-1.0000 max=1.0004 mean=0.0112 std=0.3876 (22110:train_pytorch.py:839)
19:58:31.357 [I] debug_step=1 action_stats min=-1.0016 max=1.0004 mean=-0.0454 std=0.4716 (22110:train_pytorch.py:842)
19:58:31.358 [I] debug_step=1 state_nonzero_counts_8d_blocks=[8, 0, 8, 0] action_nonzero_counts_8d_blocks=[128, 0, 128, 0] (22110:train_pytorch.py:845)
19:58:31.372 [I] debug_step=1 masked_dims=[8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31] active_dims=[0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23] masked_zero_counts state=16 actions=256 (22110:train_pytorch.py:849)
19:58:31.372 [I] debug_step=1 lr=9.96e-08 grad_norm=60.0472 data_time=0.3311s step_time=1.3966s gpu_mem_allocated=46.71GB gpu_mem_reserved=76.25GB gpu_mem_max_allocated=76.13GB gpu_mem_max_reserved=76.25GB (22110:train_pytorch.py:854)
19:58:31.373 [I] debug_step=1 grad_shared_backbone=36.9945 grad_left_action_in=2.3769 grad_right_action_in=1.7630 grad_left_expert=31.1244 grad_right_expert=27.8917 grad_action_out=13.0720 grad_cross_arm_comm=3.1067 cross_arm_comm_gate_layer_0=0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_2=0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=0.0000 cross_arm_comm_gate_layer_5=0.0000 cross_arm_comm_gate_layer_6=0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=0.0000 cross_arm_comm_gate_layer_9=0.0000 cross_arm_comm_gate_layer_10=0.0000 cross_arm_comm_gate_layer_11=0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=0.0000 cross_arm_comm_gate_layer_14=0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=0.0000 cross_arm_comm_gate_layer_17=0.0000 cross_arm_attention_mass_layer_0=0.0001 cross_arm_attention_mass_layer_1=0.0050 cross_arm_attention_mass_layer_2=0.0217 cross_arm_attention_mass_layer_3=0.0086 cross_arm_attention_mass_layer_4=0.0279 cross_arm_attention_mass_layer_5=0.0355 cross_arm_attention_mass_layer_6=0.0179 cross_arm_attention_mass_layer_7=0.0369 cross_arm_attention_mass_layer_8=0.0183 cross_arm_attention_mass_layer_9=0.0153 cross_arm_attention_mass_layer_10=0.0188 cross_arm_attention_mass_layer_11=0.0278 cross_arm_attention_mass_layer_12=0.0052 cross_arm_attention_mass_layer_13=0.0161 cross_arm_attention_mass_layer_14=0.0091 cross_arm_attention_mass_layer_15=0.0342 cross_arm_attention_mass_layer_16=0.0457 cross_arm_attention_mass_layer_17=0.0454 (22110:train_pytorch.py:862)
19:58:31.374 [I] step=1 loss=3.8411 smoothed_loss=3.8411 lr=9.96e-08 grad_norm=60.0472 step_time=1.3966s data_time=0.3311s it/s=0.555 eta_to_3=3.6s max_cuda_memory=76.13GB cross_arm_attention_mass_layer_0=0.0001 cross_arm_attention_mass_layer_1=0.0050 cross_arm_attention_mass_layer_10=0.0188 cross_arm_attention_mass_layer_11=0.0278 cross_arm_attention_mass_layer_12=0.0052 cross_arm_attention_mass_layer_13=0.0161 cross_arm_attention_mass_layer_14=0.0091 cross_arm_attention_mass_layer_15=0.0342 cross_arm_attention_mass_layer_16=0.0457 cross_arm_attention_mass_layer_17=0.0454 cross_arm_attention_mass_layer_2=0.0217 cross_arm_attention_mass_layer_3=0.0086 cross_arm_attention_mass_layer_4=0.0279 cross_arm_attention_mass_layer_5=0.0355 cross_arm_attention_mass_layer_6=0.0179 cross_arm_attention_mass_layer_7=0.0369 cross_arm_attention_mass_layer_8=0.0183 cross_arm_attention_mass_layer_9=0.0153 cross_arm_comm_gate_layer_0=0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_10=0.0000 cross_arm_comm_gate_layer_11=0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=0.0000 cross_arm_comm_gate_layer_14=0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=0.0000 cross_arm_comm_gate_layer_17=0.0000 cross_arm_comm_gate_layer_2=0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=0.0000 cross_arm_comm_gate_layer_5=0.0000 cross_arm_comm_gate_layer_6=0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=0.0000 cross_arm_comm_gate_layer_9=0.0000 grad_action_out=13.0720 grad_cross_arm_comm=3.1067 grad_left_action_in=2.3769 grad_left_expert=31.1244 grad_right_action_in=1.7630 grad_right_expert=27.8917 grad_shared_backbone=36.9945 (22110:train_pytorch.py:882)
Training:  33%|███▎      | 1/3 [00:01<00:03,  1.78s/it, loss=3.8411, lr=9.96e-08, step=1]
19:58:32.164 [I] debug_step=2 observation.state shape=(1, 32) dtype=torch.float64 actions shape=(1, 16, 32) dtype=torch.float32 (22110:train_pytorch.py:831)
19:58:32.165 [I] debug_step=2 image_keys=['base_0_rgb', 'left_wrist_0_rgb', 'right_wrist_0_rgb'] image_shapes={'base_0_rgb': (1, 3, 224, 224), 'left_wrist_0_rgb': (1, 3, 224, 224), 'right_wrist_0_rgb': (1, 3, 224, 224)} (22110:train_pytorch.py:835)
19:58:32.166 [I] debug_step=2 prompt_token_lengths=[76] (22110:train_pytorch.py:838)
19:58:32.166 [I] debug_step=2 state_stats min=-0.9415 max=1.0004 mean=-0.0010 std=0.4295 (22110:train_pytorch.py:839)
19:58:32.167 [I] debug_step=2 action_stats min=-1.0000 max=1.1367 mean=0.0272 std=0.4576 (22110:train_pytorch.py:842)
19:58:32.168 [I] debug_step=2 state_nonzero_counts_8d_blocks=[8, 0, 8, 0] action_nonzero_counts_8d_blocks=[128, 0, 128, 0] (22110:train_pytorch.py:845)
19:58:32.168 [I] debug_step=2 masked_dims=[8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31] active_dims=[0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23] masked_zero_counts state=16 actions=256 (22110:train_pytorch.py:849)
19:58:32.169 [I] debug_step=2 lr=1.99e-07 grad_norm=10.7300 data_time=0.1812s step_time=0.6234s gpu_mem_allocated=46.71GB gpu_mem_reserved=76.30GB gpu_mem_max_allocated=76.13GB gpu_mem_max_reserved=76.30GB (22110:train_pytorch.py:854)
19:58:32.169 [I] debug_step=2 grad_shared_backbone=9.2018 grad_left_action_in=0.1651 grad_right_action_in=0.1485 grad_left_expert=2.5032 grad_right_expert=2.3988 grad_action_out=4.0772 grad_cross_arm_comm=0.0166 cross_arm_comm_gate_layer_0=-0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_2=-0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=-0.0000 cross_arm_comm_gate_layer_5=-0.0000 cross_arm_comm_gate_layer_6=-0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=-0.0000 cross_arm_comm_gate_layer_9=-0.0000 cross_arm_comm_gate_layer_10=-0.0000 cross_arm_comm_gate_layer_11=-0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=-0.0000 cross_arm_comm_gate_layer_14=-0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=-0.0000 cross_arm_comm_gate_layer_17=-0.0000 cross_arm_attention_mass_layer_0=0.0000 cross_arm_attention_mass_layer_1=0.0019 cross_arm_attention_mass_layer_2=0.0161 cross_arm_attention_mass_layer_3=0.0029 cross_arm_attention_mass_layer_4=0.0175 cross_arm_attention_mass_layer_5=0.0243 cross_arm_attention_mass_layer_6=0.0074 cross_arm_attention_mass_layer_7=0.0232 cross_arm_attention_mass_layer_8=0.0155 cross_arm_attention_mass_layer_9=0.0135 cross_arm_attention_mass_layer_10=0.0094 cross_arm_attention_mass_layer_11=0.0151 cross_arm_attention_mass_layer_12=0.0021 cross_arm_attention_mass_layer_13=0.0053 cross_arm_attention_mass_layer_14=0.0056 cross_arm_attention_mass_layer_15=0.0250 cross_arm_attention_mass_layer_16=0.0356 cross_arm_attention_mass_layer_17=0.0413 (22110:train_pytorch.py:862)
19:58:32.170 [I] step=2 loss=1.1389 smoothed_loss=3.5709 lr=1.99e-07 grad_norm=10.7300 step_time=0.6234s data_time=0.1812s it/s=1.257 eta_to_3=0.8s max_cuda_memory=76.13GB cross_arm_attention_mass_layer_0=0.0000 cross_arm_attention_mass_layer_1=0.0019 cross_arm_attention_mass_layer_10=0.0094 cross_arm_attention_mass_layer_11=0.0151 cross_arm_attention_mass_layer_12=0.0021 cross_arm_attention_mass_layer_13=0.0053 cross_arm_attention_mass_layer_14=0.0056 cross_arm_attention_mass_layer_15=0.0250 cross_arm_attention_mass_layer_16=0.0356 cross_arm_attention_mass_layer_17=0.0413 cross_arm_attention_mass_layer_2=0.0161 cross_arm_attention_mass_layer_3=0.0029 cross_arm_attention_mass_layer_4=0.0175 cross_arm_attention_mass_layer_5=0.0243 cross_arm_attention_mass_layer_6=0.0074 cross_arm_attention_mass_layer_7=0.0232 cross_arm_attention_mass_layer_8=0.0155 cross_arm_attention_mass_layer_9=0.0135 cross_arm_comm_gate_layer_0=-0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_10=-0.0000 cross_arm_comm_gate_layer_11=-0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=-0.0000 cross_arm_comm_gate_layer_14=-0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=-0.0000 cross_arm_comm_gate_layer_17=-0.0000 cross_arm_comm_gate_layer_2=-0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=-0.0000 cross_arm_comm_gate_layer_5=-0.0000 cross_arm_comm_gate_layer_6=-0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=-0.0000 cross_arm_comm_gate_layer_9=-0.0000 grad_action_out=4.0772 grad_cross_arm_comm=0.0166 grad_left_action_in=0.1651 grad_left_expert=2.5032 grad_right_action_in=0.1485 grad_right_expert=2.3988 grad_shared_backbone=9.2018 (22110:train_pytorch.py:882)
Training:  67%|██████▋   | 2/3 [00:02<00:01,  1.20s/it, loss=1.1389, lr=1.99e-07, step=2]
19:58:32.708 [I] debug_step=3 observation.state shape=(1, 32) dtype=torch.float64 actions shape=(1, 16, 32) dtype=torch.float32 (22110:train_pytorch.py:831)
19:58:32.709 [I] debug_step=3 image_keys=['base_0_rgb', 'left_wrist_0_rgb', 'right_wrist_0_rgb'] image_shapes={'base_0_rgb': (1, 3, 224, 224), 'left_wrist_0_rgb': (1, 3, 224, 224), 'right_wrist_0_rgb': (1, 3, 224, 224)} (22110:train_pytorch.py:835)
19:58:32.709 [I] debug_step=3 prompt_token_lengths=[75] (22110:train_pytorch.py:838)
19:58:32.710 [I] debug_step=3 state_stats min=-1.0000 max=1.0004 mean=0.0558 std=0.4300 (22110:train_pytorch.py:839)
19:58:32.711 [I] debug_step=3 action_stats min=-1.0033 max=1.0004 mean=-0.0658 std=0.4704 (22110:train_pytorch.py:842)
19:58:32.711 [I] debug_step=3 state_nonzero_counts_8d_blocks=[8, 0, 8, 0] action_nonzero_counts_8d_blocks=[128, 0, 128, 0] (22110:train_pytorch.py:845)
19:58:32.712 [I] debug_step=3 masked_dims=[8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31] active_dims=[0, 1, 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20, 21, 22, 23] masked_zero_counts state=16 actions=256 (22110:train_pytorch.py:849)
19:58:32.712 [I] debug_step=3 lr=2.99e-07 grad_norm=343.7256 data_time=0.1312s step_time=0.4126s gpu_mem_allocated=46.71GB gpu_mem_reserved=76.30GB gpu_mem_max_allocated=76.13GB gpu_mem_max_reserved=76.30GB (22110:train_pytorch.py:854)
19:58:32.713 [I] debug_step=3 grad_shared_backbone=215.2880 grad_left_action_in=4.7981 grad_right_action_in=9.5346 grad_left_expert=72.6437 grad_right_expert=227.6029 grad_action_out=23.7709 grad_cross_arm_comm=3.3555 cross_arm_comm_gate_layer_0=-0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_2=-0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=-0.0000 cross_arm_comm_gate_layer_5=-0.0000 cross_arm_comm_gate_layer_6=-0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=-0.0000 cross_arm_comm_gate_layer_9=-0.0000 cross_arm_comm_gate_layer_10=-0.0000 cross_arm_comm_gate_layer_11=-0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=-0.0000 cross_arm_comm_gate_layer_14=0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=-0.0000 cross_arm_comm_gate_layer_17=-0.0000 cross_arm_attention_mass_layer_0=0.0003 cross_arm_attention_mass_layer_1=0.0127 cross_arm_attention_mass_layer_2=0.0275 cross_arm_attention_mass_layer_3=0.0190 cross_arm_attention_mass_layer_4=0.0359 cross_arm_attention_mass_layer_5=0.0454 cross_arm_attention_mass_layer_6=0.0228 cross_arm_attention_mass_layer_7=0.0346 cross_arm_attention_mass_layer_8=0.0149 cross_arm_attention_mass_layer_9=0.0296 cross_arm_attention_mass_layer_10=0.0177 cross_arm_attention_mass_layer_11=0.0230 cross_arm_attention_mass_layer_12=0.0134 cross_arm_attention_mass_layer_13=0.0242 cross_arm_attention_mass_layer_14=0.0109 cross_arm_attention_mass_layer_15=0.0285 cross_arm_attention_mass_layer_16=0.0403 cross_arm_attention_mass_layer_17=0.0268 (22110:train_pytorch.py:862)
19:58:32.713 [I] step=3 loss=5.0518 smoothed_loss=3.7190 lr=2.99e-07 grad_norm=343.7256 step_time=0.4126s data_time=0.1312s it/s=1.843 eta_to_3=0.0s max_cuda_memory=76.13GB cross_arm_attention_mass_layer_0=0.0003 cross_arm_attention_mass_layer_1=0.0127 cross_arm_attention_mass_layer_10=0.0177 cross_arm_attention_mass_layer_11=0.0230 cross_arm_attention_mass_layer_12=0.0134 cross_arm_attention_mass_layer_13=0.0242 cross_arm_attention_mass_layer_14=0.0109 cross_arm_attention_mass_layer_15=0.0285 cross_arm_attention_mass_layer_16=0.0403 cross_arm_attention_mass_layer_17=0.0268 cross_arm_attention_mass_layer_2=0.0275 cross_arm_attention_mass_layer_3=0.0190 cross_arm_attention_mass_layer_4=0.0359 cross_arm_attention_mass_layer_5=0.0454 cross_arm_attention_mass_layer_6=0.0228 cross_arm_attention_mass_layer_7=0.0346 cross_arm_attention_mass_layer_8=0.0149 cross_arm_attention_mass_layer_9=0.0296 cross_arm_comm_gate_layer_0=-0.0000 cross_arm_comm_gate_layer_1=0.0000 cross_arm_comm_gate_layer_10=-0.0000 cross_arm_comm_gate_layer_11=-0.0000 cross_arm_comm_gate_layer_12=0.0000 cross_arm_comm_gate_layer_13=-0.0000 cross_arm_comm_gate_layer_14=0.0000 cross_arm_comm_gate_layer_15=0.0000 cross_arm_comm_gate_layer_16=-0.0000 cross_arm_comm_gate_layer_17=-0.0000 cross_arm_comm_gate_layer_2=-0.0000 cross_arm_comm_gate_layer_3=0.0000 cross_arm_comm_gate_layer_4=-0.0000 cross_arm_comm_gate_layer_5=-0.0000 cross_arm_comm_gate_layer_6=-0.0000 cross_arm_comm_gate_layer_7=0.0000 cross_arm_comm_gate_layer_8=-0.0000 cross_arm_comm_gate_layer_9=-0.0000 grad_action_out=23.7709 grad_cross_arm_comm=3.3555 grad_left_action_in=4.7981 grad_left_expert=72.6437 grad_right_action_in=9.5346 grad_right_expert=227.6029 grad_shared_backbone=215.2880 (22110:train_pytorch.py:882)
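The smoothed_loss values (3.8411, 3.5709, 3.7190) behave like an exponential moving average of the raw loss with decay 0.9, initialized to the first loss. This coefficient is inferred by back-solving from the three logged steps; the actual smoothing code is not visible in this log:

```python
def ema_losses(losses, decay=0.9):
    # Inferred rule: smoothed = decay * smoothed + (1 - decay) * loss,
    # seeded with the first raw loss value.
    smoothed = losses[0]
    out = [smoothed]
    for loss in losses[1:]:
        smoothed = decay * smoothed + (1.0 - decay) * loss
        out.append(smoothed)
    return out

# Raw losses from steps 1-3; rounded output matches the logged smoothed_loss.
print([round(s, 4) for s in ema_losses([3.8411, 1.1389, 5.0518])])
```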
20:01:38.475 [I] Saved checkpoint at step 3 -> /workspace/pi05tests/openpi/checkpoints/pi05_twin_dual_push_128_packed_split_expert_communicating_pytorch_5k/split_communicating_real_smoke3/3 (22110:train_pytorch.py:378)
Training: 100%|██████████| 3/3 [03:09<00:00, 63.01s/it, loss=5.0518, lr=2.99e-07, step=3]