Junyi42 committed · verified
Commit 8d6d6aa · 1 parent: 5eb47c7

Upload checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins

checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/wandb/offline-run-20260125_192135-checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins-run0/files/output.log CHANGED
@@ -1204,27 +1204,6 @@ wandb: For more information, check out the docs at: https://weave-docs.wandb.ai/
 [2026-01-25 20:16:01] (step=0001016) Train Loss mse: 0.0000, Train Loss ce: 0.5289, Train Steps/Sec: 0.28,
 [2026-01-25 20:16:04] (step=0001017) Train Loss mse: 0.0000, Train Loss ce: 0.5496, Train Steps/Sec: 0.30,
 [2026-01-25 20:16:06] (step=0001018) Train Loss mse: 0.0000, Train Loss ce: 0.5517, Train Steps/Sec: 0.39,
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step1500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.6630723476409912, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.8126255869865417, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 20:16:08] (step=0001019) Train Loss mse: 0.0000, Train Loss ce: 0.5016, Train Steps/Sec: 0.52,
 [2026-01-25 20:16:12] (step=0001020) Train Loss mse: 0.0000, Train Loss ce: 0.5413, Train Steps/Sec: 0.27,
 [2026-01-25 20:16:15] (step=0001021) Train Loss mse: 0.0000, Train Loss ce: 0.5106, Train Steps/Sec: 0.33,
@@ -1274,6 +1253,20 @@ ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 20:18:28] (step=0001065) Train Loss mse: 0.0000, Train Loss ce: 0.5084, Train Steps/Sec: 0.44,
 [2026-01-25 20:18:31] (step=0001066) Train Loss mse: 0.0000, Train Loss ce: 0.5379, Train Steps/Sec: 0.37,
 [2026-01-25 20:18:33] (step=0001067) Train Loss mse: 0.0000, Train Loss ce: 0.5146, Train Steps/Sec: 0.41,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step1500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.6630723476409912, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.8126255869865417, mse_avg: 0.0
 [2026-01-25 20:18:36] (step=0001068) Train Loss mse: 0.0000, Train Loss ce: 0.5320, Train Steps/Sec: 0.42,
 [2026-01-25 20:18:38] (step=0001069) Train Loss mse: 0.0000, Train Loss ce: 0.5156, Train Steps/Sec: 0.46,
 [2026-01-25 20:18:41] (step=0001070) Train Loss mse: 0.0000, Train Loss ce: 0.5680, Train Steps/Sec: 0.36,
@@ -2659,6 +2652,20 @@ ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 21:23:42] (step=0002450) Train Loss mse: 0.0000, Train Loss ce: 0.5259, Train Steps/Sec: 0.32,
 [2026-01-25 21:23:44] (step=0002451) Train Loss mse: 0.0000, Train Loss ce: 0.4923, Train Steps/Sec: 0.53,
 [2026-01-25 21:23:46] (step=0002452) Train Loss mse: 0.0000, Train Loss ce: 0.4985, Train Steps/Sec: 0.53,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9854414463043213, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9968664646148682, mse_avg: 0.0
 [2026-01-25 21:23:49] (step=0002453) Train Loss mse: 0.0000, Train Loss ce: 0.5462, Train Steps/Sec: 0.33,
 [2026-01-25 21:23:52] (step=0002454) Train Loss mse: 0.0000, Train Loss ce: 0.4800, Train Steps/Sec: 0.32,
 [2026-01-25 21:23:56] (step=0002455) Train Loss mse: 0.0000, Train Loss ce: 0.5539, Train Steps/Sec: 0.26,
@@ -2722,20 +2729,6 @@ ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 21:26:49] (step=0002513) Train Loss mse: 0.0000, Train Loss ce: 0.5441, Train Steps/Sec: 0.51,
 [2026-01-25 21:26:52] (step=0002514) Train Loss mse: 0.0000, Train Loss ce: 0.5287, Train Steps/Sec: 0.32,
 [2026-01-25 21:26:54] (step=0002515) Train Loss mse: 0.0000, Train Loss ce: 0.5109, Train Steps/Sec: 0.49,
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9968664646148682, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 21:26:57] (step=0002516) Train Loss mse: 0.0000, Train Loss ce: 0.5004, Train Steps/Sec: 0.44,
 [2026-01-25 21:26:59] (step=0002517) Train Loss mse: 0.0000, Train Loss ce: 0.5243, Train Steps/Sec: 0.47,
 [2026-01-25 21:27:01] (step=0002518) Train Loss mse: 0.0000, Train Loss ce: 0.4966, Train Steps/Sec: 0.46,
@@ -3751,6 +3744,21 @@ ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 22:14:55] (step=0003528) Train Loss mse: 0.0000, Train Loss ce: 0.5088, Train Steps/Sec: 0.30,
 [2026-01-25 22:14:58] (step=0003529) Train Loss mse: 0.0000, Train Loss ce: 0.4618, Train Steps/Sec: 0.36,
 [2026-01-25 22:15:01] (step=0003530) Train Loss mse: 0.0000, Train Loss ce: 0.5275, Train Steps/Sec: 0.30,
+[2026-01-25 22:15:03] (step=0003531) Train Loss mse: 0.0000, Train Loss ce: 0.5246, Train Steps/Sec: 0.43,
+[2026-01-25 22:15:06] (step=0003532) Train Loss mse: 0.0000, Train Loss ce: 0.4948, Train Steps/Sec: 0.34,
+[2026-01-25 22:15:08] (step=0003533) Train Loss mse: 0.0000, Train Loss ce: 0.4725, Train Steps/Sec: 0.61,
+[2026-01-25 22:15:10] (step=0003534) Train Loss mse: 0.0000, Train Loss ce: 0.4692, Train Steps/Sec: 0.52,
+[2026-01-25 22:15:13] (step=0003535) Train Loss mse: 0.0000, Train Loss ce: 0.5502, Train Steps/Sec: 0.30,
+[2026-01-25 22:15:17] (step=0003536) Train Loss mse: 0.0000, Train Loss ce: 0.5425, Train Steps/Sec: 0.27,
+[2026-01-25 22:15:19] (step=0003537) Train Loss mse: 0.0000, Train Loss ce: 0.4866, Train Steps/Sec: 0.39,
+[2026-01-25 22:15:24] (step=0003538) Train Loss mse: 0.0000, Train Loss ce: 0.5266, Train Steps/Sec: 0.23,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9615826606750488, mse_avg: 0.0
 base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step4000
 Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
 [eval debug] first 3 batch fingerprints:
@@ -3765,6 +3773,17 @@ Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_count
 fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
 fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
 ce_avg: 0.8653228282928467, mse_avg: 0.0
+[2026-01-25 22:15:26] (step=0003539) Train Loss mse: 0.0000, Train Loss ce: 0.5203, Train Steps/Sec: 0.47,
+[2026-01-25 22:15:28] (step=0003540) Train Loss mse: 0.0000, Train Loss ce: 0.4538, Train Steps/Sec: 0.49,
+[2026-01-25 22:15:31] (step=0003541) Train Loss mse: 0.0000, Train Loss ce: 0.4993, Train Steps/Sec: 0.32,
+[2026-01-25 22:15:34] (step=0003542) Train Loss mse: 0.0000, Train Loss ce: 0.5085, Train Steps/Sec: 0.34,
+[2026-01-25 22:15:36] (step=0003543) Train Loss mse: 0.0000, Train Loss ce: 0.4943, Train Steps/Sec: 0.49,
+[2026-01-25 22:15:38] (step=0003544) Train Loss mse: 0.0000, Train Loss ce: 0.5110, Train Steps/Sec: 0.44,
+[2026-01-25 22:15:41] (step=0003545) Train Loss mse: 0.0000, Train Loss ce: 0.4802, Train Steps/Sec: 0.32,
+[2026-01-25 22:15:44] (step=0003546) Train Loss mse: 0.0000, Train Loss ce: 0.4802, Train Steps/Sec: 0.39,
+[2026-01-25 22:15:46] (step=0003547) Train Loss mse: 0.0000, Train Loss ce: 0.4930, Train Steps/Sec: 0.39,
+[2026-01-25 22:15:50] (step=0003548) Train Loss mse: 0.0000, Train Loss ce: 0.5221, Train Steps/Sec: 0.31,
+[2026-01-25 22:15:52] (step=0003549) Train Loss mse: 0.0000, Train Loss ce: 0.4530, Train Steps/Sec: 0.41,
 [2026-01-25 22:15:54] (step=0003550) Train Loss mse: 0.0000, Train Loss ce: 0.4626, Train Steps/Sec: 0.57,
 [2026-01-25 22:15:56] (step=0003551) Train Loss mse: 0.0000, Train Loss ce: 0.4705, Train Steps/Sec: 0.49,
 [2026-01-25 22:15:58] (step=0003552) Train Loss mse: 0.0000, Train Loss ce: 0.4905, Train Steps/Sec: 0.44,
@@ -5210,6 +5229,13 @@ ce_avg: 0.8653228282928467, mse_avg: 0.0
 [2026-01-25 23:24:51] (step=0004992) Train Loss mse: 0.0000, Train Loss ce: 0.5111, Train Steps/Sec: 0.37,
 [2026-01-25 23:24:54] (step=0004993) Train Loss mse: 0.0000, Train Loss ce: 0.5282, Train Steps/Sec: 0.38,
 [2026-01-25 23:24:57] (step=0004994) Train Loss mse: 0.0000, Train Loss ce: 0.5177, Train Steps/Sec: 0.33,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step5000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.847433865070343, mse_avg: 0.0
 [2026-01-25 23:25:00] (step=0004995) Train Loss mse: 0.0000, Train Loss ce: 0.5113, Train Steps/Sec: 0.34,
 [2026-01-25 23:25:03] (step=0004996) Train Loss mse: 0.0000, Train Loss ce: 0.4552, Train Steps/Sec: 0.34,
 [2026-01-25 23:25:06] (step=0004997) Train Loss mse: 0.0000, Train Loss ce: 0.5105, Train Steps/Sec: 0.38,
@@ -5219,11 +5245,4 @@ ce_avg: 0.8653228282928467, mse_avg: 0.0
 [2026-01-25 23:25:21] Saving checkpoint to /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/0005000.
 /opt/conda/lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
 warnings.warn(
-[2026-01-25 23:27:59] Done!
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step5000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.847433865070343, mse_avg: 0.0
+[2026-01-25 23:27:59] Done!