Upload checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins
checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/wandb/offline-run-20260125_192135-checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins-run0/files/output.log
CHANGED

@@ -1,3 +1,180 @@
+FullyShardedDataParallel(
+  (_fsdp_wrapped_module): Bagel(
+    (language_model): Qwen2ForCausalLM(
+      (model): Qwen2Model(
+        (embed_tokens): Embedding(152064, 3584)
+        (layers): ModuleList(
+          (0-27): 28 x FullyShardedDataParallel(
+            (_fsdp_wrapped_module): CheckpointWrapper(
+              (_checkpoint_wrapped_module): Qwen2MoTDecoderLayer(
+                (self_attn): PackedAttentionMoT(
+                  (q_proj): Linear(in_features=3584, out_features=3584, bias=True)
+                  (k_proj): Linear(in_features=3584, out_features=512, bias=True)
+                  (v_proj): Linear(in_features=3584, out_features=512, bias=True)
+                  (o_proj): Linear(in_features=3584, out_features=3584, bias=False)
+                  (q_norm): Qwen2RMSNorm((128,), eps=1e-06)
+                  (k_norm): Qwen2RMSNorm((128,), eps=1e-06)
+                  (q_norm_moe_gen): Qwen2RMSNorm((128,), eps=1e-06)
+                  (k_norm_moe_gen): Qwen2RMSNorm((128,), eps=1e-06)
+                  (q_proj_moe_gen): Linear(in_features=3584, out_features=3584, bias=True)
+                  (k_proj_moe_gen): Linear(in_features=3584, out_features=512, bias=True)
+                  (v_proj_moe_gen): Linear(in_features=3584, out_features=512, bias=True)
+                  (o_proj_moe_gen): Linear(in_features=3584, out_features=3584, bias=False)
+                )
+                (mlp): Qwen2MLP(
+                  (gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
+                  (up_proj): Linear(in_features=3584, out_features=18944, bias=False)
+                  (down_proj): Linear(in_features=18944, out_features=3584, bias=False)
+                  (act_fn): SiLU()
+                )
+                (mlp_moe_gen): Qwen2MLP(
+                  (gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
+                  (up_proj): Linear(in_features=3584, out_features=18944, bias=False)
+                  (down_proj): Linear(in_features=18944, out_features=3584, bias=False)
+                  (act_fn): SiLU()
+                )
+                (input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
+                (input_layernorm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
+                (post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
+                (post_attention_layernorm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
+              )
+            )
+          )
+        )
+        (norm): Qwen2RMSNorm((3584,), eps=1e-06)
+        (norm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
+        (rotary_emb): Qwen2RotaryEmbedding()
+      )
+      (lm_head): Linear(in_features=3584, out_features=152064, bias=False)
+    )
+    (vit_model): SiglipVisionModel(
+      (vision_model): FullyShardedDataParallel(
+        (_fsdp_wrapped_module): SiglipVisionTransformer(
+          (embeddings): SiglipVisionEmbeddings(
+            (position_embedding): Embedding(4900, 1152)
+            (patch_embedding): Linear(in_features=588, out_features=1152, bias=True)
+          )
+          (encoder): SiglipEncoder(
+            (layers): ModuleList(
+              (0-25): 26 x FullyShardedDataParallel(
+                (_fsdp_wrapped_module): CheckpointWrapper(
+                  (_checkpoint_wrapped_module): SiglipEncoderLayer(
+                    (self_attn): SiglipFlashAttention2(
+                      (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+                      (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+                      (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+                      (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+                    )
+                    (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+                    (mlp): SiglipMLP(
+                      (activation_fn): PytorchGELUTanh()
+                      (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+                      (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+                    )
+                    (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+                  )
+                )
+              )
+            )
+          )
+          (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+        )
+      )
+    )
+    (connector): FullyShardedDataParallel(
+      (_fsdp_wrapped_module): CheckpointWrapper(
+        (_checkpoint_wrapped_module): MLPconnector(
+          (activation_fn): PytorchGELUTanh()
+          (fc1): Linear(in_features=1152, out_features=3584, bias=True)
+          (fc2): Linear(in_features=3584, out_features=3584, bias=True)
+        )
+      )
+    )
+    (vit_pos_embed): FullyShardedDataParallel(
+      (_fsdp_wrapped_module): PositionEmbedding()
+    )
+  )
+)
+_flat_param True
+language_model.model.layers.0._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.1._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.2._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.3._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.4._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.5._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.6._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.7._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.8._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.9._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.10._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.11._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.12._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.13._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.14._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.15._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.16._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.17._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.18._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.19._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.20._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.21._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.22._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.23._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.24._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.25._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.26._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+language_model.model.layers.27._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.0._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.1._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.2._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.3._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.4._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.5._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.6._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.7._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.8._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.9._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.10._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.11._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.12._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.13._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.14._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.15._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.16._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.17._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.18._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.19._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.20._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.21._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.22._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.23._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.24._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_model.vision_model._fsdp_wrapped_module.encoder.layers.25._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+connector._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
+vit_pos_embed._fsdp_wrapped_module._flat_param False
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse/vlm_gym_counting_mark_all_train
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step0
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 1.073127269744873, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.5390675663948059, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step1000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.6019229888916016, mse_avg: 0.0
 wandb: Detected [huggingface_hub.inference] in use.
 wandb: Use W&B Weave for improved LLM call tracing. Install Weave with `pip install weave` then add `import weave` to the top of your script.
 wandb: For more information, check out the docs at: https://weave-docs.wandb.ai/
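
The block added at the top of output.log is the printed repr of the FSDP-wrapped Bagel model (a Qwen2 language model with duplicated *_moe_gen expert branches, a SigLIP vision tower, and an MLP connector), followed by one line per FSDP flat parameter. A minimal sketch of how a dump like this is typically produced, assuming the True/False column is each flat parameter's requires_grad flag (vit_pos_embed reporting False would then mark it as frozen); the helper name is illustrative:

    import torch

    def dump_fsdp_model(model: torch.nn.Module) -> None:
        # Printing an FSDP-wrapped module yields the nested repr above,
        # FullyShardedDataParallel / CheckpointWrapper wrappers included.
        print(model)
        # Each FSDP unit owns one flattened parameter; listing its
        # requires_grad flag reproduces the "<name> True/False" lines.
        for name, param in model.named_parameters():
            if name.endswith("_flat_param"):
                print(name, param.requires_grad)
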
@@ -1027,6 +1204,27 @@ wandb: For more information, check out the docs at: https://weave-docs.wandb.ai/
 [2026-01-25 20:16:01] (step=0001016) Train Loss mse: 0.0000, Train Loss ce: 0.5289, Train Steps/Sec: 0.28,
 [2026-01-25 20:16:04] (step=0001017) Train Loss mse: 0.0000, Train Loss ce: 0.5496, Train Steps/Sec: 0.30,
 [2026-01-25 20:16:06] (step=0001018) Train Loss mse: 0.0000, Train Loss ce: 0.5517, Train Steps/Sec: 0.39,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step1500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.6630723476409912, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.8126255869865417, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 20:16:08] (step=0001019) Train Loss mse: 0.0000, Train Loss ce: 0.5016, Train Steps/Sec: 0.52,
 [2026-01-25 20:16:12] (step=0001020) Train Loss mse: 0.0000, Train Loss ce: 0.5413, Train Steps/Sec: 0.27,
 [2026-01-25 20:16:15] (step=0001021) Train Loss mse: 0.0000, Train Loss ce: 0.5106, Train Steps/Sec: 0.33,
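
Every evaluation block in this log follows one pattern: rebuild the *_evalonce val dataset, print fingerprints of the first three batches, then report averaged losses. The fingerprints (data_indexes 0, 8, 16) are identical at every eval step, which is what the debug output is for: confirming the val loader is deterministic so that ce_avg values are comparable across checkpoints; mse_avg staying at 0.0 matches the ce_no_mse run name. A hypothetical sketch of such a loop; compute_losses and the fingerprint field are assumed names, not confirmed by the log:

    import torch

    @torch.no_grad()
    def evaluate(model, val_loader, debug_batches: int = 3) -> None:
        ce_sum, mse_sum, n = 0.0, 0.0, 0
        print("[eval debug] first 3 batch fingerprints:")
        for i, batch in enumerate(val_loader):
            if i < debug_batches:
                # e.g. [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': ...}]
                print(f"fp[{i}]: {batch['fingerprint']}")
            ce, mse = model.compute_losses(batch)  # assumed interface
            ce_sum += ce.item()
            mse_sum += mse.item()
            n += 1
        print(f"ce_avg: {ce_sum / n}, mse_avg: {mse_sum / n}")
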
@@ -1062,197 +1260,6 @@ wandb: For more information, check out the docs at: https://weave-docs.wandb.ai/
 [2026-01-25 20:17:46] (step=0001051) Train Loss mse: 0.0000, Train Loss ce: 0.5387, Train Steps/Sec: 0.37,
 [2026-01-25 20:17:49] (step=0001052) Train Loss mse: 0.0000, Train Loss ce: 0.5305, Train Steps/Sec: 0.27,
 [2026-01-25 20:17:52] (step=0001053) Train Loss mse: 0.0000, Train Loss ce: 0.5647, Train Steps/Sec: 0.33,
-FullyShardedDataParallel(
-  (_fsdp_wrapped_module): Bagel(
-    (language_model): Qwen2ForCausalLM(
-      (model): Qwen2Model(
-        (embed_tokens): Embedding(152064, 3584)
-        (layers): ModuleList(
-          (0-27): 28 x FullyShardedDataParallel(
-            (_fsdp_wrapped_module): CheckpointWrapper(
-              (_checkpoint_wrapped_module): Qwen2MoTDecoderLayer(
-                (self_attn): PackedAttentionMoT(
-                  (q_proj): Linear(in_features=3584, out_features=3584, bias=True)
-                  (k_proj): Linear(in_features=3584, out_features=512, bias=True)
-                  (v_proj): Linear(in_features=3584, out_features=512, bias=True)
-                  (o_proj): Linear(in_features=3584, out_features=3584, bias=False)
-                  (q_norm): Qwen2RMSNorm((128,), eps=1e-06)
-                  (k_norm): Qwen2RMSNorm((128,), eps=1e-06)
-                  (q_norm_moe_gen): Qwen2RMSNorm((128,), eps=1e-06)
-                  (k_norm_moe_gen): Qwen2RMSNorm((128,), eps=1e-06)
-                  (q_proj_moe_gen): Linear(in_features=3584, out_features=3584, bias=True)
-                  (k_proj_moe_gen): Linear(in_features=3584, out_features=512, bias=True)
-                  (v_proj_moe_gen): Linear(in_features=3584, out_features=512, bias=True)
-                  (o_proj_moe_gen): Linear(in_features=3584, out_features=3584, bias=False)
-                )
-                (mlp): Qwen2MLP(
-                  (gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
-                  (up_proj): Linear(in_features=3584, out_features=18944, bias=False)
-                  (down_proj): Linear(in_features=18944, out_features=3584, bias=False)
-                  (act_fn): SiLU()
-                )
-                (mlp_moe_gen): Qwen2MLP(
-                  (gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
-                  (up_proj): Linear(in_features=3584, out_features=18944, bias=False)
-                  (down_proj): Linear(in_features=18944, out_features=3584, bias=False)
-                  (act_fn): SiLU()
-                )
-                (input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
-                (input_layernorm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
-                (post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
-                (post_attention_layernorm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
-              )
-            )
-          )
-        )
-        (norm): Qwen2RMSNorm((3584,), eps=1e-06)
-        (norm_moe_gen): Qwen2RMSNorm((3584,), eps=1e-06)
-        (rotary_emb): Qwen2RotaryEmbedding()
-      )
-      (lm_head): Linear(in_features=3584, out_features=152064, bias=False)
-    )
-    (vit_model): SiglipVisionModel(
-      (vision_model): FullyShardedDataParallel(
-        (_fsdp_wrapped_module): SiglipVisionTransformer(
-          (embeddings): SiglipVisionEmbeddings(
-            (position_embedding): Embedding(4900, 1152)
-            (patch_embedding): Linear(in_features=588, out_features=1152, bias=True)
-          )
-          (encoder): SiglipEncoder(
-            (layers): ModuleList(
-              (0-25): 26 x FullyShardedDataParallel(
-                (_fsdp_wrapped_module): CheckpointWrapper(
-                  (_checkpoint_wrapped_module): SiglipEncoderLayer(
-                    (self_attn): SiglipFlashAttention2(
-                      (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
-                      (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
-                      (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
-                      (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
-                    )
-                    (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
-                    (mlp): SiglipMLP(
-                      (activation_fn): PytorchGELUTanh()
-                      (fc1): Linear(in_features=1152, out_features=4304, bias=True)
-                      (fc2): Linear(in_features=4304, out_features=1152, bias=True)
-                    )
-                    (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
-                  )
-                )
-              )
-            )
-          )
-          (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
-        )
-      )
-    )
-    (connector): FullyShardedDataParallel(
-      (_fsdp_wrapped_module): CheckpointWrapper(
-        (_checkpoint_wrapped_module): MLPconnector(
-          (activation_fn): PytorchGELUTanh()
-          (fc1): Linear(in_features=1152, out_features=3584, bias=True)
-          (fc2): Linear(in_features=3584, out_features=3584, bias=True)
-        )
-      )
-    )
-    (vit_pos_embed): FullyShardedDataParallel(
-      (_fsdp_wrapped_module): PositionEmbedding()
-    )
-  )
-)
-_flat_param True
-language_model.model.layers.0._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.1._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.2._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.3._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.4._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.5._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.6._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.7._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.8._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.9._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.10._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.11._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.12._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.13._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.14._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.15._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.16._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.17._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.18._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.19._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.20._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.21._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.22._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.23._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.24._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.25._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.26._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-language_model.model.layers.27._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.0._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.1._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.2._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.3._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.4._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.5._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.6._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.7._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.8._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.9._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.10._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.11._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.12._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.13._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.14._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.15._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.16._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.17._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.18._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.19._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.20._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.21._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.22._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.23._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.24._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_model.vision_model._fsdp_wrapped_module.encoder.layers.25._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-connector._fsdp_wrapped_module._checkpoint_wrapped_module._flat_param True
-vit_pos_embed._fsdp_wrapped_module._flat_param False
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse/vlm_gym_counting_mark_all_train
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step0
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 1.073127269744873, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.5390675663948059, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step1500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.6630723476409912, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.8126255869865417, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step2500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 20:17:55] (step=0001054) Train Loss mse: 0.0000, Train Loss ce: 0.5160, Train Steps/Sec: 0.34,
 [2026-01-25 20:17:59] (step=0001055) Train Loss mse: 0.0000, Train Loss ce: 0.5283, Train Steps/Sec: 0.28,
 [2026-01-25 20:18:02] (step=0001056) Train Loss mse: 0.0000, Train Loss ce: 0.5526, Train Steps/Sec: 0.35,
@@ -2692,20 +2699,6 @@ ce_avg: 0.9854414463043213, mse_avg: 0.0
 [2026-01-25 21:25:36] (step=0002490) Train Loss mse: 0.0000, Train Loss ce: 0.5307, Train Steps/Sec: 0.45,
 [2026-01-25 21:25:39] (step=0002491) Train Loss mse: 0.0000, Train Loss ce: 0.5212, Train Steps/Sec: 0.29,
 [2026-01-25 21:25:41] (step=0002492) Train Loss mse: 0.0000, Train Loss ce: 0.4945, Train Steps/Sec: 0.48,
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9968664646148682, mse_avg: 0.0
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3500
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 21:25:44] (step=0002493) Train Loss mse: 0.0000, Train Loss ce: 0.5583, Train Steps/Sec: 0.37,
 [2026-01-25 21:25:48] (step=0002494) Train Loss mse: 0.0000, Train Loss ce: 0.5176, Train Steps/Sec: 0.24,
 [2026-01-25 21:25:51] (step=0002495) Train Loss mse: 0.0000, Train Loss ce: 0.5266, Train Steps/Sec: 0.32,
@@ -2729,6 +2722,20 @@ ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 21:26:49] (step=0002513) Train Loss mse: 0.0000, Train Loss ce: 0.5441, Train Steps/Sec: 0.51,
 [2026-01-25 21:26:52] (step=0002514) Train Loss mse: 0.0000, Train Loss ce: 0.5287, Train Steps/Sec: 0.32,
 [2026-01-25 21:26:54] (step=0002515) Train Loss mse: 0.0000, Train Loss ce: 0.5109, Train Steps/Sec: 0.49,
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9968664646148682, mse_avg: 0.0
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step3500
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 21:26:57] (step=0002516) Train Loss mse: 0.0000, Train Loss ce: 0.5004, Train Steps/Sec: 0.44,
 [2026-01-25 21:26:59] (step=0002517) Train Loss mse: 0.0000, Train Loss ce: 0.5243, Train Steps/Sec: 0.47,
 [2026-01-25 21:27:01] (step=0002518) Train Loss mse: 0.0000, Train Loss ce: 0.4966, Train Steps/Sec: 0.46,
@@ -3731,6 +3738,19 @@ ce_avg: 0.9615826606750488, mse_avg: 0.0
 [2026-01-25 22:14:20] (step=0003515) Train Loss mse: 0.0000, Train Loss ce: 0.5115, Train Steps/Sec: 0.38,
 [2026-01-25 22:14:23] (step=0003516) Train Loss mse: 0.0000, Train Loss ce: 0.4846, Train Steps/Sec: 0.39,
 [2026-01-25 22:14:25] (step=0003517) Train Loss mse: 0.0000, Train Loss ce: 0.4703, Train Steps/Sec: 0.45,
+[2026-01-25 22:14:28] (step=0003518) Train Loss mse: 0.0000, Train Loss ce: 0.4743, Train Steps/Sec: 0.44,
+[2026-01-25 22:14:30] (step=0003519) Train Loss mse: 0.0000, Train Loss ce: 0.5065, Train Steps/Sec: 0.39,
+[2026-01-25 22:14:32] (step=0003520) Train Loss mse: 0.0000, Train Loss ce: 0.4738, Train Steps/Sec: 0.42,
+[2026-01-25 22:14:35] (step=0003521) Train Loss mse: 0.0000, Train Loss ce: 0.4873, Train Steps/Sec: 0.35,
+[2026-01-25 22:14:39] (step=0003522) Train Loss mse: 0.0000, Train Loss ce: 0.5458, Train Steps/Sec: 0.28,
+[2026-01-25 22:14:42] (step=0003523) Train Loss mse: 0.0000, Train Loss ce: 0.5189, Train Steps/Sec: 0.32,
+[2026-01-25 22:14:44] (step=0003524) Train Loss mse: 0.0000, Train Loss ce: 0.4763, Train Steps/Sec: 0.54,
+[2026-01-25 22:14:46] (step=0003525) Train Loss mse: 0.0000, Train Loss ce: 0.4845, Train Steps/Sec: 0.41,
+[2026-01-25 22:14:49] (step=0003526) Train Loss mse: 0.0000, Train Loss ce: 0.4723, Train Steps/Sec: 0.41,
+[2026-01-25 22:14:51] (step=0003527) Train Loss mse: 0.0000, Train Loss ce: 0.5612, Train Steps/Sec: 0.37,
+[2026-01-25 22:14:55] (step=0003528) Train Loss mse: 0.0000, Train Loss ce: 0.5088, Train Steps/Sec: 0.30,
+[2026-01-25 22:14:58] (step=0003529) Train Loss mse: 0.0000, Train Loss ce: 0.4618, Train Steps/Sec: 0.36,
+[2026-01-25 22:15:01] (step=0003530) Train Loss mse: 0.0000, Train Loss ce: 0.5275, Train Steps/Sec: 0.30,
 base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step4000
 Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
 [eval debug] first 3 batch fingerprints:
@@ -3745,38 +3765,6 @@ Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_count
 fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
 fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
 ce_avg: 0.8653228282928467, mse_avg: 0.0
-[2026-01-25 22:14:28] (step=0003518) Train Loss mse: 0.0000, Train Loss ce: 0.4743, Train Steps/Sec: 0.44,
-[2026-01-25 22:14:30] (step=0003519) Train Loss mse: 0.0000, Train Loss ce: 0.5065, Train Steps/Sec: 0.39,
-[2026-01-25 22:14:32] (step=0003520) Train Loss mse: 0.0000, Train Loss ce: 0.4738, Train Steps/Sec: 0.42,
-[2026-01-25 22:14:35] (step=0003521) Train Loss mse: 0.0000, Train Loss ce: 0.4873, Train Steps/Sec: 0.35,
-[2026-01-25 22:14:39] (step=0003522) Train Loss mse: 0.0000, Train Loss ce: 0.5458, Train Steps/Sec: 0.28,
-[2026-01-25 22:14:42] (step=0003523) Train Loss mse: 0.0000, Train Loss ce: 0.5189, Train Steps/Sec: 0.32,
-[2026-01-25 22:14:44] (step=0003524) Train Loss mse: 0.0000, Train Loss ce: 0.4763, Train Steps/Sec: 0.54,
-[2026-01-25 22:14:46] (step=0003525) Train Loss mse: 0.0000, Train Loss ce: 0.4845, Train Steps/Sec: 0.41,
-[2026-01-25 22:14:49] (step=0003526) Train Loss mse: 0.0000, Train Loss ce: 0.4723, Train Steps/Sec: 0.41,
-[2026-01-25 22:14:51] (step=0003527) Train Loss mse: 0.0000, Train Loss ce: 0.5612, Train Steps/Sec: 0.37,
-[2026-01-25 22:14:55] (step=0003528) Train Loss mse: 0.0000, Train Loss ce: 0.5088, Train Steps/Sec: 0.30,
-[2026-01-25 22:14:58] (step=0003529) Train Loss mse: 0.0000, Train Loss ce: 0.4618, Train Steps/Sec: 0.36,
-[2026-01-25 22:15:01] (step=0003530) Train Loss mse: 0.0000, Train Loss ce: 0.5275, Train Steps/Sec: 0.30,
-[2026-01-25 22:15:03] (step=0003531) Train Loss mse: 0.0000, Train Loss ce: 0.5246, Train Steps/Sec: 0.43,
-[2026-01-25 22:15:06] (step=0003532) Train Loss mse: 0.0000, Train Loss ce: 0.4948, Train Steps/Sec: 0.34,
-[2026-01-25 22:15:08] (step=0003533) Train Loss mse: 0.0000, Train Loss ce: 0.4725, Train Steps/Sec: 0.61,
-[2026-01-25 22:15:10] (step=0003534) Train Loss mse: 0.0000, Train Loss ce: 0.4692, Train Steps/Sec: 0.52,
-[2026-01-25 22:15:13] (step=0003535) Train Loss mse: 0.0000, Train Loss ce: 0.5502, Train Steps/Sec: 0.30,
-[2026-01-25 22:15:17] (step=0003536) Train Loss mse: 0.0000, Train Loss ce: 0.5425, Train Steps/Sec: 0.27,
-[2026-01-25 22:15:19] (step=0003537) Train Loss mse: 0.0000, Train Loss ce: 0.4866, Train Steps/Sec: 0.39,
-[2026-01-25 22:15:24] (step=0003538) Train Loss mse: 0.0000, Train Loss ce: 0.5266, Train Steps/Sec: 0.23,
-[2026-01-25 22:15:26] (step=0003539) Train Loss mse: 0.0000, Train Loss ce: 0.5203, Train Steps/Sec: 0.47,
-[2026-01-25 22:15:28] (step=0003540) Train Loss mse: 0.0000, Train Loss ce: 0.4538, Train Steps/Sec: 0.49,
-[2026-01-25 22:15:31] (step=0003541) Train Loss mse: 0.0000, Train Loss ce: 0.4993, Train Steps/Sec: 0.32,
-[2026-01-25 22:15:34] (step=0003542) Train Loss mse: 0.0000, Train Loss ce: 0.5085, Train Steps/Sec: 0.34,
-[2026-01-25 22:15:36] (step=0003543) Train Loss mse: 0.0000, Train Loss ce: 0.4943, Train Steps/Sec: 0.49,
-[2026-01-25 22:15:38] (step=0003544) Train Loss mse: 0.0000, Train Loss ce: 0.5110, Train Steps/Sec: 0.44,
-[2026-01-25 22:15:41] (step=0003545) Train Loss mse: 0.0000, Train Loss ce: 0.4802, Train Steps/Sec: 0.32,
-[2026-01-25 22:15:44] (step=0003546) Train Loss mse: 0.0000, Train Loss ce: 0.4802, Train Steps/Sec: 0.39,
-[2026-01-25 22:15:46] (step=0003547) Train Loss mse: 0.0000, Train Loss ce: 0.4930, Train Steps/Sec: 0.39,
-[2026-01-25 22:15:50] (step=0003548) Train Loss mse: 0.0000, Train Loss ce: 0.5221, Train Steps/Sec: 0.31,
-[2026-01-25 22:15:52] (step=0003549) Train Loss mse: 0.0000, Train Loss ce: 0.4530, Train Steps/Sec: 0.41,
 [2026-01-25 22:15:54] (step=0003550) Train Loss mse: 0.0000, Train Loss ce: 0.4626, Train Steps/Sec: 0.57,
 [2026-01-25 22:15:56] (step=0003551) Train Loss mse: 0.0000, Train Loss ce: 0.4705, Train Steps/Sec: 0.49,
 [2026-01-25 22:15:58] (step=0003552) Train Loss mse: 0.0000, Train Loss ce: 0.4905, Train Steps/Sec: 0.44,
@@ -5158,13 +5146,6 @@ ce_avg: 0.8653228282928467, mse_avg: 0.0
 [2026-01-25 23:21:47] (step=0004928) Train Loss mse: 0.0000, Train Loss ce: 0.4813, Train Steps/Sec: 0.40,
 [2026-01-25 23:21:51] (step=0004929) Train Loss mse: 0.0000, Train Loss ce: 0.5108, Train Steps/Sec: 0.28,
 [2026-01-25 23:21:54] (step=0004930) Train Loss mse: 0.0000, Train Loss ce: 0.5030, Train Steps/Sec: 0.32,
-base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step5000
-Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
-[eval debug] first 3 batch fingerprints:
-fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
-ce_avg: 0.847433865070343, mse_avg: 0.0
 [2026-01-25 23:21:57] (step=0004931) Train Loss mse: 0.0000, Train Loss ce: 0.5101, Train Steps/Sec: 0.29,
 [2026-01-25 23:22:01] (step=0004932) Train Loss mse: 0.0000, Train Loss ce: 0.5305, Train Steps/Sec: 0.25,
 [2026-01-25 23:22:04] (step=0004933) Train Loss mse: 0.0000, Train Loss ce: 0.4626, Train Steps/Sec: 0.36,
@@ -5238,4 +5219,11 @@ ce_avg: 0.847433865070343, mse_avg: 0.0
 [2026-01-25 23:25:21] Saving checkpoint to /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/0005000.
 /opt/conda/lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
 warnings.warn(
-[2026-01-25 23:27:59] Done!
+[2026-01-25 23:27:59] Done!
+base_dir is /dev/shm/models/checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins/eval_used_rows, step_tag is checkpoints_vlm_gym_counting_mark_all_one_image_lr2e_5_ce_no_mse_ins_step5000
+Preparing Dataset vlm_gym_counting_mark_all_celoss_no_mse_evalonce/vlm_gym_counting_mark_all_val
+[eval debug] first 3 batch fingerprints:
+fp[0]: [{'data_indexes': [0], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[1]: [{'data_indexes': [8], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+fp[2]: [{'data_indexes': [16], 'worker_id': 0, 'dataset_name': 'vlm_gym_counting_mark_all_celoss_no_mse_evalonce'}]
+ce_avg: 0.847433865070343, mse_avg: 0.0
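
The FutureWarning emitted during the final checkpoint save names its own replacement: torch.distributed.checkpoint.state_dict.get_state_dict(). A minimal sketch of the suggested migration, assuming a full state dict gathered for a plain torch.save; the helper name and save path are illustrative:

    import torch
    from torch.distributed.checkpoint.state_dict import get_state_dict

    def save_checkpoint(model, optimizer, path: str) -> None:
        # Replaces FSDP.state_dict_type()/FSDP.set_state_dict_type(); per the
        # warning above it supports FSDP1, FSDP2 and DDP uniformly.
        model_sd, optim_sd = get_state_dict(model, optimizer)
        torch.save({"model": model_sd, "optim": optim_sd}, path)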