Auto-sync checkpoint during training
checkpoint-6000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d53d1a795b8a7c89fef9bd0301a463deaefffebde9184cd27499c0b6c978e8cb
+size 1141963947
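The added `checkpoint-6000.pt` is a Git LFS pointer, not the checkpoint itself: the ~1.1 GB weight file lives in LFS storage and is addressed by the SHA-256 OID above. As a minimal illustration (the `parse_lfs_pointer` helper below is hypothetical, not part of this repo), a pointer file is just newline-separated `key value` pairs that can be parsed directly:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space only.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:d53d1a795b8a7c89fef9bd0301a463deaefffebde9184cd27499c0b6c978e8cb
size 1141963947
"""
info = parse_lfs_pointer(pointer)
# The "size" field is the byte length of the real blob (~1.1 GB here);
# "oid" names the digest used to fetch it from LFS storage.
assert info["size"] == "1141963947"
```

After `git lfs pull`, the pointer on disk is replaced by a file whose SHA-256 digest matches the `oid` field.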
log/log-train-2026-01-13-10-02-58
CHANGED
@@ -1212,3 +1212,298 @@
 2026-01-13 10:43:40,196 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.08 vs. limit=2.0
 2026-01-13 10:43:45,521 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.92 vs. limit=2.0
 2026-01-13 10:43:46,862 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.27 vs. limit=2.0
+2026-01-13 10:43:55,393 INFO [train.py:895] Epoch 1, batch 5450, loss[loss=1.233, simple_loss=0.7633, pruned_loss=0.8518, over 1237.00 frames. ], tot_loss[loss=1.195, simple_loss=0.7529, pruned_loss=0.8183, over 261606.08 frames. ], batch size: 3, lr: 4.11e-02, grad_scale: 16.0
+2026-01-13 10:44:01,438 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.74 vs. limit=2.0
+2026-01-13 10:44:01,727 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=9.18 vs. limit=2.0
+2026-01-13 10:44:13,024 INFO [zipformer.py:2441] attn_weights_entropy = tensor([6.0637, 5.9771, 5.9874, 6.0381, 6.0887, 6.0713, 6.0927, 6.0920],
+        device='cuda:0'), covar=tensor([0.0003, 0.0009, 0.0004, 0.0005, 0.0004, 0.0004, 0.0006, 0.0006],
+        device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0028, 0.0029, 0.0027, 0.0027, 0.0029, 0.0030, 0.0032],
+        device='cuda:0'), out_proj_covar=tensor([2.1924e-05, 2.3730e-05, 2.4376e-05, 2.1508e-05, 2.0909e-05, 2.0971e-05,
+        2.2425e-05, 2.3647e-05], device='cuda:0')
+2026-01-13 10:44:13,676 INFO [train.py:895] Epoch 1, batch 5500, loss[loss=1.475, simple_loss=0.9367, pruned_loss=1.007, over 1329.00 frames. ], tot_loss[loss=1.196, simple_loss=0.7525, pruned_loss=0.8195, over 261379.65 frames. ], batch size: 10, lr: 4.10e-02, grad_scale: 16.0
+2026-01-13 10:44:14,391 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.336e+02 1.938e+02 2.463e+02 3.217e+02 6.132e+02, threshold=4.926e+02, percent-clipped=2.0
+2026-01-13 10:44:17,924 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=7.21 vs. limit=2.0
+2026-01-13 10:44:18,306 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.56 vs. limit=2.0
+2026-01-13 10:44:18,420 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.6139, 4.0170, 3.5528, 4.7373, 3.6257, 1.7031, 4.0241, 4.5235],
+        device='cuda:0'), covar=tensor([0.0267, 0.0243, 0.0154, 0.0169, 0.0891, 0.0274, 0.0174, 0.0058],
+        device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0022, 0.0020, 0.0020, 0.0019, 0.0022, 0.0023, 0.0022],
+        device='cuda:0'), out_proj_covar=tensor([1.6101e-05, 1.5984e-05, 1.4719e-05, 1.4758e-05, 1.6021e-05, 1.5498e-05,
+        1.5356e-05, 1.4727e-05], device='cuda:0')
+2026-01-13 10:44:20,398 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.99 vs. limit=2.0
+2026-01-13 10:44:21,113 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.96 vs. limit=2.0
+2026-01-13 10:44:23,271 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=60.44 vs. limit=5.0
+2026-01-13 10:44:24,305 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=10.67 vs. limit=2.0
+2026-01-13 10:44:29,081 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=7.68 vs. limit=2.0
+2026-01-13 10:44:29,793 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.96 vs. limit=2.0
+2026-01-13 10:44:29,795 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=33.95 vs. limit=5.0
+2026-01-13 10:44:31,903 INFO [train.py:895] Epoch 1, batch 5550, loss[loss=1.077, simple_loss=0.6569, pruned_loss=0.7489, over 1468.00 frames. ], tot_loss[loss=1.183, simple_loss=0.7442, pruned_loss=0.8112, over 261725.53 frames. ], batch size: 4, lr: 4.09e-02, grad_scale: 16.0
+2026-01-13 10:44:32,727 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4515, 2.3217, 2.4584, 2.4658, 2.4747, 2.4568, 2.0916, 1.9878],
+        device='cuda:0'), covar=tensor([0.0140, 0.0129, 0.0134, 0.0174, 0.0118, 0.0081, 0.0143, 0.0128],
+        device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0018, 0.0017, 0.0017, 0.0018, 0.0016, 0.0019, 0.0019],
+        device='cuda:0'), out_proj_covar=tensor([1.3129e-05, 1.2790e-05, 1.2928e-05, 1.2873e-05, 1.3213e-05, 1.2706e-05,
+        1.3622e-05, 1.4262e-05], device='cuda:0')
+2026-01-13 10:44:33,419 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.82 vs. limit=2.0
+2026-01-13 10:44:35,085 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.72 vs. limit=2.0
+2026-01-13 10:44:38,105 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.5403, 3.5596, 3.4965, 3.5626, 3.5206, 3.5557, 3.5649, 3.5724],
+        device='cuda:0'), covar=tensor([0.0032, 0.0045, 0.0051, 0.0035, 0.0028, 0.0044, 0.0037, 0.0041],
+        device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0029, 0.0029, 0.0027, 0.0027, 0.0030, 0.0031, 0.0033],
+        device='cuda:0'), out_proj_covar=tensor([2.2362e-05, 2.3908e-05, 2.4379e-05, 2.1674e-05, 2.1112e-05, 2.1321e-05,
+        2.2726e-05, 2.4151e-05], device='cuda:0')
+2026-01-13 10:44:47,127 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.86 vs. limit=2.0
+2026-01-13 10:44:50,375 INFO [train.py:895] Epoch 1, batch 5600, loss[loss=0.9632, simple_loss=0.6192, pruned_loss=0.6536, over 1247.00 frames. ], tot_loss[loss=1.176, simple_loss=0.7404, pruned_loss=0.8061, over 260671.61 frames. ], batch size: 4, lr: 4.08e-02, grad_scale: 16.0
+2026-01-13 10:44:51,093 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.572e+02 1.859e+02 2.456e+02 4.035e+02, threshold=3.718e+02, percent-clipped=0.0
+2026-01-13 10:45:03,733 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=18.50 vs. limit=2.0
+2026-01-13 10:45:04,732 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.5316, 4.3607, 4.4749, 4.5316, 4.5053, 4.5445, 4.5308, 4.5505],
+        device='cuda:0'), covar=tensor([0.0008, 0.0011, 0.0010, 0.0008, 0.0008, 0.0009, 0.0010, 0.0010],
+        device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0028, 0.0030, 0.0027, 0.0028, 0.0030, 0.0031, 0.0033],
+        device='cuda:0'), out_proj_covar=tensor([2.2200e-05, 2.3972e-05, 2.4290e-05, 2.1463e-05, 2.1389e-05, 2.1292e-05,
+        2.2449e-05, 2.3983e-05], device='cuda:0')
+2026-01-13 10:45:08,325 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=5650.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 10:45:08,567 INFO [train.py:895] Epoch 1, batch 5650, loss[loss=1.428, simple_loss=0.9122, pruned_loss=0.9715, over 1374.00 frames. ], tot_loss[loss=1.173, simple_loss=0.738, pruned_loss=0.804, over 261235.17 frames. ], batch size: 8, lr: 4.07e-02, grad_scale: 16.0
+2026-01-13 10:45:21,214 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.21 vs. limit=2.0
+2026-01-13 10:45:24,458 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.3223, 3.6488, 2.9405, 3.7710, 3.7409, 3.4905, 3.1038, 3.7724],
+        device='cuda:0'), covar=tensor([0.0008, 0.0007, 0.0012, 0.0004, 0.0006, 0.0008, 0.0010, 0.0005],
+        device='cuda:0'), in_proj_covar=tensor([0.0031, 0.0030, 0.0033, 0.0029, 0.0029, 0.0028, 0.0030, 0.0028],
+        device='cuda:0'), out_proj_covar=tensor([1.7678e-05, 1.7906e-05, 2.0256e-05, 1.8911e-05, 1.8101e-05, 1.8790e-05,
+        1.8714e-05, 1.7033e-05], device='cuda:0')
+2026-01-13 10:45:26,782 INFO [train.py:895] Epoch 1, batch 5700, loss[loss=1.17, simple_loss=0.7401, pruned_loss=0.8003, over 1357.00 frames. ], tot_loss[loss=1.176, simple_loss=0.7386, pruned_loss=0.8064, over 262257.21 frames. ], batch size: 4, lr: 4.06e-02, grad_scale: 16.0
+2026-01-13 10:45:27,446 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.802e+02 2.422e+02 3.171e+02 5.255e+02, threshold=4.844e+02, percent-clipped=14.0
+2026-01-13 10:45:30,228 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=18.34 vs. limit=2.0
+2026-01-13 10:45:30,472 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=5711.0, num_to_drop=1, layers_to_drop={3}
+2026-01-13 10:45:35,775 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.78 vs. limit=2.0
+2026-01-13 10:45:36,077 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=79.17 vs. limit=5.0
+2026-01-13 10:45:36,483 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=8.25 vs. limit=2.0
+2026-01-13 10:45:38,095 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.5529, 4.3460, 1.3876, 5.0529, 3.2468, 1.7785, 1.9749, 4.9970],
+        device='cuda:0'), covar=tensor([0.0645, 0.0231, 0.0621, 0.0360, 0.1390, 0.0476, 0.0531, 0.0063],
+        device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0025, 0.0024, 0.0023, 0.0020, 0.0025, 0.0025, 0.0025],
+        device='cuda:0'), out_proj_covar=tensor([1.8241e-05, 1.7061e-05, 1.6059e-05, 1.6559e-05, 1.7776e-05, 1.7492e-05,
+        1.6709e-05, 1.6565e-05], device='cuda:0')
+2026-01-13 10:45:44,857 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.57 vs. limit=2.0
+2026-01-13 10:45:44,941 INFO [train.py:895] Epoch 1, batch 5750, loss[loss=1.538, simple_loss=1.003, pruned_loss=1.037, over 1194.00 frames. ], tot_loss[loss=1.179, simple_loss=0.7419, pruned_loss=0.8077, over 262754.74 frames. ], batch size: 6, lr: 4.05e-02, grad_scale: 16.0
+2026-01-13 10:45:46,490 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=5755.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:45:55,506 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.31 vs. limit=2.0
+2026-01-13 10:45:55,940 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.4970, 3.2731, 1.5954, 4.1236, 3.0978, 1.6762, 2.2032, 3.7660],
+        device='cuda:0'), covar=tensor([0.0115, 0.0043, 0.0101, 0.0056, 0.0061, 0.0115, 0.0068, 0.0033],
+        device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0027, 0.0025, 0.0024, 0.0020, 0.0026, 0.0024, 0.0025],
+        device='cuda:0'), out_proj_covar=tensor([1.7479e-05, 1.7219e-05, 1.6227e-05, 1.6755e-05, 1.7242e-05, 1.8350e-05,
+        1.6438e-05, 1.6306e-05], device='cuda:0')
+2026-01-13 10:46:03,327 INFO [train.py:895] Epoch 1, batch 5800, loss[loss=1.442, simple_loss=0.882, pruned_loss=1.001, over 1432.00 frames. ], tot_loss[loss=1.187, simple_loss=0.7478, pruned_loss=0.8129, over 264063.42 frames. ], batch size: 7, lr: 4.04e-02, grad_scale: 16.0
+2026-01-13 10:46:04,075 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.270e+02 1.646e+02 2.014e+02 2.902e+02 8.004e+02, threshold=4.028e+02, percent-clipped=9.0
+2026-01-13 10:46:08,990 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=5816.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:46:14,728 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.83 vs. limit=2.0
+2026-01-13 10:46:18,860 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=9.94 vs. limit=2.0
+2026-01-13 10:46:21,965 INFO [train.py:895] Epoch 1, batch 5850, loss[loss=1.358, simple_loss=0.858, pruned_loss=0.9286, over 1354.00 frames. ], tot_loss[loss=1.186, simple_loss=0.7456, pruned_loss=0.8135, over 264132.81 frames. ], batch size: 8, lr: 4.03e-02, grad_scale: 16.0
+2026-01-13 10:46:24,720 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=5858.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:46:26,192 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.99 vs. limit=2.0
+2026-01-13 10:46:31,360 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.64 vs. limit=2.0
+2026-01-13 10:46:31,741 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.90 vs. limit=2.0
+2026-01-13 10:46:39,988 INFO [train.py:895] Epoch 1, batch 5900, loss[loss=1.307, simple_loss=0.8173, pruned_loss=0.8981, over 1408.00 frames. ], tot_loss[loss=1.184, simple_loss=0.7437, pruned_loss=0.8124, over 264744.32 frames. ], batch size: 5, lr: 4.02e-02, grad_scale: 16.0
+2026-01-13 10:46:40,658 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.408e+02 2.212e+02 2.992e+02 3.668e+02 7.538e+02, threshold=5.984e+02, percent-clipped=15.0
+2026-01-13 10:46:43,622 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7618, 2.6409, 2.7274, 2.5476, 3.2738, 2.8267, 2.7103, 2.8670],
+        device='cuda:0'), covar=tensor([0.0068, 0.0075, 0.0086, 0.0074, 0.0062, 0.0076, 0.0061, 0.0067],
+        device='cuda:0'), in_proj_covar=tensor([0.0030, 0.0029, 0.0028, 0.0029, 0.0032, 0.0029, 0.0029, 0.0030],
+        device='cuda:0'), out_proj_covar=tensor([2.8501e-05, 3.0150e-05, 2.7701e-05, 2.8411e-05, 2.9625e-05, 2.8997e-05,
+        2.9005e-05, 2.8811e-05], device='cuda:0')
+2026-01-13 10:46:43,833 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.75 vs. limit=2.0
+2026-01-13 10:46:46,618 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=5919.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:46:47,047 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=43.51 vs. limit=5.0
+2026-01-13 10:46:48,647 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=5.16 vs. limit=2.0
+2026-01-13 10:46:48,915 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.02 vs. limit=2.0
+2026-01-13 10:46:49,934 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=10.67 vs. limit=2.0
+2026-01-13 10:46:51,064 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=25.01 vs. limit=5.0
+2026-01-13 10:46:58,012 INFO [train.py:895] Epoch 1, batch 5950, loss[loss=1.057, simple_loss=0.66, pruned_loss=0.7271, over 1406.00 frames. ], tot_loss[loss=1.18, simple_loss=0.7415, pruned_loss=0.8092, over 264054.24 frames. ], batch size: 5, lr: 4.01e-02, grad_scale: 16.0
+2026-01-13 10:47:00,702 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.89 vs. limit=2.0
+2026-01-13 10:47:11,493 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=5.36 vs. limit=2.0
+2026-01-13 10:47:12,000 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=5990.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:47:15,774 INFO [checkpoint.py:74] Saving checkpoint to /kaggle/working/amharic_training/exp_amharic_streaming/checkpoint-6000.pt
+2026-01-13 10:47:17,992 INFO [train.py:895] Epoch 1, batch 6000, loss[loss=1.39, simple_loss=0.8993, pruned_loss=0.9401, over 1269.00 frames. ], tot_loss[loss=1.175, simple_loss=0.7397, pruned_loss=0.8053, over 262158.01 frames. ], batch size: 10, lr: 4.00e-02, grad_scale: 16.0
+2026-01-13 10:47:18,646 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.114e+02 1.768e+02 2.060e+02 2.599e+02 4.314e+02, threshold=4.121e+02, percent-clipped=0.0
+2026-01-13 10:47:19,806 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=6006.0, num_to_drop=1, layers_to_drop={2}
+2026-01-13 10:47:20,970 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.42 vs. limit=2.0
+2026-01-13 10:47:21,381 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=6.44 vs. limit=2.0
+2026-01-13 10:47:23,243 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=30.55 vs. limit=5.0
+2026-01-13 10:47:27,745 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.35 vs. limit=2.0
+2026-01-13 10:47:32,057 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.95 vs. limit=2.0
+2026-01-13 10:47:32,059 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=9.12 vs. limit=2.0
+2026-01-13 10:47:33,028 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6043.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:47:35,938 INFO [train.py:895] Epoch 1, batch 6050, loss[loss=1.399, simple_loss=0.8755, pruned_loss=0.9616, over 1491.00 frames. ], tot_loss[loss=1.173, simple_loss=0.7387, pruned_loss=0.8033, over 263075.26 frames. ], batch size: 4, lr: 3.99e-02, grad_scale: 16.0
+2026-01-13 10:47:36,051 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6051.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:47:46,376 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.73 vs. limit=2.0
+2026-01-13 10:47:51,413 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.98 vs. limit=2.0
+2026-01-13 10:47:54,237 INFO [train.py:895] Epoch 1, batch 6100, loss[loss=1.014, simple_loss=0.6598, pruned_loss=0.6842, over 1399.00 frames. ], tot_loss[loss=1.167, simple_loss=0.736, pruned_loss=0.7988, over 262796.31 frames. ], batch size: 5, lr: 3.98e-02, grad_scale: 16.0
+2026-01-13 10:47:54,904 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.137e+02 1.765e+02 2.176e+02 2.638e+02 8.058e+02, threshold=4.352e+02, percent-clipped=5.0
+2026-01-13 10:47:55,365 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6104.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:47:57,566 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=51.83 vs. limit=5.0
+2026-01-13 10:47:57,731 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=6111.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:01,848 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.6965, 3.6966, 3.6952, 3.7017, 3.5813, 3.6991, 3.6939, 3.5537],
+        device='cuda:0'), covar=tensor([0.0024, 0.0020, 0.0022, 0.0031, 0.0059, 0.0025, 0.0025, 0.0074],
+        device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0026, 0.0025, 0.0027, 0.0027, 0.0026, 0.0026, 0.0026],
+        device='cuda:0'), out_proj_covar=tensor([2.2541e-05, 2.6385e-05, 2.6011e-05, 2.4045e-05, 2.5693e-05, 2.4897e-05,
+        2.4435e-05, 2.5255e-05], device='cuda:0')
+2026-01-13 10:48:02,319 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=66.35 vs. limit=5.0
+2026-01-13 10:48:05,275 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=10.93 vs. limit=2.0
+2026-01-13 10:48:06,227 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6135.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:06,346 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.42 vs. limit=2.0
+2026-01-13 10:48:08,649 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=34.58 vs. limit=5.0
+2026-01-13 10:48:11,301 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.22 vs. limit=2.0
+2026-01-13 10:48:11,844 INFO [train.py:895] Epoch 1, batch 6150, loss[loss=1.381, simple_loss=0.894, pruned_loss=0.934, over 1191.00 frames. ], tot_loss[loss=1.167, simple_loss=0.7374, pruned_loss=0.7978, over 260844.26 frames. ], batch size: 13, lr: 3.97e-02, grad_scale: 16.0
+2026-01-13 10:48:13,175 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.3652, 2.9483, 0.8889, 3.0498, 2.2135, 3.8711, 2.9194, 2.2863],
+        device='cuda:0'), covar=tensor([0.0117, 0.0212, 0.0117, 0.0089, 0.0251, 0.0089, 0.0071, 0.0144],
+        device='cuda:0'), in_proj_covar=tensor([0.0043, 0.0044, 0.0038, 0.0042, 0.0045, 0.0038, 0.0041, 0.0039],
+        device='cuda:0'), out_proj_covar=tensor([2.9597e-05, 3.3293e-05, 2.9375e-05, 2.9621e-05, 3.2809e-05, 3.2168e-05,
+        2.8364e-05, 2.7874e-05], device='cuda:0')
+2026-01-13 10:48:14,402 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=13.97 vs. limit=2.0
+2026-01-13 10:48:15,414 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=11.10 vs. limit=2.0
+2026-01-13 10:48:18,802 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6170.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:20,787 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=5.76 vs. limit=2.0
+2026-01-13 10:48:26,181 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=6.81 vs. limit=2.0
+2026-01-13 10:48:28,190 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6196.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:30,059 INFO [train.py:895] Epoch 1, batch 6200, loss[loss=1.292, simple_loss=0.8161, pruned_loss=0.8843, over 1251.00 frames. ], tot_loss[loss=1.166, simple_loss=0.738, pruned_loss=0.7969, over 260652.69 frames. ], batch size: 4, lr: 3.96e-02, grad_scale: 16.0
+2026-01-13 10:48:30,739 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.268e+02 1.632e+02 1.895e+02 2.286e+02 4.417e+02, threshold=3.790e+02, percent-clipped=1.0
+2026-01-13 10:48:30,863 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6203.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:33,879 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6211.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:34,943 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=6214.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:39,120 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=51.37 vs. limit=5.0
+2026-01-13 10:48:39,872 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.47 vs. limit=2.0
+2026-01-13 10:48:40,545 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=13.35 vs. limit=2.0
+2026-01-13 10:48:41,119 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6231.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:47,751 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.31 vs. limit=2.0
+2026-01-13 10:48:48,251 INFO [train.py:895] Epoch 1, batch 6250, loss[loss=1.204, simple_loss=0.7548, pruned_loss=0.8264, over 1278.00 frames. ], tot_loss[loss=1.162, simple_loss=0.7365, pruned_loss=0.7941, over 260632.18 frames. ], batch size: 4, lr: 3.95e-02, grad_scale: 16.0
+2026-01-13 10:48:50,589 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.72 vs. limit=2.0
+2026-01-13 10:48:50,815 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.3779, 3.3601, 3.3315, 3.3946, 3.2335, 3.4044, 3.3205, 3.1632],
+        device='cuda:0'), covar=tensor([0.0322, 0.0127, 0.0222, 0.0150, 0.0377, 0.0143, 0.0202, 0.0415],
+        device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0027, 0.0024, 0.0025, 0.0027, 0.0026, 0.0026, 0.0025],
+        device='cuda:0'), out_proj_covar=tensor([2.3765e-05, 2.7275e-05, 2.6109e-05, 2.3881e-05, 2.5733e-05, 2.5291e-05,
+        2.4247e-05, 2.4841e-05], device='cuda:0')
+2026-01-13 10:48:51,064 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=4.43 vs. limit=2.0
+2026-01-13 10:48:53,015 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6264.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:53,773 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6266.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:48:55,903 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6272.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:49:00,635 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.24 vs. limit=2.0
+2026-01-13 10:49:02,640 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=9.52 vs. limit=2.0
+2026-01-13 10:49:05,827 INFO [train.py:895] Epoch 1, batch 6300, loss[loss=1.141, simple_loss=0.7064, pruned_loss=0.7876, over 1273.00 frames. ], tot_loss[loss=1.161, simple_loss=0.7362, pruned_loss=0.7925, over 259893.79 frames. ], batch size: 3, lr: 3.94e-02, grad_scale: 16.0
+2026-01-13 10:49:06,548 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.229e+02 1.753e+02 2.325e+02 2.900e+02 5.125e+02, threshold=4.649e+02, percent-clipped=6.0
+2026-01-13 10:49:07,651 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=6306.0, num_to_drop=1, layers_to_drop={2}
+2026-01-13 10:49:10,220 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.18 vs. limit=2.0
+2026-01-13 10:49:12,300 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.07 vs. limit=2.0
+2026-01-13 10:49:15,064 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=6327.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:49:16,512 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.59 vs. limit=2.0
+2026-01-13 10:49:17,979 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.14 vs. limit=2.0
+2026-01-13 10:49:19,470 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.97 vs. limit=2.0
+2026-01-13 10:49:21,731 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=6346.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:49:23,456 INFO [train.py:895] Epoch 1, batch 6350, loss[loss=1.085, simple_loss=0.6951, pruned_loss=0.7373, over 1380.00 frames. ], tot_loss[loss=1.157, simple_loss=0.7321, pruned_loss=0.791, over 261693.20 frames. ], batch size: 8, lr: 3.93e-02, grad_scale: 16.0
+2026-01-13 10:49:24,019 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.06 vs. limit=2.0
+2026-01-13 10:49:24,536 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=6354.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 10:49:25,322 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=8.84 vs. limit=2.0
+2026-01-13 10:49:26,388 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.83 vs. limit=2.0
+2026-01-13 10:49:29,715 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.0025, 2.5716, 2.9675, 2.5703, 2.9240, 2.8299, 3.0027, 2.8651],
+        device='cuda:0'), covar=tensor([0.0029, 0.0047, 0.0035, 0.0077, 0.0025, 0.0025, 0.0036, 0.0062],
+        device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0017, 0.0017, 0.0019, 0.0017, 0.0017, 0.0018, 0.0019],
+        device='cuda:0'), out_proj_covar=tensor([1.1581e-05, 1.2167e-05, 1.1496e-05, 1.1918e-05, 1.0593e-05, 1.2107e-05,
+        1.1736e-05, 1.2853e-05], device='cuda:0')
+2026-01-13 10:49:34,474 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.26 vs. limit=2.0
+2026-01-13 10:49:39,976 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=6399.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 10:49:40,704 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.08 vs. limit=2.0
+2026-01-13 10:49:40,851 INFO [train.py:895] Epoch 1, batch 6400, loss[loss=1.077, simple_loss=0.6968, pruned_loss=0.7285, over 1292.00 frames. ], tot_loss[loss=1.152, simple_loss=0.7311, pruned_loss=0.7868, over 260289.51 frames. ], batch size: 4, lr: 3.92e-02, grad_scale: 16.0
+2026-01-13 10:49:40,851 INFO [train.py:920] Computing validation loss
+2026-01-13 10:49:42,159 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4972, 2.6226, 2.4392, 2.6692, 2.7100, 2.3333, 2.6378, 2.7103],
+        device='cuda:0'), covar=tensor([0.0049, 0.0055, 0.0065, 0.0055, 0.0053, 0.0102, 0.0061, 0.0073],
+        device='cuda:0'), in_proj_covar=tensor([0.0033, 0.0031, 0.0033, 0.0032, 0.0032, 0.0031, 0.0032, 0.0031],
+        device='cuda:0'), out_proj_covar=tensor([1.9123e-05, 1.8210e-05, 1.9780e-05, 1.8637e-05, 1.8430e-05, 1.8939e-05,
+        1.9931e-05, 1.7120e-05], device='cuda:0')
+2026-01-13 10:49:42,767 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7364, 2.7309, 2.5040, 2.3852, 2.8122, 2.5358, 2.6896, 2.6805],
+        device='cuda:0'), covar=tensor([0.0066, 0.0066, 0.0078, 0.0090, 0.0055, 0.0074, 0.0076, 0.0058],
+        device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0025, 0.0025, 0.0023, 0.0026, 0.0025, 0.0027, 0.0026],
+        device='cuda:0'), out_proj_covar=tensor([2.5692e-05, 2.5495e-05, 2.4682e-05, 2.4473e-05, 2.6089e-05, 2.5304e-05,
+        2.6426e-05, 2.5537e-05], device='cuda:0')
+2026-01-13 10:49:48,596 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.3225, 3.4298, 2.8358, 3.3780, 2.0040, 3.3337, 1.8626, 2.7080],
+        device='cuda:0'), covar=tensor([0.0037, 0.0021, 0.0027, 0.0028, 0.0061, 0.0020, 0.0042, 0.0035],
+        device='cuda:0'), in_proj_covar=tensor([0.0023, 0.0023, 0.0022, 0.0024, 0.0024, 0.0023, 0.0024, 0.0022],
+        device='cuda:0'), out_proj_covar=tensor([1.6269e-05, 1.4811e-05, 1.6135e-05, 1.6915e-05, 1.9617e-05, 1.6433e-05,
+        1.7600e-05, 1.7200e-05], device='cuda:0')
+2026-01-13 10:49:54,302 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.2900, 1.5907, 1.7506, 1.8253, 1.7478, 1.2530, 1.7443, 1.6294],
+        device='cuda:0'), covar=tensor([0.0291, 0.0312, 0.0166, 0.0211, 0.0235, 0.0232, 0.0261, 0.0276],
+        device='cuda:0'), in_proj_covar=tensor([0.0013, 0.0014, 0.0012, 0.0013, 0.0015, 0.0015, 0.0013, 0.0013],
+        device='cuda:0'), out_proj_covar=tensor([7.3239e-06, 9.0119e-06, 6.6195e-06, 7.4283e-06, 7.7905e-06, 7.9622e-06,
+        7.2798e-06, 7.5132e-06], device='cuda:0')
+2026-01-13 10:50:03,860 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7402, 2.7277, 2.5001, 2.2994, 2.8583, 2.4974, 2.7238, 2.6086],
+        device='cuda:0'), covar=tensor([0.0088, 0.0081, 0.0090, 0.0110, 0.0057, 0.0090, 0.0092, 0.0064],
+        device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0025, 0.0025, 0.0023, 0.0026, 0.0025, 0.0027, 0.0026],
+        device='cuda:0'), out_proj_covar=tensor([2.5692e-05, 2.5495e-05, 2.4682e-05, 2.4473e-05, 2.6089e-05, 2.5304e-05,
|
| 1433 |
+
2.6426e-05, 2.5537e-05], device='cuda:0')
|
| 1434 |
+
2026-01-13 10:50:19,658 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.6277, 2.5657, 2.6281, 2.6094, 2.6444, 2.5920, 2.6138, 2.5647],
|
| 1435 |
+
device='cuda:0'), covar=tensor([0.0031, 0.0035, 0.0034, 0.0044, 0.0028, 0.0043, 0.0039, 0.0027],
|
| 1436 |
+
device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0020, 0.0023, 0.0028, 0.0024, 0.0025, 0.0025, 0.0023],
|
| 1437 |
+
device='cuda:0'), out_proj_covar=tensor([1.6809e-05, 1.6074e-05, 1.8000e-05, 2.0086e-05, 1.8584e-05, 1.8053e-05,
|
| 1438 |
+
1.9758e-05, 1.7311e-05], device='cuda:0')
|
| 1439 |
+
2026-01-13 10:50:28,436 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.1548, 3.4078, 3.4446, 3.2749, 3.3819, 3.2016, 3.5131, 3.3339],
|
| 1440 |
+
device='cuda:0'), covar=tensor([0.0142, 0.0105, 0.0080, 0.0125, 0.0088, 0.0133, 0.0085, 0.0128],
|
| 1441 |
+
device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0029, 0.0028, 0.0030, 0.0027, 0.0031, 0.0029, 0.0031],
|
| 1442 |
+
device='cuda:0'), out_proj_covar=tensor([2.3009e-05, 2.3858e-05, 2.2481e-05, 2.4621e-05, 2.2925e-05, 2.6180e-05,
|
| 1443 |
+
2.3685e-05, 2.8781e-05], device='cuda:0')
|
| 1444 |
+
2026-01-13 10:50:37,208 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.4103, 1.9481, 0.9765, 1.4255, 1.2481, 2.3680, 1.9063, 2.0237],
|
| 1445 |
+
device='cuda:0'), covar=tensor([0.0534, 0.0568, 0.0377, 0.0411, 0.0780, 0.0426, 0.0338, 0.0407],
|
| 1446 |
+
device='cuda:0'), in_proj_covar=tensor([0.0043, 0.0042, 0.0039, 0.0042, 0.0043, 0.0039, 0.0041, 0.0038],
|
| 1447 |
+
device='cuda:0'), out_proj_covar=tensor([2.9024e-05, 3.3194e-05, 2.9171e-05, 2.9133e-05, 3.1823e-05, 3.2256e-05,
|
| 1448 |
+
2.8504e-05, 2.7688e-05], device='cuda:0')
|
| 1449 |
+
2026-01-13 10:50:51,710 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8952, 2.8939, 2.8779, 2.9065, 2.9136, 2.9998, 2.9803, 3.0435],
|
| 1450 |
+
device='cuda:0'), covar=tensor([0.0055, 0.0067, 0.0069, 0.0070, 0.0111, 0.0076, 0.0056, 0.0075],
|
| 1451 |
+
device='cuda:0'), in_proj_covar=tensor([0.0023, 0.0024, 0.0024, 0.0021, 0.0025, 0.0025, 0.0025, 0.0025],
|
| 1452 |
+
device='cuda:0'), out_proj_covar=tensor([2.3694e-05, 2.3666e-05, 2.4328e-05, 2.2619e-05, 2.5760e-05, 2.5778e-05,
|
| 1453 |
+
2.6251e-05, 2.4681e-05], device='cuda:0')
|
| 1454 |
+
2026-01-13 10:51:10,042 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.2292, 3.3405, 3.1399, 3.3775, 3.3050, 3.3659, 3.3280, 3.3416],
|
| 1455 |
+
device='cuda:0'), covar=tensor([0.0110, 0.0093, 0.0144, 0.0083, 0.0162, 0.0130, 0.0212, 0.0112],
|
| 1456 |
+
device='cuda:0'), in_proj_covar=tensor([0.0029, 0.0031, 0.0034, 0.0030, 0.0031, 0.0031, 0.0033, 0.0035],
|
| 1457 |
+
device='cuda:0'), out_proj_covar=tensor([2.1968e-05, 2.4784e-05, 2.6704e-05, 2.2540e-05, 2.2916e-05, 2.2826e-05,
|
| 1458 |
+
2.3036e-05, 2.5575e-05], device='cuda:0')
|
| 1459 |
+
2026-01-13 10:51:19,197 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4324, 2.5433, 2.3606, 2.5871, 2.6188, 2.3105, 2.5704, 2.6225],
|
| 1460 |
+
device='cuda:0'), covar=tensor([0.0033, 0.0044, 0.0039, 0.0042, 0.0036, 0.0067, 0.0047, 0.0050],
|
| 1461 |
+
device='cuda:0'), in_proj_covar=tensor([0.0033, 0.0031, 0.0033, 0.0032, 0.0032, 0.0031, 0.0032, 0.0031],
|
| 1462 |
+
device='cuda:0'), out_proj_covar=tensor([1.9123e-05, 1.8210e-05, 1.9780e-05, 1.8637e-05, 1.8430e-05, 1.8939e-05,
|
| 1463 |
+
1.9931e-05, 1.7120e-05], device='cuda:0')
|
| 1464 |
+
2026-01-13 10:51:45,346 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.9878, 3.0495, 2.7464, 2.8621, 3.2431, 2.7329, 2.9485, 3.0129],
|
| 1465 |
+
device='cuda:0'), covar=tensor([0.0093, 0.0102, 0.0098, 0.0134, 0.0059, 0.0094, 0.0110, 0.0068],
|
| 1466 |
+
device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0025, 0.0025, 0.0023, 0.0026, 0.0025, 0.0027, 0.0026],
|
| 1467 |
+
device='cuda:0'), out_proj_covar=tensor([2.5692e-05, 2.5495e-05, 2.4682e-05, 2.4473e-05, 2.6089e-05, 2.5304e-05,
|
| 1468 |
+
2.6426e-05, 2.5537e-05], device='cuda:0')
|
| 1469 |
+
2026-01-13 10:51:47,930 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.1774, 3.4711, 3.4730, 3.3416, 3.3503, 3.2196, 3.5449, 3.4160],
|
| 1470 |
+
device='cuda:0'), covar=tensor([0.0060, 0.0043, 0.0036, 0.0047, 0.0042, 0.0056, 0.0031, 0.0050],
|
| 1471 |
+
device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0029, 0.0028, 0.0030, 0.0027, 0.0031, 0.0029, 0.0031],
|
| 1472 |
+
device='cuda:0'), out_proj_covar=tensor([2.3009e-05, 2.3858e-05, 2.2481e-05, 2.4621e-05, 2.2925e-05, 2.6180e-05,
|
| 1473 |
+
2.3685e-05, 2.8781e-05], device='cuda:0')
|
| 1474 |
+
2026-01-13 10:52:00,862 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7703, 2.7370, 2.5089, 2.4234, 2.8848, 2.5650, 2.7034, 2.7578],
|
| 1475 |
+
device='cuda:0'), covar=tensor([0.0083, 0.0092, 0.0103, 0.0108, 0.0066, 0.0085, 0.0088, 0.0057],
|
| 1476 |
+
device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0025, 0.0025, 0.0023, 0.0026, 0.0025, 0.0027, 0.0026],
|
| 1477 |
+
device='cuda:0'), out_proj_covar=tensor([2.5692e-05, 2.5495e-05, 2.4682e-05, 2.4473e-05, 2.6089e-05, 2.5304e-05,
|
| 1478 |
+
2.6426e-05, 2.5537e-05], device='cuda:0')
|
| 1479 |
+
2026-01-13 10:52:08,262 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.9639, 2.5156, 0.8944, 1.5290, 1.4525, 2.7271, 2.0293, 2.0373],
|
| 1480 |
+
device='cuda:0'), covar=tensor([0.0273, 0.0299, 0.0276, 0.0301, 0.0441, 0.0230, 0.0264, 0.0260],
|
| 1481 |
+
device='cuda:0'), in_proj_covar=tensor([0.0043, 0.0042, 0.0039, 0.0042, 0.0043, 0.0039, 0.0041, 0.0038],
|
| 1482 |
+
device='cuda:0'), out_proj_covar=tensor([2.9024e-05, 3.3194e-05, 2.9171e-05, 2.9133e-05, 3.1823e-05, 3.2256e-05,
|
| 1483 |
+
2.8504e-05, 2.7688e-05], device='cuda:0')
|
| 1484 |
+
2026-01-13 10:52:14,976 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.1798, 3.3423, 3.0694, 3.2966, 3.3228, 3.3535, 3.3306, 3.2574],
|
| 1485 |
+
device='cuda:0'), covar=tensor([0.0086, 0.0089, 0.0133, 0.0054, 0.0090, 0.0082, 0.0133, 0.0125],
|
| 1486 |
+
device='cuda:0'), in_proj_covar=tensor([0.0029, 0.0031, 0.0034, 0.0030, 0.0031, 0.0031, 0.0033, 0.0035],
|
| 1487 |
+
device='cuda:0'), out_proj_covar=tensor([2.1968e-05, 2.4784e-05, 2.6704e-05, 2.2540e-05, 2.2916e-05, 2.2826e-05,
|
| 1488 |
+
2.3036e-05, 2.5575e-05], device='cuda:0')
|
| 1489 |
+
2026-01-13 10:52:16,076 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.6214, 2.6220, 2.3722, 2.6500, 2.6684, 2.6747, 2.5955, 2.5912],
|
| 1490 |
+
device='cuda:0'), covar=tensor([0.0126, 0.0128, 0.0194, 0.0094, 0.0127, 0.0119, 0.0104, 0.0103],
|
| 1491 |
+
device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0026, 0.0024, 0.0024, 0.0026, 0.0025, 0.0025, 0.0024],
|
| 1492 |
+
device='cuda:0'), out_proj_covar=tensor([2.2885e-05, 2.6892e-05, 2.6078e-05, 2.3085e-05, 2.5011e-05, 2.4906e-05,
|
| 1493 |
+
2.3432e-05, 2.4234e-05], device='cuda:0')
|
| 1494 |
+
2026-01-13 10:52:26,788 INFO [train.py:929] Epoch 1, validation: loss=1.807, simple_loss=1.177, pruned_loss=1.219, over 1639044.00 frames.
|
| 1495 |
+
2026-01-13 10:52:26,789 INFO [train.py:930] Maximum memory allocated so far is 2987MB
|
| 1496 |
+
2026-01-13 10:52:27,494 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.29 vs. limit=2.0
|
| 1497 |
+
2026-01-13 10:52:27,589 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.240e+02 1.910e+02 2.289e+02 2.866e+02 5.513e+02, threshold=4.578e+02, percent-clipped=3.0
|
| 1498 |
+
2026-01-13 10:52:30,815 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=6411.0, num_to_drop=0, layers_to_drop=set()
|
| 1499 |
+
2026-01-13 10:52:34,033 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.38 vs. limit=2.0
|
| 1500 |
+
2026-01-13 10:52:38,719 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.82 vs. limit=2.0
|
| 1501 |
+
2026-01-13 10:52:43,223 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.84 vs. limit=2.0
|
| 1502 |
+
2026-01-13 10:52:45,811 INFO [train.py:895] Epoch 1, batch 6450, loss[loss=1.375, simple_loss=0.8855, pruned_loss=0.932, over 1288.00 frames. ], tot_loss[loss=1.151, simple_loss=0.7334, pruned_loss=0.7841, over 260292.08 frames. ], batch size: 10, lr: 3.91e-02, grad_scale: 16.0
|
| 1503 |
+
2026-01-13 10:52:46,975 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.10 vs. limit=2.0
|
| 1504 |
+
2026-01-13 10:52:48,564 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=6459.0, num_to_drop=0, layers_to_drop=set()
|
| 1505 |
+
2026-01-13 10:52:49,636 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6462.0, num_to_drop=0, layers_to_drop=set()
|
| 1506 |
+
2026-01-13 10:52:51,367 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=6467.0, num_to_drop=0, layers_to_drop=set()
|
| 1507 |
+
2026-01-13 10:52:53,011 INFO [train.py:1204] Saving batch to /kaggle/working/amharic_training/exp_amharic_streaming/batch-bdd640fb-0667-1ad1-1c80-317fa3b1799d.pt
|
| 1508 |
+
2026-01-13 10:52:53,014 INFO [train.py:1210] features shape: torch.Size([4, 1305, 80])
|
| 1509 |
+
2026-01-13 10:52:53,016 INFO [train.py:1214] num tokens: 215
|
tensorboard/events.out.tfevents.1768298578.8e64ffbd666a.24203.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a44d11f647aa0e2fd899ae07be466487c9225708180527b94c0b4895758f4d75
+size 61583