Syncing latest checkpoint
epoch-1.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc800e9655621547803aceb0cd253408ec1fcf054039be9713d48b18eddbc356
+size 1141949523
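The three added lines above are a Git LFS pointer, not the checkpoint weights themselves: only `version`, `oid`, and `size` are stored in the repository, and the 1.1 GB `epoch-1.pt` blob lives in LFS storage. A minimal sketch (not part of this commit) for reading such a pointer's key/value fields:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields.

    Each pointer line has the form "<key> <value>", e.g. "size 1141949523".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:bc800e9655621547803aceb0cd253408ec1fcf054039be9713d48b18eddbc356
size 1141949523
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256:bc800e96...
print(info["size"])  # 1141949523 (bytes, as a string)
```

Values are kept as strings; callers that need the byte count can apply `int(info["size"])`.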
log/log-train-2026-01-13-11-44-05-0
CHANGED
@@ -269,3 +269,62 @@
 device='cuda:0'), in_proj_covar=tensor([0.0019, 0.0018, 0.0017, 0.0018, 0.0018, 0.0018, 0.0019, 0.0016],
 device='cuda:0'), out_proj_covar=tensor([1.5737e-05, 1.5695e-05, 1.4800e-05, 1.4713e-05, 1.4137e-05, 1.5296e-05,
 1.7667e-05, 1.3828e-05], device='cuda:0')
+2026-01-13 11:55:55,181 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=192, metric=2.05 vs. limit=2.0
+2026-01-13 11:55:56,053 INFO [optim.py:365] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.654e+01 1.693e+02 2.387e+02 3.285e+02 1.270e+03, threshold=4.774e+02, percent-clipped=16.0
+2026-01-13 11:55:56,089 INFO [train.py:895] (0/2) Epoch 1, batch 1400, loss[loss=0.5981, simple_loss=0.5234, pruned_loss=0.3532, over 2864.00 frames. ], tot_loss[loss=0.5997, simple_loss=0.5149, pruned_loss=0.3808, over 552176.70 frames. ], batch size: 7, lr: 4.91e-02, grad_scale: 8.0
+2026-01-13 11:56:06,541 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=96, metric=2.04 vs. limit=2.0
+2026-01-13 11:56:11,436 INFO [zipformer.py:1188] (0/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1434.0, num_to_drop=1, layers_to_drop={1}
+2026-01-13 11:56:19,451 INFO [train.py:895] (0/2) Epoch 1, batch 1450, loss[loss=0.6259, simple_loss=0.5149, pruned_loss=0.3955, over 2786.00 frames. ], tot_loss[loss=0.5919, simple_loss=0.5098, pruned_loss=0.3705, over 551189.36 frames. ], batch size: 10, lr: 4.90e-02, grad_scale: 8.0
+2026-01-13 11:56:30,720 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=192, metric=2.29 vs. limit=2.0
+2026-01-13 11:56:40,383 INFO [zipformer.py:1188] (0/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1495.0, num_to_drop=2, layers_to_drop={0, 2}
+2026-01-13 11:56:42,960 INFO [optim.py:365] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.658e+01 1.835e+02 2.429e+02 3.386e+02 6.529e+02, threshold=4.857e+02, percent-clipped=8.0
+2026-01-13 11:56:43,000 INFO [train.py:895] (0/2) Epoch 1, batch 1500, loss[loss=0.6655, simple_loss=0.5689, pruned_loss=0.3999, over 2866.00 frames. ], tot_loss[loss=0.582, simple_loss=0.5032, pruned_loss=0.3593, over 550839.32 frames. ], batch size: 9, lr: 4.89e-02, grad_scale: 8.0
+2026-01-13 11:56:44,109 INFO [zipformer.py:1188] (0/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1503.0, num_to_drop=1, layers_to_drop={2}
+2026-01-13 11:56:47,005 INFO [zipformer.py:1188] (0/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1509.0, num_to_drop=2, layers_to_drop={0, 1}
+2026-01-13 11:56:55,382 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=192, metric=2.20 vs. limit=2.0
+2026-01-13 11:57:07,141 INFO [train.py:895] (0/2) Epoch 1, batch 1550, loss[loss=0.4728, simple_loss=0.4413, pruned_loss=0.2539, over 2896.00 frames. ], tot_loss[loss=0.5789, simple_loss=0.5015, pruned_loss=0.3534, over 548214.72 frames. ], batch size: 10, lr: 4.89e-02, grad_scale: 8.0
+2026-01-13 11:57:07,195 INFO [zipformer.py:1188] (0/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=1551.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 11:57:12,637 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=192, metric=2.20 vs. limit=2.0
+2026-01-13 11:57:14,382 INFO [zipformer.py:1188] (0/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1566.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 11:57:31,111 INFO [optim.py:365] (0/2) Clipping_scale=2.0, grad-norm quartiles 9.520e+01 2.049e+02 2.852e+02 3.650e+02 7.423e+02, threshold=5.703e+02, percent-clipped=10.0
+2026-01-13 11:57:31,148 INFO [train.py:895] (0/2) Epoch 1, batch 1600, loss[loss=0.5265, simple_loss=0.4731, pruned_loss=0.2959, over 2923.00 frames. ], tot_loss[loss=0.5722, simple_loss=0.4971, pruned_loss=0.3455, over 545376.59 frames. ], batch size: 11, lr: 4.88e-02, grad_scale: 8.0
+2026-01-13 11:57:31,149 INFO [train.py:920] (0/2) Computing validation loss
+2026-01-13 11:57:53,948 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.6460, 1.6298, 1.3032, 1.3896, 1.4481, 1.0893, 1.3794, 1.5383],
+device='cuda:0'), covar=tensor([0.0898, 0.1261, 0.1167, 0.1229, 0.1209, 0.1426, 0.1321, 0.1041],
+device='cuda:0'), in_proj_covar=tensor([0.0010, 0.0010, 0.0010, 0.0011, 0.0011, 0.0011, 0.0010, 0.0010],
+device='cuda:0'), out_proj_covar=tensor([7.3589e-06, 8.8167e-06, 7.9994e-06, 8.8864e-06, 8.8690e-06, 9.4186e-06,
+8.4498e-06, 7.6856e-06], device='cuda:0')
+2026-01-13 11:58:02,890 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([2.4958, 1.8941, 1.6997, 2.4628, 2.0100, 2.6838, 2.1254, 1.6515],
+device='cuda:0'), covar=tensor([0.3892, 1.1334, 1.5682, 0.3381, 1.0763, 0.3657, 0.5418, 1.3985],
+device='cuda:0'), in_proj_covar=tensor([0.0048, 0.0067, 0.0081, 0.0047, 0.0071, 0.0047, 0.0047, 0.0073],
+device='cuda:0'), out_proj_covar=tensor([4.2078e-05, 7.1114e-05, 8.0022e-05, 3.9135e-05, 7.0635e-05, 3.8949e-05,
+3.9488e-05, 7.7211e-05], device='cuda:0')
+2026-01-13 11:58:10,428 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([2.0410, 2.1260, 1.8959, 1.9695, 2.1428, 2.0346, 1.5017, 2.0459],
+device='cuda:0'), covar=tensor([0.1470, 0.1262, 0.1780, 0.1355, 0.1114, 0.1452, 0.2307, 0.1577],
+device='cuda:0'), in_proj_covar=tensor([0.0017, 0.0016, 0.0017, 0.0015, 0.0016, 0.0017, 0.0018, 0.0015],
+device='cuda:0'), out_proj_covar=tensor([1.3872e-05, 1.3405e-05, 1.4232e-05, 1.2236e-05, 1.1740e-05, 1.4211e-05,
+1.6897e-05, 1.2218e-05], device='cuda:0')
+2026-01-13 11:58:36,235 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.7859, 1.8170, 1.7568, 1.8324, 1.7783, 1.6457, 1.7925, 1.7074],
+device='cuda:0'), covar=tensor([0.1445, 0.1834, 0.1912, 0.2012, 0.1463, 0.2161, 0.1691, 0.2292],
+device='cuda:0'), in_proj_covar=tensor([0.0015, 0.0015, 0.0015, 0.0015, 0.0013, 0.0016, 0.0015, 0.0016],
+device='cuda:0'), out_proj_covar=tensor([1.3396e-05, 1.4789e-05, 1.2804e-05, 1.3667e-05, 1.2556e-05, 1.4016e-05,
+1.3431e-05, 1.6502e-05], device='cuda:0')
+2026-01-13 11:58:36,858 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([2.5411, 2.4361, 2.6137, 2.6074, 2.1117, 1.2654, 2.2329, 0.8523],
+device='cuda:0'), covar=tensor([0.0854, 0.1238, 0.1037, 0.0897, 0.1465, 0.4936, 0.1450, 0.6848],
+device='cuda:0'), in_proj_covar=tensor([0.0015, 0.0016, 0.0018, 0.0015, 0.0017, 0.0024, 0.0018, 0.0028],
+device='cuda:0'), out_proj_covar=tensor([1.0739e-05, 1.0964e-05, 1.2835e-05, 1.0152e-05, 1.3120e-05, 2.4568e-05,
+1.4089e-05, 2.7964e-05], device='cuda:0')
+2026-01-13 11:59:03,791 INFO [train.py:929] (0/2) Epoch 1, validation: loss=1.035, simple_loss=0.8689, pruned_loss=0.626, over 1639044.00 frames.
+2026-01-13 11:59:03,791 INFO [train.py:930] (0/2) Maximum memory allocated so far is 3736MB
+2026-01-13 11:59:08,725 INFO [zipformer.py:1188] (0/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1611.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 11:59:14,233 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.9990, 1.9855, 1.9294, 2.0605, 1.9850, 1.7354, 1.9851, 1.7422],
+device='cuda:0'), covar=tensor([0.0917, 0.1087, 0.1016, 0.1081, 0.0851, 0.1247, 0.0920, 0.2075],
+device='cuda:0'), in_proj_covar=tensor([0.0015, 0.0016, 0.0015, 0.0015, 0.0014, 0.0016, 0.0015, 0.0017],
+device='cuda:0'), out_proj_covar=tensor([1.3854e-05, 1.5131e-05, 1.3070e-05, 1.3995e-05, 1.3165e-05, 1.4274e-05,
+1.3542e-05, 1.7317e-05], device='cuda:0')
+2026-01-13 11:59:14,757 INFO [scaling.py:681] (0/2) Whitening: num_groups=1, num_channels=384, metric=7.77 vs. limit=5.0
+2026-01-13 11:59:15,969 INFO [zipformer.py:1188] (0/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1627.0, num_to_drop=2, layers_to_drop={0, 2}
+2026-01-13 11:59:26,606 INFO [train.py:895] (0/2) Epoch 1, batch 1650, loss[loss=0.6257, simple_loss=0.5215, pruned_loss=0.379, over 2452.00 frames. ], tot_loss[loss=0.5912, simple_loss=0.51, pruned_loss=0.3564, over 540057.57 frames. ], batch size: 26, lr: 4.87e-02, grad_scale: 8.0
+2026-01-13 11:59:30,069 INFO [checkpoint.py:74] (0/2) Saving checkpoint to /kaggle/working/amharic_training/exp_amharic_streaming/epoch-1.pt
+2026-01-13 11:59:47,232 INFO [train.py:895] (0/2) Epoch 2, batch 0, loss[loss=0.5761, simple_loss=0.4964, pruned_loss=0.3375, over 2662.00 frames. ], tot_loss[loss=0.5761, simple_loss=0.4964, pruned_loss=0.3375, over 2662.00 frames. ], batch size: 7, lr: 4.78e-02, grad_scale: 8.0
+2026-01-13 11:59:47,233 INFO [train.py:920] (0/2) Computing validation loss
log/log-train-2026-01-13-11-44-05-1
CHANGED
@@ -279,3 +279,72 @@
 2026-01-13 11:55:31,441 INFO [train.py:895] (1/2) Epoch 1, batch 1350, loss[loss=0.5113, simple_loss=0.4575, pruned_loss=0.2943, over 2748.00 frames. ], tot_loss[loss=0.5966, simple_loss=0.5137, pruned_loss=0.3819, over 551481.20 frames. ], batch size: 8, lr: 4.91e-02, grad_scale: 8.0
 2026-01-13 11:55:36,968 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=192, metric=2.53 vs. limit=2.0
 2026-01-13 11:55:46,370 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=96, metric=1.95 vs. limit=2.0
+2026-01-13 11:55:56,053 INFO [optim.py:365] (1/2) Clipping_scale=2.0, grad-norm quartiles 8.654e+01 1.693e+02 2.387e+02 3.285e+02 1.270e+03, threshold=4.774e+02, percent-clipped=16.0
+2026-01-13 11:55:56,090 INFO [train.py:895] (1/2) Epoch 1, batch 1400, loss[loss=0.5317, simple_loss=0.4634, pruned_loss=0.3157, over 2872.00 frames. ], tot_loss[loss=0.5917, simple_loss=0.5099, pruned_loss=0.3739, over 552430.72 frames. ], batch size: 7, lr: 4.91e-02, grad_scale: 8.0
+2026-01-13 11:56:09,806 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=96, metric=2.03 vs. limit=2.0
+2026-01-13 11:56:11,436 INFO [zipformer.py:1188] (1/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1434.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 11:56:19,450 INFO [train.py:895] (1/2) Epoch 1, batch 1450, loss[loss=0.4929, simple_loss=0.4293, pruned_loss=0.2913, over 2796.00 frames. ], tot_loss[loss=0.5843, simple_loss=0.5048, pruned_loss=0.3642, over 551287.22 frames. ], batch size: 10, lr: 4.90e-02, grad_scale: 8.0
+2026-01-13 11:56:35,275 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
+2026-01-13 11:56:40,352 INFO [zipformer.py:1188] (1/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1495.0, num_to_drop=2, layers_to_drop={0, 2}
+2026-01-13 11:56:42,961 INFO [optim.py:365] (1/2) Clipping_scale=2.0, grad-norm quartiles 8.658e+01 1.835e+02 2.429e+02 3.386e+02 6.529e+02, threshold=4.857e+02, percent-clipped=8.0
+2026-01-13 11:56:43,001 INFO [train.py:895] (1/2) Epoch 1, batch 1500, loss[loss=0.4892, simple_loss=0.4553, pruned_loss=0.2641, over 2876.00 frames. ], tot_loss[loss=0.5798, simple_loss=0.5018, pruned_loss=0.3572, over 550373.84 frames. ], batch size: 9, lr: 4.89e-02, grad_scale: 8.0
+2026-01-13 11:56:44,105 INFO [zipformer.py:1188] (1/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1503.0, num_to_drop=1, layers_to_drop={1}
+2026-01-13 11:56:47,002 INFO [zipformer.py:1188] (1/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1509.0, num_to_drop=2, layers_to_drop={2, 3}
+2026-01-13 11:56:54,256 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([3.6552, 2.9821, 3.3448, 3.5855, 3.3832, 3.3386, 3.6209, 3.5190],
+device='cuda:1'), covar=tensor([0.0203, 0.0263, 0.0281, 0.0264, 0.0276, 0.0491, 0.0234, 0.0402],
+device='cuda:1'), in_proj_covar=tensor([0.0008, 0.0007, 0.0008, 0.0007, 0.0008, 0.0008, 0.0007, 0.0009],
+device='cuda:1'), out_proj_covar=tensor([6.4319e-06, 6.8004e-06, 6.7588e-06, 5.9259e-06, 6.7042e-06, 6.9838e-06,
+6.3286e-06, 7.7209e-06], device='cuda:1')
+2026-01-13 11:57:07,141 INFO [train.py:895] (1/2) Epoch 1, batch 1550, loss[loss=0.54, simple_loss=0.4506, pruned_loss=0.331, over 2899.00 frames. ], tot_loss[loss=0.5769, simple_loss=0.5005, pruned_loss=0.3513, over 548302.60 frames. ], batch size: 10, lr: 4.89e-02, grad_scale: 8.0
+2026-01-13 11:57:07,194 INFO [zipformer.py:1188] (1/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=1551.0, num_to_drop=0, layers_to_drop=set()
+2026-01-13 11:57:14,387 INFO [zipformer.py:1188] (1/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1566.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 11:57:31,111 INFO [optim.py:365] (1/2) Clipping_scale=2.0, grad-norm quartiles 9.520e+01 2.049e+02 2.852e+02 3.650e+02 7.423e+02, threshold=5.703e+02, percent-clipped=10.0
+2026-01-13 11:57:31,148 INFO [train.py:895] (1/2) Epoch 1, batch 1600, loss[loss=0.6946, simple_loss=0.5851, pruned_loss=0.4189, over 2687.00 frames. ], tot_loss[loss=0.5712, simple_loss=0.4967, pruned_loss=0.3443, over 545183.38 frames. ], batch size: 10, lr: 4.88e-02, grad_scale: 8.0
+2026-01-13 11:57:31,148 INFO [train.py:920] (1/2) Computing validation loss
+2026-01-13 11:57:43,245 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([0.9092, 0.9553, 0.9524, 1.0006, 0.7997, 0.8940, 0.7361, 0.9564],
+device='cuda:1'), covar=tensor([0.1036, 0.0946, 0.0828, 0.0687, 0.1103, 0.0979, 0.0993, 0.0813],
+device='cuda:1'), in_proj_covar=tensor([0.0006, 0.0006, 0.0006, 0.0006, 0.0006, 0.0006, 0.0006, 0.0007],
+device='cuda:1'), out_proj_covar=tensor([5.5212e-06, 5.2831e-06, 5.0532e-06, 5.5774e-06, 5.5788e-06, 5.2944e-06,
+5.3092e-06, 5.9974e-06], device='cuda:1')
+2026-01-13 11:57:54,885 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([2.2404, 1.9951, 1.6599, 1.4862, 1.9659, 1.7663, 1.9979, 1.8963],
+device='cuda:1'), covar=tensor([0.0521, 0.0720, 0.1108, 0.1166, 0.0797, 0.0742, 0.0732, 0.0803],
+device='cuda:1'), in_proj_covar=tensor([0.0004, 0.0004, 0.0005, 0.0005, 0.0005, 0.0004, 0.0004, 0.0004],
+device='cuda:1'), out_proj_covar=tensor([3.1774e-06, 3.3800e-06, 3.8657e-06, 3.7859e-06, 3.5904e-06, 3.5542e-06,
+3.4501e-06, 3.3029e-06], device='cuda:1')
+2026-01-13 11:57:59,885 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([1.7893, 1.6581, 1.3666, 1.4537, 1.5532, 1.1999, 1.4384, 1.5708],
+device='cuda:1'), covar=tensor([0.0747, 0.0948, 0.1050, 0.1150, 0.0987, 0.1269, 0.1116, 0.0894],
+device='cuda:1'), in_proj_covar=tensor([0.0010, 0.0010, 0.0010, 0.0011, 0.0011, 0.0011, 0.0010, 0.0010],
+device='cuda:1'), out_proj_covar=tensor([7.3589e-06, 8.8167e-06, 7.9994e-06, 8.8864e-06, 8.8690e-06, 9.4186e-06,
+8.4498e-06, 7.6856e-06], device='cuda:1')
+2026-01-13 11:58:00,126 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([1.8191, 0.9204, 1.5162, 1.8150, 1.3167, 1.8792, 1.2907, 1.6266],
+device='cuda:1'), covar=tensor([0.0979, 0.2696, 0.1017, 0.0866, 0.1541, 0.1092, 0.1645, 0.1194],
+device='cuda:1'), in_proj_covar=tensor([0.0013, 0.0016, 0.0012, 0.0012, 0.0014, 0.0014, 0.0015, 0.0014],
+device='cuda:1'), out_proj_covar=tensor([1.0234e-05, 1.4760e-05, 1.0098e-05, 9.7123e-06, 1.2315e-05, 1.1718e-05,
+1.3190e-05, 1.1340e-05], device='cuda:1')
+2026-01-13 11:58:23,143 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([2.0159, 2.0681, 1.9662, 1.9939, 2.1610, 2.0154, 1.4004, 2.0359],
+device='cuda:1'), covar=tensor([0.1497, 0.1441, 0.1710, 0.1286, 0.1100, 0.1442, 0.2427, 0.1476],
+device='cuda:1'), in_proj_covar=tensor([0.0017, 0.0016, 0.0017, 0.0015, 0.0016, 0.0017, 0.0018, 0.0015],
+device='cuda:1'), out_proj_covar=tensor([1.3872e-05, 1.3405e-05, 1.4232e-05, 1.2236e-05, 1.1740e-05, 1.4211e-05,
+1.6897e-05, 1.2218e-05], device='cuda:1')
+2026-01-13 11:58:30,124 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([2.0214, 1.5295, 1.6267, 1.7783, 2.0317, 1.8515, 1.5381, 1.5019],
+device='cuda:1'), covar=tensor([0.0389, 0.1164, 0.0933, 0.0757, 0.0480, 0.0806, 0.1303, 0.0927],
+device='cuda:1'), in_proj_covar=tensor([0.0007, 0.0010, 0.0009, 0.0008, 0.0008, 0.0009, 0.0009, 0.0008],
+device='cuda:1'), out_proj_covar=tensor([5.7838e-06, 7.9208e-06, 7.6991e-06, 6.7249e-06, 6.3381e-06, 8.2537e-06,
+7.9991e-06, 7.0416e-06], device='cuda:1')
+2026-01-13 11:58:31,637 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([1.4133, 0.7515, 1.2074, 1.3837, 1.0873, 1.4620, 0.9686, 1.2898],
+device='cuda:1'), covar=tensor([0.0735, 0.1871, 0.0785, 0.0737, 0.1225, 0.0753, 0.1268, 0.0930],
+device='cuda:1'), in_proj_covar=tensor([0.0013, 0.0016, 0.0012, 0.0012, 0.0014, 0.0014, 0.0015, 0.0014],
+device='cuda:1'), out_proj_covar=tensor([1.0234e-05, 1.4760e-05, 1.0098e-05, 9.7123e-06, 1.2315e-05, 1.1718e-05,
+1.3190e-05, 1.1340e-05], device='cuda:1')
+2026-01-13 11:59:03,791 INFO [train.py:929] (1/2) Epoch 1, validation: loss=1.035, simple_loss=0.8689, pruned_loss=0.626, over 1639044.00 frames.
+2026-01-13 11:59:03,791 INFO [train.py:930] (1/2) Maximum memory allocated so far is 3832MB
+2026-01-13 11:59:08,726 INFO [zipformer.py:1188] (1/2) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1611.0, num_to_drop=1, layers_to_drop={0}
+2026-01-13 11:59:15,967 INFO [zipformer.py:1188] (1/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1627.0, num_to_drop=2, layers_to_drop={1, 3}
+2026-01-13 11:59:26,606 INFO [train.py:895] (1/2) Epoch 1, batch 1650, loss[loss=0.6384, simple_loss=0.5407, pruned_loss=0.3807, over 2410.00 frames. ], tot_loss[loss=0.5937, simple_loss=0.5125, pruned_loss=0.3574, over 541411.36 frames. ], batch size: 26, lr: 4.87e-02, grad_scale: 8.0
+2026-01-13 11:59:47,228 INFO [train.py:895] (1/2) Epoch 2, batch 0, loss[loss=0.5816, simple_loss=0.4805, pruned_loss=0.355, over 2654.00 frames. ], tot_loss[loss=0.5816, simple_loss=0.4805, pruned_loss=0.355, over 2654.00 frames. ], batch size: 7, lr: 4.78e-02, grad_scale: 8.0
+2026-01-13 11:59:47,228 INFO [train.py:920] (1/2) Computing validation loss
+2026-01-13 12:00:05,334 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([2.5355, 1.4842, 1.3601, 2.1801, 1.8354, 2.4291, 2.2427, 1.4162],
+device='cuda:1'), covar=tensor([0.2171, 1.0405, 1.3523, 0.2964, 0.7762, 0.2831, 0.3689, 1.4802],
+device='cuda:1'), in_proj_covar=tensor([0.0050, 0.0070, 0.0083, 0.0049, 0.0072, 0.0050, 0.0051, 0.0076],
+device='cuda:1'), out_proj_covar=tensor([4.4592e-05, 7.4365e-05, 8.2909e-05, 4.1058e-05, 7.2073e-05, 4.2411e-05,
+4.2429e-05, 8.1056e-05], device='cuda:1')
tensorboard/events.out.tfevents.1768304645.8e64ffbd666a.97184.0
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:553b9d433cc01ec5e4db7f4713407516cfee969c53f673b4a7cff0f4d0dcc92a
+size 15830
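The `train.py:895` lines in the logs above all share one shape: a per-batch `loss[...]` followed by a running `tot_loss[...]`. A small sketch, assuming exactly this format (the regex and function are ours, not part of this repository), for pulling the running loss out of such a log:

```python
import re

# Matches icefall-style training lines, e.g.
# "... Epoch 1, batch 1500, loss[...], tot_loss[loss=0.582, ...], batch size: 9, ..."
LINE_RE = re.compile(
    r"Epoch (?P<epoch>\d+), batch (?P<batch>\d+)"
    r".*?tot_loss\[loss=(?P<tot_loss>[0-9.]+)"
)


def parse_tot_loss(lines):
    """Yield (epoch, batch, tot_loss) for each matching training log line."""
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            yield int(m["epoch"]), int(m["batch"]), float(m["tot_loss"])


sample = [
    "2026-01-13 11:56:43,000 INFO [train.py:895] (0/2) Epoch 1, batch 1500, "
    "loss[loss=0.6655, simple_loss=0.5689, pruned_loss=0.3999, over 2866.00 frames. ], "
    "tot_loss[loss=0.582, simple_loss=0.5032, pruned_loss=0.3593, over 550839.32 frames. ], "
    "batch size: 9, lr: 4.89e-02, grad_scale: 8.0",
]
print(list(parse_tot_loss(sample)))  # [(1, 1500, 0.582)]
```

The lazy `.*?` skips past the per-batch `loss[...]` group so only the running `tot_loss` is captured; feeding an open log file to `parse_tot_loss` gives a loss curve without needing the TensorBoard event file.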