projecti7 committed
Commit 6ff7d76 · verified · 1 Parent(s): 4eb905e

Auto-sync checkpoint during training

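The commit message refers to checkpoints being pushed to this repo automatically as training progresses. A minimal sketch of how such a sync hook could look, assuming huggingface_hub and a training loop like the one in the log below (the function name, directory, and repo_id argument are illustrative, not taken from this commit):

from huggingface_hub import HfApi
import torch

api = HfApi()

def save_and_sync(model, batch_idx, exp_dir, repo_id):
    # Write the checkpoint locally, matching the checkpoint-<batch>.pt naming used here.
    path = f"{exp_dir}/checkpoint-{batch_idx}.pt"
    torch.save({"model": model.state_dict(), "batch_idx": batch_idx}, path)
    # Upload to the Hub; files this large are stored through Git LFS automatically.
    api.upload_file(
        path_or_fileobj=path,
        path_in_repo=f"checkpoint-{batch_idx}.pt",
        repo_id=repo_id,
        commit_message="Auto-sync checkpoint during training",
    )

Called every N batches (the log below saves at batch 3000 and 4000), a hook like this produces commits shaped like this one.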
checkpoint-3000.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42e43f7b49a9db1b9f92c44d372596bff4cbb4dd8468187a30dfd52b3125f066
3
+ size 1141963947
checkpoint-4000.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f6451a460ddcef7ec308006feb2a2dca0ecf82db0e826e2cc9244cd8cdddb8d
3
+ size 1141963947
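Both added files are Git LFS pointer stubs rather than the weights themselves: version names the pointer spec, oid is the SHA-256 of the actual payload, and size is its byte count (about 1.1 GB each). After fetching the payloads (e.g. with git lfs pull), a checkpoint can be inspected with plain PyTorch; the 'model' key below is the usual icefall-style layout but is an assumption, not something this diff shows:

import torch

# Requires the LFS payload to be present, not just the 3-line pointer file.
ckpt = torch.load("checkpoint-3000.pt", map_location="cpu")
print(sorted(ckpt.keys()))  # assumed: 'model' plus optimizer/scheduler state and counters
n_params = sum(t.numel() for t in ckpt["model"].values())  # if 'model' is a state_dict
print(f"{n_params / 1e6:.1f}M parameters")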
log/log-train-2026-01-13-10-02-58 CHANGED
@@ -542,3 +542,613 @@
542
  2026-01-13 10:19:57,293 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=50.99 vs. limit=5.0
543
  2026-01-13 10:19:57,351 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=9.24 vs. limit=2.0
544
  2026-01-13 10:19:58,347 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.89 vs. limit=2.0
545
+ 2026-01-13 10:20:01,758 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.21 vs. limit=2.0
546
+ 2026-01-13 10:20:05,368 INFO [train.py:895] Epoch 1, batch 2450, loss[loss=1.146, simple_loss=0.7428, pruned_loss=0.7743, over 1351.00 frames. ], tot_loss[loss=1.218, simple_loss=0.7539, pruned_loss=0.8448, over 263519.58 frames. ], batch size: 4, lr: 4.74e-02, grad_scale: 8.0
547
+ 2026-01-13 10:20:12,423 INFO [zipformer.py:2441] attn_weights_entropy = tensor([5.4913, 5.4945, 5.4992, 5.4918, 5.4836, 5.4995, 5.5007, 5.5011],
548
+ device='cuda:0'), covar=tensor([0.0002, 0.0001, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0003],
549
+ device='cuda:0'), in_proj_covar=tensor([0.0023, 0.0019, 0.0022, 0.0022, 0.0021, 0.0021, 0.0023, 0.0024],
550
+ device='cuda:0'), out_proj_covar=tensor([1.8250e-05, 1.4849e-05, 1.7794e-05, 1.6737e-05, 1.6757e-05, 1.6866e-05,
551
+ 1.7888e-05, 1.9821e-05], device='cuda:0')
552
+ 2026-01-13 10:20:15,906 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=37.05 vs. limit=5.0
553
+ 2026-01-13 10:20:19,551 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=2494.0, num_to_drop=1, layers_to_drop={2}
554
+ 2026-01-13 10:20:21,781 INFO [train.py:895] Epoch 1, batch 2500, loss[loss=1.069, simple_loss=0.6603, pruned_loss=0.7385, over 1275.00 frames. ], tot_loss[loss=1.215, simple_loss=0.7529, pruned_loss=0.842, over 264188.58 frames. ], batch size: 4, lr: 4.73e-02, grad_scale: 8.0
555
+ 2026-01-13 10:20:22,052 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 9.710e+01 1.522e+02 1.854e+02 2.298e+02 9.618e+02, threshold=3.709e+02, percent-clipped=5.0
556
+ 2026-01-13 10:20:25,483 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=2513.0, num_to_drop=0, layers_to_drop=set()
557
+ 2026-01-13 10:20:29,577 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=2526.0, num_to_drop=0, layers_to_drop=set()
558
+ 2026-01-13 10:20:34,908 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=2542.0, num_to_drop=0, layers_to_drop=set()
559
+ 2026-01-13 10:20:37,973 INFO [train.py:895] Epoch 1, batch 2550, loss[loss=1.227, simple_loss=0.7511, pruned_loss=0.8516, over 1272.00 frames. ], tot_loss[loss=1.22, simple_loss=0.7567, pruned_loss=0.8446, over 264230.43 frames. ], batch size: 3, lr: 4.72e-02, grad_scale: 8.0
560
+ 2026-01-13 10:20:41,267 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=2561.0, num_to_drop=0, layers_to_drop=set()
561
+ 2026-01-13 10:20:42,838 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.9202, 2.9227, 2.9302, 2.9280, 2.8807, 2.8947, 2.9328, 2.9000],
562
+ device='cuda:0'), covar=tensor([0.0070, 0.0083, 0.0065, 0.0040, 0.0054, 0.0055, 0.0038, 0.0068],
563
+ device='cuda:0'), in_proj_covar=tensor([0.0033, 0.0036, 0.0033, 0.0031, 0.0033, 0.0033, 0.0030, 0.0033],
564
+ device='cuda:0'), out_proj_covar=tensor([3.3786e-05, 4.0307e-05, 3.7316e-05, 3.3601e-05, 3.1805e-05, 3.3708e-05,
565
+ 3.1461e-05, 3.7077e-05], device='cuda:0')
566
+ 2026-01-13 10:20:44,021 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.98 vs. limit=2.0
567
+ 2026-01-13 10:20:45,479 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=2574.0, num_to_drop=0, layers_to_drop=set()
568
+ 2026-01-13 10:20:50,694 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=34.06 vs. limit=5.0
569
+ 2026-01-13 10:20:54,416 INFO [train.py:895] Epoch 1, batch 2600, loss[loss=1.051, simple_loss=0.6504, pruned_loss=0.726, over 1412.00 frames. ], tot_loss[loss=1.218, simple_loss=0.7566, pruned_loss=0.8413, over 264140.42 frames. ], batch size: 4, lr: 4.71e-02, grad_scale: 8.0
570
+ 2026-01-13 10:20:54,709 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 9.915e+01 1.483e+02 1.787e+02 2.143e+02 3.289e+02, threshold=3.573e+02, percent-clipped=0.0
571
+ 2026-01-13 10:21:05,764 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.47 vs. limit=2.0
572
+ 2026-01-13 10:21:08,432 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.91 vs. limit=2.0
573
+ 2026-01-13 10:21:10,841 INFO [train.py:895] Epoch 1, batch 2650, loss[loss=1.382, simple_loss=0.8562, pruned_loss=0.9536, over 1244.00 frames. ], tot_loss[loss=1.225, simple_loss=0.7623, pruned_loss=0.845, over 263196.61 frames. ], batch size: 5, lr: 4.70e-02, grad_scale: 8.0
574
+ 2026-01-13 10:21:13,365 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.63 vs. limit=2.0
575
+ 2026-01-13 10:21:21,846 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=2685.0, num_to_drop=1, layers_to_drop={1}
576
+ 2026-01-13 10:21:25,491 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=2696.0, num_to_drop=1, layers_to_drop={1}
577
+ 2026-01-13 10:21:27,004 INFO [train.py:895] Epoch 1, batch 2700, loss[loss=1.139, simple_loss=0.729, pruned_loss=0.7749, over 1251.00 frames. ], tot_loss[loss=1.224, simple_loss=0.7622, pruned_loss=0.8441, over 262289.78 frames. ], batch size: 4, lr: 4.69e-02, grad_scale: 8.0
578
+ 2026-01-13 10:21:27,288 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.605e+02 1.959e+02 2.548e+02 5.301e+02, threshold=3.918e+02, percent-clipped=5.0
579
+ 2026-01-13 10:21:32,868 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=107.70 vs. limit=5.0
580
+ 2026-01-13 10:21:33,766 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.6219, 3.6233, 3.6223, 3.6147, 3.6149, 3.6222, 3.6207, 3.6215],
581
+ device='cuda:0'), covar=tensor([0.0014, 0.0017, 0.0012, 0.0010, 0.0009, 0.0010, 0.0008, 0.0011],
582
+ device='cuda:0'), in_proj_covar=tensor([0.0031, 0.0034, 0.0032, 0.0030, 0.0032, 0.0032, 0.0029, 0.0032],
583
+ device='cuda:0'), out_proj_covar=tensor([3.0167e-05, 3.5079e-05, 3.3481e-05, 2.9489e-05, 2.9544e-05, 3.0976e-05,
584
+ 2.8968e-05, 3.3750e-05], device='cuda:0')
585
+ 2026-01-13 10:21:35,210 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=21.42 vs. limit=2.0
586
+ 2026-01-13 10:21:39,444 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.40 vs. limit=2.0
587
+ 2026-01-13 10:21:41,980 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=2746.0, num_to_drop=2, layers_to_drop={0, 2}
588
+ 2026-01-13 10:21:43,576 INFO [train.py:895] Epoch 1, batch 2750, loss[loss=1.241, simple_loss=0.7814, pruned_loss=0.8503, over 1463.00 frames. ], tot_loss[loss=1.219, simple_loss=0.7596, pruned_loss=0.8405, over 262612.42 frames. ], batch size: 4, lr: 4.68e-02, grad_scale: 8.0
589
+ 2026-01-13 10:21:58,370 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=20.57 vs. limit=2.0
590
+ 2026-01-13 10:22:00,210 INFO [train.py:895] Epoch 1, batch 2800, loss[loss=1.357, simple_loss=0.8749, pruned_loss=0.92, over 1267.00 frames. ], tot_loss[loss=1.221, simple_loss=0.7616, pruned_loss=0.841, over 263495.72 frames. ], batch size: 13, lr: 4.67e-02, grad_scale: 8.0
591
+ 2026-01-13 10:22:00,527 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 9.261e+01 1.550e+02 1.862e+02 2.337e+02 3.859e+02, threshold=3.724e+02, percent-clipped=0.0
592
+ 2026-01-13 10:22:01,750 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.01 vs. limit=2.0
593
+ 2026-01-13 10:22:07,950 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=2824.0, num_to_drop=1, layers_to_drop={1}
594
+ 2026-01-13 10:22:08,995 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.27 vs. limit=2.0
595
+ 2026-01-13 10:22:10,653 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=13.74 vs. limit=2.0
596
+ 2026-01-13 10:22:16,655 INFO [train.py:895] Epoch 1, batch 2850, loss[loss=1.149, simple_loss=0.7342, pruned_loss=0.7815, over 1420.00 frames. ], tot_loss[loss=1.225, simple_loss=0.7622, pruned_loss=0.845, over 263020.37 frames. ], batch size: 4, lr: 4.66e-02, grad_scale: 8.0
597
+ 2026-01-13 10:22:18,749 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.4937, 3.4810, 3.5028, 3.4964, 3.5020, 3.5026, 3.4182, 3.5055],
598
+ device='cuda:0'), covar=tensor([0.0020, 0.0018, 0.0011, 0.0022, 0.0013, 0.0017, 0.0013, 0.0023],
599
+ device='cuda:0'), in_proj_covar=tensor([0.0014, 0.0015, 0.0014, 0.0016, 0.0016, 0.0015, 0.0014, 0.0016],
600
+ device='cuda:0'), out_proj_covar=tensor([1.2947e-05, 1.5423e-05, 1.4116e-05, 1.5412e-05, 1.4673e-05, 1.3217e-05,
601
+ 1.3632e-05, 1.5744e-05], device='cuda:0')
602
+ 2026-01-13 10:22:19,445 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.25 vs. limit=2.0
603
+ 2026-01-13 10:22:19,468 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.69 vs. limit=2.0
604
+ 2026-01-13 10:22:27,982 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=2885.0, num_to_drop=2, layers_to_drop={2, 3}
605
+ 2026-01-13 10:22:33,367 INFO [train.py:895] Epoch 1, batch 2900, loss[loss=1.201, simple_loss=0.7426, pruned_loss=0.8295, over 1389.00 frames. ], tot_loss[loss=1.22, simple_loss=0.7573, pruned_loss=0.8418, over 264336.26 frames. ], batch size: 4, lr: 4.65e-02, grad_scale: 8.0
606
+ 2026-01-13 10:22:33,692 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.148e+02 2.115e+02 2.851e+02 3.698e+02 6.618e+02, threshold=5.702e+02, percent-clipped=24.0
607
+ 2026-01-13 10:22:45,641 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=2937.0, num_to_drop=1, layers_to_drop={0}
608
+ 2026-01-13 10:22:46,586 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.3216, 4.5616, 4.4062, 4.5486, 4.3783, 4.5689, 4.5457, 4.4484],
609
+ device='cuda:0'), covar=tensor([0.0013, 0.0013, 0.0012, 0.0010, 0.0014, 0.0011, 0.0016, 0.0010],
610
+ device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0025, 0.0025, 0.0022, 0.0027, 0.0026, 0.0027, 0.0025],
611
+ device='cuda:0'), out_proj_covar=tensor([2.2695e-05, 2.0363e-05, 2.0366e-05, 2.0134e-05, 2.3061e-05, 2.1485e-05,
612
+ 2.3011e-05, 2.1898e-05], device='cuda:0')
613
+ 2026-01-13 10:22:50,130 INFO [train.py:895] Epoch 1, batch 2950, loss[loss=1.149, simple_loss=0.7169, pruned_loss=0.7909, over 1469.00 frames. ], tot_loss[loss=1.209, simple_loss=0.7521, pruned_loss=0.8328, over 263374.76 frames. ], batch size: 4, lr: 4.64e-02, grad_scale: 8.0
614
+ 2026-01-13 10:23:07,148 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=2996.0, num_to_drop=1, layers_to_drop={1}
615
+ 2026-01-13 10:23:07,287 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=35.88 vs. limit=5.0
616
+ 2026-01-13 10:23:07,984 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=2998.0, num_to_drop=2, layers_to_drop={0, 1}
617
+ 2026-01-13 10:23:08,833 INFO [checkpoint.py:74] Saving checkpoint to /kaggle/working/amharic_training/exp_amharic_streaming/checkpoint-3000.pt
618
+ 2026-01-13 10:23:10,775 INFO [train.py:895] Epoch 1, batch 3000, loss[loss=1.26, simple_loss=0.7867, pruned_loss=0.8666, over 1372.00 frames. ], tot_loss[loss=1.207, simple_loss=0.7528, pruned_loss=0.8304, over 263428.51 frames. ], batch size: 4, lr: 4.63e-02, grad_scale: 8.0
619
+ 2026-01-13 10:23:11,068 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.397e+02 1.692e+02 2.036e+02 3.627e+02, threshold=3.384e+02, percent-clipped=0.0
620
+ 2026-01-13 10:23:16,790 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=3019.0, num_to_drop=1, layers_to_drop={1}
621
+ 2026-01-13 10:23:20,644 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.28 vs. limit=2.0
622
+ 2026-01-13 10:23:22,978 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=212.63 vs. limit=5.0
623
+ 2026-01-13 10:23:24,588 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3041.0, num_to_drop=0, layers_to_drop=set()
624
+ 2026-01-13 10:23:25,337 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.3106, 4.3131, 4.3070, 4.2847, 4.1083, 4.2669, 4.3001, 4.2724],
625
+ device='cuda:0'), covar=tensor([0.0412, 0.0359, 0.0324, 0.0210, 0.0123, 0.0245, 0.0198, 0.0181],
626
+ device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0030, 0.0028, 0.0027, 0.0028, 0.0029, 0.0027, 0.0030],
627
+ device='cuda:0'), out_proj_covar=tensor([2.9128e-05, 3.2566e-05, 3.0656e-05, 2.7087e-05, 2.7695e-05, 2.9715e-05,
628
+ 2.7711e-05, 3.1805e-05], device='cuda:0')
629
+ 2026-01-13 10:23:25,658 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3044.0, num_to_drop=1, layers_to_drop={1}
630
+ 2026-01-13 10:23:28,041 INFO [train.py:895] Epoch 1, batch 3050, loss[loss=1.244, simple_loss=0.765, pruned_loss=0.8618, over 1445.00 frames. ], tot_loss[loss=1.21, simple_loss=0.7559, pruned_loss=0.8327, over 264170.82 frames. ], batch size: 4, lr: 4.62e-02, grad_scale: 8.0
631
+ 2026-01-13 10:23:38,083 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=3080.0, num_to_drop=2, layers_to_drop={2, 3}
632
+ 2026-01-13 10:23:41,695 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=26.36 vs. limit=5.0
633
+ 2026-01-13 10:23:45,031 INFO [train.py:895] Epoch 1, batch 3100, loss[loss=1.43, simple_loss=0.8704, pruned_loss=0.9948, over 1273.00 frames. ], tot_loss[loss=1.213, simple_loss=0.7563, pruned_loss=0.8353, over 263694.99 frames. ], batch size: 5, lr: 4.61e-02, grad_scale: 8.0
634
+ 2026-01-13 10:23:45,323 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.916e+02 2.448e+02 3.121e+02 5.261e+02, threshold=4.897e+02, percent-clipped=17.0
635
+ 2026-01-13 10:23:46,328 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.04 vs. limit=2.0
636
+ 2026-01-13 10:23:50,628 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=72.77 vs. limit=5.0
637
+ 2026-01-13 10:23:50,704 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.13 vs. limit=2.0
638
+ 2026-01-13 10:23:52,557 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=34.69 vs. limit=5.0
639
+ 2026-01-13 10:23:59,314 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.1769, 4.0953, 3.9882, 3.9888, 4.1174, 4.1763, 3.4133, 3.9914],
640
+ device='cuda:0'), covar=tensor([0.0390, 0.0345, 0.0616, 0.0463, 0.0438, 0.0608, 0.0363, 0.0774],
641
+ device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0023, 0.0028, 0.0023, 0.0022, 0.0031, 0.0027, 0.0023],
642
+ device='cuda:0'), out_proj_covar=tensor([2.1248e-05, 2.1084e-05, 2.4496e-05, 2.0646e-05, 1.9886e-05, 2.6953e-05,
643
+ 2.5216e-05, 2.1286e-05], device='cuda:0')
644
+ 2026-01-13 10:24:00,966 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.4494, 4.4092, 4.4273, 4.2688, 4.4762, 4.4447, 4.2049, 4.4079],
645
+ device='cuda:0'), covar=tensor([0.0117, 0.0140, 0.0111, 0.0168, 0.0121, 0.0105, 0.0111, 0.0179],
646
+ device='cuda:0'), in_proj_covar=tensor([0.0015, 0.0017, 0.0015, 0.0019, 0.0016, 0.0016, 0.0015, 0.0019],
647
+ device='cuda:0'), out_proj_covar=tensor([1.4466e-05, 1.6598e-05, 1.5521e-05, 1.8507e-05, 1.6040e-05, 1.5113e-05,
648
+ 1.4970e-05, 1.7735e-05], device='cuda:0')
649
+ 2026-01-13 10:24:01,841 INFO [train.py:895] Epoch 1, batch 3150, loss[loss=1.288, simple_loss=0.8229, pruned_loss=0.8765, over 1346.00 frames. ], tot_loss[loss=1.214, simple_loss=0.7574, pruned_loss=0.835, over 262891.73 frames. ], batch size: 6, lr: 4.60e-02, grad_scale: 8.0
650
+ 2026-01-13 10:24:03,878 WARNING [optim.py:385] Scaling gradients by 0.077285535633564, model_norm_threshold=489.67706298828125
651
+ 2026-01-13 10:24:03,974 INFO [optim.py:446] Parameter Dominanting tot_sumsq encoder.encoders.2.encoder.layers.1.bypass_scale with proportion 0.95, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=3.822e+07, grad_sumsq = 3.822e+07, orig_rms_sq=1.000e+00
652
+ 2026-01-13 10:24:04,297 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.94 vs. limit=2.0
653
+ 2026-01-13 10:24:04,675 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=7.75 vs. limit=2.0
654
+ 2026-01-13 10:24:05,284 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=11.97 vs. limit=2.0
655
+ 2026-01-13 10:24:11,688 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3180.0, num_to_drop=0, layers_to_drop=set()
656
+ 2026-01-13 10:24:14,751 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=12.43 vs. limit=2.0
657
+ 2026-01-13 10:24:18,817 INFO [train.py:895] Epoch 1, batch 3200, loss[loss=1.229, simple_loss=0.7776, pruned_loss=0.8401, over 1390.00 frames. ], tot_loss[loss=1.212, simple_loss=0.7572, pruned_loss=0.8336, over 262906.95 frames. ], batch size: 4, lr: 4.59e-02, grad_scale: 8.0
658
+ 2026-01-13 10:24:18,818 INFO [train.py:920] Computing validation loss
659
+ 2026-01-13 10:24:21,454 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.0923, 2.0917, 2.0359, 2.1326, 2.0168, 2.0525, 2.1589, 2.0743],
660
+ device='cuda:0'), covar=tensor([0.0038, 0.0058, 0.0035, 0.0043, 0.0116, 0.0047, 0.0032, 0.0035],
661
+ device='cuda:0'), in_proj_covar=tensor([0.0012, 0.0011, 0.0012, 0.0012, 0.0012, 0.0011, 0.0011, 0.0011],
662
+ device='cuda:0'), out_proj_covar=tensor([1.0008e-05, 1.0021e-05, 9.9525e-06, 1.0299e-05, 1.0480e-05, 9.8945e-06,
663
+ 1.0015e-05, 8.7758e-06], device='cuda:0')
664
+ 2026-01-13 10:24:53,355 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4881, 2.7717, 2.7461, 2.7469, 2.5902, 2.4006, 2.5672, 2.6097],
665
+ device='cuda:0'), covar=tensor([43.7901, 23.3571, 36.4996, 29.2917, 21.8235, 30.6714, 24.7510, 27.0358],
666
+ device='cuda:0'), in_proj_covar=tensor([0.0031, 0.0028, 0.0031, 0.0031, 0.0031, 0.0032, 0.0031, 0.0034],
667
+ device='cuda:0'), out_proj_covar=tensor([3.0289e-05, 2.7310e-05, 2.9528e-05, 2.6026e-05, 3.0210e-05, 2.8163e-05,
668
+ 2.8044e-05, 3.2595e-05], device='cuda:0')
669
+ 2026-01-13 10:24:54,107 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.6637, 3.2292, 2.9032, 3.1189, 2.5525, 3.0720, 2.9521, 2.8913],
670
+ device='cuda:0'), covar=tensor([0.0317, 0.0408, 0.0862, 0.0358, 0.0432, 0.0463, 0.0537, 0.0443],
671
+ device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0022, 0.0025, 0.0022, 0.0025, 0.0025, 0.0026, 0.0023],
672
+ device='cuda:0'), out_proj_covar=tensor([2.0822e-05, 1.7944e-05, 2.0398e-05, 1.9643e-05, 2.0829e-05, 2.0606e-05,
673
+ 2.1996e-05, 2.0334e-05], device='cuda:0')
674
+ 2026-01-13 10:25:10,422 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.4964, 3.5342, 3.5646, 3.4167, 3.4766, 3.5175, 3.2633, 3.4023],
675
+ device='cuda:0'), covar=tensor([0.0167, 0.0110, 0.0073, 0.0101, 0.0202, 0.0136, 0.0089, 0.0093],
676
+ device='cuda:0'), in_proj_covar=tensor([0.0032, 0.0030, 0.0028, 0.0027, 0.0029, 0.0027, 0.0027, 0.0030],
677
+ device='cuda:0'), out_proj_covar=tensor([2.5489e-05, 2.4381e-05, 2.3200e-05, 2.2347e-05, 2.3810e-05, 2.2201e-05,
678
+ 2.4344e-05, 2.4136e-05], device='cuda:0')
679
+ 2026-01-13 10:25:14,166 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.8368, 2.8567, 1.3679, 1.5809, 1.8666, 3.0501, 1.7823, 1.5381],
680
+ device='cuda:0'), covar=tensor([0.0457, 0.0405, 0.0328, 0.0375, 0.0274, 0.0362, 0.0397, 0.0356],
681
+ device='cuda:0'), in_proj_covar=tensor([0.0024, 0.0025, 0.0024, 0.0023, 0.0026, 0.0026, 0.0024, 0.0027],
682
+ device='cuda:0'), out_proj_covar=tensor([2.4218e-05, 2.6261e-05, 2.5588e-05, 2.3380e-05, 2.5061e-05, 2.5089e-05,
683
+ 2.3876e-05, 2.7284e-05], device='cuda:0')
684
+ 2026-01-13 10:25:20,996 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.4891, 2.8813, 2.8359, 2.8100, 2.6678, 2.5007, 2.6867, 2.6357],
685
+ device='cuda:0'), covar=tensor([35.5253, 21.5430, 24.6214, 20.3761, 18.5999, 22.1355, 19.3728, 22.1152],
686
+ device='cuda:0'), in_proj_covar=tensor([0.0031, 0.0028, 0.0031, 0.0031, 0.0031, 0.0032, 0.0031, 0.0034],
687
+ device='cuda:0'), out_proj_covar=tensor([3.0289e-05, 2.7310e-05, 2.9528e-05, 2.6026e-05, 3.0210e-05, 2.8163e-05,
688
+ 2.8044e-05, 3.2595e-05], device='cuda:0')
689
+ 2026-01-13 10:25:31,466 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.4780, 3.4933, 3.5307, 3.4066, 3.4473, 3.4766, 3.3153, 3.3824],
690
+ device='cuda:0'), covar=tensor([0.0171, 0.0116, 0.0087, 0.0102, 0.0181, 0.0124, 0.0091, 0.0107],
691
+ device='cuda:0'), in_proj_covar=tensor([0.0032, 0.0030, 0.0028, 0.0027, 0.0029, 0.0027, 0.0027, 0.0030],
692
+ device='cuda:0'), out_proj_covar=tensor([2.5489e-05, 2.4381e-05, 2.3200e-05, 2.2347e-05, 2.3810e-05, 2.2201e-05,
693
+ 2.4344e-05, 2.4136e-05], device='cuda:0')
694
+ 2026-01-13 10:25:38,773 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8256, 2.7540, 2.8695, 2.8732, 2.8574, 2.8731, 2.8581, 2.7401],
695
+ device='cuda:0'), covar=tensor([0.4337, 0.1850, 0.3623, 0.1922, 0.3070, 0.3128, 0.1778, 0.2466],
696
+ device='cuda:0'), in_proj_covar=tensor([0.0036, 0.0035, 0.0037, 0.0034, 0.0037, 0.0034, 0.0035, 0.0035],
697
+ device='cuda:0'), out_proj_covar=tensor([3.2861e-05, 3.3652e-05, 3.7904e-05, 3.4264e-05, 3.3625e-05, 3.2225e-05,
698
+ 3.3082e-05, 3.5983e-05], device='cuda:0')
699
+ 2026-01-13 10:25:43,106 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.8777, 1.7966, 1.7787, 1.8684, 1.6945, 1.8083, 1.9319, 1.7670],
700
+ device='cuda:0'), covar=tensor([0.0104, 0.0179, 0.0102, 0.0108, 0.0367, 0.0104, 0.0082, 0.0083],
701
+ device='cuda:0'), in_proj_covar=tensor([0.0012, 0.0011, 0.0012, 0.0012, 0.0012, 0.0011, 0.0011, 0.0011],
702
+ device='cuda:0'), out_proj_covar=tensor([1.0008e-05, 1.0021e-05, 9.9525e-06, 1.0299e-05, 1.0480e-05, 9.8945e-06,
703
+ 1.0015e-05, 8.7758e-06], device='cuda:0')
704
+ 2026-01-13 10:26:14,902 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.6965, 3.0956, 3.0169, 2.8024, 1.6484, 2.6187, 1.8870, 2.4230],
705
+ device='cuda:0'), covar=tensor([1.5985, 1.7111, 1.3949, 1.7041, 2.1025, 1.3127, 1.2313, 0.7960],
706
+ device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0023, 0.0026, 0.0022, 0.0023, 0.0030, 0.0028, 0.0023],
707
+ device='cuda:0'), out_proj_covar=tensor([2.2008e-05, 2.1002e-05, 2.3031e-05, 1.9500e-05, 1.8803e-05, 2.4858e-05,
708
+ 2.4334e-05, 2.2178e-05], device='cuda:0')
709
+ 2026-01-13 10:26:20,234 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.9572, 2.0322, 2.1855, 2.1445, 2.1660, 2.0224, 2.1808, 1.9510],
710
+ device='cuda:0'), covar=tensor([0.0116, 0.0086, 0.0103, 0.0086, 0.0099, 0.0076, 0.0142, 0.0075],
711
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0017, 0.0017, 0.0019, 0.0017, 0.0017, 0.0017, 0.0017],
712
+ device='cuda:0'), out_proj_covar=tensor([1.5164e-05, 1.3358e-05, 1.4023e-05, 1.5014e-05, 1.3688e-05, 1.3378e-05,
713
+ 1.4806e-05, 1.4314e-05], device='cuda:0')
714
+ 2026-01-13 10:26:49,876 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.4945, 3.5663, 3.4767, 3.3435, 3.5640, 3.5109, 3.4904, 3.5518],
715
+ device='cuda:0'), covar=tensor([0.0062, 0.0076, 0.0097, 0.0087, 0.0053, 0.0038, 0.0063, 0.0069],
716
+ device='cuda:0'), in_proj_covar=tensor([0.0024, 0.0020, 0.0025, 0.0023, 0.0023, 0.0022, 0.0024, 0.0025],
717
+ device='cuda:0'), out_proj_covar=tensor([1.8982e-05, 1.6386e-05, 1.9365e-05, 1.7535e-05, 1.7344e-05, 1.7117e-05,
718
+ 1.8095e-05, 2.0043e-05], device='cuda:0')
719
+ 2026-01-13 10:27:04,383 INFO [train.py:929] Epoch 1, validation: loss=1.907, simple_loss=1.213, pruned_loss=1.3, over 1639044.00 frames.
720
+ 2026-01-13 10:27:04,383 INFO [train.py:930] Maximum memory allocated so far is 2482MB
721
+ 2026-01-13 10:27:05,139 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.439e+02 1.616e+02 2.030e+02 6.336e+03, threshold=3.233e+02, percent-clipped=4.0
722
+ 2026-01-13 10:27:05,616 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.2854, 4.2518, 4.4732, 4.4580, 4.3681, 4.3739, 4.4799, 3.8823],
723
+ device='cuda:0'), covar=tensor([0.0030, 0.0023, 0.0040, 0.0069, 0.0024, 0.0028, 0.0034, 0.0049],
724
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0017, 0.0017, 0.0019, 0.0017, 0.0017, 0.0018, 0.0018],
725
+ device='cuda:0'), out_proj_covar=tensor([1.5162e-05, 1.3334e-05, 1.3975e-05, 1.4982e-05, 1.3681e-05, 1.3350e-05,
726
+ 1.4814e-05, 1.4257e-05], device='cuda:0')
727
+ 2026-01-13 10:27:11,077 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=47.36 vs. limit=5.0
728
+ 2026-01-13 10:27:14,218 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.8914, 3.7491, 4.0712, 3.7182, 4.1638, 3.8819, 3.1735, 3.6504],
729
+ device='cuda:0'), covar=tensor([0.0108, 0.0133, 0.0233, 0.0260, 0.0189, 0.0139, 0.0127, 0.0160],
730
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0021, 0.0021, 0.0024, 0.0021, 0.0020, 0.0019, 0.0023],
731
+ device='cuda:0'), out_proj_covar=tensor([1.6829e-05, 1.8789e-05, 1.9749e-05, 2.2929e-05, 1.9393e-05, 1.7041e-05,
732
+ 1.6791e-05, 2.0179e-05], device='cuda:0')
733
+ 2026-01-13 10:27:21,865 INFO [train.py:895] Epoch 1, batch 3250, loss[loss=1.068, simple_loss=0.674, pruned_loss=0.7314, over 1316.00 frames. ], tot_loss[loss=1.208, simple_loss=0.7552, pruned_loss=0.8307, over 263101.73 frames. ], batch size: 4, lr: 4.58e-02, grad_scale: 8.0
734
+ 2026-01-13 10:27:22,396 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.59 vs. limit=2.0
735
+ 2026-01-13 10:27:24,452 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.95 vs. limit=2.0
736
+ 2026-01-13 10:27:25,844 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.76 vs. limit=2.0
737
+ 2026-01-13 10:27:31,102 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.65 vs. limit=2.0
738
+ 2026-01-13 10:27:34,542 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=3288.0, num_to_drop=1, layers_to_drop={1}
739
+ 2026-01-13 10:27:36,194 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3293.0, num_to_drop=0, layers_to_drop=set()
740
+ 2026-01-13 10:27:37,652 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=9.54 vs. limit=2.0
741
+ 2026-01-13 10:27:38,744 INFO [train.py:895] Epoch 1, batch 3300, loss[loss=1.434, simple_loss=0.8982, pruned_loss=0.9848, over 1221.00 frames. ], tot_loss[loss=1.203, simple_loss=0.7523, pruned_loss=0.8273, over 262372.38 frames. ], batch size: 6, lr: 4.57e-02, grad_scale: 8.0
742
+ 2026-01-13 10:27:39,399 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.627e+02 1.886e+02 2.477e+02 4.209e+02, threshold=3.772e+02, percent-clipped=8.0
743
+ 2026-01-13 10:27:46,325 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.20 vs. limit=2.0
744
+ 2026-01-13 10:27:52,249 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=3341.0, num_to_drop=0, layers_to_drop=set()
745
+ 2026-01-13 10:27:53,425 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.55 vs. limit=2.0
746
+ 2026-01-13 10:27:55,024 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=3349.0, num_to_drop=2, layers_to_drop={2, 3}
747
+ 2026-01-13 10:27:55,567 INFO [train.py:895] Epoch 1, batch 3350, loss[loss=1.479, simple_loss=0.9234, pruned_loss=1.018, over 1375.00 frames. ], tot_loss[loss=1.209, simple_loss=0.7566, pruned_loss=0.831, over 261735.70 frames. ], batch size: 6, lr: 4.56e-02, grad_scale: 8.0
748
+ 2026-01-13 10:28:01,618 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.08 vs. limit=2.0
749
+ 2026-01-13 10:28:03,772 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3375.0, num_to_drop=0, layers_to_drop=set()
750
+ 2026-01-13 10:28:08,547 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3389.0, num_to_drop=0, layers_to_drop=set()
751
+ 2026-01-13 10:28:10,657 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.12 vs. limit=2.0
752
+ 2026-01-13 10:28:13,187 INFO [train.py:895] Epoch 1, batch 3400, loss[loss=1.177, simple_loss=0.7313, pruned_loss=0.8111, over 1280.00 frames. ], tot_loss[loss=1.207, simple_loss=0.7566, pruned_loss=0.8287, over 262006.68 frames. ], batch size: 3, lr: 4.55e-02, grad_scale: 8.0
753
+ 2026-01-13 10:28:13,425 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.69 vs. limit=2.0
754
+ 2026-01-13 10:28:13,835 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.512e+02 1.857e+02 2.375e+02 5.494e+02, threshold=3.713e+02, percent-clipped=2.0
755
+ 2026-01-13 10:28:18,584 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=44.63 vs. limit=5.0
756
+ 2026-01-13 10:28:26,011 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.9033, 4.2007, 3.9925, 3.9460, 3.7216, 3.8895, 3.7793, 3.4443],
757
+ device='cuda:0'), covar=tensor([0.0071, 0.0016, 0.0015, 0.0028, 0.0182, 0.0040, 0.0321, 0.0334],
758
+ device='cuda:0'), in_proj_covar=tensor([0.0012, 0.0010, 0.0012, 0.0011, 0.0012, 0.0011, 0.0011, 0.0012],
759
+ device='cuda:0'), out_proj_covar=tensor([8.8282e-06, 8.3533e-06, 8.4489e-06, 8.7946e-06, 9.1213e-06, 8.6189e-06,
760
+ 9.2231e-06, 7.6402e-06], device='cuda:0')
761
+ 2026-01-13 10:28:29,845 INFO [zipformer.py:2441] attn_weights_entropy = tensor([5.0500, 4.6595, 5.0173, 4.7896, 4.2318, 4.4319, 4.9856, 4.3924],
762
+ device='cuda:0'), covar=tensor([0.0026, 0.0027, 0.0033, 0.0030, 0.0422, 0.0114, 0.0046, 0.0119],
763
+ device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0027, 0.0024, 0.0026, 0.0021, 0.0026, 0.0026, 0.0026],
764
+ device='cuda:0'), out_proj_covar=tensor([1.9886e-05, 2.3765e-05, 1.8528e-05, 2.1630e-05, 2.0199e-05, 2.3334e-05,
765
+ 2.1817e-05, 2.1424e-05], device='cuda:0')
766
+ 2026-01-13 10:28:30,508 INFO [train.py:895] Epoch 1, batch 3450, loss[loss=1.374, simple_loss=0.8662, pruned_loss=0.9412, over 1220.00 frames. ], tot_loss[loss=1.209, simple_loss=0.7584, pruned_loss=0.8298, over 262555.62 frames. ], batch size: 6, lr: 4.54e-02, grad_scale: 8.0
767
+ 2026-01-13 10:28:31,425 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.57 vs. limit=2.0
768
+ 2026-01-13 10:28:31,643 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=42.74 vs. limit=5.0
769
+ 2026-01-13 10:28:40,580 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=3480.0, num_to_drop=0, layers_to_drop=set()
770
+ 2026-01-13 10:28:47,235 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.12 vs. limit=2.0
771
+ 2026-01-13 10:28:47,779 INFO [train.py:895] Epoch 1, batch 3500, loss[loss=1.146, simple_loss=0.7065, pruned_loss=0.7929, over 1165.00 frames. ], tot_loss[loss=1.2, simple_loss=0.7527, pruned_loss=0.8235, over 263058.78 frames. ], batch size: 3, lr: 4.53e-02, grad_scale: 8.0
772
+ 2026-01-13 10:28:48,461 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 9.982e+01 1.638e+02 1.931e+02 2.549e+02 3.976e+02, threshold=3.862e+02, percent-clipped=1.0
773
+ 2026-01-13 10:28:52,662 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.31 vs. limit=2.0
774
+ 2026-01-13 10:28:54,781 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.63 vs. limit=2.0
775
+ 2026-01-13 10:28:55,875 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.07 vs. limit=2.0
776
+ 2026-01-13 10:28:57,437 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3528.0, num_to_drop=0, layers_to_drop=set()
777
+ 2026-01-13 10:29:00,261 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=7.14 vs. limit=2.0
778
+ 2026-01-13 10:29:01,398 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.40 vs. limit=2.0
779
+ 2026-01-13 10:29:03,184 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.2905, 3.2886, 3.2958, 3.2940, 3.2946, 3.2959, 3.2921, 3.2929],
780
+ device='cuda:0'), covar=tensor([0.0012, 0.0010, 0.0011, 0.0009, 0.0013, 0.0010, 0.0011, 0.0010],
781
+ device='cuda:0'), in_proj_covar=tensor([0.0034, 0.0034, 0.0036, 0.0032, 0.0036, 0.0032, 0.0033, 0.0034],
782
+ device='cuda:0'), out_proj_covar=tensor([3.1300e-05, 3.2067e-05, 3.6001e-05, 3.2149e-05, 3.1937e-05, 3.0261e-05,
783
+ 3.1539e-05, 3.3816e-05], device='cuda:0')
784
+ 2026-01-13 10:29:03,594 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=3546.0, num_to_drop=1, layers_to_drop={0}
785
+ 2026-01-13 10:29:05,158 INFO [train.py:895] Epoch 1, batch 3550, loss[loss=1.158, simple_loss=0.7288, pruned_loss=0.7937, over 1229.00 frames. ], tot_loss[loss=1.195, simple_loss=0.7491, pruned_loss=0.82, over 263251.05 frames. ], batch size: 4, lr: 4.51e-02, grad_scale: 8.0
786
+ 2026-01-13 10:29:09,788 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=80.31 vs. limit=5.0
787
+ 2026-01-13 10:29:09,811 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=8.18 vs. limit=2.0
788
+ 2026-01-13 10:29:09,910 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.85 vs. limit=2.0
789
+ 2026-01-13 10:29:19,655 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=3593.0, num_to_drop=0, layers_to_drop=set()
790
+ 2026-01-13 10:29:22,671 INFO [train.py:895] Epoch 1, batch 3600, loss[loss=1.243, simple_loss=0.7703, pruned_loss=0.8579, over 1429.00 frames. ], tot_loss[loss=1.197, simple_loss=0.7514, pruned_loss=0.821, over 262738.97 frames. ], batch size: 4, lr: 4.50e-02, grad_scale: 8.0
791
+ 2026-01-13 10:29:23,469 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.555e+02 1.854e+02 2.297e+02 4.534e+02, threshold=3.709e+02, percent-clipped=2.0
792
+ 2026-01-13 10:29:24,985 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=3607.0, num_to_drop=2, layers_to_drop={2, 3}
793
+ 2026-01-13 10:29:30,882 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.70 vs. limit=2.0
794
+ 2026-01-13 10:29:30,908 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.41 vs. limit=2.0
795
+ 2026-01-13 10:29:36,808 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3641.0, num_to_drop=0, layers_to_drop=set()
796
+ 2026-01-13 10:29:37,932 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3644.0, num_to_drop=1, layers_to_drop={0}
797
+ 2026-01-13 10:29:40,446 INFO [train.py:895] Epoch 1, batch 3650, loss[loss=1.227, simple_loss=0.7513, pruned_loss=0.851, over 1483.00 frames. ], tot_loss[loss=1.198, simple_loss=0.7524, pruned_loss=0.8221, over 262877.28 frames. ], batch size: 5, lr: 4.49e-02, grad_scale: 8.0
798
+ 2026-01-13 10:29:43,041 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=19.62 vs. limit=2.0
799
+ 2026-01-13 10:29:43,713 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=8.11 vs. limit=2.0
800
+ 2026-01-13 10:29:45,855 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=9.40 vs. limit=2.0
801
+ 2026-01-13 10:29:47,700 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=47.02 vs. limit=5.0
802
+ 2026-01-13 10:29:48,927 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=3675.0, num_to_drop=0, layers_to_drop=set()
803
+ 2026-01-13 10:29:49,295 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=3676.0, num_to_drop=1, layers_to_drop={0}
804
+ 2026-01-13 10:29:55,244 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.70 vs. limit=2.0
805
+ 2026-01-13 10:29:58,245 INFO [train.py:895] Epoch 1, batch 3700, loss[loss=1.093, simple_loss=0.6916, pruned_loss=0.7472, over 1314.00 frames. ], tot_loss[loss=1.194, simple_loss=0.7495, pruned_loss=0.8194, over 263265.48 frames. ], batch size: 4, lr: 4.48e-02, grad_scale: 8.0
806
+ 2026-01-13 10:29:58,907 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.625e+02 2.094e+02 2.750e+02 5.503e+02, threshold=4.188e+02, percent-clipped=8.0
807
+ 2026-01-13 10:30:04,638 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.9356, 3.9346, 3.9361, 3.9234, 3.9324, 3.9164, 3.8254, 3.8886],
808
+ device='cuda:0'), covar=tensor([0.0007, 0.0007, 0.0008, 0.0009, 0.0007, 0.0010, 0.0008, 0.0006],
809
+ device='cuda:0'), in_proj_covar=tensor([0.0021, 0.0022, 0.0021, 0.0021, 0.0023, 0.0023, 0.0022, 0.0024],
810
+ device='cuda:0'), out_proj_covar=tensor([2.1127e-05, 2.2527e-05, 2.1880e-05, 2.0300e-05, 2.1649e-05, 2.1621e-05,
811
+ 2.1020e-05, 2.3746e-05], device='cuda:0')
812
+ 2026-01-13 10:30:06,020 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3723.0, num_to_drop=0, layers_to_drop=set()
813
+ 2026-01-13 10:30:06,213 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.95 vs. limit=2.0
814
+ 2026-01-13 10:30:08,664 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=39.36 vs. limit=5.0
815
+ 2026-01-13 10:30:09,316 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=36.52 vs. limit=5.0
816
+ 2026-01-13 10:30:11,099 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=3737.0, num_to_drop=2, layers_to_drop={2, 3}
817
+ 2026-01-13 10:30:11,278 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=7.13 vs. limit=2.0
818
+ 2026-01-13 10:30:15,905 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.64 vs. limit=2.0
819
+ 2026-01-13 10:30:16,043 INFO [train.py:895] Epoch 1, batch 3750, loss[loss=1.088, simple_loss=0.6812, pruned_loss=0.7469, over 1261.00 frames. ], tot_loss[loss=1.187, simple_loss=0.7452, pruned_loss=0.8145, over 263639.47 frames. ], batch size: 3, lr: 4.47e-02, grad_scale: 8.0
820
+ 2026-01-13 10:30:33,787 INFO [train.py:895] Epoch 1, batch 3800, loss[loss=1.112, simple_loss=0.6871, pruned_loss=0.7686, over 1276.00 frames. ], tot_loss[loss=1.19, simple_loss=0.7475, pruned_loss=0.8165, over 263250.35 frames. ], batch size: 4, lr: 4.46e-02, grad_scale: 8.0
821
+ 2026-01-13 10:30:34,027 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.25 vs. limit=2.0
822
+ 2026-01-13 10:30:34,434 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 9.486e+01 1.479e+02 1.743e+02 2.078e+02 3.716e+02, threshold=3.486e+02, percent-clipped=0.0
823
+ 2026-01-13 10:30:36,501 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.79 vs. limit=2.0
824
+ 2026-01-13 10:30:51,958 INFO [train.py:895] Epoch 1, batch 3850, loss[loss=0.9741, simple_loss=0.6203, pruned_loss=0.664, over 1251.00 frames. ], tot_loss[loss=1.186, simple_loss=0.7443, pruned_loss=0.8134, over 263370.64 frames. ], batch size: 5, lr: 4.45e-02, grad_scale: 8.0
825
+ 2026-01-13 10:30:52,216 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.95 vs. limit=2.0
826
+ 2026-01-13 10:31:00,361 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.30 vs. limit=2.0
827
+ 2026-01-13 10:31:10,032 INFO [train.py:895] Epoch 1, batch 3900, loss[loss=1.117, simple_loss=0.705, pruned_loss=0.7649, over 1270.00 frames. ], tot_loss[loss=1.188, simple_loss=0.7457, pruned_loss=0.8151, over 264414.40 frames. ], batch size: 4, lr: 4.44e-02, grad_scale: 8.0
828
+ 2026-01-13 10:31:10,220 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=39.97 vs. limit=5.0
829
+ 2026-01-13 10:31:10,431 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=3902.0, num_to_drop=1, layers_to_drop={0}
830
+ 2026-01-13 10:31:10,680 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.626e+02 1.988e+02 2.828e+02 5.542e+02, threshold=3.975e+02, percent-clipped=14.0
831
+ 2026-01-13 10:31:11,888 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.12 vs. limit=2.0
832
+ 2026-01-13 10:31:13,720 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.74 vs. limit=2.0
833
+ 2026-01-13 10:31:14,114 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.71 vs. limit=2.0
834
+ 2026-01-13 10:31:17,297 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=9.65 vs. limit=2.0
835
+ 2026-01-13 10:31:19,027 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=3926.0, num_to_drop=0, layers_to_drop=set()
836
+ 2026-01-13 10:31:19,209 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=245.48 vs. limit=5.0
837
+ 2026-01-13 10:31:20,302 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.89 vs. limit=2.0
838
+ 2026-01-13 10:31:21,368 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.4515, 3.8562, 3.9015, 4.1270, 3.2732, 3.7985, 2.9904, 3.3834],
839
+ device='cuda:0'), covar=tensor([0.0016, 0.0012, 0.0014, 0.0007, 0.0014, 0.0010, 0.0013, 0.0012],
840
+ device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0019, 0.0020, 0.0019, 0.0020, 0.0020, 0.0020, 0.0019],
841
+ device='cuda:0'), out_proj_covar=tensor([1.8221e-05, 1.7697e-05, 1.8645e-05, 1.6981e-05, 1.8729e-05, 1.8759e-05,
842
+ 1.7949e-05, 1.7636e-05], device='cuda:0')
843
+ 2026-01-13 10:31:25,744 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=3944.0, num_to_drop=1, layers_to_drop={0}
844
+ 2026-01-13 10:31:28,194 INFO [train.py:895] Epoch 1, batch 3950, loss[loss=1.167, simple_loss=0.7405, pruned_loss=0.7968, over 1336.00 frames. ], tot_loss[loss=1.183, simple_loss=0.7416, pruned_loss=0.8125, over 263864.26 frames. ], batch size: 5, lr: 4.43e-02, grad_scale: 8.0
845
+ 2026-01-13 10:31:36,755 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.74 vs. limit=2.0
846
+ 2026-01-13 10:31:40,036 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.13 vs. limit=2.0
847
+ 2026-01-13 10:31:41,357 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=3987.0, num_to_drop=0, layers_to_drop=set()
848
+ 2026-01-13 10:31:41,513 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=80.70 vs. limit=5.0
849
+ 2026-01-13 10:31:43,064 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=3992.0, num_to_drop=1, layers_to_drop={0}
850
+ 2026-01-13 10:31:45,342 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8272, 2.7909, 2.8479, 2.8097, 2.8467, 2.8483, 2.8369, 2.8407],
851
+ device='cuda:0'), covar=tensor([0.0052, 0.0025, 0.0037, 0.0042, 0.0034, 0.0050, 0.0044, 0.0056],
852
+ device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0020, 0.0025, 0.0025, 0.0020, 0.0028, 0.0026, 0.0028],
853
+ device='cuda:0'), out_proj_covar=tensor([2.3814e-05, 1.8576e-05, 2.3197e-05, 2.3200e-05, 1.7126e-05, 2.4448e-05,
854
+ 2.3300e-05, 2.9172e-05], device='cuda:0')
855
+ 2026-01-13 10:31:46,158 INFO [checkpoint.py:74] Saving checkpoint to /kaggle/working/amharic_training/exp_amharic_streaming/checkpoint-4000.pt
856
+ 2026-01-13 10:31:48,161 INFO [train.py:895] Epoch 1, batch 4000, loss[loss=1.147, simple_loss=0.7084, pruned_loss=0.7924, over 1496.00 frames. ], tot_loss[loss=1.185, simple_loss=0.7432, pruned_loss=0.8129, over 263482.73 frames. ], batch size: 4, lr: 4.42e-02, grad_scale: 8.0
857
+ 2026-01-13 10:31:48,797 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=9.59 vs. limit=2.0
858
+ 2026-01-13 10:31:48,963 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.512e+02 1.751e+02 2.287e+02 4.635e+02, threshold=3.502e+02, percent-clipped=2.0
859
+ 2026-01-13 10:31:49,666 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=48.98 vs. limit=5.0
860
+ 2026-01-13 10:31:53,055 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=130.73 vs. limit=5.0
861
+ 2026-01-13 10:32:00,175 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=4032.0, num_to_drop=1, layers_to_drop={2}
862
+ 2026-01-13 10:32:02,507 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.9832, 4.8560, 4.9991, 4.9909, 4.9126, 4.9941, 4.7292, 4.9780],
863
+ device='cuda:0'), covar=tensor([0.0006, 0.0005, 0.0008, 0.0012, 0.0010, 0.0019, 0.0007, 0.0015],
864
+ device='cuda:0'), in_proj_covar=tensor([0.0025, 0.0023, 0.0025, 0.0027, 0.0025, 0.0028, 0.0027, 0.0028],
865
+ device='cuda:0'), out_proj_covar=tensor([2.3862e-05, 2.1654e-05, 2.4095e-05, 2.2342e-05, 2.5852e-05, 2.4956e-05,
866
+ 2.4479e-05, 2.5347e-05], device='cuda:0')
867
+ 2026-01-13 10:32:03,750 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=15.03 vs. limit=2.0
868
+ 2026-01-13 10:32:04,172 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=116.86 vs. limit=5.0
869
+ 2026-01-13 10:32:05,634 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=9.28 vs. limit=2.0
870
+ 2026-01-13 10:32:05,725 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=42.97 vs. limit=5.0
871
+ 2026-01-13 10:32:07,292 INFO [train.py:895] Epoch 1, batch 4050, loss[loss=1.237, simple_loss=0.8155, pruned_loss=0.8291, over 1148.00 frames. ], tot_loss[loss=1.188, simple_loss=0.7458, pruned_loss=0.8155, over 262667.75 frames. ], batch size: 13, lr: 4.41e-02, grad_scale: 8.0
872
+ 2026-01-13 10:32:09,669 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=32.24 vs. limit=5.0
873
+ 2026-01-13 10:32:10,080 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=33.05 vs. limit=5.0
874
+ 2026-01-13 10:32:11,565 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=157.00 vs. limit=5.0
875
+ 2026-01-13 10:32:13,837 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=214.61 vs. limit=5.0
876
+ 2026-01-13 10:32:15,697 INFO [zipformer.py:2441] attn_weights_entropy = tensor([5.3569, 5.4151, 5.3954, 5.3411, 5.3940, 5.4166, 5.4119, 5.3654],
877
+ device='cuda:0'), covar=tensor([0.0003, 0.0011, 0.0009, 0.0005, 0.0005, 0.0008, 0.0003, 0.0005],
878
+ device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0020, 0.0021, 0.0019, 0.0019, 0.0021, 0.0020, 0.0019],
879
+ device='cuda:0'), out_proj_covar=tensor([2.0746e-05, 2.0880e-05, 2.0733e-05, 2.0192e-05, 2.0582e-05, 1.9906e-05,
880
+ 2.0717e-05, 1.9830e-05], device='cuda:0')
881
+ 2026-01-13 10:32:17,539 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.50 vs. limit=2.0
882
+ 2026-01-13 10:32:17,891 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.57 vs. limit=2.0
883
+ 2026-01-13 10:32:18,269 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.27 vs. limit=2.0
884
+ 2026-01-13 10:32:19,778 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.18 vs. limit=2.0
885
+ 2026-01-13 10:32:22,595 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.1641, 3.7211, 3.4923, 3.4801, 3.9562, 3.9726, 4.1850, 2.1947],
886
+ device='cuda:0'), covar=tensor([0.0054, 0.0095, 0.0122, 0.0063, 0.0043, 0.0065, 0.0045, 0.0190],
887
+ device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0027, 0.0022, 0.0024, 0.0020, 0.0023, 0.0025, 0.0026],
888
+ device='cuda:0'), out_proj_covar=tensor([1.8459e-05, 2.2242e-05, 1.6844e-05, 1.9154e-05, 2.0028e-05, 1.9843e-05,
889
+ 1.8680e-05, 2.1027e-05], device='cuda:0')
890
+ 2026-01-13 10:32:26,200 INFO [train.py:895] Epoch 1, batch 4100, loss[loss=1.27, simple_loss=0.7982, pruned_loss=0.8705, over 1199.00 frames. ], tot_loss[loss=1.194, simple_loss=0.7498, pruned_loss=0.8189, over 262974.63 frames. ], batch size: 4, lr: 4.40e-02, grad_scale: 8.0
891
+ 2026-01-13 10:32:26,933 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.326e+02 1.758e+02 2.217e+02 2.999e+02 7.322e+02, threshold=4.435e+02, percent-clipped=14.0
892
+ 2026-01-13 10:32:28,656 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=7.86 vs. limit=2.0
893
+ 2026-01-13 10:32:30,467 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=41.00 vs. limit=5.0
894
+ 2026-01-13 10:32:30,936 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=68.47 vs. limit=5.0
895
+ 2026-01-13 10:32:34,481 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.86 vs. limit=2.0
896
+ 2026-01-13 10:32:36,364 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.28 vs. limit=2.0
897
+ 2026-01-13 10:32:41,020 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=4141.0, num_to_drop=0, layers_to_drop=set()
898
+ 2026-01-13 10:32:43,809 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.94 vs. limit=2.0
899
+ 2026-01-13 10:32:43,993 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.5863, 4.5637, 4.4651, 4.5537, 4.5732, 4.5623, 4.4679, 4.5202],
900
+ device='cuda:0'), covar=tensor([0.0015, 0.0019, 0.0018, 0.0014, 0.0011, 0.0019, 0.0014, 0.0017],
901
+ device='cuda:0'), in_proj_covar=tensor([0.0036, 0.0038, 0.0037, 0.0033, 0.0036, 0.0037, 0.0032, 0.0034],
902
+ device='cuda:0'), out_proj_covar=tensor([3.3068e-05, 3.7446e-05, 3.6506e-05, 3.2536e-05, 3.3699e-05, 3.5837e-05,
903
+ 3.2313e-05, 3.3725e-05], device='cuda:0')
904
+ 2026-01-13 10:32:44,654 INFO [train.py:895] Epoch 1, batch 4150, loss[loss=1.027, simple_loss=0.6154, pruned_loss=0.7197, over 1393.00 frames. ], tot_loss[loss=1.186, simple_loss=0.7441, pruned_loss=0.8136, over 263599.32 frames. ], batch size: 4, lr: 4.39e-02, grad_scale: 8.0
905
+ 2026-01-13 10:32:47,759 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.49 vs. limit=2.0
906
+ 2026-01-13 10:32:50,436 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=88.70 vs. limit=5.0
907
+ 2026-01-13 10:32:53,560 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=38.98 vs. limit=5.0
908
+ 2026-01-13 10:33:00,677 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.25 vs. limit=2.0
909
+ 2026-01-13 10:33:03,260 INFO [train.py:895] Epoch 1, batch 4200, loss[loss=1.086, simple_loss=0.6895, pruned_loss=0.741, over 1195.00 frames. ], tot_loss[loss=1.18, simple_loss=0.7406, pruned_loss=0.8096, over 263532.06 frames. ], batch size: 4, lr: 4.38e-02, grad_scale: 8.0
910
+ 2026-01-13 10:33:03,725 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=4202.0, num_to_drop=1, layers_to_drop={0}
911
+ 2026-01-13 10:33:03,764 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=4202.0, num_to_drop=0, layers_to_drop=set()
912
+ 2026-01-13 10:33:04,010 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.875e+02 2.306e+02 2.868e+02 9.143e+02, threshold=4.613e+02, percent-clipped=1.0
913
+ 2026-01-13 10:33:04,965 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.29 vs. limit=2.0
914
+ 2026-01-13 10:33:09,419 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.9061, 2.8358, 2.9443, 2.8948, 2.8953, 2.9784, 2.8865, 2.9173],
915
+ device='cuda:0'), covar=tensor([0.0018, 0.0017, 0.0020, 0.0024, 0.0018, 0.0017, 0.0024, 0.0020],
916
+ device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0018, 0.0019, 0.0021, 0.0020, 0.0020, 0.0018, 0.0018],
917
+ device='cuda:0'), out_proj_covar=tensor([1.4807e-05, 1.2855e-05, 1.4677e-05, 1.5681e-05, 1.4548e-05, 1.4218e-05,
918
+ 1.3783e-05, 1.3772e-05], device='cuda:0')
919
+ 2026-01-13 10:33:15,432 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.1165, 2.3187, 2.3144, 2.2894, 2.2900, 2.4750, 2.2556, 2.3678],
920
+ device='cuda:0'), covar=tensor([0.0076, 0.0038, 0.0068, 0.0059, 0.0054, 0.0041, 0.0061, 0.0041],
921
+ device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0018, 0.0019, 0.0021, 0.0020, 0.0020, 0.0018, 0.0018],
922
+ device='cuda:0'), out_proj_covar=tensor([1.4771e-05, 1.2848e-05, 1.4635e-05, 1.5618e-05, 1.4478e-05, 1.4219e-05,
923
+ 1.3710e-05, 1.3736e-05], device='cuda:0')
924
+ 2026-01-13 10:33:19,631 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.5655, 3.5843, 3.5928, 3.5937, 3.5948, 3.5904, 3.5888, 3.5946],
925
+ device='cuda:0'), covar=tensor([0.0012, 0.0015, 0.0016, 0.0016, 0.0011, 0.0015, 0.0016, 0.0017],
926
+ device='cuda:0'), in_proj_covar=tensor([0.0024, 0.0021, 0.0024, 0.0023, 0.0024, 0.0023, 0.0026, 0.0024],
927
+ device='cuda:0'), out_proj_covar=tensor([1.7263e-05, 1.5414e-05, 1.7525e-05, 1.6667e-05, 1.6548e-05, 1.6729e-05,
928
+ 1.8032e-05, 1.7893e-05], device='cuda:0')
929
+ 2026-01-13 10:33:21,370 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=4250.0, num_to_drop=1, layers_to_drop={0}
930
+ 2026-01-13 10:33:21,653 INFO [train.py:895] Epoch 1, batch 4250, loss[loss=1.196, simple_loss=0.7423, pruned_loss=0.8249, over 1445.00 frames. ], tot_loss[loss=1.177, simple_loss=0.7388, pruned_loss=0.8075, over 263232.83 frames. ], batch size: 4, lr: 4.36e-02, grad_scale: 8.0
931
+ 2026-01-13 10:33:23,701 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.79 vs. limit=2.0
932
+ 2026-01-13 10:33:27,167 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.61 vs. limit=2.0
933
+ 2026-01-13 10:33:30,419 INFO [zipformer.py:2441] attn_weights_entropy = tensor([4.2813, 4.2601, 4.3124, 4.2739, 4.2495, 4.3201, 4.2639, 4.3064],
934
+ device='cuda:0'), covar=tensor([0.0025, 0.0010, 0.0009, 0.0023, 0.0013, 0.0009, 0.0010, 0.0011],
935
+ device='cuda:0'), in_proj_covar=tensor([0.0021, 0.0019, 0.0020, 0.0022, 0.0022, 0.0021, 0.0019, 0.0019],
936
+ device='cuda:0'), out_proj_covar=tensor([1.5774e-05, 1.3534e-05, 1.5501e-05, 1.7015e-05, 1.5887e-05, 1.5369e-05,
937
+ 1.4568e-05, 1.4559e-05], device='cuda:0')
938
+ 2026-01-13 10:33:30,591 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.50 vs. limit=2.0
939
+ 2026-01-13 10:33:33,655 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=4282.0, num_to_drop=0, layers_to_drop=set()
940
+ 2026-01-13 10:33:34,529 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=17.91 vs. limit=2.0
941
+ 2026-01-13 10:33:35,914 INFO [zipformer.py:1188] warmup_begin=2666.7, warmup_end=3333.3, batch_count=4288.0, num_to_drop=0, layers_to_drop=set()
942
+ 2026-01-13 10:33:40,755 INFO [train.py:895] Epoch 1, batch 4300, loss[loss=1.241, simple_loss=0.7928, pruned_loss=0.8448, over 1373.00 frames. ], tot_loss[loss=1.177, simple_loss=0.7397, pruned_loss=0.8072, over 263664.55 frames. ], batch size: 4, lr: 4.35e-02, grad_scale: 8.0
+ 2026-01-13 10:33:41,456 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.561e+02 1.979e+02 2.485e+02 4.430e+02, threshold=3.958e+02, percent-clipped=0.0
+ 2026-01-13 10:33:42,704 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.34 vs. limit=2.0
+ 2026-01-13 10:33:46,758 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.77 vs. limit=2.0
+ 2026-01-13 10:33:46,787 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.61 vs. limit=2.0
+ 2026-01-13 10:33:51,537 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.50 vs. limit=2.0
+ 2026-01-13 10:33:52,074 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=4332.0, num_to_drop=1, layers_to_drop={0}
+ 2026-01-13 10:33:52,954 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.01 vs. limit=2.0
+ 2026-01-13 10:33:56,956 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=5.73 vs. limit=2.0
+ 2026-01-13 10:33:58,241 INFO [zipformer.py:1188] warmup_begin=3333.3, warmup_end=4000.0, batch_count=4349.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:33:58,844 INFO [train.py:895] Epoch 1, batch 4350, loss[loss=1.26, simple_loss=0.7543, pruned_loss=0.8827, over 1377.00 frames. ], tot_loss[loss=1.179, simple_loss=0.7404, pruned_loss=0.8085, over 263340.42 frames. ], batch size: 4, lr: 4.34e-02, grad_scale: 8.0
+ 2026-01-13 10:34:03,811 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=75.20 vs. limit=5.0
+ 2026-01-13 10:34:09,032 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=227.44 vs. limit=5.0
+ 2026-01-13 10:34:09,067 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.28 vs. limit=2.0
+ 2026-01-13 10:34:09,545 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=4380.0, num_to_drop=1, layers_to_drop={0}
+ 2026-01-13 10:34:11,143 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=76.82 vs. limit=5.0
+ 2026-01-13 10:34:17,372 INFO [train.py:895] Epoch 1, batch 4400, loss[loss=1.119, simple_loss=0.702, pruned_loss=0.768, over 1267.00 frames. ], tot_loss[loss=1.184, simple_loss=0.7429, pruned_loss=0.8121, over 262860.21 frames. ], batch size: 4, lr: 4.33e-02, grad_scale: 8.0
+ 2026-01-13 10:34:18,083 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.971e+02 2.587e+02 3.075e+02 5.533e+02, threshold=5.173e+02, percent-clipped=6.0
+ 2026-01-13 10:34:32,088 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=14.52 vs. limit=2.0
+ 2026-01-13 10:34:35,496 INFO [train.py:895] Epoch 1, batch 4450, loss[loss=1.173, simple_loss=0.7364, pruned_loss=0.8051, over 1162.00 frames. ], tot_loss[loss=1.189, simple_loss=0.7475, pruned_loss=0.8157, over 262614.91 frames. ], batch size: 3, lr: 4.32e-02, grad_scale: 8.0
+ 2026-01-13 10:34:39,379 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.46 vs. limit=2.0
+ 2026-01-13 10:34:39,837 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.29 vs. limit=2.0
+ 2026-01-13 10:34:47,445 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=267.78 vs. limit=5.0
+ 2026-01-13 10:34:49,209 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=44.49 vs. limit=5.0
+ 2026-01-13 10:34:52,305 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=4497.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:34:53,795 INFO [train.py:895] Epoch 1, batch 4500, loss[loss=1.372, simple_loss=0.8574, pruned_loss=0.9434, over 1247.00 frames. ], tot_loss[loss=1.187, simple_loss=0.7459, pruned_loss=0.8143, over 262730.18 frames. ], batch size: 4, lr: 4.31e-02, grad_scale: 8.0
+ 2026-01-13 10:34:54,485 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.209e+02 1.612e+02 2.047e+02 2.693e+02 5.371e+02, threshold=4.094e+02, percent-clipped=1.0
+ 2026-01-13 10:34:59,949 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.81 vs. limit=2.0
+ 2026-01-13 10:35:00,647 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.20 vs. limit=2.0
+ 2026-01-13 10:35:04,389 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.95 vs. limit=2.0
+ 2026-01-13 10:35:11,122 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=24.50 vs. limit=2.0
+ 2026-01-13 10:35:11,955 INFO [train.py:895] Epoch 1, batch 4550, loss[loss=1.047, simple_loss=0.6393, pruned_loss=0.7277, over 1403.00 frames. ], tot_loss[loss=1.184, simple_loss=0.7451, pruned_loss=0.8119, over 262387.46 frames. ], batch size: 4, lr: 4.30e-02, grad_scale: 8.0
+ 2026-01-13 10:35:17,068 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.86 vs. limit=2.0
+ 2026-01-13 10:35:23,302 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=4582.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:35:30,336 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.08 vs. limit=2.0
+ 2026-01-13 10:35:30,404 INFO [train.py:895] Epoch 1, batch 4600, loss[loss=1.397, simple_loss=0.902, pruned_loss=0.9455, over 1303.00 frames. ], tot_loss[loss=1.185, simple_loss=0.7455, pruned_loss=0.8121, over 262381.43 frames. ], batch size: 8, lr: 4.29e-02, grad_scale: 8.0
+ 2026-01-13 10:35:31,199 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.594e+02 1.891e+02 2.344e+02 4.827e+02, threshold=3.783e+02, percent-clipped=2.0
+ 2026-01-13 10:35:33,630 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=40.07 vs. limit=5.0
+ 2026-01-13 10:35:36,930 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.77 vs. limit=2.0
+ 2026-01-13 10:35:41,224 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=4630.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:35:45,417 INFO [zipformer.py:2441] attn_weights_entropy = tensor([5.4914, 5.5853, 5.5721, 5.7775, 4.7584, 5.7898, 2.5770, 4.8956],
+ device='cuda:0'), covar=tensor([7.7757e-05, 4.2139e-05, 1.3114e-04, 1.2053e-04, 1.7132e-04, 1.0314e-04,
+ 7.1434e-04, 4.4227e-05], device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0018, 0.0017, 0.0019, 0.0019, 0.0020, 0.0020, 0.0016],
+ device='cuda:0'), out_proj_covar=tensor([1.4744e-05, 1.4569e-05, 1.5659e-05, 1.5747e-05, 1.7227e-05, 1.7476e-05,
+ 1.7104e-05, 1.4096e-05], device='cuda:0')
+ 2026-01-13 10:35:46,497 INFO [zipformer.py:1188] warmup_begin=1333.3, warmup_end=2000.0, batch_count=4644.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:35:46,769 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.29 vs. limit=2.0
+ 2026-01-13 10:35:47,473 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.12 vs. limit=2.0
+ 2026-01-13 10:35:49,067 INFO [train.py:895] Epoch 1, batch 4650, loss[loss=1.309, simple_loss=0.8157, pruned_loss=0.9012, over 1351.00 frames. ], tot_loss[loss=1.184, simple_loss=0.7446, pruned_loss=0.8113, over 262974.13 frames. ], batch size: 5, lr: 4.28e-02, grad_scale: 8.0
+ 2026-01-13 10:35:53,966 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.5309, 3.4526, 3.0889, 3.1806, 3.5433, 3.5304, 2.7653, 3.5101],
+ device='cuda:0'), covar=tensor([0.0065, 0.0042, 0.0080, 0.0092, 0.0041, 0.0055, 0.0102, 0.0103],
+ device='cuda:0'), in_proj_covar=tensor([0.0023, 0.0018, 0.0023, 0.0027, 0.0025, 0.0023, 0.0025, 0.0026],
+ device='cuda:0'), out_proj_covar=tensor([2.1878e-05, 1.6682e-05, 2.1995e-05, 2.5147e-05, 2.2548e-05, 1.9532e-05,
+ 2.0176e-05, 2.3421e-05], device='cuda:0')
+ 2026-01-13 10:36:02,748 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.8546, 3.8632, 3.8521, 3.8551, 3.8677, 3.8642, 3.8472, 3.8437],
+ device='cuda:0'), covar=tensor([0.0029, 0.0086, 0.0084, 0.0060, 0.0055, 0.0052, 0.0053, 0.0051],
+ device='cuda:0'), in_proj_covar=tensor([0.0028, 0.0030, 0.0033, 0.0029, 0.0034, 0.0030, 0.0030, 0.0029],
+ device='cuda:0'), out_proj_covar=tensor([2.6643e-05, 2.8653e-05, 3.1331e-05, 2.6188e-05, 2.9164e-05, 2.6111e-05,
+ 2.7445e-05, 2.8144e-05], device='cuda:0')
+ 2026-01-13 10:36:03,300 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=8.52 vs. limit=2.0
+ 2026-01-13 10:36:07,638 INFO [train.py:895] Epoch 1, batch 4700, loss[loss=1.028, simple_loss=0.6499, pruned_loss=0.7025, over 1216.00 frames. ], tot_loss[loss=1.187, simple_loss=0.747, pruned_loss=0.8134, over 262319.63 frames. ], batch size: 3, lr: 4.27e-02, grad_scale: 8.0
+ 2026-01-13 10:36:08,437 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.191e+02 1.680e+02 1.990e+02 2.499e+02 5.450e+02, threshold=3.981e+02, percent-clipped=7.0
+ 2026-01-13 10:36:08,533 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.2125, 3.6100, 2.6900, 3.5332, 2.4236, 3.9396, 3.6871, 2.4241],
+ device='cuda:0'), covar=tensor([0.0059, 0.0027, 0.0035, 0.0038, 0.0038, 0.0030, 0.0027, 0.0051],
+ device='cuda:0'), in_proj_covar=tensor([0.0033, 0.0033, 0.0027, 0.0029, 0.0032, 0.0034, 0.0031, 0.0028],
+ device='cuda:0'), out_proj_covar=tensor([2.3714e-05, 2.4895e-05, 2.1100e-05, 2.2095e-05, 2.4066e-05, 2.6150e-05,
+ 2.4080e-05, 2.1768e-05], device='cuda:0')
+ 2026-01-13 10:36:10,186 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.95 vs. limit=2.0
+ 2026-01-13 10:36:25,401 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.3901, 4.1124, 4.1117, 4.6799, 3.8299, 2.9880, 4.6175, 4.2230],
+ device='cuda:0'), covar=tensor([0.0228, 0.0063, 0.0208, 0.0049, 0.0500, 0.0148, 0.0050, 0.0126],
+ device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0025, 0.0022, 0.0023, 0.0019, 0.0023, 0.0023, 0.0024],
+ device='cuda:0'), out_proj_covar=tensor([1.7890e-05, 2.0032e-05, 1.6661e-05, 1.7521e-05, 1.9089e-05, 1.7678e-05,
+ 1.7537e-05, 1.8897e-05], device='cuda:0')
+ 2026-01-13 10:36:26,050 INFO [train.py:895] Epoch 1, batch 4750, loss[loss=1.271, simple_loss=0.8169, pruned_loss=0.8627, over 1316.00 frames. ], tot_loss[loss=1.186, simple_loss=0.7482, pruned_loss=0.8119, over 261252.44 frames. ], batch size: 8, lr: 4.26e-02, grad_scale: 8.0
+ 2026-01-13 10:36:32,556 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.32 vs. limit=2.0
+ 2026-01-13 10:36:34,493 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.81 vs. limit=2.0
+ 2026-01-13 10:36:36,057 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.74 vs. limit=2.0
+ 2026-01-13 10:36:40,371 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.90 vs. limit=2.0
+ 2026-01-13 10:36:43,109 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=4797.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:36:44,732 INFO [train.py:895] Epoch 1, batch 4800, loss[loss=1.131, simple_loss=0.7211, pruned_loss=0.7707, over 1355.00 frames. ], tot_loss[loss=1.184, simple_loss=0.7467, pruned_loss=0.8107, over 261939.98 frames. ], batch size: 4, lr: 4.25e-02, grad_scale: 8.0
+ 2026-01-13 10:36:44,733 INFO [train.py:920] Computing validation loss
+ 2026-01-13 10:37:01,511 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.0090, 3.2246, 3.3864, 3.1839, 3.2414, 3.2670, 3.1820, 3.0521],
+ device='cuda:0'), covar=tensor([0.0030, 0.0068, 0.0035, 0.0029, 0.0017, 0.0033, 0.0026, 0.0041],
+ device='cuda:0'), in_proj_covar=tensor([0.0019, 0.0018, 0.0021, 0.0019, 0.0019, 0.0020, 0.0020, 0.0019],
+ device='cuda:0'), out_proj_covar=tensor([2.1800e-05, 2.0878e-05, 2.2693e-05, 2.0153e-05, 2.1205e-05, 2.0251e-05,
+ 2.0651e-05, 1.9828e-05], device='cuda:0')
+ 2026-01-13 10:37:10,534 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7878, 2.6021, 2.3753, 2.9454, 2.3695, 2.6983, 2.6208, 2.6622],
+ device='cuda:0'), covar=tensor([0.0061, 0.0073, 0.0120, 0.0051, 0.0077, 0.0091, 0.0066, 0.0069],
+ device='cuda:0'), in_proj_covar=tensor([0.0030, 0.0029, 0.0026, 0.0025, 0.0029, 0.0030, 0.0028, 0.0025],
+ device='cuda:0'), out_proj_covar=tensor([2.1508e-05, 2.1964e-05, 1.9926e-05, 1.9699e-05, 2.1619e-05, 2.3216e-05,
+ 2.1513e-05, 1.9227e-05], device='cuda:0')
+ 2026-01-13 10:37:11,350 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.1759, 2.1767, 1.8506, 2.1872, 2.1912, 2.1905, 2.1908, 2.1657],
+ device='cuda:0'), covar=tensor([0.0116, 0.0125, 0.0074, 0.0151, 0.0079, 0.0119, 0.0209, 0.0116],
+ device='cuda:0'), in_proj_covar=tensor([0.0011, 0.0011, 0.0010, 0.0010, 0.0011, 0.0011, 0.0010, 0.0011],
+ device='cuda:0'), out_proj_covar=tensor([6.6044e-06, 7.5213e-06, 6.0950e-06, 6.6026e-06, 6.6991e-06, 6.9559e-06,
+ 6.6536e-06, 6.6166e-06], device='cuda:0')
+ 2026-01-13 10:37:31,159 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.3343, 3.3608, 3.1668, 3.0758, 3.4860, 3.4123, 2.9876, 3.2163],
+ device='cuda:0'), covar=tensor([0.0064, 0.0032, 0.0047, 0.0044, 0.0040, 0.0026, 0.0131, 0.0042],
+ device='cuda:0'), in_proj_covar=tensor([0.0032, 0.0031, 0.0029, 0.0027, 0.0031, 0.0029, 0.0028, 0.0030],
+ device='cuda:0'), out_proj_covar=tensor([2.8492e-05, 3.1128e-05, 2.8447e-05, 2.5465e-05, 2.9507e-05, 2.8195e-05,
+ 2.7682e-05, 3.0844e-05], device='cuda:0')
+ 2026-01-13 10:37:32,992 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.5865, 3.0030, 2.8581, 3.3344, 2.0313, 3.1664, 1.7502, 2.5258],
+ device='cuda:0'), covar=tensor([0.0027, 0.0022, 0.0020, 0.0020, 0.0028, 0.0021, 0.0040, 0.0022],
+ device='cuda:0'), in_proj_covar=tensor([0.0017, 0.0018, 0.0017, 0.0019, 0.0018, 0.0020, 0.0019, 0.0017],
+ device='cuda:0'), out_proj_covar=tensor([1.4237e-05, 1.4390e-05, 1.5396e-05, 1.5170e-05, 1.6643e-05, 1.7065e-05,
+ 1.6675e-05, 1.3692e-05], device='cuda:0')
+ 2026-01-13 10:37:41,705 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.7909, 2.5901, 2.3404, 2.8833, 2.3714, 2.5435, 2.4945, 2.7030],
+ device='cuda:0'), covar=tensor([0.0072, 0.0078, 0.0143, 0.0045, 0.0084, 0.0119, 0.0077, 0.0066],
+ device='cuda:0'), in_proj_covar=tensor([0.0030, 0.0029, 0.0026, 0.0025, 0.0029, 0.0030, 0.0028, 0.0025],
+ device='cuda:0'), out_proj_covar=tensor([2.1508e-05, 2.1964e-05, 1.9926e-05, 1.9699e-05, 2.1619e-05, 2.3216e-05,
+ 2.1513e-05, 1.9227e-05], device='cuda:0')
+ 2026-01-13 10:37:43,468 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.8079, 1.8679, 1.8005, 1.9504, 1.8884, 1.9111, 1.7725, 1.9049],
+ device='cuda:0'), covar=tensor([0.0042, 0.0049, 0.0044, 0.0034, 0.0056, 0.0036, 0.0071, 0.0045],
+ device='cuda:0'), in_proj_covar=tensor([0.0016, 0.0016, 0.0016, 0.0016, 0.0017, 0.0016, 0.0017, 0.0015],
+ device='cuda:0'), out_proj_covar=tensor([1.1713e-05, 1.2623e-05, 1.2488e-05, 1.2891e-05, 1.2336e-05, 1.2159e-05,
+ 1.2458e-05, 1.1383e-05], device='cuda:0')
+ 2026-01-13 10:38:01,234 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.1546, 2.2613, 1.8866, 2.3452, 2.3043, 2.1577, 2.1567, 2.3552],
+ device='cuda:0'), covar=tensor([0.0016, 0.0014, 0.0022, 0.0013, 0.0017, 0.0013, 0.0014, 0.0011],
+ device='cuda:0'), in_proj_covar=tensor([0.0027, 0.0027, 0.0028, 0.0027, 0.0026, 0.0024, 0.0028, 0.0025],
+ device='cuda:0'), out_proj_covar=tensor([1.5627e-05, 1.5376e-05, 1.7020e-05, 1.6921e-05, 1.6003e-05, 1.4819e-05,
+ 1.6133e-05, 1.4859e-05], device='cuda:0')
+ 2026-01-13 10:38:10,971 INFO [zipformer.py:2441] attn_weights_entropy = tensor([1.9508, 1.9857, 1.9748, 2.0607, 2.0219, 2.0495, 1.8876, 2.0233],
+ device='cuda:0'), covar=tensor([0.0047, 0.0063, 0.0050, 0.0039, 0.0055, 0.0040, 0.0084, 0.0049],
+ device='cuda:0'), in_proj_covar=tensor([0.0016, 0.0016, 0.0016, 0.0016, 0.0017, 0.0016, 0.0017, 0.0015],
+ device='cuda:0'), out_proj_covar=tensor([1.1713e-05, 1.2623e-05, 1.2488e-05, 1.2891e-05, 1.2336e-05, 1.2159e-05,
+ 1.2458e-05, 1.1383e-05], device='cuda:0')
+ 2026-01-13 10:38:14,397 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.0155, 3.1568, 3.3060, 3.1460, 3.1950, 3.2526, 3.1431, 3.0428],
+ device='cuda:0'), covar=tensor([0.0033, 0.0076, 0.0065, 0.0028, 0.0019, 0.0042, 0.0030, 0.0051],
+ device='cuda:0'), in_proj_covar=tensor([0.0019, 0.0018, 0.0021, 0.0019, 0.0019, 0.0020, 0.0020, 0.0019],
+ device='cuda:0'), out_proj_covar=tensor([2.1800e-05, 2.0878e-05, 2.2693e-05, 2.0153e-05, 2.1205e-05, 2.0251e-05,
+ 2.0651e-05, 1.9828e-05], device='cuda:0')
+ 2026-01-13 10:38:19,320 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.2950, 3.3951, 3.2007, 3.0596, 3.5256, 3.4273, 3.0135, 3.1705],
+ device='cuda:0'), covar=tensor([0.0190, 0.0022, 0.0132, 0.0071, 0.0055, 0.0031, 0.0275, 0.0181],
+ device='cuda:0'), in_proj_covar=tensor([0.0032, 0.0031, 0.0029, 0.0027, 0.0031, 0.0029, 0.0028, 0.0030],
+ device='cuda:0'), out_proj_covar=tensor([2.8492e-05, 3.1128e-05, 2.8447e-05, 2.5465e-05, 2.9507e-05, 2.8195e-05,
+ 2.7682e-05, 3.0844e-05], device='cuda:0')
+ 2026-01-13 10:38:26,955 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8778, 2.8324, 2.8765, 2.8217, 2.7820, 2.8618, 2.8022, 2.8951],
+ device='cuda:0'), covar=tensor([0.0022, 0.0014, 0.0017, 0.0020, 0.0021, 0.0017, 0.0020, 0.0016],
+ device='cuda:0'), in_proj_covar=tensor([0.0021, 0.0021, 0.0022, 0.0025, 0.0021, 0.0023, 0.0023, 0.0022],
+ device='cuda:0'), out_proj_covar=tensor([1.9277e-05, 1.9019e-05, 1.8753e-05, 2.0470e-05, 1.9448e-05, 1.9067e-05,
+ 1.9646e-05, 2.0193e-05], device='cuda:0')
+ 2026-01-13 10:38:32,677 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.5192, 3.5469, 3.5447, 3.5613, 3.5470, 3.5372, 3.5106, 3.5537],
+ device='cuda:0'), covar=tensor([0.0049, 0.0045, 0.0053, 0.0045, 0.0036, 0.0044, 0.0062, 0.0052],
+ device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0024, 0.0028, 0.0025, 0.0026, 0.0027, 0.0027, 0.0029],
+ device='cuda:0'), out_proj_covar=tensor([2.0328e-05, 1.7976e-05, 2.0122e-05, 1.9760e-05, 2.0012e-05, 1.9180e-05,
+ 1.9563e-05, 2.1516e-05], device='cuda:0')
+ 2026-01-13 10:38:55,265 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.6119, 2.6306, 2.6390, 2.6396, 2.6392, 2.6264, 2.6114, 2.6435],
+ device='cuda:0'), covar=tensor([0.0049, 0.0047, 0.0056, 0.0040, 0.0039, 0.0047, 0.0052, 0.0048],
+ device='cuda:0'), in_proj_covar=tensor([0.0026, 0.0024, 0.0028, 0.0025, 0.0026, 0.0027, 0.0027, 0.0029],
+ device='cuda:0'), out_proj_covar=tensor([2.0328e-05, 1.7976e-05, 2.0122e-05, 1.9760e-05, 2.0012e-05, 1.9180e-05,
+ 1.9563e-05, 2.1516e-05], device='cuda:0')
+ 2026-01-13 10:38:58,563 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.5182, 3.8917, 3.4344, 3.9973, 3.5635, 3.3568, 3.9271, 3.5998],
+ device='cuda:0'), covar=tensor([0.0565, 0.0074, 0.0097, 0.0164, 0.0204, 0.0096, 0.0084, 0.0082],
+ device='cuda:0'), in_proj_covar=tensor([0.0022, 0.0023, 0.0021, 0.0022, 0.0019, 0.0022, 0.0023, 0.0023],
+ device='cuda:0'), out_proj_covar=tensor([1.7168e-05, 1.8873e-05, 1.6113e-05, 1.6991e-05, 1.8598e-05, 1.7497e-05,
+ 1.6766e-05, 1.8513e-05], device='cuda:0')
+ 2026-01-13 10:38:58,591 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.9234, 2.6781, 2.4468, 3.0021, 2.4226, 2.5620, 2.6053, 2.8287],
+ device='cuda:0'), covar=tensor([0.0051, 0.0067, 0.0075, 0.0049, 0.0072, 0.0067, 0.0061, 0.0046],
+ device='cuda:0'), in_proj_covar=tensor([0.0030, 0.0029, 0.0026, 0.0025, 0.0029, 0.0030, 0.0028, 0.0025],
+ device='cuda:0'), out_proj_covar=tensor([2.1508e-05, 2.1964e-05, 1.9926e-05, 1.9699e-05, 2.1619e-05, 2.3216e-05,
+ 2.1513e-05, 1.9227e-05], device='cuda:0')
+ 2026-01-13 10:39:02,153 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8975, 2.8472, 2.8996, 2.8501, 2.8245, 2.8853, 2.8220, 2.9162],
+ device='cuda:0'), covar=tensor([0.0013, 0.0011, 0.0012, 0.0016, 0.0016, 0.0014, 0.0015, 0.0013],
+ device='cuda:0'), in_proj_covar=tensor([0.0021, 0.0021, 0.0022, 0.0025, 0.0021, 0.0023, 0.0023, 0.0022],
+ device='cuda:0'), out_proj_covar=tensor([1.9277e-05, 1.9019e-05, 1.8753e-05, 2.0470e-05, 1.9448e-05, 1.9067e-05,
+ 1.9646e-05, 2.0193e-05], device='cuda:0')
+ 2026-01-13 10:39:14,315 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.9986, 3.1080, 1.8068, 2.6216, 2.6956, 2.3853, 2.5772, 2.0147],
+ device='cuda:0'), covar=tensor([0.0220, 0.0501, 0.0927, 0.0494, 0.1256, 0.0755, 0.1582, 0.1051],
+ device='cuda:0'), in_proj_covar=tensor([0.0033, 0.0039, 0.0045, 0.0045, 0.0036, 0.0035, 0.0041, 0.0043],
+ device='cuda:0'), out_proj_covar=tensor([1.4982e-05, 1.5634e-05, 1.5812e-05, 1.8157e-05, 2.2165e-05, 1.2832e-05,
+ 1.6798e-05, 1.5230e-05], device='cuda:0')
+ 2026-01-13 10:39:30,598 INFO [train.py:929] Epoch 1, validation: loss=1.859, simple_loss=1.177, pruned_loss=1.271, over 1639044.00 frames.
+ 2026-01-13 10:39:30,599 INFO [train.py:930] Maximum memory allocated so far is 2925MB
+ 2026-01-13 10:39:30,857 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.26 vs. limit=2.0
+ 2026-01-13 10:39:31,400 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.512e+02 1.902e+02 2.295e+02 4.298e+02, threshold=3.803e+02, percent-clipped=1.0
+ 2026-01-13 10:39:35,969 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=12.61 vs. limit=2.0
+ 2026-01-13 10:39:42,163 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=271.80 vs. limit=5.0
+ 2026-01-13 10:39:43,999 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=43.31 vs. limit=5.0
+ 2026-01-13 10:39:45,376 INFO [zipformer.py:2441] attn_weights_entropy = tensor([2.8169, 3.9549, 3.6575, 4.5036, 4.3937, 2.1797, 4.3701, 3.8795],
+ device='cuda:0'), covar=tensor([0.0299, 0.0087, 0.0130, 0.0124, 0.0179, 0.0206, 0.0064, 0.0068],
+ device='cuda:0'), in_proj_covar=tensor([0.0020, 0.0022, 0.0020, 0.0021, 0.0017, 0.0021, 0.0022, 0.0022],
+ device='cuda:0'), out_proj_covar=tensor([1.5726e-05, 1.7354e-05, 1.5380e-05, 1.5626e-05, 1.6816e-05, 1.6441e-05,
+ 1.5738e-05, 1.6955e-05], device='cuda:0')
+ 2026-01-13 10:39:47,634 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=4845.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:39:49,951 INFO [train.py:895] Epoch 1, batch 4850, loss[loss=0.9857, simple_loss=0.6238, pruned_loss=0.6737, over 1247.00 frames. ], tot_loss[loss=1.178, simple_loss=0.7428, pruned_loss=0.8064, over 262654.93 frames. ], batch size: 4, lr: 4.24e-02, grad_scale: 8.0
+ 2026-01-13 10:39:50,555 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=2.76 vs. limit=2.0
+ 2026-01-13 10:39:53,222 INFO [zipformer.py:2441] attn_weights_entropy = tensor([3.8061, 4.2546, 4.2604, 4.1080, 4.0658, 4.1968, 4.1610, 3.8171],
+ device='cuda:0'), covar=tensor([0.0016, 0.0010, 0.0011, 0.0009, 0.0014, 0.0010, 0.0011, 0.0016],
+ device='cuda:0'), in_proj_covar=tensor([0.0018, 0.0018, 0.0020, 0.0018, 0.0018, 0.0020, 0.0019, 0.0018],
+ device='cuda:0'), out_proj_covar=tensor([2.1066e-05, 2.0086e-05, 2.1649e-05, 1.9883e-05, 2.0762e-05, 1.9683e-05,
+ 2.0020e-05, 1.9199e-05], device='cuda:0')
+ 2026-01-13 10:39:56,398 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=3.36 vs. limit=2.0
+ 2026-01-13 10:39:58,215 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=5.15 vs. limit=2.0
+ 2026-01-13 10:40:00,825 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.34 vs. limit=2.0
+ 2026-01-13 10:40:02,994 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=192, metric=11.71 vs. limit=2.0
+ 2026-01-13 10:40:04,235 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.16 vs. limit=2.0
+ 2026-01-13 10:40:08,870 INFO [train.py:895] Epoch 1, batch 4900, loss[loss=1.176, simple_loss=0.7465, pruned_loss=0.8023, over 1371.00 frames. ], tot_loss[loss=1.175, simple_loss=0.7402, pruned_loss=0.8049, over 262443.32 frames. ], batch size: 4, lr: 4.23e-02, grad_scale: 8.0
+ 2026-01-13 10:40:09,447 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=4.82 vs. limit=2.0
+ 2026-01-13 10:40:09,575 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.205e+02 1.742e+02 2.154e+02 2.812e+02 5.415e+02, threshold=4.309e+02, percent-clipped=3.0
+ 2026-01-13 10:40:13,122 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=6.50 vs. limit=2.0
+ 2026-01-13 10:40:14,748 INFO [scaling.py:681] Whitening: num_groups=1, num_channels=384, metric=94.23 vs. limit=5.0
+ 2026-01-13 10:40:25,646 INFO [zipformer.py:2441] attn_weights_entropy = tensor([5.2466, 5.2611, 4.5423, 4.8311, 5.2805, 5.2427, 3.9168, 5.2722],
+ device='cuda:0'), covar=tensor([0.0004, 0.0003, 0.0009, 0.0006, 0.0003, 0.0003, 0.0006, 0.0006],
+ device='cuda:0'), in_proj_covar=tensor([0.0023, 0.0021, 0.0023, 0.0025, 0.0025, 0.0025, 0.0023, 0.0025],
+ device='cuda:0'), out_proj_covar=tensor([2.1643e-05, 1.8064e-05, 2.1884e-05, 2.3686e-05, 2.2005e-05, 2.1159e-05,
+ 1.9218e-05, 2.3031e-05], device='cuda:0')
+ 2026-01-13 10:40:33,462 INFO [zipformer.py:1188] warmup_begin=2000.0, warmup_end=2666.7, batch_count=4944.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:40:36,587 INFO [train.py:895] Epoch 1, batch 4950, loss[loss=1.359, simple_loss=0.8799, pruned_loss=0.9191, over 1125.00 frames. ], tot_loss[loss=1.188, simple_loss=0.7484, pruned_loss=0.8138, over 261986.90 frames. ], batch size: 13, lr: 4.21e-02, grad_scale: 8.0
+ 2026-01-13 10:40:38,394 INFO [scaling.py:681] Whitening: num_groups=8, num_channels=96, metric=13.94 vs. limit=2.0
+ 2026-01-13 10:40:52,065 INFO [zipformer.py:1188] warmup_begin=666.7, warmup_end=1333.3, batch_count=4992.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 10:40:55,544 INFO [checkpoint.py:74] Saving checkpoint to /kaggle/working/amharic_training/exp_amharic_streaming/checkpoint-5000.pt
+ 2026-01-13 10:40:57,736 INFO [train.py:895] Epoch 1, batch 5000, loss[loss=1.344, simple_loss=0.8505, pruned_loss=0.9187, over 1226.00 frames. ], tot_loss[loss=1.183, simple_loss=0.7445, pruned_loss=0.811, over 261199.81 frames. ], batch size: 4, lr: 4.20e-02, grad_scale: 8.0
+ 2026-01-13 10:40:58,505 INFO [optim.py:365] Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.550e+02 1.948e+02 2.658e+02 4.998e+02, threshold=3.897e+02, percent-clipped=2.0
tensorboard/events.out.tfevents.1768298578.8e64ffbd666a.24203.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bf5b45bc742fa9c60e565d2884832c6541b1c06067cbd154442e6e637b8efd4c
- size 22132
+ oid sha256:ee1b8558419f7c4232c372212a0785b2712c2162f9aa1da762c24996b8ecc579
+ size 46332