projecti7 committed
Commit 35d6054 (verified)
Parent: 7d7e7a8

Update latest checkpoint
log/log-train-2026-01-13-11-44-05-0 CHANGED
@@ -3776,3 +3776,38 @@
  device='cuda:0')
  2026-01-13 15:48:12,893 INFO [zipformer.py:1188] (0/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20649.0, num_to_drop=0, layers_to_drop=set()
  2026-01-13 15:48:31,075 INFO [zipformer.py:1188] (0/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20676.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:48:33,118 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.1721, 3.9977, 4.1399, 1.9051, 4.2439, 4.7040, 1.8011, 1.7017],
+ device='cuda:0'), covar=tensor([0.1199, 0.0032, 0.0026, 0.0458, 0.0039, 0.0021, 0.1258, 0.0802],
+ device='cuda:0'), in_proj_covar=tensor([0.0183, 0.0098, 0.0100, 0.0148, 0.0100, 0.0097, 0.0235, 0.0176],
+ device='cuda:0'), out_proj_covar=tensor([9.7527e-05, 3.4016e-05, 3.4342e-05, 6.1876e-05, 3.5872e-05, 3.3241e-05,
+ 1.3651e-04, 8.2088e-05], device='cuda:0')
+ 2026-01-13 15:48:37,138 INFO [train.py:895] (0/2) Epoch 13, batch 850, loss[loss=0.2614, simple_loss=0.2895, pruned_loss=0.1167, over 2756.00 frames. ], tot_loss[loss=0.2228, simple_loss=0.2803, pruned_loss=0.0826, over 542604.45 frames. ], batch size: 9, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:48:49,899 INFO [optim.py:365] (0/2) Clipping_scale=2.0, grad-norm quartiles 8.198e+01 1.222e+02 1.541e+02 2.046e+02 5.252e+02, threshold=3.082e+02, percent-clipped=3.0
+ 2026-01-13 15:48:59,391 INFO [zipformer.py:1188] (0/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20724.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:06,414 INFO [train.py:895] (0/2) Epoch 13, batch 900, loss[loss=0.3516, simple_loss=0.3436, pruned_loss=0.1798, over 2654.00 frames. ], tot_loss[loss=0.2223, simple_loss=0.2795, pruned_loss=0.08257, over 543591.31 frames. ], batch size: 7, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:49:11,366 INFO [zipformer.py:1188] (0/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20744.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:11,985 INFO [zipformer.py:1188] (0/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20745.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:15,013 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.2095, 3.7481, 4.1434, 2.9856, 3.9451, 4.6327, 1.7514, 1.6009],
+ device='cuda:0'), covar=tensor([0.1481, 0.0031, 0.0032, 0.0337, 0.0048, 0.0030, 0.1604, 0.1229],
+ device='cuda:0'), in_proj_covar=tensor([0.0185, 0.0099, 0.0101, 0.0150, 0.0102, 0.0098, 0.0239, 0.0180],
+ device='cuda:0'), out_proj_covar=tensor([9.8766e-05, 3.4351e-05, 3.4547e-05, 6.2032e-05, 3.6394e-05, 3.3603e-05,
+ 1.3895e-04, 8.3760e-05], device='cuda:0')
+ 2026-01-13 15:49:19,076 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.3222, 0.6770, 1.7440, 1.6580, 1.2149, 1.3854, 2.0752, 1.0691],
+ device='cuda:0'), covar=tensor([0.0025, 0.0026, 0.0049, 0.0026, 0.0043, 0.0024, 0.0021, 0.0027],
+ device='cuda:0'), in_proj_covar=tensor([0.0014, 0.0013, 0.0013, 0.0015, 0.0012, 0.0012, 0.0013, 0.0014],
+ device='cuda:0'), out_proj_covar=tensor([5.6519e-06, 5.5957e-06, 6.3816e-06, 5.2553e-06, 7.4288e-06, 4.9359e-06,
+ 5.1946e-06, 5.1087e-06], device='cuda:0')
+ 2026-01-13 15:49:20,892 INFO [scaling.py:681] (0/2) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
+ 2026-01-13 15:49:25,513 INFO [zipformer.py:1188] (0/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20768.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:29,040 INFO [zipformer.py:1188] (0/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20774.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:36,124 INFO [train.py:895] (0/2) Epoch 13, batch 950, loss[loss=0.2032, simple_loss=0.27, pruned_loss=0.06819, over 2607.00 frames. ], tot_loss[loss=0.2234, simple_loss=0.2799, pruned_loss=0.08343, over 544365.14 frames. ], batch size: 7, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:49:40,293 INFO [zipformer.py:1188] (0/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20793.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:49,179 INFO [optim.py:365] (0/2) Clipping_scale=2.0, grad-norm quartiles 7.810e+01 1.270e+02 1.660e+02 2.068e+02 6.055e+02, threshold=3.320e+02, percent-clipped=9.0
+ 2026-01-13 15:49:54,019 INFO [zipformer.py:1188] (0/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20816.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:50:04,624 INFO [zipformer.py:2441] (0/2) attn_weights_entropy = tensor([1.3408, 0.5995, 1.6288, 1.0580, 2.0195, 1.0083, 2.8317, 2.0105],
+ device='cuda:0'), covar=tensor([0.2345, 0.3210, 0.2590, 0.2758, 0.0984, 0.3776, 0.0615, 0.1982],
+ device='cuda:0'), in_proj_covar=tensor([0.0069, 0.0076, 0.0070, 0.0072, 0.0056, 0.0086, 0.0049, 0.0065],
+ device='cuda:0'), out_proj_covar=tensor([1.0131e-04, 1.0238e-04, 9.6378e-05, 9.7097e-05, 6.8026e-05, 1.1415e-04,
+ 6.9696e-05, 9.1096e-05], device='cuda:0')
+ 2026-01-13 15:50:05,702 INFO [train.py:895] (0/2) Epoch 13, batch 1000, loss[loss=0.1844, simple_loss=0.2626, pruned_loss=0.05309, over 2850.00 frames. ], tot_loss[loss=0.2272, simple_loss=0.283, pruned_loss=0.0857, over 545137.63 frames. ], batch size: 8, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:50:20,830 INFO [zipformer.py:1188] (0/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20860.0, num_to_drop=0, layers_to_drop=set()
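
Note on the recurring [zipformer.py:1188] lines: they record, per batch, which warmup window a module is in and which encoder layers were stochastically dropped; here warmup is long past, so num_to_drop=0 and layers_to_drop=set() throughout. A minimal sketch of schedule-gated layer dropout, assuming a linear ramp of the drop probability; the actual Zipformer schedule may differ, and max_drop_prob is a made-up parameter:

import random

def pick_layers_to_drop(batch_count, num_layers, warmup_begin, warmup_end,
                        max_drop_prob=0.075):
    # Hypothetical sketch, not icefall's code: drop probability starts at
    # max_drop_prob (assumed value) and ramps linearly to 0.0 between
    # warmup_begin and warmup_end, then stays at 0.0.
    if batch_count >= warmup_end:
        drop_prob = 0.0
    elif batch_count <= warmup_begin:
        drop_prob = max_drop_prob
    else:
        ramp = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = max_drop_prob * (1.0 - ramp)
    return {i for i in range(num_layers) if random.random() < drop_prob}

# With batch_count=20649.0 and warmup_end=4000.0 as in the logs above,
# drop_prob is 0.0, matching num_to_drop=0, layers_to_drop=set().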
log/log-train-2026-01-13-11-44-05-1 CHANGED
@@ -3712,3 +3712,30 @@
  2026-01-13 15:48:12,946 INFO [zipformer.py:1188] (1/2) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20649.0, num_to_drop=0, layers_to_drop=set()
  2026-01-13 15:48:30,026 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=96, metric=1.98 vs. limit=2.0
  2026-01-13 15:48:31,056 INFO [zipformer.py:1188] (1/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20676.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:48:37,137 INFO [train.py:895] (1/2) Epoch 13, batch 850, loss[loss=0.2111, simple_loss=0.2778, pruned_loss=0.07226, over 2757.00 frames. ], tot_loss[loss=0.2218, simple_loss=0.2797, pruned_loss=0.08192, over 543156.17 frames. ], batch size: 9, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:48:39,224 INFO [scaling.py:681] (1/2) Whitening: num_groups=1, num_channels=384, metric=4.09 vs. limit=5.0
+ 2026-01-13 15:48:48,497 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
+ 2026-01-13 15:48:49,897 INFO [optim.py:365] (1/2) Clipping_scale=2.0, grad-norm quartiles 8.198e+01 1.222e+02 1.541e+02 2.046e+02 5.252e+02, threshold=3.082e+02, percent-clipped=3.0
+ 2026-01-13 15:48:59,390 INFO [zipformer.py:1188] (1/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20724.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:06,413 INFO [train.py:895] (1/2) Epoch 13, batch 900, loss[loss=0.2139, simple_loss=0.2769, pruned_loss=0.0754, over 2653.00 frames. ], tot_loss[loss=0.2245, simple_loss=0.2817, pruned_loss=0.08363, over 544632.87 frames. ], batch size: 7, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:49:11,368 INFO [zipformer.py:1188] (1/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20744.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:12,027 INFO [zipformer.py:1188] (1/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20745.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:25,513 INFO [zipformer.py:1188] (1/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20768.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:29,046 INFO [zipformer.py:1188] (1/2) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20774.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:30,378 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([0.8177, 0.6270, 0.7759, 0.9303, 0.8788, 0.6838, 0.6931, 0.8722],
+ device='cuda:1'), covar=tensor([0.0033, 0.0026, 0.0034, 0.0027, 0.0022, 0.0026, 0.0027, 0.0028],
+ device='cuda:1'), in_proj_covar=tensor([0.0014, 0.0013, 0.0013, 0.0014, 0.0012, 0.0012, 0.0013, 0.0014],
+ device='cuda:1'), out_proj_covar=tensor([5.6162e-06, 5.5680e-06, 6.3374e-06, 5.1898e-06, 7.3254e-06, 4.9182e-06,
+ 5.1334e-06, 5.0560e-06], device='cuda:1')
+ 2026-01-13 15:49:34,384 INFO [zipformer.py:2441] (1/2) attn_weights_entropy = tensor([1.6955, 3.1076, 2.0652, 1.9743, 3.2355, 1.6859, 1.6745, 2.2966],
+ device='cuda:1'), covar=tensor([0.4543, 0.0242, 0.2683, 0.4159, 0.0380, 0.3284, 0.2601, 0.2228],
+ device='cuda:1'), in_proj_covar=tensor([0.0224, 0.0101, 0.0171, 0.0244, 0.0099, 0.0198, 0.0181, 0.0187],
+ device='cuda:1'), out_proj_covar=tensor([0.0003, 0.0001, 0.0002, 0.0003, 0.0001, 0.0002, 0.0002, 0.0002],
+ device='cuda:1')
+ 2026-01-13 15:49:36,124 INFO [train.py:895] (1/2) Epoch 13, batch 950, loss[loss=0.2097, simple_loss=0.2715, pruned_loss=0.07396, over 2608.00 frames. ], tot_loss[loss=0.2242, simple_loss=0.2819, pruned_loss=0.08331, over 545480.37 frames. ], batch size: 7, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:49:40,292 INFO [zipformer.py:1188] (1/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20793.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:49:49,176 INFO [optim.py:365] (1/2) Clipping_scale=2.0, grad-norm quartiles 7.810e+01 1.270e+02 1.660e+02 2.068e+02 6.055e+02, threshold=3.320e+02, percent-clipped=9.0
+ 2026-01-13 15:49:53,700 INFO [scaling.py:681] (1/2) Whitening: num_groups=8, num_channels=96, metric=2.01 vs. limit=2.0
+ 2026-01-13 15:49:54,017 INFO [zipformer.py:1188] (1/2) warmup_begin=666.7, warmup_end=1333.3, batch_count=20816.0, num_to_drop=0, layers_to_drop=set()
+ 2026-01-13 15:50:05,706 INFO [train.py:895] (1/2) Epoch 13, batch 1000, loss[loss=0.2168, simple_loss=0.2699, pruned_loss=0.08184, over 2849.00 frames. ], tot_loss[loss=0.227, simple_loss=0.2845, pruned_loss=0.08478, over 545564.10 frames. ], batch size: 8, lr: 1.28e-02, grad_scale: 16.0
+ 2026-01-13 15:50:20,840 INFO [zipformer.py:1188] (1/2) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20860.0, num_to_drop=0, layers_to_drop=set()
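
Note on the [optim.py:365] lines: they log gradient-norm quartiles (min, 25%, median, 75%, max) plus a clipping threshold, and in both hunks the threshold equals Clipping_scale times the logged median (2.0 x 1.541e+02 = 3.082e+02). A minimal sketch of that bookkeeping in plain PyTorch; grad_norm_stats is a hypothetical helper, not icefall's optimizer code, and collecting the norms over recent batches is an assumption here:

import torch

def grad_norm_stats(grad_norms, clipping_scale=2.0):
    # Hypothetical helper: given a 1-D float tensor of recent gradient
    # norms, report (min, 25%, median, 75%, max) and derive a clipping
    # threshold as clipping_scale * median, as the logged values suggest.
    q = torch.quantile(grad_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2]
    percent_clipped = 100.0 * (grad_norms > threshold).float().mean()
    return q, threshold, percent_clipped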
tensorboard/events.out.tfevents.1768304645.8e64ffbd666a.97184.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0dd5b84443c9db429e9c7205731f56796d566848d6fc10662bf8fe66858f8a22
- size 206083
+ oid sha256:0fc44551cbe503e480cf4049cabc1a023b8e45edf6a311a7df4b975d5bd39723
+ size 207999
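
Note: the tensorboard event file is tracked with Git LFS, so the hunk above changes only the three-line pointer file (its sha256 oid and byte size), not the binary payload itself. A minimal sketch of reading such a pointer, with parse_lfs_pointer as a hypothetical helper:

def parse_lfs_pointer(text):
    # Hypothetical helper: each pointer line is "key value", e.g.
    #   version https://git-lfs.github.com/spec/v1
    #   oid sha256:0fc44551...
    #   size 207999
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields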