/home/yuqian_fu
{'gpu': '0', 'data': 'mnist', 'ntr': None, 'translate': None, 'autoaug': 'CA_multiple', 'n': 3, 'stride': 3, 'factor_num': 14, 'epochs': 100, 'nbatch': 100, 'batchsize': 32, 'lr': 0.0001, 'lr_scheduler': 'Step', 'svroot': '/data/work-gcp-europe-west4-a/yuqian_fu/datasets/SingleSourceDG/saved-digit/CA_multiple_14fa_all_ep100_lr1e-4_lr_schedulerStep0.8_bs32_lamCa_1_lamRe_1_cls1_adt2_EW2_100_rmTrue_rnTrue_str3_pipelineAugWoNorm', 'clsadapt': True, 'lambda_causal': 1.0, 'lambda_re': 1.0, 'randm': True, 'randn': True, 'network': 'resnet18'}
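For reference, the learning-rate values printed below (0.0001 through epoch 79, then 1e-05 from epoch 80) are consistent with a step schedule that decays once at 80% of the 100 epochs. A minimal sketch of that inferred schedule, assuming a single milestone at int(0.8 * epochs) and a decay factor of 0.1 — neither value is confirmed by the config above, where 'Step0.8' in svroot may refer to a different parameter:

```python
def lr_at_epoch(epoch, base_lr=1e-4, total_epochs=100,
                milestone_frac=0.8, gamma=0.1):
    """Reproduce the lr values printed in this log.

    Assumption: a one-milestone step schedule (drop at epoch 80 with
    factor 0.1), inferred from the printed values, not from the code.
    """
    if epoch >= int(milestone_frac * total_epochs):
        return base_lr * gamma
    return base_lr
```

This matches the log exactly: epochs 0-79 print 0.0001 and epochs 80-99 print 1e-05.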
--------------------------CA_multiple--------------------------
---------------------------14 factors-----------------
randm: True
randn: True
n: 3
randm: False
100
0.0001
changing lr
---------------------saving model at epoch 0----------------------------------------------------
epoch 0, time 183.25, cls_loss 2.1515
100
0.0001
changing lr
---------------------saving model at epoch 1----------------------------------------------------
epoch 1, time 183.70, cls_loss 1.7865
100
0.0001
changing lr
epoch 2, time 183.59, cls_loss 1.5733
100
0.0001
changing lr
---------------------saving model at epoch 3----------------------------------------------------
epoch 3, time 183.31, cls_loss 1.4407
100
0.0001
changing lr
---------------------saving model at epoch 4----------------------------------------------------
epoch 4, time 183.01, cls_loss 1.3369
100
0.0001
changing lr
---------------------saving model at epoch 5----------------------------------------------------
epoch 5, time 183.85, cls_loss 1.3080
100
0.0001
changing lr
epoch 6, time 182.43, cls_loss 1.2082
100
0.0001
changing lr
---------------------saving model at epoch 7----------------------------------------------------
epoch 7, time 182.60, cls_loss 1.1517
100
0.0001
changing lr
---------------------saving model at epoch 8----------------------------------------------------
epoch 8, time 183.05, cls_loss 1.0938
100
0.0001
changing lr
---------------------saving model at epoch 9----------------------------------------------------
epoch 9, time 183.04, cls_loss 1.0485
100
0.0001
changing lr
epoch 10, time 182.31, cls_loss 1.0636
100
0.0001
changing lr
epoch 11, time 182.08, cls_loss 0.9913
100
0.0001
changing lr
epoch 12, time 182.44, cls_loss 0.9240
100
0.0001
changing lr
---------------------saving model at epoch 13----------------------------------------------------
epoch 13, time 182.56, cls_loss 0.8962
100
0.0001
changing lr
epoch 14, time 182.83, cls_loss 0.8474
100
0.0001
changing lr
epoch 15, time 182.24, cls_loss 0.8730
100
0.0001
changing lr
---------------------saving model at epoch 16----------------------------------------------------
epoch 16, time 182.40, cls_loss 0.8184
100
0.0001
changing lr
epoch 17, time 182.12, cls_loss 0.8083
100
0.0001
changing lr
epoch 18, time 182.02, cls_loss 0.7381
100
0.0001
changing lr
epoch 19, time 182.19, cls_loss 0.7326
100
0.0001
changing lr
epoch 20, time 181.69, cls_loss 0.6649
100
0.0001
changing lr
epoch 21, time 181.62, cls_loss 0.6849
100
0.0001
changing lr
epoch 22, time 181.68, cls_loss 0.6675
100
0.0001
changing lr
---------------------saving model at epoch 23----------------------------------------------------
epoch 23, time 182.29, cls_loss 0.6101
100
0.0001
changing lr
epoch 24, time 182.13, cls_loss 0.6237
100
0.0001
changing lr
epoch 25, time 182.23, cls_loss 0.6229
100
0.0001
changing lr
epoch 26, time 182.24, cls_loss 0.5664
100
0.0001
changing lr
epoch 27, time 182.13, cls_loss 0.5588
100
0.0001
changing lr
epoch 28, time 182.14, cls_loss 0.5539
100
0.0001
changing lr
epoch 29, time 182.35, cls_loss 0.5198
100
0.0001
changing lr
epoch 30, time 182.22, cls_loss 0.5153
100
0.0001
changing lr
epoch 31, time 182.36, cls_loss 0.4764
100
0.0001
changing lr
epoch 32, time 182.13, cls_loss 0.4748
100
0.0001
changing lr
epoch 33, time 181.83, cls_loss 0.4448
100
0.0001
changing lr
epoch 34, time 182.32, cls_loss 0.4358
100
0.0001
changing lr
epoch 35, time 181.92, cls_loss 0.4201
100
0.0001
changing lr
epoch 36, time 181.91, cls_loss 0.3949
100
0.0001
changing lr
epoch 37, time 182.01, cls_loss 0.3818
100
0.0001
changing lr
---------------------saving model at epoch 38----------------------------------------------------
epoch 38, time 182.02, cls_loss 0.3651
100
0.0001
changing lr
epoch 39, time 182.07, cls_loss 0.3656
100
0.0001
changing lr
epoch 40, time 181.87, cls_loss 0.3864
100
0.0001
changing lr
epoch 41, time 182.33, cls_loss 0.3647
100
0.0001
changing lr
epoch 42, time 182.58, cls_loss 0.3301
100
0.0001
changing lr
---------------------saving model at epoch 43----------------------------------------------------
epoch 43, time 182.56, cls_loss 0.3279
100
0.0001
changing lr
epoch 44, time 185.15, cls_loss 0.3470
100
0.0001
changing lr
epoch 45, time 182.28, cls_loss 0.2938
100
0.0001
changing lr
epoch 46, time 182.03, cls_loss 0.2920
100
0.0001
changing lr
epoch 47, time 182.53, cls_loss 0.2780
100
0.0001
changing lr
epoch 48, time 182.87, cls_loss 0.2592
100
0.0001
changing lr
epoch 49, time 182.61, cls_loss 0.2725
100
0.0001
changing lr
epoch 50, time 182.34, cls_loss 0.2344
100
0.0001
changing lr
epoch 51, time 182.13, cls_loss 0.2686
100
0.0001
changing lr
epoch 52, time 183.03, cls_loss 0.2475
100
0.0001
changing lr
epoch 53, time 182.25, cls_loss 0.2359
100
0.0001
changing lr
epoch 54, time 182.39, cls_loss 0.2279
100
0.0001
changing lr
epoch 55, time 182.38, cls_loss 0.2340
100
0.0001
changing lr
epoch 56, time 182.19, cls_loss 0.2217
100
0.0001
changing lr
epoch 57, time 182.01, cls_loss 0.2188
100
0.0001
changing lr
epoch 58, time 182.23, cls_loss 0.2269
100
0.0001
changing lr
epoch 59, time 182.47, cls_loss 0.2212
100
0.0001
changing lr
epoch 60, time 182.34, cls_loss 0.1887
100
0.0001
changing lr
epoch 61, time 182.11, cls_loss 0.1859
100
0.0001
changing lr
epoch 62, time 182.40, cls_loss 0.2021
100
0.0001
changing lr
epoch 63, time 182.09, cls_loss 0.1756
100
0.0001
changing lr
epoch 64, time 182.38, cls_loss 0.1737
100
0.0001
changing lr
epoch 65, time 182.21, cls_loss 0.1648
100
0.0001
changing lr
epoch 66, time 182.02, cls_loss 0.1613
100
0.0001
changing lr
epoch 67, time 182.29, cls_loss 0.1569
100
0.0001
changing lr
epoch 68, time 182.29, cls_loss 0.1487
100
0.0001
changing lr
---------------------saving model at epoch 69----------------------------------------------------
epoch 69, time 182.61, cls_loss 0.1538
100
0.0001
changing lr
epoch 70, time 182.28, cls_loss 0.1653
100
0.0001
changing lr
epoch 71, time 181.94, cls_loss 0.1639
100
0.0001
changing lr
epoch 72, time 181.84, cls_loss 0.1784
100
0.0001
changing lr
epoch 73, time 181.70, cls_loss 0.1843
100
0.0001
changing lr
epoch 74, time 180.53, cls_loss 0.1832
100
0.0001
changing lr
epoch 75, time 180.51, cls_loss 0.1421
100
0.0001
changing lr
epoch 76, time 180.07, cls_loss 0.1224
100
0.0001
changing lr
epoch 77, time 180.21, cls_loss 0.1187
100
0.0001
changing lr
epoch 78, time 180.07, cls_loss 0.1058
100
0.0001
changing lr
epoch 79, time 180.76, cls_loss 0.1301
100
1e-05
changing lr
---------------------saving model at epoch 80----------------------------------------------------
epoch 80, time 181.07, cls_loss 0.0915
100
1e-05
changing lr
epoch 81, time 180.00, cls_loss 0.0845
100
1e-05
changing lr
epoch 82, time 180.09, cls_loss 0.0767
100
1e-05
changing lr
epoch 83, time 180.14, cls_loss 0.0711
100
1e-05
changing lr
epoch 84, time 180.25, cls_loss 0.0698
100
1e-05
changing lr
epoch 85, time 180.12, cls_loss 0.0682
100
1e-05
changing lr
epoch 86, time 179.91, cls_loss 0.0590
100
1e-05
changing lr
epoch 87, time 179.84, cls_loss 0.0607
100
1e-05
changing lr
epoch 88, time 179.82, cls_loss 0.0634
100
1e-05
changing lr
epoch 89, time 180.04, cls_loss 0.0718
100
1e-05
changing lr
epoch 90, time 179.62, cls_loss 0.0704
100
1e-05
changing lr
epoch 91, time 179.77, cls_loss 0.0669
100
1e-05
changing lr
epoch 92, time 179.87, cls_loss 0.0574
100
1e-05
changing lr
epoch 93, time 179.66, cls_loss 0.0556
100
1e-05
changing lr
epoch 94, time 179.87, cls_loss 0.0631
100
1e-05
changing lr
epoch 95, time 179.67, cls_loss 0.0525
100
1e-05
changing lr
epoch 96, time 179.69, cls_loss 0.0473
100
1e-05
changing lr
epoch 97, time 179.39, cls_loss 0.0470
100
1e-05
changing lr
epoch 98, time 179.75, cls_loss 0.0529
100
1e-05
changing lr
epoch 99, time 180.06, cls_loss 0.0541
---------------------saving last model at epoch 99----------------------------------------------------
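The per-epoch summary lines above follow a fixed format ("epoch N, time T, cls_loss L"), so the loss curve can be recovered from the raw log. A small sketch for extracting (epoch, time, cls_loss) triples, assuming every epoch summary matches this exact pattern:

```python
import re

# Matches lines like: "epoch 0, time 183.25, cls_loss 2.1515"
EPOCH_RE = re.compile(r"epoch (\d+), time ([\d.]+), cls_loss ([\d.]+)")

def parse_log(text):
    """Return a list of (epoch, time_sec, cls_loss) tuples from a log dump."""
    return [(int(e), float(t), float(l)) for e, t, l in EPOCH_RE.findall(text)]
```

The interleaved "100 / 0.0001 / changing lr" lines are ignored by the pattern, so the whole log can be fed in unmodified.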
/home/yuqian_fu
{'gpu': '0', 'svroot': '/data/work-gcp-europe-west4-a/yuqian_fu/datasets/SingleSourceDG/saved-digit/CA_multiple_14fa_all_ep100_lr1e-4_lr_schedulerStep0.8_bs32_lamCa_1_lamRe_1_cls1_adt2_EW2_100_rmTrue_rnTrue_str3_pipelineAugWoNorm', 'svpath': '/data/work-gcp-europe-west4-a/yuqian_fu/datasets/SingleSourceDG/saved-digit/CA_multiple_14fa_all_ep100_lr1e-4_lr_schedulerStep0.8_bs32_lamCa_1_lamRe_1_cls1_adt2_EW2_100_rmTrue_rnTrue_str3_pipelineAugWoNorm/14factor_best.csv', 'channels': 3, 'factor_num': 14, 'stride': 3, 'epoch': 'best', 'eval_mapping': True}
loading weight of best
Using downloaded and verified file: /home/yuqian_fu/.pytorch/SVHN/test_32x32.mat
                         mnist       svhn  ...       usps       Avg
w/o do (original x)      93.89  13.579441  ...  89.436971  40.16719

[1 rows x 6 columns]