kaijun123 committed on
Commit 441bd95 · verified
1 Parent(s): c832f93

Upload folder using huggingface_hub

unet_epochs15_batch32_lr0.001.log ADDED
@@ -0,0 +1,67 @@
+ nohup: ignoring input
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ checkpoint_path: checkpoints/unet_epochs15_batch32_lr0.001
+
+ Epoch [1/15]
+ Time taken for epoch 1: 0:00:43.374418
+ Train Loss: 0.7487 | Val Loss: 0.6093
+
+ Epoch [2/15]
+ Time taken for epoch 2: 0:00:42.266390
+ Train Loss: 0.5564 | Val Loss: 0.6209
+
+ Epoch [3/15]
+ Time taken for epoch 3: 0:00:42.305472
+ Train Loss: 0.5042 | Val Loss: 0.4676
+
+ Epoch [4/15]
+ Time taken for epoch 4: 0:00:42.237916
+ Train Loss: 0.4718 | Val Loss: 0.4394
+
+ Epoch [5/15]
+ Time taken for epoch 5: 0:00:42.203672
+ Train Loss: 0.4293 | Val Loss: 0.5233
+
+ Epoch [6/15]
+ Time taken for epoch 6: 0:00:42.163494
+ Train Loss: 0.4230 | Val Loss: 0.4025
+
+ Epoch [7/15]
+ Time taken for epoch 7: 0:00:42.253392
+ Train Loss: 0.3958 | Val Loss: 0.4379
+
+ Epoch [8/15]
+ Time taken for epoch 8: 0:00:42.183860
+ Train Loss: 0.3775 | Val Loss: 0.3714
+
+ Epoch [9/15]
+ Time taken for epoch 9: 0:00:42.259339
+ Train Loss: 0.3715 | Val Loss: 0.4443
+
+ Epoch [10/15]
+ Time taken for epoch 10: 0:00:42.254248
+ Train Loss: 0.3484 | Val Loss: 0.3739
+
+ Epoch [11/15]
+ Time taken for epoch 11: 0:00:42.232907
+ Train Loss: 0.3313 | Val Loss: 0.3589
+
+ Epoch [12/15]
+ Time taken for epoch 12: 0:00:42.228751
+ Train Loss: 0.3166 | Val Loss: 0.3541
+
+ Epoch [13/15]
+ Time taken for epoch 13: 0:00:42.221082
+ Train Loss: 0.2971 | Val Loss: 0.3467
+
+ Epoch [14/15]
+ Time taken for epoch 14: 0:00:42.262076
+ Train Loss: 0.2845 | Val Loss: 0.3159
+
+ Epoch [15/15]
+ Time taken for epoch 15: 0:00:42.197832
+ Train Loss: 0.2630 | Val Loss: 0.3287
unet_epochs30_batch32_lr0.001.log ADDED
@@ -0,0 +1,153 @@
+ nohup: ignoring input
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:00:43.209978
+ Train Loss: 0.7238 | Val Loss: 0.6560
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:42.164845
+ Train Loss: 0.5603 | Val Loss: 0.5211
+ ✅ Validation improved — model saved.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:42.164324
+ Train Loss: 0.5041 | Val Loss: 0.5059
+ ✅ Validation improved — model saved.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:42.141011
+ Train Loss: 0.4734 | Val Loss: 0.5228
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:42.130260
+ Train Loss: 0.4435 | Val Loss: 0.4439
+ ✅ Validation improved — model saved.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:42.149992
+ Train Loss: 0.4265 | Val Loss: 0.4024
+ ✅ Validation improved — model saved.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:42.156155
+ Train Loss: 0.3851 | Val Loss: 0.3961
+ ✅ Validation improved — model saved.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:42.158115
+ Train Loss: 0.3694 | Val Loss: 0.4259
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:42.204067
+ Train Loss: 0.3657 | Val Loss: 0.3578
+ ✅ Validation improved — model saved.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:42.191173
+ Train Loss: 0.3360 | Val Loss: 0.4399
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:42.143924
+ Train Loss: 0.3231 | Val Loss: 0.3483
+ ✅ Validation improved — model saved.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:42.138028
+ Train Loss: 0.3126 | Val Loss: 0.3359
+ ✅ Validation improved — model saved.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:42.148575
+ Train Loss: 0.2969 | Val Loss: 0.3537
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:42.145657
+ Train Loss: 0.2835 | Val Loss: 0.2956
+ ✅ Validation improved — model saved.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:42.166805
+ Train Loss: 0.2754 | Val Loss: 0.3219
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:42.201696
+ Train Loss: 0.2591 | Val Loss: 0.3877
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:42.223271
+ Train Loss: 0.2509 | Val Loss: 0.2963
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:42.165095
+ Train Loss: 0.2442 | Val Loss: 0.3683
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:42.150570
+ Train Loss: 0.2158 | Val Loss: 0.2688
+ ✅ Validation improved — model saved.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:42.169626
+ Train Loss: 0.1949 | Val Loss: 0.2692
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:42.176523
+ Train Loss: 0.1842 | Val Loss: 0.2668
+ ✅ Validation improved — model saved.
+
+ Epoch [22/30]
+ Time taken for epoch 22: 0:00:42.203986
+ Train Loss: 0.1775 | Val Loss: 0.2602
+ ✅ Validation improved — model saved.
+
+ Epoch [23/30]
+ Time taken for epoch 23: 0:00:42.170539
+ Train Loss: 0.1689 | Val Loss: 0.2711
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [24/30]
+ Time taken for epoch 24: 0:00:42.147949
+ Train Loss: 0.1648 | Val Loss: 0.2704
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [25/30]
+ Time taken for epoch 25: 0:00:42.177936
+ Train Loss: 0.1593 | Val Loss: 0.2705
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [26/30]
+ Time taken for epoch 26: 0:00:42.150470
+ Train Loss: 0.1553 | Val Loss: 0.2789
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [27/30]
+ Time taken for epoch 27: 0:00:42.179764
+ Train Loss: 0.1469 | Val Loss: 0.2732
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [28/30]
+ Time taken for epoch 28: 0:00:42.197436
+ Train Loss: 0.1467 | Val Loss: 0.2747
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [29/30]
+ Time taken for epoch 29: 0:00:42.214138
+ Train Loss: 0.1460 | Val Loss: 0.2756
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
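The run above stops after seven consecutive epochs without a validation improvement, saving a checkpoint whenever the validation loss hits a new minimum. A minimal sketch of such a patience-based loop, consistent with the log messages but with hypothetical names (`run_epoch`, `patience`; the actual training script is not shown in this diff):

```python
def train_with_early_stopping(epochs, run_epoch, patience=7):
    """Patience-based early stopping: stop after `patience` epochs
    with no new validation-loss minimum. `run_epoch` is assumed to
    train one epoch and return the validation loss."""
    best_val = float("inf")
    bad_epochs = 0
    for epoch in range(1, epochs + 1):
        val_loss = run_epoch(epoch)
        if val_loss < best_val:
            best_val = val_loss
            bad_epochs = 0
            # In the real script this is where the checkpoint is written.
            print("✅ Validation improved — model saved.")
        else:
            bad_epochs += 1
            print(f"⚠️ No improvement for {bad_epochs} epochs.")
            if bad_epochs >= patience:
                print("⛔ Early stopping triggered.")
                break
    return best_val
```

With `patience=7` this reproduces the stopping behaviour seen in the log: epoch 22 set the last minimum (0.2602), and training halts after epoch 29, the seventh non-improving epoch.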
unet_epochs30_batch32_lr0.0015.log ADDED
@@ -0,0 +1,158 @@
+ nohup: ignoring input
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.0015
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:00:43.035586
+ Train Loss: 0.7250 | Val Loss: 0.6574
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:42.009461
+ Train Loss: 0.5883 | Val Loss: 0.5594
+ ✅ Validation improved — model saved.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:42.119111
+ Train Loss: 0.5322 | Val Loss: 0.5302
+ ✅ Validation improved — model saved.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:42.316585
+ Train Loss: 0.4971 | Val Loss: 0.4711
+ ✅ Validation improved — model saved.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:42.369174
+ Train Loss: 0.4637 | Val Loss: 0.4686
+ ✅ Validation improved — model saved.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:42.353129
+ Train Loss: 0.4400 | Val Loss: 0.5063
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:42.380148
+ Train Loss: 0.4252 | Val Loss: 0.4904
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:42.385585
+ Train Loss: 0.4041 | Val Loss: 0.4677
+ ✅ Validation improved — model saved.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:42.416812
+ Train Loss: 0.3862 | Val Loss: 0.4866
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:42.357441
+ Train Loss: 0.3748 | Val Loss: 0.3769
+ ✅ Validation improved — model saved.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:42.363413
+ Train Loss: 0.3546 | Val Loss: 0.4630
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:42.337634
+ Train Loss: 0.3410 | Val Loss: 0.3513
+ ✅ Validation improved — model saved.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:42.320807
+ Train Loss: 0.3152 | Val Loss: 0.3608
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:42.343854
+ Train Loss: 0.3013 | Val Loss: 0.3336
+ ✅ Validation improved — model saved.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:42.408566
+ Train Loss: 0.2867 | Val Loss: 0.3303
+ ✅ Validation improved — model saved.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:42.411924
+ Train Loss: 0.2719 | Val Loss: 0.3228
+ ✅ Validation improved — model saved.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:42.414386
+ Train Loss: 0.2559 | Val Loss: 0.3491
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:42.367486
+ Train Loss: 0.2507 | Val Loss: 0.3233
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:42.376158
+ Train Loss: 0.2317 | Val Loss: 0.3347
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:42.338532
+ Train Loss: 0.2170 | Val Loss: 0.3206
+ ✅ Validation improved — model saved.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:42.381323
+ Train Loss: 0.2116 | Val Loss: 0.3306
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [22/30]
+ Time taken for epoch 22: 0:00:42.368303
+ Train Loss: 0.2051 | Val Loss: 0.3599
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [23/30]
+ Time taken for epoch 23: 0:00:42.343645
+ Train Loss: 0.1903 | Val Loss: 0.3126
+ ✅ Validation improved — model saved.
+
+ Epoch [24/30]
+ Time taken for epoch 24: 0:00:42.375118
+ Train Loss: 0.1816 | Val Loss: 0.3191
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [25/30]
+ Time taken for epoch 25: 0:00:42.332007
+ Train Loss: 0.1795 | Val Loss: 0.3459
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [26/30]
+ Time taken for epoch 26: 0:00:42.389358
+ Train Loss: 0.1655 | Val Loss: 0.3522
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [27/30]
+ Time taken for epoch 27: 0:00:42.353115
+ Train Loss: 0.1620 | Val Loss: 0.3340
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [28/30]
+ Time taken for epoch 28: 0:00:42.356037
+ Train Loss: 0.1431 | Val Loss: 0.3156
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [29/30]
+ Time taken for epoch 29: 0:00:42.376427
+ Train Loss: 0.1313 | Val Loss: 0.3157
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [30/30]
+ Time taken for epoch 30: 0:00:42.336141
+ Train Loss: 0.1226 | Val Loss: 0.3164
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
unet_epochs30_batch32_lr0.001_figshare_preprocess.log ADDED
@@ -0,0 +1,186 @@
+ nohup: ignoring input
+ wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
+ wandb: setting up run zy99napl
+ wandb: Tracking run with wandb version 0.22.3
+ wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251110_185725-zy99napl
+ wandb: Run `wandb offline` to turn off syncing.
+ wandb: Syncing run unet_epochs30_batch32_lr0.001_figshare_preprocess
+ wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/zy99napl
+ wandb: updating run metadata
+ wandb: uploading output.log; uploading wandb-summary.json
+ wandb: uploading wandb-summary.json; uploading config.yaml
+ wandb: uploading history steps 29-29, summary, console lines 147-149
+ wandb:
+ wandb: Run history:
+ wandb: epochs ▁▁▁▂▂▂▂▃▃▃▃▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇▇███
+ wandb: train_loss █▄▃▃▃▂▂▂▂▂▂▂▂▁▁▁▂▂▂▁▁▁▁▁▁▁▁▁▁▁
+ wandb: val_loss █▆▃▄▅▄▃▄▃▄▂▂▂▂▂▁▃▃▂▂▁▂▁▁▁▁▁▁▁▁
+ wandb:
+ wandb: Run summary:
+ wandb: epochs 30
+ wandb: train_loss 0.07781
+ wandb: val_loss 0.16863
+ wandb:
+ wandb: 🚀 View run unet_epochs30_batch32_lr0.001_figshare_preprocess at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/zy99napl
+ wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+ wandb: Find logs at: ./wandb/run-20251110_185725-zy99napl/logs
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_figshare_preprocess
+ split: train data_split_path: figshare_preprocessed/data_split.json
+ split: valid data_split_path: figshare_preprocessed/data_split.json
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:00:25.245051
+ Train Loss: 0.6771 | Val Loss: 0.4430
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:24.535809
+ Train Loss: 0.3119 | Val Loss: 0.3564
+ ✅ Validation improved — model saved.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:24.538127
+ Train Loss: 0.2527 | Val Loss: 0.2408
+ ✅ Validation improved — model saved.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:24.545072
+ Train Loss: 0.2150 | Val Loss: 0.2956
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:24.521895
+ Train Loss: 0.2071 | Val Loss: 0.3041
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:24.513282
+ Train Loss: 0.2008 | Val Loss: 0.2816
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:24.548300
+ Train Loss: 0.1881 | Val Loss: 0.2340
+ ✅ Validation improved — model saved.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:24.539046
+ Train Loss: 0.1913 | Val Loss: 0.2713
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:24.630774
+ Train Loss: 0.1909 | Val Loss: 0.2169
+ ✅ Validation improved — model saved.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:24.835839
+ Train Loss: 0.1662 | Val Loss: 0.2629
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:24.544666
+ Train Loss: 0.1473 | Val Loss: 0.1893
+ ✅ Validation improved — model saved.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:24.523512
+ Train Loss: 0.1345 | Val Loss: 0.1788
+ ✅ Validation improved — model saved.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:24.550038
+ Train Loss: 0.1272 | Val Loss: 0.1882
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:24.568155
+ Train Loss: 0.1203 | Val Loss: 0.1851
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:24.523504
+ Train Loss: 0.1163 | Val Loss: 0.1881
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:24.561030
+ Train Loss: 0.1100 | Val Loss: 0.1742
+ ✅ Validation improved — model saved.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:24.583794
+ Train Loss: 0.1321 | Val Loss: 0.2474
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:24.540402
+ Train Loss: 0.1379 | Val Loss: 0.2445
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:24.514984
+ Train Loss: 0.1210 | Val Loss: 0.1928
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:24.541848
+ Train Loss: 0.1132 | Val Loss: 0.1753
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:24.578071
+ Train Loss: 0.0964 | Val Loss: 0.1592
+ ✅ Validation improved — model saved.
+
+ Epoch [22/30]
+ Time taken for epoch 22: 0:00:24.545045
+ Train Loss: 0.0906 | Val Loss: 0.1774
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [23/30]
+ Time taken for epoch 23: 0:00:24.577851
+ Train Loss: 0.0873 | Val Loss: 0.1591
+ ✅ Validation improved — model saved.
+
+ Epoch [24/30]
+ Time taken for epoch 24: 0:00:24.609731
+ Train Loss: 0.0851 | Val Loss: 0.1644
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [25/30]
+ Time taken for epoch 25: 0:00:24.544784
+ Train Loss: 0.0832 | Val Loss: 0.1546
+ ✅ Validation improved — model saved.
+
+ Epoch [26/30]
+ Time taken for epoch 26: 0:00:24.544042
+ Train Loss: 0.0825 | Val Loss: 0.1558
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [27/30]
+ Time taken for epoch 27: 0:00:24.557742
+ Train Loss: 0.0831 | Val Loss: 0.1617
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [28/30]
+ Time taken for epoch 28: 0:00:24.537817
+ Train Loss: 0.0807 | Val Loss: 0.1623
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [29/30]
+ Time taken for epoch 29: 0:00:24.574384
+ Train Loss: 0.0788 | Val Loss: 0.1661
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [30/30]
+ Time taken for epoch 30: 0:00:24.525882
+ Train Loss: 0.0778 | Val Loss: 0.1686
+ ⚠️ No improvement for 5 epochs.
unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005.log ADDED
@@ -0,0 +1,105 @@
+ nohup: ignoring input
+ wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
+ wandb: setting up run t2gz18kk
+ wandb: Tracking run with wandb version 0.22.3
+ wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251111_141336-t2gz18kk
+ wandb: Run `wandb offline` to turn off syncing.
+ wandb: Syncing run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005
+ wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/t2gz18kk
+ wandb: updating run metadata
+ wandb: uploading wandb-summary.json; uploading output.log
+ wandb: uploading wandb-summary.json; uploading output.log; uploading config.yaml; uploading history steps 12-12, summary, console lines 62-65
+ wandb: uploading wandb-summary.json; uploading output.log; uploading config.yaml
+ wandb: uploading wandb-summary.json; uploading config.yaml
+ wandb: uploading data
+ wandb:
+ wandb: Run history:
+ wandb: epochs ▁▂▂▃▃▄▅▅▆▆▇▇█
+ wandb: train_loss █▅▅▄▄▄▃▃▃▃▂▁▁
+ wandb: val_loss ▇▅▁▅▅▁▅█▂█▃▄▃
+ wandb:
+ wandb: Run summary:
+ wandb: epochs 13
+ wandb: train_loss 0.13112
+ wandb: val_loss 0.273
+ wandb:
+ wandb: 🚀 View run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005 at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/t2gz18kk
+ wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+ wandb: Find logs at: ./wandb/run-20251111_141336-t2gz18kk/logs
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ Loading weights from local directory
+ freeze encoder
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005
+ split: train data_split_path: combined_data_split.json
+ split: valid data_split_path: combined_data_split.json
+
+ Epoch [1/20]
+ Time taken for epoch 1: 0:01:02.609510
+ Train Loss: 0.1681 | Val Loss: 0.2832
+ ✅ Validation improved — model saved.
+
+ Epoch [2/20]
+ Time taken for epoch 2: 0:01:00.957051
+ Train Loss: 0.1535 | Val Loss: 0.2777
+ ✅ Validation improved — model saved.
+
+ Epoch [3/20]
+ Time taken for epoch 3: 0:01:00.259887
+ Train Loss: 0.1505 | Val Loss: 0.2684
+ ✅ Validation improved — model saved.
+
+ Epoch [4/20]
+ Time taken for epoch 4: 0:01:01.074658
+ Train Loss: 0.1488 | Val Loss: 0.2785
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [5/20]
+ Time taken for epoch 5: 0:01:01.501208
+ Train Loss: 0.1474 | Val Loss: 0.2783
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [6/20]
+ Time taken for epoch 6: 0:01:01.322478
+ Train Loss: 0.1456 | Val Loss: 0.2683
+ ✅ Validation improved — model saved.
+
+ Epoch [7/20]
+ Time taken for epoch 7: 0:01:01.486441
+ Train Loss: 0.1436 | Val Loss: 0.2781
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [8/20]
+ Time taken for epoch 8: 0:01:01.507692
+ Train Loss: 0.1421 | Val Loss: 0.2840
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [9/20]
+ Time taken for epoch 9: 0:01:01.640441
+ Train Loss: 0.1409 | Val Loss: 0.2717
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [10/20]
+ Time taken for epoch 10: 0:01:01.018847
+ Train Loss: 0.1402 | Val Loss: 0.2848
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [11/20]
+ Time taken for epoch 11: 0:01:01.497435
+ Train Loss: 0.1338 | Val Loss: 0.2739
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [12/20]
+ Time taken for epoch 12: 0:00:58.877680
+ Train Loss: 0.1323 | Val Loss: 0.2748
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [13/20]
+ Time taken for epoch 13: 0:01:00.537987
+ Train Loss: 0.1311 | Val Loss: 0.2730
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
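The "freeze encoder" line in this run suggests the pretrained encoder's parameters were excluded from gradient updates so that only the decoder fine-tunes. A minimal torch-free sketch of that idea (the `Param` stand-in and all names are hypothetical; in the real script these would be `nn.Parameter` objects and the optimizer would be built over the still-trainable parameters):

```python
class Param:
    """Stand-in for a framework parameter with a requires_grad flag."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

def freeze_encoder(encoder_params):
    # Disable gradients for every encoder parameter; the optimizer
    # should then only see the remaining trainable parameters.
    for p in encoder_params:
        p.requires_grad = False

def trainable(params):
    return [p.name for p in params if p.requires_grad]

encoder = [Param("enc.conv1"), Param("enc.conv2")]
decoder = [Param("dec.up1"), Param("dec.out")]
freeze_encoder(encoder)
print(trainable(encoder + decoder))  # only the decoder parameters remain
```

Freezing the encoder also explains why the train loss here starts near 0.17 rather than 0.7: the run resumes from the earlier checkpoint and only a subset of weights continues to move.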
unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001.log ADDED
@@ -0,0 +1,144 @@
+ nohup: ignoring input
+ wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
+ wandb: setting up run 6spl7gwh
+ wandb: Tracking run with wandb version 0.22.3
+ wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251111_000302-6spl7gwh
+ wandb: Run `wandb offline` to turn off syncing.
+ wandb: Syncing run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001
+ wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/6spl7gwh
+ wandb: updating run metadata
+ wandb: uploading output.log; uploading wandb-summary.json
+ wandb: uploading output.log; uploading wandb-summary.json; uploading config.yaml
+ wandb: uploading output.log; uploading wandb-summary.json
+ wandb: uploading history steps 20-20, summary, console lines 102-105
+ wandb:
+ wandb: Run history:
+ wandb: epochs ▁▁▂▂▂▃▃▃▄▄▅▅▅▆▆▆▇▇▇██
+ wandb: train_loss ██▇▇█▆▆▅▅▅▅▅▄▄▄▄▄▃▂▁▁
+ wandb: val_loss ▃▃▃▃▅▆▂▂▅▆▂█▃▁▂▄▄▂▃▃▄
+ wandb:
+ wandb: Run summary:
+ wandb: epochs 21
+ wandb: train_loss 0.12465
+ wandb: val_loss 0.2755
+ wandb:
+ wandb: 🚀 View run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001 at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/6spl7gwh
+ wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+ wandb: Find logs at: ./wandb/run-20251111_000302-6spl7gwh/logs
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ Loading weights from local directory
+ freeze encoder
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001
+ split: train data_split_path: combined_data_split.json
+ split: valid data_split_path: combined_data_split.json
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:00:30.485078
+ Train Loss: 0.1490 | Val Loss: 0.2743
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:29.358091
+ Train Loss: 0.1479 | Val Loss: 0.2744
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:29.325511
+ Train Loss: 0.1464 | Val Loss: 0.2746
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:29.550897
+ Train Loss: 0.1444 | Val Loss: 0.2723
+ ✅ Validation improved — model saved.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:30.235633
+ Train Loss: 0.1492 | Val Loss: 0.2782
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:30.045303
+ Train Loss: 0.1432 | Val Loss: 0.2798
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:30.027648
+ Train Loss: 0.1406 | Val Loss: 0.2718
+ ✅ Validation improved — model saved.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:29.842414
+ Train Loss: 0.1393 | Val Loss: 0.2716
+ ✅ Validation improved — model saved.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:29.536086
+ Train Loss: 0.1389 | Val Loss: 0.2786
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:29.464946
+ Train Loss: 0.1386 | Val Loss: 0.2797
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:29.483013
+ Train Loss: 0.1370 | Val Loss: 0.2704
+ ✅ Validation improved — model saved.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:29.445689
+ Train Loss: 0.1370 | Val Loss: 0.2860
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:29.331030
+ Train Loss: 0.1359 | Val Loss: 0.2736
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:29.328438
+ Train Loss: 0.1353 | Val Loss: 0.2683
+ ✅ Validation improved — model saved.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:29.422292
+ Train Loss: 0.1356 | Val Loss: 0.2698
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:29.363035
+ Train Loss: 0.1348 | Val Loss: 0.2771
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:29.363021
+ Train Loss: 0.1352 | Val Loss: 0.2749
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:29.335649
+ Train Loss: 0.1319 | Val Loss: 0.2716
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:29.312675
+ Train Loss: 0.1266 | Val Loss: 0.2734
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:29.372358
+ Train Loss: 0.1257 | Val Loss: 0.2737
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:29.343112
+ Train Loss: 0.1246 | Val Loss: 0.2755
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
unet_epochs30_batch32_lr0.001_loglossTrue.log ADDED
@@ -0,0 +1,188 @@
+ nohup: ignoring input
+ wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
+ wandb: setting up run tloron2v
+ wandb: Tracking run with wandb version 0.22.3
+ wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251110_181653-tloron2v
+ wandb: Run `wandb offline` to turn off syncing.
+ wandb: Syncing run unet_epochs30_batch32_lr0.001_loglossTrue
+ wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/tloron2v
+ wandb: updating run metadata
+ wandb: uploading output.log; uploading wandb-summary.json
+ wandb: uploading wandb-summary.json; uploading config.yaml
+ wandb: uploading history steps 29-29, summary, console lines 147-150
+ wandb:
+ wandb: Run history:
+ wandb: epochs ▁▁▁▂▂▂▂▃▃▃▃▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇▇███
+ wandb: train_loss █▄▄▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁
+ wandb: val_loss █▄▃▂▃▂▂▂▂▂▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
+ wandb:
+ wandb: Run summary:
+ wandb: epochs 30
+ wandb: train_loss 0.14168
+ wandb: val_loss 0.32459
+ wandb:
+ wandb: 🚀 View run unet_epochs30_batch32_lr0.001_loglossTrue at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/tloron2v
+ wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
+ wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+ wandb: Find logs at: ./wandb/run-20251110_181653-tloron2v/logs
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ loss_fn log_loss: True
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_loglossTrue
+ split: train data_split_path: combined_data_split.json
+ split: valid data_split_path: combined_data_split.json
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:00:42.803687
+ Train Loss: 1.5506 | Val Loss: 1.4750
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:41.725235
+ Train Loss: 0.7892 | Val Loss: 0.8678
+ ✅ Validation improved — model saved.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:43.323791
+ Train Loss: 0.6749 | Val Loss: 0.6256
+ ✅ Validation improved — model saved.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:41.882186
+ Train Loss: 0.6081 | Val Loss: 0.5085
+ ✅ Validation improved — model saved.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:41.828401
+ Train Loss: 0.5765 | Val Loss: 0.5889
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:42.168842
+ Train Loss: 0.5159 | Val Loss: 0.5590
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:42.150284
+ Train Loss: 0.4775 | Val Loss: 0.4829
+ ✅ Validation improved — model saved.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:42.615359
+ Train Loss: 0.4685 | Val Loss: 0.4615
+ ✅ Validation improved — model saved.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:42.108275
+ Train Loss: 0.4206 | Val Loss: 0.4100
+ ✅ Validation improved — model saved.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:42.083122
+ Train Loss: 0.4102 | Val Loss: 0.4860
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:41.914494
+ Train Loss: 0.3849 | Val Loss: 0.3863
+ ✅ Validation improved — model saved.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:41.899065
+ Train Loss: 0.3724 | Val Loss: 0.3726
+ ✅ Validation improved — model saved.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:41.865987
+ Train Loss: 0.3548 | Val Loss: 0.4922
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:42.502191
+ Train Loss: 0.3301 | Val Loss: 0.3736
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:42.039146
+ Train Loss: 0.3201 | Val Loss: 0.3700
+ ✅ Validation improved — model saved.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:42.558220
+ Train Loss: 0.2964 | Val Loss: 0.3731
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:41.677247
+ Train Loss: 0.2873 | Val Loss: 0.3438
+ ✅ Validation improved — model saved.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:41.744951
+ Train Loss: 0.2783 | Val Loss: 0.3603
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:41.804037
+ Train Loss: 0.2565 | Val Loss: 0.3598
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:42.401755
+ Train Loss: 0.2383 | Val Loss: 0.3924
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:42.078392
+ Train Loss: 0.2355 | Val Loss: 0.3562
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [22/30]
+ Time taken for epoch 22: 0:00:41.885645
+ Train Loss: 0.1980 | Val Loss: 0.3224
+ ✅ Validation improved — model saved.
+
+ Epoch [23/30]
+ Time taken for epoch 23: 0:00:41.761612
+ Train Loss: 0.1798 | Val Loss: 0.3134
+ ✅ Validation improved — model saved.
+
+ Epoch [24/30]
+ Time taken for epoch 24: 0:00:42.195898
+ Train Loss: 0.1695 | Val Loss: 0.3180
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [25/30]
+ Time taken for epoch 25: 0:00:42.152904
+ Train Loss: 0.1617 | Val Loss: 0.3190
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [26/30]
+ Time taken for epoch 26: 0:00:41.827357
+ Train Loss: 0.1553 | Val Loss: 0.3225
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [27/30]
+ Time taken for epoch 27: 0:00:41.814583
+ Train Loss: 0.1503 | Val Loss: 0.3282
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [28/30]
+ Time taken for epoch 28: 0:00:41.907791
+ Train Loss: 0.1441 | Val Loss: 0.3238
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [29/30]
+ Time taken for epoch 29: 0:00:42.636240
+ Train Loss: 0.1426 | Val Loss: 0.3267
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [30/30]
+ Time taken for epoch 30: 0:00:42.123800
+ Train Loss: 0.1417 | Val Loss: 0.3246
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
unet_epochs30_batch32_lr0.002.log ADDED
@@ -0,0 +1,158 @@
+ nohup: ignoring input
+ PyTorch version: 2.7.0+cu128
+ CUDA available: True
+ CUDA version: 12.8
+ GPU: NVIDIA GeForce RTX 4090
+ sigmoid disabled
+ checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.002
+
+ Epoch [1/30]
+ Time taken for epoch 1: 0:01:07.321107
+ Train Loss: 0.7515 | Val Loss: 0.7349
+ ✅ Validation improved — model saved.
+
+ Epoch [2/30]
+ Time taken for epoch 2: 0:00:41.725214
+ Train Loss: 0.6366 | Val Loss: 0.6584
+ ✅ Validation improved — model saved.
+
+ Epoch [3/30]
+ Time taken for epoch 3: 0:00:41.741561
+ Train Loss: 0.5936 | Val Loss: 0.7509
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [4/30]
+ Time taken for epoch 4: 0:00:41.705633
+ Train Loss: 0.5622 | Val Loss: 0.5722
+ ✅ Validation improved — model saved.
+
+ Epoch [5/30]
+ Time taken for epoch 5: 0:00:41.747515
+ Train Loss: 0.5302 | Val Loss: 0.4825
+ ✅ Validation improved — model saved.
+
+ Epoch [6/30]
+ Time taken for epoch 6: 0:00:41.629183
+ Train Loss: 0.5009 | Val Loss: 0.4922
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [7/30]
+ Time taken for epoch 7: 0:00:41.642311
+ Train Loss: 0.4852 | Val Loss: 0.4780
+ ✅ Validation improved — model saved.
+
+ Epoch [8/30]
+ Time taken for epoch 8: 0:00:41.813184
+ Train Loss: 0.4643 | Val Loss: 0.6178
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [9/30]
+ Time taken for epoch 9: 0:00:42.445445
+ Train Loss: 0.4381 | Val Loss: 0.4141
+ ✅ Validation improved — model saved.
+
+ Epoch [10/30]
+ Time taken for epoch 10: 0:00:42.756022
+ Train Loss: 0.4193 | Val Loss: 0.5146
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [11/30]
+ Time taken for epoch 11: 0:00:41.794892
+ Train Loss: 0.4019 | Val Loss: 0.4091
+ ✅ Validation improved — model saved.
+
+ Epoch [12/30]
+ Time taken for epoch 12: 0:00:41.772372
+ Train Loss: 0.3860 | Val Loss: 0.4492
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [13/30]
+ Time taken for epoch 13: 0:00:41.716402
+ Train Loss: 0.3818 | Val Loss: 0.3965
+ ✅ Validation improved — model saved.
+
+ Epoch [14/30]
+ Time taken for epoch 14: 0:00:41.859875
+ Train Loss: 0.3561 | Val Loss: 0.4099
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [15/30]
+ Time taken for epoch 15: 0:00:41.732123
+ Train Loss: 0.3416 | Val Loss: 0.3668
+ ✅ Validation improved — model saved.
+
+ Epoch [16/30]
+ Time taken for epoch 16: 0:00:41.728876
+ Train Loss: 0.3136 | Val Loss: 0.3711
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [17/30]
+ Time taken for epoch 17: 0:00:41.770381
+ Train Loss: 0.3043 | Val Loss: 0.3549
+ ✅ Validation improved — model saved.
+
+ Epoch [18/30]
+ Time taken for epoch 18: 0:00:41.763336
+ Train Loss: 0.2885 | Val Loss: 0.3747
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [19/30]
+ Time taken for epoch 19: 0:00:41.818925
+ Train Loss: 0.2639 | Val Loss: 0.3664
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [20/30]
+ Time taken for epoch 20: 0:00:41.741736
+ Train Loss: 0.2562 | Val Loss: 0.3781
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [21/30]
+ Time taken for epoch 21: 0:00:41.741983
+ Train Loss: 0.2397 | Val Loss: 0.3645
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [22/30]
+ Time taken for epoch 22: 0:00:41.686538
+ Train Loss: 0.2058 | Val Loss: 0.3312
+ ✅ Validation improved — model saved.
+
+ Epoch [23/30]
+ Time taken for epoch 23: 0:00:41.729279
+ Train Loss: 0.1922 | Val Loss: 0.3274
+ ✅ Validation improved — model saved.
+
+ Epoch [24/30]
+ Time taken for epoch 24: 0:00:41.670759
+ Train Loss: 0.1813 | Val Loss: 0.3299
+ ⚠️ No improvement for 1 epochs.
+
+ Epoch [25/30]
+ Time taken for epoch 25: 0:00:41.695230
+ Train Loss: 0.1753 | Val Loss: 0.3348
+ ⚠️ No improvement for 2 epochs.
+
+ Epoch [26/30]
+ Time taken for epoch 26: 0:00:41.707373
+ Train Loss: 0.1688 | Val Loss: 0.3329
+ ⚠️ No improvement for 3 epochs.
+
+ Epoch [27/30]
+ Time taken for epoch 27: 0:00:41.708740
+ Train Loss: 0.1627 | Val Loss: 0.3336
+ ⚠️ No improvement for 4 epochs.
+
+ Epoch [28/30]
+ Time taken for epoch 28: 0:00:41.708981
+ Train Loss: 0.1579 | Val Loss: 0.3376
+ ⚠️ No improvement for 5 epochs.
+
+ Epoch [29/30]
+ Time taken for epoch 29: 0:00:41.728628
+ Train Loss: 0.1571 | Val Loss: 0.3374
+ ⚠️ No improvement for 6 epochs.
+
+ Epoch [30/30]
+ Time taken for epoch 30: 0:00:41.785874
+ Train Loss: 0.1562 | Val Loss: 0.3348
+ ⚠️ No improvement for 7 epochs.
+ ⛔ Early stopping triggered.
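The logs above all follow the same checkpointing pattern: save the model whenever validation loss improves, count consecutive non-improving epochs, and stop once that count reaches 7. The following is a minimal sketch of that bookkeeping, assuming the training script works this way; the `EarlyStopper` class name and `step` method are hypothetical, not the repository's actual code.

```python
# Hypothetical sketch of the early-stopping bookkeeping implied by the logs:
# patience of 7, checkpoint saved on any validation improvement.
class EarlyStopper:
    def __init__(self, patience: int = 7):
        self.patience = patience          # max consecutive bad epochs
        self.best_val = float("inf")      # best validation loss so far
        self.bad_epochs = 0               # consecutive non-improving epochs

    def step(self, val_loss: float) -> str:
        """Record one epoch's validation loss; return the log message."""
        if val_loss < self.best_val:
            self.best_val = val_loss
            self.bad_epochs = 0
            # (the real script would save a checkpoint here)
            return "✅ Validation improved — model saved."
        self.bad_epochs += 1
        msg = f"⚠️ No improvement for {self.bad_epochs} epochs."
        if self.bad_epochs >= self.patience:
            msg += "\n⛔ Early stopping triggered."
        return msg
```

Note the counter resets to zero on every improvement, which is why the logs show runs like "No improvement for 1 epochs" restarting after each "model saved" line, and why the lr=0.001 run above stops at epoch 21 only after seven bad epochs in a row.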