kaijun123 committed on
Commit
220ac78
·
verified ·
1 Parent(s): 9f419ea

Delete unet_epochs30_batch32_lr0.001_loglossTrue.log

unet_epochs30_batch32_lr0.001_loglossTrue.log DELETED
@@ -1,188 +0,0 @@
- nohup: ignoring input
- wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
- wandb: setting up run tloron2v
- wandb: Tracking run with wandb version 0.22.3
- wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251110_181653-tloron2v
- wandb: Run `wandb offline` to turn off syncing.
- wandb: Syncing run unet_epochs30_batch32_lr0.001_loglossTrue
- wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/tloron2v
- wandb: updating run metadata
- wandb: uploading output.log; uploading wandb-summary.json
- wandb: uploading wandb-summary.json; uploading config.yaml
- wandb: uploading history steps 29-29, summary, console lines 147-150
- wandb:
- wandb: Run history:
- wandb: epochs ▁▁▁▂▂▂▂▃▃▃▃▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇▇███
- wandb: train_loss █▄▄▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁
- wandb: val_loss █▄▃▂▃▂▂▂▂▂▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
- wandb:
- wandb: Run summary:
- wandb: epochs 30
- wandb: train_loss 0.14168
- wandb: val_loss 0.32459
- wandb:
- wandb: 🚀 View run unet_epochs30_batch32_lr0.001_loglossTrue at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/tloron2v
- wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
- wandb: Find logs at: ./wandb/run-20251110_181653-tloron2v/logs
- PyTorch version: 2.7.0+cu128
- CUDA available: True
- CUDA version: 12.8
- GPU: NVIDIA GeForce RTX 4090
- sigmoid disabled
- loss_fn log_loss: True
- checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_loglossTrue
- split: train data_split_path: combined_data_split.json
- split: valid data_split_path: combined_data_split.json
-
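Each epoch entry below follows a fixed pattern (`Epoch [n/30]`, elapsed time, then a `Train Loss: … | Val Loss: …` line). A minimal stdlib sketch for pulling the loss pairs back out of such a log, useful when re-plotting a deleted run; the line format is assumed from this file, not taken from the repository's code:

```python
import re

# Matches lines like: "Train Loss: 1.5506 | Val Loss: 1.4750"
LOSS_RE = re.compile(r"Train Loss: ([0-9.]+) \| Val Loss: ([0-9.]+)")

def parse_losses(log_text):
    """Return a list of (train_loss, val_loss) floats, one per epoch found."""
    return [(float(t), float(v)) for t, v in LOSS_RE.findall(log_text)]
```

Applied to the full log text this yields one pair per epoch, in order, e.g. `parse_losses("Train Loss: 1.5506 | Val Loss: 1.4750")` gives `[(1.5506, 1.4750)]`.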
- Epoch [1/30]
- Time taken for epoch 1: 0:00:42.803687
- Train Loss: 1.5506 | Val Loss: 1.4750
- ✅ Validation improved — model saved.
-
- Epoch [2/30]
- Time taken for epoch 2: 0:00:41.725235
- Train Loss: 0.7892 | Val Loss: 0.8678
- ✅ Validation improved — model saved.
-
- Epoch [3/30]
- Time taken for epoch 3: 0:00:43.323791
- Train Loss: 0.6749 | Val Loss: 0.6256
- ✅ Validation improved — model saved.
-
- Epoch [4/30]
- Time taken for epoch 4: 0:00:41.882186
- Train Loss: 0.6081 | Val Loss: 0.5085
- ✅ Validation improved — model saved.
-
- Epoch [5/30]
- Time taken for epoch 5: 0:00:41.828401
- Train Loss: 0.5765 | Val Loss: 0.5889
- ⚠️ No improvement for 1 epochs.
-
- Epoch [6/30]
- Time taken for epoch 6: 0:00:42.168842
- Train Loss: 0.5159 | Val Loss: 0.5590
- ⚠️ No improvement for 2 epochs.
-
- Epoch [7/30]
- Time taken for epoch 7: 0:00:42.150284
- Train Loss: 0.4775 | Val Loss: 0.4829
- ✅ Validation improved — model saved.
-
- Epoch [8/30]
- Time taken for epoch 8: 0:00:42.615359
- Train Loss: 0.4685 | Val Loss: 0.4615
- ✅ Validation improved — model saved.
-
- Epoch [9/30]
- Time taken for epoch 9: 0:00:42.108275
- Train Loss: 0.4206 | Val Loss: 0.4100
- ✅ Validation improved — model saved.
-
- Epoch [10/30]
- Time taken for epoch 10: 0:00:42.083122
- Train Loss: 0.4102 | Val Loss: 0.4860
- ⚠️ No improvement for 1 epochs.
-
- Epoch [11/30]
- Time taken for epoch 11: 0:00:41.914494
- Train Loss: 0.3849 | Val Loss: 0.3863
- ✅ Validation improved — model saved.
-
- Epoch [12/30]
- Time taken for epoch 12: 0:00:41.899065
- Train Loss: 0.3724 | Val Loss: 0.3726
- ✅ Validation improved — model saved.
-
- Epoch [13/30]
- Time taken for epoch 13: 0:00:41.865987
- Train Loss: 0.3548 | Val Loss: 0.4922
- ⚠️ No improvement for 1 epochs.
-
- Epoch [14/30]
- Time taken for epoch 14: 0:00:42.502191
- Train Loss: 0.3301 | Val Loss: 0.3736
- ⚠️ No improvement for 2 epochs.
-
- Epoch [15/30]
- Time taken for epoch 15: 0:00:42.039146
- Train Loss: 0.3201 | Val Loss: 0.3700
- ✅ Validation improved — model saved.
-
- Epoch [16/30]
- Time taken for epoch 16: 0:00:42.558220
- Train Loss: 0.2964 | Val Loss: 0.3731
- ⚠️ No improvement for 1 epochs.
-
- Epoch [17/30]
- Time taken for epoch 17: 0:00:41.677247
- Train Loss: 0.2873 | Val Loss: 0.3438
- ✅ Validation improved — model saved.
-
- Epoch [18/30]
- Time taken for epoch 18: 0:00:41.744951
- Train Loss: 0.2783 | Val Loss: 0.3603
- ⚠️ No improvement for 1 epochs.
-
- Epoch [19/30]
- Time taken for epoch 19: 0:00:41.804037
- Train Loss: 0.2565 | Val Loss: 0.3598
- ⚠️ No improvement for 2 epochs.
-
- Epoch [20/30]
- Time taken for epoch 20: 0:00:42.401755
- Train Loss: 0.2383 | Val Loss: 0.3924
- ⚠️ No improvement for 3 epochs.
-
- Epoch [21/30]
- Time taken for epoch 21: 0:00:42.078392
- Train Loss: 0.2355 | Val Loss: 0.3562
- ⚠️ No improvement for 4 epochs.
-
- Epoch [22/30]
- Time taken for epoch 22: 0:00:41.885645
- Train Loss: 0.1980 | Val Loss: 0.3224
- ✅ Validation improved — model saved.
-
- Epoch [23/30]
- Time taken for epoch 23: 0:00:41.761612
- Train Loss: 0.1798 | Val Loss: 0.3134
- ✅ Validation improved — model saved.
-
- Epoch [24/30]
- Time taken for epoch 24: 0:00:42.195898
- Train Loss: 0.1695 | Val Loss: 0.3180
- ⚠️ No improvement for 1 epochs.
-
- Epoch [25/30]
- Time taken for epoch 25: 0:00:42.152904
- Train Loss: 0.1617 | Val Loss: 0.3190
- ⚠️ No improvement for 2 epochs.
-
- Epoch [26/30]
- Time taken for epoch 26: 0:00:41.827357
- Train Loss: 0.1553 | Val Loss: 0.3225
- ⚠️ No improvement for 3 epochs.
-
- Epoch [27/30]
- Time taken for epoch 27: 0:00:41.814583
- Train Loss: 0.1503 | Val Loss: 0.3282
- ⚠️ No improvement for 4 epochs.
-
- Epoch [28/30]
- Time taken for epoch 28: 0:00:41.907791
- Train Loss: 0.1441 | Val Loss: 0.3238
- ⚠️ No improvement for 5 epochs.
-
- Epoch [29/30]
- Time taken for epoch 29: 0:00:42.636240
- Train Loss: 0.1426 | Val Loss: 0.3267
- ⚠️ No improvement for 6 epochs.
-
- Epoch [30/30]
- Time taken for epoch 30: 0:00:42.123800
- Train Loss: 0.1417 | Val Loss: 0.3246
- ⚠️ No improvement for 7 epochs.
- ⛔ Early stopping triggered.
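The improvement and no-improvement messages in this log are consistent with a best-loss tracker using a patience of 7. A minimal sketch of that logic, inferred from the log's behaviour; the class and method names are illustrative, not the repository's actual training code:

```python
class EarlyStopper:
    """Track the best validation loss and signal a stop after `patience`
    consecutive epochs without improvement (illustrative reconstruction;
    the real training script is not shown in this log)."""

    def __init__(self, patience=7):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # consecutive non-improving epochs

    def step(self, val_loss):
        """Return True when training should stop."""
        if val_loss < self.best:
            # corresponds to "Validation improved — model saved."
            self.best = val_loss
            self.bad_epochs = 0
            return False
        # corresponds to "No improvement for k epochs."
        self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Feeding the thirty validation losses from this log into `step()` reproduces the message sequence above, with the best loss 0.3134 at epoch 23 and the stop firing on epoch 30.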