kaijun123 committed on
Commit 9f419ea · verified · 1 Parent(s): 7953c27

Delete unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001.log

unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001.log DELETED
@@ -1,144 +0,0 @@
- nohup: ignoring input
- wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
- wandb: setting up run 6spl7gwh
- wandb: Tracking run with wandb version 0.22.3
- wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251111_000302-6spl7gwh
- wandb: Run `wandb offline` to turn off syncing.
- wandb: Syncing run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001
- wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/6spl7gwh
- wandb: updating run metadata
- wandb: uploading output.log; uploading wandb-summary.json
- wandb: uploading output.log; uploading wandb-summary.json; uploading config.yaml
- wandb: uploading output.log; uploading wandb-summary.json
- wandb: uploading history steps 20-20, summary, console lines 102-105
- wandb:
- wandb: Run history:
- wandb: epochs ▁▁▂▂▂▃▃▃▄▄▅▅▅▆▆▆▇▇▇██
- wandb: train_loss ██▇▇█▆▆▅▅▅▅▅▄▄▄▄▄▃▂▁▁
- wandb: val_loss ▃▃▃▃▅▆▂▂▅▆▂█▃▁▂▄▄▂▃▃▄
- wandb:
- wandb: Run summary:
- wandb: epochs 21
- wandb: train_loss 0.12465
- wandb: val_loss 0.2755
- wandb:
- wandb: 🚀 View run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001 at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/6spl7gwh
- wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
- wandb: Find logs at: ./wandb/run-20251111_000302-6spl7gwh/logs
- PyTorch version: 2.7.0+cu128
- CUDA available: True
- CUDA version: 12.8
- GPU: NVIDIA GeForce RTX 4090
- Loading weights from local directory
- freeze encoder
- checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_freeze_encoder_epochs30_batch32_lr0.001
- split: train data_split_path: combined_data_split.json
- split: valid data_split_path: combined_data_split.json
-
- Epoch [1/30]
- Time taken for epoch 1: 0:00:30.485078
- Train Loss: 0.1490 | Val Loss: 0.2743
- ✅ Validation improved — model saved.
-
- Epoch [2/30]
- Time taken for epoch 2: 0:00:29.358091
- Train Loss: 0.1479 | Val Loss: 0.2744
- ⚠️ No improvement for 1 epochs.
-
- Epoch [3/30]
- Time taken for epoch 3: 0:00:29.325511
- Train Loss: 0.1464 | Val Loss: 0.2746
- ⚠️ No improvement for 2 epochs.
-
- Epoch [4/30]
- Time taken for epoch 4: 0:00:29.550897
- Train Loss: 0.1444 | Val Loss: 0.2723
- ✅ Validation improved — model saved.
-
- Epoch [5/30]
- Time taken for epoch 5: 0:00:30.235633
- Train Loss: 0.1492 | Val Loss: 0.2782
- ⚠️ No improvement for 1 epochs.
-
- Epoch [6/30]
- Time taken for epoch 6: 0:00:30.045303
- Train Loss: 0.1432 | Val Loss: 0.2798
- ⚠️ No improvement for 2 epochs.
-
- Epoch [7/30]
- Time taken for epoch 7: 0:00:30.027648
- Train Loss: 0.1406 | Val Loss: 0.2718
- ✅ Validation improved — model saved.
-
- Epoch [8/30]
- Time taken for epoch 8: 0:00:29.842414
- Train Loss: 0.1393 | Val Loss: 0.2716
- ✅ Validation improved — model saved.
-
- Epoch [9/30]
- Time taken for epoch 9: 0:00:29.536086
- Train Loss: 0.1389 | Val Loss: 0.2786
- ⚠️ No improvement for 1 epochs.
-
- Epoch [10/30]
- Time taken for epoch 10: 0:00:29.464946
- Train Loss: 0.1386 | Val Loss: 0.2797
- ⚠️ No improvement for 2 epochs.
-
- Epoch [11/30]
- Time taken for epoch 11: 0:00:29.483013
- Train Loss: 0.1370 | Val Loss: 0.2704
- ✅ Validation improved — model saved.
-
- Epoch [12/30]
- Time taken for epoch 12: 0:00:29.445689
- Train Loss: 0.1370 | Val Loss: 0.2860
- ⚠️ No improvement for 1 epochs.
-
- Epoch [13/30]
- Time taken for epoch 13: 0:00:29.331030
- Train Loss: 0.1359 | Val Loss: 0.2736
- ⚠️ No improvement for 2 epochs.
-
- Epoch [14/30]
- Time taken for epoch 14: 0:00:29.328438
- Train Loss: 0.1353 | Val Loss: 0.2683
- ✅ Validation improved — model saved.
-
- Epoch [15/30]
- Time taken for epoch 15: 0:00:29.422292
- Train Loss: 0.1356 | Val Loss: 0.2698
- ⚠️ No improvement for 1 epochs.
-
- Epoch [16/30]
- Time taken for epoch 16: 0:00:29.363035
- Train Loss: 0.1348 | Val Loss: 0.2771
- ⚠️ No improvement for 2 epochs.
-
- Epoch [17/30]
- Time taken for epoch 17: 0:00:29.363021
- Train Loss: 0.1352 | Val Loss: 0.2749
- ⚠️ No improvement for 3 epochs.
-
- Epoch [18/30]
- Time taken for epoch 18: 0:00:29.335649
- Train Loss: 0.1319 | Val Loss: 0.2716
- ⚠️ No improvement for 4 epochs.
-
- Epoch [19/30]
- Time taken for epoch 19: 0:00:29.312675
- Train Loss: 0.1266 | Val Loss: 0.2734
- ⚠️ No improvement for 5 epochs.
-
- Epoch [20/30]
- Time taken for epoch 20: 0:00:29.372358
- Train Loss: 0.1257 | Val Loss: 0.2737
- ⚠️ No improvement for 6 epochs.
-
- Epoch [21/30]
- Time taken for epoch 21: 0:00:29.343112
- Train Loss: 0.1246 | Val Loss: 0.2755
- ⚠️ No improvement for 7 epochs.
- ⛔ Early stopping triggered.
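The deleted log illustrates a standard checkpoint-on-improvement pattern with patience-based early stopping: the run saves the model whenever validation loss hits a new best, counts epochs without improvement, and stops once the counter reaches the patience threshold (7 in this run, judging by the final messages). The training script itself is not shown, so the following is a minimal, hypothetical sketch of that logic; `EarlyStopper` and its method names are illustrative, not taken from the repository.

```python
class EarlyStopper:
    """Track best validation loss; signal when to checkpoint or stop.

    Hypothetical sketch of the behavior seen in the log above:
    patience=7 epochs without a new best triggers early stopping.
    """

    def __init__(self, patience: int = 7):
        self.patience = patience
        self.best_val = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> str:
        """Return 'save', 'wait', or 'stop' for this epoch's val loss."""
        if val_loss < self.best_val:
            # New best: checkpoint the model and reset the counter.
            self.best_val = val_loss
            self.bad_epochs = 0
            return "save"
        self.bad_epochs += 1
        if self.bad_epochs >= self.patience:
            return "stop"   # "Early stopping triggered."
        return "wait"       # "No improvement for N epochs."
```

Replaying the logged validation losses from epoch 14 onward (0.2683 saves; the next six epochs wait; 0.2755 at epoch 21 is the seventh non-improving epoch and stops) reproduces the log's messages.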