kaijun123 committed on
Commit 7953c27 · verified · 1 Parent(s): a972062

Delete unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005.log

unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005.log DELETED
@@ -1,105 +0,0 @@
- nohup: ignoring input
- wandb: Currently logged in as: kaijun123 (kaijun123-nanyang-technological-university-singapore) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
- wandb: setting up run t2gz18kk
- wandb: Tracking run with wandb version 0.22.3
- wandb: Run data is saved locally in /home/kaijun/CE6190---Image-Segmentation/wandb/run-20251111_141336-t2gz18kk
- wandb: Run `wandb offline` to turn off syncing.
- wandb: Syncing run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005
- wandb: ⭐️ View project at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: 🚀 View run at https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/t2gz18kk
- wandb: updating run metadata
- wandb: uploading wandb-summary.json; uploading output.log
- wandb: uploading wandb-summary.json; uploading output.log; uploading config.yaml; uploading history steps 12-12, summary, console lines 62-65
- wandb: uploading wandb-summary.json; uploading output.log; uploading config.yaml
- wandb: uploading wandb-summary.json; uploading config.yaml
- wandb: uploading data
- wandb:
- wandb: Run history:
- wandb: epochs ▁▂▂▃▃▄▅▅▆▆▇▇█
- wandb: train_loss █▅▅▄▄▄▃▃▃▃▂▁▁
- wandb: val_loss ▇▅▁▅▅▁▅█▂█▃▄▃
- wandb:
- wandb: Run summary:
- wandb: epochs 13
- wandb: train_loss 0.13112
- wandb: val_loss 0.273
- wandb:
- wandb: 🚀 View run unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005 at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation/runs/t2gz18kk
- wandb: ⭐️ View project at: https://wandb.ai/kaijun123-nanyang-technological-university-singapore/ce6190-semantic-segmentation
- wandb: Synced 5 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
- wandb: Find logs at: ./wandb/run-20251111_141336-t2gz18kk/logs
- PyTorch version: 2.7.0+cu128
- CUDA available: True
- CUDA version: 12.8
- GPU: NVIDIA GeForce RTX 4090
- Loading weights from local directory
- freeze encoder
- checkpoint_path: checkpoints/unet_epochs30_batch32_lr0.001_freeze_encoder_epochs20_batch32_lr0.005
- split: train data_split_path: combined_data_split.json
- split: valid data_split_path: combined_data_split.json
-
- Epoch [1/20]
- Time taken for epoch 1: 0:01:02.609510
- Train Loss: 0.1681 | Val Loss: 0.2832
- ✅ Validation improved — model saved.
-
- Epoch [2/20]
- Time taken for epoch 2: 0:01:00.957051
- Train Loss: 0.1535 | Val Loss: 0.2777
- ✅ Validation improved — model saved.
-
- Epoch [3/20]
- Time taken for epoch 3: 0:01:00.259887
- Train Loss: 0.1505 | Val Loss: 0.2684
- ✅ Validation improved — model saved.
-
- Epoch [4/20]
- Time taken for epoch 4: 0:01:01.074658
- Train Loss: 0.1488 | Val Loss: 0.2785
- ⚠️ No improvement for 1 epochs.
-
- Epoch [5/20]
- Time taken for epoch 5: 0:01:01.501208
- Train Loss: 0.1474 | Val Loss: 0.2783
- ⚠️ No improvement for 2 epochs.
-
- Epoch [6/20]
- Time taken for epoch 6: 0:01:01.322478
- Train Loss: 0.1456 | Val Loss: 0.2683
- ✅ Validation improved — model saved.
-
- Epoch [7/20]
- Time taken for epoch 7: 0:01:01.486441
- Train Loss: 0.1436 | Val Loss: 0.2781
- ⚠️ No improvement for 1 epochs.
-
- Epoch [8/20]
- Time taken for epoch 8: 0:01:01.507692
- Train Loss: 0.1421 | Val Loss: 0.2840
- ⚠️ No improvement for 2 epochs.
-
- Epoch [9/20]
- Time taken for epoch 9: 0:01:01.640441
- Train Loss: 0.1409 | Val Loss: 0.2717
- ⚠️ No improvement for 3 epochs.
-
- Epoch [10/20]
- Time taken for epoch 10: 0:01:01.018847
- Train Loss: 0.1402 | Val Loss: 0.2848
- ⚠️ No improvement for 4 epochs.
-
- Epoch [11/20]
- Time taken for epoch 11: 0:01:01.497435
- Train Loss: 0.1338 | Val Loss: 0.2739
- ⚠️ No improvement for 5 epochs.
-
- Epoch [12/20]
- Time taken for epoch 12: 0:00:58.877680
- Train Loss: 0.1323 | Val Loss: 0.2748
- ⚠️ No improvement for 6 epochs.
-
- Epoch [13/20]
- Time taken for epoch 13: 0:01:00.537987
- Train Loss: 0.1311 | Val Loss: 0.2730
- ⚠️ No improvement for 7 epochs.
- ⛔ Early stopping triggered.
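The deleted log reflects a standard early-stopping loop: save a checkpoint whenever validation loss improves, count stagnant epochs otherwise, and stop once the count reaches a patience threshold (here apparently 7, since the run halts after the seventh "No improvement" message). A minimal sketch of that pattern, using the validation losses from the log — `run_early_stopping` and its signature are illustrative, not the repository's actual training code:

```python
# Illustrative sketch (NOT the repo's code) of the early-stopping pattern
# visible in the deleted log: save on val-loss improvement, count stagnant
# epochs, stop after `patience` epochs without improvement.

def run_early_stopping(val_losses, patience=7):
    """Return (best_loss, saved_epochs, stopped_early) for a sequence of
    per-epoch validation losses, mimicking the log's behaviour."""
    best = float("inf")
    bad_epochs = 0
    saved = []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:                  # "Validation improved — model saved."
            best = loss
            bad_epochs = 0
            saved.append(epoch)
        else:                            # "No improvement for N epochs."
            bad_epochs += 1
            if bad_epochs >= patience:   # "Early stopping triggered."
                return best, saved, True
    return best, saved, False

# Per-epoch validation losses from the log above:
losses = [0.2832, 0.2777, 0.2684, 0.2785, 0.2783, 0.2683,
          0.2781, 0.2840, 0.2717, 0.2848, 0.2739, 0.2748, 0.2730]
```

Running it on these losses reproduces the log's trace: checkpoints at epochs 1, 2, 3 and 6, a best validation loss of 0.2683, and early stopping at epoch 13.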