Add training_output.log
training_logs/training_output.log
ADDED
@@ -0,0 +1,185 @@
+nohup: ignoring input
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:70: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+  self.scaler = GradScaler()
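The warning above asks for a one-line rename. A minimal sketch of the replacement it suggests, assuming a current PyTorch 2.x install:

    import torch

    # The device string moves into the constructor; behavior is otherwise
    # the same as the deprecated torch.cuda.amp.GradScaler().
    scaler = torch.amp.GradScaler('cuda')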
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:116: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
+  self.embeddings = torch.load(combined_path, map_location=self.device)
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:180: FutureWarning: [same `torch.load` weights_only warning as above]
+  self.compressor.load_state_dict(torch.load('final_compressor_model.pth', map_location=self.device))
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:181: FutureWarning: [same `torch.load` weights_only warning as above]
+  self.decompressor.load_state_dict(torch.load('final_decompressor_model.pth', map_location=self.device))
+/data2/edwardsun/flow_home/cfg_dataset.py:253: FutureWarning: [same `torch.load` weights_only warning as above]
+  self.embeddings = torch.load(combined_path, map_location='cpu')
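All four torch.load warnings above recommend the same fix. A minimal sketch, assuming the files hold plain tensors and state_dicts (as the calls above suggest):

    import torch

    # weights_only=True restricts unpickling to tensors and other
    # allowlisted types, closing the arbitrary-code-execution path
    # the warning describes.
    embeddings = torch.load('all_peptide_embeddings.pt',
                            map_location='cpu', weights_only=True)
    state_dict = torch.load('final_compressor_model.pth',
                            map_location='cpu', weights_only=True)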
+Starting optimized training with batch_size=96, epochs=6000
+Using GPU 0 for optimized H100 training
+Mixed precision: True
+Batch size: 96
+Target epochs: 6000
+Learning rate: 0.0004 -> 0.0002
+✓ Mixed precision training enabled (BF16)
+Loading ALL AMP embeddings from /data2/edwardsun/flow_project/peptide_embeddings/...
+Loading combined embeddings from /data2/edwardsun/flow_project/peptide_embeddings/all_peptide_embeddings.pt...
+✓ Loaded ALL embeddings: torch.Size([17968, 50, 1280])
+Computing preprocessing statistics...
+✓ Statistics computed and saved:
+  Total embeddings: 17,968
+  Mean: -0.0005 ± 0.0897
+  Std: 0.0869 ± 0.1168
+  Range: [-9.1738, 3.2894]
+Initializing models...
+✓ Model compiled with torch.compile for speedup
+✓ Models initialized:
+  Compressor parameters: 78,817,360
+  Decompressor parameters: 39,458,720
+  Flow model parameters: 50,779,584
+Initializing datasets with FULL data...
+Loading AMP embeddings from /data2/edwardsun/flow_project/peptide_embeddings/...
+Loading combined embeddings from /data2/edwardsun/flow_project/peptide_embeddings/all_peptide_embeddings.pt (FULL DATA)...
+✓ Loaded ALL embeddings: torch.Size([17968, 50, 1280])
+Loading CFG data from FASTA: /home/edwardsun/flow/combined_final.fasta...
+Parsing FASTA file: /home/edwardsun/flow/combined_final.fasta
+Label assignment: >AP = AMP (0), >sp = Non-AMP (1)
+✓ Parsed 6983 valid sequences from FASTA
+  AMP sequences: 3306
+  Non-AMP sequences: 3677
+  Masked for CFG: 698
+Loaded 6983 CFG sequences
+Label distribution: [3306 3677]
+Masked 698 labels for CFG training
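The 698 masked labels are roughly 10% of the 6,983 sequences, a common label-dropout rate for classifier-free guidance. A hypothetical sketch of such a masking step; the mask value, the rate, and the function name are assumptions, not taken from the training code:

    import torch

    def mask_labels_for_cfg(labels: torch.Tensor,
                            drop_prob: float = 0.1,
                            mask_value: int = -1) -> torch.Tensor:
        # Drop each label with probability drop_prob so the model also
        # learns an unconditional path for classifier-free guidance.
        drop = torch.rand(labels.shape[0]) < drop_prob
        masked = labels.clone()
        masked[drop] = mask_value  # stand-in "unconditional" token
        return masked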
+Aligning AMP embeddings with CFG data...
+Aligned 6983 samples
+CFG Flow Dataset initialized:
+  AMP embeddings: torch.Size([17968, 50, 1280])
+  CFG labels: 6983
+  Aligned samples: 6983
+✓ Dataset initialized with FULL data:
+  Total samples: 6,983
+  Batch size: 96
+  Batches per epoch: 73
+  Total training steps: 438,000
+  Validation every: 10,000 steps
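These figures are self-consistent: ceil(6,983 / 96) = 73 batches per epoch, and 73 batches × 6,000 epochs = 438,000 total steps.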
+Initializing optimizer and scheduler...
+✓ Optimizer initialized:
+  Base LR: 0.0004
+  Min LR: 0.0002
+  Warmup steps: 5000
+  Weight decay: 0.01
+  Gradient clip norm: 1.0
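The log shows only the schedule's endpoints and warmup length, not its shape. One schedule consistent with these settings (linear warmup to the base LR, then cosine decay to the floor) might look like the sketch below; treat the decay shape as an assumption:

    import math
    import torch

    def lr_lambda(step: int, base_lr: float = 4e-4, min_lr: float = 2e-4,
                  warmup: int = 5000, total: int = 438_000) -> float:
        # Returned value multiplies the optimizer's base LR (LambdaLR contract).
        if step < warmup:
            return step / warmup  # linear warmup: 0 -> base_lr
        progress = (step - warmup) / max(1, total - warmup)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return (min_lr + (base_lr - min_lr) * cosine) / base_lr  # decays to min_lr

    # scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)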
+✓ Optimized Single GPU training setup complete with FULL DATA!
+🚀 Starting Optimized Single GPU Flow Matching Training with FULL DATA
+GPU: 0
+Total iterations: 6000
+Batch size: 96
+Total samples: 6,983
+Mixed precision: True
+Estimated time: ~8-10 hours (overnight training with ALL data)
+============================================================
+
+  with autocast(dtype=torch.bfloat16):
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:392: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+  with autocast(dtype=torch.bfloat16):
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:392: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+  with autocast(dtype=torch.bfloat16):
+
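This autocast warning repeats throughout the run; as with GradScaler, the fix is a rename. A minimal sketch of the non-deprecated form. Note that bfloat16, unlike float16, does not require gradient scaling, so the GradScaler created earlier is likely redundant for this run:

    import torch

    with torch.amp.autocast('cuda', dtype=torch.bfloat16):
        # forward pass and loss computation would run here in BF16
        pass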
+Epoch 0 | Avg Loss: 0.950054 | LR: 4.53e-05 | Time: 69.8s | Samples: 6,983
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:392: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+  with autocast(dtype=torch.bfloat16):
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:392: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+  with autocast(dtype=torch.bfloat16):
+
+Epoch 1 | Avg Loss: 0.415130 | LR: 5.05e-05 | Time: 5.6s | Samples: 6,983
+/data2/edwardsun/flow_home/amp_flow_training_single_gpu_full_data.py:392: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+  with autocast(dtype=torch.bfloat16):
+
+Epoch 2 | Avg Loss: 0.227218 | LR: 5.58e-05 | Time: 2.7s | Samples: 6,983
+
+Epoch 3 | Avg Loss: 0.178846 | LR: 6.10e-05 | Time: 2.6s | Samples: 6,983
+
+Epoch 4 | Avg Loss: 0.148526 | LR: 6.63e-05 | Time: 2.8s | Samples: 6,983
+
+Epoch 5 | Avg Loss: 0.127575 | LR: 7.15e-05 | Time: 2.8s | Samples: 6,983
+
+Epoch 6 | Avg Loss: 0.109353 | LR: 7.68e-05 | Time: 2.7s | Samples: 6,983
+
+Epoch 7 | Avg Loss: 0.101109 | LR: 8.20e-05 | Time: 2.7s | Samples: 6,983
+
+Epoch 8 | Avg Loss: 0.089056 | LR: 8.73e-05 | Time: 2.9s | Samples: 6,983
+
+Epoch 9 | Avg Loss: 0.083894 | LR: 9.26e-05 | Time: 2.9s | Samples: 6,983
+
+Epoch 10 | Avg Loss: 0.077295 | LR: 9.78e-05 | Time: 2.9s | Samples: 6,983
+
+Epoch 11 | Avg Loss: 0.072662 | LR: 1.03e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 12 | Avg Loss: 0.069846 | LR: 1.08e-04 | Time: 2.9s | Samples: 6,983
+
+Epoch 13 | Avg Loss: 0.064569 | LR: 1.14e-04 | Time: 2.7s | Samples: 6,983
+
+Epoch 14 | Avg Loss: 0.057743 | LR: 1.19e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 15 | Avg Loss: 0.058437 | LR: 1.24e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 16 | Avg Loss: 0.055771 | LR: 1.29e-04 | Time: 2.7s | Samples: 6,983
+
+Epoch 17 | Avg Loss: 0.053140 | LR: 1.35e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 18 | Avg Loss: 0.049295 | LR: 1.40e-04 | Time: 2.9s | Samples: 6,983
+
+Epoch 19 | Avg Loss: 0.049483 | LR: 1.45e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 20 | Avg Loss: 0.048242 | LR: 1.50e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 21 | Avg Loss: 0.047419 | LR: 1.56e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 22 | Avg Loss: 0.047794 | LR: 1.61e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 23 | Avg Loss: 0.047601 | LR: 1.66e-04 | Time: 3.0s | Samples: 6,983
+
+Epoch 24 | Avg Loss: 0.045266 | LR: 1.71e-04 | Time: 2.9s | Samples: 6,983
+
+Epoch 25 | Avg Loss: 0.044707 | LR: 1.77e-04 | Time: 2.7s | Samples: 6,983
+
+Epoch 26 | Avg Loss: 0.041951 | LR: 1.82e-04 | Time: 2.7s | Samples: 6,983
+
+Epoch 27 | Avg Loss: 0.044097 | LR: 1.87e-04 | Time: 2.9s | Samples: 6,983
+
+Epoch 28 | Avg Loss: 0.043588 | LR: 1.92e-04 | Time: 2.8s | Samples: 6,983
+
+Epoch 29 | Avg Loss: 0.042376 | LR: 1.98e-04 | Time: 3.8s | Samples: 6,983
+
+Epoch 30 | Avg Loss: 0.039175 | LR: 2.03e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 31 | Avg Loss: 0.041455 | LR: 2.08e-04 | Time: 4.0s | Samples: 6,983
+
+Epoch 32 | Avg Loss: 0.040566 | LR: 2.13e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 33 | Avg Loss: 0.038954 | LR: 2.19e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 34 | Avg Loss: 0.041221 | LR: 2.24e-04 | Time: 3.8s | Samples: 6,983
+
+Epoch 35 | Avg Loss: 0.039926 | LR: 2.29e-04 | Time: 4.0s | Samples: 6,983
+
+Epoch 36 | Avg Loss: 0.043514 | LR: 2.34e-04 | Time: 4.0s | Samples: 6,983
+
+Epoch 37 | Avg Loss: 0.037676 | LR: 2.40e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 38 | Avg Loss: 0.039012 | LR: 2.45e-04 | Time: 3.8s | Samples: 6,983
+
+Epoch 39 | Avg Loss: 0.037944 | LR: 2.50e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 40 | Avg Loss: 0.037019 | LR: 2.55e-04 | Time: 3.8s | Samples: 6,983
+
+Epoch 41 | Avg Loss: 0.036788 | LR: 2.61e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 42 | Avg Loss: 0.038254 | LR: 2.66e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 43 | Avg Loss: 0.037138 | LR: 2.71e-04 | Time: 4.0s | Samples: 6,983
+
+Epoch 44 | Avg Loss: 0.039265 | LR: 2.77e-04 | Time: 3.8s | Samples: 6,983
+
+Epoch 45 | Avg Loss: 0.036169 | LR: 2.82e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 46 | Avg Loss: 0.037829 | LR: 2.87e-04 | Time: 3.9s | Samples: 6,983
+
+Epoch 47 | Avg Loss: 0.038144 | LR: 2.92e-04 | Time: 4.0s | Samples: 6,983
+
+Epoch 48 | Avg Loss: 0.034156 | LR: 2.98e-04 | Time: 3.9s | Samples: 6,983