Update README and add additional benchmarking logs
- README.md +184 -18
- logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250925_224126.log +353 -0
- logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032614.log +347 -0
- logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250926_005655.log +331 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053918.log +377 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_061633.log +349 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_070134.log +353 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_075124.log +323 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_080500.log +345 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_081922.log +357 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_093214.log +323 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_103314.log +415 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_125051.log +329 -0
- logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_140905.log +355 -0
README.md
CHANGED
    metrics:
    - type: rmse
      value: 0.6708
  - task:
      type: text-classification
      name: Classification (ROC AUC)
    dataset:
      name: Antimalarial
      type: Antimalarial
    metrics:
    - type: roc_auc
      value: 0.8832
  - task:
      type: text-classification
      name: Classification (ROC AUC)
    dataset:
      name: Cocrystal
      type: Cocrystal
    metrics:
    - type: roc_auc
      value: 0.7866
  - task:
      type: text-classification
      name: Classification (ROC AUC)
    dataset:
      name: COVID19
      type: COVID19
    metrics:
    - type: roc_auc
      value: 0.8308
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME microsom stab human
      type: ADME
    metrics:
    - type: rmse
      value: 0.4375
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME microsom stab rat
      type: ADME
    metrics:
    - type: rmse
      value: 0.4542
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME permeability
      type: ADME
    metrics:
    - type: rmse
      value: 0.5202
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME ppb human
      type: ADME
    metrics:
    - type: rmse
      value: 0.7618
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME ppb rat
      type: ADME
    metrics:
    - type: rmse
      value: 0.7027
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: ADME solubility
      type: ADME
    metrics:
    - type: rmse
      value: 0.5023
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: AstraZeneca CL
      type: AstraZeneca
    metrics:
    - type: rmse
      value: 0.5104
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: AstraZeneca LogD74
      type: AstraZeneca
    metrics:
    - type: rmse
      value: 0.7599
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: AstraZeneca PPB
      type: AstraZeneca
    metrics:
    - type: rmse
      value: 0.1233
  - task:
      type: regression
      name: Regression (RMSE)
    dataset:
      name: AstraZeneca Solubility
      type: AstraZeneca
    metrics:
    - type: rmse
      value: 0.8730
---

# ModChemBERT: ModernBERT as a Chemical Language Model
- Encoder Layers: 22
- Attention heads: 12
- Max sequence length: 256 tokens (MLM primarily trained with 128-token sequences)
- Tokenizer: BPE tokenizer using [MolFormer's vocab](https://github.com/emapco/ModChemBERT/blob/main/modchembert/tokenizers/molformer/vocab.json) (2362 tokens)
## Pooling (Classifier / Regressor Head)

Kallergis et al. [1] demonstrated that the CLM embedding method used prior to the prediction head was the strongest contributor to downstream performance among the evaluated hyperparameters.

Behrendt et al. [2] noted that the last few layers contain task-specific information and that pooling methods leveraging information from multiple layers can enhance model performance. Their results further demonstrated that the `max_seq_mha` pooling method was particularly effective in low-data regimes, which is often the case for molecular property prediction tasks.

Multiple pooling strategies are supported by ModChemBERT to explore their impact:

- `mean_sum`: Mean over all layers then sum tokens
- `max_seq_mean`: Max over last k layers then mean tokens

Note: ModChemBERT's `max_seq_mha` differs from MaxPoolBERT [2]. MaxPoolBERT uses PyTorch's `nn.MultiheadAttention`, whereas ModChemBERT's `ModChemBertPoolingAttention` adapts ModernBERT's `ModernBertAttention`. On ChemBERTa-3 benchmarks this variant produced stronger validation metrics and avoided the training instabilities (sporadic zero / NaN losses and gradient norms) seen with `nn.MultiheadAttention`. Training instability with ModernBERT has been reported previously ([discussion 1](https://huggingface.co/answerdotai/ModernBERT-base/discussions/59) and [discussion 2](https://huggingface.co/answerdotai/ModernBERT-base/discussions/63)).
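As an illustration, the two simpler pooling strategies listed above can be sketched over the per-layer hidden states returned by a Hugging Face encoder. This is a minimal sketch; the function and variable names are illustrative, not the repository's implementation:

```python
import torch

def pool_hidden_states(hidden_states, method="max_seq_mean", k=3):
    """Pool per-layer hidden states into one embedding per molecule.

    hidden_states: sequence of [batch, seq_len, hidden] tensors, one per
    layer, e.g. model(..., output_hidden_states=True).hidden_states.
    """
    stacked = torch.stack(tuple(hidden_states))  # [layers, batch, seq, hidden]
    if method == "max_seq_mean":
        # Element-wise max over the last k layers, then mean over tokens
        return stacked[-k:].max(dim=0).values.mean(dim=1)
    if method == "mean_sum":
        # Mean over all layers, then sum over tokens
        return stacked.mean(dim=0).sum(dim=1)
    raise ValueError(f"unknown pooling method: {method}")

# Toy shapes: 4 layers, batch of 2 molecules, 5 tokens, hidden size 8
layers = [torch.randn(2, 5, 8) for _ in range(4)]
pooled = pool_hidden_states(layers, "max_seq_mean", k=2)
print(pooled.shape)  # torch.Size([2, 8])
```

The `max_seq_mha` variant additionally passes the pooled sequence through an attention block, which this sketch omits.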
## Training Pipeline

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/656892962693fa22e18b5331/bxNbpgMkU8m60ypyEJoWQ.png" alt="ModChemBERT Training Pipeline" width="650"/>
</div>

Checkpoint merging is inspired by ModernBERT [4], JaColBERTv2.5 [5], and Llama 3.1 [6], whose results show that model merging can enhance generalization or performance while mitigating overfitting to any single fine-tune or annealing checkpoint.
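A uniform parameter average ("model souping") is the simplest form of the merging idea referenced here; the sketch below is illustrative and may differ from the actual merge recipe used for ModChemBERT:

```python
def average_state_dicts(state_dicts):
    """Uniformly average parameters that share the same key across checkpoints."""
    keys = state_dicts[0].keys()
    return {k: sum(sd[k] for sd in state_dicts) / len(state_dicts) for k in keys}

# Toy "checkpoints" with scalar parameters; real state dicts hold tensors,
# and the same averaging applies element-wise.
ckpt_a = {"encoder.weight": 1.0, "head.bias": 0.2}
ckpt_b = {"encoder.weight": 3.0, "head.bias": 0.4}
merged = average_state_dicts([ckpt_a, ckpt_b])
print(merged["encoder.weight"])  # 2.0
```

Weighted or greedy variants (keeping only checkpoints that improve validation score) are common refinements of this baseline.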

## Datasets

- Pretraining: [Derify/augmented_canonical_druglike_QED_Pfizer_15M](https://huggingface.co/datasets/Derify/augmented_canonical_druglike_QED_Pfizer_15M) (canonical_smiles column)
- Domain Adaptive Pretraining (DAPT) & Task Adaptive Fine-tuning (TAFT): ADME (6 tasks) + AstraZeneca (4 tasks) datasets, split with DA4MT's [3] Bemis-Murcko scaffold splitter (see [domain-adaptation-molecular-transformers](https://github.com/emapco/ModChemBERT/blob/main/domain-adaptation-molecular-transformers/da4mt/splitting.py))
- Benchmarking:
  - ChemBERTa-3 [7] datasets:
    - Classification: BACE, BBBP, TOX21, HIV, SIDER, CLINTOX
    - Regression: ESOL, FREESOLV, LIPO, BACE, CLEARANCE
  - Mswahili et al. [8] proposed additional datasets for benchmarking chemical language models:
    - Classification: Antimalarial [9], Cocrystal [10], COVID19 [11]
  - DAPT/TAFT stage regression datasets:
    - ADME [12]: adme_microsom_stab_h, adme_microsom_stab_r, adme_permeability, adme_ppb_h, adme_ppb_r, adme_solubility
    - AstraZeneca: astrazeneca_CL, astrazeneca_LogD74, astrazeneca_PPB, astrazeneca_Solubility

## Benchmarking

Benchmarks were conducted with the ChemBERTa-3 framework. DeepChem scaffold splits were used for all datasets except Antimalarial, which uses a random split. Each task was trained for 100 epochs, and results were averaged across 3 random seeds.

The complete hyperparameter configurations for these benchmarks are available in the [ChemBERTa3 configs](https://github.com/emapco/ModChemBERT/tree/main/conf/chemberta3) directory.

### Evaluation Methodology

- Classification Metric: ROC AUC
- Regression Metric: RMSE
- Aggregation: Mean ± standard deviation of the triplicate results
- Input Constraints: SMILES truncated / filtered to ≤200 tokens, following ChemBERTa-3's recommendation
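The ≤200-token input constraint amounts to a simple length filter over tokenized SMILES. A minimal sketch, using a simplistic regex tokenizer for illustration only (the benchmarks tokenize with the model's BPE tokenizer):

```python
import re

# Crude SMILES tokenizer: two-letter halogens, bracket atoms, then single chars.
SMILES_TOKEN = re.compile(r"Cl|Br|\[[^\]]+\]|.")

def within_token_limit(smiles: str, max_tokens: int = 200) -> bool:
    """Return True if the tokenized SMILES fits within the length budget."""
    return len(SMILES_TOKEN.findall(smiles)) <= max_tokens

molecules = ["c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "C" * 250]
kept = [smi for smi in molecules if within_token_limit(smi)]
print(len(kept))  # 2
```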

### Results

<details><summary>Click to expand</summary>

#### ChemBERTa-3 Classification Datasets (ROC AUC - Higher is better)

| Model | BACE↑ | BBBP↑ | CLINTOX↑ | HIV↑ | SIDER↑ | TOX21↑ | AVG† |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 0.781 ± 0.019 | 0.700 ± 0.027 | 0.979 ± 0.022 | 0.740 ± 0.013 | 0.611 ± 0.002 | 0.718 ± 0.011 | 0.7548 |
| [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 0.819 ± 0.019 | 0.735 ± 0.019 | 0.839 ± 0.013 | 0.762 ± 0.005 | 0.618 ± 0.005 | 0.723 ± 0.012 | 0.7493 |
| MoLFormer-LHPC* | **0.887 ± 0.004** | **0.908 ± 0.013** | 0.993 ± 0.004 | 0.750 ± 0.003 | 0.622 ± 0.007 | **0.791 ± 0.014** | 0.8252 |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8065 ± 0.0103 | 0.7222 ± 0.0150 | 0.9709 ± 0.0227 | ***0.7800 ± 0.0133*** | 0.6419 ± 0.0113 | 0.7400 ± 0.0044 | 0.7769 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8224 ± 0.0156 | 0.7402 ± 0.0095 | 0.9820 ± 0.0138 | 0.7702 ± 0.0020 | 0.6303 ± 0.0039 | 0.7360 ± 0.0036 | 0.7802 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.7924 ± 0.0155 | 0.7282 ± 0.0058 | 0.9725 ± 0.0213 | 0.7770 ± 0.0047 | 0.6542 ± 0.0128 | *0.7646 ± 0.0039* | 0.7815 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8213 ± 0.0051 | 0.7356 ± 0.0094 | 0.9664 ± 0.0202 | 0.7750 ± 0.0048 | 0.6415 ± 0.0094 | 0.7263 ± 0.0036 | 0.7777 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | *0.8346 ± 0.0045* | *0.7573 ± 0.0120* | ***0.9938 ± 0.0017*** | 0.7737 ± 0.0034 | ***0.6600 ± 0.0061*** | 0.7518 ± 0.0047 | 0.7952 |

#### ChemBERTa-3 Regression Datasets (RMSE - Lower is better)

| Model | BACE↓ | CLEARANCE↓ | ESOL↓ | FREESOLV↓ | LIPO↓ | AVG‡ |
| --- | --- | --- | --- | --- | --- | --- |
| [ChemBERTa-100M-MLM](https://huggingface.co/DeepChem/ChemBERTa-100M-MLM)* | 1.011 ± 0.038 | 51.582 ± 3.079 | 0.920 ± 0.011 | 0.536 ± 0.016 | 0.758 ± 0.013 | 0.8063 / 10.9614 |
| [c3-MoLFormer-1.1B](https://huggingface.co/DeepChem/MoLFormer-c3-1.1B)* | 1.094 ± 0.126 | 52.058 ± 2.767 | 0.829 ± 0.019 | 0.572 ± 0.023 | 0.728 ± 0.016 | 0.8058 / 11.0562 |
| MoLFormer-LHPC* | 1.201 ± 0.100 | 45.74 ± 2.637 | 0.848 ± 0.031 | 0.683 ± 0.040 | 0.895 ± 0.080 | 0.9068 / 9.8734 |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 1.0893 ± 0.1319 | 49.0005 ± 1.2787 | 0.8456 ± 0.0406 | 0.5491 ± 0.0134 | 0.7147 ± 0.0062 | 0.7997 / 10.4398 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.9931 ± 0.0258 | 45.4951 ± 0.7112 | 0.9319 ± 0.0153 | 0.6049 ± 0.0666 | 0.6874 ± 0.0040 | 0.8043 / 9.7425 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 1.0304 ± 0.1146 | 47.8418 ± 0.4070 | ***0.7669 ± 0.0024*** | 0.5293 ± 0.0267 | 0.6708 ± 0.0074 | 0.7493 / 10.1678 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.9713 ± 0.0224 | ***42.8010 ± 3.3475*** | 0.8169 ± 0.0268 | 0.5445 ± 0.0257 | 0.6820 ± 0.0028 | 0.7537 / 9.1631 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.9665 ± 0.0250*** | 44.0137 ± 1.1110 | 0.8158 ± 0.0115 | ***0.4979 ± 0.0158*** | ***0.6505 ± 0.0126*** | 0.7327 / 9.3889 |

#### Mswahili et al. [8] Proposed Classification Datasets (ROC AUC - Higher is better)

| Model | Antimalarial↑ | Cocrystal↑ | COVID19↑ | AVG† |
| --- | --- | --- | --- | --- |
| **Tasks** | 1 | 1 | 1 | |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.8707 ± 0.0032 | 0.7967 ± 0.0124 | 0.8106 ± 0.0170 | 0.8260 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | 0.8756 ± 0.0056 | 0.8288 ± 0.0143 | 0.8029 ± 0.0159 | 0.8358 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.8832 ± 0.0051 | 0.7866 ± 0.0204 | ***0.8308 ± 0.0026*** | 0.8335 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.8819 ± 0.0052 | 0.8550 ± 0.0106 | 0.8013 ± 0.0118 | 0.8461 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | ***0.8966 ± 0.0045*** | ***0.8654 ± 0.0080*** | 0.8132 ± 0.0195 | 0.8584 |

#### ADME/AstraZeneca Regression Datasets (RMSE - Lower is better)

Hyperparameter optimization during the TAFT stage appears to induce overfitting: the `MLM + DAPT + TAFT OPT` model shows slightly degraded performance on the ADME/AstraZeneca datasets compared to the `MLM + DAPT + TAFT` model. The `MLM + DAPT + TAFT` model, a merge of unoptimized TAFT checkpoints trained with `max_seq_mean` pooling, achieved the best overall performance across the ADME/AstraZeneca datasets.

| Model | microsom_stab_h↓ | microsom_stab_r↓ | permeability↓ | ppb_h↓ | ppb_r↓ | solubility↓ | CL↓ | LogD74↓ | PPB↓ | Solubility↓ | AVG† |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | ADME | | | | | | AstraZeneca | | | | |
| **Tasks** | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
| [MLM](https://huggingface.co/Derify/ModChemBERT-MLM) | 0.4489 ± 0.0114 | 0.4685 ± 0.0225 | 0.5423 ± 0.0076 | 0.8041 ± 0.0378 | 0.7849 ± 0.0394 | 0.5191 ± 0.0147 | **0.4812 ± 0.0073** | 0.8204 ± 0.0070 | 0.1365 ± 0.0066 | 0.9614 ± 0.0189 | 0.5967 |
| [MLM + DAPT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT) | **0.4199 ± 0.0064** | 0.4568 ± 0.0091 | 0.5042 ± 0.0135 | 0.8376 ± 0.0629 | 0.8446 ± 0.0756 | 0.4800 ± 0.0118 | 0.5351 ± 0.0036 | 0.8191 ± 0.0066 | 0.1237 ± 0.0022 | 0.9280 ± 0.0088 | 0.5949 |
| [MLM + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-TAFT) | 0.4375 ± 0.0027 | 0.4542 ± 0.0024 | 0.5202 ± 0.0141 | **0.7618 ± 0.0138** | 0.7027 ± 0.0023 | 0.5023 ± 0.0107 | 0.5104 ± 0.0110 | 0.7599 ± 0.0050 | 0.1233 ± 0.0088 | 0.8730 ± 0.0112 | 0.5645 |
| [MLM + DAPT + TAFT](https://huggingface.co/Derify/ModChemBERT-MLM-DAPT-TAFT) | 0.4206 ± 0.0071 | **0.4400 ± 0.0039** | **0.4899 ± 0.0068** | 0.8927 ± 0.0163 | **0.6942 ± 0.0397** | 0.4641 ± 0.0082 | 0.5022 ± 0.0136 | **0.7467 ± 0.0041** | 0.1195 ± 0.0026 | **0.8564 ± 0.0265** | 0.5626 |
| [MLM + DAPT + TAFT OPT](https://huggingface.co/Derify/ModChemBERT) | 0.4248 ± 0.0041 | 0.4403 ± 0.0046 | 0.5025 ± 0.0029 | 0.8901 ± 0.0123 | 0.7268 ± 0.0090 | **0.4627 ± 0.0083** | 0.4932 ± 0.0079 | 0.7596 ± 0.0044 | **0.1150 ± 0.0002** | 0.8735 ± 0.0053 | 0.5689 |

**Bold** indicates the best result in the column; *italic* indicates the best result among ModChemBERT checkpoints.<br/>
\* Published results from the ChemBERTa-3 [7] paper for optimized chemical language models using DeepChem scaffold splits.<br/>
† AVG column shows the mean score across classification tasks.<br/>
‡ AVG column shows the mean scores across regression tasks without and with the clearance score.

</details>
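The mean ± standard deviation aggregation used in the tables above can be reproduced with the standard library. The scores below are hypothetical triplicate values, not results from the tables:

```python
from statistics import mean, stdev

def aggregate(scores):
    """Format triplicate scores as 'mean ± sample standard deviation'."""
    return f"{mean(scores):.4f} ± {stdev(scores):.4f}"

roc_auc_runs = [0.8785, 0.8832, 0.8879]  # hypothetical triplicate ROC AUC values
print(aggregate(roc_auc_runs))  # 0.8832 ± 0.0047
```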

Optimal parameters (per dataset) for the `MLM + DAPT + TAFT OPT` merged model:

| esol | 64 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |
| freesolv | 32 | max_seq_mha | 5 | 0.1 | 0.0 | 0.0 |
| lipo | 32 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
| antimalarial | 16 | max_seq_mha | 3 | 0.1 | 0.1 | 0.1 |
| cocrystal | 16 | max_cls | 3 | 0.1 | 0.0 | 0.1 |
| covid19 | 16 | sum_mean | N/A | 0.1 | 0.0 | 0.1 |

</details>
```

## References

1. Kallergis, G., Asgari, E., Empting, M., et al. "Domain adaptable language modeling of chemical compounds identifies potent pathoblockers for Pseudomonas aeruginosa." Commun Chem 8, 114 (2025). https://doi.org/10.1038/s42004-025-01484-4
2. Behrendt, Maike, Stefan Sylvius Wagner, and Stefan Harmeling. "MaxPoolBERT: Enhancing BERT Classification via Layer- and Token-Wise Aggregation." arXiv preprint arXiv:2505.15696 (2025).
3. Sultan, Afnan, et al. "Transformers for molecular property prediction: Domain adaptation efficiently improves performance." arXiv preprint arXiv:2503.03360 (2025).
4. Warner, Benjamin, et al. "Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference." arXiv preprint arXiv:2412.13663 (2024).
5. Clavié, Benjamin. "JaColBERTv2.5: Optimising Multi-Vector Retrievers to Create State-of-the-Art Japanese Retrievers with Constrained Resources." arXiv preprint arXiv:2407.20750 (2024).
6. Grattafiori, Aaron, et al. "The Llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
7. Singh, R., Barsainyan, A. A., Irfan, R., Amorin, C. J., He, S., Davis, T., et al. "ChemBERTa-3: An Open Source Training Framework for Chemical Foundation Models." ChemRxiv preprint (2025). https://doi.org/10.26434/chemrxiv-2025-4glrl-v2
8. Mswahili, M. E., Hwang, J., Rajapakse, J. C., et al. "Positional embeddings and zero-shot learning using BERT for molecular-property prediction." J Cheminform 17, 17 (2025). https://doi.org/10.1186/s13321-025-00959-9
9. Mswahili, M. E., Ndomba, G. E., Jo, K., Jeong, Y.-S. "Graph Neural Network and BERT Model for Antimalarial Drug Predictions Using Plasmodium Potential Targets." Applied Sciences 14(4), 1472 (2024). https://doi.org/10.3390/app14041472
10. Mswahili, M. E., Lee, M.-J., Martin, G. L., Kim, J., Kim, P., Choi, G. J., Jeong, Y.-S. "Cocrystal Prediction Using Machine Learning Models and Descriptors." Applied Sciences 11, 1323 (2021). https://doi.org/10.3390/app11031323
11. Harigua-Souiai, E., Heinhane, M. M., Abdelkrim, Y. Z., Souiai, O., Abdeljaoued-Tej, I., Guizani, I. "Deep Learning Algorithms Achieved Satisfactory Predictions When Trained on a Novel Collection of Anticoronavirus Molecules." Frontiers in Genetics 12:744170 (2021). https://doi.org/10.3389/fgene.2021.744170
12. Fang, Cheng, et al. "Prospective Validation of Machine Learning Algorithms for Absorption, Distribution, Metabolism, and Excretion Prediction: An Industrial Perspective." Journal of Chemical Information and Modeling 63(11), 3263-3274 (2023). https://doi.org/10.1021/acs.jcim.3c00160
logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_antimalarial_epochs100_batch_size32_20250925_224126.log
ADDED
@@ -0,0 +1,353 @@
| 1 |
+
2025-09-25 22:41:26,099 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Running benchmark for dataset: antimalarial
|
| 2 |
+
2025-09-25 22:41:26,099 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - dataset: antimalarial, tasks: ['label'], epochs: 100, learning rate: 3e-05
|
| 3 |
+
2025-09-25 22:41:26,104 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset antimalarial at 2025-09-25_22-41-26
|
| 4 |
+
2025-09-25 22:41:39,669 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5719 | Val mean-roc_auc_score: 0.7489
|
| 5 |
+
2025-09-25 22:41:39,669 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
|
| 6 |
+
2025-09-25 22:41:40,555 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7489
|
| 7 |
+
2025-09-25 22:42:01,179 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5000 | Val mean-roc_auc_score: 0.8274
|
| 8 |
+
2025-09-25 22:42:01,368 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
|
| 9 |
+
2025-09-25 22:42:01,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8274
|
| 10 |
+
2025-09-25 22:42:19,250 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4542 | Val mean-roc_auc_score: 0.8561
|
| 11 |
+
2025-09-25 22:42:19,448 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
|
| 12 |
+
2025-09-25 22:42:20,058 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8561
|
| 13 |
+
2025-09-25 22:42:38,021 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3422 | Val mean-roc_auc_score: 0.8788
|
| 14 |
+
2025-09-25 22:42:38,186 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
|
| 15 |
+
2025-09-25 22:42:38,812 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8788
|
| 16 |
+
2025-09-25 22:42:56,503 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2550 | Val mean-roc_auc_score: 0.8877
|
| 17 |
+
2025-09-25 22:42:56,707 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
|
| 18 |
+
2025-09-25 22:42:57,311 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8877
|
| 19 |
+
2025-09-25 22:43:18,526 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1984 | Val mean-roc_auc_score: 0.8835
|
| 20 |
+
2025-09-25 22:43:37,384 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1906 | Val mean-roc_auc_score: 0.8844
|
| 21 |
+
2025-09-25 22:43:55,000 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1417 | Val mean-roc_auc_score: 0.8830
|
| 22 |
+
2025-09-25 22:44:16,146 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1320 | Val mean-roc_auc_score: 0.8868
|
| 23 |
+
2025-09-25 22:44:33,180 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1256 | Val mean-roc_auc_score: 0.8916
|
| 24 |
+
2025-09-25 22:44:33,367 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1200
|
| 25 |
+
2025-09-25 22:44:33,987 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8916
|
| 26 |
+
2025-09-25 22:44:50,621 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1062 | Val mean-roc_auc_score: 0.8833
|
| 27 |
+
2025-09-25 22:45:10,462 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0965 | Val mean-roc_auc_score: 0.8874
|
| 28 |
+
2025-09-25 22:45:27,092 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0708 | Val mean-roc_auc_score: 0.8978
|
| 29 |
+
2025-09-25 22:45:27,271 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1560
|
| 30 |
+
2025-09-25 22:45:27,909 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val mean-roc_auc_score: 0.8978
|
| 31 |
+
2025-09-25 22:45:44,925 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0816 | Val mean-roc_auc_score: 0.8880
|
| 32 |
+
2025-09-25 22:46:04,211 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0931 | Val mean-roc_auc_score: 0.8919
|
| 33 |
+
2025-09-25 22:46:20,918 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0566 | Val mean-roc_auc_score: 0.8953
|
| 34 |
+
2025-09-25 22:46:39,265 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0523 | Val mean-roc_auc_score: 0.8924
|
| 35 |
+
2025-09-25 22:46:59,863 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0523 | Val mean-roc_auc_score: 0.8913
|
| 36 |
+
2025-09-25 22:47:16,985 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0383 | Val mean-roc_auc_score: 0.8844
|
| 37 |
+
2025-09-25 22:47:34,852 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0422 | Val mean-roc_auc_score: 0.8870
|
| 38 |
+
2025-09-25 22:47:54,245 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0277 | Val mean-roc_auc_score: 0.8960
|
| 39 |
+
2025-09-25 22:48:11,654 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0402 | Val mean-roc_auc_score: 0.8877
|
| 40 |
+
2025-09-25 22:48:29,153 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0310 | Val mean-roc_auc_score: 0.8801
|
| 41 |
+
2025-09-25 22:48:50,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0350 | Val mean-roc_auc_score: 0.8894
|
| 42 |
+
2025-09-25 22:49:08,133 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0259 | Val mean-roc_auc_score: 0.8885
|
| 43 |
+
2025-09-25 22:49:25,111 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0436 | Val mean-roc_auc_score: 0.8910
|
| 44 |
+
2025-09-25 22:49:45,672 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0359 | Val mean-roc_auc_score: 0.8914
|
| 45 |
+
2025-09-25 22:50:02,416 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8910
|
| 46 |
+
2025-09-25 22:50:19,414 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.8877
|
| 47 |
+
2025-09-25 22:50:38,321 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0259 | Val mean-roc_auc_score: 0.8803
|
| 48 |
+
2025-09-25 22:50:55,039 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0234 | Val mean-roc_auc_score: 0.8866
|
| 49 |
+
2025-09-25 22:51:11,648 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8915
|
| 50 |
+
2025-09-25 22:51:30,135 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0139 | Val mean-roc_auc_score: 0.8899
|
| 51 |
+
2025-09-25 22:51:48,176 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8875
|
| 52 |
+
2025-09-25 22:52:04,259 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8882
|
| 53 |
+
2025-09-25 22:52:26,052 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.8893
|
| 54 |
+
2025-09-25 22:52:43,053 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8911
|
| 55 |
+
2025-09-25 22:52:59,795 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8942
|
| 56 |
+
2025-09-25 22:53:20,006 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8938
|
| 57 |
+
2025-09-25 22:53:37,559 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8932
|
| 58 |
+
2025-09-25 22:53:55,719 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8946
|
| 59 |
+
2025-09-25 22:54:15,889 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8932
|
| 60 |
+
2025-09-25 22:54:32,177 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8865
|
| 61 |
+
2025-09-25 22:54:48,743 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.8885
|
| 62 |
+
2025-09-25 22:55:08,102 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.8900
|
| 63 |
+
2025-09-25 22:55:25,057 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0273 | Val mean-roc_auc_score: 0.8898
|
| 64 |
+
2025-09-25 22:55:42,938 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.8895
|
| 65 |
+
2025-09-25 22:56:02,956 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0088 | Val mean-roc_auc_score: 0.8884
|
| 66 |
+
2025-09-25 22:56:20,824 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8914
|
| 67 |
+
2025-09-25 22:56:38,813 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.8872
|
| 68 |
+
2025-09-25 22:56:57,656 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8888
|
| 69 |
+
2025-09-25 22:57:15,274 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8898
|
| 70 |
+
2025-09-25 22:57:32,885 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.8890
|
| 71 |
+
2025-09-25 22:57:53,490 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.8860
|
| 72 |
+
2025-09-25 22:58:11,782 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.8844
|
| 73 |
+
2025-09-25 22:58:28,658 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8845
|
| 74 |
+
2025-09-25 22:58:48,738 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8810
|
| 75 |
+
2025-09-25 22:59:05,373 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8819
|
| 76 |
+
2025-09-25 22:59:23,073 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8856
|
| 77 |
+
2025-09-25 22:59:42,391 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.8837
|
| 78 |
+
2025-09-25 22:59:59,643 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0113 | Val mean-roc_auc_score: 0.8863
|
| 79 |
+
2025-09-25 23:00:17,180 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8868
|
| 80 |
+
2025-09-25 23:00:36,249 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8855
|
| 81 |
+
2025-09-25 23:00:52,836 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8857
|
| 82 |
+
2025-09-25 23:01:08,954 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.8863
|
| 83 |
+
2025-09-25 23:01:28,219 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.8856
|
| 84 |
+
2025-09-25 23:01:46,669 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8865
|
| 85 |
+
2025-09-25 23:02:04,352 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.8866
|
| 86 |
+
2025-09-25 23:02:23,713 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0103 | Val mean-roc_auc_score: 0.8821
|
| 87 |
+
2025-09-25 23:02:40,275 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8856
|
| 88 |
+
2025-09-25 23:02:56,487 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8860
|
| 89 |
+
2025-09-25 23:03:15,974 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8873
|
| 90 |
+
2025-09-25 23:03:32,428 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8877
|
| 91 |
+
2025-09-25 23:03:49,236 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.8834
|
| 92 |
+
2025-09-25 23:04:09,288 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8831
|
| 93 |
+
2025-09-25 23:04:25,319 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8836
|
| 94 |
+
2025-09-25 23:04:44,805 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8842
|
| 95 |
+
2025-09-25 23:05:01,292 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.8861
|
| 96 |
+
2025-09-25 23:05:17,550 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8886
|
| 97 |
+
2025-09-25 23:05:36,877 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8884
|
| 98 |
+
2025-09-25 23:05:53,293 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8894
|
| 99 |
+
2025-09-25 23:06:10,747 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8889
|
| 100 |
+
2025-09-25 23:06:30,201 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8889
|
| 101 |
+
2025-09-25 23:06:48,064 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.8903
|
| 102 |
+
2025-09-25 23:07:06,147 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8895
|
| 103 |
+
2025-09-25 23:07:26,747 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8892
|
| 104 |
+
2025-09-25 23:07:44,474 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8878
|
| 105 |
+
2025-09-25 23:08:01,744 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8881
|
| 106 |
+
2025-09-25 23:08:20,948 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.8883
|
| 107 |
+
2025-09-25 23:08:37,338 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8880
|
| 108 |
+
2025-09-25 23:08:53,684 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8883
|
| 109 |
+
2025-09-25 23:09:14,076 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8881
|
| 110 |
+
2025-09-25 23:09:30,147 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.8889
|
| 111 |
+
2025-09-25 23:09:47,202 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.8886
|
| 112 |
+
2025-09-25 23:10:07,893 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8884
|
| 113 |
+
2025-09-25 23:10:24,908 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.8896
|
| 114 |
+
2025-09-25 23:10:41,725 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8893
|
| 115 |
+
2025-09-25 23:11:01,475 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8875
|
| 116 |
+
2025-09-25 23:11:18,370 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8883
|
| 117 |
+
2025-09-25 23:11:37,387 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8881
|
| 118 |
+
2025-09-25 23:11:38,762 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8777
|
| 119 |
+
2025-09-25 23:11:39,138 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset antimalarial at 2025-09-25_23-11-39
|
| 120 |
+
2025-09-25 23:11:58,327 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5687 | Val mean-roc_auc_score: 0.7632
|
| 121 |
+
2025-09-25 23:11:58,328 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
|
| 122 |
+
2025-09-25 23:11:59,157 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7632
|
| 123 |
+
2025-09-25 23:12:17,641 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4938 | Val mean-roc_auc_score: 0.8396
|
| 124 |
+
2025-09-25 23:12:17,848 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
|
| 125 |
+
2025-09-25 23:12:18,501 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8396
|
| 126 |
+
2025-09-25 23:12:35,665 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4021 | Val mean-roc_auc_score: 0.8718
|
| 127 |
+
2025-09-25 23:12:35,884 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
|
| 128 |
+
2025-09-25 23:12:36,513 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8718
|
| 129 |
+
2025-09-25 23:12:55,796 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3328 | Val mean-roc_auc_score: 0.8737
|
| 130 |
+
2025-09-25 23:12:56,018 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
|
| 131 |
+
2025-09-25 23:12:56,716 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8737
|
| 132 |
+
2025-09-25 23:13:14,425 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2737 | Val mean-roc_auc_score: 0.8793
|
| 133 |
+
2025-09-25 23:13:14,642 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
|
| 134 |
+
2025-09-25 23:13:15,284 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8793
|
| 135 |
+
2025-09-25 23:13:31,835 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2250 | Val mean-roc_auc_score: 0.8793
|
| 136 |
+
2025-09-25 23:13:32,359 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 720
|
| 137 |
+
2025-09-25 23:13:33,215 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8793
|
| 138 |
+
2025-09-25 23:13:49,877 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1844 | Val mean-roc_auc_score: 0.8825
|
| 139 |
+
2025-09-25 23:13:50,095 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 840
|
| 140 |
+
2025-09-25 23:13:50,809 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8825
|
| 141 |
+
2025-09-25 23:14:11,703 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1531 | Val mean-roc_auc_score: 0.8929
|
| 142 |
+
2025-09-25 23:14:11,945 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 960
|
| 143 |
+
2025-09-25 23:14:12,603 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8929
|
| 144 |
+
2025-09-25 23:14:30,431 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1172 | Val mean-roc_auc_score: 0.8904
|
| 145 |
+
2025-09-25 23:14:46,982 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0956 | Val mean-roc_auc_score: 0.8961
|
| 146 |
+
2025-09-25 23:14:47,155 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 1200
|
| 147 |
+
2025-09-25 23:14:47,833 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val mean-roc_auc_score: 0.8961
|
| 148 |
+
2025-09-25 23:15:07,356 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0965 | Val mean-roc_auc_score: 0.8877
|
| 149 |
+
2025-09-25 23:15:24,877 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0836 | Val mean-roc_auc_score: 0.8828
|
| 150 |
+
2025-09-25 23:15:41,314 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0755 | Val mean-roc_auc_score: 0.8880
|
| 151 |
+
2025-09-25 23:16:01,283 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0527 | Val mean-roc_auc_score: 0.8875
|
| 152 |
+
2025-09-25 23:16:17,402 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0597 | Val mean-roc_auc_score: 0.8867
|
| 153 |
+
2025-09-25 23:16:34,538 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0766 | Val mean-roc_auc_score: 0.8890
|
| 154 |
+
2025-09-25 23:16:54,789 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0520 | Val mean-roc_auc_score: 0.8825
|
| 155 |
+
2025-09-25 23:17:11,836 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0474 | Val mean-roc_auc_score: 0.8778
|
| 156 |
+
2025-09-25 23:17:29,288 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0365 | Val mean-roc_auc_score: 0.8852
|
| 157 |
+
2025-09-25 23:17:48,907 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0342 | Val mean-roc_auc_score: 0.8857
|
| 158 |
+
2025-09-25 23:18:05,318 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0734 | Val mean-roc_auc_score: 0.8886
|
| 159 |
+
2025-09-25 23:18:22,464 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0527 | Val mean-roc_auc_score: 0.8824
|
| 160 |
+
2025-09-25 23:18:42,057 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0310 | Val mean-roc_auc_score: 0.8892
|
| 161 |
+
2025-09-25 23:18:58,920 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0420 | Val mean-roc_auc_score: 0.8856
|
| 162 |
+
2025-09-25 23:19:18,815 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8907
|
| 163 |
+
2025-09-25 23:19:35,074 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8865
|
| 164 |
+
2025-09-25 23:19:51,785 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0318 | Val mean-roc_auc_score: 0.8853
|
| 165 |
+
2025-09-25 23:20:11,497 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.8882
|
| 166 |
+
2025-09-25 23:20:29,985 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0408 | Val mean-roc_auc_score: 0.8898
|
| 167 |
+
2025-09-25 23:20:48,189 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0422 | Val mean-roc_auc_score: 0.8842
|
| 168 |
+
2025-09-25 23:21:07,851 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8850
|
| 169 |
+
2025-09-25 23:21:25,718 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.8896
|
| 170 |
+
2025-09-25 23:21:42,496 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8924
|
| 171 |
+
2025-09-25 23:22:02,419 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8913
|
| 172 |
+
2025-09-25 23:22:18,937 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0225 | Val mean-roc_auc_score: 0.8917
|
| 173 |
+
2025-09-25 23:22:35,604 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8917
|
| 174 |
+
2025-09-25 23:22:55,127 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0139 | Val mean-roc_auc_score: 0.8914
|
| 175 |
+
2025-09-25 23:23:11,800 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0118 | Val mean-roc_auc_score: 0.8916
|
| 176 |
+
2025-09-25 23:23:28,529 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0246 | Val mean-roc_auc_score: 0.8912
|
| 177 |
+
2025-09-25 23:23:47,743 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8892
|
| 178 |
+
2025-09-25 23:24:04,187 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8899
|
| 179 |
+
2025-09-25 23:24:22,383 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8891
2025-09-25 23:24:41,977 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8872
2025-09-25 23:24:58,776 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0114 | Val mean-roc_auc_score: 0.8893
2025-09-25 23:25:15,303 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.8894
2025-09-25 23:25:34,615 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8890
2025-09-25 23:25:52,056 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0136 | Val mean-roc_auc_score: 0.8944
2025-09-25 23:26:08,803 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0132 | Val mean-roc_auc_score: 0.8922
2025-09-25 23:26:27,959 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.8922
2025-09-25 23:26:44,837 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8911
2025-09-25 23:27:02,371 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0083 | Val mean-roc_auc_score: 0.8872
2025-09-25 23:27:23,024 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8868
2025-09-25 23:27:39,465 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.8862
2025-09-25 23:27:55,966 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8900
2025-09-25 23:28:15,805 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8902
2025-09-25 23:28:32,975 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8897
2025-09-25 23:28:50,137 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8883
2025-09-25 23:29:10,127 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8874
2025-09-25 23:29:27,925 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8875
2025-09-25 23:29:44,505 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8912
2025-09-25 23:30:03,999 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.8910
2025-09-25 23:30:21,602 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.8906
2025-09-25 23:30:38,564 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8911
2025-09-25 23:30:58,727 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8916
2025-09-25 23:31:15,580 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0082 | Val mean-roc_auc_score: 0.8893
2025-09-25 23:31:34,760 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0097 | Val mean-roc_auc_score: 0.8874
2025-09-25 23:31:51,946 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8866
2025-09-25 23:32:08,717 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8878
2025-09-25 23:32:27,844 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8881
2025-09-25 23:32:44,295 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8886
2025-09-25 23:33:00,348 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8891
2025-09-25 23:33:20,520 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8881
2025-09-25 23:33:36,900 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8881
2025-09-25 23:33:53,639 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0044 | Val mean-roc_auc_score: 0.8886
2025-09-25 23:34:13,104 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8887
2025-09-25 23:34:29,682 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8888
2025-09-25 23:34:46,593 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8892
2025-09-25 23:35:06,396 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.8896
2025-09-25 23:35:23,112 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.8901
2025-09-25 23:35:39,580 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8898
2025-09-25 23:35:58,938 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8897
2025-09-25 23:36:15,688 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8902
2025-09-25 23:36:34,917 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8900
2025-09-25 23:36:51,427 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0031 | Val mean-roc_auc_score: 0.8899
2025-09-25 23:37:07,816 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0032 | Val mean-roc_auc_score: 0.8898
2025-09-25 23:37:27,036 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8894
2025-09-25 23:37:43,739 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0048 | Val mean-roc_auc_score: 0.8894
2025-09-25 23:38:01,865 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.8901
2025-09-25 23:38:22,943 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.8917
2025-09-25 23:38:40,396 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8901
2025-09-25 23:38:57,544 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8905
2025-09-25 23:39:18,396 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8910
2025-09-25 23:39:35,373 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8919
2025-09-25 23:39:53,161 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8922
2025-09-25 23:40:13,145 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0042 | Val mean-roc_auc_score: 0.8921
2025-09-25 23:40:29,612 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8924
2025-09-25 23:40:46,931 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0008 | Val mean-roc_auc_score: 0.8927
2025-09-25 23:41:06,833 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.8926
2025-09-25 23:41:24,787 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8927
2025-09-25 23:41:42,549 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.8925
2025-09-25 23:41:43,601 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8818
2025-09-25 23:41:44,007 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset antimalarial at 2025-09-25_23-41-44
2025-09-25 23:42:02,475 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5312 | Val mean-roc_auc_score: 0.7570
2025-09-25 23:42:02,475 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 120
2025-09-25 23:42:00,779 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.7570
2025-09-25 23:42:20,602 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4844 | Val mean-roc_auc_score: 0.8270
2025-09-25 23:42:20,805 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 240
2025-09-25 23:42:21,448 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8270
2025-09-25 23:42:37,857 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4062 | Val mean-roc_auc_score: 0.8522
2025-09-25 23:42:38,076 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 360
2025-09-25 23:42:38,750 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8522
2025-09-25 23:42:55,233 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3375 | Val mean-roc_auc_score: 0.8777
2025-09-25 23:42:55,451 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 480
2025-09-25 23:42:56,143 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8777
2025-09-25 23:43:15,880 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2975 | Val mean-roc_auc_score: 0.8841
2025-09-25 23:43:16,106 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 600
2025-09-25 23:43:16,844 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8841
2025-09-25 23:43:33,608 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2703 | Val mean-roc_auc_score: 0.8951
2025-09-25 23:43:34,201 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Global step of best model: 720
2025-09-25 23:43:34,835 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8951
2025-09-25 23:43:52,340 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1555 | Val mean-roc_auc_score: 0.8891
2025-09-25 23:44:12,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1667 | Val mean-roc_auc_score: 0.8845
2025-09-25 23:44:30,646 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1195 | Val mean-roc_auc_score: 0.8905
2025-09-25 23:44:48,250 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1237 | Val mean-roc_auc_score: 0.8816
2025-09-25 23:45:08,787 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1594 | Val mean-roc_auc_score: 0.8776
2025-09-25 23:45:26,449 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0770 | Val mean-roc_auc_score: 0.8838
2025-09-25 23:45:44,041 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0828 | Val mean-roc_auc_score: 0.8842
2025-09-25 23:46:04,432 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0703 | Val mean-roc_auc_score: 0.8790
2025-09-25 23:46:21,814 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0684 | Val mean-roc_auc_score: 0.8835
2025-09-25 23:46:38,786 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0613 | Val mean-roc_auc_score: 0.8885
2025-09-25 23:46:59,734 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0459 | Val mean-roc_auc_score: 0.8872
2025-09-25 23:47:16,829 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0508 | Val mean-roc_auc_score: 0.8864
2025-09-25 23:47:33,882 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0471 | Val mean-roc_auc_score: 0.8879
2025-09-25 23:47:53,626 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0389 | Val mean-roc_auc_score: 0.8829
2025-09-25 23:48:10,473 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0516 | Val mean-roc_auc_score: 0.8857
2025-09-25 23:48:28,005 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0340 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:48:47,561 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0500 | Val mean-roc_auc_score: 0.8833
2025-09-25 23:49:04,594 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0520 | Val mean-roc_auc_score: 0.8850
2025-09-25 23:49:22,114 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0309 | Val mean-roc_auc_score: 0.8897
2025-09-25 23:49:41,828 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0357 | Val mean-roc_auc_score: 0.8850
2025-09-25 23:49:59,140 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0159 | Val mean-roc_auc_score: 0.8859
2025-09-25 23:50:15,522 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.8814
2025-09-25 23:50:35,124 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8857
2025-09-25 23:50:51,758 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.8866
2025-09-25 23:51:08,128 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0137 | Val mean-roc_auc_score: 0.8852
2025-09-25 23:51:28,451 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8841
2025-09-25 23:51:45,383 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8882
2025-09-25 23:52:05,780 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.8869
2025-09-25 23:52:21,347 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0275 | Val mean-roc_auc_score: 0.8858
2025-09-25 23:52:39,076 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8878
2025-09-25 23:52:56,081 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0144 | Val mean-roc_auc_score: 0.8866
2025-09-25 23:53:15,567 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8882
2025-09-25 23:53:31,973 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8850
2025-09-25 23:53:51,416 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8845
2025-09-25 23:54:07,740 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0205 | Val mean-roc_auc_score: 0.8829
2025-09-25 23:54:26,296 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8843
2025-09-25 23:54:44,104 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0136 | Val mean-roc_auc_score: 0.8810
2025-09-25 23:55:04,947 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8807
2025-09-25 23:55:22,139 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0092 | Val mean-roc_auc_score: 0.8828
2025-09-25 23:55:39,392 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0100 | Val mean-roc_auc_score: 0.8811
2025-09-25 23:55:59,341 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0116 | Val mean-roc_auc_score: 0.8774
2025-09-25 23:56:16,369 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8743
2025-09-25 23:56:33,555 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0085 | Val mean-roc_auc_score: 0.8753
2025-09-25 23:56:53,604 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8759
2025-09-25 23:57:10,361 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8772
2025-09-25 23:57:28,008 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8809
2025-09-25 23:57:47,203 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.8812
2025-09-25 23:58:04,051 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0114 | Val mean-roc_auc_score: 0.8824
2025-09-25 23:58:23,977 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8819
2025-09-25 23:58:39,664 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0141 | Val mean-roc_auc_score: 0.8793
2025-09-25 23:58:56,748 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0084 | Val mean-roc_auc_score: 0.8804
2025-09-25 23:59:16,877 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8807
2025-09-25 23:59:34,412 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8811
2025-09-25 23:59:50,875 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8807
2025-09-26 00:00:09,311 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.8827
2025-09-26 00:00:26,015 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8825
2025-09-26 00:00:42,367 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8842
2025-09-26 00:01:01,700 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.8829
2025-09-26 00:01:18,302 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8788
2025-09-26 00:01:34,569 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.8789
2025-09-26 00:01:54,141 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8769
2025-09-26 00:02:11,198 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0052 | Val mean-roc_auc_score: 0.8768
2025-09-26 00:02:27,994 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8770
2025-09-26 00:02:47,966 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.8778
2025-09-26 00:03:04,170 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8778
2025-09-26 00:03:23,988 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.8776
2025-09-26 00:03:40,929 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.8782
2025-09-26 00:03:57,944 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.8784
2025-09-26 00:04:18,317 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8793
2025-09-26 00:04:35,088 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8805
2025-09-26 00:04:52,865 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8791
2025-09-26 00:05:13,175 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.8794
2025-09-26 00:05:29,923 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8790
2025-09-26 00:05:46,743 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8793
2025-09-26 00:06:06,164 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.8807
2025-09-26 00:06:23,370 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0104 | Val mean-roc_auc_score: 0.8815
2025-09-26 00:06:40,237 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8797
2025-09-26 00:07:00,618 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.8818
2025-09-26 00:07:18,001 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8812
2025-09-26 00:07:34,592 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8815
2025-09-26 00:07:54,313 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.8811
2025-09-26 00:08:11,146 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8811
2025-09-26 00:08:28,371 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8805
2025-09-26 00:08:47,797 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8797
2025-09-26 00:09:06,528 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8809
2025-09-26 00:09:24,888 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8805
2025-09-26 00:09:44,814 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8809
2025-09-26 00:10:02,344 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8817
2025-09-26 00:10:20,134 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0051 | Val mean-roc_auc_score: 0.8798
2025-09-26 00:10:40,521 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.8802
2025-09-26 00:10:57,803 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0138 | Val mean-roc_auc_score: 0.8779
2025-09-26 00:11:17,141 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0061 | Val mean-roc_auc_score: 0.8807
2025-09-26 00:11:34,187 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.8826
2025-09-26 00:11:54,561 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.8826
2025-09-26 00:11:55,285 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8900
2025-09-26 00:11:55,708 - logs_modchembert_antimalarial_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8832, Std Dev: 0.0051
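The "Final Triplicate Test Results" line above aggregates the three per-run test scores into an average and standard deviation. A minimal sketch of that summary arithmetic is below; the three scores are illustrative placeholders (only two of the three run scores appear in this excerpt), and the assumption that the reported Std Dev is a population (not sample) standard deviation is ours, chosen because it reproduces the logged rounding for these example values.

```python
import statistics

# Illustrative per-run test mean-roc_auc_score values (placeholders, not the
# actual logged run scores; run 1's score is outside this excerpt).
scores = [0.8818, 0.8900, 0.8778]

avg = sum(scores) / len(scores)          # mean over the 3 triplicate runs
std = statistics.pstdev(scores)          # population std dev (assumed)

print(f"Avg mean-roc_auc_score: {avg:.4f}, Std Dev: {std:.4f}")
# → Avg mean-roc_auc_score: 0.8832, Std Dev: 0.0051
```

With `statistics.stdev` (sample standard deviation, n-1 denominator) the same scores would instead give 0.0062, so the choice of denominator matters when comparing against logged summaries.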
logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_cocrystal_epochs100_batch_size32_20250926_032614.log
ADDED
@@ -0,0 +1,347 @@
| 1 |
+
2025-09-26 03:26:14,949 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Running benchmark for dataset: cocrystal
|
| 2 |
+
2025-09-26 03:26:14,949 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - dataset: cocrystal, tasks: ['label'], epochs: 100, learning rate: 3e-05
|
| 3 |
+
2025-09-26 03:26:14,954 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset cocrystal at 2025-09-26_03-26-14
|
| 4 |
+
2025-09-26 03:26:19,467 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7500 | Val mean-roc_auc_score: 0.6476
|
| 5 |
+
2025-09-26 03:26:19,467 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
|
| 6 |
+
2025-09-26 03:26:20,521 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.6476
|
| 7 |
+
2025-09-26 03:26:25,385 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5676 | Val mean-roc_auc_score: 0.7545
|
| 8 |
+
2025-09-26 03:26:25,595 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
|
| 9 |
+
2025-09-26 03:26:26,343 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7545
|
| 10 |
+
2025-09-26 03:26:32,125 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4744 | Val mean-roc_auc_score: 0.8006
|
| 11 |
+
2025-09-26 03:26:32,349 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
|
| 12 |
+
2025-09-26 03:26:32,996 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8006
|
| 13 |
+
2025-09-26 03:26:38,725 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4223 | Val mean-roc_auc_score: 0.7821
|
| 14 |
+
2025-09-26 03:26:44,361 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.4020 | Val mean-roc_auc_score: 0.8173
|
| 15 |
+
2025-09-26 03:26:44,559 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
|
| 16 |
+
2025-09-26 03:26:45,315 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8173
|
| 17 |
+
2025-09-26 03:26:50,750 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3480 | Val mean-roc_auc_score: 0.8460
|
| 18 |
+
2025-09-26 03:26:51,304 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
|
| 19 |
+
2025-09-26 03:26:49,455 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8460
2025-09-26 03:26:55,578 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3361 | Val mean-roc_auc_score: 0.8708
2025-09-26 03:26:55,795 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 259
2025-09-26 03:26:56,585 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val mean-roc_auc_score: 0.8708
2025-09-26 03:27:02,590 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2973 | Val mean-roc_auc_score: 0.8431
2025-09-26 03:27:08,454 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.3106 | Val mean-roc_auc_score: 0.8489
2025-09-26 03:27:14,584 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2500 | Val mean-roc_auc_score: 0.8706
2025-09-26 03:27:20,900 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2902 | Val mean-roc_auc_score: 0.8473
2025-09-26 03:27:25,694 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2230 | Val mean-roc_auc_score: 0.8230
2025-09-26 03:27:31,990 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1774 | Val mean-roc_auc_score: 0.8468
2025-09-26 03:27:38,491 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2830 | Val mean-roc_auc_score: 0.8236
2025-09-26 03:27:45,405 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1782 | Val mean-roc_auc_score: 0.8255
2025-09-26 03:27:49,594 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1622 | Val mean-roc_auc_score: 0.8234
2025-09-26 03:27:56,664 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1067 | Val mean-roc_auc_score: 0.8083
2025-09-26 03:28:03,698 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1106 | Val mean-roc_auc_score: 0.8171
2025-09-26 03:28:10,489 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0579 | Val mean-roc_auc_score: 0.8201
2025-09-26 03:28:17,046 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0954 | Val mean-roc_auc_score: 0.8157
2025-09-26 03:28:21,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0604 | Val mean-roc_auc_score: 0.8186
2025-09-26 03:28:28,087 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0826 | Val mean-roc_auc_score: 0.8184
2025-09-26 03:28:34,393 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0496 | Val mean-roc_auc_score: 0.8087
2025-09-26 03:28:40,647 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0361 | Val mean-roc_auc_score: 0.7987
2025-09-26 03:28:46,851 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0678 | Val mean-roc_auc_score: 0.8088
2025-09-26 03:28:50,163 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0680 | Val mean-roc_auc_score: 0.8078
2025-09-26 03:28:57,566 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0503 | Val mean-roc_auc_score: 0.8032
2025-09-26 03:29:04,384 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0299 | Val mean-roc_auc_score: 0.8066
2025-09-26 03:29:10,591 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.7935
2025-09-26 03:29:16,785 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0303 | Val mean-roc_auc_score: 0.7898
2025-09-26 03:29:20,785 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0861 | Val mean-roc_auc_score: 0.8083
2025-09-26 03:29:27,407 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0374 | Val mean-roc_auc_score: 0.7845
2025-09-26 03:29:33,501 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.7820
2025-09-26 03:29:39,174 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.7866
2025-09-26 03:29:45,670 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0269 | Val mean-roc_auc_score: 0.7699
2025-09-26 03:29:48,805 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0874 | Val mean-roc_auc_score: 0.7819
2025-09-26 03:29:55,057 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0363 | Val mean-roc_auc_score: 0.7767
2025-09-26 03:30:00,399 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0081 | Val mean-roc_auc_score: 0.7679
2025-09-26 03:30:05,887 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0065 | Val mean-roc_auc_score: 0.7700
2025-09-26 03:30:11,336 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0122 | Val mean-roc_auc_score: 0.7732
2025-09-26 03:30:16,729 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0087 | Val mean-roc_auc_score: 0.7728
2025-09-26 03:30:20,670 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0066 | Val mean-roc_auc_score: 0.7711
2025-09-26 03:30:26,801 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0298 | Val mean-roc_auc_score: 0.7718
2025-09-26 03:30:32,404 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.7737
2025-09-26 03:30:38,397 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.7692
2025-09-26 03:30:44,416 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.7763
2025-09-26 03:30:48,187 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.7758
2025-09-26 03:30:53,487 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7769
2025-09-26 03:30:58,994 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0035 | Val mean-roc_auc_score: 0.7716
2025-09-26 03:31:04,943 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7694
2025-09-26 03:31:10,891 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7689
2025-09-26 03:31:17,114 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.7685
2025-09-26 03:31:20,364 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.7734
2025-09-26 03:31:25,841 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0038 | Val mean-roc_auc_score: 0.7710
2025-09-26 03:31:33,292 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7732
2025-09-26 03:31:39,108 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7675
2025-09-26 03:31:45,169 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.7589
2025-09-26 03:31:48,499 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.7645
2025-09-26 03:31:54,538 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.7641
2025-09-26 03:32:00,308 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.7864
2025-09-26 03:32:05,801 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.7872
2025-09-26 03:32:12,046 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0063 | Val mean-roc_auc_score: 0.7796
2025-09-26 03:32:17,992 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8001
2025-09-26 03:32:21,057 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0123 | Val mean-roc_auc_score: 0.7918
2025-09-26 03:32:26,640 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.1047 | Val mean-roc_auc_score: 0.7798
2025-09-26 03:32:31,854 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0296 | Val mean-roc_auc_score: 0.7922
2025-09-26 03:32:37,641 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0224 | Val mean-roc_auc_score: 0.7885
2025-09-26 03:32:43,134 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.7898
2025-09-26 03:32:48,759 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0026 | Val mean-roc_auc_score: 0.7874
2025-09-26 03:32:51,880 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.7814
2025-09-26 03:32:57,377 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.7798
2025-09-26 03:33:03,702 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0172 | Val mean-roc_auc_score: 0.7795
2025-09-26 03:33:09,342 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0006 | Val mean-roc_auc_score: 0.7770
2025-09-26 03:33:14,966 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.7779
2025-09-26 03:33:18,304 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.7778
2025-09-26 03:33:23,846 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.7797
2025-09-26 03:33:29,856 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7782
2025-09-26 03:33:35,530 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.7789
2025-09-26 03:33:40,930 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.7763
2025-09-26 03:33:46,171 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7793
2025-09-26 03:33:49,491 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.7757
2025-09-26 03:33:56,418 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0072 | Val mean-roc_auc_score: 0.7755
2025-09-26 03:34:01,975 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.7751
2025-09-26 03:34:07,077 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0008 | Val mean-roc_auc_score: 0.7756
2025-09-26 03:34:12,601 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.7751
2025-09-26 03:34:15,626 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.7767
2025-09-26 03:34:21,764 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7752
2025-09-26 03:34:27,473 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7737
2025-09-26 03:34:32,937 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.7737
2025-09-26 03:34:38,654 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.7726
2025-09-26 03:34:44,419 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.7714
2025-09-26 03:34:47,967 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.7683
2025-09-26 03:34:53,538 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0010 | Val mean-roc_auc_score: 0.7697
2025-09-26 03:34:59,040 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.7686
2025-09-26 03:35:04,584 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0008 | Val mean-roc_auc_score: 0.7702
2025-09-26 03:35:10,326 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.7694
2025-09-26 03:35:16,496 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0007 | Val mean-roc_auc_score: 0.7711
2025-09-26 03:35:19,892 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.7711
2025-09-26 03:35:25,263 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0007 | Val mean-roc_auc_score: 0.7712
2025-09-26 03:35:30,553 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0008 | Val mean-roc_auc_score: 0.7711
2025-09-26 03:35:31,226 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7622
2025-09-26 03:35:31,559 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset cocrystal at 2025-09-26_03-35-31
2025-09-26 03:35:35,874 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7128 | Val mean-roc_auc_score: 0.6616
2025-09-26 03:35:35,874 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
2025-09-26 03:35:36,610 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.6616
2025-09-26 03:35:42,380 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5405 | Val mean-roc_auc_score: 0.7867
2025-09-26 03:35:42,578 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
2025-09-26 03:35:43,228 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7867
2025-09-26 03:35:46,363 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4489 | Val mean-roc_auc_score: 0.8217
2025-09-26 03:35:46,574 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
2025-09-26 03:35:47,249 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8217
2025-09-26 03:35:52,699 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3970 | Val mean-roc_auc_score: 0.8159
2025-09-26 03:35:58,118 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3699 | Val mean-roc_auc_score: 0.8623
2025-09-26 03:35:58,332 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 185
2025-09-26 03:35:58,994 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val mean-roc_auc_score: 0.8623
2025-09-26 03:36:04,413 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3267 | Val mean-roc_auc_score: 0.8700
2025-09-26 03:36:04,941 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
2025-09-26 03:36:05,589 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8700
2025-09-26 03:36:11,182 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2990 | Val mean-roc_auc_score: 0.8463
2025-09-26 03:36:16,763 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2855 | Val mean-roc_auc_score: 0.8652
2025-09-26 03:36:19,685 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2405 | Val mean-roc_auc_score: 0.8836
2025-09-26 03:36:19,904 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
2025-09-26 03:36:20,520 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8836
2025-09-26 03:36:25,967 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2196 | Val mean-roc_auc_score: 0.8700
2025-09-26 03:36:31,671 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2243 | Val mean-roc_auc_score: 0.8696
2025-09-26 03:36:37,800 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2019 | Val mean-roc_auc_score: 0.8334
2025-09-26 03:36:43,208 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.2086 | Val mean-roc_auc_score: 0.8520
2025-09-26 03:36:46,526 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1302 | Val mean-roc_auc_score: 0.8399
2025-09-26 03:36:52,268 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1267 | Val mean-roc_auc_score: 0.8586
2025-09-26 03:36:58,022 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1309 | Val mean-roc_auc_score: 0.8288
2025-09-26 03:37:04,129 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1616 | Val mean-roc_auc_score: 0.8488
2025-09-26 03:37:09,735 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1199 | Val mean-roc_auc_score: 0.8295
2025-09-26 03:37:15,189 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.1243 | Val mean-roc_auc_score: 0.8124
2025-09-26 03:37:18,327 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.1377 | Val mean-roc_auc_score: 0.8325
2025-09-26 03:37:24,036 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0828 | Val mean-roc_auc_score: 0.8196
2025-09-26 03:37:29,981 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.1406 | Val mean-roc_auc_score: 0.8357
2025-09-26 03:37:35,431 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0853 | Val mean-roc_auc_score: 0.8491
2025-09-26 03:37:40,769 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0600 | Val mean-roc_auc_score: 0.8406
2025-09-26 03:37:46,298 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0741 | Val mean-roc_auc_score: 0.8400
2025-09-26 03:37:49,209 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0355 | Val mean-roc_auc_score: 0.8205
2025-09-26 03:37:56,226 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0566 | Val mean-roc_auc_score: 0.8270
2025-09-26 03:38:01,883 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0417 | Val mean-roc_auc_score: 0.8341
2025-09-26 03:38:07,419 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0659 | Val mean-roc_auc_score: 0.8461
2025-09-26 03:38:12,903 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0598 | Val mean-roc_auc_score: 0.8285
2025-09-26 03:38:16,339 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0274 | Val mean-roc_auc_score: 0.8206
2025-09-26 03:38:22,597 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0709 | Val mean-roc_auc_score: 0.8483
2025-09-26 03:38:28,267 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.8366
2025-09-26 03:38:34,000 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8270
2025-09-26 03:38:39,687 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.8394
2025-09-26 03:38:43,587 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0264 | Val mean-roc_auc_score: 0.8398
2025-09-26 03:38:49,782 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.8335
2025-09-26 03:38:55,882 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8368
2025-09-26 03:39:01,919 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0148 | Val mean-roc_auc_score: 0.8249
2025-09-26 03:39:07,672 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.8233
2025-09-26 03:39:13,542 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.8221
2025-09-26 03:39:17,696 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0092 | Val mean-roc_auc_score: 0.8208
2025-09-26 03:39:23,637 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0041 | Val mean-roc_auc_score: 0.8272
2025-09-26 03:39:29,298 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8256
2025-09-26 03:39:35,052 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0030 | Val mean-roc_auc_score: 0.8214
2025-09-26 03:39:40,809 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0320 | Val mean-roc_auc_score: 0.8198
2025-09-26 03:39:44,517 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.8151
2025-09-26 03:39:50,541 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0067 | Val mean-roc_auc_score: 0.7972
2025-09-26 03:39:56,143 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.1400 | Val mean-roc_auc_score: 0.8052
2025-09-26 03:40:01,695 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0382 | Val mean-roc_auc_score: 0.7996
2025-09-26 03:40:07,418 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0090 | Val mean-roc_auc_score: 0.7936
2025-09-26 03:40:13,972 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0037 | Val mean-roc_auc_score: 0.7898
2025-09-26 03:40:17,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.7883
2025-09-26 03:40:23,790 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0098 | Val mean-roc_auc_score: 0.7871
2025-09-26 03:40:31,698 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0033 | Val mean-roc_auc_score: 0.7819
2025-09-26 03:40:38,029 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.7874
2025-09-26 03:40:44,529 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0117 | Val mean-roc_auc_score: 0.8021
2025-09-26 03:40:48,063 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.7803
2025-09-26 03:40:53,953 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0055 | Val mean-roc_auc_score: 0.7886
2025-09-26 03:40:59,539 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0338 | Val mean-roc_auc_score: 0.7952
2025-09-26 03:41:05,442 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.7907
2025-09-26 03:41:12,150 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0068 | Val mean-roc_auc_score: 0.7822
2025-09-26 03:41:15,528 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0149 | Val mean-roc_auc_score: 0.7865
2025-09-26 03:41:21,231 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0062 | Val mean-roc_auc_score: 0.7858
2025-09-26 03:41:26,425 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0053 | Val mean-roc_auc_score: 0.7811
2025-09-26 03:41:31,749 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.7788
2025-09-26 03:41:37,512 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0045 | Val mean-roc_auc_score: 0.7766
2025-09-26 03:41:43,027 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.7743
2025-09-26 03:41:45,866 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.7733
2025-09-26 03:41:51,439 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0321 | Val mean-roc_auc_score: 0.8012
2025-09-26 03:41:57,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8041
2025-09-26 03:42:02,758 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0130 | Val mean-roc_auc_score: 0.8066
2025-09-26 03:42:08,560 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.8057
2025-09-26 03:42:14,114 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0129 | Val mean-roc_auc_score: 0.8030
2025-09-26 03:42:17,302 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8007
2025-09-26 03:42:23,341 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.7995
2025-09-26 03:42:29,374 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0017 | Val mean-roc_auc_score: 0.7987
2025-09-26 03:42:34,571 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8008
2025-09-26 03:42:40,249 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0095 | Val mean-roc_auc_score: 0.7964
2025-09-26 03:42:43,159 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0089 | Val mean-roc_auc_score: 0.8058
2025-09-26 03:42:48,909 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.8017
2025-09-26 03:42:56,159 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8150
2025-09-26 03:43:01,614 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8152
2025-09-26 03:43:07,020 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0079 | Val mean-roc_auc_score: 0.7951
2025-09-26 03:43:12,760 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0101 | Val mean-roc_auc_score: 0.7972
2025-09-26 03:43:15,804 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0074 | Val mean-roc_auc_score: 0.8012
2025-09-26 03:43:21,582 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0109 | Val mean-roc_auc_score: 0.8043
2025-09-26 03:43:27,104 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.7917
2025-09-26 03:43:32,496 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.7955
2025-09-26 03:43:38,001 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0093 | Val mean-roc_auc_score: 0.8028
2025-09-26 03:43:43,486 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8075
2025-09-26 03:43:46,972 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0047 | Val mean-roc_auc_score: 0.8062
2025-09-26 03:43:52,716 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8051
2025-09-26 03:43:58,131 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8064
2025-09-26 03:44:03,521 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0022 | Val mean-roc_auc_score: 0.8059
2025-09-26 03:44:09,019 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8042
2025-09-26 03:44:12,647 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0036 | Val mean-roc_auc_score: 0.8229
2025-09-26 03:44:18,100 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0091 | Val mean-roc_auc_score: 0.8193
2025-09-26 03:44:23,242 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8169
2025-09-26 03:44:29,374 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0060 | Val mean-roc_auc_score: 0.8152
2025-09-26 03:44:29,863 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.7856
2025-09-26 03:44:30,181 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset cocrystal at 2025-09-26_03-44-30
2025-09-26 03:44:35,273 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7601 | Val mean-roc_auc_score: 0.6874
2025-09-26 03:44:35,273 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 37
2025-09-26 03:44:36,112 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.6874
2025-09-26 03:44:42,206 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5439 | Val mean-roc_auc_score: 0.7846
2025-09-26 03:44:42,439 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 74
2025-09-26 03:44:43,109 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.7846
2025-09-26 03:44:46,838 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4318 | Val mean-roc_auc_score: 0.8148
2025-09-26 03:44:47,062 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 111
2025-09-26 03:44:47,676 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8148
2025-09-26 03:44:53,539 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4172 | Val mean-roc_auc_score: 0.8466
2025-09-26 03:44:53,778 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 148
2025-09-26 03:44:54,531 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8466
2025-09-26 03:45:00,048 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3818 | Val mean-roc_auc_score: 0.8392
2025-09-26 03:45:05,674 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3722 | Val mean-roc_auc_score: 0.8672
2025-09-26 03:45:06,192 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 222
2025-09-26 03:45:06,823 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8672
2025-09-26 03:45:12,819 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.3361 | Val mean-roc_auc_score: 0.8514
2025-09-26 03:45:15,750 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2956 | Val mean-roc_auc_score: 0.8888
2025-09-26 03:45:16,011 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 296
2025-09-26 03:45:16,930 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val mean-roc_auc_score: 0.8888
2025-09-26 03:45:22,077 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2614 | Val mean-roc_auc_score: 0.8955
2025-09-26 03:45:22,307 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Global step of best model: 333
2025-09-26 03:45:22,943 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val mean-roc_auc_score: 0.8955
2025-09-26 03:45:28,164 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2432 | Val mean-roc_auc_score: 0.8871
2025-09-26 03:45:33,680 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.2991 | Val mean-roc_auc_score: 0.8612
2025-09-26 03:45:39,507 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.2213 | Val mean-roc_auc_score: 0.8553
2025-09-26 03:45:42,522 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1900 | Val mean-roc_auc_score: 0.8516
2025-09-26 03:45:48,240 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.2014 | Val mean-roc_auc_score: 0.8462
2025-09-26 03:45:53,922 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1571 | Val mean-roc_auc_score: 0.8600
2025-09-26 03:45:59,603 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1495 | Val mean-roc_auc_score: 0.8463
2025-09-26 03:46:05,761 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1670 | Val mean-roc_auc_score: 0.8507
2025-09-26 03:46:11,506 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1225 | Val mean-roc_auc_score: 0.8401
2025-09-26 03:46:14,902 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0794 | Val mean-roc_auc_score: 0.8076
2025-09-26 03:46:20,637 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0819 | Val mean-roc_auc_score: 0.8261
2025-09-26 03:46:26,281 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.1318 | Val mean-roc_auc_score: 0.8136
2025-09-26 03:46:32,332 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0770 | Val mean-roc_auc_score: 0.8334
2025-09-26 03:46:37,795 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0515 | Val mean-roc_auc_score: 0.8483
2025-09-26 03:46:41,001 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0663 | Val mean-roc_auc_score: 0.8339
2025-09-26 03:46:46,585 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0806 | Val mean-roc_auc_score: 0.8369
2025-09-26 03:46:51,925 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0500 | Val mean-roc_auc_score: 0.8267
2025-09-26 03:46:58,736 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0291 | Val mean-roc_auc_score: 0.8252
2025-09-26 03:47:04,234 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0399 | Val mean-roc_auc_score: 0.8337
2025-09-26 03:47:09,364 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0781 | Val mean-roc_auc_score: 0.8401
2025-09-26 03:47:12,189 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0566 | Val mean-roc_auc_score: 0.8296
2025-09-26 03:47:17,519 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0229 | Val mean-roc_auc_score: 0.8255
2025-09-26 03:47:23,564 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8232
2025-09-26 03:47:29,297 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.8080
2025-09-26 03:47:34,854 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0145 | Val mean-roc_auc_score: 0.8106
2025-09-26 03:47:40,368 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0260 | Val mean-roc_auc_score: 0.8157
2025-09-26 03:47:43,367 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0574 | Val mean-roc_auc_score: 0.8294
2025-09-26 03:47:49,184 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0959 | Val mean-roc_auc_score: 0.7846
2025-09-26 03:47:54,528 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0706 | Val mean-roc_auc_score: 0.7985
2025-09-26 03:48:00,451 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0215 | Val mean-roc_auc_score: 0.8009
2025-09-26 03:48:06,243 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8117
2025-09-26 03:48:09,441 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0106 | Val mean-roc_auc_score: 0.8191
2025-09-26 03:48:15,501 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0321 | Val mean-roc_auc_score: 0.8121
2025-09-26 03:48:21,446 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0105 | Val mean-roc_auc_score: 0.8160
2025-09-26 03:48:27,467 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8141
2025-09-26 03:48:33,427 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8121
2025-09-26 03:48:39,612 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0145 | Val mean-roc_auc_score: 0.8123
2025-09-26 03:48:43,628 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0414 | Val mean-roc_auc_score: 0.7883
2025-09-26 03:48:49,420 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0488 | Val mean-roc_auc_score: 0.7865
2025-09-26 03:48:55,197 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0113 | Val mean-roc_auc_score: 0.7968
2025-09-26 03:49:00,572 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8026
2025-09-26 03:49:06,373 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0057 | Val mean-roc_auc_score: 0.8015
2025-09-26 03:49:10,236 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0125 | Val mean-roc_auc_score: 0.7992
2025-09-26 03:49:15,409 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0046 | Val mean-roc_auc_score: 0.8012
2025-09-26 03:49:21,074 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0110 | Val mean-roc_auc_score: 0.8006
2025-09-26 03:49:27,680 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0086 | Val mean-roc_auc_score: 0.7957
2025-09-26 03:49:32,890 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0024 | Val mean-roc_auc_score: 0.7949
2025-09-26 03:49:39,100 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0028 | Val mean-roc_auc_score: 0.7960
2025-09-26 03:49:42,334 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.7971
2025-09-26 03:49:47,964 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.7991
2025-09-26 03:49:53,259 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.7987
2025-09-26 03:49:58,885 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.7960
2025-09-26 03:50:04,720 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0050 | Val mean-roc_auc_score: 0.7957
2025-09-26 03:50:10,395 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.7988
2025-09-26 03:50:13,462 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.7966
2025-09-26 03:50:18,928 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.7964
2025-09-26 03:50:24,765 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0070 | Val mean-roc_auc_score: 0.8011
2025-09-26 03:50:30,571 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8007
2025-09-26 03:50:36,135 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0011 | Val mean-roc_auc_score: 0.8021
2025-09-26 03:50:39,050 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0020 | Val mean-roc_auc_score: 0.8005
2025-09-26 03:50:44,724 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0049 | Val mean-roc_auc_score: 0.7986
2025-09-26 03:50:50,486 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0027 | Val mean-roc_auc_score: 0.7955
2025-09-26 03:50:56,373 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0059 | Val mean-roc_auc_score: 0.7984
2025-09-26 03:51:01,557 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.7988
2025-09-26 03:51:07,107 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0094 | Val mean-roc_auc_score: 0.8035
2025-09-26 03:51:10,077 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0056 | Val mean-roc_auc_score: 0.8027
2025-09-26 03:51:15,636 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0073 | Val mean-roc_auc_score: 0.7932
2025-09-26 03:51:21,714 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0107 | Val mean-roc_auc_score: 0.8128
2025-09-26 03:51:27,432 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0071 | Val mean-roc_auc_score: 0.8111
2025-09-26 03:51:33,275 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0058 | Val mean-roc_auc_score: 0.7974
2025-09-26 03:51:38,421 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0040 | Val mean-roc_auc_score: 0.8080
2025-09-26 03:51:41,597 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8078
2025-09-26 03:51:48,647 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0016 | Val mean-roc_auc_score: 0.8074
2025-09-26 03:51:53,777 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0015 | Val mean-roc_auc_score: 0.8066
2025-09-26 03:51:59,529 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0012 | Val mean-roc_auc_score: 0.8058
2025-09-26 03:52:05,485 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.8067
2025-09-26 03:52:08,885 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.8071
2025-09-26 03:52:15,133 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.8064
2025-09-26 03:52:20,699 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0014 | Val mean-roc_auc_score: 0.8069
2025-09-26 03:52:26,274 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0021 | Val mean-roc_auc_score: 0.8064
2025-09-26 03:52:31,738 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0029 | Val mean-roc_auc_score: 0.8054
2025-09-26 03:52:37,568 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0009 | Val mean-roc_auc_score: 0.8057
2025-09-26 03:52:41,149 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0019 | Val mean-roc_auc_score: 0.8058
2025-09-26 03:52:46,485 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0023 | Val mean-roc_auc_score: 0.8059
2025-09-26 03:52:52,098 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0018 | Val mean-roc_auc_score: 0.8061
2025-09-26 03:52:57,121 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0043 | Val mean-roc_auc_score: 0.8030
2025-09-26 03:53:02,383 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0034 | Val mean-roc_auc_score: 0.7993
2025-09-26 03:53:08,396 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0013 | Val mean-roc_auc_score: 0.8004
2025-09-26 03:53:10,989 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0025 | Val mean-roc_auc_score: 0.8017
2025-09-26 03:53:16,430 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0039 | Val mean-roc_auc_score: 0.8003
2025-09-26 03:53:20,922 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0134 | Val mean-roc_auc_score: 0.7911
2025-09-26 03:53:21,402 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8121
2025-09-26 03:53:21,744 - logs_modchembert_cocrystal_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.7866, Std Dev: 0.0204
logs_modchembert_classification_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_covid19_epochs100_batch_size32_20250926_005655.log
ADDED
@@ -0,0 +1,331 @@
2025-09-26 00:56:55,920 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Running benchmark for dataset: covid19
2025-09-26 00:56:55,920 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - dataset: covid19, tasks: ['label'], epochs: 100, learning rate: 3e-05
2025-09-26 00:56:55,927 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset covid19 at 2025-09-26_00-56-55
2025-09-26 00:57:13,848 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5962 | Val mean-roc_auc_score: 0.8212
2025-09-26 00:57:13,849 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 00:57:14,896 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8212
2025-09-26 00:57:30,073 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4437 | Val mean-roc_auc_score: 0.8302
2025-09-26 00:57:30,271 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
2025-09-26 00:57:31,047 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8302
2025-09-26 00:57:45,908 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3808 | Val mean-roc_auc_score: 0.8308
2025-09-26 00:57:46,104 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
2025-09-26 00:57:46,810 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8308
2025-09-26 00:58:05,767 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3458 | Val mean-roc_auc_score: 0.8410
2025-09-26 00:58:05,974 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 260
2025-09-26 00:58:06,557 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val mean-roc_auc_score: 0.8410
2025-09-26 00:58:23,065 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3100 | Val mean-roc_auc_score: 0.8209
2025-09-26 00:58:42,954 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2327 | Val mean-roc_auc_score: 0.8465
2025-09-26 00:58:43,401 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 390
2025-09-26 00:58:41,339 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val mean-roc_auc_score: 0.8465
2025-09-26 00:58:59,969 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1841 | Val mean-roc_auc_score: 0.7933
2025-09-26 00:59:17,241 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1336 | Val mean-roc_auc_score: 0.8007
2025-09-26 00:59:36,743 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1005 | Val mean-roc_auc_score: 0.8098
2025-09-26 00:59:53,134 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1138 | Val mean-roc_auc_score: 0.8171
2025-09-26 01:00:09,453 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1167 | Val mean-roc_auc_score: 0.7920
2025-09-26 01:00:29,837 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0755 | Val mean-roc_auc_score: 0.7912
2025-09-26 01:00:46,963 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0635 | Val mean-roc_auc_score: 0.8168
2025-09-26 01:01:05,386 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0598 | Val mean-roc_auc_score: 0.8041
2025-09-26 01:01:20,963 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0406 | Val mean-roc_auc_score: 0.8114
2025-09-26 01:01:37,666 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0437 | Val mean-roc_auc_score: 0.8002
2025-09-26 01:01:58,106 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0381 | Val mean-roc_auc_score: 0.7988
2025-09-26 01:02:14,758 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0995 | Val mean-roc_auc_score: 0.8255
2025-09-26 01:02:34,486 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0507 | Val mean-roc_auc_score: 0.8001
2025-09-26 01:02:50,781 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0387 | Val mean-roc_auc_score: 0.8086
2025-09-26 01:03:07,519 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0320 | Val mean-roc_auc_score: 0.7940
2025-09-26 01:03:27,527 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0677 | Val mean-roc_auc_score: 0.8104
2025-09-26 01:03:44,139 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0474 | Val mean-roc_auc_score: 0.8027
2025-09-26 01:04:02,378 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0424 | Val mean-roc_auc_score: 0.8119
2025-09-26 01:04:17,057 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0383 | Val mean-roc_auc_score: 0.8081
2025-09-26 01:04:34,851 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.8164
2025-09-26 01:04:50,731 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0312 | Val mean-roc_auc_score: 0.8128
2025-09-26 01:05:06,051 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0303 | Val mean-roc_auc_score: 0.8053
2025-09-26 01:05:23,074 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0218 | Val mean-roc_auc_score: 0.8103
2025-09-26 01:05:37,405 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0261 | Val mean-roc_auc_score: 0.8063
2025-09-26 01:05:57,235 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8062
2025-09-26 01:06:12,863 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0227 | Val mean-roc_auc_score: 0.8049
2025-09-26 01:06:31,202 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0229 | Val mean-roc_auc_score: 0.8092
2025-09-26 01:06:46,467 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0209 | Val mean-roc_auc_score: 0.8081
2025-09-26 01:07:01,892 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8098
2025-09-26 01:07:20,342 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0233 | Val mean-roc_auc_score: 0.8023
2025-09-26 01:07:35,996 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0160 | Val mean-roc_auc_score: 0.8092
2025-09-26 01:07:54,177 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8070
2025-09-26 01:08:09,383 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0253 | Val mean-roc_auc_score: 0.8072
2025-09-26 01:08:27,450 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8191
2025-09-26 01:08:42,861 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0433 | Val mean-roc_auc_score: 0.7933
2025-09-26 01:08:58,657 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0246 | Val mean-roc_auc_score: 0.7992
2025-09-26 01:09:17,277 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.7998
2025-09-26 01:09:32,754 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0224 | Val mean-roc_auc_score: 0.7992
2025-09-26 01:09:48,774 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.8041
2025-09-26 01:10:03,832 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0198 | Val mean-roc_auc_score: 0.8011
2025-09-26 01:10:24,982 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.8018
2025-09-26 01:10:40,307 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8033
2025-09-26 01:10:55,728 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.8098
2025-09-26 01:11:13,621 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.8073
2025-09-26 01:11:32,134 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.8070
2025-09-26 01:11:46,616 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.8083
2025-09-26 01:12:04,927 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8009
2025-09-26 01:12:23,117 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.8061
2025-09-26 01:12:36,838 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0189 | Val mean-roc_auc_score: 0.8085
2025-09-26 01:12:52,335 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0158 | Val mean-roc_auc_score: 0.8043
2025-09-26 01:13:10,502 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0127 | Val mean-roc_auc_score: 0.8028
2025-09-26 01:13:26,306 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8090
2025-09-26 01:13:44,810 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.8113
2025-09-26 01:13:59,496 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8120
2025-09-26 01:14:17,526 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8087
2025-09-26 01:14:35,622 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8091
2025-09-26 01:14:50,645 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.8067
2025-09-26 01:15:09,154 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0240 | Val mean-roc_auc_score: 0.7872
2025-09-26 01:15:23,865 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0375 | Val mean-roc_auc_score: 0.8015
2025-09-26 01:15:42,196 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0659 | Val mean-roc_auc_score: 0.7930
2025-09-26 01:15:58,586 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0500 | Val mean-roc_auc_score: 0.8051
2025-09-26 01:16:17,094 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0299 | Val mean-roc_auc_score: 0.8117
2025-09-26 01:16:32,737 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0220 | Val mean-roc_auc_score: 0.8053
2025-09-26 01:16:47,748 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0269 | Val mean-roc_auc_score: 0.8045
2025-09-26 01:17:06,021 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0289 | Val mean-roc_auc_score: 0.8046
2025-09-26 01:17:24,810 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0222 | Val mean-roc_auc_score: 0.8082
2025-09-26 01:17:38,816 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0194 | Val mean-roc_auc_score: 0.8074
2025-09-26 01:17:57,250 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8083
2025-09-26 01:18:12,724 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0257 | Val mean-roc_auc_score: 0.8091
2025-09-26 01:18:29,702 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.8117
2025-09-26 01:18:46,357 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0262 | Val mean-roc_auc_score: 0.8096
2025-09-26 01:19:01,854 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8051
2025-09-26 01:19:20,075 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0195 | Val mean-roc_auc_score: 0.8061
2025-09-26 01:19:37,879 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.8055
2025-09-26 01:19:55,006 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8048
2025-09-26 01:20:07,697 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0161 | Val mean-roc_auc_score: 0.8041
2025-09-26 01:20:25,571 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8036
2025-09-26 01:20:43,426 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8042
2025-09-26 01:20:58,136 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0133 | Val mean-roc_auc_score: 0.8042
2025-09-26 01:21:15,653 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.8034
2025-09-26 01:21:31,727 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0153 | Val mean-roc_auc_score: 0.8031
2025-09-26 01:21:49,308 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0205 | Val mean-roc_auc_score: 0.8054
2025-09-26 01:22:04,916 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0155 | Val mean-roc_auc_score: 0.8050
2025-09-26 01:22:19,993 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0173 | Val mean-roc_auc_score: 0.8059
2025-09-26 01:22:38,163 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8074
2025-09-26 01:22:54,047 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0171 | Val mean-roc_auc_score: 0.8081
2025-09-26 01:23:13,206 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0128 | Val mean-roc_auc_score: 0.8074
2025-09-26 01:23:29,071 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8061
2025-09-26 01:23:46,617 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0157 | Val mean-roc_auc_score: 0.8061
2025-09-26 01:24:02,113 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8062
2025-09-26 01:24:17,983 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8058
2025-09-26 01:24:35,880 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0147 | Val mean-roc_auc_score: 0.8051
2025-09-26 01:24:50,789 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0176 | Val mean-roc_auc_score: 0.8119
2025-09-26 01:25:08,278 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0290 | Val mean-roc_auc_score: 0.8037
2025-09-26 01:25:09,654 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8319
2025-09-26 01:25:09,963 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset covid19 at 2025-09-26_01-25-09
2025-09-26 01:25:24,582 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.5808 | Val mean-roc_auc_score: 0.8190
2025-09-26 01:25:24,582 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 01:25:25,585 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8190
2025-09-26 01:25:38,394 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4229 | Val mean-roc_auc_score: 0.8349
2025-09-26 01:25:38,587 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
2025-09-26 01:25:39,227 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8349
2025-09-26 01:25:57,413 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3865 | Val mean-roc_auc_score: 0.8460
2025-09-26 01:25:57,641 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
2025-09-26 01:25:58,320 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8460
2025-09-26 01:26:13,400 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3375 | Val mean-roc_auc_score: 0.8283
2025-09-26 01:26:30,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2938 | Val mean-roc_auc_score: 0.7948
2025-09-26 01:26:46,079 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2452 | Val mean-roc_auc_score: 0.8267
2025-09-26 01:27:04,806 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1830 | Val mean-roc_auc_score: 0.8255
2025-09-26 01:27:19,210 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1898 | Val mean-roc_auc_score: 0.8403
2025-09-26 01:27:37,677 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1298 | Val mean-roc_auc_score: 0.8159
2025-09-26 01:27:53,141 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0950 | Val mean-roc_auc_score: 0.8044
2025-09-26 01:28:11,235 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0740 | Val mean-roc_auc_score: 0.8056
2025-09-26 01:28:26,726 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0942 | Val mean-roc_auc_score: 0.8108
2025-09-26 01:28:44,589 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0566 | Val mean-roc_auc_score: 0.7996
2025-09-26 01:28:59,635 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0738 | Val mean-roc_auc_score: 0.7993
2025-09-26 01:29:14,813 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0697 | Val mean-roc_auc_score: 0.8106
2025-09-26 01:29:33,706 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0414 | Val mean-roc_auc_score: 0.8137
2025-09-26 01:29:49,490 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0605 | Val mean-roc_auc_score: 0.8216
2025-09-26 01:30:07,121 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0507 | Val mean-roc_auc_score: 0.7956
2025-09-26 01:30:22,588 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0400 | Val mean-roc_auc_score: 0.8221
2025-09-26 01:30:40,075 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0375 | Val mean-roc_auc_score: 0.8056
2025-09-26 01:30:55,337 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0353 | Val mean-roc_auc_score: 0.8063
2025-09-26 01:31:11,441 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0534 | Val mean-roc_auc_score: 0.7975
2025-09-26 01:31:29,746 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0798 | Val mean-roc_auc_score: 0.8015
2025-09-26 01:31:45,145 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0719 | Val mean-roc_auc_score: 0.7995
2025-09-26 01:32:03,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0609 | Val mean-roc_auc_score: 0.7888
2025-09-26 01:32:18,365 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0377 | Val mean-roc_auc_score: 0.7998
2025-09-26 01:32:36,831 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0324 | Val mean-roc_auc_score: 0.8034
2025-09-26 01:32:52,221 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0346 | Val mean-roc_auc_score: 0.7983
2025-09-26 01:33:10,331 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0356 | Val mean-roc_auc_score: 0.8080
2025-09-26 01:33:24,601 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0297 | Val mean-roc_auc_score: 0.7962
2025-09-26 01:33:42,582 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8128
2025-09-26 01:34:01,057 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0375 | Val mean-roc_auc_score: 0.8019
2025-09-26 01:34:16,753 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0243 | Val mean-roc_auc_score: 0.8086
2025-09-26 01:34:35,270 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0439 | Val mean-roc_auc_score: 0.8064
2025-09-26 01:34:49,976 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0469 | Val mean-roc_auc_score: 0.8038
2025-09-26 01:35:08,510 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0559 | Val mean-roc_auc_score: 0.8040
2025-09-26 01:35:24,725 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0285 | Val mean-roc_auc_score: 0.8015
2025-09-26 01:35:40,102 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0263 | Val mean-roc_auc_score: 0.8079
2025-09-26 01:35:58,601 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0249 | Val mean-roc_auc_score: 0.8035
2025-09-26 01:36:13,923 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.8051
2025-09-26 01:36:31,822 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0221 | Val mean-roc_auc_score: 0.8070
2025-09-26 01:36:47,674 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7968
2025-09-26 01:37:05,570 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0207 | Val mean-roc_auc_score: 0.7985
2025-09-26 01:37:20,855 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0409 | Val mean-roc_auc_score: 0.8035
2025-09-26 01:37:38,250 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0233 | Val mean-roc_auc_score: 0.8078
2025-09-26 01:37:54,495 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0257 | Val mean-roc_auc_score: 0.8167
2025-09-26 01:38:11,983 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0294 | Val mean-roc_auc_score: 0.8036
2025-09-26 01:38:30,323 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0504 | Val mean-roc_auc_score: 0.8090
2025-09-26 01:38:46,228 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0397 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:39:04,166 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0280 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:39:20,174 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0227 | Val mean-roc_auc_score: 0.7940
2025-09-26 01:39:39,226 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0224 | Val mean-roc_auc_score: 0.7928
2025-09-26 01:39:55,226 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.7889
2025-09-26 01:40:10,946 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0180 | Val mean-roc_auc_score: 0.7920
2025-09-26 01:40:29,475 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.7969
2025-09-26 01:40:45,938 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.7980
2025-09-26 01:41:05,326 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0711 | Val mean-roc_auc_score: 0.7986
2025-09-26 01:41:20,895 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.7732
2025-09-26 01:41:37,127 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0406 | Val mean-roc_auc_score: 0.7751
2025-09-26 01:41:54,910 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0282 | Val mean-roc_auc_score: 0.7720
2025-09-26 01:42:09,731 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.7844
2025-09-26 01:42:29,095 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.7784
2025-09-26 01:42:44,721 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.7709
2025-09-26 01:43:02,894 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.7811
2025-09-26 01:43:18,590 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.7777
2025-09-26 01:43:37,433 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.7834
2025-09-26 01:43:54,794 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.7820
2025-09-26 01:44:11,635 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0245 | Val mean-roc_auc_score: 0.7815
2025-09-26 01:44:30,922 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.7823
2025-09-26 01:44:47,123 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.7846
2025-09-26 01:45:07,036 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0112 | Val mean-roc_auc_score: 0.7817
2025-09-26 01:45:24,128 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.7818
2025-09-26 01:45:40,655 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.7866
2025-09-26 01:46:00,036 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0136 | Val mean-roc_auc_score: 0.7856
2025-09-26 01:46:17,107 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0192 | Val mean-roc_auc_score: 0.7876
2025-09-26 01:46:36,066 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0184 | Val mean-roc_auc_score: 0.7854
2025-09-26 01:46:54,323 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.7872
2025-09-26 01:47:11,273 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0167 | Val mean-roc_auc_score: 0.7776
2025-09-26 01:47:31,028 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0140 | Val mean-roc_auc_score: 0.7773
2025-09-26 01:47:48,374 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0976 | Val mean-roc_auc_score: 0.8236
2025-09-26 01:48:04,801 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0630 | Val mean-roc_auc_score: 0.8161
2025-09-26 01:48:24,723 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0286 | Val mean-roc_auc_score: 0.8199
2025-09-26 01:48:41,829 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0280 | Val mean-roc_auc_score: 0.8207
2025-09-26 01:49:01,266 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0242 | Val mean-roc_auc_score: 0.8141
2025-09-26 01:49:18,135 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0277 | Val mean-roc_auc_score: 0.8181
2025-09-26 01:49:34,676 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0237 | Val mean-roc_auc_score: 0.8321
2025-09-26 01:49:54,974 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0443 | Val mean-roc_auc_score: 0.8178
2025-09-26 01:50:12,551 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0268 | Val mean-roc_auc_score: 0.8173
2025-09-26 01:50:32,357 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0196 | Val mean-roc_auc_score: 0.8169
2025-09-26 01:50:49,965 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8103
2025-09-26 01:51:06,437 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.8202
2025-09-26 01:51:25,319 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.8125
2025-09-26 01:51:43,041 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0187 | Val mean-roc_auc_score: 0.8131
2025-09-26 01:52:03,009 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0239 | Val mean-roc_auc_score: 0.8091
2025-09-26 01:52:20,806 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0216 | Val mean-roc_auc_score: 0.8144
2025-09-26 01:52:37,591 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0210 | Val mean-roc_auc_score: 0.8172
2025-09-26 01:52:58,180 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0054 | Val mean-roc_auc_score: 0.8137
2025-09-26 01:53:16,290 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8062
2025-09-26 01:53:33,775 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0191 | Val mean-roc_auc_score: 0.8072
2025-09-26 01:53:54,038 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0177 | Val mean-roc_auc_score: 0.8093
2025-09-26 01:53:54,918 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8272
2025-09-26 01:53:55,321 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset covid19 at 2025-09-26_01-53-55
2025-09-26 01:54:11,159 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6269 | Val mean-roc_auc_score: 0.8105
2025-09-26 01:54:11,159 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 65
2025-09-26 01:54:12,230 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val mean-roc_auc_score: 0.8105
2025-09-26 01:54:31,100 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4583 | Val mean-roc_auc_score: 0.8388
2025-09-26 01:54:31,302 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 130
2025-09-26 01:54:31,958 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val mean-roc_auc_score: 0.8388
2025-09-26 01:54:49,994 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3942 | Val mean-roc_auc_score: 0.8477
2025-09-26 01:54:50,198 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Global step of best model: 195
2025-09-26 01:54:50,839 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val mean-roc_auc_score: 0.8477
2025-09-26 01:55:07,345 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3479 | Val mean-roc_auc_score: 0.8385
2025-09-26 01:55:26,282 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3075 | Val mean-roc_auc_score: 0.8207
2025-09-26 01:55:43,366 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2163 | Val mean-roc_auc_score: 0.8114
2025-09-26 01:56:00,419 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1716 | Val mean-roc_auc_score: 0.8254
2025-09-26 01:56:19,390 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1336 | Val mean-roc_auc_score: 0.8118
2025-09-26 01:56:35,750 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0952 | Val mean-roc_auc_score: 0.7991
2025-09-26 01:56:54,719 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0850 | Val mean-roc_auc_score: 0.8066
2025-09-26 01:57:11,447 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0891 | Val mean-roc_auc_score: 0.8064
2025-09-26 01:57:30,785 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0608 | Val mean-roc_auc_score: 0.7960
2025-09-26 01:57:47,273 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0729 | Val mean-roc_auc_score: 0.8048
2025-09-26 01:58:04,273 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0516 | Val mean-roc_auc_score: 0.8015
2025-09-26 01:58:22,562 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0344 | Val mean-roc_auc_score: 0.8157
2025-09-26 01:58:39,708 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0898 | Val mean-roc_auc_score: 0.8082
2025-09-26 01:58:59,196 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0473 | Val mean-roc_auc_score: 0.7870
2025-09-26 01:59:16,400 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0560 | Val mean-roc_auc_score: 0.7913
2025-09-26 01:59:33,630 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0480 | Val mean-roc_auc_score: 0.7762
2025-09-26 01:59:52,393 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0416 | Val mean-roc_auc_score: 0.7855
2025-09-26 02:00:09,509 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0351 | Val mean-roc_auc_score: 0.7945
2025-09-26 02:00:29,540 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0292 | Val mean-roc_auc_score: 0.7931
2025-09-26 02:00:46,662 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0457 | Val mean-roc_auc_score: 0.8041
2025-09-26 02:01:03,474 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0714 | Val mean-roc_auc_score: 0.7956
2025-09-26 02:01:22,488 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0431 | Val mean-roc_auc_score: 0.8092
2025-09-26 02:01:40,330 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0401 | Val mean-roc_auc_score: 0.7941
2025-09-26 02:01:57,812 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0303 | Val mean-roc_auc_score: 0.8031
2025-09-26 02:02:17,132 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0359 | Val mean-roc_auc_score: 0.8020
|
| 258 |
+
2025-09-26 02:02:34,054 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0218 | Val mean-roc_auc_score: 0.8008
|
| 259 |
+
2025-09-26 02:02:52,262 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0244 | Val mean-roc_auc_score: 0.8007
|
| 260 |
+
2025-09-26 02:03:10,056 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0688 | Val mean-roc_auc_score: 0.8032
|
| 261 |
+
2025-09-26 02:03:27,160 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0736 | Val mean-roc_auc_score: 0.7845
|
| 262 |
+
2025-09-26 02:03:46,308 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0469 | Val mean-roc_auc_score: 0.7818
|
| 263 |
+
2025-09-26 02:04:03,337 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0480 | Val mean-roc_auc_score: 0.7985
|
| 264 |
+
2025-09-26 02:04:22,757 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0428 | Val mean-roc_auc_score: 0.8073
|
| 265 |
+
2025-09-26 02:04:40,103 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0297 | Val mean-roc_auc_score: 0.7974
|
| 266 |
+
2025-09-26 02:04:57,836 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0418 | Val mean-roc_auc_score: 0.8001
|
| 267 |
+
2025-09-26 02:05:17,530 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0256 | Val mean-roc_auc_score: 0.7998
|
| 268 |
+
2025-09-26 02:05:34,863 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0250 | Val mean-roc_auc_score: 0.7999
|
| 269 |
+
2025-09-26 02:05:54,481 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0204 | Val mean-roc_auc_score: 0.8012
|
| 270 |
+
2025-09-26 02:06:11,589 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0244 | Val mean-roc_auc_score: 0.7934
|
| 271 |
+
2025-09-26 02:06:29,699 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0185 | Val mean-roc_auc_score: 0.7992
|
| 272 |
+
2025-09-26 02:06:50,050 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0202 | Val mean-roc_auc_score: 0.7953
|
| 273 |
+
2025-09-26 02:07:07,548 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0214 | Val mean-roc_auc_score: 0.8010
|
| 274 |
+
2025-09-26 02:07:24,985 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0211 | Val mean-roc_auc_score: 0.7950
|
| 275 |
+
2025-09-26 02:07:42,740 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0178 | Val mean-roc_auc_score: 0.7969
|
| 276 |
+
2025-09-26 02:08:00,956 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0219 | Val mean-roc_auc_score: 0.7988
|
| 277 |
+
2025-09-26 02:08:21,049 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0138 | Val mean-roc_auc_score: 0.7970
|
| 278 |
+
2025-09-26 02:08:38,769 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0186 | Val mean-roc_auc_score: 0.7941
|
| 279 |
+
2025-09-26 02:08:56,401 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0146 | Val mean-roc_auc_score: 0.8003
|
| 280 |
+
2025-09-26 02:09:16,246 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0305 | Val mean-roc_auc_score: 0.7915
|
| 281 |
+
2025-09-26 02:09:33,747 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0230 | Val mean-roc_auc_score: 0.7898
|
| 282 |
+
2025-09-26 02:09:53,449 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.7951
|
| 283 |
+
2025-09-26 02:10:10,664 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0193 | Val mean-roc_auc_score: 0.7979
|
| 284 |
+
2025-09-26 02:10:27,852 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0197 | Val mean-roc_auc_score: 0.8017
|
| 285 |
+
2025-09-26 02:10:47,762 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0203 | Val mean-roc_auc_score: 0.7984
|
| 286 |
+
2025-09-26 02:11:05,438 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0738 | Val mean-roc_auc_score: 0.8103
|
| 287 |
+
2025-09-26 02:11:22,791 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0327 | Val mean-roc_auc_score: 0.8088
|
| 288 |
+
2025-09-26 02:11:42,918 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0238 | Val mean-roc_auc_score: 0.7993
|
| 289 |
+
2025-09-26 02:12:00,433 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0266 | Val mean-roc_auc_score: 0.8129
|
| 290 |
+
2025-09-26 02:12:18,627 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0274 | Val mean-roc_auc_score: 0.8000
|
| 291 |
+
2025-09-26 02:12:36,628 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0255 | Val mean-roc_auc_score: 0.7965
|
| 292 |
+
2025-09-26 02:12:54,232 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.7919
|
| 293 |
+
2025-09-26 02:13:14,129 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0318 | Val mean-roc_auc_score: 0.8135
|
| 294 |
+
2025-09-26 02:13:31,425 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0300 | Val mean-roc_auc_score: 0.8015
|
| 295 |
+
2025-09-26 02:13:50,545 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0261 | Val mean-roc_auc_score: 0.7954
|
| 296 |
+
2025-09-26 02:14:07,622 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0223 | Val mean-roc_auc_score: 0.7976
|
| 297 |
+
2025-09-26 02:14:24,566 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.7973
|
| 298 |
+
2025-09-26 02:14:43,739 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.7979
|
| 299 |
+
2025-09-26 02:15:00,395 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0208 | Val mean-roc_auc_score: 0.7998
|
| 300 |
+
2025-09-26 02:15:19,005 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0258 | Val mean-roc_auc_score: 0.8025
|
| 301 |
+
2025-09-26 02:15:36,084 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0200 | Val mean-roc_auc_score: 0.7991
|
| 302 |
+
2025-09-26 02:15:52,801 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0236 | Val mean-roc_auc_score: 0.8054
|
| 303 |
+
2025-09-26 02:16:12,155 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0254 | Val mean-roc_auc_score: 0.8067
|
| 304 |
+
2025-09-26 02:16:29,098 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8115
|
| 305 |
+
2025-09-26 02:16:48,075 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0231 | Val mean-roc_auc_score: 0.8154
|
| 306 |
+
2025-09-26 02:17:04,755 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0064 | Val mean-roc_auc_score: 0.8100
|
| 307 |
+
2025-09-26 02:17:21,299 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0190 | Val mean-roc_auc_score: 0.8113
|
| 308 |
+
2025-09-26 02:17:40,414 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0212 | Val mean-roc_auc_score: 0.8077
|
| 309 |
+
2025-09-26 02:17:57,185 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0179 | Val mean-roc_auc_score: 0.8093
|
| 310 |
+
2025-09-26 02:18:15,955 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0156 | Val mean-roc_auc_score: 0.8079
|
| 311 |
+
2025-09-26 02:18:32,851 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0168 | Val mean-roc_auc_score: 0.8092
|
| 312 |
+
2025-09-26 02:18:49,553 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8085
|
| 313 |
+
2025-09-26 02:19:08,722 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0201 | Val mean-roc_auc_score: 0.8091
|
| 314 |
+
2025-09-26 02:19:25,392 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0151 | Val mean-roc_auc_score: 0.8088
|
| 315 |
+
2025-09-26 02:19:44,114 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0152 | Val mean-roc_auc_score: 0.8081
|
| 316 |
+
2025-09-26 02:20:00,975 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0182 | Val mean-roc_auc_score: 0.8077
|
| 317 |
+
2025-09-26 02:20:20,059 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0164 | Val mean-roc_auc_score: 0.8083
|
| 318 |
+
2025-09-26 02:20:36,684 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0175 | Val mean-roc_auc_score: 0.8105
|
| 319 |
+
2025-09-26 02:20:53,308 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0150 | Val mean-roc_auc_score: 0.8064
|
| 320 |
+
2025-09-26 02:21:11,874 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0143 | Val mean-roc_auc_score: 0.8080
|
| 321 |
+
2025-09-26 02:21:27,420 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8093
|
| 322 |
+
2025-09-26 02:21:46,052 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0174 | Val mean-roc_auc_score: 0.8098
|
| 323 |
+
2025-09-26 02:22:02,704 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0069 | Val mean-roc_auc_score: 0.8098
|
| 324 |
+
2025-09-26 02:22:19,224 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0162 | Val mean-roc_auc_score: 0.8118
|
| 325 |
+
2025-09-26 02:22:37,789 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0169 | Val mean-roc_auc_score: 0.8084
|
| 326 |
+
2025-09-26 02:22:54,718 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0206 | Val mean-roc_auc_score: 0.8052
|
| 327 |
+
2025-09-26 02:23:13,559 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0165 | Val mean-roc_auc_score: 0.8057
|
| 328 |
+
2025-09-26 02:23:29,964 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0129 | Val mean-roc_auc_score: 0.8062
|
| 329 |
+
2025-09-26 02:23:40,182 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0163 | Val mean-roc_auc_score: 0.8060
|
| 330 |
+
2025-09-26 02:23:41,000 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Test mean-roc_auc_score: 0.8333
|
| 331 |
+
2025-09-26 02:23:41,311 - logs_modchembert_covid19_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg mean-roc_auc_score: 0.8308, Std Dev: 0.0026
|
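The "Final Triplicate Test Results" line above reports the average and standard deviation over three independent fine-tuning runs, only one of whose test scores (0.8333) appears in this excerpt. A minimal sketch of that aggregation step, with the other two scores invented purely for illustration and the population standard deviation assumed (the benchmark script may use the sample form instead):

```python
import statistics


def summarize_triplicate(scores: list[float]) -> tuple[float, float]:
    """Average and population std dev over triplicate test scores."""
    avg = statistics.mean(scores)
    std = statistics.pstdev(scores)  # assumption: population std dev
    return round(avg, 4), round(std, 4)


# 0.8333 is the one test score visible in this log; the other two are hypothetical
avg, std = summarize_triplicate([0.8333, 0.8300, 0.8290])
print(f"Final Triplicate Test Results — Avg mean-roc_auc_score: {avg}, Std Dev: {std}")
```

With `statistics.stdev` (sample form, n-1 denominator) the reported Std Dev would be slightly larger for the same three scores.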
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_microsom_stab_h_epochs100_batch_size32_20250926_053918.log
ADDED
@@ -0,0 +1,377 @@
| 1 |
+
2025-09-26 05:39:18,034 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_h
|
| 2 |
+
2025-09-26 05:39:18,034 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 05:39:18,039 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_h at 2025-09-26_05-39-18
|
| 4 |
+
2025-09-26 05:39:27,863 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8611 | Val rms_score: 0.4439
|
| 5 |
+
2025-09-26 05:39:27,863 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
|
| 6 |
+
2025-09-26 05:39:28,561 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4439
|
| 7 |
+
2025-09-26 05:39:36,511 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5391 | Val rms_score: 0.4224
|
| 8 |
+
2025-09-26 05:39:36,799 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
|
| 9 |
+
2025-09-26 05:39:37,387 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4224
|
| 10 |
+
2025-09-26 05:39:43,159 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4699 | Val rms_score: 0.4595
|
| 11 |
+
2025-09-26 05:39:51,175 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3223 | Val rms_score: 0.4065
|
| 12 |
+
2025-09-26 05:39:51,374 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 216
|
| 13 |
+
2025-09-26 05:39:52,060 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4065
|
| 14 |
+
2025-09-26 05:40:00,921 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2431 | Val rms_score: 0.4358
|
| 15 |
+
2025-09-26 05:40:07,164 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1823 | Val rms_score: 0.4600
|
| 16 |
+
2025-09-26 05:40:15,596 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1238 | Val rms_score: 0.4436
|
| 17 |
+
2025-09-26 05:40:24,153 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0981 | Val rms_score: 0.4236
|
| 18 |
+
2025-09-26 05:40:32,587 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0822 | Val rms_score: 0.4444
|
| 19 |
+
2025-09-26 05:40:38,268 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0750 | Val rms_score: 0.4333
|
| 20 |
+
2025-09-26 05:40:46,477 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0608 | Val rms_score: 0.4441
|
| 21 |
+
2025-09-26 05:40:55,566 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0605 | Val rms_score: 0.4441
|
| 22 |
+
2025-09-26 05:41:03,668 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0674 | Val rms_score: 0.4302
|
| 23 |
+
2025-09-26 05:41:09,530 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0512 | Val rms_score: 0.4245
|
| 24 |
+
2025-09-26 05:41:17,668 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0490 | Val rms_score: 0.4203
|
| 25 |
+
2025-09-26 05:41:25,274 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0408 | Val rms_score: 0.4234
|
| 26 |
+
2025-09-26 05:41:33,789 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0349 | Val rms_score: 0.4142
|
| 27 |
+
2025-09-26 05:41:39,076 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0339 | Val rms_score: 0.4080
|
| 28 |
+
2025-09-26 05:41:47,888 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0347 | Val rms_score: 0.4121
|
| 29 |
+
2025-09-26 05:41:56,390 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0347 | Val rms_score: 0.4250
|
| 30 |
+
2025-09-26 05:42:04,455 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0326 | Val rms_score: 0.4114
|
| 31 |
+
2025-09-26 05:42:10,220 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0295 | Val rms_score: 0.4140
|
| 32 |
+
2025-09-26 05:42:18,058 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0333 | Val rms_score: 0.4157
|
| 33 |
+
2025-09-26 05:42:25,631 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0315 | Val rms_score: 0.4108
|
| 34 |
+
2025-09-26 05:42:33,122 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0270 | Val rms_score: 0.4049
|
| 35 |
+
2025-09-26 05:42:33,292 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1350
|
| 36 |
+
2025-09-26 05:42:33,975 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.4049
|
| 37 |
+
2025-09-26 05:42:39,268 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0251 | Val rms_score: 0.4095
|
| 38 |
+
2025-09-26 05:42:47,708 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0271 | Val rms_score: 0.4175
|
| 39 |
+
2025-09-26 05:42:55,319 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0299 | Val rms_score: 0.4076
|
| 40 |
+
2025-09-26 05:43:03,406 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0272 | Val rms_score: 0.4103
|
| 41 |
+
2025-09-26 05:43:08,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0260 | Val rms_score: 0.4075
|
| 42 |
+
2025-09-26 05:43:16,680 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0272 | Val rms_score: 0.4095
|
| 43 |
+
2025-09-26 05:43:24,754 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0243 | Val rms_score: 0.4133
|
| 44 |
+
2025-09-26 05:43:33,203 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0243 | Val rms_score: 0.4033
|
| 45 |
+
2025-09-26 05:43:33,382 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1782
|
| 46 |
+
2025-09-26 05:43:34,058 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 33 with val rms_score: 0.4033
|
| 47 |
+
2025-09-26 05:43:39,424 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0224 | Val rms_score: 0.4154
|
| 48 |
+
2025-09-26 05:43:46,783 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0240 | Val rms_score: 0.4098
|
| 49 |
+
2025-09-26 05:43:55,167 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0225 | Val rms_score: 0.4135
|
| 50 |
+
2025-09-26 05:44:03,247 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0233 | Val rms_score: 0.4081
|
| 51 |
+
2025-09-26 05:44:10,110 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0210 | Val rms_score: 0.4035
|
| 52 |
+
2025-09-26 05:44:17,938 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0177 | Val rms_score: 0.4090
|
| 53 |
+
2025-09-26 05:44:26,656 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0194 | Val rms_score: 0.4073
|
| 54 |
+
2025-09-26 05:44:35,117 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0205 | Val rms_score: 0.4104
|
| 55 |
+
2025-09-26 05:44:41,217 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0190 | Val rms_score: 0.4058
|
| 56 |
+
2025-09-26 05:44:49,779 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0225 | Val rms_score: 0.4089
|
| 57 |
+
2025-09-26 05:44:57,735 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0205 | Val rms_score: 0.4062
|
| 58 |
+
2025-09-26 05:45:05,348 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0198 | Val rms_score: 0.4149
|
| 59 |
+
2025-09-26 05:45:10,492 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0187 | Val rms_score: 0.4059
|
| 60 |
+
2025-09-26 05:45:18,672 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0183 | Val rms_score: 0.4080
|
| 61 |
+
2025-09-26 05:45:26,767 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0191 | Val rms_score: 0.4042
|
| 62 |
+
2025-09-26 05:45:34,039 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0201 | Val rms_score: 0.4064
|
| 63 |
+
2025-09-26 05:45:38,876 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0187 | Val rms_score: 0.4085
|
| 64 |
+
2025-09-26 05:45:46,013 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0180 | Val rms_score: 0.4043
|
| 65 |
+
2025-09-26 05:45:54,528 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0162 | Val rms_score: 0.4029
|
| 66 |
+
2025-09-26 05:45:54,853 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2808
|
| 67 |
+
2025-09-26 05:45:55,528 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val rms_score: 0.4029
|
| 68 |
+
2025-09-26 05:46:02,883 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0173 | Val rms_score: 0.4077
|
| 69 |
+
2025-09-26 05:46:08,008 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0177 | Val rms_score: 0.4076
|
| 70 |
+
2025-09-26 05:46:15,344 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0184 | Val rms_score: 0.4136
|
| 71 |
+
2025-09-26 05:46:24,495 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0167 | Val rms_score: 0.4036
|
| 72 |
+
2025-09-26 05:46:32,881 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0166 | Val rms_score: 0.4023
|
| 73 |
+
2025-09-26 05:46:33,049 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3078
|
| 74 |
+
2025-09-26 05:46:33,753 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 57 with val rms_score: 0.4023
|
| 75 |
+
2025-09-26 05:46:39,939 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0164 | Val rms_score: 0.3989
|
| 76 |
+
2025-09-26 05:46:40,155 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3132
|
| 77 |
+
2025-09-26 05:46:40,829 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 58 with val rms_score: 0.3989
|
| 78 |
+
2025-09-26 05:46:48,838 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0161 | Val rms_score: 0.4018
|
| 79 |
+
2025-09-26 05:46:56,786 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0152 | Val rms_score: 0.4061
|
| 80 |
+
2025-09-26 05:47:04,690 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0157 | Val rms_score: 0.4008
|
| 81 |
+
2025-09-26 05:47:10,690 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0158 | Val rms_score: 0.4030
|
| 82 |
+
2025-09-26 05:47:18,682 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0176 | Val rms_score: 0.3999
|
| 83 |
+
2025-09-26 05:47:26,725 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0161 | Val rms_score: 0.4050
|
| 84 |
+
2025-09-26 05:47:32,414 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0182 | Val rms_score: 0.4035
|
| 85 |
+
2025-09-26 05:47:40,892 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0158 | Val rms_score: 0.4033
|
| 86 |
+
2025-09-26 05:47:48,974 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0154 | Val rms_score: 0.4085
|
| 87 |
+
2025-09-26 05:47:57,206 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0156 | Val rms_score: 0.4043
|
| 88 |
+
2025-09-26 05:48:02,662 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0159 | Val rms_score: 0.4013
|
| 89 |
+
2025-09-26 05:48:11,097 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0151 | Val rms_score: 0.4014
|
| 90 |
+
2025-09-26 05:48:18,464 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0146 | Val rms_score: 0.4064
|
| 91 |
+
2025-09-26 05:48:27,255 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0145 | Val rms_score: 0.4021
|
| 92 |
+
2025-09-26 05:48:32,720 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0153 | Val rms_score: 0.4115
|
| 93 |
+
2025-09-26 05:48:40,456 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0150 | Val rms_score: 0.3996
|
| 94 |
+
2025-09-26 05:48:49,217 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0162 | Val rms_score: 0.4080
|
| 95 |
+
2025-09-26 05:48:56,789 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0149 | Val rms_score: 0.4016
|
| 96 |
+
2025-09-26 05:49:02,151 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0138 | Val rms_score: 0.3994
|
| 97 |
+
2025-09-26 05:49:09,301 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0141 | Val rms_score: 0.4049
|
| 98 |
+
2025-09-26 05:49:17,170 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0146 | Val rms_score: 0.4012
|
| 99 |
+
2025-09-26 05:49:24,811 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0146 | Val rms_score: 0.3996
|
| 100 |
+
2025-09-26 05:49:32,537 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0153 | Val rms_score: 0.4054
|
| 101 |
+
2025-09-26 05:49:37,549 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0127 | Val rms_score: 0.4000
|
| 102 |
+
2025-09-26 05:49:44,879 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0157 | Val rms_score: 0.4029
|
| 103 |
+
2025-09-26 05:49:52,329 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0130 | Val rms_score: 0.4009
|
| 104 |
+
2025-09-26 05:49:59,749 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0150 | Val rms_score: 0.4004
|
| 105 |
+
2025-09-26 05:50:04,726 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0144 | Val rms_score: 0.3993
|
| 106 |
+
2025-09-26 05:50:13,533 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0136 | Val rms_score: 0.4018
|
| 107 |
+
2025-09-26 05:50:21,071 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0128 | Val rms_score: 0.3975
|
| 108 |
+
2025-09-26 05:50:21,264 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4752
2025-09-26 05:50:21,971 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.3975
2025-09-26 05:50:30,132 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0135 | Val rms_score: 0.3948
2025-09-26 05:50:30,362 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4806
2025-09-26 05:50:31,022 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 89 with val rms_score: 0.3948
2025-09-26 05:50:36,286 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0144 | Val rms_score: 0.3987
2025-09-26 05:50:44,085 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0136 | Val rms_score: 0.4061
2025-09-26 05:50:52,701 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0135 | Val rms_score: 0.4011
2025-09-26 05:51:02,136 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0133 | Val rms_score: 0.4028
2025-09-26 05:51:07,646 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0135 | Val rms_score: 0.3961
2025-09-26 05:51:15,145 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0139 | Val rms_score: 0.4029
2025-09-26 05:51:22,116 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0131 | Val rms_score: 0.4003
2025-09-26 05:51:29,611 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0128 | Val rms_score: 0.3986
2025-09-26 05:51:34,921 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0125 | Val rms_score: 0.3981
2025-09-26 05:51:43,048 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0129 | Val rms_score: 0.3960
2025-09-26 05:51:50,967 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0115 | Val rms_score: 0.3987
2025-09-26 05:51:51,528 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4337
2025-09-26 05:51:51,960 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_h at 2025-09-26_05-51-51
2025-09-26 05:51:59,124 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8472 | Val rms_score: 0.4675
2025-09-26 05:51:59,124 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
2025-09-26 05:51:59,930 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4675
2025-09-26 05:52:04,872 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6797 | Val rms_score: 0.4478
2025-09-26 05:52:05,072 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 05:52:05,691 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4478
2025-09-26 05:52:13,800 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4606 | Val rms_score: 0.4196
2025-09-26 05:52:14,035 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 162
2025-09-26 05:52:14,691 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4196
2025-09-26 05:52:22,247 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3320 | Val rms_score: 0.4601
2025-09-26 05:52:30,040 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2315 | Val rms_score: 0.4202
2025-09-26 05:52:35,653 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1823 | Val rms_score: 0.4331
2025-09-26 05:52:43,779 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1325 | Val rms_score: 0.4437
2025-09-26 05:52:50,958 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1089 | Val rms_score: 0.4213
2025-09-26 05:52:58,548 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0851 | Val rms_score: 0.4293
2025-09-26 05:53:03,869 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0703 | Val rms_score: 0.4293
2025-09-26 05:53:11,701 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0605 | Val rms_score: 0.4261
2025-09-26 05:53:20,016 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0602 | Val rms_score: 0.4263
2025-09-26 05:53:27,704 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1445 | Val rms_score: 0.4157
2025-09-26 05:53:27,887 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 702
2025-09-26 05:53:28,657 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.4157
2025-09-26 05:53:33,986 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0593 | Val rms_score: 0.4233
2025-09-26 05:53:41,655 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0486 | Val rms_score: 0.4143
2025-09-26 05:53:41,861 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 810
2025-09-26 05:53:42,529 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.4143
2025-09-26 05:53:50,125 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0469 | Val rms_score: 0.4208
2025-09-26 05:53:58,171 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0391 | Val rms_score: 0.4204
2025-09-26 05:54:02,988 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0382 | Val rms_score: 0.4152
2025-09-26 05:54:11,328 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0373 | Val rms_score: 0.4085
2025-09-26 05:54:11,532 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1026
2025-09-26 05:54:12,162 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.4085
2025-09-26 05:54:20,314 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0362 | Val rms_score: 0.4241
2025-09-26 05:54:27,878 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0322 | Val rms_score: 0.4237
2025-09-26 05:54:33,410 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0323 | Val rms_score: 0.4272
2025-09-26 05:54:40,718 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0378 | Val rms_score: 0.4071
2025-09-26 05:54:40,889 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1242
2025-09-26 05:54:41,543 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4071
2025-09-26 05:54:49,625 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0310 | Val rms_score: 0.4115
2025-09-26 05:54:58,228 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0308 | Val rms_score: 0.4042
2025-09-26 05:54:58,436 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1350
2025-09-26 05:54:59,096 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.4042
2025-09-26 05:55:04,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0271 | Val rms_score: 0.4125
2025-09-26 05:55:13,364 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0268 | Val rms_score: 0.4105
2025-09-26 05:55:21,359 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0267 | Val rms_score: 0.4089
2025-09-26 05:55:29,196 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0272 | Val rms_score: 0.4040
2025-09-26 05:55:29,409 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1566
2025-09-26 05:55:30,265 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 29 with val rms_score: 0.4040
2025-09-26 05:55:35,433 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0256 | Val rms_score: 0.4136
2025-09-26 05:55:43,164 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0234 | Val rms_score: 0.4101
2025-09-26 05:55:51,523 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0264 | Val rms_score: 0.4044
2025-09-26 05:55:59,266 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0246 | Val rms_score: 0.4100
2025-09-26 05:56:04,439 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0248 | Val rms_score: 0.4094
2025-09-26 05:56:11,901 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0230 | Val rms_score: 0.4043
2025-09-26 05:56:19,387 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0238 | Val rms_score: 0.4098
2025-09-26 05:56:27,464 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0246 | Val rms_score: 0.3994
2025-09-26 05:56:27,638 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1998
2025-09-26 05:56:28,284 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 37 with val rms_score: 0.3994
2025-09-26 05:56:34,983 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0251 | Val rms_score: 0.4112
2025-09-26 05:56:43,019 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0208 | Val rms_score: 0.4101
2025-09-26 05:56:50,734 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0214 | Val rms_score: 0.4112
2025-09-26 05:56:58,476 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0222 | Val rms_score: 0.4053
2025-09-26 05:57:04,267 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0216 | Val rms_score: 0.4048
2025-09-26 05:57:11,943 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0210 | Val rms_score: 0.4055
2025-09-26 05:57:19,905 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0205 | Val rms_score: 0.4073
2025-09-26 05:57:27,488 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0216 | Val rms_score: 0.3994
2025-09-26 05:57:27,671 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2430
2025-09-26 05:57:28,326 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 45 with val rms_score: 0.3994
2025-09-26 05:57:33,579 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0200 | Val rms_score: 0.4006
2025-09-26 05:57:41,880 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0190 | Val rms_score: 0.4041
2025-09-26 05:57:49,215 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0179 | Val rms_score: 0.4042
2025-09-26 05:57:56,528 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0197 | Val rms_score: 0.4025
2025-09-26 05:58:01,364 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0197 | Val rms_score: 0.4007
2025-09-26 05:58:09,213 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0174 | Val rms_score: 0.4055
2025-09-26 05:58:17,420 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0179 | Val rms_score: 0.3970
2025-09-26 05:58:17,624 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2808
2025-09-26 05:58:18,358 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val rms_score: 0.3970
2025-09-26 05:58:26,297 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0190 | Val rms_score: 0.4041
2025-09-26 05:58:32,014 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0172 | Val rms_score: 0.4022
2025-09-26 05:58:39,661 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0191 | Val rms_score: 0.4048
2025-09-26 05:58:48,595 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0181 | Val rms_score: 0.4007
2025-09-26 05:58:57,075 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0171 | Val rms_score: 0.4005
2025-09-26 05:59:02,363 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0173 | Val rms_score: 0.4060
2025-09-26 05:59:10,049 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0163 | Val rms_score: 0.4005
2025-09-26 05:59:18,341 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0173 | Val rms_score: 0.4061
2025-09-26 05:59:26,724 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0169 | Val rms_score: 0.4035
2025-09-26 05:59:32,388 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0160 | Val rms_score: 0.4062
2025-09-26 05:59:39,854 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0181 | Val rms_score: 0.4020
2025-09-26 05:59:47,011 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0171 | Val rms_score: 0.3996
2025-09-26 05:59:54,512 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0157 | Val rms_score: 0.4006
2025-09-26 05:59:59,767 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0170 | Val rms_score: 0.4037
2025-09-26 06:00:07,984 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0157 | Val rms_score: 0.4054
2025-09-26 06:00:15,443 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0159 | Val rms_score: 0.3992
2025-09-26 06:00:23,191 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0153 | Val rms_score: 0.4039
2025-09-26 06:00:28,361 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0151 | Val rms_score: 0.3994
2025-09-26 06:00:36,173 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0148 | Val rms_score: 0.4041
2025-09-26 06:00:44,685 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0160 | Val rms_score: 0.4022
2025-09-26 06:00:52,069 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0150 | Val rms_score: 0.3984
2025-09-26 06:00:57,281 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0161 | Val rms_score: 0.4042
2025-09-26 06:01:06,247 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0155 | Val rms_score: 0.4000
2025-09-26 06:01:14,025 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0161 | Val rms_score: 0.3986
2025-09-26 06:01:22,436 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0153 | Val rms_score: 0.4067
2025-09-26 06:01:27,690 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0177 | Val rms_score: 0.4063
2025-09-26 06:01:35,231 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0153 | Val rms_score: 0.4015
2025-09-26 06:01:42,715 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0146 | Val rms_score: 0.4000
2025-09-26 06:01:50,867 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0145 | Val rms_score: 0.3955
2025-09-26 06:01:51,349 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4374
2025-09-26 06:01:52,023 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 81 with val rms_score: 0.3955
2025-09-26 06:01:57,498 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0129 | Val rms_score: 0.3982
2025-09-26 06:02:05,554 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0140 | Val rms_score: 0.4010
2025-09-26 06:02:13,563 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0141 | Val rms_score: 0.3997
2025-09-26 06:02:20,874 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0156 | Val rms_score: 0.3963
2025-09-26 06:02:25,975 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0149 | Val rms_score: 0.4008
2025-09-26 06:02:34,377 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0132 | Val rms_score: 0.3998
2025-09-26 06:02:42,000 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0131 | Val rms_score: 0.3947
2025-09-26 06:02:42,179 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 4752
2025-09-26 06:02:42,933 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 88 with val rms_score: 0.3947
2025-09-26 06:02:50,954 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0108 | Val rms_score: 0.4025
2025-09-26 06:02:56,082 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0143 | Val rms_score: 0.3955
2025-09-26 06:03:03,977 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0135 | Val rms_score: 0.3973
2025-09-26 06:03:11,925 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0137 | Val rms_score: 0.3986
2025-09-26 06:03:20,877 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0135 | Val rms_score: 0.4001
2025-09-26 06:03:26,306 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0128 | Val rms_score: 0.4016
2025-09-26 06:03:34,019 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0140 | Val rms_score: 0.3959
2025-09-26 06:03:41,011 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0132 | Val rms_score: 0.3990
2025-09-26 06:03:48,714 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0130 | Val rms_score: 0.4011
2025-09-26 06:03:55,970 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0135 | Val rms_score: 0.4005
2025-09-26 06:04:00,929 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0135 | Val rms_score: 0.4033
2025-09-26 06:04:08,849 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0130 | Val rms_score: 0.4022
2025-09-26 06:04:09,519 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4398
2025-09-26 06:04:09,910 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_h at 2025-09-26_06-04-09
2025-09-26 06:04:16,713 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8287 | Val rms_score: 0.4521
2025-09-26 06:04:16,713 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 54
2025-09-26 06:04:17,494 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4521
2025-09-26 06:04:25,626 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5898 | Val rms_score: 0.4316
2025-09-26 06:04:25,821 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 06:04:26,463 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4316
2025-09-26 06:04:31,324 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4537 | Val rms_score: 0.4366
2025-09-26 06:04:38,990 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3105 | Val rms_score: 0.4170
2025-09-26 06:04:39,228 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 216
2025-09-26 06:04:40,057 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4170
2025-09-26 06:04:47,562 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2326 | Val rms_score: 0.4174
2025-09-26 06:04:55,146 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1693 | Val rms_score: 0.4257
2025-09-26 06:05:00,817 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1412 | Val rms_score: 0.4109
2025-09-26 06:05:01,026 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 378
2025-09-26 06:05:01,670 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4109
2025-09-26 06:05:09,929 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1104 | Val rms_score: 0.4338
2025-09-26 06:05:17,389 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0920 | Val rms_score: 0.4280
2025-09-26 06:05:24,851 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0723 | Val rms_score: 0.4176
2025-09-26 06:05:30,434 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0671 | Val rms_score: 0.4193
2025-09-26 06:05:38,633 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0534 | Val rms_score: 0.4160
2025-09-26 06:05:46,221 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0552 | Val rms_score: 0.4113
2025-09-26 06:05:54,188 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0469 | Val rms_score: 0.4134
2025-09-26 06:05:59,063 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0461 | Val rms_score: 0.4207
2025-09-26 06:06:06,726 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0414 | Val rms_score: 0.4067
2025-09-26 06:06:07,212 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 864
2025-09-26 06:06:07,852 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.4067
2025-09-26 06:06:14,914 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0399 | Val rms_score: 0.4190
2025-09-26 06:06:22,662 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0411 | Val rms_score: 0.4093
2025-09-26 06:06:29,262 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0397 | Val rms_score: 0.4113
2025-09-26 06:06:36,755 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0353 | Val rms_score: 0.4096
2025-09-26 06:06:44,932 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0352 | Val rms_score: 0.4177
2025-09-26 06:06:53,315 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0312 | Val rms_score: 0.4128
2025-09-26 06:06:58,392 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0322 | Val rms_score: 0.4246
2025-09-26 06:07:05,964 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0308 | Val rms_score: 0.4115
2025-09-26 06:07:13,609 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0303 | Val rms_score: 0.4116
2025-09-26 06:07:21,094 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0261 | Val rms_score: 0.4126
2025-09-26 06:07:26,943 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0292 | Val rms_score: 0.4080
2025-09-26 06:07:34,568 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0270 | Val rms_score: 0.4082
2025-09-26 06:07:42,299 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0288 | Val rms_score: 0.4034
2025-09-26 06:07:42,485 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1566
2025-09-26 06:07:43,148 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 29 with val rms_score: 0.4034
2025-09-26 06:07:50,451 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0258 | Val rms_score: 0.4208
2025-09-26 06:07:55,459 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0272 | Val rms_score: 0.4010
2025-09-26 06:07:55,986 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 1674
2025-09-26 06:07:56,634 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.4010
2025-09-26 06:08:04,450 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0268 | Val rms_score: 0.4041
2025-09-26 06:08:12,253 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0253 | Val rms_score: 0.4048
2025-09-26 06:08:19,626 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0251 | Val rms_score: 0.4134
2025-09-26 06:08:24,359 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0262 | Val rms_score: 0.4061
2025-09-26 06:08:31,689 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0240 | Val rms_score: 0.4163
|
| 306 |
+
2025-09-26 06:08:40,151 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0220 | Val rms_score: 0.4042
|
| 307 |
+
2025-09-26 06:08:49,143 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0215 | Val rms_score: 0.4052
|
| 308 |
+
2025-09-26 06:08:54,749 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0173 | Val rms_score: 0.4018
|
| 309 |
+
2025-09-26 06:09:02,869 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0207 | Val rms_score: 0.4052
|
| 310 |
+
2025-09-26 06:09:10,560 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0220 | Val rms_score: 0.4065
|
| 311 |
+
2025-09-26 06:09:18,817 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0208 | Val rms_score: 0.4034
|
| 312 |
+
2025-09-26 06:09:24,421 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0218 | Val rms_score: 0.4021
|
| 313 |
+
2025-09-26 06:09:32,384 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0207 | Val rms_score: 0.4071
|
| 314 |
+
2025-09-26 06:09:40,580 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0184 | Val rms_score: 0.4113
|
| 315 |
+
2025-09-26 06:09:48,904 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0214 | Val rms_score: 0.4019
|
| 316 |
+
2025-09-26 06:09:55,035 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0189 | Val rms_score: 0.4126
|
| 317 |
+
2025-09-26 06:10:03,089 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0200 | Val rms_score: 0.4002
|
| 318 |
+
2025-09-26 06:10:03,263 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2592
|
| 319 |
+
2025-09-26 06:10:03,934 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.4002
|
| 320 |
+
2025-09-26 06:10:12,016 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0205 | Val rms_score: 0.4069
|
| 321 |
+
2025-09-26 06:10:19,391 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0192 | Val rms_score: 0.4057
|
| 322 |
+
2025-09-26 06:10:25,122 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0191 | Val rms_score: 0.4051
|
| 323 |
+
2025-09-26 06:10:33,357 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0159 | Val rms_score: 0.4072
|
| 324 |
+
2025-09-26 06:10:41,432 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0169 | Val rms_score: 0.3990
|
| 325 |
+
2025-09-26 06:10:41,657 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 2862
|
| 326 |
+
2025-09-26 06:10:42,346 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 53 with val rms_score: 0.3990
|
| 327 |
+
2025-09-26 06:10:50,192 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0183 | Val rms_score: 0.4087
|
| 328 |
+
2025-09-26 06:10:55,214 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0183 | Val rms_score: 0.4114
|
| 329 |
+
2025-09-26 06:11:04,422 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0191 | Val rms_score: 0.4073
|
| 330 |
+
2025-09-26 06:11:12,856 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0194 | Val rms_score: 0.4088
|
| 331 |
+
2025-09-26 06:11:20,857 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0175 | Val rms_score: 0.4060
|
| 332 |
+
2025-09-26 06:11:25,702 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0187 | Val rms_score: 0.4002
|
| 333 |
+
2025-09-26 06:11:33,286 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0153 | Val rms_score: 0.4004
|
| 334 |
+
2025-09-26 06:11:40,792 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0168 | Val rms_score: 0.4032
|
| 335 |
+
2025-09-26 06:11:48,811 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0171 | Val rms_score: 0.4054
|
| 336 |
+
2025-09-26 06:11:53,746 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0206 | Val rms_score: 0.4041
|
| 337 |
+
2025-09-26 06:12:00,988 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0159 | Val rms_score: 0.4024
|
| 338 |
+
2025-09-26 06:12:08,934 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0164 | Val rms_score: 0.4122
|
| 339 |
+
2025-09-26 06:12:17,185 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0157 | Val rms_score: 0.4090
|
| 340 |
+
2025-09-26 06:12:22,956 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0176 | Val rms_score: 0.4087
|
| 341 |
+
2025-09-26 06:12:30,375 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0165 | Val rms_score: 0.3979
|
| 342 |
+
2025-09-26 06:12:30,560 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Global step of best model: 3672
|
| 343 |
+
2025-09-26 06:12:31,348 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Best model saved at epoch 68 with val rms_score: 0.3979
|
| 344 |
+
2025-09-26 06:12:39,973 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0165 | Val rms_score: 0.4014
|
| 345 |
+
2025-09-26 06:12:48,041 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0145 | Val rms_score: 0.3980
|
| 346 |
+
2025-09-26 06:12:53,462 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0159 | Val rms_score: 0.4066
|
| 347 |
+
2025-09-26 06:13:01,797 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0153 | Val rms_score: 0.4072
|
| 348 |
+
2025-09-26 06:13:09,773 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0150 | Val rms_score: 0.3995
|
| 349 |
+
2025-09-26 06:13:17,928 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0138 | Val rms_score: 0.4047
|
| 350 |
+
2025-09-26 06:13:24,924 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0153 | Val rms_score: 0.4022
|
| 351 |
+
2025-09-26 06:13:32,938 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0156 | Val rms_score: 0.4048
|
| 352 |
+
2025-09-26 06:13:42,120 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0140 | Val rms_score: 0.3979
|
| 353 |
+
2025-09-26 06:13:50,618 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0136 | Val rms_score: 0.4000
|
| 354 |
+
2025-09-26 06:13:57,062 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0140 | Val rms_score: 0.4073
|
| 355 |
+
2025-09-26 06:14:05,732 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0149 | Val rms_score: 0.4029
|
| 356 |
+
2025-09-26 06:14:13,919 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0143 | Val rms_score: 0.4039
|
| 357 |
+
2025-09-26 06:14:20,093 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0149 | Val rms_score: 0.4059
|
| 358 |
+
2025-09-26 06:14:28,315 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0142 | Val rms_score: 0.4007
|
| 359 |
+
2025-09-26 06:14:36,247 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0153 | Val rms_score: 0.4023
|
| 360 |
+
2025-09-26 06:14:44,137 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0138 | Val rms_score: 0.3987
|
| 361 |
+
2025-09-26 06:14:49,321 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0149 | Val rms_score: 0.4022
|
| 362 |
+
2025-09-26 06:14:57,461 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0150 | Val rms_score: 0.4066
|
| 363 |
+
2025-09-26 06:15:05,385 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0136 | Val rms_score: 0.4003
|
| 364 |
+
2025-09-26 06:15:12,829 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0128 | Val rms_score: 0.4012
|
| 365 |
+
2025-09-26 06:15:20,446 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0139 | Val rms_score: 0.4022
|
| 366 |
+
2025-09-26 06:15:25,484 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0137 | Val rms_score: 0.4040
|
| 367 |
+
2025-09-26 06:15:34,024 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0149 | Val rms_score: 0.4010
|
| 368 |
+
2025-09-26 06:15:43,170 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0128 | Val rms_score: 0.4030
|
| 369 |
+
2025-09-26 06:15:51,232 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0135 | Val rms_score: 0.4024
|
| 370 |
+
2025-09-26 06:15:55,938 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0139 | Val rms_score: 0.3998
|
| 371 |
+
2025-09-26 06:16:03,007 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0126 | Val rms_score: 0.3995
|
| 372 |
+
2025-09-26 06:16:11,714 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0129 | Val rms_score: 0.4033
|
| 373 |
+
2025-09-26 06:16:19,867 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0132 | Val rms_score: 0.4043
|
| 374 |
+
2025-09-26 06:16:24,806 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0128 | Val rms_score: 0.3981
|
| 375 |
+
2025-09-26 06:16:32,115 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0131 | Val rms_score: 0.3982
|
| 376 |
+
2025-09-26 06:16:32,946 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Test rms_score: 0.4388
|
| 377 |
+
2025-09-26 06:16:33,355 - logs_modchembert_adme_microsom_stab_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4375, Std Dev: 0.0027
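The "Final Triplicate Test Results" line above aggregates the three per-run test scores into an average and standard deviation. A minimal sketch of that aggregation follows; the three score values here are illustrative (only one run's `Test rms_score`, 0.4388, appears in this log excerpt), and whether the reported Std Dev is the population or sample statistic is an assumption:

```python
import statistics

def summarize_triplicate(test_scores):
    """Aggregate per-run test rms_score values into the avg/std summary line.

    `test_scores` is a hypothetical list of the three per-run test scores.
    Population std (pstdev) is assumed; the log does not say which is used.
    """
    avg = statistics.mean(test_scores)
    std = statistics.pstdev(test_scores)
    return round(avg, 4), round(std, 4)

# Illustrative values, not the actual three run scores from this benchmark:
avg, std = summarize_triplicate([0.4388, 0.4390, 0.4350])
print(f"Final Triplicate Test Results — Avg rms_score: {avg}, Std Dev: {std}")
```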
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_microsom_stab_r_epochs100_batch_size32_20250926_061633.log
ADDED
@@ -0,0 +1,349 @@
| 1 |
+
2025-09-26 06:16:33,357 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_microsom_stab_r
|
| 2 |
+
2025-09-26 06:16:33,357 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - dataset: adme_microsom_stab_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 06:16:33,360 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_microsom_stab_r at 2025-09-26_06-16-33
|
| 4 |
+
2025-09-26 06:16:41,849 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7647 | Val rms_score: 0.5541
|
| 5 |
+
2025-09-26 06:16:41,850 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
|
| 6 |
+
2025-09-26 06:16:42,968 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5541
|
| 7 |
+
2025-09-26 06:16:50,585 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5347 | Val rms_score: 0.5038
|
| 8 |
+
2025-09-26 06:16:50,783 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
|
| 9 |
+
2025-09-26 06:16:51,462 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5038
|
| 10 |
+
2025-09-26 06:17:01,412 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.2988 | Val rms_score: 0.5167
|
| 11 |
+
2025-09-26 06:17:11,411 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2757 | Val rms_score: 0.4959
|
| 12 |
+
2025-09-26 06:17:11,629 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
|
| 13 |
+
2025-09-26 06:17:12,304 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4959
|
| 14 |
+
2025-09-26 06:17:19,549 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2188 | Val rms_score: 0.5213
|
| 15 |
+
2025-09-26 06:17:28,813 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1377 | Val rms_score: 0.5138
|
| 16 |
+
2025-09-26 06:17:39,321 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1080 | Val rms_score: 0.5198
|
| 17 |
+
2025-09-26 06:17:49,165 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0952 | Val rms_score: 0.4963
|
| 18 |
+
2025-09-26 06:17:56,546 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0866 | Val rms_score: 0.5054
|
| 19 |
+
2025-09-26 06:18:05,932 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0657 | Val rms_score: 0.5177
|
| 20 |
+
2025-09-26 06:18:15,310 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0618 | Val rms_score: 0.5260
|
| 21 |
+
2025-09-26 06:18:22,320 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0583 | Val rms_score: 0.5093
|
| 22 |
+
2025-09-26 06:18:31,549 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0466 | Val rms_score: 0.5090
|
| 23 |
+
2025-09-26 06:18:40,841 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0412 | Val rms_score: 0.4982
|
| 24 |
+
2025-09-26 06:18:48,696 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0453 | Val rms_score: 0.5126
|
| 25 |
+
2025-09-26 06:18:58,329 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0352 | Val rms_score: 0.5214
|
| 26 |
+
2025-09-26 06:19:08,284 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0357 | Val rms_score: 0.5077
|
| 27 |
+
2025-09-26 06:19:17,345 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0350 | Val rms_score: 0.5027
|
| 28 |
+
2025-09-26 06:19:23,943 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0312 | Val rms_score: 0.5140
|
| 29 |
+
2025-09-26 06:19:33,606 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0309 | Val rms_score: 0.5057
|
| 30 |
+
2025-09-26 06:19:43,018 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0352 | Val rms_score: 0.5006
|
| 31 |
+
2025-09-26 06:19:49,950 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0287 | Val rms_score: 0.5047
|
| 32 |
+
2025-09-26 06:19:59,056 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0255 | Val rms_score: 0.5060
|
| 33 |
+
2025-09-26 06:20:08,151 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0273 | Val rms_score: 0.5130
|
| 34 |
+
2025-09-26 06:20:17,903 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0249 | Val rms_score: 0.5019
|
| 35 |
+
2025-09-26 06:20:26,476 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0255 | Val rms_score: 0.5099
|
| 36 |
+
2025-09-26 06:20:36,697 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0241 | Val rms_score: 0.5079
|
| 37 |
+
2025-09-26 06:20:46,199 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0208 | Val rms_score: 0.5024
|
| 38 |
+
2025-09-26 06:20:54,242 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0219 | Val rms_score: 0.5098
|
| 39 |
+
2025-09-26 06:21:05,751 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0241 | Val rms_score: 0.5050
|
| 40 |
+
2025-09-26 06:21:15,977 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0226 | Val rms_score: 0.5045
|
| 41 |
+
2025-09-26 06:21:24,180 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0217 | Val rms_score: 0.4961
|
| 42 |
+
2025-09-26 06:21:34,140 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0213 | Val rms_score: 0.5098
|
| 43 |
+
2025-09-26 06:21:44,655 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0265 | Val rms_score: 0.4947
|
| 44 |
+
2025-09-26 06:21:44,840 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 2312
|
| 45 |
+
2025-09-26 06:21:45,633 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 34 with val rms_score: 0.4947
|
| 46 |
+
2025-09-26 06:21:53,237 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0217 | Val rms_score: 0.5080
|
| 47 |
+
2025-09-26 06:22:03,004 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0201 | Val rms_score: 0.5086
|
| 48 |
+
2025-09-26 06:22:12,922 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0192 | Val rms_score: 0.4995
|
| 49 |
+
2025-09-26 06:22:19,776 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0193 | Val rms_score: 0.5052
|
| 50 |
+
2025-09-26 06:22:29,080 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0197 | Val rms_score: 0.5048
|
| 51 |
+
2025-09-26 06:22:38,396 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0215 | Val rms_score: 0.5004
|
| 52 |
+
2025-09-26 06:22:47,220 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0177 | Val rms_score: 0.5062
|
| 53 |
+
2025-09-26 06:22:54,989 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0179 | Val rms_score: 0.5199
|
| 54 |
+
2025-09-26 06:23:04,667 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0187 | Val rms_score: 0.5069
|
| 55 |
+
2025-09-26 06:23:13,840 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0167 | Val rms_score: 0.5051
|
| 56 |
+
2025-09-26 06:23:22,373 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0167 | Val rms_score: 0.5043
|
| 57 |
+
2025-09-26 06:23:32,010 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0173 | Val rms_score: 0.5056
|
| 58 |
+
2025-09-26 06:23:42,509 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0161 | Val rms_score: 0.5071
|
| 59 |
+
2025-09-26 06:23:49,724 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0157 | Val rms_score: 0.5027
|
| 60 |
+
2025-09-26 06:23:59,798 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0164 | Val rms_score: 0.5047
|
| 61 |
+
2025-09-26 06:24:09,543 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0164 | Val rms_score: 0.5156
|
| 62 |
+
2025-09-26 06:24:16,375 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0176 | Val rms_score: 0.5072
|
| 63 |
+
2025-09-26 06:24:26,842 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0169 | Val rms_score: 0.5091
|
| 64 |
+
2025-09-26 06:24:36,136 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0164 | Val rms_score: 0.5098
|
| 65 |
+
2025-09-26 06:24:45,830 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0167 | Val rms_score: 0.5027
|
| 66 |
+
2025-09-26 06:24:53,122 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0167 | Val rms_score: 0.5133
|
| 67 |
+
2025-09-26 06:25:02,116 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0155 | Val rms_score: 0.5063
|
| 68 |
+
2025-09-26 06:25:11,381 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0156 | Val rms_score: 0.5079
|
| 69 |
+
2025-09-26 06:25:17,940 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0161 | Val rms_score: 0.5039
|
| 70 |
+
2025-09-26 06:25:27,944 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0165 | Val rms_score: 0.5046
|
| 71 |
+
2025-09-26 06:25:37,272 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0151 | Val rms_score: 0.5025
|
| 72 |
+
2025-09-26 06:25:46,612 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0150 | Val rms_score: 0.5137
|
| 73 |
+
2025-09-26 06:25:54,094 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0157 | Val rms_score: 0.5033
|
| 74 |
+
2025-09-26 06:26:05,863 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0146 | Val rms_score: 0.5009
|
| 75 |
+
2025-09-26 06:26:15,126 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0158 | Val rms_score: 0.5048
|
| 76 |
+
2025-09-26 06:26:26,804 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0146 | Val rms_score: 0.5037
|
| 77 |
+
2025-09-26 06:26:35,844 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0137 | Val rms_score: 0.5065
|
| 78 |
+
2025-09-26 06:26:45,791 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0139 | Val rms_score: 0.5048
|
| 79 |
+
2025-09-26 06:26:52,812 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0124 | Val rms_score: 0.5068
|
| 80 |
+
2025-09-26 06:27:01,656 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0137 | Val rms_score: 0.5035
|
| 81 |
+
2025-09-26 06:27:10,635 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0132 | Val rms_score: 0.5052
|
| 82 |
+
2025-09-26 06:27:17,151 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0131 | Val rms_score: 0.5043
|
| 83 |
+
2025-09-26 06:27:26,982 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0137 | Val rms_score: 0.5050
|
| 84 |
+
2025-09-26 06:27:36,278 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0132 | Val rms_score: 0.4980
|
| 85 |
+
2025-09-26 06:27:43,889 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0137 | Val rms_score: 0.5002
|
| 86 |
+
2025-09-26 06:27:53,272 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0128 | Val rms_score: 0.5113
|
| 87 |
+
2025-09-26 06:28:02,160 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0133 | Val rms_score: 0.5015
|
| 88 |
+
2025-09-26 06:28:11,800 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0131 | Val rms_score: 0.5034
|
| 89 |
+
2025-09-26 06:28:18,100 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0111 | Val rms_score: 0.5048
|
| 90 |
+
2025-09-26 06:28:27,193 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0125 | Val rms_score: 0.4991
|
| 91 |
+
2025-09-26 06:28:36,200 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0126 | Val rms_score: 0.5022
|
| 92 |
+
2025-09-26 06:28:42,803 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0143 | Val rms_score: 0.5035
|
| 93 |
+
2025-09-26 06:28:52,346 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0136 | Val rms_score: 0.5050
|
| 94 |
+
2025-09-26 06:29:01,122 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0128 | Val rms_score: 0.5014
|
| 95 |
+
2025-09-26 06:29:10,853 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0120 | Val rms_score: 0.5043
|
| 96 |
+
2025-09-26 06:29:17,193 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0128 | Val rms_score: 0.5037
|
| 97 |
+
2025-09-26 06:29:26,314 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0118 | Val rms_score: 0.5047
|
| 98 |
+
2025-09-26 06:29:35,516 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0125 | Val rms_score: 0.5044
|
| 99 |
+
2025-09-26 06:29:44,535 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0122 | Val rms_score: 0.5048
|
| 100 |
+
2025-09-26 06:29:52,887 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0122 | Val rms_score: 0.5014
|
| 101 |
+
2025-09-26 06:30:01,817 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0119 | Val rms_score: 0.5062
|
| 102 |
+
2025-09-26 06:30:11,083 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0120 | Val rms_score: 0.5049
|
| 103 |
+
2025-09-26 06:30:18,353 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0125 | Val rms_score: 0.5019
|
| 104 |
+
2025-09-26 06:30:27,349 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0115 | Val rms_score: 0.5049
|
| 105 |
+
2025-09-26 06:30:36,760 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0122 | Val rms_score: 0.5066
|
| 106 |
+
2025-09-26 06:30:43,733 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0121 | Val rms_score: 0.5003
|
| 107 |
+
2025-09-26 06:30:52,546 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0123 | Val rms_score: 0.5015
|
| 108 |
+
2025-09-26 06:31:01,530 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0114 | Val rms_score: 0.5069
|
| 109 |
+
2025-09-26 06:31:10,643 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0121 | Val rms_score: 0.5045
2025-09-26 06:31:17,055 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0118 | Val rms_score: 0.5056
2025-09-26 06:31:26,104 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0115 | Val rms_score: 0.5011
2025-09-26 06:31:27,004 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4576
2025-09-26 06:31:27,371 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_microsom_stab_r at 2025-09-26_06-31-27
2025-09-26 06:31:34,693 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7463 | Val rms_score: 0.5249
2025-09-26 06:31:34,694 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
2025-09-26 06:31:35,379 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5249
2025-09-26 06:31:42,539 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5174 | Val rms_score: 0.5051
2025-09-26 06:31:42,755 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
2025-09-26 06:31:43,419 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5051
2025-09-26 06:31:52,931 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3203 | Val rms_score: 0.5169
2025-09-26 06:32:02,088 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2739 | Val rms_score: 0.5032
2025-09-26 06:32:02,320 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
2025-09-26 06:32:03,006 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5032
2025-09-26 06:32:12,148 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2000 | Val rms_score: 0.5127
2025-09-26 06:32:19,214 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1504 | Val rms_score: 0.5159
2025-09-26 06:32:28,692 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1186 | Val rms_score: 0.5097
2025-09-26 06:32:37,576 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0916 | Val rms_score: 0.5039
2025-09-26 06:32:44,422 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0898 | Val rms_score: 0.5136
2025-09-26 06:32:53,494 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0694 | Val rms_score: 0.5173
2025-09-26 06:33:02,443 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0635 | Val rms_score: 0.5034
2025-09-26 06:33:11,787 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0552 | Val rms_score: 0.5231
2025-09-26 06:33:18,411 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0457 | Val rms_score: 0.5130
2025-09-26 06:33:27,391 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0436 | Val rms_score: 0.5073
2025-09-26 06:33:38,297 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0350 | Val rms_score: 0.5047
2025-09-26 06:33:45,113 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0386 | Val rms_score: 0.5212
2025-09-26 06:33:55,033 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0368 | Val rms_score: 0.5115
2025-09-26 06:34:03,900 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0316 | Val rms_score: 0.5011
2025-09-26 06:34:04,075 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 1224
2025-09-26 06:34:04,731 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.5011
2025-09-26 06:34:11,665 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0340 | Val rms_score: 0.5081
2025-09-26 06:34:21,702 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0320 | Val rms_score: 0.5035
2025-09-26 06:34:31,096 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0332 | Val rms_score: 0.5073
2025-09-26 06:34:41,412 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0294 | Val rms_score: 0.5166
2025-09-26 06:34:48,001 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0284 | Val rms_score: 0.4988
2025-09-26 06:34:48,183 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 1564
2025-09-26 06:34:48,950 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 23 with val rms_score: 0.4988
2025-09-26 06:34:58,209 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0289 | Val rms_score: 0.5038
2025-09-26 06:35:07,767 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0324 | Val rms_score: 0.5005
2025-09-26 06:35:14,714 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0252 | Val rms_score: 0.5051
2025-09-26 06:35:24,879 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0250 | Val rms_score: 0.5113
2025-09-26 06:35:34,489 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0199 | Val rms_score: 0.5072
2025-09-26 06:35:41,669 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0218 | Val rms_score: 0.5003
2025-09-26 06:35:52,402 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0224 | Val rms_score: 0.5016
2025-09-26 06:36:01,609 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0223 | Val rms_score: 0.5082
2025-09-26 06:36:09,395 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0207 | Val rms_score: 0.5080
2025-09-26 06:36:18,968 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0225 | Val rms_score: 0.4985
2025-09-26 06:36:19,138 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 2244
2025-09-26 06:36:19,832 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 33 with val rms_score: 0.4985
2025-09-26 06:36:29,580 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0213 | Val rms_score: 0.5078
2025-09-26 06:36:38,653 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0209 | Val rms_score: 0.5078
2025-09-26 06:36:45,550 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0198 | Val rms_score: 0.5047
2025-09-26 06:36:55,722 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0214 | Val rms_score: 0.4997
2025-09-26 06:37:04,827 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0200 | Val rms_score: 0.5029
2025-09-26 06:37:11,741 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0188 | Val rms_score: 0.5024
2025-09-26 06:37:21,305 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0206 | Val rms_score: 0.5043
2025-09-26 06:37:30,265 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0187 | Val rms_score: 0.5049
2025-09-26 06:37:39,756 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0170 | Val rms_score: 0.5021
2025-09-26 06:37:46,450 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0170 | Val rms_score: 0.5038
2025-09-26 06:37:55,540 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0186 | Val rms_score: 0.5073
2025-09-26 06:38:05,868 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0184 | Val rms_score: 0.5050
2025-09-26 06:38:12,564 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0169 | Val rms_score: 0.5089
2025-09-26 06:38:21,915 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0180 | Val rms_score: 0.5040
2025-09-26 06:38:30,580 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0170 | Val rms_score: 0.5018
2025-09-26 06:38:40,231 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0166 | Val rms_score: 0.5014
2025-09-26 06:38:47,069 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0168 | Val rms_score: 0.5017
2025-09-26 06:38:56,301 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0169 | Val rms_score: 0.5034
2025-09-26 06:39:06,201 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0166 | Val rms_score: 0.5070
2025-09-26 06:39:12,887 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0176 | Val rms_score: 0.5042
2025-09-26 06:39:21,663 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0148 | Val rms_score: 0.5024
2025-09-26 06:39:31,194 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0161 | Val rms_score: 0.4989
2025-09-26 06:39:39,754 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0135 | Val rms_score: 0.5000
2025-09-26 06:39:47,826 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0151 | Val rms_score: 0.5063
2025-09-26 06:39:57,467 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0138 | Val rms_score: 0.5051
2025-09-26 06:40:08,638 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0156 | Val rms_score: 0.4978
2025-09-26 06:40:08,812 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 4012
2025-09-26 06:40:09,497 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 59 with val rms_score: 0.4978
2025-09-26 06:40:16,526 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0154 | Val rms_score: 0.4998
2025-09-26 06:40:25,924 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0157 | Val rms_score: 0.4984
2025-09-26 06:40:35,460 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0156 | Val rms_score: 0.5026
2025-09-26 06:40:42,008 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0141 | Val rms_score: 0.4982
2025-09-26 06:40:51,422 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0142 | Val rms_score: 0.5010
2025-09-26 06:41:00,730 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0144 | Val rms_score: 0.4972
2025-09-26 06:41:00,909 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 4420
2025-09-26 06:41:01,573 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 65 with val rms_score: 0.4972
2025-09-26 06:41:08,082 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0146 | Val rms_score: 0.5028
2025-09-26 06:41:18,091 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0153 | Val rms_score: 0.4997
2025-09-26 06:41:27,026 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0155 | Val rms_score: 0.5059
2025-09-26 06:41:36,258 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0141 | Val rms_score: 0.5018
2025-09-26 06:41:43,187 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0145 | Val rms_score: 0.5021
2025-09-26 06:41:52,731 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0135 | Val rms_score: 0.5022
2025-09-26 06:42:02,929 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0138 | Val rms_score: 0.5005
2025-09-26 06:42:09,653 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0141 | Val rms_score: 0.5078
2025-09-26 06:42:20,146 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0132 | Val rms_score: 0.4988
2025-09-26 06:42:29,182 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0135 | Val rms_score: 0.4956
2025-09-26 06:42:29,361 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 5100
2025-09-26 06:42:30,012 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 75 with val rms_score: 0.4956
2025-09-26 06:42:37,547 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0142 | Val rms_score: 0.5044
2025-09-26 06:42:47,539 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0137 | Val rms_score: 0.5013
2025-09-26 06:42:56,998 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0130 | Val rms_score: 0.5018
2025-09-26 06:43:06,575 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0126 | Val rms_score: 0.5022
2025-09-26 06:43:13,210 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0125 | Val rms_score: 0.5011
2025-09-26 06:43:22,508 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0149 | Val rms_score: 0.4986
2025-09-26 06:43:32,403 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0133 | Val rms_score: 0.5001
2025-09-26 06:43:39,051 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0131 | Val rms_score: 0.5015
2025-09-26 06:43:48,556 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0119 | Val rms_score: 0.5027
2025-09-26 06:43:57,412 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0130 | Val rms_score: 0.5018
2025-09-26 06:44:07,487 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0123 | Val rms_score: 0.5051
2025-09-26 06:44:15,789 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0123 | Val rms_score: 0.5008
2025-09-26 06:44:25,970 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0125 | Val rms_score: 0.5019
2025-09-26 06:44:37,486 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0128 | Val rms_score: 0.5005
2025-09-26 06:44:44,544 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0126 | Val rms_score: 0.4984
2025-09-26 06:44:53,925 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0125 | Val rms_score: 0.5022
2025-09-26 06:45:04,031 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0121 | Val rms_score: 0.4986
2025-09-26 06:45:10,999 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0126 | Val rms_score: 0.5031
2025-09-26 06:45:20,944 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0130 | Val rms_score: 0.4982
2025-09-26 06:45:31,010 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0120 | Val rms_score: 0.4999
2025-09-26 06:45:37,687 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0125 | Val rms_score: 0.4990
2025-09-26 06:45:47,261 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0115 | Val rms_score: 0.4988
2025-09-26 06:45:56,194 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0117 | Val rms_score: 0.5026
2025-09-26 06:46:05,171 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0112 | Val rms_score: 0.5038
2025-09-26 06:46:11,485 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0115 | Val rms_score: 0.5030
2025-09-26 06:46:12,250 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4525
2025-09-26 06:46:12,645 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_microsom_stab_r at 2025-09-26_06-46-12
2025-09-26 06:46:20,312 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7132 | Val rms_score: 0.5249
2025-09-26 06:46:20,312 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 68
2025-09-26 06:46:21,170 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5249
2025-09-26 06:46:30,426 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5000 | Val rms_score: 0.5118
2025-09-26 06:46:30,703 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 136
2025-09-26 06:46:31,541 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5118
2025-09-26 06:46:38,448 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3730 | Val rms_score: 0.5079
2025-09-26 06:46:38,652 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 204
2025-09-26 06:46:39,358 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5079
2025-09-26 06:46:48,788 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3033 | Val rms_score: 0.5019
2025-09-26 06:46:48,993 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 272
2025-09-26 06:46:49,655 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.5019
2025-09-26 06:46:59,302 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2266 | Val rms_score: 0.5081
2025-09-26 06:47:06,162 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1641 | Val rms_score: 0.5223
2025-09-26 06:47:16,602 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1250 | Val rms_score: 0.5201
2025-09-26 06:47:26,117 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1044 | Val rms_score: 0.5294
2025-09-26 06:47:35,464 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0964 | Val rms_score: 0.5043
2025-09-26 06:47:42,226 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0722 | Val rms_score: 0.5003
2025-09-26 06:47:42,401 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 680
2025-09-26 06:47:43,091 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.5003
2025-09-26 06:47:52,952 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0645 | Val rms_score: 0.5071
2025-09-26 06:48:03,959 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0615 | Val rms_score: 0.5155
2025-09-26 06:48:11,832 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0551 | Val rms_score: 0.5181
2025-09-26 06:48:21,827 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0526 | Val rms_score: 0.5064
2025-09-26 06:48:32,374 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0492 | Val rms_score: 0.5101
2025-09-26 06:48:39,736 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0418 | Val rms_score: 0.5122
2025-09-26 06:48:49,848 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0377 | Val rms_score: 0.5068
2025-09-26 06:49:00,180 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0337 | Val rms_score: 0.5060
2025-09-26 06:49:07,841 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0345 | Val rms_score: 0.5029
2025-09-26 06:49:17,914 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0344 | Val rms_score: 0.5140
2025-09-26 06:49:27,835 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0293 | Val rms_score: 0.5044
2025-09-26 06:49:36,090 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0284 | Val rms_score: 0.5078
2025-09-26 06:49:46,327 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0295 | Val rms_score: 0.5091
2025-09-26 06:49:56,591 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0237 | Val rms_score: 0.5044
2025-09-26 06:50:04,184 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0261 | Val rms_score: 0.5080
2025-09-26 06:50:14,477 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0242 | Val rms_score: 0.5126
2025-09-26 06:50:25,400 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0278 | Val rms_score: 0.5121
2025-09-26 06:50:33,413 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0220 | Val rms_score: 0.5149
2025-09-26 06:50:42,790 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0257 | Val rms_score: 0.5027
2025-09-26 06:50:53,320 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0230 | Val rms_score: 0.5105
2025-09-26 06:51:02,822 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0234 | Val rms_score: 0.5081
2025-09-26 06:51:10,293 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0217 | Val rms_score: 0.5110
2025-09-26 06:51:19,671 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0207 | Val rms_score: 0.5091
2025-09-26 06:51:29,265 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0218 | Val rms_score: 0.5016
2025-09-26 06:51:36,124 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0201 | Val rms_score: 0.5034
2025-09-26 06:51:45,437 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0215 | Val rms_score: 0.5106
2025-09-26 06:51:55,505 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0200 | Val rms_score: 0.5131
2025-09-26 06:52:02,498 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0202 | Val rms_score: 0.5092
2025-09-26 06:52:12,018 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0192 | Val rms_score: 0.5077
2025-09-26 06:52:20,847 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0200 | Val rms_score: 0.5078
2025-09-26 06:52:30,469 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0195 | Val rms_score: 0.5156
2025-09-26 06:52:38,149 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0198 | Val rms_score: 0.5067
2025-09-26 06:52:47,720 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0199 | Val rms_score: 0.5140
2025-09-26 06:52:57,557 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0183 | Val rms_score: 0.5106
2025-09-26 06:53:06,252 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0171 | Val rms_score: 0.5071
2025-09-26 06:53:16,452 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0187 | Val rms_score: 0.5009
2025-09-26 06:53:27,405 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0169 | Val rms_score: 0.5103
2025-09-26 06:53:35,095 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0164 | Val rms_score: 0.5081
2025-09-26 06:53:44,997 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0194 | Val rms_score: 0.5064
2025-09-26 06:53:54,547 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0163 | Val rms_score: 0.5094
2025-09-26 06:54:01,415 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0170 | Val rms_score: 0.5066
2025-09-26 06:54:10,448 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0163 | Val rms_score: 0.4997
2025-09-26 06:54:10,644 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 3536
2025-09-26 06:54:11,538 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val rms_score: 0.4997
2025-09-26 06:54:21,850 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0161 | Val rms_score: 0.5046
2025-09-26 06:54:32,454 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0159 | Val rms_score: 0.5027
2025-09-26 06:54:39,944 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0163 | Val rms_score: 0.5106
2025-09-26 06:54:49,856 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0161 | Val rms_score: 0.5091
2025-09-26 06:54:59,528 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0154 | Val rms_score: 0.5063
2025-09-26 06:55:06,168 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0157 | Val rms_score: 0.5086
2025-09-26 06:55:16,460 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0163 | Val rms_score: 0.5113
2025-09-26 06:55:25,588 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0156 | Val rms_score: 0.5048
|
| 306 |
+
2025-09-26 06:55:32,069 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0147 | Val rms_score: 0.5110
|
| 307 |
+
2025-09-26 06:55:41,726 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0142 | Val rms_score: 0.5015
|
| 308 |
+
2025-09-26 06:55:50,934 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0148 | Val rms_score: 0.5058
|
| 309 |
+
2025-09-26 06:56:00,408 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0148 | Val rms_score: 0.5043
|
| 310 |
+
2025-09-26 06:56:07,055 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0140 | Val rms_score: 0.5040
|
| 311 |
+
2025-09-26 06:56:16,721 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0141 | Val rms_score: 0.5057
|
| 312 |
+
2025-09-26 06:56:26,767 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0147 | Val rms_score: 0.5046
|
| 313 |
+
2025-09-26 06:56:33,600 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0175 | Val rms_score: 0.4975
|
| 314 |
+
2025-09-26 06:56:33,784 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Global step of best model: 4624
|
| 315 |
+
2025-09-26 06:56:34,507 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Best model saved at epoch 68 with val rms_score: 0.4975
|
| 316 |
+
2025-09-26 06:56:43,916 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0157 | Val rms_score: 0.5039
|
| 317 |
+
2025-09-26 06:56:53,109 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0145 | Val rms_score: 0.5055
|
| 318 |
+
2025-09-26 06:57:00,211 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0134 | Val rms_score: 0.5034
|
| 319 |
+
2025-09-26 06:57:10,242 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0141 | Val rms_score: 0.5060
|
| 320 |
+
2025-09-26 06:57:20,077 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0131 | Val rms_score: 0.5032
|
| 321 |
+
2025-09-26 06:57:29,382 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0150 | Val rms_score: 0.5060
|
| 322 |
+
2025-09-26 06:57:40,819 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0146 | Val rms_score: 0.5077
|
| 323 |
+
2025-09-26 06:57:51,609 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0138 | Val rms_score: 0.5012
|
| 324 |
+
2025-09-26 06:58:01,998 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0138 | Val rms_score: 0.5019
|
| 325 |
+
2025-09-26 06:58:11,402 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0134 | Val rms_score: 0.5078
|
| 326 |
+
2025-09-26 06:58:17,815 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0129 | Val rms_score: 0.5001
|
| 327 |
+
2025-09-26 06:58:27,391 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0138 | Val rms_score: 0.4998
|
| 328 |
+
2025-09-26 06:58:37,463 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0103 | Val rms_score: 0.5059
|
| 329 |
+
2025-09-26 06:58:45,746 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0125 | Val rms_score: 0.5034
|
| 330 |
+
2025-09-26 06:58:55,352 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0130 | Val rms_score: 0.5041
|
| 331 |
+
2025-09-26 06:59:04,733 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0141 | Val rms_score: 0.5085
|
| 332 |
+
2025-09-26 06:59:12,975 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0128 | Val rms_score: 0.5046
|
| 333 |
+
2025-09-26 06:59:23,909 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0126 | Val rms_score: 0.5031
|
| 334 |
+
2025-09-26 06:59:34,690 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0127 | Val rms_score: 0.5053
|
| 335 |
+
2025-09-26 06:59:44,704 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0118 | Val rms_score: 0.5096
|
| 336 |
+
2025-09-26 06:59:54,027 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0128 | Val rms_score: 0.5077
|
| 337 |
+
2025-09-26 07:00:04,103 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0129 | Val rms_score: 0.5079
|
| 338 |
+
2025-09-26 07:00:13,799 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0134 | Val rms_score: 0.5046
|
| 339 |
+
2025-09-26 07:00:22,426 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0120 | Val rms_score: 0.5025
|
| 340 |
+
2025-09-26 07:00:32,398 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0116 | Val rms_score: 0.5057
|
| 341 |
+
2025-09-26 07:00:42,234 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0118 | Val rms_score: 0.5097
|
| 342 |
+
2025-09-26 07:00:49,568 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0118 | Val rms_score: 0.5073
|
| 343 |
+
2025-09-26 07:00:58,756 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0133 | Val rms_score: 0.5052
|
| 344 |
+
2025-09-26 07:01:08,624 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0122 | Val rms_score: 0.5045
|
| 345 |
+
2025-09-26 07:01:15,646 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0121 | Val rms_score: 0.5043
|
| 346 |
+
2025-09-26 07:01:24,279 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0123 | Val rms_score: 0.5077
|
| 347 |
+
2025-09-26 07:01:33,500 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0117 | Val rms_score: 0.5014
|
| 348 |
+
2025-09-26 07:01:34,349 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Test rms_score: 0.4523
|
| 349 |
+
2025-09-26 07:01:34,942 - logs_modchembert_adme_microsom_stab_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.4542, Std Dev: 0.0024
|
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_permeability_epochs100_batch_size32_20250926_070134.log
ADDED
@@ -0,0 +1,353 @@
| 1 |
+
2025-09-26 07:01:34,968 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_permeability
|
| 2 |
+
2025-09-26 07:01:34,968 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - dataset: adme_permeability, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 07:01:34,972 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_permeability at 2025-09-26_07-01-34
|
| 4 |
+
2025-09-26 07:01:43,671 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7015 | Val rms_score: 0.5047
|
| 5 |
+
2025-09-26 07:01:43,672 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
|
| 6 |
+
2025-09-26 07:01:41,805 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5047
|
| 7 |
+
2025-09-26 07:01:50,426 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4118 | Val rms_score: 0.4595
|
| 8 |
+
2025-09-26 07:01:50,622 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
|
| 9 |
+
2025-09-26 07:01:51,223 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4595
|
| 10 |
+
2025-09-26 07:02:00,530 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1191 | Val rms_score: 0.4553
|
| 11 |
+
2025-09-26 07:02:00,743 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
|
| 12 |
+
2025-09-26 07:02:01,387 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4553
|
| 13 |
+
2025-09-26 07:02:11,770 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2780 | Val rms_score: 0.4664
|
| 14 |
+
2025-09-26 07:02:18,417 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1857 | Val rms_score: 0.4324
|
| 15 |
+
2025-09-26 07:02:18,619 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
|
| 16 |
+
2025-09-26 07:02:19,211 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4324
|
| 17 |
+
2025-09-26 07:02:28,025 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0435 | Val rms_score: 0.4345
|
| 18 |
+
2025-09-26 07:02:37,778 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1241 | Val rms_score: 0.4275
|
| 19 |
+
2025-09-26 07:02:37,982 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 469
|
| 20 |
+
2025-09-26 07:02:38,610 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4275
|
| 21 |
+
2025-09-26 07:02:45,328 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0972 | Val rms_score: 0.4384
|
| 22 |
+
2025-09-26 07:02:54,427 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0592 | Val rms_score: 0.4290
|
| 23 |
+
2025-09-26 07:03:04,072 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0807 | Val rms_score: 0.4292
|
| 24 |
+
2025-09-26 07:03:10,874 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0815 | Val rms_score: 0.4455
|
| 25 |
+
2025-09-26 07:03:21,087 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0527 | Val rms_score: 0.4107
|
| 26 |
+
2025-09-26 07:03:21,328 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 804
|
| 27 |
+
2025-09-26 07:03:22,163 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 12 with val rms_score: 0.4107
|
| 28 |
+
2025-09-26 07:03:32,080 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0541 | Val rms_score: 0.4133
|
| 29 |
+
2025-09-26 07:03:41,392 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0752 | Val rms_score: 0.4760
|
| 30 |
+
2025-09-26 07:03:50,051 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0793 | Val rms_score: 0.4236
|
| 31 |
+
2025-09-26 07:03:59,848 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0578 | Val rms_score: 0.4156
|
| 32 |
+
2025-09-26 07:04:10,000 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0525 | Val rms_score: 0.4142
|
| 33 |
+
2025-09-26 07:04:17,860 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0314 | Val rms_score: 0.4201
|
| 34 |
+
2025-09-26 07:04:27,517 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0380 | Val rms_score: 0.4160
|
| 35 |
+
2025-09-26 07:04:37,376 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0350 | Val rms_score: 0.4162
|
| 36 |
+
2025-09-26 07:04:44,553 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0324 | Val rms_score: 0.4158
|
| 37 |
+
2025-09-26 07:04:54,273 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0319 | Val rms_score: 0.4207
|
| 38 |
+
2025-09-26 07:05:03,756 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0354 | Val rms_score: 0.4153
|
| 39 |
+
2025-09-26 07:05:10,593 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0292 | Val rms_score: 0.4184
|
| 40 |
+
2025-09-26 07:05:20,287 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0317 | Val rms_score: 0.4166
|
| 41 |
+
2025-09-26 07:05:30,127 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0268 | Val rms_score: 0.4226
|
| 42 |
+
2025-09-26 07:05:40,346 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0263 | Val rms_score: 0.4196
|
| 43 |
+
2025-09-26 07:05:47,793 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0271 | Val rms_score: 0.4322
|
| 44 |
+
2025-09-26 07:05:57,675 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0352 | Val rms_score: 0.4222
|
| 45 |
+
2025-09-26 07:06:08,388 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0216 | Val rms_score: 0.4190
|
| 46 |
+
2025-09-26 07:06:15,439 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0205 | Val rms_score: 0.4175
|
| 47 |
+
2025-09-26 07:06:25,215 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0202 | Val rms_score: 0.4171
|
| 48 |
+
2025-09-26 07:06:34,470 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0162 | Val rms_score: 0.4111
|
| 49 |
+
2025-09-26 07:06:41,630 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0183 | Val rms_score: 0.4185
|
| 50 |
+
2025-09-26 07:06:51,497 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0186 | Val rms_score: 0.4179
|
| 51 |
+
2025-09-26 07:07:01,245 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0143 | Val rms_score: 0.4158
|
| 52 |
+
2025-09-26 07:07:11,372 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0164 | Val rms_score: 0.4135
|
| 53 |
+
2025-09-26 07:07:18,882 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0189 | Val rms_score: 0.4398
|
| 54 |
+
2025-09-26 07:07:28,913 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0224 | Val rms_score: 0.4253
|
| 55 |
+
2025-09-26 07:07:38,645 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0195 | Val rms_score: 0.4191
|
| 56 |
+
2025-09-26 07:07:45,672 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0182 | Val rms_score: 0.4188
|
| 57 |
+
2025-09-26 07:07:55,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0174 | Val rms_score: 0.4189
|
| 58 |
+
2025-09-26 07:08:05,011 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0163 | Val rms_score: 0.4237
|
| 59 |
+
2025-09-26 07:08:11,942 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0157 | Val rms_score: 0.4219
|
| 60 |
+
2025-09-26 07:08:22,944 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0167 | Val rms_score: 0.4229
|
| 61 |
+
2025-09-26 07:08:32,224 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0148 | Val rms_score: 0.4259
|
| 62 |
+
2025-09-26 07:08:39,509 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0142 | Val rms_score: 0.4209
|
| 63 |
+
2025-09-26 07:08:48,904 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0137 | Val rms_score: 0.4213
|
| 64 |
+
2025-09-26 07:08:57,833 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0142 | Val rms_score: 0.4211
|
| 65 |
+
2025-09-26 07:09:06,834 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0144 | Val rms_score: 0.4215
|
| 66 |
+
2025-09-26 07:09:13,481 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0137 | Val rms_score: 0.4171
|
| 67 |
+
2025-09-26 07:09:22,339 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0136 | Val rms_score: 0.4209
|
| 68 |
+
2025-09-26 07:09:31,724 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0145 | Val rms_score: 0.4192
|
| 69 |
+
2025-09-26 07:09:38,592 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0131 | Val rms_score: 0.4179
|
| 70 |
+
2025-09-26 07:09:47,366 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0133 | Val rms_score: 0.4190
|
| 71 |
+
2025-09-26 07:09:56,433 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0125 | Val rms_score: 0.4181
|
| 72 |
+
2025-09-26 07:10:06,273 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0116 | Val rms_score: 0.4179
|
| 73 |
+
2025-09-26 07:10:12,301 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0124 | Val rms_score: 0.4210
|
| 74 |
+
2025-09-26 07:10:20,817 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0130 | Val rms_score: 0.4202
|
| 75 |
+
2025-09-26 07:10:31,058 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0104 | Val rms_score: 0.4237
|
| 76 |
+
2025-09-26 07:10:39,782 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0120 | Val rms_score: 0.4229
|
| 77 |
+
2025-09-26 07:10:46,722 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0120 | Val rms_score: 0.4210
|
| 78 |
+
2025-09-26 07:10:56,194 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0126 | Val rms_score: 0.4199
|
| 79 |
+
2025-09-26 07:11:05,105 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0129 | Val rms_score: 0.4201
|
| 80 |
+
2025-09-26 07:11:12,284 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0114 | Val rms_score: 0.4220
|
| 81 |
+
2025-09-26 07:11:21,427 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0123 | Val rms_score: 0.4201
|
| 82 |
+
2025-09-26 07:11:30,510 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0113 | Val rms_score: 0.4206
|
| 83 |
+
2025-09-26 07:11:36,988 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0128 | Val rms_score: 0.4207
|
| 84 |
+
2025-09-26 07:11:46,147 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0132 | Val rms_score: 0.4180
|
| 85 |
+
2025-09-26 07:11:55,897 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0115 | Val rms_score: 0.4194
|
| 86 |
+
2025-09-26 07:12:05,287 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0114 | Val rms_score: 0.4164
|
| 87 |
+
2025-09-26 07:12:12,974 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0107 | Val rms_score: 0.4219
|
| 88 |
+
2025-09-26 07:12:22,526 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0106 | Val rms_score: 0.4181
|
| 89 |
+
2025-09-26 07:12:32,575 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0105 | Val rms_score: 0.4199
|
| 90 |
+
2025-09-26 07:12:41,135 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0095 | Val rms_score: 0.4201
|
| 91 |
+
2025-09-26 07:12:50,847 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0107 | Val rms_score: 0.4186
|
| 92 |
+
2025-09-26 07:13:00,872 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0097 | Val rms_score: 0.4221
|
| 93 |
+
2025-09-26 07:13:08,004 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0109 | Val rms_score: 0.4175
|
| 94 |
+
2025-09-26 07:13:17,664 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0107 | Val rms_score: 0.4193
|
| 95 |
+
2025-09-26 07:13:27,223 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0111 | Val rms_score: 0.4169
|
| 96 |
+
2025-09-26 07:13:36,547 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0120 | Val rms_score: 0.4178
|
| 97 |
+
2025-09-26 07:13:43,841 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0107 | Val rms_score: 0.4205
|
| 98 |
+
2025-09-26 07:13:52,966 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0106 | Val rms_score: 0.4202
|
| 99 |
+
2025-09-26 07:14:02,691 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0110 | Val rms_score: 0.4204
|
| 100 |
+
2025-09-26 07:14:10,060 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0097 | Val rms_score: 0.4201
|
| 101 |
+
2025-09-26 07:14:19,351 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0098 | Val rms_score: 0.4189
|
| 102 |
+
2025-09-26 07:14:29,260 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0105 | Val rms_score: 0.4198
|
| 103 |
+
2025-09-26 07:14:36,440 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0099 | Val rms_score: 0.4221
|
| 104 |
+
2025-09-26 07:14:45,777 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0100 | Val rms_score: 0.4213
|
| 105 |
+
2025-09-26 07:14:56,512 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0100 | Val rms_score: 0.4181
|
| 106 |
+
2025-09-26 07:15:06,119 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0109 | Val rms_score: 0.4168
|
| 107 |
+
2025-09-26 07:15:13,997 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0108 | Val rms_score: 0.4193
|
| 108 |
+
2025-09-26 07:15:24,120 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0089 | Val rms_score: 0.4183
|
| 109 |
+
2025-09-26 07:15:33,220 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0104 | Val rms_score: 0.4208
|
| 110 |
+
2025-09-26 07:15:40,181 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0092 | Val rms_score: 0.4161
|
| 111 |
+
2025-09-26 07:15:49,584 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0099 | Val rms_score: 0.4182
|
| 112 |
+
2025-09-26 07:15:59,964 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0100 | Val rms_score: 0.4175
|
| 113 |
+
2025-09-26 07:16:06,808 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0097 | Val rms_score: 0.4207
|
| 114 |
+
2025-09-26 07:16:16,535 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0095 | Val rms_score: 0.4176
|
| 115 |
+
2025-09-26 07:16:26,345 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0094 | Val rms_score: 0.4185
|
| 116 |
+
2025-09-26 07:16:27,194 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5268
|
| 117 |
+
2025-09-26 07:16:27,808 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_permeability at 2025-09-26_07-16-27
|
| 118 |
+
2025-09-26 07:16:35,160 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6604 | Val rms_score: 0.5024
|
| 119 |
+
2025-09-26 07:16:35,160 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
|
| 120 |
+
2025-09-26 07:16:36,110 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5024
|
| 121 |
+
2025-09-26 07:16:43,175 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4632 | Val rms_score: 0.4540
|
| 122 |
+
2025-09-26 07:16:43,380 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
|
| 123 |
+
2025-09-26 07:16:44,181 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4540
|
| 124 |
+
2025-09-26 07:16:53,451 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0020 | Val rms_score: 0.4423
|
| 125 |
+
2025-09-26 07:16:53,662 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
|
| 126 |
+
2025-09-26 07:16:54,320 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4423
|
| 127 |
+
2025-09-26 07:17:03,604 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2799 | Val rms_score: 0.4881
|
| 128 |
+
2025-09-26 07:17:10,927 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2857 | Val rms_score: 0.4821
|
| 129 |
+
2025-09-26 07:17:20,370 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0840 | Val rms_score: 0.4328
|
| 130 |
+
2025-09-26 07:17:21,058 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 402
|
| 131 |
+
2025-09-26 07:17:21,778 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.4328
|
| 132 |
+
2025-09-26 07:17:30,768 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1903 | Val rms_score: 0.4248
|
| 133 |
+
2025-09-26 07:17:31,001 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 469
|
| 134 |
+
2025-09-26 07:17:31,757 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4248
|
| 135 |
+
2025-09-26 07:17:38,363 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1745 | Val rms_score: 0.4227
|
| 136 |
+
2025-09-26 07:17:38,572 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 536
|
| 137 |
+
2025-09-26 07:17:39,213 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 8 with val rms_score: 0.4227
|
| 138 |
+
2025-09-26 07:17:48,181 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0719 | Val rms_score: 0.4268
|
| 139 |
+
2025-09-26 07:17:57,046 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1250 | Val rms_score: 0.4158
|
| 140 |
+
2025-09-26 07:17:57,264 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 670
|
| 141 |
+
2025-09-26 07:17:57,986 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.4158
2025-09-26 07:18:04,831 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1026 | Val rms_score: 0.4045
2025-09-26 07:18:05,566 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 737
2025-09-26 07:18:06,216 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.4045
2025-09-26 07:18:15,645 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1260 | Val rms_score: 0.5002
2025-09-26 07:18:24,806 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1950 | Val rms_score: 0.4116
2025-09-26 07:18:33,630 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1003 | Val rms_score: 0.4095
2025-09-26 07:18:41,733 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0688 | Val rms_score: 0.4157
2025-09-26 07:18:51,008 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0770 | Val rms_score: 0.4202
2025-09-26 07:19:00,732 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0717 | Val rms_score: 0.4110
2025-09-26 07:19:07,208 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0758 | Val rms_score: 0.4317
2025-09-26 07:19:16,477 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0709 | Val rms_score: 0.4096
2025-09-26 07:19:25,359 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0555 | Val rms_score: 0.4085
2025-09-26 07:19:34,655 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0575 | Val rms_score: 0.4078
2025-09-26 07:19:41,096 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0515 | Val rms_score: 0.4130
2025-09-26 07:19:49,868 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0575 | Val rms_score: 0.4053
2025-09-26 07:19:58,892 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0430 | Val rms_score: 0.4105
2025-09-26 07:20:05,024 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0438 | Val rms_score: 0.4058
2025-09-26 07:20:13,965 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0428 | Val rms_score: 0.4044
2025-09-26 07:20:14,514 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1742
2025-09-26 07:20:15,186 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.4044
2025-09-26 07:20:25,058 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0302 | Val rms_score: 0.4056
2025-09-26 07:20:32,963 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0359 | Val rms_score: 0.3991
2025-09-26 07:20:33,190 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 1876
2025-09-26 07:20:33,989 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 28 with val rms_score: 0.3991
2025-09-26 07:20:43,465 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0362 | Val rms_score: 0.4002
2025-09-26 07:20:53,901 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0342 | Val rms_score: 0.3965
2025-09-26 07:20:54,119 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 2010
2025-09-26 07:20:54,806 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 30 with val rms_score: 0.3965
2025-09-26 07:21:04,341 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0336 | Val rms_score: 0.4065
2025-09-26 07:21:11,692 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0305 | Val rms_score: 0.4028
2025-09-26 07:21:21,628 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0249 | Val rms_score: 0.4001
2025-09-26 07:21:31,235 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0280 | Val rms_score: 0.4001
2025-09-26 07:21:38,729 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0285 | Val rms_score: 0.4015
2025-09-26 07:21:48,767 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0229 | Val rms_score: 0.4050
2025-09-26 07:21:59,641 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0247 | Val rms_score: 0.4024
2025-09-26 07:22:06,624 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0251 | Val rms_score: 0.4073
2025-09-26 07:22:16,106 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0251 | Val rms_score: 0.4005
2025-09-26 07:22:25,111 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0236 | Val rms_score: 0.3997
2025-09-26 07:22:31,974 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0212 | Val rms_score: 0.4020
2025-09-26 07:22:42,117 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0279 | Val rms_score: 0.4045
2025-09-26 07:22:51,627 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0243 | Val rms_score: 0.4016
2025-09-26 07:23:00,794 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0216 | Val rms_score: 0.4009
2025-09-26 07:23:08,386 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0201 | Val rms_score: 0.3999
2025-09-26 07:23:18,085 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0227 | Val rms_score: 0.4014
2025-09-26 07:23:28,632 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0199 | Val rms_score: 0.4068
2025-09-26 07:23:35,100 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0195 | Val rms_score: 0.4070
2025-09-26 07:23:44,192 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0204 | Val rms_score: 0.3968
2025-09-26 07:23:53,764 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0190 | Val rms_score: 0.3992
2025-09-26 07:24:03,423 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0205 | Val rms_score: 0.4054
2025-09-26 07:24:10,534 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0185 | Val rms_score: 0.3993
2025-09-26 07:24:19,865 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0169 | Val rms_score: 0.4032
2025-09-26 07:24:29,445 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0164 | Val rms_score: 0.3976
2025-09-26 07:24:35,986 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0173 | Val rms_score: 0.4019
2025-09-26 07:24:44,709 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0177 | Val rms_score: 0.3990
2025-09-26 07:24:54,356 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0163 | Val rms_score: 0.4020
2025-09-26 07:25:01,410 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0160 | Val rms_score: 0.4008
2025-09-26 07:25:11,266 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0157 | Val rms_score: 0.4026
2025-09-26 07:25:22,753 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0164 | Val rms_score: 0.4008
2025-09-26 07:25:32,422 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0146 | Val rms_score: 0.4018
2025-09-26 07:25:39,937 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0151 | Val rms_score: 0.4015
2025-09-26 07:25:49,388 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0157 | Val rms_score: 0.4028
2025-09-26 07:25:58,962 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0147 | Val rms_score: 0.4019
2025-09-26 07:26:05,452 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0134 | Val rms_score: 0.4046
2025-09-26 07:26:14,301 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0147 | Val rms_score: 0.4018
2025-09-26 07:26:23,807 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0149 | Val rms_score: 0.4020
2025-09-26 07:26:30,248 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0137 | Val rms_score: 0.4028
2025-09-26 07:26:39,374 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0125 | Val rms_score: 0.4017
2025-09-26 07:26:48,931 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0132 | Val rms_score: 0.4011
2025-09-26 07:26:58,199 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0135 | Val rms_score: 0.4028
2025-09-26 07:27:06,149 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0133 | Val rms_score: 0.4046
2025-09-26 07:27:15,281 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0135 | Val rms_score: 0.4042
2025-09-26 07:27:24,420 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0159 | Val rms_score: 0.4033
2025-09-26 07:27:33,161 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0130 | Val rms_score: 0.4020
2025-09-26 07:27:42,630 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0141 | Val rms_score: 0.4031
2025-09-26 07:27:53,653 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0130 | Val rms_score: 0.4019
2025-09-26 07:28:00,864 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0133 | Val rms_score: 0.4051
2025-09-26 07:28:10,331 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0135 | Val rms_score: 0.4043
2025-09-26 07:28:19,186 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0135 | Val rms_score: 0.4000
2025-09-26 07:28:27,931 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0137 | Val rms_score: 0.3987
2025-09-26 07:28:34,702 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0124 | Val rms_score: 0.3977
2025-09-26 07:28:43,956 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0125 | Val rms_score: 0.3976
2025-09-26 07:28:53,463 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0119 | Val rms_score: 0.3972
2025-09-26 07:29:01,229 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0132 | Val rms_score: 0.4014
2025-09-26 07:29:11,238 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0130 | Val rms_score: 0.4011
2025-09-26 07:29:21,267 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0128 | Val rms_score: 0.4045
2025-09-26 07:29:30,919 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0126 | Val rms_score: 0.4016
2025-09-26 07:29:38,604 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0122 | Val rms_score: 0.4003
2025-09-26 07:29:49,396 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0115 | Val rms_score: 0.4032
2025-09-26 07:29:58,174 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0115 | Val rms_score: 0.3991
2025-09-26 07:30:05,134 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0119 | Val rms_score: 0.3998
2025-09-26 07:30:14,483 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0120 | Val rms_score: 0.3974
2025-09-26 07:30:23,595 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0125 | Val rms_score: 0.4005
2025-09-26 07:30:30,304 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0122 | Val rms_score: 0.4036
2025-09-26 07:30:40,559 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0120 | Val rms_score: 0.3982
2025-09-26 07:30:49,635 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0147 | Val rms_score: 0.4032
2025-09-26 07:30:58,412 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0121 | Val rms_score: 0.4041
2025-09-26 07:31:04,868 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0134 | Val rms_score: 0.4013
2025-09-26 07:31:13,444 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0110 | Val rms_score: 0.4020
2025-09-26 07:31:14,298 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5006
2025-09-26 07:31:14,741 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_permeability at 2025-09-26_07-31-14
2025-09-26 07:31:22,667 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6269 | Val rms_score: 0.4918
2025-09-26 07:31:22,667 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 67
2025-09-26 07:31:23,614 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4918
2025-09-26 07:31:30,343 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4118 | Val rms_score: 0.4312
2025-09-26 07:31:30,539 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 134
2025-09-26 07:31:31,174 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4312
2025-09-26 07:31:40,009 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.0610 | Val rms_score: 0.4264
2025-09-26 07:31:40,221 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 201
2025-09-26 07:31:40,843 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4264
2025-09-26 07:31:50,401 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2761 | Val rms_score: 0.4104
2025-09-26 07:31:50,616 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 268
2025-09-26 07:31:51,271 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.4104
2025-09-26 07:31:58,214 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1946 | Val rms_score: 0.4098
2025-09-26 07:31:58,419 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Global step of best model: 335
2025-09-26 07:31:59,052 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.4098
2025-09-26 07:32:07,906 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.0708 | Val rms_score: 0.4200
2025-09-26 07:32:17,926 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1241 | Val rms_score: 0.4160
2025-09-26 07:32:27,101 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1098 | Val rms_score: 0.4211
2025-09-26 07:32:33,374 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1172 | Val rms_score: 0.4304
2025-09-26 07:32:42,334 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0951 | Val rms_score: 0.4136
2025-09-26 07:32:50,812 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0802 | Val rms_score: 0.4431
2025-09-26 07:32:57,873 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0603 | Val rms_score: 0.4203
2025-09-26 07:33:06,932 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0812 | Val rms_score: 0.4783
2025-09-26 07:33:16,223 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1102 | Val rms_score: 0.4270
2025-09-26 07:33:26,620 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0746 | Val rms_score: 0.4230
2025-09-26 07:33:33,749 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0653 | Val rms_score: 0.4229
2025-09-26 07:33:43,719 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0605 | Val rms_score: 0.4296
2025-09-26 07:33:53,232 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0661 | Val rms_score: 0.4302
2025-09-26 07:34:00,896 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0445 | Val rms_score: 0.4211
2025-09-26 07:34:10,138 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0354 | Val rms_score: 0.4212
2025-09-26 07:34:19,478 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0312 | Val rms_score: 0.4285
2025-09-26 07:34:26,954 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0354 | Val rms_score: 0.4238
2025-09-26 07:34:36,088 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0448 | Val rms_score: 0.4212
2025-09-26 07:34:44,976 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0289 | Val rms_score: 0.4170
2025-09-26 07:34:53,332 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0317 | Val rms_score: 0.4106
2025-09-26 07:35:00,280 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0359 | Val rms_score: 0.4233
2025-09-26 07:35:10,139 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0341 | Val rms_score: 0.4232
2025-09-26 07:35:19,311 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0294 | Val rms_score: 0.4223
2025-09-26 07:35:25,921 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0258 | Val rms_score: 0.4187
2025-09-26 07:35:36,321 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0202 | Val rms_score: 0.4149
2025-09-26 07:35:45,525 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0245 | Val rms_score: 0.4206
2025-09-26 07:35:56,001 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0245 | Val rms_score: 0.4205
2025-09-26 07:36:02,346 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0221 | Val rms_score: 0.4259
2025-09-26 07:36:11,791 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0278 | Val rms_score: 0.4360
2025-09-26 07:36:21,560 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0226 | Val rms_score: 0.4221
2025-09-26 07:36:28,573 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0188 | Val rms_score: 0.4265
2025-09-26 07:36:38,222 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0187 | Val rms_score: 0.4273
2025-09-26 07:36:47,269 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0197 | Val rms_score: 0.4211
2025-09-26 07:36:56,902 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0172 | Val rms_score: 0.4269
2025-09-26 07:37:03,857 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0190 | Val rms_score: 0.4234
2025-09-26 07:37:13,564 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0166 | Val rms_score: 0.4192
2025-09-26 07:37:23,528 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0172 | Val rms_score: 0.4197
2025-09-26 07:37:30,511 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0166 | Val rms_score: 0.4251
2025-09-26 07:37:39,589 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0174 | Val rms_score: 0.4225
2025-09-26 07:37:49,912 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0161 | Val rms_score: 0.4268
2025-09-26 07:37:57,115 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0155 | Val rms_score: 0.4214
2025-09-26 07:38:07,195 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0159 | Val rms_score: 0.4246
2025-09-26 07:38:16,581 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0147 | Val rms_score: 0.4197
2025-09-26 07:38:24,659 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0160 | Val rms_score: 0.4176
2025-09-26 07:38:33,921 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0166 | Val rms_score: 0.4177
2025-09-26 07:38:43,206 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0159 | Val rms_score: 0.4223
2025-09-26 07:38:53,059 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0163 | Val rms_score: 0.4190
2025-09-26 07:39:00,442 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0152 | Val rms_score: 0.4240
2025-09-26 07:39:09,370 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0166 | Val rms_score: 0.4169
2025-09-26 07:39:19,130 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0135 | Val rms_score: 0.4165
2025-09-26 07:39:25,150 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0131 | Val rms_score: 0.4193
2025-09-26 07:39:35,268 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0149 | Val rms_score: 0.4160
2025-09-26 07:39:44,737 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0142 | Val rms_score: 0.4272
2025-09-26 07:39:53,938 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0168 | Val rms_score: 0.4235
2025-09-26 07:40:01,992 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0124 | Val rms_score: 0.4187
2025-09-26 07:40:11,600 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0132 | Val rms_score: 0.4201
2025-09-26 07:40:22,014 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0130 | Val rms_score: 0.4171
2025-09-26 07:40:28,650 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0139 | Val rms_score: 0.4211
2025-09-26 07:40:38,102 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0137 | Val rms_score: 0.4219
2025-09-26 07:40:46,931 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0134 | Val rms_score: 0.4170
2025-09-26 07:40:53,278 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0126 | Val rms_score: 0.4153
2025-09-26 07:41:02,206 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0124 | Val rms_score: 0.4193
2025-09-26 07:41:11,386 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0119 | Val rms_score: 0.4137
2025-09-26 07:41:20,159 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0121 | Val rms_score: 0.4152
2025-09-26 07:41:26,348 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0121 | Val rms_score: 0.4148
2025-09-26 07:41:34,915 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0117 | Val rms_score: 0.4149
2025-09-26 07:41:44,693 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0121 | Val rms_score: 0.4173
2025-09-26 07:41:53,461 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0115 | Val rms_score: 0.4178
2025-09-26 07:41:59,944 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0112 | Val rms_score: 0.4213
2025-09-26 07:42:10,405 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0120 | Val rms_score: 0.4181
2025-09-26 07:42:19,023 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0114 | Val rms_score: 0.4199
2025-09-26 07:42:25,679 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0114 | Val rms_score: 0.4231
2025-09-26 07:42:34,265 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0115 | Val rms_score: 0.4161
2025-09-26 07:42:43,387 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0109 | Val rms_score: 0.4185
2025-09-26 07:42:52,198 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0114 | Val rms_score: 0.4214
2025-09-26 07:42:58,259 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0109 | Val rms_score: 0.4139
2025-09-26 07:43:07,579 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0118 | Val rms_score: 0.4188
2025-09-26 07:43:17,077 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0114 | Val rms_score: 0.4160
2025-09-26 07:43:23,550 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0100 | Val rms_score: 0.4157
2025-09-26 07:43:32,403 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0105 | Val rms_score: 0.4139
2025-09-26 07:43:41,328 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0103 | Val rms_score: 0.4161
2025-09-26 07:43:51,176 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0104 | Val rms_score: 0.4175
2025-09-26 07:43:57,650 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0101 | Val rms_score: 0.4156
2025-09-26 07:44:07,821 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0104 | Val rms_score: 0.4170
|
| 341 |
+
2025-09-26 07:44:18,924 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0109 | Val rms_score: 0.4201
|
| 342 |
+
2025-09-26 07:44:25,978 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0105 | Val rms_score: 0.4170
|
| 343 |
+
2025-09-26 07:44:35,818 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0102 | Val rms_score: 0.4159
|
| 344 |
+
2025-09-26 07:44:44,492 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0105 | Val rms_score: 0.4175
|
| 345 |
+
2025-09-26 07:44:52,798 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0101 | Val rms_score: 0.4162
|
| 346 |
+
2025-09-26 07:45:00,324 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0103 | Val rms_score: 0.4159
|
| 347 |
+
2025-09-26 07:45:12,331 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0098 | Val rms_score: 0.4148
|
| 348 |
+
2025-09-26 07:45:22,583 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0100 | Val rms_score: 0.4173
|
| 349 |
+
2025-09-26 07:45:34,564 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0095 | Val rms_score: 0.4140
|
| 350 |
+
2025-09-26 07:45:45,986 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0098 | Val rms_score: 0.4168
|
| 351 |
+
2025-09-26 07:45:55,158 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0110 | Val rms_score: 0.4195
|
| 352 |
+
2025-09-26 07:45:56,196 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Test rms_score: 0.5332
|
| 353 |
+
2025-09-26 07:45:56,736 - logs_modchembert_adme_permeability_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5202, Std Dev: 0.0141
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_ppb_h_epochs100_batch_size32_20250926_075124.log
ADDED
@@ -0,0 +1,323 @@
2025-09-26 07:51:24,023 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_h
|
| 2 |
+
2025-09-26 07:51:24,024 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - dataset: adme_ppb_h, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 07:51:24,028 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_h at 2025-09-26_07-51-24
|
| 4 |
+
2025-09-26 07:51:31,352 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6219 | Val rms_score: 0.7597
|
| 5 |
+
2025-09-26 07:51:31,352 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
|
| 6 |
+
2025-09-26 07:51:31,890 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.7597
|
| 7 |
+
2025-09-26 07:51:33,577 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4250 | Val rms_score: 0.5020
|
| 8 |
+
2025-09-26 07:51:33,747 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
|
| 9 |
+
2025-09-26 07:51:34,254 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.5020
|
| 10 |
+
2025-09-26 07:51:35,926 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3438 | Val rms_score: 0.5526
|
| 11 |
+
2025-09-26 07:51:37,326 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2609 | Val rms_score: 0.6064
|
| 12 |
+
2025-09-26 07:51:38,924 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2172 | Val rms_score: 0.6261
|
| 13 |
+
2025-09-26 07:51:40,534 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1656 | Val rms_score: 0.5962
|
| 14 |
+
2025-09-26 07:51:42,300 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1523 | Val rms_score: 0.6127
|
| 15 |
+
2025-09-26 07:51:43,833 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1187 | Val rms_score: 0.6691
|
| 16 |
+
2025-09-26 07:51:45,388 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1016 | Val rms_score: 0.6729
|
| 17 |
+
2025-09-26 07:51:46,949 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0875 | Val rms_score: 0.6546
|
| 18 |
+
2025-09-26 07:51:48,670 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0816 | Val rms_score: 0.6465
|
| 19 |
+
2025-09-26 07:51:50,791 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0594 | Val rms_score: 0.6603
|
| 20 |
+
2025-09-26 07:51:52,472 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0531 | Val rms_score: 0.6760
|
| 21 |
+
2025-09-26 07:51:54,120 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0402 | Val rms_score: 0.6715
|
| 22 |
+
2025-09-26 07:51:55,855 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0320 | Val rms_score: 0.6639
|
| 23 |
+
2025-09-26 07:51:58,026 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0297 | Val rms_score: 0.6571
|
| 24 |
+
2025-09-26 07:52:00,274 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0283 | Val rms_score: 0.6738
|
| 25 |
+
2025-09-26 07:52:02,388 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0197 | Val rms_score: 0.6700
|
| 26 |
+
2025-09-26 07:52:04,521 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0168 | Val rms_score: 0.6580
|
| 27 |
+
2025-09-26 07:52:06,547 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0144 | Val rms_score: 0.6606
|
| 28 |
+
2025-09-26 07:52:08,376 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0130 | Val rms_score: 0.6721
|
| 29 |
+
2025-09-26 07:52:10,801 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0107 | Val rms_score: 0.6772
|
| 30 |
+
2025-09-26 07:52:12,751 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0107 | Val rms_score: 0.6770
|
| 31 |
+
2025-09-26 07:52:14,484 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0093 | Val rms_score: 0.6672
|
| 32 |
+
2025-09-26 07:52:16,308 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0117 | Val rms_score: 0.6779
|
| 33 |
+
2025-09-26 07:52:18,391 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0105 | Val rms_score: 0.6658
|
| 34 |
+
2025-09-26 07:52:20,762 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0093 | Val rms_score: 0.6712
|
| 35 |
+
2025-09-26 07:52:22,691 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0074 | Val rms_score: 0.6692
|
| 36 |
+
2025-09-26 07:52:24,606 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0089 | Val rms_score: 0.6625
|
| 37 |
+
2025-09-26 07:52:26,592 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0077 | Val rms_score: 0.6522
|
| 38 |
+
2025-09-26 07:52:28,416 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0077 | Val rms_score: 0.6694
|
| 39 |
+
2025-09-26 07:52:30,672 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0095 | Val rms_score: 0.6664
|
| 40 |
+
2025-09-26 07:52:32,375 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0083 | Val rms_score: 0.6500
|
| 41 |
+
2025-09-26 07:52:34,397 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0085 | Val rms_score: 0.6533
|
| 42 |
+
2025-09-26 07:52:36,480 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0084 | Val rms_score: 0.6624
|
| 43 |
+
2025-09-26 07:52:38,451 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0077 | Val rms_score: 0.6559
|
| 44 |
+
2025-09-26 07:52:40,668 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0075 | Val rms_score: 0.6647
|
| 45 |
+
2025-09-26 07:52:42,589 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0121 | Val rms_score: 0.6640
|
| 46 |
+
2025-09-26 07:52:44,585 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0101 | Val rms_score: 0.6632
|
| 47 |
+
2025-09-26 07:52:46,190 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0098 | Val rms_score: 0.6608
|
| 48 |
+
2025-09-26 07:52:48,016 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0116 | Val rms_score: 0.6720
|
| 49 |
+
2025-09-26 07:52:50,297 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0085 | Val rms_score: 0.6545
|
| 50 |
+
2025-09-26 07:52:52,163 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0097 | Val rms_score: 0.6766
|
| 51 |
+
2025-09-26 07:52:54,060 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0066 | Val rms_score: 0.6604
|
| 52 |
+
2025-09-26 07:52:55,972 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0069 | Val rms_score: 0.6599
|
| 53 |
+
2025-09-26 07:52:58,069 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0060 | Val rms_score: 0.6566
|
| 54 |
+
2025-09-26 07:53:00,205 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0055 | Val rms_score: 0.6669
|
| 55 |
+
2025-09-26 07:53:02,117 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0057 | Val rms_score: 0.6662
|
| 56 |
+
2025-09-26 07:53:04,108 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0055 | Val rms_score: 0.6611
|
| 57 |
+
2025-09-26 07:53:06,302 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0051 | Val rms_score: 0.6541
|
| 58 |
+
2025-09-26 07:53:08,223 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0050 | Val rms_score: 0.6582
|
| 59 |
+
2025-09-26 07:53:10,237 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0064 | Val rms_score: 0.6707
|
| 60 |
+
2025-09-26 07:53:12,341 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0067 | Val rms_score: 0.6719
|
| 61 |
+
2025-09-26 07:53:15,356 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0051 | Val rms_score: 0.6694
|
| 62 |
+
2025-09-26 07:53:18,167 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0040 | Val rms_score: 0.6553
|
| 63 |
+
2025-09-26 07:53:21,285 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0057 | Val rms_score: 0.6517
|
| 64 |
+
2025-09-26 07:53:24,767 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0047 | Val rms_score: 0.6563
|
| 65 |
+
2025-09-26 07:53:27,693 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0062 | Val rms_score: 0.6456
|
| 66 |
+
2025-09-26 07:53:30,180 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0044 | Val rms_score: 0.6520
|
| 67 |
+
2025-09-26 07:53:33,138 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0056 | Val rms_score: 0.6462
|
| 68 |
+
2025-09-26 07:53:36,115 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0058 | Val rms_score: 0.6595
|
| 69 |
+
2025-09-26 07:53:39,295 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0060 | Val rms_score: 0.6555
|
| 70 |
+
2025-09-26 07:53:42,391 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0052 | Val rms_score: 0.6664
|
| 71 |
+
2025-09-26 07:53:44,864 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0062 | Val rms_score: 0.6462
|
| 72 |
+
2025-09-26 07:53:47,968 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0063 | Val rms_score: 0.6606
|
| 73 |
+
2025-09-26 07:53:50,578 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0047 | Val rms_score: 0.6519
|
| 74 |
+
2025-09-26 07:53:53,849 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0047 | Val rms_score: 0.6495
|
| 75 |
+
2025-09-26 07:53:56,802 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0057 | Val rms_score: 0.6483
|
| 76 |
+
2025-09-26 07:53:59,487 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0045 | Val rms_score: 0.6612
|
| 77 |
+
2025-09-26 07:54:02,651 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0069 | Val rms_score: 0.6400
|
| 78 |
+
2025-09-26 07:54:05,395 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0075 | Val rms_score: 0.6586
|
| 79 |
+
2025-09-26 07:54:08,659 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0049 | Val rms_score: 0.6546
|
| 80 |
+
2025-09-26 07:54:11,077 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0049 | Val rms_score: 0.6544
|
| 81 |
+
2025-09-26 07:54:14,040 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0054 | Val rms_score: 0.6541
|
| 82 |
+
2025-09-26 07:54:17,207 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0039 | Val rms_score: 0.6504
|
| 83 |
+
2025-09-26 07:54:19,498 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0043 | Val rms_score: 0.6471
|
| 84 |
+
2025-09-26 07:54:22,757 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0042 | Val rms_score: 0.6603
|
| 85 |
+
2025-09-26 07:54:25,362 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0043 | Val rms_score: 0.6637
|
| 86 |
+
2025-09-26 07:54:28,088 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0045 | Val rms_score: 0.6475
|
| 87 |
+
2025-09-26 07:54:31,000 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0043 | Val rms_score: 0.6527
|
| 88 |
+
2025-09-26 07:54:33,478 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0040 | Val rms_score: 0.6613
|
| 89 |
+
2025-09-26 07:54:36,863 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0052 | Val rms_score: 0.6515
|
| 90 |
+
2025-09-26 07:54:39,478 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0057 | Val rms_score: 0.6558
|
| 91 |
+
2025-09-26 07:54:42,444 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0054 | Val rms_score: 0.6635
|
| 92 |
+
2025-09-26 07:54:45,399 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0061 | Val rms_score: 0.6470
|
| 93 |
+
2025-09-26 07:54:48,118 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0051 | Val rms_score: 0.6519
|
| 94 |
+
2025-09-26 07:54:51,448 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0042 | Val rms_score: 0.6479
|
| 95 |
+
2025-09-26 07:54:53,883 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0041 | Val rms_score: 0.6430
|
| 96 |
+
2025-09-26 07:54:56,856 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0043 | Val rms_score: 0.6537
|
| 97 |
+
2025-09-26 07:54:59,399 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0042 | Val rms_score: 0.6554
|
| 98 |
+
2025-09-26 07:55:02,245 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0043 | Val rms_score: 0.6439
|
| 99 |
+
2025-09-26 07:55:05,374 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0037 | Val rms_score: 0.6552
|
| 100 |
+
2025-09-26 07:55:07,780 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0051 | Val rms_score: 0.6501
|
| 101 |
+
2025-09-26 07:55:10,682 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0036 | Val rms_score: 0.6380
|
| 102 |
+
2025-09-26 07:55:13,041 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0043 | Val rms_score: 0.6394
|
| 103 |
+
2025-09-26 07:55:15,957 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0042 | Val rms_score: 0.6445
|
| 104 |
+
2025-09-26 07:55:19,061 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0042 | Val rms_score: 0.6404
|
| 105 |
+
2025-09-26 07:55:21,307 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0042 | Val rms_score: 0.6347
|
| 106 |
+
2025-09-26 07:55:24,227 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0040 | Val rms_score: 0.6400
|
| 107 |
+
2025-09-26 07:55:26,568 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0030 | Val rms_score: 0.6420
|
| 108 |
+
2025-09-26 07:55:27,087 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7767
|
| 109 |
+
2025-09-26 07:55:27,389 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_h at 2025-09-26_07-55-27
|
| 110 |
+
2025-09-26 07:55:29,963 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7906 | Val rms_score: 0.5477
|
| 111 |
+
2025-09-26 07:55:29,963 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
|
| 112 |
+
2025-09-26 07:55:30,704 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5477
|
| 113 |
+
2025-09-26 07:55:33,679 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4469 | Val rms_score: 0.6190
|
| 114 |
+
2025-09-26 07:55:36,443 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3500 | Val rms_score: 0.5338
|
| 115 |
+
2025-09-26 07:55:36,620 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
|
| 116 |
+
2025-09-26 07:55:37,195 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5338
|
| 117 |
+
2025-09-26 07:55:39,611 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3031 | Val rms_score: 0.5897
|
| 118 |
+
2025-09-26 07:55:42,440 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2562 | Val rms_score: 0.6139
|
| 119 |
+
2025-09-26 07:55:45,347 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2062 | Val rms_score: 0.5982
|
| 120 |
+
2025-09-26 07:55:48,065 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1922 | Val rms_score: 0.6044
|
| 121 |
+
2025-09-26 07:55:50,977 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1617 | Val rms_score: 0.6349
|
| 122 |
+
2025-09-26 07:55:53,112 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1281 | Val rms_score: 0.6537
|
| 123 |
+
2025-09-26 07:55:56,104 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1016 | Val rms_score: 0.6382
|
| 124 |
+
2025-09-26 07:55:58,991 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0852 | Val rms_score: 0.6446
|
| 125 |
+
2025-09-26 07:56:01,621 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0730 | Val rms_score: 0.6626
|
| 126 |
+
2025-09-26 07:56:04,696 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0523 | Val rms_score: 0.6761
|
| 127 |
+
2025-09-26 07:56:07,069 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0420 | Val rms_score: 0.6670
|
| 128 |
+
2025-09-26 07:56:10,010 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0354 | Val rms_score: 0.6694
|
| 129 |
+
2025-09-26 07:56:12,897 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0320 | Val rms_score: 0.6723
|
| 130 |
+
2025-09-26 07:56:15,686 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0249 | Val rms_score: 0.6575
|
| 131 |
+
2025-09-26 07:56:18,574 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0201 | Val rms_score: 0.6506
|
| 132 |
+
2025-09-26 07:56:20,932 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0206 | Val rms_score: 0.6601
|
| 133 |
+
2025-09-26 07:56:23,867 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0168 | Val rms_score: 0.6615
|
| 134 |
+
2025-09-26 07:56:26,814 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0154 | Val rms_score: 0.6659
|
| 135 |
+
2025-09-26 07:56:29,596 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0159 | Val rms_score: 0.6640
|
| 136 |
+
2025-09-26 07:56:32,543 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0151 | Val rms_score: 0.6703
|
| 137 |
+
2025-09-26 07:56:34,779 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0108 | Val rms_score: 0.6675
|
| 138 |
+
2025-09-26 07:56:37,707 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0098 | Val rms_score: 0.6685
|
| 139 |
+
2025-09-26 07:56:40,614 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0093 | Val rms_score: 0.6645
|
| 140 |
+
2025-09-26 07:56:43,364 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0093 | Val rms_score: 0.6559
|
| 141 |
+
2025-09-26 07:56:46,289 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0091 | Val rms_score: 0.6562
|
| 142 |
+
2025-09-26 07:56:48,608 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0092 | Val rms_score: 0.6695
|
| 143 |
+
2025-09-26 07:56:51,544 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0094 | Val rms_score: 0.6681
|
| 144 |
+
2025-09-26 07:56:54,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0078 | Val rms_score: 0.6540
|
| 145 |
+
2025-09-26 07:56:57,167 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0096 | Val rms_score: 0.6633
|
| 146 |
+
2025-09-26 07:57:00,077 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0090 | Val rms_score: 0.6622
|
| 147 |
+
2025-09-26 07:57:02,650 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0059 | Val rms_score: 0.6608
|
| 148 |
+
2025-09-26 07:57:05,622 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0087 | Val rms_score: 0.6674
|
| 149 |
+
2025-09-26 07:57:08,592 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0066 | Val rms_score: 0.6634
|
| 150 |
+
2025-09-26 07:57:11,320 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0085 | Val rms_score: 0.6697
|
| 151 |
+
2025-09-26 07:57:14,170 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0066 | Val rms_score: 0.6656
|
| 152 |
+
2025-09-26 07:57:16,726 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0058 | Val rms_score: 0.6621
|
| 153 |
+
2025-09-26 07:57:19,640 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0054 | Val rms_score: 0.6583
|
| 154 |
+
2025-09-26 07:57:22,377 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0066 | Val rms_score: 0.6508
|
| 155 |
+
2025-09-26 07:57:25,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0063 | Val rms_score: 0.6652
|
| 156 |
+
2025-09-26 07:57:28,351 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0056 | Val rms_score: 0.6548
|
| 157 |
+
2025-09-26 07:57:31,206 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0069 | Val rms_score: 0.6564
|
| 158 |
+
2025-09-26 07:57:34,264 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0057 | Val rms_score: 0.6515
|
| 159 |
+
2025-09-26 07:57:36,911 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0066 | Val rms_score: 0.6573
|
| 160 |
+
2025-09-26 07:57:40,250 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0057 | Val rms_score: 0.6615
|
| 161 |
+
2025-09-26 07:57:43,057 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0052 | Val rms_score: 0.6607
|
| 162 |
+
2025-09-26 07:57:45,576 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0045 | Val rms_score: 0.6613
|
| 163 |
+
2025-09-26 07:57:48,572 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0042 | Val rms_score: 0.6562
|
| 164 |
+
2025-09-26 07:57:51,462 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0050 | Val rms_score: 0.6573
|
| 165 |
+
2025-09-26 07:57:54,655 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0050 | Val rms_score: 0.6531
|
| 166 |
+
2025-09-26 07:57:57,515 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0042 | Val rms_score: 0.6548
|
| 167 |
+
2025-09-26 07:58:00,233 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0042 | Val rms_score: 0.6651
|
| 168 |
+
2025-09-26 07:58:03,139 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0061 | Val rms_score: 0.6675
|
| 169 |
+
2025-09-26 07:58:05,755 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0057 | Val rms_score: 0.6575
|
| 170 |
+
2025-09-26 07:58:09,089 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0051 | Val rms_score: 0.6578
|
| 171 |
+
2025-09-26 07:58:11,541 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0053 | Val rms_score: 0.6553
|
| 172 |
+
2025-09-26 07:58:14,435 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0047 | Val rms_score: 0.6486
|
| 173 |
+
2025-09-26 07:58:17,376 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0044 | Val rms_score: 0.6567
|
| 174 |
+
2025-09-26 07:58:20,000 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0044 | Val rms_score: 0.6618
|
| 175 |
+
2025-09-26 07:58:23,439 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0040 | Val rms_score: 0.6574
|
| 176 |
+
2025-09-26 07:58:25,923 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0042 | Val rms_score: 0.6547
|
| 177 |
+
2025-09-26 07:58:28,800 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0044 | Val rms_score: 0.6411
|
| 178 |
+
2025-09-26 07:58:31,720 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0060 | Val rms_score: 0.6628
|
| 179 |
+
2025-09-26 07:58:33,921 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0052 | Val rms_score: 0.6541
2025-09-26 07:58:37,113 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0059 | Val rms_score: 0.6538
2025-09-26 07:58:39,685 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0080 | Val rms_score: 0.6508
2025-09-26 07:58:42,607 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0082 | Val rms_score: 0.6661
2025-09-26 07:58:45,547 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0077 | Val rms_score: 0.6461
2025-09-26 07:58:47,849 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0071 | Val rms_score: 0.6648
2025-09-26 07:58:51,151 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0059 | Val rms_score: 0.6565
2025-09-26 07:58:53,736 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0053 | Val rms_score: 0.6421
2025-09-26 07:58:56,589 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0047 | Val rms_score: 0.6595
2025-09-26 07:58:59,469 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0053 | Val rms_score: 0.6521
2025-09-26 07:59:01,874 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0046 | Val rms_score: 0.6480
2025-09-26 07:59:05,603 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0036 | Val rms_score: 0.6476
2025-09-26 07:59:08,017 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0029 | Val rms_score: 0.6517
2025-09-26 07:59:11,039 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0040 | Val rms_score: 0.6515
2025-09-26 07:59:13,838 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0045 | Val rms_score: 0.6429
2025-09-26 07:59:16,643 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0039 | Val rms_score: 0.6343
2025-09-26 07:59:19,674 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0044 | Val rms_score: 0.6460
2025-09-26 07:59:22,761 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0040 | Val rms_score: 0.6464
2025-09-26 07:59:25,673 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0041 | Val rms_score: 0.6523
2025-09-26 07:59:28,183 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0043 | Val rms_score: 0.6471
2025-09-26 07:59:31,255 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0030 | Val rms_score: 0.6423
2025-09-26 07:59:33,918 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0038 | Val rms_score: 0.6470
2025-09-26 07:59:36,926 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0036 | Val rms_score: 0.6558
2025-09-26 07:59:39,882 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0048 | Val rms_score: 0.6419
2025-09-26 07:59:42,216 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0045 | Val rms_score: 0.6420
2025-09-26 07:59:45,194 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0046 | Val rms_score: 0.6418
2025-09-26 07:59:47,912 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0041 | Val rms_score: 0.6487
2025-09-26 07:59:50,844 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0031 | Val rms_score: 0.6559
2025-09-26 07:59:53,739 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0038 | Val rms_score: 0.6535
2025-09-26 07:59:56,209 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0030 | Val rms_score: 0.6444
2025-09-26 07:59:59,132 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0046 | Val rms_score: 0.6430
2025-09-26 08:00:01,842 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0035 | Val rms_score: 0.6433
2025-09-26 08:00:04,901 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0041 | Val rms_score: 0.6467
2025-09-26 08:00:07,877 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0037 | Val rms_score: 0.6433
2025-09-26 08:00:10,296 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0032 | Val rms_score: 0.6431
2025-09-26 08:00:10,801 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7435
2025-09-26 08:00:11,116 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_h at 2025-09-26_08-00-11
2025-09-26 08:00:13,623 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8313 | Val rms_score: 0.6659
2025-09-26 08:00:13,623 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 08:00:14,355 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6659
2025-09-26 08:00:16,961 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5000 | Val rms_score: 0.6325
2025-09-26 08:00:17,144 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 08:00:17,718 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6325
2025-09-26 08:00:20,808 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3844 | Val rms_score: 0.5220
2025-09-26 08:00:21,008 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Global step of best model: 15
2025-09-26 08:00:21,632 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.5220
2025-09-26 08:00:24,375 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3266 | Val rms_score: 0.5702
2025-09-26 08:00:27,695 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2391 | Val rms_score: 0.6431
2025-09-26 08:00:30,190 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2219 | Val rms_score: 0.6440
2025-09-26 08:00:33,383 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1914 | Val rms_score: 0.6170
2025-09-26 08:00:35,881 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1844 | Val rms_score: 0.6381
2025-09-26 08:00:38,826 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1391 | Val rms_score: 0.6611
2025-09-26 08:00:41,835 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1125 | Val rms_score: 0.6647
2025-09-26 08:00:44,285 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0992 | Val rms_score: 0.6680
2025-09-26 08:00:46,955 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0824 | Val rms_score: 0.6696
2025-09-26 08:00:49,418 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0742 | Val rms_score: 0.6652
2025-09-26 08:00:52,347 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0613 | Val rms_score: 0.6710
2025-09-26 08:00:55,591 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0539 | Val rms_score: 0.6759
2025-09-26 08:00:58,146 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0408 | Val rms_score: 0.6681
2025-09-26 08:01:01,447 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0340 | Val rms_score: 0.6680
2025-09-26 08:01:03,825 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0295 | Val rms_score: 0.6675
2025-09-26 08:01:06,247 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0238 | Val rms_score: 0.6714
2025-09-26 08:01:09,290 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0195 | Val rms_score: 0.6703
2025-09-26 08:01:11,728 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0174 | Val rms_score: 0.6641
2025-09-26 08:01:15,000 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0161 | Val rms_score: 0.6663
2025-09-26 08:01:17,423 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0141 | Val rms_score: 0.6655
2025-09-26 08:01:20,538 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0137 | Val rms_score: 0.6605
2025-09-26 08:01:23,538 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0110 | Val rms_score: 0.6608
2025-09-26 08:01:25,984 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0111 | Val rms_score: 0.6701
2025-09-26 08:01:29,243 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0090 | Val rms_score: 0.6757
2025-09-26 08:01:31,791 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0104 | Val rms_score: 0.6676
2025-09-26 08:01:34,748 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0095 | Val rms_score: 0.6643
2025-09-26 08:01:37,648 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0079 | Val rms_score: 0.6713
2025-09-26 08:01:40,142 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0083 | Val rms_score: 0.6752
2025-09-26 08:01:43,360 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0079 | Val rms_score: 0.6739
2025-09-26 08:01:45,730 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0078 | Val rms_score: 0.6756
2025-09-26 08:01:48,733 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0063 | Val rms_score: 0.6728
2025-09-26 08:01:51,731 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0063 | Val rms_score: 0.6658
2025-09-26 08:01:54,205 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0068 | Val rms_score: 0.6637
2025-09-26 08:01:57,724 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0073 | Val rms_score: 0.6758
2025-09-26 08:02:00,850 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0097 | Val rms_score: 0.6695
2025-09-26 08:02:03,782 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0077 | Val rms_score: 0.6702
2025-09-26 08:02:06,239 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0084 | Val rms_score: 0.6604
2025-09-26 08:02:09,220 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0058 | Val rms_score: 0.6679
2025-09-26 08:02:12,387 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0061 | Val rms_score: 0.6687
2025-09-26 08:02:15,283 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0062 | Val rms_score: 0.6616
2025-09-26 08:02:18,191 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0052 | Val rms_score: 0.6695
2025-09-26 08:02:20,643 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0059 | Val rms_score: 0.6691
2025-09-26 08:02:23,586 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0057 | Val rms_score: 0.6793
2025-09-26 08:02:26,330 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0067 | Val rms_score: 0.6725
2025-09-26 08:02:29,274 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0068 | Val rms_score: 0.6640
2025-09-26 08:02:32,211 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0054 | Val rms_score: 0.6630
2025-09-26 08:02:34,786 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0052 | Val rms_score: 0.6702
2025-09-26 08:02:37,725 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0055 | Val rms_score: 0.6620
2025-09-26 08:02:40,578 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0045 | Val rms_score: 0.6563
2025-09-26 08:02:43,464 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0046 | Val rms_score: 0.6612
2025-09-26 08:02:46,488 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0045 | Val rms_score: 0.6671
2025-09-26 08:02:48,996 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0052 | Val rms_score: 0.6698
2025-09-26 08:02:51,920 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0049 | Val rms_score: 0.6672
2025-09-26 08:02:54,691 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0051 | Val rms_score: 0.6641
2025-09-26 08:02:57,559 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0050 | Val rms_score: 0.6609
2025-09-26 08:03:00,540 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0052 | Val rms_score: 0.6618
2025-09-26 08:03:03,217 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0056 | Val rms_score: 0.6598
2025-09-26 08:03:06,091 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0046 | Val rms_score: 0.6707
2025-09-26 08:03:09,382 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0059 | Val rms_score: 0.6640
2025-09-26 08:03:12,393 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0044 | Val rms_score: 0.6653
2025-09-26 08:03:14,941 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0035 | Val rms_score: 0.6614
2025-09-26 08:03:17,871 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0038 | Val rms_score: 0.6595
2025-09-26 08:03:20,883 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0039 | Val rms_score: 0.6624
2025-09-26 08:03:24,254 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0047 | Val rms_score: 0.6575
2025-09-26 08:03:27,274 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0034 | Val rms_score: 0.6628
2025-09-26 08:03:30,033 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0048 | Val rms_score: 0.6649
2025-09-26 08:03:33,282 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0041 | Val rms_score: 0.6610
2025-09-26 08:03:35,785 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0045 | Val rms_score: 0.6594
2025-09-26 08:03:39,273 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0044 | Val rms_score: 0.6580
2025-09-26 08:03:41,918 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0049 | Val rms_score: 0.6589
2025-09-26 08:03:45,150 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0046 | Val rms_score: 0.6647
2025-09-26 08:03:48,218 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0040 | Val rms_score: 0.6633
2025-09-26 08:03:50,732 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0031 | Val rms_score: 0.6583
2025-09-26 08:03:54,464 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0038 | Val rms_score: 0.6533
2025-09-26 08:03:57,267 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0048 | Val rms_score: 0.6567
2025-09-26 08:04:00,162 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0040 | Val rms_score: 0.6588
2025-09-26 08:04:02,593 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0045 | Val rms_score: 0.6549
2025-09-26 08:04:05,542 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0053 | Val rms_score: 0.6535
2025-09-26 08:04:08,809 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0051 | Val rms_score: 0.6593
2025-09-26 08:04:11,354 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0043 | Val rms_score: 0.6450
2025-09-26 08:04:14,079 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0042 | Val rms_score: 0.6470
2025-09-26 08:04:16,875 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0052 | Val rms_score: 0.6567
2025-09-26 08:04:19,728 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0036 | Val rms_score: 0.6534
2025-09-26 08:04:22,941 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0039 | Val rms_score: 0.6562
2025-09-26 08:04:25,372 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0035 | Val rms_score: 0.6576
2025-09-26 08:04:28,467 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0034 | Val rms_score: 0.6505
2025-09-26 08:04:30,876 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0037 | Val rms_score: 0.6483
2025-09-26 08:04:33,841 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0033 | Val rms_score: 0.6513
2025-09-26 08:04:36,798 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0040 | Val rms_score: 0.6538
2025-09-26 08:04:39,744 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0034 | Val rms_score: 0.6515
2025-09-26 08:04:42,717 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0041 | Val rms_score: 0.6551
2025-09-26 08:04:45,314 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0035 | Val rms_score: 0.6463
2025-09-26 08:04:48,254 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0042 | Val rms_score: 0.6538
2025-09-26 08:04:51,098 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0040 | Val rms_score: 0.6506
2025-09-26 08:04:54,051 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0037 | Val rms_score: 0.6500
2025-09-26 08:04:56,939 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0032 | Val rms_score: 0.6606
2025-09-26 08:04:59,505 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0038 | Val rms_score: 0.6445
2025-09-26 08:05:00,022 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Test rms_score: 0.7652
2025-09-26 08:05:00,323 - logs_modchembert_adme_ppb_h_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7618, Std Dev: 0.0138
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_ppb_r_epochs100_batch_size32_20250926_080500.log
ADDED
@@ -0,0 +1,345 @@
2025-09-26 08:05:00,324 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_ppb_r
2025-09-26 08:05:00,325 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - dataset: adme_ppb_r, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 08:05:00,334 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_ppb_r at 2025-09-26_08-05-00
2025-09-26 08:05:02,769 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7937 | Val rms_score: 0.3716
2025-09-26 08:05:02,770 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 08:05:03,425 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.3716
2025-09-26 08:05:06,523 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4625 | Val rms_score: 0.3542
2025-09-26 08:05:06,715 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 08:05:07,285 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3542
2025-09-26 08:05:10,331 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3937 | Val rms_score: 0.3847
2025-09-26 08:05:12,693 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3656 | Val rms_score: 0.3280
2025-09-26 08:05:12,880 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
2025-09-26 08:05:13,482 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3280
2025-09-26 08:05:16,333 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3078 | Val rms_score: 0.3099
2025-09-26 08:05:16,528 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 08:05:17,104 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3099
2025-09-26 08:05:20,216 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2391 | Val rms_score: 0.3132
2025-09-26 08:05:23,470 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2141 | Val rms_score: 0.3086
2025-09-26 08:05:23,665 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 35
2025-09-26 08:05:24,259 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.3086
2025-09-26 08:05:26,806 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1688 | Val rms_score: 0.3158
2025-09-26 08:05:29,398 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1398 | Val rms_score: 0.3156
2025-09-26 08:05:32,314 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1305 | Val rms_score: 0.3298
2025-09-26 08:05:35,237 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1219 | Val rms_score: 0.3433
2025-09-26 08:05:38,598 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0727 | Val rms_score: 0.4234
2025-09-26 08:05:41,102 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0883 | Val rms_score: 0.3771
2025-09-26 08:05:43,576 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0699 | Val rms_score: 0.3715
2025-09-26 08:05:46,351 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0389 | Val rms_score: 0.3394
2025-09-26 08:05:49,355 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0395 | Val rms_score: 0.3675
2025-09-26 08:05:52,528 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0377 | Val rms_score: 0.3434
2025-09-26 08:05:54,979 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0262 | Val rms_score: 0.3540
2025-09-26 08:05:57,361 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0264 | Val rms_score: 0.3627
2025-09-26 08:06:00,305 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0303 | Val rms_score: 0.3559
2025-09-26 08:06:03,211 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0320 | Val rms_score: 0.3617
2025-09-26 08:06:06,621 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0184 | Val rms_score: 0.3609
2025-09-26 08:06:09,032 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0169 | Val rms_score: 0.3860
2025-09-26 08:06:11,504 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0170 | Val rms_score: 0.3731
2025-09-26 08:06:14,322 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0189 | Val rms_score: 0.3858
2025-09-26 08:06:17,213 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0174 | Val rms_score: 0.3588
2025-09-26 08:06:20,336 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0117 | Val rms_score: 0.3439
2025-09-26 08:06:23,158 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0097 | Val rms_score: 0.3594
2025-09-26 08:06:25,665 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0091 | Val rms_score: 0.3497
2025-09-26 08:06:28,688 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0064 | Val rms_score: 0.3479
2025-09-26 08:06:31,680 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0073 | Val rms_score: 0.3550
2025-09-26 08:06:34,854 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0071 | Val rms_score: 0.3573
2025-09-26 08:06:37,936 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0063 | Val rms_score: 0.3519
2025-09-26 08:06:40,713 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0071 | Val rms_score: 0.3504
2025-09-26 08:06:43,692 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0060 | Val rms_score: 0.3518
2025-09-26 08:06:46,611 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0065 | Val rms_score: 0.3526
2025-09-26 08:06:49,414 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0058 | Val rms_score: 0.3534
2025-09-26 08:06:52,015 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0058 | Val rms_score: 0.3532
2025-09-26 08:06:55,013 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0065 | Val rms_score: 0.3504
2025-09-26 08:06:57,911 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0057 | Val rms_score: 0.3522
2025-09-26 08:07:00,955 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0061 | Val rms_score: 0.3547
2025-09-26 08:07:03,840 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0051 | Val rms_score: 0.3474
2025-09-26 08:07:06,077 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0055 | Val rms_score: 0.3499
2025-09-26 08:07:09,067 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0066 | Val rms_score: 0.3544
2025-09-26 08:07:12,270 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0040 | Val rms_score: 0.3522
2025-09-26 08:07:15,385 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0057 | Val rms_score: 0.3555
2025-09-26 08:07:18,264 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0059 | Val rms_score: 0.3549
2025-09-26 08:07:21,088 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0047 | Val rms_score: 0.3563
2025-09-26 08:07:24,089 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0063 | Val rms_score: 0.3585
2025-09-26 08:07:27,062 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0049 | Val rms_score: 0.3573
2025-09-26 08:07:29,236 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0058 | Val rms_score: 0.3571
2025-09-26 08:07:32,568 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0049 | Val rms_score: 0.3554
2025-09-26 08:07:34,953 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0061 | Val rms_score: 0.3553
2025-09-26 08:07:37,895 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0052 | Val rms_score: 0.3556
2025-09-26 08:07:40,983 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0048 | Val rms_score: 0.3499
2025-09-26 08:07:43,266 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0048 | Val rms_score: 0.3521
2025-09-26 08:07:46,426 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0043 | Val rms_score: 0.3554
2025-09-26 08:07:48,867 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0054 | Val rms_score: 0.3505
2025-09-26 08:07:51,411 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0051 | Val rms_score: 0.3537
2025-09-26 08:07:54,312 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0049 | Val rms_score: 0.3559
2025-09-26 08:07:57,135 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0044 | Val rms_score: 0.3528
2025-09-26 08:08:00,380 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0045 | Val rms_score: 0.3532
2025-09-26 08:08:02,821 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0051 | Val rms_score: 0.3523
2025-09-26 08:08:05,706 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0046 | Val rms_score: 0.3525
2025-09-26 08:08:08,651 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0047 | Val rms_score: 0.3525
2025-09-26 08:08:11,142 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0041 | Val rms_score: 0.3522
2025-09-26 08:08:14,380 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0058 | Val rms_score: 0.3587
2025-09-26 08:08:16,897 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0051 | Val rms_score: 0.3534
2025-09-26 08:08:19,920 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0075 | Val rms_score: 0.3600
2025-09-26 08:08:22,783 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0056 | Val rms_score: 0.3663
2025-09-26 08:08:25,048 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0054 | Val rms_score: 0.3545
2025-09-26 08:08:28,352 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0053 | Val rms_score: 0.3580
2025-09-26 08:08:30,915 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0052 | Val rms_score: 0.3573
2025-09-26 08:08:33,833 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0042 | Val rms_score: 0.3567
2025-09-26 08:08:36,749 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0051 | Val rms_score: 0.3569
2025-09-26 08:08:38,903 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0044 | Val rms_score: 0.3520
2025-09-26 08:08:42,117 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0061 | Val rms_score: 0.3564
2025-09-26 08:08:44,847 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0045 | Val rms_score: 0.3518
2025-09-26 08:08:47,710 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0057 | Val rms_score: 0.3598
2025-09-26 08:08:50,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0044 | Val rms_score: 0.3598
2025-09-26 08:08:52,998 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0048 | Val rms_score: 0.3554
2025-09-26 08:08:56,271 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0042 | Val rms_score: 0.3603
2025-09-26 08:08:58,845 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0054 | Val rms_score: 0.3557
2025-09-26 08:09:01,736 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0048 | Val rms_score: 0.3587
2025-09-26 08:09:04,585 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0050 | Val rms_score: 0.3642
2025-09-26 08:09:06,938 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0051 | Val rms_score: 0.3607
2025-09-26 08:09:10,122 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0041 | Val rms_score: 0.3618
2025-09-26 08:09:12,494 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0049 | Val rms_score: 0.3607
2025-09-26 08:09:15,412 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0042 | Val rms_score: 0.3609
2025-09-26 08:09:18,306 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0038 | Val rms_score: 0.3628
2025-09-26 08:09:20,833 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0053 | Val rms_score: 0.3569
2025-09-26 08:09:24,081 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0038 | Val rms_score: 0.3590
2025-09-26 08:09:26,601 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0055 | Val rms_score: 0.3553
2025-09-26 08:09:29,794 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0070 | Val rms_score: 0.3675
2025-09-26 08:09:32,643 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0088 | Val rms_score: 0.3542
2025-09-26 08:09:35,126 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0085 | Val rms_score: 0.3686
2025-09-26 08:09:38,424 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0080 | Val rms_score: 0.3513
2025-09-26 08:09:40,938 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0078 | Val rms_score: 0.3622
2025-09-26 08:09:43,874 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0062 | Val rms_score: 0.3585
2025-09-26 08:09:46,812 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0067 | Val rms_score: 0.3582
2025-09-26 08:09:47,125 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7056
2025-09-26 08:09:47,439 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_ppb_r at 2025-09-26_08-09-47
2025-09-26 08:09:49,764 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 1.0750 | Val rms_score: 0.5181
2025-09-26 08:09:49,764 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 08:09:50,366 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5181
2025-09-26 08:09:53,055 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6312 | Val rms_score: 0.4155
2025-09-26 08:09:53,246 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 08:09:53,853 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4155
2025-09-26 08:09:57,118 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4938 | Val rms_score: 0.3889
2025-09-26 08:09:57,305 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
2025-09-26 08:09:57,867 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3889
2025-09-26 08:10:00,983 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.4125 | Val rms_score: 0.3161
2025-09-26 08:10:01,175 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
2025-09-26 08:10:01,779 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3161
2025-09-26 08:10:04,751 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3937 | Val rms_score: 0.2932
2025-09-26 08:10:04,981 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 08:10:05,589 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.2932
2025-09-26 08:10:08,649 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3375 | Val rms_score: 0.2806
2025-09-26 08:10:09,135 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
2025-09-26 08:10:09,688 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.2806
2025-09-26 08:10:13,000 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2484 | Val rms_score: 0.3007
2025-09-26 08:10:15,478 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2094 | Val rms_score: 0.2998
2025-09-26 08:10:18,215 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1836 | Val rms_score: 0.3041
2025-09-26 08:10:20,699 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2437 | Val rms_score: 0.3158
2025-09-26 08:10:23,620 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1508 | Val rms_score: 0.3808
2025-09-26 08:10:27,025 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1219 | Val rms_score: 0.3253
2025-09-26 08:10:29,703 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0926 | Val rms_score: 0.3654
2025-09-26 08:10:32,285 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0805 | Val rms_score: 0.3692
2025-09-26 08:10:35,148 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0668 | Val rms_score: 0.3448
2025-09-26 08:10:38,261 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0512 | Val rms_score: 0.3534
2025-09-26 08:10:41,445 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0451 | Val rms_score: 0.3568
2025-09-26 08:10:43,851 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0373 | Val rms_score: 0.3491
2025-09-26 08:10:46,433 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0336 | Val rms_score: 0.3465
2025-09-26 08:10:49,269 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0273 | Val rms_score: 0.3540
2025-09-26 08:10:52,130 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0328 | Val rms_score: 0.3709
2025-09-26 08:10:54,712 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0221 | Val rms_score: 0.3549
2025-09-26 08:10:57,794 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0219 | Val rms_score: 0.3555
2025-09-26 08:11:00,348 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0171 | Val rms_score: 0.3648
2025-09-26 08:11:03,337 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0147 | Val rms_score: 0.3598
2025-09-26 08:11:06,359 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0155 | Val rms_score: 0.3642
2025-09-26 08:11:09,347 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0123 | Val rms_score: 0.3766
2025-09-26 08:11:12,390 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0148 | Val rms_score: 0.3613
2025-09-26 08:11:14,784 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0102 | Val rms_score: 0.3510
2025-09-26 08:11:17,678 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0117 | Val rms_score: 0.3526
2025-09-26 08:11:20,497 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0087 | Val rms_score: 0.3803
2025-09-26 08:11:23,212 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0102 | Val rms_score: 0.3729
2025-09-26 08:11:26,185 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0077 | Val rms_score: 0.3699
2025-09-26 08:11:28,398 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0068 | Val rms_score: 0.3577
2025-09-26 08:11:31,345 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0082 | Val rms_score: 0.3503
2025-09-26 08:11:34,228 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0086 | Val rms_score: 0.3558
2025-09-26 08:11:36,970 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0056 | Val rms_score: 0.3604
2025-09-26 08:11:40,037 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0067 | Val rms_score: 0.3563
2025-09-26 08:11:42,430 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0069 | Val rms_score: 0.3610
2025-09-26 08:11:45,333 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0056 | Val rms_score: 0.3639
2025-09-26 08:11:48,235 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0051 | Val rms_score: 0.3642
2025-09-26 08:11:51,092 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0069 | Val rms_score: 0.3647
2025-09-26 08:11:53,962 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0044 | Val rms_score: 0.3569
2025-09-26 08:11:56,407 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0060 | Val rms_score: 0.3587
2025-09-26 08:11:59,672 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0043 | Val rms_score: 0.3583
2025-09-26 08:12:02,432 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0050 | Val rms_score: 0.3621
2025-09-26 08:12:05,248 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0049 | Val rms_score: 0.3610
2025-09-26 08:12:08,164 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0065 | Val rms_score: 0.3562
2025-09-26 08:12:10,559 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0053 | Val rms_score: 0.3486
2025-09-26 08:12:13,613 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0055 | Val rms_score: 0.3484
2025-09-26 08:12:16,509 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0053 | Val rms_score: 0.3489
2025-09-26 08:12:19,218 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0059 | Val rms_score: 0.3656
2025-09-26 08:12:21,987 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0050 | Val rms_score: 0.3630
2025-09-26 08:12:24,414 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0062 | Val rms_score: 0.3651
2025-09-26 08:12:27,387 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0058 | Val rms_score: 0.3596
2025-09-26 08:12:30,394 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0051 | Val rms_score: 0.3517
2025-09-26 08:12:33,026 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0046 | Val rms_score: 0.3593
2025-09-26 08:12:36,042 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0051 | Val rms_score: 0.3568
2025-09-26 08:12:38,255 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0066 | Val rms_score: 0.3569
2025-09-26 08:12:40,856 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0043 | Val rms_score: 0.3659
2025-09-26 08:12:43,695 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0053 | Val rms_score: 0.3629
2025-09-26 08:12:46,515 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0057 | Val rms_score: 0.3485
2025-09-26 08:12:49,565 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0053 | Val rms_score: 0.3490
2025-09-26 08:12:52,258 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0053 | Val rms_score: 0.3558
2025-09-26 08:12:55,045 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0047 | Val rms_score: 0.3538
2025-09-26 08:12:57,961 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0041 | Val rms_score: 0.3517
2025-09-26 08:13:00,678 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0041 | Val rms_score: 0.3522
2025-09-26 08:13:03,619 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0057 | Val rms_score: 0.3528
2025-09-26 08:13:05,944 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0060 | Val rms_score: 0.3663
2025-09-26 08:13:08,852 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0060 | Val rms_score: 0.3703
2025-09-26 08:13:11,721 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0050 | Val rms_score: 0.3573
2025-09-26 08:13:14,540 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0048 | Val rms_score: 0.3504
2025-09-26 08:13:17,545 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0047 | Val rms_score: 0.3390
2025-09-26 08:13:20,170 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0060 | Val rms_score: 0.3479
2025-09-26 08:13:23,119 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0048 | Val rms_score: 0.3526
2025-09-26 08:13:26,040 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0034 | Val rms_score: 0.3504
2025-09-26 08:13:28,908 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0050 | Val rms_score: 0.3525
2025-09-26 08:13:31,912 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0052 | Val rms_score: 0.3500
2025-09-26 08:13:34,534 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0041 | Val rms_score: 0.3577
2025-09-26 08:13:37,740 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0056 | Val rms_score: 0.3488
2025-09-26 08:13:40,386 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0050 | Val rms_score: 0.3569
2025-09-26 08:13:43,752 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0057 | Val rms_score: 0.3585
2025-09-26 08:13:46,750 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0046 | Val rms_score: 0.3570
2025-09-26 08:13:49,233 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0055 | Val rms_score: 0.3654
2025-09-26 08:13:52,293 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0055 | Val rms_score: 0.3622
2025-09-26 08:13:54,692 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0068 | Val rms_score: 0.3495
2025-09-26 08:13:58,207 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0048 | Val rms_score: 0.3506
2025-09-26 08:14:00,794 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0048 | Val rms_score: 0.3471
2025-09-26 08:14:03,507 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0054 | Val rms_score: 0.3420
2025-09-26 08:14:06,510 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0047 | Val rms_score: 0.3493
2025-09-26 08:14:09,228 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0042 | Val rms_score: 0.3500
2025-09-26 08:14:12,665 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0042 | Val rms_score: 0.3570
2025-09-26 08:14:15,030 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0044 | Val rms_score: 0.3579
2025-09-26 08:14:18,215 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0043 | Val rms_score: 0.3559
2025-09-26 08:14:21,103 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0045 | Val rms_score: 0.3602
2025-09-26 08:14:23,780 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0044 | Val rms_score: 0.3518
2025-09-26 08:14:26,815 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0047 | Val rms_score: 0.3509
2025-09-26 08:14:29,285 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0030 | Val rms_score: 0.3573
2025-09-26 08:14:32,159 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0032 | Val rms_score: 0.3572
2025-09-26 08:14:35,096 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0036 | Val rms_score: 0.3620
2025-09-26 08:14:35,453 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.6998
2025-09-26 08:14:35,794 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_ppb_r at 2025-09-26_08-14-35
2025-09-26 08:14:37,961 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7969 | Val rms_score: 0.5791
2025-09-26 08:14:37,961 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 5
2025-09-26 08:14:38,679 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5791
2025-09-26 08:14:41,283 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5531 | Val rms_score: 0.4323
2025-09-26 08:14:41,542 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 10
2025-09-26 08:14:42,179 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4323
2025-09-26 08:14:45,313 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4844 | Val rms_score: 0.4080
2025-09-26 08:14:45,517 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 15
2025-09-26 08:14:46,237 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4080
2025-09-26 08:14:48,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3766 | Val rms_score: 0.3485
2025-09-26 08:14:48,842 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 20
2025-09-26 08:14:49,436 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3485
2025-09-26 08:14:52,445 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2953 | Val rms_score: 0.3351
2025-09-26 08:14:52,632 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 25
2025-09-26 08:14:53,191 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.3351
2025-09-26 08:14:55,588 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2875 | Val rms_score: 0.3145
2025-09-26 08:14:56,091 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 30
2025-09-26 08:14:56,661 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3145
2025-09-26 08:14:59,733 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2406 | Val rms_score: 0.3327
2025-09-26 08:15:02,146 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2656 | Val rms_score: 0.3439
2025-09-26 08:15:05,085 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1812 | Val rms_score: 0.3062
2025-09-26 08:15:05,289 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 08:15:05,875 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3062
2025-09-26 08:15:08,799 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1516 | Val rms_score: 0.3339
2025-09-26 08:15:11,471 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1234 | Val rms_score: 0.3165
2025-09-26 08:15:14,209 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1688 | Val rms_score: 0.3279
2025-09-26 08:15:17,197 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0852 | Val rms_score: 0.3512
2025-09-26 08:15:20,177 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0770 | Val rms_score: 0.3950
2025-09-26 08:15:22,970 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0867 | Val rms_score: 0.3528
2025-09-26 08:15:25,904 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0648 | Val rms_score: 0.3452
2025-09-26 08:15:28,722 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0465 | Val rms_score: 0.3454
2025-09-26 08:15:31,719 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0324 | Val rms_score: 0.3412
2025-09-26 08:15:34,738 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0270 | Val rms_score: 0.3368
2025-09-26 08:15:37,087 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0236 | Val rms_score: 0.3417
2025-09-26 08:15:40,028 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0228 | Val rms_score: 0.3369
2025-09-26 08:15:42,612 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0203 | Val rms_score: 0.3375
2025-09-26 08:15:45,491 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0155 | Val rms_score: 0.3393
2025-09-26 08:15:48,354 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0132 | Val rms_score: 0.3422
2025-09-26 08:15:50,934 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0113 | Val rms_score: 0.3448
2025-09-26 08:15:53,822 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0131 | Val rms_score: 0.3474
2025-09-26 08:15:56,746 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0114 | Val rms_score: 0.3512
2025-09-26 08:15:59,614 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0092 | Val rms_score: 0.3505
2025-09-26 08:16:02,470 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0129 | Val rms_score: 0.3542
2025-09-26 08:16:04,869 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0109 | Val rms_score: 0.3553
2025-09-26 08:16:07,871 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0082 | Val rms_score: 0.3550
2025-09-26 08:16:10,752 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0097 | Val rms_score: 0.3584
2025-09-26 08:16:13,637 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0087 | Val rms_score: 0.3605
2025-09-26 08:16:16,610 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0092 | Val rms_score: 0.3575
2025-09-26 08:16:18,889 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0063 | Val rms_score: 0.3586
2025-09-26 08:16:21,857 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0053 | Val rms_score: 0.3588
2025-09-26 08:16:24,645 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0065 | Val rms_score: 0.3575
2025-09-26 08:16:27,573 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0053 | Val rms_score: 0.3586
2025-09-26 08:16:30,606 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0069 | Val rms_score: 0.3651
2025-09-26 08:16:33,009 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0078 | Val rms_score: 0.3631
2025-09-26 08:16:36,070 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0079 | Val rms_score: 0.3696
2025-09-26 08:16:38,751 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0057 | Val rms_score: 0.3626
2025-09-26 08:16:41,608 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0066 | Val rms_score: 0.3579
2025-09-26 08:16:44,719 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0061 | Val rms_score: 0.3609
2025-09-26 08:16:47,342 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0050 | Val rms_score: 0.3610
2025-09-26 08:16:50,203 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0048 | Val rms_score: 0.3623
2025-09-26 08:16:53,315 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0057 | Val rms_score: 0.3607
2025-09-26 08:16:55,809 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0066 | Val rms_score: 0.3636
2025-09-26 08:16:58,694 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0055 | Val rms_score: 0.3657
2025-09-26 08:17:01,069 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0100 | Val rms_score: 0.3614
2025-09-26 08:17:03,488 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0055 | Val rms_score: 0.3701
2025-09-26 08:17:06,670 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0068 | Val rms_score: 0.3671
2025-09-26 08:17:09,529 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0052 | Val rms_score: 0.3601
2025-09-26 08:17:12,393 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0047 | Val rms_score: 0.3612
2025-09-26 08:17:13,996 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0044 | Val rms_score: 0.3702
2025-09-26 08:17:16,483 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0055 | Val rms_score: 0.3659
2025-09-26 08:17:19,249 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0048 | Val rms_score: 0.3672
2025-09-26 08:17:22,156 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0046 | Val rms_score: 0.3642
2025-09-26 08:17:25,037 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0062 | Val rms_score: 0.3591
2025-09-26 08:17:28,024 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0055 | Val rms_score: 0.3629
2025-09-26 08:17:30,909 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0048 | Val rms_score: 0.3591
2025-09-26 08:17:33,676 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0048 | Val rms_score: 0.3613
2025-09-26 08:17:36,519 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0059 | Val rms_score: 0.3602
2025-09-26 08:17:39,546 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0054 | Val rms_score: 0.3603
2025-09-26 08:17:41,973 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0038 | Val rms_score: 0.3649
2025-09-26 08:17:44,886 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0048 | Val rms_score: 0.3644
2025-09-26 08:17:47,627 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0040 | Val rms_score: 0.3613
2025-09-26 08:17:50,609 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0050 | Val rms_score: 0.3605
2025-09-26 08:17:53,673 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0042 | Val rms_score: 0.3649
2025-09-26 08:17:56,199 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0048 | Val rms_score: 0.3620
2025-09-26 08:17:59,149 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0050 | Val rms_score: 0.3613
2025-09-26 08:18:02,142 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0046 | Val rms_score: 0.3599
2025-09-26 08:18:04,984 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0055 | Val rms_score: 0.3618
2025-09-26 08:18:07,701 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0036 | Val rms_score: 0.3634
2025-09-26 08:18:10,574 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0047 | Val rms_score: 0.3621
2025-09-26 08:18:13,603 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0046 | Val rms_score: 0.3597
2025-09-26 08:18:16,492 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0040 | Val rms_score: 0.3647
2025-09-26 08:18:19,408 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0051 | Val rms_score: 0.3636
2025-09-26 08:18:21,585 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0040 | Val rms_score: 0.3622
2025-09-26 08:18:24,575 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0036 | Val rms_score: 0.3631
2025-09-26 08:18:27,552 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0047 | Val rms_score: 0.3563
2025-09-26 08:18:30,593 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0050 | Val rms_score: 0.3576
2025-09-26 08:18:33,489 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0059 | Val rms_score: 0.3564
2025-09-26 08:18:35,838 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0052 | Val rms_score: 0.3584
2025-09-26 08:18:38,752 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0052 | Val rms_score: 0.3715
2025-09-26 08:18:41,644 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0048 | Val rms_score: 0.3701
2025-09-26 08:18:44,450 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0050 | Val rms_score: 0.3726
2025-09-26 08:18:47,383 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0042 | Val rms_score: 0.3671
2025-09-26 08:18:49,836 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0047 | Val rms_score: 0.3657
2025-09-26 08:18:52,869 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0052 | Val rms_score: 0.3650
2025-09-26 08:18:55,952 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0036 | Val rms_score: 0.3651
2025-09-26 08:18:59,181 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0037 | Val rms_score: 0.3651
2025-09-26 08:19:01,882 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0038 | Val rms_score: 0.3657
2025-09-26 08:19:04,714 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0041 | Val rms_score: 0.3652
2025-09-26 08:19:07,743 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0042 | Val rms_score: 0.3651
2025-09-26 08:19:10,143 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0042 | Val rms_score: 0.3654
2025-09-26 08:19:13,410 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0038 | Val rms_score: 0.3655
2025-09-26 08:19:16,108 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0043 | Val rms_score: 0.3675
2025-09-26 08:19:19,063 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0029 | Val rms_score: 0.3599
2025-09-26 08:19:22,039 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0042 | Val rms_score: 0.3613
2025-09-26 08:19:22,395 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Test rms_score: 0.7028
2025-09-26 08:19:22,696 - logs_modchembert_adme_ppb_r_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7027, Std Dev: 0.0023
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_adme_solubility_epochs100_batch_size32_20250926_081922.log
ADDED
@@ -0,0 +1,357 @@
2025-09-26 08:19:22,698 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: adme_solubility
2025-09-26 08:19:22,698 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - dataset: adme_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 08:19:22,704 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset adme_solubility at 2025-09-26_08-19-22
2025-09-26 08:19:36,134 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6773 | Val rms_score: 0.4316
2025-09-26 08:19:36,134 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 08:19:36,701 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4316
2025-09-26 08:19:50,872 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5188 | Val rms_score: 0.4485
2025-09-26 08:20:05,502 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4068 | Val rms_score: 0.3837
2025-09-26 08:20:05,658 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
2025-09-26 08:20:06,252 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3837
2025-09-26 08:20:21,253 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3250 | Val rms_score: 0.4087
2025-09-26 08:20:33,303 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2136 | Val rms_score: 0.4043
2025-09-26 08:20:47,930 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1698 | Val rms_score: 0.3818
2025-09-26 08:20:48,374 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 330
2025-09-26 08:20:49,000 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.3818
2025-09-26 08:21:02,719 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1324 | Val rms_score: 0.4198
2025-09-26 08:21:17,058 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1187 | Val rms_score: 0.3863
2025-09-26 08:21:31,732 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1017 | Val rms_score: 0.3775
2025-09-26 08:21:31,875 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
2025-09-26 08:21:32,408 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3775
2025-09-26 08:21:47,327 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0869 | Val rms_score: 0.3730
2025-09-26 08:21:47,516 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 550
2025-09-26 08:21:48,132 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.3730
2025-09-26 08:22:02,466 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0797 | Val rms_score: 0.3774
2025-09-26 08:22:16,877 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0699 | Val rms_score: 0.3744
2025-09-26 08:22:31,038 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0813 | Val rms_score: 0.3831
2025-09-26 08:22:45,929 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0713 | Val rms_score: 0.3837
2025-09-26 08:23:00,490 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0575 | Val rms_score: 0.3759
2025-09-26 08:23:15,295 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0509 | Val rms_score: 0.3799
2025-09-26 08:23:29,776 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0471 | Val rms_score: 0.3803
2025-09-26 08:23:43,864 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0415 | Val rms_score: 0.3752
2025-09-26 08:23:59,122 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0424 | Val rms_score: 0.3721
2025-09-26 08:23:59,279 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1045
2025-09-26 08:23:59,845 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.3721
2025-09-26 08:24:14,396 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0398 | Val rms_score: 0.3773
2025-09-26 08:24:29,460 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0395 | Val rms_score: 0.3766
2025-09-26 08:24:44,641 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0270 | Val rms_score: 0.3748
2025-09-26 08:24:58,762 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0311 | Val rms_score: 0.3739
2025-09-26 08:25:12,307 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0336 | Val rms_score: 0.3777
2025-09-26 08:25:26,321 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0286 | Val rms_score: 0.3731
2025-09-26 08:25:41,181 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0285 | Val rms_score: 0.3734
2025-09-26 08:25:56,755 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0273 | Val rms_score: 0.3813
2025-09-26 08:26:11,618 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0236 | Val rms_score: 0.3726
2025-09-26 08:26:26,482 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0232 | Val rms_score: 0.3772
2025-09-26 08:26:40,654 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0227 | Val rms_score: 0.3782
2025-09-26 08:26:54,947 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0229 | Val rms_score: 0.3725
2025-09-26 08:27:09,819 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0232 | Val rms_score: 0.3725
2025-09-26 08:27:24,873 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0230 | Val rms_score: 0.3700
2025-09-26 08:27:25,094 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1815
2025-09-26 08:27:25,735 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 33 with val rms_score: 0.3700
2025-09-26 08:27:41,237 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0202 | Val rms_score: 0.3745
2025-09-26 08:27:56,675 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0225 | Val rms_score: 0.3743
2025-09-26 08:28:11,460 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0197 | Val rms_score: 0.3764
2025-09-26 08:28:27,387 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0176 | Val rms_score: 0.3737
2025-09-26 08:28:41,251 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0197 | Val rms_score: 0.3759
2025-09-26 08:28:55,770 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0177 | Val rms_score: 0.3771
2025-09-26 08:29:10,165 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0173 | Val rms_score: 0.3741
2025-09-26 08:29:25,674 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0158 | Val rms_score: 0.3709
2025-09-26 08:29:40,380 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0197 | Val rms_score: 0.3757
2025-09-26 08:29:55,708 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0170 | Val rms_score: 0.3693
2025-09-26 08:29:55,865 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2365
2025-09-26 08:29:56,451 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 43 with val rms_score: 0.3693
2025-09-26 08:30:11,066 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0167 | Val rms_score: 0.3690
2025-09-26 08:30:11,250 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2420
2025-09-26 08:30:11,848 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 44 with val rms_score: 0.3690
2025-09-26 08:30:26,159 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0166 | Val rms_score: 0.3721
2025-09-26 08:30:40,349 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0152 | Val rms_score: 0.3718
2025-09-26 08:30:55,305 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0154 | Val rms_score: 0.3770
2025-09-26 08:31:10,227 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0152 | Val rms_score: 0.3759
2025-09-26 08:31:25,408 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0158 | Val rms_score: 0.3819
2025-09-26 08:31:39,931 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0143 | Val rms_score: 0.3710
2025-09-26 08:31:53,992 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0148 | Val rms_score: 0.3755
2025-09-26 08:32:09,024 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0126 | Val rms_score: 0.3787
2025-09-26 08:32:23,714 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0121 | Val rms_score: 0.3704
2025-09-26 08:32:38,632 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0138 | Val rms_score: 0.3707
2025-09-26 08:32:54,642 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0130 | Val rms_score: 0.3713
2025-09-26 08:33:09,287 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0135 | Val rms_score: 0.3704
2025-09-26 08:33:23,749 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0138 | Val rms_score: 0.3747
2025-09-26 08:33:37,461 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0126 | Val rms_score: 0.3708
2025-09-26 08:33:51,268 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0141 | Val rms_score: 0.3754
2025-09-26 08:34:06,355 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0121 | Val rms_score: 0.3761
2025-09-26 08:34:21,480 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0134 | Val rms_score: 0.3714
2025-09-26 08:34:36,748 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0123 | Val rms_score: 0.3686
2025-09-26 08:34:36,907 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3410
2025-09-26 08:34:37,476 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 62 with val rms_score: 0.3686
2025-09-26 08:34:51,812 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0116 | Val rms_score: 0.3720
2025-09-26 08:35:05,673 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0124 | Val rms_score: 0.3689
2025-09-26 08:35:19,893 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0111 | Val rms_score: 0.3690
2025-09-26 08:35:34,262 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0121 | Val rms_score: 0.3691
2025-09-26 08:35:49,879 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0107 | Val rms_score: 0.3697
2025-09-26 08:36:04,463 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0104 | Val rms_score: 0.3740
2025-09-26 08:36:18,488 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0102 | Val rms_score: 0.3712
2025-09-26 08:36:32,483 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0111 | Val rms_score: 0.3678
2025-09-26 08:36:32,645 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 3850
2025-09-26 08:36:33,252 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 70 with val rms_score: 0.3678
2025-09-26 08:36:47,034 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0083 | Val rms_score: 0.3747
2025-09-26 08:37:02,502 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0102 | Val rms_score: 0.3692
2025-09-26 08:37:18,574 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0087 | Val rms_score: 0.3752
2025-09-26 08:37:33,656 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0102 | Val rms_score: 0.3697
2025-09-26 08:37:47,430 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0094 | Val rms_score: 0.3695
2025-09-26 08:38:00,764 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0099 | Val rms_score: 0.3757
2025-09-26 08:38:15,658 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0098 | Val rms_score: 0.3699
2025-09-26 08:38:30,291 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0099 | Val rms_score: 0.3679
|
| 104 |
+
2025-09-26 08:38:45,148 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0096 | Val rms_score: 0.3701
|
| 105 |
+
2025-09-26 08:38:59,769 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0089 | Val rms_score: 0.3745
|
| 106 |
+
2025-09-26 08:39:14,276 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0082 | Val rms_score: 0.3734
|
| 107 |
+
2025-09-26 08:39:28,843 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0079 | Val rms_score: 0.3708
|
| 108 |
+
2025-09-26 08:39:43,141 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0087 | Val rms_score: 0.3745
|
| 109 |
+
2025-09-26 08:39:57,720 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0085 | Val rms_score: 0.3749
|
| 110 |
+
2025-09-26 08:40:12,585 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0087 | Val rms_score: 0.3736
|
| 111 |
+
2025-09-26 08:40:27,274 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0083 | Val rms_score: 0.3707
|
| 112 |
+
2025-09-26 08:40:42,593 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0098 | Val rms_score: 0.3708
|
| 113 |
+
2025-09-26 08:40:56,910 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0087 | Val rms_score: 0.3729
|
| 114 |
+
2025-09-26 08:41:11,167 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0092 | Val rms_score: 0.3744
|
| 115 |
+
2025-09-26 08:41:26,307 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0082 | Val rms_score: 0.3726
|
| 116 |
+
2025-09-26 08:41:41,899 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0078 | Val rms_score: 0.3705
|
| 117 |
+
2025-09-26 08:41:56,207 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0092 | Val rms_score: 0.3746
|
| 118 |
+
2025-09-26 08:42:11,250 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0084 | Val rms_score: 0.3732
|
| 119 |
+
2025-09-26 08:42:25,141 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0084 | Val rms_score: 0.3740
|
| 120 |
+
2025-09-26 08:42:39,028 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0089 | Val rms_score: 0.3718
|
| 121 |
+
2025-09-26 08:42:53,760 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0082 | Val rms_score: 0.3720
|
| 122 |
+
2025-09-26 08:43:08,210 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0083 | Val rms_score: 0.3746
|
| 123 |
+
2025-09-26 08:43:22,508 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0084 | Val rms_score: 0.3741
|
| 124 |
+
2025-09-26 08:43:37,222 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0085 | Val rms_score: 0.3716
|
| 125 |
+
2025-09-26 08:43:50,842 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0081 | Val rms_score: 0.3716
|
| 126 |
+
2025-09-26 08:43:51,899 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.5057
|
| 127 |
+
2025-09-26 08:43:52,214 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset adme_solubility at 2025-09-26_08-43-52
2025-09-26 08:44:05,496 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6591 | Val rms_score: 0.4892
2025-09-26 08:44:05,496 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 08:44:06,218 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4892
2025-09-26 08:44:20,275 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5031 | Val rms_score: 0.4141
2025-09-26 08:44:20,423 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
2025-09-26 08:44:21,013 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4141
2025-09-26 08:44:36,096 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4295 | Val rms_score: 0.3700
2025-09-26 08:44:36,278 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 165
2025-09-26 08:44:36,851 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.3700
2025-09-26 08:44:51,635 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3234 | Val rms_score: 0.4290
2025-09-26 08:45:06,669 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2523 | Val rms_score: 0.3758
2025-09-26 08:45:21,223 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2250 | Val rms_score: 0.3954
2025-09-26 08:45:35,694 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1602 | Val rms_score: 0.3845
2025-09-26 08:45:49,217 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1313 | Val rms_score: 0.4073
2025-09-26 08:46:04,212 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1193 | Val rms_score: 0.3751
2025-09-26 08:46:18,789 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0981 | Val rms_score: 0.3943
2025-09-26 08:46:33,581 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1000 | Val rms_score: 0.3629
2025-09-26 08:46:34,018 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 605
2025-09-26 08:46:34,574 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.3629
2025-09-26 08:46:49,551 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0784 | Val rms_score: 0.3683
2025-09-26 08:47:04,100 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0672 | Val rms_score: 0.3729
2025-09-26 08:47:18,153 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0670 | Val rms_score: 0.3611
2025-09-26 08:47:18,306 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 770
2025-09-26 08:47:18,866 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 14 with val rms_score: 0.3611
2025-09-26 08:47:32,898 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0556 | Val rms_score: 0.3514
2025-09-26 08:47:33,090 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 825
2025-09-26 08:47:33,729 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.3514
2025-09-26 08:47:47,905 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0531 | Val rms_score: 0.3696
2025-09-26 08:48:02,996 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0507 | Val rms_score: 0.3608
2025-09-26 08:48:18,180 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0480 | Val rms_score: 0.3438
2025-09-26 08:48:18,343 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 990
2025-09-26 08:48:18,917 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.3438
2025-09-26 08:48:34,581 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0377 | Val rms_score: 0.3528
2025-09-26 08:48:49,623 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0375 | Val rms_score: 0.3456
2025-09-26 08:49:03,566 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0369 | Val rms_score: 0.3568
2025-09-26 08:49:17,853 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0361 | Val rms_score: 0.3530
2025-09-26 08:49:32,103 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0314 | Val rms_score: 0.3445
2025-09-26 08:49:47,001 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0256 | Val rms_score: 0.3423
2025-09-26 08:49:47,161 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1320
2025-09-26 08:49:47,725 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 24 with val rms_score: 0.3423
2025-09-26 08:50:02,419 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0294 | Val rms_score: 0.3527
2025-09-26 08:50:17,172 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0263 | Val rms_score: 0.3535
2025-09-26 08:50:31,920 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0295 | Val rms_score: 0.3567
2025-09-26 08:50:46,096 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0241 | Val rms_score: 0.3553
2025-09-26 08:51:00,323 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0234 | Val rms_score: 0.3581
2025-09-26 08:51:14,741 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0244 | Val rms_score: 0.3516
2025-09-26 08:51:29,821 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0268 | Val rms_score: 0.3585
2025-09-26 08:51:45,363 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0209 | Val rms_score: 0.3622
2025-09-26 08:51:59,986 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0182 | Val rms_score: 0.3506
2025-09-26 08:52:14,018 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0207 | Val rms_score: 0.3600
2025-09-26 08:52:28,103 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0181 | Val rms_score: 0.3645
2025-09-26 08:52:42,423 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0195 | Val rms_score: 0.3571
2025-09-26 08:52:58,174 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0202 | Val rms_score: 0.3599
2025-09-26 08:53:13,444 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0196 | Val rms_score: 0.3538
2025-09-26 08:53:28,548 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0181 | Val rms_score: 0.3622
2025-09-26 08:53:43,467 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0200 | Val rms_score: 0.3561
2025-09-26 08:53:55,698 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0185 | Val rms_score: 0.3542
2025-09-26 08:54:09,210 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0170 | Val rms_score: 0.3546
2025-09-26 08:54:23,723 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0169 | Val rms_score: 0.3512
2025-09-26 08:54:38,613 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0147 | Val rms_score: 0.3509
2025-09-26 08:54:53,263 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0153 | Val rms_score: 0.3528
2025-09-26 08:55:07,496 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0159 | Val rms_score: 0.3525
2025-09-26 08:55:21,904 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0148 | Val rms_score: 0.3469
2025-09-26 08:55:36,282 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0148 | Val rms_score: 0.3548
2025-09-26 08:55:51,187 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0154 | Val rms_score: 0.3524
2025-09-26 08:56:06,170 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0141 | Val rms_score: 0.3509
2025-09-26 08:56:20,791 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0207 | Val rms_score: 0.3558
2025-09-26 08:56:35,984 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0141 | Val rms_score: 0.3536
2025-09-26 08:56:49,936 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0193 | Val rms_score: 0.3526
2025-09-26 08:57:04,471 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0133 | Val rms_score: 0.3562
2025-09-26 08:57:19,593 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0134 | Val rms_score: 0.3527
2025-09-26 08:57:34,772 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0145 | Val rms_score: 0.3559
2025-09-26 08:57:49,864 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0128 | Val rms_score: 0.3616
2025-09-26 08:58:04,240 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0131 | Val rms_score: 0.3574
2025-09-26 08:58:18,532 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0114 | Val rms_score: 0.3573
2025-09-26 08:58:31,540 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0116 | Val rms_score: 0.3591
2025-09-26 08:58:46,138 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0114 | Val rms_score: 0.3550
2025-09-26 08:59:01,348 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0112 | Val rms_score: 0.3545
2025-09-26 08:59:15,701 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0121 | Val rms_score: 0.3612
2025-09-26 08:59:30,462 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0119 | Val rms_score: 0.3555
2025-09-26 08:59:45,093 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0116 | Val rms_score: 0.3644
2025-09-26 08:59:59,309 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0102 | Val rms_score: 0.3582
2025-09-26 09:00:14,095 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0102 | Val rms_score: 0.3569
2025-09-26 09:00:28,704 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0106 | Val rms_score: 0.3548
2025-09-26 09:00:43,231 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0109 | Val rms_score: 0.3545
2025-09-26 09:00:58,137 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0101 | Val rms_score: 0.3531
2025-09-26 09:01:12,483 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0083 | Val rms_score: 0.3529
2025-09-26 09:01:26,784 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0099 | Val rms_score: 0.3526
2025-09-26 09:01:41,964 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0098 | Val rms_score: 0.3564
2025-09-26 09:01:56,803 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0098 | Val rms_score: 0.3510
2025-09-26 09:02:11,046 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0090 | Val rms_score: 0.3526
2025-09-26 09:02:25,120 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0106 | Val rms_score: 0.3526
2025-09-26 09:02:39,653 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0097 | Val rms_score: 0.3532
2025-09-26 09:02:53,389 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0089 | Val rms_score: 0.3482
2025-09-26 09:03:07,758 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0093 | Val rms_score: 0.3542
2025-09-26 09:03:22,152 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0091 | Val rms_score: 0.3540
2025-09-26 09:03:36,676 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0089 | Val rms_score: 0.3516
2025-09-26 09:03:51,816 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0099 | Val rms_score: 0.3587
2025-09-26 09:04:05,695 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0089 | Val rms_score: 0.3538
2025-09-26 09:04:19,840 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0090 | Val rms_score: 0.3581
2025-09-26 09:04:34,252 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0084 | Val rms_score: 0.3544
2025-09-26 09:04:48,981 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0104 | Val rms_score: 0.3567
2025-09-26 09:05:03,746 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0100 | Val rms_score: 0.3518
2025-09-26 09:05:18,621 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0096 | Val rms_score: 0.3486
2025-09-26 09:05:32,744 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0093 | Val rms_score: 0.3558
2025-09-26 09:05:46,910 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0094 | Val rms_score: 0.3570
2025-09-26 09:06:02,376 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0074 | Val rms_score: 0.3544
2025-09-26 09:06:17,454 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0081 | Val rms_score: 0.3551
2025-09-26 09:06:31,927 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0091 | Val rms_score: 0.3540
2025-09-26 09:06:46,289 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0080 | Val rms_score: 0.3542
2025-09-26 09:07:00,438 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0078 | Val rms_score: 0.3503
2025-09-26 09:07:14,384 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0081 | Val rms_score: 0.3539
2025-09-26 09:07:29,081 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0080 | Val rms_score: 0.3489
2025-09-26 09:07:43,926 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0080 | Val rms_score: 0.3562
2025-09-26 09:07:57,907 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0083 | Val rms_score: 0.3546
2025-09-26 09:08:12,995 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0080 | Val rms_score: 0.3556
2025-09-26 09:08:13,633 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.5134
2025-09-26 09:08:14,024 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset adme_solubility at 2025-09-26_09-08-14
2025-09-26 09:08:28,139 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6818 | Val rms_score: 0.4004
2025-09-26 09:08:28,139 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 55
2025-09-26 09:08:28,754 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4004
2025-09-26 09:08:42,957 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5469 | Val rms_score: 0.3792
2025-09-26 09:08:43,102 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 110
2025-09-26 09:08:43,655 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.3792
2025-09-26 09:08:57,351 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3977 | Val rms_score: 0.3952
2025-09-26 09:09:12,505 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2812 | Val rms_score: 0.3705
2025-09-26 09:09:12,689 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 220
2025-09-26 09:09:13,266 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.3705
2025-09-26 09:09:28,662 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2250 | Val rms_score: 0.3869
2025-09-26 09:09:43,051 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1656 | Val rms_score: 0.4211
2025-09-26 09:09:57,464 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1466 | Val rms_score: 0.3779
2025-09-26 09:10:12,638 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1320 | Val rms_score: 0.3831
2025-09-26 09:10:25,962 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1125 | Val rms_score: 0.3624
2025-09-26 09:10:26,115 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 495
2025-09-26 09:10:26,676 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.3624
2025-09-26 09:10:41,224 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1000 | Val rms_score: 0.3832
2025-09-26 09:10:55,707 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0813 | Val rms_score: 0.3894
2025-09-26 09:11:09,803 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0724 | Val rms_score: 0.3892
2025-09-26 09:11:23,751 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0677 | Val rms_score: 0.3551
2025-09-26 09:11:23,946 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Global step of best model: 715
2025-09-26 09:11:24,498 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.3551
2025-09-26 09:11:38,730 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0739 | Val rms_score: 0.4148
2025-09-26 09:11:52,341 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0569 | Val rms_score: 0.3638
2025-09-26 09:12:07,214 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0494 | Val rms_score: 0.3665
2025-09-26 09:12:21,994 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0491 | Val rms_score: 0.3584
2025-09-26 09:12:35,916 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0432 | Val rms_score: 0.3705
2025-09-26 09:12:50,622 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0392 | Val rms_score: 0.3710
2025-09-26 09:13:03,890 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0351 | Val rms_score: 0.3680
2025-09-26 09:13:18,815 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0355 | Val rms_score: 0.3698
2025-09-26 09:13:34,428 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0346 | Val rms_score: 0.3652
2025-09-26 09:13:48,384 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0334 | Val rms_score: 0.3652
2025-09-26 09:14:02,228 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0338 | Val rms_score: 0.3591
2025-09-26 09:14:14,114 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0266 | Val rms_score: 0.3613
2025-09-26 09:14:29,214 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0271 | Val rms_score: 0.3701
2025-09-26 09:14:44,196 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0280 | Val rms_score: 0.3620
2025-09-26 09:14:59,158 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0252 | Val rms_score: 0.3674
2025-09-26 09:15:11,606 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0240 | Val rms_score: 0.3693
2025-09-26 09:15:25,440 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0267 | Val rms_score: 0.3576
2025-09-26 09:15:39,694 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0240 | Val rms_score: 0.3721
2025-09-26 09:15:55,126 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0230 | Val rms_score: 0.3700
2025-09-26 09:16:10,191 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0277 | Val rms_score: 0.3634
2025-09-26 09:16:25,013 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0240 | Val rms_score: 0.3647
2025-09-26 09:16:38,746 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0278 | Val rms_score: 0.3623
2025-09-26 09:16:52,509 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0226 | Val rms_score: 0.3602
2025-09-26 09:17:06,534 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0202 | Val rms_score: 0.3568
2025-09-26 09:17:21,736 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0173 | Val rms_score: 0.3627
2025-09-26 09:17:36,567 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0175 | Val rms_score: 0.3584
2025-09-26 09:17:50,257 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0182 | Val rms_score: 0.3603
2025-09-26 09:18:03,791 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0183 | Val rms_score: 0.3625
2025-09-26 09:18:18,065 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0186 | Val rms_score: 0.3615
2025-09-26 09:18:31,498 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0151 | Val rms_score: 0.3633
2025-09-26 09:18:46,554 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0160 | Val rms_score: 0.3638
2025-09-26 09:19:01,035 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0165 | Val rms_score: 0.3603
2025-09-26 09:19:14,994 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0150 | Val rms_score: 0.3581
2025-09-26 09:19:28,143 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0140 | Val rms_score: 0.3617
2025-09-26 09:19:42,198 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0149 | Val rms_score: 0.3587
2025-09-26 09:19:57,026 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0143 | Val rms_score: 0.3615
2025-09-26 09:20:12,566 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0152 | Val rms_score: 0.3638
2025-09-26 09:20:25,999 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0137 | Val rms_score: 0.3679
2025-09-26 09:20:40,443 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0134 | Val rms_score: 0.3634
|
| 308 |
+
2025-09-26 09:20:53,751 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0146 | Val rms_score: 0.3664
|
| 309 |
+
2025-09-26 09:21:08,307 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0141 | Val rms_score: 0.3681
|
| 310 |
+
2025-09-26 09:21:24,102 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0150 | Val rms_score: 0.3651
|
| 311 |
+
2025-09-26 09:21:39,029 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0125 | Val rms_score: 0.3625
|
| 312 |
+
2025-09-26 09:21:54,008 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0120 | Val rms_score: 0.3594
|
| 313 |
+
2025-09-26 09:22:08,664 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0123 | Val rms_score: 0.3694
|
| 314 |
+
2025-09-26 09:22:22,792 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0122 | Val rms_score: 0.3609
|
| 315 |
+
2025-09-26 09:22:36,615 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0115 | Val rms_score: 0.3608
|
| 316 |
+
2025-09-26 09:22:49,672 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0138 | Val rms_score: 0.3672
|
| 317 |
+
2025-09-26 09:23:05,537 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0107 | Val rms_score: 0.3682
|
| 318 |
+
2025-09-26 09:23:20,140 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0119 | Val rms_score: 0.3659
|
| 319 |
+
2025-09-26 09:23:34,293 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0115 | Val rms_score: 0.3652
|
| 320 |
+
2025-09-26 09:23:45,270 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0122 | Val rms_score: 0.3637
|
| 321 |
+
2025-09-26 09:24:00,230 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0128 | Val rms_score: 0.3703
|
| 322 |
+
2025-09-26 09:24:15,892 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0126 | Val rms_score: 0.3702
|
| 323 |
+
2025-09-26 09:24:29,837 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0126 | Val rms_score: 0.3659
|
| 324 |
+
2025-09-26 09:24:43,397 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0108 | Val rms_score: 0.3562
2025-09-26 09:24:57,414 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0111 | Val rms_score: 0.3648
2025-09-26 09:25:11,666 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0162 | Val rms_score: 0.3681
2025-09-26 09:25:27,022 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0101 | Val rms_score: 0.3640
2025-09-26 09:25:43,287 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0107 | Val rms_score: 0.3629
2025-09-26 09:25:57,847 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0101 | Val rms_score: 0.3651
2025-09-26 09:26:11,435 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0101 | Val rms_score: 0.3680
2025-09-26 09:26:25,282 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0108 | Val rms_score: 0.3678
2025-09-26 09:26:39,635 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0099 | Val rms_score: 0.3662
2025-09-26 09:26:54,874 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0097 | Val rms_score: 0.3658
2025-09-26 09:27:09,405 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0097 | Val rms_score: 0.3658
2025-09-26 09:27:24,339 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0092 | Val rms_score: 0.3619
2025-09-26 09:27:38,132 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0085 | Val rms_score: 0.3628
2025-09-26 09:27:52,808 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0073 | Val rms_score: 0.3679
2025-09-26 09:28:06,857 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0085 | Val rms_score: 0.3640
2025-09-26 09:28:21,839 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0090 | Val rms_score: 0.3615
2025-09-26 09:28:37,299 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0101 | Val rms_score: 0.3662
2025-09-26 09:28:52,110 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0080 | Val rms_score: 0.3657
2025-09-26 09:29:06,858 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0085 | Val rms_score: 0.3663
2025-09-26 09:29:21,028 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0092 | Val rms_score: 0.3630
2025-09-26 09:29:34,248 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0089 | Val rms_score: 0.3652
2025-09-26 09:29:48,789 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0088 | Val rms_score: 0.3665
2025-09-26 09:30:04,792 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0070 | Val rms_score: 0.3701
2025-09-26 09:30:20,360 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0085 | Val rms_score: 0.3666
2025-09-26 09:30:35,617 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0091 | Val rms_score: 0.3646
2025-09-26 09:30:50,530 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0084 | Val rms_score: 0.3685
2025-09-26 09:31:03,770 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0096 | Val rms_score: 0.3663
2025-09-26 09:31:16,144 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0088 | Val rms_score: 0.3648
2025-09-26 09:31:31,215 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0080 | Val rms_score: 0.3644
2025-09-26 09:31:46,473 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0081 | Val rms_score: 0.3621
2025-09-26 09:32:00,817 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0081 | Val rms_score: 0.3624
2025-09-26 09:32:13,290 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0075 | Val rms_score: 0.3638
2025-09-26 09:32:14,278 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.4879
2025-09-26 09:32:14,675 - logs_modchembert_adme_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5023, Std Dev: 0.0107
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_cl_epochs100_batch_size32_20250926_093214.log
ADDED
@@ -0,0 +1,323 @@
2025-09-26 09:32:14,676 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_cl
2025-09-26 09:32:14,677 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - dataset: astrazeneca_cl, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 09:32:14,680 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_cl at 2025-09-26_09-32-14
2025-09-26 09:32:24,754 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8090 | Val rms_score: 0.4885
2025-09-26 09:32:24,754 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
2025-09-26 09:32:25,331 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4885
2025-09-26 09:32:38,871 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6007 | Val rms_score: 0.4738
2025-09-26 09:32:39,064 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
2025-09-26 09:32:39,684 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4738
2025-09-26 09:32:52,024 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.5039 | Val rms_score: 0.4706
2025-09-26 09:32:52,208 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 108
2025-09-26 09:32:52,816 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.4706
2025-09-26 09:33:03,600 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3889 | Val rms_score: 0.5088
2025-09-26 09:33:16,902 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3524 | Val rms_score: 0.4720
2025-09-26 09:33:29,707 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.3047 | Val rms_score: 0.4750
2025-09-26 09:33:40,387 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2465 | Val rms_score: 0.4860
2025-09-26 09:33:53,014 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1823 | Val rms_score: 0.5073
2025-09-26 09:34:06,258 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1706 | Val rms_score: 0.4955
2025-09-26 09:34:18,190 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1372 | Val rms_score: 0.5037
2025-09-26 09:34:30,117 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1259 | Val rms_score: 0.5062
2025-09-26 09:34:43,701 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1201 | Val rms_score: 0.5202
2025-09-26 09:34:56,102 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1102 | Val rms_score: 0.5150
2025-09-26 09:35:07,745 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0908 | Val rms_score: 0.4943
2025-09-26 09:35:20,268 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1020 | Val rms_score: 0.5043
2025-09-26 09:35:33,155 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0894 | Val rms_score: 0.5117
2025-09-26 09:35:45,476 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.1016 | Val rms_score: 0.5230
2025-09-26 09:35:57,354 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0881 | Val rms_score: 0.5102
2025-09-26 09:36:09,994 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0829 | Val rms_score: 0.5070
2025-09-26 09:36:21,752 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0773 | Val rms_score: 0.5072
2025-09-26 09:36:33,400 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0625 | Val rms_score: 0.5150
2025-09-26 09:36:46,584 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0608 | Val rms_score: 0.5058
2025-09-26 09:36:59,728 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0611 | Val rms_score: 0.5150
2025-09-26 09:37:11,241 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0595 | Val rms_score: 0.5256
2025-09-26 09:37:23,435 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0551 | Val rms_score: 0.5182
2025-09-26 09:37:36,564 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0608 | Val rms_score: 0.5082
2025-09-26 09:37:47,593 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0530 | Val rms_score: 0.5101
2025-09-26 09:38:01,072 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0476 | Val rms_score: 0.5195
2025-09-26 09:38:13,470 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0501 | Val rms_score: 0.5077
2025-09-26 09:38:26,189 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0467 | Val rms_score: 0.5163
2025-09-26 09:38:38,055 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0454 | Val rms_score: 0.5257
2025-09-26 09:38:50,810 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0443 | Val rms_score: 0.5212
2025-09-26 09:39:03,800 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0373 | Val rms_score: 0.5150
2025-09-26 09:39:16,194 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0366 | Val rms_score: 0.5184
2025-09-26 09:39:27,932 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0380 | Val rms_score: 0.5050
2025-09-26 09:39:40,516 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0354 | Val rms_score: 0.5070
2025-09-26 09:39:52,783 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0354 | Val rms_score: 0.5201
2025-09-26 09:40:04,449 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0371 | Val rms_score: 0.5131
2025-09-26 09:40:16,182 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0356 | Val rms_score: 0.5110
2025-09-26 09:40:28,596 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0319 | Val rms_score: 0.5109
2025-09-26 09:40:39,701 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0293 | Val rms_score: 0.5129
2025-09-26 09:40:52,027 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0332 | Val rms_score: 0.5076
2025-09-26 09:41:04,369 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0317 | Val rms_score: 0.5162
2025-09-26 09:41:16,064 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0328 | Val rms_score: 0.5137
2025-09-26 09:41:27,875 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0369 | Val rms_score: 0.5303
2025-09-26 09:41:40,153 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0306 | Val rms_score: 0.5068
2025-09-26 09:41:53,385 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0271 | Val rms_score: 0.5055
2025-09-26 09:42:04,710 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0280 | Val rms_score: 0.5017
2025-09-26 09:42:17,476 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0286 | Val rms_score: 0.5009
2025-09-26 09:42:29,950 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0268 | Val rms_score: 0.4980
2025-09-26 09:42:41,689 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0251 | Val rms_score: 0.5037
2025-09-26 09:42:54,603 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0243 | Val rms_score: 0.5033
2025-09-26 09:43:07,487 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0225 | Val rms_score: 0.5001
2025-09-26 09:43:18,602 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0222 | Val rms_score: 0.4995
2025-09-26 09:43:30,433 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0230 | Val rms_score: 0.4986
2025-09-26 09:43:43,772 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0238 | Val rms_score: 0.5000
2025-09-26 09:43:57,010 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0232 | Val rms_score: 0.4979
2025-09-26 09:44:08,477 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0227 | Val rms_score: 0.5083
2025-09-26 09:44:20,814 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0262 | Val rms_score: 0.5042
2025-09-26 09:44:33,350 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0234 | Val rms_score: 0.5155
2025-09-26 09:44:45,575 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0201 | Val rms_score: 0.5109
2025-09-26 09:44:57,813 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0211 | Val rms_score: 0.5031
2025-09-26 09:45:10,265 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0228 | Val rms_score: 0.5016
2025-09-26 09:45:22,732 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0227 | Val rms_score: 0.5131
2025-09-26 09:45:34,255 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0212 | Val rms_score: 0.5063
2025-09-26 09:45:46,973 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0224 | Val rms_score: 0.5114
2025-09-26 09:46:00,652 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0225 | Val rms_score: 0.4919
2025-09-26 09:46:11,606 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0189 | Val rms_score: 0.5003
2025-09-26 09:46:22,103 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0193 | Val rms_score: 0.5037
2025-09-26 09:46:35,450 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0190 | Val rms_score: 0.5009
2025-09-26 09:46:46,873 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0193 | Val rms_score: 0.4994
2025-09-26 09:46:58,497 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0195 | Val rms_score: 0.5035
2025-09-26 09:47:11,700 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0197 | Val rms_score: 0.5010
2025-09-26 09:47:23,233 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0173 | Val rms_score: 0.4978
2025-09-26 09:47:34,275 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0161 | Val rms_score: 0.4940
2025-09-26 09:47:47,385 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0177 | Val rms_score: 0.5042
2025-09-26 09:48:00,487 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0184 | Val rms_score: 0.4950
2025-09-26 09:48:10,340 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0189 | Val rms_score: 0.5051
2025-09-26 09:48:23,375 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0187 | Val rms_score: 0.4997
2025-09-26 09:48:35,119 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0173 | Val rms_score: 0.4895
2025-09-26 09:48:45,714 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0194 | Val rms_score: 0.5031
2025-09-26 09:48:59,136 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0169 | Val rms_score: 0.5011
2025-09-26 09:49:11,431 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0179 | Val rms_score: 0.4959
2025-09-26 09:49:22,170 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0146 | Val rms_score: 0.4981
2025-09-26 09:49:35,314 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0187 | Val rms_score: 0.5083
2025-09-26 09:49:48,562 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0156 | Val rms_score: 0.4934
2025-09-26 09:49:58,065 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0160 | Val rms_score: 0.5006
2025-09-26 09:50:11,339 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0170 | Val rms_score: 0.5040
2025-09-26 09:50:23,828 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0188 | Val rms_score: 0.4952
2025-09-26 09:50:33,654 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0163 | Val rms_score: 0.5067
2025-09-26 09:50:46,663 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0155 | Val rms_score: 0.5004
2025-09-26 09:50:58,074 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0143 | Val rms_score: 0.4925
2025-09-26 09:51:08,996 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0176 | Val rms_score: 0.4911
2025-09-26 09:51:22,414 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0164 | Val rms_score: 0.5051
2025-09-26 09:51:32,879 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0158 | Val rms_score: 0.4934
2025-09-26 09:51:43,792 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0162 | Val rms_score: 0.4975
2025-09-26 09:51:56,747 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0169 | Val rms_score: 0.4963
2025-09-26 09:52:08,282 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0153 | Val rms_score: 0.5014
2025-09-26 09:52:19,870 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0148 | Val rms_score: 0.4899
2025-09-26 09:52:32,311 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0133 | Val rms_score: 0.5014
2025-09-26 09:52:32,778 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5085
2025-09-26 09:52:33,142 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_cl at 2025-09-26_09-52-33
2025-09-26 09:52:45,395 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8125 | Val rms_score: 0.4687
2025-09-26 09:52:45,395 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
2025-09-26 09:52:45,992 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.4687
2025-09-26 09:52:55,367 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5868 | Val rms_score: 0.4990
2025-09-26 09:53:08,425 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4297 | Val rms_score: 0.4724
2025-09-26 09:53:20,191 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3854 | Val rms_score: 0.4819
2025-09-26 09:53:29,852 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2986 | Val rms_score: 0.5458
2025-09-26 09:53:43,262 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2412 | Val rms_score: 0.5047
2025-09-26 09:53:53,415 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2222 | Val rms_score: 0.5413
2025-09-26 09:54:04,910 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1849 | Val rms_score: 0.4816
2025-09-26 09:54:18,135 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1367 | Val rms_score: 0.5110
2025-09-26 09:54:28,330 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1372 | Val rms_score: 0.5359
2025-09-26 09:54:39,861 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1372 | Val rms_score: 0.5188
2025-09-26 09:54:53,412 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1152 | Val rms_score: 0.5237
2025-09-26 09:55:03,350 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1098 | Val rms_score: 0.5188
2025-09-26 09:55:14,828 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0918 | Val rms_score: 0.5180
2025-09-26 09:55:27,766 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0877 | Val rms_score: 0.5171
2025-09-26 09:55:39,216 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0838 | Val rms_score: 0.5349
2025-09-26 09:55:51,004 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0846 | Val rms_score: 0.5278
2025-09-26 09:56:03,938 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0773 | Val rms_score: 0.5036
2025-09-26 09:56:14,845 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0725 | Val rms_score: 0.5253
2025-09-26 09:56:25,204 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0746 | Val rms_score: 0.5230
2025-09-26 09:56:38,467 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0612 | Val rms_score: 0.5237
2025-09-26 09:56:49,159 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0846 | Val rms_score: 0.5346
2025-09-26 09:56:59,556 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0725 | Val rms_score: 0.5173
2025-09-26 09:57:12,832 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0595 | Val rms_score: 0.5179
2025-09-26 09:57:23,583 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0612 | Val rms_score: 0.5016
2025-09-26 09:57:36,024 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0577 | Val rms_score: 0.5202
2025-09-26 09:57:48,452 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0473 | Val rms_score: 0.5042
2025-09-26 09:58:01,527 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0586 | Val rms_score: 0.5139
2025-09-26 09:58:13,270 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0458 | Val rms_score: 0.5195
2025-09-26 09:58:25,829 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0458 | Val rms_score: 0.5090
2025-09-26 09:58:38,478 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0432 | Val rms_score: 0.5069
2025-09-26 09:58:50,831 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0484 | Val rms_score: 0.5021
2025-09-26 09:59:03,439 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0391 | Val rms_score: 0.5172
2025-09-26 09:59:15,881 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0378 | Val rms_score: 0.5073
2025-09-26 09:59:28,129 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0373 | Val rms_score: 0.4999
2025-09-26 09:59:39,852 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0393 | Val rms_score: 0.5089
|
| 150 |
+
2025-09-26 09:59:52,851 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0356 | Val rms_score: 0.5081
|
| 151 |
+
2025-09-26 10:00:05,271 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0367 | Val rms_score: 0.5099
|
| 152 |
+
2025-09-26 10:00:17,393 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0356 | Val rms_score: 0.5023
|
| 153 |
+
2025-09-26 10:00:29,476 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0315 | Val rms_score: 0.5163
|
| 154 |
+
2025-09-26 10:00:41,887 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0354 | Val rms_score: 0.5013
|
| 155 |
+
2025-09-26 10:00:54,815 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0291 | Val rms_score: 0.5085
|
| 156 |
+
2025-09-26 10:01:06,380 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0291 | Val rms_score: 0.4989
|
| 157 |
+
2025-09-26 10:01:16,864 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0308 | Val rms_score: 0.5059
|
| 158 |
+
2025-09-26 10:01:29,377 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0312 | Val rms_score: 0.5115
|
| 159 |
+
2025-09-26 10:01:40,913 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0339 | Val rms_score: 0.5120
|
| 160 |
+
2025-09-26 10:01:53,863 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0304 | Val rms_score: 0.4997
|
| 161 |
+
2025-09-26 10:02:05,984 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0264 | Val rms_score: 0.5073
|
| 162 |
+
2025-09-26 10:02:18,013 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0265 | Val rms_score: 0.4913
|
| 163 |
+
2025-09-26 10:02:30,181 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0263 | Val rms_score: 0.4939
|
| 164 |
+
2025-09-26 10:02:41,853 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0247 | Val rms_score: 0.5059
|
| 165 |
+
2025-09-26 10:02:54,602 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0253 | Val rms_score: 0.5091
|
| 166 |
+
2025-09-26 10:03:06,895 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0210 | Val rms_score: 0.5068
|
| 167 |
+
2025-09-26 10:03:18,878 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0209 | Val rms_score: 0.5004
|
| 168 |
+
2025-09-26 10:03:30,899 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0232 | Val rms_score: 0.5042
|
| 169 |
+
2025-09-26 10:03:44,069 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0215 | Val rms_score: 0.5029
|
| 170 |
+
2025-09-26 10:03:56,663 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0222 | Val rms_score: 0.4945
|
| 171 |
+
2025-09-26 10:04:08,324 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0225 | Val rms_score: 0.5115
|
| 172 |
+
2025-09-26 10:04:20,742 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0220 | Val rms_score: 0.5037
|
| 173 |
+
2025-09-26 10:04:33,029 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0207 | Val rms_score: 0.5065
|
| 174 |
+
2025-09-26 10:04:45,151 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0203 | Val rms_score: 0.5026
|
| 175 |
+
2025-09-26 10:04:57,246 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0239 | Val rms_score: 0.5088
|
| 176 |
+
2025-09-26 10:05:09,448 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0213 | Val rms_score: 0.5077
|
| 177 |
+
2025-09-26 10:05:21,783 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0217 | Val rms_score: 0.5102
|
| 178 |
+
2025-09-26 10:05:33,635 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0212 | Val rms_score: 0.5151
|
| 179 |
+
2025-09-26 10:05:45,841 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0226 | Val rms_score: 0.5052
|
| 180 |
+
2025-09-26 10:05:58,495 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0243 | Val rms_score: 0.4972
|
| 181 |
+
2025-09-26 10:06:10,353 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0207 | Val rms_score: 0.5058
|
| 182 |
+
2025-09-26 10:06:22,922 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0187 | Val rms_score: 0.4996
|
| 183 |
+
2025-09-26 10:06:35,298 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0197 | Val rms_score: 0.4973
|
| 184 |
+
2025-09-26 10:06:47,433 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0178 | Val rms_score: 0.5034
|
| 185 |
+
2025-09-26 10:06:59,928 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0186 | Val rms_score: 0.4987
|
| 186 |
+
2025-09-26 10:07:12,439 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0205 | Val rms_score: 0.4963
|
| 187 |
+
2025-09-26 10:07:24,785 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0207 | Val rms_score: 0.4933
|
| 188 |
+
2025-09-26 10:07:36,401 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0195 | Val rms_score: 0.4869
|
| 189 |
+
2025-09-26 10:07:48,987 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0184 | Val rms_score: 0.4891
|
| 190 |
+
2025-09-26 10:08:01,606 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0195 | Val rms_score: 0.5010
|
| 191 |
+
2025-09-26 10:08:14,320 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0153 | Val rms_score: 0.4873
|
| 192 |
+
2025-09-26 10:08:26,280 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0166 | Val rms_score: 0.4935
|
| 193 |
+
2025-09-26 10:08:38,327 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0161 | Val rms_score: 0.5002
|
| 194 |
+
2025-09-26 10:08:51,615 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0192 | Val rms_score: 0.5011
|
| 195 |
+
2025-09-26 10:09:03,776 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0163 | Val rms_score: 0.4912
|
| 196 |
+
2025-09-26 10:09:15,916 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0191 | Val rms_score: 0.4975
|
| 197 |
+
2025-09-26 10:09:28,409 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0155 | Val rms_score: 0.4940
|
| 198 |
+
2025-09-26 10:09:40,650 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0170 | Val rms_score: 0.4918
|
| 199 |
+
2025-09-26 10:09:51,922 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0162 | Val rms_score: 0.4934
|
| 200 |
+
2025-09-26 10:10:04,713 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0160 | Val rms_score: 0.4931
|
| 201 |
+
2025-09-26 10:10:16,965 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0148 | Val rms_score: 0.5006
|
| 202 |
+
2025-09-26 10:10:26,266 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0143 | Val rms_score: 0.4964
|
| 203 |
+
2025-09-26 10:10:39,267 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0163 | Val rms_score: 0.4969
|
| 204 |
+
2025-09-26 10:10:48,971 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0151 | Val rms_score: 0.4987
|
| 205 |
+
2025-09-26 10:11:01,221 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0151 | Val rms_score: 0.4939
|
| 206 |
+
2025-09-26 10:11:14,595 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0142 | Val rms_score: 0.4975
|
| 207 |
+
2025-09-26 10:11:26,129 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0141 | Val rms_score: 0.4904
|
| 208 |
+
2025-09-26 10:11:37,529 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0152 | Val rms_score: 0.5062
|
| 209 |
+
2025-09-26 10:11:50,407 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0133 | Val rms_score: 0.4925
|
| 210 |
+
2025-09-26 10:12:03,023 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0142 | Val rms_score: 0.4960
|
| 211 |
+
2025-09-26 10:12:13,911 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0142 | Val rms_score: 0.5004
|
| 212 |
+
2025-09-26 10:12:26,457 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0136 | Val rms_score: 0.4870
|
| 213 |
+
2025-09-26 10:12:38,264 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0154 | Val rms_score: 0.4986
|
| 214 |
+
2025-09-26 10:12:38,822 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.4980
|
| 215 |
+
2025-09-26 10:12:39,187 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_cl at 2025-09-26_10-12-39
|
| 216 |
+
2025-09-26 10:12:50,263 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9583 | Val rms_score: 0.5456
|
| 217 |
+
2025-09-26 10:12:50,263 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 36
|
| 218 |
+
2025-09-26 10:12:51,077 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.5456
|
| 219 |
+
2025-09-26 10:13:02,733 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6597 | Val rms_score: 0.4698
|
| 220 |
+
2025-09-26 10:13:02,909 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 72
|
| 221 |
+
2025-09-26 10:13:03,481 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.4698
|
| 222 |
+
2025-09-26 10:13:16,005 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4980 | Val rms_score: 0.4897
|
| 223 |
+
2025-09-26 10:13:28,738 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3941 | Val rms_score: 0.4789
|
| 224 |
+
2025-09-26 10:13:40,379 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3073 | Val rms_score: 0.4964
|
| 225 |
+
2025-09-26 10:13:52,900 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2383 | Val rms_score: 0.4777
|
| 226 |
+
2025-09-26 10:14:05,505 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2092 | Val rms_score: 0.4609
|
| 227 |
+
2025-09-26 10:14:05,654 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Global step of best model: 252
|
| 228 |
+
2025-09-26 10:14:06,205 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.4609
|
| 229 |
+
2025-09-26 10:14:18,800 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1962 | Val rms_score: 0.5372
|
| 230 |
+
2025-09-26 10:14:30,828 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1641 | Val rms_score: 0.5084
|
| 231 |
+
2025-09-26 10:14:43,103 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1267 | Val rms_score: 0.5174
|
| 232 |
+
2025-09-26 10:14:55,519 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1172 | Val rms_score: 0.5115
|
| 233 |
+
2025-09-26 10:15:06,939 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1001 | Val rms_score: 0.5171
|
| 234 |
+
2025-09-26 10:15:19,065 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0924 | Val rms_score: 0.5119
|
| 235 |
+
2025-09-26 10:15:31,414 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0957 | Val rms_score: 0.5196
|
| 236 |
+
2025-09-26 10:15:43,464 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0903 | Val rms_score: 0.5159
|
| 237 |
+
2025-09-26 10:15:55,299 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0768 | Val rms_score: 0.5108
|
| 238 |
+
2025-09-26 10:16:08,109 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0911 | Val rms_score: 0.4990
|
| 239 |
+
2025-09-26 10:16:20,003 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0712 | Val rms_score: 0.5197
|
| 240 |
+
2025-09-26 10:16:31,850 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0677 | Val rms_score: 0.5247
|
| 241 |
+
2025-09-26 10:16:44,293 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0645 | Val rms_score: 0.5060
|
| 242 |
+
2025-09-26 10:16:56,276 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0540 | Val rms_score: 0.5024
|
| 243 |
+
2025-09-26 10:17:08,724 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0543 | Val rms_score: 0.4949
|
| 244 |
+
2025-09-26 10:17:20,591 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0692 | Val rms_score: 0.5215
|
| 245 |
+
2025-09-26 10:17:33,262 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0651 | Val rms_score: 0.5025
|
| 246 |
+
2025-09-26 10:17:44,989 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0553 | Val rms_score: 0.5080
|
| 247 |
+
2025-09-26 10:17:57,634 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0501 | Val rms_score: 0.5088
|
| 248 |
+
2025-09-26 10:18:10,577 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0464 | Val rms_score: 0.5159
|
| 249 |
+
2025-09-26 10:18:23,721 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0476 | Val rms_score: 0.5113
|
| 250 |
+
2025-09-26 10:18:35,473 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0421 | Val rms_score: 0.5294
|
| 251 |
+
2025-09-26 10:18:48,030 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0404 | Val rms_score: 0.5120
|
| 252 |
+
2025-09-26 10:19:00,467 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0356 | Val rms_score: 0.5049
|
| 253 |
+
2025-09-26 10:19:12,773 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0373 | Val rms_score: 0.5079
|
| 254 |
+
2025-09-26 10:19:25,553 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0373 | Val rms_score: 0.5073
|
| 255 |
+
2025-09-26 10:19:37,932 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0384 | Val rms_score: 0.4823
|
| 256 |
+
2025-09-26 10:19:50,093 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0336 | Val rms_score: 0.5064
|
| 257 |
+
2025-09-26 10:20:02,390 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0334 | Val rms_score: 0.5207
|
| 258 |
+
2025-09-26 10:20:15,360 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0337 | Val rms_score: 0.5003
|
| 259 |
+
2025-09-26 10:20:27,231 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0326 | Val rms_score: 0.5038
|
| 260 |
+
2025-09-26 10:20:39,510 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0344 | Val rms_score: 0.5032
|
| 261 |
+
2025-09-26 10:20:51,432 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0308 | Val rms_score: 0.4834
|
| 262 |
+
2025-09-26 10:21:03,224 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0310 | Val rms_score: 0.5101
|
| 263 |
+
2025-09-26 10:21:16,045 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0280 | Val rms_score: 0.5020
|
| 264 |
+
2025-09-26 10:21:28,534 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0267 | Val rms_score: 0.4996
|
| 265 |
+
2025-09-26 10:21:40,212 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0293 | Val rms_score: 0.4994
|
| 266 |
+
2025-09-26 10:21:52,662 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0287 | Val rms_score: 0.5267
|
| 267 |
+
2025-09-26 10:22:04,781 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0254 | Val rms_score: 0.5088
|
| 268 |
+
2025-09-26 10:22:17,840 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0232 | Val rms_score: 0.5105
|
| 269 |
+
2025-09-26 10:22:29,736 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0230 | Val rms_score: 0.4972
|
| 270 |
+
2025-09-26 10:22:41,979 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0234 | Val rms_score: 0.4937
|
| 271 |
+
2025-09-26 10:22:54,337 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0242 | Val rms_score: 0.5033
|
| 272 |
+
2025-09-26 10:23:06,566 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0221 | Val rms_score: 0.5095
|
| 273 |
+
2025-09-26 10:23:19,368 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0244 | Val rms_score: 0.5037
|
| 274 |
+
2025-09-26 10:23:31,258 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0231 | Val rms_score: 0.5110
|
| 275 |
+
2025-09-26 10:23:43,932 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0286 | Val rms_score: 0.5072
|
| 276 |
+
2025-09-26 10:23:56,288 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0219 | Val rms_score: 0.5095
|
| 277 |
+
2025-09-26 10:24:09,614 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0260 | Val rms_score: 0.5144
|
| 278 |
+
2025-09-26 10:24:22,008 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0215 | Val rms_score: 0.5193
|
| 279 |
+
2025-09-26 10:24:34,624 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0224 | Val rms_score: 0.5001
|
| 280 |
+
2025-09-26 10:24:46,903 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0197 | Val rms_score: 0.5052
|
| 281 |
+
2025-09-26 10:24:59,028 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0199 | Val rms_score: 0.5022
|
| 282 |
+
2025-09-26 10:25:10,856 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0179 | Val rms_score: 0.4984
|
| 283 |
+
2025-09-26 10:25:23,286 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0190 | Val rms_score: 0.4989
|
| 284 |
+
2025-09-26 10:25:35,748 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0183 | Val rms_score: 0.5106
|
| 285 |
+
2025-09-26 10:25:48,118 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0162 | Val rms_score: 0.4994
|
| 286 |
+
2025-09-26 10:25:59,953 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0175 | Val rms_score: 0.5035
|
| 287 |
+
2025-09-26 10:26:12,418 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0184 | Val rms_score: 0.5000
|
| 288 |
+
2025-09-26 10:26:24,631 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0162 | Val rms_score: 0.4927
|
| 289 |
+
2025-09-26 10:26:37,091 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0171 | Val rms_score: 0.5008
|
| 290 |
+
2025-09-26 10:26:49,462 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0163 | Val rms_score: 0.4998
|
| 291 |
+
2025-09-26 10:27:01,525 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0162 | Val rms_score: 0.5188
|
| 292 |
+
2025-09-26 10:27:14,057 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0157 | Val rms_score: 0.5017
|
| 293 |
+
2025-09-26 10:27:26,773 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0165 | Val rms_score: 0.5148
|
| 294 |
+
2025-09-26 10:27:39,064 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0175 | Val rms_score: 0.4989
|
| 295 |
+
2025-09-26 10:27:51,031 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0177 | Val rms_score: 0.5108
|
| 296 |
+
2025-09-26 10:28:03,617 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0160 | Val rms_score: 0.5013
|
| 297 |
+
2025-09-26 10:28:15,290 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0141 | Val rms_score: 0.5136
|
| 298 |
+
2025-09-26 10:28:27,435 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0150 | Val rms_score: 0.5099
|
| 299 |
+
2025-09-26 10:28:39,830 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0129 | Val rms_score: 0.5071
|
| 300 |
+
2025-09-26 10:28:51,546 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0136 | Val rms_score: 0.5044
|
| 301 |
+
2025-09-26 10:29:04,179 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0131 | Val rms_score: 0.5056
|
| 302 |
+
2025-09-26 10:29:16,459 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0118 | Val rms_score: 0.5070
|
| 303 |
+
2025-09-26 10:29:29,071 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0127 | Val rms_score: 0.4986
|
| 304 |
+
2025-09-26 10:29:41,070 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0132 | Val rms_score: 0.5070
|
| 305 |
+
2025-09-26 10:29:54,649 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0134 | Val rms_score: 0.4917
|
| 306 |
+
2025-09-26 10:30:07,381 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0132 | Val rms_score: 0.5035
|
| 307 |
+
2025-09-26 10:30:18,820 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0129 | Val rms_score: 0.5061
|
| 308 |
+
2025-09-26 10:30:31,514 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0113 | Val rms_score: 0.5038
|
| 309 |
+
2025-09-26 10:30:44,032 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0127 | Val rms_score: 0.4984
|
| 310 |
+
2025-09-26 10:30:56,845 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0115 | Val rms_score: 0.4967
|
| 311 |
+
2025-09-26 10:31:08,954 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0132 | Val rms_score: 0.5123
|
| 312 |
+
2025-09-26 10:31:21,153 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0133 | Val rms_score: 0.4971
|
| 313 |
+
2025-09-26 10:31:34,158 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0142 | Val rms_score: 0.5040
|
| 314 |
+
2025-09-26 10:31:45,796 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0124 | Val rms_score: 0.5108
|
| 315 |
+
2025-09-26 10:31:58,293 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0123 | Val rms_score: 0.5017
|
| 316 |
+
2025-09-26 10:32:10,683 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0128 | Val rms_score: 0.5059
|
| 317 |
+
2025-09-26 10:32:22,925 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0126 | Val rms_score: 0.5061
|
| 318 |
+
2025-09-26 10:32:35,502 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0120 | Val rms_score: 0.5038
|
| 319 |
+
2025-09-26 10:32:48,355 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0109 | Val rms_score: 0.5042
|
| 320 |
+
2025-09-26 10:33:01,220 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0107 | Val rms_score: 0.5062
|
| 321 |
+
2025-09-26 10:33:12,990 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0114 | Val rms_score: 0.5007
|
| 322 |
+
2025-09-26 10:33:13,912 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Test rms_score: 0.5248
|
| 323 |
+
2025-09-26 10:33:14,282 - logs_modchembert_astrazeneca_cl_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.5104, Std Dev: 0.0110
|
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_logd74_epochs100_batch_size32_20250926_103314.log
ADDED
@@ -0,0 +1,415 @@
| 1 |
+
2025-09-26 10:33:14,293 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_logd74
|
| 2 |
+
2025-09-26 10:33:14,294 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - dataset: astrazeneca_logd74, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
|
| 3 |
+
2025-09-26 10:33:14,297 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_logd74 at 2025-09-26_10-33-14
|
| 4 |
+
2025-09-26 10:33:43,674 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.4813 | Val rms_score: 0.7282
|
| 5 |
+
2025-09-26 10:33:43,674 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
|
| 6 |
+
2025-09-26 10:33:44,390 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.7282
|
| 7 |
+
2025-09-26 10:34:15,968 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2141 | Val rms_score: 0.6808
|
| 8 |
+
2025-09-26 10:34:16,127 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
|
| 9 |
+
2025-09-26 10:34:16,663 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6808
|
| 10 |
+
2025-09-26 10:34:47,898 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1833 | Val rms_score: 0.6713
|
| 11 |
+
2025-09-26 10:34:48,052 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
|
| 12 |
+
2025-09-26 10:34:48,612 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.6713
|
| 13 |
+
2025-09-26 10:35:19,459 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1656 | Val rms_score: 0.6912
|
| 14 |
+
2025-09-26 10:35:51,049 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1206 | Val rms_score: 0.6743
|
| 15 |
+
2025-09-26 10:36:23,337 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1161 | Val rms_score: 0.6698
|
| 16 |
+
2025-09-26 10:36:23,959 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 630
|
| 17 |
+
2025-09-26 10:36:24,548 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.6698
|
| 18 |
+
2025-09-26 10:36:55,945 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.0871 | Val rms_score: 0.6748
|
| 19 |
+
2025-09-26 10:37:18,390 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0867 | Val rms_score: 0.6740
|
| 20 |
+
2025-09-26 10:37:37,875 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0799 | Val rms_score: 0.6712
|
| 21 |
+
2025-09-26 10:37:54,342 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0634 | Val rms_score: 0.6826
|
| 22 |
+
2025-09-26 10:38:12,496 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0656 | Val rms_score: 0.6605
|
| 23 |
+
2025-09-26 10:38:13,033 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1155
|
| 24 |
+
2025-09-26 10:38:13,599 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 11 with val rms_score: 0.6605
|
| 25 |
+
2025-09-26 10:38:30,793 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0570 | Val rms_score: 0.6702
|
| 26 |
+
2025-09-26 10:38:49,156 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0483 | Val rms_score: 0.6598
|
| 27 |
+
2025-09-26 10:38:49,312 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1365
|
| 28 |
+
2025-09-26 10:38:49,880 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.6598
|
| 29 |
+
2025-09-26 10:39:07,607 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0442 | Val rms_score: 0.6689
|
| 30 |
+
2025-09-26 10:39:26,253 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0429 | Val rms_score: 0.6615
|
| 31 |
+
2025-09-26 10:39:44,697 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0416 | Val rms_score: 0.6553
|
| 32 |
+
2025-09-26 10:39:45,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1680
|
| 33 |
+
2025-09-26 10:39:45,878 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.6553
|
| 34 |
+
2025-09-26 10:40:04,211 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0401 | Val rms_score: 0.6656
|
| 35 |
+
2025-09-26 10:40:22,156 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0377 | Val rms_score: 0.6592
|
| 36 |
+
2025-09-26 10:40:39,679 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0354 | Val rms_score: 0.6527
|
| 37 |
+
2025-09-26 10:40:39,837 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1995
|
| 38 |
+
2025-09-26 10:40:40,444 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.6527
|
| 39 |
+
2025-09-26 10:40:58,593 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0348 | Val rms_score: 0.6599
|
| 40 |
+
2025-09-26 10:41:15,367 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0375 | Val rms_score: 0.6618
|
| 41 |
+
2025-09-26 10:41:32,156 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0324 | Val rms_score: 0.6533
|
| 42 |
+
2025-09-26 10:41:47,730 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0284 | Val rms_score: 0.6572
|
| 43 |
+
2025-09-26 10:42:05,177 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0285 | Val rms_score: 0.6539
|
| 44 |
+
2025-09-26 10:42:20,880 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0322 | Val rms_score: 0.6507
|
| 45 |
+
2025-09-26 10:42:21,054 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2625
|
| 46 |
+
2025-09-26 10:42:21,644 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.6507
|
| 47 |
+
2025-09-26 10:42:40,220 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0293 | Val rms_score: 0.6453
|
| 48 |
+
2025-09-26 10:42:40,789 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2730
|
| 49 |
+
2025-09-26 10:42:41,364 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.6453
|
| 50 |
+
2025-09-26 10:42:59,694 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0281 | Val rms_score: 0.6510
|
| 51 |
+
2025-09-26 10:43:17,711 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0243 | Val rms_score: 0.6484
|
| 52 |
+
2025-09-26 10:43:37,073 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0259 | Val rms_score: 0.6522
|
| 53 |
+
2025-09-26 10:43:55,425 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0264 | Val rms_score: 0.6604
|
| 54 |
+
2025-09-26 10:44:13,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0249 | Val rms_score: 0.6479
|
| 55 |
+
2025-09-26 10:44:31,811 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0241 | Val rms_score: 0.6568
|
| 56 |
+
2025-09-26 10:44:50,396 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0238 | Val rms_score: 0.6539
|
| 57 |
+
2025-09-26 10:45:08,952 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0241 | Val rms_score: 0.6638
|
| 58 |
+
2025-09-26 10:45:26,356 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0218 | Val rms_score: 0.6470
|
| 59 |
+
2025-09-26 10:45:43,187 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0216 | Val rms_score: 0.6461
|
| 60 |
+
2025-09-26 10:45:59,844 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0227 | Val rms_score: 0.6480
|
| 61 |
+
2025-09-26 10:46:18,146 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0210 | Val rms_score: 0.6476
|
| 62 |
+
2025-09-26 10:46:36,984 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0214 | Val rms_score: 0.6447
|
| 63 |
+
2025-09-26 10:46:37,135 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4095
|
| 64 |
+
2025-09-26 10:46:37,718 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 39 with val rms_score: 0.6447
|
| 65 |
+
2025-09-26 10:46:56,663 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0214 | Val rms_score: 0.6460
|
| 66 |
+
2025-09-26 10:47:14,840 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0219 | Val rms_score: 0.6472
|
| 67 |
+
2025-09-26 10:47:32,600 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0182 | Val rms_score: 0.6504
|
| 68 |
+
2025-09-26 10:47:50,873 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0180 | Val rms_score: 0.6403
|
| 69 |
+
2025-09-26 10:47:51,041 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4515
|
| 70 |
+
2025-09-26 10:47:51,622 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 43 with val rms_score: 0.6403
|
| 71 |
+
2025-09-26 10:48:09,966 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0191 | Val rms_score: 0.6488
|
| 72 |
+
2025-09-26 10:48:28,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0184 | Val rms_score: 0.6446
|
| 73 |
+
2025-09-26 10:48:46,726 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0181 | Val rms_score: 0.6427
|
| 74 |
+
2025-09-26 10:49:06,128 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0193 | Val rms_score: 0.6451
|
| 75 |
+
2025-09-26 10:49:25,249 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0161 | Val rms_score: 0.6431
|
| 76 |
+
2025-09-26 10:49:43,307 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0178 | Val rms_score: 0.6461
|
| 77 |
+
2025-09-26 10:49:59,866 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0182 | Val rms_score: 0.6493
|
| 78 |
+
2025-09-26 10:50:14,950 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0183 | Val rms_score: 0.6441
|
| 79 |
+
2025-09-26 10:50:32,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0189 | Val rms_score: 0.6431
|
| 80 |
+
2025-09-26 10:50:50,003 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0173 | Val rms_score: 0.6463
|
| 81 |
+
2025-09-26 10:51:09,072 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0169 | Val rms_score: 0.6398
|
| 82 |
+
2025-09-26 10:51:09,229 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5670
|
| 83 |
+
2025-09-26 10:51:09,788 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 54 with val rms_score: 0.6398
|
| 84 |
+
2025-09-26 10:51:28,063 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0190 | Val rms_score: 0.6362
|
| 85 |
+
2025-09-26 10:51:28,218 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5775
|
| 86 |
+
2025-09-26 10:51:28,785 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 55 with val rms_score: 0.6362
|
| 87 |
+
2025-09-26 10:51:47,711 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0174 | Val rms_score: 0.6427
|
| 88 |
+
2025-09-26 10:52:06,445 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0164 | Val rms_score: 0.6421
|
| 89 |
+
2025-09-26 10:52:26,145 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0154 | Val rms_score: 0.6427
|
| 90 |
+
2025-09-26 10:52:44,578 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0156 | Val rms_score: 0.6427
|
| 91 |
+
2025-09-26 10:53:02,514 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0159 | Val rms_score: 0.6394
|
| 92 |
+
2025-09-26 10:53:21,064 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0149 | Val rms_score: 0.6441
|
| 93 |
+
2025-09-26 10:53:36,764 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0140 | Val rms_score: 0.6504
|
| 94 |
+
2025-09-26 10:53:54,600 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0137 | Val rms_score: 0.6398
|
| 95 |
+
2025-09-26 10:54:11,172 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0151 | Val rms_score: 0.6398
|
| 96 |
+
2025-09-26 10:54:26,346 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0150 | Val rms_score: 0.6405
|
| 97 |
+
2025-09-26 10:54:41,764 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0169 | Val rms_score: 0.6460
|
| 98 |
+
2025-09-26 10:54:59,061 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0152 | Val rms_score: 0.6374
|
| 99 |
+
2025-09-26 10:55:17,070 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0151 | Val rms_score: 0.6418
|
| 100 |
+
2025-09-26 10:55:35,132 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0158 | Val rms_score: 0.6406
|
| 101 |
+
2025-09-26 10:55:52,775 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0155 | Val rms_score: 0.6371
|
| 102 |
+
2025-09-26 10:56:10,881 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0152 | Val rms_score: 0.6388
|
| 103 |
+
2025-09-26 10:56:29,561 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0154 | Val rms_score: 0.6388
|
| 104 |
+
2025-09-26 10:56:48,080 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0138 | Val rms_score: 0.6376
|
| 105 |
+
2025-09-26 10:57:06,219 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0144 | Val rms_score: 0.6380
|
| 106 |
+
2025-09-26 10:57:24,343 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0139 | Val rms_score: 0.6477
|
| 107 |
+
2025-09-26 10:57:40,855 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0139 | Val rms_score: 0.6388
|
| 108 |
+
2025-09-26 10:58:00,702 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0142 | Val rms_score: 0.6384
|
| 109 |
+
2025-09-26 10:58:17,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0143 | Val rms_score: 0.6396
|
| 110 |
+
2025-09-26 10:58:32,446 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0135 | Val rms_score: 0.6373
|
| 111 |
+
2025-09-26 10:58:48,410 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0140 | Val rms_score: 0.6412
|
| 112 |
+
2025-09-26 10:59:04,194 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0106 | Val rms_score: 0.6395
|
| 113 |
+
2025-09-26 10:59:22,033 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0143 | Val rms_score: 0.6404
|
| 114 |
+
2025-09-26 10:59:40,240 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0145 | Val rms_score: 0.6353
|
| 115 |
+
2025-09-26 10:59:40,397 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8715
|
| 116 |
+
2025-09-26 10:59:41,036 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.6353
|
| 117 |
+
2025-09-26 10:59:57,442 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0130 | Val rms_score: 0.6393
|
| 118 |
+
2025-09-26 11:00:16,309 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0140 | Val rms_score: 0.6406
|
| 119 |
+
2025-09-26 11:00:35,526 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0136 | Val rms_score: 0.6398
|
| 120 |
+
2025-09-26 11:00:53,812 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0129 | Val rms_score: 0.6419
|
| 121 |
+
2025-09-26 11:01:11,853 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0131 | Val rms_score: 0.6377
|
| 122 |
+
2025-09-26 11:01:30,765 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0134 | Val rms_score: 0.6381
|
| 123 |
+
2025-09-26 11:01:54,925 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0126 | Val rms_score: 0.6359
|
| 124 |
+
2025-09-26 11:02:19,513 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0123 | Val rms_score: 0.6352
|
| 125 |
+
2025-09-26 11:02:20,023 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9555
|
| 126 |
+
2025-09-26 11:02:20,581 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 91 with val rms_score: 0.6352
|
| 127 |
+
2025-09-26 11:02:47,207 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0134 | Val rms_score: 0.6343
|
| 128 |
+
2025-09-26 11:02:47,409 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9660
|
| 129 |
+
2025-09-26 11:02:48,037 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 92 with val rms_score: 0.6343
|
| 130 |
+
2025-09-26 11:03:19,145 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0134 | Val rms_score: 0.6370
|
| 131 |
+
2025-09-26 11:03:50,599 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0132 | Val rms_score: 0.6406
|
| 132 |
+
2025-09-26 11:04:21,438 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0129 | Val rms_score: 0.6398
|
| 133 |
+
2025-09-26 11:04:52,706 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0123 | Val rms_score: 0.6381
|
| 134 |
+
2025-09-26 11:05:23,687 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0131 | Val rms_score: 0.6377
|
| 135 |
+
2025-09-26 11:05:54,471 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0125 | Val rms_score: 0.6431
|
| 136 |
+
2025-09-26 11:06:25,231 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0121 | Val rms_score: 0.6348
|
| 137 |
+
2025-09-26 11:06:55,706 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0127 | Val rms_score: 0.6295
|
| 138 |
+
2025-09-26 11:06:55,859 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10500
|
| 139 |
+
2025-09-26 11:06:56,421 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 100 with val rms_score: 0.6295
|
| 140 |
+
2025-09-26 11:06:58,142 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.7532
|
| 141 |
+
2025-09-26 11:06:58,528 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_logd74 at 2025-09-26_11-06-58
|
| 142 |
+
2025-09-26 11:07:27,755 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.3187 | Val rms_score: 0.6970
|
| 143 |
+
2025-09-26 11:07:27,755 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
|
| 144 |
+
2025-09-26 11:07:28,644 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.6970
|
| 145 |
+
2025-09-26 11:07:58,909 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2328 | Val rms_score: 0.6796
|
| 146 |
+
2025-09-26 11:07:59,162 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
|
| 147 |
+
2025-09-26 11:07:59,734 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6796
|
| 148 |
+
2025-09-26 11:08:30,803 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1708 | Val rms_score: 0.6608
|
| 149 |
+
2025-09-26 11:08:30,993 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
|
| 150 |
+
2025-09-26 11:08:31,549 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.6608
|
| 151 |
+
2025-09-26 11:09:01,965 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1461 | Val rms_score: 0.6725
|
| 152 |
+
2025-09-26 11:09:33,199 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1300 | Val rms_score: 0.6585
|
| 153 |
+
2025-09-26 11:09:33,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 525
|
| 154 |
+
2025-09-26 11:09:33,910 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.6585
|
| 155 |
+
2025-09-26 11:10:04,729 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1042 | Val rms_score: 0.6840
|
| 156 |
+
2025-09-26 11:10:35,101 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.0964 | Val rms_score: 0.6653
|
| 157 |
+
2025-09-26 11:11:05,286 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0746 | Val rms_score: 0.6627
|
| 158 |
+
2025-09-26 11:11:35,559 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0757 | Val rms_score: 0.6637
|
| 159 |
+
2025-09-26 11:12:08,072 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0678 | Val rms_score: 0.6686
|
| 160 |
+
2025-09-26 11:12:39,223 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0662 | Val rms_score: 0.6601
|
| 161 |
+
2025-09-26 11:13:11,271 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0589 | Val rms_score: 0.6608
|
| 162 |
+
2025-09-26 11:13:42,818 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0500 | Val rms_score: 0.6536
|
| 163 |
+
2025-09-26 11:13:42,966 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1365
|
| 164 |
+
2025-09-26 11:13:43,571 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.6536
|
| 165 |
+
2025-09-26 11:14:15,051 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0467 | Val rms_score: 0.6551
|
| 166 |
+
2025-09-26 11:14:45,707 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0475 | Val rms_score: 0.6529
|
| 167 |
+
2025-09-26 11:14:45,872 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1575
|
| 168 |
+
2025-09-26 11:14:46,422 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.6529
|
| 169 |
+
2025-09-26 11:15:16,574 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0414 | Val rms_score: 0.6489
|
| 170 |
+
2025-09-26 11:15:17,142 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1680
|
| 171 |
+
2025-09-26 11:15:17,720 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.6489
|
| 172 |
+
2025-09-26 11:15:49,400 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0399 | Val rms_score: 0.6586
|
| 173 |
+
2025-09-26 11:16:19,954 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0384 | Val rms_score: 0.6589
|
| 174 |
+
2025-09-26 11:16:50,642 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0360 | Val rms_score: 0.6497
|
| 175 |
+
2025-09-26 11:17:23,081 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0342 | Val rms_score: 0.6551
|
| 176 |
+
2025-09-26 11:17:53,798 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0369 | Val rms_score: 0.6517
|
| 177 |
+
2025-09-26 11:18:25,360 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0293 | Val rms_score: 0.6559
|
| 178 |
+
2025-09-26 11:18:56,457 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0352 | Val rms_score: 0.6541
|
| 179 |
+
2025-09-26 11:19:26,393 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0309 | Val rms_score: 0.6531
|
| 180 |
+
2025-09-26 11:19:57,846 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0289 | Val rms_score: 0.6491
|
| 181 |
+
2025-09-26 11:20:28,311 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0281 | Val rms_score: 0.6511
|
| 182 |
+
2025-09-26 11:20:59,012 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0248 | Val rms_score: 0.6528
|
| 183 |
+
2025-09-26 11:21:30,144 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0275 | Val rms_score: 0.6567
|
| 184 |
+
2025-09-26 11:22:01,908 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0257 | Val rms_score: 0.6546
|
| 185 |
+
2025-09-26 11:22:33,858 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0250 | Val rms_score: 0.6547
|
| 186 |
+
2025-09-26 11:23:05,799 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0267 | Val rms_score: 0.6505
|
| 187 |
+
2025-09-26 11:23:37,691 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0243 | Val rms_score: 0.6458
|
| 188 |
+
2025-09-26 11:23:37,843 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3360
|
| 189 |
+
2025-09-26 11:23:38,379 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.6458
|
| 190 |
+
2025-09-26 11:24:08,721 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0256 | Val rms_score: 0.6566
|
| 191 |
+
2025-09-26 11:24:40,407 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0234 | Val rms_score: 0.6482
|
| 192 |
+
2025-09-26 11:25:10,347 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0229 | Val rms_score: 0.6503
|
| 193 |
+
2025-09-26 11:25:41,603 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0224 | Val rms_score: 0.6592
|
| 194 |
+
2025-09-26 11:26:12,300 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0224 | Val rms_score: 0.6488
|
| 195 |
+
2025-09-26 11:26:39,927 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0233 | Val rms_score: 0.6552
|
| 196 |
+
2025-09-26 11:27:08,787 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0217 | Val rms_score: 0.6448
|
| 197 |
+
2025-09-26 11:27:08,940 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4095
|
| 198 |
+
2025-09-26 11:27:09,484 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 39 with val rms_score: 0.6448
2025-09-26 11:27:36,020 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0216 | Val rms_score: 0.6483
2025-09-26 11:28:05,028 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0213 | Val rms_score: 0.6468
2025-09-26 11:28:36,416 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0170 | Val rms_score: 0.6475
2025-09-26 11:29:07,418 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0194 | Val rms_score: 0.6555
2025-09-26 11:29:39,142 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0214 | Val rms_score: 0.6458
2025-09-26 11:30:10,237 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0179 | Val rms_score: 0.6495
2025-09-26 11:30:41,260 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0184 | Val rms_score: 0.6545
2025-09-26 11:31:12,610 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0186 | Val rms_score: 0.6513
2025-09-26 11:31:45,459 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0202 | Val rms_score: 0.6477
2025-09-26 11:32:15,363 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0187 | Val rms_score: 0.6520
2025-09-26 11:32:46,309 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0178 | Val rms_score: 0.6501
2025-09-26 11:33:18,144 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0179 | Val rms_score: 0.6479
2025-09-26 11:33:49,539 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0194 | Val rms_score: 0.6495
2025-09-26 11:34:20,339 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0179 | Val rms_score: 0.6491
2025-09-26 11:34:51,298 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0182 | Val rms_score: 0.6507
2025-09-26 11:35:20,613 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0180 | Val rms_score: 0.6476
2025-09-26 11:35:51,378 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0176 | Val rms_score: 0.6420
2025-09-26 11:35:51,954 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5880
2025-09-26 11:35:52,643 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 56 with val rms_score: 0.6420
2025-09-26 11:36:23,963 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0169 | Val rms_score: 0.6465
2025-09-26 11:36:56,322 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0168 | Val rms_score: 0.6441
2025-09-26 11:37:27,816 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0162 | Val rms_score: 0.6431
2025-09-26 11:37:58,650 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0163 | Val rms_score: 0.6482
2025-09-26 11:38:30,390 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0157 | Val rms_score: 0.6520
2025-09-26 11:39:01,651 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0156 | Val rms_score: 0.6459
2025-09-26 11:39:33,523 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0177 | Val rms_score: 0.6416
2025-09-26 11:39:33,680 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 6615
2025-09-26 11:39:34,639 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 63 with val rms_score: 0.6416
2025-09-26 11:40:04,741 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0152 | Val rms_score: 0.6455
2025-09-26 11:40:35,857 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0159 | Val rms_score: 0.6458
2025-09-26 11:41:06,457 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0168 | Val rms_score: 0.6424
2025-09-26 11:41:38,506 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0160 | Val rms_score: 0.6407
2025-09-26 11:41:38,678 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 7035
2025-09-26 11:41:39,231 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 67 with val rms_score: 0.6407
2025-09-26 11:42:10,928 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0144 | Val rms_score: 0.6487
2025-09-26 11:42:41,695 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0173 | Val rms_score: 0.6466
2025-09-26 11:43:12,549 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0152 | Val rms_score: 0.6467
2025-09-26 11:43:42,762 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0163 | Val rms_score: 0.6423
2025-09-26 11:44:13,604 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0154 | Val rms_score: 0.6410
2025-09-26 11:44:43,687 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0153 | Val rms_score: 0.6376
2025-09-26 11:44:43,850 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 7665
2025-09-26 11:44:44,423 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 73 with val rms_score: 0.6376
2025-09-26 11:45:13,557 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0150 | Val rms_score: 0.6422
2025-09-26 11:45:41,987 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0142 | Val rms_score: 0.6458
2025-09-26 11:46:12,481 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0138 | Val rms_score: 0.6375
2025-09-26 11:46:13,034 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 7980
2025-09-26 11:46:13,604 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 76 with val rms_score: 0.6375
2025-09-26 11:46:44,154 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0140 | Val rms_score: 0.6438
2025-09-26 11:47:14,197 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0141 | Val rms_score: 0.6484
2025-09-26 11:47:44,333 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0141 | Val rms_score: 0.6434
2025-09-26 11:48:14,948 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0138 | Val rms_score: 0.6408
2025-09-26 11:48:44,985 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0161 | Val rms_score: 0.6489
2025-09-26 11:49:15,489 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0118 | Val rms_score: 0.6485
2025-09-26 11:49:45,490 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0130 | Val rms_score: 0.6425
2025-09-26 11:50:13,487 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0164 | Val rms_score: 0.6405
2025-09-26 11:50:42,354 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0125 | Val rms_score: 0.6404
2025-09-26 11:51:13,445 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0138 | Val rms_score: 0.6412
2025-09-26 11:51:43,590 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0139 | Val rms_score: 0.6416
2025-09-26 11:52:14,518 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0148 | Val rms_score: 0.6433
2025-09-26 11:52:43,615 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0134 | Val rms_score: 0.6381
2025-09-26 11:53:13,388 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0138 | Val rms_score: 0.6415
2025-09-26 11:53:44,158 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0129 | Val rms_score: 0.6441
2025-09-26 11:54:14,322 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0132 | Val rms_score: 0.6444
2025-09-26 11:54:45,029 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0132 | Val rms_score: 0.6393
2025-09-26 11:55:15,786 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0121 | Val rms_score: 0.6448
2025-09-26 11:55:46,381 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0126 | Val rms_score: 0.6381
2025-09-26 11:56:18,160 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0133 | Val rms_score: 0.6410
2025-09-26 11:56:49,299 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0127 | Val rms_score: 0.6394
2025-09-26 11:57:17,132 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0123 | Val rms_score: 0.6369
2025-09-26 11:57:17,294 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10290
2025-09-26 11:57:17,842 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 98 with val rms_score: 0.6369
2025-09-26 11:57:45,860 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0123 | Val rms_score: 0.6420
2025-09-26 11:58:16,301 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0124 | Val rms_score: 0.6379
2025-09-26 11:58:18,182 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.7613
2025-09-26 11:58:18,587 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_logd74 at 2025-09-26_11-58-18
2025-09-26 11:58:48,063 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.2922 | Val rms_score: 0.7365
2025-09-26 11:58:48,063 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 105
2025-09-26 11:58:48,620 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.7365
2025-09-26 11:59:16,179 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.2234 | Val rms_score: 0.6852
2025-09-26 11:59:16,356 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 210
2025-09-26 11:59:16,903 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.6852
2025-09-26 11:59:45,406 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.1604 | Val rms_score: 0.6693
2025-09-26 11:59:45,555 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 315
2025-09-26 11:59:46,111 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.6693
2025-09-26 12:00:12,674 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.1406 | Val rms_score: 0.6918
2025-09-26 12:00:42,229 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.1269 | Val rms_score: 0.6648
2025-09-26 12:00:42,418 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 525
2025-09-26 12:00:43,046 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.6648
2025-09-26 12:01:13,042 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.1042 | Val rms_score: 0.6688
2025-09-26 12:01:42,010 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.0946 | Val rms_score: 0.6753
2025-09-26 12:02:12,022 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.0793 | Val rms_score: 0.6717
2025-09-26 12:02:42,995 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.0677 | Val rms_score: 0.6576
2025-09-26 12:02:43,148 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 945
2025-09-26 12:02:43,706 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 9 with val rms_score: 0.6576
2025-09-26 12:03:15,947 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.0669 | Val rms_score: 0.6504
2025-09-26 12:03:16,135 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1050
2025-09-26 12:03:16,699 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.6504
2025-09-26 12:03:47,827 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.0577 | Val rms_score: 0.6565
2025-09-26 12:04:19,271 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.0568 | Val rms_score: 0.6625
2025-09-26 12:04:49,858 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.0498 | Val rms_score: 0.6495
2025-09-26 12:04:50,017 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1365
2025-09-26 12:04:50,620 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 13 with val rms_score: 0.6495
2025-09-26 12:05:21,515 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0440 | Val rms_score: 0.6587
2025-09-26 12:05:52,558 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0431 | Val rms_score: 0.6494
2025-09-26 12:05:52,717 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1575
2025-09-26 12:05:53,294 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 15 with val rms_score: 0.6494
2025-09-26 12:06:23,827 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0416 | Val rms_score: 0.6515
2025-09-26 12:06:55,431 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0401 | Val rms_score: 0.6565
2025-09-26 12:07:27,300 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0377 | Val rms_score: 0.6503
2025-09-26 12:07:58,339 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0352 | Val rms_score: 0.6471
2025-09-26 12:07:58,525 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 1995
2025-09-26 12:07:59,089 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 19 with val rms_score: 0.6471
2025-09-26 12:08:31,591 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0331 | Val rms_score: 0.6550
2025-09-26 12:09:02,187 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0248 | Val rms_score: 0.6494
2025-09-26 12:09:32,631 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0281 | Val rms_score: 0.6447
2025-09-26 12:09:32,908 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2310
2025-09-26 12:09:33,492 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 22 with val rms_score: 0.6447
2025-09-26 12:10:05,112 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0268 | Val rms_score: 0.6463
2025-09-26 12:10:35,272 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0285 | Val rms_score: 0.6457
2025-09-26 12:11:05,862 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0303 | Val rms_score: 0.6482
2025-09-26 12:11:35,831 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0297 | Val rms_score: 0.6434
2025-09-26 12:11:36,384 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 2730
2025-09-26 12:11:36,957 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 26 with val rms_score: 0.6434
2025-09-26 12:12:08,571 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0283 | Val rms_score: 0.6475
2025-09-26 12:12:39,834 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0256 | Val rms_score: 0.6495
2025-09-26 12:13:12,348 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0269 | Val rms_score: 0.6472
2025-09-26 12:13:43,983 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0259 | Val rms_score: 0.6466
2025-09-26 12:14:13,885 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0239 | Val rms_score: 0.6350
2025-09-26 12:14:14,400 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 3255
2025-09-26 12:14:15,001 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.6350
2025-09-26 12:14:46,069 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0229 | Val rms_score: 0.6363
2025-09-26 12:15:16,560 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0231 | Val rms_score: 0.6525
2025-09-26 12:15:48,186 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0225 | Val rms_score: 0.6374
2025-09-26 12:16:19,676 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0217 | Val rms_score: 0.6390
2025-09-26 12:16:49,606 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0225 | Val rms_score: 0.6422
2025-09-26 12:17:22,228 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0222 | Val rms_score: 0.6539
2025-09-26 12:17:53,937 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0213 | Val rms_score: 0.6389
2025-09-26 12:18:26,794 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0201 | Val rms_score: 0.6427
2025-09-26 12:18:59,053 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0195 | Val rms_score: 0.6355
2025-09-26 12:19:30,437 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0246 | Val rms_score: 0.6413
2025-09-26 12:20:02,802 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0192 | Val rms_score: 0.6332
2025-09-26 12:20:03,010 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 4410
2025-09-26 12:20:03,602 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 42 with val rms_score: 0.6332
2025-09-26 12:20:36,260 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0177 | Val rms_score: 0.6409
2025-09-26 12:21:08,365 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0199 | Val rms_score: 0.6350
2025-09-26 12:21:39,960 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0205 | Val rms_score: 0.6363
2025-09-26 12:22:11,085 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0194 | Val rms_score: 0.6364
2025-09-26 12:22:43,447 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0182 | Val rms_score: 0.6336
2025-09-26 12:23:15,705 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0175 | Val rms_score: 0.6354
2025-09-26 12:23:47,887 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0188 | Val rms_score: 0.6310
2025-09-26 12:23:48,041 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 5145
2025-09-26 12:23:48,741 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 49 with val rms_score: 0.6310
2025-09-26 12:24:20,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0187 | Val rms_score: 0.6419
2025-09-26 12:24:52,741 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0180 | Val rms_score: 0.6346
2025-09-26 12:25:25,220 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0165 | Val rms_score: 0.6425
2025-09-26 12:25:56,718 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0173 | Val rms_score: 0.6389
2025-09-26 12:26:28,609 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0179 | Val rms_score: 0.6418
2025-09-26 12:27:00,105 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0175 | Val rms_score: 0.6415
2025-09-26 12:27:30,707 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0172 | Val rms_score: 0.6377
2025-09-26 12:28:02,630 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0163 | Val rms_score: 0.6381
2025-09-26 12:28:35,339 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0164 | Val rms_score: 0.6371
2025-09-26 12:29:06,982 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0162 | Val rms_score: 0.6350
2025-09-26 12:29:37,809 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0154 | Val rms_score: 0.6351
2025-09-26 12:30:09,850 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0142 | Val rms_score: 0.6342
2025-09-26 12:30:40,897 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0161 | Val rms_score: 0.6333
2025-09-26 12:31:12,862 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0177 | Val rms_score: 0.6321
2025-09-26 12:31:43,801 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0164 | Val rms_score: 0.6360
2025-09-26 12:32:16,155 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0149 | Val rms_score: 0.6335
2025-09-26 12:32:47,008 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0156 | Val rms_score: 0.6294
2025-09-26 12:32:47,581 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 6930
2025-09-26 12:32:48,371 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 66 with val rms_score: 0.6294
2025-09-26 12:33:21,221 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0164 | Val rms_score: 0.6308
2025-09-26 12:33:52,894 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0146 | Val rms_score: 0.6338
2025-09-26 12:34:24,462 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0152 | Val rms_score: 0.6383
2025-09-26 12:34:55,284 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0141 | Val rms_score: 0.6359
2025-09-26 12:35:26,924 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0147 | Val rms_score: 0.6339
2025-09-26 12:35:59,240 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0154 | Val rms_score: 0.6330
2025-09-26 12:36:30,670 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0143 | Val rms_score: 0.6365
2025-09-26 12:37:02,847 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0144 | Val rms_score: 0.6364
2025-09-26 12:37:34,117 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0144 | Val rms_score: 0.6330
2025-09-26 12:38:06,000 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0139 | Val rms_score: 0.6406
2025-09-26 12:38:38,641 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0136 | Val rms_score: 0.6341
2025-09-26 12:39:10,102 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0146 | Val rms_score: 0.6367
2025-09-26 12:39:40,419 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0138 | Val rms_score: 0.6339
2025-09-26 12:40:12,264 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0142 | Val rms_score: 0.6330
2025-09-26 12:40:43,664 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0135 | Val rms_score: 0.6283
2025-09-26 12:40:44,337 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8505
2025-09-26 12:40:44,945 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 81 with val rms_score: 0.6283
2025-09-26 12:41:16,976 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0132 | Val rms_score: 0.6319
2025-09-26 12:41:49,007 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0133 | Val rms_score: 0.6277
2025-09-26 12:41:49,165 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8715
2025-09-26 12:41:49,779 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 83 with val rms_score: 0.6277
2025-09-26 12:42:21,678 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0149 | Val rms_score: 0.6266
2025-09-26 12:42:21,863 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 8820
2025-09-26 12:42:22,448 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 84 with val rms_score: 0.6266
2025-09-26 12:42:53,800 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0141 | Val rms_score: 0.6295
2025-09-26 12:43:26,579 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0137 | Val rms_score: 0.6297
2025-09-26 12:43:56,936 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0124 | Val rms_score: 0.6298
2025-09-26 12:44:27,411 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0127 | Val rms_score: 0.6285
2025-09-26 12:44:59,076 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0124 | Val rms_score: 0.6308
|
| 399 |
+
2025-09-26 12:45:30,730 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0128 | Val rms_score: 0.6282
|
| 400 |
+
2025-09-26 12:46:02,058 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0133 | Val rms_score: 0.6265
|
| 401 |
+
2025-09-26 12:46:02,658 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 9555
|
| 402 |
+
2025-09-26 12:46:03,256 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 91 with val rms_score: 0.6265
|
| 403 |
+
2025-09-26 12:46:36,132 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0123 | Val rms_score: 0.6286
|
| 404 |
+
2025-09-26 12:47:06,667 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0121 | Val rms_score: 0.6276
|
| 405 |
+
2025-09-26 12:47:39,322 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0126 | Val rms_score: 0.6292
|
| 406 |
+
2025-09-26 12:48:10,446 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0127 | Val rms_score: 0.6306
|
| 407 |
+
2025-09-26 12:48:43,818 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0130 | Val rms_score: 0.6255
|
| 408 |
+
2025-09-26 12:48:44,358 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Global step of best model: 10080
|
| 409 |
+
2025-09-26 12:48:44,963 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Best model saved at epoch 96 with val rms_score: 0.6255
|
| 410 |
+
2025-09-26 12:49:15,572 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0122 | Val rms_score: 0.6332
|
| 411 |
+
2025-09-26 12:49:47,074 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0125 | Val rms_score: 0.6311
|
| 412 |
+
2025-09-26 12:50:17,704 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0127 | Val rms_score: 0.6290
|
| 413 |
+
2025-09-26 12:50:49,848 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0125 | Val rms_score: 0.6313
|
| 414 |
+
2025-09-26 12:50:51,304 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Test rms_score: 0.7653
|
| 415 |
+
2025-09-26 12:50:51,773 - logs_modchembert_astrazeneca_logd74_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.7599, Std Dev: 0.0050
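The triplicate summary line above aggregates the three per-run test scores into an average and a standard deviation. A minimal sketch of that aggregation, assuming hypothetical scores for the first two runs (only run 3's test score, 0.7653, appears in this log excerpt) and population standard deviation (whether the pipeline uses population or sample std dev is an assumption):

```python
from statistics import mean, pstdev

# Hypothetical per-run test RMS scores; only the third run's 0.7653
# appears in the log above -- the other two values are assumptions.
scores = [0.7560, 0.7584, 0.7653]

avg = mean(scores)    # arithmetic mean over the three runs
std = pstdev(scores)  # population std dev; sample std (statistics.stdev) is the other plausible choice

print(f"Final Triplicate Test Results — Avg rms_score: {avg:.4f}, Std Dev: {std:.4f}")
```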
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_ppb_epochs100_batch_size32_20250926_125051.log
ADDED
@@ -0,0 +1,329 @@
2025-09-26 12:50:51,809 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_ppb
2025-09-26 12:50:51,810 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - dataset: astrazeneca_ppb, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 12:50:51,814 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_ppb at 2025-09-26_12-50-51
2025-09-26 12:51:06,328 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.6778 | Val rms_score: 0.1087
2025-09-26 12:51:06,328 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 12:51:06,903 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1087
2025-09-26 12:51:22,294 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.4833 | Val rms_score: 0.1030
2025-09-26 12:51:22,445 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 12:51:23,002 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1030
2025-09-26 12:51:38,314 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.3375 | Val rms_score: 0.1063
2025-09-26 12:51:53,304 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3722 | Val rms_score: 0.1248
2025-09-26 12:52:08,188 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2950 | Val rms_score: 0.1192
2025-09-26 12:52:23,307 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2250 | Val rms_score: 0.1105
2025-09-26 12:52:39,048 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2104 | Val rms_score: 0.1071
2025-09-26 12:52:54,205 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1632 | Val rms_score: 0.1098
2025-09-26 12:53:09,170 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1953 | Val rms_score: 0.1160
2025-09-26 12:53:24,503 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1257 | Val rms_score: 0.1090
2025-09-26 12:53:39,921 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1069 | Val rms_score: 0.1078
2025-09-26 12:53:55,669 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1125 | Val rms_score: 0.1138
2025-09-26 12:54:10,890 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1174 | Val rms_score: 0.1106
2025-09-26 12:54:26,273 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.0922 | Val rms_score: 0.1117
2025-09-26 12:54:40,604 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0781 | Val rms_score: 0.1152
2025-09-26 12:54:55,546 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0750 | Val rms_score: 0.1106
2025-09-26 12:55:10,998 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0688 | Val rms_score: 0.1097
2025-09-26 12:55:26,587 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0832 | Val rms_score: 0.1139
2025-09-26 12:55:41,072 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0656 | Val rms_score: 0.1134
2025-09-26 12:55:56,393 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0608 | Val rms_score: 0.1098
2025-09-26 12:56:11,202 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0649 | Val rms_score: 0.1152
2025-09-26 12:56:27,300 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0646 | Val rms_score: 0.1168
2025-09-26 12:56:43,781 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0536 | Val rms_score: 0.1105
2025-09-26 12:56:59,079 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0462 | Val rms_score: 0.1112
2025-09-26 12:57:14,095 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0550 | Val rms_score: 0.1124
2025-09-26 12:57:29,370 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0524 | Val rms_score: 0.1133
2025-09-26 12:57:45,260 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0435 | Val rms_score: 0.1084
2025-09-26 12:58:00,176 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0493 | Val rms_score: 0.1069
2025-09-26 12:58:15,380 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0406 | Val rms_score: 0.1100
2025-09-26 12:58:30,477 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0406 | Val rms_score: 0.1133
2025-09-26 12:58:45,980 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0417 | Val rms_score: 0.1102
2025-09-26 12:59:01,463 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0447 | Val rms_score: 0.1127
2025-09-26 12:59:16,802 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0405 | Val rms_score: 0.1131
2025-09-26 12:59:31,924 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0448 | Val rms_score: 0.1121
2025-09-26 12:59:47,354 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0384 | Val rms_score: 0.1112
2025-09-26 13:00:02,668 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0414 | Val rms_score: 0.1109
2025-09-26 13:00:18,097 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0349 | Val rms_score: 0.1100
2025-09-26 13:00:33,524 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0332 | Val rms_score: 0.1106
2025-09-26 13:00:48,684 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0354 | Val rms_score: 0.1107
2025-09-26 13:01:03,722 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0340 | Val rms_score: 0.1121
2025-09-26 13:01:18,930 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0321 | Val rms_score: 0.1124
2025-09-26 13:01:34,754 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0311 | Val rms_score: 0.1149
2025-09-26 13:01:50,101 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0278 | Val rms_score: 0.1104
2025-09-26 13:02:05,313 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0339 | Val rms_score: 0.1102
2025-09-26 13:02:21,324 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0272 | Val rms_score: 0.1119
2025-09-26 13:02:36,664 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0318 | Val rms_score: 0.1108
2025-09-26 13:02:51,914 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0357 | Val rms_score: 0.1103
2025-09-26 13:03:07,241 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0290 | Val rms_score: 0.1113
2025-09-26 13:03:22,340 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0441 | Val rms_score: 0.1138
2025-09-26 13:03:37,646 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0290 | Val rms_score: 0.1134
2025-09-26 13:03:51,647 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0262 | Val rms_score: 0.1120
2025-09-26 13:04:05,673 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0266 | Val rms_score: 0.1102
2025-09-26 13:04:20,446 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0252 | Val rms_score: 0.1102
2025-09-26 13:04:35,227 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0264 | Val rms_score: 0.1103
2025-09-26 13:04:48,876 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0271 | Val rms_score: 0.1139
2025-09-26 13:05:02,748 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0299 | Val rms_score: 0.1120
2025-09-26 13:05:18,171 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0245 | Val rms_score: 0.1122
2025-09-26 13:05:31,568 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0209 | Val rms_score: 0.1113
2025-09-26 13:05:46,561 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0236 | Val rms_score: 0.1114
2025-09-26 13:06:00,967 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0248 | Val rms_score: 0.1122
2025-09-26 13:06:15,778 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0240 | Val rms_score: 0.1107
2025-09-26 13:06:31,561 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0236 | Val rms_score: 0.1105
2025-09-26 13:06:46,927 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0234 | Val rms_score: 0.1113
2025-09-26 13:07:02,042 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0240 | Val rms_score: 0.1125
2025-09-26 13:07:17,043 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0197 | Val rms_score: 0.1119
2025-09-26 13:07:32,177 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0227 | Val rms_score: 0.1126
2025-09-26 13:07:49,033 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0270 | Val rms_score: 0.1120
2025-09-26 13:08:04,504 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0215 | Val rms_score: 0.1117
2025-09-26 13:08:19,733 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0164 | Val rms_score: 0.1109
2025-09-26 13:08:34,982 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0196 | Val rms_score: 0.1103
2025-09-26 13:08:50,118 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0204 | Val rms_score: 0.1139
2025-09-26 13:09:05,925 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0191 | Val rms_score: 0.1105
2025-09-26 13:09:21,057 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0213 | Val rms_score: 0.1095
2025-09-26 13:09:35,659 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0243 | Val rms_score: 0.1137
2025-09-26 13:09:50,942 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0203 | Val rms_score: 0.1103
2025-09-26 13:10:06,635 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0229 | Val rms_score: 0.1135
2025-09-26 13:10:22,029 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0213 | Val rms_score: 0.1125
2025-09-26 13:10:37,235 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0220 | Val rms_score: 0.1119
2025-09-26 13:10:52,653 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0227 | Val rms_score: 0.1138
2025-09-26 13:11:08,024 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0205 | Val rms_score: 0.1126
2025-09-26 13:11:23,200 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0181 | Val rms_score: 0.1111
2025-09-26 13:11:38,776 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0175 | Val rms_score: 0.1142
2025-09-26 13:11:54,028 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0198 | Val rms_score: 0.1105
2025-09-26 13:12:09,555 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0166 | Val rms_score: 0.1104
2025-09-26 13:12:24,800 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0176 | Val rms_score: 0.1116
2025-09-26 13:12:39,706 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0167 | Val rms_score: 0.1104
2025-09-26 13:12:55,275 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0154 | Val rms_score: 0.1120
2025-09-26 13:13:10,589 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0174 | Val rms_score: 0.1097
2025-09-26 13:13:26,508 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0204 | Val rms_score: 0.1109
2025-09-26 13:13:40,833 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0262 | Val rms_score: 0.1138
2025-09-26 13:13:55,661 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0211 | Val rms_score: 0.1104
2025-09-26 13:14:11,444 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0156 | Val rms_score: 0.1109
2025-09-26 13:14:26,663 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0164 | Val rms_score: 0.1111
2025-09-26 13:14:41,461 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0148 | Val rms_score: 0.1122
2025-09-26 13:14:56,070 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0171 | Val rms_score: 0.1111
2025-09-26 13:15:11,036 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0186 | Val rms_score: 0.1091
2025-09-26 13:15:25,295 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0166 | Val rms_score: 0.1104
2025-09-26 13:15:39,058 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0156 | Val rms_score: 0.1096
2025-09-26 13:15:52,342 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0147 | Val rms_score: 0.1096
2025-09-26 13:16:04,433 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0145 | Val rms_score: 0.1100
2025-09-26 13:16:05,459 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1170
2025-09-26 13:16:05,902 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_ppb at 2025-09-26_13-16-05
2025-09-26 13:16:20,693 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7083 | Val rms_score: 0.1166
2025-09-26 13:16:20,693 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 13:16:21,309 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1166
2025-09-26 13:16:36,144 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5583 | Val rms_score: 0.1108
2025-09-26 13:16:36,332 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 13:16:36,937 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1108
2025-09-26 13:16:52,540 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4214 | Val rms_score: 0.1137
2025-09-26 13:17:07,983 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.2889 | Val rms_score: 0.1078
2025-09-26 13:17:08,222 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 13:17:08,871 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1078
2025-09-26 13:17:24,054 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2013 | Val rms_score: 0.1134
2025-09-26 13:17:39,167 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2208 | Val rms_score: 0.1175
2025-09-26 13:17:54,787 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2031 | Val rms_score: 0.1109
2025-09-26 13:18:10,241 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1847 | Val rms_score: 0.1093
|
| 124 |
+
2025-09-26 13:18:25,221 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1531 | Val rms_score: 0.1136
|
| 125 |
+
2025-09-26 13:18:39,541 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1264 | Val rms_score: 0.1130
|
| 126 |
+
2025-09-26 13:18:56,310 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1181 | Val rms_score: 0.1153
|
| 127 |
+
2025-09-26 13:19:10,710 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1219 | Val rms_score: 0.1139
|
| 128 |
+
2025-09-26 13:19:26,986 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1014 | Val rms_score: 0.1155
|
| 129 |
+
2025-09-26 13:19:42,189 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1135 | Val rms_score: 0.1151
|
| 130 |
+
2025-09-26 13:19:59,535 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0958 | Val rms_score: 0.1127
|
| 131 |
+
2025-09-26 13:20:15,125 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1109 | Val rms_score: 0.1135
|
| 132 |
+
2025-09-26 13:20:31,589 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0778 | Val rms_score: 0.1113
|
| 133 |
+
2025-09-26 13:20:47,420 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0777 | Val rms_score: 0.1133
|
| 134 |
+
2025-09-26 13:21:02,929 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0656 | Val rms_score: 0.1135
|
| 135 |
+
2025-09-26 13:21:18,498 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0747 | Val rms_score: 0.1156
|
| 136 |
+
2025-09-26 13:21:34,286 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0632 | Val rms_score: 0.1164
|
| 137 |
+
2025-09-26 13:21:50,507 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0601 | Val rms_score: 0.1096
|
| 138 |
+
2025-09-26 13:22:07,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0594 | Val rms_score: 0.1173
|
| 139 |
+
2025-09-26 13:22:23,322 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0573 | Val rms_score: 0.1100
|
| 140 |
+
2025-09-26 13:22:39,468 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0503 | Val rms_score: 0.1125
|
| 141 |
+
2025-09-26 13:22:55,180 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0465 | Val rms_score: 0.1165
|
| 142 |
+
2025-09-26 13:23:11,158 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0456 | Val rms_score: 0.1132
|
| 143 |
+
2025-09-26 13:23:27,145 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0451 | Val rms_score: 0.1136
|
| 144 |
+
2025-09-26 13:23:42,727 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0395 | Val rms_score: 0.1129
|
| 145 |
+
2025-09-26 13:23:58,207 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0444 | Val rms_score: 0.1129
|
| 146 |
+
2025-09-26 13:24:13,672 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0429 | Val rms_score: 0.1126
|
| 147 |
+
2025-09-26 13:24:29,943 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0426 | Val rms_score: 0.1159
|
| 148 |
+
2025-09-26 13:24:45,457 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0420 | Val rms_score: 0.1140
|
| 149 |
+
2025-09-26 13:25:00,674 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0471 | Val rms_score: 0.1103
|
| 150 |
+
2025-09-26 13:25:17,016 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0391 | Val rms_score: 0.1104
|
| 151 |
+
2025-09-26 13:25:32,959 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0412 | Val rms_score: 0.1109
|
| 152 |
+
2025-09-26 13:25:49,200 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0378 | Val rms_score: 0.1128
|
| 153 |
+
2025-09-26 13:26:05,318 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0330 | Val rms_score: 0.1118
|
| 154 |
+
2025-09-26 13:26:21,975 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0406 | Val rms_score: 0.1108
|
| 155 |
+
2025-09-26 13:26:37,911 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0335 | Val rms_score: 0.1089
|
| 156 |
+
2025-09-26 13:26:54,104 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0352 | Val rms_score: 0.1116
|
| 157 |
+
2025-09-26 13:27:10,156 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0347 | Val rms_score: 0.1139
|
| 158 |
+
2025-09-26 13:27:26,997 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0328 | Val rms_score: 0.1129
|
| 159 |
+
2025-09-26 13:27:40,797 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0339 | Val rms_score: 0.1134
|
| 160 |
+
2025-09-26 13:27:58,409 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0297 | Val rms_score: 0.1118
|
| 161 |
+
2025-09-26 13:28:13,276 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0323 | Val rms_score: 0.1131
|
| 162 |
+
2025-09-26 13:28:30,943 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0352 | Val rms_score: 0.1099
|
| 163 |
+
2025-09-26 13:28:46,264 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0280 | Val rms_score: 0.1113
|
| 164 |
+
2025-09-26 13:29:03,037 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0252 | Val rms_score: 0.1127
|
| 165 |
+
2025-09-26 13:29:17,943 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0314 | Val rms_score: 0.1129
|
| 166 |
+
2025-09-26 13:29:34,819 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0273 | Val rms_score: 0.1122
|
| 167 |
+
2025-09-26 13:29:50,293 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0271 | Val rms_score: 0.1127
|
| 168 |
+
2025-09-26 13:30:06,668 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0259 | Val rms_score: 0.1146
|
| 169 |
+
2025-09-26 13:30:22,281 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0251 | Val rms_score: 0.1141
|
| 170 |
+
2025-09-26 13:30:38,781 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0257 | Val rms_score: 0.1134
|
| 171 |
+
2025-09-26 13:30:54,634 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0256 | Val rms_score: 0.1139
2025-09-26 13:31:11,275 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0281 | Val rms_score: 0.1142
2025-09-26 13:31:27,453 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0312 | Val rms_score: 0.1109
2025-09-26 13:31:43,517 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0255 | Val rms_score: 0.1114
2025-09-26 13:31:59,418 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0280 | Val rms_score: 0.1111
2025-09-26 13:32:15,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0267 | Val rms_score: 0.1139
2025-09-26 13:32:31,533 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0247 | Val rms_score: 0.1115
2025-09-26 13:32:47,480 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0248 | Val rms_score: 0.1088
2025-09-26 13:33:02,764 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0224 | Val rms_score: 0.1128
2025-09-26 13:33:18,917 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0198 | Val rms_score: 0.1118
2025-09-26 13:33:35,090 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0209 | Val rms_score: 0.1112
2025-09-26 13:33:52,285 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0273 | Val rms_score: 0.1146
2025-09-26 13:34:08,211 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0220 | Val rms_score: 0.1109
2025-09-26 13:34:23,914 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0354 | Val rms_score: 0.1082
2025-09-26 13:34:40,414 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0238 | Val rms_score: 0.1101
2025-09-26 13:34:56,509 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0207 | Val rms_score: 0.1128
2025-09-26 13:35:13,253 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0222 | Val rms_score: 0.1106
2025-09-26 13:35:28,565 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0201 | Val rms_score: 0.1103
2025-09-26 13:35:45,201 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0225 | Val rms_score: 0.1110
2025-09-26 13:35:58,734 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0198 | Val rms_score: 0.1105
2025-09-26 13:36:15,715 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0198 | Val rms_score: 0.1119
2025-09-26 13:36:29,334 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0167 | Val rms_score: 0.1129
2025-09-26 13:36:46,219 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0180 | Val rms_score: 0.1124
2025-09-26 13:36:58,790 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0185 | Val rms_score: 0.1102
2025-09-26 13:37:15,751 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0194 | Val rms_score: 0.1117
2025-09-26 13:37:30,902 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0191 | Val rms_score: 0.1109
2025-09-26 13:37:48,050 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0224 | Val rms_score: 0.1102
2025-09-26 13:38:03,664 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0177 | Val rms_score: 0.1127
2025-09-26 13:38:20,345 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0198 | Val rms_score: 0.1115
2025-09-26 13:38:36,037 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0160 | Val rms_score: 0.1099
2025-09-26 13:38:51,641 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0182 | Val rms_score: 0.1102
2025-09-26 13:39:07,127 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0163 | Val rms_score: 0.1101
2025-09-26 13:39:23,061 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0167 | Val rms_score: 0.1129
2025-09-26 13:39:40,118 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0157 | Val rms_score: 0.1118
2025-09-26 13:39:55,747 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0174 | Val rms_score: 0.1104
2025-09-26 13:40:11,234 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0158 | Val rms_score: 0.1116
2025-09-26 13:40:26,937 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0160 | Val rms_score: 0.1116
2025-09-26 13:40:43,358 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0174 | Val rms_score: 0.1102
2025-09-26 13:40:57,611 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0169 | Val rms_score: 0.1109
2025-09-26 13:41:14,348 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0173 | Val rms_score: 0.1100
2025-09-26 13:41:28,090 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0159 | Val rms_score: 0.1112
2025-09-26 13:41:45,198 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0160 | Val rms_score: 0.1104
2025-09-26 13:41:59,714 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0198 | Val rms_score: 0.1098
2025-09-26 13:42:16,449 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0169 | Val rms_score: 0.1097
2025-09-26 13:42:30,320 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0148 | Val rms_score: 0.1103
2025-09-26 13:42:31,416 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1172
2025-09-26 13:42:31,910 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_ppb at 2025-09-26_13-42-31
2025-09-26 13:42:47,805 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.7222 | Val rms_score: 0.1203
2025-09-26 13:42:47,805 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 13:42:48,426 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.1203
2025-09-26 13:43:02,534 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5028 | Val rms_score: 0.1193
2025-09-26 13:43:02,716 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 13:43:03,295 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.1193
2025-09-26 13:43:20,123 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4429 | Val rms_score: 0.1097
2025-09-26 13:43:20,304 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 13:43:20,842 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.1097
2025-09-26 13:43:35,961 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3389 | Val rms_score: 0.1082
2025-09-26 13:43:36,149 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 180
2025-09-26 13:43:36,790 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 4 with val rms_score: 0.1082
2025-09-26 13:43:53,640 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2425 | Val rms_score: 0.1122
2025-09-26 13:44:07,131 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2097 | Val rms_score: 0.1100
2025-09-26 13:44:24,145 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2198 | Val rms_score: 0.1110
2025-09-26 13:44:38,481 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1458 | Val rms_score: 0.1136
2025-09-26 13:44:54,990 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1547 | Val rms_score: 0.1082
2025-09-26 13:45:09,941 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1201 | Val rms_score: 0.1076
2025-09-26 13:45:10,106 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Global step of best model: 450
2025-09-26 13:45:10,692 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Best model saved at epoch 10 with val rms_score: 0.1076
2025-09-26 13:45:27,501 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1153 | Val rms_score: 0.1147
2025-09-26 13:45:41,952 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1172 | Val rms_score: 0.1158
2025-09-26 13:45:57,973 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1049 | Val rms_score: 0.1144
2025-09-26 13:46:13,958 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1094 | Val rms_score: 0.1152
2025-09-26 13:46:30,018 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.0851 | Val rms_score: 0.1116
2025-09-26 13:46:46,059 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0730 | Val rms_score: 0.1134
2025-09-26 13:47:02,121 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0729 | Val rms_score: 0.1110
2025-09-26 13:47:18,264 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0609 | Val rms_score: 0.1160
2025-09-26 13:47:33,808 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0632 | Val rms_score: 0.1137
2025-09-26 13:47:50,413 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0674 | Val rms_score: 0.1165
2025-09-26 13:48:05,743 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0667 | Val rms_score: 0.1129
2025-09-26 13:48:22,329 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0601 | Val rms_score: 0.1140
2025-09-26 13:48:38,600 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0516 | Val rms_score: 0.1143
2025-09-26 13:48:55,495 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0556 | Val rms_score: 0.1117
2025-09-26 13:49:11,013 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0544 | Val rms_score: 0.1107
2025-09-26 13:49:27,613 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0479 | Val rms_score: 0.1144
2025-09-26 13:49:43,262 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0424 | Val rms_score: 0.1152
2025-09-26 13:49:59,687 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0469 | Val rms_score: 0.1158
2025-09-26 13:50:13,229 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0477 | Val rms_score: 0.1143
2025-09-26 13:50:29,786 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0420 | Val rms_score: 0.1127
2025-09-26 13:50:42,776 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0420 | Val rms_score: 0.1151
2025-09-26 13:51:00,039 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0480 | Val rms_score: 0.1113
2025-09-26 13:51:14,101 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0455 | Val rms_score: 0.1124
2025-09-26 13:51:30,655 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0401 | Val rms_score: 0.1101
2025-09-26 13:51:45,415 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0434 | Val rms_score: 0.1136
2025-09-26 13:52:02,265 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0338 | Val rms_score: 0.1137
2025-09-26 13:52:17,415 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0359 | Val rms_score: 0.1118
2025-09-26 13:52:33,719 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0330 | Val rms_score: 0.1146
2025-09-26 13:52:49,449 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0330 | Val rms_score: 0.1138
2025-09-26 13:53:04,929 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0354 | Val rms_score: 0.1133
2025-09-26 13:53:21,682 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0361 | Val rms_score: 0.1117
2025-09-26 13:53:36,789 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0344 | Val rms_score: 0.1126
2025-09-26 13:53:53,460 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0299 | Val rms_score: 0.1138
2025-09-26 13:54:07,510 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0337 | Val rms_score: 0.1112
2025-09-26 13:54:24,891 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0334 | Val rms_score: 0.1122
2025-09-26 13:54:39,919 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0323 | Val rms_score: 0.1172
2025-09-26 13:54:56,939 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0257 | Val rms_score: 0.1114
2025-09-26 13:55:11,383 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0299 | Val rms_score: 0.1110
2025-09-26 13:55:28,029 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0309 | Val rms_score: 0.1120
2025-09-26 13:55:42,838 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0264 | Val rms_score: 0.1134
2025-09-26 13:55:59,104 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0283 | Val rms_score: 0.1124
2025-09-26 13:56:14,164 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0264 | Val rms_score: 0.1111
2025-09-26 13:56:31,225 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0241 | Val rms_score: 0.1119
2025-09-26 13:56:46,741 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0221 | Val rms_score: 0.1126
2025-09-26 13:57:03,221 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0231 | Val rms_score: 0.1116
2025-09-26 13:57:18,487 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0243 | Val rms_score: 0.1140
2025-09-26 13:57:35,795 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0253 | Val rms_score: 0.1129
2025-09-26 13:57:51,585 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0204 | Val rms_score: 0.1132
2025-09-26 13:58:08,147 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0240 | Val rms_score: 0.1134
2025-09-26 13:58:23,675 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0243 | Val rms_score: 0.1129
2025-09-26 13:58:39,833 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0229 | Val rms_score: 0.1136
2025-09-26 13:58:55,973 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0250 | Val rms_score: 0.1143
2025-09-26 13:59:12,057 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0279 | Val rms_score: 0.1108
2025-09-26 13:59:28,006 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0285 | Val rms_score: 0.1136
2025-09-26 13:59:44,155 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0206 | Val rms_score: 0.1116
2025-09-26 14:00:00,268 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0212 | Val rms_score: 0.1136
2025-09-26 14:00:17,746 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0198 | Val rms_score: 0.1120
2025-09-26 14:00:33,845 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0203 | Val rms_score: 0.1125
2025-09-26 14:00:49,090 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0175 | Val rms_score: 0.1126
2025-09-26 14:01:05,220 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0229 | Val rms_score: 0.1148
2025-09-26 14:01:19,534 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0191 | Val rms_score: 0.1141
2025-09-26 14:01:36,637 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0206 | Val rms_score: 0.1131
2025-09-26 14:01:52,362 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0191 | Val rms_score: 0.1141
2025-09-26 14:02:09,250 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0215 | Val rms_score: 0.1140
2025-09-26 14:02:24,670 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0231 | Val rms_score: 0.1112
2025-09-26 14:02:41,428 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0256 | Val rms_score: 0.1135
2025-09-26 14:02:57,133 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0209 | Val rms_score: 0.1148
2025-09-26 14:03:13,710 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0204 | Val rms_score: 0.1112
2025-09-26 14:03:28,801 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0194 | Val rms_score: 0.1120
2025-09-26 14:03:45,291 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0196 | Val rms_score: 0.1128
2025-09-26 14:04:00,338 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0188 | Val rms_score: 0.1119
2025-09-26 14:04:17,730 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0183 | Val rms_score: 0.1116
2025-09-26 14:04:32,797 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0175 | Val rms_score: 0.1115
2025-09-26 14:04:49,593 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0202 | Val rms_score: 0.1123
2025-09-26 14:05:04,230 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0172 | Val rms_score: 0.1109
2025-09-26 14:05:20,977 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0177 | Val rms_score: 0.1133
2025-09-26 14:05:37,151 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0145 | Val rms_score: 0.1123
2025-09-26 14:05:53,713 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0181 | Val rms_score: 0.1134
2025-09-26 14:06:08,191 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0246 | Val rms_score: 0.1131
2025-09-26 14:06:23,702 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0172 | Val rms_score: 0.1122
2025-09-26 14:06:39,283 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0180 | Val rms_score: 0.1132
2025-09-26 14:06:55,656 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0159 | Val rms_score: 0.1130
2025-09-26 14:07:11,666 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0172 | Val rms_score: 0.1150
2025-09-26 14:07:27,515 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0151 | Val rms_score: 0.1139
2025-09-26 14:07:43,371 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0159 | Val rms_score: 0.1140
2025-09-26 14:07:59,517 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0152 | Val rms_score: 0.1123
2025-09-26 14:08:16,020 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0144 | Val rms_score: 0.1137
2025-09-26 14:08:31,992 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0146 | Val rms_score: 0.1123
2025-09-26 14:08:48,030 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0175 | Val rms_score: 0.1122
2025-09-26 14:09:03,642 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0181 | Val rms_score: 0.1114
2025-09-26 14:09:04,687 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Test rms_score: 0.1357
2025-09-26 14:09:05,144 - logs_modchembert_astrazeneca_ppb_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.1233, Std Dev: 0.0088
logs_modchembert_regression_ModChemBERT-MLM-TAFT/modchembert_deepchem_splits_run_astrazeneca_solubility_epochs100_batch_size32_20250926_140905.log
ADDED
@@ -0,0 +1,355 @@
2025-09-26 14:09:05,145 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Running benchmark for dataset: astrazeneca_solubility
2025-09-26 14:09:05,146 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - dataset: astrazeneca_solubility, tasks: ['y'], epochs: 100, learning rate: 3e-05, transform: True
2025-09-26 14:09:05,164 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 1 for dataset astrazeneca_solubility at 2025-09-26_14-09-05
2025-09-26 14:09:19,882 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.9278 | Val rms_score: 0.9321
2025-09-26 14:09:19,882 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 14:09:20,446 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9321
2025-09-26 14:09:35,865 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.6111 | Val rms_score: 0.9681
2025-09-26 14:09:51,954 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4375 | Val rms_score: 0.9570
2025-09-26 14:10:07,068 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3722 | Val rms_score: 0.9720
2025-09-26 14:10:23,354 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3088 | Val rms_score: 0.9091
2025-09-26 14:10:23,504 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 225
2025-09-26 14:10:24,147 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 5 with val rms_score: 0.9091
2025-09-26 14:10:38,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2514 | Val rms_score: 0.8941
2025-09-26 14:10:39,359 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 270
2025-09-26 14:10:39,926 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 6 with val rms_score: 0.8941
2025-09-26 14:10:56,158 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2281 | Val rms_score: 0.8823
2025-09-26 14:10:56,349 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 315
2025-09-26 14:10:57,063 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 7 with val rms_score: 0.8823
2025-09-26 14:11:10,044 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1847 | Val rms_score: 0.9396
2025-09-26 14:11:26,379 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.1711 | Val rms_score: 0.9139
2025-09-26 14:11:40,181 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1729 | Val rms_score: 0.9118
2025-09-26 14:11:56,471 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1431 | Val rms_score: 0.8961
2025-09-26 14:12:11,229 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1492 | Val rms_score: 0.9043
2025-09-26 14:12:27,361 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1271 | Val rms_score: 0.8968
2025-09-26 14:12:42,138 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1193 | Val rms_score: 0.8947
2025-09-26 14:12:58,330 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1368 | Val rms_score: 0.9516
2025-09-26 14:13:13,412 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.1086 | Val rms_score: 0.8601
2025-09-26 14:13:14,085 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 720
2025-09-26 14:13:14,688 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 16 with val rms_score: 0.8601
2025-09-26 14:13:30,719 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0931 | Val rms_score: 0.8684
2025-09-26 14:13:45,590 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0699 | Val rms_score: 0.8704
2025-09-26 14:14:00,601 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0747 | Val rms_score: 0.8982
2025-09-26 14:14:15,721 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0840 | Val rms_score: 0.9074
2025-09-26 14:14:31,037 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0924 | Val rms_score: 0.8821
2025-09-26 14:14:46,398 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0656 | Val rms_score: 0.8730
2025-09-26 14:15:01,771 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0571 | Val rms_score: 0.8796
2025-09-26 14:15:16,727 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0566 | Val rms_score: 0.8961
2025-09-26 14:15:31,277 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0578 | Val rms_score: 0.8580
2025-09-26 14:15:31,433 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1125
2025-09-26 14:15:31,989 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 25 with val rms_score: 0.8580
2025-09-26 14:15:47,555 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0580 | Val rms_score: 0.8588
2025-09-26 14:16:02,482 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0555 | Val rms_score: 0.8686
2025-09-26 14:16:18,094 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0573 | Val rms_score: 0.8720
2025-09-26 14:16:32,248 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0672 | Val rms_score: 0.8690
2025-09-26 14:16:47,797 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0542 | Val rms_score: 0.8581
2025-09-26 14:17:01,834 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0472 | Val rms_score: 0.8554
2025-09-26 14:17:02,412 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1395
2025-09-26 14:17:02,991 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 31 with val rms_score: 0.8554
2025-09-26 14:17:18,671 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0469 | Val rms_score: 0.8553
2025-09-26 14:17:18,862 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1440
2025-09-26 14:17:19,421 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 32 with val rms_score: 0.8553
2025-09-26 14:17:33,308 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0415 | Val rms_score: 0.8561
2025-09-26 14:17:49,054 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0398 | Val rms_score: 0.8447
2025-09-26 14:17:49,246 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1530
2025-09-26 14:17:49,860 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 34 with val rms_score: 0.8447
2025-09-26 14:18:03,889 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0398 | Val rms_score: 0.8444
2025-09-26 14:18:04,079 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1575
2025-09-26 14:18:04,636 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 35 with val rms_score: 0.8444
2025-09-26 14:18:20,177 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0498 | Val rms_score: 0.8768
2025-09-26 14:18:34,121 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0441 | Val rms_score: 0.8506
2025-09-26 14:18:49,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0287 | Val rms_score: 0.8492
2025-09-26 14:19:04,375 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0345 | Val rms_score: 0.8534
2025-09-26 14:19:19,327 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0335 | Val rms_score: 0.8484
2025-09-26 14:19:33,642 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0311 | Val rms_score: 0.8489
2025-09-26 14:19:48,696 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0306 | Val rms_score: 0.8514
2025-09-26 14:20:03,156 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0342 | Val rms_score: 0.8584
2025-09-26 14:20:17,520 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0377 | Val rms_score: 0.8442
2025-09-26 14:20:17,680 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1980
2025-09-26 14:20:18,249 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 44 with val rms_score: 0.8442
2025-09-26 14:20:33,825 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0356 | Val rms_score: 0.8415
2025-09-26 14:20:34,016 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2025
2025-09-26 14:20:34,584 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 45 with val rms_score: 0.8415
2025-09-26 14:20:49,391 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0299 | Val rms_score: 0.8507
2025-09-26 14:21:04,154 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0284 | Val rms_score: 0.8432
2025-09-26 14:21:18,880 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0276 | Val rms_score: 0.8414
2025-09-26 14:21:19,038 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2160
2025-09-26 14:21:19,601 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 48 with val rms_score: 0.8414
2025-09-26 14:21:34,478 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0273 | Val rms_score: 0.8427
2025-09-26 14:21:49,293 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0281 | Val rms_score: 0.8312
2025-09-26 14:21:49,481 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2250
2025-09-26 14:21:50,046 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 50 with val rms_score: 0.8312
2025-09-26 14:22:05,168 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0269 | Val rms_score: 0.8389
2025-09-26 14:22:20,312 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0254 | Val rms_score: 0.8451
2025-09-26 14:22:35,170 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0257 | Val rms_score: 0.8472
2025-09-26 14:22:49,357 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0294 | Val rms_score: 0.8375
2025-09-26 14:23:04,820 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0382 | Val rms_score: 0.8537
2025-09-26 14:23:18,642 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0285 | Val rms_score: 0.8449
2025-09-26 14:23:34,633 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0271 | Val rms_score: 0.8458
2025-09-26 14:23:48,767 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0225 | Val rms_score: 0.8518
2025-09-26 14:24:03,911 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0236 | Val rms_score: 0.8465
2025-09-26 14:24:17,834 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0231 | Val rms_score: 0.8499
2025-09-26 14:24:32,550 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0234 | Val rms_score: 0.8393
2025-09-26 14:24:47,264 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0253 | Val rms_score: 0.8403
2025-09-26 14:25:02,572 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0225 | Val rms_score: 0.8529
2025-09-26 14:25:17,150 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0255 | Val rms_score: 0.8438
2025-09-26 14:25:32,016 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0231 | Val rms_score: 0.8464
2025-09-26 14:25:46,879 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0218 | Val rms_score: 0.8479
2025-09-26 14:26:02,987 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0203 | Val rms_score: 0.8442
2025-09-26 14:26:17,805 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0203 | Val rms_score: 0.8417
2025-09-26 14:26:32,559 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0214 | Val rms_score: 0.8477
2025-09-26 14:26:47,501 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0200 | Val rms_score: 0.8492
2025-09-26 14:27:01,974 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0278 | Val rms_score: 0.8529
2025-09-26 14:27:17,973 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0237 | Val rms_score: 0.8448
2025-09-26 14:27:31,542 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0207 | Val rms_score: 0.8513
2025-09-26 14:27:47,043 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0233 | Val rms_score: 0.8440
2025-09-26 14:28:00,971 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0245 | Val rms_score: 0.8489
2025-09-26 14:28:16,617 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0205 | Val rms_score: 0.8470
2025-09-26 14:28:31,500 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0185 | Val rms_score: 0.8448
2025-09-26 14:28:47,062 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0174 | Val rms_score: 0.8550
2025-09-26 14:29:01,208 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0182 | Val rms_score: 0.8437
2025-09-26 14:29:15,754 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0200 | Val rms_score: 0.8515
2025-09-26 14:29:30,792 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0233 | Val rms_score: 0.8514
2025-09-26 14:29:46,058 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0190 | Val rms_score: 0.8471
2025-09-26 14:30:00,917 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0163 | Val rms_score: 0.8468
2025-09-26 14:30:15,415 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0171 | Val rms_score: 0.8470
2025-09-26 14:30:30,208 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0177 | Val rms_score: 0.8467
2025-09-26 14:30:44,861 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0161 | Val rms_score: 0.8448
2025-09-26 14:31:00,476 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0172 | Val rms_score: 0.8503
2025-09-26 14:31:14,447 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0161 | Val rms_score: 0.8478
2025-09-26 14:31:31,083 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0219 | Val rms_score: 0.8442
2025-09-26 14:31:45,342 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0194 | Val rms_score: 0.8527
2025-09-26 14:32:00,759 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0174 | Val rms_score: 0.8482
2025-09-26 14:32:15,070 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0170 | Val rms_score: 0.8477
2025-09-26 14:32:31,046 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0194 | Val rms_score: 0.8434
2025-09-26 14:32:45,585 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0180 | Val rms_score: 0.8493
2025-09-26 14:33:00,959 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0161 | Val rms_score: 0.8470
2025-09-26 14:33:15,612 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0155 | Val rms_score: 0.8474
2025-09-26 14:33:30,907 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0154 | Val rms_score: 0.8437
2025-09-26 14:33:45,580 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0190 | Val rms_score: 0.8504
2025-09-26 14:34:00,087 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0163 | Val rms_score: 0.8401
2025-09-26 14:34:14,426 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0167 | Val rms_score: 0.8415
2025-09-26 14:34:15,474 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8596
2025-09-26 14:34:15,876 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 2 for dataset astrazeneca_solubility at 2025-09-26_14-34-15
2025-09-26 14:34:30,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8000 | Val rms_score: 0.9277
2025-09-26 14:34:30,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 14:34:30,582 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9277
2025-09-26 14:34:44,811 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5500 | Val rms_score: 0.8623
2025-09-26 14:34:44,985 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 14:34:45,524 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.8623
2025-09-26 14:35:00,498 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4321 | Val rms_score: 0.8753
2025-09-26 14:35:15,224 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3347 | Val rms_score: 0.8893
2025-09-26 14:35:29,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.3063 | Val rms_score: 0.8933
2025-09-26 14:35:45,299 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2472 | Val rms_score: 0.9748
2025-09-26 14:36:00,005 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.1969 | Val rms_score: 0.9125
2025-09-26 14:36:15,536 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.1833 | Val rms_score: 0.9412
2025-09-26 14:36:29,629 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2125 | Val rms_score: 0.9829
2025-09-26 14:36:45,226 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.1806 | Val rms_score: 0.9261
2025-09-26 14:36:59,624 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1375 | Val rms_score: 0.9606
2025-09-26 14:37:15,747 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1313 | Val rms_score: 0.9306
2025-09-26 14:37:29,914 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1194 | Val rms_score: 0.9187
2025-09-26 14:37:45,553 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1297 | Val rms_score: 0.9259
2025-09-26 14:38:00,250 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1014 | Val rms_score: 0.9005
2025-09-26 14:38:15,315 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0957 | Val rms_score: 0.9058
2025-09-26 14:38:30,273 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0951 | Val rms_score: 0.9677
2025-09-26 14:38:45,222 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.0672 | Val rms_score: 0.9438
2025-09-26 14:38:59,168 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0868 | Val rms_score: 0.9050
2025-09-26 14:39:13,948 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0781 | Val rms_score: 0.8899
2025-09-26 14:39:28,895 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0719 | Val rms_score: 0.9118
2025-09-26 14:39:44,058 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0708 | Val rms_score: 0.9095
2025-09-26 14:40:00,136 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0754 | Val rms_score: 1.0072
2025-09-26 14:40:14,519 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.1514 | Val rms_score: 0.8731
2025-09-26 14:40:29,921 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0794 | Val rms_score: 0.8759
2025-09-26 14:40:43,511 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0719 | Val rms_score: 0.8801
2025-09-26 14:40:59,567 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0596 | Val rms_score: 0.8753
2025-09-26 14:41:13,469 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0545 | Val rms_score: 0.8737
2025-09-26 14:41:29,073 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0750 | Val rms_score: 0.8750
2025-09-26 14:41:43,316 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0517 | Val rms_score: 0.8760
2025-09-26 14:41:58,912 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0465 | Val rms_score: 0.8755
2025-09-26 14:42:13,591 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0469 | Val rms_score: 0.8864
2025-09-26 14:42:29,212 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0580 | Val rms_score: 0.8835
2025-09-26 14:42:44,052 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0607 | Val rms_score: 0.8726
2025-09-26 14:42:59,229 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0483 | Val rms_score: 0.8671
2025-09-26 14:43:14,098 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0412 | Val rms_score: 0.8764
2025-09-26 14:43:29,126 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0427 | Val rms_score: 0.8946
2025-09-26 14:43:43,822 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0434 | Val rms_score: 0.8674
2025-09-26 14:43:58,164 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0719 | Val rms_score: 0.8707
2025-09-26 14:44:13,489 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0563 | Val rms_score: 0.8683
2025-09-26 14:44:27,821 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0462 | Val rms_score: 0.8716
2025-09-26 14:44:43,879 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0590 | Val rms_score: 0.8505
2025-09-26 14:44:44,027 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 1890
2025-09-26 14:44:44,586 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 42 with val rms_score: 0.8505
2025-09-26 14:44:59,445 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0408 | Val rms_score: 0.8574
2025-09-26 14:45:14,805 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0370 | Val rms_score: 0.8607
2025-09-26 14:45:30,339 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0386 | Val rms_score: 0.8768
2025-09-26 14:45:45,724 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0424 | Val rms_score: 0.8734
2025-09-26 14:46:00,529 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0326 | Val rms_score: 0.8690
2025-09-26 14:46:16,062 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0304 | Val rms_score: 0.8728
2025-09-26 14:46:29,378 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0352 | Val rms_score: 0.8706
2025-09-26 14:46:44,968 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0319 | Val rms_score: 0.8600
2025-09-26 14:46:59,220 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0359 | Val rms_score: 0.8690
2025-09-26 14:47:15,085 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0326 | Val rms_score: 0.8636
2025-09-26 14:47:29,615 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0300 | Val rms_score: 0.8931
2025-09-26 14:47:45,262 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0339 | Val rms_score: 0.8715
2025-09-26 14:48:00,193 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0292 | Val rms_score: 0.8649
2025-09-26 14:48:15,287 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0281 | Val rms_score: 0.8574
2025-09-26 14:48:30,554 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0276 | Val rms_score: 0.8669
2025-09-26 14:48:45,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0271 | Val rms_score: 0.8654
2025-09-26 14:48:58,305 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0280 | Val rms_score: 0.8652
2025-09-26 14:49:13,146 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0269 | Val rms_score: 0.8680
2025-09-26 14:49:27,747 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0247 | Val rms_score: 0.8627
2025-09-26 14:49:43,172 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0262 | Val rms_score: 0.8639
2025-09-26 14:49:58,863 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0232 | Val rms_score: 0.8622
2025-09-26 14:50:13,361 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0243 | Val rms_score: 0.8640
2025-09-26 14:50:28,948 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0245 | Val rms_score: 0.8701
2025-09-26 14:50:43,140 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0226 | Val rms_score: 0.8643
2025-09-26 14:51:00,104 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0267 | Val rms_score: 0.8720
2025-09-26 14:51:14,202 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0276 | Val rms_score: 0.8677
2025-09-26 14:51:29,858 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0192 | Val rms_score: 0.8774
2025-09-26 14:51:44,285 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0241 | Val rms_score: 0.8692
2025-09-26 14:51:59,544 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0259 | Val rms_score: 0.8703
2025-09-26 14:52:14,508 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0221 | Val rms_score: 0.8665
2025-09-26 14:52:29,488 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0216 | Val rms_score: 0.8676
2025-09-26 14:52:44,385 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0219 | Val rms_score: 0.8597
2025-09-26 14:52:59,218 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0205 | Val rms_score: 0.8691
2025-09-26 14:53:13,639 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0232 | Val rms_score: 0.8706
2025-09-26 14:53:28,911 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0209 | Val rms_score: 0.8715
2025-09-26 14:53:43,660 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0227 | Val rms_score: 0.8741
2025-09-26 14:53:58,065 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0226 | Val rms_score: 0.8737
2025-09-26 14:54:12,880 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0219 | Val rms_score: 0.8702
2025-09-26 14:54:27,456 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0199 | Val rms_score: 0.8726
2025-09-26 14:54:43,018 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0295 | Val rms_score: 0.8731
2025-09-26 14:54:57,836 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0241 | Val rms_score: 0.8729
2025-09-26 14:55:13,480 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0203 | Val rms_score: 0.8732
2025-09-26 14:55:28,157 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0230 | Val rms_score: 0.8843
2025-09-26 14:55:43,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0252 | Val rms_score: 0.8607
2025-09-26 14:55:58,999 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0198 | Val rms_score: 0.8774
2025-09-26 14:56:14,660 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0194 | Val rms_score: 0.8676
2025-09-26 14:56:29,556 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0156 | Val rms_score: 0.8667
2025-09-26 14:56:45,234 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0185 | Val rms_score: 0.8626
2025-09-26 14:56:58,872 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0195 | Val rms_score: 0.8690
2025-09-26 14:57:14,754 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0204 | Val rms_score: 0.8694
2025-09-26 14:57:29,501 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0174 | Val rms_score: 0.8764
2025-09-26 14:57:44,813 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0181 | Val rms_score: 0.8715
2025-09-26 14:57:59,550 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0192 | Val rms_score: 0.8714
2025-09-26 14:58:14,751 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0179 | Val rms_score: 0.8702
2025-09-26 14:58:29,785 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0182 | Val rms_score: 0.8734
2025-09-26 14:58:44,471 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0170 | Val rms_score: 0.8661
2025-09-26 14:58:58,997 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0197 | Val rms_score: 0.8686
2025-09-26 14:59:13,096 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0187 | Val rms_score: 0.8739
2025-09-26 14:59:13,822 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8870
2025-09-26 14:59:14,232 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Starting triplicate run 3 for dataset astrazeneca_solubility at 2025-09-26_14-59-14
2025-09-26 14:59:28,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 1/100 | Train Loss: 0.8389 | Val rms_score: 0.9314
2025-09-26 14:59:28,017 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 45
2025-09-26 14:59:28,681 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 1 with val rms_score: 0.9314
2025-09-26 14:59:43,369 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 2/100 | Train Loss: 0.5917 | Val rms_score: 0.9087
2025-09-26 14:59:43,545 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 90
2025-09-26 14:59:44,075 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 2 with val rms_score: 0.9087
2025-09-26 14:59:58,872 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 3/100 | Train Loss: 0.4304 | Val rms_score: 0.8660
2025-09-26 14:59:59,049 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 135
2025-09-26 14:59:59,570 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 3 with val rms_score: 0.8660
2025-09-26 15:00:14,369 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 4/100 | Train Loss: 0.3667 | Val rms_score: 0.8849
2025-09-26 15:00:29,037 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 5/100 | Train Loss: 0.2875 | Val rms_score: 0.9057
2025-09-26 15:00:43,486 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 6/100 | Train Loss: 0.2764 | Val rms_score: 0.9770
2025-09-26 15:00:58,819 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 7/100 | Train Loss: 0.2427 | Val rms_score: 0.9304
2025-09-26 15:01:13,685 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 8/100 | Train Loss: 0.2056 | Val rms_score: 0.8880
2025-09-26 15:01:28,630 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 9/100 | Train Loss: 0.2391 | Val rms_score: 0.9165
2025-09-26 15:01:44,002 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 10/100 | Train Loss: 0.2014 | Val rms_score: 0.9142
2025-09-26 15:01:57,770 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 11/100 | Train Loss: 0.1757 | Val rms_score: 0.8967
2025-09-26 15:02:13,693 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 12/100 | Train Loss: 0.1477 | Val rms_score: 0.9254
2025-09-26 15:02:28,340 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 13/100 | Train Loss: 0.1160 | Val rms_score: 0.8887
2025-09-26 15:02:43,587 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 14/100 | Train Loss: 0.1151 | Val rms_score: 0.8874
2025-09-26 15:02:58,006 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 15/100 | Train Loss: 0.1042 | Val rms_score: 0.8827
2025-09-26 15:03:12,914 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 16/100 | Train Loss: 0.0965 | Val rms_score: 0.9091
2025-09-26 15:03:27,952 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 17/100 | Train Loss: 0.0958 | Val rms_score: 0.8813
2025-09-26 15:03:42,635 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 18/100 | Train Loss: 0.1039 | Val rms_score: 0.8597
2025-09-26 15:03:42,784 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 810
2025-09-26 15:03:43,305 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 18 with val rms_score: 0.8597
2025-09-26 15:03:57,597 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 19/100 | Train Loss: 0.0958 | Val rms_score: 0.8653
2025-09-26 15:04:11,689 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 20/100 | Train Loss: 0.0760 | Val rms_score: 0.8561
2025-09-26 15:04:11,871 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 900
2025-09-26 15:04:12,393 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 20 with val rms_score: 0.8561
2025-09-26 15:04:26,778 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 21/100 | Train Loss: 0.0743 | Val rms_score: 0.8658
2025-09-26 15:04:41,643 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 22/100 | Train Loss: 0.0722 | Val rms_score: 0.8756
2025-09-26 15:04:57,204 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 23/100 | Train Loss: 0.0634 | Val rms_score: 0.8800
2025-09-26 15:05:11,644 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 24/100 | Train Loss: 0.0944 | Val rms_score: 0.8625
2025-09-26 15:05:25,988 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 25/100 | Train Loss: 0.0737 | Val rms_score: 0.8726
2025-09-26 15:05:40,086 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 26/100 | Train Loss: 0.0615 | Val rms_score: 0.8744
2025-09-26 15:05:55,099 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 27/100 | Train Loss: 0.0536 | Val rms_score: 0.8661
2025-09-26 15:06:09,784 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 28/100 | Train Loss: 0.0569 | Val rms_score: 0.8769
2025-09-26 15:06:24,590 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 29/100 | Train Loss: 0.0918 | Val rms_score: 0.9090
2025-09-26 15:06:38,724 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 30/100 | Train Loss: 0.0889 | Val rms_score: 0.8911
2025-09-26 15:06:54,375 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 31/100 | Train Loss: 0.0729 | Val rms_score: 0.8913
2025-09-26 15:07:09,558 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 32/100 | Train Loss: 0.0555 | Val rms_score: 0.8723
2025-09-26 15:07:24,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 33/100 | Train Loss: 0.0497 | Val rms_score: 0.8771
2025-09-26 15:07:39,132 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 34/100 | Train Loss: 0.0466 | Val rms_score: 0.8797
2025-09-26 15:07:53,626 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 35/100 | Train Loss: 0.0472 | Val rms_score: 0.8692
2025-09-26 15:08:07,806 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 36/100 | Train Loss: 0.0404 | Val rms_score: 0.8715
2025-09-26 15:08:22,893 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 37/100 | Train Loss: 0.0394 | Val rms_score: 0.8708
2025-09-26 15:08:37,096 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 38/100 | Train Loss: 0.0350 | Val rms_score: 0.8671
2025-09-26 15:08:51,829 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 39/100 | Train Loss: 0.0387 | Val rms_score: 0.8572
2025-09-26 15:09:06,570 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 40/100 | Train Loss: 0.0384 | Val rms_score: 0.8775
2025-09-26 15:09:21,213 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 41/100 | Train Loss: 0.0332 | Val rms_score: 0.8801
2025-09-26 15:09:35,513 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 42/100 | Train Loss: 0.0354 | Val rms_score: 0.8748
2025-09-26 15:09:50,265 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 43/100 | Train Loss: 0.0348 | Val rms_score: 0.8678
2025-09-26 15:10:05,135 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 44/100 | Train Loss: 0.0339 | Val rms_score: 0.8740
2025-09-26 15:10:21,185 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 45/100 | Train Loss: 0.0345 | Val rms_score: 0.8638
2025-09-26 15:10:35,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 46/100 | Train Loss: 0.0307 | Val rms_score: 0.8725
2025-09-26 15:10:50,812 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 47/100 | Train Loss: 0.0279 | Val rms_score: 0.8657
2025-09-26 15:11:05,933 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 48/100 | Train Loss: 0.0318 | Val rms_score: 0.8651
2025-09-26 15:11:20,094 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 49/100 | Train Loss: 0.0240 | Val rms_score: 0.8696
2025-09-26 15:11:34,544 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 50/100 | Train Loss: 0.0304 | Val rms_score: 0.8678
2025-09-26 15:11:48,000 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 51/100 | Train Loss: 0.0299 | Val rms_score: 0.8650
2025-09-26 15:12:03,852 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 52/100 | Train Loss: 0.0291 | Val rms_score: 0.8547
2025-09-26 15:12:04,002 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Global step of best model: 2340
2025-09-26 15:12:04,527 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Best model saved at epoch 52 with val rms_score: 0.8547
2025-09-26 15:12:17,850 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 53/100 | Train Loss: 0.0288 | Val rms_score: 0.8744
2025-09-26 15:12:33,827 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 54/100 | Train Loss: 0.0288 | Val rms_score: 0.8549
2025-09-26 15:12:47,584 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 55/100 | Train Loss: 0.0271 | Val rms_score: 0.8672
2025-09-26 15:13:03,528 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 56/100 | Train Loss: 0.0273 | Val rms_score: 0.8594
2025-09-26 15:13:11,312 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 57/100 | Train Loss: 0.0269 | Val rms_score: 0.8706
2025-09-26 15:13:18,262 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 58/100 | Train Loss: 0.0258 | Val rms_score: 0.8690
2025-09-26 15:13:25,000 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 59/100 | Train Loss: 0.0273 | Val rms_score: 0.8700
2025-09-26 15:13:31,780 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 60/100 | Train Loss: 0.0283 | Val rms_score: 0.8814
2025-09-26 15:13:38,856 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 61/100 | Train Loss: 0.0267 | Val rms_score: 0.8662
2025-09-26 15:13:47,006 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 62/100 | Train Loss: 0.0257 | Val rms_score: 0.8727
2025-09-26 15:13:53,457 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 63/100 | Train Loss: 0.0261 | Val rms_score: 0.8708
2025-09-26 15:14:00,673 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 64/100 | Train Loss: 0.0243 | Val rms_score: 0.8670
2025-09-26 15:14:08,189 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 65/100 | Train Loss: 0.0234 | Val rms_score: 0.8649
2025-09-26 15:14:15,240 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 66/100 | Train Loss: 0.0234 | Val rms_score: 0.8770
2025-09-26 15:14:23,138 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 67/100 | Train Loss: 0.0204 | Val rms_score: 0.8700
2025-09-26 15:14:30,308 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 68/100 | Train Loss: 0.0241 | Val rms_score: 0.8638
2025-09-26 15:14:36,310 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 69/100 | Train Loss: 0.0241 | Val rms_score: 0.8731
2025-09-26 15:14:43,279 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 70/100 | Train Loss: 0.0224 | Val rms_score: 0.8670
2025-09-26 15:14:50,447 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 71/100 | Train Loss: 0.0227 | Val rms_score: 0.8686
2025-09-26 15:14:58,006 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 72/100 | Train Loss: 0.0232 | Val rms_score: 0.8684
2025-09-26 15:15:04,747 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 73/100 | Train Loss: 0.0202 | Val rms_score: 0.8702
2025-09-26 15:15:11,425 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 74/100 | Train Loss: 0.0198 | Val rms_score: 0.8769
2025-09-26 15:15:18,019 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 75/100 | Train Loss: 0.0205 | Val rms_score: 0.8839
2025-09-26 15:15:24,675 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 76/100 | Train Loss: 0.0207 | Val rms_score: 0.8742
2025-09-26 15:15:31,596 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 77/100 | Train Loss: 0.0229 | Val rms_score: 0.8701
2025-09-26 15:15:38,165 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 78/100 | Train Loss: 0.0230 | Val rms_score: 0.8684
2025-09-26 15:15:44,655 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 79/100 | Train Loss: 0.0224 | Val rms_score: 0.8670
2025-09-26 15:15:51,429 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 80/100 | Train Loss: 0.0205 | Val rms_score: 0.8652
2025-09-26 15:15:57,894 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 81/100 | Train Loss: 0.0211 | Val rms_score: 0.8751
2025-09-26 15:16:05,268 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 82/100 | Train Loss: 0.0221 | Val rms_score: 0.8639
2025-09-26 15:16:11,481 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 83/100 | Train Loss: 0.0220 | Val rms_score: 0.8622
2025-09-26 15:16:17,745 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 84/100 | Train Loss: 0.0208 | Val rms_score: 0.8684
2025-09-26 15:16:24,231 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 85/100 | Train Loss: 0.0284 | Val rms_score: 0.8997
2025-09-26 15:16:30,584 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 86/100 | Train Loss: 0.0399 | Val rms_score: 0.8909
2025-09-26 15:16:37,652 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 87/100 | Train Loss: 0.0316 | Val rms_score: 0.8641
2025-09-26 15:16:43,773 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 88/100 | Train Loss: 0.0247 | Val rms_score: 0.8636
2025-09-26 15:16:50,631 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 89/100 | Train Loss: 0.0230 | Val rms_score: 0.8690
2025-09-26 15:16:56,799 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 90/100 | Train Loss: 0.0199 | Val rms_score: 0.8761
2025-09-26 15:17:03,401 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 91/100 | Train Loss: 0.0205 | Val rms_score: 0.8717
2025-09-26 15:17:10,313 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 92/100 | Train Loss: 0.0211 | Val rms_score: 0.8730
2025-09-26 15:17:16,582 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 93/100 | Train Loss: 0.0197 | Val rms_score: 0.8653
2025-09-26 15:17:22,874 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 94/100 | Train Loss: 0.0182 | Val rms_score: 0.8686
2025-09-26 15:17:29,371 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 95/100 | Train Loss: 0.0203 | Val rms_score: 0.9025
2025-09-26 15:17:36,157 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 96/100 | Train Loss: 0.0297 | Val rms_score: 0.8682
2025-09-26 15:17:43,119 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 97/100 | Train Loss: 0.0250 | Val rms_score: 0.8614
2025-09-26 15:17:49,536 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 98/100 | Train Loss: 0.0217 | Val rms_score: 0.8699
2025-09-26 15:17:56,016 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 99/100 | Train Loss: 0.0236 | Val rms_score: 0.8598
2025-09-26 15:18:03,165 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Epoch 100/100 | Train Loss: 0.0292 | Val rms_score: 0.8643
2025-09-26 15:18:03,819 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Test rms_score: 0.8724
2025-09-26 15:18:04,363 - logs_modchembert_astrazeneca_solubility_epochs100_batch_size32 - INFO - Final Triplicate Test Results — Avg rms_score: 0.8730, Std Dev: 0.0112