Update README.md
README.md CHANGED
@@ -145,7 +145,7 @@ outputs = tokenizer.batch_encode_plus(smiles_list, padding=True, truncation=True
 ## 📚 Early VAE Evaluation (vs. ChemBERTa's) [WIP for Scaling]

-1st Epoch, on ~13K samples of len(token_ids)<=25; embed_dim=64, hidden_dim=128, latent_dim=64, num_layers=2; batch_size= 16 * 4 (grad acc)
+Using `benchmark_simpler.py`: 1st Epoch, on ~13K samples of len(token_ids)<=25; embed_dim=64, hidden_dim=128, latent_dim=64, num_layers=2; batch_size= 16 * 4 (grad acc)

 Latent Space Visualization based on SMILES Interpolation Validity
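The evaluation the diff references scores latent-space interpolation by how many decoded interpolants are valid SMILES. A minimal sketch of that metric, assuming two latent vectors `z1`/`z2` from the VAE encoder (latent_dim=64 per the config above); `decode` and `is_valid_smiles` are hypothetical stand-ins for the model's decoder and an RDKit-style validity check:

```python
# Sketch of interpolation-validity scoring between two latent vectors.
# decode() and is_valid_smiles() are hypothetical stand-ins: a real run
# would use the VAE's decoder and e.g. RDKit's MolFromSmiles for validity.

def lerp(z1, z2, alpha):
    """Linear interpolation between two latent vectors (plain lists here)."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z1, z2)]

def interpolation_validity(z1, z2, decode, is_valid_smiles, steps=10):
    """Fraction of decoded interpolants (including endpoints) that are
    valid SMILES strings."""
    hits = 0
    for i in range(steps + 1):
        alpha = i / steps
        smiles = decode(lerp(z1, z2, alpha))
        hits += bool(is_valid_smiles(smiles))
    return hits / (steps + 1)
```

A higher score suggests a smoother latent space, since points between two encoded molecules still decode to chemically parseable strings.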