Commit · 8d08d2f
Parent(s): 2d62678
Update README.md
README.md CHANGED

metrics:
- accuracy
tags:
- chemistry
---

# Molecular BERT Pretrained Using ChEMBL Database

This model was pretrained following the methodology described in the paper [Pushing the Boundaries of Molecular Property Prediction for Drug Discovery with Multitask Learning BERT Enhanced by SMILES Enumeration](https://spj.science.org/doi/10.34133/research.0004). The original model was trained with custom code; this project adapts it to the Hugging Face Transformers framework.
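
Because the model targets the Transformers API, it can be loaded with the standard `AutoTokenizer`/`AutoModelForMaskedLM` classes. A minimal sketch: the repository id is a placeholder, and it assumes the SMILES tokenizer was exported in a Transformers-compatible format.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# "<user>/<repo>" is a placeholder for the actual Hub id of this model.
tokenizer = AutoTokenizer.from_pretrained("<user>/<repo>")
model = AutoModelForMaskedLM.from_pretrained("<user>/<repo>")

# Tokenize a SMILES string (aspirin) and run a forward pass.
inputs = tokenizer("CC(=O)Oc1ccccc1C(=O)O", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```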

## Model Details

The model architecture is based on BERT. The key configuration details are:

```python
from transformers import BertConfig

# `tokenizer_pretrained` (the SMILES tokenizer) and `max_seq_len` are
# defined earlier in the pretraining script.
config = BertConfig(
    vocab_size=len(tokenizer_pretrained.vocab),
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,
    hidden_act="gelu",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=max_seq_len,
    type_vocab_size=1,
    pad_token_id=tokenizer_pretrained.vocab["[PAD]"],
    position_embedding_type="absolute",
)
```
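
For a rough sense of scale, an encoder can be instantiated directly from such a configuration. A sketch with the two externally defined values replaced by hypothetical placeholders:

```python
from transformers import BertConfig, BertForMaskedLM

VOCAB_SIZE = 70    # hypothetical SMILES vocabulary size
MAX_SEQ_LEN = 128  # hypothetical maximum sequence length

config = BertConfig(
    vocab_size=VOCAB_SIZE,
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,
    max_position_embeddings=MAX_SEQ_LEN,
    type_vocab_size=1,
)
model = BertForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")  # roughly 6-7M at these sizes
```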

## Pretraining Database

The model was pretrained using data from the ChEMBL database, version 33. You can download the database from [ChEMBL](https://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/latest/).
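
As an illustration of how the pretraining corpus can be extracted, here is a sketch assuming the SQLite distribution of ChEMBL 33 has been downloaded and unpacked; the table and column names follow the standard ChEMBL schema:

```python
import sqlite3

# Path assumes chembl_33_sqlite.tar.gz was extracted in place.
conn = sqlite3.connect("chembl_33/chembl_33_sqlite/chembl_33.db")

# compound_structures.canonical_smiles holds one SMILES per compound.
smiles = [
    row[0]
    for row in conn.execute(
        "SELECT canonical_smiles FROM compound_structures "
        "WHERE canonical_smiles IS NOT NULL"
    )
]
conn.close()
print(f"Loaded {len(smiles):,} SMILES strings")
```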

## Performance

The pretrained model achieves an accuracy of 0.9672 on a held-out test set comprising 10% of the ChEMBL dataset.
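
The evaluation script is not included in this card, but the 90/10 split can be reproduced along these lines; the random seed is an arbitrary assumption:

```python
from sklearn.model_selection import train_test_split

# `smiles` is the list loaded from ChEMBL above; the 10% test fraction
# matches the card, while the seed is an assumption.
train_smiles, test_smiles = train_test_split(
    smiles, test_size=0.1, random_state=42
)
print(len(train_smiles), len(test_smiles))
```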