---
library_name: transformers
tags:
- chemistry
- molecule
license: mit
---

# Model Card for ErbB1 MLP

### Model Description

`erbb1_mlp` is an MLP-style model trained to predict ErbB1 (EGFR) binding affinity from 
embeddings generated by the [roberta_zinc_480m](https://huggingface.co/entropy/roberta_zinc_480m) 
model.

- **Developed by:** Karl Heyer
- **License:** MIT


### Direct Use

Usage example. Note that input SMILES strings should be canonicalized (a small helper sketch follows the code block).

With the Transformers and Sentence Transformers libraries:

```python
from sentence_transformers import models, SentenceTransformer
from transformers import AutoModel

# build a mean-pooled SMILES embedding model from roberta_zinc_480m
transformer = models.Transformer("entropy/roberta_zinc_480m",
                                 max_seq_length=256,
                                 model_args={"add_pooling_layer": False})

pooling = models.Pooling(transformer.get_word_embedding_dimension(),
                         pooling_mode="mean")

roberta_zinc = SentenceTransformer(modules=[transformer, pooling])

# load the MLP head that maps embeddings to binding-affinity predictions
erbb1_mlp = AutoModel.from_pretrained("entropy/erbb1_mlp", trust_remote_code=True)

# smiles should be canonicalized
smiles = [
    "Brc1cc2c(NCc3ccccc3)ncnc2s1",
    "Brc1cc2c(NCc3ccccn3)ncnc2s1",
    "Brc1cc2c(NCc3cccs3)ncnc2s1",
    "Brc1cc2c(NCc3ccncc3)ncnc2s1",
    "Brc1cc2c(Nc3ccccc3)ncnc2s1"
]

embeddings = roberta_zinc.encode(smiles, convert_to_tensor=True)
predictions = erbb1_mlp(embeddings).predictions
```
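
For the canonicalization note above, a minimal helper sketch. RDKit is an assumption here; any canonicalization consistent with the training data would work:

```python
from rdkit import Chem

def canonicalize(smiles: str) -> str:
    # parse and re-emit the SMILES to get RDKit's canonical form
    return Chem.MolToSmiles(Chem.MolFromSmiles(smiles))

smiles = [canonicalize(s) for s in smiles]
```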

### Training Procedure

#### Preprocessing

ErbB1 ligands were downloaded from ChEMBL (`target_chembl_id="CHEMBL203"`, `type="IC50"`, 
`relation="="`, `assay_type="B"`). Results were filtered for assays reporting IC50 values in nM 
against *Homo sapiens*, then canonicalized and deduplicated. IC50 values were converted to pIC50 
values. The final dataset contains 7327 data points; a sketch of this pipeline follows.
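
A minimal sketch of this preprocessing, assuming the `chembl_webresource_client` and RDKit packages; the exact field names and filters in the original pipeline may differ:

```python
import math

from chembl_webresource_client.new_client import new_client
from rdkit import Chem

activities = new_client.activity.filter(
    target_chembl_id="CHEMBL203",
    standard_type="IC50",
    standard_relation="=",
    assay_type="B",
)

data = {}
for act in activities:
    # keep nM IC50 measurements against Homo sapiens targets
    if act["standard_units"] != "nM" or act["target_organism"] != "Homo sapiens":
        continue
    if not act["canonical_smiles"] or not act["standard_value"]:
        continue
    mol = Chem.MolFromSmiles(act["canonical_smiles"])
    if mol is None:
        continue
    smiles = Chem.MolToSmiles(mol)                           # canonicalize
    pic50 = 9.0 - math.log10(float(act["standard_value"]))   # IC50 (nM) -> pIC50
    data.setdefault(smiles, pic50)                           # deduplicate on canonical SMILES
```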

Prior to training, pIC50 values were normalized. The model was trained on normalized values,
and uses the saved mean/variance of the dataset to denormalize predictions.
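
In code, the normalization and its inverse amount to the following sketch, where `mean` and `std` stand in for the dataset statistics the model stores internally:

```python
import torch

pic50 = torch.tensor(list(data.values()))
mean, std = pic50.mean(), pic50.std()

targets = (pic50 - mean) / std   # normalized training targets
# at inference, the model reverses this on its raw output:
# prediction = raw_output * std + mean
```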

#### Training Hyperparameters

The model was trained for 30 epochs with a batch size of 32, a learning rate of 1e-3, a weight 
decay of 1e-4, and cosine learning rate scheduling.
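
A minimal PyTorch sketch of a loop matching these hyperparameters, reusing `embeddings` and `targets` from the sketches above; the MLP architecture shown is illustrative, not the released one:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

EPOCHS, BATCH_SIZE = 30, 32

mlp = nn.Sequential(          # illustrative architecture
    nn.Linear(768, 256),      # 768 = assumed embedding width of roberta_zinc_480m
    nn.ReLU(),
    nn.Linear(256, 1),
)
optimizer = torch.optim.AdamW(mlp.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
loss_fn = nn.MSELoss()

loader = DataLoader(TensorDataset(embeddings.cpu(), targets.unsqueeze(1)),
                    batch_size=BATCH_SIZE, shuffle=True)

for _ in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(mlp(x), y).backward()
        optimizer.step()
    scheduler.step()          # cosine decay stepped once per epoch
```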

## Model Card Authors

Karl Heyer

## Model Card Contact

karl@darmatterai.xyz
