---
dataset_info:
  features:
  - name: Y_affinity
    dtype: float64
  - name: smiles
    dtype: string
  - name: seq
    dtype: string
  - name: Y_binary
    dtype: int64
  - name: Y_log_affinity
    dtype: float64
  - name: selfies
    dtype: string
  splits:
  - name: train
    num_bytes: 606248442.8293403
    num_examples: 740566
  download_size: 161743110
  dataset_size: 606248442.8293403
---
# From the [GLASS GPCR database](https://aideepmed.com/GLASS1/), converted to SELFIES

Steps to prepare the database:

1. Download the GLASS database

```bash
wget https://zhanggroup.org/GLASS/downloads/interactions_active.tsv
wget https://zhanggroup.org/GLASS/downloads/interactions_inactives.tsv
wget https://zhanggroup.org/GLASS/downloads/targets.tsv
wget https://zhanggroup.org/GLASS/downloads/ligands.tsv
```

2. Select just the columns of interest

```bash
cut -d$'\t' -f6,9 ligands.tsv > ligands2.tsv
cut -d$'\t' -f2,5 targets.tsv > targets2.tsv
cut -d$'\t' -f1,2,4,5 interactions_active.tsv > interactions_active2.tsv
cut -d$'\t' -f1,2,4,5 interactions_inactives.tsv > interactions_inactives2.tsv
```
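The same column selection can also be done in pandas with `usecols`, which avoids the intermediate `*2.tsv` files. A minimal sketch on a toy TSV — the two selected headers match step 3, but the other column names here are made up:

```python
import io

import pandas as pd

# Toy stand-in for ligands.tsv; only the two selected columns matter.
tsv = (
  "ID\tName\tFormula\tMass\tCAS\tInChI Key\tInChI\tIUPAC\tCanonical SMILES\n"
  "1\tethanol\tC2H6O\t46.07\t64-17-5\tLFQSCWFLJHTTHZ-UHFFFAOYSA-N\tx\tx\tCCO\n"
)
ligands = pd.read_csv(io.StringIO(tsv), sep="\t",
                      usecols=["InChI Key", "Canonical SMILES"])
```

`usecols` keeps the file's original column order, so the result has `InChI Key` first, exactly like the `cut -f6,9` output.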

3. Parse interactions

```python
import pandas as pd
import numpy as np

ligands = pd.read_csv('ligands2.tsv', sep='\t')
proteins = pd.read_csv('targets2.tsv', sep='\t')

actives = pd.read_csv('interactions_active2.tsv', sep='\t')
inactives = pd.read_csv('interactions_inactives2.tsv', sep='\t')

# Lookup tables: InChI Key -> SMILES and UniProt ID -> FASTA sequence
ligand_dict = dict(zip(ligands['InChI Key'], ligands['Canonical SMILES']))
protein_dict = dict(zip(proteins['UniProt ID'], proteins['FASTA Sequence']))

def rehydrate_interactions(x):
  uniprot_id = x['UniProt ID']
  inchi_id = x['InChI Key']

  smile = ligand_dict[inchi_id]
  fasta = protein_dict[uniprot_id]

  return smile, fasta

actives[["smiles", "seq"]] = actives.apply(rehydrate_interactions, axis=1, result_type="expand")
inactives[["smiles", "seq"]] = inactives.apply(rehydrate_interactions, axis=1, result_type="expand")

actives = actives.drop(columns=['UniProt ID', 'InChI Key'])
inactives = inactives.drop(columns=['UniProt ID', 'InChI Key'])

actives['Y_binary'] = 1
inactives['Y_binary'] = 0

merged = pd.concat([actives, inactives], axis=0)
merged = merged[merged['Unit'] == 'nM']

merged = merged.drop(columns=['Unit'])
merged = merged.reset_index(drop=True)

merged['Value'] = merged['Value'].str.strip('<> ')         # drop '<'/'>' qualifiers
merged['Value'] = merged['Value'].str.split(' - ').str[0]  # ranges keep their lower bound
merged['Value'] = pd.to_numeric(merged['Value'], errors='coerce')
merged = merged.rename(columns={"Value": "Y_affinity"})
merged = merged[merged['Y_affinity'] != 0]
merged['Y_log_affinity'] = 6 - np.log10(merged['Y_affinity'])  # i.e. -log10(affinity in mM)
merged = merged[merged['Y_log_affinity'] >= 0]
merged = merged[merged['Y_log_affinity'] <= 10]

shuffled = merged.sample(frac=1).reset_index(drop=True)
```
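The `Value` cleanup above has to cope with qualifiers like `>`/`<`, ranges, and unparseable entries. On a few toy strings (not real GLASS rows) it behaves like this:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"Value": ["1000", "> 5000", "10 - 100", "n/a"]})
toy["Value"] = toy["Value"].str.strip('<> ')          # drop qualifiers
toy["Value"] = toy["Value"].str.split(' - ').str[0]   # ranges keep their lower bound
toy["Value"] = pd.to_numeric(toy["Value"], errors="coerce")  # "n/a" -> NaN
toy["Y_log_affinity"] = 6 - np.log10(toy["Value"])
# 1000 nM maps to log-affinity 3, and "10 - 100" keeps its lower bound (10 nM -> 5)
```

Rows that fail to parse become NaN and fall out in the subsequent range filters, since NaN fails both `>= 0` and `<= 10` comparisons.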

4. Convert SMILES to [SELFIES](https://github.com/aspuru-guzik-group/selfies)

```python
from datasets import Dataset
import selfies

dataset = Dataset.from_pandas(shuffled, split='train')

def smiles_to_selfies(example):
  try:
    return {"selfies": selfies.encoder(example["smiles"])}
  except selfies.EncoderError:
    return {"selfies": None}

dataset_selfies = dataset.map(smiles_to_selfies)
dataset_selfies = dataset_selfies.filter(lambda example: example["selfies"] is not None)
```
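The map-then-filter pattern above can be sketched in plain Python without the `datasets` or `selfies` dependencies. Here `toy_encoder` is a stand-in for `selfies.encoder` that fails on one input, and the list comprehensions play the role of `.map` and `.filter`:

```python
def toy_encoder(smiles):
  # Stand-in for selfies.encoder: raises on an "unencodable" input.
  if "?" in smiles:
    raise ValueError("cannot encode")
  return "[" + "][".join(smiles) + "]"  # illustration only, not real SELFIES

def smiles_to_selfies(example):
  try:
    return {**example, "selfies": toy_encoder(example["smiles"])}
  except ValueError:
    return {**example, "selfies": None}

rows = [{"smiles": "CCO"}, {"smiles": "C?C"}]
mapped = [smiles_to_selfies(r) for r in rows]           # plays the role of .map
kept = [r for r in mapped if r["selfies"] is not None]  # plays the role of .filter
```

Encoding failures become `None` rather than raising, so a single bad SMILES cannot abort the whole pass; the filter then drops those rows.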

5. Compute protein (`Rostlab/prot_bert`) and ligand (`ejy/SELFIES-RoBERTa-PubChem10M`) embeddings

```python
from sentence_transformers import SentenceTransformer
import re
import pickle
import os.path

# Protein embeddings
all_proteins_shortened = dataset_selfies.unique('seq')

# prot_bert expects rare amino acids (U, Z, O, B) mapped to X...
protein_sequences = [re.sub(r"[UZOB]", "X", sequence) for sequence in all_proteins_shortened]
# ...and residues separated by spaces
protein_sequences = [" ".join(sequence) for sequence in protein_sequences]

protein_model = SentenceTransformer('Rostlab/prot_bert')
protein_model.max_seq_length = 512

protein_emb = protein_model.encode(protein_sequences)

# NB: keys are the preprocessed (space-separated, X-substituted) sequences
protein_embeddings = dict(zip(protein_sequences, protein_emb))

with open('glass_protein_embeddings.pkl', "wb") as fOut:
  pickle.dump(protein_embeddings, fOut, protocol=pickle.HIGHEST_PROTOCOL)

# Ligand embeddings
all_selfies_shortened = dataset_selfies.unique('selfies')

ligand_sequences = list(all_selfies_shortened)

ligand_model = SentenceTransformer('ejy/SELFIES-RoBERTa-PubChem10M')
ligand_model.max_seq_length = 128

ligand_emb = ligand_model.encode(ligand_sequences)

ligand_embeddings = dict(zip(ligand_sequences, ligand_emb))

with open('glass_ligand_embeddings.pkl', "wb") as fOut:
  pickle.dump(ligand_embeddings, fOut, protocol=pickle.HIGHEST_PROTOCOL)
```
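A sketch of how the pickled embedding dicts might be used downstream, with toy 2-D vectors standing in for the real prot_bert / RoBERTa embeddings. Note that for the protein pickle, lookups must use the same space-separated, X-substituted form of the sequence that was used as the key:

```python
import pickle

import numpy as np

# Toy stand-in for glass_protein_embeddings.pkl.
embeddings = {"SEQ_A": np.array([1.0, 0.0]), "SEQ_B": np.array([0.0, 1.0])}
blob = pickle.dumps(embeddings, protocol=pickle.HIGHEST_PROTOCOL)
loaded = pickle.loads(blob)  # same shape as pickle.load(open(path, 'rb'))

def cosine(u, v):
  return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Nearest stored sequence to a query embedding, by cosine similarity.
query = np.array([0.9, 0.1])
nearest = max(loaded, key=lambda k: cosine(loaded[k], query))
```

For the full ~740k-row dataset a linear scan like this is fine for one-off queries; batched matrix products or an ANN index would be the next step for repeated lookups.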