---
dataset_info:
  features:
  - name: smiles
    dtype: string
  - name: ZINC_id
    dtype: string
  - name: selfies
    dtype: string
  splits:
  - name: train
    num_bytes: 934973541.8725228
    num_examples: 4015268
  download_size: 341776659
  dataset_size: 934973541.8725228
---
# From ZINC20 ['In-stock, Lead-like'](https://zinc20.docking.org/tranches/home/) tranche, converted to SELFIES

Steps to prepare the dataset:

1) Select the appropriate tranches from ZINC20

- Select 'Purch' -> 'In-stock'
- Select 'Predefined Subsets' -> 'Lead-Like'
- Select 'Download Format' -> 'SMILES (*.smi)'
- Select 'Download Method' -> 'Raw URLs'

2) Download and concatenate the SMILES

```bash
# Download all ZINC20 tranches from 'in-stock, lead-like' subset
mkdir zinc
wget -i ZINC-downloader-2D-smi.uri -P zinc

# Remove first line of every file and save into txt file
for i in zinc/*; do tail -n +2 "$i" > "$i".txt; done

# Concatenate all created files into one (contains 4015274 ligands)
cat zinc/*.txt > zinc_all.txt
```
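The `tail -n +2` step above strips the single header line each tranche file carries before concatenation. A self-contained demo of that behaviour (the filename and ZINC id here are made up):

```shell
# Create a synthetic two-line .smi file: a header row, then one data row
printf 'smiles zinc_id\nCCO ZINC000000000001\n' > demo.smi

# Keep everything from line 2 onward, i.e. drop the header
tail -n +2 demo.smi
```

Because every generated `.txt` file holds only data rows, the final `cat` produces one ligand per line.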

3) Parse the concatenated text file into a Huggingface dataset

```python
from datasets import load_dataset

dataset = load_dataset('text', data_files='zinc_all.txt')

# Split SMILES from ZINC_id and store them as separate dataset features
def split_text(example):
  split_item = example["text"].split()
  return {"smiles": split_item[0], "ZINC_id": split_item[1]}

dataset = dataset.map(split_text)
dataset = dataset.remove_columns("text")
```
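`map` passes one row at a time to `split_text`. On a hypothetical row (the SMILES is aspirin; the ZINC id is made up for illustration) the transformation looks like this:

```python
# A row as the 'text' loader yields it: whitespace-separated SMILES and ID
row = {"text": "CC(=O)Oc1ccccc1C(=O)O ZINC000000000053"}

def split_text(example):
  split_item = example["text"].split()
  return {"smiles": split_item[0], "ZINC_id": split_item[1]}

print(split_text(row))
# {'smiles': 'CC(=O)Oc1ccccc1C(=O)O', 'ZINC_id': 'ZINC000000000053'}
```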

4) Convert SMILES to [SELFIES](https://github.com/aspuru-guzik-group/selfies)

```python
import selfies

# Encode each SMILES string; molecules that SELFIES cannot encode are
# marked with None and filtered out afterwards
def smiles_to_selfies(example):
  try:
    return {"selfies": selfies.encoder(example["smiles"])}
  except selfies.EncoderError:
    return {"selfies": None}

dataset = dataset.map(smiles_to_selfies)
dataset = dataset.filter(lambda example: example["selfies"] is not None)
```