---
dataset_info:
  features:
  - name: smiles
    dtype: string
  - name: ZINC_id
    dtype: string
  - name: selfies
    dtype: string
  splits:
  - name: train
    num_bytes: 934973541.8725228
    num_examples: 4015268
  download_size: 341776659
  dataset_size: 934973541.8725228
---

# From the ZINC20 ['In-stock, Lead-like'](https://zinc20.docking.org/tranches/home/) tranche, converted to SELFIES
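
The prepared data can be loaded directly with the `datasets` library (a minimal sketch; the repository id below is a placeholder, substitute this dataset's actual id on the Hugging Face Hub):

```python
from datasets import load_dataset

# Hypothetical repository id; replace with this dataset's actual id on the Hub
dataset = load_dataset("<user>/<dataset-name>")
```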

Steps to prepare the dataset:

1) Select the appropriate tranche from ZINC20

- Select 'Purch' -> 'In-stock'
- Select 'Predefined Subsets' -> 'Lead-Like'
- Select 'Download Format' -> 'SMILES (*.smi)'
- Select 'Download Method' -> 'Raw URLs'

2) Download and concatenate the SMILES

```bash
# Download all ZINC20 tranches from the 'in-stock, lead-like' subset
mkdir zinc
wget -i ZINC-downloader-2D-smi.uri -P zinc

# Remove the header line from every downloaded file and save the result as a .txt file
for i in zinc/*.smi; do tail -n +2 "$i" > "$i".txt; done

# Concatenate all created files into one (contains 4015274 ligands)
cat zinc/*.txt > zinc_all.txt
```
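
As a quick sanity check, you can confirm the concatenated file has one line per ligand (a minimal sketch; the expected count of 4015274 is taken from the comment above):

```python
# Count the lines in the concatenated file; each line holds one "SMILES ZINC_id" pair
with open("zinc_all.txt") as f:
    num_lines = sum(1 for _ in f)

print(num_lines)  # expected: 4015274
```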

3) Parse the concatenated text file into a Hugging Face dataset

```python
from datasets import load_dataset

# Each line of zinc_all.txt becomes one example with a single 'text' field
dataset = load_dataset('text', data_files='zinc_all.txt')

# Split the SMILES string from the ZINC_id and store them as separate dataset features
def split_text(example):
    split_item = example["text"].split()
    return {"smiles": split_item[0], "ZINC_id": split_item[1]}

dataset = dataset.map(split_text)
dataset = dataset.remove_columns("text")
```
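
To verify the split, you can inspect the first record (the values shown in the comment are placeholders; the actual strings depend on your download):

```python
# load_dataset('text', ...) returns a DatasetDict with a single 'train' split
print(dataset["train"][0])
# e.g. {'smiles': '<some SMILES string>', 'ZINC_id': 'ZINC<some id>'}
```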

4) Convert SMILES to [SELFIES](https://github.com/aspuru-guzik-group/selfies)

```python
import selfies

# Encode each SMILES string as SELFIES; mark molecules that cannot be encoded with None
def smiles_to_selfies(example):
    try:
        return {"selfies": selfies.encoder(example["smiles"])}
    except selfies.EncoderError:
        return {"selfies": None}

dataset = dataset.map(smiles_to_selfies)

# Drop the molecules that failed to encode
dataset = dataset.filter(lambda example: example["selfies"] is not None)
```
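
The filter accounts for the gap between the 4015274 concatenated ligands and the 4015268 examples in the train split: 6 molecules fail to encode and are dropped. Optionally, you can spot-check the conversion and persist the result. This is a minimal sketch; the output directory name `zinc20_selfies` is an arbitrary choice, and `selfies.decoder` may return a SMILES string that differs syntactically from the input while denoting the same molecule:

```python
# Spot-check: decode the first SELFIES string back to SMILES
print(selfies.decoder(dataset["train"][0]["selfies"]))

# Persist the processed dataset (output directory name is arbitrary)
dataset.save_to_disk("zinc20_selfies")
```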