# ZINC_4M_SELFIES
Dataset metadata:

```yaml
dataset_info:
  features:
    - name: smiles
      dtype: string
    - name: ZINC_id
      dtype: string
    - name: selfies
      dtype: string
  splits:
    - name: train
      num_bytes: 934973541.8725228
      num_examples: 4015268
  download_size: 341776659
  dataset_size: 934973541.8725228
```

From the ZINC20 'In-stock, Lead-like' tranche, converted to SELFIES.

Steps to prepare the dataset:

1. Select the appropriate tranche from ZINC20:
   - Select 'Purch' -> 'In-stock'
   - Select 'Predefined Subsets' -> 'Lead-Like'
   - Select 'Download Format' -> 'SMILES (*.smi)'
   - Select 'Download Method' -> 'Raw URLs'
2. Download and concatenate the SMILES:

```bash
# Download all ZINC20 tranches from the 'in-stock, lead-like' subset
mkdir zinc
wget -i ZINC-downloader-2D-smi.uri -P zinc

# Remove the header line of every file and save the result as a .txt file
for i in zinc/*; do tail -n +2 "$i" > "$i".txt; done

# Concatenate all created files into one (contains 4015274 ligands)
cat zinc/*.txt > zinc_all.txt
```
3. Parse the concatenated text file into a Hugging Face dataset:

```python
from datasets import load_dataset

dataset = load_dataset('text', data_files='zinc_all.txt')

# Split the SMILES string from the ZINC_id and store them as separate dataset features
def split_text(example):
  split_item = example["text"].split()
  return {"smiles": split_item[0], "ZINC_id": split_item[1]}

dataset = dataset.map(split_text)
dataset = dataset.remove_columns("text")
```
4. Convert SMILES to SELFIES:

```python
import selfies

# Encode each SMILES string; molecules that SELFIES cannot represent
# are marked with None and filtered out afterwards
def smiles_to_selfies(example):
  try:
    return {"selfies": selfies.encoder(example["smiles"])}
  except selfies.EncoderError:
    return {"selfies": None}

dataset = dataset.map(smiles_to_selfies)
dataset = dataset.filter(lambda example: example["selfies"] is not None)
```