---
language: en
license: mit
pretty_name: BELKA-DEL Experimental Benchmark Data
tags:
- chemistry
- biology
dataset_summary: >-
  A curated collection of PDB structures, designed as an experimental validation
  benchmark for models trained on the Big Encoded Library for Chemical
  Assessment (BELKA) DNA-Encoded Library (DEL)
configs:
- config_name: raw
  data_files: raw/*.csv
- config_name: processed
  data_files: processed/*.csv
---

# BELKA-DEL Experimental Benchmark Data
This dataset comprises a curated collection of PDB structures, designed as an experimental validation benchmark for models trained on the Big Encoded Library for Chemical Assessment (BELKA) DNA-Encoded Library (DEL). Each structure includes at least one bound small molecule ligand, providing a robust basis for benchmarking model performance in accurately identifying potential binders to BELKA protein targets.
## Introduction to the BELKA DEL Dataset

The BELKA (Big Encoded Library for Chemical Assessment) dataset contains roughly 100 million small molecules screened against three protein targets of biological interest:

- Bromodomain-containing protein 4 (BRD4)
- Soluble epoxide hydrolase (sEH)
- Human serum albumin (HSA)
Originally released by Leash Bio for a Kaggle competition, BELKA provides a large-scale resource for training and evaluating machine learning models on DNA-Encoded Library data, with the goal of enabling generalizable predictions of small-molecule binding.
For competition details and citation information, visit the [BELKA Kaggle competition page](https://kaggle.com/competitions/leash-BELKA).
## Splits

### raw

- Description: Raw experimental data as collected from the PDB
- Files: located in `raw/`

### processed

- Description: Cleaned and processed data
- Files: located in `processed/`
## Usage

```python
from datasets import load_dataset

# Load the raw split
raw_data = load_dataset("RosettaCommons/BELKA-DEL-ExperimentalData", 'raw')
```
If the dataset loads correctly, this should output:

```text
Downloading readme: 4.06kB [00:00, 10.0MB/s]
Downloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.37M/2.37M [00:00<00:00, 12.3MB/s]
Generating train split: 20563 examples [00:00, 114183.04 examples/s]
```
```python
from datasets import load_dataset

# Load the processed split
processed_data = load_dataset("RosettaCommons/BELKA-DEL-ExperimentalData", 'processed')
```
If the dataset loads correctly, this should output:

```text
Downloading readme: 4.06kB [00:00, 21.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 766k/766k [00:00<00:00, 4.13MB/s]
Generating train split: 2548 examples [00:00, 38587.25 examples/s]
```
## Data Collection

The following queries were used to retrieve all Protein Data Bank (PDB) structures containing BRD4, sEH, or HSA with at least one bound small-molecule ligand:

- BRD4 query summary: Text search inside all attributes = `BRD4` AND Number of Distinct Non-polymer Entities >= 1
- sEH query summary: Accession Code(s) = `P34913` AND Database Name = `UniProt` AND Number of Distinct Non-polymer Entities >= 1
- HSA query summary: Text search inside all attributes = `albumin` AND Scientific Name of the Source Organism = `Homo sapiens` AND Number of Distinct Non-polymer Entities >= 1

These queries yielded 640, 128, and 1,920 initial structures (some with multiple ligands) for BRD4, sEH, and HSA, respectively. The raw data can be found in `raw/PDB-query_rawdata.csv`.
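For reference, a query like the sEH one above can be expressed programmatically against the RCSB Search API. The sketch below only builds the JSON payload without sending it; the endpoint URL and attribute names are assumptions based on the public RCSB API documentation, not part of this dataset's tooling:

```python
# Hedged sketch: build (but do not send) an RCSB Search API v2 payload
# matching the sEH query summary above. Endpoint and attribute names are
# assumptions from the public RCSB documentation.
import json

SEARCH_URL = "https://search.rcsb.org/rcsbsearch/v2/query"  # assumed endpoint

def build_seh_query(accession="P34913", min_nonpolymer=1):
    """Return a query dict: UniProt accession AND >= N non-polymer entities."""
    return {
        "query": {
            "type": "group",
            "logical_operator": "and",
            "nodes": [
                {"type": "terminal", "service": "text", "parameters": {
                    "attribute": "rcsb_polymer_entity_container_identifiers."
                                 "reference_sequence_identifiers.database_accession",
                    "operator": "exact_match", "value": accession}},
                {"type": "terminal", "service": "text", "parameters": {
                    "attribute": "rcsb_polymer_entity_container_identifiers."
                                 "reference_sequence_identifiers.database_name",
                    "operator": "exact_match", "value": "UniProt"}},
                {"type": "terminal", "service": "text", "parameters": {
                    "attribute": "rcsb_entry_info.nonpolymer_entity_count",
                    "operator": "greater_or_equal", "value": min_nonpolymer}},
            ],
        },
        "return_type": "entry",
    }

print(json.dumps(build_seh_query(), indent=2))
```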
## Data Cleaning

Some proteins in this dataset have multiple bound ligands, each of which appears as a separate row. In those extra rows, only the Ligand ID, Ligand Name, Ligand SMILES, and Protein columns are populated; the remaining protein-level columns are left empty. To fix this, such rows have their NA values filled from the row directly above. This is accomplished by the `fill_nans_iteratively` function in `src/utils.py`.
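The fill step amounts to a column-wise forward fill over the protein-level columns. A minimal sketch, assuming a pandas DataFrame (the actual `fill_nans_iteratively` in `src/utils.py` may differ in detail):

```python
# Minimal sketch of the NaN-filling step: forward-fill every column that is
# not already populated on ligand rows, so each ligand row carries the full
# protein-level information from the row above.
import pandas as pd

def fill_nans_from_above(df, populated_cols=("Ligand ID", "Ligand Name",
                                             "Ligand SMILES", "Protein")):
    """Forward-fill protein-level columns; ligand-level columns are untouched."""
    out = df.copy()
    fill_cols = [c for c in out.columns if c not in populated_cols]
    out[fill_cols] = out[fill_cols].ffill()
    return out
```

Because the fill propagates downward, this relies on the raw CSV keeping each protein's ligand rows contiguous and ordered below the fully populated first row.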
The dataset also contains 'ligands' that are actually ions, cofactors, or crystallization agents. These are removed by keeping only ligands with a molecular weight between 150 and 1000 Da. This is accomplished by the `filter_by_molecular_weight` function in `src/utils.py`.
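A minimal sketch of that molecular-weight filter, assuming a pandas DataFrame with a "Ligand SMILES" column and using RDKit for the weight calculation (the actual `filter_by_molecular_weight` in `src/utils.py` may differ in detail):

```python
# Minimal sketch of the molecular-weight filter: parse each ligand SMILES with
# RDKit and keep only rows whose molecular weight falls in [min_mw, max_mw].
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors

def filter_by_molecular_weight(df, smiles_col="Ligand SMILES",
                               min_mw=150.0, max_mw=1000.0):
    """Drop ions/cofactors/crystallization agents by molecular weight."""
    def mol_weight(smiles):
        mol = Chem.MolFromSmiles(smiles)
        # Unparseable SMILES become NaN and are dropped by the range check.
        return Descriptors.MolWt(mol) if mol is not None else float("nan")

    weights = df[smiles_col].map(mol_weight)
    keep = weights.between(min_mw, max_mw)  # NaN compares False, so it is dropped
    return df[keep].reset_index(drop=True)
```

The 150 Da floor excludes small ions and solvents (e.g. water at ~18 Da), while the 1000 Da ceiling excludes large cofactors; both bounds are the ones stated above.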
Requirements for curating/cleaning the data:

- NumPy
- Pandas
- RDKit
### Cleaning the data

To clean the data, clone this repository and run (from the parent directory):

```shell
python src/clean_data.py
```
The cleaned data will be written to the `processed` directory, and the output of the script should look similar to:

```text
length of data before processing: 20563
length of data after filling NaN values (length should be the same): 20563
[09:04:16] Explicit valence for atom # 54 N, 5, is greater than permitted
[09:04:16] Explicit valence for atom # 54 N, 5, is greater than permitted
[09:04:16] Explicit valence for atom # 16 O, 3, is greater than permitted
[09:04:16] Explicit valence for atom # 0 B, 4, is greater than permitted
[09:04:16] Explicit valence for atom # 17 In, 4, is greater than permitted
[09:04:16] Explicit valence for atom # 0 B, 4, is greater than permitted
length of data after removing cofactors, crystallization factors, or ions: 2548
data cleaning successful, resulting csv written to: data/processed/processed_data.csv
```
## Reference
Andrew Blevins, Ian K Quigley, Brayden J Halverson, Nate Wilkinson, Rebecca S Levin, Agastya Pulapaka, Walter Reade, and Addison Howard. NeurIPS 2024 - Predict New Medicines with BELKA. https://kaggle.com/competitions/leash-BELKA, 2024. Kaggle.
## Citation

```bibtex
@misc{blevins2024belka,
  title={NeurIPS 2024 -- Predict New Medicines with BELKA},
  author={Blevins, Andrew and Quigley, Ian K and Halverson, Brayden J and Wilkinson, Nate and Levin, Rebecca S and Pulapaka, Agastya and Reade, Walter and Howard, Addison},
  year={2024},
  howpublished={Kaggle Competition},
  url={https://kaggle.com/competitions/leash-BELKA}
}
```
## Dataset Card Authors
Marissa Dolorfino (mdolo@umich.edu)