# Dataset Card for TopU-LBVS
Multi-target benchmark for ligand-based virtual screening (LBVS) under hard-negative screening conditions.
Submitted to the NeurIPS 2026 Datasets and Benchmarks track (under review).
- 📄 Paper: TopU-LBVS: A Realistic Multi-Target Benchmark for Ligand-Based Virtual Screening (Kumar, Zhou, Shiralkar, Huang, Coskunuzer)
- 💻 Code: https://github.com/topu-benchmark/topu-lbvs
- 📜 License: CC-BY-SA-4.0
## Dataset Description

### Dataset Summary
TopU-LBVS is a multi-target benchmark for ligand-based virtual screening (LBVS) covering 93 protein targets across 7 protein classes (cytochrome P450 enzymes, GPCRs, ion channels, kinases, nuclear receptors, proteases, and miscellaneous enzymes), curated from ChEMBL 35.
For each target, we construct a TopU library of property-matched, structurally similar hard decoys at a fixed 1:40 active-to-decoy ratio. Decoys are selected via a constrained genetic algorithm so that simple physicochemical filters and nearest-neighbour fingerprint retrieval cannot separate them from actives.
The benchmark is organised into three evaluation settings, exposed as four protocols:

| Protocol | Purpose | Targets |
|---|---|---|
| `topu-lbvs-full` | Train on historical ChEMBL* SAR, evaluate on hard TopU library | 93 |
| `topu-lbvs-few-tier1` | Few-shot TopU → TopU (Tier 1: <50 TopU actives, 6:2:2 split) | 46 |
| `topu-lbvs-few-tier2` | Few-shot TopU → TopU (Tier 2: ≥50 TopU actives, 7:1:2 split) | 47 |
| `topu-lbvs-mini` | 7-target compact protocol with paired random-decoy control | 7 |
A canonical list of all 93 targets, their PDB / UniProt / ChEMBL identifiers, protein classes, and per-target compound counts is provided in `targets_master.csv` at the repo root.
### Supported Tasks and Leaderboards
- `tabular-classification` — Per-target binary classification of compounds as active or inactive against a fixed protein target. The standard formulation uses 2048-bit Morgan (ECFP4) fingerprints as input features and binary activity labels as targets.
- `graph-ml` — The same task formulated as molecular-graph property prediction. Each compound is represented as an atom-bond graph (9-dim node features, 3-dim edge features); models such as GIN, GAT, GPS, and D-MPNN consume this representation, built directly from SMILES (see the featurisation sketch below).
- Few-shot molecular property prediction — The `topu-lbvs-few-tier1` and `topu-lbvs-few-tier2` configurations support evaluation of episodic learning, meta-learning, and other data-efficient methods, with as few as 6 actives in the training pool for the smallest Tier 1 targets.
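To make the two input representations concrete, here is a minimal sketch that featurises one molecule both ways. It assumes RDKit and PyTorch Geometric are installed; the baselines' exact featurisation code may differ in detail.

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from torch_geometric.utils import from_smiles

smi = "CC(=O)Oc1ccccc1C(=O)O"  # stand-in molecule; use any SMILES from the dataset

# Tabular view: 2048-bit Morgan (ECFP4) fingerprint, radius 2
mol = Chem.MolFromSmiles(smi)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

# Graph view: PyG's from_smiles produces a 9-dim atom / 3-dim bond
# featurisation, matching the dimensions described above
graph = from_smiles(smi)  # Data(x=[n_atoms, 9], edge_attr=[n_edges, 3], ...)
```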
The primary leaderboard metric varies by protocol:
| Protocol | Primary metric | Validation metric |
|---|---|---|
| `topu-lbvs-full`, `topu-lbvs-mini` | EF@1% | PR-AUC |
| `topu-lbvs-few-tier1`, `topu-lbvs-few-tier2` | EF@10% | PR-AUC |
Secondary metrics (EF@5%, ROC-AUC, BEDROC at α=20, LogAUC) are reported in the paper appendix.
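For reference, EF@x% compares the hit rate among the top-ranked x% of the library to the library-wide hit rate. A minimal NumPy sketch (the official evaluation code may handle score ties differently):

```python
import numpy as np

def enrichment_factor(y_true, y_score, frac=0.01):
    """EF@frac: active rate in the top `frac` of ranked compounds,
    divided by the active rate of the whole library."""
    y_true = np.asarray(y_true)
    order = np.argsort(-np.asarray(y_score))          # highest scores first
    n_top = max(1, int(np.ceil(frac * len(y_true))))
    hits_top = y_true[order[:n_top]].sum()
    return (hits_top / n_top) / (y_true.sum() / len(y_true))
```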
### Languages
The dataset contains chemical structures, not natural language. Compounds are encoded as canonical SMILES strings produced by the RDKit canonicalisation pipeline. SMILES is a domain-specific text representation; many models in this benchmark (GIN, GAT, GPS, D-MPNN) operate on the molecular graph derived from SMILES, while one (MolFormer) treats SMILES as a sequence and applies a pretrained transformer.
## Dataset Structure

### Data Instances
Each protein target has its own subdirectory containing fingerprint NPZs, a CSV companion file with SMILES and metadata, and (for Setting 1) seed-specific train/validation index splits. A representative record from `topu-lbvs-full/egfr/CHEMBL203_topUnbiased.csv` corresponds to one NPZ row: a 2048-bit Morgan fingerprint, the binary label (1 / 0), the canonical SMILES, and the ChEMBL ID.
### Data Fields
Every `*_ecfp4.npz` file contains four arrays:

| Key | Shape | Dtype | Description |
|---|---|---|---|
| `X` | (n, 2048) | uint8 | Morgan fingerprints (radius 2, 2048 bits, RDKit) |
| `y` | (n,) | int | Binary activity label (1 = active, 0 = inactive) |
| `smiles` | (n,) | object | Canonical SMILES (RDKit, isomericSmiles=False) |
| `ids` | (n,) | object | ChEMBL compound IDs (e.g. CHEMBL35820) |
Companion *.csv files contain the same compounds in plain tabular form for inspection.
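A minimal loading sketch; the path is illustrative of the `*_ecfp4.npz` pattern, and `allow_pickle=True` is required for the object-dtype `smiles` and `ids` arrays:

```python
import numpy as np

# Illustrative path; actual filenames follow the *_ecfp4.npz pattern
data = np.load("topu-lbvs-full/egfr/CHEMBL203_final_ecfp4.npz", allow_pickle=True)

X = data["X"]            # (n, 2048) uint8 Morgan fingerprints
y = data["y"]            # (n,) binary activity labels
smiles = data["smiles"]  # (n,) object array of canonical SMILES
ids = data["ids"]        # (n,) object array of ChEMBL compound IDs
```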
For Setting 1, each target's `splits/seed_{2026,2027,2028}/` directory contains the files below; a loading sketch follows the table.

| File | Description |
|---|---|
| `train_idx.npy` | int64 indices into `*_final_ecfp4.npz` for the training fold |
| `val_idx.npy` | int64 indices for the validation fold (15% stratified) |
| `split_info.json` | Metadata: seed, ratio, augmentation provenance |
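A sketch of reassembling one Setting 1 fold (the paths and NPZ filename are illustrative; adjust them to the actual target directory):

```python
import json
import numpy as np

target_dir = "topu-lbvs-full/egfr"  # hypothetical target directory
data = np.load(f"{target_dir}/CHEMBL203_final_ecfp4.npz", allow_pickle=True)

train_idx = np.load(f"{target_dir}/splits/seed_2026/train_idx.npy")
val_idx = np.load(f"{target_dir}/splits/seed_2026/val_idx.npy")

X_train, y_train = data["X"][train_idx], data["y"][train_idx]
X_val, y_val = data["X"][val_idx], data["y"][val_idx]

with open(f"{target_dir}/splits/seed_2026/split_info.json") as f:
    split_info = json.load(f)  # seed, ratio, augmentation provenance
```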
### Data Splits
**Setting 1 — `topu-lbvs-full`:**
For each target, the ChEMBL* training pool is split 85% / 15% (stratified random, seeds 2026/2027/2028) at a fixed 1:10 active-to-decoy ratio. The full TopU library is the test set at a 1:40 active-to-decoy ratio. Targets with too few inactives have their negative pool augmented with same-class inactives (the selections are released).
**Setting 2 — `topu-lbvs-few-tier1` and `topu-lbvs-few-tier2`:**
Splits are drawn from each target's TopU library at the fixed 1:40 active-to-decoy ratio. Tier 1 (<50 TopU actives) uses a 6:2:2 active split; Tier 2 (≥50 actives) uses 7:1:2. Active compounds are split by Bemis–Murcko scaffold; inactives are randomly assigned to preserve the 1:40 ratio in each fold.
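The scaffold assignment can be reproduced in spirit with RDKit's Bemis–Murcko utility. This sketch shows the grouping step only; use the released splits for comparable numbers.

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def group_by_scaffold(smiles_list):
    """Map each Bemis-Murcko scaffold SMILES to the compound indices sharing it."""
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smi)].append(i)
    return groups  # whole scaffold groups are then assigned to train/val/test
```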
**Setting 3 (mini) — `topu-lbvs-mini`:**
For each of the 7 mini targets, both an `s1/` subfolder (TopU hard-decoy test) and an `s3/` subfolder (random ChEMBL* decoy test) are provided. The two test sets share the same training pool, the same number of test actives, and the same 1:40 ratio — they differ only in test-decoy construction. The EF@1% gap between the two thus measures only the effect of decoy difficulty.
## Dataset Creation

### Curation Rationale
Standard LBVS benchmarks (DUD-E, MUV, DEKOIS, LIT-PCBA) have well-documented limitations: random or easily separable decoys, limited target coverage, no fixed learning protocols, and active/decoy pairs that can be distinguished by simple physicochemical descriptors such as molecular weight or LogP. Models that perform well on these benchmarks can fail dramatically under realistic prospective screening conditions.
TopU-LBVS is designed to expose this gap by jointly providing:
- Hard decoys — property-matched and structurally similar to actives, optimised against a Random Forest validator (cross-validation AUROC < 0.6 on physicochemical + Morgan features).
- Realistic 1:40 imbalance rather than artificial balance.
- 93 targets across 7 classes rather than a single family.
- Fixed splits and seeds for reproducible learning-based evaluation.
- Paired random-decoy control that isolates how much random-decoy evaluation overestimates performance — in our baselines, mean EF@1% drops nearly 4× when random decoys are replaced with TopU hard decoys.
### Source Data

#### Initial Data Collection and Normalization
All bioactivity data is sourced from ChEMBL 35 (released August 2024), specifically the per-target activities tables filtered to the four canonical potency endpoints IC50, Ki, Kd, and EC50. Records with ambiguous target assignments or inconsistent assay annotations are discarded.
Molecules are standardised via RDKit: canonical SMILES, salt and solvent fragment removal, charge normalisation. Compounds that fail RDKit sanitisation are dropped.
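A sketch of such a pipeline using RDKit's rdMolStandardize module; the release's exact operations and their order may differ.

```python
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

def standardize(smiles):
    """Sanitise, strip salt/solvent fragments, normalise charges,
    and emit a canonical non-isomeric SMILES (None if parsing fails)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                                  # fails sanitisation -> dropped
        return None
    mol = rdMolStandardize.Cleanup(mol)
    mol = rdMolStandardize.FragmentParent(mol)       # keep the parent fragment
    mol = rdMolStandardize.Uncharger().uncharge(mol)
    return Chem.MolToSmiles(mol, isomericSmiles=False)
```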
#### Who are the source language producers?
Bioactivity measurements are produced by the global medicinal-chemistry research community and aggregated by the ChEMBL group at the European Bioinformatics Institute (EMBL-EBI). The dataset inherits all upstream provenance — see https://www.ebi.ac.uk/chembl/ for original sources.
### Annotations

#### Annotation process
For each target, continuous activity measurements are binarised at pChEMBL > 4.9 → active, otherwise inactive. The 4.9 threshold was chosen empirically to maximise per-target coverage while maintaining assay reliability. Compounds with conflicting labels (e.g. active under one assay, inactive under another) are resolved as active to avoid penalising true positives discovered later.
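In pandas terms, the rule amounts to thresholding and then taking a per-compound maximum, so any active record wins. Column names here are illustrative, not the release schema:

```python
import pandas as pd

records = pd.DataFrame({
    "chembl_id": ["CHEMBL1", "CHEMBL1", "CHEMBL2"],
    "pchembl_value": [5.2, 4.1, 4.5],  # two conflicting assays for CHEMBL1
})

records["active"] = (records["pchembl_value"] > 4.9).astype(int)
# Conflicts resolve to active: per-compound max over assay records
labels = records.groupby("chembl_id")["active"].max()  # CHEMBL1 -> 1, CHEMBL2 -> 0
```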
#### Who are the annotators?
There are no human annotators specific to TopU-LBVS. All labels are derived programmatically from ChEMBL bioactivity records via the threshold rule above. The TopU hard-decoy selection itself is fully automated via a constrained genetic algorithm (see Appendix A.2 of the paper).
### Personal and Sensitive Information
None. The dataset consists of small-molecule chemical structures (SMILES), ChEMBL identifiers, and binary activity labels against protein targets. No human-subject data, no personally identifiable information, no patient records.
## Considerations for Using the Data

### Social Impact of Dataset
TopU-LBVS is intended to improve the rigour and reproducibility of computational drug discovery research. By exposing the gap between random-decoy and hard-negative screening performance, the benchmark may help reduce overoptimistic claims about molecular machine-learning models and encourage methods that work under realistic early-stage discovery conditions. Improved virtual screening could reduce the time and cost of identifying lead compounds for diseases with unmet medical need.
Dual-use risk note: improved virtual-screening methods could in principle be misused to prioritise harmful bioactive compounds. TopU-LBVS, however, is a retrospective evaluation resource only — it provides no generative design capability, no synthesis routes, no prospective experimental validation, and no compound-purchase pathway. The release is focused on transparent evaluation, not molecule generation or deployment.
### Discussion of Biases
- Target-class imbalance. The 93 targets are not class-balanced. Kinases, GPCRs, and miscellaneous enzymes contribute the most tasks; ion channels, proteases, and nuclear receptors fewer. This reflects the structure of public bioactivity coverage in ChEMBL rather than a deliberate sampling choice. Class-averaged metrics (reported in the paper) partially compensate.
- Chemical-space bias. ChEMBL skews toward chemotypes that have already been explored in the medicinal-chemistry literature. Compounds from underrepresented chemical spaces (covalent inhibitors, macrocycles, PROTACs, PPI modulators) are sparse.
- Threshold-binarisation noise. The fixed pChEMBL > 4.9 cutoff does not capture experimental uncertainty, assay context, or graded potency: a compound at pChEMBL = 4.95 is labelled identically to one at 9.0.
- Inactive labels are noisy. Inactives in ChEMBL are often "tested and not active in this assay" rather than confirmed inactive across all conditions, so some decoys may be mis-labelled actives. The TopU hard-decoy selection process partially mitigates this by selecting decoys from a curated inactive pool, but cannot remove the noise entirely.
### Other Known Limitations
- Retrospective and 2D ligand-based. No 3D protein structure information, docking scores, conformer ensembles, or prospective experimental validation. The benchmark does not evaluate structure-based screening or hybrid ligand–protein methods.
- Adversarial by design. TopU decoys emphasise worst-case structural similarity to actives, so absolute EF values are systematically lower than on standard benchmarks. This is intentional and exposes failure modes — but it may overestimate difficulty relative to early-exploration screening with diverse libraries.
- ChEMBL-only. Conclusions may not directly generalise to underrepresented target families, proprietary chemical spaces, or non-ChEMBL data sources.
- Tier 1 variance. Few-shot tasks with fewer than 50 actives have high per-target metric variance. Class-level and overall averages are more reliable than individual-target EF estimates for these tasks.
A full discussion is in Appendix F of the paper.
## Additional Information

### Dataset Curators
- Surbhi Kumar — UT Dallas, Mathematical Sciences (co-first author)
- Yuhe Zhou — National Institute of Biological Sciences, Beijing (co-first author)
- Varun Shiralkar — UT Dallas, Computer Science
- Niu Huang — National Institute of Biological Sciences, Beijing (co-senior author)
- Baris Coskunuzer — UT Dallas, Mathematical Sciences (co-senior author)
### Licensing Information
The TopU-LBVS dataset is released under CC-BY-SA-4.0. The companion code (https://github.com/topu-benchmark/topu-lbvs) is released under the MIT License.
ChEMBL is released under CC-BY-SA-3.0. The curated TopU-LBVS data inherits a compatible CC-BY-SA-4.0 license. Users redistributing the data or derived works must preserve the CC-BY-SA terms and provide attribution to both ChEMBL and this dataset.
### Citation Information
If you use TopU-LBVS in your work, please cite both this benchmark and ChEMBL:
```bibtex
@unpublished{topu_lbvs_2026,
  title  = {TopU-LBVS: A Realistic Multi-Target Benchmark for Ligand-Based Virtual Screening},
  author = {Kumar, Surbhi and Zhou, Yuhe and Shiralkar, Varun and Huang, Niu and Coskunuzer, Baris},
  year   = {2026},
  note   = {Under review at NeurIPS 2026 Datasets and Benchmarks Track}
}

@article{zdrazil2024chembl,
  title   = {The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods},
  author  = {Zdrazil, Barbara and others},
  journal = {Nucleic Acids Research},
  year    = {2024}
}
```
### Contributions
We thank the ChEMBL team at EMBL-EBI for maintaining the underlying bioactivity database, and the broader medicinal-chemistry and cheminformatics community whose data this benchmark builds on. We thank the authors of RDKit, PyTorch Geometric, Chemprop, and MolFormer for the open-source tools used in the reference baselines.
For issues, questions, and discussion, please use the GitHub issue tracker.