---
license: cc-by-4.0
task_categories:
- tabular-classification
tags:
- chemistry
- drug-discovery
- protac
- protein-degradation
- benchmark
pretty_name: PROTAC-Bench
size_categories:
- 10K<n<100K
---
# PROTAC-Bench: A Cold-Target Benchmark for PROTAC Degradation Prediction
## Dataset Description
PROTAC-Bench is a merged PROTAC degradation dataset containing **10,748 entries** across **173 protein targets** (9,359 unique SMILES). It combines data from PROTAC-DB 3.0, Ribes et al. (2024), and DegradeMaster, with deduplication and canonical SMILES standardization applied across sources. Each entry carries a binary activity label (active if DC50 < 1 μM or Dmax > 50%), the target's UniProt accession, and the E3 ligase type (VHL/CRBN/Other). Pre-computed 7-property ADMET cascade scores are provided for all entries.
## Evaluation Protocol
All models are evaluated under the **Leave-One-Target-Out (LOTO)** protocol across **65 eligible targets** (≥10 entries, activity rate 10–90%). For each fold, all entries for one target are held out as the test set and the remaining entries form the training set. Performance is reported as mean AUROC across the 65 folds. Statistical significance against the RF+Morgan baseline is assessed with a paired Wilcoxon signed-rank test.
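For concreteness, here is a minimal sketch of the LOTO loop and significance test. The random forest settings are assumptions, not the benchmark's exact configuration, and `X` stands in for any per-molecule feature matrix aligned with `df`:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from scipy.stats import wilcoxon

def loto_aurocs(df: pd.DataFrame, X: np.ndarray) -> pd.Series:
    """Per-target AUROC under leave-one-target-out (illustrative sketch)."""
    y = df["label"].to_numpy()
    scores = {}
    for target in df["target_uniprot"].unique():
        test = (df["target_uniprot"] == target).to_numpy()
        # Eligibility mirrors the protocol: >=10 entries, 10-90% active
        if test.sum() < 10 or not 0.1 <= y[test].mean() <= 0.9:
            continue
        clf = RandomForestClassifier(n_estimators=500, random_state=0)  # assumed
        clf.fit(X[~test], y[~test])
        scores[target] = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])
    return pd.Series(scores)

# Paired Wilcoxon signed-rank test over the 65 per-fold AUROCs:
# stat, p = wilcoxon(model_aurocs - baseline_aurocs)
```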
## Dataset Structure
### Data Fields
- `smiles` (string): Canonical SMILES of the PROTAC molecule
- `target_uniprot` (string): UniProt accession of the target protein
- `e3_type` (string): E3 ligase type (VHL, CRBN, or Other)
- `label` (int): Binary activity label (1 = active, 0 = inactive)
- `dc50_nm` (float): DC50 in nanomolar (when available)
- `dmax_pct` (float): Dmax percentage (when available)
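A minimal loading sketch follows; the file name is illustrative, so substitute the actual data file shipped with this repo:

```python
import pandas as pd

# "protac_bench.csv" is a hypothetical name; use the repo's actual data file.
df = pd.read_csv("protac_bench.csv")
print(df[["smiles", "target_uniprot", "e3_type", "label"]].head())
print(df["target_uniprot"].nunique())  # 173 targets
print(df["label"].mean())              # overall activity rate
```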
### Data Splits
The dataset uses a Leave-One-Target-Out (LOTO) cross-validation protocol with 65 pre-defined folds (see `data/loto_folds.json`).
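The fold file's exact schema is defined by `data/loto_folds.json` itself; one plausible reading pattern, assuming a mapping from target IDs to test-row indices:

```python
import json
import numpy as np

with open("data/loto_folds.json") as f:
    folds = json.load(f)  # assumed schema: {uniprot_id: [test row indices]}

n_rows = 10748
for target, test_idx in folds.items():
    test_mask = np.zeros(n_rows, dtype=bool)
    test_mask[test_idx] = True
    # fit on rows where ~test_mask, report AUROC on rows where test_mask
```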
### Additional Files
- `data/admet_scores.csv`: 7-property ADMET cascade scores for all 10,748 entries
- `evaluation/evaluate.py`: Standardized LOTO evaluation script
- `evaluation/baselines.py`: RF+Morgan baseline reproduction
- `examples/example_submission.py`: Template for formatting predictions
## Quick Start
```bash
pip install -r evaluation/requirements.txt
# Run the RF+Morgan baseline (~2 min)
python evaluation/baselines.py
# Evaluate your own predictions
python evaluation/evaluate.py --predictions my_predictions.csv --output results.json
```
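The exact columns `evaluate.py` expects are defined by `examples/example_submission.py`; the writer below is a sketch that assumes one activity score per row, keyed by SMILES and target:

```python
import pandas as pd

def write_submission(df: pd.DataFrame, scores, path="my_predictions.csv"):
    """Illustrative only -- mirror examples/example_submission.py for the
    real schema. `scores` are predicted activity probabilities aligned
    row-by-row with the benchmark table `df`."""
    out = df[["smiles", "target_uniprot"]].copy()
    out["score"] = scores
    out.to_csv(path, index=False)
```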
## Baseline Results
| Model | Mean AUROC | Δ vs RF+Morgan | p-value |
|-------|-----------|----------------|---------|
| RF + Morgan (2048-bit) | 0.666 | — | — |
| RF + Morgan + ADMET | 0.687 | +0.021 | <0.05 |
| RF + Morgan + ADMET + k=5 | 0.700 | +0.034 | <0.01 |
| EGNN-27 | 0.801 | +0.135 | <0.001 |
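`evaluation/baselines.py` is the reference implementation of the RF+Morgan row; the core featurization looks roughly like this (fingerprint radius and forest size are assumed, conventional choices):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import rdFingerprintGenerator
from sklearn.ensemble import RandomForestClassifier

# 2048-bit Morgan fingerprints; radius 2 is an assumed, conventional choice
fpgen = rdFingerprintGenerator.GetMorganGenerator(radius=2, fpSize=2048)

def morgan_features(smiles_list):
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    return np.stack([fpgen.GetFingerprintAsNumPy(m) for m in mols])

clf = RandomForestClassifier(n_estimators=500, random_state=0)  # assumed settings
```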
## Dataset Creation
### Source Data
Merged from three public sources:
- **PROTAC-DB 3.0**: Curated PROTAC degradation data
- **Ribes et al. (2024)**: Published PROTAC activity data
- **DegradeMaster**: Comprehensive degradation database
### Curation
- Canonical SMILES standardization via RDKit
- Deduplication by canonical SMILES + target pair
- Binary labeling: active if DC50 < 1 μM OR Dmax > 50%
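A sketch of that curation logic, using RDKit canonicalization and the thresholds stated above; how the actual pipeline treats entries missing both DC50 and Dmax is not specified here:

```python
import pandas as pd
from rdkit import Chem

def canonical(smiles: str):
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def curate(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    df["smiles"] = df["smiles"].map(canonical)
    df = df.dropna(subset=["smiles"])               # drop unparseable SMILES
    # Active if DC50 < 1 uM (1000 nM) OR Dmax > 50%; rows missing both
    # values come out inactive here, which may differ from the real pipeline.
    df["label"] = ((df["dc50_nm"] < 1000) | (df["dmax_pct"] > 50)).astype(int)
    # Deduplicate on (canonical SMILES, target) pairs
    return df.drop_duplicates(subset=["smiles", "target_uniprot"])
```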
### Personal and Sensitive Information
This dataset contains no personal or sensitive information. All entries are chemical structures (SMILES) and protein identifiers.
## Considerations for Using the Data
### Known Biases
- **Kinase-dominated**: 24 of 65 evaluation targets are kinases
- **E3 ligase imbalance**: Only VHL and CRBN ligases are well-represented
- **Publication bias**: Positive results may be over-represented in source databases
### Citation
```bibtex
@inproceedings{protacbench2025,
  title={PROTAC-Bench: A Cold-Target Benchmark for PROTAC Degradation Prediction},
  author={[Authors TBD]},
  booktitle={NeurIPS Datasets and Benchmarks Track},
  year={2025}
}
```
## License
- **Data**: CC-BY-4.0
- **Code**: MIT