ARC-Bind Energy Prediction
Overview
ARC-Bind is a synthetic 2D binding dataset designed to study how machine learning models learn to predict molecular interactions. The goal is to understand model reasoning about binding in a setting small enough to fully inspect and debug.
Inspiration: ARC-AGI Format
Inspired by François Chollet's ARC-AGI benchmark, ARC-Bind uses 16×16 pixel grids with discrete color tokens to represent binding:
- Protein: Light blue (token 8) - large, contiguous region with binding pockets
- Ligand: Light gray scaffold with functional groups
- Chemistry:
  - Red/blue pixels = complementary polar interactions (charges, H-bonds)
  - Yellow pixels = hydrophobic regions
The format emphasizes abstraction and systematic generalization - the tasks are easy for humans but challenging for conventional deep models, especially in the low-data regime.
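As a rough illustration of the grid encoding, here is a toy fragment built with NumPy. Only the light-blue protein token (8) is confirmed above; the other token values (RED, BLUE, YELLOW, EMPTY) are placeholder assumptions for this sketch:

```python
import numpy as np

PROTEIN = 8   # light blue, confirmed above
RED = 2       # assumed token: polar interaction (charge / H-bond)
BLUE = 1      # assumed token: complementary polar interaction
YELLOW = 4    # assumed token: hydrophobic region
EMPTY = 0     # assumed token: background

# Toy 6x6 fragment of a protein with a small binding pocket
grid = np.full((6, 6), PROTEIN, dtype=np.int64)
grid[2:4, 2:4] = EMPTY   # carve out a pocket
grid[2, 2] = RED         # polar contact lining the pocket
grid[2, 3] = BLUE        # complementary polar contact
grid[3, 3] = YELLOW      # hydrophobic patch
```

The real grids are 16×16 (protein) and 5×5 (ligand); this fragment only shows how discrete tokens stand in for chemistry.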
Energy Prediction Task
This dataset is specifically designed for high-throughput virtual screening - predicting binding affinity without explicitly solving for the optimal pose, analogous to DNA-Encoded Library (DEL) screening in real drug discovery.
Dataset Description
This dataset contains 89,730 protein-ligand pairs for binding energy prediction, drawn from a 300×300 matrix of proteins and ligands selected for high binding potential and experimental coverage.
Task
Given a protein-ligand pair and initial configuration (input), predict the binding energy improvement (delta_energy).
This task mirrors real-world drug discovery where researchers screen large combinatorial libraries to identify promising binders. The goal is to predict the binding affinity without explicitly solving for the optimal pose.
Splits
This dataset includes one training set and four validation sets to test different types of generalization:
Train: 62,308 examples (250 proteins × 250 ligands)
- Standard training data with diverse protein-ligand combinations
Validation-Random: 2,000 examples
- Sampled from the training distribution (seen proteins and ligands)
- Tests interpolation: Can the model predict binding for new poses of known molecules?
Validation-Unseen-Protein: 6,229 examples (25 new proteins × 250 training ligands)
- Tests protein generalization: Can the model predict binding for completely new protein targets?
- Analogous to finding binders for a novel therapeutic target
Validation-Unseen-Ligand: 6,238 examples (250 training proteins × 25 new ligands)
- Tests ligand generalization: Can the model predict binding for novel chemical compounds?
- Analogous to virtual screening of new chemical libraries
Validation-Unseen-Both: 622 examples (25 different proteins × 25 different ligands)
- Tests full cold-start: Can the model predict binding for completely novel protein-ligand pairs?
- Most challenging setting, tests learned physical and chemical principles
Note: The validation sets overlap in coverage (molecules in val-unseen-protein may appear in val-unseen-both), allowing comprehensive evaluation of different generalization capabilities without requiring additional data generation.
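One way to sanity-check which molecules in a validation split are truly unseen is to compare the canonical representation fields against the training set. A minimal sketch on toy records (the field name protein_representation comes from the schema in this card; the values are invented):

```python
# Toy rows standing in for real dataset records
train_rows = [
    {"protein_representation": "P1"},
    {"protein_representation": "P2"},
]
val_rows = [
    {"protein_representation": "P2"},  # seen in training
    {"protein_representation": "P9"},  # unseen protein
]

# Proteins present in training
train_proteins = {r["protein_representation"] for r in train_rows}

# Validation proteins never seen during training
unseen = {r["protein_representation"] for r in val_rows} - train_proteins
```

The same set difference works for ligand_representation when auditing the unseen-ligand and unseen-both splits.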
Data Fields
- protein: 16×16 grid representing the protein
- ligand: 5×5 grid representing the ligand molecule
- input: 16×16 grid showing the initial binding pose
- delta_energy: energy improvement from random to optimized pose (prediction target)
- protein_representation: canonical string for grouping
- ligand_representation: canonical string for grouping
Energy Distribution: Realistic Binding Rarity
One of the most interesting properties of ARC-Bind is how it captures a fundamental truth from real drug discovery: most random protein-ligand pairs simply don't bind well. This matches our experience with high-throughput screening, where strong binders are rare.
The dataset contains a skewed Gaussian distribution of binding affinities:
- 21.5% very weak/non-binding (delta_energy ≤ 10)
- 70.6% weak, below detection threshold (10 < delta_energy ≤ 20)
- 7.6% detectable binding (20 < delta_energy ≤ 30)
- 0.3% strong, drug-like binding (delta_energy > 30)
Average: 13.87, Median: 13.00, Range: 1-43
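The breakdown above can be recomputed directly from the delta_energy column. A minimal sketch using the card's thresholds (10, 20, 30) on made-up energies:

```python
import numpy as np

def affinity_bins(delta_energy):
    """Fraction of examples in each affinity band (thresholds from this card)."""
    e = np.asarray(delta_energy, dtype=float)
    return {
        "non_binding": float(np.mean(e <= 10)),
        "weak": float(np.mean((e > 10) & (e <= 20))),
        "detectable": float(np.mean((e > 20) & (e <= 30))),
        "strong": float(np.mean(e > 30)),
    }

# Toy energies; real values come from the delta_energy field
fracs = affinity_bins([5, 12, 15, 18, 25, 35])
```

Running this over the full training split should reproduce the percentages listed above.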
This distribution mirrors real DEL screens, where most combinations show weak or no binding and only a small fraction shows strong affinity. It is trivial for a human to design molecules that bind far more strongly than anything discovered randomly - a property that also holds in real medicinal chemistry.
Molecular Similarity
The protein_representation and ligand_representation fields are canonical strings designed so that Hamming distance reflects molecular similarity. Use Hamming distance to:
- Cluster proteins/ligands by structural similarity
- Create train/val splits based on molecular diversity
- Analyze structure-activity relationships across the 300×300 matrix
Example usage:
def hamming_distance(rep1, rep2):
    return sum(c1 != c2 for c1, c2 in zip(rep1, rep2))

# Find similar ligands
dist = hamming_distance(ligand1_rep, ligand2_rep)
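Building on the snippet above, a self-contained sketch that ranks ligand pairs by Hamming distance; the canonical strings here are invented stand-ins for real ligand_representation values:

```python
from itertools import combinations

def hamming_distance(rep1, rep2):
    # Canonical representations are equal-length strings (see above)
    assert len(rep1) == len(rep2)
    return sum(c1 != c2 for c1, c2 in zip(rep1, rep2))

# Hypothetical canonical strings keyed by ligand id
reps = {"lig_a": "AABBC", "lig_b": "AABBD", "lig_c": "XYZZY"}

# All ligand pairs, most similar first
pairs = sorted(
    combinations(reps, 2),
    key=lambda p: hamming_distance(reps[p[0]], reps[p[1]]),
)
```

Sorting pairs this way is the building block for similarity clustering or diversity-based splits over the 300×300 matrix.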
Dataset Statistics
- Total pairs: 89,730
- Distinct proteins: 300
- Distinct ligands: 300
- Coverage: 99.7% (270 missing pairs out of 90,000 theoretical)
- Energy range: 1.00 - 43.00
Split distribution:
- Train: 62,308 examples (69.4%)
- Val-random: 2,000 examples (2.2%)
- Val-unseen-protein: 6,229 examples (6.9%)
- Val-unseen-ligand: 6,238 examples (7.0%)
- Val-unseen-both: 622 examples (0.7%)
Overlap with Pose Prediction Dataset
- 285 proteins appear in both datasets
- 290 ligands appear in both datasets
- Enables transfer learning experiments between tasks
Usage Example
from datasets import load_dataset
import numpy as np
# Load dataset
dataset = load_dataset("username/arc-bind-energy-prediction")
# Access different splits
train = dataset['train'] # 62,308 examples
val_random = dataset['val_random'] # 2,000 examples
val_unseen_protein = dataset['val_unseen_protein'] # 6,229 examples
val_unseen_ligand = dataset['val_unseen_ligand'] # 6,238 examples
val_unseen_both = dataset['val_unseen_both'] # 622 examples
# Get an example
example = train[0]
protein = np.array(example['protein']) # (16, 16)
ligand = np.array(example['ligand']) # (5, 5)
input_pose = np.array(example['input']) # (16, 16)
delta_energy = example['delta_energy'] # float - prediction target
# Evaluate on different validation sets to measure different capabilities:
# - val_random: Interpolation performance
# - val_unseen_protein: Generalization to new targets
# - val_unseen_ligand: Generalization to new compounds
# - val_unseen_both: Full cold-start prediction
Evaluation Metrics
Models should be evaluated separately on each validation set to measure different capabilities:
- Mean Absolute Error (MAE): Average absolute difference from true delta_energy
- Root Mean Square Error (RMSE): Standard regression metric
- Top-K Recall: Ability to identify the top K% of binders (especially important for screening)
- Ranking correlation: Spearman/Pearson correlation with true energies
Recommended reporting covers all four validation sets:
- Val-random: Baseline performance on interpolation
- Val-unseen-protein: Target generalization (critical for drug discovery)
- Val-unseen-ligand: Compound generalization (virtual screening capability)
- Val-unseen-both: Full cold-start (tests physical understanding)
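A minimal NumPy sketch of three of these metrics: MAE, RMSE, and a top-K recall variant that scores overlap between the true and predicted top fractions (for Spearman/Pearson correlation, scipy.stats is the usual choice):

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def top_k_recall(y_true, y_pred, frac=0.1):
    # Fraction of the true top-frac binders that the model also
    # ranks within its own top-frac predictions.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(1, int(len(y_true) * frac))
    true_top = set(np.argsort(y_true)[-k:])   # indices of highest true energies
    pred_top = set(np.argsort(y_pred)[-k:])   # indices of highest predictions
    return len(true_top & pred_top) / k

# Toy energies; real values come from delta_energy and model output
y_true = [10.0, 12.0, 30.0, 8.0, 25.0]
y_pred = [11.0, 13.0, 28.0, 9.0, 20.0]
```

Report each metric separately per validation split, since the splits probe different generalization capabilities.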
Citation
TODO
License
MIT License - See repository for details
Source
Generated from the ARC-Bind project: https://github.com/Leash-Labs/arc-bind