---
pretty_name: BioAssayAlign Assay-Compound Data
tags:
- biology
- chemistry
- drug-discovery
- bioassay
- screening
- ranking
- parquet
language:
- en
license: other
size_categories:
- 100M<n<1B
---
# BioAssayAlign Assay-Compound Data
<p align="center">
<img src="./bioassayalign.png" alt="BioAssayAlign logo" width="280">
</p>
## What this dataset is
BioAssayAlign Assay-Compound Data is a **frozen assay-and-molecule dataset for assay-conditioned ranking and retrieval**.
It answers questions like:
- given an assay description, which molecules in a submitted list should rank first?
- which historical assays are closest to this assay?
It is not:
- a chatbot dataset
- a generic pretraining corpus
- a clinical or patient dataset
Companion model:
- [BioAssayAlign Qwen3-Embedding-0.6B Compatibility](https://huggingface.co/lighteternal/BioAssayAlign-Qwen3-Embedding-0.6B-Compatibility)
Companion Space:
- [BioAssayAlign Compatibility Explorer](https://huggingface.co/spaces/lighteternal/BioAssayAlign-Compatibility-Explorer)
## What is included
This public release focuses on the **prepared compatibility-ranking subset** used by the published model.
Directory: `prepared/compatibility-ranking/`
Files:
- `compat_assays.parquet`
- `compat_candidate_pools.parquet`
- `compat_train_groups.parquet`
- `COMPATIBILITY_PREPARED_MANIFEST.json`
- `SOURCE_DATASET_MANIFEST.json`
This prepared subset is the one used to train the published compatibility model linked above.
For lineage and reproducibility, the release also includes:
- `raw/DATASET_MANIFEST.json`
That manifest records the frozen upstream sources and hashes for the full raw corpus derived from:
- PubChem BioAssay snapshot dated `2026-03-01`
- ChEMBL release `chembl_36`
The full raw parquet pair is **not** included in this compact public repo. This repo is intentionally scoped to the prepared subset that reproduces the public model.
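The manifests record frozen sources and content hashes, so downloaded files can be checked against them. A minimal verification sketch, assuming the manifest maps relative paths to SHA-256 hex digests under a `"files"` key (this layout is a guess; check the real manifest structure before relying on it):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large parquet files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path: Path, root: Path) -> dict:
    """Compare each recorded digest to the file on disk.

    Assumes a {"files": {relative_path: sha256_hex}} manifest layout,
    which is an illustrative assumption, not the documented schema.
    """
    manifest = json.loads(manifest_path.read_text())
    return {
        rel: sha256_of(root / rel) == digest
        for rel, digest in manifest.get("files", {}).items()
    }
```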
## Why there are multiple parquet files
### `prepared/compatibility-ranking/compat_assays.parquet`
Prepared assay rows used for compatibility ranking.
### `prepared/compatibility-ranking/compat_candidate_pools.parquet`
Held-out assay candidate pools used for evaluation.
### `prepared/compatibility-ranking/compat_train_groups.parquet`
Training groups with:
- one assay
- one positive molecule
- explicit same-assay negative molecules
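Each group pairs one assay with one positive and several same-assay negatives, which maps directly onto a contrastive or listwise ranking objective. A minimal sketch of flattening one group into labelled `(assay_uid, smiles, label)` pairs, shown on a toy row that uses the column names above:

```python
def group_to_pairs(group: dict) -> list[tuple[str, str, int]]:
    """Flatten one training group into (assay_uid, smiles, label) pairs.

    Label 1 marks the positive molecule, 0 the same-assay negatives.
    """
    pairs = [(group["assay_uid"], group["positive_smiles"], 1)]
    pairs += [(group["assay_uid"], neg, 0) for neg in group["negative_smiles"]]
    return pairs

# Toy row mirroring the documented columns.
toy_group = {
    "assay_uid": "pubchem:720659",
    "positive_smiles": "CC1=CC(=O)N(C)C(=O)N1",
    "negative_smiles": ["CCOC1=CC=CC=C1", "COC1=CC=CC=C1O"],
}
pairs = group_to_pairs(toy_group)
```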
## Dataset scale
### Source frozen corpus referenced by `raw/DATASET_MANIFEST.json`
| Source table | Rows |
|---|---:|
| assays | `3,800,882` |
| measurements | `323,706,180` |
### Prepared ranking subset used by the public model
| File | Rows |
|---|---:|
| `compat_assays.parquet` | `11,195` |
| `compat_candidate_pools.parquet` | `1,432,532` |
| `compat_train_groups.parquet` | `508,216` |
Split counts:
| Split | Assays |
|---|---:|
| train | `8,967` |
| val | `1,117` |
| test | `1,111` |
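If the assay table carries a per-assay split column matching these counts (an assumption; verify against the actual file), selecting one split is a one-line pandas filter. A self-contained sketch on a toy stand-in:

```python
import pandas as pd

# Toy stand-in for compat_assays; the "split" column is assumed,
# not confirmed by the schema section above.
compat_assays = pd.DataFrame({
    "assay_uid": ["pubchem:1", "pubchem:2", "pubchem:3"],
    "split": ["train", "val", "test"],
})

# Keep only training-split assays.
train_assays = compat_assays[compat_assays["split"] == "train"]
```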
## Sanitization and privacy
This public dataset does **not** contain patient data or direct personal identifiers.
Before release, I removed internal-only publishing clutter. This public repo intentionally excludes:
- shard directories from HF CPU prep jobs
- precomputed training feature stores
- private training-only intermediate files
- internal benchmark artifacts unrelated to the released model
- local build outputs unrelated to the public model
## File schemas
### `prepared/compatibility-ranking/compat_train_groups.parquet`
Important columns:
- `assay_uid`
- `positive_smiles`
- `positive_smiles_hash`
- `negative_smiles`
- `negative_smiles_hashes`
- `example_weight`
This is the core ranking supervision format used by the public model.
## Example row
Conceptually, one training observation looks like:
```json
{
"assay_uid": "pubchem:720659",
"positive_smiles": "CC1=CC(=O)N(C)C(=O)N1",
"positive_smiles_hash": "4d6f0d...abc",
"negative_smiles": [
"CCOC1=CC=CC=C1",
"CCN(CC)CCOC1=CC=CC=C1",
"COC1=CC=CC=C1O"
],
"negative_smiles_hashes": [
"a1...",
"b2...",
"c3..."
],
"example_weight": 1.34
}
```
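Because the SMILES lists and hash lists are parallel, a quick structural check catches malformed rows. A minimal sketch on the example row above (the hash function behind the `*_hash` columns is not documented here, so only list alignment is checked, not hash values):

```python
row = {
    "positive_smiles": "CC1=CC(=O)N(C)C(=O)N1",
    "negative_smiles": ["CCOC1=CC=CC=C1", "CCN(CC)CCOC1=CC=CC=C1", "COC1=CC=CC=C1O"],
    "negative_smiles_hashes": ["a1...", "b2...", "c3..."],
}

def check_row(row: dict) -> None:
    """Structural checks: positive is a string, negatives align with hashes."""
    assert isinstance(row["positive_smiles"], str)
    assert len(row["negative_smiles"]) == len(row["negative_smiles_hashes"])

check_row(row)
```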
## How to load it locally
### Python / pandas
```python
import pandas as pd
train_groups = pd.read_parquet("prepared/compatibility-ranking/compat_train_groups.parquet")
compat_assays = pd.read_parquet("prepared/compatibility-ranking/compat_assays.parquet")
candidate_pools = pd.read_parquet("prepared/compatibility-ranking/compat_candidate_pools.parquet")
```
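Training groups reference assays by `assay_uid`, so attaching assay metadata is a left join. A self-contained sketch on toy frames (the `assay_text` column on `compat_assays` is an illustrative assumption, not the documented schema):

```python
import pandas as pd

# Toy stand-ins; only assay_uid is documented on both sides.
train_groups = pd.DataFrame({
    "assay_uid": ["pubchem:720659"],
    "positive_smiles": ["CC1=CC(=O)N(C)C(=O)N1"],
})
compat_assays = pd.DataFrame({
    "assay_uid": ["pubchem:720659"],
    "assay_text": ["Inhibition of enzyme X at 10 uM"],  # hypothetical column
})

# Attach assay metadata to each supervision group.
joined = train_groups.merge(compat_assays, on="assay_uid", how="left")
```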
### Python / pyarrow
```python
import pyarrow.parquet as pq
train_groups = pq.read_table("prepared/compatibility-ranking/compat_train_groups.parquet")
```
## How this relates to the public model
The published model was trained on:
- `prepared/compatibility-ranking/compat_assays.parquet`
- `prepared/compatibility-ranking/compat_candidate_pools.parquet`
- `prepared/compatibility-ranking/compat_train_groups.parquet`
Published model:
- [lighteternal/BioAssayAlign-Qwen3-Embedding-0.6B-Compatibility](https://huggingface.co/lighteternal/BioAssayAlign-Qwen3-Embedding-0.6B-Compatibility)
## Upstream sources
This dataset is derived from public upstream resources including:
- PubChem BioAssay
- ChEMBL
Users are responsible for complying with the attribution and usage terms of the upstream sources.