lighteternal committed on
Commit 69ac948 · verified · 1 Parent(s): ace1ff0

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +18 -72

README.md CHANGED
@@ -42,22 +42,7 @@ Companion Space:

 ## What is included

- This public release has two layers.
-
- ### 1. Sanitized frozen raw release
-
- Directory: `raw/`
-
- Files:
- - `raw/assays.parquet`
- - `raw/measurements.parquet`
- - `raw/DATASET_MANIFEST.json`
-
- These are the public, sanitized versions of the frozen corpus derived from:
- - PubChem BioAssay snapshot dated `2026-03-01`
- - ChEMBL release `chembl_36`
-
- ### 2. Prepared compatibility-ranking subset

 Directory: `prepared/compatibility-ranking/`

@@ -70,20 +55,14 @@ Files:

 This prepared subset is the one used to train the published compatibility model linked above.

- ## Why there are multiple parquet files

- ### `raw/assays.parquet`
- One row per assay. Contains:
- - assay text
- - assay metadata
- - normalized target identifiers

- ### `raw/measurements.parquet`
- One row per assay-compound measurement. Contains:
- - assay-compound links
- - standardized SMILES
- - activity labels
- - potency-like fields when available

 ### `prepared/compatibility-ranking/compat_assays.parquet`
 Prepared assay rows used for compatibility ranking.
@@ -99,12 +78,12 @@ Training groups with:

 ## Dataset scale

- ### Sanitized frozen release

- | File | Rows |
 |---|---:|
- | `raw/assays.parquet` | `3,800,882` |
- | `raw/measurements.parquet` | `323,706,180` |

 ### Prepared ranking subset used by the public model

@@ -126,51 +105,19 @@ Split counts:

 This public dataset does **not** contain patient data or direct personal identifiers.

- Before release, I removed internal-only provenance details such as:
- - local filesystem paths
- - internal shard/precompute artifacts
 - private training-only intermediate files

 This public repo intentionally excludes:
 - shard directories from HF CPU prep jobs
 - precomputed training feature stores
 - internal benchmark artifacts unrelated to the released model

 ## File schemas

- ### `raw/assays.parquet`
-
- Important columns:
- - `assay_uid`
- - `source`
- - `assay_id`
- - `title`
- - `description_text`
- - `organism`
- - `readout`
- - `assay_format`
- - `assay_type`
- - `target_uniprot`
- - `metadata_json`
- - `provenance_json`
-
- ### `raw/measurements.parquet`
-
- Important columns:
- - `assay_uid`
- - `compound_uid`
- - `canonical_smiles`
- - `smiles_hash`
- - `activity_label`
- - `activity_type`
- - `activity_value`
- - `activity_units`
- - `p_activity`
- - `relation`
- - `confidence`
- - `metadata_json`
- - `provenance_json`
-
 ### `prepared/compatibility-ranking/compat_train_groups.parquet`

 Important columns:
@@ -213,9 +160,9 @@ Conceptually, one training observation looks like:
 ```python
 import pandas as pd

- assays = pd.read_parquet("raw/assays.parquet")
- measurements = pd.read_parquet("raw/measurements.parquet")
 train_groups = pd.read_parquet("prepared/compatibility-ranking/compat_train_groups.parquet")
 ```

 ### Python / pyarrow
@@ -223,8 +170,7 @@ train_groups = pd.read_parquet("prepared/compatibility-ranking/compat_train_grou
 ```python
 import pyarrow.parquet as pq

- assays = pq.read_table("raw/assays.parquet")
- measurements = pq.read_table("raw/measurements.parquet")
 ```

 ## How this relates to the public model
 
@@ -42,22 +42,7 @@ Companion Space:

 ## What is included

+ This public release is focused on the **prepared compatibility-ranking subset** used by the published model.

 Directory: `prepared/compatibility-ranking/`

@@ -70,20 +55,14 @@ Files:

 This prepared subset is the one used to train the published compatibility model linked above.

+ For lineage and reproducibility, the release also includes:
+ - `raw/DATASET_MANIFEST.json`

+ That manifest records the frozen upstream sources and hashes for the full raw corpus derived from:
+ - PubChem BioAssay snapshot dated `2026-03-01`
+ - ChEMBL release `chembl_36`

+ ## Why there are multiple parquet files

 ### `prepared/compatibility-ranking/compat_assays.parquet`
 Prepared assay rows used for compatibility ranking.
 
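As a usage sketch for the manifest added in this hunk: its exact schema is not documented in this README, so the file written below and its keys are purely illustrative stand-ins, not the real `raw/DATASET_MANIFEST.json` layout.

```python
import json
from pathlib import Path

# Illustrative stand-in manifest; the real raw/DATASET_MANIFEST.json ships
# with the release and may use different keys.
Path("manifest_example.json").write_text(json.dumps({
    "pubchem_bioassay_snapshot": "2026-03-01",
    "chembl_release": "chembl_36",
}))

manifest = json.loads(Path("manifest_example.json").read_text())
# Inspect the top-level keys before relying on any particular field.
print(sorted(manifest))
```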
@@ -99,12 +78,12 @@ Training groups with:

 ## Dataset scale

+ ### Source frozen corpus referenced by `raw/DATASET_MANIFEST.json`

+ | Source table | Rows |
 |---|---:|
+ | assays | `3,800,882` |
+ | measurements | `323,706,180` |

 ### Prepared ranking subset used by the public model

@@ -126,51 +105,19 @@ Split counts:

 This public dataset does **not** contain patient data or direct personal identifiers.

+ Before release, I removed internal-only publishing clutter such as:
+ - shard outputs from HF CPU prep jobs
+ - precomputed training feature stores
 - private training-only intermediate files

 This public repo intentionally excludes:
 - shard directories from HF CPU prep jobs
 - precomputed training feature stores
 - internal benchmark artifacts unrelated to the released model
+ - local build outputs unrelated to the public model

 ## File schemas

 ### `prepared/compatibility-ranking/compat_train_groups.parquet`

 Important columns:
@@ -213,9 +160,9 @@ Conceptually, one training observation looks like:
 ```python
 import pandas as pd

 train_groups = pd.read_parquet("prepared/compatibility-ranking/compat_train_groups.parquet")
+ compat_assays = pd.read_parquet("prepared/compatibility-ranking/compat_assays.parquet")
+ candidate_pools = pd.read_parquet("prepared/compatibility-ranking/compat_candidate_pools.parquet")
 ```

 ### Python / pyarrow
 
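A minimal sketch of combining the prepared tables loaded in the pandas snippet above: it assumes the prepared files share the `assay_uid` key that appears in the raw schemas, which should be verified against the actual prepared columns. The small frames below are synthetic stand-ins for the real parquet files.

```python
import pandas as pd

# Synthetic stand-ins; in practice these come from pd.read_parquet(...)
# on the prepared/compatibility-ranking/ files.
train_groups = pd.DataFrame({"assay_uid": ["A1", "A2"], "group_id": [0, 1]})
compat_assays = pd.DataFrame(
    {"assay_uid": ["A1", "A2"], "title": ["Kinase panel", "GPCR binding"]}
)

# Attach assay metadata to each training group; validate="m:1" asserts
# the assay table has one row per assay_uid.
joined = train_groups.merge(compat_assays, on="assay_uid", how="left", validate="m:1")
print(joined.shape)  # one row per training group, now with assay metadata
```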
@@ -223,8 +170,7 @@ train_groups = pd.read_parquet("prepared/compatibility-ranking/compat_train_grou
 ```python
 import pyarrow.parquet as pq

+ train_groups = pq.read_table("prepared/compatibility-ranking/compat_train_groups.parquet")
 ```

 ## How this relates to the public model