jang1563 committed on
Commit c3bf51a · verified · 1 Parent(s): 4e6d5a4

Replace with public-facing version (remove internal plans/strategy)

Files changed (1)
  1. ROADMAP.md +48 -561
ROADMAP.md CHANGED
@@ -1,561 +1,48 @@
- # NegBioDB Execution Roadmap
-
- > Last updated: 2026-03-30 (v19 — DTI ✅ CT ✅ PPI ✅ GE near-complete: ML seed 42 done, LLM 4/5 models done)
-
- ---
-
- ## Critical Findings (Updated March 2026)
-
- 1. **HCDT 2.0 License: CC BY-NC-ND 4.0** — Cannot redistribute derivatives. Must independently recreate from underlying sources (BindingDB, ChEMBL, GtoPdb, PubChem, TTD). Use 10 uM primary threshold (not 100 uM) to differentiate.
- 2. **InertDB License: CC BY-NC** — Cannot include in commercial track. Provide optional download script only.
- 3. **Submission requirements**: downloadable data, Croissant metadata, code available, Datasheet for Datasets.
- 4. **LIT-PCBA compromised** (2025 audit found data leakage) — Creates urgency for NegBioDB as replacement gold standard.
- 5. **Recommended NegBioDB License: CC BY-SA 4.0** — Compatible with ChEMBL (CC BY-SA 3.0) via one-way upgrade.
- 6. **No direct competitor exists** as of March 2026.
- 7. **No LLM benchmark tests negative DTI tasks** — ChemBench, Mol-Instructions, MedQA, SciBench all lack negative result evaluation. NegBioBench LLM track is first-of-kind.
- 8. **LLM evaluation also free** — Gemini Flash free tier as LLM-as-Judge + Ollama local models as baselines. Flagship models (GPT-4, Claude) added post-stabilization only.
- 9. **Data volume is NOT the bottleneck** — ChEMBL alone has ~527K quality inactive records (pchembl < 5, validated). PubChem has ~61M target-annotated confirmatory inactives. Estimated 200K+ unique compound-target pairs available. Minimum target raised to **10K curated entries** (from 5K).
- 10. **PubChem FTP bulk is far superior to API** — `bioactivities.tsv.gz` (3 GB) contains all 301M bioactivity rows. Processing: < 1 day. API approach would take weeks.
- 11. **LLM-as-Judge rate limit (250 RPD)** — Must-have tasks (L1, L2, L4) all use automated evaluation. Judge needed only for should-have L3 (1,530 calls = 6 days). All judge tasks with 3 models = 20 days. With 6 models = 39 days (NOT feasible for sprint).
- 12. **Paper narrative must be problem-first** — "Existing benchmarks are broken" (Exp 1 + Exp 4), not "Here's a database." The database is the solution, not the contribution.
- 13. **Positive data protocol required** — NegBioDB is negative-only. For ML benchmarking (M1), positive data must be sourced from ChEMBL (pChEMBL ≥ 6). Report two class ratios: balanced (1:1) and realistic (1:10). See §Positive Data Protocol below.
- 14. **Random negative baseline must be precisely defined** — Exp 1 compares NegBioDB negatives against random negatives. Random = uniform sampling from untested compound-target pairs (TDC standard). See §Random Negative Control Design.
- 15. **Paper format: 9 pages** + unlimited appendix. Croissant is **mandatory** (desk rejection if missing/invalid).
- 16. **GPU strategy: Kaggle free tier** (30 hrs/week) is sufficient for 18 ML baseline runs (~36-72 GPU-hours over 4 weeks). Fallback: Colab Pro ($10/month).
- 17. **ChEMBL v36** (Sep 2025, 24.3M activities) should be used, not v35. `chembl_downloader` fetches latest by default.
- 18. **Nature MI 2025** — Biologically driven negative subsampling paper independently shows "assumed negatives" distort DTI models. Related: EviDTI (Nature Comms 2025), DDB paper (BMC Biology 2025), LIT-PCBA audit (2025).
-
- ---
-
- ## Positive Data Protocol (P0 — Expert Panel Finding)
-
- NegBioDB is a negative-only database. For ML benchmarking (Task M1: binary DTI prediction), **positive (active) data is required**. This section defines the protocol.
-
- ### Positive Data Source
-
- ```sql
- -- Extract active DTIs from ChEMBL v36 SQLite
- -- Threshold: pChEMBL >= 6 (IC50/Ki/Kd/EC50 <= 1 uM)
- SELECT
-     a.molregno, a.pchembl_value, a.standard_type,
-     cs.canonical_smiles, cs.standard_inchi_key,
-     cp.accession AS uniprot_id
- FROM activities a
- JOIN compound_structures cs ON a.molregno = cs.molregno
- JOIN assays ass ON a.assay_id = ass.assay_id
- JOIN target_dictionary td ON ass.tid = td.tid
- LEFT JOIN target_components tc ON td.tid = tc.tid
- LEFT JOIN component_sequences cp ON tc.component_id = cp.component_id
- WHERE a.pchembl_value >= 6
-   AND a.standard_type IN ('IC50', 'Ki', 'Kd', 'EC50')
-   AND a.data_validity_comment IS NULL
-   AND td.target_type = 'SINGLE PROTEIN'
-   AND cp.accession IS NOT NULL
- ```
-
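As a quick sanity check, the filter logic above can be exercised against a toy in-memory SQLite table (standard-library `sqlite3`; only a column subset of `activities` is modeled, and the rows are invented for illustration):

```python
import sqlite3

# Toy check of the active-extraction filters: pChEMBL >= 6, standard
# activity types only, no data-validity flag. Rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE activities (
    molregno INTEGER, pchembl_value REAL,
    standard_type TEXT, data_validity_comment TEXT)""")
conn.executemany("INSERT INTO activities VALUES (?, ?, ?, ?)", [
    (1, 7.2, "IC50", None),                     # kept: active
    (2, 4.1, "Ki",   None),                     # dropped: pChEMBL < 6
    (3, 6.5, "Kd",   "Outside typical range"),  # dropped: validity flag
    (4, 8.0, "AC50", None),                     # dropped: activity type
])
actives = conn.execute("""
    SELECT molregno FROM activities
    WHERE pchembl_value >= 6
      AND standard_type IN ('IC50', 'Ki', 'Kd', 'EC50')
      AND data_validity_comment IS NULL""").fetchall()
```

Only molregno 1 survives all three filters, mirroring the full query's behavior on real ChEMBL rows.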
- ### Positive-Negative Pairing
-
- | Setting | Ratio | Purpose | Primary Use |
- |---------|-------|---------|-------------|
- | **Balanced** | 1:1 (active:inactive) | Fair model comparison | Exp 1, Exp 4, baselines |
- | **Realistic** | 1:10 (active:inactive) | Real-world HTS simulation | Supplementary evaluation |
-
- - Positives restricted to **shared targets** between ChEMBL actives and NegBioDB inactives (same target pool)
- - Same compound standardization pipeline (RDKit) applied to positives
- - DAVIS matrix known actives (pKd ≥ 7, Kd ≤ 100 nM) used as **gold-standard validation set**
-
- ### Overlap Prevention
-
- - Active and inactive compound-target pairs must not overlap (same pair cannot be both active and inactive)
- - Borderline zone (pChEMBL 4.5–5.5) excluded from both positive and negative sets for clean separation
- - Overlap analysis: report % of NegBioDB negatives where the same compound appears as active against a different target
-
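The pooling and overlap-prevention rules can be sketched in a few lines (pure Python; thresholds follow the protocol, the example records are invented):

```python
# Sketch of the pooling rules: a (compound, target) pair feeds only one
# class, and the borderline pChEMBL zone feeds neither. Records below
# are invented; real keys would be InChIKey + UniProt accession.
def assign_pools(records):
    """records: iterable of ((inchikey, uniprot_id), pchembl_value)."""
    actives, inactives = set(), set()
    for pair, pchembl in records:
        if pchembl >= 6.0:
            actives.add(pair)
        elif pchembl < 4.5:            # below the excluded 4.5-5.5 zone
            inactives.add(pair)
        # 4.5 <= pchembl < 6.0 contributes to neither pool
    conflicts = actives & inactives    # same pair measured both ways
    return actives - conflicts, inactives - conflicts

actives, inactives = assign_pools([
    (("AAAA", "P00533"), 7.1),   # active
    (("BBBB", "P00533"), 3.9),   # inactive
    (("CCCC", "P04637"), 5.0),   # borderline: excluded
    (("DDDD", "P04637"), 7.5),   # conflicting pair, measured twice...
    (("DDDD", "P04637"), 4.0),   # ...dropped from both pools
])
```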
- ---
-
- ## Random Negative Control Design (P0 — Expert Panel Finding)
-
- Experiment 1 compares NegBioDB's experimentally confirmed negatives against **random negatives**. The random negative generation must be precisely defined.
-
- ### Control Conditions for Exp 1
-
- | Control | Method | What it Tests |
- |---------|--------|---------------|
- | **Uniform random** | Sample untested compound-target pairs uniformly at random from the full cross-product space | Standard TDC approach; tests baseline inflation |
- | **Degree-matched random** | Sample untested pairs matching the degree distribution of NegBioDB pairs | Isolates the effect of experimental confirmation vs. degree bias |
-
- **All Exp 1 runs:**
- - 3 ML models (DeepDTA, GraphDTA, DrugBAN)
- - Random split only (for controlled comparison)
- - Same positive data, same split seed
- - Only the negative set changes: NegBioDB confirmed vs. uniform random vs. degree-matched random
- - **Total: 3 models × 3 negative conditions = 9 runs** (was 3 runs; updated)
- - **Note:** The 3 NegBioDB-negative random-split runs are shared with the baseline count (9 baselines include random split). Thus Exp 1 adds only **6 new runs** (uniform random + degree-matched random). Similarly, Exp 4 shares the random-split baseline and adds only **3 new DDB runs**. Overall: 9 baseline + 6 Exp 1 + 3 Exp 4 = **18 total**.
- - **Exp 4 definition:** The DDB comparison uses a full-task degree-balanced split on the merged M1 balanced benchmark. Positives and negatives are reassigned together under the same split policy.
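Both control conditions can be sketched as follows (pure Python; the degree-matched variant shown here preserves only the compound-degree distribution, a simplification of full degree matching, and all IDs are invented):

```python
import random

# Sketch of the two Exp 1 controls. Uniform sampling follows the TDC
# convention: untested pairs drawn from the full cross-product space.
def uniform_random_negatives(compounds, targets, tested, n, rng):
    out = set()
    while len(out) < n:
        pair = (rng.choice(compounds), rng.choice(targets))
        if pair not in tested:
            out.add(pair)
    return out

def degree_matched_negatives(reference, tested, targets, rng):
    # Reuse each reference compound once, so the compound-degree
    # distribution of the reference negatives is preserved.
    out = set()
    for compound, _ in reference:
        while True:
            pair = (compound, rng.choice(targets))
            if pair not in tested and pair not in out:
                out.add(pair)
                break
    return out

rng = random.Random(42)
tested = {("c1", "t1"), ("c2", "t2")}
uni = uniform_random_negatives(["c1", "c2", "c3"], ["t1", "t2", "t3"], tested, 3, rng)
dm = degree_matched_negatives(sorted(tested), tested, ["t1", "t2", "t3"], rng)
```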
-
- ### Reporting
-
- - Table: [Model × Negative Source × Metric] for LogAUC, AUPRC, MCC
- - Expected: NegBioDB > degree-matched > uniform random for precision-oriented metrics
- - If NegBioDB ≈ uniform random → narrative shifts to Exp 4 (DDB bias) as primary result
-
- ---
-
- ## Phase 1: Implementation Sprint (Weeks 0-11)
-
- ### Week 1: Scaffolding + Download + Schema ✅ COMPLETE
-
- - [x] **Project scaffolding**: Create `src/negbiodb/`, `scripts/`, `tests/`, `migrations/`, `config.yaml`, `Makefile`, `pyproject.toml`
- - [x] **Dependency management**: `pyproject.toml` with Python 3.11+, rdkit, pandas, pyarrow, mlcroissant, tqdm, scikit-learn
- - [x] **Makefile skeleton**: Define target structure (full pipeline encoding in Week 2)
- - [x] Finalize database schema (SQLite for MVP) — apply `migrations/001_initial_schema.sql`
- - [x] Download all source data (see below — < 1 day total)
- - [x] **Verify ChEMBL v36** (Sep 2025) downloaded, not v35
- - [x] **[B7] Verify PubChem bioactivities.tsv.gz column names** after download
- - [ ] **[B4] Hardware decision**: Test local RAM/GPU. If < 32GB RAM → use Llama 3.1 8B + Mistral 7B (not 70B). If ≥ 32GB → quantized Llama 3.3 70B (Q4). Document choice.
- - [ ] **[B2] Verify citations**: Search for Nature MI 2025 negative subsampling paper + Science 2025 editorial. If not found → substitute with EviDTI, DDB paper, LIT-PCBA audit
- - [ ] **[B3] Monitor submission deadlines**
-
- ### Week 2: Standardization + Extraction Start ✅ COMPLETE
-
- - [x] Implement compound standardization pipeline (RDKit: salt removal, normalization, InChIKey)
- - [x] Implement target standardization pipeline (UniProt accession as canonical ID)
- - [x] Set up cross-DB deduplication (InChIKey[0:14] connectivity layer)
- - [x] **Makefile pipeline**: Encode full data pipeline dependency graph as executable Makefile targets
- - [ ] **[B5] Check shared target pool size**: Count intersection of NegBioDB targets ∩ ChEMBL pChEMBL ≥ 6 targets. If < 200 targets → expand NegBioDB target extraction
- - [ ] **[B6] Check borderline exclusion impact**: Run pChEMBL distribution query on ChEMBL. Estimate data loss from excluding pChEMBL 4.5–5.5 zone
-
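The InChIKey[0:14] connectivity-layer key works as follows (a minimal sketch; the first key is the real aspirin InChIKey, the second is an invented variant of it, and the third is the real caffeine key):

```python
# Sketch of cross-DB deduplication: the first 14 characters of an InChIKey
# hash only molecular connectivity, so salt forms and stereoisomers of the
# same skeleton collapse onto one key.
def dedupe_by_connectivity(records):
    """records: iterable of (inchikey, source_db); keeps first per skeleton."""
    seen, kept = set(), []
    for inchikey, source in records:
        block = inchikey[:14]          # connectivity layer
        if block not in seen:
            seen.add(block)
            kept.append((inchikey, source))
    return kept

kept = dedupe_by_connectivity([
    ("BSYNRYMUTXBXSQ-UHFFFAOYSA-N", "ChEMBL"),     # aspirin
    ("BSYNRYMUTXBXSQ-ZZZZZZZZZZ-N", "PubChem"),    # same skeleton: dropped
    ("RYYVLZVUVIJVGH-UHFFFAOYSA-N", "BindingDB"),  # caffeine: kept
])
```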
- ### Week 2-4: Data Extraction ✅ COMPLETE
-
- **Result: 30.5M negative_results (vs. minimum target of 10K — far exceeded)**
-
- **Data Sources (License-Safe Only):**
-
- | Source | Available Volume | Method | License |
- |--------|-----------------|--------|---------|
- | PubChem BioAssay (confirmatory inactive) | **~61M** (target-annotated) | **FTP bulk: `bioactivities.tsv.gz` (3 GB)** + `bioassays.tsv.gz` (52 MB) | Public domain |
- | ChEMBL pChEMBL < 5 (quality-filtered) | **~527K** records → ~100-200K unique pairs | **SQLite via `chembl_downloader`** (4.6 GB, 1h setup) | CC BY-SA 3.0 |
- | ChEMBL activity_comment "Not Active" | **~763K** (literature-curated) | SQL query on same SQLite dump | CC BY-SA 3.0 |
- | BindingDB (Kd/Ki > 10 uM) | **~30K+** | Bulk TSV download + filter | CC BY |
- | DAVIS complete matrix (pKd ≤ 5) | **~27K** | TDC Python download | Public/academic |
-
- **NOT bundled (license issues):**
- - HCDT 2.0 (CC BY-NC-ND) — Use as validation reference only; we use 10 uM threshold (not 100 uM) to differentiate
- - InertDB (CC BY-NC) — Optional download script for users
-
- **PubChem FTP extraction pipeline (< 1 day):**
- ```
- 1. bioassays.tsv.gz → filter confirmatory AIDs with target annotations → ~260K AIDs
- 2. bioactivities.tsv.gz (stream) → filter AID ∈ confirmatory, Outcome=Inactive → ~61M records
- 3. Prioritize MLPCN/MLSCN assays (~4,500 AIDs, genuine HTS dose-response) for Silver tier
- 4. Map SID→CID via Sid2CidSMILES.gz, targets via Aid2GeneidAccessionUniProt.gz
- ```
-
- - [x] Download PubChem FTP files (bioactivities.tsv.gz + bioassays.tsv.gz + mapping files)
- - [x] Download ChEMBL v36 SQLite via chembl_downloader
- - [x] Download BindingDB bulk TSV
- - [x] Build PubChem FTP extraction script (**streaming with chunksize=100K** — 12GB uncompressed)
- - [x] Build ChEMBL extraction SQL: inactive (activity_comment + pChEMBL < 5) **AND active (pChEMBL ≥ 6)** for positive data
- - [x] Build BindingDB extraction script (filter Kd/Ki > 10 uM, human targets)
- - [x] Integrate DAVIS matrix from TDC (both actives pKd ≥ 7 and inactives pKd ≤ 5)
- - [x] Run compound/target standardization on all extracted data (multiprocessing for RDKit)
- - [x] Run cross-DB deduplication + **overlap analysis** (vs DAVIS, TDC, DUD-E, LIT-PCBA)
- - [x] Assign confidence tiers (gold/silver/bronze/copper — lowercase, matching DDL CHECK constraint)
- - [x] **Extract ChEMBL positives**: 883K → 863K after 21K overlap removal (pChEMBL ≥ 6, shared targets only)
- - [x] **Positive-negative pairing**: M1 balanced (1.73M, 1:1) + M1 realistic (9.49M, 1:10). Zero compound-target overlap verified.
- - [x] **Borderline exclusion**: pChEMBL 4.5–5.5 removed from both pools
- - [x] Spot-check top 100 most-duplicated compounds (manual QC checkpoint)
- - [x] Run data leakage check: cold split leaks = 0, cross-source overlaps documented
-
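Step 2 of the FTP pipeline can be sketched as a streaming filter (stdlib `csv` over decompressed lines; the column names here are assumptions to be checked against the real `bioactivities.tsv.gz` header, which is exactly what checklist item B7 covers, and in practice the input would come from `gzip.open`):

```python
import csv
import io

# Sketch of pipeline step 2: stream bioactivity rows, keep inactive
# outcomes from confirmatory assays. Column names ("AID", "SID",
# "Activity Outcome") are assumptions; verify per checklist item B7.
def stream_inactives(tsv_lines, confirmatory_aids):
    reader = csv.DictReader(tsv_lines, delimiter="\t")
    for row in reader:
        if (int(row["AID"]) in confirmatory_aids
                and row["Activity Outcome"] == "Inactive"):
            yield (int(row["AID"]), int(row["SID"]))

sample = io.StringIO(
    "AID\tSID\tActivity Outcome\n"
    "100\t1\tInactive\n"
    "100\t2\tActive\n"      # active outcome: dropped
    "200\t3\tInactive\n"    # AID 200 not confirmatory: dropped
)
hits = list(stream_inactives(sample, confirmatory_aids={100}))
```

Because the generator never materializes the full file, the same pattern handles the 12 GB uncompressed stream with constant memory.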
- ### Week 3-5: Benchmark Construction (ML + LLM)
-
- **ML Track:**
- - [x] Implement 3 must-have splits (Random, Cold-Compound, Cold-Target) + DDB for Exp 4
- - [x] Implement ML evaluation metrics: LogAUC[0.001,0.1], BEDROC, EF@1%, EF@5%, AUPRC, MCC, AUROC
- - [x] (Should have) Add Cold-Both, Temporal, Scaffold splits (all 6 implemented)
-
- **LLM Track:** ✅ INFRASTRUCTURE COMPLETE (2026-03-12)
- - [x] Design prompt templates for L1, L2, L4 (priority tasks) → `llm_prompts.py`
- - [x] Construct L1 dataset: 2,000 MCQ from NegBioDB entries → `build_l1_dataset.py`
- - [x] Construct L2 dataset: 116 candidates (semi-automated) → `build_l2_dataset.py`
- - [x] Construct L4 dataset: 500 tested/untested pairs → `build_l4_dataset.py`
- - [x] Implement automated evaluation scripts → `llm_eval.py` (L1: accuracy/F1, L2: entity F1, L4: classification F1)
- - [x] Build compound name cache → `compound_names.parquet` (144,633 names from ChEMBL)
- - [x] Construct L3 dataset: 50 pilot reasoning examples → `build_l3_dataset.py`
- - [x] LLM client (vLLM + Gemini) → `llm_client.py`
- - [x] SLURM templates + batch submission → `run_llm_local.slurm`, `run_llm_gemini.slurm`, `submit_llm_all.sh`
- - [x] Results aggregation → `collect_llm_results.py` (Table 2)
- - [x] 54 new tests (29 eval + 25 dataset), 329 total pass
- - [ ] **L2 gold annotation**: 15–20h human review needed for `l2_gold.jsonl`
-
- **Shared:**
- - [ ] Generate Croissant machine-readable metadata (mandatory for submission)
- - [ ] **Validate Croissant** with `mlcroissant` library. Gate: `mlcroissant.Dataset('metadata.json')` runs without errors
- - [ ] Write Datasheet for Datasets (Gebru et al. template)
-
- ### Week 5-7: Baseline Experiments (ML + LLM)
-
- **ML Baselines:**
-
- | Model | Type | Priority | Runs (3 splits) | Status |
- |-------|------|----------|-----------------|--------|
- | DeepDTA | Sequence CNN | Must have | 3 | ✅ Implemented |
- | GraphDTA | Graph neural network | Must have | 3 | ✅ Implemented |
- | DrugBAN | Bilinear attention | Must have | 3 | ✅ Implemented |
- | Random Forest | Traditional ML | Should have | 3 | Planned |
- | XGBoost | Traditional ML | Should have | 3 | Planned |
- | DTI-LM | Language model-based | Nice to have | 3 | Planned |
- | EviDTI | Evidential/uncertainty | Nice to have | 3 | Planned |
-
- **Must-have ML: 9 baseline runs (3 models × 3 splits) + 6 Exp 1 (2 random conditions) + 3 Exp 4 (DDB split) = 18 total (~36-72 GPU-hours, 3-4 days)**
-
- > **Status (2026-03-13):** All 18/18 ML baseline runs COMPLETE on Cayuga HPC. Results in `results/baselines/`. 3 timed-out DrugBAN jobs recovered via `eval_checkpoint.py`. Key findings: degree-matched negatives inflate LogAUC by +0.112 avg; cold-target LogAUC drops to 0.15–0.33; DDB ≈ random (≤0.010 diff).
-
- **LLM Baselines (all free):**
-
- | Model | Access | Priority |
- |-------|--------|----------|
- | Gemini 2.5 Flash | Free API (250 RPD) | Must have |
- | Llama 3.3 70B | Ollama local | Must have |
- | Mistral 7B | Ollama local | Must have |
- | Phi-3.5 3.8B | Ollama local | Should have |
- | Qwen2.5 7B | Ollama local | Should have |
-
- **Must-have LLM: 3 models × 3 tasks (L1, L2, L4) × 2 configs (zero-shot, 3-shot) = 18 eval runs (all automated)**
-
- **Flagship models (post-stabilization):**
- - GPT-4/4.1, Claude Sonnet/Opus, Gemini Pro — added to leaderboard later
-
- **Must-have experiments (minimum for paper):**
- - [x] **Exp 1: NegBioDB vs. random negatives** ✅ COMPLETE — degree-matched avg +0.112 over NegBioDB → benchmark inflation confirmed
- - [x] **Exp 4: Node degree bias** ✅ COMPLETE — DDB ≈ random (≤0.010 diff) → degree balancing alone not harder
- - [ ] **Exp 9: LLM vs. ML comparison** (L1 vs. M1 on matched test set — reuses baseline results; awaiting LLM runs)
- - [ ] **Exp 10: LLM extraction quality** (L2 entity F1 — awaiting LLM runs)
-
- **Should-have experiments (strengthen paper, no extra training):**
- - [ ] Exp 5: Cross-database consistency (analysis only, no training)
- - [ ] Exp 7: Target class coverage analysis (analysis only)
- - [ ] Exp 11: Prompt strategy comparison (add CoT config to LLM baselines)
- - [ ] L3 task + Exp 12: LLM-as-Judge reliability (1,530 judge calls = 6 days)
-
- **Nice-to-have experiments (defer to camera-ready):**
- - [ ] Exp 2: Confidence tier discrimination
- - [ ] Exp 3: Assay context dependency (with assay format stratification)
- - [ ] Exp 6: Temporal generalization
- - [ ] Exp 8: LIT-PCBA recapitulation
-
- ### Week 8-10: Paper Writing
-
- - [ ] Write benchmark paper (**9 pages** + unlimited appendix)
- - [ ] Create key figures (see `paper/scripts/generate_figures.py`)
- - [ ] **Paper structure (9 pages)**: Intro (1.5) → DB Design (1.5) → Benchmark (1.5) → Experiments (3) → Discussion (1.5)
- - [ ] **Appendix contents**: Full schema DDL, all metric tables, L2 annotation details, few-shot examples, Datasheet
- - [ ] Python download script: `pip install negbiodb` or simple wget script
- - [ ] Host dataset (HuggingFace primary + Zenodo DOI for archival)
- - [ ] Author ethical statement
- - [ ] **Dockerfile** for full pipeline reproducibility: Python 3.11, rdkit, torch, chembl_downloader, pyarrow, mlcroissant. Must reproduce full pipeline from raw data → final benchmark export
-
- ### Week 10-11: Review & Submit
-
- - [ ] Internal review and polish
- - [ ] Submit abstract (~May 1)
- - [ ] Submit full paper (~May 15)
- - [ ] Post arXiv preprint (same day or before submission)
-
- ---
-
- ## Phase 1-CT: Clinical Trial Failure Domain
-
- > Initiated: 2026-03-17 | Pipeline code + data loading complete, benchmark design complete
-
- ### Step CT-1: Infrastructure ✅ COMPLETE
-
- - [x] CT schema design (2 migrations: 001 initial + 002 expert review fixes)
- - [x] 5 pipeline modules: etl_aact, etl_classify, drug_resolver, etl_outcomes, ct_db
- - [x] 138 tests passing
- - [x] Data download scripts for all 4 sources
-
- ### Step CT-2: Data Loading ✅ COMPLETE
-
- - [x] AACT ETL: 216,987 trials, 476K trial-interventions, 372K trial-conditions
- - [x] Failure classification (4-tier): 132,925 results (bronze 60K / silver 28K / gold 23K / copper 20K)
- - [x] Open Targets: 32,782 intervention-target mappings
- - [x] Pair aggregation: 102,850 intervention-condition pairs
-
- ### Step CT-3: Enrichment & Resolution ✅ COMPLETE
-
- - [x] Outcome enrichment: +66 AACT p-values, +31,969 Shi & Du SAE records
- - [x] Drug resolution Steps 1-2: ChEMBL exact (18K) + PubChem API
- - [x] Drug resolution Step 3: Fuzzy matching — 15,616 resolved
- - [x] Drug resolution Step 4: Manual overrides — 291 resolved (88 entries used)
- - [x] Pair aggregation refresh (post-resolution) — 102,850 pairs
- - [x] Post-run coverage analysis — 36,361/176,741 (20.6%) ChEMBL, 27,534 SMILES, 66,393 targets
-
- ### Step CT-4: Analysis & Benchmark Design ✅ COMPLETE
-
- - [x] Data quality analysis script (`scripts_ct/analyze_ct_data.py`) — 16 queries, JSON+MD output
- - [x] Data quality report (`results/ct/ct_data_quality.md`)
- - [x] ML benchmark design
-   - 3 tasks: CT-M1 (binary), CT-M2 (7-way category), CT-M3 (phase transition, deferred)
-   - 6 split strategies, 3 models (XGBoost, MLP, GNN+Tabular)
-   - 3 experiments: negative source, generalization, temporal
- - [x] LLM benchmark design
-   - 4 levels: CT-L1 (5-way MCQ), CT-L2 (extraction), CT-L3 (reasoning), CT-L4 (discrimination)
-   - 5 models, anti-contamination analysis
-
- ### Step CT-5: ML Export & Splits ✅ COMPLETE
-
- - [x] CT export module (`src/negbiodb_ct/ct_export.py`)
- - [x] CTO success trials extraction (CT-M1 positive class)
- - [x] Feature engineering (drug FP + mol properties + condition one-hot + trial design)
- - [x] 6 split strategies implementation
-
- ### Step CT-6: ML Baseline Experiments ✅ COMPLETE (108/108 runs)
-
- - [x] XGBoost baseline (CT-M1 + CT-M2)
- - [x] MLP baseline
- - [x] GNN+Tabular baseline
- - [x] Key finding: CT-M1 trivially separable on NegBioDB negatives (AUROC=1.0); M2 XGBoost macro-F1=0.51
-
- ### Step CT-7: LLM Benchmark Execution ✅ COMPLETE (80/80 runs)
-
- - [x] CT-L1/L2/L3/L4 dataset construction
- - [x] CT prompt templates + evaluation functions
- - [x] Inference runs on Cayuga HPC (5 models × 4 levels × 4 configs)
- - [x] Key finding: CT L4 MCC 0.48–0.56 — highest discrimination across domains
-
- ---
-
- ## Phase 1b: Post-Submission Expansion (Months 3-6)
-
- ### Data Expansion (if not at 10K+ for submission)
- - [ ] Complete PubChem BioAssay extraction (full confirmatory set)
- - [ ] LLM text mining pipeline activation (PubMed abstracts)
- - [ ] Supplementary materials table extraction (pilot)
-
- ### Benchmark Refinement
- - [ ] Add remaining ML and LLM baseline models
- - [ ] Complete all 12 validation experiments (8 ML + 4 LLM)
- - [ ] Complete LLM tasks L5, L6 datasets
- - [ ] Add flagship LLM evaluations (GPT-4, Claude)
- - [ ] Build public leaderboard (simple GitHub-based, separate ML and LLM tracks)
-
- ---
-
- ## Phase 2: Community & Platform (Months 6-18)
-
- ### 2.1 Platform Development
- - [ ] Web interface (search, browse, download)
- - [ ] Python library: `pip install negbiodb`
- - [ ] REST API with tiered access
- - [ ] Community submission portal with controlled vocabularies
- - [ ] Leaderboard system
-
- ### 2.2 Community Building
- - [ ] GitHub repository with documentation and tutorials
- - [ ] Partner with SGC and Target 2035/AIRCHECK for data access
- - [ ] Engage with DREAM challenge community
- - [ ] Tutorial at relevant workshop
- - [ ] Researcher incentive design (citation credit, DOI per submission)
-
- ---
-
- ## Schema Design
-
- ### Common Layer
-
- ```
- NegativeResult {
-   id: UUID
-   compound_id: InChIKey + ChEMBL ID + PubChem CID
-   target_id: UniProt ID + ChEMBL Target ID
-
-   // Core negative result
-   result_type: ENUM [hard_negative, conditional_negative, methodological_negative,
-                      hypothesis_negative, dose_time_negative]
-   confidence_tier: ENUM [gold, silver, bronze, copper]
-
-   // Quantitative evidence
-   activity_value: FLOAT (IC50, Kd, Ki, EC50)
-   activity_unit: STRING
-   activity_type: STRING
-   pchembl_value: FLOAT
-   inactivity_threshold: FLOAT
-   max_concentration_tested: FLOAT
-
-   // Assay context (BAO-based)
-   assay_type: BAO term
-   assay_format: ENUM [biochemical, cell-based, in_vivo]
-   assay_technology: STRING
-   detection_method: STRING
-   cell_line: STRING (if cell-based)
-   organism: STRING
-
-   // Quality metrics
-   z_factor: FLOAT
-   ssmd: FLOAT
-   num_replicates: INT
-   screen_type: ENUM [primary_single_point, confirmatory_dose_response,
-                      counter_screen, orthogonal_assay]
-
-   // Provenance
-   source_db: STRING (PubChem, ChEMBL, literature, community)
-   source_id: STRING (assay ID, paper DOI)
-   extraction_method: ENUM [database_direct, text_mining, llm_extracted,
-                            community_submitted]
-   curator_validated: BOOLEAN
-
-   // Target context (DTO-based)
-   target_type: DTO term
-   target_family: STRING (kinase, GPCR, ion_channel, etc.)
-   target_development_level: ENUM [Tclin, Tchem, Tbio, Tdark]
-
-   // Metadata
-   created_at: TIMESTAMP
-   updated_at: TIMESTAMP
-   related_positive_results: [UUID] (links to known actives for same target)
- }
- ```
-
- ### Biology/DTI Domain Layer
-
- ```
- DTIContext {
-   negative_result_id: UUID (FK)
-   binding_site: STRING (orthosteric, allosteric, unknown)
-   selectivity_data: BOOLEAN (part of selectivity panel?)
-   species_tested: STRING
-   counterpart_species_result: STRING (active in other species?)
-   cell_permeability_issue: BOOLEAN
-   compound_solubility: FLOAT
-   compound_stability: STRING
- }
- ```
-
- ---
-
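The common layer translates naturally to SQLite DDL; a minimal sketch covering a column subset (names mirror the pseudo-schema above, and the lowercase CHECK constraint matches the tier convention noted in Week 2-4; details are illustrative, not the shipped migration):

```python
import sqlite3

# Minimal DDL sketch of the common layer (column subset only).
DDL = """
CREATE TABLE negative_results (
    id                TEXT PRIMARY KEY,
    compound_inchikey TEXT NOT NULL,
    target_uniprot    TEXT NOT NULL,
    result_type       TEXT CHECK (result_type IN
        ('hard_negative','conditional_negative','methodological_negative',
         'hypothesis_negative','dose_time_negative')),
    confidence_tier   TEXT CHECK (confidence_tier IN
        ('gold','silver','bronze','copper')),   -- lowercase enforced
    pchembl_value     REAL,
    source_db         TEXT NOT NULL,
    curator_validated INTEGER DEFAULT 0
);
"""
conn = sqlite3.connect(":memory:")
conn.execute(DDL)
conn.execute("INSERT INTO negative_results VALUES "
             "('r1','BSYNRYMUTXBXSQ-UHFFFAOYSA-N','P00533',"
             "'hard_negative','gold',4.0,'ChEMBL',1)")
```

Inserting a capitalized tier such as 'Gold' would raise `sqlite3.IntegrityError`, which is exactly the failure mode the lowercase convention guards against.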
- ## Benchmark Design (NegBioBench) — Dual ML + LLM Track
-
- ### Track A: Traditional ML Tasks
-
- | Task | Input | Output | Primary Metric |
- |------|-------|--------|----------------|
- | **M1: DTI Binary Prediction** | (compound SMILES, target sequence) | Active / Inactive | LogAUC[0.001,0.1], AUPRC |
- | **M2: Negative Confidence Prediction** | (SMILES, sequence, assay features) | gold/silver/bronze/copper | Weighted F1, MCC |
- | **M3: Activity Value Regression** | (SMILES, sequence) | pIC50 / pKd | RMSE, R², Spearman ρ |
-
- **ML Baselines:** DeepDTA, GraphDTA, DrugBAN, RF, XGBoost, DTI-LM, EviDTI
-
- ### Track B: LLM Tasks
-
- | Task | Input | Output | Metric | Eval Method |
- |------|-------|--------|--------|-------------|
- | **L1: Negative DTI Classification** | Natural language description | Active/Inactive/Inconclusive/Conditional (MCQ) | Accuracy, F1, MCC | Automated |
- | **L2: Negative Result Extraction** | Paper abstract | Structured JSON (compound, target, outcome) | Schema compliance, Entity F1, STED | Automated |
- | **L3: Inactivity Reasoning** | Confirmed negative + context | Scientific explanation | 4-dim rubric (accuracy, reasoning, completeness, specificity) | LLM-as-Judge + human sample |
- | **L4: Tested-vs-Untested Discrimination** | Compound-target pairs | Tested/Untested + evidence | Accuracy, F1, evidence quality | Automated + spot-check |
- | **L5: Assay Context Reasoning** | Negative result + condition changes | Prediction + reasoning per scenario | Prediction accuracy, reasoning quality | LLM-as-Judge |
- | **L6: Evidence Quality Assessment** | Negative result + metadata | Confidence tier + justification | Tier F1, justification quality | Automated + LLM-judge |
-
- - **LLM Baselines (Phase 1 — Free):** Gemini 2.5 Flash, Llama 3.3, Mistral 7B, Phi-3.5, Qwen2.5
- - **LLM Baselines (Phase 2 — Flagship):** GPT-4, Claude Sonnet/Opus, Gemini Pro
- - **LLM-as-Judge:** Gemini 2.5 Flash free tier (validated against human annotations)
-
- ### Track C: Cross-Track (Future)
-
- | Task | Description |
- |------|-------------|
- | **C1: Ensemble Prediction** | Combine ML model scores + LLM reasoning — does LLM improve ML? |
-
- ### Splitting Strategies (7 total, for Track A)
- 1. Random (stratified 70/10/20)
- 2. Cold compound (Butina clustering on Murcko scaffolds)
- 3. Cold target (by UniProt accession)
- 4. Cold both (compound + target unseen)
- 5. Temporal (train < 2020, val 2020-2022, test > 2022)
- 6. Scaffold (Murcko scaffold cluster-based)
- 7. DDB — Degree Distribution Balanced (addresses node degree bias)
-
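Split 3 (cold target) can be sketched in a few lines: whole UniProt accessions are assigned to one partition, so test targets are never seen in training (pure Python; the 70/10/20 fractions follow the convention above, all IDs are invented):

```python
import random

# Sketch of a cold-target split: partition the *targets*, then route each
# (compound, target) pair to the partition of its target.
def cold_target_split(pairs, seed=42, frac=(0.7, 0.1, 0.2)):
    """pairs: list of (compound, target) -> (train, val, test) lists."""
    targets = sorted({t for _, t in pairs})
    random.Random(seed).shuffle(targets)
    cut1 = int(len(targets) * frac[0])
    cut2 = int(len(targets) * (frac[0] + frac[1]))
    chunks = (targets[:cut1], targets[cut1:cut2], targets[cut2:])
    bucket = {t: i for i, chunk in enumerate(chunks) for t in chunk}
    splits = ([], [], [])
    for pair in pairs:
        splits[bucket[pair[1]]].append(pair)
    return splits

pairs = [(f"c{i}", f"t{i % 10}") for i in range(100)]
train, val, test = cold_target_split(pairs, seed=0)
```

By construction, no target ever appears in more than one partition, which is the property the cold-target leakage check verifies.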
- ### Evaluation Metrics (Track A)
-
- | Metric | Type | Role |
- |--------|------|------|
- | **LogAUC[0.001,0.1]** | Enrichment | **Primary ranking metric** |
- | **BEDROC (α=20)** | Enrichment | Early enrichment |
- | **EF@1%, EF@5%** | Enrichment | Top-ranked performance |
- | **AUPRC** | Ranking | **Secondary ranking metric** |
- | **MCC** | Classification | Balanced classification |
- | **AUROC** | Ranking | Backward compatibility only (not for ranking) |
-
- ### LLM Evaluation Configuration
- - **Full benchmark** (5 configs): zero-shot, 3-shot, 5-shot, CoT, CoT+3-shot
- - **Must-have** (2 configs): zero-shot, 3-shot only (see research/08 §3)
- - **Should-have** (add CoT): 3 configs total for Exp 11 (prompt strategy comparison)
- - 3 runs per evaluation, report mean ± std
- - Temperature = 0, prompts version-controlled
- - Anti-contamination: temporal holdout + paraphrased variants + contamination detection
-
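A self-contained sketch of the primary metric, under one plausible reading of LogAUC[0.001,0.1]: trapezoidal area of interpolated TPR over a log10-spaced FPR grid, normalized by log10(0.1/0.001) so a perfect ranker scores about 1.0. The grid resolution and interpolation scheme are illustrative choices, not the project's exact implementation:

```python
import math

# Sketch of LogAUC over the FPR window [lo, hi] on a log10 scale.
def roc_points(scores, labels):
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def log_auc(scores, labels, lo=1e-3, hi=0.1, n=50):
    pts = roc_points(scores, labels)

    def tpr_at(fpr):
        # linear interpolation of TPR between adjacent ROC points
        for (f0, t0), (f1, t1) in zip(pts, pts[1:]):
            if f0 <= fpr <= f1:
                if f1 == f0:
                    return max(t0, t1)
                return t0 + (t1 - t0) * (fpr - f0) / (f1 - f0)
        return pts[-1][1]

    grid = [lo * (hi / lo) ** (i / n) for i in range(n + 1)]
    area = sum(0.5 * (tpr_at(f0) + tpr_at(f1)) * (math.log10(f1) - math.log10(f0))
               for f0, f1 in zip(grid, grid[1:]))
    return area / math.log10(hi / lo)

perfect = log_auc([0.9, 0.8, 0.3, 0.2, 0.1, 0.05], [1, 1, 0, 0, 0, 0])
worst = log_auc([0.9, 0.8, 0.7, 0.6, 0.2, 0.1], [0, 0, 0, 0, 1, 1])
```

The log scale is what makes this an early-enrichment metric: the decades near FPR = 0.001 carry the same weight as those near 0.1, so models that rank a few actives at the very top are rewarded far more than AUROC would reward them.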
- ---
-
- ## Phase 3: Scale & Sustainability (Months 18-36)
-
- ### 3.1 Data Expansion
- - [ ] Expand to 100K+ curated negative DTIs
- - [ ] Full LLM-based literature mining pipeline (PubMed/PMC)
- - [ ] Supplementary materials table extraction (Table Transformer)
- - [ ] Integrate Target 2035 AIRCHECK data as it becomes available
- - [ ] Begin Gene Function (KO/KD) negative data collection
-
- ### 3.2 Benchmark Evolution (NegBioBench v1.0)
- - [ ] Track A expansion: multi-modal integration (protein structures, assay images)
- - [ ] Track B expansion: additional tasks — Failure Diagnosis, Experimental Design Critique, Literature Contradiction Detection
- - [ ] Track C: Cross-track ensemble evaluation (ML + LLM combined prediction)
- - [ ] Specialized bio-LLM evaluations (LlaSMol, BioMedGPT, DrugChat)
- - [ ] Regular leaderboard updates (both ML and LLM tracks)
-
- ---
-
- ## Phase 4: Domain Expansion (Months 36+)
-
- ```
- DTI (Phase 1 — COMPLETE)
-
- ├── Clinical Trial Failure (Phase 1-CT — COMPLETE ✅)
- │     └── 132,925 failure results loaded, benchmarks designed
-
- ├── Gene Function (CRISPR KO/KD negatives)
- │     └── Leverage CRISPR screen data, DepMap
-
- ├── Chemistry Domain Layer
- │     └── Failed reactions, yield = 0 data
-
- └── Materials Science Domain Layer
-       └── HTEM DB integration, failed synthesis conditions
- ```
-
- ---
-
- ## Key Milestones (Revised)
-
- | Milestone | Target Date | Deliverable | Status |
- |-----------|------------|-------------|--------|
- | Schema v1.0 finalized | Week 2 (Mar 2026) | SQLite schema + standardization pipeline | ✅ Done |
- | Data extraction complete | Week 3-4 (Mar 2026) | **30.5M** negative results (far exceeded 10K target) | ✅ Done |
- | ML export & splits | Week 3 (Mar 2026) | 6 split strategies + M1 benchmark datasets | ✅ Done |
- | ML evaluation metrics | Week 3 (Mar 2026) | 7 metrics, 329 tests | ✅ Done |
- | ML baseline infrastructure | Week 4 (Mar 2026) | 3 models + SLURM harness | ✅ Done |
- | ML baseline experiments | Week 5 (Mar 2026) | 18/18 runs complete, key findings confirmed | ✅ Done |
- | LLM benchmark infrastructure | Week 5 (Mar 2026) | L1–L4 datasets, prompts, eval, SLURM templates | ✅ Done |
- | LLM benchmark execution | Week 5-6 (Mar 2026) | 81/81 runs complete (9 models × 4 tasks + configs) | ✅ Done |
- | Python library v0.1 | Month 8 | `pip install negbiodb` | Planned |
- | Web platform launch | Month 12 | Public access + leaderboard | Planned |
- | 100K+ entries | Month 24 | Scale milestone | Planned |
-
- ---
 
+ # NegBioDB -- Roadmap
+
+ > Last updated: 2026-03-30
+
+ ## Completed (Phase 1)
+
+ ### DTI Domain (Drug-Target Interaction)
+ - 30.5M negative results from 4 sources (ChEMBL, PubChem, BindingDB, DAVIS)
+ - ML baselines: DeepDTA, GraphDTA, DrugBAN across 5 splits + 2 negative controls
+ - LLM benchmark: L1-L4 tasks, 5 models, zero-shot and 3-shot configs
+ - Key finding: degree-matched negatives inflate LogAUC by +0.112
+
+ ### CT Domain (Clinical Trial Failure)
+ - 132,925 failure results from 216,987 trials (AACT, CTO, Open Targets, Shi & Du)
+ - ML baselines: XGBoost, MLP, GNN across M1 (binary) and M2 (7-way) tasks
+ - LLM benchmark: L1-L4, 5 models
+ - Key finding: NegBioDB negatives trivially separable (AUROC=1.0); M2 macro-F1=0.51
+
+ ### PPI Domain (Protein-Protein Interaction)
+ - 2.2M negative results from 4 sources (IntAct, HuRI, hu.MAP, STRING)
+ - ML baselines: Siamese CNN, PIPR, MLPFeatures across 4 splits
+ - LLM benchmark: L1-L4, 5 models
+ - Key finding: PIPR AUROC drops to 0.41 on cold_both; temporal contamination detected
+
+ ### GE Domain (Gene Essentiality / DepMap)
+ - 28.8M negative results from DepMap CRISPR and RNAi screens
+ - ML baselines: XGBoost, MLPFeatures (seed 42 complete)
+ - LLM benchmark: 4/5 models complete
+ - Key finding: cold-gene splits reveal severe generalization gaps
+
+ ## In Progress
+
+ - GE domain: remaining ML seeds (43/44) and Llama LLM runs
+ - Paper preparation for NeurIPS 2026 Evaluations & Datasets Track
+
+ ## Planned
+
+ ### Phase 2: Community & Platform
+ - Web interface for search, browse, and download
+ - Python library: `pip install negbiodb`
+ - REST API with tiered access
+ - Community submission portal
+ - Public leaderboard
+
+ ### Phase 3: Scale
+ - Expand curated entries via literature mining
+ - Specialized bio-LLM evaluations
+ - Cross-track ensemble evaluation (ML + LLM)