Version 1.0.0
- .DS_Store +0 -0
- README.md +6 -6
- data/README.md +0 -7
- data/additives/README.md +10 -0
- data/{setups/lcms_setups.csv → additives/add.smi} +2 -2
- data/{testing/retina_testing.csv → additives/add_descriptors.csv} +2 -2
- data/{validation/retina_validation.csv → cluster_split/cluster_1.csv} +2 -2
- data/cluster_split/cluster_2.csv +3 -0
- data/cluster_split/cluster_3.csv +3 -0
- data/cluster_split/cluster_4.csv +3 -0
- data/cluster_split/cluster_5.csv +3 -0
- data/cluster_split/figures/cluster_assignments.csv +3 -0
- data/cluster_split/figures/cluster_rt.png +3 -0
- data/cluster_split/figures/cluster_umap.png +3 -0
- data/compounds/README.md +1 -4
- data/compounds/comp.smi +2 -2
- data/compounds/comp_descriptors.csv +3 -0
- data/lcms_methods.csv +3 -0
- data/method_split/figures/methods_rt.png +3 -0
- data/method_split/figures/methods_umap.png +3 -0
- data/method_split/methods_1.csv +3 -0
- data/method_split/methods_2.csv +3 -0
- data/method_split/methods_3.csv +3 -0
- data/method_split/methods_4.csv +3 -0
- data/method_split/methods_5.csv +3 -0
- data/retina_dataset.csv +3 -0
- data/scaffold_split/figures/scaffold_assignments.csv +3 -0
- data/scaffold_split/figures/scaffold_rt.png +3 -0
- data/scaffold_split/figures/scaffold_umap.png +3 -0
- data/scaffold_split/fold_1.csv +3 -0
- data/scaffold_split/fold_2.csv +3 -0
- data/scaffold_split/fold_3.csv +3 -0
- data/scaffold_split/fold_4.csv +3 -0
- data/scaffold_split/fold_5.csv +3 -0
- data/scripts/cluster_split.py +395 -0
- data/scripts/method_split.py +358 -0
- data/scripts/scaffold_split.py +462 -0
- data/solvents/README.md +1 -4
- data/solvents/solv.smi +2 -2
- data/solvents/solv_descriptors.csv +2 -2
.DS_Store
CHANGED

Binary files a/.DS_Store and b/.DS_Store differ
README.md
CHANGED

@@ -4,12 +4,12 @@ license: mit
 
 ## ⚗️ ReTiNA: A Benchmark Dataset for LC-MS Retention Time Modeling
 
+Current Version: **1.0.0**
+
 ReTiNA is a large open-source dataset for training machine learning models to predict small molecule retention times in LC-MS workflows.
 
 This dataset is actively expanding with new experimental retention time values from the Coley Research Group at MIT, ensuring it remains a growing resource for retention time prediction.
 
-Additionally, ReTiNA includes ```.smi``` lists of 641,651 unique compounds and 6 unique solvents in the dataset for chemical descriptor calculations.
-
 ReTiNA is designed for use in:
 
 - Estimating retention times for new compound–environment combinations

@@ -20,11 +20,11 @@ ReTiNA is designed for use in:
 
 The ReTiNA dataset contains:
 
-
+- 119,039 unique molecule–environment combinations, the largest singular LC-MS retention time dataset of its kind to date
 - Experimentally measured retention times, in seconds, curated from public datasets, benchmark papers, and literature
-
+- Chemical descriptors for 105,809 unique compounds, 6 unique solvents, and 8 unique additives
 
-
+73 distinct LC-MS setup environments are used in ReTiNA. Each environment consists of:
 
 - Solvent mixtures A and B, consisting of solvents and solvent additives contributing to pH
 - The mobile phase gradient used, defined by the percentage of solvent mixture B over time (min)

@@ -33,7 +33,7 @@ The ReTiNA dataset contains:
 
 - The mobile phase flow rate, measured in mL/min
 - The column temperature, measured in degrees Celsius
 
-
+The ReTiNA dataset is divided into scaffold, cluster, and method splits, which can be accessed in the `data` directory and are used in model evaluation.
 
 ## 📋 Data Sources Used
 
data/README.md
CHANGED

@@ -63,10 +63,3 @@ Each data entry in the ReTiNA-1 dataset is comprised of 7 columns. Examples and
 - 2,252 compound-environment combinations
 - 2,178 unique compounds
 - 2 unique LC-MS setup environments
-
-[RTPred](https://doi.org/10.1016/j.chroma.2025.465816) (rtpred): A Web Server for Accurate, Customized Liquid Chromatography Retention Time Prediction of Chemicals
-
-- Data sourced from 33 individual RTPred datasets
-- 5,996,830 compound-environment combinations
-- 535,847 unique compounds
-- 11 unique LC-MS setup environments
data/additives/README.md
ADDED

@@ -0,0 +1,10 @@
# 🧪 ReTiNA Table of Additive Descriptors

The ReTiNA dataset is accompanied by Morgan fingerprints for each additive in the
dataset, capturing features for model training. Descriptors were computed using RDKit.

## Topological Descriptors

| Descriptor | Summary | Software Used |
|------------|---------|---------------|
| Morgan_Fingerprint | Circular fingerprints encoding molecular substructures up to a given radius, used for similarity and machine learning | RDKit |
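As a quick illustration of the table above, here is a minimal sketch of computing a Morgan fingerprint with RDKit, assuming RDKit is installed; the SMILES (formic acid, a common LC-MS additive) is only an example input, not taken from the repo:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Formic acid as an example additive
mol = Chem.MolFromSmiles("OC=O")

# Radius-2 Morgan fingerprint folded into a 2048-bit vector
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

print(fp.GetNumBits())    # total bits in the vector
print(fp.GetNumOnBits())  # number of set substructure bits
```

Each set bit marks the presence of a circular substructure up to the given radius, which is what makes these vectors usable for both similarity search and model input.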
data/{setups/lcms_setups.csv → additives/add.smi}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:863d1d1cb4f713ade2ddaa442a8086043e179d009da7f0cffc53c939de2f1fe2
+size 167
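The three-line stubs above are Git LFS pointer files: the repo stores only a version line, a SHA-256 object id, and a byte size in place of the real data. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents as shown in the diff above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:863d1d1cb4f713ade2ddaa442a8086043e179d009da7f0cffc53c939de2f1fe2
size 167"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the real add.smi payload
```

In a clone, running `git lfs pull` replaces these stubs with the actual files.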
data/{testing/retina_testing.csv → additives/add_descriptors.csv}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3cbd8ed99eb4d1706acf8c4eadc1aee7e9a6920ec2665549cded4a411d9f7fbc
+size 16545
data/{validation/retina_validation.csv → cluster_split/cluster_1.csv}
RENAMED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c444f4d4e8d1212ddcab8f66af02c785e24a1bda9c12683d8349a2bbdf5e5794
+size 15137399
data/cluster_split/cluster_2.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ad0e96a5e25e8c2c10ce3a4f30a846e1a1cfa068dc00c89cc71c80e36df80bf0
size 6286181
data/cluster_split/cluster_3.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:187a59054c2dd03fa523c4f8d4d90f6606d03f3d32a1a13daefbcebe51b0460e
size 4165851
data/cluster_split/cluster_4.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:022197af27f1ba2a362eed6ad1b79e70f0e30556b9a4f341bd47985a72119a69
size 3627182
data/cluster_split/cluster_5.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6642e32fbc5c52f18fcb17c79bbb672ad3f8f5de40b628b801cd8fb6a6c7c432
size 4341002
data/cluster_split/figures/cluster_assignments.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:739cd0035dbb30b1111a20e055c0da7aef04d2f1d39489837091e7d652f75ec0
size 7695580
data/cluster_split/figures/cluster_rt.png
ADDED

Git LFS Details
data/cluster_split/figures/cluster_umap.png
ADDED

Git LFS Details
data/compounds/README.md
CHANGED

@@ -1,6 +1,6 @@
 # ⚛️ ReTiNA Table of Compound Descriptors
 
-The ReTiNA dataset is accompanied with 160 descriptors for each compound, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
+The ReTiNA dataset is accompanied by 157 descriptors for each compound, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
 
 ## Topological Descriptors
 

@@ -9,16 +9,13 @@
 | BalabanJ | Quantifies molecular complexity based on average distance connectivity and graph branching | RDKit |
 | BertzCT | Calculates molecular complexity based on graph connectivity and atomic contributions | RDKit |
 | Chi (0-1), Chi_n (0-4), Chi_v (0-4) | Connectivity indices reflecting molecular topology, branching, and size | RDKit |
-| Ipc | Information content index representing structural complexity | RDKit |
 | Kappa (1-3) | Shape indices describing molecular flexibility and overall geometry | RDKit |
 
 ## Electronic Descriptors
 
 | Descriptor | Summary | Software Used |
 |------------|---------|---------------|
-| MaxAbsPartialCharge | Maximum absolute atomic partial charge | RDKit |
 | MaxEStateIndex | Maximum E-state value in the molecule | RDKit |
-| MaxPartialCharge | Highest partial charge in the molecule | RDKit |
 | NumValenceElectrons | Total number of valence electrons in the molecule | RDKit |
 | NumRadicalElectrons | Total number of unpaired electrons (radicals) | RDKit |
 | HallKierAlpha | Atom-type electrotopological descriptor modeling polarity and hybridization | RDKit |
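Several of the descriptors tabulated above are plain functions in RDKit's `Descriptors` module. A minimal sketch of computing a few of them, assuming RDKit is installed; the SMILES (phenol) is an arbitrary example compound, not one asserted to be in the dataset:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# Phenol as an example compound
mol = Chem.MolFromSmiles("c1ccccc1O")

# A few of the topological and electronic descriptors listed above
print(Descriptors.BalabanJ(mol))            # distance-connectivity complexity
print(Descriptors.BertzCT(mol))             # graph-based complexity
print(Descriptors.HallKierAlpha(mol))       # electrotopological alpha value
print(Descriptors.NumValenceElectrons(mol)) # total valence electrons (incl. implicit H)
```

Running such functions over every SMILES in `comp.smi` is the usual way a descriptor table like `comp_descriptors.csv` is produced.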
data/compounds/comp.smi
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:bd2147571bca5f201ba48e1a3384d1e0c86f959b5e0678185e06d2b4a71c6f10
+size 6091768
data/compounds/comp_descriptors.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26d523156d754e5a2a141107ca6daddde58c484f64d683392e49a2d4a723ee2a
size 348748041
data/lcms_methods.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2cdc220e7e4f4e8fe03d184520977930afc848a5e959335cadb9f464cd170ae8
size 16640
data/method_split/figures/methods_rt.png
ADDED

Git LFS Details
data/method_split/figures/methods_umap.png
ADDED

Git LFS Details
data/method_split/methods_1.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00cfc646462c23f9fb06166c4d4c06fe611b86a1e539827f39d7b5f3fd12168f
size 1313997
data/method_split/methods_2.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f941df201fb4c5906a21be42835004aeee26b6b0c5406493087b8cfdfa66a8e
size 1315427
data/method_split/methods_3.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:07372d721e5463008f23a65650b6820a28d76b6161a8aff2129d194019b8ba95
size 1089765
data/method_split/methods_4.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f7f062ae5c2a76d605a5a1fca71bf1cf605c8e85b4e910265c422d9803bd11b
size 6077386
data/method_split/methods_5.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31b7a01d172ad706d0c1113a64a730346d8fda02db897c1d6691c7cd82a48dc1
size 19186922
data/retina_dataset.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:acac7b2363b4bcec2d899887839b5100a9232e0bd29bceeee5de3d1edb58a803
size 28866318
data/scaffold_split/figures/scaffold_assignments.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5e9b851080bdf12f85936ddc736be108b9a01a1cd61a94013505ad91fa5368b
size 2028108
data/scaffold_split/figures/scaffold_rt.png
ADDED

Git LFS Details
data/scaffold_split/figures/scaffold_umap.png
ADDED

Git LFS Details
data/scaffold_split/fold_1.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:844bf2bb7d44aa9678931de4ccf8a2c69ff47501e5d41c7f2943c81d3d412017
size 5897260
data/scaffold_split/fold_2.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f39d4adac0d2ef0d304f48e86ea9654dee12c91a471897e9451d12e65358a3e
size 5770319
data/scaffold_split/fold_3.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c33ab689ee68c0aac7e510df4b254a37ece1d11ec3c3ee7ae9ae2c53ce0d49b
size 5699489
data/scaffold_split/fold_4.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0daeb1563bd6ca770d03b3aef42b19ef9b5580e489d70f7ec4a5d02bc313d4b
size 5720713
data/scaffold_split/fold_5.csv
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:808d6c9b2fd9735bb6a49821b0be0522d96bd8f2afcb51350c58b92a788428cb
size 5759302
data/scripts/cluster_split.py
ADDED

@@ -0,0 +1,395 @@
#!/usr/bin/env python3
"""
cluster_split.py

Author: natelgrw
Created: 11/05/2025

Splits the dataset into folds based on compound structural clustering using
Morgan fingerprints. Uses UMAP for dimensionality reduction and KMeans
clustering in the reduced 2D space to preserve chemical similarity while
enabling spatial separation.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
from collections import defaultdict
import random
from tqdm import tqdm
import umap
from sklearn.cluster import KMeans
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs


# ===== Configuration ===== #


INPUT_CSV = "../retina_dataset.csv"
OUTPUT_DIR = "../cluster_split"
N_FOLDS = 5
RANDOM_SEED = 42

UMAP_NEIGHBORS = 15
UMAP_MIN_DIST = 0.1
UMAP_VIZ_DIM = 2

FP_RADIUS = 2
FP_N_BITS = 2048

random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)


# ===== Helper Functions ===== #


def analyze_dataset(df):
    """
    Prints dataset statistics.
    """
    print("=" * 70)
    print("Dataset Analysis")
    print("=" * 70)
    print(f"Total rows: {len(df):,}")
    if "compound" in df.columns:
        print(f"Unique compounds: {df['compound'].nunique():,}")
    if 'rt' in df.columns:
        print("\nRetention Time Stats:")
        print(f"Mean: {df['rt'].mean():.2f} s | Median: {df['rt'].median():.2f} s")
    print()


def assign_clusters_to_folds(compound_sizes, n_folds, cluster_assignments):
    """
    Assign compound clusters to folds with balancing.
    """
    fold_assignments = defaultdict(list)
    fold_counts = [0] * n_folds

    cluster_to_compounds = defaultdict(list)
    for compound, cluster_id in cluster_assignments.items():
        cluster_to_compounds[cluster_id].append(compound)

    cluster_info = []
    for cluster_id, compounds in cluster_to_compounds.items():
        size = sum(compound_sizes.get(c, 0) for c in compounds)
        cluster_info.append((cluster_id, size, compounds))

    # largest clusters first, each assigned to the currently smallest fold
    cluster_info.sort(key=lambda x: x[1], reverse=True)

    total_size = sum(compound_sizes.values())
    target_size = total_size / n_folds

    print(f"\nAssigning {len(cluster_info)} clusters to {n_folds} folds...")
    print(f"Target size per fold: {target_size:,.0f} datapoints")

    for cluster_id, size, compounds in cluster_info:
        min_fold = min(range(n_folds), key=lambda i: fold_counts[i])

        for compound in compounds:
            fold_assignments[min_fold].append(compound)
            fold_counts[min_fold] += compound_sizes.get(compound, 0)

    print("\nFold balance:")
    for i, count in enumerate(fold_counts):
        print(f"Fold {i+1}: {count:,} datapoints ({100*count/total_size:.2f}%)")
    print(f"Balance ratio: {max(fold_counts)/min(fold_counts):.2f}x")

    return fold_assignments, fold_counts


# ===== Analyzer Class ===== #


class ClusterAnalyzer:
    """
    Analyzer for compound clustering-based splitting.
    """

    def __init__(self, data_path, output_dir):
        self.data_path = Path(data_path)
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True, parents=True)
        self.df = None
        self.compounds_df = None

    def load_data(self):
        """Load dataset."""
        print("\nLOADING RETINA DATASET")
        self.df = pd.read_csv(self.data_path)

        unique_compounds = self.df['compound'].unique()
        self.compounds_df = pd.DataFrame({'compound': unique_compounds})
        print(f"Loaded {len(self.df):,} datapoints with {len(unique_compounds):,} unique compounds.")

    def compute_morgan_fingerprints(self):
        """Compute Morgan fingerprints for all compounds."""
        print("\nCOMPUTING MORGAN FINGERPRINTS")
        fingerprints = []
        valid_compounds = []

        for smiles in tqdm(self.compounds_df['compound'], desc="Computing fingerprints"):
            mol = Chem.MolFromSmiles(smiles)
            if mol is not None:
                fp = AllChem.GetMorganFingerprintAsBitVect(
                    mol, FP_RADIUS, nBits=FP_N_BITS
                )
                arr = np.zeros((FP_N_BITS,), dtype=np.int8)
                DataStructs.ConvertToNumpyArray(fp, arr)
                fingerprints.append(arr)
                valid_compounds.append(smiles)
            else:
                print(f"Warning: Could not parse SMILES: {smiles}")

        fingerprints = np.array(fingerprints)
        self.compounds_df = self.compounds_df[self.compounds_df['compound'].isin(valid_compounds)].copy()

        print(f"Computed {len(fingerprints)} fingerprints of dimension {FP_N_BITS}")
        return fingerprints

    def compute_umap_embedding(self, fingerprints):
        """
        Compute 2D UMAP embedding for clustering and visualization.
        Clustering will be done directly on these 2D coordinates.
        """
        print("\nCOMPUTING 2D UMAP EMBEDDING")
        print("(Clustering will be done on 2D UMAP coordinates)")

        # 2D embedding used for both clustering and visualization
        reducer_viz = umap.UMAP(
            n_neighbors=UMAP_NEIGHBORS,
            min_dist=UMAP_MIN_DIST,
            n_components=UMAP_VIZ_DIM,
            metric='jaccard',
            random_state=RANDOM_SEED,
            verbose=False
        )
        embedding_viz = reducer_viz.fit_transform(fingerprints)
        print(f"2D UMAP computed: {embedding_viz.shape}")

        return embedding_viz

    def cluster_compounds(self, embedding, n_clusters):
        """
        Cluster compounds using KMeans in 2D UMAP space.
        This creates spatially-separated, visually distinct regions.
        """
        print("\nCLUSTERING COMPOUNDS IN 2D UMAP SPACE")
        print(f"Using KMeans with k={n_clusters} clusters...")
        print(f"Input shape: {embedding.shape}")

        kmeans = KMeans(
            n_clusters=n_clusters,
            random_state=RANDOM_SEED,
            n_init=20,
            max_iter=300,
            verbose=0
        )
        cluster_labels = kmeans.fit_predict(embedding)

        self.compounds_df['cluster'] = cluster_labels

        # print cluster distribution
        cluster_counts = pd.Series(cluster_labels).value_counts().sort_index()
        print("\nCluster distribution:")
        for cluster_id, count in cluster_counts.items():
            print(f"Cluster {cluster_id}: {count:,} compounds")

        return cluster_labels

    def create_cluster_splits(self, n_splits=5):
        """
        Create folds based on compound clusters in 2D UMAP space.
        """
        print("\nCREATING CLUSTER SPLITS")
        print("=" * 60)
        print("\nStrategy: Cluster in 2D UMAP space for spatial separation")

        fingerprints = self.compute_morgan_fingerprints()

        embedding_viz = self.compute_umap_embedding(fingerprints)

        self.compounds_df['umap_x'] = embedding_viz[:, 0]
        self.compounds_df['umap_y'] = embedding_viz[:, 1]

        cluster_labels = self.cluster_compounds(embedding_viz, n_clusters=n_splits)

        cluster_assignments = dict(zip(
            self.compounds_df['compound'],
            self.compounds_df['cluster']
        ))

        compound_sizes = self.df['compound'].value_counts().to_dict()
        print(f"\n{len(compound_sizes):,} unique compounds in dataset.")

        fold_assignments = defaultdict(list)
        for cluster_id in range(n_splits):
            cluster_compounds = self.compounds_df[self.compounds_df['cluster'] == cluster_id]['compound'].tolist()
            fold_assignments[cluster_id] = cluster_compounds

        fold_counts = [sum(compound_sizes.get(c, 0) for c in compounds)
                       for compounds in fold_assignments.values()]

        total_size = sum(compound_sizes.values())
        print("\nFold balance (direct 1:1 cluster-to-fold mapping):")
        for i, count in enumerate(fold_counts):
            print(f"Fold {i+1}: {count:,} datapoints ({100*count/total_size:.2f}%)")
        print(f"Balance ratio: {max(fold_counts)/min(fold_counts):.2f}x")

        compound_to_fold = {}
        for fold_idx, compounds in fold_assignments.items():
            for compound in compounds:
                compound_to_fold[compound] = fold_idx + 1

        self.df['fold'] = self.df['compound'].map(compound_to_fold)
        self.compounds_df['fold'] = self.compounds_df['compound'].map(compound_to_fold)

        compound_to_umap = dict(zip(
            self.compounds_df['compound'],
            zip(self.compounds_df['umap_x'], self.compounds_df['umap_y'])
        ))
        self.df['umap_x'] = self.df['compound'].map(lambda c: compound_to_umap.get(c, (None, None))[0])
        self.df['umap_y'] = self.df['compound'].map(lambda c: compound_to_umap.get(c, (None, None))[1])

        fold_dataframes = {}
        for i in range(n_splits):
            fold_df = self.df[self.df['fold'] == i + 1].copy()
            out_file = self.output_dir / f"cluster_{i+1}.csv"
            fold_df.to_csv(out_file, index=False)
            fold_dataframes[i] = fold_df
            print(f"Saved cluster_{i+1}.csv ({len(fold_df):,} rows, {fold_df['compound'].nunique():,} compounds)")

        cluster_file = self.output_dir / "figures" / "cluster_assignments.csv"
        cluster_file.parent.mkdir(exist_ok=True, parents=True)
        self.compounds_df[['compound', 'cluster', 'fold', 'umap_x', 'umap_y']].to_csv(
            cluster_file, index=False
        )
        print("\nSaved cluster assignments to figures/cluster_assignments.csv")

        return fold_dataframes

    def visualize_rt_distributions(self, fold_dataframes):
        """
        Generates a KDE plot of the RT distribution per cluster split.
        """
        print("\nPLOTTING RETENTION TIME DISTRIBUTIONS")
        fig, ax = plt.subplots(figsize=(14, 6))
        colors = sns.color_palette("husl", len(fold_dataframes))

        if "rt" in self.df.columns:
            overall_rt = self.df["rt"].dropna() / 60.0
            if len(overall_rt) > 0:
                sns.kdeplot(
                    overall_rt, ax=ax,
                    color='black', linewidth=2.5,
                    linestyle='--',
                    label=f"Overall (n={len(overall_rt):,})"
                )

        for i, fold_df in fold_dataframes.items():
            if "rt" not in fold_df.columns:
                continue
            rt_min = fold_df["rt"].dropna() / 60.0
            if len(rt_min) > 0:
                sns.kdeplot(
                    rt_min, ax=ax,
                    label=f"Cluster {i+1} (n={len(rt_min):,})",
                    color=colors[i],
                    linewidth=2.5
                )

        ax.set_xlabel("Retention Time (min)", fontsize=12, fontweight='bold')
        ax.set_ylabel("Density", fontsize=12, fontweight='bold')
        ax.set_title("Retention Time Distribution Across Cluster Splits", fontsize=14, fontweight='bold')
        ax.legend(fontsize=10, framealpha=0.9)
        ax.grid(alpha=0.3, linestyle=':', linewidth=0.5)
        ax.set_xlim(left=0)

        fig_dir = self.output_dir / "figures"
        fig_dir.mkdir(exist_ok=True)
        plt.savefig(fig_dir / "cluster_rt.png", dpi=300, bbox_inches="tight")
        plt.close()
        print("Saved RT KDE plot to figures/cluster_rt.png")

    def generate_umap_plot(self):
        """
        Generates a UMAP visualization colored by fold, showing all datapoints.
        """
        print("\nGENERATING UMAP VISUALIZATION")
        print(f"Plotting all {len(self.df):,} datapoints (including duplicates)")

        plt.figure(figsize=(10, 7))

        if "fold" in self.df.columns and "umap_x" in self.df.columns:
            colors = sns.color_palette("husl", N_FOLDS)

            for fold_num in sorted(self.df["fold"].dropna().unique()):
                fold_data = self.df[self.df["fold"] == fold_num]
                n_datapoints = len(fold_data)
                plt.scatter(
                    fold_data["umap_x"], fold_data["umap_y"],
                    label=f"Cluster {int(fold_num)} (n={n_datapoints:,})",
                    s=5, alpha=0.5, edgecolor='none',
                    color=colors[int(fold_num) - 1]
                )

            plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', fontsize=10)
            plt.title("UMAP Projection of Compound Space (Colored by Cluster Split)", fontsize=14, fontweight='bold', pad=15)

        else:
            plt.scatter(
                self.df["umap_x"], self.df["umap_y"],
                s=5, alpha=0.5, color="steelblue", edgecolor='none'
            )

        plt.tight_layout()

        fig_dir = self.output_dir / "figures"
        fig_dir.mkdir(exist_ok=True)
        plt.savefig(fig_dir / "cluster_umap.png", dpi=300, bbox_inches="tight")
        plt.close()
        print("Saved UMAP plot to figures/cluster_umap.png")


# ===== Main ===== #


def main():
    """
    Main execution function.
    """
    print("\n" + "=" * 80)
    print(" " * 30 + "CLUSTER SPLIT PIPELINE")
    print("=" * 80)
    print("1. Compute 2D UMAP embedding from 2048D fingerprints")
    print(f"2. Cluster directly in 2D UMAP space (k={N_FOLDS})")
    print("3. Assign clusters to folds (1:1 mapping)")
    print("4. Result: Spatially-separated regions in UMAP visualization")
    print("=" * 80)

    analyzer = ClusterAnalyzer(INPUT_CSV, OUTPUT_DIR)
    analyzer.load_data()
    analyze_dataset(analyzer.df)

    fold_dataframes = analyzer.create_cluster_splits(n_splits=N_FOLDS)

    analyzer.generate_umap_plot()
    analyzer.visualize_rt_distributions(fold_dataframes)
|
| 383 |
+
print("\n" + "=" * 80)
|
| 384 |
+
print(" " * 30 + "CLUSTER SPLIT COMPLETE!")
|
| 385 |
+
print("=" * 80)
|
| 386 |
+
print(f"\nOutputs in: {OUTPUT_DIR}/")
|
| 387 |
+
print(f"- cluster_1.csv through cluster_{N_FOLDS}.csv")
|
| 388 |
+
print(f"- figures/cluster_umap.png")
|
| 389 |
+
print(f"- figures/cluster_rt.png")
|
| 390 |
+
print(f"- figures/cluster_assignments.csv")
|
| 391 |
+
|
| 392 |
+
|
| 393 |
+
if __name__ == "__main__":
|
| 394 |
+
main()
|
| 395 |
+
|
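The pipeline above maps KMeans clusters to folds 1:1 (cluster 0 becomes fold 1, and so on) and then reports fold balance. A minimal standard-library sketch of that mapping and the resulting fold-size ratio; the cluster labels here are hypothetical, not taken from the dataset:

```python
from collections import Counter

def clusters_to_folds(cluster_labels):
    """Map 0-indexed cluster labels to 1-indexed fold numbers (1:1)."""
    return [int(c) + 1 for c in cluster_labels]

# hypothetical KMeans output for eight datapoints
labels = [0, 2, 1, 2, 0, 4, 3, 2]
folds = clusters_to_folds(labels)
counts = Counter(folds)

# largest-to-smallest fold size ratio, as printed by the pipeline
ratio = max(counts.values()) / min(counts.values())
print(sorted(counts.items()))
print(f"Fold size ratio: {ratio:.2f}x")
```

Because the mapping is 1:1, fold balance is entirely determined by how evenly KMeans partitions the UMAP space.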
data/scripts/method_split.py
ADDED
@@ -0,0 +1,358 @@
#!/usr/bin/env python3
"""
method_split.py

Author: natelgrw
Last Edited: 11/04/2025

Splits the ReTiNA dataset into 5 folds based on LC-MS setup configurations
(e.g., solvents, gradient, column, temperature, flow rate).
Each setup group is assigned to a single fold to avoid data leakage
across similar chromatographic conditions.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
from collections import defaultdict
import random
import umap
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.cluster import KMeans
from ast import literal_eval


# ===== Configuration ===== #


INPUT_CSV = "../retina_dataset.csv"
METHOD_CSV = "../lcms_methods.csv"
OUTPUT_DIR = "../method_split"
N_FOLDS = 5
RANDOM_SEED = 42

random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)


# ===== Helper Functions ===== #


def analyze_dataset(df):
    """
    Prints dataset statistics.
    """
    print("=" * 70)
    print("Dataset Analysis")
    print("=" * 70)
    print(f"Total rows: {len(df):,}")
    if "compound" in df.columns:
        print(f"Unique compounds: {df['compound'].nunique():,}")
    if 'rt' in df.columns:
        print("\nRetention Time Stats:")
        print(f"  Mean: {df['rt'].mean():.2f} s | Median: {df['rt'].median():.2f} s")
    if 'method_number' in df.columns:
        print(f"Unique methods: {df['method_number'].nunique():,}")
    print()


def assign_methods_to_folds(method_sizes, n_folds, method_features=None, method_ids=None):
    """
    Assigns methods to folds using KMeans clustering on UMAP coordinates.
    This creates compact, spatially-separated regions in the UMAP visualization.
    """
    fold_assignments = defaultdict(list)
    fold_counts = [0] * n_folds

    if method_features is not None and method_ids is not None:
        print(f"\nUsing KMeans to create {n_folds} spatially-separated regions...")

        kmeans = KMeans(n_clusters=n_folds, random_state=RANDOM_SEED, n_init=20)
        cluster_labels = kmeans.fit_predict(method_features)

        cluster_to_methods = defaultdict(list)
        for method_id, cluster_id in zip(method_ids, cluster_labels):
            cluster_to_methods[cluster_id].append(method_id)

        cluster_sizes = {}
        for cluster_id, methods in cluster_to_methods.items():
            cluster_sizes[cluster_id] = sum(method_sizes[m] for m in methods)

        total_size = sum(method_sizes.values())
        avg_size = total_size / n_folds

        print(f"Target size per fold: {avg_size:,.0f} datapoints")

        for cluster_id, methods in cluster_to_methods.items():
            cluster_size = cluster_sizes[cluster_id]
            method_size_list = [(m, method_sizes[m]) for m in methods]
            method_size_list.sort(key=lambda x: x[1], reverse=True)

            if len(method_size_list) > 0:
                largest_method, largest_size = method_size_list[0]
                if largest_size > cluster_size * 0.6 and largest_size > avg_size * 0.5:
                    print(f"Warning: Method {largest_method} has {largest_size:,} datapoints "
                          f"({100*largest_size/total_size:.1f}% of total dataset)")

        for cluster_id in range(n_folds):
            methods = cluster_to_methods[cluster_id]
            for method in methods:
                fold_assignments[cluster_id].append(method)
                fold_counts[cluster_id] += method_sizes[method]

        print("\nSpatial assignment complete!")
        print(f"Largest fold: {max(fold_counts):,} datapoints")
        print(f"Smallest fold: {min(fold_counts):,} datapoints")
        print(f"Fold size ratio: {max(fold_counts)/min(fold_counts):.2f}x")

    else:
        print("\nWarning: No method features provided, using greedy assignment")
        sorted_methods = sorted(method_sizes.items(), key=lambda x: x[1], reverse=True)
        for method, size in sorted_methods:
            min_fold = min(range(n_folds), key=lambda i: fold_counts[i])
            fold_assignments[min_fold].append(method)
            fold_counts[min_fold] += size

    return fold_assignments, fold_counts


# ===== Analyzer Class ===== #


class MethodAnalyzer:
    """
    Analyzer for LC-MS setup-based splitting.
    """

    def __init__(self, data_path, method_path, output_dir):
        self.data_path = Path(data_path)
        self.method_path = Path(method_path)
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True, parents=True)
        self.df = None
        self.methods = None

    def load_data(self):
        """
        Loads the datasets.
        """
        print("\nLOADING RETINA + METHOD DATA")
        self.df = pd.read_csv(self.data_path)
        self.methods = pd.read_csv(self.method_path)

        # gradient and column are stored as stringified Python lists
        for col in ["gradient", "column"]:
            if col in self.methods.columns:
                self.methods[col] = self.methods[col].apply(lambda x: literal_eval(x) if isinstance(x, str) else x)
        print(f"Loaded {len(self.df):,} datapoints and {len(self.methods):,} methods.")

    def extract_method_features(self):
        """
        Converts LC-MS setups into numerical feature vectors.
        """
        if hasattr(self, 'method_features_X'):
            return self.method_features_X

        df = self.methods.copy()
        df["phase"] = df["column"].apply(lambda x: x[0] if isinstance(x, list) else "UNK")
        df["col_diam"] = df["column"].apply(lambda x: float(x[1]) if isinstance(x, list) else 0)
        df["col_len"] = df["column"].apply(lambda x: float(x[2]) if isinstance(x, list) else 0)
        df["col_part"] = df["column"].apply(lambda x: float(x[3]) if isinstance(x, list) else 0)
        df["grad_len"] = df["gradient"].apply(lambda x: x[-1][0] if isinstance(x, list) and len(x) > 0 else 0)
        df["grad_range"] = df["gradient"].apply(
            lambda x: max([p[1] for p in x]) - min([p[1] for p in x]) if isinstance(x, list) and len(x) > 0 else 0
        )

        feat_cols = ["phase", "col_diam", "col_len", "col_part", "grad_len", "grad_range", "flow_rate", "temp"]
        cat_cols = ["phase"]
        num_cols = [c for c in feat_cols if c not in cat_cols]

        preprocessor = ColumnTransformer([
            ("cat", OneHotEncoder(), cat_cols),
            ("num", StandardScaler(), num_cols)
        ])

        X = preprocessor.fit_transform(df[feat_cols])
        df["vector"] = list(X.toarray() if hasattr(X, "toarray") else X)
        print(f"Extracted {X.shape[1]} features per method.")
        self.methods = df
        self.method_features_X = X
        return X

    def create_method_splits(self, n_splits=5):
        """
        Splits the dataset by LC-MS setup groups into folds.
        """
        print("\nCREATING METHOD SPLITS")
        print("=" * 60)

        print("Extracting method features and computing UMAP...")
        X = self.extract_method_features()

        # compute UMAP embedding
        reducer = umap.UMAP(
            n_neighbors=8,
            min_dist=0.1,
            metric="euclidean",
            random_state=RANDOM_SEED
        )
        umap_embedding = reducer.fit_transform(X)
        self.methods["umap_x"], self.methods["umap_y"] = umap_embedding[:, 0], umap_embedding[:, 1]
        print("UMAP embedding computed.")

        method_ids = self.methods["method_number"].values

        method_sizes = self.df["method_number"].value_counts().to_dict()
        print(f"{len(method_sizes):,} unique methods in dataset.")

        fold_assignments, fold_counts = assign_methods_to_folds(
            method_sizes, n_splits,
            method_features=umap_embedding,
            method_ids=method_ids
        )

        print("\nFold balance summary:")
        for i, count in enumerate(fold_counts):
            print(f"  Fold {i+1}: {count:,} datapoints ({100*count/len(self.df):.2f}%)")

        method_to_fold = {}
        for i, methods in fold_assignments.items():
            for m in methods:
                method_to_fold[m] = i + 1

        self.df["fold"] = self.df["method_number"].map(method_to_fold)
        self.methods["fold"] = self.methods["method_number"].map(method_to_fold)

        fold_dataframes = {}
        for i in range(n_splits):
            fold_df = self.df[self.df["fold"] == i + 1].copy()
            out_file = self.output_dir / f"methods_{i+1}.csv"
            fold_df.to_csv(out_file, index=False)
            fold_dataframes[i] = fold_df
            print(f"Saved methods_{i+1}.csv ({len(fold_df):,} rows)")

        return fold_dataframes

    def visualize_rt_distributions(self, fold_dataframes):
        """
        Generates a KDE plot of the RT distribution per setup split.
        """
        print("\nPLOTTING RETENTION TIME DISTRIBUTIONS")
        fig, ax = plt.subplots(figsize=(14, 6))
        colors = sns.color_palette("husl", len(fold_dataframes))

        if "rt" in self.df.columns:
            overall_rt = self.df["rt"].dropna() / 60.0
            if len(overall_rt) > 0:
                sns.kdeplot(
                    overall_rt, ax=ax,
                    color='black', linewidth=2.5,
                    linestyle='--',
                    label=f"Overall (n={len(overall_rt):,})"
                )

        for i, fold_df in fold_dataframes.items():
            if "rt" not in fold_df.columns:
                continue
            rt_min = fold_df["rt"].dropna() / 60.0
            if len(rt_min) > 0:
                sns.kdeplot(
                    rt_min, ax=ax,
                    label=f"Setup {i+1} (n={len(rt_min):,})",
                    color=colors[i],
                    linewidth=2.5
                )

        ax.set_xlabel("Retention Time (min)", fontsize=12, fontweight='bold')
        ax.set_ylabel("Density", fontsize=12, fontweight='bold')
        ax.set_title("Retention Time Distribution Across Method Splits", fontsize=14, fontweight='bold')
        ax.legend(fontsize=10, framealpha=0.9)
        ax.grid(alpha=0.3, linestyle=':', linewidth=0.5)
        ax.set_xlim(left=0)

        fig_dir = self.output_dir / "figures"
        fig_dir.mkdir(exist_ok=True)
        plt.savefig(fig_dir / "methods_rt.png", dpi=300, bbox_inches="tight")
        plt.close()
        print("Saved RT KDE plot to figures/methods_rt.png")

    def generate_umap_plot(self):
        """
        Generates a UMAP visualization plot (embedding already computed).
        """
        print("\nGENERATING UMAP VISUALIZATION")

        if "umap_x" not in self.methods.columns or "umap_y" not in self.methods.columns:
            print("Warning: UMAP coordinates not found, computing now...")
            X = self.extract_method_features()
            reducer = umap.UMAP(
                n_neighbors=8,
                min_dist=0.1,
                metric="euclidean",
                random_state=RANDOM_SEED
            )
            embedding = reducer.fit_transform(X)
            self.methods["umap_x"], self.methods["umap_y"] = embedding[:, 0], embedding[:, 1]

        plt.figure(figsize=(10, 7))

        if "fold" in self.methods.columns:
            colors = sns.color_palette("husl", N_FOLDS)
            for fold_num in sorted(self.methods["fold"].dropna().unique()):
                fold_data = self.methods[self.methods["fold"] == fold_num]
                n_methods = len(fold_data)
                plt.scatter(
                    fold_data["umap_x"], fold_data["umap_y"],
                    label=f"Cluster {int(fold_num)} (n={n_methods} methods)",
                    s=100, alpha=0.7, edgecolor="k", linewidth=0.5,
                    color=colors[int(fold_num) - 1]
                )
            plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', fontsize=10)
            plt.title("UMAP Projection of LC-MS Method Space (Colored by Method Split)", fontsize=14, pad=15, fontweight='bold')
        else:
            sns.scatterplot(
                x="umap_x", y="umap_y", data=self.methods,
                s=70, color="steelblue", edgecolor="k"
            )
            plt.title("2D UMAP Visualization of LC-MS Method Space", fontsize=14, pad=15)

        plt.tight_layout()

        fig_dir = self.output_dir / "figures"
        fig_dir.mkdir(exist_ok=True)
        plt.savefig(fig_dir / "methods_umap.png", dpi=300, bbox_inches="tight")
        plt.close()
        print("Saved UMAP plot to figures/methods_umap.png")


# ===== Main ===== #


def main():
    """
    Main execution function.
    """
    print("\n" + "=" * 80)
    print(" " * 30 + "METHOD SPLIT PIPELINE")
    print("=" * 80)

    analyzer = MethodAnalyzer(INPUT_CSV, METHOD_CSV, OUTPUT_DIR)
    analyzer.load_data()
    analyze_dataset(analyzer.df)

    fold_dataframes = analyzer.create_method_splits(n_splits=N_FOLDS)

    analyzer.generate_umap_plot()
    analyzer.visualize_rt_distributions(fold_dataframes)

    print("\n" + "=" * 80)
    print(" " * 30 + "METHOD SPLIT COMPLETE!")
    print("=" * 80)


if __name__ == "__main__":
    main()
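The greedy fallback in `assign_methods_to_folds` above (the same strategy `scaffold_split.py` uses for scaffolds) sorts groups by size in descending order and always places the next group into the currently smallest fold. A self-contained sketch of that balancing step, using hypothetical group sizes rather than real method IDs:

```python
from collections import defaultdict

def greedy_assign(group_sizes, n_folds):
    """Greedy balancing: largest group first, always into the smallest fold."""
    fold_assignments = defaultdict(list)
    fold_counts = [0] * n_folds
    for group, size in sorted(group_sizes.items(), key=lambda x: x[1], reverse=True):
        # index of the fold with the fewest datapoints so far
        min_fold = min(range(n_folds), key=lambda i: fold_counts[i])
        fold_assignments[min_fold].append(group)
        fold_counts[min_fold] += size
    return dict(fold_assignments), fold_counts

# hypothetical group sizes (datapoints per method)
sizes = {"m1": 500, "m2": 300, "m3": 200, "m4": 100, "m5": 80, "m6": 20}
assignments, counts = greedy_assign(sizes, 3)
print(counts)
```

Because each group stays whole, no fold can be better balanced than its largest group allows; that is why the script warns when a single method dominates a cluster.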
data/scripts/scaffold_split.py
ADDED
@@ -0,0 +1,462 @@
#!/usr/bin/env python3
"""
scaffold_split.py

Author: natelgrw
Last Edited: 11/02/2025

Computes Bemis-Murcko scaffolds for the retina dataset using RDKit
and splits scaffolds into 5 distinct folds with approximately balanced
compound counts across folds. Computes UMAP, scaffold assignments, and
method length distributions for visualizing scaffold splits.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
from collections import defaultdict, Counter
import random
from tqdm import tqdm
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold
from rdkit.Chem import AllChem
import umap


# ===== Configuration ===== #


INPUT_CSV = "../retina_dataset.csv"
OUTPUT_DIR = "../scaffold_split"
N_FOLDS = 5
RANDOM_SEED = 42

random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)


# ===== Helper Functions ===== #


def get_murcko_scaffold(smiles):
    """
    Computes the Bemis-Murcko scaffold from a SMILES string.
    """
    try:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return "INVALID"
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol)
        return scaffold if scaffold else "NO_SCAFFOLD"
    except Exception:
        return "INVALID"


def analyze_dataset(df):
    """
    Prints dataset statistics.
    """
    print("=" * 70)
    print("Dataset Analysis")
    print("=" * 70)
    print(f"Total rows: {len(df):,}")
    print(f"Columns: {df.columns.tolist()}")
    print(f"\nUnique compounds: {df['compound'].nunique():,}")
    if 'rt' in df.columns:
        # rt is stored in seconds; it is converted to minutes only for plotting
        print("\nRetention time statistics:")
        print(f"Min: {df['rt'].min():.2f} s")
        print(f"Max: {df['rt'].max():.2f} s")
        print(f"Mean: {df['rt'].mean():.2f} s")
        print(f"Median: {df['rt'].median():.2f} s")
    print()


def assign_scaffolds_to_folds(scaffold_sizes, n_folds):
    """
    Assigns scaffolds to folds using a greedy algorithm to balance compound counts.
    """
    fold_assignments = defaultdict(list)
    fold_counts = [0] * n_folds

    sorted_scaffolds = sorted(scaffold_sizes.items(), key=lambda x: x[1], reverse=True)

    # greedy scaffold assignment: largest scaffold first, into the smallest fold
    for scaffold, size in sorted_scaffolds:
        min_fold = min(range(n_folds), key=lambda i: fold_counts[i])
        fold_assignments[min_fold].append(scaffold)
        fold_counts[min_fold] += size

    return fold_assignments, fold_counts


# ===== Analyzer Class ===== #


class ScaffoldAnalyzer:
    """
    Analyzer for scaffold splitting and visualization.
    """

    def __init__(self, data_path, output_dir):
        self.data_path = Path(data_path)
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True, parents=True)
        self.df = None
        self.scaffold_dict = {}
        self.scaffold_to_compounds = defaultdict(list)

    def load_data(self, sample_size=None):
        """
        Loads the retina dataset.
        """
        print("\n" + "=" * 70)
        print("LOADING DATA")
        print("=" * 70)

        if sample_size:
            print(f"Loading {sample_size:,} rows (sample)...")
            self.df = pd.read_csv(self.data_path, nrows=sample_size)
        else:
            print(f"Loading full dataset from {self.data_path}...")
            chunks = []
            chunk_size = 100000
            for chunk in tqdm(pd.read_csv(self.data_path, chunksize=chunk_size),
                              desc="Reading chunks"):
                chunks.append(chunk)
            self.df = pd.concat(chunks, ignore_index=True)

        print(f"Loaded {len(self.df):,} data points")
        print(f"Columns: {list(self.df.columns)}")
        return self.df

    def extract_scaffolds(self):
        """
        Extracts Bemis-Murcko scaffolds from SMILES.
        """
        print("\n" + "=" * 70)
        print("EXTRACTING BEMIS-MURCKO SCAFFOLDS")
        print("=" * 70)

        print("Computing scaffolds for all compounds...")
        self.df['scaffold'] = self.df['compound'].apply(get_murcko_scaffold)

        invalid_count = (self.df['scaffold'] == "INVALID").sum()
        no_scaffold_count = (self.df['scaffold'] == "NO_SCAFFOLD").sum()
        valid_count = len(self.df) - invalid_count - no_scaffold_count

        if invalid_count > 0:
            print(f"Warning: {invalid_count:,} rows have invalid SMILES")
        if no_scaffold_count > 0:
            print(f"Info: {no_scaffold_count:,} rows have no scaffold (single atoms)")

        print(f"Successfully extracted scaffolds for {valid_count:,} rows")

        scaffold_groups = self.df.groupby('scaffold')
        scaffold_sizes = scaffold_groups.size().to_dict()

        n_unique_scaffolds = len(scaffold_sizes)
        sizes_array = np.array(list(scaffold_sizes.values()))

        print("\nScaffold Statistics:")
        print(f"Unique scaffolds: {n_unique_scaffolds:,}")
        print(f"Scaffolds with 1 data point: {(sizes_array == 1).sum():,}")
        print(f"Scaffolds with >10 data points: {(sizes_array > 10).sum():,}")
        print(f"Scaffolds with >100 data points: {(sizes_array > 100).sum():,}")
        print(f"Mean data points per scaffold: {sizes_array.mean():.2f}")
        print(f"Median data points per scaffold: {np.median(sizes_array):.0f}")
        print(f"Max data points in a scaffold: {sizes_array.max():,}")

        return scaffold_sizes

    def create_scaffold_splits(self, scaffold_sizes, n_splits=5):
        """
        Creates scaffold-based cross-validation splits.

        Splits are created by grouping scaffolds to balance the number of data points
        in each fold while ensuring no scaffold appears in multiple folds.
        """
        print("\n" + "=" * 70)
        print(f"CREATING {n_splits}-FOLD SCAFFOLD SPLITS")
        print("=" * 70)

        print(f"Assigning {len(scaffold_sizes):,} scaffolds to {n_splits} folds...")
        fold_assignments, fold_counts = assign_scaffolds_to_folds(scaffold_sizes, n_splits)

        print("\nFold Statistics:")
        print("-" * 70)
        for fold_id in range(n_splits):
            scaffolds = fold_assignments[fold_id]
            count = fold_counts[fold_id]
            percentage = 100 * count / len(self.df)
            print(f"Fold {fold_id + 1}: {count:,} data points ({percentage:.2f}%) | "
                  f"{len(scaffolds):,} scaffolds")
        print("-" * 70)
        print(f"Total: {sum(fold_counts):,} data points")

        scaffold_to_fold = {}
        for fold_idx in range(n_splits):
            for scaffold in fold_assignments[fold_idx]:
                scaffold_to_fold[scaffold] = fold_idx + 1

        self.df['fold'] = self.df['scaffold'].map(scaffold_to_fold)

        print("\nSaving fold CSV files...")
        fold_dataframes = {}

        for fold_id in range(n_splits):
            fold_df = self.df[self.df['fold'] == fold_id + 1].copy()
            fold_df_output = fold_df.drop(columns=['scaffold', 'fold'])

            output_file = self.output_dir / f"fold_{fold_id + 1}.csv"
            fold_df_output.to_csv(output_file, index=False)
            fold_dataframes[fold_id] = fold_df

            print(f"Saved fold_{fold_id + 1}.csv: {len(fold_df):,} rows")

        scaffold_assignments_data = []
        for fold_id in range(n_splits):
            for scaffold in fold_assignments[fold_id]:
                scaffold_assignments_data.append({
                    'scaffold': scaffold,
                    'fold': fold_id + 1,
                    'datapoint_count': scaffold_sizes[scaffold]
                })

        scaffold_assignments_df = pd.DataFrame(scaffold_assignments_data)
        scaffold_assignments_df = scaffold_assignments_df.sort_values(
            ['fold', 'datapoint_count'], ascending=[True, False]
        )

        # ensure the figures directory exists before writing the assignments CSV
        fig_dir = self.output_dir / "figures"
        fig_dir.mkdir(exist_ok=True)
        scaffold_assignments_file = fig_dir / "scaffold_assignments.csv"
        scaffold_assignments_df.to_csv(scaffold_assignments_file, index=False)
        print("\nSaved scaffold assignments to: scaffold_assignments.csv")
        print(f"Total scaffolds: {len(scaffold_assignments_df):,}")
        print("Columns: scaffold, fold, datapoint_count")

        print("\nVerifying scaffold separation...")
        all_fold_scaffolds = [set(fold_assignments[i]) for i in range(n_splits)]
        has_overlap = False
        for i in range(n_splits):
            for j in range(i + 1, n_splits):
                overlap = all_fold_scaffolds[i] & all_fold_scaffolds[j]
                if overlap:
                    print(f"ERROR: Overlap between fold {i+1} and fold {j+1}: {len(overlap)} scaffolds")
                    has_overlap = True

        if not has_overlap:
            print("No overlap - all scaffolds are uniquely assigned")

        all_assigned = set()
        for fold_id in range(n_splits):
            all_assigned.update(fold_assignments[fold_id])

        if len(all_assigned) == len(scaffold_sizes):
            print(f"All {len(scaffold_sizes):,} scaffolds assigned to folds")
        else:
            missing = set(scaffold_sizes.keys()) - all_assigned
            print(f"WARNING: {len(missing)} scaffolds not assigned to any fold")

        return fold_assignments, scaffold_to_fold, fold_dataframes

    def visualize_rt_distributions(self, fold_dataframes):
        """
        Generates an RT (retention time) KDE plot across folds.
        """
        print("\n" + "=" * 70)
        print("GENERATING RT DISTRIBUTION VISUALIZATION")
        print("=" * 70)

        # converting RT from seconds to minutes
        valid_rt = self.df['rt'].dropna() / 60.0
        print(f"Data points with valid RT: {len(valid_rt):,}")

        print("\nRT Statistics (in minutes):")
        print(f"  Mean: {valid_rt.mean():.2f} minutes")
        print(f"  Median: {valid_rt.median():.2f} minutes")
        print(f"  Min: {valid_rt.min():.2f} minutes")
        print(f"  Max: {valid_rt.max():.2f} minutes")
        print(f"  Std: {valid_rt.std():.2f} minutes")

        fig, ax = plt.subplots(figsize=(12, 6))
+
if fold_dataframes:
|
| 285 |
+
colors = sns.color_palette("husl", len(fold_dataframes))
|
| 286 |
+
|
| 287 |
+
for fold_id in range(len(fold_dataframes)):
|
| 288 |
+
fold_df = fold_dataframes[fold_id]
|
| 289 |
+
fold_rt = fold_df['rt'].dropna() / 60.0
|
| 290 |
+
|
| 291 |
+
if len(fold_rt) > 0:
|
| 292 |
+
sns.kdeplot(data=fold_rt, ax=ax,
|
| 293 |
+
label=f'Fold {fold_id + 1} (n={len(fold_rt):,})',
|
| 294 |
+
linewidth=2.5, color=colors[fold_id])
|
| 295 |
+
|
| 296 |
+
sns.kdeplot(data=valid_rt, ax=ax,
|
| 297 |
+
label=f'Overall (n={len(valid_rt):,})',
|
| 298 |
+
linewidth=2, linestyle='--', color='black', alpha=0.7)
|
| 299 |
+
else:
|
| 300 |
+
sns.kdeplot(data=valid_rt, ax=ax, linewidth=2.5, color='blue')
|
| 301 |
+
|
| 302 |
+
ax.set_xlabel('Retention Time (min)', fontsize=12, fontweight='bold')
|
| 303 |
+
ax.set_ylabel('Density', fontsize=12, fontweight='bold')
|
| 304 |
+
ax.set_title('Retention Time Distribution Across Scaffold Splits', fontsize=14, fontweight='bold')
|
| 305 |
+
ax.set_xlim(0, 135)
|
| 306 |
+
ax.legend(loc='best', frameon=True, fancybox=True, shadow=True)
|
| 307 |
+
ax.grid(alpha=0.3)
|
| 308 |
+
|
| 309 |
+
plt.tight_layout()
|
| 310 |
+
|
| 311 |
+
# creating figures subdirectory
|
| 312 |
+
figures_dir = self.output_dir / 'figures'
|
| 313 |
+
figures_dir.mkdir(exist_ok=True)
|
| 314 |
+
|
| 315 |
+
output_file = figures_dir / 'scaffold_rt.png'
|
| 316 |
+
plt.savefig(output_file, dpi=300, bbox_inches='tight')
|
| 317 |
+
print(f"\nSaved RT distribution visualization to figures/scaffold_rt.png")
|
| 318 |
+
plt.close()
|
| 319 |
+
|
| 320 |
+
def generate_umap(self, n_samples=None):
|
| 321 |
+
"""
|
| 322 |
+
Generates a UMAP visualization of chemical space.
|
| 323 |
+
Uses Morgan fingerprints for molecular representation.
|
| 324 |
+
"""
|
| 325 |
+
print("\n" + "=" * 70)
|
| 326 |
+
print("GENERATING UMAP VISUALIZATION")
|
| 327 |
+
print("=" * 70)
|
| 328 |
+
|
| 329 |
+
if n_samples is not None and len(self.df) > n_samples:
|
| 330 |
+
print(f"Sampling {n_samples:,} datapoints for UMAP...")
|
| 331 |
+
df_sample = self.df.sample(n=n_samples, random_state=42)
|
| 332 |
+
else:
|
| 333 |
+
print(f"Using all {len(self.df):,} datapoints for UMAP...")
|
| 334 |
+
df_sample = self.df
|
| 335 |
+
|
| 336 |
+
print(f"Generating Morgan fingerprints for {len(df_sample):,} datapoints...")
|
| 337 |
+
|
| 338 |
+
fps = []
|
| 339 |
+
valid_indices = []
|
| 340 |
+
|
| 341 |
+
for idx, smiles in tqdm(enumerate(df_sample['compound']),
|
| 342 |
+
total=len(df_sample),
|
| 343 |
+
desc="Generating fingerprints"):
|
| 344 |
+
try:
|
| 345 |
+
mol = Chem.MolFromSmiles(smiles)
|
| 346 |
+
if mol is not None:
|
| 347 |
+
fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
|
| 348 |
+
fps.append(fp)
|
| 349 |
+
valid_indices.append(idx)
|
| 350 |
+
except:
|
| 351 |
+
pass
|
| 352 |
+
|
| 353 |
+
print(f"Generated {len(fps):,} valid fingerprints")
|
| 354 |
+
|
| 355 |
+
fp_array = np.array([list(fp) for fp in fps])
|
| 356 |
+
df_valid = df_sample.iloc[valid_indices].reset_index(drop=True)
|
| 357 |
+
|
| 358 |
+
print(f"\nFitting UMAP (this may take a while)...")
|
| 359 |
+
print(f"n_samples: {len(fp_array):,}")
|
| 360 |
+
print(f"n_features: {fp_array.shape[1]}")
|
| 361 |
+
|
| 362 |
+
reducer = umap.UMAP(
|
| 363 |
+
n_neighbors=15,
|
| 364 |
+
min_dist=0.1,
|
| 365 |
+
n_components=2,
|
| 366 |
+
metric='jaccard',
|
| 367 |
+
random_state=42,
|
| 368 |
+
verbose=True
|
| 369 |
+
)
|
| 370 |
+
|
| 371 |
+
embedding = reducer.fit_transform(fp_array)
|
| 372 |
+
print(f"UMAP embedding complete")
|
| 373 |
+
|
| 374 |
+
# create umap visualization
|
| 375 |
+
print("\nGenerating UMAP visualization...")
|
| 376 |
+
fig, ax = plt.subplots(figsize=(14, 10))
|
| 377 |
+
|
| 378 |
+
if 'fold' in df_valid.columns:
|
| 379 |
+
colors = sns.color_palette("husl", df_valid['fold'].nunique())
|
| 380 |
+
|
| 381 |
+
for fold_id in sorted(df_valid['fold'].unique()):
|
| 382 |
+
mask = df_valid['fold'] == fold_id
|
| 383 |
+
fold_data = embedding[mask]
|
| 384 |
+
n_compounds = mask.sum()
|
| 385 |
+
|
| 386 |
+
ax.scatter(fold_data[:, 0], fold_data[:, 1],
|
| 387 |
+
label=f'Fold {int(fold_id)} (n={n_compounds:,})',
|
| 388 |
+
alpha=0.6, s=20, c=[colors[int(fold_id)-1]])
|
| 389 |
+
else:
|
| 390 |
+
ax.scatter(embedding[:, 0], embedding[:, 1],
|
| 391 |
+
alpha=0.6, s=20, color='blue')
|
| 392 |
+
|
| 393 |
+
ax.set_title('UMAP Projection of Compound Space (Colored by Scaffold Split)',
|
| 394 |
+
fontsize=14, fontweight='bold')
|
| 395 |
+
ax.legend(loc='best', frameon=True, fancybox=True, shadow=True, fontsize=10)
|
| 396 |
+
ax.grid(alpha=0.3)
|
| 397 |
+
|
| 398 |
+
plt.tight_layout()
|
| 399 |
+
|
| 400 |
+
# saving results to figures subdirectory
|
| 401 |
+
figures_dir = self.output_dir / 'figures'
|
| 402 |
+
figures_dir.mkdir(exist_ok=True)
|
| 403 |
+
|
| 404 |
+
output_file = figures_dir / 'scaffold_umap.png'
|
| 405 |
+
plt.savefig(output_file, dpi=300, bbox_inches='tight')
|
| 406 |
+
print(f"Saved UMAP visualization to figures/scaffold_umap.png")
|
| 407 |
+
plt.close()
|
| 408 |
+
|
| 409 |
+
return embedding, df_valid
|
| 410 |
+
|
| 411 |
+
|
| 412 |
+
# ===== Main ===== #
|
| 413 |
+
|
| 414 |
+
|
| 415 |
+
def main():
|
| 416 |
+
"""
|
| 417 |
+
Main execution function.
|
| 418 |
+
"""
|
| 419 |
+
print("\n" + "=" * 80)
|
| 420 |
+
print(" " * 20 + "BEMIS-MURCKO SCAFFOLD SPLIT")
|
| 421 |
+
print(" " * 25 + "Retina Dataset Analysis")
|
| 422 |
+
print("=" * 80)
|
| 423 |
+
|
| 424 |
+
script_dir = Path(__file__).parent
|
| 425 |
+
data_path = script_dir / INPUT_CSV
|
| 426 |
+
output_dir = script_dir / OUTPUT_DIR
|
| 427 |
+
|
| 428 |
+
print(f"\nConfiguration:")
|
| 429 |
+
print(f"Script location: {script_dir}")
|
| 430 |
+
print(f"Data path: {data_path}")
|
| 431 |
+
print(f"Output directory: {output_dir}")
|
| 432 |
+
print(f"Number of folds: {N_FOLDS}")
|
| 433 |
+
print(f"Random seed: {RANDOM_SEED}")
|
| 434 |
+
|
| 435 |
+
if not data_path.exists():
|
| 436 |
+
raise FileNotFoundError(f"Input file not found: {data_path}")
|
| 437 |
+
|
| 438 |
+
analyzer = ScaffoldAnalyzer(data_path, output_dir)
|
| 439 |
+
|
| 440 |
+
print("\nLoading dataset...")
|
| 441 |
+
analyzer.load_data(sample_size=None)
|
| 442 |
+
|
| 443 |
+
analyze_dataset(analyzer.df)
|
| 444 |
+
|
| 445 |
+
scaffold_sizes = analyzer.extract_scaffolds()
|
| 446 |
+
|
| 447 |
+
fold_assignments, scaffold_to_fold, fold_dataframes = analyzer.create_scaffold_splits(
|
| 448 |
+
scaffold_sizes, n_splits=N_FOLDS
|
| 449 |
+
)
|
| 450 |
+
|
| 451 |
+
analyzer.visualize_rt_distributions(fold_dataframes)
|
| 452 |
+
|
| 453 |
+
analyzer.generate_umap(n_samples=None) # Use all datapoints
|
| 454 |
+
|
| 455 |
+
print("\n" + "=" * 80)
|
| 456 |
+
print(" " * 25 + "5-FOLD SCAFFOLD SPLIT COMPLETE!")
|
| 457 |
+
print("=" * 80)
|
| 458 |
+
|
| 459 |
+
|
| 460 |
+
if __name__ == "__main__":
|
| 461 |
+
main()
|
| 462 |
+
|
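The script above groups data points by Bemis-Murcko scaffold so that no scaffold spans two folds. A minimal sketch of how such scaffolds are typically derived with RDKit — the helper name and example SMILES are illustrative, not the script's actual `extract_scaffolds` implementation:

```python
from collections import defaultdict
from typing import Optional

from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold


def murcko_scaffold(smiles: str) -> Optional[str]:
    """Return the canonical Bemis-Murcko scaffold SMILES, or None if unparsable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return MurckoScaffold.MurckoScaffoldSmiles(mol=mol)


# Group compounds by scaffold; a scaffold split assigns whole groups to folds,
# so a scaffold never appears in more than one fold.
smiles_list = ["c1ccccc1CCO", "c1ccccc1CC(=O)O", "CCO"]
groups = defaultdict(list)
for smi in smiles_list:
    scaffold = murcko_scaffold(smi)
    if scaffold is not None:
        groups[scaffold].append(smi)

# Both aromatic compounds share the benzene scaffold "c1ccccc1";
# acyclic ethanol maps to the empty scaffold "".
```

Note that acyclic molecules all collapse to the empty scaffold, so they end up in a single (often large) group under this scheme.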
data/solvents/README.md CHANGED

```diff
@@ -1,6 +1,6 @@
 # 🥽 ReTiNA Table of Solvent Descriptors
 
-The ReTiNA dataset is accompanied with 160 descriptors for each solvent, capturing …
+The ReTiNA dataset is accompanied by 157 descriptors for each solvent, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
 
 ## Topological Descriptors
 
@@ -9,16 +9,13 @@
 | BalabanJ | Quantifies molecular complexity based on average distance connectivity and graph branching | RDKit |
 | BertzCT | Calculates molecular complexity based on graph connectivity and atomic contributions | RDKit |
 | Chi (0-1), Chi_n (0-4), Chi_v (0-4) | Connectivity indices reflecting molecular topology, branching, and size | RDKit |
-| Ipc | Information content index representing structural complexity | RDKit |
 | Kappa (1-3) | Shape indices describing molecular flexibility and overall geometry | RDKit |
 
 ## Electronic Descriptors
 
 | Descriptor | Summary | Software Used |
 |------------|---------|---------------|
-| MaxAbsPartialCharge | Maximum absolute atomic partial charge | RDKit |
 | MaxEStateIndex | Maximum E-state value in the molecule | RDKit |
-| MaxPartialCharge | Highest partial charge in the molecule | RDKit |
 | NumValenceElectrons | Total number of valence electrons in the molecule | RDKit |
 | NumRadicalElectrons | Total number of unpaired electrons (radicals) | RDKit |
 | HallKierAlpha | Atom-type electrotopological descriptor modeling polarity and hybridization | RDKit |
```
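Descriptors like those listed in the solvent README can be computed with RDKit's `Descriptors` module. A minimal sketch — the descriptor subset and solvent SMILES here are illustrative, not the exact 157-descriptor set shipped in `solv_descriptors.csv`:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# A few of the descriptors named in the README, as callables on an RDKit Mol.
DESCRIPTOR_FNS = {
    "BalabanJ": Descriptors.BalabanJ,
    "BertzCT": Descriptors.BertzCT,
    "MaxEStateIndex": Descriptors.MaxEStateIndex,
    "NumValenceElectrons": Descriptors.NumValenceElectrons,
    "HallKierAlpha": Descriptors.HallKierAlpha,
}

# Two common LC-MS solvents (illustrative SMILES).
solvents = {"methanol": "CO", "acetonitrile": "CC#N"}

rows = {}
for name, smi in solvents.items():
    mol = Chem.MolFromSmiles(smi)
    rows[name] = {d: fn(mol) for d, fn in DESCRIPTOR_FNS.items()}

# e.g. methanol has 14 valence electrons (C: 4, O: 6, 4 x H: 4).
```

In practice, the full RDKit descriptor list can be enumerated via `Descriptors.descList` and filtered down to whichever subset a table like this documents.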
data/solvents/solv.smi CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5f451c636378e59fdb245e7f16fab89a55abc927fbb285a5dda79df0a1058167
+size 61
```

data/solvents/solv_descriptors.csv CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:eefce3ea7ad3e74e4ecd496fe0dad84e534dd4c8255c7880283f5815e2fbbe28
+size 18660
```