natelgrw committed
Commit 155c363 · 1 Parent(s): bb9b15d

Version 1.0.0

Files changed (34)
  1. .DS_Store +0 -0
  2. .gitattributes +1 -0
  3. README.md +10 -4
  4. data/{testing/amax_testing.csv → cluster_split/cluster_1.csv} +2 -2
  5. data/{training/amax_training.csv → cluster_split/cluster_2.csv} +2 -2
  6. data/{validation/amax_validation.csv → cluster_split/cluster_3.csv} +2 -2
  7. data/cluster_split/cluster_4.csv +3 -0
  8. data/cluster_split/cluster_5.csv +3 -0
  9. data/cluster_split/figures/cluster_assignments.csv +3 -0
  10. data/cluster_split/figures/cluster_lmax.png +3 -0
  11. data/cluster_split/figures/cluster_umap.png +3 -0
  12. data/compounds/README.md +1 -4
  13. data/compounds/comp_descriptors.csv +2 -2
  14. data/scaffold_split/figures/scaffold_assignments.csv +3 -0
  15. data/scaffold_split/figures/scaffold_lmax.png +3 -0
  16. data/scaffold_split/figures/scaffold_umap.png +3 -0
  17. data/scaffold_split/fold_1.csv +3 -0
  18. data/scaffold_split/fold_2.csv +3 -0
  19. data/scaffold_split/fold_3.csv +3 -0
  20. data/scaffold_split/fold_4.csv +3 -0
  21. data/scaffold_split/fold_5.csv +3 -0
  22. data/scripts/cluster_split.py +282 -0
  23. data/scripts/scaffold_split.py +352 -0
  24. data/scripts/solvent_split.py +323 -0
  25. data/solvent_split/figures/solvent_distribution.csv +3 -0
  26. data/solvent_split/figures/solvent_lmax.png +3 -0
  27. data/solvent_split/figures/solvent_umap.png +3 -0
  28. data/solvent_split/solvents_1.csv +3 -0
  29. data/solvent_split/solvents_2.csv +3 -0
  30. data/solvent_split/solvents_3.csv +3 -0
  31. data/solvent_split/solvents_4.csv +3 -0
  32. data/solvent_split/solvents_5.csv +3 -0
  33. data/solvents/README.md +1 -4
  34. data/solvents/solv_descriptors.csv +2 -2
.DS_Store ADDED
Binary file (6.15 kB)
 
.gitattributes CHANGED
@@ -1,2 +1,3 @@
  *.csv filter=lfs diff=lfs merge=lfs -text
  *.smi filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -3,9 +3,15 @@ license: mit
  ---

  ## 〰️ AMAX: A Benchmark Dataset for UV-Vis Lambda Max Prediction in LC-MS
-
+
  AMAX is an open source dataset designed to assist machine learning models in small molecule UV-Vis absorption maxima (λ<sub>max</sub>) prediction and LC-MS compound characterization workflows.

+ Current Version: **1.0.0**
+
+ Models trained on the AMAX dataset are available at this [Hugging Face Repository](https://huggingface.co/natelgrw/AMAX-Models).
+
+ Source code for the AMAX model collection is available at this [GitHub Repository](https://github.com/natelgrw/amax_models).
+
  This dataset is actively expanding with new experimental retention time values from the Coley Research Group at MIT, ensuring it remains a growing resource for optical property prediction.

  AMAX is designed for use in:
@@ -20,9 +26,9 @@ The AMAX dataset contains:

  - 40,013 unique molecule–environment combinations, the largest single LC-MS retention time dataset of its kind to date
  - Experimentally measured λ<sub>max</sub> values in nm, curated from public datasets, benchmark papers, and literature
- - 160 calculated chemical descriptors for 22,415 unique compounds and 356 unique solvents
+ - 157 calculated chemical descriptors for 22,415 unique compounds and 356 unique solvents

- Additionally, the AMAX dataset is divided into training, validation, and testing subsets in an 80:10:10 split.
+ Additionally, the AMAX dataset is divided into scaffold, cluster, and solvent splits for model evaluation.

  ## 📋 Data Sources Used

@@ -33,7 +39,7 @@ Detailed information on the data sources comprising AMAX-1 can be found in the d
  If you use this code in a project, please cite the following:

  ```
- @dataset{natelgrwamax1dataset,
+ @dataset{natelgrwamaxdataset,
    title={AMAX: A Benchmark Dataset for UV-Vis Lambda Max Prediction in LC-MS},
    author={Leung, Nathan},
    institution={Coley Research Group @ MIT}
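The split files this commit introduces are ordinary CSVs once pulled through Git LFS. A minimal sketch, assuming pandas and the `cluster_1.csv`–`cluster_5.csv` naming from this commit (the helper name and local directory are hypothetical), of rotating the five cluster files through leave-one-cluster-out evaluation:

```python
import pandas as pd
from pathlib import Path


def iter_cluster_folds(split_dir, n_folds=5):
    """Yield (held_out, train_df, test_df), holding out one cluster file at a time.

    Assumes files named cluster_1.csv ... cluster_{n_folds}.csv, as in this commit.
    """
    frames = {
        i: pd.read_csv(Path(split_dir) / f"cluster_{i}.csv")
        for i in range(1, n_folds + 1)
    }
    for held_out in range(1, n_folds + 1):
        test_df = frames[held_out]
        # train on the concatenation of the remaining clusters
        train_df = pd.concat(
            [frames[i] for i in range(1, n_folds + 1) if i != held_out],
            ignore_index=True,
        )
        yield held_out, train_df, test_df
```

The same pattern applies to `scaffold_split/fold_*.csv` and `solvent_split/solvents_*.csv`.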
data/{testing/amax_testing.csv → cluster_split/cluster_1.csv} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:77e5dc8879575ef4c1707149fc3c9309ee9216c95c3423e362dfd13cd3e76810
- size 353520
+ oid sha256:b2358204b90177b151f65f96cd0bd21f12acf457693c93c5583976d890e6b991
+ size 524145
data/{training/amax_training.csv → cluster_split/cluster_2.csv} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:eff44a87e6557a5455a96e970fbf3d0df8b6adeb1dfc15839577a0a4ab156957
- size 2840619
+ oid sha256:3090a2022f3202178720fe0c04293dbdfef15a097bc619ee0f764455ec82be0c
+ size 1416404
data/{validation/amax_validation.csv → cluster_split/cluster_3.csv} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f0b237eb2b0d6b8f8f39942edbea9c47070612b22fce29949acbb3775b724673
- size 353594
+ oid sha256:8862bbc7bcdffdd4d2a1c9f02cd1c27a48ddb6745511ae0b8929dd25e5995e00
+ size 426057
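Each of the renamed CSVs above is stored as a Git LFS pointer: three lines carrying `version`, `oid`, and `size` fields, exactly as shown in the hunks. A small standalone sketch (not part of the commit) that parses pointer text into a dict:

```python
def parse_lfs_pointer(text):
    """Parse Git LFS pointer text into its key/value fields.

    Expected shape (as in the hunks above):
        version https://git-lfs.github.com/spec/v1
        oid sha256:<64 hex chars>
        size <bytes>
    """
    fields = {}
    for line in text.strip().splitlines():
        # each line is "<key> <value>"
        key, _, value = line.strip().partition(" ")
        fields[key] = value
    if not fields.get("version", "").startswith("https://git-lfs.github.com/spec/"):
        raise ValueError("not a Git LFS pointer")
    fields["size"] = int(fields["size"])  # size is the byte count of the real file
    return fields
```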
data/cluster_split/cluster_4.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72b45200a0a587a062199acaac70ac9ed2761aa00de2e0e745e8d85962a7e922
+ size 713086
data/cluster_split/cluster_5.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a1af58c8250506df19e5dc059a403586a43868b4c9560cbbe50d2ed251c829d
+ size 468111
data/cluster_split/figures/cluster_assignments.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a60d8b5bd43e8da0c24aba8a65e4eacf672bfd58e222028ae25b4c3c982b35c
+ size 3701129
data/cluster_split/figures/cluster_lmax.png ADDED

Git LFS Details

  • SHA256: 88ab74e7d1555af91acca03a0354fcd58b6d587833e547dd5408b2803b00126e
  • Pointer size: 131 Bytes
  • Size of remote file: 457 kB
data/cluster_split/figures/cluster_umap.png ADDED

Git LFS Details

  • SHA256: 8f1fc7dd891f49d4f81d55300e2fbebc579c247e650bf6393927cd162fdf7874
  • Pointer size: 131 Bytes
  • Size of remote file: 882 kB
data/compounds/README.md CHANGED
@@ -1,6 +1,6 @@
  # 🔬 AMAX Compound Descriptors

- The AMAX dataset is accompanied by 160 descriptors for each compound, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
+ The AMAX dataset is accompanied by 157 descriptors for each compound, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.

  ## Topological Descriptors

@@ -9,16 +9,13 @@
  | BalabanJ | Quantifies molecular complexity based on average distance connectivity and graph branching | RDKit |
  | BertzCT | Calculates molecular complexity based on graph connectivity and atomic contributions | RDKit |
  | Chi (0-1), Chi_n (0-4), Chi_v (0-4) | Connectivity indices reflecting molecular topology, branching, and size | RDKit |
- | Ipc | Information content index representing structural complexity | RDKit |
  | Kappa (1-3) | Shape indices describing molecular flexibility and overall geometry | RDKit |

  ## Electronic Descriptors

  | Descriptor | Summary | Software Used |
  |------------|---------|---------------|
- | MaxAbsPartialCharge | Maximum absolute atomic partial charge | RDKit |
  | MaxEStateIndex | Maximum E-state value in the molecule | RDKit |
- | MaxPartialCharge | Highest partial charge in the molecule | RDKit |
  | NumValenceElectrons | Total number of valence electrons in the molecule | RDKit |
  | NumRadicalElectrons | Total number of unpaired electrons (radicals) | RDKit |
  | HallKierAlpha | Atom-type electrotopological descriptor modeling polarity and hybridization | RDKit |
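This hunk drops three descriptors (Ipc, MaxAbsPartialCharge, MaxPartialCharge), which is what takes the per-compound count from 160 to 157. A sketch of the corresponding column pruning on a descriptor table — the function name and toy DataFrame below are hypothetical, not part of the commit:

```python
import pandas as pd

# descriptor columns removed in this commit
DROPPED = ["Ipc", "MaxAbsPartialCharge", "MaxPartialCharge"]


def prune_descriptors(df):
    """Drop the descriptor columns removed in this commit, if present."""
    return df.drop(columns=[c for c in DROPPED if c in df.columns])
```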
data/compounds/comp_descriptors.csv CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:522aefb71e6a19f242966624bc0d9cb08f1acfc96b4ee0d5426a9914e06606c2
- size 75216122
+ oid sha256:6fc2cdc42d04799170971745c51284edadced7b9d2703bffe2f4b0756a76b9d2
+ size 73813553
data/scaffold_split/figures/scaffold_assignments.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5feb7d7bb9744f2d80eeef01b7dabd7ad12e88d0683457e57a5a291fe4945bc0
+ size 639302
data/scaffold_split/figures/scaffold_lmax.png ADDED

Git LFS Details

  • SHA256: a4685342509b4e1f10fb791d78d378b415957752c295857efa366e6cdda1adaf
  • Pointer size: 131 Bytes
  • Size of remote file: 412 kB
data/scaffold_split/figures/scaffold_umap.png ADDED

Git LFS Details

  • SHA256: 5a7e206c83805cfba22ee8b24dba6c327ff250cd8be32cc646bd59a8188b199e
  • Pointer size: 132 Bytes
  • Size of remote file: 1.69 MB
data/scaffold_split/fold_1.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:556dcfd1ae38d7b5ded4c694ae3e3b3d0feb6e07d5f4c0824da16593d3781c84
+ size 710149
data/scaffold_split/fold_2.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32ca3b32332d9fd18d927c82e39240890cad9cbcbea5930175936a210fb280f4
+ size 703499
data/scaffold_split/fold_3.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef25141b3003c1673298618f07e02266c9ae6cb9cfbf949e19ef7a84e9373404
+ size 703073
data/scaffold_split/fold_4.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f62efd58d6ad42c4379faf264a169ca5d648b6bdaa69a0224607553ba0b27791
+ size 734476
data/scaffold_split/fold_5.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7df68f7b5a9609115b1a9e06f4f98715c370abda583688ada03b145af5ab171f
+ size 696606
data/scripts/cluster_split.py ADDED
@@ -0,0 +1,282 @@
+ #!/usr/bin/env python3
+ """
+ cluster_split.py
+
+ Author: natelgrw
+ Last Edited: 11/07/2025
+
+ Performs spatial KMeans cluster splitting for the AMAX dataset directly on
+ 2D UMAP coordinates. This ensures visual consistency between the UMAP
+ visualization and the actual fold assignments, and creates realistic chemical
+ neighborhoods for evaluation.
+ """
+
+ import os
+ import random
+ import numpy as np
+ import pandas as pd
+ from collections import defaultdict
+ from rdkit import Chem
+ from rdkit.Chem import AllChem
+ from sklearn.cluster import KMeans
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+ import umap
+
+
+ # ===== Configuration ===== #
+
+
+ INPUT_CSV = "../amax_dataset.csv"
+ OUTPUT_DIR = "../cluster_split"
+ N_REGIONS = 5
+ RANDOM_SEED = 42
+
+ random.seed(RANDOM_SEED)
+ np.random.seed(RANDOM_SEED)
+
+
+ # ===== Helper Functions ===== #
+
+
+ def compute_fingerprints(df, smiles_column="compound"):
+     """
+     Compute Morgan fingerprints for all SMILES.
+     Returns array of fingerprints and list of valid indices.
+     """
+     fps, valid_idx = [], []
+     print("Computing Morgan fingerprints (radius=2, nBits=2048)...")
+     for i, smi in enumerate(df[smiles_column]):
+         mol = Chem.MolFromSmiles(smi)
+         if mol is None:
+             continue
+         fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
+         fps.append(fp)
+         valid_idx.append(i)
+     fps_array = np.array([list(fp) for fp in fps], dtype=np.float32)
+     print(f"Valid compounds: {len(fps_array):,}")
+     return fps_array, valid_idx
+
+
+ def compute_umap_embedding(fps_array):
+     """
+     Compute 2D UMAP embedding of fingerprints using Jaccard metric.
+     Returns 2D coordinates for spatial clustering.
+     """
+     print("\nComputing 2D UMAP embedding (Jaccard/Tanimoto metric)...")
+     fps_bin = (fps_array > 0).astype(float)
+     reducer = umap.UMAP(
+         n_neighbors=25,
+         min_dist=0.1,
+         metric="jaccard",
+         random_state=RANDOM_SEED,
+     )
+     emb = reducer.fit_transform(fps_bin)
+     print(f"UMAP embedding computed: {emb.shape}")
+     return emb
+
+
+ def spatial_cluster_split(df, umap_coords, valid_indices, n_regions):
+     """
+     Performs spatial KMeans clustering directly on UMAP 2D coordinates.
+     Each KMeans cluster becomes one fold - simple 1:1 mapping.
+     Creates visually consistent and chemically meaningful regions.
+     """
+     print("=" * 65)
+     print("Performing Spatial KMeans Clustering on 2D UMAP Coordinates")
+     print("=" * 65)
+
+     print(f"\nRunning KMeans with k={n_regions} on 2D UMAP coordinates...")
+     km = KMeans(n_clusters=n_regions, random_state=RANDOM_SEED, n_init=20)
+     labels = km.fit_predict(umap_coords)
+
+     print("\nCluster centroids in 2D UMAP space:")
+     for i, centroid in enumerate(km.cluster_centers_):
+         print(f"Region {i+1}: ({centroid[0]:.2f}, {centroid[1]:.2f})")
+
+     folds = defaultdict(list)
+     for local_idx, global_idx in enumerate(valid_indices):
+         region = labels[local_idx]
+         folds[region].append(global_idx)
+
+     total = len(df)
+     print("\nRegion Summary:")
+     for r in sorted(folds.keys()):
+         n = len(folds[r])
+         p = 100 * n / total
+         print(f"Region {r+1}: {n:,} compounds ({p:.2f}%)")
+
+     return folds, labels
+
+
+ def save_cluster_assignments(df, folds, cluster_labels, umap_coords, valid_idx, output_dir):
+     """
+     Saves cluster assignments with UMAP coordinates to CSV.
+     Format: compound,cluster,fold,umap_x,umap_y
+     """
+     print("\nSaving cluster assignments...")
+     fig_dir = os.path.join(output_dir, "figures")
+     os.makedirs(fig_dir, exist_ok=True)
+
+     assignments = []
+     for fold_id, indices in folds.items():
+         for global_idx in indices:
+             if global_idx in valid_idx:
+                 local_idx = valid_idx.index(global_idx)
+                 compound_smiles = df.loc[global_idx, 'compound']
+                 cluster = int(cluster_labels[local_idx])
+                 fold = fold_id + 1
+                 umap_x = umap_coords[local_idx, 0]
+                 umap_y = umap_coords[local_idx, 1]
+
+                 assignments.append({
+                     'compound': compound_smiles,
+                     'cluster': cluster,
+                     'fold': fold,
+                     'umap_x': umap_x,
+                     'umap_y': umap_y
+                 })
+
+     assignments_df = pd.DataFrame(assignments)
+     output_file = os.path.join(fig_dir, "cluster_assignments.csv")
+     assignments_df.to_csv(output_file, index=False)
+     print(f"Saved figures/cluster_assignments.csv: {len(assignments_df):,} entries")
+
+
+ def visualize_lambda_max(df, folds, output_dir):
+     """
+     Plots λmax distributions across regions.
+     """
+     if "lambda_max" not in df.columns:
+         return
+     print("\nGenerating λmax distribution plot...")
+     os.makedirs(output_dir, exist_ok=True)
+
+     fig_dir = os.path.join(output_dir, "figures")
+     os.makedirs(fig_dir, exist_ok=True)
+
+     plt.figure(figsize=(12, 6))
+     colors = sns.color_palette("husl", len(folds))
+
+     for i, (r, idxs) in enumerate(sorted(folds.items())):
+         region_df = df.loc[idxs]
+         sns.kdeplot(
+             data=region_df,
+             x="lambda_max",
+             label=f"Cluster {r+1} (n={len(region_df):,})",
+             linewidth=2.2,
+             color=colors[i],
+         )
+
+     sns.kdeplot(
+         data=df,
+         x="lambda_max",
+         label=f"Overall (n={len(df):,})",
+         linewidth=2.0,
+         linestyle="--",
+         color="black",
+         alpha=0.7,
+     )
+
+     plt.title("Lambda Max Distribution Across Cluster Splits", fontsize=14, fontweight="bold")
+     plt.xlabel("λmax (nm)", fontsize=12, fontweight="bold")
+     plt.ylabel("Density", fontsize=12, fontweight="bold")
+     plt.legend(frameon=True, shadow=True)
+     plt.tight_layout()
+     plt.savefig(os.path.join(fig_dir, "cluster_lmax.png"), dpi=300)
+     plt.close()
+     print("Saved figures/cluster_lmax.png")
+
+
+ def visualize_umap(umap_coords, valid_idx, folds, output_dir):
+     """
+     Generates 2D UMAP of chemical space colored by region.
+     """
+     print("\nGenerating UMAP visualization...")
+     os.makedirs(output_dir, exist_ok=True)
+
+     colors = sns.color_palette("husl", len(folds))
+     plt.figure(figsize=(12, 10))
+     for i, (r, idxs) in enumerate(sorted(folds.items())):
+         local_idx = [j for j, g in enumerate(valid_idx) if g in idxs]
+         label = f"Cluster {r+1} (n={len(local_idx):,})"
+         plt.scatter(
+             umap_coords[local_idx, 0],
+             umap_coords[local_idx, 1],
+             s=10,
+             alpha=0.6,
+             label=label,
+             color=colors[i],
+         )
+
+     plt.title(
+         "UMAP Projection of Compound Space (Colored by Cluster Split)",
+         fontsize=14,
+         fontweight="bold",
+     )
+
+     plt.legend(markerscale=2, frameon=True, loc='best')
+     plt.tight_layout()
+     os.makedirs(os.path.join(output_dir, "figures"), exist_ok=True)
+     plt.savefig(os.path.join(output_dir, "figures/cluster_umap.png"), dpi=300)
+     plt.close()
+     print("Saved figures/cluster_umap.png")
+
+
+ # ===== Main ===== #
+
+
+ def main():
+     """
+     Main function to perform spatial cluster splitting.
+     """
+     print("=" * 65)
+     print("SPATIAL CLUSTER SPLITTING PIPELINE")
+     print("=" * 65)
+     print("Configuration:")
+     print(f"- Number of regions: {N_REGIONS}")
+     print("- Clustering dimension: 2D (UMAP coordinates)")
+     print(f"- Random seed: {RANDOM_SEED}")
+     print(f"- Input: {INPUT_CSV}")
+     print(f"- Output: {OUTPUT_DIR}")
+     print()
+
+     print("Loading dataset...")
+     df = pd.read_csv(INPUT_CSV)
+     print(f"Loaded {len(df):,} rows.")
+
+     fps_array, valid_idx = compute_fingerprints(df)
+
+     umap_coords = compute_umap_embedding(fps_array)
+
+     folds, cluster_labels = spatial_cluster_split(
+         df, umap_coords, valid_idx, N_REGIONS
+     )
+
+     print("\nSaving cluster CSV files...")
+     os.makedirs(OUTPUT_DIR, exist_ok=True)
+     for r, idxs in folds.items():
+         output_path = os.path.join(OUTPUT_DIR, f"cluster_{r+1}.csv")
+         df.loc[idxs].to_csv(output_path, index=False)
+         print(f"Saved cluster_{r+1}.csv")
+
+     save_cluster_assignments(df, folds, cluster_labels, umap_coords, valid_idx, OUTPUT_DIR)
+
+     visualize_lambda_max(df, folds, OUTPUT_DIR)
+     visualize_umap(umap_coords, valid_idx, folds, OUTPUT_DIR)
+
+     print("\n" + "=" * 65)
+     print("CLUSTERING COMPLETE!")
+     print("=" * 65)
+     print(f"Output directory: {OUTPUT_DIR}")
+     print(f"- {len(folds)} region CSV files")
+     print("- figures/cluster_assignments.csv")
+     print("- figures/cluster_lmax.png")
+     print("- figures/cluster_umap.png")
+     print()
+     print("Note: Clustering performed in 2D UMAP space for visual consistency")
+
+
+ if __name__ == "__main__":
+     main()
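The heart of `cluster_split.py` is KMeans run directly on the 2D UMAP coordinates, with each resulting cluster becoming one fold. For intuition, here is a self-contained Lloyd's-algorithm sketch on toy 2D points, using pure NumPy with a naive first-k-points initialization (the script itself uses `sklearn.cluster.KMeans`, which adds smarter initialization and multiple restarts):

```python
import numpy as np


def kmeans_2d(coords, k, n_iter=50):
    """Plain Lloyd's algorithm on 2D coordinates (illustrative only).

    Naive init: the first k points serve as starting centroids.
    """
    centers = coords[:k].astype(float)
    labels = np.zeros(len(coords), dtype=int)
    for _ in range(n_iter):
        # assign every point to its nearest centroid
        dists = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new_centers = np.array([
            coords[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # converged
        centers = new_centers
    return labels, centers
```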
data/scripts/scaffold_split.py ADDED
@@ -0,0 +1,352 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ scaffold_split.py
4
+
5
+ Author: natelgrw
6
+ Last Edited: 11/01/2025
7
+
8
+ Computes Bemis-Murcko scaffolds for the AMAX dataset using RDKit
9
+ and splits scaffolds into 5 distinct folds with approximately balanced
10
+ compound counts across folds. Computes UMAP, scaffold assignments, and
11
+ lambda max distributions for visualizing scaffold splits.
12
+ """
13
+
14
+ import pandas as pd
15
+ import numpy as np
16
+ from rdkit import Chem
17
+ from rdkit.Chem.Scaffolds import MurckoScaffold
18
+ from rdkit.Chem import AllChem
19
+ import random
20
+ import os
21
+ from collections import defaultdict
22
+ import matplotlib.pyplot as plt
23
+ import seaborn as sns
24
+ import umap
25
+
26
+
27
+ # ===== Configuration ===== #
28
+
29
+
30
+ INPUT_CSV = "../amax_dataset.csv"
31
+ OUTPUT_DIR = "../scaffold_split"
32
+ N_FOLDS = 5
33
+ RANDOM_SEED = 42
34
+
35
+ random.seed(RANDOM_SEED)
36
+ np.random.seed(RANDOM_SEED)
37
+
38
+
39
+ # ===== Helper Functions ===== #
40
+
41
+
42
+ def get_murcko_scaffold(smiles):
43
+ """
44
+ Compute Bemis–Murcko scaffold from SMILES string.
45
+
46
+ Returns:
47
+ str: Scaffold SMILES string, or "INVALID" if molecule is invalid,
48
+ or "NO_SCAFFOLD" if scaffold cannot be computed
49
+ """
50
+ try:
51
+ mol = Chem.MolFromSmiles(smiles)
52
+ if mol is None:
53
+ return "INVALID"
54
+ scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol)
55
+ return scaffold if scaffold else "NO_SCAFFOLD"
56
+ except Exception as e:
57
+ print(f"Warning: Error processing SMILES '{smiles}': {e}")
58
+ return "INVALID"
59
+
60
+
61
+ def analyze_dataset(df):
62
+ """
63
+ Print dataset statistics.
64
+ """
65
+ print("=" * 60)
66
+ print("Dataset Analysis")
67
+ print("=" * 60)
68
+ print(f"Total rows: {len(df):,}")
69
+ print(f"Columns: {df.columns.tolist()}")
70
+ print(f"\nUnique compounds: {df['compound'].nunique():,}")
71
+ if 'solvent' in df.columns:
72
+ print(f"Unique solvents: {df['solvent'].nunique():,}")
73
+ if 'lambda_max' in df.columns:
74
+ print(f"\nLambda_max statistics:")
75
+ print(f" Min: {df['lambda_max'].min():.2f}")
76
+ print(f" Max: {df['lambda_max'].max():.2f}")
77
+ print(f" Mean: {df['lambda_max'].mean():.2f}")
78
+ print(f" Median: {df['lambda_max'].median():.2f}")
79
+ print()
80
+
81
+
82
+ def assign_scaffolds_to_folds(scaffold_sizes, n_folds, total_rows):
83
+ """
84
+ Assign scaffolds to folds using a greedy algorithm to balance compound counts.
85
+
86
+ Args:
87
+ scaffold_sizes: dict mapping scaffold SMILES to number of compounds
88
+ n_folds: number of folds
89
+ total_rows: total number of rows in dataset
90
+
91
+ Returns:
92
+ dict mapping fold_id (0 to n_folds-1) to list of scaffold SMILES
93
+ """
94
+ fold_assignments = defaultdict(list)
95
+ fold_counts = [0] * n_folds
96
+
97
+ sorted_scaffolds = sorted(scaffold_sizes.items(), key=lambda x: x[1], reverse=True)
98
+
99
+ # greedy scaffold assignment
100
+ for scaffold, size in sorted_scaffolds:
101
+ min_fold = min(range(n_folds), key=lambda i: fold_counts[i])
102
+ fold_assignments[min_fold].append(scaffold)
103
+ fold_counts[min_fold] += size
104
+
105
+ return fold_assignments, fold_counts
106
+
107
+
108
+ def create_visualizations(df, scaffold_sizes, fold_assignments, fold_counts, fold_dataframes, output_dir_path):
109
+ """
110
+ Create visualizations for scaffold split analysis.
111
+
112
+ Generates:
113
+ 1. Lambda_max distribution across folds (KDE plot)
114
+ 2. UMAP 2D visualization of scaffold assignments
115
+ """
116
+ print("\nGenerating visualizations...")
117
+
118
+ sns.set_style("whitegrid")
119
+ plt.rcParams['figure.dpi'] = 100
120
+ plt.rcParams['savefig.dpi'] = 300
121
+
122
+ # create figures directory
123
+ fig_dir = os.path.join(output_dir_path, "figures")
124
+ os.makedirs(fig_dir, exist_ok=True)
125
+
126
+ colors = sns.color_palette("husl", len(fold_counts))
127
+
128
+ # lambda max distribution across folds
129
+ if 'lambda_max' in df.columns:
130
+ print("Creating lambda_max distribution plot...")
131
+ fig, ax = plt.subplots(figsize=(12, 6))
132
+
133
+ for fold_id in range(len(fold_dataframes)):
134
+ fold_df = fold_dataframes[fold_id]
135
+ fold_label = f"Fold {fold_id + 1} (n={len(fold_df):,})"
136
+ sns.kdeplot(data=fold_df, x='lambda_max', label=fold_label,
137
+ ax=ax, linewidth=2.5)
138
+
139
+ sns.kdeplot(data=df, x='lambda_max', label=f'Overall (n={len(df):,})',
140
+ ax=ax, linewidth=2, linestyle='--', color='black', alpha=0.7)
141
+
142
+ ax.set_xlabel('Lambda Max (nm)', fontsize=12, fontweight='bold')
143
+ ax.set_ylabel('Density', fontsize=12, fontweight='bold')
144
+ ax.set_title('Lambda Max Distribution Across Scaffold Splits', fontsize=14, fontweight='bold')
145
+ ax.legend(loc='best', frameon=True, fancybox=True, shadow=True)
146
+ ax.grid(alpha=0.3)
147
+
148
+ plt.tight_layout()
149
+ plt.savefig(os.path.join(fig_dir, 'scaffold_lmax.png'), bbox_inches='tight')
150
+ print(f"Saved: figures/scaffold_lmax.png")
151
+ plt.close()
152
+
153
+ # umap visualization
154
+ print("\nComputing UMAP embedding (this may take a few minutes)...")
155
+
156
+ scaffold_to_fold = {}
157
+ for fold_id in range(len(fold_assignments)):
158
+ for scaffold in fold_assignments[fold_id]:
159
+ scaffold_to_fold[scaffold] = fold_id
160
+
161
+ df_with_fold = df.copy()
162
+ df_with_fold['fold'] = df_with_fold['scaffold'].map(scaffold_to_fold)
163
+
164
+ valid_mask = (~df_with_fold['scaffold'].isin(['INVALID', 'NO_SCAFFOLD'])) & (df_with_fold['fold'].notna())
165
+ compounds_for_umap = df_with_fold[valid_mask].copy()
166
+
167
+ print(f"Computing fingerprints for {len(compounds_for_umap):,} data points...")
168
+
169
+ unique_compounds = compounds_for_umap['compound'].unique()
170
+ print(f" ({len(unique_compounds):,} unique compounds)")
171
+
172
+ compound_to_fp = {}
173
+ for smiles in unique_compounds:
174
+ try:
175
+ mol = Chem.MolFromSmiles(smiles)
176
+ if mol is not None:
177
+ fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
178
+ compound_to_fp[smiles] = fp.ToBitString()
179
+ except Exception:
180
+ continue
181
+
182
+ fps = []
183
+ valid_indices = []
184
+ for idx, row in compounds_for_umap.iterrows():
185
+ smiles = row['compound']
186
+ if smiles in compound_to_fp:
187
+ fps.append(compound_to_fp[smiles])
188
+ valid_indices.append(idx)
189
+
190
+ if len(fps) < 100:
191
+ print("Warning: Too few valid compounds for UMAP. Skipping UMAP visualization.")
192
+ else:
193
+ fps_array = np.array([[int(bit) for bit in fp] for fp in fps])
194
+
195
+ print(f"Fitting UMAP (n={len(fps_array):,} data points, dim={fps_array.shape[1]})...")
196
+
197
+ reducer = umap.UMAP(n_components=2, random_state=RANDOM_SEED,
198
+ n_neighbors=15, min_dist=0.1, metric='jaccard', verbose=False)
199
+ embedding = reducer.fit_transform(fps_array)
200
+
201
+ valid_compounds_df = compounds_for_umap.loc[valid_indices].copy()
202
+ valid_compounds_df['umap_x'] = embedding[:, 0]
203
+ valid_compounds_df['umap_y'] = embedding[:, 1]
204
+
205
+ fig, ax = plt.subplots(figsize=(14, 10))
206
+
207
+ for fold_id in range(len(fold_assignments)):
208
+ fold_data = valid_compounds_df[valid_compounds_df['fold'] == fold_id]
209
+ if len(fold_data) > 0:
210
+ ax.scatter(fold_data['umap_x'], fold_data['umap_y'],
211
+                       label=f'Fold {fold_id + 1} (n={len(fold_data):,})',
+                       alpha=0.6, s=20, c=[colors[fold_id]])
+
+    ax.set_title('UMAP Projection of All Data Points (Colored by Scaffold Split)',
+                 fontsize=14, fontweight='bold')
+    ax.legend(loc='best', frameon=True, fancybox=True, shadow=True, fontsize=10)
+    ax.grid(alpha=0.3)
+
+    plt.tight_layout()
+    plt.savefig(os.path.join(fig_dir, 'scaffold_umap.png'), bbox_inches='tight')
+    print("Saved: figures/scaffold_umap.png")
+    plt.close()
+
+    print(f"\nAll visualizations saved to: {os.path.join(output_dir_path, 'figures')}")
+
+
+# ===== Main ===== #
+
+def main():
+    """
+    Main function to perform the scaffold splitting pipeline.
+    """
+    print("Loading dataset...")
+    input_path = os.path.join(os.path.dirname(__file__), INPUT_CSV)
+    if not os.path.exists(input_path):
+        raise FileNotFoundError(f"Input file not found: {input_path}")
+
+    df = pd.read_csv(input_path)
+
+    if 'compound' not in df.columns:
+        raise ValueError("Dataset must contain 'compound' column")
+
+    analyze_dataset(df)
+
+    print("Computing Bemis-Murcko scaffolds...")
+    df['scaffold'] = df['compound'].apply(get_murcko_scaffold)
+
+    invalid_count = (df['scaffold'] == "INVALID").sum()
+    no_scaffold_count = (df['scaffold'] == "NO_SCAFFOLD").sum()
+
+    if invalid_count > 0:
+        print(f"Warning: {invalid_count:,} compounds have invalid SMILES")
+    if no_scaffold_count > 0:
+        print(f"Info: {no_scaffold_count:,} compounds have no scaffold (single atoms)")
+
+    scaffold_groups = df.groupby('scaffold')
+    scaffold_sizes = scaffold_groups.size().to_dict()
+
+    print("\nScaffold Statistics:")
+    print(f"Unique scaffolds: {len(scaffold_sizes):,}")
+    print(f"Scaffolds with 1 compound: {(np.array(list(scaffold_sizes.values())) == 1).sum():,}")
+    print(f"Scaffolds with >10 compounds: {(np.array(list(scaffold_sizes.values())) > 10).sum():,}")
+    print(f"Scaffolds with >100 compounds: {(np.array(list(scaffold_sizes.values())) > 100).sum():,}")
+
+    print(f"\nAssigning scaffolds to {N_FOLDS} folds...")
+    fold_assignments, fold_counts = assign_scaffolds_to_folds(
+        scaffold_sizes, N_FOLDS, len(df)
+    )
+
+    print("\nFold Statistics:")
+    print("-" * 60)
+    for fold_id in range(N_FOLDS):
+        scaffolds = fold_assignments[fold_id]
+        count = fold_counts[fold_id]
+        percentage = 100 * count / len(df)
+        print(f"Fold {fold_id + 1}: {count:,} compounds ({percentage:.2f}%) | "
+              f"{len(scaffolds):,} scaffolds")
+    print("-" * 60)
+    print(f"Total: {sum(fold_counts):,} compounds")
+
+    output_dir_path = os.path.join(os.path.dirname(__file__), OUTPUT_DIR)
+    os.makedirs(output_dir_path, exist_ok=True)
+
+    # saving data
+    print(f"\nSaving folds to '{OUTPUT_DIR}' directory...")
+    fold_dataframes = {}
+
+    for fold_id in range(N_FOLDS):
+        scaffolds_in_fold = set(fold_assignments[fold_id])
+        fold_mask = df['scaffold'].isin(scaffolds_in_fold)
+        fold_df = df[fold_mask].copy()
+
+        fold_df_output = fold_df.drop(columns=['scaffold'])
+
+        output_file = os.path.join(output_dir_path, f"fold_{fold_id + 1}.csv")
+        fold_df_output.to_csv(output_file, index=False)
+        fold_dataframes[fold_id] = fold_df
+
+        print(f"Saved fold_{fold_id + 1}.csv: {len(fold_df):,} rows")
+
+    scaffold_assignments_data = []
+    for fold_id in range(N_FOLDS):
+        for scaffold in fold_assignments[fold_id]:
+            scaffold_assignments_data.append({
+                'scaffold': scaffold,
+                'fold': fold_id + 1,
+                'compound_count': scaffold_sizes[scaffold]
+            })
+
+    scaffold_assignments_df = pd.DataFrame(scaffold_assignments_data)
+    scaffold_assignments_df = scaffold_assignments_df.sort_values(['fold', 'compound_count'],
+                                                                  ascending=[True, False])
+
+    # write the assignments before announcing them, so the message is accurate
+    scaffold_assignments_file = os.path.join(output_dir_path, "scaffold_assignments.csv")
+    scaffold_assignments_df.to_csv(scaffold_assignments_file, index=False)
+
+    print("\nSaved scaffold assignments to: scaffold_assignments.csv")
+    print(f"Total scaffolds: {len(scaffold_assignments_df):,}")
+    print("Columns: scaffold, fold, compound_count")
+
+    # create visualizations
+    create_visualizations(df, scaffold_sizes, fold_assignments, fold_counts,
+                          fold_dataframes, output_dir_path)
+
+    print("\nVerifying scaffold separation...")
+    all_fold_scaffolds = [set(fold_assignments[i]) for i in range(N_FOLDS)]
+    for i in range(N_FOLDS):
+        for j in range(i + 1, N_FOLDS):
+            overlap = all_fold_scaffolds[i] & all_fold_scaffolds[j]
+            if overlap:
+                print(f"ERROR: Overlap between fold {i+1} and fold {j+1}: {len(overlap)} scaffolds")
+            else:
+                print(f"No overlap between fold {i+1} and fold {j+1}")
+
+    all_assigned = set()
+    for fold_id in range(N_FOLDS):
+        all_assigned.update(fold_assignments[fold_id])
+
+    if len(all_assigned) == len(scaffold_sizes):
+        print(f"All {len(scaffold_sizes):,} scaffolds assigned to folds")
+    else:
+        missing = set(scaffold_sizes.keys()) - all_assigned
+        print(f"WARNING: {len(missing)} scaffolds not assigned to any fold")
+
+    print("\n" + "=" * 60)
+    print("5-fold scaffold split completed successfully!")
+    print("=" * 60)
+
+
+if __name__ == "__main__":
+    main()
+
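The body of `assign_scaffolds_to_folds` falls outside this hunk. As an editorial aid, here is a minimal, self-contained sketch of one plausible greedy strategy (largest scaffold groups placed first, each into the currently smallest fold, so no scaffold ever spans two folds). The simplified two-argument signature and the toy SMILES below are illustrative assumptions, not the script's actual implementation:

```python
def assign_scaffolds_to_folds(scaffold_sizes, n_folds):
    """Greedy bin packing: largest scaffold groups first, each into the emptiest fold."""
    fold_assignments = {i: [] for i in range(n_folds)}
    fold_counts = [0] * n_folds
    # Sort descending by group size so the big groups anchor the folds early
    for scaffold, size in sorted(scaffold_sizes.items(), key=lambda kv: -kv[1]):
        target = min(range(n_folds), key=lambda i: fold_counts[i])
        fold_assignments[target].append(scaffold)
        fold_counts[target] += size
    return fold_assignments, fold_counts

# Toy scaffold -> compound-count map (illustrative values, not from the dataset)
sizes = {"c1ccccc1": 120, "c1ccncc1": 60, "C1CCCCC1": 50,
         "c1ccsc1": 40, "NO_SCAFFOLD": 30, "c1ccoc1": 10}
folds, counts = assign_scaffolds_to_folds(sizes, 3)
# counts -> [120, 100, 90]: roughly balanced, with whole scaffold groups kept intact
```

Because assignment is by whole scaffold group, the fold sizes can only approximate an even split; the verification loop in `main()` then confirms the zero-overlap invariant.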
data/scripts/solvent_split.py ADDED
@@ -0,0 +1,323 @@
+#!/usr/bin/env python3
+"""
+solvent_split.py
+
+Author: natelgrw
+Last Edited: 11/05/2025
+
+Performs spatial KMeans cluster splitting on AMAX solvent chemical space.
+Clusters solvents into 5 groups by similarity using UMAP + KMeans, then assigns
+compounds to folds based on their solvent's cluster membership.
+
+This ensures compounds are split by solvent similarity rather than
+individual solvent identity.
+"""
+
+import pandas as pd
+import numpy as np
+from rdkit import Chem
+from rdkit.Chem import AllChem
+import random
+import os
+from collections import defaultdict
+import matplotlib.pyplot as plt
+import seaborn as sns
+import umap
+from sklearn.cluster import KMeans
+
+
+# ===== Configuration ===== #
+
+
+INPUT_CSV = "../amax_dataset.csv"
+OUTPUT_DIR = "../solvent_split"
+N_SOLVENT_CLUSTERS = 5
+RANDOM_SEED = 42
+
+random.seed(RANDOM_SEED)
+np.random.seed(RANDOM_SEED)
+
+
+# ===== Helper Functions ===== #
+
+
+def compute_solvent_fingerprints(unique_solvents):
+    """
+    Compute Morgan fingerprints for unique solvents.
+    """
+    fps = []
+    valid_solvents = []
+
+    for solvent in unique_solvents:
+        mol = Chem.MolFromSmiles(solvent)
+        if mol is None:
+            # MolFromSmiles returns None (it does not raise) for unparseable SMILES,
+            # so the warning belongs on this branch rather than in an except block
+            print(f"Warning: Could not process solvent {solvent}")
+            continue
+        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
+        fps.append(fp)
+        valid_solvents.append(solvent)
+
+    fps_array = np.array([list(fp) for fp in fps], dtype=np.float32)
+    solvent_to_idx = {solv: idx for idx, solv in enumerate(valid_solvents)}
+
+    print(f"Valid solvents: {len(fps_array)}")
+    print(f"Fingerprint dimension: {fps_array.shape[1]}")
+
+    return fps_array, valid_solvents, solvent_to_idx
+
+
+def compute_solvent_umap(solvent_fps):
+    """
+    Compute 2D UMAP embedding of solvent fingerprints using Jaccard metric.
+    Returns 2D coordinates for spatial clustering.
+    """
+    print("\nComputing 2D UMAP embedding of solvent space...")
+    fps_bin = (solvent_fps > 0).astype(float)
+    reducer = umap.UMAP(
+        n_neighbors=min(15, len(solvent_fps) - 1),
+        min_dist=0.1,
+        metric="jaccard",
+        random_state=RANDOM_SEED,
+    )
+    emb = reducer.fit_transform(fps_bin)
+    print(f"UMAP embedding computed: {emb.shape}")
+    return emb
+
+
+def cluster_solvents(solvent_umap_coords, valid_solvents, n_clusters):
+    """
+    Performs spatial KMeans clustering directly on solvent UMAP 2D coordinates.
+    """
+    print("\n" + "=" * 65)
+    print("Performing Spatial KMeans Clustering on Solvent UMAP Coordinates")
+    print("=" * 65)
+
+    print(f"\nRunning KMeans with k={n_clusters} on solvent 2D UMAP coordinates...")
+    km = KMeans(n_clusters=n_clusters, random_state=RANDOM_SEED, n_init=20)
+    labels = km.fit_predict(solvent_umap_coords)
+
+    print("\nSolvent cluster centroids:")
+    for i, centroid in enumerate(km.cluster_centers_):
+        print(f"  Cluster {i+1}: ({centroid[0]:.2f}, {centroid[1]:.2f})")
+
+    solvent_to_cluster = {solvent: int(labels[idx])
+                          for idx, solvent in enumerate(valid_solvents)}
+
+    print("\nSolvent cluster assignments:")
+    clusters = defaultdict(list)
+    for solvent, cluster in solvent_to_cluster.items():
+        clusters[cluster].append(solvent)
+
+    for cluster_id in sorted(clusters.keys()):
+        solvents_in_cluster = clusters[cluster_id]
+        print(f"Cluster {cluster_id+1}: {len(solvents_in_cluster)} solvents")
+        for solv in sorted(solvents_in_cluster):
+            print(f"- {solv}")
+
+    return solvent_to_cluster, labels, km
+
+
+def create_visualizations(df, cluster_folds, solvent_umap_coords, valid_solvents,
+                          solvent_cluster_labels, solvent_to_cluster, output_dir_path):
+    """
+    Create visualizations for solvent cluster split analysis.
+    """
+    print("\n" + "=" * 65)
+    print("Generating Visualizations")
+    print("=" * 65)
+
+    sns.set_style("whitegrid")
+    plt.rcParams['figure.dpi'] = 100
+    plt.rcParams['savefig.dpi'] = 300
+
+    fig_dir = os.path.join(output_dir_path, "figures")
+    os.makedirs(fig_dir, exist_ok=True)
+
+    n_clusters = len(cluster_folds)
+    colors = sns.color_palette("husl", n_clusters)
+
+    if 'lambda_max' in df.columns:
+        print("\n1. Creating lambda_max distribution plot...")
+        fig, ax = plt.subplots(figsize=(12, 6))
+
+        for cluster_id in sorted(cluster_folds.keys()):
+            group_df = cluster_folds[cluster_id]
+            fold_label = f"Cluster {cluster_id+1} (n={len(group_df):,})"
+            sns.kdeplot(data=group_df, x='lambda_max', label=fold_label,
+                        ax=ax, linewidth=2.5, color=colors[cluster_id])
+
+        sns.kdeplot(data=df, x='lambda_max', label=f'Overall (n={len(df):,})',
+                    ax=ax, linewidth=2, linestyle='--', color='black', alpha=0.7)
+
+        ax.set_xlabel('λmax (nm)', fontsize=12, fontweight='bold')
+        ax.set_ylabel('Density', fontsize=12, fontweight='bold')
+        ax.set_title('Lambda Max Distribution Across Solvent Splits',
+                     fontsize=14, fontweight='bold')
+        ax.legend(loc='best', frameon=True, shadow=True)
+        ax.grid(alpha=0.3)
+
+        plt.tight_layout()
+        plt.savefig(os.path.join(fig_dir, 'solvent_lmax.png'), bbox_inches='tight')
+        print("Saved: figures/solvent_lmax.png")
+        plt.close()
+
+    print("\n2. Creating solvent space UMAP visualization...")
+
+    solvent_counts = df['solvent'].value_counts().to_dict()
+
+    fig, ax = plt.subplots(figsize=(14, 10))
+
+    for cluster_id in range(n_clusters):
+        cluster_solvents = [solv for solv, cid in solvent_to_cluster.items() if cid == cluster_id]
+
+        cluster_indices = [valid_solvents.index(solv) for solv in cluster_solvents if solv in valid_solvents]
+
+        if len(cluster_indices) > 0:
+            cluster_coords = solvent_umap_coords[cluster_indices]
+
+            sizes = [np.log10(solvent_counts.get(valid_solvents[idx], 1) + 1) * 50 for idx in cluster_indices]
+
+            ax.scatter(cluster_coords[:, 0], cluster_coords[:, 1],
+                       label=f'Cluster {cluster_id+1} ({len(cluster_solvents)} solvents)',
+                       alpha=0.7, s=sizes, color=colors[cluster_id], edgecolors='black', linewidth=0.5)
+
+    ax.set_title('UMAP Projection of Chemical Solvent Space (Colored by Solvent Split)',
+                 fontsize=14, fontweight='bold')
+    ax.legend(loc='best', frameon=True, shadow=True, fontsize=10)
+    ax.grid(alpha=0.3)
+
+    plt.tight_layout()
+    plt.savefig(os.path.join(fig_dir, 'solvent_umap.png'), bbox_inches='tight')
+    print("Saved: figures/solvent_umap.png")
+    plt.close()
+
+    print(f"\nAll visualizations saved to: {os.path.join(output_dir_path, 'figures')}")
+
+
+# ===== Main ===== #
+
+
+def main():
+    """
+    Main function to perform spatial solvent cluster splitting.
+    """
+    print("=" * 65)
+    print("SPATIAL SOLVENT CLUSTER SPLITTING PIPELINE")
+    print("=" * 65)
+    print("Configuration:")
+    print(f"- Number of solvent clusters: {N_SOLVENT_CLUSTERS}")
+    print(f"- Random seed: {RANDOM_SEED}")
+    print(f"- Input: {INPUT_CSV}")
+    print(f"- Output: {OUTPUT_DIR}")
+    print()
+
+    print("Step 1: Loading dataset...")
+    input_path = os.path.join(os.path.dirname(__file__), INPUT_CSV)
+    if not os.path.exists(input_path):
+        raise FileNotFoundError(f"Input file not found: {input_path}")
+
+    df = pd.read_csv(input_path)
+
+    if 'solvent' not in df.columns:
+        raise ValueError("Dataset must contain 'solvent' column")
+
+    print(f"Total compounds: {len(df):,}")
+    print(f"Columns: {df.columns.tolist()}")
+
+    print("\nStep 2: Analyzing solvent distribution...")
+    solvent_counts = df['solvent'].value_counts()
+    print(f"Unique solvents: {len(solvent_counts):,}")
+
+    distribution_data = []
+    for solvent, count in solvent_counts.items():
+        distribution_data.append({
+            'solvent': solvent,
+            'count': int(count),
+            'percentage': 100 * count / len(df)
+        })
+
+    distribution_df = pd.DataFrame(distribution_data)
+    distribution_df = distribution_df.sort_values('count', ascending=False)
+
+    print("\nTop 10 solvents by occurrence:")
+    for idx, row in distribution_df.head(10).iterrows():
+        print(f"{row['solvent']}: {row['count']:,} ({row['percentage']:.2f}%)")
+
+    unique_solvents = df['solvent'].unique().tolist()
+    solvent_fps, valid_solvents, solvent_to_idx = compute_solvent_fingerprints(unique_solvents)
+
+    print("\nStep 3: Computing UMAP on solvent space...")
+    solvent_umap_coords = compute_solvent_umap(solvent_fps)
+
+    print("\nStep 4: Clustering solvents...")
+    solvent_to_cluster, solvent_cluster_labels, km = cluster_solvents(
+        solvent_umap_coords, valid_solvents, N_SOLVENT_CLUSTERS
+    )
+
+    print("\nStep 5: Assigning compounds to solvent clusters...")
+    cluster_folds = defaultdict(list)
+
+    for idx, row in df.iterrows():
+        solvent = row['solvent']
+        if solvent in solvent_to_cluster:
+            cluster_id = solvent_to_cluster[solvent]
+            cluster_folds[cluster_id].append(idx)
+        else:
+            print(f"Warning: Solvent '{solvent}' not found in valid solvents")
+
+    cluster_dataframes = {}
+    for cluster_id, indices in cluster_folds.items():
+        cluster_dataframes[cluster_id] = df.loc[indices].copy()
+
+    print("\nCluster summary:")
+    for cluster_id in sorted(cluster_dataframes.keys()):
+        n = len(cluster_dataframes[cluster_id])
+        p = 100 * n / len(df)
+        print(f"Cluster {cluster_id+1}: {n:,} compounds ({p:.2f}%)")
+
+    output_dir_path = os.path.join(os.path.dirname(__file__), OUTPUT_DIR)
+    os.makedirs(output_dir_path, exist_ok=True)
+
+    print(f"\nStep 6: Saving solvent cluster CSV files to '{OUTPUT_DIR}'...")
+    for cluster_id in sorted(cluster_dataframes.keys()):
+        output_file = os.path.join(output_dir_path, f"solvents_{cluster_id+1}.csv")
+        cluster_dataframes[cluster_id].to_csv(output_file, index=False)
+        print(f"Saved solvents_{cluster_id+1}.csv: {len(cluster_dataframes[cluster_id]):,} compounds")
+
+    fig_dir = os.path.join(output_dir_path, "figures")
+    os.makedirs(fig_dir, exist_ok=True)
+
+    print("\nStep 7: Creating enhanced solvent_distribution.csv...")
+
+    distribution_df['solvent_cluster'] = distribution_df['solvent'].map(
+        lambda s: solvent_to_cluster.get(s, -1)
+    )
+    distribution_df['solvent_cluster'] = distribution_df['solvent_cluster'].apply(
+        lambda c: f"Cluster {c+1}" if c >= 0 else "Unknown"
+    )
+
+    distribution_file = os.path.join(fig_dir, "solvent_distribution.csv")
+    distribution_df.to_csv(distribution_file, index=False)
+    print(f"Saved figures/solvent_distribution.csv: {len(distribution_df):,} entries")
+
+    print("\nStep 8: Creating visualizations...")
+    create_visualizations(
+        df, cluster_dataframes, solvent_umap_coords, valid_solvents,
+        solvent_cluster_labels, solvent_to_cluster, output_dir_path
+    )
+
+    print("\n" + "=" * 65)
+    print("SOLVENT CLUSTERING COMPLETE!")
+    print("=" * 65)
+    print(f"Output directory: {OUTPUT_DIR}")
+    print(f"- {len(cluster_dataframes)} solvent cluster CSV files (solvents_1.csv, solvents_2.csv, etc.)")
+    print("- figures/solvent_distribution.csv")
+    print("- figures/solvent_lmax.png")
+    print("- figures/solvent_umap.png (shows solvent chemical space)")
+
+
+if __name__ == "__main__":
+    main()
+
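A quick sanity check on the resulting split is the same invariant the scaffold script verifies: no solvent may appear in more than one fold. A minimal, self-contained sketch of that check on toy data (the fold contents and SMILES below are made up for illustration; in practice each list would come from the `solvent` column of the corresponding `solvents_N.csv`):

```python
# Toy fold -> solvent-column mapping (illustrative, not dataset values)
folds = {
    1: ["O", "O", "CS(C)=O"],      # water (x2), DMSO
    2: ["CCO", "CO"],              # ethanol, methanol
    3: ["ClCCl", "ClC(Cl)Cl"],     # DCM, chloroform
}

# Deduplicate per fold, then check every pair of folds for shared solvents
solvent_sets = {fid: set(rows) for fid, rows in folds.items()}
fids = sorted(solvent_sets)
leaks = []
for i, a in enumerate(fids):
    for b in fids[i + 1:]:
        overlap = solvent_sets[a] & solvent_sets[b]
        if overlap:
            leaks.append((a, b, overlap))

assert not leaks, f"solvent leakage between folds: {leaks}"
```

An empty `leaks` list means the split is leakage-free at the solvent level, which is exactly what clustering on solvent identity (rather than per-row random splitting) is meant to guarantee.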
data/solvent_split/figures/solvent_distribution.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76f1c7b81cc8720e74b26da703574450cef3fb64b7d284083b72c25addbde4fb
+ size 16437
data/solvent_split/figures/solvent_lmax.png ADDED
Git LFS Details
  • SHA256: ee9b12253f5293f7386fb252cadde8c88591b7cf55a4fa11bab071b1e1e321a6
  • Pointer size: 131 Bytes
  • Size of remote file: 451 kB
data/solvent_split/figures/solvent_umap.png ADDED
Git LFS Details
  • SHA256: 0d9757fd2d72950f9e672cbfcf4b4c311184af2ee2d6e6fc1453d5a174edc760
  • Pointer size: 131 Bytes
  • Size of remote file: 534 kB
data/solvent_split/solvents_1.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:afa17c4ebc2a258d65fcca5a7485e013fa72b0d15e94565e514857033d1a7e60
+ size 485646
data/solvent_split/solvents_2.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b03b5a006549f4ad4b5a55f9c7e9d0189af9fee72ddf76fb5d8eba1294570e48
+ size 310445
data/solvent_split/solvents_3.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:309b4c25ed19a12316c664c65d5fc70efe32b410d6c3784ae9a7388956fc8e4b
+ size 591502
data/solvent_split/solvents_4.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:038b583299b1a6093672fdf84746d561cc1320be60e6a3fcf667a6f04d9cbe3a
+ size 610925
data/solvent_split/solvents_5.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a885b49ba635c241b2275bcf73e2b84ec87c68051288d788011a3bdec71e991
+ size 1549285
data/solvents/README.md CHANGED
@@ -1,6 +1,6 @@
  # 🧪 AMAX Solvent Descriptors
 
- The ReTiNA dataset is accompanied with 160 descriptors for each solvent, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
+ The ReTiNA dataset is accompanied by 157 descriptors for each solvent, capturing detailed structural, electronic, and topological features for model training. Descriptors were computed using RDKit.
 
  ## Topological Descriptors
 
@@ -9,16 +9,13 @@
  | BalabanJ | Quantifies molecular complexity based on average distance connectivity and graph branching | RDKit |
  | BertzCT | Calculates molecular complexity based on graph connectivity and atomic contributions | RDKit |
  | Chi (0-1), Chi_n (0-4), Chi_v (0-4) | Connectivity indices reflecting molecular topology, branching, and size | RDKit |
- | Ipc | Information content index representing structural complexity | RDKit |
  | Kappa (1-3) | Shape indices describing molecular flexibility and overall geometry | RDKit |
 
  ## Electronic Descriptors
 
  | Descriptor | Summary | Software Used |
  |------------|---------|---------------|
- | MaxAbsPartialCharge | Maximum absolute atomic partial charge | RDKit |
  | MaxEStateIndex | Maximum E-state value in the molecule | RDKit |
- | MaxPartialCharge | Highest partial charge in the molecule | RDKit |
  | NumValenceElectrons | Total number of valence electrons in the molecule | RDKit |
  | NumRadicalElectrons | Total number of unpaired electrons (radicals) | RDKit |
  | HallKierAlpha | Atom-type electrotopological descriptor modeling polarity and hybridization | RDKit |
data/solvents/solv_descriptors.csv CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:039a6b443b641b5b7874ac425922a548a49a597ed4f0630089cabf186b1922e3
- size 1091020
+ oid sha256:e1b723c797883beef1b2d1735547dfede8363984a93824aa1ef6713569816756
+ size 1066550