Upload 18 files
- README.md +67 -5
- data/QML9.parquet +3 -0
- data/test_split.parquet +3 -0
- data/train_split.parquet +3 -0
- data/validation_split.parquet +3 -0
- src/00_download_data.sh +5 -0
- src/01_batch_data.py +20 -0
- src/02_process_batch.py +88 -0
- src/03_run_batches.sh +91 -0
- src/04_merge_data.py +28 -0
- src/05_sanitize_data.py +21 -0
- src/06_calculate_ecfp.py +29 -0
- src/07_calculate_properties.py +56 -0
- src/08_organize_columns.py +20 -0
- src/09_unpack_ecfp4.py +38 -0
- src/10_finalize_ml_data.py +15 -0
- src/11_split_ml_data.py +19 -0
- src/12_train_randomforest.py +42 -0
README.md
CHANGED
@@ -1,5 +1,67 @@
# QM9 Molecular Data Preprocessing and ML Pipeline

This repository contains a series of scripts for processing the QM9 dataset, computing molecular descriptors (e.g., ECFP4), cleaning and organizing molecular data, and preparing it for machine learning (ML) tasks.

This pipeline transforms raw `.xyz` molecular files into a structured, ML-ready format by:

1. Extracting properties
2. Calculating ECFP4 fingerprints
3. Cleaning and filtering problematic entries
4. Organizing columns and unpacking features
5. Finalizing the dataset for ML
6. Splitting it into training, validation, and test sets
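
Step 1 has to cope with one QM9 quirk: property fields occasionally use Mathematica-style exponents (e.g. `2.1997*^-6`), which Python's `float()` rejects. The parsing step normalizes these before conversion; the core idea, extracted as a minimal sketch:

```python
def safe_float(value: str) -> float:
    # Normalize Mathematica-style "*^" exponents to standard "e" notation,
    # then convert: "2.1997*^-6" -> "2.1997e-6" -> 2.1997e-06
    return float(value.replace("*^-", "e-").replace("*^", "e"))
```

Values already in plain or standard scientific notation pass through unchanged.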

# Scripts

| File | Description |
|------|-------------|
| `00_download_data.sh` | Downloads the QM9 archive, unzips it, and moves the `.xyz` files into a directory called `input_data`. |
| `01_batch_data.py` | Creates many batch files, each listing a portion of the data as file paths. Batching helps parallelize the next step. |
| `02_process_batch.py` | Reads and parses `.xyz` files, saving processed molecular data (atoms, coordinates, properties) in `.parquet` format. |
| `03_run_batches.sh` | A shell script to process batches of `.xyz` files using SLURM or local runs. Calls `02_process_batch.py` over all batch files. |
| `04_merge_data.py` | Merges all batch-level `.parquet` outputs into a single unified dataset. |
| `05_sanitize_data.py` | Standardizes SMILES strings and removes molecules that fail sanitization. |
| `06_calculate_ecfp.py` | Computes ECFP4 molecular fingerprints using RDKit. |
| `07_calculate_properties.py` | Calculates and appends molecular properties to the dataset. |
| `08_organize_columns.py` | Reorganizes column order, drops unnecessary columns, and prepares a clean structure. |
| `09_unpack_ecfp4.py` | Unpacks the ECFP4 bit vectors into individual binary columns for ML input. |
| `10_finalize_ml_data.py` | Final checks and formatting to ensure the data is ready for ML training. |
| `11_split_ml_data.py` | Splits the final dataset into train/validation/test sets using fixed proportions (e.g., 80/10/10). |
| `12_train_randomforest.py` | Trains a random forest model on the resulting splits. |
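
A fixed-proportion 80/10/10 split like the one `11_split_ml_data.py` performs can be sketched as follows (the seed, column names, and helper name here are illustrative, not taken from the script):

```python
import numpy as np
import pandas as pd

def split_80_10_10(df: pd.DataFrame, seed: int = 0):
    """Shuffle rows, then slice into 80/10/10 train/validation/test frames."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(df))          # random row order, reproducible via seed
    n_train = int(0.8 * len(df))
    n_val = int(0.1 * len(df))
    train = df.iloc[idx[:n_train]]
    val = df.iloc[idx[n_train:n_train + n_val]]
    test = df.iloc[idx[n_train + n_val:]]   # remainder goes to test
    return train, val, test

demo = pd.DataFrame({"ID": range(100), "MolWt": np.linspace(16.0, 130.0, 100)})
train, val, test = split_80_10_10(demo)
```

Shuffling before slicing matters because the merged data is grouped by batch; a plain head/tail slice would leak that ordering into the splits.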

---

# Output

- Clean `.parquet` file containing ECFP4 fingerprints and molecular properties
- Split files for ML training: `train_split.parquet`, `validation_split.parquet`, `test_split.parquet`

### Output format

| Property Group (Column Range) | Columns |
|-------------------------------|---------|
| Identifier (1–1) | `ID` |
| Chemical descriptors (2–9) | `MolWt`, `ClogP`, `TPSA`, `HBD`, `HBA`, `NumRotatableBonds`, `RingCount`, `FractionCSP3` |
| Quantum-chemical & thermodynamic properties (10–24) | `Rotational_Constant_A`, `Rotational_Constant_B`, `Rotational_Constant_C`, `Dipole_Moment`, `Isotropic_polarizability`, `Energy_of_HOMO`, `Energy_of_LUMO`, `LUMO_HOMO_GAP`, `Electronic_spatial_extent`, `Zero_point_vibrational_energy`, `Internal_energy_at_0_K`, `Internal_energy_at_298.15_K`, `Enthalpy_at_298.15_K`, `Free_energy_at_298.15_K`, `Heat_capacity_at_298.15_K` |
| Structural fingerprint (25–25) | `Ecfp_4` (2,048-bit Morgan fingerprint, radius 2) |
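
After `09_unpack_ecfp4.py` expands `Ecfp_4` into per-bit columns, a model-ready feature matrix can be assembled directly from these column groups. A sketch on a synthetic frame (the real data comes from the split parquet files, with 2,048 bit columns `Ecfp_4_0` … `Ecfp_4_2047`; the target choice `LUMO_HOMO_GAP` is just an example):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for one split: two descriptors, a target, and 8 fingerprint bits
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ID": [f"gdb_{i}" for i in range(4)],
    "MolWt": rng.uniform(16.0, 130.0, 4),
    "TPSA": rng.uniform(0.0, 100.0, 4),
    "LUMO_HOMO_GAP": rng.uniform(0.1, 0.4, 4),
})
for b in range(8):
    df[f"Ecfp_4_{b}"] = rng.integers(0, 2, 4)  # binary fingerprint bits

# Descriptors + unpacked fingerprint bits form the features; one property is the target
fp_cols = [c for c in df.columns if c.startswith("Ecfp_4_")]
X = df[["MolWt", "TPSA"] + fp_cols].to_numpy()
y = df["LUMO_HOMO_GAP"].to_numpy()
```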

# Sources

L. Ruddigkeit, R. van Deursen, L. C. Blum, J.-L. Reymond, Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17, J. Chem. Inf. Model. 52, 2864–2875, 2012.

R. Ramakrishnan, P. O. Dral, M. Rupp, O. A. von Lilienfeld, Quantum chemistry structures and properties of 134 kilo molecules, Scientific Data 1, 140022, 2014.

# License

This project is licensed under CC0 (matching the QM9 dataset, which is in the public domain). Attribution is appreciated but not required.
data/QML9.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a11c987215e55ba1aaaf918cd1bdc78c9f727ff3046fc7cfcf64880eddcfdad
size 18136924
data/test_split.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d4c6426bd2f1bad91dfc8f6cfded2b16ee09af9bd37ec34c763de3201f53ecd
size 3563047
data/train_split.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b13ce66c1cfbf776e512a641e90cb39f010c1e7badcf0440fce291ff8c276d1e
size 17905474
data/validation_split.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6271827abe5802361841942e4c8f23dcb37427376261c5b3efb8fb8277f3c8a8
size 3566196
src/00_download_data.sh
ADDED
@@ -0,0 +1,5 @@
#!/bin/bash
wget https://figshare.com/ndownloader/files/3195389
tar -xvjf 3195389
mkdir input_data
mv *.xyz input_data/
src/01_batch_data.py
ADDED
@@ -0,0 +1,20 @@
import os
import math

input_dir = "input_data"
output_dir = "batches"
batch_size = 1000
os.makedirs(output_dir, exist_ok=True)

all_files = sorted([
    os.path.join(input_dir, f)
    for f in os.listdir(input_dir)
    if f.endswith(".xyz")
])

n_batches = math.ceil(len(all_files) / batch_size)

for i in range(n_batches):
    batch_files = all_files[i*batch_size : (i+1)*batch_size]
    with open(f"{output_dir}/batch_{i:03d}.txt", "w") as f:
        f.write("\n".join(batch_files))
src/02_process_batch.py
ADDED
@@ -0,0 +1,88 @@
import pandas as pd
import argparse

def safe_float(value):
    try:
        # Handle Mathematica-style scientific notation like 2.1997*^-6
        clean_value = value.replace("*^-", "e-").replace("*^", "e")
        return float(clean_value)
    except Exception as e:
        raise ValueError(f"Failed to parse float from '{value}': {e}")

def process_xyz(filepath):
    data = {}
    with open(filepath, 'r') as f:
        lines = f.readlines()

    data["n_atoms"] = int(lines[0].strip())
    values = lines[1].split()
    data["ID"] = values[1]

    # The second-to-last line holds the GDB-17 and B3LYP SMILES; keep the first.
    data["SMILES_GDB17"] = lines[-2].strip().split()[0]

    data["Rotational_Constant_A"] = safe_float(values[2])
    data["Rotational_Constant_B"] = safe_float(values[3])
    data["Rotational_Constant_C"] = safe_float(values[4])
    data["Dipole_Moment"] = safe_float(values[5])
    data["Isotropic_polarizability"] = safe_float(values[6])
    data["Energy_of_HOMO"] = safe_float(values[7])
    data["Energy_of_LUMO"] = safe_float(values[8])
    data["LUMO_HOMO_GAP"] = safe_float(values[9])
    data["Electronic_spatial_extent"] = safe_float(values[10])
    data["Zero_point_vibrational_energy"] = safe_float(values[11])
    data["Internal_energy_at_0_K"] = safe_float(values[12])
    data["Internal_energy_at_298.15_K"] = safe_float(values[13])
    data["Enthalpy_at_298.15_K"] = safe_float(values[14])
    data["Free_energy_at_298.15_K"] = safe_float(values[15])
    data["Heat_capacity_at_298.15_K"] = safe_float(values[16])

    for i in range(data["n_atoms"]):
        atom = lines[2 + i].split()
        data[f"element_{i}"] = atom[0]
        data[f"x_{i}"] = safe_float(atom[1])
        data[f"y_{i}"] = safe_float(atom[2])
        data[f"z_{i}"] = safe_float(atom[3])
        data[f"charge_{i}"] = safe_float(atom[4])

    return data


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--file_list", required=True, help="Path to .txt file listing input .xyz files")
    parser.add_argument("--output_file", required=True, help="Path to output .parquet file (checkpointed)")
    parser.add_argument("--checkpoint_every", type=int, default=100, help="Save after every N molecules")
    args = parser.parse_args()
    print("Processing", args.file_list)
    with open(args.file_list, 'r') as f:
        files = [line.strip() for line in f if line.strip()]

    # Collect one record per molecule; a list keeps every molecule
    # (keying a dict by n_atoms would silently overwrite records).
    records = []
    for i, path in enumerate(files):
        try:
            records.append(process_xyz(path))
        except Exception as e:
            print(f"Failed on {path}: {e}")

        if (i + 1) % args.checkpoint_every == 0:
            df = pd.DataFrame(records)
            df.to_parquet(args.output_file, index=False)
            print(f"Checkpointed {len(df)} molecules at {i+1}/{len(files)}")

    # Final save
    if records:
        df = pd.DataFrame(records)
        df.to_parquet(args.output_file, index=False)
        print(f"Final write: {len(df)} molecules → {args.output_file}")
    else:
        print("No valid data to save.")


if __name__ == "__main__":
    main()
src/03_run_batches.sh
ADDED
@@ -0,0 +1,91 @@
#!/bin/bash
# Run 02_process_batch.py over batch_000 ... batch_089.
# Each call can also be submitted as a separate SLURM job for parallel runs.
for i in $(seq -f "%03g" 0 89); do
    python src/02_process_batch.py --file_list ./batches/batch_${i}.txt --output_file checkpoints/batch_${i}.parquet --checkpoint_every 100
done
src/04_merge_data.py
ADDED
@@ -0,0 +1,28 @@
import pandas as pd
import glob
import os

def merge_parquets(input_dir, output_file):
    # Find all .parquet files
    paths = glob.glob(os.path.join(input_dir, "*.parquet"))
    if not paths:
        print(f"No parquet files found in {input_dir}")
        return

    dfs = []
    for p in paths:
        try:
            dfs.append(pd.read_parquet(p))
        except Exception as e:
            print(f"⚠️ Skipping {p}: {e}")

    # concat will union all columns; missing columns → NaN
    merged = pd.concat(dfs, ignore_index=True, sort=False)

    merged.to_parquet(output_file, index=False)
    print(f"✅ Merged {len(dfs)} files → {output_file}")
    print(f"Total rows: {len(merged)}, Total columns: {len(merged.columns)}")


merge_parquets("./checkpoints/", "./checkpoints_merged/all_data_merged.parquet")
src/05_sanitize_data.py
ADDED
@@ -0,0 +1,21 @@
from molvs import standardize_smiles
import pandas as pd

file_path = "./checkpoints_merged/all_data_merged.parquet"
df = pd.read_parquet(file_path)
smiles_column_key = df.columns[2]

std_smile_list = []

for index, row in df.iterrows():
    smile = row[smiles_column_key]
    std_smile = standardize_smiles(smile)
    std_smile_list.append(std_smile)

df["Smiles_molvs"] = std_smile_list
col = df.columns[-1]            # name of the last column
df.insert(3, col, df.pop(col))  # remove it, then re-insert at index 3
df.drop('SMILES_GDB17', axis=1, inplace=True)
df.to_parquet("./clean_checkpoints/all_data_merged-cleaned.parquet", index=False)
src/06_calculate_ecfp.py
ADDED
@@ -0,0 +1,29 @@
import numpy as np
import pandas as pd
from rdkit import Chem, DataStructs
from rdkit.Chem import rdFingerprintGenerator

df = pd.read_parquet("./clean_checkpoints/all_data_merged-cleaned.parquet")

smiles_list = df["Smiles_molvs"]
ecfp_list = []

mfpgen = rdFingerprintGenerator.GetMorganGenerator(radius=2, fpSize=2048)
for i, smile in enumerate(smiles_list):
    mol = Chem.MolFromSmiles(smile)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smile}, at item {i}")
    fp = mfpgen.GetFingerprint(mol)
    arr = np.zeros((2048,), dtype=int)        # make empty array
    DataStructs.ConvertToNumpyArray(fp, arr)  # fill it
    ecfp_list.append(arr)

df["Ecfp_4"] = ecfp_list
col = df.columns[-1]
df.insert(3, col, df.pop(col))

output_file = "./ecfp/all_data_merged-cleaned-ecfp4.parquet"
df.to_parquet(output_file, index=False)
src/07_calculate_properties.py
ADDED
@@ -0,0 +1,56 @@
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors

mw_list = []
logp_list = []
TPSA_list = []
HBD_list = []
HBA_list = []
rot_bond_list = []
ring_count_list = []
frac_sp3_list = []

df = pd.read_parquet("./ecfp/all_data_merged-cleaned-ecfp4.parquet")

# Loop over SMILES
for smi in df["Smiles_molvs"]:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        # If parsing fails, append NaN for every descriptor
        mw_list.append(np.nan)
        logp_list.append(np.nan)
        TPSA_list.append(np.nan)
        HBD_list.append(np.nan)
        HBA_list.append(np.nan)
        rot_bond_list.append(np.nan)
        ring_count_list.append(np.nan)
        frac_sp3_list.append(np.nan)
        print(smi, "failed to parse")
        continue

    # Compute and append
    mw_list.append(Descriptors.MolWt(mol))
    logp_list.append(Descriptors.MolLogP(mol))
    TPSA_list.append(Descriptors.TPSA(mol))
    HBD_list.append(Descriptors.NumHDonors(mol))
    HBA_list.append(Descriptors.NumHAcceptors(mol))
    rot_bond_list.append(Descriptors.NumRotatableBonds(mol))
    ring_count_list.append(Descriptors.RingCount(mol))
    frac_sp3_list.append(Descriptors.FractionCSP3(mol))

# Attach back to DataFrame
df["MolWt"] = mw_list
df["ClogP"] = logp_list
df["TPSA"] = TPSA_list
df["HBD"] = HBD_list
df["HBA"] = HBA_list
df["NumRotatableBonds"] = rot_bond_list
df["RingCount"] = ring_count_list
df["FractionCSP3"] = frac_sp3_list

output_file = "./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties.parquet"

df.to_parquet(output_file, index=False)
src/08_organize_columns.py
ADDED
@@ -0,0 +1,20 @@
import pandas as pd

df = pd.read_parquet("./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties.parquet")

# Descriptor columns added by 07_calculate_properties.py
new_desc = [
    "MolWt", "ClogP", "TPSA", "HBD", "HBA",
    "NumRotatableBonds", "RingCount", "FractionCSP3"
]

base_cols = ["n_atoms", "ID", "Smiles_molvs", "Ecfp_4"]

# Everything else keeps its original relative order
other_cols = [c for c in df.columns if c not in base_cols + new_desc]

new_order = base_cols + new_desc + other_cols
df = df[new_order]

output_file = "./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties-sorted-columns.parquet"
df.to_parquet(output_file, index=False)
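The reordering idiom in `08_organize_columns.py` (named groups first, then the remainder in original order) can be seen on a toy DataFrame; the column names below are stand-ins for the pipeline's `base_cols` and `new_desc`:

```python
import pandas as pd

df = pd.DataFrame({"c": [1], "a": [2], "b": [3], "x": [4]})

base_cols = ["a", "b"]   # stand-in for the pipeline's base columns
new_desc = ["c"]         # stand-in for the descriptor columns
other_cols = [col for col in df.columns if col not in base_cols + new_desc]

# Reindex the frame: base columns, then descriptors, then everything else
df = df[base_cols + new_desc + other_cols]
```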
src/09_unpack_ecfp4.py
ADDED
@@ -0,0 +1,38 @@
import pandas as pd

def expand_array_column(df: pd.DataFrame, column_name: str, prefix: str = None) -> pd.DataFrame:
    """
    Expand a column of sequence-like values into multiple scalar columns.

    Parameters:
    - df: pandas DataFrame with a column of list/array-like entries.
    - column_name: name of the column to expand.
    - prefix: optional prefix for new columns; defaults to column_name.

    Returns:
    - A new DataFrame with the original column dropped and new columns added.
    """
    # Extract the list/array values
    sequences = df[column_name].tolist()
    if not sequences:
        raise ValueError(f"Column '{column_name}' is empty.")

    # Determine vector length from the first entry
    vec_length = len(sequences[0])
    # Use the provided prefix or fall back to the column name
    prefix = prefix or column_name
    new_column_names = [f"{prefix}_{i}" for i in range(vec_length)]

    # Build the expanded DataFrame
    expanded_df = pd.DataFrame(sequences, index=df.index, columns=new_column_names)

    # Drop the original column and concatenate the new columns
    df_dropped = df.drop(columns=[column_name])
    result_df = pd.concat([df_dropped, expanded_df], axis=1)

    return result_df

df = pd.read_parquet("./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties-sorted-columns.parquet")
result_df = expand_array_column(df, "Ecfp_4", "ECFP")

output_file = "./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties-sorted-columns-expanded-ecfp.parquet"
result_df.to_parquet(output_file, index=False)
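On a toy DataFrame, the expansion behaves as follows; this inlines the core of `expand_array_column` for illustration, with made-up IDs and 3-bit fingerprints in place of the real 2048-bit ECFP4 vectors:

```python
import pandas as pd

df = pd.DataFrame({"ID": ["m1", "m2"], "Ecfp_4": [[1, 0, 1], [0, 1, 1]]})

# Turn the list column into one scalar column per position
sequences = df["Ecfp_4"].tolist()
cols = [f"ECFP_{i}" for i in range(len(sequences[0]))]
expanded = pd.DataFrame(sequences, index=df.index, columns=cols)
result = pd.concat([df.drop(columns=["Ecfp_4"]), expanded], axis=1)
```

Each fingerprint bit becomes its own `ECFP_i` column, which is what lets the downstream scripts slice fingerprint features by column position.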
src/10_finalize_ml_data.py
ADDED
@@ -0,0 +1,15 @@
import pandas as pd

df = pd.read_parquet("./ecfp_and_properties/all_data_merged-cleaned-ecfp4-properties-sorted-columns-expanded-ecfp.parquet")

# Select column groups by position (ordering established in 08_organize_columns.py)
ID_col = df.columns[1]
chemical_values = df.columns[3:11]
thermo_values = df.columns[11:26]
ecfp_cols = df.columns[174:]

selected_cols = [ID_col] + list(chemical_values) + list(thermo_values) + list(ecfp_cols)

# Slice the DataFrame down to the ML-ready columns
combined_df = df[selected_cols]
combined_df.to_parquet("./ml_input_data/QML9.parquet")  # final file
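Selecting by position rather than by name, as `10_finalize_ml_data.py` does, depends on the column order fixed upstream; a minimal sketch on a toy frame (the columns here mimic the pipeline's layout but are not the real schema):

```python
import pandas as pd

df = pd.DataFrame({
    "n_atoms": [9], "ID": ["gdb_1"], "Smiles_molvs": ["C"], "MolWt": [16.04]
})

# Positional selection: column 1 is the ID, columns 3 onward are features
ID_col = df.columns[1]
selected = [ID_col] + list(df.columns[3:])
subset = df[selected]
```

The trade-off of positional slicing is that any change to the upstream column order silently shifts which columns land in the final dataset, so `08_organize_columns.py` must run first.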
src/11_split_ml_data.py
ADDED
@@ -0,0 +1,19 @@
import pandas as pd
from sklearn.model_selection import train_test_split

combined_df = pd.read_parquet("./ml_input_data/QML9.parquet")

# 1) hold out 20% of the data
train_df, temp_df = train_test_split(combined_df, test_size=0.20, random_state=42)

# 2) split the held-out 20% in half -> 10% validation, 10% test
valid_df, test_df = train_test_split(temp_df, test_size=0.50, random_state=42)

# Confirm the split proportions
print(f"Train: {len(train_df)/len(combined_df):.2%}")
print(f"Valid: {len(valid_df)/len(combined_df):.2%}")
print(f"Test: {len(test_df)/len(combined_df):.2%}")

train_df.to_parquet("./ml_input_data/train_split.parquet")
valid_df.to_parquet("./ml_input_data/validation_split.parquet")
test_df.to_parquet("./ml_input_data/test_split.parquet")
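The two-stage `train_test_split` above yields an 80/10/10 split. The same proportions can be sketched with pandas alone (shuffle once, then slice by position); this is an equivalent illustration on synthetic data, not the pipeline's code:

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})

# Shuffle deterministically, then carve out 80/10/10 by position
shuffled = df.sample(frac=1.0, random_state=42).reset_index(drop=True)
n = len(shuffled)
train = shuffled.iloc[: int(0.8 * n)]
valid = shuffled.iloc[int(0.8 * n): int(0.9 * n)]
test = shuffled.iloc[int(0.9 * n):]
```

The three slices are disjoint and together cover every row, which is the same guarantee the nested `train_test_split` calls provide.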
src/12_train_randomforest.py
ADDED
@@ -0,0 +1,42 @@
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

train_df = pd.read_parquet("./ml_input_data/train_split.parquet")
valid_df = pd.read_parquet("./ml_input_data/validation_split.parquet")
test_df = pd.read_parquet("./ml_input_data/test_split.parquet")

# Feature groups, selected by position (column 0 is the ID)
chem_properties = list(train_df.columns[1:9])
ecfp = list(train_df.columns[24:])
target = "LUMO_HOMO_GAP"
feature_cols = chem_properties + ecfp

X_train, y_train = train_df[feature_cols], train_df[target]
X_valid, y_valid = valid_df[feature_cols], valid_df[target]
X_test, y_test = test_df[feature_cols], test_df[target]

# Train a simple baseline model
rf_10 = RandomForestRegressor(
    n_estimators=10,
    max_depth=None,
    random_state=42,
)
rf_10.fit(X_train, y_train)
y_pred = rf_10.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)

print(f"Test MSE: {mse:.4f}")
print(f"Test R² : {r2:.4f}")

# Evaluate on the molecules with the highest 10% of energy gaps.
# Note: this subset is drawn from the full dataset, so it overlaps the training split.
all_data = "./ml_input_data/QML9.parquet"
df = pd.read_parquet(all_data)
top_10_percent = df.sort_values(by="LUMO_HOMO_GAP", ascending=False).head(int(0.1 * len(df)))
top_10_percent_X_test, top_10_percent_y_test = top_10_percent[feature_cols], top_10_percent[target]
top_10_percent_y_pred = rf_10.predict(top_10_percent_X_test)
mse = mean_squared_error(top_10_percent_y_test, top_10_percent_y_pred)
r2 = r2_score(top_10_percent_y_test, top_10_percent_y_pred)
print(f"Top 10% Test MSE: {mse:.4f}")
print(f"Top 10% Test R² : {r2:.4f}")
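The top-decile evaluation at the end of `12_train_randomforest.py` is just a sort-and-head selection followed by scoring. A minimal pandas/numpy sketch of that step, using a synthetic gap column and a trivial mean predictor as a hypothetical stand-in for `rf_10.predict`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"LUMO_HOMO_GAP": np.linspace(0.0, 9.9, 100)})

# Take the rows with the highest 10% of gap values
top = df.sort_values(by="LUMO_HOMO_GAP", ascending=False).head(int(0.1 * len(df)))

# Score a trivial mean predictor on that subset (stand-in for the trained model)
y_true = top["LUMO_HOMO_GAP"].to_numpy()
y_pred = np.full_like(y_true, df["LUMO_HOMO_GAP"].mean())
mse = float(np.mean((y_true - y_pred) ** 2))
```

Because the mean predictor sits far from the top decile, its MSE there is large; a model that generalizes to high-gap molecules should score much lower on this slice.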