Update README.md
README.md CHANGED

Scripts are stored in the src/ directory and should be run in numerical order by name. The purpose of each script is described below:

01.install_packages.py --> This script includes all Python packages used across all scripts. The user should check which packages they do not yet have installed and install any missing ones. It is recommended that all packages be installed in a dedicated Conda environment set up for handling this dataset and the associated ML model.
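
As a quick sanity check, a sketch like the one below can verify that everything is importable in the active environment; the package list here is an assumption inferred from the script descriptions (MolVS, H2O, SHAP, datasets), not the authoritative list in 01.install_packages.py:

```python
# Hypothetical environment check; the real package list lives in
# 01.install_packages.py. The names below are assumptions.
import importlib.util

required = ["pandas", "molvs", "rdkit", "h2o", "shap", "matplotlib", "datasets"]
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

if missing:
    print("Missing packages (install with conda or pip):", ", ".join(missing))
else:
    print("All required packages are importable.")
```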

02.download_dataset.py --> This script is used to download the dataset directly from the ORD data repository on GitHub. Further details can be found at https://github.com/open-reaction-database
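
A minimal sketch of the download step is shown below; the raw-file URL is a placeholder, since the exact path within the open-reaction-database organization is not given here:

```python
# Hypothetical download sketch; DATA_URL is a placeholder, not the real path.
import urllib.request

DATA_URL = "https://raw.githubusercontent.com/open-reaction-database/<repo>/<branch>/<file>.csv"
urllib.request.urlretrieve(DATA_URL, "Ahneman_ORD_Data.csv")
print("Saved dataset to Ahneman_ORD_Data.csv")
```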

03.sanitize_data.py --> This script uses the MolVS package to convert the molecular SMILES strings in the original dataset into canonical SMILES strings (i.e., to perform 'sanitization'). The user should input the original dataset saved as a .csv file (here, "Ahneman_ORD_Data.csv"). The script will output a new .csv file ("Sanitized_Ahneman_ORD_Data.csv") that is identical in structure to the original, but with the sanitized SMILES strings.
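
The core of that step might look like the sketch below, assuming pandas for the CSV handling and MolVS's standardize_smiles helper; the SMILES column names are hypothetical, not the actual headers of the dataset:

```python
# Hedged sanitization sketch; column names are assumptions, not the
# actual headers of Ahneman_ORD_Data.csv.
import pandas as pd
from molvs import standardize_smiles

df = pd.read_csv("Ahneman_ORD_Data.csv")

smiles_columns = ["ligand_smiles", "base_smiles", "additive_smiles"]  # hypothetical
for col in smiles_columns:
    df[col] = df[col].apply(standardize_smiles)

df.to_csv("Sanitized_Ahneman_ORD_Data.csv", index=False)
```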

04.prepare_data_for_ML.py --> This script takes the sanitized dataset as input and performs one-hot encoding in order to prepare the data for use in the H2O AutoML model. A new .csv file ("Prepared_Data.csv") is created to save the dataset after one-hot encoding.
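
With pandas, the encoding step can be sketched as follows; again, the categorical column names are assumptions:

```python
# One-hot encoding sketch with pandas; categorical column names are hypothetical.
import pandas as pd

df = pd.read_csv("Sanitized_Ahneman_ORD_Data.csv")

categorical_columns = ["ligand_smiles", "base_smiles", "additive_smiles"]  # hypothetical
encoded = pd.get_dummies(df, columns=categorical_columns)

encoded.to_csv("Prepared_Data.csv", index=False)
```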

05.run_autoML_updated.py --> This script takes in the one-hot encoded reaction data and splits it into training and test sets (70%/30%). The data is used to train an H2O AutoML model (maximum 8 models, omitting stacked ensemble models). After training the H2O AutoML model, the best-performing model suggested by AutoML is selected and analyzed by SHAP analysis. A loss curve is also generated for the model, along with a plot comparing the predicted reaction yields from the validation set to the actual yields included in the original dataset.
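
A condensed sketch of that workflow with the H2O Python API is shown below; the target column name ("yield") is an assumption, and the built-in shap_summary_plot applies to the tree-based models AutoML typically selects:

```python
# Hedged H2O AutoML sketch; the target column name is hypothetical.
import h2o
from h2o.automl import H2OAutoML

h2o.init()
data = h2o.import_file("Prepared_Data.csv")
train, test = data.split_frame(ratios=[0.7], seed=42)  # 70%/30% split

target = "yield"  # hypothetical column name
features = [c for c in data.columns if c != target]

# Maximum 8 models, omitting stacked ensembles, as described above.
aml = H2OAutoML(max_models=8, exclude_algos=["StackedEnsemble"], seed=42)
aml.train(x=features, y=target, training_frame=train)

best = aml.leader                            # best-performing model on the leaderboard
history = best.scoring_history()             # data behind a loss curve
best.shap_summary_plot(test)                 # SHAP analysis on held-out data
preds = best.predict(test).as_data_frame()   # for a predicted-vs-actual yield plot
```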

06.upload_to_huggingface.py --> This script was used to upload the datasets used and generated for this project to this Hugging Face repository. The datasets package must be installed to run this script.
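
The upload can be sketched with the datasets package as follows; the repository id is a placeholder, and pushing requires an authenticated Hugging Face login:

```python
# Hedged upload sketch; the repo id is a placeholder, not this project's actual id.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "Prepared_Data.csv"})
ds.push_to_hub("<username>/<dataset-repo>")  # placeholder; run `huggingface-cli login` first
```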

---
license: mit