Update README.md

- biology
---

# tRNA-based classification model

The dataset contains:
1. Generic files used to build the training dataset
2. Supplementary data used for labeling
3. An HTML file with a step-by-step description of the research
4. Python scripts used to train the models
5. The two best models, selected based on the lowest number of false negatives (FNs) on a third, independent test dataset
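
For reference, the false-negative count behind this selection criterion can be read off a confusion matrix. A minimal sketch, assuming binary labels and scikit-learn; the variables are illustrative, not the repository's actual code:

```python
from sklearn.metrics import confusion_matrix

# Illustrative labels only; the real evaluation uses the independent test set.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() yields (tn, fp, fn, tp).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false negatives: {fn}")  # models are ranked by this count, lowest first
```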

## Setup
Download Miniconda and use:
If any packages are missing during Python code execution, install them manually.

3. Run **full.sh** to download genomes and extract features for model training from the full dataset, saved as **FEATURES_ALL.ndjson** (genomes are removed to preserve memory); see the reading sketch after this list.
4. Run **80_20_split_fixed.py** on **FEATURES_ALL.ndjson** together with both supplementary files to perform an automatic stratified 80/20 split, with archaeal and contaminated genomes filtered out.
5. Run **Mass_models.py** on **FEATURES_ALL.ndjson**, **Supp1.csv**, and **Supp2.xlsx**.
6. Run **predict_models_dir.py** to generate predictions for all trained models on FASTA genomes. If a ground-truth TSV file is provided, predictions are annotated with that ground truth, and metrics are reported separately for Isolate and MAG genomes.
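
**FEATURES_ALL.ndjson** is newline-delimited JSON, so it can be streamed one record at a time. A minimal reading sketch; the field name `genome_id` is an assumption, since the actual schema is not shown in this README:

```python
import json

# Stream FEATURES_ALL.ndjson record by record to keep memory use low.
with open("FEATURES_ALL.ndjson") as fh:
    for line in fh:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        record = json.loads(line)  # one genome's feature record per line
        # "genome_id" is a hypothetical field name used for illustration.
        print(record.get("genome_id"), len(record))
```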

Run **80_20_split_fixed.py** and **Mass_models.py** using **FEATURES_ALL.ndjson**, **Supp1.csv**, and **Supp2.xlsx**.
Example run settings:
```bash
python3 80_20_split_fixed.py \
    --ndjson FEATURES_ALL.ndjson \
    --supp1 Supp1.csv \
    --supp2 Supp2.xlsx \
    --outdir split_dataset
```
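
The split script's internals are not shown in this README; for orientation, a stratified 80/20 split of the kind step 4 describes can be sketched with scikit-learn (record and column names are hypothetical):

```python
from sklearn.model_selection import train_test_split

# Toy records standing in for parsed NDJSON rows; "label" is hypothetical.
records = [{"genome_id": i, "label": i % 2} for i in range(10)]
labels = [r["label"] for r in records]

# Stratify on the label so the 80/20 halves keep the same class balance.
train, test = train_test_split(
    records, test_size=0.2, stratify=labels, random_state=42
)
print(len(train), len(test))  # 8 2
```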
```bash
python3 Mass_models.py \
    --ndjson split_dataset/subset01/ \
    --supp2 Supp2.xlsx \
    --n_trials 30 \
    --timeout 5400
```
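
The `--n_trials` and `--timeout` options suggest a budgeted hyperparameter search. A minimal sketch with the same budget, assuming Optuna (an assumption; the README does not name the search framework) and an illustrative random-forest search space:

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the real feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

def objective(trial: optuna.Trial) -> float:
    # Search a small random-forest space; hyperparameter ranges are illustrative.
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 300),
        max_depth=trial.suggest_int("max_depth", 2, 16),
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
# Mirrors the example flags: stop after 30 trials or 5400 seconds.
study.optimize(objective, n_trials=30, timeout=5400)
print(study.best_params)
```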
```bash
python3 predict_models_dir.py \
    --genomes_dir /path/to/fasta_dir \
    --models_dir results_models \
    --outdir predictions
```
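
Step 6 runs every trained model over every FASTA genome in a directory; the pairing logic can be sketched as below. The `predict_one` helper is hypothetical, standing in for the script's actual inference code:

```python
from pathlib import Path

def predict_one(model_path: Path, genome_path: Path) -> str:
    # Hypothetical stand-in for loading a model and scoring one genome.
    return f"{model_path.stem}\t{genome_path.stem}\tprediction"

genomes = sorted(Path("/path/to/fasta_dir").glob("*.fasta"))
models = sorted(Path("results_models").glob("*"))

outdir = Path("predictions")
outdir.mkdir(exist_ok=True)

# One output file per model, one line per genome.
for model in models:
    with open(outdir / f"{model.stem}.tsv", "w") as out:
        for genome in genomes:
            out.write(predict_one(model, genome) + "\n")
```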

Code and files will be further developed and packaged into a container once all required tests and training are completed.