---
license: apache-2.0
task_categories:
- token-classification
language:
- en
tags:
- biology
---
# tRNA-based classification model
The dataset contains:
1. Generic files used for model training
2. Supplementary data used for labeling
3. An HTML file with a step-by-step description of the research
4. Python scripts used to train the models
5. The two best models, selected based on the lowest number of false negatives (FNs) on a third, independent test dataset
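Since model selection is driven by the false-negative count, it helps to be explicit about what is counted. A minimal sketch (this helper is illustrative, not part of the repository's scripts):

```python
def false_negatives(y_true, y_pred, positive=1):
    """Count cases where the true label is positive but the prediction is not."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)

# false_negatives([1, 1, 0, 1], [1, 0, 0, 0]) returns 2
```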
## Setup
Download Miniconda and use:
```bash
conda env create -f environment.yml
```
to replicate the working environment.
If any packages turn out to be missing while running the Python scripts, install them manually with pip, based on the import error messages.
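Rather than discovering missing packages one failed run at a time, you can check them up front. A small stdlib-only sketch (the package list below is a placeholder; substitute the imports the training scripts actually use):

```python
import importlib.util

def missing_packages(names):
    """Return the package names that cannot be found in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Hypothetical list -- replace with the real dependencies (e.g. numpy, pandas, sklearn).
print(missing_packages(["json", "csv"]))
```

Anything reported by this check can then be installed with `pip install <name>` inside the active conda environment.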
## Steps for replication:
1. Download supplementary data from https://doi.org/10.7554/eLife.71402
2. **ftp_urls.txt** contains the list of genome download addresses (most of which are still available).
3. Run **full.sh** to download the genomes and extract features for model training from the full dataset, saved as **FEATURES_ALL.ndjson** (downloaded genomes are deleted afterwards to save disk space)
4. Run **80_20_split_fixed.py** on **FEATURES_ALL.ndjson** together with both supplementary files to perform an automatic stratified 80/20 split, with archaeal and contaminated genomes filtered out.
5. Run **Mass_models.py** on **FEATURES_ALL.ndjson**, **Supp1.csv**, **Supp2.xlsx**
6. Run **predict_models_dir.py** to generate predictions from all trained models on FASTA genomes. If ground-truth files are provided, predictions are annotated with labels from the TSV file, and metrics are reported separately for Isolate and MAG genomes.
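The steps above revolve around **FEATURES_ALL.ndjson**, which is newline-delimited JSON: one JSON object per line. A minimal reader sketch (the record fields shown are illustrative only; the real schema is whatever **full.sh** produces):

```python
import io
import json

def iter_ndjson(fh):
    """Yield one JSON object per non-empty line (NDJSON / JSON Lines)."""
    for line in fh:
        line = line.strip()
        if line:
            yield json.loads(line)

# Illustrative records only -- not the actual FEATURES_ALL.ndjson schema.
sample = io.StringIO(
    '{"genome": "g1", "features": [0.1, 0.2]}\n'
    '{"genome": "g2", "features": [0.3, 0.4]}\n'
)
records = list(iter_ndjson(sample))
print(len(records))  # 2
```

Streaming line by line like this keeps memory flat even when the features file is large.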
Example run settings (all results were obtained using seed=42):
```bash
python3 80_20_split_fixed.py \
  --ndjson FEATURES_ALL.ndjson \
  --supp1 Supp1.csv \
  --supp2 Supp2.xlsx \
  --outdir split_dataset
```
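For intuition, a stratified 80/20 split means sampling the held-out fraction within each label group separately, so class proportions are preserved. A minimal stdlib sketch of the idea (a simplification, not the logic of **80_20_split_fixed.py**, which also filters archaeal and contaminated genomes):

```python
import random
from collections import defaultdict

def stratified_split(records, label_key, test_frac=0.2, seed=42):
    """Shuffle each label group with a fixed seed and hold out test_frac of it."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for rec in records:
        by_label[rec[label_key]].append(rec)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        k = max(1, round(len(group) * test_frac))
        test.extend(group[:k])
        train.extend(group[k:])
    return train, test
```

Fixing the seed (here 42, matching the example runs) makes the split reproducible across reruns.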
```bash
python3 Mass_models.py \
  --ndjson split_dataset/subset01/ \
  --supp2 Supp2.xlsx \
  --supp1 Supp1.csv \
  --outdir . \
  --train_mode both \
  --weight_mode both \
  --model all \
  --metric all \
  --n_trials 30 \
  --timeout 5400
```
```bash
python3 predict_models_dir.py \
  --genomes_dir /path/to/fasta_dir \
  --models_dir results_models \
  --outdir predictions
```
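The prediction step iterates over FASTA genomes, where each record is a `>` header line followed by one or more sequence lines. A minimal parser sketch (illustrative only; the repository's scripts may parse FASTA differently or use a library such as Biopython):

```python
def read_fasta(lines):
    """Yield (header, sequence) pairs from FASTA-formatted lines."""
    header, chunks = None, []
    for line in lines:
        line = line.rstrip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

# Works on any iterable of lines, e.g. open("genome.fasta") or a list:
print(list(read_fasta([">g1", "ACGT", "AC", ">g2", "GGG"])))
```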

The code and files will be further developed and packaged into a container once all required tests and training are completed.