---
license: mit
---

# Datasets for MicrobeRT

## Taxonomy

Data were collected using the ncbi-genome-download package from both GenBank and RefSeq assemblies for fungal, viral, bacterial, and archaeal genomes.

For fungi, complete, chromosome, and scaffold assemblies were downloaded from GenBank. For comparison, a single representative assembly was retrieved for potential host organisms, including human, cow, dog, domestic cat, pig, wheat, and corn.

For bacterial, archaeal, and viral genomes, we included both complete and all representative assemblies.

| Superkingdom / Kingdom | Representation (n) |
| --- | --- |
| Archaea | 1,252,736 |
| Bacteria | 145,122,383 |
| Eukaryota | 8,633,180 |
| Viruses | 717,342 |
| Total | 155,725,641 |

To minimize redundancy, a cluster-based down-selection was applied. Full-length sequences were first randomly fragmented into subsequences of 750–3,500 base pairs (bp). These subsequences were clustered with MMseqs2 at 90% identity within each taxonomic family, and one representative sequence was retained per cluster. The resulting representatives were then clustered again at 70% identity, with a single sequence selected from each cluster to form the final dataset. In addition, derivative datasets of defined sequence lengths were generated to benchmark trained genomic tokenizers. A hold-out set of ~1% of the total dataset, comprising 1,572,987 sequences, was reserved for testing.
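The random fragmentation step can be sketched as follows. This is a minimal illustration only; `fragment_sequence` is a hypothetical helper and the pipeline's actual fragmentation code is not part of this repository.

```python
import random


def fragment_sequence(seq, min_len=750, max_len=3500, seed=0):
    """Randomly fragment a full-length sequence into subsequences of
    min_len-max_len bp, walking left to right until less than min_len
    of sequence remains (hypothetical helper, not the released code)."""
    rng = random.Random(seed)
    fragments, i = [], 0
    while len(seq) - i >= min_len:
        step = rng.randint(min_len, max_len)
        fragments.append(seq[i:i + step])
        i += step
    return fragments


frags = fragment_sequence("ACGT" * 5000)  # 20,000 bp toy sequence
```

Every fragment falls in the 750–3,500 bp window because the loop only emits a fragment while at least `min_len` bases remain.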

Two datasets were created to reflect sequence sizes similar to the output lengths of standard sequencing platforms: the closer an observed sequence's length is to those seen during training, the more effectively each model is expected to perform. We created two separate datasets, Long reads at a maximum of 1,750 bp and Short reads at a maximum of 300 bp. Each MMseqs2-clustered sequence was evenly fragmented into a number of subsequences determined by its original length, with the maximum subsequence length targeted according to the training type (Long or Short).
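The even fragmentation described above can be sketched as below; `even_fragments` is a hypothetical helper, and the released code may implement this step differently.

```python
import math


def even_fragments(seq, max_len):
    """Split seq into the smallest number of roughly equal-length
    pieces such that no piece exceeds max_len (sketch of the
    'evenly fragmented' step; the actual implementation may differ)."""
    n = math.ceil(len(seq) / max_len)          # number of pieces needed
    size = math.ceil(len(seq) / n)             # even target piece size
    return [seq[i:i + size] for i in range(0, len(seq), size)]


long_frags = even_fragments("A" * 4000, max_len=1750)   # 3 pieces
short_frags = even_fragments("A" * 4000, max_len=300)   # 14 pieces
```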

| Superkingdom / Kingdom | Long Read (n) | Short Read (n) |
| --- | --- | --- |
| Archaea | 1,290,396 | 5,169,673 |
| Bacteria | 148,337,911 | 569,859,777 |
| Fungal | 8,990,001 | 36,070,662 |
| Viruses | 917,475 | 3,239,241 |
| Total | 159,535,783 | 614,339,353 |

## Anti-Microbial Resistance (AMR)

Data were pulled using the Resistance Gene Identifier (RGI), which predicts antibiotic resistomes from protein or nucleotide data based on homology and single nucleotide polymorphism (SNP) models, using the Comprehensive Antibiotic Resistance Database (CARD). Both the Perfect and Strict algorithms were used to retrieve sequences.

A total of 672,392 AMR-positive sequences and 3,598,448,873 AMR-negative sequences were pulled from the database. Each sequence was then fragmented into chunks of varying lengths between 140 and 475 bp. MMseqs2 was applied to reduce redundancy at 100% and 70% identity for positive and negative sequences, respectively. The final counts are shown below:
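For reference, the MMseqs2 clustering can be driven from Python along these lines. This is a sketch using MMseqs2's `easy-cluster` workflow and its `--min-seq-id` option; the file paths are placeholders and the actual run is commented out because it requires `mmseqs` on the PATH.

```python
import subprocess


def mmseqs_cluster_cmd(fasta, out_prefix, tmp_dir, min_seq_id):
    """Build an MMseqs2 easy-cluster command line for a given identity
    threshold (1.0 and 0.7 for the positive and negative passes above)."""
    return [
        "mmseqs", "easy-cluster", fasta, out_prefix, tmp_dir,
        "--min-seq-id", str(min_seq_id),
    ]


cmd = mmseqs_cluster_cmd("amr_positive_chunks.fasta", "clu100", "tmp", 1.0)
# subprocess.run(cmd, check=True)  # uncomment where MMseqs2 is installed
```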

| AMR Status | Representation (n) |
| --- | --- |
| Negative | 86,866,041 |
| Positive | 3,000,091 |
| Total | 89,866,132 |

Performance was assessed under different sequence lengths, curated into 5 datasets in total. The first contained full-length AMR sequences from the CARD and NCBI AMR Reference databases. The second included partial AMR sequences randomly truncated to 140–450 bp. The remaining three consisted of full AMR sequences flanked on both ends by 50 bp, 100 bp, or randomly distributed lengths of 50–2500 bp. The table below lists the number of sequences in each set.
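The flanked test-set variants can be illustrated as follows. `flank_amr` is a hypothetical helper; in the real datasets the flanks come from the genomic context surrounding each gene, not from an arbitrary background string.

```python
import random


def flank_amr(amr_seq, background, flank_len=None, seed=0):
    """Pad an AMR sequence on both ends with fragments drawn from a
    background sequence. flank_len=None draws a random 50-2500 bp
    length per side, matching the fifth test-set variant above."""
    rng = random.Random(seed)

    def one_flank():
        n = flank_len if flank_len is not None else rng.randint(50, 2500)
        start = rng.randint(0, len(background) - n)
        return background[start:start + n]

    return one_flank() + amr_seq + one_flank()


padded = flank_amr("ATG" * 100, "ACGT" * 2000, flank_len=50)
```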

| Test Set | Representation (n) |
| --- | --- |
| Whole AMR | 14,069 |
| Partial AMR | 20,000 |
| 50 bp flank | 12,093 |
| 100 bp flank | 12,093 |
| Random flanks between 50 and 2500 bp | 12,093 |

In addition to binary classification of sequence resistance, information about the specific antibiotic to which each sequence shows resistance was extracted from the RGI output to assess the models’ ability to perform multi-class classification. To generate negative controls, positional information was used to extract sequence fragments that did not map to antimicrobial resistance genes (ARGs).
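The positional negative-control extraction can be sketched as below. `non_arg_windows` is a toy helper operating on `(start, end)` gene coordinates; the actual coordinates come from the RGI output.

```python
def non_arg_windows(genome_len, arg_intervals, frag_len):
    """Return (start, end) windows of length frag_len that do not
    overlap any annotated ARG interval (hypothetical sketch of the
    negative-control extraction)."""
    windows, pos = [], 0
    # Append a sentinel interval so trailing genome is also windowed.
    for start, end in sorted(arg_intervals) + [(genome_len, genome_len)]:
        while pos + frag_len <= start:
            windows.append((pos, pos + frag_len))
            pos += frag_len
        pos = max(pos, end)   # skip past the ARG region
    return windows


neg = non_arg_windows(1000, [(300, 400)], 100)
```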

| Antibiotic Drug Class | Training (n) | Whole AMR (n) | Partial AMR (n) | 50 bp flank (n) | 100 bp flank (n) | Random flanks between 50 and 2500 bp (n) |
| --- | --- | --- | --- | --- | --- | --- |
| acriflavine | 580 | | | | | |
| aminocoumarin | 16,217 | 297 | 1 | 1 | 1 | 1 |
| aminoglycoside | 13,062 | 383 | | 182 | 182 | 182 |
| bacitracin | 1,128 | | | | | |
| beta_lactam | 10,299 | 5809 | 2850 | 4906 | 490 | 4906 |
| bleomycin | 6 | | | | | |
| fosfomycin | 3,658 | 11 | 1229 | 37 | 37 | 37 |
| fusidic_acid | 51 | 7 | 10 | 9 | 9 | 9 |
| glycopeptide | 65,297 | | | | | |
| kasugamycin | 18 | | | | | |
| linezolid | 11 | | | | | |
| macrolide-lincosamide-streptogramin | 4,994 | 164 | 422 | 131 | 131 | 131 |
| multidrug | 9,359 | 4 | 10 | 7 | 7 | |
| mupirocin | 82 | 8 | | 3 | 3 | 3 |
| peptide | 686 | 109 | 730 | 210 | 210 | 210 |
| phenicol | 2,466 | 84 | 1443 | 59 | 59 | 59 |
| pleuromutilin | 1,271 | 12 | 1271 | 28 | 28 | 28 |
| polymyxin | 13,479 | | | | | |
| puromycin | 3,619 | 19 | 32 | 23 | 23 | 23 |
| quinolone | 22,297 | 183 | 66 | 162 | 162 | 162 |
| rifampin | 8,648 | 29 | 41 | 43 | 43 | 43 |
| roxithromycin | 508 | | | | | |
| streptothricin | 72 | | | | | |
| sulfonamide | 191 | 7 | 750 | 6 | 6 | 6 |
| tetracycline | 100,124 | 81 | 396 | 109 | 109 | 109 |
| triclosan | 13,045 | | | | | |
| trimethoprim | 8,801 | 120 | 111 | 102 | 102 | 102 |
| tunicamycin | 110 | | | | | |
| viomycin | 12 | | | | | |
| Total | 300,091 | 6936 | 9753 | 6018 | 6018* | 6018 |

## Data and Metadata Paths

### Taxonomy - Long Read

- train: nucl_gb_train.csv
- test: nucl_gb_test.csv
- data_processor: data_processor.pkl
- metadata.json: metadata.json

### AMR - Binary

- train: train_process.csv
- test: test_process.csv
- data_processor: data_processor_amr.pkl
- metadata.json: metadata_amr.json

### AMR - Multiclass

- train: train_process_multiclass.csv
- test: test_process_multiclass.csv
- data_processor: data_processor_amr_multiclass.pkl
- metadata.json: metadata_amr_multiclass.json

## Loading in train/val/test sets

All datasets presented here have labels already encoded with a fitted label encoder and are split into train/val/test sets.

To inverse transform the dataset labels to their original label values, use the corresponding data_processor.pkl and metadata.json files.

Here is a sample code snippet using helper functions from the code repository github.com/jhuapl-bio/microbert to inverse transform the dataset labels for ease of use.

```python
import json
import pandas as pd
from analysis.experiment.utils.data_processor import DataProcessor

# Paths for the long-read taxonomy dataset; adjust for other datasets
metadata_path = 'taxonomy/long_read/data_processor/metadata.json'
data_processor_dir = 'taxonomy/long_read/data_processor'
dataset_path = 'taxonomy/long_read/nucl_gb_test.csv'

with open(metadata_path, "r") as f:
    metadata = json.load(f)

sequence_column = metadata["sequence_column"]
labels = metadata["labels"]
data_processor_filename = "data_processor.pkl"

# Rebuild the DataProcessor and load its fitted label encoders
data_processor = DataProcessor(
    sequence_column=sequence_column,
    labels=labels,
    save_file=data_processor_filename,
)
data_processor.load_processor(data_processor_dir)

df = pd.read_csv(dataset_path)

# Inverse transform each encoded label column back to its original values
for label in labels:
    label_col = f"label_level_{label}"
    encoded_values = df[label_col]
    df[label] = data_processor.encoders[label].inverse_transform(encoded_values)
```