| name | hf_name | hf_prefix | hf_path | model_complexity | max_token_scaling | tokenizer_class_name | train_tokenizer_function | tokenization | tokenization_kmer | tokenization_shift | tokenizer_short_name | model_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
prokbert-mini | prokbert-mini | neuralbioinfo | neuralbioinfo/prokbert-mini | 25 | 1 | LCATokenizer | tokenize_function_prokbert | LCA | 6 | 1 | mini | prokbert-mini |
prokbert-mini-long | prokbert-mini-long | neuralbioinfo | neuralbioinfo/prokbert-mini-long | 25 | 0.5 | AutoTokenizer | tokenize_function_prokbert | LCA | 6 | 2 | minil | prokbert-mini-long |
prokbert-mini-c | prokbert-mini-c | neuralbioinfo | neuralbioinfo/prokbert-mini-c | 25 | 1 | AutoTokenizer | tokenize_function_prokbert | LCA | 1 | 1 | minic | prokbert-mini-c |
dnabert2 | DNABERT-2-117M | zhihan1996 | zhihan1996/DNABERT-2-117M | 117 | 0.3333 | AutoTokenizer | tokenize_function_DNABERT | SenPiece | null | null | dnabert2 | dnabert2 |
nt2.5b | nucleotide-transformer-2.5b-multi-species | InstaDeepAI | InstaDeepAI/nucleotide-transformer-2.5b-multi-species | 2,500 | 0.2 | AutoTokenizer | tokenize_function_NT | LCA | 6 | 6 | nt | nt2.5b |
nt500 | nucleotide-transformer-v2-500m-multi-species | InstaDeepAI | InstaDeepAI/nucleotide-transformer-v2-500m-multi-species | 500 | 0.2 | AutoTokenizer | tokenize_function_NT | LCA | 6 | 6 | nt | nt500 |
nt100 | nucleotide-transformer-v2-100m-multi-species | InstaDeepAI | InstaDeepAI/nucleotide-transformer-v2-100m-multi-species | 100 | 0.2 | AutoTokenizer | tokenize_function_NT | LCA | 6 | 6 | nt | nt100 |
nt50 | nucleotide-transformer-v2-50m-multi-species | InstaDeepAI | InstaDeepAI/nucleotide-transformer-v2-50m-multi-species | 50 | 0.2 | AutoTokenizer | tokenize_function_NT | LCA | 6 | 6 | nt | nt50 |
nt250 | nucleotide-transformer-v2-250m-multi-species | InstaDeepAI | InstaDeepAI/nucleotide-transformer-v2-250m-multi-species | 250 | 0.2 | AutoTokenizer | tokenize_function_NT | LCA | 6 | 6 | nt | nt250 |
metagene1 | METAGENE-1 | metagene-ai | metagene-ai/METAGENE-1 | 7,000 | 0.3333 | AutoTokenizer | tokenize_function_evo_metagene | BPE | null | null | mg1 | metagene1 |
evo1-8k | evo-1-8k-base | togethercomputer | togethercomputer/evo-1-8k-base | 7,000 | 1 | AutoTokenizer | tokenize_function_evo_metagene | LCA | 1 | 1 | evo1 | evo1-8k |
# ProkBERT Training Registry
JSON-backed training registry for ProkBERT and related nucleotide foundation models.
Files:
- basemodels.json: model metadata and tokenizer dispatch.
- default_training_parameters.json: default finetuning parameters by model and sequence-length range.
- finetuning_task.json: placeholder task mapping from the workbook.
- task.json: empty sheet placeholder preserved from the workbook.
- manifest.json: simple schema and file manifest.
Conventions:
- `model_id` is the canonical short internal model identifier.
- `basemodel` is preserved from the original workbook.
- `seq_length_min`/`seq_length_max` are raw sequence-length ranges.
- `max_token_length` is the model-facing token-length cap used by training defaults.
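As a minimal sketch, the per-range defaults can be consumed by matching a raw sequence length against a record's range. The field names follow the conventions above; the records and values below are illustrative placeholders, not the actual contents of default_training_parameters.json:

```python
# Illustrative placeholder records; real values live in
# default_training_parameters.json on the Hub.
DEFAULTS = [
    {"model_id": "prokbert-mini", "seq_length_min": 0,
     "seq_length_max": 1024, "max_token_length": 512},
    {"model_id": "prokbert-mini", "seq_length_min": 1025,
     "seq_length_max": 4096, "max_token_length": 2048},
]

def defaults_for(model_id: str, seq_length: int) -> dict:
    """Return the defaults record whose sequence-length range covers seq_length."""
    for rec in DEFAULTS:
        if (rec["model_id"] == model_id
                and rec["seq_length_min"] <= seq_length <= rec["seq_length_max"]):
            return rec
    raise KeyError((model_id, seq_length))

print(defaults_for("prokbert-mini", 300)["max_token_length"])  # 512
```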
This dataset is intended to be consumed directly from the Hugging Face Hub by the ProkBERT helper utilities.
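To illustrate the dispatch pattern, a consumer could parse basemodels.json and resolve a model's tokenizer settings by `model_id`. The record below is inlined from one row of the table above; the `lookup` helper is an assumption for illustration, not part of the published utilities:

```python
import json

# One sample record mirroring a row of the table above; in practice the
# full list is read from basemodels.json on the Hub.
BASEMODELS = json.loads("""
[
  {
    "model_id": "prokbert-mini",
    "hf_path": "neuralbioinfo/prokbert-mini",
    "tokenization": "LCA",
    "tokenization_kmer": 6,
    "tokenization_shift": 1,
    "train_tokenizer_function": "tokenize_function_prokbert"
  }
]
""")

def lookup(model_id: str) -> dict:
    """Return the registry record for a given model_id (hypothetical helper)."""
    for rec in BASEMODELS:
        if rec["model_id"] == model_id:
            return rec
    raise KeyError(model_id)

rec = lookup("prokbert-mini")
print(rec["hf_path"], rec["tokenization_kmer"], rec["tokenization_shift"])
```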