---
tags:
- pytorch_model_hub_mixin
---
# DNAGPT2: Genomic Large Language Model for Compression and Analysis

**DNAGPT2** is a family of autoregressive (decoder-only) transformer models trained on genomic DNA sequences. The models follow the GPT-2 architecture and are trained from scratch on a multi-species genome dataset.

## Model Details

- **Model Type:** Causal language model (decoder-only transformer)
- **Architecture:** GPT-2 Small (see the config sketch after this list)
- **Parameters:** ~86 million
- **Layers:** 12
- **Heads:** 12
- **Embedding Dimension:** 768
- **Context Window:** 1,024 tokens
- **Vocabulary Sizes:** Variants are available with vocabulary sizes of 16, **32** (the variant benchmarked below), 64, 128, 256, 512, 1024, 2048, 4096, and 8192.
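
For reference, the architecture above matches a standard GPT-2 Small setup. A minimal sketch using `transformers.GPT2Config` follows; that the released checkpoints use this exact config class is an assumption.

```python
from transformers import GPT2Config

# Sketch of the architecture listed above as a standard GPT-2 config.
# vocab_size varies by variant (16 ... 8192); the 32 variant is shown.
config = GPT2Config(
    vocab_size=32,
    n_positions=1024,  # context window
    n_embd=768,        # embedding dimension
    n_layer=12,        # transformer layers
    n_head=12,         # attention heads
)
```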

## Intended Use

These models are designed for:

1. **DNA Compression:** Used in conjunction with Arithmetic Encoding (AE) to compress genomic sequences.
2. **Sequence Modeling:** Next-token prediction for DNA sequences.

**Input:** Raw DNA sequences containing the characters `A`, `C`, `G`, `T`.

**Output:** Logits/probabilities for the next token in the sequence.

## Training Data

The models were pretrained on the dataset provided by the authors of **DNABERT-2**.

- **Composition:** 135 genomes covering Vertebrata, Fungi, Protozoa, Invertebrata, and Bacteria.
- **Size:** Approximately 32.5 billion nucleotides.
- **Preprocessing:** The alphabet was restricted to **A, C, G, T**; the letter **N** (unknown/ambiguous nucleotide) was omitted from the training data (see the filtering sketch below).
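
A minimal sketch of this preprocessing step; the exact filtering script is an assumption, only the restriction to A, C, G, T is stated above.

```python
import re

def clean_sequence(seq: str) -> str:
    """Keep only A, C, G, T; drop N and other ambiguity codes."""
    return re.sub(r"[^ACGT]", "", seq.upper())

print(clean_sequence("acgtNNacgt"))  # -> ACGTACGT
```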

## Training Procedure

The models were trained with the PyTorch framework, following the `nanoGPT` recipe.

- **Tokenizer:** Byte-pair encoding (BPE), trained via SentencePiece (see the sketch after this list).
- **Epochs:** 1
- **Optimizer:** AdamW (betas: 0.9, 0.95; weight decay: 0.1)
- **Learning Rate:** Cosine decay (max: 8e-4, min: 8e-5) with linear warmup.
- **Batch Size:** $2^{19}$ (524,288) tokens per step.
- **Hardware:** Single NVIDIA A40 GPU.
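
The tokenizer training step could look like the following; the file names and options here are assumptions, not the authors' published script.

```python
import sentencepiece as spm

# Hypothetical sketch: train a BPE tokenizer on the cleaned DNA corpus.
spm.SentencePieceTrainer.train(
    input="genomes_acgt.txt",   # one cleaned DNA sequence per line
    model_prefix="dnagpt2_bpe",
    model_type="bpe",
    vocab_size=32,              # matches the DNAGPT2_32 variant
    character_coverage=1.0,     # the alphabet is only A, C, G, T
)
```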

## Performance

The models were evaluated on their ability to compress DNA sequences with Arithmetic Encoding, measured in **bits per symbol (bps)**; lower is better.

| Dataset | Metric | DNAGPT2_32 | gzip -9 | Jarvis3 |
| :--- | :--- | :--- | :--- | :--- |
| **Homo sapiens** (T2T-CHM13v2.0) | bits/symbol | 1.470 | 2.022 | 1.384 |
| **M. llanfair...** (Bacteria) | bits/symbol | 1.783 | 2.142 | 1.713 |
| **A. thaliana** (Plant, Chr1) | bits/symbol | 1.876 | 2.161 | 1.702 |

On the evaluated datasets, `DNAGPT2_32` outperforms general-purpose compressors such as gzip and competitive deep learning models such as `hyenaDNA` and `megaDNA`, though the specialized DNA compressor Jarvis3 achieves lower bits/symbol on these three genomes.
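
The bits/symbol figure can be estimated directly from the model's cross-entropy, which arithmetic coding approaches in practice. The helper below is a hypothetical sketch, not part of the released code.

```python
import math

import torch
import torch.nn.functional as F

def bits_per_symbol(model, token_ids, num_nucleotides):
    """Average bits per nucleotide under the model's next-token predictions."""
    with torch.no_grad():
        logits = model(token_ids).logits
    # Position t predicts token t+1, so drop the last logit / first target.
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    targets = token_ids[:, 1:]
    nll_nats = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1).sum()
    total_bits = nll_nats.item() / math.log(2)  # nats -> bits
    # Normalize by nucleotide count, not token count:
    # BPE tokens can cover several bases.
    return total_bits / num_nucleotides
```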

## How to Use

The model is compatible with the Hugging Face `transformers` library.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Select the model variant (e.g., vocab size 128 or 32).
# Replace with the specific repository path if hosted on HF Hub.
hf_model_repository = "vojtam/DNAGPT2_128"

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    hf_model_repository,
    trust_remote_code=True,
).to(device)

tokenizer = AutoTokenizer.from_pretrained(
    hf_model_repository,
    trust_remote_code=True,
)

# Inference example
dna_sequence = "ACGTTGCAAACG"
token_ids = tokenizer.encode(dna_sequence, return_tensors="pt").to(device)

with torch.no_grad():
    logits = model(token_ids).logits

print(f"Input: {dna_sequence}")
print(f"Logits shape: {logits.shape}")
```