Update README.md
By continuously pre-training the LLaMA model with this DNA sequence data, we ensure that the model remains up-to-date with the latest genomic discoveries and maintains its ability to generalize well across different genomics tasks. This continuous learning process helps to improve the model's accuracy and robustness in handling complex biological sequences.
```python
from transformers import AutoTokenizer, AutoConfig, AutoModel
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
```
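To make the continual pre-training step concrete, here is a minimal, hedged sketch of the data-preparation side: long DNA sequences are typically split into fixed-length chunks that fit the model's context window before tokenization. The names `chunk_dna` and `CHUNK_SIZE` are illustrative only and not part of this repo; the actual chunk length and Trainer wiring may differ.

```python
# Illustrative pre-processing sketch (hypothetical helper, not the repo's API):
# before continued pre-training, a long DNA string is split into fixed-length
# chunks that fit the model's context window.

CHUNK_SIZE = 512  # assumed context length; the repo may use a different value

def chunk_dna(sequence: str, chunk_size: int = CHUNK_SIZE) -> list:
    """Split a DNA string (A/C/G/T) into non-overlapping fixed-length chunks."""
    sequence = sequence.strip().upper()
    return [sequence[i:i + chunk_size] for i in range(0, len(sequence), chunk_size)]

# Each chunk would then be tokenized and passed to the Trainer from the imports
# above, roughly like:
#   collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM
#   trainer = Trainer(model=model, args=training_args,
#                     train_dataset=tokenized_chunks, data_collator=collator)

chunks = chunk_dna("ACGT" * 300)  # 1200 bases -> chunks of 512, 512, and 176
print(len(chunks), len(chunks[0]), len(chunks[-1]))
```

Setting `mlm=False` in `DataCollatorForLanguageModeling` yields next-token (causal) language-modeling labels, which matches LLaMA-style pre-training rather than BERT-style masked modeling.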