sarahalamdari committed
Commit 2c44f9a · verified · 1 parent: 95c2791

Update transformers version in installation instructions

Files changed (1): README.md (+15, −5)
README.md CHANGED

````diff
@@ -8,6 +8,16 @@ tags:
 datasets:
 - microsoft/Dayhoff
 ---
+---
+license: mit
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- protein-generation
+- jamba
+datasets:
+- microsoft/Dayhoff
+---
 
 # Model Card for Dayhoff
 
@@ -75,12 +85,12 @@ To import from HuggingFace, you will need to install these versions:
 
 ```bash
 uv pip install datasets==3.2.0 #for HF datasets
-uv pip install transformers==4.51.3
+uv pip install transformers==4.51.0
 uv pip install huggingface_hub~=0.34.4
 ```
 
 **Sample protein generation code:**
-
+
 
 ```py
 
@@ -90,8 +100,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
 set_seed(0)
 torch.set_default_device("cuda")
 
-model = AutoModelForCausalLM.from_pretrained('microsoft/Dayhoff-3b-GR-HM')
-tokenizer = AutoTokenizer.from_pretrained('microsoft/Dayhoff-3b-GR-HM', trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("microsoft/Dayhoff-3b-GR-HM-41000").to("cuda")
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Dayhoff-3b-GR-HM-41000", trust_remote_code=True)
 
 
 inputs = tokenizer(tokenizer.bos_token, return_tensors="pt", return_token_type_ids=False)
@@ -253,4 +263,4 @@ The code and datasets released in this repository are provided for research and
 If you use the code, data, models, or results. please cite our [preprint](https://aka.ms/dayhoff/preprint).
 
 ## Data Summary
-https://huggingface.co/microsoft/Dayhoff-3b-GR-HM/blob/main/data_summary_card.md
+https://huggingface.co/microsoft/Dayhoff-3b-GR-HM-41000/blob/main/data_summary_card.md
````