JuIm committed
Commit 4f69217 · verified · 1 Parent(s): 5c35c7f

Update README.md

Files changed (1)
  1. README.md +6 -28
README.md CHANGED
@@ -5,47 +5,25 @@ tags:
  model-index:
  - name: ProGemma
    results: []
+ pipeline_tag: text-generation
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  # ProGemma

- This model is a fine-tuned version of [JuIm/ProGemma](https://huggingface.co/JuIm/ProGemma) on an unknown dataset.
+ This is a custom configuration of Google's Gemma 2 model that was pre-trained on amino acid sequences of lengths 0 to 512.

  ## Model description

- More information needed
-
  ## Intended uses & limitations

- More information needed
+ The purpose of this model was to generate amino acid sequences.

- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.4
- - training_steps: 5000
-
- ### Training results

  ### Framework versions

  - Transformers 4.42.4
  - Pytorch 2.3.1+cu121
- - Tokenizers 0.19.1
+ - Tokenizers 0.19.1
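The updated card declares `pipeline_tag: text-generation` but stops short of a usage example. Below is a minimal sketch of how the checkpoint could be loaded and sampled with the `transformers` release the card lists; the prompt, sampling settings, and output handling are illustrative assumptions, not documented behavior of ProGemma.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: load the checkpoint from the Hub by its repo id.
tokenizer = AutoTokenizer.from_pretrained("JuIm/ProGemma")
model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")

# Seed generation with the start of an amino acid sequence (an assumed
# prompt format; the card does not document one) and sample a continuation.
inputs = tokenizer("MKT", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # the card says pre-training covered lengths 0 to 512
    do_sample=True,
    top_k=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling (`do_sample=True`) rather than greedy decoding is used here because sequence diversity is usually the point of generative protein models, but the right decoding strategy for this checkpoint is not specified in the card.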