JuIm committed (verified)
Commit b9eeb7a · 1 Parent(s): 27d9662

End of training

README.md CHANGED
@@ -12,25 +12,37 @@ should probably proofread and complete it, then remove this comment. -->
 # ProGemma

- This is a custom configuration of Google's Gemma 2 model that is being pre-trained on amino acid sequences of lengths 0 to 512. Updates are made regularly as the model hits new checkpoints. As of 08.10.2024, the model has been trained on about 40% of the dataset.

- The model generates amino acids on a letter-by-letter basis.

- Current training loss is about 2.7. Preliminary evaluation of generated sequences with AlphaFold 3 shows pTM scores of ~0.4 and average pLDDT scores of ~60. After training is complete, a proper evaluation will be done to see whether the sequences fold into proteins with low free energy. Current perplexity scores are on par with NVIDIA's ProtGPT2.
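For context on the perplexity claim, a rough way to measure it with this checkpoint is sketched below; the held-out sequences shown are placeholders, not the evaluation data actually used:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")
model.eval()

# Placeholder held-out sequences; the actual evaluation set is not part of this repository.
held_out = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERSGAR",
]

total_nll, total_tokens = 0.0, 0
with torch.no_grad():
    for seq in held_out:
        ids = tokenizer(seq, return_tensors="pt").input_ids
        # The causal LM loss is the mean negative log-likelihood per predicted token.
        loss = model(input_ids=ids, labels=ids).loss
        total_nll += loss.item() * (ids.shape[1] - 1)
        total_tokens += ids.shape[1] - 1

print(f"perplexity: {math.exp(total_nll / total_tokens):.2f}")
```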

- The purpose of this model was to see whether I could develop an alternative to NVIDIA's ProtGPT2. ProGemma also serves as a stepping stone to a new model that will utilize control tags to generate proteins based on function.

- To use this model yourself with the pipeline from the Transformers package, see the code below:

- from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

- model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
- tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")

- progemma = pipeline("text-generation", model=model, tokenizer=tokenizer)

- sequence = progemma("<bos>", top_k=950, max_length=100, num_return_sequences=1, do_sample=True, repetition_penalty=1.2, eos_token_id=21, pad_token_id=22, bos_token_id=20)
- print(sequence)
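The pipeline call returns a list of dictionaries rather than a plain string, so extracting the generated residues looks roughly like the sketch below. The exact special-token strings (`<bos>`, `<eos>`, `<pad>`) are an assumption about this custom tokenizer, not something documented here:

```python
# The text-generation pipeline returns a list of dicts, one per returned sequence.
generated = sequence[0]["generated_text"]

# Hypothetical clean-up: strip the special tokens so only amino-acid letters remain.
for token in ("<bos>", "<eos>", "<pad>"):
    generated = generated.replace(token, "")

print(generated)
```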

 ### Framework versions
 
 # ProGemma

+ This model is a fine-tuned version of [JuIm/ProGemma](https://huggingface.co/JuIm/ProGemma) on an unknown dataset.

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.001
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.4
+ - training_steps: 7000
+
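Read as a Trainer configuration, the hyperparameters above correspond roughly to the sketch below; this is a reconstruction from the list, not the training script that was actually run, and the output path is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; the stated Adam betas and epsilon are the
# defaults of the Trainer's standard AdamW optimizer, so they are not set explicitly.
args = TrainingArguments(
    output_dir="progemma-checkpoints",  # placeholder output path
    learning_rate=1e-3,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.4,
    max_steps=7000,
)

# `args` would then be passed to a Trainer together with the tokenized
# amino-acid dataset, which is not included in this repository.
```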
+ ### Training results


 ### Framework versions

model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:988ea0907838961457ed1bf589c6aacc3a5b91fb6636e52c115ed7a70464c365
+ oid sha256:163530de19b777cbaad4666196e0b8b539c6c4cae40efa85a9ca50b41c0ea22c
 size 1101271208
runs/Aug11_18-24-18_7b9bd87a7474/events.out.tfevents.1723400661.7b9bd87a7474.1022.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89ffabc153601523089106f0f1f2bfd6a973089b07bac98adf2e242020648c09
+ size 1481827
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3e5834fc92e47de97f2a3b62e481b968324d03c14c57dcb4235c92f8c7f81b06
+ oid sha256:79721e214f0af3d96f868fe1ce86d6d34eaccfa023e46fd9c2f7b5d1e130cfea
 size 5112