JuIm committed on
Commit b137af4 · verified · 1 Parent(s): 194139d

End of training

README.md CHANGED
@@ -12,25 +12,36 @@ should probably proofread and complete it, then remove this comment. -->
  # ProGemma

- This is a custom configuration of Google's Gemma 2 model that is being pre-trained on amino acid sequences of lengths 0 to 512. Updates are made regularly as the model reaches new checkpoints. As of 08.11.2024, the model has been trained on about 40% of the dataset.

- The model generates amino acid sequences one letter (residue) at a time.

- The current training loss is about 2.7. Preliminary evaluation of generated sequences with AlphaFold 3 shows pTM scores of ~0.4 and average pLDDT scores of ~60. After training is complete, a proper evaluation will be done to check whether the generated sequences fold into proteins with low free energy. Current perplexity scores are on par with ProtGPT2.

- The purpose of this model was to see whether I could develop an alternative to ProtGPT2. ProGemma also serves as a stepping stone toward a new model that will use control tags to generate proteins based on function.

- To use this model yourself with the pipeline from the Transformers package, see the code below:

- from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

- model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
- tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")

- progemma = pipeline("text-generation", model=model, tokenizer=tokenizer)

- sequence = progemma("<bos>", top_k=950, max_length=100, num_return_sequences=1, do_sample=True, repetition_penalty=1.2, eos_token_id=21, pad_token_id=22, bos_token_id=20)

- print(sequence)

  ### Framework versions

  # ProGemma

+ This model is a fine-tuned version of [JuIm/ProGemma](https://huggingface.co/JuIm/ProGemma) on an unknown dataset.

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 0.001
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.4
+ - training_steps: 7000

+ ### Training results

  ### Framework versions
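
For reference, the hyperparameters listed in the updated README above map fairly directly onto Hugging Face `TrainingArguments`. The sketch below is a reconstruction under assumptions, not the training script from this commit: the output path, the tiny placeholder dataset, and the choice of `DataCollatorForLanguageModeling` are illustrative, and only the numeric values come from the card.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# Dataset, output_dir, and collator choice are placeholders, not from the commit.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")
model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")

# Tiny dummy dataset of amino acid sequences, purely so the sketch runs end to end.
raw = Dataset.from_dict({"text": ["MKTAYIAKQR", "GSHMLEDPVA"]})
train_dataset = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="progemma-finetune",  # placeholder path
    learning_rate=1e-3,              # learning_rate: 0.001
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    warmup_ratio=0.4,                # lr_scheduler_warmup_ratio: 0.4
    max_steps=7000,                  # training_steps: 7000
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With `warmup_ratio=0.4` over 7,000 steps, roughly the first 2,800 steps ramp the learning rate up before it decays linearly to zero.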
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7ff3b3bc883eda8cb4fc5bac5347c914d812e15141a68fc82dc021789c31f0fb
  size 1101271208

  version https://git-lfs.github.com/spec/v1
+ oid sha256:76e55b5e3a90e593f84f7769fb0b636a0ddac199f42539b34e6a105ee6557be9
  size 1101271208
runs/Aug13_17-14-46_52f86e41e42d/events.out.tfevents.1723569291.52f86e41e42d.425.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7a5f5790c04fcf557b613bf13e88c12acbaa0ef34f6b9d5cbf14b09c2ad8ad4
+ size 1481827
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:742c42af90c31fc49784ff9a34a7741b715d49a191c1e40f8e224c5b19059d6a
  size 5112

  version https://git-lfs.github.com/spec/v1
+ oid sha256:ebefd7ce04a84a98f111860c1f1e4632b765cdeb51a22d0793b422917db18694
  size 5112