JuIm committed (verified) · Commit 051c53a · 1 parent: bc3e43f

Update README.md

Files changed (1): README.md (+20, -0)
# ProGemma

This is a custom configuration of Google’s Gemma 2 LLM being pre-trained on amino acid sequences of 512 AA or fewer in length. Periodic updates are made to this page as training reaches new checkpoints.

The purpose of this model is to investigate the differences between ProGemma and ProtGPT (a GPT-2 architecture) with respect to sequence generation.

As of 8.22.2024, ProGemma has been trained on ~80% of the training dataset and is still on epoch 1. Training loss is ~2.6. Perplexity scores, as well as AlphaFold 3’s pTM, pLDDT, and ipTM scores, are generally in line with ProtGPT’s scores for sequence lengths < 250 AA, although the testing phase is still very early. I have yet to test sequence lengths > 250 AA, and more robust testing is also needed for lengths < 250 AA. In my very preliminary testing, HHblits scores of ~0.1 are achieved relatively easily.

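For reference, perplexity figures like those above can be computed from a causal LM's mean cross-entropy loss. A minimal sketch, assuming a model and tokenizer loaded via `from_pretrained` (the `perplexity` helper is hypothetical, not part of this repo):

```python
import math
import torch

# Hypothetical helper (not part of this repo): perplexity of one sequence
# under a causal language model, computed as exp(mean cross-entropy loss).
def perplexity(model, tokenizer, seq):
    ids = tokenizer(seq, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return math.exp(loss.item())
```

With ProGemma this would be called as `perplexity(model, tokenizer, "MKV...")` after loading the model and tokenizer as shown in the generation example.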
Controlled generation is not yet a capability of this model. Adding it would be a way to significantly improve generation, since, in principle, a sequence that performs a given function or resides in a particular cellular location could then be generated.

In sequence generation, a top_k of 950 appears to work well, as it prevents repetition. The same is seen in ProtGPT.

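To illustrate what the top_k parameter does, here is a simplified sketch (not the actual Transformers internals): at each sampling step, only the k highest-scoring tokens remain candidates.

```python
# Simplified illustration of top-k filtering (not the Transformers
# implementation): keep only the k highest-scoring tokens as candidates.
def top_k_candidates(scores, k):
    # scores: mapping of token -> logit; returns the k best candidates
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

# With top_k=2, only the two most likely tokens can be sampled
print(top_k_candidates({"A": 2.5, "G": 1.1, "V": 0.3, "L": -0.4}, 2))
# → {'A': 2.5, 'G': 1.1}
```

A large k like 950 keeps most of the vocabulary available while still cutting off the lowest-probability tail, which is what discourages degenerate repetition.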
Below is example code using the Transformers library to generate sequences with ProGemma:

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Load the pre-trained model and the matching amino-acid tokenizer
model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")
progemma = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample one sequence; top_k=950 helps prevent repetition
sequence = progemma("<bos>", top_k=950, max_length=100, num_return_sequences=1,
                    do_sample=True, repetition_penalty=1.2,
                    eos_token_id=21, pad_token_id=22, bos_token_id=20)

s = sequence[0]["generated_text"]
print(s)
```

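The pipeline's `generated_text` includes the special tokens themselves (e.g. the `<bos>` prompt). A small post-processing sketch to strip them before downstream use (the `clean_sequence` helper is hypothetical, not part of this repo):

```python
import re

# Hypothetical post-processing helper: strip special tokens like <bos>/<eos>
# and surrounding whitespace from the pipeline's generated_text.
def clean_sequence(text):
    return re.sub(r"<[^>]+>", "", text).strip()

print(clean_sequence("<bos>MKVLAAGT<eos>"))  # → MKVLAAGT
```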
 
### Framework versions