---
base_model: JuIm/ProGemma
tags:
- generated_from_trainer
model-index:
- name: ProGemma
  results: []
---


# ProGemma

### Please see JuIm/ProGemma2 instead

This is a custom configuration of Google's Gemma 2 LLM that is being pre-trained on amino acid sequences of 512 AA or fewer in length. Periodic updates are made to this page as training reaches new checkpoints.

The purpose of this model is to investigate the differences between ProGemma and ProtGPT (GPT-2 architecture) as they pertain to sequence generation.
As of 8.22.2024, ProGemma has been trained on ~80% of the training dataset and is still on epoch 1. Training loss is ~2.6. Perplexity scores, as well as AlphaFold 3's pTM, pLDDT, and ipTM scores, are generally in line with ProtGPT's for sequence lengths < 250 AA, although the testing phase is still very early. I have yet to test sequence lengths > 250 AA, and more robust testing is also needed for lengths < 250 AA. In my very preliminary testing, HHblits e-values of ~0.1 are achieved relatively easily.
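Perplexity, mentioned above, is derived from the model's average per-token negative log-likelihood on held-out sequences. A minimal sketch of the computation (pure Python; the per-token log-probabilities here are made-up placeholders, not real model output):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Hypothetical per-token log-probabilities for one sequence
log_probs = [-1.2, -0.8, -2.5, -0.4, -1.6]
print(perplexity(log_probs))  # exp(1.3) ≈ 3.67
```

Lower values indicate the model assigns higher probability to the observed residues.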

Controlled generation is not a capability of this model. It would, however, be a way to significantly improve generation, since in principle a sequence that performs a given function or resides in a particular cellular location could then be generated directly.

In sequence generation, a top_k of 950 appears to work well, as it prevents repetition. The same behavior is seen in ProtGPT.
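Top-k sampling keeps only the k highest-scoring tokens at each step and renormalizes before sampling, which truncates the long tail of unlikely residues. A minimal sketch of the filtering step (plain Python; the toy logits and vocabulary size are made up for illustration):

```python
import math

def top_k_filter(logits, k):
    """Mask (set to -inf) all but the k largest logits."""
    threshold = sorted(logits, reverse=True)[k - 1]
    return [x if x >= threshold else float("-inf") for x in logits]

def softmax(logits):
    """Renormalize the surviving logits into a probability distribution."""
    m = max(x for x in logits if x != float("-inf"))
    exps = [math.exp(x - m) if x != float("-inf") else 0.0 for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits over a 5-token vocabulary; keep only the top 3
logits = [2.0, 0.5, -1.0, 1.5, 0.2]
probs = softmax(top_k_filter(logits, 3))
# tokens outside the top 3 receive probability 0
```

With top_k=950 on this model's amino-acid vocabulary, the effect is milder than in this toy example, but it still cuts off the least likely continuations.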

Below is example code using the Transformers library to generate sequences with ProGemma.

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Load the model and its amino-acid tokenizer
model = AutoModelForCausalLM.from_pretrained("JuIm/ProGemma")
tokenizer = AutoTokenizer.from_pretrained("JuIm/Amino-Acid-Sequence-Tokenizer")

progemma = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample one sequence of up to 100 tokens, starting from the <bos> token
sequence = progemma(
    "<bos>",
    top_k=950,
    max_length=100,
    num_return_sequences=1,
    do_sample=True,
    repetition_penalty=1.2,
    eos_token_id=21,
    pad_token_id=22,
    bos_token_id=20,
)

s = sequence[0]["generated_text"]
print(s)
```


### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Tokenizers 0.19.1