---
license: bsd-3-clause
---

A modification of the base ProGen2-large model by [Nijkamp et al.](https://arxiv.org/abs/2206.13517). The `vocab_size` is reduced from 51200 to 32, matching ProGen2-small and ProGen2-medium. The model follows the original code of [ProGen2](https://github.com/salesforce/progen/tree/main/progen2).
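One way such a vocabulary resize can be carried out (a sketch only, not the actual conversion script used here) is to truncate the vocab-sized weight matrices, i.e. the token embedding and the output projection, to the rows of the retained tokens. The sketch below assumes the 32 retained tokens occupy IDs 0–31, which is a simplifying assumption:

```python
# Sketch: shrink a vocab-sized weight matrix from 51200 rows to 32.
# Assumes the retained token IDs are 0..31 (hypothetical; the real
# mapping depends on how the ProGen2 tokenizer assigns IDs).

def truncate_vocab(weight, new_vocab_size=32):
    """weight: list of per-token rows (vocab_size x hidden_dim)."""
    return weight[:new_vocab_size]

old_vocab, hidden = 51200, 4  # tiny hidden dim for illustration
embedding = [[float(i)] * hidden for i in range(old_vocab)]

small = truncate_vocab(embedding)
print(len(small))  # -> 32
```

In a real checkpoint the same slicing would be applied to the embedding and LM-head tensors before saving, with `vocab_size` updated in the config to match.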


Example usage:

```python
from models.modeling_progen import ProGenForCausalLM
from tokenizers import Tokenizer
import torch

# load model and tokenizer
model = ProGenForCausalLM.from_pretrained("IDEA-XL/progen2-large", torch_dtype="auto")
model.eval()
tokenizer = Tokenizer.from_pretrained("IDEA-XL/progen2-large")
tokenizer.no_padding()

# prepare input ("1" is ProGen2's sequence-start token)
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass (inference only, so no gradients are needed)
with torch.no_grad():
    logits = model(input_ids).logits
```
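The logits have one row per input position; the row for the last position scores each of the 32 vocabulary tokens as the candidate next residue. How a next token is then chosen can be sketched in pure Python, independent of the model (the toy 5-token logits row below is an illustration, not real model output):

```python
import math

def softmax(row):
    # numerically stable softmax over one logits row
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_next(last_logits):
    # greedy decoding: pick the index of the highest-scoring token
    return max(range(len(last_logits)), key=lambda i: last_logits[i])

row = [0.1, 2.0, -1.0, 0.5, 1.5]  # toy logits for a 5-token vocabulary
probs = softmax(row)
print(greedy_next(row))  # -> 1
```

For sequence generation one would instead sample from the softmax distribution (often with a temperature), append the chosen token, and repeat.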