--- |
|
|
license: bsd-3-clause |
|
|
--- |
|
|
|
|
|
Modification of the base ProGen2-large model by [Nijkamp et al.](https://arxiv.org/abs/2206.13517). The `vocab_size` is reduced from 51200 to 32, matching ProGen2-small and ProGen2-medium. The model follows the original code of [ProGen2](https://github.com/salesforce/progen/tree/main/progen2).
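The checkpoint itself already has the resized vocabulary, so no user action is needed. For illustration only, here is a minimal, hypothetical sketch of truncating a padded embedding matrix from 51200 rows down to 32, under the assumption that the real tokens occupy the first 32 rows and the rest are unused padding (the toy hidden size of 64 is arbitrary):

```python
import torch
import torch.nn as nn

# toy stand-in for a padded token embedding: 51200 rows, hidden size 64
old_embed = nn.Embedding(51200, 64)

# keep only the first 32 rows, assuming those are the tokens in actual use
new_embed = nn.Embedding(32, 64)
with torch.no_grad():
    new_embed.weight.copy_(old_embed.weight[:32])
```

The output projection (`lm_head`) would need the same truncation so logits stay aligned with the tokenizer.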
|
|
|
|
|
|
|
|
Example usage: |

```python
from models.modeling_progen import ProGenForCausalLM
from tokenizers import Tokenizer
import torch

# load model and tokenizer
model = ProGenForCausalLM.from_pretrained("IDEA-XL/progen2-large", torch_dtype="auto")
tokenizer = Tokenizer.from_pretrained("IDEA-XL/progen2-large")
tokenizer.no_padding()

# prepare input (add a batch dimension)
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).view([1, -1]).to(model.device)

# forward
with torch.no_grad():
    logits = model(input_ids).logits
```
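The resulting `logits` have shape `(batch, seq_len, 32)`, one score per vocabulary token at each position. A minimal, model-free sketch of turning the final position's logits into a next-token prediction (a random dummy tensor stands in for real model output here, so this runs without downloading the checkpoint):

```python
import torch

# dummy logits standing in for model output: (batch=1, seq_len=15, vocab=32)
logits = torch.randn(1, 15, 32)

# next-token distribution over the 32-token vocabulary, from the last position
probs = torch.softmax(logits[0, -1], dim=-1)
next_token_id = int(torch.argmax(probs))
```

For open-ended sequence generation, sampling from `probs` (or using the model's `generate` method, if available in this custom implementation) replaces the greedy `argmax`.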