---
license: afl-3.0
datasets:
- bloyal/uniref100
pipeline_tag: fill-mask
library_name: transformers
tags:
- biology
- protein-language-model
- protein
---
|
|
# ProtAlbert |
|
|
|
|
|
ProtAlbert is a protein Language Model (pLM) pretrained on UniRef100 with a masked language modeling (MLM) objective. It is suitable for extracting embeddings (feature extraction) as well as for fill-mask prediction. The model was developed by Ahmed Elnaggar et al.; more information can be found in the [GitHub repository](https://github.com/agemagician/ProtTrans) and in the [accompanying paper](https://ieeexplore.ieee.org/document/9477085). This repository is a fork of their [Hugging Face repository](https://huggingface.co/Rostlab/prot_albert/tree/main).
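As in the examples below, input sequences are given as space-separated uppercase residues, with rare or ambiguous amino acids (U, Z, O, B) mapped to X. A minimal preprocessing sketch (the raw sequence is a made-up example):

```python
import re

# Hypothetical raw amino-acid sequence.
raw_sequence = "DLIPTSSKLVVLDTSLQVKKAFFALVT"

# The tokenizer expects single-letter residues separated by spaces.
spaced = " ".join(raw_sequence)

# Map rare/ambiguous residues to X, as in the feature-extraction example below.
spaced = re.sub(r"[UZOB]", "X", spaced)

print(spaced)  # D L I P T S S K L V V L D T S L Q V K K A F F A L V T
```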
|
|
|
|
|
|
|
|
## Inference examples
|
|
|
|
|
Example for masked language modeling: |
|
|
|
|
|
```python
>>> from transformers import AlbertForMaskedLM, AlbertTokenizer, pipeline

>>> tokenizer = AlbertTokenizer.from_pretrained("virtual-human-chc/prot_albert", do_lower_case=False)
>>> model = AlbertForMaskedLM.from_pretrained("virtual-human-chc/prot_albert")
>>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
>>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

[{'score': 0.10074187070131302, 'token': 13, 'token_str': 'L', 'sequence': 'D L I P T S S K L V V L D T S L Q V K K A F F A L V T'},
 {'score': 0.08413360267877579, 'token': 14, 'token_str': 'S', 'sequence': 'D L I P T S S K L V V S D T S L Q V K K A F F A L V T'},
 {'score': 0.07617155462503433, 'token': 18, 'token_str': 'V', 'sequence': 'D L I P T S S K L V V V D T S L Q V K K A F F A L V T'},
 {'score': 0.06521160155534744, 'token': 19, 'token_str': 'T', 'sequence': 'D L I P T S S K L V V T D T S L Q V K K A F F A L V T'},
 {'score': 0.06321343779563904, 'token': 15, 'token_str': 'A', 'sequence': 'D L I P T S S K L V V A D T S L Q V K K A F F A L V T'}]
```
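The pipeline wraps a plain forward pass through the model. A minimal sketch of reproducing the same top-5 prediction without the pipeline, by locating the masked position and taking a softmax over the vocabulary there:

```python
import torch
from transformers import AlbertForMaskedLM, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("virtual-human-chc/prot_albert", do_lower_case=False)
model = AlbertForMaskedLM.from_pretrained("virtual-human-chc/prot_albert")
model.eval()

sequence = 'D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T'
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and convert its logits to probabilities.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
probs = logits[0, mask_index].softmax(dim=-1)

# Print the five most likely residues, mirroring the pipeline output above.
top = probs.topk(5)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(tokenizer.decode([token_id.item()]), float(score))
```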
|
|
|
|
|
Example for feature extraction: |
|
|
|
|
|
```python
import re

from transformers import AutoModel, AlbertTokenizer, pipeline

tokenizer = AlbertTokenizer.from_pretrained("virtual-human-chc/prot_albert", do_lower_case=False)
model = AutoModel.from_pretrained("virtual-human-chc/prot_albert")

# device=0 runs on the first GPU; drop the argument to run on CPU.
fe = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=0)

sequences_Example = ["A E T C Z A O", "S K T Z P"]

# Replace rare/ambiguous amino acids (U, Z, O, B) with X.
sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]

embedding = fe(sequences_Example)

print(embedding)
```
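The pipeline returns per-token embeddings for each input sequence, including the special tokens added by the tokenizer. A common follow-up is to mean-pool over the residue positions to obtain a fixed-size per-protein vector; a minimal sketch, assuming the `embedding` variable from above and that the first and last positions are special tokens:

```python
import numpy as np

per_protein = []
for emb in embedding:
    # Each entry is a nested list of shape (1, seq_len, hidden) or (seq_len, hidden).
    arr = np.asarray(emb)
    arr = arr.reshape(-1, arr.shape[-1])
    # Assumption: the first and last positions are special tokens; drop them.
    residues = arr[1:-1]
    per_protein.append(residues.mean(axis=0))

print(per_protein[0].shape)  # (hidden_size,)
```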
|
|
|
|
|
## Copyright
|
|
|
|
|
Code derived from https://github.com/agemagician/ProtTrans is licensed under the MIT License, Copyright (c) 2025 Ahmed Elnaggar. The ProtTrans pretrained models are released under the terms of the [Academic Free License v3.0](https://choosealicense.com/licenses/afl-3.0/), Copyright (c) 2025 Ahmed Elnaggar. All other code is licensed under the MIT License, Copyright (c) 2025 Maksim Pavlov.