pavm595 committed
Commit f79e09e · verified · 1 Parent(s): f6ce6ba

Update README.md

Files changed (1)
  1. README.md +9 -24
README.md CHANGED
@@ -1,16 +1,16 @@
  ---
  tags:
- - protein language model
+ - protein-language-model
  - protein
  datasets:
- - Uniref100
+ - bloyal/uniref100
  ---

  # ProtBert model

  Pretrained model on protein sequences using a masked language modeling (MLM) objective. It was introduced in
  [this paper](https://doi.org/10.1101/2020.07.12.199554) and first released in
- [this repository](https://github.com/agemagician/ProtTrans). This model is trained on uppercase amino acids: it only works with capital letter amino acids.
+ [this repository](https://github.com/agemagician/ProtTrans). This repository is a fork of their [HuggingFace repository](https://huggingface.co/Rostlab/prot_bert/tree/main). This model is trained on uppercase amino acids: it only works with capital letter amino acids.


  ## Model description
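
The new intro keeps the upstream warning that the model only handles uppercase, space-separated amino acids. A minimal preparation sketch follows; `prepare_sequence` is a hypothetical helper name, and the `[UZOB]` to `X` mapping reuses what the feature-extraction example further down does:

```python
import re

def prepare_sequence(seq: str) -> str:
    """Hypothetical helper: format a raw amino-acid string for ProtBert."""
    seq = seq.upper()                   # the model is trained on capital-letter residues only
    seq = " ".join(seq)                 # the tokenizer expects space-separated residues
    return re.sub(r"[UZOB]", "X", seq)  # map rare/ambiguous residues to X, as the README examples do

print(prepare_sequence("aetczao"))  # -> A E T C X A X
```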
@@ -38,8 +38,8 @@ You can use this model directly with a pipeline for masked language modeling:

  ```python
  >>> from transformers import BertForMaskedLM, BertTokenizer, pipeline
- >>> tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False )
- >>> model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert")
+ >>> tokenizer = BertTokenizer.from_pretrained("virtual-human-chc/prot_bert", do_lower_case=False)
+ >>> model = BertForMaskedLM.from_pretrained("virtual-human-chc/prot_bert")
  >>> unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
  >>> unmasker('D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T')

@@ -70,8 +70,8 @@ Here is how to use this model to get the features of a given protein sequence in
  ```python
  from transformers import BertModel, BertTokenizer
  import re
- tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False )
- model = BertModel.from_pretrained("Rostlab/prot_bert")
+ tokenizer = BertTokenizer.from_pretrained("virtual-human-chc/prot_bert", do_lower_case=False)
+ model = BertModel.from_pretrained("virtual-human-chc/prot_bert")
  sequence_Example = "A E T C Z A O"
  sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)
  encoded_input = tokenizer(sequence_Example, return_tensors='pt')
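
The hunk stops at tokenization. A sketch of the usual next step, standard `transformers`/PyTorch usage rather than anything this commit adds: the encoded input is passed through the model and per-residue embeddings are read from `last_hidden_state`.

```python
import torch

# Standard usage sketch (not part of this commit): run the encoder without gradients.
with torch.no_grad():
    output = model(**encoded_input)

# Per-residue embeddings; shape (batch, sequence_length, hidden_size).
embeddings = output.last_hidden_state
```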
@@ -121,21 +121,6 @@ Test results :
  | CB513 | 81 | 66 | | |
  | DeepLoc | | | 79 | 91 |

- ### BibTeX entry and citation info
-
- ```bibtex
- @article{Elnaggar2020.07.12.199554,
- author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and Bhowmik, Debsindhu and Rost, Burkhard},
- title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
- elocation-id = {2020.07.12.199554},
- year = {2020},
- doi = {10.1101/2020.07.12.199554},
- publisher = {Cold Spring Harbor Laboratory},
- abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: https://github.com/agemagician/ProtTrans},
- URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
- eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
- journal = {bioRxiv}
- }
- ```
-
- > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
+ # Copyright
+
+ Code derived from https://github.com/agemagician/ProtTrans is licensed under the MIT License, Copyright (c) 2025 Ahmed Elnaggar. The ProtTrans pretrained models are released under the terms of the [Academic Free License v3.0](https://choosealicense.com/licenses/afl-3.0/), Copyright (c) 2025 Ahmed Elnaggar. All other code is licensed under the MIT License, Copyright (c) 2025 Maksim Pavlov.
 