---
license: mit
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
base_model:
- westlake-repl/ProTrek_650M_UniRef50
---
|
|
|
|
|
# ProTrek_650M_UniRef50_text_encoder
|
|
This model is a SentenceTransformer-compatible version of the ProTrek_650M_UniRef50 text encoder. It has been converted for use with the sentence-transformers library, enabling easy integration into semantic similarity tasks such as semantic search, clustering, and feature extraction.
|
|
|
|
|
**GitHub repo: https://github.com/westlake-repl/ProTrek**

**Hugging Face repo: https://huggingface.co/westlake-repl/ProTrek_650M_UniRef50**
|
|
|
|
|
## Model Details
|
|
|
|
|
### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [westlake-repl/ProTrek_650M_UniRef50](https://huggingface.co/westlake-repl/ProTrek_650M_UniRef50)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
|
|
|
|
|
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
  (3): Normalize()
)
```
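Conceptually, the modules above apply CLS-token pooling to the transformer output, a linear projection from 768 to 1024 dimensions with identity activation, and L2 normalization. A minimal PyTorch sketch of that pipeline, using random tensors in place of the real BertModel output and trained Dense weights (illustrative only):

```python
import torch

torch.manual_seed(0)

# Toy stand-ins for the real module shapes (illustrative only).
batch, seq_len, hidden, out_dim = 2, 16, 768, 1024
token_embeddings = torch.randn(batch, seq_len, hidden)  # stand-in for BertModel output
projection = torch.nn.Linear(hidden, out_dim)           # stand-in for the Dense module

cls = token_embeddings[:, 0]                  # (1) Pooling: take the [CLS] token
projected = projection(cls)                   # (2) Dense: 768 -> 1024, identity activation
embeddings = torch.nn.functional.normalize(projected, dim=-1)  # (3) Normalize: L2

print(embeddings.shape)         # torch.Size([2, 1024])
print(embeddings.norm(dim=-1))  # all ones, so dot product equals cosine similarity
```

Because of the final `Normalize()` module, the dot product of two embeddings is directly their cosine similarity.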
|
|
|
|
|
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
|
|
|
|
|
Then you can load the model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download the three ProTrek encoders from the 🤗 Hub
protein_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_protein_encoder")
text_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_text_encoder")
structure_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_structure_encoder")

# Learned temperature from the ProTrek_650M_UniRef50 checkpoint
ProTrek_650M_UniRef50_temperature = 0.0186767578

def sim(a, b):
    # Temperature-scaled cosine similarity (embeddings are L2-normalized)
    return (a @ b.T / ProTrek_650M_UniRef50_temperature).item()

# Run inference: human insulin as amino acid sequence, text description,
# and Foldseek 3Di structure sequence
aa_seq = "MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN"
text = "Insulin decreases blood glucose concentration. It increases cell permeability to monosaccharides, amino acids and fatty acids. It accelerates glycolysis, the pentose phosphate cycle, and glycogen synthesis in liver."
foldseek_seq = "DVVVVVVVVVVVVCVVPPDDPVPPFDFDFDADVVLVVLLCVLLVPLAFDDDDPDPVVVVVVVVDDDPPDDDPPDPDPDPPVVVVVVVVDDCSVVRRVGIDGSVSSNVRGD".lower()

seq_emb = protein_encoder.encode([aa_seq], convert_to_tensor=True)
text_emb = text_encoder.encode([text], convert_to_tensor=True)
struc_emb = structure_encoder.encode([foldseek_seq], convert_to_tensor=True)

print("Seq-Text similarity:", sim(seq_emb, text_emb))
print("Seq-Structure similarity:", sim(seq_emb, struc_emb))
print("Text-Structure similarity:", sim(text_emb, struc_emb))
```
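The `sim` helper above is just a temperature-scaled dot product; because every encoder ends in a `Normalize()` module, the dot product equals cosine similarity. The same scoring logic can be exercised without downloading the models, using mock unit-norm embeddings in place of encoder outputs (a sketch, not the real model; the candidate embeddings here are random placeholders):

```python
import torch

torch.manual_seed(0)
temperature = 0.0186767578  # learned temperature from the ProTrek checkpoint

def sim(a, b):
    # Temperature-scaled cosine similarity; inputs are unit-norm rows
    return a @ b.T / temperature

# Mock unit-norm embeddings standing in for encoder outputs (illustrative only)
query = torch.nn.functional.normalize(torch.randn(1, 1024), dim=-1)
candidates = torch.nn.functional.normalize(torch.randn(3, 1024), dim=-1)

scores = sim(query, candidates)  # shape (1, 3): one score per candidate
best = scores.argmax(dim=-1)     # index of the best-matching candidate
print(scores)
print("best candidate:", best.item())
```

For retrieval, encode a batch of candidates once and rank them by these scores against each query embedding.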
|
|
|
|
|
## Overview

ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better protein understanding. It adopts contrastive learning to learn the representations of protein sequence and structure. During the pre-training phase, the InfoNCE loss is computed for each pair of modalities, as in [CLIP](https://arxiv.org/abs/2103.00020).
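For each pair of modalities, the InfoNCE objective treats the matched pairs in a batch as positives and all other cross-pairings as negatives, as in CLIP. A hedged PyTorch sketch of the symmetric loss for one modality pair, using random embeddings in place of real encoder outputs (not the actual training code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim, temperature = 4, 1024, 0.0186767578

# Random unit-norm embeddings standing in for two modality encoders
seq_emb = F.normalize(torch.randn(batch, dim), dim=-1)
text_emb = F.normalize(torch.randn(batch, dim), dim=-1)

logits = seq_emb @ text_emb.T / temperature  # pairwise scaled cosine similarities
labels = torch.arange(batch)                 # i-th sequence matches i-th text

# Symmetric InfoNCE: average of sequence->text and text->sequence cross-entropy
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
print("InfoNCE loss:", loss.item())
```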
|
|
|
|
|
## Model architecture

**Protein sequence encoder**: [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D)

**Protein structure encoder**: foldseek_t30_150M (identical architecture to ESM-2, except that the vocabulary contains only 3Di tokens)

**Text encoder**: [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)
|
|
|
|
|
|
|
|
|
|
## Training Details

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 5.0.0
- Transformers: 4.53.2
- PyTorch: 2.2.1+cu121
- Tokenizers: 0.21.2
|
|
|
|
|
## Citation

```
@article{su2024protrek,
  title={ProTrek: Navigating the Protein Universe through Tri-Modal Contrastive Learning},
  author={Su, Jin and Zhou, Xibin and Zhang, Xuting and Yuan, Fajie},
  journal={bioRxiv},
  pages={2024--05},
  year={2024},
  publisher={Cold Spring Harbor Laboratory}
}
```
|
|
|
|
|