---
license: mit
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
pipeline_tag: sentence-similarity
library_name: sentence-transformers
base_model:
  - westlake-repl/ProTrek_650M_UniRef50
---

# ProTrek_650M_UniRef50_text_encoder

This model is a SentenceTransformer-compatible version of the ProTrek_650M_UniRef50 text encoder. It has been converted for use with the sentence-transformers library, enabling easy integration into semantic similarity tasks such as semantic search, clustering, and feature extraction.

- GitHub repo: https://github.com/westlake-repl/ProTrek
- Hugging Face repo: https://huggingface.co/westlake-repl/ProTrek_650M_UniRef50

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Dense({'in_features': 768, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
  (3): Normalize()
)
```
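The three modules after the transformer can be sketched numerically. In the illustration below, random weights stand in for the trained parameters, so only the shapes and the normalization behavior are meaningful: CLS-token pooling, a dense 768 → 1024 projection, then L2 normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical token embeddings from the transformer: (seq_len, 768)
token_embeddings = rng.normal(size=(12, 768))

# (1) Pooling with pooling_mode_cls_token=True: keep the first token only
cls = token_embeddings[0]                        # shape (768,)

# (2) Dense projection 768 -> 1024 (random weights stand in for trained ones)
W = rng.normal(size=(768, 1024))
b = rng.normal(size=(1024,))
projected = cls @ W + b                          # shape (1024,)

# (3) Normalize: L2-normalize so dot products equal cosine similarities
embedding = projected / np.linalg.norm(projected)

print(embedding.shape)               # (1024,)
print(np.linalg.norm(embedding))     # ~1.0
```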

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
protein_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_protein_encoder")
text_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_text_encoder")
structure_encoder = SentenceTransformer("yosshstd/ProTrek_650M_UniRef50_structure_encoder")

# Run inference; the temperature is the logit scale from contrastive pre-training
ProTrek_650M_UniRef50_temperature = 0.0186767578
def sim(a, b): return (a @ b.T / ProTrek_650M_UniRef50_temperature).item()

aa_seq = "MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN"
text = "Insulin decreases blood glucose concentration. It increases cell permeability to monosaccharides, amino acids and fatty acids. It accelerates glycolysis, the pentose phosphate cycle, and glycogen synthesis in liver."
foldseek_seq = "DVVVVVVVVVVVVCVVPPDDPVPPFDFDFDADVVLVVLLCVLLVPLAFDDDDPDPVVVVVVVVDDDPPDDDPPDPDPDPPVVVVVVVVDDCSVVRRVGIDGSVSSNVRGD".lower()

seq_emb = protein_encoder.encode([aa_seq], convert_to_tensor=True)
text_emb = text_encoder.encode([text], convert_to_tensor=True)
struc_emb = structure_encoder.encode([foldseek_seq], convert_to_tensor=True)
print("Seq-Text similarity:", sim(seq_emb, text_emb))
print("Seq-Structure similarity:", sim(seq_emb, struc_emb))
print("Text-Structure similarity:", sim(text_emb, struc_emb))
```
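Because each model ends in a `Normalize()` module, the encoder outputs are unit vectors, so the dot product inside `sim()` is exactly the cosine similarity; dividing by the temperature rescales it into a logit, as during contrastive training. A tiny sketch with hypothetical 2-D unit vectors (not actual model outputs):

```python
import numpy as np

temperature = 0.0186767578  # same constant as above

# Two hypothetical unit-norm embeddings standing in for encoder outputs
a = np.array([0.6, 0.8])
b = np.array([0.8, 0.6])

cos = float(a @ b)          # embeddings are L2-normalized, so dot = cosine
logit = cos / temperature   # the scaled score that sim() above returns
print(cos, logit)           # 0.96, ~51.4
```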

## Overview

ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better protein understanding. It uses contrastive learning to align the representations of the three modalities. During the pre-training phase, an InfoNCE loss is computed for each pair of modalities, as in CLIP.
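The pairwise InfoNCE objective can be sketched as follows: a minimal NumPy illustration with random stand-in embeddings, not the actual training code. Matched pairs sit on the diagonal of the in-batch similarity matrix, and cross-entropy is averaged over both retrieval directions.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE over in-batch similarities (CLIP-style).

    a, b: paired embeddings of shape (batch, dim); row i of a matches row i of b.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature            # (batch, batch); diagonal = positives

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))   # targets are the diagonal indices

    # Average the a->b and b->a retrieval directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
seq_emb = rng.normal(size=(8, 1024))    # stand-in for sequence embeddings
text_emb = rng.normal(size=(8, 1024))   # stand-in for text embeddings
loss = info_nce(seq_emb, text_emb)
print(float(loss))
```

In ProTrek's tri-modal setting this loss would be applied to each of the three modality pairs (sequence-text, sequence-structure, text-structure).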

### Model architecture

- Protein sequence encoder: esm2_t33_650M_UR50D
- Protein structure encoder: foldseek_t30_150M (identical architecture to ESM-2, except that the vocabulary contains only 3Di tokens)
- Text encoder: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext

## Training Details

### Framework Versions

- **Python:** 3.11.11
- **Sentence Transformers:** 5.0.0
- **Transformers:** 4.53.2
- **PyTorch:** 2.2.1+cu121
- **Tokenizers:** 0.21.2

## Citation

```bibtex
@article{su2024protrek,
  title={ProTrek: Navigating the Protein Universe through Tri-Modal Contrastive Learning},
  author={Su, Jin and Zhou, Xibin and Zhang, Xuting and Yuan, Fajie},
  journal={bioRxiv},
  pages={2024--05},
  year={2024},
  publisher={Cold Spring Harbor Laboratory}
}
```