LTEnjoy committed
Commit 6c234b7 · verified · 1 parent: 9943097

Update README.md

Files changed (1):
  1. README.md  +78 -3
README.md CHANGED
---
license: mit
---
Github repo: https://github.com/westlake-repl/ProTrek

## Overview
ProTrek is a multimodal model that integrates protein sequence, protein structure, and text information for better
protein understanding. It adopts contrastive learning to learn the representations of protein sequence, structure, and text.
During the pre-training phase, we calculate the InfoNCE loss for each pair of modalities, as [CLIP](https://arxiv.org/abs/2103.00020)
does.

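The pairwise objective is the symmetric InfoNCE loss used by CLIP, applied here to the sequence-structure, sequence-text, and structure-text pairs. The sketch below is a minimal illustration of that objective, not ProTrek's training code (the exact normalization, temperature handling, and loss weighting live in the GitHub repo):

```python
import torch
import torch.nn.functional as F

def info_nce(x_emb: torch.Tensor, y_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss for a batch of paired embeddings from two modalities.

    x_emb, y_emb: [batch, dim] tensors; row i of x_emb and row i of y_emb form a positive
    pair, and every other row in the batch acts as a negative.
    """
    x = F.normalize(x_emb, dim=-1)
    y = F.normalize(y_emb, dim=-1)
    logits = x @ y.T / temperature                        # [batch, batch] similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both retrieval directions (x -> y and y -> x), averaged
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
```
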
## Model architecture
- Protein sequence encoder: [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D)
- Protein structure encoder: foldseek_t12_35M (same architecture as ESM-2, except that the vocabulary contains only 3Di structure tokens)
- Text encoder: [BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext)

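The sequence and text backbones are standard Hugging Face checkpoints and can be inspected on their own; the optional snippet below only prints their hidden sizes and is not required to run ProTrek. Note that the backbone widths differ, and ProTrek maps all three modalities into a shared embedding space (the 1024-dimensional vectors in the example below).

```python
from transformers import AutoModel

# The sequence and text backbones are ordinary Hugging Face checkpoints and load directly;
# the foldseek_t12_35M structure encoder is distributed with the ProTrek weights instead.
seq_encoder = AutoModel.from_pretrained("facebook/esm2_t12_35M_UR50D")
text_encoder = AutoModel.from_pretrained("microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext")

print("Sequence backbone hidden size:", seq_encoder.config.hidden_size)  # backbone width, not the shared embedding size
print("Text backbone hidden size:", text_encoder.config.hidden_size)
```
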
## Obtain embeddings and calculate similarity scores (please clone the GitHub repo first)
```python
import torch

from model.ProTrek.protrek_trimodal_model import ProTrekTrimodalModel
from utils.foldseek_util import get_struc_seq

# Load model
config = {
    "protein_config": "weights/ProTrek_35M_UniRef50/esm2_t12_35M_UR50D",
    "text_config": "weights/ProTrek_35M_UniRef50/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
    "structure_config": "weights/ProTrek_35M_UniRef50/foldseek_t12_35M",
    "load_protein_pretrained": False,
    "load_text_pretrained": False,
    "from_checkpoint": "weights/ProTrek_35M_UniRef50/ProTrek_35M_UniRef50.pt"
}
# The paths above assume the ProTrek_35M_UniRef50 weights have been downloaded into the weights/ directory.

device = "cuda"
model = ProTrekTrimodalModel(**config).eval().to(device)

# Load protein and text
pdb_path = "example/8ac8.cif"
# get_struc_seq runs the foldseek binary to extract the amino-acid sequence and the 3Di structure string for chain A
seqs = get_struc_seq("bin/foldseek", pdb_path, ["A"])["A"]
aa_seq = seqs[0]
foldseek_seq = seqs[1].lower()
text = "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form."

with torch.no_grad():
    # Obtain protein sequence embedding
    seq_embedding = model.get_protein_repr([aa_seq])
    print("Protein sequence embedding shape:", seq_embedding.shape)

    # Obtain protein structure embedding
    struc_embedding = model.get_structure_repr([foldseek_seq])
    print("Protein structure embedding shape:", struc_embedding.shape)

    # Obtain text embedding
    text_embedding = model.get_text_repr([text])
    print("Text embedding shape:", text_embedding.shape)

    # Calculate similarity score between protein sequence and structure
    seq_struc_score = seq_embedding @ struc_embedding.T / model.temperature
    print("Similarity score between protein sequence and structure:", seq_struc_score.item())

    # Calculate similarity score between protein sequence and text
    seq_text_score = seq_embedding @ text_embedding.T / model.temperature
    print("Similarity score between protein sequence and text:", seq_text_score.item())

    # Calculate similarity score between protein structure and text
    struc_text_score = struc_embedding @ text_embedding.T / model.temperature
    print("Similarity score between protein structure and text:", struc_text_score.item())


"""
Expected output:

Protein sequence embedding shape: torch.Size([1, 1024])
Protein structure embedding shape: torch.Size([1, 1024])
Text embedding shape: torch.Size([1, 1024])
Similarity score between protein sequence and structure: 38.83826446533203
Similarity score between protein sequence and text: 17.90523338317871
Similarity score between protein structure and text: 18.044755935668945
"""
```
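
Because all three modalities share the same embedding space, the scores above can also be used for cross-modal retrieval, e.g. ranking candidate function descriptions for a protein sequence. The snippet below continues from the example above; the candidate texts are made up for illustration, and passing several texts at once to `get_text_repr` is assumed to behave like the single-text call shown earlier.

```python
# Rank a few candidate descriptions against the protein sequence embedding
candidate_texts = [
    "Replication initiator in the monomeric form, and autogenous repressor in the dimeric form.",
    "Catalyzes the hydrolysis of ATP to ADP and inorganic phosphate.",
    "Integral membrane transporter that imports glucose into the cell.",
]

with torch.no_grad():
    text_embeddings = model.get_text_repr(candidate_texts)                       # [3, dim]
    scores = (seq_embedding @ text_embeddings.T / model.temperature).squeeze(0)  # [3]

# Print candidates from best to worst match
for score, candidate in sorted(zip(scores.tolist(), candidate_texts), reverse=True):
    print(f"{score:8.3f}  {candidate}")
```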