Add model card with paper link and sample usage

#1
by nielsr (HF Staff) · opened
Files changed (1)
README.md +70 -9
README.md CHANGED
@@ -1,9 +1,70 @@
- ---
- license: mit
- base_model:
- - facebook/esm2_t36_3B_UR50D
- - facebook/esm2_t33_650M_UR50D
- - facebook/esm2_t30_150M_UR50D
- - facebook/esm2_t12_35M_UR50D
- - facebook/esm2_t6_8M_UR50D
- ---
+ ---
+ base_model:
+ - facebook/esm2_t36_3B_UR50D
+ - facebook/esm2_t33_650M_UR50D
+ - facebook/esm2_t30_150M_UR50D
+ - facebook/esm2_t12_35M_UR50D
+ - facebook/esm2_t6_8M_UR50D
+ license: mit
+ pipeline_tag: feature-extraction
+ ---
+
+ # PLM Reverse Distillation
+
+ This repository contains the weights for the protein language models presented in the paper [Reverse Distillation: Consistently Scaling Protein Language Model Representations](https://huggingface.co/papers/2603.07710).
+
+ Reverse Distillation is a principled framework that decomposes large Protein Language Model (PLM) representations into orthogonal subspaces guided by smaller models of the same family. The resulting embeddings have a Matryoshka-style nested structure, ensuring that larger reverse-distilled models consistently outperform smaller ones.
+
+ - **GitHub Repository**: [rohitsinghlab/plm_reverse_distillation](https://github.com/rohitsinghlab/plm_reverse_distillation)
+
+ ## Quick Start
+
+ Reverse-distilled ESM-2 models are designed as drop-in replacements for ESM-2 in most embedding-generation tasks.
+
+ ```python
+ import torch
+
+ import esm
+ import reverse_distillation
+
+ # Load the ESM-2 model and its reverse-distilled counterpart
+ esm2_model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
+ rd_model, alphabet = reverse_distillation.pretrained.esm2_rd_650M()
+
+ batch_converter = alphabet.get_batch_converter()
+ esm2_model.eval()  # disables dropout for deterministic results
+ rd_model.eval()  # disables dropout for deterministic results
+
+ # Prepare data
+ data = [
+     ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
+     ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
+ ]
+ batch_labels, batch_strs, batch_tokens = batch_converter(data)
+ batch_lens = (batch_tokens != alphabet.padding_idx).sum(1)
+
+ # Extract per-residue representations
+ with torch.no_grad():
+     results_esm = esm2_model(batch_tokens, repr_layers=[33])
+     results_rd = rd_model(batch_tokens)
+
+ esm_token_representations = results_esm["representations"][33]
+ rd_token_representations = results_rd["representations"]["650M"]
+
+ # Generate per-sequence representations via averaging over residues
+ # (the slice 1 : tokens_len - 1 drops the BOS and EOS tokens)
+ esm_seq_reprs, rd_seq_reprs = [], []
+ for i, tokens_len in enumerate(batch_lens):
+     esm_seq_reprs.append(esm_token_representations[i, 1 : tokens_len - 1].mean(0))
+     rd_seq_reprs.append(rd_token_representations[i, 1 : tokens_len - 1].mean(0))
+     print(f"esm sequence representation size: {esm_seq_reprs[-1].size()}")
+     print(f"rd sequence representation size: {rd_seq_reprs[-1].size()}")
+ ```
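+
+ The pooled embeddings can be fed directly into downstream tasks. As a small usage note (plain PyTorch, continuing from the loop above; nothing here is specific to the `reverse_distillation` API):
+
+ ```python
+ # Cosine similarity between the two pooled protein embeddings
+ similarity = torch.nn.functional.cosine_similarity(
+     rd_seq_reprs[0], rd_seq_reprs[1], dim=0
+ )
+ print(f"cosine similarity: {similarity.item():.3f}")
+ ```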
+
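+ Because reverse distillation orders embedding dimensions into nested subspaces, the leading coordinates of a larger model's embedding are meant to line up with a smaller model's subspace. The snippet below is a minimal sketch of that Matryoshka-style truncation; the split point `prefix_dim` is a hypothetical placeholder, so consult the paper and GitHub repository for the actual subspace dimensions of each model pair.
+
+ ```python
+ # Illustrative only: prefix_dim is NOT a documented constant of the library.
+ prefix_dim = 640  # hypothetical size of the leading "small-model" subspace
+
+ full_embedding = rd_seq_reprs[0]                 # pooled 650M reverse-distilled embedding
+ nested_embedding = full_embedding[:prefix_dim]   # leading block shared with a smaller model
+ print(full_embedding.shape, nested_embedding.shape)
+ ```
+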
+ ## Citation
+
+ If you use reverse distillation, please cite:
+
+ ```bibtex
+ @inproceedings{catrina2026reverse,
+   title = {Reverse Distillation: Consistently Scaling Protein Language Model Representations},
+   author = {Catrina, Darius and Bepler, Christian and Sledzieski, Samuel and Singh, Rohit},
+   booktitle = {International Conference on Learning Representations},
+   year = {2026}
+ }
+ ```