---
base_model:
- facebook/esm2_t36_3B_UR50D
- facebook/esm2_t33_650M_UR50D
- facebook/esm2_t30_150M_UR50D
- facebook/esm2_t12_35M_UR50D
- facebook/esm2_t6_8M_UR50D
license: mit
pipeline_tag: feature-extraction
---

# PLM Reverse Distillation

This repository contains the weights for the protein language models presented in the paper [Reverse Distillation: Consistently Scaling Protein Language Model Representations](https://huggingface.co/papers/2603.07710).

Reverse Distillation is a principled framework that decomposes large Protein Language Model (PLM) representations into orthogonal subspaces guided by smaller models of the same family. The resulting embeddings have a Matryoshka-style nested structure, ensuring that larger reverse-distilled models consistently outperform smaller ones.

- **GitHub Repository**: [rohitsinghlab/plm_reverse_distillation](https://github.com/rohitsinghlab/plm_reverse_distillation)
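
To make the nested structure described above concrete: the embedding of a larger reverse-distilled model contains a subspace aligned with the smaller models of the family. The sketch below is purely illustrative (the dimension sizes and the leading-block slicing are assumptions for exposition, not the library's API; see the paper and repository for the actual layout).

```python
import torch

# Illustrative only: assume the leading block of a larger reverse-distilled
# embedding spans the subspace guided by the smaller model.
d_650M, d_3B = 1280, 2560            # hypothetical embedding widths

rd_embedding_3B = torch.randn(d_3B)  # stand-in for one residue's 3B RD embedding
nested_650M_view = rd_embedding_3B[:d_650M]  # smaller-scale view of the same residue

# Downstream code can truncate to the smaller width when compute is tight;
# the extra dimensions of the larger model add orthogonal information on top.
print(nested_650M_view.shape)  # torch.Size([1280])
```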

## Quick Start

Reverse-distilled ESM-2 models are designed as drop-in replacements for ESM-2 in most embedding-generation tasks.

```python
import esm
import torch
import reverse_distillation

# Load the base ESM-2 model and its reverse-distilled counterpart
esm2_model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
rd_model, alphabet = reverse_distillation.pretrained.esm2_rd_650M()

batch_converter = alphabet.get_batch_converter()
esm2_model.eval()  # disables dropout for deterministic results
rd_model.eval()  # disables dropout for deterministic results

# Prepare data
data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
    ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
batch_lens = (batch_tokens != alphabet.padding_idx).sum(1)

# Extract per-residue representations
with torch.no_grad():
    results_esm = esm2_model(batch_tokens, repr_layers=[33], return_contacts=True)
    results_rd = rd_model(batch_tokens)

esm_token_representations = results_esm["representations"][33]
rd_token_representations = results_rd["representations"]["650M"]

# Compare per-residue representation shapes (BOS/EOS tokens excluded)
for i, tokens_len in enumerate(batch_lens):
    print(f"esm representation size: {esm_token_representations[i, 1 : tokens_len - 1].size()}")
    print(f"rd representation size: {rd_token_representations[i, 1 : tokens_len - 1].size()}")
```
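
If you need a single vector per sequence, mean pooling over residues (excluding the BOS/EOS tokens) works the same way as with vanilla ESM-2. A minimal sketch continuing from the variables above:

```python
# Mean-pool per-residue embeddings into per-sequence embeddings.
# Position 0 is the BOS token and position tokens_len - 1 is EOS, so both are skipped.
sequence_representations = []
for i, tokens_len in enumerate(batch_lens):
    sequence_representations.append(
        rd_token_representations[i, 1 : tokens_len - 1].mean(0)
    )
```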

## Citation

If you use reverse distillation, please cite:

```bibtex
@inproceedings{catrina2026reverse,
  title   = {Reverse Distillation: Consistently Scaling Protein Language Model Representations},
  author  = {Catrina, Darius and Bepler, Christian and Sledzieski, Samuel and Singh, Rohit},
  booktitle = {International Conference on Learning Representations},
  year    = {2026}
}
```