---
language:
  - en
tags:
  - protein-language-models
  - sparse-autoencoder
license: mit
---

# Sparse Autoencoders for ESM-2 (650M)

Interpret protein language model representations using sparse autoencoders trained on ESM-2 650M layers. These models decompose complex neural representations into interpretable features, enabling deeper understanding of how protein language models process sequence information.

* 📊 Model details in the [InterPLM pre-print](https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1)
* 👩‍💻 Training and analysis code in the [GitHub repo](https://github.com/ElanaPearl/InterPLM) 
* 🧬 Explore features at [InterPLM.ai](https://www.interplm.ai)

## Model Details
- Base Model: ESM-2 650M (33 layers)
- Architecture: Sparse Autoencoder
- Input Dimension: 1,280
- Feature Dimension: 10,240
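
For orientation, here is a minimal sketch of the forward pass these dimensions imply, assuming the standard ReLU sparse-autoencoder formulation; the class and parameter names are illustrative, not the InterPLM API:

```python
import torch
import torch.nn as nn

class SparseAutoencoderSketch(nn.Module):
    """Illustrative SAE: 1,280-dim ESM-2 activations -> 10,240 sparse features."""

    def __init__(self, d_input: int = 1280, d_features: int = 10240):
        super().__init__()
        self.encoder = nn.Linear(d_input, d_features)
        self.decoder = nn.Linear(d_features, d_input)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps feature activations non-negative and sparse
        return torch.relu(self.encoder(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the original embedding from the sparse features
        return self.decoder(self.encode(x))
```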

## Available Models

We provide SAE models trained on different layers of ESM-2-650M:

| Model name | ESM2 model | ESM2 layer |
|-|-|-|
| [InterPLM-esm2-650m-l1](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_1) | esm2_t33_650M_UR50D | 1 |
| [InterPLM-esm2-650m-l9](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_9) | esm2_t33_650M_UR50D | 9 |
| [InterPLM-esm2-650m-l18](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_18) | esm2_t33_650M_UR50D | 18 |
| [InterPLM-esm2-650m-l24](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_24) | esm2_t33_650M_UR50D | 24 |
| [InterPLM-esm2-650m-l30](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_30) | esm2_t33_650M_UR50D | 30 |
| [InterPLM-esm2-650m-l33](https://huggingface.co/Elana/InterPLM-esm2-650m/tree/main/layer_33) | esm2_t33_650M_UR50D | 33 |

All models share the same architecture and dictionary size (10,240). You can find SAEs trained on ESM-2 8M [here](https://huggingface.co/Elana/InterPLM-esm2-8m). The 650M SAEs capture more known biological concepts than the 8M SAEs but require more compute for both ESM embedding and SAE feature extraction.

## Usage

```python
from interplm.sae.inference import load_sae_from_hf
from interplm.esm.embed import embed_single_sequence

# Get ESM embeddings for a protein sequence
embeddings = embed_single_sequence(
    sequence="MRWQEMGYIFYPRKLR",
    model_name="esm2_t33_650M_UR50D",
    layer=18,  # Choose an ESM layer (1, 9, 18, 24, 30, or 33)
)

# Load the SAE model and extract features
sae = load_sae_from_hf(plm_model="esm2-650m", plm_layer=18)
features = sae.encode(embeddings)
```
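
The exact return type can vary by package version, but assuming `features` comes back as a 2-D tensor of shape `[sequence_length, 10240]` (one row per residue), here is a quick sketch for inspecting which features fire at a given position:

```python
import torch

# Assumes `features` from the snippet above is a [seq_len, 10240] tensor
print(features.shape)

# Top 5 features activated at the first residue; feature IDs can be
# looked up on InterPLM.ai
values, feature_ids = torch.topk(features[0], k=5)
for fid, val in zip(feature_ids.tolist(), values.tolist()):
    print(f"feature {fid}: activation {val:.3f}")
```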

For details on training and analyzing SAEs on PLMs, see the [GitHub README](https://github.com/ElanaPearl/InterPLM/blob/main/README.md).

## Model Normalization
The raw SAEs have arbitrary per-feature scales, since encoder and decoder weights can be linearly rescaled without changing the reconstructions. To make features comparable, we normalize each feature to activate between 0 and 1 based on its maximum activation over Swiss-Prot (our primary analysis dataset). By default, use the pre-normalized SAEs (`ae_normalized.pt`). Because this scaling is derived from Swiss-Prot, features that rarely activate on Swiss-Prot proteins may not be perfectly calibrated; for custom normalization, use `ae_unnormalized.pt` with [this code](https://github.com/ElanaPearl/InterPLM/blob/main/interplm/sae/normalize.py).
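
Conceptually, this normalization divides each feature by its maximum activation observed on a reference set. A minimal sketch of that idea (not the `normalize.py` implementation; `reference_embeddings` is a hypothetical stand-in for embeddings of your own dataset):

```python
import torch

# reference_embeddings: hypothetical [n_tokens, 1280] tensor of ESM-2
# activations from a reference dataset (Swiss-Prot in our case)
reference_features = sae.encode(reference_embeddings)

# Per-feature maximum over the reference set; clamp avoids division by
# zero for features that never activate
max_per_feature = reference_features.max(dim=0).values.clamp(min=1e-8)

# Rescale new activations so each feature lies roughly in [0, 1]
normalized_features = sae.encode(embeddings) / max_per_feature
```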