Update README.md

README.md (changed)

A protein language model that outputs amino acid sequence embeddings for use in downstream tasks.
## Key Features

- **Blazing fast and efficient**: ProtHash uses less than 1.5% of its ESMC teacher's total parameters to achieve near-perfect cosine similarity between the two embedding spaces.

- **Biologically relevant**: Biologically similar proteins show up nearby in the embedding space, enabling downstream tasks such as clustering, classification, and locality-sensitive hashing.

- **Compatible with ESMC**: ProtHash can output embeddings in its native or its ESMC teacher's dimensionality, allowing it to serve as either a faster drop-in approximation of ESMC embeddings or a more efficient compressed representation.

- **Quantization-ready**: With quantization-aware post-training, ProtHash lets you quantize the model weights while maintaining near-perfect similarity to the teacher's embedding space.
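The locality-sensitive hashing use case above can be illustrated with random hyperplane signatures, a standard LSH scheme for cosine similarity. This is only a sketch in plain NumPy, not part of the `prothash` API; the embedding dimensionality and hyperplane count here are arbitrary:

```python
import numpy as np

def lsh_signature(embedding: np.ndarray, hyperplanes: np.ndarray) -> str:
    """Map an embedding to a bit string: one bit per random hyperplane."""
    bits = (hyperplanes @ embedding) >= 0.0
    return "".join("1" if bit else "0" for bit in bits)

rng = np.random.default_rng(42)

# 16 random hyperplanes over a hypothetical 512-dimensional embedding space.
hyperplanes = rng.standard_normal((16, 512))

a = rng.standard_normal(512)
b = a + 0.01 * rng.standard_normal(512)  # a slightly perturbed near-duplicate

sig_a = lsh_signature(a, hyperplanes)
sig_b = lsh_signature(b, hyperplanes)

# Near-duplicate embeddings fall on the same side of almost every
# hyperplane, so their signatures agree on almost every bit.
print(sig_a)
print(sum(x != y for x, y in zip(sig_a, sig_b)))
```

Bucketing vectors by signature then groups cosine-similar proteins together without comparing every pair.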
## Pretrained Models

| Name | Context Length | Embedding Dimensions | Attention Heads (Q/KV) | Encoder Layers | Total Params | Teacher Model | Teacher Dimensions | Library Version |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [andrewdalpino/ProtHash-V2-512-Tiny](https://huggingface.co/andrewdalpino/ProtHash-V2-512-Tiny) | 2048 | 512 | 16/4 | 4 | 7.4M | esmc_600m | 1152 | 0.2.x |
| [andrewdalpino/ProtHash-384-Tiny](https://huggingface.co/andrewdalpino/ProtHash-384-Tiny) | 2048 | 384 | 16/4 | 4 | 5M | esmc_300m | 960 | 0.1.x |
| [andrewdalpino/ProtHash-384](https://huggingface.co/andrewdalpino/ProtHash-384) | 2048 | 384 | 16/4 | 10 | 11M | esmc_300m | 960 | 0.1.x |
| [andrewdalpino/ProtHash-512-Tiny](https://huggingface.co/andrewdalpino/ProtHash-512-Tiny) | 2048 | 512 | 16/4 | 4 | 8.5M | esmc_600m | 1152 | 0.1.x |
| [andrewdalpino/ProtHash-512](https://huggingface.co/andrewdalpino/ProtHash-512) | 2048 | 512 | 16/4 | 10 | 19M | esmc_600m | 1152 | 0.1.x |
## Example

First, you'll need the `prothash` and `esm` packages installed in your environment. For ProtHash version 1 use library version `0.1.x`, and for version 2 use library version `0.2.x`. We recommend using a virtual environment such as Python's `venv` module to prevent version conflicts with other packages.

### Version 1

```sh
pip install prothash~=0.1.0 esm
```

### Version 2

```sh
pip install prothash~=0.2.0 esm
```
Then, load the weights from HuggingFace Hub, tokenize a protein sequence, and pass it to the model. ProtHash adopts the ESM tokenizer as its amino acid tokenization scheme, which consists of a vocabulary of 33 amino acid and special tokens. The output will be an embedding vector that can be used in downstream tasks such as comparing to other protein sequence embeddings, clustering, and near-duplicate detection.

```python
from esm.tokenization import EsmSequenceTokenizer

from prothash.model import ProtHash

tokenizer = EsmSequenceTokenizer()

model_name = "andrewdalpino/ProtHash-V2-512-Tiny"

model = ProtHash.from_pretrained(model_name)

# Optionally quantize the weights to Int8.
model.quantize_weights()

sequence = input("Enter a sequence: ")
```
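Comparing embeddings for near-duplicate detection typically comes down to cosine similarity. A minimal sketch in plain NumPy (the vectors here are toy values, not real ProtHash embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same direction as `a`
c = np.array([-3.0, 0.0, 1.0])  # orthogonal to `a`

print(cosine_similarity(a, b))  # ≈ 1.0, a near-duplicate
print(cosine_similarity(a, c))  # 0.0, unrelated direction
```

A similarity near 1.0 flags two sequences as likely near-duplicates, while values near 0 indicate unrelated embeddings.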
## Training

If you want to train your own custom ProtHash model, follow the instructions below.

### Clone the project repo

We'll need the code from the project repository to train and/or fine-tune the model.
You can change the default arguments like in the example below.

```sh
python train.py --teacher_name="esmc_300m" --max_steps=4200 --embedding_dimensions=768 --temperature=4.0
```
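The `--batch_size` and `--gradient_accumulation_steps` arguments trade memory for effective batch size: gradients from several small batches are averaged before a single weight update. A generic sketch of the idea, illustrative only and not taken from the project's training loop:

```python
import numpy as np

# Illustrative only: accumulate gradients over several micro-batches,
# then apply one optimizer step, emulating a larger batch size.
gradient_accumulation_steps = 4
learning_rate = 1e-4

weights = np.zeros(3)
accumulated = np.zeros_like(weights)

# Stand-in gradients; a real loop would backpropagate through the model.
micro_batch_gradients = [np.ones(3) * (i + 1) for i in range(gradient_accumulation_steps)]

for step, gradient in enumerate(micro_batch_gradients, start=1):
    # Average each micro-batch's contribution into the running gradient.
    accumulated += gradient / gradient_accumulation_steps

    if step % gradient_accumulation_steps == 0:
        weights -= learning_rate * accumulated  # one optimizer update
        accumulated = np.zeros_like(weights)

print(weights)
```

With the defaults above (`--batch_size 4`, `--gradient_accumulation_steps 32`), each weight update reflects an effective batch of 128 samples.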
#### Training Dashboard
| --min_sequence_length | 1 | int | The minimum length of the input sequences. |
| --max_sequence_length | 2048 | int | The maximum length of the input sequences. |
| --quantization_aware_training | False | bool | Should we add fake quantized tensors to simulate quantized training? |
| --batch_size | 4 | int | The number of training samples to pass through the network at a time. |
| --gradient_accumulation_steps | 32 | int | The number of batches to pass through the network before updating the model weights. |
| --max_steps | 4000 | int | The number of steps to train for. |
| --learning_rate | 1e-4 | float | The learning rate of the AdamW optimizer. |
| --max_gradient_norm | 100.0 | float | Clip gradients above this threshold norm before stepping. |
| --temperature | 8.0 | float | The smoothing parameter of the activations - higher temperature results in smoother activations. |
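The `--temperature` argument follows the usual knowledge-distillation convention: logits are divided by the temperature before a softmax, so higher values yield smoother distributions. An illustrative sketch, assumed to mirror (but not taken from) the ProtHash loss code:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax over temperature-scaled logits; higher temperature smooths."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # shift by the max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.1])

sharp = softmax_with_temperature(logits, temperature=1.0)
smooth = softmax_with_temperature(logits, temperature=8.0)

# The high-temperature distribution is closer to uniform.
print(sharp)
print(smooth)
```

Smoother teacher activations carry more information about relative similarities between classes, which is why distillation typically trains at an elevated temperature.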