Push model using huggingface_hub.
Browse files
- README.md +6 -65
- config.json +2 -1
- model.safetensors +2 -2
README.md
CHANGED
@@ -1,69 +1,10 @@
 ---
-license: apache-2.0
-datasets:
-- andrewdalpino/SwissProt-Gene-Ontology
 tags:
-
----
-
-# ProtHash
-
-- **Structurally relevant embeddings**: Structurally similar proteins appear near one another in the embedding space, enabling downstream tasks such as clustering, classification, and locality-sensitive hashing based on atomic structure.
-
-- **Blazing fast and efficient**: ProtHash uses only 3% of ESMC's parameters to achieve near-perfect cosine similarity between the two embedding spaces.
-
-- **Long context**: With a context window of 2048 amino acid tokens, you can embed proteins with long sequences.
-
-## Pretrained Models
-
-| Name | Context Length | Embedding Dimensionality | Attention Heads (Q/KV) | Encoder Layers | Total Params |
-|---|---|---|---|---|---|
-| [andrewdalpino/ProtHash-384-Tiny](https://huggingface.co/andrewdalpino/ProtHash-384-Tiny) | 2048 | 384 | 16/4 | 4 | 7M |
-| [andrewdalpino/ProtHash-512-Tiny](https://huggingface.co/andrewdalpino/ProtHash-512-Tiny) | 2048 | 512 | 16/4 | 4 | 12M |
-
-## Pretrained Example
-
-First, you'll need the `prothash` and `esm` packages installed in your environment. We recommend using a virtual environment, such as Python's `venv` module, to prevent version conflicts with any system packages.
-
-```sh
-pip install prothash esm
-```
-
-Then, load the weights from HuggingFace Hub, tokenize a protein sequence, and pass it to the model. ProtHash adopts the ESM tokenizer as its amino acid tokenization scheme. The output will be an embedding vector that can be used in downstream tasks such as comparison with other protein sequence embeddings, clustering, and near-duplicate detection.
-
-```python
-import torch
-
-from esm.tokenization import EsmSequenceTokenizer
-from prothash.model import ProtHash
-
-tokenizer = EsmSequenceTokenizer()
-
-model_name = "andrewdalpino/ProtHash-512-Tiny"
-
-model = ProtHash.from_pretrained(model_name)
-
-sequence = input("Enter a sequence: ")
-
-out = tokenizer(sequence, max_length=2048)
-
-tokens = out["input_ids"]
-
-x = torch.tensor(tokens, dtype=torch.int64).unsqueeze(0)
-
-y_embed = model.embed(x)
-
-print(y_embed)
-```
-
-## References
-
->- The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2025. Nucleic Acids Research, 2025, 53, D609–D617.
->- T. Hayes, et al. Simulating 500 million years of evolution with a language model, 2024.
->- B. Zhang, et al. Root Mean Square Layer Normalization. 33rd Conference on Neural Information Processing Systems, NeurIPS 2019.
->- J. Ainslie, et al. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. Google Research, 2023.
->- T. Kim, et al. Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation, 2021.

 ---
 tags:
+- model_hub_mixin
+- pytorch_model_hub_mixin
 ---

+This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+- Code: [More Information Needed]
+- Paper: [More Information Needed]
+- Docs: [More Information Needed]
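The model card removed in this commit suggests comparing ProtHash embeddings to one another (clustering, near-duplicate detection). As a dependency-free illustration, not part of the repository, here is the cosine similarity that comparison refers to, with toy vectors standing in for real `model.embed` outputs:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy 4-dimensional stand-ins for ProtHash embeddings.
emb_a = [0.1, 0.3, -0.2, 0.4]
emb_b = [0.1, 0.3, -0.2, 0.4]
emb_c = [-0.4, 0.2, 0.3, -0.1]

print(cosine_similarity(emb_a, emb_b))  # 1.0 for identical vectors
print(cosine_similarity(emb_a, emb_c))
```

With real embeddings, values near 1.0 indicate similar proteins; over batched tensors, `torch.nn.functional.cosine_similarity` computes the same quantity.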
config.json
CHANGED
@@ -2,10 +2,11 @@
   "context_length": 2048,
   "dropout": 0.0,
   "embedding_dimensions": 384,
-  "hidden_ratio":
+  "hidden_ratio": 2,
   "kv_heads": 4,
   "num_encoder_layers": 4,
   "padding_index": 1,
   "q_heads": 16,
+  "teacher_dimensions": 960,
   "vocabulary_size": 33
 }
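For readers interpreting the updated config: a small sketch (my reading, not the ProtHash source) of how the attention fields combine under the usual grouped-query attention convention; `teacher_dimensions` presumably records the width of the ESMC teacher used during distillation.

```python
import json

# Values copied from the config.json diff above.
config = json.loads("""
{
    "context_length": 2048,
    "dropout": 0.0,
    "embedding_dimensions": 384,
    "hidden_ratio": 2,
    "kv_heads": 4,
    "num_encoder_layers": 4,
    "padding_index": 1,
    "q_heads": 16,
    "teacher_dimensions": 960,
    "vocabulary_size": 33
}
""")

# Width of each attention head, assuming the model dimension is split
# evenly across the query heads (the standard transformer convention).
head_dim = config["embedding_dimensions"] // config["q_heads"]

# GQA: each key/value head is shared by a group of query heads.
queries_per_kv_head = config["q_heads"] // config["kv_heads"]

print(head_dim)             # 24
print(queries_per_kv_head)  # 4
```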
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f57d1cc2aaa32651f2cc8beac91d000851a2e82ddeb24af043f041ed7b4e467e
+size 20022320