andrewdalpino committed
Commit f68b947 · verified · 1 Parent(s): 94a9bca

Push model using huggingface_hub.

Files changed (3):
  1. README.md +6 -72
  2. config.json +1 -1
  3. model.safetensors +2 -2
README.md CHANGED
@@ -1,76 +1,10 @@
  ---
- license: apache-2.0
- datasets:
- - andrewdalpino/SwissProt-Gene-Ontology
  tags:
- - esmc
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
  ---
- # ProtHash
 
- A protein language model that outputs amino acid sequence embeddings for use in clustering, classification, locality-sensitive hashing, and more. Distilled from the [ESMC](https://www.evolutionaryscale.ai/blog/esm-cambrian) family of models with deep comprehension of protein structure, ProtHash produces contextual embeddings that align in vector space according to the sequences' atomic structure. Trained on the [SwissProt](https://huggingface.co/datasets/andrewdalpino/SwissProt-Gene-Ontology) dataset to mimic the activations of its ESMC teacher model, ProtHash produces embeddings with near-perfect similarity to ESMC embeddings at a greatly reduced computational cost.
-
- ## Key Features
-
- - **Blazing fast and efficient**: ProtHash uses as little as 3% of its ESMC teacher's total parameters to achieve near-perfect cosine similarity between the two embedding spaces.
-
- - **Structurally-relevant embeddings**: Structurally similar proteins appear nearby in the embedding space, enabling downstream tasks such as clustering, classification, and locality-sensitive hashing based on atomic structure.
-
- - **Compatible with ESMC embeddings**: ProtHash can output embeddings in either its native or its ESMC teacher's dimensionality, allowing it to serve as both a faster drop-in replacement for ESMC embeddings and a more efficient compressed representation.
-
- ## Pretrained Models
-
- | Name | Context Length | Embedding Dimensionality | Attention Heads (Q/KV) | Encoder Layers | Total Params | Teacher Model | Teacher Dimensionality |
- |---|---|---|---|---|---|---|---|
- | [andrewdalpino/ProtHash-384-Tiny](https://huggingface.co/andrewdalpino/ProtHash-384-Tiny) | 2048 | 384 | 16/4 | 4 | 7M | esmc_300m | 960 |
- | [andrewdalpino/ProtHash-512-Tiny](https://huggingface.co/andrewdalpino/ProtHash-512-Tiny) | 2048 | 512 | 16/4 | 4 | 13M | esmc_600m | 1152 |
-
- ## Pretrained Example
-
- First, you'll need the `prothash` and `esm` packages installed in your environment. We recommend using a virtual environment such as Python's `venv` module to prevent version conflicts with any system packages.
-
- ```sh
- pip install prothash esm
- ```
-
- Then, load the weights from HuggingFace Hub, tokenize a protein sequence, and pass it to the model. ProtHash adopts the ESM tokenizer as its amino acid tokenization scheme, which consists of a vocabulary of 33 amino acid and special tokens. The output is an embedding vector that can be used in downstream tasks such as comparison with other protein sequence embeddings, clustering, and near-duplicate detection.
-
- ```python
- import torch
-
- from esm.tokenization import EsmSequenceTokenizer
-
- from prothash.model import ProtHash
-
- tokenizer = EsmSequenceTokenizer()
-
- model_name = "andrewdalpino/ProtHash-512-Tiny"
-
- model = ProtHash.from_pretrained(model_name)
-
- sequence = input("Enter a sequence: ")
-
- out = tokenizer(sequence, max_length=2048, truncation=True)
-
- tokens = out["input_ids"]
-
- # Input is a [1, T] tensor of token indices.
- x = torch.tensor(tokens, dtype=torch.int64).unsqueeze(0)
-
- # Output the sequence embedding in native dimensionality.
- y_embed_native = model.embed_native(x).squeeze(0)
-
- print(y_embed_native.shape)
-
- # Output a drop-in replacement for the teacher's embeddings.
- y_embed_teacher = model.embed_teacher(x).squeeze(0)
-
- print(y_embed_teacher.shape)
- ```
-
- ## References
-
- >- The UniProt Consortium. UniProt: the Universal Protein Knowledgebase in 2025. Nucleic Acids Research, 2025, 53, D609–D617.
- >- T. Hayes, et al. Simulating 500 million years of evolution with a language model, 2024.
- >- B. Zhang, et al. Root Mean Square Layer Normalization. 33rd Conference on Neural Information Processing Systems, NeurIPS 2019.
- >- J. Ainslie, et al. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, Google Research, 2023.
- >- T. Kim, et al. Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation, 2021.
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Code: [More Information Needed]
+ - Paper: [More Information Needed]
+ - Docs: [More Information Needed]
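The removed README above describes comparing ProtHash embeddings to their ESMC teacher's in terms of cosine similarity. As a minimal, self-contained sketch of that comparison (pure Python; the vectors here are toy stand-ins, not real model outputs):

```python
import math


def cosine_similarity(a, b):
    # Dot product divided by the product of the L2 norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy stand-ins for a ProtHash embedding and an ESMC teacher embedding.
student = [0.2, 0.8, -0.5, 0.1]
teacher = [0.21, 0.79, -0.48, 0.12]

print(cosine_similarity(student, teacher))
```

In practice one would pass `model.embed_teacher(x)` output (converted to a list or compared with `torch.nn.functional.cosine_similarity`) against the ESMC embedding of the same sequence; a value close to 1.0 indicates the distilled embedding is a faithful replacement.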
config.json CHANGED
@@ -2,7 +2,7 @@
   "context_length": 2048,
   "dropout": 0.0,
   "embedding_dimensions": 512,
-  "hidden_ratio": 4,
+  "hidden_ratio": 2,
   "kv_heads": 4,
   "num_encoder_layers": 4,
   "padding_index": 1,
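The `hidden_ratio` change from 4 to 2 halves each encoder layer's feed-forward hidden width (`embedding_dimensions` × ratio). As a back-of-the-envelope check, assuming a plain two-matrix MLP per layer (an assumption; the exact block structure isn't shown in this diff), the parameter savings line up with the smaller `model.safetensors` file below:

```python
# Rough feed-forward parameter count per encoder layer: two projections,
# d_model -> d_model * ratio and d_model * ratio -> d_model (biases ignored).
def ffn_params(d_model, hidden_ratio):
    hidden = d_model * hidden_ratio
    return 2 * d_model * hidden


d_model, num_layers = 512, 4  # from config.json

saved = num_layers * (ffn_params(d_model, 4) - ffn_params(d_model, 2))
print(saved)      # parameters removed across all encoder layers
print(saved * 4)  # bytes saved at float32 precision
```

Under these assumptions the change removes 4,194,304 parameters, or 16,777,216 bytes at float32, which closely matches the ~16.8 MB drop in the safetensors size (50,681,440 → 33,904,208 bytes) shown in the next diff.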
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:afd93aefe45bf2cacfb34206aa7bd4a8610e912526fb5a74e6cb03ff972b6719
- size 50681440
+ oid sha256:4984ba0d3c2504b4d46c3420722f931f22a1dbbbef56edb679b44cbc03a568ae
+ size 33904208