Update README.md
---
tags:
- protein language model
datasets:
- IEDB
---

# TransHLA2.0-BIND

A minimal Hugging Face-compatible PyTorch model for peptide–HLA binding classification using ESM with optional LoRA and cross-attention. There is no custom predict API; inference follows the training path: tokenize the peptide and the HLA pseudosequence with the ESM tokenizer, pad or truncate them to fixed lengths (default peptide=16, HLA=36), run a forward pass as `logits, features = model(epitope_ids, hla_ids)`, and apply softmax to obtain the binding probability.
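For orientation, the following is a minimal sketch of that path, not the repository's reference example: the tokenizer source, the sample peptide and HLA pseudosequence, how special tokens count toward the fixed lengths, and the positive-class index are all assumptions; the repository's own example follows in the sections below.

```python
# Minimal sketch of the inference path described above (assumptions noted inline).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "SkywalkerLu/TransHLA2.0"
device = "cpu"  # use "cuda" if a GPU is available

# Assumes the ESM tokenizer is resolvable from the same repo; an ESM checkpoint
# such as facebook/esm2_t33_650M_UR50D could be substituted if it is not.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).to(device).eval()

peptide = "SIINFEKL"                               # example epitope (placeholder)
hla_pseudo = "YFAMYQENMAHTDANTLYIIYRDYTWVARVYRGY"  # example HLA pseudosequence (placeholder)

# Pad or truncate to the fixed lengths from the description (peptide=16, HLA=36);
# whether those lengths include the special tokens is an assumption here.
epitope_ids = tokenizer(peptide, padding="max_length", truncation=True,
                        max_length=16, return_tensors="pt")["input_ids"].to(device)
hla_ids = tokenizer(hla_pseudo, padding="max_length", truncation=True,
                    max_length=36, return_tensors="pt")["input_ids"].to(device)

with torch.no_grad():
    logits, features = model(epitope_ids, hla_ids)  # forward pass as described above

# Softmax over the class logits; index 1 is assumed to be the binding class.
binding_prob = torch.softmax(logits, dim=-1)[:, 1].item()
print(f"binding probability: {binding_prob:.4f}")
```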
Requirements:
```bash
pip install torch transformers peft
```
## Usage (Transformers)
```python
from transformers import AutoModel

model_id = "SkywalkerLu/TransHLA2.0"
device = "cpu"  # use "cuda" if a GPU is available
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).to(device).eval()
```
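Here `trust_remote_code=True` lets `transformers` download and execute the modeling code bundled with the repository, presumably because the classifier is a custom architecture rather than a stock `transformers` class; review that code before enabling the flag. A full scoring example follows in the next section.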
## How to use TransHLA2.0-BIND

```python
import torch