Update README.md

README.md CHANGED
@@ -20,11 +20,11 @@ paired TCR-peptide-HLA-I binding based on amino acid sequence inputs. It is a tra
-- **Developed by:**
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
-- **Model type:**
-- **Language(s) (NLP):**
 - **License:** [More Information Needed]
 - **Finetuned from model [optional]:** [More Information Needed]
@@ -40,7 +40,17 @@ paired TCR-peptide-HLA-I binding based on amino acid sequence inputs. It is a tra
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-###
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+- **Developed by:** Justin Barton and Trupti Gore
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
+- **Model type:** DeBERTa Transformer
+- **Language(s) (NLP):** Python
 - **License:** [More Information Needed]
 - **Finetuned from model [optional]:** [More Information Needed]
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+### How to Use
+
+```python
+from transformers import DebertaForSequenceClassification, DebertaTokenizerFast
+
+# Load the sequence-classification model (num_labels=2) and its tokenizer
+model = DebertaForSequenceClassification.from_pretrained('shepherdgroup/nuTCRacker', num_labels=2)
+tokenizer = DebertaTokenizerFast.from_pretrained('shepherdgroup/nuTCRacker')
+
+# Paired TCR CDR loops and peptide-HLA sequence, marked up with the model's region tags
+example = "'[cdra1]SSVPPY[cdra2]YTSAATLV[cdra3]CAVSAGDYKLSF[cdrb1]KGHDR[cdrb2]SFDVKD[cdrb3]CATSDSVAGNQPQHF','[peptide]ATDALMTGF[mhc]YFAMYQENMAHTDANTLYIIYRDYTWVARVYRGY'"
+
+encoded_example = tokenizer(example, return_tensors='pt')
+output = model(**encoded_example)
+output
+```

 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
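The snippet added above returns raw logits rather than probabilities. A minimal sketch of converting the two logits into a binding probability with a softmax — assuming, since the card does not say, that index 1 is the positive (binding) class; the logit values here are made up for illustration:

```python
import math

# Hypothetical logits, standing in for output.logits[0].tolist()
logits = [0.3, 1.2]

# Softmax over the two classes
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

binding_prob = probs[1]  # assumed positive-class index (not stated on the card)
```

The same conversion can be done with `torch.softmax(output.logits, dim=-1)` when working with the tensor output directly.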