Text Classification
Transformers
PyTorch
bert
protein language model
biology
text-embeddings-inference
Instructions to use GleghornLab/SYNTERACT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use GleghornLab/SYNTERACT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="GleghornLab/SYNTERACT")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("GleghornLab/SYNTERACT")
model = AutoModelForSequenceClassification.from_pretrained("GleghornLab/SYNTERACT")
```

- Notebooks
- Google Colab
- Kaggle
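The direct-load path above leaves one step implicit: tokenized inputs must sit on the same device as the model before the forward pass. A minimal sketch of that pattern (the tensor values here are placeholders standing in for real tokenizer output, e.g. `inputs = tokenizer(sequence, return_tensors='pt')`):

```python
import torch

# Hypothetical tensors standing in for a tokenizer's output;
# a real call like tokenizer(sequence, return_tensors='pt') returns this dict shape.
inputs = {'input_ids': torch.tensor([[101, 102]]),
          'attention_mask': torch.tensor([[1, 1]])}

# Pick GPU when available, otherwise CPU, then move every input tensor there
# so the tensors live on the same device as the model.
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
inputs = {k: v.to(device) for k, v in inputs.items()}
```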
Update README.md
README.md CHANGED
```diff
@@ -52,8 +52,8 @@ import torch
 import torch.nn.functional as F
 from transformers import BertForSequenceClassification, BertTokenizer
 
-model = BertForSequenceClassification.from_pretrained('
-tokenizer = BertTokenizer.from_pretrained('
+model = BertForSequenceClassification.from_pretrained('GleghornLab/SYNTERACT') # load model
+tokenizer = BertTokenizer.from_pretrained('GleghornLab/SYNTERACT') # load tokenizer
 device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # gather device
 model.to(device) # move to device
 model.eval() # put in eval mode
```
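With the model loaded and in eval mode, classification reduces to a forward pass followed by a softmax over the logits, which is presumably why the README imports `torch.nn.functional as F`. A minimal sketch of that post-processing step, using placeholder logits in place of a real `model(**inputs).logits` output:

```python
import torch
import torch.nn.functional as F

# Placeholder logits standing in for model(**inputs).logits;
# shape is (batch_size, num_labels).
logits = torch.tensor([[0.2, 1.5]])

probs = F.softmax(logits, dim=-1)        # normalize logits into class probabilities
pred = int(torch.argmax(probs, dim=-1))  # index of the highest-probability class
```

The mapping from class index to label name depends on the model's `config.id2label`, which this snippet does not assume.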