# ProPrime

Predict the optimal growth temperature (OGT) of a protein from its sequence with ProPrime:
```python
from transformers import AutoModel, AutoTokenizer

model_path = "AI4Protein/ProPrime_650M_OGT_Prediction"

# trust_remote_code=True is required: the model class is defined in the repository itself
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Example sequences (single-letter amino-acid codes)
seqs = [
    "ADHJSJHSLGJSGLSGGAD",
    "MSJKFHLSKGJSG",
    "SDKHF",
]

# Pad the batch to a common length and return PyTorch tensors
inputs = tokenizer(seqs, padding=True, return_tensors="pt")
outputs = model(**inputs)

# One predicted OGT value per input sequence
print(outputs.predicted_values)
```
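To report results per sequence, the tensor of predictions can be paired back with the input sequences. This is a minimal sketch assuming `outputs.predicted_values` is a 1-D tensor with one OGT value per sequence (attribute name taken from the snippet above); the numeric values used here are placeholders, not real model output.

```python
def format_predictions(seqs, values):
    """Pair each sequence with its predicted OGT (in °C) as a readable line."""
    return [f"{s}\tpredicted OGT: {v:.1f} C" for s, v in zip(seqs, values)]

# In practice, values would come from outputs.predicted_values.tolist();
# the numbers below are placeholders for illustration only.
for line in format_predictions(["ADHJSJHSLGJSGLSGGAD", "MSJKFHLSKGJSG"], [37.2, 41.8]):
    print(line)
```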