```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nepp1d0/SingleBertSmilesTargetInteraction")
model = AutoModelForSequenceClassification.from_pretrained("nepp1d0/SingleBertSmilesTargetInteraction")
```
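The card does not document the expected input format. The sketch below is a minimal inference example that assumes the SMILES string and the target protein sequence are passed to the tokenizer as a text pair; this preprocessing is an assumption and may not match what was used during fine-tuning.

```python
# Minimal inference sketch. Assumption: SMILES and protein sequence are
# tokenized as a text pair; the actual preprocessing is not documented.
import torch

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # example drug (aspirin SMILES)
target = "M K T A Y I A K Q R"     # example space-separated protein sequence (truncated)

inputs = tokenizer(smiles, target, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # per-class interaction scores
```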
Prot_bert fine-tuned on the GPCR_train dataset for drug-target interaction prediction.
Training parameters: `overwrite_output_dir=True`, `evaluation_strategy="epoch"`, `learning_rate=1e-3`, `weight_decay=0.001`, `per_device_train_batch_size=batch_size`, `per_device_eval_batch_size=batch_size`, `push_to_hub=True`, `fp16=True`, `logging_steps=logging_steps`, `save_strategy='epoch'`, `num_train_epochs=2`
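As a rough sketch, these settings correspond to a `transformers.TrainingArguments` configuration along the following lines; `batch_size` and `logging_steps` are placeholders because their values are not reported in the card, and the output directory name is hypothetical.

```python
from transformers import TrainingArguments

batch_size = 16      # placeholder: actual value not reported in the card
logging_steps = 100  # placeholder: actual value not reported in the card

training_args = TrainingArguments(
    output_dir="SingleBertSmilesTargetInteraction",  # hypothetical output directory
    overwrite_output_dir=True,
    evaluation_strategy="epoch",
    learning_rate=1e-3,
    weight_decay=0.001,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    push_to_hub=True,
    fp16=True,
    logging_steps=logging_steps,
    save_strategy="epoch",
    num_train_epochs=2,
)
```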
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="nepp1d0/SingleBertSmilesTargetInteraction")
```
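A possible call is sketched below; whether the SMILES string and the protein sequence should be supplied as a text pair (as here) or concatenated into one string is an assumption, since the input format is not documented in the card.

```python
# Hypothetical usage: the input format (SMILES and target as a text pair)
# is an assumption, not documented in the model card.
result = pipe({"text": "CC(=O)Oc1ccccc1C(=O)O",
               "text_pair": "M K T A Y I A K Q R"})
print(result)  # e.g. [{'label': ..., 'score': ...}]
```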