Instructions to use gokul-pv/deberta-base-OnionOrNot with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use gokul-pv/deberta-base-OnionOrNot with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="gokul-pv/deberta-base-OnionOrNot")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gokul-pv/deberta-base-OnionOrNot")
model = AutoModelForSequenceClassification.from_pretrained("gokul-pv/deberta-base-OnionOrNot")
```

- Notebooks
- Google Colab
- Kaggle
Model Card
This model is a fine-tuned version of microsoft/deberta-base on the Onion or Not dataset. Training ran for 5 epochs with a learning rate of 2e-5 and a linear schedule; random token dropout was applied during training to reduce overfitting. The classification report is shown below:
Final Validation Accuracy: 93.31%
Final Classification Report:
```
              precision    recall  f1-score   support

    NotOnion       0.94      0.96      0.95      3000
       Onion       0.93      0.89      0.91      1800

    accuracy                           0.93      4800
   macro avg       0.93      0.92      0.93      4800
weighted avg       0.93      0.93      0.93      4800
```
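The card mentions random token dropout as a regularizer, but the exact implementation is not published. A common variant randomly replaces a fraction of input token ids with the tokenizer's mask token during training; the sketch below assumes that variant, and the dropout rate, mask id, and special-token range are all illustrative assumptions:

```python
import random

def token_dropout(token_ids, mask_token_id, rate=0.1, rng=None):
    """Randomly replace input token ids with the mask token id.

    Acts as a data-augmentation step during training to reduce
    overfitting; ids 0-3 are assumed to be special tokens and are
    left untouched.
    """
    rng = rng or random.Random()
    return [
        mask_token_id if tid > 3 and rng.random() < rate else tid
        for tid in token_ids
    ]

# Toy example: a sequence of made-up token ids, mask id 50264 chosen
# only for illustration (RoBERTa-style vocabularies use this value)
ids = list(range(100, 120))
dropped = token_dropout(ids, mask_token_id=50264, rate=0.1, rng=random.Random(0))
```

Applying this per batch yields a slightly different corrupted view of each example every epoch, which is what makes it useful against overfitting.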
Running inference on a new sample gave the correct prediction:
```
Text: Man With Fogged-Up Glasses Forced To Finish Soup Using Other Senses
Prediction: Onion
Confidence: 87.76%
Probabilities:
  NotOnion: 12.24%
  Onion: 87.76%
```
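The confidence and probability values above come from a softmax over the model's two output logits. A minimal sketch of that step, using made-up logits (the actual logits for this example are not published):

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for [NotOnion, Onion], chosen only for illustration
probs = softmax([-1.0, 1.0])
labels = ["NotOnion", "Onion"]
prediction = labels[probs.index(max(probs))]
confidence = max(probs)
```

In practice the logits would come from `model(**tokenizer(text, return_tensors="pt")).logits`, and `torch.softmax` performs the same normalization.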
Model tree for gokul-pv/deberta-base-OnionOrNot
Base model
microsoft/deberta-base