How to use Sif10/my_awesome_model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Sif10/my_awesome_model")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Sif10/my_awesome_model")
model = AutoModelForSequenceClassification.from_pretrained("Sif10/my_awesome_model")
```

This model is a fine-tuned version of distilbert/distilbert-base-uncased on the imdb dataset. Its evaluation results per epoch are shown in the training results table below.
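Once loaded, the model returns raw logits that need a softmax to become class probabilities. A minimal post-processing sketch, using stand-in logits instead of a real forward pass (running the model requires downloading the checkpoint) and assuming the usual NEGATIVE/POSITIVE label order for a binary IMDB sentiment head — the label names are an assumption, not read from the model config:

```python
import math

def predict_label(logits, labels=("NEGATIVE", "POSITIVE")):
    """Apply softmax to raw logits and return (label, probability) for the argmax class."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Stand-in logits; with the real model these would come from
# model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()
label, prob = predict_label([-1.2, 2.3])
print(label)  # → POSITIVE
```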
Model description: more information needed.

Intended uses and limitations: more information needed.

Training and evaluation data: more information needed.
Hyperparameter details are not included in this card. The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.3564 | 1.0 | 1563 | 0.3677 | 0.8426 |
| 0.2878 | 2.0 | 3126 | 0.3378 | 0.8588 |
| 0.2124 | 3.0 | 4689 | 0.4398 | 0.8550 |
| 0.1556 | 4.0 | 6252 | 0.5750 | 0.8555 |
| 0.1075 | 5.0 | 7815 | 0.6733 | 0.8558 |
| 0.0831 | 6.0 | 9378 | 0.7218 | 0.8561 |
| 0.0652 | 7.0 | 10941 | 0.7331 | 0.8564 |
| 0.0458 | 8.0 | 12504 | 0.8166 | 0.8538 |
| 0.0415 | 9.0 | 14067 | 0.8619 | 0.8568 |
| 0.0357 | 10.0 | 15630 | 0.8771 | 0.8559 |