How to use liewchooichin/distilbert-base-uncased-tiny-imdb with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="liewchooichin/distilbert-base-uncased-tiny-imdb")
```

```python
# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("liewchooichin/distilbert-base-uncased-tiny-imdb")
model = AutoModelForMaskedLM.from_pretrained("liewchooichin/distilbert-base-uncased-tiny-imdb")
```

This model is a fine-tuned version of distilbert-base-uncased on the stanfordnlp/imdb dataset. It achieves the results shown in the table below on the evaluation set.
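Under the hood, the fill-mask pipeline takes the model's logits at the masked position, applies a softmax, and returns the most likely tokens. A minimal self-contained sketch of that ranking step (using a made-up toy vocabulary and logits, not the model's real outputs):

```python
import math

def top_k_fill_mask(logits, vocab, k=3):
    """Rank candidate tokens for a masked position by softmax probability."""
    # Numerically stable softmax over the logits at the masked position
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pair each token with its probability and keep the k most likely
    ranked = sorted(zip(vocab, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy example: "this movie was [MASK]" with a 4-token vocabulary
vocab = ["great", "terrible", "boring", "film"]
logits = [2.0, 1.5, 0.5, -1.0]
for token, prob in top_k_fill_mask(logits, vocab, k=2):
    print(f"{token}: {prob:.3f}")
```

The real pipeline does the same thing over the model's full vocabulary and also returns the token ids and the filled-in sequences.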
This model was created by following the Hugging Face Learn lesson "Fine-tuning a masked language model" (NLP course, Main NLP Tasks chapter).
This is only a small-scale fine-tuning on the stanfordnlp/imdb dataset. Only 1000 rows of the unsupervised split are used for training.
The exercise was carried out on Google Colab with a T4 GPU.
1000 rows from the stanfordnlp/imdb dataset.
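The card does not include the subsetting code. A deterministic way to take such a slice (the `datasets` library equivalent would be `dataset.shuffle(seed=...).select(range(1000))`; the corpus below is a stand-in, not the real IMDB split) might look like this:

```python
import random

def take_subset(rows, n, seed=42):
    """Shuffle deterministically, then keep the first n rows."""
    rng = random.Random(seed)
    shuffled = rows[:]  # copy so the original order is untouched
    rng.shuffle(shuffled)
    return shuffled[:n]

# Stand-in for the unsupervised IMDB split (the real one has 50k reviews)
corpus = [f"review {i}" for i in range(5000)]
subset = take_subset(corpus, 1000)
print(len(subset))  # 1000
```

Fixing the seed makes the run reproducible, which matters when comparing losses across training runs.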
The following results were obtained during training:
| Train Loss | Validation Loss | Epoch |
|---|---|---|
| 3.2484 | 3.2338 | 0 |
| 3.0821 | 2.8758 | 1 |
| 2.9373 | 2.9930 | 2 |
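The losses above are cross-entropy values, so the corresponding perplexity, the metric the course lesson uses to evaluate masked language models, is simply exp(loss). A quick check on the final validation loss:

```python
import math

# Perplexity of a language model is exp(cross-entropy loss)
final_val_loss = 2.9930  # validation loss at epoch 2 from the table above
perplexity = math.exp(final_val_loss)
print(f"Perplexity: {perplexity:.2f}")  # ≈ 19.95
```

Lower perplexity means the model assigns higher probability to the held-out text; the drop from epoch 0 (exp(3.2338) ≈ 25.4) shows the fine-tuning is adapting the model to movie-review language.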
Base model: distilbert/distilbert-base-uncased